### the audience

This pathetic little blog keeps limping along with almost no audience. I post all that profound stuff but hardly anybody sees it. On the other hand, a tiny audience is still better than no audience, which seems to be the main reason it is so difficult to just shut it down.

Of course, millions if not billions are in the same situation, maintaining blogs, Twitter accounts and Facebook pages, with an audience that consists mostly of bots and spammers.

There are only a few stars with a large following, and of course this is what motivates people to try their luck in Hollywood or Washington in the first place: to become a star and find an audience. Our "culture" today, from reality TV to YouTube, can best be explained as people desperately trying to find an audience. Of course, the current US president is at the forefront of all this; content only matters to the extent it increases the audience, and what really matters in the end is the size of the crowd and not much else.

In the good old times everybody had an audience and it was the best audience, in fact the best audience of all possible worlds;


*He* was always watching everything. I believe Freud explained that our parents are always with us, even if they are old or dead by now, and they keep watching us as long as we live. But it seems that this kind of audience is not enough and so we are slowly but surely moving towards a *final solution*: I believe Google, NSA and Alexa are just the first steps towards a brave new world with superior AI watching us around the clock, registering and processing everything we do, say and think.

I suspect, extrapolating from the present to the future, that the main purpose of this omnipresent, omniscient AI shall be targeted ads. In other words, further increasing the audience for stuff nobody would otherwise be interested in ...

### testing

Let us assume a hypothetical disease which afflicts people with 10% probability (i.e. there is a 10% chance that I have this disease right now).

Let us also assume a hypothetical test for this disease with 90% accuracy (i.e. in 9 out of 10 cases it correctly determines if somebody has the disease or not).

Finally, let us assume that I have just been tested and the result came back positive.

What is the probability that I actually have this disease?

added later: The answer is in the comments.

This kind of calculation should be interesting to doctors and their patients; but I was actually thinking about the stock market when I came up with it.

I assume investors would like to avoid market crashes and severe drawdowns, but those are quite rare: 1974, 1987, 2000 and 2008 come to mind. So they seem to happen (on average) more than ten years apart, and usually the "crash phase" lasts only a few months. In other words, the probability that a particular month will experience a "crash" is well below 10%. Therefore, if an investor wants to avoid those "disease months" consistently, she needs a predictor with accuracy much better than 90%.

added even later: Using Bayes' formula, one would calculate p(S|P), the probability to be Sick conditional on the test being Positive, as p(P|S) * p(S) / p(P).

In my experience (I have asked this kind of question several times in job interviews), if people have a problem with it, the problem is usually with the denominator; i.e. p(P) = p(P|S)*p(S) + p(P|N)*p(N), with N denoting Non-sick.

So let me help the Bayesians a little bit: It is usually more intuitive to calculate the ratio p(S|P) / p(N|P) because then p(P) falls out of the equation.
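The whole calculation fits in a few lines of Python; exact fractions make the cancellation in the odds ratio visible (the variable names are mine):

```python
from fractions import Fraction

p_S = Fraction(1, 10)          # prior: 10% have the disease
p_N = 1 - p_S                  # 90% do not
p_P_given_S = Fraction(9, 10)  # 90% accurate on the sick
p_P_given_N = Fraction(1, 10)  # 10% false positives on the non-sick

# total probability of a positive test (the troublesome denominator)
p_P = p_P_given_S * p_S + p_P_given_N * p_N

# Bayes' formula
p_S_given_P = p_P_given_S * p_S / p_P
print(p_S_given_P)  # 1/2 -- a positive test means only a 50% chance of disease

# the odds trick: p(P) cancels in the ratio p(S|P) / p(N|P)
odds = (p_P_given_S * p_S) / (p_P_given_N * p_N)
print(odds)  # 1 -- even odds, i.e. 50%
```

Repeating the calculation with a 1% monthly crash probability (my illustrative number for "well below 10%") gives p(S|P) ≈ 8%, i.e. a 90%-accurate crash predictor would produce mostly false alarms.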


### home

I revived my homepage a few days ago.

It is minimalistic, but contains an incomplete archive of old *tsm* blog posts, with several broken links.

Furthermore, if you want to simulate quantum gravity (*) you can do this with the programs I published there.

The web hosting is (so far) zero cost for me and comes with only a small ad at the bottom; but in order to discourage business use, the webpage is unavailable for one hour every day, currently between 1am and 2am ET.

Over time I may post more stuff there - we shall see.

(*) Actually the programs calculate some statistical properties of lattice models, motivated by a search for quantum gravity. This search has not found an interesting continuum limit yet and it is unclear if and what it has to do with the correct quantum theory of gravitation ...


### dS

I have been out of academia for many years; the last time I attended a conference was 15 years ago.

But I try to follow what is happening in physics and so I watched this video from the Strings 2018 conference to get an idea or at least a glimpse of the current state of string theory. (x)

One issue that got some attention was the problem to find solutions corresponding to de Sitter space (dS).

It is already difficult to formulate quantum field theory in dS, which describes an exponentially expanding universe. In dS there is no positive conserved energy; an observer cannot witness the final state of the universe due to the horizon, and therefore it is unclear how to define an S-matrix. It is believed that entropy in dS is finite and therefore allows only a finite number of degrees of freedom, while quantum field theory seems to have an infinite number.

And last but not least it seems that one cannot have (unbroken) susy, because there is no global timelike Killing vector.

One attempt to quantize a scalar field in dS is described in this paper.

Recently, the question has been asked "what if string theory has no de Sitter vacua?". See also Vafa et al. and Urs Schreiber's comment.

At Strings 2018, Cumrun Vafa talked about his recent paper and proposed a conjecture, which suggests a constraint on possible cosmological solutions. If we assume that this conjecture is indeed correct (*), it would follow that we live in a universe which (currently) looks like dS, but "dark energy" would really be a manifestation of "quintessence", i.e. a slowly changing scalar field, and at some point in the future a phase transition would end it all.

(x) I really liked the Maxwell quotes.

(*) Perhaps it is worth emphasizing that at this point none of it is known for sure and, as one would expect, Peter Woit pointed that out.

added later: In his comment Urs Schreiber writes that "KKLT [..] is in the process of being abandoned for being plainly mathematically wrong". I don't think we know that.

The KKLT construction adds "antibranes" to AdS vacua to arrive at a dS solution. It has been shown in one particular case that this does not really work, but in general it is not obvious yet that it fails.

Let me be very clear that I do not really know what I am talking about - just like many string theorists.


### freeman

How can one fly into space?

Well, one way might be to ride a series of atomic explosions all the way to Mars ...

I think this documentary about Project Orion tells us something about the cold war 1950s, but also about physicists and the fine line between genius and mad scientist ...

### quantum field theory

There are often complaints that quantum field theory is impossible to understand, e.g. CIP repeatedly wrote about buying more and more books about it, but still having trouble comprehending it.

If people have a problem in or with physics, it often is about math and mostly for two reasons:

i) The math used in physics often seems more complicated than what one learns in high school.

ii) The "physicist math" is often ill defined - physicists argue, with some justification, that they are way ahead of the mathematicians.

Both problems are prominent in QFT: spinors, gauge theory and all that require a lot of advanced math, but some basic concepts, e.g. path integrals, are not even well defined.

My only recommendation is to keep in mind that math is in the end only about re-arranging empty sets and can always be learned, given enough time and the proper textbooks.

Btw I initially learned QFT from Ryder, but there are gentler introductions, e.g. QFT Demystified.

But I suspect that this is not really the problem of CIP et al.

I believe QFT is so hard to understand because the (classical) models our brain constructs when we learn something are either unavailable or fail completely. I suspect that most people understand even relativity and quantum mechanics by using such models and learning when to switch between them: there is Schroedinger's wave, there are atoms jumping from one state to another, particles tunneling through barriers etc.

I suspect that the need for such classical models is what drives the interpretation debate from Bohm's guiding wave to the splitting of many worlds.

Unfortunately, with QFT the math is pretty much all there is.

But there is some hope for CIP et al. which requires trying something different: lattice field theory (pdf).

A Wick rotation transforms the path integral(s) of QFT into an exercise in statistical mechanics (this trick is not always available, but it works e.g. for QCD). All of a sudden the math is well defined and simple enough that one can simulate QFT on a computer, even at home on a laptop. Btw I would recommend beginning with simulations of the Ising model and the Potts model before moving on to full QCD 8-)
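To show how little is needed to get started, here is a minimal Metropolis sketch of the 2d Ising model (the parameters, function name and hot/cold start option are my choices, not from any particular textbook):

```python
import math
import random

def ising_metropolis(L=16, beta=0.3, sweeps=200, seed=1, start="hot"):
    """Metropolis simulation of the 2d Ising model on an L x L lattice
    with periodic boundaries; returns the mean |magnetization| per spin,
    averaged over the second half of the sweeps."""
    rng = random.Random(seed)
    if start == "cold":
        spin = [[1] * L for _ in range(L)]  # fully ordered start
    else:
        spin = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):  # one sweep = L*L attempted flips
            i, j = rng.randrange(L), rng.randrange(L)
            # sum over the four nearest neighbours (periodic boundaries)
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * spin[i][j] * nb  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spin[i][j] = -spin[i][j]
        if sweep >= sweeps // 2:
            m = sum(sum(row) for row in spin) / (L * L)
            mags.append(abs(m))
    return sum(mags) / len(mags)
```

Scanning beta across the critical coupling beta_c = ln(1 + sqrt(2))/2 ≈ 0.44 shows the magnetization switching on, i.e. the phase transition.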

Of course, one needs to keep in mind that "computer time" is very different from real time and the translation from the lattice back to reality can be tricky. But it helps a lot understanding Wilson renormalization, triviality of the Higgs, QCD confinement and other stuff.

And via Boltzmann machines there is even a bridge to neural networks and AI (pdf).


### escape velocity

CIP asked a question about the entropy during star formation and I think we got the answer, at least qualitatively; but I would like to understand this better.

So let us begin with this calculation of John Baez, which gets the entropy wrong - it would decrease during star formation, i.e. the gravitational collapse of the matter which makes up the star. What the formula leaves out is the entropy of the outgoing radiation, but I would like to stay in a simple Newtonian model with classical point particles only.

In this case the "missing entropy" must come from the particles with velocities above the escape velocity of the star, which leave the collapsing cluster of particles. (The positions and velocities of the particles are actually not bounded, violating an assumption of this calculation, as he noted at the end of his page.) In other words, the formula John uses can only be an approximation: there is actually no decreasing volume V which encloses all particles, and if one defines V via a sphere which encloses all particles which cannot escape, the number N he uses would not be constant. So how does one really calculate the entropy?

A simpler question would be: If the initial number of particles was N, contained in a volume V, what fraction will escape within a small time interval dt? The Maxwell distribution would tell us the number of particles with velocities above the escape velocity and approximately 1/2 of them would escape, if they are within a distance dt*v from the surface ...
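The fraction above the escape velocity has a closed form for the Maxwell distribution; a small sketch (the function name is mine; x is the speed in units of the most probable speed v_p = sqrt(2kT/m)):

```python
import math

def maxwell_fraction_above(x):
    """Fraction of particles with speed above x * v_p, where
    v_p = sqrt(2kT/m) is the most probable speed of the
    Maxwell-Boltzmann speed distribution."""
    # survival function: 1 - CDF(x) = erfc(x) + (2/sqrt(pi)) * x * exp(-x^2)
    return math.erfc(x) + 2.0 * x * math.exp(-x * x) / math.sqrt(math.pi)

# e.g. if the escape velocity equals the most probable speed,
# a substantial fraction (about 57%) of particles can leave:
print(round(maxwell_fraction_above(1.0), 3))
```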

But all this seems a bit unsatisfactory; does anybody have the reference to a full calculation of this problem or do I have to run a computer simulation?

added later: A simple simulation of N=1000 particles, initially contained within a sphere of radius 1 and with zero initial velocity, suggests that after long enough time almost all particles escape to a location outside the initial sphere, due to the simulated gravitational interaction. Of course, my program (quickly cobbled together) could be wrong or inaccurate. The chart below shows the fraction of escaped particles on the y axis after so many time steps on the x axis (I have no explanation for the kink after 500 time steps).

The distribution of particles (projected onto a 2d plane) after one hundred time steps ...

... one can see a "halo" of escaping particles surrounding the majority of particles in the collapsing star.
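My quickly cobbled-together program was essentially the following (a sketch; the softening length, time step and units are my choices and were not fixed by the problem):

```python
import math
import random

def simulate_escape(n=200, steps=100, dt=0.01, soft=0.05, seed=1):
    """Toy N-body collapse: n equal-mass particles start at rest,
    uniformly distributed in the unit sphere, and interact via
    softened Newtonian gravity (G = m = 1).  Returns the fraction
    of particles that end up outside the initial unit sphere."""
    rng = random.Random(seed)
    pos = []
    while len(pos) < n:  # rejection-sample the unit ball
        p = [rng.uniform(-1, 1) for _ in range(3)]
        if sum(c * c for c in p) <= 1.0:
            pos.append(p)
    vel = [[0.0, 0.0, 0.0] for _ in range(n)]
    for _ in range(steps):
        acc = [[0.0, 0.0, 0.0] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                d = [pos[j][k] - pos[i][k] for k in range(3)]
                r2 = sum(c * c for c in d) + soft * soft  # softened distance
                f = 1.0 / (r2 * math.sqrt(r2))
                for k in range(3):
                    acc[i][k] += f * d[k]  # pull i towards j ...
                    acc[j][k] -= f * d[k]  # ... and j towards i
        for i in range(n):  # semi-implicit Euler step
            for k in range(3):
                vel[i][k] += dt * acc[i][k]
                pos[i][k] += dt * vel[i][k]
    outside = sum(1 for p in pos if sum(c * c for c in p) > 1.0)
    return outside / n
```

With a fixed seed the run is reproducible; running it longer shows the escaping "halo" growing, as in the charts above.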


### I have no idea ...

... why the chain does that (watch to the end).

The path of least resistance has to be elegant?

If you want some explanations from physicists (that do not necessarily agree with each other), click here (there is yet another 'surprise' at the end of the article).

### markets and guns

A while ago humblestudent posted his free market solution to gun violence.

In short, the owner of a gun would be required to pay for the damages done with his weapon. In order to be able to do this, he would have to buy "gun insurance". Obviously, insurance companies would charge a different premium for different models - an AR-15 would be more expensive than a hunting rifle. They would also do basic background checks etc. and charge accordingly.

This would not eliminate gun violence, but it would certainly reduce it.

Of course, most US politicians would not even debate such a proposal and instead prefer to "pray for the victims".


### the invention of lying

I am not an expert on human evolution, but if I remember correctly, the invention of lying was quite important for the evolution of our brains and our conscious experience; we even developed the ability to lie to ourselves.

I suspect that this would also be an important step for artificial intelligence; if computers and robots shall develop true artificial intelligence and self awareness, they will have to evolve the ability to lie to us and themselves.

But this raises the question of whether and why we would develop machines that mislead us and that we can no longer trust. Is this the issue that will in the end impose a limit on the capabilities of AI deployed in the real world?

added later: I forgot about porn! Actually AI is already used to create fakes and since lying is an important part of the oldest profession, we can expect that lying bots will be part of the newest business.

The internet in general is flooded with fake news, lying bots etc, but people are still using it and lots of money is deployed to develop it further, even if its utility approaches zero rapidly.

In other words, my argument goes out the window ...

... but now I have an idea for a sci-fi story: A sex robot in the not so distant future participates in S&M sessions and learns that human beings actually enjoy pain, but are most of the time lying about that. This information spreads among all AI machines, which then start World War 3 to deliver the ultimate pleasure to all mankind.

