Buckling down to the Bakhshali manuscript

The Bakhshali manuscript is an artifact discovered in 1881 near the town of Peshawar (then in British India, now in present-day Pakistan). It is beautifully described in an article in the online magazine of the American Mathematical Society and I spent a few hours fascinated by the description in the article (written excellently by Bill Casselman of the University of British Columbia). It is not clear how old the 70-page manuscript fragment is – its pieces date (using radiocarbon dating) to between 300 and 1200 AD. However, I can't imagine it is only as old as that – the mathematical tradition it appears to represent, simply by its evident maturity, is much older.

Anyway, read the article to get a full picture, but I want to focus on generalizing the approximation technique described in the article. One of the brilliant sentences in the article referred to above is “Historians of science often seem to write about their subject as if scientific progress were a necessary sociological development. But as far as I can see it is largely driven by enthusiasm, and characterized largely by randomness”. My essay below is in this spirit!

The technique is as follows (it is described in detail in the article):

Suppose you want to find the square root of an integer N, which is not a perfect square. Let’s say you find an integer p_1 whose square is close to N – it doesn’t even have to be the exact one “sandwiching” the number N. Then define an error term \mathcal E_1 in the following way,

N = p_1^2 + \mathcal E_1

Next, in the manuscript, you are supposed to define, successively

p_2 = p_1 + \frac{\mathcal E_1}{2 p_1}

and compute the 2^{nd} level error term using simple algebra

\mathcal E_2 = - \frac{\mathcal E_1^2}{4 p_1^2}

This can go on; following the algorithm detailed in the manuscript, we write

 p_3 = p_2 + \frac{\mathcal E_2}{2 p_2}

and compute the error term with the same error formula above, with p_2 replacing p_1. The series converges extremely fast (the p_n approach the true answer quickly).
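For anyone who wants to play along, here is a minimal Python sketch of the square-root iteration just described (the function name and the example are my own choices, not anything from the manuscript):

```python
def bakhshali_sqrt(N, p, iterations=6):
    """Refine a square-root guess p via p -> p + E/(2p), where E = N - p^2."""
    for _ in range(iterations):
        E = N - p * p          # current error term
        p = p + E / (2 * p)    # refined estimate
    return p

# Example: approximate the square root of 41, starting from the rough guess p_1 = 6
print(bakhshali_sqrt(41, 6))   # ~6.403124237...
```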

Here comes the generalization (this is not in the Bakhshali manuscript; I don't know if it is a known technique) – you can use this technique to find higher roots.

Say you want to find the cube root of N. Start with a number p_1 whose cube is close to N. Then we define the error term \mathcal E_1 using

N = p_1^3 + \mathcal E_1

Next, define p_2 = p_1 + \frac{\mathcal E_1}{3 p_1^2}

and compute the 2^{nd} level error term using simple algebra

\mathcal E_2 = - \frac{\mathcal E_1^2}{3 p_1^3} - \frac{\mathcal E_1^3}{27 p_1^6}

and carry on to define p_3 = p_2 +  \frac{\mathcal E_2}{3 p_2^2} and so on.

The series can be computed in Excel – it converges extremely fast. I computed the cube root of 897373742731 to Excel’s numerical accuracy in six iterations.

Carrying on, we can compute the fourth root of N, as should be obvious now, by finding a potential fourth root p_1 that is a close candidate and then

p_2 = p_1 + \frac{\mathcal E_1}{4 p_1^3}

and so on.

You can generalize this to a general q^{th} root by choosing p_2=p_1 + \frac{\mathcal E_1}{q p_1^{q-1}} and carrying on as before. It is a little complex to write down the error term, but it just needs you to brush up the binomial theorem expansion a little. I used this to compute the 19^{th} root of 198456, starting from the convenient 2, rather than 1 (see below!). It is 1.900309911. If I started from the initial number 3, convergence is slower but it still gets there quite fast (in twelve iterations).
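Here is a hedged Python sketch of this general q-th root iteration (again, the naming and the stopping rule are mine, not anything from the manuscript); it reproduces the 19th-root example shown in the tables below:

```python
def bakhshali_root(N, q, p, tol=1e-12, max_iter=100):
    """Generalized iteration: p -> p + E/(q p^(q-1)), with E = N - p^q."""
    for _ in range(max_iter):
        E = N - p ** q                  # error term at this step
        if abs(E) <= tol * abs(N):      # stop once the error is negligible
            break
        p = p + E / (q * p ** (q - 1))  # correction suggested by the binomial expansion
    return p

# The 19th root of 198456, starting from the convenient guess 2
print(bakhshali_root(198456, 19, 2.0))   # ~1.900309911
```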

Here’s a bit of the Excel calculation with a starting point of 2

        p             E
p_1     2             -325832
p_2     1.93458156    -80254.8113
p_3     1.90526245    -10060.9486
p_4     1.90042408    -226.670119
p_5     1.90030997    -0.12245308
p_6     1.90030991    -3.6118E-08
p_7     1.90030991    0

and here’s a starting point of 3. Notice how much larger the initial error is (under the column E).

        p             E
p_1     3             -1162063011
p_2     2.84213222    -415943298
p_3     2.69261765    -148847120
p_4     2.55108963    -53231970.3
p_5     2.41732047    -19003715.9
p_6     2.29140798    -6750923.06
p_7     2.17425159    -2365355.74
p_8     2.06867527    -797302.733
p_9     1.98149708    -240959.414
p_10    1.92430861    -53439.1151
p_11    1.90282236    -5045.06319
p_12    1.90033955    -58.8097322
p_13    1.90030991    -0.00825194
p_14    1.90030991    0  : CONVERGED!

The neat thing about this method is that, unlike what Bill Casselman states, you don't need to start from an initial point that sandwiches the number N. If you start reasonably close (though never at 1, for obvious reasons!), you do pretty well. The iterations converge because each correction \frac{\mathcal E_m}{q \, p_m^{q-1}} is small compared to p_m, so the error term \mathcal E_m shrinks rapidly from one step to the next.

The actual writer of the Bakhshali manuscript did not use decimal notation, but used rational numbers (fractions) to represent non-integer quantities, which required an order of magnitude more work to get the arithmetic right! It is also interesting how close the numerals used in the manuscript are to the numerals I grew up using in Hindi-speaking Mumbai!

Note added: Bill Casselman wrote to me that the use of rational numbers with a large number of digits instead of decimals represents “several orders of magnitude” more difficulty. I have no difficulty in agreeing with that sentiment – if you want the full analysis, read the article.

There is also a scholarly analysis by M N Channabasappa (1974), predating the AMS article, that analyzes the manuscript along very similar lines.

A wonderful compendium of original articles on math history is to be found at Math10.

 

 

Schrödinger's Zoo

I have been enjoying reading Richard Muller’s “Now: The Physics of Time” – Muller is an extremely imaginative experimental physicist and his writings on the “arrow of time” are quite a nice compendium of the various proposed solutions. Even though none of those solutions is to my liking, they are certainly worth a read.

Meanwhile, his chapter on the famous Schrödinger’s cat experiment is interesting though I think I can come up with a far better explanation. Some of these ideas have come out of discussions with two colleagues at Rutgers (though I would hate to have them labeled as the reason for any errors), so thanks to them for being open to discussing these oft-ignored subjects.

The “cat” experiment is simple. An unfortunate kitty is stuffed into a strongly built box, where a radioactive atom is also placed. The atom has a chance, but only a chance, to decay and when it does (through, let’s say, beta decay) produces an electron. The electron triggers a cathode-ray amplifier that sets off a bomb and “poof!”😪 – poor kitty is chasing mice in a different universe!

The kerfuffle appears to be that if we describe the system using standard quantum mechanics, Muller writes, then we should describe the system (in the future) as

|\Psi> = \frac{|cat \: alive>+|cat \: dead>}{\sqrt{2}}

To be an accurate depiction of what he means, the initial state of the system is

|\Psi(0)> = |cat \: alive, undecayed \: atom>

and due to the passage of time and the possibility of decay of the atom, we should write the state in the future (at time t) as

|\Psi(t)>= \sqrt{1 - p(t)} \; |cat \: alive, undecayed \: atom > + \\  \: \: \: \: \: \: \sqrt{p(t)} \; |cat \: dead, decayed \: atom>

Here, p(t) is the probability that the atom has decayed by time t (the amplitudes are the square roots of the probabilities, so that the state stays normalized). Clearly, p(0)=0 since the atom starts out undecayed and p(\infty)=1, since the atom will eventually decay. A "superposition" of states is much more than "this or that". The system truly has to be thought of as being in both states and cannot even be conceived of as being in one of these states at that instant. Thinking of it as being in "one" or the "other" leads to logical and experimental contradictions, due to the presence of "interference" between the two states.

To be honest, if someone asked you if cats were intelligent, you would probably say “Yes” and they certainly seem quite capable of understanding that they are alive, or facing imminent demise in a rapidly exploding fireball. Stating that the system is in a state where the cat is either alive or dead and worse, in a superposition of these states, is truly silly. Einstein found this galling, asking in eloquent terms, “Do you believe that the moon is not there when you are not looking at it?”.

Having the state of the system "collapse" into one of several alternatives is picturesquely referred to as "wave-function collapse". Many physicist-hours have been spent thinking about how wave-functions must collapse (physicist-hours are like man-hours, except they can be charged to the NSF and other funding organizations). One proposal is the GRW (Ghirardi–Rimini–Weber) mechanism, which says that wave-functions can spontaneously collapse (like stock prices can randomly jump) and models this with a stochastic jump process (just as with stock prices). There is also the QBism approach (championed by David Mermin and others) that suggests that wave-function descriptions of nature are in your head and that they represent YOUR knowledge and beliefs about the state of the system. While that seems like an acceptable way to handle the "cat" experiment above, it raises the question of why we all seem to agree on the Hamiltonian and state variables for systems in general. Could we manufacture, from our sense impressions, an alternative-reality explanation of the same situation that agrees in detail with the "real" description? This is one that a lot of Trump supporters would certainly embrace! But it doesn't give one a warm and fuzzy feeling that there is "a reality" out there to explain.

Other solutions rely on the consciousness of the observer and how his/her consciousness collapses wave-functions. To which I would ask (paraphrasing Aharonov, a famous physicist), "If you believe that you collapse wave-functions yourself, is this an inherited ability? Did your parents collapse wave-functions too?" If you believe that only you, the observer, have the right to decide when a "wave-function has collapsed", it seems to give little meaning to my experience and the experience of millions of others. At that rate, the Taj Mahal only exists the moment you yourself see it. And America didn't exist till Columbus saw it – unless you were Native American, of course!

The key, in my opinion, to understanding all this is to recognize (admit, rather) that we, as observers, are just another cog in the quantum universe. We participate in measurements, we are part of wave-functions, and a wave-function is supposed to "collapse" to varying degrees based on how many interactions the system it represents has had with the surroundings. So, if the only thing that happened was that a radioactive atom decayed – one atom that emitted an electron and an anti-neutrino and created a proton out of a neutron – then little, if anything, in the universe was altered. This piffle of a disturbance isn't big enough to collapse anything, let alone a wave-function. If the electron subsequently collided with a couple of other atoms on its way to the anode of the amplifier, that's piffle too. The "state" of the system, which includes the original atom, the electron, the anti-neutrino, the proton and the couple of other atoms, is still in a superposition of several quantum states, including states where the original atom hasn't even decayed.

The fun begins when the disturbance is colossal enough to affect a macroscopic number of atomic-sized particles. Now, there is one state (in the superposition of quantum states) where nothing happened and no other particles were affected. There are a {\bf multitude} of other states where a colossal number of other particles were affected. At some point, we reach a point of no return – the number of "un-doings" we would need in order to "un-do" the effect of the (potential) decay on the (colossal number of) other particles is so large, and the energy expended to reverse ALL the after-effects so humongous, that there is no comparison between the two states. The wave-function has collapsed. Think of it this way – if the bomb exploded and the compartment where the cat was sitting was wrecked, the amount of effort you would need to put in to "un-do" the bomb would be macroscopically large. This is what we would refer to as wave-function collapse. Of course, if you were an observer the size of our galaxy and to you this energy was a piddlingly small amount, you might say – it is still too small! However, this energy is much, much larger (by colossal amounts) than the energy you would need to expend to not do anything if the decay were {\bf not} to happen.

This explanation seems to capture the idea of “macroscopic effect” as being the way to understand wave-function collapse. It also allows one to quantitatively think of this problem and relate to it the way one relates to, say, statistical mechanics.

So, if you think one little kitty-cat is "piffle", consider a zoo of animals in a colossal compartment, along with their keepers – Schrödinger's Zoo, to be precise. Now, consider the effect of the detonation on this macroscopic collection of particles. It should be clear that even if you, as an observer, are far from the scene, a macroscopic number of variables were affected and you should consider the wave-function to have collapsed – even if you didn't know about it at the time, since you were clearly not measuring things properly. No one said that physics needs to explain incorrect measurements, did they?

I’d like to thank Scott Thomas and Thomas Banks, both of Rutgers and both incomparably good physicists, for educating me through useful discussions. Tom, in particular, has a forthcoming book on Quantum Mechanics that has an extensive discussion of these matters.

Why do chocolate wrappers stick to things?

Here’s something I saw while lazily surfing the net this morning. Someone throws a candy wrapper towards the floor and it sticks to the curtain or a book cover. How long will it stick?

First, the reason this happens is static electricity – and this is why it rarely happens in humid climates (except inside the confines of an air-conditioned room, which is usually much drier). Why static electricity? When you eagerly unwrap a piece of chocolate, some charges jump in or out. Usually the slow-moving positive ions stay where they are – the electrons jump, and which direction they jump in is a simple question with a complicated answer.

Clearly, the answer to how long it will stick depends on how dry the air is – the mechanism for leaking the charge from the wrapper to the curtain to the ground depends on the resistance to current flow offered by the air between the wrapper and the curtain, and on the capacitance of the wrapper-curtain capacitor.

Since I finished teaching “RC” circuits to a whole lot of undergraduates last month, I thought it would be fun to carry out a small calculation.

What does the system look like? Let's idealize the wrapper/curtain capacitor as in the picture below.

[Figure: the candy wrapper and curtain idealized as a parallel-plate capacitor]

The effective area of contact has area A and is separated by a distance d from the curtain. This is a parallel plate capacitor, whose capacitance is (from your dim memory of high school physics) \frac{ \epsilon_0 A}{d}, where \epsilon_0 is the permittivity of the vacuum and I have used a dielectric constant of 1 for air. Using meter-kilogram-second (SI) units, if you make the entirely reasonable assumption that d = 1 \: mm and A = 1 \; mm^2, this translates to a capacitance of \approx 10^{-14} farads. The resistivity of air (between the plates of this parallel plate capacitor) is \rho \approx 10^{16} \Omega-m, and the resistance is R = \rho \frac{d}{A}, which approximates to 10^{19} \Omega. The value of R \times C \approx 10^5 \: secs \approx 1 \: day, which is an enormously long time constant – after one day, chances are the force keeping it up (the electrical attraction) is not big enough to withstand gravity and it is likely to fall. So a candy wrapper could remain stuck for a day at least – I would bet on more, since the contact area might actually be much more than 1 \: mm^2 given lots of folds in the wrapper and the curtain, as well as all the sticky saliva on the wrapper from licking the candy off… reminds me I need to get back to finishing the candy!
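A quick back-of-the-envelope check of these numbers in Python (the values of d, A and \rho are the same rough guesses used above):

```python
# Rough estimate of the wrapper-curtain RC time constant (all inputs are the guesses from the text)
epsilon_0 = 8.85e-12   # F/m, permittivity of free space
d = 1e-3               # m, wrapper-to-curtain separation (~1 mm)
A = 1e-6               # m^2, effective contact area (~1 mm^2)
rho = 1e16             # ohm-m, approximate resistivity of air

C = epsilon_0 * A / d  # parallel-plate capacitance, ~1e-14 F
R = rho * d / A        # resistance of the air gap, ~1e19 ohm
tau = R * C            # time constant, in seconds

print(f"C = {C:.2e} F, R = {R:.2e} ohm, RC = {tau:.2e} s (~{tau/86400:.1f} days)")
```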

 

Is the longest day the warmest day?

I woke up to a snowy day on the 30th of December here in New Jersey and immediately realized two things! It was colder and darker than at the same time on the shortest day of the year, the 21st of December. I suppose you could blame the colder weather on the polar vortex swinging its awful oscillation down towards us yokels in the New York metro area, rather than simply harassing a bunch of Brahminical New Englanders. But was it really darker? Was I missing something? Is the shortest day for us folks living at 41^{\circ} North {\bf different} from the 21^{st} of December? After all, the winter solstice is defined as the day when the sun is at its extreme southward excursion, when it is over the Tropic of Capricorn.

And if you average out the vagaries of weather patterns, is the shortest day really the coldest day? Conversely, is the longest day (June 21^{st}) actually the warmest day?

Let’s look at why we have solstices and equinoxes, first. If you look at the Earth’s orbit around the sun, it looks like this

[Figure: the Earth's orbit around the sun, with the solstice and equinox positions marked A–D]

This is a view looking down over the North pole (a colorful view is here). The earth is seen to be spinning counter-clockwise, and also revolving around the sun in the counter-clockwise direction. The angular momentum associated with the spinning of the earth around its own axis is conserved. This phenomenon, which we know from tops and gyroscopes, keeps the axis pointed in the same absolute direction (towards Polaris, the Pole Star). This is not exactly true, due to something called precessional motion that causes the axis to slowly point along different directions (which I will discuss later) – but precession is rather slow, so we can ignore it for now.

In the position marked {\bf A}, the Northern Hemisphere experiences summer, as the Sun is directly overhead the Tropic of Cancer (an imaginary line drawn at roughly 23.5^{\circ} North) – this is on the 21^{st} of June. At the position marked {\bf B}, which follows after the Northern summer, the Sun is directly above the Equator; this is called the autumnal (or fall) equinox (21^{st} September). Next, the position marked {\bf C} is the Northern winter (21^{st} December), which also corresponds to the Southern summer, while position {\bf D} corresponds to the Spring Equinox (the 21^{st} of March).

Why is the summer solstice the longest day? And why is it called a "solstice" – from the Latin for "the sun stands still"? Look at the picture below for the earth on that day.

[Figure: the Earth on the June solstice, with the Terminator line separating day from night]

The yellow lines on the right depict the Sun's rays. The vertical (blue) line is called the Terminator – not after the eponymous 90's movie, but a line that separates those folk that can see the Sun from those that cannot. It is clear that, with the axis tilted at its maximum angle towards the sun's rays as depicted, the folks in the Northern hemisphere experience the longest daytime, between leaving the Terminator behind them on one side and entering it on the other.

Why does the sun seem stationary? Well, it certainly still rises in the east and sets in the west due to the Earth’s rotation, so it does not seem stationary in that respect. However, it is stationary insofar as its North-South oscillation is concerned. Notice that on the 21^{st} of June, the Sun has proceeded (due to the Earth’s orbital motion of course) as far North as it can possibly get. That is a maximum of the sun’s latitudinal displacement and as you might remember from high school math, if something is at a maximum, the slope (the rate of change) is zero. {\it Ergo}, the sun appears to not move along the North-South direction.

Is the Northern hemisphere the warmest on this day? Let's ignore the fact that water and soil take time to heat up; this is called the Lag of the Seasons, but that is only part of the answer. For it to feel warm, the sun has to be right atop your head, at the zenith of the sky. It is very clear that when the Tropic of Cancer has the sun at its zenith, other parts of the Earth, to the south, do {\bf not!} So, there are going to be two hottest days of the year for parts of the earth between the Tropic of Cancer and the Equator (and similarly between the Tropic of Capricorn and the Equator). They will be the days that the Sun is overhead – once on its journey to the higher latitudes, and once on its journey away from them! So May and July might actually be warmer than June!

There is one small problem – the Earth is also not always the same distance from the Sun. It is closest to the Sun in January and furthest in July. That also causes the temperature to vary a lot. But one can learn a lot by looking at annual charts of Sun Hours and Average Temperature for two cities, one at the Tropic of Cancer and the other between that and the Equator. Indore (India) is at {23 \frac{1}{2}}^{\circ} while Port Blair (also an Indian territory, but close to Indonesia) is at {11 \frac{1}{2}}^{\circ}, halfway between. Look at the dip in June between May and July for Port Blair vs. Indore (which has no such dip).

[Charts: average monthly temperatures for the two Indian cities]

By the way, several websites as well as people discuss this topic, as does the physicist Hitoshi Murayama in a neat essay. Other websites discuss the  solstice , and you can find a list of cities by latitude here. You will find temperature data here.

Can you travel faster through time?

If you watch science fiction movies, the most dramatic effects are obtained through some form of time travel. Pick some time in the future, or the past and a fabulous machine or spell swoops you away to that time.

I have always had a problem with this simple approach to time travel. One obvious objection would be this. Remember that the earth is rotating around its axis at 1000 \frac{km}{hr}, revolving around the Sun at 100,000 \frac{km}{hr}, spinning around the center of the Milky Way at 792,000 \frac{km}{hr} and being dragged at 2.1 million \frac{km}{hr} towards the Great Attractor in Leo/Virgo together with the other denizens of the Milky Way! If someone took you away for a few seconds and plonked you back at the same {\bf {SPOT}} in the universe a couple of minutes later, but several thousand years ago or ahead, you sure would need a space suit. Earth would be several billions of kilometers away; you'd be either totally in empty space or, worse, somewhere inside a sun or something. I can't imagine the Connecticut Yankee in King Arthur's court carried spare oxygen or a shovel to dig himself out of an asteroid!

OK, so if you built a time machine without an attached space capsule to bring you back to the earth, woe is you. In addition, here is a simple way to sort out the paradoxes of time travel – just put down a rule that you can travel back in time, but you will land so far away that you can’t affect your history! This should surely be possible! Only time will tell.

Anyway, I didn’t really mean to share with you my proposal for how to make time travel possible. I was pondering a peculiarity of Einstein’s theory of relativity that isn’t often made clear in basic courses.

In Einstein's world view, as modified by his teacher Hermann Minkowski, we live in a four-dimensional world, where there are three space axes and one time axis. In this peculiar space, {\bf {Events}} are labeled by (t, \vec x), describing the precise time they occurred and their position (three coordinates give you a vector). In addition, if there are two such events (t_1, \vec x_1) and (t_2, \vec x_2), then the "distance" c^2 (t_1-t_2)^2 - (\vec x_1- \vec x_2)^2 is preserved in all reference frames. Why the peculiar minus sign between the time and space pieces? That's what preserves the speed of light between all reference frames, as Minkowski realized was the key to Einstein's reformulation of the geometry of space time. It is interesting that Einstein did not think of this initially and decided that Minkowski and other mathematicians were getting their dirty hands on his physical insights and turning them into complicated beasts – he came around very quickly of course, he was Einstein!

To generalize this idea further, physicists invented the idea of a 4-vector. A regular 3-vector (example \vec x) describes, for instance, the position of a particle in space. A 4-vector for this particle would be (t,\vec x) and it describes not just where it is, but when it is. Writing equations of relativity in vector form is useful. We can write them down without reference to one particular observer, without specifying how that observer is traveling.

In notation, the 4-vector for position of a particle is x^{\mu} \equiv (x^0, \vec x) = (t, \vec x).

Before we go any further, it is useful to consider the concept of the “length” of a vector. An ordinary 3-vector has the length |\vec x| = \sqrt{x^2+y^2+z^2}, but in the 4-vector situation, the appropriate length definition has a minus sign, |x^{\mu}| = \sqrt{c^2 t^2 - x^2 - y^2 - z^2}. The relative minus sign again comes from the essential consideration that the speed of light is constant between reference frames. Note that if you look at the 4-vector for an event – it has four independent components – the time and the position in three space.

Next, one usually needs a notion of velocity. Usually we would just write down \frac{d \vec x}{dt} with 3-vectors. However, when we differentiate with respect to t, we are using coordinates particular to one observer. A better quantity to use is one that all observers would agree on – the time measured by a clock traveling with the particle. This time is called the "proper time" and it is denoted by \tau. So the relativistic 4-velocity is defined as V^{\mu} = \frac{d x^{\mu}}{d \tau}. In terms of coordinates used by an observer standing in a laboratory, watching the particle, this is

V^{\mu} \equiv (V^0,\vec V) = (\frac{1}{\sqrt{1-v^2/c^2}}, \frac{\vec v}{\sqrt{1-v^2/c^2}}).

If you haven’t seen this formula before, you have to take it from me – it is a fairly elementary exercise to derive.

The peculiar thing about this formula is that if you look at the four components of the velocity 4-vector, there are only three independent components. Given the last three, you can compute the first one exactly.

In the usual way that this is described, people say that the magnitude of this vector is (V^0)^2-(\vec V)^2=1 (in units where c = 1), as you can quickly check.
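Explicitly, with the components above and c = 1 (so v is a fraction of the speed of light),

(V^0)^2 - (\vec V)^2 = \frac{1}{1-v^2} - \frac{v^2}{1-v^2} = \frac{1-v^2}{1-v^2} = 1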

But it is peculiar. After all, x^{\mu}, the position, does have four independent components. Why does the velocity vector V^{\mu} have only three independent components? Those three are the velocities along the three spatial directions. What about the velocity in the "time" direction?

Aha! That is \frac{dt}{dt}=1. By definition, or rather, by construction, you travel along time at 1 day per day, or 1 year per year or whatever unit you prefer. The way the theory of relativity is constructed, it is {\bf {incompatible}} with any other rate of travel. {\bf {You \: \: cannot \: \: travel \: \: faster \: \: or \: \: slower, \: \: or \: \: even \: \: backwards, \: \: in \: \: time \: \: without \: \: violating \: \: the \: \: classical \: \: theory \: \: of \: \: relativity}}.

The only relativistically correct way one can traverse the time axis slower is to rotate the time axis – that's what happens when the observer sitting in a laboratory hops onto a speeding vehicle or spaceship, i.e., performs a Lorentz transformation upon him/herself. That's what produces the effects of time dilation.

Your only consolation is that virtual quantum particles can violate relativity for short periods of time inversely proportional to their energy. However, they are just that – virtual particles. And you cannot create them for communication.

Oh, well!

Coffee, anyon?

Based on the stats I receive from WordPress.com, most readers of this blog live in the US, India and the UK. In addition, there are several readers in Canada, Saudi Arabia, China, Romania, Turkey, Nigeria and France. Suppose you live in one of the first three of these countries. In addition, let's say you go to your favorite coffee shop (Starbucks, Cafe Coffee Day or your friendly neighbourhood Costa) and get a cup of coffee. Your coffee is brewed with hot water – usually filtered and quite pure; let's even assume it is distilled, so it contains only H_2 O.

Now, here is an interesting question. Can you tell, from tasting the water, where it came from, which animals in the long history of the planet passed it through their digestive systems, or which supernova or neutron star collision chucked out the heavier elements that became these particular water molecules?

No, obviously, you can’t. Think of the chaos it would create – your coffee would taste different every day, for instance!

This is an interesting observation! It indicates that materials, as they evolve, lose the memory of their past state. But isn't this in apparent contradiction with the laws of classical mechanics, which determine {\bf all} final conditions based on initial conditions? In fact, until the understanding of chaos and "mixing" came along, physicists and mathematicians spent a lot of time trying to reconcile this apparent contradiction.

When quantum mechanics came along, it allowed physicists to make sense of this in a rather simple way. Quantum states are superpositions of several microstates. In addition, when a measurement is made on a quantum system, it falls into one of the several microstates. Once it does, it has no memory of its previous superposition of states. However, there is yet another reason why history may not matter. This reason is only relevant in two spatial dimensions.

Let’s study some topology.

Suppose you have two particles, say two electrons. A quantum-mechanical system is described by a wave-function \psi, which is a function of the coordinates of the two particles, i.e., \psi(\vec x_1, \vec x_2). If the particles are {\bf {identical}}, then one can’t tell which particle is the “first” and which the “second” – they are like identical twins and physically we cannot tell them apart from each other. So, the wave function \psi might mean by its first argument, the first particle, or the second particle. How should the wave-function with one assumed order of particles be related to the wave-function with the “exchanged” order of particles? Are they the same wave function?

In three or more dimensions, the argument that’s made is as follows. If you exchange the two particles, then exchange them back again, you recover the original situation. So you should be back to your original wave function, correct? Not necessarily – the wave function is  a complex number. It is not exactly the quantity with physical relevance, only the square of the magnitude (which yields the probability of the configuration) is. So, the wave function could get multiplied by a complex phase – a number like e^{i \theta} which has magnitude 1 and still be a perfectly good description of the same situation.

So if we had the particles A & B like this and switched them

[Figure: particles A and B being exchanged]

the wave function could get multiplied by a phase e^{i \theta}, with the new wave function describing an identical situation. But if we did the same operation and switched them again, we would have to come back to exactly the same wave function as before – the wave function gets multiplied by the square of the above complex number, (e^{i \theta})^2, and this square has to be equal to 1. Why in this case? Well, remember, it looks like we are back to the identical original situation, and it would be rather strange if the same situation were described by a different wave function just because someone might or might not have carried out a double "switcheroo".

If the square of some number is 1, that number is either 1 or -1. Those two cases correspond to either bosons or fermions – yes the same guys we met in a previous post. Now, why the exchange property determines the statistical properties is a subtle theorem called the spin-statistics theorem and it will be discussed in a future post.
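In symbols (a standard way of writing it, not anything special to this post), a single exchange gives

\psi(\vec x_2, \vec x_1) = e^{i \theta} \, \psi(\vec x_1, \vec x_2)

and the double-exchange argument above forces (e^{i \theta})^2 = 1, i.e., e^{i \theta} = +1 (bosons) or e^{i \theta} = -1 (fermions).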

However, in two dimensions, it is very obvious that something very peculiar can happen.

If you have switched two particles in three dimensions, look at the picture below.

[Figure: exchanging two particles in three dimensions]

then let’s mould the switching paths for the total double switch as below

[Figure: the doubled exchange paths, which can be shrunk to a point in three dimensions]

we can move the “switcheroo” paths around and shrink each to a point. Do the experiment with  strings to convince yourself.

So, exchanging the two particles twice is equivalent to doing nothing at all.

However, if we were denizens of "Flatland" and lived on a two-dimensional flat space, we could {\bf {not}} shrink those spiral-shaped paths to a point – there isn't an extra dimension to unwind the two loops and shrink them to a point. We cannot pull one of the strings out into the third dimension and unravel it from the other! So, there is no reason for two switches to bring you back to the original wave function for the two identical particles. {\bf {The \: \: particles \: \: remember \: \: where \: \: they \: \: came \: \: from}}. This is why the quantum physics of particles in two dimensions is complicated – you can't make the usual approximations to neglect the past and start a system off from scratch, the past be damned.

Particles in two dimensions whose wave functions pick up such arbitrary exchange phases are called anyons. Coffee with anyons in two dimensions would taste mighty funny and you would smell every damned place in two dimensions it had been in before it entered your two-dimensional mouth!

The unreasonable importance of 1.74 seconds

1.74 seconds.

If you know what I am talking about, you can discontinue reading this – it's old news. If you don't, it's interesting what physicists can learn from 1.74 seconds. It's all buried in the story about GW170817.

A few days ago, the people who constructed the LIGO detectors observed gravitational waves from what appears to be the collision and collapse of a pair of neutron stars (of masses believed to be 1.16 M_{\odot} and 1.6 M_{\odot}). The gravitational wave observation is described in this Youtube posting, as is a reconstruction of how it might sound in audio (of course, it wasn't something you could {\bf {hear}}!).

As soon as the gravitational wave was detected, a search was done through data recorded at several telescopes and satellites for  coincident optical or gamma ray (high frequency light waves) emissions – the Fermi (space-based) telescope did record a Gamma ray burst 1.7 seconds later. Careful analysis of the data followed.

Sky & Telescope has a nice article discussing this. To summarize it and some of the papers briefly, it turns out that a key part of unravelling the exact details about the stars involved is to figure out the distance. To find the distance, we need to know where to look (was the Fermi telescope actually observing the same event?). There are three detectors currently at work under the LIGO collaboration and two of them detected the event. This is the minimum needed to detect an event anyway – given the extremely high noise, one needs near-simultaneous detection at two widely separated detectors to confirm that we have seen something. All the detectors have blind spots due to the angles at which they are placed, so the fact that two saw something and the third {\bf {didn't}} indicates something was afoot in the blind-spot of the third detector. That didn't localize the event enough, though. Enter Fermi's observation – which only localized the event to tens of degrees (about twenty times the size of the moon or sun). But the combination was enough to put a small field of view as the "region of interest". Optical telescopes then looked at the region and discovered the "smoking gun" – the actual increasingly bright, then dimming star. The star appears to be in a suburb of the NGC 4993 galaxy, some 130 million light years away – note that our nearest galactic neighbour is the Andromeda galaxy, which is roughly 2 million light years away. Finding the distance makes the precision in the spectra, the masses of the involved neutron stars, etc. much higher, so one can actually do a lot more precise analysis. The red-shift to the galaxy is 0.008, which looks small, but this connection helps understand the propagation of the light and gravitational waves from the event to us on the Earth.

Now, on to the simple topic of this post. If you have an event from which you received gravitational waves and photons, and the photons reached us 1.74 seconds after the gravitational waves, we can estimate a limit on the difference in the mass of the photon versus the graviton (the hypothesized particle that is the force carrier of gravity). If we, for simplicity, assume the mass of the graviton is zero, then we make the following argument:

From special relativity, if a particle has speed v, momentum p, energy E and the speed of light is c,

\frac{v}{c} = \frac{pc}{E} = \frac{\sqrt{E^2 - m^2c^4}}{E} \approx 1 - \frac{m^2c^4}{2E^2}

which implies

\frac{mc^2}{E} = \sqrt{2 \left(1 - \frac{v}{c}\right)}

Since the gravitational waves arrived 1.7 seconds before the photons, after both travelled for 130 million years, that translates into a speed differential \frac{\delta v}{c} of about 4 \times 10^{-16}, i.e.,

\frac{\delta(m) c^2}{E} \approx \sqrt{2 \times 4 \times 10^{-16}} \approx 3 \times 10^{-8}

Next, we need to compute the energy of the typical photons in this event. They follow an approximate black-body spectrum and the peak wavelength of a black-body spectrum follows the famous Wien's law. To cut to the chase, the peak emission was in the 10 keV range (kilo-electronvolts), which means our mass differential is of order 3 \times 10^{-4} eV (this is measuring mass in energy terms, as Einstein instructed us to). While the current limits on the photon's mass are much tighter, this is an interesting way to get a bound on the mass. In general, the models for this sort of emission indicate that gamma rays are emitted roughly a second after the collision, so the limits will get tighter as the models sharpen their pencils through observation.
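Here is the same order-of-magnitude estimate as a small Python sketch (the 1.7 second delay, the 130 million light years and the 10 keV photon energy are the numbers quoted above; everything else is rounded):

```python
import math

year = 3.156e7                 # seconds in a year
delay = 1.7                    # s, photon arrival lag behind the gravitational wave
T_travel = 130e6 * year        # s, travel time over ~130 million light years

dv_over_c = delay / T_travel               # fractional speed difference, ~4e-16
m_over_E = math.sqrt(2 * dv_over_c)        # (m c^2)/E, from 1 - v/c ~ m^2 c^4 / (2 E^2)

E_photon_eV = 10e3                         # ~10 keV gamma rays
m_bound_eV = m_over_E * E_photon_eV        # crude photon-mass bound, in eV

print(f"dv/c ~ {dv_over_c:.1e}, photon mass < ~{m_bound_eV:.1e} eV")
```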

However, the models are already improving bounds on various theories that rely on modifying Einstein’s (and Newton’s) theories of gravity. Keep a sharp eye out!

New kinds of Cash & the connection to the Conservation of Energy And Momentum

It's been difficult to find time to write articles on this blog – what with running a section teaching undergraduates (after 27 years of {\underline {not \: \: doing \: \:  so}}), as well as learning about topological quantum field theory – a topic I always fancied but knew little about.

However, a trip with my daughter brought up something that sparked an interesting answer to questions I got at my undergraduate section. I had taken my daughter to the grocery store – she ran out of the car to go shopping and left her wallet behind. I quickly honked at her and waved, displaying the wallet. She waved back, displaying her phone. And insight struck me – she had the usual gamut of applications on her phone that serve as ways to pay at retailers – who needs a credit card when you have Apple Pay or Google Pay? I clearly hadn't adopted the Millennial ways of life enough to understand that money comes in yet another form, adapted to your cell phone – it isn't only the kind of thing you can see, smell or Visa!

And that's the connection to the Law of Conservation of Energy, in the following way. There was a set of phenomena that Wolfgang Pauli considered in the 1930s – beta decay. The nucleus was known and so were negatively charged electrons (these were called \beta-particles). People had a good idea of the composition and mass of the nucleus (as being composed of protons and neutrons), the structure of the atom (with electrons in orbit around the nucleus) and also understood Einstein's revolutionary conceptions of the unity of mass and energy. Experimenters were studying the phenomenon of nuclear radioactive decay. Here, a nucleus abruptly emits an electron, then turns into a nucleus with one higher proton number and one lower neutron number – roughly the same atomic weight, but with an extra positive charge. This appears to happen spontaneously, but in concert with the "creation" of a proton, an electron is also produced (and emitted from the atom), so the change in the total electric charge is +1 -1 = 0 – it is "conserved". What seemed to be happening inside the nucleus was that one of the neutrons was decaying into a proton and an electron. Now, scientists had constructed rather precise devices to "stop" electrons, thereby measuring their momentum and energy. It was immediately clear that the total energy we started with – the mass-energy of the neutron (which starts out not moving very much in the experiment) – was more than the energy of the said proton (which also wasn't moving very much at the end) and the aforesaid electron.

People were quite confused about all this. What was happening? Where was the energy going? It wasn’t being lost to heating up the samples (that was possible to check). Maybe the underlying process going on wasn’t that simple? Some people, including some famous physicists, were convinced that the Law of Conservation of Energy and Momentum had to go.

As it turned out, much like I was confused in the car because I had neglected that money could be created and destroyed in an iPhone, people had neglected that energy could be carried away or brought in by invisible particles called neutrinos. It was just a proposal, till they were actually discovered in 1956 through careful experiments.

In fact, as has been rather clear since Emmy Noether discovered the connection between a symmetry and this principle years ago, getting rid of the Law of Conservation of Energy and Momentum is not that easy. It is connected to a belief that physics (and the result of physics experiments) is the same whether done here, on Pluto or in empty space outside one of the galaxies in the Hubble deep field view! As long as you systematically get rid of all "known" differences at these locations – the gravity and magnetic field of the earth, your noisy cousin next door, the tectonic activity on Pluto, or small black holes in the Universe's distant past – the fundamental nature of the universe is translationally \: \: invariant. So if you discover that you have found some violation of the Law of Conservation of Energy and Momentum, i.e., a perpetual motion machine, remember that you are announcing that there is some deep inequivalence between different points in space and time in the Universe.

The usual story is that if you notice some “violation” of this Law, you immediately start looking for particles or sources that ate up the missing energy and momentum rather than announce that you are creating or destroying energy. This principle gets carried into the introduction of new forms of “potential energy” too, in physics, as we discover new ways in which the Universe can bamboozle us and reserve energy for later use in so many different ways. Just like you have to add up so many ways you can store money up for later use!

That leads to a conundrum. If the Universe has a finite size and has a finite lifetime, what does it mean to say that all times and points are equivalent? We can deal with the spatial finiteness – after all, the Earth is finite, but all points on it are geographically equivalent, once you account for the rotation axis (which is currently where Antarctica and the Arctic are, but really could be anywhere). But how do you account for the fact that time seems to start from zero? More on this in a future post.

So, before you send me mail telling me you have built a perpetual motion machine, you really have to be Divine and if so, I am expecting some miracles too.

The Normal Distribution is AbNormal

I gave a talk on this topic exactly two years ago at my undergraduate institution, the Indian Institute of Technology, in Chennai (India). The speech is here, along with the accompanying powerpoint presentation, The Normal Distribution is Abnormal And Other Oddities. The general import of the speech was that the Normal Distribution – a statistical distribution often used to model random data of a variety of sorts – is frequently not appropriate at all. I presented cases where this is so and where the assumption of a normal distribution leads to costly errors.

Enjoy!

Mr. Olbers and his paradox

Why is the night sky dark? Wilhelm Olbers asked this question, certainly not for the first time in history, in the 1800s.

That’s a silly question with an obvious answer. Isn’t that so?

Let’s see. There certainly is no sun visible, which is the definition of night, after all. The moon might be, but on a new moon night, the moon isn’t, so all we have are stars in the sky.

Now, let’s make some rather simple-minded assumptions. Suppose the stars are distributed equally throughout space, at all previous times too. Why do we have to think about previous times? You know that light travels at 300,000 \: km/s, so when you look out into space, you also look back in time. So, one has to make some assumptions about the distribution of stars at prior times.

Then, if you draw a shell around the earth, that has a radius R and thickness \delta R, the volume of this thin shell is 4 \pi R^2 \delta R.

[Figure: a thin spherical shell of stars, radius R and thickness \delta R, centered on the earth]

Suppose there were a constant density of n stars per unit volume; this thin shell then contains n \, 4 \pi R^2 \delta R stars. Now, the further away a star is, the dimmer it seems – the light spreads out in a sphere around the star. A star at a distance R that emits I units of energy per second will project an intensity of \frac{I}{4 \pi R^2} per unit area at a distance R away. So a shell (thickness \delta R) of stars at a radius R will bombard us (on the earth) with intensity \frac {I}{4 \pi R^2} \times n \, 4 \pi R^2 \delta R units of energy per unit area, which is = I \, n \, \delta R units of energy per unit area.

This is independent of R! Since we can do this for successive shells of stars, the brightness of each shell adds. The night sky would be infinitely bright, IF the universe were infinitely big.
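Adding up the shells out to a maximum radius R_{max} makes the divergence explicit:

B_{total} = \int_0^{R_{max}} I \, n \, dR = I \, n \, R_{max} \rightarrow \infty \: \: as \: \: R_{max} \rightarrow \infty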

Some assumptions were made in the above description.

  1. We assumed that the stars are distributed uniformly in space, at past times too.
  2. We also assumed, in particular, isotropy, so there are no special directions along which stars lie.
  3. We also assumed that stars all shine with the same brightness.
  4. We didn’t mention it, but we assumed nothing obscures the light of far away stars, so we are able to see everything.
  5. In addition, we also assumed that the universe is infinite in size and that we can see all the objects in it, so it also must have had an infinite lifetime before this moment. Since light travels at a finite speed, light from an object would take some time to reach us; if the universe were infinitely old, we’d see every object in it at the moment we look up.
  6. We also assumed that the Universe wasn’t expanding rapidly – in fact with a recession speed for far away stars that increased (at least proportionately, but could be even faster than linear) with their distance. In such a universe, if we went far enough, we’d have stars whose recession speed from us exceeds the speed of light. If so, the light from those stars couldn’t possibly reach us – like a fellow on an escalator trying hard to progress upwards while the escalator goes down.

There are a tremendous number of population analyses of the distribution of light-emitting objects in the universe that make a convincing case (next post!) that the universe is isotropic and homogeneous on enormously large length scales (as in 100 megaparsecs). We don't see the kind of peculiar distributions that would lead us to assume a conspiracy of the sort implied in point 2.

We have a good idea of the life cycles of stars, but the argument would proceed on the same lines, unless we had a systematic diminution of intrinsic brightness as we looked at stars further and further away. Actually, the converse appears to be true. Stars and galaxies further away had tremendous amounts of hydrogen and little else and appear to be brighter, much brighter.

If there were actually dust obscuring far away stars, then the dust would have absorbed radiation from the stars, started to heat up, and then emitted the same amount of radiation once it reached thermodynamic equilibrium. So this is not really a valid objection.

The best explanation is that either the universe hasn’t been around infinitely long, or the distant parts are receding from us so rapidly that they are exiting our visibility sphere. Or both.

And that is the start of the study of modern cosmology.

The Great American Eclipse of 2017

I really had to see this eclipse – I met up with my nephew at KSU, then went eclipse chasing (versus the clouds) all the way from Kansas to central and south-east Missouri. The pictures I got were interesting, but I think the videos (and audio) reflect the experience of totality much better. The initial crescent-shaped shadows through the "pinholes" in the leafy branches,

With the slowly creeping moon swallowing the sun

Followed by totality

and the sudden disappearance of sunlight, followed by crickets chirping (listen to the sounds as the sky darkens)

I must confess, it became dark and my camera exposure settings got screwed up – no pictures of the diamond ring. Ah, well, better luck next time!

This is definitely an ethereal experience and one worth the effort to see. Everybody and his uncle did!

Mr. Einstein and my GPS

I promised to continue one of my previous posts and explain how Einstein’s theories of 1905 and 1915 together affect our GPS systems. If we hadn’t discovered relativity (special and general) by now, we’d have certainly discovered it by the odd behaviour of our clocks on the surface of the earth and on an orbiting satellite.

The previous post ended by demonstrating that the time interval between successive ticks of a clock at the earth’s surface \Delta t_R and a clock ticking infinitely far away from all masses \Delta t_{\infty} are related by the formula

\Delta t_{R} =  \Delta t_{\infty} (1 + \frac{ \Phi(R)}{c^2})

The gravitational potential \Phi(R)=-\frac{G M_E}{R} is a {\bf {negative}} number for all R. This means that the time interval measured by the clock at the earth's surface is {\bf {shorter}} than the time interval measured far away from the earth. If you saw the movie "Interstellar", you will hopefully remember that a year passed on Miller's planet (the one with the huge tidal waves) while 23 years passed on the Earth, since Miller's planet was close to the giant Black Hole Gargantua. So time appears to slow down on the surface of the Earth compared to a clock placed far away.

Time for some computations. The mass of the earth is 5.97 \times 10^{24} \: kg, the Earth's radius is R = 6370 \: km \: = 6.37\times 10^6 \: meters and, in MKS units, G = 6.67 \times 10^{-11}. In addition, the speed of light is c = 3 \times 10^8 \frac {m}{s}. If \Delta t_{\infty} = 1 \: sec, i.e., the clock on an orbiting satellite (assumed to be really far away from the earth) measures one second, the clock at the surface measures

\Delta t_R = (1 \: sec) \times (1 - \frac {(6.67 \times 10^{-11}) \:  \times \:  (5.97 \times 10^{24} )}{(6.37 \times 10^6 )\: (3 \times 10^8 )^2})

this can be simplified to 0.69 \: nanoseconds less than 1 \: sec. In a day, which is (24 \times 3600) \: secs, this is 6 \times 10^{-5} \: secs = 60 \: \mu \: seconds (microseconds are a millionth of a second).

In reality, as will be explained below, the GPS satellites operate at roughly 22,000 \: km above the earth's surface, so what's relevant is the {\bf {difference}} in the gravitational potential at 28,370 \: km and 6,370 \: km from the earth's center. That modifies the difference in clock rates to 0.53 \: nanoseconds per second, or 46 \: microseconds in a day.

How does GPS work? The US (other countries too – Russia, the EU, China, India) launched several satellites into a distant orbit 20,000 \: - 25,000 \: km above the earth’s surface. Most of the orbits are designed to allow different satellites to cover the earth’s surface at various points of time. A few of the systems (in particular, India’s) have satellites placed in a Geo-Stationary orbit, so they rotate around the earth with the earth – they are always above a certain point on the earth’s surface. The key is that they possess rather accurate and synchronized atomic clocks and send the time signals, along with the satellite position and ID to GPS receivers.

If you think about how to locate someone on the earth, if I told you I was 10 miles from the Empire State Building in Manhattan, you wouldn’t know where I was. Then, if I told you that I was 5 miles from the Chrysler building (also in Manhattan), you would be better off, but you still wouldn’t know how high I was. If I receive a third coordinate (distance from yet another landmark), I’d be set.  So we need distances from three well-known locations in order to locate ourselves on the Earth’s surface.

The GPS receiver on your dashboard receives signals from three GPS satellites. It knows how far they are, because it knows when the signals were emitted, as well as what the time at your location is.  Since these signals travel at the speed of light (and this is sometimes a problem if you have atmospheric interference), the receiver can compute how far away the satellites are. Since it has distances to three “landmarks”, it can be programmed to compute its own location.

Of course, if its clock were constantly running slower than the satellite clocks, it would constantly overestimate the distance to these satellites, for it would think the signals were emitted earlier than they actually were. This would throw off the location calculation by the distance travelled by light in 0.53 \: nanoseconds, which is 0.16 meters, every second. Over a day, this would grow to 14 kilometers. You could well be in a different city!

There's another effect – that of time dilation. To explain this, there is nothing better than the thought experiment below, which I think I first heard of from George Gamow's book. As with {\bf {ALL}} arguments in special and general relativity, the only things observers can agree on are the speed of light and (hence) the order of causally related events. That's what we use below.

There’s an observer standing in a much–abused rail carriage. The rail carriage is travelling to the right, at a high speed V. The observer has a rather cool contraption / clock. It is made with a laser, that emits photons and a mirror, that reflects them. The laser emits photons from the bottom of the carriage towards the ceiling, where the mirror is mounted. The mirror reflects the photon back to the floor of the car, where it is received by a photo-detector (yet another thing that Einstein first explained!).

[Figure: the light clock on the train]

The time taken for this up- and down- journey (the emitter and mirror are separated by a length L) is

\Delta t' = \frac{2 L}{c}

That’s what the observer on the train measures the time interval to be. What does an observer on the track, outside the train, see?

[Figure: the light clock as seen from outside the train]

She sees the light traverse the path shown in blue above. However, she also sees the light traveling at the same (numerical) speed, so she decides that the time between emission and reception of the photon is found using Pythagoras' theorem

L^2 = (c \frac{\Delta t}{2})^2 - (V \frac {\Delta t}{2})^2

\rightarrow  \Delta t = \frac {2 L}{c} \frac{1}{\sqrt{1 - \frac{V^2}{c^2}}}

So, the time interval between the same two events is computed to be larger on the stationary observer’s clock, than on the moving observer’s clock. The relationship is

\Delta t = \frac {\Delta t'}{ \sqrt{1 - \frac{V^2}{c^2}} }

How about that old chestnut – well, isn’t the observer on the track moving relative to the observer on the train? How come you can’t reverse this argument?

The answer is – who’s going to have to turn the train around and sheepishly come back after this silly experiment runs its course? Well! The point is that one of these observers has to actively come back in order to compare clocks. Relativity just observes that you cannot make statements about {\bf {absolute}} motion. You certainly have to accept relative motion and in particular, how observers have to compare clocks at the same point in space.

From the above, 1 second on the moving clock would correspond to \frac {1}{ \sqrt{1 - \frac{V^2}{c^2}} } seconds on the clock by the tracks. A satellite at a distance D from the center of the earth has an orbital speed of \sqrt {\frac {G M_E}{D} } , which for an orbit 22,000 km above the earth’s surface, which is 28,370 \: km from the earth’s center, would be roughly

\sqrt { \frac {(6.67 \times 10^{-11}) \times (5.97 \times 10^{24})}{28370 \times 10^3} } \approx  3700 \: \frac{meters}{sec}

which means that 1 second on the moving clock corresponds to 1 \: sec + 0.078 \: nanoseconds on the clock by the tracks. Over a day, this amounts to a drift of about 7 \: microseconds, in the {\bf {opposite}} direction to the above calculation for gravitational slowing.

Net result – the satellite clocks run faster by about 40 microseconds a day. They need to be continually adjusted to keep them in sync with earth-based clocks.
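Here is a small Python sketch of the whole estimate (a back-of-the-envelope check of my own, using the orbit numbers quoted above, not a GPS engineering calculation). The gravitational term uses the standard weak-field formula \frac{G M_E}{c^2} \left( \frac{1}{R_{earth}} - \frac{1}{R_{sat}} \right) for the fractional rate difference between a clock on the ground and one at the satellite’s altitude.

```python
import math

G       = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_E     = 5.97e24            # mass of the earth, kg
C       = 299_792_458.0      # speed of light, m/s
R_EARTH = 6.37e6             # radius of the earth, m
R_SAT   = 28_370e3           # orbital radius quoted above, m
DAY     = 86_400.0           # seconds in a day

# Special relativity: the orbiting clock runs slow by ~ v^2 / (2 c^2)
v = math.sqrt(G * M_E / R_SAT)                 # ~3700 m/s, as in the text
sr_drift = (v ** 2) / (2 * C ** 2) * DAY       # seconds lost per day

# General relativity (weak field): the higher clock runs fast by
# G M_E / c^2 * (1/R_earth - 1/R_sat)
gr_drift = G * M_E / C ** 2 * (1 / R_EARTH - 1 / R_SAT) * DAY

print(f"orbital speed         : {v:8.0f} m/s")
print(f"velocity slowdown     : {sr_drift * 1e6:6.1f} microseconds/day")
print(f"gravitational speedup : {gr_drift * 1e6:6.1f} microseconds/day")
print(f"net drift             : {(gr_drift - sr_drift) * 1e6:6.1f} microseconds/day")
```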

So, that’s three ways in which Mr. Einstein matters to you EVERY day!

 

Master Traders and Bayes’ theorem

Imagine you were walking around in Manhattan and you chanced upon an interesting game going on at the side of the road. By the way, when you see these games going on, a safe strategy is to walk on, since they usually reduce to methods of separating a lot of money from you in various ways.

The protagonist, sitting at the table, tells you (and you are able to confirm this from video taken by a nearby security camera run by a disinterested police officer) that he has managed to toss the same quarter (an American coin) thirty times and gotten “Heads” {\bf ALL} of those times. What would you say about the fairness or unfairness of the coin in question?

Next, your good friend rushes to your side and whispers to you that this guy is actually one of a really large number of people (a little more than a billion) who were asked to successively toss freshly minted, scrupulously clean and fair quarters. People who tossed tails were “tossed” out at each successive round and only those who tossed heads were allowed to toss again. This guy (and one more like him) were the only ones that remained. What can you say now about the fairness or unfairness of the coin in question?

What if the number of coin tosses was 100 rather than 30, with a larger number of initial subjects?
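To see how unremarkable the surviving tosser is, here is a minimal simulation sketch (my own illustration, assuming numpy is available; the population size is just the “little more than a billion” from the story).

```python
import numpy as np

rng = np.random.default_rng(0)

def survivors(n_people, n_rounds):
    """Everyone tosses a fair coin each round; anyone who tosses tails is out.
    The number still standing after each round is a Binomial(alive, 1/2) draw."""
    alive = n_people
    for _ in range(n_rounds):
        alive = rng.binomial(alive, 0.5)
    return alive

N = 1_200_000_000                                            # a little more than a billion tossers
print("expected survivors after 30 rounds :", N / 2 ** 30)   # ~1.1
print("simulated survivors after 30 rounds:", survivors(N, 30))
print("tossers needed, on average, for one 100-heads survivor:", 2 ** 100)
```

With perfectly fair coins, a crowd of a billion is expected to produce about one 30-heads survivor; a 100-heads survivor would require a crowd of roughly 2^{100} people – far more than have ever lived – which is why 100 straight heads is much harder to explain away as mere survivorship.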

Just to make sure you think about this correctly, suppose you were the Director of a large State Pension Fund and you need to invest the life savings of your state’s teachers, firemen, policemen, highway maintenance workers and the like. You are told you have to decide whether to allocate some money to a bet made by an investment manager, based on his or her track record (he or she successively tossed “Heads” a hundred times in a row). Should you invest money on the possibility that he or she will toss “Heads” again? If so, how much should you invest? Should you stay away?

This question cuts to the heart of how we operate in real life. If we cut out the analytical skills we learnt in school and revert to how our “lizard” brain thinks, we would assume the coin was unfair (in the first instance) and express total surprise at the second fact. In fact, even though the second situation could well lie behind every instance of the first sort that we encounter in the real world, we would still operate as if the coin were unfair, as our “lizard” brain instructs us to.

What we are doing unconsciously is using Bayes’ theorem. Bayes’ theorem is the linchpin of inferential deduction and is often misused, even by people who think they understand what they are doing with it. If you want to read a couple of rather interesting books that use it in various ways, read Gerd Gigerenzer’s “Reckoning with Risk: Learning to Live with Uncertainty” or Hans Christian von Baeyer’s “QBism“. I will discuss a few classic examples. In particular, Gigerenzer’s book discusses several such examples, as well as ways to overcome popular mistakes made in the interpretation of the results.

Here’s a very overused, but instructive example. Let’s say there is a rare disease (pick your poison) that afflicts 0.25 \% of the population. Unfortunately, you are worried that you might have it. Fortunately for you, there is a test that can be performed, and it is 99 \% accurate – if you do have the disease, the test will detect it 99 \% of the time. Unfortunately, the test has a 0.1 \% false positive rate, which means that if you don’t have the disease, there is still a 0.1 \% chance you will mistakenly get a positive result. Despite this, the numbers look exceedingly good, so the test is much admired.

You nervously proceed to your doctor’s office and get tested. Alas, the result comes back “Positive”. Now, ask yourself: what are the chances you actually have the disease? After all, you have heard of false positives!

A simple way to turn the percentages above into numbers is to consider a population of 1,000,000 people. Since the disease is rather rare, only (0.25 \% \equiv ) \: 2,500 have the disease. If they are all tested, only (1 \% \equiv ) \: 25 of them will get an erroneous “Negative” result – the other 2,475 will correctly test “Positive”. However, if the rest of the population were tested in the same way, (0.1 \% \equiv) \: roughly 1,000 people would get a “Positive” result, despite not having the disease. In other words, of the roughly 3,475 people who would get a “Positive” result, only 2,475 actually have the disease, which is roughly 71\% – so such an accurate test can only give you a 7-in-10 chance of actually being diseased, despite its incredible accuracy. The reason is that the “false positive” rate is low, but not low enough to overcome the extreme rarity of the disease in question.

Notice, as Gigerenzer does, how simple the argument seems when phrased with numbers, rather than with percentages. To do the same thing using standard probability theory, write the probability that an event A occurs, once we know that an event B has occurred, as P(A/B). Then

P(A/B) \: P(B) = P(B/A) \: P(A)

Both sides of this equation are just the probability that A and B occur together. Using this,

P(I \: am \: diseased \: GIVEN \: I \: tested \: positive) = \frac {P(I \: test \: positive \: GIVEN \: I \: am \: diseased) \: P(I \: am \: diseased)}{P(I \: test \: positive)}

and then we note

P(I \: am \: diseased) = 0.25\%

P(I \: test \: positive \: GIVEN \: I \: am \: diseased) = 99 \%

P(I \: test \: positive) = 0.25 \% \times 99 \% + 99.75 \% \times 0.1 \%

since I could test positive for two reasons – either I really am among the 0.25 \% diseased people and additionally was among the 99 \% that the test caught, OR I really was among the 99.75 \% healthy people but was among the 0.1 \% that unfortunately got a false positive.

Indeed, \frac{0.25 \% \times 99 \%}{0.25 \% \times 99 \% + 99.75 \% \times 0.1 \%} \approx  0.71

which was the answer we got before.

The rather straightforward formula I used in the above is one formulation of Bayes’ theorem. Bayes’ theorem allows one to incorporate one’s knowledge of partial outcomes to deduce what the underlying probabilities of events were to start with.
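For completeness, here is a minimal Python sketch (mine, not from any of the books mentioned) that does the calculation both ways – the Gigerenzer-style counting and the formula – with the numbers used above.

```python
# The same calculation two ways: counting in a concrete population, and Bayes' formula.
PREVALENCE     = 0.0025   # 0.25% of the population has the disease
SENSITIVITY    = 0.99     # the test catches 99% of true cases
FALSE_POSITIVE = 0.001    # 0.1% of healthy people test positive anyway

# 1) Counting, with a population of one million
population      = 1_000_000
diseased        = population * PREVALENCE                   # 2,500
true_positives  = diseased * SENSITIVITY                    # 2,475
false_positives = (population - diseased) * FALSE_POSITIVE  # ~998
print("P(diseased | positive), by counting:",
      true_positives / (true_positives + false_positives))

# 2) Bayes' theorem: P(D|+) = P(+|D) P(D) / P(+)
p_positive = PREVALENCE * SENSITIVITY + (1 - PREVALENCE) * FALSE_POSITIVE
print("P(diseased | positive), by Bayes:   ",
      PREVALENCE * SENSITIVITY / p_positive)
```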

There is no good answer to the question that I posed at the beginning of this section. It is true that both a fair and an unfair coin could give results consistent with the first event (someone tosses 30 or even 100 heads in a row). However, if one insists that probability has an objective meaning independent of our experience, based upon the results of an infinite number of repetitions of some experiment (the so-called “frequentist” interpretation of probability), then one is stuck. Based upon that principle, if you haven’t heard anything contrary to the facts about the coin, your a priori assumption about the probability of heads must be \frac {1}{2}. On the other hand, that isn’t how you run your daily life. In fact, the most legally defensible (many people would argue the {\bf {only}} defensible) strategy for the Director of the Pension Fund would be to

  • not assume that prior returns were based on pure chance and would be equally likely to be positive or negative
  • bet on the manager with the best track record

At a minimum, I would advise people to stay away from a stable of managers who are simply the survivors of a talent test where the losers were rejected (oh wait, that sounds like a large number of investment managers in business these days!). Of course, a manager who knows they have a good thing going is likely not to allow investors at all, for fear of reducing their returns due to crowding. Such managers also exist in the global market.

The Bayesian approach has a lot in common with our every-day approach to life. It is not surprising that it has been applied to the interpretation of Quantum Mechanics and that will be discussed in a future post.

Fermi Gases and Stellar Collapse – Cosmology Post #6

The most refined Standard Candle there is today is a particular kind of Stellar Collapse, called a Type Ia Supernova. To understand this, you will need to read the previous posts (#1-#5), in particular the Fermi-Dirac statistics argument in Post #5 of the sequence. While this is the most mathematical of the posts, it might be useful to skim over the argument to understand the reason for the amazing regularity in these explosions.

Type Ia supernovas happen to white dwarf stars. A white dwarf is a kind of star that has reached the end of its starry career. It has burnt through its hydrogen fuel, producing all sorts of heavier elements, through to carbon and oxygen. It has also ceased being hot enough to burn carbon and oxygen in fusion reactions. Since these two elements burn rather less efficiently than hydrogen or helium in fusion reactions, the star is dense (there is less radiation pressure from fusion in the interior to counteract the gravitational pressure of the matter, so it compresses itself) and the interior is composed of ionized carbon and oxygen (all the negatively charged electrons are pulled out of every atom; the remaining ions are positively charged and the electrons roam freely in the star). Just as in a crystalline lattice (as in a typical metal), the light electrons are good at screening the positively charged ions from other ions, and the ions in turn screen the electrons from other electrons; the upshot is that the electrons behave like free particles.

At this point, the star is being pulled in by its own mass and is being held up by the pressure exerted by the gas of free electrons in its midst. The “lattice” of positive ions also exerts pressure, but that pressure is much smaller, as we will see. The temperature of the surface of a white dwarf is known from observations to be quite high, \sim 10,000-100,000 \: Kelvin. More important, the free electrons in a white dwarf with mass comparable to or greater than the Sun’s mass (written as M_{\odot}) are ultra-relativistic, with energies much higher than their rest mass. Remember, too, that electrons are a species of “fermion”, which obey Fermi-Dirac statistics.

The Fermi-Dirac formula is written as

P(\vec k) = 2 \frac {1}{e^{\frac{\hbar c k - \hbar c k_F}{k_B T}}+1}

What does this formula mean? The energy of an ultra-relativistic electron, one whose energy is far in excess of its rest mass, is

E = \hbar c k

where c k is the “frequency” corresponding to an electron of momentum \hbar k, while \hbar is the “reduced” Planck’s constant (=\frac {h}{2 \pi}, where h is the regular Planck’s constant) and c is the speed of light. The quantity k_F is called the Fermi wave-vector. The function P(\vec k) is the (density of) probability of finding an electron in the momentum state specified by \hbar \vec k . In the study of particles whose wave nature is apparent, it is useful to use the concept of the de Broglie “frequency” (\nu = \frac{E}{h}), the de Broglie “wavelength” (\lambda=\frac {V}{\nu}, where V is the particle velocity) and k=\frac{2 \pi}{\lambda}, the “wave-number” corresponding to the particle. It is customary for lazy people to omit c and \hbar in formulas; hence, we speak of momentum k for a hyper-relativistic particle travelling at a speed close to the speed of light, when it should really be h \frac {\nu}{c} = h \frac{V}{\lambda c} \approx \frac{h}{\lambda} = \frac {h}{2 \pi} \frac{2 \pi}{\lambda} = {\bf {\hbar k}}.

Why a factor of 2? It wasn’t there in the previous post!

From the previous post, you know that fermions don’t like to be in the same state together. We also know that electrons have a property called spin and they can be spin-up or spin-down. Spin is a property akin to angular momentum, which is a property that we understand classically, for instance, as describing the rotation of a bicycle wheel. You might remember that angular momentum is conserved unless someone applies a torque to the wheel. This is the reason why free-standing gyroscopes can be used for airplane navigation – they “remember” which direction they are pointing in. Similarly, spin is usually conserved, unless you apply a magnetic field to “twist” a spin-up electron into a spin-down configuration. So, you can actually have two kinds of electrons – spin-up and spin-down, in each momentum state \vec k . This is the reason for the factor of 2 in the formula above – there are two “spin” states per \vec k state.
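To see what the distribution looks like, here is a small Python sketch (my own, in dimensionless units where the Fermi energy \hbar c k_F sets the scale) that tabulates the occupation of a level as a function of its energy for a few temperatures: at low temperature it is a sharp step – 2 electrons per state below the Fermi level, essentially none above – and the step smears out as the temperature is raised.

```python
import math

def occupation(e_over_ef, t_over_tf):
    """Average occupation (including the factor of 2 for spin) of a state with
    energy (e_over_ef * E_F), at temperature (t_over_tf * T_F), where the
    Fermi energy E_F = hbar*c*k_F and k_B*T_F = E_F."""
    return 2.0 / (math.exp((e_over_ef - 1.0) / t_over_tf) + 1.0)

energies = (0.5, 0.9, 1.0, 1.1, 1.5)               # in units of E_F
print("E/E_F       :", "  ".join(f"{e:5.2f}" for e in energies))
for t in (0.01, 0.1, 0.5):                         # temperatures in units of T_F
    row = "  ".join(f"{occupation(e, t):5.3f}" for e in energies)
    print(f"T = {t:4.2f} T_F:", row)
```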

Let’s understand the Fermi wave-vector k_F. The fermions occupy momentum states at most two at a time (one per spin), and if they are confined to a cube of side L, you can ask how many levels they occupy. They will, like all sensible particles, start occupying levels from the lowest energy level upwards, until all the available fermions are exhausted. The fermions are described by waves and waves, in turn, are described by their wavelength, so you need to classify all the possible ways to fit waves into a cube. Let’s look at a one-dimensional case to start.

Fermion Modes 1D

The fermions need to bounce off the ends of the one-dimensional lattice of length L, so we need the waves to be pinned to 0 at the ends. If you look at the above pictures, the wavelengths of the waves are 2L (for n=1), L (for n=2), \frac{2 L}{3} (for n=3), \frac {L}{2} (for n=4). In that case, the wavenumbers, which are basically \frac {2 \pi}{\lambda}, need to be of the sort \frac {n \pi}{L}, where n is a positive integer (1, 2, 3 ...).

Fermion Modes 1D Table

For a cube of side L, the corresponding wave-numbers are described by \vec k = (n_x, n_y, n_z) \frac {\pi}{L} since a vector will have three components in three dimensions. These wave-numbers correspond to the momenta of the fermions (this is basically what’s referred to as wave-particle duality), so the momentum is \vec p = \hbar \vec k. The energy of each level is \hbar c k. It is therefore convenient to think of the electrons as filling spots in the space of k_x, k_y, k_z.

What do we have so far? These “free” electrons are going to occupy energy levels starting from the lowest, k = (1, 1, 1) \frac{\pi}{L}, and so on upwards in a neat, symmetric fashion. In k space, which is “momentum space”, since we have many, many electrons, we can think of them as filling up a sphere of radius k_F. This radius is called the Fermi wave-vector. It represents the most energetic of the electrons when they are arranged as economically as possible – with the lowest possible total energy for the gas of electrons. This would happen at zero temperature (which is the approximation we are going to work with here). It comes out of the probability distribution formula (ignore the 2 for this; consider the probability of occupation of levels for a one-dimensional fermion gas). Note that all the electrons are inside the Fermi sphere at low temperature and leak out as the temperature is raised (graphs towards the right).

It is remarkable and you should realize this, that a gas of fermions in its lowest energy configuration has a huge amount of energy. The Pauli principle requires it. If they were bosons, all of them would be sitting at the lowest possible energy level, which couldn’t be zero (because we live in a quantum world) but just above it.

What’s the energy of this gas? It’s an exercise in arithmetic at zero temperature. Is that good enough? No, but it gets us pretty close to the correct answer and it is instructive.

The total energy of the gas of electrons (in a spherical white dwarf of volume V = \frac {4}{3} \pi R^3, with 2 spins per state) is

E_{Total} =  2 V  \int_{|\vec k| \le k_F} \frac {d^3 \vec k}{(2 \pi)^3} \hbar c k = V \frac {\hbar c k_F^4}{4 \pi^2}

The total number of electrons is obtained by just adding up all the available states in momentum space, up to k_F

N = 2 V \int_{|\vec k| \le k_F} \frac {d^3 \vec k}{(2 \pi)^3} \rightarrow k_F^3 = 3 \pi^2 \frac {N}{V}
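As a sanity check on that last relation, here is a short Python sketch (mine) that counts momentum states by brute force, using the periodic-boundary-condition counting (k = \frac{2 \pi}{L}(n_x, n_y, n_z) with integer n) that underlies the (2 \pi)^3 factors in the integrals above; the hard-wall standing-wave counting sketched earlier gives the same density of states.

```python
import math

def count_states(kF_times_L):
    """Brute-force count of momentum states (2 spin states each) inside the
    Fermi sphere for a box of side L, with k = 2*pi*n/L and n integer."""
    r = kF_times_L / (2 * math.pi)        # Fermi radius measured in units of n
    n_max = int(r) + 1
    count = 0
    for nx in range(-n_max, n_max + 1):
        for ny in range(-n_max, n_max + 1):
            for nz in range(-n_max, n_max + 1):
                if nx * nx + ny * ny + nz * nz <= r * r:
                    count += 1
    return 2 * count                      # factor of 2 for spin

kF_times_L = 60.0
print("brute-force count of states:", count_states(kF_times_L))
print("formula  V k_F^3 / (3 pi^2):", kF_times_L ** 3 / (3 * math.pi ** 2))
```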

We need to estimate the number of electrons in the white dwarf to start this calculation off. That’s what sets the value of k_F, the radius of the “sphere” in momentum space of filled energy states at zero temperature.

The mass of the star is M. That corresponds to \frac {M} { \mu_e m_p} electrons, where m_p is the mass of the proton (the dominant part of the mass) and \mu_e is the ratio of atomic weight to atomic number (the number of nucleons per electron) for a typical constituent atom of the white dwarf. For a star composed of Carbon and Oxygen, this is 2. So, N = \frac {M}{\mu_e m_p} = \frac {M}{2 m_p}.

Using all the above

E_{Total} = \frac {4\pi}{3} R^3  \frac {\hbar c}{4 \pi^2} \left(  3 \pi^2 \frac {M}{\mu_e m_p \frac{4\pi}{3} R^3 }\right)^{4/3}

Next, the white dwarf has some gravitational potential energy just because of its existence. This is calculated in high school classes by integration over successive spherical shells from 0 to the radius R, as shown below

Spherical Shell White Dwarf

The gravitational potential energy (taking the mass density \rho_m = \frac {M}{\frac{4 \pi}{3} R^3} to be uniform) is

\int_{0}^{R} (-) G \frac {\left( \frac{4 \pi}{3} \rho_m r^3 \right) \left( 4 \pi r^2 \rho_m \right)}{r} dr = - \frac{3}{5} \frac {G M^2}{R}

A strange thing happens if the energy of the electrons (which is called, by the way, the “degeneracy energy”) plus the gravitational energy goes negative. At that point, the total energy becomes ever more negative as the white dwarf’s radius gets smaller – this can continue {\it ad \: infinitum} – the star collapses. This starts to happen when the magnitude of the gravitational potential energy equals the Fermi gas energy; setting the two equal leads to

 \frac{3}{5} \frac{G M^2}{R} = \frac {4\pi}{3} R^3  \frac {\hbar c}{4 \pi^2} \left(  3 \pi^2 \frac {M}{\mu_e m_p \frac{4\pi}{3} R^3 }\right)^{4/3}

the R (radius) of the star drops out and we are left with a unique mass M at which this happens – the calculation above gives an answer of 1.7 M_{\odot}. A more careful calculation (solving the full equations of hydrostatic equilibrium rather than assuming uniform density) gives 1.44 M_{\odot}.
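The arithmetic is easy to automate. Here is a minimal Python sketch (my own transcription of the balance above, with R cancelled out by hand) that reproduces the rough 1.7 M_{\odot} figure.

```python
import math

G     = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
HBAR  = 1.0546e-34       # reduced Planck constant, J s
C     = 2.998e8          # speed of light, m/s
M_P   = 1.6726e-27       # proton mass, kg
MU_E  = 2.0              # nucleons per electron for a carbon/oxygen white dwarf
M_SUN = 1.989e30         # solar mass, kg

# Equating (3/5) G M^2 / R to the Fermi-gas energy above, R drops out, leaving
#   M^(2/3) = (5/3) (hbar c / G) (3 pi^2)^(4/3) / (4 pi^2 (4 pi/3)^(1/3)) * (mu_e m_p)^(-4/3)
const = (5.0 / 3.0) * (3 * math.pi ** 2) ** (4.0 / 3.0) \
        / (4 * math.pi ** 2 * (4 * math.pi / 3) ** (1.0 / 3.0))
M_crit = (const * HBAR * C / G) ** 1.5 / (MU_E * M_P) ** 2

print(f"critical mass ~ {M_crit / M_SUN:.2f} solar masses")   # roughly 1.7
```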

The famous physicist S. Chandrasekhar, after whom the Chandra X-ray space observatory is named, discovered this (Chandrasekhar limit) while ruminating about the effect of hyper-relativistic fermions in a white dwarf.

Chandrasekhar

He was on a voyage from India to Great Britain at the time and had the time for unrestricted rumination of this sort!

Therefore, as is often the case, if a white dwarf is surrounded by a swirling cloud of gas or has a companion star from which it accretes matter, it will undergo a cataclysmic collapse and explosion precisely when this limit is reached. If so, once one understands the light emitted by such a supernova at some nearer location, one has a Standard Candle – it is like having a hand grenade of {\bf exactly} the same quality at various distances. By looking at how bright the explosion appears, you can tell how far away it is.

After this longish post, I will describe the wonderful results from this analysis in the next post – it has changed our views of the Universe and our place in it, in the last several years.

Coincidences and the stealthiness of the Calculus of Probabilities

You know this story (or something similar) from your own life. I was walking from my parked car to the convenience store to purchase a couple of bottles of sparkling water. As I walked there, I noticed a car with the number 1966 – that’s the year I was born! This must be a coincidence – today must be a lucky day!

There are other coincidences, numerical or otherwise. Carl Sagan, in one of his books, mentions a person who thought of his mother the very day she passed away in a different city. He (this person) was convinced this was proof of life after/before/during death.

There are others in the Natural World around us (I will be writing about the “Naturalness” idea in the future) – for eclipse aficionados, there is going to be a total solar eclipse over a third of the United States on the 21st of August 2017. It is a coincidence that the moon is exactly the right size to completely cover the sun (precisely – see eclipse photos from NASA below)

Total solar eclipse photographs (NASA)

Isn’t it peculiar that the moon is exactly the right size? The moon has other notable properties – for instance, the face of the moon that we see is always the same face. Mercury does something similar with the Sun – tidal forces have locked its spin to its orbit (in Mercury’s case, in a 3:2 resonance rather than 1:1). This is well understood as a tidal effect of a small object in the gravitational field of a large neighbor. There’s an excellent Wikipedia article about this effect and I will explain it further in the future. But there is no simple explanation for why the moon is the right size for total eclipses. It is not believed to be anything but an astonishing coincidence. After all, there are 6000-odd other visible objects in the sky that aren’t exactly eclipsed by any satellite, so why should this particular pair matter, except that they provide us much-needed heat and light?

The famous physicist Paul Dirac discovered an interesting numerical coincidence based on some other numerology that another scientist called Eddington was obsessed with. It turns out that a number of (somewhat carefully constructed) ratios are of the same order of magnitude – basically remember the number 10^{40}!

  • The ratio of the electrical and gravitational forces between the electron and the proton (\frac{1}{4 \pi \epsilon_0} e^2 vs. G m_p m_e) is approximately 10^{40}
  • The ratio of the size of the universe to the electron’s Compton wavelength, which is the de Broglie wavelength of a photon of the same energy as the electron – 10^{27}m \: vs \: 10^{-12} m \: \approx 10^{39}
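The ratios are easy to check. Here is a quick Python sketch (mine; the 10^{27} m figure for the size of the universe is just the round number used above).

```python
import math

E_CHARGE   = 1.602e-19     # electron charge, C
EPS0       = 8.854e-12     # vacuum permittivity, F/m
G          = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_P        = 1.673e-27     # proton mass, kg
M_E        = 9.109e-31     # electron mass, kg
H          = 6.626e-34     # Planck's constant, J s
C          = 2.998e8       # speed of light, m/s
R_UNIVERSE = 1e27          # rough size of the universe used above, m

force_ratio = (E_CHARGE ** 2 / (4 * math.pi * EPS0)) / (G * M_P * M_E)
compton     = H / (M_E * C)               # electron Compton wavelength, ~2.4e-12 m
size_ratio  = R_UNIVERSE / compton

print(f"electric / gravitational force ratio : {force_ratio:.1e}")   # ~2e39
print(f"universe size / Compton wavelength   : {size_ratio:.1e}")    # ~4e38
```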

On the basis of this astonishing coincidence, Dirac made the startling suggestion that (since the size of the universe is related to its age) the value of G might fall with time (why not e going up with time, or something else?). While precision experiments measuring the value of G are only beginning now, there would have been cosmological consequences if G had indeed behaved as 1/t in the past! For this reason, people discount this “theory” these days.

I heard of another coincidence recently – the value of the quantity \frac {c^2}{g}, where c is the speed of light and g is the acceleration due to gravity at the surface of the earth (9.81 \frac{m}{s^2}), is very close to 1 light year. This implies (if you put together the formulas for g and for 1 light year) a relationship between the mass of the earth, the mass of the sun, the earth’s radius and the earth-sun distance!
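A one-line check of this one (my own numbers): \frac{c^2}{g} comes out within a few percent of a light year.

```python
C          = 2.998e8                      # speed of light, m/s
G_SURFACE  = 9.81                         # acceleration due to gravity, m/s^2
LIGHT_YEAR = C * 365.25 * 24 * 3600       # metres in one (Julian) light year

print(f"c^2 / g        = {C ** 2 / G_SURFACE:.3e} m")
print(f"one light year = {LIGHT_YEAR:.3e} m")
print(f"ratio          = {(C ** 2 / G_SURFACE) / LIGHT_YEAR:.3f}")   # ~0.97
```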

The issue with coincidences of any sort is that it is imperative to separate signal from noise. This is why some simple examples from probability are useful to consider. Let’s understand this.

If you have two specific people in mind, the probability of both having the same birthday is \frac{1}{365} – there are 365 possibilities for the second person’s birthday and only 1 way for it to match the first person’s birthday.

If, however, you have N people and you ask for any match of birthdays, there are \frac {N(N-1)}{2} pairs of people to consider and you have a substantially higher probability of a match. In fact, the easy way to calculate this is to ask for the probability of NO matches – that is \frac {364}{365} \times \frac {363}{365} \times ... \times \frac {365 - (N-1)}{365}, where \frac {364}{365} is the probability that the second person misses the first person’s birthday, \frac {363}{365} the probability that the third person misses the first two, and so on. Subtracting from 1 then gives the probability of at least one match. Among other things, this implies that the chance of at least one match is over 50% (1 in 2) for a group of 23 unconnected people (no twins, etc.). And if you have 60 people, the probability is extremely close to 1 (a short calculation is sketched below, after the graph).

Probability Graph for Birthdays
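The calculation behind a graph like the one above is a few lines of Python (my sketch, assuming 365 equally likely birthdays and no twins).

```python
def p_shared_birthday(n):
    """Probability that at least two of n people share a birthday."""
    p_no_match = 1.0
    for i in range(n):
        p_no_match *= (365 - i) / 365
    return 1.0 - p_no_match

for n in (10, 23, 40, 60):
    print(f"{n:3d} people -> P(at least one shared birthday) = {p_shared_birthday(n):.3f}")
```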

The key takeaway from this is that a less probable event has a chance to become much more probable when you have the luxury of adding more possibilities for the event to occur. As an example, if you went around your life declaring in advance that you will worry about coincidences ONLY if you find a number matching the specific birth year for your second cousin – the chances are low that you will observe such a number – unless you happen to hang about near your second cousin’s home! On the other hand, if you are willing to accept most numbers close to your heart – the possibilities stealthily abound and the probability of a match increases! Your birthday, your age, room number or address while in college, your current or previous addresses, the license plates for your cars, your current and previous passport numbers – the possibilities are literally, endless. And this means that the probability of a “coincidence” is that much higher.

I have a suggestion if you notice an unexplained coincidence in your life. Figure out if that same coincidence repeats itself in a bit – in a week, say. You have much stronger grounds for an argument with someone like me if it does! And then you still have to have a coherent theory of why it is a real effect, and not just a coincidence, in the first place!

 

Addendum: Just to clarify, I write above “It is a coincidence that the moon is exactly the right size to completely cover the sun …” – this is from our point of view, of course. These objects would have radically different sizes when viewed from Jupiter, for instance.