Mr. Einstein and my GPS


I promised to continue one of my previous posts and explain how Einstein’s theories of 1905 and 1915 together affect our GPS systems. If we hadn’t discovered relativity (special and general) by now, we would certainly have discovered it from the odd behaviour of clocks on the surface of the earth compared to clocks on an orbiting satellite.

The previous post ended by demonstrating that the time interval between successive ticks of a clock at the earth’s surface, \Delta t_R, and of a clock ticking infinitely far away from all masses, \Delta t_{\infty}, are related by the formula

\Delta t_{R} =  \Delta t_{\infty} (1 + \frac{ \Phi(R)}{c^2})

The gravitational potential \Phi(R)=-\frac{G M_E}{R} is a {\bf {negative}} number for all R. This means that the time interval measured by the clock at the earth’s surface is {\bf {shorter}} than the time interval measured far away from the earth. If you saw the movie “Interstellar“, you will hopefully remember that a few hours passed on Miller’s planet (the one with the huge tidal waves) while 23 years passed on the Earth, since Miller’s planet was close to the giant Black Hole Gargantua. So time appears to slow down on the surface of the Earth compared to a clock placed far away.

Time for some computations. The mass of the earth is 5.97 \times 10^{24} \: kg, the earth’s radius is R = 6370 \: km \: = 6.37\times 10^6 \: meters, G = 6.67 \times 10^{-11} in MKS units, and the speed of light is c = 3 \times 10^8 \frac {m}{s}. If \Delta t_{\infty} = 1 \: sec, i.e., a clock really far away from the earth measures one second, then the clock at the surface measures

\Delta t_R = (1 \: sec) \times (1 - \frac {(6.67 \times 10^{-11}) \:  \times \:  (5.97 \times 10^{24} )}{(6.37 \times 10^6 )\: (3 \times 10^8 )^2})

this works out to 0.69 \: nanoseconds less than 1 \: sec. In a day, which is 24 \times 3600 = 86,400 \: secs, this adds up to 60 \times 10^{-6} \: secs = 60 \: \mu \: seconds (a microsecond is a millionth of a second).

In reality, as will be explained below, the GPS satellites operate at roughly 22,000 \: km above the earth’s surface, so what’s relevant is the {\bf {difference}} in the gravitational potential at 28,370 \: km and 6,370 \: km from the earth’s center. That modifies the difference in clock rates to about 0.54 \: nanoseconds per second, or 46 \: microseconds in a day.
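Here is a quick check of this arithmetic in Python – a minimal sketch using the round numbers quoted above (the 22,000 km altitude is this post’s working assumption):

    # Gravitational clock-rate difference between the earth's surface
    # and a GPS-style orbit, using the round numbers from this post.
    G = 6.67e-11         # gravitational constant, MKS units
    M_E = 5.97e24        # mass of the earth, kg
    c = 3.0e8            # speed of light, m/s
    R_surface = 6.37e6   # radius of the earth, m
    R_orbit = 28.37e6    # 22,000 km altitude + earth's radius, m

    def potential(r):
        """Newtonian gravitational potential, negative everywhere."""
        return -G * M_E / r

    # Fractional rate difference between the surface and orbit clocks.
    delta = (potential(R_orbit) - potential(R_surface)) / c**2
    print(delta * 1e9)            # ~0.54 ns gained per second
    print(delta * 86400 * 1e6)    # ~46 microseconds gained per day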

How does GPS work? The US (and other countries too – Russia, the EU, China, India) launched several satellites into distant orbits, 20,000 \: - 25,000 \: km above the earth’s surface. Most of the orbits are designed to allow different satellites to cover the earth’s surface at various points of time. A few of the systems (in particular, India’s) have satellites placed in a geo-stationary orbit, so they rotate with the earth – they are always above a certain point on the earth’s surface. The key is that the satellites possess rather accurate and synchronized atomic clocks and send their time signals, along with the satellite position and ID, to GPS receivers.

Think about how to locate someone on the earth. If I told you I was 10 miles from the Empire State Building in Manhattan, you wouldn’t know exactly where I was. If I then told you that I was also 5 miles from the Chrysler Building (also in Manhattan), you would be better off, but you still wouldn’t know how high up I was. If I receive a third coordinate (the distance from yet another landmark), I’d be set. So we need distances from three well-known locations in order to locate ourselves on the Earth’s surface.

The GPS receiver on your dashboard receives signals from three GPS satellites. Since these signals travel at the speed of light (and this is sometimes a problem if you have atmospheric interference), and the receiver knows when each signal was emitted as well as what the time at your location is, it can compute how far away each satellite is. Since it has distances to three “landmarks”, it can be programmed to compute its own location.

Of course, if its clock were constantly running slower than the satellite clocks, it would constantly overestimate the distance to these satellites, for it would think the signals were emitted earlier than they actually were. This would throw off the location calculation by the distance travelled by light in 0.54 \: nanoseconds, which is 0.16 meters, every second. Over a day, this error would grow to about 14 kilometers. You could well be in a different city!

There’s another effect – that of time dilation. To explain this, there is no better way than the thought experiment below, which I think I first came across in George Gamow’s book. As with {\bf {ALL}} arguments in special and general relativity, the only things observers can agree on are the speed of light and (hence) the order of causally related events. That’s what we use in the below.

There’s an observer standing in a much-abused rail carriage. The rail carriage is travelling to the right at a high speed V. The observer has a rather cool contraption / clock. It is made with a laser that emits photons and a mirror that reflects them. The laser emits photons from the bottom of the carriage towards the ceiling, where the mirror is mounted. The mirror reflects each photon back to the floor of the car, where it is received by a photo-detector (yet another thing that Einstein first explained!).

[Figure: the light clock on the train]

The time taken for this up-and-down journey (the emitter and mirror are separated by a length L) is

\Delta t' = \frac{2 L}{c}

That’s what the observer on the train measures the time interval to be. What does an observer on the track, outside the train, see?

[Figure: the light clock seen from outside the train]

She sees the light traverse the slanted path (in blue) above. However, she also sees the light travelling at the same (numerical) speed, so she decides that the time between emission and reception of the photon is found using Pythagoras’ theorem

L^2 = (c \frac{\Delta t}{2})^2 - (V \frac {\Delta t}{2})^2

\rightarrow  \Delta t = \frac {2 L}{c} \frac{1}{\sqrt{1 - \frac{V^2}{c^2}}}

So, the time interval between the same two events is computed to be larger on the stationary observer’s clock than on the moving observer’s clock. The relationship is

\Delta t = \frac {\Delta t'}{ \sqrt{1 - \frac{V^2}{c^2}} }

How about that old chestnut – isn’t the observer on the track moving relative to the observer on the train? Why can’t you simply reverse this argument?

The answer is – who’s going to have to turn the train around and sheepishly come back after this silly experiment runs its course? The point is that one of these observers has to actively turn around and come back in order to compare clocks. Relativity only says that you cannot make statements about {\bf {absolute}} motion. You certainly can observe relative motion, and in particular, observers have to compare clocks at the same point in space.

From the above, 1 second on the moving clock corresponds to \frac {1}{ \sqrt{1 - \frac{V^2}{c^2}} } seconds on the clock by the tracks. A satellite at a distance D from the center of the earth has an orbital speed of \sqrt {\frac {G M_E}{D} }, which for an orbit 22,000 \: km above the earth’s surface (28,370 \: km from the earth’s center) is roughly

\sqrt { \frac {(6.67 \times 10^{-11}) \: (5.97 \times 10^{24})}{28370 \times 10^3} } \approx  3700 \: \frac{meters}{sec}

which means that 1 second on the moving clock corresponds to 1 \: sec + 0.078 \: nanoseconds on the clock by the tracks. Over a day, this corresponds to a drift of about 6.7 \: microseconds, in the {\bf {opposite}} direction to the above calculation for gravitational slowing.

Net result – the satellite clocks run fast by roughly 40 microseconds in a day (with the actual GPS altitude of about 20,200 km, the commonly quoted figure is 38 microseconds). They need to be continually adjusted to bring them in sync with earth-based clocks.
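The two effects together, in the same Python sketch as before (the orbit altitude is again this post’s assumed 22,000 km):

    import math

    G, M_E, c = 6.67e-11, 5.97e24, 3.0e8
    R, D = 6.37e6, 28.37e6    # earth's surface and satellite orbit, m
    day = 86400.0             # seconds in a day

    # General relativity: the satellite clock runs fast relative to the surface.
    gr = G * M_E * (1 / R - 1 / D) / c**2

    # Special relativity: orbital motion makes the satellite clock run slow.
    v = math.sqrt(G * M_E / D)    # orbital speed, ~3700 m/s
    sr = v**2 / (2 * c**2)        # leading-order time dilation

    print(gr * day * 1e6)         # ~ +46.5 microseconds/day
    print(sr * day * 1e6)         # ~  -6.7 microseconds/day
    print((gr - sr) * day * 1e6)  # net ~ +40 microseconds/day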

So, that’s three ways in which Mr. Einstein matters to you EVERY day!

Master Traders and Bayes’ theorem


Imagine you were walking around in Manhattan and you chanced upon an interesting game going on at the side of the road. By the way, when you see these games going on, a safe strategy is to walk on, since they are usually methods of separating you from a lot of your money in various ways.

The protagonist sitting at the table tells you (and you are able to confirm this from video taken by a nearby security camera run by a disinterested police officer) that he has managed to toss the same quarter (an American coin) thirty times and got “Heads” {\bf ALL} of those times. What would you say about the fairness or unfairness of the coin in question?

Next, your good friend rushes to your side and whispers to you that this guy is actually one of a really \: large number of people (a little more than a billion) that were asked to successively toss freshly minted, scrupulously clean and fair quarters. People that tossed tails were “tossed” out at each round, and only those that tossed heads were allowed to toss again. This guy (and one more like him) were the only ones that remained. What can you say now about the fairness or unfairness of the coin in question?

What if the number of coin tosses was 100 rather than 30, with a larger number of initial subjects?
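The survivorship arithmetic is two lines of Python (a sketch; the only assumption is that every coin is fair):

    # Expected number of survivors after n all-heads rounds,
    # starting from N fair-coin tossers.
    def survivors(N, n):
        return N / 2**n

    print(survivors(2**30, 30))    # ~1: one 30-heads streak is *expected*
    print(survivors(2**30, 100))   # ~8.5e-22: a 100-streak needs ~1e30 tossers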

Just to make sure you think about this correctly, suppose you were the Director of a large State Pension Fund and you need to invest the life savings of your state’s teachers, firemen, policemen, highway maintenance workers and the like. You are told you have to decide whether to allocate some money to a bet made by an investment manager based on his or her track record (he successively tossed “Heads” a hundred times in a row). Should you invest money on the possibility that he or she will toss “Heads” again? If so, how much should you invest? Should you stay away?

This question cuts to the heart of how we operate in real life. If you cut out the analytical skills you learnt in school and revert to how your “lizard” brain thinks, you would assume the coin was unfair (in the first instance) and express total surprise at the knowledge of the second fact. In fact, even though the second situation could well lie behind every situation of the first sort we encounter in the real world, we would still operate as if the coin were unfair, as our “lizard” brain would instruct us to behave.

What we are doing unconsciously is using Bayes’ theorem. Bayes’ theorem is the linchpin of inferential deduction and is often misused even by people who understand what they are doing with it. If you want to read a couple of rather interesting books that use it in various ways, read Gerd Gigerenzer’s “Reckoning with Risk: Learning to Live with Uncertainty” or Hans Christian von Baeyer’s “QBism“. I will discuss a few classic examples. In particular, Gigerenzer’s book discusses several such, as well as ways to overcome popular mistakes made in the interpretation of the results.

Here’s a very overused, but instructive example. Let’s say there is a rare disease (pick your poison) that afflicts 0.25 \% of the population. Unfortunately, you are worried that you might have it. Fortunately for you, there is a test that can be performed that is 99 \% accurate – so if you do have the disease, the test will detect it 99 \% of the time. Unfortunately, the test has a 0.1 \% false-positive rate, which means that if you don’t have the disease, 0.1 \% of such tested people will mistakenly get a positive result. Despite this, the numbers look exceedingly good, so the test is much admired.

You nervously proceed to your doctor’s office and get tested. Alas, the result comes back “Positive”. Now, ask yourself, what are the chances you actually have the disease? After all, you have heard of false positives!

A simple way to turn the percentages above into numbers is to consider a population of 1,000,000 people. Since the disease is rather rare, only (0.25 \% \equiv ) \: 2,500 have the disease. If they are tested, only (1 \% \equiv ) \: 25 of them will get an erroneous “negative” result, so 2,475 get a correct “positive”. However, if the rest of the population were tested in the same way, (0.1 \% \approx) \: 1000 people would get a “positive” result, despite not having the disease. In other words, of the roughly 3,475 people who would get a “positive” result, only 2,475 actually have the disease, which is roughly 71\% – so such an accurate test can only give you a 7-in-10 chance of actually being diseased, despite its incredible accuracy. The reason is that the “false positive” rate is low, but not low enough to overcome the extreme rarity of the disease in question.

Notice, as Gigerenzer does, how simple the argument seems when phrased with numbers rather than with percentages. To do this using standard probability theory: if we are speaking about events A and B, and write the probability that A occurs once we know that B has occurred as P(A|B), then Bayes’ theorem says

P(A|B) P(B) = P(B|A) P(A)

Using this

P(I \: am \: diseased \: GIVEN \: I \: tested \: positive) = \frac {P(I \: test \: positive \: GIVEN \: I \: am \: diseased) \: P(I \: am \: diseased)}{P(I \: test \: positive)}

and then we note

P(I \: am \: diseased) = 0.25\%

P(I \: test \: positive \: GIVEN \: I \: am \: diseased) = 99 \%

P(I \: test \: positive) = 0.25 \% \times 99 \% + 99.75 \% \times 0.1 \%

since I could test positive for two reasons – either I really am among the 0.25 \% diseased people and was additionally among the 99 \% that the test caught, OR I really was among the 99.75 \% disease-free people but was among the 0.1 \% that unfortunately got a false positive.

Indeed, \frac{0.25 \% \times 99 \%}{0.25 \% \times 99 \% + 99.75 \% \times 0.1 \%} \approx  0.71

which was the answer we got before.
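The same computation in a few lines of Python, phrased both ways (a sketch using the post’s numbers):

    prior = 0.0025       # P(diseased): 0.25% of the population
    sensitivity = 0.99   # P(positive GIVEN diseased)
    false_pos = 0.001    # P(positive GIVEN not diseased)

    # Bayes' theorem, with percentages.
    p_positive = prior * sensitivity + (1 - prior) * false_pos
    print(prior * sensitivity / p_positive)    # ~0.713

    # Gigerenzer's "natural frequencies": the same thing with head counts.
    pop = 1_000_000
    true_pos = pop * prior * sensitivity               # 2475 people
    false_pos_count = pop * (1 - prior) * false_pos    # ~998 people
    print(true_pos / (true_pos + false_pos_count))     # same ~0.713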

The rather straightforward formula I used in the above is one formulation of Bayes’ theorem. Bayes’ theorem allows one to incorporate one’s knowledge of partial outcomes to deduce what the underlying probabilities of events were to start with.

There is no good answer to the question that I posed in the first paragraph. It is true that both a fair and an unfair coin could give results consistent with the first event (someone tosses 30 or even 100 heads in a row). However, if one desires that probability have an objective meaning independent of our experience, based upon the results of an infinite number of repetitions of some experiment (the so-called “frequentist” interpretation of probability), then one is stuck. In fact, based upon that principle, if you haven’t heard anything contrary to the facts about the coin, your a priori assumption about the probability of heads must be \frac {1}{2}. On the other hand, that isn’t how you run your daily life. In fact, the most legally defensible (many people would argue the {\bf {only}} defensible) strategy for the Director of the Pension Fund would be to

  • not assume that prior returns were based on pure chance and would be equally likely to be positive or negative
  • bet on the manager with the best track record

At a minimum, I would advise people to stay away from a stable of managers that are simply the survivors of a talent test where the losers were rejected (oh wait, that sounds like a large number of investment managers in business these days!). Of course, a manager that knows they have a good thing going is likely to not allow investors at all, for fear of reducing their returns due to crowding. Such managers also exist in the global market.

The Bayesian approach has a lot in common with our every-day approach to life. It is not surprising that it has been applied to the interpretation of Quantum Mechanics and that will be discussed in a future post.

Fermi Gases and Stellar Collapse – Cosmology Post #6


The most refined Standard Candle there is today is a particular kind of stellar collapse, called a Type Ia supernova. To understand this, you will need to read the previous posts (#1-#5), in particular the Fermi-Dirac statistics argument in Post #5 of the sequence. While this is the most mathematical of the posts, it might be useful to skim the argument to understand the reason for the amazing regularity in these explosions.

Type Ia supernovas happen to white dwarf stars. A white dwarf is a kind of star that has reached the end of its starry career. It has burnt through its hydrogen fuel, producing all sorts of heavier elements, through to carbon and oxygen. It has also ceased being hot enough to burn carbon and oxygen in fusion reactions. Since these two elements burn rather less efficiently than hydrogen or helium in fusion reactions, the star is dense (there is less pressure from the light radiated by fusion reactions inside to counteract the gravitational pressure of matter, so it compresses itself) and the interior is composed of ionized carbon and oxygen (the negatively charged electrons are pulled out of every atom, the remaining ions are positively charged, and the electrons roam freely in the star). Just as in a crystalline lattice (as in a typical metal), the light electrons are good at screening the positively charged ions from other ions; the ions, in turn, screen the electrons from other electrons. The upshot is that the electrons behave like free particles.

At this point, the star is being pulled in by its own mass and is being held up by the pressure exerted by the gas of free electrons in its midst. The “lattice” of positive ions also exerts pressure, but that pressure is much less, as we will see. The temperature of the surface of the white dwarf is known from observations to be quite high, \sim 10,000-100,000 \: Kelvin. More important, the free electrons in a white dwarf of mass comparable to the Sun’s mass (written as M_{\odot}) are ultra-relativistic, with energy much higher than their rest-mass energy. Remember, too, that electrons are a species of “fermion”, which obey Fermi-Dirac statistics.

The Fermi-Dirac formula is written as

P(\vec k) = 2 \frac {1}{e^{\frac{\hbar c k - \hbar c k_F}{k_B T}}+1}

What does this formula mean? The energy of an ultra-relativistic electron, that has energy far in excess of its mass, is

E = \hbar c k

where c k is the “frequency” corresponding to an electron of momentum \hbar k, while \hbar is the “reduced” Planck’s constant (=\frac {h}{2 \pi}, where h is the regular Planck’s constant) and c is the speed of light. The quantity k_F is called the Fermi wave-vector. The function P(\vec k) is the (density of) probability of finding an electron in the momentum state specified by \hbar \vec k. In the study of particles whose wave nature is apparent, it is useful to use the concept of the de Broglie “frequency” (\nu = \frac{E}{h}), the de Broglie “wavelength” (\lambda=\frac {V}{\nu}, where V is the particle velocity) and the “wave-number” k=\frac{2 \pi}{\lambda} corresponding to the particle. It is customary for lazy people to forget to write c and \hbar in formulas; hence, we speak of momentum k for a hyper-relativistic particle travelling at a speed close to the speed of light, when it should really be h \frac {\nu}{c} = h \frac{V}{\lambda c} \approx \frac{h}{\lambda} = \frac {h}{2 \pi} \frac{2 \pi}{\lambda} = {\bf {\hbar k}}.

Why a factor of 2? It wasn’t there in the previous post!

From the previous post, you know that fermions don’t like to be in the same state together. We also know that electrons have a property called spin and they can be spin-up or spin-down. Spin is a property akin to angular momentum, which is a property that we understand classically, for instance, as describing the rotation of a bicycle wheel. You might remember that angular momentum is conserved unless someone applies a torque to the wheel. This is the reason why free-standing gyroscopes can be used for airplane navigation – they “remember” which direction they are pointing in. Similarly, spin is usually conserved, unless you apply a magnetic field to “twist” a spin-up electron into a spin-down configuration. So, you can actually have two kinds of electrons – spin-up and spin-down, in each momentum state \vec k . This is the reason for the factor of 2 in the formula above – there are two “spin” states per \vec k state.

Let’s understand the Fermi wave-vector k_F. Fermions occupy momentum states at most two at a time (one per spin); if they are confined to a cube of side L, you can ask which levels they occupy. They will, like all sensible particles, start occupying levels from the lowest energy level upward, until all the available fermions are exhausted. The fermions are described by waves and, in turn, waves are described by wavelength. You need to classify all the possible ways to fit waves into a cube. Let’s look at a one-dimensional case to start.

[Figure: standing-wave modes for fermions in a one-dimensional box]

The fermions need to bounce off the ends of the one-dimensional box of length L, so we need the waves to vanish at the ends. If you look at the above pictures, the wavelengths of the waves are 2L (for n=1), L (for n=2), \frac{2 L}{3} (for n=3), \frac {L}{2} (for n=4). In that case, the wave-numbers, which are basically \frac {2 \pi}{\lambda}, need to be of the form \frac {n \pi}{L}, where n is a positive integer (1, 2, 3, ...).

[Table: mode number n, wavelength, and wave-number for the one-dimensional box]

For a cube of side L, the corresponding wave-numbers are described by \vec k = (n_x, n_y, n_z) \frac {\pi}{L} since a vector will have three components in three dimensions. These wave-numbers correspond to the momenta of the fermions (this is basically what’s referred to as wave-particle duality), so the momentum is \vec p = \hbar \vec k. The energy of each level is \hbar c k. It is therefore convenient to think of the electrons as filling spots in the space of k_x, k_y, k_z.

What do we have so far? These “free” electrons are going to occupy energy levels starting from the lowest, k = (\pm 1, \pm 1, \pm 1) \frac{\pi}{L}, and so on in a neat symmetric fashion. In k space, which is “momentum space”, since we have many, many electrons, we can think of them as filling up a sphere of radius k_F in momentum space. This radius is called the Fermi wave-vector. It represents the most energetic of the electrons when they are all arranged as economically as possible – with the lowest possible total energy for the gas of electrons. This happens at zero temperature (which is the approximation we are going to work with here), as you can see from the probability distribution formula (ignore the factor of 2 and consider the occupation of levels for a one-dimensional fermion gas). All the electrons are inside the Fermi sphere at low temperature and leak out of it as the temperature is raised.

It is remarkable, and you should realize this, that a gas of fermions in its lowest energy configuration has a huge amount of energy. The Pauli principle requires it. If they were bosons, all of them would be sitting at the lowest possible energy level, which couldn’t be zero (because we live in a quantum world) but would be just above it.

What’s the energy of this gas? It’s an exercise in arithmetic at zero temperature. Is zero temperature good enough? No, but it gets us pretty close to the correct answer and it is instructive.

The total energy of the gas of electrons, in a spherical white dwarf of volume V = \frac {4}{3} \pi R^3 and with 2 spins per state, is

E_{Total} =  2 V  \int \frac {d^3 \vec k}{(2 \pi)^3} \hbar c k = V \frac {\hbar c k_F^4}{4 \pi^2}

The total number of electrons is obtained by just adding up all the available states in momentum space, up to k_F

N = 2 V \int \frac {d^3 \vec k}{(2 \pi)^3} \rightarrow k_F^3 = 3 \pi^2 \frac {N}{V}

We need to estimate the number of electrons in the white dwarf to start this calculation off. That’s what sets the value of k_F, the radius of the “sphere” in momentum space of filled energy states at zero temperature.

The mass of the star is M. That corresponds to \frac {M} { \mu_e m_p} electrons, where m_p is the mass of the proton (protons and neutrons dominate the star’s mass) and \mu_e is the ratio of atomic weight to atomic number – the number of nucleons per electron – for a typical constituent atom of the white dwarf. For a star composed of carbon and oxygen, this is 2. So, N = \frac {M}{\mu_e m_p} = \frac {M}{2 m_p}.
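As a sanity check of the “ultra-relativistic” claim above, here is a Python sketch that computes k_F for a solar-mass white dwarf; the 7,000 km radius is an assumption of mine for illustration, not a number from this post:

    import math

    hbar, c = 1.0546e-34, 3.0e8      # MKS units
    m_p = 1.6726e-27                 # proton mass, kg
    M, R = 2.0e30, 7.0e6             # solar-mass white dwarf, radius ~7000 km

    V = 4 / 3 * math.pi * R**3
    n = M / (2 * m_p) / V            # electron number density, mu_e = 2
    k_F = (3 * math.pi**2 * n)**(1 / 3)   # Fermi wave-vector

    E_F_keV = hbar * c * k_F / 1.602e-16  # Fermi energy in keV
    print(E_F_keV)    # ~460 keV, comparable to m_e c^2 = 511 keV

The Fermi energy comes out comparable to the electron’s rest energy of 511 keV, so the relativistic formula E = \hbar c k is the appropriate one.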

Using all the above

E_{Total} = \frac {4\pi}{3} R^3  \frac {\hbar c}{4 \pi^2} \left(  3 \pi^2 \frac {M}{\mu_e m_p \frac{4\pi}{3} R^3 }\right)^{4/3}

Next, the white dwarf has some gravitational potential energy just because of its existence. This is calculated in high school classes by integration over successive spherical shells from 0 to the radius R, as shown below

[Figure: building up the white dwarf’s gravitational energy from spherical shells]

The gravitational potential energy is

\int_{0}^{R} (-) G \frac {\left( \frac{4 \pi}{3} \rho_m r^3 \right) \: \left( 4 \pi r^2 \rho_m \right)}{r} dr = - \frac{3}{5} \frac {G M^2}{R}

A strange thing happens if the energy of the electrons (which is called, by the way, the “degeneracy energy”) plus the gravitational energy goes negative. At that point, the total energy becomes even more negative as the white dwarf’s radius gets smaller – this can continue {\it ad \: infinitum} – the star collapses. This starts to happen when the gravitational potential energy equals the Fermi gas energy, which leads to

 \frac{3}{5} \frac{G M^2}{R} = \frac {4\pi}{3} R^3  \frac {\hbar c}{4 \pi^2} \left(  3 \pi^2 \frac {M}{\mu_e m_p \frac{4\pi}{3} R^3 }\right)^{4/3}

the R (radius) of the star drops out and we are left with a unique mass M where this happens – the calculation above gives an answer of 1.7 M_{\odot}. A more careful calculation (solving the full stellar-structure equations rather than just balancing total energies) gives 1.44 M_{\odot}.
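You can check the arithmetic directly. Below is a Python sketch that solves the energy-balance condition above for M; the constants are standard, and the answer is the crude 1.7 M_{\odot} estimate, not the refined 1.44 M_{\odot}:

    import math

    hbar = 1.0546e-34    # reduced Planck constant, J s
    c = 3.0e8            # speed of light, m/s
    G = 6.674e-11        # gravitational constant, MKS
    m_p = 1.6726e-27     # proton mass, kg
    M_sun = 1.989e30     # solar mass, kg
    mu_e = 2.0           # nucleons per electron for a C/O white dwarf

    # Setting (3/5) G M^2 / R equal to the Fermi-gas energy, R drops out,
    # leaving M^(2/3) = A * B^(4/3) / ((3/5) G), with A, B read off above.
    A = (4 * math.pi / 3) * hbar * c / (4 * math.pi**2)
    B = 3 * math.pi**2 / (mu_e * m_p * 4 * math.pi / 3)
    M = (A * B**(4 / 3) / (0.6 * G))**1.5

    print(M / M_sun)     # ~1.7 solar masses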

The famous physicist S. Chandrasekhar, after whom the Chandra X-ray space observatory is named, discovered this (Chandrasekhar limit) while ruminating about the effect of hyper-relativistic fermions in a white dwarf.

[Photo: S. Chandrasekhar]

He was on a sea voyage from India to Great Britain at the time, with ample leisure for unrestricted rumination of this sort!

Therefore, as is often the case, if a white dwarf is surrounded by a swirling cloud of gas of various sorts, or has a companion star of some sort that it accretes matter from, it will undergo a cataclysmic explosion precisely when this limit is reached. Once one understands, from some nearer location, the type of light emitted by such a supernova, one has a Standard Candle – it is like having a hand grenade of {\bf exactly} the same quality at various distances. By looking at how bright the explosion appears, you can tell how far away it is.

After this longish post, I will describe the wonderful results from this analysis in the next post – it has changed our views of the Universe and our place in it, in the last several years.

Coincidences and the stealthiness of the Calculus of Probabilities


You know this story (or something similar) from your own life. I was walking from my parked car to the convenience store to purchase a couple of bottles of sparkling water. As I walked there, I noticed a car with the number 1966 – that’s the year I was born! This must be a coincidence – today must be a lucky day!

There are other coincidences, numerical or otherwise. Carl Sagan, in one of his books, mentions a person who thought of his mother the very day she passed away in a different city. He (this person) was convinced this was proof of life after/before/during death.

There are others in the Natural World around us (I will be writing about the “Naturalness” idea in the future) – for eclipse aficionados, there is going to be a total solar eclipse visible over a third of the United States on the 21st of August 2017. It is a coincidence that the moon is exactly the right size to completely cover the sun (see the eclipse photos from NASA below).

[NASA photos: the progress of a total solar eclipse and the white-light corona]

Isn’t it peculiar that the moon is exactly the right size? The moon has other curious properties – for instance, the face of the moon that we see is always the same face. Mercury was long thought to do the same thing with the Sun (it is actually in a 3:2 spin-orbit resonance, rotating three times for every two orbits). This is well understood as a tidal effect on a small object in the gravitational field of a large neighbor. There’s an excellent Wikipedia article about this effect and I will explain it further in the future. But there is no simple explanation for why the moon is the right size for total eclipses. It is not believed to be anything but an astonishing coincidence. After all, we have 6000-odd other visible objects in the sky that aren’t exactly eclipsed by any satellite, so why should this particular pair matter, except that they provide us much-needed heat and light?

The famous physicist Paul Dirac noticed an interesting numerical coincidence, building on numerology that another scientist, Eddington, was obsessed with. It turns out that a number of (somewhat carefully constructed) ratios are of the same order of magnitude – basically, remember the number 10^{40}!

  • The ratio of the electrical and gravitational forces between the electron and the proton (\frac{1}{4 \pi \epsilon_0} e^2 vs. G m_p m_e) is approximately 10^{40}
  • The ratio of the size of the universe to the electron’s Compton wavelength, which is the de Broglie wavelength of a photon of the same energy as the electron – 10^{27}m \: vs \: 10^{-12} m \: \approx 10^{39}

On the basis of this astonishing coincidence, Dirac made the startling suggestion that (since the size of the universe is related to its age) the value of G might fall with time (why not e going up with time, or something else?). While precision experiments to measure the value of G are only beginning now, there would have been cosmological consequences if G had indeed behaved as 1/t in the past! For this reason, people discount this “theory” these days.

I heard of another coincidence recently – the value of the quantity \frac {c^2}{g}, where c is the speed of light and g is the acceleration due to gravity at the surface of the earth (9.81 \frac{m}{s^2}), is very close to 1 light year. This implies (if you put together the formulas for g and for 1 light year) a relationship between the mass of the earth, the earth’s radius, and the length of the year – and hence the mass of the sun and the earth-sun distance!
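A three-line check of this one (a sketch; the 365.25-day year is the only input I have added):

    c, g = 3.0e8, 9.81                  # m/s and m/s^2
    light_year = c * 365.25 * 86400     # metres in one light year
    print(c**2 / g / light_year)        # ~0.97: close to 1, the "coincidence"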

With coincidences of any sort, it is imperative to separate signal from noise. And this is why some simple examples of probability are useful to consider. Let’s understand this.

If you have two specific people in mind, the probability of both having the same birthday is \frac{1}{365} – there are 365 possibilities for the second person’s birthday and only 1 of them matches the first person’s.

If, however, you have N people and you ask for any match of birthdays, there are \frac {N(N-1)}{2} pairs of people to consider and you have a substantially higher probability of a match. In fact, the easy way to calculate this is to ask for the probability of NO matches – that is \frac {364}{365} \times \frac {363}{365} \times ... \frac {365 - (N-1)}{365}: a factor \frac {364}{365} for the second person not matching the first, then \frac {363}{365} for the third person not matching the first two, and so on. Subtracting from 1 gives the probability of at least one match. Among other things, this implies that the chance of at least one match is over 50% (1 in 2) for a group of just 23 un-connected people (no twins etc.). And if you have 60 people, the probability is extremely close to 1, as the code below confirms.
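Here is that computation as a short Python function (a sketch that ignores leap days and assumes all 365 birthdays are equally likely):

    def p_shared_birthday(n):
        """Probability that at least two of n people share a birthday."""
        p_none = 1.0
        for k in range(n):
            p_none *= (365 - k) / 365
        return 1 - p_none

    print(p_shared_birthday(23))   # ~0.507: better than even odds
    print(p_shared_birthday(60))   # ~0.994: a near certainty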

[Figure: probability of at least one shared birthday vs. group size]

The key takeaway from this is that a less probable event can become much more probable when you have the luxury of adding more possibilities for the event to occur. As an example, if you went around declaring in advance that you will worry about coincidences ONLY if you find a number matching the specific birth year of your second cousin – the chances are low that you will observe such a number – unless you happen to hang about near your second cousin’s home! On the other hand, if you are willing to accept most numbers close to your heart – the possibilities stealthily abound and the probability of a match increases! Your birthday, your age, your room number or address in college, your current or previous addresses, the license plates of your cars, your current and previous passport numbers – the possibilities are, literally, endless. And this means that the probability of a “coincidence” is that much higher.

I have a suggestion if you notice an unexplained coincidence in your life. Check whether that same coincidence repeats itself in a little while – a week, say. You have much stronger grounds for an argument with someone like me if it does! And even then you still have to produce a coherent theory of why it was a real coincidence in the first place!


Addendum: Just to clarify, I write above “It is a coincidence that the moon is exactly the right size to completely cover the sun …” – this is from our point of view, of course. These objects would have radically different sizes when viewed from Jupiter, for instance.

Arbitrage arguments in Finance and Physics


Arbitrage refers to a somewhat peculiar and rare situation in the financial world. It is succinctly described as follows. Suppose you start with an initial situation – let’s say you have some money in an ultra-safe bank that earns interest at a certain basic rate r. Assume, also, that there is an infinitely liquid market in the world, where you can choose to invest the money in any way you choose. If two strategies starting from the same initial position can end up with {\bf {definite}} financial outcomes that are quite different, then you have an arbitrage between the two strategies. If so, the way to profit from the situation is to “short” one strategy (the one that makes less) and go “long” the other strategy (the one that makes more). An example would be to buy a cheaper class of shares and sell “short” an equivalent amount of an expensive class of shares of the same Company, when the Company has definitely committed to merge the two classes in a year.

An argument using arbitrage is hard to challenge, except when basic assumptions about the market or initial conditions are violated. Hence, in the above example, if there were uncertainty about whether the two classes of shares would actually be merged in a year, the “arbitrage” wouldn’t really be one.

One of the best known arbitrage arguments was invented by Fischer Black, Myron Scholes and Robert Merton to deduce a price for Call and Put Options. Their argument goes as follows. Suppose you have one interest rate for risk-free investments (the rate r two paragraphs above). Additionally, suppose that you, Dear Reader, own a Call Option, with strike price \$X, on a stock. This is an instrument where, at the end of (say) one year, you look at the market price \$S of the stock and compute \$S - \$X. Let’s say X = \$100, while the stock price was initially \$76. If at the end of the year the stock price became \$110, then the difference \$110 - \$100 = \$10 is positive, so you, Dear Reader and Fortunate-Call-Option-Owner, would make \$10. On the other hand, if the stock price unfortunately sank to \$55, then the difference \$55 - \$ 100 = - \$45 is negative. In this case, you, unfortunate Reader, would make nothing. A Call Option, therefore, is a way to speculate on the ascent of a stock price above the strike price.

Black-Scholes-Merton wanted to find a formula for the price that you should logically expect to pay for the option. The simplest assumption for the uncertainty in the stock price is to state that \log S follows a random walk. A random walk is the walk of a drunkard who walks along a one-dimensional street and takes each successive step forwards or backwards with equal probability. Why \log S and not S? Because a random walker could end up walking backwards for a long time; if her walk were the stock price itself, the price could go below 0, which is impossible. A more natural choice is \log S, which goes to - \infty as S \rightarrow 0. A random walker is characterized by her step size: the larger the step size, the further she would be expected to be found from her starting point after N steps. The step size is called the “volatility” of the stock price.

In addition to an assumption about volatility, B-S-M needed to figure out the “drift” of the stock price. The “drift”, in our example, is akin to the drunkard starting on a slope. In that case, there is an unconscious tendency to drift down-slope. One can model drift by assuming that the probability of a step to the right is not the same as that of a step to the left.
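Here is what that model looks like as a short Python simulation – a sketch, where the \$76 starting price comes from the example above, but the 20% volatility, zero drift and 252 trading days are illustrative assumptions of mine:

    import math, random

    def simulate_price(s0, drift, vol, n_steps, dt=1/252):
        """Walk log S: each step adds a drift term plus a Gaussian
        step of size vol*sqrt(dt) (the drunkard's step size)."""
        log_s = math.log(s0)
        for _ in range(n_steps):
            log_s += drift * dt + vol * math.sqrt(dt) * random.gauss(0, 1)
        return math.exp(log_s)

    # One year (252 trading days) of a 20%-volatility stock, no drift.
    print(simulate_price(76.0, 0.0, 0.20, 252))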

The problem is that, while it is possible to deduce the “volatility” of the stock from uncertainty measures in the market, there is no natural reason to prefer one “drift” over another. Roughly speaking, if you ask people in the market whether IBM will achieve a higher stock price after one year, half will say “Yes”, the other half will say “No”. In addition, the ones that say “Yes” will not agree on exactly how much higher it will be. The same for the “No”-sayers! What to do?

B-S-M came up with a phenomenal argument. It goes as follows. We know, intuitively, that a Call Option (for a stock in one year) should be worth more today if the stock price were higher today (for the same Strike Price) by, say, \$1. Can we find a portfolio that would decline by exactly the same amount if the stock price were up by \$1? Yes, we can: we could simply “short” the right number of shares in the market. A “short” position is like a position in a negative number of shares; such a position loses money if the market goes up. And I could do the same thing every day till the Option expires. I will need to know, every day, from the Option Formula that I have yet to find, a “first derivative” – how much the Option Value would change for a \$1 increase in the stock price. But once I do this, I have a portfolio (Option plus this “short” position) that is {\bf {insensitive}} to stock price changes (for small changes).

Now B-S-M had the ingredients for an arbitrage argument. They said: if such a portfolio definitely made more than the rate offered by a risk-less bank account, there would be an arbitrage. If the portfolio definitely made more, borrow (from this risk-free bank) the money to buy the option, run the hedging strategy, wait to maturity, return the loan and clear a risk-free profit. If it definitely made less, sell the option, invest the money received in the bank, run the hedging strategy with the opposite sign, wait to maturity, pay off the Option by withdrawing your bank funds, then pocket your risk-free difference.

This meant that they could assume that the portfolio composed of the Option and the Hedge, run in this way, was forced to appreciate at the “risk-free” rate. The “risk-free” rate was hence the natural choice of “drift” parameter to use; the price of the Option does not actually depend on the stock’s real drift.
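The formula that comes out of this argument is the celebrated Black-Scholes call price. Here is a minimal Python version (a sketch: the 5% risk-free rate and 20% volatility are illustrative assumptions, and the \$76/\$100 numbers are from the example above):

    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1 + erf(x / sqrt(2)))

    def bs_call(S, X, T, r, sigma):
        """Black-Scholes price of a European call: note that the drift
        has been replaced by the risk-free rate r, as argued above."""
        d1 = (log(S / X) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        # The "first derivative" used in the daily hedge is norm_cdf(d1).
        return S * norm_cdf(d1) - X * exp(-r * T) * norm_cdf(d2)

    # Stock at $76, strike $100, one year to expiry, r = 5%, vol = 20%.
    print(bs_call(76.0, 100.0, 1.0, 0.05, 0.20))    # ~ $1.1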

If you are a hard-headed options trader, though, the arguments just start here. After all, running the above strategy needs markets that are infinitely liquid with infinitesimal “friction” – the ability to sell arbitrary amounts of stock at the same price at which you buy them. All of these assumptions are violated to varying degrees in the real stock market, which is what makes the B-S-M formula of doubtful accuracy. In addition, there are other possible processes (not a simple random walk) that the quantity \log S might follow. All this contributes to a robust Options market.

An arbitrage argument is akin to an argument by contradiction.

Arguments of the above sort abound in Physics. Here’s a cute one, due to Hermann Bondi. He was able to use it to deduce that clocks should run slower in a gravitational field. Here goes (this paraphrases a description by the incomparable T. Padmanabhan in his book on General Relativity).

Bondi considered the following sort of apparatus (I have really constructed my own example, but the concept is his).

[Figure: the Bondi apparatus]

One photon rushes from the bottom of the apparatus to the top. Let’s assume it has a frequency \nu_{bottom} at the bottom of the apparatus and a frequency \nu_{top} at the top. In our current unenlightened state of mind, we expect these to be the same frequency. Once the photon reaches the top, it strikes a target and undergoes pair production (the photon swerves close to a nucleus and spontaneously produces an electron-positron pair – the nucleus recoils, not in horror, but in order to conserve energy and momentum). Let’s assume the photon energy is rather close to the rest energy of the electron-positron pair, so the pair is rather slow-moving afterwards.

Once the electron and positron are produced (each with momentum of magnitude p_{top}), they experience a strong magnetic field (in the picture, it points out of the paper). The law that describes the interaction between a charge and a magnetic field is called the Lorentz Force Law. It causes the (positively charged) positron to curve to the right and the (negatively charged) electron to curve to the left. The two then separately propagate down the apparatus (acquiring a momentum p_{bottom}), where they are forced to recombine into a photon of exactly the right frequency, which continues the cycle. In particular, writing the energy of the photons in each case:

h \nu_{top} = 2 \sqrt{(m_e c^2)^2+p_{top}^2 c^2} \approx 2 m_e c^2

h \nu_{bottom} = 2 \sqrt{(m_e c^2)^2+p_{bottom}^2 c^2} \approx 2 m_e c^2 + 2 m_e g L

In the above, p_{bottom} > p_{top}: the electron and positron move slightly faster at the bottom than at the top.

We know from the usual descriptions of potential energy and kinetic energy (from high school, hopefully) that the electron and positron each pick up energy m_e g L on their path down to the bottom of the apparatus. Now, if the photon doesn’t experience a corresponding loss of energy as it travels from the bottom to the top of the apparatus, we have an arbitrage: we could use this apparatus to generate free energy (read “risk-less profit”) forever. This can’t be – this is nature, not a man-made market! So the change of energy of the photon must be

h \nu_{bottom} - h \nu_{top} =2 m_e g L \approx h \nu_{top} \frac{g L}{c^2}

indeed, the frequency of the photon is higher at the bottom of the apparatus than at the top. As photons “climb” out of the depths of the gravitational field, they get red-shifted – their wavelength lengthens/frequency reduces. This formula implies

\nu_{bottom} \approx \nu_{top} (1 + \frac{g L}{c^2})

writing this in terms of the gravitational potential due to the earth (mass M) at a distance R from its center

\Phi(R) = - \frac {G M}{R}

\nu_{bottom} \approx \nu_{top} (1 + \frac{\Phi(top) - \Phi(bottom)}{c^2})

so, for a weak gravitational field,

\nu_{bottom} (1 + \frac{ \Phi(bottom)}{c^2}) \approx \nu_{top} (1 + \frac{\Phi(top)}{c^2})

On the other hand, time intervals are related to inverse frequencies (we consider the time between successive wave fronts):

\frac {1}{\Delta t_{bottom} } (1 + \frac{ \Phi(bottom)}{c^2}) \approx \frac {1}{\Delta t_{top}} (1 + \frac{\Phi(top)}{c^2})

so comparing the time intervals between successive ticks of a clock at the surface of the earth, versus at a point infinitely far away, where the gravitational potential is zero,

\frac {1}{\Delta t_{R} } (1 + \frac{ \Phi(R)}{c^2}) \approx \frac {1}{\Delta t_{\infty}}

which means

\Delta t_{R} =  \Delta t_{\infty} (1 + \frac{ \Phi(R)}{c^2})  

The conclusion is that the time between successive ticks of the clock is measured to be smaller on the surface of the earth than far away. Note that \Phi(R) is negative, and the gravitational potential is usually assumed to be zero at infinity. This is the phenomenon of time dilation due to gravity. As an example, the GPS system is run off clocks on satellites orbiting the earth at an altitude of roughly 20,700 km. The clocks on the earth run slower than clocks on the satellites. In addition, as a smaller effect, the satellites are travelling at a high speed, so special relativity causes their clocks to run a little slower compared to those on the earth. The two effects act in opposite directions. This is the subject of a future post, but the net effect, which has been precisely checked, is about 38 \: \mu s per day. If we didn’t correct for relativity, our planes would land at incorrect airports and we would experience total chaos in transportation.

The earth is flat – in Cleveland

I stopped following basketball after Michael Jordan stopped playing for the Bulls – believe it or not, the sport appears to have become a place to believe and practice outlandish theories that might be described (in comparison to the Bulls) as bull****.

There’s a basketball star that plays for the Cleveland Cavaliers. His name is Kyrie Irving. He believes that the earth is flat. He wishes to leave the Cleveland Cavaliers – but not go too far away, since he might fall off the side of the earth. However, he has convinced a large number of middle-schoolers (none of whom I have had the pleasure of meeting, but apparently they exist) that the earth is flat and that the “round-earthers” are government-conspiracy-inspired, pointy-headed, Russian spies – read this article if you want background. In fact, there is a club called the Flat Earth Society, with members around the globe, that all believe the earth is flat as a pancake.

It would be really interesting, I thought, if, like my favorite detective – Sherlock Holmes – I wrote the “Intelligent Person’s Guide to Why the Earth is Round”. I would ask you, dear Skeptical Reader, to use no more than tools readily available: some believable friends who possess phones with cameras and the ability to send and receive pictures by mail or text, none of them in the pay of the FSB (or the North Koreans, who decidedly are trying very hard to check the flat-earth theory by sending out ICBMs to increasing distances).

I live in south New Jersey. At my location, the sun rose today at 5:57 am (you could figure this out by typing the question into Google search, or just wake up in time to look for the sun). I have two friends that live in Denver (Colorado) and Cheyenne (Wyoming). Their sunrises occurred at 6:00 am and 5:53 am (their time) – which averages to roughly 5:56:30 am. I realize that Denver is a mile high, which is also roughly Cheyenne’s elevation, but hey, you don’t pick your friends. I live at an elevation of roughly 98′, which isn’t much, so I ignore it. They sent me pictures of their sunrises, so I was able to verify they weren’t lying to me or part of a government conspiracy.

The distance from my town to these places is 1766 miles (to Denver) and 1613 miles (to Cheyenne). I used Google to calculate these, but you could schlep yourself there too. Based on just these facts, I should conclude that the earth curves between New Jersey and those places. To my mind, this should clinch the question of whether the earth is round. Since the roughly 1700-mile separation corresponds to 2 hours of difference in sunrise times, a 24-hour difference corresponds to roughly 20,400 miles. But that is the circumference of the circle of latitude at 40 degrees North (the latitude of New York City, Denver and Cheyenne alike) – the radius r of that circle is the earth’s radius R times the cosine of 40 degrees. Dividing by \cos 40^{\circ} \approx 0.77, the equatorial circumference 2 \pi R comes out to roughly 26,000 miles, within about 7% of the correct figure of 24,900 miles. The extreme height of Denver and Cheyenne has something to do with the overestimate! The sun {\bf should} have risen later in Denver and Cheyenne had they been at lower elevations, so the 1700 miles would {\bf really} have corresponded to a few minutes more than 2 hours, which would have meant a lower estimate for the earth’s equatorial circumference.
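The whole estimate fits in a few lines of Python (a sketch; the two-hour sunrise gap is the rough figure from the paragraph above):

    import math

    sep_miles = (1766 + 1613) / 2    # rough NJ-to-Denver/Cheyenne distance
    delta_hours = 2.0                # difference in sunrise times
    latitude = math.radians(40)      # shared latitude, roughly

    circum_at_40 = sep_miles * 24 / delta_hours       # ~20,300 miles
    equatorial = circum_at_40 / math.cos(latitude)    # ~26,500 miles
    print(equatorial)    # compare to the true ~24,900 miles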

[Figure: sunrise times and the curvature of the earth]

By the way, I picked Cheyenne because of its auditory resemblance to the town that Eratosthenes picked for his circumference-of-the-Earth measurement, Syene in present-day Egypt. And yes, Eratosthenes, the first person to measure the Earth’s size, was born in Cyrene, in present-day Libya!

Some objections to these entirely reasonable calculations include: if the earth is actually rotating, why doesn’t it move under you when you go up in a balloon? Sorry, this has been thought of already! When I was young, I was consumed by Yakov Perelman’s “Astronomy for Entertainment” – a book written by the tragically short-lived Soviet popularizer of science, who died during the siege of Leningrad (St. Petersburg) in 1942. Perelman wrote about a young, enterprising, French advertising executive/scammer at the turn of the 19th century who dreamed up a new scheme to separate people from their money. He advertised balloon flights that would take you to different parts of the world without moving – just go up in a balloon and stay aloft till your favorite country comes up beneath you. It doesn’t happen, because all the stuff around you, the atmosphere included, is moving with you. Why? It’s the same reason why raindrops don’t fly off your side-windows even when you are driving on the road at high speed in the rain – forgetting for a second about the gravitational force that pulls things towards the earth’s center. There is a boundary layer of material that rotates or moves as fast as a moving object – it’s a consequence of the mechanics of fluids and we live with it in various places. For instance, it is one reason why icing occurs on airplane wings – if there were a terrible force of wind at the surface all the time, ice wouldn’t form.

So, if you are willing to listen to reason, there is no reason to restrict yourself to Cleveland. The world is invitingly round.

Addendum : a rather insightful friend of mine just told me that Kyrie Irving was actually born in Australia on the other side of the Flat Earth. If so, I doubt that even my robust arguments would convince him to globalize his views.

Special Relativity; Or how I learned to relax and love the Anti-Particle


The Special Theory of Relativity, which is the name for the set of ideas that Einstein proposed in 1905 in a paper titled “On the Electrodynamics of Moving Bodies”, starts with the premise that the Laws of Physics are the same for all observers that are travelling at uniform speeds relative to each other. The Laws of Physics include a special velocity: Maxwell’s equations for electromagnetism contain a special speed c, which is the same for all observers. This leads to some spectacular consequences. One of them is called the “Relativity of Simultaneity”. Let’s discuss this with the help of the picture below.

[Figure: Babu’s light signals in the speeding C-Kansen carriage]

Babu is sitting in a railway carriage, manufactured by the famous C-Kansen company, that travels at speeds close to that of light. He sits exactly in the middle of the carriage and, for reasons best known to himself (I guess the pantry car was closed and he was bored), decides to shoot laser beams simultaneously at either end of the carriage from his position. There are detectors/mirrors that register the light at the two ends of the carriage. As far as he is concerned, light travels at 3 \times 10^5 \frac {km}{sec}, and he will challenge anyone who says otherwise to a gunfight – note that he is wearing a cowboy hat and probably practices open carry.

Since the detectors at the ends of the carriage are equidistant from him, he is sure to find that the laser beams hit the detectors simultaneously, from his point of view.

Now, consider the situation from the point of view of Alisha, standing outside the train, near the tracks, but safely away from Babu and his openly carried munitions. She sees that the train is speeding away to the left, so clearly since {\bf she} thinks light also travels at 3 \times 10^5 \frac {km}{sec}, she would say that the light hit the {\bf right} detector first before the {\bf left} detector. She doesn’t {\underline {at \: all \: think}} that the light hit the two detectors simultaneously. If you asked her to explain, she’d say that the right detector is speeding towards the light, while the left detector is speeding away from the light, which is why the light strikes them at different times.

Wait – it is worse. If you had a third observer, Emmy, who is skiing to the {\bf {left}} at an even higher speed than the C-Kansen (some of these skiers are crazy), she thinks the C-Kansen train is going off to the right (think about it), not able to keep up with her. As far as {\underline {\bf {she}}} is concerned, the laser beam hit the {\bf {left}} detector before the other beam hit the {\bf {right}} detector.

What are we finding? The Events in question are – “Light hits Left Detector” and “Light hits Right Detector”. Babu claims the two events are simultaneous. Alisha claims the second happened earlier. Emmy is insistent that the first happened earlier. Who is right?

They are ALL correct, in their own reference frames. Events that appear simultaneous in one reference frame can appear to occur one before the other in a different frame, or indeed in the opposite order in yet another frame. This is called the Relativity of Simultaneity. Basically, this means that you cannot say that one of these events {\bf {caused}} the other, since their order can be changed. Events that are separated in this fashion are called “space-like separated”.

Now, on to the topic of this post. In quantum field theory, particles interact with each other by exchanging other particles, called gauge bosons. This interaction is depicted, in a very simplified fashion that lets us calculate things like the effective force between the particles, in a sequence of diagrams called Feynman diagrams. Here’s a diagram that depicts the simplest possible interaction between two electrons.

[Feynman diagram: lowest-order scattering of two electrons]

Time goes from the bottom to the top, the electrons approach each other, exchange a photon, then scoot off in different directions.

This is only the simplest diagram, though, and to get exact numerical results for such scattering, you have to add higher orders of this diagram, as shown below.

[Feynman diagrams: higher-order corrections to the scattering]

When you study such processes, you have to perform mathematical integrals – all you know is that you sent some particles from far away into your experimental set-up, something happened and some particles emerged from inside. Since you don’t know where and when the interaction occurred (where a particle was emitted or absorbed, as at the vertices in the above diagrams), you have to sum over all possible places and times at which the interaction {\bf {could}} have occurred.

Now comes the strange bit. Look at what might happen when you sum over all possible paths for a collision between an electron and a photon.

[Feynman diagram: electron and photon, simultaneous exchange]

In the above diagram, the exchange was simultaneous.

In the next one, the electron emitted a photon, then went on to absorb a photon.

[Feynman diagram: the electron emits a photon, then absorbs one]

and then comes the strange bit –

[Feynman diagram: the electron emits a photon, travels backwards in time, then absorbs one]

Here the electron emitted a photon, then went backwards in time, absorbed a photon, then went its way.

When we sum over all possible event times and locations, this is really what the integrals in quantum field theory instruct us to do!

Really, should we allow ourselves to count processes where  two events occur simultaneously, which means we would then have to allow for them to happen in reverse order, as in the third diagram? What’s going on? This has to be wrong! And what’s an electron going backwards in time anyway? Have we ever seen such a thing?

Could we simply ban such processes? We would then only sum over positions and times where the intermediate particles had enough time to travel from one place to the other.

There’s a problem with this. Notice the individual vertices, where an electron comes in, emits (or absorbs) a photon, then moves on. If this were a “real” process, it wouldn’t be allowed: it violates the principle of energy-momentum conservation. A simple way to understand this is to ask, could a stationary electron suddenly emit a photon and shoot off in the direction opposite to the photon? It looks deceptively possible! The photon would have, say, energy E and momentum p = E/c. This means that the electron would also have momentum E/c, in the opposite direction (for conservation), but then its energy would have to be \sqrt{E^2+m^2 c^4} from the relativistic formula. The total final energy is then higher than the energy m c^2 of the initial electron: in units where c = 1, A + \sqrt{A^2+m^2} is bigger than m for any A > 0! Not allowed!

We are stuck. We have to assume that energy-momentum conservation is violated in the intermediate state – in all possible ways. But then all hell breaks loose: in relativity, the speed of a particle v is related to its momentum p and its energy E by v = \frac {p}{E} (in units where c = 1) – since p and E can be {\underline {anything}}, the intermediate electron could, for instance, travel faster than light. If so, in an appropriate reference frame, it would be absorbed before it was created. If you can travel faster than light, you can travel backwards in time (read this post in Matt Buckley’s blog for a neat explanation).

If the electron were uncharged, we would probably be hard-pressed to notice. But the electron is charged. This means if we had the following sequence of events,

– the world has -1 net charge

– electron emits a photon and travels forward in time

– electron absorbs a photon and goes on.

This sequence doesn’t appear to change the net charge in the universe.

But consider the following sequence of events

– the world has -1 net charge

– the electron emits a photon and travels backwards in time

– the electron absorbs a photon in the past and then starts going forwards in time

Now, at some intermediate time, the universe seems to have developed two extra negative charges.

This can’t happen – we’d notice! Extra charges tend to cause trouble, as you’d realize if you ever received an electric shock.

The only way to solve this is to postulate that an electron moving backward in time has a positive charge. Then the net charge, summed over every time slice, is always -1.

ergo, we have antiparticles. We need to introduce the concept to marry relativity with quantum field theory.

There would be a way out of this morass if we insisted that all interactions occur at the same point in space and that we never have to deal with “virtual” particles that violate energy-momentum conservation at intermediate times. This doesn’t work, because of something that Ken Wilson discovered in the late ’70s, called the renormalization group – the result of our insistence would be that we would disagree with experiment; the predicted effects would be too weak.

For quantum-field-theory students, this is basically saying that the expansion of the electron’s field operator into its components can’t simply be

\Psi(\vec x, t) = \sum\limits_{spin \: s} \int \frac {d \vec k}{(2 \pi)^3} \frac{1}{\sqrt{2 E_k}} b_s(\vec k) e^{- i E_k t + i {\vec k}.{\vec x}}

but has to be

\Psi(\vec x, t) = \sum\limits_{spin \: s} \int \frac {d \vec k}{(2 \pi)^3} \frac{1}{\sqrt{2 E_k}} \left( b_s(\vec k) e^{- i E_k t + i {\vec k}.{\vec x}}  +d^{\dagger}_s(\vec k) e^{+ i E_k t - i {\vec k}.{\vec x}} \right)

including particles being destroyed on a par with anti-particles being created, and vice versa.

The next post in this sequence will discuss another interesting principle that governs particle interactions – C-P-T.

Quantum field theory is an over-arching theory of fundamental interactions. One bedrock of the theory is something called C-P-T invariance.  This means that if you take any physical situation involving any bunch of particles, then do the following

  • make time go backwards
  • parity-reverse space (so in three-dimensions, go into a mirror world, where you and everything else is opposite-handed)
  • change all particles into anti-particles (with the opposite charge)

then you will get a process which could (and should) happen in your own world. As far as we know this is always right, and it has been proven under a variety of assumptions. A violation of the C-P-T theorem in the universe would create quite a stir. I’ll discuss that in a future post.

Addendum: After this article was published, I got a message from someone I respect a huge amount, pointing out an interesting issue here. When we take the non-relativistic limit of the relativistic field theory, where do the anti-particles vanish off to? This is a question I am going to try to write about in a bit!