## Can you travel faster through time?

If you watch science fiction movies, the most dramatic effects are obtained through some form of time travel. Pick some time in the future or the past, and a fabulous machine or spell swoops you away to that time.

I have always had a problem with this simple approach to time travel. One obvious objection would be this. Remember that the earth is rotating around its axis at $1000 \frac{km}{hr}$, revolving around the Sun at $100,000 \frac{km}{hr}$, spinning around the center of the Milky Way at $792,000 \frac{km}{hr}$ and being dragged at 2.1 million $\frac{km}{hr}$ towards the Great Attractor in Leo/Virgo together with the other denizens of the Milky Way! If someone took you away for a few seconds and plonked you back at the same ${\bf {SPOT}}$ in the universe a couple of minutes later, but several thousand years in the past or future, you sure would need a space suit. Earth would be trillions of kilometers away; you'd be either in totally empty space or, worse, somewhere inside a star. I can't imagine the Connecticut Yankee in King Arthur's court carried spare oxygen or a shovel to dig himself out of an asteroid!
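To put rough numbers on this, here is a back-of-envelope sketch, taking the 2.1 million km/hr drift towards the Great Attractor quoted above as the dominant motion:

```python
# Back-of-envelope: how far does Earth drift while you are "away" for a
# thousand years? Assumes the ~2.1 million km/hr motion towards the
# Great Attractor dominates all the other motions listed above.
speed_km_per_hr = 2.1e6
hours_per_year = 24 * 365.25
years = 1000

distance_km = speed_km_per_hr * hours_per_year * years
light_year_km = 9.46e12

print(f"Earth moves about {distance_km:.2e} km in {years} years")
print(f"That is roughly {distance_km / light_year_km:.1f} light years")
```

So a traveller who reappears at the same spot a millennium later is stranded a couple of light years from home – a space suit alone will not nearly be enough.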

OK, so if you built a time machine without an attached space capsule to bring you back to the earth, woe is you. In addition, here is a simple way to sort out the paradoxes of time travel – just put down a rule that you can travel back in time, but you will land so far away that you can’t affect your history! This should surely be possible! Only time will tell.

Anyway, I didn’t really mean to share with you my proposal for how to make time travel possible. I was pondering a peculiarity of Einstein’s theory of relativity that isn’t often made clear in basic courses.

In Einstein's world view, as modified by his teacher Hermann Minkowski, we live in a four-dimensional world, with three space axes and one time axis. In this peculiar space, ${\bf {Events}}$ are labeled by $(t, \vec x)$, describing the precise time they occurred and their position (three coordinates give you a vector). In addition, if there are two such events $(t_1, \vec x_1)$ and $(t_2, \vec x_2)$, then the "distance" $c^2(t_1-t_2)^2 - (\vec x_1- \vec x_2)^2$ is the same in all reference frames. Why the peculiar ${\it minus}$ sign between the time and space pieces? That's what preserves the speed of light across all reference frames, which, as Minkowski realized, was the key to Einstein's reformulation of the geometry of space-time. It is interesting that Einstein did not think of this initially and decided that Minkowski and other mathematicians were getting their dirty hands on his physical insights and turning them into complicated beasts – he came around very quickly of course, he was Einstein!
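Here is a minimal numeric check of that invariance, in units where $c = 1$ (the two events and the boost speed $v = 0.6$ below are arbitrary choices):

```python
import math

# Numeric check (units where c = 1): the interval t^2 - x^2 between two
# events is unchanged by a Lorentz boost along x.
def boost(t, x, v):
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

t1, x1 = 2.0, 1.0     # event 1 (arbitrary)
t2, x2 = 5.0, -3.0    # event 2 (arbitrary)
v = 0.6               # boost speed (arbitrary, < 1)

interval = (t1 - t2) ** 2 - (x1 - x2) ** 2
t1b, x1b = boost(t1, x1, v)
t2b, x2b = boost(t2, x2, v)
interval_boosted = (t1b - t2b) ** 2 - (x1b - x2b) ** 2

print(interval, interval_boosted)  # the two agree to machine precision
```

The interval comes out the same in both frames, even though the individual times and positions change completely.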

To generalize this idea further, physicists invented the idea of a 4-vector. A regular 3-vector (example $\vec x$) describes, for instance, the position of a particle in space. A 4-vector for this particle would be $(t,\vec x)$ and it describes not just where it is, but when it is. Writing equations of relativity in vector form is useful. We can write them down without reference to one particular observer, without specifying how that observer is traveling.

In notation, the 4-vector for position of a particle is $x^{\mu} \equiv (x^0, \vec x) = (t, \vec x)$.

Before we go any further, it is useful to consider the concept of the "length" of a vector. An ordinary 3-vector has the length $|\vec x| = \sqrt{x^2+y^2+z^2}$, but in the 4-vector situation, the appropriate definition of length has minus signs: $|x^{\mu}| = \sqrt{c^2 t^2 - x^2 - y^2 - z^2}$. The relative minus sign again comes from the essential requirement that the speed of light be the same in all reference frames. Note that the 4-vector for an event has four independent components – the time and the three coordinates of the position in space.

Next, one usually needs a notion of velocity. With 3-vectors we would just write down $\frac{d \vec x}{dt}$. However, when we differentiate with respect to $t$, we are using coordinates particular to one observer. A better quantity to use is one that all observers would agree on – the time measured by a clock traveling with the particle. This time is called the "proper time" and is denoted by $\tau$. So the relativistic 4-velocity is defined as $V^{\mu} = \frac{d x^{\mu}}{d \tau}$. In terms of the coordinates used by an observer standing in a laboratory, watching the particle, this is

$V^{\mu} \equiv (V^0,\vec V) = \left(\frac{1}{\sqrt{1-v^2/c^2}}, \frac{\vec v}{\sqrt{1-v^2/c^2}}\right)$.

If you haven’t seen this formula before, you have to take it from me – it is a fairly elementary exercise to derive.

The peculiar thing about this formula is that if you look at the four components of the velocity 4-vector, there are only three independent components. Given the last three, you can compute the first one exactly.

In the usual way that this is described, people say that the magnitude of this vector is $(V^0)^2-(\vec V)^2=1$ (in units where $c = 1$), as you can quickly check.
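You can verify this numerically – a short sketch in units where $c = 1$, for an arbitrarily chosen 3-velocity:

```python
import math

# In units where c = 1: for a particle with 3-velocity (vx, vy, vz), the
# 4-velocity is (gamma, gamma * v), and its Minkowski norm is always 1.
def four_velocity(vx, vy, vz):
    v2 = vx * vx + vy * vy + vz * vz       # must be < 1 (sub-light)
    gamma = 1.0 / math.sqrt(1.0 - v2)
    return gamma, gamma * vx, gamma * vy, gamma * vz

V0, Vx, Vy, Vz = four_velocity(0.3, -0.4, 0.5)   # arbitrary 3-velocity
norm_sq = V0 ** 2 - (Vx ** 2 + Vy ** 2 + Vz ** 2)
print(norm_sq)  # 1.0 up to rounding, whatever the (sub-light) speed
```

Change the 3-velocity to anything below the speed of light and the norm stays pinned at 1 – which is exactly why only three of the four components are independent.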

But it is peculiar. After all, $x^{\mu}$, the position, does have four independent components. Why does the velocity vector $V^{\mu}$ have only three independent components? Those three are the velocities along the three spatial directions. What about the velocity in the "time" direction?

Aha! That is $\frac{dt}{dt}=1$. By definition, or rather, by construction, you travel along time at 1 day per day, or 1 year per year or whatever unit you prefer. The way the theory of relativity is constructed, it is ${\bf {incompatible}}$ with any other rate of travel. ${\bf {You \: \: cannot \: \: travel \: \: faster \: \: or \: \: slower, \: \: or \: \: even \: \: backwards, \: \: in \: \: time \: \: without \: \: violating \: \: the \: \: classical \: \: theory \: \: of \: \: relativity}}$.

The only relativistically correct way one can traverse the time axis slower is to rotate the time axis – that's what happens when the observer sitting in a laboratory hops onto a speeding vehicle or spaceship, i.e., performs a Lorentz transformation upon him/herself. That's what produces the effects of time dilation.

Your only consolation is that virtual quantum particles can violate relativity for short periods of time, inversely proportional to their energy. However, they are just that – virtual particles. You cannot use them for communication.

Oh, well!

## Coffee, anyon?

Based on the stats I receive from WordPress.com, most readers of this blog live in the US, India and the UK. In addition, there are several readers in Canada, Saudi Arabia, China, Romania, Turkey, Nigeria and France. Suppose you live in one of the first three of these countries, and you go to your favorite coffee shop (Starbucks, Cafe Coffee Day or your friendly neighbourhood Costa) and get a cup of coffee. Your coffee is brewed with hot water – usually filtered and quite pure; let's even assume it is distilled, so it contains only $H_2 O$.

Now, here is an interesting question. Can you tell, from tasting the water, where it came from, which animals in the long history of the planet passed it through their digestive systems, or which supernova or neutron star collision chucked out the heavier elements that became these particular water molecules?

No, obviously, you can’t. Think of the chaos it would create – your coffee would taste different every day, for instance!

This is an interesting observation! It indicates that materials behave in a way that loses the memory of their past state. But isn't this in apparent contradiction with the laws of classical mechanics, which determine ${\bf all}$ final conditions from initial conditions? In fact, until the understanding of chaos and "mixing" came along, physicists and mathematicians spent a lot of time trying to reconcile this apparent contradiction.
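The logistic map is the classic toy model of this kind of memory loss; here is a minimal sketch (the starting point $0.2$ and the perturbation of one part in a billion are arbitrary choices):

```python
# Chaos in miniature: iterate the logistic map x -> 4x(1-x) from two starting
# points that differ by one part in a billion. After a few dozen steps the
# orbits have completely decorrelated -- the "memory" of the start is gone.
def logistic_orbit(x0, steps, r=4.0):
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic_orbit(0.2, 50)
b = logistic_orbit(0.2 + 1e-9, 50)
print(abs(a - b))  # no longer tiny: the initial difference has blown up
```

Determinism survives (rerun it and you get the same numbers), but in practice the initial condition is unrecoverable – which is the "mixing" resolution of the apparent contradiction.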

When quantum mechanics came along, it allowed physicists to make sense of this in a rather simple way. Quantum states are superpositions of several microstates. When a measurement is made on a quantum system, it collapses into one of those microstates. Once it does, it has no memory of its previous superposition of states. However, there is yet another reason why history may not matter. This reason is only relevant in two spatial dimensions.

Let’s study some topology.

Suppose you have two particles, say two electrons. A quantum-mechanical system is described by a wave-function $\psi$, which is a function of the coordinates of the two particles, i.e., $\psi(\vec x_1, \vec x_2)$. If the particles are ${\bf {identical}}$, then one can't tell which particle is the "first" and which the "second" – they are like identical twins and physically we cannot tell them apart. So the first argument of the wave function $\psi$ might refer to either one of the particles. How should the wave-function with one assumed order of particles be related to the wave-function with the "exchanged" order of particles? Are they the same wave function?

In three or more dimensions, the argument that's made is as follows. If you exchange the two particles, then exchange them back again, you recover the original situation. So you should be back to your original wave function, correct? Not necessarily – the wave function is a complex number. It is not exactly the quantity with physical relevance, only the square of the magnitude (which yields the probability of the configuration) is. So, the wave function could get multiplied by a complex phase – a number like $e^{i \theta}$ which has magnitude 1 and still be a perfectly good description of the same situation.

So if we had the particles A & B like this and switched them

the wave function could get multiplied by a phase $e^{i \theta}$, with the new wave function describing an identical situation. But if we did the same operation and switched them again, we have to come back to exactly the same wave function as before – the wave function gets multiplied by the square of the above complex number, $(e^{i \theta})^2$, and this square has to equal $1$. Why in this case? Well, remember, we are back to the identical original situation, and it would be rather strange if the same situation were described by a different wave function just because someone might or might not have carried out a double "switcheroo".

If the square of some number is $1$, that number is either $1$ or $-1$. Those two cases correspond to bosons and fermions respectively – yes, the same guys we met in a previous post. Now, exactly why the exchange property determines the statistical properties is a subtle result called the spin-statistics theorem; it will be discussed in a future post.
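A toy illustration in one spatial dimension (the Gaussian wave packets below are an arbitrary choice, just to have a concrete function to exchange):

```python
import math

# Toy two-particle wave functions in one dimension: a symmetric (bosonic) and
# an antisymmetric (fermionic) combination of two Gaussian packets.
def packet(x, center):
    return math.exp(-(x - center) ** 2)

def psi_boson(x1, x2):
    return packet(x1, 1.0) * packet(x2, -1.0) + packet(x2, 1.0) * packet(x1, -1.0)

def psi_fermion(x1, x2):
    return packet(x1, 1.0) * packet(x2, -1.0) - packet(x2, 1.0) * packet(x1, -1.0)

x1, x2 = 0.3, -0.7  # arbitrary sample positions
print(psi_boson(x2, x1) / psi_boson(x1, x2))      # exchange phase +1
print(psi_fermion(x2, x1) / psi_fermion(x1, x2))  # exchange phase -1
```

Swapping the two arguments multiplies the bosonic wave function by $+1$ and the fermionic one by $-1$, and either way a second swap brings you back to exactly where you started.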

However, in two dimensions, something very peculiar can happen.

If you have switched two particles in three dimensions, look at the picture below;

then let's mould the switching paths for the total double switch as below:

we can move the "switcheroo" paths around and shrink each one to a point. Do the experiment with strings to convince yourself.

So, exchanging the two particles twice is equivalent to doing nothing at all.

However, if we were denizens of "Flatland" and lived on a two-dimensional flat space, we ${\bf {cannot}}$ shrink those spiral-shaped paths to a point – there isn't an extra dimension through which to unwind the two loops. We cannot pull one of the strings out into the third dimension and unravel it from the other! So there is no reason for two switches to bring you back to the original wave function for the two identical particles. ${\bf {The \: \: particles \: \: remember \: \: where \: \: they \: \: came \: \: from}}$. This is why the quantum physics of particles in two dimensions is complicated – you can't make the usual approximations that neglect the past and start a system off from scratch, the past be damned.

Particles with these more general exchange phases in two dimensions are called anyons – the phase can be ${\it any}$thing, hence the name. Coffee with anyons in two dimensions would taste mighty funny, and you would smell every damned place in two dimensions it had been before it entered your two-dimensional mouth!
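The exchange-phase bookkeeping can be sketched in a few lines ($\theta = \pi/4$ below is an arbitrary illustrative choice of anyon phase):

```python
import cmath
import math

# Each exchange of two identical particles multiplies the wave function by
# e^{i*theta}. In 3-D, theta must be 0 (bosons) or pi (fermions), so a double
# exchange returns +1. In 2-D, theta can be anything ("any"-ons), and a double
# exchange e^{2i*theta} is generally NOT 1 -- the history is remembered.
def double_exchange_phase(theta):
    return cmath.exp(1j * theta) ** 2

print(double_exchange_phase(0.0))           # bosons: 1
print(double_exchange_phase(math.pi))       # fermions: back to 1 as well
print(double_exchange_phase(math.pi / 4))   # a sample anyon: i, not 1
```

Only $\theta = 0$ and $\theta = \pi$ erase the record of the double switch; any other phase leaves a permanent imprint on the wave function.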

## The unreasonable importance of 1.74 seconds

1.74 seconds.

If you know what I am talking about, you can discontinue reading this – it's old news. If you don't, it's interesting what physicists can learn from 1.74 seconds. It's all buried in the story about GW170817.

A few days ago, the people who constructed the LIGO detectors observed gravitational waves from what appears to be the collision and collapse of a pair of neutron stars (of masses believed to be $1.16 M_{\odot}$ and $1.6 M_{\odot}$). The gravitational wave observation is described in this Youtube posting, as is a reconstruction of how it might sound in audio (of course, it wasn't something you could ${\bf {hear}}$!).

As soon as the gravitational wave was detected, a search was done through data recorded at several telescopes and satellites for coincident optical or gamma ray (high-frequency light wave) emissions – the Fermi (space-based) telescope did record a gamma ray burst 1.74 seconds later. Careful analysis of the data followed.

Sky & Telescope has a nice article discussing this. To summarize it and some of the papers briefly, it turns out that a key part of unravelling the exact details of the stars involved is figuring out the distance. To find the distance, we need to know where to look (was the Fermi telescope actually observing the same event?). There are three detectors currently at work under the LIGO collaboration, and two of them detected the event. That is the minimum needed anyway: given the extremely high noise, one needs near-simultaneous detection at two widely separated detectors to confirm that we have seen something. All the detectors have blind spots due to the angles at which they are placed, so the fact that two saw something and the third ${\bf {didn't}}$ indicates something was afoot in the blind spot of the third detector. That didn't localize the event enough, though. Enter Fermi's observation – which only localized the event to tens of degrees (about twenty times the size of the moon or sun). But the combination was enough to mark out a small field of view as the "region of interest". Optical telescopes then looked at the region and discovered the "smoking gun" – a star that brightened and then dimmed. The star appears to be in a suburb of the NGC 4993 galaxy, some 130 million light years away – for comparison, our nearest major galactic neighbour, the Andromeda galaxy, is roughly 2 million light years away. Pinning down the distance makes the inferred quantities – the spectra, the masses of the neutron stars involved, and so on – much more precise, so one can do a lot more detailed analysis. The red-shift to the galaxy is 0.008, which looks small, but this connection helps us understand the propagation of the light and gravitational waves from the event to us on Earth.

Now, on to the simple topic of this post. If you have an event from which you received gravitational waves and photons, and the photons reached us 1.74 seconds after the gravitational waves, we can estimate a limit on the difference in mass between the photon and the graviton (the hypothesized particle that is the force carrier of gravity). If we, for simplicity, assume the mass of the graviton is zero, then we can make the following argument:

From special relativity, if a particle has speed $v$, momentum $p$, energy $E$ and the speed of light is $c$,

$\frac{v}{c} = \frac{pc}{E} = \frac{\sqrt{E^2 - m^2c^4}}{E} \approx 1 - \frac{m^2c^4}{2E^2}$

which implies

$\frac{mc^2}{E} \approx \sqrt{2 \left(1 - \frac{v}{c}\right)}$

Since the gravitational waves arrived 1.74 seconds before the photons, after both had travelled for 130 million years, that translates into a speed differential $\frac{\delta v}{c}$ of about $4 \times 10^{-19}$, i.e.,

$\frac{mc^2}{E} \approx \sqrt{2 \times 4 \times 10^{-19}} \approx 9 \times 10^{-10}$

Next, we need to compute the energy of the typical photons in this event. They follow an approximate black-body spectrum, and the peak wavelength of a black-body spectrum follows the famous Wien's law. To cut to the chase, the peak energy emission was in the $10 \: keV$ range (kilo-electronvolt), which means our bound on the photon mass is roughly $10^{-5}$ eV (this is measuring mass in energy terms, as Einstein instructed us to). While the current limits on the photon's mass are much tighter, this is an interesting way to get a bound on the mass. In general, the models for this sort of emission indicate that gamma rays are emitted roughly a second after the collision, so the limits will get tighter as the models sharpen their pencils through observation.

However, the models are already improving bounds on various theories that rely on modifying Einstein’s (and Newton’s) theories of gravity. Keep a sharp eye out!

## New kinds of Cash & the connection to the Conservation of Energy And Momentum

It's been difficult to find time to write articles on this blog – what with running a section teaching undergraduates (after 27 years of ${\underline {not \: \: doing \: \: so}}$), as well as learning about topological quantum field theory – a topic I always fancied but knew little about.

However, a trip with my daughter brought up something that sparked an interesting answer to questions I got in my undergraduate section. I had taken my daughter to the grocery store – she ran out of the car to go shopping and left her wallet behind. I quickly honked and waved, displaying the wallet. She waved back, displaying her phone. And insight struck me – she had the usual gamut of applications on her phone that serve as ways to pay at retailers – who needs a credit card when you have Apple Pay or Google Pay? I clearly hadn't adopted the Millennial ways of life enough to understand that money comes in yet another form, adapted to your cell phone, and isn't only the kind of thing you can see, smell or Visa!

And that's the connection to the Law of Conservation of Energy, in the following way. There was a set of phenomena that Wolfgang Pauli considered in the 1930s – beta decay. The nucleus was known and so were negatively charged electrons (these were called $\beta$-particles). People had a good idea of the composition and mass of the nucleus (as being composed of protons and neutrons), the structure of the atom (with electrons in orbit around the nucleus), and also understood Einstein's revolutionary conception of the unity of mass and energy. Experimenters were studying the phenomenon of nuclear radioactive decay. Here, a nucleus abruptly emits an electron and turns into a nucleus with one more proton and one fewer neutron – roughly the same atomic weight, but with an extra positive charge. This appears to happen spontaneously, but in concert with the "creation" of a proton, an electron is also produced (and emitted from the atom), so the change in the total electric charge is $+1 -1 = 0$ – charge is "conserved". What seemed to be happening inside the nucleus was that one of the neutrons was decaying into a proton and an electron. Now, scientists had constructed rather precise devices to "stop" electrons, thereby measuring their momentum and energy. It was immediately clear that the total energy we started with – the mass-energy of the neutron (which starts out barely moving in the experiment) – was more than the combined energy of the resulting proton (which also isn't moving much at the end) and the emitted electron.
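You can do the energy bookkeeping for the underlying neutron decay yourself – a sketch using standard rest-mass energies in MeV (values rounded from the particle data tables):

```python
# Energy bookkeeping for beta decay, n -> p + e (+ something missing?), using
# rest-mass energies in MeV (rounded values from the particle data tables).
m_neutron_MeV = 939.565
m_proton_MeV = 938.272
m_electron_MeV = 0.511

q_value_MeV = m_neutron_MeV - (m_proton_MeV + m_electron_MeV)
print(f"Energy released: about {q_value_MeV:.3f} MeV")
# The electrons are observed with a continuous *spread* of energies below this
# maximum -- the missing share is what Pauli's neutrino carries away.
```

The puzzle was precisely that the emitted electrons did not all come out with this fixed energy, as a two-body decay would demand.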

People were quite confused about all this. What was happening? Where was the energy going? It wasn’t being lost to heating up the samples (that was possible to check). Maybe the underlying process going on wasn’t that simple? Some people, including some famous physicists, were convinced that the Law of Conservation of Energy and Momentum had to go.

As it turned out, much like I was confused in the car because I had neglected that money could be created and destroyed in an iPhone, people had neglected that energy could be carried away or brought in by invisible particles called neutrinos. It was just a proposal, till they were actually discovered in 1956 through careful experiments.

In fact, as has been rather clear since Emmy Noether discovered the connection between symmetries and conservation principles years ago, getting rid of the Law of Conservation of Energy and Momentum is not that easy. It is connected to the belief that physics (and the results of physics experiments) is the same whether done here, on Pluto, or in empty space outside one of the galaxies in the Hubble deep field view! As long as you systematically account for all "known" differences at these locations – the gravity and magnetic field of the earth, your noisy cousin next door, the tectonic activity on Pluto, or small black holes in the Universe's distant past – the fundamental nature of the universe is $translationally \: \: invariant$. So if you discover that you have found some violation of the Law of Conservation of Energy and Momentum, i.e., a perpetual motion machine, remember that you are announcing that there is some deep inequivalence between different points and times in the Universe.
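Here is a toy numerical check of that connection (the masses, spring constant, initial conditions, and step size below are all arbitrary choices): if the force between two particles depends only on their separation – the translation-invariant case – the total momentum never changes.

```python
# Toy check of Noether's insight: a force that depends only on the separation
# x1 - x2 (translation invariance) automatically conserves total momentum.
def simulate(steps=10000, dt=1e-3, k=2.0, m1=1.0, m2=3.0):
    x1, x2 = 0.0, 1.5      # arbitrary starting positions
    p1, p2 = 0.4, -0.1     # arbitrary starting momenta (total = 0.3)
    for _ in range(steps):
        f = -k * (x1 - x2)   # depends only on the separation
        p1 += f * dt
        p2 += -f * dt        # equal and opposite, forced by the symmetry
        x1 += (p1 / m1) * dt
        x2 += (p2 / m2) * dt
    return p1 + p2

print(simulate())  # total momentum stays at its initial 0.3, up to rounding
```

Shift both positions by the same amount and nothing in the dynamics changes – that shift symmetry is exactly what pins the total momentum in place.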

The usual story is that if you notice some "violation" of this Law, you immediately start looking for particles or sources that ate up the missing energy and momentum, rather than announce that you are creating or destroying energy. This principle gets carried into the introduction of new forms of "potential energy" in physics too, as we discover the many different ways in which the Universe can bamboozle us and reserve energy for later use. Just as you have to add up the many ways you can store money for later use!

That leads to a conundrum. If the Universe has a finite size and has a finite lifetime, what does it mean to say that all times and points are equivalent? We can deal with the spatial finiteness – after all, the Earth is finite, but all points on it are geographically equivalent, once you account for the rotation axis (which is currently where Antarctica and the Arctic are, but really could be anywhere). But how do you account for the fact that time seems to start from zero? More on this in a future post.

So, before you send me mail telling me you have built a perpetual motion machine, you really have to be Divine and if so, I am expecting some miracles too.