# The universe in a grain of sand

This article attempts to explain a paper I wrote that was published in Europhysics Letters.

A piece of poetry by the English engraver William Blake, the Krishnā stories, and the colossal range of sizes in the universe, from the humongous to the very small, all make us wonder whether the very large is somehow connected to the very small.

A similar theme was explored in physics by a trio of scientists about twenty years ago. They looked at a puzzling problem that has nagged at a rather successful project to use quantum mechanics to explain the physics of fundamental particles. Called “Quantum field theory”, this marriage of quantum mechanics and special relativity (and later, some aspects of general relativity) is probably the most successful theory to emerge in physics in a long while. It has been studied extensively and expanded, and in areas where very precise calculations are possible, some of its predictions are accurate to fourteen decimal places. It completely excludes the effects of gravity and predicts the precise behavior of small numbers of fundamental particles – how they scatter off each other, and so on.

The basic building block of the theory – the quantum field – is supposed to represent each kind of particle. There is an electron quantum field, one for the up quark, and so on. If you want to study a state with five electrons, that is just an excitation of the basic electron quantum field, just as a rapidly oscillating violin string has more energy than a slowly oscillating one. More energy in a quantum theory just corresponds to more “quanta”, or particles, of the field.

So far so good. Unfortunately, one inescapable conclusion of the theory is that even when the quantum field is at its lowest possible energy, there is something called “zero-point” motion. Quantum objects cannot just stay at rest, they are jittery and have some energy even in their most quiescent state. As it turns out, bosons have positive energy in this quiescent state. Fermions (like the electron) have negative energy in this quiescent state. This energy in each quantum field can be calculated.

It is, for every boson quantum field $+\infty$.

For every fermion quantum field, it is $-\infty$.

This is a conundrum. The energy in empty space can be estimated from cosmological measurements. It is roughly equivalent to a few protons’ worth of mass-energy per cubic meter. It is certainly not $\infty$.

This conundrum (and its relatives) has plagued particle physics for more than fifty years now. Variously referred to as the “cosmological constant” problem or, in a related guise, the “hierarchy problem”, it has prompted many attempted solutions. Solutions are needed, because if the energy were really $+\infty$ for the boson fields (and the universe probably started out radiation-dominated, full of photons), the universe would collapse on itself: this infinite energy spread through space would gravitate and pull the universe in.

Solutions, solutions, solutions. Some people have proposed that every particle has a “super”-partner – a boson would have a fermion “super”-partner with the same mass. Since the infinities would be identical but opposite in sign, they would cancel, leaving no overall energy density in empty space (the cosmological constant would be exactly zero). Unfortunately, we have found no signs of such a “super”-symmetry, though we have looked hard and long.

Others have proposed that one should simply command this energy not to gravitate, as a law of nature. That seems arbitrary, would have to be adduced as a separate natural law, and leaves the question of why it holds tough to answer.

Can we measure the effect of this “energy of empty space”, also called “vacuum energy”, in some way other than through cosmology? Yes – there is an experiment called the “Casimir effect” which essentially measures the change in this energy as two metallic plates, brought extremely close together, are pulled apart. This rather precise experiment confirms that such an energy does exist and can be altered in this fashion.
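For ideal parallel plates, the Casimir attraction has a standard textbook formula, $P = \pi^2 \hbar c / (240\, d^4)$. A quick numerical sketch (the formula and the constants are standard physics, not taken from the article):

```python
import math

# CODATA-style values in SI units
hbar = 1.0546e-34  # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure between ideal parallel plates a distance d apart:
    P = pi^2 * hbar * c / (240 * d^4)."""
    return math.pi**2 * hbar * c / (240 * d**4)

# At a one-micron separation the pressure is about a millipascal --
# tiny, but measurable in modern experiments.
print(f"{casimir_pressure(1e-6):.2e} Pa")
```

The steep $1/d^4$ dependence is why the plates must start extremely close together for the effect to be detectable at all.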

One way to make the conundrum at least finite is to note that our theories certainly do not work in a regime where gravity would be important. From the natural constants $G_N, \hbar, c$ (Newton’s gravitational constant, Planck’s constant and the speed of light), one can create a set of natural units – the Planck units. These are the Planck length $l_P$, Planck mass $m_P$ and Planck time $t_P$, where

$l_P = \sqrt{\frac{G_N \hbar}{c^3}} \sim 10^{-35} \text{ meters},$

$m_P = \sqrt{\frac{\hbar c}{G_N}} \sim 10\ \mu\text{grams}, \qquad t_P = \sqrt{\frac{G_N \hbar}{c^5}} \sim 10^{-44} \text{ secs}$
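These combinations can be checked directly by plugging in the measured constants; a small sketch using standard CODATA values:

```python
import math

G_N = 6.674e-11    # Newton's gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34  # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s

l_P = math.sqrt(G_N * hbar / c**3)  # Planck length, m
m_P = math.sqrt(hbar * c / G_N)     # Planck mass, kg
t_P = math.sqrt(G_N * hbar / c**5)  # Planck time, s

print(f"l_P ~ {l_P:.2e} m")               # ~1.6e-35 m
print(f"m_P ~ {m_P * 1e9:.0f} micrograms")  # ~22 micrograms, i.e. tens of micrograms
print(f"t_P ~ {t_P:.2e} s")               # ~5.4e-44 s
```

Note the odd man out: the Planck length and time are absurdly small, but the Planck mass is a macroscopic quantity, about the mass of a grain of sand.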

So one can guess that gravity (represented by $G_N$) becomes relevant at Planck “scales”. One might reasonably expect pure quantum field theory to apply only in regimes where gravity is irrelevant – at length scales much larger than $10^{-35}$ meters. Such a “cutoff” can be applied systematically in quantum field theories: one basically banishes oscillations of the quantum field whose wavelengths are smaller than the Planck length. And it works, after a fashion – the answer for the “cosmological constant” is no longer infinitely bigger than the actual number, it is only bigger by a factor of $10^{121}$!
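One can see where a number of that size comes from by piling one Planck mass of zero-point energy into every Planck volume and comparing against the measured vacuum energy density. A rough sketch (the measured figure of about $6\times 10^{-10}$ J/m³ is the standard dark-energy value, and the exact power of ten depends on the numerical factors kept in the mode sum, which is why estimates between $10^{120}$ and $10^{123}$ all appear in the literature):

```python
import math

G_N, hbar, c = 6.674e-11, 1.0546e-34, 2.998e8  # SI units

l_P = math.sqrt(G_N * hbar / c**3)  # Planck length, m
m_P = math.sqrt(hbar * c / G_N)     # Planck mass, kg

# Naive Planck-cutoff estimate: one Planck mass of energy per Planck volume.
rho_planck = m_P * c**2 / l_P**3    # J/m^3, around 5e113

# Measured dark-energy density: a few protons' worth of mass-energy per m^3.
rho_observed = 6e-10                # J/m^3

print(f"mismatch ~ 10^{math.log10(rho_planck / rho_observed):.0f}")
```

This has been called the worst prediction in the history of physics, which is the sense in which the cutoff "works": the discrepancy is merely enormous rather than infinite.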

Most people would not be happy with this state of affairs. There are other theories of fundamental particles. The most studied ones predict a negative cosmological constant, also not in line with our unfortunate reality.

About twenty years ago, three scientists – Andrew Cohen, David Kaplan and Ann Nelson (C-K-N) – proposed that this vacuum energy should actually be cut off at a much larger length scale: the size of a causally connected piece of the universe (basically the longest wavelength possible in our observable universe). In this way, they connected the really small cutoff to the really large size of the universe.

Why did they do this? They made the pretty obvious observation that the universe does not appear to be a black hole. Suppose the universe were dominated by radiation. The energy inside, they said, should be the vacuum energy, summed up to this cutoff. But this energy should be confined to a region bigger than – never smaller than – the “Schwarzschild radius” for that energy. The Schwarzschild radius for a given amount of energy is the radius of the ball the energy would have to be confined to in order to collapse into a black hole.
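The Schwarzschild radius itself is a one-line formula, $r_s = 2 G_N M / c^2$. As a sanity check (a standard result, not specific to the article), the Sun would have to be squeezed into a ball about 3 km across to become a black hole:

```python
G_N = 6.674e-11  # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """r_s = 2 G M / c^2: confine this mass inside r_s and it
    collapses into a black hole."""
    return 2 * G_N * mass_kg / c**2

# The Sun (~2e30 kg, actual radius ~7e8 m) would need to fit inside ~3 km.
print(f"{schwarzschild_radius(1.989e30) / 1000:.1f} km")
```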

C-K-N assume that there is a natural principle that requires that the size of the universe is at least equal to the Schwarzschild radius corresponding to all that energy. They then derive some consequences of this assumption.
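To see what the assumption buys, one can saturate it: demand that the vacuum energy filling a Hubble-sized region be no more than the energy of a black hole of that size. A rough sketch in SI units (the Hubble length $\sim 10^{26}$ m is a standard cosmological figure, and the factor of 2 should not be taken seriously at this level of estimate):

```python
G_N = 6.674e-11  # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s

L = 1.4e26       # Hubble length c/H_0 in meters (standard value)

# Black-hole condition: L >= 2 * G * E / c^4 with E = rho * L^3,
# which caps the vacuum energy density at rho <= c^4 / (2 * G * L^2).
rho_max = c**4 / (2 * G_N * L**2)

rho_observed = 6e-10  # measured dark-energy density, J/m^3

print(f"rho_max ~ {rho_max:.1e} J/m^3")
print(f"ratio to observed ~ {rho_max / rho_observed:.0f}")
```

Saturating the bound at the Hubble scale lands within an order of magnitude of the measured vacuum energy density, which is the striking feature of the proposal; whether the bound should be saturated, and at which epoch, is where the subtleties come in.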

First, my objections. I would have much preferred that the universe be MUCH bigger than this radius. Next, if this is indeed the case, surely some natural law should cause it to happen, rather than a post-hoc requirement (we are here, so it must have been so). That last bit is usually referred to as the “weak” anthropic principle. Anthropic principles have always seemed to me the last resort of the damned physicist – the point where you throw up your hands and say: if it weren’t this way, we wouldn’t be here. It’s OK to resort to such ideas when you clearly see that a lot of randomness drives a physical process. Merely not knowing the underlying physical process doesn’t seem the right reason to reach for an anthropic idea.

Anyway, I cast the entire problem as one in the thermodynamics of the entire universe and suggested that the universe is this way because it is simply the most advantageous way for it to arrange itself. This approach also lends itself to extensions. It turns out that if the material of the universe is not of the usual type (say “radiation” or “matter”), it might be possible to find a reasonable estimate of the cutoff that is in line with current experiments (at least the vacuum energy is then not off by a factor of $10^{121}$, but only $10^{45}$ or so).

There is more to do!