Some facts are bigger than numbers – a story

Some facts are just boring, like 1 + 1 = 2. You already knew them before they were presented as such, and now that you do, it’s hard to know what to do with them. Some facts are clearly important, even if you don’t know how you can use them, like the spark plug fires after there’s fuel in the chamber. These two kinds of facts may seem far apart but you also know on some level that by repeatedly applying the first kind of fact in different combinations, to different materials in different circumstances, you get the second (and it’s fun to make this journey).

Then there are some other facts that, while seemingly simple, provoke profound realisations in your mind – not something new so much as a way to understand something so deeply, so well, that it’s easy for you to believe that a single neural pathway among the multitude in your head has been forever changed. It’s an epiphany.

I came across such a fact this morning when reading an article about a star that may have gone supernova. The author packs the fact into one throwaway sentence.

Roughly every second, one of the observable Universe’s stars dies in a fiery explosion.

The observable universe is 90-something billion lightyears wide. The universe was born only 13.8 billion years ago but it has been expanding since, pushed faster and faster apart by dark energy. This is a vast, vast space – too vast for the human mind to comprehend. I’m not just saying that. Scientists must regularly come up against numbers like 8E50 (8 followed by 50 zeroes), but they don’t have to be concerned about comprehending the full magnitude of those numbers. They don’t need to know how big it is in some dimension. They have the tools – formulae, laws, equations, etc. – to tame those numbers into submission, to beat them into discoveries and predictions that can be put to human use. (Then again, they do need to deal with monstrous moonshine.)

But for the rest of us, the untameability can be terrifying. How big is a number like 8E50? In kilograms, it’s about 100 times lower than the mass of the observable universe. It’s the estimated volume of the galaxy NGC 1705 in cubic metres. It’s approximately the lifespan of a black hole with the mass of the Sun. You know these facts, yet you don’t know them. They’re true but they’re also very, very big – so big that they’re well past the point of true comprehension, into the realm of the I’d-rather-not-know. Yet the sentence above affords a way to bring these numbers back.

The author writes that every second or so, a star goes supernova. According to one estimate, 0.1% of stars have enough mass to eventually become a black hole. The observable universe has 200 billion trillion stars. This means there are 2E20 stars in the universe that could become black holes, if they aren’t already. Considering the universe has lived around 38% of its life and assuming a uniform rate of black hole formation (a big assumption, but it should suffice to illustrate my point), the universe should be visibly darkening by now, since photons of light shouldn’t have to travel far before encountering a black hole.
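As a quick sketch, here is the arithmetic above, using only the figures quoted in this paragraph:

```python
# Reproduce the rough estimate above using only the figures quoted in the text.

total_stars = 200e9 * 1e12     # 200 billion trillion stars in the observable universe
black_hole_fraction = 0.001    # ~0.1% of stars are massive enough to end up as black holes

potential_black_holes = total_stars * black_hole_fraction
print(f"{potential_black_holes:.0e}")   # 2e+20
```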

But it isn’t. The simple reason is that that’s how big the universe is. We learn about stars, planets, black holes, nebulae, galaxies and so forth. There are lots and lots of them, sure, but you know what there is the most of? The things we often discuss the least: the interstellar medium, the space between stars, and the intergalactic medium, the space between galaxies. Places where there isn’t anything big enough, ironically, to be able to catch the popular imagination. One calculation, based on three assumptions, suggests matter occupies an incomprehensibly low fraction of the observable universe (1. about 85% of this matter is supposed to be dark matter; 2. please don’t go on to assume that atoms are also mostly empty).

In numbers, the bigness of all this transcends comprehension – but knowing that billions upon billions of black holes still only trap a tiny amount of the light going around can be… sobering. And enlivening. Why, in the time you’ve taken to read this article, some 300 more stars will have exploded. Pfft.

A universe out of sight

We’ve found that the universe is expanding faster than we thought. The LHC has produced its most data yet in a single day. Good news, right?

Two things before we begin:

  1. The first subsection of this post assumes that humankind has colonised some distant extrasolar planet(s) within the observable universe, and that humanity won’t be wiped out in 5 billion years.
  2. Both subsections assume a pessimistic outlook, and the projections they dwell on may never come to be while humanity still exists. Nonetheless, it’s still fun to consider them and their science, and, most importantly, their potential to fuel fiction.

Cosmology

Astronomers using the Hubble Space Telescope have captured the most comprehensive picture ever assembled of the evolving universe — and one of the most colourful. The study is called the Ultraviolet Coverage of the Hubble Ultra Deep Field. Caption and credit: hubble_esa/Flickr, CC BY 2.0

Note: An edited version of this post has been published on The Wire.

A new study whose results were reported this morning made for a disconcerting read: it seems the universe is expanding 5-9% faster than we figured it was.

That the universe is expanding at all is disappointing enough: it is growing in volume like a balloon, continuously birthing more emptiness within itself. Because of the ever larger distances between things, each passing day leaves us lonelier than we were yesterday. The universe’s expansion is accelerating, too, and that doesn’t simply mean objects are getting farther away. It means some photons from those objects will never reach our telescopes despite travelling at lightspeed, doomed to yearn forever like Tantalus in Tartarus. At some point in the future, a part of the universe will become completely invisible to our telescopes, and it will remain that way no matter how hard we try.

And the darkness will only grow, until a day out of an Asimov story confronts us: a powerful telescope bearing witness to the last light of a star before it is stolen from us for all time. Even if such a day is far, far into the future – the effect of the universe’s expansion is perceptible only on intergalactic scales, as the Hubble constant indicates, and simply negligible within the Solar System – the day exists.

This is why we are uniquely positioned: to be able to see as much as we are able to see. At the same time, it is pointless to wonder how much more we are able to see than our successors because it calls into question what we have ever been able to see. Say the whole universe occupies a volume of X, that the part of it that remains accessible to us contains a volume Y, and what we are able to see today is Z. Then: Z < Y < X. We can dream of some future technological innovation that will engender a rapid expansion of what we are able to see, but with Y being what it is, we will likely forever play catch-up (unless we find tachyons, navigable wormholes, or the universe beginning to decelerate someday).

How is the universe’s rate of expansion changing? There is a number that captures this, called the deceleration parameter:

q = – (1 + Ḣ/H²),

where H is the Hubble constant and Ḣ is its first derivative with respect to time. The Hubble constant is the speed at which an object one megaparsec from us is receding. So, if q is positive, the universe’s expansion is slowing down. If q is zero, then 1/H is the time since the Big Bang. And if q is negative – as scientists have found to be the case – then the universe’s expansion is accelerating.
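A minimal sketch of this relation, with made-up illustrative values for H and Ḣ (they are of roughly the right order of magnitude but are not measurements):

```python
# A toy evaluation of the deceleration parameter q = -(1 + Hdot/H^2).
# H and Hdot below are illustrative values, not measurements.

def deceleration_parameter(H, Hdot):
    """H: Hubble parameter; Hdot: its first derivative (consistent time units)."""
    return -(1 + Hdot / H**2)

H = 0.07       # per billion years, roughly the right order for today's value
Hdot = -0.003  # per (billion years)^2, a made-up, slowly decreasing H

q = deceleration_parameter(H, Hdot)
print(f"q = {q:.2f}")
print("accelerating" if q < 0 else "decelerating" if q > 0 else "coasting")
```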

The age and ultimate fate of the universe can be determined by measuring the Hubble constant today and extrapolating with the observed value of the deceleration parameter, uniquely characterised by values of density parameters (Ω_M for matter and Ω_Λ for dark energy). Caption and credit: Wikimedia Commons

We measure the expansion of the universe from our position: on its surface (because, no, we’re not inside the universe). We look at light coming from distant objects, like supernovae; we work out how much that light is ‘red-shifted’; and we compare that to previous measurements. Here’s a rough guide.

What kind of objects do we use to measure these distances? Cosmologists prefer type Ia supernovae. In a type Ia supernova, a white dwarf (the dead core of a star, packed with electron-degenerate matter) slowly sucks in matter from an object orbiting it until it becomes hot and massive enough to trigger a runaway fusion reaction. In the next few seconds, the reaction expels about 10⁴⁴ joules of energy, visible as a bright fleck in the gaze of a suitable telescope. Such explosions have a unique attribute: the mass at which the white dwarf goes boom is nearly uniform, which means type Ia supernovae across the universe are almost equally bright. This is why cosmologists refer to them as ‘standard candles’. Based on how faint these candles appear, you can tell how far away they are burning.

After a type Ia supernova occurs, photons set off from its surface toward a telescope on Earth. However, because the universe is continuously expanding, the distance between us and the supernova is continuously increasing. The effective interpretation is that the explosion appears to be moving away from us, becoming fainter. How much it has moved away is derived from the redshift. The wave nature of radiation allows us to think of light as having a frequency and a wavelength. When an object that is moving away from us emits light toward us, the waves of light become stretched, i.e. the wavelength becomes distended. If the light is in the visible part of the spectrum when it starts out, then by the time it reaches Earth, the increase in its wavelength will make it seem redder. And so the name.

The redshift, z – technically known as the cosmological redshift – can be calculated as:

z = (λ_observed – λ_emitted)/λ_emitted

In English: the redshift is the fractional amount by which the observed wavelength has been stretched relative to the emitted wavelength. If z = 1, then the observed wavelength is twice the emitted wavelength. If z = 5, then the observed wavelength is six times the emitted wavelength. The farthest galaxy we know (MACS0647-JD) is estimated to be at a distance wherefrom z = 10.7 (corresponding to 13.3 billion lightyears).
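A quick sketch of that definition, with a made-up emitted wavelength just to check the z = 1 and z = 5 examples:

```python
# Cosmological redshift from observed and emitted wavelengths:
# z = (lambda_observed - lambda_emitted) / lambda_emitted

def redshift(lam_observed, lam_emitted):
    return (lam_observed - lam_emitted) / lam_emitted

lam_emitted = 500.0  # nm, a made-up emitted wavelength (green light)

print(redshift(2 * lam_emitted, lam_emitted))  # 1.0 -> observed is twice the emitted
print(redshift(6 * lam_emitted, lam_emitted))  # 5.0 -> observed is six times the emitted
```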

Anyway, z is used to calculate the cosmological scale-factor, a(t). This is the formula:

a(t) = 1/(1 + z)

a(t) is then used to calculate the distance between two objects:

d(t) = a(t) d₀,

where d(t) is the distance between the two objects at time t and d₀ is the distance between them at some reference time t₀. Since the scale factor is uniform throughout the universe, d(t) and d₀ can be stand-ins for the ‘size’ of the universe itself.

So, let’s say a type Ia supernova lit up at a redshift of 0.6. This gives a(t) = 0.625 = 5/8. So: d(t) = 5/8 * d0. In English, this means that the universe was 5/8th its current size when the supernova went off. Using z = 10.7, we infer that the universe was one-twelfth its current size when light started its journey from MACS0647-JD to reach us.
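Here is the same worked example as a sketch, using nothing but the scale-factor formula above:

```python
# Scale factor from redshift, and the relative 'size' of the universe
# at the moment the light we now see was emitted.

def scale_factor(z):
    return 1 / (1 + z)

for z in (0.6, 10.7):
    a = scale_factor(z)
    print(f"z = {z:>4}: a(t) = {a:.4f}  (universe was ~1/{1/a:.1f} of its current size)")

# z = 0.6  -> a(t) = 0.625, i.e. 5/8 of the current size
# z = 10.7 -> a(t) ~ 0.085, i.e. roughly one-twelfth of the current size
```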

As it happens, residual radiation from the primordial universe is still around today – as the cosmic microwave background radiation. It originated 378,000 years after the Big Bang, following a period called the recombination epoch, 13.8 billion years ago. Its redshift is 1,089. Phew.

The relation between redshift (z) and distance (in billions of light years). d_H is the comoving distance between you and the object you’re observing. Where it flattens out is the distance out to the edge of the observable universe. Credit: Redshiftimprove/Wikimedia Commons, CC BY-SA 3.0

A curious redshift is z = 1.4, corresponding to a distance of about 4,200 megaparsec (~0.13 trillion trillion km). Objects that are already this far from us are moving away from us faster than the speed of light. However, this isn’t faster-than-light travel because it doesn’t involve travelling. It’s just a case of the distance between us and the object increasing at such a rate that, if that distance was once covered by light in time t₀, light will now need t > t₀ to cover it*. The corresponding a(t) = 0.42. I wonder at times if this is what Douglas Adams was referring to (… and at other times I don’t, because the exact z at which this happens is 1.69, which means a(t) = 0.37. But it’s something to think about).
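A small check of those figures; the only outside number used is the standard conversion of one megaparsec to roughly 3.086 × 10¹⁹ km:

```python
# Check the scale factors quoted for z = 1.4 and z = 1.69, and the rough
# size of 4,200 megaparsec in kilometres.

KM_PER_MPC = 3.086e19  # kilometres in one megaparsec

def scale_factor(z):
    return 1 / (1 + z)

print(f"a(t) at z = 1.40: {scale_factor(1.40):.2f}")  # ~0.42
print(f"a(t) at z = 1.69: {scale_factor(1.69):.2f}")  # ~0.37
print(f"4,200 Mpc ~ {4200 * KM_PER_MPC:.2e} km")      # ~1.3e23 km, i.e. ~0.13 trillion trillion km
```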

Ultimately, we will never be able to detect any electromagnetic radiation from before the recombination epoch 13.8 billion years ago; then again, the universe has since expanded, leaving the supposed edge of the observable universe 46.5 billion lightyears away in any direction. In the same vein, we can imagine there will be a distance (closing in) at which objects are moving away from us so fast that the photons from their surface never reach us. These objects will define the outermost edges of the potentially observable universe, nature’s paltry alms to our insatiable hunger.

Now, a gentle reminder that the universe is expanding a wee bit faster than we thought it was. This means that our theoretical predictions, founded on Einstein’s theories of relativity, have been wrong for some reason; perhaps we haven’t properly accounted for the effects of dark matter? This also means that, in an Asimovian tale, there could be a twist in the plot.

*When making such a measurement, Earthlings assume that Earth as seen from the object is at rest and that it’s the object that is moving. In other words: we measure the relative velocity. A third observer will notice both Earth and the object to be moving away, and her measurement of the velocity between us will be different.


Particle physics

Candidate Higgs boson event from collisions in 2012 between protons in the ATLAS detector on the LHC. Credit: ATLAS/CERN

If the news that our universe is expanding 5-9% faster than we thought portends a stellar barrenness arriving sooner than expected, then another piece of news foretells a fecundity of opportunities: in the opening days of its 2016 run, the Large Hadron Collider produced more data in a single day than it did in the entirety of its first run (which led to the discovery of the Higgs boson).

Now, so much about the cosmos was easy to visualise, abiding as it all did by Einstein’s conceptualisation of physics: as inherently classical, never violating the principles of locality and causality. However, Einstein’s physics explains only one of the two infinities that modern physics has been able to comprehend – the other being the world of subatomic particles. And the kind of physics that reigns over the particles isn’t classical in any sense, and sometimes takes liberties with locality and causality as well. At the same time, it isn’t arbitrary either. How, then, do we reconcile the rule-bending of quantum physics with its underlying orderliness?

Through the rules of statistics. Take the example of the Higgs boson: it is not created every time two protons smash together, no matter how energetic the protons are. It is created at a fixed rate – once every ~X collisions. Even better: we say that whenever a Higgs boson forms, it decays to a group of specific particles one-Yth of the time. The value of Y is related to a number called the coupling constant. The lower Y is, the higher the coupling constant is, and the more often the Higgs boson will decay into that group of particles. When estimating a coupling constant, theoretical physicists assess the various ways in which the decays can happen (e.g., Higgs boson → two photons).
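Here is a toy sketch of that counting; X and Y below are made-up stand-ins, not the real production rate or branching fraction:

```python
# Toy counting: a Higgs boson shows up once every ~X collisions, and decays to
# a chosen group of particles one-Yth of the time. X and Y are made up for
# illustration only; they are not the real LHC figures.

X = 1_000_000_000   # collisions per Higgs boson (illustrative)
Y = 2_000           # one in Y Higgs bosons decays into the chosen channel (illustrative)

def expected_events(n_collisions):
    """Expected number of detectable decays in n_collisions."""
    return n_collisions / (X * Y)

def collisions_needed(p_events):
    """Collisions needed, on average, to gather p_events such decays."""
    return p_events * X * Y

print(expected_events(1e15))            # 500.0 events expected in 1e15 collisions
print(f"{collisions_needed(100):.1e}")  # 2.0e+14 collisions for ~100 events
```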

A similar interpretation is that the coupling constant determines how strongly a particle and a force acting on that particle will interact. Between the electron and the electromagnetic force is the fine-structure constant,

α = e²/2ε₀hc;

and between quarks and the strong nuclear force it is the strong coupling constant, whose weakening at high energies is the origin of asymptotic freedom:

α_s(k²) = [β₀ ln(k²/Λ²)]⁻¹,

where Λ is the characteristic energy scale of the strong interaction.
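As a numerical sketch of both constants: the fundamental constants below are standard values, while β₀ and Λ are rough, textbook-style choices (five quark flavours, Λ ≈ 0.2 GeV), not precise fits:

```python
# Fine-structure constant from fundamental constants, plus a one-loop estimate
# of the strong coupling. beta0 and LAMBDA are rough, textbook-style choices
# (5 quark flavours, Lambda ~ 0.2 GeV), not precise fits.

import math

e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
h = 6.62607015e-34       # Planck constant, J s
c = 2.99792458e8         # speed of light, m/s

alpha = e**2 / (2 * eps0 * h * c)
print(f"alpha ~ {alpha:.6f} ~ 1/{1/alpha:.1f}")   # ~1/137

n_f = 5                                  # active quark flavours (assumed)
beta0 = (33 - 2 * n_f) / (12 * math.pi)  # one-loop beta-function coefficient
LAMBDA = 0.2                             # GeV, rough QCD scale (assumed)

def alpha_s(k_gev):
    """One-loop running strong coupling at momentum scale k (in GeV)."""
    return 1 / (beta0 * math.log(k_gev**2 / LAMBDA**2))

print(f"alpha_s at 10 GeV: {alpha_s(10):.3f}")
print(f"alpha_s at 91 GeV: {alpha_s(91):.3f}")   # ballpark ~0.13 near the Z mass
```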

So, if the LHC’s experiments require P (number of) Higgs bosons to make their measurements, and its detectors are tuned to detect that group of particles, then roughly P × X × Y collisions ought to have happened. The LHC might be a bad example because it’s a machine on the Energy Frontier: it is tasked with attaining higher and higher energies so that, at the moment the protons collide, heavier and much shorter-lived particles can show themselves. A better example would be a machine on the Intensity Frontier: its aim would be to produce orders of magnitude more collisions to spot extremely rare processes. Then again, it’s not as straightforward as just being prolific.

It’s like rolling an unbiased die. The chance that you’ll roll a four is 1/6 (i.e. the coupling constant) – but it could happen that you roll the die six times and never get a four. The chance can also be represented as 10/60 – over 60 rolls you would expect about ten fours – and yet you could roll the die 60 times and still never get one (though the odds of that happening are far lower). So you decide to take it to the next level: you build a die-rolling machine that rolls the die a thousand times. You will almost surely have got some fours – but say the fraction of fours still isn’t quite one-sixth. So you take it up a notch: you make the machine roll the die a million times. By now, the fraction of fours should be converging toward 1/6. This is how a particle accelerator-collider aims to work, and succeeds.
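A quick simulation of that convergence; the exact counts vary from run to run, but the drift toward 1/6 is the point:

```python
# Roll a fair die N times and watch the fraction of fours drift toward 1/6
# as N grows. Exact counts vary between runs; the convergence is the point.

import random

def fraction_of_fours(n_rolls):
    fours = sum(1 for _ in range(n_rolls) if random.randint(1, 6) == 4)
    return fours / n_rolls

for n in (6, 60, 1_000, 1_000_000):
    print(f"{n:>9} rolls: fraction of fours = {fraction_of_fours(n):.4f}")

print(f" expected: {1/6:.4f}")
```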

And this is why the LHC producing as much data as it already has this year is exciting news. That much data means a lot more opportunities for ‘new physics’ – phenomena beyond what our theories can currently explain – to manifest itself. Analysing all this data completely will take many years (physicists continue to publish papers based on results gleaned from data generated in the first run), and all of it will be useful in some way even if very little of it ends up contributing to new ideas.

The steady (logarithmic) rise in luminosity – the number of collision events detected – at the CMS detector on the LHC. Credit: CMS/CERN

Occasionally, an oddball will show up – like a pentaquark, a state of five quarks bound together. As particles in their own right, they might not be as exciting as the Higgs boson, but in the larger scheme of things, they have a role to call their own. For example, the existence of a pentaquark teaches physicists about what sorts of configurations of the strong nuclear force, which holds the quarks together, are really possible, and what sorts are not. However, let’s say the LHC data throws up nothing. What then?

Tumult is what. In the first run, the LHC smashed two beams of billions of protons – each beam accelerated to 4 TeV and separated into 2,000+ bunches – head on, at the rate of two opposing bunches every 50 nanoseconds. In the second run, after upgrades through early 2015, the LHC smashes bunches accelerated to 6.5 TeV once every 25 nanoseconds. In the process, the number of collisions per sq. cm per second increased tenfold, to 1 × 10³⁴ (a quick arithmetic sketch of these rates follows the list below). These heightened numbers mean new physics has fewer places to hide; we are on the verge of desperation to tease it out, to plumb the weakest coupling constants, because existing theories have not been able to answer all of our questions about fundamental physics (why things are the way they are, etc.). And even the barest hint of something new, something we haven’t seen before, will:

  • Tell us that we haven’t seen all that there is to see**, that there is yet more, and
  • Validate this or that speculative theory over a host of others, and point us down a new path to tread
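The quick sketch promised above, using the quoted bunch spacings and luminosity; the run-1 luminosity of 10³³ is inferred from the ‘tenfold’ increase rather than quoted directly:

```python
# Bunch-crossing rates implied by the quoted bunch spacings, and the quoted
# tenfold jump in luminosity. The run-1 luminosity is inferred from
# "increased tenfold, to 1e34", not quoted directly.

run1_spacing = 50e-9   # seconds between opposing bunch crossings, first run
run2_spacing = 25e-9   # seconds between crossings, second run

print(f"Run 1 crossings per second: {1 / run1_spacing:.0e}")  # 2e+07
print(f"Run 2 crossings per second: {1 / run2_spacing:.0e}")  # 4e+07

luminosity_run2 = 1e34                  # collisions per sq. cm per second (quoted)
luminosity_run1 = luminosity_run2 / 10  # tenfold lower
print(f"Run 1 luminosity: {luminosity_run1:.0e} per sq. cm per second")
```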

These, then, are the stakes should the LHC find nothing, all the more so now that it has yielded such a massive dataset. Of course, not all will be lost: larger, more powerful, more innovative colliders will be built – even as a disappointment will linger. Let’s imagine for a moment that all of them continue to find nothing, and that the persistent day comes to be when the cosmos falls out of our reach, too. Wouldn’t that be maddening?

**I’m not sure what an expanding universe’s effects on gravitational waves will be, but I presume they will be the same as its effect on electromagnetic radiation. Both are energy transmissions travelling on the universe’s surface at the speed of light, right? Do correct me if I’m wrong.