# Tag: SI units

## The strange beauty of Planck units

What does it mean to say that the speed of light is 1?

We know the speed of light in the vacuum of space to be 299,792,458 m/s – or about 300,000 km/s. It’s a speed that’s very hard for the human brain to visualise. In fact, it’s so fast as to be practically instantaneous in the human experience. In some contexts it might be reassuring to remember the 300,000 km/s figure, such as when you’re a theoretical physicist working on quantum physics problems and you need to remember that reality is often (but not always) local, meaning that when a force appears to transmit its effects on its surroundings really rapidly, the transmission is still limited by the speed of light. (‘Not always’ because quantum entanglement appears to break this rule.)

Another way to understand the speed of light is as an expression of proportionality. If another entity, which we’ll call X, can move at most at 150,000 km/s in the vacuum of space, we can say the speed of light is 2x the speed of X in this medium. Let’s say that instead of km/s we adopt a unit of speed called kb/s, where b stands for bloop: 1 bloop = 79 km. The speed of light in vacuum then becomes about 3,797 kb/s and the speed of X in vacuum about 1,898.5 kb/s. You’ll notice that the proportionality between the two entities – the speeds of light and X in vacuum – is still 2x.

Let’s change things up a bit more and express the speed of light as the nth power of 2. n = 18 comes closest for light and n = 17 for X. (The exact answer in each case is log s/log 2, where s is the speed of each entity.) The constant of proportionality between the two is no longer anywhere close to 2. The reason is that we switched from linear units to logarithmic units.
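Here’s a minimal sketch, in Python, of the arithmetic above, using the same rounded numbers as the text:

```python
# A quick numerical check of the argument above: a linear change of units
# preserves the ratio between two speeds; a logarithmic one does not.
import math

c = 300_000        # speed of light in vacuum, km/s (rounded, as in the text)
x = 150_000        # top speed of the hypothetical entity X, km/s
KM_PER_BLOOP = 79  # 1 bloop = 79 km

# Linear unit change: the ratio survives.
c_bloops = c / KM_PER_BLOOP   # ~3,797 kb/s
x_bloops = x / KM_PER_BLOOP   # ~1,898.5 kb/s
print(c_bloops / x_bloops)    # 2.0 -- same proportionality as in km/s

# Logarithmic units: the ratio does not survive.
n_c = math.log(c) / math.log(2)  # ~18.19, so n = 18 comes closest
n_x = math.log(x) / math.log(2)  # ~17.19, so n = 17 comes closest
print(n_c / n_x)                 # ~1.06 -- nowhere near 2
```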

This example shows how even our SI units – which allow us to make sense of how long a mile is relative to a kilometre and how long a solar year is in seconds, and thus standardise our sense of various dimensions – aren’t universally standard. The SI units have been defined keeping the human experience of reality in mind – as opposed to, say, those of tardigrades or blue whales.

As it happens, when you’re a theoretical physicist trying to understand the vast scales on which gravity operates and the infinitesimal realm of quantum phenomena, the human experience isn’t very helpful. Instead, physicists set aside their physical experiences and turn to the universal physical constants: numbers whose values are constant in space and time, and which together control the physical properties of our universe.

By combining only four universal physical constants, the German physicist Max Planck found in 1899 that he could express certain values of length, mass, time and temperature in units that made no reference to the human experience. Put another way, these are scales of distance, mass, duration and temperature that can be expressed using nothing but the constants of our universe. The four constants are:

• G, the gravitational constant (roughly speaking, defines the strength of the gravitational force between two massive bodies)
• c, the speed of light in vacuum
• h, the Planck constant (the constant of proportionality between a photon’s energy and frequency)
• kB, the Boltzmann constant (the constant of proportionality between the average kinetic energy of a group of particles and the temperature of the group)

Based on Planck’s idea and calculations, physicists have been able to determine the following:

| Planck unit | Formula | Approximate value |
| --- | --- | --- |
| Planck length | √(ħG/c³) | 1.616255×10⁻³⁵ m |
| Planck mass | √(ħc/G) | 2.176434×10⁻⁸ kg |
| Planck time | √(ħG/c⁵) | 5.391247×10⁻⁴⁴ s |
| Planck temperature | √(ħc⁵/G)/kB | 1.416784×10³² K |

(Note here that the Planck constant, h, has been replaced with the reduced Planck constant ħ, which is h divided by 2π.)
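As an illustration – my own, not part of the original derivation – here is a minimal Python sketch that computes these values from the four constants, assuming their standard CODATA values:

```python
# Deriving the Planck units from the four universal constants (SI units).
import math

G   = 6.67430e-11     # gravitational constant, m^3 kg^-1 s^-2
c   = 299_792_458     # speed of light in vacuum, m/s
h   = 6.62607015e-34  # Planck constant, J s
k_B = 1.380649e-23    # Boltzmann constant, J/K

hbar = h / (2 * math.pi)  # reduced Planck constant

planck_length = math.sqrt(hbar * G / c**3)        # ~1.616255e-35 m
planck_mass   = math.sqrt(hbar * c / G)           # ~2.176434e-8 kg
planck_time   = math.sqrt(hbar * G / c**5)        # ~5.391247e-44 s
planck_temp   = math.sqrt(hbar * c**5 / G) / k_B  # ~1.416784e32 K

# One Planck length per Planck time is exactly the speed of light,
# so this prints 1.0 (up to floating-point rounding).
print(planck_length / planck_time / c)
```

The last line checks the claim in the next paragraph: one Planck length per Planck time is the speed of light itself.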

When the speed of light is expressed in these Planck units, it comes out to a value of 1 (i.e. 1 Planck length per Planck time: 1.616255×10⁻³⁵ m per 5.391247×10⁻⁴⁴ s). The same goes for the values of the gravitational constant, the Boltzmann constant and the reduced Planck constant.

Remember that units are expressions of proportionality. Because the Planck units are all expressed in terms of universal physical constants, they give us a better sense of what is and isn’t proportionate. To borrow Frank Wilczek’s example, we know that the binding energy due to gravity contributes only ~0.000000000000000000000000000000000000003% of a proton’s mass; the rest comes from its constituent particles and their energy fields. Why this enormous disparity? We don’t know. More importantly, which entity has the onus of providing an explanation for why it’s so out of proportion: gravity or the proton’s mass?

The answer is in the Planck units, in which the value of the gravitational constant G is the desired 1, whereas the proton’s mass is the one out of proportion – a ridiculously small 10⁻¹⁹ (approx.). So the onus is on the proton to explain why it’s so light, rather than on gravity to explain why it acts so feebly on the proton.
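The arithmetic behind that figure, as a quick sketch (the proton mass is the CODATA value; the Planck mass comes from the table above):

```python
# How light is the proton in Planck units? A one-step check.
proton_mass = 1.67262192e-27  # kg (CODATA)
planck_mass = 2.176434e-8     # kg (from the table above)

print(proton_mass / planck_mass)  # ~7.7e-20, i.e. roughly 10^-19
```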

More broadly, the Planck units define our universe’s “truly fundamental” units. All other units – of length, mass, time, temperature, etc. – ought to be expressible in terms of the Planck units. If they can’t be, physicists will take that as a sign that their calculations are incomplete, wrong or that there’s a part of physics that they haven’t discovered yet. The use of Planck units can reveal such sources of tension.

For example, since our current theories of physics are founded on the universal physical constants, the theories can’t describe reality beyond the scale described by the Planck units. This is why we don’t really know what happened in the first 10⁻⁴³ seconds after the Big Bang (and for that matter any events that happen for a duration shorter than this), how matter behaves beyond the Planck temperature or what gravity feels like at distances shorter than 10⁻³⁵ m.

In fact, just like how gravity dominates the human experience of reality while quantum physics dominates the microscopic experience, physicists expect that theories of quantum gravity (like string theory) will dominate the experience of reality at the Planck length. What will this reality look like? We don’t know, but we know that it’s a good question.

## The clocks that used atoms and black holes to stay in sync

You’re familiar with clocks. There’s probably one if you look up just a little, at the upper corner of your laptop or smartphone screen, showing you what time of day it is, allowing you to quickly grasp the number of daytime or nighttime hours, depending on your needs.

There are some other clocks that are less concerned with displaying ‘clock time’ and more with measuring the passage of time. These devices are useful for applications designed to understand this dimension in a deeper sense. The usefulness of these clocks also depends more strongly on the timekeeping techniques they employ.

For example, consider the caesium atomic clock. Like all clocks, it is a combination of three things: an oscillator, a resonator and a detector. The oscillator is a finely tuned laser that shines on an ultra-cold gas of caesium atoms in a series of pulses. If the laser has the right frequency, an electron in a caesium atom will absorb a corresponding photon, jump to a higher energy level and then fall back to its original place by emitting radiation of exactly 9,192,631,770 Hz. This radiation is the resonator.

The detector will be looking for radiation of this frequency – and the moment it has detected 9,192,631,770 waves (counted from one crest to the next), it will signal that one second has passed. This is also why, technically, a caesium clock can be used to measure out about a nine-billionth of a second.

Scientists have need for even more precise clocks – clocks that use extremely stable resonators and, increasingly of late, clocks that combine both advantages. This is why scientists developed optical atomic clocks. The caesium atomic clock has a resonant frequency of 9,192,631,770 Hz, which lies in the microwave part of the electromagnetic spectrum. Optical atomic clocks use resonators whose frequencies lie in the optical part of the spectrum, which is much higher.

For example, physicists at the Inter-University Centre for Astronomy and Astrophysics and the Indian Institute of Science Education and Research, both Pune, are building clocks that use ytterbium and strontium ions, respectively, with resonator frequencies of 642,121,496,772,645 Hz and 429,228,066,418,009 Hz. So technically, these clocks can measure out around a 600-trillionth and a 400-trillionth of a second respectively, allowing scientists ultra-precise insights into how long very short-lived events really last and how closely theoretical predictions and experimental observations match up.
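As a rough sketch of the proportionality at work here: the finest interval each clock can resolve is simply the period of its resonator, one over its frequency (frequencies as quoted above):

```python
# The finest interval each clock can resolve is one period of its
# resonator, i.e. 1 / frequency.
clocks_hz = {
    "caesium (microwave)": 9_192_631_770,
    "ytterbium (optical)": 642_121_496_772_645,
    "strontium (optical)": 429_228_066_418_009,
}

for name, frequency in clocks_hz.items():
    print(f"{name}: one tick ~ {1 / frequency:.3e} s")

# caesium (microwave): one tick ~ 1.088e-10 s (about a nine-billionth of a second)
# ytterbium (optical): one tick ~ 1.557e-15 s (about a 642-trillionth)
# strontium (optical): one tick ~ 2.330e-15 s (about a 429-trillionth)
```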

In fact, because we have not managed to measure a 400-trillionth of a kilogram, of a metre or indeed of any other SI unit, time is currently the most precisely measured physical quantity.

Sometimes, scientists need to use multiple atomic clocks in the course of an experiment or to ascertain how synchronised they are. This is not a trivial exercise.

For example, say you have two clocks whose performance you need to compare. If they are simple digital clocks, you could check how precisely each one of them records the amount of time between, say, astronomical dawn and astronomical dusk (the moments when the Sun is 18º below the horizon before sunrise and after sunset, respectively). Here, you take the act of looking at each clock face for granted. If the clocks are right in front of you, light travels nearly instantaneously between your eye and the display. And because the clocks tick one second at a time, you can repeat the task of checking their synchronisation as often as you need to just by looking.

What do you do if you need to know how well two optical atomic clocks are matched up continuously and if they are separated by, say, a thousand kilometres? Scientists in Europe demonstrated one solution to this problem in 2015.

They had optical clocks in Paris and Braunschweig connected with fibre optic cables to a processing station in Strasbourg. The resonant frequency of each clock was encoded in a ‘transfer laser’ that was then beamed through the cables to Strasbourg, where a detector measured the two laser signals to decode the relative beat of the clocks in real time. The total length of the fibre optic cables in this case was 1,415 km. With this “all-optical” setup plus signal processing techniques, the research team reported a precision of three parts in 10¹⁹ after an averaging time of just 1,000 seconds – a cutting-edge feat.
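For a feel of the numbers involved (a toy illustration, not the team’s actual signal-processing chain; the strontium frequency from earlier is reused as a stand-in):

```python
# The beat between two nearly identical frequencies is their difference,
# and dividing by the nominal frequency gives the fractional offset.
f_nominal = 429_228_066_418_009.0  # Hz (hypothetical optical frequency)
offset    = 3e-19                  # fractional offset: three parts in 10^19

# Computed directly rather than by subtracting two floats, since an
# offset this small is below double-precision resolution.
beat = f_nominal * offset
print(beat)  # ~1.3e-4 Hz -- barely a tenth of a millihertz
```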

But scientists are likely to need to go one step better, if only because they also anticipate that the advent of optical atomic clocks at facilities around the world is likely to lead to a redefinition of the SI unit of time. The second’s current definition – “the time duration of 9,192,631,770 periods of the radiation” emitted by electrons transitioning between two particular energy levels of a caesium-133 atom – originated in 1967, when microwave atomic clocks were the state of the art.

Today, optical atomic clocks have this honour – and because they are more stable and use a higher resonator frequency than their microwave counterparts, it only makes sense to update the definition of a second. When this happens, optical clocks around the world will have to speak to each other constantly to make sure what each of them is measuring to be one second is the same everywhere.

Some of these clocks will be a few hundred kilometres apart, and others a lot more. In fact, scientists have figured it would be useful to have a way for two optical atomic clocks located on different continents to be able to work with each other. This represents the current version of the coordination problem, and scientists in Europe and Japan recently demonstrated a solution. It involves astronomy, because astronomy has a similar problem.

Everything in the universe is constantly in motion, which means telling the position of one moving object from another – like that of Venus from Earth – is bound to be more complicated from the start than knowing where your friend lives in a different city.

But astronomers have still figured out a way to establish a fixed reference frame that provides useful information about the location of different cosmic objects through space and time. They call it the International Celestial Reference Frame (ICRF). Its centre is located at the barycentre of the Solar System – the point around which all the planets in the Solar System orbit. Each of its three axes points in the direction of groups of objects called defining sources.

Many of these objects are quasars. ‘Quasar’ is a contraction of ‘quasi-stellar radio source’, and is the name of the region at the centre of a galaxy where a supermassive black hole is surrounded by a highly energised disk of gas and dust. Quasars are, as such, extremely bright. Astronomers spotted the first of them because they showed up in radio-telescope data as previously unknown star-like sources of radio waves. Because each galaxy can technically have only one quasar, the number of quasars in the sky is not very high (relatively speaking), and most quasars are also located at such great distances that the radio waves they emit become very weak by the time they reach Earth’s radio telescopes.

So on Earth, physicists either use very powerful telescopes to detect them or a collection of telescopes that work together using a technique called very-long baseline interferometry (VLBI). The idea is elegant but the execution is complicated.

Say some process in the accretion disk around the black hole at the Milky Way’s centre emits radio waves into space. These waves propagate through the universe. At some point, after many thousands of years, they reach radio telescopes on Earth. Because the telescopes are at widely separated locations – in Maharashtra, the Canary Islands and Hawaii, say – they will each detect and measure the radio-wave signals at slightly different points of time. There may also be slight differences in the waves’ characteristics because they are likely to have moved through different forms and densities of matter on their journey through space.

Computers combine the exact times at which the signals arrive at each telescope and the signals’ physical properties (like frequency, phase, etc.) using a sophisticated technique called cross-correlation to produce a better-resolved picture of the source that emitted them than if they had used data from only one telescope.
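A minimal sketch of the core idea, assuming two idealised stations and a made-up delay:

```python
# The same "sky" signal arrives at two stations offset by an unknown
# delay, buried in local noise; the peak of the cross-correlation of the
# two recordings recovers that delay.
import numpy as np

rng = np.random.default_rng(seed=1)
n_samples, true_delay = 10_000, 250  # true_delay in samples

sky = rng.normal(size=n_samples + true_delay)  # the common signal
station_a = sky[:n_samples] + 0.5 * rng.normal(size=n_samples)
station_b = sky[true_delay:] + 0.5 * rng.normal(size=n_samples)

xcorr = np.correlate(station_a, station_b, mode="full")
estimated_delay = np.argmax(xcorr) - (n_samples - 1)
print(estimated_delay)  # 250
```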

In fact, the resolving power of a radio telescope is proportional to the telescope’s baseline. If scientists are using only one telescope to make an observation, the baseline is equal to the dish’s diameter. But with VLBI radio astronomy, the baseline is equal to the longest distance between two telescopes in the array. This is why this technique is so powerful.

For example, to capture the first direct image of the black hole at the Milky Way’s centre, some 26,000 lightyears away, astronomers combined an array of eight telescopes located in North America, South America, Hawaii, Europe and the South Pole to form the Event Horizon Telescope. At any given time, the baseline would be determined by two telescopes that could observe the black hole simultaneously. And as Earth rotated, different pairs of telescopes would work together to keep observing the black hole even as their own view of it changed.
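The proportionality in question is the diffraction limit: angular resolution ≈ wavelength/baseline. A quick sketch with rough EHT-like numbers (an observing wavelength of about 1.3 mm and a baseline close to Earth’s diameter):

```python
# Diffraction-limited angular resolution ~ wavelength / baseline.
import math

wavelength = 1.3e-3  # m (EHT observed at ~1.3 mm)
baseline   = 1.27e7  # m, roughly Earth's diameter

theta_rad = wavelength / baseline
theta_deg = math.degrees(theta_rad)
theta_uas = theta_deg * 3600e6  # microarcseconds per degree = 3600e6

print(f"{theta_deg:.1e} degrees, ~{theta_uas:.0f} microarcseconds")
# ~5.9e-09 degrees -- the "few billionths of a degree" mentioned below
```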

Each telescope would record a signal together with a very precise timestamp, provided by an atomic clock installed at the same facility or nearby, on a hard drive. Once an observing run ended, all the hard drives would be shipped to a processing facility, where computers would combine the signal and time data from them to create an image of the source.

As it happens, the image the Event Horizon collaboration released in 2019 – of the supermassive black hole at the centre of the galaxy M87 – could have been available sooner if not for the fact that there are no flights from the South Pole from April to October. So astrophysics also has some coordination problems, but astrophysicists have been able to figure them out thanks to tools like VLBI. Perhaps it’s not surprising, then, that scientists have thought to use VLBI to solve optical atomic clocks’ coordination problem as well.

According to a paper published in July 2020, the current version of the ICRF is the third iteration; it was adopted on January 1, 2019, and uses 4,588 sources. Of these, the positions of exactly 500 sources – including some quasars – are known with “extreme accuracy”. Using this information, the European-Japanese team reversed the purpose of VLBI to serve atomic clocks.

Using VLBI to measure the positions and features of distant astronomical objects is called VLBI astrometry. Doing the same to measure distances on Earth, like the European-Japanese team has done, is called VLBI geodesy. In the former, astronomers use VLBI to reduce uncertainties about distant sources of radio waves by being as certain as possible about the distance between the telescopes (and about confounding factors like atmospheric distortion). Flip this: if you are as certain as possible about the distance from Earth to a particular quasar, you can use VLBI to reduce uncertainties about the distance between two atomic clocks instead.

And the science and technologies we have available today have allowed astronomers to resolve details down to a few billionths of a degree in astrometry – and to a few millimetres in geodesy.

The European-Japanese team implemented the same idea. The team members used three radio telescopes. Two of them, located in Medicina (Italy) and Koganei (Japan), were small, with dishes 2.4 m in diameter, but had a baseline of 8,700 km between them. The Medicina telescope was connected to a ytterbium optical atomic clock in Torino, and the Koganei telescope to a strontium optical atomic clock at the Koganei facility itself.

First, the Torino clock’s resonator frequency was converted from the optical part of the spectrum to the microwave part using a device called a frequency comb.

(To quote myself from an older article: “A frequency comb is an advanced laser whose output radiation lies in multiple, evenly-spaced frequencies. This output can be used to convert high-frequency optical signals into more easily countable lower-frequency microwave signals.”)

This microwave frequency was then transferred to a laser beamed through a fibre optic cable to the Medicina telescope. Similarly, at Koganei, the strontium clock’s resonator frequency was converted using a frequency comb to a corresponding microwave counterpart. At this point, both telescopes had time readings from optical atomic clocks in the form of more easily counted microwave radiation.
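The relation that makes a comb work as such a converter is that every comb tooth sits at f(n) = n × f_rep + f_ceo, where the repetition rate f_rep and the offset frequency f_ceo are both countable, microwave-range numbers. A minimal sketch, with hypothetical comb parameters (the optical frequency is the strontium figure quoted earlier):

```python
# Pinning down an optical frequency using a comb: count f_rep and f_ceo,
# then measure the beat against the nearest comb tooth.
f_rep = 250e6                # Hz, pulse repetition rate (hypothetical)
f_ceo = 20e6                 # Hz, carrier-envelope offset (hypothetical)
f_opt = 429_228_066_418_009  # Hz, optical frequency to be measured

n = round((f_opt - f_ceo) / f_rep)    # index of the nearest comb tooth
f_beat = f_opt - (n * f_rep + f_ceo)  # beat note, also microwave-range

print(n, f_beat)  # the optical frequency is then recoverable as
                  # n * f_rep + f_ceo + f_beat
```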

In the second step, the scientists used VLBI to determine as accurately as possible the time difference between the two telescopes. For this, the telescopes observed a quasar whose position was known to a high degree of accuracy in the ICRF system.

Since quasars are inherently far away and the two telescopes were quite small (as radio telescopes go), they were able to detect the quasar signal only weakly. To adjust for this, the team connected both telescopes via high-speed internet links to a large 34-m radio telescope in Kashima, also in Japan. This way, the team writes in its paper published in October 2020,

“the delay observable between the transportable stations can be calculated as the difference of the two delays with the large antenna after applying a small correction factor”.
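In code, the differencing the quote describes amounts to something like this (all numbers here are made up):

```python
# A toy illustration of the delay-differencing trick: each small station
# correlates against the large Kashima antenna, and differencing the two
# results cancels Kashima out of the comparison.
tau_medicina_kashima = 0.0287654321  # s, Medicina <-> Kashima (hypothetical)
tau_koganei_kashima  = 0.0001234567  # s, Koganei  <-> Kashima (hypothetical)
correction           = 1.2e-9        # s, small correction (hypothetical)

tau_medicina_koganei = (tau_medicina_kashima - tau_koganei_kashima) + correction
print(tau_medicina_koganei)
```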

Once the scientists had a delay figure, they worked backwards to estimate when exactly the two telescopes ought to have recorded their respective signals. From this they could calculate the ratio of the microwave frequencies, and from that, in turn, the ratio of the two clocks’ optical frequencies – autonomously, in real time. To quote once again from the team’s paper:

“One node was installed at NICT headquarters in Koganei (Japan) while the other was transported to the Radio Astronomical Observatory operated by INAF in Medicina (Italy), forming an intercontinental baseline of 8,700 km. Observational data at Medicina and Koganei were stored on hard-disk drives at each station and transferred over high-speed internet networks to the correlation centre in Kashima for analysis. Ten frequency measurements were performed via VLBI between October 2018 and February 2019, and from these we calculated the frequency difference between the reference clocks at the two stations: the local hydrogen masers in Medicina and Koganei. Each session lasted from 28 h to 36 h and included at least 400 scans observing between 16 and 25 radio sources in the ICRF list.”

This way, they reported the ability to determine the frequency ratio with an uncertainty of 10⁻¹⁶ after ten thousand seconds, and perhaps as low as 10⁻¹⁷ after a longer averaging time of ten days.

This is very good, but more importantly it’s better than the uncertainty arising from directly comparing the frequencies of two optical atomic clocks by relaying data through satellites. An uncertainty of 10⁻¹⁷ also means physicists can use multiple optical atomic clocks to study extremely slow changes, and potentially be confident about the results down to one part in 10¹⁷ (i.e. 0.00000000000000001).

The architecture of the solution also presents some unique advantages, as well as food for thought.

The setup effectively requires optical atomic clocks to be connected to small, even portable, radio telescopes as long as these telescopes are then connected to a larger one located somewhere else through a high-speed internet connection. These small instruments “can be operated without the need for a radio transmission licence,” the team writes in the paper, and “where laboratories lack the facilities or sky coverage to house a VLBI station, they can be connected by local optical-fibre links” like the one between Medicina and Torino.

The scientists have effectively used existing methods to solve a new problem instead of finding an altogether new solution. This isn’t to say new solutions are disfavoured – only that the achievement, apart from being relatively low-cost and well-understood, is ingenious, and keeps the use of optical atomic clocks, for all the applications they portend, from becoming too resource-intensive.

It’s also fascinating that the clocks participating in this exercise are effectively a group of machines translating between processes playing out at two vastly different scales – one of minuscule electrons emitting tiny amounts of radiation over short distances, and the other of radiation of similar provenance emerging from the extreme neighbourhoods of colossal black holes, travelling for many millennia at the speed of light through the cosmos.

Perhaps this was to be expected, considering the idea of using a clock is fundamentally a quest for a foothold, a way to translate the order lying at the intersection of seemingly chaotic physical processes, all directed by the laws of nature, to a metronome that the human mind can tick to.

Featured image: A simulation of a black hole from the 2014 film ‘Interstellar’. Source: YouTube.