Kepler data reveals a frost giant

I’ve been most fascinated lately by studies of planet formation. Every small detail is like that one letter in the crossword you need to fill all the other boxes in, every discovery a cornerstone that holds together a unique piece of the universe. For example, using just the finding that the exoplanet Beta Pictoris b has a very short day of eight hours, astronomers could speculate on how it was formed, what its density could be, and how heavy it could get over time. And it isn’t surprising if a similar tale awaits telling by Kepler 421b, an exoplanet some 1,000 ly from Earth toward the constellation Lyra. Its discovery was reported on July 17, a week ago. And its pièce de résistance is that it has a long year, i.e. orbital period, of 704 days.

Image: Illustrating the transit technique. The Kepler telescope looks for the drop in brightness in its search for exoplanets. The technique applies only when the planet can be seen head-on against the background of its star. Credit: http://www2.ifa.hawaii.edu/

To have such a long year, it must be orbiting pretty far from its star – Kepler 421 – which in turn should’ve made it hard to discover. The NASA Kepler space telescope spots exoplanets by looking for the dip in a star’s brightness as the planet moves in front of it, called a transit. Because of Kepler 421b’s long orbital period, it transited its central star only twice in the four years Kepler was looking at it. Together with its orbital eccentricity – i.e. how elliptical its orbit is – this meant Kepler had only a 0.3% chance of spotting it on its way around the star. In fact, 421b has the longest year of any known exoplanet discovered using the transit technique. This suggests the technique may not be good enough to spot exoplanets with long orbital periods, a class of planets that astronomers have been looking for. On the other hand, now that 421b has been spotted and studied to some extent, astronomers can form impressions of its history and future.

The frost line

For starters, they were able to deduce the planet’s size from how much starlight it blocked, and the shape of its orbit from the timing and duration of each full transit. The readings point to 421b being like Uranus, with a radius four times Earth’s, a density of at least 5 g/cc, and an eccentric orbit. Being like Uranus also means a surface temperature of -90 degrees Celsius (183 kelvin). This is plausible because 421b is 1.2 times as far from its star as Earth is from the Sun, and its star is a dimmer orange dwarf.
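The size estimate rests on simple geometry: the fraction of starlight blocked during a transit is roughly the square of the ratio of the planet’s radius to the star’s. A minimal sketch in Python – the 0.8-solar-radius figure for the orange dwarf is an illustrative assumption, not the paper’s fitted value:

```python
# Transit photometry: a planet crossing its star blocks a fraction of
# light equal to the ratio of their projected areas, ~ (Rp / Rs)^2.
R_EARTH_KM = 6_371
R_SUN_KM = 696_000

def transit_depth(planet_radius_km, star_radius_km):
    """Fractional dip in the star's brightness during a central transit."""
    return (planet_radius_km / star_radius_km) ** 2

# Kepler 421b: ~4 Earth radii; host star assumed here to be ~0.8 solar radii.
depth = transit_depth(4 * R_EARTH_KM, 0.8 * R_SUN_KM)
print(f"Brightness dip: {depth:.4%}")
```

A dip of roughly 0.2% is comfortably within Kepler’s photometric precision; the hard part, as the 0.3% detection odds suggest, is catching the rare transit at all.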

These wintry conditions are found beyond a star’s frost line, an imaginary line marking the distance beyond which space is cold enough to cause hydrogen-based molecules to condense into icy grains. So planets orbiting beyond this distance are also icy. Kepler 421b is likely the first exoplanet astronomers have found (using the transit technique) orbiting a star beyond its frost line. In other words, this might be our first exoplanet that’s an ice giant – “might” because 421b hasn’t been independently observed yet.

Not surprisingly, the frost line also marks a more significant boundary in terms of planet formation. Though observations made by Kepler are starting to show that the Solar System is a surprisingly unusual planetary system, it’s still the one we understand best and use to analogize what we find in other worlds. Astronomers believe planets in the system formed out of a disk of matter surrounding a younger Sun. The inner Earth-like (telluric) planets formed when rocky matter started to clump together and “fall out” of this disk. The outer gaseous planets, beyond the frost line, formed when icy grains stuck together to form watery planetary embryos.

Image: In this artist’s conception, gas and dust-the raw materials for making planets-swirl around a young star. The planets in our solar system formed from a similar disk of gas and dust captured by our sun. Credit: NASA/JPL-Caltech

The prevailing belief is that planets take at least three million years to form. In the same period, the central star is also evolving – in this case, Kepler 421 is a K-class star becoming brighter – and the amount of material available in the protoplanetary disk is diminishing because planets are feeding off it. Consequently, the frost line is on the move. Calculations by the astronomers who discovered 421b find the exoplanet to be now where the system’s frost line might’ve been three million years ago.

The sedate giant

Right now, we’ve a lot of letters in the crossword. Piecing them together, we can learn the following:

  1. If a beyond-the-frost-line gas giant is as big as Uranus but not as big as Jupiter, it’s possible that not enough material was available when it started to form, rendering it a latecomer in the system
  2. The abundance of material required to form Jupiter-sized planets makes smaller worlds likelier than larger ones, and in fact implies worlds like 421b should be less rare than Kepler’s lone find makes them seem (a 2013 study cited by the discoverers suggests that there might actually be a pile-up of planets transiting at the frost line of their stars)
  3. If the planet had to have formed behind its star’s frost line, and the frost line was three million years ago where the planet is now, the planet could be around three million years old – assuming it hasn’t moved around since forming
  4. 421b is very Uranus-like; if it has to be a rocky world, its mass has to be 60 times Earth’s, pointing at an improbably massive protoplanetary disk within one or two AU of a star – something we’re yet to find

#3 warrants a comparison with the Solar System’s history, especially Jupiter’s. Jupiter didn’t form where it is right now, having possibly moved toward and away from the Sun as a result of gravitational interactions with other planets that were forming. During its journeys, its own gravitational pull could’ve tugged on asteroid belts and other free-floating objects, pulling them out of one location and depositing them in another. Contrarily, 421b appears to have been far more sedate, probably not having moved at all due to its youth and isolation. If only it had moved inward, like Jupiter eventually did, its orbital period would’ve been shorter and Kepler would’ve spotted it more easily.

Image: The confusion Jupiter might’ve caused during its journey through a nascent Solar System. Credit: http://www.astro.washington.edu/courses/astro557/GrandTack2.pdf

Another comparison can be made with Beta Pictoris b, the other exoplanet mentioned at the beginning of this piece, the one with the eight-hour-long days. Younger planets spin faster because they retain the angular momentum they acquired while accumulating mass, and slow down only over time. Heavier planets also spin faster because they have more angular momentum to conserve. Similarly, we might be able to find out more about Kepler 421b’s past by uncovering its spin rate and getting a better estimate of its mass.

Anyway, a simple piecing together of facts and possibilities tells us – at least me – this much. Astronomers have one more awesome fact to take away: as the finders of 421b write in their pre-print paper, “the first member of this missing class of planets” has been found, and that means more astronomy to look forward to!

~

References

Kipping et al., ‘Discovery of a transiting planet near the snow line’, arXiv:1407.4807 (accepted in The Astrophysical Journal)

The secrets of how planets form

Astronomers who were measuring the length of one day on an exoplanet for the first time were in for a surprise: it was shorter than any planet’s in the Solar System. Beta Pictoris b, orbiting the star Beta Pictoris, has a sunrise every eight hours. On Jupiter, there’s one once every 10 hours; on Earth, every 24 hours.

This exoplanet is located 63.4 light-years from the Solar System. It is a gas giant, a planet made mostly of gases of light elements like hydrogen and helium and many times heavier than Earth. In fact, Beta Pictoris b is about eight times as heavy as Jupiter. It was first spotted in 2003 with the European Southern Observatory’s Very Large Telescope. Six years and more observations later, it was confirmed to be orbiting the star Beta Pictoris, rather than merely lying along the same line of sight.

On April 30, a team of scientists from the Netherlands published a paper in Nature saying Beta Pictoris b was rotating at a rate faster than any planet in the Solar System does. Its equatorial rotation velocity is 25 km/s. Jupiter’s equatorial rotation velocity is about half of that, 13.3 km/s.

The scientists used the Doppler effect to measure this value. “When a planet rotates, part of the planet surface is coming towards us, and a part is moving away from us. This means that due to the Doppler effect, part of the spectrum is a little bit blueshifted, and part of it a little redshifted,” said Ignas Snellen, the lead author on the Nature paper and an astronomy professor at the University of Leiden.

So a very high-precision color spectrum of the planet will reveal the blue- and redshifting as a broadening of the spectral lines: instead of seeing thin lines, the scientists will have seen something like a smear. The extent of smearing will correspond to the rate at which the planet is rotating.
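The scale of this smearing follows from the non-relativistic Doppler formula, Δλ/λ ≈ v/c. A rough sketch – the 2,300 nm wavelength is chosen only as an illustrative infrared line, not a value from the paper:

```python
C_KM_S = 299_792.458  # speed of light in km/s

def doppler_shift_nm(rest_wavelength_nm, velocity_km_s):
    """Non-relativistic Doppler shift for a given line-of-sight velocity."""
    return rest_wavelength_nm * velocity_km_s / C_KM_S

# The approaching limb is blueshifted and the receding limb redshifted,
# so a planet spinning at 25 km/s smears a line across twice this shift.
shift = doppler_shift_nm(2300, 25)
print(f"Shift per limb: {shift:.3f} nm; total broadening: {2 * shift:.3f} nm")
```

Resolving a smear that small against the glare of the host star is why the measurement demands such a high-precision spectrum.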


Bigger is faster

So much for the news. What is more interesting is what the Leiden team’s detailed analysis tells us, or doesn’t, about planet formation. For starters, check out the chart below.

Spin_rate_chart
Image: Macclesfield Astronomical Society

This chart shows us the relationship between a planet’s mass (X-axis) and its spin angular momentum (Y-axis), the momentum with which it spins on its axis. Clearly, the heavier a planet is, the faster it spins. Pluto and Charon, its moon, are the lightest of the lot and their spin rate is the lowest. Jupiter, the heaviest planet in the Solar System, has the highest. (Why are Mercury and Venus not on the line, and why have Pluto and Earth been clubbed with their moons? I’ll come to that later.)

“Apparently the more massive the planet, the more angular momentum it acquires,” Prof. Snellen said. This would put Beta Pictoris b farther along the line, possibly slightly beyond the boundaries of this chart – as this screenshot from the Leiden team’s pre-print paper shows.

planet_spin_rate1

Unfortunately, science doesn’t yet know why heavier planets spin faster, although there are some possible explanations. A planet forms from grains of dust floating around a star into a large, discernible mass (with many steps in between). This mass rotates, conserving its angular momentum. As it accrues more matter over time, it also absorbs that matter’s angular momentum, so its own spin angular momentum increases.
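To get a feel for the quantity on the chart’s Y-axis, treat each planet as a uniform sphere, for which the spin angular momentum is L = (2/5)MR²ω. Real planets are centrally condensed, so this overestimates; the point is only the relative scale:

```python
import math

def spin_angular_momentum(mass_kg, radius_m, day_hours):
    """Spin angular momentum of a uniform sphere: L = (2/5) * M * R^2 * omega."""
    omega = 2 * math.pi / (day_hours * 3600)  # angular velocity in rad/s
    return 0.4 * mass_kg * radius_m ** 2 * omega

earth = spin_angular_momentum(5.97e24, 6.371e6, 24)    # Earth: 24-hour day
jupiter = spin_angular_momentum(1.898e27, 6.991e7, 10)  # Jupiter: ~10-hour day
print(f"Jupiter carries ~{jupiter / earth:,.0f} times Earth's spin angular momentum")
```

Despite being only about 300 times as massive, Jupiter carries tens of thousands of times Earth’s spin angular momentum – which is why the chart needs logarithmic axes.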

There have been a few exceptions to this trend. Mercury and Venus, the planets closest to the Sun, have been affected by the star’s gravitational pull, which exerts a kind of tidal drag on their rotation. This is why their spin-mass correlations don’t sit on the line plotted in the chart above.

However, this hypothesis hasn’t been verified yet. There is no fixed formula which, when plotted, would result in that line. This is why the plots shown above are considered empirical – derived from observation, not theory. As astronomers measure the spin rates of more planets, heavy and light, they will be able to add more points on either side of the line and see how its shape changes.

At the same time, Beta Pictoris b is a young planet – about 20 million years old. Prof. Snellen used this bit of information to explain why it doesn’t sit so precisely on the line:

planet_spin_rate2

Sitting precisely on the line would be an equatorial velocity of around 50 km/s. But because of its youth, Prof. Snellen explained, this exoplanet is still giving off a lot of heat (“this is why we can observe it”) and cooling down. In the next hundreds of millions of years, it will become the size of Jupiter. If it conserves its angular momentum during this process, it will go about its life pirouetting at 50 km/s. This would mean a sunrise every 3 hours.
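The day length follows from dividing the circumference by the equatorial velocity. A quick sketch – the planet’s current radius of roughly 1.65 Jupiter radii is an assumption back-calculated from the eight-hour day, not a figure from the paper:

```python
import math

R_JUPITER_KM = 69_911

def rotation_period_hours(radius_km, equatorial_velocity_km_s):
    """One full rotation = circumference / equatorial speed."""
    return 2 * math.pi * radius_km / equatorial_velocity_km_s / 3600

# Today: a puffed-up young planet (~1.65 Jupiter radii) spinning at 25 km/s.
print(f"Now:   {rotation_period_hours(1.65 * R_JUPITER_KM, 25):.1f} h per day")
# After contracting to Jupiter's size while conserving angular momentum:
print(f"Later: {rotation_period_hours(R_JUPITER_KM, 50):.1f} h per day")
```

At Jupiter’s radius the arithmetic gives about two and a half hours, which the “3 hours” above rounds up.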

I think we can stop complaining about our days being too long.


Spin velocity v. Escape velocity

Should the empirical relationship hold true, it will mean that the heaviest planets – or the lightest stars – will be spinning at incredible rates. In fact, the correlation isn’t even linear: even though the line in the first chart is straight, the axes are both logarithmic. It is a log-log plot where, as shown in the chart below, even though the lines are straight, equal lengths of an axis demarcate exponentially increasing values.

log-log
Image: Wikipedia

If the axes were not logarithmic, the line f(x) = x³ (red line) between 0.1 and 1 would look like this:

plot3
Image: Fooplot.com

The equation of a straight line in a log-log plot is a monomial, and goes like this: y = axᵏ. In other words, y varies non-linearly with x, i.e. a planet’s spin rate varies non-linearly with its mass. Say, if k = 5 and a (a scaling constant) = 1, then if x increases from 2 to 4, y will increase from 32 to 1,024!
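That arithmetic is easy to check:

```python
def monomial(x, a=1, k=5):
    """Power law y = a * x**k -- a straight line on a log-log plot."""
    return a * x ** k

print(monomial(2))  # 32
print(monomial(4))  # 1024
# Doubling x multiplies y by 2**k = 32 -- far steeper than linear growth.
```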

Of course, a common, and often joked-about, assumption among physicists has been made: that the planet is a spherical object. In reality, the planet may not be perfectly spherical (have you known a perfectly spherical ball of gas?), but that’s okay. What’s important is that the monomial equation can be applied to a rotating planet.

Would this mean there might be planets out there rotating at hundreds of kilometres per second? Yes, if all that we’ve discussed until now holds.

… but no, if you discuss some more. Watch this video, then read the two points below it.

  1. The motorcyclists are driving their bikes around an apparent centre. What keeps them from falling to the bottom of the sphere is the centrifugal force: the faster they go, the harder it pushes them against the sphere’s surface. In general, any rotating body experiences this force: matter in the body’s interior is flung away from the centre of rotation, toward the surface. And such a rotating body can be a planet, too.
  2. Any planet – big or small – exerts some gravitational pull. If you jump on Earth’s surface, you don’t shoot off into space. You return to land because Earth’s gravitational pull doesn’t let you go that easily. To escape once and for all, like rockets sometimes do, you need to leave the surface at a speed equal to the planet’s escape velocity. On Earth, that speed is 11.2 km/s. Anything moving up from Earth’s surface at this speed or more is destined to escape the planet’s gravity.

Put points 1 and 2 together and you realize that if a planet’s equatorial velocity is greater than its escape velocity, it’s going to break apart. This inequality puts a ceiling on how fast a planet can spin. But then, does it also place a ceiling on how big a planet can be? Prof. Snellen to the rescue:

Yes, and this is probably bringing us to the origin of this spin-mass relation. Planets cannot spin much faster than this relation predicts, otherwise they would spin faster than the escape velocity, and that would indeed break the planet apart. Apparently a proto-planet accretes near the maximum amount of gas such that it obtains a near-maximum spin-rate. If it accretes more, the growth in mass becomes very inefficient.

(Emphasis mine.)
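The ceiling Prof. Snellen describes can be sketched numerically by comparing the two velocities for a Jupiter-like planet (the mass, radius and day length below are textbook round figures, used only for illustration):

```python
import math

G = 6.67e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity_km_s(mass_kg, radius_m):
    """v_e = sqrt(2GM/r), ignoring rotation and oblateness."""
    return math.sqrt(2 * G * mass_kg / radius_m) / 1000

def equatorial_velocity_km_s(radius_m, day_hours):
    """Speed of a point on the equator: circumference / day length."""
    return 2 * math.pi * radius_m / (day_hours * 3600) / 1000

M_JUP, R_JUP = 1.898e27, 6.991e7
print(f"Jupiter escape velocity:     {escape_velocity_km_s(M_JUP, R_JUP):.1f} km/s")
print(f"Jupiter equatorial velocity: {equatorial_velocity_km_s(R_JUP, 10):.1f} km/s")
```

Jupiter spins at only about a fifth of its escape velocity; a planet whose equatorial velocity approached the escape figure would start shedding mass from its equator.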


Acting forces

The answer will also depend on the forces acting on the planet’s interior. To illustrate, consider the neutron star. These are the collapsed cores of stars that were once massive but are now dead. They are almost completely composed of neutrons (yes, the subatomic particles), are usually about 10 km wide, and have 1.5-4 times the mass of our Sun. That implies an extremely high density: 1,000 litres of matter at that density would weigh a million trillion (10¹⁸) kg, while 1,000 litres of water on Earth weighs 1,000 kg.

Neutron stars spin extremely fast, more than 600 times per second. If we assume the diameter is 10 km, the circumference would be 10π ≈ 31 km. To get the equatorial velocity:

Vspin = circumference × frequency = 31 km × 600 s⁻¹ = 18,600 km/s.

Is its escape velocity higher? Let’s find out.

Ve = √(2GM/r)

G = 6.67×10⁻¹¹ m³ kg⁻¹ s⁻²

M = density × volume = 10¹⁸ kg/m³ × (4/3 × π × (5,000 m)³) ≈ 5.2×10²⁹ kg

r = 5 km = 5,000 m

∴ Ve = √(2 × 6.67×10⁻¹¹ × 5.2×10²⁹ / 5,000) ≈ 1.2×10⁸ m/s = ~118,000 km/s

So, if you wanted to launch a rocket from the surface of a neutron star and have it escape the body’s gravitational pull, it would have to take off at more than a third the speed of light. However, you wouldn’t get this far. Water’s density should have given it away: any object would be crushed and ground up under the influence of the neutron star’s phenomenal gravity. Moreover, at the surface of a neutron star, the strong nuclear force is also at play, the force that keeps neutrons from disintegrating into smaller particles. This force is 10³² times stronger than gravity, and the equation for escape velocity does not account for it.
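The same back-of-the-envelope numbers in code, assuming a uniform 10-km-wide sphere of density 10¹⁸ kg/m³ (both are the illustrative figures used here, not measured properties of any particular neutron star):

```python
import math

G = 6.67e-11  # gravitational constant, m^3 kg^-1 s^-2

def neutron_star_velocities(radius_m, density_kg_m3, spins_per_second):
    """Equatorial and escape velocities (km/s) for a uniform-density sphere."""
    mass = density_kg_m3 * (4 / 3) * math.pi * radius_m ** 3
    v_spin = 2 * math.pi * radius_m * spins_per_second / 1000
    v_escape = math.sqrt(2 * G * mass / radius_m) / 1000
    return v_spin, v_escape

v_spin, v_esc = neutron_star_velocities(5_000, 1e18, 600)
print(f"Equatorial velocity: {v_spin:,.0f} km/s")
print(f"Escape velocity:     {v_esc:,.0f} km/s")
```

The spin velocity stays comfortably below the escape velocity, which is why such an object can hold together – though, as noted above, Newtonian escape velocity is only a crude guide at these densities.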

However, neutron stars are a unique class of objects – somewhere between a white dwarf and a black hole. Even their formation has nothing in common with a planet’s. On a ‘conventional’ planet, the dominant force will be the gravitational force. As a result, there could be a limit on how big planets can get before we’re talking about some other kinds of bodies.

This is actually the case in the screenshot from the Leiden team’s pre-print paper, which I’ll paste here once again.

planet_spin_rate1

See those circles toward the top-right corner? They represent brown dwarfs, gaseous bodies that weigh 13-75 times as much as Jupiter. They are considered too light to sustain the fusion of hydrogen into helium, casting them into a limbo between stars and planets. As Prof. Snellen calls them, they are “failed stars”. In the chart, they occupy a smattering of space beyond Beta Pictoris b. The connection between them and other planets will be interesting to probe, since they may have formed in a different way.

Disruption during formation is actually why Pluto-Charon and Earth-Moon were clubbed in the first chart as well. Some theories of the Moon’s formation suggest that a large body crashed into Earth while it was forming, knocking off chunks of rock that condensed into our satellite. For Pluto and Charon, the Kuiper Belt might’ve been involved. These influences would have altered the planets’ spin dynamics, but as long as we don’t know exactly how these moons formed, we can’t be sure how, or by how much.

The answer to all these questions, then, is to keep extending the line. At the moment, the only planets for which the spin rate can be measured are very massive gas giants. If this mass-spin relation is really universal, then one would expect them all to have high spin rates. “That is something to investigate now, to see whether Beta Pictoris b is the first of a general trend or whether it is an outlier.”


Reference:

Snellen et al., ‘Fast spin of the young extrasolar planet β Pictoris b’, Nature, doi:10.1038/nature13253

Our universe, the poor man’s accelerator

The Hindu
March 25, 2014

On March 17, radio astronomers from the Harvard-Smithsonian Center for Astrophysics, Massachusetts, announced a remarkable discovery. They found evidence of primordial gravitational waves imprinted on the cosmic microwave background (CMB), a field of energy pervading the universe.

A confirmation that these waves exist is a validation of the theory of cosmic inflation. It describes the universe’s behaviour less than one-billionth of a second after it was born in the Big Bang, about 14 billion years ago, when it witnessed a brief but tremendous growth spurt. The residual energy of the Bang is the CMB, and the imprint of gravitational waves on it is like the sonorous clang of a bell (the CMB) struck powerfully by cosmic inflation. Thanks to the announcement, now we know the bell was struck.

Detecting these waves is difficult. In fact, astrophysicists used to think this day was many more years into the future. If it has come now, we must be thankful to human ingenuity. There is more work to be done, of course, because the results hold only for a small patch of the sky surveyed, and there is also data due from studies done until 2012 on the CMB. Should any disagreement with the recent findings arise, scientists will have to rework their theories.

Remarkable in other ways

The astronomers from the Harvard-Smithsonian used a telescope called BICEP2, situated at the South Pole, to make their observations of the CMB. In turn, BICEP2’s readings of the CMB imply that when cosmic inflation occurred about 14 billion years ago, it happened at a tremendous energy of 10¹⁶ GeV (GeV is a unit of energy used in particle physics). Astrophysicists didn’t think it would be so high.

Even the Large Hadron Collider (LHC), the world’s most powerful particle accelerator, manages a puny 10⁴ GeV. The words of the physicist Yakov Zel’dovich, “The universe is the poor man’s accelerator” – written in the 1970s – prove timeless.

This energy at which inflation has occurred has drawn the attention of physicists studying various issues because here, finally, is a window that allows humankind to naturally study high-energy physics by observing the cosmos. Such a view holds many possibilities, too, from the trivial to the grand.

For example, consider the four naturally occurring fundamental forces: gravitation, the strong and weak nuclear forces, and the electromagnetic force. Normally, the strong nuclear, weak nuclear and electromagnetic forces act at very different energies and distances.

However, as we traverse higher and higher energies, these forces start to behave differently, as they might have in the early universe. This gives physicists probing the fundamental texture of nature an opportunity to explore the forces’ behaviours by studying astronomical data — such as from BICEP2 — instead of relying solely on particle accelerators like the LHC.

In fact, at energies around 10¹⁹ GeV, some physicists think gravity might become unified with the non-gravitational forces. However, this isn’t a well-defined goal of science, and it commands less consensus than it invites speculation. Theories like quantum gravity operate at this level, finding support from frameworks like string theory and loop quantum gravity.

Another perspective on cosmic inflation opens another window. Even though we now know that gravitational waves were sent rippling through the universe by cosmic inflation, we don’t know what caused them. An answer to this question has to come from high-energy physics — a journey that has taken diverse paths over the years.

Consider this: cosmic inflation is an effect associated with quantum field theory, which accommodates the three non-gravitational forces. Gravitational waves are an effect of the theories of relativity, which explain gravity. Because we may now have proof that the two effects are related, we have reason to think quantum mechanics and relativity are capable of being combined at a fundamental level. This means a theory unifying all four forces could exist, although that doesn’t mean we’re on the right track.

At present, the Standard Model of particle physics, a paradigm of quantum field theory, is proving to be a mostly valid theory of particle physics, explaining interactions between various fundamental particles. The questions it does not have answers for could be answered by even more comprehensive theories that can use the Standard Model as a springboard to reach for solutions.

Physicists refer to such springboarders as “new physics”— a set of laws and principles capable of answering questions for which “old physics” has no answers; a set of ideas that can make seamless our understanding of nature at different energies.

Supersymmetry

One leading candidate of new physics is a theory called supersymmetry. It is an extension of the Standard Model, especially at higher energies. Finding signs of supersymmetry is one of the goals of the LHC, but in over three years of experiments it has found none. This isn’t the end of the road, however, because supersymmetry holds much promise to solve certain pressing issues in physics which the Standard Model can’t, such as what dark matter is.

Thus, by finding evidence of cosmic inflation at very high energy, radio-astronomers from the Harvard-Smithsonian Center have twanged at one strand of a complex web connecting multiple theories. The help physicists have received from such astronomers is significant and will only mount as we look deeper into our skies.

Ambitious gamma-ray telescope takes shape

I wrote a shortened version of this piece for The Hindu on July 4, 2013. This is the longer version, with some more details thrown in.

Scientists and engineers from 27 countries including India are pitching for a next-generation gamma-ray telescope that could transform the future of high-energy astrophysics.

Called the Cherenkov Telescope Array (CTA), the proposed project is a large array of telescopes to complement existing observatories, the most potent of which are in orbit around Earth. By building it on land, scientists feel the CTA could be much more sophisticated than orbiting observatories, which are limited by logistical constraints.

Werner Hofmann, CTA spokesperson and a scientist at the Max Planck Institute for Nuclear Physics, Germany, told Nature that a comparable orbiting telescope would have to be “the size of a football stadium”.

The CTA’s preliminary designs reveal that it boasts greater angular resolution and 10 times the sensitivity and energy coverage of existing telescopes. The collaboration will finalise the locations for setting up the CTA, which will consist of two networked arrays in the northern and southern hemispheres, by end-2013. Construction is slated to begin in 2015 at a cost of $268 million.

One proposed northern hemisphere location is in Ladakh, Jammu and Kashmir.

Indian CTA collaboration

Dr. Pratik Majumdar, Saha Institute of Nuclear Physics (SINP), Kolkata, said via email, “A survey was undertaken in the late 1980s. Hanle, in Ladakh, was a good site fulfilling most of our needs: very clear and dark skies throughout the year, with a large number of photometric and spectroscopic nights at par with other similar places in the world, like La Palma in Canary Islands and Arizona desert, USA.”

However, it serves to note that the Indian government does not permit foreign nationals to visit Hanle. “I do think India needs to be more proactive about opening up to people from abroad, especially in science and technology, in order to benefit from international collaboration – unfortunately this is not happening,” said Dr. Subir Sarkar, Rudolf Peierls Centre for Theoretical Physics, Oxford University, via email. Dr. Sarkar is a member of the collaboration.

Each network will consist of four 23-metre telescopes to image weaker gamma-ray signals, and dozens of 12-metre and 2-4-metre telescopes to image the really strong ones. Altogether, they will cover an area of 10 sq. km on ground.

Scientists from SINP are also conducting simulations to better understand the performance of CTA.

Led by it, the Indian collaboration comprises Indian Institute of Astrophysics, Bhabha Atomic Research Centre, and Tata Institute of Fundamental Research (TIFR). They will be responsible for building the calibration system with the Max Planck Institute, and developing structural sub-systems of various telescopes to be fabricated in India.

Dr. B.S. Acharya, TIFR, believes the CTA can add great value to existing telescopes in India, especially the HAGAR gamma-ray telescope array in Hanle. “It is a natural extension of our work on ground-based gamma-ray astronomy in India, since 1969,” he said in an email to this Correspondent.

Larger, more powerful

While existing telescopes, like MAGIC (Canary Islands) and VERITAS (Arizona), and the orbiting Fermi-LAT and Swift, are efficient up to the 100-GeV energy mark, the CTA will be able to reach up to 100,000 GeV with the same efficiency.

Gamma rays originate from sources like dark matter annihilation, dying stars and supermassive black holes, whose physics is barely understood. Such sources accelerate protons and electrons to huge energies; these interact with ambient matter, radiation and magnetic fields to generate gamma rays, which then travel through space.

When such a high-energy gamma ray hits atoms in Earth’s upper atmosphere, a shower of particles is produced that cascades downward. Individual telescopes pick these up for analysis, but a network of telescopes spread over a large area collects more of them, tracking them back to their sources more accurately.

Here, CTA’s large collection area will come to play.

“No telescope based at one point on Earth can see the whole sky. The proposed CTA southern observatory will be able to study the centre of the galaxy, while the northern observatory will focus on extragalactic sources,” said Dr. Sarkar.

Gamma-ray astronomy has seen global interest since the early 1950s, when astronomers began to believe some cosmic phenomena ought to emit the radiation. Since telescopes to analyse it were developed in the 1960s, some 150 sources have been mapped. The CTA is expected to chart 1,000 more.

Image: The HESS II gamma-ray telescope in the Khomas Highland, Namibia, currently the world’s largest telescope for gamma-ray astrophysics, possessing a 28-metre-wide mirror.

Image: A NASA photograph of the Voyager space probe, 1977. Photo: Wikimedia Commons

On September 5, 1977, NASA launched the Voyager 1 space probe to study the Jovian planets Jupiter and Saturn, and their moons, and the interstellar medium, the gigantic chasm between star systems. It’s been 35 years and 9 months, and Voyager has kept on, recently entering the boundary between the Solar System and interstellar space.

In 2012, however, when about four times farther from the Sun than Neptune, the probe entered a part of space completely unknown to astronomers.

On June 27, three papers were published in Science discussing what Voyager 1 had encountered, a region at the outermost edge of the Solar System they’re calling the ‘heliosheath depletion region’. They think it’s a feature of the heliosphere, the imagined bubble in space beyond whose borders the Sun has no influence.

“The principal result of the magnetic field observations made by our instrument on Voyager is that the heliosheath depletion region is a previously undetected part of the heliosphere,” said Dr. Leonard Burlaga, an astrophysicist at the NASA-Goddard Space Flight Centre, Maryland, and an author of one of the papers.

“If it were the region beyond the heliosphere, the interstellar medium, we would have expected a change in the magnetic field direction when we crossed the boundary of the region. No change was observed.”

More analysis of the magnetic field observations showed that the heliosheath depletion region has a weak magnetic field – of 0.1 nano-Tesla (nT), 0.6 million times weaker than Earth’s – oriented in such a direction that it could only have arisen because of the Sun. Even so, this weak field was twice as strong as what lay outside it in its vicinity. Astronomers would’ve known why, Burlaga clarifies, if it weren’t for the necessary instrument on the probe being long out of function.

When the probe crossed into the region, this spike in strength was recorded within a day. Moreover, Burlaga and others found that the spike happened thrice and a drop in strength twice, leaving Voyager 1 within the region at the time of their analysis; in fact, no drops have been recorded since August 25, 2012. The implication is that the region is not smooth.

“It is possible that the depletion region has a filamentary character, and we entered three different filaments. However, it is more likely that the boundary of the depletion region was moving toward and away from the sun,” Burlaga said.

The magnetic field and its movement through space are not the only oddities characterising the heliosheath depletion region. Low-energy ions blown outward by the Sun constantly stream out of the heliosphere, but they were markedly absent within the depletion region. Burlaga was plainly surprised: “It was not predicted or even suggested.”

Analysis by Dr. Stamatios Krimigis, the NASA principal investigator for the Low-Energy Charged Particle (LECP) experiment aboard Voyager 1 and an author of the second paper, also found that cosmic rays, which are highly energised charged particles produced by various sources outside the System through unknown mechanisms, weren’t striking Voyager’s detectors equally from all directions. Instead, more hits were being recorded in certain directions inside the heliosheath depletion region.

Burlaga commented, “The sharp increase in the cosmic rays indicates that cosmic rays were able to enter the heliosphere more readily along the magnetic fields of the depletion region.”

Even though Voyager 1 is out there, Krimigis feels that humankind is flying blind: astronomers’ models were, and are, clearly inadequate, and there is no roadmap of what lies ahead. “I feel like Columbus who thought he had gotten to West India, when in fact he had gone to America,” Krimigis contemplates. “We find that nature is much more imaginative than we are.”

With no idea of how or where the strange region originated, we’ll just have to wait and see what additional measurements tell us. Until then, the probe will continue approaching the gateway to the Galaxy.

(This blog post first appeared on The Copernican on June 28, 2013.)

Some more questions concerning Herschel…

The Herschel Space Observatory, a.k.a. Herschel, was the largest space telescope at the time of its launch and still is. With a collecting area twice as large as the Hubble Space Telescope’s, and operating in the far-infrared part of the spectrum, Herschel could look through clouds of gas and dust in the farthest reaches of the universe and pick up even really faint signals from distant stars and nebulae.

To do this, it depends on three instruments – PACS, SPIRE and HIFI – that are cooled to fractions of a degree above absolute zero, colder than anything found naturally in the solar system. At these temperatures, the instruments are at their most sensitive. Herschel achieved this frigidity using liquid helium, a superfluid coolant that constantly boils off as it removes heat from the instruments. By the end of this month (March 2013), all the helium will have boiled off, leaving PACS, SPIRE and HIFI effectively blind.

I wrote an article in The Hindu on March 28, a run-of-the-mill news piece that had to leave out some interesting bits of information I’d gathered from the lead scientist, mission manager, and project scientist I’d spoken to. I’ve attached below my questions and their answers, which contain said bits of information.

Here are the answers (my questions are in bold).

Herschel was expected to run out of helium in early 2013. Considering its unique position in space, why wasn’t a “warm” experiment installed on-board?

MATT GRIFFIN – Lead Scientist, SPIRE Instrument, Herschel Space Observatory

Herschel was designed to operate in the far infrared part of the spectrum – wavelengths typically hundreds of times longer than the wavelengths of visible light. For the far infrared, extreme cooling is always required. For a telescope operating at shorter wavelengths (about ten times longer in wavelength than visible light) a “warm mission” is feasible. This could have been done with Herschel, but it would have required that the surface of the telescope be made far more precise and smooth. That would have made it very much more expensive, leaving less money available for the rest of the spacecraft and the instruments.

Any space mission must be built within a certain budget, and it is usually best to design it to be as effective as possible for a certain wavelength range.  Herschel actually covers a very wide range – from 55 to nearly 700 microns in wavelength.  That’s more than a factor of ten, which is very impressive. To make a warm mission possible would have meant making the telescope good enough to work at ten times shorter wavelength, and adding a fourth instrument.

Herschel was and is the only space telescope observing in the submillimeter and far infrared part of the spectrum. After it goes blind, are there any plans to replace it with a more comprehensive telescope? Or how far do you think the loss of data after its close-of-ops will be offset by upcoming ground-based telescopes such as ALMA?

GORAN PILBRATT – Project Scientist, European Space Research and Technology Centre, Noordwijk

There are currently no concrete ESA (or other) plans for a replacement or follow-up mission. What many people hope for is the Japanese SPICA mission, which may fly beyond 2020 with an ESA telescope and a European instrument called SAFARI, both based on Herschel experience. Time will tell. Of course the NASA JWST will be important to almost every astronomer when it finally flies. In the meantime ALMA and also the airborne SOFIA observatory are of interest. There is also a lot of follow-up observing to be done with many different ground-based telescopes based on Herschel data. This is already happening.

LEO METCALFE – Mission Manager, ESA Centre, Madrid

After the anticipated exhaustion of the liquid helium cryogen which keeps the Herschel instruments cold, scientific observations with Herschel will cease. However, the data gathered during the 4 years of operations, stored in the Herschel Science Archive (HSA) at ESAC, will remain available to the worldwide astronomical community for the foreseeable future. Until the end of 2017, ESA – for much of the time in collaboration with the instrument teams and the NASA Herschel Science Center – will actively support users in the exploitation of the data.

That said, there is no comparable mission in the currently approved ESA programme considering launches into the early 2020s. The Japanese Aerospace Exploration Agency (JAXA) mission SPICA is of comparable size to Herschel and will operate out to wavelengths a little over 210 microns – in the far-infrared, but only barely reaching what would generally be termed the sub-millimetre region. It may be launched before 2020.

Because of absorption of infrared radiation by the Earth’s atmosphere, ground based telescopes have limited capacity to compete with orbital systems over much of the Herschel wavelength range.

However, the Atacama Large Millimeter Array (ALMA) in the Chilean Andes overlaps in its wavelength coverage with the sub-millimeter end of the Herschel range. But a typical map size for ALMA might be on the order of, say, 10 arcseconds (the full Moon spans about 1800 arcseconds, to give some scale), while a typical Herschel map might cover an area 10 arcminutes (600 arcseconds) on a side. Instead of large area coverage, ALMA provides extremely high spatial resolutions (down to small fractions of an arcsecond), far finer than Herschel could achieve.
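To put those angular scales side by side, here is a small sketch using only the rough, illustrative figures quoted above (not instrument specifications):

```python
# Rough comparison of the sky-coverage scales mentioned above.
moon_diameter_arcsec = 1800.0     # full Moon, ~0.5 degrees across
alma_map_arcsec = 10.0            # typical ALMA map width (illustrative)
herschel_map_arcsec = 10 * 60.0   # 10 arcminutes = 600 arcseconds

# Treating both maps as squares, compare the areas they cover:
area_ratio = (herschel_map_arcsec / alma_map_arcsec) ** 2
print(f"A Herschel map covers ~{area_ratio:,.0f} times the area of an ALMA map")
print(f"The Moon is {moon_diameter_arcsec / herschel_map_arcsec:.0f} Herschel map widths across")
```

A factor of 60 in width means a factor of 3,600 in area, which is why ALMA follows up individual sources rather than redoing Herschel’s surveys.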

So ALMA is well suited to the detailed follow-up of Herschel observations of single high-interest sources, rather than providing comparable coverage to Herschel.

There must be a lot of data left to be analysed that was gathered by Herschel. While creating a legacy archive, will you also be following some threads of investigation over others?

MATT GRIFFIN – Lead Scientist, SPIRE Instrument, Herschel Space Observatory

Although Herschel’s life was limited, it was designed to make observations very quickly and efficiently, and it has collected a huge amount of data.  It will be very important during the next few years, in what we call the Post-Operations period, to process all the data in the best and the most uniform way, and to make it available in an easy-to-use archive for future astronomers.

This means that the real scientific power of Herschel is still to be realised, as its results will be used for many years in the future. Only a small fraction of the data from Herschel has so far been fully investigated.

It is clear that when the data are fully explored, and when Herschel’s observations are followed up with other telescopes, a great deal more will be learned.  This is especially true for the large surveys that Herschel has done – surveys of many thousands of distant galaxies, and surveys of clouds of gas and dust in our own galaxy in which stars are forming. In the coming years, although Herschel will no longer operate, its scientific project will continue – to understand the birth processes of stars and galaxies.

When did you start working with the Herschel mission? How has your experience been with it? What does the team that worked on Herschel move on to after this?

LEO METCALFE – Mission Manager, ESA Centre, Madrid

In 1982 the Far Infrared and Sub-millimetre Telescope (FIRST) was proposed to ESA. This mission concept evolved and was eventually named Herschel, in honour of the great German-born British astronomer William Herschel.

The build-up of the ESA team for Herschel started in earnest in the early 2000s. I came on board as Herschel Science Operations Manager in 2007, with the main task of integrating and training the ESA Science Operations Team and the wider Science Ground Segment (SGS – which includes the external-to-ESA Instrument Control Centres) to be a smoothly functioning system in time for launch, which took place in May 2009.

So my experience of Herschel began with the recruitment of many of the operational team members and the integration of the Science Ground Segment (SGS), focussed on the pre-launch end-to-end testing of the entire observatory system – with data flowing from the spacecraft while it was still on the ground in the test facility at ESTEC in the Netherlands – and continued through a series of pre-flight simulations that put the SGS through all the procedures it would need to follow during operations.

As a result we “hit the ground running” after launch, and the operations of the SGS have been smooth throughout the mission. Those operations have spanned the Launch and Early Orbit (LEOP) Phase, the in-flight Commissioning Phase, the Performance Verification, Science Demonstration, and Routine Operations Phases of the mission, and have included the recovery from the early failure of the prime chain of the HIFI instrument, and the handling of various lesser contingencies caused by ionising radiation induced corruptions of on-board instrument memory, among others.

It has been a fast-paced and exciting mission which in the end has returned data from almost 35,000 individual science observations. It’s going to be hard to adjust to not having an active spacecraft up there.

Concerning what happens to the team(s) that have worked on Herschel: The ESA team that supervised the construction of the Spacecraft already moved on to other missions soon after Herschel was launched.

The Science Operations Team at the Herschel Science Centre at HSC/ESAC in Spain, together with the Instrument Control Centres (ICCs) formed by the teams that built the scientific instruments (distributed through the ESA member states) and the Mission Operations Centre at ESOC in Germany, have been responsible for the operation of the Spacecraft and its instruments in flight. Those teams will now run down.

A fraction of the people will continue to work in the Herschel project through its almost 5-year Post-operations Phase mentioned above, while the remainder have sought or will seek positions with upcoming missions like Rosetta, Gaia, BepiColombo, Solar Orbiter, Euclid … or in some cases may move on to other phases of their careers outside the space sector.

We are talking about people who are highly experienced software engineers, or PhD physicists or astronomers. Generally they are highly employable.

EOM


EUCLID/ESA: A cosmic vision looking into the darkness

I spoke to Dr. Giuseppe Racca and Dr. Rene Laureijs, both of the ESA, regarding the EUCLID mission, which will be the world’s first space telescope launched to study dark energy and dark matter. For the ESA, EUCLID will be the centerpiece of their Cosmic Vision programme (2015-2025). Dr. Racca is the mission’s project manager, while Dr. Laureijs is a project scientist.

Could you explain, in simple terms, what the Lagrange point is, and how being able to study the universe from that vantage point could help the study? 

GR: The Sun-Earth Lagrangian point 2 (SEL2) is a point in space about 1.5 million km from Earth, in the direction opposite to the Sun, co-rotating with the Earth around the Sun. It is a nice and calm point from which to make observations. It is not disturbed by heat fluxes from the Earth, but at the same time it is not so far away as to prevent sending the large amounts of observation data back to Earth. The orbit around SEL2 that Euclid will employ is rather large; it is easy to reach (in terms of launcher capability) and not expensive to control (in terms of fuel required for orbit corrections and maintenance manoeuvres).
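That 1.5-million-km figure can be recovered from the standard Hill-sphere approximation for the L2 distance, r ≈ R·(m/3M)^(1/3). A minimal sketch, using textbook values for the masses and the Earth–Sun distance:

```python
# Approximate the Sun-Earth L2 distance with r = R * (m / (3*M))**(1/3),
# where R is the Earth-Sun distance, m the Earth's mass and M the Sun's.
# (Including the Moon's mass would shift the answer only slightly.)
R_km = 1.496e8        # mean Earth-Sun distance (1 AU), in km
m_earth = 5.972e24    # Earth's mass, in kg
M_sun = 1.989e30      # Sun's mass, in kg

r_l2_km = R_km * (m_earth / (3 * M_sun)) ** (1 / 3)
print(f"L2 lies about {r_l2_km / 1e6:.2f} million km beyond Earth")
```

The approximation lands at roughly 1.5 million km, in line with the figure Dr. Racca quotes.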

Does Euclid in any way play into a broader program by ESA to delve into the Cosmic Frontier? Are there future upgrades/extensions planned? 

RL: Euclid is the second approved medium-class mission of ESA’s Cosmic Vision programme. The first one is Solar Orbiter, which studies the Sun at close range. The Cosmic Vision programme sets out a plan for Large, Medium and Small missions in the decade 2015-2025. ESA’s missions Planck, which is presently in operation at L2, and Euclid will study the beginning, the evolution, and the predicted end of our Universe.

GR: A theme of this programme is: “How did the Universe originate and what is it made of?” Euclid is the first mission of this part of Cosmic Vision 2015-2025. There will be other missions, which have not been selected yet.

What’s NASA’s role in all of this? What are the different ways in which they will be participating in the Euclid mission? Is this a mission-specific commitment or, again, is it encompassed by a broader participation agreement?

GR: The NASA participation in the Euclid mission is very important but rather limited in extent. NASA will provide the near-infrared detectors for one of the two Euclid instruments. In addition, it will contribute to the scientific investigation with a team of about 40 US scientists. Financially speaking, NASA’s contribution is limited to some 3-4% of the total Euclid mission cost.

RL: The Euclid Memorandum of Understanding between ESA and NASA is mission specific and does not involve a broader participation agreement. First of all, NASA will provide the detectors for the infrared instrument. Secondly, NASA will support 40 US scientists to participate in the scientific exploitation of the data. These US scientists will be part of the larger Euclid Consortium, which contains nearly 1000 mostly European scientists.

Do you have any goals in mind? Anything specific or exciting that you expect to find? Who gets the data?

GR: The goals of the Euclid mission are extremely exciting: in a few words, we want to investigate the nature and origin of the unseen Universe: the dark matter, five times more abundant than the ordinary matter made of atoms, and the dark energy, which causes the accelerating expansion of the Universe. The “dark Universe” is reckoned today to amount to 95% of the total matter-energy density. Euclid will survey about 40% of the sky, looking back in cosmic time up to 10 billion years. A smaller part (1% of the sky) will look back to when the universe was only a few million years old. This three-dimensional survey will allow us to map the extent and history of dark matter and dark energy. The results of the mission will allow us to understand the nature of dark matter and its place in an extension of the current standard model. Concerning dark energy, we will be able to distinguish between the so-called “quintessence” and a modification to current theories of gravity, including General Relativity.

RL: Euclid’s goals are to measure the accelerated expansion of the universe, which tells us about dark energy; to determine the properties of gravity on cosmic scales; to learn about the properties of dark matter; and to refine the initial conditions leading to the Universe we see now. These goals have been chosen carefully, and the instrumentation of Euclid is optimised to reach them as well as possible. The Euclid data opens the discovery space for many other areas in astronomy: Euclid will literally measure billions of stars and galaxies at visible and infrared wavelengths, with a very high image quality, comparable to that of the Hubble Space Telescope. The most exciting prospect is the availability of these sharp images, which will certainly reveal new classes of objects and new science. The nominal mission will last for 6 years, but the first year of data will already become public 26 months after the start of the survey.

When will the EUCLID data be released?

GR: The Euclid data will be released to the public one year after their collection and will be made available to all researchers in the world.