A sanitised fuel

I debated with myself for ten minutes about whether I should criticise, on this blog, an article that appeared on the DD News website. The article is flawed in the way many science articles on the internet are, but at the same time it appeared on DD News – a news outlet that has a longstanding reputation for playing it safe, so to speak, despite being a state-run entity. But what ultimately changed my mind was that the Department of Science and Technology (DST) quote-tweeted the article on Twitter, writing that the findings were the product of a study the department had funded. The article goes:

As the world runs out of fossil fuels and looks out for alternate sources of clean energy, there is good news from the Krishna-Godavari (KG) basin. The methane hydrate deposit in this basin is a rich source that will ensure adequate supplies of methane, a natural gas. Methane is a clean and economical fuel. It is estimated that one cubic meter of methane hydrate contains 160-180 cubic meters of methane. Even the lowest estimate of methane present in the methane hydrates in KG Basin is twice that of all fossil fuel reserves available worldwide.

Methane is known as a clean fuel – but the label is a bit of a misnomer. When combusted, it produces only carbon dioxide and water, rather than the host of other compounds that burning coal or crude oil releases. So as a fuel, it is cleaner than other fossil fuels like crude oil and coal. However, it still releases carbon dioxide, and even if it does so in quantities appreciably lower than the combustion of coal or crude oil does, we don’t need more of that in the atmosphere. One report has found the planet’s surface could breach the 1.5° C warming mark, if only temporarily, as soon as 2024. We need more methane in the atmosphere – such as through fugitive emissions – even less: over a 20-year period, a kilogram of methane has the same greenhouse potential as a little over 80 kilograms of carbon dioxide. Ultimately, what we need is to lower consumption.
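The arithmetic behind this comparison is worth spelling out. A minimal sketch follows; the molar masses and the figure of 82.5 (the IPCC AR6 20-year global warming potential for methane, consistent with “a little over 80” above) are my additions, not from the article:

```python
# Back-of-the-envelope numbers for the paragraph above. Assumptions:
# molar masses of CH4 and CO2, and methane's 20-year global warming
# potential of ~82.5 (IPCC AR6).
M_CH4, M_CO2 = 16.04, 44.01    # molar masses, g/mol
GWP20_CH4 = 82.5               # kg CO2-equivalent per kg of CH4, over 20 years

# Complete combustion: CH4 + 2 O2 -> CO2 + 2 H2O
co2_per_kg_burned = M_CO2 / M_CH4      # ~2.74 kg CO2 per kg of CH4 burned

# The same kilogram leaked unburned, as a fugitive emission:
co2e_per_kg_leaked = GWP20_CH4         # ~82.5 kg CO2-equivalent

# Over a 20-year horizon, leaking methane is ~30x worse than burning it
print(round(co2e_per_kg_leaked / co2_per_kg_burned))   # prints 30
```

This is why fugitive emissions matter so much: every kilogram of methane that escapes uncombusted does roughly thirty times the climate damage of a kilogram that is burned.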

This said, the cleanliness of a fuel is to my mind context-specific. The advantages methane offers relative to other fuels in common use today would almost entirely be offset in India by the government’s persistent weakening of environmental protections, pollution-control regulations and indigenous peoples’ rights. (The Krishna-Godavari basin has already been reeling under the impact of the ONGC’s hydrocarbon extraction activities since the 1970s.) Even if we possessed technologies that allowed us to obtain and use methane with 100% efficiency, the Centre would still resort only to the non-democratic methods it has adopted in the last half-decade or so, bulldozing ecosystems and rural livelihoods alike to get what it wants – which is ultimately the same thing: economic growth. This is at least the path it has been carving out for itself. Methane extracted from a large river basin is not worth this.

Given the questionable claims the article advances, the DST’s involvement is important for these two reasons – as well as for a third.

At the broadest level, no energy source is completely clean. Even solar and wind power generation and consumption require access to land and to infrastructure whose design and production are by no stretch of the imagination ‘green’. Similarly, and setting aside methane’s substantial greenhouse potential for a moment, extracting methane from the Krishna-Godavari river basin is bound to exact a steep price – directly as well as indirectly, in the form of a damaged river basin that will no longer be able to provide the ecosystem services it currently does. In addition, storing and transporting methane is painful because it is a low-density gas, so engineers prefer converting it into liquefied natural gas or methanol first – and doing so is at present an energy-intensive process.

The DST’s endorsement of the prospect of using this methane as fuel is worrying because it suggests the department is content to believe a study it funded led to a supposedly positive finding – and is not concerned with its wider, deadlier implications. At any other time, this anarchy of aspirations, whereby one department doesn’t have to be concerned with the goals of another, would be siloisation of the worst sort – as if mining for hydrocarbons in a river-basin is cleanly separable from water pollution, shortage and the cascade of ecological imbalances brought on by the local endangerment of various plant, animal and bird species.

However, it would be delusional to accuse the current Government of India of being anarchic. This government has displayed a breathtaking fetish for centralising authority and power. Instead, the DST’s seemingly harmless tweet and DD News’s insular article are symptoms of a problem that rests at the other extreme: where all departments are pressed to the common cause of plundering India’s natural resources and destroying its ecological security, even at risk of undermining their own respective mandates.

The singularity of purpose here may or may not have rendered methane an absolutely ‘clean’ fuel – but it may be a glimpse of a DST simply reflecting what the government would like to reduce the country’s scientific enterprise to: a deeply clinical affair, in which scientists should submit to the national interest and not be concerned about other things.

Prestige journals and their prestigious mistakes

On June 24, the journal Nature Scientific Reports published a paper claiming that Earth’s surface was warming by more than what anthropogenic sources could account for because the planet was simply moving closer to the Sun. That is, global warming was the result of changes in the Earth-Sun distance. Excerpt:

The oscillations of the baseline of solar magnetic field are likely to be caused by the solar inertial motion about the barycentre of the solar system caused by large planets. This, in turn, is closely linked to an increase of solar irradiance caused by the positions of the Sun either closer to aphelion and autumn equinox or perihelion and spring equinox. Therefore, the oscillations of the baseline define the global trend of solar magnetic field and solar irradiance over a period of about 2100 years. In the current millennium since Maunder minimum we have the increase of the baseline magnetic field and solar irradiance for another 580 years. This increase leads to the terrestrial temperature increase as noted by Akasofu [26] during the past two hundred years.

The New Scientist reported on July 16 that Nature has since kickstarted an “established process” to investigate how a paper with “egregious errors” cleared peer-review and was published. One of the scientists it quotes says the journal should retract the paper if it wants to “retain any credibility”, but the fact that the paper cleared peer-review in the first place is to me the most notable part of this story. It is a reminder that peer-review has a failure rate, and that ‘prestige’ titles like Nature can publish crap (for instance, look at the retraction index chart here).

That said, I am a little concerned because Scientific Reports is an open-access title. I hope it didn’t simply publish the paper in exchange for a fee like its less credible counterparts.

Almost as if it timed it to the day, the journal Science – Nature‘s big rival across the ocean – published a paper that did make legitimate claims but which invites disagreement on a different count. It describes a way to keep sea levels from rising due to the melting of Antarctic ice. Excerpt:

… we show that the [West Antarctic Ice Sheet] may be stabilized through mass deposition in coastal regions around Pine Island and Thwaites glaciers. In our numerical simulations, a minimum of 7400 [billion tonnes] of additional snowfall stabilizes the flow if applied over a short period of 10 years onto the region (~2 mm/year sea level equivalent). Mass deposition at a lower rate increases the intervention time and the required total amount of snow.

While I’m all for curiosity-driven research, climate change is rapidly becoming a climate emergency in many parts of the world – not least where the poor live – without a corresponding set of protocols, resources and schemes to deal with it. In this situation, papers like this one – and journals like Science that publish them – only lend credibility to solutions like the one proposed above, when in fact they should be trashed for implying that it’s okay to keep emitting more carbon into the atmosphere because we can apply a band-aid of snow over the ice sheet and postpone the consequences. Of course, the paper’s authors acknowledge the following:

Operations such as the one discussed pose the risk of moral hazard. We therefore stress that these projects are not an alternative to strengthening the efforts of climate mitigation. The ambitious reduction of greenhouse gas emissions is and will be the main lever to mitigate the impacts of sea level rise. The simulations of the current study do not consider a warming ocean and atmosphere as can be expected from the increase in anthropogenic CO2. The computed mass deposition scenarios are therefore valid only under a simultaneous drastic reduction of global CO2 emissions.

… but these words appear only in the last few lines of the paper (before the ‘materials and methods’ section), as if they were a token addition to what reads, overall, like a dispassionate analysis. This is also borne out by the study not having modelled the deposition idea together with falling CO2 emissions.

I’m a big fan of curiosity-driven science as a matter of principle. While it seemed hard at first to reconcile my emotions on the Science paper with that position, I realised that I believe both curiosity- and application-driven research should still be conscientious. Setting aside the endless questions about how we ought to spend the taxpayers’ dollars – if only because interfering with research on the basis of public interest is a terrible idea – it is my personal, non-prescriptive opinion that research should still endeavour to be non-destructive (at least to the best of the researchers’ knowledge) when advancing new solutions to known problems.

If that is not possible, then researchers should acknowledge that their work could have real consequences and, setting aside all pretence of being quantitative, objective, etc., clarify the moral qualities of their work. This the authors of the Science paper have done, but there are no brownie points for plucking low-hanging fruit. Or maybe there should be, considering there has been other work (also about climate geo-engineering) whose authors have written that they “make no judgment on the desirability” of their proposal.

Most of all, let us not forget that being Nature or Science doesn’t automatically make what they put out better for having been published by them.

An illustration of an ocean planet with two natural satellites. Credit: Lucianomendez/Wikimedia Commons, CC BY-SA 4.0

Not all waterworlds can host life

During its formation, Venus was in the Solar System’s habitable zone – much like Earth is now. Scientists think its surface contained liquid water, and its atmosphere was somewhat like Earth’s. Maybe there was life, too. However, as the levels of carbon dioxide kept increasing, its atmosphere became opaque, trapping most of the heat radiated by its surface, and Venus heated up to the point where its oceans boiled away. Today, life on the planet’s waterless surface is considered unlikely, except perhaps by those who’ve read a November 2014 study involving supercritical carbon dioxide, and those who believe in Hell.

Why can’t this be the case on alien worlds possessing water as well? Discoveries made since the mid-1990s – especially by the Kepler space telescope and probes in the Jovian and Saturnian systems – have unearthed a variety of worlds that could, or do, have liquid water on or below the surface. On Earth, life has been found wherever liquid water has been found, so liquid water on other planets and moons gets scientists excited about the possibility of alien life. Recent discoveries of a subsurface ocean on Europa and possibly on some other moons of Jupiter and Saturn have even prompted NASA to plan for a probe to Europa in the mid-2020s.

A study published online (paywall) in the Monthly Notices of the Royal Astronomical Society applies the brakes on that excitement to some extent. One kind of exoplanet that scientists think could host lots of liquid water – some 100 times the amount of water on Earth, in fact – is the so-called ‘waterworld‘. These planets would have oceans so deep and wide that, according to the study, the oceans’ effects on themselves and on the planet’s climate would be incomparable to those on Earth – and altogether might not be hospitable to life the way we know liquid water usually can be.

The study’s authors write, “One important consequence is, for example, the formation of high-pressure water ice at the bottom of the ocean, which prevents the immediate contact of the planetary crust with the liquid ocean.” This in turn mutes the carbon-silicate cycle, a recycling of carbon and silicon compounds on the ocean floor that determines how much carbon dioxide is released from the oceans into the atmosphere.

The authors calculate that on an (at least) Earth-sized waterworld in the habitable zone of its star, there can be 25-100 Earth oceans’ worth of water for temperatures ranging from the freezing point of water to just beyond the boiling point. So a colder planet, say at 0° C, would have a smaller ocean and less liquid water with which to absorb carbon dioxide (and its absorptive capabilities can’t ‘power up’ without the carbon-silicate cycle). Yet at lower temperatures, oceans are able to dissolve more gases, even as the pressure the gas exerts on the ocean’s surface is higher. So a colder planet with a smaller ocean will dissolve more carbon dioxide from the atmosphere – turning the planet even cooler.

Similarly, a warmer waterworld will be able to absorb less carbon dioxide, letting the greenhouse gas accumulate in the atmosphere, heat the surface up and eventually boil the oceans away (like on Venus). In short, a waterworld whose temperatures are outside a specific range will become hotter if it’s warm and even colder if it’s cold. These runaway effects can occur pretty quickly, too. 

Based on the chemical properties of water and carbon dioxide, the scientists estimate that the life-friendly temperature range is from 273 K to 400 K (0° to 127° C). And even in this range, there could be threats to life in the form of ocean acidity. On Earth, limestone that’s in contact with water dissolves and keeps the water’s acidity in check, but this may not be happening on waterworlds where large landmasses could be a rarity or relatively smaller in size.

At the same time, these pessimistic speculations are offset by some assumptions the scientists have made in their study. For example, they assume that the waterworld doesn’t have tectonic activity. Such activity on Earth involves the jigsaw of landmasses grinding against each other, sometimes one subducting below another to push some minerals down while volcanoes elsewhere spew out others – in all making for a giant geological cycle that ensures the substances needed to sustain life are constantly replenished. If a waterworld were to have tectonic activity, it would also influence the carbon-silicate cycle and keep a runaway greenhouse effect from happening.

On Earth, the warming of the oceans presents a big problem to climatologists – partly because its mechanisms and consequences are not fully understood – and more so to marine creatures. And as the oceans dissolve more anthropogenic carbon dioxide, they also become more acidic. Yet these effects are relatively small (ignoring the presence of life for a moment) compared to those on waterworlds – which comprise no above-sea-level landmasses and virtually endless seas 100 km deep.


The Wire
August 23, 2015

The devil in Obama’s new emissions target for the US lies in base year details

President Barack Obama announced a new climate change target for the United States’ electricity generation sector on August 3. Hailed by many as ambitious, the plan dictates that power plants in the world’s largest economy and second largest polluter reduce their emissions to 32% below their 2005 levels by 2030. The US had already committed to a 25% reduction from 2005 levels by 2025 in a deal with China made in October 2014. The new commitment now requires more than a doubling of the pace of emission cuts.

Such declarations allow economies like the US to assume leadership of the international climate negotiations at a time when declarations from other countries have been tardy. At the same time, the US has also given itself some leeway despite the symbolic headstart, the advantages of which lie in the historic details. Consider this chart, compiled by the World Resources Institute:

[Chart: US and BRICS methane emissions, 1990-2011. Source: World Resources Institute]

Among the five greenhouse gases that countries prioritise cutting down on, two contain carbon: methane and carbon dioxide. The chart above shows the methane emissions of the USA and the five BRICS nations from 1990 to 2011. The US’s methane emissions were at their minimum in 2005 – so any future commitment on methane emissions premised on 2005 levels is likely to represent a daunting challenge. The dip happened because at that time the US was maximising its extraction and processing of shale gas, activities that emit little methane. But on the question of carbon dioxide emissions, consider the chart below:

[Chart: US and BRICS carbon dioxide emissions, 1990-2011. Source: World Resources Institute]

The US’s carbon dioxide emissions peaked in 2005, at 5,828.63 million metric tons – 18.64% higher than its 1990 emissions, 1990 being the year the Kyoto Protocol uses as its base. This convenient choice of base year thus grants the US considerable leeway. The absence of any rules on what can or can’t constitute a base year is leveraged by many countries. In Europe, for example, the base year is 1990 because that’s when emissions peaked, after which industrial activity steadily declined and renewable energy options were increasingly adopted.

However, the absence of options in choosing a base year – as under the Kyoto Protocol – is problematic for developing nations. Their domestic demand for energy translates to increasing emissions, so the choice of 1990 as a base year restricts such economies to feeble, economically infeasible increments in energy production. This is one reason why India and China feature among the non-ratifiers of the protocol’s second phase, being opposed to agreements that legally bind them to their targets.

As it happens, the US is also a non-ratifier for the same reason. Based on the WRI data, its rate of emission cuts between 2005 and 2011 was 70.8 million metric tons of CO2 per year. Assuming a simplistic linear rate of cuts until 2030, its power plants will have to knock off 83.53 million metric tons of CO2 emissions per year starting 2015 to meet its target of emitting 32% less carbon dioxide than it did in 2005. But had it gone with 1990 as the base year, it would’ve had to knock off 122.46 million metric tons of CO2 year on year – an averaged annual leeway of 46.6%.
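These figures can be checked with some quick arithmetic. In the sketch below, the 2011 level of roughly 5,300 million metric tons and the 16-year linear path of cuts (2015 through 2030) are my assumptions, back-calculated from the figures above:

```python
# Reproducing the base-year arithmetic above. Assumptions: 2011 US CO2
# emissions of roughly 5,300 million metric tons (from the WRI data) and
# a linear path of cuts over the 16 years 2015-2030 inclusive.
E_2005 = 5828.63               # US CO2 emissions in 2005, million metric tons
E_1990 = E_2005 / 1.1864       # 2005 was 18.64% above the 1990 level
E_2011 = 5300.0                # approximate 2011 level (assumed)
YEARS = 16                     # 2015 through 2030

# 32% below each candidate base year
target_2005_base = 0.68 * E_2005
target_1990_base = 0.68 * E_1990

cut_2005 = (E_2011 - target_2005_base) / YEARS   # ~83.5 Mt of CO2 per year
cut_1990 = (E_2011 - target_1990_base) / YEARS   # ~122.5 Mt of CO2 per year
leeway = cut_1990 / cut_2005 - 1                 # ~0.466, i.e. 46.6%
```

The leeway falls out of the ratio of the two required annual cuts: against a 1990 base, the US would have to cut roughly 46.6% more carbon dioxide every year than it does against the 2005 base it chose.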

The Wire
August 5, 2015