Orbital ATK and engine-maker GenCorp disagree over cause of Antares crash

With significant financial losses at stake, conflicting reports of what caused the October 28, 2014, Antares rocket crash have emerged – one from the rocket’s manufacturer, Orbital ATK, and the other from the maker of its engines, Aerojet Rocketdyne.

Reuters reports,

Ronald Grabe, Orbital’s executive vice president and president of its flight systems group, told the annual Space Symposium conference that an investigation led by his company had concluded the explosion was caused by excessive wear in the bearings of the GenCorp engine.

In reply, GenCorp – which owns Aerojet – spokesperson Glenn Mahone says,

GenCorp’s investigation had also identified excessive wear of the bearings as the direct cause of the explosion that destroyed the rocket, but further research revealed that the bearings likely wore out due to “foreign object debris” in the engine.

According to Mahone, GenCorp’s investigation will be completed in three weeks but “the bulk of the work had been done”. The engine in question is actually a pair of AJ26-62 first-stage liquid-fuel engines – and it’s not yet clear if both engines were found to be damaged. Their design is derived from the Soviet-era NK-33 engines, last used in the 1970s.

The AJ26-62 engine. Credit: Aerojet Rocketdyne

The kerosene they use can be seen burning in a distinctly visible first explosion, which occurred moments after Antares lifted off from the launchpad in October last year. When the rocket fell back down 15 seconds later, a second explosion incinerated over 2,200 kg of cargo Orbital ATK was ferrying to the ISS under a service contract with NASA.

Since it wasn’t a NASA mission per se, the space agency is not leading the investigation, only conducting one of its own. With a disagreement erupting between Orbital ATK and GenCorp, NASA could be the arbiter. However, also according to Reuters, it has no plans to make its report public.

Read: Orbital, GenCorp spar over cause of October rocket crash, Reuters

Crowdsourcing earthquakes is not a big deal – keeping it reliable is

"Symbols show the few regions of the world where public citizens and organizations currently receive earthquake warnings and the types of data used to generate those warnings (7). Background color is peak ground acceleration with 10% probability of exceedance in 50 years from the Global Seismic Hazard Assessment Program." DOI: 10.1126/sciadv.1500036
“Symbols show the few regions of the world where public citizens and organizations currently receive earthquake warnings and the types of data used to generate those warnings (7). Background color is peak ground acceleration with 10% probability of exceedance in 50 years from the Global Seismic Hazard Assessment Program.” DOI: 10.1126/sciadv.1500036

This map makes a strong case for a decentralized earthquake-warning system. The dark and light blue rings show where early earthquake warnings are available, while the reddish and yellow patches mark areas prone to earthquakes. There’s a readily visible disparity, which a team of scientists from the University of California, Berkeley, leverages to outline how early earthquake warnings can be crowdsourced. In a paper published in Science Advances on April 10, the team proposes using the accelerometers in our smartphones to log and transmit tiny movements in the ground beneath us to a server that analyzes them for signs of a quake and returns the results – crowdsourced information put back to work for the crowd that gathered it.
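The client side of such a scheme is simple enough to sketch. Below is a minimal illustration in Python – not the Berkeley team’s actual software – that samples an accelerometer, flags sudden shaking with the standard short-term/long-term average (STA/LTA) trigger from seismology, and reports candidate events to a central server. The sensor function, the trigger threshold and the server URL are all hypothetical stand-ins.

```python
import json
import time
import urllib.request
from collections import deque

SERVER_URL = "https://example.org/quake-reports"  # hypothetical endpoint

def read_accelerometer():
    """Stand-in for a platform sensor API; returns |acceleration| in m/s^2."""
    return 9.81  # a real client would return live readings here

sta = deque(maxlen=10)   # short-term average window (~1 s at 10 Hz)
lta = deque(maxlen=300)  # long-term average window (~30 s at 10 Hz)

while True:
    a = abs(read_accelerometer() - 9.81)  # deviation from gravity
    sta.append(a)
    lta.append(a)
    if len(lta) == lta.maxlen:
        ratio = (sum(sta) / len(sta)) / max(sum(lta) / len(lta), 1e-6)
        if ratio > 3.0:  # shaking well above the recent background
            payload = json.dumps({"t": time.time(), "ratio": ratio}).encode()
            req = urllib.request.Request(
                SERVER_URL, data=payload,
                headers={"Content-Type": "application/json"})
            try:
                urllib.request.urlopen(req, timeout=2)
            except OSError:
                pass  # a real client would queue the report and retry
    time.sleep(0.1)  # ~10 Hz sampling
```

The server’s job – aggregating many such noisy, single-phone triggers into one reliable detection – is where the filtering techniques discussed below come in.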

This idea isn’t entirely new. In 2013, two seismologists from the Istituto Nazionale di Geofisica e Vulcanologia in Italy used cheap MEMS (micro-electromechanical systems) accelerometers and determined that they’re good for anticipating quakes rated higher than five on the Richter scale, if located close to the epicenter. Otherwise, the accelerometers weren’t reliable when logging seismic signals that weren’t sharp or unique enough – as is the case with weaker earthquakes or the strong ground motion associated with moving faults – because the instruments produced enough noise to drown out their own readings.

In fact, this issue might’ve been evident in 2010 itself, when a team out of Stanford University proposed using “all the computers” on the Internet to “catch” quakes. To be part of this so-called Quake Catcher Network (QCN), users had to install a piece of QCN software, along with a ‘low-maintenance’ motion sensor, on their desktops or laptops, giving the machines the same capabilities as a smartphone-borne accelerometer but with greater sensitivity. The software logs motion data from mild tremors as well as strong ground motion and relays it over the web in near-real-time. The QCN has been live for over a year now, although most of its users are situated in Europe and North America.

Perhaps the earliest instance of crowdsourcing in the Age of the Smartphone was on Twitter. In 2008, a 7.9-magnitude earthquake in China killed over 10,000 people in a rain-hit region of the country. CNN wrote, “Rainy weather and poor logistics thwarted efforts by relief troops who walked for hours over rock, debris and mud on Tuesday in hopes of reaching the worst-hit area”. Twitter, however, was swarming with updates from the region, often revealing gaps in the global media’s coverage of the disaster. The Online Journalism Blog summed it up:

Robert Scoble was following proceedings on his much-followed Twitter, and feeding back information from his followers, including, for instance (after he tweeted the fact that Tweetscan was struggling) that people were saying Summize was the best tool to use.

If you followed the conversation through Scoble using Quotably, you could then find Gregg Scott, who in turn was talking to RedChina, Karoli, mmsullivan, and inwalkedbud who was in Chengdu, China (also there was Casperodj and Lyrrael).

If you wanted to check out inwalkedbud you could do so using Tweetstats and find he has been twittering since December. Sadly the Internet Archive doesn’t bring any results, though.

The mainstream media had differing reports: RTE (Ireland) said “No major damage after China earthquake” – but UK’s Sky News reported four children killed and over 100 injured; Chinaview (China) said no buildings had collapsed – but an Australian newspaper said they had.

Filtering the noise

In all these cases – the Italian MEMS experiment, the QCN desktop/laptop-based tracker and the updates on Twitter – the problem has not been leveraging the crowd effectively. In 2015, we’re already there. The real problem has been reliability. Quakes stronger than five on the Richter scale signal danger everywhere, and there are enough smartphone-bearing users around the world to be on the alert for them. But weaker quakes are bad news particularly in developing economies, where bad infrastructure and overcrowding are often to blame for collapsing buildings that claim hundreds of lives.

Let’s take another look at the disparity map:

"Symbols show the few regions of the world where public citizens and organizations currently receive earthquake warnings and the types of data used to generate those warnings (7). Background color is peak ground acceleration with 10% probability of exceedance in 50 years from the Global Seismic Hazard Assessment Program." DOI: 10.1126/sciadv.1500036
“Symbols show the few regions of the world where public citizens and organizations currently receive earthquake warnings and the types of data used to generate those warnings (7). Background color is peak ground acceleration with 10% probability of exceedance in 50 years from the Global Seismic Hazard Assessment Program.” DOI: 10.1126/sciadv.1500036

The redder belts are more prevalent in South America, Central and East Asia, and in a patch running from Central Europe to the Middle East. In these regions, weaker quakes go undetected unless centralized detection agencies are watching, keeping hundreds of millions of people under threat. So the real achievement, when scientists confidently crowdsource early earthquake warnings, is the use of specialized filtering techniques and algorithms to increase the sensitivity of smartphones to subtle movements in the ground – and with it the reliability of their measurements. This is where concepts like phase smoothing, Kalman filters and GNSS receivers thrumming in a smartphone’s chassis spell the difference between news and help.

Tech 1, Coarseness 0.

These are only some of the techniques in use – and ones whose use the Berkeley group considers particularly significant in its early-warning system’s design. Phase smoothing is a technique in which errors in the data exchanged between smartphones and satellites – such as measurement noise or reflections off metallic objects in the transmission’s path – are mitigated by keeping track of the rate of change of the distance between the phone and the satellite. A Kalman filter is an algorithm that specializes in picking a pattern out of a chaos of signals and using that pattern to fish for more signals like it, steadily filtering out the noise. Together, they help scientists adjust for drift – the error that accumulates in a sensor’s readings and makes it report more movement than the earthquake actually produced.
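To make the Kalman filter’s role concrete, here is a toy one-dimensional version in Python – a generic constant-velocity filter with made-up noise values, not the filter from the paper – pulling a step-like coseismic displacement out of noisy position readings:

```python
import numpy as np

def kalman_1d(measurements, dt=0.1, meas_var=0.25, accel_var=0.05):
    """Track [position, velocity] from noisy position measurements."""
    x = np.zeros(2)                     # state estimate
    P = np.eye(2)                       # state covariance
    F = np.array([[1, dt], [0, 1]])     # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])          # we observe position only
    Q = accel_var * np.array([[dt**4 / 4, dt**3 / 2],
                              [dt**3 / 2, dt**2]])  # process noise
    R = np.array([[meas_var]])          # measurement noise
    estimates = []
    for z in measurements:
        x = F @ x                       # predict the state forward
        P = F @ P @ F.T + Q
        y = z - H @ x                   # innovation: data vs. prediction
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)  # gain: how much to trust the data
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)

# Synthetic test: a 0.3 m coseismic step buried in heavy sensor noise.
t = np.arange(0, 30, 0.1)
true_disp = 0.3 * (t > 10)                      # ground jumps at t = 10 s
noisy = true_disp + np.random.normal(0, 0.5, t.size)
smooth = kalman_1d(noisy)
print(f"raw RMS error:      {np.sqrt(np.mean((noisy - true_disp)**2)):.3f} m")
print(f"filtered RMS error: {np.sqrt(np.mean((smooth - true_disp)**2)):.3f} m")
```

The gain K is the crux: at each step it decides how much to trust the new measurement against the model’s prediction, which is exactly the pattern-versus-noise trade-off described above.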

Finally, the scientists further refine the data by comparing it to legacy GNSS (Global Navigation Satellite System) data, the most accurate but also the most costly way to anticipate and track earthquakes. In their Science Advances paper, the Berkeley group writes that the data obtained through thousands of smartphones “can be substantially improved by using differential corrections via satellite-based augmentation systems, tracking the more precise GNSS carrier phase and using it to filter the [crowdsourced] data (“phase smoothing”), or by combination with independent INS data in a Kalman filter.”

A warning system all India’s own

But the best part: “Today’s smartphones have some or all of these capabilities”, negating the otherwise typical coarseness and unreliability associated with crowdsourced data. Here’s more evidence of this:

(B) Drift of position obtained from various devices (GNSS, double-integrated accelerometers, and Kalman filtering thereof) compared to observed earthquake displacements. DOI: 10.1126/sciadv.1500036

Chart (B), the one of interest to us, shows the amount of drift in data acquired by various methods over time. The black lines show the observed displacements due to earthquakes of different magnitudes. So a colored line represents reliable data as long as it stays below the corresponding black line. For example, the red line for “C/A code + p-s + SBAS” shows a largely reliable reading of an M6 earthquake until about 50 seconds, after which it starts to drift. Similarly, most colored lines stay below the black lines for M8-9 earthquakes, so all those methods can reliably track the stronger earthquakes. The line describing the Berkeley group’s proposal is the red one – the crowdsourced line.
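The drift in the accelerometer-based curves is easy to reproduce: position comes from integrating acceleration twice, so even a tiny constant sensor bias grows quadratically with time. A rough sketch, with an assumed bias of 0.01 m/s² (an illustrative figure, not one from the paper):

```python
import numpy as np

dt = 0.01                                   # 100 Hz sampling
t = np.arange(0, 120, dt)                   # two minutes of data
bias = 0.01                                 # m/s^2, assumed constant bias
noise = np.random.normal(0, 0.05, t.size)   # m/s^2, assumed sensor noise
accel = bias + noise                        # the ground is actually still

velocity = np.cumsum(accel) * dt            # first integration
position = np.cumsum(velocity) * dt         # second integration

for mark in (10, 50, 120):
    i = int(mark / dt) - 1
    print(f"apparent displacement after {mark:>3d} s: {position[i]:7.2f} m")
# The bias alone contributes 0.5 * bias * t^2 – about 72 m after 120 s,
# dwarfing the sub-metre displacements of even a large earthquake.
```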

The ideal thing would be to develop more sophisticated filtering mechanisms that bring the red line closer to the blue GNSS line at the bottom, which of course exhibits zero drift. Fortunately, self-reliance on this front might soon be possible in the Indian subcontinent. Since 2013, the Indian Space Research Organisation has launched four of the seven satellites planned for its Indian Regional Navigation Satellite System (IRNSS), which could augment regional efforts to crowdsource earthquake warnings. The autonomous system is expected to go live in 2016.

All goes well on LHC 2.0’s first day back in action

It finally happened! The particle-smasher known as the Large Hadron Collider is back online after more than two years, during which its various components were upgraded to make it even meaner. A team of scientists and engineers gathered at the collider’s control room at CERN over the weekend – giving up Easter celebrations at home – to revive the giant machine so it could resume feeding its four detectors with high-energy collisions of protons.

Before the particles enter the LHC itself, they are pre-accelerated to 450 GeV by the Super Proton Synchrotron. At 11.53 am (CET), the first beam of pre-accelerated protons was injected into the LHC at Point 2 (see image), starting a clockwise journey. By 11.59 am, it’d been reported crossing Point 3, and at 12.01 pm, it was past Point 5. The anxiety in the control room was palpable when an update was posted in the live-blog: “The LHC operators watching the screen now in anticipation for Beam 1 through sector 5-6”.

Beam 1 going from Point 2 to Point 3 during the second run of the Large Hadron Collider’s first day in action. Credit: CERN

Finally, at 12.12 pm, the beam crossed Point 6. By 12.27 pm, it had gone full circle around the LHC’s particle pipeline, signalling that the pathways were defect-free and ready for use. Along the way, as the beam snaked through each detector without glitches, some protons were smashed into static targets, producing so-called ‘splashes’ of particles, and groups of scientists erupted in cheers.

Both Rolf-Dieter Heuer, the CERN Director-General, and Frédérick Bordry, Director for Accelerators and Technology, were present in the control room. Earlier in the day, Heuer had announced that another beam of protons – going anti-clockwise – had passed through the LHC pipe without any problems, the first sign that all was well with the machine. In fact, CERN’s scientists were originally supposed to have run these beam-checks a week earlier, but an electrical glitch spotted at the last minute thwarted them.

In its new avatar, the LHC sports almost double the energy it ran at before it shut down for upgrades in early 2013, as well as more sensitive collision detectors and fresh safety systems. For the details of the upgrades, read this. For an ‘abridged’ version of the upgrades, together with the new physics experiments the new LHC will focus on, read this. Finally, here’s to another great year for high-energy physics!


The Planetary Society says humans orbiting Mars is important before they land on it


On Thursday night (IST), the Planetary Society announced the results of a workshop it hosted earlier this week to re-engage with the future of human spaceflight. The advocacy group concluded that humans orbiting Mars was a crucial step before humans could land on Mars.

The workshop, called “Humans Orbiting Mars”, was held with officials from the aerospace industry, the scientific community and NASA in attendance. They addressed the question of whether human spaceflight to Mars by 2033 was feasible if NASA’s budget increased by only 2-3% a year between now and then (to keep up with inflation), and assuming the agency’s contribution to the International Space Station would end by 2024. The answer was ‘yes’, conditional on the orbit-first, land-next strategy.

Some results from the workshop were made public by Scott Hubbard, former director of the NASA Ames Research Center, and John Logsdon, founder of the Space Policy Institute at George Washington University, in a presser. The Society’s president, the popular science communicator Bill Nye, also presented some tidbits, but none of the three was forthcoming about the precise details of the Society’s strategy.

Hubbard said that having humans orbit Mars first before landing was important to break “this very challenging effort into smaller, more executable pieces”, differentiating it from some private sector approaches to the red planet that Logsdon said “exist but don’t seem credible”. They admitted they were conscious of the strategy’s parallels to the Apollo 8 mission, which invigorated public interest in space exploration by carrying the first humans into an orbit around the moon in 1968 and giving humanity its first view of Earthrise.

A notional timeline for the 2033 mission was also presented, with crewed test-flights in cislunar orbits planned for 2025 and 2027. Mars missions are fixed to launch windows that recur every 26 months, timed around opposition, when the planet comes closest to Earth. However, the launch window in 2033 provides a suitable focus year also because NASA hopes to have tested the necessary spaceflight technologies, and gathered the necessary experience, through its Asteroid Redirect Mission (ARM) in the 2020s.
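The 26-month cadence is just the Earth-Mars synodic period, which can be checked in a couple of lines using textbook values for the two orbital periods:

```python
# Back-of-the-envelope check of the launch-window cadence.
T_earth = 365.25  # days per Earth orbit
T_mars = 686.98   # days per Mars orbit

synodic = 1 / (1 / T_earth - 1 / T_mars)
print(f"synodic period: {synodic:.0f} days = {synodic / 30.44:.1f} months")
# -> ~780 days, i.e. roughly 26 months between favorable windows
```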

The Society’s space-policy writer Casey Dreier concluded on his blog:

Over the next few months, we will work to publish as much of the content presented at the workshop as we can. And later this year, we will release a report based on the discussions and feedback from this meeting formalizing our thoughts and ideas on this path forward.

However, the presser left reporters with more questions than answers. It may have been wiser to announce all the results of the workshop alongside the report instead of releasing vague details now, even if the Planetary Society does appear to have a detailed architecture of the concept in place.

And – as if to have the Society reconsider its barb about infeasible private missions to Mars – the report’s release later this year could coincide with SpaceX’s much-awaited announcement of the details of its Mars Colonial Transporter, a transport vehicle that CEO-CTO Elon Musk has promised will be very different from the Dragon capsule and Falcon 9 rocket the company currently operates. Musk is also expected to announce new spacesuit designs meeting utility requirements by the end of 2015.

Featured image credit: NASA

The Mediterranean Sea is just as polluted as some parts of the open ocean


When the tsunami struck Japan’s eastern coast at Fukushima in 2011, the heaving waters swept many tons of trash into the Pacific Ocean, and ocean currents set it afloat toward North America. Eventually, the Japanese trash joined up with another prominent patch of garbage floating off the western coast of North America, held in a region spanning some 5,000 sq. km by a gentle but unceasing circle of winds called the North Pacific Gyre.

There are four other such gyres around the world, all located in the subtropics, and they have become lightning rods for waste – especially plastic waste – thrown into the oceans from coastal cities, industries and ships. This week, a survey by a group of Spanish researchers adds another patch of marine trash to this list, one with a dubious distinction.

It’s located in the Mediterranean Sea, and the distinction is this: if not for the Strait of Gibraltar, the sea would be entirely landlocked. As a result, the amount of plastic in it has accumulated faster than it has dissipated, and today the sea boasts an abundance that could give its oceanic peers a run for their refuse.

In fact, the survey’s authors speculate that instead of draining the sea’s waters out, the Strait of Gibraltar could be a gateway through which the Atlantic Ocean’s plastic trash drains into the sea. Overall, the average concentration of plastic trash in the Mediterranean Sea – 423 grams per sq. km – appears comparable to that in the Indian, North Pacific and South Pacific gyres, and lower only than in the twin Atlantic gyres.

Ranges of surface plastic concentrations measured in the Mediterranean Sea, and reported for the open ocean. doi:10.1371/journal.pone.0121762.g003

Between the ocean and a dirty place

Over 100 million people live along the Mediterranean Sea’s coastline, its waters carry one of the busiest shipping routes on the planet, and it also receives the discharge of the densely populated catchments of the rivers Nile, Ebro and Po. Consequently, the total amount of plastic on the sea’s surface is thought to be between 800 tons and a prodigious 3,000 tons.

Almost 83% of this is in the form of microplastics: pieces of plastic smaller than 5 mm. They are especially dangerous for the ecosystem because they can be swallowed by fish and other marine creatures. Similar concentrations of microplastics abound in the oceanic gyres as well.

Size distribution of the floating plastic debris collected in the Mediterranean Sea. doi:10.1371/journal.pone.0121762.g001

However, the Mediterranean Sea has two unique features: it has more objects bigger than 20 mm, and fewer objects smaller than 2 mm, than the oceanic gyres do.


Just last year, a widely reported study found that, contrary to popular belief, almost 70% of the trash in the oceans doesn’t stay on the surface but sinks to the seafloor (sometimes going as far down as 15,000 feet). This is true of the Mediterranean Sea as well. As the Spanish team suggests, fragments of plastic bottles and bags could have sunk and been colonized by organisms living on the seafloor.

This would explain the relative abundance of objects around 5 mm on the surface – but not where the objects smaller than 2 mm are disappearing to. As the authors write in their paper in the journal PLOS ONE,

Removal mechanisms of microplastics include ingestion by planktivorous animals and ballasting by biofouling, and these could be greater in the Mediterranean, where ecosystem production is higher than in the subtropical gyres. … However, estimates of ingestion rates of microplastic by marine life or microplastic abundance on the seafloor are still needed to test this hypothesis.

This movement of plastics within the sea itself suggests perhaps the most frightening prospect emerging from this study. If the surface alone hosts 800-3,000 tons of plastic objects – and that’s just what’s left floating after 70% has sunk to the bottom and the smallest fragments have been eaten by marine animals – the Mediterranean Sea as a whole could be one of the most polluted water bodies on Earth.