Another exit from MIT Media Lab

J. Nathan Matias, a newly minted faculty member at Cornell University and a visiting scholar at the MIT Media Lab, has announced that he will cut all ties with the latter at the end of the academic year over lab director Joi Ito’s association with Jeffrey Epstein. His announcement comes on the heels of one by Ethan Zuckerman, a scholar and director of the lab’s Center for Civic Media, who also said he’d leave at the end of the academic year despite not having another job lined up. Matias wrote on Medium on August 21:

During my last two years as a visiting scholar, the Media Lab has continued to provide desk space, organizational support, and technical infrastructure to CivilServant, a project I founded to advance a safer, fairer, more understanding internet. As part of our work, CivilServant does research on protecting women and other vulnerable people online from abuse and harassment. I cannot with integrity do that from a place with the kind of relationship that the Media Lab has had with Epstein. It’s that simple.

Zuckerman had alluded to a similar problem with a different group of people:

I also wrote notes of apology to the recipients of the Media Lab Disobedience Prize, three women who were recognized for their work on the #MeToo in STEM movement. It struck me as a terrible irony that their work on combatting sexual harassment and assault in science and tech might be damaged by their association with the Media Lab.

On the other hand, Ito’s note of apology on August 15, which precipitated these high-profile exits and put the future of the lab in jeopardy, made no mention of any regret over what his association with Epstein could mean for the lab’s employees, many of whom work on sensitive projects. Instead, Ito has only said that he would return the money Epstein donated to the lab, a sum of $200,000 (about Rs 1.43 crore) according to the Boston Globe, while pleading ignorance of Epstein’s crimes.

Remembering John Nash, mathematician who unlocked game theory for economics

The Wire
May 25, 2015

The economist and Nobel Laureate Robert Solow once said, “It wasn’t until Nash that game theory came alive for economists.” He was speaking of the work of John Forbes Nash, Jr., a mathematician whose 27-page PhD thesis from 1949 transformed a chapter in mathematics from a novel idea to a powerful tool in economics, business and political science.
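
For readers who want the idea in one line: the central concept of that thesis, now called a Nash equilibrium, is a choice of strategy for every player such that no player can do better by deviating alone. In symbols (a standard textbook statement, not a quotation from the thesis):

\[
u_i(s_i^*, s_{-i}^*) \;\ge\; u_i(s_i, s_{-i}^*) \quad \text{for every player } i \text{ and every alternative strategy } s_i,
\]

where \(u_i\) is player \(i\)'s payoff, \(s_i^*\) is that player's equilibrium strategy and \(s_{-i}^*\) denotes the equilibrium strategies of everyone else.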

At the time, Nash was only 21, his age a telltale mark of the genius that had accompanied him until then and would for the rest of his life.

That life was brought to a tragic close on May 23, when he and his wife Alicia Nash were killed in a car accident on the New Jersey Turnpike. He was 86 and she was 82; they are survived by two children.

Alicia (née Larde) met Nash when she took an advanced calculus class from him at the Massachusetts Institute of Technology in the mid-1950s. He had received his PhD in 1950 from Princeton University, spent some time as an instructor there and as a consultant at the Rand Corporation, and had moved to MIT in 1951 determined to take on the biggest problems in mathematics.

Between then and 1959, Nash made a name for himself as possibly one of the greatest mathematicians since Carl Friedrich Gauss. He solved what was until then believed to be an unsolvable problem in geometry dating from the 19th century. He worked on a cryptography machine he’d invented while at Rand and tried to get the NSA to use it. He worked with the Canadian-American mathematician Louis Nirenberg to develop the theory of non-linear partial differential equations (in recognition, the duo was awarded the coveted Abel Prize in 2015).

He made significant advances in the field of number theory and analysis that – in the eyes of other mathematicians – easily overshadowed his work from the previous decade. After Nash was awarded the Nobel Prize for economics in 1994 for transforming the field of game theory, the joke was that he’d won the prize for his most trivial work.

In 1957, Nash took a break from MIT to spend time at the Institute for Advanced Study in Princeton, during which he married Alicia. In 1958, she became pregnant with John Charles Martin Nash. Then, in 1959, misfortune struck when Nash was diagnosed with paranoid schizophrenia. Over the next 20 years, the illness would transform him, his work and the community of his peers far beyond merely denting his professional career – even as it exposed the superhuman commitments of those who stood by him.

This group included his family, his friends at Princeton and MIT, and the Princeton community at large, even as Nash was as good as dead for the world outside.

His colleagues were no longer able to understand his work. He stopped publishing papers after 1958. He was committed to psychiatric hospitals many times but treatment didn’t help. Psychoanalysis was still in vogue in the 1950s and 1960s – it has since been discredited – and its unsurprising inability to get through to Nash wore down people’s hopes. In these trying times, Alicia Nash became a great source of support.

Although the couple had divorced in 1963, he continued to write her strange letters – while roaming around Europe, while absconding from Princeton to Roanoke, Virginia, while convinced that the American government was spying on him.

She later let him live in her house along with their son, paying the bills by working as a computer programmer. Many believe that his eventual remission – in the 1980s – had been the work of Alicia. She had firmly believed that he would feel better if he could live in a quiet, friendly environment, occasionally bumping into old friends, walking familiar walkways in peace. Princeton afforded him just these things.

The remission was considered miraculous because it was wholly unexpected. The sense of Nash’s affliction was sharpened by the genius tag – by how much of his brilliance the world was being deprived of – and that deprivation in turn deepened the sense of loss, drawing out each day he was unable to make sense when he spoke or when he worked. John Moore, a mathematician and friend of the Nashes, thought those could have been his most productive years.

After journalist Sylvia Nasar’s book A Beautiful Mind, and then an Academy Award-winning movie based on it, his story became a part of popular culture – but the man himself withdrew from society. Ron Howard, who directed the movie, mentioned in a 2002 interview that Nash couldn’t remember large chunks of his life from the 1970s.

While mood disorders like depression strike far more people – and are these days almost commonplace – schizophrenia is more ruthless and debilitating. Even though scientists think it has a firm neurological basis, no cure has been found, because schizophrenia damages a person’s mind as much as their ability to process social stimuli.

In Nash’s case, his family and his friends among the professors of Princeton and MIT protected him from succumbing to his own demons – the voices in his head, the ebb of reason, the tendency to isolate himself – which together are often the first step toward suicide in people less cared for. Moreover, Nash’s own work played a role in his illness. He was convinced for a time that a new global government was on the horizon – a probable outcome in the game theory his work had made possible – and tried to give up his American citizenship. As a result, his re-emergence from two decades of mental torture was as much about escaping the vile grip of irrationality and paranoia as about regaining a sense of certainty in the face of his mathematics’ enchanting possibilities.

A Beautiful Mind closes with Nash’s peers at Princeton learning of his being awarded the Nobel in 1994, and walking up to his table to congratulate him. On screen, Russell Crowe smiles the smile of a simple man, a certain man, revealing nothing of the once-brazen virtuosity that had him dashing into classrooms at Princeton just to scribble equations on the boards, dismissing his colleagues’ work, raring to have a go at the next big thing in science. By then, that brilliance lay firmly trapped within John Nash’s beautiful but unsettled mind. With his death, and that of Alicia, that mind will now always be known and remembered by the brilliant body of work it produced.

A leap forward in ‘flow’ batteries

Newly constructed windmills D4 (nearest) to D1 on the Thornton Bank, 28 km off shore, on the Belgian part of the North Sea. The windmills are 157 m (+TAW) high, 184 m above the sea bottom.

Polymer-based separators in conventional batteries bring their share of structural and operational defects to the table, and reduce the efficiency and lifetime of the battery. To circumvent this issue, researchers at the Massachusetts Institute of Technology (MIT) have developed a membrane-less ‘flow’ battery. It stores and releases energy using electrochemical reactions between hydrogen and bromine. Within the battery, bromine and hydrogen bromide are pumped through a channel between the electrodes. The flow rate is kept very low, so the fluids achieve laminar flow: in this state, they flow in parallel layers instead of mixing with each other, creating a ‘natural’ membrane that still keeps the ion-transfer channel open. The researchers, led by doctoral student William Braff, estimate that the battery, if scaled up to megawatts, could incur a one-time cost of as little as $100/kWh – a value that’s quite attractive to the emerging renewable energy economy. From a purely research perspective, this H-Br variant is also significant for being the first rechargeable membrane-less ‘flow’ battery. I covered this development for The Hindu.
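
To get a feel for why a very low flow rate keeps the two streams from mixing, here is a rough back-of-the-envelope sketch in Python. It computes the Reynolds number for a thin rectangular channel; values far below roughly 2,000 indicate laminar flow. All channel dimensions and fluid properties below are assumptions chosen for illustration, not figures from the MIT study.

```python
# Rough, illustrative check of whether a given flow rate keeps the electrolyte
# laminar in a thin rectangular channel. The numbers are assumptions for
# illustration only -- they are NOT taken from the MIT paper.

def reynolds_number(flow_rate_m3s, width_m, height_m, density_kgm3, viscosity_pas):
    """Reynolds number for flow through a rectangular channel."""
    area = width_m * height_m                             # cross-sectional area
    velocity = flow_rate_m3s / area                       # mean flow velocity
    d_h = 2 * width_m * height_m / (width_m + height_m)   # hydraulic diameter
    return density_kgm3 * velocity * d_h / viscosity_pas

# Hypothetical values: a 1 cm x 1 mm channel, a water-like electrolyte,
# and a very low flow rate of 1 mL/min.
re = reynolds_number(
    flow_rate_m3s=1e-6 / 60,   # 1 mL/min
    width_m=0.01,
    height_m=0.001,
    density_kgm3=1000.0,
    viscosity_pas=1e-3,
)

print(f"Re = {re:.1f}")  # well below ~2000, so the flow stays laminar
```

With these assumed values the Reynolds number comes out around 3, orders of magnitude below the laminar-to-turbulent threshold, which is why the two reactant streams can glide past each other without a physical membrane.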



Most of the principles of the MIT Media Lab, I think, can be adopted by young professionals looking to make it big. It’s not a safe path, nor a sure one, but it definitely re-establishes the connection with intuitive thought (“compasses”) instead of the process-entombed kind (“maps”) that’s driving many good ideas and initiatives – like the newspaper – into the ground.

Aaron Swartz is dead.

This article, as written by me and a friend, appeared in The Hindu on January 16, 2013.

In July 2011, Aaron Swartz was indicted by the District of Massachusetts for allegedly stealing more than 4.8 million articles from the online academic literature repository JSTOR via the computer network at the Massachusetts Institute of Technology. He was charged with, among other offences, wire fraud, computer fraud, obtaining information from a protected computer, and criminal forfeiture.

After being released on a $100,000 bond, he was expected to stand trial in early 2013 to face the charges and, if found guilty, up to 35 years in prison and $1 million in fines. More than the likelihood of the sentence, however, what rankled him was that he had been labelled a “felon” by his government.

On Friday, January 11, Swartz’s fight – against information localisation as well as the label given to him – ended when he hanged himself in his New York apartment. He was only 26. At the time of his death, JSTOR did not intend to press charges and had decided to release 4.5 million of its articles into the public domain. It seems as though this crime had no victims.

But he was so much more than an alleged thief of intellectual property. His life was a perfect snapshot of the American Dream. But the nature of his demise shows that dreams are not always what they seem.

At the age of 14, Swartz became a co-author of the RSS (RDF Site Summary) 1.0 specification, now a widely used method for subscribing to web content. He went on to attend Stanford University, dropped out, co-founded a popular social news website and then sold it – leaving him a near millionaire a few days short of his 20th birthday.

A recurring theme in his life and work, however, was internet freedom and public access to information, which led him to political activism. An activist organisation he founded campaigned heavily against the Stop Online Piracy Act (SOPA) bill and eventually helped kill it. If passed, SOPA would have affected much of the world’s browsing.

At a time that is rife with talk of American decline, Swartz’s life reminds us that for now, the United States still remains the most innovative society on Earth, while his death tells us that it is also a place where envelope pushers discover, sometimes too late, that the line between what is acceptable and what is not is very thin.

The charges that he faced, in the last two years before his death, highlight the misunderstood nature of digital activism — an issue that has lessons for India. For instance, with Section 66A of the Indian IT Act in place, there is little chance of organising an online protest and blackout on par with the one that took place over the SOPA bill.

While civil disobedience and street protests usually carry light penalties, why should Swartz have faced long-term incarceration just because he used a computer instead? In an age of Twitter protests and online blackouts, his death sheds light on the disparities that digital activism is subjected to.

His act of trying to liberate millions of scholarly articles was undoubtedly political activism. But had he undertaken such an act in the physical world, he would have faced only light penalties for trespassing as part of a political protest. One could even argue that MIT encouraged such free exchange of information — it is no secret that its campus network has long been extraordinarily open with minimal security.

What, then, was the point of the public prosecutors highlighting his intent to profit from stolen property worth “millions of dollars” when Swartz’s only aim was to make the articles public, as a statement on the problems facing the academic publishing industry? After all, any academic would tell you that there is no way to profit off a hoard of scientific literature unless you dam its flow and release it only for payment.

In fact, JSTOR’s decision not to press charges against him came only after it had reclaimed its “stolen” articles – even though Laura Brown, the managing director of JSTOR, had announced in September 2011 that journal content from 500,000 articles would be released for free public viewing and download. In the meantime, Swartz was made to face 13 charges anyway.

Assuming the charges were reasonable at all, his demise then means that the gap between those who hold onto information and those who would use it is spanned only by what the government thinks is criminal. That the hammer fell so heavily on someone who tried to bridge this gap is tragic. Worse, long-drawn, expensive court cases are becoming roadblocks on the path towards change, especially when they involve prosecutors incapable of judging the difference between innovation and damage on the digital frontier. It doesn’t help that such prosecution also neatly avoids the aura of illegitimacy that imprisoning peaceful activists would carry for any government.

Today, Aaron Swartz is dead. All that it took to push a brilliant mind over the edge was a case threatening to wipe out his fortunes and ruin the rest of his life. In the words of Lawrence Lessig, American academic activist, and his former mentor at the Harvard University Edmond J. Safra Centre for Ethics: “Somehow, we need to get beyond the ‘I’m right so I’m right to nuke you’ ethics of our time. That begins with one word: Shame.”

Problems associated with studying the brain

Paul Broca announced in 1861 that the region of the brain now named after him was the “seat of speech”. Through a seminal study, researchers Nancy Kanwisher and Evelina Fedorenko from MIT announced on October 11, 2012, that Broca’s area actually consists of two sub-units, one of which specifically handles cognition when the body performs demanding tasks.

As researchers explore more on the subject, two things become clear.

The first: the more we think we know about the brain and go on to study it, the more we discover things we never knew existed. This is significant because, apart from giving researchers more avenues through which to explore the brain, it also exposes their – rather, our – limits in being able to predict how things really might work.

The biology is, after all, intact: cells are cells, muscles are muscles, but out of their complex interactions entirely new functionalities are born.

The second: how the cognitive-processing and the language-processing networks might communicate internally is unknown to us. This means we’ll have to devise new ways of studying the brain, forcing it to flex some muscles over others by having it perform carefully crafted tasks.

Placing a person under an fMRI scanner reveals a lot about which parts of the brain are being used at each moment, but we now realise we have no clue how many parts are actually there! This places the onus on the researcher to devise tests that

  1. Affect only specific areas of the brain; and
  2. Where they end up affecting other areas as well, allow the researcher to distinguish between those areas by how each responds to the test.

Once this is done, we will finally understand both the functions and the limits of Broca’s area, and also acquire pointers as to how it communicates with the rest of the brain.
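
As a toy illustration of what such a test design looks like in analysis terms, the Python sketch below simulates two hypothetical sub-regions and two tasks – one language-heavy, one cognitively demanding – and compares their mean responses. All names, numbers and response profiles here are invented for illustration; this is not the authors’ data or analysis pipeline.

```python
# Toy sketch: distinguishing two hypothetical sub-regions of Broca's area by
# how they respond to two carefully chosen tasks. The response profiles and
# noise levels are invented for illustration only.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100

# Assumed (hypothetical) mean responses, in arbitrary signal units:
# sub-region A is language-selective, sub-region B tracks cognitive demand.
profiles = {
    "sub_region_A": {"read_sentences": 1.0, "hard_arithmetic": 0.2},
    "sub_region_B": {"read_sentences": 0.3, "hard_arithmetic": 0.9},
}

for region, tasks in profiles.items():
    # Simulate noisy trial-by-trial responses and average them per task.
    means = {
        task: rng.normal(loc=mu, scale=0.3, size=n_trials).mean()
        for task, mu in tasks.items()
    }
    # The sign of this contrast is what tells the two sub-regions apart.
    contrast = means["read_sentences"] - means["hard_arithmetic"]
    print(f"{region}: language-vs-demand contrast = {contrast:+.2f}")
```

A real experiment would of course use measured BOLD signals and proper statistics, but the logic is the same: craft tasks whose contrast comes out with opposite signs in the two areas, so the test itself tells them apart.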

A lot of predictive power, and the research that could build on it, is held back by humankind’s still-inchoate picture of the brain.