Categories
Op-eds

The capacity to notoriety of work

Why is it considered OK to flaunt hard work? Will there come a time when it might be more prudent to mask long hours of work behind a finished product and instead behave as if the object was conceived with less work and more skill and intelligence?

Is it because hard work is considered a fundamental opportunity given all humankind?

But just the possession of will and spirit deep within doesn’t mean it has to be used, to be exhausted in the pursuit of success, even if its exhaustion is accompanied by praise. Why is that praise justified?

“He worked hard and long, I worked not half-as-hard and not for half-as-long, and I give you something better”: With this example in mind, is hard work considered a nullifier, a currency that translates all forms of luck, ill-luck, opportunity and accident into the form of perspiration and blood? Why should it be?

Moreover, there exists a tendency, too, that recognizes, nay, yearns, that the capacity for honest work is somehow more innate than the capacity to fool, trick, spy on, defame, slander, and kill; that honest work is more human than the capacity for all these traits.

Is it really?

Who decreed that work would be that nullifier, a currency, and not intelligence? Is hard-work “more” fundamental than intelligence? Why is the flaunting of intelligence considered impudent while the flaunting of work a sign of the presence of humility? Is the capacity for work less volatile than the capacity to think smart? Is one acquired and the other only delivered at the time of birth?

Will a day come when the flaunting of hard-work is considered a sign of impudence and the flaunting of intelligence a sign of the presence of humility? Or – alas! – is it the implied notion of superiority that so scares us, that keeps us from acknowledging publicly that superior intelligence does imply a form of success, perhaps similar to the success implied by the capacity to work hard?

What sacrifice does one represent that the other, seemingly, rejects? Why does only intelligence suffer the curse of bigotry while honest work retains the privilege to socially unfettered use?

Categories
Life notes

A revisitation inspired by Facebook’s opportunities

When habits form, or rather become fully formed, it becomes difficult to recognize the drive behind their perpetuation. Am I still doing what I’m doing for the habit’s sake, or is it that I still love what I do and that’s why I’m doing it? In the early stages of habit-formation, the impetus has to come from within – let’s say as a matter of spirit – because it’s a process of creation. Once the entity has been created, once it is fully formed, it begins to sustain itself. It begins to attract attention, the focus of other minds, perhaps even the labor of other wills. That’s the perceived pay-off of persevering at the beginning, persevering in the face of nil returns.

But where the perseverance really makes a difference is when, upon the onset of that dull moment, upon the onset of some lethargy or writer’s block, we somehow lose the ability to tell apart fatigue-of-the-spirit and suspension-of-the-habit. If I am no longer able to write, even if only for a day or so, I should be able to tell the difference between that pit-stop and a perceived threat to the habit itself. If we don’t learn to make that distinction – which is more palpable than fine or blurry most of the time – then we will have persevered for nothing but perseverance’s sake.

This realization struck me after I opened a Facebook page for my blog so that, given my incessant link-sharing on the social network, only the people who wanted to read the stuff I shared could sign up and receive the updates. I had no intention earlier to use Facebook as anything but a socialization platform, but after the true nature of my activity on Facebook was revealed to me (by myself), I realized my professional ambitions had invaded my social ones. So, to remind myself why the social was important, too, I decided to stop sharing news-links and analyses on my timeline.

However, after some friends expressed excitement – that I never quite knew was there – about being able to receive my updates in a more cogent manner, I understood that there were people listening to me, that they did spend time reading what I had to say on science news, etc., not just on my blog but also wherever I decided to post it! At the same moment, I thought to myself, “Now, why am I blogging?” I had no well-defined answer, and that’s when I knew my perseverance was being misguided by my own hand, misdirected by my own foolishness.

I opened astrohep.wordpress.com in January 2011, and whatever science- or philosophy-related stories I had to tell, I told here. After some time, during a period coinciding with the commencement of my formal education in journalism, I started to use isnerd more effectively: I beat down the habit of using big words (simply because they encapsulated better whatever I had to say), started to put some effort into telling my stories differently, did a whole lot of reading before and while writing each post, and used quotations and references wherever I could.

But the reason I’d opened this blog stayed intact all the time (or at least I think it did): I wanted to tell my science/phil. stories because some of the people around me liked hearing them and I thought the rest of the world might like hearing them, too.

At some point, however, I crossed over into the other side of perseverance: I was writing some of my posts not because they were stories people might like to hear but because, hey, I was a story-writer and what do I do but write stories! I was lucky enough to receive no nasty responses to some absolutely egregious pieces of non-fiction on this blog, and, in parallel, I was unlucky enough not to understand that a reader, no matter how bored, never wants to be presented with crap.

Now, where I used to draw pride from pouring so much effort into a small blog in one corner of WordPress, I draw pride from telling stories somewhat effectively – although still not as effectively as I’d like. Now, astrohep.wordpress.com is not a justifiable encapsulation of my perseverance, and nothing is or will be until I have the undivided attention of my readers whenever I have something to present them. I was wrong in assuming that my readers would stay with me and take to my journey as theirs, too: A writer is never right in assuming that.

Categories
Op-eds Science

An Indian supercomputer by 2017. Umm…

This is a tricky question. And for background, here’s the tweet from IBN Live that caught my eye.

(If you didn’t read the IBN piece, here’s the gist: India, or rather Kapil Sibal, our present telecom minister, will have a state-of-the-art supercomputer, 61 times faster than the current leader Sequoia, built indigenously by 2017 at a cost of Rs. 4,700 crore over five years.)

Kapil Sibal

India already has many supercomputers: NAL’s Flosolver, C-DAC’s PARAM, DRDO’s PACE/ANURAG, BARC’s Anupam, IMS’s Kabru-Linux cluster and CRL’s Eka (both versions of PARAM), and ISRO’s Saga 220.

The most powerful among them, PARAM (through its latest version), is ranked 58th in the world. It was designed and deployed by the Pune-based Centre for Development of Advanced Computing (C-DAC) and the Department of Electronics and Information Technology (DEITY – how apt) in 1991. Its first version, PARAM 8000, used 8,000 Inmos transputers (a microprocessor architecture built with parallel-processing in mind); subsequent versions include PARAM 10000, Padma, and the latest, Yuva. Yuva came into operation in November 2008 and boasts a peak speed of 54 teraflops (1 teraflops = 1 trillion floating point operations per second; floating point is a data type that stores numbers as {significant digits * base^exponent}).
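To make that last bit of jargon concrete, here is a minimal Python sketch (purely illustrative, not from the IBN piece) of the {significant digits * base^exponent} idea, using the base-2 decomposition Python exposes through math.frexp:

  import math

  x = 54.0e12  # Yuva's quoted peak speed: 54 trillion operations per second, written as a float
  mantissa, exponent = math.frexp(x)   # decomposes x so that x == mantissa * 2**exponent
  print(mantissa, exponent)            # the significant digits and the power of the base
  print(mantissa * 2**exponent == x)   # True: the number is reconstructed exactly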

Interestingly, in July 2009, C-DAC had announced that a new version of PARAM was in the works and that it would be deployed in 2012 with a computing power of more than 1 petaflops (1 petaflops = 1,000 teraflops) at a cost of Rs. 500 crore. Where is it?

Then, in May 2011, it was announced that India would spend Rs. 10,000 crore in building a 132.8-exaflops supercomputer by 2017. Does that make today’s announcement an effective reduction in budget as well as a diminishing of ambitions? If so, then why? If not, then are we going to have two high-power supercomputers?!

The high-power supercomputers that the proposed 2017 machine will compete with usually find use in computational fluid dynamics simulations, weather forecasting, finite element analysis, seismic modelling, e-governance, telemedicine, and administering high-speed network activities. Obviously, these are tasks that operate with a lot of probabilities thrown into the simulation and calculation mix, and require hundreds of millions of operations per second to be solved within an “acceptable” chance of the answer being right. As a result, and because of the broad scale of these applications, such supercomputers are built only when the need for the answers is already present. They are not installed to create needs but only to satisfy them.

So, that said, why does India need such a high-power supercomputer? Deploying a supercomputer is no easy task, and deploying one that’s so far ahead of the field also involves an overhaul of the existing system and network architectures. What needs is the government creating that might require so much power? Will we be able to afford it?

In fact, I worry that Mr. Kapil Sibal has announced the decision to build such a device simply because India doesn’t feature in the list of top 10 countries that have high-power supercomputers. Because, beyond being able to predict weather patterns and further extend the country’s space-faring capabilities, what will the device be used for? Are there records that the ones already in place are being used effectively?

Categories
Science

Rubbernecking at redshifting

The interplay of energy and matter is simply wonderful because, given the presence of some intrinsic properties, the results of their encounters can be largely predicted. The presence of smoke indicates fire, the presence of shadows both darkness and light, the presence of winds a pressure gradient, the presence of mass a gravitational potential. And a special radiological extension of the last correlation gives rise to a phenomenon called gravitational redshift.

The wave-particle duality insists that electromagnetic radiation, if conceived as a stream of photons, can also be thought of as propagating as waves. All waves have two fundamental properties: wavelength and frequency. If a wave consists of a crest and a trough, the length of a crest-trough pair is its wavelength, and the number of wavelengths traversed by the wave in a second its frequency. Also, the energy contained in a wave is directly proportional to its frequency.
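In symbols, using the standard textbook relations (they aren’t spelled out in the post itself): for a wave of wavelength λ and frequency ν travelling at the speed of light c, and with h denoting Planck’s constant,

  c = λν and E = hν.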

A wave undergoes a gravitational redshift when it moves from a region of lower gravitational potential to a region of higher gravitational potential. Such a potential gradient may be experienced when one moves away from a massive body, from regions of stronger to weaker gravitational pull (note the inverse variation). And when you think of radiation, such as light, moving from the surface of a star and toward a far-away observer, the light gets redshifted. The phenomenon was proposed, implicitly, in 1916 by Albert Einstein through his eponymous Einstein Field Equations (EFE), which describe the general theory of relativity (GR).

When radiation gets redshifted, its frequency is reduced, shifting it toward the red portion of the electromagnetic spectrum, hence the name. Agreed, the phenomenon is counter-intuitive. Usually, when the leash on an escaping object is loosened, the object speeds up. In the case of a redshift, however, the frequency is lowered (the photon loses energy rather than speed).

The real wonder lies in the predictive power of such physics. It doesn’t matter whence the mass and what the wave: their interaction is always preceded and succeeded by a blueshift and a redshift. Moreover, speaking from an application-oriented perspective, the radiation reaching Earth from outer space will always be redshifted. Consider it: the waves will have left the gravitational pull of some body behind on their way toward Earth. In thinking so, given some radiation, its source, and thus the radiation’s initial frequency, it becomes easy to calculate how much mass lies between the source and Earth.
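For reference, the standard GR expression for this shift (not derived in the post) for light emitted at a radial distance r from a non-rotating mass M and received far away is

  1 + z = (1 − 2GM/rc²)^(−1/2) ≈ 1 + GM/rc²,

where the approximation holds in weak fields. Given the emitted and the observed frequencies, the relation can be inverted to estimate M.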

An all-sky map of the cosmic microwave background (CMB) radiation as recorded by the Wilkinson Microwave Anisotropy Probe (WMAP) after its launch in 2001 (this map serves as a reliable benchmark against which to compare the locally observed frequencies of the CMB)

As a naturally available resource, consider the cosmic microwave background (CMB) radiation. The CMB was born when the universe was around 379,000 years old, when the ionic plasma born moments after the Big Bang had cooled to a temperature at which electrons and protons could combine to form hydrogen atoms, leaving the photons decoupled from matter and loosened upon the universe as residual radiation (currently at a temperature of  2.72548 ± 0.00057 K).

In the CMB context, the red- and blueshifting of these photons by gravitational potentials along their path is known as the Sachs-Wolfe effect, which is of two kinds: integrated and non-integrated. The non-integrated Sachs-Wolfe effect occurs at the surface-of-last-scattering, and the integrated version between the surface-of-last-scattering and Earth. The surface mentioned here can be thought of as an imaginary surface in space where the last matter-radiation decouplings occurred. What we’re interested in is the integrated Sachs-Wolfe effect.

Assuming that the photons have just left a star behind, and been gravitationally redshifted in the process, there is a lot of matter they could still encounter on their way to Earth even if our home planet maintains a clear line-of-sight to the star. This includes dust, stray rocks, gases blown around by stellar winds, and – if it does exist – dark energy.

The iconic Pillars of Creation, snapped by the Hubble Space Telescope on April 1, 1995, show columns of intergalactic dust in the midst of star-creation while also being eroded by starlight and stellar winds from other stars in the neighborhood.

Therefore, detecting the presence of dark energy between two points in space should be easy, shouldn’t it? All we’d have to do is measure the redshift in the radiation coming from a selected region, as detected by a satellite in orbit around Earth, and compare it with a map of that region. An analysis of the redshift “left over” after subtracting the redshift due to matter should yield the amount of dark energy! (See also: WMAP)

This procedure was suggested in 1996 by Neil Turok and Robert Crittenden of the Perimeter Institute, Canada. However, after the first evidence of the integrated Sachs-Wolfe effect was detected in 2003, the correlation between the observed data and already-available maps was very low. This led some skeptics to suggest that the effect could have instead been caused by space dust. The possibility of their being right was indeed high, until September 11, 2012, when their skepticism was almost conclusively refuted by a team of scientists from the University of Portsmouth and the LMU University Munich.

The study, led by Tommaso Giannantonio and Crittenden, lasted two years and established at a confidence level of 5.4 sigma (or 99.996%) that the ’03 observation indeed corresponded to dark energy and not any other source of gravitational potential.

The phenomenological legacy of redshifts derives from their special place in Einstein’s GR: the descriptive EFE first opened up even the theoretical possibility of such redshifts and their applications in astrophysics research.

Categories
Life notes

The invasion

The most fear I’ve ever experienced is when I smoked up for the first time. I thought I’d enjoy it – isn’t that always the case when you foray into an unknown realm of experiences, a world of as-yet uninhabited sensations? With that promise firmly in mind, I’d taken a few drags and settled back, waiting for some awakening to come dazzle me. And when it did hit, I was terrified. It started with my fingertips turning numb, followed by my face… I couldn’t feel the wind on that windy day. There was nothing about me that let me close my eyes lest they turn dry against the onslaught of dead, cold air. Next, there was the reeling imagination: flying colours, rifle-toting Russian stalkers, speeding cars that cannoned me into the wall in front of my chair, and then… a memory of standing in front of a painting wondering if it was really there.

Through all of this, a voice persisted at the back of my head – is there any other place whence subdued voices persist? – telling me that I was losing control. Now, I know that there were two of me: one moving forward like an untamed warhorse, trampling and snorting and drooling, the other attempting to rein it in, trying to snap my head back without breaking it altogether. I couldn’t possibly have sided with either force: each was as necessary as it was inexplicably just there. When I tried to stand up, the jockey brought to mind gravity, my uneven footing, and my sense of neuromuscular control, but they were quick to dissipate, to dissolve within the temptation of murky passions swimming in front of my eyes. My body was lost to me; just as suddenly, I was someone else. Sure, I could have appreciated the loss of all but some inhibitions, but the loss only served to further remind that it was just that: loss. In its passing was more betrayal than in its wake more promise.

Death, you see, is nothing different. Of course, it stops with the loss – there is no “otherside”, no after. And with the presence of that darkness continuously assured, the loss simply accentuates it, each passing moment stealing forever a sense. That must be a terrifying thing, ironical as it may sound, perhaps because it’s an irreversible, suffocating handicap, the last argument that you will have, and one that you will be forced to leave without a chance at rebuttal. And then… who will look through your eyes? Who will reason through your mind? Who will shiver against the oncoming cold under the sheath of your skin? It is hard to say, just like measuring the brightness of one candle with another: the first could be twice as bright as the second, but really how bright are they? It is a dead comparison, the life of any such glow trapped within the body of a burning wick. The luxury of universal constants doesn’t exist, does it?

Categories
Science

The weakening measurement

Unlike the special theory of relativity that the superluminal-neutrinos fiasco sought to defy, Heisenberg’s uncertainty principle presents very few, and equally iffy, measurement techniques by which it can stand verified. While both Einstein’s and Heisenberg’s foundations are close to fundamental truths, the uncertainty principle has guided, more than dictated, the applications that involve its consequences. Essentially, a defiance of Heisenberg is one for the statisticians.

And I’m pessimistic. Let’s face it, who wouldn’t be?

Anyway, the parameters involved in the experiment were:

  1. The particles being measured
  2. Weak measurement
  3. The apparatus

The experimenters claim that a value of the photon’s original polarization, X, was obtained upon a weak measurement. Then, a “stronger” measurement was made, yielding a value A. However, according to Heisenberg’s principle, the observation should have changed the polarization from A to some fixed value A’.

Now, the conclusions they drew:

  1. Obtaining X did not change A: X = A
  2. A’ – A < Limits set by Heisenberg

The terms of the weak measurement are understood with the following formula in mind:

  Aw = ⟨φ(2)|Â|φ(1)⟩ / ⟨φ(2)|φ(1)⟩

(The bra-ket, or Dirac, notation signifies the dot-product between two vectors or vector-states.)

Here, φ(1) and φ(2) denote the pre- and post-selected states, A-hat (Â) the observable being measured, and Aw the value of the weak measurement. Thus, when the pre-selected state tends toward becoming orthogonal to the post-selected state, the value of the weak measurement increases, becoming large, or “strong”, enough to affect the measured value of A-hat.

In our case: Aw = A – X; φ(1) = A; φ(2) = A’.
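To see that amplification numerically, here is a minimal Python sketch using a toy qubit, with the Pauli-Z matrix standing in for the polarization observable; none of the numbers below come from the actual experiment:

  import numpy as np

  def weak_value(pre, post, A):
      # Weak value: Aw = <post|A|pre> / <post|pre>
      return (post.conj() @ A @ pre) / (post.conj() @ pre)

  A_hat = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli-Z, eigenvalues +1 and -1

  eps = 0.01  # how far the post-selection is from being exactly orthogonal to the pre-selection
  pre = np.array([1, 1], dtype=complex) / np.sqrt(2)
  post = np.array([1, -(1 - eps)], dtype=complex)
  post = post / np.linalg.norm(post)

  # As eps shrinks, <post|pre> tends to zero and the weak value blows up,
  # landing far outside the observable's eigenvalue range of [-1, +1].
  print(weak_value(pre, post, A_hat).real)  # roughly (2 - eps) / eps, i.e. about 199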

As listed above, the sources of error are:

  1. φ(1,2)
  2. X

To prove that Heisenberg was miserly all along, Aw would have been increased until φ(1) • φ(2) equaled 0 (through multiple runs of the same experiment), and then φ(2) – φ(1), or A’ – A, measured and compared to the different corresponding values of X. After determining the strength of the weak measurement thus, A’ – X can be determined.

I am skeptical because X signifies the extent of coupling between the measuring device and the system being measured, and its standard deviation, in the case of this experiment, is dependent on the standard deviation of A’ – A, which is in turn dependent on X.

Categories
Life notes

The common tragedy

I have never been able to fathom poetry. Not because it’s unensnarable—which it annoyingly is—but because it never seems to touch upon that all-encompassing nerve of human endeavour supposedly running through our blood, transcending cultures and time and space. Is there a common trouble that we all share? Is there a common tragedy that is not death that we all quietly await that so many claim is described by poetry?

I, for one, think that that thread of shared memory is lost, forever leaving the feeble grasp of our comprehension. In fact, I believe that there is more to be shared, more to be found that will speak to the mind’s innermost voices, in a lonely moment of self-doubting. Away from a larger freedom, a “shared freedom”, we now reside in a larger prison, an invisible cell that assumes various shapes and sizes.

Sometimes, it’s in your throat, blocking your words from surfacing. Sometimes, it has your skull in a death-grip, suffocating all thoughts. Sometimes, it holds your feet to the ground and keeps you from flying, or sticks your fingers in your ears and never lets you hear what you might want to hear. Sometimes, it’s a cock in a cunt, a blade against your nerves, a catch on your side, a tapeworm in your intestines, or that cold sensation that kills wet dreams.

Today, now, this moment, the smallest of freedoms, the freedoms that belong to us alone, are what everyone shares, what everyone experiences. It’s simply an individuation of an idea, rather a belief, and the truth of that admission—peppered as it is with much doubt—makes us hold on more tightly to it. And as much as we partake of that individuation, like little gluons that emit gluons, we inspire more to pop into existence.

Within the confines of each small freedom, we live in worlds of our own fashioning. Poetry is, to me, the voice of those worlds. It is the resultant voice, counter-resolved into one expression of will and intention and sensation, that cannot, in turn, be broken down into one man or one woman, but only into whole histories that have bred them. Poetry is, to me, no longer a contiguous spectrum of pandered hormones or a conflict-indulged struggle, but an admission of self-doubt.

Categories
Life notes

Credibility on the web

There are a finite number of sources from which anyone receives information. The most prominent among them are media houses (incl. newspapers, news channels, radio stations, etc.) and scientific journals (at least w.r.t. the subjects I work with).

Seen one way, these establishments generate the information that we receive. Without them, stories would remain localized, centralized, away from the ears that could accord them gravity.

Seen another way, these establishments are also motors: sans their motive force, information wouldn’t move around as it does, although this is assuming that they don’t mess with the information itself.

With more such “motors” in the media mix, the second perspective is becoming the norm of things. Even if information isn’t picked up by one house, it could be set sailing through a blog or a CJ initiative. The means through which we learn something, or stumble upon it for that matter, are growing more overlapped, lines crossing each other’s paths more often.

Veritably, it’s a maze. In such a labyrinthine setup, the entity that stands to lose the most is the faith of a reader/viewer/consumer in the credibility of the information received.

In many cases, with a more interconnected web – the largest “supermotor” – the credibility of one bit of information is checked in one location, by one entity. Then, as it moves around, all following entities inherit that credibility-check.

For instance, on Wikipedia, credibility is established by citing news websites, newspaper/magazine articles, journals, etc. Jimmy Wales’ enterprise doesn’t have its own process of verification in place. Sure, there are volunteers who almost constantly police its millions of pages, but all they can do is check if the citation is valid and if there are any contrary reports to the claims being staked.

One way or another, if a statement has appeared in a publication, it can be used to have the reader infer a fact.

In this case, Wikipedia has inherited the credibility established by another entity. If the verification process had failed in the first place, the error would’ve been perpetuated by different motors, each borrowing from the credibility of the first.

Moreover, the more strata that the information percolates through, the harder it will be to establish a chain of accountability.

*

My largest sources of information are:

  1. Wikipedia
  2. Journals
  3. Newspapers
  4. Blogs

(The social media is just a popular aggregator of news from these sources.)

Wikipedia cites news reports and journal articles.

News reports are compiled with the combined efforts of reporters and editors. Reporters verify the information they receive by checking if it’s repeated by different sources under (if possible) different circumstances. Editors proofread the copy and are (or must remain) sensitive to factual inconsistencies.

Journals have the notorious peer-reviewing mechanism. Each paper is subject to a thorough verification process intended to weed out all mistakes, errors, information “created” by lapses in the scientific method, and statistical manipulations and misinterpretations.

Blogs borrow from such sources and others.

Notice: Even in describing the passage of information through these ducts, I’ve vouched for reporters, editors, and peer-reviews. What if they fail me? How would I find out?

*

The point of this post was to illustrate

  1. The onerous yet mandatory responsibility that verifiers of information must assume,
  2. That there aren’t enough of them, and
  3. That there isn’t a mechanism in place that periodically verifies the credibility of some information across its lifetime.

How would you ensure the credibility of all the information you receive?

Categories
Science

Weekly science quiz

My weekly science quiz debuted in The Hindu today, in its In School edition. Here’s the first installment.

Questions

  1. Neil Armstrong, the first man to step on the moon on July 21, 1969, passed away on August 25 this year. Who was the second man to step on the moon?
  2. When the car-sized robotic rover Curiosity landed on Mars on August 6, 2012, it was only the fourth rover to achieve the feat. Can you name the other three rovers, two of which are considered “twins” of each other?
  3. This installation, when it went live in April 2012, reduced carbon dioxide emissions by 80 lakh tonnes, saved 9 lakh tonnes of coal and natural gas per year, and smashed a Chinese record held since October, 2011. What are we talking about?
  4. This “vehicle” was designed in Switzerland, built in Italy, owned by the USA, and crewed by a Belgian on January 23, 1960, when it became the first vessel to descend into the Mariana Trench, the deepest point in Earth’s crust. The Belgian’s father himself once held the world record for the highest altitude reached in a hot-air balloon. Name the vessel.
  5. Horizontal slickwater hydraulic fracturing is a technique, common in the USA since the 2000s, which releases natural gas locked under sub-surface rock formations by cracking open the rock under the pressure of large quantities of water. What is the method’s common name?
  6. Last week, Michael Roukes and his team at Caltech built a highly sensitive weighing scale that uses a vibrating arm that is sensitive to small changes in its frequency. Called a nanoelectromechanical resonator, what can it measure?
  7. Netscape Navigator was the dominant web-browser of the 1990s, and its only competitor at the time was another browser named Mosaic. Since Netscape was being developed to beat Mosaic, its codename was a portmanteau of “Mosaic” and “killer”. What is the name?
  8. The ___________ lay their eggs in the months of February and March, and the hatchlings emerge after a 45-55 day incubation period, just before the hotter days of summer set in. Their nesting grounds include the coasts of Mexico, Nicaragua, Costa Rica, Orissa and Tamil Nadu, while each nesting batch is called an arribada. Fill in the blank.
  9. In the exosphere, highly energetic particles collide with atoms in the earth’s atmosphere and release a shower of less energetic particles. What are the highly energetic particles collectively called? Hint: 2012 is being celebrated as the 100th year of their discovery.
  10. The fictitious version of this contraption is a modified street-bike with a liquid-cooled V-4 engine. Its real version has a water-cooled single-cylinder engine, is made of steel, aluminium and magnesium, and is steered by the shoulders. What are we talking about?

Answers

  1. Edwin “Buzz” Aldrin
  2. Spirit & Opportunity, Sojourner
  3. The 214-MW Gujarat Solar Power Field, Patan district
  4. Trieste
  5. Fracking
  6. The weight of individual molecules
  7. Mozilla, the creator of Firefox
  8. Olive Ridley turtles
  9. Cosmic rays
  10. Batman’s Batpod
Categories
Op-eds Science

Who runs science in India?


Right now, Colin Macilwain cannot be more on top of the problem: the role of a Chief Scientific Adviser has shifted toward leveraging science and technology to reap rewards through economic and industrial policy, away from bridging the gap between the ruling elite and the academically engaged.

A contrast with India, unfortunately, is meaningless in this regard. While New York and Berlin may face off over what it means to have one person at the top versus what it means to have several people engaged throughout, scientific policy in India is in a shambles more because the Chief Scientific Adviser, C.N.R. Rao, has professed no inclination toward either agenda.

Instead, given that the country is oriented primarily toward tackling the energy crisis, Rao’s role in influencing the government to institute decentralization policies, cross-generation power tariffs, and subsidization of alternative energy sources pales in comparison to the industrial lobby that subsumes his voice with just a lot of money.

While we leave universities to tackle their loss of autonomy – chiefly because the boards indulge public interference in order to maximise public-funding – and our engineers to bridge the infrastructural gap between low consumption, lower private-sector investment, and invitations to greater foreign direct-investment, who is really running science in India?

Categories
Science

Superconductivity: From Feshbach to Fermi

(This post is continued from this one.)

After a bit of searching on Wikipedia, I found that the fundamental philosophical underpinnings of superconductivity were to be found in a statistical concept called the Feshbach resonance. If I had to teach superconductivity to those who only knew of the phenomenon superficially, that’s where I’d begin. So.

Imagine a group of students who have gathered in a room to study together for a paper the next day. Usually, there is that one guy among them who will be hell-bent on gossiping more than studying, affecting the performance of the rest of the group. In fact, given sufficient time, the entire group’s interest will gradually shift in the direction of the gossip and away from its syllabus. The way to get the entire group back on track is to introduce a Feshbach resonance: cut the bond between the group’s interest and the entity causing the disruption. If done properly, the group will turn coherent in its interest and focus on studying for the paper.

In multi-body systems, such as a conductor harboring electrons, the presence of a Feshbach resonance renders an internal degree of freedom independent of those coordinates “along” which dissociation is most likely to occur. And in a superconductor, a Feshbach resonance results in each electron pairing up with another (i.e., electron-vibrations are quelled by eliminating thermal excitation) owing to both being influenced by an attractive potential that arises out of the electron’s interaction with the vibrating lattice.

Feshbach resonance & BCS theory

For particulate considerations, the lattice-vibrations are quantized in the form of hypothetical particles called phonons. As for why the Feshbach resonance must occur the way it does in a superconductor: that is the conclusion, rather implication, of the BCS theory formulated in 1957 by John Bardeen, Leon Neil Cooper, and John Robert Schrieffer.

(Arrows describe the direction of forces acting on each entity) When a nucleus, N, pulls electrons, e, toward itself, it may be said that the two electrons are pulled toward a common target by a common force. Therefore, the electrons’ engagement with each other is influenced by N. The energy of N, in turn, is quantified as a phonon (p), and the electrons are said to interact through the phonons.

The BCS theory essentially treats electrons like rebellious, teenage kids (I must be getting old). As negatively charged electrons pass through the crystal lattice, they draw the positively charged nuclei toward themselves, creating an increase in the positive charge density in their vicinity that attracts more electrons in turn. The resulting electrostatic pull is stronger near nuclei and very weak at larger distances. The BCS theory states that two electrons that would otherwise repel each other will pair up in the face of such a unifying electrostatic potential, howsoever weak it is.

This is something like rebellious teens who, in the face of a common enemy, will unite with each other no matter what the differences between them earlier were.

Since electrons are fermions, they bow down to Pauli’s exclusion principle, which states that no two fermions may occupy the same quantum state. As each quantum state is defined by some specific combination of state variables called quantum numbers, at least one quantum number must differ between the two co-paired electrons.

Prof. Wolfgang Pauli (1900-1958)

In the case of superconductors, this is particle spin: the electrons in the member-pair will have opposite spins. Further, once such unions have been achieved between different pairs of electrons, each pair becomes indistinguishable from the other, even in principle. Imagine: they are all electron-pairs with two opposing spins but with the same values for all other quantum numbers. Each pair, called a Cooper pair, is just the same as the next!

Bose-Einstein condensates

This unification results in the sea of electrons displaying many properties normally associated with Bose-Einstein condensates (BECs). In a BEC, the particles that attain the state of indistinguishability are bosons (particles with integer spin), not fermions (particles with half-integer spin). The phenomenon occurs at temperatures close to absolute zero and in the presence of an external confining potential, such as an electric field.

In 1995, at the Joint Institute for Laboratory Astrophysics, physicists cooled rubidium atoms down to 170 billionths of a degree above absolute zero. They observed that the atoms, upon such cooling, condensed into a uniform state such that their respective velocities and distribution began to display a strong correlation (shown above, L to R with decreasing temp.). In other words, the multi-body system had condensed into a homogenous form, called a Bose-Einstein condensate (BEC), where the fluid behaved as a single, indivisible entity.

Since bosons don’t follow Pauli’s exclusion principle, a major fraction of the indistinguishable entities in the condensate may and do occupy the same quantum state. This causes quantum mechanical effects to become apparent on a macroscopic scale.

By extension, the formulation and conclusions of the BCS theory, alongside its success in supporting associated phenomena, imply that superconductivity may be a quantum phenomenon manifesting on a macroscopic scale.

Note: If even one Cooper pair is “broken”, the superconducting state will be lost as the passage of electric current will be disrupted, and the condensate will dissolve into individual electrons, which means the energy required to break one Cooper pair is the same as the energy required to break the composition of the condensate. So thermal vibrations of the crystal lattice, usually weak, become insufficient to interrupt the flow of Cooper pairs, which is the flow of electrons.

The Meissner effect in action: A magnet is levitated by a superconductor because of the expulsion of the magnetic field from within the material

The Meissner effect

In this context, the Meissner effect is simply an extrapolation of Lenz’s law but with zero electrical resistance.

Lenz’s law states that the electromotive force (EMF) because of a current in a conductor acts in a direction that always resists a change in the magnetic flux that causes the EMF. In the absence of resistance, the magnetic fields due to electric currents at the surface of a superconductor cancel all magnetic fields inside the bulk of the material, effectively pushing magnetic field lines of an external magnetic potential outward. However, the Meissner effect manifests only when the externally applied field is weaker than a certain critical threshold: if it is stronger, then the superconductor returns to its conducting state.
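The expulsion described here is usually quantified with the London equation, which the post doesn’t get into but is worth noting for reference: inside the superconductor the magnetic field obeys

  ∇²B = B/λ²,

so that, in one dimension, B(x) = B(0)·e^(−x/λ); the field dies off within a thin “penetration depth” λ of the surface instead of filling the bulk.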

Now, there is a class of materials called Type II superconductors – as opposed to the Type I class described earlier – that only push some of the magnetic field outward, the rest remaining conserved inside the material in filaments while being surrounded by supercurrents. This state is called the vortex state, and its occurrence means the material can withstand much stronger magnetic fields and continue to remain superconducting while also exhibiting the hybrid Meissner effect.

Temperature & superconductivity

There are also a host of other effects that only superconductors can exhibit, including Cooper-pair tunneling, flux quantization, and the isotope effect, and it was by studying them that a strong relationship was observed between temperature and superconductivity in various forms.

(L to R) John Bardeen, Leon Cooper, and John Schrieffer

In fact, Bardeen, Cooper, and Schrieffer hit upon their eponymous theory after observing a band gap in the electronic spectra of superconductors. The electrons in any conductor can exist at specific energies, each well-defined. Electrons above a certain energy, usually the top of the valence band, become free to pass through the entire material instead of staying in motion around the nuclei, and are responsible for conduction.

The trio observed that upon cooling the material to closer and closer to absolute zero, there was a curious gap in the energies at which electrons could be found in the material at a particular temperature. This meant that, at that temperature, the electrons were jumping from existing at one energy to existing at some other lower energy. The observation indicated that some form of condensation was occurring. However, a BEC was ruled out because of Pauli’s exclusion principle. At the same time, a BEC-like state had to have been achieved by the electrons.

This temperature is called the transition temperature: it is the temperature below which a conductor transitions into its superconducting state and Cooper pairs form, leading to the drop in the energy of each electron. The differences in various properties of the material on either side of this threshold are also attributed to this temperature, including an important notion called the Fermi energy: the potential energy that any system possesses when all its thermal energy has been removed from it. This is a significant idea because it defines both the kind and amount of energy that a superconductor has to offer for an externally applied electric current.

Enrico Fermi, along with Paul Dirac, defined the Fermi-Dirac statistics that govern the behavior of all identical particles that obey Pauli’s exclusion principle (i.e., fermions). The Fermi level and Fermi energy are concepts named for him; however, as long as we’re discussing eponymy, Fermilab overshadows them all.
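For reference, the Fermi-Dirac distribution (quoted here in its standard form; it doesn’t appear in the post) gives the probability that a state of energy E is occupied at temperature T:

  f(E) = 1 / (e^((E − μ)/(kB·T)) + 1),

where kB is Boltzmann’s constant and μ is the chemical potential, which at absolute zero coincides with the Fermi energy.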

In simple terms, the density of the electrons’ energy states at the Fermi energy of a given material dictates the “breadth” of the band gap if the electron-phonon interaction energy were to be held fixed at some value: a direct proportionality. Thus, the value of the energy gap at absolute zero should be a fixed multiple of the thermal energy corresponding to the superconducting transition temperature (the multiplication factor was found to be about 3.5 universally, irrespective of the material).
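Written out, with Δ(0) the energy gap at absolute zero, Tc the transition temperature, and kB Boltzmann’s constant, this is the standard BCS relation

  2Δ(0) ≈ 3.5 kB·Tc.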

Similarly, because of the suppression of thermal excitation (because of the low temperature), the heat capacity of the material reduces drastically at low temperatures and vanishes as the material approaches absolute zero. Right at the transition threshold, however, the heat capacity balloons up to beyond its normal value before beginning that decline. It was found that the ballooned value was always about 2.5 times the material’s normal heat capacity… again, universally, irrespective of the material!

The temperature-dependence of superconductors gains further importance with respect to applications and industrial deployment in the context of superconductivity possibly occurring at higher temperatures. The low temperatures currently necessary eliminate thermal excitations, in the form of vibrations, of the nuclei and almost entirely counter the possibility of electrons, or Cooper pairs, colliding into them. The low temperatures also assist in the flow of Cooper pairs as a superfluid, apart from allowing the energy of the superfluid to be higher than the phononic energy of the lattice.

However, to achieve all these states in order to turn a conductor into a superconductor at a higher temperature, a more definitive theory of superconductivity is required: one that allows for a conception of superconductivity requiring only certain internal conditions to prevail while the ambient temperature soars. The 1986 discovery of high-temperature superconductivity in ceramics by Bednorz and Muller was the turning point. It started to displace the BCS theory which, physicists realized, doesn’t contain the necessary mechanisms for superconductivity to manifest itself in ceramics – insulators at room temperature – at temperatures as high as 125 K.

A firmer description of superconductivity, therefore, still remains elusive. Its construction should pave the way not only for one of the few phenomena that hardly ever appears in nature to be fully understood, but also for its substitution for the standard conductors responsible for lossy transmission and other such undesirable effects. After all, superconductors are the creation of humankind, and only by its hand will they ever be fully put to work.