epitaxy

  • MIT develops thermo-PV cell with 40% efficiency

    Researchers at MIT have developed a heat engine that can convert heat to electricity with 40% efficiency. Unlike traditional heat engines – a common example is the internal combustion engine inside a car – this device doesn’t have any moving parts. In addition, it has been designed to work with a heat source at a temperature of 1,900° to 2,400° C. Effectively, it’s like a solar cell that has been optimised to work with photons from vastly hotter sources – although its efficiency still sets it apart. If you know the history, you’ll understand why 40% is a big deal. And if you know a bit of optics and some materials science, you’ll understand how this device could be an important part of the world’s efforts to decarbonise its power sources. But first, the history.

    We’ve known how to build heat engines for almost two millennia. They were first built to convert heat, generated by burning a fuel, into mechanical energy – so they’ve typically had moving parts. For example, the internal combustion engine combusts petrol or diesel and harnesses the energy produced to move a piston. However, the engine can only extract mechanical work from the fuel – it can’t put the heat back. If it did, it would have to ‘give back’ the work it just extracted, nullifying the engine’s purpose. So once the piston has been moved, the engine dumps the heat and begins the next cycle of heat extraction from more fuel. (In the parlance of thermodynamics, the origin of the heat is called the source and its eventual resting place is called the sink.)

    The inevitability of this waste heat keeps the heat engine’s efficiency from ever reaching 100% – and the efficiency is dragged down further by the mechanical energy losses implicit in the moving parts (the piston, in this case). In 1824, the French physicist and military engineer Nicolas Léonard Sadi Carnot derived the formula to calculate the maximum possible efficiency of a heat engine that works in this way. (The formula also assumes that the engine is reversible – i.e. that it can pump heat from a colder source to a hotter sink.) The number spit out by this formula is called the Carnot efficiency. No heat engine can have an energy efficiency greater than its Carnot efficiency. The internal combustion engines of today achieve a real-world efficiency of around 37%. A steam generator at a large power plant can go up to 51%. Against this background, the heat engine that the MIT team has developed has a celebration-worthy efficiency of 40%.
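
    Carnot’s formula itself is simple: the maximum efficiency depends only on the absolute temperatures of the source and the sink. A quick sketch of what it implies for the temperatures involved here (the figures are illustrative, not from the MIT paper):

```python
# Carnot efficiency: the hard ceiling on any heat engine operating between
# a hot source at t_hot_k and a cold sink at t_cold_k (both in kelvin).
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    return 1 - t_cold_k / t_hot_k

# An emitter at 2,150 °C (mid-range of 1,900-2,400 °C) against a
# room-temperature sink at 25 °C:
t_hot = 2150 + 273.15   # ≈ 2,423 K
t_cold = 25 + 273.15    # ≈ 298 K
print(f"Carnot limit: {carnot_efficiency(t_hot, t_cold):.1%}")  # ≈ 87.7%
```

    In other words, the new device’s 40% is still well below the theoretical ceiling for these temperatures – but remarkable for an engine with no moving parts.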

    The other notable thing about it is the amount of heat with which it can operate. Two potential applications of the new device come immediately to mind: using the waste heat from something that operates at 1,900-2,400° C, and drawing heat from something that stores energy at those temperatures. There aren’t many entities in the world that maintain a temperature of 1,900-2,400° C, let alone dump waste heat at it. Work on the device caught my attention after I spotted a press release from MIT. The release described one application that combined both possibilities in the form of a thermal battery system. Here, heat from the Sun is concentrated (using lenses and mirrors) in graphite blocks located in a highly insulated chamber. When the need arises, the insulation can be removed to a suitable extent for the graphite to lose some heat, which the new device then converts to electricity.

    On Twitter, user Scott Leibrand (@ScottLeibrand) also pointed me to a similar technology called FIRES – short for ‘Firebrick Resistance-Heated Energy Storage’, proposed by MIT researchers in 2018. According to a paper they wrote, it “stores electricity as … high-temperature heat (1000–1700 °C) in ceramic firebrick, and discharges it as a hot airstream to either heat industrial plants in place of fossil fuels, or regenerate electricity in a power plant.” They add that “traditional insulation” could limit heat leakage from the firebricks to less than 3% per day and estimate a storage cost of $10/kWh – “substantially less expensive than batteries”. This is where the new device could shine, or better yet enable a complete power-production system: by converting heat deliberately leaked from the graphite blocks or firebricks to electricity, at 40% efficiency. Even given the fact that heat transfer is more efficient at higher temperatures, this is impressive – more so since such energy storage options are also geared for the long term.

    Let’s also take a peek at how the device works. It’s called a thermophotovoltaic (TPV) cell. The “photovoltaic” in the name indicates that it uses the photovoltaic effect to create an electric current. It’s closely related to the photoelectric effect. In both cases, an incoming photon knocks out an electron in the material, creating a voltage that then supports an electric current. In the photoelectric effect, the electron is ejected from the material altogether. In the photovoltaic effect, the electron stays within the material and can be recaptured. Now, to achieve the high efficiency, the research team wrote in its paper that it did three things. They’re a bunch of big words but they have straightforward implications, which I explain below, so don’t back down.

    1. “The usage of higher bandgap materials in combination with emitter temperatures between 1,900 and 2,400 °C” – Band gap refers to the energy difference between two levels. In semiconductors, for example, when electrons in the valence band are imparted enough energy, they can jump across the band gap into the conduction band, where they can flow around the material, conducting electricity. The same thing happens in the TPV cell, where incoming photons can ‘kick’ electrons into the material’s conduction band if they have the right amount of energy. Because the photon source is a very hot object, the photons are bound to have energies corresponding to the near-infrared wavelengths of light – around 1-1.5 electron-volts, or eV. So the corresponding TPV material also needs to have a bandgap of 1-1.5 eV. This brings us to the second point.

    2. “High-performance multi-junction architectures with bandgap tunability enabled by high-quality metamorphic epitaxy” – Architecture refers to the configuration of the cell’s physical, electrical and chemical components, and epitaxy refers to the way in which the cell is made. In the new TPV cell, the MIT team used a multi-junction architecture that allowed the device to ‘accept’ photons of a range of wavelengths (corresponding to the temperature range). This is important because the incoming photons can have one of two effects: either kick out an electron or heat up the material. The latter is undesirable and should be avoided – hence the multi-junction setup, designed to usefully absorb as many photons as possible. A related issue is that the power output per unit area of an object radiating heat scales with the fourth power of its temperature. That is, if its temperature increases by a factor of x, its power output per unit area will increase by a factor of x^4. Since the heat source of the TPV cell is so hot, it will have a high power output, again privileging the multi-junction architecture. The epitaxy is not interesting to me, so I’m skipping it. But I should note that cells like this one aren’t ubiquitous because making them is a highly intricate process.

    3. “The integration of a highly reflective back surface reflector (BSR) for band-edge filtering” – The MIT press release explains this part clearly: “The cell is fabricated from three main regions: a high-bandgap alloy, which sits over a slightly lower-bandgap alloy, underneath which is a mirror-like layer of gold” – the BSR. “The first layer captures a heat source’s highest-energy photons and converts them into electricity, while lower-energy photons that pass through the first layer are captured by the second and converted to add to the generated voltage. Any photons that pass through this second layer are then reflected by the mirror, back to the heat source, rather than being absorbed as wasted heat.”
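
    The bandgap claim in point 1 is easy to sanity-check: Wien’s displacement law gives the wavelength at which a hot body’s emission peaks, and from that the peak photon energy. A rough sketch using standard physical constants (the calculation is mine, not the paper’s):

```python
h = 6.626e-34   # Planck constant, J·s
c = 2.998e8     # speed of light, m/s
b = 2.898e-3    # Wien's displacement constant, m·K
eV = 1.602e-19  # joules per electron-volt

def peak_photon_energy_ev(temp_c: float) -> float:
    """Energy of the photons at the peak of a blackbody's emission spectrum."""
    temp_k = temp_c + 273.15
    peak_wavelength = b / temp_k      # Wien's law: lambda_peak = b / T
    return h * c / peak_wavelength / eV

for t in (1900, 2400):
    print(f"{t} °C → peak ≈ {peak_photon_energy_ev(t):.2f} eV")
```

    At 1,900-2,400° C the peak comes out to roughly 0.9-1.1 eV; and since a blackbody’s spectrum extends well above its peak, plenty of photons land in the 1-1.5 eV window the cell’s materials are matched to.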
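
    The fourth-power scaling in point 2 is the Stefan-Boltzmann law, and a couple of lines show how dramatically the radiated power grows with temperature (illustrative numbers, again not the paper’s):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m²·K⁴)

def radiated_power_w_per_m2(temp_k: float, emissivity: float = 1.0) -> float:
    # Power radiated per unit area of a hot surface scales as T^4.
    return emissivity * SIGMA * temp_k ** 4

# Doubling the absolute temperature multiplies the output by 2^4 = 16:
print(radiated_power_w_per_m2(2673) / radiated_power_w_per_m2(1336.5))  # 16.0
print(radiated_power_w_per_m2(2673))  # ~2.9 MW/m² for a ~2,400 °C emitter
```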

    While it seems obvious that technology like this will play an important part in humankind’s future, particularly given the attractiveness of maintaining a long-term energy store as well as the use of a higher-efficiency heat engine, the economics matter a great deal. I don’t know how much the new TPV cell will cost, especially since it isn’t being mass-produced yet; in addition, the design of the thermal battery system will determine how many square feet of TPV cells will be required, which in turn will affect the cells’ design as well as the economics of the overall facility. This said, the fact that the system as a whole will have so few moving parts, plus the ready availability of sunlight and of graphite or firebricks – or even molten silicon, which has a high heat capacity – keeps the allure of MIT’s high-temperature TPVs alive.

    Featured image: A thermophotovoltaic cell (size 1 cm x 1 cm) mounted on a heat sink designed to measure the TPV cell efficiency. To measure the efficiency, the cell is exposed to an emitter and simultaneous measurements of electric power and heat flow through the device are taken. Caption and credit: Felice Frankel/MIT, CC BY-NC-ND.

  • Before seeing, there are the ways of imaging

    When May-Britt Moser, Edvard Moser and John O’Keefe were awarded the 2014 Nobel Prize for physiology or medicine “for their discoveries of cells that constitute a positioning system in the brain”, there was a noticeable uptick in the months that followed in the number of articles on similar subjects in the popular as well as the scientific literature. The same thing happened with the science Nobel Prizes in subsequent years, and I suspect it will be the same this year with cryo-electron microscopy (cryoEM) as well. And I’d like to ride this wave.

    §

    It has often been the case that the Nobel Prizes for physiology/medicine (a.k.a. the prize for biology) and for chemistry have rewarded advancements in chemistry and biology, respectively. This year, however, the chemistry prize leaned more towards physics. Joachim Frank, Jacques Dubochet and Richard Henderson – three biologists – were on a quest to make the tool they were using to explore structural biology more powerful, more efficient. So Frank invented computational techniques; Dubochet invented a new way to prepare the sample; and Henderson used them both deftly to prove their methods worked.

    Since then, cryoEM has come a long way, but the improvements since have mostly been more sophisticated versions of what Frank, Dubochet and Henderson first demonstrated … except for one component: the microscope’s electronics.

    Just the way human eyes are primed to detect photons of a certain wavelength, extract the information encoded in them, convert that into an electric signal and send it to the brain for processing, a cryoEM uses electrons. A wave can be scattered by objects in its path that are of a size comparable to the wave’s wavelength. So electrons, which have a much shorter wavelength than photons of visible light, can be used to probe smaller distances. A cryoEM fires a tight, powerful beam of electrons into the specimen. Parts of the specimen scatter the electrons into a detector on the microscope. The detector ‘reads’ how the electrons have changed and delivers that information to a computer. This happens repeatedly as electron beams are fired at different copies of the specimen oriented at random angles. A computer then puts together a high-resolution 3D image of the specimen using all the detector data. In this scheme of things, a technological advancement in 2012 significantly improved the cryoEM’s imaging abilities. It was called the direct electron detector, developed to substitute for the charge-coupled device (CCD).
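
    To get a sense of the scales involved, an electron’s wavelength can be estimated from the microscope’s accelerating voltage with the relativistically corrected de Broglie formula. A quick sketch with standard constants – this is textbook physics, not anything specific to one instrument:

```python
import math

h = 6.626e-34   # Planck constant, J·s
m0 = 9.109e-31  # electron rest mass, kg
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron-volt

def electron_wavelength_pm(accel_kv: float) -> float:
    """Relativistic de Broglie wavelength of a beam electron, in picometres."""
    E = accel_kv * 1e3 * eV  # kinetic energy in joules
    lam = h / math.sqrt(2 * m0 * E * (1 + E / (2 * m0 * c ** 2)))
    return lam * 1e12

print(f"{electron_wavelength_pm(300):.2f} pm")  # ~2 pm at 300 kV
```

    Compare that with visible light at around 500,000,000 pm (500 nm): the electron beam can in principle resolve features hundreds of thousands of times smaller.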

    The simplest imaging system known to humans is the photographic film, which uses a surface composed of certain chemical substances that are sensitive to visible light. When the surface is exposed to a frame, say a painting, the photons reflected by the painting impinge on the surface. The substances therein then ‘record’ the information carried by the photons in the form of a photograph. A CCD employs a surface of metal-oxide semiconductors (MOS). A semiconductor device of this kind relies on the behaviour of electric charge on either side of a special junction: an interface of dissimilar materials to which impurities have been added such that one layer is rich in electrons (n) and the other poor (p). The junction will either conduct electricity or not depending on how a voltage is applied across it. Anyway: when a photon impinges on the MOS, the latter releases an electron (thanks to the photoelectric effect) that is then moved through the device to an area where it can be manipulated to contribute to one pixel of the image.

    (Note: When I write ‘one photon’ or ‘one electron’, I don’t mean one exactly. Various uncertainties, including Heisenberg’s, prevail in quantum mechanics and it’s unreasonable to assume humans can manipulate particles one at a time. My use of the singular is only illustrative. At the same time, I hope you will pause to appreciate – later in this post – how close to the singular we’ve been able to get.)

    CCDs can produce images quickly and with high contrast even in low light. However, they have an important disadvantage: CCDs have a lower detective quantum efficiency than photographic films at higher spatial frequencies. Detective quantum efficiency is a measure of how much of the signal-to-noise ratio in the incoming signal a detector – like the film or a CCD – preserves in the recorded image. For example, when you’re getting a dental X-ray done to understand how your teeth look below the gums, your mouth is bombarded with X-ray photons that penetrate the gums but not the teeth. The more such photons there are, the better the image of your teeth. However, inundating your mouth with X-rays just to get a better picture risks damaging tissue and hurting you. This would be the case if an X-ray ‘camera’ had a detector with a low detective quantum efficiency. The simplest workaround would be to use an amplifier to boost the signal produced by the detector – but this would boost the noise as well.
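
    Formally, detective quantum efficiency compares the squared signal-to-noise ratio at the detector’s output with that at its input. A minimal sketch with made-up numbers:

```python
def dqe(snr_in: float, snr_out: float) -> float:
    # A perfect detector preserves the incoming signal-to-noise ratio
    # (DQE = 1); every real detector degrades it (DQE < 1).
    return (snr_out / snr_in) ** 2

# If a detector receives a signal with SNR 10 but records an image with
# SNR 7, it has preserved about half the available information:
print(round(dqe(10, 7), 2))  # 0.49
```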

    So, in other words, CCDs have more trouble recording the finer details in an image than photographic films when there is a lot of noise coming with the incident signal. The noise can also be internally generated, such as during the process when photons are converted into electrons.

    However, scientists can’t simply swap their CCDs for photographic films, because CCDs have other important advantages. They scan images faster, allow for easier refocusing and realignment of the object under study, and require less maintenance. This dilemma provided the impetus to develop the direct electron detector – effectively a CCD with better detective quantum efficiency.

    Because a cryoEM is in the business of ‘seeing’ electrons, a scintillator is placed between the electrons and the CCD. When the electron hits the scintillator, the material absorbs the energy and emits a glow – in the form of a photon. This photon is then picked up by the CCD for processing. Sometimes, the incoming electron may not create a photon at exactly the location on the scintillator where it is received. Instead, it may bounce off of multiple locations, producing a splatter of photons in a larger area and creating a blur in the image.

    In a direct electron detector, the scintillator is removed, forcing the sensor to directly receive and process the electrons from the beam used for study. Such (higher-energy) electrons can damage the sensor as well as produce unnecessary signals within the system. These effects can be protected against using suitable hardware and circuit-design techniques, both of which required advancements in materials science that weren’t available until recently. Even so, the eventual device itself is pretty simple in design. According to the 2009 doctoral thesis of one Liang Jin,

    The device can be divided into three major regions. At the very top of the surface is the circuitry layer that has pixel transistors and photodiode as well as interconnects between all the components (metallisation layers). The middle layer is a p-epitaxial layer (about 8 to 10 µm thick) that is epitaxially grown with very low defect levels and highly doped. The rest of the 300 µm silicon substrate is used mainly for mechanical support.

    On average, a single incident electron of 200 keV will generate about 2,000 ionisation electrons in the 10 µm epitaxial layer, which is significantly larger than the noise level of the device (less than 50 electrons). Each pixel integrates the collected electrons during an exposure period and at the conclusion of a frame, the contents of the sensor array are read out, digitised and stored.
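
    The figures in the passage above also give a sense of the margin a direct detector enjoys – the signal from a single incident electron stands well clear of the noise floor:

```python
# Figures quoted from the thesis: one 200 keV electron generates ~2,000
# ionisation electrons in the epitaxial layer, against a noise level of
# under 50 electrons.
signal_electrons = 2000
noise_electrons = 50
print(signal_electrons / noise_electrons)  # 40.0: one electron is unambiguous
```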

    To understand the extent to which noise was reduced as a result, consider an example. In 2010, a research group led by Jean-Paul Armache of the Ludwig-Maximilians-Universität München was able to image eukaryotic ribosomes using cryoEM at a resolution of 6 angstroms (0.6 nanometres) using 1.4 million images. In 2013, a different group, led by Xiao-chen Bai of the Medical Research Council Laboratory of Molecular Biology in Cambridge, UK, imaged the same ribosomes to 4.5 angstroms using 35,813 images. The first group used cryoEM + CCDs. The second group used cryoEM + direct electron detectors.

    An even newer development seeks to bring back the CCD as the detector of choice among structural biologists. In September 2017, scientists from the Fermi National Accelerator Laboratory announced that they had engineered a highly optimised skipper CCD in their lab. The skipper CCD was first theorised by, among others, D.D. Wen in 1974. It’s a CCD in which the electrons released by the photons are measured multiple times – up to 4,000 times per pixel according to one study – during processing to better separate signal from noise. The same study said that, as a result, the skipper CCD’s readout noise could be reduced to 0.068 electrons per pixel. The cost of this was that a few hours would elapse between the CCD receiving the first electrons and the processed image becoming available. But in a review, Michael Schirber, a corresponding editor for Physics, argues that “this could be an acceptable tradeoff for rare events, such as hypothetical dark matter particles interacting with silicon atoms”.
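
    The dramatic noise figure follows from a standard statistical fact: averaging N independent measurements of the same pixel’s charge shrinks uncorrelated readout noise by a factor of √N. A sketch – the single-read noise of 4.3 electrons below is a hypothetical figure chosen to illustrate the scaling, not one reported by the study:

```python
import math

def averaged_readout_noise(single_read_noise_e: float, n_samples: int) -> float:
    # Non-destructive skipper readout: each pixel's charge is sampled
    # n_samples times; uncorrelated noise averages down as 1/sqrt(N).
    return single_read_noise_e / math.sqrt(n_samples)

print(f"{averaged_readout_noise(4.3, 4000):.3f} e- per pixel")  # ≈ 0.068
```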

    Featured image: Scientists using a 300kV cryo-electron microscope at the Max Planck Institute of Molecular Physiology, Dortmund. Credit: MPI Dortmund.