MIT Research News' Journal
 

Thursday, October 24th, 2019

    Putting the “bang” in the Big Bang

    As the Big Bang theory goes, somewhere around 13.8 billion years ago the universe exploded into being, as an infinitely small, compact fireball of matter that cooled as it expanded, triggering reactions that cooked up the first stars and galaxies, and all the forms of matter that we see (and are) today.

    Just before the Big Bang launched the universe onto its ever-expanding course, physicists believe, there was another, more explosive phase of the early universe at play: cosmic inflation, which lasted less than a trillionth of a second. During this period, matter — a cold, homogeneous goop — inflated exponentially quickly before processes of the Big Bang took over to more slowly expand and diversify the infant universe.

    Recent observations have independently supported theories for both the Big Bang and cosmic inflation. But the two processes are so radically different from each other that scientists have struggled to conceive of how one followed the other.

    Now physicists at MIT, Kenyon College, and elsewhere have simulated in detail an intermediary phase of the early universe that may have bridged cosmic inflation with the Big Bang. This phase, known as “reheating,” occurred at the end of cosmic inflation and involved processes that wrestled inflation’s cold, uniform matter into the ultrahot, complex soup that was in place at the start of the Big Bang.

    “The postinflation reheating period sets up the conditions for the Big Bang, and in some sense puts the ‘bang’ in the Big Bang,” says David Kaiser, the Germeshausen Professor of the History of Science and professor of physics at MIT. “It’s this bridge period where all hell breaks loose and matter behaves in anything but a simple way.”

    Kaiser and his colleagues simulated in detail how multiple forms of matter would have interacted during this chaotic period at the end of inflation. Their simulations show that the extreme energy that drove inflation could have been redistributed just as quickly, within an even smaller fraction of a second, and in a way that produced conditions that would have been required for the start of the Big Bang.

    The team found this extreme transformation would have been even faster and more efficient if quantum effects modified the way that matter responded to gravity at very high energies, deviating from the way Einstein’s theory of general relativity predicts matter and gravity should interact.

    “This enables us to tell an unbroken story, from inflation to the postinflation period, to the Big Bang and beyond,” Kaiser says. “We can trace a continuous set of processes, all with known physics, to say this is one plausible way in which the universe came to look the way we see it today.”

    The team’s results appear today in Physical Review Letters. Kaiser’s co-authors are lead author Rachel Nguyen and John T. Giblin, both of Kenyon College, and former MIT graduate student Evangelos Sfakianakis and Jorinde van de Vis, both of Leiden University in the Netherlands.

    “In sync with itself”

    The theory of cosmic inflation, first proposed in the 1980s by MIT’s Alan Guth, the V.F. Weisskopf Professor of Physics, predicts that the universe began as an extremely small speck of matter, possibly about a hundred-billionth the size of a proton. This speck was filled with ultra-high-energy matter, so energetic that the pressures within generated a repulsive gravitational force — the driving force behind inflation. Like a spark to a fuse, this gravitational force exploded the infant universe outward, at an ever-faster rate, inflating it to nearly 10^26 times its original size (a 1 followed by 26 zeroes), in less than a trillionth of a second.
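
    For scale, that growth factor is just arithmetic on the roughly 60 “e-folds” of expansion mentioned later in this article: each e-fold multiplies the size of the universe by a factor of e ≈ 2.718, so

        \frac{a_{\text{end}}}{a_{\text{start}}} = e^{N} \approx e^{60} \approx 1.1 \times 10^{26}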

    Kaiser and his colleagues attempted to work out what the earliest phases of reheating — that bridge interval at the end of cosmic inflation and just before the Big Bang — might have looked like.

    “The earliest phases of reheating should be marked by resonances. One form of high-energy matter dominates, and it’s shaking back and forth in sync with itself across large expanses of space, leading to explosive production of new particles,” Kaiser says. “That behavior won’t last forever, and once it starts transferring energy to a second form of matter, its own swings will get more choppy and uneven across space. We wanted to measure how long it would take for that resonant effect to break up, and for the produced particles to scatter off each other and come to some sort of thermal equilibrium, reminiscent of Big Bang conditions.”
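
    In the preheating literature, this resonant phase is commonly modeled with a Mathieu-type mode equation: the oscillating inflaton condensate periodically drives each mode of a second field, and modes in certain bands grow exponentially. A schematic version (neglecting cosmic expansion; g is an illustrative coupling constant, and Phi and m are the amplitude and frequency of the driving oscillation) is:

        \ddot{\chi}_k + \left[ k^2 + g^2 \Phi^2 \sin^2(m t) \right] \chi_k = 0

    Modes inside the instability bands grow like e^{\mu_k t}, the “explosive production of new particles” described above; the resonance shuts off once enough energy has drained into the new particles to make the driving oscillation choppy and uneven.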

    The team’s computer simulations represent a large lattice onto which they mapped multiple forms of matter and tracked how their energy and distribution changed in space and over time as the scientists varied certain conditions. The simulation’s initial conditions were based on a particular inflationary model — a set of predictions for how the early universe’s distribution of matter may have behaved during cosmic inflation.
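
    A minimal sketch of the general technique, not the team’s actual code, is shown below: a single classical field evolves on a small periodic lattice under an assumed quartic potential, and its growing spatial variance tracks the departure from homogeneity. Every parameter here is invented for illustration.

        import numpy as np

        # Toy 1+1-dimensional version of a lattice field simulation
        # (illustrative only; the team's simulations involve multiple
        # coupled fields, three spatial dimensions, and cosmic expansion).
        N, dx, dt = 256, 0.1, 0.01       # lattice sites, grid spacing, time step
        lam = 0.1                        # assumed quartic self-coupling
        rng = np.random.default_rng(0)

        phi = 1.0 + 1e-3 * rng.standard_normal(N)   # nearly homogeneous field
        pi = np.zeros(N)                            # conjugate momentum

        def laplacian(f):
            # Periodic finite-difference Laplacian
            return (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f) / dx**2

        for _ in range(10_000):
            # Symplectic (kick-drift) update of phi'' = lap(phi) - V'(phi),
            # with V(phi) = lam * phi**4 / 4
            pi += dt * (laplacian(phi) - lam * phi**3)
            phi += dt * pi

        # Growth of the field's spatial variance signals departure from homogeneity
        print("final field variance:", phi.var())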

    The scientists chose this particular model of inflation over others because its predictions closely match high-precision measurements of the cosmic microwave background — a remnant glow of radiation emitted just 380,000 years after the Big Bang, which is thought to contain traces of the inflationary period.

    A universal tweak

    The simulation tracked the behavior of two types of matter that may have been dominant during inflation, both very similar to the Higgs boson, a particle that was recently observed in other experiments.

    Before running their simulations, the team added a slight “tweak” to the model’s description of gravity. While ordinary matter that we see today responds to gravity just as Einstein predicted in his theory of general relativity, matter at much higher energies, such as what’s thought to have existed during cosmic inflation, should behave slightly differently, interacting with gravity in ways that are modified by quantum mechanics, the physics that governs interactions at the smallest scales.

    In Einstein’s theory of general relativity, the strength of gravity is represented as a constant, with what physicists refer to as a minimal coupling, meaning that, no matter the energy of a particular particle, it will respond to gravitational effects with a strength set by a universal constant.

    However, at the very high energies that are predicted in cosmic inflation, matter interacts with gravity in a slightly more complicated way. Quantum-mechanical calculations predict that the effective strength of gravity can vary in space and time when it interacts with ultra-high-energy matter — a phenomenon known as nonminimal coupling.
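
    In field-theory notation, such a nonminimal coupling is usually written as a direct interaction between the field and the spacetime curvature R, so that the effective strength of gravity depends on the local field value. A schematic term (with xi the dimensionless coupling strength, phi the field, and M_pl the Planck mass) is:

        \mathcal{L} \supset \frac{1}{2}\left( M_{\text{pl}}^2 + \xi \phi^2 \right) R

    Setting xi = 0 recovers Einstein’s minimal coupling; “turning this quantum effect up or down,” as described below, corresponds to varying xi.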

    Kaiser and his colleagues incorporated a nonminimal coupling term into their inflationary model and observed how the distribution of matter and energy changed as they turned this quantum effect up or down.

    In the end, they found that the stronger this quantum-modified coupling between matter and gravity, the faster the universe transitioned from the cold, homogeneous matter of inflation to the much hotter, diverse forms of matter that are characteristic of the Big Bang.

    By tuning this quantum effect, they could make this crucial transition take place over two to three “e-folds,” where an e-fold is the amount of time it takes for the universe to (roughly) triple in size. By comparison, inflation itself took place over about 60 e-folds.
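
    Concretely, the number of e-folds between two times is defined through the cosmic scale factor a(t):

        N \equiv \ln \frac{a(t_2)}{a(t_1)}

    One e-fold is growth by a factor of e ≈ 2.72 (hence “roughly triple”), so completing reheating in two to three e-folds means the universe grew by only a factor of about 7 to 20 during the transition, versus a factor of about e^60 ≈ 10^26 during inflation itself.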

    “Reheating was an insane time, when everything went haywire,” Kaiser says. “We show that matter was interacting so strongly at that time that it could relax correspondingly quickly as well, beautifully setting the stage for the Big Bang. We didn’t know that to be the case, but that’s what’s emerging from these simulations, all with known physics. That’s what’s exciting for us.”

    This research was supported, in part, by the U.S. Department of Energy and the National Science Foundation.

    MIT engineers develop a new way to remove carbon dioxide from air

    A new way of removing carbon dioxide from a stream of air could provide a significant tool in the battle against climate change. The new system can work on the gas at virtually any concentration level, even down to the roughly 400 parts per million currently found in the atmosphere.

    Most methods of removing carbon dioxide from a stream of gas require higher concentrations, such as those found in the flue emissions from fossil fuel-based power plants. A few variations have been developed that can work with the low concentrations found in air, but the new method is significantly less energy-intensive and less expensive, the researchers say.

    The technique, based on passing air through a stack of charged electrochemical plates, is described in a new paper in the journal Energy & Environmental Science, by MIT postdoc Sahag Voskian, who developed the work during his PhD, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering.

    The device is essentially a large, specialized battery that absorbs carbon dioxide from the air (or other gas stream) passing over its electrodes as it is being charged up, and then releases the gas as it is being discharged. In operation, the device would simply alternate between charging and discharging, with fresh air or feed gas being blown through the system during the charging cycle, and then the pure, concentrated carbon dioxide being blown out during the discharging.

    As the battery charges, an electrochemical reaction takes place at the surface of each of a stack of electrodes. These are coated with a compound called polyanthraquinone, which is composited with carbon nanotubes. The electrodes have a natural affinity for carbon dioxide and readily react with its molecules in the airstream or feed gas, even when it is present at very low concentrations. The reverse reaction takes place when the battery is discharged — during which the device can provide part of the power needed for the whole system — and in the process ejects a stream of pure carbon dioxide. The whole system operates at room temperature and normal air pressure.
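
    The article does not spell out the electrochemistry, but quinone-based CO2 capture generally works as follows: charging reduces each quinone unit Q, and the reduced, negatively charged form binds CO2 as a carboxylate-like adduct; discharging reverses both steps. Schematically:

        \mathrm{Q} + 2e^- \rightarrow \mathrm{Q}^{2-}, \qquad \mathrm{Q}^{2-} + 2\,\mathrm{CO_2} \rightarrow \mathrm{Q(CO_2)_2^{2-}}

    This is the “binary” affinity Voskian describes below: the neutral form ignores CO2 entirely, while the reduced form binds it strongly.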

    “The greatest advantage of this technology over most other carbon capture or carbon absorbing technologies is the binary nature of the adsorbent’s affinity to carbon dioxide,” explains Voskian. In other words, the electrode material, by its nature, “has either a high affinity or no affinity whatsoever,” depending on the battery’s state of charging or discharging. Other reactions used for carbon capture require intermediate chemical processing steps or the input of significant energy, such as heat or pressure differences.

    “This binary affinity allows capture of carbon dioxide from any concentration, including 400 parts per million, and allows its release into any carrier stream, including 100 percent CO2,” Voskian says. That is, as any gas flows through the stack of these flat electrochemical cells, during the release step the captured carbon dioxide will be carried along with it. For example, if the desired end-product is pure carbon dioxide to be used in the carbonation of beverages, then a stream of the pure gas can be blown through the plates. The captured gas is then released from the plates and joins the stream.

    In some soft-drink bottling plants, fossil fuel is burned to generate the carbon dioxide needed to give the drinks their fizz. Similarly, some farmers burn natural gas to produce carbon dioxide to feed their plants in greenhouses. The new system could eliminate that need for fossil fuels in these applications, and in the process actually be taking the greenhouse gas right out of the air, Voskian says. Alternatively, the pure carbon dioxide stream could be compressed and injected underground for long-term disposal, or even made into fuel through a series of chemical and electrochemical processes.

    The process this system uses for capturing and releasing carbon dioxide “is revolutionary,” he says. “All of this is at ambient conditions — there’s no need for thermal, pressure, or chemical input. It’s just these very thin sheets, with both surfaces active, that can be stacked in a box and connected to a source of electricity.”

    “In my laboratories, we have been striving to develop new technologies to tackle a range of environmental issues that avoid the need for thermal energy sources, changes in system pressure, or addition of chemicals to complete the separation and release cycles,” Hatton says. “This carbon dioxide capture technology is a clear demonstration of the power of electrochemical approaches that require only small swings in voltage to drive the separations.”

    In a working plant — for example, in a power plant where exhaust gas is being produced continuously — two sets of such stacks of the electrochemical cells could be set up side by side to operate in parallel, with flue gas being directed first at one set for carbon capture, then diverted to the second set while the first set goes into its discharge cycle. By alternating back and forth, the system could always be both capturing and discharging the gas. In the lab, the team has proven the system can withstand at least 7,000 charging-discharging cycles, with a 30 percent loss in efficiency over that time. The researchers estimate that they can readily improve that to 20,000 to 50,000 cycles.
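
    As a rough illustration of what those numbers imply, the sketch below assumes (purely for the sake of illustration; the paper makes no such claim) that the measured 30 percent loss accrues exponentially per cycle, estimates the per-cycle retention, and extrapolates it to the longer lifetimes the researchers are targeting. It shows why the degradation rate itself must improve, not merely the cycle count:

        # Rough cycle-life extrapolation; assumes exponential per-cycle decay,
        # an illustrative assumption rather than a claim from the paper.
        cycles_tested = 7_000
        efficiency_after_test = 0.70     # 30 percent loss reported over the test

        retention_per_cycle = efficiency_after_test ** (1 / cycles_tested)
        print(f"per-cycle retention: {retention_per_cycle:.6f}")

        for target in (20_000, 50_000):
            remaining = retention_per_cycle ** target
            print(f"after {target:,} cycles: {remaining:.0%} of efficiency remains")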

    The electrodes themselves can be manufactured by standard chemical processing methods. While today this is done in a laboratory setting, the process could be adapted so that the electrodes are ultimately made in large quantities through a roll-to-roll manufacturing process similar to a newspaper printing press, Voskian says. “We have developed very cost-effective techniques,” he says, estimating that the electrodes could be produced for something like tens of dollars per square meter.

    Compared to other existing carbon capture technologies, this system is quite energy efficient, consistently using about one gigajoule of energy per ton of carbon dioxide captured. Other existing methods have energy consumption that varies from 1 to 10 gigajoules per ton, depending on the inlet carbon dioxide concentration, Voskian says.
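
    For context, a simple unit conversion (not a figure from the paper) puts that energy figure in more familiar terms:

        1\ \mathrm{GJ/ton} = \frac{10^{9}\ \mathrm{J}}{3.6 \times 10^{6}\ \mathrm{J/kWh}} \approx 278\ \mathrm{kWh\ per\ ton} \approx 0.28\ \mathrm{kWh\ per\ kilogram\ of\ CO_2}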

    The researchers have set up a company called Verdox to commercialize the process, and hope to develop a pilot-scale plant within the next few years, he says. And the system is very easy to scale up, he says: “If you want more capacity, you just need to make more electrodes.”

