MIT Research News' Journal
 

Thursday, January 21st, 2016

    Sensible subsidies

    Governments often offer subsidies to consumers for clean-technology products, from home solar panels to electric vehicles. But what are the right levels of subsidy, and how should they be calculated? As a new paper co-authored by MIT researchers shows, governments can easily make subsidies too low when they ignore a basic problem: Consumer demand for these products is usually highly uncertain.

    Indeed, the paper’s analysis suggests this has already happened in the case of the Chevy Volt, an electric car introduced in 2010 that suffered slow initial sales before gaining more traction in the marketplace.

    “The government will miss their target by a lot when ignoring demand uncertainty,” says Georgia Perakis, the William F. Pounds Professor of Management at the MIT Sloan School of Management and a co-author of the paper.

    While discussion of “demand uncertainty” might sound a bit abstract, it matters. Governments usually provide subsidies based on overall adoption targets, such as the number of cars or solar panels they would like to see adopted over a period of time. But green technologies are often new products, and no one really knows how many consumers are waiting to buy them.

    Some models of subsidies assume a steady ratio between the dollar amount of the subsidy and the total number of cars or solar panels that will be sold. But as the new paper indicates, that’s not quite the right approach. Given uncertain markets, subsidy levels don’t correlate steadily with sales. Instead, it takes relatively high subsidy levels to kick-start a certain amount of business; then a more gradual increase can help achieve higher sales.

    For clean technologies, the research project shows, these increased subsidies should still pay for themselves even at higher levels, when issues such as reductions in pollution, which lead to lower health-care costs, are factored in.

    The paper, “The Impact of Demand Uncertainty on Consumer Subsidies for Green Technology Adoption,” has been published online by Management Science. The co-authors are Perakis; Maxime C. Cohen PhD ’15, an assistant professor at New York University; and Ruben Lobel PhD ’12, an assistant professor at the University of Pennsylvania.

    Higher voltage needed

    Uncertain demand for new technologies might seem to help boost sales; firms would likely want to produce enough goods to meet potential demand, while also keeping prices low enough to spur further adoption. But that isn’t the only mechanism at work. Firms always worry about having unsold inventory, and a certain number of early-adopting technology consumers seem willing to pay high prices. Those additional factors mean firms also have reason to aim for relatively high prices, while producing relatively modest quantities of the new product.

    That’s where subsidy levels matter. Suppose the market for solar panels was predictable, with a similar number of arrays being sold every month. With a given subsidy, firms would know how much to produce and how much to charge consumers. But demand for clean-tech innovations isn’t steady. For some electric vehicles and other products, sales have started off slowly before curving upward.

    For those cases, after studying the demand data and modeling a variety of scenarios with different subsidy levels, the researchers concluded that subsidy levels often start off too low. For the Chevy Volt, for instance, the optimal subsidy would have been higher than the $7,500 the U.S. government offered.

    That estimate varies with both the sales target and the degree of demand uncertainty: To get 2,000 Volts sold faster, they found, the subsidy level should have been much higher. But to reach, say, 10,000 Volts sold, the subsidy would not need to be five times as high as that needed to sell 2,000 cars; the modeling shows the needed subsidy increases in a curve that tapers off as the sales volume grows.
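
    The tapering relationship described above can be illustrated with a toy model. The concave shape reflects the paper's qualitative finding, but the functional form, the "kick-start" level, and the dollar figures below are invented purely for illustration:

```python
import math

def required_subsidy(target_units, kick_start=5000.0, scale=120.0):
    """Hypothetical concave subsidy curve (illustrative only, not the
    paper's model): a fixed 'kick-start' level is needed to spur any
    adoption under demand uncertainty, after which the increase needed
    for each additional unit of the target tapers off."""
    return kick_start + scale * math.sqrt(target_units)

s_2k = required_subsidy(2_000)
s_10k = required_subsidy(10_000)
print(f"target  2,000 cars -> subsidy ${s_2k:,.0f}")
print(f"target 10,000 cars -> subsidy ${s_10k:,.0f}")
# A 5x larger adoption target needs far less than 5x the subsidy:
print(f"ratio: {s_10k / s_2k:.2f}x for a 5x larger target")
```

    Any concave function of the target produces the same qualitative behavior the researchers describe: relatively high subsidies to kick-start sales, then more gradual increases to reach higher volumes.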

    At any level of sales, the higher subsidy helps keep firms from reacting to demand uncertainty by aiming for higher prices along with more modest sales. And higher subsidies are particularly crucial in the early stages of a clean-technology product launch.

    In general, as the researchers write in the paper, ignoring the uncertain nature of consumer demand when setting subsidies means “sales can be significantly below the desired adoption target level.”

    “Important and timely”

    The new paper is part of a larger program of study in which Perakis and Cohen, working with Charles Thraves, a PhD student at the MIT Operations Research Center, have studied the impact of competition in green technology markets. In particular, they investigate how the growing number of competitors in the electric vehicle industry affects the subsidy level offered to consumers.

    The results stem from a model the scholars have developed that uses data from the transportation and home-energy sectors while also representing many aspects of consumer choice in the green-technology marketplace.

    That model also allows the researchers to explore whether or not clean-technology subsidies pay for themselves in the larger context. They do, the results show, when one includes “externalities” such as costly environmental damage.

    “The conclusion that we drew is that basically if the government sets subsidies ‘the right way,’ taking externalities into account, [then consumers] will not do badly,” Perakis says. “Even if there are big externalities.”

    Other scholars say the research program and the latest paper are valuable. Ozalp Ozer, a professor of management science at the University of Texas at Dallas, who has read the paper, says it “addresses a very important and timely problem” and that “policy makers should pay close attention” to the research framework, since it “highlights some of the shortcomings of current subsidy practices.”

    The research was supported, in part, by the MIT Energy Initiative.

    New finding may explain heat loss in fusion reactors

    One of the biggest obstacles to making fusion power practical — and realizing its promise of virtually limitless and relatively clean energy — has been that computer models have been unable to predict how the hot, electrically charged gas inside a fusion reactor behaves under the intense heat and pressure required to make atoms stick together.

    The key to making fusion work — that is, getting atoms of a heavy form of hydrogen called deuterium to stick together to form helium, releasing a huge amount of energy in the process — is to maintain a sufficiently high temperature and pressure to enable the atoms to overcome their resistance to each other. But various kinds of turbulence can stir up this hot soup of particles and dissipate some of the intense heat, and a major problem has been to understand and predict exactly how this turbulence works, and thus how to overcome it.

    A long-standing discrepancy between predictions and observed results in test reactors has been called “the great unsolved problem” in understanding the turbulence that leads to a loss of heat in fusion reactors. Solving this discrepancy is critical for predicting the performance of new fusion reactors such as the huge international collaborative project called ITER, under construction in France.

    Now, researchers at MIT’s Plasma Science and Fusion Center, in collaboration with others at the University of California at San Diego, General Atomics, and the Princeton Plasma Physics Laboratory, say that they have found the key. In a result so surprising that the researchers themselves found it hard to believe their own results at first, it turns out that interactions between turbulence at the tiniest scale, that of electrons, and turbulence at a scale 60 times larger, that of ions, can account for the mysterious mismatch between theory and experimental results.

    The new findings are detailed in a pair of papers published in the journals Nuclear Fusion and Physics of Plasmas, by MIT research scientist Nathan Howard, doctoral student Juan Ruiz Ruiz, Cecil and Ida Green Associate Professor in Engineering Anne White, and 12 collaborators.

    “I’m extremely surprised” by the new results, White says. She adds that it took a thorough examination of the detailed results of computer simulations, along with matching experimental observations, to show that the counterintuitive result was real.

    Persisting eddies

    The expectation among physicists for more than a decade had been that turbulence associated with ions (atoms carrying an electric charge) occurs at scales so much larger than turbulence caused by electrons, whose eddies are nearly two orders of magnitude smaller, that the smaller eddies would be completely smeared out by the larger ones. And even if the smaller eddies survived the larger-scale disruptions, the conventional thinking went, these electron-scale whirls would be so much smaller that their effects would be negligible.
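
    The roughly 60-fold scale separation quoted earlier is consistent with basic plasma physics: at a given temperature, a charged particle's gyroradius scales with the square root of its mass, so for a deuterium plasma the ion-to-electron eddy-size ratio is about the square root of the deuteron-to-electron mass ratio. A quick check, using standard physical constants (this back-of-envelope calculation is ours, not from the papers):

```python
# At equal temperature, gyroradius scales as sqrt(mass), so the
# ion/electron turbulence scale ratio in a deuterium plasma is
# approximately sqrt(m_D / m_e).
M_DEUTERON = 3.3436e-27  # kg (CODATA value)
M_ELECTRON = 9.1094e-31  # kg (CODATA value)

scale_ratio = (M_DEUTERON / M_ELECTRON) ** 0.5
print(f"ion/electron turbulence scale ratio ~ {scale_ratio:.1f}")  # ~ 60.6
```

    This matches the "60 times larger" figure cited for the ion-scale turbulence.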

    The new findings show that this conventional wisdom was wrong on both counts. The two scales of turbulence do indeed coexist, the researchers found, and they interact with each other so strongly that it’s impossible to understand their effects without including both kinds in any simulations.

    However, it requires prodigious amounts of computer time to run simulations that encompass such widely disparate scales, explains Howard, who is the lead author on the paper detailing these simulations. Accomplishing each simulation required 15 million hours of computation, carried out by 17,000 processors over a period of 37 days at the National Energy Research Scientific Computing Center — making this team the biggest user of that facility for the year. Using an ordinary MacBook Pro to run the full set of six simulations that the team carried out, Howard estimates, would have taken 3,000 years.

    But the results were clear, and startling. Far from being eliminated by the larger-scale turbulence, the tiny eddies produced by electrons continue to be clearly visible in the results, stretched out into long ribbons that wind around the donut-shaped vacuum chamber that characterizes a tokamak fusion reactor. Despite the temperature of 100 million degrees Celsius inside the plasma, these ribbon-like eddies persist for long enough to influence how heat gets dissipated from the swirling mass — a determining factor in how much fusion can actually take place inside the reactor.

    Previously, scientists had thought that simply simulating turbulence separately at the two different size scales and adding the results together would give a close enough approximation, but they kept finding discrepancies between those predictions and the actual results seen in test reactors. The new multiscale simulation, Howard says, matches the real results much more accurately. Now, researchers at General Atomics are taking these new results and using them to develop a simplified, streamlined simulation that could be run on an ordinary laptop computer, Howard says.

    Independent evidence

    In addition to the theoretical simulations, MIT graduate student Ruiz Ruiz, lead author of the second paper, has analyzed a series of experiments at the Princeton Plasma Physics Laboratory, which provided direct evidence of electron-scale turbulence that supports the new simulations. The results offer clear, independent evidence that the electron-scale turbulence really does play an important role, and they show that this is a general phenomenon, not one specific to a particular reactor design.

    That’s because Howard’s simulations were based on MIT’s Alcator C-Mod tokamak reactor, whereas Ruiz Ruiz’s results were from a different type of reactor called the National Spherical Torus Experiment, which has a significantly different configuration.

    Understanding the details of these different mechanisms of turbulence has been “an outstanding challenge” in the field of fusion research, White says, and these new findings could greatly improve the understanding of what’s really going on inside the 10 tokamak research reactors that exist around the world, as well as in future experimental reactors under construction or in the planning stages.

    “The evidence from both of these papers, that electron energy transport in tokamaks has a significant contribution from both ion and electron-scale turbulence and that multiscale simulations are needed to predict the transport, is profoundly important,” says Gary Staebler, a researcher at General Atomics who was not involved in this work. “Both of these papers are very high quality,” he adds. “The execution and analysis of the experiments is first class.”

    The research was supported by the U.S. Department of Energy.

    Diagnosing depression before it starts

    A new brain imaging study from MIT and Harvard Medical School may lead to a screen that could identify children at high risk of developing depression later in life.

    In the study, the researchers found distinctive brain differences in children known to be at high risk because of family history of depression. The finding suggests that this type of scan could be used to identify children whose risk was previously unknown, allowing them to undergo treatment before developing depression, says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology and a professor of brain and cognitive sciences at MIT.

    “We’d like to develop the tools to be able to identify people at true risk, independent of why they got there, with the ultimate goal of maybe intervening early and not waiting for depression to strike the person,” says Gabrieli, an author of the study, which appears in the journal Biological Psychiatry.

    Early intervention is important because once a person suffers from an episode of depression, they become more likely to have another. “If you can avoid that first bout, maybe it would put the person on a different trajectory,” says Gabrieli, who is a member of MIT’s McGovern Institute for Brain Research.

    The paper’s lead author is McGovern Institute postdoc Xiaoqian Chai, and the senior author is Susan Whitfield-Gabrieli, a research scientist at the McGovern Institute.

    Distinctive patterns

    The study also helps to answer a key question about the brain structures of depressed patients. Previous imaging studies have revealed two brain regions that often show abnormal activity in these patients: the subgenual anterior cingulate cortex (sgACC) and the amygdala. However, it was unclear if those differences caused depression or if the brain changed as the result of a depressive episode.

    To address that issue, the researchers decided to scan the brains of children who were not depressed, according to their scores on a commonly used diagnostic questionnaire, but who had a parent who had suffered from the disorder. Such children are three times more likely to become depressed later in life, usually between the ages of 15 and 30.

    Gabrieli and colleagues studied 27 high-risk children, ranging in age from 8 to 14, and compared them with a group of 16 children with no known family history of depression.

    Using functional magnetic resonance imaging (fMRI), the researchers measured synchronization of activity between different brain regions. Synchronization patterns that emerge when a person is not performing any particular task allow scientists to determine which regions naturally communicate with each other.
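
    In resting-state fMRI studies of this kind, "synchronization" between two regions is commonly quantified as the correlation between their activity time series. A minimal sketch on synthetic data (illustrative of the general technique only, not the study's actual pipeline):

```python
# Two synthetic regional fMRI time series that share a common
# fluctuation; their Pearson correlation serves as a simple
# functional-connectivity measure.
import math
import random

random.seed(0)
n = 200  # number of fMRI time points

shared = [random.gauss(0, 1) for _ in range(n)]       # common signal
region_a = [s + random.gauss(0, 0.5) for s in shared]  # region 1 = signal + noise
region_b = [s + random.gauss(0, 0.5) for s in shared]  # region 2 = signal + noise

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(region_a, region_b)
print(f"functional connectivity (r) = {r:.2f}")  # high, since the signals share a source
```

    Abnormally high correlations of this kind, such as the sgACC-to-default-mode-network link described below, are what the researchers mean by hyperconnectivity.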

    The researchers identified several distinctive patterns in the at-risk children. The strongest of these links was between the sgACC and the default mode network — a set of brain regions that is most active when the mind is unfocused. This abnormally high synchronization has also been seen in the brains of depressed adults.

    The researchers also found hyperactive connections between the amygdala, which is important for processing emotion, and the inferior frontal gyrus, which is involved in language processing. Within areas of the frontal and parietal cortex, which are important for thinking and decision-making, they found lower than normal connectivity.

    Cause and effect

    These patterns are strikingly similar to those found in depressed adults, suggesting that these differences arise before depression occurs and may contribute to the development of the disorder, says Ian Gotlib, a professor of psychology at Stanford University.

    “The findings are consistent with an explanation that this is contributing to the onset of the disease,” says Gotlib, who was not involved in the research. “The patterns are there before the depressive episode and are not due to the disorder.”

    The MIT team is continuing to track the at-risk children and plans to investigate whether early treatment might prevent episodes of depression. They also hope to study how some children who are at high risk manage to avoid the disorder without treatment.

    Other authors of the paper are Dina Hirshfeld-Becker, an associate professor of psychiatry at Harvard Medical School; Joseph Biederman, director of pediatric psychopharmacology at Massachusetts General Hospital (MGH); Mai Uchida, an assistant professor of psychiatry at Harvard Medical School; former MIT postdoc Oliver Doehrmann; MIT graduate student Julia Leonard; John Salvatore, a former McGovern technical assistant; MGH research assistants Tara Kenworthy and Elana Kagan; Harvard Medical School postdoc Ariel Brown; and former MIT technical assistant Carlo de los Angeles.

