MIT Research News' Journal
 

Wednesday, February 13th, 2019

    12:00a
    Mathematician finds balance and beauty in math

    Since he was a child growing up in Changzhou, China, Zhiwei Yun’s appetite for mathematics has only grown, year after year, as he absorbed lessons and solved increasingly difficult problems, both in the classroom and on his own time, with a zeal that can only come from finding one’s true passion.

    But when Yun was a graduate student, he felt his trajectory come up short. In his third year, he was in a panic as he faced for the first time the difference between learning established mathematics and discovering new math as a researcher.

    But his advisor Bob MacPherson, a professor at the Institute for Advanced Study, kept encouraging him to find his own way, saying “only a problem found by yourself can really interest and drive you to the final solution.”

    “It was a hard time,” Yun recalls. “The hardest part of pure math research was knowing whether and when to give up on a problem.”

    In his fourth year, Yun finally broke through his own mental wall and found a topic for his thesis, which continues to be a rich vein of exploration for him today.

    “Being stuck and having to abandon your own idea is hard to do, and you need a lot of patience — there’s a psychological difficulty in research,” says Yun, now a newly tenured member of the MIT mathematics faculty. “Looking back, it was a big fortune. Now I’m not afraid of being stuck on a problem.”

    Something sparked

    Before he discovered mathematics, Yun was a child who loved to draw. He particularly liked calligraphy and would spend hours after school attempting to reproduce Chinese paintings and inscriptions.

    He recalls not being particularly interested in math early on, and in fact has kept some of his workbooks from that time, which show several math problems left blank here and there. But in third grade, something sparked, and the workbooks suddenly filled up, and then some.

    That year, Yun’s math teacher posted challenging math problems on the blackboard after class as a sort of extra credit. For students like Yun who could solve them, the teacher would offer still more difficult questions. Yun soon developed a personal rapport with the teacher, along with an expanding interest in math.

    “It was a feeling of solving something that most people couldn’t solve, I think, that triggered my interest,” Yun says.

    Diving in

    With his natural aptitude, Yun was funneled into China’s Math Olympiad track, and his selection exams in high school were good enough to place him on the extremely competitive and prestigious Chinese national team. In 2000, he and five of the very best math students in the country flew to South Korea, where they won gold in the 41st International Mathematical Olympiad.

    After high school, Yun entered Peking University, where he found a much deeper, thrilling well of knowledge.

    “In the days of Math Olympiad, we were just seeing the tips of an iceberg,” Yun says. “Now we were diving into the water and seeing the whole foundations of mathematics. And it was much more interesting than what was above water.”

    Early on, he was taken with Galois theory, which resolved a problem that had puzzled mathematicians for centuries. An equation of the second degree, such as ax² + bx + c = 0, can be solved by introducing a square root. Similarly, third- and fourth-degree equations can be solved with higher-order roots. But for fifth-degree equations, no such root-derived solution could be found. It wasn’t until the 19th century that Évariste Galois, an 18-year-old from France, explained why, characterizing exactly which equations can be solved by radicals.
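
    The degree-two case mentioned above can be written out directly. A minimal sketch in Python (the language choice and function name are ours, not the article's), using the square root the text refers to:

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula (solution by radicals)."""
    d = cmath.sqrt(b * b - 4 * a * c)  # the square root that "solves" degree two
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

# x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3)
print(quadratic_roots(1, -5, 6))  # → ((3+0j), (2+0j))
```

    Analogous (longer) radical formulas exist for degrees three and four; Galois theory explains why no such formula can exist for general degree five and higher.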

    Galois’ theory is now viewed as a key connection between number theory and abstract algebra — two subjects that were traditionally considered distinct.

    “His solution was not understood by his contemporaries,” says Yun, who spent the first months of his college career absorbing the theory. “I still find it amazing how a teenager could go this far.”

    After graduation, Yun headed to Princeton University to pursue a PhD in pure mathematics. When he did eventually land on a thesis topic, it was in representation theory, a branch of mathematics that seeks to represent abstract algebraic structures in concrete terms such as matrices or symmetries of shapes.

    Representation theory plays a crucial role in the Langlands program, a series of associated conjectures devised by mathematician Robert Langlands that seeks to connect the seemingly disparate fields of number theory and geometry. The Langlands program is considered one of the biggest projects in modern mathematical research, and Yun continues to work in the field of representation theory, with a focus on the Langlands program.

    “The beauty of the subject”

    From Princeton, Yun took up a brief stay at MIT as a postdoc, with an office on the first floor of Building 2, looking out on the Charles River. He spent his time soaking up as many seminars as he could attend, and would work happily into the night, before biking back to his Somerville apartment.

    “On the whole, there was not much distraction,” Yun says. “Everything was about math research.”

    As his postdoctoral work was wrapping up, he accepted a faculty position at Stanford University, while his wife, Minlan Yu, whom he met at Princeton, taught computer science at the University of Southern California. That same year, their first child was born, and Yun spent the next few years on a constant commute, traveling to Los Angeles every week or two to see his family.

    “I was booking I don’t know how many tickets each year, and I remember one time arriving at San Francisco airport, and realizing I had booked a ticket for the wrong direction,” Yun recalls. “That’s when I realized I didn’t have a sense of home, and that we really needed to move to the same place.”

    They both soon accepted offers to teach at Yale University, and spent a year and a half there before he took up his current professorship at MIT in January 2018, and she started as a professor in computer science just up the road, at Harvard University.

    Of the graduate students Yun has so far mentored, he says that “every student has their own taste, and finds problems that interest themselves, and I encourage this. That should make the transition from student to researcher more smooth.”

    He has struck up fruitful collaborations with others in the math department, all of whom share a common quality: “We are all driven by curiosity, and the beauty of the subject itself,” Yun says.

    Yun continues to work on similar problems related to the Langlands program, and has found life to be more balanced, with just enough time for math, and family.

    “My son, who is in kindergarten, was doing some first grade math problems before going to bed recently, and he asked me, ‘If I finish the fifth of this series of math books, am I close to you?’” Yun laughs proudly. “According to my grade school workbooks, he’s already ahead of me! I’m glad to see he’s eager to learn mathematics. Either way, he should follow his heart.”

    9:15a
    3Q: Machine learning and climate modeling

    Today, predicting what the future has in store for Earth’s climate means dealing in uncertainties. For example, the core climate projections from the Intergovernmental Panel on Climate Change (IPCC) have put the global temperature bump from a doubling of atmospheric CO2 levels — referred to as “climate sensitivity” — anywhere between 1.5 and 4.5 degrees Celsius. That gap, which has not budged since the first IPCC report in 1990, has profound implications for the type of environmental events humanity may want to prepare for.

    Part of the uncertainty arises because of unforced variability — changes that would occur even in the absence of increases in CO2 — but part of it arises because of the need for models to simulate complex processes like clouds and convection. Recently, climate scientists have tried to narrow the ranges of the uncertainty in climate models by using a recent revolution in computer science. Machine learning, which is already being deployed for a host of diverse applications (drug discovery, air traffic control, and voice recognition software, for example), is now expanding into climate research, with the goal of reducing the uncertainty in climate models, specifically as it relates to climate sensitivity and predicting regional trends, two of the greatest culprits of uncertainty.

    Paul O’Gorman, an associate professor in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and member of the Program in Atmospheres, Oceans and Climate, discusses where machine learning fits into climate modeling, possible pitfalls and their remedies, and areas in which the approach is likely to be most successful.

    Q: Climate sensitivity and regional changes in climate seem to be a source of frustration for researchers. What are the obstacles there, and how can machine learning help?

    A: Present-day climate models are already very useful on the one hand, but they're also faced with very challenging problems, two of which you mentioned — climate sensitivity for a doubling of carbon dioxide and regional aspects of changes in the climate, for example, how rainfall changes in a certain country. For both of those issues we would like to have more accurate climate models, and they also have to be fast because they have to be run for more than a thousand years, typically, just to get them into the current climate state before then going forward into future climates.

    So it's a question of both accuracy and efficiency. Traditionally, climate models are largely based on physics and chemistry of the atmosphere and ocean, and processes at the land surface. But they can't include everything that's happening in the atmosphere down to the millimeter scale or smaller, so they have to include some empirical formulas. And those empirical formulas are called parameterizations. Parameterizations represent complex processes, like clouds and atmospheric convection — one example of which would be thunderstorms — that happen at small scales compared to the size of the Earth, so they're difficult for global climate models to represent accurately.

    One idea that has come to the fore in the last couple of years is to use machine learning to more accurately represent these small-scale aspects of the atmosphere and ocean. The idea would be to run a very expensive, high-resolution model that can resolve the process you're interested in, for example, shallow clouds, and then use machine learning to learn from those simulations. That’s the first step. The second step would be to incorporate the machine-learned algorithm in a climate model to give, hopefully, a faster and more accurate climate model. And that's what several groups around the world are exploring.
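
    The two-step recipe O’Gorman describes can be sketched in a few lines. This is an illustrative toy only: the stand-in "high-resolution" function, the chosen input variables, and the features below are all invented for the example, and a simple least-squares fit stands in for whatever learning algorithm a real group would use.

```python
import numpy as np

rng = np.random.default_rng(0)

def high_res_tendency(T, q):
    """Stand-in for a subgrid heating tendency an expensive
    cloud-resolving simulation would compute."""
    return 0.1 * q * np.maximum(T - 300.0, 0.0)

# Step 1: learn from (stand-in) high-resolution simulation output.
T = rng.uniform(280.0, 320.0, 5000)   # coarse-grid temperature (K)
q = rng.uniform(0.0, 0.02, 5000)      # coarse-grid humidity (kg/kg)
X = np.column_stack([np.ones_like(T), T, q, T * q])   # hand-chosen features
w, *_ = np.linalg.lstsq(X, high_res_tendency(T, q), rcond=None)

# Step 2: the learned function replaces the hand-tuned parameterization
# inside the fast, coarse climate model.
def learned_tendency(T, q):
    return np.column_stack([np.ones_like(T), T, q, T * q]) @ w

print(learned_tendency(np.array([310.0]), np.array([0.01])))
```

    The generalization problem discussed in the next answer shows up immediately in a sketch like this: a fit trained only on T in [280, 320] has no reason to extrapolate correctly to a warmer range it never saw.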

    Q: To what extent can the machine-learned algorithm generalize from one climate situation, or one region, to another?

    A: That's a big question mark. What we've found so far is that if you train on the current climate and try to then simulate a much warmer climate, the machine learning algorithm will fail because it's relying on analogies to situations in the current climate that don’t extend to the warmer climate with higher temperatures. For example, clouds in the atmosphere tend to go higher in a warmer climate. So that's a limitation if you only train on the current climate, but of course training on warmer climates in high-resolution models is also possible.

    Interestingly, we found for atmospheric convection that if you train on the current climate and then go to a colder climate, the machine learning approach does work well. So there is an asymmetry between warming or cooling and how well these algorithms can generalize, at least for the case of atmospheric convection. The reason that the machine learning algorithm can generalize in the case of a cooling climate is that it can find examples at higher latitudes in the current climate to match the tropics of the colder climate. So different climates in different regions of the world help with generalization for climate change.

    The other thing that may help is events like El Niño, where the global atmosphere on average gets a bit warmer, and so that could provide an analogy from which to learn. It's not a perfect analogy with global warming, but some of the same physics may be operating at higher temperatures so that could be something that the machine learning algorithm would automatically leverage to help to generalize to warmer climates.

    Q: Does that mean there are certain areas of the climate system that machine learning will work better for versus others?

    A: I was suggesting that we should train our machine learning algorithms on very expensive high-resolution simulations, but that only makes sense, of course, if we have accurate high-resolution simulations for the process we are interested in. What we've been studying — atmospheric convection — is a good candidate because we can do quite accurate high-resolution simulations.

    On the other hand, if one was interested in, for example, how the land surface responds to climate change and how it interacts with the atmosphere above it, it's more difficult because there's lots of complexity. We have different types of plants, different soil. It's very heterogeneous. It's not as straightforward, in that case, to get from models the truth that you want to learn from. And then if we say, "Well, for aspects of the climate system that don’t have accurate expensive simulations, can we instead use observations?" Perhaps. But then we come back to the problem of trying to generalize to a different climate. So, I definitely think there are different parts of the climate system that are more amenable to the machine learning approach than others.

    Also, some aspects of climate model simulations are already very good. Models are already doing well in simulating the large-scale fluid dynamics of the atmosphere, for example. So those parts of climate models are very unlikely to be replaced with machine learning approaches that would be less flexible than a purely physics-based approach.

    10:59a
    Turning desalination waste into a useful resource

    The rapidly growing desalination industry produces water for drinking and for agriculture in the world’s arid coastal regions. But it leaves behind as a waste product a lot of highly concentrated brine, which is usually disposed of by dumping it back into the sea, a process that requires costly pumping systems and that must be managed carefully to prevent damage to marine ecosystems. Now, engineers at MIT say they have found a better way.

    In a new study, they show that through a fairly simple process the waste material can be converted into useful chemicals — including ones that can make the desalination process itself more efficient.

    The approach can be used to produce sodium hydroxide, among other products. Otherwise known as caustic soda, sodium hydroxide can be used to pretreat seawater going into the desalination plant. This changes the acidity of the water, which helps to prevent fouling of the membranes used to filter out the salty water — a major cause of interruptions and failures in typical reverse osmosis desalination plants.
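
    As a rough illustration of why a base dose shifts acidity (the numbers and function below are ours, not from the study; real seawater is strongly buffered, so this idealized pure-water formula overstates the effect):

```python
import math

def ph_after_naoh(molarity):
    """pH of pure water at 25 °C after dissolving NaOH at the given molarity.

    Idealized sketch: assumes complete dissociation and no buffering,
    so this is a back-of-the-envelope illustration, not a seawater model.
    pH = 14 + log10([OH-]) for a strong base in pure water.
    """
    return 14.0 + math.log10(molarity)

print(ph_after_naoh(1e-3))  # a millimolar dose → 11.0
```

    Even this crude picture shows why small dosing rates can move feedwater meaningfully away from the conditions that favor membrane fouling.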

    The concept is described today in the journal Nature Catalysis and in two other papers by MIT research scientist Amit Kumar, professor of mechanical engineering John H. Lienhard V, and several others. Lienhard is the Jameel Professor of Water and Food and the director of the Abdul Latif Jameel Water and Food Systems Lab.

    “The desalination industry itself uses quite a lot of it,” Kumar says of sodium hydroxide. “They’re buying it, spending money on it. So if you can make it in situ at the plant, that could be a big advantage.” The amount needed in the plants themselves is far less than the total that could be produced from the brine, so there is also potential for it to be a saleable product.

    Sodium hydroxide is not the only product that can be made from the waste brine: Another important chemical used by desalination plants and many other industrial processes is hydrochloric acid, which can also easily be made on site from the waste brine using established chemical processing methods. The chemical can be used for cleaning parts of the desalination plant, but is also widely used in chemical production and as a source of hydrogen.

    Currently, the world produces more than 100 billion liters (about 27 billion gallons) a day of water from desalination, which leaves a similar volume of concentrated brine. Much of that is pumped back out to sea, and current regulations require costly outfall systems to ensure adequate dilution of the salts. Converting the brine can thus be both economically and ecologically beneficial, especially as desalination continues to grow rapidly around the world. “Environmentally safe discharge of brine is manageable with current technology, but it’s much better to recover resources from the brine and reduce the amount of brine released,” Lienhard says.
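
    The rounded figure above checks out with a one-line conversion (a trivial sketch; the 100-billion-liter number is the article's estimate):

```python
# Quick unit check on the quoted volumes. The conversion factor is exact
# for the US liquid gallon (3.785411784 L).
LITERS_PER_US_GALLON = 3.785411784

liters_per_day = 100e9  # ~100 billion liters/day of desalinated water
gallons_per_day = liters_per_day / LITERS_PER_US_GALLON
print(f"{gallons_per_day / 1e9:.1f} billion gallons/day")  # → 26.4 billion gallons/day
```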

    The method of converting the brine into useful products uses well-known and standard chemical processes, including initial nanofiltration to remove undesirable compounds, followed by one or more electrodialysis stages to produce the desired end product. While the processes being suggested are not new, the researchers have analyzed the potential for production of useful chemicals from brine and proposed a specific combination of products and chemical processes that could be turned into commercial operations to enhance the economic viability of the desalination process, while diminishing its environmental impact.

    “This very concentrated brine has to be handled carefully to protect life in the ocean, and it’s a resource waste, and it costs energy to pump it back out to sea,” so turning it into a useful commodity is a win-win, Kumar says. And sodium hydroxide is such a ubiquitous chemical that “every lab at MIT has some,” he says, so finding markets for it should not be difficult.

    The researchers have discussed the concept with companies that may be interested in the next step of building a prototype plant to help work out the real-world economics of the process. “One big challenge is cost — both electricity cost and equipment cost,” at this stage, Kumar says.

    The team also continues to look at the possibility of extracting other, lower-concentration materials from the brine stream, he says, including various metals and other chemicals, which could make the brine processing an even more economically viable undertaking.

    “One aspect that was mentioned … and strongly resonated with me was the proposal for such technologies to support more ‘localized’ or ‘decentralized’ production of these chemicals at the point-of-use,” says Jurg Keller, a professor of water management at the University of Queensland in Australia, who was not involved in this work. “This could have some major energy and cost benefits, since the up-concentration and transport of these chemicals often adds more cost and even higher energy demand than the actual production of these at the concentrations that are typically used.”

    The research team also included MIT postdoc Katherine Phillips and undergraduate Janny Cai, and Uwe Schroder at the University of Braunschweig, in Germany. The work was supported by Cadagua, a subsidiary of Ferrovial, through the MIT Energy Initiative.
