MIT Research News' Journal
 

Friday, June 1st, 2018

    12:30p
    MIT Energy Initiative awards nine Seed Fund grants for early-stage energy research

    In spring 2018, the MIT Energy Initiative (MITEI) awarded nine grants totaling $1,350,000 through its Seed Fund Program, an annual competition that supports early-stage innovative research across the energy spectrum. The awardees will be using the $150,000 grants to explore highly creative and promising energy research projects.

    “This is an extremely competitive process,” said MITEI Director Robert C. Armstrong, the Chevron Professor of Chemical Engineering. “Every year the submissions we receive are incredibly impressive, and this year was no exception. Our grantees are remarkable in their creative, interdisciplinary approaches to addressing key global energy and climate challenges.”

    To date, MITEI has supported 170 projects with grants totaling approximately $22.75 million. These projects have covered a variety of energy research areas, from fundamental physics and chemistry to engineering to policy and economics, and have drawn from all five MIT schools and 28 departments, labs, and centers.

    Seed grant awardees run the gamut from established professors to new faculty members. This year, six of the nine grant recipients are first-time awardees — including four researchers early in their careers at MIT.

    The chemistry of energy

    While research in the lab can be critical to advancing energy technologies, computer simulations are also valuable, serving as an efficient testing ground where new ideas can be explored rapidly and at low risk. Simulations at the atomic level can be especially valuable in discovering new energy materials and in investigating chemical change in energy generation and storage. But the computational cost associated with such “atomistic” simulations can be extremely high — a problem that Professor Rafael Gomez-Bombarelli and his team will be addressing in their project. Gomez-Bombarelli, the Toyota Assistant Professor in Materials Processing, plans to use machine learning to create software that, by leveraging already existing computational results, can accelerate high-accuracy quantum-chemical calculations, reducing the cost incurred.

    “We will use existing computer simulations that took many years of computer time to automatically learn consistent patterns about the behavior of matter in energy processes,” says Gomez-Bombarelli. “This newly gained information will make chemically accurate simulations thousands of times faster and accelerate the predictive design of more efficient and sustainable fuels, photovoltaic materials, solid-state lighting, battery chemicals, and industrial catalysts.”
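    The article does not describe the team's software, but the underlying idea — reuse a bank of expensive, already-computed simulation results to train a fast surrogate model — can be sketched in a few lines. Everything below (the descriptors, the target "energies," the random-feature regression) is an illustrative stand-in, not the group's actual method:

```python
import numpy as np

# Hypothetical illustration: fit a cheap surrogate to a bank of
# expensive, precomputed quantum-chemistry results. The descriptors
# and "energies" below are synthetic stand-ins; the group's actual
# software and representations are not described in the article.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 4))               # molecular descriptors
y = (X ** 2).sum(axis=1) + 0.01 * rng.standard_normal(500)  # target energies

# Ridge regression on random Fourier-style features: the archive is
# used once for training, after which each new prediction is a fast
# matrix product instead of a fresh quantum-chemical calculation.
W = rng.standard_normal((4, 200))
phi = lambda A: np.cos(A @ W)                       # random-feature map
P = phi(X)
coef = np.linalg.solve(P.T @ P + 1e-3 * np.eye(200), P.T @ y)

x_new = rng.uniform(-1, 1, size=(1, 4))             # an unseen molecule
pred = phi(x_new) @ coef                            # instant estimate
```

    Training amortizes the cost of the archive; the per-molecule prediction that follows is what makes "thousands of times faster" plausible in spirit.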

    Karthish Manthiram, an assistant professor of chemical engineering, is approaching energy generation and storage from a different angle. His team is investigating lithium-based materials as electrocatalysts for nitrogen reduction, a key step in the production of ammonia, which is a potential route for storing electrical energy from intermittent renewable sources in a liquid fuel. The intrinsic reactivity of lithium makes it a prime candidate for use in catalysis, potentially beginning a new chapter in liquid fuel creation and energy storage.

    Making a better grid: Batteries and economics

    Betar Gallant, an assistant professor of mechanical engineering, won a seed grant for her team’s research into calcium as a promising anode for low-cost, high-energy-density batteries. Such batteries, if successfully developed, can play critical roles in ensuring stability on a renewables-heavy power grid and also in achieving the electrification of our transportation system. Today, the most common electric-vehicle battery pack on the market is the lithium-ion battery, but improvements in gravimetric and volumetric energy density are needed to achieve longer driving ranges. While widespread efforts have focused on developing the lithium anode to replace the graphite electrode in today’s lithium-ion batteries, lithium metal cycles poorly, is expensive, and raises significant safety concerns. Gallant and others believe there is substantial room for improvement to be made by pursuing alternative metal anodes. Calcium-based batteries possess particularly attractive volumetric energy densities and potentials compared to lithium-based cells and are also safer, less expensive, and potentially more versatile if key challenges can be overcome.

    “This field is very much in its infancy; while the lithium anode has been subject to study for decades, researchers have just begun studying the fundamental behavior of calcium-based electrodes,” Gallant says. “Among the most significant challenges facing calcium electrodes are limited round-trip efficiency and poor cycleability. If these challenges can be overcome, the calcium electrode will be unlocked for use in a wide range of advanced battery chemistries and will open new and exciting avenues for research and development.”

    Jing Li, an incoming MIT Sloan School of Management faculty member, and her team plan to produce a more accurate cost-reduction curve for batteries by developing models based on fundamental materials and underlying science and then estimating them using data on the design, structure, cost, and quantities of batteries used in commercial products on the market. Results should help clarify why battery costs have decreased dramatically in recent years and whether that trend will continue in the future.
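    Li's models are grounded in materials fundamentals, but as a rough illustration of what a "cost-reduction curve" estimate looks like, the classic baseline is Wright's law — cost falling as a power of cumulative production. The production and cost figures below are invented for the sketch:

```python
import numpy as np

# Wright's law: cost = a * (cumulative production) ** b, with b < 0.
# Fitting it is a straight line in log-log space. The numbers below
# are made up; they are not the study's data.
cum_gwh = np.array([1, 2, 5, 12, 30, 80, 200], dtype=float)
usd_per_kwh = np.array([1000, 850, 650, 500, 380, 280, 210], dtype=float)

b, log_a = np.polyfit(np.log(cum_gwh), np.log(usd_per_kwh), 1)
learning_rate = 1 - 2 ** b      # fractional cost drop per doubling
```

    Whether a fitted curve like this keeps extrapolating is exactly the question the project asks: a purely statistical fit cannot say, which is why the team anchors its models in the underlying materials and engineering.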

    Li’s team will also examine what changes in the regulatory structure of electricity markets are needed in light of expanding energy storage capacity. The goal is to understand who should own and operate energy storage units on the grid and the social welfare implications of different options for energy storage ownership. The researchers will model the decision-making strategies of potential owners, including private firms and system operators, to determine possible impacts on market outcomes, including prices, quantities, and costs.

    Deep expertise, new ideas

    Joining those four early-career researchers were several faculty members with long, deep experience in their areas of expertise. First-time seed grant winner Ignacio Pérez-Arriaga, a visiting professor at the MIT Sloan School of Management, is leading a study that combines electricity and economic modeling with policy analysis of renewable portfolio standards and other incentives meant to encourage renewable energy growth in the United States. The goal is to determine the mix of renewable energy generation types that will ensure high reliability in a given state as well as the most cost-effective capacity expansion strategy for renewables, given differing natural resources and energy and environmental regulations across the country.

    Chemistry Professor Tim Swager is also a first-time seed grantee. His team’s research focuses on a new approach to generating polymer membranes with three-dimensional porosity. Such membranes are used in chemical separations to transport ions in fuel cells as well as in processes related to chemical production and water purification. Separations often account for the majority of energy consumed during such processes, so improving their effectiveness is critical. Swager’s group is also focusing on related materials that have great potential for gas separations and on applying new ion-conducting materials to enable chemical and electrochemical transformations.

    Growing long-term innovation

    Seed grants may target early-stage energy research, but MITEI’s hope is that this research will continue and lead to practical solutions to real-world problems. Several past seed fund projects have made progress in that direction since their initial grants.

    For example, 2016 grantee Marta Gonzalez, a visiting associate professor in the Department of Civil and Environmental Engineering, and her team developed an electric-vehicle planning app called Human Mobility, Energy and Autonomy, or HUMEA. As described in a paper published in Nature Energy in April, the app aims to make owning and operating an electric vehicle (EV) in the city easier and less disruptive to the power grid by connecting a network of electric vehicles and optimizing the schedule for when and where they should charge. “Most people begin charging their EV when they get to work and then unplug around 6 p.m. when they leave,” says Gonzalez. “Power operators can’t handle that kind of steep peak. We want to incentivize individuals to bring the trend to an overall flatter demand.” People using the app can create personalized energy profiles that will point out openings in their schedules when they can charge outside of peak times.
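    The scheduling idea behind the app — shift each vehicle's charging hours into the valleys of grid demand rather than piling onto the evening peak — can be illustrated with a toy greedy scheduler. The load profile, fleet, and one-unit-per-car-hour model below are invented for the sketch, not drawn from the paper:

```python
# Toy flattening scheduler: give each car its required charging hours
# in whatever hours are currently least loaded. The hourly load profile
# (MW) and the fleet below are invented for illustration.
base_load = [30, 28, 27, 29, 35, 45, 60, 55, 50, 48, 46, 40]
cars = [3, 2, 4, 2, 3]            # charging hours each vehicle needs

load = list(base_load)
for hours_needed in cars:
    for _ in range(hours_needed):
        h = load.index(min(load))  # least-loaded hour right now
        load[h] += 1               # one unit of demand per car-hour
```

    Because every car-hour lands in the current valley, total demand rises but the peak does not — the flatter profile grid operators want.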

    Funding for the new grants comes chiefly from MITEI’s founding and sustaining members, supplemented by gifts from generous donors.

    Recipients of MITEI Seed Fund grants for spring 2018 are:

    • "3D porosity: Approaches to new generations of polymer membranes" — Tim Swager of the Department of Chemistry;
    • "Carbon capture from chemical processes in the intermediate temperature range" — T. Alan Hatton of the Department of Chemical Engineering and Alexie Kolpak of the Department of Mechanical Engineering;
    • "Deep learning of contracted basis sets for rapid quantum calculation of thermochemistry and other energy processes" — Rafael Gomez-Bombarelli of the Department of Materials Science and Engineering;
    • "Economics of energy storage" — Jing Li of the MIT Energy Initiative;
    • "Effective capacity expansion of renewable electricity with mosaic design of state energy and environmental regulations in the United States" — Ignacio Pérez-Arriaga of the MIT Sloan School of Management;
    • "Electrochemical ammonia synthesis for modular electrical energy storage" — Karthish Manthiram of the Department of Chemical Engineering;
    • "Oxidative coupling of methane using ion-conducting ceramic membranes" — Ahmed Ghoniem of the Department of Mechanical Engineering and Bilge Yildiz of the Department of Nuclear Science and Engineering;
    • "Scalable nanoporous membranes for energy-efficient chemical separations" — Jeffrey Grossman of the Department of Materials Science and Engineering; and
    • "Unlocking the rechargeability of calcium for high-energy-density batteries" — Betar Gallant of the Department of Mechanical Engineering.
    2:00p
    AI-based method could speed development of specialized nanoparticles

    A new technique developed by MIT physicists could someday provide a way to custom-design multilayered nanoparticles with desired properties, potentially for use in displays, cloaking systems, or biomedical devices. It may also help physicists tackle a variety of thorny research problems, in ways that could in some cases be orders of magnitude faster than existing methods.

    The innovation uses computational neural networks, a form of artificial intelligence, to “learn” how a nanoparticle’s structure affects its behavior, in this case the way it scatters different colors of light, based on thousands of training examples. Then, having learned the relationship, the program can essentially be run backward to design a particle with a desired set of light-scattering properties — a process called inverse design.

    The findings are being reported in the journal Science Advances, in a paper by MIT senior John Peurifoy, research affiliate Yichen Shen, graduate student Li Jing, professor of physics Marin Soljačić, and five others.

    While the approach could ultimately lead to practical applications, Soljačić says, the work is primarily of scientific interest as a way of predicting the physical properties of a variety of nanoengineered materials without requiring the computationally intensive simulation processes that are typically used to tackle such problems.

    Soljačić says that the goal was to look at neural networks, a field that has seen a lot of progress and generated excitement in recent years, to see “whether we can use some of those techniques in order to help us in our physics research. So basically, are computers ‘intelligent’ enough so that they can do some more intelligent tasks in helping us understand and work with some physical systems?”

    To test the idea, they used a relatively simple physical system, Shen explains. “In order to understand which techniques are suitable and to understand the limits and how to best use them, we [used the neural network] on one particular system for nanophotonics, a system of spherically concentric nanoparticles.” The nanoparticles are layered like an onion, but each layer is made of a different material and has a different thickness.

    The nanoparticles have sizes comparable to the wavelengths of visible light or smaller, and the way light of different colors scatters off these particles depends on the details of the layers and on the wavelength of the incoming beam. Calculating all these effects can be an intensive computational task, and the complexity gets worse as the number of layers grows.

    The researchers wanted to see if the neural network would be able to predict the way a new particle would scatter colors of light — not just by interpolating between known examples, but by actually figuring out some underlying pattern that allows the neural network to extrapolate.

    “The simulations are very exact, so when you compare these with experiments they all reproduce each other point by point,” says Peurifoy, who will be an MIT doctoral student next year. “But they are numerically quite intensive, so it takes quite some time. What we want to see here is, if we show a bunch of examples of these particles, many many different particles, to a neural network, whether the neural network can develop ‘intuition’ for it.”

    Sure enough, the neural network was able to predict reasonably well the exact pattern of a graph of light scattering versus wavelength — not perfectly, but very close, and in much less time. The neural network simulations “now are much faster than the exact simulations,” Jing says. “So now you could use a neural network instead of a real simulation, and it would give you a fairly accurate prediction. But it came with a price, and the price was that we had to first train the neural network, and in order to do that we had to produce a large number of examples.”
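    A toy version of that forward surrogate — in the spirit of the paper, but with a made-up analytic function standing in for the exact electromagnetic solver, and a deliberately tiny network — might look like:

```python
import numpy as np

# Toy forward surrogate: a small network learns to map layer
# thicknesses to a scattering spectrum. The "simulator" below is a
# made-up analytic stand-in; the actual work trains on exact
# electromagnetic simulations.
rng = np.random.default_rng(1)
n_layers, n_wavelengths = 5, 50
wavelengths = np.linspace(0, 1, n_wavelengths)

def fake_spectrum(t):
    # cheap stand-in for the expensive exact scattering calculation
    return np.cos(np.outer(t, 10 * wavelengths)).sum(axis=0) / len(t)

T = rng.uniform(0.3, 0.7, size=(2000, n_layers))   # training designs
S = np.array([fake_spectrum(t) for t in T])        # "simulated" spectra

# one hidden layer, trained by plain gradient descent on squared error
W1 = rng.standard_normal((n_layers, 64)) * 0.3
W2 = rng.standard_normal((64, n_wavelengths)) * 0.3
for _ in range(300):
    H = np.tanh(T @ W1)
    err = H @ W2 - S
    W2 -= 1e-3 * H.T @ err / len(T)
    W1 -= 1e-3 * T.T @ ((err @ W2.T) * (1 - H ** 2)) / len(T)

# a trained prediction is two matrix products -- far cheaper than
# rerunning the simulation for every new design
pred = np.tanh(T[:1] @ W1) @ W2
```

    The expensive part, generating the training spectra, is the "price" Jing describes; every prediction afterward amortizes it.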

    Once the network is trained, though, any future simulations would get the full benefit of the speedup, so it could be a useful tool for situations requiring repeated simulations. But the real goal of the project was to learn about the methodology, not just this particular application. “One of the main reasons why we were interested in this particular system was for us to understand these techniques, rather than just to simulate nanoparticles,” Soljačić says.

    The next step was to essentially run the program in reverse, to use a set of desired scattering properties as the starting point and see if the neural network could then work out the exact combination of nanoparticle layers needed to achieve that output.

    “In engineering, many different techniques have been developed for inverse design, and it is a huge field of research,” Soljačić says. “But very often in order to set up a given inverse design problem, it takes quite some time, so in many cases you have to be an expert in the field and then spend sometimes even months setting it up in order to solve it.”

    But with the team’s trained neural network, “we didn't do any special preparation for this. We said, ‘ok, let’s try to run it backward.’ And amazingly enough, when we compare it with some other more standard inverse design methods, this is one of the best ones,” he says. “It will actually do it much quicker than a traditional inverse design.”
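    "Running it backward" is commonly implemented by freezing the trained weights and applying gradient descent to the input itself; a minimal sketch under that assumption, with random weights standing in for a trained forward model, looks like:

```python
import numpy as np

# Sketch of inverse design by input-space gradient descent: freeze the
# "trained" weights (random stand-ins here) and descend on the layer
# thicknesses until the predicted spectrum matches a target.
rng = np.random.default_rng(2)
W1 = rng.standard_normal((5, 64)) * 0.3
W2 = rng.standard_normal((64, 50)) * 0.3

def forward(t):
    return np.tanh(t @ W1) @ W2

# pick a target spectrum that some design can actually achieve
target = forward(rng.uniform(0.3, 0.7, size=(1, 5)))

t = np.full((1, 5), 0.5)          # initial guess for the design
loss0 = float(((forward(t) - target) ** 2).mean())
for _ in range(2000):
    H = np.tanh(t @ W1)
    err = H @ W2 - target
    grad = ((err @ W2.T) * (1 - H ** 2)) @ W1.T   # dLoss/dInput
    t -= 0.002 * grad
loss = float(((forward(t) - target) ** 2).mean())
```

    No problem-specific setup is required beyond the already-trained network, which is the point Soljačić makes about skipping the months of expert formulation.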

    Co-author Shen says “the initial motivation we had to do this was to set up a general toolbox that any generally well-educated person who isn’t an expert in photonics can use. … That was our original motivation, and it clearly works pretty well for this particular case.”

    The speedup in certain kinds of inverse design simulations can be quite significant. Peurifoy says “It's difficult to have apples-to-apples exact comparisons, but you can effectively say that you have gains on the order of hundreds of times. So the gain is very very substantial — in some cases it goes from days down to minutes.”

    The research was supported by the National Science Foundation, the Semiconductor Research Corporation, and the U.S. Army Research Office through the Institute for Soldier Nanotechnologies. Other people involved in the work are: Yi Yang, Fidel Cano-Renteria, John D. Joannopoulos, and Max Tegmark, all from MIT; and Brendan G. Delacy from U.S. Army Edgewood Chemical Biological Center.

    4:25p
    Revolutionizing everyday products with artificial intelligence

    “Who is Bram Stoker?” Those four words demonstrated the amazing potential of artificial intelligence. They were the answer to the final question in a particularly memorable 2011 episode of Jeopardy!. The three competitors were former champions Brad Rutter and Ken Jennings, and Watson, a supercomputer developed by IBM. By answering the final question correctly, Watson became the first computer to beat a human on the famous quiz show.

    “In a way, Watson winning Jeopardy! seemed unfair to people,” says Jeehwan Kim, the Class of 1947 Career Development Professor and a faculty member in the MIT departments of Mechanical Engineering and Materials Science and Engineering. “At the time, Watson was connected to a supercomputer the size of a room, while the human brain is just a few pounds. But replicating the human brain’s ability to learn is incredibly difficult.”

    Kim specializes in machine learning, which relies on algorithms to teach computers how to learn like a human brain. “Machine learning is cognitive computing,” he explains. “Your computer recognizes things without you telling the computer what it’s looking at.”

    Machine learning is one example of artificial intelligence in practice. While the phrase “machine learning” often conjures up science fiction typified in shows like "Westworld" or "Battlestar Galactica," smart systems and devices are already woven into the fabric of our daily lives. Computers and phones use face recognition to unlock. Systems sense and adjust the temperature in our homes. Devices answer questions or play our favorite music on demand. Nearly every major car company has entered the race to develop a safe self-driving car.

    For any of these products to work, the software and hardware both have to work in perfect synchrony. Cameras, tactile sensors, radar, and light detection all need to function properly to feed information back to computers. Algorithms need to be designed so these machines can process these sensory data and make decisions based on the highest probability of success.

    Kim and much of the faculty in MIT’s Department of Mechanical Engineering are creating new software that connects with hardware to create intelligent devices. Rather than building the sentient robots romanticized in popular culture, these researchers are working on projects that improve everyday life and make humans safer, more efficient, and better informed.

    Making portable devices smarter

    Jeehwan Kim holds up a sheet of paper. If he and his team are successful, one day the power of a supercomputer like IBM’s Watson will be shrunk down to the size of one sheet of paper. “We are trying to build an actual physical neural network on a letter paper size,” explains Kim.

    To date, most neural networks have been software-based and built on the conventional von Neumann computing architecture. Kim, however, has been using neuromorphic computing methods.

    “Neuromorphic computer means portable AI,” says Kim. “So, you build artificial neurons and synapses on a small-scale wafer.” The result is a so-called ‘brain-on-a-chip.’

    Rather than compute information from binary signaling, Kim’s neural network processes information like an analog device. Signals act like artificial neurons and move across thousands of arrays to particular cross points, which function like synapses. With thousands of arrays connected, vast amounts of information could be processed at once. For the first time, a portable piece of equipment could mimic the processing power of the brain.
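    The analog computation Kim describes can be mimicked digitally to see why a crossbar is so efficient: input voltages drive the rows, each cross-point's conductance scales the current it passes (Ohm's law), and each column wire sums those currents (Kirchhoff's current law) — a full matrix-vector product in one physical step. The array size and values below are arbitrary:

```python
import numpy as np

# Digital mimic of a synaptic crossbar array. Each of the 1000 x 256
# cross-points holds a conductance; driving the rows with voltages and
# reading the column currents performs every multiply-accumulate at once.
rng = np.random.default_rng(3)
G = rng.uniform(0.0, 1.0, size=(1000, 256))    # synapse conductances
v = rng.uniform(0.0, 0.2, size=1000)           # row input voltages

i_out = v @ G      # column currents: 256,000 multiply-accumulates

# Device-to-device variability perturbs each conductance; the silicon-
# germanium synapses in Kim's study held this to about 1 percent.
G_real = G * (1 + 0.01 * rng.standard_normal(G.shape))
i_real = v @ G_real
```

    With 1 percent per-device variability, the summed column currents barely move — which is why tight control over the synapses matters so much.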

    “The key with this method is you really need to control the artificial synapses well. When you’re talking about thousands of cross points, this poses challenges,” says Kim.

    According to Kim, the design and materials that have been used to make these artificial synapses thus far have been less than ideal. The amorphous materials used in neuromorphic chips make it incredibly difficult to control the ions once voltage is applied.

    In a Nature Materials study published earlier this year, Kim found that when his team made a chip out of silicon germanium they were able to control the current flowing out of the synapse and reduce variability to 1 percent. With control over how the synapses react to stimuli, it was time to put their chip to the test.

    “We envision that if we build up the actual neural network with material we can actually do handwriting recognition,” says Kim. In a computer simulation of their new artificial neural network design, they provided thousands of handwriting samples. Their neural network was able to accurately recognize 95 percent of the samples.

    “If you have a camera and an algorithm for the handwriting data set connected to our neural network, you can achieve handwriting recognition,” explains Kim.

    While building the physical neural network for handwriting recognition is the next step for Kim’s team, the potential of this new technology goes well beyond that single task. “Shrinking the power of a supercomputer down to a portable size could revolutionize the products we use,” says Kim. “The potential is limitless – we can integrate this technology in our phones, computers, and robots to make them substantially smarter.”

    Making homes smarter

    While Kim is working on making our portable products more intelligent, Professor Sanjay Sarma and Research Scientist Josh Siegel hope to integrate smart devices within the biggest product we own: our homes. 

    One evening, Sarma was in his home when one of his circuit breakers kept going off. This circuit breaker — known as an arc-fault circuit interrupter (AFCI) — was designed to shut off power when an electric arc is detected to prevent fires. While AFCIs are great at preventing fires, in Sarma’s case there didn’t seem to be an issue. “There was no discernible reason for it to keep going off,” recalls Sarma. “It was incredibly distracting.”

    AFCIs are notorious for such ‘nuisance trips,’ which disconnect safe objects unnecessarily. Sarma, who also serves as MIT's vice president for open learning, turned his frustration into opportunity. If he could embed the AFCI with smart technologies and connect it to the ‘internet of things,’ he could teach the circuit breaker to learn when a product is safe or when a product actually poses a fire risk.

    “Think of it like a virus scanner,” explains Siegel. “Virus scanners are connected to a system that updates them with new virus definitions over time.” If Sarma and Siegel could embed similar technology into AFCIs, the circuit breakers could detect exactly what product is being plugged in and learn new object definitions over time.

    If, for example, a new vacuum cleaner is plugged into the circuit breaker and the power shuts off without reason, the smart AFCI can learn that it’s safe and add it to a list of known safe objects. The AFCI learns these definitions with the aid of a neural network. But, unlike Jeehwan Kim’s physical neural network, this network is software-based.

    The neural network is built by gathering thousands of data points during simulations of arcing. Algorithms are then written to help the network assess its environment, recognize patterns, and make decisions based on the probability of achieving the desired outcome. With the help of a $35 microcomputer and a sound card, the team can cheaply integrate this technology into circuit breakers.
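    A minimal sketch of that pipeline — synthetic "sound card" traces, a two-number spectral feature, and a single logistic unit standing in for the neural network — assuming, as the article implies but does not specify, that arcing shows up as broadband high-frequency noise on top of the 60 Hz load current:

```python
import numpy as np

rng = np.random.default_rng(4)

def trace(arcing):
    # Synthetic capture: a 60 Hz load current, plus broadband noise
    # when an arc is present. Real arcing data would replace this.
    t = np.linspace(0, 1, 1024, endpoint=False)
    s = np.sin(2 * np.pi * 60 * t)
    if arcing:
        s = s + 0.5 * rng.standard_normal(1024)
    return s

def features(signal):
    spec = np.abs(np.fft.rfft(signal))
    return np.array([spec[:100].sum(), spec[100:].sum()])  # low/high band

X = np.array([features(trace(i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)], dtype=float)
X = (X - X.mean(axis=0)) / X.std(axis=0)       # normalize features

# single logistic unit trained by gradient descent on labeled traces
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

acc = float((((X @ w + b) > 0) == (y == 1)).mean())
```

    Everything here fits comfortably on a $35 microcomputer with a sound card, which is the point: the intelligence is cheap once the definitions are learned.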

    As the smart AFCI learns about the devices it encounters, it can simultaneously distribute its knowledge and definitions to every other home using the internet of things.

    “Internet of things could just as well be called ‘intelligence of things,’” says Sarma. “Smart, local technologies with the aid of the cloud can make our environments adaptive and the user experience seamless.”

    Circuit breakers are just one of many ways neural networks can be used to make homes smarter. This kind of technology can control the temperature of your house, detect when there’s an anomaly such as an intrusion or burst pipe, and run diagnostics to see when things are in need of repair.

    “We’re developing software for monitoring mechanical systems that’s self-learned,” explains Siegel. “You don’t teach these devices all the rules, you teach them how to learn the rules.”

    Making manufacturing and design smarter

    Artificial intelligence can not only improve how users interact with products, devices, and environments; it can also improve the efficiency with which objects are made by optimizing the manufacturing and design process.

    “Growth in automation along with complementary technologies including 3-D printing, AI, and machine learning compels us to, in the long run, rethink how we design factories and supply chains,” says Associate Professor A. John Hart.

    Hart, who has done extensive research in 3-D printing, sees AI as a way to improve quality assurance in manufacturing. 3-D printers incorporating high-performance sensors capable of analyzing data on the fly will help accelerate the adoption of 3-D printing for mass production.

    “Having 3-D printers that learn how to create parts with fewer defects and inspect parts as they make them will be a really big deal — especially when the products you’re making, such as medical devices or parts for aircraft engines, have critical properties,” Hart explains.

    The very process of designing the structure of these parts can also benefit from intelligent software. Associate Professor Maria Yang has been looking at how designers can use automation tools to design more efficiently. “We call it hybrid intelligence for design,” says Yang. “The goal is to enable effective collaboration between intelligent tools and human designers.”

    In a recent study, Yang and graduate student Edward Burnell tested a design tool with varying levels of automation. Participants used the software to pick nodes for a 2-D truss of either a stop sign or a bridge. The tool would then automatically come up with optimized solutions based on intelligent algorithms for where to connect nodes and the width of each part.

    “We’re trying to design smart algorithms that fit with the ways designers already think,” says Burnell.
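    The study's tool is not public, but one classic automated-sizing step of the kind it gestures at — "fully stressed design," where each truss member is given just enough cross-section for the axial force it carries — fits in a few lines. The forces and allowable stress below are invented for illustration:

```python
# Fully stressed sizing: area = |axial force| / allowable stress.
# Member forces (N) would come from a truss analysis of the nodes the
# human designer picked; these numbers are made up.
member_forces_n = [1200.0, -800.0, 450.0]   # tension +, compression -
sigma_allow_pa = 2.5e8                      # allowable stress (Pa)
areas_m2 = [abs(f) / sigma_allow_pa for f in member_forces_n]
```

    In a hybrid workflow, the human chooses the topology and the algorithm handles rote sizing like this, which is the division of labor Yang and Burnell describe.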

    Making robots smarter

    If there is anything on MIT’s campus that most closely resembles the futuristic robots of science fiction, it would be Professor Sangbae Kim’s robotic cheetah. The four-legged creature senses its surrounding environment using LIDAR technologies and moves in response to this information. Much like its namesake, it can run and leap over obstacles. 

    Kim’s primary focus is on navigation. “We are building a very unique system specially designed for dynamic movement of the robot,” explains Kim. “I believe it is going to reshape the interactive robots in the world. You can think of all kinds of applications — medical, health care, factories.”

    Kim sees an opportunity to eventually connect his research with the physical neural network his colleague Jeehwan Kim is working on. “If you want the cheetah to recognize people, voice, or gestures, you need a lot of learning and processing,” he says. “Jeehwan’s neural network hardware could possibly enable that someday.”

    Combining the power of a portable neural network with a robot capable of skillfully navigating its surroundings could open up a new world of possibilities for human-AI interaction. This is just one example of how researchers in mechanical engineering may one day collaborate to bring AI research to the next level.

    While we may be decades away from interacting with intelligent robots, artificial intelligence and machine learning have already found their way into our routines. Whether it’s using face and handwriting recognition to protect our information, tapping into the internet of things to keep our homes safe, or helping engineers build and design more efficiently, the benefits of AI technologies are pervasive.

    The science fiction fantasy of a world overtaken by robots is far from the truth. “There’s this romantic notion that everything is going to be automatic,” adds Maria Yang. “But I think the reality is you’re going to have tools that will work with people and help make their daily life a bit easier.”

