MIT Research News' Journal
Thursday, December 1st, 2016
11:30a
Lincoln Laboratory's supercomputing system ranked most powerful in New England
The new TX-Green computing system at the MIT Lincoln Laboratory Supercomputing Center (LLSC) has been named the most powerful supercomputer in New England, the 43rd most powerful in the U.S., and the 106th most powerful in the world. The TOP500 project ranks the world's 500 most powerful supercomputers twice a year. Systems are ranked by the LINPACK benchmark, a measure of floating-point computing power: how fast a computer solves a dense system of linear equations.
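To make concrete what the benchmark measures, here is a minimal Python sketch in the same spirit (a toy NumPy approximation, not the official HPL benchmark code): it times a dense linear solve and converts the elapsed time into floating-point operations per second, using the standard 2/3 · n³ operation count for an LU-based solver.

```python
import time
import numpy as np

def linpack_style_gflops(n=4096):
    """Time a dense solve of Ax = b and estimate GFLOP/s.

    LU factorization costs roughly (2/3) * n**3 floating-point
    operations, the same count the LINPACK benchmark assumes.
    """
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    np.linalg.solve(a, b)      # dense LU solve via LAPACK
    elapsed = time.perf_counter() - start

    return (2.0 / 3.0) * n**3 / elapsed / 1e9

print(f"~{linpack_style_gflops():.1f} GFLOP/s on this machine")
```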
Established in early 2016, the LLSC was developed to enhance computing power and accessibility for more than 1,000 researchers across the laboratory. The LLSC uses interactive supercomputing to augment the processing power of desktop systems to process large sets of sensor data, create high-fidelity simulations, and develop new algorithms. Located in Holyoke, Massachusetts, the new system is the only zero-carbon supercomputer on the TOP500 list; it uses energy from a mixture of hydroelectric, wind, solar, and nuclear sources.
In November, Dell EMC installed a new petaflop-scale system, which consists of 41,472 Intel processor cores and can perform 10^15 operations per second. Compared with the LLSC's previous technology, the new system provides 6 times more processing power and 20 times more bandwidth. It enables work across many of the laboratory's research areas, such as space observation, robotic vehicles, communications, cybersecurity, machine learning, sensor processing, electronic devices, bioinformatics, and air traffic control.
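As a back-of-the-envelope check on those figures (an illustrative calculation only, not laboratory-published numbers), dividing one petaflop across the 41,472 cores gives the per-core throughput:

```python
# 1 petaflop = 10**15 floating-point operations per second
total_flops = 1e15
cores = 41_472

per_core_gflops = total_flops / cores / 1e9
print(f"~{per_core_gflops:.1f} GFLOP/s per core")  # roughly 24 GFLOP/s
```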
The LLSC mission is to address supercomputing needs, develop new supercomputing capabilities and technologies, and collaborate with MIT campus supercomputing initiatives. "The LLSC vision is to enable the brilliant scientists and engineers at Lincoln Laboratory to analyze and process enormous amounts of information with complex algorithms," says Jeremy Kepner, Lincoln Laboratory Fellow and head of the LLSC. "Our new system is one of the largest on the East Coast and is specifically focused on enabling new research in machine learning, advanced physical devices, and autonomous systems."
Because the new processors are similar to the prototypes developed at the laboratory more than two decades ago, the new petaflop system is compatible with all existing LLSC software. "We have had many years to prepare our computing system for this kind of processor," Kepner says. "This new system is essentially a plug-and-play solution."
After establishing the Supercomputing Center and one of the top systems in the world, the LLSC team will continue to upgrade and expand supercomputing at the laboratory. Says Kepner: "Our hope is that this system is the first of many such large-scale systems at Lincoln Laboratory."
12:00p
Bold research visions recognized and rewarded
Since 2013, the Professor Amar G. Bose Research Grant has been supporting MIT faculty with big, bold, and unconventional research visions. In the latest round of grants, four proposals from six MIT faculty members — Angela Belcher, Betar Gallant, Amy Keating, Karl Berggren, Domitilla Del Vecchio, and Ron Weiss — were awarded from more than 100 project submissions. The researchers aim to make groundbreaking advances in the areas of environmental bioremediation, cell reprogramming, new electrochemical reactions, and protein nanofabrication.
The researchers were honored at a Nov. 21 reception featuring past and current awardees, hosted by MIT President L. Rafael Reif.
Bose Grants support innovative projects that are unlikely to receive funding through traditional means but that offer fellows an exciting opportunity for exploration likely to benefit their fields of research. Grants provide up to $500,000 over three years for each selected project.
The grant program celebrates the legacy of the late Amar Bose, a longtime member of the MIT faculty and the founder of Bose Corporation, well known for his visionary and intellectually adventurous career. “My father would be very happy with the innovation and freedom of exploration that these grants have made possible as it was exactly what he was all about,” said his son Vanu Bose ’88, SM ’94, PhD ’99, at the reception. “The awards acknowledge the spirit of insatiable curiosity that my father embraced.”
“Through the Bose Research Grant program, which is now in its fourth year, we have a unique community of individuals synonymous with learning, teaching, exploration, and opportunity,” said President Reif. “MIT is about making a better world, and I cannot think of a better example of this than what the Bose research fellows and scholars are doing at MIT today.”
Toxin-eating yeast
“Our plan is to develop environmentally friendly, on-demand biological systems for cleaning up the environment,” says Angela Belcher, who is the James Mason Crafts Professor in biological engineering and materials science and engineering, and a member of the Koch Institute for Integrative Cancer Research. Her idea proposes using the humble yeast cell to act as a multifunctional, even programmable, bioremediation agent to clean up heavy metals and other environmental contaminants.
Belcher plans to design new yeast strains carrying genes from other organisms that have a natural inclination to ingest heavy metals and other toxins. By altering the yeast's genes, Belcher can selectively program which genes to turn on and off. Much like commercially available yeast products, Belcher's multifunctional yeast would be manufactured, packaged, stored, and shipped to environmentally affected areas as needed. "Our goal is to provide on-demand yeast products that can be used to clean up waste sites — from sources such as mining, manufacturing, agricultural runoff, and chemical disasters — that are easily used, recovered, and disposed of safely," she adds.
Belcher has a successful history in redirecting natural biological processes for new purposes. Her team has repurposed natural biological agents to develop solar cells and battery technology. “It’s about natural evolution, which we are very good at, and getting biology to work with a new toolkit,” she says. “This grant allows us to take our expertise into a different direction, which is remediation.”
Reprogramming a cell’s fate
Using today’s technologies, researchers can reprogram cells of the body into stem cells capable of becoming any cell type. However, massive amounts of biochemical factors are required to force the change. And even when this reprogramming happens, less than 1 percent of the original cells actually make the full transformation to a stem cell. For more than 10 years, these issues have plagued practical applications of these induced stem cells in medicine.
Domitilla Del Vecchio, associate professor in the Department of Mechanical Engineering, and Ron Weiss, a professor in the departments of Biological Engineering and Electrical Engineering and Computer Science, propose a new technology that may offer substantially higher transformation efficiency with smaller amounts of factors required. "Our project is about changing how the reprogramming process works," says Del Vecchio. The pair proposes a feedback strategy whereby the cell itself adds the needed factors at different times along the transformation process. "We want to make a genetic circuit that can be inserted into the cell so that the cell automatically adjusts the level of needed factors," she explains.
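One way to picture that feedback strategy is as a controller that senses the current factor level and throttles production toward a set point. The toy simulation below is purely schematic (the article does not describe the actual circuit design, and every parameter here is invented for illustration):

```python
# Toy negative feedback: production falls as the factor level
# approaches the target, mimicking a circuit that lets the cell
# adjust factor levels on its own. All values are illustrative.
target = 1.0          # desired factor concentration (arbitrary units)
level = 0.0
k_prod, k_deg, dt = 0.5, 0.1, 0.1

for _ in range(200):
    production = k_prod * max(target - level, 0.0)   # feedback term
    level += (production - k_deg * level) * dt       # simple kinetics

# Settles at k_prod * target / (k_prod + k_deg), about 0.83 here.
print(f"steady-state factor level = {level:.2f}")
```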
Adds Weiss, “Our approach is to push the cells to make the protein factors needed both to induce the change and to override the cells’ natural resistance, which essentially fights the change.”
Harnessing the power of controlled explosions
Betar M. Gallant, the Esther and Harold E. Edgerton Career Development Assistant Professor in the Department of Mechanical Engineering, wants to build better energy systems. Her grant-winning proposal looks to create new dissolved-gas electrochemical reactions to provide substantially higher cell voltages and energy compared with current technology. A key feature of this project will be to learn to control high-potential, complex, multistep reactions.
“I want to develop tools that open up new reaction chemistries,” says Gallant. “The word ‘explosion’ conjures visions of poorly controlled and violent processes, yet electrochemistry provides us a unique handle to manipulate reactions by tuning all aspects of the reaction microenvironment. If we can learn how to design a stable and robust cell around a target reaction, we can harness that energy as electrical work.”
She is looking at designs involving a variety of gases whose theoretical reactions involve multiple electrons at high potentials. “This could be a great springboard to look at reactions that are starting to be conceived to push the limits of electrochemistry,” she says, adding that dissolved-gas reactions are largely unexplored. Looking ahead, she believes this approach could open up new pathways and new concepts in the design of chemistries, reactors, and processes for both stationary and portable power delivery.
A tool kit for novel protein nanofabrication
“We are looking for ways to combine two different fields — protein engineering and nanofabrication — to build a tool kit for arranging biomolecules in new ways on physical interfaces,” says Amy Keating, a professor of biology, explaining her project work with Karl Berggren, a professor of electrical engineering.
Keating, whose work centers on how proteins interact and function, will partner with Berggren, a nanofabrication and electrical engineering scientist, to explore new technologies for combining proteins with advanced silicon device surfaces. “We are hoping to find new ways of building very small-scale biological molecule complexes on surfaces,” she says. While living organisms naturally organize proteins and DNA into intricate pathways and complexes for a variety of functions, few engineering solutions are available now to provide that kind of design complexity.
Berggren and Keating hope to create giant biomolecular systems with the complexity of integrated circuits by combining their expertise in custom protein design and nanofabrication. “We are looking at ways of making scaffolds that we can attach more complex molecules to, like sensors, or for driving biological interactions,” says Berggren. Though they are not focused on a particular application, the researchers imagine possible uses for this work in the life sciences, materials science, and computing.
1:00p
How the brain recognizes faces
MIT researchers and their colleagues have developed a new computational model of the human brain’s face-recognition mechanism that seems to capture aspects of human neurology that previous models have missed.
The researchers designed a machine-learning system that implemented their model, and they trained it to recognize particular faces by feeding it a battery of sample images. They found that the trained system included an intermediate processing step that represented a face’s degree of rotation — say, 45 degrees from center — but not the direction — left or right.
This property wasn’t built into the system; it emerged spontaneously from the training process. But it duplicates an experimentally observed feature of the primate face-processing mechanism. The researchers consider this an indication that their system and the brain are doing something similar.
“This is not a proof that we understand what’s going on,” says Tomaso Poggio, a professor of brain and cognitive sciences at MIT and director of the Center for Brains, Minds, and Machines (CBMM), a multi-institution research consortium funded by the National Science Foundation and headquartered at MIT. “Models are kind of cartoons of reality, especially in biology. So I would be surprised if things turn out to be this simple. But I think it’s strong evidence that we are on the right track.”
Indeed, the researchers’ new paper includes a mathematical proof that the particular type of machine-learning system they use, which was intended to offer what Poggio calls a “biologically plausible” model of the nervous system, will inevitably yield intermediary representations that are indifferent to angle of rotation.
Poggio, who is also a principal investigator at MIT’s McGovern Institute for Brain Research, is the senior author on a paper describing the new work, which appeared today in the journal Computational Biology. He’s joined on the paper by several other members of both the CBMM and the McGovern Institute: first author Joel Leibo, a researcher at Google DeepMind, who earned his PhD in brain and cognitive sciences from MIT with Poggio as his advisor; Qianli Liao, an MIT graduate student in electrical engineering and computer science; Fabio Anselmi, a postdoc in the IIT@MIT Laboratory for Computational and Statistical Learning, a joint venture of MIT and the Italian Institute of Technology; and Winrich Freiwald, an associate professor at the Rockefeller University.
Emergent properties
The new paper is “a nice illustration of what we want to do in [CBMM], which is this integration of machine learning and computer science on one hand, neurophysiology on the other, and aspects of human behavior,” Poggio says. “That means not only what algorithms does the brain use, but what are the circuits in the brain that implement these algorithms.”
Poggio has long believed that the brain must produce “invariant” representations of faces and other objects, meaning representations that are indifferent to objects’ orientation in space, their distance from the viewer, or their location in the visual field. Magnetic resonance scans of human and monkey brains suggested as much, but in 2010, Freiwald published a study describing the neuroanatomy of macaque monkeys’ face-recognition mechanism in much greater detail.
Freiwald showed that information from the monkey’s optic nerves passes through a series of brain locations, each of which is less sensitive to face orientation than the last. Neurons in the first region fire only in response to particular face orientations; neurons in the final region fire regardless of the face’s orientation — an invariant representation.
But neurons in an intermediate region appear to be “mirror symmetric”: That is, they’re sensitive to the angle of face rotation without respect to direction. In the first region, one cluster of neurons will fire if a face is rotated 45 degrees to the left, and a different cluster will fire if it’s rotated 45 degrees to the right. In the final region, the same cluster of neurons will fire whether the face is rotated 30 degrees, 45 degrees, 90 degrees, or anywhere in between. But in the intermediate region, a particular cluster of neurons will fire if the face is rotated by 45 degrees in either direction, another if it’s rotated 30 degrees, and so on.
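The three response profiles can be caricatured as tuning functions of head orientation theta. This is a schematic sketch only, not the recorded neural data; the Gaussian shapes and widths are assumptions for illustration:

```python
import numpy as np

def early_response(theta, preferred=45.0, width=10.0):
    """First region: tuned to one signed orientation (e.g. +45 deg)."""
    return np.exp(-((theta - preferred) ** 2) / (2 * width ** 2))

def intermediate_response(theta, preferred=45.0, width=10.0):
    """Intermediate region: mirror symmetric, depends only on |theta|,
    so -45 deg and +45 deg drive the same cells equally."""
    return np.exp(-((abs(theta) - preferred) ** 2) / (2 * width ** 2))

def final_response(theta):
    """Final region: invariant, fires regardless of orientation."""
    return 1.0

for th in (-45.0, 45.0):
    print(th, round(early_response(th), 3), intermediate_response(th))
# early_response differs for -45 vs +45; intermediate_response does not.
```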
This is the behavior that the researchers’ machine-learning system reproduced. “It was not a model that was trying to explain mirror symmetry,” Poggio says. “This model was trying to explain invariance, and in the process, there is this other property that pops out.”
Neural training
The researchers’ machine-learning system is a neural network, so called because it roughly approximates the architecture of the human brain. A neural network consists of very simple processing units, arranged into layers, that are densely connected to the processing units — or nodes — in the layers above and below. Data are fed into the bottom layer of the network, which processes them in some way and feeds them to the next layer, and so on. During training, the output of the top layer is correlated with some classification criterion — say, correctly determining whether a given image depicts a particular person.
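A minimal sketch of that layered pipeline, in generic NumPy (not the authors' actual model; the layer sizes and tanh nonlinearity are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three dense layers: input -> hidden -> hidden -> top.
shapes = [(64, 32), (32, 16), (16, 1)]
weights = [rng.standard_normal(s) * 0.1 for s in shapes]

def forward(x):
    """Feed activations upward through the stack of layers."""
    for w in weights:
        x = np.tanh(x @ w)          # each layer transforms and passes up
    return x

x = rng.standard_normal(64)         # e.g. a flattened image patch
score = forward(x)                  # top-layer output; during training it
print(score)                        # is compared against a class label
```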
In earlier work, Poggio’s group had trained neural networks to produce invariant representations by, essentially, memorizing a representative set of orientations for just a handful of faces, which Poggio calls “templates.” When the network was presented with a new face, it would measure the face’s difference from these templates. That difference would be smallest for the templates whose orientations were the same as that of the new face, and the output of their associated nodes would end up dominating the information signal by the time it reached the top layer. The pattern of measured differences between the new face and the stored faces gives the new face a kind of identifying signature.
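A minimal sketch of that template scheme, with random vectors standing in for face images (hypothetical NumPy code; the real system worked on image data):

```python
import numpy as np

rng = np.random.default_rng(1)

# templates[f, o]: stored face f at orientation o, as a flat vector.
n_faces, n_orients, dim = 3, 8, 64
templates = rng.standard_normal((n_faces, n_orients, dim))

def signature(new_face):
    """Distance from a new face to every stored template; keeping the
    best-matching orientation per template face yields a signature
    that changes little as the new face rotates."""
    dists = np.linalg.norm(templates - new_face, axis=-1)  # (faces, orients)
    return dists.min(axis=1)

probe = rng.standard_normal(dim)
print(signature(probe))    # one number per stored face: the signature
```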
In experiments, this approach produced invariant representations: A face’s signature turned out to be roughly the same no matter its orientation. But the mechanism — memorizing templates — was not, Poggio says, biologically plausible.
So instead, the new network uses a variation on Hebb’s rule, which is often described in the neurological literature as “neurons that fire together wire together.” That means that during training, as the weights of the connections between nodes are being adjusted to produce more accurate outputs, nodes that react in concert to particular stimuli end up contributing more to the final output than nodes that react independently (or not at all).
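In its textbook form, Hebb's rule strengthens a connection in proportion to the joint activity of the two nodes it links. The sketch below shows that basic outer-product update (a generic illustration, not the exact learning rule used in the paper):

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    """'Fire together, wire together': grow w[i, j] in proportion to
    the simultaneous activity of presynaptic node i and postsynaptic
    node j. Correlated pairs gain weight; uncorrelated ones do not."""
    return w + lr * np.outer(pre, post)

rng = np.random.default_rng(0)
w = np.zeros((4, 2))
pre = rng.standard_normal(4)   # lower-layer activity
post = pre[:2] * 0.5           # toy upper-layer activity, correlated
w = hebbian_update(w, pre, post)
print(w)
```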
This approach, too, ended up yielding invariant representations. But the middle layers of the network also duplicated the mirror-symmetric responses of the intermediate visual-processing regions of the primate brain.
“I think it’s a significant step forward,” says Christof Koch, president and chief scientific officer at the Allen Institute for Brain Science. “In this day and age, when everything is dominated by either big data or huge computer simulations, this shows you how a principled understanding of learning can explain some puzzling findings.”
“They’re very careful,” Koch adds. “They’re only looking at the feed-forward pathway — in other words, the first 80, 100 milliseconds. The monkey opens its eyes, and within 80 to 100 milliseconds, it can recognize a face and push a button signaling that. The question is what goes on in those 80 to 100 milliseconds, and the model that they have seems to explain that quite well.”