MIT Research News' Journal
Wednesday, May 25th, 2016
12:00a
Automatic bug finder
Symbolic execution is a powerful software-analysis tool that can be used to automatically locate and even repair programming bugs. Essentially, it traces out every path that a program’s execution might take.
But it tends not to work well with applications written using today’s programming frameworks. An application might consist of only 1,000 lines of new code, but it will generally import functions — such as those that handle virtual buttons — from a programming framework, which includes huge libraries of frequently reused code. The additional burden of evaluating the imported code makes symbolic execution prohibitively time-consuming.
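To make the path-explosion problem concrete, here is a minimal sketch (plain Python, and not the researchers' tool) of the core idea: every branch splits the analysis into separate paths, each carrying its own path condition. A real symbolic executor tracks these conditions symbolically and hands them to a constraint solver; this toy simply brute-forces a small input range to find one concrete witness per path.

```python
# Toy illustration of path enumeration in symbolic execution.
# Not the researchers' tool: a real symbolic executor would carry the path
# conditions symbolically and discharge them with a constraint solver.

def explore(x_range=range(-10, 11)):
    """Enumerate the feasible paths of this tiny example 'program':

        if x > 0:
            if x % 2 == 0: return "positive-even"
            else:          return "positive-odd"
        else:              return "non-positive"

    Brute-forcing a small input range stands in for constraint solving:
    we record one concrete witness input per distinct path condition.
    """
    paths = {}  # path condition -> example input that exercises it
    for x in x_range:
        if x > 0:
            label = "x > 0 and x % 2 == 0" if x % 2 == 0 else "x > 0 and x % 2 != 0"
        else:
            label = "x <= 0"
        paths.setdefault(label, x)
    return paths

if __name__ == "__main__":
    for condition, witness in explore().items():
        print(f"path condition: {condition:25} witness input: {witness}")
```

Even this three-path example hints at the problem: every nested branch multiplies the number of paths, so pulling in a framework's enormous, branch-heavy libraries makes the path count, and the analysis time, blow up.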
Computer scientists address this problem by creating simple models of the imported libraries, which describe their interactions with new programs but don’t require line-by-line evaluation of their code. Building the models, however, is labor-intensive and error prone, and the models require regular updates, as programming frameworks are constantly evolving.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory, working with colleagues at the University of Maryland, have taken an important step toward enabling symbolic execution of applications written using programming frameworks, with a system that automatically constructs models of framework libraries.
The researchers compared a model generated by their system with a widely used model of Java’s standard library of graphical-user-interface components, which had been laboriously constructed over a period of years. They found that their new model plugged several holes in the hand-coded one.
They described their results in a paper they presented last week at the International Conference on Software Engineering. Their work was funded by the National Science Foundation’s Expeditions Program.
“Forty years ago, if you wanted to write a program, you went in, you wrote the code, and basically all the code you wrote was the code that executed,” says Armando Solar-Lezama, an associate professor of electrical engineering and computer science at MIT, whose group led the new work. “But today, if you want to write a program, you go and bring in these huge frameworks and these huge pieces of functionality that you then glue together, and you write a little code to get them to interact with each other. If you don’t understand what that big framework is doing, you’re not even going to know where your program is going to start executing.”
Consequently, a program analyzer can’t just dispense with the framework code and concentrate on the newly written code. But symbolic execution works by stepping through every instruction that a program executes for a wide range of input values. That process becomes untenable if the analyzer has to evaluate every instruction involved in adding a virtual button to a window — the positioning of the button on the screen, the movement of the button when the user scrolls down the window, the button’s change of appearance when it’s pressed, and so on.
For purposes of analysis, all that matters is what happens when the button is pressed, so that’s the only aspect of the button’s functionality that a framework model needs to capture. More precisely, the model describes only what happens when code imported from a standard programming framework returns control of a program to newly written code.
“The only thing we care about is what crosses the boundary between the application and the framework,” says Xiaokang Qiu, a postdoc in Solar-Lezama’s lab and a co-author on the new paper. “The framework itself is like a black box that we want to abstract away.”
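As a rough illustration of what such a boundary-only model can look like, here is a hypothetical Python sketch (the class and method names are invented, not taken from Swing or Pasket). The stub ignores layout, painting, and scrolling entirely and captures only the event that crosses the boundary: the framework calling back into newly written application code when the button is clicked.

```python
# Hypothetical framework "model" for analysis purposes: a stand-in for a
# real GUI button that records only boundary-crossing behavior.
# Names are illustrative and do not come from Swing or Pasket.

class ButtonModel:
    """Abstracts a framework button down to its callback behavior."""

    def __init__(self):
        self._handlers = []

    def add_click_listener(self, handler):
        # Layout, painting, and scrolling are deliberately omitted.
        self._handlers.append(handler)

    def simulate_click(self):
        # The one event the analysis cares about: control returns from
        # the framework to the application's newly written code.
        for handler in self._handlers:
            handler()

if __name__ == "__main__":
    events = []
    button = ButtonModel()
    button.add_click_listener(lambda: events.append("application code ran"))
    button.simulate_click()
    print(events)  # ['application code ran']
```

A symbolic-execution engine analyzing application code against a stub like this only has to step through a handful of lines instead of the framework's full implementation.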
To generate their model, the researchers ran a suite of tutorials designed to teach novices how to program in Java. Their system automatically tracked the interactions between the tutorial code and the framework code that the tutorials imported.
“The nice thing about tutorials is that they’re designed to help people understand how the framework works, so they’re also a good way to teach the synthesizer how the framework works,” Solar-Lezama says. “The problem is that if I just show you a trace of what my program did, there’s an infinite set of programs that could behave like that trace.”
To winnow down that set of possibilities, the researchers’ system tries to fit the program traces to a set of standard software “design patterns.” First proposed in the late 1970s and popularized in a 1995 book called “Design Patterns,” design patterns are based on the idea that most problems in software engineering fit into just a few categories, and their solutions have just a few general shapes.
Computer scientists have identified roughly 20 design patterns that describe communication between different components of a computer program. Solar-Lezama, Qiu, and their Maryland colleagues — Jinseong Jeon, Jonathan Fetter-Degges, and Jeffrey Foster — built four such patterns into their new system, which they call Pasket, for “pattern sketcher.” For any given group of program traces, Pasket tries to fit it to each of the design patterns, selecting only the one that works best.
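The article does not say which four patterns Pasket encodes, but the observer pattern is a classic example of a communication-describing pattern, and it is the shape GUI callbacks naturally take: a subject notifies whatever listeners have registered with it. The Python sketch below is purely illustrative.

```python
# A minimal observer pattern, shown only as an example of the kind of
# component-communication pattern the article describes; whether it is
# one of Pasket's four built-in patterns is not stated in the article.

class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        # Push the event to every registered observer.
        for observer in self._observers:
            observer.update(event)

class Logger:
    """A trivial observer that just records what it is told."""

    def __init__(self):
        self.seen = []

    def update(self, event):
        self.seen.append(event)

if __name__ == "__main__":
    subject, logger = Subject(), Logger()
    subject.attach(logger)
    subject.notify("button-pressed")
    print(logger.seen)  # ['button-pressed']
```

Fitting program traces to a shape like this narrows the infinite space of programs consistent with a trace down to the few parameters the pattern leaves open.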
Because design patterns need to describe solutions to a huge range of problems that vary in their particulars, the computer science literature describes them in very general terms. Fortunately, Solar-Lezama has spent much of his career developing a system, called Sketch, that takes general descriptions of program functionality and fills in the low-level computational details. Sketch is the basis of most of his group’s original research, and it’s what reconciles design patterns and program traces in Pasket.
“The availability of models for frameworks such as Swing [Java’s library of graphical-user-interface components] and Android is critical for enabling symbolic execution of applications built using these frameworks,” says Rajiv Gupta, a professor of computer science and engineering at the University of California at Riverside. “At present, framework models are developed and maintained manually. This work offers a compelling demonstration of how far synthesis technology has advanced. The scalability of Pasket is impressive — in a few minutes, it synthesized nearly 2,700 lines of code. Moreover, the generated models compare favorably with manually created ones.”
12:00a
New concept turns battery technology upside-down
A new approach to the design of a liquid battery, using a passive, gravity-fed arrangement similar to an old-fashioned hourglass, could offer great advantages due to the system’s low cost and the simplicity of its design and operation, says a team of MIT researchers who have made a demonstration version of the new battery.
Liquid flow batteries — in which the positive and negative electrodes are each in liquid form and separated by a membrane — are not a new concept, and some members of this research team unveiled an earlier concept three years ago. The basic technology can use a variety of chemical formulations, including the same chemical compounds found in today’s lithium-ion batteries. In this case, key components are not solid slabs that remain in place for the life of the battery, but rather tiny particles that can be carried along in a liquid slurry. Increasing storage capacity simply requires bigger tanks to hold the slurry.
But all previous versions of liquid batteries have relied on complex systems of tanks, valves, and pumps, adding to the cost and providing multiple opportunities for possible leaks and failures.
The new version, which substitutes a simple gravity feed for the pump system, eliminates that complexity. The rate of energy production can be adjusted simply by changing the angle of the device, thus speeding up or slowing down the rate of flow. The concept is described in a paper in the journal Energy & Environmental Science, co-authored by Kyocera Professor of Ceramics Yet-Ming Chiang, Pappalardo Professor of Mechanical Engineering Alexander Slocum, School of Engineering Professor of Teaching Innovation Gareth McKinley, and POSCO Professor of Materials Science and Engineering W. Craig Carter, as well as postdoc Xinwei Chen, graduate student Brandon Hopkins, and four others.
Chiang describes the new approach as something like a “concept car” — a design that is not expected to go into production as it is but that demonstrates some new ideas that can ultimately lead to a real product.
The original concept for flow batteries dates back to the 1970s, but the early versions used materials that had very low energy density — that is, they had a low capacity for storing energy in proportion to their weight. A major new step in the development of flow batteries came with the introduction of high-energy-density versions a few years ago, including one developed by members of this MIT team, which used the same chemical compounds as conventional lithium-ion batteries. That version had many advantages but shared with other flow batteries the disadvantage of complexity in its plumbing systems.
The new version replaces all that plumbing with a simple, gravity-fed system. In principle, it functions like an old hourglass or egg timer, with particles flowing through a narrow opening from one tank to another. The flow can then be reversed by turning the device over. In this case, the overall shape looks more like a rectangular window frame, with a narrow slot at the place where two sashes would meet in the middle.
In the proof-of-concept version the team built, only one of the two sides of the battery is composed of flowing liquid, while the other side — a sheet of lithium — is in solid form. The team decided to try out the concept in a simpler form before pursuing its ultimate goal: a version in which both sides (the positive and negative electrodes) are liquid and flow side by side through an opening while separated by a membrane.
Solid batteries and liquid batteries each have advantages, depending on their specific applications, Chiang says, but “the concept here shows that you don’t need to be confined by these two extremes. This is an example of hybrid devices that fall somewhere in the middle.”
The new design should make possible simpler and more compact battery systems, which could be inexpensive and modular, allowing for gradual expansion of grid-connected storage systems to meet growing demand, Chiang says. Such storage systems will be critical for scaling up the use of intermittent power sources such as wind and solar.
While a conventional, all-solid battery requires electrical connectors for each of the cells that make up a large battery system, in the flow battery only the small region at the center — the “neck” of the hourglass — requires these contacts, greatly simplifying the mechanical assembly of the system, Chiang says. The components are simple enough that they could be made through injection molding or even 3-D printing, he says.
In addition, the basic concept of the flow battery makes it possible to choose independently the two main characteristics of a desired battery system: its power density (how much energy it can deliver at a given moment) and its energy density (how much total energy can be stored in the system). For the new liquid battery, the power density is determined by the size of the “stack,” the contacts through which the battery particles flow, while the energy density is determined by the size of its storage tanks. “In a conventional battery, the power and energy are highly interdependent,” Chiang says.
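A back-of-the-envelope sketch makes the decoupling explicit. All of the numbers below are invented for illustration and do not come from the paper: stored energy scales with the volume of slurry in the tanks, while power scales with the area of the stack through which the particles flow.

```python
# Illustrative only: hypothetical figures showing how tank size sets
# energy capacity while stack size sets power, independently.

SLURRY_ENERGY_DENSITY_WH_PER_L = 40.0   # hypothetical volumetric energy density
STACK_POWER_W_PER_CM2 = 0.05            # hypothetical areal power density

def battery_specs(tank_volume_l, stack_area_cm2):
    energy_wh = tank_volume_l * SLURRY_ENERGY_DENSITY_WH_PER_L
    power_w = stack_area_cm2 * STACK_POWER_W_PER_CM2
    return energy_wh, power_w

# Doubling the tanks doubles stored energy but leaves power unchanged.
for tank_volume_l in (10, 20):
    energy_wh, power_w = battery_specs(tank_volume_l, stack_area_cm2=200)
    print(f"{tank_volume_l} L of slurry -> {energy_wh:.0f} Wh stored, {power_w:.0f} W delivered")
```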
The trickiest part of the design process, he says, was controlling the characteristics of the liquid slurry to control the flow rates. The thick slurry behaves a bit like ketchup in a bottle: it’s hard to get it flowing in the first place, but once it starts, the flow can be too sudden. Getting the flow just right required a long process of fine-tuning both the liquid mixture and the design of the mechanical structures.
The rate of flow can be controlled by adjusting the angle of the device, Chiang says, and the team found that at a very shallow angle, close to horizontal, “the device would operate most efficiently, at a very steady but low flow rate.” The basic concept should work with many different chemical compositions for the different parts of the battery, he says, but “we chose to demonstrate it with one particular chemistry, one that we understood from previous work. We’re not proposing this particular chemistry as the end game.”
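One simple way to picture the angle-as-throttle behavior is a Bingham-type yield-stress model, a standard textbook description of ketchup-like fluids. The article does not specify the rheological model the team used, and every parameter below is invented; the point is only that nothing flows until the gravity-driven shear stress, which grows with the sine of the tilt angle, exceeds the slurry's yield stress.

```python
# Hypothetical Bingham-style estimate of whether a tilted slurry layer flows.
# All material parameters are invented for illustration.

import math

RHO = 1500.0      # slurry density, kg/m^3 (hypothetical)
G = 9.81          # gravitational acceleration, m/s^2
DEPTH = 0.005     # slurry layer depth, m (hypothetical)
TAU_YIELD = 20.0  # yield stress, Pa (hypothetical)

def driving_stress(angle_deg):
    """Gravity-driven shear stress at the base of the layer, in Pa."""
    return RHO * G * DEPTH * math.sin(math.radians(angle_deg))

for angle in (5, 10, 20, 45):
    tau = driving_stress(angle)
    state = "flows" if tau > TAU_YIELD else "stays put"
    print(f"{angle:2d} deg tilt: {tau:5.1f} Pa vs {TAU_YIELD:.0f} Pa yield -> {state}")
```

In this picture a shallow tilt keeps the driving stress only slightly above the yield threshold, giving the slow, steady flow the team found most efficient.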
Venkat Viswanathan, a research scientist at Lawrence Berkeley National Laboratory who was not involved in this work, says: “The authors have been able to build a bridge between the usually disparate fields of fluid mechanics and electrochemistry,” and in so doing developed a promising new approach to battery storage. “Pumping represents a large part of the cost for flow batteries,” he says, “and this new pumpless design could truly inspire a class of passively driven flow batteries.”
The work was supported by the Joint Center for Energy Storage Research, funded by the U.S. Department of Energy. The team also included graduate students Ahmed Helal and Frank Fan, and postdocs Kyle Smith and Zheng Li.
11:59p
Finding a new formula for concrete
Researchers at MIT are seeking to redesign concrete — the most widely used human-made material in the world — by following nature’s blueprints.
In a paper published online in the journal Construction and Building Materials, the team contrasts cement paste — concrete’s binding ingredient — with the structure and properties of natural materials such as bones, shells, and deep-sea sponges. As the researchers observed, these biological materials are exceptionally strong and durable, thanks in part to their precise assembly of structures at multiple length scales, from the molecular to the macro, or visible, level.
From their observations, the team, led by Oral Buyukozturk, a professor in MIT’s Department of Civil and Environmental Engineering (CEE), proposed a new bioinspired, “bottom-up” approach for designing cement paste.
“These materials are assembled in a fascinating fashion, with simple constituents arranging in complex geometric configurations that are beautiful to observe,” Buyukozturk says. “We want to see what kinds of micromechanisms exist within them that provide such superior properties, and how we can adopt a similar building-block-based approach for concrete.”
Ultimately, the team hopes to identify materials in nature that may be used as sustainable and longer-lasting alternatives to Portland cement, which requires a huge amount of energy to manufacture.
“If we can replace cement, partially or totally, with some other materials that may be readily and amply available in nature, we can meet our objectives for sustainability,” Buyukozturk says.
Co-authors on the paper include lead author and graduate student Steven Palkovic, graduate student Dieter Brommer, research scientist Kunal Kupwade-Patil, CEE assistant professor Admir Masic, and CEE department head Markus Buehler, the McAfee Professor of Engineering.
“The merger of theory, computation, new synthesis, and characterization methods have enabled a paradigm shift that will likely change the way we produce this ubiquitous material, forever,” Buehler says. “It could lead to more durable roads, bridges, structures, reduce the carbon and energy footprint, and even enable us to sequester carbon dioxide as the material is made. Implementing nanotechnology in concrete is one powerful example [of how] to scale up the power of nanoscience to solve grand engineering challenges.”
From molecules to bridges
Today’s concrete is a random assemblage of crushed rocks and stones, bound together by a cement paste. Concrete’s strength and durability depend partly on its internal structure and configuration of pores. For example, the more porous the material, the more vulnerable it is to cracking. However, there are no techniques available to precisely control concrete’s internal structure and overall properties.
“It’s mostly guesswork,” Buyukozturk says. “We want to change the culture and start controlling the material at the mesoscale.”
As Buyukozturk describes it, the “mesoscale” represents the connection between microscale structures and macroscale properties. For instance, how does cement’s microscopic arrangement affect the overall strength and durability of a tall building or a long bridge? Understanding this connection would help engineers identify features at various length scales that would improve concrete’s overall performance.
“We’re dealing with molecules on the one hand, and building a structure that’s on the order of kilometers in length on the other,” Buyukozturk says. “How do we connect the information we develop at the very small scale, to the information at the large scale? This is the riddle.”
Building from the bottom, up
To start to understand this connection, he and his colleagues looked to biological materials such as bone, deep-sea sponges, and nacre (an inner shell layer of mollusks), which have all been studied extensively for their mechanical and microscopic properties. They looked through the scientific literature for information on each biomaterial, and compared their structures and behavior, at the nano-, micro-, and macroscales, with those of cement paste.
They looked for connections between a material’s structure and its mechanical properties. For instance, the researchers found that a deep-sea sponge’s onion-like structure of silica layers provides a mechanism for preventing cracks. Nacre has a “brick-and-mortar” arrangement of minerals that generates a strong bond between the mineral layers, making the material extremely tough.
“In this context, there is a wide range of multiscale characterization and computational modeling techniques that are well established for studying the complexities of biological and biomimetic materials, which can be easily translated into the cement community,” says Masic.
Applying the information they learned from investigating biological materials, as well as knowledge they gathered on existing cement paste design tools, the team developed a general, bioinspired framework, or methodology, for engineers to design cement, “from the bottom up.”
The framework is essentially a set of guidelines that engineers can follow to determine how certain additives or ingredients of interest will affect cement’s overall strength and durability. For instance, in a related line of research, Buyukozturk is looking into volcanic ash as a cement additive or substitute. To see whether volcanic ash would improve cement paste’s properties, engineers following the group’s framework would first use existing experimental techniques, such as nuclear magnetic resonance, scanning electron microscopy, and X-ray diffraction, to characterize volcanic ash’s solid and pore configurations over time.
Researchers could then plug these measurements into models that simulate concrete’s long-term evolution, to identify mesoscale relationships between, say, the properties of volcanic ash and the material’s contribution to the strength and durability of an ash-containing concrete bridge. These simulations can then be validated with conventional compression and nanoindentation experiments, to test actual samples of volcanic ash-based concrete.
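In code form, the prescribed workflow might look something like the schematic below. Everything here is hypothetical: the measured quantities, the placeholder strength model, and the tolerance are invented stand-ins for the real characterization data, mesoscale simulations, and compression tests the article describes.

```python
# Schematic sketch of the characterize -> simulate -> validate loop.
# All names, formulas, and numbers are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Characterization:
    """Quantities an engineer might extract from NMR, SEM, and XRD data."""
    porosity: float            # pore volume fraction
    mean_pore_size_nm: float   # characteristic pore size
    amorphous_fraction: float  # reactive amorphous content

def predict_strength_mpa(sample: Characterization) -> float:
    # Placeholder mesoscale model: strength falls with porosity and with
    # coarser pores. A real framework would run full simulations here.
    return 80.0 * (1.0 - sample.porosity) ** 3 / (1.0 + sample.mean_pore_size_nm / 100.0)

def validated(predicted_mpa: float, measured_mpa: float, tolerance: float = 0.15) -> bool:
    """Check the prediction against a compression-test result."""
    return abs(predicted_mpa - measured_mpa) <= tolerance * measured_mpa

ash_blend = Characterization(porosity=0.18, mean_pore_size_nm=40.0, amorphous_fraction=0.7)
prediction = predict_strength_mpa(ash_blend)
verdict = "consistent with test data" if validated(prediction, measured_mpa=33.0) else "model needs revisiting"
print(f"predicted strength: {prediction:.1f} MPa ({verdict})")
```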
Ultimately, the researchers hope the framework will help engineers identify ingredients that, like biomaterials, are structured and evolve in ways that improve concrete’s performance and longevity.
“Hopefully this will lead us to some sort of recipe for more sustainable concrete,” Buyukozturk says. “Typically, buildings and bridges are given a certain design life. Can we extend that design life maybe twice or three times? That’s what we aim for. Our framework puts it all on paper, in a very concrete way, for engineers to use.”
This research was supported in part by the Kuwait Foundation for the Advancement of Sciences through the Kuwait-MIT Center for Natural Resources and the Environment, the National Institute of Standards and Technology, and Argonne National Laboratory.
11:59p
Scientists illuminate a hidden regulator in gene transcription
Gene transcription is the process by which a gene’s DNA sequence is copied into messenger RNA (mRNA), which delivers its genetic blueprint to the cell’s protein-making machinery.
Now researchers at MIT and the Howard Hughes Medical Institute (HHMI) have identified a hidden, ephemeral phenomenon in cells that may play a major role in jump-starting mRNA production and regulating gene transcription.
In a paper published in the online journal eLife, the researchers report using a new super-resolution imaging technique they developed to see individual mRNA molecules coming out of a gene in a live cell. Using this same technique, they observed that, just before mRNA’s appearance, the enzyme RNA polymerase II (Pol II) gathers in clusters on the same gene for just a few brief seconds before scattering apart.
When the researchers manipulated the enzyme clusters in such a way that they stayed together for longer periods of time, they found that the gene produced correspondingly more molecules of mRNA. Clusters of Pol II therefore may play a central role in triggering mRNA production and controlling gene transcription.
Ibrahim Cissé, assistant professor of physics at MIT, explains that because of their transient nature, enzyme clusters have largely been regarded as a mystery, and scientists have questioned whether such clustering is purposeful or merely coincidental. These new results, he says, suggest that, although short-lived, enzyme clustering can have a significant impact on major biological processes.
“We think these weak and transient clusters are a fundamental way for the cell to control gene expression,” says Cissé, who is senior author on the paper. “If a small mutation changes the cluster’s lifetime ever so slightly, that can also change the gene expression in a major way. It seems to be a very sensitive knob that the cell can tune.”
What’s more, Cissé says scientists can now explore Pol II clusters as targets to “stall or induce a burst of transcription” and control the expression of certain genes, to explore cancer drugs and other gene therapies.
The paper’s co-authors include Won-Ki Cho, lead author and postdoc in the Department of Physics; Namrata Jayanth and Jan-Henrik Spille, also postdocs in physics; Takuma Inoue and J. Owen Andrews, graduate students in physics; and William Conway, an undergraduate in physics and biology; as well as researchers from HHMI’s Janelia Research Campus: Brian English, Jonathan Grimm, Luke Lavis, and Timothée Lionnet, who is co-senior author with Cissé.
Imaging at super-resolution
Pol II enzymes only cluster together for very short periods of time, on the order of several seconds. These clusters are also extremely small, on the scale of 100 nanometers in width. Because they are so tiny and fleeting, Pol II clusters and other weak and transient interactions have largely been hidden from view, essentially invisible to conventional imaging techniques.
To see these interactions, Cissé and his colleagues developed a super-resolution imaging technique to visualize cellular processes at the single-molecule level. The team’s technique builds on two existing super-resolution methods — photo-activation localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM). Both techniques involve tagging molecules of interest and lighting them up one by one to determine where each molecule is in space. Scientists can then merge every molecule’s position to create one super-resolution image of the cellular region.
While incredibly precise, these imaging techniques rely on the assumption that every molecule remains stationary. Molecules that come and go, and quickly cluster and scatter, are difficult to track. To catch Pol II clusters in action, Cissé and his team tweaked existing super-resolution imaging techniques, looking not just at a single enzyme’s position, but also at how frequently molecules were detected. The higher the frequency of detection, the higher the chance that a cluster has formed.
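A simplified sketch of that detection-frequency idea follows. This is not the authors' analysis code; the window size, threshold, and synthetic detection times are all invented. Detections inside a small region of interest are counted in a sliding time window, and windows whose counts rise well above the sparse background are flagged as candidate clustering events.

```python
# Illustrative sliding-window detection counter (not the authors' code).
# Window size, threshold, and the synthetic data are invented.

def cluster_windows(detection_times_s, window_s=1.0, step_s=0.25, threshold=5):
    """Return (start, end) windows whose detection count meets the threshold."""
    if not detection_times_s:
        return []
    t, t_end = min(detection_times_s), max(detection_times_s)
    flagged = []
    while t <= t_end:
        count = sum(1 for d in detection_times_s if t <= d < t + window_s)
        if count >= threshold:
            flagged.append((round(t, 2), round(t + window_s, 2)))
        t += step_s
    return flagged

# Synthetic example: sparse background detections plus a dense burst
# around t = 10 s, mimicking a transient cluster.
background = [i * 0.9 for i in range(20)]
burst = [10.0 + 0.05 * i for i in range(12)]
print("candidate cluster windows:", cluster_windows(sorted(background + burst)))
```

Merging contiguous flagged windows would then give an estimate of each candidate cluster's lifetime, the quantity the researchers went on to manipulate.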
The team applied their technique to image cells, using a camera that recorded one frame every 50 milliseconds, running continuously for up to 10,000 frames.
A transient lifetime
They then created a cell line that included a bright fluorescent tag for mRNA, as well as a fluorescent tag of a different color for Pol II enzymes. The team applied its super-resolution technique to image a particular gene inside the cell, called beta-actin, which has been characterized extensively. In experiments with live cells, the researchers observed that, while previously transcribed mRNA molecules lit up on the gene, new Pol II clusters appeared on the same gene for about 8 seconds before disassembling.
From these experiments, the group was uncertain as to whether the clusters had any impact on mRNA production, since the process from the start of transcription to the complete production of an mRNA molecule takes significantly longer — about 2.5 minutes. Could a cluster, appearing for just a fraction of that time, have any effect on mRNA?
To answer this question, the team stimulated the cells with a chemical cocktail that they knew would affect gene transcription and mRNA output. In these cells, they found that, just before the mRNA peak appeared, clusters formed on the gene and remained stable for as long as 24 seconds, about three times a cluster’s typical lifetime. What’s more, the number of mRNAs increased by a similar amount.
After repeating the experiment in 207 living cells, the team found that the lifetime of Pol II clusters was directly related to the number of mRNA molecules produced from the same gene.
Cissé speculates that Pol II clusters may act as efficient drivers of gene transcription, speeding up an otherwise inefficient process.
“It makes sense that you wouldn’t want an efficient initiation process, because you don’t want to randomly turn on any gene just because there was a random collision,” Cissé says. “But you also want to have a way to change the initiation from an inefficient to an efficient process, for example when you want to express a gene in response to some environmental stimuli. We think that these transient clusters are probably the way that the cell can render transcription initiation efficient.”
Next, Cissé plans to follow up his studies on Pol II clusters to determine what forces hold them together, how they form, and whether other molecular factors cluster with similar effects.
“I suspect there are new biophysical phenomena that come from weak and transient interactions,” Cissé says. “This is an underexplored area in biology, and because the interactions are so elusive we understand very little about how the regulatory processes happen inside a living cell.”
This research was funded, in part, by a National Institutes of Health Director’s New Innovator Award to Cissé, with additional support from the National Cancer Institute, MIT Department of Physics start-up funds, and the Howard Hughes Medical Institute.