MIT Research News' Journal
 

Thursday, October 27th, 2016

    2:00p
    Retracing the origins of a massive, multi-ring crater

    Scientists from MIT and elsewhere have reconstructed the extreme collision that created one of the moon’s largest craters, 3.8 billion years ago. The team has retraced the moon’s dramatic response in the first hours following the massive impact, and identified the processes by which large, multi-ring basins can form in the aftermath of such events.

    The findings, published today in two papers in the journal Science, may shed light on how giant impacts shaped the evolution of the moon, and even life on Earth, shortly after the planets formed.

    The team’s results pertain to the moon’s Orientale basin, an expansive, bull’s eye-shaped depression on the southwestern edge of the moon, just barely visible from Earth. The basin is surrounded by three concentric rings of rock, the largest one stretching 580 miles across — about three times as wide as the state of Massachusetts. Until now, it’s been unclear how such massive, multi-ring basins materialized.

    Using data collected by NASA’s Gravity Recovery and Interior Laboratory (GRAIL) mission, the researchers determined that the 3.8-billion-year-old basin was created by a huge impactor that punched an initial, transient crater into the lunar surface, measuring up to 285 miles in diameter — about as wide as the state of New York.

    This impact, the researchers calculated, sent at least 816,000 cubic miles of pulverized lunar crust flying out from the impact site — an amount equivalent to 135 times the combined volume of the Great Lakes.

    The ejected material, which the team modeled in computer simulations, rose up like a tidal wave, then crashed down to the lunar surface, creating giant faults through the entire crust and forming two concentric walls of rock on the surface, each rising several kilometers high. Most of the action, according to simulations, occurred over just a couple of hours.  

    If such massive, violent impacts were pummeling the moon, they must have been doing the same, if not more, to the Earth, says Maria Zuber, vice president for research and the E.A. Griswold Professor of Geophysics at MIT.

    “What’s interesting is, this was during the time when the first life forms were starting to emerge on the Earth,” says Zuber, who is the principal investigator for GRAIL and lead author on one of the Science papers. “These very large impacts probably came in, sterilizing the surface, and goodness knows how many times nascent life may have started and stopped and had to start again. It’s just amazing how catastrophic these impacts were.”

    [Video] The most realistic animation to date of the Orientale crater collapse and ring formation. Colors denote temperature, from hot, red crustal material to cooler material in blue. Over a little more than two hours, the animation shows the moon’s initially cool surface as it responds to a very large impact. Instantly, the energy from the collision heats up the material closest to the impact, and the crust surges more than 100 kilometers above the lunar surface before crashing back down. The pulverized material oscillates back and forth for two hours before settling into the pattern of the present-day basin. (Courtesy of the researchers. The animation has been sped up.)

    Flying low

    The team’s results are based on gravity field measurements taken by GRAIL’s twin spacecraft, which orbited the moon from January to mid-December in 2012. In the waning days of the mission, the GRAIL probes were programmed to fly over the Orientale basin, dropping their altitude to just 1.2 miles above the basin’s rings — even lower than the altitude at which commercial jets fly over the Earth. Flying so close to the ground, the probes were able to take measurements of the basin’s gravity field at high spatial resolution, providing scientists with a precise map of the moon’s interior mass distribution.

    Zuber, who directed the mission and led the planning of the probes’ route, notes that the Orientale basin is the best-preserved large impact basin on the moon, having undergone very little transformation since it first formed. For this reason, the basin is considered a relatively pristine example of what the moon and the Earth experienced during a period in which the solar system was dominated by large, catastrophic impacts.

    “The interesting thing is, if you look up at the moon, you see all these craters, and Earth used to look like that — it went through a very similar bombardment history,” Zuber says. “In trying to reconstruct the extreme environmental conditions that existed during this period of time, we have a clearer window into the past through studying basins on the moon, because the record of those impacts isn’t preserved on the Earth.”

    Measured impact

    In one of the two papers in Science, Zuber and her colleagues analyzed GRAIL’s gravity field measurements and solved a key mystery: the size and location of the basin’s transient crater, the initial depression created when an asteroid blasts material out from the lunar surface. In smaller impacts, the transient crater is largely preserved. But in very large collisions, the transient crater collapses due to loss of strength in the target crust, erasing any hint of the impactor’s size.

    In the case of the Orientale basin, many scientists had thought that one of its three rings might represent the transient crater. But the new measurements of the basin’s gravity field show that the transient crater may have been somewhere between the two inner rings, spanning around 200 to 300 miles across. From the size of the transient crater, the team estimated that the initial impact blasted away about 816,000 cubic miles of lunar crust. The gravity signal also showed that two huge faults exist beneath the basin’s two outer rings.

    “One of the really exciting results in this paper is, the outer two basin rings correspond to massive faults,” Zuber says. “And we were able to detect that these faults appear to have penetrated entirely through the crust and into the mantle, which is quite something.”

    Making a bull’s-eye

    In the second paper, led by Brandon Johnson, a former MIT postdoc in Zuber’s group and now an assistant professor at Brown University, the team created a computer simulation to reconstruct the first hours following the initial impact that created the Orientale basin. The team ran the simulation multiple times, with varying conditions, until the final basin and its concentric rings matched the observations made by GRAIL.

    Based on these simulations, the team estimated that the basin was carved out by a 40-mile-wide object that collided with the moon at about 9 miles per second, or 32,400 miles per hour. The impact pulverized the underlying crust, and the propagation and subsequent unloading of the shockwave caused material to rise up, then crash back down, sloshing back and forth in a wave-like fashion for the next two hours. The material eventually settled back to the surface in the pattern of the basin’s two outermost rings, each rising several kilometers high. This entire process obliterated any trace of the initial crater.
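
    As a rough check of the figures above, the unit conversion and an order-of-magnitude kinetic energy can be computed directly; the impactor density below is an assumed value for a rocky body, not a figure from the papers.

        # Back-of-envelope scale check for the impact described above.
        # The density is an assumption (~2,500 kg/m^3, a typical rocky body),
        # not a number from the Science papers.
        import math

        MILE_M = 1609.34                       # meters per mile

        diameter_m = 40 * MILE_M               # 40-mile-wide impactor
        speed_mps = 9 * MILE_M                 # 9 miles per second
        density = 2500                         # kg/m^3, assumed

        radius = diameter_m / 2
        mass = density * (4 / 3) * math.pi * radius ** 3      # kg, assuming a sphere
        kinetic_energy = 0.5 * mass * speed_mps ** 2          # joules

        print(f"speed: {9 * 3600:,} mph")                     # 32,400 mph, as stated
        print(f"kinetic energy: {kinetic_energy:.1e} J")      # roughly 3.7e25 J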

    The simulations showed that the basin’s innermost ring was formed by a different process. While smaller impacts can cause material in a crater to flow inward, forming a mound in the middle, Orientale’s central mound was so large that it was unstable. The material eventually collapsed, forming the basin’s innermost ring.

    “Ultimately, what this tells us is that the early history of the planets, at the time life was developing on Earth, was an extraordinarily hostile environment,” Zuber says. “There were extreme, energetic events that produced remarkably difficult environmental conditions. Maybe that’s why life is as tenacious as it is, because life forms somehow developed in the time subsequent to these catastrophic events. They were tough little buggers.”

    This research was supported by the NASA Discovery Program. The papers’ authors from MIT include David Smith, Katarina Miljković, and Jason Soderblom.

    5:45p
    MIT launches Institute-wide survey on commuting behaviors

    Today, MIT distributed the 2016 Transportation Survey to members of the student body, faculty, and staff. The survey, which is jointly sponsored by the Parking and Transportation Office, the Environment, Health, and Safety Office, and the Office of the Provost, is given every two years as required by the State of Massachusetts and the City of Cambridge.

    The survey is designed to collect data on how the MIT community travels to campus every day and covers a wide breadth of commuter interests and concerns, including subsidized T-passes, parking access, access to bike racks, length of commute, and more. It gives MIT the opportunity to gather feedback on whether existing programs satisfy the community as the Institute works to create more efficient and innovative transportation solutions, building on its commitment to sustainability and climate action.

    The Transportation Survey has a long-standing history of informing crucial decisions about MIT commuter and parking services and plans, including the recent implementation of the Access MIT program, which provides free local public transit to benefits-eligible MIT employees and is the first program of its kind at any Boston- or Cambridge-area university. This year’s survey will aid in evaluating that program, along with many others, to better serve MIT students, faculty, and staff.

    The survey takes about 10 minutes to complete. Those who have received an email invitation to take the survey are encouraged to share their thoughts.

    To review results from past MIT Transportation Surveys, please see web.mit.edu/ir/surveys/commuting.html.

    11:59p
    Making computers explain themselves

    In recent years, the best-performing systems in artificial-intelligence research have come courtesy of neural networks, which look for patterns in training data that yield useful predictions or classifications. A neural net might, for instance, be trained to recognize certain objects in digital images or to infer the topics of texts.

    But neural nets are black boxes. After training, a network may be very good at classifying data, but even its creators will have no idea why. With visual data, it’s sometimes possible to automate experiments that determine which visual features a neural net is responding to. But text-processing systems tend to be more opaque.

    At the Association for Computational Linguistics’ Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions.

    “In real-world applications, sometimes people really want to know why the model makes the predictions it does,” says Tao Lei, an MIT graduate student in electrical engineering and computer science and first author on the new paper. “One major reason that doctors don’t trust machine-learning methods is that there’s no evidence.”

    “It’s not only the medical domain,” adds Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science and Lei’s thesis advisor. “It’s in any domain where the cost of making the wrong prediction is very high. You need to justify why you did it.”

    “There’s a broader aspect to this work, as well,” says Tommi Jaakkola, an MIT professor of electrical engineering and computer science and the third coauthor on the paper. “You may not want to just verify that the model is making the prediction in the right way; you might also want to exert some influence in terms of the types of predictions that it should make. How does a layperson communicate with a complex model that’s trained with algorithms that they know nothing about? They might be able to tell you about the rationale for a particular prediction. In that sense it opens up a different way of communicating with the model.”

    Virtual brains

    Neural networks are so called because they mimic — approximately — the structure of the brain. They are composed of a large number of processing nodes that, like individual neurons, are capable of only very simple computations but are connected to each other in dense networks.

    In a process referred to as “deep learning,” training data is fed to a network’s input nodes, which modify it and feed it to other nodes, which modify it and feed it to still other nodes, and so on. The values stored in the network’s output nodes are then correlated with the classification category that the network is trying to learn — such as the objects in an image, or the topic of an essay.

    Over the course of the network’s training, the operations performed by the individual nodes are continuously modified to yield consistently good results across the whole set of training examples. By the end of the process, the computer scientists who programmed the network often have no idea what the nodes’ settings are. Even if they do, it can be very hard to translate that low-level information back into an intelligible description of the system’s decision-making process.
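
    Below is a minimal, self-contained sketch of the kind of training loop described above, written in plain NumPy with a toy two-layer network; the data, layer sizes, and learning rate are illustrative choices, not details from the article or from the CSAIL system.

        # Minimal sketch of supervised training: node operations (weights) are
        # adjusted over many passes so the output correlates with the labels.
        # All sizes and data here are toy values chosen for illustration.
        import numpy as np

        rng = np.random.default_rng(0)

        # Toy classification data: 200 points, 2 features, 2 classes.
        X = rng.normal(size=(200, 2))
        y = (X[:, 0] + X[:, 1] > 0).astype(float)

        # "Processing nodes": one hidden layer of 8 units feeding one output unit.
        W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
        W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        lr = 0.5
        for step in range(500):
            # Forward pass: inputs feed hidden nodes, which feed the output node.
            h = np.tanh(X @ W1 + b1)
            p = sigmoid(h @ W2 + b2).ravel()        # predicted probability of class 1

            # Cross-entropy loss over the whole training set.
            loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

            # Backward pass: nudge every node's weights to reduce the loss.
            dlogit = (p - y)[:, None] / len(y)
            dW2 = h.T @ dlogit;  db2 = dlogit.sum(axis=0)
            dh = dlogit @ W2.T * (1 - h ** 2)
            dW1 = X.T @ dh;      db1 = dh.sum(axis=0)
            W1 -= lr * dW1; b1 -= lr * db1
            W2 -= lr * dW2; b2 -= lr * db2

        print("final training loss:", round(float(loss), 3))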

    In the new paper, Lei, Barzilay, and Jaakkola specifically address neural nets trained on textual data. To enable interpretation of a neural net’s decisions, the CSAIL researchers divide the net into two modules. The first module extracts segments of text from the training data, and the segments are scored according to their length and their coherence: The shorter the segment, and the more of it that is drawn from strings of consecutive words, the higher its score.

    The segments selected by the first module are then passed to the second module, which performs the prediction or classification task. The modules are trained together, and the goal of training is to maximize both the score of the extracted segments and the accuracy of prediction or classification.
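
    A highly simplified sketch of that two-module setup is shown below, using PyTorch. The class names, layer sizes, optimizer, and penalty weights are illustrative assumptions, and the sketch relaxes the hard word selection described above into a differentiable soft mask so the two modules can be trained jointly with ordinary gradients; it is not the authors’ implementation.

        # Sketch of a two-module "rationale" model: the first module scores words
        # for selection, the second predicts the rating from the selected words.
        # Sizes, penalty weights, and the soft-mask relaxation are assumptions.
        import torch
        import torch.nn as nn

        VOCAB, EMB, HID = 5000, 100, 128

        class Selector(nn.Module):
            """Scores each word; values near 1 mean 'part of the rationale'."""
            def __init__(self):
                super().__init__()
                self.emb = nn.Embedding(VOCAB, EMB)
                self.rnn = nn.LSTM(EMB, HID, batch_first=True, bidirectional=True)
                self.out = nn.Linear(2 * HID, 1)

            def forward(self, tokens):                          # tokens: (batch, seq)
                h, _ = self.rnn(self.emb(tokens))
                return torch.sigmoid(self.out(h)).squeeze(-1)   # (batch, seq) in [0, 1]

        class Predictor(nn.Module):
            """Predicts the rating from the selected (masked) words only."""
            def __init__(self):
                super().__init__()
                self.emb = nn.Embedding(VOCAB, EMB)
                self.rnn = nn.LSTM(EMB, HID, batch_first=True)
                self.out = nn.Linear(HID, 1)

            def forward(self, tokens, mask):
                x = self.emb(tokens) * mask.unsqueeze(-1)       # zero out unselected words
                h, _ = self.rnn(x)
                return self.out(h[:, -1]).squeeze(-1)           # predicted rating

        sel, pred_net = Selector(), Predictor()
        opt = torch.optim.Adam(list(sel.parameters()) + list(pred_net.parameters()), lr=1e-3)

        def training_step(tokens, rating, sparsity=0.01, coherence=0.01):
            mask = sel(tokens)
            pred = pred_net(tokens, mask)
            pred_loss = ((pred - rating) ** 2).mean()              # prediction accuracy
            length_pen = mask.mean()                               # favor short rationales
            gap_pen = (mask[:, 1:] - mask[:, :-1]).abs().mean()    # favor consecutive words
            loss = pred_loss + sparsity * length_pen + coherence * gap_pen
            opt.zero_grad(); loss.backward(); opt.step()
            return loss.item()

    Minimizing the combined loss trades prediction accuracy against rationale length and contiguity, which is the joint objective the article describes.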

    One of the data sets on which the researchers tested their system is a group of reviews from a website where users evaluate different beers. The data set includes the raw text of the reviews and the corresponding ratings, using a five-star system, on each of three attributes: aroma, palate, and appearance.

    What makes the data attractive to natural-language-processing researchers is that it’s also been annotated by hand, to indicate which sentences in the reviews correspond to which scores. For example, a review might consist of eight or nine sentences, and the annotator might have highlighted those that refer to the beer’s “tan-colored head about half an inch thick,” “signature Guinness smells,” and “lack of carbonation.” Each sentence is correlated with a different attribute rating.
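
    To make the structure of such an annotated review concrete, a hypothetical record might look like the following; the field names and values are invented for illustration and are not drawn from the actual data set.

        # Hypothetical example record; field names and values are made up.
        review = {
            "sentences": [
                "Pours a deep brown with a tan-colored head about half an inch thick.",
                "Signature Guinness smells of roasted malt.",
                "Disappointing lack of carbonation.",
            ],
            "ratings": {"appearance": 4.5, "aroma": 4.0, "palate": 2.5},   # out of 5 stars
            # Sentence indices the human annotator linked to each attribute.
            "rationales": {"appearance": [0], "aroma": [1], "palate": [2]},
        }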

    Validation

    As such, the data set provides an excellent test of the CSAIL researchers’ system. If the first module has extracted those three phrases, and the second module has correlated them with the correct ratings, then the system has identified the same basis for judgment that the human annotator did.

    In experiments, the system’s agreement with the human annotations was 96 percent and 95 percent, respectively, for ratings of appearance and aroma, and 80 percent for the more nebulous concept of palate.
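
    The article does not spell out how that agreement is computed; one plausible way, sketched below purely for illustration, is to measure what fraction of the words the system selects fall inside the spans the human annotator highlighted.

        # Illustrative only: fraction of model-selected word positions that fall
        # inside the human-annotated rationale for one review.
        def rationale_agreement(selected, annotated):
            """selected, annotated: sets of word positions."""
            if not selected:
                return 0.0
            return len(selected & annotated) / len(selected)

        # Hypothetical example: model picked words 3-7; annotator marked words 2-8.
        print(rationale_agreement(set(range(3, 8)), set(range(2, 9))))   # 1.0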

    In the paper, the researchers also report testing their system on a database of free-form technical questions and answers, where the task is to determine whether a given question has been answered previously.

    In unpublished work, they’ve applied it to thousands of pathology reports on breast biopsies, where it has learned to extract text explaining the bases for the pathologists’ diagnoses. They’re even using it to analyze mammograms, where the first module extracts sections of images rather than segments of text.

    “There’s a lot of hype now — and rightly so — around deep learning, and specifically deep learning for natural-language processing,” says Byron Wallace, an assistant professor of computer and information science at Northeastern University. “But a big drawback for these models is that they’re often black boxes. Having a model that not only makes very accurate predictions but can also tell you why it’s making those predictions is a really important aim.”

    “As it happens, we have a paper that’s similar in spirit being presented at the same conference,” Wallace adds. “I didn’t know at the time that Regina was working on this, and I actually think hers is better. In our approach, during the training process, while someone is telling us, for example, that a movie review is very positive, we assume that they’ll mark a sentence that gives you the rationale. In this way we train the deep-learning model to extract these rationales. But they don’t make this assumption, so their model works without using direct annotations with rationales, which is a very nice property.”

