MIT Research News' Journal
 

Wednesday, February 12th, 2020

    12:59p
    Half of U.S. deaths related to air pollution are linked to out-of-state emissions

    More than half of all air-quality-related early deaths in the United States are a result of emissions originating outside of the state in which those deaths occur, MIT researchers report today in the journal Nature.

    The study focuses on the period from 2005 to 2018 and tracks combustion emissions of several polluting compounds from seven sectors, looking at every state in the contiguous United States, from season to season and year to year.

    In general, the researchers find that when air pollution is generated in one state, half of that pollution is lofted into the air and carried by winds across state boundaries, affecting the health of out-of-state residents and increasing their risk of early death.

    Electric power generation is the greatest contributor to out-of-state pollution-related deaths, the findings suggest. In 2005, for example, more than 75 percent of the early deaths attributable to sulfur dioxide from power plant smokestacks occurred in a state other than the one where the emissions originated.

    Encouragingly, the researchers found that since 2005, early deaths associated with air pollution have gone down significantly. They documented a decrease of 30 percent in 2018 compared to 2005, equivalent to about 30,000 avoided early deaths. In addition, the fraction of early deaths caused by emissions from other states fell from 53 percent in 2005 to 41 percent in 2018.

    Perhaps surprisingly, this reduction in cross-state pollution also appears to be related to electric power generation: In recent years, regulations under the Clean Air Act, enforced by the Environmental Protection Agency, and other changes have helped to significantly curb emissions from this sector across the country.

    The researchers caution, however, that today, emissions from other sectors are increasingly contributing to harmful cross-state pollution.

    “Regulators in the U.S. have done a pretty good job of hitting the most important thing first, which is power generation, by reducing sulfur dioxide emissions drastically, and there’s been a huge improvement, as we see in the results,” says study leader Steven Barrett, an associate professor of aeronautics and astronautics at MIT. “Now it’s looking like other emissions sectors are becoming important. To make further progress, we should start focusing on road transportation and commercial and residential emissions.”

    Barrett’s coauthors on the paper are Sebastian Eastham, a research scientist at MIT; Irene Dedoussi, formerly an MIT graduate student and now an assistant professor at Delft University of Technology; and Erwan Monier, formerly an MIT research scientist and now an assistant professor at the University of California at Davis. The research was a collaboration between MIT’s Laboratory for Aviation and the Environment and the MIT Joint Program on the Science and Policy of Global Change.

    Death and the matrix

    Scientists have long known that pollution observes no boundaries, one of the prime examples being acid rain.

    “It’s been known in Europe for over 30 years that power stations in England would create acid rain that would affect vegetation in Norway, but there’s not been a systematic way to capture how that translates to human health effects,” Barrett says.

    In the case of the United States, tracking how pollution from one state affects another state has historically been tricky and computationally difficult, Barrett says. For each of the 48 contiguous states, researchers would have to track emissions to and from each of the other 47 states.

    “But now there are modern computational tools that enable you to do these assessments in a much more efficient way,” Barrett says. “That wasn’t really possible before.”

    He and his colleagues developed such tools, drawing on fundamental work by Daven Henze at the University of Colorado at Boulder, to track how every state in the contiguous U.S. affects pollution and health outcomes in every other state. They looked at multiple species of pollutants, such as sulfur dioxide, ozone, and fine particulates, from various emissions sectors, including electric power generation, road transportation, marine, rail, and aviation, and commercial and residential sources, for every hour of the year.

    They first obtained emissions data from each of seven sectors for the years 2005, 2011, and 2018. They then used the GEOS-Chem atmospheric chemistry transport model to track where these emissions ended up, from season to season and year to year, based on wind patterns and a pollutant’s chemical reactions in the atmosphere. Finally, they used an epidemiologically derived model to relate a population’s pollutant exposure to its risk of early death.
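    The study’s specific exposure-response model is not reproduced here; as a general illustration, a log-linear concentration-response function of the kind used in air-quality health assessments can be sketched as follows, with all numbers being hypothetical placeholders.

    ```python
    import math

    def attributable_early_deaths(population, baseline_mortality_rate,
                                  delta_concentration, beta):
        """Generic log-linear concentration-response calculation (illustrative only).

        population              -- number of exposed people
        baseline_mortality_rate -- annual baseline deaths per person
        delta_concentration     -- change in pollutant exposure, in ug/m^3
        beta                    -- hypothetical concentration-response coefficient
        """
        baseline_deaths = population * baseline_mortality_rate
        # Fraction of baseline deaths attributable to the added exposure
        attributable_fraction = 1.0 - math.exp(-beta * delta_concentration)
        return baseline_deaths * attributable_fraction

    # Hypothetical numbers, for illustration only
    print(attributable_early_deaths(population=1_000_000,
                                    baseline_mortality_rate=0.008,
                                    delta_concentration=2.0,
                                    beta=0.0058))
    ```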

    “We have this multidimensional matrix that characterizes the impact of a state’s emissions of a given economic sector of a given pollutant at a given time, on any other state’s health outcomes,” Barrett says. “We can figure out, for example, how much NOx emissions from road transportation in Arizona in July affects human health in Texas, and we can do those calculations instantly.”
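    As a rough sketch of how such a multidimensional impact matrix can be queried, the snippet below builds a small labeled array with hypothetical dimensions, coordinates, and random values; the researchers’ actual data format and coordinates are not shown here.

    ```python
    import numpy as np
    import xarray as xr

    # Hypothetical impact matrix: early deaths in a receptor state attributable to
    # one unit of a pollutant emitted by a sector in a source state in a given month.
    states = ["AZ", "TX", "NY"]              # illustrative subset of the 48 states
    sectors = ["power", "road", "residential"]
    pollutants = ["SO2", "NOx", "PM2.5"]
    months = np.arange(1, 13)

    impact = xr.DataArray(
        np.random.rand(len(states), len(states), len(sectors), len(pollutants), len(months)),
        dims=["source_state", "receptor_state", "sector", "pollutant", "month"],
        coords=dict(source_state=states, receptor_state=states,
                    sector=sectors, pollutant=pollutants, month=months),
    )

    # e.g. July road-transport NOx emitted in Arizona, felt in Texas:
    deaths_per_unit = impact.sel(source_state="AZ", receptor_state="TX",
                                 sector="road", pollutant="NOx", month=7)
    print(float(deaths_per_unit))
    ```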

    Importing pollution

    The researchers also found that emissions traveling out of state could affect the health of residents beyond immediate, neighboring states.

    “It’s not necessarily just the adjacent state, but states over 1,000 miles away that can be affected,” Barrett says. “Different kinds of emissions have a different kind of range.”

    For example, electric power generation has the greatest range, as power plants can loft pollutants far into the atmosphere, allowing them to travel over long distances. In contrast, commercial and residential sectors generally emit pollutants that chemically do not last as long in the atmosphere.  

    “The story is different for each pollutant,” Barrett says.

    In general, the researchers found that out-of-state air pollution was associated with more than half of all pollution-related early deaths in the U.S. from 2005 to 2018.

    In terms of the impact on individual states, the team found that many of the northern Midwest states such as Wyoming and North Dakota are “net exporters” of pollution-related health impacts, partly because the populations there are relatively low and the emissions these states generate are carried away by winds to other states. Those states that “import” health impacts tend to lie along the East Coast, in the path of the U.S. winds that sweep eastward.

    New York in particular is what the researchers call “the biggest importer of air pollution deaths”; 60 percent of the state’s air-pollution-related early deaths result from out-of-state emissions.

    “There’s a big archive of data we’ve created from this project,” Barrett says. “We think there are a lot of things that policymakers can dig into, to chart a path to saving the most lives.”

    This research was supported, in part, by the U.S. Environmental Protection Agency, the MIT Martin Family Fellowship for Sustainability, the George and Marie Vergottis Fellowship at MIT, and the VoLo Foundation.

    1:51p
    Automated system can rewrite outdated sentences in Wikipedia articles

    A system created by MIT researchers could be used to automatically update factual inconsistencies in Wikipedia articles, reducing time and effort spent by human editors who now do the task manually.

    Wikipedia comprises millions of articles that are in constant need of edits to reflect new information. That can involve article expansions, major rewrites, or more routine modifications such as updating numbers, dates, names, and locations. Currently, humans across the globe volunteer their time to make these edits.  

    In a paper being presented at the AAAI Conference on Artificial Intelligence, the researchers describe a text-generating system that pinpoints and replaces specific information in relevant Wikipedia sentences, while keeping the language similar to how humans write and edit.

    The idea is that humans would type into an interface an unstructured sentence with updated information, without needing to worry about style or grammar. The system would then search Wikipedia, locate the appropriate page and outdated sentence, and rewrite it in a humanlike fashion. In the future, the researchers say, there’s potential to build a fully automated system that identifies and uses the latest information from around the web to produce rewritten sentences in corresponding Wikipedia articles that reflect updated information.

    “There are so many updates constantly needed to Wikipedia articles. It would be beneficial to automatically modify exact portions of the articles, with little to no human intervention,” says Darsh Shah, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and one of the lead authors. “Instead of hundreds of people working on modifying each Wikipedia article, then you’ll only need a few, because the model is helping or doing it automatically. That offers dramatic improvements in efficiency.”

    Many other bots exist that make automatic Wikipedia edits. Typically, those work on mitigating vandalism or dropping some narrowly defined information into predefined templates, Shah says. The researchers’ model, he says, solves a harder artificial intelligence problem: Given a new piece of unstructured information, the model automatically modifies the sentence in a humanlike fashion. “The other [bot] tasks are more rule-based, while this is a task requiring reasoning over contradictory parts in two sentences and generating a coherent piece of text,” he says.

    The system can be used for other text-generating applications as well, says co-lead author and CSAIL graduate student Tal Schuster. In their paper, the researchers also used it to automatically synthesize sentences in a popular fact-checking dataset that helped reduce bias, without manually collecting additional data. “This way, the performance improves for automatic fact-verification models that train on the dataset for, say, fake news detection,” Schuster says.

    Shah and Schuster worked on the paper with their academic advisor Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science and a professor in CSAIL.

    Neutrality masking and fusing

    Behind the system is a fair bit of text-generating ingenuity in identifying contradictory information between, and then fusing together, two separate sentences. It takes as input an “outdated” sentence from a Wikipedia article, plus a separate “claim” sentence that contains the updated and conflicting information. The system must automatically delete and keep specific words in the outdated sentence, based on information in the claim, to update facts but maintain style and grammar. That’s an easy task for humans, but a novel one in machine learning.

    For example, say there’s a required update to this sentence: “Fund A considers 28 of their 42 minority stakeholdings in operationally active companies to be of particular significance to the group.” The claim sentence with updated information may read: “Fund A considers 23 of 43 minority stakeholdings significant.” The system would locate the relevant Wikipedia text for “Fund A,” based on the claim. It then automatically strips out the outdated numbers (28 and 42) and replaces them with the new numbers (23 and 43), while otherwise keeping the sentence unchanged and grammatically correct. (In their work, the researchers ran the system on a dataset of specific Wikipedia sentences, not on all Wikipedia pages.)
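    The desired input-output behavior on this example can be sketched crudely as follows; this toy snippet only mimics the end result with a regular expression, whereas the actual system uses the learned masking and fusion models described below.

    ```python
    import re

    outdated = ("Fund A considers 28 of their 42 minority stakeholdings in operationally "
                "active companies to be of particular significance to the group.")
    claim = "Fund A considers 23 of 43 minority stakeholdings significant."

    # Replace the numbers in the outdated sentence with those in the claim, in order.
    new_numbers = iter(re.findall(r"\d+", claim))
    updated = re.sub(r"\d+", lambda m: next(new_numbers), outdated)
    print(updated)
    # Fund A considers 23 of their 43 minority stakeholdings in operationally
    # active companies to be of particular significance to the group.
    ```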

    The system was trained on a popular dataset that contains pairs of sentences, in which one sentence is a claim and the other is a relevant Wikipedia sentence. Each pair is labeled in one of three ways: “agree,” meaning the sentences contain matching factual information; “disagree,” meaning they contain contradictory information; or “neutral,” where there’s not enough information for either label. The system must make all disagreeing pairs agree, by modifying the outdated sentence to match the claim. That requires using two separate models to produce the desired output.
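    Illustratively, each entry in such a dataset pairs a claim with a Wikipedia sentence and one of the three labels; the sentences below are invented for the sake of example.

    ```python
    # Invented examples in the style of the training data: (claim, Wikipedia sentence, label)
    pairs = [
        ("The bridge is 300 meters long.", "The bridge spans 300 meters.", "agree"),
        ("The bridge is 300 meters long.", "The bridge spans 250 meters.", "disagree"),
        ("The bridge is 300 meters long.", "The bridge opened in 1932.", "neutral"),
    ]

    for claim, sentence, label in pairs:
        print(f"{label:9s} | {claim} | {sentence}")
    ```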

    The first model is a fact-checking classifier — pretrained to label each sentence pair as “agree,” “disagree,” or “neutral” — that focuses on disagreeing pairs. Running in conjunction with the classifier is a custom “neutrality masker” module that identifies which words in the outdated sentence contradict the claim. The module removes the minimal number of words required to “maximize neutrality” — meaning the pair can be labeled as neutral. That’s the starting point: While the sentences don’t agree, they no longer contain obviously contradictory information. The module creates a binary “mask” over the outdated sentence, where a 0 gets placed over words that most likely require deleting, while a 1 goes on top of keepers.

    After masking, a novel two-encoder-decoder framework is used to generate the final output sentence. This model learns compressed representations of the claim and the outdated sentence. Working in conjunction, the two encoder-decoders fuse the dissimilar words from the claim, by sliding them into the spots left vacant by the deleted words (the ones covered with 0s) in the outdated sentence.
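    A minimal procedural sketch of the masking-and-fusing idea is shown below; the example tokens and the contradiction set are hypothetical, and the paper’s model performs these steps with learned components rather than rules.

    ```python
    outdated = "Fund A considers 28 of their 42 minority stakeholdings significant".split()
    claim = "Fund A considers 23 of 43 minority stakeholdings significant".split()

    # Neutrality mask: 0 over tokens that contradict the claim, 1 over keepers.
    # In the paper this mask is predicted by the learned masker; here it is hard-coded.
    contradicted = {"28", "42"}
    mask = [0 if tok in contradicted else 1 for tok in outdated]

    # Fusion: slide the claim's unmatched tokens into the vacated (0) positions.
    kept = {tok for tok, m in zip(outdated, mask) if m == 1}
    fill = iter(tok for tok in claim if tok not in kept)
    updated = [next(fill, "") if m == 0 else tok for tok, m in zip(outdated, mask)]
    print(" ".join(updated))
    # Fund A considers 23 of their 43 minority stakeholdings significant
    ```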

    In one test, the model scored higher than all traditional methods, using a technique called “SARI” that measures how well machines delete, add, and keep words compared to the way humans modify sentences. They used a dataset with manually edited Wikipedia sentences, which the model hadn’t seen before. Compared to several traditional text-generating methods, the new model was more accurate in making factual updates and its output more closely resembled human writing. In another test, crowdsourced humans scored the model (on a scale of 1 to 5) based on how well its output sentences contained factual updates and matched human grammar. The model achieved average scores of 4 in factual updates and 3.85 in matching grammar.
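    SARI scores can be computed with off-the-shelf tooling; the sketch below assumes the Hugging Face evaluate package’s sari metric, and the sentences are illustrative rather than drawn from the paper’s test set.

    ```python
    # pip install evaluate
    import evaluate

    sari = evaluate.load("sari")

    sources = ["Fund A considers 28 of their 42 minority stakeholdings significant."]
    predictions = ["Fund A considers 23 of their 43 minority stakeholdings significant."]
    references = [["Fund A considers 23 of their 43 minority stakeholdings significant."]]

    # SARI rewards correctly kept, added, and deleted words relative to the source
    # sentence and the human-edited references.
    print(sari.compute(sources=sources, predictions=predictions, references=references))
    ```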

    Removing bias

    The study also showed that the system can be used to augment datasets to eliminate bias when training detectors of “fake news,” a form of propaganda containing disinformation created to mislead readers in order to generate website views or steer public opinion. Some of these detectors train on datasets of agree-disagree sentence pairs to “learn” to verify a claim by matching it to given evidence.

    In these pairs, the claim will either match certain information with a supporting “evidence” sentence from Wikipedia (agree) or it will be modified by humans to include information contradictory to the evidence sentence (disagree). The models are trained to flag claims with refuting evidence as “false,” which can be used to help identify fake news.

    Unfortunately, such datasets currently come with unintended biases, Shah says: “During training, models use some language of the human-written claims as ‘give-away’ phrases to mark them as false, without relying much on the corresponding evidence sentence. This reduces the model’s accuracy when evaluating real-world examples, as it does not perform fact-checking.”

    The researchers used the same deletion and fusion techniques from their Wikipedia project to balance the disagree-agree pairs in the dataset and help mitigate the bias. For some “disagree” pairs, they used the modified sentence’s false information to regenerate a fake “evidence” supporting sentence. Some of the give-away phrases then exist in both the “agree” and “disagree” sentences, which forces models to analyze more features. Using their augmented dataset, the researchers reduced the error rate of a popular fake-news detector by 13 percent.
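    In spirit, the augmentation looks like the toy example below, in which an invented give-away phrase ends up in both an “agree” and a “disagree” pair; the sentences are made up for illustration and are not from the actual dataset.

    ```python
    # A biased pair: the claim's phrasing ("did not") alone hints at the "disagree" label.
    claim = "The company did not report a profit in 2019."
    evidence = "The company reported a profit of $2 million in 2019."
    pairs = [(claim, evidence, "disagree")]

    # Augmentation: regenerate a fake evidence sentence that supports the claim's
    # false information, so the same give-away phrasing also appears in an "agree" pair.
    regenerated = "The company did not report a profit in 2019, posting a loss instead."
    pairs.append((claim, regenerated, "agree"))

    for claim_text, evidence_text, label in pairs:
        print(label, "|", evidence_text)
    ```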

    “If you have a bias in your dataset, and you’re fooling your model into just looking at one sentence in a disagree pair to make predictions, your model will not survive the real world,” Shah says. “We make models look at both sentences in all agree-disagree pairs.”

    3:40p
    Why C. difficile infection spreads despite increased sanitation practices

    New research from MIT suggests the risk of becoming colonized by Clostridium difficile (C. difficile) increases immediately following gastrointestinal (GI) disturbances that result in diarrhea.

    Once widely considered an antibiotic- and hospital-associated pathogen, recent research into C. difficile has shown the infection is more frequently acquired outside of hospitals. Now, a team of researchers has shown that GI disturbances, such as those caused by food poisoning and laxative abuse, trigger susceptibility to colonization by C. difficile, and carriers remain C. difficile-positive for a year or longer.

    “Our work helps show why the hospital and antibiotic association of C. difficile infections is an oversimplification of the risks and transmission patterns, and helps reconcile a lot of the observations that have followed the more recent revelation that transmission within hospitals is uncommon,” says David VanInsberghe PhD '19, a recent graduate of the MIT Department of Biology and lead author of the study, “Diarrheal events can trigger long-term Clostridium difficile colonization with recurrent blooms,” published in Nature Microbiology on Feb. 10.

    The researchers analyzed human gut microbiome time series studies conducted on individuals who had diarrhea illnesses and were not treated with antibiotics. Observing the colonization of C. difficile soon after the illnesses were acquired, they tested this association directly by feeding mice increasing quantities of laxatives while exposing them to non-pathogenic C. difficile spores. Their results suggest that GI disturbances create a window of susceptibility to C. difficile colonization during recovery.

    Further, the researchers found that carriers shed C. difficile in highly variable amounts from day to day; the number of C. difficile cells shed in a carrier’s stool can increase by over 1,000 times in one day. These recurrent blooms likely influence the transmissibility of C. difficile outside of hospitals, and their unpredictability calls into question the reliability of single time-point diagnostics for detecting carriers.

    “In our study, two of the people we followed with high temporal resolution became carriers outside of the hospital,” says VanInsberghe, who is now a postdoc in the Department of Pathology at Emory University. “The observations we made from their data helped us understand how people become susceptible to colonization and what the short- and long-term patterns in C. difficile abundance in carriers look like. Those patterns told us a lot about how C. difficile can spread between people outside of hospitals.”

    “I believe that there is a lot of rethinking of C. diff infections at the moment and I hope our study will help contribute to ultimately better manage the risks associated with it,” says Martin Polz, senior author of the study and a visiting professor in MIT’s Parsons Laboratory for Environmental Science and Engineering within the MIT Department of Civil and Environmental Engineering.

    The research team also included Joseph A. Elsherbini, a graduate student in the MIT Department of Biology; Bernard Varian, a researcher in MIT’s Division of Comparative Medicine; Theofilos Poutahidis, a professor in the Department of Pathology within the College of Veterinary Medicine at Aristotle University in Greece; and Susan Erdman, a principal research scientist in MIT’s Division of Comparative Medicine.

    11:59p
    “Sensorized” skin helps soft robots find their bearings

    For the first time, MIT researchers have enabled a soft robotic arm to understand its configuration in 3D space, by leveraging only motion and position data from its own “sensorized” skin.

    Soft robots constructed from highly compliant materials, similar to those found in living organisms, are being championed as safer, and more adaptable, resilient, and bioinspired alternatives to traditional rigid robots. But giving autonomous control to these deformable robots is a monumental task because they can move in a virtually infinite number of directions at any given moment. That makes it difficult to train planning and control models that drive automation.

    Traditional methods to achieve autonomous control use large systems of multiple motion-capture cameras that provide the robots feedback about 3D movement and positions. But those are impractical for soft robots in real-world applications.

    In a paper being published in the journal IEEE Robotics and Automation Letters, the researchers describe a system of soft sensors that cover a robot’s body to provide “proprioception,” meaning awareness of the motion and position of its body. That feedback is fed into a novel deep-learning model that sifts through the noise and captures clear signals to estimate the robot’s 3D configuration. The researchers validated their system on a soft robotic arm resembling an elephant trunk that can predict its own position as it autonomously swings around and extends.

    The sensors can be fabricated using off-the-shelf materials, meaning any lab can develop its own systems, says Ryan Truby, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) who is co-first author on the paper along with CSAIL postdoc Cosimo Della Santina.

    “We’re sensorizing soft robots to get feedback for control from sensors, not vision systems, using a very easy, rapid method for fabrication,” he says. “We want to use these soft robotic trunks, for instance, to orient and control themselves automatically, to pick things up and interact with the world. This is a first step toward that type of more sophisticated automated control.”

    One future aim is to help make artificial limbs that can more dexterously handle and manipulate objects in the environment. “Think of your own body: You can close your eyes and reconstruct the world based on feedback from your skin,” says co-author Daniela Rus, director of CSAIL and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “We want to design those same capabilities for soft robots.”

    Shaping soft sensors

    A longtime goal in soft robotics has been fully integrated body sensors. Traditional rigid sensors detract from a soft robot body’s natural compliance, complicate its design and fabrication, and can cause various mechanical failures. Soft-material-based sensors are a more suitable alternative, but require specialized materials and methods for their design, making them difficult for many robotics labs to fabricate and integrate in soft robots.

    While working in his CSAIL lab one day looking for inspiration for sensor materials, Truby made an interesting connection. “I found these sheets of conductive materials used for electromagnetic interference shielding, that you can buy anywhere in rolls,” he says. These materials have “piezoresistive” properties, meaning they change in electrical resistance when strained. Truby realized they could make effective soft sensors if they were placed on certain spots on the trunk. As the sensor deforms in response to the trunk’s stretching and compressing, its electrical resistance is converted to a specific output voltage. The voltage is then used as a signal correlating to that movement.
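    One common way to turn such a resistance change into a measurable voltage is a simple voltage divider against a fixed reference resistor; the sketch below is a generic illustration with made-up component values, not the readout circuit from the paper.

    ```python
    def divider_voltage(v_in, r_ref, r_sensor):
        """Output voltage of a voltage divider with the piezoresistive sensor on
        the low side: V_out = V_in * R_sensor / (R_ref + R_sensor)."""
        return v_in * r_sensor / (r_ref + r_sensor)

    # Hypothetical values: a 10 kOhm sensor stretched until its resistance doubles.
    for r_sensor in (10_000, 15_000, 20_000):
        print(r_sensor, round(divider_voltage(v_in=3.3, r_ref=10_000, r_sensor=r_sensor), 3))
    ```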

    But the material didn’t stretch much, which would limit its use for soft robotics. Inspired by kirigami — a variation of origami that includes making cuts in a material — Truby designed and laser-cut rectangular strips of conductive silicone sheets into various patterns, such as rows of tiny holes or crisscrossing slices like a chain link fence. That made them far more flexible, stretchable, “and beautiful to look at,” Truby says.


    The researchers’ robotic trunk comprises three segments, each with four fluidic actuators (12 total) used to move the arm. They fused one sensor over each segment, with each sensor covering and gathering data from one embedded actuator in the soft robot. They used “plasma bonding,” a technique that energizes a surface of a material to make it bond to another material. It takes roughly a couple hours to shape dozens of sensors that can be bonded to the soft robots using a handheld plasma-bonding device.


    “Learning” configurations

    As hypothesized, the sensors did capture the trunk’s general movement. But they were really noisy. “Essentially, they’re nonideal sensors in many ways,” Truby says. “But that’s just a common fact of making sensors from soft conductive materials. Higher-performing and more reliable sensors require specialized tools that most robotics labs do not have.”

    To estimate the soft robot’s configuration using only the sensors, the researchers built a deep neural network to do most of the heavy lifting, sifting through the noise to capture meaningful feedback signals. They also developed a new model to kinematically describe the soft robot’s shape, which vastly reduces the number of variables the network needs to process.

    In experiments, the researchers had the trunk swing around and extend itself in random configurations over approximately an hour and a half. They used a traditional motion-capture system to provide ground-truth data. In training, the model analyzed data from the sensors to predict a configuration and compared its predictions to the ground-truth data collected simultaneously. In doing so, the model “learns” to map signal patterns from its sensors to real-world configurations. Results indicated that, for certain steadier configurations, the robot’s estimated shape matched the ground truth.
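    The training step described here amounts to a regression from sensor signals onto simultaneously recorded motion-capture configurations. The sketch below shows that setup generically; the network size, the number of configuration variables, and the random placeholder tensors are hypothetical and do not reflect the paper’s architecture or data.

    ```python
    import torch
    import torch.nn as nn

    # Placeholders standing in for logged data: one row per time step of sensor
    # readings and the motion-capture configuration recorded at the same moment.
    sensor_log = torch.randn(5000, 12)   # hypothetical: 12 sensor channels
    mocap_log = torch.randn(5000, 6)     # hypothetical: 6 kinematic variables

    # A small regression network mapping noisy sensor signals to a configuration.
    model = nn.Sequential(nn.Linear(12, 64), nn.ReLU(),
                          nn.Linear(64, 64), nn.ReLU(),
                          nn.Linear(64, 6))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(100):
        predicted = model(sensor_log)           # estimated configurations
        loss = loss_fn(predicted, mocap_log)    # compare to motion-capture ground truth
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    ```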

    Next, the researchers aim to explore new sensor designs for improved sensitivity and to develop new models and deep-learning methods to reduce the required training for every new soft robot. They also hope to refine the system to better capture the robot’s full dynamic motions.

    Currently, the neural network and sensor skin are not sensitive enough to capture subtle motions or dynamic movements. But, for now, this is an important first step for learning-based approaches to soft robotic control, Truby says: “Like our soft robots, living systems don’t have to be totally precise. Humans are not precise machines, compared to our rigid robotic counterparts, and we do just fine.”

