MIT Research News' Journal
 

Thursday, January 23rd, 2020

    12:00a
    Printing objects that can incorporate living organisms

    A method for printing 3D objects that can control living organisms in predictable ways has been developed by an interdisciplinary team of researchers at MIT and elsewhere. The technique may lead to 3D printing of biomedical tools, such as customized braces, that incorporate living cells to produce therapeutic compounds such as painkillers or topical treatments, the researchers say.

    The new development was led by MIT Media Lab Associate Professor Neri Oxman and graduate students Rachel Soo Hoo Smith, Christoph Bader, and Sunanda Sharma, along with six others at MIT and at Harvard University’s Wyss Institute and Dana-Farber Cancer Institute. The system is described in a paper recently published in the journal Advanced Functional Materials.

    “We call them hybrid living materials, or HLMs,” Smith says. For their initial proof-of-concept experiments, the team precisely incorporated various chemicals into the 3D printing process. These chemicals act as signals to activate certain responses in biologically engineered microbes, which are spray-coated onto the printed object. Once added, the microbes display specific colors or fluorescence in response to the chemical signals.

    In their study, the team describes the appearance of these colored patterns in a variety of printed objects, which they say demonstrates the successful incorporation of the living cells into the surface of the 3D-printed material, and the cells’ activation in response to the selectively placed chemicals.

    The objective is to make a robust design tool for producing objects and devices incorporating living biological elements, made in a way that is as predictable and scalable as other industrial manufacturing processes.

    The team uses a multistep process to produce their hybrid living materials. First, they use a commercially available multimaterial inkjet-based 3D printer, and customized recipes for the combinations of resins and chemical signals used for printing. For example, they found that one type of resin, normally used just to produce a temporary support for overhanging parts of a printed structure and then dissolved away after printing, could produce useful results by being mixed in with the structural resin material. The parts of the structure that incorporate this support material become absorbent and are able to retain the chemical signals that control the behavior of the living organisms.

    Finally, the living layer is added: a surface coating of hydrogel — a gelatinous material composed mostly of water but providing a stable and durable lattice structure — is infused with biologically engineered bacteria and spray-coated onto the object.

    “We can define very specific shapes and distributions of the hybrid living materials and the biosynthesized products, whether they be colors or therapeutic agents, within the printed shapes,” Smith says. Some of these initial test shapes were made as silver-dollar-sized disks, and others in the form of colorful face masks, with the colors provided by the living bacteria within their structure. The colors take several hours to develop as the bacteria grow, and then remain stable once they are in place.

    “There are exciting practical applications with this approach, since designers are now able to control and pattern the growth of living systems through a computational algorithm,” Oxman says. “Combining computational design, additive manufacturing, and synthetic biology, the HLM platform points toward the far-reaching impact these technologies may have across seemingly disparate fields, ‘enlivening’ design and the object space.”

    The printing platform the team used allows the material properties of the printed object to be varied precisely and continuously between different parts of the structure, with some sections stiffer and others more flexible, and some more absorbent and others liquid-repellent. Such variations could be useful in the design of biomedical devices that can provide strength and support while also being soft and pliable to provide comfort in places where they are in contact with the body.

    The team included specialists in biology, bioengineering, and computer science to come up with a system that yields predictable patterning of the biological behavior across the printed object, despite the effects of factors such as diffusion of chemicals through the material. Through computer modeling of these effects, the researchers produced software that they say offers levels of precision comparable to the computer-aided design (CAD) systems used for traditional 3D printing.
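
    The article doesn't detail that software, but the core calculation it alludes to — predicting how a deposited chemical signal spreads through an absorbent printed layer — can be sketched with a simple finite-difference diffusion model. The sketch below is purely illustrative; the function name and every parameter value are assumptions, not taken from the paper.

    ```python
    import numpy as np

    # Minimal 1-D diffusion sketch (Fick's second law, explicit finite differences).
    # All values are illustrative assumptions, not parameters from the HLM paper.
    def diffuse(concentration, diffusivity=1e-9, dx=1e-4, dt=1.0, steps=3600):
        c = np.asarray(concentration, dtype=float).copy()
        r = diffusivity * dt / dx**2          # explicit scheme is stable for r <= 0.5
        assert r <= 0.5, "time step too large for this grid spacing"
        for _ in range(steps):
            c[1:-1] += r * (c[2:] - 2 * c[1:-1] + c[:-2])   # update interior points; edges held at zero
        return c

    # A signal deposited at the center of a 1 mm strip, left to spread for one hour.
    profile = np.zeros(11)
    profile[5] = 1.0
    print(diffuse(profile).round(3))
    ```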

    The multiresin 3D printing platform can use anywhere from three to seven different resins with different properties, mixed in any proportions. In combination with synthetic biological engineering, this makes it possible to design objects with biological surfaces that can be programmed to respond in specific ways to particular stimuli such as light or temperature or chemical signals, in ways that are reproducible yet completely customizable, and that can be produced on demand, the researchers say.

    “In the future, the pigments included in the masks can be replaced with useful chemical substances for human augmentation such as vitamins, antibodies or antimicrobial drugs,” Oxman says. “Imagine, for example, a wearable interface designed to guide ad-hoc antibiotic formation customized to fit the genetic makeup of its user. Or, consider smart packaging that can detect contamination, or environmentally responsive architectural skins that can respond and adapt — in real-time — to environmental cues.”

    In their tests, the team used genetically modified E. coli bacteria, because these grow rapidly and are widely used and studied, but in principle other organisms could be used as well, the researchers say.

    The team included Dominik Kolb, Tzu-Chieh Tang, Christopher Voigt, and Felix Moser at MIT; Ahmed Hosny at the Dana-Farber Cancer Institute of Harvard Medical School; and James Weaver at Harvard University’s Wyss Institute. The work was supported by the Robert Wood Johnson Foundation, Gettylab, the DARPA Engineered Living Materials agreement, and a National Security Science and Engineering Faculty Fellowship.

    12:00a
    Using artificial intelligence to enrich digital maps

    A model invented by researchers at MIT and Qatar Computing Research Institute (QCRI) that uses satellite imagery to tag road features in digital maps could help improve GPS navigation.  

    Showing drivers more details about their routes can often help them navigate in unfamiliar locations. Lane counts, for instance, can enable a GPS system to warn drivers of diverging or merging lanes. Incorporating information about parking spots can help drivers plan ahead, while mapping bicycle lanes can help cyclists negotiate busy city streets. Providing updated information on road conditions can also improve planning for disaster relief.

    But creating detailed maps is an expensive, time-consuming process done mostly by big companies, such as Google, which sends vehicles around with cameras strapped to their hoods to capture video and images of an area’s roads. Combining that with other data can create accurate, up-to-date maps. Because this process is expensive, however, some parts of the world are ignored.

    A solution is to unleash machine-learning models on satellite images — which are easier to obtain and updated fairly regularly — to automatically tag road features. But roads can be occluded by, say, trees and buildings, making it a challenging task. In a paper being presented at the Association for the Advancement of Artificial Intelligence conference, the MIT and QCRI researchers describe “RoadTagger,” which uses a combination of neural network architectures to automatically predict the number of lanes and road types (residential or highway) behind obstructions.

    In testing RoadTagger on occluded roads from digital maps of 20 U.S. cities, the model counted lane numbers with 77 percent accuracy and inferred road types with 93 percent accuracy. The researchers are also planning to enable RoadTagger to predict other features, such as parking spots and bike lanes.

    “Most updated digital maps are from places that big companies care the most about. If you’re in places they don’t care about much, you’re at a disadvantage with respect to the quality of map,” says co-author Sam Madden, a professor in the Department of Electrical Engineering and Computer Science (EECS) and a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “Our goal is to automate the process of generating high-quality digital maps, so they can be available in any country.”

    The paper’s co-authors are CSAIL graduate students Songtao He, Favyen Bastani, and Edward Park; EECS undergraduate student Satvat Jagwani; CSAIL professors Mohammad Alizadeh and Hari Balakrishnan; and QCRI researchers Sanjay Chawla, Sofiane Abbar, and Mohammad Amin Sadeghi.

    Combining CNN and GNN

    Qatar, where QCRI is based, is “not a priority for the large companies building digital maps,” Madden says. Yet it’s constantly building new roads and improving old ones, especially in preparation for hosting the 2022 FIFA World Cup.

    “While visiting Qatar, we’ve had experiences where our Uber driver can’t figure out how to get where he’s going, because the map is so off,” Madden says. “If navigation apps don’t have the right information, for things such as lane merging, this could be frustrating or worse.”

    RoadTagger relies on a novel combination of a convolutional neural network (CNN) — commonly used for image-processing tasks — and a graph neural network (GNN). GNNs model relationships between connected nodes in a graph and have become popular for analyzing things like social networks and molecular dynamics. The model is “end-to-end,” meaning it’s fed only raw data and automatically produces output, without human intervention.

    The CNN takes as input raw satellite images of target roads. The GNN breaks the road into roughly 20-meter segments, or “tiles.” Each tile is a separate graph node, connected by lines along the road. For each node, the CNN extracts road features and shares that information with its immediate neighbors. Road information propagates along the whole graph, with each node receiving some information about road attributes in every other node. If a certain tile is occluded in an image, RoadTagger uses information from all tiles along the road to predict what’s behind the occlusion.
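
    The article doesn't reproduce the network itself, but the tile-and-graph idea can be sketched roughly as follows: a small CNN encodes each satellite tile, and a few rounds of message passing let each tile's representation absorb information from its neighbors along the road. This is a minimal PyTorch-style sketch; the layer sizes, class names, and number of message-passing rounds are illustrative assumptions, not the published RoadTagger architecture.

    ```python
    import torch
    import torch.nn as nn

    class TileEncoder(nn.Module):
        """CNN that turns one satellite-image tile into a feature vector (sizes illustrative)."""
        def __init__(self, feat_dim=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.proj = nn.Linear(32, feat_dim)

        def forward(self, tiles):               # tiles: (num_tiles, 3, H, W)
            return self.proj(self.conv(tiles).flatten(1))

    class RoadGNN(nn.Module):
        """Message passing along the road graph, then per-tile attribute heads."""
        def __init__(self, feat_dim=64, num_lane_classes=6, rounds=4):
            super().__init__()
            self.rounds = rounds
            self.update = nn.GRUCell(feat_dim, feat_dim)
            self.lane_head = nn.Linear(feat_dim, num_lane_classes)   # lane count
            self.type_head = nn.Linear(feat_dim, 2)                  # residential vs. highway

        def forward(self, feats, adjacency):    # adjacency: (num_tiles, num_tiles), row-normalized
            h = feats
            for _ in range(self.rounds):
                msg = adjacency @ h              # each tile averages its neighbors' states
                h = self.update(msg, h)          # and updates its own representation
            return self.lane_head(h), self.type_head(h)
    ```

    Because predictions come from the propagated state rather than from each tile alone, an occluded tile's output is informed by its clearer neighbors.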

    This combined architecture represents a more human-like intuition, the researchers say. Say part of a four-lane road is occluded by trees, so certain tiles show only two lanes. Humans can easily surmise that a couple lanes are hidden behind the trees. Traditional machine-learning models — say, just a CNN — extract features only of individual tiles and most likely predict the occluded tile is a two-lane road.

    “Humans can use information from adjacent tiles to guess the number of lanes in the occluded tiles, but networks can’t do that,” He says. “Our approach tries to mimic the natural behavior of humans, where we capture local information from the CNN and global information from the GNN to make better predictions.”

    Learning weights   

    To train and test RoadTagger, the researchers used a real-world map dataset, called OpenStreetMap, which lets users edit and curate digital maps around the globe. From that dataset, they collected confirmed road attributes from 688 square kilometers of maps of 20 U.S. cities — including Boston, Chicago, Washington, and Seattle. Then, they gathered the corresponding satellite images from a Google Maps dataset.

    In training, RoadTagger learns weights — which assign varying degrees of importance to features and node connections — of the CNN and GNN. The CNN extracts features from pixel patterns of tiles and the GNN propagates the learned features along the graph. From randomly selected subgraphs of the road, the system learns to predict the road features at each tile. In doing so, it automatically learns which image features are useful and how to propagate those features along the graph. For instance, if a target tile has unclear lane markings, but its neighbor tile has four lanes with clear lane markings and shares the same road width, then the target tile is likely to also have four lanes. In this case, the model automatically learns that the road width is a useful image feature, so if two adjacent tiles share the same road width, they’re likely to have the same lane count.
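
    Tying that together with the sketch above, a training loop of this kind might look like the following. Here sample_subgraph() is a hypothetical helper standing in for the OpenStreetMap/Google Maps data pipeline; it is an assumption for illustration, not part of any released code.

    ```python
    import torch
    import torch.nn.functional as F

    # Rough training sketch built on the TileEncoder/RoadGNN classes sketched above.
    # sample_subgraph() is a hypothetical helper that would return a random road
    # subgraph: image tiles, a row-normalized adjacency matrix, and per-tile labels.
    encoder, gnn = TileEncoder(), RoadGNN()
    params = list(encoder.parameters()) + list(gnn.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-3)

    for step in range(10_000):
        tiles, adjacency, lane_labels, type_labels = sample_subgraph()
        lane_logits, type_logits = gnn(encoder(tiles), adjacency)
        # Supervise every tile's lane count and road type at once, so the network
        # learns which image features (e.g., road width) are worth propagating.
        loss = F.cross_entropy(lane_logits, lane_labels) + F.cross_entropy(type_logits, type_labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    ```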

    Given a road from OpenStreetMap not seen in training, the model breaks the road into tiles and uses its learned weights to make predictions. Tasked with predicting the number of lanes in an occluded tile, the model notes that neighboring tiles have matching pixel patterns and are therefore highly likely to share information. So, if those tiles have four lanes, the occluded tile likely has four as well.

    In another result, RoadTagger accurately predicted lane numbers in a dataset of synthesized, highly challenging road disruptions. As one example, an overpass with two lanes covered a few tiles of a target road with four lanes. The model detected mismatched pixel patterns of the overpass, so it ignored the two lanes over the covered tiles, accurately predicting four lanes were underneath.

    The researchers hope to use RoadTagger to help humans rapidly validate and approve continuous modifications to infrastructure in datasets such as OpenStreetMap, where many maps don’t contain lane counts or other details. A specific area of interest is Thailand, Bastani says, where roads are constantly changing, but there are few if any updates in the dataset.

    “Roads that were once labeled as dirt roads have been paved over so are better to drive on, and some intersections have been completely built over. There are changes every year, but digital maps are out of date,” he says. “We want to constantly update such road attributes based on the most recent imagery.”

    5:00a
    Technique reveals whether models of patient risk are accurate

    After a patient has a heart attack or stroke, doctors often use risk models to help guide their treatment. These models can calculate a patient’s risk of dying based on factors such as the patient’s age, symptoms, and other characteristics.

    While these models are useful in most cases, they do not make accurate predictions for many patients, which can lead doctors to choose ineffective or unnecessarily risky treatments for some patients.

    “Every risk model is evaluated on some dataset of patients, and even if it has high accuracy, it is never 100 percent accurate in practice,” says Collin Stultz, a professor of electrical engineering and computer science at MIT and a cardiologist at Massachusetts General Hospital. “There are going to be some patients for which the model will get the wrong answer, and that can be disastrous.”

    Stultz and his colleagues from MIT, IBM Research, and the University of Massachusetts Medical School have now developed a method that allows them to determine whether a particular model’s results can be trusted for a given patient. This could help guide doctors to choose better treatments for those patients, the researchers say.

    Stultz, who is also a professor of health sciences and technology, a member of MIT’s Institute for Medical Engineering and Science and Research Laboratory of Electronics, and an associate member of the Computer Science and Artificial Intelligence Laboratory, is the senior author of the new study. MIT graduate student Paul Myers is the lead author of the paper, which appears today in Digital Medicine.

    Modeling risk

    Computer models that can predict a patient’s risk of harmful events, including death, are used widely in medicine. These models are often created by training machine-learning algorithms to analyze patient datasets that include a variety of information about the patients, including their health outcomes.

    While these models have high overall accuracy, “very little thought has gone into identifying when a model is likely to fail,” Stultz says. “We are trying to create a shift in the way that people think about these machine-learning models. Thinking about when to apply a model is really important because the consequence of being wrong can be fatal.”

    For instance, a patient at high risk who is misclassified would not receive sufficiently aggressive treatment, while a low-risk patient inaccurately determined to be at high risk could receive unnecessary, potentially harmful interventions.

    To illustrate how the method works, the researchers chose to focus on a widely used risk model called the GRACE risk score, but the technique can be applied to nearly any type of risk model. GRACE, which stands for Global Registry of Acute Coronary Events, is a large dataset that was used to develop a risk model that evaluates a patient’s risk of death within six months after suffering an acute coronary syndrome (a condition caused by decreased blood flow to the heart). The resulting risk assessment is based on age, blood pressure, heart rate, and other readily available clinical features.

    The researchers’ new technique generates an “unreliability score” that ranges from 0 to 1. For a given risk-model prediction, the higher the score, the more unreliable that prediction. The unreliability score is based on a comparison of the risk prediction generated by a particular model, such as the GRACE risk score, with the prediction produced by a different model that was trained on the same dataset. If the models produce different results, then it is likely that the risk-model prediction for that patient is not reliable, Stultz says.
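
    The article doesn't give the formula itself, but the comparison it describes — scoring each patient by how much two models trained on the same data disagree — can be sketched as below. The specific disagreement measure (absolute difference of predicted probabilities) is an illustrative assumption, not necessarily the paper's definition.

    ```python
    import numpy as np

    def unreliability_score(p_primary, p_alternate):
        """Per-patient disagreement between two risk models' predicted probabilities.

        Both inputs are probabilities in [0, 1], so the score also lies in [0, 1];
        higher means the primary model's prediction is less trustworthy for that
        patient. The absolute-difference choice here is illustrative only.
        """
        p_primary = np.asarray(p_primary, dtype=float)
        p_alternate = np.asarray(p_alternate, dtype=float)
        return np.abs(p_primary - p_alternate)

    # Example: the two models agree on the first patient but diverge on the second,
    # flagging the second patient's risk estimate for extra scrutiny.
    print(unreliability_score([0.10, 0.05], [0.12, 0.60]))   # -> [0.02 0.55]
    ```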

    “What we show in this paper is, if you look at patients who have the highest unreliability scores — in the top 1 percent — the risk prediction for that patient yields the same information as flipping a coin,” Stultz says. “For those patients, the GRACE score cannot discriminate between those who die and those who don’t. It’s completely useless for those patients.”

    The researchers’ findings also suggested that the patients for whom the models don’t work well tend to be older and to have a higher incidence of cardiac risk factors.

    One significant advantage of the method is that the researchers derived a formula that quantifies how much the two predictions would disagree, without having to build a completely new model based on the original dataset.

    “You don’t need access to the training dataset itself in order to compute this unreliability measurement, and that’s important because there are privacy issues that prevent these clinical datasets from being widely accessible to different people,” Stultz says.

    Retraining the model

    The researchers are now designing a user interface that doctors could use to evaluate whether a given patient’s GRACE score is reliable. In the longer term, they also hope to improve the reliability of risk models by making it easier to retrain models on data that include more patients who are similar to the patient being diagnosed.

    “If the model is simple enough, then retraining a model can be fast. You could imagine a whole suite of software integrated into the electronic health record that would automatically tell you whether a particular risk score is appropriate for a given patient, and then try to do things on the fly, like retrain new models that might be more appropriate,” Stultz says.

    The research was funded by the MIT-IBM Watson AI Lab. Other authors of the paper include MIT graduate student Wangzhi Dai; Kenney Ng, Kristen Severson, and Uri Kartoun of the Center for Computational Health at IBM Research; and Wei Huang and Frederick Anderson of the Center for Outcomes Research at the University of Massachusetts Medical School.

    2:40p
    The new front against antibiotic resistance

    After Alexander Fleming discovered the antibiotic penicillin in 1928, spurring a “golden age” of drug development, many scientists thought infectious disease would become a horror of the past. But as antibiotics have been overprescribed and used without adhering to strict regimens, bacterial strains have evolved new defenses that render previously effective drugs useless. Tuberculosis, once held at bay, has surpassed HIV/AIDS as the leading cause of death from infectious disease worldwide. And research in the lab hasn’t caught up to the needs of the clinic. In recent years, the U.S. Food and Drug Administration has approved only one or two new antibiotics annually.

    While these frustrations have led many scientists and drug developers to abandon the field, researchers are finally making breakthroughs in the discovery of new antibiotics. On Jan. 9, the Department of Biology hosted a talk by one of the chemical biologists who won’t quit: Deborah Hung, core member and co-director of the Infectious Disease and Microbiome Program at the Broad Institute of MIT and Harvard, and associate professor in the Department of Genetics at Harvard Medical School.

    Each January during Independent Activities Period, the Department of Biology organizes a seminar series that highlights cutting-edge research in biology. Past series have included talks on synthetic and quantitative biology. This year’s theme is Microbes in Health and Disease. The team of student organizers, led by assistant professor of biology Omer Yilmaz, chose to explore our growing understanding of microbes as both pathogens and symbionts in the body. Hung’s presentation provided an invigorating introduction to the series.

    “Deborah is an international pioneer in developing tools and discovering new biology on the interaction between hosts and pathogens,” Yilmaz says. “She's done a lot of work on tuberculosis as well as other bacterial infections. So it’s a privilege for us to host her talk.”

    A clinician as well as a chemical biologist, Hung understands firsthand the urgent need for new drugs. In her talk, she addressed the conventional approaches to finding new antibiotics, and why they’ve been failing scientists for decades.

    “The rate of resistance is actually far outpacing our ability to discover new antibiotics,” she said. “I’m beginning to see patients [and] I have to tell them, I’m sorry, we have no antibiotics left.”

    The way Hung sees it, there are two long-term goals in the fight against infectious disease. The first is to find a method that will greatly speed up the discovery of new antibiotics. The other is to think beyond antibiotics altogether, and find other ways to strengthen our bodies against intruders and increase patient survival.

    Last year, in pursuit of the first goal, Hung spearheaded a multi-institutional collaboration to develop a new high-throughput screening method called PROSPECT (PRimary screening Of Strains to Prioritize Expanded Chemistry and Targets). By weakening the expression of genes essential to survival in the tuberculosis bacterium, researchers genetically engineered over 400 unique “hypomorphs,” vulnerable in different ways, that could be screened in large batches against tens of thousands of chemical compounds using PROSPECT.

    With this approach, it’s possible to identify effective drug candidates 10 times faster than ever before. Some of the compounds Hung’s team has discovered, in addition to those that hit well-known targets like DNA gyrase and the cell wall, are able to kill tuberculosis in novel ways, such as disabling the bacterium’s molecular efflux pump.

    But one of the challenges to antibiotic discovery is that the drugs that will kill a disease in a test tube won’t necessarily kill the disease in a patient. In order to address her second goal of strengthening our bodies against disease-causing microbes, Hung and her lab are now using zebrafish embryos to screen small molecules not just for their extermination of a pathogen, but for the survival of the host. This way, they can investigate drugs that have no effect on bacteria in a test tube but, in Hung’s words, “throw a wrench in the system” and interact with the host’s cells to provide immunity.

    For much of the 20th century, microbes were primarily studied as agents of harm. But, more recent research into the microbiome — the trillions of organisms that inhabit our skin, gut, and cavities — has illuminated their complex and often symbiotic relationship with our immune system and bodily functions, which antibiotics can disrupt. The other three talks in the series, featuring researchers from Harvard Medical School, delve into the connections between the microbiome and colorectal cancer, inflammatory bowel disease, and stem cells.

    “We're just starting to scratch the surface of the dance between these different microbes, both good and bad, and their role in different aspects of organismal health, in terms of regeneration and other diseases such as cancer and infection,” Yilmaz says.

    For those in the audience, these seminars are more than just a way to pass an afternoon during IAP. Hung addressed the audience as potential future collaborators, and she stressed that antibiotic research needs all hands on deck.

    “It's always a work in progress for us,” she said. “If any of you are very computationally-minded or really interested in looking at these large datasets of chemical-genetic interactions, come see me. We are always looking for new ideas and great minds who want to try to take this on.”

    11:59p
    Study: Commercial air travel is safer than ever

    It has never been safer to fly on commercial airlines, according to a new study by an MIT professor that tracks the continued decrease in passenger fatalities around the globe.

    The study finds that between 2008 and 2017, airline passenger fatalities fell significantly compared to the previous decade, as measured per individual passenger boardings — essentially the aggregate number of passengers. Globally, that rate is now one death per 7.9 million passenger boardings, compared to one death per 2.7 million boardings during the period 1998-2007, and one death per 1.3 million boardings during 1988-1997.

    Going back further, the commercial airline fatality risk was one death per 750,000 boardings during 1978-1987, and one death per 350,000 boardings during 1968-1977.

    “The worldwide risk of being killed had been dropping by a factor of two every decade,” says Arnold Barnett, an MIT scholar who has published a new paper summarizing the study’s results. “Not only has that continued in the last decade, the [latest] improvement is closer to a factor of three. The pace of improvement has not slackened at all even as flying has gotten ever safer and further gains become harder to achieve. That is really quite impressive and is important for people to bear in mind.”
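
    Those improvement factors follow directly from the per-boarding rates quoted above; a quick calculation makes the trend concrete (the figures are taken from the article, rounded as reported):

    ```python
    # Worldwide fatality risk by decade, expressed as passenger boardings per death,
    # using the rounded figures quoted in the article.
    boardings_per_death = {
        "1968-1977": 350_000,
        "1978-1987": 750_000,
        "1988-1997": 1_300_000,
        "1998-2007": 2_700_000,
        "2008-2017": 7_900_000,
    }

    decades = list(boardings_per_death)
    for earlier, later in zip(decades, decades[1:]):
        factor = boardings_per_death[later] / boardings_per_death[earlier]
        print(f"{earlier} -> {later}: risk fell by a factor of {factor:.1f}")
    # The final step, 2.7 million -> 7.9 million boardings per death, is the roughly
    # factor-of-three improvement Barnett highlights for 2008-2017.
    ```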

    The paper, “Aviation Safety: A Whole New World?” was published online this month in Transportation Science. Barnett is the sole author.

    The new research also reveals that there is discernible regional variation in airline safety around the world. The study finds that the nations housing the lowest-risk airlines are the U.S., the members of the European Union, China, Japan, Canada, Australia, New Zealand, and Israel. The aggregate fatality risk among those nations was one death per 33.1 million passenger boardings during 2008-2017. Barnett chose the nation as the unit of measurement in the study because important safety regulations for both airlines and airports are decided at the national level.

    For airlines in a second set of countries, which Barnett terms the “advancing” set with an intermediate risk level, the rate is one death per 7.4 million boardings during 2008-2017. This group — comprising countries that are generally rapidly industrializing and have recently achieved high overall life expectancy and GDP per capita — includes many countries in Asia as well as some countries in South America and the Middle East.

    For a third and higher-risk set of developing countries, including some in Asia, Africa, and Latin America, the death risk during 2008-2017 was one per 1.2 million passenger boardings — an improvement from one death per 400,000 passenger boardings during 1998-2007.

    “The two most conspicuous changes compared to previous decades were sharp improvements in China and in Eastern Europe,” says Barnett, who is the George Eastman Professor of Management at the MIT Sloan School of Management. Those places, he notes, had safety records over the last decade that were strong even by the standards of the lowest-risk group of countries.

    Overall, Barnett suggests, the rate of fatalities has declined far faster than public fears about flying.

    “Flying has gotten safer and safer,” Barnett says. “It’s a factor of 10 safer than it was 40 years ago, although I bet anxiety levels have not gone down that much. I think it’s good to have the facts.”

    Barnett is a long-established expert in the field of aviation safety and risk, whose work has helped contextualize accident and safety statistics. Whatever the absolute numbers of air crashes and fatalities may be — and they fluctuate from year to year — Barnett has sought to measure those numbers against the growth of air travel.

    To conduct the current study, Barnett used data from a number of sources, including the Flight Safety Foundation’s Aviation Safety Network Accident Database. He mostly used data from the World Bank, based on information from the International Civil Aviation Organization, to measure the number of passengers carried, which is now roughly 4 billion per year.

    In the paper, Barnett discusses the pros and cons of some alternative metrics that could be used to evaluate commercial air safety, including deaths per flight and deaths per passenger miles traveled. He prefers to use deaths per boarding because, as he writes in the paper, “it literally reflects the fraction of passengers who perished during air journeys.”

    The new paper also includes historical data showing that even in today’s higher-risk areas for commercial aviation, the fatality rate is better, on aggregate, than it was in the leading air-travel countries just a few decades ago.

    “The risk now in the higher-risk countries is basically the risk we used to have 40-50 years ago” in the safest air-travel countries, Barnett notes.

    Barnett readily acknowledges that the paper is evaluating the overall numbers, and not providing a causal account of the air-safety trend; he says he welcomes further research attempting to explain the reasons for the continued gains in air safety.

    In the paper, Barnett also notes that year-to-year air fatality numbers have notable variation. In 2017, for instance, just 12 people died in the process of air travel, compared to 473 in 2018.

    “Even if the overall trendline is [steady], the numbers will bounce up and down,” Barnett says. For that reason, he thinks looking at trends a decade at a time is a better way of grasping the full trajectory of commercial airline safety.

    On a personal level, Barnett says he understands the kinds of concerns people have about airline travel. He began studying the subject partly because of his own worries about flying, and quips that he was trying to “sublimate my fears in a way that might be publishable.”

    Those kinds of instinctive fears may well be natural, but Barnett says he hopes that his work can at least build public knowledge about the facts and put them into perspective for people who are afraid of airplane accidents.

    “The risk is so low that being afraid to fly is a little like being afraid to go into the supermarket because the ceiling might collapse,” Barnett says.
