MIT Research News' Journal
Thursday, January 9th, 2020
Pathways to a low-carbon future

When it comes to fulfilling ambitious energy and climate commitments, few nations successfully walk their talk. A case in point is the Paris Agreement, initiated four years ago. Nearly 200 signatory nations submitted voluntary pledges to cut their contribution to the world’s greenhouse gas emissions by 2030, but many are not on track to fulfill these pledges. Moreover, only a small number of countries are now pursuing climate policies consistent with keeping global warming well below 2 degrees Celsius, the long-term target recommended by the Intergovernmental Panel on Climate Change (IPCC).
This growing discrepancy between current policies and long-term targets — combined with uncertainty about individual nations’ ability to fulfill their commitments due to administrative, technological, and cultural challenges — makes it increasingly difficult for scientists to project the future of the global energy system and its impact on the global climate. Nonetheless, these projections remain essential for decision-makers to assess the physical and financial risks of climate change and of efforts to transition to a low-carbon economy.
Toward that end, several expert groups continue to produce energy scenarios and analyze their implications for the climate. In a study in the journal Economics of Energy & Environmental Policy, Sergey Paltsev, deputy director of the MIT Joint Program on the Science and Policy of Global Change and a senior research scientist at the MIT Energy Initiative, collected projections of the global energy mix over the next two decades from several major energy-scenario producers. Aggregating results from scenarios developed by the MIT Joint Program, International Energy Agency, Shell, BP and ExxonMobil, and contrasting them with scenarios assessed by the IPCC that would be required to follow a pathway that limits global warming to 1.5 C, Paltsev arrived at three notable findings:
1. Fossil fuels decline, but still dominate. Assuming current Paris Agreement pledges are maintained beyond 2030, the share of fossil fuels in the global energy mix declines from approximately 80 percent today to 73-76 percent in 2040. In scenarios consistent with the 2 C goal, this share decreases to 56-61 percent in 2040. Meanwhile, the share of wind and solar rises from 2 percent today to 6-13 percent (current pledges) and further to 17-26 percent (2 C scenarios) in 2040.
2. Carbon capture waits in the wings. The scenarios also show a mixed future for fossil fuels as the globe shifts away from carbon-intensive energy sources. Coal use has no sustainable future unless it is combined with carbon capture and storage (CCS) technology, and most near-term projections show no large-scale deployment of CCS in the next 10-15 years. Natural gas consumption is likely to increase over the next 20 years, but it too is projected to decline thereafter without CCS. For pathways consistent with the “well below 2 C” goal, CCS scale-up by midcentury is essential for all carbon-emitting technologies.
3. Solar and wind thrive, but storage challenges remain. The scenarios show that energy-efficiency improvements are critical to the pace of the low-carbon transition, but offer little consensus on the magnitude of those improvements. They do, however, unequivocally point to successful upcoming decades for solar and wind energy, an outlook driven by declining costs and escalating research and innovation aimed at intermittency and long-term energy storage challenges.
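The numeric ranges in these findings can be gathered side by side. The short sketch below simply restates the figures quoted above; the labels and data structure are illustrative, not the study's.

```python
# Illustrative summary of the 2040 energy-mix ranges quoted above (percent of
# global primary energy). Labels are this sketch's own, not the study's.
shares_2040 = {
    "fossil fuels":   {"today": 80, "current_pledges": (73, 76), "2C_scenarios": (56, 61)},
    "wind and solar": {"today": 2,  "current_pledges": (6, 13),  "2C_scenarios": (17, 26)},
}

for source, s in shares_2040.items():
    lo1, hi1 = s["current_pledges"]
    lo2, hi2 = s["2C_scenarios"]
    print(f"{source}: {s['today']}% today -> {lo1}-{hi1}% (pledges) "
          f"or {lo2}-{hi2}% (2 C scenarios) by 2040")
```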
While the scenarios considered in this study project an increased share of renewables in the next 20 years, they do not indicate anything close to a complete decarbonization of the energy system during that time frame. To assess what happens beyond 2040, the study concludes that decision-makers should be drawing upon a range of projections of plausible futures, because the dominant technologies of the near term may not prevail over the long term.
“While energy projections are becoming more difficult because of the widening gulf between current policies and stated goals, they remain stakeholders’ sharpest tool in assessing the near- and long-term physical and financial risks associated with climate change and the world’s ongoing transition to a low-carbon energy system,” says Paltsev. “Combining the results from multiple sources provides additional insight into the evolution of the global energy mix.”
Julia Ortony: Concocting nanomaterials for energy and environmental applications

A molecular engineer, Julia Ortony performs a contemporary version of alchemy.
“I take powder made up of disorganized, tiny molecules, and after mixing it up with water, the material in the solution zips itself up into threads 5 nanometers thick — about 100 times smaller than the wavelength of visible light,” says Ortony, the Finmeccanica Career Development Assistant Professor of Engineering in the Department of Materials Science and Engineering (DMSE). “Every time we make one of these nanofibers, I am amazed to see it.”
But for Ortony, the fascination doesn’t simply concern the way these novel structures self-assemble, a product of the interaction between a powder’s molecular geometry and water. She is plumbing the potential of these nanomaterials for use in renewable energy and environmental remediation technologies, including promising new approaches to water purification and the photocatalytic production of fuel.
Tuning molecular properties
Ortony’s current research agenda emerged from a decade of research into the behavior of a class of carbon-based molecular materials that can range from liquid to solid.
During doctoral work at the University of California at Santa Barbara, she used magnetic resonance (MR) spectroscopy to make spatially precise measurements of atomic movement within molecules, and of the interactions between molecules. At Northwestern University, where she was a postdoc, Ortony focused this tool on self-assembling nanomaterials that were biologically based, in research aimed at potential biomedical applications such as cell scaffolding and regenerative medicine.
“With MR spectroscopy, I investigated how atoms move and jiggle within an assembled nanostructure,” she says. Her research revealed that the surface of the nanofiber acted like a viscous liquid, but as one probed further inward, it behaved like a solid. Through molecular design, it became possible to tune the speed at which molecules that make up a nanofiber move.
A door had opened for Ortony. “We can now use state-of-matter as a knob to tune nanofiber properties,” she says. “For the first time, we can design self-assembling nanostructures, using slow or fast internal molecular dynamics to determine their key behaviors.”
Slowing down the dance
When she arrived at MIT in 2015, Ortony was determined to tame and train molecules for nonbiological applications of self-assembling “soft” materials.
“Self-assembling molecules tend to be very dynamic, where they dance around each other, jiggling all the time and coming and going from their assembly,” she explains. “But we noticed that when molecules stick strongly to each other, their dynamics get slow, and their behavior is quite tunable.” The challenge, though, was to synthesize nanostructures in nonbiological molecules that could achieve these strong interactions.
“My hypothesis coming to MIT was that if we could tune the dynamics of small molecules in water and really slow them down, we should be able to make self-assembled nanofibers that behave like a solid and are viable outside of water,” says Ortony.
Her efforts to understand and control such materials are now starting to pay off.
“We’ve developed unique, molecular nanostructures that self-assemble, are stable in both water and air, and — since they’re so tiny — have extremely high surface areas,” she says. Since the nanostructure surface is where chemical interactions with other substances take place, Ortony has leapt to exploit this feature of her creations — focusing in particular on their potential in environmental and energy applications.
Clean water and fuel from sunlight
One key venture, supported by Ortony’s Professor Amar G. Bose Fellowship, involves water purification. The problem of toxin-laden drinking water affects tens of millions of people in underdeveloped nations. Ortony’s research group is developing nanofibers that can grab deadly metals such as arsenic out of such water. The chemical groups she attaches to nanofibers are strong, stable in air, and in recent tests “remove all arsenic down to low, nearly undetectable levels,” says Ortony.
She believes an inexpensive textile made from nanofibers would be a welcome alternative to the large, expensive filtration systems currently deployed in places like Bangladesh, where arsenic-tainted water poses dire threats to large populations.
“Moving forward, we would like to chelate arsenic, lead, or any environmental contaminant from water using a solid textile fabric made from these fibers,” she says.
In another research thrust, Ortony says, “My dream is to make chemical fuels from solar energy.” Her lab is designing nanostructures with molecules that act as antennas for sunlight. These structures, exposed to and energized by light, interact with a catalyst in water to reduce carbon dioxide to different gases that could be captured for use as fuel.
In recent studies, the Ortony lab found that it is possible to design these catalytic nanostructure systems to be stable in water under ultraviolet irradiation for long periods of time. “We tuned our nanomaterial so that it did not break down, which is essential for a photocatalytic system,” says Ortony.
Students dive in
While Ortony’s technologies are still in the earliest stages, her approach to problems of energy and the environment is already drawing student enthusiasts.
Dae-Yoon Kim, a postdoc in the Ortony lab, won the 2018 Glenn H. Brown Prize from the International Liquid Crystal Society for his work on synthesized photo-responsive materials and started a tenure track position at the Korea Institute of Science and Technology this fall. Ortony also mentors Ty Christoff-Tempesta, a DMSE doctoral candidate, who was recently awarded a Martin Fellowship for Sustainability. Christoff-Tempesta hopes to design nanoscale fibers that assemble and disassemble in water to create environmentally sustainable materials. And Cynthia Lo ’18 won a best-senior-thesis award for work with Ortony on nanostructures that interact with light and self-assemble in water, work that will soon be published. She is “my superstar MIT Energy Initiative UROP [undergraduate researcher],” says Ortony.
Ortony hopes to share her sense of wonder about materials science not just with students in her group, but also with those in her classes. “When I was an undergraduate, I was blown away at the sheer ability to make a molecule and confirm its structure,” she says. With her new lab-based course for grad students — 3.65 (Soft Matter Characterization) — Ortony says she can teach about “all the interests that drive my research.”
While she is passionate about using her discoveries to solve critical problems, she remains entranced by the beauty she finds pursuing chemistry. Fascinated by science starting in childhood, Ortony says she sought out every available class in chemistry, “learning everything from beginning to end, and discovering that I loved organic and physical chemistry, and molecules in general.”
Today, she says, she finds joy working with her “creative, resourceful, and motivated” students. She celebrates with them “when experiments confirm hypotheses, and it’s a breakthrough and it’s thrilling,” and reassures them “when they come with a problem, and I can let them know it will be thrilling soon.”
This article appears in the Autumn 2019 issue of Energy Futures, the magazine of the MIT Energy Initiative.
How well can computers connect symptoms to diseases?

A new MIT study finds “health knowledge graphs,” which show relationships between symptoms and diseases and are intended to help with clinical diagnosis, can fall short for certain conditions and patient populations. The results also suggest ways to boost their performance.
Health knowledge graphs have typically been compiled manually by expert clinicians, but that can be a laborious process. Recently, researchers have experimented with automatically generating these knowledge graphs from patient data. The MIT team has been studying how well such graphs hold up across different diseases and patient populations.
In a paper presented at the Pacific Symposium on Biocomputing 2020, the researchers evaluated automatically generated health knowledge graphs based on real datasets comprising more than 270,000 patients with nearly 200 diseases and more than 770 symptoms.
The team analyzed how various models used electronic health record (EHR) data, containing medical and treatment histories of patients, to automatically “learn” patterns of disease-symptom correlations. They found that the models performed particularly poorly for diseases that have high percentages of very old or young patients, or high percentages of male or female patients — but that choosing the right data for the right model, and making other modifications, can improve performance.
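As a rough illustration of what it means to “learn” disease-symptom correlations from EHR data, one can count how often a diagnosis and a symptom appear in the same visit and keep the strongly associated pairs as graph edges. This is a minimal sketch only; the models in the study (described below) are more sophisticated, and the toy data, names, and threshold here are made up.

```python
from collections import defaultdict
from itertools import product

# Each visit is a (diseases, symptoms) pair extracted from an EHR record.
# Toy data; real inputs would come from structured codes and clinical notes.
visits = [
    ({"diabetes"}, {"excessive thirst", "fatigue"}),
    ({"diabetes"}, {"excessive thirst"}),
    ({"flu"}, {"fever", "fatigue"}),
]

disease_counts = defaultdict(int)
pair_counts = defaultdict(int)

for diseases, symptoms in visits:
    for d in diseases:
        disease_counts[d] += 1
    for d, s in product(diseases, symptoms):
        pair_counts[(d, s)] += 1

# Empirical conditional frequency P(symptom | disease); keep an edge only if
# the frequency clears an (arbitrary) threshold.
THRESHOLD = 0.5
edges = {
    (d, s): pair_counts[(d, s)] / disease_counts[d]
    for (d, s) in pair_counts
    if pair_counts[(d, s)] / disease_counts[d] >= THRESHOLD
}
print(edges)  # e.g. {('diabetes', 'excessive thirst'): 1.0, ...}
```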
The idea is to provide guidance to researchers about the relationship between dataset size, model specification, and performance when using electronic health records to build health knowledge graphs. That could lead to better tools to aid physicians and patients with medical decision-making or to search for new relationships between diseases and symptoms.
“In the last 10 years, EHR use has skyrocketed in hospitals, so there’s an enormous amount of data that we hope to mine to learn these graphs of disease-symptom relationships,” says first author Irene Y. Chen, a graduate student in the Department of Electrical Engineering and Computer Science (EECS). “It is essential that we closely examine these graphs, so that they can be used as the first steps of a diagnostic tool.”
Joining Chen on the paper are Monica Agrawal, a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); Steven Horng of Beth Israel Deaconess Medical Center (BIDMC); and EECS Professor David Sontag, who is a member of CSAIL and the Institute for Medical Engineering and Science, and head of the Clinical Machine Learning Group.
Patients and diseases
In health knowledge graphs, there are hundreds of nodes, each representing a different disease or symptom. Edges (lines) connect disease nodes, such as “diabetes,” with correlated symptom nodes, such as “excessive thirst.” Google famously launched its own version in 2015, which was manually curated by several clinicians over hundreds of hours and is considered the gold standard. When you Google a disease now, the system displays associated symptoms.
In a 2017 Nature Scientific Reports paper, Sontag, Horng, and other researchers leveraged data from the same 270,000 patients used in their current study — which came from the emergency department at BIDMC between 2008 and 2013 — to build health knowledge graphs. They used three model structures to generate the graphs: logistic regression, naive Bayes, and noisy OR. Using data provided by Google, the researchers compared their automatically generated health knowledge graph with the Google Health Knowledge Graph (GHKG). The researchers’ graph performed very well.
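Of those three model structures, noisy OR has an especially compact form: a symptom is present unless every present disease, plus a small “leak” term for unmodeled causes, fails to trigger it. The sketch below uses the standard noisy-OR parameterization with invented parameters; how the paper actually estimates these parameters is not shown here.

```python
def noisy_or(present_diseases, edge_probs, leak=0.01):
    """P(symptom present | set of diseases) under a noisy-OR parameterization.

    edge_probs maps disease -> probability that the disease alone triggers
    the symptom; `leak` covers unmodeled causes. All values are illustrative.
    """
    p_not_triggered = 1.0 - leak
    for d in present_diseases:
        p_not_triggered *= 1.0 - edge_probs.get(d, 0.0)
    return 1.0 - p_not_triggered

# Hypothetical parameters for the symptom "excessive thirst".
edge_probs = {"diabetes": 0.6, "dehydration": 0.4}
print(noisy_or({"diabetes"}, edge_probs))                 # ~0.60
print(noisy_or({"diabetes", "dehydration"}, edge_probs))  # ~0.76
```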
In their new work, the researchers did a rigorous error analysis to determine which specific patients and diseases the models performed poorly for. Additionally, they experimented with augmenting the models with more data, from beyond the emergency room.
In one test, they broke the data down into subpopulations of diseases and symptoms. For each model, they looked at the connections it drew between each disease and all possible symptoms, and compared those with the GHKG. In the paper, they sort the findings into the 50 bottom- and 50 top-performing diseases. Examples of low performers are polycystic ovary syndrome (which affects women), allergic asthma (very rare), and prostate cancer (which predominantly affects older men). High performers are more common diseases and conditions, such as heart arrhythmia and plantar fasciitis (inflammation of tissue along the bottom of the foot).
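The article does not specify the exact per-disease score behind this ranking, so the sketch below should be read as one plausible way to compare a learned graph against the GHKG: each disease is scored by the overlap between its predicted symptom edges and the reference edges, with an F1-style measure used here as an assumption.

```python
def per_disease_overlap(learned_edges, reference_edges):
    """Compare a learned disease -> set-of-symptoms mapping against a
    reference graph (e.g., the GHKG), scoring each disease by edge overlap."""
    scores = {}
    for disease, ref_symptoms in reference_edges.items():
        found = learned_edges.get(disease, set())
        tp = len(found & ref_symptoms)
        precision = tp / len(found) if found else 0.0
        recall = tp / len(ref_symptoms) if ref_symptoms else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores[disease] = f1
    # Sorting by score ascending gives the kind of bottom-50 / top-50 lists
    # reported in the paper.
    return sorted(scores.items(), key=lambda kv: kv[1])

learned = {"diabetes": {"excessive thirst", "fatigue"}}
reference = {"diabetes": {"excessive thirst", "weight loss"}}
print(per_disease_overlap(learned, reference))  # [('diabetes', 0.5)]
```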
They found the noisy OR model was the most robust overall, with low error for nearly all of the diseases and patients. But accuracy decreased across all models for patients who have many co-occurring diseases and symptoms, as well as for patients who are very young or above the age of 85. Performance also suffered for patient populations with very high or low percentages of either sex.
Essentially, the researchers hypothesize, poor performance is caused by patients and diseases that have outlier predictive performance, as well as potential unmeasured confounders. Elderly patients, for instance, tend to enter hospitals with more diseases and related symptoms than younger patients. That means it’s difficult for the models to correlate specific diseases with specific symptoms, Chen says. “Similarly,” she adds, “young patients don’t have many diseases or as many symptoms, and if they have a rare disease or symptom, it doesn’t present in a normal way the models understand.”
Splitting data
The researchers also collected much more patient data and created three distinct datasets of different granularity to see if that could improve performance. For the 270,000 visits used in the original analysis, the researchers extracted the full EHR history of the 140,804 unique patients, tracking back a decade, with around 7.4 million annotations total from various sources, such as physician notes.
Choices in the dataset-creation process also affected model performance. One dataset aggregates each of the 140,804 patient histories into a single data point. Another treats each of the 7.4 million annotations as a separate data point. A final one creates “episodes” for each patient, defined as a continuous series of visits without a break of more than 30 days, yielding a total of around 1.4 million episodes.
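The episode construction can be sketched as a single pass over each patient's sorted visit dates: a gap of more than 30 days (the cutoff stated above) closes one episode and opens the next. Everything else in this sketch, including the function name, is illustrative.

```python
from datetime import date, timedelta

def split_into_episodes(visit_dates, max_gap_days=30):
    """Group a patient's visit dates into episodes: a new episode starts
    whenever the gap since the previous visit exceeds max_gap_days."""
    episodes = []
    current = []
    for d in sorted(visit_dates):
        if current and (d - current[-1]) > timedelta(days=max_gap_days):
            episodes.append(current)
            current = []
        current.append(d)
    if current:
        episodes.append(current)
    return episodes

visits = [date(2012, 1, 3), date(2012, 1, 20), date(2012, 6, 1)]
print(len(split_into_episodes(visits)))  # 2 episodes
```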
Intuitively, a dataset where the full patient history is aggregated into one data point should lead to greater accuracy since the entire patient history is considered. Counterintuitively, however, it also caused the naive Bayes model to perform more poorly for some diseases. “You assume the more intrapatient information, the better, with machine-learning models. But these models are dependent on the granularity of the data you feed them,” Chen says. “The type of model you use could get overwhelmed.”
As expected, feeding the model demographic information can also be effective. For instance, models can use that information to exclude all male patients for, say, predicting cervical cancer. And certain diseases far more common for elderly patients can be eliminated in younger patients.
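A hard demographic filter of that kind can be as simple as a lookup table of sex-restricted diseases. The table below is a hypothetical illustration built around the cervical cancer example above, not a list used in the study.

```python
# Hypothetical demographic constraints on candidate disease predictions.
SEX_RESTRICTED = {"cervical cancer": "female", "prostate cancer": "male"}

def allowed_diseases(candidates, patient_sex):
    """Drop diseases whose demographic restriction rules out this patient."""
    return {
        d for d in candidates
        if SEX_RESTRICTED.get(d) in (None, patient_sex)
    }

print(allowed_diseases({"cervical cancer", "flu"}, patient_sex="male"))
# {'flu'}
```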
But, in another surprise, the demographic information didn't boost performance for the most successful model, so collecting that data may be unnecessary. That's important, Chen says, because compiling data and training models on it can be expensive and time-consuming. Yet, depending on the model, piling on more data may not actually improve performance.
Next, the researchers hope to use their findings to build a robust model to deploy in clinical settings. Currently, the health knowledge graph learns relations between diseases and symptoms but does not give a direct prediction of disease from symptoms. “We hope that any predictive model and any medical knowledge graph would be put under a stress test so that clinicians and machine-learning researchers can confidently say, ‘We trust this as a useful diagnostic tool,’” Chen says.