MIT Research News' Journal
 

Tuesday, November 29th, 2016

    2:00p
    MIT Skoltech Seed Fund issues call for proposals

    The MIT Skoltech Seed Fund Program is calling for proposals, now through Dec. 23, from MIT faculty and researchers with principal investigator status for innovative projects that have the potential to benefit the development of the MIT Skoltech Program or the mission of the Skolkovo Foundation. The program strongly encourages proposals that involve collaborative research with Skoltech or other Russian academic and research institutions.  

    Interested researchers are encouraged to submit proposals in the following three main categories:

    • research projects in science and engineering (biomedicine, energy, information technology, data science and computational modeling, product design and manufacturing, and space);

    • research projects in the areas of policy, economics, humanities, arts and social sciences (especially innovation and entrepreneurship, international collaborative programs, technology and policy, and general Russian studies, including Russian history, Russian art and Russian economy); and

    • non-research projects to promote engagement and collaboration on topics and activities that may impact Russia, Skoltech, or other Russian institutions — such as course development, course teaching, student exchange, event organization (e.g., a hackathon or other application-type activity), etc.

    The MIT Skoltech Seed Fund will award grants of up to $75,000 for one year.

    The application deadline is Friday, Dec. 23. For more information and to apply, visit the Skoltech Seed Fund page.

    3:20p
    Climate models may be overestimating the cooling effect of wildfire aerosols

    Whether intentionally set to consume agricultural waste or naturally ignited in forests or peatlands, open-burning fires affect the global climate system in two ways that, to some extent, cancel each other out. On one hand, they generate a significant fraction of the world’s carbon dioxide emissions, which drive up the average global surface temperature. On the other hand, they produce atmospheric aerosols (organic carbon, black carbon, and sulfate-bearing particulates) that can lower that temperature either directly, by reflecting sunlight skyward, or indirectly, by increasing the reflectivity of clouds. Because wildfire aerosols play a key role in determining the future of the planet’s temperature and precipitation patterns, it’s crucial that today’s climate models — upon which energy and climate policymaking depend — accurately represent their impact on the climate system.

    But a new study in Atmospheric Chemistry and Physics by researchers at the MIT Joint Program on the Science and Policy of Global Change shows that at least one widely used climate model overestimates the cooling effect of these aerosol emissions by as much as 23 percent.

    “This overestimation could lead to errors in projections of surface temperature and rainfall, both globally and regionally,” says Chien Wang, a senior research scientist at MIT’s Department of Earth, Atmospheric and Planetary Sciences and the Joint Program, who co-authored the paper with two members of his group: lead author and research scientist Benjamin S. Grandey and postdoc Hsiang-He Lee of the Center for Environmental Sensing and Modeling at the Singapore-MIT Alliance for Research and Technology. “We hope our findings will reduce such errors in climate modeling.”

    To make long-term global projections, most climate models represent atmospheric wildfire aerosol emissions by using monthly measures of emissions at different locations around the globe, and then averaging those emissions over multiple years — before estimating their effect on solar radiation at each location over the multi-year period. Questioning the accuracy of this conventional approach, the researchers proposed a revised representation of wildfire aerosol emissions in which the radiative effect associated with each monthly measure of emissions is first calculated, before averaging over the multi-year period. The revised approach would account for year-to-year variability in the aerosols’ radiative effect, which is missing in the conventional representation.

    Using a global aerosol-climate model — the Community Earth System Model (CESM) — and the Global Fire Emissions Database (GFED4.0s), the researchers compared both modeling approaches over a 10-year period. The comparison showed that wildfire emissions are responsible for a global mean net radiative effect of about -1.26 watts per square meter under the conventional approach, and about -1.02 watts per square meter under the revised approach. The conventional approach thus systematically overestimated the strength of the net radiative effect of wildfire aerosols: by 23 percent globally, and by more in some regions, including 58 percent over Australia and New Zealand and 43 percent over Boreal Asia, where wildfires are commonplace.
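    The headline figure follows directly from those two global means: (1.26 − 1.02) / 1.02 ≈ 0.235, i.e., the conventional estimate of the cooling is about 23 percent stronger than the revised one.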

    The researchers attribute this systematic overestimation to the non-linear influence of the aerosols on clouds, due largely to interactions between organic carbon aerosols and clouds. Organic carbon aerosols initially boost the reflectivity (and thus cooling effect) of clouds, but as concentrations increase over a particular geographic location, the rate of increase in cloud reflectivity (and cooling effect) slows down considerably. By incorrectly assuming that the indirect cooling effect of aerosol emissions increases linearly with their concentration, conventional approaches overestimate that effect in climate models.
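    The effect of the averaging order is easy to reproduce numerically. Below is a minimal sketch of the idea, assuming an illustrative saturating (square-root) response and a synthetic emissions series; it is not the CESM parameterization. For any concave response, averaging emissions before applying the response overstates the cooling relative to averaging the monthly responses, which is Jensen’s inequality at work.

    ```python
    import numpy as np

    # Illustrative saturating response: cooling strengthens sublinearly with
    # emissions. The square-root form is an assumption for demonstration only,
    # not the parameterization used in CESM.
    def radiative_effect(emissions):
        return -np.sqrt(emissions)  # W/m^2 (negative = cooling)

    rng = np.random.default_rng(42)
    # Synthetic 10-year series of monthly wildfire emissions (arbitrary units),
    # skewed to mimic occasional severe fire seasons.
    monthly_emissions = rng.gamma(shape=1.5, scale=2.0, size=120)

    # Conventional ordering: average emissions first, then compute the effect.
    conventional = radiative_effect(monthly_emissions.mean())
    # Revised ordering: compute the effect each month, then average.
    revised = radiative_effect(monthly_emissions).mean()

    print(f"conventional: {conventional:.2f} W/m^2")
    print(f"revised:      {revised:.2f} W/m^2")
    # For a concave response, |conventional| >= |revised| (Jensen's inequality),
    # so the conventional ordering overstates the cooling.
    ```

    The size of the gap grows with the variability of the emissions series, echoing the paper’s point that year-to-year variability is exactly what the conventional representation leaves out.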

    Representing the year-to-year variability in the cooling effect of wildfire aerosols in climate models could improve our understanding of the climate system and the overall accuracy of global and regional climate projections.

    “Hopefully what we’ve found here will be taken into account in future climate modeling studies, which could help improve decision-making regarding climate mitigation and adaptation,” says Grandey.

    The research team recommends further research to test the robustness of their method using different climate models and wildfire emissions data sets, to improve the scientific understanding of the mechanisms behind the results, and to explore in greater depth the impact of year-to-year variation in aerosol emissions on different aspects of climate change.

    The research was funded by the Singapore National Research Foundation through the Singapore-MIT Alliance for Research and Technology's Center for Environmental Sensing and Modeling, as well as by grants from the National Science Foundation, the Department of Energy, and the Environmental Protection Agency.

    3:50p
    How online tools and open innovation can support implementation of Paris Agreement goals

    An MIT research initiative is harnessing the power of crowds and online collaborative tools in support of fulfilling global Paris Agreement climate goals.

    MIT’s Climate CoLab, founded and directed by Professor Thomas Malone of the MIT Center for Collective Intelligence, presented its work and innovative approach in a series of events earlier this month at the United Nations Framework Convention on Climate Change (UNFCCC) Conference of the Parties in Marrakech, Morocco (COP22). Climate CoLab’s team was on the ground in Marrakech to strengthen and build new collaborations with the international community in support of the 2015 Paris international climate agreement, and to showcase the role crowds and online collaborative tools can play in supporting implementation of the Paris Agreement goals. Of the project, Malone said: “It’s now possible to harness the collective intelligence of thousands of people, all over the world, at a scale, and with a degree of collaboration, that was never possible before in human history.”

    This fall saw notable milestones in international climate cooperation, including the early legal entry into force of the Paris Agreement, a recent international accord on reducing global hydrofluorocarbons (HFCs), and another on reducing emissions from the aviation sector. Even so, COP22 was awash with reminders of the stark scientific reality that further near-term action is needed to combat the most dangerous impacts of climate change. Among them was the United Nations Environment Programme’s 2016 Emissions Gap Report, released immediately prior to COP22, which projected that 25 percent greater global emissions cuts are needed by 2030. UN Secretary-General Ban Ki-moon recently urged the global community: “We are still in a race against time. We need to transition to a low-emissions and climate-resilient future.”

    Climate CoLab is pioneering a crowd-based methodology to help meet this challenge. The project was highlighted during several events at COP22, including two official UN side events and a featured interview with the UNFCCC Climate Change Studio. “What if we could harness all of the ingenuity and intelligence of everybody that’s [at COP22], and also everybody that couldn’t be here today, to continuously work together on climate change solutions? What could be possible?” said Laur Hesse Fisher, Climate CoLab project manager, during the interview. “New digital collaboration tools enable that,” she continued.

    On Monday, Nov. 14, Climate CoLab co-hosted an official side event with collaborator Climate Interactive and the Abibimman Foundation, entitled “Meeting the Paris Goals through Decision-Maker Tools and Climate Education.” Panelist Andrew Jones, Climate Interactive’s co-director, started the session with the premise that we need large-scale engagement in order to adequately address this challenge: “We don’t need 10,000 experts, we need 1 billion amateurs doing all they can, effectively, to make change.”

    The role of non-state actors and of open, transparent stakeholder engagement processes was featured throughout COP22. On Nov. 15, Hesse Fisher joined a panel of collaborators from various international organizations, including Climate Policy Institute, Climate-KIC, the Global Environment Facility, ICLEI, and many others, organized by the Cities Climate Finance Leadership Alliance. Addressing an audience of government officials, academics, non-profit advocates, and others, the panelists discussed the role of innovation platforms and tools in helping finance climate action.

    Additionally, building on last year’s launch of a partnership with the UN Secretary-General’s Climate Resilience Initiative: Absorb, Anticipate, Reshape (A2R), Climate CoLab was featured in an A2R brochure distributed at A2R Initiative COP22 events, for its new contest on “Anticipating Climate Hazards,” which seeks proposals on early warning systems and climate preparedness responses. Of the collaboration, Malone said, “To contend with the most pressing impacts of climate change, it is clear that now more than ever before, we need ideas and contributions of as many people as possible to address climate change.”

    As focus turns to accelerating countries’ implementation of the emissions reduction targets and adaptation strategies put forward under the Paris Agreement — also known as “nationally determined contributions,” or NDCs — Climate CoLab is exploring how this online, collaborative approach to stakeholder engagement and expert-validated climate planning and assessment could prove valuable to countries. Building on themes of open engagement and enhanced transparency, Malone remarked, “We believe it’s possible to open up the national and international climate planning processes to anyone around the world who wants to participate.” As Hesse Fisher said, this approach provides “new ways that the world can work together.”

    4:45p
    Creating videos of the future

    Living in a dynamic physical world, we can easily forget how effortlessly we understand our surroundings. With minimal thought, we can figure out how scenes change and objects interact.

    But what’s second nature for us is still a huge problem for machines. With the limitless number of ways that objects can move, teaching computers to predict future actions can be difficult.

    Recently, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have moved a step closer, developing a deep-learning algorithm that, given a still image from a scene, can create a brief video that simulates the future of that scene.

    Trained on 2 million unlabeled videos that include a year’s worth of footage, the algorithm generated videos that human subjects deemed to be realistic 20 percent more often than a baseline model.

    The team says that future versions could be used for everything from improved security tactics to safer self-driving cars. According to CSAIL PhD student and first author Carl Vondrick, the algorithm can also help machines recognize people’s activities without expensive human annotations.

    “These videos show us what computers think can happen in a scene,” says Vondrick. “If you can predict the future, you must have understood something about the present.”

    Vondrick wrote the paper with MIT professor Antonio Torralba and Hamed Pirsiavash, a former CSAIL postdoc who is now a professor at the University of Maryland Baltimore County (UMBC). The work will be presented at next week’s Neural Information Processing Systems (NIPS) conference in Barcelona.

    How it works

    Multiple researchers have tackled similar topics in computer vision, including MIT Professor Bill Freeman, whose new work on “visual dynamics” also creates future frames in a scene. But where his model focuses on extrapolating videos into the future, Torralba’s model can also generate completely new videos that haven’t been seen before.

    Previous systems build up scenes frame by frame, which creates a large margin for error. In contrast, this work focuses on processing the entire scene at once, with the algorithm generating as many as 32 frames from scratch per second.

    “Building up a scene frame-by-frame is like a big game of ‘Telephone,’ which means that the message falls apart by the time you go around the whole room,” says Vondrick. “By instead trying to predict all frames simultaneously, it’s as if you’re talking to everyone in the room at once.”

    Of course, there’s a trade-off to generating all frames simultaneously: while the predictions become more accurate, the model also becomes more complex for longer videos. Nevertheless, this complexity may be worth it for sharper predictions.

    To create multiple frames, researchers taught the model to generate the foreground separate from the background, and to then place the objects in the scene to let the model learn which objects move and which objects don’t.
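    A minimal sketch of that kind of two-stream composition, with made-up tensor shapes (the published model uses learned 3-D and 2-D convolutional networks rather than the random tensors used here), might look like this:

    ```python
    import torch

    # Illustrative shapes: batch of 4 clips, 3 color channels, 32 frames, 64x64 pixels.
    B, C, T, H, W = 4, 3, 32, 64, 64

    foreground = torch.rand(B, C, T, H, W)  # moving-object stream (would come from a 3D conv net)
    mask = torch.rand(B, 1, T, H, W)        # per-pixel foreground weights in [0, 1]
    background = torch.rand(B, C, 1, H, W)  # one static frame (would come from a 2D conv net)

    # Blend: foreground where the mask is high, static background elsewhere;
    # the single background frame broadcasts across all T time steps.
    video = mask * foreground + (1 - mask) * background
    print(video.shape)  # torch.Size([4, 3, 32, 64, 64])
    ```

    Because the mask is learned jointly with both streams, the network itself discovers which pixels belong to moving objects and which belong to the static scene.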

    The team used a deep-learning method called “adversarial learning” that involves training two competing neural networks. One network generates video, and the other discriminates between the real and generated videos. Over time, the generator learns to fool the discriminator.
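    In code, this kind of adversarial training loop looks roughly like the sketch below. The two small `nn.Sequential` networks are toy stand-ins for the actual space-time convolutional generator and discriminator, and the random “real” batch stands in for clips from the training corpus.

    ```python
    import torch
    from torch import nn, optim

    # Toy stand-ins for the video generator and discriminator; the real models
    # use 3-D convolutions over space-time. All sizes here are illustrative.
    G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 64))  # noise -> "video"
    D = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 1))    # "video" -> real/fake logit

    loss_fn = nn.BCEWithLogitsLoss()
    opt_g = optim.Adam(G.parameters(), lr=2e-4)
    opt_d = optim.Adam(D.parameters(), lr=2e-4)

    for step in range(1000):
        real = torch.randn(32, 64)       # placeholder for a batch of real video clips
        fake = G(torch.randn(32, 100))   # generated clips from random noise

        # Discriminator step: label real clips 1 and generated clips 0.
        opt_d.zero_grad()
        d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
                  loss_fn(D(fake.detach()), torch.zeros(32, 1)))
        d_loss.backward()
        opt_d.step()

        # Generator step: try to make the discriminator call the fakes real.
        opt_g.zero_grad()
        g_loss = loss_fn(D(fake), torch.ones(32, 1))
        g_loss.backward()
        opt_g.step()
    ```

    Neither network ever sees labels beyond real-versus-fake, which is why the approach can exploit large amounts of unlabeled video.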

    From that, the model can create videos resembling scenes from beaches, train stations, hospitals, and golf courses. For example, the beach model produces beaches with crashing waves, and the golf model has people walking on grass.

    Testing the scene

    The team compared the videos against a baseline of generated videos and asked subjects which they thought were more realistic. Across more than 13,000 judgments from 150 users, subjects chose the generative model’s videos 20 percent more often than the baseline.
     
    Vondrick stresses that the model still lacks some fairly simple common-sense principles. For example, it often doesn’t understand that objects are still there when they move, like when a train passes through a scene. The model also tends to make humans and objects look much larger than they are in reality.

    Another limitation is that the generated videos are just one and a half seconds long, which the team hopes to be able to increase in future work. The challenge is that this requires tracking longer dependencies to ensure that the scene still makes sense over longer time periods. One way to do this would be to add human supervision.

    “It’s difficult to aggregate accurate information across long time periods in videos,” says Vondrick. “If the video has both cooking and eating activities, you have to be able to link those two together to make sense of the scene.”

    These types of models aren’t limited to predicting the future. Generative videos can be used for adding animation to still images, like the animated newspaper from the Harry Potter books. They could also help detect anomalies in security footage and compress data for storing and sending longer videos.

    “In the future, this will let us scale up vision systems to recognize objects and scenes without any supervision, simply by training them on video,” says Vondrick.

    This work was supported by the National Science Foundation, the START program at UMBC, and a Google PhD fellowship.

