MIT Research News' Journal
Monday, April 22nd, 2019
10:59a
Neuroscientists reverse some behavioral symptoms of Williams Syndrome

Williams Syndrome, a rare neurodevelopmental disorder that affects about 1 in 10,000 babies born in the United States, produces a range of symptoms including cognitive impairments, cardiovascular problems, and extreme friendliness, or hypersociability.
In a study of mice, MIT neuroscientists have garnered new insight into the molecular mechanisms that underlie this hypersociability. They found that loss of one of the genes linked to Williams Syndrome leads to a thinning of the fatty layer that insulates neurons and helps them conduct electrical signals in the brain.
The researchers also showed that they could reverse the symptoms by boosting production of this coating, known as myelin. This is significant, because while Williams Syndrome is rare, many other neurodevelopmental disorders and neurological conditions have been linked to myelination deficits, says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience and a member of MIT’s McGovern Institute for Brain Research.
“The importance is not only for Williams Syndrome,” says Feng, who is one of the senior authors of the study. “In other neurodevelopmental disorders, especially in some of the autism spectrum disorders, this could be potentially a new direction to look into, not only the pathology but also potential treatments.”
Zhigang He, a professor of neurology and ophthalmology at Harvard Medical School, is also a senior author of the paper, which appears in the April 22 issue of Nature Neuroscience. Former MIT postdoc Boaz Barak, currently a principal investigator at Tel Aviv University in Israel, is the lead author and a senior author of the paper.
Impaired myelination
Williams Syndrome, which is caused by the loss of one of the two copies of a segment of chromosome 7, can produce learning impairments, especially for tasks that require visual and motor skills, such as solving a jigsaw puzzle. Some people with the disorder also exhibit poor concentration and hyperactivity, and they are more likely to experience phobias.
In this study, the researchers decided to focus on one of the 25 genes in that segment, known as Gtf2i. Based on studies of patients missing smaller subsets of those genes, scientists have linked Gtf2i to the hypersociability seen in Williams Syndrome.
Working with a mouse model, the researchers devised a way to knock out the gene specifically from excitatory neurons in the forebrain, which includes the cortex, the hippocampus, and the amygdala (a region important for processing emotions). They found that these mice did show increased levels of social behavior, measured by how much time they spent interacting with other mice. The mice also showed deficits in fine motor skills and increased anxiety in nonsocial situations, which are also symptoms of Williams Syndrome.
Next, the researchers sequenced the messenger RNA from the cortex of the mice to see which genes were affected by loss of Gtf2i. Gtf2i encodes a transcription factor, so it controls the expression of many other genes. The researchers found that about 70 percent of the genes with significantly reduced expression levels were involved in the process of myelination.
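For readers who want a concrete sense of that analysis step, below is a minimal Python sketch of how one might compute such a fraction from a table of differential-expression results. The file names, column names, gene-set annotation, and significance threshold are hypothetical placeholders, not details from the study.

```python
# Hypothetical sketch: estimate what fraction of significantly down-regulated genes
# belong to a myelination gene set. File names, column names, and thresholds are
# placeholders, not details from the study.
import pandas as pd

deg = pd.read_csv("cortex_differential_expression.csv")          # per-gene results table
myelin_genes = set(pd.read_csv("myelination_gene_set.csv")["gene"])

# Genes with significantly reduced expression in Gtf2i-knockout vs. control cortex
down = deg[(deg["padj"] < 0.05) & (deg["log2_fold_change"] < 0)]

frac = down["gene"].isin(myelin_genes).mean()
print(f"{frac:.0%} of significantly down-regulated genes are myelination-related")
```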
“Myelin is the insulation layer that wraps the axons that extend from the cell bodies of neurons,” Barak says. “When they don’t have the right properties, it will lead to faster or slower electrical signal transduction, which affects the synchronicity of brain activity.”

Further studies revealed that the mice had only about half the normal number of mature oligodendrocytes — the brain cells that produce myelin. However, the number of oligodendrocyte precursor cells was normal, so the researchers suspect that the maturation and differentiation processes of these cells are somehow impaired when Gtf2i is missing in the neurons.
This was surprising because Gtf2i was not knocked out in oligodendrocytes or their precursors. Thus, knocking out the gene in neurons may somehow influence the maturation process of oligodendrocytes, the researchers suggest. It is still unknown how this interaction might work.
“That’s a question we are interested in, but we don’t know whether it’s a secreted factor, or another kind of signal or activity,” Feng says.
In addition, the researchers found that the myelin surrounding axons of the forebrain was significantly thinner than in normal mice. Furthermore, in the mice lacking Gtf2i, electrical signals were smaller and took more time to cross the brain.
The study is an example of pioneering research into the contribution of glial cells, which include oligodendrocytes, to neuropsychiatric disorders, says Doug Fields, chief of the nervous system development and plasticity section of the Eunice Kennedy Shriver National Institute of Child Health and Human Development.
“Traditionally myelin was only considered in the context of diseases that destroy myelin, such as multiple sclerosis, which prevents transmission of neural impulses. More recently it has become apparent that more subtle defects in myelin can impair neural circuit function, by causing delays in communication between neurons,” says Fields, who was not involved in the research.
Symptom reversal
It remains to be discovered precisely how this reduction in myelination leads to hypersociability. The researchers suspect that the lack of myelin affects brain circuits that normally inhibit social behaviors, making the mice more eager to interact with others.
“That’s probably the explanation, but exactly which circuits and how does it work, we still don’t know,” Feng says.
The researchers also found that they could reverse the symptoms by treating the mice with drugs that improve myelination. One of these drugs, an FDA-approved antihistamine called clemastine fumarate, is now in clinical trials to treat multiple sclerosis, which affects myelination of neurons in the brain and spinal cord. The researchers believe it would be worthwhile to test these drugs in Williams Syndrome patients because they found thinner myelin and reduced numbers of mature oligodendrocytes in brain samples from human subjects who had Williams Syndrome, compared to typical human brain samples.
“Mice are not humans, but the pathology is similar in this case, which means this could be translatable,” Feng says. “It could be that in these patients, if you improve their myelination early on, it could at least improve some of the conditions. That’s our hope.”
Such drugs would likely help mainly the social and fine-motor issues caused by Williams Syndrome, not the symptoms that are produced by deletion of other genes, the researchers say. They may also help treat other disorders, such as autism spectrum disorders, in which myelination is impaired in some cases, Feng says.
“We think this can be expanded into autism and other neurodevelopmental disorders. For these conditions, improved myelination may be a major factor in treatment,” he says. “We are now checking other animal models of neurodevelopmental disorders to see whether they have myelination defects, and whether improved myelination can improve some of the pathology of the defects.”
The research was funded by the Simons Foundation, the Poitras Center for Affective Disorders Research at MIT, the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, and the Simons Center for the Social Brain at MIT.

12:40p
Designing water infrastructure for climate uncertainty

In Kenya's second-largest city, Mombasa, the demand for water is expected to double by 2035, to an estimated 300,000 cubic meters per day. In Mombasa's current warm and humid climate, that water comes from a substantial volume of precipitation, which may change significantly as the region warms in the coming decades in line with global climate model projections.
What’s not clear from the projections, however, is whether precipitation levels will rise or fall along with that warming.
The ultimate direction and magnitude of precipitation change is a major concern for designers of a proposed dam and reservoir system that will capture runoff into the Mwache River, which currently totals about 310,000 cubic meters per day. The substantial uncertainty in future runoff makes it difficult to determine the reservoir capacity necessary to meet Mombasa's water demand throughout the system's estimated 100-year lifetime. City planners are therefore faced with deciding whether to invest in an expensive, large-scale dam that would provide a consistent water supply under the driest future climate projected by the models, to build a smaller-scale dam that could accommodate current needs, or to start small and add capacity as needed.
To help cities like Mombasa sort through such consequential decisions, a team of researchers at the MIT Joint Program on the Science and Policy of Global Change has developed a new, systematic approach to designing long-term water infrastructure amid climate change uncertainty. Their planning framework assesses the potential to learn about regional climate change over time as new observations become available, and thus evaluate the suitability of flexible approaches that add water storage capacity incrementally if the climate becomes warmer and drier.
The researchers describe the framework and its application to Mombasa in the journal Nature Communications.
A new framework for water infrastructure design
Using the framework to compare the likely lifetime costs of a flexible approach with those of two static, irreversible options for the proposed dam in Mombasa — one designed for the driest, warmest climate, the other for today’s climate — the research team found the flexible approach to be the most cost-effective while still maintaining a reliable supply of water to Mombasa.
“We found that the flexible adaptive option, which allows for the dam’s height to be increased incrementally, substantially reduces the risk of overbuilding infrastructure that you don’t need, and maintains a similar level of water supply reliability in comparison to having a larger dam from the get-go,” says Sarah Fletcher, the study’s lead author, a postdoctoral fellow at MIT’s Department of Civil and Environmental Engineering.
Fletcher’s work on the study was largely completed as a PhD student at MIT’s Institute for Data, Systems and Society under the supervision of co-author and MIT Joint Program Research Scientist Kenneth Strzepek, and in collaboration with co-author and former Joint Program research associate Megan Lickley, now a PhD student in the Department of Earth, Atmospheric and Planetary Sciences.
The Kenyan government is now in the final stages of the design of the Mwache Dam.
“Due to the Joint Program’s efforts to make leading-edge climate research available for use globally, the results from this study have informed the ongoing design and master planning process,” says Strzepek. “It’s a perfect illustration of the mission of Global MIT: ‘Of the World. In the World. For the World.’”
By pinpointing opportunities to reliably apply flexible rather than static approaches to water infrastructure design, the new planning framework could free up billions of dollars in savings in climate adaptation investments — savings that could be passed on to provide water infrastructure solutions to many more resource-limited communities that face substantial climate risk.
Incorporating learning into large infrastructure decision-making
The study may be the first to address a limitation in current water infrastructure planning, which traditionally assumes that today’s climate change uncertainty estimates will persist throughout the whole planning timeline, one that typically spans multiple decades. In many cases this assumption causes flexible, adaptive planning options to appear less cost-effective than static approaches. By estimating upfront how much planners can expect to learn about climate change in the future, the new framework can enable decision-makers to evaluate whether adaptive approaches are likely to be reliable and cost effective.
"Climate models can provide us with a useful range of potential trajectories of the climate system,” says Lickley. “There is considerable uncertainty in terms of the magnitude and timing of these changes over the next 50 to 100 years. In this work we show how to incorporate learning into these large infrastructure decisions as we gain new knowledge about the climate trajectory over the coming decades."
Using this planning tool, a city planner could determine whether it makes sense to choose a static or flexible design approach for a proposed water infrastructure system based on current projections of maximum temperature and precipitation change over the lifetime of the system, along with information that will eventually come in from future observations of temperature and precipitation change. In the study, the researchers performed this analysis for the proposed Mombasa dam under thousands of future regional climate simulations covering a wide range of potential temperature and precipitation trends.
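To make that comparison concrete, here is a minimal Monte Carlo sketch, in Python, of the kind of static-versus-flexible cost comparison described above. Every number in it (capital costs, reservoir capacities, runoff statistics, shortfall penalties, and the expansion rule) is an invented placeholder; the actual framework draws on thousands of regional climate simulations and detailed engineering cost models.

```python
# Invented-numbers sketch of a static-vs-flexible dam comparison under uncertain runoff.
# All capacities, costs, and statistics are placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(0)
N_SIM, YEARS = 2000, 100
DEMAND = 300_000 * 365                 # projected demand, m^3 per year

def simulate_inflow():
    """One 100-year annual inflow trajectory with an uncertain wetting/drying trend."""
    trend = rng.normal(0.0, 0.004)                      # fractional change per year
    noise = rng.normal(1.0, 0.15, YEARS)                # interannual variability
    return 310_000 * 365 * (1 + trend) ** np.arange(YEARS) * noise

def lifetime_cost(flexible, inflow):
    """Capital cost plus shortfall penalties for a static-large or flexible design."""
    capacity = 2.0e8 if flexible else 4.0e8             # reservoir volume, m^3
    cost = 60e6 if flexible else 120e6                  # capital cost, USD
    storage = capacity / 2
    for q in inflow:
        storage = min(storage + q, capacity)            # annual inflow, capped by capacity
        supplied = min(storage, DEMAND)
        storage -= supplied
        shortfall = DEMAND - supplied
        if shortfall > 0:
            if flexible and capacity < 4.0e8:
                capacity += 1.0e8                       # raise the dam incrementally
                cost += 30e6
            cost += 0.5 * shortfall                     # penalty per unmet m^3
    return cost

for name, flexible in [("static large dam", False), ("flexible design", True)]:
    mean_cost = np.mean([lifetime_cost(flexible, simulate_inflow()) for _ in range(N_SIM)])
    print(f"{name}: expected lifetime cost ${mean_cost/1e6:.0f}M")
```

The point of the sketch is the structure of the decision, not the numbers: the flexible option pays for extra capacity only along trajectories where observed runoff shows it is needed.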
“For example, if you started off on a high-temperature trajectory and 40 years from now you remain on that trajectory, you would know that none of the low-temperature design options are feasible anymore,” says Fletcher. “At that point you would have exceeded a certain amount of warming, and could then rule out the low-temperature-change planning option, and take advantage of an adaptive approach to increase the capacity.”
Future development on the planning framework may incorporate analysis of the potential to learn about other sources of uncertainty, such as the growth in demand for water resources, during the lifetime of a water infrastructure project.
The study was supported by the MIT Abdul Latif Jameel Water and Food Systems Lab and the National Science Foundation.

1:55p
Breakthrough in boiling

Engineers must manage a maelstrom in the core of operating nuclear reactors. Nuclear reactions deposit an extraordinary amount of heat in the fuel rods, setting off a frenzy of boiling, bubbling, and evaporation in the surrounding fluid. Operators harness this churning flow to carry heat away.
In search of greater efficiencies in nuclear systems, scientists have long sought to characterize and predict the physics underlying these processes of heat transfer, with only modest success.
But now a research team led by Emilio Baglietto, an associate professor of nuclear science and engineering at MIT, has made a significant breakthrough in detailing these physical phenomena. Their approach utilizes a modeling technology called computational fluid dynamics (CFD). Baglietto has developed new CFD tools that capture the fundamental physics of boiling, making it possible to track rapidly evolving heat transfer phenomena at the microscale in a range of different reactors, and for different operating conditions.
“Our research opens up the prospect of advancing the efficiency of current nuclear power systems and designing better fuel for future reactor systems,” says Baglietto.
The group, which includes Etienne Demarly, a doctoral candidate in nuclear science and engineering, and Ravikishore Kommajosyula, a doctoral candidate in mechanical engineering and computation, describes its work in the March 11 issue of Applied Physics Letters.
Baglietto, who arrived at MIT in 2011, is thermal hydraulics lead for the Consortium for Advanced Simulation of Light Water Reactors (CASL), an initiative begun in 2010 to design predictive modeling tools to improve current and next-generation reactors, and to ensure the economic viability of nuclear energy as an electricity source.
Central to Baglietto’s CASL work has been the issue of critical heat flux (CHF), which “represents one of the grand challenges for the heat transfer community,” he says. CHF describes a boiling condition in which there is a sudden loss of contact between the bubbling liquid and the heating element, which in the case of the nuclear industry is the fuel rod. This instability can emerge suddenly, in response to changes in power levels, for example. As boiling reaches a crisis, a vaporous film covers the fuel surface, which then gives way to dry spots that quickly reach very high temperatures.
“You want bubbles forming and departing from the surface, and water evaporating, in order to take away heat,” explains Baglietto. “If it becomes impossible to remove the heat, it is possible for the metal cladding to fail.”
Nuclear regulators have established power settings in the commercial reactor fleet whose upper limits are well beneath levels that might trigger CHF. This has meant running reactors below their potential energy output.
“We want to allow as much boiling as possible without reaching CHF,” says Baglietto. “If we could know how far we are at all times from CHF, we could operate just on the other side, and improve the performance of reactors.”
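As a toy illustration of what "knowing how far we are from CHF" means in practice, the sketch below computes a margin to CHF along a single fuel rod, in the spirit of the departure-from-nucleate-boiling ratio used in reactor analysis. The half-sine power shape and all numerical values are assumptions for illustration, not results from Baglietto's models.

```python
# Toy margin-to-CHF calculation for a single fuel rod. The axial power shape and
# the numerical values are assumptions for illustration, not results from the study.
import numpy as np

ROD_LENGTH = 4.0                 # m, typical fuel rod height mentioned in the article
q_avg = 0.6e6                    # W/m^2, assumed average surface heat flux
chf = 1.5e6                      # W/m^2, assumed critical heat flux for these conditions

z = np.linspace(0.0, ROD_LENGTH, 200)
# Half-sine axial power shape, peaking at the rod mid-plane and averaging to q_avg
q_local = q_avg * (np.pi / 2) * np.sin(np.pi * z / ROD_LENGTH)

margin = chf / q_local.max()     # analogous to a minimum departure-from-nucleate-boiling ratio
print(f"peak heat flux {q_local.max()/1e6:.2f} MW/m^2, margin to CHF {margin:.2f}x")
```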
Achieving this, says Baglietto, requires better modeling of the processes leading to CHF. “Previous models were based on clever guesses, because it was impossible to see what was actually going on at the surface where boiling took place, and because models didn’t take into account all the physics driving CHF,” says Baglietto.
So he set out to create a comprehensive, high-fidelity representation of boiling heat transfer processes up to the point of CHF. This meant creating physically accurate models of the movement of bubbles, boiling, and condensation taking place at what engineers call "the wall" — the cladding of the four-meter-tall, one-centimeter-wide nuclear fuel rods, which are packed by the tens of thousands in a typical nuclear reactor core and surrounded by hot fluid.
While some of Baglietto’s computational models took advantage of existing knowledge of the complex fuel assembly heat transfer processes inside reactors, he also sought new experimental data to validate his models. He enlisted the help of department colleagues Matteo Bucci, the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering, and Jacopo Buongiorno, the TEPCO Professor and associate department head for nuclear science and engineering.
Using electrically heated surrogate fuel assemblies with transparent walls, MIT researchers were able to observe the fine details of the evolution of boiling up to CHF.
“You’d go from a situation where nice little bubbles removed a lot of heat, and new water re-flooded the surface, keeping things cold, to an instant later when suddenly there was no more space for bubbles and dry spots would form and grow,” says Baglietto.
One fundamental corroboration emerged from these experiments. Baglietto’s initial models, contrary to conventional thinking, had suggested that during boiling, evaporation is not the exclusive form of heat removal. Simulation data showed that bubbles sliding, jostling and departing from the surface removed even more heat than evaporation, and experiments validated the findings of the models.
“Baglietto’s work represents a landmark in the evolution of predictive capabilities for boiling systems, enabling us to model behaviors at a much more fundamental level than ever possible before,” says W. David Pointer, group leader of advanced reactor engineering at the Oak Ridge National Laboratory, who was not involved in the research. “This research will allow us to develop significantly more aggressive designs that better optimize the power produced by fuel without compromising on safety, and it will have an immediate impact on performance in the current fleet as well as on next-generation reactor design.”
Baglietto’s research will also quickly improve the process for developing nuclear fuels. Instead of spending many months and millions of dollars on experiments, says Pointer, “We can shortcut those long sequences of tests by providing accurate, reliable models.”
In coming years, Baglietto’s comprehensive approach may help deliver fuel cladding that is more resistant to fouling and impurities, more accident tolerant, and that encourages higher wettability, making surfaces more conducive to contact with water and less likely to form dry spots.
Even small improvements in nuclear energy output can make a big difference, Baglietto says.
“If fuel performs five percent better in an existing reactor, that means five percent more energy output, which can mean burning less gas and coal,” he says. “I hope to see our work very soon in U.S. reactors, because if we can produce more nuclear energy cheaply, reactors will remain competitive against other fuels, and make a greater impact on CO2 emissions."
The research was supported by the Department of Energy’s Consortium for Advanced Simulation of Light Water Reactors.

3:00p
Working out makes hydrogels perform more like muscle

Human skeletal muscles have a unique combination of properties that materials researchers seek for their own creations. They’re strong, soft, full of water, and resistant to fatigue. A new study by MIT researchers has found one way to give synthetic hydrogels this total package of characteristics: putting them through a vigorous workout.
In particular, the scientists mechanically trained the hydrogels by stretching them in a water bath. And just as with skeletal muscles, the reps at the “gym” paid off. The training aligned nanofibers inside the hydrogels to produce a strong, soft, and hydrated material that resists breakdown or fatigue over thousands of repetitive movements.
The polyvinyl alcohol (PVA) hydrogels trained in the experiment are well-known biomaterials that researchers use for medical implants, drug coatings, and other applications, says Xuanhe Zhao, an associate professor of mechanical engineering at MIT. “But one with these four important properties has not been designed or manufactured until now.”
In their paper, published this week in the Proceedings of the National Academy of Sciences, Zhao and his colleagues describe how the hydrogels also can be 3-D-printed into a variety of shapes that can be trained to develop the suite of muscle-like properties.
In the future, the materials might be used in implants such as “heart valves, cartilage replacements, and spinal disks, as well as in engineering applications such as soft robots,” Zhao says.
Other MIT authors on the paper include graduate student Shaoting Lin, postdoc Ji Liu, and graduate student Xunyue Liu in Zhao’s lab.
Training for strength and more
Excellent load-bearing natural tissues such as muscles and heart valves are a source of bioinspiration for materials researchers, but it has been very challenging to design materials that capture all of their properties simultaneously, Zhao says.
For instance, one can design a hydrogel with highly aligned fibers to give it strength, but it may not be as flexible as a muscle, or it may not have the water content that makes it compatible for use in humans. “Most of the tissues in the human body contain about 70 percent water, so if we want to implant a biomaterial in the body, a higher water content is more desirable for many applications in the body,” Zhao explains.
The discovery that mechanical training could produce a muscle-like hydrogel was something of an accident, says Lin, the lead author of the PNAS study. The research team had been performing cyclic mechanical loading tests on the hydrogels, trying to find the fatigue point where the hydrogels would begin to break down. They were surprised instead to find that the cyclic training was actually strengthening the hydrogels.
“The phenomenon of strengthening in hydrogels after cyclic loading is counterintuitive to the current understanding on fatigue fracture in hydrogels, but shares the similarity with the mechanism of muscle strengthening after training,” says Lin.
Before training, the nanofibers that make up the hydrogel are randomly oriented. “During the training process, what we realized is that we were aligning the nanofibers,” says Lin, adding that the alignment is similar to what happens to a human muscle under repeated exercise. This training made the hydrogels stronger and fatigue-resistant. The combination of the four key properties appeared after about 1,000 stretching cycles, but some of the hydrogels were stretched over 30,000 cycles without breaking down. The tensile strength of the trained hydrogel, in the direction of the aligned fibers, increased by about 4.3 times over the unstretched hydrogel.
At the same time, the hydrogel demonstrated soft flexibility, and maintained a high water content of 84 percent, the researchers found.

The antifatigue factor
The scientists turned to confocal microscopy to take a closer look at the trained hydrogels, to see if they could discover the reasons behind their impressive anti-fatigue property. “We put these through thousands of cycles of load, so why doesn’t it fail?” Lin says. “What we did is make a cut perpendicular to these nanofibers and tried to propagate a crack or damage in this material.”
“We dyed the fibers under the microscope to see how they deformed as a result of the cut, [and found that] a phenomenon called crack pinning was responsible for fatigue resistance,” says Ji Liu.
“In an amorphous hydrogel, where the polymer chains are randomly aligned, it doesn’t take too much energy for damage to spread through the gel,” Lin adds. “But in the aligned fibers of the hydrogel, a crack perpendicular to the fibers is ‘pinned’ in place and prevented from lengthening because it takes much more energy to fracture through the aligned fibers one by one.”
In fact, the trained hydrogels break a famous fatigue threshold, predicted by the Lake-Thomas theory, which proposes the energy required to fracture a single layer of amorphous polymer chains such as those that make up PVA hydrogels. The trained hydrogels are 10 to 100 times more fatigue-resistant than predicted by the theory, Zhao and his colleagues concluded.
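For a rough sense of scale, here is an order-of-magnitude evaluation of the Lake-Thomas threshold described above; the numerical inputs are assumed placeholders, not values reported in the paper.

```python
# Order-of-magnitude estimate of the Lake-Thomas fatigue threshold: the energy needed
# to fracture a single layer of amorphous polymer chains, roughly
# (areal chain density) x (monomer units per chain) x (bond energy per monomer).
# All numerical values below are assumed placeholders, not values from the paper.
areal_chain_density = 1e17   # chains crossing a unit area of crack plane, per m^2 (assumed)
monomers_per_chain = 100     # monomer units between crosslinks (assumed)
bond_energy = 5e-19          # J per monomer unit, roughly one backbone bond (assumed)

gamma_0 = areal_chain_density * monomers_per_chain * bond_energy
print(f"Lake-Thomas threshold estimate: about {gamma_0:.0f} J/m^2")
# A trained, fiber-aligned hydrogel that is 10-100x more fatigue-resistant than this
# kind of estimate tolerates fatigue energy release rates well above gamma_0.
```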
The research was supported, in part, by the National Science Foundation, the Office of Naval Research, and the U.S. Army Research Office through the Institute for Soldier Nanotechnologies at MIT.