MIT Research News' Journal
Wednesday, May 20th, 2020
A scientist turns to entrepreneurship
Like the atomic particles he studies, Pablo Ducru seems constantly on the move, vibrating with energy. But if he sometimes appears to be headed in an unexpected direction, Ducru, a doctoral candidate in nuclear science and computational engineering, knows exactly where he is going: “My goal is to address climate change as an innovator and creator, whether by pushing the boundaries of science” through research, says Ducru, “or pursuing a zero-carbon future as an entrepreneur.”
It can be hard to catch up with Ducru. In January, he returned to Cambridge, Massachusetts, from Beijing, where he was spending a year earning a master’s degree in global affairs as a Schwarzman Scholar at Tsinghua University. He flew out just days before a travel crackdown in response to Covid-19.
“This year has been intense, juggling my PhD work and the master’s overseas,” he says. “But I needed to do it, to get a 360-degree understanding of the problem of climate change, which isn’t just a technological problem, but also one involving economics, trade, policy, and finance.”
Schwarzman Scholars, an international cohort selected on the basis of academic excellence and leadership potential, among other criteria, focus on critical challenges of the 21st century. While all the students must learn the basics of international relations and China’s role in the world economy, they can tailor their studies according to their interests.
Ducru is incorporating nuclear science into his master’s program. “It is at the core of many of the world’s key problems, from climate change to arms control, and it also impacts artificial intelligence by advancing high-performance computing,” he says.
A Franco-Mexican raised in Paris, Ducru arrived at nuclear science by way of France’s selective academic system. He excelled in math, history, and English during his high school years. “I realized technology is what drives history,” he says. “I thought that if I wanted to make history, I needed to make technology.” He graduated from École Polytechnique, specializing in physics and applied mathematics, with a major in energies of the 21st century.
Creating computational shortcuts
Today, as a member of MIT’s Computational Reactor Physics Group (CRPG), Ducru is deploying his expertise in singular ways to help solve some of the toughest problems in nuclear science.
Nuclear engineers, hoping to optimize efficiency and safety in current and next-generation reactor designs, are on a quest for high-fidelity nuclear simulations. At such fine-grained levels of modeling, the behavior of subatomic particles is sensitive to minute uncertainties in temperature change, or differences in reactor core geometry, for instance. To quantify such uncertainties, researchers currently need countless costly hours of supercomputer time to simulate the behaviors of billions of neutrons under varying conditions, estimating and then averaging outcomes.
“But with some problems, more computing won’t make a difference,” notes Ducru. “We have to help computers do the work in smarter ways.” To accomplish this task, he has developed new formulations for characterizing basic nuclear physics that make it much easier for a computer to solve problems: “I dig into the fundamental properties of physics to give nuclear engineers new mathematical algorithms that outperform thousands of times over the old ways of computing.”
With his novel statistical methods and algorithms, developed with CRPG colleagues and during summer stints at Los Alamos and Oak Ridge National Laboratories, Ducru offers “new ways of looking at problems that allow us to infer trends from uncertain inputs, such as physics, geometries, or temperatures,” he says.
These innovative tools accommodate other kinds of problems that involve computing average behaviors from billions of individual occurrences, such as bubbles forming in a turbulent flow of reactor coolant. “My solutions are quite fundamental and problem-agnostic — applicable to the design of new reactors, to nuclear imaging systems for tumor detection, or to the plutonium battery of a Mars rover,” he says. “They will be useful anywhere scientists need to lower costs of high-fidelity nuclear simulations.”
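The brute-force approach Ducru is improving on can be sketched in a few lines. The toy model below is only an illustration of Monte Carlo uncertainty quantification, not CRPG code; the attenuation length, its assumed uncertainty, and all names are invented for the example. It propagates an uncertain input through a simple particle-transmission simulation by re-running the whole simulation for each sampled input.

```python
import random
import statistics

def transmission_fraction(mean_free_path, thickness, n_particles=10_000):
    """Brute-force estimate of the fraction of particles that cross a
    slab, sampling exponentially distributed free paths."""
    crossed = sum(
        1 for _ in range(n_particles)
        if random.expovariate(1.0 / mean_free_path) > thickness
    )
    return crossed / n_particles

# Uncertain input: a hypothetical mean free path known only to ~5%.
results = []
for _ in range(200):  # one full simulation per sampled input
    mfp = random.gauss(2.0, 0.1)  # cm, placeholder value
    results.append(transmission_fraction(mfp, thickness=5.0))

print(f"mean transmission: {statistics.mean(results):.4f}")
print(f"spread due to input uncertainty: {statistics.stdev(results):.4f}")
```

At production fidelity, each pass through the outer loop is one of the costly supercomputer runs described above; reformulating the underlying physics so that those repeated runs become unnecessary is the kind of shortcut Ducru’s algorithms aim to provide.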
But Ducru won’t be among the scientists deploying these computational advances. “I think we’ve done a good job, and others will continue in this area of research,” he says. “After six years of delving deep into quantum physics and statistics, I felt my next step should be a startup.”
Scaling up with shrimp
As he pivots away from academia and nuclear science, Ducru remains true to his mission of addressing the climate problem. The result is Torana, a company Ducru and a partner started in 2018 to develop the financial products and services aquaculture needs to sustainably feed the world.
“I thought we could develop a scalable zero-carbon food,” he says. “The world needs high-nutrition proteins to feed growing populations in a climate-friendly way, especially in developing nations.”
Land-based protein sources such as livestock can take a heavy toll on the environment. But shrimp, on the other hand, are “very efficient machines, scavenging crud at the bottom of the ocean and converting it into high-quality protein,” notes Ducru, who received the 2018 MIT Water Innovation Prize and the 2019 Rabobank-MIT Food and Agribusiness Prize to help develop his aquaculture startup (then called Velaron).
Torana is still in its early stages, and Ducru hopes to apply his modeling expertise to build a global system of sustainable shrimp farming. His Schwarzman master’s thesis examines the role of aquaculture in the future global food system, with a focus on the shrimp supply chain.
In response to the Covid-19 pandemic, Ducru relocated to the family farm in southern France, which he helps run while continuing his Tsinghua master’s program online and working on his MIT PhD. He is tweaking his business plans and putting the final touches on his PhD research, including submitting several articles for publication. While it has been challenging keeping all these balls in the air, he has supportive mentors. “Benoit Forget [CRPG director] has backed almost all my crazy ideas,” says Ducru. “People like him make MIT the best university on Earth.”
Ducru is already mapping out his next decade or so: grow his startup, and perhaps create a green fund that could underwrite zero-carbon projects, including nuclear ones. “I don’t have Facebook and don’t watch online series or TV, because I prefer being an actor, creating things through my work,” he says. “I’m a scientific entrepreneur, and will continue to innovate across different realms.”
Machine-learning tool could help develop tougher materials
For engineers developing new materials or protective coatings, there are billions of different possibilities to sort through. Lab tests or even detailed computer simulations to determine their exact properties, such as toughness, can take hours, days, or more for each variation. Now, a new artificial intelligence-based approach developed at MIT could reduce that to a matter of milliseconds, making it practical to screen vast arrays of candidate materials.
The system, which MIT researchers hope could be used to develop stronger protective coatings or structural materials — for example, to protect aircraft or spacecraft from impacts — is described in a paper in the journal Matter, by MIT postdoc Chi-Hua Yu, civil and environmental engineering professor and department head Markus J. Buehler, and Yu-Chuan Hsu at the National Taiwan University.
The focus of this work was on predicting the way a material would break or fracture, by analyzing the propagation of cracks through the material’s molecular structure. Buehler and his colleagues have spent many years studying fractures and other failure modes in great detail, since understanding failure processes is key to developing robust, reliable materials. “One of the specialties of my lab is to use what we call molecular dynamics simulations, or basically atom-by-atom simulations” of such processes, Buehler says.
These simulations provide a chemically accurate description of how fracturing happens, he says. But it’s slow, because it requires solving equations of motion for every single atom. “It takes a lot of time to simulate these processes,” he says. The team decided to explore ways of streamlining that process, using a machine-learning system.
“We’re kind of taking a detour,” he says. “We’ve been asking, what if you had just the observation of how fracturing happens [in a given material], and let computers learn this relationship itself?” To do that, artificial intelligence (AI) systems need a variety of examples to use as a training set, to learn about the correlations between the material’s characteristics and its performance.
In this case, they were looking at a variety of composite, layered coatings made of crystalline materials. The variables included the composition of the layers and the relative orientations of their orderly crystal structures, and the way those materials each responded to fracturing, based on the molecular dynamics simulations. “We basically simulate, atom by atom, how materials break, and we record that information,” Buehler says.

[Animation: The team used atom-by-atom simulations to determine how cracks propagate through different materials; in the simulation shown, the crack propagates all the way through.]
They painstakingly generated hundreds of such simulations, with a wide variety of structures, and subjected each one to many different simulated fractures. Then they fed large amounts of data about all these simulations into their AI system, to see if it could discover the underlying physical principles and predict the performance of a new material that was not part of the training set.
And it did. “That’s the really exciting thing,” Buehler says, “because the computer simulation through AI can do what normally takes a very long time using molecular dynamics, or using finite element simulations, which are another way that engineers solve this problem, and it’s very slow as well. So, this is a whole new way of simulating how materials fail.”
How materials fail is crucial information for any engineering project, Buehler emphasizes. Materials failures such as fractures are “one of the biggest reasons for losses in any industry. For inspecting planes or trains or cars, or for roads or infrastructure, or concrete, or steel corrosion, or to understand the fracture of biological tissues such as bone, the ability to simulate fracturing with AI, and doing that quickly and very efficiently, is a real game changer.”
The improvement in speed produced by using this method is remarkable. Hsu explains that “for single simulations in molecular dynamics, it has taken several hours to run the simulations, but in this artificial intelligence prediction, it only takes 10 milliseconds to go through all the predictions from the patterns, and show how a crack forms step by step.”
The method they developed is quite generalizable, Buehler says. “Even though in our paper we only applied it to one material with different crystal orientations, you can apply this methodology to much more complex materials.” And while they used data from atomistic simulations, the system could also be used to make predictions on the basis of experimental data such as images of a material undergoing fracturing.
“If we had a new material that we’ve never simulated before,” he says, “if we have a lot of images of the fracturing process, we can feed that data into the machine-learning model as well.” Whatever the input, simulated or experimental, the AI system essentially goes through the evolving process frame by frame, noting how each image differs from the one before in order to learn the underlying dynamics.
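The paper’s actual architecture is not reproduced here, but the frame-by-frame learning Buehler describes can be sketched with a minimal next-frame predictor. Everything below — the layer sizes, the 64×64 field resolution, and the random stand-in data — is an assumption for illustration, not the model from the Matter paper.

```python
import torch
import torch.nn as nn

# Minimal next-frame predictor: map one snapshot of a simulated
# fracture field to the following snapshot. Illustrative only.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in training pairs (frame_t, frame_t+1); in practice these
# would be consecutive molecular-dynamics snapshots.
frames_t = torch.rand(32, 1, 64, 64)
frames_next = torch.rand(32, 1, 64, 64)

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(frames_t), frames_next)
    loss.backward()
    optimizer.step()

# At inference time, the model rolls a fracture forward by feeding
# its own prediction back in, frame by frame — in milliseconds,
# rather than re-running the atomistic simulation.
frame = frames_t[:1]
with torch.no_grad():
    for _ in range(10):
        frame = model(frame)
```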
For example, as researchers make use of the new facilities in MIT.nano, the Institute’s facility dedicated to fabricating and testing materials at the nanoscale, vast amounts of new data about a variety of synthesized materials will be generated.
“As we have more and more high-throughput experimental techniques that can produce a lot of images very quickly, in an automated way, these kind of data sources can immediately be fed into the machine-learning model,” Buehler says. “We really think that the future will be one where we have a lot more integration between experiment and simulation, much more than we have in the past.”
The system could be applied not just to fracturing, as the team did in this initial demonstration, but to a wide variety of processes unfolding over time, he says, such as diffusion of one material into another, or corrosion processes. “Anytime where you have evolutions of physical fields, and we want to know how these fields evolve as a function of the microstructure,” he says, this method could be a boon.
The research was supported by the U.S. Office of Naval Research and the Army Research Office.
Towable sensor free-falls to measure vertical slices of ocean conditions
The motion of the ocean is often thought of in horizontal terms, for instance in the powerful currents that sweep around the planet, or the waves that ride in and out along a coastline. But there is also plenty of vertical motion, particularly in the open seas, where water from the deep can rise up, bringing nutrients to the upper ocean, while surface waters sink, sending dead organisms, along with oxygen and carbon, to the deep interior.
Oceanographers use instruments to characterize the vertical mixing of the ocean’s waters and the biological communities that live there. But these tools are limited in their ability to capture small-scale features, such as the up- and down-welling of water and organisms over a small, kilometer-wide ocean region. Such features are essential for understanding the makeup of marine life that exists in a given volume of the ocean (such as in a fishery), as well as the amount of carbon that the ocean can absorb and sequester away.
Now researchers at MIT and the Woods Hole Oceanographic Institution (WHOI) have engineered a lightweight instrument that measures both physical and biological features of the vertical ocean over small, kilometer-wide patches. The “ocean profiler,” named EcoCTD, is about the size of a waist-high model rocket and can be dropped off the back of a moving ship. As it free-falls through the water, its sensors measure physical features, such as temperature and salinity, as well as biological properties, such as the optical scattering of chlorophyll, the green pigment of phytoplankton.
“With EcoCTD, we can see small-scale areas of fast vertical motion, where nutrients could be supplied to the surface, and where chlorophyll is carried downward, which tells you this could also be a carbon pathway. That’s something you would otherwise miss with existing technology,” says Mara Freilich, a graduate student in MIT’s Department of Earth, Atmospheric, and Planetary Sciences and the MIT-WHOI Joint Program in Oceanography/Applied Ocean Sciences and Engineering.
Freilich and her colleagues have published their results today in the Journal of Atmospheric and Oceanic Technology. The paper’s co-authors are J. Thomas Farrar, Benjamin Hodges, Tom Lanagan, and Amala Mahadevan of WHOI, and Andrew Baron of Dynamic System Analysis, in Nova Scotia. The lead author is Mathieu Dever of WHOI and RBR, a developer of ocean sensors based in Ottawa.
Ocean synergy
Oceanographers use a number of methods to measure the physical properties of the ocean. Some of the more powerful, high-resolution instruments used are known as CTDs, for their ability to measure the ocean’s conductivity, temperature, and depth. CTDs are typically bulky, as they contain multiple sensors as well as components that collect water and biological samples. Conventional CTDs require a ship to stop as scientists lower the instrument into the water, sometimes via a crane system. The ship has to stay put as the instrument collects measurements and water samples, and can only get back underway after the instrument is hauled back onboard.
Physical oceanographers who do not study ocean biology, and therefore do not need to collect water samples, can sometimes use “UCTDs” — underway versions of CTDs, without the bulky water sampling components, that can be towed as a ship is underway. These instruments can sample quickly since they do not require a crane or a ship to stop as they are dropped.
Freilich and her team looked to design a version of a UCTD that could also incorporate biological sensors, all in a small, lightweight, towable package that would keep the ship moving on course as it gathered its vertical measurements.
“It seemed there could be straightforward synergy between these existing instruments, to design an instrument that captures physical and biological information, and could do this underway as well,” Freilich says.
“Reaching the dark ocean”
The core of the EcoCTD is the RBR Concerto Logger, a sensor that measures the temperature of the water, as well as the conductivity, which is a proxy for the ocean’s salinity. The profiler also includes a lead collar that provides enough weight to enable the instrument to free-fall through the water at about 3 meters per second — a rate that takes the instrument down to about 500 meters below the surface in about two minutes.
“At 500 meters, we’re reaching the upper twilight zone,” Freilich says. “The euphotic zone is where there’s enough light in the ocean for photosynthesis, and that’s at about 100 to 200 meters in most places. So we’re reaching the dark ocean.”
Another sensor, the EcoPuck, sets the EcoCTD apart from other UCTDs: it measures the ocean’s biological properties. Specifically, it is a small, puck-shaped bio-optical sensor that emits two wavelengths of light — red and blue. The sensor captures the light scattered back to it, along with the fluorescence of chlorophyll-containing phytoplankton in response to the light. If the red light received resembles a certain wavelength characteristic of chlorophyll, scientists can deduce the presence of phytoplankton at a given depth. Variations in red and blue light scattered back to the sensor can indicate other matter in the water, such as sediments or dead cells — a measure of the amount of carbon at various depths.
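Bio-optical fluorometers of this class are conventionally calibrated with a linear formula relating raw counts to a chlorophyll concentration. The sketch below shows that generic convention only; the dark-count and scale-factor values are hypothetical placeholders, not EcoCTD calibration constants.

```python
def counts_to_chlorophyll(raw_counts, dark_counts=50, scale_factor=0.012):
    """Standard linear calibration for bio-optical fluorometers:
    chlorophyll (ug/L) = scale_factor * (raw_counts - dark_counts).
    Both constants here are hypothetical placeholders."""
    return scale_factor * (raw_counts - dark_counts)

# Example: a raw fluorescence reading of 900 counts
print(f"{counts_to_chlorophyll(900):.2f} ug/L")  # -> 10.20 ug/L
```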
The EcoCTD includes another sensor not found on conventional UCTDs — the Rinko III Do, which measures the oxygen concentration in the water, giving scientists an estimate of how much oxygen is being taken up by any microbial communities living at a given depth and in a given parcel of water.
Finally, the entire instrument is encased in an aluminum tube and designed to attach via a long line to a winch at the back of a ship. As the ship moves, a team can drop the instrument overboard and use the winch to pay out the line at a rate that lets the instrument drop straight down, even as the ship moves away. After about two minutes, once it has reached a depth of about 500 meters, the team cranks the winch to pull the instrument back up, at a rate that lets it catch up to the ship within 12 minutes. The crew can then drop the instrument again, this time at some distance from their last dropoff point.
“The nice thing is, by the time we go to the next cast, we’re 500 meters away from where we were the first time, so we’re exactly where we want to sample next,” Freilich says.
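Those round numbers imply a simple cast-cycle budget. The back-of-envelope sketch below uses only the figures quoted above (3 m/s fall rate, 500-meter depth, 12-minute recovery, 500-meter cast spacing) and should be read as illustrative arithmetic, not the cruise plan.

```python
# Cast-cycle arithmetic from the figures quoted above.
fall_speed = 3.0         # m/s free-fall rate
max_depth = 500.0        # m
recovery_time = 12 * 60  # s, winch haul-back

descent_time = max_depth / fall_speed      # ~167 s
cycle_time = descent_time + recovery_time  # one full cast, ~15 min

cast_spacing = 500.0                       # m between casts
ship_speed = cast_spacing / cycle_time     # implied average speed

print(f"descent: {descent_time / 60:.1f} min; cycle: {cycle_time / 60:.1f} min")
print(f"implied ship speed: {ship_speed:.2f} m/s ({ship_speed * 1.944:.1f} knots)")
```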
They tested the EcoCTD on two cruises in 2018 and 2019, one in the Mediterranean and the other in the Atlantic, and in both cases were able to collect both physical and biological data at a higher resolution than existing CTDs.
“The EcoCTD is capturing these ocean characteristics at a gold-standard quality with much more convenience and versatility,” Freilich says.
The team will further refine their design, and hopes that their high-resolution, easily deployable, and more efficient alternative may be adopted both by scientists monitoring the ocean’s small-scale responses to climate change and by fisheries that want to keep track of a certain region’s biological productivity.
This research was funded in part by the U.S. Office of Naval Research.
Scientists find a new way to reverse symptoms of Fragile X
MIT scientists have identified a potential new strategy for treating Fragile X syndrome, a disorder that is the leading heritable cause of intellectual disability and autism.
In a study of mice, the researchers showed that inhibiting an enzyme called GSK3 alpha reversed many of the behavioral and cellular features of Fragile X. The small-molecule compound has been licensed for further development and possible human clinical trials.
From the mouse studies, there are signs that this compound may not have the same limitations of another class of Fragile X drugs that failed in human clinical trials a few years ago, says Mark Bear, the Picower Professor of Neuroscience, a member of MIT’s Picower Institute for Learning and Memory, and one of the senior authors of the study.
GSK3 inhibitors might also be useful against other diseases in which GSK3 plays a role, including Alzheimer’s disease, he says.
Florence Wagner, director of medicinal chemistry at the Broad Institute’s Stanley Center for Psychiatric Research, is also a senior author of the study, which appears today in Science Translational Medicine. The lead authors are MIT postdoc Patrick McCamphill, former MIT graduate student Laura Stoppel, and former MIT postdoc Rebecca Senter.
Many targets
Fragile X affects about 1 in 2,500 to 4,000 boys and 1 in 7,000 to 8,000 girls, and is caused by a mutation in the gene that encodes a protein called Fragile X mental retardation protein (FMRP). In addition to intellectual disability, symptoms include epilepsy, attention deficit and hyperactivity, hypersensitivity to noise and light, and autistic behaviors such as hand-flapping.
Bear’s lab, which has been studying Fragile X for about two decades, has previously shown that protein synthesis at synapses, the specialized junctions between neurons, is stimulated by a neurotransmitter receptor called metabotropic glutamate receptor 5 (mGluR5). FMRP normally regulates this protein synthesis. When FMRP is lost, mGluR5-stimulated protein synthesis becomes overactive, and this can account for many of the varied symptoms seen in Fragile X.
In studies of mice, Bear and others have found that compounds that inhibit the mGluR5 receptor could reverse most of the symptoms of Fragile X. However, none of the mGluR5 inhibitors that have been tested in clinical trials have succeeded.
In the meantime, the MIT team, along with many other research groups, has been searching for other molecules that could be targeted to treat Fragile X.
“We and many other labs have been chipping away at this and trying to understand the key molecular players. There’s quite a large number now, and there have been different manipulations in the signaling pathway that can correct Fragile X phenotypes in animals,” Bear says. “We like to refer to this as a target-rich environment. If at first you don’t succeed therapeutically, you have many other shots on goal.”
Some studies suggested that GSK3 was overactive in Fragile X mouse models and that this activity could be turned down using lithium. However, the required dosage of lithium has adverse side effects in children. Pharmaceutical companies developed other small-molecule drugs that inhibit GSK3, but these triggered an accumulation of a protein called beta-catenin, which can lead to cancerous cell proliferation.
The GSK3 enzyme comes in two forms, alpha and beta, so Wagner, along with Edward Holson, former director of medicinal chemistry at the Stanley Center, and Edward Scolnick, chief scientist emeritus at the Stanley Center, set out to develop drugs that would inhibit either one or the other.
“Studies had been published showing that if you selectively knock out either alpha or beta, it wouldn’t trigger beta-catenin accumulation,” Wagner says. “GSK3 inhibitors had been tested in Fragile X models before, but it’s never gone anywhere because of the toxicity issue.”
After a screen of more than 400,000 drug compounds, Wagner identified a handful that inhibited both forms of GSK3. By slightly altering their structures, she then came up with versions that could selectively target either the alpha or the beta form.
Bear’s lab tested the selective inhibitors in genetically engineered mice that lack the FMRP protein, and found that the inhibitor specific to GSK3 alpha eliminated one of the common Fragile X symptoms — seizures induced by loud tones. Following that, they found that the GSK3 alpha inhibitor also successfully reversed several other symptoms of Fragile X, while the GSK3 beta inhibitor did not.
These symptoms include overproduction of protein as well as altered synaptic plasticity, impairment of some types of learning and memory, and hyperexcitability of some neurons.
“It checked off all the boxes that we would have expected from inhibiting mGluR5 or the signaling pathway downstream,” Bear says. “It’s really amazing that if you can correct the excess protein synthesis with a drug compound, a dozen other phenotypes are going to be corrected.”
Exploring side effects
GSK3 is a kinase, which means that it controls other proteins by adding chemical groups called phosphates to them, but its exact role in Fragile X is not yet known. In this study, the researchers found that GSK3 is part of the same signaling pathway controlled by mGluR5, but GSK3 appears to act later in the pathway.
The initial findings in mice suggest that GSK3 alpha inhibitors do not have some of the complications that may have caused the mGluR5 inhibitors to fail in clinical trials, Bear says. In those trials, mGluR5 inhibitors were found to cause hallucinations in some people, which limits the dose that can be given. (In mice, hallucinations cannot be directly measured, but there are techniques for indirectly testing hallucinogenic potential.) Mouse studies of mGluR5 inhibitors did show that potential for causing hallucination, but studies of GSK3 alpha inhibitors have not shown it.
Another side effect seen in mouse studies of mGluR5 inhibitors is the development of resistance, with long-term treatment, to the drug’s effects on some symptoms of the disorder.
“We don’t know whether the mGluR trials failed because of treatment resistance, but it’s a viable hypothesis,” Bear says. “What we do know is with the GSK3 alpha inhibitor, we do not see that in mice, to the extent that we’ve looked at it.”
GSK3 inhibitors may also hold promise for treating other diseases in which GSK3 plays a role. In a Science Translational Medicine study published last year, also co-authored by Wagner, researchers at the Broad Institute and Dana-Farber Cancer Institute showed that selective GSK3 inhibitors could be effective against acute myeloid leukemia.
GSK3 could also be a potential target for Alzheimer’s treatment, as it is responsible for phosphorylating Tau, a protein that forms tangles in the brains of Alzheimer’s patients.
The research was funded by the National Institute of Mental Health, the Simons Foundation, the Stanley Center for Psychiatric Research, the JPB Foundation, and the FRAXA Research Foundation.
Making tissue stretchable, compressible, and nearly indestructible
When there’s a vexing problem to be solved, people sometimes offer metaphorical advice such as “stretching the mind” or engaging in “flexible” thinking, but in confronting a problem facing many biomedical research labs, a team of MIT researchers has engineered a solution that is much more literal. To make imaging cells and molecules in brain and other large tissues easier while also making samples tough enough for years of handling in the lab, they have come up with a chemical process that makes tissue stretchable, compressible, and pretty much indestructible.
“ELAST” technology, described in a new paper in Nature Methods, provides scientists a very fast way to fluorescently label cells, proteins, genetic material, and other molecules within brains, kidneys, lungs, hearts, and other organs. That’s because when such tissues can be stretched out or squished down thin, labeling probes can infuse them far more rapidly. Several demonstrations in the paper show that even after repeated expansions or compressions to speed up labeling, tissues snap back to their original form unaltered except for the new labels.
The lab of Kwanghun Chung, an associate professor of chemical engineering and a member of MIT’s Institute for Medical Engineering and Science, and Picower Institute for Learning and Memory, developed ELAST amid work on a five-year project, funded by the National Institutes of Health, to make the most comprehensive map yet of the entire human brain. That requires being able to label and scan every fine cellular and molecular detail in the thickest slabs possible to preserve 3D structure. It also means the lab must be able to keep samples perfectly intact for years, even as they must accomplish numerous individual rounds of labeling quickly and efficiently. Each round of labeling — maybe a particular kind of neuron one day, or a key protein the next — will tell them something new about how the brain is structured and how it works.
“When people donate their brain, it is like they are donating a library,” says Chung. “Each one contains a library worth of information. You cannot access all the books in the library at the same time. We have to repeatedly be able to access the library without damaging it. Each of these brains is an extremely precious resource.”
Former lab postdoc Taeyun Ku, now an assistant professor at the Korea Advanced Institute of Science and Technology, is the study’s lead author. He says the particular difficulty of working with human tissues, which of course are much larger than those of lab animals like mice, inspired him to take this new engineering approach. Late one night in the lab around Christmas 2017, he was mulling over how to transform tissue for quicker labeling and began to tinker with repeated compression of an elastic gel.
“We changed our way of thinking: Biological tissue doesn't need to be very biological,” Ku says. “If our goal is not to image living events but to image appearances, we can change the material type of the tissue while maintaining the appearances. Our work shows how higher-level engineering of the brain enables us to better look into what’s inside the brain.”
The team’s efforts to engineer ELAST came down to finding the right formulation of a gel-like chemical called polyacrylamide. In the past, Chung has used the substance in a different formulation with crosslinking chemicals to make tissues strong but fairly brittle, says study co-author Webster Guan, a chemical engineering graduate student. When that formulation infused the tissues, cells and molecules would become directly attached to a grid-like mesh.
In the new formulation, the team used a high concentration of acrylamide with much less crosslinker and initiator. The result was an entanglement of long polymer chains with links that are able to slip around, giving the gel a structural integrity but with much more flexibility. Moreover, rather than attaching to the chains, Guan says, the cells and molecules of the tissue just become entangled within it, adding further to the ability of the acrylamide-infused tissues to withstand stretching or squashing without anything becoming torn or permanently displaced in the process.
In the study the team reports stretching human or mouse brain tissues to twice their width and length simultaneously, or compressing their thickness by 10 times with virtually no distortion after returning to their regular size.
“These results demonstrate that ELAST enables fully reversible tissue shape transformation while preserving structural and molecular information in the tissue,” they wrote.
Fully integrating the polyacrylamide into a large amount of tissue to achieve the elasticity can take as long as 21 days, they report, but from then on, any individual labeling step, such as labeling a particular kind of cell to determine its abundance, or a specific protein to see where it is expressed, can proceed far more quickly than with prior methods.
In one case, by repeatedly compressing a 5-millimeter-thick cross section of a human brain, the team needed only 24 hours to label it all the way through. For comparison, back in 2013 when Chung and colleagues debuted “CLARITY,” a method of making brain tissue transparent and fixing it with an acrylamide gel, they needed 24 hours to label a slice only a tenth as thick. Because labeling time is estimated by squaring the depth that probes must penetrate, calculations suggest labeling with ELAST proceeds 100 times faster than with CLARITY.
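That 100-fold figure follows directly from the depth-squared scaling applied to the two thicknesses; here is a one-line check using only the numbers above.

```python
# Labeling time is estimated as the square of the penetration depth,
# so equal labeling times at different depths imply the speedup below.
elast_depth_mm = 5.0     # labeled through in 24 hours with ELAST
clarity_depth_mm = 0.5   # labeled in 24 hours with CLARITY (2013)

speedup = (elast_depth_mm / clarity_depth_mm) ** 2
print(f"implied speedup: {speedup:.0f}x")  # -> 100x
```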
Though Chung’s lab mostly focuses on brains, the applicability to other organs can aid in other cell mapping efforts, Chung says. He adds that even if labeling tissue isn’t a goal at all, having an easy new way to make a durable, elastic gel could have other applications, for instance in creating soft robotics. Resources for learning more about ELAST are available at Chung’s website.
In addition to Ku, Guan, and Chung, the paper’s other authors are Nicholas Evans, Chang Ho Sohn, Alexandre Albanese, Joon-Goon Kim, and Matthew Frosch, a professor at Massachusetts General Hospital and Harvard Medical School.
Funding for the work came from sources including the JPB Foundation, the National Institutes of Health, the NCSOFT Cultural Foundation, the Searle Scholars Program, the Packard Award in Science and Engineering, a NARSAD Young Investigator Award, and a McKnight Foundation Technology Award.