MIT Research News' Journal
 

Monday, February 1st, 2016

    11:00a
    Curing disease by repairing faulty genes

    The genome-editing technique known as CRISPR allows scientists to clip a specific DNA sequence and replace it with a new one, offering the potential to cure diseases caused by defective genes. For this potential to be realized, however, scientists must find a way to safely deliver the CRISPR machinery and a corrected copy of the DNA into the diseased cells.   

    MIT researchers have now developed a way to deliver the CRISPR genome repair components more efficiently than previously possible, and they also believe it may be safer for human use. In a study of mice, they found that they could correct the mutated gene that causes a rare liver disorder in about 6 percent of liver cells — enough to cure the mice of the disease, known as tyrosinemia.

    “This finding really excites us because it makes us think that this is a gene repair system that could be used to treat a range of diseases — not just tyrosinemia but others as well,” says Daniel Anderson, associate professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES).

    Anderson is one of the senior authors of a paper describing the findings in the Feb. 1 issue of Nature Biotechnology. Wen Xue, an assistant professor in molecular medicine at the University of Massachusetts Medical School, is also a senior author. The paper’s lead author is Hao Yin, a research scientist at the Koch Institute.

    Find and replace

    The CRISPR system relies on cellular machinery that bacteria use to defend themselves from viral infection. Researchers have previously harnessed this system to create gene-editing complexes composed of a DNA-cutting enzyme called Cas9 and a short RNA that guides the enzyme to a specific area of the genome, directing Cas9 where to make its cut.

    When Cas9 and the short guide RNA targeting a disease gene are delivered into cells, a specific cut is made in the genome, and the cells’ DNA repair processes glue the cut back together, often deleting a small portion of the genome. However, if a corrected copy of the gene is also delivered when the cut is made, the DNA repair can lead to correction of the disease gene, permanently repairing the genome.
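
    The targeting step is simple enough to sketch in a few lines of code. The toy Python function below is purely illustrative (it is not taken from the paper): it scans the forward strand of a DNA string for the "NGG" PAM motif that the commonly used SpCas9 enzyme requires, reports the 20-nucleotide protospacer just upstream of each PAM, and marks the expected cut position about 3 base pairs before the PAM. Real guide design also checks the reverse strand and screens candidates for off-target matches elsewhere in the genome.

        # Illustrative sketch only: list candidate SpCas9 target sites on the
        # forward strand of a DNA sequence.
        def find_spcas9_sites(seq):
            """Return (protospacer, PAM, cut_index) tuples for NGG PAM sites."""
            seq = seq.upper()
            sites = []
            for i in range(20, len(seq) - 2):      # i = first base of the PAM
                if seq[i + 1:i + 3] == "GG":       # PAM = N-G-G
                    protospacer = seq[i - 20:i]    # 20-nt guide-matching sequence
                    cut_index = i - 3              # blunt cut ~3 bp 5' of the PAM
                    sites.append((protospacer, seq[i:i + 3], cut_index))
            return sites

        # Example: print the candidate sites in a short made-up sequence
        for site in find_spcas9_sites("ATGCTTACGGATCCGGTACCGATTTAGGCTAGCTAGGAGGT"):
            print(site)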

    In 2014, Anderson and colleagues described the first use of CRISPR to repair a disease gene in an adult animal. In that study, they were able to cure tyrosinemia in mice. However, delivery of the genetic components required a high-pressure injection, a method that can also cause some damage to the liver.

    “That was the first demonstration of using CRISPR/Cas9 to do genetic repair in an adult animal,” Anderson says. “We were excited by this demonstration but wanted to find a way to develop a drug form of the repair machinery that would be both safer and more efficient.”

    The researchers also wanted to boost the percentage of cells that had the defective gene replaced. In the previous study, about one in 250 liver cells was repaired, which was enough to successfully treat tyrosinemia. However, for many other diseases, a higher percentage of repair would be needed to provide a therapeutic effect.

    In the new study, Anderson and colleagues developed a combined nanoparticle and viral delivery system to deliver the CRISPR repair machinery. First, they created a nanoparticle from lipids and messenger RNA (mRNA) that encoded the Cas9 enzyme. The other two components — the RNA guide strand and the DNA for the corrected gene — were embedded into a reprogrammed viral particle based on an adeno-associated virus (AAV).

    The researchers first injected the virus about a week before the lipid nanoparticles, giving the liver cells time to begin producing the RNA guide strand and the DNA template. When the nanoparticles carrying the Cas9 mRNA strand were injected, the cells began producing the Cas9 protein, but only for a few days because the mRNA eventually degraded. This is long enough to perform gene repair, but prevents Cas9 from lingering in the cells and potentially disrupting other parts of the cells’ genome.

    “There’s some concern that if you had Cas9 in your cells for too long of a period of time, it might cause some genomic instability,” Anderson says. “We think the use of the mRNA nanoparticle provides an additional level of safety by making sure the enzyme is not present for too long a period of time.”
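
    The timing argument can be made concrete with a minimal kinetics sketch. The half-lives and rates below are illustrative assumptions, not values reported in the study; the point is only that expression from a one-time mRNA dose rises and then decays away within days, rather than persisting the way an integrated Cas9 gene would.

        import math

        # Toy first-order model of Cas9 expression from a single mRNA dose.
        # All half-lives and rate constants are assumed for illustration.
        MRNA_HALF_LIFE_H = 10.0       # assumed mRNA half-life, hours
        PROTEIN_HALF_LIFE_H = 24.0    # assumed Cas9 protein half-life, hours
        k_m = math.log(2) / MRNA_HALF_LIFE_H
        k_p = math.log(2) / PROTEIN_HALF_LIFE_H
        k_tl = 1.0                    # arbitrary translation rate

        m, p, dt = 1.0, 0.0, 0.01     # initial mRNA, protein, time step (hours)
        protein = []
        for _ in range(int(14 * 24 / dt)):        # simulate two weeks
            m, p = m - k_m * m * dt, p + (k_tl * m - k_p * p) * dt
            protein.append(p)

        peak = max(protein)
        peak_t = protein.index(peak) * dt
        gone_t = next(i * dt for i, v in enumerate(protein)
                      if i * dt > peak_t and v < 0.1 * peak)
        print(f"Cas9 peaks near day {peak_t / 24:.1f} and falls below 10% of "
              f"its peak by about day {gone_t / 24:.1f}")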

    High accuracy

    With this method, about one in 16 cells had the gene corrected, a 15-fold improvement over the 2014 study. The researchers also found that this approach generated less off-target DNA cutting than methods in which the Cas9 gene is integrated into a cell’s genome.
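
    As a quick check, the improvement factor and the percentage quoted earlier follow directly from the two repair rates:

        \frac{1/16}{1/250} \;=\; \frac{250}{16} \;\approx\; 15.6,
        \qquad
        \frac{1}{16} \;=\; 6.25\,\% \;\approx\; 6\,\%

    consistent with the roughly 15-fold improvement and the 6 percent of liver cells reported above.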

    “We did a genome-scale analysis and we have a very high level of on-target effects but almost no off-target effects,” Yin says.

    Anderson’s lab has developed similar lipid nanoparticles that are now in clinical development. AAV viral particles are in clinical trials for other purposes, making the researchers optimistic that this CRISPR delivery method could be used in humans, although more studies are needed.

    The researchers have applied for patents on this technique, which they believe could be used to treat a wide range of diseases, especially those of the liver. “There are a range of metabolic diseases and other liver disorders where if you fix a mutated gene you might be really able to have an impact on human health,” Anderson says.

    “It’s really exciting to see our team develop this new delivery approach for CRISPR, which I believe has the potential to have far-reaching implications,” says Robert Langer, the David H. Koch Institute Professor at MIT and an author of the paper.

    Other MIT authors include graduate student Joseph Dorkin, postdoc Yizhou Dong, research associate Roman Bogorad, and technical assistants Qiongqiong Wu, Sneha Suresh, Stephen Walsh, and Junghoon Yang.

    11:00a
    Structure of kerogen revealed

    The dark-colored hydrocarbon solid known as kerogen gives rise to the fuels that power many of our daily activities: Petroleum is the source of gasoline and diesel fuels, and natural gas is used for cooking, heating, and increasingly for producing electricity.

    And yet, kerogen’s basic internal structure has remained poorly understood — until now.

    A new analysis, by a joint team of researchers at MIT, the French government research organization CNRS, and elsewhere, has revealed kerogen’s internal structure, in detail down to the atomic level. Their results were just published in the journal Nature Materials in a paper by MIT postdoc Colin Bousige, visiting scientist Benoit Coasne, senior research scientist Roland J.-M. Pellenq, professor Franz-Josef Ulm, and colleagues at MIT, CNRS, and other institutions.

    The findings reveal important details about how gas and oil move through pores in formations deep underground, making it possible to estimate the amount of recoverable reserves more accurately and potentially pointing to better ways of extracting them.

    A “game-changing” revelation

    Kerogen is a mixture of organic materials, primarily the remains of dead microbes, plants, and animals that have decomposed and been buried deep underground and compressed. This process forms a carbon-rich, rock-hard material riddled with pores of various sizes. When transformed as a result of pressure or geothermal heat, hydrocarbon molecules in the kerogen break down into gas or petroleum. These flow through the pores and can be released through drilling.

    It turns out that the formula, known as the Darcy equation, that the petroleum and gas industries have traditionally used to describe the way these fluids move underground is not accurate when the hydrocarbon fluids are inside kerogen. The new understanding could change the interpretation of how some gas and oil reservoirs, often found in shale formations, actually behave.
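
    For reference, the macroscale relation in question is Darcy’s law, which relates the volumetric flux q of a fluid through a porous medium to the applied pressure gradient via the rock’s permeability k and the fluid’s viscosity \mu:

        q \;=\; -\frac{k}{\mu}\,\nabla p

    The study’s finding is that this continuum description stops being a reliable model once the hydrocarbons are confined inside kerogen’s nanometer-scale pores.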

    As these fluids move through pores in the deep rock, “the flux in these nanopores, what we call the transport properties, are not what is given by the macroscale physics of liquids,” says Pellenq, a senior research scientist in the Department of Civil and Environmental Engineering at MIT and co-director, with Ulm, of the joint MIT-CNRS program called MultiScale Materials Science for Energy and Environment.

    In many situations where the standard formula predicts that oil or gas will flow, in reality — and as predicted by the new model — the flow stops. The pore sizes in the rock are often smaller and less interconnected than expected, the study shows, so the individual molecules of oil or gas no longer behave collectively as fluids. Instead, they get stuck in place, like a large dog trying to crawl through a cat door.
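
    One back-of-the-envelope way to see why continuum flow breaks down (a rough sketch, not a calculation from the paper) is to compare the gas molecules’ mean free path with the pore width using the Knudsen number. When the two are comparable, molecules collide with the pore walls about as often as with each other, and Darcy-type flow is no longer a good description.

        import math

        # Kinetic-theory (ideal gas) estimate of the mean free path:
        #   lambda = k_B * T / (sqrt(2) * pi * d^2 * p)
        # This is crude at reservoir pressures, but fine for an order-of-magnitude
        # comparison. All conditions below are assumed, not taken from the study.
        K_B = 1.380649e-23            # Boltzmann constant, J/K
        D_METHANE = 3.8e-10           # approximate kinetic diameter of methane, m

        def mean_free_path(T_kelvin, p_pascal, d=D_METHANE):
            return K_B * T_kelvin / (math.sqrt(2) * math.pi * d ** 2 * p_pascal)

        def knudsen(pore_width_m, T_kelvin, p_pascal):
            return mean_free_path(T_kelvin, p_pascal) / pore_width_m

        T, p = 350.0, 2.0e7           # roughly 350 K and 20 MPa
        for pore in (1e-9, 1e-6):     # a ~1 nm kerogen pore vs. a ~1 micron pore
            print(f"pore {pore * 1e9:8.1f} nm  ->  Kn = {knudsen(pore, T, p):.2e}")

        # Kn << 1: continuum (Darcy-like) flow is a fair approximation.
        # Kn near 1 or larger: wall collisions dominate and the continuum picture fails.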

    This understanding of the nanoscale structure of pore spaces in kerogen is “a true new idea, it’s a game changer,” says Pellenq. Previously it had been assumed that the pore spaces in these deep underground formations were larger — microscale rather than nanoscale — and thus would allow the petroleum or gas to flow more easily through them. “Those nanopores were not expected by the industry,” he says.

    What that reveals, he explains, is that “those molecules trapped in those pores are really trapped.” Although researchers had generally assumed that these molecules could be released from the rock simply by applying more pressure or better solvents, “these nanopores are really a big part of the porosity of kerogen,” he says, and essential for understanding the recoverability of reserves.

    Rethinking the hydrofracking process

    Right now, industry practices extract fluids from the few big pores in the fracture, Pellenq says. However, this fracking process “is not even touching the real treasure, which is in the walls, in the pores of the wall.”

    A better approach, the research suggests, might be to replace the conventional water-based hydrofracking solutions. “Those formations, and especially those pores, are hydrophobic, so the hydrofracking is not touching those nanopores,” Pellenq says. But “if you were to change the fluids from water-based to carbon-dioxide based, we know that CO2 will go into those nanopores, because those pores are CO2-philic” (that is, carbon-dioxide attracting). This would force out at least the lighter molecules, such as the methane that is the main component of natural gas, though perhaps not the heavier molecules of petroleum.

    Moving away from the use of water would be “good news” because it would reduce the need for using and then cleaning or disposing of contaminated water, according to Pellenq. In addition, it might even be a way of sequestering some carbon dioxide, he says, providing another potential bonus.

    Jean-Noël Rouzaud, the CNRS research director at the geology laboratory of the Ecole Normale Supérieure in Paris, who was not involved in this research, says this work “appears [to] me very original and of really good scientific quality.” He adds that it brings “an essential contribution to the study of non-conventional sources of hydrocarbons such as oil and gas shales. [This work] should allow people to envisage more effective and environment-friendlier techniques of recovery of hydrocarbons.”

    The study also included researchers from Oak Ridge National Laboratory; the Institut de Science des Materiaux de Mulhouse in Mulhouse, France; Schlumberger-Doll Research in Cambridge, Massachusetts; the European Synchrotron Radiation Facility in Grenoble, France; and Aix Marseille University in Marseille, France. It was supported by the MIT Energy Initiative, as part of the X-Shale project funded by Shell and Schlumberger, and by the French National Research Agency through the Laboratory of Excellence (Labex) ICoME2.

    2:00p
    Cell squeezing enhances protein imaging

    Tagging proteins with a fluorescent label such as green fluorescent protein (GFP) is currently the best way to track specific molecules inside a living cell. However, while this approach has yielded many significant discoveries, GFP and similar tags are so large that they may interfere with the labeled proteins’ natural functions.

    A new approach based on cell-squeezing technology developed at MIT allows researchers to deliver fluorescent tags that are much less bulky, making this kind of protein imaging easier and more efficient.

    In 2013, the MIT team demonstrated that squeezing cells makes it possible to deliver a variety of molecules, including proteins, DNA, carbon nanotubes, and quantum dots, into the cells without damaging them.

    Researchers at Goethe University Frankfurt in Germany, working with their MIT colleagues, have now employed this approach to deliver relatively tiny fluorescent tags that can be targeted to specific proteins. Using regular confocal microscopes or super-resolution microscopes, scientists can then track these proteins over time as they perform their normal functions.

    “It really opens up the door to watching protein interactions in live cells,” says Armon Sharei, a former postdoc at MIT’s Koch Institute for Integrative Cancer Research. “Proteins are the building blocks of cells and control all their functions, so it’s exciting to be able to finally visualize them in a living cell, without genetic modifications.”

    Sharei is an author of a paper describing the technique in Nature Communications. The paper’s lead author is Alina Kollmannsperger, a graduate student at Goethe University, and the senior authors are Ralph Weineke and Robert Tampé of Goethe University. Robert Langer, the David H. Koch Institute Professor at MIT, and Klavs Jensen, the Warren K. Lewis Professor of Chemical Engineering at MIT, are also authors.

    “We are very excited about this latest application for our cell squeezing approach and its implications for protein labeling,” says Langer, who is a member of MIT’s Koch Institute for Integrative Cancer Research.

    Rapid delivery

    In their 2013 study, the MIT team showed that squeezing cells through a constriction 30 to 80 percent smaller than the cells’ diameter caused tiny, temporary holes to appear in the cell membranes, allowing any large molecules in the surrounding fluid to enter. The holes reseal quickly and the cells suffer no long-term damage.
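
    The geometric rule of thumb is easy to express in code. The helper below is a hypothetical illustration, not part of the published protocol; it simply converts a cell diameter into the 30-to-80-percent-smaller constriction range described above.

        def constriction_range_um(cell_diameter_um):
            """Constriction widths 30-80% narrower than the cell diameter (illustrative)."""
            return (0.2 * cell_diameter_um,    # 80% smaller, i.e. 20% of the diameter
                    0.7 * cell_diameter_um)    # 30% smaller, i.e. 70% of the diameter

        # Example: a ~12-micron cell would call for a constriction of roughly 2.4-8.4 microns
        low, high = constriction_range_um(12.0)
        print(f"{low:.1f} um to {high:.1f} um")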

    The researchers then began working with the Goethe University team to use this technique to label proteins with small fluorescent tags, which have previously been difficult to get into living cells. The Goethe team developed a tag called trisNTA that binds to any protein with a long string of histidine molecules (one of the 20 amino acids that form the building blocks of proteins).

    For this study, the researchers first used genetic engineering to attach the histidine sequence to several different proteins, including one found in the nucleus and another involved in processing foreign molecules that have entered the cell. Then, the cells were pushed through a microfluidic channel at a rate of 1 million cells per second, which squeezed them sufficiently to allow the trisNTA tag in.

    Until now, scientists have had to use protein tags, such as the bulky GFP, that can be genetically encoded in the cells’ DNA, or to study proteins in nonliving cells because the process of getting other fluorescent tags into the cells requires destroying the cell membrane.

    “This study shows how microfluidic cell-squeezing together with specific chemical labeling can be exploited to hook various synthetic fluorophores to intracellular proteins with exquisite specificity. I foresee many applications for this approach and I have a very long list of probes that I would like to test immediately,” says Kai Johnsson, a professor of chemical sciences and engineering at the École Polytechnique Fédérale de Lausanne in Switzerland, who was not involved in the research.

    With further work, including the development of new tags that target other proteins, this technique could help scientists learn much more about proteins’ functions inside living cells.

    “Basically everything that happens in your cells is mediated by proteins,” Sharei says. “You can start to learn a lot about the basic biology of how a cell works, how it divides, and what makes the cancer cell a cancer cell, as far as what mechanisms go awry and what proteins are responsible for that.”

    Normal cell behavior

    The researchers believe that the cell squeezing technique should work with nearly any type of cell. So far, they have tried it successfully with more than 30 different types of mammalian cells.

    An added benefit is that when cells undergo the squeezing procedure, they show no changes in the genes they express. In contrast, when a jolt of electricity is applied to cells to make them more permeable — a technique commonly used to deliver DNA and RNA — more than 7,000 genes are affected.

    “It’s possible to assume that a squeezed cell is probably going to behave more or less normally, which is critical when you’re trying to study these kinds of processes,” Sharei says.

    A company called SQZ Biotech, started by MIT researchers including Sharei, Langer, and Jensen, has licensed the cell squeezing technology and is now using it to engineer immune cells to improve their ability to attack cancer cells.

    3:00p
    Living a “mixotrophic” lifestyle

    How do you find your food? Most animal species, whether they rummage through a refrigerator or stalk prey in the wild, obtain nutrients by consuming living organisms. Plants, for the most part, adopt a different feeding, or “trophic,” strategy, making their own food through photosynthesis. There are, however, certain enterprising species that can do both: photosynthesize and consume prey. These organisms, found mostly in certain ocean plankton communities, live a flexible, “mixotrophic” lifestyle. 

    Now researchers at MIT and Bristol University in the United Kingdom have found that these microscopic, mixotrophic organisms may have a large impact on the ocean’s food web and the global carbon cycle. 

    The scientists developed a mixotrophic model of the global ocean food web, at the scale of marine plankton, in which they gave each plankton class the ability to both photosynthesize and consume prey. They found that, compared with traditional models that do not take mixotrophs into account, their model produced larger, heavier plankton throughout the ocean. As these more substantial microbes die, the researchers found, they increase the flux of sinking organic carbon particles by as much as 35 percent.

    The results, says Mick Follows, associate professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences, suggest that mixotrophic organisms may make the ocean more efficient at transferring carbon to depth, and therefore at sequestering carbon dioxide from the atmosphere.

    “If [mixotrophs] weren’t in the oceans, we’re suggesting atmospheric carbon dioxide might be higher, because there would be less of the large, carbon-rich particles formed which efficiently transfer carbon to depth,” Follows says. “It’s a hypothesis, but it has been ignored in carbon cycle models until now, and we suggest it must be represented because it’s potentially very important.”

    Follows and his colleague Ben Ward, a former MIT postdoc now at Bristol University, have published their results today in the Proceedings of the National Academy of Sciences.

    Part of the equation

    Today’s ocean models typically take an “either/or” approach, grouping plankton as either photosynthesizers or consumers of prey. This approach, Follows says, oversimplifies the processes taking place in the ocean that may ultimately contribute to how carbon moves through the oceans and atmosphere. He says mixotrophs are often overlooked, because our terrestrial experience makes them seem rare. 

    “To us on land, we tend to think of [mixotrophs], like Venus fly traps, as exotic — they are a curiosity to us,” Follows says. “Our traditional perspective is biased by the land, where organisms fall into one or the other category, rather strictly. But in the oceans, the more people have looked at plankton, the more mixotrophy seems to be common.” 

    The problem is that there are very few data to work into models, as it’s extremely difficult to observe trophic strategies at the microscopic plankton scale. Therefore, models have largely left mixotrophs out of the equation and have instead looked to other marine processes to try and explain how much carbon is stored in the oceans. 

    “It’s like if we have a weather forecast model that gets the rain right in Boston today, but for the wrong reasons,” Follows says. “If we use it tomorrow, we shouldn’t expect it to do a good job, because it was cooked up for today. We want our climate model to be representative of the processes going on, in order to be predictive of how carbon storage responds to global change.”

    Making a (mixotrophic) living

    As a first step, Follows and Ward chose to simulate a virtual world in which every plankton class is potentially mixotrophic. 

    “It’s a very idealized, black-and-white case: What’s the maximum impact mixotrophs could have?” Follows says. 

    In the oceans, plankton can range in size from less than 1 micron to about 1 millimeter in diameter. Typical ocean models that incorporate plankton often group them into 10 general size classes, each of which falls into a “two-guild” structure, as either photosynthesizers or consumers of prey.

    Instead, Follows and Ward made all of the plankton mixotrophic. The organisms in the model can photosynthesize, consuming inorganic nutrients. (The smallest organisms are the most efficient at acquiring those resources.) They can also eat other plankton and are constrained to consume prey in size classes about ten times smaller than themselves.

    “After we have built in these rules for the system, whether each size class lives largely by photosynthesis or largely by predation depends upon the availability of each type of resource and their relative ability to harvest them in each environment,” Follows says.
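
    Those rules translate naturally into a toy allometric model. The sketch below is a heavily simplified illustration of the structure described above (logarithmically spaced size classes, photosynthetic uptake that favors small cells, and grazing restricted to prey roughly ten times smaller); it is not the authors’ actual global model, and all rates are arbitrary.

        N_CLASSES = 10
        # Equivalent spherical diameters from 1 micron to 1 millimeter, log-spaced
        sizes_um = [10 ** (i * 3.0 / (N_CLASSES - 1)) for i in range(N_CLASSES)]
        PREY_RATIO = 10.0             # prey ~10x smaller in diameter than the predator

        def photosynthesis_gain(size_um, nutrient):
            # Smaller cells take up dissolved inorganic nutrients more efficiently
            # (toy scaling; the real model uses tuned allometric relationships).
            return nutrient / size_um

        def grazing_gain(size_um, prey_biomass):
            # Each class can only eat prey whose size is within a factor of two
            # of one-tenth of its own size.
            total = 0.0
            for prey_size, biomass in prey_biomass.items():
                if 0.5 <= (size_um / prey_size) / PREY_RATIO <= 2.0:
                    total += biomass
            return 0.1 * total        # arbitrary grazing efficiency

        def mixotroph_growth(size_um, nutrient, prey_biomass):
            # A mixotroph simply adds the two resource streams.
            return photosynthesis_gain(size_um, nutrient) + grazing_gain(size_um, prey_biomass)

        # Example: a uniform prey field and one unit of dissolved nutrient.
        # The smallest class has no prey small enough to eat; the largest gains
        # almost nothing from photosynthesis, echoing the pattern described below.
        prey_field = {s: 1.0 for s in sizes_um}
        for s in sizes_um:
            print(f"{s:8.1f} um  growth ~ {mixotroph_growth(s, 1.0, prey_field):.3f}")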

    After running the model forward, the researchers compared the results to those of a traditional model without mixotrophs. They found both models showed a general feeding structure throughout the plankton food web: The smallest organisms were too small to ingest prey, while the largest plankton were poor competitors when living by photosynthesis. 

    However, where the traditional model made a strict separation between those that photosynthesize and those that don’t, the mixotrophic model blurred those lines, with some smaller organisms consuming prey and some larger ones able to photosynthesize. The result was that mixotrophy increased the average organism size in every class, creating larger and heavier plankton throughout the oceans. These more substantial organisms were more likely than smaller, lighter plankton to sink to the ocean floor as carbon-containing detritus.

    “It essentially means that, through multiple means, in a world with mixotrophs, more organic carbon is sinking into the deep ocean than in a world without mixotrophs,” Follows says. 

    The team’s estimate of the amount of sinking carbon contributed by mixotrophs appears to agree with recent observations of carbon flux by mixotrophic plankton in the North Atlantic. Follows says that, with more data on these opportunistic organisms, he hopes to improve the model to accurately reflect mixotrophic populations and their effect on the planet’s carbon cycle.

    “Part of our hope for the work is to give some wind to the sails of these observational studies. We think they’re very valuable,” Follows says. “There may be a large fraction of grazing that is being done by mixotrophs, so it’s potentially very significant in terms of the flow of carbon in the ocean and it should be quantified.”

    This research was funded, in part, by the Simons Foundation, the Gordon and Betty Moore Foundation, NASA, and the National Science Foundation.

