MIT Research News' Journal
 

Tuesday, June 20th, 2017

    Endowing cells with new abilities

    It had been seven years since Fahim Farzadfard had last seen his family back home in Iran. Even after obtaining his green card in the middle of 2016, he had waited to finish his PhD before making the trip. Finally, last December, Farzadfard made a long-awaited visit to his hometown of Mashhad, Iran.

    Farzadfard had missed his family, and would have loved to stay longer to enjoy Persian cuisine, Iran’s raw beauty, and the courteous culture he grew up with. He kept his visit short, however, partly out of concerns over a possible travel ban under the new federal administration, but also because of the ongoing research at MIT that he was devoted to. Even while in Iran, he had been working on it to the best of his ability, but he soon felt pulled to return to campus.

    Farzadfard came to MIT almost seven years ago, after completing his undergraduate and master’s degrees in biotechnology in a prestigious program at the University of Tehran. It was in this program that Farzadfard fell in love twice — with biological engineering and with a fellow student, Nava Gharaei, whom he would later marry.

    While at the University of Tehran, Fahim was committed to his research but understood that the academic labs in Iran had limited resources. That meant research projects would take much longer and be more difficult to accomplish. Something that would take a month to complete in the United States, Fahim says, might take a year in Iran.

    “I wanted to do cutting edge research,” he says. “The opportunity wasn’t there.”

    Fahim decided that if he wanted to excel further in the emerging field of synthetic biology, he would need to seek opportunities outside of his home country. Weeks after he and Gharaei defended their master’s theses, they got married and landed in Boston to start a new journey as PhD students — she at Harvard University and he in the Synthetic Biology Group in MIT’s Research Laboratory of Electronics.

    Master tinkerers

    Synthetic biology is a relatively new field that marries engineering with molecular and cellular biology. As Farzadfard explains it, scientists tinker with existing biological parts and systems to redesign a cell’s functionality. “We engineer new functions into cells. We try to give them new abilities,” he says. “Basically, we’re making molecular tools. We try to understand cells’ inner workings and engineer them using these tools.”

    Farzadfard has been taking synthetic biology to the next level by engineering an analog memory storage system using the cell’s DNA. Living cells are constantly sensing their environment, sampling molecules and computing a response based on their genetic programs and the environmental cues that they receive. For example, when pancreatic cells sense a high concentration of glucose (input), this starts a chain reaction of regulatory and signaling molecules (cell processing) that eventually leads to the production and release of insulin (output).

    In synthetic biology, genes can be designed and engineered to respond to signaling molecules and to regulate the expression of other genes in such a way that they could perform logic functions. Analogous to computer circuits, these interconnected networks of genes are called gene circuits and can be used to engineer and program cellular functions. Someday, engineered cells with these circuits could find a number of uses, such as sensors for environmental or medical monitoring.
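    The logic-function idea can be pictured in a few lines of code. The toy model below is a hypothetical AND gate, not any published circuit; the promoter names, thresholds, and values are all invented for illustration:

```python
# Toy model of a genetic AND gate: a reporter gene is expressed only
# when both inducer molecules exceed their activation thresholds.
# All names and thresholds are invented for illustration.

def promoter_active(concentration, threshold):
    """A promoter switches on once its inducer passes a threshold."""
    return concentration >= threshold

def and_gate_reporter(inducer_a, inducer_b, threshold_a=0.5, threshold_b=0.5):
    """The reporter is produced only if both promoters fire (AND logic)."""
    return promoter_active(inducer_a, threshold_a) and promoter_active(inducer_b, threshold_b)

print(and_gate_reporter(0.8, 0.9))  # both inputs present -> True
print(and_gate_reporter(0.8, 0.1))  # one input missing   -> False
```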

    In 2014, Farzadfard and Timothy Lu, MIT associate professor of electrical engineering and computer science and of biological engineering, published their findings in Science. Previous studies had demonstrated that memory can be encoded in DNA, but the encodings were “digital” — that is, they recorded only whether a particular event occurred. Farzadfard’s platform overcomes this limitation by making analog recordings, which capture information about an event’s intensity, for example, or its duration.

    “You can have infinite states. The capacity to record analog information means you can record the magnitude of inputs, not just absence or presence,” Farzadfard says. “You can record how much input and how long it has been there.”
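    The contrast between the two kinds of memory can be illustrated with a toy model: the “digital” record keeps a single bit, while the “analog” record preserves magnitude and duration combined. The timeline and values here are invented; the real platform writes the recording into genomic DNA.

```python
# Contrast between "digital" DNA memory (did the signal ever occur?)
# and "analog" memory (how strong, and for how long?). Purely
# illustrative; the real platform writes mutations into genomic DNA.

def digital_record(signal_timeline):
    # Stores a single bit: was the signal ever present?
    return any(level > 0 for level in signal_timeline)

def analog_record(signal_timeline):
    # Accumulates signal over time, so both the magnitude and the
    # duration of exposure are recoverable from the final recording.
    return sum(signal_timeline)

exposure = [0, 2, 5, 5, 1, 0]    # signal level at each time step
print(digital_record(exposure))  # True -> records only "it happened"
print(analog_record(exposure))   # 13   -> magnitude x duration ("dose")
```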

    Farzadfard has also been working to develop a sort of DNA barcode memory system in bacteria that are communicating with one another. In this research, two different bacteria, A and B, were given their own DNA barcodes. Farzadfard developed a system where if bacteria A and B communicate via conjugation (a transfer of DNA), then a new barcode, AB, would be encoded in their DNA. The logic seems simple and straightforward, but it has the potential for mapping cellular interactions at single-cell resolution. For example, this technique might someday be used to map all the connections of a network of neurons, Farzadfard says.
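    The barcode-joining logic can be sketched as follows. In this hypothetical example (the cell names and barcode sequences are invented), recovering a fused barcode from sequencing data reveals which pair of cells exchanged DNA:

```python
# Sketch of barcoded interaction mapping: when two cells conjugate,
# their DNA barcodes are joined, leaving a sequence-readable record of
# who exchanged DNA with whom. Names and barcodes are hypothetical.

from itertools import combinations

cells = {"A": "GATTACA", "B": "CCGGTTA", "C": "TTAACCG"}

def conjugate(barcode_1, barcode_2):
    # A successful DNA transfer fuses the two barcodes into one record.
    return barcode_1 + barcode_2

# Suppose sequencing later recovers this fused barcode from the culture:
recovered = {"GATTACACCGGTTA"}

# Reconstruct the interaction map by testing every possible pair.
interactions = [
    (x, y) for x, y in combinations(cells, 2)
    if conjugate(cells[x], cells[y]) in recovered
]
print(interactions)  # [('A', 'B')] -> cells A and B exchanged DNA
```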

    Caught between two countries

    After the 2016 presidential election, Farzadfard remembers being concerned about potential policy changes from the new administration that could affect people living in the U.S., based on their nationalities. Those concerns increased when President Trump first signed an executive order preventing individuals from Iran and certain other countries from entering the United States, and the status of green-card holders was unclear. Farzadfard was thankful to have had the foresight to visit Iran last December, but he acknowledges that many others are worried about being able to travel to their home countries.

    “Personally, I didn’t experience that much hardship after the election,” he says. “I’m lucky. Some of my friends couldn’t come those few days after [the president’s executive order]. I could have easily been one of those people, and had that stress of the uncertainty of not knowing what is going to happen next.”

    At that time, he wondered when he would next be able to visit his three sisters and his parents, or once again enjoy the delicious Persian food that, he jokingly says, caused him to gain a few pounds he hasn’t lost yet. Now that judges from federal district courts have blocked the revised version of President Trump’s executive order, Farzadfard feels more confident about traveling to Iran and having the freedom to return to America.

    Whatever the future may hold, this soft-spoken Iranian immigrant has come a long way since first arriving in America without knowing anyone but his wife. Today, he’s made many friends and has established his life in New England, while continuing to push the frontiers of synthetic biology.

    Measuring biological dust in the wind

    In the popular children’s story “Horton Hears a Who!” author Dr. Seuss tells of a gentle and protective elephant who stumbles upon a speck of dust that harbors a community of microscopic creatures called the Whos, living in the equally tiny town of Whoville. Throughout their journey together, Horton argues for the existence of the Whos traveling around in the air on a dust speck, while doubters dispute the finding. Ultimately, through observation, evidence for the organisms emerges, but regardless of the outcome, this speck altered a world greater than its own.

    While this tale is a work of fiction, climate and atmospheric scientists have considered a real-life Whoville scenario — biological particles and inorganic material riding around in the atmosphere affecting the climate. Previous research has shown that some aerosols are very good at nucleating ice, which could form clouds in the troposphere. But due to complex atmospheric chemistries and a lack of data, scientists aren’t sure what percentage of these ice-active particles are biological in nature and abundant enough in the troposphere to have an impact on climate. Furthermore, chemically parsing the metaphorical Whos from their speck has proved difficult — until now.

    Atmospheric science researchers in the Program in Atmospheres, Oceans and Climate (PAOC) in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) have found a way to differentiate biological material in the atmosphere (bioaerosols) from non-biological particulates with a higher accuracy than other methods, using machine learning. When applied to previously-collected atmospheric samples and data, their findings support evidence that on average these bioaerosols globally make up less than 1 percent of the particles in the upper troposphere — where they could influence cloud formation and by extension, the climate — and not around 25 to 50 percent as some previous research suggests.

    The work, led by MIT associate professor of atmospheric chemistry Dan Cziczo and graduate student Maria Zawadowicz, was published last week in the journal Atmospheric Chemistry and Physics.

    Bioaerosols in a complex climate system

    Bioaerosols, a subset of atmospheric aerosols, are biological particulates or liquids suspended in the air at any given time. These emissions consist of whole and fragmented airborne bacteria, fungal spores, yeast, viruses, pollens, and other materials from the environment. Their solid, non-biological counterparts, inorganic aerosols, include mineral dust particles, such as apatite and monazite, and industrial combustion products like fly ash.

    Scientists have long been interested in bioaerosols because of their potential to form cirrus ice clouds, which have major implications for the climate — reflecting, absorbing, and transmitting sunlight as well as thermal infrared radiation from Earth. Bacteria like Pseudomonas syringae use their nucleating properties to form ice crystals on tomato plants, and humans have used them to create artificial snow. While atmospheric and climate modeling suggests that bioaerosols, globally averaged, are not abundant and efficient enough at freezing to significantly influence cloud formation, research findings have varied significantly.

    “There has been a lot of debate recently — the last five to seven years — about how much biological material is in the atmosphere,” Cziczo says. “[The study findings] are all over the map, but there are a cluster of studies that say it’s a few percent of the atmospheric aerosol and there’s a few studies that say it’s a lot, 25 percent or 50 percent. And so, those are sort of the two camps that have been out there, and you can imagine that these have really different effects on our climate system, on precipitation, on chemistry.”

    Until now, gathering and making positive identifications of bioaerosols has been difficult. Measurement techniques specific to bioaerosols include filter collection coupled with electron microscopy, or optical microscopy with fluorescent staining. Scientists have also used in-situ fluorescence with a wideband integrated bioaerosol sensor (WIBS), in addition to measuring particles’ shapes and sizes. The problem with these approaches is interference: bioaerosols are often found to have chemical signatures similar to smoke, a non-biological aerosol. Additionally, researchers have tried culturing samples for microbial strains, as well as analyzing their data offline, in the lab. These techniques inject significant uncertainty into the measurements, and some studies have reported bioaerosol concentrations greater than the total aerosol measurement obtained, which is impossible.

    In case that wasn’t complicated enough, aerosols become chemically and physically altered as they enter the troposphere, interacting with other atmospheric compounds, and the longer they are there before falling out, the more they age and mix. Finally, all of this varies by region, season, climate, and altitude, which can affect measurements, further blurring the boundary between bioaerosols and inorganic aerosols, and making quantification challenging.

    Cziczo’s research group is interested in the interrelationship of particulate matter and cloud formation. His team utilizes laboratory and field studies to elucidate how small particles interact with water vapor to form droplets and ice crystals, which are important players in the Earth’s climate system. Experiments include using small cloud chambers in the laboratory to mimic atmospheric conditions that lead to cloud formation and observing clouds in situ from remote mountaintop sites or through the use of research aircraft.

    Aerosol breakdown

    “One of the things that we suspected was that the previous ways of determining biological material probably over-counted [their abundance] because they were looking and characterizing other things as being biological that really weren’t,” Cziczo says.

    Zawadowicz adds: “Everything in the atmosphere is very highly processed. It’s what confounds a lot of these measurements.”

    So, in an effort to rein in the uncertainty surrounding bioaerosols in the atmosphere and constrain their influence on cloud formation processes, Cziczo and Zawadowicz, along with collaborators at the National Oceanic and Atmospheric Administration, developed an approach that couples a technique called particle analysis by laser mass spectrometry (PALMS) with machine learning. Here, single-particle mass spectrometry is used to ablate and ionize aerosols one at a time, breaking them down into ion fragments and clusters, which are then detected by the instrument. Each aerosol analyzed this way produces a spectrum with identifiable features of its composition, like a chemical fingerprint.
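    As a rough illustration of the idea, the sketch below trains a classifier on labeled example spectra and then labels unknown particles. A simple nearest-centroid rule stands in for the study’s actual machine-learning method, and the two-peak feature vectors and all numbers are invented:

```python
# Nearest-centroid sketch of spectrum classification. Each particle's
# spectrum is reduced to two invented features (relative phosphate and
# organic-nitrogen peak intensities); the real study used richer
# spectra and a different machine-learning method.

import math

training = {
    "biological":     [(0.8, 0.7), (0.9, 0.6), (0.7, 0.8)],
    "non-biological": [(0.9, 0.1), (0.8, 0.05), (0.7, 0.2)],
}

def centroid(points):
    return tuple(sum(axis) / len(points) for axis in zip(*points))

centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(spectrum):
    # Assign the label whose training centroid is closest in feature space.
    return min(centroids, key=lambda lbl: math.dist(spectrum, centroids[lbl]))

print(classify((0.85, 0.75)))  # -> biological
print(classify((0.85, 0.08)))  # -> non-biological
```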

    The group leveraged the presence of phosphorus in the mass spectra to train the classification machine learning algorithm on known samples and then, primed, applied it to field data acquired from Desert Research Institute’s Storm Peak Laboratory in Steamboat Springs, Colorado, and from the Carbonaceous Aerosol and Radiative Effects Study based in the town of Cool, California.

    “So, what Maria did was she grabbed a whole host of different particles, focusing on biological ones, bacteria, both in a living and dead state, fungal spores, pollen, yeast, just about anything you could imagine that could turn into an atmospheric particulate,” Cziczo says. “And she found ways of dispersing these materials and then bringing them into the instrument so that we could see their composition.”

    Some particles were chemically aged to mimic atmospheric interactions; others were physically broken down so they were small enough to be nebulized and analyzed.

    Knowing that the principal atmospheric emissions of phosphorus come from mineral dust, combustion products, and biological particles, they exploited the presence of phosphate and organic nitrogen ions, and their characteristic ratios, in known samples to classify the particles. In bioaerosols, phosphorus mostly occurs in phospholipid bilayers and nucleic acids, whereas in mineral dust like apatite and monazite, it’s found in the form of calcium phosphate. But the division isn’t cut and dried; compounds like soil dust can include internal mixtures of biological and inorganic components.
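    One way to picture the decision feature is as a ratio of ion peak areas. The sketch below is purely illustrative; the peak labels, areas, and the idea of a single ratio are assumptions, not the paper’s actual feature set:

```python
# Illustrative ratio feature: organic-nitrogen signal relative to
# phosphate signal. Peak areas (and the idea of a single ratio) are
# invented; the actual analysis used full spectra.

def ratio_feature(spectrum):
    # Biological particles carry both phosphate (phospholipids, nucleic
    # acids) and organic nitrogen (proteins); mineral phosphates like
    # apatite carry almost no organic nitrogen, so the ratio separates
    # the two classes.
    return spectrum["organic_nitrogen"] / spectrum["phosphate"]

bacterium = {"phosphate": 120.0, "organic_nitrogen": 95.0}
apatite = {"phosphate": 300.0, "organic_nitrogen": 6.0}

print(round(ratio_feature(bacterium), 2))  # 0.79 -> nitrogen-rich
print(round(ratio_feature(apatite), 2))    # 0.02 -> essentially none
```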

    Once analyzed, other spectral peaks and markers were used to provide additional evidence for the classification as biological or non-biological and increase the confidence in the algorithm and its results.

    “We found that if we do some ratios of certain components in the mass spectrum that there are certain clusters that form, and we employed some advanced statistical techniques to disentangle the clusters and see which signatures are biological and which aren’t,” Zawadowicz says. The new technique was able to accurately classify 97 percent of the spectra, and when applied to spectra from field data, found that, on a global average, less than 1 percent of the particles were biological. Phosphorus emissions inventories helped to confirm this.

    The unlikeliness of a real-life Whoville

    While the list of bioaerosols tested and data sets used — which didn’t include locations and times of high and low bioaerosol concentration — were not exhaustive, the group found convincing evidence that, when it came to cirrus cloud formation, bioaerosols were an unlikely culprit. Previous research assumed that most of the phosphorus found in the atmosphere was biological, but Cziczo points out that this conflicts with phosphorus emissions inventories, implying that inorganic compounds were often mistaken for biological ones. For Cziczo, this finding that bioaerosols accounted for less than 1 percent on average was the smoking gun.

    “It’s not enough to say that a particle is good at nucleating ice, it also has to have an abundance that causes that cloud formation to happen. And it looks much less certain now that we have enough of these biologicals to create the effect that some people have suggested in the literature,” Cziczo says. “Instead, it’s much more likely that there are other things that are causing the ice nucleation like the mineral dust particles.”

    Even though Cziczo and Zawadowicz’s research has cast doubt on the existence of a real-life “Whoville,” they say their work has just begun.

    “So now that we have an understanding of what it [bioaerosol presence in the atmosphere] looks like, and we have some field data to say how abundant it is in different seasons at different locations, the question is: Are the models getting that correct?” says Cziczo, who has plans to collaborate with EAPS Senior Research Scientist Chien Wang and Colette Heald, associate professor in the MIT Department of Civil and Environmental Engineering with a joint appointment in EAPS, both of whom also investigate and model aerosol and climate impacts. Says Cziczo, “We’re going to be looking at working with them in the future and seeing if we can mesh all of this data — the laboratory data, the field data, and the models together.”

    New technique makes brain scans better

    People who suffer a stroke often undergo a brain scan at the hospital, allowing doctors to determine the location and extent of the damage. Researchers who study the effects of strokes would love to be able to analyze these images, but the resolution is often too low for many analyses.

    To help scientists take advantage of this untapped wealth of data from hospital scans, a team of MIT researchers, working with doctors at Massachusetts General Hospital and many other institutions, has devised a way to boost the quality of these scans so they can be used for large-scale studies of how strokes affect different people and how they respond to treatment.

    “These images are quite unique because they are acquired in routine clinical practice when a patient comes in with a stroke,” says Polina Golland, an MIT professor of electrical engineering and computer science. “You couldn’t stage a study like that.”

    Using these scans, researchers could study how genetic factors influence stroke survival or how people respond to different treatments. They could also use this approach to study other disorders such as Alzheimer’s disease.

    Golland is the senior author of the paper, which will be presented at the Information Processing in Medical Imaging conference during the week of June 25. The paper’s lead author is Adrian Dalca, a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory. Other authors are Katie Bouman, an MIT graduate student; William Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering at MIT; Natalia Rost, director of the acute stroke service at MGH; and Mert Sabuncu, an assistant professor of electrical and computer engineering at Cornell University.

    Filling in data

    Scanning the brain with magnetic resonance imaging (MRI) produces many 2-D “slices” that can be combined to form a 3-D representation of the brain.

    For clinical scans of patients who have had a stroke, images are taken rapidly due to limited scanning time. As a result, the scans are very sparse, meaning that the image slices are taken about 5-7 millimeters apart. (The in-slice resolution is 1 millimeter.)

    For scientific studies, researchers usually obtain much higher-resolution images, with slices only 1 millimeter apart, which requires keeping subjects in the scanner for a much longer period of time. Scientists have developed specialized computer algorithms to analyze these images, but these algorithms don’t work well on the much more plentiful but lower-quality patient scans taken in hospitals.
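    A quick back-of-the-envelope calculation shows how much sparser the clinical scans are. Assuming an illustrative brain extent of roughly 150 millimeters along the scan axis (a figure not given in the article) and the slice spacings quoted above:

```python
# Slice-count comparison for the spacings quoted above, assuming an
# illustrative brain extent of 150 mm along the scan axis.

brain_extent_mm = 150
clinical_spacing_mm = 6   # mid-range of the 5-7 mm clinical spacing
research_spacing_mm = 1   # typical research-scan spacing

clinical_slices = brain_extent_mm // clinical_spacing_mm
research_slices = brain_extent_mm // research_spacing_mm

print(clinical_slices)  # 25 slices in a hospital scan
print(research_slices)  # 150 slices in a research scan
```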

    The MIT researchers, along with their collaborators at MGH and other hospitals, were interested in taking advantage of the vast numbers of patient scans, which would allow them to learn much more than can be gleaned from smaller studies that produce higher-quality scans.

    “These research studies are very small because you need volunteers, but hospitals have hundreds of thousands of images. Our motivation was to take advantage of this huge set of data,” Dalca says.

    The new approach involves essentially filling in the data that is missing from each patient scan. This can be done by taking information from the entire set of scans and using it to recreate the anatomical features that are missing from individual scans.

    “The key idea is to generate an image that is anatomically plausible, that to an algorithm looks like one of those research scans, and is completely consistent with the clinical images that were acquired,” Golland says. “Once you have that, you can apply every state-of-the-art algorithm that was developed for the beautiful research images and run the same analysis, and get the results as if these were the research images.”

    Once these research-quality images are generated, researchers can then run a set of algorithms designed to help with analyzing anatomical features. These include the alignment of slices and a process called skull-stripping that eliminates everything but the brain from the images.

    Throughout this process, the algorithm keeps track of which pixels came from the original scans and which were filled in afterward, so that analyses done later, such as measuring the extent of brain damage, can be performed only on information from the original scans.
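    The bookkeeping described here can be sketched with a simple mask. In this toy one-dimensional example (the “volume” and lesion labels are invented), measurements are restricted to voxels backed by original data:

```python
# Mask bookkeeping in one dimension: 'acquired' marks voxels that came
# from the original scan; interpolated voxels are excluded from any
# measurement. The lesion labels are invented for illustration.

acquired = [True, False, False, True, False, False, True]  # original data
lesion = [1, 1, 0, 1, 1, 0, 0]                             # damage map

# Measure lesion extent using only voxels backed by acquired data.
measured = sum(label for label, real in zip(lesion, acquired) if real)
total_real = sum(acquired)

print(measured, "of", total_real)  # 2 of 3 acquired voxels show damage
```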

    “In a sense, this is a scaffold that allows us to bring the image into the collection as if it were a high-resolution image, and then make measurements only on the pixels where we have the information,” Golland says.

    Higher quality

    Now that the MIT team has developed this technique for enhancing low-quality images, they plan to apply it to a large set of stroke images obtained by the MGH-led consortium, which includes about 4,000 scans from 12 hospitals. 

    “Understanding spatial patterns of the damage that is done to the white matter promises to help us understand in more detail how the disease interacts with cognitive abilities of the person, with their ability to recover from stroke, and so on,” Golland says.

    The researchers also hope to apply this technique to scans of patients with other brain disorders.

    “It opens up lots of interesting directions,” Golland says. “Images acquired in routine medical practice can give anatomical insight, because we lift them up to that quality that the algorithms can analyze.”

    The research was funded by the National Institute of Neurological Disorders and Stroke and the National Institute of Biomedical Imaging and Bioengineering.

