MIT Research News' Journal
 

Monday, September 18th, 2017

    10:30a
    A new approach to ultrafast light pulses

    Two-dimensional materials called molecular aggregates are very effective light emitters that work on a different principle than typical organic light-emitting diodes (OLEDs) or quantum dots. But their potential as components for new kinds of optoelectronic devices has been limited by their relatively slow response time. Now, researchers at MIT, the University of California at Berkeley, and Northeastern University have found a way to overcome that limitation, potentially opening up a variety of applications for these materials.

    The findings are described in the journal Proceedings of the National Academy of Sciences, in a paper by MIT associate professor of mechanical engineering Nicholas X. Fang, postdocs Qing Hu and Dafei Jin, and five others.

    The key to enhancing the response time of these 2-D molecular aggregates (2DMA), Fang and his team found, is to couple that material with a thin layer of a metal such as silver. The interaction between the 2DMA and the metal that is just a few nanometers away boosts the speed of the material’s light pulses more than tenfold.

    These 2DMA materials exhibit a number of unusual properties and have been used to create exotic forms of matter, known as Bose-Einstein condensates, at room temperature, while other approaches required extreme cooling. They have also been applied in technologies such as solar cells and light-harvesting organic antennas. But the new work for the first time identifies the strong influence that a very close sheet of metal can have on the way these materials emit light.

    In order for these materials to be useful in devices such as photonic chips — which are like semiconductor chips but carry out their operations using light instead of electrons — “the challenge is to be able to switch them on and off quickly,” which had not been possible before, Fang says.

    With the metal substrate nearby, the response time for the light emission dropped from 60 picoseconds (trillionths of a second) to just 2 picoseconds, Fang says: “This is pretty exciting, because we observed this effect even when the material is 5 to 10 nanometers away from the surface,” with a spacing layer of polymer in between. That’s enough of a separation that fabricating such paired materials in quantity should not be an overly demanding process. “This is something we think could be adapted to roll-to-roll printing,” he says.

    If used for signal processing, such as sending data by light rather than radio waves, Fang says, this advance could lead to a data transmission rate of about 40 gigahertz, which is eight times faster than such devices can currently deliver. This is “a very promising step, but it’s still very early” as far as translating that into practical, manufacturable devices, he cautions.

    The team studied only one of the many kinds of molecular aggregates that have been developed, so there may still be opportunities to find even better variations. “This is actually a very rich family of luminous materials,” Fang says.

    Because the responsiveness of the material is so strongly influenced by the exact proximity of the nearby metal substrate, such systems could also be used for very precise measuring tools. “The interaction is reduced as a function of the gap size, so it could now be used if we want to measure the proximity of a surface,” Fang says.

    As the team continues its studies of these materials, one next step is to study the effects that patterning of the metal surface might have, since the tests so far only used flat surfaces. Other questions to be addressed include determining the useful lifetimes of these materials and how they might be extended.

    Fang says a first prototype of a device using this system might be produced “within a year or so.”

    The team also included Soon Hoon Nam at MIT; Jun Xiao, Xiaoze Liu, and Xiang Zhang at UC Berkeley; and Yongmin Liu at Northeastern University. The work was supported by the National Science Foundation, the Masdar Institute of Science and Technology, and the King Abdullah University of Science and Technology.

    3:00p
    Analyzing the language of color

    The human eye can perceive millions of different colors, but the number of categories human languages use to group those colors is much smaller. Some languages use as few as three color categories (words corresponding to black, white, and red), while the languages of industrialized cultures use up to 10 or 12 categories.

    In a new study, MIT cognitive scientists have found that languages tend to divide the “warm” part of the color spectrum into more color words, such as orange, yellow, and red, than they do the “cooler” regions, which include blue and green. This pattern, which they found across more than 100 languages, may reflect the fact that most objects that stand out in a scene are warm-colored, while cooler colors such as green and blue tend to be found in backgrounds, the researchers say.

    This leads to more consistent labeling of warmer colors by different speakers of the same language, the researchers found.

    “When we look at it, it turns out it’s the same across every language that we studied. Every language has this amazing similar ordering of colors, so that reds are more consistently communicated than greens or blues,” says Edward Gibson, an MIT professor of brain and cognitive sciences and the first author of the study, which appears in the Proceedings of the National Academy of Sciences the week of Sept. 18.

    The paper’s other senior author is Bevil Conway, an investigator at the National Eye Institute (NEI). Other authors are MIT postdoc Richard Futrell, postdoc Julian Jara-Ettinger, former MIT graduate students Kyle Mahowald and Leon Bergen, NEI postdoc Sivalogeswaran Ratnasingam, MIT research assistant Mitchell Gibson, and University of Rochester Assistant Professor Steven Piantadosi.

    Color me surprised

    Gibson began this investigation of color after accidentally discovering during another study that there is a great deal of variation in the way colors are described by members of the Tsimane’, a tribe that lives in remote Amazonian regions of Bolivia. He found that most Tsimane’ consistently use words for white, black, and red, but there is less agreement among them when naming colors such as blue, green, and yellow.

    Working with Conway, who was then an associate professor studying visual perception at Wellesley College, Gibson decided to delve further into this variability. The researchers asked about 40 Tsimane’ speakers to name 80 color chips, which were evenly distributed across the visible spectrum of color.

    Once they had these data, the researchers applied an information theory technique that allowed them to calculate a feature they called “surprisal,” which is a measure of how consistently different people describe, for example, the same color chip with the same color word.

    When a particular word (such as “blue” or “green”) is used to describe many different color chips, each of those chips has higher surprisal. Chips that people consistently label with a single word have low surprisal, while chips that different people label with different words have higher surprisal. The researchers found that in Tsimane’, English, and Spanish alike, the chips were ordered the same way: cool-colored chips had higher average surprisal than warm-colored chips (reds, yellows, and oranges).
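
    As a rough illustration of how such a surprisal score can be computed (a minimal sketch, not the study’s exact procedure; the chips, words, and counts below are invented), one can estimate, for each chip, how hard it would be to guess that chip from the word a speaker chose:

        import math
        from collections import Counter, defaultdict

        # Hypothetical (speaker, chip, word) labeling events.
        labels = [
            ("s1", "chip_red", "red"), ("s2", "chip_red", "red"), ("s3", "chip_red", "red"),
            ("s1", "chip_teal", "blue"), ("s2", "chip_teal", "green"), ("s3", "chip_teal", "blue"),
            ("s1", "chip_navy", "blue"), ("s2", "chip_navy", "blue"), ("s3", "chip_navy", "blue"),
        ]

        words_for_chip = defaultdict(Counter)   # how often each word was used for a chip
        chips_for_word = defaultdict(Counter)   # how often each chip was given a word
        for _, chip, word in labels:
            words_for_chip[chip][word] += 1
            chips_for_word[word][chip] += 1

        def average_surprisal(chip):
            """Expected -log2 P(chip | word), averaged over the words speakers used for it."""
            total = sum(words_for_chip[chip].values())
            score = 0.0
            for word, n in words_for_chip[chip].items():
                p_word = n / total                                                        # P(word | chip)
                p_chip = chips_for_word[word][chip] / sum(chips_for_word[word].values())  # P(chip | word)
                score += p_word * -math.log2(p_chip)
            return score

        for chip in words_for_chip:
            print(chip, round(average_surprisal(chip), 3))
        # chip_red, named consistently with a word no other chip receives, scores 0.0;
        # the two cool chips, which share the word "blue", score higher.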

    The researchers then compared their results to data from the World Color Survey, which performed essentially the same task for 110 languages around the world, all spoken by nonindustrialized societies. Across all of these languages, the researchers found the same pattern.

    This reflects the fact that while the warm colors and cool colors occupy a similar amount of space in a chart of the 80 colors used in the test, most languages divide the warmer regions into more color words than the cooler regions. Therefore, there are many more color chips that most people would call “blue” than there are chips that people would define as “yellow” or “red.”

    “What this means is that human languages divide that space in a skewed way,” Gibson says. “In all languages, people preferentially bring color words into the warmer parts of the space and they don’t bring them into the cooler colors.”

    Colors in the forefront

    To explore possible explanations for this trend, the researchers analyzed a database of 20,000 images collected and labeled by Microsoft, and they found that objects in the foreground of a scene are more likely to be a warm color, while cooler colors are more likely to be found in backgrounds.
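
    A hedged sketch of that kind of check (not the authors’ analysis of the Microsoft dataset): given an image and a binary mask marking a labeled foreground object, compare how many object pixels versus background pixels fall in a crude “warm” hue band. The warm-hue cutoffs and the random test image here are illustrative assumptions.

        import numpy as np

        def warm_fraction(rgb, mask):
            """Fraction of masked pixels whose hue lies in a crude 'warm' band (reds/oranges/yellows)."""
            rgb = rgb.astype(float) / 255.0
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            maxc, minc = rgb.max(axis=-1), rgb.min(axis=-1)
            delta = np.where(maxc == minc, 1e-9, maxc - minc)
            hue = 60.0 * np.where(maxc == r, ((g - b) / delta) % 6,
                         np.where(maxc == g, (b - r) / delta + 2,
                                             (r - g) / delta + 4))
            warm = (hue < 90) | (hue > 330)   # assumed cutoff; gray pixels land at hue 0 here
            return warm[mask].mean()

        # Usage: compare an object mask against its complement (the background).
        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
        fg = np.zeros((64, 64), dtype=bool)
        fg[16:48, 16:48] = True
        print(warm_fraction(img, fg), warm_fraction(img, ~fg))
        # On a random image the two fractions are similar; on real labeled scenes,
        # the pattern described above predicts the foreground number would be larger.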

    “Warm colors are in the foreground, they’re all the stuff that we interact with and want to talk about,” Gibson says. “We need to be able to talk about things which are identical except for their color: objects.”

    Gibson now hopes to study languages spoken by societies found in snowy or desert climates, where background colors are different, to see if their color naming system is different from what he found in this study.

    Julie Sedivy, an adjunct associate professor of psychology at the University of Calgary, says the paper makes an important contribution to scientists’ ability to study questions such as how culture and language influence how people perceive the world.

    “It’s a big step forward in establishing a more rigorous approach to asking really important questions that in the past have been addressed in a scientifically flimsy way,” says Sedivy, who was not part of the research team. She adds that this approach could also be used to study other attributes that are represented by varying numbers of words in different languages, such as odors, tastes, and emotions.

    The research was funded by the National Institutes of Health and the National Science Foundation.

    3:00p
    Blood testing via sound waves may replace some tissue biopsies

    Cells secrete nanoscale packets called exosomes that carry important messages from one part of the body to another. Scientists from MIT and other institutions have now devised a way to intercept these messages, which could be used to diagnose problems such as cancer or fetal abnormalities.

    Their new device uses a combination of microfluidics and sound waves to isolate these exosomes from blood. The researchers hope to incorporate this technology into a portable device that could analyze patient blood samples for rapid diagnosis, without involving the cumbersome and time-consuming ultracentrifugation method commonly used today.

    “These exosomes often contain specific molecules that are a signature of certain abnormalities. If you isolate them from blood, you can do biological analysis and see what they reveal,” says Ming Dao, a principal research scientist in MIT’s Department of Materials Science and Engineering and a senior author of the study, which appears in the Proceedings of the National Academy of Sciences the week of Sept. 18.

    The paper’s senior authors also include Subra Suresh, president-designate of Nanyang Technological University in Singapore, MIT’s Vannevar Bush Professor of Engineering Emeritus, and a former dean of engineering at MIT; Tony Jun Huang, a professor of mechanical engineering and materials science at Duke University; and Yoel Sadovsky, director of the Magee-Womens Research Institute. The paper’s lead author is Duke graduate student Mengxi Wu.

    Sorting with sound

    In 2014, the same team of researchers first reported that they could separate cells by exposing them to sound waves as they flowed through a tiny channel. This technique offers a gentler alternative to other cell-sorting technologies, which require tagging the cells with chemicals or exposing them to stronger mechanical forces that may damage them.

    Since then, the researchers have shown that this technology can be used to isolate rare, circulating tumor cells from a blood sample. In their new study, they set out to capture exosomes. These vesicles, which are usually about 30 to 150 nanometers in diameter, can carry proteins, RNA, or other important cellular molecules.

    Previous studies have revealed that exosome contents can serve as markers for disorders such as cancer, neurodegenerative disease, and kidney disease, among others. However, existing methods for isolating exosomes require high-speed centrifugation, which takes nearly 24 hours to perform, using a large machine that is not portable. The high centrifugal forces can also damage vesicles.

    “Acoustic sound waves are much gentler,” Dao says. “These particles are experiencing the forces for only a second or less as they’re being separated, which is a big advantage.”

    The researchers’ original acoustic cell-sorting device consists of a microfluidic channel exposed to two tilted acoustic transducers. When sound waves produced by these transducers encounter one another, they form standing waves that generate a series of pressure nodes. Each time a cell or particle flows through the channel and encounters a node, the pressure guides the cell a little further off center. The distance of cell movement depends on size and other properties such as compressibility, making it possible to separate cells of different sizes by the time they reach the end of the channel.
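
    The size dependence can be sketched with the standard textbook expression for the primary acoustic radiation force on a small sphere in a one-dimensional standing wave (a general result, not the paper’s device model); the material values and drive parameters below are illustrative assumptions:

        import math

        def contrast_factor(kappa_p, rho_p, kappa_0, rho_0):
            """Acoustic contrast factor phi; its sign sets whether a particle moves toward nodes or antinodes."""
            rho_t, kappa_t = rho_p / rho_0, kappa_p / kappa_0
            return (1.0 / 3.0) * ((5.0 * rho_t - 2.0) / (2.0 * rho_t + 1.0) - kappa_t)

        def radiation_force(radius, x, kappa_p, rho_p,
                            kappa_0=4.5e-10, rho_0=1000.0,    # water-like medium
                            freq=20e6, c=1500.0, E_ac=10.0):  # assumed drive frequency and energy density
            """F = 4*pi*phi*k*a^3*E_ac*sin(2*k*x): the force scales with particle volume (a^3)."""
            k = 2.0 * math.pi * freq / c
            phi = contrast_factor(kappa_p, rho_p, kappa_0, rho_0)
            return 4.0 * math.pi * phi * k * radius**3 * E_ac * math.sin(2.0 * k * x)

        # A 10-micron cell feels roughly a million times the force of a 100-nm exosome
        # at the same spot, which is why cells and platelets are pulled aside first.
        x = 9e-6  # a position between a pressure node and an antinode
        print(radiation_force(5e-6, x, kappa_p=4.0e-10, rho_p=1100.0))
        print(radiation_force(50e-9, x, kappa_p=4.0e-10, rho_p=1100.0))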

    To isolate exosomes, the researchers built a device with two such units in tandem. In the first, sound waves are used to remove cells and platelets from a blood sample. Once the cells and platelets are removed, the sample enters a second microfluidic unit, which uses sound waves of a higher frequency to separate exosomes from slightly larger extracellular vesicles.

    Using this device, it takes less than 25 minutes to process a 100-microliter undiluted blood sample.

    “The new technique can address the drawbacks of the current technologies for exosome isolation, such as long turnaround time, inconsistency, low yield, contamination, and uncertain exosome integrity,” Huang says. “We want to make extracting high-quality exosomes as simple as pushing a button and getting the desired samples within 10 minutes.”

    “This work provides a novel way to capture exosomes from human fluid samples through a unique combination of microfluidics and acoustics, using state-of-the-art microfabrication technologies,” Suresh says. “The capability of this method to separate these nanoscale vesicles, essentially without altering their biological or physical characteristics, offers appealing possibilities for developing new ways of assessing human health as well as the onset and progression of diseases.”

    A clear signature

    This new method of exosome isolation “may usher in a new paradigm in disease diagnosis and prognosis,” says Taher Saif, a professor of mechanical science and engineering at the University of Illinois at Urbana-Champaign. “This paper presents a noninvasive, label-free, biocompatible, on-chip method to isolate exosomes rapidly from blood with ultrahigh precision,” says Saif, who was not involved in the research.

    The research team now plans to use this technology to seek biomarkers that can reveal disease states. They have a joint grant from the National Institutes of Health to look for markers related to abnormal pregnancy, and they believe the technology could be used to help diagnose other health conditions as well.

    “The new acoustofluidic technology has the potential to dramatically improve the process of isolation of exosomes and other extracellular vesicles from blood and other bodily fluids,” Sadovsky says. “This will add a new dimension to research into ‘liquid biopsy,’ and facilitate the clinical use of extracellular vesicles to inform the physiology and health of organs that are hard to access, such as the placenta during human pregnancy.”

    The research was funded by the National Institutes of Health and the National Science Foundation.

