MIT Research News' Journal
 

Monday, December 4th, 2017

    10:59a
    New nanowires are just a few atoms thick

    “Two-dimensional materials” — materials deposited in layers that are only a few atoms thick — are promising for both high-performance electronics and flexible, transparent electronics that could be layered onto physical surfaces to make computing ubiquitous.

    The best-known 2-D material is graphene, which is a form of carbon, but recently researchers have been investigating other 2-D materials, such as molybdenum disulfide, which have their own, distinct advantages.

    Producing useful electronics, however, requires integrating multiple 2-D materials in the same plane, which is a tough challenge. In 2015, researchers at King Abdullah University in Saudi Arabia developed a technique for depositing molybdenum disulfide (MoS2) next to tungsten diselenide (WSe2), with a very clean junction between the two materials. With a variation of the technique, researchers at Cornell University then found that they could induce long, straight wires of MoS2 — only a few atoms in diameter — to extend into the WSe2, while preserving the clean junction.

    The researchers contacted Markus Buehler, the McAfee Professor of Engineering in MIT’s Department of Civil and Environmental Engineering, who specializes in atomic-level models of crack propagation, to see if his group could help explain this strange phenomenon.

    In the latest issue of Nature Materials, the King Abdullah, Cornell, and MIT researchers team with colleagues at Academia Sinica, the Taiwanese national research academy, and Texas Tech University to describe both the material deposition method and the mechanism underlying the formation of the MoS2 nanowires, which the MIT researchers were able to model computationally.

    “The manufacturing of new 2-D materials still remains a challenge,” Buehler says. “The discovery of mechanisms by which certain desired material structures can be created is key to moving these materials toward applications. In this process, the joint work of simulation and experiment is critical to make progress, especially using molecular-level models of materials that enable new design directions.”

    Wired up

    The ability to create long, thin MoS2 channels in WSe2 could have a number of applications, the researchers say.

    “Based on [the materials’] electrical properties and optical properties, people are looking at using MoS2 and WSe2 for solar cells or for water splitting based on sunlight,” says Gang Seob Jung, an MIT graduate student in civil and environmental engineering and a coauthor on the new paper. “Most of the interesting stuff happens at the interface. When you have not just the one interface — if there are many nanowire interfaces — it could improve the efficiency of a solar cell, even if it’s quite random.”

    But the theoretical explanation of the molecular mechanism underlying the nanowires’ formation also raises the hope that their formation could be controlled, to enable the assembly of atom-scale electronic components.

    “Two-D materials, one of the most promising candidates for future electronics, ultimately need to beat silicon-based devices, which have achieved a few nanometers in size already,” says Yimo Han, a Cornell graduate student in chemistry and first author on the paper. “Two-D materials are the thinnest in the vertical direction but still span a quite large area in the lateral dimensions. We made the thinnest dislocation-free channels in 2-D materials, which is a big step toward subnanometer electronic devices out of 2-D materials.”

    Propagating polygons

    In a 2-D crystal, both MoS2 and WSe2 naturally arrange themselves into hexagons in which the constituent elements — molybdenum and sulfur or tungsten and selenium — alternate. Together, these hexagons produce a honeycomb pattern.

    The Cornell researchers’ fabrication technique preserves this honeycomb pattern across the junction between materials, a rare feat and one that’s very useful for electronics applications. Their technique uses chemical vapor deposition, in which a substrate — in this case, sapphire — is exposed to gases carrying chemicals that react to produce the desired materials.

    The natural sizes of the MoS2 and WSe2 hexagons are slightly different, however, so their integration puts a strain on both crystals, particularly near their junction. If a pair of WSe2 hexagons right at the MoS2 junction converts into a pentagon matched with a heptagon (a five-sided polygon paired with a seven-sided one), it releases the strain.
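
    The size difference is small in absolute terms but significant at the atomic scale. As a rough illustration, using approximate literature lattice constants rather than values reported in the paper, the mismatch comes out to a few percent:

# Rough estimate of the lattice mismatch that strains the MoS2/WSe2 junction.
# The lattice constants are approximate literature values for the monolayers,
# not numbers taken from the Nature Materials paper.

A_MOS2 = 3.16  # in-plane lattice constant of MoS2, in angstroms (approximate)
A_WSE2 = 3.28  # in-plane lattice constant of WSe2, in angstroms (approximate)

mismatch = (A_WSE2 - A_MOS2) / A_MOS2
print(f"lattice mismatch: {mismatch:.1%}")  # roughly 4 percent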

    This so-called 5|7 dislocation creates a site at which an MoS2 particle can attach itself. The resulting reaction inserts a molybdenum atom into the pentagon, producing a hexagon, and breaks the heptagon open. Sulfur atoms then attach to the heptagon to form another 5|7 dislocation. As this process repeats, the 5|7 dislocation moves deeper into WSe2 territory, with a nanowire extending behind it. The pattern in which the strain on the mismatched hexagons relaxes and recurs ensures that the dislocation progresses along a straight line.
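
    Viewed abstractly, the growth is a simple repeating cycle. The toy loop below is purely schematic, not the molecular model the MIT group ran; it only tracks the bookkeeping of each cycle, in which one lattice site behind the dislocation joins the nanowire and the 5|7 pair advances one step deeper into the WSe2:

# Schematic toy model of the 5|7 dislocation walking into the WSe2 region.
# Each pass of the loop mirrors one cycle described above: a molybdenum atom
# turns the pentagon into a hexagon, the heptagon breaks open, and sulfur
# atoms close it into a fresh 5|7 pair one lattice step deeper. This is
# bookkeeping only, not the researchers' atomistic model.

def grow_nanowire(cycles):
    dislocation = 0   # depth of the 5|7 pair inside the WSe2, in lattice steps
    wire_sites = []   # sites converted to MoS2 behind the advancing dislocation
    for _ in range(cycles):
        wire_sites.append(dislocation)  # Mo insertion: pentagon becomes hexagon
        dislocation += 1                # S attachment: new 5|7 pair, one step deeper
    return wire_sites

print(grow_nanowire(5))  # [0, 1, 2, 3, 4]: a straight wire, one site per cycle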

    The research was supported by the Office of Naval Research and the Department of Defense.

    10:59a
    How the brain keeps time

    Timing is critical for playing a musical instrument, swinging a baseball bat, and many other activities. Neuroscientists have come up with several models of how the brain achieves its exquisite control over timing, the most prominent being that there is a centralized clock, or pacemaker, somewhere in the brain that keeps time for the entire brain.

    However, a new study from MIT researchers provides evidence for an alternative timekeeping system that relies on the neurons responsible for producing a specific action. Depending on the time interval required, these neurons compress or stretch out the steps they take to generate the behavior at a specific time.

    “What we found is that it’s a very active process. The brain is not passively waiting for a clock to reach a particular point,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

    MIT postdoc Jing Wang and former postdoc Devika Narain are the lead authors of the paper, which appears in the Dec. 4 issue of Nature Neuroscience. Graduate student Eghbal Hosseini is also an author of the paper.

    Flexible control

    One of the earliest models of timing control, known as the clock accumulator model, suggested that the brain has an internal clock or pacemaker that keeps time for the rest of the brain. A later variation of this model suggested that instead of using a central pacemaker, the brain measures time by tracking the synchronization between different brain wave frequencies.

    Although these clock models are intuitively appealing, Jazayeri says, “they don’t match well with what the brain does.”

    No one has found evidence for a centralized clock, and Jazayeri and others wondered if parts of the brain that control behaviors that require precise timing might perform the timing function themselves. “People now question why would the brain want to spend the time and energy to generate a clock when it’s not always needed. For certain behaviors you need to do timing, so perhaps the parts of the brain that subserve these functions can also do timing,” he says.

    To explore this possibility, the researchers recorded neuron activity from three brain regions in animals as they performed a task at two different time intervals — 850 milliseconds or 1,500 milliseconds.

    The researchers found a complicated pattern of neural activity during these intervals. Some neurons fired faster, some fired slower, and some that had been oscillating began to oscillate faster or slower. However, the researchers’ key discovery was that no matter the neurons’ response, the rate at which they adjusted their activity depended on the time interval required.

    At any point in time, a collection of neurons is in a particular “neural state,” which changes over time as each individual neuron alters its activity in a different way. To execute a particular behavior, the entire system must reach a defined end state. The researchers found that the neurons always traveled the same trajectory from their initial state to this end state, no matter the interval. The only thing that changed was the rate at which the neurons traveled this trajectory.

    When the interval required was longer, this trajectory was “stretched,” meaning the neurons took more time to evolve to the final state. When the interval was shorter, the trajectory was compressed.
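
    One way to picture this, as a minimal sketch rather than the study’s analysis, is a fixed path through an invented two-dimensional “neural state” space that is traversed at a speed set by the target interval:

import numpy as np

# Minimal sketch of the "same trajectory, different speed" idea. The 2-D state
# path below is invented for illustration; the real neural state lives in a
# much higher-dimensional space of firing rates.

def neural_state(phase):
    """A fixed trajectory from the initial state (phase=0) to the end state (phase=1)."""
    return np.array([np.sin(0.5 * np.pi * phase), phase ** 2])

def traverse(interval_ms, dt_ms=50):
    """Walk the same trajectory, stretched or compressed to fit the interval."""
    times = np.arange(0, interval_ms + dt_ms, dt_ms)
    phases = times / interval_ms  # traversal speed scales inversely with the interval
    return [neural_state(p) for p in phases]

short = traverse(850)    # compressed: fewer steps to reach the same end state
long_ = traverse(1500)   # stretched: more steps along the identical path

# Both runs end in the same final state; only the speed of traversal differs.
print(np.allclose(short[-1], long_[-1]))  # True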

    “What we found is that the brain doesn’t change the trajectory when the interval changes, it just changes the speed with which it goes from the initial internal state to the final state,” Jazayeri says.

    Dean Buonomano, a professor of behavioral neuroscience at the University of California at Los Angeles, says that the study “provides beautiful evidence that timing is a distributed process in the brain — that is, there is no single master clock.”

    “This work also supports the notion that the brain does not tell time using a clock-like mechanism, but rather relies on the dynamics inherent to neural circuits, and that as these dynamics increase and decrease in speed, animals move more quickly or slowly,” adds Buonomano, who was not involved in the research.

    Neural networks

    The researchers focused their study on a brain loop that connects three regions: the dorsomedial frontal cortex, the caudate, and the thalamus. They found this distinctive neural pattern in the dorsomedial frontal cortex, which is involved in many cognitive processes, and the caudate, which is involved in motor control, inhibition, and some types of learning. However, in the thalamus, which relays motor and sensory signals, they found a different pattern: Instead of altering the speed of their trajectory, many of the neurons simply increased or decreased their firing rate, depending on the interval required.

    Jazayeri says this finding is consistent with the possibility that the thalamus is instructing the cortex on how to adjust its activity to generate a certain interval.

    The researchers also created a computer model to help them further understand this phenomenon. They began with a model of hundreds of neurons connected together in random ways, and then trained it to perform the same interval-producing task they had used to train animals, offering no guidance on how the model should perform the task.

    They found that these neural networks ended up using the same strategy they observed in the animal brain data. A key discovery was that this strategy only works if some of the neurons have nonlinear activity — that is, their output doesn’t simply increase in proportion to their input. Instead, as they receive more input, their output increases at a progressively slower rate.
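
    A minimal illustration of that saturating behavior, using a standard tanh activation as a stand-in for whatever nonlinearity the trained networks actually contained:

import numpy as np

# Contrast a linear unit with a saturating (nonlinear) one. The tanh function
# is a generic stand-in; the paper's trained networks are not reproduced here.

inputs = np.array([0.5, 1.0, 2.0, 4.0])
linear_out = inputs               # output grows in proportion to input
saturating_out = np.tanh(inputs)  # output grows more and more slowly with input

for x, lin, sat in zip(inputs, linear_out, saturating_out):
    print(f"input {x:4.1f}  linear {lin:4.1f}  saturating {sat:.3f}")
# The saturating unit's response flattens as input grows, which is the kind of
# nonlinearity the trained model needed for its speed-scaling strategy.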

    Jazayeri now hopes to explore further how the brain generates the neural patterns seen during varying time intervals, and also how our expectations influence our ability to produce different intervals.

    The research was funded by the Rubicon Grant from the Netherlands Scientific Organization, the National Institutes of Health, the Sloan Foundation, the Klingenstein Foundation, the Simons Foundation, the Center for Sensorimotor Neural Engineering, and the McGovern Institute.

    5:30p
    On 75th anniversary of first nuclear fission reactor, MIT re-enacts seminal experiment

    On Dec. 2, 1942, under the stands at the University of Chicago’s Stagg Field football stadium, Nobel laureate Enrico Fermi led an experimental team that produced humankind’s first controlled nuclear chain reaction — an event that marked the dawn of the nuclear era, enabling the development of the first atomic bomb and the first nuclear power reactors.

    To commemorate the first criticality of the Chicago Pile (CP-1) exactly 75 years later, researchers at MIT on Saturday celebrated by completing the restoration of a subcritical experimental facility similar to those used during development of the CP-1 reactor and its landmark sustained nuclear chain reaction.

    The historic experiment’s re-enactment was not merely a novelty. The researchers have revived a device, called a graphite exponential pile and originally built in 1957, that over the coming years will provide hands-on access to subcritical nuclear experiments for MIT’s students, and serve as a unique and valuable research tool that can be used to study new reactor designs for future nuclear power plants.

    The device is essentially just a large cube-shaped pile of blocks made of pure graphite — the material used as the “lead” of a pencil — with holes drilled through to allow insertion of rods of uranium. These natural-uranium rods have such low radiation emissions that they could be safely handled with bare hands, as Fermi and his collaborators did in 1942 (though in this case they will be handled with protective gloves anyway).

    In the decades following Fermi’s original experiment, more than two dozen similar graphite pile devices were built at universities and national laboratories around the country and used for basic research and teaching, but over the years most of those have been disposed of. The one at MIT, though only half as big as Fermi’s original, was the largest of these later installations. It escaped that fate but had been unused and forgotten for many years, until it was “rediscovered” last year by professor Michael Short of MIT’s Department of Nuclear Science and Engineering.

    Kord Smith, the KEPCO Professor of the Practice of Nuclear Science and Engineering, was surprised to learn that the device was still intact. Covered in protective metal panels that made it look like a disused storage cabinet, it went unnoticed even by students and faculty working near it. Smith, working with colleagues in the Department of Nuclear Science and Engineering and with David Moncton, director of the Nuclear Reactor Laboratory, and his staff, quickly formulated a plan to restore the device for the 75th anniversary of the original groundbreaking experiment. The design and construction of the system had been the subject of a 1957 bachelor’s thesis by MIT nuclear science and engineering student Richard Knapp.

    Now, with the device and its 30 tons of graphite and 2.5 tons of uranium fully cleaned and restored, the final slugs of uranium were ceremonially slid into place on Dec. 2 to complete the system. This took place before an invited group of 49 faculty, students, and guests — the same number who were present with Fermi in Chicago — at the precise time of the original experiment.

    Smith explains that MIT’s subcritical graphite pile originally fell into disuse as the nuclear industry quickly shifted from graphite-based reactor designs to alternatives based on light water, heavy water, or liquid sodium, so experiments with the graphite system came to be seen as less relevant. In these devices, graphite (or water) serves as a moderator that slows neutrons emanating from a radiation source, reducing their kinetic energy by a factor of more than a million so that they readily interact with uranium atoms and can initiate a self-sustaining chain reaction, in which each fission releases further neutrons that go on to split other nuclei, creating a cascade. Criticality of the much larger CP-1 graphite pile was controlled by inserting or withdrawing control rods, made of cadmium, which absorb neutrons and interrupt the reaction.
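
    For a rough sense of the numbers involved, using textbook values rather than figures specific to the MIT pile, a typical fission neutron is born at roughly 2 MeV and is moderated down to a thermal energy near 0.025 eV:

import math

# Back-of-the-envelope moderation numbers, using textbook values rather than
# measurements from the MIT pile: a typical fission neutron is born at about
# 2 MeV and is slowed to thermal energy, about 0.025 eV at room temperature.

E_FAST_EV = 2.0e6     # typical fission-neutron kinetic energy, in eV
E_THERMAL_EV = 0.025  # typical thermal-neutron energy, in eV

energy_ratio = E_FAST_EV / E_THERMAL_EV  # ~8e7, i.e. "more than a million"
speed_ratio = math.sqrt(energy_ratio)    # kinetic energy scales as speed squared

print(f"kinetic energy reduced by a factor of about {energy_ratio:.0e}")
print(f"speed reduced by a factor of about {speed_ratio:,.0f}")  # roughly 9,000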

    Today, a wide variety of cutting-edge designs for proposed next-generation nuclear reactors, including designs with passive cooling systems or continuous operation without shutdowns for refueling, do once again make use of graphite, so the graphite pile is once again a useful research tool. It will allow students to handle nuclear fuel directly, and it is far more accessible than a full-scale nuclear reactor such as MIT’s own research reactor, which runs almost continuously and produces 6 megawatts of thermal power. Experiments done in that reactor, to study new kinds of fuel-rod cladding or new instruments for monitoring the reactions, for example, typically run for a year at a time.

    Students will be able to install, run, and get results from experiments in the graphite exponential pile within a few hours or days, Smith says. Use of the graphite pile is anticipated to stimulate students’ interest in, and preparation for, performing cutting-edge experiments on the much more powerful MIT research reactor.

    “Graphite as a medium for reactors has come and gone a few times over the years,” he says, but now, “we’re in the midst of a rebirth.” And even today, there are still significant open questions about exactly how neutrons from nuclear reactions scatter through the crystal lattice of graphite. In fact, Smith says, a new physics model to describe these interactions has recently been proposed, and using the graphite pile “we want to design experiments to test these new theoretical models.”

    In addition to enabling experiments that could help in the development of new reactor designs, fuels, cladding types, or measurement systems, this device and the MIT reactor will be valuable educational tools for nuclear engineers, Smith says. “We tend to get students who are very good at developing computational algorithms and models. But if you don’t have something to compare your calculations with, you start to think your simulations are perfect.” In the real world, though, actual measurements usually don’t agree perfectly with predictions, and understanding such differences often leads to the development of improved theoretical models, he says.
