MIT Research News' Journal
 

Wednesday, October 16th, 2019

    12:00a
    Assembler robots make large structures from little pieces

    Today’s commercial aircraft are typically manufactured in sections, often in different locations — wings at one factory, fuselage sections at another, tail components somewhere else — and then flown to a central plant in huge cargo planes for final assembly.

    But what if the final assembly was the only assembly, with the whole plane built out of a large array of tiny identical pieces, all put together by an army of tiny robots?

    That’s the vision that graduate student Benjamin Jenett, working with Professor Neil Gershenfeld in MIT’s Center for Bits and Atoms (CBA), has been pursuing as his doctoral thesis work. It’s now reached the point that prototype versions of such robots can assemble small structures and even work together as a team to build up larger assemblies.

    The new work appears in the October issue of IEEE Robotics and Automation Letters, in a paper by Jenett, Gershenfeld, fellow graduate student Amira Abdel-Rahman, and CBA alumnus Kenneth Cheung SM ’07, PhD ’12, who is now at NASA’s Ames Research Center, where he leads the ARMADAS project to design a lunar base that could be built with robotic assembly.

    “This paper is a treat,” says Aaron Becker, an associate professor of electrical and computer engineering at the University of Houston, who was not associated with this work. “It combines top-notch mechanical design with jaw-dropping demonstrations, new robotic hardware, and a simulation suite with over 100,000 elements,” he says.

    “What’s at the heart of this is a new kind of robotics, that we call relative robots,” Gershenfeld says. Historically, he explains, there have been two broad categories of robotics — ones made out of expensive custom components that are carefully optimized for particular applications such as factory assembly, and ones made from inexpensive mass-produced modules with much lower performance. The new robots, however, are an alternative to both. They’re much simpler than the former, while much more capable than the latter, and they have the potential to revolutionize the production of large-scale systems, from airplanes to bridges to entire buildings.

    Experiments demonstrating relative robotic assembly of 1D, 2D, and 3D discrete cellular structures

    According to Gershenfeld, the key difference lies in the relationship between the robotic device and the materials that it is handling and manipulating. With these new kinds of robots, “you can’t separate the robot from the structure — they work together as a system,” he says. For example, while most mobile robots require highly precise navigation systems to keep track of their position, the new assembler robots only need to keep track of where they are in relation to the small subunits, called voxels, that they are currently working on. Every time the robot takes a step onto the next voxel, it readjusts its sense of position, always in relation to the specific components that it is standing on at the moment.
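
    To make the idea concrete, here is a minimal, hypothetical Python sketch of voxel-relative navigation; the class and method names are illustrative and are not taken from the team's code. The robot keeps no global coordinates: its only state is the voxel it is clamped to, and its sense of position is re-anchored at every step, while the structure itself supplies any global information.

```python
# Hypothetical sketch of voxel-relative navigation (illustrative names only).
# The robot never stores global coordinates; it only tracks the voxel it is on.

class Voxel:
    def __init__(self, index):
        self.index = index        # lattice coordinates, known to the structure
        self.neighbors = {}       # direction (e.g. "forward") -> adjacent Voxel

class AssemblerRobot:
    def __init__(self, start_voxel):
        self.current = start_voxel            # the only state the robot keeps

    def step(self, direction):
        """Advance one voxel; position is re-anchored to the new voxel."""
        nxt = self.current.neighbors.get(direction)
        if nxt is None:
            raise ValueError("no voxel there; the step would leave the lattice")
        self.current = nxt
        # Global position is recovered from the structure, not from the robot.
        return self.current.index
```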

    The underlying vision is that just as the most complex of images can be reproduced by using an array of pixels on a screen, virtually any physical object can be recreated as an array of smaller three-dimensional pieces, or voxels, which can themselves be made up of simple struts and nodes. The team has shown that these simple components can be arranged to distribute loads efficiently; they are largely made up of open space so that the overall weight of the structure is minimized. The units can be picked up and placed in position next to one another by the simple assemblers, and then fastened together using latching systems built into each voxel.

    The robots themselves resemble a small arm, with two long segments that are hinged in the middle, and devices for clamping onto the voxel structures on each end. The simple devices move around like inchworms, advancing along a row of voxels by repeatedly opening and closing their V-shaped bodies to move from one to the next. Jenett has dubbed the little robots BILL-E (a nod to the movie robot WALL-E), which stands for Bipedal Isotropic Lattice Locomoting Explorer.

    Jenett has built several versions of the assemblers as proof-of-concept designs, along with corresponding voxel designs featuring latching mechanisms to easily attach or detach each one from its neighbors. He has used these prototypes to demonstrate the assembly of the blocks into linear, two-dimensional, and three-dimensional structures. “We’re not putting the precision in the robot; the precision comes from the structure” as it gradually takes shape, Jenett says. “That’s different from all other robots. It just needs to know where its next step is.”

    As it works on assembling the pieces, each of the tiny robots can count its steps over the structure, says Gershenfeld, who is the director of CBA. Along with navigation, this lets the robots correct errors at each step, eliminating most of the complexity of typical robotic systems, he says. “It’s missing most of the usual control systems, but as long as it doesn’t miss a step, it knows where it is.” For practical assembly applications, swarms of such units could be working together to speed up the process, thanks to control software developed by Abdel-Rahman that can allow the robots to coordinate their work and avoid getting in each other’s way.
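
    The coordination software itself is not described in this article, so the sketch below is a purely hypothetical illustration of one simple scheme for keeping robots out of each other's way: each robot must reserve a voxel before stepping onto it, and waits whenever another robot already holds that reservation.

```python
# Hypothetical voxel-reservation scheduler (not Abdel-Rahman's software):
# a robot may only step onto a voxel it has successfully claimed.

class LatticeScheduler:
    def __init__(self):
        self.reserved = {}                      # voxel index -> robot id

    def request_step(self, robot_id, voxel_index):
        holder = self.reserved.get(voxel_index)
        if holder is not None and holder != robot_id:
            return False                        # occupied: the robot waits this tick
        self.reserved[voxel_index] = robot_id   # claim the voxel
        return True

    def release(self, robot_id, voxel_index):
        if self.reserved.get(voxel_index) == robot_id:
            del self.reserved[voxel_index]

scheduler = LatticeScheduler()
print(scheduler.request_step("bille_1", (0, 0, 1)))   # True: claim granted
print(scheduler.request_step("bille_2", (0, 0, 1)))   # False: bille_2 must wait
```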

    This kind of assembly of large structures from identical subunits using a simple robotic system, much like a child assembling a large castle out of LEGO blocks, has already attracted the interest of some major potential users, including NASA, MIT’s collaborator on this research, and the European aerospace company Airbus SE, which also helped to sponsor the study.

    Computer simulation shows a group of four assembler robots at work on building a three-dimensional structure. Whole swarms of such robots could be unleashed to create large structures such as airplane wings or space habitats. Illustration courtesy of the researchers

    One advantage of such assembly is that repairs and maintenance can be handled easily by the same kind of robotic process as the initial assembly. Damaged sections can be disassembled from the structure and replaced with new ones, producing a structure that is just as robust as the original. “Unbuilding is as important as building,” Gershenfeld says, and this process can also be used to make modifications or improvements to the system over time.

    “For a space station or a lunar habitat, these robots would live on the structure, continuously maintaining and repairing it,” says Jenett.

    Ultimately, such systems could be used to construct entire buildings, especially in difficult environments such as in space, or on the moon or Mars, Gershenfeld says. This could eliminate the need to ship large preassembled structures all the way from Earth. Instead it could be possible to send large batches of the tiny subunits — or form them from local materials using systems that could crank out these subunits at their final destination point. “If you can make a jumbo jet, you can make a building,” Gershenfeld says.

    Sandor Fekete, director of the Institute of Operating Systems and Computer Networks at the Technical University of Braunschweig, in Germany, who was not involved in this work, says “Ultralight, digital materials such as [these] open amazing perspectives for constructing efficient, complex, large-scale structures, which are of vital importance in aerospace applications.”

    But assembling such systems is a challenge, says Fekete, who plans to join the research team for further development of the control systems. “This is where the use of small and simple robots promises to provide the next breakthrough: Robots don’t get tired or bored, and using many miniature robots seems like the only way to get this critical job done. This extremely original and clever work by Ben Jenett and collaborators makes a giant leap towards the construction of dynamically adjustable airplane wings, enormous solar sails or even reconfigurable space habitats.”

    In the process, Gershenfeld says, “we feel like we’re uncovering a new field of hybrid material-robot systems.”

    12:00a
    Recovering “lost dimensions” of images and video

    MIT researchers have developed a model that recovers valuable data lost from images and video that have been “collapsed” into lower dimensions.

    The model could be used to recreate video from motion-blurred images, or from new types of cameras that capture a person’s movement around corners but only as vague one-dimensional lines. While more testing is needed, the researchers think this approach could someday be used to convert 2D medical images into more informative — but more expensive — 3D body scans, which could benefit medical imaging in poorer nations.

    “In all these cases, the visual data has one dimension — in time or space — that’s completely lost,” says Guha Balakrishnan, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and first author on a paper describing the model, which is being presented at next week’s International Conference on Computer Vision. “If we recover that lost dimension, it can have a lot of important applications.”

    Capturing visual data often collapses multiple dimensions of time and space into one or two dimensions, called “projections.” X-rays, for example, collapse three-dimensional data about anatomical structures into a flat image. Or, consider a long-exposure shot of stars moving across the sky: The stars, whose position is changing over time, appear as blurred streaks in the still shot.

    Likewise, “corner cameras,” recently invented at MIT, detect moving people around corners. These could be useful for, say, firefighters finding people in burning buildings. But the cameras aren’t exactly user-friendly. Currently they only produce projections that resemble blurry, squiggly lines, corresponding to a person’s trajectory and speed.

    The researchers invented a “visual deprojection” model that uses a neural network to “learn” patterns that match low-dimensional projections to their original high-dimensional images and videos. Given new projections, the model uses what it’s learned to recreate all the original data from a projection.

    In experiments, the model synthesized accurate video frames showing people walking, by extracting information from single, one-dimensional lines similar to those produced by corner cameras. The model also recovered video frames from single, motion-blurred projections of digits moving around a screen, from the popular Moving MNIST dataset.

    Joining Balakrishnan on the paper are: Amy Zhao, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and CSAIL; EECS professors John Guttag, Fredo Durand, and William T. Freeman; and Adrian Dalca, a faculty member in radiology at Harvard Medical School.

    Clues in pixels

    The work started as a “cool inversion problem” to recreate movement that causes motion blur in long-exposure photography, Balakrishnan says. In a projection’s pixels there exist some clues about the high-dimensional source.

    Digital cameras capturing long-exposure shots, for instance, will basically aggregate photons over a period of time on each pixel. In capturing an object’s movement over time, the camera will take the average value of the movement-capturing pixels. Then, it applies those average values to corresponding heights and widths of a still image, which creates the signature blurry streaks of the object’s trajectory. By analyzing how those pixel intensities vary, the movement can, in theory, be recreated.
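
    The pixel-averaging step is easy to state in code. The toy example below (not the paper's implementation) collapses the time dimension by averaging frames, turning a moving bright dot into a uniform streak; undoing exactly this kind of collapse is what the deprojection model is trained to do.

```python
# Toy illustration of a long-exposure "projection": average each pixel over time.
import numpy as np

def project_time(frames):
    """frames: array of shape (T, H, W); returns the (H, W) time-averaged image."""
    return frames.mean(axis=0)

# A bright dot sweeping across a 1 x 8 strip over 8 frames
frames = np.zeros((8, 1, 8))
for t in range(8):
    frames[t, 0, t] = 1.0

print(project_time(frames))   # every pixel along the path ends up at 1/8: a uniform streak
```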

    As the researchers realized, that problem is relevant in many areas: X-rays, for instance, capture height, width, and depth information of anatomical structures, but they use a similar pixel-averaging technique to collapse depth into a 2D image. Corner cameras — invented in 2017 by Freeman, Durand, and other researchers — capture reflected light signals around a hidden scene that carry two-dimensional information about a person’s distance from walls and objects. The pixel-averaging technique then collapses that data into a one-dimensional video — basically, measurements of different lengths over time in a single line.  

    The researchers built a general model, based on a convolutional neural network (CNN) — a machine-learning model that’s become a powerhouse for image-processing tasks — that captures clues about any lost dimension in averaged pixels.

    Synthesizing signals

    In training, the researchers fed the CNN thousands of pairs of projections and their high-dimensional sources, called “signals.” The CNN learns pixel patterns in the projections that match those in the signals. Powering the CNN is a framework called a “variational autoencoder,” which evaluates how well the CNN outputs match its inputs across some statistical probability. From that, the model learns a “space” of all possible signals that could have produced a given projection. This creates, in essence, a type of blueprint for how to go from a projection to all possible matching signals.
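
    As a rough illustration of that training setup, the PyTorch sketch below uses fully connected layers standing in for the paper's convolutional network and a standard variational-autoencoder objective (reconstruction error plus a KL term). The dimensions, layer sizes, and loss weighting are placeholder assumptions, not values from the paper.

```python
# Simplified deprojection training sketch (assumed architecture and shapes).
import torch
import torch.nn as nn

PROJ_DIM, SIGNAL_DIM, LATENT = 64, 64 * 24, 32   # e.g. a 64-pixel line -> 24 frames of 64 pixels

class Deprojector(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(PROJ_DIM, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT)
        self.logvar = nn.Linear(256, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, SIGNAL_DIM))

    def forward(self, projection):
        h = self.enc(projection)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # sample a signal code
        return self.dec(z), mu, logvar

model = Deprojector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

projection = torch.rand(8, PROJ_DIM)     # placeholder batch of projections
signal = torch.rand(8, SIGNAL_DIM)       # placeholder matching high-dimensional signals

recon, mu, logvar = model(projection)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = nn.functional.mse_loss(recon, signal) + 1e-3 * kl   # reconstruction + KL regularizer
opt.zero_grad()
loss.backward()
opt.step()
```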

    When shown previously unseen projections, the model notes the pixel patterns and follows the blueprints to all possible signals that could have produced that projection. Then, it synthesizes new images that combine all data from the projection and all data from the signal. This recreates the high-dimensional signal.

    For one experiment, the researchers collected a dataset of 35 videos of 30 people walking in a specified area. They collapsed all frames into projections that they used to train and test the model. From a hold-out set of six unseen projections, the model accurately recreated 24 frames of the person’s gait, down to the position of their legs and the person’s size as they walked toward or away from the camera. The model seems to learn, for instance, that pixels that get darker and wider with time likely correspond to a person walking closer to the camera.

    “It’s almost like magic that we’re able to recover this detail,” Balakrishnan says.

    The researchers didn’t test their model on medical images. But they are now collaborating with Cornell University colleagues to recover 3D anatomical information from 2D medical images, such as X-rays, with no added costs — which can enable more detailed medical imaging in poorer nations. Doctors mostly prefer 3D scans, such as those captured with CT scans, because they contain far more useful medical information. But CT scans are generally difficult and expensive to acquire.

    “If we can convert X-rays to CT scans, that would be somewhat game-changing,” Balakrishnan says. “You could just take an X-ray and push it through our algorithm and see all the lost information.”

    1:15p
    Controlling our internal world

    Olympic skaters can launch, perform multiple aerial turns, and land gracefully, anticipating imperfections and reacting quickly to correct course. To make such elegant movements, the brain must have an internal model of the body to control, predict, and make almost-instantaneous adjustments to motor commands. So-called “internal models” are a fundamental concept in engineering and have long been suggested to underlie control of movement by the brain, but what about processes that occur in the absence of movement, such as contemplation, anticipation, planning?

    Using a novel combination of task design, data analysis, and modeling, MIT neuroscientist Mehrdad Jazayeri and colleagues now provide compelling evidence that the core elements of an internal model also control purely mental processes.

    “During my thesis, I realized that I’m interested not so much in how our senses react to sensory inputs, but instead in how my internal model of the world helps me make sense of those inputs,” says Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

    Indeed, understanding the building blocks exerting control of such mental processes could help to paint a better picture of disruptions in mental disorders, such as schizophrenia.

    Internal models for mental processes

    Scientists working on the motor system have long theorized that the brain overcomes noisy and slow signals using an accurate internal model of the body. This internal model serves three critical functions: it provides motor commands to control movement, simulates upcoming movement to overcome delays, and uses feedback to make real-time adjustments.
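
    Those three functions can be seen together in a toy control loop. The sketch below is schematic, with trivially simple assumed dynamics; it is meant only to show how a controller, a simulator (forward model), and feedback fit together, not to reproduce the study's model.

```python
# Toy internal-model loop (schematic, assumed dynamics; not the study's model).

def internal_model_step(predicted, measured, goal, gain=0.5, feedback_weight=0.3):
    # Feedback: blend the slow, noisy measurement into the internal prediction.
    corrected = predicted + feedback_weight * (measured - predicted)
    # Controller: issue a command proportional to the remaining error.
    command = gain * (goal - corrected)
    # Simulator: predict the next state without waiting for new sensory input.
    next_predicted = corrected + command
    return next_predicted, command

state, predicted = 0.0, 0.0
for _ in range(10):
    measured = state                      # stand-in for delayed sensory feedback
    predicted, command = internal_model_step(predicted, measured, goal=1.0)
    state += command                      # the "body" responds to the motor command

print(round(state, 3))                    # converges toward the goal of 1.0
```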

    “The framework that we currently use to think about how the brain controls our actions is one that we have borrowed from robotics: We use controllers, simulators, and sensory measurements to control machines and train operators,” explains Reza Shadmehr, a professor at the Johns Hopkins School of Medicine who was not involved with the study. “That framework has largely influenced how we imagine our brain controlling our movements.”

    Jazayeri and colleagues wondered whether the same framework might explain the control principles governing mental states in the absence of any movement.

    “When we’re simply sitting, thoughts and images run through our heads and, fundamental to intellect, we can control them,” explains lead author Seth Egger, a former postdoc in the Jazayeri lab who is now at Duke University. “We wanted to find out what’s happening between our ears when we are engaged in thinking.”

    Imagine, for example, a sign language interpreter keeping up with a fast speaker. To track speech accurately, the interpreter continuously anticipates where the speech is going, rapidly adjusting when the actual words deviate from the prediction. The interpreter could be using an internal model to anticipate upcoming words, and using feedback to make adjustments on the fly.

    1-2-3-Go

    Hypothesizing about how the components of an internal model function in scenarios such as translation is one thing. Cleanly measuring and proving the existence of these elements is much more complicated, as the activity of the controller, simulator, and feedback are intertwined. To tackle this problem, Jazayeri and colleagues devised a clever task with primate models in which the controller, simulator, and feedback act at distinct times.

    In this task, called “1-2-3-Go,” the animal sees three consecutive flashes (1, 2, and 3) that form a regular beat, and learns to make an eye movement (Go) when it anticipates the fourth flash should occur. During the task, researchers measured neural activity in a region of the frontal cortex they had previously linked to the timing of movement.

    Jazayeri and colleagues had clear predictions about when the controller would act (between the third flash and “Go”) and when feedback would be engaged (with each flash of light). The key surprise came when researchers saw evidence of the simulator anticipating the third flash. This unexpected neural activity had dynamics that resembled those of the controller, but it was not associated with a response. In other words, the researchers uncovered a covert plan that functions as the simulator, thereby revealing all three elements of an internal model for a mental process: the planning and anticipation of “Go” in the “1-2-3-Go” sequence.
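
    The timing computation the task calls for is simple to write down. The sketch below is only an illustration of the idea: estimate the beat from the measured intervals between flashes, then plan the “Go” response one interval after the third flash.

```python
# Illustrative timing rule for "1-2-3-Go": predict the fourth beat from the first three.

def predict_go_time(flash_times):
    t1, t2, t3 = flash_times
    interval = ((t2 - t1) + (t3 - t2)) / 2.0   # estimated beat period
    return t3 + interval                       # anticipated time of the fourth beat

print(predict_go_time([0.0, 0.6, 1.2]))        # -> 1.8 (seconds)
```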

    “Jazayeri’s work is important because it demonstrates how to study mental simulation in animals,” explains Shadmehr, “and where in the brain that simulation is taking place.”

    Having found how and where to measure an internal model in action, Jazayeri and colleagues now plan to ask whether these control strategies can explain how primates effortlessly generalize their knowledge from one behavioral context to another. For example, how does an interpreter rapidly adjust when someone with widely different speech habits takes the podium? This line of investigation promises to shed light on high-level mental capacities of the primate brain that simpler animals seem to lack, that go awry in mental disorders, and that designers of artificial intelligence systems so fondly seek.

    4:44p
    Scientists discover fractal patterns in a quantum material

    A fractal is any geometric pattern that occurs again and again, at different sizes and scales, within the same object. This “self-similarity” can be seen throughout nature, for example in a snowflake’s edge, a river network, the splitting veins in a fern, and the crackling forks of lightning.

    Now physicists at MIT and elsewhere have for the first time discovered fractal-like patterns in a quantum material — a material that exhibits strange electronic or magnetic behavior, as a result of quantum, atomic-scale effects.

    The material in question is neodymium nickel oxide, or NdNiO3, a rare earth nickelate that can act, paradoxically, as both an electrical conductor and insulator, depending on its temperature. The material also happens to be magnetic, though the orientation of its magnetism is not uniform throughout the material, but rather resembles a patchwork of “domains.” Each domain represents a region of the material with a particular magnetic orientation, and domains can vary in size and shape throughout the material.

    In their study, the researchers identified a fractal-like pattern within the texture of the material’s magnetic domains. They found that the distribution of domain sizes resembles a downward slope, reflecting a higher number of small domains and a lower number of large domains. If the researchers zoomed in on any part of the total distribution — say, a slice of midsized domains — they observed the same downward-sloping pattern, with a higher number of smaller versus larger domains. 

    As it turns out, this same distribution appears repeatedly throughout the material, no matter the size range, or scale at which it’s observed —  a quality that the team recognized as fractal in nature.

    “The domain pattern was hard to decipher at first, but after analyzing the statistics of domain distribution, we realized it had a fractal behavior,” says Riccardo Comin, assistant professor of physics at MIT. “It was completely unexpected — it was serendipity.”

    Scientists are exploring neodymium nickel oxide for various applications, including as a possible building block for neuromorphic devices — artificial systems that mimic biological neurons. Just as a neuron can be both active and inactive, depending on the voltage that it receives, NdNiO3 can be a conductor or an insulator. Comin says an understanding of the material’s nanoscale magnetic and electronic textures is essential to engineering other materials for similar applications.

    Comin and his colleagues, including lead author and MIT graduate student Jiarui Li, have published their results today in the journal Nature Communications.

    Lighthouses, refocused

    Comin and Li didn’t intend to find fractals in a quantum material. Instead, the team was studying the effect of temperature on the material’s magnetic domains.

    “The material is not magnetic at all temperatures,” Comin says. “We wanted to see how these domains pop up and grow once the magnetic phase is reached upon cooling down the material.”

    To do that, the team had to devise a way to measure the material’s magnetic domains at the nanoscale, since some domains can be as small as several atoms wide, while others span tens of thousands of atoms across. 

    Researchers often use X-rays to probe a material’s magnetic properties. Here, low-energy X-rays, known as soft X-rays, were used to sense the material’s magnetic order and its configuration. Comin and colleagues performed these studies using the National Synchrotron Light Source II at Brookhaven National Laboratory, where a massive, ring-shaped particle accelerator slings electrons around by the billions. The bright beams of soft X-rays produced by this machine are a tool for the most advanced characterization of materials.

    “But still, this X-ray beam is not nanoscopic,” Comin says. “So we adopted a special solution that allows squeezing this beam down to a very small footprint, so that we could map, point by point, the arrangement of magnetic domains in this material.”

    In the end, the researchers developed a new X-ray-focusing lens based on a design that’s been used in lighthouses for centuries. Their new X-ray probe is based on the Fresnel lens, a type of composite lens made not from a single, curved slab of glass, but from many pieces of glass arranged to act like a curved lens. In lighthouses, a Fresnel lens can span several meters across and is used to focus the diffuse light produced by a bright lamp into a directional beam that guides ships at sea. Comin’s team fabricated a similar lens, though much smaller, about 150 microns wide, to focus a soft X-ray beam several hundred microns in diameter down to a spot about 70 nanometers wide.

    “The beauty of this is, we’re using concepts from geometric optics that have been known for centuries, and have been applied in lighthouses, and we’re just scaling them down by a factor of 10,000 or so,” Comin says.

    Fractal textures

    Using their special X-ray-focusing lens, the researchers, at Brookhaven’s synchrotron light source, focused incoming soft X-ray beams onto a thin film of neodymium nickel oxide. Then they scanned the much smaller, nanoscopic beam of X-rays across the sample to map the size, shape, and orientation of magnetic domains, point by point. They mapped the sample at different temperatures, confirming that the material became magnetic, or formed magnetic domains, below a certain critical temperature. Above this temperature, the domains disappeared, and the magnetic order was effectively erased.

    Interestingly, the group found that if they cooled the sample back down to below the critical temperature, the magnetic domains reappeared almost in the same place as before.

    “So it turns out the system has memory,” Comin says. “The material retains a memory of where the magnetic bits would be. This was also very unexpected. We thought we would see a completely new domain distribution, but we observed the same pattern re-emerging, even after seemingly erasing these magnetic bits altogether.”

    After mapping the material’s magnetic domains, and measuring the size of each domain, the researchers counted the number of domains of a given size, and plotted their number as a function of size. The resulting distribution resembled a downward slope — a pattern that they found, again and again, no matter what range of domain size they focused in on.
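
    One simple way to see why such a distribution counts as fractal, sketched below on synthetic numbers rather than the study's measurements, is that a power law plots as a straight line on log-log axes, and the fitted slope barely changes when the analysis is restricted to a narrower sub-range of domain sizes.

```python
# Illustrative self-similarity check on synthetic power-law "domain sizes"
# (not the study's analysis code or data).
import numpy as np

rng = np.random.default_rng(0)
sizes = (1.0 - rng.random(50_000)) ** (-1.0 / 1.5)    # Pareto-distributed sizes >= 1

def log_log_slope(sizes, lo, hi, bins=20):
    """Fit the slope of the size histogram on log-log axes within [lo, hi)."""
    edges = np.logspace(np.log10(lo), np.log10(hi), bins)
    counts, edges = np.histogram(sizes[(sizes >= lo) & (sizes < hi)], bins=edges)
    centers = np.sqrt(edges[:-1] * edges[1:])
    mask = counts > 0
    return np.polyfit(np.log10(centers[mask]), np.log10(counts[mask]), 1)[0]

print(log_log_slope(sizes, 1, 100))    # slope over the full range
print(log_log_slope(sizes, 3, 30))     # a similar slope over a zoomed-in range: self-similarity
```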

    “We have observed textures of unique richness spanning multiple spatial scales,” Li says. “Most strikingly, we have found that these magnetic patterns have a fractal nature.”

    Comin says that understanding how a material’s magnetic domains arrange themselves at the nanoscale, and knowing that they exhibit memory, is useful, for instance, in designing artificial neurons and resilient magnetic data storage devices.

    “Similar to magnetic disks in spinning hard drives, one can envision storing bits of information in these magnetic domains,” Comin says. “If the material has a sort of memory, you could have a system that’s robust against external perturbations, so even if subjected to heat, the information is not lost.”

    This research was supported by the National Science Foundation and the Sloan Research Fellowship.

