MIT Research News' Journal
 

Wednesday, February 1st, 2017

    Transparent, gel-based robots can catch and release live fish

    Engineers at MIT have fabricated transparent, gel-based robots that move when water is pumped in and out of them. The bots can perform a number of fast, forceful tasks, including kicking a ball underwater, and grabbing and releasing a live fish.

    The robots are made entirely of hydrogel — a tough, rubbery, nearly transparent material that’s composed mostly of water. Each robot is an assemblage of hollow, precisely designed hydrogel structures, connected to rubbery tubes. When the researchers pump water into the hydrogel robots, the structures quickly inflate in orientations that enable the bots to curl up or stretch out.

    The team fashioned several hydrogel robots, including a finlike structure that flaps back and forth, an articulated appendage that makes kicking motions, and a soft, hand-shaped robot that can squeeze and relax.

    Because the robots are both powered by and made almost entirely of water, their visual and acoustic properties are nearly identical to those of water. The researchers propose that these robots, if designed for underwater applications, may be virtually invisible.

    The group, led by Xuanhe Zhao, associate professor of mechanical engineering and civil and environmental engineering at MIT, and graduate student Hyunwoo Yuk, is currently looking to adapt hydrogel robots for medical applications.

    “Hydrogels are soft, wet, biocompatible, and can form more friendly interfaces with human organs,” Zhao says. “We are actively collaborating with medical groups to translate this system into soft manipulators such as hydrogel ‘hands,’ which could potentially apply more gentle manipulations to tissues and organs in surgical operations.”

    Zhao and Yuk have published their results this week in the journal Nature Communications. Their co-authors include MIT graduate students Shaoting Lin and Chu Ma, postdoc Mahdi Takaffoli, and associate professor of mechanical engineering Nicholas X. Fang.

    Robot recipe

    For the past five years, Zhao’s group has been developing “recipes” for hydrogels, mixing solutions of polymers and water, and using techniques they invented to fabricate tough yet highly stretchable materials. They have also developed ways to glue these hydrogels to various surfaces such as glass, metal, ceramic, and rubber, creating extremely strong bonds that resist peeling.

    The team realized that such durable, flexible, strongly bondable hydrogels might be ideal materials for use in soft robotics. Many groups have designed soft robots from rubbers like silicones, but Zhao points out that such materials are not as biocompatible as hydrogels. As hydrogels are mostly composed of water, he says, they are naturally safer to use in a biomedical setting. And while others have attempted to fashion robots out of hydrogels, their solutions have resulted in brittle, relatively inflexible materials that crack or burst with repeated use.

    In contrast, Zhao’s group found that its formulations lent themselves well to soft robotics.

    “We didn’t think of this kind of [soft robotics] project initially, but realized maybe our expertise can be crucial to translating these jellies as robust actuators and robotic structures,” Yuk says.

    Fast and forceful

    To apply their hydrogel materials to soft robotics, the researchers first looked to the animal world. They concentrated in particular on leptocephali, or glass eels — tiny, transparent, hydrogel-like eel larvae that hatch in the ocean and eventually migrate to their natural river habitats.

    “It is extremely long travel, and there is no means of protection,” Yuk says. “It seems they tried to evolve into a transparent form as an efficient camouflage tactic. And we wanted to achieve a similar level of transparency, force, and speed.”

    To do so, Yuk and Zhao used 3-D printing and laser cutting techniques to print their hydrogel recipes into robotic structures and other hollow units, which they bonded to small, rubbery tubes that are connected to external pumps.

    To actuate, or move, the structures, the team used syringe pumps to inject water through the hollow structures, enabling them to quickly curl or stretch, depending on the overall configuration of the robots.
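
    The article does not detail the pump control itself, but the basic cycle can be sketched in a few lines: inject a set volume of water to inflate (and thus curl or stretch) a structure, then withdraw it so the structure relaxes. The SyringePump class below is a hypothetical stand-in for the team’s actual hardware interface, which is not described here.

    ```python
    import time

    class SyringePump:
        """Hypothetical stand-in for a lab syringe-pump controller (not the
        MIT team's actual hardware interface)."""

        def inject(self, volume_ml, rate_ml_per_s):
            print(f"injecting {volume_ml} mL at {rate_ml_per_s} mL/s")
            time.sleep(volume_ml / rate_ml_per_s)

        def withdraw(self, volume_ml, rate_ml_per_s):
            print(f"withdrawing {volume_ml} mL at {rate_ml_per_s} mL/s")
            time.sleep(volume_ml / rate_ml_per_s)

    def actuation_cycle(pump, volume_ml=5.0, rate_ml_per_s=5.0):
        """One inflate/deflate cycle: injecting water curls or stretches the
        hydrogel structure; withdrawing it lets the structure relax."""
        pump.inject(volume_ml, rate_ml_per_s)
        pump.withdraw(volume_ml, rate_ml_per_s)

    if __name__ == "__main__":
        pump = SyringePump()
        for _ in range(3):  # a few repeated cycles
            actuation_cycle(pump)
    ```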

    Yuk and Zhao found that by pumping water in, they could produce fast, forceful reactions, enabling a hydrogel robot to generate a few newtons of force in one second. For perspective, other researchers have activated similar hydrogel robots by simple osmosis, letting water naturally seep into structures — a slow process that creates millinewton forces over several minutes or hours.

    Catch and release

    In experiments using several hydrogel robot designs, the team found the structures were able to withstand repeated use of up to 1,000 cycles without rupturing or tearing. They also found that each design, placed underwater against colored backgrounds, appeared almost entirely camouflaged. The group measured the acoustic and optical properties of the hydrogel robots and found them to be nearly identical to those of water, unlike rubber and other materials commonly used in soft robotics.

    In a striking demonstration of the technology, the team fabricated a hand-like robotic gripper and pumped water in and out of its “fingers” to make the hand open and close. The researchers submerged the gripper in a tank with a goldfish and showed that as the fish swam past, the gripper was strong and fast enough to close around the fish.

    “[The robot] is almost transparent, very hard to see,” Zhao says. “When you release the fish, it’s quite happy because [the robot] is soft and doesn’t damage the fish. Imagine a hard robotic hand would probably squash the fish.”

    Next, the researchers plan to identify specific applications for hydrogel robotics, as well as tailor their recipes to particular uses. For example, medical applications might not require completely transparent structures, while other applications may need certain parts of a robot to be stiffer than others.

    “We want to pinpoint a realistic application and optimize the material to achieve something impactful,” Yuk says. “To our best knowledge, this is the first demonstration of hydrogel pressure-based actuation. We are now tossing this concept out as an open question, to say, ‘Let’s play with this.’”

    This research was supported, in part, by the Office of Naval Research, the MIT Institute for Soldier Nanotechnologies, and the National Science Foundation.

    Wearable AI system can detect a conversation's tone

    It’s a fact of nature that a single conversation can be interpreted in very different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations extremely stressful. But what if there was a more objective way to measure and understand our interactions?

    Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES) say that they’ve gotten closer to a potential solution: an artificially intelligent, wearable system that can predict if a conversation is happy, sad, or neutral based on a person’s speech patterns and vitals.

    “Imagine if, at the end of a conversation, you could rewind it and see the moments when the people around you felt the most anxious,” says graduate student Tuka Alhanai, who co-authored a related paper with PhD candidate Mohammad Ghassemi that they will present at next week’s Association for the Advancement of Artificial Intelligence (AAAI) conference in San Francisco. “Our work is a step in this direction, suggesting that we may not be that far away from a world where people can have an AI social coach right in their pocket.”

    As a participant tells a story, the system can analyze audio, text transcriptions, and physiological signals to determine the overall tone of the story with 83 percent accuracy. Using deep-learning techniques, the system can also provide a “sentiment score” for specific five-second intervals within a conversation.
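
    As a rough illustration of that five-second scoring (not the paper’s actual model), a recording can be chopped into consecutive five-second windows and each window passed to a scorer; the scorer below is only a placeholder:

    ```python
    import numpy as np

    SAMPLE_RATE = 16_000   # assumed audio sample rate in Hz
    WINDOW_SECONDS = 5     # the five-second intervals described above

    def five_second_windows(audio):
        """Chop a mono audio signal into consecutive five-second segments."""
        step = SAMPLE_RATE * WINDOW_SECONDS
        return [audio[i:i + step] for i in range(0, len(audio) - step + 1, step)]

    def sentiment_score(segment):
        """Placeholder scorer: the real system maps audio, transcript, and
        physiological features for the segment through a trained network."""
        return float(np.tanh(segment.mean()))  # dummy value in [-1, 1]

    audio = np.random.randn(SAMPLE_RATE * 60)  # one minute of synthetic "audio"
    scores = [sentiment_score(w) for w in five_second_windows(audio)]
    print(f"{len(scores)} five-second segments scored")
    ```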

    “As far as we know, this is the first experiment that collects both physical data and speech data in a passive but robust way, even while subjects are having natural, unstructured interactions,” says Ghassemi. “Our results show that it’s possible to classify the emotional tone of conversations in real-time.”

    The researchers say that the system's performance would be further improved by having multiple people in a conversation use it on their smartwatches, creating more data to be analyzed by their algorithms. The team is keen to point out that they developed the system with privacy strongly in mind: The algorithm runs locally on a user’s device as a way of protecting personal information. (Alhanai says that a consumer version would obviously need clear protocols for getting consent from the people involved in the conversations.)

    How it works

    Many emotion-detection studies show participants “happy” and “sad” videos, or ask them to artificially act out specific emotive states. But in an effort to elicit more organic emotions, the team instead asked subjects to tell a happy or sad story of their own choosing.

    Subjects wore a Samsung Simband, a research device that captures high-resolution physiological waveforms to measure features such as movement, heart rate, blood pressure, blood flow, and skin temperature. The system also captured audio data and text transcripts to analyze the speaker’s tone, pitch, energy, and vocabulary.
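
    A hedged sketch of the kinds of features involved follows, assuming librosa for the audio side; the exact feature set and Simband processing pipeline used in the paper are not described in this article.

    ```python
    import numpy as np
    import librosa

    def audio_features(wav_path):
        """Pitch and energy summaries for one recording."""
        y, sr = librosa.load(wav_path, sr=16_000, mono=True)
        f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                fmax=librosa.note_to_hz("C7"), sr=sr)
        rms = librosa.feature.rms(y=y)[0]
        return {
            "pitch_mean": float(np.nanmean(f0)),  # average pitch in Hz
            "pitch_std": float(np.nanstd(f0)),    # pitch variability
            "energy_mean": float(rms.mean()),     # loudness proxy
        }

    def physio_features(heart_rate, skin_temp, accel):
        """Summary statistics over a window of wearable signals."""
        return {
            "hr_mean": float(np.mean(heart_rate)),
            "hr_std": float(np.std(heart_rate)),                 # cardiovascular variability
            "temp_mean": float(np.mean(skin_temp)),
            "movement": float(np.mean(np.abs(np.diff(accel)))),  # fidgeting proxy
        }
    ```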

    “The team’s usage of consumer market devices for collecting physiological data and speech data shows how close we are to having such tools in everyday devices,” says Björn Schuller, professor and chair of Complex and Intelligent Systems at the University of Passau in Germany, who was not involved in the research. “Technology could soon feel much more emotionally intelligent, or even ‘emotional’ itself.”

    After capturing 31 different conversations of several minutes each, the team trained two algorithms on the data: One classified the overall nature of a conversation as either happy or sad, while the second classified each five-second block of every conversation as positive, negative, or neutral.
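
    A minimal sketch of that two-level setup, using scikit-learn logistic regression as a stand-in for the paper’s models and assuming hypothetical precomputed feature matrices:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical precomputed features: one row per conversation, one per segment.
    conv_X = rng.standard_normal((31, 20))   # 31 conversations x 20 features
    conv_y = rng.integers(0, 2, 31)          # 0 = sad, 1 = happy
    seg_X = rng.standard_normal((500, 20))   # five-second segments
    seg_y = rng.integers(0, 3, 500)          # 0 = negative, 1 = neutral, 2 = positive

    # Classifier 1: overall conversation tone (happy vs. sad).
    conv_clf = LogisticRegression(max_iter=1000).fit(conv_X, conv_y)

    # Classifier 2: per-segment sentiment (negative / neutral / positive).
    seg_clf = LogisticRegression(max_iter=1000).fit(seg_X, seg_y)

    print(conv_clf.predict(conv_X[:1]), seg_clf.predict(seg_X[:3]))
    ```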

    Alhanai notes that, in traditional neural networks, all features about the data are provided to the algorithm at the base of the network. In contrast, her team found that they could improve performance by organizing different features at the various layers of the network.

    “The system picks up on how, for example, the sentiment in the text transcription was more abstract than the raw accelerometer data," says Alhanai. “It’s quite remarkable that a machine could approximate how we humans perceive these interactions, without significant input from us as researchers.”
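
    The general idea can be illustrated with a small PyTorch module that feeds different feature groups in at different depths rather than concatenating everything at the input layer; the layer sizes here are arbitrary and not taken from the paper.

    ```python
    import torch
    import torch.nn as nn

    class LayeredFusionNet(nn.Module):
        """Toy network: physiological signals enter at the base, audio features
        at the next layer, and more abstract text features at a deeper layer."""

        def __init__(self, n_physio, n_audio, n_text, n_classes=3):
            super().__init__()
            self.physio_block = nn.Sequential(nn.Linear(n_physio, 32), nn.ReLU())
            self.audio_block = nn.Sequential(nn.Linear(32 + n_audio, 32), nn.ReLU())
            self.text_block = nn.Sequential(nn.Linear(32 + n_text, 32), nn.ReLU())
            self.out = nn.Linear(32, n_classes)  # negative / neutral / positive logits

        def forward(self, physio, audio, text):
            h = self.physio_block(physio)
            h = self.audio_block(torch.cat([h, audio], dim=-1))
            h = self.text_block(torch.cat([h, text], dim=-1))
            return self.out(h)

    model = LayeredFusionNet(n_physio=8, n_audio=6, n_text=10)
    logits = model(torch.randn(4, 8), torch.randn(4, 6), torch.randn(4, 10))
    print(logits.shape)  # torch.Size([4, 3])
    ```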

    Results

    Indeed, the algorithm’s findings align well with what we humans might expect to observe. For instance, long pauses and monotonous vocal tones were associated with sadder stories, while more energetic, varied speech patterns were associated with happier ones. In terms of body language, sadder stories were also strongly associated with increased fidgeting and cardiovascular activity, as well as certain postures like putting one’s hands on one’s face.

    On average, the model could classify the mood of each five-second interval with an accuracy that was approximately 18 percent above chance, and a full 7.5 percent better than existing approaches.

    The algorithm is not yet reliable enough to be deployed for social coaching, but Alhanai says that they are actively working toward that goal. For future work the team plans to collect data on a much larger scale, potentially using commercial devices such as the Apple Watch that would allow them to more easily implement the system out in the world.

    “Our next step is to improve the algorithm’s emotional granularity so that it is more accurate at calling out boring, tense, and excited moments, rather than just labeling interactions as ‘positive’ or ‘negative,’” says Alhanai. “Developing technology that can take the pulse of human emotions has the potential to dramatically improve how we communicate with each other.”

    This research was made possible in part by the Samsung Strategy and Innovation Center.

