MIT Research News' Journal
 

Wednesday, June 19th, 2019

    9:25a
    Engineers 3-D print flexible mesh for ankle and knee braces

    Hearing aids, dental crowns, and limb prosthetics are some of the medical devices that can now be digitally designed and customized for individual patients, thanks to 3-D printing. However, these devices are typically designed to replace or support bones and other rigid parts of the body, and are often printed from solid, relatively inflexible material.

    Now MIT engineers have designed pliable, 3-D-printed mesh materials whose flexibility and toughness they can tune to emulate and support softer tissues such as muscles and tendons. They can tailor the intricate structures in each mesh, and they envision the tough yet stretchy fabric-like material being used as personalized, wearable supports, including ankle or knee braces, and even implantable devices, such as hernia meshes, that better match a person’s body.

    As a demonstration, the team printed a flexible mesh for use in an ankle brace. They tailored the mesh’s structure to prevent the ankle from turning inward — a common cause of injury — while allowing the joint to move freely in other directions. The researchers also fabricated a knee brace design that could conform to the knee even as it bends. And, they produced a glove with a 3-D-printed mesh sewn into its top surface, which conforms to a wearer’s knuckles, providing resistance against involuntary clenching that can occur following a stroke.

    “This work is new in that it focuses on the mechanical properties and geometries required to support soft tissues,” says Sebastian Pattinson, who conducted the research as a postdoc at MIT.

    Pattinson, now on the faculty at Cambridge University, is the lead author of a study published today in the journal Advanced Functional Materials. His MIT co-authors include Meghan Huber, Sanha Kim, Jongwoo Lee, Sarah Grunsfeld, Ricardo Roberts, Gregory Dreifus, Christoph Meier, and Lei Liu, as well as Sun Jae Professor in Mechanical Engineering Neville Hogan and associate professor of mechanical engineering A. John Hart.

    Riding collagen’s wave

    The team’s flexible meshes were inspired by the pliable, conformable nature of fabrics.

    “3-D-printed clothing and devices tend to be very bulky,” Pattinson says. “We were trying to think of how we can make 3-D-printed constructs more flexible and comfortable, like textiles and fabrics.”

    Pattinson found further inspiration in collagen, the structural protein that makes up much of the body’s soft tissues and is found in ligaments, tendons, and muscles. Under a microscope, collagen can resemble curvy, intertwined strands, similar to loosely braided elastic ribbons. When stretched, collagen initially extends easily, as the kinks in its structure straighten out. But once taut, the strands are harder to extend.

    Inspired by collagen’s molecular structure, Pattinson designed wavy patterns, which he 3-D-printed using thermoplastic polyurethane as the printing material. He then fabricated a mesh configuration to resemble stretchy yet tough, pliable fabric. The taller he designed the waves, the more the mesh could stretch under low force before stiffening, a design principle that helps tailor a mesh’s degree of flexibility and lets it mimic soft tissue.
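    The geometric idea behind that design principle can be sketched numerically: a wavy fiber stores extra arc length relative to its span, so the taller the wave, the further the fiber can straighten before it must truly elongate and stiffen. A minimal illustration (not the authors’ code), modeling one wave period as a sinusoid:

    ```python
    import numpy as np

    def slack_stretch(amplitude, wavelength, n=10_000):
        """Estimate how far a sinusoidal fiber can straighten before it
        must actually elongate: the ratio of its arc length to its span."""
        x = np.linspace(0.0, wavelength, n)
        y = amplitude * np.sin(2 * np.pi * x / wavelength)
        arc = np.sum(np.hypot(np.diff(x), np.diff(y)))  # total fiber length
        return arc / wavelength  # stretch ratio at which the fiber goes taut

    # Taller waves (larger amplitude) store more slack, so the mesh
    # stretches further at low force before stiffening.
    for a in (0.1, 0.3, 0.5):
        print(f"amplitude {a}: goes taut at {slack_stretch(a, 1.0):.2f}x its span")
    ```

    A flat fiber (amplitude zero) has no slack and stiffens immediately, which is why tuning wave height tunes the low-force stretch regime.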

    The researchers printed a long strip of the mesh and tested its support on the ankles of several healthy volunteers. For each volunteer, the team adhered a strip along the length of the outside of the ankle, in an orientation that they predicted would support the ankle if it turned inward. They then put each volunteer’s ankle into an ankle stiffness measurement robot — named, logically, Anklebot — that was developed in Hogan’s lab. The Anklebot moved their ankle in 12 different directions, and then measured the force the ankle exerted with each movement, with the mesh and without it, to understand how the mesh affected the ankle’s stiffness in different directions.

    In general, they found the mesh increased the ankle’s stiffness during inversion, while leaving it relatively unaffected as it moved in other directions.

    “The beauty of this technique lies in its simplicity and versatility. Mesh can be made on a basic desktop 3-D printer, and the mechanics can be tailored to precisely match those of soft tissue,” Hart says.

    Stiffer, cooler drapes

    The team’s ankle brace was made using relatively stretchy material. But for other applications, such as implantable hernia meshes, it might be useful to include a stiffer material that remains just as conformable. To this end, the team developed a way to incorporate stronger and stiffer fibers and threads into a pliable mesh, by printing stainless steel fibers over regions of an elastic mesh where stiffer properties would be needed, then printing a third elastic layer over the steel to sandwich the stiffer thread into the mesh.

    The combination of stiff and elastic materials can give a mesh the ability to stretch easily up to a point, after which it starts to stiffen, providing stronger support to prevent, for instance, a muscle from overstraining.

    The team also developed two other techniques to give the printed mesh an almost fabric-like quality, enabling it to conform easily to the body, even while in motion.

    “One of the reasons textiles are so flexible is that the fibers are able to move relative to each other easily,” Pattinson says. “We also wanted to mimic that capability in the 3-D-printed parts.”

    In traditional 3-D printing, a material is printed through a heated nozzle, layer by layer. When heated polymer is extruded, it bonds with the layer underneath it. Pattinson found that, once he printed a first layer, if he raised the print nozzle slightly, the material coming out of the nozzle would take a bit longer to land on the layer below, giving the material time to cool. As a result, it would be less sticky. By printing a mesh pattern in this way, Pattinson was able to create layers that, rather than being fully bonded, were free to move relative to each other, and he demonstrated this in a multilayer mesh that draped over and conformed to the shape of a golf ball.

    Finally, the team designed meshes that incorporated auxetic structures — patterns that become wider when you pull on them. For instance, they were able to print meshes, the middle of which consisted of structures that, when stretched, became wider rather than contracting as a normal mesh would. This property is useful for supporting highly curved surfaces of the body. To that end, the researchers fashioned an auxetic mesh into a potential knee brace design and found that it conformed to the joint. 

    “There’s potential to make all sorts of devices that interface with the human body,” Pattinson says. “Surgical meshes, orthoses, even cardiovascular devices like stents — you can imagine all potentially benefiting from the kinds of structures we show.”

    This research was supported in part by the National Science Foundation, the MIT-Skoltech Next Generation Program, and the Eric P. and Evelyn E. Newman Fund at MIT.

    9:38a
    From one brain scan, more information for medical artificial intelligence

    MIT researchers have devised a novel method to glean more information from images used to train machine-learning models, including those that can analyze medical scans to help diagnose and treat brain conditions.

    An active new area in medicine involves training deep-learning models to detect structural patterns in brain scans associated with neurological diseases and disorders, such as Alzheimer’s disease and multiple sclerosis. But collecting the training data is laborious: All anatomical structures in each scan must be separately outlined or hand-labeled by neurological experts. And, in some cases, such as for rare brain conditions in children, only a few scans may be available in the first place.

    In a paper presented at the recent Conference on Computer Vision and Pattern Recognition, the MIT researchers describe a system that uses a single labeled scan, along with unlabeled scans, to automatically synthesize a massive dataset of distinct training examples. The dataset can be used to better train machine-learning models to find anatomical structures in new scans — the more training data, the better those predictions.

    The crux of the work is automatically generating data for the “image segmentation” process, which partitions an image into regions of pixels that are more meaningful and easier to analyze. To do so, the system uses a convolutional neural network (CNN), a machine-learning model that’s become a powerhouse for image-processing tasks. The network analyzes a lot of unlabeled scans from different patients and different equipment to “learn” anatomical, brightness, and contrast variations. Then, it applies a random combination of those learned variations to a single labeled scan to synthesize new scans that are both realistic and accurately labeled. These newly synthesized scans are then fed into a different CNN that learns how to segment new images.

    “We’re hoping this will make image segmentation more accessible in realistic situations where you don’t have a lot of training data,” says first author Amy Zhao, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and Computer Science and Artificial Intelligence Laboratory (CSAIL). “In our approach, you can learn to mimic the variations in unlabeled scans to intelligently synthesize a large dataset to train your network.”

    There’s interest in using the system, for instance, to help train predictive-analytics models at Massachusetts General Hospital, Zhao says, where only one or two labeled scans may exist of particularly uncommon brain conditions among child patients.

    Joining Zhao on the paper are Guha Balakrishnan, a postdoc in EECS and CSAIL; EECS professors Fredo Durand and John Guttag; and senior author Adrian Dalca, who is also a faculty member in radiology at Harvard Medical School.

    The “Magic” behind the system

    Although now applied to medical imaging, the system actually started as a means to synthesize training data for a smartphone app that could identify and retrieve information about cards from the popular collectible card game, “Magic: The Gathering.” Released in the early 1990s, “Magic” has more than 20,000 unique cards — with more released every few months — that players can use to build custom playing decks.

    Zhao, an avid “Magic” player, wanted to develop a CNN-powered app that took a photo of any card with a smartphone camera and automatically pulled information such as price and rating from online card databases. “When I was picking out cards from a game store, I got tired of entering all their names into my phone and looking up ratings and combos,” Zhao says. “Wouldn’t it be awesome if I could scan them with my phone and pull up that information?”

    But she realized that’s a very tough computer-vision training task. “You’d need many photos of all 20,000 cards, under all different lighting conditions and angles. No one is going to collect that dataset,” Zhao says.

    Instead, Zhao trained a CNN on a smaller dataset of around 200 cards, with 10 distinct photos of each card, to learn how to warp a card into various positions. It computed different lighting, angles, and reflections — for when cards are placed in plastic sleeves — to synthesize realistic warped versions of any card in the dataset. It was an exciting passion project, Zhao says: “But we realized this approach was really well-suited for medical images, because this type of warping fits really well with MRIs.”

    Mind warp

    Magnetic resonance images (MRIs) are composed of three-dimensional pixels, called voxels. When segmenting MRIs, experts separate and label voxel regions based on the anatomical structure containing them. The diversity of scans, caused by variations in individual brains and equipment used, poses a challenge to using machine learning to automate this process.

    Some existing methods can synthesize training examples from labeled scans using “data augmentation,” which warps labeled voxels into different positions. But these methods require experts to hand-write various augmentation guidelines, and some synthesized scans look nothing like a realistic human brain, which may be detrimental to the learning process.

    Instead, the researchers’ system automatically learns how to synthesize realistic scans. The researchers trained their system on 100 unlabeled scans from real patients to compute spatial transformations, or anatomical correspondences from scan to scan. This produced a set of “flow fields,” which model how voxels move from one scan to another. The system also computed intensity transformations, which capture appearance variations caused by image contrast, noise, and other factors.

    In generating a new scan, the system applies a random flow field to the original labeled scan, which shifts around voxels until it structurally matches a real, unlabeled scan. Then, it overlays a random intensity transformation. Finally, the system maps the labels to the new structures, by following how the voxels moved in the flow field. In the end, the synthesized scans closely resemble the real, unlabeled scans — but with accurate labels.
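    The generation steps above can be sketched in miniature. The toy example below works in 2-D and uses random stand-ins for the flow field and intensity parameters, which in the actual system are learned from unlabeled scans; it is an illustration of the idea, not the authors’ implementation:

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    rng = np.random.default_rng(0)

    def synthesize(image, labels, flow, gain, bias):
        """Warp a labeled scan with a flow field, then alter its intensities.
        `flow` has shape (2, H, W): per-pixel displacements (dy, dx)."""
        h, w = image.shape
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        coords = np.stack([yy + flow[0], xx + flow[1]])
        # Linear interpolation for the image; nearest-neighbor for the labels,
        # so label values stay discrete and follow the same voxel motion.
        warped = map_coordinates(image, coords, order=1, mode="nearest")
        warped_labels = map_coordinates(labels, coords, order=0, mode="nearest")
        return gain * warped + bias, warped_labels

    # Toy 2-D "scan": a bright square with a matching label map.
    img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
    lab = (img > 0).astype(int)
    flow = rng.normal(scale=1.5, size=(2, 32, 32))  # stand-in for a learned flow field
    new_img, new_lab = synthesize(img, lab, flow, gain=1.2, bias=0.05)
    ```

    Because the labels are warped with the same coordinates as the image, every synthesized scan comes with an accurate segmentation for free, which is the point of the approach.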

    To test their automated segmentation accuracy, the researchers used Dice scores, which measure how well one 3-D shape fits over another, on a scale of 0 to 1. They compared their system to traditional segmentation methods — manual and automated — on 30 different brain structures across 100 held-out test scans. Large structures were comparably accurate among all the methods. But the researchers’ system outperformed all other approaches on smaller structures, such as the hippocampus, which occupies only about 0.6 percent of a brain, by volume.
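    The Dice score used here is simple to compute: for two binary masks it is twice the overlap divided by the total size of both masks. A minimal version:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

    x = np.zeros((10, 10), bool); x[2:8, 2:8] = True   # 36 pixels
    y = np.zeros((10, 10), bool); y[4:8, 2:8] = True   # 24 pixels, all inside x
    print(dice(x, y))  # 2*24 / (36 + 24) = 0.8
    ```

    Small structures like the hippocampus are where Dice is least forgiving: when a region occupies only a sliver of the volume, a few misplaced voxels cost a large fraction of the overlap.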

    “That shows that our method improves over other methods, especially as you get into the smaller structures, which can be very important in understanding disease,” Zhao says. “And we did that while only needing a single hand-labeled scan.”

    In a nod to the work’s “Magic” roots, the code is publicly available on GitHub under the name of one of the game’s cards, “Brainstorm.”

    10:06a
    3 Questions: An experiment illuminates the value of public transportation

    Urban residents hear a lot about public transit fares, but to what extent do transportation costs really affect riders? A group of urban studies researchers at MIT has conducted a new experiment — a randomized, controlled trial — on Boston’s MBTA system showing that if low-income people are offered a 50 percent fare discount, their ridership increases by over 30 percent. A new white paper with the results was issued this month. The paper’s lead author is MIT PhD student Jeffrey Rosenblum; his co-authors are Department of Urban Studies and Planning professors Jinhua Zhao, Mariana Arcaya, Justin Steil, and Chris Zegras. MIT News spoke to Rosenblum about the results.

    Q: What was the impetus for the study, and what did you find?

    A: The idea was to look at travel behavior of riders. One of the things we don’t ordinarily have access to is how low-income people use the system. We can track seniors because seniors have a special card. But for low-income people, a lot of the information had previously been anecdotal.

    There were hardly any studies to help me understand how low-income riders would respond to fare decreases. When I have to look back to a 1964 study from New York City as one of the prime examples that looked at low-income riders, you know there’s some missing data.

    There have been two hypotheses in this area. One is that low-income people have no choice but to use public transit, so they have to take it out of their food budget or child budget. The other is that they do change behavior when fares decrease. The second is what we ended up finding: Low-income people did take significantly more trips, about a third more, based on the analysis. This suggests that for the low-income people in the study group, who were selected from food stamp recipients, affordability was a big factor. So that’s really the take-home message.

    Q: There is another layer to the results, though, which is that the increased use of public transit was strongly linked to certain purposes, such as using social services.

    A: This gets into an important concept in transportation. No one gets on a bus to get on a bus. They want to go someplace. In the past, transit systems really just cared about the number of people using the system, and they didn’t really care about the purposes of those trips.

    In most categories of trip purpose, we didn’t see much difference, but in the social services category, we did. Usually when people think of public transportation, they think of commuting to work. And when people think about low-income riders, they don’t think about other really important things in life. Low-income people also spend more time on public transit doing errands, visiting family, as well as going to social services and health care providers.

    Q: So this is not just a matter of household finance, since it seems like lower fares for low-income people have a kind of multiplier effect, allowing them to access other goods, right?

    A: Yes. And any decisions related to implementation and the impact on the system would be as important as trying to find the money to fund such a program. Whenever studies like this get done, the implication is that this is an important issue to address.

    But then one question is: Who is going to pay for it, and how? And the second is: Who would administer it? One option would be just to say the MBTA has to do it all. A more creative option would be to incorporate it into an existing government program, like MassHealth, or SNAP, the food stamps program, where those agencies already have a whole customer-service system set up, a database of low-income people, and are already issuing them cards. Imagine if a low-income person had one card, with a debit card for food stamps, the MassHealth information, and a Charlie Card [an MBTA metro card] chip embedded in it. That’s where government efficiency counts. The technology is there, but the lack of interagency coordination is a significant barrier.

    11:59p
    Spotting objects amid clutter

    A new MIT-developed technique enables robots to quickly identify objects hidden in a three-dimensional cloud of data, reminiscent of how some people can make sense of a densely patterned “Magic Eye” image if they observe it in just the right way.

    Robots typically “see” their environment through sensors that collect and translate a visual scene into a matrix of dots. Think of the world of, well, “The Matrix,” except that the 1s and 0s seen by the fictional character Neo are replaced by dots — lots of dots — whose patterns and densities outline the objects in a particular scene.

    Conventional techniques that try to pick out objects from such clouds of dots, or point clouds, can do so with either speed or accuracy, but not both.

    With their new technique, the researchers say a robot can accurately pick out an object, such as a small animal, that is otherwise obscured within a dense cloud of dots, within seconds of receiving the visual data. The team says the technique can be used to improve a host of situations in which machine perception must be both speedy and accurate, including driverless cars and robotic assistants in the factory and the home.

    “The surprising thing about this work is, if I ask you to find a bunny in this cloud of thousands of points, there’s no way you could do that,” says Luca Carlone, assistant professor of aeronautics and astronautics and a member of MIT’s Laboratory for Information and Decision Systems (LIDS). “But our algorithm is able to see the object through all this clutter. So we’re getting to a level of superhuman performance in localizing objects.”

    Carlone and graduate student Heng Yang will present details of the technique later this month at the Robotics: Science and Systems conference in Germany.

    “Failing without knowing”

    Robots currently attempt to identify objects in a point cloud by comparing a template object — a 3-D dot representation of an object, such as a rabbit — with a point cloud representation of the real world that may contain that object. The template image includes “features,” or collections of dots that indicate characteristic curvatures or angles of that object, such as the bunny’s ear or tail. Existing algorithms first extract similar features from the real-life point cloud, then attempt to match those features with the template’s features, and ultimately rotate and align the features to the template to determine if the point cloud contains the object in question.

    But the point cloud data that streams into a robot’s sensor invariably includes errors, in the form of dots that are in the wrong position or incorrectly spaced, which can significantly confuse the process of feature extraction and matching. As a consequence, robots can make a huge number of wrong associations, or what researchers call “outliers,” between point clouds, and ultimately misidentify objects or miss them entirely.

    Carlone says state-of-the-art algorithms are able to sift the bad associations from the good once features have been matched, but they do so in “exponential time,” meaning that even a cluster of processing-heavy computers, sifting through dense point cloud data with existing algorithms, would not be able to solve the problem in a reasonable time. Such techniques, while accurate, are impractical for analyzing larger, real-life datasets containing dense point clouds.

    Other algorithms that can quickly identify features and associations do so hastily, creating a huge number of outliers or misdetections in the process, without being aware of these errors.

    “That’s terrible if this is running on a self-driving car, or any safety-critical application,” Carlone says. “Failing without knowing you’re failing is the worst thing an algorithm can do.”

    A relaxed view

    Yang and Carlone instead devised a technique that prunes away outliers in “polynomial time,” meaning that it can do so quickly, even for increasingly dense clouds of dots. The technique can thus quickly and accurately identify objects hidden in cluttered scenes.

    The MIT-developed technique quickly and smoothly matches objects to those hidden in dense point clouds (left), versus existing techniques (right) that produce incorrect, disjointed matches. Gif: Courtesy of the researchers

    The researchers first used conventional techniques to extract features of a template object from a point cloud. They then developed a three-step process to match the size, position, and orientation of the object in a point cloud with the template object, while simultaneously identifying good from bad feature associations.

    The team developed an “adaptive voting scheme” algorithm to prune outliers and match an object’s size and position. For size, the algorithm makes associations between template and point cloud features, then compares the relative distance between features in a template and corresponding features in the point cloud. If, say, the distance between two features in the point cloud is five times that of the corresponding points in the template, the algorithm assigns a “vote” to the hypothesis that the object is five times larger than the template object.

    The algorithm does this for every feature association. Then, the algorithm selects those associations that fall under the size hypothesis with the most votes, and identifies those as the correct associations, while pruning away the others.  In this way, the technique simultaneously reveals the correct associations and the relative size of the object represented by those associations. The same process is used to determine the object’s position.  
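    The scale-voting step described above can be sketched as follows. This is a simplified illustration of the idea, with hypothetical helper names, not the authors’ implementation: each pair of putative correspondences votes for a scale equal to the ratio of its point-cloud distance to its template distance, and associations outside the most-voted bin are pruned as outliers:

    ```python
    import numpy as np
    from itertools import combinations

    def vote_scale(template_pts, cloud_pts, bins=50):
        """Vote on the object's scale from putative feature associations.
        template_pts[i] is assumed to correspond to cloud_pts[i]."""
        pairs = list(combinations(range(len(template_pts)), 2))
        ratios = []
        for i, j in pairs:
            dt = np.linalg.norm(template_pts[i] - template_pts[j])
            dc = np.linalg.norm(cloud_pts[i] - cloud_pts[j])
            ratios.append(dc / dt if dt > 0 else np.nan)
        ratios = np.array(ratios)
        # The most-voted histogram bin is the scale hypothesis; pairs whose
        # ratio falls outside it are treated as outlier associations.
        hist, edges = np.histogram(ratios[~np.isnan(ratios)], bins=bins)
        k = hist.argmax()
        lo, hi = edges[k], edges[k + 1]
        inliers = [p for p, r in zip(pairs, ratios) if lo <= r <= hi]
        return (lo + hi) / 2, inliers

    # Template points, and a cloud that is the template scaled 5x,
    # with one corrupted (outlier) correspondence.
    rng = np.random.default_rng(1)
    tpl = rng.normal(size=(20, 3))
    cloud = 5.0 * tpl
    cloud[0] += 2.0  # bad association
    scale, inliers = vote_scale(tpl, cloud)
    ```

    Every pair involving the corrupted point votes for a different, inconsistent ratio, so those votes scatter across bins while the consistent pairs pile up at the true scale, which is what makes the pruning work.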

    The researchers developed a separate algorithm for rotation, which finds the orientation of the template object in three-dimensional space.

    Doing this is an incredibly tricky computational task. Imagine holding a mug and trying to tilt it just so, to match a blurry image of something that might be that same mug. There are any number of angles you could tilt that mug, and each of those angles has a certain likelihood of matching the blurry image.

    Existing techniques handle this problem by considering each possible tilt or rotation of the object as a “cost” — the lower the cost, the more likely that that rotation creates an accurate match between features. Each rotation and associated cost is represented in a topographic map of sorts, made up of multiple hills and valleys, with lower elevations associated with lower cost.

    But Carlone says this can easily confuse an algorithm, especially if there are multiple valleys and no discernible lowest point representing the true, exact match between a particular rotation of an object and the object in a point cloud. Instead, the team developed a “convex relaxation” algorithm that simplifies the topographic map, with one single valley representing the optimal rotation. In this way, the algorithm is able to quickly identify the rotation that defines the orientation of the object in the point cloud.

    With their approach, the team was able to quickly and accurately identify three different objects — a bunny, a dragon, and a Buddha — hidden in point clouds of increasing density. They were also able to identify objects in real-life scenes, including a living room, in which the algorithm was quickly able to spot a cereal box and a baseball hat.

    Carlone says that because the approach is able to work in “polynomial time,” it can be easily scaled up to analyze even denser point clouds, resembling the complexity of sensor data for driverless cars, for example.

    “Navigation, collaborative manufacturing, domestic robots, search and rescue, and self-driving cars is where we hope to make an impact,” Carlone says.

    This research was supported in part by the Army Research Laboratory, the Office of Naval Research, and the Google Daydream Research Program.

