MIT Research News' Journal
 

Wednesday, October 2nd, 2019

    10:00a
    Using algorithms to build a map of the placenta

    The placenta is one of the most vital organs during pregnancy. If it’s not working correctly, the consequences can be dire: Children may experience stunted growth and neurological disorders, and their mothers are at increased risk of blood-pressure conditions like preeclampsia, which can impair kidney and liver function.

    Unfortunately, assessing placental health is difficult because of the limited information that can be gleaned from imaging. Traditional ultrasounds are cheap, portable, and easy to perform, but they can’t always capture enough detail. This has spurred researchers to explore the potential of magnetic resonance imaging (MRI). Even with MRIs, though, the curved surface of the uterus makes images difficult to interpret.

    This problem got the attention of a team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), who wondered whether the placenta’s scrunched shape could be flattened out using some fancy geometry.

    Later this month, they’re publishing a paper showing that it can. Their new algorithm unfolds images from MRI scans to better visualize the organ. For example, their images more clearly show the “cotyledons,” circular structures that allow for the exchange of nutrients between the mother and her developing child or children. Being able to visualize such structures could allow doctors to diagnose and treat placental issues much earlier in the pregnancy. 

    “The idea is to unfold the image of the placenta while it’s in the body, so that it looks similar to how doctors are used to seeing it after delivery,” says PhD student Mazdak Abulnaga, lead author of the new paper with MIT professors Justin Solomon and Polina Golland. “While this is just a first step, we think an approach like this has the potential to become a standard imaging method for radiologists.” 

    Golland says that the algorithm could also be used in clinical research to find specific biomarkers associated with poor placental health. Such research could help radiologists save time and more accurately locate problem areas without having to examine many different slices of the placenta.

    Chris Kroenke, an associate professor at Oregon Health and Science University, says that the project opens up many new possibilities for monitoring placental health. 

    “The biological processes that underlie cotyledon patterning are not completely understood, nor is it known whether a standard pattern should be expected for a given population,” says Kroenke, who was not involved in the paper. “The tools provided by this work will certainly aid researchers to address these questions in the future.”

    Abulnaga, Solomon, and Golland co-wrote the paper with former CSAIL postdoc Mikhail Bessmeltsev and their collaborators, Esra Abaci Turk and P. Ellen Grant of Boston Children’s Hospital (BCH). Grant is the director of BCH’s Fetal-Neonatal Neuroimaging and Development Science Center, and a professor of radiology and pediatrics at Harvard Medical School. The team also worked closely with collaborators at Massachusetts General Hospital (MGH) and MIT Professor Elfar Adalsteinsson.

    The paper will be presented Oct. 14 in Shenzhen, China, at the International Conference on Medical Image Computing and Computer-Assisted Intervention. 

    The team’s algorithm first models the placenta’s shape by subdividing it into thousands of tiny pyramids, or tetrahedra. This serves as an efficient representation that computers can use to manipulate the shape. The algorithm then arranges those pyramids into a template that resembles the flattened shape a placenta takes once it’s out of the body. (It does this by essentially moving the corners of the pyramids on the surface of the placenta to match the two parallel planes of the template and letting the rest fill in the new shape.)

    The model has to make a tradeoff between the pyramids matching the shape of the template and minimizing the amount of distortion. The team showed the system can ultimately achieve accuracy at the scale of less than one voxel (a 3-D pixel). 
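
    To make the geometry concrete, here is a minimal sketch of what a volumetric flattening step of this kind might look like. It is not the authors’ algorithm: the paper’s distortion measure is replaced by a simple graph-Laplacian (harmonic) energy, and the tetrahedral mesh and the labels for the two placental surfaces (the hypothetical `side_a` and `side_b` arguments) are assumed to be given.

```python
# Simplified volumetric flattening sketch: pin the two placental surfaces onto
# parallel planes and let the interior vertices relax. A plain graph-Laplacian
# energy stands in for the paper's distortion measure.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def flatten_to_slab(verts, tets, side_a, side_b, thickness):
    """Map a tetrahedral mesh (verts: (n, 3), tets: (m, 4)) into a flat slab.

    side_a, side_b : vertex indices of the two boundary surfaces (assumed given)
    thickness      : distance between the two parallel template planes
    """
    n = len(verts)

    # Uniform-weight graph Laplacian over tetrahedron edges (the real method
    # uses a geometry-aware distortion energy instead of uniform weights).
    rows, cols = [], []
    for a, b, c, d in tets:
        for i, j in [(a, b), (a, c), (a, d), (b, c), (b, d), (c, d)]:
            rows += [i, j]
            cols += [j, i]
    W = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n)).tocsr()
    W.data[:] = 1.0                                   # collapse duplicate edges
    L = (sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W).tocsr()

    # Pin the two surfaces onto the planes z = 0 and z = thickness, keeping
    # their original x, y coordinates as the in-plane positions.
    pinned = np.concatenate([side_a, side_b])
    flat = verts.astype(float)
    flat[side_a, 2] = 0.0
    flat[side_b, 2] = thickness

    # Solve the Laplace equation for the free vertices ("letting the rest
    # fill in the new shape") with the pinned vertices as boundary conditions.
    free = np.setdiff1d(np.arange(n), pinned)
    rhs = -L[free][:, pinned] @ flat[pinned]
    flat[free] = spla.splu(L[free][:, free].tocsc()).solve(rhs)
    return flat
```

    Pinning the two surfaces to parallel planes and solving for the interior mirrors the “move the corners, let the rest fill in” description above; the actual method trades that template matching off against its distortion penalty.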

    The project is far from the first aimed at improving medical imaging by geometrically manipulating the images themselves. There have been recent efforts to unfold scans of ribs, and researchers have also spent many years developing ways to flatten images of the brain’s cerebral cortex to better visualize the areas between its folds.

    Work involving the womb, meanwhile, is much newer. Previous approaches to this problem focused on flattening different layers of the placenta separately. The team says the new volumetric method yields more consistency and less distortion because it maps the whole 3-D placenta at once, more closely modeling the physical unfolding process.

    “The team’s work provides a very elegant tool to address the issue of the placenta’s irregular shape being difficult to image,” says Kroenke. 

    As a next step, the team hopes to work with MGH and BCH to directly compare in-utero images with ones of the same placentas post-birth. Because the placenta loses fluid and changes shape during the birth process, this will require using a special chamber designed by MGH and BCH where researchers can put the placenta directly after the birth.

    The source code for the project is available on GitHub. The work was supported in part by the National Institute of Child Health and Human Development, the National Institute of Biomedical Imaging and Bioengineering, the National Science Foundation, the U.S. Air Force, and the Natural Sciences and Engineering Research Council of Canada.

    11:59p
    System helps smart devices find their position

    A new system developed by researchers at MIT and elsewhere helps networks of smart devices cooperate to find their positions in environments where GPS usually fails.

    Today, the “internet of things” concept is fairly well-known: Billions of interconnected sensors around the world — embedded in everyday objects, equipment, and vehicles, or worn by humans or animals — collect and share data for a range of applications.

    An emerging concept, the “localization of things,” enables those devices to sense and communicate their position. This capability could be helpful in supply chain monitoring, autonomous navigation, highly connected smart cities, and even forming a real-time “living map” of the world. Experts project that the localization-of-things market will grow to $128 billion by 2027.

    The concept hinges on precise localization techniques. Traditional methods leverage GPS satellites or wireless signals shared between devices to establish their relative distances and positions from each other. But there’s a snag: Accuracy suffers greatly in places with reflective surfaces, obstructions, or other interfering signals, such as inside buildings, in underground tunnels, or in “urban canyons” where tall buildings flank both sides of a street.

    Researchers from MIT, the University of Ferrara, the Basque Center for Applied Mathematics (BCAM), and the University of Southern California have developed a system that captures location information even in these noisy, GPS-denied areas. A paper describing the system appears in the Proceedings of the IEEE.

    When devices in a network, called “nodes,” communicate wirelessly in a signal-obstructing, or “harsh,” environment, the system fuses various types of positional information from dodgy wireless signals exchanged between the nodes, as well as from digital maps and inertial data. In doing so, each node considers information associated with all of its possible locations — called “soft information” — in relation to that of all other nodes. The system leverages machine learning and dimensionality-reduction techniques to determine possible positions from measurements and contextual data, and then uses that information to pinpoint the node’s position.

    In simulations of harsh scenarios, the system performed significantly better than traditional methods, consistently operating near the theoretical limit for localization accuracy. Moreover, as the wireless environment grew harsher, traditional systems’ accuracy dropped dramatically while the new soft information-based system held steady.

    “When the tough gets tougher, our system keeps localization accurate,” says Moe Win, a professor in the Department of Aeronautics and Astronautics and the Laboratory for Information and Decision Systems (LIDS), and head of the Wireless Information and Network Sciences Laboratory. “In harsh wireless environments, you have reflections and echoes that make it far more difficult to get accurate location information. Places like the Stata Center [on the MIT campus] are particularly challenging, because there are surfaces reflecting signals everywhere. Our soft information method is particularly robust in such harsh wireless environments.”

    Joining Win on the paper are: Andrea Conti of the University of Ferrara; Santiago Mazuelas of BCAM; Stefania Bartoletti of the University of Ferrara; and William C. Lindsey of the University of Southern California.

    Capturing “soft information”

    In network localization, nodes are generally referred to as anchors or agents. Anchors are nodes with known positions, such as GPS satellites or wireless base stations. Agents are nodes that have unknown positions — such as autonomous cars, smartphones, or wearables.

    To localize, agents can use anchors as reference points, or they can share information with other agents to orient themselves. That involves transmitting wireless signals, which arrive at the receiver carrying positional information. The power, angle, and time-of-arrival of the received waveform, for instance, correlate to the distance and orientation between nodes.

    Traditional localization methods extract one feature of the signal to estimate a single value for, say, the distance or angle between two nodes. Localization accuracy relies entirely on the accuracy of those inflexible (or “hard”) values, and accuracy has been shown to decrease drastically as environments get harsher.

    Say a node transmits a signal to another node that’s 10 meters away in a building with many reflective surfaces. The signal may bounce around and reach the receiving node at a time corresponding to 13 meters away. Traditional methods would likely assign that incorrect distance as a value.

    For the new work, the researchers decided to try using soft information for localization. The method leverages many signal features and contextual information to create a probability distribution of all possible distances, angles, and other metrics. “It’s called ‘soft information’ because we don’t make any hard choices about the values,” Conti says.

    The system takes many sample measurements of signal features, including its power, angle, and time of flight. Contextual data come from external sources, such as digital maps and models that capture and predict how the node moves.

    Back to the previous example: Based on the initial measurement of the signal’s time of arrival, the system still assigns a high probability that the nodes are 13 meters apart. But it assigns a small possibility that they’re 10 meters apart, based on some delay or power loss of the signal. As the system fuses all other information from surrounding nodes, it updates the likelihood for each possible value. For instance, it could ping a map and see that the room’s layout shows it’s highly unlikely both nodes are 13 meters apart. Combining all the updated information, it decides the node is far more likely to be in the position that is 10 meters away.

    “In the end, keeping that low-probability value matters,” Win says. “Instead of giving a definite value, I’m telling you I’m really confident that you’re 13 meters away, but there’s a smaller possibility you’re also closer. This gives additional information that benefits significantly in determining the positions of the nodes.”
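
    As a rough numerical illustration of the difference, consider a small sketch of the 10-meter/13-meter scenario above. It is not the paper’s algorithm: the likelihood shapes, the map constraint, and all of the numbers are assumptions made purely for this example.

```python
# Toy "hard" vs. "soft" range estimation for the 10 m / 13 m scenario above.
# All models and numbers here are illustrative assumptions.
import numpy as np

C = 3e8                                  # speed of light, m/s
d = np.linspace(0.0, 20.0, 2001)         # candidate inter-node distances, m

# Hard approach: one time-of-arrival sample becomes one distance value.
toa = 13.0 / C                           # multipath has delayed the signal
hard_estimate = C * toa                  # -> 13 m, silently wrong

# Soft approach: keep a likelihood over every candidate distance.
# Time of arrival: multipath can only add delay, so a 13 m reading puts most
# weight at 13 m but spreads some toward shorter true distances.
excess = C * toa - d
lik_toa = np.where(excess >= 0,
                   np.exp(-excess / 6.0),               # plausible extra delays
                   np.exp(-0.5 * (excess / 0.5) ** 2))  # rare "early" readings

# Received power: a broad likelihood centred near 10 m.
lik_power = np.exp(-0.5 * ((d - 10.0) / 2.0) ** 2)

# Contextual information: the floor plan rules out separations beyond 12 m.
map_prior = (d <= 12.0).astype(float)

# Fuse everything and normalize over the distance grid.
posterior = lik_toa * lik_power * map_prior
posterior /= posterior.sum()

print(f"hard estimate:         {hard_estimate:.1f} m")
print(f"fused posterior mode:  {d[np.argmax(posterior)]:.1f} m")
```

    Keeping the full distribution, rather than committing to the 13-meter reading, is what lets the map and the power measurement pull the estimate back toward the true, shorter separation.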

    Reducing complexity

    Extracting many features from signals, however, produces data with large dimensions that can be too complex and inefficient for the system to handle. To improve efficiency, the researchers compressed all of the signal data into a lower-dimensional, easily computable space.

    To do so, they identified the aspects of the received waveforms that are the most and least useful for pinpointing location using “principal component analysis,” a technique that keeps the most useful aspects of multidimensional datasets and discards the rest, creating a dataset with reduced dimensions. If received waveforms contain 100 sample measurements each, the technique might reduce that number to, say, eight.
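
    In code, that reduction step might look like the sketch below, which performs principal component analysis via a singular value decomposition; the waveform matrix is random placeholder data, and the choice of eight components simply follows the example above.

```python
# Minimal PCA sketch: project 100-sample waveforms onto 8 principal components.
import numpy as np

rng = np.random.default_rng(0)
waveforms = rng.standard_normal((500, 100))   # 500 received waveforms,
                                              # 100 samples each (placeholder data)

def pca_reduce(X, k):
    """Project each row of X onto its top-k principal components."""
    X_centered = X - X.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:k].T              # (n_samples, k) reduced features

features = pca_reduce(waveforms, k=8)
print(features.shape)                          # (500, 8)
```

    Running an SVD on centered data is the standard way to obtain the principal axes; in practice the axes would presumably be learned from waveforms collected in the environment of interest rather than from synthetic data.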

    A final innovation was using machine-learning techniques to learn a statistical model describing possible positions from measurements and contextual data. That model runs in the background to capture how signal bouncing may affect measurements, helping to further refine the system’s accuracy.
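
    The article does not spell out the learned model, but one simple stand-in is to fit, from measurements taken at known positions, a per-location Gaussian describing how the environment shapes the reduced features, and then evaluate it as a likelihood (that is, as soft information) for new measurements. Everything below — the grid of cells, the training data, and the diagonal-Gaussian model — is a hypothetical sketch, not the authors’ method.

```python
# Hypothetical background model: learn, per location cell, how the environment
# distorts the reduced features, then score new measurements against each cell.
import numpy as np

rng = np.random.default_rng(1)

# Assumed training set: reduced 8-D features measured at known positions on a
# coarse grid of cells covering the environment (placeholder data).
n_cells, n_train_per_cell, n_feat = 25, 40, 8
train_feats = rng.standard_normal((n_cells, n_train_per_cell, n_feat))

# Fit one Gaussian per cell (mean + diagonal variance) capturing how
# reflections and obstructions shape the measurements there.
means = train_feats.mean(axis=1)                       # (n_cells, n_feat)
variances = train_feats.var(axis=1) + 1e-6             # (n_cells, n_feat)

def position_log_likelihood(feature):
    """Log-likelihood of one new feature vector under each cell's model."""
    diff2 = (feature - means) ** 2 / variances
    return -0.5 * (diff2 + np.log(2 * np.pi * variances)).sum(axis=1)

new_measurement = rng.standard_normal(n_feat)
log_lik = position_log_likelihood(new_measurement)     # soft information over cells
print("most likely cell:", int(np.argmax(log_lik)))
```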

    The researchers are now designing ways to use less computation power to work with resource-strapped nodes that can’t transmit or compute all necessary information. They’re also working on bringing the system to “device-free” localization, where some of the nodes can’t or won’t share information. This will use information about how the signals are backscattered off these nodes, so other nodes know they exist and where they are located.

