MIT Research News' Journal
 

Tuesday, April 16th, 2019

    12:00a
    A novel data-compression technique for faster computer programs

    A novel technique developed by MIT researchers rethinks hardware data compression to free up more memory used by computers and mobile devices, allowing them to run faster and perform more tasks simultaneously.

    Data compression leverages redundant data to free up storage capacity, boost computing speeds, and provide other perks. In current computer systems, accessing main memory is very expensive compared to actual computation. Because of this, using data compression in the memory helps improve performance, as it reduces the frequency and amount of data programs need to fetch from main memory.

    Memory in modern computers manages and transfers data in fixed-size chunks, on which traditional compression techniques must operate. Software, however, doesn’t naturally store its data in fixed-size chunks. Instead, it uses “objects,” data structures that contain various types of data and have variable sizes. Therefore, traditional hardware compression techniques handle objects poorly.

    In a paper being presented at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems this week, the MIT researchers describe the first approach to compress objects across the memory hierarchy. This reduces memory usage while improving performance and efficiency.

    Programmers could benefit from this technique when programming in any modern programming language — such as Java, Python, and Go — that stores and manages data in objects, without changing their code. On their end, consumers would see computers that can run much faster or can run many more apps at the same speeds. Because each application consumes less memory, it runs faster, so a device can support more applications within its allotted memory.

    In experiments using a modified Java virtual machine, the technique compressed twice as much data and reduced memory usage by half over traditional cache-based methods.

    “The motivation was trying to come up with a new memory hierarchy that could do object-based compression, instead of cache-line compression, because that’s how most modern programming languages manage data,” says first author Po-An Tsai, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

    “All computer systems would benefit from this,” adds co-author Daniel Sanchez, a professor of computer science and electrical engineering, and a researcher at CSAIL. “Programs become faster because they stop being bottlenecked by memory bandwidth.”

    The researchers built on their prior work that restructures the memory architecture to directly manipulate objects. Traditional architectures store data in blocks in a hierarchy of progressively larger and slower memories, called “caches.” Recently accessed blocks rise to the smaller, faster caches, while older blocks are moved to slower and larger caches, eventually ending back in main memory. While this organization is flexible, it is costly: To access memory, each cache needs to search for the address among its contents.

    “Because the natural unit of data management in modern programming languages is objects, why not just make a memory hierarchy that deals with objects?” Sanchez says.

    In a paper published last October, the researchers detailed a system called Hotpads that stores entire objects, tightly packed into hierarchical levels, or “pads.” These levels reside entirely on efficient, on-chip, directly addressed memories — with no sophisticated searches required.

    Programs then directly reference the location of all objects across the hierarchy of pads. Newly allocated and recently referenced objects, and the objects they point to, stay in the faster level. When the faster level fills, it runs an “eviction” process that keeps recently referenced objects but kicks down older objects to slower levels and recycles objects that are no longer useful, to free up space. Pointers are then updated in each object to point to the new locations of all moved objects. In this way, programs can access objects much more cheaply than searching through cache levels.
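
    In pseudocode form, the eviction pass might look something like the sketch below (a simplified, hypothetical Python model; Hotpads itself is a hardware design, and the names here are illustrative):

        # Minimal sketch of a Hotpads-style eviction pass. Each "pad" is one
        # level of the object hierarchy; index 0 is the fastest.

        class Pad:
            def __init__(self):
                self.objects = {}            # object id -> object payload
                self.recently_used = set()   # ids referenced since last eviction

        def evict(pads, level, live_ids):
            """Demote cold objects from pads[level] one level down.

            live_ids is the set of object ids still reachable by the program,
            so unreachable objects are recycled instead of being copied down.
            """
            pad = pads[level]
            for oid in list(pad.objects):
                if oid in pad.recently_used:
                    continue                            # hot: keep at this level
                obj = pad.objects.pop(oid)
                if oid in live_ids and level + 1 < len(pads):
                    pads[level + 1].objects[oid] = obj  # move to a slower pad
                # else: the object is garbage and its space is reclaimed
            pad.recently_used.clear()
            # A real implementation would now rewrite pointers in surviving
            # objects to the moved objects' new locations.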

    For their new work, the researchers designed a technique, called “Zippads,” that leverages the Hotpads architecture to compress objects. When objects first start at the faster level, they’re uncompressed. But when they’re evicted to slower levels, they’re all compressed. Pointers in all objects across levels then point to those compressed objects, which makes them easy to recall back to the faster levels and allows them to be stored more compactly than with prior techniques.

    A compression algorithm then leverages redundancy across objects efficiently. This technique uncovers more compression opportunities than previous techniques, which were limited to finding redundancy within each fixed-size block. The algorithm first picks a few representative objects as “base” objects. Then, for each new object, it stores only the data that differs between that object and its representative base object.
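
    As a rough illustration of that base-plus-delta idea, here is a minimal sketch (assuming fixed-size byte-string objects and a brute-force base search; the paper’s actual algorithm and object layout differ):

        # Toy sketch of cross-object base-delta compression. Objects are
        # modeled as equal-length byte strings; real Zippads objects vary in
        # size, and the base-selection heuristic is more sophisticated.

        def compress(obj, bases):
            """Encode obj as (base index, list of byte-level differences)."""
            best_idx, best_delta = 0, None
            for i, base in enumerate(bases):
                delta = [(j, b) for j, (a, b) in enumerate(zip(base, obj)) if a != b]
                if best_delta is None or len(delta) < len(best_delta):
                    best_idx, best_delta = i, delta
            return best_idx, best_delta

        def decompress(base_idx, delta, bases):
            out = bytearray(bases[base_idx])
            for j, b in delta:
                out[j] = b                    # apply stored differences
            return bytes(out)

        bases = [b"\x00\x10\x20\x30", b"\xff\xff\x00\x00"]
        idx, delta = compress(b"\x00\x10\x21\x30", bases)   # one byte differs
        assert decompress(idx, delta, bases) == b"\x00\x10\x21\x30"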

    Brandon Lucia, an assistant professor of electrical and computer engineering at Carnegie Mellon University, praises the work for leveraging features of object-oriented programming languages to better compress memory. “Abstractions like object-oriented programming are added to a system to make programming simpler, but often introduce a cost in the performance or efficiency of the system,” he says. “The interesting thing about this work is that it uses the existing object abstraction as a way of making memory compression more effective, in turn making the system faster and more efficient with novel computer architecture features.”

    12:00a
    The fluid that feeds tumor cells

    Before being tested in animals or humans, most cancer drugs are evaluated in tumor cells grown in a lab dish. However, in recent years, there has been a growing realization that the environment in which these cells are grown does not accurately mimic the natural environment of a tumor, and that this discrepancy could produce inaccurate results.

    In a new study, MIT biologists analyzed the composition of the interstitial fluid that normally surrounds pancreatic tumors, and found that its nutrient composition is different from that of the culture medium normally used to grow cancer cells. It also differs from blood, which feeds the interstitial fluid and removes waste products.

    The findings suggest that growing cancer cells in a culture medium more similar to this fluid could help researchers better predict how experimental drugs will affect cancer cells, says Matthew Vander Heiden, an associate professor of biology at MIT and a member of the Koch Institute for Integrative Cancer Research.

    “It’s kind of an obvious statement that the tumor environment is important, but I think in cancer research the pendulum had swung so far toward genes, people tended to forget that,” says Vander Heiden, one of the senior authors of the study.

    Alex Muir, a former Koch Institute postdoc who is now an assistant professor at the University of Chicago, is also a senior author of the paper, which appears in the April 16 edition of the journal eLife. The lead author of the study is Mark Sullivan, an MIT graduate student.

    Environment matters

    Scientists have long known that cancer cells metabolize nutrients differently than most other cells. This alternative strategy helps them to generate the building blocks they need to continue growing and dividing, forming new cancer cells. In recent years, scientists have sought to develop drugs that interfere with these metabolic processes, and one such drug was approved to treat leukemia in 2017.

    An important step in developing such drugs is to test them in cancer cells grown in a lab dish. The growth medium typically used to grow these cells includes carbon sources (such as glucose), nitrogen, and other nutrients. However, in the past few years, Vander Heiden’s lab has found that cancer cells grown in this medium respond differently to drugs than they do in mouse models of cancer.

    David Sabatini, a member of the Whitehead Institute and professor of biology at MIT, has also found that drugs affect cancer cells differently if they are grown in a medium that resembles the nutrient composition of human plasma, instead of the traditional growth medium.

    “That work, and similar results from a couple of other groups around the world, suggested that environment matters a lot,” Vander Heiden says. “It really was a wake-up call for us that to really know how to find the dependencies of cancer, we have to get the environment right.”

    To that end, the MIT team decided to investigate the composition of interstitial fluid, which bathes the tissue and carries nutrients that diffuse from blood flowing through the capillaries. Its composition is not identical to that of blood, and in tumors, it can be very different because tumors often have poor connections to the blood supply.

    The researchers chose to focus on pancreatic cancer in part because it is known to be particularly nutrient-deprived. After isolating interstitial fluid from pancreatic tumors in mice, the researchers used mass spectrometry to measure the concentrations of more than 100 different nutrients, and discovered that the composition of the interstitial fluid is different from that of blood (and from that of the culture medium normally used to grow cells). Several of the nutrients that the researchers found to be depleted in tumor interstitial fluid are amino acids that are important for immune cell function, including arginine, tryptophan, and cystine.

    Not all nutrients were depleted in the interstitial fluid — some were more plentiful, including the amino acids glycine and glutamate, which are known to be produced by some cancer cells.

    Location, location, location

    The researchers also compared tumors growing in the pancreas and the lungs, and found that the composition of the interstitial fluid can vary based on a tumor’s location in the body and on the tissue where the tumor originated. They also found slight differences between the fluid surrounding tumors that grew in the same location but had different genetic makeup; however, the genetic factors tested did not have as big an impact as the tumor location.

    “That probably says that what determines what nutrients are in the environment is heavily driven by interactions between cancer cells and noncancer cells within the tumor,” Vander Heiden says.

    Scientists have previously discovered that those noncancer cells, including supportive stromal cells and immune cells, can be recruited by cancer cells to help remake the environment around the tumor to promote cancer survival and spread.

    Vander Heiden’s lab and other research groups are now working on developing a culture medium that would more closely mimic the composition of tumor interstitial fluid, so they can explore whether tumor cells grown in this environment could be used to generate more accurate predictions of how cancer drugs will affect cells in the body.

    The research was funded by the National Institutes of Health, the Lustgarten Foundation, the MIT Center for Precision Cancer Medicine, Stand Up to Cancer, the Howard Hughes Medical Institute, and the Ludwig Center at MIT.

    12:00a
    TESS discovers its first Earth-sized planet

    NASA’s Transiting Exoplanet Survey Satellite, TESS, has discovered its first Earth-sized exoplanet. The planet, named HD 21749c, is the smallest world outside our solar system that TESS has identified yet.

    In a paper published today in the journal Astrophysical Journal Letters, an MIT-led team of astronomers reports that the new planet orbits the star HD 21749 — a very nearby star, just 52 light years from Earth. The star also hosts a second planet — HD 21749b — a warm “sub-Neptune” with a longer, 36-day orbit, which the team reported previously and now details further in the current paper.

    The new Earth-sized planet is likely a rocky though uninhabitable world, as it circles its star in just 7.8 days — a relatively tight orbit that would generate surface temperatures on the planet of up to 800 degrees Fahrenheit.

    The discovery of this Earth-sized world is nevertheless exciting, as it demonstrates TESS’ ability to pick out small planets around nearby stars. In the near future, the TESS team expects that the satellite will reveal even colder planets, with conditions more suitable for hosting life.

    “For stars that are very close by and very bright, we expected to find up to a couple dozen Earth-sized planets,” says lead author and TESS member Diana Dragomir, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research. “And here we are — this would be our first one, and it’s a milestone for TESS. It sets the path for finding smaller planets around even smaller stars, and those planets may potentially be habitable.”

    TESS has been hunting for planets beyond our solar system since it launched on April 18, 2018. The satellite is a NASA Astrophysics Explorer mission that is led and operated by MIT, and is designed to observe nearly the entire sky, in overlapping, month-long patches, or “sectors,” as it orbits the Earth. As it circles our own planet, TESS focuses its four cameras outward to monitor the nearest, brightest stars in the sky, looking for any periodic dips in starlight that could indicate the presence of an exoplanet as it passes in front of its host star.

    Over its two-year mission, TESS aims to identify for the astronomy community at least 50 small, rocky planets, along with estimates of their masses. To date, the mission has discovered 10 planets smaller than Neptune, four of which have had their masses estimated, including π Men b, a planet twice the size of Earth, with a six-day orbit around its star; LHS 3844b, a hot, rocky world that’s slightly bigger than Earth and circles its star in a blistering 11 hours; and TOI 125b and c — two “sub-Neptunes” that orbit the same star, both within about a week. All four of these planets were identified from data obtained during TESS’ first two observing sectors — a good indication, the team writes in its paper, that “many more are to be found.”

    Dragomir picked out this newest, Earth-sized planet from the first four sectors of TESS observations. When these data became available, in the form of light curves, or intensities of starlight, she fed them into a software code to look for interesting, periodic signals. The code first identified a possible transit that the team later confirmed as the warm sub-Neptune they announced earlier this year.
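
    That kind of search can be illustrated with a standard box least squares periodogram run on synthetic data, as in the sketch below (this is not the team’s actual pipeline; the injected signal and all numbers are made up for the demo):

        # Sketch of a periodic-transit search on a synthetic light curve
        # using astropy's general-purpose box least squares (BLS) tool.
        import numpy as np
        from astropy.timeseries import BoxLeastSquares

        rng = np.random.default_rng(0)
        t = np.arange(0.0, 54.0, 0.02)              # ~two sectors, in days
        flux = 1.0 + 1e-4 * rng.standard_normal(t.size)
        flux[(t % 7.8) < 0.1] -= 5e-4               # inject 7.8-day box dips

        bls = BoxLeastSquares(t, flux)
        result = bls.autopower(0.1)                 # trial transit duration
        best_period = result.period[np.argmax(result.power)]
        print(f"strongest periodic dip near {best_period:.2f} days")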

    As is usually the case with small planets, where there’s one, there are likely to be more, and Dragomir and her colleagues decided to comb through the same observations again to see if they could spot any other small worlds hiding in the data.  

    “We know these planets often come in families,” Dragomir says. “So we searched all the data again, and this small signal came up.”

    The team identified a small dip in the light from HD 21749 that occurred every 7.8 days. Ultimately, the researchers identified 11 such periodic dips, or transits, and determined that the star’s light was being momentarily blocked by a planet about the size of Earth.

    While this is the first Earth-sized planet discovered by TESS, other Earth-sized exoplanets have been discovered in the past, mainly by NASA’s Kepler Space Telescope, a since-retired telescope that monitored more than 530,000 stars. In the end, the Kepler mission detected 2,662 planets, many of which were Earth-sized, and a handful of those were deemed to be within their star’s habitable zone — where a balance of conditions could be suitable for hosting life.

    However, Kepler observed stars that are many leagues further away than those that are monitored by TESS. Therefore, Dragomir says that following up on any of Kepler’s far-flung, Earth-sized planets would be much harder than studying planets orbiting TESS’ much closer, brighter stars.

    “Because TESS monitors stars that are much closer and brighter, we can measure the mass of this planet in the very near future, whereas for Kepler’s Earth-sized planets, that was out of the question,” Dragomir says. “So this new TESS discovery could lead to the first mass measurement of an Earth-sized planet. And we’re excited about what that mass could be. Will it be Earth’s mass? Or heavier? We don’t really know.”

    2:15p
    Robots that can sort recycling

    Every year trash companies sift through an estimated 68 million tons of recycling, which is the weight equivalent of more than 30 million cars.

    A key step in the process happens on fast-moving conveyor belts, where workers have to sort items into categories like paper, plastic and glass. Such jobs are dull, dirty, and often unsafe, especially in facilities where workers also have to remove normal trash from the mix.

    With that in mind, a team led by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a robotic system that can detect if an object is paper, metal, or plastic.

    The team’s “RoCycle” system includes a soft Teflon hand that uses tactile sensors on its fingertips to detect an object’s size and stiffness. Compatible with any robotic arm, RoCycle was found to be 85 percent accurate at detecting materials when stationary, and 63 percent accurate on a simulated conveyor belt. (Its most common error was identifying paper-covered metal tins as paper, which the team says would be improved by adding more sensors along the contact surface.)

    “Our robot’s sensorized skin provides haptic feedback that allows it to differentiate between a wide range of objects, from the rigid to the squishy,” says MIT Professor Daniela Rus, senior author on a related paper that will be presented in April at the IEEE International Conference on Soft Robotics (RoboSoft) in Seoul, South Korea. “Computer vision alone will not be able to solve the problem of giving machines human-like perception, so being able to use tactile input is of vital importance.”

    Developed in collaboration with Yale University, RoCycle directly demonstrates the limits of sight-based sorting: It can reliably distinguish between two visually similar Starbucks cups, one made of paper and one made of plastic, which would give vision systems trouble.

    Incentivizing recycling

    Rus says that the project is part of her larger goal to reduce the back-end cost of recycling, in order to incentivize more cities and countries to create their own programs. Today recycling centers aren’t particularly automated; their main kinds of machinery include optical sorters that use different wavelengths of light to distinguish among plastics, magnetic sorters that separate out iron and steel products, and aluminum sorters that use eddy currents to remove non-magnetic metals.

    This is a problem for one very big reason: just last month China raised its standards for the cleanliness of recycled goods it accepts from the United States, meaning that some of the country’s single-stream recycling is now sent to landfills.

    "If a system like RoCycle could be deployed on a wide scale, we'd potentially be able to have the convenience of single-stream recycling with the lower contamination rates of multi-stream recycling,” says PhD student Lillian Chin, lead author on the new paper.

    It’s surprisingly hard to develop machines that can distinguish between paper, plastic, and metal, which shows how impressive a feat it is for humans. When we pick up an object, we can immediately recognize many of its qualities even with our eyes closed, like whether it’s large and stiff or small and soft. By feeling the object and understanding how that relates to the softness of our own fingertips, we are able to learn how to handle a wide range of objects without dropping or breaking them.

    This kind of intuition is tough to program into robots. Traditional hard (“rigid”) robot hands have to know an object’s exact location and size to be able to calculate a precise motion path. Soft hands made of materials like rubber are much more flexible, but have a different problem: Because they’re powered by fluidic forces, they have a balloon-like structure that can puncture quite easily.

    How RoCycle works

    Rus’ team used a motor-driven hand made of a relatively new material called “auxetics.” Most materials get narrower when pulled on, like a rubber band when you stretch it; auxetics, meanwhile, actually get wider. The MIT team took this concept and put a twist on it, quite literally: They created auxetics that, when cut, twist to either the left or right. Combining a “left-handed” and “right-handed” auxetic for each of the hand’s two large fingers makes them interlock and oppose each other’s rotation, enabling more dynamic movement. (The team calls this “handed-shearing auxetics”, or HSA.)

    “In contrast to soft robots, whose fluid-driven approach requires air pumps and compressors, HSA combines twisting with extension, meaning that you’re able to use regular motors,” says Chin.

    The team’s gripper first uses its “strain sensor” to estimate an object’s size, and then uses its two pressure sensors to measure the force needed to grasp the object. These metrics — along with calibration data on the size and stiffness of objects of different material types — give the gripper a sense of what material the object is made of. (Since the tactile sensors are also conductive, they can detect metal by how much it changes the electrical signal.)

    “In other words, we estimate the size and measure the pressure difference between the current closed hand and what a normal open hand should look like,” says Chin. “We use this pressure difference and size to classify the specific object based on information about different objects that we’ve already measured.”
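
    A minimal stand-in for that classification step is a nearest-neighbor lookup against calibration data, sketched below (the feature names and calibration values are hypothetical, not RoCycle’s actual measurements):

        # Toy nearest-neighbor material classifier in the spirit of RoCycle's
        # size/stiffness/conductivity features. All values are hypothetical.
        import math

        # (estimated size in cm, grasp pressure difference, conductivity flag)
        CALIBRATION = [
            ((7.0, 0.2, 0.0), "paper"),
            ((6.5, 0.9, 1.0), "metal"),
            ((7.5, 0.5, 0.0), "plastic"),
        ]

        def classify(size_cm, pressure_diff, conductive):
            sample = (size_cm, pressure_diff, 1.0 if conductive else 0.0)
            # pick the calibrated example closest in feature space
            nearest = min(CALIBRATION, key=lambda e: math.dist(e[0], sample))
            return nearest[1]

        print(classify(6.8, 0.85, True))   # -> "metal"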

    RoCycle builds on a set of sensors that detect the radius of an object to within 30 percent accuracy, and tell the difference between “hard” and “soft” objects with 78 percent accuracy. The team’s hand is also almost completely puncture resistant: It withstood being scraped by a sharp lid and punctured by a needle more than 20 times, with minimal structural damage.

    As a next step, the researchers plan to build out the system so that it can combine tactile data with actual video data from a robot’s cameras. This would allow the team to further improve its accuracy and potentially allow for even more nuanced differentiation between different kinds of materials.

    Chin and Rus co-wrote the RoCycle paper alongside MIT postdoc Jeffrey Lipton, as well as PhD student Michelle Yuen and Professor Rebecca Kramer-Bottiglio of Yale University.

    This project was supported in part by Amazon, JD.com, the Toyota Research Institute, and the National Science Foundation.

    11:59p
    Giving robots a better feel for object manipulation

    A new learning system developed by MIT researchers improves robots’ abilities to mold materials into target shapes and make predictions about interacting with solid objects and liquids. The system, known as a learning-based particle simulator, could give industrial robots a more refined touch — and it may have fun applications in personal robotics, such as modeling clay shapes or rolling sticky rice for sushi.

    In robotic planning, physical simulators are models that capture how different materials respond to force. Robots are “trained” using the models, to predict the outcomes of their interactions with objects, such as pushing a solid box or poking deformable clay. But traditional learning-based simulators mainly focus on rigid objects and are unable to handle fluids or softer objects. Some more accurate physics-based simulators can handle diverse materials, but rely heavily on approximation techniques that introduce errors when robots interact with objects in the real world.

    In a paper being presented at the International Conference on Learning Representations in May, the researchers describe a new model that learns to capture how small portions of different materials — “particles” — interact when they’re poked and prodded. The model directly learns from data in cases where the underlying physics of the movements are uncertain or unknown. Robots can then use the model as a guide to predict how liquids, as well as rigid and deformable materials, will react to the force of its touch. As the robot handles the objects, the model also helps to further refine the robot’s control.

    In experiments, a robotic hand with two fingers, called “RiceGrip,” accurately shaped a deformable foam to a desired configuration — such as a “T” shape — that serves as a proxy for sushi rice. In short, the researchers’ model serves as a type of “intuitive physics” brain that robots can leverage to reconstruct three-dimensional objects somewhat similarly to how humans do.

    “Humans have an intuitive physics model in our heads, where we can imagine how an object will behave if we push or squeeze it. Based on this intuitive model, humans can accomplish amazing manipulation tasks that are far beyond the reach of current robots,” says first author Yunzhu Li, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We want to build this type of intuitive model for robots to enable them to do what humans can do.”

    “When children are 5 months old, they already have different expectations for solids and liquids,” adds co-author Jiajun Wu, a CSAIL graduate student. “That’s something we know at an early age, so maybe that’s something we should try to model for robots.”

    Joining Li and Wu on the paper are: Russ Tedrake, a CSAIL researcher and a professor in the Department of Electrical Engineering and Computer Science (EECS); Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL and the Center for Brains, Minds, and Machines (CBMM); and Antonio Torralba, a professor in EECS and director of the MIT-IBM Watson AI Lab.

    Dynamic graphs

    A key innovation behind the model, called “dynamic particle interaction networks” (DPI-Nets), was creating dynamic interaction graphs, which consist of thousands of nodes and edges that can capture complex behaviors of so-called particles. In the graphs, each node represents a particle. Neighboring nodes are connected with each other using directed edges, which represent the interaction passing from one particle to the other. In the simulator, particles are hundreds of small spheres combined to make up some liquid or a deformable object.
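
    In code, constructing one such graph might look like the sketch below (a simplified version that connects particles within a fixed interaction radius; the paper’s construction, which also adds hierarchical edges for rigid bodies, is more involved):

        # Sketch: build a dynamic interaction graph from particle positions
        # by connecting particles that lie within an interaction radius.
        import numpy as np
        from scipy.spatial import cKDTree

        def build_graph(positions, radius):
            """positions: (N, 3) array. Returns directed edge index arrays."""
            tree = cKDTree(positions)
            senders, receivers = [], []
            for i, j in tree.query_pairs(radius):   # pairs with i < j
                senders += [i, j]                   # add both directions
                receivers += [j, i]
            return np.array(senders), np.array(receivers)

        pts = np.random.rand(500, 3)                # 500 particles in a cube
        s, r = build_graph(pts, radius=0.1)
        print(f"{len(s)} directed edges among {len(pts)} particles")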

    The graphs are constructed as the basis for a machine-learning system called a graph neural network. In training, the model over time learns how particles in different materials react and reshape. It does so by implicitly calculating various properties for each particle — such as its mass and elasticity — to predict if and where the particle will move in the graph when perturbed.

    The model then leverages a “propagation” technique, which instantaneously spreads a signal throughout the graph. The researchers customized the technique for each type of material — rigid, deformable, and liquid — to shoot a signal that predicts particles’ positions at certain incremental time steps. At each step, it moves and reconnects particles, if needed.

    For example, if a solid box is pushed, perturbed particles will be moved forward. Because all particles inside the box are rigidly connected with each other, every other particle in the object undergoes the same calculated translation and rotation, so particle connections remain intact and the box moves as a single unit. But if an area of deformable foam is indented, the effect is different: Perturbed particles move forward a lot, surrounding particles move forward only slightly, and particles farther away won’t move at all. With liquids being sloshed around in a cup, particles may completely jump from one end of the graph to the other. The graph must learn to predict where and how much all affected particles move, which is computationally complex.
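
    One propagation step can be sketched as a round of message passing over those edges (illustrative Python in which a hand-written average stands in for DPI-Nets’ learned edge and node networks):

        # Sketch of one message-passing ("propagation") step over the graph.
        # In DPI-Nets the edge and node updates are learned neural networks;
        # a simple neighborhood average stands in for them here.
        import numpy as np

        def propagate(node_state, senders, receivers, steps=3):
            """node_state: (N, D) per-particle features; edges as index arrays."""
            state = node_state.copy()
            for _ in range(steps):
                agg = np.zeros_like(state)
                counts = np.zeros((state.shape[0], 1))
                np.add.at(agg, receivers, state[senders])  # sum incoming messages
                np.add.at(counts, receivers, 1.0)
                state = 0.5 * state + 0.5 * agg / np.maximum(counts, 1.0)
            return state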

    Shaping and adapting

    In their paper, the researchers demonstrate the model by tasking the two-fingered RiceGrip robot with clamping target shapes out of deformable foam. The robot first uses a depth-sensing camera and object-recognition techniques to identify the foam. The researchers randomly select particles inside the perceived shape to initialize the position of the particles. Then, the model adds edges between particles and reconstructs the foam into a dynamic graph customized for deformable materials.

    Because of the learned simulations, the robot already has a good idea of how each touch, given a certain amount of force, will affect each of the particles in the graph. As the robot starts indenting the foam, it iteratively matches the real-world position of the particles to the targeted position of the particles. Whenever the particles don’t align, it sends an error signal to the model. That signal tweaks the model to better match the real-world physics of the material.
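
    Schematically, that shaping loop alternates perception, planning, and model correction, along the lines of the outline below (every name here is a placeholder, not the researchers’ actual code):

        # Placeholder outline of the shaping control loop; the real system
        # couples a learned simulator with a gradient-based planner.
        def shape_foam(robot, model, target, max_steps=100, tol=1e-3):
            for _ in range(max_steps):
                current = robot.perceive_particles()   # depth camera -> particles
                if particle_error(current, target) < tol:
                    break                              # foam matches the target
                action = plan_best_grip(model, current, target)
                predicted = model.predict(current, action)
                robot.execute(action)
                observed = robot.perceive_particles()
                model.update(predicted, observed)      # error signal refines model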

    Next, the researchers aim to improve the model to help robots better predict interactions with partially observable scenarios, such as knowing how a pile of boxes will move when pushed, even if only the boxes at the surface are visible and most of the other boxes are hidden.

    The researchers are also exploring ways to combine the model with an end-to-end perception module by operating directly on images. This will be a joint project with Dan Yamins’s group; Yamins recently completed his postdoc at MIT and is now an assistant professor at Stanford University. “You’re dealing with these cases all the time where there’s only partial information,” Wu says. “We’re extending our model to learn the dynamics of all particles, while only seeing a small portion.”

