MIT Research News' Journal
 

Monday, November 7th, 2016

    12:00a
    Driverless-vehicle options now include scooters

    At MIT’s 2016 Open House last spring, more than 100 visitors took rides on an autonomous mobility scooter in a trial of software designed by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the National University of Singapore, and the Singapore-MIT Alliance for Research and Technology (SMART).

    The researchers had previously used the same sensor configuration and software in trials of autonomous cars and golf carts, so the new trial completes the demonstration of a comprehensive autonomous mobility system. A mobility-impaired user could, in principle, use a scooter to get down the hall and through the lobby of an apartment building, take a golf cart across the building’s parking lot, and pick up an autonomous car on the public roads.

    The new trial establishes that the researchers’ control algorithms work indoors as well as out. “We were testing them in tighter spaces,” says Scott Pendleton, a graduate student in mechanical engineering at the National University of Singapore (NUS) and a research fellow at SMART. “One of the spaces that we tested in was the Infinite Corridor of MIT, which is a very difficult localization problem, being a long corridor without very many distinctive features. You can lose your place along the corridor. But our algorithms proved to work very well in this new environment.”

    The researchers’ system includes several layers of software: low-level control algorithms that enable a vehicle to respond immediately to changes in its environment, such as a pedestrian darting across its path; route-planning algorithms; localization algorithms that the vehicle uses to determine its location on a map; map-building algorithms that it uses to construct the map in the first place; a scheduling algorithm that allocates fleet resources; and an online booking system that allows users to schedule rides.

    Uniformity

    Using the same control algorithms for all types of vehicles — scooters, golf carts, and city cars — has several advantages. One is that it becomes much more practical to perform reliable analyses of the system’s overall performance.

    “If you have a uniform system where all the algorithms are the same, the complexity is much lower than if you have a heterogeneous system where each vehicle does something different,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and one of the project’s leaders. “That’s useful for verifying that this multilayer complexity is correct.”

    Furthermore, with software uniformity, information that one vehicle acquires can easily be transferred to another. Before the scooter was shipped to MIT, for instance, it was tested in Singapore, where it used maps that had been created by the autonomous golf cart.

    Similarly, says Marcelo Ang, an associate professor of mechanical engineering at NUS who co-leads the project with Rus, in ongoing work the researchers are equipping their vehicles with machine-learning systems, so that interactions with the environment will improve the performance of their navigation and control algorithms. “Once you have a better driver, you can easily transplant that to another vehicle,” says Ang. “That’s the same across different platforms.”

    Finally, software uniformity means that the scheduling algorithm has more flexibility in its allocation of system resources. If an autonomous golf cart isn’t available to take a user across a public park, a scooter could fill in; if a city car isn’t available for a short trip on back roads, a golf cart might be.
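The fallback logic described above can be sketched as a simple allocation routine. This is our own illustration, not the researchers' scheduler; the trip categories and preference lists are hypothetical, since the article says only that vehicle types can substitute for one another:

```python
def assign_vehicle(trip_type, available):
    """Pick a vehicle for a trip, falling back to a compatible type.

    trip_type: 'indoor', 'campus', or 'road'.
    available: dict mapping vehicle type to the number currently free.
    The preference lists below are hypothetical examples of which
    vehicle types could substitute for one another.
    """
    preferences = {
        'indoor': ['scooter'],
        'campus': ['golf_cart', 'scooter'],
        'road':   ['city_car', 'golf_cart'],
    }
    for vehicle in preferences[trip_type]:
        if available.get(vehicle, 0) > 0:
            available[vehicle] -= 1  # claim the vehicle for this trip
            return vehicle
    return None  # no compatible vehicle free
```

With a uniform software stack, any vehicle returned by such a scheduler can run the same navigation code, which is what makes the substitution possible in the first place.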

    “I can see its usefulness in large indoor shopping malls and amusement parks to take [mobility-impaired] people from one spot to another,” says Dan Ding, an associate professor of rehabilitation science and technology at the University of Pittsburgh, about the system.

    Changing perceptions

    The scooter trial at MIT also demonstrated the ease with which the researchers could deploy their modular hardware and software system in a new context. “It’s extraordinary to me, because it’s a project that the team conducted in about two months,” Rus says. MIT’s Open House was at the end of April, and “the scooter didn’t exist on February 1st,” Rus says.

    The researchers described the design of the scooter system and the results of the trial in a paper they presented last week at the IEEE International Conference on Intelligent Transportation Systems. Joining Rus, Pendleton, and Ang on the paper are You Hong Eng, who leads the SMART autonomous-vehicle project, and four other researchers from both NUS and SMART.

    The paper also reports the results of a short user survey that the researchers conducted during the trial. Before riding the scooter, users were asked how safe they considered autonomous vehicles to be, on a scale from one to five; after their rides, they were asked the same question again. Experience with the scooter brought the average safety score up, from 3.5 to 4.6.

    12:00a
    Faster programs, easier programming

    Dynamic programming is a technique that can yield relatively efficient solutions to computational problems in economics, genomic analysis, and other fields. But adapting it to computer chips with multiple “cores,” or processing units, requires a level of programming expertise that few economists and biologists have.

    Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stony Brook University aim to change that with a new system that allows users to describe what they want their programs to do in very general terms. It then automatically produces versions of those programs that are optimized to run on multicore chips. It also guarantees that the new versions will yield exactly the same results that the single-core versions would, albeit much faster.

    In experiments, the researchers used the system to “parallelize” several algorithms that used dynamic programming, splitting them up so that they would run on multicore chips. The resulting programs were between three and 11 times as fast as those produced by earlier techniques for automatic parallelization, and they were generally as efficient as those that were hand-parallelized by computer scientists.

    The researchers presented their new system last week at the Association for Computing Machinery’s conference on Systems, Programming, Languages and Applications: Software for Humanity.

    Dynamic programming offers exponential speedups on a certain class of problems because it stores and reuses the results of computations, rather than recomputing them every time they’re required.

    “But you need more memory, because you store the results of intermediate computations,” says Shachar Itzhaky, first author on the new paper and a postdoc in the group of Armando Solar-Lezama, an associate professor of electrical engineering and computer science at MIT. “When you come to implement it, you realize that you don't get as much speedup as you thought you would, because the memory is slow. When you store and fetch, of course, it’s still faster than redoing the computation, but it’s not as fast as it could have been.”
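The store-and-reuse idea behind dynamic programming can be seen in a minimal example. This sketch is ours, not code from the Bellmania paper: it computes the length of a longest common subsequence, caching each subproblem so it is solved only once rather than exponentially many times.

```python
from functools import lru_cache

def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of a and b.

    Each subproblem (i, j) is computed once and stored; without the
    cache, the recursion would recompute overlapping subproblems
    exponentially often.
    """
    @lru_cache(maxsize=None)
    def go(i: int, j: int) -> int:
        if i == len(a) or j == len(b):
            return 0  # one string exhausted
        if a[i] == b[j]:
            return 1 + go(i + 1, j + 1)  # match: extend the subsequence
        return max(go(i + 1, j), go(i, j + 1))  # skip a character
    return go(0, 0)
```

The cache is exactly the extra memory Itzhaky describes: it buys the exponential speedup, but every stored result must later be fetched back, and on real hardware those fetches are not free.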

    Outsourcing complexity

    Computer scientists avoid this problem by reordering computations so that those requiring a particular stored value are executed in sequence, minimizing the number of times that the value has to be recalled from memory. That’s relatively easy to do with a single-core computer, but with multicore computers, when multiple cores are sharing data stored at multiple locations, memory management becomes much more complex. A hand-optimized, parallel version of a dynamic-programming algorithm is typically 10 times as long as the single-core version, and the individual lines of code are more complex, to boot.

    The CSAIL researchers’ new system — dubbed Bellmania, after Richard Bellman, the applied mathematician who pioneered dynamic programming — adopts a parallelization strategy called recursive divide-and-conquer. Suppose that the task of a parallel algorithm is to perform a sequence of computations on a grid of numbers, known as a matrix. Its first task might be to divide the grid into four parts, each to be processed separately.

    But then it might divide each of those four parts into four parts, and each of those into another four parts, and so on. Because this approach — recursion — involves breaking a problem into smaller subproblems, it naturally lends itself to parallelization.
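The quadrant recursion described above can be sketched schematically. This is our own illustration of recursive divide-and-conquer, not code generated by Bellmania; it assumes a square grid whose side is a power of two, and it simply sums the entries, with the four top-level quadrants handled in parallel:

```python
import concurrent.futures

def quadrants(grid):
    """Split a square grid (list of lists) into its four quadrants."""
    h = len(grid) // 2
    return [
        [row[:h] for row in grid[:h]],  # top-left
        [row[h:] for row in grid[:h]],  # top-right
        [row[:h] for row in grid[h:]],  # bottom-left
        [row[h:] for row in grid[h:]],  # bottom-right
    ]

def recursive_sum(grid):
    """Sum all entries by recursively subdividing into quadrants."""
    if len(grid) == 1:
        return sum(grid[0])  # base case: a single row
    return sum(recursive_sum(q) for q in quadrants(grid))

def parallel_sum(grid):
    """Farm the four top-level quadrants out to parallel workers;
    each worker then recurses sequentially on its own quadrant."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        return sum(pool.map(recursive_sum, quadrants(grid)))
```

Because each quadrant is an independent subproblem, the workers never need to coordinate; the hard part, which Bellmania automates, is choosing subdivisions for which the subproblems of a real dynamic-programming recurrence remain (mostly) independent.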

    Joining Itzhaky on the new paper are Solar-Lezama; Charles Leiserson, the Edwin Sibley Webster Professor of Electrical Engineering and Computer Science; Rohit Singh and Kuat Yessenov, who were both MIT graduate students in electrical engineering and computer science when the work was done; Yongquan Lu, an MIT undergraduate who participated in the project through MIT’s Undergraduate Research Opportunities Program; and Rezaul Chowdhury, an assistant professor of computer science at Stony Brook, who was formerly a research affiliate in Leiserson’s group.

    Leiserson’s group specializes in divide-and-conquer parallelization techniques; Solar-Lezama’s specializes in program synthesis, or automatically generating code from high-level specifications. With Bellmania, the user simply has to describe the first step of the process — the division of the matrix and the procedures to be applied to the resulting segments. Bellmania then determines how to continue subdividing the problem so as to use memory efficiently.

    Rapid search

    At each level of recursion — with each successively smaller subdivision of the matrix — a program generated by Bellmania will typically perform some operation on some segment of the matrix and farm the rest out to subroutines, which can be performed in parallel. Each of those subroutines, in turn, will perform some operation on some segment of the data and farm the rest out to further subroutines, and so on.

    Bellmania determines how much data should be processed at each level and which subroutines should handle the rest. “The goal is to arrange the memory accesses such that when you read a cell [of the matrix], you do as much computation as you can with it, so that you will not have to read it again later,” Itzhaky says.

    Finding the optimal division of tasks requires canvassing a wide range of possibilities. Solar-Lezama’s group has developed a suite of tools to make that type of search more efficient; even so, Bellmania takes about 15 minutes to parallelize a typical dynamic-programming algorithm. That’s still much faster than a human programmer could perform the same task, however. And the result is guaranteed to be correct; hand-optimized code is so complex that it’s easy for errors to creep in.

    “The work that they’re doing is really foundational in enabling a broad set of applications to run on multicore and parallel processors,” says David Bader, a professor of computational science and engineering at Georgia Tech. “One challenge has been to enable high-level writing of programs that work on our current multicore processors, and up to now doing that requires heroic, low-level manual coding to get performance. What they provide is a much simpler, high-level technique for some classes of programs that makes it very easy to write the program and have their system automatically figure out how to divide up the work to create codes that are competitive with hand-tuned, low-level coding.

    “The types of applications that they would enable range from computational biology, to proteomics, to cybersecurity, to sorting, to scheduling problems of all sorts, to managing network traffic — there are countless examples of real algorithms in the real world for which they now enable much more efficient code,” he adds. “It’s remarkable.”

    3:00p
    A new approach against Salmonella and other pathogens

    Researchers from MIT and the University of California at Irvine have developed a new strategy to immunize against microbes that invade the gastrointestinal tract, including Salmonella, which causes more foodborne illness in the United States than any other bacterium.

    The researchers targeted a molecule that Salmonella and other bacteria secrete to scavenge iron, which is essential to many cellular functions. Immunization against this molecule led to the production of antibodies that reduced Salmonella growth, resulting in much lower levels of the bacteria in the gut.

    This approach could offer an alternative to antibiotics, which can cause side effects because they also kill beneficial bacteria. Using too many antibiotics can also lead to drug resistance.

    “We have a huge problem in terms of infectious disease and antibiotic resistance,” says Elizabeth Nolan, an associate professor in MIT’s Department of Chemistry. “One aspect we like about our strategy is that it’s narrow-spectrum, in contrast to many small-molecule antibiotics that are broad-spectrum and can disrupt the commensal [beneficial] microbiota, which can then have secondary negative consequences for the patient.”

    Nolan and Manuela Raffatellu, a professor at UC Irvine, are the senior authors of the study, which appears in the Proceedings of the National Academy of Sciences the week of Nov. 7. The paper’s lead authors are Phoom Chairatana, a recent MIT PhD in Chemistry, and Martina Sassone-Corsi, a postdoc at UC Irvine. The team initiated this project in 2011 when Chairatana and Sassone-Corsi were both first-year graduate students.

    Iron-clad defenses

    Most bacteria, as well as some fungi, use molecules known as siderophores to obtain iron, a metal that is critical for cellular processes including metabolism and DNA synthesis. Bacteria that live in the intestinal tract secrete siderophores into the gut and then reabsorb them after they have grabbed onto iron.

    There are hundreds of different types of siderophores, and in this study, the researchers focused on a subset of siderophores that are produced by Salmonella and a few other types of pathogenic bacteria that can live in the gut.

    The researchers were inspired by the way that some organisms naturally combat microbes by blocking their iron uptake. Humans have a defense protein known as lipocalin 2, which can capture some siderophores and prevent these molecules from carrying iron into bacterial cells. However, lipocalin 2 is not effective against certain types of siderophores, including one type used by Salmonella.

    “There’s no identified human defense mechanism against some of these molecules. That’s how we got thinking about how we could boost this metal-withholding response via an immunization,” Nolan says.

    The siderophore molecules are too small to induce an immune response from a host organism, so the researchers decided to attach the siderophore to a protein that does induce an immune response — cholera toxin subunit B (CTB). The siderophore-CTB complex is delivered nasally or injected into the abdomen and makes its way to the lining of the GI tract, where the body begins producing antibodies against both CTB and the siderophore.

    The researchers gave mice the immunization twice, two weeks apart, and then infected them with Salmonella 36 to 51 days after the first immunization. They found that antibodies against the siderophores peaked around 21 days after the first immunization and then remained at high levels. The immunized mice also had much smaller numbers of Salmonella in their gut and did not experience the weight loss seen in mice that were infected but not immunized.

    In a paper appearing in the same issue of PNAS, researchers at the University of Michigan used a similar approach to generate an immune response against Escherichia coli that can cause urinary tract infections. 

    Bacterial benefits

    The researchers also found that immunization not only reduced the Salmonella population but also led to the expansion of a population of beneficial bacteria known as Lactobacillus — the probiotic bacteria found in yogurt, which help to inhibit the growth of pathogenic microbes. “We think that the expansion of Lactobacillus may be conferring additional benefit to the host,” Nolan says.

    This immunization strategy could be useful to protect people at high risk for certain kinds of infections, such as people who have compromised immune systems or cancer patients receiving chemotherapy, Nolan says.

    This approach could also be used to generate antibodies to treat people after they become infected with certain pathogens, such as Salmonella. The researchers are now working to isolate and analyze the antibodies that the mice produced in this study, and they are developing immunization strategies against other types of siderophores found in other organisms.

