MIT Research News' Journal
 

Monday, July 27th, 2020

    3:50p
    Does ride-sharing substitute for or complement public transit?

    Ride-sharing apps like Uber, Lyft, Grab, and DiDi have become ubiquitous in cities around the world, but they have also attracted backlash from established taxi companies. Despite their worldwide adoption, regulation of ride-sourcing services still varies greatly from region to region, as policymakers struggle to assess their impact on the economy and society with limited information and yet-unidentified risks.

    One major consideration in improving mobility and sustainability in cities is whether ride-sourcing apps serve as a substitute for or a complement to public transit. In an ideal situation, ride-sharing would complement transit service and help reduce private car usage. However, as an alternative travel mode, it may also substitute for transit.

    To understand more about this and its impact on cities, Hui Kong, Xiaohu Zhang, and Jinhua Zhao from the SMART Future Urban Mobility interdisciplinary research group (IRG) and the JTL Urban Mobility Lab at MIT recently conducted a study that investigates the relationship between ride-sharing and public transit using ride-sourcing data. Their findings were published in a research paper, “How does ridesourcing substitute for public transit? A geospatial perspective in Chengdu, China,” in the Journal of Transport Geography, along with a visualization of their work. Future Urban Mobility (FM) is an IRG of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore.

    In the first study in the world to examine the substitution effect of each individual trip at the disaggregated level, SMART researchers used DiDi data from Chengdu, China, a major urban center with a population of over 16 million. They developed a three-level structure to identify potential substitution or complementary relationships between ride-sharing and public transit, investigated the impacts through exploratory spatiotemporal data analysis, and examined the factors influencing the degree of substitution via linear, spatial autoregressive, and zero-inflated beta regression models.

    Through this, the researchers found that one-third of DiDi trips potentially substitute for public transit: a ride-sourcing trip is considered a potential substitute if it could have been served effectively by public transit.
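The classification rule can be sketched in code. The function below is a hypothetical simplification for illustration, not the paper's actual three-level structure; the `max_ratio` and `max_extra_min` thresholds are invented, and the real criteria are derived from transit itineraries computed for each trip.

```python
# Hypothetical sketch: label a ride-sourcing trip as a potential
# transit substitute if a transit itinerary exists and is not too
# much slower than the ride. Thresholds are invented for illustration.

def classify_trip(transit_available, transit_time_min, ride_time_min,
                  max_ratio=2.0, max_extra_min=30):
    """Return 'substitute' if transit could plausibly serve the trip,
    otherwise 'complement'."""
    if not transit_available:
        return "complement"  # no feasible transit itinerary at all
    slow_factor_ok = transit_time_min <= ride_time_min * max_ratio
    extra_time_ok = transit_time_min - ride_time_min <= max_extra_min
    return "substitute" if (slow_factor_ok and extra_time_ok) else "complement"

trips = [
    (True, 25, 15),    # transit 25 min vs. ride 15 min -> substitute
    (True, 70, 20),    # transit far slower -> complement
    (False, None, 18), # no transit itinerary -> complement
]
print([classify_trip(a, t, r) for a, t, r in trips])
# ['substitute', 'complement', 'complement']
```

In the study itself, whether a trip "can be effectively served" by transit is determined from the transit network and schedules, not from fixed thresholds like these.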

    The time of day and the location matter as well. The researchers found that the substitution rate is higher during the daytime (8 a.m. to 6 p.m.) and more pronounced in the city center, and that substitution trips appear more often in areas with higher building density and land-use mixture. During the day, around 40 percent of DiDi trips have the potential to substitute for public transit, but this substitution rate decreases as the supply of transit decreases.
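As a sketch of the kind of aggregation involved, assuming each trip record carries a start hour and a substitute/complement label (both field names hypothetical), a time-windowed substitution rate can be computed like this:

```python
# Toy aggregation: substitution rate among trips starting in a given
# set of hours. The trip records below are invented examples.
trips = [
    {"hour": 9,  "label": "substitute"},
    {"hour": 12, "label": "substitute"},
    {"hour": 14, "label": "complement"},
    {"hour": 22, "label": "complement"},
    {"hour": 23, "label": "substitute"},
]

def substitution_rate(trips, hours):
    """Fraction of trips in the given hours labeled 'substitute'."""
    subset = [t for t in trips if t["hour"] in hours]
    if not subset:
        return 0.0
    return sum(t["label"] == "substitute" for t in subset) / len(subset)

daytime = range(8, 18)  # 8 a.m. up to (not including) 6 p.m.
print(round(substitution_rate(trips, daytime), 3))  # 0.667
```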

    The researchers also found that the substitution effect is more significant in more developed areas covered by subway lines, while peripheral and suburban areas were dominated by complementary trips. However, they also note that house prices were positively correlated with the substitution rate, highlighting the importance of public transit to less-wealthy populations.   

    “High substitution rate implies the necessity of implementing ride-sourcing regulations (e.g., spatial quotas, strategic pricing) or optimizing public transit service (e.g., shorten travel time, lower fee, improve crowdedness) in that area,” says Hui Kong, SMART FM investigator and postdoc at JTL Urban Mobility Lab and MIT Transit Lab. “The lower substitution in suburban areas can highlight areas where the current public transit service is inadequate and would help regulators decide on where to implement new bus or train lines.”

    Because ride-sharing substitutes for a large proportion of public transit trips, it can also amplify the digital divide. After all, most ride-sourcing services rely on smartphone apps and credit-card payment. As a result, the unbanked population and people who do not own a smartphone may not have access to ride-sharing services. Policymakers may have to rethink digitalization efforts.

    SMART was established by MIT in partnership with the National Research Foundation of Singapore (NRF) in 2007. SMART is the first entity in the Campus for Research Excellence and Technological Enterprise (CREATE) developed by NRF. SMART serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research projects in areas of interest to both Singapore and MIT. SMART currently comprises an Innovation Center and five IRGs: Antimicrobial Resistance, Critical Analytics for Manufacturing Personalized-Medicine, Disruptive and Sustainable Technologies for Agricultural Precision, FM, and Low Energy Electronic Systems. SMART research is funded by the NRF under the CREATE program. 

    FM harnesses new technological and institutional innovations to create the next generation of urban mobility systems to increase accessibility, equity, safety, and environmental performance for the citizens and businesses of Singapore and other metropolitan areas worldwide.

    4:15p
    Shining a light on the quantum world

    In the universe, there is the world we can see with the naked eye: trees, planes in the sky, dishes in the sink. But there are other worlds that reveal themselves with the help of a magnifying glass, telescope, or microscope. With these, we can see up into the universe or down into the smallest particles that make it up. The smallest of these is a world populated by particles smaller than an atom: the quantum world. 

    Physicists who probe this world study how these subatomic particles interact with one another, often in ways not predicted by behavior at the atomic or molecular level. One such physicist is Nicholas Rivera, who studies light-matter interactions at the quantum level.

    Unfinished business

    In the quantum world, light is two things: both a wave and a small particle called a photon. “I was always fascinated with light, especially the quantum nature of light,” says Rivera, a Department of Physics graduate student in Professor Marin Soljačić’s group.

    According to Rivera, there is still a lot we don’t know about quantum light, and uncovering these unknowns may prove useful for a number of applications. “It’s connected to a lot of interesting problems,” says Rivera, such as how to make better quantum computers and lasers at new frequencies like ultraviolet and X-ray. It’s this dual nature of the work — with fundamental questions coupled with practical solutions — that attracted Rivera to his current area of research. 

    Rivera joined Soljačić’s group in 2013, when he was an undergraduate at MIT. Since then his research has focused on how light and matter interact at the most elementary level, between quanta of light, also called photons, and electrons of matter. These interactions are governed by the laws of quantum electrodynamics and involve the emission of photons by electrons that hop up and down energy levels. This may sound simple, but it is surprisingly difficult because light and matter are operating on two different size scales, which often means these interactions are inefficient. One specific goal of Rivera’s work is to improve that efficiency.  

    “The atom is this tiny thing, a 10th of a nanometer large,” says Rivera. But when light takes the form of a wave, its wavelengths are much larger than an atom. “The idea is that, because of this mismatch, many of the possible ways that an electron could release a photon are just too slow to be observable.” Rivera uses theory to figure out how light and matter could be manipulated to allow for new types of interactions and ways to intentionally change the quantum state of light. 
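The mismatch Rivera describes can be made concrete with rough, order-of-magnitude numbers (both figures below are approximate):

```python
# Order-of-magnitude comparison: visible-light wavelength vs. atom size.
wavelength_nm = 500.0  # green light is roughly 500 nm
atom_nm = 0.1          # "a 10th of a nanometer," as quoted above
ratio = wavelength_nm / atom_nm
print(f"wavelength / atom size ~ {ratio:.0f}x")  # ~ 5000x
```

This factor-of-thousands gap is what "shrinking" light in materials like graphene helps close, making otherwise forbidden interactions observable.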

    Inefficient interactions are often thought of as “forbidden” because, in normal circumstances, they would take billions of years to happen. “The forbidden light-matter interactions project is something we have been thinking about for many years, but we didn’t have a suitable material-system platform for it,” says Soljačić. In 2015, graphene plasmons arrived on the scene, and forbidden interactions could be explored.

    Graphene is an ultra-thin 2D material, and plasmons are another quantum-scale particle related to the oscillation of electrons. In these ultra-thin materials, light can be “shrunk” so that the wavelengths are closer to the scale of the electrons, making forbidden interactions possible. 

    Rivera’s first paper on this topic, published the summer after he graduated with his bachelor’s degree in 2016, was the start of his longstanding collaboration with Ido Kaminer, an assistant professor at the Technion-Israel Institute of Technology. But Rivera wasn’t done with light-matter interactions. “There were so many other directions that one could go with that work, and I really wanted the ability to probe all of them,” Rivera says, and he decided to stay in Soljačić’s group for his PhD. 

    A natural match

    That first collaboration with Kaminer, who was then a postdoc in Soljačić’s group, was a pivotal moment in Rivera’s career as a physicist. “I was working on a different project with Marin, but then he invited me to his office with Ido and told me about the project that would become the 2016 paper,” says Rivera. According to Soljačić, putting Kaminer and Rivera together “was a natural match.”
     
    Kaminer moved to the Technion in 2018, which was when Rivera took his first trip to Haifa, Israel, with funds provided by MISTI-Israel, a program within the MIT International Science and Technology Initiatives (MISTI). There, he gave a seminar and met with students and professors. “That visit seeded some projects that we’re still working on today,” says Rivera, such as a project where vacuum forces were used to generate X-ray photons.

    With the help of lasers and optical materials, it’s relatively easy to generate photons of visible light, but making X-ray photons is much harder. “We don’t have lasers the same way we do for visible light, and we don’t have as many materials to manipulate X-rays,” says Rivera. The search for new strategies for generating X-ray photons is important, Rivera says, because these photons can help scientists explore physics at the atomic scale. 

    This past January, Rivera visited Israel for the third time. On these trips, “[we make] progress on the collaborations we have with the students, and also brainstorm new projects,” says Rivera. According to Kaminer, the in-person brainstorming is vital when coming up with new ideas. “Such creative ideas are, in the end, the most important part of our work as scientists,” Kaminer explains. During each visit, Rivera and Kaminer sketch out a research plan for the next six months to a year, such as continuing to investigate new ways to control and generate quantum sources of X-ray photons.

    When investigating the theory of light-matter interactions, the potential applications are never far from Rivera’s mind. “We’re trying to think about applications that could potentially be realized next year and in the next five years, but even potentially further down the line.” 

    For Rivera, being able to be in the same place as his collaborators is a major boon, and he doubts the continued collaboration with Kaminer would be as active if he hadn’t taken that first trip to Haifa in 2018. “And the hummus isn’t bad,” he jokes. 

    When Soljačić introduced Rivera and Kaminer five years ago, neither expected that the collaboration would still be going strong. “It’s hard to anticipate what collaborations will be successful in the long term,” says Kaminer. “But more important than the collaboration is the friendship,” he adds. 

    The deeper Rivera explores the quantum aspects of light-matter interactions, the more potential avenues of exploration open up. “It just keeps branching,” says Rivera. And he envisions himself continuing to visit Kaminer in Israel, no matter where his research takes him next. “It’s a lifelong collaboration at this point.”

    4:45p
    Looking into the black box

    Deep learning systems are revolutionizing technology around us, from voice recognition that pairs you with your phone to autonomous vehicles that are increasingly able to see and recognize obstacles ahead. But much of this success involves trial and error when it comes to the deep learning networks themselves. A group of MIT researchers recently reviewed their contributions to a better theoretical understanding of deep learning networks, providing direction for the field moving forward.

    “Deep learning was in some ways an accidental discovery,” explains Tommy Poggio, investigator at the McGovern Institute for Brain Research, director of the Center for Brains, Minds, and Machines (CBMM), and the Eugene McDermott Professor in Brain and Cognitive Sciences. “We still do not understand why it works. A theoretical framework is taking form, and I believe that we are now close to a satisfactory theory. It is time to stand back and review recent insights.”

    Climbing data mountains

    Our current era is marked by a superabundance of data — data from inexpensive sensors of all types, text, the internet, and large amounts of genomic data being generated in the life sciences. Computers nowadays ingest these multidimensional datasets, creating a set of problems dubbed the “curse of dimensionality” by the late mathematician Richard Bellman.

    One of these problems is that representing a smooth, high-dimensional function requires an astronomically large number of parameters. We know that deep neural networks are particularly good at learning how to represent, or approximate, such complex data, but why? Understanding why could potentially help advance deep learning applications.

    “Deep learning is like electricity after Volta discovered the battery, but before Maxwell,” explains Poggio, who is the founding scientific advisor of The Core, MIT Quest for Intelligence, and an investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. “Useful applications were certainly possible after Volta, but it was Maxwell’s theory of electromagnetism, this deeper understanding that then opened the way to the radio, the TV, the radar, the transistor, the computers, and the internet.”

    The theoretical treatment by Poggio, Andrzej Banburski, and Qianli Liao points to why deep learning might overcome data problems such as “the curse of dimensionality.” Their approach starts with the observation that many natural structures are hierarchical. To model the growth and development of a tree doesn’t require that we specify the location of every twig. Instead, a model can use local rules to drive branching hierarchically. The primate visual system appears to do something similar when processing complex data. When we look at natural images — including trees, cats, and faces — the brain successively integrates local image patches, then small collections of patches, and then collections of collections of patches. 

    “The physical world is compositional — in other words, composed of many local physical interactions,” explains Qianli Liao, an author of the study, and a graduate student in the Department of Electrical Engineering and Computer Science and a member of the CBMM. “This goes beyond images. Language and our thoughts are compositional, and even our nervous system is compositional in terms of how neurons connect with each other. Our review explains theoretically why deep networks are so good at representing this complexity.”

    The intuition is that a hierarchical neural network should be better at approximating a compositional function than a single “layer” of neurons, even if the total number of neurons is the same. The technical part of their work identifies what “better at approximating” means and proves that the intuition is correct.
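The parameter-count intuition can be illustrated with a back-of-the-envelope calculation. The scaling rules below follow the general flavor of such approximation results but are assumptions for illustration: a binary-tree compositional function of d variables is built from d − 1 two-variable pieces, each needing roughly (1/ε)² parameters to approximate to accuracy ε, while a generic d-variable function needs roughly (1/ε)^d.

```python
# Back-of-the-envelope comparison (illustrative scaling assumptions,
# not the paper's theorem): approximating to accuracy eps,
#  - a binary-tree compositional function of d inputs has d - 1
#    two-variable constituents, each costing ~ (1/eps)**2 parameters;
#  - a generic d-variable function costs ~ (1/eps)**d parameters
#    (the curse of dimensionality).

def deep_params(d, eps):
    return (d - 1) * (1.0 / eps) ** 2

def generic_params(d, eps):
    return (1.0 / eps) ** d

d, eps = 8, 0.1
print(deep_params(d, eps))     # 700.0
print(generic_params(d, eps))  # ~1e8
```

The point is the exponential-versus-linear gap in d, not the exact constants: hierarchy lets the cost grow with the number of constituent functions rather than with the raw dimension.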

    Generalization puzzle

    There is a second puzzle about what is sometimes called the unreasonable effectiveness of deep networks. Deep network models often have far more parameters than data to fit them, despite the mountains of data we produce these days. This situation ought to lead to what is called “overfitting,” where your current data fit the model well, but any new data fit the model terribly. This is dubbed poor generalization in conventional models. The conventional solution is to constrain some aspect of the fitting procedure. However, deep networks do not seem to require this constraint. Poggio and his colleagues prove that, in many cases, the process of training a deep network implicitly “regularizes” the solution, providing constraints.
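A toy stand-in for the overfitting phenomenon (not the authors' analysis): a degree-9 polynomial interpolates 10 slightly noisy samples of the trend y = x exactly, yet can miss badly between sample points near the edge of the interval.

```python
# Toy overfitting demo: exact fit on training points, large error on a
# held-out point. Pure-Python Lagrange interpolation; invented data.

def lagrange_eval(xs, ys, x):
    """Evaluate the degree-(n-1) interpolating polynomial at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# training data: y = x plus small alternating "noise"
xs = [i / 9 for i in range(10)]
ys = [x + 0.05 * (-1) ** i for i, x in enumerate(xs)]

# the fit is exact on every training point...
print(abs(lagrange_eval(xs, ys, xs[3]) - ys[3]) < 1e-9)  # True

# ...but far off the true trend y = x at a held-out point near the edge
x_new = (xs[0] + xs[1]) / 2
err = abs(lagrange_eval(xs, ys, x_new) - x_new)
print(err > 0.3)  # True: the miss dwarfs the 0.05 noise level
```

Deep networks sit in the same overparameterized regime, yet gradient-based training tends to select well-behaved solutions; the review's point is that this implicit regularization can be made precise.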

    The work has a number of implications going forward. Though deep learning is actively being applied in the world, this has so far occurred without a comprehensive underlying theory. A theory of deep learning that explains why and how deep networks work, and what their limitations are, will likely allow the development of even more powerful learning approaches.

    “In the long term, the ability to develop and build better intelligent machines will be essential to any technology-based economy,” explains Poggio. “After all, even in its current — still highly imperfect — state, deep learning is impacting, or about to impact, just about every aspect of our society and life.”

