MIT Research News' Journal
 

Wednesday, August 7th, 2019

    11:28a
    Study measures how fast humans react to road hazards

    Imagine you’re sitting in the driver’s seat of an autonomous car, cruising along a highway and staring down at your smartphone. Suddenly, the car detects a moose charging out of the woods and alerts you to take the wheel. Once you look back at the road, how much time will you need to safely avoid the collision?

    MIT researchers have found an answer in a new study that shows humans need about 390 to 600 milliseconds to detect and react to road hazards, given only a single glance at the road — with younger drivers detecting hazards nearly twice as fast as older drivers. The findings could help developers of autonomous cars ensure they are allowing people enough time to safely take the controls and steer clear of unexpected hazards.

    Previous studies have examined hazard response times while people kept their eyes on the road and actively searched for hazards in videos. In this new study, recently published in the Journal of Experimental Psychology: General, the researchers examined how quickly drivers can recognize a road hazard if they’ve just looked back at the road. That’s a more realistic scenario for the coming age of semiautonomous cars that require human intervention and may unexpectedly hand over control to human drivers when facing an imminent hazard.

    “You’re looking away from the road, and when you look back, you have no idea what’s going on around you at first glance,” says lead author Benjamin Wolfe, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We wanted to know how long it takes you to say, ‘A moose is walking into the road over there, and if I don’t do something about it, I’m going to take a moose to the face.’”

    For their study, the researchers built a unique dataset that includes YouTube dashcam videos of drivers responding to road hazards — such as objects falling off truck beds, moose running into the road, 18-wheelers toppling over, and sheets of ice flying off car roofs — and other videos without road hazards. Participants were shown split-second snippets of the videos, in between blank screens. In one test, they indicated if they detected hazards in the videos. In another test, they indicated if they would react by turning left or right to avoid a hazard.

    The results indicate that younger drivers are quicker at both tasks: Older drivers (55 to 69 years old) required 403 milliseconds to detect hazards in videos, and 605 milliseconds to choose how they would avoid the hazard. Younger drivers (20 to 25 years old) only needed 220 milliseconds to detect and 388 milliseconds to choose.

    Those age results are important, Wolfe says. When autonomous vehicles are ready to hit the road, they’ll most likely be expensive. “And who is more likely to buy expensive vehicles? Older drivers,” he says. “If you build an autonomous vehicle system around the presumed capabilities of reaction times of young drivers, that doesn’t reflect the time older drivers need. In that case, you’ve made a system that’s unsafe for older drivers.”

    Joining Wolfe on the paper are Bobbie Seppelt, Bruce Mehler, and Bryan Reimer of the MIT AgeLab, and Ruth Rosenholtz of the Department of Brain and Cognitive Sciences and CSAIL.

    Playing “the worst video game ever”

    In the study, 49 participants sat in front of a large screen that closely matched the visual angle and viewing distance for a driver, and watched 200 videos from the Road Hazard Stimuli dataset for each test. They were given a toy steering wheel, brake, and gas pedals to indicate their responses. “Think of it as the worst video game ever,” Wolfe says.

    The dataset includes about 500 eight-second dashcam videos of a variety of road conditions and environments. About half of the videos contain events leading to collisions or near collisions. The other half try to closely match each of those driving conditions, but without any hazards. Each video is annotated at two critical points: the frame when a hazard becomes apparent, and the first frame of the driver’s response, such as braking or swerving.
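
    As a concrete illustration, one clip in such a dataset might be represented as in the sketch below. This is a hypothetical structure; the field names are assumptions for illustration, not the actual schema of the Road Hazard Stimuli dataset.

        from dataclasses import dataclass
        from typing import Optional

        # Hypothetical record for one clip in a hazard-video dataset.
        # Field names are illustrative; the actual Road Hazard Stimuli
        # schema is not described in this article.
        @dataclass
        class HazardClip:
            video_path: str                      # eight-second dashcam clip
            contains_hazard: bool                # half the clips have no hazard
            hazard_onset_frame: Optional[int]    # frame where the hazard becomes apparent
            response_onset_frame: Optional[int]  # first frame of the driver's response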

    Before each video, participants were shown a split-second white noise mask. When that mask disappeared, participants saw a snippet of a random video that did or did not contain an imminent hazard. After the video, another mask appeared. Directly following that, participants stepped on the brake if they saw a hazard or the gas if they didn’t. There was then another split-second pause on a black screen before the next mask popped up.
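
    In code, the phases of one detection trial might look like the sketch below. Only the ordering of phases comes from the description above; the mask and pause durations are assumed placeholders for “split-second.”

        # Hypothetical phase sequence for one detection trial.
        # Only the ordering is taken from the article; the mask and
        # pause durations are assumed stand-ins for "split-second."
        def trial_phases(clip_ms: int):
            yield ("white_noise_mask", 200)    # assumed ~200 ms
            yield ("video_snippet", clip_ms)   # may or may not contain a hazard
            yield ("white_noise_mask", 200)    # assumed ~200 ms
            yield ("response", None)           # brake = hazard seen, gas = no hazard
            yield ("black_screen_pause", 200)  # assumed ~200 ms

        for phase, ms in trial_phases(750):
            print(phase, ms)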

    When participants started the experiment, the first video they saw was shown for 750 milliseconds. But the duration changed during each test, depending on the participants’ responses. If a participant responded incorrectly to one video, the next video’s duration would extend slightly. If they responded correctly, it would shorten. In the end, durations ranged from a single frame (33 milliseconds) up to one second. “If they got it wrong, we assumed they didn’t have enough information, so we made the next video longer. If they got it right, we assumed they could do with less information, so we made it shorter,” Wolfe says.
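
    The article does not give the staircase’s step size, so the value below is an assumption; the sketch only captures the adaptive rule described above: start at 750 milliseconds, lengthen after an error, shorten after a correct response, and stay between one frame (33 ms) and one second.

        MIN_MS, MAX_MS = 33, 1000   # one frame up to one second
        START_MS = 750              # duration of the first video
        STEP_MS = 33                # assumed step size (one frame); not specified

        def next_duration(current_ms: int, was_correct: bool) -> int:
            """Shorten the next clip after a correct response, lengthen it
            after an error, clamped to the allowed range."""
            step = -STEP_MS if was_correct else STEP_MS
            return max(MIN_MS, min(MAX_MS, current_ms + step))

        duration = START_MS
        for was_correct in (True, True, False, True):  # example response pattern
            duration = next_duration(duration, was_correct)
            print(duration)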

    The second task used the same setup to record how quickly participants could choose a response to a hazard. For that, the researchers used a subset of videos where they knew the response was to turn left or right. Each video stopped, and the mask appeared, at the first frame in which the recorded driver began to react. Then, participants turned the wheel either left or right to indicate where they’d steer.

    “It’s not enough to say, ‘I know something fell into the road in my lane.’ You need to understand that there’s a shoulder to the right and a car in the next lane that I can’t accelerate into, because I’ll have a collision,” Wolfe says.

    More time needed

    The MIT study didn’t record how long it actually takes people to, say, physically look up from their phones or turn a wheel. Instead, it showed people need up to 600 milliseconds to just detect and react to a hazard, while having no context about the environment.

    Wolfe thinks that’s concerning for autonomous vehicles, since they may not give humans adequate time to respond, especially under panic conditions. Other studies, for instance, have found that it takes people who are driving normally, with their eyes on the road, about 1.5 seconds to physically avoid road hazards, starting from initial detection.

    Driverless cars will already require a couple hundred milliseconds to alert a driver to a hazard, Wolfe says. “That already bites into the 1.5 seconds,” he says. “If you look up from your phone, it may take an additional few hundred milliseconds to move your eyes and head. That doesn’t even get into the time it’ll take to reassert control and brake or steer. Then, it starts to get really worrying.”
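
    Put as a rough time budget, the arithmetic looks like this; the eye-and-head-movement figure is an assumption, while the other numbers come from the article.

        # Illustrative handover time budget, in milliseconds.
        total_avoidance_ms = 1500  # eyes-on-road avoidance time (article)
        alert_ms = 200             # "a couple hundred milliseconds" to alert (article)
        eyes_head_ms = 300         # assumed time to look up from a phone
        detect_react_ms = 600      # upper bound measured in this study

        remaining_ms = total_avoidance_ms - alert_ms - eyes_head_ms - detect_react_ms
        print(remaining_ms)        # ~400 ms left to actually brake or steer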

    Next, the researchers are studying how well peripheral vision helps in detecting hazards. Participants will be asked to stare at a blank part of the screen — indicating where a smartphone may be mounted on a windshield — and similarly pump the brakes when they notice a road hazard.

    The work is sponsored, in part, by the Toyota Research Institute.  

    12:45p
    Air travel in academia

    Our planet’s warming climate presents an imminent and catastrophic challenge that will have far-reaching economic, social, and political ramifications. As residents of a wealthy, developed nation, we contribute more to climate change than the average global citizen. At MIT, as globally connected citizens with many opportunities for work- and research-related air travel, many community members contribute more to climate change than the average American.

    For many individuals at the Media Lab, who travel around the world to collaborate on research projects, present at conferences, and lead workshops, research-related air travel represents a huge proportion of their annual greenhouse-gas emissions. For example, a single economy-class seat on a flight from Boston, Massachusetts, to Los Angeles, California, is responsible for the same carbon emissions as 110 days of driving a car. Several labbers wanted to do more to educate the Media Lab community about the impact of our collective air travel and improve the lab’s sustainability.
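
    The article does not show the methodology behind the 110-day figure, but a back-of-envelope calculation with assumed emission factors lands in the same range:

        # Back-of-envelope reconstruction of the flight-vs-driving comparison.
        # Every factor below is an assumed round number, not the Media Lab's
        # actual methodology.
        flight_km = 4200           # approximate Boston-to-Los Angeles distance
        flight_kg_per_km = 0.25    # economy seat, incl. high-altitude effects
        flight_kg = flight_km * flight_kg_per_km         # ~1,050 kg CO2e

        car_km_per_day = 40        # assumed typical daily driving
        car_kg_per_km = 0.24       # average gasoline car
        car_kg_per_day = car_km_per_day * car_kg_per_km  # ~9.6 kg CO2e per day

        print(flight_kg / car_kg_per_day)  # ~110 days of driving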

    While the best way to reduce our carbon footprint would be to take fewer airplane flights, this solution isn’t always possible or desirable given the research opportunities that require air travel. Instead, research assistants Juliana Cherston, Natasha Jaques, and Caroline Jaffe decided to start a pilot program through which the Media Lab will buy high-quality carbon offsets to reduce the climate impact of the lab’s collective air travel. The program's website was designed and engineered by Craig Ferguson.

    Though carbon-offset programs have been criticized in the past for giving people an excuse for irresponsible climate behavior, carbon-offset verification has improved drastically in the past decade. When it is infeasible to reduce overall air travel mileage, the purchase of high-quality, verified carbon offsets will fund projects that produce renewable energy and avoid future carbon emissions. As part of the pilot program, the lab plans to buy carbon offsets through Gold Standard, a certified offset provider that verifies that its offset projects, like distributing clean cooking stoves, investing in wind power plants, and regenerating forests, both reduce carbon emissions and meet the United Nations' Sustainable Development Goals.

    During the six-month pilot program, the project leaders are asking members of the Media Lab community to log their lab-related air miles through a simple web interface. At the end of each month, they will tally the air miles traveled by the community, calculate the carbon emissions associated with those flights, and purchase offsets through Gold Standard to offset the impact of those flights. The organizers hope the program will spark a discussion about climate behavior while contributing to a global model of sustainability.
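
    A minimal sketch of that monthly tally appears below, assuming a flat emission factor and offset price; both are placeholders, since the actual figures depend on the flights and on the Gold Standard projects chosen.

        MILE_KM = 1.609
        KG_PER_PAX_KM = 0.2    # assumed flat economy-class emission factor
        USD_PER_TONNE = 15.0   # assumed offset price; real project prices vary

        def monthly_offsets(logged_miles):
            """Convert a month of logged air miles into tonnes of CO2e
            and an approximate offset purchase cost."""
            tonnes = sum(logged_miles) * MILE_KM * KG_PER_PAX_KM / 1000
            return tonnes, tonnes * USD_PER_TONNE

        tonnes, cost = monthly_offsets([5200, 1180, 3400])  # example logged trips
        print(f"{tonnes:.1f} t CO2e -> ${cost:.0f} in offsets")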

    While putting together the pilot program, the organizing team members ran into a few surprising data and design issues. First, they learned that gathering data — and knowing which data to collect — was trickier than expected. What exactly counts as “lab-related” travel, and is there some centralized system that tracks the lab’s air mileage? It turns out that no such system exists. While MIT maintains careful financial accounting, there hasn’t been a reason to specifically track mileage before, and the ability to do so is not built into the Institute’s accounting systems.

    The team also wrestled with interesting questions around user participation. While they wanted to encourage as many people as possible to participate in order to collect the most accurate travel data, they also didn’t want to incentivize people to travel more than they do already. And, they didn’t want people to feel absolved of responsibility because they knew their travel was being offset. In the process of putting together this pilot, the team learned of other groups at MIT and at other universities that are developing carbon-offset programs. Some of those programs are top-down: Offsets are automatically purchased through finance or logistics channels. Such programs don’t have to deal with user-participation challenges and likely have more accurate data totals, but they also miss the opportunity to engage the community in a substantive conversation around air travel emissions.

    After thinking carefully about goals for the project, the team decided that soliciting travel data from the community would do the most to raise awareness about the issue — and it was also a cheap and easy way to kick off a pilot. After launching the pilot several weeks ago, the team has received a few dozen messages communicating enthusiasm, asking questions, and raising concerns. They are planning to send monthly update emails to the Media Lab community, and host several discussion groups at the end of the pilot to evaluate the program and figure out what to do next. Through this pilot, the team hopes to learn about what makes an effective carbon-offsets program and pass this knowledge on to groups at MIT and other schools who are trying to implement university-wide offset programs.

    Read more at offset.media.mit.edu (and log your air miles if you’re at the Media Lab). When the pilot is complete, the team will publish a follow-up to share its findings.

    A version of this article was previously published by the MIT Media Lab.

