MIT Research News' Journal
Thursday, July 31st, 2014
12:00a | A market for emotions
Emotions can be powerful for individuals. But they’re also powerful tools for content creators, such as advertisers, marketers, and filmmakers. By tracking people’s negative or positive feelings toward ads — via traditional surveys and focus groups — agencies can tweak and tailor their content to better satisfy consumers.
Increasingly, over the past several years, companies developing emotion-recognition technology — which gauges subconscious emotions by analyzing facial cues — have aided agencies on that front.
Prominent among these companies is MIT spinout Affectiva, whose advanced emotion-tracking software, called Affdex, is based on years of MIT Media Lab research. Today, the startup is attracting some big-name clients, including Kellogg and Unilever.
Backed by more than $20 million in funding, the startup — which has amassed a vast facial-expression database — is also setting its sights on a “mood-aware” Internet that reads a user’s emotions to shape content. This could lead, for example, to more relevant online ads, as well as enhanced gaming and online-learning experiences.
“The broad goal is to become the emotion layer of the Internet,” says Affectiva co-founder Rana el Kaliouby, a former MIT postdoc who invented the technology. “We believe there’s an opportunity to sit between any human-to-computer, or human-to-human interaction point, capture data, and use it to enrich the user experience.”
Ads and apps
To use Affdex, Affectiva recruits participants to watch advertisements in front of the webcams on their computers, tablets, and smartphones. Machine-learning algorithms track facial cues, focusing primarily on the eyes, eyebrows, and mouth. A smile, for instance, registers as the corners of the lips curling upward and outward, a flash of teeth, and wrinkling of the skin around the eyes.
Affdex then infers the viewer’s emotions — such as enjoyment, surprise, anger, disgust, or confusion — and pushes the data to a cloud server, where Affdex aggregates the results from all the facial videos (sometimes hundreds), which it publishes on a dashboard.
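To make that pipeline concrete, here is a minimal sketch, in Python, of the aggregation step: per-frame emotion scores from many viewers are binned by time and averaged into the moment-by-moment trace a dashboard would plot. The function names, score fields, and data layout are illustrative assumptions, not Affectiva’s actual API.

    # Sketch of aggregating per-frame emotion scores from many viewers into a
    # moment-by-moment trace. Names and data layout are assumptions for
    # illustration; they do not reflect Affdex's actual API or schema.
    from collections import defaultdict
    from statistics import mean

    def aggregate_traces(sessions, emotions=("joy", "surprise", "confusion")):
        """sessions: one list per viewer of per-frame dicts, e.g.
        {"t": 1.2, "joy": 0.8, "surprise": 0.1, "confusion": 0.0}."""
        buckets = defaultdict(lambda: defaultdict(list))
        for frames in sessions:
            for frame in frames:
                second = int(frame["t"])          # bin frames by second of the ad
                for emotion in emotions:
                    buckets[second][emotion].append(frame.get(emotion, 0.0))
        # Average across viewers to get one dashboard trace per emotion.
        return {second: {e: mean(vals) for e, vals in scores.items()}
                for second, scores in sorted(buckets.items())}

    # Example: two viewers, two frames each.
    viewers = [[{"t": 0.5, "joy": 0.9}, {"t": 1.5, "joy": 0.4}],
               [{"t": 0.4, "joy": 0.7}, {"t": 1.6, "joy": 0.2}]]
    print(aggregate_traces(viewers, emotions=("joy",)))
    # roughly {0: {'joy': 0.8}, 1: {'joy': 0.3}}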
But determining whether a person “likes” or “dislikes” an advertisement takes advanced analytics. Importantly, the software checks whether an ad “hooks” viewers in its first third, noting increased attention and focus, signaled in part by less fidgeting and a steadier gaze.
Smiles can indicate that a commercial designed to be humorous is, indeed, funny. But if a smirk — subtle, asymmetric lip curls, separate from smiles — comes at a moment when information appears on the screen, it may indicate skepticism or doubt.
A furrowed brow may signal confusion or cognitive overload. “Sometimes that’s by design: You want people to be confused, before you resolve the problem. But if the furrowed brow persists throughout the ad, and is not resolved by the end, that’s a red flag,” el Kaliouby says.
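Those rules of thumb translate naturally into simple checks over an aggregated trace. The sketch below flags a weak “hook” and an unresolved furrowed brow; the thresholds and metric names are illustrative guesses, not Affectiva’s published criteria.

    # Toy heuristics over a per-second emotion trace, mirroring the rules
    # described above. Thresholds and key names are illustrative assumptions.
    def flag_ad(trace, attention_key="attention", brow_key="brow_furrow",
                hook_threshold=0.5, furrow_threshold=0.4):
        """trace: list of per-second dicts of averaged scores in [0, 1]."""
        n = len(trace)
        first_third = trace[: max(1, n // 3)]
        final_tenth = trace[-max(1, n // 10):]
        flags = []
        # "Hook": attention should be high early in the ad.
        hook = sum(s.get(attention_key, 0.0) for s in first_third) / len(first_third)
        if hook < hook_threshold:
            flags.append("weak hook: low attention in first third")
        # A furrowed brow that is never resolved by the end is a red flag.
        end_furrow = sum(s.get(brow_key, 0.0) for s in final_tenth) / len(final_tenth)
        if end_furrow > furrow_threshold:
            flags.append("confusion unresolved at end of ad")
        return flags

    trace = [{"attention": 0.3, "brow_furrow": 0.6} for _ in range(30)]
    print(flag_ad(trace))   # both flags fire for this synthetic 30-second trace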
Affectiva has been working with advertisers to optimize their marketing content for a couple of years. In a recent case study with Mars, for example, Affectiva found that the client’s chocolate ads elicited the highest emotional engagement, while its food ads elicited the least, helping predict short-term sales of these products.
In that study, some 1,500 participants from the United States and Europe viewed more than 200 ads to track their emotional responses, which were tied to the sales volume for different product lines. These results were combined with a survey to increase the accuracy of predicting sales volume.
“Clients usually take these responses and edit the ad, maybe make it shorter, maybe change around the brand reveal,” el Kaliouby says. “With Affdex, you see on a moment-by-moment basis who’s really engaged with the ad, and what’s working and what’s not.”
This year, the startup released a developer kit for mobile app designers. Still in their early stages, some of the resulting apps are designed for entertainment, such as letting people submit “selfies” to have their moods analyzed and share the results across social media.
Still others could help children with autism better interact, el Kaliouby says — such as games that ask players to match facial cues with emotions. “This would focus on pragmatic training, helping these kids understand the meaning of different facial expressions and how to express their own,” she says.
Entrenched in academia
While several companies are commercializing similar technology, Affectiva is unusual in that it is “entrenched in academia,” el Kaliouby says: Years of data-gathering have “trained” the algorithms to be very discerning.
As a PhD student at Cambridge University in the early 2000s, el Kaliouby began developing facial-coding software. She was inspired, in part, by her future collaborator and Affectiva co-founder, Rosalind Picard, an MIT professor who pioneered the field of affective computing — where machines can recognize, interpret, process, and simulate human affects.
Back then, the data that el Kaliouby had access to consisted of about 100 facial expressions gathered from photos — and those 100 expressions were fairly prototypical. “To recognize surprise, for example, we had this humongous surprise expression. This meant that if you showed the computer an expression of a person that’s somewhat surprised or subtly shocked, it wouldn’t recognize it,” el Kaliouby says.
In 2006, el Kaliouby came to the Media Lab to work with Picard to expand what the technology can do. Together, they quickly started applying the facial-coding technology to autism research and training the algorithms by collecting vast stores of data.
“Coming from a traditional research background, the Media Lab was completely different,” el Kaliouby says. “You prototype, prototype, prototype, and fail fast. It’s very startup-minded.”
Among their first prototypes was a Google Glass-type device with a camera that could read facial expressions and provide real-time feedback to the wearer via a Bluetooth headset. Auditory cues might tell the wearer, for instance, “This person is bored” or “This person is confused.”
However, inspired by increasing industry attention — and with a big push by Frank Moss, then the Media Lab’s director — they soon ditched the wearable prototype to build a cloud-based version of the software, founding Affectiva in 2009.
Early support from a group of about eight mentors at MIT’s Venture Mentoring Service helped the Affectiva team connect to industry and shape its pitch — by focusing on the value proposition, not the technology. “We learned to build a product story instead of a technology story — that was key,” el Kaliouby says.
To date, Affectiva has amassed a dataset of about 1.7 million facial expressions, roughly 2 billion data points, from people of all races, across 70 different countries — the largest facial-coding dataset in the world, el Kaliouby says — training its software’s algorithms to discern expressions from all different face types and skin colors. It can also track faces that are moving, in all types of lighting, and can avoid tracking any other movement on screen.
A “mood-aware” Internet
One of Affectiva’s long-term goals is to usher in a “mood-aware” Internet to improve users’ experiences. Imagine an Internet that’s like walking into a large outlet store with sales representatives, el Kaliouby says.
“At the store, the salespeople are reading your physical cues in real time, and assessing whether to approach you or not, and how to approach you,” she says. “Websites and connected devices of the future should be like this, very mood-aware.”
Sometime in the future, this could mean computer games that adapt their difficulty and other variables based on user reactions. But more immediately, it could work for online learning.
Already, Affectiva has conducted pilot work for online learning, capturing data on facial engagement to predict learning outcomes. For this, the software indicates, for instance, whether a student is bored, frustrated, or focused — which is especially valuable for prerecorded lectures, el Kaliouby says.
“To be able to capture that data, in real time, means educators can adapt that learning experience and change the content to better engage students — making it, say, more or less difficult — and change feedback to maximize learning outcomes,” el Kaliouby says. “That’s one application we’re really excited about.”
12:00a | Vision-correcting displays
Researchers at the MIT Media Laboratory and the University of California at Berkeley have developed a new display technology that automatically corrects for vision defects — no glasses (or contact lenses) required.
The technique could lead to dashboard-mounted GPS displays that farsighted drivers can consult without putting their glasses on, or electronic readers that eliminate the need for reading glasses, among other applications.
“The first spectacles were invented in the 13th century,” says Gordon Wetzstein, a research scientist at the Media Lab and one of the display’s co-creators. “Today, of course, we have contact lenses and surgery, but it’s all invasive in the sense that you either have to put something in your eye, wear something on your head, or undergo surgery. We have a different solution that basically puts the glasses on the display, rather than on your head. It will not be able to help you see the rest of the world more sharply, but today, we spend a huge portion of our time interacting with the digital world.”
Wetzstein and his colleagues describe their display in a paper they’re presenting in August at Siggraph, the premier graphics conference. Joining him on the paper are Ramesh Raskar, the NEC Career Development Professor of Media Arts and Sciences and director of the Media Lab’s Camera Culture group, and Berkeley’s Fu-Chung Huang and Brian Barsky.
Knowing the angles
The display is a variation on a glasses-free 3-D technology also developed by the Camera Culture group. But where the 3-D display projects slightly different images to the viewer’s left and right eyes, the vision-correcting display projects slightly different images to different parts of the viewer’s pupil.
A vision defect is a mismatch between the eye’s focal distance — the range at which it can actually bring objects into focus — and the distance of the object it’s trying to focus on. Essentially, the new display simulates an image at the correct focal distance — somewhere between the display and the viewer’s eye.
The difficulty with this approach is that simulating a single pixel in the virtual image requires multiple pixels of the physical display. The angle at which light should seem to arrive from the simulated image is sharper than the angle at which light would arrive from the same image displayed on the screen. So the physical pixels projecting light to the right side of the pupil have to be offset to the left, and the pixels projecting light to the left side of the pupil have to be offset to the right.
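In a simplified one-dimensional model (an illustrative assumption, not the paper's full light-field formulation), the required offset follows from similar triangles. For a virtual pixel at lateral position \(x_v\) on a plane a distance \(d_v\) from the eye, and a ray aimed at a point \(a\) on the pupil, the ray crosses the physical screen, at distance \(d_s > d_v\), at

\[ x_s = a + (x_v - a)\,\frac{d_s}{d_v}. \]

Because \(d_s > d_v\), rays destined for opposite sides of the pupil (opposite signs of \(a\)) land on screen pixels offset in opposite directions, which is exactly the left-right swap described above.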
The use of multiple on-screen pixels to simulate a single virtual pixel would drastically reduce the image resolution. But this problem turns out to be very similar to a problem that Wetzstein, Raskar, and colleagues solved in their 3-D displays, which also had to project different images at different angles.
The researchers discovered that there is, in fact, a great deal of redundancy between the images required to simulate different viewing angles. The algorithm that computes the image to be displayed onscreen can exploit that redundancy, allowing individual screen pixels to participate simultaneously in the projection of different viewing angles. The MIT and Berkeley researchers were able to adapt that algorithm to the problem of vision correction, so the new display incurs only a modest loss in resolution.
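The paper's actual factorization isn't reproduced here, but the flavor of the computation can be sketched as a shared inverse problem: one set of screen-pixel values must simultaneously approximate the target image for every pupil position, given a matrix describing how the pinhole mask routes each pixel's light. A generic nonnegative least-squares sketch with made-up dimensions, offered only as an assumption about the problem's shape:

    # Generic sketch of the shared-pixel idea, not the published algorithm.
    # P maps screen pixels to (pupil position, virtual pixel) rays; here it is
    # random purely for illustration.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    n_pixels, n_pupil_positions, n_virtual_pixels = 64, 5, 32

    P = rng.random((n_pupil_positions * n_virtual_pixels, n_pixels))
    # The same target image should be seen from every pupil position.
    target = np.tile(rng.random(n_virtual_pixels), n_pupil_positions)

    screen, residual = nnls(P, target)        # nonnegative pixel values shared by all views
    print(residual / np.linalg.norm(target))  # relative error of the shared solution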
In the researchers’ prototype, however, display pixels do have to be masked from the parts of the pupil for which they’re not intended. That requires that a transparency patterned with an array of pinholes be laid over the screen, blocking more than half the light it emits.
Multitasking
But early versions of the 3-D display faced the same problem, and the MIT researchers solved it by instead using two liquid-crystal displays (LCDs) in parallel. Carefully tailoring the images displayed on the LCDs to each other allows the system to mask perspectives while letting much more light pass through. Wetzstein envisions that commercial versions of a vision-correcting screen would use the same technique.
Indeed, he says, the same screens could both display 3-D content and correct for vision defects, all glasses-free. They could also reproduce another Camera Culture project, which diagnoses vision defects. So the same device could, in effect, determine the user’s prescription and automatically correct for it.
“Most people in mainstream optics would have said, ‘Oh, this is impossible,’” says Chris Dainty, a professor at the University College London Institute of Ophthalmology and Moorfields Eye Hospital. “But Ramesh’s group has the art of making the apparently impossible possible.”
“The key thing is they seem to have cracked the contrast problem,” Dainty adds. “In image-processing schemes with incoherent light — normal light that we have around us, nonlaser light — you’re always dealing with intensities. And intensity is always positive (or zero). Because of that, you’re always adding positive things, so the background just gets bigger and bigger and bigger. And the signal-to-background, which is contrast, therefore gets smaller as you do more processing. It’s a fundamental problem.”
Dainty believes that the most intriguing application of the technology is in dashboard displays. “Most people over 50, 55, quite probably see in the distance fine, but can’t read a book,” Dainty says. “In the car, you can wear varifocals, but varifocals distort the geometry of the outside world, so if you don’t wear them all the time, you have a bit of a problem. There, [the MIT and Berkeley researchers] have a great solution.”
10:15a | MIT economist Nancy Rose to take Department of Justice position
MIT economist Nancy Rose, an expert on firm behavior and the economics of regulated industries, has been named by the U.S. Department of Justice (DOJ) as deputy assistant attorney general for economic analysis. She will take a leave of absence from the Institute to assume the position, which formally begins on Sept. 8.
Rose, who is the Charles P. Kindleberger Professor of Applied Economics, will lead a staff of about 50 economists conducting research for DOJ’s Antitrust Division, while also working with DOJ leaders in establishing policy priorities.
“I became an economist because I was very interested in public policies at the intersection of competition policy and regulation,” Rose told MIT News. “This is an exciting opportunity to pivot from research to a direct role in applying economics to guide public policy that promotes a competitive and open marketplace.”
Economists in DOJ’s Antitrust Division work with their legal counterparts in the division to assess the likely impact of proposed mergers on consumers and market outcomes; to investigate potential anti-competitive practices in markets; and to provide policy guidance on practices that may impede marketplace competition.
“There is a large and extraordinarily talented group of professional economists within the division,” Rose said. “I hope to contribute to their effectiveness, and I expect to learn from them as well.”
Rose received her undergraduate degree in economics and government at Harvard University in 1980, and her PhD in economics from MIT in 1985. Her first faculty position was at the MIT Sloan School of Management, which she joined in 1985. She has been on the faculty of the Department of Economics since 1994. Rose has also served as the director of the National Bureau of Economic Research’s program in industrial organization since its establishment in 1991.
Rose has published extensively on industries including electricity generation and transmission, airlines, and other transportation sectors. She has received multiple teaching awards, and was named a Margaret MacVicar Faculty Fellow in 2012, MIT’s highest honor for undergraduate teaching.
“My teaching increasingly has used antitrust cases to illustrate both competitive strategy and policy issues,” Rose noted. “I’ve enjoyed using Department of Justice actions to tee up discussions of current issues in competition policy. It’s exciting to now have the chance to be part of that process.”
Rose will replace Aviv Nevo, a Northwestern University economist who has held the position since April 2013.
3:15p | Going to the Red Planet
Whenever the first NASA astronauts arrive on Mars, they will likely have MIT to thank for the oxygen they breathe — and for the oxygen needed to burn rocket fuel that will launch them back home to Earth.
On Thursday, NASA announced the seven instruments that will accompany Mars 2020, a planned $1.9 billion roving laboratory similar to the Mars Curiosity rover currently cruising the Red Planet. Key among these instruments is an MIT-led payload known as MOXIE, which will play a leading role in paving the way for human exploration of our ruddy planetary neighbor.
MOXIE — short for Mars OXygen In situ resource utilization Experiment — was selected from 58 instrument proposals submitted by research teams around the world. The experiment, currently scheduled to launch in the summer of 2020, is a specialized reverse fuel cell whose primary function is to consume electricity in order to produce oxygen on Mars, where the atmosphere is 96 percent carbon dioxide. If proven to work on the Mars 2020 mission, a MOXIE-like system could later be used to produce oxygen on a larger scale, both for life-sustaining activities for human travelers and to provide liquid oxygen needed to burn the rocket fuel for a return trip to Earth.
“Human exploration of Mars will be a seminal event for the next generation, the same way the moon landing mission was for my generation,” says Michael Hecht, principal investigator of the MOXIE instrument and assistant director for research management at the MIT Haystack Observatory. “I welcome this opportunity to move us closer to that vision.”
An oxygen factory on Mars
One of the main goals of the Mars 2020 mission will be to determine the potential habitability of the planet for human visitors. To that end, the MOXIE instrument will attempt to make oxygen out of native resources in order to demonstrate that it could be done on a larger scale for future missions.
To do this, MOXIE will be designed and built as what Hecht calls a “fuel cell run in reverse.” In a normal fuel cell, fuel is heated together with an oxidizer — often oxygen — producing electricity. In this case, however, electricity produced by a separate machine would be combined with carbon dioxide from the Martian air to produce oxygen and carbon monoxide in a process called solid oxide electrolysis.
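Written out, this is the standard solid oxide electrolysis of carbon dioxide, consistent with the description above:

\[
\begin{aligned}
\text{cathode:}\quad & \mathrm{CO_2} + 2e^- \rightarrow \mathrm{CO} + \mathrm{O^{2-}} \\
\text{anode:}\quad & 2\,\mathrm{O^{2-}} \rightarrow \mathrm{O_2} + 4e^- \\
\text{net:}\quad & 2\,\mathrm{CO_2} \rightarrow 2\,\mathrm{CO} + \mathrm{O_2}
\end{aligned}
\]

Oxide ions migrate through a hot ceramic electrolyte from cathode to anode, where they recombine into the breathable oxygen the instrument is meant to demonstrate.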
“It’s a pretty exotic way to run a fuel cell on Earth,” Hecht says, “but on Mars if you want to run an engine, you don’t have oxygen. Over 75 percent of what you would have to carry to run an engine on Mars would be oxygen.”
Of course, setting up a system to create oxygen that human explorers could breathe would be extremely helpful for a mission of any duration. But there’s an equally important reason to be able to produce oxygen onsite, Hecht says: “When we send humans to Mars, we will want them to return safely, and to do that they need a rocket to lift off the planet. That’s one of the largest pieces of the mass budget that we would need to send astronauts there and back. So if we can eliminate that piece by making the oxygen on Mars, we’re way ahead of the game.”
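A rough arithmetic illustration of that mass budget (the specific engine and mixture ratio are assumptions for illustration; neither is specified in the article): a methane/oxygen rocket engine typically burns on the order of 3.5 kilograms of liquid oxygen for every kilogram of fuel, so oxygen accounts for roughly

\[ \frac{3.5}{1 + 3.5} \approx 78\% \]

of the propellant mass. Making that oxidizer on Mars removes most of the mass that would otherwise have to be shipped from Earth.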
According to Hecht, a long-term plan for getting humans to Mars — and back — would look something like this: First, a small nuclear reactor would be sent to the Red Planet along with a scaled-up version of the MOXIE instrument. Over a couple of years, its oxygen tank would fill up in preparation for human visitors. Once the crew arrives, “they have their power source, they have their fuel, and the infrastructure for the mission is already in place,” Hecht says. “That’s the piece we’re after.”
Hecht adds that producing oxygen on the Martian surface is likely the simplest solution for a number of reasons. It would, for example, eliminate the difficulty and expense of sending liquid oxygen stores to Mars.
To be sure, MOXIE won’t be the only instrument aboard the Mars 2020 mission. It will occupy valuable space on a rover that will also conduct other important scientific experiments — such as searching the Martian soil for signs of life. So why do scientists and engineers need to demonstrate that they can produce oxygen on the surface, when they’re confident they can make that reaction happen on Earth?
“If you were one of those astronauts depending on an oxygen tank for your ride home, I think you’d like to see it tested on Mars before you go,” Hecht explains. “We want to invest in a simple prototype before we are convinced. We’ve never run a factory on Mars. But this is what we’re doing; we’re running a prototype factory to see what problems we might come up against.”
MIT connection
To develop MOXIE, MIT will partner with NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, Calif. JPL will lead design and development of the payload, while MIT will establish the mission architecture, oversee the development, and plan operations on the surface of Mars.
At MIT, MOXIE’s home will be the Haystack Observatory, an interdisciplinary research center in Westford, Mass., that specializes in radio science related to astronomy, atmospheric science, and applied measurement of the Earth known as geodesy. Hecht admits that developing and building the MOXIE instrument will be something of a departure from Haystack’s typical projects, but he and his colleagues are excited to take on the challenge.
“Haystack has been involved in the space program since its inception, even before it was officially Haystack Observatory,” Hecht says. “We really pride ourselves on our ability to pioneer new, ultraprecise scientific instrumentation and get it out into the field. We’re kind of a bridge between an engineering production shop and a fundamental science shop, so this plays to our strength in every way but the fact that there’s no radio science [on Mars 2020].”
Of course, Hecht and his Haystack colleagues won’t be working on MOXIE in a vacuum. The instrument will also benefit from the expertise of Jeff Hoffman, a former astronaut and professor of the practice in MIT’s Department of Aeronautics and Astronautics. Associate Professor of Nuclear Science and Engineering Bilge Yildiz, who has unique experience with the technology that will fly on the MOXIE experiment, will also play an important role.
“It’s a collaboration I never expected, between nuclear engineering, AeroAstro, and Haystack Observatory,” Hecht says. “But in the end, our leadership team ended up with a very competitive product.”
Humans key to future Mars exploration
If all goes according to plan, the Mars 2020 mission, with MOXIE in tow, will launch in July 2020. Assuming a safe landing and deployment, Hecht hopes the MOXIE instrument will transform the future of Martian exploration by demonstrating that humans can live directly off the land, with as few resources as possible shipped in from Earth.
When will humans actually get to Mars? An independent mission known as Mars One aims to send humans on a one-way trip to the Red Planet in 2024 — but critically, the explorers who have signed up for that mission know they won’t be returning. Sending humans on a government-funded return trip will take much more effort, both in terms of science and technology and political will.
“It’s not a science and engineering question, it’s a political and programmatic question,” argues Hecht, who believes it’s not unreasonable to think NASA could launch humans on a return trip to Mars in 20 years. “What that will take is, I’d say, a political bipartisan commitment, a sustained investment, and the best and brightest minds of the generation, just as Apollo did,” Hecht says. “It’s a really challenging project, just as Apollo was. It’s doable, it won’t break the bank in the United States, and we can afford it, but it’s a large commitment.
“I was thinking about [President John F.] Kennedy’s speech,” Hecht adds, “when he talked about going to the moon not because it’s easy, but because it’s hard. And I was really struck by what came after that quote, the part that nobody remembers. He said, ‘Because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win.’ That just said it. We will get to Mars in 20 years when we are willing to embrace that challenge.”
11:59p | Light pulses control graphene’s electrical behavior
Graphene, an ultrathin form of carbon with exceptional electrical, optical, and mechanical properties, has become a focus of research on a variety of potential uses. Now researchers at MIT have found a way to control how the material conducts electricity by using extremely short light pulses, which could enable its use as a broadband light detector.
The new findings are published in the journal Physical Review Letters, in a paper by graduate student Alex Frenzel, Nuh Gedik, and three others.
The researchers found that by controlling the concentration of electrons in a graphene sheet, they could change the way the material responds to a short but intense light pulse. If the graphene sheet starts out with low electron concentration, the pulse increases the material’s electrical conductivity. This behavior is similar to that of traditional semiconductors, such as silicon and germanium.
But if the graphene starts out with high electron concentration, the pulse decreases its conductivity — the same way that a metal usually behaves. Therefore, by modulating graphene’s electron concentration, the researchers found that they could effectively alter graphene’s photoconductive properties from semiconductorlike to metallike.
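One toy way to express the two competing effects (a simplification consistent with the mechanism the researchers describe below, not a formula from the paper): the light pulse both adds carriers and heats the carriers already present, so the change in conductivity looks roughly like

\[ \Delta\sigma \;\approx\; e\,\mu\,\Delta n_{\mathrm{photo}} \;+\; e\,n\,\Delta\mu(T_e), \qquad \Delta\mu(T_e) < 0, \]

where the first, carrier-adding term dominates at low electron concentration \(n\) (semiconductor-like, \(\Delta\sigma > 0\)) and the second, heating-induced loss of mobility dominates at high \(n\) (metal-like, \(\Delta\sigma < 0\)).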
The finding also explains the photoresponse of graphene reported previously by different research groups, which studied graphene samples with differing concentrations of electrons. “We were able to tune the number of electrons in graphene, and get either response,” Frenzel says.
To perform this study, the team deposited graphene on top of an insulating layer with a thin metallic film beneath it; by applying a voltage between the graphene and the bottom electrode, the researchers could tune the graphene’s electron concentration. They then illuminated the graphene with a strong light pulse and measured the change in electrical conduction by assessing the transmission of a second, low-frequency light pulse.
In this case, the laser performs dual functions. “We use two different light pulses: one to modify the material, and one to measure the electrical conduction,” Gedik says, adding that the pulses used to measure the conduction are much lower frequency than the pulses used to modify the material behavior. To accomplish this, the researchers developed a device that was transparent, Frenzel explains, to allow laser pulses to pass through it.
This all-optical method avoids the need for adding extra electrical contacts to the graphene. Gedik, the Lawrence C. and Sarah W. Biedenharn Associate Professor of Physics, says the measurement method that Frenzel implemented is a “cool technique. Normally, to measure conductivity you have to put leads on it,” he says. This approach, by contrast, “has no contact at all.”
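For thin films, the probe’s transmission can be converted into a sheet conductivity with a standard thin-film relation (a common analysis for such contactless measurements, offered here as background rather than as the paper's exact procedure):

\[ t(\omega) \;=\; \frac{1+n_{\mathrm{sub}}}{1+n_{\mathrm{sub}}+Z_0\,\sigma_s(\omega)} \quad\Longrightarrow\quad \sigma_s(\omega) \;=\; \frac{1+n_{\mathrm{sub}}}{Z_0}\left(\frac{1}{t(\omega)}-1\right), \]

where \(t\) is the transmission relative to the bare substrate, \(n_{\mathrm{sub}}\) is the substrate's refractive index, and \(Z_0 \approx 377\ \Omega\) is the impedance of free space.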
Additionally, the short light pulses allow the researchers to change and reveal graphene’s electrical response in only a trillionth of a second.
In a surprising finding, the team discovered that part of the conductivity reduction at high electron concentration stems from a unique characteristic of graphene: Its electrons travel at a constant speed, similar to photons, which causes the conductivity to decrease when the electron temperature increases under the illumination of the laser pulse. “Our experiment reveals that the cause of photoconductivity in graphene is very different from that in a normal metal or semiconductor,” Frenzel says.
The researchers say the work could aid the development of new light detectors with ultrafast response times and high sensitivity across a wide range of light frequencies, from the infrared to ultraviolet. While the material is sensitive to a broad range of frequencies, the actual percentage of light absorbed is small. Practical application of such a detector would therefore require increasing absorption efficiency, such as by using multiple layers of graphene, Gedik says.
Isabella Gierz, a professor at the Max Planck Institute for the Structure and Dynamics of Matter in Hamburg, Germany, who was not involved in this research, says, “The work is interesting because it presents a systematic study of the doping dependence of the low-energy dynamics, which has not received much attention so far.” She says the new research “certainly helps to reconcile previous apparently contradicting results,” and adds that these findings represent “a solid experiment, analysis, and interpretation.”
The research team also included Jing Kong, the ITT Career Development Associate Professor of Electrical Engineering at MIT, who provided the graphene samples used for the experiments; physics postdoc Chun Hung Lui; and Yong Cheol Shin, a graduate student in materials science and engineering. The work received support from the U.S. Department of Energy and the National Science Foundation.