MIT Research News' Journal
Thursday, February 20th, 2014
Closing the ‘free will’ loophole
In a paper published this week in the journal Physical Review Letters, MIT researchers propose an experiment that may close the last major loophole of Bell’s inequality — a 50-year-old theorem that, if violated by experiments, would mean that our universe is based not on the textbook laws of classical physics, but on the less-tangible probabilities of quantum mechanics.
Such a quantum view would allow for seemingly counterintuitive phenomena such as entanglement, in which the measurement of one particle instantly affects another, even if those entangled particles are at opposite ends of the universe. Among other things, entanglement — a quantum feature Albert Einstein skeptically referred to as “spooky action at a distance” — seems to suggest that entangled particles can affect each other instantly, faster than the speed of light.
In 1964, physicist John Bell took on this seeming disparity between classical physics and quantum mechanics, stating that if the universe is based on classical physics, the measurement of one entangled particle should not affect the measurement of the other — a theory, known as locality, in which there is a limit to how correlated two particles can be. Bell devised a mathematical formula for locality, and presented scenarios that violated this formula, instead following predictions of quantum mechanics.
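Bell’s formula itself is not reproduced in the article. For reference, the version most laboratory tests use is the CHSH form of the inequality, which bounds a combination S of the correlations E(a,b) measured with detector settings a, a' on one side and b, b' on the other:

```latex
% CHSH form of Bell's inequality (the form most laboratory tests use):
%   S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
% Any local hidden-variable ("classical") theory requires |S| <= 2,
% while quantum mechanics allows values up to 2*sqrt(2) for entangled pairs.
\[
  S = E(a,b) - E(a,b') + E(a',b) + E(a',b'),
  \qquad |S| \le 2 \ \text{(local realism)},
  \qquad |S| \le 2\sqrt{2} \approx 2.83 \ \text{(quantum mechanics)}.
\]
```

Measurements that push |S| above 2 are what the article means by particles being correlated more strongly than would be expected under classical physics.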
Since then, physicists have tested Bell’s theorem by measuring the properties of entangled quantum particles in the laboratory. Essentially all of these experiments have shown that such particles are correlated more strongly than would be expected under the laws of classical physics — findings that support quantum mechanics.
However, scientists have also identified several major loopholes in Bell’s theorem. These suggest that while the outcomes of such experiments may appear to support the predictions of quantum mechanics, they may actually reflect unknown “hidden variables” that give the illusion of a quantum outcome, but can still be explained in classical terms.
Though two major loopholes have since been closed, a third remains; physicists refer to it as “setting independence,” or more provocatively, “free will.” This loophole proposes that a particle detector’s settings may “conspire” with events in the shared causal past of the detectors themselves to determine which properties of the particle to measure — a scenario that, however far-fetched, implies that a physicist running the experiment does not have complete free will in choosing each detector’s setting. Such a scenario would result in biased measurements, suggesting that two particles are correlated more than they actually are, and giving more weight to quantum mechanics than classical physics.
“It sounds creepy, but people realized that’s a logical possibility that hasn’t been closed yet,” says MIT’s David Kaiser, the Germeshausen Professor of the History of Science and senior lecturer in the Department of Physics. “Before we make the leap to say the equations of quantum theory tell us the world is inescapably crazy and bizarre, have we closed every conceivable logical loophole, even if they may not seem plausible in the world we know today?”
Now Kaiser, along with MIT postdoc Andrew Friedman and Jason Gallicchio of the University of Chicago, has proposed an experiment to close this third loophole by determining a particle detector’s settings using some of the oldest light in the universe: distant quasars, or galactic nuclei, which formed billions of years ago.
The idea, essentially, is that if two quasars on opposite sides of the sky are sufficiently distant from each other, they would have been out of causal contact since the Big Bang some 14 billion years ago, with no possible means of any third party communicating with both of them since the beginning of the universe — an ideal scenario for determining each particle detector’s settings.
As Kaiser explains it, an experiment would go something like this: A laboratory setup would consist of a particle generator, such as a radioactive atom that spits out pairs of entangled particles. One detector measures a property of particle A, while another detector does the same for particle B. A split second after the particles are generated, but just before the detectors are set, scientists would use telescopic observations of distant quasars to determine which properties each detector will measure of a respective particle. In other words, quasar A determines the settings to detect particle A, and quasar B sets the detector for particle B.
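As a purely illustrative sketch (this code is not from the researchers’ paper), the following Python toy model mimics that logic: two independent random bit streams stand in for the quasar signals that pick each detector’s setting at the last moment, and pair outcomes are drawn from the textbook quantum prediction for entangled singlet-state particles, E(a, b) = -cos(a - b). The CHSH combination of the resulting correlations comes out near 2.83, beyond the classical limit of 2.

```python
# A toy Monte Carlo of the proposed setup -- an illustrative sketch only,
# not code from the Physical Review Letters paper.  Independent random bits
# stand in for the quasar signals that pick each detector's setting, and
# outcomes follow the quantum singlet-state prediction E(a, b) = -cos(a - b).
import math
import random

SETTINGS_A = [0.0, math.pi / 2]               # detector A settings: a, a'
SETTINGS_B = [math.pi / 4, 3 * math.pi / 4]   # detector B settings: b, b'

def run_trials(n=200_000, seed=0):
    """Estimate E(a, b) for each of the four setting pairs."""
    rng = random.Random(seed)
    sums = {(i, j): 0.0 for i in (0, 1) for j in (0, 1)}
    counts = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
    for _ in range(n):
        # "Quasar" bits choose the settings independently on each side,
        # standing in for causally disconnected sources.
        i, j = rng.randint(0, 1), rng.randint(0, 1)
        delta = SETTINGS_A[i] - SETTINGS_B[j]
        # Outcome A is a fair +/-1 coin; B is anti-correlated with
        # probability (1 + cos(delta)) / 2, which reproduces E = -cos(delta).
        a_out = rng.choice((-1, +1))
        b_out = -a_out if rng.random() < (1 + math.cos(delta)) / 2 else a_out
        sums[(i, j)] += a_out * b_out
        counts[(i, j)] += 1
    return {k: sums[k] / counts[k] for k in sums}

E = run_trials()
S = E[(0, 0)] - E[(0, 1)] + E[(1, 0)] + E[(1, 1)]
print(f"CHSH S = {S:.3f}  (classical bound 2, quantum maximum {2 * math.sqrt(2):.3f})")
```

A locality-respecting classical model of the same setup, in which each pair’s outcomes are fixed in advance by shared hidden variables, cannot push |S| above 2; that contrast is what a persistent violation with quasar-chosen settings would make decisive.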
The researchers reason that since each detector’s setting is determined by sources that have had no communication or shared history since the beginning of the universe, it would be virtually impossible for these detectors to “conspire” with anything in their shared past to give a biased measurement; the experimental setup could therefore close the “free will” loophole. If, after multiple measurements with this setup, scientists find that the particles’ measurements are correlated more strongly than classical physics predicts, Kaiser says, then the universe as we see it must instead be based on quantum mechanics.
“I think it’s fair to say this [loophole] is the final frontier, logically speaking, that stands between this enormously impressive accumulated experimental evidence and the interpretation of that evidence saying the world is governed by quantum mechanics,” Kaiser says.
Now that the researchers have put forth an experimental approach, they hope that others will perform actual experiments, using observations of distant quasars.
Physicist Michael Hall says that while the idea of using light from distant sources like quasars is not new, the group’s paper presents the first detailed analysis of how such an experiment could be carried out in practice, using current technology.
“It is therefore a big step to closing the loophole once and for all,” says Hall, a research fellow in the Centre for Quantum Dynamics at Griffith University in Australia. “I am sure there will be strong interest in conducting such an experiment, which combines cosmic distances with microscopic quantum effects — and most likely involving an unusual collaboration between quantum physicists and astronomers.”
“At first, we didn’t know if our setup would require constellations of futuristic space satellites, or 1,000-meter telescopes on the dark side of the moon,” Friedman says. “So we were naturally delighted when we discovered, much to our surprise, that our experiment was both feasible in the real world with present technology, and interesting enough to our experimentalist collaborators who actually want to make it happen in the next few years.”
Adds Kaiser, “We’ve said, ‘Let’s go for broke — let’s use the history of the cosmos since the Big Bang, darn it.’ And it is very exciting that it’s actually feasible.”
This research was funded by the National Science Foundation.
Rise of the compliant machines
Are we on the brink of a robotics revolution? That’s what numerous media outlets asked last December when Google acquired eight robotics companies that specialize in such innovations as manipulation, vision, and humanoid robots.
Among those acquisitions was MIT spinout Meka Robotics, co-founded by Aaron Edsinger SM ’01, PhD ’07, and Jeff Weber, a former research engineer in the Computer Science and Artificial Intelligence Laboratory.
Founded in 2006, Meka was an early creator of “compliant” humanoid robots that now work safely alongside humans in everyday environments — including factories and cramped research labs.
Based on the co-founders’ work at MIT, Meka’s sleek robotics hardware included adult-size arms and hands, as well as heads, torsos, and full-body systems with advanced control innovations, such as spring-based Series Elastic Actuators (SEAs) that provide torque control and measurement at each joint. All of Meka’s robots run on Meka M3 and Robot Operating System software, which allow for real-time communication.
The company is perhaps most notable for its M1 Mobile Manipulator, a $340,000 robotic humanoid that combines all of Meka’s hardware. Designed to lift and carry objects, the M1’s arms move smoothly and are equipped with strong grippers and with SEAs that allow the arms to slow down upon human touch. A customizable pan-tilt head comes with a Kinect 3-D camera, along with other digital cameras, for sensing objects. Its base is an omnidirectional platform with a mechanical lift that allows the torso to move vertically.
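As a rough illustration of what “compliance” means here (a generic sketch with made-up numbers, not Meka’s M3 software), a series elastic actuator places a spring of known stiffness between the motor and the joint. Joint torque can then be read from the spring’s deflection and regulated directly, which is what lets an arm yield when a person pushes on it:

```python
# Minimal sketch of series-elastic-actuator (SEA) torque sensing and control.
# Illustrative only; the stiffness and gain values are made-up examples, and
# this is not Meka's M3 codebase.
from dataclasses import dataclass

@dataclass
class SeriesElasticJoint:
    stiffness: float   # spring constant k, in N*m/rad (example value)
    kp: float = 5.0    # proportional gain on the torque error (assumed)

    def measured_torque(self, motor_angle: float, joint_angle: float) -> float:
        """Torque inferred from spring deflection: tau = k * (theta_m - theta_j)."""
        return self.stiffness * (motor_angle - joint_angle)

    def motor_command(self, desired_torque: float,
                      motor_angle: float, joint_angle: float) -> float:
        """Motor velocity command that drives the sensed torque toward the target.
        A push on the link changes the spring deflection; the controller sees
        that as a torque error and backs the motor off, so the arm yields
        instead of rigidly holding position."""
        error = desired_torque - self.measured_torque(motor_angle, joint_angle)
        return self.kp * error

# Example: hold a gentle 1 N*m target; a disturbance that winds up the spring
# produces a negative command, i.e. the motor gives way.
joint = SeriesElasticJoint(stiffness=100.0)
print(joint.motor_command(desired_torque=1.0, motor_angle=0.02, joint_angle=0.0))
```

Because the controller regulates torque rather than position, an unexpected push shows up as a torque error that the motor relieves rather than fights, which is the behavior described above as the arms slowing down upon human touch.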
Dozens of researchers today use Meka’s robotic hardware and software in labs around the world for advanced robotics research. “These are hardware platforms for research labs to develop algorithms for mobile manipulation, social robotics, and human-robot interaction,” says Edsinger, who was Meka’s chief executive officer.
Google’s other recent acquisitions have included MIT spinout Boston Dynamics, a military robot maker, and Redwood Robotics, a joint venture between Meka and the robotics firms Willow Garage and SRI International.
Co-founded by Edsinger, Redwood Robotics focused specifically on refining Meka’s robot arms, but with the larger aim of bringing manufacturing back stateside. “Designing arms is part of the story, but the bigger product solution is to fulfill that vision,” says Edsinger, now a robotics director at Google.
With Google’s acquisitions, Edsinger believes that robotics innovation is on the rise. “My hope,” he says, “is that we’re going to see as much energy and effort pooled into robotics startups in the next 10 years as we’ve seen in social media in the last 10.”
Aesthetics and engineering
While the technology behind Meka’s robots was novel in the mid-2000s, what continued to set the company apart in a burgeoning robotics landscape “was designing robots on human scale that had a focus on aesthetic packaging,” Edsinger says.
This is perhaps best showcased in Meka’s S2 Humanoid Heads, designed with expressive eyes and emotive ears. These were used to build “sociable” robots in collaboration with researchers across the nation.
Simon, a robot co-developed by Meka and researchers at the Georgia Institute of Technology, features a Meka humanoid head with 13 degrees of freedom (DOF), including independently moving eyes and eyelids, movable ears, and a five-DOF neck that replicates a human’s range of motion. It also conveys nonverbal cues through lifelike head motions, eye contact, and blinking.
Similar in specs is the “doe-eyed,” red-haired Dreamer, a head incorporated onto a robot co-developed by Meka and the University of Texas at Austin’s Human Centered Robotics group — which also uses Meka’s SEA-based compliant arms. Like Simon, it has seven DOF, with ears that curl and bend to display various emotions, such as confusion and understanding. Its eyes are equipped with cameras that track movements, and the head moves in whatever direction the eyes do.
The aim of the aesthetic designs for M1, Simon, Dreamer, and all the other Meka bots, Edsinger explains, is to help make people feel “affinity and trust” toward robots. But the designs are also inspired by the co-founders’ time as artists.
For five years before coming to MIT, Edsinger (who holds a bachelor’s degree in computer science from Stanford University) and Weber (a trained industrial designer) were visual artists in San Francisco, building anthropomorphic robotic sculptures for use in theatrical performances.
“As artists we valued aesthetics and design, and human interaction, and how these robotic systems relate to people,” Edsinger says. “That’s the mindset we came into MIT with and learned the chops of engineering.”
Building bots and a business
In MIT’s Humanoid Robotics Group, then led by professor and entrepreneur Rodney Brooks (of iRobot and Rethink Robotics fame), the co-founders built the Domo robot — which had 29 active DOF, sensors, SEA-integrated arms, four digital cameras, and other innovations that allowed it to work safely alongside humans.
After graduating, and while serving as a postdoc in Brooks’ lab, Edsinger had an unshakable urge to launch a robotics company, “where I could get out in the world and have an impact,” he says.
Without a proper business plan, Edsinger and Weber relocated to San Francisco and founded Meka, carrying over what they had learned building Domo. A few quick sales and contracts from researchers helped the company churn out its first commercial robotic arm in about nine months.
From there, Meka sold parts: an arm here, a hand there, a head, a torso, a base. Eventually, Meka started working with the Defense Advanced Research Projects Agency, building underwater humanoid robots, exoskeletons, and prosthetics, among other things.
“We took an incremental bootstrapping approach,” Edsinger says. “Every sale would finance the next iteration of engineering the robot. We stayed very diligent, trying to ensure that every little step forward could scale into a bigger opportunity.”
Soon, Edsinger says, they built the entire M1 Mobile Manipulator, “which allowed for a higher sales price.”
This “incremental bootstrapping” approach is something Edsinger says he soaked up from the business classes he took at the MIT Sloan School of Management. Another lesson: Surround yourself with people better than you at different aspects of technology and business. “In robotics it’s particularly important,” he says, “because it’s so multidisciplinary you can’t possibly cover all the bases. That’s one bit of advice I’ve taken to heart over the years.”
Circling back to Meka’s founding, Edsinger says the company launched initially to bring advanced robots to computer science labs. “At the time,” he says, “these labs could spend years building robotic systems to test robotic algorithms, but the robots were ultimately unreliable.”
But, he adds, Meka was ultimately a self-fulfilling project for two engineers and artists that happened to get big: “Really, we just enjoyed the hard engineering and design and wanted to build cool stuff. This was a fun way to do it.”