MIT Research News' Journal
Monday, February 6th, 2017
11:00a
Engineers harness stomach acid to power tiny sensors

Researchers at MIT and Brigham and Women’s Hospital have designed and demonstrated a small voltaic cell that is sustained by the acidic fluids in the stomach. The system can generate enough power to run small sensors or drug delivery devices that can reside in the gastrointestinal tract for extended periods of time.
This type of power could offer a safer and lower-cost alternative to the traditional batteries now used to power such devices, the researchers say.
“We need to come up with ways to power these ingestible systems for a long time,” says Giovanni Traverso, a research affiliate at the Koch Institute for Integrative Cancer Research. “We see the GI tract as providing a really unique opportunity to house new systems for drug delivery and sensing, and fundamental to these systems is how they are powered.”
Traverso, who is also a gastroenterologist and biomedical engineer at Brigham and Women’s Hospital, is one of the senior authors of the study. The others are Robert Langer, the David H. Koch Institute Professor at MIT; and Anantha Chandrakasan, head of MIT’s Department of Electrical Engineering and Computer Science and the Vannevar Bush Professor of Electrical Engineering and Computer Science. MIT postdoc Phillip Nadeau is the lead author of the paper, which appears in the Feb. 6 issue of Nature Biomedical Engineering.
Sustained by acid
Traverso and Langer have previously built and tested many ingestible devices that can be used to sense physiological conditions such as temperature, heart rate, and breathing rate, or to deliver drugs to treat diseases such as malaria.
“This work could lead to a new generation of electronic ingestible pills that could someday enable novel ways of monitoring patient health and/or treating disease,” Langer says.
These devices are usually powered by small batteries, but conventional batteries self-discharge over time and pose a possible safety risk. To overcome those disadvantages, Langer and Traverso worked with Nadeau and Chandrakasan, who specialize in developing low-power electronics.
The research team took inspiration from a very simple type of voltaic cell known as a lemon battery, which consists of two electrodes — often a galvanized nail and a copper penny — stuck in a lemon. The citric acid in the lemon carries a small electric current between the two electrodes.
To replicate that strategy, the researchers attached zinc and copper electrodes to the surface of their ingestible sensor. The zinc emits ions into the acid in the stomach to power the voltaic circuit, generating enough energy to power a commercial temperature sensor and a 900-megahertz transmitter.
In tests in pigs, the devices took an average of six days to travel through the digestive tract. While in the stomach, the voltaic cell produced enough energy to power a temperature sensor and to wirelessly transmit the data to a base station located 2 meters away, with a signal sent every 12 seconds.
Once the device moved into the small intestine, which is less acidic than the stomach, the cell generated only about 1/100 of the power it produced in the stomach. “But there’s still power there, which you could harvest over a longer period of time and use to transmit less frequent packets of information,” Traverso says.
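The duty-cycling logic Traverso describes can be sketched numerically. This is a hedged illustration with hypothetical numbers (the power and energy figures below are assumptions, not values from the paper); the point is only that a 100-fold drop in harvested power stretches the sustainable reporting interval 100-fold.

```python
# Sketch of energy budgeting for a harvested-power transmitter.
# All numbers are hypothetical, chosen only to illustrate the scaling.

def min_packet_interval(harvested_power_w, energy_per_packet_j):
    """Seconds of harvesting needed to bank enough energy for one packet."""
    return energy_per_packet_j / harvested_power_w

stomach_power_w = 100e-6   # assumed average harvested power in the stomach
packet_energy_j = 1.2e-3   # assumed energy cost of one wireless transmission

in_stomach = min_packet_interval(stomach_power_w, packet_energy_j)
# roughly 12 seconds between packets, matching the cadence in the story
in_intestine = min_packet_interval(stomach_power_w / 100, packet_energy_j)
# roughly 1200 seconds: same energy per packet, 100x less often
```

The same budget logic generalizes: halve the harvested power and the interval doubles, so the device trades reporting frequency for longevity rather than failing outright.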
“This paper reports an exciting and remarkably broad collection of advances in ‘ingestible’ electronics — from bioresorbable power supplies to energy efficient electronics, advanced sensors/actuators, and wireless communication systems,” says John Rogers, a professor of materials science and engineering at Northwestern University, who was not involved in the research. “These types of systems have great potential to address important clinical needs.”
Miniaturization
The current prototype of the device is a cylinder about 40 millimeters long and 12 millimeters in diameter, but the researchers anticipate that they could make the capsule about one-third that size by building a customized integrated circuit that would carry the energy harvester, transmitter, and a small microprocessor.
“A big challenge in implantable medical devices involves managing energy generation, conversion, storage, and utilization. This work allows us to envision new medical devices where the body itself contributes to energy generation enabling a fully self-sustaining system,” Chandrakasan says.
Once the researchers miniaturize the device, they anticipate adding other types of sensors and developing it for applications such as long-term monitoring of vital signs.
“You could have a self-powered pill that would monitor your vital signs from inside for a couple of weeks, and you don’t even have to think about it. It just sits there making measurements and transmitting them to your phone,” Nadeau says.
Such devices could also be used for drug delivery. In this study, the researchers demonstrated that they could use the power generated by the voltaic cell to release drugs encapsulated by a gold film. This could be useful for situations in which doctors need to try out different dosages of a drug, such as medication for controlling blood pressure.
The research was funded by Texas Instruments, the Semiconductor Research Corporation’s Center of Excellence for Energy Efficient Electronics, the Hong Kong Innovation and Technology Commission, the National Institutes of Health, and the Max Planck Research Award.

2:59p
Sensor traces dopamine released by single cells

MIT chemical engineers have developed an extremely sensitive detector that can track single cells’ secretion of dopamine, a brain chemical responsible for carrying messages involved in reward-motivated behavior, learning, and memory.
Using arrays of up to 20,000 tiny sensors, the researchers can monitor dopamine secretion of single neurons, allowing them to explore critical questions about dopamine dynamics. Until now, that has been very difficult to do.
“Now, in real-time, and with good spatial resolution, we can see exactly where dopamine is being released,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering and the senior author of a paper describing the research, which appears in the Proceedings of the National Academy of Sciences the week of Feb. 6.
Strano and his colleagues have already demonstrated that dopamine release occurs differently than scientists expected in a type of neural progenitor cell, helping to shed light on how dopamine may exert its effects in the brain.
The paper’s lead author is Sebastian Kruss, a former MIT postdoc who is now at Göttingen University, in Germany. Other authors are Daniel Salem and Barbara Lima, both MIT graduate students; Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences, as well as a member of the MIT Media Lab and the McGovern Institute for Brain Research; Lela Vukovic, an assistant professor of chemistry at the University of Texas at El Paso; and Emma Vander Ende, a graduate student at Northwestern University.
“A global effect”
Dopamine is a neurotransmitter that plays important roles in learning, memory, and feelings of reward, which reinforce positive experiences.
Neurotransmitters allow neurons to relay messages to nearby neurons through connections known as synapses. However, unlike most other neurotransmitters, dopamine can exert its effects beyond the synapse: Not all dopamine released into a synapse is taken up by the target cell, allowing some of the chemical to diffuse away and affect other nearby cells.
“It has a local effect, which controls the signaling through the neurons, but also it has a global effect,” Strano says. “If dopamine is in the region, it influences all the neurons nearby.”
Tracking this dopamine diffusion in the brain has proven difficult. Neuroscientists have tried using electrodes that are specialized to detect dopamine, but even using the smallest electrodes available, they can place only about 20 near any given cell.
“We’re at the infancy of really understanding how these packets of chemicals move and their directionality,” says Strano, who decided to take a different approach.
Strano’s lab has previously developed sensors made from arrays of carbon nanotubes — hollow, nanometer-thick cylinders made of carbon, which naturally fluoresce when exposed to laser light. By wrapping these tubes in different proteins or DNA strands, scientists can customize them to bind to different types of molecules.
The carbon nanotube sensors used in this study are coated with a DNA sequence that makes the sensors interact with dopamine. When dopamine binds to the carbon nanotubes, they fluoresce more brightly, allowing the researchers to see exactly where the dopamine was released. The researchers deposited more than 20,000 of these nanotubes on a glass slide, creating an array that detects any dopamine secreted by a cell placed on the slide.
Dopamine diffusion
In the new PNAS study, the researchers used these dopamine sensors to explore a longstanding question about dopamine release in the brain: From which part of the cell is dopamine secreted?
To help answer that question, the researchers placed individual neural progenitor cells known as PC-12 cells onto the sensor arrays. PC-12 cells, which develop into neuron-like cells under the right conditions, have a starfish-like shape with several protrusions that resemble axons, which form synapses with other cells.
After stimulating the cells to release dopamine, the researchers found that certain dopamine sensors near the cells lit up immediately, while those farther away turned on later as the dopamine diffused away. Tracking those patterns over many seconds allowed the researchers to trace how dopamine spreads away from the cells.
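The delayed turn-on of distant sensors follows the standard diffusion scaling, with spreading time growing as distance squared. A minimal sketch, assuming a generic small-molecule diffusion coefficient (the value of `D_M2_PER_S` below is a textbook-scale assumption, not a figure from the paper):

```python
# Diffusion-time scaling: a molecule released at a point needs roughly
# t ~ r^2 / (4*D) to spread a distance r across a surface.

D_M2_PER_S = 4e-10  # assumed diffusion coefficient, typical for small molecules

def arrival_time_s(r_m, d=D_M2_PER_S):
    """Rough time for released dopamine to reach sensors a distance r_m away."""
    return r_m ** 2 / (4 * d)

near = arrival_time_s(1e-6)    # sensor ~1 micrometer from the release site
far = arrival_time_s(10e-6)    # sensor ~10 micrometers away
# far / near == 100: ten times the distance means a hundred times the delay,
# which is why tracking over many seconds reveals the spreading pattern.
```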
Strano says one might expect to see that most of the dopamine would be released from the tips of the arms extending out from the cells. However, the researchers found that in fact more dopamine came from the sides of the arms.
“We have falsified the notion that dopamine should only be released at these regions that will eventually become the synapses,” Strano says. “This observation is counterintuitive, and it’s a new piece of information you can only obtain with a nanosensor array like this one.”
The team also showed that most of the dopamine traveled away from the cell, through protrusions extending in opposite directions. “Even though dopamine is not necessarily being released only at the tip of these protrusions, the direction of release is associated with them,” Salem says.
Other questions that could be explored using these sensors include how dopamine release is affected by the direction of input to the cell, and how the presence of nearby cells influences each cell’s dopamine release.
The research was funded by the National Science Foundation, the National Institutes of Health, a University of Illinois Center for the Physics of Living Cells Postdoctoral Fellowship, the German Research Foundation, and a Liebig Fellowship.

5:20p
Faculty highlight: Senthil Todadri

Mother Nature is like a restless child who fidgets even when at rest, because electrons are never completely at rest, even at the coldest temperatures, says Professor Senthil Todadri, a theoretician in the MIT Department of Physics. Imagine pushing a pendulum hanging from a clock. It swings back and forth, but eventually it comes to a complete stop: its velocity is zero, and it occupies a definite position in space. In the quantum world of electrons, knowing both of these properties, velocity and position, with ultimate precision is forbidden by the Heisenberg uncertainty principle.
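The principle the pendulum analogy alludes to can be written compactly; here $x$ is position, $p$ is momentum, and $\hbar$ is the reduced Planck constant:

```latex
% Heisenberg uncertainty principle: position and momentum cannot both
% be known with arbitrary precision.
\Delta x \, \Delta p \ge \frac{\hbar}{2}
% A pendulum at "rest" with exactly zero velocity and an exact position
% would violate this bound, which is why a quantum system retains
% zero-point motion even in its ground state.
```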
“Quantum mechanically, there is still some motion even in the default state, what we call the ground state,” Senthil says. (Although his legal name is Senthil Todadri, he publishes under the name T. Senthil.) “It’s unavoidable motion that’s there, even in the default state, the lowest energy state that a quantum system can find itself in. There is still some motion.” And this basic fact of nature underlies a variety of unusual behaviors of electrons in materials. Explaining these hard-to-observe physical conditions such as new forms of magnetism is Senthil’s life’s work.
Unraveling magnetism
Everyday permanent magnetism in materials such as iron, where opposite poles attract and like poles repel, has been known for some 2,600 years, Senthil notes; these are called ferromagnets, derived from the Latin word for iron. “A ferromagnet is something in which the electrons inside the material, they have tiny magnetic moments, and those tiny magnetic moments all line up together to form a giant magnet, and that’s what we see in the ordinary world as a ferromagnet,” he says.
Less than a century ago, physicists identified a new kind of emergent magnetism in materials that are called antiferromagnets. In an antiferromagnet, tiny magnetic moments of individual electrons are frozen in space, but the pattern, or direction, of these tiny moments, oscillates in space on an atomic length scale. “From a macroscopic point of view, if you take an antiferromagnet, it’s very hard to know it has any kind of magnetism at all,” he says. That’s because this magnetism varies on a scale of one angstrom to a few angstroms, which is the size of just a few atoms, and technology to detect it didn’t become available until the 20th century.
“But now we know that antiferromagnetism is by far the most common form of magnetism. If you look around in magnetic materials, there are many, many more antiferromagnets than there are ferromagnets. It’s just that they are much harder to detect,” Senthil explains.
While it can be studied by itself, magnetism is intimately related to electricity, since every electron, including a magnetic one, carries an electric charge. A different class of materials, called superconductors, lose their resistance to electricity at very cold temperatures, where textbook analysis breaks down, says Senthil. As they are chilled to a temperature of about 100 kelvins (-279.67 degrees Fahrenheit), these materials first show unusual metallic behavior, then become superconducting. Among materials that have been discovered to superconduct, many don’t do so until they reach a range from about 4 kelvins down to a fraction of 1 kelvin, or nearly absolute zero, the coldest possible temperature. So in the world of physics, superconducting at 100 kelvins is spoken of as high-temperature.
“From a fundamental science point of view what makes them remarkable is they seem to violate almost all of our textbook understanding of how electrons behave inside a solid,” Senthil explains. “Before they become superconducting, when the material is so hot that it’s not a superconductor yet, it’s a metal, but it’s a really unusual metal, and it’s out of this unusual metal that the superconductor is born.”
“Many of us think that understanding this unusual metal will give us a clue as to why the system is superconducting,” he says. “One of the hopes is that once we understand that clue, we can then have sensible ideas on what kinds of other materials might superconduct at relatively high temperatures and eventually maybe the field will discover room temperature superconductivity.”
These magnetic and superconducting behaviors appear to be intimately connected. One such material, lanthanum copper oxide, becomes an antiferromagnet at about 27 degrees Celsius (80 F). “Certainly above room temperature in Boston right now,” Senthil says on a chilly winter day. “This is a very famous material. It’s famous because if you take this material and you remove electrons from this material, using chemical methods, it’s precisely this material that becomes one of those superconductors that I mentioned before. If you remove electrons, it loses its magnetism, but it becomes something else, it becomes a superconductor. Of course, it’s superconducting only at temperatures much colder than room temperature.”
“The thing I would really like to understand — that’s been a goal for more than 10 years now — is to understand these metals that defy the textbook description of a metal. In some sense, to me that’s the most outstanding challenge in this entire field, and over the years, we’ve been making slow but steady progress, not just I, myself, but the community as a whole, but I would really like that progress to accelerate. That’s my home base; that’s the problem I am always thinking about. Everything else almost feels like I am doing on the side, but that’s hard. There has been progress, enough to keep me encouraged, but not enough for me to jump up and down. So that’s my pet project. That’s what I see as the main thing I feel like I want to have done by the time I’m ready to retire,” Senthil reveals.
Random tumbling
Yet another form of magnetism in solid materials is the quantum spin liquid, a name for magnetism drawn by analogy from fluid materials. In a spin liquid, Senthil says, individual electrons still have magnetic moments but they are randomly tumbling around all the time. “If you took a snapshot of what the magnetic moments are doing, different electrons with spins will be pointing in different directions, and if you took a different snapshot, the pattern would be completely different. So as a function of time, it’s moving all over the place, so there is no net magnetic moment. It’s neither a ferromagnet nor an antiferromagnet. We describe that by saying there is no magnetic ordering. ... That’s what’s called a spin liquid,” Senthil says.
Although there is no magnetic ordering, electrons in these spin liquids possess a property that no other form of magnetism has: Their magnetic moments are entangled quantum mechanically with the magnetic moments of other electrons far away from them. Quantum entanglement is one of the most counterintuitive concepts in physics. Its essence is that quantum mechanical systems that are separated from each other spatially still have some sort of contact with each other, what Einstein called “spooky action at a distance.” “The presence of quantum entanglement between distant parts of my sample, that is unusual, and it’s unprecedented in magnetism. So this is a new chapter in the study of magnetism. It leads to all kinds of unusual, bizarre phenomena that can potentially happen inside a solid,” Senthil explains.
An example of something unusual that can happen in these materials is that the electron may split into fractional pieces. How can that be when an electron is supposed to be one of the fundamental particles of the universe? “An electron is supposed to be a fundamental particle in the vacuum of the universe, but inside a solid, the presence of long-distance quantum entanglement enables the system to behave as some soup in which there are particle-like objects that move, but these objects are fractions of the electron,” he says. “What makes all these phenomena possible inside a system that looks very simple: It’s a collection of magnetic moments. What makes it possible is the long distance quantum mechanical entanglement that’s present between the localized magnetic moments. So I’ve been studying these kinds of magnetic matter. Not only is it fascinating by itself as a phenomenon, it turns out that studying these kinds of matter leads to all kinds of insights into other problems in condensed matter physics.”
In a February 2016 Physical Review B paper with Chong Wang PhD ’15, who now is a postdoc at the Society of Fellows at Harvard University, Senthil explored three-dimensional quantum spin liquids. Thinking about these unusual states of magnetism helped them map the connections between topological insulators, superconductors, and quantum spin liquids. “There are some deep connections between many different phenomena in the field in many different kinds of systems that we realized only recently just in 2015, 2016, which ended up in some cases solving, in some cases showing the way forward, on questions that have been open for more than 20 years in the field. So there is enormous progress that has come about in theoretical physics as a whole because of thinking about these kinds of novel states of magnetism,” he says.
Contradictory behavior
In particular, Senthil and Wang clarified how a particular metallic system chilled to very low temperatures in the presence of a large magnetic field can display the seemingly contradictory behavior of having both an overall current running at right angles to an applied voltage while at the same time carrying charged particles that were shown in experiment to move in a straight line with the voltage. A long-standing theory from the early ’90s by co-authors Patrick A. Lee at MIT, Bertrand I. Halperin at Harvard, and Nicholas Read at Yale University proposed that this phenomenon could be explained by thinking of each charge carrier as having an electron that attaches to itself a bit of magnetic flux, so that even though each such composite particle moves in a straight line, it carries both an electrical charge and a magnetic flux. This movement of magnetic flux produces an electric field in the perpendicular [or transverse] direction. But the Lee-Halperin-Read theory couldn’t explain a kind of physical symmetry in this system known as particle-hole symmetry, Senthil says. A hole, perhaps most familiar in the context of semiconducting materials, is the absence of an electron in an atom where one is expected. Symmetry means that whether you view the system as a collection of electrons or view it as a collection of holes, the system is the same. “These are just two different viewpoints on what’s really the same system,” Senthil says.
In experiments, replacing electrons with holes in this system didn’t make a physical difference, so it is considered to have particle-hole symmetry. Physicists model electrons mathematically through a complicated formula known as a wave function, which incorporates the electrons’ properties such as spin and momentum. Senthil says he and Wang were motivated by an idea proposed by Dam T. Son at the University of Chicago that this could be explained by thinking of the composite particle as carrying a “spin” that is locked into some definite angle to its momentum. A peculiar occurrence in quantum physics is that when the spin of this composite particle, just like the electron spin itself, is rotated in a full circle, the particle’s quantum mechanical wave function turns from a positive number to a negative number. By coupling spin to momentum in this system, Senthil explains, “What we now understand in this story is that these objects that move in straight lines inside this medium, inside this two-dimensional collection of electrons in a strong field, that they have this feature that if you rotate their momentum on a full circle, the wave function changes sign, so that’s a different theory from the older theory.”
The technical term for these spin-momentum coupled particles is Dirac particles, from the Dirac equation that quantifies their quantum state. “Going from electrons to holes, it turns out, flips the direction of the momentum of these Dirac particles, and it also has the effect of flipping, therefore, the direction of the spin, because the spin of this particle is tied to the momentum. So the momentum changes sign, the spin changes sign, so that’s all that happens,” he says. But importantly, it explains how the electron-hole symmetry acts in this system. Wang and Senthil derived this solution from their understanding of other phenomena in condensed matter physics, such as topological insulators and quantum spin liquids. “In the process, we ended up learning a lot about all of these different systems,” Senthil says.
Senthil notes that these ideas also were pursued independently by Max Metlitski, who recently joined the MIT Department of Physics faculty as assistant professor in the Condensed Matter Theory group, and Harvard professor of physics Ashvin Vishwanath. “These are friends of mine who it turns out were working on the same thing at the same time independently,” Senthil says. Vishwanath previously was a Pappalardo Fellow at MIT, serving as a postdoctoral associate in physics with Senthil.
Research group
Senthil supervises a small group of graduate students: Michael Pretko and Liujun Zou, who are working on quantum spin liquids, and Yahui Zhang, who is working on unconventional metallic states of matter through numerical calculations. “In experimental groups, the PI [principal investigator] tells the group what to work on. In theory groups, there is a lot more freedom. If a student comes to me and says, look, I have this idea, I encourage them to work on it, even if I am not working on it myself. ... It’s a great thing for students to come up with their own ideas,” Senthil says. He also works with Pappalardo Fellows Inti Sodemann and Itamar Kimchi and Moore Fellow Sam Lederer, advising them on their work on magnetism, unconventional metals, and superconductivity.
This spring, Senthil will teach an advanced graduate course on Many Body Quantum Mechanics. Over the past four years, he taught quantum mechanics for first-year graduate students.
Senthil is married and has two daughters, ages 7 and 14. “Whenever I find time, which is getting harder and harder, I try to read,” he says.

11:59p
Stars align in test supporting “spooky action at a distance”

Quantum entanglement may appear to be closer to science fiction than anything in our physical reality. But according to the laws of quantum mechanics — a branch of physics that describes the world at the scale of atoms and subatomic particles — quantum entanglement, which Einstein once skeptically viewed as “spooky action at a distance,” is, in fact, real.
Imagine two specks of dust at opposite ends of the universe, separated by several billion light years. Quantum theory predicts that, regardless of the vast distance separating them, these two particles can be entangled. That is, any measurement made on one will instantaneously convey information about the outcome of a future measurement on its partner. In that case, the outcomes of measurements on each member of the pair can become highly correlated.
If, instead, the universe behaves as Einstein imagined it should — with particles having their own, definite properties prior to measurement, and with local causes only capable of yielding local effects — then there should be an upper limit to the degree to which measurements on each member of the pair of particles could be correlated. Physicist John Bell quantified that upper limit, now known as “Bell’s inequality,” more than 50 years ago.
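The most widely used form of that bound, the CHSH version of Bell’s inequality, can be stated explicitly. Here $a, a'$ and $b, b'$ are the measurement settings at the two detectors, and $E$ is the correlation between outcomes:

```latex
% CHSH form of Bell's inequality: any local hidden-variable theory obeys
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 .
% Quantum mechanics allows entangled pairs to reach
% |S| = 2\sqrt{2} \approx 2.83 (Tsirelson's bound), so measured
% correlations "in excess of the limit" signal entanglement.
```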
In numerous previous experiments, physicists have observed correlations between particles in excess of the limit set by Bell’s inequality, which suggests that they are indeed entangled, just as predicted by quantum theory. But each such test has been subject to various “loopholes,” scenarios that might account for the observed correlations even if the world were not governed by quantum mechanics.
Now, physicists from MIT, the University of Vienna, and elsewhere have addressed a loophole in tests of Bell’s inequality, known as the freedom-of-choice loophole, and have presented a strong demonstration of quantum entanglement even when the vulnerability to this loophole is significantly restricted.
“The real estate left over for the skeptics of quantum mechanics has shrunk considerably,” says David Kaiser, the Germeshausen Professor of the History of Science and professor of physics at MIT. “We haven’t gotten rid of it, but we’ve shrunk it down by 16 orders of magnitude.”
A research team including Kaiser; Alan Guth, the Victor F. Weisskopf Professor of Physics at MIT and a researcher in the Laboratory for Nuclear Science; Andrew Friedman, an MIT research associate; and colleagues from the University of Vienna and elsewhere has published its results today in the journal Physical Review Letters.
Closing the door on quantum alternatives
The freedom-of-choice loophole refers to the idea that experimenters have total freedom in choosing their experimental setup, from the types of particles to entangle, to the measurements they choose to make on those particles. But what if there were some other factors or hidden variables correlated with the experimental setup, making the results appear to be quantumly entangled, when in fact they were the result of some nonquantum mechanism?
Physicists have attempted to address this loophole with extremely controlled experiments, in which they produce a pair of entangled photons from a single source, then send the photons to two different detectors and measure properties of each photon to determine their degree of correlation, or entanglement. To rule out the possibility that hidden variables may have influenced the results, researchers have used random number generators at each detector to decide what property of each photon to measure, in the split second between when the photon leaves the source and arrives at the detector.
But there is a chance, however slight, that hidden variables, or nonquantum influences, may affect a random number generator before it relays its split-second decision to the photon detector.
“At the heart of quantum entanglement is the high degree of correlations in the outcomes of measurements on these pairs [of particles],” Kaiser says. “But what if a skeptic or critic insisted these correlations weren’t due to these particles acting in a fully quantum mechanical way? We want to address whether there is any other way that those correlations could have snuck in without our having noticed.”
“Stars aligned”
In 2014, Kaiser, Friedman, and their colleague Jason Gallicchio (now a professor at Harvey Mudd College) proposed an experiment to use ancient photons from astronomical sources such as stars or quasars as “cosmic setting generators,” rather than random number generators on Earth, to determine the measurements to be made on each entangled photon. Such cosmic light would be arriving at Earth from objects that are very far away — anywhere from dozens to billions of light years away. Thus, if some hidden variables were to interfere with the randomness of the choice of measurements, they would have had to set those changes in motion before the light left the cosmic source, long before the experiment on Earth was conducted.
In this new paper, the researchers have demonstrated their idea experimentally for the first time. The team, including Professor Anton Zeilinger and his group at the University of Vienna and the Austrian Academy of Sciences, set up a source to produce highly entangled pairs of photons on the roof of a university laboratory in Vienna. In each experimental run, they shot the entangled photons out in opposite directions, toward detectors located in buildings several city blocks away — the Austrian National Bank and a second university building.
The researchers also set up telescopes at both detector sites and trained them on stars, the closest of which is about 600 light years away, which they had previously determined would send sufficient photons, or starlight, in their direction.
“On those nights, the stars aligned,” Friedman says. “And with bright stars like these, the number of photons coming in can be like a firehose. So we have these very fast detectors that can register detections of cosmic photons on subnanosecond timescales.”
“Out of whack” with Einstein
In the few microseconds before an entangled photon arrived at a detector, the researchers used each telescope to rapidly measure a property of an incoming stellar photon — in this case, whether its wavelength was redder or bluer than a particular reference wavelength. They then used this random property of the stellar photon, generated 600 years ago by its star, to determine what property of the incoming entangled photons to measure. In this case, red stellar photons signaled a detector to measure an entangled photon’s polarization in a particular direction. A blue stellar photon would set the device to measure the polarization of the entangled particle along a different direction.
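The setting choice described above amounts to a tiny decision rule. This is a hedged sketch; the reference wavelength and basis names below are hypothetical stand-ins, not values from the experiment:

```python
# Cosmic setting generator, schematically: the color of an incoming stellar
# photon, relative to a reference wavelength, selects which polarization
# basis the entangled photon is measured in at that detector.

REFERENCE_NM = 700.0  # hypothetical reference wavelength (assumption)

def choose_setting(stellar_photon_nm):
    """Map a stellar photon's wavelength to a polarization measurement basis."""
    if stellar_photon_nm > REFERENCE_NM:   # redder than the reference
        return "polarization_basis_A"
    return "polarization_basis_B"          # bluer than the reference

print(choose_setting(750.0))  # polarization_basis_A
```

The key property is that the rule's input was fixed centuries ago by the star, not by any apparatus on Earth in the microseconds before detection.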
The team conducted two experiments, with each experimental run lasting only three minutes. In each case, the researchers measured about 100,000 pairs of entangled photons. They found that the polarization measurements of the photon pairs were highly correlated, well in excess of the bound set by Bell’s inequality, in a way that is most easily explained by quantum mechanics.
“We find answers consistent with quantum mechanics to an enormously strong degree, and enormously out of whack with an Einstein-like prediction,” Kaiser says.
The results represent improvements by 16 orders of magnitude over previous efforts to address the freedom-of-choice loophole.
“All previous experiments could have been subject to this weird loophole to account for the results microseconds before each experiment, versus our 600 years,” Kaiser says. “So it’s a difference of a millionth of a second versus 600 years’ worth of seconds — 16 orders of magnitude.”
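Kaiser’s arithmetic is easy to verify; a back-of-envelope computation (using a 365.25-day year) reproduces the 16-orders-of-magnitude figure:

```python
import math

# Compare the "head start" a hidden-variable conspiracy would need:
# ~1 microsecond in earlier Earth-bound tests vs. 600 years of light travel.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.156e7 seconds

lookback_s = 600 * SECONDS_PER_YEAR     # ~1.9e10 seconds
previous_s = 1e-6                       # ~1 microsecond

orders = math.log10(lookback_s / previous_s)
print(round(orders))  # 16
```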
“This experiment pushes back the latest time at which the conspiracy could have started,” Guth adds. “We’re saying, in order for some crazy mechanism to simulate quantum mechanics in our experiment, that mechanism had to have been in place 600 years ago to plan for our doing the experiment here today, and to have sent photons of just the right messages to end up reproducing the results of quantum mechanics. So it’s very far-fetched.”
There is also a second, equally far-fetched possibility, says Michael Hall, a senior research fellow at Griffith University in Brisbane, Australia.
“When photons from the distant stars reach the devices that determine the measurement settings, it is possible that these devices act in some way to change the colors of the photons, in a way that is correlated with the laser producing the entanglement,” says Hall, who was not involved in the work. “This would only require a 10-microsecond-old conspiracy between the devices and the laser. However, the idea that photons don't show their ‘true colors’ when detected would overturn all observational astronomy and basic electromagnetism.”
This research was supported, in part, by the U.S. National Science Foundation and the Austrian Academy of Sciences.