MIT Research News' Journal
 

Monday, September 30th, 2019

    5:00a
    How to dismantle a nuclear bomb

    How do weapons inspectors verify that a nuclear bomb has been dismantled? An unsettling answer is: They don’t, for the most part. When countries sign arms reduction pacts, they do not typically grant inspectors complete access to their nuclear technologies, for fear of giving away military secrets.

    Instead, past U.S.-Russia arms reduction treaties have called for the destruction of the delivery systems for nuclear warheads, such as missiles and planes, but not the warheads themselves. To comply with the START treaty, for example, the U.S. cut the wings off B-52 bombers and left them in the Arizona desert, where Russia could visually confirm the airplanes’ dismemberment.

    It’s a logical approach but not a perfect one. Stored nuclear warheads might not be deliverable in a war, but they could still be stolen, sold, or accidentally detonated, with disastrous consequences for human society.

    “There’s a real need to preempt these kinds of dangerous scenarios and go after these stockpiles,” says Areg Danagoulian, an MIT nuclear scientist. “And that really means a verified dismantlement of the weapons themselves.”

    Now MIT researchers led by Danagoulian have successfully tested a new high-tech method that could help inspectors verify the destruction of nuclear weapons. The method uses neutron beams to establish certain facts about the warheads in question — and, crucially, uses an isotopic filter that physically encrypts the information in the measured data.

    A paper detailing the experiments, “A physically cryptographic warhead verification system using neutron induced nuclear resonances,” is being published today in Nature Communications. The authors are Danagoulian, who is the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering at MIT, and graduate student Ezra Engel. Danagoulian is the corresponding author.

    High-stakes testing

    The experiment builds on previous theoretical work by Danagoulian and other members of his research group, who last year published two papers detailing computer simulations of the system. The testing took place at the Gaerttner Linear Accelerator (LINAC) Facility on the campus of Rensselaer Polytechnic Institute, using a 15-meter-long section of the facility’s neutron-beam line.

    Nuclear warheads have a couple of characteristics that are central to the experiment. They tend to use particular isotopes of plutonium — varieties of the element that have different numbers of neutrons. And nuclear warheads have a distinctive spatial arrangement of materials.

    The experiments consisted of sending a horizontal neutron beam first through a proxy of the warhead, then through a lithium filter scrambling the information. The beam’s signal was then sent to a glass detector, where a signature of the data, representing some of its key properties, was recorded. The MIT tests were performed using molybdenum and tungsten, two metals that share significant properties with plutonium and served as viable proxies for it.

    The test works, first of all, because the neutron beam can identify the isotope in question.

    “At the low energy range, the neutrons’ interactions are extremely isotope-specific,” Danagoulian says. “So you do a measurement where you have an isotopic tag, a signal which itself embeds information about the isotopes and the geometry. But you do an additional step which physically encrypts it.”

    That physical encryption of the neutron beam information alters some of the exact details, but still allows scientists to record a distinct signature of the object and then use it to perform object-to-object comparisons. This alteration means a country can submit to the test without divulging all the details about how its weapons are engineered.

    “This encrypting filter basically covers up the intrinsic properties of the actual classified object itself,” Danagoulian explains.

    It would also be possible just to send the neutron beam through the warhead, record that information, and then encrypt it on a computer system. But the process of physical encryption is more secure, Danagoulian notes: “You could, in principle, do it with computers, but computers are unreliable. They can be hacked, while the laws of physics are immutable.”

    The MIT tests also included checks to make sure that inspectors could not reverse-engineer the process and thus deduce the weapons information countries want to keep secret.

    To conduct a weapons inspection, then, a host country would present a warhead to weapons inspectors, who could run the neutron-beam test on the materials. If it passes muster, they could run the test on every other warhead intended for destruction as well, and make sure that the data signatures from those additional bombs match the signature of the original warhead.

    For this reason, a country could not, say, present one real nuclear warhead to be dismantled, but bamboozle inspectors with a series of identical-looking fake weapons. And while many additional protocols would have to be arranged to make the whole process function reliably, the new method plausibly balances both disclosure and secrecy for the parties involved.
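
    The final comparison step is simple enough to illustrate in code. Below is a minimal sketch in Python of how encrypted signatures might be checked against one another; the array contents, tolerance value, and function name are hypothetical illustrations, not details from the study.

        import numpy as np

        def signatures_match(reference, candidate, tolerance=0.05):
            # `reference` and `candidate` are hypothetical 1-D arrays of
            # detector counts per energy bin, each recorded after the
            # neutron beam passed through a warhead and the encrypting
            # filter. The 5 percent tolerance is an arbitrary placeholder.
            reference = np.asarray(reference, dtype=float)
            candidate = np.asarray(candidate, dtype=float)
            # Normalize so the comparison is insensitive to overall
            # beam intensity and measurement time.
            reference = reference / reference.sum()
            candidate = candidate / candidate.sum()
            # A fake with the wrong isotopes or geometry shifts the
            # resonance features, producing large bin-by-bin deviations.
            return float(np.abs(reference - candidate).max()) < tolerance

    Because only the encrypted signatures are compared, a match tells inspectors that two objects are alike without revealing how either one is engineered.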

    The human element

    Danagoulian believes putting the new method through the testing stage has been a significant step forward for his research team.

    “Simulations capture the physics, but they don’t capture system instabilities,” Danagoulian says. “Experiments capture the whole world.”

    In the future, he would like to build a smaller-scale version of the testing apparatus, one that would be just 5 meters long and could be mobile, for use at all weapons sites.

    “The purpose of our work is to create these concepts, validate them, prove that they work through simulations and experiments, and then have the National Laboratories use them in their set of verification techniques,” Danagoulian says, referring to U.S. Department of Energy scientists.

    Karl van Bibber, a professor in the Department of Nuclear Engineering at the University of California at Berkeley, who has read the group’s papers, says “the work is promising and has taken a large step forward,” but adds that “there is yet a ways to go” for the project. More specifically, van Bibber notes, in the recent tests it was easier to detect fake weapons based on the isotopic characteristics of the materials rather than their spatial arrangements. He believes testing at the relevant U.S. National Laboratories — Los Alamos or Livermore — would help further assess the verification techniques on sophisticated missile designs.

    Overall, van Bibber adds, speaking of the researchers, “their persistence is paying off, and the treaty verification community has got to be paying attention.”

    Danagoulian also emphasizes the seriousness of nuclear weapons disarmament. A small cluster of several modern nuclear warheads, he notes, equals the destructive force of every armament fired in World War II, including the atomic bombs dropped on Hiroshima and Nagasaki. The U.S. and Russia possess about 13,000 nuclear weapons between them.

    “The concept of nuclear war is so big that it doesn’t [normally] fit in the human brain,” Danagoulian says. “It’s so terrifying, so horrible, that people shut it down.”

    Danagoulian also emphasizes that becoming a parent greatly increased his sense that action is needed on this issue, and helped spur the current research project.

    “It put an urgency in my head,” Danagoulian says. “Can I use my knowledge and my skill and my training in physics to do something for society and for my children? This is the human aspect of the work.”

    The research was supported, in part, by a U.S. Department of Energy National Nuclear Security Administration Award.

    11:00a
    Delivery system can make RNA vaccines more powerful

    Vaccines made from RNA hold great potential as a way to treat cancer or prevent a variety of infectious diseases. Many biotech companies are now working on such vaccines, and a few have gone into clinical trials.

    One of the challenges to creating RNA vaccines is making sure that the RNA gets into the right immune cells and produces enough of the encoded protein. Additionally, the vaccine must stimulate a strong enough response that the immune system can wipe out the relevant bacteria, viruses, or cancer cells when they are subsequently encountered.

    MIT chemical engineers have now developed a new series of lipid nanoparticles to deliver such vaccines. They showed that the particles trigger efficient production of the protein encoded by the RNA, and they also behave like an “adjuvant,” further boosting the vaccine’s effectiveness. In a study of mice, they used this RNA vaccine to successfully inhibit the growth of melanoma tumors.

    “One of the key discoveries of this paper is that you can build RNA delivery lipids that can also activate the immune system in important ways,” says Daniel Anderson, an associate professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science.

    Anderson is the senior author of the study, which appears in the Sept. 30 issue of Nature Biotechnology. The lead authors of the study are former postdocs Lei Miao and Linxian Li and former research associate Yuxuan Huang. Other MIT authors include Derfogail Delcassian, Jasdave Chahal, Jinsong Han, Yunhua Shi, Kaitlyn Sadtler, Wenting Gao, Jiaqi Lin, Joshua C. Doloff, and Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute.

    Vaccine boost

    Most traditional vaccines are made from proteins produced by infectious microbes, or from weakened forms of the microbes themselves. In recent years, scientists have explored the idea of making vaccines using DNA that encodes microbial proteins. However, these vaccines, which have not been approved for use in humans, have so far failed to produce strong enough immune responses.

    RNA is an attractive alternative to DNA in vaccines because unlike DNA, which has to reach the cell nucleus to become functional, RNA can be translated into protein as soon as it gets into the cell cytoplasm. It can also be adapted to target many different diseases.

    “Another advantage of these vaccines is that we can quickly change the target disease,” Anderson says. “We can make vaccines to different diseases very quickly just by tinkering with the RNA sequence.”

    For an RNA vaccine to be effective, it needs to enter a type of immune cell called an antigen-presenting cell. These cells then produce the protein encoded by the vaccine and display it on their surfaces, attracting and activating T cells and other immune cells.

    Anderson’s lab has previously developed lipid nanoparticles for delivering RNA and DNA for a variety of applications. These lipid particles form tiny droplets that protect RNA molecules and carry them to their destinations. The researchers’ usual approach is to generate libraries of hundreds or thousands of candidate particles with varying chemical features, then screen them for the ones that work the best.

    “In one day, we can synthesize over 1,000 lipid materials with multiple different structures,” Miao says. “Once we had that very large library, we could screen the molecules and see which type of structures help RNA get delivered to the antigen-presenting cells.”

    They discovered that nanoparticles with a certain chemical feature — a cyclic structure at one end of the particle — are able to turn on an immune signaling pathway called stimulator of interferon genes (STING). Once this pathway is activated, the cells produce interferon and other cytokines that provoke T cells to leap into action.

    “Broad applications”

    The researchers tested the particles in two different mouse models of melanoma. First, they used mice with tumors engineered to produce ovalbumin, a protein found in egg whites. The researchers designed an RNA vaccine to target ovalbumin, which is not normally found in tumors, and showed that the vaccine stopped tumor growth and significantly prolonged survival.

    Then, the researchers created a vaccine that targets a protein naturally produced by melanoma tumors, known as Trp2. This vaccine also stimulated a strong immune response that slowed tumor growth and improved survival rates in the mice.

    Anderson says he plans to pursue further development of RNA cancer vaccines as well as vaccines that target infectious diseases such as HIV, malaria, or Ebola.

    “We think there could be broad applications for this,” he says. “A particularly exciting area to think about is diseases where there are currently no vaccines.”

    The research was funded by Translate Bio and JDRF.

    2:59p
    This flat structure morphs into the shape of a human face when temperature changes

    Researchers at MIT and elsewhere have designed 3-D printed mesh-like structures that morph from flat layers into predetermined shapes, in response to changes in ambient temperature. The new structures can transform into configurations that are more complex than what other shape-shifting materials and structures can achieve.

    As a demonstration, the researchers printed a flat mesh that, when exposed to a certain temperature difference, deforms into the shape of a human face. They also designed a mesh embedded with conductive liquid metal that curves into a dome to form an active antenna, whose resonance frequency changes as it deforms.

    The team’s new design method can be used to determine the specific pattern of flat mesh structures to print, given the material’s properties, in order to make the structure transform into a desired shape.

    The researchers say that down the road, their technique may be used to design deployable structures, such as tents or coverings that automatically unfurl and inflate in response to changes in temperature or other ambient conditions.

    Such complex, shape-shifting structures could also be of use as stents or scaffolds for artificial tissue, or as deformable lenses in telescopes. Wim van Rees, assistant professor of mechanical engineering at MIT, also sees applications in soft robotics.

    “I’d like to see this incorporated in, for example, a robotic jellyfish that changes shape to swim as we put it in water,” says van Rees. “If you could use this as an actuator, like an artificial muscle, the actuator could be any arbitrary shape that transforms into another arbitrary shape. Then you’re entering an entirely new design space in soft robotics.”

    Van Rees and his colleagues are publishing their results this week in the Proceedings of the National Academy of Sciences. His co-authors are J. William Boley of Boston University; Ryan Truby, Arda Kotikian, Jennifer Lewis, and L. Mahadevan of Harvard University; Charles Lissandrello of Draper Laboratory; and Mark Horenstein of Boston University.

    Gift wrap’s limit

    Two years ago, van Rees came up with a theoretical design for how to transform a thin flat sheet into a complex shape such as a human face. Until then, researchers in the field of 4-D materials — materials designed to deform over time — had developed ways for certain materials to change, or morph, but only into relatively simple structures.

    “My goal was to start with a complex 3-D shape that we want to achieve, like a human face, and then ask, ‘How do we program a material so it gets there?’” van Rees says. “That’s a problem of inverse design.”

    He came up with a formula to compute the expansion and contraction that regions of a bilayer material sheet would have to achieve in order to reach a desired shape, and developed a code to simulate this in a theoretical material. He then put the formula to work, and visualized how the method could transform a flat, continuous disc into a complex human face.

    But he and his collaborators quickly found that the method wouldn’t apply to most physical materials, at least if they were trying to work with continuous sheets. While van Rees used a continuous sheet for his simulations, it was of an idealized material, with no physical constraints on the amount of expansion and contraction it could achieve. Most materials, in contrast, have very limited growth capabilities. This limitation has profound consequences for a property known as double curvature, the ability of a surface to curve simultaneously in two perpendicular directions, an effect described in an almost 200-year-old theorem by Carl Friedrich Gauss called the Theorema Egregium, Latin for “Remarkable Theorem.”

    If you’ve ever tried to gift wrap a soccer ball, you’ve experienced this concept in practice: To transform paper, which has no curvature at all, to the shape of a ball, which has positive double curvature, you have to crease and crumple the paper at the sides and bottom to completely wrap the ball. In other words, for the paper sheet to adapt to a shape with double curvature, it would have to stretch or contract, or both, in the necessary places to wrap a ball uniformly.
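
    In the standard statement of the theorem (textbook notation, not drawn from the team’s paper), the Gaussian curvature K at a point is the product of the two principal curvatures, and K is unchanged by any deformation that bends the surface without stretching it:

        K = \kappa_1 \kappa_2, \qquad K_{\mathrm{flat\ paper}} = 0, \qquad K_{\mathrm{sphere}} = \frac{1}{R^2} > 0

    Because no amount of pure bending can turn K = 0 into K > 0, the paper must locally stretch, contract, or crease to take on the ball’s shape.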

    To impart double curvature to a shape-shifting sheet, the researchers switched the basis of the structure from a continuous sheet to a lattice, or mesh. The idea was twofold: First, a temperature-induced bending of the lattice’s ribs would result in much larger expansions and contractions of the mesh nodes than could be achieved in a continuous sheet. Second, the voids in the lattice can easily accommodate large changes in surface area when the ribs are designed to grow at different rates across the sheet.

    The researchers also designed each individual rib of the lattice to bend by a predetermined degree in order to create the shape of, say, a nose rather than an eye-socket.

    For each rib, they incorporated four skinnier ribs, arranging two to line up atop the other two. All four miniribs were made from carefully selected variations of the same base material, to calibrate the required different responses to temperature.

    When the four miniribs were bonded together in the printing process to form one larger rib, the rib as a whole could curve due to the difference in temperature response between the materials of the smaller ribs: If one material is more responsive to temperature, it may prefer to elongate. But because it is bonded to a less responsive rib, which resists the elongation, the whole rib will curve instead.
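
    This is the familiar bimetallic-strip effect. As a point of reference (this is Timoshenko’s classic result for an idealized two-layer strip with equal layer thickness and stiffness, not the model used in the paper), a bonded pair whose layers have thermal expansion coefficients \alpha_1 and \alpha_2 curls under a temperature change \Delta T with curvature

        \kappa = \frac{3\,(\alpha_2 - \alpha_1)\,\Delta T}{2h}

    where h is the total thickness of the pair: a larger expansion mismatch, a larger temperature swing, or a thinner rib all yield a tighter curl.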

    The researchers can play with the arrangement of the four ribs to “preprogram” whether the rib as a whole curves up to form part of a nose, or dips down as part of an eye socket.

    Shapes unlocked

    To fabricate a lattice that changes into the shape of a human face, the researchers started with a 3-D image of a face — to be specific, the face of Gauss, whose principles of geometry underlie much of the team’s approach. From this image, they created a map of the distances a flat surface would require to rise up or dip down to conform to the shape of the face. Van Rees then devised an algorithm to translate these distances into a lattice with a specific pattern of ribs, and ratios of miniribs within each rib.
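
    The published algorithm is van Rees’s own, but the core idea admits a toy sketch. In the hypothetical Python below, the Laplacian-based curvature estimate and the quantization into a handful of rib “recipes” are illustrative assumptions, not the actual method.

        import numpy as np

        def rib_recipes(height_map, num_recipes=4):
            # `height_map` is a hypothetical 2-D array giving how far
            # each point of the flat sheet must rise or dip to match
            # the target face.
            z = np.asarray(height_map, dtype=float)
            gy, gx = np.gradient(z)
            gyy, _ = np.gradient(gy)
            _, gxx = np.gradient(gx)
            curvature = gxx + gyy  # Laplacian as a rough local target
            # Quantize into a few minirib arrangements, each of which
            # curls by a predetermined amount as the temperature changes.
            edges = np.linspace(curvature.min(), curvature.max(),
                                num_recipes + 1)
            return np.digitize(curvature, edges[1:-1])  # recipe per cell

    Each cell of the returned array would then correspond to one printed rib, with its minirib ratio chosen to deliver the required bend.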

    The team printed the lattice from PDMS, a common rubbery material that naturally expands when exposed to an increase in temperature. They adjusted the material’s temperature responsiveness by infusing one solution of it with glass fibers, making it physically stiffer and more resistant to a change in temperature. After printing lattice patterns of the material, they cured the lattice in a 250-degree-Celsius oven, then took it out and placed it in a saltwater bath, where it cooled to room temperature and morphed into the shape of a human face.


    The team also printed a latticed disc made from ribs embedded with a liquid metal ink — an antenna of sorts that changed its resonant frequency as the lattice transformed into a dome.

    Van Rees and his colleagues are currently investigating ways to apply the design of complex shape-shifting to stiffer materials, for sturdier applications, such as temperature-responsive tents and self-propelling fins and wings.

    This research was supported, in part, by the National Science Foundation and Draper Laboratory.

    3:20p
    MIT.nano awards inaugural NCSOFT seed grants for gaming technologies

    MIT.nano has announced the first recipients of NCSOFT seed grants to foster hardware and software innovations in gaming technology. The grants are part of the new MIT.nano Immersion Lab Gaming program, with inaugural funding provided by video game developer NCSOFT, a founding member of the MIT.nano Consortium.

    The newly awarded projects address topics such as 3-D/4-D data interaction and analysis, behavioral learning, fabrication of sensors, light field manipulation, and micro-display optics. 

    “New technologies and new paradigms of gaming will change the way researchers conduct their work by enabling immersive visualization and multi-dimensional interaction,” says MIT.nano Associate Director Brian W. Anthony. “This year’s funded projects highlight the wide range of topics that will be enhanced and influenced by augmented and virtual reality.”

    In addition to the sponsored research funds, each awardee will be given funds specifically to foster a community of collaborative users of MIT.nano’s Immersion Lab.

    The MIT.nano Immersion Lab is a new, two-story immersive space dedicated to visualization, augmented and virtual reality (AR/VR), and the depiction and analysis of spatially related data. Currently being outfitted with equipment and software tools, the facility will be available starting this semester for use by researchers and educators interested in using and creating new experiences, including the seed grant projects. 

    The five projects to receive NCSOFT seed grants are:

    Stefanie Mueller: connecting the virtual and physical world

    Virtual game play is often accompanied by a prop — a steering wheel, a tennis racket, or some other object the gamer uses in the physical world to create a reaction in the virtual game. Build-it-yourself cardboard kits have expanded access to these props by lowering costs; however, these kits are pre-cut, and thus limited in form and function. What if users could build their own dynamic props that evolve as they progress through the game?

    Department of Electrical Engineering and Computer Science (EECS) Professor Stefanie Mueller aims to enhance the user’s experience by developing a new type of gameplay with tighter virtual-physical connection. In Mueller’s game, the player unlocks a physical template after completing a virtual challenge, builds a prop from this template, and then, as the game progresses, can unlock new functionalities to that same item. The prop can be expanded upon and take on new meaning, and the user learns new technical skills by building physical prototypes.

    Luca Daniel and Micha Feigin-Almon: replicating human movements in virtual characters

    Athletes, martial artists, and ballerinas share the ability to move their body in an elegant manner that efficiently converts energy and minimizes injury risk. Professor Luca Daniel, EECS and Research Laboratory of Electronics, and Micha Feigin-Almon, research scientist in mechanical engineering, seek to compare the movements of trained and untrained individuals to learn the limits of the human body with the goal of generating elegant, realistic movement trajectories for virtual reality characters.

    In addition to use in gaming software, their research on different movement patterns will predict stresses on joints, which could lead to nervous system models for use by artists and athletes.

    Wojciech Matusik: using phase-only holograms

    Holographic displays are optimal for use in augmented and virtual reality. However, they still face critical issues: Out-of-focus objects look unnatural, and complex holograms have to be converted to phase-only or amplitude-only form in order to be physically realized. To combat these issues, EECS Professor Wojciech Matusik proposes to adopt machine learning techniques for the synthesis of phase-only holograms in an end-to-end fashion. Using a learning-based approach, the holograms could display visually appealing three-dimensional objects.

    “While this system is specifically designed for varifocal, multifocal, and light field displays, we firmly believe that extending it to work with holographic displays has the greatest potential to revolutionize the future of near-eye displays and provide the best experiences for gaming,” says Matusik.

    Fox Harrell: teaching socially impactful behavior

    Project VISIBLE — Virtuality for Immersive Socially Impactful Behavioral Learning Enhancement — utilizes virtual reality in an educational setting to teach users how to recognize, cope with, and avoid committing microaggressions. In a virtual environment designed by Comparative Media Studies Professor Fox Harrell, users will encounter micro-insults, followed by major microaggression themes. The user’s physical response drives the narrative of the scenario, so one person can play the game multiple times and reach different conclusions, thus learning the various implications of social behavior.

    Juejun Hu: displaying a wider field of view in high resolution

    Professor Juejun Hu from the Department of Materials Science and Engineering seeks to develop high-performance, ultra-thin immersive micro-displays for AR/VR applications. These displays, based on metasurface optics, will allow for a large, continuous field of view, on-demand control of optical wavefronts, high-resolution projection, and a compact, flat, lightweight engine. While current commercial waveguide AR/VR systems offer less than 45 degrees of visibility, Hu and his team aim to design a high-quality display with a field of view close to 180 degrees.

    3:50p
    Tracing the origins of air pollutants in India

    At any moment in Delhi, India, a resident might start their car, releasing exhaust that floats into the atmosphere. In northwest India, a farmer might set fire to his field after the wheat harvest to clear it quickly, releasing smoke that’ll be carried by the wind. A small family might burn wood to light their stove, releasing soot into the sky. Delhi, a city of more than 28 million residents, bustles with activity at all hours of the day and night. And as it grows, so does its pollution.

    The pollution, which manifests as thick smog and contributes to respiratory illness and disease, is the focus of many who hope to identify and eliminate its sources. But to do that accurately, the pollution must be tracked by research-grade air quality monitors that measure pollutants including particulate matter, sulfur dioxide, nitrogen dioxide, ozone, and more; such instruments can cost hundreds of thousands of dollars.

    Low-cost sensors, which have recently begun to be commercialized, offer scientists, policymakers, and the public the opportunity to detect pollution without high overhead costs — but not without some tradeoffs. Jesse Kroll, a professor in the MIT departments of Civil and Environmental Engineering and Chemical Engineering, researches the instruments and methods used to conduct atmospheric chemistry research. “In terms of nearly every measurement metric — precision, accuracy, sensitivity, interferences, drift, and so on — the low-cost sensors fall far short of what research-grade equipment can deliver,” he says. “This is a major limitation, but it usually isn’t made clear by the sensor manufacturers.” 

    As a result, Kroll says, the use of low-cost sensors to detect pollution remains poorly characterized. But the sensors’ lower cost, lower energy consumption, and smaller sizes incentivize their adoption, so their use has expanded significantly over the past few years in countries such as China and India. “The use of these instruments is really outpacing our efforts to understand what their data actually mean,” Kroll says.

    The challenge to clarify and expand the capabilities of low-cost sensors in pollution detection inspired a recently published study led by Kroll and graduate student David Hagan that compared the performance of low-cost sensors with research-grade equipment in Delhi — and found a new capability of the devices. 

    On the Indian Institute of Technology Delhi campus, research-grade instrumentation was already sampling the air from the fourth floor of a building in Hauz Khas, set up and maintained by Kroll and Hagan’s collaborators, Josh Apte and Lea Hildebrandt of the University of Texas at Austin. “We jumped at the opportunity to be able to co-locate our instruments with theirs to prove how well ours could work,” Hagan says. But it wasn’t easy: In Delhi, he says, the particulate matter levels were so high that their sensors would initially foul easily, and the sensors risked overheating on hot days. “Designing around that is a fun engineering challenge,” Hagan says.

    After overcoming those challenges, the low-cost sensors and research-grade monitors ran simultaneously over a six-week period in winter 2018, sampling the air from the fourth-floor balcony of a laboratory. After analyzing the data captured, the researchers found that the low-cost sensors, which measured both gases and particles, not only captured big-picture air quality and pollutant levels, but also could be used to infer the sources of pollutants, even those that the sensors cannot detect directly.

    By applying a type of multivariate analysis called non-negative matrix factorization, the researchers were able to identify, disentangle, and infer the sources that contributed to the total signal detected by the low-cost sensors, and compare those results to the more detailed measurements collected by the research-grade monitors. 

    That analysis revealed that the total particle signal measured from the air comprised a combustion factor as well as two other factors. The combustion particles, which constitute a large fraction of the total particulate matter, are too small to be detected by the sensors themselves, but sensor measurements of other co-emitted pollutants, such as carbon monoxide, allowed them to be inferred nonetheless.
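
    Non-negative matrix factorization decomposes the matrix of sensor readings over time into a small number of additive source factors. The following is a minimal sketch using scikit-learn; the random stand-in data, channel count, and parameter choices are illustrative, though the three components mirror the number of factors reported above.

        import numpy as np
        from sklearn.decomposition import NMF

        # Stand-in data matrix: rows are time points, columns are sensor
        # channels (for example CO, NO2, O3, and particle size bins),
        # scaled to be non-negative. Real measurements would replace
        # this random placeholder.
        rng = np.random.default_rng(0)
        X = rng.random((1000, 8))

        # Factorize X ~= W @ H. W gives each factor's strength over
        # time; H gives each factor's fingerprint across the channels.
        model = NMF(n_components=3, init="nndsvd", max_iter=500,
                    random_state=0)
        W = model.fit_transform(X)  # (time points x factors)
        H = model.components_       # (factors x channels)

    A factor whose fingerprint loads heavily on co-emitted gases such as carbon monoxide can be read as combustion, which is how the analysis infers particles too small for the sensors to detect directly.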

    “These low-cost sensors can be used for more than just making routine measurements, and can actually be used to identify sources of pollution that can lead to a better understanding of what we breathe,” Hagan says.

    Even further, the data collected by the low-cost sensors captured enough information about ambient Delhi pollution that the researchers could distinguish between primary sources of pollution, meaning directly emitted particles, and secondary sources, meaning particles formed via chemical reactions in the atmosphere after emission.

    Those types of information could make it easier to understand how air quality varies around the world. “One of the strengths of low-cost sensors is that they can provide information about air quality and pollution sources in places that are under-studied — and many of these places, such as cities in the developing world, tend to have some of the worst pollution in the world,” Kroll says. 

    “Using these low-cost sensors, we can really understand the spatial and temporal heterogeneity of air pollution and human exposure,” Hagan says. “That is much more relevant to how people actually live their lives.” 

    The results have already inspired future studies. “This is a crucial first step in improving urban air quality,” Kroll says. “We’d like to see if we can extend it to other environments and other types of pollution as well. This includes not only other polluted cities, but also relatively clean ones, such as Boston.” 

