MIT Research News' Journal
 

Thursday, December 19th, 2019

    9:23a
    A new way to remove contaminants from nuclear wastewater

    Nuclear power continues to expand globally, propelled, in part, by the fact that it produces few greenhouse gas emissions while providing steady power output. But along with that expansion comes an increased need for dealing with the large volumes of water used for cooling these plants, which becomes contaminated with radioactive isotopes that require special long-term disposal.

    Now, a method developed at MIT provides a way of substantially reducing the volume of contaminated water that needs to be disposed of, instead concentrating the contaminants and allowing the rest of the water to be recycled through the plant’s cooling system. The proposed system is described in the journal Environmental Science & Technology, in a paper by graduate student Mohammad Alkhadra, professor of chemical engineering Martin Bazant, and three others.

    The method makes use of a process called shock electrodialysis, which uses an electric field to generate a deionization shockwave in the water. The shockwave pushes the electrically charged particles, or ions, to one side of a tube filled with charged porous material, so that a concentrated stream of contaminants can be separated out from the rest of the water. The group discovered that two radionuclide contaminants — isotopes of cobalt and cesium — can be selectively removed from water that also contains boric acid and lithium. After the water stream is cleansed of its cobalt and cesium contaminants, it can be reused in the reactor.

    The shock electrodialysis process was initially developed by Bazant and his co-workers as a general method of removing salt from water, as demonstrated in their first scalable prototype four years ago. Now, the team has focused on this more specific application, which could help improve the economics and environmental impact of working nuclear power plants. In ongoing research, they are also continuing to develop a system for removing other contaminants, including lead, from drinking water.

    Not only is the new system inexpensive and scalable to large sizes, but in principle it also can deal with a wide range of contaminants, Bazant says. “It’s a single device that can perform a whole range of separations for any specific application,” he says.

    In their earlier desalination work, the researchers used measurements of the water’s electrical conductivity to determine how much salt was removed. In the years since then, the team has developed other methods for detecting and quantifying the details of what’s in the concentrated radioactive waste and the cleaned water.

    “We carefully measure the composition of all the stuff going in and out,” says Bazant, who is the E.G. Roos Professor of Chemical Engineering as well as a professor of mathematics. “This really opened up a new direction for our research.” They began to focus on separation processes that would be useful for health reasons or that would result in concentrating material that has high value, either for reuse or to offset disposal costs.

    The method they developed works for sea water desalination, but it is a relatively energy-intensive process for that application. The energy cost is dramatically lower when the method is used for ion-selective separations from dilute streams such as nuclear plant cooling water. For this application, which also requires expensive disposal, the method makes economic sense, he says. It also hits both of the team’s targets: dealing with high-value materials and helping to safeguard health. The scale of the application is also significant — a single large nuclear plant can circulate about 10 million cubic meters of water per year through its cooling system, Alkhadra says.

    For their tests of the system, the researchers used simulated nuclear wastewater based on a recipe provided by Mitsubishi Heavy Industries, which sponsored the research and is a major builder of nuclear plants. In the team’s tests, after a three-stage separation process, they were able to remove 99.5 percent of the cobalt radionuclides in the water while retaining about 43 percent of the water in cleaned-up form so that it could be reused. As much as two-thirds of the water can be reused if the cleanup level is cut back to 98.3 percent of the contaminants removed, the team found.
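    The reported figures imply roughly how each stage must perform. Here is a back-of-the-envelope sketch, under the assumption (not stated in the article) that each of the three stages recovers the same fraction of water and removes the same fraction of cobalt:

    ```python
    # Back-of-the-envelope mass balance for a three-stage separation.
    # Assumption (illustrative, not from the article): each stage recovers
    # the same fraction of water and removes the same fraction of cobalt.
    stages = 3
    overall_water_recovery = 0.43    # 43% of the water reusable
    overall_cobalt_removal = 0.995   # 99.5% of the cobalt removed

    # Per-stage water recovery compounds multiplicatively: w**3 = 0.43
    per_stage_water = overall_water_recovery ** (1 / stages)

    # Residual cobalt also compounds: (1 - r)**3 = 1 - 0.995
    per_stage_removal = 1 - (1 - overall_cobalt_removal) ** (1 / stages)

    print(f"per-stage water recovery ~ {per_stage_water:.0%}")
    print(f"per-stage cobalt removal ~ {per_stage_removal:.0%}")
    ```

    Under these assumptions, each stage would need to recover about 75 percent of its feed water while removing about 83 percent of the remaining cobalt.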

    While the overall method has many potential applications, the nuclear wastewater separation is “one of the first problems we think we can solve [with this method] that no other solution exists for,” Bazant says. No other practical, continuous, economic method has been found for separating out the radioactive isotopes of cobalt and cesium, the two major contaminants of nuclear wastewater, he adds.

    While the method could be used for routine cleanup, it could also make a big difference in dealing with more extreme cases, such as the millions of gallons of contaminated water at the damaged Fukushima Daiichi power plant in Japan, where the accumulation of that contaminated water has threatened to overwhelm the containment systems designed to prevent it from leaking into the adjacent Pacific. While the new system has so far only been tested at much smaller scales, Bazant says that such large-scale decontamination systems based on this method might be possible “within a few years.”

    The research team also included MIT postdocs Kameron Conforti and Tao Gao and graduate student Huanhuan Tian.

    9:31a
    Model beats Wall Street analysts in forecasting business financials

    Knowing a company’s true sales can help determine its value. Investors, for instance, often employ financial analysts to predict a company’s upcoming earnings using various public data, computational tools, and their own intuition. Now MIT researchers have developed an automated model that significantly outperforms humans in predicting business sales using very limited, “noisy” data.

    In finance, there’s growing interest in using imprecise but frequently generated consumer data — called “alternative data” — to help predict a company’s earnings for trading and investment purposes. Alternative data can comprise credit card purchases, location data from smartphones, or even satellite images showing how many cars are parked in a retailer’s lot. Combining alternative data with more traditional but infrequent ground-truth financial data — such as quarterly earnings, press releases, and stock prices — can paint a clearer picture of a company’s financial health on even a daily or weekly basis.

    But, so far, it’s been very difficult to get accurate, frequent estimates using alternative data. In a paper published this week in the Proceedings of the ACM SIGMETRICS Conference, the researchers describe a model for forecasting financials that uses only anonymized weekly credit card transactions and three-month earnings reports.

    Tasked with predicting quarterly earnings of more than 30 companies, the model outperformed the combined estimates of expert Wall Street analysts on 57 percent of predictions. Notably, the analysts had access to any available private or public data and other machine-learning models, while the researchers’ model used a very small dataset of the two data types.

    “Alternative data are these weird, proxy signals to help track the underlying financials of a company,” says first author Michael Fleder, a postdoc in the Laboratory for Information and Decision Systems (LIDS). “We asked, ‘Can you combine these noisy signals with quarterly numbers to estimate the true financials of a company at high frequencies?’ Turns out the answer is yes.”

    The model could give an edge to investors, traders, or companies looking to frequently compare their sales with competitors. Beyond finance, the model could help social and political scientists, for example, to study aggregated, anonymous data on public behavior. “It’ll be useful for anyone who wants to figure out what people are doing,” Fleder says.

    Joining Fleder on the paper is EECS Professor Devavrat Shah, who is the director of MIT’s Statistics and Data Science Center, a member of the Laboratory for Information and Decision Systems, a principal investigator for the MIT Institute for Foundations of Data Science, and an adjunct professor at the Tata Institute of Fundamental Research.  

    Tackling the “small data” problem

    For better or worse, a lot of consumer data is up for sale. Retailers, for instance, can buy credit card transactions or location data to see how many people are shopping at a competitor. Advertisers can use the data to see how their advertisements are impacting sales. But getting those answers still primarily relies on humans. No machine-learning model has been able to adequately crunch the numbers.

    Counterintuitively, the problem is actually a lack of data. Each financial input, such as a quarterly report or weekly credit card total, is only one number. Quarterly reports over two years total only eight data points. Credit card data for, say, every week over the same period is only roughly another 100 “noisy” data points, meaning they contain potentially uninterpretable information.

    “We have a ‘small data’ problem,” Fleder says. “You only get a tiny slice of what people are spending and you have to extrapolate and infer what’s really going on from that fraction of data.”

    For their work, the researchers obtained consumer credit card transactions — at typically weekly and biweekly intervals — and quarterly reports for 34 retailers from 2015 to 2018 from a hedge fund. Across all companies, they gathered 306 quarters-worth of data in total.

    Computing daily sales is fairly simple in concept. The model assumes a company’s daily sales remain similar, only slightly decreasing or increasing from one day to the next. Mathematically, that means each day’s sales equal the previous day’s sales multiplied by some constant, plus a statistical noise term — which captures some of the inherent randomness in a company’s sales. Tomorrow’s sales, for instance, equal today’s sales multiplied by, say, 0.998 or 1.01, plus the estimated number for noise.

    If given accurate model parameters for the daily constant and noise level, a standard inference algorithm can calculate that equation to output an accurate forecast of daily sales. But the trick is calculating those parameters.
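    The day-to-day sales model described above can be sketched as a simple simulation; the starting value, growth constant, and noise level here are illustrative, not the paper’s:

    ```python
    import random

    random.seed(42)

    def simulate_daily_sales(start, growth, noise_std, days):
        """Simulate sales where each day equals the previous day's sales
        times a near-one constant, plus Gaussian noise."""
        sales = [start]
        for _ in range(days - 1):
            sales.append(sales[-1] * growth + random.gauss(0, noise_std))
        return sales

    # One quarter (~90 days) of daily sales with a slight upward drift.
    daily = simulate_daily_sales(start=100.0, growth=1.002, noise_std=1.5, days=90)
    quarterly_total = sum(daily)
    print(f"quarterly total: {quarterly_total:,.0f}")
    ```

    Given accurate values for `growth` and `noise_std`, an inference algorithm can run this model forward to forecast daily sales; the hard part, as the article notes, is estimating those parameters in the first place.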

    Untangling the numbers

    That’s where quarterly reports and probability techniques come in handy. In a simple world, a quarterly report could be divided by, say, 90 days to calculate the daily sales (implying sales are roughly constant day-to-day). In reality, sales vary from day to day. Also, including alternative data to help understand how sales vary over a quarter complicates matters: Apart from being noisy, purchased credit card data always consist of some indeterminate fraction of the total sales. All that makes it very difficult to know how exactly the credit card totals factor into the overall sales estimate.

    “That requires a bit of untangling the numbers,” Fleder says. “If we observe 1 percent of a company’s weekly sales through credit card transactions, how do we know it’s 1 percent? And, if the credit card data is noisy, how do you know how noisy it is? We don’t have access to the ground truth for daily or weekly sales totals. But the quarterly aggregates help us reason about those totals.”

    To do so, the researchers use a variation of a standard inference algorithm, known as Kalman filtering or belief propagation, which has been used in various technologies from space shuttles to smartphone GPS. Kalman filtering uses data measurements observed over time, containing noise inaccuracies, to generate a probability distribution for unknown variables over a designated timeframe. In the researchers’ work, that means estimating the possible sales of a single day.

    To train the model, the technique first breaks down quarterly sales into a set number of measured days, say 90 — allowing sales to vary day-to-day. Then, it matches the observed, noisy credit card data to unknown daily sales. Using the quarterly numbers and some extrapolation, it estimates the fraction of total sales the credit card data likely represents. Then, it calculates each day’s fraction of observed sales, noise level, and an error estimate for how well it made its predictions.

    The inference algorithm plugs all those values into the formula to predict daily sales totals. Then, it can sum those totals to get weekly, monthly, or quarterly numbers. Across all 34 companies, the model beat a consensus benchmark — which combines estimates of Wall Street analysts — on 57.2 percent of 306 quarterly predictions.
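    A minimal scalar Kalman filter of the kind described can be sketched as follows. The parameters — a 1 percent observed fraction of sales and the noise variances — are illustrative assumptions, not the paper’s actual values, and the researchers’ method additionally estimates those parameters from the quarterly totals, whereas here they are assumed known:

    ```python
    import random

    random.seed(0)

    def kalman_filter(observations, a, f, q, r, x0, p0):
        """Scalar Kalman filter.
        State model:       sales[t+1] = a * sales[t] + process noise (variance q)
        Observation model: obs[t]     = f * sales[t] + obs noise (variance r),
        where f is the (small) fraction of sales seen in credit card data."""
        x, p = x0, p0
        estimates = []
        for y in observations:
            # Predict step: propagate the state and its uncertainty forward.
            x_pred = a * x
            p_pred = a * a * p + q
            # Update step: blend the prediction with the new observation.
            k = p_pred * f / (f * f * p_pred + r)   # Kalman gain
            x = x_pred + k * (y - f * x_pred)
            p = (1 - k * f) * p_pred
            estimates.append(x)
        return estimates

    # Simulate ground-truth daily sales and 1%-of-sales noisy observations.
    a, f, q, r = 1.0, 0.01, 1.0, 0.0025
    true_sales = [100.0]
    for _ in range(89):
        true_sales.append(a * true_sales[-1] + random.gauss(0, q ** 0.5))
    obs = [f * s + random.gauss(0, r ** 0.5) for s in true_sales]

    est = kalman_filter(obs, a, f, q, r, x0=90.0, p0=100.0)
    print(f"true final-day sales: {true_sales[-1]:.1f}, estimate: {est[-1]:.1f}")
    ```

    Even though each observation reveals only a noisy 1 percent slice of the true number, the filter’s running estimate tracks the hidden daily sales closely — which is the “untangling” the article describes.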

    Next, the researchers are designing the model to analyze a combination of credit card transactions and other alternative data, such as location information. “This isn’t all we can do. This is just a natural starting point,” Fleder says.

    8:00p
    Researchers produce first laser ultrasound images of humans

    For most people, getting an ultrasound is a relatively easy procedure: As a technician gently presses a probe against a patient’s skin, sound waves generated by the probe travel through the skin, bouncing off muscle, fat, and other soft tissues before reflecting back to the probe, which detects and translates the waves into an image of what lies beneath.

    Conventional ultrasound doesn’t expose patients to harmful radiation as X-ray and CT scanners do, and it’s generally noninvasive. But it does require contact with a patient’s body, and as such, may be limiting in situations where clinicians might want to image patients who don’t tolerate the probe well, such as babies, burn victims, or other patients with sensitive skin. Furthermore, ultrasound probe contact induces significant image variability, which is a major challenge in modern ultrasound imaging.

    Now, MIT engineers have come up with an alternative to conventional ultrasound that doesn’t require contact with the body to see inside a patient. The new laser ultrasound technique leverages an eye- and skin-safe laser system to remotely image the inside of a person. When trained on a patient’s skin, one laser remotely generates sound waves that bounce through the body. A second laser remotely detects the reflected waves, which researchers then translate into an image similar to conventional ultrasound.

    In a paper published today in the Nature journal Light: Science & Applications, the team reports generating the first laser ultrasound images in humans. The researchers scanned the forearms of several volunteers and observed common tissue features such as muscle, fat, and bone, down to about 6 centimeters below the skin. These images, comparable to conventional ultrasound, were produced using remote lasers focused on a volunteer from half a meter away.

    “We’re at the beginning of what we could do with laser ultrasound,” says Brian W. Anthony, a principal research scientist in MIT’s Department of Mechanical Engineering and Institute for Medical Engineering and Science (IMES), a senior author on the paper. “Imagine we get to a point where we can do everything ultrasound can do now, but at a distance. This gives you a whole new way of seeing organs inside the body and determining properties of deep tissue, without making contact with the patient.”

    Anthony’s co-authors on the paper are lead author and MIT postdoc Xiang (Shawn) Zhang, recent doctoral graduate Jonathan Fincke, along with Charles Wynn, Matthew Johnson, and Robert Haupt of MIT’s Lincoln Laboratory.

    Yelling into a canyon — with a flashlight

    In recent years, researchers have explored laser-based methods in ultrasound excitation in a field known as photoacoustics. Instead of directly sending sound waves into the body, the idea is to send in light, in the form of a pulsed laser tuned at a particular wavelength, that penetrates the skin and is absorbed by blood vessels.

    The blood vessels rapidly expand and relax — instantly heated by a laser pulse then rapidly cooled by the body back to their original size — only to be struck again by another light pulse. The resulting mechanical vibrations generate sound waves that travel back up, where they can be detected by transducers placed on the skin and translated into a photoacoustic image.

    While photoacoustics uses lasers to remotely probe internal structures, the technique still requires a detector in direct contact with the body in order to pick up the sound waves. What’s more, light can only travel a short distance into the skin before fading away. As a result, other researchers have used photoacoustics to image blood vessels just beneath the skin, but not much deeper.

    Since sound waves travel further into the body than light, Zhang, Anthony, and their colleagues looked for a way to convert a laser beam’s light into sound waves at the surface of the skin, in order to image deeper in the body. 

    Based on their research, the team selected 1,550-nanometer lasers, a wavelength that is highly absorbed by water (and is eye- and skin-safe with a large safety margin). As skin is essentially composed of water, the team reasoned that it should efficiently absorb this light, and heat up and expand in response. As it oscillates back to its normal state, the skin itself should produce sound waves that propagate through the body.

    The researchers tested this idea with a laser setup, using one pulsed laser set at 1,550 nanometers to generate sound waves, and a second continuous laser, tuned to the same wavelength, to remotely detect reflected sound waves.  This second laser is a sensitive motion detector that measures vibrations on the skin surface caused by the sound waves bouncing off muscle, fat, and other tissues. Skin surface motion, generated by the reflected sound waves, causes a change in the laser’s frequency, which can be measured. By mechanically scanning the lasers over the body, scientists can acquire data at different locations and generate an image of the region.

    “It’s like we’re constantly yelling into the Grand Canyon while walking along the wall and listening at different locations,” Anthony says. “That then gives you enough data to figure out the geometry of all the things inside that the waves bounced against — and the yelling is done with a flashlight.”

    In-home imaging

    The researchers first used the new setup to image metal objects embedded in a gelatin mold roughly resembling skin’s water content. They imaged the same gelatin using a commercial ultrasound probe and found both images were encouragingly similar. They moved on to image excised animal tissue — in this case, pig skin — where they found laser ultrasound could distinguish subtler features, such as the boundary between muscle, fat, and bone.

    Finally, the team carried out the first laser ultrasound experiments in humans, using a protocol that was approved by the MIT Committee on the Use of Humans as Experimental Subjects. After scanning the forearms of several healthy volunteers, the researchers produced the first fully noncontact laser ultrasound images of a human. The fat, muscle, and tissue boundaries are clearly visible and comparable to images generated using commercial, contact-based ultrasound probes.

    The researchers plan to improve their technique, and they are looking for ways to boost the system’s performance to resolve fine features in the tissue. They are also looking to hone the detection laser’s capabilities. Further down the road, they hope to miniaturize the laser setup, so that laser ultrasound might one day be deployed as a portable device.

    “I can imagine a scenario where you’re able to do this in the home,” Anthony says. “When I get up in the morning, I can get an image of my thyroid or arteries, and can have in-home physiological imaging inside of my body. You could imagine deploying this in the ambient environment to get an understanding of your internal state.” 

    This research was supported in part by the MIT Lincoln Laboratory Biomedical Line Program for the United States Air Force and by the U.S. Army Medical Research and Materiel Command's Military Operational Medicine Research Program.

