MIT Research News
The following are the titles of recent articles syndicated from MIT Research News

LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.

Thursday, August 20th, 2020
12:00 am
The factory of the future, batteries not included

Many analysts have predicted an explosion in the number of industrial “internet of things” (IoT) devices that will come online over the next decade. Sensors play a big role in those forecasts.

Unfortunately, sensors come with their own drawbacks, many of which are due to the limited energy supply and finite lifetime of their batteries.

Now the startup Everactive has developed industrial sensors that run around the clock, require minimal maintenance, and can last over 20 years. The company created the sensors not by redesigning the batteries, but by eliminating them altogether.

The key is Everactive’s ultra-low-power integrated circuits, which harvest energy from sources like indoor light and vibrations to power the sensors. The sensors continuously send data to Everactive’s cloud-based dashboard, which gives users real-time insights, analysis, and alerts to help them leverage the full power of industrial IoT devices.

“It’s all enabled by the ultra-low-power chips that support continuous monitoring,” says Everactive Co-Chief Technology Officer David Wentzloff SM ’02, PhD ’07. “Because our source of power is unlimited, we’re not making tradeoffs like keeping radios off or doing something else [limiting] to save battery life.”

Everactive builds finished products on top of its chips that customers can quickly deploy in large numbers. Its first product monitors steam traps, which release condensate out of steam systems. Such systems are used in a variety of industries, and Everactive’s customers include companies in sectors like oil and gas, paper, and food production. Everactive has also developed a sensor to monitor rotating machinery, like motors and pumps, that runs on the second generation of its battery-free chips.

By avoiding the costs and restrictions associated with other sensors, the company believes it’s well-positioned to play a role in the IoT-powered transition to the factory of the future.

“This is technology that’s totally maintenance free, with no batteries, powered by harvested energy, and always connected to the cloud. There’s so many things you can do with that, it’s hard to wrap your head around,” Wentzloff says.

Breaking free from batteries

Wentzloff and his Everactive co-founder and co-CTO Benton Calhoun SM ’02, PhD ’06 have been working on low-power circuit design for more than a decade, beginning with their time at MIT. They both did their PhD work in the lab of Anantha Chandrakasan, who is currently the Vannevar Bush Professor of Electrical Engineering and Computer Science and the dean of MIT’s School of Engineering. Calhoun’s research focused on low-power digital circuits and memory while Wentzloff’s focused on low-power radios.

After earning their PhDs, both men became assistant professors at the schools they attended as undergraduates — Wentzloff at the University of Michigan and Calhoun at the University of Virginia — where they still teach today. Even after settling in different parts of the country, they continued collaborating, applying for joint grants and building circuit-based systems that combined their areas of research.

The collaboration was not an isolated incident: The founders have maintained relationships with many of their contacts from MIT.

“To this day I stay in touch with my colleagues and professors,” Wentzloff says. “It’s a great group to be associated with, especially when you talk about the integrated circuit space. It’s a great community, and I really value and appreciate that experience and those connections that have come out of it. That’s far and away the longest impression MIT has left on my career, those people I continue to stay in touch with. We’re all helping each other out.”

Wentzloff and Calhoun’s academic labs eventually created a battery-free physiological monitor that could track a user’s movement, temperature, heart rate, and other signals and send that data to a phone, all while running on energy harvested from body heat.

“That’s when we decided we should look at commercializing this technology,” Wentzloff says.

In 2014, they partnered with semiconductor industry veteran Brendan Richardson to launch the company, originally called PsiKick.

In the beginning, a period Wentzloff describes as “three guys and a dog in a garage,” the founders sought to reimagine circuit designs that included the features of full computing systems: sensor interfaces, processing, memory, and radios. They also needed to incorporate energy-harvesting mechanisms and power management capabilities.

“We wiped the slate clean and had a fresh start,” Wentzloff recalls.

The founders initially attempted to sell their chips to companies to build solutions on top of, but they quickly realized the industry wasn’t familiar enough with battery-free chips.

“There’s an education level to it, because there’s a generation of engineers used to thinking of systems design with battery-operated chips,” Wentzloff says.

The learning curve led the founders to start building their own solutions for customers. Today Everactive offers its sensors as part of a wider service that incorporates wireless networks and data analytics.

The company’s sensors can be powered by small vibrations, lights inside a factory as dim as 100 lux, and heat differentials below 10 degrees Fahrenheit. The devices can sense temperature, acceleration, vibration, pressure, and more.

The company says its sensors cost significantly less to operate than traditional sensors and avoid the maintenance headache that comes with deploying thousands of battery-powered devices.

For instance, Everactive considered the cost of deploying 10,000 traditional sensors. Assuming a three-year battery life, the customer would need to replace an average of 3,333 batteries each year, which comes out to more than nine a day.
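The maintenance arithmetic above is easy to verify. A minimal sketch, using the article's own illustrative figures:

```python
# Maintenance math from the article: a fleet of 10,000 battery-powered
# sensors, each battery lasting three years on average.
fleet_size = 10_000
battery_life_years = 3
days_per_year = 365

replacements_per_year = fleet_size / battery_life_years
replacements_per_day = replacements_per_year / days_per_year

print(f"{replacements_per_year:.0f} batteries per year")  # ~3333
print(f"{replacements_per_day:.1f} batteries per day")    # ~9.1
```

At roughly nine battery swaps every day, indefinitely, the labor cost alone scales linearly with fleet size, which is the restriction battery-free sensing removes.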

The next technological revolution

By saving on maintenance and replacement costs, Everactive customers are able to deploy more sensors. That, combined with the near-continuous operation of those sensors, brings a new level of visibility to operations.

“[Removing restrictions on sensor installations] starts to give you a sixth sense, if you will, about how your overall operations are running,” Calhoun says. “That’s exciting. Customers would like to wave a magic wand and know exactly what’s going on wherever they’re interested. The ability to deploy tens of thousands of sensors gets you close to that magic wand.”

With thousands of Everactive’s steam trap sensors already deployed, Wentzloff believes its sensors for motors and other rotating machinery will make an even bigger impact on the IoT market.

Beyond Everactive’s second generation of products, the founders say their sensors are a few years away from being translucent, flexible, and the size of a postage stamp. At that point customers will simply need to stick the sensors onto machines to start generating data. Such ease of installation and use would have implications far beyond the factory floor.

“You hear about smart transportation, smart agriculture, etc.,” Calhoun says. “IoT has this promise to make all of our environments smart, meaning there’s an awareness of what’s going on and use of that information to have these environments behave in ways that anticipate our needs and are as efficient as possible. We believe battery-less sensing is required and inevitable to bring about that vision, and we’re excited to be a part of that next computing revolution.”

Wednesday, August 19th, 2020
2:45 pm
Are we still listening to space?

When LIGO, the Laser Interferometer Gravitational-Wave Observatory, and its European counterpart, Virgo, detect a gravitational ripple from space, a public alert is sent out. That alert lets researchers know with reasonably high confidence that the ripple was probably caused by an exceptional cosmic event, such as the collision of neutron stars or the merging of black holes, somewhere in the universe.

Then starts the scramble. A pair of researchers is assigned to the incoming event, analyzing the data to get a preliminary location in the sky whence the ripple emanated. Telescopes are pointed in that direction, more data is amassed, and the pair of researchers conducts further follow-up studies to try to determine what kind of event caused the wave.

“I often think of it as if we’re in a dark forest and listening to the ground,” says Eva Huang, a third-year Department of Physics graduate student in Assistant Professor Salvatore Vitale’s lab in the MIT Kavli Institute for Astrophysics and Space Research (MKI). “From the footsteps, we’re trying to guess what kind of animal is passing by.”

The LIGO-Virgo Collaboration keeps a rotation system to determine which researchers get to investigate the latest detection. Sylvia Biscoveanu, a second-year graduate student also in Vitale’s lab, was next on the list when LIGO suspended its third observational run due to Covid-19. If a cosmic event happens in the universe and there’s no one there to detect it, did it even happen?

Data analysis in isolation

When MIT similarly scaled back on-campus research in mid-March due to the coronavirus pandemic, the LIGO team at MKI adapted quickly to the new work-from-home normal. “Our work is physically less dependent on being at MIT,” says Vitale, who is also a member of the LIGO Scientific Collaboration. “Still, there are consequences.”

For Biscoveanu, working from home has entailed being at her computer for at least eight hours a day. “In terms of actually being able to do my research, I haven’t suffered,” she says. What has suffered is her ability to exchange ideas with other members of the LIGO group at MIT. “I had just moved to a bigger office with a bunch of graduate students, and we were really looking forward to being able to talk to each other and ask questions regularly,” says Biscoveanu. “I definitely don’t get as much of that at home.”

Mentorship also looks different when everyone is at home. Vitale has always had an open-door policy with his graduate students. “I do weekly meetings with my students, but on top of that I had close-to-daily interactions with them,” he says. Unless his door was closed, Vitale says, his students could come in and talk anytime. That immediate connection, he has found, is hard to replicate in the digital world.

“The thing I tell my students is that we don’t work in a hut where everyone is making their own project and then it’s done,” says Vitale. “Research is more than the sum of its parts.” One advantage of working in a group is the ability to turn to a colleague to discuss a paper you just read, a problem you’re facing, or a crazy idea you had the night before. That’s harder to do when everyone is stuck in their own hut.

“Now you have to go in the chat room or arrange a telecon if you want to ask a question,” says Ken Ng, a third-year graduate student in the Vitale group. Ng uses gravitational waves to study particle physics, with his work focusing on axions, a proposed elementary particle that would be orders of magnitude lighter than the lightest particle yet observed. Telecons and Slack, he has found, can be particularly inefficient when you’re trying to quickly sketch out an idea. “I’m actually thinking of buying a whiteboard,” he says.

Space never stops

When the third observation run was suspended a month before it was supposed to end, it had collected 56 gravitational wave candidates. In comparison, the first two runs combined amassed a total of 11 candidates. So even though fresh data isn’t arriving in the lab, the work hasn’t ceased, and LIGO scientists are scrutinizing the data from home. “If the pandemic had happened a few months before, we could have missed half the data,” says Ng, looking on the positive side.  

Compared to the other members of the lab, Ng is no pandemic rookie. When the Covid-19 pandemic struck, he thought, “Again?” Ng, who is from Hong Kong, faced the SARS outbreak in 2002 and considers himself the pandemic veteran of the group. That experience has kept him from panicking these days. “I know the importance of social distancing and mask-wearing,” he explains.

Still, for some in the group, social distancing has led to less productivity and feelings of guilt. “I sometimes feel that, because my work is less impacted, I cannot allow myself to feel frustrated,” says Huang. Her work — analyzing LIGO data to decipher the cosmic events responsible for detected waves — can be done at home, unlike researchers who need to be physically in-lab. Throughout the pandemic, Huang has worked hard to combat the feeling that she needs to earn permission to be self-compassionate. “I can be, and need to be, kind to myself during this time.”

All are looking forward to the day when they can come back to campus. Partly, Ng confesses, for the free food. But mostly to continue studying gravitational waves in the same space. “I miss being able to chat randomly when people are in an office,” he says.

Vitale acknowledges that there have been some benefits of working from home. “This has obliged everyone to think a bit harder about how to express what we want to say,” he says. Still, like his students, he also can’t wait to leave his hut and get back to campus. “I think for all of us, it will also just be nice to be back at the office and re-establish a clear separation between our living and our working spaces, that right now are collapsed in the same entity.”

Tuesday, August 18th, 2020
3:20 pm
For student researchers, no pause for the pandemic

In mid-March, when the Covid-19 pandemic darkened MIT classrooms and labs, lights switched on for undergraduate research taking place remotely. Zooming in from time zones often distant from Cambridge, Massachusetts, many students were able to continue undergraduate research opportunities (UROPs) made possible by nuclear science and engineering faculty.

Advancing projects begun during January independent activities period or the start of spring semester, students overcame significant obstacles to make their research experiences meaningful while working from home — whether that home was in a manicured U.S. suburban subdivision, a palm-lined street in the Middle East, or, in the case of Quynh T. Nguyen, surrounded by local rice fields in Vietnam.

“It was tough returning to Dong Hoi City, because I thought that meant I was done with my UROP for the semester,” says the rising junior majoring in physics. Working with Assistant Professor Mingda Li, Nguyen had been investigating the thermal transport properties of materials, growing crystals in the lab. One goal of such work is optimizing heat transfer in materials to improve efficiency in energy production. “I was so grateful when Professor Li found ways for me to stay on the project from home,” he says.

While finishing his spring classes online — a major undertaking given the 11-hour time difference and difficulties accessing MIT servers — Nguyen pivoted with enthusiasm from lab work to developing machine learning applications for the same project.

“I’ve been excited about machine learning since taking a class, and so actually this UROP has allowed me to leverage my knowledge in an extremely new and interesting way for me,” says Nguyen.

Aljazzy Alahmadi, a rising sophomore, managed to get back to Saudi Arabia the day before such international flights were halted. “I was in a UROP meeting when MIT emailed the news, and I didn’t think about anything except getting home as fast as possible,” she recalls. But soon after she settled into life in Dammam, a city of more than a million on the Persian Gulf, she was relieved to learn that she could continue her project with graduate student Saleem Aldajani, within the lab of Associate Professor Michael P. Short.

“My work involves finding trends in the degradation of a stainless steel alloy often used in light water nuclear reactors when it’s under reactor-like thermal conditions,” she says. This kind of information might contribute to extended lifetimes for light water reactors. But after training with steel cutting and specialized spectroscopy techniques in the lab, her remote location necessitated a turn to data analysis instead.

“I was kind of happy about this switch,” Alahmadi says. “When I began the project, I didn’t really grasp what it was all about — I was learning how to cut steel samples — so when I started focusing on datasets I could intellectually explore in a way I couldn’t before.”

After she returned to her home in Katy, Texas, a small city in Houston’s shadow, Andrea Garcia, a rising sophomore, says she felt “kind of devastated.” Drawn to disciplines that would enable her to address environmental problems and climate change, Garcia had just decided to concentrate in materials science and engineering. “I had a lot of things planned for the rest of the semester,” she says, including a UROP in the Short lab. After hearing him lecture about the promise of fusion energy in the fall, Garcia had determined to learn more about nuclear energy more broadly.

She leapt into Short’s project, spending weeks learning how to use lasers safely. “Then we got kicked out due to Covid,” says Garcia. “I thought there’d be no way for undergraduate researchers to keep doing the research, but Professor Short made it happen, offering to run experiments and send us the data.”

Flying (mostly) solo

Although routinely in touch with faculty and lab supervisors via email and Zoom meetings, the students were on their own for the most part during spring semester and beyond. While they found the physical isolation from a team challenging at times, the undergraduates also relished their independence.

“I was analyzing data on irradiated samples of titanium aluminum metals, focused on thermal diffusivity, and was left to my own devices,” says Garcia. “Every week, we had to present our findings, and I came to feel a sense of ownership, that I was having an impact and that my work was achieving something.”

Investigating electrical and thermal conductivity of crystals that feature some unique quantum properties proved fascinating to Nguyen, not least because it catalyzed him to “learn many new things related to machine learning on Coursera,” as well as to investigate domains of physics previously unfamiliar to him. He especially enjoyed prowling through vast online databases: “I find it amazing that scientists have built these repositories and made them available for everyone to access.”

Alahmadi felt energized by the quest to find something of value in her datasets. “With this project, I felt I couldn’t leave until I reached a point of a deliverable,” she says. “I wanted to get a result, publish a paper, go to a conference — get the full experience of this.”

Sticking with it

Although their fall plans might be uncertain, these students remain anchored by their continuing research. Garcia, who found that she enjoyed using Python to create graphs mapping the properties of her material samples, says the experience reminded her “that computer science is a useful skill.” As a result, she hopes to bear down on her materials science major while taking more computer science courses.

“My wildest dream, which keeps me going, is to incorporate power systems in Saudi that don’t use carbon,” Alahmadi says. She hopes to stick with her UROP, wherever she is living. “It’s taught me to open my eyes to all things so I can learn new skills, from acquiring new capabilities to make projects go faster, to collaborating well with other lab members.”

Nguyen, who is targeting a career in applied physics, feels his experience with the UROP “is invaluable for my future,” he says. He has co-authored a scientific publication, and feels deep ties to his Cambridge-based research group. He has come to view this difficult period not as an obstacle, but an opportunity. “It’s an unprecedented experience, working and communicating remotely,” he says. “We are all experiencing a painful pandemic, but as Professor Li notes we are living in a historic time that will one day be memorialized in movies and books, so it’s not all bad.”

Thursday, August 13th, 2020
4:00 pm
A fix for foulants

When clogs and corrosion threaten residential water and heating systems, homeowners can simply call a plumber to snake a drain or replace a pipe. Operators of nuclear power plants aren’t nearly so lucky. Metallic oxide particles, collectively known as CRUD in the nuclear energy world, build up directly on reactor fuel rods, impeding the plant’s ability to generate heat. These foulants cost the nuclear energy industry millions of dollars annually.

This issue has vexed the nuclear energy industry since its start in the 1960s, and scientists have only found ways to mitigate, but not cure, CRUD buildup. But that may be about to change. “We believe we have cracked the problem of CRUD,” says Michael Short, Class of ’42 Associate Professor of Nuclear Science and Engineering (NSE), and research lead. “Every test we’ve done so far has looked good.”

In a recent paper published online by Langmuir, an American Chemical Society journal, Short and MIT colleagues describe their work, which offers a novel approach to designing fouling-resistant materials for use in nuclear reactors and other large-scale energy systems. Co-authors on the paper are Cigdem Toparli, a postdoc in NSE at the time of the study; NSE graduate students Max Carlson and Minh A. Dinh; and Bilge Yildiz, professor of nuclear science and engineering and of materials science and engineering.

The team’s research goes beyond theory and lays out specific design principles for anti-foulant materials. “One important aspect of our project was to make a practical solution to the problem today — no pie-in-the-sky for our children’s generation, but something that has to work with everything that exists now,” says Short.

Exelon, one of the nation’s largest power generators, is confident enough in the viability of the MIT team’s anti-foulant designs that it has started making plans to validate them in one of its commercial reactors. In the highly regulated domain of nuclear energy, the time from research idea to application could set a speed record.

The forces behind CRUD

Short has been investigating CRUD since 2010, when he joined the Consortium for Advanced Simulation of Light Water Reactors (CASL), a project sponsored by the U.S. Department of Energy to improve the performance of current and future nuclear reactors. As a postdoc at MIT, he developed computer models of CRUD.

“This made me read a lot about CRUD, and how different surface forces can cause things to stick to each other, such as the corrosion products circulating in coolant fluid that accumulate on fuel rods,” says Short. “I wanted to learn how it accumulates in the first place, and maybe find a way to actually prevent CRUD formation.”

Toward that end, he set up a boiling chamber made out of spare parts in the basement of Building NW22 to see which materials stuck to each other, and received a small grant to learn how to test the growth of CRUD in reactor conditions in Japan. He and his students built a flow loop (a way of recreating reactor conditions without radiation), and conducted a series of experiments to see which materials encouraged, and which discouraged, the growth of CRUD.

Researchers have floated a host of surface forces as candidates for causing the stickiness behind CRUD: hydrogen bonding, magnetism, electrostatic charges. But through experimentation and computational analysis, Short and his team began to suspect an overlooked contender: van der Waals forces. Discovered by 19th-century Dutch physicist Johannes Diderik van der Waals, these are weak electric forces that account for some of the attraction of molecules to each other in liquid, solids, and gases.

“We could rule out other surface forces for simple reasons, but one force we couldn’t rule out was van der Waals,” says Short.

Then came a major breakthrough: Carlson recalled a 50-year-old equation developed by Russian physicist Evgeny Lifshitz that he had come across during a review of materials science literature.

“Lifshitz’s theory described the magnitude of van der Waals forces according to electron vibrations, where electrons in different materials, such as the stuff floating in coolant water and the fuel rod materials, vibrate at different frequencies and at different amplitudes,” explains Short. “His math tells us if the solid materials have the same electronic vibrations as water, nothing will stick to them.”

This, says Short, was the team’s “Aha” moment. If cladding, the outer layer of fuel rods, could be coated with a material that matched the electronic frequency spectrum of coolant water, then these particles would slip right past the fuel rod. “The answer was sitting in the literature for 50 years, but nobody recognized it in this way,” says Short.

“This was real thinking outside the box,” says Chris Stanek, a technical director at Los Alamos National Laboratory engaged in nuclear energy advanced modeling and simulation, who was not involved in the research. “It was an unconventional, MIT approach — to step back and look at the source of fouling, to find something no one else had in the literature, and then getting straight to the physical underpinnings of CRUD.”

One design principle

The researchers got to work demonstrating that van der Waals was the single most important surface force behind the stickiness of CRUD. In search of a simple and uniform way of calculating materials’ molecular frequencies, they seized on the refractive index — a measure of how much light bends as it passes through a material. Shining calibrated LED light on material samples, they created a map of the optical properties of nuclear fuel and cladding materials. This enabled them to rate materials on a stickiness scale: materials sharing the same optical properties, according to the Lifshitz theory, would prove slippery to each other, while those far apart on the refractive index scale would stick together.
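The design principle can be illustrated with a standard textbook two-term approximation to the Lifshitz result (this is a sketch of the general physics, not the team's actual model, and all material numbers below are illustrative assumptions): the dominant dispersion contribution to the Hamaker constant between a coating and suspended particles across water vanishes when the coating's refractive index matches that of water.

```python
import math

kT = 4.11e-21   # thermal energy at ~298 K, in joules
h = 6.626e-34   # Planck constant, J*s
nu_e = 3.0e15   # main electronic absorption frequency, Hz (typical UV value, assumed)

def hamaker(eps1, n1, eps2, n2, eps3, n3):
    """Two-term (zero-frequency + dispersion) textbook approximation to the
    nonretarded Lifshitz Hamaker constant for materials 1 and 2 across medium 3."""
    zero_freq = (0.75 * kT
                 * ((eps1 - eps3) / (eps1 + eps3))
                 * ((eps2 - eps3) / (eps2 + eps3)))
    s1 = math.sqrt(n1**2 + n3**2)
    s2 = math.sqrt(n2**2 + n3**2)
    dispersion = ((3 * h * nu_e / (8 * math.sqrt(2)))
                  * (n1**2 - n3**2) * (n2**2 - n3**2)
                  / (s1 * s2 * (s1 + s2)))
    return zero_freq + dispersion

# Water as the intervening medium (n = 1.33); an oxide-like particle as
# material 2. Material 1 is the hypothetical cladding coating.
A_mismatched    = hamaker(eps1=10.0, n1=1.80, eps2=20.0, n2=2.40, eps3=80.0, n3=1.33)
A_index_matched = hamaker(eps1=10.0, n1=1.33, eps2=20.0, n2=2.40, eps3=80.0, n3=1.33)

# Matching the coating's refractive index to water zeroes the dispersion
# term, leaving only a far smaller zero-frequency contribution.
print(f"mismatched: {A_mismatched:.2e} J, index-matched: {A_index_matched:.2e} J")
```

With these illustrative numbers the attraction drops by well over an order of magnitude when the indices match — the "slip right past" behavior the team was after.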

By the end of their studies, as the paper describes, Short’s team had not only come up with a design principle for anti-foulant materials but a group of candidate coatings whose optical properties made them a good (slippery) match for coolant fluids. But in actual experiments, some of their coatings didn’t work. “It wasn’t enough to get the refractive index right,” says Short. “Materials need to be hard, resistant to radiation, hydrogen, and corrosion, and capable of being fabricated at large scale.”

Additional trials, including time in the harsh environment of MIT’s Nuclear Reactor Laboratory, have yielded a few coating materials that meet most of these tough criteria. The final step is determining if these materials can stop CRUD from growing in a real reactor. It is a test with a start date expected next year, at an Exelon commercial nuclear plant.

“Fuel rods coated with antifoulant materials will go into an operating commercial reactor putting power on the grid,” says Short. “At different intervals, they come out for examination, and if all goes right, our rods are clean and the ones next door are dirty. We could be one long test away from stopping CRUD in this type of reactor, and if we eliminate CRUD, we’ve wiped away a scourge of the industry.”

Funders of this research include Exelon Corporation through the MIT Energy Initiative’s Center for Advanced Nuclear Energy Systems; Statoil Petroleum AS (now Equinor); and the International Collaborative Energy Technology R&D Program of the Korea Institute of Energy Technology Evaluation and Planning, which is funded by the Korean Ministry of Trade Industry and Energy.

Wednesday, August 12th, 2020
2:00 pm
Study suggests animals think probabilistically to distinguish contexts

Among the many things rodents have taught neuroscientists is that, in a region called the hippocampus, the brain creates a new map for every unique spatial context — for instance, a different room or maze. But scientists have so far struggled to learn how animals decide when a context is novel enough to merit creating, or at least revising, these mental maps. In a study in eLife, MIT and Harvard University researchers propose a new understanding: The process of “remapping” can be mathematically modeled as a feat of probabilistic reasoning by the rodents.

The approach offers scientists a new way to interpret many experiments that depend on measuring remapping to investigate learning and memory. Remapping is integral to that pursuit, because animals (and people) associate learning closely with context, and hippocampal maps indicate which context an animal believes itself to be in.

“People have previously asked ‘What changes in the environment cause the hippocampus to create a new map?’ but there haven’t been any clear answers,” says lead author Honi Sanders. “It depends on all sorts of factors, which means that how the animals define context has been shrouded in mystery.”

Sanders is a postdoc in the lab of co-author Matthew Wilson, Sherman Fairchild Professor in The Picower Institute for Learning and Memory and the departments of Biology and Brain and Cognitive Sciences at MIT.  He is also a member of the Center for Brains, Minds and Machines. The pair collaborated with Samuel Gershman, a professor of psychology at Harvard.

A fundamental problem with remapping, one that has frequently led labs to report conflicting, confusing, or surprising results, is that scientists cannot simply assure their rats that they have moved from experimental Context A to Context B, or that they are still in Context A, even if some ambient condition, like temperature or odor, has inadvertently changed. It is up to the rat to explore and infer that conditions like the maze shape, or smell, or lighting, or the position of obstacles and rewards, or the task they must perform, have or have not changed enough to trigger a full or partial remapping.

So, rather than trying to understand remapping measurements based on what the experimental design is supposed to induce, Sanders, Wilson, and Gershman argue that scientists should predict remapping by mathematically accounting for the rat’s reasoning using Bayesian statistics, which quantify the process of starting with an uncertain assumption and then updating it as new information emerges.

“You never experience exactly the same situation twice. The second time is always slightly different,” Sanders says. “You need to answer the question: ‘Is this difference just the result of normal variation in this context or is this difference actually a different context?’ The first time you experience the difference you can’t be sure, but after you’ve experienced the context many times and get a sense of what variation is normal and what variation is not, you can pick up immediately when something is out of line.”
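The inference Sanders describes can be sketched as a toy Bayesian model (the Gaussian distributions, variances, and prior below are illustrative assumptions, not the paper's actual model): the animal weighs how likely an observed cue deviation is under "normal variation in the familiar context" against "a genuinely new context."

```python
import math

def posterior_new_context(observation, familiar_mean, familiar_sd,
                          novel_sd=5.0, prior_new=0.1):
    """Toy hidden-state inference: posterior probability that a cue observation
    came from a new context rather than normal variation in a familiar one.
    Both hypotheses are modeled as Gaussians purely for illustration."""
    def gauss(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    like_same = gauss(observation, familiar_mean, familiar_sd)   # familiar context
    like_new = gauss(observation, familiar_mean, novel_sd)       # broad: new context
    num = prior_new * like_new
    return num / (num + (1 - prior_new) * like_same)

# The same cue deviation (1.5 units) is judged differently depending on how
# tightly the animal has learned the familiar context's normal variation:
early = posterior_new_context(1.5, familiar_mean=0.0, familiar_sd=2.0)  # little training
late = posterior_new_context(1.5, familiar_mean=0.0, familiar_sd=0.5)   # extensive training
print(early, late)
```

With more experience (a smaller learned variance), the identical deviation yields a much higher posterior that the context is new — the same logic that, below, explains why longer-trained rats remap more fully in morphed enclosures.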

The trio call their approach “hidden state inference” because to the animal, the possible change of context is a hidden state that must be inferred.
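
The update at the heart of this idea can be sketched in a few lines of code. The snippet below is an illustration of Bayes' rule applied to context identity, not the authors' actual model; the two contexts, the one-dimensional sensory cue, and the Gaussian cue distributions are all made-up assumptions chosen only to show the mechanics.

```python
# Illustrative hidden state inference: the animal holds a posterior over
# "which context am I in?" and updates it with each observation.
import math

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def update_posterior(prior, observation, context_models):
    """One Bayesian update: P(context | obs) ~ P(obs | context) * P(context)."""
    unnorm = {c: prior[c] * gaussian_pdf(observation, m, s)
              for c, (m, s) in context_models.items()}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

# Two hypothetical contexts distinguished by a 1-D sensory cue
# (e.g., ambient light level); means and stds are invented numbers.
contexts = {"A": (0.0, 1.0), "B": (3.0, 1.0)}
belief = {"A": 0.5, "B": 0.5}

# An ambiguous cue (1.5 lies exactly between the two means) leaves the
# belief unchanged; a clearly B-like cue makes the inferred context flip.
belief = update_posterior(belief, 1.5, contexts)
belief = update_posterior(belief, 3.2, contexts)
print(max(belief, key=belief.get))  # context with highest posterior
```

Experience with a context corresponds to tightening these distributions: the more samples of "normal variation" the animal has seen, the more confidently an outlier observation is attributed to a genuinely different context.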

In the study, the authors describe several cases in which hidden state inference can help explain the remapping, or the lack of it, observed in prior studies.

For instance, in many studies it’s been difficult to predict how changing some of the cues that a rodent navigates by in a maze (e.g., a light or a buzzer) will influence whether it makes a completely new map or partially remaps the current one, and by how much. Mostly, the data have shown there isn’t an obvious “one-to-one” relationship between cue change and remapping. But the new model predicts how, as more cues change, a rodent can shift from being uncertain about whether an environment is novel (and therefore partially remapping) to being sure enough of that to fully remap.

In another case, the model offers a new prediction to resolve a remapping ambiguity that has arisen when scientists have incrementally “morphed” the shape of rodent enclosures. Multiple labs, for instance, found different results when they familiarized rats with square and round environments and then tried to measure how and whether they remap when placed in intermediate shapes, such as an octagon. Some labs saw complete remapping, while others observed only partial remapping. The new model predicts how that could be true: rats exposed to the intermediate environment after longer training would be more likely to fully remap than those exposed to the intermediate shape earlier in training, because with more experience they would be more sure of their original environments, and therefore more certain that the intermediate one was a real change.

The math of the model even includes a variable that can account for differences between individual animals. Sanders is looking at whether rethinking old results in this way could allow researchers to understand why different rodents respond so variably to similar experiments.

Ultimately, Sanders says, he hopes the study will help fellow remapping researchers adopt a new way of thinking about surprising results — by considering the challenge their experiments pose to their subjects.

“Animals are not given direct access to context identities, but have to infer them,” he says. “Probabilistic approaches capture the way that uncertainty plays a role when inference occurs. If we correctly characterize the problem the animal is facing, we can make sense of differing results in different situations because the differences should stem from a common cause: the way that hidden state inference works.”

The U.S. National Science Foundation funded the research.

Tuesday, August 11th, 2020
LJ.Rossia.org makes no claim to the content supplied through this journal account. Articles are retrieved via a public feed supplied by the site for this purpose.
2:00 pm
SMART research enhances dengue vaccination in mice

Researchers from the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, have found a practical way to induce a strong and broad immunity to the dengue virus based on proof-of-concept studies in mice. Dengue is a mosquito-borne viral disease with an estimated 100 million symptomatic infections every year. It is endemic in over 100 countries in the world, from the United States to Africa and wide swathes of Asia. In Singapore, over 1,700 new dengue cases were reported recently.

The study is reported in a paper titled “Sequential immunization induces strong and broad immunity against all four dengue virus serotypes,” published in NPJ Vaccines. It is jointly published by SMART researchers Jue Hou, Shubham Shrivastava, Hooi Linn Loo, Lan Hiong Wong, Eng Eong Ooi, and Jianzhu Chen from SMART’s Infectious Diseases and Antimicrobial Resistance (AMR) interdisciplinary research groups (IRGs). 

The dengue virus (DENV) consists of four antigenically distinct serotypes, and infection with one serotype confers no lasting immunity against the others, meaning someone can be infected again by any of the three remaining serotypes.

Today, Dengvaxia is the only vaccine available to combat dengue. It consists of four variant dengue antigens, one for each of the four serotypes of dengue, expressed from an attenuated yellow-fever virus. The current three-dose immunization with the tetravalent vaccine induces only suboptimal protection against DENV1 and DENV2. Furthermore, in people who have never been infected with dengue, the vaccine increases the risk of a more severe dengue infection later on. Therefore, in most of the world, the vaccination is only given to those who have been previously infected.

To help overcome these issues, SMART researchers tested on mice whether sequential immunization (or one serotype per dose) induces stronger and broader immunity against four DENV serotypes than tetravalent-formulated immunization — and found that sequential immunization induced significantly higher levels of virus-specific T cell responses than tetravalent immunization. Moreover, sequential immunization induced higher levels of neutralizing antibodies to all four DENV serotypes than tetravalent vaccination.

“The principle of sequential immunization generally aligns with the reality for individuals living in dengue-endemic areas, whose immune responses may become protective after multiple heterotypic exposures,” says Professor Eng Eong Ooi, SMART AMR principal investigator and senior author of the study. “We were able to find a similar effect based on the use of sequential immunization, which will pave the way for a safe and effective use of the vaccine and to combat the virus.”

Building on these promising results, the investigators aim to test sequential immunization in humans in the near future.

The work was supported by the National Research Foundation (NRF) Singapore through the SMART Infectious Disease Research Program and AMR IRG. SMART was established by MIT in partnership with the NRF Singapore in 2007. SMART is the first entity in the Campus for Research Excellence and Technological Enterprise (CREATE) developed by NRF.  SMART serves as an intellectual and innovation hub for research interactions between MIT and Singapore, performing cutting-edge research of interest to both Singapore and MIT. SMART currently comprises an Innovation Centre and five IRGs: AMR, Critical Analytics for Manufacturing Personalized-Medicine, Disruptive and Sustainable Technologies for Agricultural Precision, Future Urban Mobility, and Low Energy Electronic Systems. SMART research is funded by the NRF Singapore under the CREATE program.  

The AMR IRG is a translational research and entrepreneurship program that tackles the growing threat of antimicrobial resistance. By leveraging talent and convergent technologies across Singapore and MIT, they aim to tackle AMR head-on by developing multiple innovative and disruptive approaches to identify, respond to, and treat drug-resistant microbial infections. Through strong scientific and clinical collaborations, they provide transformative, holistic solutions for Singapore and the world.

Monday, August 10th, 2020
11:59 pm
How airplanes counteract St. Elmo’s Fire during thunderstorms

At the height of a thunderstorm, the tips of cell towers, telephone poles, and other tall, electrically conductive structures can spontaneously emit a flash of blue light. This electric glow, known as a corona discharge, is produced when the air surrounding a conductive object is briefly ionized by an electrically charged environment.

For centuries, sailors observed corona discharges at the tips of ship masts during storms at sea. They coined the phenomenon St. Elmo’s fire, after the patron saint of sailors.

Scientists have found that a corona discharge can strengthen in windy conditions, glowing more brightly as the wind further electrifies the air. This wind-induced intensification has been observed mostly in electrically grounded structures, such as trees and towers. Now aerospace engineers at MIT have found that wind has an opposite effect on ungrounded objects, such as airplanes and some wind turbine blades.

In some of the last experiments performed in MIT’s Wright Brothers Wind Tunnel before it was dismantled in 2019, the researchers exposed an electrically ungrounded model of an airplane wing to increasingly strong wind gusts. They found that the stronger the wind, the weaker the corona discharge, and the dimmer the glow that was produced.

The team’s results appear in the Journal of Geophysical Research: Atmospheres. The study’s lead author is Carmen Guerra-Garcia, an assistant professor of aeronautics and astronautics at MIT. Her co-authors at MIT are Ngoc Cuong Nguyen, a senior research scientist; Theodore Mouratidis, a graduate student; and Manuel Martinez-Sanchez, a post-tenure professor of aeronautics and astronautics.

Electric friction

Within a storm cloud, friction can build up to produce extra electrons, creating an electric field that can reach all the way to the ground. If that field is strong enough, it can break apart surrounding air molecules, turning neutral air into a charged gas, or plasma. This process most often occurs around sharp, conductive objects such as cell towers and wing tips, as these pointed structures tend to concentrate the electric field, so that electrons are pulled from surrounding air molecules toward the pointed structures, leaving behind a veil of positively charged plasma immediately around the sharp object.

Once a plasma has formed, the molecules within it can begin to glow via the process of corona discharge, where excess electrons in the electric field ping-pong against the molecules, knocking them into excited states. In order to come down from those excited states, the molecules emit a photon of energy, at a wavelength that, for oxygen and nitrogen, corresponds to the characteristic blueish glow of St. Elmo’s fire.
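
The field-concentrating effect of sharp points can be put in rough numbers with a textbook estimate. In the sketch below (an illustration, not a calculation from the study), the tip is approximated as an isolated conducting sphere held at potential V, for which the surface field is E = V/r; the 30 kV potential and the roughly 3 MV/m breakdown strength of air are assumed round numbers.

```python
# Toy estimate: why sharp tips ionize air first. For a conducting sphere
# at fixed potential V, the surface field E = V / r grows as the tip
# radius r shrinks, eventually exceeding air's breakdown strength.
def surface_field(potential_v, radius_m):
    return potential_v / radius_m  # V/m

BREAKDOWN_E = 3.0e6  # rough dielectric strength of air, V/m (assumed)

for r in (1e-1, 1e-3, 1e-5):  # blunt -> sharp tip radius, in meters
    e = surface_field(30e3, r)  # 30 kV, an assumed storm-scale potential
    print(f"r = {r:g} m -> E = {e:.0e} V/m, ionizes air: {e > BREAKDOWN_E}")
```

At the same potential, a centimeter-scale blunt surface stays below breakdown while a needle-like tip far exceeds it, which is why the glow appears at masts, towers, and wing tips rather than flat surfaces.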

In previous laboratory experiments, scientists found that this glow, and the energy of a corona discharge, can strengthen in the presence of wind. A strong gust can essentially blow away the positively charged ions that were locally shielding the electric field and reducing its effect, making it easier for electrons to trigger a stronger, brighter glow.

These experiments were mostly carried out with electrically grounded structures, and the MIT team wondered whether wind would have the same strengthening effect on a corona discharge that was produced around a sharp, ungrounded object, such as an airplane wing.

To test this idea, they fabricated a simple wing structure out of wood and wrapped the wing in foil to make it electrically conductive. Rather than trying to produce an ambient electric field similar to what would be generated in a thunderstorm, the team studied an alternative configuration in which the corona discharge was generated on a metal wire running parallel to the length of the wing, with a small high-voltage power source connected between the wire and the wing. They fastened the wing to a pedestal made from an insulating material that, because of its nonconductive nature, left the wing itself electrically suspended, or ungrounded.

The team placed the entire setup in MIT’s Wright Brothers Wind Tunnel and subjected it to increasingly high wind velocities, up to 50 meters per second, while also varying the voltage applied to the wire. During these tests, they measured the electrical charge building up in the wing and the current of the corona discharge, and used an ultraviolet-sensitive camera to observe the brightness of the corona discharge on the wire.

Scientists observe the ion “glow” of corona discharge in an electrically ungrounded object (left) compared to a grounded object (right). Courtesy of the researchers

In the end, they found that the strength of the corona discharge and its resulting brightness decreased as the wind increased — a surprising and opposite effect from what scientists have seen for wind acting on grounded structures.

Pulled against the wind

The team developed numerical simulations to try and explain the effect, and found that, for ungrounded structures, the process is largely similar to what happens with grounded objects — but with something extra.

In both cases, the wind blows away the positive ions generated by the corona, leaving behind a stronger field in the surrounding air. Ungrounded structures, however, because they are electrically isolated, become more negatively charged as those ions depart, and this weakens the positive corona discharge. The amount of negative charge the wing retains is set by two competing effects: positive ions carried off by the wind, and ions attracted and pulled back by the wing’s growing negative charge. This secondary effect, the researchers found, weakens the local electric field as well as the corona discharge’s electric glow.
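
The competition described above can be caricatured with a toy charge-balance model. This is a cartoon for intuition, not the team's numerical simulation: the emission strength, the return-current term, and the wind factor are arbitrary invented parameters, chosen only to show the qualitative trend that stronger wind drives the floating conductor more negative and throttles the corona current.

```python
# Cartoon model: a floating conductor emitting a positive corona loses
# positive charge, drifts negative, and thereby weakens its own corona.
# Wind removes a larger fraction of emitted ions, so less charge returns.
def simulate(wind_factor, steps=5000, dt=1e-3):
    q = 0.0          # net charge on the floating wing (arbitrary units)
    k_corona = 1.0   # assumed corona emission strength at zero charge
    for _ in range(steps):
        i_corona = max(k_corona + q, 0.0)            # emission throttles as q goes negative
        i_return = 0.5 * (1.0 - wind_factor) * (-q)  # ions pulled back; wind sweeps off the rest
        q += (-i_corona + i_return) * dt             # Euler step of the charge balance
    return q, max(k_corona + q, 0.0)

q_calm, i_calm = simulate(wind_factor=0.2)
q_windy, i_windy = simulate(wind_factor=0.9)
print(i_windy < i_calm, q_windy < q_calm < 0.0)  # stronger wind -> weaker corona
```

Even in this crude form, the equilibrium reproduces the paper's qualitative finding for ungrounded bodies: more wind means a more negative floating charge and a weaker, dimmer discharge, the opposite of the grounded case.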

“The corona discharge is the first stage of lightning in general,” Guerra-Garcia says. “How corona discharge behaves is important and kind of sets the stage for what could happen next in terms of electrification.”

In flight, aircraft such as planes and helicopters inherently produce wind, and a glow corona system like the one tested in the wind tunnel could actually be used to control the electrical charge of the vehicle. In prior work, Guerra-Garcia and her colleagues showed that if a plane could be negatively charged in a controlled fashion, its risk of being struck by lightning could be reduced. The new results show that charging an aircraft in flight to negative values can be achieved using a controlled positive corona discharge.

“The exciting thing about this study is that, while trying to demonstrate that the electrical charge of an aircraft can be controlled using a corona discharge, we actually discovered that classical theories of corona discharge in wind do not apply to airborne platforms that are electrically isolated from their environment,” Guerra-Garcia says. “Electrical breakdown occurring in aircraft really presents some unique features that do not allow the direct extrapolation from ground studies.”

This research was funded, in part, by The Boeing Company, through the Strategic Universities for Boeing Research and Technology Program.

4:00 pm
Data systems that learn to be better

Big data has gotten really, really big: By 2025, all the world’s data will add up to an estimated 175 trillion gigabytes. For a visual, if you stored that amount of data on DVDs, it would stack up tall enough to circle the Earth 222 times. 

One of the biggest challenges in computing is handling this onslaught of information while still being able to efficiently store and process it. A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) believes that the answer rests with something called “instance-optimized systems.”

Traditional storage and database systems are designed to work for a wide range of applications because of how long it can take to build them — months or, often, several years. As a result, for any given workload such systems provide performance that is good, but usually not the best. Even worse, they sometimes require administrators to painstakingly tune the system by hand to provide even reasonable performance. 

In contrast, the goal of instance-optimized systems is to build systems that optimize and partially re-organize themselves for the data they store and the workload they serve. 

“It’s like building a database system for every application from scratch, which is not economically feasible with traditional system designs,” says MIT Professor Tim Kraska. 

As a first step toward this vision, Kraska and colleagues developed Tsunami and Bao. Tsunami uses machine learning to automatically re-organize a dataset’s storage layout based on the types of queries that its users make. Tests show that it can run queries up to 10 times faster than state-of-the-art systems. What’s more, its datasets can be organized via a series of "learned indexes" that are up to 100 times smaller than the indexes used in traditional systems. 
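
The idea behind a learned index can be sketched compactly. The toy class below is illustrative only, not the actual structure from Kraska's work (real learned indexes use hierarchies of models): a linear model predicts where a key should sit in a sorted array, and the model's worst-case error bounds a small local search that corrects the prediction. The index then stores just two floats and an error bound instead of a tree of nodes.

```python
# Minimal learned-index sketch: model position ~ f(key), then search a
# window of width +/- max_error around the prediction.
import bisect

class LearnedIndex:
    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        # least-squares fit of position ~ slope * key + intercept
        mean_k = sum(self.keys) / n
        mean_p = (n - 1) / 2
        cov = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(self.keys))
        var = sum((k - mean_k) ** 2 for k in self.keys)
        self.slope = cov / var if var else 0.0
        self.intercept = mean_p - self.slope * mean_k
        # worst-case prediction error defines the local search window
        self.err = max(abs(self._predict(k) - i)
                       for i, k in enumerate(self.keys))

    def _predict(self, key):
        return self.slope * key + self.intercept

    def lookup(self, key):
        n = len(self.keys)
        guess = int(self._predict(key))
        lo = max(0, guess - int(self.err) - 1)
        hi = min(n, guess + int(self.err) + 2)
        i = lo + bisect.bisect_left(self.keys[lo:hi], key)
        return i if i < n and self.keys[i] == key else None

idx = LearnedIndex([3 * i for i in range(1000)])  # evenly spaced keys
print(idx.lookup(300), idx.lookup(301))
```

When the key distribution is close to the model's shape, as here, the error bound is tiny and each lookup is a constant-size search, which is where both the speed and the space savings come from.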

Kraska has been exploring the topic of learned indexes for several years, going back to his influential work with colleagues at Google in 2017. 

Harvard University Professor Stratos Idreos, who was not involved in the Tsunami project, says that a unique advantage of learned indexes is their small size, which, in addition to space savings, brings substantial performance improvements.

“I think this line of work is a paradigm shift that’s going to impact system design long-term,” says Idreos. “I expect approaches based on models will be one of the core components at the heart of a new wave of adaptive systems.”

Bao, meanwhile, focuses on improving the efficiency of query optimization through machine learning. A query optimizer rewrites a high-level declarative query to a query plan, which can actually be executed over the data to compute the result to the query. However, often there exists more than one query plan to answer any query; picking the wrong one can cause a query to take days to compute the answer, rather than seconds. 

Traditional query optimizers take years to build, are very hard to maintain, and, most importantly, do not learn from their mistakes. Bao is the first learning-based approach to query optimization that has been fully integrated into the popular database management system PostgreSQL. Lead author Ryan Marcus, a postdoc in Kraska’s group, says that Bao produces query plans that run up to 50 percent faster than those created by the PostgreSQL optimizer, meaning that it could help to significantly reduce the cost of cloud services, like Amazon’s Redshift, that are based on PostgreSQL.

By fusing the two systems together, Kraska hopes to build the first instance-optimized database system that can provide the best possible performance for each individual application without any manual tuning. 

The goal is to not only relieve developers from the daunting and laborious process of tuning database systems, but to also provide performance and cost benefits that are not possible with traditional systems.

Traditionally, the systems we use to store data are limited to only a few storage options and, as a result, cannot provide the best possible performance for a given application. Tsunami can dynamically change the structure of the data storage based on the kinds of queries it receives, creating new ways to store data that are not feasible with more traditional approaches.
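
A toy version of that query-driven idea: the sketch below is an illustration, not Tsunami's algorithm; it simply picks a table's clustering column from whichever column the observed query log filters on most often, so the physical layout follows the workload rather than a fixed design-time choice.

```python
# Choose a storage layout from the workload: cluster the table on the
# column that queries filter on most frequently.
from collections import Counter

def choose_clustering_column(query_log):
    """query_log: iterable of tuples of column names each query filters on."""
    freq = Counter(col for query in query_log for col in query)
    return freq.most_common(1)[0][0]

# A hypothetical log: most queries filter on "timestamp"
log = [("timestamp",), ("timestamp", "user_id"), ("timestamp",), ("region",)]
print(choose_clustering_column(log))
```

A real system would re-run this kind of decision continuously and reorganize storage in multiple dimensions, but the principle is the same: the layout is derived from the instance's workload, not fixed in advance.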

Johannes Gehrke, a managing director at Microsoft Research who also heads up machine learning efforts for Microsoft Teams, says that this work opens up many interesting applications, such as doing so-called “multidimensional queries” in main-memory data warehouses. Harvard’s Idreos also expects the project to spur further work on how to maintain the good performance of such systems when new data and new kinds of queries arrive.

Bao is short for “bandit optimizer,” a play on words related to the so-called “multi-armed bandit” analogy where a gambler tries to maximize their winnings at multiple slot machines that have different rates of return. The multi-armed bandit problem is commonly found in any situation that has tradeoffs between exploring multiple different options, versus exploiting a single option — from risk optimization to A/B testing.
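
The analogy can be made concrete with a short epsilon-greedy sketch. This illustrates the bandit framing only, not Bao's actual learning algorithm; the three candidate plans, their mean latencies, and the noise level are invented for the example.

```python
# Multi-armed bandit over query plans: each plan is an "arm", its payoff
# is (negative) execution time, and we balance exploring unfamiliar
# plans against exploiting the fastest one seen so far.
import random

def epsilon_greedy(true_latencies, rounds=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    n = len(true_latencies)
    counts = [0] * n
    mean_latency = [0.0] * n
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore a random plan
        else:
            # exploit; untried arms score 0.0, so each gets tried once first
            arm = min(range(n), key=lambda a: mean_latency[a] if counts[a] else 0.0)
        observed = rng.gauss(true_latencies[arm], 0.1)  # noisy execution time
        counts[arm] += 1
        mean_latency[arm] += (observed - mean_latency[arm]) / counts[arm]
    return counts

counts = epsilon_greedy([2.0, 0.5, 1.2])  # three hypothetical plans (seconds)
print("picks per plan:", counts)
```

After a few thousand simulated queries, nearly all picks concentrate on the fastest plan, while the occasional exploratory pick keeps the estimates honest if the workload drifts.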

“Query optimizers have been around for years, but they often make mistakes, and usually they don’t learn from them,” says Kraska. “That’s where we feel that our system can make key breakthroughs, as it can quickly learn for the given data and workload what query plans to use and which ones to avoid.”

Kraska says that in contrast to other learning-based approaches to query optimization, Bao learns much faster and can outperform open-source and commercial optimizers with as little as one hour of training time. In the future, his team aims to integrate Bao into cloud systems to improve resource utilization in environments where disk, RAM, and CPU time are scarce resources.

“Our hope is that a system like this will enable much faster query times, and that people will be able to answer questions they hadn’t been able to answer before,” says Kraska.

A related paper about Tsunami was co-written by Kraska, PhD students Jialin Ding and Vikram Nathan, and MIT Professor Mohammad Alizadeh. A paper about Bao was co-written by Kraska, Marcus, PhD students Parimarjan Negi and Hongzi Mao, visiting scientist Nesime Tatbul, and Alizadeh.

The work was done as part of the Data System and AI Lab (DSAIL@CSAIL), which is sponsored by Intel, Google, Microsoft, and the U.S. National Science Foundation. 

11:00 am
3 Questions: Asegun Henry on five “grand thermal challenges” to stem the tide of global warming

More than 90 percent of the world’s energy use today involves heat, whether for producing electricity, heating and cooling buildings and vehicles, manufacturing steel and cement, or other industrial activities. Collectively, these processes emit a staggering amount of greenhouse gases into the environment each year.

Reinventing the way we transport, store, convert, and use thermal energy would go a long way toward avoiding a global rise in temperature of more than 2 degrees Celsius — a critical increase that is predicted to tip the planet into a cascade of catastrophic climate scenarios.

But, as three thermal energy experts write in a letter published today in Nature Energy, “Even though this critical need exists, there is a significant disconnect between current research in thermal sciences and what is needed for deep decarbonization.”

In an effort to motivate the scientific community to work on climate-critical thermal issues, the authors have laid out five thermal energy “grand challenges,” or broad areas where significant innovations need to be made in order to stem the rise of global warming. MIT News spoke with Asegun Henry, the lead author and the Robert N. Noyce Career Development Associate Professor in the Department of Mechanical Engineering, about this grand vision.

Q: Before we get into the specifics of the five challenges you lay out, can you say a little about how this paper came about, and why you see it as a call to action?

A: This paper was born out of this really interesting meeting, where my two co-authors and I were asked to meet with Bill Gates and teach him about thermal energy. We did a several-hour session with him in October of 2018, and when we were leaving, at the airport, we all agreed that the message we shared with Bill needs to be spread much more broadly.

This particular paper is about thermal science and engineering specifically, but it’s an interdisciplinary field with lots of intersections. The way we frame it, this paper is about five grand challenges that, if solved, would literally alter the course of humanity. It’s a big claim — but we back it up.

And we really need this to be declared as a mission, similar to the declaration that we were going to put a man on the moon, where you saw this concerted effort among the scientific community to achieve that mission. Our mission here is to save humanity from extinction due to climate change. The mission is clear. And this is a subset of five problems that will get us the majority of the way there, if we can solve them. Time is running out, and we need all hands on deck. 

Q: What are the five thermal energy challenges you outline in your paper?

A: The first challenge is developing thermal storage systems for the power grid, electric vehicles, and buildings. Take the power grid: There is an international race going on to develop a grid storage system to store excess electricity from renewables so you can use it at a later time. This would allow renewable energy to penetrate the grid. If we can get to a place of fully decarbonizing the grid, that alone reduces carbon dioxide emissions from electricity production by 25 percent. And the beauty of that is, once you decarbonize the grid you open up decarbonizing the transportation sector with electric vehicles. Then you’re talking about a 40 percent reduction of global carbon emissions.

The second challenge is decarbonizing industrial processes, which contribute 15 percent of global carbon dioxide emissions. The big actors here are cement, steel, aluminum, and hydrogen. Some of these industrial processes intrinsically involve the emission of carbon dioxide, because the reaction itself has to release carbon dioxide for it to work, in the current form. The question is, is there another way? Either we think of another way to make cement, or come up with something different. It’s an extremely difficult challenge, but there are good ideas out there, and we need way more people thinking about this.

The third challenge is solving the cooling problem. Air conditioners and refrigerators have chemicals in them that are very harmful to the environment, 2,000 times more harmful than carbon dioxide on a molar basis. If the seal breaks and that refrigerant gets out, that little bit of leakage will cause global warming to shift significantly. When you account for India and other developing nations that are now getting access to electricity infrastructures to run AC systems, the leakage of these refrigerants will become responsible for 15 to 20 percent of global warming by 2050.

The fourth challenge is long-distance transmission of heat. We transmit electricity because it can be transmitted with low loss, and it’s cheap. The question is, can we transmit heat like we transmit electricity? There is an overabundance of waste heat available at power plants, and the problem is, where the power plants are and where people live are two different places, and we don’t have a connector to deliver heat from these power plants, which is literally wasted. You could satisfy the entire residential heating load of the world with a fraction of that waste heat. What we don’t have is the wire to connect them. And the question is, can someone create one?

The last challenge is variable conductance building envelopes. There are some demonstrations that show it is physically possible to create a thermal material, or a device that will change its conductance, so that when it’s hot, it can block heat from getting through a wall, but when you want it to, you could change its conductance to let the heat in or out. We’re far away from having a functioning system, but the foundation is there.

Q: You say that these five challenges represent a new mission for the scientific community, similar to the mission to land a human on the moon, which came with a clear deadline. What sort of timetable are we talking about here, in terms of needing to solve these five thermal problems to mitigate climate change?

A: In short, we have about 20 to 30 years of business as usual, before we end up on an inescapable path to an average global temperature rise of over 2 degrees Celsius. This may seem like a long time, but it’s not when you consider that it took natural gas 70 years to become 20 percent of our energy mix. So imagine that now we have to not just switch fuels, but do a complete overhaul of the entire energy infrastructure in less than one third the time. We need dramatic change, not yesterday, but years ago. So every day I fear we will do too little too late, and we as a species may not survive Mother Earth’s clapback.

Thursday, August 6th, 2020
2:00 pm
Why shaving dulls even the sharpest of razors

Razors, scalpels, and knives are commonly made from stainless steel, honed to a razor-sharp edge and coated with even harder materials such as diamond-like carbon. However, knives require regular sharpening, while razors are routinely replaced after cutting materials far softer than the blades themselves.

Now engineers at MIT have studied the simple act of shaving up close, observing how a razor blade can be damaged as it cuts human hair — a material that is 50 times softer than the blade itself. They found that hair shaving deforms a blade in a way that is more complex than simply wearing down the edge over time. In fact, a single strand of hair can cause the edge of a blade to chip under specific conditions. Once an initial crack forms, the blade is vulnerable to further chipping. As more cracks accumulate around the initial chip, the razor’s edge can quickly dull.

The blade’s microscopic structure plays a key role, the team found. The blade is more prone to chipping if the microstructure of the steel is not uniform. The blade’s approaching angle to a strand of hair and the presence of defects in the steel’s microscopic structure also play a role in initiating cracks.

The team’s findings may also offer clues on how to preserve a blade’s sharpness. For instance, in slicing vegetables, a chef might consider cutting straight down, rather than at an angle. And in designing longer-lasting, more chip-resistant blades, manufacturers might consider making knives from more homogeneous materials.

“Our main goal was to understand a problem that more or less everyone is aware of: why blades become useless when they interact with much softer material,” says C. Cem Tasan, the Thomas B. King Associate Professor of Metallurgy at MIT. “We found the main ingredients of failure, which enabled us to determine a new processing path to make blades that can last longer.”

Tasan and his colleagues have published their results today in the journal Science. His co-authors are Gianluca Roscioli, lead author and MIT graduate student, and Seyedeh Mohadeseh Taheri Mousavi, MIT postdoc.

A metallurgy mystery

Tasan’s group in MIT’s Department of Materials Science and Engineering explores the microstructure of metals in order to design new materials with exceptional damage-resistance.

“We are metallurgists and want to learn what governs the deformation of metals, so that we can make better metals,” Tasan says. “In this case, it was intriguing that, if you cut something very soft, like human hair, with something very hard, like steel, the hard material would fail.”

To identify the mechanisms by which razor blades fail when shaving human hair, Roscioli first carried out some preliminary experiments, using disposable razors to shave his own facial hair. After every shave, he took images of the razor’s edge with a scanning electron microscope (SEM) to track how the blade wore down over time.

An in-situ hair cutting experiment in a scanning electron microscope, showing the chipping process. Credit: Gianluca Roscioli

Surprisingly, the experiments revealed very little wear, or rounding out of the sharp edge over time. Instead, he noticed chips forming along certain regions of the razor’s edge.

“This created another mystery: We saw chipping, but didn’t see chipping everywhere, only in certain locations,” Tasan says. “And we wanted to understand, under what conditions does this chipping take place, and what are the ingredients of failure?”

A chip off the new blade

To answer this question, Roscioli built a small, micromechanical apparatus to carry out more controlled shaving experiments. The apparatus consists of a movable stage, with two clamps on either side, one to hold a razor blade and the other to anchor strands of hair. He used blades from commercial razors, which he set at various angles and cutting depths to mimic the act of shaving.

The apparatus is designed to fit inside a scanning electron microscope, where Roscioli was able to take high-resolution images of both the hair and the blade as he carried out multiple cutting experiments. He used his own hair, as well as hair sampled from several of his labmates, overall representing a wide range of hair diameters.

In-situ single-hair cutting experiment carried out to measure the loads generated on the blade edge during shaving. Credit: Gianluca Roscioli

Regardless of a hair’s thickness, Roscioli observed the same mechanism by which hair damaged a blade. Just as in his initial shaving experiments, Roscioli found that hair caused the blade’s edge to chip, but only in certain spots.

When he analyzed the SEM images and movies taken during the cutting experiments, he found that chips did not occur when the hair was cut perpendicular to the blade. When the hair was free to bend, however, chips were more likely to occur. These chips most commonly formed in places where the blade edge met the sides of the hair strands.

To see what conditions were likely causing these chips to form, the team ran computational simulations in which they modeled a steel blade cutting through a single hair. As they simulated each hair shave, they altered certain conditions, such as the cutting angle, the direction of the force applied in cutting, and most importantly, the composition of the blade’s steel.

They found that the simulations predicted failure under three conditions: when the blade approached the hair at an angle, when the blade’s steel was heterogeneous in composition, and when the edge of a hair strand met the blade at a weak point in its heterogeneous structure.

Tasan says these conditions illustrate a mechanism known as stress intensification, in which the effect of a stress applied to a material is intensified if the material’s structure has microcracks. Once an initial microcrack forms, the material’s heterogeneous structure enables these cracks to grow easily into chips.

“Our simulations explain how heterogeneity in a material can increase the stress on that material, so that a crack can grow, even though the stress is imposed by a soft material like hair,” Tasan says.
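The intensification effect can be illustrated with the classical Inglis stress-concentration relation from fracture mechanics (a textbook formula, not the team's actual simulation model): the stress at the tip of a crack grows with the square root of the ratio of crack length to tip radius.

```python
import math

def stress_concentration(nominal_stress, crack_length, tip_radius):
    """Inglis relation for the stress at an elliptical notch tip:

        sigma_max = sigma_nominal * (1 + 2 * sqrt(a / rho))

    where a is the crack half-length and rho is the tip radius.
    """
    return nominal_stress * (1 + 2 * math.sqrt(crack_length / tip_radius))

# Illustrative numbers (not from the study): a 1-micrometer microcrack
# with a 10-nanometer tip radius amplifies the applied stress roughly 21x,
# which is how a soft material can drive a crack through hard steel.
print(stress_concentration(1.0, 1e-6, 1e-8))
```

Even a modest load from a soft hair, concentrated at a microcrack in a heterogeneous spot of the edge, can thus exceed the local strength of the steel.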

The researchers have filed a provisional patent on a process to manipulate steel into a more homogenous form, in order to make longer-lasting, more chip-resistant blades.

“The basic idea is to reduce this heterogeneity, while we keep the high hardness,” Roscioli says. “We’ve learned how to make better blades, and now we want to do it.”

11:00 am
A new tool for modeling the human gut microbiome

Several thousand strains of bacteria live in the human gut. Some of these are associated with disease, while others have beneficial effects on human health. Figuring out the precise role of each of these bacteria can be difficult, because many of them can’t be grown in lab studies using human tissue.

This difficulty is especially pronounced for species that cannot live in oxygen-rich environments. However, MIT biological and mechanical engineers have now designed a specialized device in which they can grow those oxygen-intolerant bacteria in tissue that replicates the lining of the colon, allowing them to survive for up to four days.

“We thought it was really important to contribute a tool to the community that could be used for this extreme case,” says Linda Griffith, the School of Engineering Professor of Teaching Innovation in MIT’s Department of Biological Engineering. “We showed that you can grow these very fastidious organisms, and we were able to study the effects they have on the human colon.”

Using this system, the researchers showed that they could grow a strain of bacteria called Faecalibacterium prausnitzii, which lives in the human gut and protects against inflammation. They also showed that these bacteria, which are often diminished in patients with Crohn’s disease, appear to exert many of their protective effects through the release of a fatty acid called butyrate.

Griffith and David Trumper, an MIT professor of mechanical engineering, are the senior authors of the study, which appears today in the journal Med. MIT postdocs Jianbo Zhang and Yu-Ja Huang are the lead authors of the paper.

Oxygen sensitivity

The human gut’s complex microbiome environment is difficult to model using animals such as mice, in part because mice eat a very different diet from humans, Griffith says.

“We've learned a huge amount from mice and other animal models, but there are a lot of differences, especially when it comes to the gut microbiome,” she says.

Most of the bacteria that live in the human gut are anaerobic, meaning that they do not require oxygen to survive. Some of these bacteria can tolerate low levels of oxygen, while others, such as F. prausnitzii, cannot survive oxygen exposure, which makes it difficult to study them in a laboratory. Some researchers have designed devices in which they can grow human colon cells along with bacteria that tolerate low levels of oxygen, but these don’t work well for F. prausnitzii and other highly oxygen-intolerant microbes.

To overcome this, the MIT team designed a device that allows them to precisely control oxygen levels in each part of the system. Their device contains a channel that is coated with cells from the human mucosal barrier of the colon. Below these cells, nutrients are pumped in to keep the cells alive. This bottom layer is oxygen-rich, but the concentration of oxygen decreases toward the top of the mucosal cell layer, similarly to what happens in the interior of the human colon.

Just as they do in the human colon, the barrier cells in the channel secrete a dense layer of mucus. The MIT team showed that F. prausnitzii can form clouds of cells in the outer layer of this mucus and survive there for up to four days, in an environment that is kept oxygen-free by fluid flowing across it. This fluid also contains nutrients for the microbes.

Using this system, the researchers were able to show that F. prausnitzii does influence cell pathways involved in inflammation. They observed that the bacteria produce a short-chain fatty acid called butyrate, which has previously been shown to reduce inflammation. After butyrate levels went up, the mucosal cells showed a reduction in the activity of a pathway called NF kappa B. This reduction calms inflammation.

“Overall, this pathway has been reduced, which is really similar to what people have seen in humans,” Zhang says. “It seems that the bacteria are desensitizing the mammalian cells to not overreact to the dangers in the outside environment, so the inflammation status is being calmed down by the bacteria.”

Patients with Crohn’s disease often have reduced levels of F. prausnitzii, and the lack of those bacteria is hypothesized to contribute to the overactive inflammation seen in those patients.

When the researchers added butyrate to the system, without bacteria, it did not generate all of the effects that they saw when the bacteria were present. This suggests that some of the bacteria’s effects may be exerted through other mechanisms, which the researchers hope to further investigate.

Microbes and disease

The researchers also plan to use their system to study what happens when they add other species of bacteria that are believed to play a role in Crohn’s disease, to try to further explore the effects of each species.

They are also planning a study, working with Alessio Fasano, the division chief of pediatric gastroenterology and nutrition at Massachusetts General Hospital, to grow mucosal tissue from patients with celiac disease and other gastrointestinal disorders. This tissue could then be used to study microbe-induced inflammation in cells with different genetic backgrounds.

“We are hoping to get new data that will show how the microbes and the inflammation work with the genetic background of the host, to see if there could be people who have a genetic susceptibility to having microbes interfere with the mucosal barrier a little more than other people,” Griffith says.

She also hopes to use the device to study other types of mucosal barriers, including those of the female reproductive tract, such as the cervix and the endometrium.

The research was funded by the U.S. National Institutes of Health, the Boehringer Ingelheim SHINE Program, and the National Institute of Environmental Health Sciences.

Tuesday, August 4th, 2020
10:51 am
Key brain region was “recycled” as humans developed the ability to read

Humans began to develop systems of reading and writing only within the past few thousand years. Our reading abilities set us apart from other animal species, but a few thousand years is much too short a timeframe for our brains to have evolved new areas specifically devoted to reading.

To account for the development of this skill, some scientists have hypothesized that parts of the brain that originally evolved for other purposes have been “recycled” for reading. As one example, they suggest that a part of the visual system that is specialized to perform object recognition has been repurposed for a key component of reading called orthographic processing — the ability to recognize written letters and words.

A new study from MIT neuroscientists offers evidence for this hypothesis. The findings suggest that even in nonhuman primates, who do not know how to read, a part of the brain called the inferotemporal (IT) cortex is capable of performing tasks such as distinguishing words from nonsense words, or picking out specific letters from a word.

“This work has opened up a potential linkage between our rapidly developing understanding of the neural mechanisms of visual processing and an important primate behavior — human reading,” says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds, and Machines, and the senior author of the study.

Rishi Rajalingham, an MIT postdoc, is the lead author of the study, which appears today in Nature Communications. Other MIT authors are postdoc Kohitij Kar and technical associate Sachi Sanghavi. The research team also includes Stanislas Dehaene, a professor of experimental cognitive psychology at the Collège de France.

Word recognition

Reading is a complex process that requires recognizing words, assigning meaning to those words, and associating words with their corresponding sound. These functions are believed to be spread out over different parts of the human brain.

Functional magnetic resonance imaging (fMRI) studies have identified a region called the visual word form area (VWFA) that lights up when the brain processes a written word. This region is involved in the orthographic stage: It discriminates words from jumbled strings of letters or words from unknown alphabets. The VWFA is located in the IT cortex, a part of the visual cortex that is also responsible for identifying objects.

DiCarlo and Dehaene became interested in studying the neural mechanisms behind word recognition after cognitive psychologists in France reported that baboons could learn to discriminate words from nonwords, in a study that appeared in Science in 2012.

Using fMRI, Dehaene’s lab has previously found that parts of the IT cortex that respond to objects and faces become highly specialized for recognizing written words once people learn to read.

“However, given the limitations of human imaging methods, it has been challenging to characterize these representations at the resolution of individual neurons, and to quantitatively test if and how these representations might be reused to support orthographic processing,” Dehaene says. “These findings inspired us to ask if nonhuman primates could provide a unique opportunity to investigate the neuronal mechanisms underlying orthographic processing.”

The researchers hypothesized that if parts of the primate brain are predisposed to process text, they might be able to find patterns reflecting that in the neural activity of nonhuman primates as they simply look at words.

To test that idea, the researchers recorded neural activity from about 500 neural sites across the IT cortex of macaques as they looked at about 2,000 strings of letters, some of which were English words and some of which were nonsensical strings of letters.

“The efficiency of this methodology is that you don't need to train animals to do anything,” Rajalingham says. “What you do is just record these patterns of neural activity as you flash an image in front of the animal.”

The researchers then fed that neural data into a simple computer model called a linear classifier. This model learns to combine the inputs from each of the 500 neural sites to predict whether the string of letters that provoked that activity pattern was a word or not. While the animal itself is not performing this task, the model acts as a “stand-in” that uses the neural data to generate a behavior, Rajalingham says.

Using that neural data, the model was able to generate accurate predictions for many orthographic tasks, including distinguishing words from nonwords and determining if a particular letter is present in a string of letters. The model was about 70 percent accurate at distinguishing words from nonwords, which is very similar to the rate reported in the 2012 Science study with baboons. Furthermore, the patterns of errors made by the model were similar to those made by the animals.
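The decoding approach can be sketched with a toy example: synthetic "responses" from 500 recording sites, and a simple linear read-out. Here the classifier is a nearest-centroid rule in plain Python, and the 0.05 response offset is an arbitrary illustrative signal; the study's actual data and classifier differ.

```python
import random

random.seed(0)
N_SITES = 500          # simulated neural recording sites
N_TRAIN, N_TEST = 200, 200

def response(is_word):
    # Each site responds slightly differently to words vs. nonwords
    # (the 0.05 offset is an illustrative signal, not measured data).
    mean = 0.05 if is_word else -0.05
    return [random.gauss(mean, 1.0) for _ in range(N_SITES)]

train = [(response(lbl), lbl) for lbl in (True, False) for _ in range(N_TRAIN // 2)]
test = [(response(lbl), lbl) for lbl in (True, False) for _ in range(N_TEST // 2)]

def mean_vec(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

# Linear read-out: weight vector = difference of the class centroids,
# threshold halfway between them.
mu_word = mean_vec([x for x, lbl in train if lbl])
mu_non = mean_vec([x for x, lbl in train if not lbl])
w = [a - b for a, b in zip(mu_word, mu_non)]
threshold = sum(wi * (a + b) / 2 for wi, a, b in zip(w, mu_word, mu_non))

def predict(x):
    return sum(wi * xi for wi, xi in zip(w, x)) > threshold

accuracy = sum(predict(x) == lbl for x, lbl in test) / len(test)
print(f"decoding accuracy: {accuracy:.2f}")  # well above chance
```

The point of the sketch is that no single site needs to "know" words from nonwords; a linear combination across many weakly informative sites can still decode the distinction, which is the role the linear classifier plays as a behavioral stand-in for the animal.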

Neuronal recycling

The researchers also recorded neural activity from a different brain area that feeds into the IT cortex: V4, which is part of the visual cortex. When they fed V4 activity patterns into the linear classifier model, its predictions of human and baboon performance on the orthographic processing tasks were markedly worse than those based on IT activity.

The findings suggest that the IT cortex is particularly well-suited to be repurposed for skills that are needed for reading, and they support the hypothesis that some of the mechanisms of reading are built upon highly evolved mechanisms for object recognition, the researchers say.

The researchers now plan to train animals to perform orthographic tasks and measure how their neural activity changes as they learn the tasks.

The research was funded by the Simons Foundation and the U.S. Office of Naval Research.

12:00 am
Lava oceans may not explain the brightness of some hot super-Earths

Arguably some of the weirdest, most extreme planets among the more than 4,000 exoplanets discovered to date are the hot super-Earths — rocky, flaming-hot worlds that zing so precariously close to their host stars that some of their surfaces are likely melted seas of molten lava.

These fiery worlds, about the size of Earth, are known more evocatively as “lava-ocean planets,” and scientists have observed that a handful of these hot super-Earths are unusually bright, and in fact brighter than our own brilliant blue planet.

Exactly why these far-off fireballs are so bright is unclear, but new experimental evidence by scientists at MIT shows that the unexpected glow from these worlds is likely not due to either molten lava or cooled glass (i.e. rapidly solidified lava) on their surfaces.

The researchers came to this conclusion after interrogating the problem in a refreshingly direct way: melting rocks in a furnace and measuring the brightness of the resulting lava and cooled glass, which they then used to calculate the brightness of regions of a planet covered in molten or solidified material. Their results revealed that lava and glass, at least as a product of the materials they melted in the lab, are not reflective enough to explain the observed brightness of certain lava-ocean planets.

Their findings suggest that hot super-Earths may have other surprising features that contribute to their brightness, such as metal-rich atmospheres and highly reflective clouds.

“We still have so much to understand about these lava-ocean planets,” says Zahra Essack, a graduate student in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “We thought of them as just glowing balls of rock, but these planets may have complex systems of surface and atmospheric processes that are quite exotic, and not anything we’ve ever seen before.”

Essack is the first author of a study detailing the team’s results, which appears today in The Astrophysical Journal. Her co-authors are former MIT postdoc Mihkel Pajusalu, who was instrumental in the experiment’s initial setup, and Sara Seager, the Class of 1941 Professor of Planetary Science, with appointments in the departments of Physics and Aeronautics and Astronautics.

More than charcoal balls

Hot super-Earths are between one and 10 times the mass of Earth, and have extremely short orbital periods, circling their host star in just 10 days or less. Scientists have expected that these lava worlds would be so close to their host star that any appreciable atmosphere and clouds would be stripped away. Their surfaces as a result would be at least 850 kelvins, or 1,070 degrees Fahrenheit — hot enough to cover the surface in oceans of molten rock.

Scientists have previously discovered a handful of super-Earths with unexpectedly high albedos, or brightnesses, reflecting between 40 and 50 percent of the light from their star. In comparison, Earth’s albedo, with all of its reflective surfaces and clouds, is only around 30 percent.

“You’d expect these lava planets to be sort of charcoal balls orbiting in space — very dark, not very bright at all,” Essack says. “So what makes them so bright?”

One idea has been that the lava itself may be the main source of the planets’ luminosity, though there had never been any proof, either in observations or experiments.

“So being MIT people, we decided, ok, we should make some lava and see if it’s bright or not,” Essack says.

Making lava

To first make lava, the team needed a furnace that could reach temperatures high enough to melt basalt and feldspar, the two rock types they chose for their experiments because they are well-characterized materials that are common on Earth.

As it turns out, they initially didn’t have to look farther than the foundry at MIT, a space within the Department of Materials Science and Engineering, where trained metallurgists help students and researchers melt materials in the foundry’s furnace for research and class projects.

Essack brought samples of feldspar to the foundry, where metallurgists determined the type of crucible in which to place them, and the temperatures at which they needed to be heated.

“They drop it in the furnace, let the rocks melt, take it out, and then the whole place turns into a furnace itself — it’s very hot,” Essack says. “And it was an incredible experience to stand next to this bright glowing lava, feeling that heat.”

However, the experiment quickly ran up against an obstacle: The lava, once it was pulled from the furnace, almost instantly cooled into a smooth, glassy material. The process occurred so quickly that Essack wasn’t able to measure the lava’s reflectivity while still molten.

So she took the cooled feldspar glass to a spectroscopy lab she designed and built on campus to measure its reflectance, by shining a light on the glass from different angles and measuring the amount of light reflecting back from the surface. She repeated these experiments for cooled basalt glass, samples of which were donated by colleagues at Syracuse University who run the Lava Project. Seager visited them a few years ago for a preliminary version of the experiment, and at that time collected the basalt samples now used in Essack’s experiments.

“They melted a huge bunch of basalt and poured it down a slope, and they chipped it up for us,” Seager says.

After measuring the brightness of cooled basalt and feldspar glass, Essack looked through the literature to find reflectivity measurements of molten silicates, which are a major component of lava on Earth. She used these measurements as a reference to calculate how bright the initial lava from the basalt and feldspar glass would be. She then estimated the brightness of a hot super-Earth covered either entirely in lava or cooled glass, or combinations of the two materials.

In the end, she found that, no matter the combination of surface materials, the albedo of a lava-ocean planet would be no more than about 10 percent — pretty dark compared with the 40 to 50 percent albedo observed for some hot super-Earths.
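The final step is a simple area-weighted average: a surface that is part molten lava and part solidified glass reflects a fraction-weighted mix of the two albedos. A minimal sketch, where the default reflectance values are illustrative placeholders consistent with the study's roughly 10 percent upper bound, not the paper's measured numbers:

```python
def surface_albedo(frac_lava, albedo_lava=0.05, albedo_glass=0.10):
    """Area-weighted albedo of a surface that is part lava, part glass.

    The default reflectances are illustrative stand-ins, not measured
    values from the study.
    """
    assert 0.0 <= frac_lava <= 1.0
    return frac_lava * albedo_lava + (1.0 - frac_lava) * albedo_glass

# Any mix of the two materials stays far below the 40-50 percent
# albedo observed for some hot super-Earths.
for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{f:.2f} lava -> albedo {surface_albedo(f):.3f}")
```

Because a weighted average can never exceed the larger of its two inputs, no combination of dark lava and dark glass can reach the observed brightness, which is what rules the surface out as the explanation.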

“This is quite dark compared to Earth, and not enough to explain the brightness of the planets we were interested in,” Essack says.

This realization has narrowed the search range for interpreting observations, and directs future studies to consider other exotic possibilities, such as the presence of atmospheres rich in reflective metals.

“We’re not 100 percent sure what these planets are made of, so we’re narrowing the parameter space and guiding future studies toward all these other potential options,” Essack says.

This research was funded, in part, by NASA’s TESS mission and, in part, by the MIT Presidential Fellowship.

Monday, August 3rd, 2020
10:49 am
Can a quantum strategy help bring down the house?

In some versions of the game blackjack, one way to win against the house is for players at a table to work as a team: they keep track of the cards they have been dealt and covertly communicate that information to one another. With that knowledge, they can estimate the cards still in the deck, and those most likely to be dealt next, helping each player decide how to place their bets and giving the team an advantage over the dealer.

This calculating strategy, known as card-counting, was made famous by the MIT Blackjack Team, a group of students from MIT, Harvard University, and Caltech, who for several decades starting in 1979, optimized card-counting and other techniques to successfully beat casinos at blackjack around the world — a story that later inspired the book “Bringing Down the House.”

Now researchers at MIT and Caltech have shown that the weird, quantum effects of entanglement could theoretically give blackjack players even more of an edge, albeit a small one, when playing against the house.

In a paper published this week in the journal Physical Review A, the researchers lay out a theoretical scenario in which two players, playing cooperatively against the dealer, can better coordinate their strategies using a quantumly entangled pair of systems. Such systems exist now in the laboratory, although not in forms convenient for any practical use in casinos. In their study, the authors nevertheless explore the theoretical possibilities for how a quantum system might influence outcomes in blackjack.

They found that such quantum communication would give the players a slight advantage compared to classical card-counting strategies, though only in limited situations where few cards remain in the dealer’s deck.

“It’s pretty small in terms of the actual magnitude of the expected quantum advantage,” says first author Joseph Lin, a former graduate student at MIT. “But if you imagine the players are extremely rich, and the deck is really low in number, so that every card counts, these small advantages can be big. The exciting result is that there’s some advantage to quantum communication, regardless of how small it is.”

Lin’s MIT co-authors on the paper are professor of physics Joseph Formaggio, associate professor of physics Aram Harrow, and Anand Natarajan of Caltech, who will start at MIT in September as assistant professor of electrical engineering and computer science.

Quantum dealings

Entanglement is a phenomenon described by the rules of quantum mechanics, in which two physically separate objects can be “entangled,” or correlated, in such a way that the correlations between them are stronger than what the classical laws of physics and probability allow.

In 1964, physicist John Bell proved mathematically that the correlations produced by quantum entanglement cannot be reproduced by any classical theory, and also devised a test — known as a Bell test — that scientists have since applied to many scenarios to ascertain whether certain spatially remote particles or systems behave according to classical, real-world physics, or whether they exhibit quantum, entangled states.
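The gap between classical and entangled correlations can be made concrete with the CHSH form of Bell's test: for any classical (local hidden-variable) strategy, a particular combination of measurement correlations is bounded by 2, while a maximally entangled pair of qubits reaches 2√2. A short sketch using the standard singlet-state correlation E(a, b) = −cos(a − b):

```python
import math

def singlet_correlation(a, b):
    """Quantum correlation of spin measurements at angles a and b
    on a maximally entangled (singlet) pair: E = -cos(a - b)."""
    return -math.cos(a - b)

# Standard CHSH measurement angles (radians) that maximize the violation.
a0, a1 = 0.0, math.pi / 2
b0, b1 = math.pi / 4, 3 * math.pi / 4

E = singlet_correlation
chsh = abs(E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1))

# Classical strategies are capped at 2; entanglement reaches 2*sqrt(2).
print(f"CHSH value: {chsh:.3f}")
```

It is exactly this kind of stronger-than-classical correlation that the blackjack scenario below exploits: Alice and Bob coordinate their choices more tightly than any classical signaling-free strategy would permit.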

“One motivation for this work was as a concrete realization of the Bell test,” says Harrow of the team’s new paper. “People wrote the rules of blackjack not thinking of entanglement. But the players are dealt cards, and there are some correlations between the cards they get. So does entanglement work here? The answer to the question was not obvious going into it.”

After casually entertaining the idea during a regular poker night with friends, Formaggio decided to explore the possibility of quantum blackjack more formally with his MIT colleagues.

“I was grateful to them for not laughing and closing the door on me when I brought up the idea,” Formaggio recalls.

Correlated cards

In blackjack, the dealer deals herself and each player a face-up card that is public to all, and a face-down card. With this information, each player decides whether to “hit,” and be dealt another card, or “stand,” and stay with the cards they have. The goal after one round is to have a hand with a total that is closer to 21, without going over, than the dealer and the other players at the table.

In their paper, the researchers simulated a simple blackjack setup involving two players, Alice and Bob, playing cooperatively against the dealer. They programmed Alice to consistently bet low, with the main objective of helping Bob, who could hit or stand based on any information he gained from Alice.

The researchers considered how three different scenarios might help the players win over the dealer: a classical card-counting scenario without communication; a best-case scenario in which Alice simply shows Bob her face-down card, demonstrating the best that a team can do in playing against the dealer; and lastly, a quantum entanglement scenario.

In the quantum scenario, the researchers formulated a mathematical model to represent a quantum system, which can be thought of abstractly as a box with many “buttons,” or measurement choices, that is shared between Alice and Bob.

For instance, if Alice’s face-down card is a 5, she can push a particular button on the quantum box and use its output to inform her usual choice of whether to hit or stand. Bob, in turn, looks at his face-down card when deciding which button to push on his quantum box, as well as whether to use the box at all. In the cases where Bob uses his quantum box, he can combine its output with his observation of Alice’s strategy to decide his own move. This extra information — not exactly the value of Alice’s card, but more information than a random guess — can help Bob decide whether to hit or stand.

The researchers ran all three scenarios, with many combinations of cards between each player and the dealer, and with increasing number of cards left in the dealer’s deck, to see how often Alice and Bob could win against the dealer.

After running thousands of rounds for each of the three scenarios, they found that the players had a slight advantage over the dealer in the quantum entanglement scenario, compared with the classical card-counting strategy, though only when a handful of cards were left in the dealer’s deck.

“As you increase the deck and therefore increase all the possibilities of different cards coming to you, the fact that you know a little bit more through this quantum process actually gets diluted,” Formaggio explains.

Nevertheless, Harrow notes that “it was surprising that these problems even matched, that it even made sense to consider entangled strategy in blackjack.”

Do these results mean that future blackjack teams might use quantum strategies to their advantage?

“It would require a very large investor, and my guess is, carrying a quantum computer in your backpack will probably tip the house,” Formaggio says. “We think casinos are safe right now from this particular threat.”

This research was funded, in part, by the National Science Foundation, the Army Research Office, the U.S. Department of Energy, and the MIT Undergraduate Research Opportunities Program (UROP).

Sunday, August 2nd, 2020
12:00 am
New US postage stamp highlights MIT research

Letter writers across the country will soon have a fun and beautiful new Forever stamp to choose from, featuring novel research from the Media Lab's Biomechatronics research group. 

The stamp is part of a new U.S. Postal Service (USPS) series on innovation, representing computing, biomedicine, genome sequencing, robotics, and solar technology. For the robotics category, the USPS chose the bionic prosthesis designed and built by Matt Carney PhD ’20 and members of the Biomechatronics group, led by Professor Hugh Herr.

The image used in the stamp was taken by photographer Andy Ryan, whose portfolio spans images from around the world, and who for many years has been capturing the MIT experience — from stunning architectural shots to the research work of labs across campus. Ryan suggested the bionic work of the biomechatronics group to USPS to represent the future of robotics. Ryan also created the images that became the computing and solar technology stamps in the series. 

“I was aware that Hugh Herr and his research team were incorporating robotic elements into the prosthetic legs they were developing and testing,” Ryan notes. “This vision of robotics was, in my mind, a true depiction of how robots and robotics would manifest and impact society in the future." 

With encouragement from Herr, Ryan submitted high-definition, stylized, and close-up images of Matt Carney working on the group's latest designs. 

Carney, who recently completed his PhD in media arts and sciences at the Media Lab, views bionic limbs as the ultimate humanoid robot, and an ideal innovation to represent and portray robotics in 2020. He was all-in for sharing that work with the world.

"Robotic prostheses integrate biomechanics, mechanical, electrical, and software engineering, and no piece is off-the-shelf,” Carney says. “To attempt to fit within the confines of the human form, and to match the bandwidth and power density of the human body, we must push the bounds of every discipline: computation, strength of materials, magnetic energy densities, sensors, biological interfaces, and so much more."

In his childhood, Carney himself collected stamps from different corners of the globe, and so the selection of his research for a U.S. postage stamp has been especially meaningful. 

"It's a freakin' honor to have my PhD work featured as a USPS stamp," Carney says, breaking into a big smile. "I hope this feat is an inspiration to young students everywhere to crush their homework, and to build the skills to make a positive impact on the world. And while I worked insane hours to build this thing — and really tried to inspire with its design as much as its engineering — it's truly the culmination of powered prosthesis work pioneered by Dr. Hugh Herr and our entire team at the Media Lab's Biomechatronics group, and it expands on work from a global community over more than a decade of development."

The new MIT stamp joins a venerable list of stamps associated with the Institute. Previously issued stamps have featured Apollo 11 astronaut and moonwalker Buzz Aldrin ScD ’63; Nobel Prize winner Richard Feynman ’39; architect Robert Robinson Taylor, who graduated from MIT in 1892 and is considered the nation’s first academically trained African American architect; and Pritzker Prize-winning architect I.M. Pei ’40, whose work includes the Louvre Glass Pyramid and the East Building of the National Gallery of Art in Washington, as well as numerous buildings on the MIT campus. 

The new robotics stamp, however, is the first to feature MIT research, as well as members of the MIT community.

"I'm deeply honored that a USPS Forever stamp has been created to celebrate technologically-advanced robotic prostheses, and along with that, the determination to alleviate human impairment," Herr says. "Through the marriage of human physiology and robotics, persons with leg amputation can now walk with powered prostheses that closely emulate the biological leg. By integrating synthetic sensors, artificial computation, and muscle-like actuation, these technologies are already improving people's lives in profound ways, and may one day soon bring about the end of disability."

The Innovation Stamp series will be available for purchase through the U.S. Postal Service later this month.

Friday, July 31st, 2020
2:15 pm
An automated health care system that understands when to step in

In recent years, entire industries have popped up that rely on the delicate interplay between human workers and automated software. Companies like Facebook work to keep hateful and violent content off their platforms using a combination of automated filtering and human moderators. In the medical field, researchers at MIT and elsewhere have used machine learning to help radiologists better detect different forms of cancer.

What can be tricky about these hybrid approaches is understanding when to rely on the expertise of people versus programs. This isn’t always merely a question of who does a task “better”; indeed, if a person has limited bandwidth, the system may have to be trained to minimize how often it asks for help.

To tackle this complex issue, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a machine learning system that can either make a prediction about a task, or defer the decision to an expert. Most importantly, it can adapt when and how often it defers to its human collaborator, based on factors such as its teammate’s availability and level of experience.

The team trained the system on multiple tasks, including looking at chest X-rays to diagnose specific conditions such as atelectasis (lung collapse) and cardiomegaly (an enlarged heart). In the case of cardiomegaly, they found that their human-AI hybrid model performed 8 percent better than either could on their own (based on AU-ROC scores).  

“In medical environments where doctors don’t have many extra cycles, it’s not the best use of their time to have them look at every single data point from a given patient’s file,” says PhD student Hussein Mozannar, lead author with David Sontag, the Von Helmholtz Associate Professor of Medical Engineering in the Department of Electrical Engineering and Computer Science, of a new paper about the system that was recently presented at the International Conference on Machine Learning. “In that sort of scenario, it’s important for the system to be especially sensitive to their time and only ask for their help when absolutely necessary.”

The system has two parts: a “classifier” that can predict a certain subset of tasks, and a “rejector” that decides whether a given task should be handled by either its own classifier or the human expert.
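
The classifier/rejector split can be sketched in a few lines of Python. This is a toy illustration with invented names (`classifier`, `expert`, and a fixed confidence threshold standing in for the rejector), not the authors' implementation; in the actual system the rejector is itself learned and adapts to the expert.

```python
import numpy as np

# Minimal sketch of the two-part design: a "classifier" handles inputs it
# is confident about, and a "rejector" routes the rest to a human expert.
# Here the rejector is a simple confidence threshold; raising it defers
# more often, lowering it conserves the expert's limited time.

def classifier(x):
    """Toy model: predicted label and confidence for a scalar input."""
    p = 1.0 / (1.0 + np.exp(-4.0 * x))   # probability of class 1
    label = int(p > 0.5)
    confidence = max(p, 1.0 - p)
    return label, confidence

def expert(x):
    """Simulated expert: accurate, but each query has a cost."""
    return int(x > 0)

def predict_with_deferral(x, defer_threshold=0.8):
    """Return (prediction, deferred?) -- defer only when unsure."""
    label, conf = classifier(x)
    if conf < defer_threshold:
        return expert(x), True
    return label, False

rng = np.random.default_rng(0)
xs = rng.normal(size=1000)
deferral_rate = np.mean([predict_with_deferral(x)[1] for x in xs])
print(f"deferred to the expert on {deferral_rate:.0%} of inputs")
```

Lowering `defer_threshold` models the paper's scenario of a busy expert: the system asks for help only on the cases where its own confidence is weakest.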

Through experiments on tasks in medical diagnosis and text/image classification, the team showed that their approach not only achieves better accuracy than baselines, but does so with a lower computational cost and with far fewer training data samples.

“Our algorithms allow you to optimize for whatever choice you want, whether that’s the specific prediction accuracy or the cost of the expert’s time and effort,” says Sontag, who is also a member of MIT’s Institute for Medical Engineering and Science. “Moreover, by interpreting the learned rejector, the system provides insights into how experts make decisions, and in which settings AI may be more appropriate, or vice-versa.”

The system’s particular ability to help detect offensive text and images could also have interesting implications for content moderation. Mozannar suggests that it could be used at companies like Facebook in conjunction with a team of human moderators. (He is hopeful that such systems could minimize the amount of hateful or traumatic posts that human moderators have to review every day.)

Sontag clarified that the team has not yet tested the system with human experts, but instead developed a series of “synthetic experts” so that they could tweak parameters such as experience and availability. In order to work with a new expert it’s never seen before, the system would need some minimal onboarding to get trained on the person’s particular strengths and weaknesses.

In future work, the team plans to test their approach with real human experts, such as radiologists for X-ray diagnosis. They will also explore how to develop systems that can learn from biased expert data, as well as systems that can work with — and defer to — several experts at once. For example, Sontag imagines a hospital scenario where the system could collaborate with different radiologists who are more experienced with different patient populations.

“There are many obstacles that understandably prohibit full automation in clinical settings, including issues of trust and accountability,” says Sontag. “We hope that our method will inspire machine learning practitioners to get more creative in integrating real-time human expertise into their algorithms.” 

Mozannar is affiliated with both CSAIL and the MIT Institute for Data, Systems, and Society (IDSS). The team’s work was supported, in part, by the National Science Foundation.

Thursday, July 30th, 2020
12:00 am
Q&A: Peter Fisher discusses JASON report on reopening university laboratories

What will it take for research universities across the U.S. to safely open their labs? That’s the subject of a recently released report by JASON, an independent group of scientists who advise the U.S. government about science and technology, in association with the MITRE Corporation. The report was led by Peter Fisher, professor and head of MIT’s Department of Physics, who is a JASON member. (MIT has separately examined the question, and began a phased ramp-up of lab research in June; Fisher participated in MIT reopening efforts as well.) MIT News talked with Fisher about the JASON report.

Q: What are the main things the JASON report recommends to universities trying to reopen labs?

A: Probably the top-level things are just mask-wearing, handwashing, and social distancing. Those are three things that everybody can do. And people know this, but the report looks in exhaustive detail at why those things are important — from the physics of masks to a whole section on air handling and how the virus builds up in the air. The design of campuses, including MIT, is intended to bring people together as much as possible, so we are really fighting the physical nature of MIT.

The research is important because the people who work in the labs and come back to campuses have to buy into these rules. And I think they buy in much better if they understand the science behind the rules.

Q: One implication of the report is that details matter greatly. For instance, the report notes that, compared to breathing, the viral load people exhale from speaking is 20 times as great — so you recommend communicating via text, whiteboards, scratch pads, or other nonverbal means. And the report has a section on proper mask fitting. Won’t people have to get used to some distinctly new practices?

A: Yes. For example, really minimize the amount you speak. And you can whisper. The amount that comes out between breathing [on the one hand], and speaking, singing, and shouting [on the other] is just enormous. … [Regarding masks], if you are in a hospital wearing one, part of the standard operating procedure is there is a specialist who fits it onto your face. There’s a metal bar that goes across your nose and it’s all about how you press down that metal bar so it forms a good seal, going across your nose and your cheekbones.

Q: Beyond those things, the report suggests making aggregate health information available to the community in a dashboard format. Why would that help?

A: Because people are competitive. And sometimes if you want to get people to do something, you turn it into a game. Also, from some of the work the organization does with the military, there is what’s called situational awareness. That comes down to a few numbers that tell you how you’re doing. And for Covid-19, it would be how many tests have you administered; how many of them came back positive; how many people are coming onto campus every day; and how many of them are not complying with the rules, like wearing masks. It’s all anonymous, but it gives you a snapshot of how you’re doing.

When you design a system, you can’t just assume that compliance is going to be 100 percent. It’s going to be lower, and you have to account for that in your planning and thinking. That’s one [reason] we talked about the dashboard. Another thing [that helps] is really good modeling. At MIT, the senior researchers and faculty are very highly regarded and closely watched. And I think really the faculty and senior research staff have to model this behavior.

Q: The report also discusses whether university campuses can be “islands” apart from the pandemic, or part of a larger community. What are some of the factors at play there?

A: It depends on what you’re willing to do and where you are. MIT is an urban campus and Cambridge is one of the most densely populated places around. Everybody on campus, before Covid-19, relied on the surrounding community for something — food, entertainment, health services. And it’s hard to change that. Inevitably, how a university is doing is going to be closely coupled to the community it’s embedded in. It’s true of all the big Boston-area campuses.

In [rural cases], the college can separate itself pretty well from the city. … [For example], at Vassar they are having a strict rule: Nobody leaves campus, and they’re going to try to make it into a bubble. Some colleges, you can draw a line around campus, and it’s clear what is campus and what is not campus. But MIT leaks out all into the city.

Q: It seems apparent from all this that some universities might have midstream, midsemester decisions to make. There might be plans in place, but don’t institutions have to keep in mind that circumstances can change, this fall or further out in the future?

A: It’s really tough because the nature of this disease is that the people feeling symptoms now were infected five days ago. The people who are in the hospital now [were infected] 14 days ago. Knowing how many positive cases you’re getting today is not a great indicator of how many people are going to feel sick tomorrow. It’s an indicator of how many people are going to feel really sick in two weeks, and during that time, the virus can spread exponentially. So there are going to have to be some tough decisions to be made. You don’t want to have to suddenly make them when you have 400 sick students to contend with. I think MIT’s done smart things — including the single-room occupancy housing policy. The leadership at MIT is taking the long view.

One thing that everybody is only starting to struggle with is that the world just changed, in this really fundamental way. We’re all hoping it gets a lot better, but where that ends up is not going to be where we were at the start of 2020. We’re living through a remarkable moment in history. It’s quite daunting to think about, but what we always try to do with JASON [reports] is just detail the bedrock science.

Wednesday, July 29th, 2020
5:00 pm
Rapid antibody development yields possible treatment for yellow fever

Yellow fever, a hemorrhagic disease that is common in South America and sub-Saharan Africa, infects about 200,000 people per year and causes an estimated 30,000 deaths. While there is a vaccine for yellow fever, it can’t be given to some people because of the risk of side effects, and there are no approved treatments for the disease. 

An international team of researchers, led by MIT Professor Ram Sasisekharan, has now developed a potential treatment for yellow fever. Their drug, an engineered monoclonal antibody that targets the virus, has shown success in early-stage clinical trials in Singapore. 

This class of antibodies holds promise for treating a variety of infectious diseases, but it usually takes several years to develop and test them. The MIT-led researchers demonstrated that they could design, produce, and begin clinical trials of their antibody drug within seven months.

Their approach, which condenses the timeline by performing many of the steps necessary for drug development in parallel, could also be applied to developing new treatments for Covid-19, says Sasisekharan, the Alfred H. Caspary Professor of Biological Engineering and Health Sciences and Technology. He adds that a potential Covid-19 antibody treatment, developed using this approach in a process that took just four months, has shown no adverse events in healthy volunteers in phase 1 clinical trials, and phase 3 trials are expected to start in early August in Singapore.

“Traditional drug development processes are very linear, and they take many years,” Sasisekharan says. “If you’re going to get something to humans fast, you can’t do it linearly, because then the best-case scenario for testing in humans is a year to 18 months. If you need to develop a drug in six months or less, then a lot of these things need to happen in parallel.”

Jenny Low, a senior consultant in infectious diseases at Singapore General Hospital, is the lead author of the study, which appears today in the New England Journal of Medicine. Researchers from the Singapore-MIT Alliance for Research and Technology (SMART), Duke-National University of Singapore Medical School, and the biotechnology company Tysana Pte also contributed to the study.

Speeding up the process

Several types of monoclonal antibodies have been approved to treat a variety of cancers. These engineered antibodies help to stimulate a patient’s immune system to attack tumors by binding to proteins found on cancerous cells.

Many researchers are also working on monoclonal antibodies to treat infectious diseases. In recent years, scientists have developed an experimental cocktail of three monoclonal antibodies that target the Ebola virus, which has shown some success in clinical trials in the Democratic Republic of Congo.

Sasisekharan began working on a “rapid response” to emerging infectious diseases after the Zika outbreak that started in 2015. Singapore, which experienced a small outbreak of the Zika virus in 2016, is home to the SMART antimicrobial resistance research group, where Sasisekharan is a principal investigator.

The Sasisekharan lab antibody design process uses computational methods to target functionally important, and evolutionarily stable, regions on the virus. Building blocks from a database of all known antibody elements are selected based on several criteria, including their functional importance, to build candidate antibodies to evaluate. Testing these candidates provides valuable feedback, and the design loop continues until an optimized antibody that fully neutralizes the target virus is identified.

The group also explored new approaches to compress the timeline by performing many of the necessary steps in parallel, using analytical techniques to address regulatory risks associated with drug safety, manufacturing, and clinical study design. 

Using this approach, the researchers developed a candidate Zika treatment within nine months. They performed phase 1a clinical trials to test for safety in March 2018, but by the time they were ready to test the drug’s effectiveness in patients, the outbreak had ended. However, the team hopes to eventually test it in areas where the disease is still present.

Sasisekharan and his colleagues then decided to see if they could apply the same approach to developing a potential treatment for yellow fever. Yellow fever, a mosquito-borne disease, tends to appear seasonally in tropical and subtropical regions of South America and Africa. A particularly severe outbreak began in January 2018 in Brazil and lasted for several months. 

The MIT/SMART team began working on developing a yellow fever antibody treatment in March 2018, in hopes of having it ready to counter an outbreak so that it could be made available for potential patients in late 2018 or early 2019, when another outbreak was expected. They identified promising antibody candidates based on their ability to bind to the viral envelope and neutralize the virus that causes yellow fever. 

The researchers narrowed their candidates down to one antibody, which they called TY014. They then developed production methods to create small, uniform batches that they could use to perform necessary testing phases in parallel. These tests include studying the drug’s effectiveness in human cells, determining the most effective dosages, testing for potential toxicity, and analyzing how the drug behaves in animal models. As soon as they had results indicating that the treatment would be safe, they began clinical trials in December 2018.

“The mindset in the industry is that it’s like a relay race. You don’t start the next lap until you finish the previous lap,” Sasisekharan says. “In our case, we start each runner as soon as we can.”

Clinical trials

TY014 was clinically tested in parallel to address safety through dose escalation in healthy human volunteers. Once an appropriate dose was deemed safe, the researchers began a phase 1b trial, in which they measured the antibody’s ability to clear the virus. Even though the 1b trial had begun, the 1a trial continued until a maximum safe dose in humans was identified. 

Because there is a vaccine available for yellow fever, the researchers could perform a type of clinical trial known as a challenge test. They first vaccinated volunteers, then 24 hours later, they gave them either the experimental antibody drug or a placebo. Two days after that, they measured whether the drug cleared the weakened viruses that make up the vaccine.

The researchers found that following treatment, the virus was undetectable in blood samples from people who received the antibodies. The treatment also reduced inflammation following vaccination, compared to people who received the vaccine but not the antibody treatment. The phase 1b trial was completed in July 2019, and the researchers now hope to perform phase 2 clinical trials in patients infected with the disease. 

The research was funded by Tysana Pte. Tysana is also performing the clinical trials now underway for a Covid-19 treatment that was developed along with Singaporean government agencies including the Ministry of Defense, the Ministry of Health, and the Economic Development Board.

11:00 am
“Giant atoms” enable quantum processing and communication in one

MIT researchers have introduced a quantum computing architecture that can perform low-error quantum computations while also rapidly sharing quantum information between processors. The work represents a key advance toward a complete quantum computing platform.

Prior to this work, small-scale quantum processors had successfully performed tasks at a rate exponentially faster than that of classical computers. However, it has been difficult to controllably communicate quantum information between distant parts of a processor. In classical computers, wired interconnects are used to route information back and forth throughout a processor during the course of a computation. In a quantum computer, however, the information itself is quantum mechanical and fragile, requiring fundamentally new strategies to simultaneously process and communicate quantum information on a chip.

“One of the main challenges in scaling quantum computers is to enable quantum bits to interact with each other when they are not co-located,” says William Oliver, an associate professor of electrical engineering and computer science, MIT Lincoln Laboratory fellow, and associate director of the Research Laboratory of Electronics. “For example, nearest-neighbor qubits can easily interact, but how do I make ‘quantum interconnects’ that connect qubits at distant locations?”

The answer lies in going beyond conventional light-matter interactions.

Natural atoms are small and point-like with respect to the wavelength of the light they interact with. But in a paper published today in the journal Nature, the researchers show that this need not be the case for superconducting “artificial atoms.” Instead, they have constructed “giant atoms” from superconducting quantum bits, or qubits, connected in a tunable configuration to a microwave transmission line, or waveguide.

This allows the researchers to adjust the strength of the qubit-waveguide interactions so the fragile qubits can be protected from decoherence, or a kind of natural decay that would otherwise be hastened by the waveguide, while they perform high-fidelity operations. Once those computations are carried out, the strength of the qubit-waveguide couplings is readjusted, and the qubits are able to release quantum data into the waveguide in the form of photons, or light particles.

“Coupling a qubit to a waveguide is usually quite bad for qubit operations, since doing so can significantly reduce the lifetime of the qubit,” says Bharath Kannan, MIT graduate fellow and first author of the paper. “However, the waveguide is necessary in order to release and route quantum information throughout the processor. Here, we’ve shown that it’s possible to preserve the coherence of the qubit even though it’s strongly coupled to a waveguide. We then have the ability to determine when we want to release the information stored in the qubit. We have shown how giant atoms can be used to turn the interaction with the waveguide on and off.”

The system realized by the researchers represents a new regime of light-matter interactions, the researchers say. Unlike models that treat atoms as point-like objects smaller than the wavelength of the light they interact with, the superconducting qubits, or artificial atoms, are essentially large electrical circuits. When coupled with the waveguide, they create a structure as large as the wavelength of the microwave light with which they interact.

The giant atom emits its information as microwave photons at multiple locations along the waveguide, such that the photons interfere with each other. This process can be tuned to complete destructive interference, meaning the information in the qubit is protected. Furthermore, even when no photons are actually released from the giant atom, multiple qubits along the waveguide are still able to interact with each other to perform operations. Throughout, the qubits remain strongly coupled to the waveguide, but because of this type of quantum interference, they can remain unaffected by it and be protected from decoherence, while single- and two-qubit operations are performed with high fidelity.
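
The interference effect described above can be illustrated with a toy calculation. This is a textbook waveguide-QED result, not code from the paper: a giant atom coupled to the waveguide at two points emits photon amplitudes from each point with a relative phase phi set by the separation of the coupling points, and the total emission rate scales with their interference as 1 + cos(phi). Tuning phi to pi makes the emission destructively interfere, protecting the qubit from decay into the waveguide.

```python
import numpy as np

# Toy model: two emission amplitudes, one per coupling point, interfere.
# The rate relative to a single coupling point is |1 + e^{i phi}|^2 / 2,
# which equals 1 + cos(phi).

def relative_decay_rate(phi):
    """Emission rate of a two-point giant atom, relative to a
    single-point coupling."""
    return abs(1 + np.exp(1j * phi)) ** 2 / 2

print(relative_decay_rate(0.0))    # constructive interference: enhanced decay
print(relative_decay_rate(np.pi))  # destructive interference: protected qubit
```

In the experiment, the analogous knob is the tunable qubit-waveguide coupling: set to the protected point for high-fidelity operations, then readjusted to release the quantum information as photons.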

“We use the quantum interference effects enabled by the giant atoms to prevent the qubits from emitting their quantum information to the waveguide until we need it,” says Oliver.

“This allows us to experimentally probe a novel regime of physics that is difficult to access with natural atoms,” says Kannan. “The effects of the giant atom are extremely clean and easy to observe and understand.”

The work appears to have much potential for further research, Kannan adds.

“I think one of the surprises is actually the relative ease by which superconducting qubits are able to enter this giant atom regime,” he says. “The tricks we employed are relatively simple and, as such, one can imagine using this for further applications without a great deal of additional overhead.”

Andreas Wallraff, professor of solid-state physics at ETH Zurich, says the research "investigates a piece of quantum physics that is hard or even impossible to fathom for microscopic objects such as electrons or atoms, but that can be studied with macroscopic engineered superconducting quantum circuits. With these circuits, using a clever trick, they are able both to protect their giant atom from decay and simultaneously to allow for coupling two of them coherently. This is very nice work exploring waveguide quantum electrodynamics."

The coherence time of the qubits incorporated into the giant atoms, meaning the time they remained in a quantum state, was approximately 30 microseconds, comparable to that of qubits not coupled to a waveguide, which typically ranges from 10 to 100 microseconds, according to the researchers.

Additionally, the research demonstrates two-qubit entangling operations with 94 percent fidelity. This represents the first time researchers have quoted a two-qubit fidelity for qubits that were strongly coupled to a waveguide, because the fidelity of such operations using conventional small atoms is often low in such an architecture. With more calibration, operation tune-up procedures and optimized hardware design, Kannan says, the fidelity can be further improved.

11:00 am
Bringing RNA into genomics

The human genome contains about 20,000 protein-coding genes, but the coding parts of our genes account for only about 2 percent of the entire genome. For the past two decades, scientists have been trying to find out what the other 98 percent is doing.

A research consortium known as ENCODE (Encyclopedia of DNA Elements) has made significant progress toward that goal, identifying many genome locations that bind to regulatory proteins, helping to control which genes get turned on or off. In a new study that is also part of ENCODE, researchers have now identified many additional sites that code for RNA molecules that are likely to influence gene expression.

These RNA sequences do not get translated into proteins, but act in a variety of ways to control how much protein is made from protein-coding genes. The research team, which includes scientists from MIT and several other institutions, made use of RNA-binding proteins to help them locate and assign possible functions to tens of thousands of sequences of the genome.

“This is the first large-scale functional genomic analysis of RNA-binding proteins with multiple different techniques,” says Christopher Burge, an MIT professor of biology. “With the technologies for studying RNA-binding proteins now approaching the level of those that have been available for studying DNA-binding proteins, we hope to bring RNA function more fully into the genomic world.”

Burge is one of the senior authors of the study, along with Xiang-Dong Fu and Gene Yeo of the University of California at San Diego, Eric Lecuyer of the University of Montreal, and Brenton Graveley of UConn Health.

The lead authors of the study, which appears today in Nature, are Peter Freese, a recent MIT PhD recipient in Computational and Systems Biology; Eric Van Nostrand, Gabriel Pratt, and Rui Xiao of UCSD; Xiaofeng Wang of the University of Montreal; and Xintao Wei of UConn Health.

RNA regulation

Much of the ENCODE project has thus far relied on detecting regulatory sequences of DNA using a technique called ChIP-seq. This technique allows researchers to identify DNA sites that are bound to DNA-binding proteins such as transcription factors, helping to determine the functions of those DNA sequences.

However, Burge points out, this technique won’t detect genomic elements that must be copied into RNA before getting involved in gene regulation. Instead, the RNA team relied on a technique known as eCLIP, which uses ultraviolet light to cross-link RNA molecules with RNA-binding proteins (RBPs) inside cells. Researchers then isolate specific RBPs using antibodies and sequence the RNAs they were bound to.

RBPs have many different functions — some are splicing factors, which help to cut out sections of protein-coding messenger RNA, while others terminate transcription, enhance protein translation, break down RNA after translation, or guide RNA to a specific location in the cell. Determining the RNA sequences that are bound to RBPs can help to reveal information about the function of those RNA molecules.

“RBP binding sites are candidate functional elements in the transcriptome,” Burge says. “However, not all sites of binding have a function, so then you need to complement that with other types of assays to assess function.”

The researchers performed eCLIP on about 150 RBPs and integrated those results with data from another set of experiments in which they knocked down the expression of about 260 RBPs, one at a time, in human cells. They then measured the effects of this knockdown on the RNA molecules that interact with the protein.

Using a technique developed by Burge’s lab, the researchers were also able to narrow down more precisely where the RBPs bind to RNA. This technique, known as RNA Bind-N-Seq, reveals very short sequences, sometimes containing structural motifs such as bulges or hairpins, that RBPs bind to.

Overall, the researchers were able to study about 350 of the 1,500 known human RBPs, using one or more of these techniques per protein. RNA splicing factors often have different activity depending on where they bind in a transcript, for example activating splicing when they bind at one end of an intron and repressing it when they bind the other end. Combining the data from these techniques allowed the researchers to produce an “atlas” of maps describing how each RBP’s activity depends on its binding location.
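
As a rough sketch of what such an activity map encodes, here is a hypothetical example (all RBP names, binding positions, and numbers are invented, not data from the study). It combines where an RBP binds with the change in exon inclusion observed when that RBP is knocked down, the two kinds of evidence the atlas integrates:

```python
# Hypothetical observations: (RBP, binding location relative to an intron,
# change in exon inclusion when the RBP is knocked down).
observations = [
    ("RBP_A", "5'_end_of_intron", -0.30),  # inclusion drops on knockdown
    ("RBP_A", "3'_end_of_intron", +0.25),  # inclusion rises on knockdown
    ("RBP_B", "5'_end_of_intron", +0.10),
]

def infer_activity(delta_on_knockdown, min_effect=0.05):
    """If knocking the protein down lowers inclusion, the protein was
    activating splicing at that site, and vice versa."""
    if delta_on_knockdown <= -min_effect:
        return "activates"
    if delta_on_knockdown >= min_effect:
        return "represses"
    return "no clear effect"

atlas = {(rbp, pos): infer_activity(d) for rbp, pos, d in observations}
for key, activity in atlas.items():
    print(key, "->", activity)
```

The same protein can land in both categories depending on binding position, which is exactly the position-dependent pattern the maps describe.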

“Why they activate in one location and repress when they bind to another location is a longstanding puzzle,” Burge says. “But having this set of maps may help researchers to figure out what protein features are associated with each pattern of activity.”

Additionally, Lecuyer’s group at the University of Montreal used green fluorescent protein to tag more than 300 RBPs and pinpoint their locations within cells, such as the nucleus, the cytoplasm, or the mitochondria. This location information can also help scientists to learn more about the functions of each RBP and the RNA it binds to.

“The strength of this manuscript is in the generation of a comprehensive and multilayered dataset that can be used by the biomedical community to develop therapies targeted to specific sites on the genome using genome-editing strategies, or on the transcriptome using antisense oligonucleotides or agents that mediate RNA interference,” says Gil Ast, a professor of human molecular genetics and biochemistry at Tel Aviv University, who was not involved in the research.

Linking RNA and disease

Many research labs around the world are now using these data in an effort to uncover links between some of the RNA sequences identified and human diseases. For many diseases, researchers have identified genetic variants called single nucleotide polymorphisms (SNPs) that are more common in people with a particular disease.

“If those occur in a protein-coding region, you can predict the effects on protein structure and function, which is done all the time. But if they occur in a noncoding region, it’s harder to figure out what they may be doing,” Burge says. “If they hit a noncoding region that we identified as binding to an RBP, and disrupt the RBP’s motif, then we could predict that the SNP may alter the splicing or stability of the gene.”
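
Burge's reasoning can be sketched as a simple check (the motif, sequence, and positions below are all hypothetical, chosen only to illustrate the idea): does substituting the SNP allele remove a motif match that overlapped the variant?

```python
def disrupts_motif(seq, pos, alt, motif):
    """True if substituting base `alt` at index `pos` removes an
    RBP motif match that overlapped the SNP position."""
    def hits(s):
        return {i for i in range(len(s) - len(motif) + 1)
                if s[i:i + len(motif)] == motif}
    mutated = seq[:pos] + alt + seq[pos + 1:]
    lost = hits(seq) - hits(mutated)
    # Only lost matches that actually covered the SNP position count.
    return any(i <= pos < i + len(motif) for i in lost)

seq = "AAGCAUGUAA"  # contains the made-up motif UGUA at index 5
print(disrupts_motif(seq, 6, "C", "UGUA"))  # G->C inside the motif: True
print(disrupts_motif(seq, 0, "U", "UGUA"))  # change outside the motif: False
```

A real analysis would use motifs measured by methods like RNA Bind-N-Seq and would score binding affinity rather than exact string matches, but the logic is the same: a variant matters here only if it breaks a site an RBP actually uses.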

Burge and his colleagues now plan to use their RNA-based techniques to generate data on additional RNA-binding proteins.

“This work provides a resource that the human genetics community can use to help identify genetic variants that function at the RNA level,” he says.

The research was funded by the National Human Genome Research Institute ENCODE Project, as well as a grant from the Fonds de Recherche de Québec-Santé.
