MIT Research News' Journal
 

Thursday, September 15th, 2016

    12:00a
    Calculating the financial risks of renewable energy

    For investors, deciding whether to put money into renewable-energy projects can be difficult. The issue is volatility: wind-powered energy production, for instance, varies from year to year, and even from week to week or day to day, which creates uncertainty and investment risk. With limited options to accurately quantify that volatility, today’s investors tend to act conservatively.

    An MIT spinout, EverVest, has built a data-analytics platform whose goal is to give investors rapid, accurate cash-flow models and financial risk analyses for renewable-energy projects. Recently acquired by asset-management firm Ultra Capital, EverVest’s platform could help boost investments in sustainable-infrastructure projects, including wind and solar power.  

    Ultra Capital acquired the EverVest platform and team earlier this year in order to leverage the software for its own underwriting and risk analytics. The acquisition by Ultra will enable the EverVest software to expand to a broader array of sustainable infrastructure sectors, including water, energy, waste, and agriculture projects.

    “If an investor has confidence in the performance and risk they are taking, they may be willing to invest more capital into the sustainable infrastructure asset class. More capital means more projects get built,” says EverVest co-founder and former CEO Mike Reynolds MBA ’14, now director of execution at Ultra Capital. “We wanted to give investors more firepower when it comes to evaluating risk.”

    The platform’s core technology was initially based on research at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), by EverVest co-founder and former Chief Technology Officer Teasha Feldman ’14, now director of engineering at Ultra Capital.

    The strength of data

    EverVest’s software platform analyzes data on a variety of factors that may impact the performance of renewable-energy projects. Layout and location of a site, contractual offtake agreements, type of equipment, grid connection, weather, and operation and maintenance costs can all help predict a possible financial rate of return.

    Today, financial analysts use Excel spreadsheets to find a flat, annual revenue average for the next 20 to 30 years. “It leaves a lot to the imagination,” Reynolds says. “Renewable energy is volatile and uncertain.”

    By the time of its acquisition, EverVest had clients in the United States and Europe, including banks, investors, and developers of wind and solar power projects. Users entered information about a prospective project into the software, which produced a detailed future cash-flow model along with a statistical analysis of the project's financial risks.

    “The analytical engine and access to data on the EverVest platform gives investors a better understanding of their assets,” Reynolds says.

    For example, consider a wind farm. Given a site's location, the software can draw on public data sets covering the last few decades of wind speeds to estimate the project’s overall performance. Location also helps determine the project’s profitability in the market: California could be a better market than, say, Texas or Maine.

    Specific types of equipment and manufacturer matter, too. If an investor considers a certain type of wind turbine, “we can pull data to determine that a turbine in that location is going to need $2 million of replacement parts in year five,” Reynolds says. “In year seven, you might have a 50 percent probability that something is going to fail, potentially resulting in a shut-down of the site.”

    The end result is a more detailed projection of the rate of return, Reynolds says. While a standard Excel spreadsheet might give an average rate of return of, say, 12 percent, the EverVest software platform would show a full analysis of the quarterly performance, including the statistical uncertainty of the rate of return. While 12 percent may be the average, the returns may vary between 4 and 18 percent. “By understanding that range of risk, you can understand the true value of the project,” Reynolds says.
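The value of reporting a range rather than a single number can be illustrated with a toy Monte Carlo simulation. Everything below is an assumption for illustration only (the 12 percent mean, the volatility figures, the normal distributions, the function names); it is a minimal sketch, not EverVest's model, which the article does not describe at this level of detail.

```python
import random
import statistics

def simulate_returns(n_trials=10_000, seed=42):
    """Toy Monte Carlo: sample annual energy output and power price
    to build a distribution of rates of return, rather than the
    single flat average a spreadsheet would report."""
    random.seed(seed)
    returns = []
    for _ in range(n_trials):
        output = random.gauss(1.0, 0.15)  # normalized annual output
        price = random.gauss(1.0, 0.10)   # normalized power price
        returns.append(0.12 * output * price)  # 12% in the average year
    return returns

rets = sorted(simulate_returns())
mean = statistics.mean(rets)
p05, p95 = rets[len(rets) // 20], rets[-len(rets) // 20]
print(f"mean {mean:.1%}, 5th-95th percentile {p05:.1%} to {p95:.1%}")
```

Even with identical average returns, two projects can have very different percentile spreads, which is the "range of risk" Reynolds describes.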

    Now at Ultra Capital, Feldman is further developing the platform. Reynolds is using it to invest in a wide array of sustainable-infrastructure projects, including solar energy projects, waste-to-energy assets, water treatment facilities, and recycling plants. “We’ve brought our technology in-house and have expanded it a great deal,” Reynolds says. “Now I get to use the software we built to make better investments.”

    EverVest: The happy accident

    EverVest (formerly Cardinal Wind) began as a CSAIL research project that was refined and developed through MIT’s entrepreneurial ecosystem before going to market.

    As an MIT junior in 2012, Feldman wanted to branch out from her theoretical physics coursework to focus on renewable energy. She discovered a CSAIL project, led by research scientist Una-May O’Reilly, that involved collecting and analyzing data on wind farm energy. “I showed up in [O’Reilly’s] office and begged her to let me work on the project,” Feldman says.

    Within a year, Feldman had designed a machine-learning algorithm that used 30 years of wind data collected from airports and other sites to predict wind power at those locations for the next 30 years. During that time, she sought enrollment in Course 15.366 (Energy Ventures), where students from across departments plan businesses around clean technologies. Undergraduates are seldom accepted. But as luck would have it, the class wanted O’Reilly to speak about her research, and O’Reilly told them to ask Feldman instead.

    “I said, ‘Yes, I’m working on that research. You should just let me into the class,’” Feldman says, laughing.

    Enrolling in fall 2013, Feldman pitched her algorithm to the class, and it caught the eye of one student. Reynolds had come to MIT Sloan School of Management, he says, “with scars from working on Wall Street in investment banking … and I wanted to open my horizons and work with engineers who were building amazing things at MIT.”

    During his time as an investment banker at Goldman Sachs, Reynolds dealt with funding large projects in infrastructure, energy, and transportation. So Feldman’s prediction algorithm research resonated immediately. “I saw her algorithm and thought of how great it would be for investors to have a more accurate way to measure the rate of return for a potential wind project investment,” Reynolds says.

    Joining forces, Feldman and Reynolds launched Cardinal Wind in 2013. The startup was somewhat of a “happy accident,” Feldman says. “The company took an insane amount of hard work to start and build. But by showing up in a lab and convincing them to give me a job, and then bringing the research to class, we were able to determine that there was a great opportunity and need for better financial risk analysis tools in the marketplace.”

    The following summer, Cardinal Wind entered the Global Founders’ Skills Accelerator (GFSA), run by the Martin Trust Center for MIT Entrepreneurship, “which was a huge boost,” Reynolds says. Mentors and entrepreneurs-in-residence offered guidance and feedback on pitches, and generous GFSA funding paid the startup’s bills. “And we worked alongside other startups going through the same challenges,” Reynolds says. “All those resources were incredibly helpful.”

    By October 2015, Cardinal Wind had expanded Feldman’s algorithm into a full cash-flow modeling platform that also included analyses for solar power projects. That month, Cardinal Wind rebranded as EverVest; this past July, the company was acquired by Ultra Capital.

    A key to EverVest’s success, Feldman says, was constantly developing the technology to fit customer needs — such as including solar power. “When we found the actual need was more than just predicting wind patterns, we departed from using that particular algorithm, and we’ve built a lot of our core platform since then,” she says.

    2:00p
    For first time, researchers see individual atoms keep away from each other or bunch up as pairs

    If you bottle up a gas and try to image its atoms using today’s most powerful microscopes, you will see little more than a shadowy blur. Atoms zip around at lightning speeds and are difficult to pin down at ambient temperatures.

    If, however, these atoms are plunged to ultracold temperatures, they slow to a crawl, and scientists can start to study how they can form exotic states of matter, such as superfluids, superconductors, and quantum magnets.

    Physicists at MIT have now cooled a gas of potassium atoms to several nanokelvins — just a hair above absolute zero — and trapped the atoms within a two-dimensional sheet of an optical lattice created by crisscrossing lasers. Using a high-resolution microscope, the researchers took images of the cooled atoms residing in the lattice.

    By looking at correlations between the atoms’ positions in hundreds of such images, the team observed individual atoms interacting in some rather peculiar ways, based on their position in the lattice. Some atoms exhibited “antisocial” behavior and kept away from each other, while some bunched together with alternating magnetic orientations. Others appeared to piggyback on each other, creating pairs of atoms next to empty spaces, or holes.
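The correlation analysis described above can be sketched in miniature. The following Python snippet is my own toy construction, not the team's analysis code: it computes a connected density-density correlation over simulated one-dimensional occupancy snapshots, where a negative value at distance 1 means atoms avoid sitting on adjacent sites.

```python
def pair_correlation(snapshots, d):
    """Connected density correlation C(d) = <n_i n_{i+d}> - <n_i><n_{i+d}>
    along one lattice direction, averaged over sites and snapshots.
    `snapshots` is a list of equal-length 0/1 occupancy lists, a toy
    stand-in for the hundreds of lattice images in the experiment."""
    num = xy = x = y = 0
    for shot in snapshots:
        for i in range(len(shot) - d):
            xy += shot[i] * shot[i + d]
            x += shot[i]
            y += shot[i + d]
            num += 1
    return xy / num - (x / num) * (y / num)

# Toy data: "antisocial" atoms that never occupy adjacent sites
# produce a negative correlation at distance 1.
shots = [[1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1]] * 50
print(pair_correlation(shots, 1))  # negative: neighbors avoided
```

In the experiment, the analogous statistics are built from many images of the same lattice region, so a reliably negative nearest-neighbor correlation is direct evidence of antibunching.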

    The team believes that these spatial correlations may shed light on the origins of superconducting behavior. Superconductors are remarkable materials in which electrons pair up and travel without friction, meaning that no energy is lost in the journey. If superconductors can be designed to exist at room temperature, they could initiate an entirely new, incredibly efficient era for anything that relies on electrical power.

    Martin Zwierlein, professor of physics and principal investigator at MIT’s NSF Center for Ultracold Atoms and at its Research Laboratory of Electronics, says his team’s results and experimental setup can help scientists identify ideal conditions for inducing superconductivity.

    “Learning from this atomic model, we can understand what’s really going on in these superconductors, and what one should do to make higher-temperature superconductors, approaching hopefully room temperature,” Zwierlein says.

    Zwierlein and his colleagues’ results appear in the Sept. 16 issue of the journal Science. Co-authors include experimentalists from the MIT-Harvard Center for Ultracold Atoms and MIT’s Research Laboratory of Electronics, as well as theory groups from San Jose State University, Ohio State University, the University of Rio de Janeiro, and Penn State University.

    Atoms as stand-ins for electrons

    Today, it is impossible to model the behavior of high-temperature superconductors, even using the most powerful computers in the world, because the interactions between electrons are very strong. Zwierlein and his team sought instead to design a “quantum simulator,” using atoms in a gas as stand-ins for electrons in a superconducting solid.

    The group based its rationale on several historical lines of reasoning: First, in 1925 Austrian physicist Wolfgang Pauli formulated what is now called the Pauli exclusion principle, which states that no two electrons may occupy the same quantum state — such as spin, or position — at the same time. Pauli also postulated that electrons maintain a certain sphere of personal space, known as the “Pauli hole.”

    His theory turned out to explain the periodic table of elements: Different configurations of electrons give rise to specific elements, making carbon atoms, for instance, distinct from hydrogen atoms. 

    The Italian physicist Enrico Fermi soon realized that this same principle could be applied not just to electrons, but also to atoms in a gas: The extent to which atoms like to keep to themselves can define the properties, such as compressibility, of a gas.

    “He also realized these gases at low temperatures would behave in peculiar ways,” Zwierlein says.

    British physicist John Hubbard then incorporated Pauli’s principle into a theory now known as the Fermi-Hubbard model, the simplest model of interacting atoms hopping across a lattice. Today, the model is thought to explain the basis of superconductivity. And while theorists have been able to use the model to calculate the behavior of superconducting electrons, they have only been able to do so in situations where the electrons interact weakly with each other.
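For reference, the Fermi-Hubbard model mentioned above is conventionally written in its standard textbook form (this notation is not taken from the article):

```latex
H = -t \sum_{\langle i,j \rangle, \sigma}
      \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

Here $t$ is the amplitude for a particle to hop between neighboring lattice sites $\langle i,j \rangle$, and $U$ is the energy cost of two particles with opposite spins occupying the same site. The strongly interacting regime, where $U$ is large compared with $t$, is precisely the one that classical computers cannot handle.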

    “That’s a big reason why we don’t understand high-temperature superconductors, where the electrons are very strongly interacting,” Zwierlein says. “There’s no classical computer in the world that can calculate what will happen at very low temperatures to interacting [electrons]. Their spatial correlations have also never been observed in situ, because no one has a microscope to look at every single electron.”

    Carving out personal space

    Zwierlein’s team sought to design an experiment to realize the Fermi-Hubbard model with atoms, in hopes of seeing behavior of ultracold atoms analogous to that of electrons in high-temperature superconductors.

    The group had previously designed an experimental protocol to first cool a gas of atoms to near absolute zero, then trap them in a two-dimensional plane of a laser-generated lattice. At such ultracold temperatures, the atoms slowed down enough for researchers to capture them in images for the first time, as they interacted across the lattice.

    At the edges of the lattice, where the gas was more dilute, the researchers observed atoms forming Pauli holes, maintaining a certain amount of personal space within the lattice.

    “They carve out a little space for themselves where it’s very unlikely to find a second guy inside that space,” Zwierlein says.

    Where the gas was more compressed, the team observed something unexpected: Atoms were more amenable to having close neighbors, and were in fact very tightly bunched. These atoms exhibited alternating magnetic orientations.

    “These are beautiful, antiferromagnetic correlations, with a checkerboard pattern — up, down, up, down,” Zwierlein describes.

    At the same time, these atoms were found to often hop on top of one another, creating a pair of atoms next to an empty lattice square. This, Zwierlein says, is reminiscent of a mechanism proposed for high-temperature superconductivity, in which electron pairs resonating between adjacent lattice sites can zip through the material without friction if there is just the right amount of empty space to let them through.

    Ultimately, he says the team’s experiments in gases can help scientists identify ideal conditions for superconductivity to arise in solids.

    Zwierlein explains: “For us, these effects occur at nanokelvin because we are working with dilute atomic gases. If you have a dense piece of matter, these same effects may well happen at room temperature.”

    Currently, the team has been able to achieve ultracold temperatures in gases that are equivalent to hundreds of kelvins in solids. To induce superconductivity, Zwierlein says the group will have to cool their gases by another factor of five or so.

    “We haven’t played all of our tricks yet, so we think we can get colder,” he says.   

    This research was supported in part by the National Science Foundation, the Air Force Office of Scientific Research, the Army Research Office, and the David and Lucile Packard Foundation.

    2:00p
    New theory overcomes a longstanding polymer problem

    All polymers have a distinctive degree of elasticity — how much they will stretch when a force is applied. However, for the past 100 years, polymer scientists have been stymied in their efforts to predict polymers’ elasticity, because the materials usually have structural flaws at the molecular level that impact elasticity in unknown ways.

    By coming up with a way to measure these structural defects, MIT researchers have now shown that they can accurately calculate the elasticity of polymer networks such as hydrogels.

    “This is the first time anyone has developed a predictive theory of elasticity in a polymer network, which is something that many have said over the years was impossible to do,” says Jeremiah Johnson, the Firmenich Career Development Associate Professor of Chemistry at MIT.

    This theory could make it much easier for scientists to design materials with a specific elasticity, which is now more of a trial-and-error process.

    Bradley Olsen, an associate professor of chemical engineering, and Johnson are the senior authors of the new study, which appears in the Sept. 15 online edition of Science. The paper’s lead authors are former MIT postdoc Mingjiang Zhong, postdoc Rui Wang, and graduate student Ken Kawamoto.

    Counting loops

    Polymers, or long chains of repeating molecules, are found in many objects that we encounter every day, including anything made of plastic or rubber. These chains form networks in which each chain would ideally bind to only one other chain. However, in real-life materials, a significant fraction of these chains bind to themselves, forming defects — floppy loops that weaken the network.

    These loops also make it impossible to accurately calculate the material’s elasticity, because existing formulas for this calculation assume that the material has no defects.

    In 2012, Johnson and Olsen published a paper in which they demonstrated a technique for counting these defects — the first time that had ever been achieved. The researchers designed polymer chains that incorporate at a specific location a chemical bond that can be broken using hydrolysis. Once the polymers link to form a gel, the researchers cleave the bonds and measure the quantity of different types of degradation products. By comparing that measurement with what would be seen in a defect-free material, they can figure out how much of the polymer has formed loops.

    In the new study, the researchers built on that work by developing a way to determine how these defects influence the material’s elasticity. First, they calculated how a single defect of a given type would alter the elasticity. That number can then be multiplied by the total number of such defects measured, which yields the overall impact on elasticity.

    “We do one complicated calculation for each type of defect to calculate how it perturbs the structure of the network under deformation, and then we add up all of those to get an adjusted elasticity,” Olsen says.
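That bookkeeping can be sketched in a few lines of Python. The defect type names, counts, and per-defect shifts below are entirely hypothetical placeholders, chosen only to show the structure of the correction, not values from the paper:

```python
def adjusted_modulus(ideal_modulus, defect_counts, perturbations):
    """Sum first-order corrections: the elasticity shift caused by one
    defect of each type, scaled by how many such defects were counted
    in the bond-cleavage experiment, added to the defect-free value."""
    correction = sum(defect_counts[k] * perturbations[k]
                     for k in defect_counts)
    return ideal_modulus + correction

# Hypothetical inputs: loops soften the network, so shifts are negative.
G_ideal = 10.0                                        # kPa, defect-free theory
counts = {"primary_loop": 120, "secondary_loop": 45}  # from bond cleavage
shifts = {"primary_loop": -0.02, "secondary_loop": -0.005}  # kPa per defect

G_adj = adjusted_modulus(G_ideal, counts, shifts)
print(G_adj)  # lower than the ideal 10.0 kPa
```

The expensive step in the real work is computing each per-defect shift; once those are known, adjusting any network's predicted elasticity reduces to this weighted sum.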

    “Quantum leap”

    After testing this approach on several materials, the researchers devised a theory that allows them to predict the elasticity of any polymer material. This theory proved to be much more accurate than the two existing approaches to calculating polymer elasticity (known as the affine network theory and phantom network model), both of which assume an ideal, defect-free network.
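For context, the two classical theories predict shear moduli of the following standard rubber-elasticity forms (textbook notation, not the paper's):

```latex
G_{\text{affine}} = \nu k_B T, \qquad
G_{\text{phantom}} = \left(1 - \frac{2}{f}\right) \nu k_B T
```

where $\nu$ is the number density of elastically effective network strands, $f$ is the junction functionality, $k_B$ is Boltzmann's constant, and $T$ is temperature. Both expressions count every strand as fully effective, which is why defects such as loops push real materials away from these predictions.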

    This model should be applicable to any type of polymer, says Sanat Kumar, a professor of chemical engineering at Columbia University, who was not involved in the research.

    “They have taken an age-old problem and done very clear experiments and developed a very nice theory that moves the field up a whole quantum leap,” Kumar says.

    The MIT team is now working on expanding this approach to other polymers. “I think within a few years you’ll see it broaden rapidly to cover more and more types of networks,” Olsen says.

    The researchers are also interested in exploring other features of polymers that affect their elasticity and strength, including a property known as entanglement, which occurs when polymer chains are wound around each other like Christmas tree lights without chemically binding to each other.

    The research was funded by the Designing Materials to Revolutionize and Engineer our Future (DMREF) program of the National Science Foundation (NSF), as well as the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies and the Materials Research Science and Engineering Centers Program of the NSF.

