MIT Research News' Journal
 

Wednesday, February 5th, 2020

    1:00p
    Engineers mix and match materials to make new stretchy electronics

    At the heart of any electronic device is a cold, hard computer chip, covered in a miniature city of transistors and other semiconducting elements. Because computer chips are rigid, the electronic devices that they power, such as our smartphones, laptops, watches, and televisions, are similarly inflexible.

    Now a process developed by MIT engineers may be the key to manufacturing flexible electronics with multiple functionalities in a cost-effective way.

    The process is called “remote epitaxy” and involves growing thin films of semiconducting material on a large, thick wafer of the same material, which is covered in an intermediate layer of graphene. Once the researchers grow a semiconducting film, they can peel it away from the graphene-covered wafer and then reuse the wafer, which itself can be expensive depending on the type of material it’s made from. In this way, the team can copy and peel away any number of thin, flexible semiconducting films, using the same underlying wafer.

    In a paper published today in the journal Nature, the researchers demonstrate that they can use remote epitaxy to produce freestanding films of any functional material. More importantly, they can stack films made from these different materials, to produce flexible, multifunctional electronic devices.

    The researchers expect that the process could be used to produce stretchy electronic films for a wide variety of uses, including virtual reality-enabled contact lenses, solar-powered skins that mold to the contours of your car, electronic fabrics that respond to the weather, and other flexible electronics that seemed until now to be the stuff of Marvel movies.

    “You can use this technique to mix and match any semiconducting material to have new device functionality, in one flexible chip,” says Jeehwan Kim, an associate professor of mechanical engineering at MIT. “You can make electronics in any shape.”

    Kim’s co-authors include Hyun S. Kum, Sungkyu Kim, Wei Kong, Kuan Qiao, Peng Chen, Jaewoo Shim, Sang-Hoon Bae, Chanyeol Choi, Luigi Ranno, Seungju Seo, Sangho Lee, Jackson Bauer, and Caroline Ross from MIT, along with collaborators from the University of Wisconsin at Madison, Cornell University, the University of Virginia, Penn State University, Sun Yat-Sen University, and the Korea Atomic Energy Research Institute.

    Buying time

    Kim and his colleagues reported their first results using remote epitaxy in 2017. Then, they were able to produce thin, flexible films of semiconducting material by first placing a layer of graphene on a thick, expensive wafer made from a combination of exotic metals. They flowed atoms of each metal over the graphene-covered wafer and found the atoms formed a film on top of the graphene, in the same crystal pattern as the underlying wafer. The graphene provided a nonstick surface from which the researchers could peel away the new film, leaving the graphene-covered wafer, which they could reuse. 

    In 2018, the team showed that they could use remote epitaxy to make semiconducting materials from elements in groups 3 and 5 of the periodic table, but not from group 4. The reason, they found, boiled down to polarity, or the respective charges between the atoms flowing over graphene and the atoms in the underlying wafer.

    Since this realization, Kim and his colleagues have tried a number of increasingly exotic semiconducting combinations. As reported in this new paper, the team used remote epitaxy to make flexible semiconducting films from complex oxides — chemical compounds made from oxygen and at least two other elements. Complex oxides are known to have a wide range of electrical and magnetic properties, and some combinations can generate a current when physically stretched or exposed to a magnetic field.

    Kim says the ability to manufacture flexible films of complex oxides could open the door to new energy-harvesting devices, such as sheets or coverings that stretch in response to vibrations and produce electricity as a result. Until now, complex oxide materials have only been manufactured on rigid, millimeter-thick wafers, with limited flexibility and therefore limited energy-generating potential.

    The researchers did have to tweak their process to make complex oxide films. They initially found that when they tried to make a complex oxide such as strontium titanate (SrTiO3, a compound of strontium, titanium, and oxygen), the oxygen atoms that they flowed over the graphene tended to bind with the graphene’s carbon atoms, etching away bits of graphene instead of following the underlying wafer’s pattern and binding with strontium and titanium. As a surprisingly simple fix, the researchers added a second layer of graphene.

    “We saw that by the time the first layer of graphene is etched off, oxide compounds have already formed, so elemental oxygen, once it forms these desired compounds, does not interact as heavily with graphene,” Kim explains. “So two layers of graphene buys some time for this compound to form.”

    Peel and stack

    The team used their newly tweaked process to make films from multiple complex oxide materials, peeling off each 100-nanometer-thin layer as it was made. They were also able to stack together layers of different complex oxide materials and effectively glue them together by heating them slightly, producing a flexible, multifunctional device.

    “This is the first demonstration of stacking multiple nanometers-thin membranes like LEGO blocks, which has been impossible because all functional electronic materials exist in a thick wafer form,” Kim says.

    In one experiment, the team stacked together films of two different complex oxides: cobalt ferrite, known to expand in the presence of a magnetic field, and PMN-PT, a material that generates voltage when stretched. When the researchers exposed the multilayer film to a magnetic field, the two layers worked together to both expand and produce a small electric current. 

    The results demonstrate that remote epitaxy can be used to make flexible electronics from a combination of materials with different functionalities, which previously were difficult to combine into one device. In the case of cobalt ferrite and PMN-PT, each material has a different crystalline pattern. Kim says that traditional epitaxy techniques, which grow materials at high temperatures on one wafer, can only combine materials if their crystalline patterns match. He says that with remote epitaxy, researchers can make any number of different films, using different, reusable wafers, and then stack them together, regardless of their crystalline pattern.

    “The big picture of this work is, you can combine totally different materials in one place together,” Kim says. “Now you can imagine a thin, flexible device made from layers that include a sensor, computing system, a battery, a solar cell, so you could have a flexible, self-powering, internet-of-things stacked chip.”

    The team is exploring various combinations of semiconducting films and is working on developing prototype devices, such as something Kim is calling an “electronic tattoo” — a flexible, transparent chip that can attach and conform to a person’s body to sense and wirelessly relay vital signs such as temperature and pulse.

    “We can now make thin, flexible, wearable electronics with the highest functionality,” Kim says. “Just peel off and stack up.”

    This research was supported, in part, by the U.S. Defense Advanced Research Projects Agency.

    1:10p
    Improving pavement networks by predicting the future

    With around 4.18 million miles of roads in the United States, planning pavement maintenance can seem like a daunting process.

    Currently, departments of transportation (DOTs) tend to rely on past practices or expert opinion to make maintenance decisions. But with a $420 billion backlog of repairs for U.S. highways, these conventional methods are becoming less effective. Instead, DOTs require more quantitative approaches to manage their tight budgets and fix their aging roadways.

    In a recent paper in Transportation Research Part C: Emerging Technologies, MIT Concrete Sustainability Hub (CSHub) researchers Fengdi Guo, Jeremy Gregory, and Randolph Kirchain propose one such approach, known as Probabilistic Treatment Path Dependence (PTPD). In the paper’s case study, PTPD outperforms conventional models, which would need a 10 percent larger annual budget to reach the same level of network performance.

    CSHub researchers achieved this by confronting a fundamental concern that many conventional models shy away from: uncertainty.

    Comfortable with uncertainty 

    Paving is fraught with uncertainty. From the deterioration of pavements to the price of materials, DOTs cannot be sure of what things will look like in five, 10, or 20 years. What’s more, predicting and incorporating these kinds of uncertainties can prove challenging — enough so that many models discount them altogether.

    Traditionally, most models weigh the costs and benefits of maintenance decisions for each segment of a network to choose the best one. Their analyses tend to calculate the cost and benefit based on the current year or for a fixed set of future maintenance treatments, without considering uncertainties during the analysis period.

    “This may mean that they plan to maintain a new segment of pavement the same way each time over the course of its life,” says Guo. “The problem is that this is often not possible. Over time, changes in the price of materials, the deterioration rates of pavements, and even the changes in treatment paths — which are the sequence of maintenance actions taken — will demand treatments not specified in the original model.”

    For DOTs to manage their networks efficiently, then, they must account for treatment path dependence and uncertainty.

    The CSHub researchers sought to create a new model that offers DOTs this adaptability. To do this, they considered thousands of treatment schedules under a range of future scenarios.

    Their model takes a bottom-up approach, looking at each segment in a pavement network. For each segment, it evaluates every possible initial treatment and future scenario of material price and deterioration. From there, an optimal treatment path and its total cost are identified for each combination of scenario and initial treatment.

    With all of these possibilities laid out in front of them, CSHub researchers then calculated the likelihood of certain outcomes in pavement performance — the pavement’s surface quality — for each combination of initial treatment option and future scenario. This allows them to capture which treatments will likely have the best outcomes given all the possible changes that might occur. For each segment, the model then identifies the two treatment options with the best likely outcomes.

    “To select between these final two options,” says Guo, “our model considers the risks associated with each and the available budget, as well.”

    In this case, risk refers to how the actual performance of a treatment might deviate from its average expected performance. The greater the variance and the more extreme the outlier scenarios, the greater the risk. However, it’s a tradeoff — a riskier treatment may also yield better performance.

    So, it’s up to the DOT to determine how much risk they are willing to take. And it’s that level of risk that determines which of the final two options they will select for each segment in the pavement network.
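The segment-level selection described above can be sketched in a few lines. This is a hypothetical illustration, not the CSHub model: treatment names, performance scores (surface quality across future scenarios), and the simple mean-minus-variance scoring rule are all assumptions made for the example.

```python
import statistics

# Hypothetical performance outcomes (surface-quality scores) for one
# pavement segment under five made-up future price/deterioration scenarios.
outcomes = {
    "thin_asphalt_overlay": [95, 60, 92, 55, 98],   # cheap, volatile
    "concrete_overlay":     [78, 76, 77, 79, 75],   # pricier, predictable
    "full_reconstruction":  [70, 71, 69, 72, 70],   # stable but lower here
}

def summarize(treatment):
    """Expected performance and its spread across scenarios."""
    perf = outcomes[treatment]
    return statistics.mean(perf), statistics.pstdev(perf)

# Step 1: keep the two treatments with the best expected performance.
ranked = sorted(outcomes, key=lambda t: summarize(t)[0], reverse=True)
finalists = ranked[:2]

# Step 2: pick between the finalists with a risk-aversion weight.
# score = mean - risk_aversion * stdev, so a larger weight penalizes
# treatments whose outcomes vary widely across scenarios.
def choose(risk_aversion):
    return max(finalists,
               key=lambda t: summarize(t)[0] - risk_aversion * summarize(t)[1])

print(choose(0.0))   # a risk-neutral DOT picks on expected performance alone
print(choose(0.5))   # a risk-averse DOT trades some mean for consistency
```

With these invented numbers, the risk-neutral choice is the volatile asphalt overlay, while the risk-averse choice flips to the steadier concrete overlay — mirroring the pattern the researchers report.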

    Paving in practice

    In several case studies discussed in their paper, CSHub researchers analyzed how levels of risk affected the selection of treatments within their models, as well as how their model compared to conventional models. They found that when DOTs were less averse to risking unexpected outcomes in a segment’s performance, their model favored thin asphalt overlays for that segment, which is a cheaper treatment option. As risk aversion increased, however, the opposite occurred. The model instead favored more expensive concrete overlays and complete reconstructions of the segment.

    How come?

    It boils down to the price of materials.

    “Unlike asphalt, concrete tends to have lower price volatility,” explains Guo. “That means DOTs can reliably predict how much concrete treatments will cost. This prevents the kind of cost overruns that might occur due to an unexpected increase in asphalt prices.”

    The same tradeoff occurs with pavement performance.

    “While riskier treatments might offer better performance outcomes, it’s more likely those outcomes will vary,” explains Guo. “On the other hand, less-risky treatments will offer more consistent performance — though that performance could be slightly lower.”

    Ultimately, the researchers found that models with moderate risk aversion and a mix of asphalt and concrete had the best outcomes, since they could optimize average performance and performance variability.

    The researchers then compared their PTPD model with moderate risk to conventional cost-benefit approaches currently used by DOTs.

    Over a 20-year analysis period, they found that their PTPD model performed better than the conventional model.

    While the conventional model could optimize cost and performance in the short term, it didn’t anticipate future uncertainties. This led to more frequent, less-expensive treatments that initially improved outcomes but resulted in worse performance and higher costs over time.

    The PTPD model instead took a long-term perspective. It accounted for uncertainties and, as a consequence, better anticipated and adapted to future changes.

    This meant it invested more heavily up front in a few key, heavily used segments of a network. As a result, the performance and cost benefits throughout the network didn’t manifest until later in the analysis period. By that time, the network required simpler, cheaper treatments less frequently.

    In fact, for the cost-benefit model to perform as well as the PTPD model, DOTs would have to spend 10 percent more over 20 years in the given case study.

    In the future, Guo and his colleagues hope to extend their analysis to the entire U.S. roadway system. In addition to cost and performance, they intend to measure the environmental footprint of paving decisions, as well.

    Facing uncertainty is difficult. But with their latest model, CSHub researchers do just that. Instead of discounting uncertainty, they confront it head-on. And consequently, DOTs may soon expect reduced backlogs and better roads.

    1:40p
    Decarbonizing the making of consumer products

    Most efforts to reduce energy consumption and carbon emissions have focused on the transportation and residential sectors. Little attention has been paid to industrial manufacturing, even though it consumes more energy than either of those sectors and emits high levels of CO2 in the process.

    To help address that situation, Assistant Professor Karthish Manthiram, postdoc Kyoungsuk Jin, graduate students Joseph H. Maalouf and Minju Chung, and their colleagues, all of the MIT Department of Chemical Engineering, have been devising new methods of synthesizing epoxides, a group of chemicals used in the manufacture of consumer goods ranging from polyester clothing, detergents, and antifreeze to pharmaceuticals and plastics.

    “We don’t think about the embedded energy and carbon dioxide footprint of a plastic bottle we’re using or the clothing we’re putting on,” says Manthiram. “But epoxides are everywhere!”

    As solar, wind, and storage technologies mature, it’s time to address what Manthiram calls the “hidden energy and carbon footprints of materials made from epoxides.” And the key, he argues, may be to perform epoxide synthesis using electricity from renewable sources along with specially designed catalysts and an unlikely starting material: water.

    The challenge

    Epoxides can be made from a variety of carbon-containing compounds known generically as olefins. But regardless of the olefin used, the conversion process generally produces high levels of CO2 or has other serious drawbacks.

    To illustrate the problem, Manthiram describes processes now used to manufacture ethylene oxide, an epoxide used in making detergents, thickeners, solvents, plastics, and other consumer goods. Demand for ethylene oxide is so high that it has the fifth-largest CO2 footprint of any chemical made today.

    The top panel of Figure 1 in the slideshow above illustrates one common synthesis process. The recipe is simple: Combine ethylene molecules and oxygen molecules, subject the mixture to high temperatures and pressures, and separate out the ethylene oxide that forms.

    However, those ethylene oxide molecules are accompanied by molecules of CO2 — a problem, given the volume of ethylene oxide produced nationwide. In addition, the high temperatures and pressures required are generally produced by burning fossil fuels. And the conditions are so extreme that the reaction must take place in a massive pressure vessel. The capital investment required is high, so epoxides are generally produced in a central location and then transported long distances to the point of consumption.

    Another widely synthesized epoxide is propylene oxide, which is used in making a variety of products, including perfumes, plasticizers, detergents, and polyurethanes. In this case, the olefin — propylene — is combined with tert-butyl hydroperoxide, as illustrated in the bottom panel of Figure 1. An oxygen atom moves from the tert-butyl hydroperoxide molecule to the propylene to form the desired propylene oxide. The reaction conditions are somewhat less harsh than in ethylene oxide synthesis, but a side product must be dealt with. And while no CO2 is created, the tert-butyl hydroperoxide is highly reactive, flammable, and toxic, so it must be handled with extreme care.

    In short, current methods of epoxide synthesis produce CO2, involve dangerous chemicals, require huge pressure vessels, or call for fossil fuel combustion. Manthiram and his team believed there must be a better way.

    A new approach

    The goal in epoxide synthesis is straightforward: Simply transfer an oxygen atom from a source molecule onto an olefin molecule. Manthiram and his lab came up with an idea: Could water be used as a sustainable and benign source of the needed oxygen atoms? The concept was counterintuitive. “Organic chemists would say that it shouldn’t be possible because water and olefins don’t react with one another,” he says. “But what if we use electricity to liberate the oxygen atoms in water? Electrochemistry causes interesting things to happen — and it’s at the heart of what our group does.”

    Using electricity to split water into oxygen and hydrogen is a standard practice called electrolysis. Usually, the goal of water electrolysis is to produce hydrogen gas for certain industrial applications or for use as a fuel. The oxygen is simply vented to the atmosphere.

    To Manthiram, that practice seemed wasteful. Why not do something useful with the oxygen? Making an epoxide seemed the perfect opportunity — and the benefits could be significant. Generating two valuable products instead of one would bring down the high cost of water electrolysis. Indeed, it might become a cheaper, carbon-free alternative to today’s usual practice of producing hydrogen from natural gas. The electricity needed for the process could be generated from renewable sources such as solar and wind. There wouldn’t be any hazardous reactants or undesirable byproducts involved. And there would be no need for massive, costly, and accident-prone pressure vessels. As a result, epoxides could be made at small-scale, modular facilities close to the place they’re going to be used — no need to transport, distribute, or store the chemicals produced.

    Will the reaction work?

    However, there was a chance that the proposed process might not work. During electrolysis, the oxygen atoms quickly pair up to form oxygen gas. The proposed process — illustrated in Figure 2 in the slideshow above — would require that some of the oxygen atoms move onto the olefin before they combine with one another.

    To investigate the feasibility of the process, Manthiram’s group performed a fundamental analysis to find out whether the reaction is thermodynamically favorable. Does the energy of the overall system shift to a lower state by making the move? In other words, is the product more stable than the reactants were?

    They started with a thermodynamic analysis of the proposed reaction at various combinations of temperature and pressure — the standard variables used in hydrocarbon processing. As an example, they again used ethylene oxide. The results, shown in Figure 3 in the slideshow above, were not encouraging. As the uniform blue in the left-hand figure shows, even at elevated temperatures and pressures, the conversion of ethylene and water to ethylene oxide plus hydrogen doesn’t happen — just as a chemist’s intuition would predict.

    But their proposal was to use voltage rather than pressure to drive the chemical reaction. As the right-hand figure in Figure 3 shows, with that change, the outcome of the analysis looked more promising. Conversion of ethylene to ethylene oxide occurs at around 0.8 volts. So the process is viable at voltages below that of an everyday AA battery and at essentially room temperature.
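The ~0.8-volt figure can be roughly checked from the textbook relation between reaction free energy and minimum cell voltage, E = ΔG/(nF). The sketch below uses approximate literature values for standard Gibbs free energies of formation at room temperature; these numbers are generic reference values assumed for illustration, not data from the MIT paper.

```python
# Reaction: C2H4 + H2O -> C2H4O (ethylene oxide) + H2, transferring
# n = 2 electrons per epoxide formed.
F = 96485.0  # Faraday constant, coulombs per mole of electrons

# Approximate standard Gibbs free energies of formation, kJ/mol, 298 K.
dGf = {
    "ethylene(g)":        68.4,
    "H2O(l)":           -237.1,
    "ethylene_oxide(g)":  -13.1,
    "H2(g)":               0.0,
}

# Free energy change of the overall reaction (products minus reactants).
dG_rxn = (dGf["ethylene_oxide(g)"] + dGf["H2(g)"]
          - dGf["ethylene(g)"] - dGf["H2O(l)"])  # kJ/mol

n = 2  # electrons transferred per epoxide
E_min = dG_rxn * 1000 / (n * F)  # minimum driving voltage, volts

print(f"dG = {dG_rxn:.1f} kJ/mol, minimum voltage = {E_min:.2f} V")
```

The reaction is uphill by roughly 156 kJ/mol, which works out to a threshold of about 0.8 V — consistent with the analysis described above, and well below the 1.5 V of an everyday AA battery.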

    While a thermodynamic analysis can show that a reaction is possible, it doesn’t reveal how quickly it will occur, and reactions must be fast to be cost-effective. So the researchers needed to design a catalyst — a material that would speed up the reaction without getting consumed.

    Designing catalysts for specific electrochemical reactions is a focus of Manthiram’s group. For this reaction, they decided to start with manganese oxide, a material known to catalyze the water-splitting reaction. And to increase the catalyst’s effectiveness, they fabricated it into nanoparticles — a particle size that would maximize the surface area on which reactions can take place.

    Figure 4 in the slideshow above shows the special electrochemical cell they designed. Like all such cells, it has two electrodes — in this case, an anode where oxygen is transferred to make an olefin into an epoxide, and a cathode where hydrogen gas forms. The anode is made of carbon paper decorated with the nanoparticles of manganese oxide (shown in yellow). The cathode is made of platinum. Between the anode and the cathode is an electrolyte that ferries electrically charged ions between them. In this case, the electrolyte is a mixture of a solvent, water (the oxygen source), and the olefin.

    The magnified views in Figure 4 show what happens at the two electrodes. The right-hand view shows the olefin and water (H2O) molecules arriving at the anode surface. Encouraged by the catalyst, the water molecules break apart, sending two electrons (negatively charged particles, e–) into the anode and releasing two protons (positively charged hydrogen ions, H+) into the electrolyte. The leftover oxygen atom (O) joins the olefin molecule on the surface of the electrode, forming the desired epoxide molecule.

    The two liberated electrons travel through the anode and around the external circuit (shown in red), where they pass through a power source — ideally fueled by a renewable source such as wind or solar — and gain extra energy. When the two energized electrons reach the cathode, they join the two protons arriving in the electrolyte and — as shown in the left-hand magnified view — they form hydrogen gas (H2), which exits the top of the cell.
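Taken together, the electrode processes described above amount to the following half-reactions (writing the olefin generically; this is just a summary of the text, with the epoxide shown as the olefin plus one oxygen atom):

```
anode:    olefin + H2O  ->  epoxide + 2 H+ + 2 e-
cathode:  2 H+ + 2 e-   ->  H2
overall:  olefin + H2O  ->  epoxide + H2
```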

    Experimental results

    Experiments with that setup have been encouraging. Thus far, the work has involved an olefin called cyclooctene, a well-known molecule that’s been widely used by people studying oxidation reactions. “Ethylene and the like are structurally more important and need to be solved, but we’re developing a foundation on a well-known molecule just to get us started,” says Manthiram.

    Results have already allayed a major concern. In one test, the researchers applied 3.8 volts across their mixture at room temperature, and, after four hours, about half of the cyclooctene had converted into its epoxide counterpart, cyclooctene oxide. “So that result confirms that we can split water to make hydrogen and oxygen and then intercept the oxygen atoms so they move onto the olefin and convert it into an epoxide,” says Manthiram.

    But how efficiently does the conversion happen? If this reaction is perfectly efficient, one oxygen atom will move onto an olefin for every two electrons that go into the anode. Thus, one epoxide molecule will form for each hydrogen molecule that forms. Using special equipment, the researchers counted the number of epoxide molecules formed for each pair of electrons passing through the external circuit to form hydrogen.

    That analysis showed that their conversion efficiency was 30 percent of the maximum theoretical efficiency. “That’s because the electrons are also doing other reactions — maybe making oxygen, for instance, or oxidizing some of the solvent,” says Manthiram. “But for us, 30 percent is a remarkable number for a new reaction that was previously unknown. For that to be the first step, we’re very happy about it.”
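The efficiency the researchers measured is a Faradaic efficiency: moles of epoxide formed relative to the theoretical maximum of one epoxide per two electrons passed. A minimal sketch of the arithmetic, with charge and product amounts invented purely to show the calculation:

```python
F = 96485.0  # Faraday constant, coulombs per mole of electrons

# Hypothetical measurements (not values from the experiment):
charge_passed = 200.0      # coulombs through the external circuit
epoxide_formed = 3.1e-4    # moles of epoxide detected in the cell

moles_electrons = charge_passed / F
theoretical_max = moles_electrons / 2  # 2 e- per epoxide at 100% efficiency
faradaic_efficiency = epoxide_formed / theoretical_max

print(f"Faradaic efficiency: {faradaic_efficiency:.0%}")
```

Efficiencies below 100 percent mean some electrons were diverted to side reactions — such as making oxygen gas or oxidizing the solvent, as Kim's counterpart Manthiram notes below.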

    Manthiram recognizes that the efficiency might need to be twice as high, or even higher, for the process to be commercially viable. “Techno-economics will ultimately guide where that number needs to be,” he says. “But I would say that the heart of our discoveries so far is the realization that there is a catalyst that can make this happen. That’s what has opened up everything that we’ve explored since the initial discovery.”

    Encouraging results and future challenges

    Manthiram is cautious not to overstate the potential implications of the work. “We know what the outcome is,” he says. “We put olefin in, and we get epoxide out.” But to optimize the conversion efficiency they need to know at a molecular level all the steps involved in that conversion. For example, does the electron transfer first by itself, or does it move with a proton at the same time? How does the catalyst bind the oxygen atom? And how does the oxygen atom transfer to the olefin on the surface of the catalyst?

    According to Manthiram, he and his group have hypothesized a reaction sequence, and several analytical techniques have provided a “handful of observables” that support it. But he admits that there is much more theoretical and experimental work to do to develop and validate a detailed mechanism that they can use to guide the optimization process. And then there are practical considerations, such as how to extract the epoxides from the electrochemical cell and how to scale up production.

    Manthiram believes that this work on epoxides is just “the tip of the iceberg” for his group. There are many other chemicals they might be able to make using voltage and specially designed catalysts. And while some attempts may not work, with each one they’ll learn more about how voltages and electrons and surfaces influence the outcome.

    He and his team predict that the face of the chemical industry will change dramatically in the years to come. The need to reduce CO2 emissions and energy use is already pushing research on chemical manufacturing toward using electricity from renewable sources. And that electricity will increasingly be made at distributed sites. “If we have solar panels and wind turbines everywhere, why not do chemical synthesis close to where the power is generated, and make commercial products close to the communities that need them?” says Manthiram. The result will be a distributed, electrified, and decarbonized chemical industry — and a dramatic reduction in both energy use and CO2 emissions.

    This research was supported by MIT’s Department of Chemical Engineering and by National Science Foundation Graduate Research Fellowships.

    This article appears in the Autumn 2019 issue of Energy Futures, the magazine of the MIT Energy Initiative. 

    11:59p
    The complex effects of colonial rule in Indonesia

    The areas of Indonesia where Dutch colonial rulers built a huge sugar-producing industry in the 1800s remain more economically productive today than other parts of the country, according to a study co-authored by an MIT economist.

    The research, focused on the Indonesian island of Java, introduces new data into the study of the economic effects of colonialism. The findings show that around villages where the Dutch built sugar-processing factories from the 1830s through the 1870s, there is today greater economic activity, more extensive manufacturing, and even more schools, along with higher local education levels.

    “The places where the Dutch established [sugar factories] persisted as manufacturing centers,” says Benjamin Olken, a professor of economics at MIT and co-author of a paper detailing the results, which appears in the January issue of the Review of Economic Studies.

    The historical link between this “Dutch Cultivation System” and economic activity today has likely been transmitted “through a couple of forces,” Olken suggests. One of them, he says, is the building of “complementary infrastructure” such as railroads and roads, which remain in place in contemporary Indonesia.

    The other mechanism, Olken says, is that “industries grew up around the sugar [industry], and those industries persisted. And once you have this manufacturing environment, that can lead to other changes: More infrastructure and more schools have persisted in these areas as well.”

    To be sure, Olken says, the empirical conclusions of the study do not represent validation of Dutch colonial rule, which lasted from the early 1600s until 1949 and significantly restricted the rights and self-constructed political institutions of Indonesians. Dutch rule had long-lasting effects in many areas of civic life, and the Dutch Cultivation System used forced labor, for one thing.

    “This paper is not trying to argue that the [Dutch] colonial enterprise was a net good for the people of the time,” Olken emphasizes. “I want to be very clear on that. That’s not what we’re saying.”

    Instead, the study was designed to evaluate the empirical effects of the Dutch Cultivation System, and the outcome of the research was not necessarily what Olken would have anticipated.

    “The results are striking,” Olken says. “They just jump out at you.”

    The paper, “The Development Effects of the Extractive Colonial Economy: The Dutch Cultivation System in Java,” is co-authored by Olken and Melissa Dell PhD ’12, a professor of economics at Harvard University.

    On the ground

    Historically in Java, the biggest of Indonesia’s many islands, the main crop had been rice. Starting in the 1830s, the Dutch instituted a sugar-growing system in some areas, building 94 sugar-processing factories, as well as roads and railroads to transport materials and products.

    Generally the Dutch would export high-quality sugar from Indonesia while keeping lower-quality sugar in the country. Overall, the system became massive; at one point in the mid-19th century, sugar production in Java accounted for one-third of the Dutch government’s revenues and 4 percent of Dutch GDP. By one estimate, a quarter of the population was involved in the industry.

    In developing their research, Olken and Dell used 19th century data from government archives in the Netherlands, as well as modern data from Indonesia. The Dutch built the processing plants next to rivers in places with enough flat land to sustain extensive sugar crops; to conduct the study, the researchers looked at economic activity near sugar-processing factories and compared it with economic activity in similar areas that lacked factories.

    “In the 1850s, the Dutch spent four years on the ground collecting detailed information for the over 10,000 villages that contributed land and labor to the Cultivation System,” Dell notes. The researchers digitized those records and, as she states, “painstakingly merged them” with economic and demographic records from the same locations today.

    As the results show, places close to factories are 25-30 percentage points less agricultural in economic composition than similar places farther from factories, and they have more manufacturing, by 6-7 percentage points. They also have 9 percent more employment in retail.

    Areas within 1 kilometer of a sugar factory have a railroad density twice that of similar places 5 to 20 kilometers from factories; by 1980, they were also 45 percent more likely to have electricity and 4 percent more likely to have a high school. They also have local populations with a full year more of education, on average, than areas not situated near old sugar factories.

    The study shows there is also about 10 to 15 percent more public-land use in villages that were part of the Dutch Cultivation System, a data point that holds steady in both 1980 and 2003.

    “The key thing that underlies this paper, in multiple respects, is the linking of the historical data and the modern data,” Olken says. The researchers also observed that the disparity between industrialized places and their more rural counterparts has not arisen since 1980, further suggesting how much Java’s deep economic roots matter.

    Net effects?

    The paper blends the expertise of Olken, who has spent years conducting antipoverty studies in Indonesia, and Dell, whose work at times examines the effects of political history on current-day economic outcomes.

    “I had never really done a historical project before,” Olken says. “But the opportunity to collaborate with Melissa on this was really exciting.”

    One of Dell’s best-known papers, published in 2010 while she was still a PhD student at MIT, shows that in areas of Peru where colonial Spanish rulers instituted a system of forced mining labor from the 1500s to the 1800s, there are significant and negative economic effects that persist today.

    However, somewhat to their surprise, the researchers did not observe similarly pronounced effects from the Dutch Cultivation System.

    “One might have thought that could have had negative consequences on local social capital and local development in other respects,” says Olken, adding that he “wasn’t sure what to expect” before looking at the data.

    “The differences between the long-run effects of forced labor in Peru and Java suggest that for understanding persistent impacts on economic activity, we need to know more than just whether there was forced labor in a location,” Dell says. “We need to understand how the historical institutions influenced economic incentives and activities initially, and how these initial effects may or may not have persisted moving forward.”

    Olken adds that the study “can’t measure every possible thing,” and that “it’s possible there are other effects we didn’t see.”

    Moreover, Olken notes, the paper cannot determine the net effect of the Dutch Cultivation System on Indonesian economic growth. That is, in the absence of Dutch rule, Indonesia’s economy would certainly have grown on its own — but it is impossible to say whether it would have expanded at a rate faster, slower, or equivalent to the trajectory it had under the Dutch.

    “We can’t say what would have happened if the Dutch had never showed up in Indonesia,” Olken says. “And of course the Dutch [colonizing] Indonesia had all kinds of effects well beyond the scope of this paper, many of them negative for the contemporaneous population.”
