MIT Research News' Journal
 

Thursday, November 3rd, 2016

    12:00a
    A new twist on airplane wing design

    When the Wright brothers accomplished their first powered flight more than a century ago, they controlled the motion of their Flyer 1 aircraft using wires and pulleys that bent and twisted the wood-and-canvas wings. This system was quite different from the separate, hinged flaps and ailerons that have performed those functions on most aircraft ever since. But now, thanks to some high-tech wizardry developed by engineers at MIT and NASA, some aircraft may be returning to their roots, with a new kind of bendable, “morphing” wing.

    The new wing architecture could greatly simplify the manufacturing process and reduce fuel consumption by improving the wing’s aerodynamics, while also improving the aircraft’s agility. It is based on a system of tiny, lightweight subunits that could be assembled by a team of small specialized robots, and ultimately could be used to build the entire airframe. The wing would be covered by a “skin” made of overlapping pieces that might resemble scales or feathers.

    The new concept is described in the journal Soft Robotics, in a paper by Neil Gershenfeld, director of MIT’s Center for Bits and Atoms (CBA); Benjamin Jenett, a CBA graduate student; Kenneth Cheung PhD ’12, a CBA alumnus and NASA research scientist; and four others.

    A test version of the deformable wing designed by the MIT and NASA researchers is shown undergoing its twisting motions, which could replace the need for separate, hinged panels for controlling a plane's motion. (Kenneth Cheung/NASA)

    Researchers have been trying for many years to achieve a reliable way of deforming wings as a substitute for the conventional, separate, moving surfaces, but all those efforts “have had little practical impact,” Gershenfeld says. The biggest problem was that most of these attempts relied on deforming the wing through the use of mechanical control structures within the wing, but these structures tended to be so heavy that they canceled out any efficiency advantages produced by the smoother aerodynamic surfaces. They also added complexity and reliability issues.

    By contrast, Gershenfeld says, “We make the whole wing the mechanism. It’s not something we put into the wing.” In the team’s new approach, the whole shape of the wing can be changed, and twisted uniformly along its length, by activating two small motors that apply a twisting pressure to each wingtip.

    This approach to the manufacture of aircraft, and potentially other technologies, is such a new idea that “I think we can say it is a philosophical revolution, opening the gate to disruptive innovation,” says Vincent Loubiere, a lead technologist for emerging technologies and concepts at Airbus, who was not directly involved in this research. He adds that “the perspectives and fields this approach opens are thrilling.”

    Like building with blocks

    The basic principle behind the new concept is the use of an array of tiny, lightweight structural pieces, which Gershenfeld calls “digital materials,” that can be assembled into a virtually infinite variety of shapes, much like assembling a structure from Lego blocks. The assembly, performed by hand for this initial experiment, could be done by simple miniature robots that would crawl along or inside the structure as it took shape. The team has already developed prototypes of such robots.

    The individual pieces are strong and stiff, but the exact choice of the dimensions and materials used for the pieces, and the geometry of how they are assembled, allow for a precise tuning of the flexibility of the final shape. For the initial test structure, the goal was to allow the wing to twist in a precise way that would substitute for the motion of separate structural pieces (such as the small ailerons at the trailing edges of conventional wings), while providing a single, smooth aerodynamic surface.

    Building up a large and complex structure from an array of small, identical building blocks, which have an exceptional combination of strength, light weight, and flexibility, greatly simplifies the manufacturing process, Gershenfeld explains. While the construction of light composite wings for today’s aircraft requires large, specialized equipment for layering and hardening the material, the new modular structures could be rapidly manufactured in mass quantities and then assembled robotically in place.

    Side perspective of the test wing at Langley Research Center’s 12-Foot Low Speed Tunnel. (Kenneth Cheung/NASA)

    Gershenfeld and his team have been pursuing this approach to building complex structures for years, with many potential applications for robotic devices of various kinds. For example, this method could lead to robotic arms and legs whose shapes could bend continuously along their entire length, rather than just having a fixed number of joints.

    This research, says Cheung, “presents a general strategy for increasing the performance of highly compliant  — that is, ‘soft’ — robots and mechanisms,” by replacing conventional flexible materials with new cellular materials “that are much lower weight, more tunable, and can be made to dissipate energy at much lower rates” while having equivalent stiffness.

    Saving fuel, cutting emissions

    While exploring possible applications of this nascent technology, Gershenfeld and his team consulted with NASA engineers and others seeking ways to improve the efficiency of aircraft manufacturing and flight. They learned that “the idea that you could continuously deform a wing shape to do pure lift and roll has been a holy grail in the field, for both efficiency and agility,” he says. Given the importance of fuel costs in both the economics of the airline industry and that sector’s contribution to greenhouse gas emissions, even small improvements in fuel efficiency could have a significant impact.

    Wind-tunnel tests of this structure showed that it at least matches the aerodynamic properties of a conventional wing, at about one-tenth the weight.

    The “skin” of the wing also enhances the structure’s performance. It’s made from overlapping strips of flexible material, layered somewhat like feathers or fish scales, allowing for the pieces to move across each other as the wing flexes, while still providing a smooth outer surface.

    The modular structure also provides greater ease of both assembly and disassembly: One of this system’s big advantages, in principle, Gershenfeld says, is that when it’s no longer needed, the whole structure can be taken apart into its component parts, which can then be reassembled into something completely different. Similarly, repairs could be made by simply replacing an area of damaged subunits.

    “An inspection robot could just find where the broken part is and replace it, and keep the aircraft 100 percent healthy at all times,” says Jenett.

    Following up on the successful wind tunnel tests, the team is now extending the work to tests of a flyable unpiloted aircraft, and initial tests have shown great promise, Jenett says. “The first tests were done by a certified test pilot, and he found it so responsive that he decided to do some aerobatics.”

    Some of the first uses of the technology may be to make small, robotic aircraft — “super-efficient long-range drones,” Gershenfeld says, that could be used in developing countries as a way of delivering medicines to remote areas.

    “Ultralight, tunable, aeroelastic structures and flight controls open up whole new frontiers for flight,” says Gonzalo Rey, chief technology officer for Moog Inc., a precision aircraft motion-controls company, who was not directly involved in this work, though he has collaborated with the team. “Digital materials and fabrication are a fundamentally new way to make things and enable the conventionally impossible. The digital morphing wing article demonstrates the ability to resolve in depth the engineering challenges necessary to apply the concept.”

    Rey adds that “The broader potential in this concept extends directly to skyscrapers, bridges, and space structures, providing not only improved performance and survivability but also a more sustainable approach by achieving the same strength while using, and reusing, substantially less raw material.”

    And Loubiere, from Airbus, suggests that many other technologies could also benefit from this method, including wind turbines: “Simply enabling the assembly of the windmill blades on the spot, instead of using complex and fuel-consuming transport, would enhance greatly the cost and overall performance,” he says.

    The research team also included graduate students Sam Calisch at MIT’s Center for Bits and Atoms; Daniel Cellucci at Cornell University; Nick Cramer at the University of California at Santa Cruz; and researcher Sean Swei at NASA’s Ames Research Center in Mountain View, California. The work was supported by the NASA Aeronautics Research Institute Team Seeding Program, the NASA ARMD Convergent Aeronautics Solutions Program, and the NASA Space Technology Research Fellowship program.

    11:00a
    Real estate innovation by the numbers

    A new lab in the MIT Center for Real Estate (CRE) will link the creation of the built environment to economic impact, seeking to identify innovations in design and technology that will determine the future of communities and cities.

    The MIT Real Estate Innovation Lab (REIL) will explore three areas: innovations in building design that lead to new building types and urban forms; new processes in construction such as 3-D printing and modular construction; and new data technologies with the potential to transform the organization of cities and the built environment.

    “Thirty years ago, green buildings were a cutting-edge concept. Today they are the standard, not only due to their environmental benefits but because there is evidence that justifies the economic investment,” says Andrea Chegut, research scientist and director of the new lab. “Our goal will be to understand what is happening at the frontier of the built environment today, to produce statistical and empirical evidence of approaches that work, and to translate these innovations into widespread use.”

    The lab’s principal investigators are Dennis Frenchman, the Class of 1922 Professor of Urban Design and Planning, and David Geltner, professor of real estate finance and engineering systems. The lab also provides an opportunity for MIT School of Architecture and Planning graduate students, PhD candidates, and postdocs to contribute and collaborate in an entrepreneurial setting that combines expertise in advanced methodologies of statistics, computational architecture, economics, urban design, and other disciplines.

    “This new lab creates a dynamic environment for scholars to conduct research that will weave together the many disciplines within the School of Architecture and Planning that contribute to the development of innovative products, spaces, and communities,” says Albert Saiz, director of the CRE and the Daniel Rose Associate Professor of Urban Economics and Real Estate. “When you link these scholars and practitioners together, you start to see the full landscape of the frontier. We believe this is the first academic lab to bring these disciplines together with this ambition.”

    Industry and corporate collaborations — from real estate development and construction to big data and architecture — are important to the researchers, says Geltner, because they are the stakeholders who develop cities and communities. Accompanying the launch of the new lab is the announcement of its founding industry partner, JLL, the leading professional services firm specializing in real estate.

    JLL is engaging with REIL to develop an open-innovation strategy to disseminate academic research to the urban technology community. “JLL and the MIT Center for Real Estate have a shared mission — and a passion — for innovation and real estate,” said Ben Breslau, JLL’s managing director of research. “As an industry partner, we look forward to engaging with the brilliant minds at MIT to support research that will create a better future.”

    Architects and designers can introduce novel ideas from 3-D printed buildings to indoor vertical food farms, but to develop these projects, they have to make the case to the investors that will fund them. However, says Chegut, new technologies and systems may not have diffused through the market enough to offer sufficient assurance to investors. “If you are going to get meaningful projects off the ground, you must have the financial stakeholders on board,” she says. “We hope to contribute analysis and evidence that helps innovators and investors come together.”

    The big data revolution is finally coming to the built environment, says Frenchman, noting that venture capital investment in urban technology firms has gone from $200,000 per year in 2008 to $1.8 billion today. “These tech companies are building digital platforms and scraping data to show us the city in a new form,” he says.

    Embracing this new diversity of data, one of the lab’s flagship projects is to create a comprehensive real estate database for the entire city of New York. By assembling any data that describes the city — such as rents, transaction prices, building mortgages, vacant space, Airbnb locations, co-working spaces, cell towers, fiber-optic cables, subway lines, and more — the database will offer a way to “hack the city,” Chegut says. “Each source represents a distinct view of a city, whether from architects, urban planners, real estate developers, economists, construction firms, utilities, or investors. Capturing the full complexity of these overlapping and interconnected sets of data will offer an unprecedented platform for insight because it will integrate multiple perspectives into one resource.”
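
    As a rough illustration of the kind of integration described above, the sketch below (in Python, using pandas) joins several city datasets on a shared parcel identifier into one queryable table. The file names and columns are hypothetical placeholders, not the lab’s actual data sources.

        import pandas as pd

        # Hypothetical inputs: each file is one "view" of the city, keyed by a shared parcel ID.
        rents = pd.read_csv("nyc_rents.csv")           # columns: parcel_id, year, asking_rent
        sales = pd.read_csv("nyc_sales.csv")           # columns: parcel_id, year, sale_price
        infra = pd.read_csv("nyc_infrastructure.csv")  # columns: parcel_id, fiber_access, subway_dist_m

        # Merge the sources into one panel so queries can cut across perspectives
        # (economic, physical, infrastructural) for the same parcel and year.
        city = (rents
                .merge(sales, on=["parcel_id", "year"], how="outer")
                .merge(infra, on="parcel_id", how="left"))

        # Example query: median asking rent by year for parcels with fiber access.
        print(city[city["fiber_access"] == 1].groupby("year")["asking_rent"].median())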

    One sign of the database’s potential for impact is the interest it is already receiving. “Many data providers are committing to share their data with us to help illustrate these links,” says Steve Weikal, head of industry relations at the CRE. “They understand that sharing their information will help everyone build better cities.”

    The lab will also explore how changes in the built environment over time help create an atmosphere for innovation to occur. “We used to build factories, then office parks, then media cities, and now we are building innovation cities,” says Frenchman. By examining locations all over the world with successful innovation districts — including Cambridge, Massachusetts; Seoul, South Korea; Medellín, Colombia; Mexico City, Mexico; Barcelona, Spain; and Beijing, China — the project will identify how they benefit cities in terms of jobs and development, as well as inform urban design.

    The new lab plans on releasing at least a half-dozen papers exploring these phenomena this year and will link this research to DesignX, a new entrepreneurship accelerator in the School of Architecture and Planning that provides training, tools, and resources to student teams building companies and organizations focused on design and the built environment.

    By applying knowledge generated from the lab’s research to educate students in the DesignX program, the lab can help foster new technologies that will address gaps in the marketplace. “If we have a number of pieces of the puzzle, but one is missing, we can say to young entrepreneurs, ‘Run that way, fast,’” says Frenchman. “Our research can point innovators toward the frontier.”

    By foregrounding the business case for innovation, Chegut hopes that the lab can help to give architects, urban planners, and developers the tools they need to succeed with their most transformative ideas. “This kind of R&D is the lifeblood of MIT,” says Chegut. “Designers are asked to justify every intervention they make. By getting architects and economists together, we can propose design innovations that not only support viable urban economies but also lead to vibrant and successful communities.”

    2:00p
    Scientists set traps for atoms with single-particle precision

    Atoms, photons, and other quantum particles are often capricious and finicky by nature; very rarely at a standstill, they often collide with others of their kind. But if such particles can be individually corralled and controlled in large numbers, they may be harnessed as quantum bits, or qubits — tiny units of information whose state or orientation can be used to carry out calculations at rates significantly faster than today’s semiconductor-based computer chips.

    In recent years, scientists have come up with ways to isolate and manipulate individual quantum particles. But such techniques have been difficult to scale up, and the lack of a reliable way to manipulate large numbers of atoms remains a significant roadblock toward quantum computing.

    Now, scientists from Harvard and MIT have found a way around this challenge. In a paper published today in the journal Science, the researchers report on a new method that enables them to use lasers as optical “tweezers” to pick individual atoms out from a cloud and hold them in place. As the atoms are “trapped,” the scientists use a camera to create images of the atoms and their locations. Based on these images, they then manipulate the angle of the laser beams, to move individual atoms into any number of different configurations.

    The team has so far created arrays of 50 atoms and manipulated them into various defect-free patterns, with single-atom control. Vladan Vuletic, one of the paper’s authors and the Lester Wolfe Professor of Physics at MIT, likens the process to “building a small crystal of atoms, from the bottom, up.”

    “We have demonstrated a reconfigurable array of traps for single atoms, where we can prepare up to 50 individual atoms in separate traps deterministically, for future use in quantum information processing, quantum simulations, or precision measurements,” says Vuletic, who is also a member of MIT’s Research Laboratory of Electronics. “It’s like Legos of atoms that you build up, and you can decide where you want each block to be.” 

    The paper’s other senior authors are Markus Greiner and Mikhail Lukin of Harvard University; the lead author is Harvard’s Manuel Endres.

    Staying neutral

    The team designed its technique to manipulate neutral atoms, which carry no electrical charge. Most other quantum experiments have involved charged atoms, or ions, as their charge makes them more easily trappable. Scientists have also shown that ions, under certain conditions, can be made to perform quantum gates — logical operations between two quantum bits, similar to logic gates in classical circuits. However, because of their charged nature, ions repel each other and are difficult to assemble in dense arrays.

    Neutral atoms, on the other hand, have no problem being in close proximity. The main obstacle to using neutral atoms as qubits has been that, unlike ions, they experience very weak forces and are not easily held in place.

    “The trick is to trap them, and in particular, to trap many of them,” Vuletic says. “People have been able to trap many neutral atoms, but not in a way that you could form a regular structure with them. And for quantum computing, you need to be able to move specific atoms to specific locations, with individual control.”

    Setting the trap

    To trap individual neutral atoms, the researchers first used a laser to cool a cloud of rubidium atoms to ultracold, near-absolute-zero temperatures, slowing the atoms down from their usual, high-speed trajectories. They then directed a second laser beam through an acousto-optic deflector, an instrument that splits the beam into many smaller beams, the number and angle of which depend on the radio frequencies applied to the deflector.

    The researchers focused the smaller laser beams through the cloud of ultracold atoms and found that each beam’s focus — the point at which the beam’s intensity was highest — attracted a single atom, essentially picking it out from the cloud and holding it in place.

    “It’s similar to charging up a comb by rubbing it against something woolen, and using it to pick up small pieces of paper,” Vuletic says. “It’s a similar process with atoms, which are attracted to regions of high intensity of the light field.”

    While the atoms are trapped, they emit light, which the scientists captured using a charge-coupled-device camera. By looking at their images, the researchers were able to discern which laser beams, or tweezers, were holding atoms and which were not. They could then change the radio frequency of each beam to “switch off” the tweezers without atoms, and rearrange those with atoms, to create arrays that were free of defects. The team ultimately created arrays of 50 atoms that were held in place for up to several seconds.
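
    The switch-off-and-rearrange step can be sketched in a few lines of Python. The sketch below is illustrative only, assuming hypothetical radio-frequency tone values and a camera-derived occupancy list; the researchers’ actual control software and hardware interface are not described here.

        import numpy as np

        def plan_rearrangement(rf_tones, occupancy):
            # rf_tones: the radio-frequency tone driving each tweezer (one beam per tone).
            # occupancy: booleans from the camera image, True where a tweezer holds an atom.
            # Returns (current tone, target tone) ramps that close the gaps so the loaded
            # atoms end up evenly spaced; empty tweezers are simply dropped ("switched off").
            loaded = [tone for tone, filled in zip(rf_tones, occupancy) if filled]
            if len(loaded) < 2:
                return [(tone, tone) for tone in loaded]
            targets = np.linspace(min(rf_tones), max(rf_tones), len(loaded))
            return list(zip(loaded, targets))

        # Example: 8 tweezers generated by tones between 80 and 87 MHz, loaded at random.
        rng = np.random.default_rng(0)
        tones = np.linspace(80e6, 87e6, 8)
        occupancy = rng.random(8) < 0.6      # roughly 60 percent loading per tweezer
        for current, target in plan_rearrangement(tones, occupancy):
            print(f"ramp {current / 1e6:.2f} MHz -> {target / 1e6:.2f} MHz")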

    “The question is always, how many quantum operations can you perform in this time?” Vuletic says. “The typical timescale for neutral atoms is about 10 microseconds, so you could do about 100,000 operations in a second. We think for now this lifetime is fine.”

    Now, the team is investigating whether they can encourage neutral atoms to perform quantum gates — the most basic processing of information between two qubits. While others have demonstrated quantum gates between two neutral atoms, doing so in systems involving large numbers of atoms has remained out of reach. If Vuletic and his colleagues can successfully induce quantum gates in their systems of 50 atoms or more, they will have taken a significant step toward realizing quantum computing.

    “People would also like to do other experiments aside from quantum computing, such as simulating condensed matter physics, with a predetermined number of atoms, and now with this technique it should be possible,” Vuletic says. “It’s very exciting.”

    This research was supported in part by the National Science Foundation and the National Security Science and Engineering Faculty Fellowship.

    2:55p
    3Q: William Bonvillian on connecting Cambridge and Capitol Hill

    From climate change to health and medicine, many of today’s national policy issues are intrinsically linked to science and technology. A scant few of the lawmakers shaping these policies were trained as scientists or engineers, however. Fortunately for them, they have William Bonvillian and his team in the MIT Washington Office.

    The Washington Office provides a two-way conduit for information between the Institute and policymakers on Capitol Hill. Directed by Bonvillian for the last 11 years, the office ensures that knowledge generated at MIT makes an impact among decision makers in the White House, Congress, national research and development agencies, and other Washington organizations. The office also helps MIT’s leadership and faculty stay abreast of policy developments in R&D funding, innovation, energy, or other areas where science, technology, and policy intersect.  

    Bonvillian, who announced today that he will be stepping down in January, was a senior policy advisor in the U.S. Senate for 17 years before joining MIT in 2006. Among his many contributions to the MIT Washington Office, he worked with faculty and researchers at the Institute on MIT’s policy initiatives, establishing an important new model for providing lawmakers with cutting-edge and interdisciplinary perspectives from outside the Beltway.

    Bonvillian plans to maintain his ties to MIT after leaving the Washington Office; he will teach a semester-long course on science and technology policy, in addition to the Independent Activities Period (IAP) course he has taught for nine years, and he plans to work on research projects with the Industrial Performance Center. He spoke with MIT News about the work of the Washington Office and importance of the connections between MIT and Capitol Hill.

    Q: Why does MIT have a Washington Office?

    A: The Washington Office was formally set up in 1991, when Chuck Vest was president of the Institute, but MIT’s role and presence in Washington long predates that. The onslaught of World War II really marked MIT’s deep involvement in national policymaking. Vannevar Bush is legendary as the former dean of engineering and vice president of MIT who came to be Franklin Delano Roosevelt’s science czar, and who organized science for the coming conflict.

    MIT’s presidents have long viewed it as their responsibility to be involved in national policymaking. Karl Compton [who served as MIT’s president from 1930 to 1948] was deeply involved, and so were many others at MIT. The Radiation Laboratory, or Rad Lab, was formed at MIT and became the model by which science was going to be organized at the federal level to serve the public interest.

    A figure I’m fond of citing is that MIT received 80 times more funding in the course of World War II than in its previous 80 years of existence. This period marked the advent of the federal research university, and MIT has been deeply involved in that system ever since. James Killian [MIT’s president from 1948 to 1959] was Eisenhower’s science and technology advisor, for example, and Jerry Wiesner [MIT’s president from 1971 to 1980] served in that role for John F. Kennedy. I was pleased to work for two presidents, Susan Hockfield and Rafael Reif, who continued the tradition.

    Q: What have been some of the office’s main efforts in the last few years?

    A: We serve as a link between the national R&D agencies and MIT’s faculty and administration. We work to understand where the agencies are headed, what they’re thinking, what new directions they want to move in. And we communicate to them MIT’s thinking on that, and where MIT thinks the opportunity spaces are.

    Our second core task is communication with Congress. MIT doesn’t seek earmarks or funding for particular research awards from Congress — we carefully avoid that — but MIT does worry about where the federal research enterprise is going. In our office, we watch the overall funding support that Congress provides and link MIT faculty and leadership into that debate.

    We also connect MIT students with opportunities in the policy process. Our office has close working relations with the student group the Science Policy Initiative, and we organize a series of events with them every year. I teach a course on science and technology policy in IAP, for example, and we bring about 25 members of that group for an annual day of visits with members of Congress and another multiday trip for meetings with a variety of R&D and nongovernmental organizations. It’s really an education for them on how the policy process works and on the science policy apparatus here in Washington.

    Finally, there are the MIT policy initiatives. This is the significant new piece that’s been added to the office in recent years, but it’s part of MIT’s long history of thinking that it should be engaged in the national policymaking process around science and technology. This effort came out of the MIT Energy Initiative (MITEI) originally. It was put together by MIT faculty members like Ernest Moniz, who has been the U.S. Energy Secretary under the Obama Administration; Bob Armstrong, the current director of MITEI; and John Deutch, who has served in multiple federal appointments since the 1970s. The first study drew on input from all of MIT’s schools to think about what the future might be for nuclear power, and what the big challenges were. This has led to a series of major energy studies, very cross-disciplinary in nature, that have tackled topics from nuclear power, to carbon capture and sequestration, to the grid, to solar energy. The upcoming report will be on the future of the utility and is expected out in mid-December. These have all made a major contribution to policymaking in the energy field. As one former energy secretary told MIT’s president in my presence, “Keep doing these. We can’t do these at the DOE. We have plenty of physicists, even some engineers, but we don’t have the broad range of perspectives that these reports bring.”

    That was the first and the model. Since then there has been a major effort, with reports in 2011 and 2016, around convergence, the idea that the next generation of health research advances must assimilate approaches and insights from many different disciplines — that biology must engage with the physical, computational, and engineering sciences. The current administration has picked up this convergence approach, with its BRAIN Initiative and other initiatives on precision medicine and the microbiome. DARPA has now opened a Biological Technologies Office on the convergence model. So, there has been a lot of movement on this front.

    Other MIT policy initiatives have covered advanced manufacturing (where MIT’s work led to President Obama’s largest technology initiative), online learning, the central role of basic research, and more. The newly announced “The Engine” could lead to another. The role of the Washington Office is to support these initiatives and bring the reports to Washington, organizing the follow-up meetings between the report’s faculty leaders and policymakers.

    Q: What are some of the biggest science policy challenges you see in the near future?

    A: A major issue has been the imposition of budget sequestration on the domestic, discretionary part of the budget. This includes R&D and applies to both defense and nondefense spending. That’s a major dilemma causing stagnation in the U.S. research portfolio. Federally funded R&D support is foundational for industry R&D, which is now overwhelmingly focused on development as opposed to research. So if you’re curtailing research, you’re going to affect development over time as well. It’s a real dilemma.

    MIT and other universities have argued against sequestration, for the sake of the country’s innovation system. These funding constraints are a box we need to escape from. Sequestration was imposed in 2013, and Congress, realizing the problems it had created, has significantly modified it for R&D for 2014 to 2017. But the dilemma remains in place through 2023.

    11:59p
    Decoding the medical cost mystery

    In Miami, health care providers spent about $14,423 per Medicare patient in 2010. But in Minneapolis, average spending on Medicare enrollees that year was $7,819, just over half as much. In fact, the U.S. is filled with regional disparities in medical spending. Why is this?

    One explanation focuses on providers: In some regions, they may be more likely to use expensive tests or procedures. Another account focuses on patients: If the underlying health or the care preferences of regional populations vary enough, that may cause differences in spending. In recent years, public discussion of this issue has largely highlighted providers, with the implication that reducing apparently excessive treatments could trim overall health care costs.

    But now a unique study co-authored by MIT economists provides a new answer to the medical cost mystery: By scrutinizing millions of Medicare patients who have moved from one place to another, the researchers have found that patients and providers account for virtually equal shares of the differences in regional spending.

    “We find it is about 50/50, half due to patients and half due to places,” says Heidi Williams, the Class of 1957 Career Development Associate Professor in MIT’s Department of Economics, and a co-author of a new paper detailing the study’s findings.

    Specifically, the study finds that nearly 50 percent of the spending differences across geographic areas stems from the characteristics of patients, meaning both their basic health and their varying preferences concerning the intensiveness of medical care. The rest of the spending differences derive from place-specific factors, potentially due to disparities in provider practices and incentives. The finding could help analysts and policymakers better understand the components of medical costs, and could add nuance to the debate about possible inefficiencies in health care spending.

    After all, says co-author Amy Finkelstein, the John and Jennie S. MacDonald Professor of Economics at MIT, the study provides “evidence that there are real, place-specific differences in how health care is practiced. On the other hand … rather than just saying [that] place matters, we’re quantifying how important it is, and showing that a lot of the geographic variation is due to differences across patients.”

    The paper, “Sources of Geographic Variation in Health Care: Evidence from Patient Migration,” appears in the November issue of the Quarterly Journal of Economics. The co-authors are Finkelstein, Williams, and Matthew Gentzkow, an economist at Stanford University.

    Millions of Medicare enrollees

    To conduct the study, the scholars analyzed a sample of 2.5 million Medicare patients, and examined their health care usage from 1998 through 2008. The study focused on a subgroup of enrollees who did not move during this time, as well as 500,000 Medicare enrollees who moved from one market, or “Hospital Referral Region” (HRR), to another. The HRR concept was first developed by the Dartmouth Atlas of Health Care, a research project housed at Dartmouth College.

    Essentially, by examining the health care usage patterns of the same people as they lived in different places, Finkelstein, Williams, and Gentzkow were able to develop a natural experiment to address the providers-or-patients issue.
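
    The logic of that natural experiment can be illustrated with a toy simulation, sketched below in Python. The numbers and the simple regression are illustrative assumptions, not the paper’s data or estimator: if a mover’s spending changes by only a fraction of the gap between the destination’s and origin’s average spending, that fraction reflects the share of regional variation attributable to place, and the remainder travels with the patient.

        import numpy as np

        rng = np.random.default_rng(1)
        n_regions, n_people = 40, 200_000

        # Each region has a place component (provider practices, prices) and a patient-mix
        # component (the average health and preferences of the people who live there).
        place = rng.normal(0, 1, n_regions)
        patient_mix = rng.normal(0, 1, n_regions)

        # People get an origin region; their individual effect is drawn around that region's
        # patient mix, so patients are sorted non-randomly across regions.
        origin = rng.integers(0, n_regions, n_people)
        patient = patient_mix[origin] + rng.normal(0, 0.5, n_people)

        # Observed spending at home combines the place effect with the patient effect.
        spending_home = place[origin] + patient + rng.normal(0, 0.2, n_people)
        region_avg = np.array([spending_home[origin == r].mean() for r in range(n_regions)])

        # Movers keep their patient effect but switch to the destination's place effect.
        dest = rng.integers(0, n_regions, n_people)
        movers = rng.random(n_people) < 0.1
        spending_after = place[dest] + patient + rng.normal(0, 0.2, n_people)

        # Regress the movers' change in spending on the destination-origin gap in regional
        # averages; the slope estimates the share of regional variation attributable to place.
        dy = (spending_after - spending_home)[movers]
        gap = (region_avg[dest] - region_avg[origin])[movers]
        print(f"estimated place share: {np.polyfit(gap, dy, 1)[0]:.2f}")  # about 0.5 here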

    Beyond their bottom-line result, the researchers unearthed several more specific findings. For instance, when it comes to emergency care, a relatively larger share of the regional spending discrepancy — 71 percent — was attributable to patients, who likely make more of the decisions about whether or not to seek care in those situations.

    However, patients accounted for only 9 percent of the regional discrepancy for diagnostic tests and 14 percent of the discrepancy for imaging tests; in those cases, the variation by geographic region seems due to differing provider practices, with health care institutions in some places consistently spending more money on testing than other providers do elsewhere.    

    The researchers also found that when a Medicare enrollee moves and receives a different level of health care spending, most of that change occurs in the first year after the move.

    Even given these more detailed findings, the researchers note that it is still difficult to assess which of the higher-spending or lower-spending regions have more “efficient” medical practices. In Williams’ view, an overly simplistic reading of the paper’s results would be that variation on the provider side reflects inefficiency, in the sense of some institutions providing suboptimal levels of care. She stresses that there may also be efficient geographic variation if providers in some areas are relatively more skilled at certain intensive procedures and provide more of those procedures.

    “Just because there’s geographic variation on the provider side doesn’t mean that is necessarily inefficient,” Williams observes.

    The scholars, who initiated the study in 2010, tried to account for potential complications concerning costs. For instance: Could more people with already-existing health problems be moving to warm, sunny, higher-spending places such as Miami, while leaving colder, lower-spending places like Minneapolis? That sort of migration could push Miami’s per-capita Medicare spending to a higher level.

    But as the researchers discovered, such a pattern does not seem to exist. People moving to Arizona, California, and Florida accounted for 24 percent of all moves in the study; but the results are robust even when excluding those states from the tabulations.

    State of the debate

    Regional variation in health care spending has become a prominent issue over the last decade thanks in large part to the Dartmouth Atlas of Health Care.

    “We are very much standing on the shoulders of giants,” Finkelstein says. “The people at Dartmouth, our colleagues in the economics department and the medical school there, have done an enormous amount of work and a huge service documenting and establishing the facts that have rightly gotten a lot of attention, of large differences [in health care spending] across a lot of geographic areas of the country.”

    Some of the better-known Dartmouth findings have suggested that variations in regional spending were not leading to better or worse patient outcomes — although some studies in health care economics over the last decade have also found that more spending correlates with improved results for patients.

    Because some of the subsequent public discussion has emphasized questions about the practices of providers, the MIT researchers hope the new paper’s data about the role of patient characteristics can help inform the ongoing debate.

    “The current consensus [has been] that almost all this variation was about providers, and that patient-specific health or preferences were unlikely to be important in explaining geographic variation in spending,” Williams says. “I think our paper shifts the weight of the evidence.”

    11:59p
    Laser particles could provide sharper images of tissues

    A new imaging technique developed by scientists at MIT, Harvard University, and Massachusetts General Hospital (MGH) aims to illuminate cellular structures in deep tissue and other dense and opaque materials. Their method uses tiny particles, embedded in the material, that give off laser light.

    The team synthesized these “laser particles” in the shape of tiny chopsticks, each measuring a small fraction of a human hair’s width. The particles are made from lead iodide perovskite — a material that is also used in solar panels, and that efficiently absorbs and traps light. When the researchers shine a laser beam at the particles, the particles light up, giving off normal, diffuse fluorescent light. But if they tune the incoming laser’s power above a certain “lasing threshold,” the particles will instantly generate laser light.

    The researchers, led by MIT graduate student Sangyeon Cho, demonstrated they were able to stimulate the particles to emit laser light, creating images at a resolution six times higher than that of current fluorescence-based microscopes.

    “That means that if a fluorescence microscope’s resolution is set at 2 micrometers, our technique can have 300-nanometer resolution — about a sixfold improvement over regular microscopes,” Cho says. “The idea is very simple but very powerful and can be useful in many different imaging applications.”

    Cho and his colleagues have published their results in the journal Physical Review Letters. His co-authors include Seok Hyun Yun, a professor at Harvard; Nicola Martino, a research fellow at Harvard and MGH’s Wellman Center for Photomedicine; and Matjaž Humar, a researcher at the Jozef Stefan Institute. The research was done as part of the Harvard-MIT Division of Health Sciences and Technology.

    A light in the dark

    When you shine a flashlight in a darkened room, that light appears as a relatively diffuse, hazy beam of white light, representing a jumble of different wavelengths and colors. In stark contrast, laser light is a pointedly focused, monochromatic beam of light, of a specific frequency and color.

    In conventional fluorescence microscopy, scientists may inject a sample of biological tissue with particles filled with fluorescent dyes. They then point a laser beam through a lens that directs the beam through the tissue, causing any fluorescent particles in its path to light up.

    But these particles, like microscopic flashlights, produce a relatively indistinct, fuzzy glow. If such particles were to emit more focused, laser-like light, they might produce sharper images of deep tissues and cells. In recent years, researchers have developed laser-light-emitting particles, but Cho’s work is the first to apply these unique particles to imaging applications.

    Chopstick lasers

    The team first synthesized tiny, 6-micron-long nanowires from lead iodide perovskite, a material that does a good job of trapping and concentrating fluorescent light. The particles’ rod-shaped geometry — which Cho describes as “chopstick-like” — can allow a specific wavelength of light to bounce back and forth along the particles’ length, generating a standing wave, or very regular, concentrated pattern of light, similar to a laser.
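
    A rough way to see the standing-wave idea is the textbook Fabry-Perot resonance condition for a rod-shaped cavity: an integer number of half-wavelengths, measured inside the material, must fit along the rod’s length. The short Python sketch below uses assumed values for the refractive index and emission band; they are illustrative only and not taken from the paper.

        # Resonance condition for a rod cavity: m * (wavelength / (2 * n)) = L,
        # i.e. wavelength_m = 2 * n * L / m for an integer mode number m.
        n = 2.5                    # assumed refractive index of the perovskite
        L = 6e-6                   # rod length: 6 microns
        band = (750e-9, 800e-9)    # assumed emission band of interest, in meters

        for m in range(1, 200):
            wavelength = 2 * n * L / m
            if band[0] <= wavelength <= band[1]:
                print(f"mode m={m}: {wavelength * 1e9:.1f} nm")

    Only wavelengths that satisfy this condition reinforce themselves as they bounce back and forth along the rod, which is the sense in which each particle behaves like a tiny laser cavity.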

    The researchers then built a simple optical setup, similar to conventional fluorescence microscopes, in which a laser beam is pumped from a light source, through a lens, and onto a sample platform containing the laser particles.

    For the most part, the researchers found that the particles emitted diffuse fluorescent light in response to the laser stimulation, similar to conventional fluorescent dyes, at low pump power. However, when they tuned the laser’s power past a certain threshold, the particles lit up considerably, emitting much more laser light.

    Cho says that the new optical technique, which they have named LAser particle Stimulated Emission (LASE) microscopy, could be used to image a specific focal plane, or a particular layer of biological tissue. Theoretically, he says, scientists can shine a laser beam into a three-dimensional sample of tissue embedded throughout with laser particles, and use a lens to focus the beam at a specific depth. Only those particles in the beam’s focus will absorb enough light or energy to turn on as lasers themselves. All other particles along the beam’s path, outside its focus, should absorb less energy and emit only fluorescent light.

    “We can collect all this stimulated emission and can distinguish laser from fluorescent light very easily using spectrometers,” Cho says. “We expect this will be very powerful when applied to biological tissue, where light normally scatters all around, and resolution is devastated. But if we use laser particles, they will be the narrow points that will emit laser light. So we can distinguish from the background and can achieve good resolution.”

    Giuliano Scarcelli, an assistant professor at the University of Maryland, says the technique’s success will hinge on successfully implementing it on a standard fluorescence microscope. Once that is achieved, laser imaging’s applications, he says, are promising.

    “The fact that you have a laser versus fluorescence probably means you can measure deeper into tissue because you have a higher signal-to-noise ratio,” says Scarcelli, who was not involved in the work. “We’ll need to see in practice, but on the other hand, with optics, we have no good way of imaging deep tissue. So any research on this topic is a welcome addition.”

    To implement this technique in living tissue, Cho says laser particles would have to be biocompatible, which lead iodide perovskite materials are not. However, the team is currently investigating ways to manipulate cells themselves to glow like lasers.

    “Our idea is, why not use the cell as an internal light source?” Cho says. “We’re starting to think about that problem.”
