MIT Research News' Journal
Tuesday, March 10th, 2020
How the brain encodes landmarks that help us navigate

When we move through the streets of our neighborhood, we often use familiar landmarks to help us navigate. And as we think to ourselves, “OK, now make a left at the coffee shop,” a part of the brain called the retrosplenial cortex (RSC) lights up.
While many studies have linked this brain region with landmark-based navigation, exactly how it helps us find our way is not well-understood. A new study from MIT neuroscientists now reveals how neurons in the RSC use both visual and spatial information to encode specific landmarks.
“There’s a synthesis of some of these signals — visual inputs and body motion — to represent concepts like landmarks,” says Mark Harnett, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “What we went after in this study is the neuron-level and population-level representation of these different aspects of spatial navigation.”
In a study of mice, the researchers found that this brain region creates a “landmark code” by combining visual information about the surrounding environment with spatial feedback of the mice’s own position along a track. Integrating these two sources of information allowed the mice to learn where to find a reward, based on landmarks that they saw.
“We believe that this code that we found, which is really locked to the landmarks, and also gives the animals a way to discriminate between landmarks, contributes to the animals’ ability to use those landmarks to find rewards,” says Lukas Fischer, an MIT postdoc and the lead author of the study.
Harnett is the senior author of the study, which appears today in the journal eLife. Other authors are graduate student Raul Mojica Soto-Albors and recent MIT graduate Friederike Buck.
Encoding landmarks
Previous studies have found that people with damage to the RSC have trouble finding their way from one place to another, even though they can still recognize their surroundings. The RSC is also one of the first areas affected in Alzheimer’s patients, who often have trouble navigating.
The RSC is wedged between the primary visual cortex and the motor cortex, and it receives input from both of those areas. It also appears to be involved in combining two types of representations of space — allocentric, meaning the relationship of objects to each other, and egocentric, meaning the relationship of objects to the viewer.
“The evidence suggests that RSC is really a place where you have a fusion of these different frames of reference,” Harnett says. “Things look different when I move around in the room, but that’s because my vantage point has changed. They’re not changing with respect to one another.”
In this study, the MIT team set out to analyze the behavior of individual RSC neurons in mice, including how they integrate multiple inputs that help with navigation. To do that, they created a virtual reality environment for the mice by allowing them to run on a treadmill while they watch a video screen that makes it appear they are running along a track. The speed of the video is determined by how fast the mice run.
At specific points along the track, landmarks appear, signaling that there’s a reward available a certain distance beyond the landmark. The mice had to learn to distinguish between two different landmarks, and to learn how far beyond each one they had to run to get the reward.
Once the mice learned the task, the researchers recorded neural activity in the RSC as the animals ran along the virtual track. They were able to record from a few hundred neurons at a time, and found that most of them anchored their activity to a specific aspect of the task.
There were three primary anchoring points: the beginning of the trial, the landmark, and the reward point. The majority of the neurons were anchored to the landmarks, meaning that their activity would consistently peak at a specific point relative to the landmark, say 50 centimeters before it or 20 centimeters after it.
Most of those neurons responded to both of the landmarks, but a small subset responded to only one or the other. The researchers hypothesize that those strongly selective neurons help the mice to distinguish between the landmarks and run the correct distance to get the reward.
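One simple way to quantify this kind of landmark preference is a selectivity index that compares a neuron’s average responses to the two landmarks. The sketch below is purely illustrative: the response values, the index, and the 0.5 threshold are assumptions made up for the example, not the analysis or data from the study.

```python
import numpy as np

# Illustrative trial-averaged responses (spikes per second) of a few RSC neurons
# to the two landmarks; these numbers are invented, not from the paper.
responses = np.array([
    # landmark A, landmark B
    [8.0, 7.5],   # responds to both landmarks -> weakly selective
    [9.0, 1.0],   # strongly prefers landmark A
    [0.5, 6.0],   # strongly prefers landmark B
])

a, b = responses[:, 0], responses[:, 1]

# A conventional selectivity index: (A - B) / (A + B), ranging from -1 (only B)
# to +1 (only A); values near 0 indicate similar responses to both landmarks.
selectivity = (a - b) / (a + b)
strongly_selective = np.abs(selectivity) > 0.5  # arbitrary illustrative cutoff

for si, sel in zip(selectivity, strongly_selective):
    print(f"selectivity = {si:+.2f}  strongly selective: {sel}")
```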
When the researchers used optogenetics (a technique that uses light to switch neuron activity on or off) to block activity in the RSC, the mice’s performance on the task became much worse.
Combining inputs
The researchers also did an experiment in which the mice could choose to run or not while the video played at a constant speed, unrelated to the mice’s movement. The mice could still see the landmarks, but the location of the landmarks was no longer linked to a reward or to the animals’ own behavior. In that situation, RSC neurons did respond to the landmarks, but not as strongly as they did when the mice were using them for navigation.
Further experiments allowed the researchers to tease out just how much neuron activation is produced by visual input (seeing the landmarks) and by feedback on the mouse’s own movement. However, simply adding those two numbers yielded totals much lower than the neuron activity seen when the mice were actively navigating the track.
“We believe that is evidence for a mechanism of nonlinear integration of these inputs, where they get combined in a way that creates a larger response than what you would get if you just added up those two inputs in a linear fashion,” Fischer says.
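As a rough illustration of the comparison Fischer describes, the sketch below contrasts the linear sum of a hypothetical visual-only response and a hypothetical movement-only response with a larger response measured during active navigation. All numbers and the simple supralinearity index are assumptions for illustration; they are not values from the paper.

```python
# Hypothetical trial-averaged responses (spikes per second) for one RSC neuron.
# These numbers are invented for illustration, not data from the study.
visual_only = 4.0   # response to seeing the landmark passively
motor_only = 3.0    # response attributable to the animal's own running
combined = 12.0     # response measured during active, landmark-guided navigation

linear_prediction = visual_only + motor_only

# A simple supralinearity index: how much the measured response exceeds the
# linear sum, relative to that sum. Values above zero are consistent with
# nonlinear integration of the two inputs.
supralinearity = (combined - linear_prediction) / linear_prediction

print(f"linear prediction: {linear_prediction:.1f} spikes/s")
print(f"measured response: {combined:.1f} spikes/s")
print(f"supralinearity index: {supralinearity:.2f}")
```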
The researchers now plan to analyze data that they have already collected on how neuron activity evolves over time as the mice learn the task. They also hope to perform further experiments in which they could try to separately measure visual and spatial inputs into different locations within RSC neurons.
The research was funded by the National Institutes of Health, the McGovern Institute, the NEC Corporation Fund for Research in Computers and Communications at MIT, and the Klingenstein-Simons Fellowship in Neuroscience.
Why do banking crises occur?

Why did the U.S. banking crisis of 2007-2008 occur? Many accounts have chronicled the bad decisions and poor risk management at places like Lehman Brothers, the now-vanished investment bank. Still, plenty of banks have vanished, and many countries have had their own banking crises in recent decades. So, to pose the question more generally, why do modern banking crises occur?
David Singer believes he knows. An MIT professor and head of the Institute’s Department of Political Science, Singer has spent years examining global data on the subject with his colleague Mark Copelovitch, a political scientist at the University of Wisconsin at Madison.
Together, Singer and Copelovitch have identified two things, in tandem, that generate banking crises: One, a large amount of foreign investment surges into a country, and two, that country’s economy has a well-developed market in securities — especially stocks.
“Empirically, we find that systemic bank failures are more likely when substantial foreign capital inflows meet a financial system with well-developed stock markets,” says Singer. “Banks take on more risk in these environments, which makes them more prone to collapse.”
Singer and Copelovitch detail their findings in a new book, “Banks on the Brink: Global Capital, Securities Markets, and the Political Roots of Financial Crises,” published by Cambridge University Press. In it, they emphasize that the historical development of markets creates conditions ripe for crisis — it is not just a matter of a few rogue bankers engaging in excessive profit-hunting.
“There wasn’t much scholarship that explored the phenomenon from both a political and an economic perspective,” Singer adds. “We sought to go up to 30,000 feet and see what the patterns were, to explain why some banking systems were more resilient than others.”
Where the risk goes: Banks or stocks?
Through history, lending institutions have often been prone to instability. But Singer and Copelovitch examined what makes banks vulnerable under contemporary conditions. They looked at economic and banking-sector data from 1976-2011, for the 32 countries in the Organization for Economic Cooperation and Development (OECD).
That time period begins soon after the Bretton Woods system of international monetary-policy cooperation vanished, which led to a significant increase in foreign capital movement. From 1990 to 2005 alone, international capital flow increased from $1 trillion to $12 trillion annually. (It has since slid back to $5 trillion, after the Great Recession.)
Even so, a flood of capital entering a country is not enough, by itself, to send a banking sector under water, Singer says: “Why is it that some capital inflows can be accommodated and channeled productively throughout an economy, but other times they seem to lead a banking system to go awry?”
The answer, Singer and Copelovitch contend, is that a highly active stock market is a form of competition for the banking sector, to which banks respond by taking greater risks.
To see why, imagine a promising business needs capital. It could borrow funds from a bank. Or it could issue a stock offering, and raise the money from investors, as riskier firms generally do. If a lot of foreign investment enters a country, backing firms that issue stock offerings, bankers will want a piece of the action.
“Banks and stock markets are competing for the business of firms that need to raise money,” Singer says. “When stock markets are small and unsophisticated, there’s not much competition. Firms go to their banks.” However, he adds, “A bank doesn’t want to lose a good chunk of its customer base to the stock markets. … And if that happens, banks start to do business with slightly riskier firms.”
Rethinking Canadian bank stability
Exploring this point in depth, the book develops contrasting case studies of Canada and Germany. Canada is one of the few countries to remain blissfully free of banking crises — something commentators usually ascribe to sensible regulation.
However, Singer and Copelovitch observe, Canada has always had small, regional stock markets, and is the only OECD country without a national stock-market regulator.
“There’s a sense that Canada has stable banks just because they’re well-regulated,” Singer says. “That’s the conventional wisdom we’re trying to poke holes in. And I think it’s not well-understood that Canada’s stock markets are as underdeveloped as they are.”
He adds: “That’s one of the key considerations, when we analyze why Canada’s banks are so stable. They don’t face a competitive threat from stock markets the way banks in the United States do. They can be conservative and be competitive and still be profitable.”
By contrast, German banks have been involved in many banking blowups in the last two decades. At one time, that would not have been the case. But Germany’s national-scale banks, feeling pressure from a thriving set of regional banks, tried to bolster profits through securities investment, leading to some notable problems.
“Germany started off the period we study looking like a very bank-centric economy,” Singer says. “And that’s what Germany is often known for, close connections between banks and industry.” However, he notes, “The national banks started to feel a competitive threat and looked to stock markets to bolster their competitive advantage. … German banks used to be so stable and so long-term focused, and they’re now finding short-term trouble.”
“Banks on the Brink” has drawn praise from other scholars in the field. Jeffry Frieden, a professor of government at Harvard University, says the book’s “careful logic, statistical analyses, and detailed case studies make compelling reading for anyone interested in the economics and politics of finance.”
For their part, Singer and Copelovitch say they hope to generate more discussion about both the recent history of banking crises, and how to avoid them in the future.
Perhaps surprisingly, Singer believes that separating commercial and investment banks from each other — which the Glass-Steagall Act used to do in the U.S. — would not prevent crises. Any bank, not just an investment bank, can flounder if it hunts for profits in risky territory.
Instead, Singer says, “We think macroprudential regulations for banks are the way to go. That’s just about capital regulations, making sure banks are holding enough capital to absorb any losses they might incur. That seems to be the best approach to maintaining a stable banking system, especially in the face of large capital flows.”
How plants protect themselves from sun damage

For plants, sunlight can be a double-edged sword. They need it to drive photosynthesis, the process that allows them to store solar energy as sugar molecules, but too much sun can dehydrate and damage their leaves.
A primary strategy that plants use to protect themselves from this kind of photodamage is to dissipate the extra light as heat. However, there has been much debate over the past several decades over how plants actually achieve this.
“During photosynthesis, light-harvesting complexes play two seemingly contradictory roles. They absorb energy to drive water-splitting and photosynthesis, but at the same time, when there’s too much energy, they have to also be able to get rid of it,” says Gabriela Schlau-Cohen, the Thomas D. and Virginia W. Cabot Career Development Assistant Professor of Chemistry at MIT.
In a new study, Schlau-Cohen and colleagues at MIT, the University of Pavia, and the University of Verona directly observed, for the first time, one of the possible mechanisms that have been proposed for how plants dissipate energy. The researchers used a highly sensitive type of spectroscopy to determine that excess energy is transferred from chlorophyll, the pigment that gives leaves their green color, to other pigments called carotenoids, which can then release the energy as heat.
“This is the first direct observation of chlorophyll-to-carotenoid energy transfer in the light-harvesting complex of green plants,” says Schlau-Cohen, who is the senior author of the study. “That’s the simplest proposal, but no one’s been able to find this photophysical pathway until now.”
MIT graduate student Minjung Son is the lead author of the study, which appears today in Nature Communications. Other authors are Samuel Gordon ’18, Alberta Pinnola of the University of Pavia, in Italy, and Roberto Bassi of the University of Verona.
Excess energy
When sunlight strikes a plant, specialized proteins known as light-harvesting complexes absorb light energy in the form of photons, with the help of pigments such as chlorophyll. These photons drive the production of sugar molecules, which store the energy for later use.
Much previous research has shown that plants are able to quickly adapt to changes in sunlight intensity. In very sunny conditions, they convert only about 30 percent of the available sunlight into sugar, while the rest is released as heat. If this excess energy is allowed to remain in the plant cells, it creates harmful molecules called free radicals that can damage proteins and other important cellular molecules.
“Plants can respond to fast changes in solar intensity by getting rid of extra energy, but what that photophysical pathway is has been debated for decades,” Schlau-Cohen says.
The simplest hypothesis for how plants get rid of this extra energy is that once the light-harvesting complex absorbs the photons, chlorophylls pass the energy to nearby molecules called carotenoids. Carotenoids, which include lycopene and beta-carotene, are very good at getting rid of excess energy through rapid vibration. They are also skillful scavengers of free radicals, which helps to prevent damage to cells.
A similar type of energy transfer has been observed in bacterial proteins that are related to chlorophyll, but until now, it had not been seen in plants. One reason why it has been hard to observe this phenomenon is that it occurs on a very fast time scale (femtoseconds, or quadrillionths of a second). Another obstacle is that the energy transfer spans a broad range of energy levels. Until recently, existing methods for observing this process could only measure a small swath of the spectrum of visible light.
In 2017, Schlau-Cohen’s lab developed a modification to a femtosecond spectroscopic technique that allows them to look at a broader range of energy levels, spanning red to blue light. This meant that they could monitor energy transfer between chlorophylls, which absorb red light, and carotenoids, which absorb blue and green light.
In this study, the researchers used this technique to show that energy moves from an excited state, which is spread over multiple chlorophyll molecules within a light-harvesting complex, to nearby carotenoid molecules within the complex.
“By broadening the spectral bandwidth, we could look at the connection between the blue and the red ranges, allowing us to map out the changes in energy level. You can see energy moving from one excited state to another,” Schlau-Cohen says.
Once the carotenoids accept the excess energy, they release most of it as heat, preventing light-induced damage to the cells.
Boosting crop yields
The researchers performed their experiments in two different environments — one in which the proteins were in a detergent solution, and one in which they were embedded in a special type of self-assembling membrane called a nanodisc. They found that the energy transfer occurred more rapidly in the nanodisc, suggesting that environmental conditions affect the rate of energy dissipation.
It remains a mystery exactly how excess sunlight triggers this mechanism within plant cells. Schlau-Cohen’s lab is now exploring whether the organization of chlorophylls and carotenoids within the chloroplast membrane plays a role in activating the photoprotection system.
A better understanding of plants’ natural photoprotection system could help scientists develop new ways to improve crop yields, Schlau-Cohen says. A 2016 paper from University of Illinois researchers showed that by overproducing all of the proteins involved in photoprotection, crop yields could be boosted by 15 to 20 percent. That paper also suggested that production could be further increased to a theoretical maximum of about 30 percent.
“If we understand the mechanism, instead of just upregulating everything and getting 15 to 20 percent, we could really optimize the system and get to that theoretical maximum of 30 percent,” Schlau-Cohen says.
The research was funded by the U.S. Department of Energy.
Why are workers getting smaller pieces of the pie?

It’s one of the biggest economic changes in recent decades: Workers get a smaller slice of company revenue, while a larger share is paid to capital owners and distributed as profits. Or, as economists like to say, there has been a fall in labor’s share of gross domestic product, or GDP.
A new study co-authored by MIT economists uncovers a major reason for this trend: Big companies that spend more on capital and less on workers are gaining market share, while smaller firms that spend more on workers and less on capital are losing market share. That change, the researchers say, is a key reason why the labor share of GDP in the U.S. has dropped from around 67 percent in 1980 to 59 percent today, following decades of stability.
“To understand this phenomenon, you need to understand the reallocation of economic activity across firms,” says MIT economist David Autor, co-author of the paper. “That’s our key point.”
To be sure, many economists have suggested other hypotheses, including new generations of software and machines that substitute directly for workers, the effects of international trade and outsourcing, and the decline of labor union power. The current study does not entirely rule out all of those explanations, but it does highlight the importance of what the researchers term “superstar firms” as a primary factor.
“We feel this is an incredibly important and robust fact pattern that you have to grapple with,” adds Autor, the Ford Professor of Economics in MIT’s Department of Economics.
The paper, “The Fall of the Labor Share and the Rise of Superstar Firms,” appears in advance online form in the Quarterly Journal of Economics. In addition to Autor, the other authors are David Dorn, a professor of economics at the University of Zurich; Lawrence Katz, a professor of economics at Harvard University; Christina Patterson, PhD ’19, a postdoc at Northwestern University who will join the faculty at the University of Chicago’s Booth School of Business in July; and John Van Reenen, the Gordon Y. Billard Professor of Management and Economics at MIT.
An economic “miracle” vanishes
For much of the 20th century, labor’s share of GDP was notably consistent. As the authors note, John Maynard Keynes once called it “something of a miracle” in the face of economic changes, and the British economist Nicholas Kaldor included labor’s steady portion of GDP as one of his often-cited six “stylized facts” of growth.
To conduct the study, the researchers scrutinized data for the U.S. and other countries in the Organization for Economic Cooperation and Development (OECD). The scholars used U.S. Economic Census data from 1982 to 2012 to study six economic sectors that account for about 80 percent of employment and GDP: manufacturing, retail trade, wholesale trade, services, utilities and transportation, and finance. The data includes payroll, total output, and total employment.
The researchers also used information from the EU KLEMS database, housed at the Vienna Institute for International Economic Studies, to examine the other OECD countries.
The increase in market dominance for highly competitive top firms in many of those sectors is evident in the data. In the retail trade, for instance, the top four firms accounted for just under 15 percent of sales in 1981, but that grew to around 30 percent of sales in 2011. In utilities and transportation, those figures moved from 29 percent to 41 percent in the same time frame. In manufacturing, this top-four sales concentration grew from 39 percent in 1981 to almost 44 percent in 2011.
At the same time, the average payroll-to-sales ratio declined in five of those sectors — with finance being the one exception. In manufacturing, the payroll-to-sales ratio decreased from roughly 18 percent in 1981 to about 12 percent in 2011. In aggregate, the labor share of GDP declined in most periods, except from 1997 to 2002, the final years of an economic expansion with high employment.
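To make these two summary statistics concrete, the sketch below shows how a top-four sales concentration and a sector payroll-to-sales ratio can be computed from firm-level data. The firms and numbers are invented for illustration; they are not the Census figures cited above, and a real sector would contain far more firms.

```python
import numpy as np

# Toy firm-level data for one sector-year: (sales, payroll) in arbitrary units.
firms = np.array([
    [500.0, 60.0],   # largest firm
    [300.0, 40.0],
    [200.0, 30.0],
    [150.0, 25.0],
    [ 80.0, 16.0],
    [ 60.0, 14.0],
    [ 40.0, 10.0],
])

sales, payroll = firms[:, 0], firms[:, 1]

# Top-4 sales concentration: share of sector sales held by the four largest firms.
top4_concentration = np.sort(sales)[-4:].sum() / sales.sum()

# Sector payroll-to-sales ratio, a rough analogue of labor's share of output.
payroll_to_sales = payroll.sum() / sales.sum()

print(f"top-4 concentration:    {top4_concentration:.1%}")
print(f"payroll-to-sales ratio: {payroll_to_sales:.1%}")
```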
But surprisingly, labor’s share is not falling at the typical firm. Rather, reallocation of market share between firms is the key. In general, says Autor, the picture is of a “winner-take-most setting, where a smaller number of firms are accounting for a larger amount of economic activity, and those are firms where workers historically got a smaller share of the pie.”
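The logic of that reallocation argument can be illustrated with a standard shift-share-style decomposition, which splits the change in an aggregate labor share into a within-firm component and a reallocation component. The sketch below uses invented market shares and firm-level labor shares, not the paper's data or its exact decomposition; it simply shows how an aggregate labor share can fall even when every firm's own labor share rises, because low-labor-share firms gain market share.

```python
import numpy as np

# Invented two-period data for three firms in one sector (not the paper's data).
# Columns: market share of sector sales, firm-level labor share of value added.
period_0 = np.array([
    [0.20, 0.55],   # firm A: low labor share
    [0.30, 0.65],   # firm B
    [0.50, 0.70],   # firm C
])
period_1 = np.array([
    [0.45, 0.55],   # firm A gains market share while keeping a low labor share
    [0.25, 0.66],   # firm B's own labor share edges up
    [0.30, 0.71],   # firm C's own labor share edges up
])

s0, l0 = period_0[:, 0], period_0[:, 1]
s1, l1 = period_1[:, 0], period_1[:, 1]

# Aggregate labor share = market-share-weighted average of firm labor shares.
agg_0 = float(np.dot(s0, l0))
agg_1 = float(np.dot(s1, l1))
total_change = agg_1 - agg_0

# Within-firm component: changes in firm labor shares at initial market shares.
within = float(np.dot(s0, l1 - l0))
# Reallocation component: market share shifting toward firms with different labor shares.
reallocation = float(np.dot(s1 - s0, l1))

print(f"aggregate labor share: {agg_0:.3f} -> {agg_1:.3f}")
print(f"total change:  {total_change:+.3f}")
print(f"within-firm:   {within:+.3f}")        # positive: the typical firm's labor share rose
print(f"reallocation:  {reallocation:+.3f}")  # negative: low-labor-share firms gained share
```

The two components sum exactly to the total change, so in this toy example the aggregate labor share falls only because of reallocation, mirroring the "winner-take-most" pattern described above.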
A key insight provided by the study is that the dynamics within industry sectors have powered the drop in the labor share of GDP. The overall change is not just the result of, say, an increase in the deployment of technology in manufacturing, which some economists have suggested. While manufacturing is important to the big picture, the same phenomenon is unfolding across and within many sectors of the economy.
As far as testing the remaining alternate hypotheses, the study found no special pattern within industries linked to changes in trade policy — a subject Autor has studied extensively in the past. And while the decline in union power cannot be ruled out as a cause, the drop in labor share of GDP occurs even in countries where unions remain stronger than they are in the U.S.
Deserved market power, or not?
As Autor notes, there are nuances within the findings. Many “superstar” firms pay above-average wages to their employees; it is not that these firms are increasingly “squeezing” their workers, as he puts it. Rather, labor’s share of the economic value added across the industrial sectors in the study is falling because market-leading “superstar” firms are now a bigger piece of all economic activity.
On a related note, Autor suggests that the growth in market power is related to technological investment by firms in many sectors.
“We shouldn’t presume that just because a market is concentrated — with a few leading firms accounting for a large fraction of sales — it’s a market with low productivity and high prices,” Autor says. “It might be a market where you have some very productive leading firms.” Today, he adds, “more competition is platform-based competition, as opposed to simple price competition. Walmart is a platform business. Amazon is a platform business. Many tech companies are platform businesses. Many financial services companies are platform businesses. You have to make some huge investment to create a sophisticated service or set of offerings. Once that’s in place, it’s hard for your competitors to replicate.”
With this in mind, Autor says we may want to distinguish whether market concentration is “the bad kind, where lazy monopolists are jacking up prices, or the good kind, where the more competitive firms are getting a larger share. To the best we can distinguish, the rise of superstar firms appears more the latter than the former. These firms are in more innovative industries — their productivity growth has developed faster, they make more investment, they patent more. It looks like this is happening more in the frontier sectors than the laggard sectors.”
Still, Autor adds, the paper does contain policy implications for regulators.
“Once a firm is that far ahead, there’s potential for abuse,” he notes. “Maybe Facebook shouldn’t be allowed to buy all its competitors. Maybe Amazon shouldn’t be both the host of a market and a competitor in that market. This potentially creates regulatory issues we should be looking at. There’s nothing in this paper that says everyone should just take a few years off and not worry about the issue.”
“We don’t think our paper is in any sense the last word on the topic,” Autor notes. “We think it adds useful paragraphs to the conversation for everybody to listen to and grapple with. We’ve had too few facts chased by too many theories. We need more facts to allow us to adjudicate among theories.”
Support for the research project was provided by Accenture LLC, the Economic and Social Research Council, the European Research Council, IBM Global Universities Programs, the MIT Initiative on the Digital Economy, the National Science Foundation, Schmidt Futures, the Sloan Foundation, the Smith Richardson Foundation, and the Swiss National Science Foundation.