MIT Research News' Journal
 

Tuesday, October 8th, 2019

    12:00a
    A look at Japan’s evolving intelligence efforts

    Once upon a time — from the 1600s through the 1800s — Japan had a spy corps so famous we know their name today: the ninjas, intelligence agents serving the ruling Tokugawa family.

    Over the last 75 years, however, as international spying and espionage have proliferated, Japan has mostly been on the sidelines of this global game. Defeat in World War II, and demilitarization afterward, meant that Japanese intelligence services were virtually nonexistent for decades.

    Japan’s interest in spycraft has returned, however. In addition to a notable military expansion — as of last year, the country has aircraft carriers again — Japan is also ramping up its formal intelligence apparatus, as a response to what the country’s chief cabinet secretary has called “the drastically changing security environment” around it.

    “Intelligence is a critical element of any national security strategy,” says MIT political scientist Richard Samuels, a leading expert on Japanese politics and foreign policy. “It’s just a question of how robust, and openly robust, any country is willing to make it.”

    Examining the status of Japan’s intelligence efforts, then, helps us understand Japan’s larger strategic outlook and goals. And now Samuels has written a wide-ranging new history of Japan’s intelligence efforts, right up to the present. The book, “Special Duty: A History of the Japanese Intelligence Community,” is being published this week by Cornell University Press.

    “Japan didn’t have a comprehensive intelligence capability, but they’re heading in that direction,” says Samuels, who is the director of the Center for International Studies and the Ford International Professor of Political Science at MIT. As firm as Japan’s taboo on military and intelligence activity once was, he adds, “that constraint is coming undone.”

    Ruffians and freelance agents

    Aside from the ninjas, who focused on domestic affairs, Japan’s international intelligence efforts have seen a few distinct phases: a patchy early period, a big buildup before World War II, the dismantling of the system under the postwar U.S. occupation, and — especially during the current decade — a restoration of intelligence capabilities.

    Famously, Japan was closed off to much of the rest of the world until the late 19th century. It did not formally pursue international intelligence activities until the late 1860s. By the early 1900s, Japanese agents had found some success: They decoded Russian cables in the Russo-Japanese War of 1904-05 and cut off Russian raids during the conflict.

    But as Samuels details in the book, during this period Japan heavily relied on a colorful array of spies and agents working on an unofficial basis, an arrangement that gave the country “plausible deniability” in case these operatives were caught.

    “There was an interesting reliance upon scoundrels, ruffians, and freelance agents,” Samuels says.

    Some of these figures were quite successful. One agent, Uchida Ryohei, founded an espionage group, the Amur River Society (also sometimes called the Black Dragon Society), which opened its own training school, created Japan’s best battlefield maps, and conducted all manner of operations meant to limit Russian expansion. In the 1930s, another undercover agent, Doihara Kenji, became so successful at creating pro-Japanese local governments in northern China that he became known as “Lawrence of Manchuria.”

    Meanwhile, Japan’s official intelligence units had a chronic lack of coordination; they divided along military branches and between military and diplomatic bureaucracies. Still, in the decades before World War II, Japan leveraged some existing strengths in the study of foreign cultures — “The Japanese invented area studies before we did,” says Samuels — and used technological advances to make huge strides in information-gathering.

    “They had strengths, they had weaknesses, they had official intelligence, they had nonofficial intelligence, but overall that was a period of great growth in their intelligence capability,” Samuels says. “That of course comes to a crashing halt at the end of the war, when the entire military apparatus was taken down. So there was this period immediately after the war where there was no formal intelligence.”

    Japan’s subsequent postwar political reorientation toward the U.S. created many advantages for the country but was simultaneously a source of frustration to some. The country became an economic powerhouse while lacking the same covert capabilities as other countries.  

    “The Cold War was a period in which many Japanese in the intelligence world had to accommodate to American power, and resented it,” Samuels says. “They had economic intelligence capability. They were very good at doing foreign economic analysis and were all over the world, but they were underperforming on the diplomatic and military fronts.”

    The Asian pivot

    In “Special Duty,” Samuels suggests three main reasons why any country reforms its intelligence services: shifts in the strategic environment, technological innovations, and intelligence failures. The first of these seems principally responsible for the current revival of Japan’s intelligence operations.

    As Samuels notes, some Japanese officials wanted to change the country’s intelligence structure during the 1980s — to little avail. The end of the Cold War, and the more complicated geopolitical map that resulted, provided a more compelling rationale for doing so, without producing many tangible results.

    Instead, more recent events in Asia have had a much bigger impact in Japan: namely, North Korean missile testing and China’s massive surge in economic and military power. In 2005, Samuels notes, Japan’s GDP was still twice that of China. A decade later, China’s economy was two and a half times as large as Japan’s, and its military budget was twice as big. U.S. power relative to China has also declined. Those developments have altered Japanese security priorities.

    “There’s been a Japanese pivot in Asia,” Samuels notes. “That’s really very important.” Moreover, he adds, from the Japanese perspective, “The question about China is obvious. Is its rise going to be disjunctive, or is it going to be stabilizing?”

    These regional changes have led Japan to chart a course of greater confidence in foreign policy — reflected in its growing intelligence function. Since 2013 in particular, after Prime Minister Shinzo Abe took office for a second time, Japan has built up its own intelligence function as never before, making operations more unified and better-supported. Japan still coordinates extensively with the U.S. in some areas of intelligence but is also taking intelligence matters into its own hands, in a way not seen for several decades.

    As Samuels notes, Japan’s increasing foreign-policy independence is also supported by voters.

    “Japanese public opinion has changed,” Samuels says. “They see the issues now, they talk about it now. Used to be, you couldn’t talk about intelligence in polite company. But people talk about it now, and they’re much more willing to go forward.”

    “Special Duty” has been praised by other scholars in the field of Japanese security studies and foreign policy. Sheila Smith of the Council on Foreign Relations in Washington calls it a “truly wonderful book” that “offers much needed insight to academics and policymakers alike as they seek to understand the changes in Japan's security choices.”

    By looking at intelligence issues in this way, Samuels has also traced larger contours in Japanese history: first, an opening up to the world, then the alignment with the U.S. in the postwar world, and now a move toward greater capabilities. On the intelligence front, those capabilities include enhanced analysis and streamlined relations across units, heading toward the full spectrum of functions seen in the other major states.  

    “It’s been the assumption that the Japanese just don’t do [intelligence activities], except economics,” Samuels reflects. “Well, I hope after people see this book they will understand that’s no longer the case, and hasn’t been for some time.”

    12:00p
    Alzheimer’s plaque emerges early and deep in the brain

    Long before symptoms like memory loss even emerge, the underlying pathology of Alzheimer’s disease, such as an accumulation of amyloid protein plaques, is well underway in the brain. A longtime goal of the field has been to understand where it starts so that future interventions could begin there. A new study by MIT neuroscientists at The Picower Institute for Learning and Memory could help those efforts by pinpointing the regions with the earliest emergence of amyloid in the brain of a prominent mouse model of the disease. Notably, the study also shows that the degree of amyloid accumulation in one of those same regions of the human brain correlates strongly with the progression of the disease.

    “Alzheimer’s is a neurodegenerative disease, so in the end you can see a lot of neuron loss,” says Wen-Chin “Brian” Huang, co-lead author of the study and a postdoc in the lab of co-senior author Li-Huei Tsai, Picower Professor of Neuroscience and director of the Picower Institute. “At that point, it would be hard to cure the symptoms. It’s really critical to understand what circuits and regions show neuronal dysfunction early in the disease. This will, in turn, facilitate the development of effective therapeutics.”

    In addition to Huang, the study’s co-lead authors are Rebecca Canter, a former member of the Tsai lab, and Heejin Choi, a former member of the lab of co-senior author Kwanghun Chung, associate professor of chemical engineering and a member of the Picower Institute and the MIT Institute for Medical Engineering and Science.

    Tracking plaques

    Many research groups have made progress in recent years by tracing amyloid’s path in the brain using technologies such as positron emission tomography (PET), and by looking at brains post-mortem, but the new study in Communications Biology adds substantial new evidence from the 5XFAD mouse model because it presents an unbiased look at the entire brain as early as one month of age. The study reveals that amyloid begins its terrible march in deep brain regions such as the mammillary body, the lateral septum, and the subiculum before making its way along specific brain circuits that ultimately lead it to the hippocampus, a key region for memory, and the cortex, a key region for cognition.

    The team used SWITCH, a technology developed by Chung, to label amyloid plaques and to clarify the whole brains of 5XFAD mice so that they could be imaged in fine detail at different ages. The team was consistently able to see that plaques first emerged in the deep brain structures and then tracked along circuits, such as the Papez memory circuit, to spread throughout the brain by six to 12 months (a mouse’s lifespan is up to three years).

    The findings help to cement an understanding that has been harder to obtain from human brains, Huang says, because post-mortem dissection cannot easily account for how the disease developed over time and PET scans don’t offer the kind of resolution the new study provides from the mice.

    Key validations

    Importantly, the team directly validated a key prediction of their mouse findings in human tissue: If the mammillary body is indeed a very early place that amyloid plaques emerge, then the density of those plaques should increase in proportion with how far advanced the disease is. Sure enough, when the team used SWITCH to examine the mammillary bodies of post-mortem human brains at different stages of the disease, they saw exactly that relationship: The later the stage, the more densely plaque-packed the mammillary body was.

    “This suggests that human brain alterations in Alzheimer’s disease look similar to what we observe in mouse,” the authors wrote. “Thus we propose that amyloid-beta deposits start in susceptible subcortical structures and spread to increasingly complex memory and cognitive networks with age.”

    The team also performed experiments to determine whether the accumulation of plaques they observed was of real disease-related consequence for neurons in affected regions. One of the hallmarks of Alzheimer’s disease is a vicious cycle in which amyloid makes neurons too easily excited, and overexcitement causes neurons to produce more amyloid. The team measured the excitability of neurons in the mammillary body of 5XFAD mice and found they were more excitable than those of otherwise similar mice that did not harbor the 5XFAD set of genetic alterations.

    In a preview of a potential future therapeutic strategy, when the researchers used a genetic approach to silence the neurons in the mammillary body of some 5XFAD mice but left neurons in others unaffected, the mice with silenced neurons produced less amyloid.

    While the study findings help explain much about how amyloid spreads in the brain over space and time, they also raise new questions, Huang says. How might the mammillary body affect memory, and what types of cells are most affected there?

    “This study sets a stage for further investigation of how dysfunction in these brain regions and circuits contributes to the symptoms of Alzheimer’s disease,” he says.

    In addition to Huang, Canter, Choi, Tsai, and Chung, the paper’s other authors are Jun Wang, Lauren Ashley Watson, Christine Yao, Fatema Abdurrob, Stephanie Bousleiman, Jennie Young, David Bennett and Ivana Dellalle.

    The National Institutes of Health, the JPB Foundation, Norman B. Leventhal and Barbara Weedon fellowships, The Burroughs Wellcome Fund, the Searle Scholars Program, a Packard Award, a NARSAD Young Investigator Award, and the NCSOFT Cultural Foundation funded the research.

    11:59p
    Using machine learning to hunt down cybercriminals

    Hijacking IP addresses is an increasingly popular form of cyber-attack. This is done for a range of reasons, from sending spam and malware to stealing Bitcoin. It’s estimated that in 2017 alone, routing incidents such as IP hijacks affected more than 10 percent of all the world’s routing domains. There have been major incidents at Amazon and Google and even in nation-states — a study last year suggested that a Chinese telecom company used the approach to gather intelligence on western countries by rerouting their internet traffic through China.

    Existing efforts to detect IP hijacks tend to look at specific cases when they’re already in process. But what if we could predict these incidents in advance by tracing things back to the hijackers themselves?  

    That’s the idea behind a new machine-learning system developed by researchers at MIT and the University of California at San Diego (UCSD). By illuminating some of the common qualities of what they call “serial hijackers,” the team trained their system to be able to identify roughly 800 suspicious networks — and found that some of them had been hijacking IP addresses for years. 

    “Network operators normally have to handle such incidents reactively and on a case-by-case basis, making it easy for cybercriminals to continue to thrive,” says lead author Cecilia Testart, a graduate student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) who will present the paper at the ACM Internet Measurement Conference in Amsterdam on Oct. 23. “This is a key first step in being able to shed light on serial hijackers’ behavior and proactively defend against their attacks.”

    The paper is a collaboration between CSAIL and the Center for Applied Internet Data Analysis at UCSD’s Supercomputer Center. It was written by Testart and David Clark, an MIT senior research scientist, alongside MIT postdoc Philipp Richter and data scientist Alistair King, as well as research scientist Alberto Dainotti of UCSD.

    The nature of nearby networks

    IP hijackers exploit a key shortcoming in the Border Gateway Protocol (BGP), a routing mechanism that essentially allows different parts of the internet to talk to each other. Through BGP, networks exchange routing information so that data packets find their way to the correct destination. 

    In a BGP hijack, a malicious actor convinces nearby networks that the best path to reach a specific IP address is through their network. That’s unfortunately not very hard to do, since BGP itself doesn’t have any security procedures for validating that a message is actually coming from the place it says it’s coming from.

    “It’s like a game of Telephone, where you know who your nearest neighbor is, but you don’t know the neighbors five or 10 nodes away,” says Testart.
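
    To make that concrete, the toy example below (not from the paper) sketches one common variant, a more-specific or "sub-prefix" hijack: routers forward traffic along the most specific matching prefix, and plain BGP gives them no way to verify that the announcing network actually controls that prefix. The prefixes, AS numbers, and routing table here are invented purely for illustration.

        import ipaddress

        # Toy routing table: each announced prefix maps to the autonomous system
        # (AS) claiming to originate it. All ASNs and prefixes are made up.
        routing_table = {
            ipaddress.ip_network("203.0.113.0/24"): 64500,   # legitimate owner
        }

        def best_route(dst, table):
            """Longest-prefix match: pick the most specific prefix containing dst."""
            candidates = [p for p in table if dst in p]
            return max(candidates, key=lambda p: p.prefixlen, default=None)

        dst = ipaddress.ip_address("203.0.113.42")
        print("before hijack ->", routing_table[best_route(dst, routing_table)])  # AS 64500

        # A hijacker (AS 64666) announces a more-specific sub-prefix. Because BGP
        # itself performs no origin validation, neighbors accept and prefer it.
        routing_table[ipaddress.ip_network("203.0.113.0/25")] = 64666
        print("after hijack  ->", routing_table[best_route(dst, routing_table)])  # AS 64666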

    In 1998 the U.S. Senate's first-ever cybersecurity hearing featured a team of hackers who claimed that they could use IP hijacking to take down the Internet in under 30 minutes. Dainotti says that, more than 20 years later, the lack of deployment of security mechanisms in BGP is still a serious concern.

    To better pinpoint serial attacks, the group first pulled data from several years’ worth of network operator mailing lists, as well as historical BGP data taken every five minutes from the global routing table. From that, they observed particular qualities of malicious actors and then trained a machine-learning model to automatically identify such behaviors.

    The system flagged networks that had several key characteristics, particularly with respect to the nature of the specific blocks of IP addresses they use (a rough sketch of how such features might be computed follows the list):

    • Volatile changes in activity: Hijackers’ address blocks seem to disappear much faster than those of legitimate networks. The average duration of a flagged network’s prefix was under 50 days, compared to almost two years for legitimate networks.
    • Multiple address blocks: Serial hijackers tend to advertise many more blocks of IP addresses, also known as “network prefixes.”
    • IP addresses in multiple countries: Most networks don’t have foreign IP addresses. In contrast, the address blocks that serial hijackers advertised were much more likely to be registered in multiple countries and continents.
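
    As a rough sketch of how such behavioral signals might feed a classifier, the fragment below turns a network’s historical prefix announcements into the three features above and fits an off-the-shelf model. The record schema, the training data, and the choice of a random forest are all assumptions made for illustration; the actual system described in the paper is considerably more involved.

        from dataclasses import dataclass
        from typing import List
        from sklearn.ensemble import RandomForestClassifier  # model choice is an assumption

        @dataclass
        class PrefixRecord:
            """One advertised address block seen in historical BGP data (invented schema)."""
            prefix: str
            days_visible: float   # how long the announcement stayed in the routing table
            country: str          # registration country of the address block

        def network_features(prefixes: List[PrefixRecord]) -> List[float]:
            """Per-network features mirroring the three characteristics listed above."""
            avg_duration = sum(p.days_visible for p in prefixes) / len(prefixes)
            num_prefixes = len(prefixes)
            num_countries = len({p.country for p in prefixes})
            return [avg_duration, num_prefixes, num_countries]

        # Toy training set: one feature vector per network; labels mark networks that
        # operators reported as serial hijackers. Every value here is invented.
        X = [
            network_features([PrefixRecord("198.51.100.0/24", 700, "US")]),
            network_features([PrefixRecord("203.0.113.0/24", 30, "US"),
                              PrefixRecord("192.0.2.0/24", 25, "PA"),
                              PrefixRecord("198.18.0.0/24", 12, "RO")]),
        ]
        y = [0, 1]  # 0 = legitimate, 1 = serial hijacker

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        new_net = network_features([PrefixRecord("203.0.113.128/25", 20, "CN"),
                                    PrefixRecord("198.18.1.0/24", 15, "US")])
        print(clf.predict([new_net]))  # classify an unseen network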

    Identifying false positives

    Testart says that one challenge in developing the system was that events that look like IP hijacks can often be the result of human error, or otherwise legitimate. For example, a network operator might use BGP to defend against distributed denial-of-service attacks in which there are huge amounts of traffic going to their network. Modifying the route is a legitimate way to shut down the attack, but it looks virtually identical to an actual hijack.

    Because of this issue, the team often had to manually jump in to identify false positives, which accounted for roughly 20 percent of the cases identified by their classifier. Moving forward, the researchers are hopeful that future iterations will require minimal human supervision and could eventually be deployed in production environments.

    “The authors' results show that past behaviors are clearly not being used to limit bad behaviors and prevent subsequent attacks,” says David Plonka, a senior research scientist at Akamai Technologies who was not involved in the work. “One implication of this work is that network operators can take a step back and examine global Internet routing across years, rather than just myopically focusing on individual incidents.”

    As people increasingly rely on the Internet for critical transactions, Testart says that she expects IP hijacking’s potential for damage to only get worse. But she is also hopeful that it could be made more difficult by new security measures. In particular, large backbone networks such as AT&T have recently announced the adoption of resource public key infrastructure (RPKI), a mechanism that uses cryptographic certificates to ensure that a network announces only its legitimate IP addresses. 
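
    Stripped of the cryptography, RPKI-based route origin validation amounts to checking each announcement against a set of signed route origin authorizations (ROAs), each stating that a given AS may originate a given prefix up to a maximum length. The sketch below shows only that lookup step, under invented ROA entries, prefixes, and AS numbers.

        import ipaddress

        # A ROA says: this AS may originate this prefix, up to this maximum prefix length.
        # Entries here are invented; real ROAs are cryptographically signed objects.
        ROAS = [
            (ipaddress.ip_network("203.0.113.0/24"), 64500, 24),
        ]

        def validate(prefix: str, origin_asn: int) -> str:
            """Simplified origin validation: 'valid', 'invalid', or 'not-found'."""
            net = ipaddress.ip_network(prefix)
            covering = [(p, asn, maxlen) for p, asn, maxlen in ROAS if net.subnet_of(p)]
            if not covering:
                return "not-found"   # no ROA covers this prefix
            if any(asn == origin_asn and net.prefixlen <= maxlen
                   for _, asn, maxlen in covering):
                return "valid"
            return "invalid"         # covered by a ROA, but wrong origin or too specific

        print(validate("203.0.113.0/24", 64500))   # valid: matches the ROA
        print(validate("203.0.113.0/25", 64500))   # invalid: more specific than allowed
        print(validate("203.0.113.0/24", 64666))   # invalid: unauthorized origin AS
        print(validate("198.51.100.0/24", 64500))  # not-found: no ROA covers it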

    “This project could nicely complement the existing best solutions to prevent such abuse that include filtering, antispoofing, coordination via contact databases, and sharing routing policies so that other networks can validate it,” says Plonka. “It remains to be seen whether misbehaving networks will continue to be able to game their way to a good reputation. But this work is a great way to either validate or redirect the network operator community's efforts to put an end to these present dangers.”

    The project was supported, in part, by the MIT Internet Policy Research Initiative, the William and Flora Hewlett Foundation, the National Science Foundation, the Department of Homeland Security, and the Air Force Research Laboratory.
