MIT Research News' Journal

Wednesday, February 3rd, 2016

    Hack-proof RFID chips

    Researchers at MIT and Texas Instruments have developed a new type of radio frequency identification (RFID) chip that is virtually impossible to hack.

    If such chips were widely adopted, an identity thief couldn’t steal your credit card number or key card information by sitting next to you at a café, and high-tech burglars couldn’t swipe expensive goods from a warehouse and replace them with dummy tags.

    Texas Instruments has built several prototypes of the new chip, to the researchers’ specifications, and in experiments the chips have behaved as expected. The researchers presented their research this week at the International Solid-State Circuits Conference, in San Francisco.

    According to Chiraag Juvekar, a graduate student in electrical engineering at MIT and first author on the new paper, the chip is designed to prevent so-called side-channel attacks. Side-channel attacks analyze patterns of memory access or fluctuations in power usage when a device is performing a cryptographic operation, in order to extract its cryptographic key.

    “The idea in a side-channel attack is that a given execution of the cryptographic algorithm only leaks a slight amount of information,” Juvekar says. “So you need to execute the cryptographic algorithm with the same secret many, many times to get enough leakage to extract a complete secret.”

    One way to thwart side-channel attacks is to regularly change secret keys. In that case, the RFID chip would run a random-number generator that would spit out a new secret key after each transaction. A central server would run the same generator, and every time an RFID scanner queried the tag, it would relay the results to the server, to see if the current key was valid.
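
    To make the rotation scheme concrete, here is a minimal Python sketch of synchronized key rotation. It is an illustration only: the hash-chain construction, the HMAC responses, and all of the names are assumptions, not details of the researchers' protocol.

        # Illustrative sketch: a SHA-256 hash chain stands in for the shared
        # random-number generator, which the article does not specify.
        import hashlib
        import hmac

        def next_key(key: bytes) -> bytes:
            return hashlib.sha256(key).digest()

        class Tag:
            def __init__(self, seed: bytes):
                self.key = seed

            def respond(self, challenge: bytes) -> bytes:
                # Authenticate with the current key, then rotate it.
                mac = hmac.new(self.key, challenge, hashlib.sha256).digest()
                self.key = next_key(self.key)
                return mac

        class Server:
            def __init__(self, seed: bytes):
                self.key = seed

            def verify(self, challenge: bytes, mac: bytes) -> bool:
                expected = hmac.new(self.key, challenge, hashlib.sha256).digest()
                self.key = next_key(self.key)  # stay in step with the tag
                return hmac.compare_digest(expected, mac)

        tag, server = Tag(b"shared seed"), Server(b"shared seed")
        challenge = b"reader nonce"
        assert server.verify(challenge, tag.respond(challenge))

    Because both sides advance the key after every transaction, an eavesdropper sees each key in use only once, which is far too little leakage for a side-channel attack to succeed unless the attacker can stop the key from changing.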

    Blackout

    Such a system would still, however, be vulnerable to a “power glitch” attack, in which the RFID chip’s power would be repeatedly cut right before it changed its secret key. An attacker could then run the same side-channel attack thousands of times, with the same key. Power-glitch attacks have been used to circumvent limits on the number of incorrect password entries in password-protected devices, but RFID tags are particularly vulnerable to them, since they’re charged by tag readers and have no onboard power supplies.

    Two design innovations allow the MIT researchers’ chip to thwart power-glitch attacks: One is an on-chip power supply whose connection to the chip circuitry would be virtually impossible to cut, and the other is a set of “nonvolatile” memory cells that can store whatever data the chip is working on when it begins to lose power.

    For both of these features, the researchers — Juvekar; Anantha Chandrakasan, who is Juvekar’s advisor and the Vannevar Bush Professor of Electrical Engineering and Computer Science; Hyung-Min Lee, who was a postdoc in Chandrakasan’s group when the work was done and is now at IBM; and TI’s Joyce Kwong, who did her master’s degree and PhD with Chandrakasan — use a special type of material known as a ferroelectric crystal.

    As a crystal, a ferroelectric material consists of molecules arranged into a regular three-dimensional lattice. In every cell of the lattice, positive and negative charges naturally separate, producing electrical polarization. The application of an electric field, however, can align the cells’ polarization in either of two directions, which can represent the two possible values of a bit of information.

    When the electric field is removed, the cells maintain their polarization. Texas Instruments and other chip manufacturers have been using ferroelectric materials to produce nonvolatile memory, or computer memory that retains data when it’s powered off.

    Complementary capacitors

    A ferroelectric crystal can also be thought of as a capacitor, an electrical component that separates charges and is characterized by the voltage between its negative and positive poles. Texas Instruments’ manufacturing process can produce ferroelectric cells with either of two voltages: 1.5 volts or 3.3 volts.

    The researchers’ new chip uses a bank of 3.3-volt capacitors as an on-chip energy source. But it also features 571 1.5-volt cells that are discretely integrated into the chip’s circuitry. When the chip’s power source — the external scanner — is removed, the chip taps the 3.3-volt capacitors and completes as many operations as it can, then stores the data it’s working on in the 1.5-volt cells.

    When power returns, before doing anything else the chip recharges the 3.3-volt capacitors, so that if it’s interrupted again, it will have enough power to store data. Then it resumes its previous computation. If that computation was an update of the secret key, it will complete the update before responding to a query from the scanner. Power-glitch attacks won’t work.
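
    The recover-then-respond logic amounts to a small state machine, sketched below in Python. The names and the energy bookkeeping are invented for illustration; the real chip implements this behavior in hardware with its ferroelectric capacitors and cells.

        # Illustrative state machine for the behavior described above; the
        # names and energy model are invented, not taken from the paper.
        class GlitchResistantTag:
            def __init__(self):
                self.nvm = {"key_update_pending": False, "checkpoint": None}  # 1.5 V cells
                self.reserve_full = True  # the 3.3 V capacitor bank

            def on_power_loss(self, partial_state):
                # Tap the 3.3 V reserve, finish what fits, checkpoint the rest.
                self.nvm["checkpoint"] = partial_state
                self.reserve_full = False

            def on_power_up(self):
                # Recharge the reserve first, so another glitch cannot strand
                # the chip mid-write; only then resume the interrupted work.
                self.reserve_full = True
                if self.nvm["key_update_pending"]:
                    self.complete_key_update()

            def complete_key_update(self):
                self.nvm["key_update_pending"] = False

            def answer_query(self, challenge):
                # Never respond while a key update is half-done, so a glitch
                # cannot force the same key to be used twice.
                assert not self.nvm["key_update_pending"]
                return b"response to " + challenge

        tag = GlitchResistantTag()
        tag.on_power_up()  # safe even immediately after a glitch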

    Because the chip has to charge capacitors and complete computations every time it powers on, it’s somewhat slower than conventional RFID chips. But in tests, the researchers found that they could get readouts from their chips at a rate of 30 per second, which should be more than fast enough for most RFID applications.

    “In the age of ubiquitous connectivity, security is one of the paramount challenges we face,” says Ahmad Bahai, chief technology officer at Texas Instruments. “Because of this, Texas Instruments sponsored the authentication tag research at MIT that is being presented at ISSCC. We believe this research is an important step toward the goal of a robust, low-cost, low-power authentication protocol for the industrial Internet.”

    The MIT researchers' work was also funded by the Japanese automotive company Denso.

    Energy-friendly chip can perform powerful artificial-intelligence tasks

    In recent years, some of the most exciting advances in artificial intelligence have come courtesy of convolutional neural networks, large virtual networks of simple information-processing units, which are loosely modeled on the anatomy of the human brain.

    Neural networks are typically implemented using graphics processing units (GPUs), special-purpose graphics chips found in all computing devices with screens. A mobile GPU, of the type found in a cell phone, might have almost 200 cores, or processing units, making it well suited to simulating a network of distributed processors.

    At the International Solid-State Circuits Conference in San Francisco this week, MIT researchers presented a new chip designed specifically to implement neural networks. It is 10 times as efficient as a mobile GPU, so it could enable mobile devices to run powerful artificial-intelligence algorithms locally, rather than uploading data to the Internet for processing.

    Neural nets were widely studied in the early days of artificial-intelligence research, but by the 1970s, they’d fallen out of favor. In the past decade, however, they’ve enjoyed a revival, under the name “deep learning.”

    “Deep learning is useful for many applications, such as object recognition, speech, face detection,” says Vivienne Sze, the Emanuel E. Landsman Career Development Assistant Professor in MIT's Department of Electrical Engineering and Computer Science, whose group developed the new chip. “Right now, the networks are pretty complex and are mostly run on high-power GPUs. You can imagine that if you can bring that functionality to your cell phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection. You might also want to process locally for privacy reasons. Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications.”

    The new chip, which the researchers dubbed “Eyeriss,” could also help usher in the “Internet of things” — the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock would have sensors that report information directly to networked servers, aiding with maintenance and task coordination. With powerful artificial-intelligence algorithms on board, networked devices could make important decisions locally, entrusting only their conclusions, rather than raw personal data, to the Internet. And, of course, onboard neural networks would be useful to battery-powered autonomous robots.

    Division of labor

    A neural network is typically organized into layers, and each layer contains a large number of processing nodes. Data come in and are divided up among the nodes in the bottom layer. Each node manipulates the data it receives and passes the results on to nodes in the next layer, which manipulate the data they receive and pass on the results, and so on. The output of the final layer yields the solution to some computational problem.
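
    A drastically simplified Python sketch of that layered flow (illustrative only; it assumes nothing about the specific networks in the paper):

        # Minimal feedforward pass: each node weights its inputs, applies a
        # nonlinearity, and hands the result to the next layer.
        import math

        def forward(layers, inputs):
            """layers: one weight matrix per layer, one row per node."""
            activations = inputs
            for weights in layers:
                activations = [
                    math.tanh(sum(w * a for w, a in zip(row, activations)))
                    for row in weights
                ]
            return activations  # the final layer's output is the answer

        # Three inputs -> two hidden nodes -> one output node.
        net = [[[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [[1.0, -1.0]]]
        print(forward(net, [0.2, 0.7, -0.1]))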

    In a convolutional neural net, many nodes in each layer process the same data in different ways. The networks can thus swell to enormous proportions. Although they outperform more conventional algorithms on many visual-processing tasks, they require much greater computational resources.
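
    A one-dimensional toy example (again illustrative, not the networks used in the paper) shows the convolutional pattern: several filters, each in effect a node with its own weights, sweep across the same input.

        # Toy 1-D convolution: many nodes process the same data in different
        # ways, which is why convolutional nets grow so large.
        def convolve(signal, kernel):
            k = len(kernel)
            return [sum(kernel[j] * signal[i + j] for j in range(k))
                    for i in range(len(signal) - k + 1)]

        signal = [1.0, 2.0, 3.0, 4.0, 5.0]
        filters = [[1, -1], [0.5, 0.5], [1, 0]]  # three nodes, same input
        print([convolve(signal, f) for f in filters])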

    The particular manipulations performed by each node in a neural net are the result of a training process, in which the network tries to find correlations between raw data and labels applied to it by human annotators. With a chip like the one developed by the MIT researchers, a trained network could simply be exported to a mobile device.
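
    Exporting a trained network then amounts to shipping its weights to the device. A toy sketch (the JSON format and the file name are invented for illustration):

        # Toy export/import of trained weights; format and name are invented.
        import json

        def export_network(layers, path="model.json"):
            with open(path, "w") as f:
                json.dump(layers, f)

        def load_network(path="model.json"):
            with open(path) as f:
                return json.load(f)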

    This application imposes competing constraints on the chip’s design. On the one hand, the way to lower the chip’s power consumption and increase its efficiency is to make each processing unit as simple as possible; on the other, the chip has to be flexible enough to implement different types of networks tailored to different tasks.

    Sze and her colleagues — Yu-Hsin Chen, a graduate student in electrical engineering and computer science and first author on the conference paper; Joel Emer, a professor of the practice in MIT’s Department of Electrical Engineering and Computer Science, a senior distinguished research scientist at the chip manufacturer Nvidia, and, with Sze, one of the project’s two principal investigators; and Tushar Krishna, who was a postdoc with the Singapore-MIT Alliance for Research and Technology when the work was done and is now an assistant professor of electrical and computer engineering at Georgia Tech — settled on a chip with 168 cores, roughly as many as a mobile GPU has.

    Act locally

    The key to Eyeriss’s efficiency is to minimize the frequency with which cores need to exchange data with distant memory banks, an operation that consumes a good deal of time and energy. Whereas many of the cores in a GPU share a single, large memory bank, each of the Eyeriss cores has its own memory. Moreover, the chip has a circuit that compresses data before sending it to individual cores.

    Each core is also able to communicate directly with its immediate neighbors, so that if they need to share data, they don’t have to route it through main memory. This is essential in a convolutional neural network, in which so many nodes are processing the same data.

    The final key to the chip’s efficiency is special-purpose circuitry that allocates tasks across cores. In its local memory, a core needs to store not only the data manipulated by the nodes it’s simulating but data describing the nodes themselves. The allocation circuit can be reconfigured for different types of networks, automatically distributing both types of data across cores in a way that maximizes the amount of work that each of them can do before fetching more data from main memory.
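
    A toy accounting sketch shows why this matters. The cost numbers below are invented, and the model is far cruder than the chip's actual allocation logic; only the ratio between off-chip and on-chip access costs is the point.

        # Invented cost model: an off-chip fetch is assumed to cost far more
        # energy than a local one; the exact numbers are made up.
        COST_MAIN = 200  # energy units per fetch from a distant memory bank
        COST_LOCAL = 1   # per access to a core's own (or a neighbor's) memory

        def all_off_chip(cores, accesses):
            # Every operand comes from the shared main memory.
            return cores * accesses * COST_MAIN

        def with_local_reuse(cores, accesses, reuse):
            # Each operand is fetched once, then reused `reuse` times locally.
            return cores * (accesses // reuse * COST_MAIN + accesses * COST_LOCAL)

        print(all_off_chip(168, 10_000))           # 336000000
        print(with_local_reuse(168, 10_000, 100))  # 5040000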

    At the conference, the MIT researchers used Eyeriss to implement a neural network that performs an image-recognition task, the first time that a state-of-the-art neural network has been demonstrated on a custom chip.

    “This work is very important, showing how embedded processors for deep learning can provide power and performance optimizations that will bring these complex computations from the cloud to mobile devices,” says Mike Polley, a senior vice president at Samsung’s Mobile Processor Innovations Lab. “In addition to hardware considerations, the MIT paper also carefully considers how to make the embedded core useful to application developers by supporting industry-standard [network architectures] AlexNet and Caffe.”

    The MIT researchers' work was funded in part by DARPA.
