MIT Research News' Journal
 

Thursday, October 30th, 2014

    12:00a
    Harnessing error-prone chips

    As transistors get smaller, they also grow less reliable. Increasing their operating voltage can help, but that means a corresponding increase in power consumption.

    With information technology consuming a steadily growing fraction of the world’s energy supplies, some researchers and hardware manufacturers are exploring the possibility of simply letting chips botch the occasional computation. In many popular applications — video rendering, for instance — users probably wouldn’t notice the difference, and it could significantly improve energy efficiency.

    At this year’s Object-Oriented Programming, Systems, Languages and Applications (OOPSLA) conference, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory presented a new system that lets programmers identify sections of their code that can tolerate a little error. The system then determines which program instructions to assign to unreliable hardware components, to maximize energy savings while still meeting the programmers’ accuracy requirements.

    The system, dubbed Chisel, also features a tool that helps programmers evaluate precisely how much error their programs can tolerate. If 1 percent of the pixels in an image are improperly rendered, will the user notice? How about 2 percent, or 5 percent? Chisel will simulate the execution of the image-rendering algorithm on unreliable hardware as many times as the programmer requests, with as many different error rates. That takes the guesswork out of determining accuracy requirements.
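    This kind of simulation can be illustrated with a toy Monte Carlo sketch (the pixel counts, error rate, and fault model here are invented for illustration and are not Chisel's actual machinery):

```python
import random

def simulate_renders(num_pixels, error_rate, trials, seed=0):
    """Estimate how many pixels come out wrong when each pixel
    independently fails with probability `error_rate`."""
    rng = random.Random(seed)
    fractions = []
    for _ in range(trials):
        bad = sum(1 for _ in range(num_pixels) if rng.random() < error_rate)
        fractions.append(bad / num_pixels)
    return fractions

# Re-run the "render" 100 times at a 1 percent per-pixel error rate.
fractions = simulate_renders(num_pixels=10_000, error_rate=0.01, trials=100)
print(max(fractions))  # worst observed fraction of bad pixels
```

    Repeating the experiment at 2 percent or 5 percent would show a programmer what each error budget actually looks like on screen, rather than leaving the threshold to guesswork.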

    The researchers tested their system on a handful of common image-processing and financial-analysis algorithms, using a range of unreliable-hardware models culled from the research literature. In simulations, the resulting power savings ranged from 9 to 19 percent.

    Accumulating results

    The new work builds on a paper presented at last year’s OOPSLA, which described a programming language called Rely. Each paper won one of the conference’s best-paper awards.

    Rely provides the mechanism for specifying the accuracy requirements, and it features an operator — a period, or dot — that indicates that a particular instruction may be executed on unreliable hardware. In the work presented last year, programmers had to insert the dots by hand. Chisel does the insertion automatically — and guarantees that its assignment will maximize energy savings.
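    The effect of marking an instruction as unreliable can be mimicked in ordinary Python (Rely is its own language; the per-operation reliability and bit-flip fault model below are assumptions made for illustration, not Rely's actual semantics):

```python
import random

RELIABILITY = 0.999  # assumed probability that the operation succeeds

def unreliable_add(a, b, rng, reliability=RELIABILITY):
    """Toy stand-in for an addition marked with Rely's dot operator:
    usually correct, but occasionally returns a corrupted result."""
    if rng.random() < reliability:
        return a + b
    return (a + b) ^ (1 << rng.randrange(8))  # flip one random low bit

rng = random.Random(42)
results = [unreliable_add(2, 3, rng) for _ in range(10_000)]
correct = sum(r == 5 for r in results) / len(results)
print(correct)  # should land near 0.999
```

    Chisel's job, in these terms, is deciding which operations get routed through the unreliable path so that the fraction of correct results still meets the programmer's stated requirement.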

    “One of the observations from all of our previous research was that usually, the computations we analyzed spent most of their time on one or several functions that were really computationally intensive,” says Sasa Misailovic, a graduate student in electrical engineering and computer science and lead author on the new paper. “We call those computations ‘kernels,’ and we focused on them.”

    Misailovic is joined on the paper by his advisor, Martin Rinard, a professor in the Department of Electrical Engineering and Computer Science (EECS); by Sara Achour and Zichao Qi, who are also students in Rinard’s group; and by Michael Carbin, who did his PhD with Rinard and will join the EECS faculty next year.

    In practice, Misailovic says, programs generally have only a few kernels. In principle, Chisel could have been designed to find them automatically. But most developers who work on high-performance code will probably want to maintain a degree of control over what their programs are doing, Rinard says. And generally, they already use tools that make kernel identification easy.

    Combinatorial explosion

    A single kernel, however, may still consist of 100 or more instructions, any combination of which could be assigned to unreliable hardware. Manually canvassing all possible combinations and evaluating their effects on both computational accuracy and energy savings would still be a prohibitively time-consuming task.

    But the researchers developed three separate mathematical expressions that describe accuracy of computation, reliability of instruction execution, and energy savings as functions of the individual instructions. These expressions constrain the search that the system has to perform to determine which instructions to assign to unreliable hardware. That simpler — though still complex — problem is one that off-the-shelf software can handle.
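    The shape of that search problem can be sketched with made-up numbers (the paper's actual expressions and solver are not reproduced here; per-instruction savings and reliabilities below are hypothetical):

```python
from itertools import product

# Hypothetical per-instruction data: (energy saved if run unreliably,
# probability the instruction executes correctly on unreliable hardware).
instructions = [(3.0, 0.9999), (5.0, 0.999), (2.0, 0.99), (4.0, 0.9995)]
target = 0.998  # required overall reliability for the kernel

best_savings, best_choice = 0.0, None
for choice in product([0, 1], repeat=len(instructions)):
    reliability, savings = 1.0, 0.0
    for flag, (save, rel) in zip(choice, instructions):
        if flag:  # instruction assigned to unreliable hardware
            reliability *= rel
            savings += save
    if reliability >= target and savings > best_savings:
        best_savings, best_choice = savings, choice

print(best_choice, best_savings)  # third instruction is too unreliable to offload
```

    With four instructions the brute-force search is trivial, but it doubles with every added instruction; the researchers' expressions are what let an off-the-shelf solver handle kernels of 100 instructions or more.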

    “I think it’s brilliant work,” says Luis Ceze, an associate professor of computer science and engineering at the University of Washington. “All the trends point to future hardware being unreliable, because that’s one way of making it more energy-efficient and faster.”

    Ceze points out, however, that the hardware model the MIT researchers used requires specification of the reliability with which individual operations are executed. “That limits the savings that you can get from approximation,” Ceze says. “I believe in approximation that is much coarser-grained. You do a whole bunch of operations in one shot, approximately. That can take you from tens of percent to tens of [multiples] in terms of performance and energy efficiency improvement.”

    As for whether the MIT researchers’ work can be adapted to accommodate such a model, “It could be done,” Ceze says. “In fact, my group has done similar work in a different context. So it’s definitely viable.”

    12:00a
    Raising cryptography’s standards

    Most modern cryptographic schemes rely on computational complexity for their security. In principle, they can be cracked, but that would take a prohibitively long time, even with enormous computational resources.

    There is, however, another notion of security — information-theoretic security — which means that even an adversary with unbounded computational power could extract no useful information from an encrypted message. Cryptographic schemes that promise information-theoretic security have been devised, but they’re far too complicated to be practical.

    In a series of papers presented at the Allerton Conference on Communication, Control, and Computing, researchers at MIT and Maynooth University in Ireland have shown that existing, practical cryptographic schemes come with their own information-theoretic guarantees: Some of the data they encode can’t be extracted, even by a computationally unbounded adversary.

    The researchers show how to calculate the minimum-security guarantees for any given encryption scheme, which could enable information managers to make more informed decisions about how to protect data.

    “By investigating these limits and characterizing them, you can gain quite a bit of insight about the performance of these schemes and how you can leverage tools from other fields, like coding theory and so forth, for designing and understanding security systems,” says Flavio du Pin Calmon, a graduate student in electrical engineering and computer science and first author on all three Allerton papers. His advisor, Muriel Médard, the Cecil E. Green Professor of Electrical Engineering and Computer Science, is also on all three papers; they’re joined by colleagues including Ken Duffy of Maynooth and Mayank Varia of MIT’s Lincoln Laboratory.

    The researchers’ mathematical framework also applies to the problem of data privacy, or how much information can be gleaned from aggregated — and supposedly “anonymized” — data about Internet users’ online histories. If, for instance, Netflix releases data about users’ movie preferences, is it also inadvertently releasing data about their political preferences? Calmon and his colleagues’ technique could help data managers either modify aggregated data or structure its presentation in a way that minimizes the risk of privacy compromises.

    Staying close

    To get a sense of how the technique works, imagine an encryption scheme that takes only three possible inputs, or plaintexts — “A,” “B,” and “C” — and produces only three possible outputs, or ciphertexts. For each ciphertext, there is some probability that it encodes each of the three plaintexts.

    The ciphertexts can be represented as points inside a triangle whose vertices represent the three possible plaintexts. The higher the probability that a given ciphertext encodes a particular plaintext, the closer it is to the corresponding vertex: Ciphertexts more likely to encode A than B or C are closer to vertex A than to vertices B and C. A secure encryption scheme is one in which the points describing the ciphertexts are clustered together, rather than spread out around the triangle. That means that no ciphertext gives an adversary any more information about the scheme than any other.
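    The clustering intuition can be made concrete with toy numbers (the probability vectors and the choice of total-variation distance as a spread measure are illustrative assumptions, not the researchers' actual metric):

```python
# Each ciphertext is a probability vector over the plaintexts (A, B, C).
# A tight cluster means no ciphertext is more revealing than any other;
# a wide spread means some ciphertexts leak which plaintext they encode.
ciphertexts_secure = [(0.34, 0.33, 0.33), (0.32, 0.35, 0.33), (0.33, 0.33, 0.34)]
ciphertexts_leaky = [(0.90, 0.05, 0.05), (0.05, 0.90, 0.05), (0.05, 0.05, 0.90)]

def spread(points):
    """Maximum pairwise total-variation distance between the vectors."""
    return max(
        sum(abs(a - b) for a, b in zip(p, q)) / 2
        for p in points for q in points
    )

print(spread(ciphertexts_secure))  # small: ciphertexts cluster together
print(spread(ciphertexts_leaky))   # large: each ciphertext sits near a vertex
```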

    Of course, for most encrypted messages, there are way more than three possible corresponding plaintexts. Even a plaintext as simple as a nine-digit number has a billion possible values, so the probabilities corresponding to an encoded Social Security number would describe a point in a billion-dimensional space. But the general principle is the same: Schemes that yield closely clustered points are good, while schemes that don’t are not.

    An adversary wouldn’t actually know the probabilities associated with any given ciphertext. Even someone with access to an encryption scheme’s private key would have difficulty calculating them. For their analyses, Calmon, Médard, and their colleagues developed security metrics that hold for a wide range of distributions, and they augmented them with precise calculation of the worst cases — the points farthest from the center of the main cluster. But the mathematical description of the degree to which the probabilities cluster together is a direct indication of how much information an adversary could, in principle, extract from a ciphertext.
     

    Targeted protection

    In their first Allerton paper, in 2012, the researchers used this probabilistic framework to demonstrate that, while a ciphertext as a whole may not be information-theoretically secure, some of its bits could be. It should thus be possible to devise encryption schemes that can’t guarantee perfect security across the board but could provide it for particular data — say, a Social Security number.

    “Talking with cryptographers, they would always ask us, ‘Oh, cool! You can guarantee that regardless of what you do, you can hide individual symbols. What about functions of the plaintext?’” Calmon says. “Standard cryptographic definitions of security care about that.”

    An encryption scheme might, that is, guarantee that an adversary can’t extract an encoded Social Security number, but it might still allow the adversary to extract the last four digits of the number. Similarly, it might prevent an adversary from determining a subject’s age, but allow the adversary to deduce that, say, the subject is between 30 and 40 years of age.

    This is the problem that the researchers tackle in their last two Allerton papers. There, Calmon, Médard, and Varia show that if you can determine that a particular function is difficult or easy to extract from a ciphertext, then so are a host of correlated functions. In addition to addressing cryptographers’ concerns about functions of the plaintext, this approach has the advantage of not requiring analysis of massively multidimensional probability spaces. Information about the security of a single function — which can often be determined through a fairly simple analysis — can provide strong guarantees about the security of an encryption scheme as a whole.

    “Perfect secrecy is a very stringent requirement — essentially, the only way of guaranteeing that is to use a one-time pad, like they would in spy novels,” says Maxim Raginsky, an assistant professor of electrical and computer engineering at the University of Illinois at Urbana-Champaign. “Instead, let’s just accept the empirical fact that practical security systems we rely on every day do not deliver perfect secrecy. Some information about the data they try to protect will leak out. The work by Calmon, Varia, and Médard shows that there are limits to what an adversary can infer from this leaked information. Naturally, this is relevant in the age of big data.”
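    The one-time pad Raginsky mentions is simple enough to show in full — XOR each message byte with a truly random key byte, use the key once, and the ciphertext is statistically independent of the plaintext:

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR with a key as long as the message.
    Applying it twice with the same key recovers the original,
    since XOR is its own inverse."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # fresh random key, used once
ciphertext = otp(message, key)
recovered = otp(ciphertext, key)
print(recovered)
```

    The stringency lies in the key management: perfect secrecy demands a fresh random key as long as every message, which is exactly the burden practical systems trade away.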

    The mathematical techniques that the MIT researchers employed “have been used in statistical analysis,” Raginsky adds. “But the information-theoretic implications are all new. This will definitely lead to a great deal of interesting research activity.”

    7:34p
    Lambda Chi Alpha national suspends MIT chapter for at least five years

    The Lambda Chi Alpha General Fraternity (LCAGF) board of directors has voted to suspend its Lambda Zeta chapter at MIT for at least five years due to “conduct that does not support the fraternity’s priority of providing a healthy chapter environment for its members.” A representative from LCAGF headquarters met with chapter members this evening to make the announcement. In light of the suspension, MIT has withdrawn its recognition of Lambda Zeta chapter as an official student organization and announced that the chapter house will close and residents be required to leave the building this Sunday.

    In a letter to chapter members, Chris Colombo, dean for student life, announced that on-campus housing is available to members for the remainder of the semester, and that they could stay in the chapter house until Sunday under an agreement with Lambda Zeta Associates, the alumni corporation that owns the chapter house at 99 Bay State Rd. The terms of that agreement include no events or alcohol of any kind on the premises, and a prohibition against visitors except Lambda Zeta chapter members and alumni. Lambda Zeta chapter alumni will also live in the house until it closes.

    Colombo also wrote that “[i]n making its decision to close the house swiftly, MIT was concerned with protecting the interests of the overall FSILG system. Lambda Zeta chapter has demonstrated an inability to adhere to certain standards, and that has had reputational consequences for all of the other FSILGs. MIT determined that allowing Lambda Zeta chapter members to continue to live in the chapter house even after having been suspended by the national chapter would introduce further risk to the FSILG system as a whole.”

    “MIT is ready to support you through the transitions,” Colombo wrote, offering relocation help on Sunday and assistance with finding on-campus or off-campus housing for the spring. He also encouraged members who need additional emotional or academic assistance to contact Student Support Services or MIT Mental Health and Counseling.

    The LCAGF has asked that MIT consider allowing Lambda Zeta to recolonize once the suspension has been served. The Institute has indicated its willingness to discuss recolonization so long as certain conditions are fulfilled by the LCAGF and Lambda Zeta chapter’s alumni. Also, MIT will approach Lambda Zeta alumni and the IFC to work out a plan for maintaining the famous “Smoot” markings on the Harvard Bridge, which were first painted by Lambda Zeta pledges in 1958, based on the height of fellow pledge Oliver Smoot ’62.

    Chancellor Cynthia Barnhart said, “Lambda Zeta chapter of LCA has fallen short of the MIT community’s expectations, to our great disappointment. Our decision to accept the action taken by the national of LCA is not made lightly. It is my hope that this incident will reveal itself as a learning opportunity for those involved. My staff will do all it can to help the members of LCA transition to new living arrangements and to manage the disruption that this may cause them.”

