Data Center Knowledge | News and analysis for the data center industry
Wednesday, August 3rd, 2016

7:49p |
White House to Fund Tech Growth ‘Beyond Moore’s Law’
On the one-year anniversary of the White House’s launch of its National Strategic Computing Initiative (NSCI), the initiative’s Executive Council published a report detailing new and revised topics for collaboration among academic research, the private sector, and government agencies. In that report, the NSCI presents its strategy for finding a new way to resume the growth in computing power first described by Intel co-founder Gordon Moore in 1965.
Strategic Objective 3, as the NSCI report presents it, outlines a moon shot-like goal of “establishing, over the next 15 years, a viable path forward for future HPC (high-performance computing) systems even after the limits of current semiconductor technology are reached (the post-Moore’s Law era).”
Simply stating the objective this way is a concession by the federally funded initiative that, given the current constraints of physics, transistors cannot keep being “crammed” (Moore’s word for it) onto semiconductor dies much more densely than they already are. Intel, which currently produces Xeon processors on a 14 nm process, had a roadmap in place for 10 nm lithography, followed by 7 nm and 5 nm.
But last year the company had to push the 10 nm phase back to 2017. And last March it was forced to retire its famous “tick-tock” production cadence in favor of a strategy built on partnerships with software and systems developers to find new ways to increase performance, at something approaching the levels data centers have come to expect.
The NSCI is all in favor of partnership and collaboration. But with respect to finding a new track for sustained performance growth over the long haul, it’s looking to principles that, as of today, still sound like science fiction.
“The NSCI envisions a more heterogeneous future computing environment, where digital (von Neumann-based) computing is augmented by systems implementing alternative computing paradigms to efficiently solve specific classes of problems,” reads the group’s current report. “These alternative computational paradigms — whether quantum, neuromorphic or other alternatives — may solve some classes of problems that are intractable with digital computing, and provide more efficient solutions for some other classes of problems.”
Just four weeks ago, a consortium of the world’s semiconductor associations, including the U.S. Semiconductor Industry Association (SIA), issued its 2015 International Technology Roadmap for Semiconductors. (You might say they were a tad late.) In contrast to the NSCI, which has clearly assumed Moore’s Law has run its course, the ITRS consortium asserts that producers will be able to pick up the physics train from where it left off, by exploring the third dimension.
“Moore’s Law is now entering into a third phase,” the ITRS report reads, “characterized by vertical integration and performance specifications driven towards reduction of power in either the active or the stand-by nodes. . . In the next decade, ITRS 2.0 predicts that the advent of the third phase of scaling — ‘3D Power Scaling’ — will become the driver of the rejuvenated semiconductor industry.”
Examples of this concept already in production include the Hybrid Memory Cube, a stacked-DRAM design Micron Technology developed with Intel, and 3D XPoint, the non-volatile memory technology the two companies announced jointly in 2015. Micron is also putting vertical integration to work in NAND flash storage devices and solid-state drives with its 3D NAND technology. Here, layers of 2D memory are linked to one another by vias, pathways through which electrons traverse the various layers [depicted at top]. It’s a surprisingly simple concept.
But the White House initiative is focused on the next step beyond, utilizing resources from the National Science Foundation, NASA, and other agencies to “invest in other, post-Moore’s Law technologies that have the potential to support the NSCI Strategic Objectives, as they are identified.”
Title image courtesy Micron Technology

7:57p |
Google Cloud Customers Can Now Use Their Own Encryption Keys
Brought to You by The WHIR
The Customer-Supplied Encryption Keys (CSEK) option for Google’s Compute Engine has been released to general availability, the company announced Monday. Google will continue to encrypt customer content by default, but making CSEK generally available allows customers to take more direct control of their data security.
Google Cloud Platform (GCP) automatically uses one or more encryption mechanisms, and at the storage level data is encrypted with AES256 or AES128, but in theory it may still be possible to steal the keys or access data from within Google. The company also supplies a whitepaper, Encryption at Rest in Google Cloud Platform, with further information about encryption at rest on GCP.
“Customer-supplied encryption keys give us the fidelity and granular control to provide strong data-protection assurances to our customers,” said Neil Palmer, CTO of Advanced Technology at FIS Global in a Google blog post. “It’s a critical feature and Google’s approach is key to our end-to-end security posture.”
CSEK for Compute Engine, which Google introduced in beta a year ago, allows companies to tell their clients that keys are not stored with third parties, but it carries the risk of stranded data if keys are lost, as Google can neither recover a customer-supplied key nor access the data it protects.
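To illustrate how the feature works, here is a minimal sketch in Python of supplying a customer-supplied key when creating a Compute Engine persistent disk, assuming the “rawKey” form of the disks API; the project, zone, and disk names are hypothetical placeholders, and field names should be checked against Google’s current documentation. The key is a base64-encoded 256-bit value that the customer generates and keeps, since Google does not store it.

import base64
import json
import os

# Generate a random 256-bit key locally. In practice this key must be kept safe,
# because Google cannot recover data protected by a lost customer-supplied key.
raw_key = os.urandom(32)
encoded_key = base64.b64encode(raw_key).decode("utf-8")

# Hypothetical request body for creating an encrypted persistent disk; the
# customer-supplied key travels with the request rather than being stored by Google.
disk_body = {
    "name": "example-protected-disk",
    "sizeGb": "100",
    "diskEncryptionKey": {
        "rawKey": encoded_key,
    },
}

# The body would be POSTed to the Compute Engine disks endpoint for a given
# project and zone, e.g. .../compute/v1/projects/PROJECT/zones/ZONE/disks
print(json.dumps(disk_body, indent=2))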
CSEK is available now in the U.S., U.K., Canada, France, and Germany, and is expected to be extended to Australia, Italy, Mexico, Norway, and Sweden later this month.
A number of cloud providers currently allow customers to supply their own keys, including AWS, Azure, and Box. The approach also gives service providers a way to avoid “technical assistance” requests from government agencies, which are a looming privacy and legal issue for the industry.
This post originally appeared at The WHIR.
8:03p |
Red Hat, Microsoft and Codenvy Push DevOps with New Language Protocol
By The VAR Guy
Red Hat, Microsoft and Codenvy partnered recently to knock down another barrier to open collaboration through the launch of a new open source project called Language Server Protocol. Here’s what it means for the channel.
The three companies describe the protocol, which they announced in June, as “an open source project that defines a JSON-based data exchange protocol for language servers, which can provide programming language services like Find By Symbol or Refactoring consistently across different code editors. This protocol is accessible over standard I/O, allowing both locally installed and remotely hosted editors to access these features, running inside a language server.”
What that means in non-technical terms is that the protocol provides a common, open standard that lets developers pair any programming language with any programming tool, whether a lightweight code editor or a full IDE.
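As a rough, hypothetical sketch of what that exchange looks like on the wire: the client frames a JSON-RPC request with a Content-Length header and writes it to the language server’s standard input. The “workspace/symbol” method shown corresponds to the kind of find-by-symbol feature the companies describe; the server binary name and the symbol being searched for are made up for illustration, and a real session would begin with an initialize handshake.

import json
import subprocess

def frame(message):
    """Encode a JSON-RPC message with the Content-Length header LSP uses over standard I/O."""
    body = json.dumps(message).encode("utf-8")
    header = "Content-Length: {}\r\n\r\n".format(len(body)).encode("ascii")
    return header + body

# A "find by symbol" query expressed as a protocol request; the id lets the
# editor match the server's eventual response back to this request.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "workspace/symbol",
    "params": {"query": "parseConfig"},  # hypothetical symbol name
}

# Any editor can launch any conforming language server and talk to it this way.
# (A real session would send an "initialize" request first; omitted here for brevity.)
server = subprocess.Popen(
    ["example-language-server"],  # hypothetical server executable
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
server.stdin.write(frame(request))
server.stdin.flush()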
That’s important, according to Codenvy CEO Tyler Jewell, because traditionally it has been hard for developers to switch from one language to another without having to learn a new programming app and revamp their entire workflow. “Historically, most programming languages have only been optimized for a single tool,” he said. “This has prevented developers from using the editors they know and love, and has limited opportunities for language providers to reach a wide audience.”
With the Language Server Protocol, “developers can gain access to intelligence for any language within their favorite tools,” according to Jewell.
What It Means for the Channel: Advancing DevOps and Open Source
To be sure, the Language Server Protocol is something that only programmers are likely to appreciate fully.
Yet the project is also important from a broader channel perspective, for two reasons.
First, it’s interesting as the latest partnership between erstwhile enemies Microsoft and Red Hat. While it’s no longer news that Microsoft wants to cooperate with the open source community, it’s remarkable that Redmond is now going so far as to help found an open source project whose goal is to erase platform lock-in for programming. In the past, platform lock-in constituted the crux of Microsoft’s business strategy, but those days are long past.
Second, this news is evidence of how DevOps practices are revolutionizing the channel. The Language Server Protocol is the latest in a series of DevOps tools designed to make app development and delivery more modular and platform-agnostic, while freeing programmers to use whichever toolset they decide is best for the job at hand. Vendors who want to prepare for the future need to adopt the same mindset.