Data Center Knowledge | News and analysis for the data center industry
Thursday, April 28th, 2016
12:00p
The Hyperconverged Approach to Increasing Efficiency
Mohit Aron is CEO and Founder of Cohesity.
As the volume and complexity of data continue to increase, CIOs and system administrators must consider a structurally new approach to managing secondary storage. A focus on simplification and efficiency is the best way to attack a vast and sprawling problem, and in the case of storage management this means consolidation. Companies today have an enormous opportunity to cut storage costs (or at least halt their growth) and eliminate management headaches by consolidating secondary storage use cases, such as data protection, test and development, and file services, on a single platform.
Secondary Storage Solutions Have Multiplied Beyond Control
Two major forces are driving today’s need for consolidation: the proliferation of point solutions to handle different secondary storage use cases and the exponential growth in the amount of data organizations store. Although the term “secondary storage” is relatively young, it’s been widely adopted to succinctly wrap up the array of storage workflows that aren’t dedicated to mission-critical operations. IT administrators have been using “primary storage” to describe high-performance workloads for years, but only recently have use cases like disaster recovery, archive, test and development, and analytics been grouped together under the secondary storage umbrella.
The term “secondary storage” recognizes that these distinct use cases benefit from a unified approach and do not require high-performance SLAs. Managing different point solutions for archiving, backup, test/dev, and analytics (just to name a handful of the secondary storage examples you’ll find at a single company) creates serious administrative headaches today. In a recent survey by IDC, IT decision makers ranked data complexity across different departments and locations within the organization as a top concern. Despite the greater attention paid to primary storage, the volume of data held in secondary storage solutions is actually much larger at most companies, averaging 80 percent of total data. In this way, primary storage is really the tip of the iceberg, with data in secondary storage representing the much greater portion hidden below the surface. By bringing together various data use cases on a single platform, IT administrators can gain a much clearer view of their data than the fragmented landscape of point solutions has allowed.
Applying the Proven Benefits of Hyperconvergence
Consolidating secondary storage can also reduce the strain on IT resources caused by growing data. Allocating different storage systems for separate workflows translates into excess capacity for each use case, and so unnecessary or unused capacity multiplies with each additional storage solution, compounding inefficiency over time. By consolidating secondary storage on a single hyperconverged platform that integrates with public clouds, administrators get a holistic view of data utilization that enables more cost-effective usage and ongoing planning. A single copy of data can, for example, be used for backup, repurposed for test/dev, and archived to the public cloud, increasing efficiency and managing sprawl.
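The capacity math behind this argument can be made concrete. A minimal sketch, with purely hypothetical numbers (none of these figures come from the article, and the assumption that pooling halves required headroom is illustrative):

```python
# Hypothetical illustration: compare raw capacity when each secondary-storage
# use case is provisioned in its own silo versus pooled on one platform.
# All figures and the headroom assumptions below are invented for this sketch.

workloads_tb = {"backup": 100, "test_dev": 40, "archive": 150, "file_services": 60}
headroom = 0.40  # assumed 40% growth headroom provisioned per silo

# Siloed: every point solution carries its own unused headroom.
siloed = sum(tb * (1 + headroom) for tb in workloads_tb.values())

# Pooled: one shared platform needs headroom only on the aggregate, since a
# peak in one workload can borrow idle capacity from another. We assume
# pooling halves the headroom needed.
pooled = sum(workloads_tb.values()) * (1 + headroom / 2)

print(f"siloed: {siloed:.0f} TB, pooled: {pooled:.0f} TB")
```

Under these assumptions the silos require 490 TB of raw capacity against 420 TB for the pool, and the gap widens with every additional point solution added.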
In fact, the same principle has already proven effective for different primary storage use cases through the success of the hyperconvergence movement. Hyperconvergence made it easier for various virtualized workflows to run across a single scale-out architecture that eliminates hardware compatibility or management issues. Today, system administrators don’t segregate primary storage workflows based on how the data is being used; instead they put it in a single group defined by the performance and resiliency that mission-critical operations require (which often means using all-flash storage arrays and removing any spinning-disk hardware).
There’s no reason companies can’t achieve the same efficiency with secondary storage. Of course, these workflows often have more diverse performance requirements, so designing an effective platform is not a trivial engineering problem. For example, backup is traditionally considered a passive data workflow but it still requires specific ingest speeds and recovery time objectives. On the other hand, test/dev demands higher levels of performance but lower resiliency requirements. But the upside of consolidating secondary storage (and the recent rise of affordable flash and web-scale storage architectures that enable much more flexible platforms) makes this a challenge that should be tackled head on.
Organizations now grapple with an enormous volume of data that is simultaneously being applied to increasingly complex use cases. Most of this data growth – and fragmentation – is happening in the realm of secondary storage. The answer to this problem is a simpler, more efficient approach to managing data that also incorporates the public cloud. We’ve seen it work with hyperconvergence for primary storage, but that’s just the tip of the iceberg. The value of converging secondary storage will be enormous.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
4:52p
Guide to Facebook’s Open Source Data Center Hardware
When Facebook rolled out the Open Compute Project in 2011, it started something of a revolution in the data center market. In a way, that revolution had already begun; Google had figured out it was better off designing its own hardware than buying off-the-shelf products from the top vendors, and it took some time before Facebook reached that point too.
But OCP, the now non-profit organization that aggregates open source hardware and data center designs and promotes applying the open source software ethos to the world of hardware, has become a hub of sorts, where vendors and operators of some of the world’s largest data centers come together to build the next wave of internet infrastructure driven by actual specific requirements of the data center operators rather than vendors’ own ideas about market needs.
Both Microsoft and Apple have joined OCP, and Microsoft has already contributed multiple cloud server designs to the open repository. Google joined this year, announcing it would contribute a data center rack and power distribution design it has been using in its facilities. A host of telcos are involved, as they transform their infrastructure to support Software Defined Networking and Network Function Virtualization, and so are some of the biggest financial services firms, which need more computing capacity than they’ve ever needed before and are looking for the most cost-effective ways to build out that infrastructure.
Read more: What Enterprise Data Center Managers Can Learn from Web Giants
Facebook of course was the first contributor of intellectual property to the open source project and has contributed more designs of servers, electrical infrastructure components, network hardware, and software than any other company.
Here’s a guide to all the Facebook data center hardware contributed so far:
8:16p
IT Innovators: Taking Data Security to the Next Level
By WindowsITPro
When Joseph Latouf was in high school, a challenge sparked his curiosity. His algebra class was informed that if anyone could come up with a prime number generator, they would win a $100,000 reward. Latouf got to work fast, and after some intense analyzing and deliberating, uncovered a clever method of creating a prime number generator. A professor at a nearby university was called in to prove that his prime number generator worked—and indeed, it did. Sadly, however, there really wasn’t a $100,000 prize.
Latouf said he tucked away the fruits of his labor in his back pocket, hoping that it would someday lead to something of value. After all, he knew that a prime number generator was important, since it holds the keys for encryption.
Fast forward many years: Latouf was wrestling with the idea of security and encryption and feeling uneasy about the fact that if he had a prime number generator, others likely did too. And that meant there were people out there who could crack encryption.
As more and more breaches and stolen data began to surface in the news, Latouf set out to create a better and more secure way of saving data. That’s when CORA, which stands for Context Order Replacement Algorithm, was born. The data security solution touted as “unbreakable” is currently raising funds on Indiegogo.
Latouf describes encryption as “a big safe that you can open up, put whatever valuables you want inside and then close and lock it.” Essentially, he says, that’s what encryption does to data on a hard drive; it locks it up with prime numbers. Now, if someone physically stole a personal computer or hacked into a server or hard drive, they would need a prime number generator or brute force to crack the code and access the data. Cracking encryption is something that is hard to do, but not impossible. Latouf decided that there must be a better way.
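The article doesn’t spell out the math, but the “safe locked with prime numbers” idea is essentially how RSA works, and it can be sketched at toy scale. These values are purely illustrative (real keys use primes hundreds of digits long):

```python
# Textbook RSA at toy scale: the "safe" is built from two primes.
# Real deployments use primes of 1024+ bits; p and q here are toy values.
p, q = 61, 53
n = p * q                 # public modulus: 3233
phi = (p - 1) * (q - 1)   # 3120; easy to compute only if you can factor n
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent (modular inverse): 2753

m = 65                    # plaintext, encoded as a number smaller than n
c = pow(m, e, n)          # encrypt with the public key (e, n)
assert pow(c, d, n) == m  # decrypt with the private key (d, n)

# An attacker who can find the primes behind n can rebuild d, which is
# why a method for producing (or predicting) primes threatens encryption.
```

The `pow(e, -1, phi)` modular-inverse form requires Python 3.8 or later. Brute force here means factoring n back into p and q, which is trivial for 3233 but infeasible at real key sizes.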
With CORA, Latouf says, storing data is more like taking a book, pulling the letters out in a random fashion and storing them in three, five or even 25 different places—with some data on a hard drive, some on OneDrive, some on another cloud, and so on. This way, if someone breaches one location, they might have, say 10,000 letters, but that would only be a small subset of letters and not enough to reconstruct phrases, let alone an entire book.
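CORA’s actual algorithm hasn’t been published, so as a rough sketch of the book-and-letters idea only: the fragmenting step might resemble dealing the bytes of a file into shards in a shuffled order, with the shuffle itself serving as the reassembly key. The `shatter` and `reassemble` names and the whole design below are hypothetical, and a real system would additionally encrypt each shard.

```python
import random

def shatter(data: bytes, n_shards: int, seed: int):
    """Deal the bytes of `data` into n_shards in a shuffled order.

    The shuffled index order acts as the key needed to put the
    "letters" back into the "book"; no single shard is readable alone.
    """
    order = list(range(len(data)))
    random.Random(seed).shuffle(order)
    shards = [bytearray() for _ in range(n_shards)]
    for i, idx in enumerate(order):
        shards[i % n_shards].append(data[idx])
    return [bytes(s) for s in shards], order

def reassemble(shards, order):
    """Invert shatter(): byte i of the shuffled stream lives at
    shards[i % n][i // n] and belongs at position order[i]."""
    n = len(shards)
    out = bytearray(len(order))
    for i, idx in enumerate(order):
        out[idx] = shards[i % n][i // n]
    return bytes(out)
```

In this sketch each shard would be written to a different location (local disk, OneDrive, another cloud), so a breach of any one location yields only a scrambled fraction of the bytes.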
Latouf says the CORA security solution is “unbreakable” with one caveat: it must be used properly by storing the data in different locations.
With this level of unbreakable encryption, an IT professional would be positioned to better defend their company against potential cyberattacks. In general, encryption helps to protect data. But by taking security a step further with CORA, small subsets of data are stored in various locations, and if the data were to be compromised in a single breach, the encrypted data, even if cracked, would never represent the full data set and would therefore lack any real value to a hacker.
In addition to CORA, Latouf is also developing an app to allow people to have total control over their online footprint. In a nutshell, the app will allow users to delete a photo or file permanently, leaving no one with the ability to access that package even after it has been shared. “If I want my picture taken offline, I should be able to shut it down,” Latouf explains. “My next goal, following CORA, will be to also make that possible.”
Renee Morad is a freelance writer and editor based in New Jersey. Her work has appeared in The New York Times, Discovery News, Business Insider, Ozy.com, NPR, MainStreet.com, and other outlets. If you have a story you would like profiled, contact her at renee.morad@gmail.com.
The IT Innovators series of articles is underwritten by Microsoft, and is editorially independent.
This first ran at http://windowsitpro.com/it-innovators/it-innovators-taking-data-security-next-level
9:33p
Intel Promotes Top Data Center Executive Diane Bryant
As part of the big restructuring initiative Intel announced last week, the company has promoted Diane Bryant, who leads the Data Center Group, its fastest-growing business, from senior VP to executive VP.
This is a key promotion that will play a big role in Intel’s future. The restructuring program, which also includes laying off 12,000 people, or 11 percent of Intel’s total workforce, aims to refocus the company on the data center market rather than the PC market, where shipments and revenue are steadily shrinking.
Today, 40 percent of Intel’s revenue and 60 percent of its margin come from businesses outside of PCs, Intel CEO Brian Krzanich said on an earnings call this month. The restructuring is meant to reorient the company’s strategy accordingly, so it is better positioned to pursue high-growth areas like data centers, memory, and the Internet of Things.
Bryant joined Intel in 1985. She has been leading the company’s data center business since 2012. Prior to that, she was in charge of Intel’s corporate IT infrastructure as CIO for nearly four years.
Last year, Fortune Magazine included Bryant on its annual Most Powerful Women list.
Read more: Intel: World Will Switch to “Scale” Data Centers by 2025