Data Center Knowledge | News and analysis for the data center industry

Thursday, April 28th, 2016

    12:00p
    The Hyperconverged Approach to Increasing Efficiency

    Mohit Aron is CEO and Founder of Cohesity.

    As the volume and complexity of data continue to increase, CIOs and system administrators must consider a structurally new approach to managing secondary storage. A focus on simplification and efficiency is the best way to attack a vast and sprawling problem, and in the case of storage management this means consolidation. Companies today have an enormous opportunity to cut storage costs (or at least halt their growth) and eliminate management headaches by consolidating secondary storage use cases, such as data protection, test and development, and file services, on a single platform.

    Secondary Storage Solutions Have Multiplied Beyond Control

    Two major forces are driving today’s need for consolidation: the proliferation of point solutions to handle different secondary storage use cases and the exponential growth in the amount of data organizations store. Although the term “secondary storage” is relatively young, it’s been widely adopted to succinctly wrap up the array of storage workflows that aren’t dedicated to mission-critical operations. IT administrators have been using “primary storage” to describe high-performance workloads for years, but only recently have use cases like disaster recovery, archive, test and development, and analytics been grouped together under the secondary storage umbrella.

    The term “secondary storage” recognizes that these distinct use cases do not require high-performance SLAs and can therefore share a unified approach. Managing different point solutions for archiving, backup, test/dev and analytics (just to name a handful of the secondary storage examples you’ll find at a single company) creates serious administrative headaches today. In a recent survey by IDC, IT decision makers ranked data complexity across different departments and locations within the organization as a top concern. Despite the greater attention paid to primary storage, the volume of data held in secondary storage solutions is actually much larger at most companies, averaging 80 percent of total data. In this way, primary storage is really the tip of the iceberg, with data in secondary storage representing the much greater portion hidden below the surface. By bringing together various data use cases on a single platform, IT administrators can gain a much clearer view of their data than the fragmented landscape of point solutions has allowed.

    Applying the Proven Benefits of Hyperconvergence

    Consolidating secondary storage can also reduce the strain on IT resources caused by growing data. Allocating different storage systems for separate workflows translates into excess capacity for each use case, so unnecessary or unused capacity multiplies with each additional storage solution, compounding inefficiency over time. By consolidating secondary storage on a single hyperconverged platform that integrates with public clouds, administrators get a holistic view of data utilization that enables more cost-effective usage and ongoing planning. A single copy of data can, for example, be used for backup, repurposed for test/dev, and archived to the public cloud, increasing efficiency and reining in sprawl.
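
    As a rough illustration of the capacity math (the numbers below are invented for illustration, not taken from the article), each point solution must be provisioned for its own peak, while a consolidated pool only needs to cover the combined peak, which is smaller whenever the individual peaks don't coincide:

        # Illustrative arithmetic only -- hypothetical daily usage per use case, in TB.
        peak_tb_per_day = {
            "backup":   [50, 52, 48, 70, 51, 49, 50],
            "test_dev": [20, 35, 22, 21, 36, 20, 19],
            "archive":  [30, 30, 31, 30, 30, 45, 30],
        }

        # Separate silos: each system sized for its own worst day.
        separate_silos = sum(max(series) for series in peak_tb_per_day.values())   # 151 TB

        # Consolidated pool: sized for the worst combined day.
        combined_by_day = [sum(day) for day in zip(*peak_tb_per_day.values())]
        consolidated_pool = max(combined_by_day)                                    # 121 TB

        print(f"separate silos: ~{separate_silos} TB, consolidated pool: ~{consolidated_pool} TB")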

    In fact, the same principle has already proven effective for different primary storage use cases through the success of the hyperconvergence movement. Hyperconvergence made it easier for various virtualized workflows to run across a single scale-out architecture, eliminating hardware compatibility and management issues. Today, system administrators don’t segregate primary storage workflows based on how the data is being used; instead they treat them as a single group defined by the performance and resiliency that mission-critical operations require (which often means using all-flash storage arrays and removing any spinning-disk hardware).

    There’s no reason companies can’t achieve the same efficiency with secondary storage. Of course, these workflows often have more diverse performance requirements, so designing an effective platform is not a trivial engineering problem. For example, backup is traditionally considered a passive data workflow, but it still requires specific ingest speeds and recovery time objectives. Test/dev, on the other hand, demands higher performance but has lower resiliency requirements. But the upside of consolidating secondary storage, together with the recent rise of affordable flash and web-scale storage architectures that enable much more flexible platforms, makes this a challenge worth tackling head-on.

    Organizations now grapple with an enormous volume of data that is simultaneously being applied to increasingly complex use cases. Most of this data growth, and fragmentation, is happening in the realm of secondary storage. The answer to this problem is a simpler, more efficient approach to managing data that also incorporates the public cloud. We’ve seen it work with hyperconvergence for primary storage, but that’s just the tip of the iceberg. The value of converging secondary storage will be enormous.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:52p
    Guide to Facebook’s Open Source Data Center Hardware

    When Facebook rolled out the Open Compute Project in 2011, it started something of a revolution in the data center market. In a way, that revolution had already been going on: Google had figured out it was better off designing its own hardware than buying off-the-shelf products from the top vendors, and in time Facebook had reached that point too.

    But OCP, now a non-profit organization that aggregates open source hardware and data center designs and promotes applying the open source software ethos to the world of hardware, has become a hub of sorts, where vendors and operators of some of the world’s largest data centers come together to build the next wave of internet infrastructure, driven by the operators’ actual requirements rather than vendors’ own ideas about market needs.

    Both Microsoft and Apple have joined OCP, and Microsoft has already contributed multiple cloud server designs to the open repository. Google joined this year, announcing it would contribute a data center rack and power distribution design it has been using in its facilities. A host of telcos are involved as they transform their infrastructure to support Software Defined Networking and Network Function Virtualization, as are some of the biggest financial services firms, which need more computing capacity than ever before and are looking for the most cost-effective ways to build it out.

    Read more: What Enterprise Data Center Managers Can Learn from Web Giants

    Facebook, of course, was the first contributor of intellectual property to the open source project and has contributed more designs of servers, electrical infrastructure components, network hardware, and software than any other company.

    Here’s a guide to all the Facebook data center hardware contributed so far:

    Data center, open sourced in 2011: One of the first things Facebook open sourced was the design spec for its data center in Prineville, Oregon, the first facility the company designed and built for itself to replace leased facilities. The document describes mechanical and electrical specifications created to maximize efficiency of its web-scale infrastructure.
    Triplet rack, open sourced in 2011: Facebook deployed servers in Prineville in what it calls “triplet racks.” Each group of three 42U racks holds a total of 90 servers and has two top-of-rack switches.
    Battery cabinet, open sourced in 2011: Facebook has a dedicated backup battery cabinet for each pair of triplet racks, ready to supply DC power to six racks in case the main AC power supply is interrupted. The cabinets replace traditional UPS systems used in data centers.
    Freedom server, open sourced in 2011: The Server V1 design, also known as “Freedom,” features a custom server chassis that goes into the triplet racks and enables installation of components without any tools.
    Spitfire server (AMD), open sourced in 2011: This is a variant of an OCP motherboard for AMD chips.
    Power supply, open sourced in 2011: This power supply is what enables OCP servers to run on AC power but take DC power from the battery cabinets when main power gets interrupted. The self-cooled power supply features a converter with independent AC and DC output connectors and a DC input connector for backup voltage. The design’s main focus is high energy efficiency.
    Windmill, open sourced in 2012: The Server V2 design, also known as “Windmill,” was a power-optimized, bare-bones motherboard for Intel Xeon processors designed to provide the lowest capital and operating costs. It did away with many features that vendors usually include in servers but that aren’t necessary for Facebook’s needs.
    Watermark server (AMD), open sourced in 2012: Facebook also contributed a V2 server design for AMD Opteron processors, built on the same power- and cost-saving principles employed in Windmill.
    Mezzanine card V1, open sourced in 2012: Facebook’s first mezzanine cards for Intel V2 motherboards offered extended functionality, such as support for 10GbE PCI-E devices.
    Open Rack V1, open sourced in 2013: Facebook’s first rack design was created to maximize operational efficiency. It required things like tool-less routine service procedures, no vanity features, direct integration with air containment solutions, the ability to do installation and operations work in the cold aisle, and data cables in front.
    Winterfell, open sourced in 2013: Winterfell was a web server with three x86 server nodes in an OCP chassis.
    Group Hug Board, open sourced in 2013: This was a spec for a completely vendor-neutral motherboard that was meant to last through multiple generations of processors.
    Knox, or Open Vault, open sourced in 2013: The Open Vault was a storage solution for the Open Rack with modular I/O topology. It was optimized for high disk density, holding 30 drives in a 2U chassis, and could work with almost any host server.
    Mezzanine card V2, open sourced in 2014: Based on the original OCP mezzanine card, this card’s mechanical and electrical interface was extended to accommodate new use cases.
    Cold Storage, open sourced in 2014: This is a storage server designed for data that’s accessed less frequently, such as old Facebook photos. It is optimized for low hardware cost, high capacity and storage density, as well as low power consumption. Facebook built separate simplified data centers just to house these cold storage servers.
    Panther Micro Server, open sourced in 2014: The microserver is a PCI-E-like card with an SoC (Server-on-Chip), memory, and storage for the SoC. It can be plugged into baseboard slots used for power distribution and control, BMC management, and network distribution. It can be applied to servers, storage, or networking devices.
    Open Rack V2, open sourced in 2014: The second-generation rack increased the maximum weight of IT gear that can be installed in the rack from 950 kg to 1,400 kg and increased height from 2,100 mm to 2,210 mm.
    Honey Badger, open sourced in 2014: The lightweight Honey Badger compute module turns an Open Vault from a JBOD (Just a Bunch of Disks), which needs to be controlled by a host server, into a full-fledged storage server in its own right.
    Wedge, open sourced in 2015: The Wedge switch was Facebook’s first foray into designing networking hardware. The design team gave this top-of-rack switch the same power and flexibility as a server, with flexible hardware configuration, including the ability to use Intel, AMD, or ARM processors, thanks to the use of the Group Hug architecture.
    6-Pack, open sourced in 2015: The 6-Pack is a core switch that followed the Wedge top-of-rack box. It sits at the core of Facebook’s data center network fabric and includes six Wedge switches as its basic building blocks, hence the name.
    Yosemite, open sourced in 2015: Yosemite is Facebook’s multi-node server platform. It hosts four OCP-compliant one-socket server cards in a sled that can be plugged directly into the Open Rack.
    8:16p
    IT Innovators: Taking Data Security to the Next Level
    By WindowsITPro

    When Joseph Latouf was in high school, a challenge sparked his curiosity. His algebra class was told that anyone who could come up with a prime number generator would win a $100,000 reward. Latouf got right to work, and after some intense analysis and deliberation, came up with a clever method of generating prime numbers. A professor at a nearby university was called in to verify that his prime number generator worked, and indeed it did. Sadly, however, there really wasn’t a $100,000 prize.

    Latouf said he tucked away the fruits of his labor in his back pocket, hoping the work would someday lead to something of value. After all, he knew that a prime number generator was important, since prime numbers hold the keys to encryption.

    Fast forward many years: Latouf was wrestling with the ideas of security and encryption, uneasy about the fact that if he had a prime number generator, others likely did too. And that meant there were people out there who could crack encryption.

    As more and more breaches and stolen data began to surface in the news, Latouf set out to create a better and more secure way of saving data. That’s when CORA, which stands for Context Order Replacement Algorithm, was born. The data security solution touted as “unbreakable” is currently raising funds on Indiegogo.

    Latouf describes encryption as “a big safe that you can open up, put whatever valuables you want inside and then close and lock it.” Essentially, he says, that’s what encryption does to data on a hard drive; it locks it up with prime numbers. If someone physically stole a personal computer or hacked into a server or hard drive, they would need a prime number generator or brute force to crack the code and access the data. Cracking encryption is hard to do, but not impossible. Latouf decided that there must be a better way.
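
    To make the “locked with prime numbers” analogy concrete, here is a textbook-style toy RSA example in Python. It is purely illustrative, uses far-too-small primes, and is not CORA or anything Latouf has built; it simply shows why holding (or being able to regenerate) the primes means holding the key:

        # Toy RSA with tiny primes -- a standard textbook illustration only.
        def egcd(a, b):
            # extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)
            if b == 0:
                return a, 1, 0
            g, x, y = egcd(b, a % b)
            return g, y, x - (a // b) * y

        def modinv(a, m):
            g, x, _ = egcd(a, m)
            assert g == 1
            return x % m

        p, q = 61, 53                # the two secret primes (tiny on purpose)
        n = p * q                    # public modulus (3233)
        phi = (p - 1) * (q - 1)      # 3120
        e = 17                       # public exponent
        d = modinv(e, phi)           # private exponent, derivable only from p and q

        message = 42
        ciphertext = pow(message, e, n)    # "close and lock the safe"
        recovered = pow(ciphertext, d, n)  # "open the safe" with the private key
        assert recovered == message

        # An attacker who can recover the primes (by generating candidates or by
        # brute-force factoring n) can recompute d and read everything -- trivial
        # here only because the primes are tiny.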

    With CORA, Latouf says, storing data is more like taking a book, pulling the letters out in a random fashion and storing them in three, five or even 25 different places, with some data on a hard drive, some on OneDrive, some on another cloud, and so on. This way, if someone breaches one location, they might have, say, 10,000 letters, but that would only be a small subset of letters and not enough to reconstruct phrases, let alone an entire book.
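
    The scattering idea can be sketched in a few lines of Python. This is only our own illustration of random interleaving across several stores; CORA’s actual algorithm is not described in the article, and the function names here are made up for the sketch:

        # Assign each byte to one of several stores at random, so any single store
        # holds only a jumbled subset of the original data.
        import secrets

        def scatter(data: bytes, num_stores: int):
            fragments = [[] for _ in range(num_stores)]
            placement = []                      # (store, offset) for every original byte
            for b in data:
                store = secrets.randbelow(num_stores)
                placement.append((store, len(fragments[store])))
                fragments[store].append(b)
            return [bytes(f) for f in fragments], placement

        def gather(fragments, placement):
            # reassembly requires the placement map, which is kept separately
            return bytes(fragments[store][offset] for store, offset in placement)

        pieces, placement = scatter(b"the quick brown fox", 3)
        assert gather(pieces, placement) == b"the quick brown fox"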

    Latouf says the CORA security solution is “unbreakable” with one caveat: it must be used properly by storing the data in different locations.

    With this level of unbreakable encryption, an IT professional would be positioned to better defend their company against potential cyberattacks. In general, encryption helps to protect data. But by taking security a step further with CORA, small subsets of data are stored in various locations, and if the data were compromised in a single breach, the encrypted data, even if cracked, would never represent the full set of data and would therefore lack any real value to a hacker.

    In addition to CORA, Latouf is also developing an app to allow people to have total control over their online footprint. In a nutshell, the app will allow users to delete a photo or file permanently, leaving no one with the ability to access that package even after it has been shared. “If I want my picture taken offline, I should be able to shut it down,” Latouf explains. “My next goal, following CORA, will be to also make that possible.”

    Renee Morad is a freelance writer and editor based in New Jersey. Her work has appeared in The New York Times, Discovery News, Business Insider, Ozy.com, NPR, MainStreet.com, and other outlets. If you have a story you would like profiled, contact her at renee.morad@gmail.com.

    The IT Innovators series of articles is underwritten by Microsoft, and is editorially independent.

    This first ran at http://windowsitpro.com/it-innovators/it-innovators-taking-data-security-next-level

    9:33p
    Intel Promotes Top Data Center Executive Diane Bryant

    As part of the big restructuring initiative Intel announced last week, the company has promoted Diane Bryant, who leads the Data Center Group, its fastest-growing business, from senior VP to executive VP.

    This is a key promotion that will play a big role in Intel’s future. The restructuring program, which also includes laying off 12,000 people, or 11 percent of Intel’s total workforce, is meant to turn Intel into a company focused on the data center market rather than on the PC market, where shipments and revenue are steadily shrinking.

    Today, 40 percent of Intel’s revenue and 60 percent of its margin come from businesses outside of PCs, Intel CEO Brian Krzanich said on an earnings call this month. The restructuring is meant to reorient the company’s strategy accordingly, so it is better positioned to pursue high-growth areas like data centers, memory, and the Internet of Things.

    Bryant joined Intel in 1985. She has been leading the company’s data center business since 2012. Prior to that, she was in charge of Intel’s corporate IT infrastructure as CIO for nearly four years.

    Last year, Fortune Magazine included Bryant on its annual Most Powerful Women list.

    Read more: Intel: World Will Switch to “Scale” Data Centers by 2025

