Data Center Knowledge | News and analysis for the data center industry
 
Monday, May 2nd, 2016

    Time Event
    12:00p
    Top 10 Data Center Stories of the Month: April

    Here are the most popular stories that ran on Data Center Knowledge in April:

    Microsoft Moves Away from Data Center Containers – Google has taken the container route for building out data center capacity in the past but eventually decided against it. Now, Microsoft has also found that containers just aren’t the best way for it to scale.

    Google to Build and Lease Data Centers in Big Cloud Expansion – While Google famously likes to keep as much of its data center design and operations in-house as possible, it would be difficult to scale globally at a pace that’s quick enough to keep up with competition.

    Intel: World Will Switch to “Scale” Data Centers by 2025 – This shift will affect virtually every industry and it is a big opportunity for Intel, which is looking at the data center market as its best bet going forward, faced with slowly but steadily dwindling revenue from PC parts and a weak position in the mobile chip market.

    What Cisco’s New Hyperconverged Infrastructure Is and Isn’t Good At – The HyperFlex architecture integrates directly into existing Cisco management environments to allow for complete data center scale, but not without limitations.

    HyperFlex, Cisco’s hyperconverged infrastructure solution (Photo: Cisco)

    Emerson Files Papers to Shed Network Power Business – Gets closer to spinning off business unit that has suffered from years of slumping sales.

    Data Center Chief Dean Nelson Leaves eBay – On his watch, the company deployed in production some of the more unusual critical infrastructure ideas, such as containerized, or modular data centers, ultra-high-density power and cooling infrastructure, and fuel cells.

    Gartner analyst Ray Paquet interviews Dean Nelson (left) of eBay’s global foundation services about what impacts the recent deployment of eBay’s dashboard metrics have had on the organization in 2013. (Photo by Colleen Miller.)

    Coca-Cola Selling Atlanta Data Center as it Shifts Apps to Cloud – Coca-Cola is one of the big corporations shrinking the amount of data center capacity they operate on their own by moving more and more applications to cloud service providers.

    A Coca-Cola stall at Wembley Stadium during the Olympic Games in London, August 1948. (Photo by Topical Press Agency/Hulton Archive/Getty Images)

    Latest OpenStack Release Advances ‘Intent-Based’ Configuration – The ability to drive server utilization and keep data center footprints small depends, in very large part, upon whether cloud infrastructure systems — today, mainly OpenStack — make optimum use of storage, compute, memory, and bandwidth resources throughout the data center, once they’ve all been pooled together.

    BMW, Toyota Select Microsoft Azure Cloud Services to Make Cars Smarter – While AWS touts General Motors as one of its case study darlings, Microsoft Azure has nabbed BMW and Toyota this week as the car manufacturers leverage Microsoft Azure in different capacities, setting the groundwork for future initiatives in connected cars.

    The concept car ‘Vision Next 100’ by BMW is presented during the celebration marking the company’s 100th anniversary on March 7, 2016 in Munich, Germany. (Photo by Lennart Preiss/Getty Images)

    New Equinix Data Hub Broadens Enterprise Colo Options – According to one Equinix executive, Data Hub will bring large data stores out of the trap of legacy data warehouses and closer to the edges of those connections between owned resources and the cloud.

    5:45p
    Why Google Wants to Rethink Data Center Storage

    Growth forecasts for data center storage capacity show no signs of slowing down. Cisco expects that by 2019, 55 percent of internet users (2 billion people) will use personal cloud storage, up from 42 percent in 2014. By 2019, a single user will generate 1.6 gigabytes of consumer cloud storage traffic per month, up from 992 megabytes per month in 2014. Finally, data created by devices that make up the Internet of Things, which Cisco calls the “Internet of Everything,” will reach 507.5 zettabytes per year by 2019, up from 134.5 ZB per year in 2014.
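
    A quick way to read those forecasts is as compound annual growth rates. Here is a minimal sketch in Python; the figures are Cisco's as quoted above, and the calculation is ours:

        # Back-of-the-envelope growth rates implied by the Cisco forecasts above.
        def cagr(start, end, years):
            """Compound annual growth rate between two values."""
            return (end / start) ** (1 / years) - 1

        # Consumer cloud traffic per user, in MB per month (2014 -> 2019)
        print(f"Per-user cloud traffic CAGR: {cagr(992, 1600, 5):.1%}")    # ~10%

        # IoT ("Internet of Everything") data, in ZB per year (2014 -> 2019)
        print(f"IoT data CAGR: {cagr(134.5, 507.5, 5):.1%}")               # ~30%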

    Needless to say, that’s a lot of data, which will require a lot of storage, and Google is proposing a fundamental change to the way engineers think about and design data center storage systems, a rethink that reaches all the way down to the way the hard disks themselves are designed.

    Cloud Needs Different Disks

    At the 2016 USENIX conference on File and Storage Technologies (FAST 2016), Eric Brewer, Google’s VP of infrastructure, said the company wanted to work with industry and academia to develop new lines of disks that are a better fit for data centers supporting cloud-based storage services. He argued that the rise of cloud-based storage means that most (spinning) hard disks will be deployed primarily as part of large storage services housed in data centers. Such services are already the fastest-growing market for disks and will be the majority market in the near future.

    Read more: Intel: World Will Switch to Scale Data Centers by 2025

    He used Google’s subsidiary YouTube as an example. In a recent paper on Disks for Data Centers, Google researchers pointed out that users upload over 400 hours of video every minute, which at one gigabyte per hour requires adding 1 petabyte (that’s 1 million gigabytes) of data center storage capacity every day.
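
    A back-of-the-envelope check of that figure, using only the numbers quoted above (the arithmetic is ours, not Google's, and it deliberately ignores replication and the multiple encodings a video service also stores):

        # Raw YouTube ingest implied by the upload figures quoted above.
        hours_per_minute = 400     # "over 400 hours of video every minute"
        gb_per_hour = 1            # "at one gigabyte per hour"

        gb_per_day = hours_per_minute * 60 * 24 * gb_per_hour
        print(f"Raw uploads: {gb_per_day:,} GB/day (~{gb_per_day / 1e6:.2f} PB/day)")
        # ~576,000 GB/day of raw uploads; the roughly one-petabyte-per-day figure
        # cited above presumably also covers redundancy and multiple formats.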

    This is a tough reality to face for an industry that’s so dependent on this one fundamental technology. The current generation of disks, often called “nearline enterprise” disks, is not optimized for this new use case; it is designed around the needs of traditional servers. Google believes it is time to develop a new kind of disk designed specifically for large-scale data centers and cloud services.

    Let’s take a step back and look at storage from Google’s perspective. First of all, the company says you should stop looking at individual disks (and even arrays) as standalone technologies. Rather, it’s time to focus on the “collection.”

    The key changes Google proposes fall in three broad categories:

    1. The “collection view,” in which we focus on aggregate properties of a large collection of disks
    2. A focus on tail latency derived from the use of storage for live services
    3. Variations in security requirements that stem from storing others’ data

    Taking the Collection View

    The collection view implies higher-level maintenance of bits, including background checksumming to detect latent errors, data rebalancing for more even use of disks (including new disks), as well as data replication and reconstruction. Modern disks do variations of these internally, which is partially redundant, and a single disk by itself cannot always do them as well. At the same time, the disk contains extensive knowledge about the low-level details, which generally favors new APIs that enable better cooperation between the disk and higher-level systems.
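
    As a loose illustration of what that host-level "higher-level maintenance of bits" can look like, here is a minimal background-scrubbing sketch; it is not Google's implementation, and the block store and digest bookkeeping are hypothetical:

        import hashlib

        def scrub(blocks: dict[str, bytes], recorded_digests: dict[str, str]) -> list[str]:
            """Return IDs of blocks whose current checksum no longer matches the recorded one."""
            corrupted = []
            for block_id, data in blocks.items():
                digest = hashlib.sha256(data).hexdigest()
                if digest != recorded_digests.get(block_id):
                    # Latent error detected; a higher layer would reconstruct from replicas.
                    corrupted.append(block_id)
            return corrupted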

    The third aspect of the collection view is that we optimize for the overall balance of IOPS and capacity, using a carefully chosen mix of drives that changes over time. We select new disks so that the marginal IOPS and added capacity bring us closer to our overall goals for the collection. Workload changes, such as better use of SSDs or RAM, can shift the aggregate targets.
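
    A toy sketch of that marginal-IOPS-and-capacity idea follows; the drive models, their specs, and the greedy selection rule are hypothetical illustrations, not Google's actual procurement logic:

        # Pick the drive model whose addition moves the collection's aggregate
        # IOPS-per-TB ratio closest to the target ratio.
        candidates = {
            "high-capacity": {"tb": 14, "iops": 80},
            "performance":   {"tb": 4,  "iops": 180},
        }

        def pick_next_drive(agg_tb, agg_iops, target_iops_per_tb):
            def distance_after(model):
                spec = candidates[model]
                ratio = (agg_iops + spec["iops"]) / (agg_tb + spec["tb"])
                return abs(ratio - target_iops_per_tb)
            return min(candidates, key=distance_after)

        # A collection at 50 IOPS/TB that wants to move toward 60 IOPS/TB
        print(pick_next_drive(agg_tb=1000, agg_iops=50_000, target_iops_per_tb=60))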

    Why Not SSDs?

    But why are we talking so much about spinning disks rather than SSDs, which are much faster and whose cost has been coming down?

    Arguably, SSDs deliver better IOPS and may very well be the future of storage technologies. But according to Google, the cost per GB remains too high. More importantly, growth rates in capacity per dollar between disks and SSDs are relatively close (at least for SSDs that have sufficient numbers of program-erase cycles to use in data centers), so cost will not change enough over the coming decade. Google does make extensive use of SSDs, but it uses them primarily for high-performance workloads and caching, which helps disk storage by shifting seeks to SSDs.
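
    To see why similar capacity-per-dollar growth rates keep that gap from closing, here is a small projection; the starting prices and decline rates below are purely hypothetical placeholders, not vendor pricing:

        # If $/GB for disks and SSDs declines at similar annual rates,
        # the price ratio between them barely moves over a decade.
        hdd_per_gb, ssd_per_gb = 0.03, 0.20       # starting $/GB (illustrative)
        hdd_decline, ssd_decline = 0.12, 0.14     # assumed annual $/GB decline

        for year in (0, 5, 10):
            hdd = hdd_per_gb * (1 - hdd_decline) ** year
            ssd = ssd_per_gb * (1 - ssd_decline) ** year
            print(f"year {year:2d}: HDD ${hdd:.4f}/GB, SSD ${ssd:.4f}/GB, ratio {ssd / hdd:.1f}x")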

    Redesign the Disk

    Now things get even more interesting. Google is essentially calling on the industry to come to the round table to create a new standard for disk design.

    As the company points out, the current 3.5” HDD geometry inherited its size from the PC floppy disk. An alternative form factor should yield a better total cost of ownership. Changing the form factor is a long-term process that requires a broad discussion, but Google believes it should be considered. Although it could spec its own form factor (with high volume), the underlying issues extend beyond Google’s designs, and developing new solutions together with the industry will better serve everybody, especially once a standard is achieved. And that’s one of the key points here: standardization.

    There is a range of possible secondary optimizations as well, some of which may be significant. These include system-level thermal optimization, system-level vibration optimization, automation and robotics handling optimization, system-level helium optimization, and system-level weight optimizations.

    What’s Next for “Legacy” Data Center Storage?

    Yes, cloud-based storage continues to grow at amazing speed. Yes, we’re seeing ever more adoption of new endpoint technologies, IoT, and virtualization. All of these are creating more demand for storage and data optimization.

    But before you get flustered and start looking at the storage alternatives of the future, you have to understand how big an undertaking Google is proposing. It is suggesting a redefinition of the modern, standardized disk architecture, an architecture that’s been around for quite some time.

    In 1956, IBM shipped the first hard drive in the RAMAC 305 system. It held 5MB of data at $10,000 a megabyte. The system was as big as two refrigerators and used 50 24-inch platters. In 1980, Seagate released the first 5.25-inch hard disk. Then, in 1983, Rodime released the first 3.5-inch hard drive; the RO352 included two platters and stored 10MB.

    In its paper, Google discusses physical changes, such as taller drives and grouping of disks, as well as a range of shorter-term firmware-only changes. Its stated goals include higher capacity and more I/O operations per second, in addition to a better overall total cost of ownership. But even with Google’s size and market sway, how feasible is this?

    We’re talking about creating a new storage standard for every business, data center, and ecosystem that relies on disk-based environments. Google believes this will be the new era of disks for data center storage.

    It seems like a huge lift. But maybe it really is time to take a step back and reexamine a technology that’s decades old. Maybe it’s time to develop a storage environment capable of meeting the demands of cloud-scale ecosystems. Either way, this is no easy task, and it will require the support of the industry to make adoption a reality.

    6:42p
    Virtustream Debuts Enterprise Storage Cloud at EMC World
    By Talkin’ Cloud

    Cloud service provider Virtustream, a subsidiary of EMC, launched the Virtustream Storage Cloud platform on Monday during EMC World in Las Vegas.

    Virtustream Storage Cloud is a globally available offering designed specifically for large enterprise and public sector customers, according to the announcement. The platform is capable of handling web-scale object storage and can also extend on-premises EMC storage to the cloud.

    Virtustream said its latest solution is ideal for managing highly sensitive data, thanks to extensive customer testing of latency and performance.

    “Enterprises are generating exponentially-growing data and looking for cost-effective strategies for long-term backup retention and archival storage, in addition to seeking resilient, cost-effective and scalable platforms for cloud-native application data,” said Rodney Rogers, CEO at Virtustream, in a statement. “Virtustream Storage Cloud will provide a scalable, enterprise-class platform to meet our customers’ cloud storage needs for both second and third platform applications.”

    Straight from the press release, features include:

    • Engineered-in resiliency delivering up to 13 x 9s of data durability
    • Architected and optimized for performance, particularly for large object sizes
    • Available read-after-failure provides resiliency and data integrity even in the event of a single-site failure
    • Seamless extensibility of on-premises primary storage and backup to the cloud
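
    For a sense of scale, here is what a "13 x 9s" annual durability figure implies under a deliberately simple model; the object count is hypothetical, and correlated failures are ignored:

        durability = 1 - 1e-13        # 99.99999999999% per object, per year
        objects = 1_000_000_000       # assume one billion stored objects

        expected_losses_per_year = objects * (1 - durability)
        print(f"Expected objects lost per year: {expected_losses_per_year:.1e}")      # ~1e-04
        print(f"Roughly one object lost every {1 / expected_losses_per_year:,.0f} years")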

    Virtustream Storage Cloud is set to be available on May 10, but the platform has been in testing for several years, during which it has operated as the primary object storage platform for a select number of large-scale customers.

    EMC has also pledged to provide Day 1 compatibility with a number of its solutions, including EMC Data Domain, EMC Isilon, EMC Data Protection Suite and the VMAX, XtremIO and Unity systems. Additional support for web-scale object storage for cloud-native applications is also in the works.

    Syncplicity is one of the first companies to adopt Virtustream Storage Cloud as its primary customer storage offering.

    “The combination of Syncplicity’s industry-leading hybrid EFSS solution with Virtustream’s highly secure and scalable storage cloud delivers unrivalled mobile access user experience anytime, anywhere and on any device, with the security and data residency compliance demanded by enterprises, while at the same time enabling IT infrastructure digital transformation initiatives and significant cost take out,” said Jon Huberman, CEO at Syncplicity.

    This first ran at http://talkincloud.com/cloud-computing/virtustream-debuts-enterprise-storage-cloud-emc-world

    7:20p
    Michael Dell Unveils New Name for Combined Dell and EMC
    By The VAR Guy

    Dell Technologies will be the official name of the new company expected to be formed after the pending merger of Dell and EMC Corp., Michael Dell announced during his keynote speech at this week’s EMC World conference in Las Vegas.

    The Dell founder and CEO outlined the vision and branding strategy for the new company on Monday morning. All of Dell’s and EMC’s existing business ventures – including VMware and RSA – will be housed under the Dell Technologies brand, according to the announcement.

    “Dell Technologies will create more value for customers and partners than any other technology solutions provider today. We will be more nimble and innovative, and we will deliver world-class products and solutions to customers of all shapes and sizes,” Michael Dell said in a statement.

    Read more: What About Dell’s Own Huge Data Center Software Portfolio?

    While Dell Technologies is the name of the overarching company, all enterprise products and solutions sold directly and indirectly through the channel will be subcategorized under the Dell EMC brand.

    All client solutions for consumer, business, and institutional customers, meanwhile, will exist under the Dell name, according to the announcement.

    Dell and EMC are still working on the final visual branding for the combined company, most likely an effort to avoid confusion regarding the impending name changes.

    Dell said the merger is progressing according to the original timetable and terms.

    This first ran at http://thevarguy.com/information-technology-events-and-conferences/dell-rebrands-pending-emc-merger-dell-technologies

    9:46p
    Cold Storage Comes to Microsoft Cloud

    Microsoft has launched a cold storage service on its Azure cloud, offering a low-cost storage alternative for data that’s not accessed frequently.

    The launch is a catch-up move by Microsoft, whose biggest public cloud competitors have had cold-storage options for some time. Amazon launched its Glacier service in 2012, and Google rolled out its Cloud Storage Nearline option last year.

    The basic concept behind cold storage is that a lot of data people and companies generate is accessed infrequently, so it doesn’t require the same level of availability and access speed as critical applications do. Therefore, the data center infrastructure built to store it can be cheaper than primary cloud infrastructure, with the cost savings passed down to the customer in the case of a cloud provider.

    Microsoft’s new service is called Cool Blob Storage, and it costs from $0.01 to $0.048 per GB per month, depending on the region and the total volume of data stored. The range for the “Hot” Blob storage tier is $0.0223 to $0.061 per GB, so some customers will be able to cut the cost of storing some of their data in Microsoft’s cloud by more than 50 percent if they opt for the “Cool” access tier.
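
    A quick check of that savings claim against the prices quoted above (simple arithmetic on the published per-GB rates):

        cool_low, cool_high = 0.01, 0.048      # $/GB/month, Cool tier
        hot_low, hot_high = 0.0223, 0.061      # $/GB/month, Hot tier

        best_case_saving = 1 - cool_low / hot_low    # cheapest Cool vs. cheapest Hot
        print(f"Best-case saving: {best_case_saving:.0%}")   # ~55%, i.e. "more than 50 percent"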

    Web-scale data center operators of Microsoft’s caliber have for some time been looking at reducing their infrastructure costs by better aligning infrastructure investment with the type of data being stored. Facebook has revealed more details than others about the way it approaches cold storage, including open sourcing some of its cold storage hardware designs through the Open Compute Project.

    Related: Visual Guide to Facebook’s Open Source Data Center Hardware

    The social network has designed and built separate data centers next to its core server farms in Oregon and North Carolina specifically for this purpose. The storage systems and the facilities themselves are optimized for cold storage and don’t have redundant electrical infrastructure or backup generators. The design has resulted in significant energy and equipment cost savings, according to Facebook’s infrastructure team.

    Read more: Cold Storage: the Facebook Data Centers that Back Up the Backup

    Related: Google Says Cold Storage Doesn’t Have to Be Cold All the Time

    Microsoft hasn’t shared details about the infrastructure behind its new cold storage service. In 2014, however, it published a paper describing a basic building block for an exascale cold storage system called Pelican.

    Pelican is a rack-scale storage unit designed specifically for cold storage in the cloud, according to Microsoft. It is a “converged design,” meaning everything, from mechanical systems to hardware and software, was designed to work together.

    Pelican’s peak sustainable read rate was 1GB per second per 1PB of storage when the paper came out, and it could store more than 5PB in a single rack, which meant an entire rack’s data could be transferred out in 13 days. Microsoft may have a newer-generation cold storage design with higher throughput and capacity today.
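
    A back-of-the-envelope check of that drain-time figure, using only the rate quoted above (our arithmetic; the 13-day figure in the post is presumably based on the rack's actual capacity and real-world overhead rather than this idealized peak rate):

        read_rate_gb_per_s_per_pb = 1.0       # peak sustainable read rate per PB stored
        # 1 PB = 1e6 GB, and the read rate scales with capacity, so the idealized
        # time to drain a full rack is independent of how many PB it holds.
        seconds_to_drain = 1e6 / read_rate_gb_per_s_per_pb
        print(f"Idealized drain time: {seconds_to_drain / 86_400:.1f} days")   # ~11.6 days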

    Cool Blob Storage and the regular-access Hot Blob Storage have similar performance in terms of latency and throughput, Sriprasad Bhat, senior program manager for Azure Storage, wrote in a recent blog post announcing the launch.

    There is a difference in availability guarantees between the two, however. The Cool access tier offers 99 percent availability, while the Hot access tier guarantees 99.9 percent.

    With the RA-GRS redundancy option, which replicates data for higher availability, Microsoft will give you a 99.9 percent uptime SLA for the Cool access tier versus 99.99 percent for the Hot access tier.
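
    Translated into allowed downtime, those SLA figures work out roughly as follows (simple arithmetic over a 30-day month):

        def monthly_downtime_minutes(availability, hours_per_month=30 * 24):
            return (1 - availability) * hours_per_month * 60

        for tier, sla in [("Cool", 0.99), ("Hot", 0.999),
                          ("Cool + RA-GRS", 0.999), ("Hot + RA-GRS", 0.9999)]:
            print(f"{tier:15s} {sla:.2%} -> up to {monthly_downtime_minutes(sla):.0f} minutes of downtime per month")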
