Data Center Knowledge | News and analysis for the data center industry

Tuesday, November 8th, 2016

    2:00p
    Multimedia and Personal Devices Driving Need for Denser Storage Solutions

    By Tim Poor, CCO of Equus Computer Systems.

    Storage capacity demands for all types of organizations continue to increase dramatically every year – many are now reporting a doubling of data in less than a year. No one in the industry is taking any bets on that number remaining constant. Fueled largely by the enormous increase in multimedia and personal devices, data centers are looking at higher density drives as one avenue to accommodate growing storage needs in the face of power, cooling, and space restraints. To make informed choices on a balanced system architecture, administrators need to carefully evaluate a number of factors, including performance, data types, capacity, environments, and relative costs.

    Industry Trends Affecting the Need for Denser Storage Solutions

    The real catalyst for the exponential growth in data storage has multiple prongs, spanning everything from home photos and movies to smart devices. Companies face a real challenge adapting to this massive growth in data and devices.

    Shutterbugs who used to selectively snap photos, aware of film and photo processing costs, now shoot numerous photos and video on their digital cameras with little or no thought of the increasing storage needed for this new content. With the increase in availability and acceptance of cloud-based storage, more and more of this data is migrating from personal devices directly to cloud data centers.

    This increase in media quality is also forcing storage increases on the corporate world. Companies that historically shared information on their website by posting text files and photos are now switching to rich multimedia formats such as movies, video clips, and high definition photos.

    In addition to end user media growth, the advent of smart devices for home and industry has caused another explosion of data. With the Internet of Things (IoT) rapidly moving everyday appliances and systems into the digital world through the integration of intelligent sensor technology, millions of new data-generating devices are going online every day. Companies now have to consider how best to handle the constant feeds of data coming from these devices, how long they need to keep the data, and what storage systems they will need to house it.

    This all adds up to a need for huge increases in data center storage capacity.

    Storing the growing number and size of files is one of the largest challenges facing data centers today. Maximum hard drive capacity has increased dramatically, from 2-4 terabytes (TB) just three or four years ago to around 10TB today, but it has also taken new ideas and designs for the storage systems themselves to manage this explosion of data.

    Just making the hard drive capacity larger isn’t enough. Increasing the number of hard drives in a single storage system is also needed to keep up with the storage demand. Over the past few years, 4U storage systems have scaled from 16-20 3.5” hard drive bays up to 60 and beyond, increasing the density of drives per rack. A 60-drive server with a total storage capacity of more than one-half a petabyte in a 4U form factor recently came on the market as well. This equates to more than 5 petabytes per 42U rack, enabling data centers to store more data in their valuable data center floor space.
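
    A quick back-of-the-envelope check (a Python sketch, purely for illustration, assuming 10TB drives and ten 4U systems per 42U rack) shows how those density figures add up:

        # Rough rack-level capacity math for dense 4U storage systems.
        drives_per_4u = 60        # drive bays in a dense 4U system
        drive_tb = 10             # roughly 10TB per hard drive, per the figures above

        server_tb = drives_per_4u * drive_tb        # 600 TB: more than half a petabyte per 4U
        systems_per_rack = 42 // 4                  # ten 4U systems fit in a 42U rack
        rack_pb = server_tb * systems_per_rack / 1000
        print(server_tb, "TB per 4U system;", rack_pb, "PB per 42U rack")   # 600 TB; 6.0 PB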

    This ever-increasing volume of data is forcing data centers to rethink how and where they store data. Facing a variety of constraints, including power, cooling, cost and sheer lack of physical space, data centers are turning to both higher capacity drives and more dense storage systems to increase their storage capacity, reducing operational expenses and accommodating the data center’s footprint limitations.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:15p
    Server Farm Realty Sells High-Efficiency Santa Clara DC to Zayo

    One of the jewels in Server Farm Realty’s data center portfolio — a nearly 13,600 square foot facility in Santa Clara, a stone’s throw east of Levi’s Stadium and the convention center — has been sold to telecom infrastructure provider Zayo Group Holdings, the buyer announced Monday afternoon.

    The facility, located at 5101 Lafayette Street, covers 26,900 gross square feet, with 3 MW of critical power and a critical load capacity of 220 watts per square foot.  It was constructed in a space originally built in 1984 to serve as a clean room.  SFR has touted the redundant hybrid air handlers in its cooling system, which make mechanical cooling unnecessary for most of the year.
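
    A rough sanity check on those figures (a Python sketch; the assumption that the 220-watt density applies to the roughly 13,600 square feet of data center space, rather than the gross footprint, is ours):

        # Relating the facility's stated power and space figures.
        critical_power_w = 3_000_000    # 3 MW of critical power
        data_center_sqft = 13_600       # data center space cited above
        gross_sqft = 26_900             # gross building area

        print(round(critical_power_w / data_center_sqft))   # ~221 W/sq ft, in line with the quoted 220
        print(round(critical_power_w / gross_sqft))          # ~112 W/sq ft if spread over the gross area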

    In 2012, SFR told Data Center Knowledge it was leasing a good chunk of the facility’s space to an unnamed Fortune 500-ranking semiconductor company.  At present, it appears to share space with data center process management solutions provider Cranntech.

    Zayo’s colocation group, zColo, already offers dark fiber-connected facilities throughout Santa Clara.  In a statement Monday, the firm’s COO characterized the acquisition as a perfect complement to its existing facilities and services.

    Just last week, Zayo Group Holdings was the subject of a glowingly positive write-up by Goldman Sachs analyst Brett Feldman.  “In our view,” he wrote, “Zayo is among the most attractively valued providers of telecom infrastructure,” as he upgraded Goldman’s recommendation for Zayo stock to “buy.”  In the wake of that report, investment research firm Zacks upgraded its “most accurate” earnings estimates for Zayo stock.  Zayo is scheduled to report its quarterly earnings after the markets close Tuesday.

    7:45p
    Obama Administration Formally Accuses Russia of Cyber-Shenanigans
    By The VAR Guy

    We’ve all been saying for months that Putin has been behind a series of cyber-attacks on U.S. entities, including the Democratic National Committee and certain high-ranking individuals. On Friday, the Obama administration officially pointed the finger at Russia.

    White House officials have accused Russia of leaking emails on WikiLeaks and other sites that “are intended to interfere with the U.S. election process,” according to a statement by the director of national intelligence, James Clapper Jr., and the Department of Homeland Security.

    “Such activity is not new to Moscow — the Russians have used similar tactics and techniques across Europe and Eurasia, for example, to influence public opinion there,” the statement said. “We believe, based on the scope and sensitivity of these efforts, that only Russia’s senior-most officials could have authorized these activities.”

    While Clapper didn’t explicitly name Putin, we’re not sure how else we’re supposed to interpret that.

    It wasn’t the only time the statement seemed to be hedging its bets. It referred to recent “scanning and probing” of election systems, saying that they “in most cases originated from servers operated by a Russian company.” No explicit mention of the Russian government there, but is there anyone who thinks good ol’ Putin didn’t have a hand in the whole mess?

    Rumor has it that White House aides have been debating a number of responses, including economic sanctions and targeted action against the servers from which the attacks originated. No word yet on what course of action President Obama has decided on, if any.

    This article was first published here by The VAR Guy.

    7:51p
    Google Cloud Platform Expands Asian Footprint with New Tokyo Region
    Brought to You by The WHIR

    Google Cloud Platform (GCP) launched its new asia-northeast1 region from Tokyo on Tuesday to boost its performance in the booming Asian market.

    The Tokyo region doubles its Asian presence to two regions and six zones, and Google’s ambitious expansion plans include new Asia-Pacific regions slated for Mumbai, Singapore, and Sydney in the next year, and additional new regions in the U.S., Finland, Germany, UK, and Brazil.

    Google now offers six regions and 18 availability zones, and is scheduled to reach 14 regions and around 40 zones by the end of next year. AWS and Azure are both offered from more regions, and are also planning to launch new points of presence, but the new regions will reduce the difference.

    “Low latency and high performance are key considerations when choosing a region to deploy resources,” wrote Google Cloud Platform product managers Varun Sakalkar and Dave Stiver in a blog post. “By opening a dedicated cloud region in Tokyo, we’re bringing Google’s compute, storage and networking services directly to Japanese businesses. Based on our testing, customers in cities like Tokyo, Osaka, Sapporo and Nagoya experience 50-85 percent lower latency on average when served from the Tokyo region compared to Taiwan.”

    Google Compute Engine, Cloud Storage, Cloud Persistent Disk, App Engine standard environment, and Container Engine are now available from the Tokyo region, as well as all core networking and security services and some database and analytics services. Google is running Intel Broadwell processors in the region, and has partnered with several companies in Japan to help onboard customers.
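
    For teams that want to point workloads at the new region, here is a minimal, hypothetical sketch (not from the article) using the google-api-python-client library to list the Tokyo region’s zones; the project ID is a placeholder:

        # List Compute Engine zones in the new asia-northeast1 (Tokyo) region.
        from googleapiclient import discovery

        compute = discovery.build("compute", "v1")   # uses Application Default Credentials
        project = "your-project-id"                  # placeholder project ID

        zones = compute.zones().list(project=project).execute().get("items", [])
        tokyo = [z["name"] for z in zones if z["name"].startswith("asia-northeast1")]
        print(tokyo)   # e.g. ['asia-northeast1-a', 'asia-northeast1-b', 'asia-northeast1-c']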

    Google upgraded GCP’s database services for enterprise customers in August.

    Google has also recently invested in a pair of submarine cables, FASTER and PLCN, as it attempts to scale its network to compete with Amazon and Microsoft.

    This article was originally published here by The WHIR.

    8:34p
    Cloud Veteran George Karidis Joins Virtuozzo as CEO
    Brought to You by The WHIR

    Virtuozzo has named a new CEO who is a familiar face in the web hosting industry. George Karidis, formerly of SoftLayer and IBM, joined Virtuozzo on Tuesday to lead the company and drive its partner focus.

    Virtuozzo currently offers a complete portfolio of open source and commercial virtualization products, including containers, an optimized KVM hypervisor, and software-defined storage.

    Prior to Virtuozzo, Karidis was president, Cloud Technology Services, at managed services provider CompuCom. Karidis ran the cloud business unit for 18 months after leaving IBM, which he joined as part of its acquisition of SoftLayer in 2013. At SoftLayer, Karidis served as chief strategy officer and chief operating officer.

    When it came to joining Virtuozzo, Karidis said it was an easy choice: “I knew a lot of the team already, and they were a big partner when I was at SoftLayer,” he said in an interview with The WHIR. Karidis replaces interim CEO Mike Riolo, Virtuozzo’s SVP of Worldwide Sales, who stepped in after Rob Lovell left the company in July.

    READ MORE: After Parallels Spin Out, Virtuozzo Refocuses on Partners and Technology

    Virtuozzo spun out of Parallels late last year as a standalone company. It’s hoping a renewed energy and market familiarity with containers will expose it to new partners and bring its virtualization platform into more hands.

    “There’s a tremendous opportunity to go after managed service providers, and system integrators,” Karidis said. “OEMs are a great partner for us as well, and fit into this space perfectly.”

    As it stands, Virtuozzo is used by more than 700 service providers, ISVs, and enterprises worldwide.

    “We are going to stay true to who we are [by] investing in partners,” he says.

    In addition to targeting new types of partners to boost its own partner channel, Karidis also sees an opportunity for Virtuozzo in partnering in the broader container ecosystem by forging partnerships with Docker and other significant players.

    “I see us expanding our go-to-market approach,” he says.

    With new leadership, Virtuozzo plans to launch new capabilities and services in the near future. For now, however, Karidis is remaining tight-lipped, but he says they will present a “phenomenal opportunity for partners.”

    This article was originally published here by The WHIR.

    10:35p
    Cisco CTO to Cloud-Native Group: Stop Building Walls

    In very brief comments delivered Tuesday morning in Seattle, at a conference of developers and other members of the Cloud Native Computing Foundation, Cisco CTO Ken Owens candidly advised attendees — including several Google engineers — that the way forward for building their Kubernetes orchestration platform is not to create divisions between themselves, the rest of the contributor community, and the vendor-driven platforms with which they’ll have to integrate.

    “Don’t build walls.  Don’t build this environment where you have [this],” said Owens, indicating a slide showing a “meme”-style picture of U.S. Republican presidential candidate Donald Trump, inside a caption that read, “We need to build a wall | VMware is going to pay for it.”

    “The CNCF is building more and more of an ecosystem around helping you in this journey,” Owens continued, referencing Cisco’s involvement in the Foundation.  “So leverage us, work with us, and we’ll help you get there.”

    Equalization Without Equivocation

    Software-defined infrastructure is a world with a handful of continents, all of which appear to be traveling in different directions.  CNCF represents the emerging ideal of Kubernetes: that application-centered workloads, rather than entire virtual machines, may be staged and managed in an environment where the details of the underlying hardware and platform minutiae are abstracted away from them.

    It’s one way for hyperconvergence to finally make inroads, and for vendors of hyperscale hardware to broaden their customer base between mid-level enterprises looking to deploy hybrid clouds and corporations that need workload portability across continents.  Live migration of virtual machines makes a form of “portability” feasible today, but distributing pieces of workloads (minus the overhead of VM operating systems and libraries) across cloud platforms, on-premises and public, requires a more effective orchestrator.

    To this end, Kubernetes (with Google’s backing) is competing against Apache Mesos in the open source space.
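
    To make the orchestration idea concrete, here is a minimal, hypothetical sketch (not drawn from Owens’ talk) using the official Kubernetes Python client to declare a three-replica deployment; the orchestrator, not the operator, then decides which nodes, on-premises or in a public cloud, actually run the containers:

        # Declare a small deployment; Kubernetes schedules the replicas across available nodes.
        from kubernetes import client, config

        config.load_kube_config()   # reads the local kubeconfig

        container = client.V1Container(
            name="web",
            image="nginx:1.11",     # any container image works here
            ports=[client.V1ContainerPort(container_port=80)],
        )
        template = client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        )
        spec = client.V1DeploymentSpec(
            replicas=3,             # desired state: three copies of the workload
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=template,
        )
        deployment = client.V1Deployment(
            api_version="apps/v1",
            kind="Deployment",
            metadata=client.V1ObjectMeta(name="web"),
            spec=spec,
        )
        client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)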

    “The first thing you have to do, when you’re making this move toward Kubernetes and cloud-native architectures,” said Cisco’s Owens today, “is you have to know where you’re going.  You have to have a vision, and have a set direction that you want to get to, and think about how you’re going to get there.  Don’t just start — Everyone just starts doing proofs-of-concept, and plays around with the technology.  That’s all good things to do, but at some point you have to know where you’re going to head with this new technology, and what you want to do as your organization grows.”

    Whose Level Playing Field Is It?

    Taken out of context and read as a paragraph by itself, Owens’ statement may seem a little generic.  But here, in this particular venue, there’s a point to his statement that could very easily be missed.

    The CNCF is a Linux Foundation project that came together in 2015 with the objective of assembling a platform around containerized workloads — the kind made popular that year by the rapid rise of Docker.  Google rapidly took the lead with the CNCF group, contributing code based on an orchestration concept it originally built in-house, called “Borg.”  The CNCF’s founding came literally on the heels of the creation of the Open Container Initiative, launched after Docker Inc. donated a critical piece of containerization technology — the container runtime — to the open source community.

    At first, there was confusion and even some dispute over why the OCI and CNCF were founded separately.  Didn’t both foundations seek to advance the cause of containerization?

    But since the start of this year, OCI’s mission has centered on standardizing the container format, which has helped wean containers from their tight association with Docker.  CNCF, meanwhile, has focused on the orchestration of workloads — today that means Docker and CoreOS rkt containers, though the scope could theoretically expand.  For example, a China-based firm called Hyper has constructed a way for Kubernetes to orchestrate hypervisor-driven workloads, and to pair containers with a new kind of hypervisor.

    All this while VMware (which we can assume will refuse to pay for any wall) “embraces and extends” the containerization concept, taking a page from Microsoft.  It has built two container platforms of its own — one that pairs containerized workloads with VMware ESX hypervisors in a hybrid environment with vSphere, and another that would replace vSphere with a fully containerized system built on its NSX network virtualization platform.

    In both of those environments, however, it may be possible to run Kubernetes.  And if you’re asking why, consider that vSphere’s management environment may not have the flexibility to control microservices in a manner that makes them scalable.  Theoretically, VMware’s platform can be set to manage lower-level infrastructure, while Kubernetes devotes itself purely to making microservices work on the upper tier of a cloud-native architecture.

    Point of Reference

    On Tuesday, CNCF Executive Director Dan Kohn gave attendees their first peek at what that architecture would look like, at least on paper.  In version 0.9 of what it calls the Cloud Native Reference Architecture (not to be confused with the IBM project of the same name), various open source components, including Docker and Kubernetes, are distributed among the tiers of a five-layer stack: from top to bottom, Application Definition / Development, Orchestration and Management, Runtime, Provisioning, and Infrastructure.  Arguably, Kubernetes would have a place in the second layer from the top, and Docker and CoreOS rkt (pronounced “rocket”) one layer below.
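
    Purely as an illustration of that layering (the mapping below is our reading of the slide, not CNCF’s official placement beyond the two components named above):

        # The five tiers of CNCF's Cloud Native Reference Architecture v0.9, top to bottom,
        # with the components the article places in them; the other tiers are left unpopulated.
        reference_architecture = [
            ("Application Definition / Development", []),
            ("Orchestration and Management", ["Kubernetes"]),
            ("Runtime", ["Docker", "CoreOS rkt"]),
            ("Provisioning", []),
            ("Infrastructure", []),
        ]

        for tier, components in reference_architecture:
            print(tier, "->", ", ".join(components) if components else "(not named in the article)")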

    “In the era of GitHub, people do not need to be told what to do, they need help, services and common infrastructure that we can provide,” wrote CNCF Technical Committee Chairman Alexis Richardson (also the CEO of SDN software maker Weave), in a CNCF blog post published Tuesday.  “We ask them how we can help — we don’t tell them, we don’t make them join committees.  We love open source, which is fast, more than open standards, which are important but emerge slowly over time.  We are not a kingmaker organization — we believe the market and community will select leaders (plural) in time.”

    It’s another way of saying, don’t build walls.  The problem is, with five tiers in the emerging CNCF reference architecture, there are at least four opportunities for some very tall walls — or some nice fences.

