Data Center Knowledge | News and analysis for the data center industry
Tuesday, December 1st, 2015
Hewlett Packard Enterprise Rethinks Enterprise Computing

Fighting to hold on to a leading position in the data center of the future, Hewlett Packard Enterprise today unveiled a vision of enterprise computing infrastructure that is very different from the world of computing of the past several decades, in which the company earned its current dominance.
The vision is “composable infrastructure.” Devised under the codename “Project Synergy,” it is infrastructure that quickly and easily reconfigures itself based on each application’s needs, designed for the new world where companies even in the most traditional of industries, from banking to agriculture, constantly churn out software products and generally look to software as a way to stand out among peers.
HP, the company HPE was part of until the beginning of last month, and the other big hardware vendors that have dominated the data center market for decades, including Dell, IBM, and Cisco, have all struggled to maintain growth in a world where not only developers but also big traditional enterprise customers deploy more and more applications in the public cloud, using services from the likes of Amazon and Microsoft.
Enterprises increasingly look at cloud infrastructure services as a way to reduce the amount of data center capacity they own and operate, painting a picture of a future in which today’s dominant hardware suppliers have a much smaller role to play.
Fluid Resource Pools
At least for the foreseeable future, however, companies will not be ready to move all of their critical data and applications to the cloud. HPE hopes the applications they choose to keep in-house, both existing and new, will run on its new enterprise computing infrastructure.
It is both hardware and a software stack that manages and orchestrates it. It breaks up compute, storage, and networking into individual modules, all sitting in what the company calls “frames.” Each frame is a chassis that can hold any mix of compute or storage modules a customer desires, plus a networking device that interconnects resources inside the frame and resources in other frames. Any interconnection setup is possible.
The idea is to create virtual pools of compute, storage, or networking resources, regardless of which chassis the physical resources are sitting in, and to provision just the right amount of each type of resource for every application almost on the fly to support the accelerating software release cycle many enterprises now have.
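To make the frame-and-module model concrete, here is a minimal, purely illustrative Python sketch; the class names, fields, and capacities are our own assumptions, not HPE’s actual software. It shows compute and storage modules in several frames being aggregated into logical pools that span chassis:

from dataclasses import dataclass, field

@dataclass
class Module:
    kind: str      # "compute" or "storage" (hypothetical labels)
    capacity: int  # cores for compute modules, TB for storage modules

@dataclass
class Frame:
    """A chassis holding an arbitrary mix of compute and storage modules."""
    name: str
    modules: list = field(default_factory=list)

def pool(frames, kind):
    """Aggregate the capacity of one module type across all frames."""
    return sum(m.capacity for f in frames for m in f.modules if m.kind == kind)

# Two frames with different mixes of modules
frames = [
    Frame("frame-1", [Module("compute", 64), Module("storage", 100)]),
    Frame("frame-2", [Module("compute", 32), Module("storage", 300)]),
]

# Logical pools span frames; an application is provisioned from the pools,
# not from any particular chassis.
print("compute pool (cores):", pool(frames, "compute"))  # 96
print("storage pool (TB):", pool(frames, "storage"))     # 400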
Not a New Idea
While radically different from the traditional data center environment, where every resource is often overprovisioned just in case demand rises, or where some resources, such as compute, are overprovisioned while others aren’t, the idea isn’t new.
Facebook and Intel introduced the idea of the “disaggregated rack,” or, in Intel’s parlance, Rack Scale Architecture, in 2013. One purpose was to provision the right amount of resources for every application; another was to enable Facebook data center managers to upgrade individual server components, such as CPUs, hard drives, or memory, without having to replace entire pizza-box servers.
Using software to create virtual pools of resources out of disparate physical resources that can sit in different parts of the data center also isn’t a new concept. Open source software called Mesos, for example, creates virtual pools of resources using existing hardware in the data center. Mesosphere, a startup that built a commercial product based on Mesos, sells what it calls a Data Center Operating System, which essentially presents all resources in the data center to the applications as a single computer.
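To illustrate the “single computer” abstraction, the sketch below submits an application definition to a Marathon scheduler running on Mesos, asking for CPU and memory from the cluster-wide pool rather than from any specific machine. The endpoint address is a placeholder and the resource figures are arbitrary; this is a minimal sketch, not a production deployment:

import json
import urllib.request

MARATHON = "http://marathon.example.com:8080"  # placeholder scheduler address

# The app requests resources from the cluster-wide pool; Mesos decides
# which physical machines actually run the instances.
app = {
    "id": "/analytics-worker",
    "cmd": "python worker.py",
    "cpus": 0.5,     # fraction of a core per instance
    "mem": 256,      # MB per instance
    "instances": 3,
}

req = urllib.request.Request(
    MARATHON + "/v2/apps",
    data=json.dumps(app).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# response = urllib.request.urlopen(req)  # requires a reachable Marathon instance
# print(response.read().decode())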
Unified API for Faster Automation
A key element of HPE’s composable infrastructure is an open unified API that DevOps staff can use to write infrastructure automation code. It replaces multiple APIs they usually have to program for separately in a more traditional environment.
In one example, Paul Durzan, a VP of product management at HPE, listed nine APIs DevOps usually have to code for to automate the way applications use infrastructure. They included, among others, APIs to update firmware and drivers, select BIOS settings, set unique identifiers, install OS, configure storage arrays, and configure network connectivity.
DevOps staff, who are usually the ones programming this, aren’t always familiar with the physical infrastructure in the data center, so they have to communicate with the infrastructure team, which prolongs the process further, Durzan said, adding that it can take up to 120 hours to write automation code for all the APIs.
HPE’s single-API alternative, the company claims, enables automation with a single line of code that invokes a pre-defined template. The infrastructure admins control the templates that can be used by DevOps tools, such as Chef, Puppet, or Docker.
According to HPE, that single line of code may look something like this:
New-HPOVProfile -name $name -baseline $base -sanStorage $san -server $server
It is “one API that can reach down and program your whole infrastructure,” Durzan said. The API is powered by HP OneView, the company’s infrastructure management platform that has been around for about two years.
One template, for example, could be for a SQL database running on bare-metal servers using flash storage; another could be a cluster of servers virtualized using hypervisors with flash storage; there could also be a unified communications template for Microsoft’s Skype for Business.
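For teams driving the platform from scripts rather than PowerShell, a template invocation maps onto a single REST call against OneView’s API. The sketch below is an approximation only: the /rest/server-profiles path follows OneView’s REST conventions, but the payload fields, URIs, and header values shown here are simplified assumptions rather than a verbatim reproduction of the API:

import json
import urllib.request

ONEVIEW = "https://oneview.example.com"  # hypothetical appliance address
TOKEN = "session-token-from-a-prior-login-call"

# One request describes the whole profile (firmware baseline, SAN storage,
# target server) instead of separate calls to each subsystem's API.
profile = {
    "name": "sql-bare-metal-01",
    "firmwareBaselineUri": "/rest/firmware-drivers/sql-baseline",  # illustrative
    "sanStorage": {"volumeAttachments": []},                       # illustrative
    "serverHardwareUri": "/rest/server-hardware/enc1-bay3",        # illustrative
}

req = urllib.request.Request(
    ONEVIEW + "/rest/server-profiles",
    data=json.dumps(profile).encode(),
    headers={
        "Content-Type": "application/json",
        "Auth": TOKEN,           # session token header (assumed)
        "X-API-Version": "200",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # commented out: requires a real appliance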
‘Trying the Right Things’
While HPE’s composable-infrastructure ideas aren’t new, the company’s scale, existing customer relationships, and the breadth of its services organization are substantial advantages. As the superstar Silicon Valley venture capitalist Vinod Khosla recently pointed out at the Structure conference in San Francisco, IBM, Dell, HP, and Cisco are all “trying the right things,” even though they haven’t come up with new, truly innovative ideas in decades.
HPE may also be better positioned to compete in the data center market as a smaller, nimbler company than it was before the split from HP’s consumer and printing business. Its first results post-split, announced last week, showed that HPE has much better growth prospects than the other of HP’s two daughter cells.
Understanding the Economics of HPC in the Cloud

The same improvements virtualization and cloud have brought to traditional data centers are coming to the world of high-performance computing.
HPC in the cloud was a major discussion topic at last month’s SC15 supercomputing conference in Austin. Diane Bryant, senior VP and general manager of Intel’s Data Center Group, discussed new types of products ranging from high-end co-processors for supercomputers to Big Data analytics solutions and high-density systems for the cloud. An SC15 paper discusses the rising popularity of cloud for scientific applications.
HPC users have approached cloud computing cautiously due to performance overhead associated with virtualization and interference caused by multiple VMs sharing physical hosts. Recent developments in virtualization and containerization have alleviated some of these concerns. However, the applicability of such technologies to HPC has not yet been thoroughly explored. Furthermore, scalability of scientific applications in the context of virtualized or containerized environments has not been well studied.
Cloud Computing Distribution and Scale
Scale-out architecture has become a hot topic in the HPC community. Resource utilization for high-end workloads running on supercomputer technology requires very careful resource management, and Big Data is one big driver of that. Many organizations are virtualizing HPC clusters so they can extend their environment into a hybrid cloud platform. Doing this without powerful automation and connectivity technologies can be cumbersome. Technologies like VMware vCloud Automation Center, when coupled with the VMware vCAC API, allow organizations to scale their platforms from a private cloud out to public cloud resources. With that kind of automation in place, replication and resiliency become much easier to control for a vHPC platform.
Now, let’s look at a few use cases where HPC workloads can live in a virtual and cloud-ready environment.
- Server platforms. There are a lot of new types of server platforms and systems being developed specifically for HPC and parallel workload processing. Crucially, these systems are now virtualization and cloud ready. Ever hear of HP’s Moonshot platform? It’s a chassis that shares power, cooling, management, and fabric across 45 individually serviceable hot-plug server cartridges. What’s it perfect for? Running cloud-based applications capable of handling large numbers of parallel, task-oriented workloads. Now imagine deploying a virtual platform on top of this type of server architecture. Imagine being able to better control resources and migrate your data. These new types of server platforms lend themselves to more optimization and better utilization.
- New workload types. A number of new types of workloads are being run on top of a vHPC platform. Everything from big data to life science applications can be found on some type of HPC system. Whether you’re doing a geological study, design automation, or quantifying large data sets, optimization and data resiliency are critical. Through it all, virtualization introduces a new paradigm to consider.
- Bring-your-own software stacks. Traditional HPC clusters run a single standard OS and software stack across all nodes. This uniformity makes it easy to schedule jobs, but it limits the flexibility of these environments, especially where multiple user populations need to be served on a single shared resource. There are many situations in which individual researchers or engineers require specific software stacks to run their applications: consider, for example, a researcher who is part of a scientific community that has standardized its software environment to make it easy to share programs and data. To prevent islands of compute, HPC virtualization allows researchers to “bring their own software” onto a virtualized HPC cluster. In short, vHPC enables the creation of shared compute resources while maintaining the ability for individual teams to fully customize their OS and software stack.
We are beginning to see new applications and deployment methods for HPC applications and workloads. However, before everyone begins to migrate their HPC environment to a virtual ecosystem, there are a few things to be aware of.
It’s important to understand where HPC and even cloud-ready virtual HPC environments have limitations and cost concerns.
- Resource utilization and scale. Islands of compute can become a real problem for HPC environments. Organizations, often academic institutions, end up with many islands of HPC due either to departmental boundaries (in commercial settings) or to the mechanics of grant funding, which gives individual researchers money to buy hardware for their own research. This is an inefficient use of resources, and virtualization and cloud can fall victim to it as well. Although you can consolidate your resources, it’s critical to know where HPC resources are being used and how. “Resource sprawl” can negatively impact vHPC economics and prevent proper scale, and it happens when the same control islands and policies are simply transferred to a virtual environment.
- Data agility and scale. HPC data sets are critical, and the workloads running against them must be agile and capable of scale. Here’s an example: without virtualization and cloud expansion capabilities, quickly flexing the amount of resource available to individual researchers can be a real challenge. With virtualization, resources can be rapidly provisioned to a new (virtual) HPC cluster for a researcher rather than having to order gear. However, not every workload is designed to be virtualized or delivered via cloud. You can negatively impact agility by virtualizing an HPC workload that needs dedicated on-premises resources. Remember, the principles of traditional server virtualization don’t always carry over to HPC virtualization.
- Density and consolidation. Consolidation is very common in enterprise IT environments because those applications are generally not that resource-intensive. This consolidation allows customers to reduce their hardware footprint while still meeting their QoS requirements. By contrast, HPC administrators generally never want to reduce the amount of hardware their workloads are utilizing. In fact, they are always looking for more hardware to allow them to run bigger problems or to generate better answers for existing challenges. And, because this is High Performance Computing, these workloads will almost never be over-subscribed onto hardware: they will not run more vCPUs than there are physical cores in the system (see the sketch after this list). This means that some of your systems will simply require non-virtualized parallel-processing capabilities. Administrators must know when to virtualize and which workloads require additional scale rather than raw processing power.
- Money (and budgets). Organizations are still concerned about how much money is being spent on physical systems and how best to spend it to maximize value to end-users. Often HPC sites, especially academic ones, buy as much gear as they can and then rely on cheap labor to handle the operational complexities. But is this the best approach? Could putting a “better” software infrastructure in place be a more optimal use of funds? The answer is “yes and no.” Not every HPC workload is meant to be virtualized. If an organization sets out to place an HPC application into a vHPC ecosystem, it needs to make sure that doing so will actually optimize the entire process. Otherwise, performance could suffer, processing could take longer, and resources wouldn’t be utilized properly, all of which leads to higher cost.
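The no-over-subscription rule from the density point above can be expressed as a simple placement check. In the sketch below, the host and VM sizes are invented purely for illustration:

def oversubscribed(physical_cores, vm_vcpu_counts):
    """Return True if the VMs on a host ask for more vCPUs than there are cores."""
    return sum(vm_vcpu_counts) > physical_cores

# A hypothetical two-socket host with 16 cores per socket
host_cores = 32

enterprise_vms = [4, 4, 8, 8, 8, 8]  # a consolidated enterprise mix: 40 vCPUs
hpc_vms = [16, 16]                   # HPC VMs sized 1:1 against physical cores

print(oversubscribed(host_cores, enterprise_vms))  # True: acceptable for bursty enterprise apps
print(oversubscribed(host_cores, hpc_vms))         # False: what an HPC admin expects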
A recent 451 Research study showed that the average cost of running an AWS multi-service cloud application is $2.56 per hour, or just $1,865 per month, which includes bandwidth, storage, databases, compute, support, and load balancing in a non-geographically resilient configuration. At this hourly price for an application that could potentially deliver in excess of 100,000 page views per month, it’s easy to see how cloud is a compelling proposition. When we look at HPC, however, the conversation shifts to different concerns. As the HPC report from SC15 states, such virtualization comes, first, with an associated performance overhead. Second, virtual instances in the cloud are co-located to improve the overall utilization of the cluster, and co-location leads to prohibitive performance variability.
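Those 451 Research figures convert roughly as follows, assuming an average billing month of about 730 hours; the per-page-view number is our own derived illustration, not from the study:

hourly_rate = 2.56      # USD per hour, from the 451 Research study
hours_per_month = 730   # common cloud-billing approximation of a month

monthly_cost = hourly_rate * hours_per_month
print(f"monthly cost: ${monthly_cost:,.2f}")   # about $1,869, in line with the quoted ~$1,865

page_views = 100_000    # the article's example traffic level
print(f"cost per page view: ${monthly_cost / page_views:.4f}")  # just under 2 cents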
In spite of these concerns, cloud computing is gaining popularity in HPC due to high availability, lower queue wait times, and flexibility to support different types of applications (including legacy application support).
Stop Making a Big Stink About Data Center PUE

Zahl Limbuwala is CEO of Romonet.
Data center managers are finding that their Chief Financial Officers are increasingly interested in the organization’s infrastructure. This is only natural; data centers represent a major investment for the business, and CFOs will want to be certain they’re getting value for money.
The challenge, then, is how to present this value to CFOs and sell them on a particular investment choice. Power Usage Effectiveness (PUE) has often been cited as the de facto indicator of data center performance, but the definition has been stretched to the point where it is almost unusable. As we know, PUE is the ratio of the total energy taken in by a data center to the energy actually used by IT, with 1.0 being the impossible ideal. While this can give an idea of efficiency, it does not provide the full picture. Instead of focusing on abstract measurements like PUE, organizations should concentrate on delivering the best cost for the business.
Traps and Dead Ends
“Design” PUE is also not particularly useful. Instead of giving a realistic view of data center performance, it shows how a data center would run in its optimum state. Unsurprisingly, a data center may well never achieve that state in its lifetime, let alone immediately after it opens for business.
PUE makes two assumptions that cause problems: first, that all IT load is good; and second, that all IT equipment is of equal efficiency and value. If I have two data centers, one with an exceptional PUE but using IT equipment with no power management and hosting non-critical development platforms, and a second with a poor PUE but correct power management and hosting only essential business IT servers, I need to know more to understand which is actually the more efficient. In the same vein, a half-full data center will have a very different PUE from one running at full capacity, yet PUE won’t show which of these actually costs the business more. Another counter-intuitive example is turning equipment off overnight to reduce costs and emissions. This worsens PUE, since IT load has dropped while the facility’s overhead power consumption remains roughly constant, meaning managers may receive reports suggesting that power-saving measures are actually reducing efficiency.
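The overnight-shutdown effect is easy to see with numbers. In the minimal sketch below (all facility figures are invented for illustration), PUE is computed as total facility power divided by IT power; when IT load falls while overhead stays flat, PUE worsens even though total consumption drops:

def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_kw

overhead_kw = 400  # cooling, UPS losses, lighting: roughly constant here

day_it = 1000      # full daytime IT load (kW)
night_it = 500     # half the IT equipment switched off overnight (kW)

print(f"daytime PUE:   {pue(day_it + overhead_kw, day_it):.2f}")      # 1.40
print(f"overnight PUE: {pue(night_it + overhead_kw, night_it):.2f}")  # 1.80

# Total draw fell from 1,400 kW to 900 kW, yet the PUE figure got worse.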
Getting the Metrics Right
We shouldn’t assume that PUE is now worthless. Used correctly, it can still give an impression of data center efficiency. However, organizations need to be sure that they are using metrics that show the true value of their investment. The first step should be establishing the Total Cost of Ownership (TCO) of the data center. Aiming for better efficiency is all well and good, but if it doesn’t result in improved profitability of services or reduced costs, there’s little point.
Secondly, we need to consider what value the organization receives from its data centers. To do this, use metrics that work in the context of what the organization wants to achieve. For example, an online marketplace like eBay will be concerned about cost per transaction, while other enterprises might want to know the precise cost of each email inbox or other crucial business service. Once businesses know these costs, they can optimize them; improved efficiency and reduced carbon emissions follow naturally.
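As a simple illustration of such a business-facing metric (every figure below is hypothetical), dividing a service’s share of data center TCO by the units of service it delivers yields a number a CFO can act on:

# Hypothetical annual TCO attributed to the corporate email service
email_tco = 1_200_000  # USD per year: facility share, hardware, licences, staff
mailboxes = 40_000

cost_per_inbox = email_tco / mailboxes
print(f"cost per inbox per year:  ${cost_per_inbox:.2f}")       # $30.00
print(f"cost per inbox per month: ${cost_per_inbox / 12:.2f}")  # $2.50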
As mentioned, the CFO is the ultimate decision-maker behind any IT investment. Their only concern is delivering IT at the lowest cost to the business and avoiding unnecessary investment. Being able to provide precise, real-world costs for individual IT services, instead of more abstract figures such as PUE, will provide a huge benefit to data center teams pushing for investment (and fending off the challenge of the cloud, with its clearly delineated costs).
Measuring Up
Unless PUE is used correctly, and in the context of business costs, it is little more than a marketing number. Efficiencies in power consumption and management will follow if organizations aim to deliver the best TCO. Understanding the TCO of the entire data center estate, knowing whether it is in line with the business’s expectations, and being sure of what investment, if any, is required will allow organizations to make the best use of their resources well into the foreseeable future.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Top 10 Data Center Stories of November 2015

Here is a recap of 10 of the most popular stories that were published on Data Center Knowledge in November.
Why CenturyLink Doesn’t Want to Own Data Centers – CenturyLink’s colocation business, whose seeds were sown four years ago with the $2.5 billion acquisition of Savvis, is not doing well. Colo revenue is not growing, and the telecommunications giant is looking for ways to avoid investing more capital in the segment.
Who May Use the World’s First Floating Data Center? – Nautilus staff have taken many IT execs on tours of the prototype and the construction site on the barge, the company’s execs said. Those who participated in the proof of concept include Silicon Valley’s A10 Networks and Applied Materials, as well as the US Navy itself.
 This barge, docked at the US Navy base at Mare Island in Vallejo, California, will soon hold the first floating data center built and operated by Nautilus Data Technologies (Photo: Nautilus Data Technologies)
Data Center Network Traffic Four Years from Now: 10 Key Figures – Four years from now, cloud traffic will account for most data center network traffic, according to Cisco.
Telecity Data Center Outage in London Dings Cloud, Internet Exchange – Two consecutive power outages at one of TelecityGroup’s data centers in London disrupted operations for many customers, including the London Internet Exchange and AWS Direct Connect, the service that connects companies to Amazon’s cloud through private network links.
Why CA Stopped Selling its DCIM Software Suite – While CA was considered a market leader – because of its vision for DCIM – the vision never translated into a lot of sales, save for several big contracts, including with Facebook and the NTT-owned data center provider RagingWire.
 (Photo: CA Technologies)
Europe Greenlights Equinix-Telecity Merger, With Caveats – The European Commission issued the approval on the condition that the two companies sell eight specific data centers in London, Amsterdam, and Frankfurt.
 Inside one of TelecityGroup’s Dublin data centers (Photo: TelecityGroup)
Linus Torvalds: Perfect Security in Linux is Impossible – Does Linus Torvalds fail to take security in the Linux kernel seriously, and is the world doomed because of it?
When is the Best Time to Retire a Server? – There is always a point in time at which holding on to a server becomes more costly than replacing it with a new one. Finding out exactly when that point comes requires a calculation that takes into account all capital and operational expenditures associated with owning and operating that server over time.
Juniper Opens Data Center Network OS – Both the open version of Junos and Juniper’s new QFX5200 access switches, which support 25/50 Gigabit Ethernet, can be bought together or separately. When bought together, however, they enable deployment of third-party network services or applications directly on the Juniper platform.
US Data Center REITs Enjoying a Booming Market – US data center providers operating as Real Estate Investment Trusts all reported high rates of revenue growth in the third quarter compared to one year ago. All of them are building out more capacity across major US markets in response to high demand.
 DuPont Fabros Technology’s ACC2 data center in Ashburn, Virginia (Photo: DuPont Fabros)
Five Cybersecurity Threats to Watch Out for in 2016 
This article originally appeared at The WHIR
The new challenges for online security in 2016 include new Internet of Things (IoT) exploits and malware that can escape sandboxes and break out of isolated virtual machines onto the host operating system. And, all the while, hackers are finding new ways to avoid detection and hide evidence of tampering.
These are some of the findings of New Rules: The Evolving Threat Landscape in 2016, a new report from FortiGuard Labs, the threat research division of cybersecurity provider Fortinet, based on the analysis of threat intelligence feeds from millions of devices deployed worldwide.
The report notes that, similar to years past, IoT and cloud technologies are key enabling technologies but they’re also subject to new malicious tactics and strategies that service providers and organizations will have to deal with. And evasion techniques will increasingly overcome detection and forensic investigation from law enforcement, meaning that systems could remain compromised for longer after security incidents – increasing their potential impact.
The top five cybersecurity trends for 2016 include:
1. M2M Attacks and Propagation Between Devices
FortiGuard researchers anticipate that IoT devices lacking adequate security could be an easy entry point for attackers. Connected consumer devices could provide a foothold within corporate networks to wage a “land and expand” attack.
Proofs of concept for this type of attack were seen in 2015, and FortiGuard expects further development of exploits and malware that target trusted communication protocols between these devices and the network.
2. Worms and Viruses Targeting IoT Devices
While worms and viruses have been costly and damaging in the past, the potential for harm when they can propagate among millions or billions of devices from wearables to medical hardware is orders of magnitude greater. FortiGuard researchers and others have already demonstrated that it is possible to infect headless devices with small amounts of code that can propagate and persist. Worms and viruses that can propagate from device to device are definitely on the radar.
3. Attacks On Cloud and Virtualized Infrastructure
Virtualization might not provide the isolation needed to keep threats contained within virtual machines. Vulnerabilities like VENOM suggest that malware could escape from a guest virtual machine and access the host operating system. This could mean that vulnerabilities within one client system (even a mobile device) could compromise an entire public or private cloud system.
4. Undetectable “Ghostware” Attacks
FortiGuard predicts the use of “ghostware” that erases the indicators of compromise, making it difficult for organizations to track the extent of data loss or what systems are compromised.
Researchers also predict that “Blastware” like Rombertik, which is designed to destroy or disable a system when it is detected, will grow in 2016, but undetectable Ghostware could haunt systems for a long time.
5. Malware That Tricks Sandboxes
Sandboxing is sort of like a bomb disposal container where any potentially dangerous activity is set off in a controlled environment. Executing runtimes in a self-contained sandbox helps determine if code has a malicious payload to deliver.
But what’s interesting is that researchers have found “two-faced malware” that behaves benignly during sandbox inspection, so it passes the check and delivers its payload only when executed on the system proper.
Blackhat hackers are finding new ways to exploit trends in devices and IT delivery, making it important for service providers and organizations to keep security in mind while adopting new technologies – and updating their existing services as new vulnerabilities are found.
This first ran at http://www.thewhir.com/web-hosting-news/5-cybersecurity-threats-to-watch-out-for-in-2016