Data Center Knowledge | News and analysis for the data center industry
Tuesday, August 4th, 2015
4:01a
IIX Unveils SaaS Platform for Data Center Network Interconnection
Using private network connections to bypass the public internet when connecting to public cloud services has been a popular way for more security- and performance-conscious enterprises to take advantage of public cloud.
A number of data center colocation companies have been providing such connections as a service, and for some it has proven to be a major new avenue for growth. Equinix, the world’s largest data center provider, said its direct network on-ramps to cloud providers are its fastest-growing business.
But according to Al Burgio, CEO and founder of a startup called IIX, standing up and managing network interconnection is a complex task most enterprises don’t have the expertise to perform, which translates into high cost of using such services. IIX believes it has an answer to this problem.
Its new subsidiary Console, launched recently, will offer a Software-as-a-Service platform that reportedly makes it easy for any enterprise to connect to any service provider in any data center around the world. The service isn’t publicly available yet, and Burgio did not want to disclose many details of how the service would work, saying more information will be available after a formal unveiling in September.
The service goes beyond private links to public cloud. It includes network interconnection between data centers, connections to network carriers and to a multitude of other types of service providers.
The goal is “helping change the way enterprises connect to their customers, vendors, and partners,” Burgio said. There’s demand among enterprises for more advanced solutions than the interconnection services data center providers generally offer today.
“Simply providing a Layer 1 or Layer 2 connection … is great, but it’s not the full solution that really will help with making it simple and easy for the enterprise to use,” he said.
Console will be IIX’s “SaaS pure play,” Burgio said. The company also has a service that offers private connections to peering exchanges around the world.
IIX raised a $10.4 million Series A funding round last year. Also last year, it acquired Allegro Networks, a UK-based technology company with connectivity automation capabilities.
IIX’s network now extends across about 150 nodes. It reached this footprint after it acquired a competitor called IX-Reach earlier this year.
The nodes today are located primarily in North America and Europe, but the company is expanding into Asia Pacific. It has already selected node locations in Hong Kong and Singapore, Burgio said.
12:00p
Salesforce Latest Convert to the Web-Scale Data Center Way
Once a cloud services company reaches a certain size, web-scale data center economics start to make a lot of sense, and it looks like Salesforce is the latest major service provider to have crossed that threshold.
The company is going through a “massive transformation” of the way it runs infrastructure, according to TJ Kniveton, VP of infrastructure engineering at Salesforce. It is moving from lots of specialized custom server specs and manual configuration work to the approach web-scale data center operators like Google and Facebook use: standardizing on bare-bones servers and implementing sophisticated data center automation tools, many of them built in-house.
Kniveton described the transition while sitting on a panel with infrastructure technologists from Google and Joyent at the recent DCD Internet conference in San Francisco.
His team is looking at all the infrastructure innovation web-scale data center operators have done over the past 10 years or so and adapting much of it to Salesforce's own needs. It is a wholesale redefinition of the relationships between data center providers (Salesforce uses data center providers rather than building its own facilities), hardware and software vendors, and the developers who build software that runs on top of that infrastructure, Kniveton said.
One big change is transitioning to a single server spec as opposed to having a different server configuration for every type of application. When Microsoft announced it was joining Facebook’s Open Compute Project, the open source data center and hardware design initiative, early last year, it kicked off a similar change in strategy, standardizing on a single server design across its infrastructure to leverage economies of scale.
Facebook’s approach is slightly different. While it has standardized servers to a high degree, it uses several different configurations based on the type of workload each server processes.
Kniveton wants to take standardization further at Salesforce, standardizing on a single spec. He did not provide details about the design, but said there were lots of benefits to cutting down to one configuration.
Another big change is relying much more on software for things like reliability and general server management. Like web-scale operators, Salesforce is going to rely on software to make its applications resilient rather than ensuring each individual piece of hardware runs around the clock without incident.
“I’m doing a lot more software development now than we’ve ever done before,” Kniveton said.
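To make that shift concrete, below is a minimal, purely illustrative sketch of the kind of application-level resilience web-scale operators rely on: a request is retried across a pool of replicas, so a failed server costs capacity rather than availability. The hostnames, failure rates, and retry policy are hypothetical, not Salesforce's tooling.

import random
import time

# Toy illustration: instead of assuming any single server stays healthy, the
# application retries a request against a pool of replicas, so a failed node
# costs capacity rather than availability.
REPLICAS = ["app-01.example.internal", "app-02.example.internal", "app-03.example.internal"]

def call_replica(host):
    # Stand-in for a real RPC; randomly fails to simulate flaky hardware.
    if random.random() < 0.3:
        raise ConnectionError(host + " unreachable")
    return "response from " + host

def resilient_call(retries_per_host=2):
    last_error = None
    for host in random.sample(REPLICAS, k=len(REPLICAS)):   # spread load, avoid a fixed order
        for attempt in range(retries_per_host):
            try:
                return call_replica(host)
            except ConnectionError as err:
                last_error = err
                time.sleep(0.1 * (attempt + 1))              # simple backoff before retrying
    raise RuntimeError("all replicas failed") from last_error

print(resilient_call())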
Automation: the Glue Between Applications and Infrastructure
Much of that software work goes into data center automation, so that computers can do the manual work of human system administrators. His ultimate goal is not just automation of simple tasks but creation of autonomous systems that configure themselves to provide the best possible infrastructure for the application at hand.
The efforts at Salesforce rely to some extent on open source technologies, but his team found that not everything they need is open source. “There are building blocks out there,” he said, but the team still has to create a lot of technology in-house, and he hopes to open source some of it in the future.
Data center automation is crucial to the web-scale approach. As Geng Li, Google’s CTO of enterprise infrastructure, put it, automation is the glue that keeps everything together. “It’s not just a set of technologies you buy from a vendor or a set of vendors,” Li said. It’s about having a software-oriented operations team to “glue” the workload the data center is supporting to the infrastructure.
Automation enables a single admin to manage thousands of servers, which is the only way to manage infrastructure at such scale. There are no sys admins at Google data centers, Li said. There is a role at Google called Site Reliability Engineer. “Those guys are software developers,” he said. Such an engineer receives a service to support and it’s her or his responsibility to automate the infrastructure to properly support that service.
Automation also helps increase utilization rates of the infrastructure. It enables virtualization or abstraction of the physical pieces and creation of virtual pools of resources that can be used by applications. All flash memory capacity available in a cluster of servers, for example, can be treated as a single flash resource and carved up as such, as opposed to individual applications using some flash resources on certain servers, leaving a lot of free capacity idling.
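As a rough illustration of that pooling idea, the following sketch aggregates the spare flash on several servers into one allocatable resource and carves it up across servers on request. The server names, capacities, and the allocator itself are hypothetical, not any vendor's actual system.

from dataclasses import dataclass, field

# Minimal sketch of the pooling idea described above; server names, sizes, and
# the allocator are hypothetical.
@dataclass
class FlashPool:
    free_gb: dict = field(default_factory=dict)   # server name -> unused flash in GB

    def add_server(self, name, flash_gb):
        self.free_gb[name] = flash_gb

    def total_free(self):
        return sum(self.free_gb.values())

    def allocate(self, app, need_gb):
        # Carve an allocation out of whichever servers have spare capacity,
        # largest free chunks first, instead of leaving capacity idle per server.
        if need_gb > self.total_free():
            raise ValueError(app + ": only " + str(self.total_free()) + " GB free in pool")
        grant = {}
        for server, free in sorted(self.free_gb.items(), key=lambda kv: -kv[1]):
            if need_gb == 0:
                break
            take = min(free, need_gb)
            if take:
                grant[server] = take
                self.free_gb[server] -= take
                need_gb -= take
        return grant

pool = FlashPool()
for i in range(4):
    pool.add_server("server-%d" % i, 800)
print(pool.allocate("analytics-db", 1500))   # the allocation spans servers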
The Risk of Ripple Effect
Obviously, the bigger and more automated the system, the bigger the magnitude of the impact if there is an issue. In a highly automated system where everything is interconnected, a single software bug can cascade in ways that were not anticipated by the software developers, causing widespread service outages.
Bryan Cantrill, CTO of the cloud provider Joyent and the third panelist, warned about the dangers of too much automation, where a single mistake can have a disastrous ripple effect across the entire infrastructure. “You are replacing the fat finger of the data center operator with the fat finger of the software developer,” he said.
Kniveton acknowledged the risk, saying the automated approach means more thinking needs to go into avoiding scenarios where the effects of a single mistake can be greatly magnified. “With great power comes great responsibility,” he said.
3:30p
Lack of Water is Chilling Reality for California Data Centers
Jorge Balcells is Director of Technical Services for Verne Global.
Being in the limelight is a pretty comfortable place for California. As the home to Hollywood and Silicon Valley, the spotlight usually shines pretty brightly on whatever is happening in the state. These days the focus is on the severe drought conditions California is currently experiencing, and the data center industry, like many others, is paying particularly close attention. While water is the issue here, it is really just one of the most recent examples of location-based infrastructure issues facing the data center industry. How are CIOs and data center managers finding innovative alternatives for coping with these types of issues? One answer: decoupling data centers from the locations where corporate headquarters are located.
As we know, up to 40 percent of a data center's power costs go to keeping servers cool. In California and other western parts of the country, areas with relatively cool climates and low humidity were particularly attractive for establishing data centers, which led to a building boom in the dry desert and mountain regions stretching from eastern Washington state to Phoenix, Arizona. While these locations are good for lowering energy costs, there are extensive water costs tied to a commonly used technique called adiabatic, or evaporative, cooling.
Adiabatic cooling is used to extend the hours available for free cooling at certain ambient temperatures. With this process, millions of liters of water are evaporated to cool down servers within the data center, and in this open-loop scenario that water must be replaced. Consider a data center in Reno, Nevada, running a 1-megawatt IT load 24/7 for a 12-month period: in an average year, the ambient air temperature exceeds 18°C for 2,285 hours, or 26.1 percent of the year. During those hours the evaporative portion of the cooling process is active, and the total water consumed over the year comes to 2.54 million liters, or roughly 212,000 liters per month on average. That is a large amount of water consumption for any location, much less an area suffering from an extreme drought.
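Those figures are easy to sanity-check with back-of-the-envelope arithmetic; the short calculation below reproduces the percentage and monthly average cited above (the per-cooling-hour intensity at the end is a derived number, not one given in the article).

# Back-of-the-envelope check of the Reno figures cited above.
hours_per_year = 365 * 24            # 8,760 hours
hours_above_18c = 2285               # hours per year when evaporative cooling is active
annual_water_liters = 2_540_000      # 2.54 million liters evaporated per year

share_of_year = hours_above_18c / hours_per_year                    # ~0.261, i.e. 26.1%
avg_liters_per_month = annual_water_liters / 12                     # ~212,000 liters
liters_per_cooling_hour = annual_water_liters / hours_above_18c     # derived, not from the article

print("%.1f%% of the year" % (share_of_year * 100))
print("%.0f liters per month on average" % avg_liters_per_month)
print("%.0f liters per evaporative-cooling hour" % liters_per_cooling_hour)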
While the problem is a deficit in water in states like California and Nevada, other locations are experiencing their own unique infrastructure issues. In the UK, the National Grid has reported only a 3 to 4 percent power surplus for 2015. Companies wanting to scale data center deployments to support High Performance Computing (HPC) or Big Data Analytics, among other things, are limited by the amount of energy available. For other locations, such as Siberia, where there is an abundance of free cooling, the issue is having enough high-tech workers available to support a fully established data center industry. Power availability and a qualified workforce are key factors today in the data center site-selection process.
Data centers are being built in locations that weren’t even considered viable 10 years ago. Finland, Sweden, Norway, and Iceland have become ideal data center locations due to their immunity to the core infrastructure limitations found in other areas: there is abundant and renewable energy, 365 days a year of ambient free cooling, and an available high-tech work force. In Keflavik, Iceland, for example, the average ambient air temperature never exceeds 18°C and as a result, no water is used in the evaporation process.
While water is clearly an issue in California today, constant environmental changes will always create data center challenges for locations that are reliant on unstable factors. The fact that companies are starting to embrace the notion that data centers do not have to be tethered to the cities or regions where the company is located will give CIOs many more options in how they manage their data into the future.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
4:43p
HP Makes Hyper-Converged Infrastructure Appliance More Configurable
With more IT organizations starting to embrace hyper-converged infrastructure, where compute, storage, and sometimes networking come as a pre-integrated package, competition among major server vendors is getting fierce. Upping the ante in this category, HP today unveiled the ConvergedSystem 250-HC StoreVirtual (CS 250), an aggressively priced, configurable hyper-converged appliance.
The high degree of configurability is a big change. Previously, HP's hyper-converged appliances came in configurations that limited the number of nodes and storage devices customers could opt to deploy.
“Before we assumed the customers would want hyper-converged appliances to [come in] pre-defined chunks,” said Rob Strechay, director of product marketing and management for software-defined storage at HP. “Instead, we’ve discovered they want to be able to configure them for themselves.”
The new CS 250 can be configured with up to 96 processing cores, a mix of SSD and SAS disk drives, and up to 2TB of memory per four-node appliance, which is double the density of previous generations of HP’s hyper-converged appliances.
In terms of pricing, HP claims that a three-node configuration of the CS 250 is up to 49 percent more cost effective than comparable configurations from Nutanix, one of the more well-known hyper-converged infrastructure vendors, and other competitors. A three-node CS 250 with Foundation Carepack and VMware vSphere Enterprise starts at a list price of $121,483.
To help drive further adoption, HP is also providing three 4TB StoreVirtual Virtual Storage Appliance licenses that IT organizations can use to replicate data to any other StoreVirtual-based solution at no additional cost, in effect providing free backup for the appliances.
The CS 250 also comes pre-configured for vSphere 5.5 or 6.0 and HP OneView InstantOn, which HP claims enables customers to be production-ready with only 5 minutes of keyboard time and a total deployment time of 15 minutes.
The company is also introducing Software-Defined Storage Design and Integration services to help customers deploy highly scalable, elastic cloud storage services. The integration service provides customers with detailed configuration and implementation guidance tailored to their specific environments.
While not every data center environment is going to embrace hyper-converged appliances that combine servers and storage in one unified platform, Strechay said interest in this approach to IT infrastructure is rising rapidly because it allows IT organizations to reduce their dependency on specialists to manage specific types of IT infrastructure. In their place, IT organizations rely more on IT generalists to manage the infrastructure at higher levels of abstraction using software-defined tools.
The tradeoff is that traditional rack systems enable IT organizations to scale compute and storage independently of one another, which, depending on the nature of application workloads, may prove more technically desirable in environments where workloads don't scale linearly in terms of the compute and storage resources they require.
4:57p
Seagate Launches Its Next-Gen SAS Solid State Drive
Seagate has announced the first product to come out of its strategic alliance with Micron: a next-generation, high-capacity SAS solid state drive.
The new Seagate 1200.2 SAS SSD leads the next-gen SSD platform. The company claims it is the first 12 gigabits-per-second SAS device to optimize dual-channel throughput, with up to 1,800 megabytes per second of sequential reads. Micron is also launching new SAS SSD products based on this technology.
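A rough calculation suggests why dual-channel (dual-port) operation matters for that headline number, assuming the 8b/10b line encoding used by 12Gb/s SAS and ignoring protocol overhead; these are ceiling figures, not Seagate's published methodology.

# Rough ceiling figures: assumes SAS-3's 8b/10b line encoding (8 data bits per
# 10 bits on the wire) and ignores protocol overhead.
line_rate_gbps = 12
encoding_efficiency = 8 / 10

per_port_mb_s = line_rate_gbps * 1e9 * encoding_efficiency / 8 / 1e6   # ~1,200 MB/s per port
dual_port_mb_s = 2 * per_port_mb_s                                     # ~2,400 MB/s across both ports
claimed_read_mb_s = 1800

print(per_port_mb_s, dual_port_mb_s)
# The claimed 1,800 MB/s exceeds a single port's ceiling, so the drive has to
# spread reads across both SAS ports to reach it.
print(claimed_read_mb_s > per_port_mb_s)   # True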
The storage giant has evolved its product portfolio to match technology trends over the years. Within the past year it picked up the assets of LSI’s Accelerated Solutions division and Flash components division from Avago and formed a new Cloud Systems and Solutions division to focus on original equipment manufacturer solutions. Phil Brace, president of this new division, will keynote the Flash Memory Summit next week in Santa Clara, talking about the combination of flash and hard drives in the future data center.
Evolving with business needs and industry trends in flash, the 1200.2 SSD is engineered for both enterprise and cloud workloads, where speed, security, data protection, and reliability are of the utmost importance. Seagate’s 2.5 inch drives are offered in four tiers, balancing cost with endurance and performance, and scaling up to 4TB.
Catering to the workload-optimized needs of its intended audience, Seagate said the new drives feature three levels of security: secure diagnostics and download, self-encrypting drive, and FIPS drive. They also include next-generation Power Loss Protection in the event of unexpected power interruptions.
Seagate said it will offer a five-year drive warranty, even under write-intensive workloads.
5:30p
Cloud Hosting Firm Linode Opens New Data Center in Frankfurt
This article originally appeared at The WHIR
Cloud hosting provider Linode announced the launch of a new data center in Frankfurt, Germany, on Monday, its eighth data center and its first in continental Europe. The launch continues Linode's global expansion.
The new data center doubles Linode’s capacity in Europe, which previously all came from its London location, and it offers a number of regional advantages. As a financial and digital hub, Frankfurt is a natural choice for Linode’s expansion.
Over a third of Europe's Internet traffic travels through Frankfurt, which is home to the largest Internet exchange in the world by traffic, DE-CIX, and the company expects its services to benefit from abundant peering opportunities, according to a blog post. The new presence also allows domestic German customers to host data with Linode in compliance with the country's federal data protection act, or Bundesdatenschutzgesetz (BDSG), which imposes some of the world's strictest local data storage requirements.
“There is a growing demand for hosting infrastructure across Europe,” said Linode CEO Christopher S. Aker. “There is an exciting startup and technology development scene in Germany and throughout Europe. We are pleased to expand our simple and powerful cloud services to developers, entrepreneurs, and businesses in the region.”
Like the company’s London location, Linode is using colocation and data center services from TelecityGroup for its Frankfurt presence. Linode Frankfurt is KVM only, as Linode began offering generally available upgrades from Xen in its other data centers in June. Xen has been dogged by security issues lately, causing numerous headaches for service providers. Container hypervisor LXD was shown to run denser machines with lower latency than KVM in benchmarking by Canonical in May, but LXD does not support Windows, as KVM does, so for now KVM continues to be a popular hypervisor suitable for most end-users.
In a statement, COO Thomas Asaro touted Linode’s superior performance to Amazon and Google clouds in uptime, processing power, bandwidth, throughput, and customer service benchmarks, while Aker said the company continues to assess global demand in considering its next expansion. It opened a data center in Singapore earlier this year.
Linode announced in July that it will relocate its head office to Philadelphia and conduct a large hiring round to accommodate its expansion.
This first ran at http://www.thewhir.com/web-hosting-news/cloud-hosting-firm-linode-opens-new-data-center-in-frankfurt
5:53p
AT&T, ClearPath, Nokia Join OpenDaylight SDN and NFV Community
OpenDaylight, a community-led initiative around Software Defined Networking (SDN) and Network Functions Virtualization (NFV), has a few new heavy hitters as members. AT&T, ClearPath Networks, and Nokia Networks have all joined.
The goal of OpenDaylight is to develop an open, common SDN and NFV platform. SDN allows networks to be carved out via software, while NFV implements network functions in software rather than in individual hardware appliances.
In order to carve out these virtual boundaries and functions, everything needs to be able to communicate in a standard, universal fashion. Vendors don't often agree with one another, so open source is the meeting ground, with projects like OpenDaylight serving as neutral territory where vendors can meet. OpenFlow, managed by the Open Networking Foundation, is a competing standard.
AT&T has used OpenDaylight since its first release and is incorporating it into production for its Network on Demand capabilities. AT&T configures devices in its software-based network using a tool built on a data modeling language called YANG. The company is submitting its customized Yang Design Studio Tool to OpenDaylight.
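For readers unfamiliar with YANG, the sketch below shows the general pattern of pushing YANG-modeled configuration to a device over NETCONF, using the open source ncclient library and the standard ietf-interfaces model. It is a generic illustration, not AT&T's Yang Design Studio Tool, and the device address, credentials, and interface name are hypothetical.

# Generic illustration of pushing YANG-modeled configuration over NETCONF;
# this is NOT AT&T's Yang Design Studio Tool. Device address, credentials, and
# the interface name are hypothetical.
from ncclient import manager   # open source NETCONF client: pip install ncclient

# Config fragment structured according to the standard ietf-interfaces YANG model.
CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>GigabitEthernet0/0/1</name>
      <description>on-demand customer uplink</description>
      <enabled>true</enabled>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(host="192.0.2.10", port=830, username="admin",
                     password="secret", hostkey_verify=False) as m:
    m.edit_config(target="running", config=CONFIG)   # push the modeled config to the device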
Membership benefits AT&T by enabling developers to create services that fit into AT&T's software-defined framework and any YANG-driven implementation.
“Open source will speed up our innovation, lower costs, and help us virtualize 75 percent of our network by 2020,” said John Donovan, senior executive VP of AT&T Technology and Operations. “Collaborating with and contributing to the open source community is crucial to drive this software shift at AT&T and in the industry.”
AT&T recently discussed open sourcing its network hardware and software at the Open Networking Summit.
ClearPath Networks provides cloud managed networking and distributed virtual network function lifecycle management for network operators. ClearPath’s flagship offering is its NFV-based Virtual Services Platform. ClearPath’s senior VP of marketing Marc Cohn noted in a press release that NFV and SDN are rapidly converging, and said ClearPath intends to extend its NFV offerings to exploit SDN.
Nokia works closely with operator customers, helping build telco cloud deployments based on virtualized network functions.
“The future of networks is in distributed architecture and cloud platforms that enable full use of NFV and SDN,” said Henri Tervonen, VP of Mobile Broadband Architecture at Nokia, in a press release. “We are excited to shape the future together with the OpenDaylight community, which gives us further opportunities to drive the industry towards openness and collaboration.”
The latest OpenDaylight release, Lithium, came out in June. Brocade CEO Lloyd Carney recently discussed with Data Center Knowledge the impact of SDN on the market.
6:29p
NaviSite Joins VMware vCloud Air Service Provider Program
Time Warner Cable-owned cloud and managed service provider NaviSite has joined VMware's vCloud Air Network Service Provider Program.
The program is a bid to create an ecosystem out of all the service provider clouds built atop VMware software. The ecosystem also gives service provider partners a way to extend their offerings with VMware's public cloud to better serve enterprises' hybrid needs.
VMware recently discussed its “One Cloud” strategy, which sees the company selling cloud services both directly and through its partners. The vCloud Air Network has over 4,000 service providers around the world who purchase software from VMware to build cloud infrastructure. This network of service providers is vital to VMware’s success. Other technology giants are building similar ecosystems, Cisco’s Intercloud being a prime example.
The NaviSite and VMware partnership goes back a decade. The deepened relationship will enable a broader portfolio of integrated solutions overall and provide NaviSite with earlier access to new technology and platform updates from VMware.
As part of the partnership, customers can “test drive” NaviSite's cloud using VMware vCloud for 30 days. At the end of the trial, customers can click over to production if they're happy.
NaviSite hit a rough patch during the last recession, with shares trading at around a quarter at a low point. The company refocused on growing its enterprise hosting business, a large part of which is its VMware-centered offerings.
As part of the refocusing, NaviSite shed non-core business lines like its Lawson/Kronos application services business and got rid of some of its data centers. The company enjoyed a turnaround prior to its acquisition by Time Warner Cable in 2011.
NaviSite is staying the course in targeting enterprise needs, with its core competency being managed services. VMware is focused on providing service provider cloud building blocks, as well as providing consistency and management between well-defined public cloud from VMware and private cloud from a partner.
VMware is squarely enterprise-focused. The company believes hybrid deployments will make up a strong majority within the next two years, according to Angelos Kottas, senior director of product marketing for cloud at VMware.
“As we expand upon the relationship we have built with NaviSite over the past decade, we look forward to the collaboration between the two companies to enable the migration of the next set of enterprise workloads to the cloud,” said Geoff Waters, VP, Service Provider Channel, VMware, in a press release.