Data Center Knowledge | News and analysis for the data center industry
Thursday, February 12th, 2015
1:00p
Nine-Foot Cabinets Help Test-Drive New Design for DFT

ASHBURN, Va. - As you enter the data hall, the unusually tall white cabinets dominate your field of view. Standing nine feet tall, these cabinets can house up to 58 servers, compared to the 42 rack units found in a standard seven-foot cabinet. A metal duct emerges from the top of each cabinet, serving as a chimney that delivers exhaust heat from the servers into a hot air plenum above the ceiling.
This is the “proof of concept” room inside ACC7, the massive new data center built by DuPont Fabros Technology in the heart of northern Virginia’s technology corridor. It’s where the rubber meets the road for a new design the developer created to improve the energy efficiency of its facilities.
The 8,500-square-foot data hall was created to stress-test this design. The room is filled with more than 200 cabinets representing every flavor of air containment, including more than 20 designs from eight different vendors. Each of these cabinets was outfitted with a load bank, an appliance that simulates the electrical and heat loads created by a full rack of servers.
DuPont Fabros ran these load banks at power densities of up to 15 kilowatts per cabinet, testing virtually every permutation of loads, rack heights and widths.
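To put those densities in perspective, essentially all of a cabinet's electrical draw is rejected as heat that the containment system must carry away, at roughly 3,412 BTU per hour per kilowatt. The following is an illustrative back-of-the-envelope calculation, not a figure from DFT:

```python
# Illustrative heat-load calculation (not DFT's numbers).
# Nearly all server power becomes heat: ~3,412 BTU/hr per kilowatt of IT load.

BTU_PER_HOUR_PER_KW = 3412

def cabinet_heat_btu_per_hour(power_kw: float) -> float:
    """Approximate heat output of a cabinet running at the given IT load."""
    return power_kw * BTU_PER_HOUR_PER_KW

for load_kw in (5, 10, 15):  # 15 kW was the top density tested in the proof of concept
    print(f"{load_kw:>2} kW cabinet ~ {cabinet_heat_btu_per_hour(load_kw):,.0f} BTU/hr")
```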
Building Comfort Level for Tenants
“It gives us a comfort level to tell our customers that we have designed it, and proven that it works,” said Scott Davis, the executive vice president of operations at DuPont Fabros.
Why the focus on proving the merits of the new design? ACC7 marks the culmination of a multi-year effort to update DuPont Fabros’ data center design to meet the needs of the Internet’s most demanding technology companies. DFT houses large chunks of the cloud infrastructure for Facebook, Apple and Microsoft. After close consultation with these tech titans and their other clients, DuPont Fabros’ new approach included several significant changes from its previous facilities.
The biggest benefit is improved energy efficiency. DFT’s initial design, which was notable for its use of an ISO-parallel power distribution system that allows load sharing across multiple data halls, was developed at a time when uptime was far more important than efficiency.
As energy use has gained mindshare among cloud providers, DuPont Fabros has retooled its design to adapt. ACC7 is expected to operate at a Power Usage Effectiveness (PUE) of about 1.15, a solid improvement over the PUE of 1.28 seen in earlier facilities on the Ashburn campus.
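PUE is the ratio of total facility power to the power delivered to IT equipment, so the improvement translates directly into less overhead energy per unit of IT load. A minimal sketch, using the PUE targets from the article but an invented IT load:

```python
# PUE = total facility power / IT equipment power.
# The PUE values are from the article; the 10 MW IT load is an invented example.

def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility draw implied by a given IT load and PUE."""
    return it_load_kw * pue

it_load_kw = 10_000  # hypothetical 10 MW of IT load
old = facility_power_kw(it_load_kw, pue=1.28)   # earlier Ashburn facilities
new = facility_power_kw(it_load_kw, pue=1.15)   # ACC7 target
print(f"Overhead power saved: {old - new:,.0f} kW")   # roughly 1,300 kW less non-IT load
```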
Embracing the Slab
The biggest change was the elimination of the raised floor, which is ubiquitous in the enterprise and thus has been a staple of most service provider data centers. But many hyperscale data centers now place racks directly on a slab floor and use containment to manage airflow. Many of the world’s most efficient data centers use this approach, including company-built facilities for Microsoft and Facebook.
ACC7 is DFT’s first roll-out of the new design, which also revamps the chilled water cooling and power distribution systems. As it began to market ACC7 to its existing tenants, the proof of concept helped establish a comfort level with the new approach, validating performance in a variety of designs that use hot aisle containment systems.
ACC7 is the company’s largest data center yet, with a power capacity of 41.6 megawatts and 446,000 square feet of space. The building is more than 1,200 feet in length, or slightly longer than an aircraft carrier.
 The newest data center completed by DuPont Fabros Technology in Ashburn, Virginia. At nearly 450,000 square feet, ACC7 is the largest data center in the company’s portfolio. (Photo: DFT)
The refinements by DuPont Fabros represent one company’s effort to enhance its existing design. This has been the story of the data center industry over the past decade, as a history of secrecy gave way to a community discussion of best practices. One after another, industry innovators have come forward to share their insights and, in some cases, publish the designs so others can improve upon them.
This has created a virtuous cycle in which data center builders have boosted their efficiency even as they super-sized their facilities. The powerful growth of the Internet has given these data centers a key role in the transformation of the global economy, disrupting a breadth of industries. Cloud infrastructure has shifted countless offline business processes to highly efficient digital platforms.
With great power comes great responsibility. As the data center industry uses more and more power, the relentless focus on greater efficiency and sustainability goes forward as well, at DuPont Fabros and across the industry. One day at a time, one tweak at a time, and always with a vision of doing it even better tomorrow.
Author’s Note: This is my 7,930th and final post here at Data Center Knowledge, which I founded in 2005. DCK is in good hands, with new leadership and new owners, making this the right moment for me to move on. Please stay in touch by following me on Twitter or connecting with me on LinkedIn.

3:00p
Blue Box Pitches Turnkey Private OpenStack Cloud

Blue Box, a Seattle-based cloud infrastructure company, has teamed up with system integrator Alliance Technology Group to bring to market an on-premise version of its hosted OpenStack private cloud service.
While the popularity of private and public cloud services hosted in providers’ data centers is on the rise, many customers still have applications that need to be hosted in data center environments they control. On-premise private cloud is a good option for companies that need that control but want to take advantage of the flexibility of cloud infrastructure.
The new Blue Box offering, which the company expects to bring to market in the second quarter, is similar to the offering by Metacloud, a company recently acquired by Cisco. Blue Box CTO Jesse Proudman said he considered Metacloud’s the only offering on the market competing with his company’s upcoming product.
There are numerous companies that have OpenStack distributions and that help customers stand up private OpenStack clouds in their data centers. The Blue Box product is different in that it is a full hardware-and-software package that is fully managed by the provider.
Customers that simply deploy OpenStack software packages find the process of operating their environments difficult, according to Proudman. “They want to put workloads on top of clouds, not run the clouds themselves,” he said.
Another point of differentiation is the company’s hosted private cloud service, which will be able to integrate with the in-house environments.
The private clouds will be manageable via the provider’s web-based Box Panel interface. They will be linked to the company’s operations center.
The hardware stack will consist of Juniper network gear, Dell servers, and Nimble storage arrays. Blue Box plans to offer the KVM hypervisor as one virtualization option and Docker containers as another, for customers that want to spin up application containers on bare-metal servers. Users will be able to use OpenStack APIs to boot and manage Docker containers, Proudman said.
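As a rough sketch of what driving such a cloud through standard OpenStack APIs might look like, here is a minimal example using the era-appropriate python-novaclient library. The credentials, image and flavor names are placeholders, and whether the request lands on a KVM VM or a Docker container would depend on how the provider maps compute hosts to hypervisor drivers; this is not Blue Box's actual interface.

```python
from novaclient import client as nova_client  # python-novaclient (OpenStack Compute API)

# Placeholder credentials; a real deployment would point at its own Keystone endpoint.
nova = nova_client.Client("2", "demo-user", "demo-password",
                          "demo-project", "https://keystone.example.com:5000/v2.0")

image = nova.images.find(name="ubuntu-14.04")   # hypothetical image name
flavor = nova.flavors.find(name="m1.small")     # hypothetical flavor

# Boot an instance; with a container-backed compute driver this could be a container.
server = nova.servers.create(name="app-node-01", image=image, flavor=flavor)
print(server.id, server.status)
```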
The offering will be based on Juno, the latest release of OpenStack.

4:30p
How Network Operators Can Close the Agility Gap

With over 14 years of experience delivering service provider software solutions, Prabhu Ramachandran directs WebNMS, the service provider division of Zoho Corporation.
IT workload migration from the enterprise data center to remote, cloud data centers, increasing IT mobility, and remote management for the Internet of Things (IoT) all drive the need for assured network services for business-critical communications.
Assured network services offer availability and performance guarantees, unlike the best-effort services provided by the Internet. Telecom network operators see this assured service demand as an opportunity for revenue growth, both from existing services like Carrier Ethernet and MPLS and from new, innovative services that incorporate SDN, NFV and M2M technologies. The highly diverse, reliable and scalable connectivity they can deliver with their currently deployed infrastructure gives them a competitive advantage, letting them sell not only assured network services but also bundle value-added data center services.
However, these network operators must cross a service agility gap to compete successfully in this growth market. Today, a typical assured network service takes between one and four months to fulfill, from time of order to activation. That pace is manageable for long-lived enterprise service orders, but cloud services require a far more dynamic response. This gap between the agility of service supply and the agility of demand is one of the key factors in network operators’ success in the virtual services market.
Three Vertical Layers, Innovation at Each
Network operators use extensive and complex systems to deliver services, and these can be broadly categorized into three vertical layers. Operations Support Systems (OSS) and their close cousin, the Billing Support System (BSS), deal with the business aspects of service fulfillment. The OSS layer requests provisioning, test and ongoing assurance from the middle layer, which includes service and network management systems. This layer translates the requests into configuration and status commands for the underlying network equipment, which provides data plane connectivity.
- Operations Support Systems (OSS)
- Service and Network Management Systems
- Multi-Vendor Network Equipment Systems
These systems have evolved organically around servicing brick-and-mortar enterprise, SMB and residential customers, with reliability and scalability as the primary focus and agility second. During that evolution, a variety of management silos have arisen, bridged by network operators in swivel chairs manually executing operations. These silos and the manual operations that manage them are the primary drag on network service agility. To boost the agility of these platforms, providers are innovating at each layer using orchestration and virtualization techniques borrowed from cloud operators.
Competing in Cloud Data Center IT
As defined by cloud and enterprise SDN, orchestration refers to comprehensive, automated control of resources and their management. Network operators have defined two types of orchestration that align with their top two layers: OSS orchestration and service orchestration.
OSS orchestration automates business operations to convert on-demand service requests from end users into API calls to the service management layer. In turn, service orchestration uses end-to-end network resource control to automate provisioning, test and ongoing assurance for virtual services and virtual private networks (VPNs). By eliminating manual operations, OSS and service orchestration accelerate the fulfillment of service orders on available resources.
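The division of labor can be sketched in a few lines of Python. This is purely a hypothetical illustration of the layering, not any vendor's API: the OSS layer turns a customer order into a service-management call, and the service layer in turn drives device configuration and activation tests.

```python
# Hypothetical sketch of OSS orchestration calling into service orchestration.
# No class here corresponds to a real product API.

class ServiceOrchestrator:
    """Middle layer: end-to-end network resource control."""

    def provision(self, service_type: str, endpoints: list[str], bandwidth_mbps: int) -> str:
        path = self._compute_path(endpoints)            # pick resources end to end
        for device in path:
            self._configure_device(device, service_type, bandwidth_mbps)
        self._run_activation_tests(path)
        return "svc-0001"                               # service ID returned to the OSS

    def _compute_path(self, endpoints):                 # placeholder implementations
        return ["edge-a", "core-1", "edge-b"]

    def _configure_device(self, device, service_type, bandwidth_mbps):
        print(f"configuring {device} for {service_type} at {bandwidth_mbps} Mbps")

    def _run_activation_tests(self, path):
        print(f"activation tests passed on {path}")


class OssOrchestrator:
    """Top layer: converts an on-demand order into service-layer API calls."""

    def __init__(self, service_layer: ServiceOrchestrator):
        self.service_layer = service_layer

    def fulfill_order(self, order: dict) -> str:
        # Billing and eligibility checks would happen here before provisioning.
        return self.service_layer.provision(order["type"], order["endpoints"],
                                            order["bandwidth_mbps"])


oss = OssOrchestrator(ServiceOrchestrator())
print(oss.fulfill_order({"type": "carrier-ethernet",
                         "endpoints": ["site-a", "site-b"],
                         "bandwidth_mbps": 100}))
```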
In parallel, service providers are looking to network function virtualization (NFV) to deploy virtual network functions (VNFs) as cloud-hosted virtual machines (VMs). NFV decouples higher layer services from the physical connectivity infrastructure, increasing resource elasticity and service agility.
For example, access service providers are looking at virtual customer premise equipment (vCPE) applications, where the consumer’s set-top box or the enterprise network interface device provides basic connectivity to routers, firewalls and storage hosted in the data center. The provider can then bundle these services and remotely activate them for customers with existing access gear.
Helping Drive Service Adoption
OSS orchestration, service orchestration and NFV illustrate the types of solutions that network service providers need to compete in the cloud data center IT world. These techniques will apply not only to existing enterprise and consumer services such as Carrier Ethernet, MPLS, Broadband and LTE, but also to new IoT services that enable reliable, intelligent infrastructure such as smart cities and smart homes. Agile, assured network services will help drive service adoption and business results for the entire industry.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

5:00p
Disaster Recovery in the Cloud: Are We Ready?

Disaster recovery, getting systems back online and data available after a service interruption, is a mission-critical activity for many, if not all, businesses. The Bureau of Labor cites that 20 percent of businesses experience a failure (fire, flood, power outage, natural disaster, etc.) in any given year, and 80 percent of those businesses will go under in just over a year.
Although there may be debate over what statistics are most accurate, common sense tells us that if IT systems are not restored in a timely fashion, it is highly likely that a company may suffer an untimely demise.
Planning for disaster recovery and thinking through new options is the topic of several sessions at the spring Data Center World Global Conference in Las Vegas. Brian Vandegrift, executive vice president of sales and innovation at Venyu, is a session leader.
Disaster Planning and Disaster Experience
The U.S. Gulf Coast is one region of the country that has historically experienced hurricanes, storms and flooding. Venyu, a data center operator with facilities in Louisiana, Florida and now Mississippi, as well as Texas and Massachusetts, knows what issues arise in a disaster situation. The company has lived through them, especially in Louisiana.
“We have grown up with our primary customer base on the Gulf Coast, but we have customers around the country,” said Brian Vandegrift. Customers go with Venyu because “they like that we are battle-tested. They know we have been through disasters.”
Vandegrift added, “A lot of people out there are selling backup and disaster recovery, and they don’t have to do what they sell.”
“We are prepared to run indefinitely without utility power,” he said. The utility power grid is used as a cheaper source of power for when there is no disaster.
How the Cloud Changes Disaster Recovery
Cloud backup, storage and recovery are all options that can make disaster recovery smoother.
Since Venyu started as a systems integrator and IT consulting shop, the company has worked with many clients over the past 24 years in developing ways to protect clients’ data and equipment and assisting in the development of disaster recovery plans.
Currently, they are helping businesses leverage both colocation and cloud services in their disaster plans. “We still take an integrator, consultative approach,” he said. “We leverage the data center assets we have as well as assets the client owns, the skill sets the client has and develop a disaster recovery plan that brings it all together.”
Cloud has made a huge difference in recovery from outages. “Disaster recovery today is easy, it’s not that much heavy lifting. You used to have to have backup tapes and ship them around the country,” he said. “The technology has evolved where it is easy to orchestrate and to fail back to production.”
While not all businesses are cloud-enabled, many have leveraged virtualization. If a business is using virtualization, they are ready for a disaster recovery strategy that uses cloud services.
“Clients have been taking steps toward virtualization to squeeze more out of their footprint, to get more power and computing from what they have. We can leverage their virtualization,” Vandegrift said. “So we can very quickly, within 1 hour, and directly, get them into a cloud solution. They can be up and running again quickly.” (Of course, clients have to work with a provider PRIOR to a disaster to have that kind of smooth response.)
Certainly at the time of disaster, businesses are not wise to be pinching pennies. However, the time for negotiation is prior to the disaster declaration. A sound disaster plan worked out ahead of time would outline customer services and their costs.
For example, Venyu works on an allocation model: the cloud pricing is based on RAM and storage. It is not a usage model. “The client gets full use of the resource. You use it more, you don’t get billed more,” he said. “Not everyone needs public cloud or private cloud. We do customized solutions depending on the client; they could have a combination of private, public and colocation services.”
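A tiny illustration of the difference between the two billing models (the rates are invented, not Venyu's prices): under allocation pricing the monthly bill is fixed by the reserved RAM and storage, so a quiet month costs the same as a month spent failed over during a disaster.

```python
# Invented example rates; illustrates allocation-based vs. usage-based billing only.
RAM_RATE_PER_GB = 10.0        # $ per reserved GB of RAM per month
STORAGE_RATE_PER_GB = 0.25    # $ per reserved GB of storage per month

def allocation_bill(ram_gb: int, storage_gb: int) -> float:
    """Fixed monthly cost for a reserved footprint, regardless of how hard it is used."""
    return ram_gb * RAM_RATE_PER_GB + storage_gb * STORAGE_RATE_PER_GB

# A 128 GB RAM / 2 TB storage reservation costs the same whether idle or in full DR use.
print(f"${allocation_bill(128, 2048):,.2f} per month")
```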
Vandegrift said, for the DR solution, clients are reserving the right to use the infrastructure at the time of a disaster, and then when there is a disaster there is a cost. (It is not based on minute by minute use.)
Broader Advantages to Cloud
The cloud allows the IT staff to focus on the core competency of the business, not doing things (keeping hardware running) that take away from the core competency of the business, said Vandegrift. “IT people are playing a bigger role in the board room. In a competitive marketplace, it makes sense to have IT more involved in the business rather than putting out fires and running infrastructure.”
To find out more about how cloud services can be included in your DR plans, attend the session by Vandegrift at the spring Data Center World Global Conference in Las Vegas. Learn more and register at the Data Center World website.

6:04p
Survey: Wider Docker Adoption Hinges on Security, Tooling

Docker, the hot application container technology, needs to overcome security and operational tool maturity hurdles to achieve widespread adoption, according to a new survey. Docker packages an application in a way that makes it portable across different data center or cloud environments.
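For readers unfamiliar with what that portability looks like in practice, here is a minimal sketch using the Docker SDK for Python; the image name and command are arbitrary examples. The same image runs unchanged on any host with a Docker engine, whether a laptop, an on-premise server, or a cloud VM.

```python
import docker  # Docker SDK for Python ("docker" package)

# Connect to the local Docker engine via environment settings (DOCKER_HOST, etc.).
client = docker.from_env()

# Pull a public image and run a throwaway container from it.
client.images.pull("python:3-slim")
output = client.containers.run(
    "python:3-slim",
    command=["python", "-c", "print('hello from a container')"],
    remove=True,   # clean up the container after it exits
)
print(output.decode().strip())
```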
StackEngine, a company in the Docker ecosystem, together with VMblog and CloudCow, conducted the new survey of about 750 respondents in the enterprise virtualization and cloud computing fields.
StackEngine recently emerged from stealth. The company builds software that helps massively deploy, manage, and scale resilient applications in modern container-centric architectures.
About three-quarters of enterprises are using or evaluating Docker, according to the survey. It’s still early in the Docker market, with most usage in development or with internal applications, but many respondents are looking to use it more widely and in production. Docker is seeing bottom-up, rather than top-down, adoption; developers, rather than executive or CIO initiatives, are pushing it. The company and the eponymous open source technology have captured developer hearts. The survey revealed, however, that it’s also on executives’ radar, and widespread adoption is a matter of reaching a certain maturity level.
“It’s similar to how virtualization first started,” StackEngine CEO and Co-Founder Bob Quillin said. “The same audience [that] was an early adopter of virtualization is an early adopter of Docker. Most are thinking about using it in production but want more confidence and capabilities.”
The survey identifies the two biggest hurdles as security and limited capabilities.
“Operations teams are getting used to it, but security is one inhibitor, while the other is they don’t have the operational tools,” Quillin said.
Docker has been working on building in tools and capabilities. It acquired a company called Orchard last year.
Docker 1.5 came out this week, featuring some enhancements, and showing that the feature set is maturing.
There is also a growing ecosystem of companies focused on building capabilities around Docker. One recent example is Logentries, which provides a real-time log management service for Docker.
Part of the problem is customer awareness of what’s available, and what’s battle tested. There needs to be a central, easy way to get these tools, Quillin said. There are many you can download on GitHub; the operational tools are there, but it all hasn’t gelled into one cohesive platform yet.
The survey reveals a lot about VMware in particular, as 65 percent defined their infrastructure as VMware-based. Almost half of those users said they want to use existing tools from VMware to manage Docker, which is an opportunity for the company. VMware has been building out its Docker strategy.
These same shops, however, cited VMware independence (37 percent) and VMware costs (44 percent) as motivation for using Docker. Other usage drivers were testing the waters (42 percent) and hybrid cloud (45 percent).
Docker is making some very good strides around security of containers themselves, Quillin said. “Concerning to customers is you need to use command line interface, which gives you access to the host. You don’t want to give root access to developers.”
Complete survey results and infographics are available here.

8:30p
Microsoft Offers Azure Cloud Hosting Credits to Y Combinator Startups 
This article originally appeared at The WHIR
Microsoft is slashing costs for Y Combinator startups by giving away $500,000 worth of Azure cloud hosting. The new enterprises will also receive three years of Office 365, access to Microsoft developer staff, and one free year each of CloudFlare services and DataStax software.
The free services will be given out starting with the winter 2015 cohort. YC president Sam Altman said in a blog post that the total value will reach over $1 million per company.
“This is a big deal for many startups,” Altman said. “It’s common for hosting to be the second largest expense after salaries.”
Despite the obvious costs and risks associated with giving free hosting to startups, engaging with them is important for any hosting provider hoping to grow market share, and companies like Rackspace, SoftLayer, ViaWest, and ProfitBricks began targeting them directly years ago.
Rackspace gave away over $400,000 to UK startups in the first six months of its program there in 2013. In November IBM launched a program offering free cloud hosting to startups.
More recently Google Cloud Platform for Startups began offering $100,000 credits towards public cloud.
International markets are investing in software and technology startups too, with Deutsche Telekom launching a venture fund for German startups, and Daum Kakao launching a $90 million fund for South Korean companies late last year.
YC startups could eventually become significant hosting customers, so there is an aspect of direct investment in customer acquisition to Microsoft’s generosity. Microsoft also has partnerships with past YC companies Dropbox and Docker.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/microsoft-offers-azure-cloud-hosting-credits-y-combinator-startups

9:00p
HP to Integrate Emerson’s Trellis DCIM in Data Center Management Framework

Emerson Network Power has partnered with HP on a joint solution that combines Emerson’s Trellis data center infrastructure management software and HP’s data center management consulting services.
The idea is to address the old problem of silos in data center management, where the IT team is completely separated from the facilities team, both losing out on the efficiencies that can be gained from managing the data center as a single entity.
HP’s Converged Management is a framework for integrating DCIM with IT service management. It’s about giving IT managers awareness of data center capacity (things like space, power, and cooling), of the applications the IT gear is running, and of the business functions those applications support. This knowledge, in real time, is what informs decisions about facilities management, planning and executing changes, and automating processes.
HP is pitching the data center management framework for everyone, from enterprise and government operators to colocation and cloud service providers. Its consulting services include workshops, roadmap creation, designing the management architecture across IT, facilities, and service management, and implementation.
DCIM software plays a key role in the framework, since HP views it as the thing that bridges the gap between IT and facilities. Some of the more comprehensive DCIM solutions on the market combine capabilities like tracking what kinds of IT gear you have in which rack in the data center with capabilities like power consumption, temperature, and humidity monitoring across the data center floor.
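A toy sketch of the combined view DCIM aims for, pairing asset placement with live environmental readings; all names, fields, and thresholds here are hypothetical illustrations, not Trellis features.

```python
# Hypothetical DCIM-style data model: asset inventory joined with facility telemetry.
from dataclasses import dataclass

@dataclass
class Rack:
    rack_id: str
    power_capacity_kw: float
    power_draw_kw: float       # from branch-circuit metering
    inlet_temp_c: float        # from floor-level temperature sensors
    servers: list[str]         # IT assets installed in the rack

def capacity_report(racks: list[Rack], temp_limit_c: float = 27.0) -> None:
    """Flag racks that are near their power budget or above the inlet temperature limit."""
    for r in racks:
        headroom = r.power_capacity_kw - r.power_draw_kw
        flags = []
        if headroom < 1.0:
            flags.append("low power headroom")
        if r.inlet_temp_c > temp_limit_c:
            flags.append("inlet temp above limit")
        status = ", ".join(flags) if flags else "ok"
        print(f"{r.rack_id}: {headroom:.1f} kW headroom, {r.inlet_temp_c:.1f} C - {status}")

capacity_report([
    Rack("A01", 8.0, 7.4, 24.5, ["db-01", "db-02"]),
    Rack("A02", 8.0, 5.1, 28.2, ["web-01"]),
])
```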
Emerson’s Trellis is one of the DCIM suites that aims to have as comprehensive a feature set as possible. Its main competitor in the space is Schneider Electric, the other big data center infrastructure equipment vendor.
Rich Einhorn, vice president of HP Data Center Consulting, said the combination of Trellis and Converged Management consulting would help clients get more efficient without disruption to ongoing operations. “It also allows us to better integrate the DCIM system with software-defined IT platforms, such as HP OneView, making the operation between facility and IT seamless,” he said in a statement.

9:30p
Facebook Launches Program to Encourage Cyberthreat Information Sharing 
This article originally appeared at The WHIR
Facebook launched a cybersecurity threat information sharing platform called ThreatExchange on Wednesday. Since hackers often attack multiple targets with the same approach, and one compromised network can lead to more, Facebook sees a network security benefit for all collaborators.
Pinterest, Tumblr, Twitter, and Yahoo assisted with feedback during ThreatExchange’s development. Link-shortening service Bitly and Dropbox are among the other partners to join since, and the ThreatExchange website also includes a form for companies interested in participating.
Mark Hammell, Facebook threat infrastructure team manager, said in a blog post that ThreatExchange’s origin goes back to a spam-driven malware attack last year. Hammell reached out to other large web companies for information, and found willing information-sharing partners.
Since Facebook was already developing a threat information platform in-house, expanding the project to enable corporate co-operation required little more than building APIs to query or publish information for specific companies or groups.
“Threat data is typically freely available information like domain names and malware samples, but for situations where a company might only want to share certain indicators with companies known to be experiencing the same issues, built-in controls make limited sharing easy and help avoid errors by using a pre-defined set of data fields,” Hammell said.
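Hammell's description suggests a simple query-and-publish model layered on Facebook's Graph API. A hedged sketch of what a query for domain indicators might look like follows; the endpoint path, parameter names, and token handling are assumptions based on the article's description, not verified API documentation.

```python
import requests  # third-party HTTP library

# Assumed endpoint and parameters, modeled on the article's description of
# ThreatExchange as Graph API queries; consult the real documentation before use.
GRAPH_URL = "https://graph.facebook.com/threat_descriptors"

def query_domain_indicators(access_token: str, text: str) -> list:
    """Ask the exchange for shared threat descriptors matching a domain string."""
    resp = requests.get(GRAPH_URL, params={
        "access_token": access_token,   # credentials of a participating company's app
        "type": "DOMAIN",               # assumed indicator-type filter
        "text": text,                   # search term, e.g. a suspicious domain
    }, timeout=30)
    resp.raise_for_status()
    return resp.json().get("data", [])

# Example (with placeholder credentials):
# for indicator in query_domain_indicators("APP_ID|APP_SECRET", "example-malware-domain.com"):
#     print(indicator)
```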
ThreatExchange is built on the existing Facebook platform infrastructure, and uses Facebook Graph, TechCrunch reports.
Efforts to improve security through information sharing among companies and governments are a growing trend. The information sharing aspect of the Obama administration’s cybersecurity framework compelled the US Department of Justice and the Federal Trade Commission to issue a joint statement last April clearing participants of any antitrust implications, assuming information is shared properly.
Symantec set out to build an attack information sharing hub in November 2013, while the European Cyber Security Group was formed earlier in 2013 to share information across national borders.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/facebook-launches-program-encourage-cyberthreat-information-sharing