Data Center Knowledge | News and analysis for the data center industry
Monday, October 28th, 2013
| 1:17a |
Terremark Data Center Outage Knocks HealthCare.gov Offline A service outage at a Verizon Terremark data center caused downtime Sunday for HealthCare.gov, the trouble-plagued online insurance marketplace created by the Affordable Care Act.
The Department of Health and Human Services said Sunday that the HealthCare.gov “application and enrollment system is down because the company that hosts site has an outage. Terremark working to fix.”
“We are working with Terremark to get their timeline for addressing the issue,” Health and Human Services Department spokeswoman Joanne Peters told Reuters. “We understand that this issue is affecting other customers in addition to HealthCare.gov, and Terremark is working (to) resolve the issue as quickly as possible.”
It wasn’t immediately clear which Terremark data center had experienced the outage.
The outage is the latest difficulty for the HealthCare.gov site, which has been plagued by performance problems since its launch, with many users unable to access the site and others stymied by enrollment errors.
Terremark Federal Group, a unit of Verizon, has received $15.5 million for cloud computing services provided to the HealthCare.gov website, according to media reports. Terremark began work on the five-year contract in 2011. | | 11:15a |
Rackspace Expands Data Services With Hortonworks’ Hadoop In the Cloud Rackspace Hosting will offer Hortonworks’ flavor of Apache Hadoop in the cloud and as a managed service, as part of Rackspace Data Services, a collection of “Big Data” database offerings available as a service.
Rackspace continues to expand both its big data solutions and managed support in efforts to court big enterprise customers. With the managed service offering, customers are able to deploy a fully featured and supported Hadoop infrastructure through a single vendor contract. Rackspace is offering early access to some customers, and will gradually expand access to the offering.
Offering Hadoop as a Service continues to be a focus for leading hosting and cloud providers. “In years past, it was about how to store data; now customers are asking ‘what can I do with my data to generate revenue?’” said Sean Anderson, Product Marketing Manager for Data Solutions at Rackspace. “I feel that our cloud product is accelerating that data exploration. When you look at traditional IT, there’s always a concern with moving to a more developer-focused platform. This allows them to integrate Apache Hadoop without adding the resources and costs needed to do so.”
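To illustrate what “integrating Apache Hadoop” looks like in practice, here is a minimal, generic word-count job written for Hadoop Streaming in Python. This is not Rackspace- or Hortonworks-specific tooling, just a sketch of the kind of mapper/reducer pair a customer could submit to any Hadoop cluster; the input and output paths and the streaming jar location are placeholders that vary by distribution.

    # mapper.py -- emits (word, 1) for each word read from stdin
    import sys
    for line in sys.stdin:
        for word in line.split():
            print("%s\t%d" % (word, 1))

    # reducer.py -- sums the counts for each word (keys arrive sorted)
    import sys
    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t", 1)
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print("%s\t%d" % (current_word, current_count))
            current_word, current_count = word, int(count)
    if current_word is not None:
        print("%s\t%d" % (current_word, current_count))

    # Submitted to the cluster with something like (paths are placeholders):
    # hadoop jar /path/to/hadoop-streaming.jar \
    #     -files mapper.py,reducer.py \
    #     -mapper "python mapper.py" -reducer "python reducer.py" \
    #     -input /data/raw -output /data/wordcounts

In a managed offering like the one described here, the cluster provisioning, patching and maintenance around a job like this would be handled by the provider rather than the customer’s own staff.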
Rackspace first partnered with Hortonworks last year, and recently acquired ObjectRocket and Exceptional Cloud Services to boost its offerings on the big data front. Future NoSQL offerings built on technologies such as MongoDB and Redis will join the portfolio to provide hybrid alternatives.
The benefits of using Hadoop in this model include the ability to rapidly deploy with low operational burden. Rackspace says it is offering simple pricing plans to address pricing confusion experienced with some cloud services.
“The volume of data being processed in businesses today is astounding,” said John Engates, CTO of Rackspace. “Companies need help analyzing and extracting value from this vast amount of information, as Big Data solutions are difficult to deploy and harder to maintain. With Rackspace’s new Cloud Big Data Platform offering and Managed Support for Apache Hadoop, we’re providing an open, hybrid Big Data solution for dedicated and cloud instances. Combining this technological expertise with enhanced support from Hortonworks allows us to bring a best-in-class Big Data solution to market.”
Benefits of the new offering include:
- Designs an Optimal Big Data Configuration: Offers customized configurations to address specific data processing requirements such as high compute, high storage, and balanced workloads. Reference architectures and flexible network design allow customers to design environments for their specific use case.
- Reduces Operational Burden: Reduces the amount of time required to deploy and maintain data processing environments. This solution allows customers to leverage Rackspace’s Fanatical Support and the deep expertise of Rackspace and Hortonworks to help with patching, cluster management, job execution and standard maintenance.
- Integrates with Custom Applications: Allows customers to leverage the full range of tools in the Apache Hadoop ecosystem to integrate with business intelligence or data applications with no additional re-tooling.
“We are excited to be partnering with Rackspace to bring the Hortonworks Data Platform to the Rackspace Hybrid Cloud,” said John Kreisa, vice president of strategic marketing at Hortonworks. “By combining the open source commitments of Rackspace and Hortonworks, we are creating a platform of capabilities that accelerates the adoption of open standards while continuing to deliver expertise to the broader Hadoop community.” | | 11:30a |
Data Center Jobs: Green House Data At the Data Center Jobs Board, we have a new job listing from Green House Data, which is seeking a Senior Sales Engineer in Cheyenne, Wyoming.
The Senior Sales Engineer is responsible for working directly with and supporting Green House Data’s complex sales cycle for cloud Infrastructure as a Service (IaaS) and colocation services. The role includes designing client solutions based on custom network, compute resource, storage, security, and management service specifications; working closely with the customer service and operations teams to ensure the successful implementation of client solutions; and developing and supporting new client projects, both independently and as part of a larger team. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed. | | 1:00p |
SoftLayer Partners With Cloudera on Big Data Offering  A rack filled with servers in a SoftLayer data center. The IBM cloud computing unit today announced a partnership with Cloudera on Apache Hadoop solutions. (Photo: SoftLayer)
Cloud providers continue to expand their big data solutions portfolios through partnering. SoftLayer, an IBM company, is teaming with Cloudera to offer turnkey, multi-server Apache Hadoop big data solutions. The goal is to make it easy to design and deploy Cloudera applications on demand and in real time through SoftLayer’s web-based solution design tools, portal and API.
The solutions are built on bare metal servers optimized specifically for running Cloudera on SoftLayer’s global cloud infrastructure. These systems are available with a wide range of options to tailor memory and storage. Cloudera supplies the management platform based on Apache Hadoop, while SoftLayer provides the infrastructure.
“Cloudera customers are looking for the paramount solution for managing vast amounts of varied data,” said Tim Stevens, VP of Business Development at Cloudera. “With automated deployment of robust bare metal infrastructure to support the program, Cloudera’s joint solution with SoftLayer will push big data solutions into an even higher level of performance and reliability.”
Bare Metal’s High Performance Meets Big Data
Cloudera provides scalable and powerful tools for managing and analyzing vast amounts of varied data using open source Apache Hadoop. The standard version of Cloudera is free; it combines an enterprise-ready version of Hadoop with Cloudera Manager, which provides robust cluster management capabilities such as automated deployment, centralized administration, monitoring and diagnostic tools.
The Enterprise version of Cloudera adds Cloudera Navigator for data management, technical support, indemnity, and open source advocacy. Additional capabilities such as data management and disaster recovery can be added to Cloudera Enterprise through add-on subscriptions.
“Running big data applications on bare metal servers gives users both the raw performance and consistency they need to analyze vast amounts of data in real-time,” says Marc Jones, VP of product innovation for SoftLayer. “The ability to provision a large scale Hadoop cluster in just a few hours sets this offering apart with speed and agility. We’re pushing the envelope of what’s possible on the cloud by giving enterprises the power and flexibility they need to tackle the toughest workloads.” | | 1:59p |
7 Attributes That Help Counter Data Center Downtime Peter Panfil is Vice President Global Power Sales, Emerson Network Power. With more than 30 years of experience in embedded controls and power, he leads global market and product development for Emerson’s Liebert AC Power business.
As computing demands and complexity in the data center continue to rise, unplanned data center outages remain a significant threat to organizations in terms of business disruption, lost revenue and damaged reputation.
A recently completed survey of U.S.-based data center professionals, conducted by the Ponemon Institute and sponsored by Emerson Network Power, shows that an overwhelming majority of respondents (91 percent) have experienced an unplanned data center outage in the past 24 months. Regarding the frequency of outages, respondents experienced an average of two complete data center outages during the past two years. Partial outages, or those limited to certain racks, occurred an average of six times in the same timeframe.
However, there is a bright spot in the survey. The results show that many companies are now more aware of the causes of downtime and are taking steps to minimize the risk. In fact, the survey took a closer look at the high-performing data centers that experienced the least downtime and identified seven common attitudes and actions largely shared by those organizations.
Not every data center will be able to adopt all seven of these attributes. But even implementing a few of them might greatly decrease the frequency of unplanned downtime and mitigate its impact.
1. Consider data center availability your No. 1 priority – even above minimizing costs.
Given tightening budgets, this might be one of the hardest attitudes for many organizations to adopt. However, with the increase in reliance on IT systems to support business-critical applications, a single downtime event now has the potential to significantly impact the profitability of an enterprise. In fact, for enterprises with revenue models that depend on the data center’s ability to deliver IT and networking services to customers, downtime can be particularly costly.
2. Utilize best practices in data center design and redundancy to maximize availability
It all comes down to the fundamentals. There are a number of proven best practices that serve as a good foundation for data center design and redundancy. These best practices represent proven approaches to employing cooling, power and management technologies in the quest to improve overall data center performance. They include everything from matching cooling capacity and airflow to IT load, to utilizing local design and service expertise to extend equipment life.
3. Dedicate ample resources to recovery in case of an unplanned outage
This is more than having enough people to reset breakers and cycle the power on servers following an outage. It involves site preparedness – food, lodging, alternate transportation – for personnel in the event the outage is the result of a natural disaster. Hurricane Sandy taught us that having enough generator fuel on hand, along with an established supply chain for replenishment over an outage that could stretch into days, was critical to keeping some facilities up.
4. Have complete support from senior management on efforts to prevent & manage unplanned outages
The Ponemon Institute survey exposes a difference in perception that often exists between senior management and those reporting to them when it comes to downtime. Forty-eight percent of senior-level respondents expressed confidence that leadership is supportive of efforts to prevent outages, while 71 percent of supervisor-and-below respondents believe their organization has sacrificed availability to improve efficiency or reduce costs inside their data center. Supervisor-and-below respondents were also more likely than senior management to believe that unplanned outages happen frequently. This disparity shows the importance of frank discussions about unplanned outages and the level of support and investment needed to prevent and manage these incidents.
5. Regularly test generators and switchgear to ensure emergency power in case of utility outage
The most rigorous form of this testing is commonly referred to as “pull the plug.” This sort of routine testing is mandated by local codes for some industries, such as healthcare. It confirms that the automatic transition from utility to battery to generator and back operates properly during a utility outage, and it keeps the facility team’s training current should an unplanned outage occur. It also gives the facility management team confidence that the data center will ride through a utility outage without incident, and time to remedy any deficiencies in a controlled manner.
6. Regularly test or monitor UPS batteries
Having a dedicated battery monitoring system is a must. According to Emerson Network Power’s Liebert Services business, battery failure is the leading cause of UPS system loss of power. Utilizing a predictive battery monitoring method can provide early notification of potential battery failure. The best practice is to implement a monitoring system that connects to and tracks the health of each battery within a string.
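As a rough illustration of the per-battery tracking described above, the sketch below flags cells whose voltage drifts from the string average or whose internal resistance climbs well above its baseline, two commonly used early indicators of battery degradation. The data layout, thresholds and function names are hypothetical assumptions for illustration only; real monitoring systems expose vendor-specific interfaces (often SNMP or Modbus) and their own alarm thresholds.

    # Hypothetical sketch of per-cell battery-string health checks.
    # The readings format and thresholds are illustrative assumptions, not a vendor API.

    VOLTAGE_DEVIATION = 0.05     # flag cells more than 5% off the string mean
    RESISTANCE_CEILING = 1.30    # flag cells 30% above their baseline internal resistance

    def check_string(readings, baselines):
        """readings, baselines: lists of (voltage, internal_resistance) per cell."""
        mean_v = sum(v for v, _ in readings) / len(readings)
        alerts = []
        for i, ((volts, ohms), (_, base_ohms)) in enumerate(zip(readings, baselines)):
            if abs(volts - mean_v) / mean_v > VOLTAGE_DEVIATION:
                alerts.append("cell %d: voltage %.2f V deviates from string mean %.2f V"
                              % (i, volts, mean_v))
            if ohms > base_ohms * RESISTANCE_CEILING:
                alerts.append("cell %d: internal resistance %.1f mOhm well above baseline"
                              % (i, ohms))
        return alerts

    # Example: a 4-cell string where cell 2 is sagging and its resistance is rising
    readings  = [(13.5, 4.1), (13.4, 4.0), (12.1, 6.2), (13.5, 4.2)]
    baselines = [(13.5, 4.0), (13.5, 4.0), (13.5, 4.0), (13.5, 4.0)]
    for alert in check_string(readings, baselines):
        print(alert)

The point of a sketch like this is the practice it represents: continuous, per-cell measurement against a baseline, so a weakening battery is replaced before it takes down the string during a discharge.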
7. Implement data center infrastructure management (DCIM)
It is important to ensure the foundation for effective management of the data center is in place in the form of an up-to-date visual model of the facility and centralized monitoring of infrastructure systems. This will likely include the deployment of a DCIM platform capable of providing a holistic view of data center operations based on real-time data that spans facilities and IT systems.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | | 2:30p |
With its Healthcare Cloud, Veeva Shows the Power of Industry Clouds  Veeva Systems Founder and CEO Peter Gassner (center) rings the opening bell at the New York Stock Exchange to celebrate the company’s IPO on October 16. (Photo: NYSE Euronext)
Cloud isn’t always one size fits all. By specifically targeting the healthcare and life sciences industry, Veeva has established a clear leadership position in a potentially lucrative niche where few other players have gained major traction.
Veeva is a profitable cloud computing service provider specializing in healthcare and life sciences. The company counts big names such as Novartis AG, Merck & Co., Eli Lilly and Co. and Bayer Healthcare AG as customers.
On Oct. 16 Veeva raised about $194 million after its IPO was priced at $20 per share, well above the expected price range. The first day of trading was a massive success, with shares nearly doubling during Veeva’s debut, hitting a high of $39.64 and valuing the company at nearly $5 billion. Shares of VEEV closed Friday at $43 in trading on the NYSE.
What is the company doing to win over both investors and its customers?
Profitability, Revenue Growth = Win on Wall Street
Veeva stands out as a cloud provider that pairs profitability with high growth, with revenue rising from $29.1 million in 2011 to $129.5 million in 2013. The company credits lower customer acquisition costs, efficient marketing and focus.
“What is it we’re doing differently? It’s this notion of industry cloud,” said Matt Wallach, co-founder and president of Veeva. “Veeva is specifically tailored to life sciences, a vertical that can’t use ‘one size fits all’ types of cloud offerings. We’re a beneficiary of the whole move from client/server to cloud. We have deep applications in specific areas within one industry. Big Data is something that is possible with cloud computing. Another benefit is the level of functionality. All of our R&D is super-focused on what we can do in this industry.”
The company has three product lines, but the best example of Veeva’s vertical cloud approach is its customer relationship management (CRM) offering, which uses the platform from Salesforce.com essentially as the big database. “It’s better than a database, but for us it’s just a backend,” said Wallach. “Starting from the data model, the CRM is much more built from the ground up.”
Infrastructure from Salesforce.com, Data Centers from NTT
“While the CRM uses Salesforce.com as the backend database, our two other product lines we’ve built from the ground up,” said Wallach. “We can’t use AWS for this. We have managed data centers from NTT. They keep up the hardware side, we keep the software side running.”
The company has data centers in the U.S. and a data center in Japan, located in the same building as Salesforce.com. Wallach says a data center will follow in Europe in 2014, most likely also in the same building as Salesforce.com.
The architect behind Veeva’s infrastructure is Mitch Wallace, who set up all of Salesforce.com’s internal systems and managed its data center infrastructure. “Having set up the world’s largest Software as a Service infrastructure, Mitch is unimpeachable,” said Wallach. “We took the best possible cloud expert.”
“Multi-tenant cloud is fundamentally better for both us and customers,” said Wallach. “By having all of these applications targeted at one industry, we become the innovation engine for the customer.” | | 2:44p |
Level 3 Connects With Amazon Web Services Direct Level 3 adds Amazon Web Services connection support, Radware joins the open networking foundation to further SDN, and EdgeCast helps Atlantic Media serve blazing-fast content around the world.
Level 3 adds AWS Direct Connect support
Level 3 Communications (LVLT) announced support for Amazon Web Services (AWS) as an AWS Partner Network Technology Partner. The partnership leverages the suite of Level 3 Cloud Connect Solutions across the global Level 3 network to create a more efficient and reliable cloud operating environment with improved application performance and network security. With this connection, Level 3 offers private network connections to every AWS Direct Connect location. Support for new Ethernet and VPN hosted connections delivers direct access to leading cloud services such as Amazon Elastic Compute Cloud, Amazon Simple Storage Service and Amazon Virtual Private Cloud. The combined network infrastructure provides an easier migration path for enterprises to effectively establish and scale connectivity between remote office locations, data centers and AWS to create a cloud ecosystem with greater flexibility to address evolving IT requirements. “To do business in the cloud, enterprises have to solve their migration, security and performance challenges,” said Paul Savill, senior vice president of product management for Level 3. “The combination of Level 3 Cloud Connect Solutions and AWS Direct Connect makes it easier and more cost effective to move, operate and secure enterprise applications in the cloud.”
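For context on what a Direct Connect hookup looks like from the AWS side, the short sketch below lists an account’s physical connections and the logical virtual interfaces riding on them, using the standard AWS SDK for Python (boto3). This is generic AWS tooling shown purely for illustration; it does not represent Level 3’s Cloud Connect provisioning systems, and the region shown is an assumption.

    # List AWS Direct Connect connections and virtual interfaces with boto3.
    # Requires AWS credentials configured in the environment; the region is an example.
    import boto3

    dx = boto3.client("directconnect", region_name="us-east-1")

    # Physical cross-connects, e.g. ordered through a carrier partner such as Level 3
    for conn in dx.describe_connections()["connections"]:
        print(conn["connectionId"], conn["connectionName"],
              conn["location"], conn["bandwidth"], conn["connectionState"])

    # Logical interfaces (e.g. a private VIF into a VPC) running over those connections
    for vif in dx.describe_virtual_interfaces()["virtualInterfaces"]:
        print(vif["virtualInterfaceId"], vif["virtualInterfaceType"],
              vif["vlan"], vif["virtualInterfaceState"])

The carrier handles the circuit from the customer site to the Direct Connect location; the AWS-side objects above are what the customer then attaches to its cloud resources.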
Radware joins Open Networking Foundation
Radware (RDWR) announced it is now part of the Open Networking Foundation Northbound Interface Working Group, joining the ranks of companies such as Intel and Microsoft to help the industry accelerate the adoption of open SDN. Chartered this month, the working group was created to help reduce end-user confusion around the Northbound Interface and to help application developers actively seeking an open Application Programming Interface (API) to develop code against. It will accelerate SDN innovation by allowing significant application portability across SDN controllers, both open source and proprietary. “We are elated to find ourselves in the same company of those who are part of the core team,” says Avi Chesla, chief technology officer, Radware. “This working group will help define various SDN Controller Northbound API Interfaces (NBIs) in order to significantly increase the speed in which new SDN applications are developed. Collectively, we will be able to accelerate innovation in order to quickly adapt the customer’s needs from their networks.”
EdgeCast enables Atlantic Media
EdgeCast Networks announced that Atlantic Media is leveraging the company’s global acceleration network to serve its content to millions of users around the world. The Atlantic requires availability, reliability, and speed – regardless of the end user’s device or location. A benefit of EdgeCast is that it completes most purges in just seconds with its new “Piranha Purge” feature. “Whether it’s quickly updating a breaking story or correcting inaccuracies, fast updates go a long way in preserving credibility and trust,” said Tom Cochran, CTO of Atlantic Media. “Piranha Purge is a tool that’s extremely valuable in a breaking news world.” | | 5:30p |
Understanding the True Cost of Lost Capacity in the Data Center Today’s data centers are being designed around multi-tenancy, greater efficiency, and high-density computing. These new demands are the direct result of more users, more data, and a lot more cloud computing. In fact, the global data center industry is booming, with triple-digit investment growth in South East Asia and double-digit growth in mature Western European markets. However, a substantial proportion of this investment (up to 50 percent in the worst cases) is being spent on compute capacity that will never be utilized. As more users are placed within the modern data center, lost resources become very expensive in terms of efficiency and utilization.
Because of this lost capacity, there needs to be a way to logically monitor the entire environment to prevent resource provisioning challenges from arising. In this white paper from Future Facilities, you will learn how the only way to prevent lost capacity is through the use of simulation and modeling techniques. These tools and solutions enable the four key capacities of data center infrastructure – space, power, cooling and cabling – to be proactively and collaboratively managed throughout the life of the facility.
This ever-changing environment makes successfully delivering the space, power, cooling and cabling requirements over time a very difficult task. To exacerbate the problem, those responsible for signing off data center budgets often have scant idea of the gap between the capacity they’ve paid for and what they actually get, and even less idea of what to do to close it. So there is little or no top-down pressure to tackle and rectify the problem.
At the root of this financial horror story are some simple but fundamental questions that every data center budget-holder should be able to answer:
- How much data center capacity have I lost?
- What is the true cost of lost capacity to my business?
- Do our IT and Facilities teams have the tools to reclaim this lost capacity?
- How can we prevent lost capacity in the future?
Developing new ways of controlling data center resources can be challenging and is often best taken one step at a time. In this whitepaper, Future Facilities recommends that data center operators with multiple facilities identify a pilot site to use as a basis for building their first Virtual Facility. The virtual model can then be aligned to any existing DCIM systems, monitoring tools or change management databases, as well as being fully incorporated into the facility management processes. Download this white paper today to understand the key aspects of lost resources and how to quickly overcome these data center challenges. | | 6:30p |
NTT Communications Acquires Controlling Interest in RagingWire  A look at the interior of one of the RagingWire data centers in Sacramento.
In a move that will dramatically expand its presence in the U.S. data center market, Japan’s NTT Communications will acquire an 80 percent equity interest in RagingWire Data Centers for $350 million, the companies said today. RagingWire’s founders and management team will continue to operate the company under the RagingWire brand and maintain a minority interest.
The deal will more than double NTT Com’s data center footprint in the U.S., where it currently has data centers in northern Virginia and Silicon Valley. NTT says the additional 650,000 square feet of space operated by RagingWire will allow it to meet strong demand for data center services in North America. It also positions NTT for future growth, as RagingWire has an expansion underway in Sacramento and has acquired property for a large campus in the key data center hub of Ashburn, Virginia.
NTT is also paying $525 million to acquire Virtela Technology Services, a Denver-based firm that specializes in managed network services, including software-defined networking (SDN) and enterprise cloud services. Between them, the Virtela and RagingWire deals represent an $875 million investment by NTT in the global data center market.
“We are rapidly expanding our capabilities to provide cloud and telecommunications solutions worldwide, and the deal with RagingWire is critical to increasing our overall capacity, providing data center infrastructure management tools which enhance colocation service’s reliability and efficiency,” said Akira Arima, CEO of NTT Communications. “RagingWire leads the data center industry in availability, innovation, and customer experience, and that will enhance our global cloud solutions significantly.”
Poised for Expansion
RagingWire has annual revenues of approximately $85 million and has been growing at about 30 percent a year. The company was founded in 2000 and has approximately 300 employees at its campuses in Sacramento, California, and Ashburn, Virginia.
RagingWire has begun construction of a new 150,000 square foot data center in Sacramento and will soon break ground on a 78-acre parcel of land in Ashburn, Virginia, where it plans to build up to 1.5 million square feet of data center space. The company has more than 200 Internet and enterprise customers, including such notable names as Flextronics, Polycom and NVIDIA.
“The RagingWire management team and employees are excited to be part of the NTT family of companies,” said George Macricostas, founder and CEO of RagingWire. “By joining NTT, we will be able to extend our data center platform globally, expand the markets we serve and add more strategic value to our customers.”
The acquisition reflects the increasingly global nature of the data center business. Although the U.S. remains the largest market for data center services, data center developers and service providers have been rapidly expanding their presence in the Asia Pacific market. RagingWire won’t be the only prominent provider with a Japanese parent company, as Telehouse is owned by Japanese telco KDDI. | | 7:00p |
The Barge Mystery: Floating Data Centers or Google Store?  The unusual structure on a barge off of Treasure Island, which is in San Francisco Bay. (Photo: Jordan Novet)
The prototypes of the “Google Navy” have been discovered on both coasts. But are they floating data centers? Or some kind of marketing facility for Google Glass?
CNet reported Friday that a barge in San Francisco Bay stacked high with shipping containers may be a floating data center being built by Google. A nearly identical facility has appeared in a harbor in Portland, Maine, according to the Portland Press Herald.
Both barges are owned by “By and Large LLC,” a mysterious company whose name echoes a fictional corporation from Wall-E and other Pixar films. CNet has found numerous hints that the firm is tied to Google, which has a history of using LLCs to seed its data center projects.
But is it a data center? San Francisco’s KPIX reports that the building is indeed a Google initiative, but is actually a secret marketing barge to promote Google Glass, the company’s new wearable tech offering. The TV station said work has been halted because Google doesn’t have permits to park the barge at San Francisco’s Fort Mason, as it had hoped. (See Data Center Knowledge photo spread of the San Francisco structure.)
So which is it? We don’t cover retail much, so we’ll leave that branch of speculation to others. But let’s look at the evidence for and against the data center theory.
Hints From Google’s Patents
Google’s interest in floating data centers was revealed in a 2008 patent, which generated significant discussion in the industry about the pros and cons of the concept. The structure in San Francisco Bay bears a strong resemblance to designs from Google patent filings for modular data center technology.
Google isn’t the only one that has been intrigued with the idea of seagoing server farms. In early 2008, start-up International Data Security announced plans to build a fleet of “maritime data centers” on cargo ships docked at ports in San Francisco Bay. But the plan was never funded and the efforts were discontinued.
Google’s first company-built data centers were assembled using shipping containers filled with servers, as seen in this photo of a facility built in 2005.
 A look at shipping containers packed with servers inside a Google data center. (Photo: Google)
Google soon shifted back to more traditional data center designs using rows of servers in a data hall. So why would it now pursue the “data barge” concept? Google isn’t likely to adopt such a shift in its mission-critical infrastructure unless it brings significant new capabilities or improved economics. Operating a floating data center in San Francisco wouldn’t appear to offer either possibility.
At the same time, the structure on the barge at Treasure Island closely resembles a data center design Google patented in 2010, which describes up to 100 server-filled shipping containers stacked four levels high.
 An image from a 2010 Google patent depicting a stack of data center containers.
The patent outlines how the stacked containers connect to a vertical utility spine providing power and network connections. Upper-level containers would be accessed via a series of stairs on the front side of the containers. A similar stairway is clearly seen on the mystery barge at Treasure Island.
 A staircase along the exterior of the barge in San Francisco Bay bears similarities to designs in Google patents. (Photo: Jordan Novet)
Google’s patent describes containers packed with up to 2,000 processors and 5 terabytes of storage. Container sizes could vary from 20 feet to as long as 53 feet, and would be optimized for outdoor installations and “sealed against environmental elements of wind and moisture.”
“As one example, a data center may be disposed onboard a ship,” Google said in its patent filing. “Electrical power may be provided by an on-ship generator, and the cooling plant may incorporate seawater.”
Speed to Market and Pre-Fab Construction
Google notes some of the advantages of containers in the patent filing.
“Such modular computing environments may facilitate quick assembly of large data centers,” the patent says. “Large portions of a data center may be prefabricated and quickly deployed; in particular, portions of data centers may be constructed in parallel, rather than in sequence. Critical portions of data centers may be mobile and easily transported from one site to another. Portions of the data center may be manufactured by manufacturing labor, rather than constructed by trade labor (e.g., in a controlled manufacturing environment rather than in an uncontrolled construction site), possibly resulting in reduced costs.”
For more clues, let’s shift from San Francisco to the East Coast.