Data Center Knowledge | News and analysis for the data center industry
Thursday, July 25th, 2013
12:30p
In Disaster Recovery Planning, Don’t Neglect Home Site Restoration

Michelle Ziperstein is the Marketing Communications Specialist at Cervalis LLC, which provides data backup and disaster recovery solutions for mission-critical data.
In IT, many of us live by Intel chairman Andy Grove’s famous maxim: “Only the paranoid survive.” When it comes to Business Continuity and Disaster Recovery (BCDR), this is true in spades.
The most immediate focus of disaster response and business continuity is the challenge of fulfilling your Emergency Response Plan (ERP), with priorities such as getting employees out of harm’s way, safeguarding company assets from further damage, switching to the backup systems within the shortest possible time, and, if you’ve reserved temporary office space at the BCDR data center (which you should), getting essential employees to that space where they can recover critical business functions and bring core operations back online.
Why is Planning for Home Site Restoration Critical?
However, home site restoration should be an equally important part of your disaster recovery plan. If your company has not provided adequate resources, manpower and planning to get your main office up and running again, you could get trapped in your recovery environment. On top of the cost of operating from a rented data center, your failover environment will very likely fall short of the full resources you had at the home site, causing your company to operate below full efficiency and lose business to competitors that manage to restore faster.
Explore Alternatives for Home Site
The most severe disruption would come from a permanent or long-term loss of access to your primary location; for this eventuality, you need a plan to set up normal operations at an alternate site. While a giant IT company like Google, which operates multiple locations, can respond to the loss of one data center by shifting load to other facilities and restoring from off-site backups, most companies lack that luxury. Maintaining a “cold site” is a good compromise between readiness and cost – such a site has the space and basic infrastructure in place, but not the live equipment of your production environment. If you have planned well, your applications and databases will be safely backed up at a BCDR data center; a comprehensive primary site restoration plan should also include ready candidates for a new permanent facility, as well as sources for all the equipment needed to run your regular operations.
Restoration Planning
If your primary location isn’t completely lost, your restoration plan will allow you to quickly assess the damage, find out which equipment and infrastructure are intact and can be restarted immediately, and draw up a plan for repairs and replacements for the facilities and equipment that were affected. A good restoration plan will recognize the possibility that you may need to split your workforce and workload between your primary location and your temporary office space at the business continuity data center, and will make arrangements for communication and management in two locations at once.
There are many other specific ways you can improve your turnaround time when it comes to home site restoration as part of DR planning. Neglect this aspect of disaster recovery at your peril.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
For more DCK business continuity/disaster recovery articles, visit our Disaster Recovery channel.

12:32p
SolidFire Raises $31 Million, Says Its SSD is Now Cheaper Than Disk

SolidFire is on, er, fire this month. It had a major customer win with Colt, and now it tops the month off by announcing that it has raised an additional $31 million in funding. The company also unveiled the SF9010, a huge, fast, all-SSD storage platform that the company says is the largest and fastest on the market today.
Oh, and the SSD solution costs less than disk, according to the company.
The company closed a Series C financing round led by Samsung Ventures, with all of its previous investors (NEA, Valhalla Partners and Novak Biddle Venture Partners) participating. The new round brings total funding to $68 million. The company will use the money to expand sales and marketing in support of growing enterprise and service provider demand.
“We feel that SolidFire is uniquely positioned to take advantage of the rapid growth in both public and private cloud computing markets,” said Jay Chong, Senior Director at Samsung Ventures. “They have demonstrated clear technological leadership with their patent pending QoS capabilities, and true scale-out architecture. And their strong customer traction puts them in the right position as the cloud computing market matures.”
9010 – SSD Cheaper Than Disk?
The company says that all-SSD storage has now crossed a major threshold and costs less than disk. The SF9010 comes in at less than $3/GB of effective capacity and less than $1 per IOPS. The SF9010 is a beast, built around the largest drive capacities currently on the market. A single node provides 9.6TB of raw capacity, 34TB of effective capacity and 75,000 IOPS. A full cluster of 100 nodes provides 960TB of raw capacity, 3.4PB of effective capacity and 7,500,000 IOPS. That gives the SF9010 greater capacity than traditional systems: the EMC VMAX 40K holds 3.2PB with 3TB drives in RAID 6, and the HP 3PAR StoreServ holds 1.6PB with 2TB drives in RAID 6.
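The cluster-level numbers are simply the per-node figures scaled linearly across 100 nodes, which is easy to sanity-check. A quick back-of-the-envelope sketch (all inputs taken from the figures quoted above):

```python
# Sanity check: SF9010 cluster figures scaled linearly from the per-node specs.
NODE_RAW_TB = 9.6        # raw capacity per node (TB)
NODE_EFFECTIVE_TB = 34   # effective capacity per node (TB)
NODE_IOPS = 75_000       # IOPS per node
NODES = 100              # full cluster size cited by SolidFire

print(f"Raw capacity:       {NODE_RAW_TB * NODES:,.0f} TB")          # 960 TB
print(f"Effective capacity: {NODE_EFFECTIVE_TB * NODES / 1000} PB")  # 3.4 PB
print(f"Aggregate IOPS:     {NODE_IOPS * NODES:,}")                  # 7,500,000
```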
“The SF9010 really highlights SolidFire’s storage architecture in action,” said SolidFire Founder and CEO, Dave Wright. “The system is incredibly flexible and is designed to consume the latest flash technology available. This enables us, and our customers, to keep pace with the rapid advances in the flash market and take advantage of falling cost and rising density. The SF9010 takes us across the threshold of flash becoming lower than the cost of performance disk.”
Along with the SF9010, SolidFire also released SolidFire Element OS Version 5, which adds VMware VAAI and VASA support, full encryption-at-rest without performance impact, and more detailed per-volume and per-tenant performance reporting.

12:45p
DataStax Raises $45 Million for Big Databases

DataStax raises $45 million to expand its product development and channel growth, Cloudera adds an Apache-licensed security module for Hadoop, and Univa and MapR partner on enterprise-grade workload management for Hadoop.
DataStax raises $45 million. DataStax announced the completion of a $45 million series D funding round led by Scale Venture Partners with participation from existing investors Lightspeed Venture Partners, Crosslink Capital and Meritech Capital Partners, and new investors DFJ Growth and Next World Capital. DataStax will use the investment to further its international expansion, channel growth and product development. “The evolution of enterprise applications and rise of big data has eclipsed traditional database capabilities and provides an opening for a significant new market entrant,” said Andy Vitus, partner, Scale Venture Partners. “DataStax is poised to disrupt the traditional RDBMS market and has already demonstrated significant momentum – signing an enviable list of enterprise customers, expanding into Europe, and unveiling innovative releases that make the product easier to adopt, deploy, and manage. We look forward to working with the team to further accelerate their expansion as they address this large and growing market.”
Cloudera adds Security Module for Hadoop. Cloudera announced Sentry – a new Apache-licensed open source project that delivers the industry’s first fine-grained authorization framework for Hadoop. Designed to meet the enterprise Role Based Access Control (RBAC) requirements of highly regulated industries, Sentry is a security module that integrates with the open source SQL query engines Apache Hive and Cloudera Impala, delivering advanced authorization controls to enable multi-user applications and cross-functional processes for enterprise datasets. Sentry represents a quantum leap forward for Hadoop and Cloudera’s Platform for Big Data, enabling enterprises in the public and private sectors to leverage the power of Hadoop while remaining compliant with regulatory requirements such as HIPAA, SOX and PCI. “As Hadoop crosses over to the enterprise, it will be expected to deliver the same level of security for protecting sensitive data as any mission-critical data platform,” said Tony Baer, principal analyst at Ovum. “With the announcement of Sentry, Cloudera is addressing an important piece of the puzzle, especially with regard to role-based data access privileges below file level for SQL-on-Hadoop paths like Hive and Impala. It is one of the pieces that must fall into place if Hadoop is to fulfill its promise as a powerful extension of analytic computing environments.”
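To make the idea of fine-grained, role-based authorization concrete, here is a toy sketch in Python. It is not Sentry’s actual policy format or API; the group, role and table names are hypothetical, and the point is only to show how privileges attach to roles rather than to individual users:

```python
# Toy model of role-based authorization: groups map to roles, roles map to
# (table, action) privileges. Illustrative only -- not Sentry's policy syntax.
ROLES = {
    "analyst": {("sales_db.orders", "SELECT")},
    "etl":     {("sales_db.orders", "SELECT"), ("sales_db.orders", "INSERT")},
}
GROUPS = {"finance_team": ["analyst"], "pipeline_svc": ["etl"]}

def is_authorized(group: str, table: str, action: str) -> bool:
    """Allow an action only if one of the group's roles grants it on that table."""
    return any((table, action) in ROLES[role] for role in GROUPS.get(group, []))

print(is_authorized("finance_team", "sales_db.orders", "SELECT"))  # True
print(is_authorized("finance_team", "sales_db.orders", "INSERT"))  # False
```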
Univa and MapR partner on enterprise-grade workload management. Univa and MapR announced a partnership to integrate Univa Grid Engine with the MapR enterprise platform. The partnership enables customers to pair the MapR Distribution for Hadoop with Univa Grid Engine’s policy control management and infrastructure sharing to run mixed workloads with insightful enterprise reporting and analytics. It matches Univa’s distributed resource management software with MapR’s enterprise-grade big data platform, letting enterprises deploy a best-in-breed Hadoop solution with enterprise policy controls, such as allowing different end users to securely access the cluster under fair-share rules. “Univa Grid Engine is already a key product in the Big Data infrastructure mix, but with this partnership we are providing customers with integration capabilities to the unrivaled enterprise-class MapR platform for Hadoop,” said Rahul Jain, VP of Big Data, Univa. “This was a natural marriage as MapR provides the most advanced enterprise-grade capabilities for Hadoop, while Univa operationalizes mission-critical applications in mixed workloads to reach its maximum potential and accelerate Hadoop deployments into production.”

3:15p
Number of U.S. Government IT Facilities Rises to 7,000 
The U.S. government wants its servers to come out of the closets. But first it has to figure out exactly how many server closets it has, and the number continues to be a moving target.
As it seeks to consolidate its IT infrastructure across dozens of agencies, the government has had trouble sorting out how many data centers it has, and even more trouble adding up all the server closets, a category that includes rooms smaller than 100 square feet.
As a result, the number of IT facilities included in the Federal Data Center Consolidation Initiative (FDCCI) continues to grow. The number, which started at 432 in 1999, grew to 3,000 last year and has now exploded to nearly 7,000.
It should come as no surprise that the Office of Management and Budget (OMB) has to explain this to a House Oversight and Government Reform subcommittee this week. Part of the explanation is defining what to count. Several years into the consolidation process, the FDCCI was integrated with the new PortfolioStat initiative, meaning that sub-500 square foot data centers (or server closets) would now be counted. More than 70 percent of federal “data centers” are actually server closets, according to testimony from the General Services Administration’s David McClure.
Now Over 7,000 Data Centers
So how many “data centers” are there? The most recent number, given during a joint House-Senate briefing, was 7,000. This is up from a recent Government Accountability Office finding that 22 of the 24 agencies participating in the OMB-led FDCCI now house 6,800 data centers – more than double the last estimate of 3,133. The sprawl is so extensive that it is hard to pin down a figure, as DCK noted in late 2012.
“Initially, OMB required agencies to report only data centers that were greater than 500 square feet in size and that met one of the tier data center classifications defined by the Uptime Institute,” said the General Services Administration’s David McClure in Congressional testimony. “Based on that definition, the first data center inventory, reported in October 2010, identified the data center asset baseline as 2,094 data centers.”
In a sense, the number has been growing along with the government’s ambitions. Server closets in office buildings allow agency staff to keep their IT assets nearby, but are typically less efficient than data centers. A key goal of the consolidation effort is to shift equipment from legacy facilities into data centers with energy efficient designs. With a big chunk of the government’s $82 billion in IT spending living in small inefficient rooms, the opportunity for savings is immense.
New Focus on Optimization
Federal agencies had closed 484 data centers as of May 2013, up from 381 closures when DCK last checked in, in November 2012. A total of 855 closures are planned by the end of FY 2013, according to Federal CIO Steven VanRoekel.
The Government Accountability Office (GAO) says the consolidation effort must provide metrics beyond facility closures. “OMB had not tracked and reported on other key performance measures, such as progress against the initiative’s cost savings goal of $3 billion by the end of 2015,” according to David Powner, Director of Information Technology Issues at the GAO.
VanRoekel agrees that these key performance measures are imperative to FDCCI’s success. “In the initial stages of the effort, it was necessary to focus on data center counts and physical closures,” said VanRoekel. “Today, new incentives are focused on a more outcome-based approach, to improve the overall efficiency and effectiveness of data center operations and optimize total cost of ownership.”
The suggested next step is to classify data centers as core and non-core, according to the GSA’s McClure. The core data centers will serve as consolidation points, thanks to their economies of scale. Agencies are encouraged to concentrate on optimizing their data centers against total cost of ownership metrics, while striving to reach an overarching goal of closing 40 percent of facilities.
In conjunction with the FDCCI Task Force, the GSA developed a tool to help agencies identify and select their core data centers. It defines nine draft criteria that are key attributes for core data centers:
- Power usage effectiveness (PUE) must be lower than 3.0
- Data center must be metered for use of electricity
- Agency must have sufficient information to calculate a cost of operating system per hour (COSH) score
- Virtualization must be at least 40% – Virtualization is defined as a technology that allows multiple, software-based machines, with different operating systems, to run in isolation, side-by-side, on the same physical machine.
- There must be a ratio of at least 10 servers per full-time equivalent (FTE)
- Power capacity must be at least 30 watts per square foot
- Facility utilization must be between 20% and 80% of the data center space
- Data center must meet at least the Tier One standards defined by the Uptime Institute
- Data center must be agency owned, leased or in the cloud
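Taken together, the nine draft criteria amount to a screening checklist. A minimal sketch of how an agency might encode that screen is below; the field names and the sample facility are hypothetical, and only the thresholds come from the list above:

```python
# Rough screen against the nine draft "core data center" criteria listed above.
# Field names and the sample facility are hypothetical illustrations.
def is_core_candidate(dc: dict) -> bool:
    """Return True if a facility passes all nine draft criteria."""
    return (
        dc["pue"] < 3.0                         # PUE lower than 3.0
        and dc["metered_power"]                 # electricity use is metered
        and dc["cosh_known"]                    # enough data to compute a COSH score
        and dc["virtualization_pct"] >= 40      # at least 40% virtualized
        and dc["servers_per_fte"] >= 10         # at least 10 servers per FTE
        and dc["watts_per_sqft"] >= 30          # at least 30 watts per square foot
        and 20 <= dc["utilization_pct"] <= 80   # 20-80% of the floor space in use
        and dc["uptime_tier"] >= 1              # at least Uptime Institute Tier One
        and dc["ownership"] in {"owned", "leased", "cloud"}
    )

sample = {"pue": 1.8, "metered_power": True, "cosh_known": True,
          "virtualization_pct": 55, "servers_per_fte": 14, "watts_per_sqft": 60,
          "utilization_pct": 65, "uptime_tier": 2, "ownership": "owned"}
print(is_core_candidate(sample))  # True
```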
The GSA has also developed a Total Cost of Ownership (TCO) model.
The bottom line is that the number keeps changing because the definition keeps changing, and agencies keep finding more sprawl. Going forward, the FDCCI will focus more on key performance measures, total cost of ownership, and identifying core data centers to spearhead consolidation efforts.

6:47p
Best of the Data Center Blogs for July 25th

Here’s a roundup of some interesting items we came across this week in our reading of data center industry blogs:
“Watts per Square Meter”: the Wrong Way to Specify Density – The Schneider Electric blog examines how to calculate power density requirements. “The traditional method for specifying power density using a single figure such as watts per square meter (or foot) is an unfortunate practice that often leads to confusion as well as a waste of energy and money.”
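The core of Schneider’s argument is that an averaged area figure hides how unevenly load is distributed across racks. A contrived sketch (all figures hypothetical) makes the point: two rooms with identical W/m² averages can have very different per-rack power and cooling requirements:

```python
# Two hypothetical rooms with the same average density (W/m^2) but very
# different power and cooling requirements at the rack level.
room_area_m2 = 100
uniform_room = [2_000] * 20             # 20 racks at 2 kW each
mixed_room   = [10_000] * 4 + [0] * 16  # 4 racks at 10 kW, 16 empty positions

for name, racks in [("uniform", uniform_room), ("mixed", mixed_room)]:
    avg_density = sum(racks) / room_area_m2
    print(f"{name}: {avg_density:.0f} W/m^2 average, peak rack {max(racks):,} W")
# Both rooms average 400 W/m^2, yet one must deliver and cool 10 kW per rack.
```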
Thunder and Lightning and Blackouts, Oh My! – DataCave looks at the seasonal issue of summer power outages: “As temperatures climb, individuals and companies across the Midwest find themselves using more power and experiencing rolling blackouts. While power outages and blackouts are not exclusive to the summer months, they are often more prevalent during these months. The effects of these outages, while not always devastating or headline generating, are often costly.” First of a series. See part two for more on this topic.
Deconstructing SoftLayer’s Three-Tiered Network – At the InnerLayer blog, they’re talking networking. “SoftLayer’s hosting platform features an innovative, three-tier network architecture: Every server in a SoftLayer data center is physically connected to public, private and out-of-band management networks. This ‘network within a network’ topology provides customers the ability to build out and manage their own global infrastructure without overly complex configurations or significant costs, but the benefits of this setup are often overlooked.”
Giving the Web Hosting Industry a Voice – At The WHIR, David Hamilton examines hosting and regulation: “Without general oversight or a single, unified voice, the web hosting industry lacks the power to counter changes that would negatively affect them, their customers, and the Internet as a whole. And while cooperation certainly exists in web hosting, there are some serious issues and considerations standing in the way of web hosts addressing industry issues, and promoting shared ideals and goals.”

7:04p
WHIR Networking Event: Washington, DC

The WHIR brings together professionals in the hosting industry for fun (and free!) networking events at different locales in the U.S. and internationally as well. The one-night event is an opportunity to meet like-minded industry executives and corporate decision makers face-to-face in a relaxed environment with complimentary drinks and appetizers.
The WHIR provides a great local venue, and you do the rest – do business, make new connections and learn more about those in the web hosting industry.
Date: Thursday, August 22, 2013
Time: 6:00 pm to 9:00 pm
Location: The Gryphon, 1337 Connecticut Avenue, NW, Washington, DC, 20036, USA
Learn more and RSVP!
YOU MUST BRING A BUSINESS CARD TO WIN A PRIZE
For more events, return to the Data Center Knowledge Events Calendar.

7:05p
WHIR Networking Event: Los Angeles

The WHIR brings together professionals in the hosting industry for fun (and free!) networking events at different locales in the U.S. and internationally as well. The one-night event is an opportunity to meet like-minded industry executives and corporate decision makers face-to-face in a relaxed environment with complimentary drinks and appetizers.
The WHIR provides a great local venue, and you do the rest – do business, make new connections and learn more about those in the web hosting industry.
Date: Thursday, October 24, 2013
Time: 6:00 pm to 9:00 pm
Location: TBA , Los Angeles, CA, USA
Learn more and RSVP!
YOU MUST BRING A BUSINESS CARD TO WIN A PRIZE
For more events, return to the Data Center Knowledge Events Calendar.

7:06p
WHIR Networking Event: Houston

The WHIR brings together professionals in the hosting industry for fun (and free!) networking events at different locales in the U.S. and internationally as well. The one-night event is an opportunity to meet like-minded industry executives and corporate decision makers face-to-face in a relaxed environment with complimentary drinks and appetizers.
The WHIR provides a great local venue, and you do the rest – do business, make new connections and learn more about those in the web hosting industry.
Date: Thursday, November 14, 2013
Time: 6:00 pm to 9:00 pm
Location: TBD, Houston, TX
Learn more and RSVP!
YOU MUST BRING A BUSINESS CARD TO WIN A PRIZE
For more events, return to the Data Center Knowledge Events Calendar.

8:40p
CoreSite Raises Guidance, Sees Gains in Interconnections

The server hall of a data center operated by CoreSite, which reported second quarter revenues.
CoreSite Realty raised its projections for earnings, citing solid customer growth and an increase in revenue from interconnections, a key strategic focus for the company.
Total operating revenue for the second quarter, which ended June 30, was $57.7 million, a 13.9 percent increase year over year and up 4.7 percent over the prior quarter.
The company raised and narrowed its per-share guidance to $1.76 to $1.84, from the prior range of $1.72 to $1.82. Guidance for adjusted EBITDA and sales sits at the high end of its range even as revenue tracks toward the lower end; the company attributed the gap largely to a customer in its SV3 data center using less power, offset by outperformance on high-margin services like cross connects.
The company is showing particular strength in interconnection revenue, which is up 17 percent from the same period last year and 7 percent from last quarter. Interconnections now account for $7.1 million in quarterly revenue.
“We are pleased with the continued evolution we saw in our sales mix, recording an increasing number of leases bringing high-value applications to our platform,” said Tom Ray, CEO of CoreSite. “We believe that we have considerable upside embedded in our portfolio as we increase the utilization of existing and new inventory, positively mark to market expiring capacity, and most importantly, continue to drive increased network density and valuable customer communities across our data centers.”
Solid Customer Growth
CoreSite executed 115 new and expansion leases in the quarter, including agreements with 33 new customers. “We see market dynamics consistent with the prior 12 months. Market demand remains healthy, with performance little changed from Q1,” the company said on the earnings call. Here’s a look at some of the key developments for CoreSite (COR) in the quarter:
- It leased 42,672 net rentable square feet of new and expansion space at annualized GAAP rent of $147 per square foot. New and expansion data center leases represented $5.8 million of annualized GAAP rent, at a rate of $188 per square foot. Rent growth on signed renewals was 5.4% on a cash basis and 11.7% on a GAAP basis, with rental churn of 2.0 percent.
- It saw a 35% increase in multi-site leases. Forty of the 115 leases signed were with customers who do business in more than one CoreSite data center, meaning customers are buying across its platform. It is averaging $180 per net rentable square foot, and it recently renewed roughly 44,000 square feet at an average 5.4% rent increase.
- The strongest market was Los Angeles, with LA2 representing 78% of GAAP rent signed on the LA campus in the first half of 2013 (hence the need for expansion, discussed below). Northern Virginia was the second strongest market in Q2.
- In New York, CoreSite will tether NY2 to NY1, as well as to 60 Hudson and other sites. The strategy has worked in other markets; the company has long used this “hub and spoke” approach of tethering facilities when entering a market as a way to seed growth.
- There were 215 new and expansion leases in the network vertical, so CoreSite upped its connectivity significantly. There were 18 cloud customer leases. These customers strengthen CoreSite’s offerings, through both network and platform.
Among customers mentioned, Hibernia Networks was a key expansion. FK broadband in South Korea is leveraging CoreSite to expand into North America, and NTT expanded into three additional locations to support its North American growth. In Chicago, a leading global mobility provider, along with a handful of others in the space, signed up; CoreSite sees mobile applications as a key growth opportunity. Enterprise IaaS provider iland was a notable addition to the company’s open cloud exchange, which is looking to add more participants going forward. The company also signed an expansion supporting a SaaS offering from a large financial services provider.
Development Activity
The company says it is continuing to invest, with four construction projects currently underway. These projects total 236,673 square feet of usable raised-floor data center space, including new data centers at SV5 (San Francisco Bay Area), VA2 (Northern Virginia) and NY2 (New York). There is also an expansion underway in Los Angeles, at its LA2 facility.
- NY2 remains on schedule, first phase in the fourth quarter
- In Virginia, first phase will be late Q1 in 2014
- In LA, the company is adding 20,400 square feet
As of the end of the quarter, CoreSite is about $59.4 million into the estimated $188 million required to complete these projects.
Healthy Wallet
The company has $2.8 million in cash on its balance sheet and $324.5 million of capacity remaining under its credit facility. Long-term debt stands at $132 million, which the company puts at 1.2x annualized revenue. CoreSite expects to spend about $120 million in capex during the rest of the year, leaving it with adequate liquidity.
It’s sharing the love too, having declared a dividend in May of $0.27 per share of common stock and equivalents for the second quarter.
The company has 14 data center campuses and counts more than 750 customers.

9:10p
Savvis Announces Availability of Rebuilt Cloud Data Center

Savvis, the cloud hosting unit of CenturyLink, announced global availability of an infrastructure-as-a-service offering called Cloud Data Center. Built on VMware vCloud Director 5.1 and Cisco’s Unified Data Center technologies, it is aimed at enterprise hybrid cloud needs.
“Businesses are looking to migrate applications hosted on-premise into the cloud using a variety of hybrid solutions,” said Andrew Higginbotham, chief technology officer at Savvis. “Designed for customers who rely on VMware technologies, Savvis Cloud Data Center streamlines extensions into the cloud with easy-to-use familiar interfaces and tools for scaling performance to their needs.”
Savvis’ Cloud Data Center Services consist of:
- vCloud Data Center – Public: a multitenant public cloud
- Cloud Data Center – Private: dedicated compute infrastructure
This isn’t Savvis’ first go-round in the IaaS world, so what’s new here? The offering has been completely rebuilt compared with previous offerings such as Virtual Private Data Center, and it has a new user interface. In addition, native support for the vCloud Director 5.1 API is a big deal to enterprises: it enables a consistent cloud management experience and smoother hybrid cloud adoption.
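For context, vCloud Director 5.1 exposes a versioned REST API, which is what makes that kind of tooling consistency possible. A minimal login sketch is below; the hostname, organization and credentials are hypothetical placeholders, and the endpoint and header conventions reflect the generic vCloud 5.1 API rather than anything Savvis-specific:

```python
# Minimal sketch of authenticating against a vCloud Director 5.1 API endpoint.
# The host, org and credentials below are hypothetical placeholders.
import requests

BASE = "https://vcloud.example.com/api"
HEADERS = {"Accept": "application/*+xml;version=5.1"}  # pin the API version

# Log in: credentials are supplied as user@organization via HTTP Basic auth.
resp = requests.post(f"{BASE}/sessions", auth=("admin@MyOrg", "secret"), headers=HEADERS)
resp.raise_for_status()
token = resp.headers["x-vcloud-authorization"]  # session token for subsequent calls

# Example follow-up call: list the organizations visible to this session.
orgs = requests.get(f"{BASE}/org", headers={**HEADERS, "x-vcloud-authorization": token})
print(orgs.status_code, len(orgs.text), "bytes of XML")
```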
The pitch is that Cloud Data Center helps facilitate rapid migration of enterprise applications to the cloud, with the service offered in multiple data centers across the globe. Customers are able to retain governance and control of corporate assets in the cloud, through proven security technologies and best practices. Moving to cloud optimizes Total Cost of Ownership through the pay-as-you-go consumption model.
Cloudswell signed up as a beta client and continues to use the service.
“We vetted many providers with great solutions in specific types of service offerings, but chose Savvis Cloud Data Center based upon its ability to provide top-tier services across many different service classes,” said Roger Hale, vice president of cloud services and chief information security officer at Cloudswell. “Working with Savvis, we have employed a hybrid cloud solution that provides scalability, flexibility and budgetary control with the visibility and auditability to meet our customers’ information security and compliance requirements.”
Savvis Cloud Data Center is part of a broad suite of scalable, on-demand cloud services available through Savvis data centers across North America, Europe and Asia. Savvis offers a complete portfolio of IT solutions including managed hosting, colocation, consulting services and CenturyLink’s global network services.