Data Center Knowledge | News and analysis for the data center industry
 

Monday, September 30th, 2013

    11:30a
    Data Center Jobs: RagingWire Job Fair

    At the Data Center Jobs Board, we have a new job listing from RagingWire, which is seeking applicants for Data Center Jobs in Ashburn, Virginia.

    RagingWire Data Centers will be hosting an exclusive two-day job fair event on Friday, October 18th from 4:00pm to 8:00pm, and Saturday, October 19th from 9:00am to 2:30pm, at the Embassy Suites Dulles – North/Loudoun in Ashburn, VA. You can RSVP early by submitting your resume online, as individuals may be contacted for in-person interviews during the event. Walk-ins are welcome, but an RSVP is highly encouraged. Please be sure to tell your friends, family and co-workers about this exciting annual job fair event. To view full details and apply, see the job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    12:30p
    Fibre Channel: The Best Kept Secret in Enterprise Data Centers

    Tim Lustig is Director of Corporate Marketing at QLogic Corporation.

    TIM LUSTIG
    QLogic Corporation

    Driven by the desire for a competitive edge, IT landscapes are rapidly changing to improve compute infrastructure and better serve business needs. The storage network plays a vital role in these environments, and IT administrators know that Fibre Channel inherently delivers what is required from a storage transport technology: performance and reliability. In a recent end-user survey by Enterprise Strategy Group* (ESG), 85 percent of participants responded that they will increase or maintain investment levels in Fibre Channel SANs, citing performance and reliability as key drivers over other storage technologies. According to IDC, more than 90 percent of Fortune 1,000 data centers use Fibre Channel as the de facto standard for storage networking. In addition, IDC recently projected a compound annual growth rate of more than 60 percent for enterprise data storage through 2015.

    A key factor driving IT transformation and Fibre Channel health is server virtualization. The availability of servers based on Intel’s E5 processors, combined with new features in VMware’s vSphere 5.1 and Microsoft’s Hyper-V hypervisors, has introduced a game-changing compute platform. This platform supports new levels of virtual machine (VM) density, and Tier-1 applications that previously required dedicated server hardware can now run on virtual servers for the first time.

    ESG’s survey confirms this rapid growth in server virtualization. Of all servers capable of being virtualized within their data centers, 41 percent of respondents reported that between 51 and 100 percent of those servers were virtualized. Over the next three years, 71 percent of respondents planned to virtualize between 51 and 100 percent of servers that could be virtualized. In the same three-year time frame, IDC reports that VM deployments will grow by more than 91 million.

    Infrastructure to Support New Technologies Lags Behind

    While improved hypervisors and E5-based servers are driving significant deployment in many enterprise data centers, the I/O and network infrastructure to support these new technologies lags far behind. To fully optimize a virtualized data center, servers need maximum I/O capacity to support Tier-1 applications that require higher bandwidth. In addition, increased bandwidth is needed for densely virtualized servers, which aggregate I/O from multiple VMs to the host’s data path. Highly virtualized environments generate a tremendous amount of I/O traffic, magnifying the I/O performance bottleneck issue already present in most enterprises.

    As companies move Oracle database applications, Microsoft SQL Server, SAP and other mission-critical applications onto virtualized servers, the robust nature of Fibre Channel becomes necessary for satisfying storage I/O performance and data integrity requirements. Fibre Channel’s credit-based flow control – one of the exclusive features of Fibre Channel that make it so well suited for block-level storage data networks and interconnects – delivers data as fast as the destination buffer is able to receive it, without dropping frames or losing data.
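    Fibre Channel’s buffer-to-buffer credit mechanism is easy to picture in miniature. The following is a minimal sketch in Python of the general idea, assuming a simplified model in which each credit corresponds to one receive buffer and each freed buffer returns a credit (an R_RDY); real Fibre Channel credits are negotiated at fabric login and enforced in hardware:

    ```python
    # Toy model of credit-based (buffer-to-buffer) flow control.
    # Illustrative only; not an implementation of the actual FC protocol.
    from collections import deque

    class Receiver:
        def __init__(self, buffer_slots):
            self.buffers = deque(maxlen=buffer_slots)

        def accept(self, frame):
            self.buffers.append(frame)      # frame lands in a reserved buffer

        def drain(self):
            """Process one buffered frame, freeing its slot."""
            return self.buffers.popleft() if self.buffers else None

    class Sender:
        def __init__(self, initial_credits):
            self.credits = initial_credits  # one credit per receive buffer

        def send(self, frame, receiver):
            if self.credits == 0:
                return False                # sender pauses; nothing is dropped
            self.credits -= 1
            receiver.accept(frame)
            return True

        def on_r_rdy(self):
            self.credits += 1               # receiver signaled a freed buffer

    rx, tx = Receiver(buffer_slots=4), Sender(initial_credits=4)
    for i in range(6):
        print(f"frame-{i}:", "sent" if tx.send(f"frame-{i}", rx) else "held (no credits)")
    rx.drain(); tx.on_r_rdy()               # one frame processed, credit returned
    print("after R_RDY:", tx.send("frame-4", rx))
    ```

    The point of the model: transmission is gated by the receiver’s known buffer capacity, so frames are never sent into a full buffer and never dropped, which is exactly the property that matters for block storage traffic.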

    Cloud Computing Driving Demand

    Cloud architectures are also driving demand for more modern IT landscapes that deliver multi-tenancy, greater bandwidth and faster transactional response times. Multi-tenant infrastructures by nature put additional stress on storage networks for greater reliability, stability and scalability. Within these new IT architectures, storage technology must support granular Quality of Service (QoS) as an essential attribute to avoid I/O bottlenecks and maintain Service Level Agreements (SLAs). Fibre Channel is deterministic by design and can be fine-tuned with capabilities that eliminate network congestion and maximize efficiency and performance to guarantee SLAs.

    Cloud computing, along with the latest VMware and Windows hypervisor implementations, is pushing the limits of what storage I/O can handle. The evolving needs of storage networks point directly to Fibre Channel and, more specifically, to the advanced capabilities of 16Gb Gen 5 Fibre Channel. Gen 5 Fibre Channel is backward compatible with the huge installed base that represents an incredible investment by the world’s largest and most successful companies. It continues to empower end users by delivering architectural flexibility, enabling a more agile, cost-effective and efficient environment. With a strong roadmap on the horizon, Fibre Channel provides confidence that investments in the technology will be preserved for the foreseeable future, while its inherent characteristics play an even greater role in the protocol’s long-term viability.

    Notes:
    *ESG Fibre Channel End User Survey commissioned by QLogic Corporation and completed in April 2013.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    12:30p
    QTS Realty IPO May Raise Up to $422 Million

    The QTS Realty data center in Suwanee, Georgia. The company expects to price its IPO between $27 and $30 per share. (Photo: QTS)

    QTS Realty Trust hopes to raise as much as $422 million when it goes public through an IPO, which is currently scheduled for Oct. 9. In a regulatory filing this week, the data center operator said it expects to sell 12.25 million shares of common stock at a price between $27 and $30 per share. At the midpoint of the proposed range, QTS Realty Trust would have a market value of about $995 million. The company plans to convert to a real estate investment trust (REIT) and list on the NYSE under the symbol QTS.

    The QTS Realty IPO is a key step in the company’s ambitious growth strategy, which has focused on buying massive industrial facilities and adapting them for data center use. The company hopes to expand seven of its data centers across the country, investing up to $277 million to add more than 312,000 square feet of customer space in key markets over the next two years.

    QTS operates 10 data centers in seven states, offering 714,000 square feet of raised-floor data center space and 390 megawatts of available utility power. The company reported revenue of $84.4 million in the first half of 2013, with net income of $7.1 million and funds from operations (FFO, a key benchmark for REITs) of $26.7 million. In 2012, the company had revenues of $157.6 million and FFO of $45.2 million.
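    For readers unfamiliar with the metric, FFO is conventionally computed by adding non-cash real estate depreciation and amortization back to net income and excluding gains on property sales. A sketch of that arithmetic follows; the depreciation figure is hypothetical, back-solved to match the totals reported above, and is not taken from QTS’s filing:

    ```python
    # Conventional (NAREIT-style) funds-from-operations arithmetic.
    # The D&A input below is a hypothetical, back-solved illustration.
    def funds_from_operations(net_income, depreciation_amortization, gains_on_sales=0.0):
        """Add back non-cash real estate D&A; strip one-time sale gains."""
        return net_income + depreciation_amortization - gains_on_sales

    # $7.1M net income plus an assumed $19.6M of D&A gives the $26.7M FFO reported:
    print(f"${funds_from_operations(7.1, 19.6):.1f}M")
    ```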

    The company is one of three in the hosting and data center sector that have filed plans for IPOs this year, along with Endurance International Group and IO.

    1:00p
    Nirvanix Collapse Provides Stress Test for Cloud Migration

    The collapse of Nirvanix will test the migration capabilities of cloud storage providers.

    It’s been the vision of cloud nirvana: a world in which applications and large amounts of data can migrate seamlessly from one public cloud to another. The next two weeks will provide a stress test of the cloud computing sector’s progress toward that goal.

    On Oct. 15, cloud storage provider Nirvanix will shut its doors, having run dry of funding. That leaves two weeks in which all the many petabytes of data stored on the Nirvanix cloud must find a new home.

    “For the past seven years, we have worked to deliver cloud storage solutions,” Nirvanix said Friday on its website. “We have concluded that we must begin a wind-down of our business and we need your active participation to achieve the best outcome.

    “We are dedicating the resources we can to assisting our customers in either returning their data or transitioning their data to alternative providers who provide similar services including IBM SoftLayer, Amazon S3, Google Storage or Microsoft Azure.”

    Scramble for Customers, But Many Questions

    The confirmation of Nirvanix’s shutdown follows days of published reports that the end was near. It has kicked off a scramble for customers to move their data, a process that would be easier if they weren’t all trying to do it at the same time. The mass exodus of data will test the pipes at Nirvanix, which says it has “established a higher speed connection with some companies to increase the rate of data transfer from Nirvanix to their servers.”

    There is not much precedent for this type of mass migration, which raises tough questions about the process, as noted by Jay Heiser of Gartner.

    “What kind of a data storm do you get when a thousand customers all simultaneously start trying to copy out petabytes of data?” Heiser writes. “How much technical support can a company offer when they are going bankrupt? Can you reasonably expect that their staff will be motivated to stay on, and undertake any necessary heroic efforts that might be needed to help you recover your data? Where are you going to put that data?”

    Competitors Circle With Rescue Plans

    Nirvanix says it has an agreement with IBM SoftLayer to assist customers in rescuing their data. Many other cloud providers are pitching their services to Nirvanix customers. Companies running Google ads courting Nirvanix customers include Rackspace, Microsoft, SunGard, ViaWest and Verizon Terremark, among others.

    CoreSite Realty says that the easiest migrations will be ones where the data needs only to travel across the aisle, rather than across the web.

    “One of the issues being faced by customers is time,” CoreSite noted in an email to DCK. “Many will not be able to move terabytes and petabytes of data off Nirvanix’s cloud by October 15. But since Nirvanix is colocated within CoreSite’s One Wilshire data center campus in Los Angeles (among other locations), the proximity of other cloud storage providers within the facility – including Amazon Web Services and Synoptek – is making it easier and faster for CoreSite to transfer customer data to other services.”

    Cloud Exit Plans Are Suddenly Sexy

    Gartner analyst Kyle Hilgendorf said the Nirvanix meltdown should prompt cloud users to get serious about developing an exit strategy for migrating their cloud apps and data. Gartner has been advocating this for some time, but Hilgendorf says few firms have taken concrete steps.

    “Cloud exits are not nearly as sexy as cloud deployments – they are an afterthought,” Hilgendorf wrote in a blog post. “It’s analogous to disaster recovery and other mundane IT risk mitigation responsibilities. These functions rarely receive the attention they deserve in IT, except for immediately following major events like Hurricane Sandy or 9/11.

    “If you are a customer of any other cloud service (that is basically all of us) – take some time and build a cloud exit strategy/plan for every service you depend upon,” he added. “Cloud providers will continue to go out of business. It may not be a frequent occurrence, but it will happen.”

    The Nirvanix collapse could also prompt a fresh look at services that provide migration-friendly cloud storage options. That’s the pitch from cloud broker Oxygen Cloud, which allows customers to use one or more storage providers behind its service.

    “Oxygen connects your Oxygen Drive to your choice of storage, and creates a secure container between each client and storage backend,” writes Oxygen VP of Product Leo Leung in a blog post. “Behind the scenes, Oxygen directs reads and writes between the client and storage, never getting in the way. The user experience looks the same: all users see is a different shared folder on their desktops and devices.”
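    The broker pattern Leung describes can be sketched as a thin layer that presents one interface to clients while routing reads and writes to a pluggable backend. The class and method names below are invented for illustration and are not Oxygen’s actual API:

    ```python
    # Hypothetical sketch of a storage broker; not Oxygen Cloud's real API.
    from abc import ABC, abstractmethod

    class StorageBackend(ABC):
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...
        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class InMemoryBackend(StorageBackend):
        """Stand-in for a real provider (S3, Azure, SoftLayer, ...)."""
        def __init__(self):
            self._objects = {}
        def put(self, key, data):
            self._objects[key] = data
        def get(self, key):
            return self._objects[key]

    class BrokeredDrive:
        """Client-facing drive: swapping the backend moves the data
        without changing what users see."""
        def __init__(self, backend):
            self.backend = backend
        def write(self, key, data):
            self.backend.put(key, data)
        def read(self, key):
            return self.backend.get(key)
        def migrate_to(self, new_backend, keys):
            for key in keys:                 # copy objects, then cut over
                new_backend.put(key, self.backend.get(key))
            self.backend = new_backend

    drive = BrokeredDrive(InMemoryBackend())
    drive.write("report.pdf", b"...")
    drive.migrate_to(InMemoryBackend(), keys=["report.pdf"])
    print(drive.read("report.pdf"))          # same data, new backend
    ```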

    Oxygen said that it has already helped migrate one of its customers that stored “a ton of data” for more than 1,000 customers on Nirvanix.

    Will Nirvanix customers be able to successfully migrate their data? If network bottlenecks present challenges, there’s always the physical approach. As the old saying goes: “Never underestimate the bandwidth of a truck full of backup tapes.”
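    The adage holds up to rough arithmetic. Using assumed figures (say, 5 petabytes to evacuate over a link sustaining 70 percent of line rate), the network alone cannot meet a two-week deadline:

    ```python
    # Back-of-envelope check of the "truck full of tapes" adage.
    # All figures are illustrative assumptions, not Nirvanix specifics.
    PETABYTE = 1e15  # bytes

    def transfer_days(data_bytes, link_gbps, efficiency=0.7):
        """Days to move data over a link at a given share of line rate."""
        seconds = data_bytes * 8 / (link_gbps * 1e9 * efficiency)
        return seconds / 86_400

    data = 5 * PETABYTE
    print(f"10 Gbps link: {transfer_days(data, 10):5.1f} days")   # ~66 days
    print(f"40 Gbps link: {transfer_days(data, 40):5.1f} days")   # ~17 days
    # Versus roughly 1,700 LTO-5 tapes (about 3 TB each at 2:1
    # compression) and a few days of courier time, regardless of distance.
    ```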

    1:30p
    More Data Center Buyouts Ahead in 2014, Predicts DH Capital
    Peter Hopper, president and co-founder of DH Capital. (Photo: Rich Miller)


    LAS VEGAS - Thus far, 2013 has been a fairly quiet year for data center mergers and acquisitions (M&A). But expect that to change next year, says Peter Hopper of DH Capital, a veteran dealmaker in the hosting and data center industries.

    “Transactions in 2014 will exceed 2013 dramatically,” said Hopper, the CEO and co-founder of DH Capital, who discussed his take on the market at last week’s 451 Research Hosting & Cloud Transformation Summit. “The reason for this is that the buyer universe has never been better. Multiples will remain stable, and the debt markets remain highly supportive.”

    DH Capital has been involved in 105 transactions representing more than $8.5 billion in value since it was founded in 2001. Of these transactions, 58 were acquisitions and 47 were private capital placements.

    The most important metric, according to Hopper, is the EBITDA multiple, which is calculated as the deal value divided by EBITDA (Earnings Before Interest, Taxes, Depreciation and Amortization). Everyone is always interested in the multiple a company receives when it is acquired, as this is often seen as a benchmark that could be applied to future deals.
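    As a simple illustration of that arithmetic, with hypothetical figures not drawn from any DH Capital transaction:

    ```python
    # EBITDA multiple: deal (enterprise) value divided by trailing EBITDA.
    def ebitda_multiple(deal_value, ebitda):
        return deal_value / ebitda

    # Hypothetical: a colo operator acquired for $400M on $50M of EBITDA.
    print(f"{ebitda_multiple(400e6, 50e6):.1f}x")  # 8.0x, in line with the
                                                   # ~8x colo average cited below
    ```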

    Deal Valuations Still Solid

    “Multiples remain strong for both managed (hosting) and colo, but multiple spreads are wider,” noted Hopper, meaning that the deal valuations have varied. Hopper says that he sees the scatter of multiples tightening up going forward.

    While the average multiple in colocation deals is about 8 times EBITDA, Hopper notes that a number of deals featuring “legacy assets” have featured lower valuations. “The multiple on managed hosting deals has slowly crept up as there’s more and more focus on managed assets,” Hopper notes.

    Data center REITs (real estate investment trusts) are the healthiest in terms of EBITDA growth, in the high 40s and even 60 percent range. They are also the most consistent. Colocation providers are in the 40 to 45 percent EBITDA range, while managed hosting providers are in the 20 to 30 percent range. Managed service providers generate the most money per square foot, but also have higher expenses, as they purchase the servers and equipment inside the data center. Hopper sees managed providers adding cost to their models. “In our view, this is a conscious decision,” said Hopper.

    Seeking Scale or Specialization?

    Hopper said he’s seeing providers pursue one of two paths: scale or specialization, which he likens to focusing on raw materials (space) or finished goods (services). “It’s all about the use case,” said Hopper. “There’s huge demand for solutions, not just raw materials.”

    For this reason, he sees the colocation market splitting in two directions: “Colocation providers are either going hard at more managed services or going the wholesale REIT direction.” On one side are companies like Telx and Equinix, which have unique business models built atop interconnections. On the other are providers like RagingWire, which are going hard at opportunities in the wholesale market.

    While the public markets react to revenue growth from quarter to quarter, Hopper says he is seeing some very healthy trends in REITs, colocation providers, and managed services providers that aren’t always highlighted to the public. His message to the community: “Stay on top of metrics.”

    Digging Deeper into Equinix, Rackspace

    An example: while quarter-over-quarter revenue growth at Equinix seems to be slowing, Hopper notes some very positive trends beyond the headline numbers. Monthly revenue per cabinet is not only solid, at around $2,225, but rising at an annual growth rate of 7.6 percent from 2011 to 2013. Revenue per square foot is also consistent or growing. Despite less impressive quarter-to-quarter revenue growth, Hopper views the continued growth in revenue per cabinet and per square foot as a very positive trend.
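    That annual growth rate can be sanity-checked from endpoint values with the standard compound-growth formula; the 2011 starting figure below is hypothetical, back-solved to match the cited 7.6 percent:

    ```python
    # Compound annual growth rate from endpoint values.
    def cagr(start_value, end_value, years):
        return (end_value / start_value) ** (1 / years) - 1

    # Hypothetical: monthly revenue per cabinet rising from ~$1,922 in 2011
    # to ~$2,225 in 2013 (two years):
    print(f"{cagr(1922, 2225, 2):.1%}")  # ~7.6%
    ```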

    A similar trend is occurring with Rackspace. Quarterly revenue growth hasn’t impressed the public markets (although the company enjoyed a good bounce back in Q2). But the company continues to grow monthly revenue per cabinet and per square foot. Prices are holding up and even firming, a very positive trend that Hopper says is worth noting for the long term.

    Other trends Hopper notes: Monthly revenue per kW is healthy, kW used per server is solid, and there’s an upward trend in terms of density. Revenue per square foot is north of $600, and monthly revenue per server is on the rise.

    While the public markets react to revenue growth, the underlying metrics for companies like Rackspace and Equinix show healthy businesses getting even healthier.

    “Customers are focused on the value proposition of the cloud,” said Hopper. “There’s a huge opportunity for hybrid and private cloud going forward.”

    2:00p
    Uptime Institute Announces Brill Awards for Efficient IT

    Uptime Institute, an independent division of The 451 Group, is seeking applicants for its new awards program, the Brill Awards for Efficient IT.

    The award program, which will be overseen by Kevin Heslin, Uptime Institute’s Senior Editor and former editor of Mission Critical magazine, aims to continue the late Uptime founder Ken Brill’s vision of sharing best practices and new ideas to improve data center and IT efficiency.

    The awards seek to recognize efficiency in the broadest sense of the word — efficiency of capital deployment, technology, design, operations and overall management.

    “Ken Brill always believed that sharing information about innovation and best practices in IT would ultimately benefit not only the industry but also society in general. I think the Brill Awards will honor Ken’s legacy by attracting more enterprises to share their ideas and also by disseminating information about exciting innovation in the industry around the world,” Heslin said.

    The Brill Awards will recognize IT efficiency in five categories, across four global regions: North America, EMEA, Latin America and APAC. Judges will consider the merits of projects in each of the five categories — Data Center Design; Operational Data Center Upgrade; Data Center Facilities Management; IT Systems Efficiency; and Product Solutions. Categories will be judged by in-region experts and global authorities with local expertise.

    Applications are being accepted through December 13, 2013. For further information and to apply, visit the Uptime Institute website.

    2:30p
    Learn to Safeguard Your Network and Protect Your Data Center

    Whether you are a large enterprise, e-commerce company or traditional service provider trying to replace diminishing Internet access revenue with outsourced IT services such as email, Web hosting or software as a service (SaaS), your Internet data center (IDC) plays a vital role in the success of your business.

    Virtualization, the latest trend for IDCs, was designed to consolidate the sprawling IDC and reduce costs. However, it brings a new level of complexity and raises issues such as application outsourcing and data security. To make matters worse, traditional security mechanisms such as firewalls and intrusion detection systems (IDSs) were not designed for these virtualized environments and thus are not protecting today’s modern IDCs. In fact, there are known cases where these traditional security systems are themselves the targets of attacks.

    Today’s IDC needs a security solution that can simultaneously protect its network infrastructure, IP-based services and data—all of which are vulnerable to attacks or compromise. This white paper outlines the need for additional security around the modern data center and how Arbor Networks’ Peakflow technology can provide new types of security measures.

    Download this white paper today to learn about the three pillars of protection required for the modern IDC. This includes:

    • Pillar 1: Network Infrastructure Protection
    • Pillar 2: Application/Service Protection
    • Pillar 3: Data Protection

    Regardless of what your data center is hosting, it’s important to have the right security solutions in place to ensure optimal infrastructure integrity. Find out how the Arbor Networks Peakflow product family can help today’s IDC operators overcome new types of security challenges by securing critical network infrastructure, applications/services and data—thereby providing the pillars of protection needed to optimize IDC operations.

    5:00p
    Is The Power Grid Ready for Worst-Case Scenarios?

    Tom Popik, chairman and co-founder, Foundation for Resilient Societies, on wide-area threats and common mode failures for data centers, at the opening session of Data Center World Fall 2013 in Orlando. He noted that long-term electric outages impact every other critical infrastructure, and a 500-mile distance between your data centers may not be enough distance to fail over in the case of electrical grid outages. (Photo by Colleen Miller)

    ORLANDO, Fla. - Is America ready for a sudden loss of electric power over large portions of the country? How likely are these Doomsday scenarios? And how can the data center industry prepare for the worst-case impacts on power grids?

    Those questions were explored Sunday in the opening panel of the Data Center World Fall conference, as three experts outlined the threats the U.S. electrical infrastructure may face from seemingly unlikely events – a massive burst of solar weather, terrorist attacks on critical transformers, and radio frequency weapons with the potential to wipe out all the data housed in a server farm.

    While these events may seem like the stuff of science fiction, they are more likely than you may suspect, the panel told its audience. For an industry that must account for any and all threats to uptime, these “wide area threats” are tricky because they seem low-probability but carry a very high impact.

    A terrorist attack or major geomagnetic storm could knock out power over large areas of North America for weeks or even months, warned Tom Popik, founder of the Foundation for Resilient Societies, which works to raise awareness of threats to the power grid.

    “Electric power is the glue that holds the infrastructure together,” said Popik, noting that many other critical infrastructure systems need power to function. “A long-term power outage will affect all types of infrastructure.”

    Popik and his fellow panelists urged data center managers to engage with public utilities and elected officials to make them aware of threats to the U.S. power grid and develop plans to address some of these worst-case scenarios.

    Custom Transformers = Recovery Bottleneck

    Popik said one of the challenges is the interdependency of the North American electrical grid, which makes it difficult to isolate some types of failures. Incidents that affect large portions of the grid are hard to contain and create worrisome recovery scenarios, Popik said.

    He said a particular vulnerability is the replacement of some types of equipment, especially Large Power Transformers (LPTs), which are custom-designed, expensive to replace and hard to transport. LPTs weigh between 100 and 400 tons, cost millions of dollars and can take as long as 20 months to manufacture, according to the U.S. Energy Department, which addressed the risks of LPT shortages in a 2012 report (PDF).

    “Most utilities have few spare transformers,” said Popik, who warned that an incident that damaged multiple large power transformers could cause lengthy problems to the power grid while the units were repaired or replaced.

    Popik said a potential shortage of LPTs is an internal weakness of the grid. Other speakers on the panel warned of external threats.

    Space Weather and Geomagnetic Storms

    The potential for power outages from solar weather has been discussed by the data center industry in recent years. But John Kappenman of Storm Analysis Consultants said the threat from geomagnetic storms is not well understood.

    “We’re looking at impacts that could be measured in trillions of dollars of damages,” warned Kappenman. “The country’s ability to respond to this and recover is very limited. It is arguably one of the worst natural disasters we could have. We have never had a design code that takes this threat into consideration.

    “How probable is this? It’s 100 percent probable,” said Kappenman. “They’ve happened here before, and will happen again. They’re not very frequent. We are playing a game of Russian Roulette with this problem.”

    5:10p
    Keeping Pace With the Cloud: How Enterprise Data Centers Can Compete

    Shannon Poulin, vice president of Intel’s Datacenter and Connected Systems Group, spoke at the morning keynote of AFCOM’s Data Center World conference in Orlando. He emphasized that a re-architecture of the data center is needed, from the server at the rack level to the network and the storage. All this will facilitate a more flexible, faster service-oriented infrastructure that is cloud-friendly. (Photo by Colleen Miller.)

    ORLANDO, Fla. - Developers and end users want the speed and convenience of cloud computing. If companies aren’t prepared to deliver cloud-style services from their own data centers, their users will seek them out from public cloud providers.

    That creates a challenge for data center managers, according to Shannon Poulin, vice president of Intel’s Datacenter and Connected Systems Group. Poulin was the keynote speaker at the Data Center World Fall conference, and came bearing good news: new developments in both hardware and software can help rearchitect enterprise data centers for a service-oriented world.

    Poulin said the rise of cloud computing has placed additional pressure on data center managers, who must sort out the cloud equation and remain relevant in a fast-changing environment for IT service delivery. It means balancing competing calls to be both more secure and more nimble.

    Raising the Bar for Internal IT

    “Business expectations are being altered by the rapid deployment of consumer services,” said Poulin. “There’s a lot of pressure to look at cloud environments. If I’m going to compete with a public cloud provider, I have to get better. That’s how to keep workloads on premises. We have to deliver services much more quickly than we currently do.”

    Poulin said most CIOs want the economics of a public cloud, but they want it on premises. Keeping workloads on premises isn’t easy when any developer with a credit card can launch virtual servers on Amazon and a plethora of other cloud providers. Intel speaks from experience on this issue.

    “Last year Intel spent $5 million at a public cloud provider,” said Poulin. “We have our own cloud environments, and we have a very clear policy, yet we still see this rogue cloud usage. It’s probably happening in your company as well.”

    Software Defined Everything

    The growth of cloud services is part of a broader shift toward a “software-defined data center.” But getting there will require new architectures, and new approaches to storage and networking.

    “We believe the underlying infrastructure of the data center must be more cloud-friendly,” said Poulin, who said virtualization has made great headway on the server, but has a way to go in storage and networking. “In networking, there’s not much separation between hardware and software, between the control plane and the data plane.”

    The rise of OpenFlow and other software-defined networking (SDN) technologies has laid the groundwork for a shift in the networking space. On the storage side of the house, there’s a growing trend toward tiered storage, which sorts data into hot, warm and cold buckets based on usage profiles. This allows data center operators to segment their storage infrastructure, reserving expensive high-end hardware for priority data while shifting “cooler” data to commodity platforms.
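    A toy tiering policy shows the shape of the idea; the thresholds and tier assignments below are invented for illustration, not any particular vendor’s scheme:

    ```python
    # Illustrative hot/warm/cold placement based on access frequency.
    def assign_tier(accesses_per_day: float) -> str:
        if accesses_per_day >= 10:
            return "hot"    # flash or high-end arrays
        if accesses_per_day >= 0.1:
            return "warm"   # mid-tier disk
        return "cold"       # commodity or archival platforms

    profiles = {"orders.db": 450.0, "q2-report.pdf": 0.5, "2009-logs.tgz": 0.003}
    for name, rate in profiles.items():
        print(f"{name:>14} -> {assign_tier(rate)}")
    ```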

    Matching Processors to Workloads

    These new architectures allow data center managers to squeeze more performance and efficiency out of their infrastructure. Poulin says Intel is working to match its processor offerings to a range of uses to support specialized computing. But these new approaches also require tools to manage and automate infrastructure, speeding the delivery of new offerings and allowing the creation of service catalogs.

    “You need to have policies and some level of automation, so you can deploy in minutes instead of months,” said Poulin. “You have to provide tools, or else your users are going to go to a public cloud provider. We want to get to the point where these private clouds are competitive with public clouds.”

    The shift to a cloud-powered, service-driven IT world provides both opportunities and challenges. Companies must decide what to outsource, and what to keep in-house.

    “There’s going to be a shakeout,” said Poulin. “The companies that can use IT to solve their challenges will win.”

