Data Center Knowledge | News and analysis for the data center industry
Wednesday, July 1st, 2015
12:00p
CyrusOne Says DCIM Software Data Not Open Enough

An enterprise data center and a colocation data center have different Data Center Infrastructure Management (DCIM) needs. A colocation facility houses customers of many different types and sizes, while an enterprise data center houses only one.
These needs are different enough that many DCIM software providers have released colocation-specific offerings. Colocation is a large potential market for them as more businesses become comfortable with outsourcing, and they don't want to miss out. Data center provider CyrusOne is an example of the ideal customer in this category.
Access to DCIM Data Limited for Users
So, how do you win a DCIM deal with a CyrusOne?
Amaya Souarez, VP for Data Center Systems and Security at CyrusOne, joined the company after the decision to use DCIM had already been made, but she was tasked with driving operational insight from it.
The company uses DCIM software, but one requirement Souarez continues to look for has not been met by DCIM vendors: direct access to the data the suites collect. “The area I wish that I had been here to drive more requirements for is in access to the SQL data being collected by these packages,” she said.
CyrusOne has had several discussions with different DCIM providers, according to Souarez, which continue to this day. “The DCIM packages (suites) are great,” she said. “I’m impressed with Fieldview; Panduit has done great work on SynapSense. But where they’re falling down is I need better ability to extract data out of these systems.”
Souarez knows infrastructure monitoring. Prior to CyrusOne, she spent 13 years at Microsoft, most of them driving the company's earliest programs to pull useful, actionable information out of the data center to improve efficiency. Souarez was tasked with building an internal toolset at CyrusOne as well, and not a single DCIM provider can offer her an API that provides direct access to the SQL databases, she said.
“When you look at where everything is going in terms of cloud and ticketing, any number of solutions – having a really great API and having the ability to easily extract out data is super important,” said Souarez. “But it hasn’t been embraced fully by the data center industry quite yet.”
Although she has pushed for better direct access, she understands why DCIM providers are hesitant. “I feel that they’re concerned I’ll figure out their schema,” she said. “However, this is just me – for internal use. There are a lot of analytics we want to do within the company.”
Large companies often build the kind of analytics Souarez desires in-house. A multi-national bank or a web-scale data center user has these capabilities because it has the resources to build them.
What Souarez is asking for may mean a different business model for DCIM providers. Direct access to the underlying SQL data via an API might give away the secret sauce: how DCIM takes raw information and turns it into actionable insight.
So where should the value of DCIM software lie? Her suggestion is to provide the direct access while focusing on developing a marketplace for other tools, akin to how Salesforce.com enabled developers to use its data for a variety of purposes in a wider marketplace.
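As a rough illustration of the kind of direct access Souarez describes, the sketch below queries rack-level readings out of a DCIM-style SQL store. The schema, table, and column names are invented for illustration and do not reflect any DCIM vendor's actual database layout.

```python
# Hypothetical sketch of in-house analytics against a DCIM-style SQL store.
# The "readings" table and its columns are invented for this example only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE readings (
        rack_id      TEXT,
        recorded_at  TEXT,   -- ISO 8601 timestamp
        inlet_temp_c REAL,   -- rack inlet temperature, Celsius
        power_kw     REAL    -- sampled rack power draw
    )
""")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?, ?)",
    [
        ("R101", "2015-07-01T11:00:00", 22.4, 4.1),
        ("R101", "2015-07-01T12:00:00", 23.1, 4.3),
        ("R102", "2015-07-01T12:00:00", 21.7, 3.8),
    ],
)

# The kind of query an internal toolset might run: average temperature and
# average power per rack over a reporting window.
rows = conn.execute("""
    SELECT rack_id,
           AVG(inlet_temp_c) AS avg_temp_c,
           AVG(power_kw)     AS avg_kw
    FROM readings
    WHERE recorded_at >= '2015-07-01T00:00:00'
    GROUP BY rack_id
    ORDER BY rack_id
""").fetchall()

for rack_id, avg_temp, avg_kw in rows:
    print(f"{rack_id}: avg inlet {avg_temp:.1f} C, avg draw {avg_kw:.1f} kW")
```

The same query could just as easily sit behind a REST endpoint, which is closer to the API-level access Souarez is asking vendors to expose.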
Colocation DCIM Usage
As mentioned above, colocation is very different from enterprise. CyrusOne concerns itself with what's going on in the facility, while the companies colocating inside are responsible for what's going on with their servers. The provider also has “different ages and flavors” of data centers, which means having to understand and measure a portfolio of heterogeneous facilities that look and act differently.
“Since DCIM has really started to pick up as a concept in recent years, it’s made tremendous strides,” said Souarez. “It’s great to see people stepping up. However, for me it’s about the openness.”
The company does use DCIM software and a variety of other tools extensively in the data center. “If you were to name a BMS (Building Management System), we probably have it somewhere in our portfolio,” said Souarez.
It recently outlined its work with Panduit. That relationship began with SynapSense, which Panduit acquired last year.
How CyrusOne Uses DCIM
CyrusOne has two primary uses for infrastructure management: increasing efficiency and providing intel to customers. So far, the focus has been optimization of the facility environment on the CyrusOne side. Its customers primarily use DCIM to track the provider’s ability to meet Service Level Agreements.
Panduit’s SynapSense is being deployed in three phases. In the first phase, wireless environmental monitoring is deployed. Next, Panduit Services, together with the CyrusOne team, uses Panduit tools and metrics to optimize data center cooling and “balance” the airflow. In the final phase, Active Control, a feature of SynapSoft software, automatically maintains efficient and cost-effective operating conditions. It dynamically matches cooling to the varying equipment loads and conditions in the data center.
As a result, operational reliability is enhanced while energy cost savings are maximized.
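The closed-loop behavior described above can be sketched roughly as a feedback controller. The snippet below is not Panduit's Active Control algorithm; it is a minimal, hypothetical proportional loop that nudges fan speed toward a target inlet temperature, purely to illustrate matching cooling to varying load.

```python
# Hypothetical cooling control loop that adjusts fan output to hold rack inlet
# temperature at a target. Not Panduit's Active Control algorithm -- just an
# illustration of dynamically matching cooling to load.
TARGET_INLET_C = 24.0   # desired rack inlet temperature
GAIN = 5.0              # proportional gain: % fan speed per degree of error

def next_fan_speed(current_speed_pct: float, measured_inlet_c: float) -> float:
    """Return the new fan speed, nudged toward the temperature target."""
    error = measured_inlet_c - TARGET_INLET_C
    new_speed = current_speed_pct + GAIN * error
    return max(20.0, min(100.0, new_speed))  # clamp to a safe operating range

# Simulated readings drifting up as IT load rises, then settling back down.
speed = 50.0
for inlet_temp in [23.5, 24.8, 26.0, 25.2, 24.1, 23.0]:
    speed = next_fan_speed(speed, inlet_temp)
    print(f"inlet {inlet_temp:.1f} C -> fan speed {speed:.0f}%")
```

A production system would layer in many sensors, safety interlocks, and coordination across cooling units, which is where the vendor software earns its keep.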
How CyrusOne Customers Use DCIM
“Different customers want different things,” said Souarez. “We’re not a company that has four huge, mammoth customers leasing entire data centers. We have customers from a single rack to entire suite and every flavor in between.”
Almost all customers make only minimal use of what CyrusOne provides, according to her. Ownership in colocation is split between the facility and the servers within it, meaning customers may deploy their own DCIM software.
Almost all customers leverage CyrusOne's DCIM deployment for one purpose: to see whether Service Level Agreements were met. About one in ten of those customers wants to look at this data on a daily basis. Roughly 1 percent are heavy users, though Souarez said their needs are increasing. “One percent want me to provide them with near-real-time data,” she said. “Fortune 1000 companies want real-time data streaming to them.”
These customers often run their own DCIM tooling on top of what CyrusOne offers. “What I see from my customers, some of them have their own platforms, where they’re looking at the compute and storage and want to integrate that data on the backend,” said Souarez.
In addition to the one-percenters, Souarez also noted several instances where customers started out looking at the data only at the end of the month and eventually became daily users wanting information down to the rack level.
Visit the Data Center Knowledge DCIM InfoCenter for guidance on DCIM purchasing and implementation and for the latest news in the world of DCIM.

3:00p
On-Prem Data Center, Colo, or Cloud? Demystifying the Dilemma

When it comes to deploying application workloads these days, there is certainly no shortage of locations to choose from. IT organizations can build their own data center facilities, take advantage of colocation or hosting services, or opt to deploy workloads in the cloud.
At the Data Center World conference in National Harbor, Maryland, this September, Laura Cunningham, a consultant with expertise in data center economics, will explain how much of that decision is actually influenced by whether the CFO prefers to treat IT as a capital or an operating expense.
In situations where access to capital is tight, or there’s a lot of internal competition for a limited number of budget dollars, IT organizations tend to favor treating IT investments as an operating expense, Cunningham said. Conversely, organizations that have a lot of access to capital tend to favor investing in data centers, because the potential tax savings can be substantial.
Another major factor to consider is how quickly an application workload needs to be deployed. Cloud computing services tend to be more expensive than data centers over time, but they also provide a level of agility that most internal IT teams working within a data center can't match.
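A rough way to see that trade-off is to compare cumulative spend over time. The figures in the sketch below are invented purely for illustration and are not from Cunningham's session.

```python
# Illustrative-only comparison of cumulative spend: a recurring cloud bill
# (OpEx) versus an up-front data center build (CapEx) plus lower ongoing costs.
# Every dollar figure here is a made-up assumption.
CLOUD_MONTHLY = 40_000      # hypothetical monthly cloud bill
DC_CAPEX = 1_000_000        # hypothetical up-front build cost
DC_MONTHLY = 15_000         # hypothetical ongoing power, staff, and maintenance

for months in (12, 24, 36, 48, 60):
    cloud_total = CLOUD_MONTHLY * months
    dc_total = DC_CAPEX + DC_MONTHLY * months
    cheaper = "cloud" if cloud_total < dc_total else "data center"
    print(f"{months:>2} months: cloud ${cloud_total:,} vs. DC ${dc_total:,} -> {cheaper} cheaper")
```

Under these assumed numbers the crossover lands between three and four years, which is why the expected lifetime of a workload matters as much as its monthly price.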
All factors considered, the key to getting any IT project approved is understanding the financial preferences and predispositions of the organization being asked to approve it, Cunningham said.
In addition, organizations need to be aware of what tax incentives might be applicable. Many second-tier cities or even smaller countries will provide incentives to entice organizations to build a data center in a particular location.
Finally, internal IT organizations need to pay close attention to the overall process to make sure they are actually getting what they want, she added. A facilities team, for example, may select a data center location based on the cost of real estate and access to inexpensive sources of power, without considering how important network latency might be for any given set of applications.
It’s usually crucial for modern web and cloud applications to be located only one or two network hops away from an internet peering exchange.
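One simple, hypothetical check a team can run before committing to a site is to time TCP connections to candidate endpoints from the places where users and dependent systems actually sit. The hostnames below are placeholders, not real candidate facilities.

```python
# Quick latency probe: time a TCP connection to an endpoint reachable from each
# candidate location. Hostnames are placeholders for illustration only.
import socket
import time

CANDIDATES = {
    "colo-east": ("example.com", 443),
    "colo-west": ("example.org", 443),
}

for name, (host, port) in CANDIDATES.items():
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=3):
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{name}: TCP connect in {elapsed_ms:.1f} ms")
    except OSError as exc:
        print(f"{name}: unreachable ({exc})")
```

Running it from the offices and systems that will actually consume the applications gives a more honest picture than a test run from the facility itself.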
There are clearly a lot of factors that go into determining where best to physically locate any given application workload. The good news is that barring any specific compliance requirement, most IT organizations today have plenty of options to consider.
For more information, sign up for Data Center World National Harbor, which will convene in National Harbor, Maryland, on September 20-23, 2015, and attend Laura Cunningham's session titled “Demystifying the Data Center Sourcing Dilemma.”

3:30p
How to Move Data Restoration to the Top of the To-Do List

Kornelius Brunner is the Head of Product Management at TeamViewer.
Despite the growing awareness surrounding the importance of backing up data, many organizations still see it as a nuisance and are willing to take their chances with their own technology. However, since backing up data is extremely important to businesses, it can't be just another item added to the end of a to-do list. IT managers must take on the responsibility of ensuring that all data is backed up on a regular basis.
However, backing up data is just the first step; having the ability to restore the data in case of data loss is equally important. According to a NetApp Study, only 33 percent of the IT professionals surveyed were certain that they were able to retrieve their data quickly and correctly during an emergency. In fact, only a quarter of the companies with data backup processes practice for such an emergency with employees.
To be part of the well-prepared 25 percent, IT professionals need to make a conscious effort to ensure employees know how to restore their data by teaching them the correct way to do it. However, all the training in the world doesn't replace repeated practice. Once employees have it down, monthly restore drills will ensure that data recovery is second nature; as they say, “practice makes perfect.”
A detailed handbook with step-by-step instructions to follow should an emergency occur is a great resource for employees. Monthly training sessions may be enough for some, but others may feel more confident when they can practice independently.
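As a rough illustration of what such a drill can automate, the sketch below backs up a directory, restores it to a scratch location, and verifies every restored file's checksum against the original. It is a generic, hypothetical example, not TeamViewer's product or any particular vendor's restore process.

```python
# Minimal restore-drill sketch: back up a folder, restore it to a scratch
# location, and verify each restored file's checksum against the original.
# Generic illustration only -- not any specific vendor's backup process.
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup(source: Path, archive_dir: Path) -> Path:
    """Create a zip archive of `source` inside `archive_dir` and return its path."""
    base = archive_dir / source.name
    return Path(shutil.make_archive(str(base), "zip", root_dir=source))

def restore_and_verify(archive: Path, source: Path) -> bool:
    """Unpack the archive to a scratch directory and checksum every restored file."""
    with tempfile.TemporaryDirectory() as scratch:
        shutil.unpack_archive(str(archive), scratch, "zip")
        for original in source.rglob("*"):
            if original.is_file():
                restored = Path(scratch) / original.relative_to(source)
                if not restored.is_file() or sha256(restored) != sha256(original):
                    return False
        return True

if __name__ == "__main__":
    # Tiny sample data set so the drill is self-contained and runnable as-is.
    workdir = Path(tempfile.mkdtemp())
    data = workdir / "data"
    data.mkdir()
    (data / "report.txt").write_text("quarterly numbers\n")

    archive = backup(data, workdir)
    print("restore drill passed" if restore_and_verify(archive, data) else "restore drill FAILED")
```

In practice the archive would come from the organization's real backup tool; the point of the drill is the verification step.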
Again, while it is important that employees understand the restoration process, IT needs to choose the best data restoration technology to ensure 100 percent backup compliance and be prepared to take over the restoration process if necessary.
Restoration technology should be evaluated in two ways:
Flexibility
First, company-wide technology implementations must be easy enough for employees to understand and flexible enough for IT managers to implement. For backup and restoration technology, it is important that IT managers can save and restore data to a precise location, where users can be confident they’ll find it in case of an emergency if the IT manager is not around. However, in the event that the user has forgotten the backup location and it cannot be found inside the virtual backpack, a flexible search function within the backups can prove very useful.
Ease of Use
Following Murphy’s Law, data loss seems to always occur at the worst possible moment. Whether it is when finalizing an important presentation or writing an urgent briefing for the boss, there is usually not a lot of time to search for the backed-up data during an “accident.” However, it should also not be necessary. Expending time and effort to retrieve data contradicts the basic idea that a backup should be easy to use. It should be thought of as a rescue parachute that engages at exactly the right moment.
In today’s world of increasing cyber attacks on companies, no one should count on the unconditional availability of data. An accidental deletion can quickly transform a normal workday into chaos, and employees must be skilled enough to perform a backup and restoration.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

6:19p
Open-IX Expands Outside of US, Certifies French Data Center

Jaguar Network, a provider of hosting services with a data center in France, has become the first company to get an Open-IX certification outside of the US.
Open-IX is a non-profit that certifies internet exchanges and data centers based on a set of standards it has devised. Founded by Google, Netflix, and Akamai, among others, the organization is tasked with providing a neutral mechanism through which to evaluate the robustness of an internet exchange facility.
Open-IX has gained some traction in the US since it was founded, primarily with wholesale data center providers looking to boost interconnection value of their facilities, and with European internet exchange operators looking to expand to US markets. How well the standard will do outside of the US remains to be seen.
One of the main reasons Open-IX was started was to create robust alternatives to the handful of data center providers in the US that control nearly all major internet exchanges and have become the default go-to exchange operators. Another reason was to create distributed exchanges in the US that span multiple data centers, which is a popular model in Europe.
Certifying a data center in France indicates that Open-IX has global ambitions. In its press release, Jaguar referred to the certification as a “global standard,” the kind of language Open-IX has steered clear of in the past.
Gabe Cole, board liaison and chair of the Open-IX Data Center Standards Committee, said Jaguar was first in what will soon be a series of certifications of international data center operators.
Via a private network, Jaguar's data center in Marseille is connected to more than 30 other data centers, serving customers in France, Africa, and the Middle East.
“There are now 24 to 25 data centers that have been OIX-2 certified in North America,” Cole said. “One of the reasons that we started this process in the first place is because we noticed that internet performance was a little better in Europe than in the US.”
OIX-2 certifications are based on a self-assessment by the data center operator and are bestowed after a committee reviews those documents. Once approved, the data center provider is listed in a directory of OIX-2 providers that Open-IX makes available to IT organizations evaluating internet exchanges.
Next up, the organization plans to put a complaint process in place, through which a data center operator might actually lose its certification if enough complaints are substantiated, Cole said.
Naturally, anything involving certifications engenders a lot of heated debate among data center operators, providers of the certifications, and the IT organizations that make use of data center facilities. Self-assessments can only be relied on to a point.
But sending people qualified enough to independently verify those assessments is cost-prohibitive. In the absence of those hands-on assessments, there comes a point where selecting an internet exchange provider still comes down to trust and reputation.

7:33p
AWS Releases Open Source TLS Encryption Protocol
This article originally appeared at The WHIR
Amazon Web Services has released a new open source implementation of the TLS encryption protocol, called signal to noise (s2n). Released on Tuesday, the s2n library is designed to be smaller, faster, and easier to review than existing TLS implementations.
According to a blog post by Stephen Schmidt, VP and chief security officer for AWS, s2n today is “just more than 6,000 lines of code,” considerably less than OpenSSL, the most popular reference implementation, which contains more than 500,000 lines of code, with 70,000 of those involved in processing TLS. He said that s2n isn't a replacement for OpenSSL: “OpenSSL provides two main libraries: ‘libssl,’ which implements TLS, and ‘libcrypto,’ which is a general-purpose cryptography library. Think of s2n as an analogue of ‘libssl,’ but not ‘libcrypto.’”
“The last 18 months or so has been an eventful time for the TLS protocol. Impressive cryptography analysis highlighted flaws in several TLS algorithms that are more serious than previously thought, and security research revealed issues in several software implementations of TLS,” Schmidt said. “Overall, these developments are positive and improve security, but for many they have also led to time-consuming operational events, such as software upgrades and certificate rotations.”
In March, a TLS vulnerability known as the FREAK attack was discovered, which allowed attackers to intercept HTTPS connections between clients and servers.
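s2n itself is a C library, and its API details are beyond the scope of this article. As a rough illustration of the session-layer job that implementations such as s2n or OpenSSL's libssl perform (negotiating an encrypted, certificate-validated channel between client and server), the sketch below runs a minimal TLS client handshake using Python's standard ssl module.

```python
# Illustration of what a TLS session library does for an application: negotiate
# a secure channel over an ordinary TCP socket. This uses Python's ssl module,
# not s2n; it requires outbound network access to run.
import socket
import ssl

context = ssl.create_default_context()   # certificate validation on by default
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print("negotiated:", tls_sock.version(), tls_sock.cipher()[0])
        tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200).decode(errors="replace"))
```

In a C application, s2n would fill the same role that the ssl module's OpenSSL backend fills here.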
AWS plans to integrate s2n into several AWS services over the next few months.
The source code, documentation, commits and enhancements are all publicly available under the terms of the Apache Software License 2.0 from the s2n GitHub repository.
This first ran at http://www.thewhir.com/web-hosting-news/aws-releases-open-source-tls-encryption-protocol

11:12p
Startup to Build Underground Data Center in Finland

A Finnish-Israeli data center services startup has rented a cave from a city in Finland, where it plans to build a data center.
While underground data centers, or data centers in caverns, are unusual, a number of them exist around the world. Their benefits, besides the obvious ability to withstand bombings or natural disasters, usually lie in energy savings around cooling. Caverns are cold places that can provide a lot of free cooling to supplement mechanical cooling systems.
The company also claims the facility will protect against attacks by high-power microwave and electromagnetic pulse weapons.
Often, the caverns data center operators choose to transform into mission-critical facilities are former military sites. That is the case with the cavern in Finland, which the City of Tampere recently agreed to rent to Aiber Networks. Built to military standards, the underground facility had been used since the 1930s by Valmet, a formerly government-owned Finnish aviation company.
Finland, like its Nordic neighbors Sweden and Iceland, has had some success attracting data center development thanks primarily to low power costs and a cool climate.
Google has a gigantic data center in Hamina, Finland. Microsoft has a data center in the country, and so does TelecityGroup, the UK-based data center service provider that’s being acquired by Equinix.
Aiber’s focus will be on serving Finnish, Russian, and Israeli customers. “We focus on international cloud business customers from Finnish, Israeli, and Russian timezones,” the company’s CEO Pekka Järveläinen said in a statement.
Aiber has another data center in Tampere, located in a telecom hub about 2 kilometers away from the future underground data center, according to its website. The company also has data centers in Stockholm and St. Petersburg metros.
Aiber’s main investor is Daniel Levin, an Israeli citizen who has done a lot of business in Russia. According to his LinkedIn profile, he is CEO of Aiber Group, a major Russian construction company.
Aiber Networks is owned by Levin and the company’s employees, according to its press release.
Initial investment in the company is €2 million, a figure it expects to increase by several million.
It plans to provide the full range of data center services, from data center space, servers, and storage to integration management and help desk, going primarily after customers that build web and mobile applications and cloud platforms, Järveläinen said.
Renovation work in the cave has started already. Ultimately, the company expects to invest anywhere between €50 million and €100 million in building out its underground data center.
The 14,000-square-foot facility has three tunnels. Each tunnel will be its own data center. Aiber's current design calls for 4 MW of capacity. The site's current capacity is 2.4 MW.