Data Center Knowledge | News and analysis for the data center industry
Tuesday, July 21st, 2015
12:00p
Firms Rethink Russian Data Center Strategy as Data Sovereignty Law Nears Activation

The Russian law that will require all personal data of Russian users to be stored in data centers within the country’s borders goes into effect in less than two months, on September 1, and several major internet properties are working toward compliance. Others are exiting the Russian market altogether.
It will be difficult for companies that have been operating globally distributed services, where data is stored in multiple locations around the world, to build out new data center infrastructure in Russia to comply. Difficult, but not impossible, and some companies are rethinking their infrastructure in the country.
The On Personal Data (OPD) Law was expanded in July 2014 to include a data localization requirement. It means that many websites serving Russian users will have to change the way they host personal information. Namely, databases will have to be physically located on Russian territory, and personal data will have to remain in-country.
A European think tank, called the European Centre for International Political Economy (ECIPE), called the move a “self-imposed sanction,” estimating the losses for implementing the law at $5.7 billion.
Russia’s move is somewhat similar to data sovereignty laws popping up in other countries, but it sits within a larger context of digital sovereignty: in addition to keeping data in-country, the Russian internet at large is increasingly isolated. Vietnam, China, Indonesia, and India have implemented similar laws. Brazil implemented but later withdrew data localization, reportedly because of its potential for economic damage.
Kommersant, a Russian news daily, attributed the law’s creation directly to spoiled relations with the west (in Russian) as a result of the crisis in Ukraine and the annexation of Crimea.
While advantageous to some Russian data center providers and providers of other IT and communications services, the law will be damaging to others. On one hand, it will encourage hosting locally, meaning more business. On the other hand, the exit of some companies from Russia will shrink the market’s size.
Government-controlled companies that stand to benefit from the law include the likes of Rostec, a massive corporation that serves defense and civilian sectors and has been working on rolling out an online air-travel booking service, and Rostelecom, which has been building out and buying data center capacity in the country, according to Kommersant.
The news service pointed to Google’s announced plans to discontinue development work in Russia and move its engineering operations there to other countries. Adobe said it would close its offices in Russia, and Microsoft closed a developer office in the country, moving a significant portion of the operation to Prague.
But this doesn’t mean companies like Google are not going to compete in Russia, which is simply too big a market to ignore. Those that wish to continue to serve the Russian population now have less than two months to migrate data or equipment in-country.
To comply with the law, many companies have already moved servers inside the country’s borders. eBay, Google, and others are in the process of moving user data in-country or have already done so. eBay is transferring data from Switzerland to Russia, and Google has moved some servers in-country to comply, the Wall Street Journal reported in April.
Hotel booking site Booking.com said it is ready to move personal data, Kommersant reported. Russia is one of the company’s bright spots in terms of growth; Booking.com said it has 3.3 million visitors a month from the country.
There is a big cost impact. Data migration is time-consuming and costly, and companies will likely have to rely on local partners for help.
Another issue is properly identifying Russian citizens. Operators storing personal data are liable for keeping that data confidential, and the law outlines a range of organizational and technical measures for protecting personal data. However, there is still uncertainty about issues such as storing copies of data outside of the country.
Adding to the uncertainty is that the law applies to personal data and not necessarily other user-related data. According to the law, personal data is defined by its ability to identify a specific individual. But ECIPE, the European think tank, sees this as an issue.
“In reality, there is no technical or legal way to separate personal data from non-personal mechanical information,” wrote ECIPE. “Any transaction on the internet made while logged in to an account is effectively personal data, and even the most harmless pieces of company data will contain information about the employee. The scope of the law is sweeping, and firms are likely to store non-personal data locally.”
Russian President Vladimir Putin said in April 2014 that the state should defend its interests on the internet. Last year, the Ministry of Communications, along with security forces, carried out exercises on disabling the internet in case of emergency, in the event of “malicious acts” from within or from outside the country.
Last year, Symantec reported that an unknown government—likely in the west—was spying on Russia and Saudi Arabia. Data collection and spying occurred through complex surveillance software called Regin.
Russia is also advocating controlling traffic and domains in the .ru and .rf Top Level Domains (TLDs), filtering all network content.

1:00p
CoreOS Launches Tectonic Preview With Kubernetes

Alongside the formal release of version 1.0 of the open source Kubernetes orchestration framework for containers, CoreOS today at the OSCON 2015 conference unveiled Tectonic Preview, a commercial platform that combines the CoreOS Linux distribution with Kubernetes and through which the startup will provide 24/7 support for IT organizations using Kubernetes to manage Docker containers on the CoreOS platform.
CoreOS CEO Alex Polvi said that, at this juncture, open source Kubernetes, originally developed by Google, will emerge as the de facto standard for managing containers across hybrid cloud computing environments.
“Google has the largest deployment of containers running in a production environment,” said Polvi. “I think most organizations will want to use the orchestration framework they developed. We’re betting the company on it.”
Polvi said the most significant thing about the arrival of Kubernetes 1.0 is that hybrid cloud computing becomes practical for IT operations teams, because there is now a framework for managing Docker containers across multiple platforms. That capability, he added, is also likely to prove critical to Google’s cloud business, because applications developed on the Google Compute Engine platform can be moved on-premises, or, going the other direction, on-premises applications can be moved into the Google cloud.
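To make that portability claim concrete, here is a minimal sketch using the official Kubernetes Python client (which post-dates this article): the same Deployment object is submitted unchanged to a Google-hosted cluster and to an on-premises one simply by switching kubeconfig contexts. The context names and container image are hypothetical placeholders, not anything from the article.

```python
# A minimal sketch (not from the article): the same Deployment spec can be
# submitted to any conformant Kubernetes cluster; only the kubeconfig
# context changes. Requires the official client: pip install kubernetes.
from kubernetes import client, config


def build_deployment(name: str = "web", replicas: int = 3) -> client.V1Deployment:
    """Describe a small nginx Deployment, independent of where it will run."""
    container = client.V1Container(
        name=name,
        image="nginx:1.9",  # placeholder image
        ports=[client.V1ContainerPort(container_port=80)],
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": name}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )


def deploy(context: str) -> None:
    """Submit the identical Deployment to whichever cluster the context points at."""
    config.load_kube_config(context=context)  # pick the target cluster
    apps = client.AppsV1Api()
    deployment = build_deployment()
    apps.create_namespaced_deployment(namespace="default", body=deployment)
    print(f"Deployed {deployment.metadata.name} to context {context}")


if __name__ == "__main__":
    deploy("gce-cluster")      # hypothetical Google-hosted cluster
    deploy("on-prem-cluster")  # hypothetical on-premises cluster
```

Because nothing in the spec references a particular cloud, moving the workload between environments is largely a matter of pointing the client at a different cluster.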
Priced starting at $1,500 a month, Tectonic can be deployed on-premises, in a hosted environment, or in the cloud, Polvi said. CoreOS will also be working with Google and Intel to provide additional Kubernetes training.
While it’s unclear how often Docker containers will be deployed on bare metal servers versus VMs, Polvi said it may not matter. Docker containers are about creating a single pool of resources that can be made up of both physical and virtual machines.
Conversely, VMs are about slicing a larger physical server into many smaller machines. As a result, Polvi contends that Docker containers are really a superset of VMs.
What is almost certain is that there will soon be many more Docker containers than VMs. As Docker sprawl continues to grow across multiple data centers, IT organizations are going to need orchestration software to manage it all, regardless of whether those containers are running on a bare-metal server, a VM, or some private or public Platform-as-a-Service environment.
In fact, Polvi said, the primary challenge may not necessarily be the volume of containers that needs to be managed but rather the velocity at which those containers will come and go inside the data center.

1:00p
Hitachi Adds Kubernetes Support to Converged Infrastructure

At the OSCON 2015 conference today Hitachi Data Systems announced support for version 1.0 of the Kubernetes orchestration framework for Docker containers.
Michael Hay, vice president and chief engineer for HDS, said the company has worked closely with Google to ensure tight integration between Kubernetes and the underlying software it relies on to manage the Hitachi Unified Compute Platform.
“We wanted to lower the barrier to entry for organizations to make the move to Docker containers,” said Hay. “We felt that it was important to get Kubernetes running on a converged infrastructure platform as soon as possible.”
Hay said that the UCP Director software that HDS uses to manage its converged infrastructure is already built on top of containers. Because it is a modern piece of software based on a microservices architecture, extending UCP Director to work with Kubernetes was a fairly straightforward effort.
Hitachi UCP systems, said Hay, are certified to run containers on both CoreOS and CentOS operating systems alongside VMware running on a variety of operating systems. HDS expects IT organizations to run containers and virtual machines side by side for years to come.
But it’s also clear that, given the momentum surrounding Docker containers in particular, HDS sees the shift to microservices built on containers as an opportunity to gain market share at the expense of more established server rivals inside the data center. While HDS has established a reputation as a provider of enterprise-class storage systems in the US, its presence in the x86 server market is, by comparison, relatively nascent.
Hay said Docker represents an opportunity for HDS because most IT operations teams are already aware of the implications Docker containers will have for both application performance and IT infrastructure utilization rates. With more applications contending for I/O resources, HDS is betting that Hitachi UCP systems will be seen as a viable alternative to legacy systems that were not developed with containers in mind.
The degree to which Docker containers will force IT infrastructure upgrades naturally remains to be seen. On the one hand, Docker containers might initially only wind up increasing utilization rates on existing servers. On the other hand, HDS is betting that an increase in the number of application workloads per server will tax I/O capabilities beyond their existing limits.
Of course, that doesn’t necessarily mean organizations will embrace converged infrastructure to run containers. But it does mean they will at least be more likely than ever to consider their options.

3:00p
Optimized Flash Storage Platforms: Disrupting the Economics of the Data Center

Erik Ottem is Director of Product Marketing for Violin Memory.
Imagine that someone devises a solution that reduces your hour-long commute to a couple of minutes. That’s the equivalent of what optimized flash storage platforms do in the data center.
Most servers are I/O constrained, so speeding up storage can boost a server’s output considerably. Compared with a 60-drive RAID box delivering 9,000 IOPS in total, an optimized flash platform delivers 1 million IOPS at consistently low latency.
This extra horsepower can really impact your data center cost profile. Like many data centers, you probably have “farms” of the fastest hard drives you can find to get enough IOPS for your servers. Capacity isn’t the issue; in fact, most storage is significantly over-provisioned to provide adequate performance. Hard drives, for a decade or so, have maxed out at around 300 IOPS. Putting an optimized flash platform into your SAN will supercharge it, provide the needed IOPS to load up your servers, and allow those hard drives to be relegated to tertiary storage, where their lethargic performance no longer matters.
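The arithmetic behind those numbers is straightforward. Here is a back-of-the-envelope sketch; the 150 IOPS per-drive average is an assumption chosen to match the 60-drive, 9,000-IOPS example above (the article itself only cites a roughly 300 IOPS per-drive ceiling).

```python
# Back-of-the-envelope IOPS comparison using the figures cited above.
# The 150 IOPS-per-drive average is an assumption consistent with the
# 60-drive / 9,000 IOPS example; ~300 IOPS is the cited per-drive ceiling.
DRIVES = 60
IOPS_PER_DRIVE = 150                  # assumed average per fast spinning drive
raid_iops = DRIVES * IOPS_PER_DRIVE   # 9,000 IOPS, matching the article
flash_iops = 1_000_000                # optimized flash platform, per the article

print(f"60-drive RAID box: {raid_iops:,} IOPS")
print(f"Flash platform:    {flash_iops:,} IOPS "
      f"(~{flash_iops // raid_iops}x, or roughly {flash_iops // IOPS_PER_DRIVE:,} drives' worth)")
```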
The half-step to an optimized flash platform is the hybrid array. It uses flash as a cache, with auto-tiering software moving data out to the secondary hard drive storage. The problem with hybrid arrays is that a cache miss, where the data must come from the HDD, creates an I/O that may be 100 times slower than data pulled from flash. Tiering may be rational if there is significant money to be saved, but with the new economics of flash, tiers for active data can no longer be justified.
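To see why even a modest cache-miss rate hurts, here is a simple expected-latency sketch. The specific latency values are illustrative assumptions, not vendor specifications; they are chosen only to reflect the roughly 100x flash-versus-HDD gap described above.

```python
# Expected read latency of a hybrid (flash cache + HDD) array.
# Latency values are illustrative assumptions reflecting a ~100x gap.
FLASH_MS = 0.1   # assumed flash read latency, milliseconds
HDD_MS = 10.0    # ~100x slower on a cache miss

for hit_rate in (0.99, 0.95, 0.90):
    effective = hit_rate * FLASH_MS + (1 - hit_rate) * HDD_MS
    print(f"cache hit rate {hit_rate:.0%}: average latency {effective:.2f} ms")

# Even a 5% miss rate drags the average to roughly 0.6 ms, several times
# slower than all-flash, and worst-case reads still pay the full HDD latency.
```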
The major implication of this performance is workload consolidation. Instead of buying 100 new servers to keep up with your workload, an optimized flash platform makes your existing servers considerably more efficient. Workload consolidation is made possible through the combination of virtualization and flash storage. But not all flash is created equal. SSD-based designs have latency spikes as the arrays go through garbage collection. A flash-optimized design doesn’t use SSDs, but manages flash across the entire array as a pool of flash for consistent low latency. Consolidation with the right platform provides a great return on investment.
More efficient servers mean fewer servers are required: because they spend far less time waiting on I/O, CPU utilization can hit 90 percent or more. In addition to less server hardware, that’s 100 sets of software licenses you just avoided. It also means less labor spent managing applications and operating system updates, plus savings on power, cooling, hardware maintenance, and support.
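As a rough illustration of the kind of savings described here, the sketch below totals avoided licensing and power costs for the 100-server example; every input other than the server count is an illustrative assumption, not a figure from the article.

```python
# Rough consolidation savings for the 100-servers-avoided example above.
# All inputs except the server count are illustrative assumptions.
SERVERS_AVOIDED = 100
LICENSE_PER_SERVER = 3_000      # assumed annual OS/application licensing, USD
WATTS_PER_SERVER = 400          # assumed average server draw
COST_PER_KWH = 0.10             # assumed utility rate, USD
HOURS_PER_YEAR = 24 * 365

license_savings = SERVERS_AVOIDED * LICENSE_PER_SERVER
power_savings = SERVERS_AVOIDED * WATTS_PER_SERVER / 1000 * HOURS_PER_YEAR * COST_PER_KWH

print(f"Annual licensing avoided: ${license_savings:,.0f}")
print(f"Annual power avoided:     ${power_savings:,.0f}")
```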
System users stand to benefit as well. Consistent low-latency storage impacts job run times; a task that took three hours might be completed in minutes. When compiling code, run times drop from hours to minutes, and the productivity of your expensive software developers goes way up. Low latency also opens up new business opportunities at the corporate level. For example, one major European telco found that its billing software couldn’t keep up with its users; after deploying an optimized flash platform, it added $140 million to its bottom line.
The bottom line is that if you haven’t tried a flash-optimized platform, you should. They drop easily into existing SANs, and the performance benefits and enterprise data services will be obvious within a few hours of use. When total cost of ownership is considered, an optimized flash platform is actually less expensive than legacy disk-based storage, hybrid storage, or even SSD-based storage.
IDC predicts that all-flash array revenues will top $41 billion in 2016, so it just may be time to declare disk dead for active data.

5:48p
Cologix Raises $255M in Debt for Data Center Expansion

Data center service provider Cologix has secured $255 million in debt funding to pay for data center construction and acquisition. The company already has an extensive footprint in North America and plans to continue investing in data centers.
Many institutional investors have become more knowledgeable about the data center provider market in recent years, and data center investment has become more commonplace among traditional debt and private-equity financiers than it has been in the past. It has become relatively easy to raise money for a provider with an existing customer base, positive cash flow, and an executive team with a proven track record in the market.
Denver-based Cologix has 21 data centers in eight second-tier markets in the US and Canada, such as Dallas, Columbus, Toronto, and Montreal, among others. The company emphasizes interconnection as a major part of its offering, competing with data center providers like Equinix, Telx (soon to be part of Digital Realty), CenturyLink, and numerous smaller firms.
The company expands by building new data centers and buying other data center providers. In March, for example, it opened its third data center in a Minneapolis carrier hotel. Last year it bought a company called Colo5 to expand in Florida and an Ohio provider, DataCenter.BZ, to expand in the Columbus market.
RBC Capital Markets, TD Securities, and CIT were joint lead arrangers on its expanded credit facilities, announced Tuesday. Other investors were Scotiabank, ING, JPMorgan Chase, Bank of America, Raymond James and others.
“This financing augments Cologix’s robust balance sheet and cash flows to provide capital for further organic and inorganic opportunities driving our growth,” Cologix CFO Brian Cox said in a statement.

6:05p
Cloud Giants Form Foundation to Drive Container Interoperability

The Linux Foundation, in collaboration with 18 vendors and IT organizations, announced today the formation of the Cloud Native Computing Foundation, which is committed to validating reference architectures for integrating various technologies built on top of Docker containers.
Patrick Chanezon, a member of Docker’s technical staff, said the core CNCF mission is to determine the state of interoperability between various technologies built on top of the Open Container Project specification unveiled last month. The Open Container Project is being renamed the Open Container Initiative to avoid confusion with other OCPs, such as the Open Compute Project or the Linux Foundation’s Open Compliance Program.
As part of the effort, Docker and Google are contributing core technologies to CNCF, which defines cloud-native applications as applications or services that are container-packaged, dynamically scheduled, and microservice-oriented. If additional work is needed to foster that interoperability, the CNCF will take the lead on developing the required APIs, said Chanezon.
Founding CNCF members include Box, Cisco, CoreOS, Cycle Computing, Docker, eBay, Goldman Sachs, Google, Huawei, IBM, Intel, Joyent, Mesosphere, Twitter, Switch Supernap, Univa, VMware, and Weaveworks. In addition, Chanezon noted, the CNCF is still open to recruiting additional organizations to participate in the project.
From the perspective of IT operations teams responsible for deploying these technologies, CNCF represents an effort not only to promote interoperability but also to prevent IT organizations from getting locked into any one technology, Chanezon said.
“We’re focused on building bridges between various projects,” he said. “A lot of that work is going to be focused on the orchestration engines.”
Specifically, CNCF will become a mechanism for collaboration that focuses on the interoperability layer above the Docker orchestration engine.
As the number of Docker-related projects continues to multiply, IT operations teams should take comfort in the fact that over time CNCF will help simplify the deployment of various stacks of Docker container services that have been certified to work with one another, Chanezon explained.
Those reference architectures should reduce time, effort, and money associated with deploying Docker applications and services, many of which will eventually be made up of thousands of Docker containers strewn across multiple data center environments.
Chanezon said the Linux Foundation was chosen to shepherd CNCF because of its extensive history of administering similar projects and the fact that container technology itself came out of Linux. The Linux Foundation will mainly focus on administrative issues, leaving CNCF members to focus most of their time on technical issues, said Chanezon.

7:18p
HP Buys Wind Power for Massive Texas Data Center Cluster

HP has signed a long-term power purchase agreement with a wind-farm developer to get renewable energy for its huge Texas data center operations, the company announced Tuesday.
This is HP’s first utility-scale purchase of renewable energy, which the company expects will put it ahead of schedule in meeting its carbon-reduction goals. Its Texas data center operations are enormous: 1.5 million square feet of data center space across five facilities in multiple cities supporting the company’s own global IT requirements as well as IT services it provides to some of its customers. There is one each in Houston, Hockley, and Plano, and two in Austin.
Investment in utility-scale renewable-energy projects via long-term power purchase agreements (PPAs) to compensate for non-renewable energy consumed by data centers is an approach pioneered by web-scale data center operators, namely Google, which started doing it about five years ago. Since then, Microsoft, Facebook, and Amazon have struck similar deals with developers.
Some of the most recent renewable-energy deals — announced just this month — were struck by Amazon, which invested in a North Carolina wind farm to address the power consumption of its data centers in Virginia, and by Facebook, which signed a wind PPA in Texas to power the data center it’s building in Fort Worth.
In most instances, these PPAs provide the funding necessary to complete the projects, which is the case with HP’s recent agreement. HP’s 12-year commitment to buy the output of 112 MW of generation capacity enabled SunEdison, the developer, to start construction of the project, according to HP.
HP said the deal will provide enough renewable energy to power 100 percent of its Texas data center operations.
When completed, the South Plains II wind farm in Texas will have total generation capacity of 300 MW. SunEdison will operate the wind farm, but it will be owned by TerraForm Power, a major international owner and operator of renewable-energy plants.
The amount of money the ICT sector in general spends on renewable energy is growing faster than other sectors’ spend, according to a recent report by the US government’s National Renewable Energy Laboratory, and compensating for grid power used by massive data centers is one of the major investment drivers.
Globally, data centers are responsible for about 30 percent of electricity consumption attributable to the ICT sector, the NREL report said, citing an estimate by researchers at Ghent University in Belgium. The rest of the sector’s energy consumption is split between telecommunications infrastructure and end-user devices.
In the US, HP is the fifth biggest user of renewables, having consumed 280,560 MWh of clean energy in 2014, according to NREL, which used figures some companies disclose voluntarily through various programs. The figure represents 14 percent of the company’s total energy consumption that year.
At the top of the list is Intel, which bought enough renewable energy or renewable energy credits to make the entire 3 million MWh of energy its offices, data centers, and processor fabrication facilities consumed in 2014 renewable. Intel is followed by Microsoft, Google, Apple, and Cisco.
HP’s goal has been to reduce greenhouse-gas emissions from its operations by 20 percent from 2010 levels by 2020. The company expects its Texas wind PPA to get it to that goal by the end of this year.

8:15p
Samsung Launches Next-Gen Data Center SSD Line

Samsung has launched a line of high-performance SATA enterprise solid state drives.
With planar NAND flash reaching scaling limits, the move to new technologies is picking up steam. Intel and Micron announced new 3D NAND drives a few months ago, and SanDisk launched a 4TB enterprise SSD in May.
Samsung’s next-generation 3-bit PM863 and 2-bit SM863 6Gbps data center SSD models were introduced earlier in the year at CES and are now generally available. Samsung said they deliver faster speeds and improved reliability in much higher capacities, as well as greater power efficiency, in order to support the heavy demands placed on the data center.
Thanks to Samsung’s 3-bit V-NAND technology, the new data center SSDs are able to achieve impressive densities in the 2.5-inch form factor. PM863 models add 1.9TB and 3.8TB capacities to the line, with read speeds up to 540 MB/s, according to the company. The SM863 comes in capacities up to 1.9TB and features read speeds up to 520 MB/s and write speeds up to 485 MB/s. Samsung began production of its V-NAND line last year at its 2.5 million-square-foot facility in Xi’an, China.
Samsung’s V-NAND differs slightly from the 3D NAND approaches of Intel and Micron (floating gate design) and of Toshiba and SanDisk (Bit Cost Scaling).

8:35p
RingCentral Answers Call for Office 365 Integration

According to our sister site Talkin’ Cloud, business communications provider RingCentral is expanding its relationship with Microsoft to integrate its RingCentral Office offering with Office 365, creating a total work environment from one interface.
“There is tremendous growth in the marketplace, with 46 million paying users of Office 365,” noted Richard Borenstein, senior vice president of Business Development at RingCentral during an interview with Talkin’ Cloud at Microsoft Worldwide Partner Conference 2015. “This is a big step for customers to have seamless integration between their communications and their productivity suite.”
Partnerships with other service providers are an important part of Microsoft’s go-to-market strategy for its cloud services. Chinese data center provider 21Vianet, for example, is the official provider of Office 365 and Microsoft Azure in mainland China. 21Vianet recently announced that it has extended its partnership with Microsoft until 2018.
Another example is the US-based data center services giant Equinix, which provides direct network access to Office 365 and Azure from its facilities around the world.
The integration will offer users all the capabilities of the RingCentral Office cloud-based service, including calling, SMS text messages, conferencing and web-meetings, from within the Office 365 user interface. What’s more, users can click to call any phone number—both internal and external—from within any of the Office 365 applications, and contacts are automatically combined between Outlook and RingCentral Office.
Read the full article at http://talkincloud.com/cloud-services/072115/ringcentral-answers-call-office-365-integration