Data Center Knowledge | News and analysis for the data center industry
Monday, February 1st, 2016
1:00p
Hong Kong, China’s Data Center Gateway to the World
Hong Kong may be one of the world’s smallest territories, but its importance on the internet’s global map is huge, and, if the Hong Kong government plays its cards right, that importance is only going to grow.
Hong Kong is important because China is important. As China’s Special Administrative Region, it is more westernized and open than the People’s Republic. It is both a springboard and a gateway between the recently emerged economic powerhouse and the rest of the world.
“Springboard” and “gateway” are “the two words that encapsulate the China effect” on Hong Kong and Singapore, the other Asia Pacific business and interconnection hub that’s enjoying a similar status, Jabez Tan, senior analyst at Structure Research, a data center market research firm, says.
Read more: Report: Singapore is a $1B Data Center Market and Growing Fast
The Hong Kong data center market is a springboard in the sense that international service providers that want to serve China often start in Hong Kong, which doesn’t have requirements like China’s Internet Data Center License, uncertainty about data privacy, or anything like the Great Firewall of China. “Companies understand the risk of going directly to China,” Tan says.
It’s a gateway in the sense that digital business flows through it both ways: international companies use it to get into China, and Chinese companies, from search engine and cloud giants to online gaming and mobile app developers, set up shop in Hong Kong before they branch out into Southeast Asia, Europe, or North America. Tencent followed this path, and so did Alibaba, to name some of the biggest examples.
Read more: Chinese Internet Giants Eyeing Silicon Valley Data Centers
This of course makes Hong Kong one of the world’s most lucrative markets for data center service providers. Tencent uses the services of Hong Kong data center provider HTC Global Center, while Alibaba’s cloud services subsidiary Aliyun uses Towngas Telecom and another Hong Kong telco to serve customers outside of China, Tan says. Aliyun also has data centers in Singapore and, since last year, in the US. In July, it announced plans to launch data centers in Europe and the Middle East as well.
Hong Kong Data Centers, by the Numbers
While a smaller data center market than Northern Virginia, Silicon Valley, London, or Amsterdam, Hong Kong has about 40 data center providers with more than 50 operational data centers across its 400-plus square miles of land, according to Structure’s latest report on the Hong Kong data center market.
These providers brought in $616 million in revenue last year, and the market is projected to grow by 15 percent in 2016. By 2020, Structure expects the Hong Kong data center market to reach $1.39 billion.
Hong Kong Data Center Market by the Numbers (Courtesy of Structure Research):
- Number of data center providers (2015): 38
- Number of unique operational data centers (2015): 53
- Total critical power capacity (2015): 208MW
- Total data center space (2015): 1.9 million square feet
- Total rack capacity (2015): 59,000 racks
- Total 2015 colocation services revenue: $616 million (HK$4.8 billion)
- Total 2020 projected colocation services revenue: $1.39 billion (HK$10.8 billion) – 18 percent CAGR
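As a quick sanity check, the roughly 18 percent growth figure can be reproduced from the 2015 and 2020 revenue numbers above. The short calculation below is purely illustrative and uses only the figures already quoted:

```python
# Rough check of the compound annual growth rate (CAGR) implied by the
# 2015 and projected 2020 Hong Kong colocation revenue figures cited above.
# The calculation itself is illustrative, not part of the report.

revenue_2015 = 616_000_000    # USD, 2015 colocation services revenue
revenue_2020 = 1_390_000_000  # USD, projected 2020 colocation services revenue
years = 5

cagr = (revenue_2020 / revenue_2015) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints roughly 17.7%, i.e. about 18 percent
```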
The top four Hong Kong data center providers are Silicon Valley-based Equinix, Japan’s NTT Communications, and Hong Kong’s native PCCW Solutions and iAdvantage, a subsidiary of SUNeVision. Together, these four companies hold two-thirds of the total market.
Government Holds the Keys
Structure splits the Hong Kong market into five sub-regions: Tseung Kwan O, Kowloon West, Kowloon East, Hong Kong Island, and New Territories North. The largest share of data center capacity – about one-third – is in Tseung Kwan O, a development on reclaimed land on the shores of Tseung Kwan O bay, to the east of Hong Kong Island. Kowloon West, a territory north of Hong Kong Island comprising the towns of Tsuen Wan and Kwai Chung, has the second-largest data center presence, just slightly smaller than Tseung Kwan O’s.

Hong Kong data center market sub-regions (Image: Structure Research)
Unlike major US data center markets such as Northern Virginia and Silicon Valley, the data center hubs in Tseung Kwan O and Kowloon West didn’t form organically. The supply of land in Hong Kong is extremely tight – one of the hardest aspects of entering the market or expanding there – and the government decides what gets built where. Both sub-regions are data center hubs because the government specifically set land aside within them for data centers, Tan says.
Data centers do get permitted and built outside of those designated areas, called Industrial Estates, but the highest concentration is within them. Operators have announced three new data centers that will launch over the next two to three years – by China Unicom, Global Switch (a new provider in the Hong Kong market), and iAdvantage – all of them in Tseung Kwan O.
Not only does the government tightly control land supply in Hong Kong, it also has some fairly restrictive rules for using the land it does set aside for data centers. For example, a data center operator in the Tseung Kwan O industrial estate isn’t allowed to sublet space within its facility there, Tan says. It can provide data center services, but it cannot give the tenant full access control. In other words, the wholesale data center lease model – where the provider simply collects rent and has no access to the facility unless the tenant permits it – doesn’t work there.
Providers have found ways to work around the rule, Tan says, and he isn’t aware of any company having to pay penalties or shut down because they broke it, but it does create a different set of problems than in other markets.
Lots of Data Center Capacity Available
Despite its challenges – the tight real estate market, the hands-on government, and high electricity rates (anywhere from 13.5 to 20 cents per kWh, depending on location) – Hong Kong’s proximity to mainland China and its strategic hub status make it one of the world’s most attractive data center markets.
While it’s hard to build new data center capacity there, there is currently a lot of capacity available in existing data centers, Tan says, which is an important thing to understand for companies considering Hong Kong as the location for their next data center. “The market is developing, and there is still a meaningful amount of runway left for the current inventory that is built out.”
5:44p
Beefing Up Data Center Resilience
Sev Onyshkevych is Chief Marketing Officer for FieldView Solutions.
A data center is very much like a car – it needs maintenance to run smoothly and not break down in the middle of your journey. How vulnerable your system is to failure determines the resilience of your facility, and you can increase that resilience to boost your uptime.
TechTarget defines data center resilience (or resiliency) as “the ability of a server, network, storage system, or an entire data center, to recover quickly and continue operating even when there has been an equipment failure, power outage or other disruption.”
Here are five ways data center operators can increase the resilience of their facility – and secure smooth operations without failure – by deploying best-of-breed data center infrastructure management (DCIM) solutions.
Realize That Your Resilience Changes Constantly
Imagine that your car is running on four donuts instead of tires. The first step is to acknowledge you’re riding on donuts – and to know that while you’re still moving, you’re just not as safe as you could be. Knowing that you’ve got a single point of failure and are operating in a weaker (less resilient) environment should lead you to take corrective action: locate a garage quickly and replace the donuts with new tires, or better yet, rent another car with real tires so you can drive without any failures.
Your system can be more or less resilient at any given moment, depending on such variables as the reliability of your power sources, load, time of day, and the occurrence of any planned maintenance or unplanned outages. Constant monitoring of your resilience allows you to take proactive measures to improve it before a failure occurs. You have the option to fix things, or to shift load around to avoid disasters.
Have a ‘Dashboard’
What if you had no dashboard in your car? How would you know where you were going, how fast you were driving, when you might run out of gas, how many miles you had on the car or if anything was wrong with any of your systems? Would you feel safe with readings that had been taken last week?
Having a central place to view all the pertinent information about your data center infrastructure is as critical as your car’s dashboard. Maintaining a data center with a clipboard and a spreadsheet in hand is a thing of the past – it is too cumbersome and time-consuming, and by the time you gather the critical information, it is obsolete. A real-time dashboard showing all the critical information in a single pane of glass allows you to proactively prevent failures and plan effectively and intelligently for the future.
Know Your Capacity
Just as you would use your dashboard to find out how much gas you’ve got in your tank (say, before getting on the highway for a long-distance trip), your data center management dashboard should offer real-time intelligence about how much space, energy, cooling, and network capacity you’ve got left. More importantly, it should show you how to use all of this capacity to its fullest, including whether you can delay expansion plans or eliminate the need for expensive facility construction altogether.
Not having all this information at your fingertips would be the same as driving without a gas gauge, never mind your temperature gauge, your oil level, etc. Forewarned is forearmed.
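As an illustration of the kind of headroom check such a dashboard performs behind the scenes, here is a minimal sketch. The capacity figures, metric names, and warning threshold are hypothetical, and it is not tied to any particular DCIM product:

```python
# Minimal sketch of a capacity "fuel gauge": compare current usage against
# installed capacity across several dimensions and report the tightest one.
# All figures and the warning threshold are hypothetical, for illustration.

capacity = {"power_kw": 2000, "cooling_kw": 2200, "space_racks": 500, "network_gbps": 400}
usage    = {"power_kw": 1450, "cooling_kw": 1300, "space_racks": 410, "network_gbps": 180}

WARN_AT = 0.80  # warn when any dimension passes 80% utilization

for dimension, total in capacity.items():
    utilization = usage[dimension] / total
    flag = "WARNING" if utilization >= WARN_AT else "ok"
    print(f"{dimension:>13}: {utilization:.0%} used ({flag})")

tightest = max(capacity, key=lambda d: usage[d] / capacity[d])
print(f"Tightest constraint: {tightest}")
```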
Run Failure Simulations
Before you buy a car, you read about the crash tests and other safety tests conducted by the manufacturer, government agencies, and the likes of Consumer Reports, to know what your risk is in case of an accident and to ensure the brakes, the suspension, and everything else in the car works properly.
Data centers are no different when it comes to testing the infrastructure. Running “What If?” analyses is the equivalent of crash testing a car – it helps you be aware of failure points and the necessary measures you need to take to avert disaster, as well as what the impact might be when a disaster strikes. The “What If?” analysis should help you answer such questions as:
- What if something fails while you are doing maintenance on your equipment?
- What if something fails after something else has failed already, and you’re operating in a less-resilient environment?
- If your disaster scenario happens, where will the load go? What else may fail as a result? Will that failure be contained, or will it become a “cascading failure” (your multi-car collision scenario)?
So in effect, you’re testing how resilient your system is today, and how resilient it might be under varying circumstances.
The ability to test your system in simulation allows you to discover weak spots and make changes to strengthen your infrastructure. It allows you to be proactive about disaster avoidance, and know the appropriate corrective responses to avoid disasters – which in the data center world means costly downtime.
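A “What If?” analysis can be prototyped with nothing more than a model of which loads shift to which equipment when something fails. The sketch below is a toy example under assumed conditions (a hypothetical pair of power feeds with made-up ratings and loads), not a real DCIM simulation engine:

```python
# Toy "what if" failure simulation: two power feeds, each with a maximum
# rating. If one feed fails, its load shifts to the survivor; the simulation
# reports whether the survivor would be overloaded (a cascading-failure risk).
# Topology and numbers are hypothetical, for illustration only.

FEED_RATING_KW = 800
feeds = {"feed_a": 520, "feed_b": 410}  # current load per feed, in kW

def simulate_feed_failure(failed_feed):
    """Return the post-failure load on each surviving feed."""
    shifted = feeds[failed_feed]
    survivors = {name: load for name, load in feeds.items() if name != failed_feed}
    # Assume the failed feed's load transfers entirely to the remaining
    # feed(s), split evenly among them.
    per_survivor = shifted / len(survivors)
    return {name: load + per_survivor for name, load in survivors.items()}

for failed in feeds:
    outcome = simulate_feed_failure(failed)
    for name, load in outcome.items():
        status = "OVERLOAD (cascading failure risk)" if load > FEED_RATING_KW else "within rating"
        print(f"If {failed} fails: {name} carries {load:.0f} kW -> {status}")
```

Running the sketch with these numbers shows that either feed failing would push the survivor past its rating, which is exactly the kind of hidden weak spot a “What If?” analysis is meant to surface before it happens in production.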
Alarms and Alerts
You know those “idiot lights” in your car that let you know when something is wrong – hopefully before a total system breakdown? Or a system like OnStar that alerts the right people when something goes wrong – people who can help and make a difference.
Well, deploying a system that provides similar alarms and alerts in a data center can ensure smooth operations and decrease downtime. It can alert you when something has the potential to go wrong, leaving you enough time to correct it and avoid disaster. This could include an alarm that lets you know your temperature is too high, or your power has switched to a back-up system, or alerts you if you’re nearing capacity.
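The core of such an alert rule is simply comparing live readings against limits. The following is a minimal sketch assuming hypothetical metric names, readings, and thresholds, not the interface of any specific DCIM product:

```python
# Minimal alerting sketch: evaluate a batch of sensor readings against
# per-metric thresholds and emit an alert for anything out of bounds.
# Metric names, readings, and thresholds are hypothetical.

thresholds = {
    "inlet_temp_c": 27.0,       # alert above this inlet temperature
    "ups_load_pct": 80.0,       # alert when UPS load exceeds 80%
    "rack_capacity_pct": 90.0,  # alert when rack space is nearly full
}

readings = {"inlet_temp_c": 29.5, "ups_load_pct": 64.0, "rack_capacity_pct": 91.0}

def check_alerts(readings, thresholds):
    """Return human-readable alerts for readings that exceed their limits."""
    alerts = []
    for metric, value in readings.items():
        limit = thresholds.get(metric)
        if limit is not None and value > limit:
            alerts.append(f"ALERT: {metric} = {value} exceeds limit of {limit}")
    return alerts

for alert in check_alerts(readings, thresholds):
    print(alert)
```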
If you think of your data center as being a lot like your car, then you know that you have the power to increase its resilience – and ensure its ability to keep running, even when something goes wrong. It’s simple: maintain it and pay attention to the details, and it will run smoothly for you. Ignore it or let up on your vigilance, and you’re headed for a breakdown. Luckily, tools like DCIM help data center operators protect uptime with real-time information that supports critical business decisions and helps avoid disasters.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
7:02p
Cloud Underwater? Microsoft Tests Submarine Data Center
Continuing its long tradition of data center experimentation in the name of efficiency, Microsoft announced it has been testing an unusual new data center concept: placing servers underwater, out in the ocean.
Close to half of the world’s population lives near large bodies of water, and since physical distance sets the ultimate speed limit for transferring data, storing data under the sea, close to major population centers, is a logical way to optimize delivery of cloud services.
“Half of the world’s population lives within 200 km of the ocean, so placing data centers offshore increases the proximity of the data center to the population, dramatically reducing latency and providing better responsiveness,” Microsoft said on the website dedicated to the research effort, called Project Natick.
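To put that proximity argument in rough numbers, the floor on network latency is set by how fast light travels through fiber, roughly two-thirds of its speed in a vacuum. The sketch below is a back-of-the-envelope estimate with illustrative distances, not figures from Microsoft:

```python
# Back-of-the-envelope propagation latency: light travels through optical
# fiber at roughly two-thirds of its speed in a vacuum, so distance alone
# sets a hard floor on round-trip time. Distances are illustrative.

SPEED_OF_LIGHT_KM_S = 300_000
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3  # ~200,000 km/s in fiber

def round_trip_ms(distance_km):
    """Minimum possible round-trip time over fiber, ignoring routing and queuing."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

for distance in (200, 2000, 8000):  # offshore vs. regional vs. transoceanic
    print(f"{distance:>5} km -> at least {round_trip_ms(distance):.1f} ms round trip")
```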
Microsoft hasn’t shied away from experimenting with novel ideas for data center infrastructure in the past. In Wyoming, for example, the company tested a data center powered by fuel cells that converted methane from a waste processing plant to electricity. In another experiment, Microsoft researchers tested small fuel cells installed directly into IT racks.
What the company learned from Project Natick may lay the groundwork for deploying data center capacity underwater at scale, cooled by seawater and potentially even powered by tidal energy. “While every data center on land is different and needs to be tailored to varying environments and terrains, these underwater containers could be mass produced for very similar conditions underwater, which is consistently colder the deeper it is,” the company said.
Another potential benefit is quick deployment. It took 90 days to build and deploy the test system, which is much faster than the typical process of getting permits for a brick-and-mortar data center, designing, and building the facility.

Project Natick server rack being placed inside the shell for underwater deployment (Photo: Microsoft)
Around August of last year, Microsoft researchers deployed the test system off the coast of California: a rack of standard servers inside a cylindrical steel shell measuring 10 feet by 7 feet, with heat exchangers mounted outside the shell to provide the servers with free cooling. In December, the 38,000-pound container was pulled out of the water and returned to the company’s campus in Redmond, Washington.
Microsoft has not released any results of the experiment, saying only that they were “promising.” At this stage, the project is more about collecting data than developing a specific solution. There are still major hurdles to actually implementing something like this.
“While at first I was skeptical, with a lot of questions – what were the costs, how do we power it, how do we connect it – at the end of the day, I enjoy seeing people push limits,” Christian Belady, general manager for data center strategy at Microsoft, said in a statement. “The reality is that we always need to be pushing limits and try things out. The learnings we get from this are invaluable and will in some way manifest into future designs.”
Here’s a short video about Project Natick, produced by Microsoft:
10:58p
US, European Officials Miss Deadline on Data Transfer Deal 
By The WHIR
Negotiations over data transfer regulations between American and European officials stalled over the weekend, missing the Sunday deadline set by Europe’s national privacy agencies.
According to a report on Monday by the New York Times, negotiations in Brussels hit several snags, including over options for European citizens to seek redress for data privacy violations.
The negotiations started shortly after the European Court of Justice struck down the Safe Harbor agreement in October 2015; the agreement had allowed US tech companies to use a single standard for consumer privacy and data storage in the US and Europe.
Read more: Safe Harbor Ruling Leaves Data Center Operators in Ambiguity
A deal is expected to be reached in the coming days, but national data protection regulators in Europe could meet as soon as Tuesday to start restricting trans-Atlantic flows of data, according to the report.
American companies aren’t expected to make any changes to how they do business immediately, and while there could be legal implications for companies regardless of size, experts say the most likely targets for litigation are big US tech companies like Google and Facebook that rely heavily on personal data, according to the report. Hosting companies with data centers in Europe and the US could also be impacted.
According to the Times, American officials have offered a number of concessions in recent weeks, including increased oversight over American intelligence agencies’ access to European data, and creating a data ombudsman within the State Department to give Europeans a direct point of contact should they believe their data was misused by a US government agency.
However, European officials are skeptical that these moves would hold up in European courts, and are looking for more specifics around how these proposals would work, which should come to light over the next several days as both sides work to come to an agreement.
On Thursday, the US Senate Judiciary Committee approved on a bipartisan vote the Judicial Redress Act (JRA), which provides a legal mechanism that entitles EU citizens to sue in US court for violations of the US Privacy Act, according to Forbes.
This first ran at http://www.thewhir.com/web-hosting-news/us-european-officials-miss-deadline-on-data-transfer-deal
11:10p
IT Innovators: Scaling Efficiently in the Cloud with Software Innovation 
By WindowsITPro
With more than 600,000 direct customers and 2 million indirect customers through resale partnerships, Hostway Services, Inc., a provider of cloud, managed, web, and hybrid hosting, has emerged as one of the largest cloud hosting and infrastructure-as-a-service (IaaS) providers in the world. Software has played a big part in helping the company scale efficiently, especially in meeting fast-growing demand for private and hybrid cloud services.
“Software is at the core of pretty much everything we do from a cloud perspective,” said Tony Savoy, senior vice president and general manager of managed hosting and cloud services at Hostway.
In the past, Hostway had to turn down some very big opportunities because it couldn’t offer robust network isolation. A virtual machine operating system upgrade in late 2014, followed by additional enhancements in June 2015, gave Hostway access to new capabilities that allowed for deeper network isolation.
Read more: Forget Hardware, Forget Software, Forget the Infrastructure
“A software-defined data center has made it easier for us to segment customers from a network perspective,” said Savoy, adding that one important advancement in network virtualization is NVGRE – Network Virtualization using Generic Routing Encapsulation – which virtualizes IP addresses for load-balanced, multi-tenant networks that can be shared across cloud and on-premises environments.
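Conceptually, NVGRE keeps tenants separate by tagging traffic with a Virtual Subnet ID and mapping each tenant’s customer address to the provider address of the physical host carrying it. The snippet below is only a toy illustration of that mapping idea, with hypothetical tenants and addresses; it does not implement the actual GRE encapsulation:

```python
# Toy illustration of the NVGRE idea: each tenant gets a Virtual Subnet ID
# (VSID), so identical customer IP addresses can coexist without clashing.
# The lookup maps (VSID, customer address) -> provider address of the host
# carrying that VM. Tenants, VSIDs, and addresses are hypothetical.

lookup = {
    (5001, "10.0.0.4"): "192.168.20.11",  # tenant A's VM on physical host 11
    (5002, "10.0.0.4"): "192.168.20.14",  # tenant B reuses 10.0.0.4, no conflict
}

def provider_address(vsid, customer_ip):
    """Where should an encapsulated packet for this tenant's VM be sent?"""
    return lookup.get((vsid, customer_ip), "unknown")

print(provider_address(5001, "10.0.0.4"))  # 192.168.20.11
print(provider_address(5002, "10.0.0.4"))  # 192.168.20.14
```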
In addition to deploying NVGRE, Hostway partnered with a virtualization management and security provider to add even more granular control over network virtualization and isolation for customers. These capabilities allowed Hostway to “engage different types of customers; customers that are more security conscious,” said Savoy, “and customers whose applications require them to segment and isolate one workload from another.”
“Now we can do some layers of multitenancy to reduce cost for customers, for economies of scale,” he said, adding that more economic solutions are of particular benefit to service providers. “Economies of scale benefits service providers because they can do more things in a tighter, more encapsulated series of technologies and then they can extend those economies of scale down to customers in terms of more economical price points.”
“The second key element that the software does for us is provide consistency and quality to the services that we offer customers, because it can be done in a repeatable process,” said Savoy, adding that any time you introduce humans, you tend to introduce risk.
“This starts to go to the concept of self-healing, machine learning—there are tools and instrumentation out there that when certain triggers are met, the software can automatically repair, resolve or restart a particular scenario,” said Savoy. “That’s something that we look to do from our software, so that we don’t have to have humans going in and returning to service the most common things that fail.”
Sometimes, for example, the best way to troubleshoot is to restart the web server. Through monitoring tools, when certain thresholds are met, Hostway can trigger this restart automatically. In the past, a staff member had to log into the machine, find the problem, and resolve the issue manually, which was time-consuming and added cost.
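A stripped-down version of that self-healing loop looks something like the sketch below; the health check URL, failure threshold, and restart command are hypothetical placeholders rather than Hostway’s actual tooling:

```python
# Minimal self-healing sketch: probe a web server, and after N consecutive
# failed health checks, trigger an automated restart instead of paging a
# human. URL, threshold, and restart command are hypothetical placeholders.

import subprocess
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/health"        # hypothetical endpoint
FAILURE_THRESHOLD = 3                               # restart after 3 misses in a row
RESTART_CMD = ["systemctl", "restart", "mywebapp"]  # hypothetical service name

def healthy():
    """Return True if the health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False

failures = 0
while True:
    if healthy():
        failures = 0
    else:
        failures += 1
        if failures >= FAILURE_THRESHOLD:
            subprocess.run(RESTART_CMD, check=False)  # automated remediation
            failures = 0
    time.sleep(30)  # poll every 30 seconds
```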
“Companies are shifting more to this DevOps kind of work scenario where developers are more of the operators and the way that they operate is through automation,” said Savoy, advising that IT organizations look to automation to increase efficiency and provide a higher level of service.
“Definitely invest in the automation,” recommended Savoy. “If automation is an afterthought, you’ll be sitting here a year from now and wondering why the wheels are starting to fall off on the business, so invest in the very beginning in automation and streamline the flow of service transition, implementation and support for clients.”
Christy Peters is a writer and communications consultant based in the San Francisco Bay Area. She holds a BS in journalism and her work covers a variety of technologies including semiconductors, search engines, consumer electronics, test and measurement, and IT software and services. If you have a story you would like profiled, please contact her at christina_peters@comcast.net.
The IT Innovators series of articles is underwritten by Microsoft, and is editorially independent.
This first ran at http://windowsitpro.com/it-innovators/it-innovators-scaling-efficiently-cloud-software-innovation