Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, July 28th, 2015

    12:00p
    Report: Singapore is a $1B Data Center Market and Growing Fast

    The Singapore data center market is booming thanks to the country’s proximity to China, political stability, and a business-friendly regulatory environment. Colocation providers in the island nation made close to $1 billion in revenue in 2014 – a market size projected to surpass $1.2 billion next year, according to a new report by analysts at Structure Research.

    The technology market in Asia is growing faster than anywhere else, with most of the growth occurring in China. As companies from elsewhere look to expand into Asian markets and Asian companies expand globally, Singapore, like Hong Kong, has emerged as a data center and network gateway between key Asian markets and the rest of the world.

    The Structure report focuses specifically on data center colocation services and counts about 50 colocation providers in Singapore, including providers with their own data centers as well as sizable resellers. Cumulatively, these providers sell colo space in 44 operational data centers today, but a lot of data center construction is underway, and more facilities are set to come online this year and next.

    Singapore Data Center Market, By the Numbers:

    • Number of data center providers: <50
    • Number of operational data centers: 44
    • Total critical power capacity: 200 MW
    • Total data center space: <2 million square feet
    • Total rack space: 55,000 racks
    • Total 2014 colocation services revenue: $963 million
    • Total projected 2016 colocation services revenue: >$1.2 billion

    Numbers courtesy of Structure Research

    First-Mover Advantage

    While it now competes for data center business with neighboring markets – such as Cyberjaya and Johor in Malaysia or Jakarta in Indonesia – Singapore started laying the foundation for a data center market much earlier than other countries in the region and therefore has some clear advantages, said Jabez Tan, a senior analyst at Structure and a native Singaporean.

    There is robust network and power infrastructure throughout the island, and numerous key submarine cable systems land there, connecting it to Hong Kong and Australia. There’s a diverse enterprise ecosystem in Singapore, including a robust financial-services sector.

    “Business ecosystems like the ones Singapore has established take years of time and effort to develop and are not easily replicated, since there are no shortcuts, which is why first-mover advantage is critical and why Singapore has successfully positioned itself as the top data center hub in Asia,” Tan said.

    Lay of the Land

    Structure splits Singapore into three distinct regions: northern, western, and eastern. Most data centers are in the eastern region. “More than half of data center capacity of Singapore is on the eastern side,” Tan said. The western region has the second-largest colocation data center presence.

    There isn’t a whole lot of colocation data center capacity in the northern region, but at least two providers (Global Capacity and 1-Net) are building data centers there. The northern part is also where several US internet giants have built their data centers: Google, Microsoft, and Yahoo, according to Tan.

    Aliyun, the cloud services arm of the Chinese internet giant Alibaba, disclosed plans to expand its data center infrastructure to Singapore, among other locations, last week.

    Meet the Top Players

    The biggest data center provider in Singapore today is the local telco Singtel, which has eight data centers on the island and commands 40 percent market share, according to Structure. Silicon Valley-based Equinix is the second-largest player with three existing data centers and confirmed expansion plans in Singapore.

    The third-, fourth-, and fifth-largest players are London-based Global Switch, Singaporean Keppel, and San Francisco-based Digital Realty, in that order.

    Among the top 10 colo providers on the island, most are international companies. Only four out of 10 are local, Tan said.

    Malaysia: the New Jersey of Singapore

    Malaysia is Singapore’s biggest competitor for data center projects. In many ways, Malaysia is to Singapore what New Jersey is to New York City, Tan said. It is a close neighbor with cheaper land, power, and labor. But the Singapore data center market has a lot going for it that Malaysia does not.

    Political instability, subpar connectivity, and a sizable gap in workforce education are all things Malaysia would have to overcome if it were to successfully compete with Singapore for data center construction, Tan explained.

    Another obstacle would be Singapore’s data sovereignty laws, which require certain data center workloads and sensitive user data to be stored in the country. These laws will make it difficult for a service provider who wants to sell into the Singapore market (Malaysia’s largest neighboring market) to set up shop in Malaysia.

    Scarcity and high cost of real estate will only play in favor of the colocation market in Singapore. “Our research indicates that colocation will be the preferred way forward” as an enterprise data center strategy in Singapore, Tan said.

    3:00p
    Putting Pen to Paper – Four Essential Factors to Calculating TCO on UPS Equipment

    Anderson Hungria serves as senior UPS product manager for Active Power.

    Data center operators are always aiming to boost profit margins and make the most out of their investments, especially with products that are designed to last up to 20 years. Total cost of ownership (TCO) is a key factor in realizing energy and cost savings over the life of any electrical product, such as a UPS. TCO is the total cost needed to purchase (capital expenses or CAPEX) and operate and maintain (operational expenses or OPEX) a product or facility. Since data centers consume a tremendous amount of energy, any reduction in energy consumption directly affects the bottom line, and that starts with the electrical infrastructure – primarily the UPS.

    Let’s evaluate four essential factors in calculating the TCO of a UPS – initial purchase and installation cost, UPS efficiency, cooling needs, and required maintenance and component replacement. For consistency and transparency, all of the comparative figures highlighted throughout the article are based on the following scenario and assumptions (a short arithmetic sketch follows the list).

    • Four flywheel UPS systems compared to four double conversion UPS systems with four standard battery cabinets each deployed in parallel
    • Both flywheel and battery UPS systems are rated at 750 kVA / 0.9 power factor
    • 2.7 megawatts of total UPS capacity protecting a 1 megawatt load
    • UPS systems operating at 40 percent load for redundancy
    • Flywheel UPS has an efficiency rating of 96.5 percent vs. 93 percent for battery UPS
    • Battery UPS includes battery monitoring per cabinet
    • Initial cost and startup and installation costs are identical for both systems
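
    The sketch below (Python, not part of the original article) works out how four 750 kVA modules at a 0.9 power factor amount to roughly 2.7 MW of UPS capacity and, against a 1 MW load, an operating point of roughly the 40 percent load cited above.

    ```python
    # Sanity check of the comparison assumptions above (illustrative figures only).

    units = 4              # UPS modules deployed in parallel
    kva_per_unit = 750     # rated apparent power per module, kVA
    power_factor = 0.9     # rated power factor

    total_capacity_kw = units * kva_per_unit * power_factor  # 2,700 kW, i.e. ~2.7 MW
    protected_load_kw = 1_000                                # 1 megawatt critical load

    load_fraction = protected_load_kw / total_capacity_kw    # ~0.37, roughly the 40 percent cited

    print(f"Total UPS capacity: {total_capacity_kw:,.0f} kW")
    print(f"Operating load:     {load_fraction:.0%} of capacity")
    ```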

    Initial Purchase and Installation Costs

    The initial cost of a UPS is only a small part of the equation of owning and operating an efficient and profitable data center. Aside from the UPS itself, the choice of energy storage and electrical infrastructure is also very important in determining the overall initial purchase and installation costs.

    Keep in mind that in many instances the lowest initial cost solution is not the best long-term decision. An integrated flywheel UPS, for example, does not require costs associated with purchasing battery cabinets, battery monitoring and additional safety and cooling provisions, thus delivering a lower TCO at a competitive initial cost.

    Even when the initial cost of a double conversion UPS is lower, about 40 percent of the initial price is associated with batteries that will have to be replaced in four to six years depending on usage and maintenance. Battery installation can be another large and time consuming initial expense. In comparison, more than 95 percent of the initial investment (capital expense) of an integrated flywheel UPS will never have to be replaced.

    While the initial cost of a UPS is certainly important, TCO evaluations need to be balanced and long-term operational, maintenance, and replacement costs considered. Operating costs can quickly exceed the initial investment of a UPS. Now, let’s take a look at those factors that will affect operational costs.

    UPS Efficiency

    Efficiency plays a significant role in the energy and operating cost savings of a UPS over the life of the product. The higher the power demand, the higher the savings, even with a 1 or 2 percent efficiency gain. A high-efficiency UPS is a must given today’s emphasis on energy savings and green operations; however, two important aspects must be considered:

    • What is the UPS efficiency at the actual rated load?
    • What is the level of protection or mode the UPS is operating in?

    First, UPS efficiency is load-dependent and not linear, which means the UPS efficiency curve dictates the actual efficiency at a given load. For traditional UPS loads in the 40-50 percent range, a conventional double conversion UPS runs at approximately 93 percent efficiency versus approximately 96.5 percent for an integrated flywheel UPS.

    Second, it’s important to assess the UPS operating mode. Eco mode, or high-efficiency mode, can offer 98-99 percent efficiency; however, these modes don’t offer the same voltage regulation and protection as normal or online mode. An integrated flywheel UPS can offer up to 98 percent efficiency and maintain +/- 1 percent voltage regulation and protection for critical loads at all times while delivering significant savings over time.

    Over a 10-year period at 96.5 percent efficiency, an integrated flywheel UPS can deliver savings of more than $300,000 when compared to a battery UPS operating at 93 percent efficiency.
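
    To see where a figure of that magnitude comes from, here is a minimal sketch of the underlying energy arithmetic; the electricity price of $0.10 per kWh is an assumed value for illustration, not a number from the article.

    ```python
    # Rough 10-year energy cost comparison for a 1 MW protected load (illustrative only).
    # The $0.10/kWh electricity price is an assumption, not a figure from the article.

    load_kw = 1_000
    hours = 24 * 365 * 10          # ten years of continuous operation
    price_per_kwh = 0.10           # assumed utility rate, USD

    def utility_energy_kwh(efficiency: float) -> float:
        """Energy drawn from the utility to deliver the load at a given UPS efficiency."""
        return load_kw * hours / efficiency

    battery_ups_cost = utility_energy_kwh(0.93) * price_per_kwh    # double conversion UPS
    flywheel_ups_cost = utility_energy_kwh(0.965) * price_per_kwh  # integrated flywheel UPS

    savings = battery_ups_cost - flywheel_ups_cost
    print(f"10-year energy savings: ${savings:,.0f}")  # about $340,000 at these assumptions
    ```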

    Cooling Needs

    After servers and computer equipment, cooling represents about 30 percent of a data center’s energy usage. Many techniques are currently implemented to reduce cooling costs, including hot/cold aisle containment, economizers, and free cooling. When it comes to the UPS, energy savings and low TCO can be realized by choosing a UPS that can operate at higher ambient temperatures and has low heat dissipation. UPS batteries must be kept at 77 F, requiring a tremendous amount of cooling equipment and, in some cases, dedicated battery rooms. One major advantage of a flywheel UPS is that it can operate in environments up to 104 F with no degradation in performance, lowering overall cooling requirements by almost 50 percent. For a data center with a 1 megawatt load, cooling savings can exceed $100,000 over a 10-year period.

    Maintenance and Component Replacement

    Data centers are like living organisms and require a significant amount of maintenance in order to ensure high reliability and availability to critical loads. For conventional UPS products, batteries need to be checked two to four times annually while an integrated flywheel UPS requires only one preventative maintenance event per year. Minimizing maintenance frequency will lower operational expenses and reduce the possibility of any downtime caused by service or potential human error.

    As part of most maintenance contracts, the periodic replacement of certain components such as batteries, bearings, or DC capacitors is usually included. These are all additional costs that some data center managers wish they could avoid, regardless of whether they are treated as a capital or operational expense. Since a flywheel UPS does not rely on batteries, the system can offer the lowest TCO in the market when it comes to replacement costs. Batteries are normally replaced after four to six years of use, so over 10 years a data center can expect two battery replacement cycles at a total cost of approximately $750,000.

    Based on the assumptions outlined earlier, overall TCO savings are approximately $1.5 million over a 10-year period for an integrated flywheel UPS versus a conventional UPS with batteries.
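
    As a rough cross-check, the savings figures quoted above can be tallied as shown below. The article does not itemize the remainder up to the $1.5 million total, which would come from maintenance visits, battery monitoring, and similar operational differences, so treat this as an illustration rather than the author’s model.

    ```python
    # Rough roll-up of the 10-year savings figures quoted in the article (illustrative only).

    quoted_savings = {
        "energy (96.5% vs. 93% efficiency)": 300_000,   # "more than $300,000"
        "cooling (1 MW load)": 100_000,                 # "can exceed $100,000"
        "battery replacements (two cycles)": 750_000,   # "approximately $750,000"
    }

    itemized = sum(quoted_savings.values())             # $1,150,000
    claimed_total = 1_500_000

    print(f"Itemized savings:    ${itemized:,}")
    print(f"Claimed TCO savings: ${claimed_total:,}")
    print(f"Unitemized balance:  ${claimed_total - itemized:,}")  # maintenance, monitoring, etc.
    ```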

    Take Action

    The typical lifespan of a UPS product (not the battery energy storage) can approach 20 years, which is a long time to be paying for ever-increasing operating costs, particularly if the system is inefficient and requires lots of maintenance. How do we prevent these costly mistakes?

    Do your homework. Question your vendors. Request TCO models and have the vendor walk you through the numbers. Only then will you see the possibility of saving a substantial amount of money over the life of the UPS.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:06p
    Puppet Labs Adds Visualization to IT Automation Experience

    Infrastructure as code is nice, but for many, visualization of that code is even better.

    Puppet Labs today rolled out an update to the 2015 release of its IT automation software that sports a new user interface and a tool to help IT organizations visualize infrastructure as code, looking to make the popular open source-based DevOps platform easier to use.

    The company has also unified the agent software provided in the open source version of its software with the agent software it previously made available in the commercial version.

    Tim Zonca, director of product marketing for Puppet, said that by unifying the agent software the company hopes to make the transition from the open source version to the commercial release of the IT automation platform a much smoother process for the user.

    “Our agent software is now 50 percent smaller and 20 percent lighter,” said Zonca. “We’re trying to make moving to Puppet Enterprise more straight-forward.”

    Finally, Puppet Enterprise 2015.2 adds a more robust programming environment along with tighter integration with switches from Cisco, application delivery controllers from Citrix, and VMware’s vSphere hypervisor.

    Although Puppet already has over 26,000 customers, the single biggest challenge in getting IT organizations to adopt software that essentially turns infrastructure into programmable code remains inertia, Zonca said. Many IT organizations still rely on custom scripts that don’t scale and aren’t particularly well documented. As a result, when the IT administrators who wrote those scripts leave the company, they take that intellectual property with them.

    A recent Puppet Labs 2015 State of DevOps Report, produced in partnership with book publisher IT Revolution, finds that organizations that have a more mature approach to managing the DevOps process not only deploy code 30 times more frequently; they also experience 60 percent fewer service disruptions while recovering 168 times faster when there is a service interruption.

    To help more IT organizations achieve those numbers, Zonca said, Puppet has added an Interactive Node Graph that dynamically visualizes infrastructure models defined using Puppet code. The end goal is to make it easier to optimize Puppet code and respond to changes faster, ultimately reducing the time it takes to achieve a particular desired infrastructure state. Meanwhile, the new user interface makes it simpler to visualize all that data across larger sets of infrastructure deployments.

    Thanks to the rise of application programming interfaces at both the application and infrastructure level, data centers are essentially becoming programmable entities. The rate at which IT organizations embrace that notion will naturally vary. But given the complexity of the data center environment and the general lack of talent in critical data center technology areas, the rise of the programmable data center is now all but inevitable.

    5:49p
    Fujitsu Dumps Neutron for Midokura in OpenStack Cloud

    OpenStack is gaining in popularity, but several components of the open source cloud infrastructure software family still leave a lot to be desired. OpenStack networking is one.

    As part of a larger push into the private cloud realm, Fujitsu announced today it will make network virtualization software from Midokura a core element of its OpenStack-based private cloud platform.

    Ashish Mukharji, director of business development for Midokura, said this latest effort builds on an existing alliance between the two companies that revolves around the Midokura Enterprise MidoNet (MEM) network virtualization software, which replaces Neutron, the OpenStack networking controller, with a MidoNet plug-in that, according to him, scales better.

    The latest version of MEM added support for flow tracing of virtual ports, enhanced Border Gateway Protocol configuration session views, and support for Puppet and Docker container environments.

    As one of the many providers of technologies that can plug into OpenStack, Midokura has been making a case for an open source version of its MidoNet network virtualization software that scales across thousands of virtual ports. MEM is an enterprise-class version of MidoNet, strengthened with high-quality support and management tools. Fujitsu has been among the leading contributors to the MidoNet project.

    Mukharji said Midokura was making a concerted effort to reach out to both OEM partners, such as Fujitsu, and systems integrators to increase support for usage of both MEM and MidoNet in cloud computing deployments.

    “We’re working with about 25 partners,” said Mukharji. “We’re seeing strong demand on the private cloud side.”

    In general, OpenStack is not an all-or-nothing proposition. IT organizations can choose to implement any subset of the software family. They can also opt to download the raw bits themselves or make use of an OpenStack distribution through which upgrades and extensions are curated and validated by a third party.

    Most of the existing deployments of OpenStack have been executed by IT organizations with extensive engineering resources. But as OpenStack technologies mature, the platform itself becomes more accessible to the average IT team. That in turn is creating a significant challenge for vendors like VMware and Microsoft, which provide similar IT management capabilities using primarily proprietary software.

    The degree to which OpenStack will usurp Microsoft and VMware remains to be seen. No doubt many IT organizations are keeping track of OpenStack developments, including launching pilot projects to gain some familiarity with the framework. After all, the price of open software is too hard to ignore, unless, of course, implementing and managing it proves to be more trouble than it’s worth.

    7:13p
    Report: Google Mulls Another Oregon Data Center Build

    While its Android developer team is busy patching up the Stagefright vulnerability, made public yesterday, Google’s infrastructure team continues working to make sure the company never runs out of data center capacity to support its products and services, including those around its mobile OS.

    The company has secured 23 acres of land for potentially more construction near the already massive existing Google data center campus in The Dalles, Oregon. Google is exploring the possibility of building a new data center on the land at the Port of The Dalles, according to a report by OregonLive published Monday evening.

    Like other internet and cloud services giants, Google never stops expanding data center capacity. The most recent Google data center expansion projects include construction on the site of a shuttered coal power plant in Alabama, a $300-million build in the Atlanta metro, a $380-million data center in Singapore, and a $66-million project in Taiwan.

    Google’s Oregon plans at the moment seem vague. Darcy Nothnagle, the company’s head of external affairs for the western region, told the news service the team was “excited about exploring the possibility of expanding our operations.”

    The Google data center in the city was the first the company designed and built for itself, in 2006, when it switched from leasing data center space to building its own. It invested $1.2 billion in the first facility there, and this April formally launched a second one, estimated to have cost about $600 million.

    The company enjoys state and local support for its construction projects there in the form of tax breaks. If it comes to fruition, the project at the port will be exempt from local property taxes, according to OregonLive.

    But those tax breaks aren’t free. If Google decides to build another data center, it will have to pay $1.7 million to the city and county initially, and then $1 million more per year.

    Google has a standing tax agreement with local authorities to pay $1.2 million in 2013 plus $800,000 per year starting in 2016. The money goes to the city, the county, and the county school district.

    7:43p
    Time to Get Serious About Data Security in the Data Center

    With IT organizations of all sizes now being held more accountable for security than ever, it’s become apparent that the level of security provided by a data center operator is now a key point of service differentiation.

    Bill Kiss, CEO of Global 1 Research and Development, will detail many of the ways that data security inside the data center can be routinely compromised in a presentation at the Data Center World conference in National Harbor, Maryland, this September.

    “IT organizations need to be more proactive, versus reactive, about data security,” Kiss said. “They need to make sure the security they have in place is validated.”

    In addition to routine threats, Kiss noted that data centers are now targets in a game of asymmetrical cyber warfare between nation states. In fact, everything from customer data to super administrator credentials is fair game for a hacker ecosystem that gets more sophisticated with each passing day.

    Not only are IT security budgets limited, but the size of the attack surface continues to expand in the age of the cloud. Kiss said it’s not so much a matter of when a data center will be compromised at this point, but rather how to manage and contain the inevitable breach.

    IT organizations should make sure encryption is applied as broadly as possible, because most hackers won’t be able to make use of encrypted data and will more than likely take their skills somewhere that promises a better economic return, he said.
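
    As a concrete illustration of that advice, the sketch below uses the Python cryptography library to encrypt a record at rest with a symmetric key; it shows the general idea only and is not a substitute for a proper key management scheme.

    ```python
    # Minimal illustration of encrypting data at rest (sketch, not production key management).
    from cryptography.fernet import Fernet

    # In practice the key belongs in a hardware security module or key management service,
    # never alongside the data it protects.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record = b"customer: Jane Doe, card: 4111-1111-1111-1111"
    token = cipher.encrypt(record)   # this ciphertext is what lands on disk or in the database

    # Without the key, stolen ciphertext is of little use to an attacker.
    assert cipher.decrypt(token) == record
    ```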

    The first step, of course, is making the effort to truly assess the level of security being applied inside any given data center. According to Kiss, that starts with not relying solely on the judgment of internal IT administrators, who by and large are not technically equipped to deal with the true scope of modern IT security threats.

    For more information, sign up for Data Center World National Harbor, which will convene in National Harbor, Maryland, on September 20-23, 2015, and attend Bill’s session titled “Improving IT Data Security.”

    9:51p
    Amazon’s MySQL Alternative Aurora Now Generally Available

    Amazon Web Services has moved Aurora, its alternative to open source MySQL, into wide release. Aurora is available as a database engine in the growing AWS Relational Database Service. It has a MySQL-compatible interface but was built from the ground up and optimized for cloud infrastructure.

    How do you position against open source? Emphasize performance and charge for the infrastructure in a Database-as-a-Service offering. The company positions Aurora as a formidable alternative to traditional MySQL, claiming Aurora couples enterprise-grade performance with open source database economics, pegging the cost at one-tenth that of a traditional MySQL offering. Amazon claims Aurora also performs up to five times better with commercial-grade stability. It was designed for 99.99 percent availability.

    Instance health is continuously monitored to automatically detect and recover from most database failures in less than 60 seconds, according to Amazon. Data is automatically and continuously backed up to S3, Amazon’s cloud storage service. The database cache survives a restart with no cache warming required, and the database fails over to a read replica in the event of a failure.

    Aurora is aimed at companies looking to leverage the performance and economics of cloud without having to reinvent their database. MySQL on AWS is also available, but Aurora will be the default recommendation.
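
    For teams that want to try Aurora programmatically rather than through the console, the following is a hedged sketch using the boto3 RDS client; the cluster identifier, credentials, and instance class are placeholder values for illustration, not AWS recommendations.

    ```python
    # Sketch: provisioning an Aurora cluster through the RDS API with boto3.
    # Identifiers, credentials, and the instance class below are placeholders.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # An Aurora deployment is a cluster (the shared storage volume) plus one or more instances.
    rds.create_db_cluster(
        DBClusterIdentifier="example-aurora-cluster",
        Engine="aurora",
        MasterUsername="admin",
        MasterUserPassword="change-me-before-use",
    )

    rds.create_db_instance(
        DBInstanceIdentifier="example-aurora-instance-1",
        DBInstanceClass="db.r3.large",   # an instance class offered at Aurora's launch
        Engine="aurora",
        DBClusterIdentifier="example-aurora-cluster",
    )
    ```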

    To further encourage Aurora usage, AWS has also made it easy for existing MySQL customers to transfer over with minimal changes: Aurora’s compatibility with MySQL means existing MySQL databases can be migrated with one click.

    “We started with a blank piece of paper,” said Anurag Gupta, the product’s general manager, when announcing the preview at Amazon re:Invent last year. “It’s a database built for AWS cost structure.”

    An increasing number of database workloads are moving to the cloud in order to take advantage of cloud economics. Rather than spending on hardware, cloud provides the advantage of paying for what you use. Renting rather than buying also means customers can leverage more powerful hardware and scale more easily than within the confines of on-premises hardware.

    “Today’s commercial-grade databases are expensive, proprietary, high lock-in, and come with punitive licensing terms that these database providers are comfortable employing,” said Raju Gulabani, VP of database services at AWS, in a press release. “It’s why we rarely meet enterprises who aren’t looking to escape from their commercial-grade database solution.”

    The big cloud providers continue to differentiate IaaS with advanced services like DBaaS (Database as-a-Service). Aurora is a close competitor to Microsoft’s SQL Azure database.

    In terms of cloud database offerings, a lot of the focus has been on NoSQL. IBM acquired Cloudant to boost its cloud database offerings last year, with Cloudant complementing SoftLayer infrastructure and tying into IBM’s analytics portfolio. Google made its internal NoSQL database Bigtable available as a service on its cloud in May.

    Aliyun, the cloud computing arm of China’s Alibaba, is diversifying its cloud database offerings, recently partnering with EnterpriseDB to make its PostgreSQL relational database available on Aliyun’s cloud.

    Rackspace is focused on managed database services. It acquired ObjectRocket in 2013 and has been expanding managed database services from there.

    Big cloud provider database offerings also develop partner ecosystems of their own. AWS partners include MySQL alternative and drop-in replacement MariaDB, Tableau, Toad, Webyog, Navicat, and Talend. All have certified or are certifying their products with Amazon Aurora.

    A wide range of companies participated in a preview, many of which commented in a press release on Aurora’s ability to scale with no degradation in performance, as well as on how easy it was to migrate existing databases into the service. Customers include Pacific Gas and Electric, IoT weather data provider Earth Networks, facial recognition provider FacialNetwork, online course provider Coursera, and Intuit.

    The initial regions for Aurora are US East (Northern Virginia), US West (Oregon), and EU (Ireland), with plans to roll out widely in the coming months.

    10:01p
    Google Cloud Platform to Let Customers Control Encryption Keys


    This article originally appeared at The WHIR

    Google Cloud Platform is beginning to allow developers to manage their own encryption keys, providing them more control of their data security.

    Prior to the Tuesday announcement, Google encrypted all of the data stored on its cloud but also held the encryption key that provides access to encrypted data. This meant there was some uncertainty as to whether someone infiltrating Google, or Google itself, could access data stored on its service.

    Now, the “Customer-Supplied Encryption Keys” feature, available as a free beta, allows customers to use their own encryption keys, providing them more control over their data security, as long as they are able to store the keys securely.

    “With Customer-Supplied Encryption Keys, we are giving you control over how your data is encrypted with Google Compute Engine,” Leonard Law, product manager for Google Cloud Platform for Enterprise, wrote in a blog post. “Keep in mind, though, if you lose your encryption keys, we won’t be able to help you recover your keys or your data – with great power comes great responsibility!”
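
    On the customer side, the key itself is simple: a 256-bit AES key, base64-encoded, supplied with the relevant Compute Engine request. The sketch below only generates and encodes such a key; how it is attached to a disk or instance request is covered in Google’s documentation and is assumed rather than reproduced here.

    ```python
    # Sketch: generating a customer-supplied encryption key (CSEK) for Compute Engine.
    # The key must be 256 bits, base64-encoded; keeping it safe is the customer's job --
    # as the post notes, Google cannot recover data if the key is lost.
    import base64
    import os

    raw_key = os.urandom(32)                          # 256 bits of cryptographically random data
    csek = base64.b64encode(raw_key).decode("ascii")

    print(csek)  # supply this value with the disk or instance request, and store it securely
    ```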

    Amazon and Box already allow customers to use their own encryption keys, which can simplify application security and compliance in highly regulated industries, and help control the flow of data.

    “Google Compute Engine gives us the performance and scale to process high-volume transactions in the financial markets,” Sungard Consulting Services CTO Neil Palmer said in a statement. “With Customer-Supplied Encryption Keys, we can independently control data encryption for our clients without incurring additional expenses from integrating third-party encryption providers. This control is critical for us to realize the price/performance benefits of the cloud in a highly regulated industry.”

    Customer-Supplied Encryption Keys are now available in beta in select countries, and accessible through Google’s API, Developers Console, and command-line interface gcloud.

    This first ran at https://www.thewhir.com/web-hosting-news/google-cloud-platform-to-let-customers-control-encryption-keys

