Data Center Knowledge | News and analysis for the data center industry
Wednesday, March 18th, 2015
4:00a

CenturyLink Lights up First Cloud Data Center in Asia

Just as Asian service providers are extending their infrastructure reach to the U.S. and Europe, American and European companies are building out a local presence in Asian markets.
The most recent example of the latter is the launch of a cloud data center in Singapore by CenturyLink, announced Tuesday. This is the first location in Asia for CenturyLink’s cloud infrastructure, although the company has been providing other services in the region for years.
The cloud infrastructure is in one of the Monroe, Louisiana-based company’s two data centers in Singapore. The launch is part of a global cloud expansion push CenturyLink is undertaking this year, which will take its cloud services infrastructure from the current 13 locations (including the new Singapore site) to about 20. Other existing CenturyLink cloud data centers are in the U.S., Canada, and Europe.
Pent-Up Cloud Demand in Asia
The company’s reasons for launching a cloud data center in Asia are similar to the reasons other service providers have cited for expanding to the region. There is pent-up demand for IT infrastructure services from both local companies and multinationals looking either to enter the market or expand their existing footprint there.
CenturyLink has heard many requests from its existing customers for a cloud location in the region, Richard Seroter, director of cloud product management for the company, said. “We are seeing a number of large multinational companies looking to expand into the region.”
Seroter ended up at CenturyLink in 2013, when the telco and IT services giant bought his former employer Tier 3, a cloud service provider where he worked as senior product manager. Tier 3’s products formed the foundation of what is today CenturyLink Cloud.
The list of cloud services available out of the Singapore data center is similar to the services available from the cloud’s other locations, including high-performance and standard servers, storage, orchestration, white label cloud, and service catalogs. The company plans to add its managed services to the list in the near future, Seroter said.
The idea is to make the customer’s experience of deploying cloud resources in Singapore the same as anywhere else. “It just kind of shows up for them [on the customer portal] as a destination,” he said.
Singapore is Just the Start
The company is planning to deploy more Asia-Pacific cloud locations, but Singapore was a logical place to start building CenturyLink’s cloud presence in the region. The company already had a successful colocation and managed hosting business there, as well as people who were familiar with the local market. “Singapore has proven to be a successful area for us,” Seroter said.
CenturyLink is competing with Asian as well as American and European players in Singapore and the Asian market at large.
A New Jersey cloud provider called Linode announced plans to add a Singapore data center to its existing cloud location in Tokyo earlier this year, for example. Amsterdam-based LeaseWeb opened a Singapore site within a Pacnet data center last year. Microsoft’s Azure cloud has a Singapore location, and so do Amazon Web Services and IBM SoftLayer.
Local Firms Care About Data Sovereignty
Over the past couple of years, data sovereignty has joined the list of top considerations for users of cloud services, as a result playing a bigger role in data center site selection by cloud service providers.
Seroter said customers in Singapore, especially local companies, were increasingly concerned with data sovereignty, wanting to make sure their data does not leave the country. “We field those questions a lot,” he said.
CenturyLink guarantees that its cloud customers’ data does not leave the region it’s in. “We’ll make sure that you’re comfortable knowing that your business data has never left that region,” he said.

3:30p
Private Versus Public Cloud: No Longer an Either/Or Debate

Peter Cutts is the VP of Cloud Solutions at EMC.
The prolific growth of the public cloud over the last several years has propelled many organizations into the future, leaving IT departments understandably questioning what their role is and should be in this new world.
Outsourcing next generation applications and resource requirements to public cloud leaders like Amazon and Dropbox certainly yields many advantages, such as low price point, speed and agility, but does it also mean the demise of the traditional IT department? Moving forward, will IT resources be delivered exclusively via public clouds?
When it comes to private versus public cloud, we are no longer living in a black and white, “either/or” world. Rather, the businesses that truly harness the power of both public and private cloud are finding that they have a significant advantage over their “either/or” peers. The truth is that despite the allure of the public cloud, we still need IT at the helm. We trust and depend heavily on our IT departments to make crucial decisions around security, governance and performance. And after all, they have a vested interest in brokering the best solutions for their company that will ultimately drive business acceleration and the bottom line.
Businesses need their IT departments looking out for them and IT departments need to prove their value to the business by showing off their agility, service capabilities and innovation.
Solving Business Problems First
Fast, cheap, and flexible. This will always be a business’s preferred approach to solving a problem. However, when it comes to efficiently managing workloads, there are other complicated considerations that come into play, and what’s fast, cheap and flexible in the moment might not turn out to be the best business decision in the long run.
The IT department, arguably the central nerve of the company, is in a singularly unique position to map out, make a case and broker a solution for the private/public/on-premises/off-premises mix that makes the most strategic sense for their organization. A hybrid cloud model allows IT to determine the most suitable mechanism based on specific workload requirements and make adjustments as needed – a competitive differentiator that no public cloud provider is able to claim.
So how can the IT department navigate these changes and present a solution that benefits both the business and its own interest? By focusing on a hybrid cloud solution that addresses the business and workload problems of internal stakeholders, IT can ultimately reaffirm its relevance and strategic value to the organization.
Making the Case for Hybrid
In the past, IT generally had the final say on the best way to deliver services to their clients. Today, however, instances of ‘shadow IT’ have increased when a business user feels there is a better, faster, or cheaper answer to his or her workload requirements than the solution being proposed by the in-house IT department. Unfortunately, the battle against shadow IT is still ongoing, so it falls on IT departments to make the case for why businesses should steer clear of the outdated ‘either/or’ scenario and focus on a hybrid solution that is effective, user-friendly, and benefits the wider organization.
In other words, IT departments need to keep their primary focus on the end-user experience, and provide plenty of transparency and communication along the way. A seamless user experience is key for business users to become more receptive to an IT solution that balances both types of clouds in order to maximize security, performance, cost, and compliance.
The Time is Now
While the role of the in-house IT department is shifting, it’s far from becoming obsolete. In a recent survey from The Economist Intelligence Unit, Line of Business managers indicated by a margin of nearly three to one that they would prefer their IT departments to broker all services for them.
Delivering the right hybrid cloud solution for the organization – and by extension the right balance of cost, security and efficiency – will ultimately reinforce the value and expertise of the in-house IT department. IT is already poised to make the best recommendations to bring value to the business – they just need to continue to deliver a great user experience and make their voices heard.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

5:00p
Foundation Unveils Slew of OpenPOWER Firsts

OpenPOWER is experiencing a lot of “firsts” this week. Indicative of the maturity of, and work happening around, the POWER architecture, OpenPOWER Foundation members are unveiling more than 10 new hardware solutions spanning systems, boards, and cards, as well as a new microprocessor customized for the Chinese market.
The foundation licenses out IBM’s POWER platform, making POWER hardware and software available to open development. The hope is to enable unprecedented customization and new styles of server hardware for a variety of computing workloads. The flood of new “firsts” illustrates the growing need for customization.
Google, IBM, Rackspace, and Suzhou PowerCore are joining more than 100 other organizations from more than 20 countries to unveil new data center innovations that deliver an open alternative computing server solution at this week’s inaugural OpenPOWER Summit.
Last April, OpenPOWER held a “show-and-tell,” showcasing six solutions and two hardware examples. It received rave reviews. “I will tell you right now, both of those hardware examples have grown incredibly,” Calista Redmond, director of OpenPOWER Global Alliances, said. “There are a couple of systems from Tyan that are impressive and Google will certainly speak for itself.”
Some of the highlights include: the first officially available OpenPOWER server, first open server design combining OpenPOWER, Open Compute and OpenStack (the culmination of Rackspace’s efforts); a prototype of Firestone, another step closer to exascale; big development for OpenPOWER in China; and the creation of an advisory group that will encourage cross-pollination among the open communities.
“Last year was the ‘tell me’ year for OpenPOWER, and this year is the ‘show me’ year,” Redmond said. “We have tangible products going out to customers from several partners. We’re starting to mature beyond a hardware show. Built around the chip and architecture, we saw an early raft of optimization, memory, and IO work; and now we’re seeing maturity for workloads.”
Rackspace Reveals Prototype
Rackspace has revealed a prototype of an open server design that exemplifies the type of work the new advisory group aims to promote. It combines OpenPOWER (architecture) and Open Compute (hardware) design concepts with OpenStack management: the trifecta of open cloud. The new design will be incorporated in Rackspace data centers.
As Rackspace has illustrated, interest in OpenPOWER goes beyond hardware vendors, extending to service providers.
Tyan’s First OpenPOWER Server
The first commercially available OpenPOWER server comes from Tyan, one of the first partners in the consortium and the first with a customer reference system. The servers are designed for large-scale cloud deployments. The Tyan TN71-BP012 will be available in the second quarter of 2015.
OpenPOWER Firsts in China
Chinese chip design company PowerCore introduced the CP1, the first POWER chip for the Chinese market. The CP1 will be used by Zoom Netcom in a new line of servers called RedPOWER, the first Chinese two-socket OpenPOWER system, coming to market in 2015.
OpenPOWER member Chuang He shared designs for China-branded OpenPOWER systems incorporating POWER8 processors, with planned availability in 2015.
China is a huge (and growing) technology market, and OpenPOWER has made early in-roads with an emerging OpenPOWER ecosystem there.
The Chinese government endorsed OpenPOWER last year through the formation of the China POWER Technology Alliance, whose mission is to promote the upgrading of China’s industrial structure through the integration of local Chinese and OpenPOWER ecosystem resources under the guidance of the Chinese government.
“Any country that wants to grow their local economy using their own tech and thought leadership will have that possibility when they collaborate together with others,” said Redmond. “For OpenPOWER, we look at China, we see a country that wants to feed that local innovation; that wants to build technology themselves and cultivate that center of knowledge.”
First GPU Developer Platform
As one of the OpenPOWER foundation’s newest members, Cirrascale is marketing the first OpenPOWER-based GPU developer platform: the Cirrascale RM4950. The company collaborated with NVIDIA and Tyan on the GPU developer platform, which is available now and shipping broadly in second quarter 2015. The platform supports development of Big Data analytics, machine learning, and scientific computing GPU applications.
New Advisory Group
The foundation announced the formation of the OpenPOWER Advisory Group, a formal mechanism for engaging with other open development organizations.
“The promise of open development is that no one company should control the agenda; no one organization should control the agenda, either,” said Redmond.
Inaugural members represent the Linux Foundation, Facebook’s Open Compute Project, and the China POWER Technology Alliance.
“Collaborating across our open development communities will accelerate and broaden the raw potential of a fully open data center,” said Corey Bell, CEO of the Open Compute Foundation, in a press release. “We have a running start together and look forward to technical collaboration and events to engage our broader community.”
The OpenPOWER Foundation is growing, and now includes more than 110 members across 20 countries. OpenPOWER has grown from a potentially disruptive development alliance to one that is truly disruptive, and each of its product announcements changes the competitive landscape. “We are no longer dependent on x86,” said Redmond. “It’s not changing the game; there’s a new game.”

6:46p
DataBank’s Minnesota Data Center Design Tier III Certified

Uptime Institute has awarded DataBank Tier III design documents certification for a 20-megawatt Minnesota data center currently under construction.
The Dallas-based company has been working closely with Uptime from the early stages of the project to ensure that the data center in Eagan upheld the standards necessary for certification.
“With the dual-role this facility performs as both a top-tier data center and carrier-hotel, a tremendous amount of time and effort went into the design,” said Dan Allen, DataBank’s VP of technical operations, in a press release.
DataBank entered the Minnesota data center market in March 2013 by acquiring VeriSpace along with its 10,000-square-foot facility in Edina, Minnesota, and announced further expansion in summer 2013. Early success gave the company incentive to build the upcoming facility, which represents a major expansion of its footprint in the state.
Other providers in the area include Cologix, ViaWest, OneNeck IT, Stream Data Centers, and CenturyLink. Cologix recently opened its third data center in downtown Minneapolis and is also said to be pursuing Uptime certification.
Some controversy surrounds the lack of distinction between design and construction certification, following a recent accusation that ViaWest had misrepresented the type of certification it had for its Las Vegas facility.
DataBank CEO Tim Moore addressed the issue in a press release.
“Though many providers attribute a ‘tier certification’ level to their facility construction and infrastructure, there are relatively few who actually go through the extensive process to vet that those claims are accurate,” he said. “We felt this certification, and the ‘Build Certification’ which will follow it, would be particularly important to differentiate our environment and service-level to the discerning client base that exists in this market.”

7:41p
Report: DISA Cancels $1.6B VMware Cloud Contract Following Protests

The U.S. Defense Information Systems Agency has cancelled a $1.6 billion, five-year cloud contract with VMware that spanned various military branches, following protests from competing cloud providers, according to SiliconANGLE.
The agency did not give a reason for canceling the deal, but the protests coupled with the lack of a bidding process most likely contributed to the decision.
The cancellation came around the same time the Government Accountability Office dismissed the March 12 protests filed against the award by Amazon Web Services, Citrix, Nutanix, and Minburn Technology Group.
DISA filed a document following the initial controversy to justify the award, with the argument focusing primarily on cost savings.
This is not the first big government cloud contract undone by protests, although it is one of the most sizable. Of note is a $600 million AWS contract with the CIA that was originally cancelled following IBM protests to the GAO. AWS ultimately won it back following a court ruling.
The Department of Veterans Affairs also terminated a $36 million contract with HP Enterprise roughly a year after it was awarded.
In other VMware news, the company’s Government Cloud achieved FedRAMP certification last month, with hosting provided by Carpathia’s data centers. Carpathia also recently announced the availability of a platform for AWS.

8:06p
Carpathia Helps Enterprises Manage Hybrid Clouds with New Cloud Operations Platform for AWS
This article originally appeared at The WHIR
With hybrid cloud becoming more popular with enterprises, managing workloads across public clouds and on-premises infrastructure can be a challenge. To address this, hosting and cloud provider Carpathia launched its Cloud Operations Platform for Amazon Web Services on Wednesday. The platform enables customers to manage and monitor workloads on AWS and in Carpathia data centers through a single pane of glass.
Carpathia’s VP of business operations Phil Hedlund was responsible for product management of the platform, and said that the company has increasingly seen its customers wanting to use infrastructure in a dedicated facility as well as within public cloud environments like AWS.
“We saw demand from our existing customers to be able to hybrid connect to the public cloud including AWS, and demand from our customers to be able to monitor and manage those workloads,” Hedlund said in an interview with the WHIR.
Prior to the launch of the platform, if a customer of Carpathia wanted to deploy workloads in AWS they would have to use Amazon Direct Connect to connect their Amazon infrastructure to their Carpathia infrastructure.
“Prior to the availability of this service, a workload that existed in Amazon wasn’t being monitored and managed in the same way as a workload sitting on a server in a Carpathia data center,” Hedlund said.
In addition to the availability of its Cloud Operations Platform for AWS, Carpathia announced that it is a Consulting Partner in the AWS Partner Network and a member of its Channel Reseller Program. With the latter designation, customers can purchase their AWS infrastructure through Carpathia, though it is not a requirement.
Carpathia is one of an increasing number of service providers that have found an opportunity in working with, rather than against, public cloud providers like AWS. Hedlund says Carpathia’s value proposition is managing the hybrid cloud.
“We view AWS and public cloud as something that customers are going to use for portions of their environment. There are workloads that make sense to put in public cloud infrastructure and there are those that don’t make sense,” Hedlund said. “Carpathia’s view is that we’re not going to dictate to the customer where they should put their workloads, we think that is a customer decision. What we do see is that customers when they are deploying hybrid implementations…customers want to see and get a holistic view of their infrastructure.”
Industry surveys suggest almost 90 percent of organizations use public cloud, with AWS leading public cloud adoption.
The new Cloud Operations Platform allows Carpathia to automatically discover workloads a customer has created in AWS.
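Carpathia has not published implementation details, but the general idea behind this kind of "single pane of glass" — polling a cloud API for instances and merging the results into an existing colocation inventory — can be sketched roughly as follows. The field names and record shapes here are hypothetical (the cloud records loosely mimic a simplified EC2 describe-instances response), not Carpathia's actual API:

```python
# Illustrative sketch: merge workloads discovered in a public cloud with an
# existing data center inventory so both appear in one unified view.
# All names and structures here are assumptions for the example.

def merge_inventory(colo_workloads, cloud_instances):
    """Return a unified inventory keyed by workload ID.

    colo_workloads: dicts with 'id', 'location', 'status' (existing records).
    cloud_instances: dicts shaped like simplified EC2 instance records,
    with 'InstanceId' and 'State'; they are normalized into the same shape.
    """
    inventory = {w["id"]: dict(w, source="colo") for w in colo_workloads}
    for inst in cloud_instances:
        inventory[inst["InstanceId"]] = {
            "id": inst["InstanceId"],
            "location": "aws",
            "status": inst["State"]["Name"],
            "source": "cloud",  # newly discovered, not hosted in the colo
        }
    return inventory

if __name__ == "__main__":
    colo = [{"id": "web-01", "location": "carpathia-dc", "status": "running"}]
    aws = [{"InstanceId": "i-0abc123", "State": {"Name": "running"}}]
    merged = merge_inventory(colo, aws)
    print(sorted(merged))  # ['i-0abc123', 'web-01']
```

In a real deployment the cloud side of this merge would come from a periodic API poll (e.g., via the provider's SDK) rather than a static list; the point is only that both sources are normalized into one inventory.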
Designer flash sale website Gilt is one of Carpathia’s existing customers that wanted to have seamless monitoring and management of their AWS and Carpathia infrastructures.
“There are parts of the Gilt infrastructure – you can think of it as the more ‘retail-y’ parts – that do need to exist inside AWS,” Hedlund said.
“Gilt is a fairly dynamic IT shop because they’re constantly developing and their IT operations is fairly decentralized in terms of allowing their developers to independently develop features for their application,” Hedlund said.
“As those features and functionalities become more critical or they need to leverage things that they’ve architecturally decided are better served on bare metal they need to be able to connect what’s going on with AWS with what’s in the bare metal infrastructure.”
Gilt previously purchased AWS separately from the services provided by Carpathia, but now is procuring AWS via Carpathia.
This story originally appeared at http://www.thewhir.com/web-hosting-news/carpathia-helps-enterprises-manage-hybrid-clouds-new-cloud-operations-platform-aws

8:44p
Exclusive: Sabey Breaks Ground on Quincy Data Center

Sabey Data Centers has kicked off construction of a second massive data center on its Quincy, Washington, campus. Construction crews broke ground last week, the company’s president John Sabey said.
Quincy is one of the state’s primary data center markets. In addition to data center providers like Sabey and Vantage, tech giants like Dell, Microsoft, and Yahoo have data centers there.
Based in Seattle, Sabey is one of the biggest data center landlords in Washington. In addition to Quincy, it has large data center facilities in Seattle and Wenatchee. The company also has data centers in New York City and Ashburn, Virginia.
Sabey’s existing building in Quincy is nearly full, but there is lots of demand in the market, so the company is racing to bring more capacity online to take advantage of the opportunity. It has already pre-leased close to 1 megawatt of capacity in the future data center, John Sabey said.
The new facility will be about 135,000 square feet in size, with ability to support 10.5 megawatts of IT load total. Sabey expects the first 2.7-megawatt pod to come online in October.
Once the second building is completed, there will be more than 400,000 square feet of data center space on campus total. The site also has enough space and power capacity for a third building.
Site plan of Sabey’s Intergate.Quincy campus. The existing data center is Building C, and the one that is currently under construction is Building A. (Image courtesy of Sabey Data Centers)
Even though internet giants Microsoft and Amazon Web Services are headquartered not far from Quincy, they do not contribute to demand for data center space from providers like Sabey. With a few exceptions, companies of that scale have switched from leasing space to building and operating their own data centers for their primary capacity.
Smaller, more traditional enterprises that do not need web-scale data center infrastructure are in the market. “Basically, every [industry] vertical is looking in Quincy,” Sabey said. Companies in healthcare, financial services, entertainment, internet, and education are shopping for data center space in the market.
Quincy, and Washington State in general, are attractive for data center operators for a number of reasons. There is a big supply of low-cost hydropower; there is little risk of natural disaster; there are sales tax breaks on data center equipment purchases, including IT and supporting infrastructure.
The deals that are being shopped around vary in size. Some are as small as 15 kilowatts, while others are in the multi-megawatt category, Sabey said.
As elsewhere around the country, there is more demand today for multiple levels of infrastructure redundancy within the same data center. Today’s customers are smarter about matching each application’s uptime requirements with the infrastructure it is hosted on in order to optimize their data center costs.
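As an illustration of that matching logic (not anything Sabey has published), a toy rule that picks the least expensive redundancy tier meeting an application's availability target might look like this; the thresholds and tier labels are invented for the example:

```python
# Hypothetical sketch: map an application's availability target to a
# redundancy tier. Thresholds are illustrative, not an industry standard.

REDUNDANCY_TIERS = [  # ordered most to least redundant (and most to least costly)
    (0.9999, "2N"),   # mission-critical: fully duplicated power/cooling paths
    (0.999,  "N+1"),  # one spare component per capacity block
    (0.0,    "N"),    # required capacity only, no redundancy
]

def pick_tier(availability_target):
    """Return the cheapest tier whose threshold the target still requires."""
    for threshold, tier in REDUNDANCY_TIERS:
        if availability_target >= threshold:
            return tier
    return "N"

print(pick_tier(0.9995))  # N+1
```

A real cost optimization would also weigh price per kilowatt at each tier, but the core idea — hosting each workload at the lowest redundancy level that satisfies its uptime requirement — is what the varying-resiliency design described here enables.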
To respond to this trend, Sabey included the ability to support varying levels of redundancy in the design of the new facility. “We have the ability, through our design, to actually provide customers with varying resiliency within the same suites,” the company’s president said.