Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
Wednesday, June 3rd, 2015
| Time | Event |
| 12:00p |
First Two Microsoft Data Centers Coming to Canada in 2016
Microsoft announced plans to establish two data centers in Canada that will provide its cloud services to customers in the country.
The data centers in Toronto and Quebec City will be the first Microsoft cloud locations in Canada. The cloud services used by about 80,000 Canadian businesses today are served out of Microsoft data centers outside of the country.
The announcement may be welcome news for those in Canada who are concerned with data sovereignty, or the idea that access to data that’s stored within a country’s borders should be governed by that country’s laws. Data sovereignty and data privacy concerns have reportedly heightened around the world since former U.S. National Security Agency contractor Edward Snowden leaked information about the agency’s widespread electronic surveillance capabilities.
Cloud service providers like Microsoft and Amazon have used these concerns as a way to make their services more attractive by building more data centers outside of the U.S.
Amazon mentioned Germans’ concern with data sovereignty when it opened a Frankfurt data center to support its cloud services last October. And Microsoft did the same in its announcement of the upcoming data centers in Canada: “These new locally deployed services will address data residency considerations for Microsoft customers…”
Microsoft’s recent legal tussle with U.S. federal law enforcement officials, however, illustrates that it’s not enough to simply build a data center in a foreign country to ensure data stored there cannot be accessed by the U.S. government.
A U.S. federal court ruled last year that Microsoft must hand over personal data of one of its customers that the feds had requested, even though the data was stored in the company’s data center in Dublin, Ireland. The company has appealed that decision, and a ruling has not yet been issued.
A more clear-cut data sovereignty angle for cloud data centers in Canada may be providing services to Canadian government agencies. The government is in the process of crafting rules for using and providing cloud services, and one potential requirement may be that the data is stored within the country’s borders.
The two Microsoft data centers in Canada are due to come online next year. They will support Azure, Office 365, and Dynamics CRM Online.
Google, one of Microsoft’s biggest rivals in the cloud services market, made two new data center construction announcements this week: one in Singapore, and the other in the Atlanta metro. | | 3:30p |
Why an Application Strategy Can Make or Break a Cloud Migration
Bob Dvorak is the President of KillerIT.
Sixty-nine percent of enterprises already run applications in the cloud today, according to a recent report from IDG. However, many of those organizations have not implemented the critical first step of migration: devising a migration strategy for their application portfolio.
This oversight could be costly, in part because the cloud continues to gain momentum. A recent Data Center Knowledge article reinforced the rapid pace of cloud adoption, expounding on the reasons why hybrid cloud in particular has become so popular. According to the Cisco report cited in the article, by 2018, cloud data centers will process more than 78 percent of workloads, with traditional data centers processing the remaining 22 percent.
That means it’s up to CIOs to decide which vendor and which deployment model are best suited to their company, as well as which assets they should migrate. By weighing existing applications against business metrics, IT must make smart decisions about which ones provide value and which are financial drains on the organization.
“Many organizations may try to take a more laissez-faire approach to governing these efforts, but they do this at a substantial risk,” wrote Gartner analysts in the 2014 report, “How to Budget, Plan and Govern Application Rationalization.” That “risk” easily translates into real dollars and cents. For example, on average, organizations fail to use 28 percent of all deployed software, and every PC carries $224 in wasted licenses, according to a study by software lifecycle automation services company 1E. The study conservatively estimated the total cost of deployed yet unused software within U.S. companies with 500 or more desktops at $6.6 billion.
While software represents 34 percent of enterprise technology spending, CIOs spend 55 percent of the applications budget on maintenance and support, according to Forrester Research’s most recent “State Of Enterprise Software And Emerging Trends” report. Visibility into which applications to migrate to the cloud and which to eliminate completely can yield enormous cost savings – money that could be reallocated to cloud projects.
Meticulous discovery will help inform accurate application analysis. A detailed cloud strategy should begin by categorizing enterprise applications with metrics such as total cost of ownership, number of unique functionalities, number of applications that depend on it to function, and how many other interfaces it interacts with. Because the organization also needs to determine deployment models, IT leaders should rank each application with risk and security metrics, ensuring the most sensitive information stays on private infrastructure.
From there, the CIO can distinguish both the deployment model and the best cloud service vendor based on the overall application categorization and score. This newfound understanding of the organization’s IT portfolio may also allow the enterprise to trash a number of applications that have completely lost their value.
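As a rough illustration of the categorization-and-scoring exercise described above, the sketch below ranks a small hypothetical portfolio in Python. The metric names, weights, and retirement threshold are assumptions chosen for illustration only; they are not drawn from the article, Gartner, or any specific rationalization tool.

# Hypothetical sketch: score an application portfolio for cloud migration.
# Weights and the retirement threshold are illustrative, not prescriptive.

APPS = [
    # (name, TCO $/yr, unique functions, dependent apps, interfaces, sensitivity 0-2)
    ("payroll",       250_000, 12, 4, 6, 2),
    ("team-wiki",      20_000,  3, 0, 1, 0),
    ("legacy-crm",    400_000,  5, 1, 2, 1),
]

def score(tco, functions, dependents, interfaces):
    """Higher score = stronger case to keep the application and migrate it."""
    business_value = functions * 10 + dependents * 5 + interfaces * 2
    return business_value / (tco / 100_000)   # value delivered per $100k of annual cost

for name, tco, functions, dependents, interfaces, sensitivity in APPS:
    s = score(tco, functions, dependents, interfaces)
    target = "private cloud" if sensitivity == 2 else "public cloud"
    action = "candidate to retire" if s < 20.0 else f"migrate to {target}"
    print(f"{name:12s} score={s:7.2f}  {action}")

In a real rationalization effort the weights and thresholds would be agreed with the business during discovery, and the risk and security metrics would drive the private-versus-public placement, as described above.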
Once IT has completed thorough asset identification and categorization processes, CIOs should communicate their findings to their teams and ensure they have a holistic understanding of what is involved.
“When the CIO issues the simple directive, ‘Move some applications to the cloud,’ architects face bewildering choices about how to do this, and a decision must consider an organization’s requirements, evaluation criteria, and architecture principles,” said Richard Watson, research director at Gartner. “However, no alternative offers a silver bullet: All require architects to understand application migration from multiple perspectives and criteria, such as IT staff skills, the value of existing investments, and application architecture.”
All of these steps are necessary because migrating to the cloud is an incredibly complex undertaking – one that can trip up half of all organizations.
According to Gartner’s 2014 study, “Best Practices Mitigate Data Migration Risks and Challenges,” through 2019, more than 50 percent of data migration projects will exceed budget and/or result in some form of business disruption due to flawed execution.
However, with a thoughtful application migration strategy, CIOs can shield their companies from failure and lead them through a successful migration.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
| | 4:52p |
Piston Acquisition Beefs Up Cisco OpenStack Cloud Team
Cisco announced it will acquire Piston Cloud Computing, a startup that began as a provider of private OpenStack cloud infrastructure on commodity hardware but expanded earlier this year with its CloudOS, which offers a simple way to deploy several open source platforms, such as Hadoop and Spark.
Cisco is acquiring Piston for its talent and expertise around distributed systems and quick, easy deployment and automation. This is Cisco’s second OpenStack acquisition, following the purchase of Metacloud late last year. Piston and Metacloud complement each other and fit into the wider Cisco “Intercloud” vision.
“Piston has focused on the deployment problem, while Metacloud has focused on the day-two problem,” said Scott Sanchez, director of Cisco OpenStack strategy. Sanchez came to Cisco via the Metacloud acquisition.
“They built a great team,” he said about Piston. “One of the things they always focused on was the distributed systems aspect. They rebranded and took a step back from OpenStack to amplify the CloudOS message. It’s about how do we take these complicated platforms and make them easy to deploy in an automated way?”
Cisco’s strategy has been centered on what it calls Intercloud. Intercloud is not a product but an overarching concept that basically means connected clouds, or a cloud of clouds. Sanchez said Cisco wants to play a similar role with cloud as it did with the larger internet, where it is a key player across hardware, software, and services centered on the network.
Piston is more than a talent acquisition for the Cisco OpenStack team, as it brings a lot of automated deployment technology to the table as well. Its aim is to give companies that are not as large as Facebook or Google the kind of data center infrastructure capabilities those web giants have.
Piston accelerates Cisco’s ability to help customers deploy their clouds. If a private OpenStack cloud currently takes Cisco 14 days to deploy for a customer, Piston will help shave that time down. The distributed systems expertise also means it’s easier to deploy more complex, distributed clouds and platforms.
“Today, we have multiple offerings working towards a more converged stack behind the scenes,” said Sanchez.
OpenStack is just one ingredient in a larger cloud strategy for Cisco, albeit an important one.
“There’s a shift that has started to occur in the market,” said Sanchez. “It’s less about the technology platform and more about what you do with it. Cisco, Metacloud, and Piston all recognized this. OpenStack is a key part of our execution, but it’s just a piece of a recipe. We’re past the hype stage of OpenStack, and it’s execution time. We’re seeing tremendous traction.”
Last year, Cisco pledged to spend $1 billion over two years in pursuit of the cloud market. Cisco is a massive business with networking equipment at its core, and Intercloud retains that networking angle, with the company focusing on connecting clouds. Cisco is building out several pieces so it can be the hub across all points of the chain, from hardware to service provider. | | 5:34p |
CoreSite Kicks Off Massive Silicon Valley Data Center Expansion
Responding to growing demand for compute capacity that is expected to continue rising through much of the rest of the decade, CoreSite Realty Corp. announced this week that it plans to add 230,000 square feet of space to its data center facilities in Santa Clara, California.
The new Silicon Valley data center is expected to be completed in 2016, Ryan Oro, vice president, general management at CoreSite, said. Demand for data center capacity in the San Francisco Bay Area is being fueled by a massive expansion of the number of web-scale companies being founded and the number of global companies that now need a data center presence in the area.
“Tech companies want to be close to their data center,” says Oro. “They could find data centers with cheaper access to power 100 miles away, but it’s hard to find IT people that want to work there.”
Just this April, the company announced it would build a 140,000 square foot data center on its Santa Clara campus for a single tenant whose name was not disclosed.
Oro says CoreSite is scheduled to break ground on the new multi-tenant Silicon Valley data center, called SV7, in the third quarter of this year. The company expects to substantially complete Phase I of the building by the second quarter of 2016, bringing the total size of the CoreSite Santa Clara data center campus to approximately 600,000 square feet.
CoreSite currently provides customers with access to 40 network, cloud, and IT service providers, including direct connection to Amazon Web Services. Oro says that one of the primary reasons that customers opt to deploy applications in the company’s data center facilities is access to the largest internet peering exchange on the west coast.
Many web applications are especially sensitive to network latency. Rather than deploying applications in rural locations, IT organizations are opting to deploy web applications in data center facilities that are at most one network hop away from a peering exchange.
In addition to the data center facilities themselves, CoreSite provides access to the CoreSite Open Cloud Exchange, a portal through which IT organizations can provision resources from CoreSite and its partners on demand.
Oro says that CoreSite will use the new facility to expand both its retail and wholesale data center services, both of which he says continue to grow rapidly.
For now, Oro says, most of the IT personnel managing those data centers work out of their own companies’ offices, though he notes that CoreSite does make office space available to customers that need it.
Assuming, of course, there is no technology bubble to burst, demand for data center services in the Bay Area should outstrip available supply for some time to come. | | 6:15p |
Report: $30B Worth of Idle Servers Sit in Data Centers
Working in partnership with Stanford University and TSO Logic, a provider of data center analytics tools, IT consulting firm Anthesis Group released a report this week suggesting there are 10 million physical servers deployed inside data centers around the world that are not actually being used. The report refers to these idle servers as “comatose.”
Citing a general lack of management oversight in the data center, Jonathan Koomey, a consulting professor at Stanford who co-authored the report, says roughly 30 percent of the servers deployed worldwide have not delivered information or computing services in the last six months.
To address what the report estimates to be about $30 billion in IT infrastructure assets sitting idle, Koomey says IT organizations need to aggressively eliminate all the management silos inside their organization. Lack of communication between teams results in deployment of IT infrastructure that ultimately gets wasted.
“There really needs to be one boss, one team, and one budget,” says Koomey. “We need to change the way the data center is managed.”
In the absence of that centralized approach to IT management, he contends, it will only become more difficult for IT organizations to justify acquiring servers when the business can rely on Infrastructure-as-a-Service providers that offer access to servers as needed.
TSO Logic CEO Aaron Rallo says the primary reason so much idle server capacity exists is that IT organizations generally lack access to analytics tools. As a result, they don’t have much visibility into what resources are actually being consumed by applications.
Worse yet, based on the projections provided by the business, Rallo says, most IT organizations wind up overprovisioning server capacity. Because they usually pay for servers using capital budgets, they generally acquire servers years in advance of anticipated need. Those business projections, however, rarely live up to expectations in terms of the IT capacity actually required.
Rallo says that first and foremost IT organizations need to rationalize their application portfolios in a way that will not only reduce the number of servers they might have sitting idle, but also generate additional savings in terms of the number of application licenses they have running on servers that are not actually being used.
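As a rough sketch of the kind of analysis Rallo describes, the Python fragment below flags servers that have not served a request within the report’s six-month window. The CSV layout, the two-percent CPU cutoff, and the per-server dollar figure (the report’s $30 billion estimate divided evenly across 10 million servers, roughly $3,000 each) are assumptions for illustration, not details of TSO Logic’s product or the report’s methodology.

# Hypothetical sketch: flag "comatose" servers from a utilization export.
# Assumes a CSV with columns: hostname, last_request_date (ISO format), avg_cpu_pct
import csv
from datetime import datetime, timedelta

IDLE_WINDOW = timedelta(days=183)                   # roughly the report's six-month window
VALUE_PER_SERVER = 30_000_000_000 / 10_000_000      # ~$3,000, from the report's own totals

def find_comatose(path, today=None):
    today = today or datetime.now()
    comatose = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            idle_for = today - datetime.fromisoformat(row["last_request_date"])
            if idle_for > IDLE_WINDOW and float(row["avg_cpu_pct"]) < 2.0:
                comatose.append(row["hostname"])
    return comatose

if __name__ == "__main__":
    idle = find_comatose("server_activity.csv")
    print(f"{len(idle)} comatose servers, roughly "
          f"${len(idle) * VALUE_PER_SERVER:,.0f} in stranded hardware")

The output of an exercise like this feeds directly into the application-rationalization work Rallo recommends, since each flagged server can be tied back to the applications and licenses running on it.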
All things considered, Koomey says, organizations need to think of their data centers as strategic assets that need to be optimized to the fullest extent possible. To achieve that, IT organizations need to map every use of a server to a specific business process it enables.
The degree to which any organization can actually accomplish that goal will naturally vary. But in the absence of any effort being made at all, Koomey says, it’s only a matter of time before IT organizations wind up losing control of their data centers altogether. | | 7:11p |
HP Discover 2015: HP Extends Cloud Framework Reach 
This article originally appeared at Talkin’ Cloud
HP and its partners have a lot riding on CloudSystem, a platform for building out hybrid cloud computing environments that serves as a superset of all of HP’s integration products and services.
At the HP Discover 2015 conference this week, HP extended that platform by providing tighter integration with the Helion platform-as-a-service (PaaS) based on open source Cloud Foundry software, while at the same time incorporating support for a private cloud platform that is compatible with the application programming interfaces (APIs) used by Amazon Web Services (AWS), which HP gained when it acquired Eucalyptus in 2014.
HP CloudSystem 9.0 also now supports Microsoft Hyper-V, Red Hat KVM, and VMware vSphere virtual machines, as well as bare-metal servers, and sports tighter integration with HP OneView systems management software and the latest version of HP Cloud Service Automation software. HP has also added support for OpenStack Swift object storage and made it possible to deliver CloudSystem as a virtual appliance that can be set up in a few hours.
Finally, HP announced a beta release of HP Helion Managed Cloud Services for HP Helion OpenStack Managed Private Cloud and HP Helion Eucalyptus Managed Private Cloud, and said it will also support the HP Helion PaaS environment via its managed services offerings.
At its core, CloudSystem 9.0 is a framework for building multiple types of private clouds that can easily be integrated with multiple public clouds, including AWS, Microsoft Azure, HP Helion Public Cloud, or any public cloud based on OpenStack or VMware software, said Shashi Mysore, director of product management for HP Helion. Mysore said the framework is specifically designed to give IT organizations control over where workloads are deployed across private and public clouds. IT organizations can opt to deploy either the entire framework or any subset of HP products and services they deem necessary.
While HP itself may not be a dominant cloud services provider, it views the emergence of heterogeneous cloud computing environments as a development that plays to its integration strengths. Given its massive base of installed servers, HP should be in a position to lead the development of private clouds running on premises or in hosted environments. From there, HP can extend its reach into public clouds, which it envisions IT organizations primarily using as extensions of their private clouds. The basic idea is to convince customers to standardize on a core cloud integration framework that can be extended in any direction as needed.
The degree to which HP can execute that strategy at a time when most cloud computing deployments are semi-autonomous remains to be seen. But one thing is certain: HP is putting in place the pieces it needs to turn that vision into reality in one form or another.
This first ran at http://talkincloud.com/cloud-computing/06032015/hp-extends-cloud-framework-reach | | 8:09p |
Pure Storage Intros Next-Gen All-Flash Enterprise Storage Arrays
Pure Storage, a Silicon Valley enterprise storage company, has made some big announcements this week, launching an all-flash storage array line called FlashArray//m and introducing Evergreen Storage, a new business model and approach to storage procurement and upgrades.
The FlashArray//m, the company’s fourth-generation hardware, is an end-to-end integrated solution with hardware and software combined and optimized for peak performance, according to the vendor. In three rack units, the new array integrates Intel Haswell controllers running the Purity Operating Environment 4.5, new NV-RAM cache modules, and new dual-drive flash modules. The base chassis draws approximately 1 kW of power and uses six cables.
Matching product models with performance and capacity needs, the new array portfolio is offered with three controller options, according to the company. The //m20, //m50, and //m70 feature 120TB, 250TB, and 400TB of usable storage, respectively, with increasing amounts of IOPS.
In addition to all of the all-flash array software features that Pure is known for, the company announced Pure1, a single platform for cloud-based storage management and support. Through a single web interface Pure1 brings the simplicity and cost savings of the SaaS model to enterprise storage management, the company says.
The new FlashArray//m models are expected to enter general availability in the third quarter of this year.
In addition to the modular and upgradable aspects of the new all-flash arrays, Pure introduced Evergreen Storage as a new storage ownership model for keeping technology current and avoiding forklift upgrades. The new model aims to keep customer deployments fresh over time, adding upgrade flex bundles that allow customers expanding capacity to upgrade their controller hardware and, if desired, receive trade-in value for their existing controller investment.