Data Center Knowledge | News and analysis for the data center industry
Tuesday, April 5th, 2016
| 9:00a |
Top Five Data Center Migration Challenges
Chris Alberding is Vice President of Product Management for FairPoint Communications.
Migrating to a new data center facility presents major challenges that must be addressed to avoid costly mistakes. In particular, moving all or part of your IT operation to a data center colocation provider requires not only a thorough evaluation of the facility, but also of the provider’s stability. Here are five aspects of data center migration you must thoroughly analyze:
How the Provider Got into the Data Center Business
You want a data center colocation provider that thoroughly understands the industry, not a company that saw the growth and profitability prospects and decided to jump into the business. Research how the provider got to where it is today.
Does it have a long track record operating colocation data centers? How did it enter the industry and grow its business? Did it purchase data center operations from multiple providers? A significant amount of merger and acquisition activity in a provider’s background may create some instability.
In addition, a provider that leases its facilities instead of owning them may not provide the stability or peace of mind required for your business-critical IT assets. You may also be subjected to price increases as the provider attempts to recoup its acquisition costs.
How the Provider Services its Customers
A good provider will have local, on-site expertise and assistance available whenever customers need it, as well as a track record of quick response times. A provider with many acquisitions in its history may have trouble providing consistent service. Restructured workforces and employee attrition could lead to service issues. You must carefully evaluate whether a prospective provider can meet and exceed your expectations.
How the Provider Determines its Data Center Footprint
Where your provider’s data centers are located could affect your future operations. Even if the location and provider you select meets your requirements today, there’s no guarantee they will continue to do so in the future.
For example, if a provider acquires another data center provider’s locations, it may end up with facilities in close proximity to one another. If so, the acquiring provider may decide to eliminate some locations. If your location is on the chopping block, you will incur the time, effort, and cost of relocating your IT operation. The best option is to partner with a provider that owns and strategically locates its facilities.
How the Provider Bundles its Services
A major benefit for data center customers is access to facility resources and network connectivity. Does your provider offer these capabilities on its own, or does it need to partner with a third party? Partnerships may not last over the long term, and a discontinued partnership will affect your services.
For example, once the network and data center services are decoupled, a significant portion of the value proposition is lost. Customers will need to deal with two service providers that may not be able to offer seamless and cost-efficient service. One-stop shopping experiences backed by strong SLAs and competent on-site staff create an attractive value proposition.
How Reliable are Provider Services?
Since unexpected outages could cost you in terms of lost revenue, productivity, and more, you should select a data center colocation provider with the fewest service outages in its history. To gauge reliability, analyze a provider’s security systems, fire suppression systems, environmental controls, and other measures. For example, working with Service Organization Control (SOC)-audited data centers can help assure businesses that the facility offers state-of-the-art systems and procedures.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
| 9:01a |
New Equinix Data Hub Broadens Enterprise Colo Options
This morning, colocation leader Equinix took another step in an ongoing effort to transform its service into what one 451 Research analyst called a “nexus of both cloud and enterprise IT.” Acutely aware that ever-denser storage devices are not having the impact on data center footprints that many expected, Equinix introduced Data Hub, a complement to its Performance Hub technology introduced in 2014.
Its objective: Ease enterprises’ transition to SaaS services while making them comfortable with colocation. According to one Equinix executive, Data Hub will bring large data stores out of the trap of legacy data warehouses and closer to the edges of those connections between owned resources and the cloud.
“It gives me the ability to leverage the scale and elasticity of public cloud while still retaining control and management of my data,” said Lance Weaver, Equinix vice president for platform strategy, in an interview with Data Center Knowledge.
Edges of the World
Equinix already has adjacency to hundreds of major cloud services, said Weaver, by virtue of their membership in the provider’s Cloud Exchange program. These services and cloud providers typically expect to connect to elastic storage, such as Amazon’s, to gain access to customer data.
During a keynote address to the Open Networking Conference in Santa Clara a few weeks ago, Equinix CTO Ihab Tarazi explained how Cloud Exchange works, and how participating cloud providers contribute to it.
“The way these cloud providers design their networks, they have massive data centers where they have cheap power, and they can optimize it efficiently,” said Tarazi. “But to get to their customers and connect, they come to our data centers and put an edge part of their cloud deployments. So within these 145 (Equinix) data centers, you’ll find just about every logo of cloud, every logo of network, and hundreds of enterprise customers’ financial services are coming in.
“These are the core nodes of the world,” the CTO continued, “where the whole world’s connections and communications take place.”
The new service acts as a kind of integration play for Equinix, moving these connections into its customers’ zones of sovereignty. This way, Weaver told us, customers would gain the same elasticity they expect from a public cloud storage service, without having to relocate that data into a public cloud.
Freedom from Choice
Data Hub could open up a world of possibilities for institutions in finance, healthcare, and the public sector, where compliance constraints rule out public cloud data storage.
“The architecture is changing,” Weaver remarked. “If my applications are being leveraged off SaaS or public cloud resources, then I’m in a hub-and-spoke model, trying to reach back for the data sitting within these data warehouses. In a better system, I can get the performance that I desire as an enterprise, and I can also choose the cloud-agnostic nature of it.”
Relocating data within Data Hub, he explained, would give a customer the means to reach over 500 services across 145 data centers in 21 metros worldwide, while freeing the customer from being tied to a single provider.
“If you’re an enterprise customer, and you want to deploy your platform globally across the world, you really have two choices,” explained Equinix CTO Tarazi. “If you go in a virtualized manner, you go to AWS, Microsoft, or Google. If you decide that you want to own your infrastructure and virtual servers, to be able to deploy globally and have consistent performance and get to all of your customers and to thousands of networks, Equinix would be your number one choice.”
Turnkey
“Customers want simplicity,” as Jabez Tan, senior analyst with Structure Research, reminded us. “If I can go with one provider for all my needs — a single SLA, a single contract agreement — the fewer vendors I have to juggle, the better.”
Tan believes Equinix may be doing what competition demands of it — taking steps to maintain, and strengthen relationships with, existing customers. Roughly 60 to 80 percent of revenue growth for a colo provider, his research tells him, comes from the existing base.
Equinix will make Data Hub available as a supplement to Performance Hub, which provides the connectivity between Cloud Exchange resources and customer assets. With Performance Hub, said Equinix’ Weaver, customers can spin up connections with SaaS providers on-demand, and back down when no longer necessary. Up until now, high-volume, latency-sensitive customer data stores have not played a part in these connections.
Structure Research’s Tan sees this as a potential benefit for Equinix: the ability to plug in a new service without disturbing the customer base. “I think it’s going to be less of an education-type message; with their previous products, they had to educate the market on what they mean by ‘interconnection’ and ‘Performance Hub’ and ‘edge.’ I think with the storage piece, it’s kind of a natural pivot for them.”
Tan also noted that Equinix may be uniquely positioned to provide a service at this level on a global scale. While Digital Realty (now in tandem with Telx) may be able to achieve similar scale, Tan feels Equinix’ bundling of colocation and interconnection at scale justifies its position as a premium service provider.
That said, because Equinix is indeed perceived as premium, Tan also believes customers may refrain from migrating their entire data warehouses — lock, stock, and barrel — into Equinix’ space. “While customers may not put all of their data with Equinix,” he told Data Center Knowledge, “they’ll probably put their performance-sensitive data there, and then they can have a secondary site at a much more cost-effective colocation provider (than Equinix) that catches the bulk of the non-core, non-critical data.
“I think Equinix is going after that performance-sensitive, critical data, with obviously a security angle,” said Tan. | 12:00p |
What Cisco’s New Hyperconverged Infrastructure Is and Isn’t Good At
Today’s business is tightly coupled with data center capabilities. We’re seeing more users, more virtual workloads, and many more use cases for greater levels of compute density. There’s no slowdown in data growth in sight, and data center requirements will only continue to grow. Organizations have been working hard to improve data center economics with better underlying data center ecosystem technologies.
In comes hyperconverged infrastructure (HCI) — a next-generation technology that tightly couples the virtual controller layer with its own operating mesh. Here’s something to remember: there are a number of similarities between HCI and converged infrastructure. The biggest difference, however, is how these environments are managed. In HCI, the management layer – storage, for example – is controlled at the virtual layer. Specifically, HCI incorporates a virtual appliance that runs within the cluster. This virtual controller runs on each node in the cluster to ensure better failover capabilities, resiliency, and uptime. In these types of models, you begin to see how software-defined storage (SDS) technologies shape converged infrastructure systems.
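To make that distinction concrete, here is a minimal sketch (hypothetical names, not any vendor’s code) of the pattern just described: a storage controller runs as a virtual appliance on every node, so the management layer survives the loss of any single node.

```python
# Minimal sketch of the HCI controller-per-node pattern (hypothetical
# names, not vendor code): the storage control plane is distributed,
# so it survives any single node failure.

class NodeController:
    """Virtual storage controller appliance running on one cluster node."""
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.healthy = True

class HCICluster:
    def __init__(self, node_ids):
        # One controller per node, unlike converged systems that hang
        # storage management off a dedicated (single-point) controller.
        self.controllers = {n: NodeController(n) for n in node_ids}

    def fail_node(self, node_id: str):
        self.controllers[node_id].healthy = False

    def control_plane_available(self) -> bool:
        # The storage management layer stays up while any controller lives.
        return any(c.healthy for c in self.controllers.values())

cluster = HCICluster(["node-1", "node-2", "node-3"])
cluster.fail_node("node-2")
assert cluster.control_plane_available()  # survives the loss of a node
```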
Businesses across all verticals are seeing benefits from this tight integration with virtual technologies as well. HCI reduces the complexity and fragmentation of managing resources spread across heterogeneous systems; it can reduce data center footprints; and it can greatly reduce deployment risk through validated deployment architectures.
There’s clear demand in the market. Consider this: according to the latest Gartner Magic Quadrant for Integrated Systems report, “hyperconverged integrated systems will represent over 35 percent of total integrated system market revenue by 2019.” This makes it one of the fastest-growing and most valuable technology segments in the industry today.
A recent IDC report looked at the converged systems market in Q3 2015. The market generated 1,261 petabytes of new storage capacity shipments during the quarter, up 34.8 percent from the same period a year earlier. Finally, the report underscored the growth of the hyperconverged segment: hyperconverged sales grew 155.3 percent year over year during the third quarter of 2015, generating more than $278.8 million in sales. This amounted to 10.9 percent of the total market value.
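Those two percentages pin down the rest of the arithmetic. A quick back-of-the-envelope check, using only the figures quoted above:

```python
# Back-of-the-envelope check of the IDC figures quoted above.

hci_sales_q3_2015 = 278.8e6   # hyperconverged sales, USD
yoy_growth = 1.553            # 155.3% year-over-year growth
market_share = 0.109          # 10.9% of total converged market value

# Growing 155.3% means the current figure is 2.553x the prior year's.
hci_sales_q3_2014 = hci_sales_q3_2015 / (1 + yoy_growth)
total_market_q3_2015 = hci_sales_q3_2015 / market_share

print(f"Implied Q3 2014 hyperconverged sales: ${hci_sales_q3_2014 / 1e6:.1f}M")
print(f"Implied total converged market, Q3 2015: ${total_market_q3_2015 / 1e9:.2f}B")
# -> roughly $109M and $2.56B, respectively
```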
And so, with a booming and demanding market, Cisco jumped into the HCI waters. In March, it announced a new line of hyperconverged infrastructure systems called HyperFlex.
Cisco HyperFlex Limitations
Looking at the overall Cisco platform, the biggest missing component was a powerful virtual controller to manage storage resources. This is where Springpath comes into play. The result? HyperFlex, a system that combines software-defined computing in the form of Cisco Unified Computing System (UCS) servers, software-defined storage with the new Cisco HyperFlex HX Data Platform Software (Springpath), and software-defined networking with Cisco UCS fabric that integrates with Cisco Application Centric Infrastructure (ACI).
It’s a great start and a great technology for an organization that already has a massive user base with the UCS and networking ecosystem. The HyperFlex architecture integrates directly into existing Cisco management environments to allow for complete data center scale. Still, as many first releases go, there are some limitations:
- Today, HX systems will only support VMware vSphere. However, other virtualization technologies like Hyper-V are in line for support in the future. For now, this type of architecture can make a lot of sense only if you’re a VMware-heavy shop.
- As it stands, there is a limit of eight HX nodes per cluster. However, with the Hybrid option, you can add an additional four classic B200M4 Blades for more compute power. This gives the architecture the ability to run 12 servers in a hybrid configuration. There is a catch, however: in the Hybrid solution, the B200M4 local storage is not utilized by the Cisco HX system.
- If you’re hoping to integrate an NVIDIA GRID card into the Cisco HX solution, you’re out of luck: graphics acceleration cards are not currently supported, though support should arrive soon. There is a workaround, however: in the Hybrid mode, where you can add B200M4 Blades, you can also integrate NVIDIA GRID. You’ll still have to create a separate UCS domain, but at least you can bring GPUs into the mix. With this in mind, you can still deploy powerful, multi-tenant environments around virtual application and even desktop delivery, with attractive economics as long as the use case fits.
- Deduplication cannot be turned off, and both deduplication and compression are performed inline (see the sketch after this list). This means that if you have apps or workloads that cannot tolerate deduplication or compression, deploying them on a Cisco HX architecture might be a challenge.
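To illustrate why an always-on, inline design leaves no opt-out, here is a minimal sketch of inline deduplication (a generic illustration with hypothetical names, not Cisco’s implementation): every write is hashed and collapsed before it ever reaches disk.

```python
# Minimal sketch of inline deduplication (hypothetical, not Cisco's
# implementation): identical writes are collapsed to one physical block
# before landing on disk, so workloads cannot opt out of the behavior.

import hashlib

class InlineDedupStore:
    def __init__(self):
        self.blocks = {}      # content hash -> stored block bytes
        self.refcounts = {}   # content hash -> number of logical writes

    def write(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:         # store unique content once
            self.blocks[digest] = data
        self.refcounts[digest] = self.refcounts.get(digest, 0) + 1
        return digest                          # logical address is the hash

store = InlineDedupStore()
a = store.write(b"same block")
b = store.write(b"same block")
assert a == b and len(store.blocks) == 1  # two writes, one physical block
```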
That said, there are several critical features that set this technology apart from other HCI solutions. One is that HyperFlex comes with full network fabric integration, which allows administrators to create QoS policies and even manage vSwitch configurations that scale throughout the entire fabric interconnect architecture.
Furthermore, the Cisco HX system takes a different approach to data. Unlike Nutanix, which believes in “data locality,” Cisco HX spreads data across all nodes at the same time: a write first lands in the local SSD cache, and replicas are then written to remote SSD drives in parallel before the write is acknowledged. This, however, also brings up a use-case limitation: you won’t be able to deploy an app or workload that requires more SSD capacity than the system has available in cache.
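A simplified sketch of that write path, with assumed names and in-memory lists standing in for SSD caches (an illustration of the described behavior, not Cisco’s code):

```python
# Simplified sketch of the distributed write path described above:
# write to the local SSD cache, replicate to remote SSD caches in
# parallel, and acknowledge only once every replica has landed.

from concurrent.futures import ThreadPoolExecutor

def write_block(local_cache, remote_caches, block: bytes) -> bool:
    local_cache.append(block)           # 1. land the write in the local SSD cache
    with ThreadPoolExecutor() as pool:  # 2. replicate to remote caches in parallel
        results = list(pool.map(lambda cache: cache.append(block) or True,
                                remote_caches))
    return all(results)                 # 3. ack only after all replicas land

local, remote_a, remote_b = [], [], []
acked = write_block(local, [remote_a, remote_b], b"vm-disk-block-0042")
assert acked and local == remote_a == remote_b
```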
Understanding Cisco HyperFlex Use Cases
Even though this is a 1.0 version, there are still a number of use cases where Cisco HyperFlex can be deployed:
- VDI and Application Virtualization. Let’s be honest, this is a virtualization-ready ecosystem. This is especially the case if you’ve standardized on the VMware hypervisor. Management is done completely from the Cisco Data Platform Administration Plug-in for vCenter. This allows administrators to control their resources, monitor the environment, and provision workloads as needed. This direct coupling allows for easy VM management and direct integration with a software-defined data center (SDDC) ecosystem. Virtual applications live very well in these kinds of environments, as do virtual desktops. Still, keep in mind the graphics limitations and know which types of apps and workloads will work well on the HX system.
- Remote offices, branch offices, and small/medium/large data centers. Instead of deploying large, bulky pieces of hardware, you can utilize Cisco HX for specific use cases or entire remote architectures. Remote office admins get fewer management points and can control their environments much more effectively. These types of systems aim to reduce overall data center footprints while still increasing density and simplifying management.
- Test/dev and non-critical workloads. Some of the limitations above may prevent the deployment of critical applications. For sandbox environments, dynamic resource control, testing, and workload delivery, however, Cisco HX can make a lot of sense. If you plan on putting mission-critical applications or workloads on the HX system, make sure to review the underlying requirements and make appropriate adjustments.
As Cisco takes a big swing at a new market, it’ll be interesting to see the growth in the space and all the new use cases around HCI architectures. As for Cisco, how much will the HX System cost? According to CRN, the pricing for a three-node HyperFlex cluster starts at $59,000, including one year of 24x7x4 on-site support.
Even though Cisco is a bit late to the HCI market, it can leverage a huge installed base and very loyal customers. There’s no doubt a number of use cases around Cisco HX solutions will help change the way data centers are designed and deployed. Either way, the market is diving deeper into software-defined technologies as the virtual controller layer becomes ever more intelligent. | 5:52p |
Ingram Micro to Acquire Cloud Application Distributor Ensim
By Talkin’ Cloud
Ingram Micro has entered into a definitive agreement to acquire Ensim, a provider of provisioning and management technology for cloud-based applications. Terms of the deal were not disclosed.
Founded in 1998, Ensim is based in San Jose, Calif. Its technology has over 5,000,000 seats deployed worldwide and is used by more than 20,000 organizations through service providers, system integrators, MSPs, and resellers.
Ensim’s flagship offering, the Ensim Automation Suite, provides a complete solution for marketplace, storefront, subscription management, service catalog, ticketing, provisioning, usage collection, recurring billing, and reporting. According to Ensim, the Suite is used by service providers to offer their solutions to customers and resellers, as well as by resellers, enterprises, and government agencies. In 2015, the Ensim Automation Suite added over 60 new features.
It is unclear exactly how Ingram Micro will leverage the acquisition of Ensim, or how Ensim customers will be impacted by the deal. Ensim’s technology could be used to improve Ingram Micro’s cloud marketplace, which added some enhancements at the beginning of March.
| 8:53p |
Microsoft’s Next Platform Bet: Bot Assistants
By WindowsITPro
It is hard to imagine a major platform introduction going worse than Microsoft’s bet on intelligent agents, also known as bots.
Tay, the teen-inspired chatbot from Microsoft Research, went horribly off the rails a few hours after it launched, spewing racist invective, death threats, and more at users.
Microsoft blamed the problems on a coordinated effort by users trying to troll the bot, which then briefly and accidentally came back online with the same problems. The account is now locked, but while the episode would have led to the program being scrapped at other companies, Microsoft’s Satya Nadella has vowed to press on.
In fact, to Nadella, conversation is the next platform, like mobile, the web, and the desktop before it. And Microsoft’s bots — bumps along the way notwithstanding — are key to winning that platform.
And being one of the first to rush there means taking some of the heat when things go wrong.
Even Nadella was creeped out by one of Microsoft’s early creations, as Bloomberg reported:
Some engineers hacked a Satya-bot that answered questions like “What’s Microsoft’s mission?” and “Where did you go to college?” in Nadella’s voice by culling quotes from his speeches and interviews.
Connell thought it was a clever way to show how the technology worked and told Nadella about it, thinking he’d be flattered. But the CEO was weirded out by a computer program spitting back his words. “I don’t like the idea,” said Nadella, half laughing, half grimacing on a walk to a secret room earlier this month to preview bot and AI capabilities he demoed Wednesday at Microsoft’s Build conference. “I shudder to think about it.”
Shuddering, but not stalling.
During Build, Microsoft unveiled the Microsoft Bot Framework, which lets you build a bot once and then deploy assistants that can reach users over text, Skype, Slack, email, web chat, and more.
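The appeal of that model is that the bot logic is written once against a channel abstraction. Here is a minimal sketch of the idea; the names (ChannelAdapter, SlackAdapter, EchoBot) are hypothetical illustrations, not the actual Bot Framework API.

```python
# Minimal sketch of "build once, deploy to many channels" (hypothetical
# names, not the Microsoft Bot Framework API): bot logic targets an
# adapter interface, and each channel supplies its own adapter.

from abc import ABC, abstractmethod

class ChannelAdapter(ABC):
    """Normalizes one messaging channel (Skype, Slack, SMS, ...)."""
    @abstractmethod
    def send(self, user: str, text: str) -> None: ...

class SlackAdapter(ChannelAdapter):
    def send(self, user, text):
        print(f"[slack -> {user}] {text}")  # stand-in for a real API call

class SmsAdapter(ChannelAdapter):
    def send(self, user, text):
        print(f"[sms -> {user}] {text}")

class EchoBot:
    """Bot logic written once, unaware of which channel carries it."""
    def on_message(self, user: str, text: str, channel: ChannelAdapter):
        channel.send(user, f"You said: {text}")

bot = EchoBot()
bot.on_message("alice", "hello", SlackAdapter())  # same logic,
bot.on_message("alice", "hello", SmsAdapter())    # different channels
```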
It’s a path forward now that Microsoft has largely been locked out of the mobile OS wars. Assistants that can reach out and help you make better, more informed decisions are a way to run around platform gatekeepers and offer something those gatekeepers still struggle with: continuity.
“It’s not just in one operating system, because that’s not what we believe in. It’s available on all your devices and all your experiences,” Nadella told Matt Rosoff in a recent interview. “Instead of you going to 20 apps and having to do all of this in your own head, what if all the apps came to you, whether they be through bots in a conversational canvas like Skype or through a personal digital assistant?”
Expect to find out the answer to that “what if” soon, with Microsoft starting to ramp up an army of Skype Bots while integrating Cortana deeper into business operations with Cortana in the Enterprise.
Original article appeared at http://windowsitpro.com/cloud/microsoft-next-platform-bet-bot-assistants-know-all-about-you-and-your-business