Data Center Knowledge | News and analysis for the data center industry
Wednesday, June 29th, 2016
12:00p
Data Center SDN: Comparing VMware NSX, Cisco ACI, and Open SDN Options
The data center network layer is the engine that carries some of the most important business data you have. Applications, users, specific services, and even entire business segments are all tied to network capabilities and delivery architectures. And with all the growth around cloud, virtualization, and the digital workspace, the network layer has become even more important.
Most of all, we’re seeing more intelligence and integration taking place at the network layer. The biggest evolution in networking includes integration with other services, the integration of cloud, and network virtualization. Let’s pause there and take a brief look at that last concept.
Software-defined networking, or the separation of the network’s control plane from its data plane, gives administrators a completely new way to manage critical networking resources. For a more in-depth explanation of SDN, see one of my recent Data Center Knowledge articles.
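To make that abstraction concrete, here is a toy Python sketch of the idea (every name in it is illustrative, not any vendor’s API): a central controller computes a path over the topology it knows and installs forwarding entries on switches, which do nothing but look up next hops.

```python
from collections import deque

# Data plane: a switch only holds a forwarding table and forwards packets.
class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                 # destination -> next hop

    def install_flow(self, dst, next_hop):
        self.flow_table[dst] = next_hop

# Control plane: the controller sees the whole topology and computes paths.
class Controller:
    def __init__(self, topology):
        self.topology = topology             # node -> set of neighbors
        self.switches = {n: Switch(n) for n in topology}

    def provision_path(self, src, dst):
        # BFS shortest path, then install a flow entry on every hop.
        prev, seen, queue = {}, {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                break
            for nbr in self.topology[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    prev[nbr] = node
                    queue.append(nbr)
        hop = dst
        while hop != src:
            self.switches[prev[hop]].install_flow(dst, hop)
            hop = prev[hop]

topo = {"leaf1": {"spine1"}, "spine1": {"leaf1", "leaf2"}, "leaf2": {"spine1"}}
controller = Controller(topo)
controller.provision_path("leaf1", "leaf2")
print(controller.switches["leaf1"].flow_table)   # {'leaf2': 'spine1'}
```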
There are big business initiatives supporting the technology. Very recently, IDC said that the worldwide SDN market, comprising physical network infrastructure, virtualization/control software, SDN applications (including network and security services), and professional services, will have a compound annual growth rate of 53.9% from 2014 to 2020 and will be worth nearly $12.5 billion in 2020.
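As a quick sanity check on those figures, the projection implies a 2014 market of just under $1 billion; a couple of lines of Python show the compounding in reverse (both inputs are IDC’s numbers from the paragraph above):

```python
# Back-of-envelope check on the IDC projection: what 2014 market size
# does a 53.9% CAGR imply if the 2020 value is ~$12.5B?
cagr, years, value_2020 = 0.539, 2020 - 2014, 12.5e9
value_2014 = value_2020 / (1 + cagr) ** years
print(f"Implied 2014 market size: ${value_2014 / 1e9:.2f}B")   # ~$0.94B
```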
As IDC points out, although SDN initially found favor in hyperscale data centers or large-scale cloud service providers, it is winning adoption in a growing number of enterprise data centers across a broad range of vertical markets, especially for public and private cloud rollouts.
“Large enterprises are now realizing the value of SDN in the data center, but ultimately, they will also recognize its applicability across the WAN to branch offices and to the campus network,” said Rohit Mehra, VP, Network Infrastructure, at IDC.
“While networking hardware will continue to hold a prominent place in network infrastructure, SDN is indicative of a long-term value migration from hardware to software in the networking industry. For vendors, this will portend a shift to software- and service-based business models, and for enterprise customers, it will mean a move toward a more collaborative approach to IT and a more business-oriented understanding of how the network enables application delivery,” said Brad Casemore, Director of Research for Data Center Networking at IDC.
There are several vendors offering a variety of flavors of SDN and network virtualization, so how are they different? Are some more open than others? Here’s a look at some of the key players in this space.
VMware NSX. VMware already virtualizes your servers, so why not virtualize the network too? NSX integrates security, management, functionality, VM control, and a host of other network functions directly into your hypervisor. From there, you can create an entire networking architecture from your hypervisor. This includes L2, L3, and even L4-7 networking services. You can even create full distributed logical architectures spanning L2-L7 services. These services can then be provisioned programmatically as VMs are deployed and as services are required within those VMs. The goal of NSX is to decouple the network from the underlying hardware and deliver optimized networking services directly to the VM. From there, micro-segmentation becomes a reality, application continuity improves, and integration with additional security services becomes possible.
- Use cases and limitations. The only way you can truly leverage NSX is if you’re running the VMware hypervisor. From there, you can control East-West routing, the automation of virtual networks, routing/bridging services for VMs, and other core networking functions; a sketch of what that programmatic provisioning looks like follows below. If you’re a VMware shop hosting a large number of VMs and are caught up in the complexities of virtual network management, you absolutely need to look at NSX. However, there are some limitations. First of all, your automation is limited to virtual networks and virtual machines; there is no automation for physical switches. Furthermore, some of the advanced L4-L7 network services are delivered through a closed API and might require additional licensing. Ultimately, if you’re focused on virtualization and your infrastructure of choice revolves around VMware, NSX may be a great option. With that in mind, here are two more points to be aware of: if you have a very simple VMware deployment with little complexity, you’ll probably have little need for NSX. However, if you have a sizeable VM architecture with a lot of VMware networking management points, NSX can make your life a lot easier.
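Below is a minimal, hedged sketch of creating a logical switch through the NSX Manager REST API. The endpoint and XML payload follow the published NSX-v 6.x API; the hostname, credentials, and transport zone ID are placeholders you would replace with your own values.

```python
import requests

# Placeholders: manager address, credentials, and the transport zone ID
# ("vdnscope-1") must come from your own environment.
NSX_MANAGER = "https://nsx-manager.example.com"
AUTH = ("admin", "secret")
TRANSPORT_ZONE = "vdnscope-1"

# NSX-v expects an XML virtualWireCreateSpec document.
payload = """\
<virtualWireCreateSpec>
  <name>web-tier</name>
  <tenantId>tenant-1</tenantId>
</virtualWireCreateSpec>
"""

resp = requests.post(
    f"{NSX_MANAGER}/api/2.0/vdn/scopes/{TRANSPORT_ZONE}/virtualwires",
    data=payload,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,   # lab convenience only; validate certificates in production
)
resp.raise_for_status()
print("New logical switch ID:", resp.text)   # NSX returns the virtualwire ID
```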
Big Switch Networks. Welcome to the realm of open SDN. These types of architectures provide more options and even support white-box (and brite-box) solutions. Big Switch has a product called Big Cloud Fabric, which it built using open networking (white box or brite box) switches and SDN controller technology. Big Cloud Fabric is designed to meet the requirements of physical, virtual, cloud, and/or containerized workloads. That last part is important: Big Switch is one of the first SDN vendors to specifically design networking services for containerized microservices. Here’s the other cool part: BCF supports multiple hypervisor environments, including VMware vSphere, Microsoft Hyper-V, KVM, and Citrix XenServer. Within a fabric, both virtualized servers and physical servers can be attached for complete workload flexibility. For cloud environments, BCF includes OpenStack support for Red Hat and Mirantis distributions. It can also be integrated with Dell Open Networking switches.
- Use cases and limitations. Even though it supports other hypervisors, the biggest benefits come from the integration with VMware’s NSX. BCF interoperates with the NSX controller, providing enhanced physical network visibility to VMware network administrators. Furthermore, you can leverage the full power of your white-box (or brite-box) switches and extend those services throughout your virtualization ecosystem and into the cloud via OpenStack (see the Neutron sketch below). That being said, it’s important to understand where this technology can and should be deployed. If you’re a service provider, cloud host, or a massively distributed organization with complex networks, working with a new kind of open SDN technology could make sense. First of all, you can invest with confidence in commodity switches, since the software controlling them is powerful. Secondly, you’re not locked in by any vendor, and your entire networking control layer is extremely agile. However, it won’t be a perfect fit for everybody. Arguably, you can create a “one throat to choke” architecture here, but it won’t be quite as clean as buying from a single networking vendor. You are potentially trading off open versus proprietary technologies, but you need to ask yourself: “What’s best for my business and for my network?” If you’re an organization focused on growth, your business, and your users, and you simply don’t have the time or desire to work with open SDN technologies, this may not be the platform for you. There will be a bit of a learning curve as you step away from traditional networking solutions.
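To illustrate the OpenStack angle, here is a short sketch using the standard python-neutronclient. It assumes a cloud where the Big Switch Neutron plugin is configured, so ordinary Neutron calls are realized on the fabric; the credentials and endpoint are placeholders.

```python
from neutronclient.v2_0 import client as neutron_client

# Placeholder credentials and Keystone endpoint; assumes the Big Switch
# Neutron plugin is configured so these calls land on the fabric.
neutron = neutron_client.Client(
    username="admin",
    password="secret",
    tenant_name="demo",
    auth_url="http://keystone.example.com:5000/v2.0",
)

# Create a tenant network and subnet; the SDN controller programs the
# underlying white-box switches, with no per-switch CLI work.
net = neutron.create_network({"network": {"name": "web-tier"}})
neutron.create_subnet({"subnet": {
    "network_id": net["network"]["id"],
    "ip_version": 4,
    "cidr": "10.10.10.0/24",
}})
```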
Cumulus Linux. This has been an amazing technology to follow and watch gain traction. (Please note that there are many SDN vendors creating next-generation networking capabilities built around open and proprietary technologies. Cumulus Linux is included here as an example and to show just how far SDN systems have come.) The architecture is built around native Linux networking, giving you the full range of networking and software capabilities available in Debian, but supercharged … of course! Switches running Cumulus Linux provide standard networking functions such as bridging, routing, VLANs, MLAG, IPv4/IPv6, OSPF/BGP, access control, VRF, and VXLAN overlays. But here’s the cool part: Cumulus can run on “bare-metal” network hardware from vendors like Quanta, Accton, and Agema, so customers can purchase hardware at costs far lower than what incumbents charge. Furthermore, hardware running Cumulus Linux can run right alongside existing systems, because it uses industry-standard switching and routing protocols. Hardware vendors like Quanta are now making a direct impact on the commodity hardware conversation. Why? They can provide vanity-free servers and networking options capable of supporting a much more commoditized data center architecture.
- Use cases and limitations. Today, the technology supports Dell, Mellanox, Penguin, Supermicro, EdgeCore, and even some Hewlett Packard Enterprise switches. Acting as an integration point or overlay, Cumulus gives organizations the ability to work with a powerful Linux-driven SDN architecture. There are a lot of places where this technology can make sense: integration into heavily virtualized systems (VMware), expansion into cloud environments (direct integration with OpenStack), controlling big data (zero-touch network provisioning for Hadoop environments), and a lot more; a small example of how its Linux-native configuration can be templated follows below. However, you absolutely need to be ready to take on this type of architecture. Get your support in order, make sure you have partners and professionals who can help you out, and ensure your business is ready to go this route. Although there are some deployments of Cumulus in the market, enterprises aren’t ripping out their current networking infrastructure to go completely open-source and commodity. However, there is traction as more Linux workloads are deployed, more cloud services are utilized, and more open source technologies are implemented.
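As a small illustration of that Linux-native approach, the sketch below templates an /etc/network/interfaces stanza for a VLAN-aware bridge, the way a provisioning tool might. The stanza syntax is Cumulus’s ifupdown2; the port names and VLAN IDs are made up.

```python
# Cumulus Linux is configured through standard Debian-style files
# (ifupdown2), so switch config is plain text and easy to template.
# Port names and VLAN IDs below are illustrative.
STANZA = """\
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports {ports}
    bridge-vids {vids}
"""

def render_bridge(ports, vids):
    """Render an /etc/network/interfaces stanza for a VLAN-aware bridge."""
    return STANZA.format(ports=" ".join(ports), vids=" ".join(map(str, vids)))

print(render_bridge(["swp1", "swp2", "swp3"], [10, 20, 30]))
```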
Cisco Application Centric Infrastructure (ACI). At a very high level, ACI creates tight integration between physical and virtual elements. It uses a common policy-based operating model across ACI-ready network and security elements. Centralized management is done by the Cisco Application Policy Infrastructure Controller, or APIC. It exposes a northbound API through XML and JSON and provides a command-line interface and GUI that use this API to manage the fabric. From there, network policies and logical topologies, which traditionally have dictated application design, are instead applied based on the application’s needs.
- Use cases and limitations. This is a truly powerful model capable of abstracting the networking layer and integrating core services with your important applications and resources. With this kind of architecture, you can fully automate all virtual and physical network parameters through a single API (see the APIC sketch below). Furthermore, you can integrate with legacy workloads and networks to control that traffic as well. And yes, you can even connect non-Cisco physical switches to gather information about the device and the workloads it carries. Partnerships with other vendors allow for complete integrations. That said, there are some limitations. Obviously, the only way to get the full benefits of Cisco’s SDN solution is by working with the (not exactly inexpensive) Nexus line of switches, and more functionality is enabled if you’re running the entire Cisco fabric in your data center. For some organizations, this can get expensive. However, if you’re leveraging Cisco technologies already and haven’t looked into ACI and the APIC architecture, you should.
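Since the APIC’s JSON API is the centerpiece here, a minimal sketch of driving it with Python follows. The aaaLogin and fvTenant objects come from Cisco’s published APIC REST API; the hostname and credentials are placeholders.

```python
import requests

APIC = "https://apic.example.com"   # placeholder hostname

session = requests.Session()
session.verify = False              # lab convenience only

# Authenticate; APIC hands back a token that the session stores as a cookie.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# Create (or update) a tenant by posting a managed object under "uni",
# the root of the APIC policy tree.
tenant = {"fvTenant": {"attributes": {"name": "ExampleTenant"}}}
resp = session.post(f"{APIC}/api/mo/uni.json", json=tenant)
resp.raise_for_status()
print("Tenant pushed, status", resp.status_code)
```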
See also: Why Cisco is Warming to Non-ACI Data Center SDN
As I mentioned earlier, there are a lot of other SDN vendors that I didn’t get the chance to discuss. Specifically:
- Plexxi
- Pica8
- PLUMgrid
- Embrane
- Pluribus Networks
- Anuta
- And several others…
It’s clear that SDN is growing in importance as organizations continue to work with expanding networks and increasing complexity. The bottom line is this: there are evolving market trends and technologies that can deliver SDN and fit your specific use case. It might simply make sense for you to work with more proprietary technologies when designing your solution. In other cases, deploying open SDN systems helps further your business and your use cases. Whichever way you go, always design to support your business and the user experience. Remember, all of these technologies are here to simplify your network, not make it more complex.
3:00p
No ‘Unregulated Open Field’ for Fintechs, French Watchdog Warns
(Bloomberg) — Financial technology companies: the industry rules apply to you too, and the French markets watchdog wants to make sure you’re respecting them.
“We don’t want to give fintechs the illusion that they’re operating in a totally unregulated open field,” Franck Guiader, who heads a newly-created division for innovative services at the Autorite des Marches Financiers in Paris, said in an interview. “Existing European regulation may already apply to part of their businesses and if there are loopholes regulation is likely to evolve.”
Automated financial advice, blockchain and big data are among the priority fields for the AMF as it seeks to draft and influence regulation for new financial services in Europe and globally, said Guiader.
The AMF is not the first European regulator to zoom in on fintechs. The U.K.’s Financial Conduct Authority has an innovation hub to hand-hold companies through the introduction of new services to the market, and it’s also opening up another program for testing new ideas.
The move by the AMF is a “positive sign,” said Julien Maldonato, director for the financial industry at Deloitte in Paris. It means startups will benefit from regulatory coaching as they develop, reducing the risk of “emergency” disciplinary measures once they’re already established, he said.
Global fintech funding was about $22.3 billion last year, 75 percent more than the year before, according to Accenture. While France made up only a small fraction of that, with $189 million, it’s growing faster than the global average and playing catch-up to its neighbors Germany and the U.K. Of Europe’s $1.5 billion in fintech funding last year, the U.K. made up $962 million and Germany $193 million, according to a report by KPMG.
“Perhaps in case of Brexit the Paris market may become more interesting, and regulators desire to be attractive,” said Deloitte’s Maldonato.
Surprise Resignation
Debate about regulating new financial services is flaring up after LendingClub Corp. chief executive officer Renaud Laplanche’s unexpected resignation last month, as the U.S. peer-to-peer lending platform disclosed internal-control lapses and abuses related to the sale of some of its loans. Companies are also asking watchdogs more questions, and some are calling for a clearer framework as they develop new services.
While the AMF will look closely at new financial services whether they’re coming out of a startup’s lab or a well-established bank, it will also work directly with entrepreneurs to prepare them for how the framework is likely to evolve, Guiader said.
“We want to accompany entrepreneurs,” said Guiader. It’s about helping them “develop sustainably in the long run.”
3:30p
Cisco to Acquire Cloud Security Firm CloudLock for $293M
Brought to You by Talkin’ Cloud
Cisco has agreed to acquire cloud access security broker CloudLock for $293 million in cash and equity, bringing the team under its networking and security business group, the networking giant announced this week.
CloudLock is based in Waltham, Massachusetts, and claims to have more than 700 customers with tens of millions of users under management. CloudLock said the deal will not impact existing partners.
“As enterprises are retooling themselves and increasingly building their futures in the cloud, security has not only become a top business priority, it is now universally demanded by businesses and individuals alike, as a necessity to keep their cloud applications, their data, and their businesses safe,” CloudLock co-founder and CEO, Gil Zimmerman, said in a statement.
The deal comes shortly after Cisco acquired CliQr to add its application-defined cloud orchestration platform.
A version of this article ran first at http://talkincloud.com/cloud-computing-mergers-and-acquisitions/cisco-acquire-cloud-access-security-broker-cloudlock-293-mi
6:30p
Google-Backed FASTER Submarine Cable to Go Live This Week
FASTER, the Google-backed submarine cable that adds much-needed network bandwidth between data centers in the US and data centers in Japan, Taiwan, and the broader Asia-Pacific market, has been completed, about two years after the project was first announced. The cable will start carrying traffic on Thursday, a Google spokesperson said via email.
As more and more data is generated and transferred around the world, demand for connectivity is skyrocketing. There has been an increase in submarine cable construction activity in response, with major internet and cloud services companies like Google, which are the biggest consumers of bandwidth, playing a bigger and bigger role in this industry.
The FASTER system lands in Bandon, Oregon; two cities in Japan, Chikura and Shima; and in Tanshui, Taiwan, according to TeleGeography’s submarine cable map. The cable landing stations are connected to nearby data centers, from which the traffic is carried to other locations in their respective regions.
On the US side, data center providers Telx (owned by Digital Realty Trust), CoreSite, and Equinix have made deals to support the new system. A Telx data center in Hillsboro, Oregon, is connected to the landing station in Bandon. FASTER traffic will be backhauled to Equinix data centers in Silicon Valley, Los Angeles, and Seattle. CoreSite’s big connectivity hub in Los Angeles will also have direct access to the system.

The FASTER submarine cable system lands in the US, Japan, and Taiwan (Source: TeleGeography, Submarine Cable Map)
Google invested $300 million as a member of the consortium of companies that financed the submarine cable’s construction. Other members are China Mobile International, China Telecom Global, Malaysian telco Global Transit Communications, Japanese telco KDDI, and Singaporean telco Singtel.
Both KDDI and Singtel are also major data center services players. Singtel is the biggest data center provider in Singapore, one of Asia’s most important network interconnection hubs, and has a partnership with Aliyun, the cloud services arm of China’s internet giant Alibaba. KDDI subsidiary Telehouse operates data centers throughout Asia, as well as in Europe, the US, and Africa.
The rate of growth in global internet traffic has been breathtaking. Cisco’s latest Global Cloud Index projects the amount of traffic flowing between the world’s data centers and their end users to grow from 3.4 zettabytes in 2014 to 10.4 zettabytes in 2019. It would take the world’s entire 2019 population streaming music for 26 months straight to generate 10.4 zettabytes of traffic, according to Cisco’s analysts.
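Cisco’s streaming comparison checks out as a back-of-envelope calculation, assuming roughly 7.6 billion people and a 160 kbps music stream (both assumptions are mine, not Cisco’s):

```python
# Sanity check: how long would the world's population need to stream music
# to generate 10.4 ZB? Population and bitrate are assumed inputs.
ZETTABYTE = 1e21                       # bytes
traffic = 10.4 * ZETTABYTE
population = 7.6e9                     # rough 2019 world population
bytes_per_second = 160e3 / 8           # 160 kbps stream -> 20 kB/s

seconds_per_person = traffic / population / bytes_per_second
months = seconds_per_person / (60 * 60 * 24 * 30.44)
print(f"~{months:.0f} months of non-stop streaming per person")   # ~26
```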
Learn more: Data Center Network Traffic Four Years From Now: 10 Key Figures
Cloud will be responsible for the majority of all that traffic four years from now, according to Cisco, so it comes as no surprise that the cloud giants have ramped up investment in global network infrastructure.
Amazon Web Services, the biggest cloud provider, made its first major investment in a submarine cable project earlier this year. The Hawaiki Submarine Cable, expected to go live in June 2018, will increase bandwidth on the network route between the US, Australia, and New Zealand. Amazon agreed to become the cable’s fourth anchor customer, which finalized the financing necessary to build the system.
Microsoft and Facebook announced in May a partnership to finance construction of a transatlantic cable called MAREA, which will land in Virginia and Bilbao, Spain.
Microsoft is also an investor in the New Cross Pacific Cable System, a transpacific cable that will land in Oregon, China, South Korea, Taiwan, and Japan, and the transatlantic system called Express, which will land in Canada, the UK, and Ireland.
Read more: Microsoft Invests in Several Submarine Cables to Support Cloud Services
Corrected: A previous version of this article incorrectly stated that the FASTER cable went live on Tuesday evening, Pacific Time. The cable is expected to come online Thursday, and the article has been corrected accordingly.
10:07p
Six Mistakes Hosting, Cloud, Managed, and Colo Providers Make
Brought to You by The WHIR
As nearly every industry under the sun gets disrupted by IT, there are unprecedented opportunities and challenges.
Regardless of company size, location, and vertical, many CEOs and CIOs find it nearly impossible to scale and secure their IT infrastructure without help from their IT partners, which include hosting providers, cloud service providers, managed service providers, and colocation providers.
However, the way that these decision makers and their influencers go about researching and purchasing these services has changed drastically in recent years.
Adapt to a World Where Buyers Control All
Disruptive forces such as search engines, social media, mobile devices, cloud computing, and selective consumption have totally turned the traditional sales and marketing playbook on its head. People are tired of being interrupted by obnoxious sales and marketing ploys and have fought back by adopting tools and habits that put them in the driver’s seat. Caller ID, spam blockers, DVRs, and satellite radio, for example, all scratch the same itch for control over unwanted marketing.
As these techniques spilled over from B2C (business to consumer) to B2B (business to business), CEOs, sales directors, and marketing directors of hosting, cloud, managed, and colo providers have had to confront a painful reality: they no longer hold all the cards and no longer control most of the sales cycle.
So pervasive is this lack of control that as much as 70 percent of the decision-making process, and in some cases even more, is over before influencers and decision makers are ready for a sales conversation.
How to Get Found Earlier in the Hosting and Cloud Buyer’s Journey
To survive and thrive in this environment, it’s critical to get found by your ideal clients early, often, and in the right context.
Here are six of the biggest revenue-eating mistakes companies make:
- Investing your entire budget in one small piece of the puzzle
To be effective, your revenue strategy should take into account differentiation, traffic generation, lead generation, sales cycle acceleration, and retention. All too often, hosting, cloud, managed, and colo providers pursue a single “point” strategy in isolation, such as search engine optimization (SEO) or pay-per-click (PPC) advertising. As an analogy, think about someone building a fantasy baseball team who blows the entire payroll on a single pitcher and catcher; there are virtually no funds left to invest in the other 23 players needed to compete.
- Not establishing goals
It’s nearly impossible to know if you’re making progress if you don’t know where you’re going. All too often, IT providers chase after vanity metrics (Likes, Followers, etc.) and an ego-driven agenda with no regard whatsoever to more relevant goals such as client acquisition, revenue growth, sales cycle acceleration, or profit margin improvement.
- Treating all prospects the same
How many CIOs do you know that have the same priorities and worries as sales directors? What would happen if you tried to plan a breakfast seminar or webinar that appealed to both groups? Ten years ago, it might’ve been acceptable to lump everyone’s content strategy into vague categories such as “small business.”
Today there are simply way too many things competing for your prospects’ highly fragmented attention. Semi-relevant won’t cut it when you only have two or three seconds to convince a website visitor to stick around and not back-button out of your website.
- Keeping sales, marketing, and services in silos
Ten years ago, it was no big problem if your sales team trash-talked your marketing team for doing “arts and crafts projects” or playing with company swag. Or conversely, if your marketing team stereotyped your sales team as a bunch of spoiled, lazy, overpaid egomaniacs.
In today’s competitive marketplace where getting found early is critical for earning a seat at the table, these silos and toxic beliefs need to disappear and be replaced by much more productive sales and marketing alignment.
- Ignoring promotion and distribution
Too many IT providers write a piece of content, hit the publish button, auto-announce it to a few social profiles, and move on to their next priority. Huge mistake. Without promotion and distribution, even the most remarkable content will usually fail to reach its most receptive audience.
A simple rule of thumb: spend half your resources on content creation and half on content promotion and distribution.
- Not showing up early enough to earn trusted advisor status
If your sales team and executives feel like they’re constantly losing out on deals because the prospect’s mind was already made up, you’re probably absent from most of the buyer’s journey. If you’re getting backed into the corner and forced to slash prices and destroy your margins, again you’re probably not getting found early enough, by the right influencers and decision makers, in the right context.
Now that you know about these six mistakes, what can you do to proactively address these issues, differentiate your company from the competition, and grow revenue profitably?
Learn the answers to these questions and more by attending my session at HostingCon Global 2016 on “How Hosting, Cloud, Managed, and Colo Service Providers Use Inbound to Differentiate and Grow Revenue” on Wednesday, July 27, 2016 at 11:00 am (Central) in room 206 of the Ernest N. Morial Convention Center in New Orleans.
About the author: Joshua Feinberg is Vice President and Co-Founder of SP Home Run, Inc. — which helps hosting, cloud, managed services, and data center providers grow their leads, client base, revenue, and profitability.
This first ran at http://www.thewhir.com/blog/6-mistakes-hosting-cloud-managed-and-colo-providers-make-in-todays-competitive-marketplace |