Data Center Knowledge | News and analysis for the data center industry
Wednesday, January 27th, 2016
1:00p
Creating Your Enterprise Cloud Connectivity Strategy
It’s safe to say that enterprise cloud is here to stay. Cloud services have changed the way we deliver resources, support new types of users, and create new types of business strategies. Today, organizations are looking at even more ways to leverage cloud computing environments to help their businesses become more agile.
Spending on cloud infrastructure and platform could rise from $16 billion in 2014 to $43 billion by 2018, according to a recent Goldman Sachs report. The share of cloud infrastructure and platform in enterprise IT spending is forecast to increase from 5 percent in 2014 to 11 percent by 2018. This will be driven by the increasing shift of IT budget from traditional in-house delivery methods to various flavors of cloud computing as a means to cut cost and create new revenue streams.
With all this in mind, let’s focus on one of the biggest questions enterprises face as they look at the modern cloud ecosystem: “How do I create a good cloud connectivity strategy that will let me leverage both my on-premise investment and a public cloud architecture?”
To answer it, let’s look at two leading public cloud providers and what they’re offering around enterprise cloud connectivity. But first, we’ll need some definitions.
What is Enterprise Cloud Connectivity?
Enterprise cloud connections enable the interoperability between on-premise resources and public cloud environments. The on-prem environment can be a branch office, a colocation data center, or even a major enterprise data center. The goal is to create optimal business agility, where the business can adjust or scale according to market demand.
Enterprise cloud connectivity uses a variety of secure (and fast) connection protocols to allow organizations to integrate with network, storage, compute, and even user environments. The biggest difference has been the ease of creating these connections and how they can help transform a business. In the past, these connections were made manually and required a lot of administration. Today, major providers are offering easier ways to integrate with their cloud resources.
Now, let’s look at two examples of enterprise cloud connectivity in the public cloud ecosystem.
AWS Direct Connect
Amazon Web Services is the most popular cloud on the market. It can support compliance-based workloads, integrate with complex storage environments, and even provide new types of workload delivery methodologies. Recently, Amazon has made it even easier to connect into its ecosystem.
AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources, such as objects stored in Amazon S3 using public IP address space, and private resources, such as Amazon EC2 instances running within an Amazon Virtual Private Cloud using private IP space, while maintaining network separation between public and private environments. Virtual interfaces can be reconfigured at any time to meet your changing needs.
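The VLAN partitioning described above can be sketched as the request payloads a Direct Connect setup works with. This is a minimal illustration only: the connection ID, gateway ID, VLAN tags, prefixes, and ASN below are hypothetical placeholders, and in practice you would submit payloads like these through the AWS SDK (for example, boto3's `directconnect` client) rather than build them by hand.

```python
# Sketch: partitioning one dedicated Direct Connect connection into a public
# and a private virtual interface using distinct 802.1q VLAN tags. All IDs,
# VLANs, prefixes, and the ASN are hypothetical examples, not real values.

CONNECTION_ID = "dxcon-example"  # hypothetical dedicated connection

# Public virtual interface: reaches public AWS endpoints such as Amazon S3.
public_vif = {
    "virtualInterfaceName": "public-to-s3",
    "vlan": 101,                 # 802.1q tag isolating this interface
    "asn": 65000,                # customer-side BGP ASN (assumed)
    "routeFilterPrefixes": [{"cidr": "203.0.113.0/24"}],  # public IP space
}

# Private virtual interface: reaches EC2 instances in a VPC over private IPs.
private_vif = {
    "virtualInterfaceName": "private-to-vpc",
    "vlan": 102,                 # a different tag keeps the traffic separated
    "asn": 65000,
    "virtualGatewayId": "vgw-example",  # hypothetical VPC gateway
}

def vlans_are_separated(*vifs):
    """Network separation requires each virtual interface to use a unique VLAN tag."""
    tags = [v["vlan"] for v in vifs]
    return len(tags) == len(set(tags))

assert vlans_are_separated(public_vif, private_vif)
```

The check at the end reflects the article's point: the public and private environments stay separated precisely because each virtual interface rides its own VLAN tag on the shared physical connection.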
This kind of enterprise cloud connectivity comes with very real business and data center benefits:
- Better bandwidth controls around cost and delivery
- Allowing for better SLAs around consistent network performance
- Integration with all AWS cloud services
- Greater levels of business and data center elasticity
If you’re looking at integrating with an AWS environment, make sure to look at Direct Connect. It lets your organization align with a specific cloud strategy, whether that’s an all-encompassing extension into the cloud or integration with a single AWS service, like the Amazon S3 storage ecosystem. The good part is that Direct Connect helps ease the adoption of your specific cloud use case.
Microsoft Azure ExpressRoute
Much like AWS Direct Connect, Azure ExpressRoute lets you create private connections between Azure data centers and on-prem infrastructure in your enterprise data center or a colocation environment. As Microsoft points out, ExpressRoute connections don’t go over the public internet. The connection architecture allows for more reliability, faster speeds, lower latencies, and higher security than typical internet connections. In some cases, using ExpressRoute connections to transfer data between on-premise systems and Azure can yield significant cost benefits.
For example, you can establish connections to Azure at an ExpressRoute location, such as an exchange provider facility, or directly connect to Azure from your existing WAN network, such as a multi-protocol label switching (MPLS) VPN provided by a network service provider. There are several benefits to using this type of architecture:
- Data center and cloud extension
- Building an ecosystem for hybrid applications
- Creating an architecture built on auto-scaling and provisioning
- Integrating Active Directory services across multiple locations
- Use-cases around storage, backup, and recovery
Furthermore, with ExpressRoute, you can establish connections to Microsoft cloud services other than Azure, such as Office 365 and CRM Online. Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection through a connectivity provider at a colocation facility.
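On the cost point mentioned earlier, a back-of-the-envelope comparison makes the trade-off concrete. This is a hedged sketch: the per-GB rates and the flat port fee below are made-up placeholders, not actual Azure, ExpressRoute, or carrier pricing, and real contracts have many more variables.

```python
# Rough monthly cost comparison: moving data over the public internet versus a
# dedicated ExpressRoute-style circuit. All rates are hypothetical placeholders,
# not actual Azure or carrier pricing.

INTERNET_RATE_PER_GB = 0.08   # assumed per-GB internet egress rate (USD)
CIRCUIT_PORT_FEE = 300.00     # assumed flat monthly fee for the dedicated port
CIRCUIT_RATE_PER_GB = 0.02    # assumed per-GB rate on the dedicated circuit

def monthly_cost_internet(gb):
    return gb * INTERNET_RATE_PER_GB

def monthly_cost_circuit(gb):
    return CIRCUIT_PORT_FEE + gb * CIRCUIT_RATE_PER_GB

def break_even_gb():
    # The circuit wins once: port_fee + gb*circuit_rate < gb*internet_rate
    return CIRCUIT_PORT_FEE / (INTERNET_RATE_PER_GB - CIRCUIT_RATE_PER_GB)

print(f"Break-even volume: {break_even_gb():.0f} GB/month")
```

Under these assumed rates, the flat port fee means low-volume transfers stay cheaper over the internet, while sustained high-volume transfers tip in favor of the dedicated circuit, which is exactly the scenario where Microsoft's cost argument applies.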
Public cloud providers know that you’ve made big investments in your on-premise data center ecosystem. They know that you’re probably not ready to rip everything out and move it all to the cloud. This is why there have been some big initiatives around optimizing the way businesses connect into a public cloud ecosystem. These new solutions allow for an easier way to integrate on-premise resources with powerful cloud services. Ultimately, this helps organizations create faster, more reliable, and much more secure connections into their cloud ecosystem.

5:58p
Equinix, AT&T, Verizon Join Facebook’s Open Source Data Center Project
As their networks become increasingly virtualized, telecommunications companies’ offices will function and look more like data centers, and the companies are looking for the most effective, low-cost ways to build this new infrastructure.
For help, some of the world’s largest telcos have turned to the Open Compute Project, the Facebook-led open source data center and hardware design initiative. AT&T, Verizon, Deutsche Telekom, EE, and SK Telecom are all joining OCP, the organization announced today. Also joining are the Finnish communications tech firm Nokia, data center and interconnection giant Equinix, and Nexius, a provider of network technology and solutions to network operators and enterprises.
Read more: Why Should Data Center Operators Care about Open Source?
Facebook claims to have saved billions by designing its own custom IT hardware and data center infrastructure. It has designed its own compute and storage servers, and later its own networking switches. The company made a lot of those specs and design documents available to the public the same way developers share open source software.
Until now, however, OCP has been limited to the realm of web-scale and large enterprise data centers. Today’s announcement marks a major expansion of its relevance and influence, with a whole new industry on board whose infrastructure is going through fundamental changes.
Read more: Wall Street Rethinking Data Center Hardware
“AT&T will virtualize 75 percent of its network functions by 2020, and to do that, we need to move to a model of sophisticated software running on commodity hardware,” Andre Fuetsch, senior VP of Architecture and Design at AT&T, said in a statement. “We’re becoming a software and networking company. As a result, our central offices are going to look a lot more like data centers as we evolve our networking infrastructure.”
The organization has created a new OCP Telco Project to focus specifically on telco needs.
Equinix is joining as part of the OCP Telco Project. Many of its customers are network providers – about 1,100 of them – and it wants to be involved in the development of next-generation interconnection technologies. “It also enables us to tap into the industry’s formidable resources and brain power, and share our own data center and interconnection expertise,” Equinix CTO Ihab Tarazi said in a statement.

7:22p
7 Biggest Data Center Migration Mistakes (and How to Avoid Them)
Chris Rechtsteiner is VP of Marketing & Products for ServerCentral.
Data center migrations aren’t something most people do every day. They’re typically a once-in-a-career event — twice if you’re lucky (or unlucky, depending on how you look at it). No matter which camp you’re in, moving networks, servers, data and applications from one location to another tends to elicit a string of four-letter words.
Slow. Pain. Ouch. Nope. (Not the words you were thinking?)
This is for good reason.
In helping hundreds of companies migrate everything from single applications to full data centers, we’ve identified seven common mistakes people make during data center migrations, and more importantly, how to avoid them.
A data center migration process can be broken down into seven steps, each with its own potential for mishaps: discovery, planning, development, validation, migration, management, and scale.
Mistake #1 (Discovery Phase): Lacking A Complete Infrastructure Assessment
The most common mistake made during discovery is not doing a complete infrastructure assessment. This is a rack-by-rack and U-by-U documentation of each device and its associated applications.
This assessment should note all things physical and virtual, network devices, network topography, etc. Don’t take shortcuts here because there’s no such thing as too much information for a migration.
Pro Tip: Include operational and technical interdependencies in your assessment. For example, the app, web, and database servers supporting a related application must migrate as a bundle.
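That bundling rule can be enforced mechanically during discovery: record each dependency between hosts, then group hosts into migration bundles with a connected-components pass over the dependency graph. This is a minimal sketch; the host names and dependencies below are hypothetical examples.

```python
# Group servers into migration bundles: any hosts linked by an application
# dependency (directly or transitively) must move together. Host names are
# hypothetical examples.
from collections import defaultdict

dependencies = [
    ("web-01", "app-01"),    # web tier calls the app tier
    ("app-01", "db-01"),     # app tier reads from its database
    ("report-01", "db-02"),  # unrelated reporting stack
]

def migration_bundles(edges):
    """Return sets of hosts that must migrate together (connected components)."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, bundles = set(), []
    for host in graph:
        if host in seen:
            continue
        stack, bundle = [host], set()
        while stack:  # iterative depth-first traversal of one component
            node = stack.pop()
            if node in bundle:
                continue
            bundle.add(node)
            stack.extend(graph[node] - bundle)
        seen |= bundle
        bundles.append(bundle)
    return bundles

bundles = migration_bundles(dependencies)
# The web/app/db trio forms one bundle; the reporting pair forms another.
```

Treating bundles this way catches transitive dependencies, such as a web server that never talks to the database directly but still cannot be migrated without it.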
Mistake #2 (Planning Phase): Unclear Leadership
The most common mistake made during the planning phase is failing to establish clear leadership. This means identifying someone who is responsible for communicating clearly and definitively across all teams at all stages of the migration process.
A single department leading the way will, by default, look out for their best interest. The project leader must be an impartial party who understands and accurately reflects each group’s objectives and success criteria. This person must also have the authority to demand execution and the communication skills to keep everyone on the same page.
Mistake #3 (Development Phase): Not Recognizing Dependencies
The most common mistake made during development is upgrading parts of the infrastructure stack without recognizing the resulting dependencies. Of course, there’s nothing wrong with upgrading outdated components during a migration. New network equipment, for instance, is easily implemented, as are transitions from physical to virtual. However, when a migration has fractional upgrades, it’s easy to overlook the trickle-down impact of these changes.
Pro Tip: Be sure that any upgrades and their dependencies are noted for specific testing during the validation phase.
Mistake #4 (Validation Phase): Skipping Business Validation
The most common mistake made during the validation phase is skipping business validation.
During validation, IT, security, and network-engineering teams are typically heads down, hammering through checklists—and the business is ignored.
Changes inevitably happen due to unforeseen requirements. Application upgrades, for instance, have a habit of suddenly coinciding with a migration. Be sure the business understands all of the changes that can directly impact day-to-day operations. Investing 30 minutes here is going to save you days as you enter the next phases.
Mistake #5 (Migration Phase): Underestimating The Migration Timeline
A common mistake made during the actual migration is failing to set realistic time expectations. Production migrations are inherently slower than test migrations, as they require more care and attention to detail.
It’s always smart to pad your move time.
Pro Tip: Use the test migration to assess actual migration times. You’ll be surprised how long some applications take. Don’t just expect delays—plan for them.
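That pro tip turns into a simple planning calculation: take the per-application timings from the test migration and pad them before committing to a production window. A sketch follows; the application names, timings, and the 50 percent padding factor are illustrative assumptions, not a rule.

```python
# Derive a production migration window from test-migration timings.
# The timings and padding factor below are illustrative assumptions.

test_timings_hours = {
    "crm-app": 2.0,
    "file-shares": 5.5,
    "erp-db": 8.0,
}

PAD_FACTOR = 1.5  # assume production runs ~50% slower than the test migration

def padded_estimate(hours, pad=PAD_FACTOR):
    """Pad a single application's test timing for production."""
    return hours * pad

def production_window(timings, pad=PAD_FACTOR):
    """Total padded hours if applications migrate sequentially."""
    return sum(padded_estimate(h, pad) for h in timings.values())

print(f"Plan for at least {production_window(test_timings_hours):.1f} hours")
# 15.5 hours of test timings become a 23.25-hour production plan:
# roughly a full day, not an evening maintenance window.
```

The point is not the exact factor but the discipline: the window you request from the business comes from measured data plus explicit padding, not optimism.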
Mistake #6 (Management Phase): Setting It And (Actually) Forgetting It
The most common mistake made during the management of your new infrastructure is adopting the “set it and forget it” mentality. Everyone is so excited to be done that they immediately wash their hands of the migration. You want to be sure you have both technical and business hands, eyes, and ears on everything while it settles.
Pro Tip: After completing the migration, plan on spending at least 48 hours on proactive monitoring and support.
Mistake #7 (Scale Phase): Thinking You’re Done
The most common mistake made during the scale phase is losing momentum now that you’re done.
In the words of Yogi Berra, “If you don’t know where you are going, you’ll end up someplace else.” These are words to live by when planning beyond your migration. Set an annual plan, maintain quarterly reviews, and develop a process for ad-hoc infrastructure requirements.
You’ve just invested significant time, energy, and money into executing a difficult (though critical) process. Don’t lose the energy or attention to detail now that it’s over.
This is not the only model for data center migrations, and these certainly aren’t the only mistakes people make. The important thing here is to constantly update processes as your technology, operating requirements, and experiences change.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

9:36p
Walmart Open Sources Cloud Management Platform 
By The VAR Guy
Walmart became the latest major business to embrace the open source cloud this week with the release on GitHub of OneOps, the company’s formerly closed-source cloud management and application lifecycle platform.
OneOps is a platform for building and launching cloud-based applications across varied and changing environments. It offers a way to deploy apps on different providers’ platforms, from Microsoft Azure, Rackspace and CenturyLink public clouds to private or hybrid environments built using OpenStack.
Read more: Why Should Data Center Operators Care about Open Source?
The main selling point of OneOps for businesses is that it lets organizations switch between different providers easily to take advantage of changes in pricing, features and scalability. Meanwhile, for developers, it makes it easier to build and deploy cloud apps in a vendor-agnostic way.
Walmart says it chose to open source OneOps in order to gain contributions from the open source community as the platform continues to develop.
“Why open source? Walmart is a cloud user, not a cloud provider,” the company said in a statement. “It makes sense for Walmart to release OneOps as an open source project so that the community can improve or build ways for it to adapt to existing technology.”
Walmart added that it has already contributed other code to the open source world. “We are no stranger to open source. We’ve been an active contributor, releasing technologies such as Mupd8 and hapi with the community. We’ve loved using and contributing to other popular open source projects like React, Node.js and Openstack.”
These days, the decision by major companies to open source their code is becoming a dog-bites-man story: so common that it is no longer remarkable. The bigger story here is OneOps’s availability as a new tool for the open source cloud community as developers work to keep their applications compatible with the growing and ever-changing array of cloud providers.
This first ran at http://thevarguy.com/open-source-application-software-companies/walmart-open-sources-oneops-cloud-application-management-