Data Center Knowledge | News and analysis for the data center industry
Wednesday, July 27th, 2016
Navigating the Data Center Networking Landscape
Today’s networking layer has become one of the most advanced infrastructure components in the data center. We are far beyond simple route tables and basic traffic engineering. Now, we’re creating contextual policies around information, users, applications, and entire cloud infrastructure components. We’ve created automation at the networking layer and have even completely abstracted the data and control planes via next-generation SDN.
Administrators today are tasked with creating a much smarter networking layer – one capable of keeping up with some of the most advanced business and IT demands. In a recent Worldwide Enterprise Networking Report, IDC pointed out that virtualization continues to have a sizable impact on the enterprise network. IDC expects these factors to place unprecedented demands on the scalability, programmability, agility, analytics, and management capabilities of enterprise networks. The firm predicts that in 2016, overall enterprise network revenue will grow 3.5 percent to reach $41.1 billion.
It’s really no surprise that these new technologies will have a major impact across the entire enterprise networking layer. Most of all – these systems will change the way businesses create go-to-market strategies and where next-generation networking technologies can make a difference.
In one of my recent articles here on DCK, we defined the overall SDN landscape, examining technologies like NSX, ACI, and even open SDN systems. Today, we take a step back and look at four data center networking components impacting the modern business:
- Traditional data center networking.
- Software-defined networking.
- White/brite box networking.
- Cloud networking.
In working with these various networking options – it’s critical to understand use-cases and where these types of systems actually fit in with your data center. Remember, oftentimes there isn’t one solution that will fit all requirements. Many organizations will deploy multiple networking technologies spanning SDN, cloud, and even traditional technologies to keep up with business and market demands.
- Traditional Data Center Networking. There have been many advancements in the traditional networking layer. Cisco, Arista, Juniper, Brocade, HPE, and Dell are continuing to develop integrated networking systems for the modern data center. Yes – there are other data center network manufacturers – but for the sake of keeping the article’s length manageable, it’s more important to understand overall capabilities than individual vendors. Organizations like Cisco bank on the fact that you can integrate the entire network and fabric backplane with their networking technologies. Of course, you’ll get the biggest benefits when you’re running a complete Cisco ecosystem. However, if you’re working with smaller offices, branch locations, or even a smaller number of users – it’s critical to look at other technologies which can make an impact. Dell and HPE, for example, have been known to provide solid solutions at great price points. Or, let’s assume you need a massive amount of throughput at your data center layer. Solutions from Juniper and Arista can help organizations looking to pass massive amounts of traffic through the networking layer. Even though there are similarities between these major vendors – traditional data center networking technologies are powerful in the use-cases they support. So, if you’re working with convergence – look for networking technologies which can support it natively; Cisco, HPE, and Dell may be great options. Ultimately, traditional networking is a lot smarter and a lot more powerful than it used to be. These architectures can act as direct extensions of the data center, even incorporating things like SDN into the ecosystem.
- Software-Defined Networking (SDN). This brings us to the next point. SDN has become a big technology for many organizations looking to build more intelligence into the networking layer. There are a few key considerations here. Vendors like Cisco (ACI) and Arista (EOS) have taken SDN to the next level by integrating it into their hardware and software stacks. From there, vendors like VMware (NSX) have integrated SDN control into the entire virtualization ecosystem. Finally, open source and open SDN vendors like Cumulus Networks, Plexxi, and Big Switch are creating powerful networking overlays and integration technologies for even more use-cases. In this scenario alone you see three options to work with. If you have invested in existing data center technologies (like Cisco or Arista, for example), working with their SDN solution might make a lot of sense. It’ll integrate better, you’ll have less of a learning curve, and you’ll be able to leverage existing investments. Working with hypervisor-layer SDN makes sense if you’re very heavily virtualized and need to better control VM traffic. Remember, SDN at this layer doesn’t always provide the best visibility into the physical layer – but it will certainly help you control your distributed VM architecture. Finally, working with open SDN solutions is an up-and-coming trend. If you’re a service provider, or are trying to create your own customized networking environment, working with these types of SDN overlay technologies could do the trick. Just remember that these are newer technologies and there is still a bit of a learning curve.
- White/Brite Box Networking. Now, we break away from a lot of the name brands out there and discuss a big trend impacting a lot of data centers and service providers. Let’s quickly define white box versus brite box. Simply put, white box networking delivers bare commodity switch hardware – the underlying hardware is yours to build on with the network OS of your choice. An example would be a QuantaMesh switch. Brite box networking is essentially a branded white box solution – for example, running Cumulus Linux on top of a Dell, HPE, Mellanox, or even SuperMicro switch. The interesting part here is that adoption of white/brite box is clearly trending up. Gartner recently pointed out that by 2018, non-traditional switching will account for more than 10 percent of global data center port shipments, up from under 4 percent in 2013. That said, this type of networking architecture has yet to see real mainstream adoption. For now, it really is limited to the big service providers, cloud hosts, and those organizations creating customized hyperscale solutions. Still, there are big cost-saving opportunities here and the ability to create powerful custom business solutions. If you’re looking to work with these types of systems – make sure to line up support and a good partner who can help out, and ensure you have the right use-case.
- Cloud Networking. It’s a big, bold, cloudy world out there and networking is certainly playing its part. Cloud networking has become a big initiative for those organizations scaling their data centers into a cloud ecosystem. There are numerous platforms which help extend data center networking components into the cloud. Let me give you an example – OpenStack Neutron. Neutron creates a network-as-a-service environment which interfaces between your data center networking devices and those managed by other OpenStack services and resources. So, if you’re trying to extend your data center into the cloud, and then further integrate your cloud with other cloud services and hosts, working with OpenStack as an overall management solution might make sense. Maybe you’re trying to build a multi-tier application delivery architecture, or creating your own advanced networking services that plug into the OpenStack ecosystem – in either case, Neutron might be the right answer (see the short provisioning sketch after this list). These types of cloud networking technologies help create advanced provider networks which give you granular controls and very powerful network customization capabilities. So, if you’re working with cloud – look at cloud networking solutions to help simplify management and allow you to further integrate with advanced cloud solutions.
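To make the Neutron example more concrete, here is a minimal sketch using the openstacksdk Python library to provision a tenant network, subnet, and router programmatically. The cloud name, resource names, and address range are illustrative assumptions, not values from the article, and an "public" external network is assumed to already exist.

```python
# Minimal sketch: provisioning a tenant network through OpenStack Neutron
# via openstacksdk. Cloud name, resource names, and CIDR are placeholders.
import openstack

# Reads credentials for a hypothetical "mycloud" entry from clouds.yaml.
conn = openstack.connect(cloud="mycloud")

# Create an isolated tenant network and an IPv4 subnet on it.
network = conn.network.create_network(name="app-tier")
subnet = conn.network.create_subnet(
    network_id=network.id,
    name="app-tier-subnet",
    ip_version=4,
    cidr="10.10.1.0/24",
)

# Attach the subnet to a router uplinked to an existing provider (external)
# network, giving the application tier a path out of the cloud.
external = conn.network.find_network("public")
router = conn.network.create_router(
    name="app-tier-router",
    external_gateway_info={"network_id": external.id},
)
conn.network.add_interface_to_router(router, subnet_id=subnet.id)

print(f"Network {network.id} routed via {router.id}")
```

The same pattern scales to multi-tier designs: repeat the network/subnet creation per tier and attach each subnet to the shared router, or to separate routers if the tiers need isolated egress policies.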
Moving forward, we’re going to see even more data pass through the networking layer. Dependency on networking systems will grow as more organizations deploy dense virtualization solutions and continue their migrations into the cloud. “The network is becoming a more strategic element of many business strategies and, in many cases, is the backbone of the business moving forward,” said Rohit Mehra, vice president, Network Infrastructure at IDC. “In 2016, we will see that manifest as enterprise IT decision makers seek new technologies that can create new efficiencies by shifting the physical to the virtual, leveraging APIs, and building better pathways to customer engagement while maximizing the value of pre-existing networking deployments.”
Always make sure to work with networking technologies which support your business functions. Most of all – ensure that these technologies create a more agile data center infrastructure, one which allows your users to be a lot more productive.
T5 Data Centers, Colovore Back ZeroStack Clouds
When ZeroStack first revealed its customer value proposition about a year ago, it made the case for helping customers move their critical workloads out of the public cloud – where scaling up may have incurred more expenses than they expected – and into their private data centers. But this week, the company made a critical adjustment to that proposition, bringing in two well-known managed service providers and arguing that it may make just as much sense to move from a public cloud stack to a managed private one.
“How about I give you a one-liner back: All roads lead to Rome,” said ZeroStack Vice President of Marketing Steve Garrison. In an interview with Datacenter Knowledge, Garrison told us he’s now opening up a migration path for public cloud customers moving to colocation MSPs – and, in the process, perhaps saving as much as two-thirds of the cost.
T5 Data Centers’ 16 MW Los Angeles facility, opened in 2013, will be one location making room for incoming ZeroStack customers. Colovore’s 9 MW facility in Santa Clara, announced in 2013 and opened for business the following year, will be the other.
The plan is for ZeroStack to make colocation an option for customers who want to bring their critical business assets back within their own firewalls. As Garrison explained, the objective of these partnerships will be to offer both the expertise and the management skills that some enterprises may still lack.
“I’m sure you’ve heard the stories of people stealing OpenStack teams, the battle between Walmart and Comcast being one,” he reminded us. “OpenStack promises to be a framework where you get agility, but you have a whole lot of heavy lifting. It’s a framework; it’s not a solution. You have to make it a solution.
“The cost of Amazon is great,” he continued, “but as you scale up and go into production, it can become out of control and very costly — several thousand a month, when you started out with a few hundred a month. And to build an OpenStack team, we’re pretty confident that you need several engineers.”
Under the new MSP arrangement, ZeroStack will provide the engineering expertise, while T5 or Colovore provides the capacity.
“What we’re here to do is provide scalable infrastructure,” said Craig McKesson, T5’s executive vice president, speaking with Datacenter Knowledge, “in a true, mission-critical environment within which they [ZeroStack] host their platform. One of the primary benefits of ZeroStack is the ability to re-capture your compute and storage from the public space, and bring it back private.
“By deploying within a true service provider facility,” McKesson continued, “it now becomes much more robust, resilient, and secure.”
McKesson described T5’s Los Angeles facility as surpassing Tier-3 status. It’s served by a 16.5 MW on-site substation, is equipped with static transfer switches for fault tolerance, and has the same seismic reinforcement rating as a modern hospital, he said. Cooling is provided by a high-efficiency Munters evaporative cooling unit, coupled with El Segundo’s native coastal climate, for a PUE he claims falls in the vicinity of 1.3.
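For context on that PUE figure, power usage effectiveness is simply total facility power divided by the power delivered to IT equipment, so 1.3 means roughly 30 percent overhead on top of the IT load. A quick back-of-the-envelope sketch (the kilowatt figures below are made up for illustration, not T5’s measured loads):

```python
# Power Usage Effectiveness: total facility power / IT equipment power.
# The kW figures are illustrative only, not T5's actual loads.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

it_load_kw = 10_000      # hypothetical IT load
overhead_kw = 3_000      # hypothetical cooling, power conversion, lighting
print(round(pue(it_load_kw + overhead_kw, it_load_kw), 2))  # -> 1.3
```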
T5 will be deploying ZeroStack’s multi-tenant Enterprise Solutions Suite, with the ability to scale at densities of 25 kW per cabinet, McKesson told us. “It’s really the L.A. area’s only true, purpose-built, mission-critical data center.”
It’s here where T5 will be hosting ZeroStack’s Z-Block Cloud Appliances. ZeroStack’s Garrison told us his company based its pricing model on a Z-Block appliance whose storage and compute capacity would cost roughly $15,000 per month to lease from Amazon AWS. Dividing that figure by about three, ZeroStack came up with a price of $4,999 per month.
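The arithmetic behind that price point – and the roughly two-thirds saving mentioned earlier – is easy to verify. A small sketch using only the figures cited in the article (the annual roll-up is just an illustration):

```python
# Comparing the article's cited figures: ~$15,000/month for equivalent AWS
# capacity versus ZeroStack's $4,999/month Z-Block price.
aws_monthly = 15_000
zerostack_monthly = 4_999

saving = 1 - zerostack_monthly / aws_monthly
print(f"Monthly saving: {saving:.0%}")  # ~67%, about two-thirds
print(f"Annual difference: ${(aws_monthly - zerostack_monthly) * 12:,}")
```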
On ZeroStack’s near-term roadmap, he told us, may be options for adding dedicated fiber and virtual load balancing. T5’s McKesson said his firm’s managed services and Direct Cloud Connect console will be included with the deal.
“As technology continues to evolve, we’re totally agnostic as to what’s actually running,” explained Garrison. “We can support scalable high densities as a result of our infrastructure, so from our perspective, it’s like, bring it on. The higher the density, the better.”
2 Virginia Data Centers Acquired by Lincoln Rackhouse
Two weeks ago, Netrality acquired a four-property portfolio of facilities from Digital Realty Trust that included a St. Louis carrier hotel complex in a prime location, once known as the Bandwidth Exchange. But another part of that portfolio included two smaller facilities in Virginia: one in Reston, and the other a few miles west in Herndon.
This week, Lincoln Rackhouse confirmed to Datacenter Knowledge that it has acquired those two Virginia properties from Netrality, with plans to lease space and capacity in both locations immediately.
The larger of the two facilities, in Herndon, is located at 251 Exchange Place, within a stone’s throw of the Dulles Toll Road. It’s a nearly 71,000 square foot complex that Digital Realty acquired a decade ago as part of a nearly $58 million capacity expansion plan.
A Lincoln Rackhouse spokesperson confirmed to Datacenter Knowledge that the Herndon data center had been occupied in its entirety by Level 3 Communications for the last 15 years, but is now completely vacant. Most recently, its principal tenant referred to this facility as “Level 3 Herndon 2,” though a 2014 Level 3 flyer [PDF] referred to it as “Herndon1.”
Records show the Herndon property as supporting 23,424 square feet of colocation space, and Lincoln Rackhouse tells us the space supports 5 MW of power. For a new tenant looking to expand close to the nation’s capital, the spokesperson said, Herndon presents “a blank canvas.”
The smaller facility, at 1807 Michael Faraday Court in Reston — on the same side of the freeway as Herndon 2 — was constructed in 1982, according to records. It has over 11,000 sq. ft. of raised floor space, and while recent records show the building has 900 kW of capacity, with redundancy supplied by 2N inertial systems, Lincoln Rackhouse tells us that capacity has increased to 1 MW.
Digital Realty acquired the Reston complex in 2006 from AboveNet, following a bankruptcy filing by AboveNet’s parent firm. While a Lincoln spokesperson confirmed the Reston property is partially occupied, it declined to provide the name of the tenant.
In a study prepared for Lincoln Rackhouse and released last April, citing data from commercial real estate service provider Jones Lang LaSalle, Trepp researcher Susan Persin listed northern Virginia as one of the five top data center markets for supplying increasing demand in 2015. The other four markets were Dallas, Seattle/Portland, San Francisco/Silicon Valley, and Chicago.
Last January, Level 3 announced it was opening a new data center in Cali, Colombia, after having told investors the month before that it had no plans to sell off any of its data center assets.
HostingCon 2016: The Silk Road Takedown, and Why Hosts Should Know their Local FBI Agent
Brought to You by The WHIR
Remember the basics when it comes to security, and take your local law enforcement out for lunch. These are two strategies that will help service providers deal with the increasing security risks and immediate threats to their businesses, according to industry experts who spoke at HostingCon this week.
It is critical to get to know your local law enforcement before there is an issue and they show up at your data center with a search warrant. Doing so can help them understand your business better, and what your policies are, Jane Shih, assistant general counsel at Endurance International Group, said in a panel on Tuesday.
“The best way to not have FBI come in and take a whole rack of servers is education,” said David Snead, general counsel of cPanel, in agreement.
Jay Sudowski, CEO of Handy Networks, says that providing education for staff is also important, so that in the event the FBI does come knocking, they are prepared for what to do.
So who are these FBI agents and what are they like? The HostingCon audience got a peek behind the curtain at what the FBI sees in capturing some of the world’s most wanted cyber targets – including hackers behind LulzSec and Anonymous. Chris Tarbell, a former FBI agent involved in the Silk Road bust, spoke on Tuesday about his career in the FBI, where he started in computer evidence and international terrorism before becoming involved in cybercrime.
These early career stints were instrumental in learning where evidence is stored on a computer and how to find it, as well as the importance of log information, he says.
In 2010, Anonymous moved higher on the FBI’s radar after its Operation Payback, in which the hacking group launched massive DDoS attacks against payment providers like Visa, PayPal, and MasterCard after they cut off support for WikiLeaks.
Around the same time, HBGary Federal sought to deanonymize the hacking group, only for Anonymous to hack CEO Aaron Barr’s email and, within 20 minutes, shut down his entire online life, Tarbell says. Shortly after that, Barr was forced to resign – one of many examples of the true cost of cybercrime.
In 2011, another hacking group, called LulzSec, started to make headlines for its attacks on targets such as Sony, Fox, and the CIA.
Tarbell describes getting a tip from another hacker – a kid in New Jersey who said he knew Hector Xavier Monsegur, aka Sabu, the leader of LulzSec. The tipster knew only that Sabu lived in New York, but that was enough for the FBI.
“We dug through all the logs, we found one IP address that was in New York: it was Hector,” he says.
Once agents tracked him down to his apartment, Sabu spent two hours trying to convince the FBI that he didn’t know anything about computers. He eventually agreed to become an informant for the FBI, teaching agents how groups like LulzSec hack.
With Sabu’s help, the FBI was able to arrest Jeremy Hammond, one of its most-wanted cybercriminals, who used TOR to protect his identity, and, in 2013, Silk Road founder Ross Ulbricht, aka Dread Pirate Roberts. Silk Road was a $1.2 billion website that operated on TOR and used bitcoins so money couldn’t be traced. The online marketplace offered hacking services, murders for hire, and drugs. Ulbricht is currently serving a life sentence with no chance of parole.
So what advice does Tarbell have for hosting providers when it comes to security? Bring it back to the basics. The number of hacks that happen because users reuse the same password across multiple sites is staggering. Simple tweaks can help prevent an organization or hosting end-user from being a hacker’s next target.
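One of those basics is never reusing a password across services. As a hedged illustration (not something Tarbell presented), here is a tiny Python sketch that generates a distinct random password per service and flags any accidental reuse; the service names are placeholders.

```python
# Illustrative sketch of the "don't reuse passwords" basic: generate a
# unique random password per service and detect any duplicates.
import secrets
from collections import Counter

services = ["hosting-panel", "billing", "email", "registrar"]
passwords = {service: secrets.token_urlsafe(16) for service in services}

reused = [pw for pw, count in Counter(passwords.values()).items() if count > 1]
print("Reused passwords detected!" if reused else "Every service has a unique password.")
```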
This post was first published by The WHIR.