Data Center Knowledge | News and analysis for the data center industry
 

Thursday, May 4th, 2017

    12:00p
    The Fat New Pipes that Link Facebook Data Centers

    Earlier this week Facebook announced that for some time now, traffic between its data centers has been carried by a network backbone separate from the one that connects those data centers to the public internet – a major architectural change for the social network, which for 10 years had moved both types of traffic over a single backbone.

    The amount of traffic traveling between Facebook data centers today is many times greater than the amount of traffic traveling between its infrastructure and the internet, where its 2 billion monthly active users access the social network. And while Facebook’s traffic to the internet has grown slowly in recent years, the bandwidth required to replicate rich content like photos and videos across multiple data centers has skyrocketed, forcing the company’s engineers to rethink the way its global network is designed.

    Along with implementing a whole new global backbone, dubbed Express Backbone, or EBB, Facebook’s infrastructure team also designed a new control stack for managing traffic flows on it, taking lots of cues from the way network fabric inside its data centers is designed.

    Traffic growth on Facebook’s backbone networks in recent years (Image: Facebook)

    See also: New Six Pack Switch Powers Facebook Data Center Fabric

    Facebook is not unique in having separate backbones for internal and external traffic. Google has B2, an internet-facing backbone, and B4, an inter-data center backbone. But Google is just one example, albeit a massive-scale one.

    “All of the hyper-scalers and many financial services companies have private fiber networks that traverse long distances between [data centers] but are not (directly) connected to the public internet,” Kyle Forster, founder of the software-defined networking startup Big Switch Networks, said via email. Internal traffic often runs on a company’s own fiber, which for a hyper-scale platform can be cheaper than using service providers, he added.

    Web-oriented companies often use this type of network design to isolate “dirty” network activity from “clean network” activity, explained JR Rivers, co-founder and CTO of Cumulus Networks, which sells a Linux-based operating system for data center networks. In the mid-2000s, Rivers worked at Google, where he was involved in designing the company’s home-grown data center network.

    Having separate backbones for internal and external traffic helps companies “provide quality of service, denial-of-service and intrusion protection, and network address isolation,” Rivers wrote in an email. “Some even use optimized network protocols on their ‘inside’ networks, which is enabled by this physical isolation.”

    The innovative part of Facebook’s announcement is the control stack its engineers built to manage EBB. According to a company blog post, it includes:

    • Centralized (and highly redundant) ensemble of BGP-based route injectors to move traffic on/off the network
    • sFlow collector, which gathers sFlow samples from the network devices and feeds the active demands into the traffic engineering controller
    • Traffic engineering controller, which computes and programs optimum routes based on the current demand set
    • Open/R agents running on network devices to provide IGP and messaging functionality
    • LSP agents, also running on network devices, to interface with the device forwarding tables on behalf of the central controller

    See a detailed technical description of the control stack here.
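
    To make that division of labor concrete, here is a minimal, purely illustrative sketch of the controller’s role: take a demand matrix of the kind an sFlow collector might report and map each demand onto a path, tallying the load each link would carry. The topology, demand figures, and function names are invented for illustration and are not drawn from Facebook’s EBB code.

```python
# Illustrative sketch only: a toy "traffic engineering controller" that takes a
# demand matrix (as an sFlow collector might report it) and assigns each demand
# to the current shortest path, tallying per-link load. Topology, costs, and
# demands are invented; this is not Facebook's EBB code.
import heapq
from collections import defaultdict

# Directed link -> cost (think of nodes as backbone PoPs).
LINKS = {
    ("A", "B"): 10, ("B", "A"): 10,
    ("B", "C"): 10, ("C", "B"): 10,
    ("A", "C"): 25, ("C", "A"): 25,
}

def shortest_path(src, dst):
    """Plain Dijkstra over the LINKS map; returns a list of nodes."""
    dist, prev, seen = {src: 0}, {}, set()
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        if node == dst:
            break
        for (u, v), cost in LINKS.items():
            if u == node and v not in seen and d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                prev[v] = node
                heapq.heappush(heap, (dist[v], v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def program_routes(demands):
    """Map each (src, dst, Gbps) demand onto a path and sum per-link load."""
    link_load = defaultdict(float)
    routes = {}
    for src, dst, gbps in demands:
        path = shortest_path(src, dst)
        routes[(src, dst)] = path
        for hop in zip(path, path[1:]):
            link_load[hop] += gbps
    return routes, link_load

# Toy demand set, e.g. as derived from sFlow samples.
demo_demands = [("A", "C", 40.0), ("A", "B", 15.0), ("B", "C", 20.0)]
routes, load = program_routes(demo_demands)
print(routes)       # {('A', 'C'): ['A', 'B', 'C'], ...}
print(dict(load))   # per-link Gbps a real controller would check against capacity
```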

    Building such a system requires advanced technical skills and significant investment in innovation, Ihab Tarazi, CTO of Equinix, commented via email. “This architecture includes the development of automated network routing, management, and operations tools that are very sophisticated. It also leverages open source hardware design. More importantly, this is a whole new culture and agile DevOps model that is unique to Facebook.”

    While the approach is particularly well suited to hyper-scale platforms, they are not the only type of company using it. Cumulus works with numerous customers of varying scale who have similar architectures, according to Rivers. “Some with smaller scale will use virtual networks to provide this isolation, and others will have separate physical networks,” he said.

    In the enterprise space, however, the architecture is becoming less common, said Big Switch’s Forster: “While numerous large enterprises used to have these private backbones, it went out of fashion over the last ten years, as prices from major telcos have come down, and the numbers are dwindling.”

    3:37p
    Cisco Buying SD-WAN Startup Viptela May Herald a Sunset for MPLS

    Theoretically, a software-defined wide-area network (SD-WAN) is a convenient mechanism for linking broadly dispersed branch offices into one virtual unit, and then operating upon that unit like a single data center.  Theoretically.

    Cisco had a strategy for executing SD-WAN and moving large data workloads across dispersed facilities.  It involved a type of overlay network built around IPsec (Internet Protocol Security) VPNs, called Dynamic Multipoint VPN (DMVPN), and it typically involved one means or another of rendering obsolete an old IP routing shortcut protocol called Multiprotocol Label Switching (MPLS).

    Nothing underscores the fact that Cisco’s approach has encountered too many dead ends better than this week’s announcement that the company would seek to acquire Viptela, a five-year-old, San Jose-based company devoted exclusively to SD-WAN technology, in a deal valued at $610 million.

    Spin Out

    Last January, Viptela snatched Praveen Akkiraju, formerly the CEO of Cisco co-owned VCE (maker of Vblock), as its own CEO.  Akkiraju had spent most of the previous 24 years working either at Cisco or a Cisco-owned property, prior to EMC’s acquisition last year of the majority of Cisco’s stake in VCE.

    Just last month during the Google Cloud Next conference, Akkiraju told SiliconAngle’s The Cube why MPLS, from his perspective, represented a 20-year-old networking mindset that failed to take the cloud into account.  Suppose a user of a SaaS application, he explained, wanted it to access data presently stored in a branch office.  Should that data be routed back through the main data center, and then to the public cloud — as MPLS would mandate?

    “Most branches today have internet connections that are faster than anything MPLS VPN can provide,” he argued, citing one Viptela customer’s estimate of the per-megabit cost for MPLS data routing at $200.  Relying on the internet alone would cost the same customer $2 per megabit.

    The concept of MPLS worked on ATM-switched networks at first.  But around the turn of the century, MPLS became the original “internet fast lane” for ISPs that wanted to get into the data-shuttling business, using IP as its conduit.  Technically speaking, it established a dedicated route between two hosts (say, between a branch office and a central data center) and steered packets onto that route by attaching labels to them.  Those labels put packets on the fast lane and gave ISPs an opportunity to charge premiums.  One of the earliest arguments about net neutrality ever to come before a legislative or regulatory body involved this very practice.
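
    As a rough illustration of that labeling mechanic, a label-switched path can be modeled as a chain of per-router label tables: each hop looks up the incoming label, swaps it for an outgoing one, and forwards toward a fixed next hop without consulting the IP routing table. The router names, labels, and tables below are invented for illustration and do not correspond to any vendor’s implementation.

```python
# Toy model of an MPLS label-switched path (LSP). Router names and label values
# are invented for illustration; real label-switching routers do this lookup in
# hardware.

# Per-router label forwarding table: incoming label -> (outgoing label, next hop).
LABEL_TABLES = {
    "provider-1": {100: (200, "provider-2")},
    "provider-2": {200: (300, "dc-edge")},
    "dc-edge":    {300: (None, "deliver")},   # egress pops the label
}

def forward_over_lsp(ingress_label=100, first_hop="provider-1"):
    """Swap labels hop by hop along the LSP until the egress router pops the label."""
    label, router = ingress_label, first_hop
    hops = ["branch-edge"]           # ingress router that pushed the first label
    while label is not None:
        hops.append(router)
        label, router = LABEL_TABLES[router][label]
    hops.append(router)              # "deliver": hand the plain IP packet to its destination
    return hops

print(forward_over_lsp())
# ['branch-edge', 'provider-1', 'provider-2', 'dc-edge', 'deliver']
```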

    Cisco continues to offer managed MPLS VPN service today and has touted MPLS as a modern component of its “Intelligent WAN” architecture as recently as last April 13.  At the same time, it’s been pushing its DMVPN service as an alternative means for enabling a secure wide data tunnel between dispersed branches and the home office.  Seemingly depending at times upon whether the day was numbered odd or even, DMVPN may or may not have been the core of Cisco’s SD-WAN portfolio.

    In an August 2015 presentation to the analysts gathered for Tech Field Day that year, Jeff Reed — who at the time oversaw Cisco’s SD-WAN efforts, and who is now senior VP for its security products — admitted that adopting new networking technology had been too hard for customers.  “We lacked an abstraction layer at the network,” he told them, a gap that contributed to the calcification of the entire stack at the network layer.  Deploying new network capabilities required upgrading the embedded operating systems in switches.

    “Out of that spawned this idea that we should fundamentally rethink how we’re building networks,” Reed said, “in terms of what functionality should live at the device layer; what functionality should we push to the controller layer; and even, over time, what functionality should live in the cloud?”

    Those questions were the catalyst, to coin a phrase, for the company’s SD-WAN strategy.  DMVPN appeared to address that strategy with a reasonable, practical model, in which branch offices became spokes connected to the central office hub.

    But Cisco found itself re-articulating that strategy, introducing it all over again to the same group of analysts the very next year.  By 2016, the goal was delivering application-level functionality across disparate network endpoints, which seemed clear enough.

    But in that re-articulation, MPLS reared its head (or headed its rear) once again, as an option for binding virtual VPN units to link Cisco Cloud Bridges.  Neither Overlay Transport Virtualization, nor any of the other proposed Cisco protocols to render MPLS obsolete, made an appearance.

    Spin In

    It was getting to the point where business and financial analysts — not even network analysts — were speculating as to whether there was a more conspiratorial logic behind Cisco keeping MPLS alive.  Business Insider had been following the careers of several Cisco engineers (had it dug a half-hour longer, it would have realized there were four of them, not three) whose career paths would have them officially leaving the company to form startups, only for those startups to be promptly acquired by Cisco, and Cisco alone, for considerable sums.  The process was called “spin-in,” and the common bond between Mario Mazzola’s, Prem Jain’s, Luca Cafiero’s, and Soni Jiandani’s accomplishments appeared to be as simple as the initials of their given names: M.P.L.S.

    In June 2016, all four members of team M.P.L.S. resigned from Cisco, in the midst of a management re-organization ordered by new CEO Chuck Robbins.  In a parting memo to his former staff, Mazzola explained to the world that the startup process was not a conspiracy at all, but a policy inspired by former CEO John Chambers.  “We entered the data center market with the first spin-in, Andiamo Systems, which became the MDS SAN product line,” Mazzola wrote.  “The creative model of the spin-in demonstrated that by tying specific execution timelines, revenue and profitability targets to engineers’ compensation a new market could be opened for the company with minimal financial risk to Cisco.”

    But Mazzola also acknowledged that his fellow engineers — particularly Prem Jain — joined him as champions of MPLS, the protocol.  Their interests in building business units around technologies based upon MPLS may explain why Cisco could simultaneously offer DMVPN-based SD-WAN, along with a second SD-WAN option around its Meraki line of wireless appliances, while proceeding with a strategy that continued to tout MPLS as its principal tool.

    Meanwhile, as Viptela continued its evolutionary path, it compared its SD-WAN not to the MPLS scheme, but to DMVPN.  The Achilles’ heel of that latter approach, wrote Viptela CTO Khalid Raza (another former Cisco Distinguished Engineer... see if you spot a trend forming), is that it relied upon the creation of an artificial subnet for simulating router adjacency — a subnet which introduced fixed IP addresses into the mapping scheme.  With Viptela’s unified control plane approach, by contrast, no artificial subnet is needed.

    Which brings us to this week, as plans begin for Cisco to acquire a company currently led by two of the most distinguished engineers formerly in its employ.  Maybe spin-in is not dead after all.

    4:57p
    Report: Facebook Plans Another Huge Expansion at Texas Data Center Campus

    Just as the racks for the first building arrived at the new Facebook data center in Fort Worth in November 2016, the company announced additions to the $1 billion facility that would triple its size.

    Well, Mark Zuckerberg and company are at it again. Just days before the social media giant was set to unveil the completed data center to officials and the public, Facebook filed paperwork to spend another $267 million on a 500,000-square-foot addition to the campus, reported the Dallas Business Journal. A building permit was filed with the city on April 26.

    The Menlo Park, California-based company outlined preliminary plans for the second phase, which include 25,406 square feet of office space, a conferencing center and break room, a multipurpose room, space for mechanical equipment and a 219,989-square-foot data hall.

    Once completed, the campus will span five buildings totaling 2.5 million square feet of data center space on about 150 acres in Fort Worth—the largest such project under construction in North Texas. That’s a far cry from July 2015, when Facebook broke ground on what was then a three-building plan that would total 750,000 square feet.

    The data center is built on land purchased from a real estate company run by the eldest son of former presidential candidate Ross Perot (you may remember Perot as the most successful third-party presidential candidate since Theodore Roosevelt). The site will use power from a large wind farm on 17,000 acres of land in Clay County, about 90 miles away.

    According to Facebook, the second phase should be completed by the end of 2017.

    See also: Everything You Wanted to Know About Facebook Data Centers

    5:30p
    Five Tips for Deploying SQL in the Cloud

    David Bermingham is Senior Technical Evangelist at SIOS.

    For many organizations, moving applications that can tolerate brief periods of downtime to the cloud is a straightforward decision with clear benefits. The cost justification is usually easy to figure out and the cloud almost always comes out looking like a sound investment.  However, concerns about how to provide high availability and disaster protection in the cloud may make this decision more difficult for business-critical applications such as SQL.

    Understanding the facts can help you make informed decisions about moving applications to the cloud, while ensuring the important business operations that depend on them are protected from downtime and data loss. Here are five tips for deploying SQL in the cloud that can save businesses money and ensure the most value is derived from cloud deployment:

    Build Your Team

    When moving to the cloud you’re using the cloud provider’s infrastructure and servers, but you’re still going to need the same players on your team as you would on premises. This includes:

    • Network admins. The cloud has the concept of a virtual network, so you’re going to need people who understand routing, access control and networking in general.
    • Storage admins. There are a number of different storage options available in the cloud, and priorities will not necessarily be the same as on premises, so it will be important for storage admins to help analyze the options that may be right for the company.
    • Security admins. Security is a top concern when moving business-critical data to the cloud, so it’s important to have security experts on hand who understand the controls the cloud provider has in place as well as the different aspects of data protection, from encryption at rest to encryption in transit.
    • Database administrators (DBAs). DBAs are needed to install, configure and manage SQL Server just as they would on premises, but now running on cloud instances.
    • Developers. Whether on premises, in the cloud or using Platform as a Service (PaaS), you are always going to need developers to build your solutions.
    • Help desk. The help desk will need to learn a new paradigm, including not only new tools and technology but also the issues customers might run into in the cloud. These come on top of the issues customers already deal with on premises and range from network access to the cloud to monitoring cloud health.

    Understand Service Level Agreements

    One of the first things you need to understand when moving to the cloud is what service level agreement (SLA) the cloud provider is offering. The SLA identifies the agreed-upon services the cloud provider will deliver. If you don’t look very closely, you might think you can sit back and put your feet up once you’ve moved to the cloud, but you cannot assume the provider will keep your SQL Server deployment up and running.

    Companies should be looking for ways to deliver on their SLA commitments and provide the same level of availability protection for these business-critical applications in cloud environments as they do in traditional on-premises failover clustering environments. Your SLA is only as good as its weakest link, so it’s important to consider availability in all areas, including internet, cloud platform, geographic recovery, virtual machines, storage, SQL Server, application servers and AD/DNS.
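
    To make the “weakest link” point concrete, end-to-end availability is roughly the product of the availabilities of every dependency in the chain, so it is always worse than the weakest single component. The percentages in the sketch below are hypothetical placeholders, not figures from any provider’s SLA.

```python
# Rough composite-availability arithmetic: the end-to-end figure is the product
# of each dependency's availability, so it is always worse than the weakest
# single component. All percentages below are hypothetical placeholders.
component_sla = {
    "internet connectivity": 0.999,
    "cloud platform":        0.9995,
    "virtual machines":      0.9995,
    "storage":               0.999,
    "SQL Server layer":      0.999,
    "application servers":   0.999,
    "AD/DNS":                0.9995,
}

composite = 1.0
for availability in component_sla.values():
    composite *= availability

downtime_minutes_per_year = (1 - composite) * 365 * 24 * 60
print(f"Composite availability: {composite:.4%}")               # roughly 99.45%
print(f"Expected downtime: ~{downtime_minutes_per_year:.0f} minutes per year")
```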

    SQL License, BYOL, Pay As You Go, or PaaS?

    Another thing to consider is whether you should bring your own licenses, pay additional licensing fees, or use PaaS. Calculate the server licenses you need, compare the options against your requirements, and see which method is less expensive. It’s also important to consider whether you want to pay for the investment up front or pay a monthly cost over time. Looking at your company’s break-even point can help in making this decision. When renting a license, you will start spending more on SQL Server licensing somewhere around the two-year point. Every instance is different, but if you anticipate being in the cloud for more than two years, it’s often more cost-effective in the long run to bring your own license.
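
    A back-of-the-envelope comparison illustrates that break-even logic. The dollar figures below are placeholders invented for illustration, not actual Microsoft or cloud-provider rates; plug in real quotes before deciding.

```python
# Hypothetical break-even comparison between renting SQL Server licensing as part
# of a pay-as-you-go instance rate and bringing your own license (BYOL).
# All dollar figures are invented for illustration.
byol_upfront = 14_000.0            # one-time license purchase (placeholder)
byol_support_per_month = 250.0     # ongoing Software Assurance / support (placeholder)
rented_license_per_month = 800.0   # licensing premium baked into the instance rate (placeholder)

def cumulative_cost(months):
    byol = byol_upfront + byol_support_per_month * months
    rented = rented_license_per_month * months
    return byol, rented

for months in (12, 24, 36, 48):
    byol, rented = cumulative_cost(months)
    cheaper = "BYOL" if byol < rented else "pay-as-you-go"
    print(f"{months:>2} months: BYOL ${byol:>9,.0f} vs rented ${rented:>9,.0f} -> {cheaper}")

# With these placeholder numbers the crossover lands near the two-year mark the
# article describes: 14,000 / (800 - 250) is about 25 months.
```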

    Other questions to consider: Are you passing the cost off to customers? Service providers that are building a service and selling it to customers can fold the monthly cost of SQL Server into their offering; in that case it would make sense to rent licenses in the cloud. Can you take advantage of PaaS? How does SQL Server AlwaysOn impact licensing? How does instance size impact licensing? It’s important to address these questions up front so you’re able to assess all of your options ahead of deployment.

    Choose Your Instance Size Wisely

    Choosing a cloud instance size can be more complicated than you might think. There are a lot of different sizes, and providers typically offer more as time goes on. It’s also important to understand that one size doesn’t fit all; instance types differ in CPU count, memory, storage and network capacity. For example, one may have enough CPUs, but when you start looking at storage throughput you may really need the next level up. That can impact your decision, as shown in the sketch below. The benefit of the cloud, however, is that if you don’t get it right the first time, it is easy to resize; if you don’t get it right the first time with an on-premises server, it’s incredibly difficult to package it back up and exchange it for a new one.
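
    The “next level up” effect is easy to see in a small selection routine: the cheapest instance that clears every requirement, including storage throughput, may be larger than CPU and memory alone would suggest. The instance names, specs, and prices below are invented and do not come from any provider’s actual catalog.

```python
# Illustrative instance-size picker: choose the cheapest instance whose every
# dimension meets the workload's requirements. Names, specs, and prices are
# invented; consult your cloud provider's actual catalog.
CATALOG = [
    # (name, vCPUs, memory GiB, storage throughput MB/s, $/hour)
    ("m-medium", 4,  16,  250, 0.35),
    ("m-large",  8,  32,  500, 0.70),
    ("m-xlarge", 16, 64, 1000, 1.40),
]

def pick_instance(need_vcpu, need_mem_gib, need_storage_mbps):
    candidates = [
        row for row in CATALOG
        if row[1] >= need_vcpu and row[2] >= need_mem_gib and row[3] >= need_storage_mbps
    ]
    if not candidates:
        raise ValueError("no instance type satisfies the requirements")
    return min(candidates, key=lambda row: row[4])   # cheapest size that fits everywhere

# CPU and memory alone would fit on m-medium, but the storage-throughput
# requirement forces the next size up.
print(pick_instance(need_vcpu=4, need_mem_gib=16, need_storage_mbps=400))
# ('m-large', 8, 32, 500, 0.7)
```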

    Choose the Right HA and DR Solutions

    Most companies architect solutions to be highly available (HA) but do not always account for disaster recovery (DR). The cloud does not negate the need for HA, DR and backup. The concepts behind HA and DR are similar – make your systems available when you need them. HA options are essentially the same as on premises (database mirroring, log shipping, backup and restore), but there is no SAN available. Availability groups provide benefits that include quick failover, readable secondaries and automatic page repair, and they don’t require third-party products.

    A Failover Cluster Instance (FCI) provides easier management, but it depends on a SAN or another shared storage device, which the cloud lacks. Cloud providers offer storage that attaches to a single instance, which works for availability groups, but without a SAN you’re not going to be able to build a failover cluster instance unless you use some type of third-party storage solution. An FCI does, however, protect an entire instance, support DTC, work with SQL Server Standard Edition, and support SQL Server 2008 R2 and earlier.

    Whether you’re starting the process or still just thinking about making a move to the cloud, it’s important to consider how you will protect business-critical applications from downtime and data loss. While traditional SAN-based clusters are not possible in these environments, SAN-less clusters can provide a simple, cost-efficient alternative for implementing a failover cluster in the cloud.  These clusters not only provide HA protection, but also enable significantly greater configuration flexibility and potentially dramatic savings in licensing costs.
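
    At a very high level, the failover logic in such a cluster comes down to health monitoring plus promotion of a standby node that holds a locally replicated copy of the data. The sketch below is a conceptual toy under that assumption, not SIOS’s product or Windows Server Failover Clustering; node names and thresholds are invented.

```python
# Conceptual toy of SAN-less failover: each node keeps a locally replicated copy
# of the data, and a monitor promotes the standby when the primary misses enough
# heartbeats. Not a real clustering product; names and thresholds are invented.
MISSED_HEARTBEAT_LIMIT = 3

class Node:
    def __init__(self, name, role):
        self.name, self.role = name, role
        self.healthy = True

    def heartbeat_ok(self):
        # In a real cluster this would be a network and service health probe.
        return self.healthy

def monitor_once(primary, standby, missed):
    """Run one heartbeat check; return the (possibly swapped) primary, standby, missed count."""
    if primary.heartbeat_ok():
        return primary, standby, 0
    missed += 1
    if missed >= MISSED_HEARTBEAT_LIMIT:
        # Promote the standby, which already holds a replicated copy of the data.
        standby.role, primary.role = "primary", "failed"
        print(f"failover: {standby.name} promoted after {missed} missed heartbeats")
        return standby, primary, 0
    return primary, standby, missed

# Simulated run: the primary goes down after the first check.
primary, standby, missed = Node("sql-node-1", "primary"), Node("sql-node-2", "standby"), 0
for tick in range(5):
    if tick == 1:
        primary.healthy = False
    primary, standby, missed = monitor_once(primary, standby, missed)

print(primary.name, primary.role)   # sql-node-2 primary
```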

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
    6:48p
    Congressman Calls for Prosecution of MSP in Clinton Server Case

    A Republican Congressman is calling for the criminal prosecution of a managed services provider (MSP), claiming the CEO refused to comply with subpoenas seeking information about the handling of Hillary Clinton’s private servers.

    In an April 27 letter, Rep. Lamar S. Smith, R-Texas, informed U.S. Attorney General Jeff Sessions that he was formally referring Platte River Networks CEO Treve Suazo for criminal prosecution.

    Smith chairs the U.S. House Committee on Science, Space and Technology, which issued subpoenas to Platte River (PRN), Datto and SECNAP Network Security as part of its investigation into Clinton’s emails.

    Datto and SECNAP reached an agreement with the committee to provide responsive records, while Colorado-based PRN did not.

    “Since January 2016, Mr. Suazo and his counsel repeatedly refused to comply with requests for documents,” Smith wrote. “Furthermore, Mr. Suazo refuses to comply with lawfully issued subpoenas, making no valid legal arguments for its refusal to comply.”

    The case illustrates the prickly predicament in which an MSP can find itself when torn between loyalty to a customer and a legal subpoena.

    PRN, Datto and SECNAP were retained by Clinton to manage various components of her IT operations.

    The Committee, charged with determining whether additional cybersecurity legislation is needed, subpoenaed a variety of documents and information, including transcripts of interviews with PRN employees.

    “PRN, according to media reports and (FBI) documents, performed certain services related to maintaining and securing former Secretary Clinton’s private email server,” Smith’s letter says.

    “Among other items, the Committee requested their assistance in understanding work each company performed to secure the server, and whether it was performed in accordance with NIST’s (National Institute of Standards and Technology) Framework,” the letter goes on. “(The) Committee requested from PRN’s CEO, Mr. Suazo, all documents and communications related to the cybersecurity measures taken to secure former Secretary Clinton’s private email server.”

    The politicization of Congressional investigations into Clinton’s email server is well documented, and the Science, Space and Technology committee’s request appears relatively broad.

    “The…subpoena required PRN’s CEO to produce all documents and communications referring or relating to the following: private servers or networks used by Secretary Clinton for official purposes, the methods used to store and maintain data on private servers or networks used by Secretary Clinton for official purposes, any data (on) security breaches to private servers or networks used by Secretary Clinton for official purposes, and any documents related to the NIST Framework or FISMA,” the letter states.

    “Because any work performed by PRN during or after Secretary Clinton served as Secretary of State is pertinent to the Committee’s investigation, the subpoena required the production of all such documents, and not just documents relating to work carried out while Secretary Clinton served as Secretary of State.”

    PRN responded to the Committee through its lawyer that the company didn’t have responsive records, and refused to turn over transcribed interviews, citing the completed FBI investigation, the letter states.

    The agreements for Datto and SECNAP to produce records were brokered by Clinton’s lawyer, and committee officials have suggested that the same attorney orchestrated PRN’s obstruction.

    “The refusal to provide witnesses for transcribed interviews without a valid assertion of privilege(s) prevented the Committee from completing its investigation,” Smith’s letter said. “Further, PRN’s false statements to the Committee concerning a lack of responsive documents (belied by Datto’s production to the Committee) and complete failure to respond to the Committee’s lawfully issued subpoenas amount to obstruction…as well as a violation…for false statements made to the Committee.”

    Smith indicated his hope that the current Justice Department under Sessions would be more amenable to enforcing subpoenas in the Clinton email case than was the previous administration.

    Justice Department officials confirmed to POLITICO.com that they had received Smith’s letter, but offered no further comment.

    An attorney for PRN also declined to comment to the publication about Smith’s letter.

    This article originally appeared on MSPmentor.

