Data Center Knowledge | News and analysis for the data center industry

Friday, December 18th, 2015

    1:00p
    Understanding the Rise and Impact of SDN and NFV

    There is a very clear and marked evolution happening within the modern data center. We discuss this often, but some of the biggest shifts are actually being driven by end users and their organizations. The industry is moving toward a much more agile data center and business ecosystem. We’re seeing more data passing through the data center, more users coming in, and a lot more focus on resource efficiency.

    As it stands, global cloud traffic crossed the zettabyte threshold in 2014, and by 2019, more than 86 percent of all data center traffic will be based in the cloud, according to the latest Cisco Cloud Index report.

    Significant drivers of cloud traffic growth include the rapid adoption of and migration to cloud architectures and the ability of cloud data centers to handle significantly higher traffic loads. Cloud data centers support increased virtualization, standardization, and automation. These factors lead to better performance as well as higher capacity and throughput.

    As the report discusses, scalability and allocation of resources are the major advantages of virtualization and cloud computing. Administrators can bring up virtual machines and servers quickly, without the overhead of ordering or provisioning new hardware. Hardware resources can be reassigned quickly, and spare processing power can be consumed by other services for maximum efficiency. By taking advantage of all the available processing power and untethering the hardware from a single-server model, both private and public clouds are realizing cost efficiencies.
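    To make that concrete, here is a minimal sketch of bringing up a server programmatically with the openstacksdk Python library (OpenStack adoption comes up again below). The cloud name, image, flavor, and network names are placeholders for values from a real environment, not anything prescribed by the report.

        # Minimal sketch: provision a VM through the OpenStack SDK instead of
        # ordering hardware. Cloud/image/flavor/network names are placeholders.
        import openstack

        # Reads credentials for the named cloud from clouds.yaml
        conn = openstack.connect(cloud="my-cloud")

        # Look up the building blocks by name (placeholder names)
        image = conn.compute.find_image("ubuntu-14.04")
        flavor = conn.compute.find_flavor("m1.small")
        network = conn.network.find_network("private")

        # Boot the server; capacity is reassigned, not purchased
        server = conn.compute.create_server(
            name="web-01",
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )
        server = conn.compute.wait_for_server(server)
        print(server.name, "is", server.status)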

    Cloud computing aside, let’s dive a bit deeper into the conversation around virtualization, specifically network virtualization technologies. Already, managers are looking at next-generation solutions that will change the way cloud and data center resources are controlled. For example, the latest AFCOM State of the Data Center report indicates that 83 percent of survey respondents will be implementing, or have already deployed, software-defined networking (SDN) or some kind of network function virtualization (NFV) between now and 2016. Furthermore, 44 percent have deployed or will be deploying OpenStack over the course of the next year. Finally, even though it’s a new technology, platforms like Docker are already seeing 14 percent adoption.

    Did you see that stat? 83 percent are already deploying, or at least looking at, next-generation networking technologies. So, with that in mind, why are we seeing such a huge jump in interest and adoption? And what’s the real difference between SDN and NFV to begin with?

    • Understanding SDN: With SDN, at a very high level, administrators are able to control and manage the entire network through the abstraction of higher-level functionality. Now, let’s dive a bit deeper. This is all accomplished by abstracting the layer which decides how traffic is distributed and where it’s sent; this is the control plane. The underlying system that actually forwards traffic to its destination is the data plane. To make SDN work there has to be some kind of communication between the control and data planes, even though management is abstracted. It may sound complicated, but it really isn’t. The concept behind SDN is to create a dynamic and highly programmable network infrastructure which is capable of controlling underlying infrastructure components while still being abstracted from applications and network services. This allows for better programmability across all networking layers, better agility, central management, and an open-standards architecture. SDN can therefore drastically simplify network design by allowing administrators to aggregate physical resources, point them at an abstracted management layer (the SDN controller), and create intelligent, programmatically configured controls around the entire network. Network resources can then be presented (via SDN) to applications and other resources: the administrator has visibility into the entire network flow architecture, while the applications or resources consuming the network simply see a logical switch. (A minimal controller-API sketch follows this list.) SDN’s abstraction concept fundamentally simplifies some of today’s most complicated and fragmented networking ecosystems, which is why we’re seeing so much adoption in the data center space. Organizations use SDN to manage complexity, gain better policy control, improve scalability, and remove vendor dependencies. Most of all, SDN helps with new concepts around IoT, cloud integration and cloud services, controlling vast amounts of data (big data), and even improving IT consumerization and mobility.
    • Understanding NFV: Although there is a direct relationship between SDN and NFV, they’re not really dependent on each other. Network function virtualization is similar to traditional server virtualization but clearly focuses on networking services. Within NFV there are virtualized network functions (VNFs). It’s a confusing acronym, but an important one: VNFs are implementations of network functions deployed on top of an NFV infrastructure. Around them sit orchestration capabilities, data repositories, automation layers, and management environments, all on the NFV platform. So, what are some specific examples of NFV? This could be a virtual appliance that’s responsible only for load-balancing workloads. Or, you could have a virtualized firewall scanning a specific network segment. Similarly, you can have virtual networking services like IPS, IDS, and malware engines. Finally, you can have a distributed NFV architecture using virtual WAN optimization (WANOP) accelerators as tools for network and service controls.
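    As promised, here is a minimal sketch of the SDN control/data-plane split in practice: a Python script that programs a forwarding rule through a controller’s northbound REST API. It assumes an OpenDaylight controller with RESTCONF enabled on its default port and default credentials; the controller hostname, switch ID, and addresses are placeholders, and the field names follow OpenDaylight’s OpenFlow plugin model.

        # Sketch: programming the data plane through an SDN controller's
        # northbound API. Assumes OpenDaylight with RESTCONF on its default
        # port; host, node ID, and credentials are placeholders.
        import requests

        CONTROLLER = "http://controller.example.com:8181"
        NODE = "openflow:1"   # placeholder switch ID
        URL = (CONTROLLER + "/restconf/config/opendaylight-inventory:nodes"
               "/node/" + NODE + "/table/0/flow/1")

        # A flow rule: match IPv4 traffic to 10.0.0.0/24, send it out port 2.
        flow = {
            "flow": [{
                "id": "1",
                "table_id": 0,
                "priority": 100,
                "match": {
                    "ethernet-match": {"ethernet-type": {"type": 2048}},
                    "ipv4-destination": "10.0.0.0/24",
                },
                "instructions": {"instruction": [{
                    "order": 0,
                    "apply-actions": {"action": [{
                        "order": 0,
                        "output-action": {"output-node-connector": "2"},
                    }]},
                }]},
            }]
        }

        # The control plane (this request) is fully decoupled from the data
        # plane (the switch that will actually forward the packets).
        resp = requests.put(URL, json=flow, auth=("admin", "admin"))
        resp.raise_for_status()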

    Catch all that? With all of this in mind, one of the biggest questions still revolves around deploying SDN and NFV and understanding the use cases. First of all, you don’t have to have both to accomplish a use case. As mentioned earlier, these are not dependent technologies. You could very well have just an NFV platform operating a piece of your environment, or just SDN.

    For example, if you have a complex and fragmented network ecosystem which spans multiple data centers, it might make sense for you to abstract that control layer and bring in SDN. From there, you can control network functionality, traffic distribution, and even network automation.

    When it comes to NFV, let’s assume that you already have a homogeneous networking environment but need to control and monitor specific services within your data center. Here, you can deploy a virtual appliance which acts as a powerful load balancer, keeping an eye on workloads in your data center, in the cloud, and in between. Similarly, you could deploy a virtual security service which monitors traffic hitting a specific application. These are all examples of NFV deployments where you’re utilizing virtualization technologies to reduce cost and physical infrastructure complexity.
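    To make the load-balancer example concrete, here is a deliberately tiny sketch of such a VNF: a round-robin TCP proxy written in plain Python, running as software on a VM rather than as a hardware appliance. The backend addresses and port are placeholders, and a production VNF would add health checks, logging, and management hooks.

        # Sketch: a load-balancer VNF reduced to its essence -- a round-robin
        # TCP proxy. Backend addresses and the listen port are placeholders.
        import itertools
        import socket
        import threading

        BACKENDS = [("10.0.0.11", 8080), ("10.0.0.12", 8080)]  # placeholder pool
        pool = itertools.cycle(BACKENDS)

        def pipe(src, dst):
            # Copy bytes one way until either side closes.
            try:
                while True:
                    data = src.recv(4096)
                    if not data:
                        break
                    dst.sendall(data)
            except OSError:
                pass
            finally:
                dst.close()

        def handle(client):
            # Pick the next backend round-robin and splice the two sockets.
            backend = socket.create_connection(next(pool))
            threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
            threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

        listener = socket.socket()
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("0.0.0.0", 8000))
        listener.listen(128)
        while True:
            conn, _ = listener.accept()
            handle(conn)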

    Now that you have a clearer picture, know that your organization may very well have use cases for one technology or the other. In some cases, both might be a good fit. The key, however, is understanding the differences and knowing how SDN and NFV can positively impact your data center and your business.

    5:29p
    IBM to Take Over AT&T’s Managed Hosting Business

    IBM is taking over AT&T’s managed application and managed hosting services business, acquiring the equipment used to support those services and access to AT&T data centers where that equipment sits.

    Like several other major telcos, AT&T has been looking for ways to offload some of its data center assets since at least early this year. Together, those assets are reportedly worth about $2 billion.

    IBM plans to integrate AT&T’s managed services into its extensive lineup of cloud services, the company said in a statement. It expects the combination to make it easier for customers to integrate networks and cloud workloads with their IT environments.

    “After close, IBM will deliver the managed applications and managed hosting services AT&T provides today,” IBM said.

    The companies did not disclose terms of the deal.

    IBM vowed to ensure a “smooth transition” for AT&T’s existing managed services customers.

    Other telcos looking for alternatives to ownership of extensive IT services portfolios they built up over the past several years include CenturyLink and Verizon.

    CenturyLink executives said publicly that they were mulling a sale of all or some of the company’s data center assets, while Verizon’s plans were leaked to the press. Verizon officials denied the report, which came out in November and relied on anonymous sources, but a new report surfaced earlier this month saying the company was again “evaluating its options.”

    In October, Windstream, a much smaller telco than any of the giants described above, sold its data center business to TierPoint, a data center services roll-up focused on underserved regional US markets.

    5:37p
    “Boomerang Routes” Send Canadian Data Through US: Researchers


    Article courtesy of theWHIR

    A map of internet traffic routes was launched Wednesday by researchers at the University of Toronto to help Canadians understand how data moves on the Internet, and specifically when it moves through the jurisdiction of the US National Security Agency. Researchers say that Canadians’ information passing through the US raises both a security issue and a privacy issue.

    Canada’s Internet infrastructure is intricately connected to US networks, and the researchers say the networks of major Canadian Internet providers push Canadian data through major routing hubs in New York, Chicago, Seattle, and California. Therefore, even though Canadians likely recognize that their interactions with popular US-based sites like Google, Facebook, and Amazon are exposed to American surveillance practices, they may be surprised to learn that interactions with Canadian and even local sites often pass through “boomerang routes” that are subject to the same exposure.
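    The underlying idea is simple: run a traceroute and geolocate each hop. As a rough illustration (not the IXmaps project’s actual code), the Python sketch below shells out to the system traceroute binary and looks up each hop’s country via the free ip-api.com service; both tools are assumptions, and IP geolocation is approximate at best.

        # Sketch: spotting a "boomerang route" by geolocating traceroute hops.
        # Assumes a system traceroute binary and the ip-api.com lookup service.
        import json
        import re
        import subprocess
        import urllib.request

        def hops(host):
            # -n keeps traceroute numeric so we can regex the hop IPs out.
            out = subprocess.run(["traceroute", "-n", host],
                                 capture_output=True, text=True).stdout
            return re.findall(r"^\s*\d+\s+(\d+\.\d+\.\d+\.\d+)", out, re.MULTILINE)

        def country(ip):
            with urllib.request.urlopen("http://ip-api.com/json/" + ip) as resp:
                return json.load(resp).get("country", "unknown")

        # A US hop between two Canadian endpoints marks the route as a boomerang.
        for ip in hops("example.ca"):  # placeholder Canadian destination
            print(ip, country(ip))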

    “There is nothing inherently wrong with data moving unencumbered across an interconnected global Internet infrastructure,” said Andrew Clement of the University of Toronto. “It is, however, critical that Canadians understand the implications of their data being stored on U.S. servers and moving through U.S. jurisdiction. ISPs need to be transparent, privacy protective and accountable custodians of user information in this regard. Internet users should be fully informed consumers and citizens when making choices about their sensitive personal data.”

    The Canadian Internet Registration Authority (CIRA), which funded the IXmaps tool through its Community Investment Program, also invested heavily in the creation of Canada’s national network of exchange points, which allows peering and data exchange east to west, rather than north to south. In addition to the infrastructure, the commitments and practices of ISPs are a factor in the security and privacy of Internet data, and the route-mapping website also includes a 2014 transparency report.

    While keeping data out of the NSA’s jurisdiction is likely to appeal to many Canadians, the country’s own Internet surveillance agency works with agencies in other countries to gather information, potentially raising further concerns about data security and privacy.

    This first ran at http://www.thewhir.com/web-hosting-news/boomerang-routes-send-canadian-data-through-us-researchers

    7:53p
    Juniper Finds Backdoor in Its Data Center Security Software

    Juniper, during a routine internal code review, discovered unauthorized “backdoor” code in its ScreenOS software, which powers its firewall and VPN applications for data center, large enterprise, and carrier networks.

    The code can be used by an attacker who knows about its existence to get administrative access to devices running ScreenOS and decrypt VPN connections, Juniper senior VP and CIO Bob Worrall wrote in a security advisory issued Thursday.

    Juniper is one of the largest networking technology vendors for data centers and carriers.

    The company issued patches along with the advisory and recommended that customers running ScreenOS versions 6.2.0r15 through 6.2.0r18 and 6.3.0r12 through 6.3.0r20 apply the patches “as soon as possible.” The affected versions indicate that the vulnerability may have been present since at least 2008, the year ScreenOS 6.2 came out, as noted by The Register.
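    For administrators triaging this, the advisory boils down to two affected version ranges. Here is a quick sketch of the range check in Python, assuming the “X.Y.ZrN” release format used in the advisory (the version strings tested at the bottom are illustrative):

        # Sketch: check a ScreenOS version string against the affected ranges
        # from Juniper's advisory. Assumes the "X.Y.ZrN" release format.
        import re

        AFFECTED = [((6, 2, 0, 15), (6, 2, 0, 18)),
                    ((6, 3, 0, 12), (6, 3, 0, 20))]

        def parse(version):
            # "6.2.0r17" -> (6, 2, 0, 17)
            m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)r(\d+)", version)
            if not m:
                raise ValueError("unrecognized version: " + version)
            return tuple(int(g) for g in m.groups())

        def is_affected(version):
            v = parse(version)
            return any(lo <= v <= hi for lo, hi in AFFECTED)

        print(is_affected("6.2.0r17"))  # True: patch immediately
        print(is_affected("6.3.0r21"))  # False: past the affected range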

    Companies use software firewalls to protect their networks from intrusion. They rely on VPNs to encrypt connections made to their systems by authorized personnel over public networks. In other words, the vulnerabilities Juniper has identified are potentially responsible for gaping holes in the enterprise security of many of its customers.

    At this point, Juniper doesn’t know when or how the unauthorized code ended up in the software, according to Worrall. He also mentioned that there’s no evidence that someone has exploited the vulnerabilities.

    The list of potential scenarios is long. Close to the top of it, however, is the possibility that the backdoors were introduced by the NSA or a foreign spy agency.

    Among Edward Snowden’s disclosures was one about an NSA program through which the agency could intercept Cisco products on their way to customers to install backdoors. Another NSA program, called Feedthrough, was reportedly created to covertly install malware into Juniper firewalls that can be used to install other NSA software on the vendor’s customers’ equipment.

    9:21p
    Australian Data Center Connectivity Firm Megaport Goes Public

    Shares of Megaport, a Brisbane-based provider of connectivity services for data centers that uses a software-defined networking platform to provision network connectivity, started trading on the Australian Securities Exchange Thursday under the ticker symbol MP1. The company raised AU$25 million through the IPO.

    Megaport’s founder and executive chairman Bevan Slattery is a well-known Australian tech entrepreneur. He also co-founded the telco Pipe Networks and founded NextDC, a major Australian data center provider that went public in 2010 in a $40 million IPO.

    NextDC, which Slattery left in 2013, also raised $30 million on ASX this week through an offering of additional shares.

    Megaport sells network connectivity to large data center operators and users, such as enterprises, carriers, and cloud service providers. It is one of several companies that have emerged recently to make connectivity provisioning faster and easier, as enterprises increasingly rely on a complex set of interconnections to run their businesses.

    Another prominent example of a startup in this space is IIX, a Silicon Valley company that provides instant connectivity to any site on its long list of data centers around the world. IIX raised $26 million from a group of heavyweight Silicon Valley VCs in November.

    Provisioning interconnection is a complex engineering task, and companies often hire contractors to help them set up links to other data centers or cloud service providers. Startups like Megaport and IIX promise to take that burden off customers’ shoulders by providing easy-to-use, cloud-based interfaces for network provisioning.
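    The pitch, in other words, is that ordering a link becomes an API call rather than an engineering project. As a purely hypothetical illustration (not Megaport’s or IIX’s actual API), such an interface might look like this:

        # Hypothetical sketch of a cloud-based provisioning interface. The
        # endpoint, fields, port IDs, and token are all illustrative.
        import requests

        API = "https://api.example-fabric.net/v1"   # hypothetical API
        TOKEN = "..."                               # placeholder credential

        # Request a 1 Gbps virtual cross-connect between two ports the
        # customer already owns; the provider's SDN platform handles the
        # underlying switch programming.
        order = {
            "a_end_port": "port-syd-01",    # hypothetical port identifiers
            "b_end_port": "port-bne-02",
            "rate_limit_mbps": 1000,
        }
        resp = requests.post(API + "/connections", json=order,
                             headers={"Authorization": "Bearer " + TOKEN})
        resp.raise_for_status()
        print(resp.json())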

    Megaport’s SDN platform is based on OpenDaylight, a Linux Foundation-governed open source SDN project.

    Its shares were up 75 percent on the day of their debut on ASX.

