Data Center Knowledge | News and analysis for the data center industry

Tuesday, February 11th, 2014

    1:30p
    Sticking Point: SDN Management Challenges

    Cengiz Alaettinoglu is CTO of Packet Design, where he provides the technical direction for the company’s portfolio of route analytics products as well as the prototype of an SDN management tool.

    CENGIZ ALAETTINOGLU
    Packet Design

    Achieving the many promises of software-defined networking (SDN) will happen in evolutionary – not revolutionary – steps. One necessary evolution is the adaptation of network management processes and tools so they can keep up in a programmable world.

    Management processes and tools always seem to lag behind, but they are especially critical in SDN. This is because the human operator’s visibility and control are curtailed, management tools cannot match the automation SDN brings, and traffic demand can vary greatly. And as with any new technology, SDN is being deployed in mixed environments, making management even more difficult.

    Also, deploying SDN across multiple data centers adds another layer of complexity. SDN is an island within a single data center, but across the WAN where resources such as bandwidth become scarce, management becomes much more complicated.

    All this raises the question: How can you adequately manage an SDN environment under these conditions? Whether the network is programmed or configured (or a combination), it’s not impervious to faults, including link or node failures. It’s difficult to plan for new applications and services if you do not understand the current and historical traffic load to ascertain how changes will impact existing applications. For troubleshooting, you still need to be able to compare and contrast the current network state to a baseline and find the root cause of problems quickly.

    In addition, SDN introduces additional risks, including failure of the controller itself and the possibility of multiple controllers issuing contradictory instructions to forwarders.

    The bottom line with SDN is that you still need to be able to manage the network according to time-tested management practices. You need to audit the network to make sure it’s healthy, including the integrity of flow paths, for instance. You need to understand what the footprint is now vs. the programmatic changes that the SDN controller will request, to ensure that the required resources are available.

    Network Virtualization

    One clear example of the need for SDN management is in network virtualization. In an SDN environment, you must be able to simulate moving a virtual machine, whether it is inside a VPN or not, from one location or data center to another, along with all the flows originating from it. You also need to visualize and analyze the impact the move may have on any of the new network paths (such as congestion) and, if there is an impact, which other services are affected. The ability to model modifications to the network and to flow records in real time is critical.
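
    To make that concrete, here is a minimal, hypothetical sketch in Python of that kind of what-if analysis: re-home a VM’s flows onto new paths and flag any link whose projected utilization would exceed a safety threshold. The topology, capacities, and flow records are invented for illustration and are not taken from any vendor’s tool.

        # What-if sketch: project link load if a VM (and all flows originating
        # from it) moves to a new leaf switch. Topology, capacities, and flow
        # demands below are illustrative placeholders only.
        from collections import defaultdict

        LINK_CAPACITY_MBPS = {
            ("leaf3", "spine1"): 10_000,
            ("spine1", "leaf2"): 10_000,
        }

        # Flows originating from the VM being moved: (src, dst) -> demand in Mbps
        vm_flows = {("vm-42", "db-1"): 7_500, ("vm-42", "cache-1"): 900}
        dst_leaf = {"db-1": "leaf2", "cache-1": "leaf2"}

        def path_for(src_leaf, dst_leaf_name):
            """Toy path computation: every leaf-to-leaf path transits spine1."""
            return [(src_leaf, "spine1"), ("spine1", dst_leaf_name)]

        def simulate_move(flows, new_leaf, threshold=0.8):
            """Return links whose projected utilization exceeds the threshold."""
            load = defaultdict(float)
            for (_, dst), mbps in flows.items():
                for link in path_for(new_leaf, dst_leaf[dst]):
                    load[link] += mbps
            return {link: mbps for link, mbps in load.items()
                    if mbps > threshold * LINK_CAPACITY_MBPS.get(link, float("inf"))}

        print(simulate_move(vm_flows, "leaf3"))
        # {('leaf3', 'spine1'): 8400.0, ('spine1', 'leaf2'): 8400.0}
        # Both links would exceed 80% utilization, so this placement needs review.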

    Also, if an application is going to run for a long time, it is important to understand historical traffic loads to predict loads in the future. For example, what if the SDN controller makes a request in a trading network a few seconds before financial markets open? Traffic loads will change dramatically once market data starts flowing. How do you know if the request will negatively impact the trading application or not? Past traffic volumes can be used to predict future traffic profiles, and these profiles can determine whether the application should be permitted to run or not.
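
    As a simple illustration of that gating logic, the following hypothetical sketch (invented time bins, loads, and capacity) admits or rejects a bandwidth request based on the link’s historical profile for the requested time of day:

        # Illustrative admission check: gate a path request against a link's
        # historical traffic profile. All numbers are hypothetical.
        LINK_CAPACITY_MBPS = 10_000

        # Average load observed per 15-minute bin over previous trading days,
        # keyed by time of day; "09:30" is just after markets open.
        historical_profile = {"09:15": 2_100, "09:30": 7_800, "09:45": 8_200}

        def admit_request(requested_mbps, start_bin, headroom=0.9):
            """Admit only if predicted load plus the new demand stays under
            headroom * capacity for the requested time bin."""
            predicted = historical_profile.get(start_bin, 0)
            return predicted + requested_mbps <= headroom * LINK_CAPACITY_MBPS

        # A 1.5 Gbps request made seconds before the open looks safe against the
        # current quiet bin, but not against the predicted post-open profile.
        print(admit_request(1_500, "09:15"))  # True
        print(admit_request(1_500, "09:30"))  # False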

    Automation Needed

    What’s needed is to translate today’s tried-and-true management techniques – including manual functions such as planning – into automated processes that ensure the network can handle what’s being requested. We need to apply automation to all responsible management best practices, including controlled configuration and provisioning, sustained monitoring for availability and performance, efficient troubleshooting, and effective security and policy governance.

    The management vendors are lagging behind right now, but network managers and directors need all the help they can get as SDN adoption becomes more widespread. SDN will not make these network professionals obsolete. In fact, their knowledge and experience are more crucial and valuable than ever in ensuring optimal network performance. But they need the right management tools to be able to visualize and analyze what’s happening in the network. Only then will SDN fulfill its promise.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:00p
    Interxion Hits 500 Carriers, Sees Growth in European Connectivity Hubs

    European data center specialist Interxion’s PAR7 data center. The company said today that it now has 500 carriers in its European data centers. (Photo: Interxion)

    Colocation and interconnection service provider Interxion has passed the 500-carrier mark in its European data centers. The European cloud and colocation provider has been focusing on providing the best-connected data center environment it can, as well as setting up communities of interest that bring like-minded companies together.

    “The number of carriers is a key metric that we keep an eye on,” said Mike Hollands, Director of Connectivity and Mobile Community at Interxion. “Five hundred means 500 different companies; it’s not inflated by reaching Internet exchanges. When I joined the company two years ago in January 2012, we had 350 connectivity providers across the European footprint. That number has now hit 500. Our data centers continue to attract connectivity players. They come because they do a lot of business with one another, and also because they get a lot of revenue serving digital media, cloud, and finance.”

    Interxion has more than 1,400 customers in its 34 data centers across 11 countries, all in Europe. Network services account for 35 percent of Interxion’s annual revenues. The company had revenue of 78 million Euros ($106 million U.S.) in the third quarter of 2013.

    How does 500 carriers stack up against other data center providers? In Europe, it puts Interxion at or near the top. Equinix states it offers access to more than 950 network providers, but that’s a global number. Telx has also built a significant business on interconnection but doesn’t give an exact number. Among European specialists, TelecityGroup says it offers access to “hundreds of carriers”.

    In each of Europe’s major markets, one data center tends to emerge as the connectivity hub of that city. “In Interxion’s case, we’ve achieved that in Frankfurt, Vienna, and Madrid,” said Hollands. “In other cities like London, it hasn’t been the traditional connectivity hub.”

    In London that role belongs to Telehouse, but Interxion has doubled the number of connectivity providers in London due to growth among financial customers. Hollands says some telecom providers want to diversify away from the Docklands area, which is driving the increase in the number of carriers Interxion hosts there. Those 500 carriers fall into specific segments:

    • International providers offering IP and Ethernet services for corporate networks
    • National and metro providers enabling access to a high number of residential and business premises
    • Local loop providers delivering high capacity links between key data centres in the metro
    • Internet exchanges offering low cost exchange of Internet traffic
    • Content Distribution Networks enhancing user experiences of online content and websites
    • Mobile Network Operators simplifying access to mobile service platforms and end users

    Growth in the FLAP Region

    “In terms of growth from existing carriers, a huge amount of network capacity growth is being implemented in FLAP – Frankfurt, London, Amsterdam and Paris. That core ring in Europe is where core growth is occurring,” said Hollands.

    “We’ve also been seeing the Asia Pacific carriers in large numbers,” said Hollands. “The other area is in the mobile community, mobile network companies that work alongside network operators. The whole mobile ecosystem is really exploding.”

    Chicken or the Egg?

    Customers beget more connectivity providers as networks look to serve new clientele. New customers look for rich, diverse connectivity. In Interxion’s case, it really built up its connectivity first, which drove its customer growth. The company is strong with the financial vertical, digital media, and cloud and service providers.

    “They want diverse options and good pricing,” said Hollands. “They want to be able to pick a provider with great connectivity depending on their strategy. We do have a carrier-neutral policy and offer no connectivity ourselves. We’re an independent consultant.”

    Continuing to Expand Data Center Campuses

    “Our policy has been to continue expanding our existing data center campuses,” said Hollands. “We just finished Frankfurt 7, and are building 8 and 9. We have Paris 7 opened, and built a second data center in Madrid. We’re expanding in Vienna and Stockholm. These are all places we’ve been in the past. Carriers really like that once they put in infrastructure, we’ll continue to add more and more space.”

    By continuing to expand its existing data centers, Interxion ensures that the infrastructure investments carriers make within a facility will continue to pay dividends as the clientele grows.

    3:00p
    Cloud Provider DigitalOcean Continues To Grow, Adds Region In Singapore

    Public cloud provider DigitalOcean is launching its latest region in Singapore to serve Asia Pacific. The company is using Equinix as its data center provider.

    DigitalOcean is a relatively new cloud provider that first landed on the radar last August due to explosive growth. The company has built its cloud to be speedy and developer-friendly, using solid-state drives for storage. This has helped its customers launch more than 1.1 million cloud servers. DigitalOcean has been expanding its infrastructure to keep pace with this growth. The company believes that Asia Pacific is very likely to be the leader in growth over the next few years.

    “We have a large customer base in Asia, which will only increase with this announcement,” said CEO Ben Uretsky. “It will allow them to leap-frog outdated markets and take advantage of lower costs of building out infrastructure.”

    “Singapore continues to invest in its infrastructure,” said Moisey Uretsky, DigitalOcean’s Chief Product Officer. “Five or six years ago, Hong Kong and Tokyo were higher on the list for an initial presence, now we believe it’s Singapore.”

    Singapore is an extremely well connected region, allowing local users to significantly reduce their latency. “Connecting to Amsterdam or San Francisco can cause 200ms of latency,” said Chief Operating Officer Karl Alomar. “This will drop latency to 30ms and allow large customers to expand their presence for greater distribution.”

    Additionally, DigitalOcean chose Singapore as its first Asia Pacific hub due to IPv6. “Obviously IPv6 is a big concern that we’ve been looking to move into,” said Moisey Uretsky. “That location has a lot of IPv6 adoption. We’re new to the cloud space, so one of the issues we’ve run into is the limited availability of IPv4 space. ARIN is hinting at the fact that if anyone wants new allocation, they’ll need to be IPv6 ready. ARIN’s tried to move that ahead of time.”

    The company already operates several regions: New York, which also serves as its headquarters, plus San Francisco and Amsterdam. In Singapore, it’s starting with a smaller footprint.

    The company has accumulated 35,000 customers in a short span since launching in 2011. “We’ve certainly overcome the hump of the initial explosion,” said Moisey Uretsky. “We had to scale staff and infrastructure unexpectedly. Now we’re on top of it, instead of behind. That turning point was recent, last December.”

    The company attributes its growth to making cloud hosting easier than what’s currently offered on the market. The message has been working.

    DigitalOcean says it will continue to invest heavily in its infrastructure as more data centers are added throughout the world. This is one of many announcements expected within the first half of the year, as the company plans to move from the legacy IPv4 address standard to IPv6 and roll out new features such as load balancing, object storage, a CDN, and one-click installs of common frameworks.

    3:00p
    Creating Cloud Optimization with Network Intelligence

    The rapid proliferation of cloud computing has resulted in a huge boom in traffic over the WAN. There are more users connecting, many more data points, and the modern data center sits right in the middle. Throughout this cloud evolution, the infrastructure has been forced to change as well. Edge and core routing needed to evolve to handle this influx of traffic and new content. Service providers, as well as enterprises deploying demanding, mission-critical applications, are facing unique networking challenges.

    Here’s part of the issue: Network administrators are trying to deal with new types of cloud growth issues by adding additional providers, creating and applying routing policies in reaction to situations, and rerouting traffic manually. This consumes additional engineering time. Multi-homing avoids downtime by providing redundancy; however, it does not address congestion-related problems that occur in the “middle-mile” backbone networks linking service providers and enterprises to end users.

    Examining Routing and Networking Infrastructures

    In this white paper from Noction, we see how the Intelligent Routing Platform (IRP) adds intelligence to multi-homed routing decisions. It leverages your organization’s existing infrastructure to deliver substantial network performance improvements, optimize existing Internet connectivity, and reduce the cost of operating the network.

    Download this white paper today to learn how a powerful Intelligent Routing Platform can truly optimize your infrastructure. Some direct features include:

    • Automatic dynamic route updates based on network performance and cost metrics
    • Customer-defined policies for performance and ISP usage that consider ISP pricing structures
    • Automatic adjustment of traffic levels based on the commit rates set with your provider
    • Automatic load balancing for uniform traffic distribution among several providers
    • Performance thresholds for optimization to individual destination networks or prefixes
    • Routing Policies based on business objectives

    Remember, the modern enterprise is only going to continue to evolve under new demands from the user and the cloud. When it comes to building your own infrastructure, multiple Internet connections offer redundancy, but they do nothing more than that. BGP routing doesn’t consider latency, packet loss, or congestion when making routing decisions. Instead, routes are selected based on reducing the number of networks that packets have to transit on their way to the destination, regardless of performance. This is why it’s important to consider an Intelligent Routing Platform that can dynamically evaluate your environment and automate the optimization of your network.
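
    The contrast can be shown in a few lines. The following hypothetical Python sketch (invented provider names and measurements, not Noction’s actual algorithm) compares a BGP-style choice based only on AS-path length with a performance-aware choice that also weighs latency and packet loss:

        # Toy comparison: default BGP-style selection by shortest AS path vs. a
        # performance-aware choice. Measurements below are hypothetical.
        routes = [
            # (provider, AS-path length, measured latency ms, packet loss %)
            ("ISP-A", 3, 180.0, 1.5),
            ("ISP-B", 4,  40.0, 0.1),
            ("ISP-C", 3,  95.0, 0.4),
        ]

        def bgp_choice(candidates):
            """Simplified BGP best path: prefer the shortest AS path,
            ignoring latency, loss, and congestion entirely."""
            return min(candidates, key=lambda r: r[1])

        def performance_choice(candidates, max_loss=1.0):
            """Performance-aware choice: drop lossy paths, then pick lowest latency."""
            usable = [r for r in candidates if r[3] <= max_loss] or candidates
            return min(usable, key=lambda r: r[2])

        print("BGP picks:        ", bgp_choice(routes)[0])          # ISP-A: short path, poor performance
        print("Performance picks:", performance_choice(routes)[0])  # ISP-B: longer path, far better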

    3:30p
    Inside SuperNAP 8: Switch’s Tier IV Data Fortress

    A look at the cooling units outside the SuperNAP 8 data center in Las Vegas. These 1,000-ton units can switch between multiple cooling modes, and have on-board flywheels to provide extended runtime in the event of power outages. (Photo: Switch)

    LAS VEGAS - Once you’ve built the mighty SuperNAP, what do you do for an encore? If you’re data center provider Switch, you build a better SuperNAP right next door.

    The debut of the SuperNAP data center in 2009 put Switch on the map in a big way. At more than 400,000 square feet, the SuperNAP offered unprecedented scale and the ability to support extreme power density. The facility hosts servers and storage for many of the world’s leading technology companies, including more than 40 cloud computing companies and a dense concentration of network carriers.

    The company’s newest creation, known as SuperNAP 8, builds on that foundation with a number of innovations in cooling and reliability. The building has just become the first multi-tenant data center to earn Tier IV Constructed Facility certification, the highest rating possible under The Uptime Institute’s ratings for mission-critical reliability.

    For Switch founder and CEO Rob Roy, SuperNAP 8 is the culmination of a decade-long effort to rethink the data center. The design for SuperNAP 8 can operate effectively in any climate, providing an ultra-efficient template for global growth. Switch is finalizing plans for an international expansion, with details to be announced later this year.

    “We’ve really been focused on creating the world’s best data center,” said Roy, who has patented many of the design innovations at Switch. “SuperNAP 8 is the end game of that effort. I’ve wanted to see if we could create one global standard for our data centers.”

    First Tier IV Colocation Facility

    The effort has made an impression on The Uptime Institute, which has evaluated data centers around the world for its Tier certification program. Only four data centers in the U.S. have ever earned Tier IV Constructed Facility certification, the highest level, and until now all have been single-tenant financial services data centers.

    “The first Tier IV Facility Certification in the colocation sector speaks for itself: another world-class accomplishment,” said Ed Rafter, Vice President of Technology for The Uptime Institute. “Switch SuperNAP 8 has incorporated a number of well-planned and innovative solutions for their facilities infrastructure requirements.”

    SuperNAP 8 is the next step in Roy’s vision for a massive technology ecosystem in Las Vegas. Switch now has more than 1,000 customers and 315 employees, and its projects keep 1,000 construction workers employed. The 300,000 square foot SuperNAP 8 facility is built several hundred yards from the original SuperNAP (now known as SuperNAP 7).

    SuperNAP 8 was built using pre-fabricated modular components manufactured by Switch. The major building block is known as a MacroMOD, and includes two data halls. Switch is installing customers in the first two data halls, which represent half of the building’s total capacity.


    Rows and rows of high-capacity Teradata enterprise storage inside customer cages within the SuperNAP, the huge Switch data center in Las Vegas. The arrays are housed in Switch’s containment systems, known as T-SCIFs. (Photo: Switch)

    So what’s different about SuperNAP 8? Data Center Knowledge recently had a tour of the new facility, which features the same combination of density and efficiency seen at SuperNAP 7. That facility operates at a full-year Power Usage Effectiveness (PUE) of 1.18, putting its efficiency nearly on par with Google, which reports a full-year PUE of 1.12 for its fleet of data centers.

    This level of efficiency is unusual for a multi-tenant facility, which has less flexibility in pushing the boundaries of server inlet temperature. Switch operates the SuperNAPs’ server halls at 69 degrees and 40 percent humidity, while hyperscale players like Google and Facebook can push temperatures closer to 80 degrees.
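
    For readers unfamiliar with the metric, PUE is simply total facility power divided by the power delivered to IT equipment, so lower is better and 1.0 is the theoretical floor. Below is a quick illustrative calculation in Python, with made-up power figures chosen only to reproduce the ratios quoted above:

        # PUE = total facility power / IT equipment power. The kW figures are
        # hypothetical, chosen only to reproduce the ratios quoted in the article.
        def pue(total_facility_kw: float, it_load_kw: float) -> float:
            return total_facility_kw / it_load_kw

        print(round(pue(11_800, 10_000), 2))  # 1.18 -- SuperNAP 7's annualized figure
        print(round(pue(11_200, 10_000), 2))  # 1.12 -- Google's reported fleet average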

    A high-level change in the new design is how the data center is organized. At SuperNAP 7, a massive power spine runs down the center of the building, with data halls and power rooms on each side. At SuperNAP 8, all the power rooms are together along the perimeter of one side of the building, with the power spine alongside.


    A view of the power spine inside the SuperNAP complex in Las Vegas, showing the large number of conduits housing power cabling. SuperNAP 7 features 100 megawatts of power capacity, while SuperNAP 8 currently offers 50 megawatts. (Photo: Switch)

    The data halls are now together in the remainder of the interior space, with the exterior cooling units lining the far side of the building. This diagram provides a cross-section of the facility, showing the placement of (from left to right) the generators, power rooms, power spine, data halls, and cooling units.

    [Diagram: cross-section of SuperNAP 8]

    Separating the power equipment from the servers and the cooling units provides additional reliability, limiting the potential for problems should the electrical gear fail.

    6:44p
    Big Data News: Splice Machine, Carpathia, Altiscale, DataGravity

    Splice Machine raises $15 million to further its transactional real-time SQL-on-Hadoop database for big data applications, Carpathia and Altiscale partner on Hadoop-as-a-Service, and DataGravity is recruiting channel partners for an early-access program ahead of its product release later this year.

    Splice Machine raises $15 million for big data applications. Big data application provider Splice Machine announced the closing of its $15 million Series B round of funding. The investment is led by InterWest Partners, along with returning Series A investor Mohr Davidow Ventures (MDV). Splice Machine provides a transactional real-time SQL-on-Hadoop database for big data applications. It provides application developers and database architects the best of big data: the scalability of Hadoop and HBase, the ubiquity of SQL, and the transactional integrity of an RDBMS. After getting its first round of funding from MDV in 2012, the company has tripled its staff, engaged with over 10 charter customers, and delivered a limited product release in which more than 50 enterprises validated use cases, tested SQL coverage, and benchmarked performance. The new funds will help the company accelerate its product development and expand its sales and marketing team in preparation for its public beta offering in the first quarter of this year. “Unlike any other SQL-on-Hadoop database, only Splice Machine supports real-time, ACID-compliant updates for both operational and analytical applications on standard Hadoop distributions,” said Bruce Cleveland, General Partner, InterWest Partners. “With its lockless transactional architecture, Splice Machine fills a critical gap in the SQL-on-Hadoop market.”

    Carpathia partners with Altiscale for Hadoop-as-a-Service. Carpathia announced a partnership with Altiscale, provider of a purpose-built Apache Hadoop cloud. The strategic partnership delivers Hadoop-as-a-Service (HaaS) to enterprises, government agencies and global Software-as-a-Service (SaaS) providers looking for more powerful and cost-effective access to large-scale data processing capabilities. The Altiscale HaaS solution will be available in Carpathia’s network of data centers, giving its customers a turnkey Hadoop solution. Together, Carpathia and Altiscale deliver a fully managed service that is optimized for Hadoop, providing customers with always-on access to their data, proactive monitoring of jobs, and predictable monthly pricing. “Hadoop is becoming critical for businesses as they seek to make sense of unstructured data, accelerate queries on massive datasets, uncover hidden trends, and enhance insights for predictive modeling,” said Raymie Stata, CEO, Altiscale. “The Carpathia partnership and the Altiscale Data Cloud free customers from the infrastructural and operational burdens of Hadoop, allowing them to quickly scale and pay only for the Hadoop resources they use.”

    DataGravity announces channel program. Early-stage company DataGravity announced it is recruiting a select group of channel partners for its early-access channel program. The DataGravity early-access channel program, which will be unveiled during the VMware Partner Exchange event, will give partners with expertise in storage, server and virtualization technology an early view into the DataGravity solution. The DataGravity platform will address the challenge of extracting insight from data without making all-in investments in data centers or hiring IT specialists to build complex models and analyze data sets. Channel partners in the early-access program will gain the benefits of early engagement so they can move quickly for their customers when DataGravity launches its solution this year. “There is tremendous demand for new, simple and cost-effective technology in both small and mid-sized companies, and the channel model is evolving into a more diversified revenue stream for both partners and tech companies,” said David Siles, vice president of worldwide field operations at DataGravity. “DataGravity has created an opportunity for partners and customers to influence the development of a sustainable channel program and business that will level the playing field of unstructured data analysis, giving end users and IT teams the ability to reach actionable answers to difficult business questions.”

    7:00p
    Video: Applied Micro Presents Server on a Chip

    Data Center Knowledge chats with the Applied Micro team at the Open Compute Summit V in San Jose. The video features Michael Major, vice president of corporate marketing, and Kumar Sankaran, senior director of embedded systems, who shows us the X-Gene, a high-performance, enterprise-class ARMv8 64-bit server SoC. The chip can be used for Web front ends, memory caching, big data, and cloud storage. The X-Gene is a fully integrated SoC that eliminates the need for other chips such as an I/O controller hub, NIC, or baseboard management controller. The X-Gene was being demonstrated at Applied Micro’s booth at the event. The video runs 2:13.

    For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.

    8:00p
    DE-CIX Expanding Its New York Internet Exchange

    German Internet exchange operator DE-CIX has been growing quickly on the wave of the Open-IX movement in the United States. Its New York-area Internet exchange has scaled to 111 access points across Manhattan, New Jersey, and Long Island. The company has also introduced the Apollon technology platform into its New York IX as the foundation for the regional infrastructure.

    The Apollon platform is an Ethernet interconnection platform that consists of a 100 Gigabit Ethernet-capable switching system that supports a large number of 100 GE ports across the switching fabric. Apollon delivers secure, resilient connectivity to the DE-CIX New York peering platform.

    “In addition to metro fiber, we have also secured dark fiber riser assets in multi-tenant buildings and connected them to DE-CIX New York switching sites,” said Frank Orlowski, Head of Marketing for DE-CIX. “This expands our reach to 111 access points in seven facilities across the NY/NJ region – more than any other Internet exchange in North America. One port from a DE-CIX site now delivers access to any other customer at the same or 110 other locations in the New York metro, plus the more than 600 customers at our primary Frankfurt exchange. The New York metro is a competitive market, but DE-CIX Apollon is taking peering and interconnection to the next level. We are working diligently to make DE-CIX New York the major exchange in this metro market.”

    DE-CIX New York uses multiple dark fiber rings to provide scalable backbone infrastructure for the exchange. A downtown Manhattan ring was implemented in 2013, and the company has added a New Jersey ring that connects facilities there to the Apollon nodes in Manhattan. The expanded exchange also operates a Chelsea ring, which connects the Telehouse Chelsea facility to the same nodes.

    DE-CIX New York is a carrier- and data center-neutral Internet exchange distributed across major carrier hotels and data centers throughout the New York/New Jersey metro. The exchange supports settlement-free interconnection between Internet backbones. Announced in September 2013 and first opened for customer orders in November, DE-CIX New York continues to add data center sites to expand the coverage of the exchange and increase its utility to the Internet peering industry. Ninety-nine of the 111 access points at DE-CIX New York are 100 Gigabit Ethernet ready, with remaining points to be upgraded in the near future.

