Data Center Knowledge | News and analysis for the data center industry

Tuesday, March 10th, 2015

    4:01a
    This Startup Challenges the ‘Data Center Aisle’ Concept

    If the words “data center” invoke any associations for you at all, they probably conjure up images of neatly arranged rows of black or gray server cabinets on a raised floor made of white perforated tiles. If you’re more intimately involved with IT facilities, you may be thinking of air handlers and water chillers, or UPS systems and switchgear in a room somewhere close to the data hall.

    What you’re probably not imagining is a room filled with round black cylinders that resemble film developing tanks, except 9 feet in diameter. But that’s roughly what a data center imagined by the founders of a new startup looks like.

    Vapor IO, which came out of stealth Tuesday, has designed something it calls the Vapor Chamber, which the founders say is a lot cheaper to deploy and operate than a traditional hot aisle-cold aisle data center environment. More than anything, it is for so-called “edge data centers,” or data centers that are much smaller than the hyperscale facilities the likes of Google and Facebook have been building in rural towns in and outside of the U.S. Edge data centers are in or close to major population centers, storing data and content that needs to be delivered to users who live there.

    Cole Crawford, Vapor CEO and one of the company’s founders, said the Vapor Chamber was for the “Internet-of-Things cloud,” which is about delivering data at the edge. “That cloud is powered and looks very different than the general-purpose cloud, and economics for that are very different as well,” he said.

    To say that Crawford knows his data centers and his cloud would be an understatement. He was one of the people involved in the creation of Nova, the cloud architecture designed for NASA’s internal use in the second half of the last decade that was later rolled into a collaboration with Rackspace to create OpenStack – the family of open source technologies that enjoys widespread popularity today.

    He’s also been involved with the Open Compute Project, Facebook’s open source data center and hardware design initiative, since the project’s start in 2011. And for the past year and a half he’s been executive director of the Open Compute Foundation, the non-profit that oversees OCP. Open Compute is having its annual summit in San Jose, California, this week.

    The Edge is Dense

    The reason Vapor Chamber looks the way it does is that it’s designed to be deployed in places where physical space is in short supply. It’s designed to provide a lot of compute capacity within a relatively small footprint. A single chamber – 9 feet in diameter – can accommodate up to 150 kW across six 42RU racks.

    The “edge cloud,” Crawford said, will not run in data centers that are 100 megawatts and up (the ones the Facebooks and Googles of the world run in). The network edge is in urban areas, and “we’re not building 100-plus-megawatt data centers in downtown New York or downtown San Francisco or downtown Chicago.”

    Vapor Chamber reduces the amount of space needed for cooling compared to a traditional data center. The six racks are wedge-shaped, forming a cylinder when put together. Inside the cylinder is what Crawford called the “hot column,” which the servers push hot air into. The column replaces what would be a hot aisle in a traditional data center. A 36-inch variable-speed fan at the top sucks the hot air out of the column. Air pressure in the column is lower than the pressure outside of the chamber, and that difference in pressure is the mechanism that pulls cold air inside.
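    The pressure-driven airflow described above lends itself to a simple control loop: the exhaust fan speeds up or slows down until the hot column sits at a target negative pressure relative to the room. Below is a minimal sketch of that idea; the sensor interface, setpoint, and gain are hypothetical illustrations, not Vapor IO's actual firmware.

    ```python
    # Hypothetical proportional control loop for a chamber exhaust fan.
    # The setpoint, gain, and sensor inputs are illustrative assumptions,
    # not Vapor IO's actual control logic.

    TARGET_DIFFERENTIAL_PA = -15.0   # desired column pressure relative to the room (Pa)
    GAIN = 0.5                       # % of fan speed adjusted per Pa of error

    def next_fan_speed(current_speed_pct, column_pa, room_pa):
        """Nudge fan speed so the hot column stays below room pressure."""
        differential = column_pa - room_pa            # negative when the column is depressurized
        error = differential - TARGET_DIFFERENTIAL_PA
        # Positive error means the column is not depressurized enough: speed up.
        new_speed = current_speed_pct + GAIN * error
        return max(0.0, min(100.0, new_speed))        # clamp to a valid duty cycle

    # Example: column at -10 Pa against a -15 Pa target nudges the fan from 60% to 62.5%.
    print(next_fan_speed(60.0, -10.0, 0.0))
    ```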

    Vapor Chamber from all angles (Image: Vapor IO)

    The chamber also includes things like rectifiers, PDUs, backup batteries, fire detection and suppression – in other words, all the things that are usually decoupled from the IT racks and take up extra space in the building – all within the slightly under 81 square feet it occupies. Not only can the edge data center be smaller in size, it can be a much less sophisticated (read “cheaper”) environment than your typical mission-critical facility.
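    For a rough sense of scale, the figures cited in the article imply the following per-rack and per-square-foot densities; the arithmetic below simply combines the 150 kW, six-rack, and 81-square-foot numbers quoted above.

    ```python
    # Back-of-the-envelope density figures from the numbers cited in the article.
    chamber_kw = 150.0     # maximum IT load per chamber
    chamber_sqft = 81.0    # "slightly under 81 square feet" of floor space
    racks = 6

    print(f"{chamber_kw / racks:.0f} kW per rack")            # 25 kW per rack
    print(f"{chamber_kw / chamber_sqft:.2f} kW per sq ft")    # ~1.85 kW per square foot of footprint
    ```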

    The racks are inspired by Open Compute racks, or racks Facebook designed for its own use and contributed to its open source project. The chamber supports any IT gear, however, as long as it satisfies a few basic OCP requirements. “It will support any standard IT equipment,” Crawford said. “You have to be able to get power at the rear, and you have to be able to service it and hook up your networking gear at the front.”

    Open Source DCIM

    Adding more bang to its coming-out party, Vapor IO also announced its own data center infrastructure management and analytics system that includes hardware sensors and software. The company said it was contributing a foundational element of the system to the Open Compute Project. That element is called Open DCRE (Data Center Runtime Environment), which is a combination of sensors, firmware, and a controller board. It is a way to gather not only temperature and humidity data, but also pressure and vibration metrics, both of which matter a lot in the operation of the Vapor Chamber.
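    As a rough illustration of what a sensor-aggregation layer of this kind does, the sketch below polls a set of readings and flags values outside expected ranges. The endpoint URL, JSON shape, and thresholds are hypothetical; the article does not describe Open DCRE's actual API.

    ```python
    # Hypothetical polling client for a DCRE-style sensor endpoint.
    # The URL, response format, and thresholds are illustrative assumptions.
    import json
    import urllib.request

    DCRE_URL = "http://dcre.example.local/sensors"   # hypothetical endpoint
    LIMITS = {
        "temperature_c": (10.0, 35.0),
        "humidity_pct": (20.0, 80.0),
        "pressure_pa": (-40.0, 0.0),    # the hot column should stay below room pressure
        "vibration_g": (0.0, 0.5),
    }

    def check_readings():
        """Fetch the latest readings and return any out-of-range metrics."""
        with urllib.request.urlopen(DCRE_URL) as resp:
            readings = json.load(resp)               # e.g. {"temperature_c": 27.1, ...}
        alerts = []
        for metric, (low, high) in LIMITS.items():
            value = readings.get(metric)
            if value is not None and not (low <= value <= high):
                alerts.append(f"{metric}={value} outside [{low}, {high}]")
        return alerts
    ```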

    The technology that is not open source is CORE, or Core Operating Runtime Environment. It provides a layer of analytics-driven intelligence on top of Open DCRE. Taking things beyond energy efficiency, it enables users to define units of production for their IT equipment – be they URL pages or transactions – and determine how efficiently their data center assets produce those units. The approach takes a line of thinking that is similar to the one eBay took with its Digital Service Efficiency dashboard, announced in March 2013. While eBay’s dashboard is specific to the online auction company, however, Vapor’s CORE is aimed at a wide range of users.
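    The “units of production” idea reduces to a simple ratio: useful work delivered per unit of energy or cost consumed. The sketch below shows the shape of that calculation; the transaction counts, energy figures, and tariff are made up for illustration and are not eBay’s or Vapor’s numbers.

    ```python
    # Illustrative "units of production" efficiency metric, in the spirit of
    # CORE and eBay's Digital Service Efficiency idea. All inputs are hypothetical.

    def production_efficiency(units_served, energy_kwh, price_per_kwh):
        """Return work delivered per kWh and energy cost per unit of work."""
        units_per_kwh = units_served / energy_kwh
        cost_per_unit = (energy_kwh * price_per_kwh) / units_served
        return units_per_kwh, cost_per_unit

    # Example: 50 million transactions served on 12,000 kWh at $0.08/kWh.
    per_kwh, cost = production_efficiency(50_000_000, 12_000, 0.08)
    print(f"{per_kwh:,.0f} transactions per kWh, ${cost:.6f} per transaction")
    ```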

    First Customer on Board

    Vapor IO did not share pricing details but disclosed the name of its first customer. Union Station Technology Center in South Bend, Indiana, will use Vapor Chamber to build a cloud setup. Crawford said there were more customers in the pipeline, but the company wasn’t ready to disclose who they were. He also declined to share how the startup is being funded.

    Crawford’s co-founders are Steven White, a former colleague from Nebula, an OpenStack-based private cloud vendor, and Nick Velander, founder of Signal Search Group, which helps local companies get their electronics products manufactured in China.

    Vapor’s exclusive manufacturer is Jabil, a manufacturing services company based in Florida. Also among its partners are Romonet, a London company that makes analytics software for data center management, and Mesosphere, the San Francisco-based startup that has developed an operating system for the entire data center that’s based on the open source Apache Mesos project.

    A Fresh Look at a Real Need

    Given space constraints in major urban centers and demand for edge data center capacity, Vapor IO is trying to address a real need in the market with a very unorthodox solution. While the design is unusual, the company’s founders have taken into consideration the fact that the data center industry is a very conservative one. The wedge racks can be installed in a traditional data center, for example. But the traditional data center is not the main target. The target is a new kind of data center, one where building and supporting infrastructure needs are minimal, and location takes precedence over everything else.

    3:00p
    IBM Opens SoftLayer Data Center in Montreal Area

    IBM has opened a SoftLayer data center in Drummondville, outside of Montreal, Canada. The new data center follows the Toronto data center that opened in August of last year.

    The facility is the latest addition in IBM’s ongoing $1.2 billion investment in its cloud business. The Drummondville data center is similar in scope and size to the recent Toronto facility, with room for more than 8,000 servers. While Toronto had long been publicly disclosed as one of the planned data centers, this facility is a recent addition to the list of planned expansions.

    This is the fifth SoftLayer data center launched in four months. The others are in France, Germany, Mexico, and Japan. Other data centers expected in 2015 will be in Milan, Italy, and Chennai, India. At the time of its acquisition, SoftLayer had 13 data centers. Under IBM that total has more than doubled in short order.

    The data center expands customer capabilities when it comes to building private, public, or hybrid clouds, with the ability for in-country data redundancy between the Montreal area and Toronto. Drummondville is an hour and a half away from Montreal and is actually in a different seismic and climatic zone.

    “The local presence of SoftLayer’s cloud center not only demonstrates the company’s significant investment in Québec, but also its unique ability to meet the needs of Canadian customers to quickly leverage untapped business models and services with cloud,” said Denis Desbiens, IBM Vice President, Quebec, in a release.

    The full suite of services is available, including bare metal, virtual servers, storage, security services, and networking. These services can be deployed on demand with full remote access and control through a customer Web portal or API. Services are also available in French.

    “As the second-largest city in Canada, Montréal is a vital center of commerce and technology,” said Marc Jones, SoftLayer’s CTO, in a release. “Canada is an important market for IBM Cloud services and this new facility will provide regional customers with the security, resiliency, and scalability for placing demanding workloads in the cloud.”

    Montreal and the surrounding area feature relatively low-cost, clean hydro power, and the city has strong connectivity to New York, Toronto, and Europe.

    Last month, Cogeco said it was opening a 100,000 square foot data center in Montreal.

    3:30p
    Conducting Your “Power Due Diligence” When Acquiring Existing Data Centers

    Mark Townsend, senior field application engineer at GE’s Critical Power business, works with data center customers to build and sustain massive data and network capacity with reliable and energy-efficient power.

    There are a lot of approaches to expanding data center capacity – build new, upgrade existing infrastructure, establish colocation facilities. Each has its pros and cons. But larger companies, whether large data users or data services providers, often have the opportunity to acquire the data center assets of another firm. Perhaps the company is expanding through acquisition or is making a strategic move to expand to another market and sees an acquisition as a quick expansion path.

    The due diligence needed to acquire existing information technology assets – equipment, facilities, power generation and protection platforms – involves a lot of factors. These range from assessing the remaining life-cycle value of capital equipment and the current server processing capacity per square foot to the operational expenditures (OpEx) and related energy efficiency of server, cooling, and power protection systems.

    Data center power – from power distribution to power protection – is part of that due diligence, and goes far beyond just assessing the age of the backup generator or power protection and distribution equipment.

    What Are the Capacity Needs?

    The first question in evaluating the power factors of a data center under consideration concerns the immediate goals and requirements for the operation. For example, if the rationale for acquiring an existing facility is to use it as an interim, supplemental data center operating at less than capacity, then using “best conventional” power technologies, acquired at the right price, can make sense. OpEx costs for older equipment should also be a factor in this cost analysis.

    Interim Upgrade

    With an interim-use view of the facility, inventorying the age, energy efficiency and maintenance schedule of power equipment, such as the batteries in uninterruptible power supply (UPS) units, should be factored into the purchase evaluation. A potential buyer of a data center certainly should also ask for the uptime performance and maintenance records from the facility.

    Energy efficiency and the related OpEx costs of UPS units might be satisfactory for a short time if operating at traditional power conversion levels of 92 to 94 percent is acceptable. This isn’t a long-term strategy, but it may suit the immediate need for data center capacity. An analysis of power consumption records against comparable facilities, measured in watts per square foot, can serve as an evaluation metric. So can measuring the kilowatt-hours of energy per cubic foot of cooling water when a chiller plant is used for cooling.
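    As a concrete example of the screening metric described above, the comparison below converts metered load into watts per square foot so an acquisition target can be weighed against a reference facility. The load and floor-space figures are hypothetical.

    ```python
    # Hypothetical watts-per-square-foot comparison between an acquisition target
    # and a reference facility. All input figures are illustrative assumptions.

    def watts_per_sqft(avg_load_kw, whitespace_sqft):
        """Average IT load converted to watts per square foot of white space."""
        return (avg_load_kw * 1000.0) / whitespace_sqft

    target = watts_per_sqft(avg_load_kw=900, whitespace_sqft=12_000)       # 75 W/sq ft
    reference = watts_per_sqft(avg_load_kw=1_400, whitespace_sqft=10_000)  # 140 W/sq ft
    print(f"target: {target:.0f} W/sq ft vs reference: {reference:.0f} W/sq ft")
    ```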

    State-of-the-Art Facility

    Conversely, the criterion for acquiring a current data center might be to have robust capacity operating at high power efficiency. If so, then factors such as the age of the power assets, with their higher maintenance costs and lower power efficiency, may influence this decision. If power efficiency and lower OpEx costs matter, then the extent to which newer high-efficiency power systems are deployed is a key consideration. Newer multi-mode UPS units, operating at 97 percent efficiency in double conversion mode and 99 percent efficiency in multi-mode, present immediate energy cost savings that can add up to millions of dollars in OpEx over ten years.
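    The OpEx claim can be sanity-checked with simple arithmetic: the conversion losses avoided by moving from roughly 94 percent to 99 percent efficiency, multiplied by load, hours, and energy price. The sketch below shows the shape of that calculation; the 2 MW load and $0.07/kWh tariff are illustrative assumptions, while the efficiency figures come from the article.

    ```python
    # Rough ten-year comparison of UPS conversion losses. Efficiency values are from
    # the article; the load and electricity price are illustrative assumptions.

    def ten_year_loss_cost(load_kw, efficiency, price_per_kwh, years=10):
        """Cost of energy lost in the UPS (input power minus delivered power) over `years`."""
        input_kw = load_kw / efficiency
        loss_kw = input_kw - load_kw
        return loss_kw * 24 * 365 * years * price_per_kwh

    legacy = ten_year_loss_cost(2_000, 0.94, 0.07)     # traditional 92-94% conversion
    multimode = ten_year_loss_cost(2_000, 0.99, 0.07)  # 99% efficiency in multi-mode
    print(f"avoided losses over 10 years: ${legacy - multimode:,.0f}")
    ```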

    Power efficiency also plays a role for facilities employing newer, less power-hungry cooling systems, such as free air cooling, evaporative cooling, or even liquid-cooled or liquid-immersed servers, versus traditional air conditioning approaches.

    Charting an Upgrade Path

    A comprehensive and thorough power assessment offers insight into the final buy-or-pass decision when acquiring a third-party data center. That same assessment also offers a good blueprint for further upgrades. For example, an older power system might be a logical choice as a near-term interim strategy. The same assessment gives new owners a clear picture of the investment and ramp-up schedule required to upgrade to more up-to-date, power-efficient, lower-OpEx power distribution and protection infrastructure.

    Hidden Service Agreement Risks

    For data center service providers, one power consideration often overlooked is the impact of power-related service interruptions on service level agreements (SLAs). The overall reliability of the power infrastructure has a direct impact on uptime and system recovery, and on the SLA payouts a data center provider might have to make.

    Assessments in this scenario should include a review of power protection redundancy and the age of UPS units as well as related battery backup systems. Data center service providers should also assess whether the current technology can support the SLA contractual requirements and marketing claims of an Uptime Institute Tier III or Tier IV rating. While an operational data center may initially be rated at Tier IV, for example, providers should determine whether aging infrastructure or batteries nearing end of life still support the Tier IV status and quality claims made in marketing promotions and SLA agreements.

    Location, Location, Location

    An easily overlooked power consideration is the location of the data center facility. Certainly power availability is a factor, but more important is the quality of the power from the local utility. A more urban setting with higher peak demands may affect power quality and increase reliance on UPS batteries as well as generator OpEx costs. A small initial investment in a power quality audit before the purchase will indicate the quality of local power, as will public records of utility power outages and brownouts.

    A more comprehensive look at the state of a data center’s power assets, operations and efficiencies can turn up some significant decision points in helping make that go/no-go decision about existing data center assets, as well as highlighting hidden issues and opportunities that lie ahead.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:49p
    Defining the Modern Cloud Architect: a Look at Today’s Business

    Let’s start the conversation with the very real fact that cloud computing is growing quickly. Organizations are adopting the cloud as a new foundation for their business models. This kind of growth is only slated to continue as a recent Gartner report shows that cloud computing will become the bulk of new IT spend by 2016.

    Furthermore, the idea and perception of cloud are changing as well. The Gartner report goes on to discuss how there is a flawed perception of cloud computing as one large phenomenon. Rather, cloud computing is actually a spectrum of things complementing one another and building on a foundation of sharing. Inherent dualities in the cloud computing phenomenon are spawning divergent strategies for cloud computing success. Public, hybrid, and private clouds now dot the landscape of IT-based solutions. Because of that, the basic issues have moved from ‘what is cloud’ to ‘how will cloud projects evolve’.

    With all of this in mind, it’s important to now look at one of the most critical components behind the delivery of a cloud solution: the cloud architect.

    There are now unbelievable new opportunities for those who truly understand cloud, the architecture, and the business drivers that push it all forward. Indeed.com lists tens of thousands of positions revolving around “Cloud Architect” jobs. Who’s hiring? Citi, Coca-Cola, Red Hat, IBM, AWS, GE Healthcare, and VCE, just to name a few. Salaries range from $90K to $170K and up. And there are plenty of options as well, ranging from specialization in virtualization to HPC and big data management.

    Working in the cloud field and speaking with a number of different organizations, I’ve found that there are a few key traits that all cloud architects should share. Let’s look at what can help make a cloud architect be successful.

    • Understanding the ecosystem. As a cloud architect your job isn’t to know just one piece of technology. Even if you’re an OpenStack expert, you have to understand how all of the underlying components interact. How does SDN impact the delivery of workloads? How does storage replication happen and which APIs are supported? Is there an interoperability issue with certain operating systems or applications? I’m not saying that you have to be an expert in every technology out there. However, a successful cloud architect will know, at the least, the foundational theories of how ecosystem technologies work and interact with their specific environment.
    • ROI, value, and the business. Let’s assume that you’re a consulting cloud architect. Now, a large organization has just asked you to do a cloud ROI analysis and you’re the one who has to deliver it. Believe it or not, this absolutely falls within the capabilities of the modern cloud architect. You’ll have to gauge metrics around WAN utilization, end-point devices, user interviews, virtualization levels, application delivery, data center components, external resource usage, and much more. From there, you’ll have to understand these metrics and then quantify the results to show the business where there is value (a minimal cost-comparison sketch follows this list). This is beyond “speaking the language of business.” Cloud architects are now part of the business as well as the technology organization. In fact, many organizations will now have a senior cloud architect help drive entire corporate initiatives. Why? They have a foundational understanding of the end user, deploying mobility, accessing remote workloads, and how all of this impacts the business.
    • Evolving with emerging technologies. One of the most critical traits for a cloud architect is to consistently evolve with the pace of the business and technological landscape. Never be afraid of change and always test out new technologies. It’s easier to deploy entire platforms than ever before, and many new solutions can now be deployed as virtual machines. For example, the Palo Alto VM-Series firewalls are virtual appliances, so you can segment your network and test some next-generation security platforms. Similarly, Atlantis USX is a powerful storage abstraction technology – also a virtual appliance – that lets you test the next generation of data control and abstraction and can run on a number of different hypervisors. The point is that you can test the solutions, see if there is a benefit, and quickly roll them into your own environment. Evolving with emerging technologies revolves around understanding a multi-layered technological approach, including both virtual and physical platforms.
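    The cost-comparison sketch mentioned in the ROI item above might look something like the following: a crude three-year total-cost comparison between an on-premises build and a pay-as-you-go cloud alternative. All of the dollar figures are hypothetical and stand in for the metrics a real engagement would gather.

    ```python
    # Illustrative cloud-vs-on-premises cost comparison of the sort a cloud
    # architect might assemble for an ROI discussion. All inputs are hypothetical.

    def three_year_cost(capex, monthly_opex, months=36):
        """Total cost of ownership over a three-year horizon."""
        return capex + monthly_opex * months

    on_prem = three_year_cost(capex=450_000, monthly_opex=18_000)   # hardware plus operations
    cloud = three_year_cost(capex=0, monthly_opex=27_000)           # pay-as-you-go services
    print(f"on-prem: ${on_prem:,}  cloud: ${cloud:,}  delta: ${on_prem - cloud:,}")
    ```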

    As data center, business, and cloud components become even more intertwined, cloud architects will be the masters who can put all the pieces together. An architect who can define the value of their infrastructure to an executive staff suddenly becomes an absolutely critical asset to the organization. Why? They can see the direct tie between business and technology. Furthermore, they can explain it well to a broad audience.

    Here’s the reality: it’s not easy. It’s an extremely competitive market out there and oftentimes it’s hard to take the blinders off when you’re an app developer or a specialized engineer. However, if your goal is to become involved in cloud architecture, learn as much as you can about the entire ecosystem that supports it and make sure you understand the direct tie into the business process. From there, the challenge revolves around maintaining a sharp technical and business edge.

    4:00p
    HP Cloudline: A New Cost- and Performance-Optimized Server

    The power to build your own data center platform now extends across locations, data points, and even entire architectures. You have the capability to create a customized infrastructure capable of meeting very specific organizational and application demands. But why is this necessary?

    As an IT-enabled entity, your organization and business model are evolving rapidly. New user demands and the “always-on” generation require new types of distributed access to systems and information. Users are asking for specific applications that house very specific data points. As a manager, how do you deploy these platforms without incurring new manufacturer costs?

    Commodity systems and “white-box” servers have been around for a while. But so have challenges around support and maintenance when it comes to these types of platforms. Big shops like service providers are constantly challenged with creating platforms built on scale, performance, and cost effectiveness. Now is your chance to create a new data center and business paradigm.

    In this white paper, we introduce the HP Cloudline platform. You’ll learn how you can create your own customized server infrastructure, architected without the unnecessary bells and whistles that can add as much as 20 percent to the cost of each server. You’ll also find out how you can deploy the only Tier 1 server platform in the world that provides vanity-free simplicity, all with HP support.

    Organizations are actively looking at ways they can extend their resources into the cloud to help mitigate growth challenges and optimize workload performance. The proliferation of cloud computing and mobile technologies has introduced a number of new server complexities to manage, control and optimize.

    With so much more connecting into the cloud, service providers have seen a distinct boom in business. So how do you keep up? What are the big trends impacting the service provider industry and how can they create a server platform built on efficiency and scalability?

    Mobile and data platforms will only continue to grow. For the data center and service provider, this means constant pressure to be extremely agile and cost effective.

    • In 105 countries around the world, there are now more mobile devices than people. The International Telecommunication Union (ITU) estimates that there will be close to 7.3 billion mobile subscribers in the world this year – or more mobile devices than people on the planet.
    • Americans already own an average of three mobile devices each, according to Sophos Lab research.
    • For the first time ever, mobile apps have overtaken PC usage. These trends will cause global mobile data traffic to increase 11-fold from 2013 to 2018, surpassing 15 exabytes per month by 2018.

    It’s time to take a look at customized server platforms and why this can become a critical part of your data center.

    Here’s the new reality for the modern service provider: for every 600 smartphones or 120 tablets, service providers have to deploy another server. That comes at a cost, and every penny spent on CapEx is one less in profit. This is where a new set of servers is creating a fresh kind of landscape designed for service providers.
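    Taken at face value, that ratio makes it easy to estimate how subscriber growth translates into server purchases. The sketch below applies the quoted ratio to a hypothetical subscriber mix; the device counts are illustrative assumptions.

    ```python
    # Server demand implied by the "600 smartphones or 120 tablets per server"
    # ratio cited in the white paper. Subscriber counts are hypothetical.
    import math

    SMARTPHONES_PER_SERVER = 600
    TABLETS_PER_SERVER = 120

    def servers_needed(smartphones, tablets):
        """Round up, since a fraction of a server still requires a whole machine."""
        return math.ceil(smartphones / SMARTPHONES_PER_SERVER + tablets / TABLETS_PER_SERVER)

    # Example: 1.2 million smartphones and 180,000 tablets imply 3,500 servers.
    print(servers_needed(smartphones=1_200_000, tablets=180_000))
    ```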

    Download this white paper today to see how you can create your own customized server infrastructure, architected without the unnecessary bells and whistles that can add as much as 20 percent to the cost of each server. Plus, discover how the Intel-powered HP Cloudline family of servers is changing the market for service providers.

    5:34p
    Apple Joins Facebook’s Hardware Design Community

    Apple has officially joined the Open Compute Project, the Facebook-led open source hardware and data center design initiative, after being involved in it quietly for some time.

    While the company is known primarily for its consumer devices and iTunes, it also has a massive data center infrastructure that supports its online services. Companies that operate large data centers benefit from designing their own hardware, and OCP has become a beachhead for the community of vendors and end users that support this approach.

    Frank Frankovsky, chair and president of the OCP Foundation, announced the addition of Apple to the list of members during the opening keynote of the foundation’s annual summit in San Jose, California. “Apple has been involved in this project quietly for a long period of time,” he said. “They have excellent infrastructure engineering people.”

    Apple wasn’t the only new OCP member announced Tuesday. Others who joined include end users, such as Bank of America and Capital One, as well as vendors, such as HP, Cisco, Juniper, and Schneider Electric.

    OCP members have been using the project to develop hardware based on specs and designs that are open sourced through the project.

    For end users, the benefit is not only the ability to get custom hardware but also to have multiple vendors supply the same products, which ensures supply and brings down the price. For vendors, the community has been a way to get engaged with data center operators the size of Facebook, Microsoft (which joined last year), and now also Apple.

    HP announced at the summit its first line of commodity servers for the kind of hyperscale data centers such end users operate. The new Cloudline servers are OCP-compliant.

    6:06p
    Linux OS for Network Switches Officially Part of Open Compute

    Big Switch, the Silicon Valley data center networking startup that makes software for bare-metal switches and software defined networking, has contributed a Linux-based network operating system to the Open Compute Project, the Facebook-led open source data center and hardware design initiative.

    The Santa Clara, California-based company announced today that OCP has accepted its Open Network Linux OS as a “reference network operating system.” OCP is holding its annual summit in San Jose this week.

    Numerous vendors have introduced so-called “bare-metal” switches to the market recently. These are switches that are not closely coupled with specific network operating systems and network management software. They are pitched as a less expensive alternative to the closed, proprietary hardware-and-software bundles that incumbents like Cisco, Juniper, and HP, among a handful of others, have traditionally shipped.

    A few companies, such as Big Switch and another one called Cumulus Networks, have sprung up, seeing the trend as an opportunity to create and sell software for bare-metal switches.

    Seeing the writing on the wall, some of the “incumbent” network vendors have announced data center switches that don’t necessarily ship with their own software. Dell said last year that it would start shipping switches with Cumulus OS or Big Switch’s OS; Juniper announced a “white box” switch that can run any open network OS in December; and HP announced plans this February to ship a commodity switch with Cumulus OS.

    Companies like Facebook and Google have been designing their own hardware and going directly to manufacturers in Taiwan and China. They have used this approach to procure servers first, but now also networking hardware.

    Facebook announced its first switch design, called Wedge, last year, and this February the company unveiled another design, called Six Pack, which relies on the design concepts introduced in Wedge. Facebook said it was planning to contribute the Six Pack design to OCP as well.

    There are currently six hardware switch specs in OCP, and at least four of them can be ordered. The specs are by Accton, Alpha Networks, Broadcom, Mellanox, and Intel. There are also four network software specs, including the Linux network OS by Big Switch.

    ONL includes the Linux kernel, multiple drivers, installation scripts, and a netboot-capable bootloader. According to Big Switch, the open source OS supports 12 different open switch hardware platforms.

    7:00p
    New Open API Decouples Network OS from Network Silicon

    A group of vendors and data center operators, including Dell, Facebook, and Microsoft, have created a piece of software that abstracts network silicon for the network operating system.

    Called Switch Abstraction Interface, it is a network API (application programming interface) that enables the OS to control the underlying switch regardless of the kind of silicon it runs. The usual approach has been to write unique conversion code for each type of silicon.

    As Dell Open Networking director Adnan Bhutta put it in a blog post, the concept is similar to developers not having to think about whether their application will run on an Intel- or an AMD-based server. “SAI is a standardized API to express switch abstractions,” he wrote.
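    Conceptually, an abstraction layer like this gives the network OS one set of calls and maps them onto whatever each silicon vendor's SDK expects. The Python sketch below illustrates that pattern only: the actual SAI specification is a C API, and the class and method names here are invented for illustration, not taken from SAI.

    ```python
    # Conceptual illustration of a silicon-abstraction layer in the spirit of SAI.
    # Class and method names are invented; the real SAI specification is a C API.
    from abc import ABC, abstractmethod

    class SwitchBackend(ABC):
        """The interface the network OS programs against, whatever silicon sits underneath."""

        @abstractmethod
        def create_vlan(self, vlan_id: int) -> None: ...

        @abstractmethod
        def add_route(self, prefix: str, next_hop: str) -> None: ...

    class VendorASilicon(SwitchBackend):
        """One per-silicon implementation; a real backend would call the vendor SDK."""

        def create_vlan(self, vlan_id: int) -> None:
            print(f"vendor-A SDK: program VLAN {vlan_id}")

        def add_route(self, prefix: str, next_hop: str) -> None:
            print(f"vendor-A SDK: route {prefix} via {next_hop}")

    def provision(backend: SwitchBackend) -> None:
        # OS-level code stays the same when the silicon (backend) changes.
        backend.create_vlan(100)
        backend.add_route("10.0.0.0/24", "10.0.0.1")

    provision(VendorASilicon())
    ```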

    Besides the three companies already mentioned, Broadcom, Intel, and Mellanox also participated in the development of the open network API. The companies announced today that they have submitted it to the Open Compute Project, the Facebook-led open source hardware and data center design community.

    OCP is holding its annual summit in San Jose, California, this week.

    The idea is to enable developers to customize network software more freely. Silicon vendors may also benefit from being able to address a broader customer base.

    The announcement is yet another step toward freedom to customize data center network software, which has traditionally been proprietary and shipped together with the network hardware it is closely coupled to.

    Another announcement that was made in conjunction with the OCP summit took a step in that direction. A startup called Big Switch announced that OCP had accepted its open Linux operating system for network switches as a reference network OS.

    9:00p
    Pacnet Extends Software Defined Networking to Optical Layer

    Pacnet, the Hong Kong-based telco that operates globally, is one of the companies leading the pack in employing software defined networking to provide services to its customers. Today, the company announced another step in that direction — deployment of Infinera’s new Open Transport Switch software. The integration of OTS extends software-enabled network automation into the optical layer, meaning lower latency, higher transfer speeds, and guaranteed performance across much bigger network capacity.

    Software defined networking allows a user to easily carve out what they need from their network through software. The technology is more advanced inside the data center than outside of it (in the WAN), and not many service providers offer commercial SDN. Pacnet, through a past Infinera partnership, took something of a lead in offering dynamic bandwidth to customers, and OTS extends that lead.

    OTS addition means users of Pacnet Enabled Network (PEN, Pacnet’s SDN platform) can now dynamically provision Layer 1 bandwidth on demand.

    “It’s still early days for the WAN (Wide Area Network), but this is probably the first wide-scale implementation of transport services under SDN control,” said Mike Capuano, Infinera’s vice president of marketing.

    PEN was initially launched in late 2013. It was an early commercial SDN-based service delivery platform for Layer 2 Ethernet. The on-demand Layer 2 services are available from 1 megabit per second up to 10 Gbps. New dynamically provisioned Layer 1 bandwidth is available in increments of N x 10 Gbps, according to Pacnet.

    The SDN platform is deployed across a company-owned 100-Gbps-enabled Trans-Pacific and Asian submarine network in the Asia-Pacific region.

    Pacnet said that by using a DevOps methodology it integrated and deployed OTS with PEN in a few months, rather than the 12 to 24 months it normally takes to develop such services.

    Capuano said that another interesting feature Infinera delivered is Hybrid Control Mode. “Pacnet already has our platforms deployed in different regions running production services with Service Level Agreements,” he said. “With hybrid control they can identify the pool of available resources. It’s a nice way to migrate to SDN.”

    Pacnet has deployed the Infinera OTS into its existing production network by running in Hybrid Control Mode, with new services leveraging bandwidth under SDN control. Existing production services continue to operate using the Infinera DNA network management system, a suite of different network management components.

    Infinera said it has tuned its recent products to appeal to DevOps practitioners and to the increasing number of service providers that want to both implement the approach and appeal to customers who use it. The software defined data center is infrastructure tuned for DevOps, with the ability to provision and change infrastructure through software growing every day.

    “The growth of traffic, the dynamism of cloud services, increasing need for end-user control of virtualized networks with SDN are going to be key,” said Capuano. “We believe Infinera has the obvious solution for DevOps service provider.”

    OTS was designed from the ground up with an IT mindset, he said. Infinera offers what is fundamentally an abstraction layer that can plug into any controller. It is a lightweight and open web 2.0 architecture, and service providers can rapidly integrate new features as they go.

