Data Center Knowledge | News and analysis for the data center industry

Monday, September 28th, 2015

    12:00p
    Vapor IO’s Server Management Controller to Be Powered by 64-Bit ARM Chips

    Vapor IO, the startup challenging many data center design conventions, including the shape and placement of IT racks, has partnered with Applied Micro. The chipmaker’s 64-bit ARM processor will be the brain of the Vapor Edge Controller, a centralized, shared top-of-rack server management controller meant to replace the proprietary Baseboard Management Controllers (BMCs) found in each individual server in the rack.

    Vapor got a lot of attention when it came out of stealth in March, unveiling the Vapor Chamber, which replaces straight data center aisles with cylindrical pods, each arranging six wedge-shaped racks in a circle. Cold air enters from outside the cylinder, while exhaust air is blown into a chamber in the middle and drawn out by a fan at the top.

    The physical design makes for higher power density per square foot, but that’s only part of Vapor’s pitch. The other part is a data center infrastructure management platform consisting of hardware sensors, monitoring and analytics software, and a server management controller board, the board that will now be powered by Applied Micro’s chips.

    Vapor chose the Sunnyvale, California, semiconductor maker’s quad-core HeliX 2 processor for a number of reasons. “The high performance, combined with low power consumption, small form factor, support for ARM 64-bit instruction set architecture and an integrated 64-bit memory controller are key features that we have leveraged to enable users to monitor and manage data center components like never before,” Cole Crawford, Vapor’s founder and CEO, said in a statement.

    Crawford is a former executive director of the Open Compute Foundation, the Facebook-led open source hardware and data center design effort. Vapor’s rack design is inspired by the Open Compute Project’s rack designs, and the company has submitted an element of its data center management software, called OpenDCRE (Open Data Center Runtime Environment), to the open source project.
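
    OpenDCRE exposes data center telemetry over a simple REST interface. As a rough illustration of how a monitoring script might poll such a service, here is a minimal Python sketch; the host, port, and URL paths below are hypothetical placeholders rather than documented OpenDCRE routes.

        # Minimal sketch of polling a REST-style telemetry endpoint such as OpenDCRE.
        # The base URL and paths are hypothetical placeholders for illustration only.
        import json
        from urllib.request import urlopen

        BASE_URL = "http://rack-controller.example:5000/opendcre/1.2"  # hypothetical

        def read_temperature(board_id: str, device_id: str) -> float:
            """Fetch one temperature reading from the (assumed) read endpoint."""
            url = f"{BASE_URL}/read/temperature/{board_id}/{device_id}"
            with urlopen(url, timeout=5) as resp:
                payload = json.load(resp)
            # Assume the response carries a numeric reading under "temperature_c".
            return float(payload["temperature_c"])

        if __name__ == "__main__":
            print(read_temperature("00000001", "0002"))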

    Traditional server BMCs, used for monitoring a server’s vital signs, such as temperature, fan speed, and power consumption, have been criticized as an attractive attack surface for hackers, primarily because their proprietary nature makes them hard for users to patch or secure on their own.
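
    For context, a BMC’s vital signs are typically read over IPMI. The sketch below shows one common way to pull sensor readings from a BMC using the ipmitool utility driven from Python; the host name and credentials are placeholders, and sensor names and output details vary by vendor.

        # Sketch: query a server BMC's sensor readings over IPMI via ipmitool.
        # Host, user, and password are placeholders; sensor names vary by vendor.
        import subprocess

        def bmc_sensors(host: str, user: str, password: str) -> dict:
            """Return a {sensor_name: raw_value} dict parsed from `ipmitool sensor`."""
            out = subprocess.run(
                ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password, "sensor"],
                capture_output=True, text=True, check=True,
            ).stdout
            readings = {}
            for line in out.splitlines():
                cols = [c.strip() for c in line.split("|")]
                if len(cols) >= 2:
                    readings[cols[0]] = cols[1]  # e.g. "CPU Temp" -> "42.000"
            return readings

        if __name__ == "__main__":
            for name, value in bmc_sensors("10.0.0.50", "admin", "secret").items():
                print(name, value)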

    Facebook recently released open source server BMC software to address another set of problems with proprietary BMCs: the company’s infrastructure engineers like to customize components to their needs, but server vendors have been too slow to add features or fix problems in their BMC software.

    Vapor’s server management controller replaces proprietary BMCs, regardless of rack or server type, the company said in a statement.

    5:30p
    The Dawn of a New Era for Data Center Interconnect

    Jim Theodoras is Senior Director of Technical Marketing for ADVA Optical Networking.

    The balance of power in the optical networking market has undergone a seismic shift in recent years. Data center interconnect (DCI) has emerged as the main driving force for innovation and spending, shaking up the entire structure of the industry. This new market focused purely on DCI networks has become the engine for expansion and innovation, completely disrupting the traditional supply chain and even altering equipment design.

    As DCI reshapes the landscape, we’re seeing deployed bandwidth capacity climb 25 to 30 percent year over year. One-hundred-gigabit-per-second (100Gbit/s) transport is now finally unseating the technology warhorse of the past decade, 10Gbit/s, with 100Gbit/s deployments expected to grow fivefold in 2017. That’s because data centers are the early adopters and drivers of 100Gbit/s across the globe.

    It’s a Brave New Data Center World

    We’re hearing more and more about the world’s largest Internet content providers (ICPs) and cloud service providers (CSPs) – the challenges they’re facing, the technologies they’re building and the partnerships they’re developing. The phenomenal growth in Internet traffic and the fierce migration to cloud-based services in recent years are combining to force a dramatic rethink of how data centers are connected. If data center operators are to continue meeting customer expectations, they need to build an optimized data center interconnect infrastructure that can be scaled indefinitely.

    DCI-optimized networks are necessary to eliminate data bottlenecks and to accommodate the enormous demand for flexible, cost-efficient growth. ICPs, CSPs and the like are all too aware that they need transport technology purpose-built for the task of interconnecting data centers. Without it they’ll be unable to continue their current business momentum, resulting in direct financial consequences.

    So, why are the needs of interconnecting data centers different than those of traditional transport networks? Why is a fundamental shift toward DCI-optimized technology required to ensure continued business expansion? And, how will it enable ICPs, CSPs and the other businesses in the DCI ecosystem to scale to meet demand both now and in the future?

    What They Want, How They Want It

    The needs in DCI are different from those of traditional optical networking. Data centers organize and move their data differently, so they require technology built specifically for those needs. And with purchasing clout now tilted in their favor, data center operators are finally driving the design decisions in the optical networking industry.

    Data center operators start with a stringent set of demands for density, scalability and energy: more than a terabit per second (Tbit/s) per shelf for optimal utilization of space and equipment in colocation and leased spaces; open solutions featuring open protocols, hardware and software interfaces for scaling best-of-breed, multi-vendor networks; and less than 500 milliwatts (0.5W) of energy usage per Gbit/s of bandwidth delivered for superior efficiency. They also need fewer touch points and inventory items, along with plug-and-play simplicity.
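
    To make that efficiency target concrete, here is a quick back-of-the-envelope check; the shelf capacity and power draw below are hypothetical round numbers chosen only to illustrate the arithmetic.

        # Back-of-the-envelope efficiency check: watts consumed per Gbit/s delivered.
        # The shelf figures are hypothetical, chosen only to illustrate the math.
        shelf_capacity_gbps = 1000      # a 1 Tbit/s shelf
        shelf_power_watts = 400         # assumed total power draw for that shelf

        watts_per_gbps = shelf_power_watts / shelf_capacity_gbps
        print(f"{watts_per_gbps:.2f} W per Gbit/s")  # 0.40 W/Gbit/s, under a 0.5 W target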

    What they really care about is software-defined networking (SDN) and open application programming interfaces (APIs). Data center operators began building orchestration software to run their own internal networks long before they started worrying about the transport connections among data centers. ICPs and CSPs need more than just scale and energy efficiency if they’re to truly expand their DCI connectivity; software and ease of integration are critical but often overlooked factors. For example, they want a wide range of customizable APIs for programmability, no matter the size or architecture of the underlying network (simple point-to-point links, meshed networks, global-scale virtualized networks, etc.).
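
    As an illustration of the kind of programmability operators are asking for, the sketch below posts a point-to-point service request to a generic SDN controller’s REST API. The endpoint, token, and payload fields are hypothetical, since every controller exposes its own schema; the point is simply that capacity between sites becomes something software can request.

        # Sketch: request a point-to-point DCI service through a controller's REST API.
        # The URL, token, and JSON schema are hypothetical; real controllers differ.
        import json
        from urllib.request import Request, urlopen

        CONTROLLER = "https://sdn-controller.example/api/v1/services"  # hypothetical

        def provision_link(a_end: str, z_end: str, capacity_gbps: int) -> dict:
            payload = json.dumps({
                "type": "point-to-point",
                "a_end": a_end,              # e.g. "dc-ashburn-01"
                "z_end": z_end,              # e.g. "dc-dallas-02"
                "capacity_gbps": capacity_gbps,
            }).encode()
            req = Request(CONTROLLER, data=payload, method="POST",
                          headers={"Content-Type": "application/json",
                                   "Authorization": "Bearer <token>"})
            with urlopen(req, timeout=10) as resp:
                return json.load(resp)

        if __name__ == "__main__":
            print(provision_link("dc-ashburn-01", "dc-dallas-02", 100))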

    Data center operators are no longer willing to settle for the technologies the Tier 1 telcos wanted. They want what they want, and they demand it on their own business terms. Because the purchasing clout is now theirs, they are finally getting exactly what they demand.

    Hot Vs. Not

    There are important distinctions between the requirements of traditional Tier 1 telcos and data center operators that must be accounted for when optimizing DCI infrastructures:

    • Where telcos derive revenues from transporting bits, data center operators derive revenue from applications and content (in fact, transport is a cost center).
    • Where equipment flexibility is key to telcos, cost-per-port is a higher priority with data center operators.
    • Where telco network design is driven by reliability and service-level agreements (SLAs), cost and capacity requirements, plus real estate constraints, drive network design for data center operators.
    • Where telcos typically own their fiber assets, data center operators typically lease.
    • Where telcos have typically sourced their control planes from vendors and settled for limited interoperability, data center operators want open-source, open-vendor solutions and SDN.

    A New Market to Conquer

    DCI is an entire ecosystem. Yes, the major ICPs and CSPs drive a lot of the spend, but there are more than 1,000 other companies in the same line of business: dark fiber and wavelength service providers, colocation providers, and leased equipment vendors. All of these enterprises need to connect their own data centers, as well as every link from data center to Internet exchange point to point of presence.

    Data center network spending has now eclipsed that of Tier 1 telephone companies. That’s a trend that has utterly disrupted the optical networking supply chain. Not long ago it was very stable and established – from research and development, to component provider, to subsystem provider, to equipment provider, to value-added reseller, to network builder, to dark-fiber trencher. But no more. Today, players along the old chain are scrambling to secure their place in the new DCI-driven world.

    The Future Beckons

    So, given the unique needs of data center operators, where does the optical networking industry go from here? It’s likely we’ll see devices that are more open, more programmable, denser, cheaper and less power-hungry. They’ll have a migration path to 500 petabits per second, as well as line systems that are open at both the optical and management layers. We’ll see greater spectral efficiency enabled by new modulation techniques and the adoption of Raman amplification and other optical technologies to maintain distance and improve flexibility; and there’ll certainly be more focus on optimized space, cooling and security solutions, and less and less vendor and technology lock-in.

    A fundamental shift toward technology specifically optimized for the DCI opportunity has overtaken the optical networking industry. It looks like it could be the major driver of equipment design decisions for years to come.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:42p
    ARIN Issues Final IPv4 Addresses in Free Pool, Forcing Shift to IPv6

    This article originally appeared at The WHIR

    With the explosion in the number of Internet-connected devices, the IPv4 address space used to identify networked devices has been quickly running out, making the transition to IPv6, which provides additional address space, inevitable.

    This week, the American Registry for Internet Numbers (ARIN), the regional Internet registry for much of North America, issued the final IPv4 addresses in its free pool, meaning that the registry’s IPv4 supply has finally reached depletion.

    At 128 bits, IPv6 has a much larger address space than the current standard, 32-bit IPv4, which faces address exhaustion because of its comparatively small size. IPv6 provides more than 340 trillion trillion trillion (roughly 3.4 x 10^38) addresses, compared to the roughly four billion addresses available with IPv4.
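
    The scale difference is easy to verify directly, since IPv4 addresses are 32 bits wide and IPv6 addresses are 128 bits wide:

        # The IPv4 vs. IPv6 address-space comparison, computed directly.
        ipv4_total = 2 ** 32     # 4,294,967,296 -- about four billion
        ipv6_total = 2 ** 128    # about 3.4 x 10**38 ("340 trillion trillion trillion")

        print(f"IPv4: {ipv4_total:,}")
        print(f"IPv6: {ipv6_total:.3e}")
        print(f"IPv6 offers {ipv6_total // ipv4_total:.2e} times more addresses")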

    IPv6 also provides more flexibility in allocating addresses and routing traffic, eliminating the need for network address translation. However, while IPv4 and IPv6 coexist, translation gateways that mask one protocol behind the other can add latency and strip accurate geolocation and customer analytics data from IPv4 traffic.
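
    Many application stacks can serve both protocols during the transition with a single dual-stack listener. The snippet below is a minimal sketch of that approach using Python’s standard socket module; it assumes the host operating system allows IPV6_V6ONLY to be cleared so that IPv4 clients arrive as IPv4-mapped addresses.

        # Minimal dual-stack TCP listener: one IPv6 socket that also accepts IPv4
        # clients as IPv4-mapped addresses (e.g. ::ffff:192.0.2.10), assuming the
        # operating system permits clearing IPV6_V6ONLY.
        import socket

        srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
        srv.bind(("::", 8080))   # "::" listens on all IPv6 (and mapped IPv4) addresses
        srv.listen(5)

        conn, addr = srv.accept()
        print("client connected from", addr[0])  # IPv6, or ::ffff:x.x.x.x for IPv4
        conn.close()
        srv.close()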

    With the first exhaustion of the ARIN IPv4 free pool, organizations will have to shift their attention to IPv6.

    “The exhaustion of the free IPv4 pool was inevitable given the internet’s exponential growth,” ARIN president and CEO John Curran said in a statement. “While ARIN will continue to process IPv4 requests through its wait list and the existing transfer market, organizations should be prepared to help usher in the next phase of the internet by deploying IPv6 as soon as possible.”

    Even though the free pool has reached depletion, ARIN will continue to issue IPv4 address space to organizations over the coming months.

    Over the past few months, organizations qualifying for large block sizes were given the choice of joining the waiting list for unmet requests or accepting a smaller /24 block that was available. If they chose not to accept the /24 block, that block went back into the inventory.

    In the future, any IPv4 address space that ARIN receives from IANA, or recovers from revocations or returns from organizations may be used to satisfy approved requests on the waiting list for unmet requests. If ARIN is able to fully satisfy all of the requests on the waiting list, any remaining IPv4 addresses would be placed into the ARIN free pool of IPv4 addresses to satisfy future requests.
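
    Read as a process, the policy described above is straightforward: recovered space serves the waiting list first, and only the remainder replenishes the free pool. The Python sketch below is an illustrative paraphrase of that flow, not ARIN’s actual system or policy text.

        # Illustrative paraphrase of the allocation flow described above; this is
        # not ARIN's system, just the prose restated as a simple process.
        from collections import deque

        free_pool = []           # recovered or returned IPv4 blocks land here
        waiting_list = deque()   # approved-but-unmet requests, in order

        def receive_space(block):
            """Space recovered from IANA, revocations, or returns."""
            free_pool.append(block)

        def satisfy_waiting_list():
            """Serve waiting requests first; leftovers stay in the free pool."""
            while waiting_list and free_pool:
                request = waiting_list.popleft()
                block = free_pool.pop(0)
                print(f"issued {block} to {request}")

        waiting_list.append("example-org request #1")
        receive_space("192.0.2.0/24")   # documentation prefix used as a placeholder
        satisfy_waiting_list()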

    According to the latest statistics from Google, IPv6 is used in 21 percent of website connections in the US, but adoption is still below nine percent globally. However, organizations have been aware of the need to deploy IPv6 for quite some time. In a 2010 survey, approximately 84 percent of organizations already had IPv6 addresses or had considered requesting them from their supplier.

    Many hosting and infrastructure providers have also ensured their IPv6 compatibility over the past few years, including Carpathia Hosting, Telehouse, Verio, SoftLayer, CoreLink, The Planet and NTT.

    ARIN board chairman and Internet pioneer Vint Cerf said, “When we designed the Internet 40 years ago, we did some calculations and estimated that 4.3 billion terminations ought to be enough for an experiment. Well, the experiment escaped the lab… It needs room to grow and that can only be achieved through the deployment of IPv6 address space.”

    This first ran at http://www.thewhir.com/web-hosting-news/arin-issues-the-final-ipv4-addresses-in-its-free-pool-forcing-shift-to-ipv6

    5:49p
    Linux Foundation and ODPi Promise Open Source Hadoop Big Data Standard

    This post originally appeared at The Var Guy

    By Christopher Tozzi

    One of big data’s biggest problems is lack of standardization. Today, the Linux Foundation announced a strategy for addressing that challenge by promoting open source standards for Hadoop big data in collaboration with ODPi.

    ODPi defines itself as “a shared industry effort focused on promoting and advancing the state of Apache Hadoop and big data technologies for the enterprise.” The group has grown its membership steadily since launching last February under the name Open Data Platform Alliance. Supporters now include Ampool, Altiscale, Capgemini, DataTorrent, EMC, GE, Hortonworks, IBM, Infosys, Linaro, NEC, Pivotal, PLDT, SAS Institute Inc, Splunk, Squid Solutions, SyncSort, Telstra, Teradata, Toshiba, UNIFi, Verizon, VMware, WANdisco, Xiilab, zData and Zettaset.

    Today, ODPi takes a major step forward by securing official endorsement from the Linux Foundation, which promotes Linux and other open source software. The support turns ODPi into a Linux Foundation collaborative project.

    It also signals the launch of a new platform called ODPi Core, which aims to become “a common reference platform that enables users to realize business results more quickly,” according to the Linux Foundation. The Foundation adds that ODPi Core development will proceed under “an open and transparent planning and release” process directed by the Apache Software Foundation.

    The Linux Foundation says this initiative will bring badly needed standardization to the Big Data world—particularly the one centered around Apache Hadoop, the open source big data platform.

    “The state of the Apache Hadoop demands open standardization and integration that can accelerate and ease deployments among its massive user community,” said Jim Zemlin, executive director at The Linux Foundation. “We’ve seen this model work with open source technologies experiencing rapid growth—projects like Debian, among others—and know it can increase adoption and open up opportunities for innovation on top of an already strong Hadoop community.”

    According to the Linux Foundation, the next major steps in ODPi Core’s development include the release of a specification and reference implementation, as well as the launch of an ODPi Certification Program. The open source community can follow ODPi’s progress via its GitHub repository.

    This first ran at http://thevarguy.com/open-source-application-software-companies/092815/linux-foundation-and-odpi-promise-open-source-hadoop-big-

    6:01p
    Data Center Management and Control: Good SLAs Make Good Neighbors

    In selecting the right colocation provider, the data center management and monitoring tools available to the customer are important considerations. Monitoring power consumption, having a DR-ready environment and truly partnering with the provider are vital points in making the proper data center choice.

    During the planning phases, contracts, expectations, and data center management tools must all be laid out to ensure that everyone is on the same page. When working with a colocation provider, keep the following planning points and ongoing considerations in mind for a good data center rollout.

    • Working with a Service Level Agreement. When selecting the right colocation provider, creating a good SLA and establishing clear lines of demarcation are crucial. Many times, an SLA can be developed based on the needs of the organization and what is being hosted within the data center infrastructure. This means identifying key workloads, applications, servers and more. From there, an organization can develop base service agreements for uptime, issue resolution, response time and more. Creating a good SLA document can take some time, but it’s important to do so carefully, since this document governs the performance of your environment. Some very high-uptime environments will build credits into their SLA; in these situations, for example, a colocation provider could issue credits if power is unavailable (see the sketch after this list for how such a credit might be calculated). Creating an SLA is a partnership between the data center provider and the customer. Expectations must be clearly laid out to ensure that all performance, recovery and other requirements are met. Surprises or unknowns encountered in a production, highly utilized environment can result in lost productivity, time and dollars.
    • Maintenance and testing. Don’t forget, when you buy data center colocation you are buying a slice of critical infrastructure and ongoing maintenance. Without a robust maintenance program, technology will fail. Look for documented MOPs (methods of procedure) and SOPs (standard operating procedures) that are used consistently and improved over time. Make sure your SLA does not exclude maintenance windows or emergency maintenance. Your colo provider should be able to show you their monthly, quarterly, and annual maintenance schedules for all critical elements of the mechanical and electrical systems, including chillers, air handlers, generators, batteries, and UPSs. You should be able to observe and even participate in maintenance exercises. How are you notified about maintenance windows and procedures? Finally, ask the ultimate question: “Do you plan for and test a full utility outage?” Systems need to be designed with sufficient redundancy to allow for proper maintenance, and colocation providers are reluctant to maintain systems if doing so could cause an outage. The industry best practice is to be able to “fix one and break one, concurrent with a utility outage.”
    • Having a DR-ready Contract. For some organizations, moving to a colocation data center is the result of a disaster recovery plan. In these situations, it may very well be possible to integrate a DR contract into an SLA or as a standalone agreement. In this document, the organization and colocation provider establish which internal systems must be kept live and create a strategy to keep those systems running. When designing a contract around a disaster recovery initiative, consider some of the following points:
      • Use your BIA. As mentioned earlier, a business impact analysis will outline the key components within a corporate environment which must be kept live or recovered quickly. This BIA can be shared with your colocation provider to ensure that they are able to meet those requirements. By having such a document, an organization can eliminate numerous variables in selecting a partner which is capable of meeting the company’s DR needs.
      • Communicate Clearly. Good communication between the colocation partner and the organization is vital in any DR plan. A situation in which an unknown system or component (one deemed critical but never communicated) goes down will become a serious problem: aside from bringing the piece back up, there is now the question of responsibility. Knowing who is responsible for which piece during a disaster event will greatly streamline the recovery process.
      • Understand the DR Components and Data Center Offerings. Prior to moving to a colocation facility, establish your DR requirements, recovery time objectives and future goals. Much of this can be accomplished with a BIA, but planning for the future will involve conversations with the IT team and other business stakeholders. Once those needs are established, it’s important to communicate them to the colocation provider. The idea here is to align expectations to ensure a streamlined DR environment.
      • Onsite and Offsite Supplies. If a disaster occurs, you need both onsite and offsite sources of key supplies. Are there onsite supplies of diesel fuel for generators and water for cooling systems? Are there established services in place for delivery of water and diesel fuel should onsite supplies be depleted? Does the colo provider conduct disaster recovery scenarios with key suppliers? For example, what happens if a power outage leaves the offsite diesel fuel supplier unable to fill its trucks?
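
    As referenced in the SLA bullet above, availability credits are typically computed from measured downtime against a committed uptime percentage. The sketch below shows one way such a calculation might look; the 99.99 percent commitment and the credit tiers are hypothetical examples, not taken from any particular provider’s SLA.

        # Sketch: compute an SLA availability credit for a billing month.
        # The uptime commitment and credit tiers are hypothetical examples.
        def availability(downtime_minutes: float, minutes_in_month: float = 30 * 24 * 60) -> float:
            """Measured availability for the month, as a percentage."""
            return 100.0 * (1 - downtime_minutes / minutes_in_month)

        def credit_percent(measured: float, committed: float = 99.99) -> float:
            """Percent of the monthly fee credited back to the customer."""
            if measured >= committed:
                return 0.0
            if measured >= 99.9:
                return 10.0   # hypothetical tier
            if measured >= 99.0:
                return 25.0   # hypothetical tier
            return 50.0       # hypothetical tier

        if __name__ == "__main__":
            avail = availability(downtime_minutes=45)   # about 99.896% for the month
            print(f"availability: {avail:.3f}%  credit: {credit_percent(avail):.0f}%")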

    Today’s world of high uptime demands and resource requirements means that data center professionals must clearly understand how their services are managed. This is done through good communication with partners, understanding the underlying contracts, and ensuring that the service stays aligned with your business. Remember, it’s important to work with partners that can clearly explain their level of service and keep up with your business demands.

    10:05p
    HP Adds Second Third-Party OS to Open Networking Switches

    HP will ship its open networking switches for data centers with an operating system by Pica8, the second non-HP switch software certified for the product line. HP added the option of Linux-based switch software by Cumulus Networks in February.

    HP and several other major suppliers of data center networking gear have been introducing open networking switches that can be used with third-party software since last year. The product lines are targeted at web-scale data center operators like Google, Facebook, and other cloud service providers who want more choice than the incumbent switch vendors have traditionally given them.

    Juniper announced its “white box” line in collaboration with Taiwanese manufacturer Alpha Networks in December 2014. Another Taiwanese manufacturer, Accton Technology Group, makes the hardware for HP’s open networking switches, called Altoline. Dell and Cumulus rolled out Dell’s open networking line with Cumulus software in January of last year.

    Cisco has not done anything along the same lines. Seemingly to demonstrate that the world’s largest data center networking vendor has not locked itself out of the web-scale market by continuing to sell closed, integrated hardware-software bundles, Cisco CEO Chuck Robbins said on an earnings call in August that the company had delivered a custom high-end networking platform for one of the “cloud titans.”

    Even though some of the web-scale operators design their own switches and source them from Asian design manufacturers directly – Google has been doing so for years, and Facebook started on this path recently – HP continues to compete for that business, Sean Maddox, senior business developer for HP Network Support Services, said.

    “We’re talking to the hyperscale players,” he said. “You know who they are by name.”

    But there is also a lot of interest in the product line from large enterprises and companies HP considers mid-market, he added. His team has daily interactions with non-hyperscale operators who want to learn about the economics of the approach.

    Pica8’s network operating system gives HP’s Altoline switches something Cumulus Linux does not: support for OpenFlow, the popular open Software-Defined Networking (SDN) standard. This means the switches can take advantage of applications in HP’s relatively recent SDN App Store, a hub for SDN applications by HP and others.
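
    To give a sense of what OpenFlow support enables in practice, the sketch below installs a simple match/action flow rule on an Open vSwitch bridge using the standard ovs-ofctl tool, driven from Python. The bridge name, address and port are placeholders, and this illustrates OpenFlow-style flow programming in general rather than HP’s or Pica8’s specific tooling.

        # Sketch: push a simple OpenFlow match/action rule to an Open vSwitch
        # bridge with ovs-ofctl. Bridge name, address, and port are placeholders;
        # this shows generic OpenFlow flow programming, not a vendor workflow.
        import subprocess

        def add_forwarding_rule(bridge: str, dst_ip: str, out_port: int) -> None:
            """Match IPv4 traffic destined for dst_ip and send it out one port."""
            flow = f"priority=100,ip,nw_dst={dst_ip},actions=output:{out_port}"
            subprocess.run(["ovs-ofctl", "add-flow", bridge, flow], check=True)

        if __name__ == "__main__":
            add_forwarding_rule("br0", "10.0.0.2", 2)
            # Verify the installed rule with: ovs-ofctl dump-flows br0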

    If interest in open networking and commodity switches is indeed growing among non-hyperscale customers, a big question for HP is what the trend will mean for its more traditional, integrated networking solutions, since large enterprise and mid-market customers have been the primary buyers of those solutions.

    Today, there is enough differentiation between the two approaches to ensure one doesn’t eat into the other’s market share, according to Maddox. HP’s traditional networking lines provide network fabric capabilities and a rich feature set the commodity line does not. “There are more knobs you can turn on those things than you’re going to find today in an open networking solution,” he said.

