Data Center Knowledge | News and analysis for the data center industry
 

Thursday, April 17th, 2014

    12:00p
    With Performance Hub, Equinix Targets the Enterprise

    Equinix thinks the enterprise is its big opportunity going forward. The focus of this effort is Performance Hub, a network optimization offering that provides an attractive entry point into the Equinix model. It’s part consulting and part formalization of proven designs.

    Performance Hub is designed to improve network and application performance. It is an extension node of a company’s existing network that lets the company place equipment in Equinix data centers and connect directly to network and cloud providers for reduced latency.

    The goal for Performance Hub is to expand Equinix’s customer base beyond the Internet sector and into the enterprise. The multi-tenant data center giant has already established itself as a major player among cloud providers, Internet companies, media and advertising specialists, and systems integrators. The next step is products and services to court the enterprise, the segment where it sees the most potential for growth going forward.

    Entry Point for the Enterprise

    “We see this as an entry point to the Equinix model for the medium enterprise and above,” said Ryan Mallory, Sr. Director for Global Solutions Architects at Equinix. “We’re helping our network and cloud providers provide service to them.  For medium and large enterprises, they can build infrastructure and feel like a Tier 1 provider.”

    So far Equinix has seen good penetration and adoption in oil and gas, beverage, and logistics companies.

    Performance Hub is a technical solution that originated two years ago in internal testing with a very large client. It became a formalized process and has just recently launched officially.

    “What Performance Hub allows us to do is it allows us to look at the customer’s architecture and optimize it,” said Mallory. “Here’s a cabinet, here’s some routing and switching and WAN acceleration. We’ve seen and tested this in Equinix-validated designs. While we’ve defined the program around Performance Hub, each individual client is very unique.”

    What Equinix looks at, from the customer perspective, is the demographics of the market. Is the customer a single office, or a hundred offices?

    “With that, we can start sizing the hardware infrastructure that they need,” said Mallory. “Do they need bandwidth aggregation calculation? From a bandwidth perspective we help identify the technical components, whether it be routing or switching. Above that, there’s some flexibility in terms of what the customer wants. Do they want a public IP gateway?  What kind of WAN acceleration do they need?”
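
    To make the sizing conversation concrete, here is a purely illustrative sketch of the kind of bandwidth aggregation arithmetic described above. The office counts, per-office bandwidth and oversubscription ratio are hypothetical assumptions, not Equinix figures.

```python
# Illustrative only: toy sizing math in the spirit of the assessment
# described above. All numbers are hypothetical, not Equinix data.

def size_hub_uplink(offices, per_office_mbps, oversubscription=3.0):
    """Estimate the aggregate uplink a hub site needs, in Mbps."""
    raw_demand = offices * per_office_mbps
    # Branch offices rarely peak at the same time, so a modest
    # oversubscription ratio is commonly applied when aggregating.
    return raw_demand / oversubscription

if __name__ == "__main__":
    # A single-office customer versus a hundred-office customer.
    print(size_hub_uplink(offices=1, per_office_mbps=200))    # ~67 Mbps
    print(size_hub_uplink(offices=100, per_office_mbps=200))  # ~6,667 Mbps, i.e. ~7 Gbps
```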

    Growing Focus on Services

    These additional services make the idea of outsourcing to Equinix an easier one for enterprises.

    “With Performance Hub, what we’ve seen is tremendous savings in terms of cost aggregation inside the IBX (data center),” said Mallory. “It allows you direct access to clients or companies inside that IBX. Putting a Performance Hub in, you can cross connect to Cisco, etc., whomever you want to.”

    The big picture at Equinix is that it has been moving beyond colocation – a product defined in terms of space and power – and developing services around the enterprise. “We’re continuing to expand our focus around solutions,” said Mallory. “If they want us to help to build the whole deployment model for them, we can do that. We’ve also formalized Equinix consulting services.”

    Performance Hub also ties directly into the customer marketplace the company launched several years ago, which continues to evolve as the company adds more features. One recent addition is circuit quoting tools. “You can go in and understand what carriers are in what building,” said Mallory. “This is in partnership with Global Capacity.”

    Performance Hub is supported by 30 architects around the globe, and the company plans to add 10 more. While the concept kicked off two years ago with a proof of concept, the Performance Hub offering was only officially launched two weeks ago.

    12:30p
    Will High-Frequency Trading Become a Flash in the Pan?

    Patrick Mannion is the Director of Data Center Strategy in Align’s Professional Services team.

    To the typical data center salesperson, a client who wants to perform high‐frequency trading (HFT) in their facility is a wonderful thing. These clients may be a little demanding on service levels, and they may push the envelope for a site’s capabilities or delivery timelines, but the clients themselves recognize this and are willing to pay a premium for the increased service levels they receive. Not surprisingly, the typical salesperson wants to take care of these clients and nurture them for the long term.

    Given recent events, those salespeople and sales managers should be getting a little concerned, especially if they understand the clients’ business models. Most publicly, the Volcker Rule — part of the Dodd‐Frank Wall Street Reform and Consumer Protection Act — threatens the basic legality of high frequency trading.

    Also, the recent publication of Michael Lewis’ book, Flash Boys, and the subsequent 60 Minutes exposé on the topic have brought serious concerns around the ethics of the practice into the Zeitgeist. Combine these public relations stresses with the realities of the business, and one conclusion becomes apparent: HFT as a business practice is on thin ice.

    Origins of High Frequency Trading

    In 1998, when the U.S. Securities and Exchange Commission authorized the use of electronic exchanges for stockbrokers, we saw the first glimmer of firms using technology to game the system. Arbitrage plays, where stocks could be bought from one player and sold to another at a significant price difference, were made more obvious through automated trade execution and were only a few keystrokes away. Within a few years, trade execution went from being measured in minutes to less than a second, and trading volume on the major exchanges skyrocketed.

    Over the next decade, the gap of time between the trade bid and execution continued to tighten. An interesting dynamic began to appear — the Cold War Arms Race of the late 20th century was reborn in financial circles.

    The quest for speed started with network connections and circuits, then server hardware and trading platforms and finally through physical moves out of the financial giants’ data centers to colocation nearer the exchanges’ order management systems and matching engines. In eliminating all the places a delay could occur, market participants battled each other in a “race to zero latency” along the path a trade would travel. By 2010, the industry measured trade executions in milliseconds.

    Trades Executed in Nanoseconds

    As we enter 2014, many of the big players are talking nanoseconds within their platforms and microseconds for exchange-based executions. At some point soon, technology‐ and location‐focused improvements will stall, and the theoretical maximum speed for a trade will be reached. HFT will no longer be about speed and throwing technology at the race, but instead be about strategy and more sophisticated trading algorithms.
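
    Some back-of-the-envelope arithmetic shows why the race eventually becomes a matter of physical proximity rather than faster gear. Light in optical fiber propagates at roughly 200,000 km per second (about two-thirds of its speed in a vacuum), so each trading time unit corresponds to a hard distance limit; the sketch below simply works out those distances.

```python
# Back-of-the-envelope numbers behind the "race to zero latency":
# how far a signal travels in optical fiber during one trading time unit.
# Assumes ~200,000 km/s propagation in fiber (roughly 2/3 of c).

FIBER_KM_PER_SEC = 200_000.0

for label, seconds in [("1 millisecond", 1e-3),
                       ("1 microsecond", 1e-6),
                       ("1 nanosecond", 1e-9)]:
    meters = FIBER_KM_PER_SEC * seconds * 1000
    print(f"{label}: signal covers about {meters:.1f} meters of fiber")

# Roughly 200,000 m, 200 m and 0.2 m respectively -- which is why shaving
# microseconds ultimately means colocating next to the exchange's matching
# engine rather than buying a faster circuit.
```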

    Of course, all of these high‐speed circuits, high‐end servers and routers and proximity‐focused colocation come at a constantly increasing price, not to mention the army of programmers and system administrators behind the algorithms who are trying to stay ahead of their opponents on the battlefield with new approaches and alternative trading strategies. In parallel, the execution of successful trades — known as the “fill rate” — is constantly decreasing as peers ramp up their own platforms and steal away opportunities as quickly as they appear.

    As the race goes on, it becomes harder and more expensive for an HFT shop to compete. Between finding a bleeding‐edge technical advantage and building algorithms that can successfully find openings for trades, the brokerages are fighting to capture fractions of pennies on each execution. In fact, the TABB Group estimates that HFT‐focused profit is falling quickly — from $7.3 billion in 2009 to $1.3 billion in 2013 across all markets. [1]

    Changes Within the Ecosystem: What Lies Ahead?

    As their profit margins are squeezed from both directions, HFT firms will look to reduce their costs at every point in the chain. Focusing on their core competencies, the strategy definition and execution and the algorithms, they will offload the physical management of the data center and its gear. This provides an opening for third parties to step in and provide value through equipment procurement, technology operations teams and business-aware, intelligent move, add and change processes.

    There is another pressure to consider as well — the riskiness of the HFT proposition in general. In 2010 and 2013, the financial world was rocked by two so‐called “flash crashes” where markets saw an immense drop in prices occur in an incredibly small period of time. Directly tied to the use of HFT and “black‐box” strategies searching for arbitrage opportunities, these flash crashes were unpredictable and dangerous to the entire marketplace. Billions of dollars were lost and incompletely regained in a matter of seconds, and there is no single explanation of what triggered the crashes or recoveries across the entire marketplace.

    Regulators are rightly concerned that these types of events will recur, possibly at greater magnitude and with a weaker subsequent recovery, or none at all. This brings more uninvited scrutiny onto the HFT community. That scrutiny has already uncovered the fact that most HFT shops hedge their risk through alternative HFT strategies. They effectively place other bets to control their losses, but those hedges rely on the very methods whose failure they are meant to protect against.

    Modern data center providers with an HFT client base are typically aware of these concerns. Across the industry, there is a movement afoot to contain these clients and hedge against the potential loss of momentum.

    Some providers have custom‐built facilities to house these clients, satisfying the clients’ appetite for proximity to their trading partners while reducing the potential losses to a small number of easily sold sites. Other providers have attempted to “salt‐and‐pepper” the clients with a wide vertical market mix in the same facility in an effort to protect their investments. Whatever the approach, the fact remains that the existing client base, hundreds of thousands of pieces of equipment across tens of thousands of cabinets and circuits devoted to high‐frequency trading as we know it today, will not last forever.

    When the change comes, what form will it take? Will the HFT platforms shrink, or be transformed into something smarter? Will regulators force these market players to change their practices, or will the exchanges take drastic action to limit the effect of HFT? If they do, what will the effect be on the data center industry? Will the financial vertical lose its lucrative luster, while those providers that cater to it become more commoditized? Will the New Jersey and Chicago markets, home to the exchanges and the major players, feel a contraction or even bottom out?

    Endnote

    [1] Wall Street Market Structure Expert, TABB Group Research Firm Founder Larry Tabb Responds to CBS 60 Minutes Interview with Flash Boys Author

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:00p
    It’s All About the App, as Red Hat Drives Containerized Application Delivery

    Red Hat‘s 2014 Summit kicked into high gear Tuesday, with innovations for its Linux Container vision of streamlined application delivery, interoperability announcements with container provider Docker, and broad customer support for its OpenStack-powered product offerings focused on delivering an open hybrid cloud. The event conversation can be followed on Twitter hashtag #RHSummit.

    Containerized Application Delivery

    Red Hat launched several new Linux Container innovations to support its vision for streamlined application delivery and orchestration across bare metal systems, virtual machines, and private and public clouds via containers and Docker technology. Project Atomic, a new community-driven effort, will develop technologies for creating lightweight Linux Container hosts based on next-generation capabilities in the Linux ecosystem. The tools that result from Project Atomic will allow creation of a new variant of Red Hat Enterprise Linux, set to debut with Red Hat Enterprise Linux 7. The Atomic container host provides the essential functionality for running application containers such as Docker containers, while maintaining a small footprint and allowing for atomic updates.
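
    To make the container model concrete, here is a minimal sketch of launching an application container of the kind an Atomic host is built to run. It uses the Docker SDK for Python, which is an assumption made purely for illustration; the announcement covers Atomic and Docker themselves, not this client library, and the image and command are arbitrary examples.

```python
# Minimal sketch: run a short-lived application container via the Docker
# SDK for Python (the "docker" package). Assumes a local Docker daemon;
# the image and command are example choices, not part of the announcement.
import docker

client = docker.from_env()  # connect to the local Docker daemon

output = client.containers.run(
    image="fedora",                              # example image
    command=["echo", "hello from a container"],
    remove=True,                                 # delete the container after it exits
)
print(output.decode())                           # -> "hello from a container"
```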

    GearD is a new OpenShift Origin community project to enable rapid application development, continuous integration, delivery and deployment of application code to containerized application environments. It provides integration between application containers and deployment technologies like Git, allowing developers to go quickly from application source code to containerized application stacks deployed onto production systems. Red Hat also announced an expansion of the Red Hat Enterprise Linux 7 high-touch beta program to include Red Hat Enterprise Linux Atomic Host and Docker container technologies, enabling select customers to evaluate these new container technologies in enterprise environments.

    Open source containers help to separate infrastructure services from the application, allowing portability not only across different clouds but also across physical and virtual environments. This means that the container consumes only the services it needs, delivering on the extreme flexibility promised by the open hybrid cloud. As an increasing number of enterprises embrace DevOps philosophies, Red Hat expects container technologies to play a significant role in how organizations deliver and manage applications. Pairing the Red Hat Enterprise Linux platform and its extensive ecosystem of support and services with an enterprise-class, container-specific host is meant to deliver on the comprehensive vision of containerized application delivery for the open hybrid cloud.

    “As the cloud enters the computing mainstream and applications, not infrastructure, become the focus of enterprise IT, the operating system takes on greater importance in supporting the application and the infrastructure, without sacrificing the basic requirements of security, stability and manageability,” said Paul Cormier, president, Product and Technologies at Red Hat. ”Our newly-announced container offerings, including Red Hat Enterprise Linux Atomic Host, will drive this vision forward, helping enterprises embrace streamlined application delivery through the power of Linux Containers and Docker, and enabling the free movement of applications across cloud, virtual and physical environments, a key tenet of the open hybrid cloud.”

    Red Hat and Docker Interoperability

    Container solution provider Docker and Red Hat announced a deeper relationship that builds on the existing technology collaboration between the companies. As part of the expanded collaboration, Docker and Red Hat will work together on interoperability between Docker’s hosted services and Red Hat certified container hosts and services. Red Hat has worked to extend Docker for inclusion in Red Hat Enterprise Linux, with new production-grade file-system options, integrated systemd process management, and use of SELinux to provide military-grade security. Red Hat has also packaged Docker for Fedora and Red Hat Enterprise Linux, and launched the Red Hat Container Certification in March 2014.

    1:30p
    Microsoft Launches Data Platform, Adds Internet of Things Service

    Alongside the launch of SQL Server 2014 at a customer event Tuesday, Microsoft (MSFT) CEO Satya Nadella outlined the company’s path to deliver a platform for ambient intelligence. Stressing a “data culture,” Nadella also shared the results of new IDC research that shows that a comprehensive approach to data can help companies realize an additional 60 percent return on their data assets — a worldwide opportunity of $1.6 trillion.

    “The era of ambient intelligence has begun, and we are delivering a platform that allows companies of any size to create a data culture and ensure insights reach every individual in every organization,” Nadella said.

    Microsoft defines its data platform with new products and services built for the era of ambient intelligence. After announcing the platform last summer, Microsoft has now released SQL Server 2014, which delivers real-time performance through built-in in-memory technology, along with public cloud scale and disaster recovery via Microsoft Azure.

    Under a limited public preview, the Microsoft Azure Intelligent Systems Service will help customers embrace the Internet of Things and securely connect to, manage and capture machine-generated data from sensors and devices, regardless of operating system. Finally, the general availability of the Analytics Platform System (APS) combines the best of Microsoft’s SQL database and Hadoop technology in one low-cost offering that delivers “big data in a box”.
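
    The Intelligent Systems Service is only in limited preview, so its API is not shown here. Purely as an illustration of the kind of workload it targets, the sketch below has a device posting machine-generated sensor readings to a placeholder HTTPS collection endpoint using only the Python standard library; the endpoint URL and payload fields are hypothetical.

```python
# Hypothetical illustration only: this is NOT the Azure Intelligent Systems
# Service API. It sketches the generic task the service addresses -- a device
# pushing sensor telemetry to a collection endpoint over HTTPS.
import json
import time
import urllib.error
import urllib.request

ENDPOINT = "https://example.invalid/telemetry"   # placeholder URL, not a real service

reading = {
    "device_id": "sensor-42",        # made-up device identifier
    "timestamp": time.time(),
    "temperature_c": 73.5,
    "vibration_mm_s": 2.1,
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(reading).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
try:
    with urllib.request.urlopen(request) as response:
        print("collector responded with HTTP", response.status)
except urllib.error.URLError as err:
    print("no collector reachable (placeholder endpoint):", err.reason)
```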

    These new solutions build on 12 months of innovation — including Power BI for Office 365, a cloud-based, self-service business intelligence solution with groundbreaking natural language capability; Azure HDInsight for elastic Hadoop in the cloud; PolyBase to bring structured and unstructured data together in a data warehouse appliance; and Power Query for Excel, which makes it easier for people to discover data — to deliver the most comprehensive data platform with real-time performance built into everything.

    New research commissioned by Microsoft and conducted by IDC estimates that organizations could realize a “data dividend” of roughly $1.6 trillion in additional revenue, lower costs and improved productivity over the next four years by putting in place a holistic approach to data that spans datasets, analytics and more. The research was conducted among more than 2,000 mid-sized and large organizations in 20 countries worldwide.

    “Customers who take a comprehensive approach to their data projects realize a higher data dividend than customers who take a point-by-point approach,” said Dan Vesset, program vice president, Business Analytics and Big Data, at IDC. “This new research shows that by combining diverse data sets, new analytics and insights to more people — at the right time — businesses worldwide can tap into a more than trillion-dollar opportunity over the next four years.”

    2:00p
    Linode Invests $45 Million to Retool its Infrastructure

    Cloud computing specialist Linode is investing $45 million in its infrastructure and undertaking a major corporate relaunch. This is the largest capital investment in the company’s history, and comes with an overhaul designed to position it to better compete with market leader Amazon Web Services (AWS) in the cloud hosting space.

    “The cloud hosting industry is changing and Linode is changing with it,” said Christopher Aker, founder and CEO of Linode.  “Developers are constantly seeking simpler, high value alternatives to AWS, and we are uniquely positioned to serve that demand. With faster SSDs, more flexible payment options, and unprecedented customer support, we are not only setting a higher standard of service but are also proud to still be the only cloud hosting company built and owned 100 percent by developers.”

    Linode has invested in server hardware and networking equipment, including a shift to native solid state drives (SSD) that will improve customer performance over the previously standard hard disk drives (HDDs). The amount of RAM received across all plans has doubled, and network throughput to each host server has increased from 2Gbps to 40Gbps.

    In short, Linode is going for a rebirth, from Linux Virtual Private Server host to full-on Infrastructure-as-a-Service player.

    In addition to the improvements to its infrastructure, Linode is also targeting enterprise customers with larger plans built specifically for their needs. Linode has a history of being strong with the self-serve DIY crowd. It has a significant game-hosting operation, and is home to many development projects and startups. However, its base has broadened over the years, with the company landing some Fortune 50 companies and large enterprises.

    It’s been adding several features to appeal to the business world. In recent months the company has released several new products, such as NodeBalancer SSL, a cloud-based load balancing service. It expanded into managed services with Linode Managed, the company’s round-the-clock incident response service, and it added Longview, a server statistics and graphing service.

    The new positioning and cash will help it grow a burgeoning business that extends beyond its developer roots and into new territories. It’s a direct competitor with AWS, Rackspace, and IBM.

    Free hardware and service upgrades include:

    • SSDs: All new Linodes are available on SSD-powered host servers; eliminating spinning disks and cutting provisioning time to well under a minute while also improving application response time.
    • RAM: All Linodes now have double the amount of memory.
    • CPU: All Linode host servers come with the latest Intel® Xeon® E5 2680v2 Ivy Bridge processors; a full-powered CPU offering higher core counts and better performance than any other cloud hosting provider.
    • NET: Each Linode host server now has 40Gbps of network connectivity. Inbound throughput to each Linode is 40Gbps, and outbound can reach up to 10Gbps.
    • Hourly Billing: Linode now bundles compute, persistent storage, and network transfer into a simple hourly rate, enabling customers to pay for only what is used with the predictability of a monthly cap that does not exceed current monthly rates (a quick sketch of the math follows this list).
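
    As a quick illustration of that billing model, the sketch below meters usage hourly and caps the charge at the plan’s monthly rate. The rates shown are made up for illustration and are not Linode’s actual pricing.

```python
# A minimal sketch of the billing model described above: pay hourly for
# what you use, with the monthly charge capped at the plan's monthly rate.
# The rates are invented for illustration, not Linode's 2014 price list.

def monthly_charge(hours_used, hourly_rate, monthly_cap):
    """Hourly metering with a cap equal to the plan's monthly rate."""
    return min(hours_used * hourly_rate, monthly_cap)

# A Linode spun up for a weekend versus one left running all month.
print(monthly_charge(hours_used=48, hourly_rate=0.03, monthly_cap=20.0))   # 1.44
print(monthly_charge(hours_used=744, hourly_rate=0.03, monthly_cap=20.0))  # 20.0 (capped)
```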

    Linode made a significant investment last spring to bolster network speed. It implemented Cisco Nexus 7000 and 5000 series switches and Nexus 2000 series Fabric Extenders designed to improve redundancy and load balancing to each node.

    The company has six geographically diverse data centers. This is the largest overhaul of Linode’s infrastructure since its founding in 2003. Since then, the company has launched over 5,000,000 virtual machines and has more than 250,000 customers worldwide.

    2:30p
    Why Some IT Equipment Racks Need High Power Deployments

    The evolution of the current data center infrastructure has allowed many organizations to deploy new types of workloads, support more users, and enhance their overall business model. Now we have virtualization, big data, and a lot more cloud computing. Although this has certainly allowed the business to do a lot more, it also puts additional resource strain on the data center environment.

    Still, many data center managers are doing a good job conserving energy – decreasing PUE, raising data center temperatures, using air-side economizers to reduce energy consumption for cooling – but average power consumption at the rack is still going up. In fact, the increased efficiency means more power is available for servers to support data center growth, and data center managers find themselves deploying more and more power to their IT equipment racks to keep up with power-hungry devices.
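
    To see why efficiency gains translate into more power at the rack, consider PUE, the ratio of total facility power to IT equipment power: at a fixed utility feed, a lower PUE leaves more of that feed for IT load. The sketch below works through the arithmetic with illustrative numbers that are not drawn from the whitepaper.

```python
# Why better facility efficiency shows up as more power at the rack:
# PUE = total facility power / IT equipment power, so at a fixed feed
# a lower PUE leaves more power for IT load. Figures are illustrative.

def it_power_available(total_facility_kw, pue):
    return total_facility_kw / pue

feed_kw = 1000  # a hypothetical 1 MW facility feed
for pue in (1.8, 1.4):
    print(f"PUE {pue}: about {it_power_available(feed_kw, pue):.0f} kW available for IT")

# PUE 1.8 -> ~556 kW of IT load; PUE 1.4 -> ~714 kW. The extra ~158 kW
# typically lands on the racks, one reason per-rack power keeps rising.
```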

    According to this whitepaper from Raritan, nearly half (49%) of the data center managers polled had a maximum rack power density of 12kW or less. Their expectations were that two years later, only one-third (33%) would have a maximum rack power density of 12kW or less. Some data centers today have racks wired to provide as much as 30kVA.

    In this whitepaper, we quickly learn the key considerations surrounding the deployment of high power into modern IT equipment racks. Remember, when considering power demand, it is important to determine and design for peak actual demand: designing to IT equipment nameplate ratings results in excessive overprovisioning, while designing for average power consumption may not be sufficient for periods of peak demand.
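
    The difference between those three sizing approaches is easy to quantify. The sketch below compares a hypothetical rack of 20 servers sized on nameplate ratings, measured peak draw, and average draw; the wattage figures are illustrative, not from the whitepaper.

```python
# Illustrative comparison of the three sizing approaches discussed above:
# nameplate rating, measured peak draw and average draw. Numbers are hypothetical.

servers_per_rack = 20
nameplate_w = 750   # rating on the power supply label
peak_w      = 450   # highest draw actually measured under load
average_w   = 300   # typical steady-state draw

for label, watts in (("nameplate", nameplate_w),
                     ("measured peak", peak_w),
                     ("average", average_w)):
    print(f"Rack sized on {label}: {servers_per_rack * watts / 1000:.1f} kW")

# nameplate: 15.0 kW (over-provisioned), measured peak: 9.0 kW (the sensible
# target), average: 6.0 kW (risks tripping breakers during demand spikes).
```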

    Download this whitepaper today to learn about the necessary aspects around high power deployments. This includes:

    • Trends in Data Center Power Deployment
    • Drivers for High Power Racks
    • Data Center Power Distribution Around the World
    • What Is High Power?
    • High Power, High Outlet Density
    • High Power, Low Outlet Density
    • Branch Circuit Protection
    • Intelligent Rack PDU Feature Considerations
    • The Advantages of Higher Voltage for High Power Racks
    • And much more!

    As the modern data center continues to evolve and take on more complex workloads, it will be critical to evaluate power requirements and deployment best practices. Whether you operate a large, medium or small data center, it may be time to consider deploying high power to at least some of your racks. Here’s the reality: high-density racks can be deployed in small, medium or large data centers. Even in Raritan’s own data center, raising temperature set points has freed up cooling capacity to support higher-density rack loads. All of these efficiencies and power controls will help create a more optimally run data center, meeting both today’s and tomorrow’s demands.

    3:00p
    Tata Expands Global Data Center Network

    Global network service provider Tata Communications announced that it will use strategic partnerships to expand its data center footprint into Australia, Germany, Austria and Malaysia. The company already has over a million square feet of colocation space across the globe; the state-of-the-art data centers of NEXTDC in Australia, Interxion in Germany and Austria, and Pacific Link Telecom (PLT) in Malaysia will act as extensions of that footprint. Tata’s data centers are fully integrated into the company’s global IP network, offering maximum traffic capacity into and out of its facilities.

    “As technologies evolve and newer trends like BYOD and video streaming become a central part of enterprise business needs, it is important to have these applications hosted on a global data centre network to provide a uniform experience to the end-user,” said Srinivasan CR, Vice President, Data Centre Services at Tata Communications. ”Additionally, hosting applications in interconnected data centre enables the user to access the data from anywhere, anytime and on any device. Expanding our data centre footprint forms part of our strategy to enable businesses with the most robust backbone for their organisations’ digital infrastructure.”

    Tata Communications has 80 offices in more than 40 countries and operates the largest privately owned submarine cable network in the world, which accounts for over 20 percent of the world’s internet routes.

    6:03p
    Vantage Data Centers Boosts Credit Line to $275 Million

    Vantage Data Centers has increased its revolving credit facility by over 30 percent, from $210 million to $275 million. The company said the additional credit will allow it to continue its expansion, including forays into new geographic markets.

    “We are pleased to close on this oversubscribed, upsized financing with the strong support of five existing banks and three new lenders,” said Sureel Choksi, President and CEO, Vantage Data Centers. “This financing enables Vantage to continue to support the growth of our customers by pursuing significant expansion in Santa Clara, Quincy, and potential new markets.”

    Vantage has campuses in Santa Clara, California, and Quincy, Washington. The portfolio includes four enterprise-grade data centers totaling over 100 megawatts of potential capacity. Technology investor Silver Lake, which has over $20 billion in combined assets under management and committed capital, backs Vantage.

    RBC Capital Markets, Bank of America Merrill Lynch, KeyBank Capital Markets, SunTrust and Regions Capital Markets served as Joint Lead Arrangers. ING Capital was Managing Agent, and RBC Capital Markets served as the Sole Book Running Manager. Royal Bank of Canada was the Administrative Agent. The bank syndicate also includes Barclays and Western Alliance.

    6:09p
    Bomb Threat at Google Data Center Prompts Evacuation

    The Google data center in South Carolina was evacuated this morning after a report of a bomb threat, according to local media.

    The Berkeley County sheriff’s department said that a caller had alleged this morning that there was a bomb at the center. Google staff were evacuated, and deputies from Berkeley County searched the building, assisted by the bomb squad from Charleston County, according to the Post & Courier. No threats had been found as of 2:45 p.m.

    “Our first priority is the safety and security of everyone on the site,” a Google spokesperson said. “We’re working with the relevant authorities to investigate the situation and are grateful to emergency services for their thorough and timely assistance.”

    We’ll update the story as we know more.

    These kinds of security threats are unusual, as all Google data centers are protected by extremely tight security measures. The buildings are fenced, with round-the-clock access control from on-site security staff, and the entire grounds and data center are under video surveillance. Here’s a video overview of the security measures Google uses to protect its data centers.

