Data Center Knowledge | News and analysis for the data center industry

Wednesday, October 29th, 2014

    12:00p
    Allied Fiber Rolls Out Network, With Modular Colo Attached

    Jumping onto the fast lane of the Internet usually means finding an on-ramp in one of the major Internet peering points in big-city data hubs. Allied Fiber is seeking to change this by creating on-ramps and off-ramps everywhere, tapping dark fiber and modular data centers to bring the core of the Internet closer to the edge of the network.

    Allied Fiber CEO Hunter Newby has been quietly building this dark fiber network since 2010. Last week the company announced a milestone, signing a deal with Summit Broadband, which will expand its operations using colocation facilities along a new dark fiber route in Florida.

    Newby called the Summit Broadband deal “a perfect fit with our mission of linking the submarine cable systems in the United States while also providing direct, physical access for all local communities along the way.”

    Allied Fiber’s business is built atop dark fiber capacity: fiber-optic cabling in underground conduits that is yet to be “lit” with Internet traffic. It is building network-neutral dark fiber routes along railway lines owned by Norfolk Southern Railway. Unlike most fiber routes, which operate like a limited-access highway between two points, Allied Fiber uses a system of ducts to provide periodic on-ramps and off-ramps. Better access to dark fiber could improve connectivity to wireless towers and rural networks outside of major markets, bringing the concept of a data center “meet me room” (MMR) to new geographies.

    Tapping modular enclosures

    This type of network access requires both fiber and facilities. Newby is using modular data centers to create low-cost colocation facilities along Allied Fiber’s network. By placing network equipment in these colo centers, Summit Broadband can allow users in places like Rockledge, Florida to tap the connectivity of its global backbone through the undersea cables at a landing station in Boca Raton.

    Allied Fiber envisions a network of carrier-neutral facilities that can bring the rich connectivity of big-city network hubs to second-tier markets along its route.

    “We are one big, distributed neutral colo,” said Newby, who was an early innovator in the interconnection model as Chief Strategy Officer at Telx. “I want to create and breed a thousand networks.”

    Allied Fiber recently completed its Miami to Jacksonville fiber route, which is lined with six modular colo sites. The company plans to complete a network buildout to Atlanta by June, adding five more colo sites along the route.

    The southeastern route is part of a larger network construction project laying fiber along rights-of-way for the Norfolk Southern railroad. A Northeast route will run from Chicago to New York to northern Virginia, with at least 20 Allied Fiber colocation facilities lining the route.

    Optimized for network gear

    Each colo site features a 1,200 square foot facility composed of three modular components that are assembled on-site. The units are built by CellXion, a unit of Sabre Industries with experience in the mobile infrastructure market. They allow companies to easily add capacity for mobile traffic, edge caching, content delivery and cloud computing connectivity.

    “These are basically cell tower huts that we’ve made bigger,” said Newby. “It’s a battle-hardened MMR in a box.”

    The colo modules are designed to be multi-tenant, with generator support, a fence with perimeter security, a mantrap and monitoring via a remote NOC. The locked cabinets are filled with network gear rather than servers, with an average power density of about 4 kW a rack, somewhat less than the 6 kW to 8 kW you might see in a data center.

    “It’s designed to be a core transport facility,” said Newby. “We can add capacity by building adjacent data centers from containers or modular units. We can work with any of the modular vendors on this.”

    Newby likes the economics of the modular approach. “This way, I don’t need to make a big bet on capacity in Rockledge, Florida,” he said. “When we get to 75 percent capacity, we buy another unit and bring it in on a crane. I can scale to 3,600 feet on the same property.”

    Building upon experience

    The Florida project provides an opportunity for Allied Fiber to prove its model, and build momentum for expansion. Newby believes that the ability to connect cell towers and edge networks directly to long haul fiber will prove to be a compelling model, offering cost savings and ultimately greater choice in network providers.

    Newby points to his experience with Telx, which built a major operation connecting networks inside 60 Hudson Street, a major Manhattan telecom hub, that soon became the nexus of a national network of sites. Newby believes providing neutral access to dark fiber and colo sites will prove appealing well outside the major data center markets.

    “People would say ‘why do you think you can do this?’ It’s like Telx,” said Newby. “I learned a lot about the process of how to get to critical mass.

    “There are a lot of skeptics and naysayers, because this is bold,” he said. “There are also a lot of people rooting for us.”

    Here’s a look at Allied Fiber’s network model and how Newby envisions different industry players connecting to it:

    A diagram of the Allied Fiber approach to connect its dark fiber route to various properties, including modular data centers.

    3:30p
    WAN Orchestration Leveraging Big Data

    Dr. Cahit Akin is the co-founder and chief executive officer of Mushroom Networks, a privately held company based in San Diego, California, that provides broadband products and solutions for a range of Internet applications.

    Multi-office organizations can’t compete at their peak without high-performance, highly reliable IP connectivity linking their branch offices to their private, public and hybrid clouds. The new generation of applications and services that powers various functions within the multi-office organization depends heavily on the IP connectivity infrastructure between offices, as well as on connectivity to the rest of the world.

    Performance, reliability and cost are the parameters that directly shape the IT department’s IP connectivity and architecture decisions. However, unlike earlier corporate networks, today’s enterprise networks are living, breathing organisms that change, fluctuate and interact in complicated ways with the flows filling them. IT departments need to make sense of this complex system and manage it accordingly in order to ensure that service level agreements (SLAs) are delivered to their constituency – the employees of the company.

    WAN orchestration is the technique that enables just that – an intelligence layer that sits on top of the WAN as an overlay. It is a powerful concept for facilitating the understanding, re-engineering and management of enterprise WANs. WAN orchestration resides either in a physical network appliance or in a virtual machine running the orchestration software, which handles the synthesis, management and monitoring of WAN networks.

    Understanding the network

    WAN orchestration refers to the intelligence layer that can measure and make sense of the various characteristics of the WAN resources. The key capability to look for is coverage of the parameters that will be of the highest value in managing the network.

    Some “must have” parameters include capacity, loss rate, latency and jitter, captured both as instantaneous readings and as time series. Depending on the implementation, there will be various other parameters and derived parameters that the WAN orchestrator probes actively or sniffs passively from the network.
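
    As a concrete illustration – not Mushroom Networks’ implementation – the sketch below shows how an orchestrator might keep both the latest sample and a rolling time series per WAN link. The link name, sampling window and field names are assumptions made for the example.

```python
import time
from collections import deque
from dataclasses import dataclass, field

@dataclass
class LinkMetrics:
    """Rolling record of one WAN link's health (illustrative only)."""
    name: str
    # Roughly 24 hours of samples at an assumed 30-second probe interval.
    history: deque = field(default_factory=lambda: deque(maxlen=2880))

    def record(self, capacity_mbps: float, loss_rate: float,
               latency_ms: float, jitter_ms: float) -> None:
        # Store the instantaneous sample; the deque itself is the time series.
        self.history.append({
            "ts": time.time(),
            "capacity_mbps": capacity_mbps,
            "loss_rate": loss_rate,
            "latency_ms": latency_ms,
            "jitter_ms": jitter_ms,
        })

    def latest(self) -> dict:
        return self.history[-1] if self.history else {}

    def average(self, key: str) -> float:
        return (sum(s[key] for s in self.history) / len(self.history)
                if self.history else 0.0)

# Hypothetical usage: in practice the values would come from active probes
# or passive sniffing, as described above.
mpls = LinkMetrics("mpls-1")
mpls.record(capacity_mbps=48.7, loss_rate=0.002, latency_ms=31.4, jitter_ms=2.1)
print(mpls.latest(), mpls.average("latency_ms"))
```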

    Managing the network

    Today’s WAN orchestrators can accomplish sophisticated network functions beyond simple static traffic policing. Some examples include Broadband Bonding, dynamic flow mapping, flow-based traffic grooming, elastic IP address management and various advanced quality of service (QoS) functions. These network functions are mostly geared toward improving WAN performance and reliability, and as a result they provide a better end-user experience.

    Consequently, these network functions also have a direct impact on the cost structure of the WAN: more can be accomplished with the same set of WAN links, or the same performance can be achieved with fewer or less expensive links, compared with a legacy system that lacks WAN orchestration capability.

    Monitoring the network

    Collecting large sets of data and intelligence from live networks, without interfering with the data flows, is crucial. However, all that intelligence is only valuable if put to good use.

    More specifically, the collected data – whether delivered as instantaneous alerts or as flags produced by big data analysis – needs to be presented to the IT team in a consumable and actionable manner.

    Varying levels of human involvement can be designed into the actions taken as a result of the monitoring. WAN orchestration tools can minimize that involvement, since much of the analysis and distillation can be converted into automated actions.
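
    A minimal sketch of that idea follows; the thresholds and link names are assumptions for illustration, not any vendor’s defaults. Monitoring data is distilled into a simple rule that remaps flows away from degraded links and only escalates to a human when no healthy link remains.

```python
# Illustrative threshold rules that turn monitoring data into automated actions.
LOSS_LIMIT = 0.02         # assumed: 2 percent packet loss
LATENCY_LIMIT_MS = 150.0  # assumed: 150 ms latency budget

def pick_healthy_links(link_stats: dict) -> list:
    """Return links whose latest sample is within the assumed thresholds."""
    return [
        name for name, s in link_stats.items()
        if s["loss_rate"] <= LOSS_LIMIT and s["latency_ms"] <= LATENCY_LIMIT_MS
    ]

def remap_flows(flows: list, link_stats: dict) -> dict:
    """Spread flows round-robin across healthy links; escalate if none qualify."""
    healthy = pick_healthy_links(link_stats)
    if not healthy:
        raise RuntimeError("All WAN links degraded - escalate to the IT team")
    return {flow: healthy[i % len(healthy)] for i, flow in enumerate(flows)}

stats = {
    "mpls-1": {"loss_rate": 0.001, "latency_ms": 32.0},
    "dsl-2":  {"loss_rate": 0.080, "latency_ms": 210.0},  # degraded link
}
print(remap_flows(["voip", "crm", "backup"], stats))
```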

    We are entering a new era of branch office and cloud connectivity in which a single static WAN resource will not be able to provide the service levels that organizations demand from their networks. The promise of WAN orchestration solutions is simple: one needs to be able to measure, correlate and understand network data in order to take or automate actions for intelligent WAN management.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:01p
    US House of Representatives Looking For Data Center

    The US House of Representatives is shopping for a new data center. The House issued a request for proposal for data center space outside the Washington, D.C., metro. The RFP is for colocation that can support day-to-day operations for multiple agencies as well as disaster recovery and business continuity.

    The contract would run for potentially six years, with responses to the RFP due November 25. The facility needs to be 300 to 350 miles as the crow flies from Capitol Hill, 100 miles from the coastline and within 100 miles of the nearest military facility. Think areas like Charlotte, North Carolina; Columbus, Ohio; or Albany, New York.

    The RFP includes a hot aisle containment requirement. The data center needs to provide a secure cage for a handful of individual agencies. The request also calls for outside space to install a 5 meter dish satellite antenna and space for government office trailers — potentially one per agency — and room for about six 800 square foot trailers.

    The service level agreement required is typical, looking for 100 percent uptime with graduated penalties for dropping below five nines. Rack power needs to be between 5 kW and 20 kW. Environmental monitoring of the data center and critical infrastructure is also required.
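
    For context, the downtime budget behind an availability target is simple arithmetic; the snippet below works out the annual allowance for three, four and five nines.

```python
# Downtime allowed per year at a given availability level (simple arithmetic).
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label}: {availability:.3%} -> {downtime_min:.2f} minutes of downtime per year")
```

    Five nines, for instance, leaves a budget of roughly 5.3 minutes of downtime per year.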

    Agencies are trying to cut costs and consolidate while maintaining reliability. The government data center consolidation initiative has been going on since 2010, and the latest progress report on the effort was issued last month.

    This RFP is one example of the government looking to multi-tenant data centers to consolidate operations of multiple agencies into one facility. A recent survey revealed IT workers were not confident in government data center reliability.

    The agencies named in the RFP include the Library of Congress, Architect of the Capitol, U.S. Capitol Police, Congressional Budget Office, Government Accountability Office, Government Printing Office, and other supportive agencies of the Legislative Branch.

    The Social Security Administration opened up a brand new data center last month.

    5:28p
    Latisys Expands Suburban Chicago Data Center

    Latisys has commissioned 25,000 square feet of raised floor in its suburban Chicago data center. The new build is called DC-06 and comes with an additional 3.6 megawatts of critical power. The company will make space available in increments.

    The 146,000 square foot Chicago data center campus serves as the company’s Midwest hub. Denver serves as a potential disaster recovery site for Chicago customers. Its other data centers are located in Southern California and Northern Virginia. Latisys also recently deployed Cloud-Enabled Systems Infrastructure (CESI) in London.

    Latisys specializes in tailored, hands-on solutions in addition to colocation space. Its CESI is a hybrid suite of cloud computing and infrastructure services that includes colocation, managed hosting and private cloud, managed storage, backup, replication, and security services. The company does well with enterprises seeking hybrid configurations that leverage the right product and setup for the right application.

    Latisys Chicago is in close proximity to downtown Chicago and the Chicago Mercantile Exchange, offering low latency to the Central Business District and surrounding areas.

    The company’s Chicago story began in August of 2008, when it purchased the Stargate facility under its former name of Managed Data Holdings. The company expanded in Chicago by 9,000 square feet in 2009, added 32,000 square feet the following year and another 10,000 in 2012 as part of an expansion in four different markets.

    The metro’s two markets

    Downtown Chicago and the suburbs are two distinct markets. The suburbs began as primarily a disaster recovery location but have grown in popularity for primary infrastructure as server huggers have learned to not hug so tight. Latisys’ hands-on managed hosting and cloud also means less need for the customer to be a stone’s throw away from their servers.

    Colocation pricing remains stable in both markets, though downtown continues to go for a premium. Due to tight supplies, the new expansion is welcome among Chicago-based organizations whose existing capacity has been reached or whose limited power densities make it difficult to grow efficiently.

    Supply is particularly tight downtown, although projects such as McHugh’s planned 350,000 square foot facility, Centerpoint’s construction close to 350 E. Cermak, and QTS’ Chicago Sun-Times plant acquisition all mean lots of capacity is in the pipeline. There are also Ascent Corp., Digital Capital and Server Farm Realty, among others.

    In the suburbs, providers other than Latisys include DuPont Fabros Technology, Forsythe Technology, ByteGrid, Server Farm Realty and Continuum.

    “Chicago’s western suburbs continue to gain in popularity as the primary alternative for Chicago enterprises looking for new capacity and the ability to extend their infrastructure nationally and internationally,” said Doug Butler, Latisys CFO and president of colocation services.

    7:02p
    Digital Realty Eyes Entry Into Germany, Japan

    As it exits markets it considers secondary to its strategy, Digital Realty Trust is looking to enter new international markets, namely Germany and Japan.

    The company’s CFO and interim CEO William Stein revealed the plans on its third-quarter earnings call Tuesday. “We plan to enter these markets with our customers over time,” he said.

    Digital Realty is in the midst of a major transformation of its massive global property portfolio and business model. For the first time in its 10 years of existence, the company is pruning the portfolio of “non-core” properties and partnering with service providers on bundled offerings that combine its traditional space-and-power products with services higher up the stack.

    But don’t expect the San Francisco-based data center real estate giant to get into Germany and Japan with speculative builds. “We have primarily transitioned to a build-to-order inventory management practice,” Stein said. This means the company is not likely to build a new data center anywhere without a signed lease with a tenant.

    Geographic expansion is a response to demand

    Expansion in Asia and Europe will in all likelihood be led by Bernard Geoghegan, who was appointed managing director for EMEA and Asia Pacific in September. Having previously overseen the company’s business in the EMEA region, Geoghegan replaced Kris Kumar, who resigned that month.

    There has been a lot of data center activity in Germany recently. Amazon Web Services and VMware recently opened data centers there to support their cloud services, and Oracle announced plans to open two data centers in the country. Microsoft is reportedly planning an Azure data center in Germany as well.

    A lot of the new data center announcements have cited a higher level of concern about data sovereignty in Germany than in other parts of the world.

    Digital Realty’s Germany and Japan expansion plans are likely to be driven by demand from existing customers. The company has been in the Asia Pacific region for years, with data centers in Hong Kong, Singapore, Melbourne and Sydney. It has seen strong absorption rates across its footprint in the region, Stein said, especially in Hong Kong and Singapore.

    Solid Q3 performance

    Digital Realty beat analyst estimates for Q3 earnings per share by a penny, reporting $1.22 instead of the estimated $1.21. The company has surpassed analyst EPS expectations every quarter since Q4 2013.

    Its revenue for Q3 of this year was $412 million – up 9 percent year over year. Net income was $130 million.

    Nine properties slated for pruning

    Digital Realty started its portfolio pruning project early this year, and as of this week has identified nine properties it will sell. Five of them are already on the market, Scott Peterson, the company’s chief investment officer, said, adding that value for two of them had been written down. The team is in the process of preparing the other four for sale.

    There is a variety of properties in its portfolio Digital Realty may consider non-core. Some are not data centers, and others are in markets the company no longer wants to be in, CTO Jim Smith told us in an interview earlier this month.

    Diversifying services

    In addition to optimizing its list of properties, the company has been pursuing partnerships with cloud providers and increasing focus on colocation and connectivity services. The biggest recent cloud partnership came in September, when the company teamed up with VMware to provide private network links to VMware’s vCloud Air public cloud.

    Also in September, Digital Realty partnered with Carpathia Hosting to jointly market solutions that combine colocation space in Digital Realty’s data centers with Carpathia’s offerings, which include cloud and managed services.

    The high-tech economy’s landlord

    The company is still very much a massive wholesale data center landlord, however, providing a lot of data center space for some of the high-tech economy’s biggest names. Its largest tenant, CenturyLink, occupies about 2.36 million square feet of space across more than 40 locations. Equinix, second largest, leases about 850,000 square feet across 10 data centers.

    Digital Realty is a landlord for Amazon, Facebook, LinkedIn, Yahoo, Verizon, NTT and IBM, among many others.

    7:11p
    Fujitsu Intros 56-Petabyte Storage System

    Fujitsu has introduced the Eternus CD10000 storage system, optimized for hyper-scale data center environments and packing up to 56 petabytes of data storage capacity.

    Engineered for petabyte-scale volumes of data, the new storage system scales by adding up to 224 storage nodes, each combining disks with controllers, and nodes can be added without shutting down the system. Fujitsu says it will release updates next year that take the system beyond 56 petabytes.
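
    A quick back-of-the-envelope check on those announced figures (using decimal units):

```python
# Average raw capacity per node implied by the announced maximums.
total_pb = 56      # maximum capacity in petabytes
max_nodes = 224    # maximum number of storage nodes
print(f"{total_pb * 1000 / max_nodes:.0f} TB per node on average")  # 250 TB
```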

    To accommodate different demands on storage, the Eternus CD10000 offers basic, capacity-optimized and performance-optimized node types.

    The system uses open source Ceph storage software to automatically distribute data so that processing loads are not concentrated on any single node, which eliminates the need for conventional RAID.

    Ceph is a software-defined storage system designed for cloud computing and for organizations looking to replace legacy storage systems. Earlier this year Red Hat acquired Ceph distribution provider Inktank for $175 million.
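
    For readers unfamiliar with Ceph, the sketch below shows its basic object interface through the python-rados bindings; the config path and pool name are placeholders, and it assumes a reachable cluster with valid credentials. Ceph’s CRUSH placement algorithm, not a RAID controller, decides where the object and its replicas land.

```python
# Minimal sketch: store and read back one object in a Ceph pool via python-rados.
# "demo-pool" and the conffile path are assumptions, not Fujitsu-specific values.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("demo-pool")
    try:
        ioctx.write_full("hello-object", b"placed and replicated by Ceph")
        print(ioctx.read("hello-object"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```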

    With support for object, file and block storage, the new system will also be able to integrate with OpenStack environments.

    8:12p
    Emerson’s Future Ohio R&D Center to Work on Data Center Cooling Technology

    Emerson Network Power is building a research and development center on the University of Dayton campus in Ohio where it will have a dedicated facility for developing new data center cooling technology.

    Called Emerson Innovation Center, the 38,000 square foot building is expected to open in late 2015, the company said. The data center portion of it will work on “next-generation” approaches to controlling data center environments and managing heat.

    While vendors have incrementally improved the efficiency of data center cooling products, cooling has not seen innovation as disruptive as that in the IT technology it supports. The most radically different approach introduced in recent years has been cooling servers by submerging motherboards in dielectric fluid, but products from the handful of vendors in this category have not seen wide-scale adoption.

    Energy consumption by cooling systems is one of the biggest operating expenses in data centers, so the space is ripe for disruption.

    Rather than looking for disruptive technologies, however, Emerson’s R&D center will collaborate with researchers and students from the university’s school of engineering on intelligent cooling technologies and controls that improve energy efficiency and maximize the use of free cooling.

    “The complex, dynamic nature of today’s data center requires more than just new cooling technologies,” John Schneider, vice president and general manager of thermal management at Emerson, said in a statement.

    All in all, the $35 million center will employ between 30 and 50 people. Besides the data center research module, it will have facilities for supermarket refrigeration, food service operations, residential connected homes, and light commercial buildings.

    8:30p
    DreamHost’s Public Cloud Service DreamCompute Comes Out of Private Beta


    This article originally appeared at The WHIR

    After months of testing, DreamHost’s public cloud computing service, DreamCompute, has now left private beta.

    Aimed at software developers running everything from development and test environments to full production deployments, DreamCompute is based on the OpenStack cloud orchestration software. It also uses Ceph, the distributed storage platform incubated in DreamHost Labs, and Akanda, DreamHost’s own open source networking package for OpenStack, which provides network services at Layer 3 and above, such as routing and firewalls.

    DreamCompute uses a redundant Ceph cluster to store data and OS images for higher performance, reliability, and scalability, as well as “lightning fast boot times”.

    DreamCompute allows users to spin up unmanaged virtual servers in seconds, with each virtual server featuring full Layer 2 tenant isolation, IPv4 and IPv6 support, and full administrator privileges for users via virtual OSI Layer 2 switching.

    DreamCompute project lead Justin Lund noted that DreamCompute provides access to the OpenStack compute, networking, image service, identity, and block storage APIs, as well as the ability to use leading deployment tools, all of which is covered in the documentation.
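
    As a rough illustration of what access to those OpenStack APIs looks like from a developer’s side, here is a generic server-boot sketch using the openstacksdk client; the cloud entry, image, flavor and network names are placeholders drawn from the user’s own clouds.yaml, not DreamCompute-specific values.

```python
# Generic OpenStack example: boot a server through the compute API using
# openstacksdk. The names below are assumptions, not DreamCompute defaults.
import openstack

conn = openstack.connect(cloud="dreamcompute")  # entry defined in clouds.yaml

image = conn.compute.find_image("ubuntu-14.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="demo-server",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status, server.addresses)
```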

    “We’ve gone to great lengths to document the very architecture of DreamCompute to be as transparent as possible with our users,” Lund said in a statement. “We believe that when the cloud is open, everybody wins.”

    DreamCompute is available now in three configurations for flat monthly fees starting with a $19 per month plan which includes up to two instances, 2 GB of RAM, 25 GB of block storage and one floating IP. For $129 per month, the user gets up to 18 instances, 18 GB of RAM, 500 GB of block storage and two floating IPs.

    DreamCompute has been years in the making and represents DreamHost’s effort to build a product that competes directly with offerings like Amazon Web Services’ EC2 public cloud service.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/dreamhosts-public-cloud-service-dreamcompute-comes-private-beta

    9:00p
    Major US ISPs Provide Connection Speeds Below Broadband: Report


    This article originally appeared at The WHIR

    All major US ISPs are providing connection speeds below broadband, according to a damning report from independent internet connection measurement organization Measurement Lab (M-Lab). The report, which was released Tuesday, studies connections between ISPs and finds that under-provisioning – rather than innocent technical problems – is a causal factor in US customers receiving peak-hour speeds below the 4 Mbps that the FCC defines as “broadband speed.”

    M-Lab is an open analysis organization which measures internet connection quality and detects censorship, technical faults and network neutrality violations.

    The report, “ISP Interconnection and its Impact on Consumer Internet Performance” (PDF), calls interconnection “both definitive of the internet, and a manifestation of a business relationship between two ISPs.” It studies the 2013 and 2014 broadband performance of Comcast, Verizon, AT&T, CenturyLink, and Time Warner Cable, and finds major performance degradation from all five.

    “We see the same patterns of degradation manifest in disparate locations across the US,” the report concludes. “Locations that it would be hard to imagine share any significant infrastructure (Los Angeles and New York City, for example). We thus conclude that the business relationships between impacted Access ISP/Transit pairs is a factor in the repeated patterns of performance degradation observed throughout this research.”

    While slow speeds are most common during the peak hours of 7pm to 11pm, the report also notes sub-4 Mbps speeds at other times, sometimes for protracted periods in certain areas and on certain networks.
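
    For a sense of what such a measurement involves, here is a crude client-side throughput check against the 4 Mbps threshold; the test URL is a placeholder, and real methodologies such as M-Lab’s NDT tests use dedicated measurement servers and far more careful statistics than this sketch.

```python
# Rough single-download throughput check against the FCC's 4 Mbps threshold.
# The URL is a placeholder for a large test file, not an M-Lab endpoint.
import time
import urllib.request

TEST_URL = "https://example.com/100MB.bin"
BROADBAND_MBPS = 4.0

start = time.monotonic()
with urllib.request.urlopen(TEST_URL, timeout=60) as resp:
    payload = resp.read()
elapsed = time.monotonic() - start

mbps = (len(payload) * 8) / (elapsed * 1_000_000)
print(f"Downloaded {len(payload)} bytes in {elapsed:.1f} s -> {mbps:.2f} Mbps")
print("Below the FCC broadband threshold" if mbps < BROADBAND_MBPS
      else "At or above the threshold")
```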

    The report mentions the possibility of a third party (other than the two ISPs in a given connection) being involved in the interconnection, and therefore possibly to blame for its slowness, though the consistency of the findings suggests that the problem is not specific to a certain contract or set of connections.

    In a dispute between Netflix and Verizon this summer over alleged throttling, Verizon reported that it had reviewed its network and found no congestion, a contention flatly disproved by the M-Lab report. Under-provisioning was specifically alleged during that dispute.

    The report’s results also support the FCC’s finding in June that most major US broadband companies fail to deliver advertised speeds.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/major-us-isps-provide-connection-speeds-broadband-report

