Data Center Knowledge | News and analysis for the data center industry
 

Monday, October 27th, 2014

    12:00p
    Xand Acquisition Gives TierPoint Instant National Player Status

    TierPoint continues this year’s momentum with the acquisition of Xand and its sizable footprint in the northeast data center market. Freshly recapitalized, TierPoint has been expanding aggressively through both acquisition and building.

    Colocation and cloud provider Xand adds six data centers totaling 140,000 square feet, along with 3,700 customers. The purchase price was not disclosed, but the acquisition is being funded through a combination of incremental equity from existing TierPoint investors and new investor Ontario Teachers’ Pension Plan.

    Xand’s data centers are in New York, Pennsylvania, Connecticut and Massachusetts. TierPoint has data centers in Dallas, Spokane, Seattle and Baltimore.

    The deal brings TierPoint’s portfolio to more than 300,000 square feet of raised floor and instantly makes the company a national player, with a total of 13 data centers in 10 markets. It also sets TierPoint up for further growth, giving the company master-planned space to add up to 150,000 square feet of data center capacity.

    Expanding size and product portfolio

    It is a straightforward consolidation deal. Both companies wanted to expand out of their markets and had similar cultures and playbooks.

    “It’s kind of ironic: If you look at our stories, we’re both founded on acquisitions,” said Xand founder and CEO Yatish Mishra. “We’re both focused on Tier 2 markets, both on the SME (small and mid-size enterprise). Both going towards hybrid. We essentially have the same playbook. The match is unbelievably perfect.”

    “It gives us a nice northeastern presence,” said TierPoint CEO Paul Estes. “It fills in and rounds out the management team as well. Very similar in culture and in products that we offer.”

    Xand locations now part of growing TierPoint portfolio


    The merger serves both companies’ existing customers well, giving them the option to expand geographic diversity through a single data center provider, Estes said. He noted that Xand was a little further ahead on cloud, which will help boost those offerings across the combined footprint.

    “Another important combination is of our technical development team, meaning we can dedicate more dollars towards developing our services,” said Andy Stewart, TierPoint’s CFO.

    TierPoint execs said that all of Xand’s markets and facilities fit into their strategy. Among them, the Philadelphia market was the most attractive because of TierPoint’s existing presence there. “It solved an expansion issue we had,” Estes said about Philadelphia.

    TierPoint has predominantly relied on acquisitions to expand but is also building an Oklahoma facility. It acquired Perimeter Technology in 2011.

    Xand backer ABRY Partners is a private equity firm with numerous stakes in the data center and communications space. It acquired Xand in 2011 and built the company up significantly through a merger with Access Northeast in 2012. Together, Xand and Access Northeast formed one of the largest privately held data center companies in the northeast, which is now being folded into TierPoint.

    “We were exploring ways to expand the business,” Mishra said. “We were bound in the northeast and wanted to expand beyond, a number of clients asking for east and west.”

    TierPoint on lookout for more acquisitions

    The deal is the latest example of ongoing consolidation in the colocation data center market, where a handful of providers have been carving out territories. Multi-tenant data centers continue to proliferate in emerging and secondary markets, outside of the major metros. The data center is getting closer to the customer as more enterprises move off premises and more web workloads need to be closer to end users for lower latency.

    “With this acquisition, TierPoint is doubling in size, gaining expansion space on the East Coast, and adding to its managed services capabilities,” said Kelly Morgan, research director for data centers at 451 Research. “It’s certainly a big move but probably will not be the firm’s last acquisition, though we expect it to focus on integration for the next few months.”

    “The integration process is going to take a while,” said Stewart. “The first twelve months are critical for us. We’ll close the acquisition and put our heads down operating and integrating, but there are other exciting acquisition possibilities for the future.”

    DH Capital, which usually has its hands in the biggest data center deals (such as IBM’s SoftLayer acquisition), served as exclusive financial advisor to ABRY and Xand. RBC Capital Markets and Credit Suisse will provide debt financing for the transaction.

    TierPoint’s current investor group includes Cequel III management, led by chairman Jerry Kent, RedBird Capital Partners, The Stephens Group, Jordan/Zalaznick Advisers, and Thompson Street Capital Partners.

    New investor Ontario Teachers’ Pension Plan has more than $140 billion in net assets – which suggests it’s probably a good time to be a teacher in Ontario.

    3:30p
    Is There a Ticking Time Bomb in Your Network?

    Ryan Smith is a network engineer for Cervalis LLC, a premier provider of IT infrastructure and managed services.

    In most networks, the routers used to peer with upstream providers are treated as set-it-and-forget-it devices. Substantial work and testing goes into establishing peering with the ISP, and then, for the most part, administrators are free to step back and let the routers and routing protocols take over. What many don’t realize is that these routers can easily turn into time bombs ready to blow at any minute.

    Growth of the Internet route exchange

    Border Gateway Protocol (BGP) is by far the predominant method of exchanging routes with a provider. Through this protocol, organizations are able not only to accept and transmit routes but also to modify and manipulate the routes they receive. This peering process is one of the fundamental building blocks of the modern Internet.
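    To make the idea concrete, here is a minimal Python sketch of that kind of inbound route policy – filtering out overly specific prefixes and nudging preferred paths – using made-up prefixes and AS numbers rather than a live BGP session (real policies are, of course, configured on the router itself):

        import ipaddress

        # Hypothetical routes received from an upstream peer: (prefix, AS path).
        received = [
            ("203.0.113.0/24", [64500, 64501]),
            ("198.51.100.0/25", [64500, 64502]),   # more specific than a /24
            ("2001:db8::/32", [64500, 64503]),
        ]

        MAX_V4_PREFIX = 24   # common policy: reject anything longer than a /24
        MAX_V6_PREFIX = 48

        def accept(prefix_str):
            """Mimic an inbound route filter: drop overly specific prefixes."""
            net = ipaddress.ip_network(prefix_str)
            limit = MAX_V4_PREFIX if net.version == 4 else MAX_V6_PREFIX
            return net.prefixlen <= limit

        accepted = []
        for prefix, as_path in received:
            if not accept(prefix):
                continue
            # Route manipulation: prefer shorter AS paths by assigning a higher
            # local preference, much as a real routing policy might.
            local_pref = 200 if len(as_path) <= 2 else 100
            accepted.append({"prefix": prefix, "as_path": as_path, "local_pref": local_pref})

        for route in accepted:
            print(route)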

    With IPv4 address space rapidly approaching complete depletion, large ISPs, ARIN and the other RIRs are constantly working together to reclaim and repurpose IPv4 resources and to segment larger, previously aggregated blocks to serve more end companies. Because of this, the volume of Internet route exchange continues to grow over time instead of stabilizing. Together with the recent surge in IPv6 adoption, this means organizations that handle their own route exchange need to be cautious and may need to reexamine their environments.

    Back in early 2010, the global routing table held fewer than 300,000 IPv4 routes. While large, that table was far more manageable than today’s 500,000-plus IPv4 routes. Combined with the sharp increase in IPv6 routes – which today number nearly 19,000 – this means the router memory consumed by these tables has almost doubled. With the recent scramble to reallocate IPv4 resources, a route that was previously aggregated into a single /19 subnet, occupying one line in the table, may soon be segmented into 32 different /24s occupying 32 lines. Coupled with the ongoing growth of IPv6, this will cause table sizes and memory requirements to keep ballooning.
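    The arithmetic behind that deaggregation is easy to verify. The short Python sketch below splits a /19 into its component /24s and extrapolates the table sizes quoted above; the per-route memory figure is an assumed average used purely for illustration, since the real cost per route varies widely by platform:

        import ipaddress

        # One aggregated /19 deaggregates into 2**(24 - 19) = 32 separate /24 routes.
        block = ipaddress.ip_network("10.0.0.0/19")
        more_specifics = list(block.subnets(new_prefix=24))
        print(f"{block} -> {len(more_specifics)} /24 routes")   # 32

        # Rough table growth using the figures cited above.
        ipv4_2010 = 300_000
        ipv4_today = 500_000
        ipv6_today = 19_000

        BYTES_PER_ROUTE = 250   # assumed average memory per installed route

        for label, routes in [("2010 (IPv4 only)", ipv4_2010),
                              ("today (IPv4 + IPv6)", ipv4_today + ipv6_today)]:
            print(f"{label}: ~{routes * BYTES_PER_ROUTE / 1e6:.0f} MB of route memory")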

    Reaching the point of critical mass

    Network administrators and engineers must pay close attention to their edge routers and peering relationships with upstream providers. Accepting this many routes from a provider may have been fine initially, but left unchecked, this growth can quickly consume all the available memory on a router and lead to major traffic-forwarding and router-stability issues.

    Compounding the problem is the fact that this route growth can happen without warning. In a period of just a few hours, route tables can grow by thousands of routes. Hitting the point of critical mass is something no organization wants to experience.

    Defining the needs of your organization

    Numerous techniques can be applied both at the network edge itself and within the provider’s network. Administrators should define their organization’s needs when arranging a peering relationship with an upstream provider. In most instances a full routing table isn’t required, and a summarized table will more than suffice. Using summarized tables, organizations can dramatically reduce table size and hardware requirements, although they may give up some of the more advanced traffic-management options. When full routing tables are required, administrators should take a close look at the resources available on their routers and determine whether an upgrade is needed.
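    As a rough, back-of-the-envelope way to frame that upgrade decision, the sketch below compares a planned route load against a router’s capacity. Every figure is a hypothetical placeholder meant to be replaced with your own platform’s documented limits:

        # Rough capacity check for an edge router. All numbers are illustrative
        # placeholders; consult your platform's documentation for real limits.

        def fits(planned_routes, platform_capacity, headroom=0.30):
            """Return True if the planned table fits with the given growth headroom."""
            return planned_routes * (1 + headroom) <= platform_capacity

        full_table = 500_000 + 19_000      # full IPv4 + IPv6 tables (figures cited above)
        summarized_table = 5_000           # hypothetical provider-summarized feed
        default_only = 2                   # default routes from two upstreams

        platform_fib_limit = 256_000       # hypothetical capacity of an older edge router

        for label, size in [("full table", full_table),
                            ("summarized table", summarized_table),
                            ("default routes only", default_only)]:
            verdict = "OK" if fits(size, platform_fib_limit) else "upgrade or summarize"
            print(f"{label:>20}: {size:>7} routes -> {verdict}")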

    Certain managed services providers are in a unique position to help customers permanently eliminate this time bomb from their networks, since they regularly aggregate multiple gigabits of traffic to upstream providers and thus maintain substantial edge networks to route and manage that traffic.

    These routing platforms are purpose-built with a service provider mindset and run on leading-edge hardware and technology. Because of this, they are able to maintain full routing tables from numerous providers simultaneously and can easily absorb the continual growth of the global table. Customers of these managed services providers are shielded from routing table growth and can also leverage additional routing capabilities and options through the MSP that can’t be achieved by working directly with an Internet provider. Routing techniques and methodologies that were previously unobtainable on smaller routing platforms can easily be attained on an MSP’s platform.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:00p
    Data Center Jobs: DuPont Fabros Technology

    At the Data Center Jobs Board, we have a new job listing from DuPont Fabros Technology, which is seeking an Assistant Critical Infrastructure Manager in Santa Clara, California.

    The Assistant Critical Infrastructure Manager is responsible for executing operating and maintenance strategies to ensure continuous availability of all critical systems, coordinating and inspecting services performed by outsourced vendors, assisting the SC1 Director in managing costs and budgets associated with the critical infrastructure and associated operations, and ensuring diligent adherence to policies and proven processes. To view full details and apply, see job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    5:36p
    Xplenty Raises $3M for Cloud-Based Enterprise Hadoop

    Xplenty, a startup with cloud-based Hadoop tools aimed at making data analytics easy for enterprise users, has raised $3 million. The company’s web-based interface doesn’t require the user to know MapReduce or coding. The tools are cloud-native, and most of the customers are Amazon Web Services users.

    The round will go toward marketing and advancing the product. “The Hadoop ecosystem is moving forward in terms of new products and features and we need to keep up,” Xplenty founder and CEO Yaniv Mor said. “Part of the money will be used to establish a U.S. home base to better serve North America.”

    The company will also be using the money to expand its user base to clouds other than AWS. It has had a partnership with SoftLayer since before the service provider was acquired by IBM and will be part of IBM’s cloud marketplace. Mor said that some of its customers are Rackspace customers, but there is no official partnership as of yet.

    Its closest competitors are AWS’ own Elastic MapReduce service and big data integration software provider Talend.

    Mor said that AWS’ offering isn’t nearly as easy to use, while offerings like Talend haven’t been written natively on Hadoop and don’t necessarily run the processing jobs. “We are 100 percent service,” he said.

    Talend raised $40 million last year.

    Xplenty and its competitors are seeking to capitalize on the trend of companies outsourcing big data analytics to cloud services.

    “In terms of workloads on the cloud, there’s a trend of organizations moving or offloading to the cloud — most of the time it makes economical sense, there’s financial value,” Mor said. “You don’t need a big infrastructure running all the time. You can fire it up when you need it, tear down the cluster when you don’t.”
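    The economics Mor describes boil down to simple arithmetic: an on-demand cluster is billed only for the hours it actually runs, while an always-on cluster is paid for around the clock. A toy Python comparison, using entirely hypothetical prices and node counts:

        # Toy cost comparison for an analytics cluster: always-on vs. on-demand.
        # Every figure is a made-up placeholder, not a real cloud price.

        NODE_HOURLY_RATE = 0.50        # assumed $/node-hour
        NODES = 10
        HOURS_PER_MONTH = 730

        # Always-on cluster: billed for every hour of the month.
        always_on_cost = NODES * NODE_HOURLY_RATE * HOURS_PER_MONTH

        # On-demand cluster: fired up for a nightly three-hour batch job, then torn down.
        job_hours_per_month = 3 * 30
        on_demand_cost = NODES * NODE_HOURLY_RATE * job_hours_per_month

        print(f"always-on : ${always_on_cost:,.0f}/month")
        print(f"on-demand : ${on_demand_cost:,.0f}/month")
        print(f"savings   : {1 - on_demand_cost / always_on_cost:.0%}")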

    Xplenty sees a growing number of fully cloud-based companies that store applications, log files and data on AWS, in MongoDB or in relational databases, and it offers integration across many such sources. Redshift is the data warehouse of choice for most of its customers who use Amazon’s cloud.

    5:49p
    2014 Data Center Efficiency Summit

    The 2014 Data Center Efficiency Summit will be held Wednesday, November 5 at the Santa Clara University School of Engineering in Santa Clara, California.

    The Data Center Efficiency Summit is a signature event of the Silicon Valley Leadership Group in partnership with the California Energy Commission and Lawrence Berkeley National Laboratory, which brings together engineers and thought leaders for one full day to discuss best practices, cutting edge new technologies, and lessons learned by real end users.

    Highlights for this year’s summit include keynotes by Aziz Safa of Intel and Commissioner Andrew McAllister of the CEC, a panel on the process for the Department of Energy’s Better Buildings Challenge for Data Centers, and case studies by Microsoft, Intel, eBay and NetApp, among others.

    For more information – including speakers, sponsors, registration and more – follow this link.

    To view additional events, return to the Data Center Knowledge Events Calendar.

    7:13p
    Red Cloud to Use Cannon Data Center Modules for Massive Australia Expansion

    Australian provider of data center services Red Cloud is planning a massive data center portfolio expansion around the country using T4 data center modules by Cannon Technologies.

    Red Cloud announced it was going to add 11 facilities to its portfolio, using Cannon’s modules exclusively. The plan is to eventually add a total of 1 million square feet of data center space.

    Together, Red Cloud and Cannon are using a model similar to the one by U.S. data center provider IO, which also sells colocation space within its modular data centers installed in massive warehouses. While there are no IO locations in Australia, there is one in Singapore, which may compete with Red Cloud on a regional level.

    Red Cloud in fact bought IO modules for an Australian expansion three years ago. The companies said at the time that Red Cloud would deploy 4.5 megawatts of capacity using IO.Anywhere modular data centers in Melbourne, Sydney and Perth.

    The company expects to bring the first facility, called The Garry Henley Data Center Park, online in the first quarter of 2015. The location is at the Phoenix Business Park in the Perth suburbs.

    Red Cloud is planning to open two more sites by the third quarter of next year. The provider is targeting wholesale data center customers.

    Carl Woodbridge, Red Cloud CEO, said the company chose Cannon’s modules because of the comparatively low total cost of operation and the flexibility the solution offered. “The customization aspect is proving to be very popular,” he said.

    Red Cloud specifically went for the T4 Granular Modular Data Center, which Cannon introduced in July. These modules can be assembled onsite using conventional hand tools, according to the vendor, which is valuable because it doesn’t require bringing heavy machinery into an operating environment.

    Cannon used to be a data center rack and enclosure vendor but expanded its business model by adding the modules around 2010.

    8:00p
    Report: Microsoft Testing Windows Server for ARM

    Microsoft is testing a version of its Windows Server operating system for servers powered by ARM processors, Bloomberg Businessweek reported, citing anonymous sources.

    Low-power ARM chips are inside most of the world’s smartphones, but a handful of companies have adopted the architecture – which they license from Cambridge, England-based ARM Holdings – for System-on-Chip cards for servers. These chips compete most closely with Intel’s Atom architecture in the server space.

    It took a few years for processor makers to come up with 64-bit ARM chips for servers, and earlier this month HP became the first vendor to bring an ARM-based server to market, releasing two versions of its Moonshot microservers, one powered by Applied Micro’s 64-bit X-Gene SoC, and the other powered by Texas Instruments’ 32-bit ARM SoC.

    Rather than competing with general-purpose commodity x86 servers, ARM servers are positioned for specific workloads. HP’s 64-bit Moonshot, for example, is geared toward web-caching workloads in web-scale service provider data centers. The 32-bit version is optimized for real-time data processing in video, encoding and audio analysis workloads, leveraging TI’s extensive digital signal processing capabilities.

    Besides engineering challenges associated with server hardware to support the new architecture, the ARM server ecosystem also needs to address the lack of commonly used software that can be ported onto these systems. Windows Server is a widely used server operating system, and support by Microsoft is bound to make a big difference in speed of adoption of ARM servers by data center customers.

    Canonical, one of the leading vendors of Linux distributions, already supports ARM publicly. HP’s ARM-based Moonshot servers come with a bundle of Linux software by Canonical preinstalled.

    It will be important for Microsoft, Canonical, or any other company building software for the ARM architecture, to ensure that there is as little difference as possible from the user’s standpoint between writing software for x86 and ARM servers.

    John Zannos, vice president of cloud alliances and channels at Canonical, told us the company’s approach in creating software for ARM is to make sure a developer can write a piece of software on one OS or processor architecture and be able to deploy it on another.

    “We’re trying to make as few changes as possible,” he said. “We don’t anticipate it to be a big porting exercise.”

    8:00p
    Battle.net Adds World of Warcraft Game Servers in Australia to Reduce Latency


    This article originally appeared at The WHIR

    Battle.net, Blizzard Entertainment’s online-gaming service, said it is deploying World of Warcraft game servers in Australia for the first time in support of the upcoming “Warlords of Draenor” game expansion.

    Australian WoW servers will come online October 28, more than two weeks prior to the November 13 launch of Warlords of Draenor. There will be downtime as servers will be taken offline on October 28, with the game servers put back online “as soon as possible,” according to Battle.net.

    In the 10 years that WoW has been around, gamers in Australia and New Zealand initially had to connect to faraway North American servers. Blizzard later introduced a cluster of servers for Oceania located on the west coast of North America, and now these servers will be migrated to Australia. With the game servers closer to the actual users, players in the region will likely experience lower latency, which is especially important in creating an immersive gaming experience that responds quickly to the gamer.

    Unlike other games that allow third-party game hosts, Blizzard only allows its Battle.net division to host WoW. While there have been WoW realms illegally hosted in Australia, these have been largely unpopular because they could be shut down at any time, costing players whatever progress they had made on those servers and in that world.

    Many players have expressed their excitement around the announcement, and some have questioned why this has taken so long.

    The WoW universe is actually split into different “realms” which are like parallel universes that only contain a fraction of all players, and which make it possible for WoW to host such a massive number of users.

    The Oceanic realms include Barthilas, Frostmourne, Thaurissan, Saurfang, Caelestrasz, Jubei’Thos, Khaz’goroth, Aman’Thul, Nagrand, Dath’Remar, Dreadmaul, and Gundrak.
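    As a toy illustration of that realm-based partitioning – and emphatically not Blizzard’s actual implementation – the Python sketch below spreads players across a few of the realms named above, capping each realm so that no single server cluster has to hold the entire player base:

        # Simplified sharding sketch: assign players to the least-populated realm.
        # Realm names come from the list above; the tiny capacity is invented.
        REALMS = ["Barthilas", "Frostmourne", "Thaurissan", "Saurfang", "Nagrand"]
        REALM_CAPACITY = 3

        population = {realm: [] for realm in REALMS}

        def assign(player):
            """Place a player on the least-populated realm that still has room."""
            realm = min(REALMS, key=lambda r: len(population[r]))
            if len(population[realm]) >= REALM_CAPACITY:
                raise RuntimeError("all realms are full")
            population[realm].append(player)
            return realm

        for i in range(12):
            assign(f"player{i}")

        for realm, players in population.items():
            print(f"{realm}: {len(players)} players -> {players}")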

    According to a Q&A posted to the Battle.net forum, players can choose to move their characters on a North American realm to an Oceanic realm for free for a limited time. It states, “While no players will be required to move off their current realm, Australian and New Zealand players who opt not to play on Oceanic realms will not receive the same latency benefits as those on regional game servers hosted in the ANZ data centre.”

    Blizzard is the game developer behind not only Warcraft, but also the Starcraft and Diablo franchises, and various other games. The company has been looking into ways to improve the experiences of its users in various localities. For instance, in March it added local game servers in Australia to support Diablo III: Reaper of Souls, and it used dedicated local infrastructure for alpha testers of the upcoming game Heroes of the Storm.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/battle-net-adds-world-warcraft-game-servers-australia-reduce-latency

    8:30p
    HP Rolls Out OpenStack-based Helion Cloud and App Development Platform


    This article originally appeared at The WHIR

    HP has now publicly released its commercial version of HP Helion, its OpenStack-based cloud platform, along with the HP Helion Development Platform, a fully integrated development environment based on Cloud Foundry technology.

    Announced Thursday, this is a major milestone for HP, which first announced its HP Helion portfolio in May 2014 along with a plan to invest more than $1 billion over the next two years on cloud-related products, services, marketing and R&D.

    Previously, Helion OpenStack had been available as a preview, and, with the general release, HP is now confident that it’s a hardened and commercial-grade deployment of OpenStack.

    Helion offers an integrated IaaS and PaaS solution, and the Helion Development Platform provides a simple way for developers to build cloud-native applications using a wide range of programming languages including Java, Ruby, Node.js, PHP, Python and Perl as well as various application services. And it supports Docker containers so that developers can create easily deployable and portable applications.

    HP is also rolling out a few optimized solutions which are integrated with Helion software, HP server and storage hardware, and its managed services.

    On Thursday, for instance, HP announced its first optimized solution for Helion, the HP Helion Content Depot, a scale-out object storage solution. Built with OpenStack Swift and combining HP ProLiant servers and HP Networking, Helion Content Depot is designed to be a highly available, secure storage solution for enterprise clouds.

    On the storage front, Helion OpenStack includes built-in support for HP StoreVirtual VSA, a virtual storage appliance with Cinder integration for high-performance, highly available data storage. It also provides integrated Cloud Foundry Database-as-a-Service, along with multiple database and queuing options such as MySQL, PostgreSQL, RabbitMQ, Redis, Memcache and Trove.

    HP has been in the midst of a major identity change, having recently announced that it would split into two companies with one basically focusing on printing, and the other focusing on enterprise hardware, software and services.

    HP has also been making major strides in the open-source world with the introduction of a hardware-agnostic cloud service provider network, and the purchase of cloud startup Eucalyptus Systems whose software helps make hybrid cloud solutions that are compatible with Amazon’s public cloud. These measures towards more interoperable cloud solutions are very much in demand from enterprise customers who do not want to be locked into a particular software and hardware stack.

    Those interested in using the Helion Development Platform can obtain a user account on HP Helion Public Cloud and follow a quick-start guide.

    ISV and SaaS solution providers can also use the HP Helion Ready and HP Helion Developer Network to develop competency around the HP Helion platform, and help them build services.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/hp-rolls-openstack-based-helion-cloud-app-development-platform

    9:32p
    Telstra, Equinix to Link to Azure Data Centers in Australia

    Australia’s largest telco Telstra Corp. will provide private network connections to the two new Azure data centers Microsoft recently launched in the country. Equinix will also offer the service to its customers in Australia, Microsoft announced.

    This is a first for Telstra, but for Equinix the deal is an addition of new locations to an offering previously available in other parts of the world.

    The ExpressRoute service lets customers connect their own servers directly to the servers hosting their virtual infrastructure in Microsoft’s Azure cloud, bypassing the public Internet. According to the providers, this way of connecting to Azure data centers is faster and more secure. It enables a customer to treat Azure VM instances as servers on their own internal data center LAN.

    In the third quarter, Equinix launched ExpressRoute in its data centers in Silicon Valley, Los Angeles, Chicago and New York. It has announced plans to add the service in two more U.S. locations and one location in Brazil, two data centers in Japan, one each in Hong Kong and Singapore, and another one in Sydney.

    Telstra’s data center colocation footprint is not limited to Australia. It has data centers in Asia’s major hubs, such as Singapore, Hong Kong and Tokyo, as well as in the UK and U.S.

    To stand up its newest Azure cloud region in Australia, Microsoft deployed servers in data centers in two states – one in New South Wales and one in Victoria. The company did not say which cities the facilities were in.

    Microsoft CEO Satya Nadella and executive vice president of cloud and enterprise Scott Guthrie announced the launch of the Australian region at an event in San Francisco earlier this month. The company is also planning to launch a region in India, and reports have surfaced of Azure expansion plans in Germany and South Korea.

    Offering private network links to their cloud infrastructure from colocation data centers has been one way Microsoft and its cloud rival Amazon Web Services have expanded the reach of their clouds. Private connections are targeted primarily at security- and compliance-conscious enterprise customers.

    AWS recently launched a data center in the Frankfurt area, adding a second cloud region in Europe. Equinix announced that it would offer AWS Direct Connect links to the new region from its facilities in Germany.

