Data Center Knowledge | News and analysis for the data center industry

Tuesday, February 3rd, 2015

    12:44a
    VMware Makes Major Updates Across Data Center Software Portfolio

    VMware announced major updates to some of its core data center software products Monday.

    The company has made it easier to move live workloads from data center to data center over long distances – from New York to London, for example – and added more capabilities for melding clients’ in-house VMware environments with its public cloud services.

    VMware also launched VMware Integrated OpenStack, its own distribution of the popular open source cloud architecture, which it announced as an open beta last year. The distro is integrated with the company’s own cloud management software, which customers will be able to use to manage their OpenStack clouds.

    The vendor has made big additions in storage management and storage virtualization software. Its hypervisor now has native VM awareness of a range of third-party storage systems, and its virtual SAN now supports all-flash architecture and substantially larger virtual storage clusters.

    “Every traditional industry across the globe is being transformed by software,” VMware CEO Pat Gelsinger said in a statement. “Today, we are taking another leap forward in helping our customers meet these demands through a unified platform, defined in software, which will offer unmatched choice and extends our innovations across compute, networking and storage to deliver the hybrid cloud.”

    Fleshing Out Hybrid Cloud Technology

    The data center software vendor first announced its intention to become a public Infrastructure-as-a-Service provider in May 2013, entering a competitive market dominated by Amazon Web Services and, to a much lesser extent, Microsoft, IBM, and Google, among a handful of others.

    The pitch for the services, first called vCloud Hybrid Service and later renamed vCloud Air, was that companies with existing VMware deployments in their data centers would find it easy to simply extend those environments into the public cloud, gaining a degree of scalability and elasticity they did not previously have.

    Given VMware’s ubiquity across enterprise data centers, the promise made it a serious contender for enterprise cloud market share. In January it unveiled vCloud Air networking services that enable this integration, and Monday’s announcement built on that.

    The integration is enabled through VMware NSX, the company’s network virtualization platform. Delivered as a service, the capabilities include network traffic isolation and dynamic routing. VMware is planning to make the services available sometime in the first half of the year.

    Long-Distance Workload Migration

    The new long-distance live workload migration capabilities are part of vSphere 6, the latest version of the company’s flagship server virtualization software. VMware is promising “zero-downtime” migration, with multi-processor fault tolerance to ensure large VMs stay up during migration.

    One of the other major additions in vSphere 6 is “instant clone technology,” which makes it possible to quickly replicate and provision thousands of container instances and VMs for rapid scalability.

    1:00p
    EdgeConneX Plans to Add 10 Edge Data Centers in 2015

    Edge data center colocation provider EdgeConneX expects to have north of 30 data centers this year. The company currently has 20 facilities, all but two of which were added in 2014. While the pace of expansion might suggest the company is calling a few racks a data center, these are all facilities of between 15,000 and 40,000 square feet.

    EdgeConneX focuses on “edge” data centers, helping content providers colocate as close as possible to areas with high concentrations of users. It serves very big content providers, such as cable companies and content delivery networks.

    The idea of an edge data center sounds similar to a CDN. However, instead of providing caching servers, the company provides high-grade colocation space, or “Layer 0,” as it calls it.

    “We ourselves are not a CDN,” Clint Heiden, chief commercial officer at EdgeConneX, said. “We put infrastructure as close as you can have it. We provide management of the data center. We give a user a view into every data center so they know what’s going on down to the rack and component level.”

    The company took the pricing structure of big wholesale players like DuPont Fabros and applied it to the edge locations on a much smaller individual scale. If a content provider deploys across EdgeConneX’s entire portfolio, it pays a single per-kilowatt rate across all of those cabinets. It’s like buying wholesale space that’s geographically dispersed.
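    As a purely illustrative sketch of what that pricing model implies, the short Python snippet below totals monthly spend for a hypothetical deployment at one blended per-kilowatt rate; the rate and the per-market loads are invented numbers, not EdgeConneX pricing.

        # Hypothetical example of a single per-kilowatt rate applied across a
        # geographically dispersed footprint. Rate and loads are invented.
        RATE_PER_KW_MONTH = 200.0  # assumed $/kW/month, for illustration only

        deployments_kw = {  # contracted critical load per market (hypothetical)
            "Pittsburgh": 150,
            "Portland": 90,
            "Memphis": 60,
        }

        total_kw = sum(deployments_kw.values())
        print(f"{total_kw} kW across {len(deployments_kw)} markets -> "
              f"${total_kw * RATE_PER_KW_MONTH:,.0f} per month at one blended rate")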

    CDNs themselves have proven to be ideal customers. The biggest CDN provider, Akamai, is present across EdgeConneX’s footprint and has also invested in the company.

    Unique Site Selection Criteria

    The company does not always build where other data center providers, even those specifically serving secondary markets, are located, Heiden said.

    Opening an edge data center can be a risky proposition, and many of the markets the company enters require speculative builds. But every data center it builds is EBITDA-positive from the start, Heiden said. EdgeConneX often acquires and modernizes existing facilities that meet its selection criteria.

    It goes through the site selection process in less than a week and begins permitting almost immediately, getting data centers up in about four months.

    The company builds out in increments of about 2 megawatts. The first deployment in Pittsburgh, for example, had a critical load of 1.6 megawatts. In every market, it has the ability to at least double that, according to Heiden.

    Edge Data Centers — Answer to Comcast’s Problem

    EdgeConneX was founded as a wireless play, but then pivoted to providing network points of presence. Its colocation story starts with Comcast.

    Comcast R&D wanted to move its DVR to the cloud. The pilot program for Cloud DVR ran out of Denver, but serving a market like Boston from there brought the network to its knees. So Comcast asked EdgeConneX to evaluate options for bringing the content closer to the eyeballs.

    The problem it addressed, Heiden said, was that the space either didn’t exist or was low-grade legacy telco space. “Even the CDNs are having the same issues,” he said. “A current wireless carrier customer had racks next to a broom and mop.”

    What was available didn’t work. “The power was wrong; the cooling was wrong; the power density and power per rack were wrong. Some had enough space and some didn’t. Comcast needed an environment where it could put its infrastructure, and that was the start of our edge data centers.”

    But edge data centers quickly became a universal need. “The Internet was moving at lightning pace in the 90s. Two years ago the edge was a conversation; it made sense. It brought a lot of debate. I think you can definitively say in 2015 that the edge has become a fact.”

    1:00p
    DataStax Acquires Aurelius and its TitanDB Graph Database

    DataStax has acquired Aurelius, expanding the capabilities of DataStax’s enterprise Cassandra offerings with the addition of the TitanDB graph database technology. The two companies will also work on an all-new product. Terms of the deal were not disclosed, but all eight engineers on the small but innovative Aurelius team are joining DataStax.

    DataStax provides enterprise implementations of the open source Apache Cassandra. The rationale behind the acquisition hinges on the importance of bringing graph database capabilities to Cassandra. The company is building a unified, distributed database.

    Cassandra is an ultra-powerful NoSQL database that’s all about extreme scale and fast performance. One famous user of Cassandra is Netflix, which does over a trillion transactions a day using it. Hulu also uses Cassandra.
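    As a rough illustration of how an application writes to and reads from Cassandra, here is a minimal sketch using the open source DataStax Python driver; the contact point, keyspace, and table names are placeholders, not anything from Netflix or Hulu.

        # Minimal Cassandra round trip with the DataStax Python driver
        # (pip install cassandra-driver). All names below are placeholders.
        from cassandra.cluster import Cluster

        cluster = Cluster(["127.0.0.1"])  # contact point(s) of the cluster
        session = cluster.connect()

        session.execute("""
            CREATE KEYSPACE IF NOT EXISTS demo
            WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
        """)
        session.execute("""
            CREATE TABLE IF NOT EXISTS demo.events (
                user_id text, event_time timeuuid, payload text,
                PRIMARY KEY (user_id, event_time))
        """)

        # Rows are distributed across nodes by partition key (user_id), which is
        # what lets Cassandra absorb very high write volumes.
        session.execute(
            "INSERT INTO demo.events (user_id, event_time, payload) VALUES (%s, now(), %s)",
            ("alice", "played: episode 42"),
        )

        for row in session.execute("SELECT * FROM demo.events WHERE user_id = %s", ("alice",)):
            print(row.user_id, row.payload)

        cluster.shutdown()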

    Graph databases are a powerful tool for building algorithms based on patterns. They use graph structures to form many-to-many relationships across heterogeneous data.

    Graph capabilities are a heavily requested feature. “We’re constantly listening, and the volume of interest we have heard for graph database has grown to a roar,” said DataStax Executive Vice President of Engineering Martin Van Ryswyk.

    Van Ryswyk said the two companies have similar cultures, goals, and vision. TitanDB is built atop Cassandra, so both the people and the products are a good fit.

    DataStax raised $106 million last September and $45 million in 2013, and that funding has gone toward extending Cassandra. With last year’s release of version 4.0, the company added analytics and search capabilities, as well as an in-memory option.

    A user can create powerful algorithms that take advantage of heterogeneous data sources. Recommendation engines are one popular example.
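    To make the many-to-many idea concrete, here is a tiny, library-free Python sketch of the kind of traversal a recommendation engine performs; the data is invented, and a real Titan deployment would express this as a graph traversal over a Cassandra-backed graph rather than in plain Python.

        # Toy "users who liked what you liked also liked..." traversal over an
        # in-memory graph. Edges model the many-to-many user<->item relationship.
        from collections import Counter

        likes = {  # user -> liked items (invented sample data)
            "alice": {"cassandra", "titan"},
            "bob":   {"cassandra", "spark"},
            "carol": {"titan", "spark", "kafka"},
        }

        def recommend(user: str) -> list[str]:
            """Walk user -> items -> similar users -> their items, rank by overlap."""
            seen = likes[user]
            scores = Counter()
            for other, items in likes.items():
                if other == user:
                    continue
                overlap = len(seen & items)   # how similar the other user is
                for item in items - seen:     # only items the target user hasn't liked
                    scores[item] += overlap
            return [item for item, _ in scores.most_common()]

        print(recommend("alice"))  # ['spark', 'kafka'] for this sample data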

    Aurelius’ aim was to take a nascent technology and make it highly scalable. “People thought you couldn’t scale graph, so we sought to prove them wrong,” said Matthias Broecheler, managing partner at the company.

    Titan is still a young project, but its core systems are mature. Broecheler said a lot of people are using it in production, and many of them are requesting a commercial offering. “We want to build this commercial offering. Titan is built on Cassandra, so it makes a lot of sense.”

    Aurelius is very much an engineering company, and the acquisition will take the project to the next level thanks to DataStax’s resources.

    4:31p
    All About Location: Maximize Uptime and Improve Performance

    Pete Mastin has over 25 years of experience in business strategy, software architecture and operations, and strategic partnership development. His background in cloud and CDN informs his current role as lead evangelist at Cedexis.

    In spite of what many have predicted, data centers continue to grow in popularity. The prevalence of “server huggers” and cloud privacy concerns will continue to keep a significant number of enterprises from taking their applications to the cloud.

    As Ron Vokoun mentioned in his article on Top 10 Data Center Predictions 2015, FUD will also play a part in maintaining a steady need for new data centers (as opposed to wholesale migration to the cloud). We agree with his assessment that an optimized hybrid model of both is much more likely.

    Bigger, Stronger, Faster … Or Are They?

    Today’s data centers are built to be bigger, stronger and more resilient. Yet not a month goes by without news of a commercial or private data center failure. A survey of AFCOM members found that 81 percent of respondents had experienced a failure in the past five years, and 20 percent had been hit with at least five failures.

    Most seasoned operations managers have a wall of RFOs (Reasons for Outage). I called mine “The Wall of Shame.” Whether it is the UPS system, the cooling system, the connectivity or any of a myriad of subsystems that keep the modern data center working, N+1 (or its derivatives) does not guarantee 100 percent uptime. Until the robots take over, nothing can eliminate human error.

    Outside These Four Walls

    Furthermore, if your applications require top performance, there are a near infinite number of things that can impact you outside the data center. Connectivity issues (both availability and latency) are out of your control, from acts of god to acts of man. Peering relationships change, backhoes continue to cut fiber and ships at sea continue to drag their anchors. Hurricanes, earthquakes, tornadoes, tsunamis and rodents on high wires will continue to avoid cooperation with data center needs.

    So how do we overcome these challenges? The simple answer: multi-home your data center.

    Split Them Up, Spread Them Out

    There is no reason for the type of outages described above to impact the correctly configured enterprise application. Architects and designers have long realized that data center outages are a fact of life. Every disaster recovery and high availability architecture of the past 10+ years relies on the use of geographically diverse deployment. Generally, the best practices for critical application deployment are:

    • Have your technology deployed across multiple availability zones to maximize uptime in case of natural disasters such as hurricanes or earthquakes.
    • Have your technology deployed across multiple vendors. Vendor-specific outages are more common than natural disasters. Even carrier-neutral data centers often have backchannel links between their own facilities, and those loops can be damaged. Other software-related failures can plague specific vendors and cause issues. Further, having multiple vendors can help keep your costs down during annual renewals.

    One More Piece to the Puzzle

    The first two are well-understood rules. But many architects miss the third leg of this stool: adequate monitoring of your applications (and their attendant infrastructure), combined with global load balancing driven by real-time performance data, is critical for 100 percent uptime.

    All too often we see applications having performance issues because the monitoring solution used is measuring the wrong things or perhaps the right things but too infrequently. The type of monitoring can and will change based on a variety of factors. Our findings show that the best solution is to mix Application Performance Monitoring (APM) with a good Real User Measurements (RUM) tool to get the best of both types. Performance issues are avoidable when real-time traffic management is deployed. We propose the following addendum to the traditional rules above:

    • Use real-time global traffic management – based on a combination of APM and RUM – and make this data actionable to your Global Traffic Management (GTM) tool to distribute traffic in an active-active configuration.
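    As a minimal, vendor-neutral sketch of that rule, the Python snippet below blends synthetic (APM-style) and real-user (RUM-style) latency for each site and turns the result into active-active traffic weights; the site names and numbers are hypothetical, and this is not Cedexis code.

        # Sketch: latency-based global traffic steering. Blends synthetic probe
        # latency (APM) with real-user latency (RUM) and weights healthy sites.
        from dataclasses import dataclass

        @dataclass
        class SiteHealth:
            name: str
            synthetic_ms: float  # latency seen by synthetic monitors (APM)
            rum_ms: float        # median latency reported by real users (RUM)
            available: bool      # did the last health check succeed?

        def traffic_weights(sites: list[SiteHealth], rum_bias: float = 0.7) -> dict[str, float]:
            """Return per-site traffic shares, favoring what real users experience."""
            healthy = [s for s in sites if s.available]
            if not healthy:
                raise RuntimeError("no healthy sites: fall back to static DR plan")
            # Lower blended latency -> larger share of traffic.
            scores = {
                s.name: 1.0 / (rum_bias * s.rum_ms + (1 - rum_bias) * s.synthetic_ms)
                for s in healthy
            }
            total = sum(scores.values())
            return {name: score / total for name, score in scores.items()}

        pool = [
            SiteHealth("dc-east", synthetic_ms=38, rum_ms=55, available=True),
            SiteHealth("dc-west", synthetic_ms=41, rum_ms=48, available=True),
            SiteHealth("cloud-eu", synthetic_ms=90, rum_ms=120, available=False),  # outage: gets no traffic
        ]
        print(traffic_weights(pool))  # roughly {'dc-east': 0.48, 'dc-west': 0.52}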

    Following these best practices will allow applications to maintain 100 percent uptime and the best possible performance, regardless of their providers’ maintenance or acts of god. There is no substitute for RUM in this equation. While synthetic measurements (via APM) are a very important part of the mix, you do not really understand what your end users experience unless you measure it. While this seems like a tautology, far too many fail-over solutions miss this vital point. If your data center goes down, you must immediately route traffic away. The bottom line is that something has gone wrong the moment your end users experience it.

    The upside of this approach (if deployed correctly) is that you actually get improved performance – since traffic flows to the best-performing data center – even when nothing is going catastrophically wrong. This will make your end users happy, which, after all, is what we’re here for. At least until the robots take over.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.

    6:39p
    Report: AT&T to Sell $2B Worth of Data Center Assets

    AT&T wants to sell about $2 billion worth of data center assets as it looks for ways to pay down debt after making huge investments in spectrum and acquisitions, Reuters reported, citing three anonymous sources.

    No details about the planned AT&T data center sale have been made public. A company spokesperson declined to comment.

    The report suggested the company may be facing a rising debt ratio because of pending acquisitions and because of an $18.2 billion spectrum purchase it made in an auction that ended last week.

    A company selling data centers doesn’t necessarily mean it is moving equipment out of the facilities. A sale-leaseback transaction is a common way for a company to get out of ownership and management of a large real-estate asset while keeping it as part of the company’s infrastructure.

    AT&T made two such deals in 2013, when it sold a 100,000 square foot data center in Brentwood, Tennessee, for about $110 million and a 140,000 square foot data center in Waukesha, Wisconsin, for $52 million to Carter Validus Mission Critical REIT. AT&T signed leases for both properties as their single tenant following the transactions.

    “Enterprises that own their own data centers are often looking at a sale-leaseback as an option,” Tim Huffman, executive vice president and national director of the Technology Solutions Group at Colliers International, said. “It’s very much a growing trend.”

    The most attractive aspect of a sale-leaseback transaction for its occupant is that the asset changes hands without disruption to IT. Data center migration is a very disruptive and costly process.

    The purchaser usually specializes in data center operation. For such a company, a sale-leaseback transaction with a marquee tenant is a perfect deal, since it simply adds a revenue-generating property to its portfolio.

    Often, such transactions also involve properties that are underutilized by the owner, Huffman added. Once a property is bought, the new owner maximizes its use by remodeling and leasing out the unoccupied space.

    Carriers make similar deals with their cell towers, as the Reuters report pointed out. AT&T sold $4.85 billion worth of its towers to tower operator Crown Castle in 2013.

    7:34p
    Internap Lights Up OpenStack Public Cloud in New Jersey

    Internap has made its OpenStack-based public cloud services, called AgileCloud, available in the New York metro region. Hosted in the company’s new Secaucus, New Jersey, data center, it is the fourth location for the cloud, which launched in 2014.

    Internap isn’t as much after the public cloud market as it is after the hybrid cloud opportunity. The OpenStack public cloud is a complement to private cloud deployments. Customers can seamlessly link the public cloud with private bare-metal instances they rent from the provider, or with their colocation and managed hosting environments in the provider’s data centers.

    Internap services run the gamut, so customers can choose the right setup for the right workloads through one provider, depending on their performance, economic, and technical needs.

    “As enterprises increasingly adopt OpenStack for private clouds, they require public cloud platforms that provide the same level of interoperability as well as the performance and reliability needed to deploy their mission-critical applications at scale,” Satish Hemachandran, senior vice president and general manager of cloud and hosting at Internap, said in a release.

    There are several performance tiers and custom configurations on AgileCloud. Customers choose from three configuration series depending on performance requirements. There are dedicated CPU options as well as all-SSD storage options for those with heavy IOPS needs. Internap also offers networking and management tools.
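    Because AgileCloud exposes standard OpenStack APIs, choosing a tier programmatically looks like ordinary flavor selection. Below is a rough sketch using the openstacksdk library; the cloud name, flavor thresholds, image, and network names are assumptions for illustration, not Internap’s actual catalog.

        # Sketch: pick the smallest OpenStack flavor that meets a workload's needs
        # and boot a server. Assumes an "agilecloud-nj" entry exists in clouds.yaml;
        # the image and network names below are invented.
        import openstack

        conn = openstack.connect(cloud="agilecloud-nj")

        def pick_flavor(min_vcpus: int, min_ram_mb: int):
            """Return the smallest flavor satisfying the CPU and RAM requirements."""
            candidates = [
                f for f in conn.compute.flavors()
                if f.vcpus >= min_vcpus and f.ram >= min_ram_mb
            ]
            return min(candidates, key=lambda f: f.ram) if candidates else None

        flavor = pick_flavor(min_vcpus=4, min_ram_mb=8192)
        image = conn.compute.find_image("centos-7")      # hypothetical image name
        network = conn.network.find_network("private")   # hypothetical network name

        server = conn.compute.create_server(
            name="iops-heavy-app",
            flavor_id=flavor.id,
            image_id=image.id,
            networks=[{"uuid": network.id}],
        )
        server = conn.compute.wait_for_server(server)
        print(server.status)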

    The other AgileCloud deployments are in Dallas, Montreal, and Amsterdam data centers. AgileCloud went into beta in 2013.

    The Secaucus facility opened in early 2014 after Internap moved out of Google-owned 111 8th Ave.

    Internap has been a contributor to the OpenStack project since 2010.

    8:03p
    White Label Cloud Provider Peak Rebrands as Faction

    Peak, whose current specialty is providing white label cloud infrastructure to enable channel partners to become cloud service providers, has rebranded to Faction so that its name better reflects its business.

    Prior to Peak, the company was named PeakColo. As the old name implied, it did offer colocation services before refocusing its efforts on being a cloud enabler through Peak and dropping “colo” from its name.

    This week’s name change comes with a promise not to compete with the company’s customers by providing its own cloud services.

    There are several companies whose names include the word “Peak,” such as Peak Technology Solutions and Peak Technology Enterprises. Perhaps the most confusion occurred between Peak and Peak 10, a managed services, cloud, and data center provider. Both companies deal with cloud and both do channel business. The former is different in that it doesn’t deal with end users like the latter does. It only helps other service providers set up and sell their own cloud services.

    Faction’s main product has also been renamed “Cloud Building Blocs.”

    The new product portfolio includes:

    • Faction Cloud: Build and tailor your cloud using customizable modular cloud building “blocs.” Re-size, make it private cloud, apply reservation models, and even resell.
    • Advanced Solutions Group: Hands-on help with custom engineering requests such as bare-metal solutions, workload recalibration, and hybrid cloud
    • White Label Cloud: Engineered cloud infrastructure for channel customers that includes support by Faction

    Faction’s cloud platform is technology-agnostic. Its cloud infrastructure is based on VMware vCloud, NetApp, Cisco UCS, and Open Compute servers. To customers wanting to set up clouds it provides dedicated virtualization management portals, dedicated resources, and a 100 percent money-back Service Level Agreement.

    The company has several patents that deal with Layer 2 private pathway connectivity. The Layer 2 connection means customers can use their own IP schema, firewalls, and routing, which makes it less complex than connecting through Layer 3. Converting to Layer 3 is usually a time-consuming process, particularly for larger companies with a multitude of devices.

    Faction raised $9 million in equity last November for a total of $16 million raised since December 2013. The company is in its early, high-growth stage.

    “Following three years of triple-digit growth, we knew it was time for our corporate identity to closely match our core tenets,” Luke Norris, CEO and founder, said in a release. “The Faction name is a perfect representation of our beliefs. Together our company’s members represent a unique breed of IaaS cloud thought leaders and engineers who believe there is a better way to deliver cloud. We banded together for the pursuit of giving our customers and partners control, not compromise.”

    Faction operates cloud nodes in Silicon Valley, Seattle, Denver, Chicago, New Jersey, New York, Atlanta, and the U.K.

    9:36p
    AMS-IX to Launch New York IX PoP at Telx Data Center

    AMS-IX USA, a subsidiary of Amsterdam Internet Exchange operator AMS-IX, is planning to put a Point of Presence at the Telx data center within the Google-owned carrier hotel and office building at 111 8th Ave. in New York.

    The New York IX PoP is the latest in a series of recent moves by the European Internet exchange giant to expand in the North American market. After signing agreements to put PoPs in four New York metro region data centers in 2013, AMS-IX last year put one at CME Group’s Chicago data center at 350 E. Cermak and expanded to the West Coast with PoPs at Digital Realty and CoreSite data centers in San Francisco and Silicon Valley, respectively.

    AMS-IX is a member of Open-IX, an effort by a group of companies, including Akamai, Netflix, and Google, to establish member-run Internet exchanges (similar to the exchange model widely used in Europe) that extend across multiple data centers in a metro. The ultimate goal is to bring more diversity to an exchange market currently controlled by a handful of data center providers, including, to the largest extent, Equinix.

    The Telx data center where the new New York IX PoP by AMS-IX will be located is in one of the most connected buildings in the world, making it a key network interconnection hub. Besides all the carriers that have equipment in the building, there are also lots of data center providers.

    Since Google bought the building in 2010, however, the company has not allowed data center providers to renew their leases there. Google bought the building for office space.

    One of the providers that had to move out because its lease was expiring was Internap, which relocated to a brand new data center in New Jersey last year. Telx CEO Chris Downey told us earlier that his company was not facing a looming end of lease in the building. Its current lease will not expire for several decades, he said.

    Companies peering on the AMS-IX exchange at 111 8th Ave. will also be able to peer with customers at the other two Telx data centers in Manhattan: 60 Hudson St. and 32 Avenue of the Americas.

    “With several potential and existing AMS-IX customers within Telx’s data centers facilities, NY2 [name of the data center at 111 8th Ave.] is the perfect location for us to expand the AMS-IX New York platform and further grow our community,” Job Witteman, CEO of AMS-IX, said in a statement.

