Data Center Knowledge | News and analysis for the data center industry
 

Monday, July 28th, 2014

    12:00p
    RackWare Adds Disaster Recovery to Cloud Migration Software Suite

    RackWare, a provider of intelligence and automation management software, has added cloud disaster recovery capabilities to the latest version of its software. RackWare Management Module (RMM) 3.0 adds simple disaster recovery to any cloud and helps prevent vendor lock-in, according to the vendor.

    The company’s software is often used for migrating physical workloads into or out of any cloud. It has been expanding into complementary capabilities, such as auto-scaling and now easy cloud disaster recovery.

    CEO and co-founder Sash Sunkara said customers often first use RackWare either to migrate workloads or to map out and assess a deployment. Those customers often stick around and use it for auto-scaling and management. The new disaster recovery capability extends RMM’s use further.

    RackWare recently raised more than $7 million and passed the 200-customer mark. It has several partnerships in place with data center and cloud providers, including IBM SoftLayer, CenturyLink and Peer 1, and many of its partner customers use the software at scale. CenturyLink, for example, used it extensively to migrate customers into its cloud.

    Since raising that funding, the company has been building complementary technology around moving and managing workloads and applications. Sunkara said disaster recovery was not only one of the most requested capabilities but also the logical next step for the company.

    The newly added capability provides whole-server protection and failover. It is an alternative to, though not necessarily a replacement for, more expensive DR options, such as running a fully replicated data center architected for high availability, or clustering technologies.

    RackWare’s advantages over traditional disaster recovery are setup speed and simplicity. Workloads can be protected in as little as an hour, compared to the days or weeks it takes to deploy more complex disaster recovery options.

    The disaster recovery in RMM 3.0 is already being used in production by a few select customers. Sunkara said the limited access period helped the company gather feedback and fine-tune the product. It’s now widely available.

    “Disaster recovery is an essential element of any business IT,” Sunkara said. “However, so far enterprises have been limited to either complex, expensive high-availability solutions or tape backup solutions with lengthy restoration times, not ideal for a business that needs to get back up and running quickly after an outage or natural disaster. With today’s launch of RackWare Management Module 3.0, we extend our intelligent automation and policy framework to enable business-critical use cases.”

    Key new functionality in RMM 3.0 includes:

    • Cloning of production servers – enables full replication of the operating system, applications and data from production servers into cloud recovery instances
    • Incremental synchronization – changes in the operating system, applications and data are synchronized to the recovery instances as the production instances change; only differences are transmitted, saving bandwidth and resources (see the sketch after this list)
    • Cloud-to-Cloud – production and recovery instances can span heterogeneous cloud infrastructure or remain in the same infrastructure, across Amazon Web Services, Rackspace, VMware, SoftLayer and OpenStack, among others
    • Physical-to-Cloud – physical production servers in traditional data centers can be protected by cloned instances in any cloud
    • Failover – should an outage occur on the production system, the fully synchronized recovery instance automatically takes over workload processing
    • Failback – once the production server is restored, the recovery instance synchronizes all changes that took place during the outage back to the production server for normal operations
    • Complete protection – Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) improve because the scope of disaster recovery expands to include workloads that are normally underprotected
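
    To make the incremental synchronization idea concrete, here is a minimal, hypothetical sketch of block-level delta detection in Python. It is not RackWare's implementation; the block size, hashing scheme and transport call are all illustrative assumptions.

    ```python
    # Hypothetical sketch of incremental block-level synchronization: hash the
    # volume in fixed-size blocks and ship only the blocks whose hashes changed.
    import hashlib

    BLOCK_SIZE = 4096  # assumed block granularity

    def block_digests(path):
        """Return one SHA-256 digest per fixed-size block of the file."""
        digests = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                digests.append(hashlib.sha256(block).hexdigest())
        return digests

    def changed_blocks(path, previous_digests):
        """Yield (index, data) for blocks that differ from the last sync."""
        with open(path, "rb") as f:
            index = 0
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                digest = hashlib.sha256(block).hexdigest()
                if index >= len(previous_digests) or previous_digests[index] != digest:
                    yield index, block  # only the delta is transmitted
                index += 1

    # Usage: snapshot digests once, then periodically ship only changed blocks.
    # baseline = block_digests("/var/lib/app/data.img")
    # for idx, data in changed_blocks("/var/lib/app/data.img", baseline):
    #     send_to_recovery_instance(idx, data)  # hypothetical transport call
    ```
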
    12:00p
    Q2 Was Good to U.S. Wholesale Data Center Providers

    U.S. wholesale data center real estate giants DuPont Fabros Technology and CoreSite Realty Corp. announced strong second-quarter results last week, while rivals Digital Realty Trust and QTS Realty Trust prepared to brief investors on their own quarterly performance; both of those reports are scheduled for tomorrow.

    DFT and CoreSite’s results indicate a very positive market environment for companies that deal in large amounts of space and multi-megawatt power deals. Both finished the quarter with little completed data center space in their portfolios left without tenants; both are building out more capacity; and both have plenty of capital available to fund construction.

    CoreSite successful with large and small deals

    CoreSite reported fairly strong year-over-year growth in both revenue and funds from operations (a REIT equivalent of earnings) for the quarter. Its FFO grew about 13 percent, reaching $0.57 per share, and revenue grew about 14 percent, to $65.7 million.

    As of the quarter’s end, about 85 percent of finished data center space in the company’s portfolio was occupied. CoreSite commenced leases on about 60,000 square feet at an annualized rent of $135 per square foot during the quarter and signed new and expansion leases at an annualized rate of about $160 per square foot, which will bring about $9.4 million in additional revenue per year.

    CoreSite does both wholesale and smaller retail-scale deals. Leases signed during the quarter totaled about 60,000 square feet, spread across about 120 separate contracts. Slightly under half of that space was in the company’s SV3 data center in Santa Clara, California, and was leased out to one customer under a single contract.
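
    Those figures hang together as back-of-envelope arithmetic (an illustrative check, not the company's own accounting):

    ```python
    # Illustrative sanity check of the leasing figures above.
    signed_sq_ft = 60000   # new and expansion leases signed during the quarter
    annual_rate = 160      # annualized rate, dollars per square foot
    print(signed_sq_ft * annual_rate)  # 9,600,000 -- consistent with ~$9.4M/year
    ```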

    With the bulk of its finished space spoken for, CoreSite is fitting out 50,000 square feet in its VA2 data center in Reston, Virginia. The company expects to spend about $74 million on the project, most of which ($61 million) it has spent already.

    Further expansion will not be hard to finance, since CoreSite has about $11 million in the bank and $221 million available under its credit facility.

    CoreSite CEO Tom Ray said the volume of turn-key space sold during the quarter was the highest since the company went public. “Second-quarter sales production reflects the execution of a large lease in the Bay Area and strong leasing across the remainder of the portfolio led by Los Angeles, Boston and Northern Virginia,” he added.

    DuPont Fabros commissions space just in time

    CoreSite’s bigger rival DuPont Fabros reported healthy year-over-year revenue and earnings growth as well. The company’s revenue for the quarter was $102 million, up 11 percent, and earnings were $0.32 per share, up nearly 80 percent from the second quarter of last year.

    Revenue increased primarily because of the three new leases that commenced during the quarter, DFT said. They totaled about 60,000 square feet of space with access to about 10 megawatts of critical power capacity.

    The company finished the quarter with 96 percent of its operating portfolio leased. It executed three leases during the quarter, totaling nearly 50,000 square feet and about 7.1 megawatts of power.

    Since the quarter ended, DFT brought online Phase I (about 12 megawatts) of its ACC7 data center in Ashburn, Virginia, 17 percent of which was pre-leased. The company also recently brought online a new 9-megawatt phase at its Santa Clara data center, 77 percent of which was pre-leased before it was lit up.

    DFT President and CEO Hossein Fateh said there was growing customer demand, which the company stood to capture with the unleased capacity in the two new phases on the East and West coasts. “We are equipped to capture more demand with the 21 new megawatts of development we’ve placed in service in the vibrant data center hubs of Northern Virginia and Santa Clara,” he said.

    DFT is building at full steam. Construction has started on another 9-megawatt phase in Santa Clara and a 7-megawatt phase at the company’s CH2 site in Chicago. It expects to bring both online next year – Santa Clara in the first quarter and Chicago in the third.

    Like CoreSite, DFT has access to plenty of capital to finance further expansion: $56 million in cash and $560 million of available credit capacity.

    DLR and QTS cooking with different ingredients

    It will be interesting to watch QTS and Digital Realty report their results for the quarter against the backdrop of the tight-supply, strong-demand conditions their two rivals operate in. Their results will not be as clear-cut: QTS recently bought several massive properties, and Digital Realty is going through a rough transition following the recent departure of founding CEO Michael Foust, selling off underperforming assets and focusing on maximizing the profitability of its portfolio.

    Both are reporting their results Tuesday.

    12:30p
    Making Clusters Easy to Use and Easy to Own

    Jerry Melnick is currently responsible for defining corporate strategy and operations at SIOS Technology Corp.

    Companies have more choices than ever to save money and improve efficiency in their IT environments, such as public and private clouds, virtual server environments, and high-performance SSD storage. They can even combine a traditional physical server environment with the cloud in hybrid configurations for better disaster recovery protection.

    However, protecting business-critical applications such as SQL Server, Oracle, SAP, and file and print services from downtime and disasters in these environments poses a variety of challenges for traditional SAN-based cluster environments. Companies should rethink their approach to high availability (HA) and disaster recovery (DR) and consider alternatives that do not require shared storage.

    Options for high availability and disaster protection

    Traditionally, companies have provided HA protection by configuring a primary application server and a standby server in a cluster. They use clustering software that monitors the application and, in the event of a failure, performs a “failover” of application operation to the standby server. Since both the primary and standby servers use the same shared storage (SAN), the standby server has immediate access to up-to-date data as soon as it takes over operations. This method has a variety of limitations that may completely offset the benefits of adopting today’s flexible, dynamic data center configurations.
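
    For illustration, the monitor-and-failover loop such clustering software runs can be sketched as below. This is a simplified, hypothetical example; the endpoint and thresholds are made up, and real products (Windows Server Failover Clustering, Pacemaker and the like) add quorum, fencing and resource management on top.

    ```python
    # Naive sketch of a cluster heartbeat monitor with failover.
    import socket
    import time

    PRIMARY = ("primary.example.com", 5432)  # hypothetical monitored endpoint
    CHECK_INTERVAL = 5                       # seconds between health checks
    MAX_FAILURES = 3                         # consecutive misses before failover

    def is_alive(host, port, timeout=2.0):
        """True if the monitored application still accepts TCP connections."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def promote_standby():
        """Placeholder for failover: in a SAN cluster the standby attaches the
        shared storage, starts the application and takes over the virtual IP."""
        print("failing over to standby server")

    failures = 0
    while True:
        failures = 0 if is_alive(*PRIMARY) else failures + 1
        if failures >= MAX_FAILURES:
            promote_standby()
            break
        time.sleep(CHECK_INTERVAL)
    ```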

    For example, cloud providers do not offer SAN-based clusters for HA protection. Even a fully redundant SAN resides in a single location, making it a potential single point of failure that is vulnerable to threats, ranging from simple power failures to regional disasters. To add disaster protection to an existing SAN environment, you have to purchase an additional SAN that is identical to the first—doubling the hardware cost and adding vendor lock-in. You may find it difficult or even impossible to integrate it with failover clustering.

    Clusters without limitations

    One alternative is SAN-less HA failover cluster software that eliminates the limitations of shared storage. It works like its traditional counterpart, but instead of using shared storage, it uses block level replication to synchronize local storage in the primary and standby servers. The synchronization process gives the standby server access to a copy of the same data as the primary server.
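
    A minimal sketch of that replication path, assuming a synchronous mirroring scheme in which every write is applied locally and acknowledged by the standby before it completes. The address, wire format and acknowledgement byte are invented for the example; real products replicate below the filesystem and handle reconnection and ordering.

    ```python
    # Sketch of synchronous block-level replication from primary to standby.
    import socket
    import struct

    STANDBY = ("standby.example.com", 7789)  # hypothetical replication target

    class ReplicatedVolume:
        """Apply each write locally, then mirror it to the standby and wait
        for an acknowledgement, so the standby always has current data."""

        def __init__(self, path, standby_addr=STANDBY):
            self.local = open(path, "r+b")              # primary's local storage
            self.peer = socket.create_connection(standby_addr)

        def write(self, offset, data):
            self.local.seek(offset)                     # 1. write locally
            self.local.write(data)
            self.local.flush()
            header = struct.pack("!QI", offset, len(data))
            self.peer.sendall(header + data)            # 2. ship the same block
            if self.peer.recv(1) != b"\x01":            # 3. block until acked
                raise IOError("standby did not acknowledge write")
    ```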

    SAN-less software can be used as a cost-effective add-on to Windows Server Failover Clustering or to provide complete Linux clustering. With it, you can use efficient, low-cost server-side storage. The key benefits of SAN-less software cluster designs include:

    • Dramatic cost savings compared to SAN-based configurations
    • Flexible cluster configurations that can be built to suit your HA/DR needs
    • High performance for your most demanding applications

    SAN-less clusters give you the freedom to configure your data center without giving up HA protection. Use both SAN and SAN-less environments and any combination of physical, virtual, and cloud configurations. Since you don’t need identical hardware at the source and destination there is no vendor lock-in. You can even use your existing hardware.

    Leveraging the full benefits

    Virtual server environments let you allocate computing resources more efficiently and scale your data center as your company grows. You can create a SAN-less cluster using VMs sitting on any hypervisor using replication to synchronize storage on the primary VM with storage on a standby VM. Locate the standby VM in the same data center, in your DR site, or both. In the event of a disaster, the standby VM can be brought into service with little to no data loss, eliminating the hours needed for restoration from back-up media.

    If you install SAN-less clustering software at the Microsoft Hyper-V hypervisor level, you can replicate and failover VMs from one host machine to another. Move data, applications, and entire VMs from one host to another quickly and easily without interrupting service to end users. You can also restore replicated VMs to perform DR testing without disruption to the production site.

    Extend clusters for disaster recovery

    Extend a SAN or SAN-less cluster to a remote location, the cloud or a node outside your cluster to provide efficient, real-time, block level replication and disaster protection for your business critical applications. If you have a failure in the primary system, the application will failover to the local standby server automatically. If all of your systems fail or if you have an outage of the entire data center, you have a third node in the cloud waiting to take over with a real-time copy of the data.

    The software enables failover of applications across geographic locations and cloud availability zones or regions to provide site-wide, local, and regional disaster protection.

    SAN-less clusters for high performance storage

    You can build a SAN-less cluster with local attached high performance SSD storage to speed application response times and get complete failover protection for a fraction of the cost of a SAN-based cluster.

    These new technologies are giving you more freedom than ever to configure your data center in efficient, responsive ways. SAN-less clusters provide the flexibility and ease of use needed to take full advantage of them.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:47p
    Telehouse to Expand London Docklands Campus With £135M Data Center

    Japanese telco KDDI is investing an additional £135 million (about $230 million) in the London Docklands data center campus of its subsidiary Telehouse to fund construction of the North Two facility.

    KDDI continues to invest and expand outside of Japan. It has spent more than £272 million on the campus over the past 25 years, £172 million of which was invested over the last five years alone.

    Telehouse Europe Managing Director Hiroyuki Soshi said, “We believe that as the Internet continues to develop at such a dramatic pace, the underlying infrastructure in the Docklands must stay ahead to meet the needs of the future.”

    Investment from Japan has been critical to economic development in the UK. The UK’s secretary of state for business, Vince Cable, said Japan was the second-largest investor in the UK in 2013 and 2014.

    North Two will add about 250,000 square feet of floor space, bringing the site’s overall footprint to about 770,500 square feet. It will be built next to Telehouse North, one of the earliest purpose-built carrier-neutral colocation data centers in Europe.

    Telehouse said North Two is being created with the “New Internet” in mind, a term it uses to describe the shift from webpages to apps, smartphones and tablets, and hybrid infrastructure.

    In February 2013 the company acquired a two-acre site near its Docklands campus to be used for expansion. North Two is in line with the proposal the company laid out in documents filed with the Greater London Authority.

    Telehouse’s Docklands data center campus houses a high proportion of telecommunications carriers. The inward investment gives the campus the chance to build on its reputation as the hub of the Internet in the UK, said John Souter, CEO of the London Internet Exchange (LINX). “What was once the most critical maritime port in the world is now one of the most connected places on earth.”

    The campus already facilitates the majority of LINX capacity, he continued, adding that Internet traffic will continue to grow, especially with the transition to 100G technology that is now underway.  “The vast majority of LINX’s member-facing high capacity ports are in Telehouse. With such a huge proportion of all UK internet traffic flowing through LINX, this investment in the national critical infrastructure of the UK gives us great confidence.”

    Cable welcomed investment by the Japanese ICT giant: “Britain’s economy is growing thanks to Japanese investment and in 2013-2014 they were the second-biggest investor into the UK starting over 100 new projects and creating 3,000 new jobs.”

    6:03p
    CoreOS Launches First Stable Release of Its Web-Scale Operating System

    CoreOS announced the first stable release of its operating system late last week, on SysAdmin Day. It means that a tested, secure and reliable version of the OS for web-scale data center infrastructure is now available for users who want to run production workloads on it.

    CoreOS is a startup with a Linux distribution that can update simultaneously across massive server deployments. The lightweight OS automatically updates and patches many servers at once, enabling highly resilient massive-scale infrastructure. It helps improve security and makes it easier for companies to build compute clusters that can withstand single-node outages.
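
    The update scheme can be pictured as a dual-partition ("A/B") arrangement: the new image is staged on the passive root partition while the active one keeps serving, and the node reboots into it. The sketch below is a loose, hypothetical illustration of that idea, with stub file names standing in for partitions.

    ```python
    # Hypothetical sketch of a dual-partition ("A/B") OS update scheme.
    import shutil

    ACTIVE = "root-a.img"    # partition currently booted (stub file for the demo)
    PASSIVE = "root-b.img"   # idle partition that receives the update

    def stage_update(new_image_path):
        """Copy the downloaded OS image onto the passive partition."""
        shutil.copyfile(new_image_path, PASSIVE)

    def mark_bootable(partition):
        """Stand-in for a bootloader flag: try the new partition on next boot,
        fall back to the old one if it fails to come up."""
        print("bootloader: next boot from %s, fallback to %s" % (partition, ACTIVE))

    # One update cycle: stage, flag, reboot (the reboot itself is omitted here).
    # stage_update("coreos-367.1.0.img")
    # mark_bootable(PASSIVE)
    ```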

    CoreOS has already been tested by many companies, but this is the first full production-ready version. The company has been supporting customers prior to this release, but the stable version puts it in a better position to scale adoption. The stable release covers only the base OS and Docker 1.0; etcd and fleet, its cluster management tools, are not yet marked stable and will gain stable support in subsequent releases, the company said.
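
    For a sense of what etcd does once it is stable: services on a cluster share small pieces of configuration through a simple HTTP API. The sketch below uses etcd's v2 key-value API; the port reflects etcd's default client port at the time, and the key and value are hypothetical.

    ```python
    # Publishing and reading a shared key through etcd's v2 HTTP API.
    import requests

    ETCD = "http://127.0.0.1:4001/v2/keys"  # default client port at the time

    # Publish a value that every node in the cluster can see.
    requests.put(ETCD + "/services/web/endpoint", data={"value": "10.0.0.5:8080"})

    # Any other node can now discover it.
    node = requests.get(ETCD + "/services/web/endpoint").json()
    print(node["node"]["value"])  # -> 10.0.0.5:8080
    ```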

    CoreOS 367.1.0, the first stable version, includes:

    • Linux 3.15.2
    • Docker 1.0.1
    • Support on all major cloud providers, including Rackspace Cloud, Amazon EC2 (including HVM) and Google Compute Engine
    • Commercial support via CoreOS Managed Linux

    The startup recently raised $8 million, and has been backed in the past by venture capital firms Andreessen Horowitz and Sequoia Capital.

    Highlights since the first alpha release in August 2013:

    • 191 releases have been tagged
    • Tested on hundreds of thousands of servers on the alpha and beta channels
    • Supported on more than 10 platforms, ranging from bare metal to primary images on Rackspace and Google clouds

    “This is a huge milestone for us,” wrote CoreOS founder and CEO Alex Polvi. “It is a big day for us here at CoreOS, as we have been working hard to deliver the stable release.”

    6:52p
    Pivotal and Hortonworks to Collaborate on Hadoop Management Tools

    Pivotal and Apache Hadoop distro provider Hortonworks have teamed up to collaborate on an open source project called Apache Ambari, which provides tools and APIs to provision, manage and monitor Hadoop clusters. The collaboration looks to strengthen Hadoop as an enterprise offering and to advance the Hadoop ecosystem.

    Ambari is aimed at making Hadoop operations simpler through a standard management tool. There is a wealth of interest and investment in Hadoop, and many vendors are focused on making the open source distributed computing architecture user-friendly – an aim that forms the basis of the Ambari project.
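
    As a taste of what such a standard management tool looks like in practice, Ambari exposes clusters, hosts and services as JSON resources over a REST API. The sketch below assumes a reachable Ambari server, its default credentials and a hypothetical cluster name.

    ```python
    # Querying an Ambari server's REST API for clusters and service state.
    import requests

    AMBARI = "http://ambari.example.com:8080/api/v1"  # placeholder server
    AUTH = ("admin", "admin")                         # Ambari's default login

    # List the clusters this Ambari server manages.
    clusters = requests.get(AMBARI + "/clusters", auth=AUTH).json()
    for item in clusters.get("items", []):
        print(item["Clusters"]["cluster_name"])

    # Check the state of one service on a (hypothetical) cluster.
    svc = requests.get(AMBARI + "/clusters/mycluster/services/HDFS", auth=AUTH).json()
    print(svc["ServiceInfo"]["state"])  # e.g. "STARTED"
    ```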

    In a boost to efforts to make Hadoop more palatable for enterprises, Hortonworks recently landed a $50 million equity investment from HP. Now it gains another strong partner in the form of Pivotal, the EMC and VMware company led by former VMware CEO Paul Maritz.

    Pivotal aims squarely at the enterprise software developer, and enterprise Hadoop is a big component of its play. It has invested heavily in Hadoop, for which it has its own distribution and complementary modules, such as HAWQ, GemFire XD and Pivotal Command Center.

    “Apache Hadoop projects are central to our efforts to drive the most value for the enterprise,” said Jamie Buckley, vice president of product management at Pivotal. “An open source, extensible and vendor-neutral application to manage services in a standardized way benefits the entire ecosystem. It increases customer agility and reduces operational costs and can ultimately help drive Hadoop adoption.”

    Pivotal is expanding its open source investment by dedicating Pivotal engineers to contribute installation, configuration and management capabilities to Ambari. At the same time, Pivotal will continue to deliver on commitments to existing customers and work closely with them to help them benefit from this collaboration.

    Shaun Connolly, vice president of strategy at Hortonworks, said, “Pivotal has a strong record of contribution to open source and has proven their commitment with projects such as Cloud Foundry, Spring, Redis and more.  Collaborating with Hortonworks and others in the Apache Hadoop ecosystem to further invest in Apache Ambari as the standard management tool for Hadoop will be quite powerful. Pivotal’s track record in open source overall and the breadth of skills they bring will go a long way towards helping enterprises be successful, faster, with Hadoop.”

