Data Center Knowledge | News and analysis for the data center industry
Tuesday, November 3rd, 2015
1:00p
DuPont Fabros Sees Data Center Leasing Momentum in Virginia

While the third quarter wasn’t stellar for DuPont Fabros Technology in terms of signing new leases, the period immediately after the quarter’s last day saw a series of major deals.
The two most notable ones were the lease of a portion of capacity across four East Coast data centers formerly occupied by the bankrupt Net Data Centers (known in the past as Net2EZ), and a lease by one company of more than 10 MW of capacity in Ashburn, Virginia, that used to be occupied by Yahoo. Yahoo, like other big web companies, has built its own data centers and moved into them a lot of infrastructure it used to house in wholesale facilities operated by the likes of DuPont.
The lease of former Net capacity was by Anexio, which recently acquired all of Net’s East Coast assets. Anexio took 4 MW total across three data centers in Virginia and one in New Jersey.
Washington, DC-based DuPont is one of the biggest wholesale data center providers in the US. The company specializes in building and leasing out huge data center facilities, concentrated primarily in Northern Virginia, and its biggest tenants include Microsoft, Facebook, Rackspace, and Yahoo.
Collectively, the top four tenants contribute 60 percent of the provider’s total rent revenue, which has worried some investors, especially as big web companies continue building and operating their own data centers.
ACC2 Goes to Unnamed Big Tenant
DuPont did not disclose the identity of the tenant that took Yahoo’s former 10 MW in ACC2, one of its Ashburn facilities. The tenant is an existing customer, however, company executives said on the third-quarter earnings call last week.
The deal is welcome news for the provider, since the building is designed for large users, and single-tenant 10 MW requirements are relatively rare, DuPont CEO Christopher Eldredge said on the call.
“Last year, we saw three such requirements in Northern Virginia,” he said. “We captured one requirement for 13.65 megawatts at ACC4. We could not accommodate the others, as Yahoo! could not vacate ACC2 in time to meet these requirements.”
Because of “exponential” growth in cloud, data storage, and other internet applications, however, the company’s leadership was confident another large requirement would eventually surface. “It has, and we’ve captured it,” Eldredge said.
Operating Costs High for Single-Tenant Facility
Whoever the user is, they got a steep discount on the lease, although it doesn’t mean they will pay less for the facility in the end. Because ACC2 is a single-tenant data center, it is much smaller than the massive multitenant buildings DuPont specializes in and therefore costs more to operate, which is why the provider had to lower the rent to make it more competitive.
Operating expenses at ACC2 are 75 percent higher than at other DuPont data centers, such as its newest 40 MW facility, ACC7. ACC2 is smaller, and its cooling costs are “significantly higher,” DuPont CFO Jeffrey Foster said on the call.
Taking operating expenses into account, the tenant will end up spending as much on their infrastructure housed at ACC2 as they would in other DuPont buildings.
“On a total cost of occupancy basis the new customer will pay as much at ACC2 as a super wholesale customer would at ACC7,” he said.
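The total-cost-of-occupancy comparison Foster describes is simple arithmetic: a lower base rent at ACC2 is offset by operating expenses roughly 75 percent higher than at a building like ACC7. Below is a minimal sketch with hypothetical dollar figures (the actual rents and operating costs were not disclosed) showing how the discount and the higher opex can cancel out:

```python
# Hypothetical illustration of the total-cost-of-occupancy comparison Foster
# describes. The dollar figures are invented; only the "75 percent higher
# operating expenses" relationship comes from the article.

ACC7_RENT_PER_KW = 100.0                     # hypothetical monthly base rent, $/kW
ACC7_OPEX_PER_KW = 40.0                      # hypothetical monthly opex, $/kW
ACC2_OPEX_PER_KW = ACC7_OPEX_PER_KW * 1.75   # "75 percent higher" at ACC2


def total_cost_per_kw(rent: float, opex: float) -> float:
    """Total cost of occupancy per kW: base rent plus operating expenses."""
    return rent + opex


acc7_total = total_cost_per_kw(ACC7_RENT_PER_KW, ACC7_OPEX_PER_KW)

# For the ACC2 tenant to land at the same total cost of occupancy,
# the base rent has to come down by the opex difference.
acc2_rent = acc7_total - ACC2_OPEX_PER_KW

print(f"ACC7 total cost of occupancy: ${acc7_total:.2f}/kW")
print(f"ACC2 rent that matches it:    ${acc2_rent:.2f}/kW "
      f"({100 * (1 - acc2_rent / ACC7_RENT_PER_KW):.0f}% below the ACC7 rent)")
```

With these made-up numbers, a roughly 30 percent rent discount is what it takes for the two buildings to cost the tenant the same overall.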
Q4 Quickly Outpaces Q3
DuPont has also signed three more leases since the third quarter ended, all with tenants whose names the landlord did not disclose.
Two of the deals, with a single customer, totaled 6 MW across two phases in ACC7. The third one was a single 6 MW deal in the Chicago market.
The company also extended an existing 1.5 MW lease at ACC7 for another four years.
All that adds up to a fourth quarter that is already much better than the third in terms of new leases. The company did not sign any new deals in the third quarter, only extending an existing 0.6 MW one at ACC5.
Two leases totaling 2.6 MW commenced during the quarter.
DuPont reported $115 million in revenue for the third quarter – up 9 percent year over year. Its earnings per share were flat year over year, remaining at $0.29.

1:30p
Infomart to Add 16MW to Portland Data Center

Infomart Data Centers, the company formed as a result of a merger between Fortune Data Centers and the Dallas Infomart last year, announced plans to expand its Hillsboro, Oregon, data center by an additional 16 MW of capacity.
When the two-phase expansion is complete, the 350,000-square-foot campus will have 24 MW of critical power.
Hillsboro, a town near Portland, is a popular data center location with relatively low energy costs. Other data center providers in the market include ViaWest, T5, and Digital Realty Trust. Companies using data centers in Hillsboro for their IT infrastructure include NetApp, Comcast, DreamHost, and Intel, among others.
Demand for data center space in Hillsboro as well as in nearby central and eastern Washington State is growing, according to a recent market report by the commercial real estate firm Jones Lang LaSalle.
“Cloud and SaaS companies, along with content delivery networks, continue to expand there,” JLL analysts wrote. “In addition, telecommunications companies continue to expand their robust grids in these markets.”
Infomart said a “substantial portion” of the capacity being added at its Hillsboro site is pre-leased.
The company plans to add 4 MW in Phase I, which it expects to complete in March 2016. The second phase will add 12 MW, slated for completion in mid-summer of the same year, and will include a new 100,000-square-foot building.

5:14p
Three Roadblocks to Realizing the Benefits of Flash Storage

Vish Mulchand is Senior Director of Product Management and Marketing for HP Storage.
It used to be that the major roadblock to flash storage adoption was cost – the price point of flash made it prohibitive for all but the most mission-critical, high-performance applications. Now, with the cost of flash rivaling that of hard disk drives and continuing to fall, those days are thankfully behind us.
As a result, more and more organizations are turning to flash to keep up with the accelerating demands of enterprise applications. In the process, they’re discovering that flash media changes the performance balance between servers, networks and storage, requiring you to re-think your data center environment.
While flash storage can enhance the performance of your applications, there are three potential roadblocks to realizing the full value from a flash investment:
- Storage network capacity
- Storage architecture
- Resiliency
Network Capacity
Picture yourself on the freeway at 5:00 in the morning. Traffic is relatively light, and everyone is moving along at or close to the speed limit. Now add more cars as the morning commute kicks in, and things gradually slow down. Add more cars, and eventually you’re approaching gridlock. That’s what flash storage can do to your network.
Flash media is fast – up to hundreds of thousands of I/O operations per second (IOPS) at sub-millisecond latency. That’s orders of magnitude beyond the performance of spinning disk. But that ability to generate more read and write operations means more traffic for your storage network, moving data back and forth between storage and servers. And as network traffic piles up, latency increases. The end result is a traffic jam that slows down application performance.
For example, a common online transaction processing (OLTP) workload connected to flash storage can quickly saturate 8 Gb/s Fibre Channel (FC) network components like host bus adapters (HBAs), network switches, and target adapters. Your storage network can become a bottleneck, preventing you from fully utilizing your compute and storage resources.
To get the most from your flash investment, you may need to consider a network upgrade. In our OLTP example, upgrading from 8 Gb/s FC to 16 Gb/s FC can increase bandwidth and IOPS by at least 35 percent and improve storage latency by 2.5x or more. That’s the equivalent of adding extra lanes to a freeway to support more traffic. A network upgrade provides the added benefit of requiring fewer components (switches, adapters, etc.) to achieve bandwidth and latency targets, resulting in lower costs.
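As a rough back-of-the-envelope check of when a flash workload outgrows an FC link, multiply IOPS by average I/O size and compare the result to the usable link throughput. The sketch below uses assumed workload numbers (150,000 IOPS at 8 KB per I/O) and a rough protocol-overhead allowance, not figures from the article:

```python
# Back-of-the-envelope check of whether a flash-backed workload saturates a
# Fibre Channel link. Workload size and overhead allowance are assumptions.


def fc_usable_bytes_per_sec(line_rate_gbps: float, efficiency: float = 0.8) -> float:
    """Approximate usable FC throughput in bytes/sec.

    `efficiency` is a rough allowance for encoding and protocol overhead;
    the exact figure varies by FC generation.
    """
    return line_rate_gbps * 1e9 / 8 * efficiency


def workload_bytes_per_sec(iops: float, io_size_bytes: float) -> float:
    """Bandwidth a workload generates: I/O operations per second times I/O size."""
    return iops * io_size_bytes


# Hypothetical OLTP-style workload: 150,000 IOPS at 8 KB per I/O.
demand = workload_bytes_per_sec(iops=150_000, io_size_bytes=8 * 1024)

for rate in (8, 16):
    capacity = fc_usable_bytes_per_sec(rate)
    print(f"{rate:2d} Gb/s FC: ~{capacity / 1e6:.0f} MB/s usable; "
          f"workload needs ~{demand / 1e6:.0f} MB/s "
          f"({100 * demand / capacity:.0f}% of the link)")
```

With these assumptions the workload needs roughly 1.2 GB/s, which overruns an 8 Gb/s link but fits on a 16 Gb/s one – the freeway with extra lanes.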
Storage Architecture
It’s not just your network that can slow you down; the architecture of your flash array itself could prevent you from realizing the full benefits of flash media.
Some vendors have entered the flash market by simply re-equipping existing disk storage arrays with flash media. On the surface that might sound like a good idea, but in reality it’s a bit like dropping a finely tuned racing engine into the family minivan. Sure, it will run faster, but the minivan has no hope of extracting the full performance the racing engine can deliver, because, looked at end to end, the engine is only one of many elements that determine performance.
Similarly, the characteristics of flash require re-thinking performance through the end-to-end I/O path, including server connectivity, switches, storage controllers, and backend connectivity to the solid-state drive (SSD) media. Much like a race car is optimized to get the full performance from its racing engine, a flash array and its supporting architecture should be optimized for flash media.
Storage controllers and algorithms not designed specifically for the rigors of flash will not deliver the desired latency and I/O performance. Having sufficient bandwidth in your array is another consideration.
Typical dual-controller storage designs struggle to scale with flash, because the performance of a flash array is ultimately a function of its controllers’ performance.
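One way to picture that ceiling is to treat deliverable array IOPS as the smaller of what the controllers can push and what the media can supply. The per-component figures in this sketch are hypothetical, chosen only to show where adding SSDs stops helping:

```python
# Illustration of the controller bottleneck in a dual-controller flash array.
# Per-component IOPS figures are hypothetical.


def array_iops(num_controllers: int, iops_per_controller: int,
               num_ssds: int, iops_per_ssd: int) -> int:
    """Deliverable array IOPS: capped by controllers or media, whichever is lower."""
    return min(num_controllers * iops_per_controller, num_ssds * iops_per_ssd)


IOPS_PER_CONTROLLER = 250_000   # hypothetical
IOPS_PER_SSD = 75_000           # hypothetical

for ssds in (4, 8, 16, 32):
    print(f"{ssds:2d} SSDs, 2 controllers: "
          f"{array_iops(2, IOPS_PER_CONTROLLER, ssds, IOPS_PER_SSD):,} IOPS")
```

Past the crossover point, adding media does nothing for throughput; only faster controllers or a scale-out design raise the ceiling.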
Resiliency
Some vendors have taken a different approach, designing flash optimized storage arrays from the ground up. While this can alleviate the bottlenecks associated with legacy storage architectures, it can pose another set of challenges. Often the redesign comes at the expense of the Tier-1 resiliency and data services you rely on, which can be a little like driving that race car without a helmet – everything is fine, until it’s not. Features like hardware and software redundancy, non-disruptive upgrades, transparent active-active failover, and remote synchronous/asynchronous replication are critical to your data center but are not yet always standard offerings with all-flash arrays.
Deploying one of these systems can also mean accepting another separate and distinct storage architecture into your data center, creating an additional storage silo and complicating your data protection strategy. To provide best value, flash storage arrays should integrate with the tools you already use, enabling hypervisor and application owners to control backup and recovery processes directly from their preferred system management consoles. To achieve true data protection, you’ll also need to go beyond snapshots to create fully independent backup volumes that can be restored at the volume-level in the event of disaster.
Flash storage changes the performance balance between servers, network and storage, requiring you to re-think your architecture. Realizing the full benefits from your flash investment requires a balance of the right storage, the right storage architecture, the right data services and features, and the right network solution.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

5:47p
IBM Acquires Gravitant to Boost Hybrid Cloud Capabilities 
This article originally appeared at The WHIR
IBM has acquired cloud software developer Gravitant to improve the management and efficiency of hybrid environments for brokers and enterprise customers, the company announced Tuesday. Gravitant’s technology, which enables integration and management of mixed public and private clouds from multiple suppliers, will be integrated with the IBM Global Technology Services unit, and sold as a SaaS offering.
Gravitant solutions allow computing and software services from different suppliers to be compared by price and capabilities, as well as purchased from a central console. Once purchased, they can also be managed and offered as a service through the same console.
“The reality of enterprise IT is that it is many clouds with many characteristics, whether they be economic, capacity or security,” said Martin Jetter, Senior Vice President, Global Technology Services, IBM. “Gravitant provides an innovative approach to add choice and simplicity to how enterprises can now manage their environments. It will be a key component as we broaden our hybrid cloud services.”
Austin-based Gravitant was founded in 2004, but it took on its current form, the one that attracted IBM, in 2011 with the release of cloudMatrix. The company also has two development sites in India. Financial details of the agreement were not disclosed.
Cloud brokerage, management, and visibility products have become hot commodities as enterprises embrace some form or other of hybrid cloud. Recently Cloud Cruiser and VMware have expanded their portfolios to provide upgraded hybrid management and visibility. Red Hat acquired Ansible in October to grow its hybrid management capabilities.
Experts expect the market to grow rapidly, with IDC predicting the cloud systems management software market will hit $8.3 billion by 2019, and Gartner calling for the cloud brokerage enablement market to exceed $2 billion by 2018.
This first ran at http://www.thewhir.com/web-hosting-news/ibm-acquires-gravitant-to-boost-hybrid-cloud-capabilities

6:52p
Juniper Opens Data Center Network OS

After launching a line of data center network switches that can run other companies’ operating systems, Juniper Networks has opened its data center network operating system Junos so other companies’ software can run on top of it. Initially, this capability is only available on a new line of Juniper hardware switches that run the open version of Junos.
Both the open version of Junos and Juniper’s new QFX5200 access switches, which support 25/50 Gigabit Ethernet, can be bought together or separately, the company announced Tuesday. When bought together, however, they enable deployment of third-party network services or applications directly on the Juniper platform.
They also enable users to write their own software directly to the platform using the software model defined by the Open Compute Project, the Facebook-led open source data center and hardware design initiative.
Major networking technology vendors opening up their stacks for greater interoperability is a recent trend. The traditional model for top vendors like Juniper, Cisco, HP, and Dell has been to sell tightly integrated data center networking solutions that included hardware and software and were completely closed and proprietary. In other words, you get the functionality and interoperability the vendor provides; nothing less, nothing more.
But web-scale data center operators like Google and Facebook found that off-the-shelf networking gear didn’t exactly provide what they needed. It was expensive, impossible to customize, and had functionality they did not need. Google was a pioneer among data center operators that started designing their own switch hardware and sourcing it directly from design manufacturers in Asia – often the same ones that built off-the-shelf systems for the incumbent vendors.
Today, it’s common for the largest web-scale data center operators to design and source their own hardware when products available off-the-shelf don’t fit their functionality or price requirements.
But there is also a growing market for supplying solutions with similar benefits to companies that operate data center infrastructure that’s large and distributed but isn’t quite at Google’s scale. These companies don’t have the resources to develop custom networking technology in-house but would like to take advantage of the price and feature flexibility the web-scale model promises. A new group of vendors has formed, including both big incumbents and startups, to go after these customers, such as smaller cloud service providers and telcos.
A lot of demand for greater openness and interoperability in data center networking is also driven by the advent of Software Defined Networking and Network Function Virtualization.
Data center networking software startups like Cumulus Networks and Big Switch Networks have brought Linux-based network operating systems to market. HP launched an open source network OS earlier this year.
Incumbent networking hardware vendors, including Juniper, HP, and Dell, have introduced hardware data center switches that can be used with third-party operating systems. Cisco, which has the biggest market share in data center networking by far, has not done so.

8:19p
Top Data Center Providers Strike Submarine Cable Deals

There is a resurgence of submarine cable construction, and two of the world’s largest data center service providers, Equinix and Digital Realty Trust, are positioning to benefit from the new intercontinental cable systems that are being built.
Earlier this year, a partner of Irish submarine cable operator AquaComms started laying one of the first trans-Atlantic cables in more than a decade in response to growing demand for intercontinental bandwidth. The system, called America Europe Connect, or AEConnect, will span 5,400 kilometers between the west coast of Ireland and Long Island.
Equinix announced a deal to connect its data centers to the new cable system in August. Digital announced its own deal with AquaComms this week.
“A resurgence of subsea cable projects is … creating opportunity for Equinix, helping service providers accelerate returns by terminating cables directly into our IBXes,” Equinix CEO Stephen Smith said on the company’s third-quarter earnings call late last month, referring to the company’s International Business Exchange data centers. “Equinix is working with AquaComms, who is deploying one of the first trans-Atlantic subsea cables in more than a decade to meet increased bandwidth needs for global businesses.”
Both Equinix and Digital also have agreements to connect data centers to a submarine cable system called Faster, which will reach across the Pacific Ocean, connecting the US West Coast with Asia.
Equinix announced an agreement with the operator of Faster in September. Digital will tap into the system through Telx, the data center operator it recently acquired. Telx announced in July that it will plug one of its West Coast data centers into the Faster system.
The use of cloud services and consumption of online content is growing, and more and more companies provide content and services internationally, which is driving demand for connectivity bandwidth between continents. Multitenant data center providers like Equinix and Digital make their facilities more attractive for prospective customers by positioning them as access points to the new network capacity.
Equinix data centers in New York and London will provide access to AEConnect, while its Silicon Valley, Seattle, and Los Angeles facilities will plug into the Faster system.
Digital plans to connect its facilities in major New York carrier hotels – 32 Avenue of the Americas, 111 8th Avenue, and 60 Hudson Street – to AEConnect. The Telx agreement with Faster was to connect the cable system to the data center operator’s Hillsboro, Oregon, facility.
Telecommunications companies as well as web giants are often behind these massive submarine cable construction projects. Investors in the Faster system include Asian telecoms and an IT services firm, as well as Google. Microsoft is the first “foundational” customer of AEConnect, which will connect directly to its data centers in Ireland. This is one of several submarine cable projects Microsoft is involved in.
Both new systems will support the 100 Gbps connectivity necessary for the new breed of bandwidth-hungry applications.
“Out of approximately 230 subsea cables across the globe, very few are currently equipped to fully support coherent technology with 100 Gbps capabilities,” Dave Crowley, managing director of global network procurement for Microsoft, said in a statement. “With bandwidth-hungry applications on the rise, we want to ensure our customers are getting the capacity across the Atlantic that they need.”

10:27p
Microsoft Brings Open Source PaaS Cloud Foundry to Azure 
This article originally appeared at The WHIR
Among the announcements at Cloud Foundry Summit Europe in Berlin this week, Microsoft made Cloud Foundry on Azure generally available, and Cloud Foundry added native support for Docker and .NET.
Cloud Foundry is a Platform-as-a-Service (PaaS) software technology, which is a popular category for larger organizations. Penetration of PaaS among midmarket and enterprise organizations has reached 35 percent, and a further 20 percent are in the process of evaluating PaaS, according to a poll by the Enterprise Strategy Group (subscription required).
According to Microsoft Azure senior program manager Ning Kuang, “Cloud Foundry makes it faster and easier to build, test, deploy, and scale cloud applications from different languages. With today’s announcement, developers can enjoy a consistent Cloud Foundry experience in Azure, and a simplified provisioning workflow by leveraging Azure Resource Manager templates.”
Microsoft released the first Cloud Foundry on Azure preview in June, followed by a second preview in September 2015 that allowed customers to deploy a standard Cloud Foundry infrastructure on Azure using Bosh-Init and to use the latest Azure resource management framework to support multiple Cloud Foundry VMs.
Projects Pave Way for Docker Images, .NET Apps
Three initiatives the Cloud Foundry Foundation had been incubating have been promoted to active projects: Diego, Garden, and Greenhouse.
Next-generation runtime project Diego provides a more flexible architecture at the runtime layer, allowing Cloud Foundry to natively support Docker container images and .NET applications. The Garden project addresses Cloud Foundry’s container orchestration layer and uses the Open Container Initiative’s runC implementation. The Greenhouse project enables support for .NET within the platform.
Support for .NET means organizations now have the flexibility to run a mix of Linux and Windows applications on Cloud Foundry. Docker support allows developers to choose between pushing code and pushing container images into the platform, and provides consistency in how applications are managed and scaled.
Cloud Foundry also announced this week that the Cloud Foundry Foundation now has more than 50 members, reaching this milestone with the addition of CA Technologies, Cisco, Citi, Hitachi Ltd., RBC and SUSE.
Hewlett Packard Enterprise, which bought the Cloud Foundry-based Platform-as-a-Service Stackato months ago, launched its Helion Stackato PaaS on Cloud Foundry v2 as an IaaS-agnostic solution.
Also this week, Cloud Foundry revealed a new logo. It features a molten liquid pouring out of the letter “O” into a gear that also forms the shape of a lightbulb. A Cloud Foundry blog post explains that the new logo represents open source’s “free flow of ideas,” combined with continuous innovation, creation and craftsmanship, as well as thought and innovation.
This first ran at http://www.thewhir.com/web-hosting-news/cloud-foundry-adds-native-docker-and-net-support-microsoft-launches-cloud-foundry-on-azure