Data Center Knowledge | News and analysis for the data center industry
Monday, May 9th, 2016
CyrusOne Plans Huge Expansion at CME Data Center Campus in Chicago
Seeing an influx of inquiries from financial services firms about colocation at the CME Group data center outside of Chicago, which CyrusOne acquired in a sale-leaseback transaction announced in March, the data center provider plans to build a 500,000-square-foot building on the property to expand capacity. CyrusOne executives announced the plan on the company’s first-quarter earnings call on May 5.
The data center REIT, in partnership with CME, which operates some of the largest derivatives and futures exchanges in Chicago and New York, intends to nurture and grow a customer ecosystem that will be colocated at the Aurora, Illinois, campus. CyrusOne gains more tenants and sells more power, while CME benefits from more trading activity generated by both traditional financial firms as well as companies in the vibrant fin-tech startup space.
CyrusOne bought the CME data center for $130 million. CME, which was formed after the Chicago Mercantile Exchange acquired the Chicago Board of Trade 10 years ago, hosts its electronic trading platform CME Globex at the data center. CME agreed to a 15-year lease for 72,000 square feet of data center space as part of the deal with CyrusOne and will continue to host Globex there.
Huge Chicago Expansion Plans
CyrusOne currently has just 36,000 square feet of colocation capacity “available to sell” in the existing CME facility, the data center provider’s CEO Gary Wojtaszek said on the call. The wording implies there may be significant expansion space in the existing building that CyrusOne isn’t currently able to market. He also said a new 1MW customer had made a verbal commitment to take down a portion of the available capacity.
The new 500,000-square-foot building will be constructed on the 15 acres of expansion land included in the deal. Company execs did not say when they planned to commence construction.
The partnership with CME “will accelerate the goal of creating the largest financial super center in Chicago, becoming the nexus for financial, energy, social media, and cloud companies,” CyrusOne said in a statement. Wojtaszek pointed out on the call that “…most of the largest hedge funds in the world are now customers, including all the large commercial banks and investment banks.”

Source: CyrusOne – Investor Day 2016
Chicago Needed Supply
Chicago has trailed red-hot data center markets like Northern Virginia, Silicon Valley, Dallas, and Portland/Seattle, mainly due to a lack of purpose-built data center halls available for lease.
It appears that 2016 will be an inflection point for that market. In addition to the CME facility, QTS Realty, DuPont Fabros Technology, and Digital Realty all have plans to deliver either new buildings or large megawatt-scale data halls during the second half of the year.
Read more: QTS to Launch Huge Chicago Data Center in July
The CyrusOne announcement reinforces the expectation that Chicago will now be in the hunt moving forward for large local and regional deployments.
Multilayered Financial Services Strategy
CyrusOne’s financial services strategy began to unfold publicly during 2015, but the cadence has picked up significantly during the last two quarters.
Last year, CyrusOne closed its $400 million acquisition of Cervalis, a colocation and disaster recovery data center provider with extensive operations in the New York market. Wall Street firms were key customers of the company, representing two-thirds of its revenue.
During the fourth-quarter 2015 earnings call, Wojtaszek also announced that CyrusOne planned to expand into Northern New Jersey, one of the biggest data center markets for financial services.
Read More: CyrusOne Reports Record 2015, Plans Big New Jersey Expansion
Historically, New Jersey has served New York financial firms, but leasing velocity has slowed considerably during the past few years, according to real estate market analysts. It remains to be seen whether CME and Cervalis customers will be driving demand for CyrusOne’s New Jersey expansion as well.
CyrusOne Enterprise DNA
While the large CME deal garnered all the attention, it only represented 40 percent of CyrusOne’s leasing tally for the first quarter. The company also boasted wins in gaming, financial services, energy, and IT services.
It added three new Fortune 1000 logos to the customer roster during the quarter. Notably, about 70 percent of CyrusOne revenue already comes from Fortune 1000 customers, a leg-up in a market segment virtually all data center providers are after today.
Most data center REITs initially focused on building networks and cloud density, hosting and IT services, and serving financial services firms, which were early adopters of third-party data center services. Lately, however, these competitors have begun to focus on serving the enterprise vertical, as CIOs and CTOs become increasingly receptive to outsourcing data center capacity and setting up hybrid cloud infrastructure.
CyrusOne has been calling on enterprise customers for years and continues to reap the rewards that come from being a first-mover in a market segment with a notoriously long sales cycle.
Record Backlog
CyrusOne’s record Q1 results continued to drive shares higher, rewarding investors with 32 percent gains year-to-date.
During its fourth-quarter 2015 earnings release, CyrusOne announced record leasing activity with a $42 million booked-not-billed backlog creating momentum going into 2016.
CyrusOne signed 30MW of new contracts during the last three months of 2015. During the first quarter of this year the company signed 25MW, with non-CME signings representing approximately 60 percent of the total.
The company has now grown the backlog to $74 million in annualized revenue and confirmed on the call that a large percentage of these revenues would commence in either the second or third quarter.
From an overall industry perspective, it is significant that the CyrusOne sales funnel remains just as strong as it was last quarter despite the large bookings during the past two quarters.
Its Fortune 1000 enterprise-first focus continues to pay huge dividends for shareholders. On May 6, CyrusOne shares closed up over 3 percent, recording another all-time high of $48.97 per share.
OpenStack VDI: The What, the Why, and the How
Karen Gondoly is CEO of Leostream.
Moving desktops out from under users’ desks and into the data center is no longer a groundbreaking concept. Virtual Desktop Infrastructure (VDI) and its cousin, Desktops-as-a-Service (DaaS), have been around for quite some time and are employed to enable mobility, centralize resources, and secure data.
For as long as VDI has been around, so have industry old-timers VMware and Citrix — the two big players in the virtual desktop space. But, as Bob Dylan would say, the times, they are a-changing.
OpenStack has been climbing up through the ranks, and this newcomer is poised for a slice of the VDI pie. If you’re looking for an alternative to running desktops on dedicated hardware in the data center, open source software may be the name of the game.
What is OpenStack?
OpenStack, an open source cloud operating system and community founded by Rackspace and NASA, has graduated from a platform used solely by DevOps to an important solution for managing entire enterprise-grade data centers. By moving your virtual desktop infrastructure (VDI) workloads into your OpenStack cloud, you can eliminate expensive, legacy VDI stacks and provide cloud-based, on-demand desktops to users across your organization. Consisting of over ten different projects, OpenStack hits on several of the major must-haves to deliver VDI and/or Desktops-as-a-Service (DaaS), including networking, storage, compute, multi-tenancy, and cost control.
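To make that concrete, here is a minimal sketch using the openstacksdk Python library; the library choice and the cloud name "vdi-cloud" (which would live in a clouds.yaml file) are illustrative assumptions, not something the article prescribes. It touches the compute, networking, and storage services called out above.

```python
# Minimal sketch: enumerating the OpenStack services that matter for VDI.
# Assumes the openstacksdk library and a cloud named "vdi-cloud" in clouds.yaml.
import openstack

conn = openstack.connect(cloud="vdi-cloud")  # hypothetical cloud name

# Compute: the hypervisor-backed instances that will host desktops
for server in conn.compute.servers():
    print("instance:", server.name, server.status)

# Networking: per-project networks keep desktop traffic isolated
for network in conn.network.networks():
    print("network:", network.name)

# Storage: block volumes that can back persistent user desktops
for volume in conn.block_storage.volumes():
    print("volume:", volume.name, volume.size, "GB")
```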
Why VDI and Why OpenStack?
Generally speaking, the benefits of moving users’ desktops into the data center as part of a virtual desktop infrastructure are well documented: your IT staff can patch and manage desktops more efficiently; your data is secure in the data center, instead of on the users’ clients; and your users can access their desktop from anywhere and from any device, supporting a bring-your-own-device initiative.
Many organizations considered moving their workforce to VDI, only to find that the hurdles of doing so outweighed the benefits. The existing, legacy VDI stacks are expensive and complicated, placing VDI out of reach for all but the largest, most tech-savvy companies.
By leveraging an OpenStack cloud for VDI, an organization reaps the benefits of VDI at a much lower cost. And, by wrapping VDI into the organization’s complete cloud strategy, IT manages a single OpenStack environment across the entire data center, instead of maintaining separate stacks and working with multiple vendors.
How to Leverage OpenStack Clouds for Virtual Desktops
To be clear, “simplification” is not one of the benefits of building OpenStack VDI and DaaS. If you’re not an OpenStack expert, you may want to partner with someone who is. Companies like SUSE, Mirantis, Canonical, and Cisco Metapod can help ease your migration to the cloud. Keep in mind that your hosted desktop environment will need to be resistant to failure and flexible enough to meet individual user needs.
So, if you’re really serious about VDI/DaaS, then you’ll need to leverage a hypervisor, display protocol, and a connection broker. A recent blueprint dives into the details of the solution components and several important usability factors.
Here’s the Reader’s Digest version:
- Hypervisor: A hypervisor allows you to host several virtual machines on a single physical server. KVM is noted in the OpenStack documentation as the most highly tested and supported hypervisor for OpenStack. The feature sets provided by any of the major hypervisors are adequate for managing VDI or DaaS.
- Display Protocol: A display protocol provides end users with a graphical interface to view a desktop that resides in the data center or cloud. Popular options include Teradici PCoIP, HP RGS, and Microsoft RDP.
- Connection Broker: A connection broker focuses on desktop provisioning and connection management, and provides the interface your end users log in to. The key in choosing a connection broker is to ensure that it integrates with the OpenStack API, which lets you inventory instances in OpenStack (these instances are your desktops), provision new instances from existing images, and assign correct IP addresses to instances. A minimal sketch of these API calls follows this list.
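For illustration, here is a hedged sketch of those broker-style API tasks, inventorying desktops, provisioning one from a master image, and attaching an IP, again using the openstacksdk Python library. The image, flavor, and network names are hypothetical stand-ins.

```python
# Sketch of the three broker tasks against the OpenStack API (openstacksdk).
# All resource names below are hypothetical examples.
import openstack

conn = openstack.connect(cloud="vdi-cloud")  # hypothetical cloud name

# 1. Inventory: every instance in the project is a candidate desktop
desktops = list(conn.compute.servers())
print("existing desktops:", [d.name for d in desktops])

# 2. Provision: boot a new desktop from the master image
image = conn.image.find_image("desktop-master")      # hypothetical image
flavor = conn.compute.find_flavor("m1.medium")       # hypothetical flavor
network = conn.network.find_network("desktops-net")  # hypothetical network

server = conn.compute.create_server(
    name="desktop-user42",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)

# 3. Address: attach a floating IP so the display protocol can reach it
ip = conn.create_floating_ip(network="public", server=server)  # assumed external net
print(server.name, "reachable at", ip.floating_ip_address)
```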
How do you bring everything together? The process can be summarized into four basic steps.
- First, you’ll want to determine the architecture for your OpenStack Cloud. As mentioned, there are a number of solid experts that can help you with this step, if you’re not an expert yourself.
- Then, as you onboard new groups of users, make sure to place each in its own OpenStack project, which means defining the project and the network (see the project setup sketch after this list).
- Next, you’ll want to build a master desktop and an image of it, which can be used to streamline the provisioning of desktops to users. At this stage, you’ll want to explore display protocols and select a solution that delivers the performance your end users need.
- The final step is to configure your connection broker to manage the day-to-day activities.
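As a rough sketch of the second step above, defining a project and its network for each onboarded group, the openstacksdk calls might look like the following; the group name, address range, and admin-scoped connection are all assumptions for illustration.

```python
# Sketch: one OpenStack project plus network per onboarded user group.
# Assumes an admin-scoped connection; names and CIDR are hypothetical.
import openstack

conn = openstack.connect(cloud="vdi-cloud")  # hypothetical cloud name

# Define the project that isolates this group of users
project = conn.identity.create_project(
    name="finance-desktops",  # hypothetical group name
    description="VDI project for the finance team",
)

# Define the group's private network and a subnet for its desktops
network = conn.network.create_network(
    name="finance-desktops-net",
    project_id=project.id,
)
subnet = conn.network.create_subnet(
    name="finance-desktops-subnet",
    network_id=network.id,
    ip_version=4,
    cidr="10.20.0.0/24",  # hypothetical address range
)
print(project.name, network.name, subnet.cidr)
```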
Conclusion and Takeaways
When it comes to leveraging OpenStack clouds to host desktops, there’s a lot to think about and several moving parts. For those looking outside the box of traditional virtualization platforms, OpenStack may be your golden ticket. Key to delivering desktops is choosing an adequate display protocol and connection broker.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Megaport Brings Azure to EdgeConneX’s Portland Data Center
EdgeConneX, a data center provider that specializes in edge data centers, has partnered with Megaport, an Australian connectivity services provider that uses its proprietary software platform to automate interconnection between companies in colocation data centers.
Megaport will set up shop in EdgeConneX’s data center in the Portland, Oregon, market to offer the facility’s customers connectivity to a multitude of service providers, including Microsoft Azure. That’s a key addition for EdgeConneX, which is pursuing a strategy of providing direct network links to the big cloud providers in edge data center markets where such access isn’t already available.
EdgeConneX announced availability of Amazon Web Services Direct Connect at its Portland data center last week. The facility became the first physical location in that market from where companies could get a direct private network link to Amazon’s cloud. Access to Azure via Megaport is another piece of the puzzle.
Read more: EdgeConneX Brings the Edge of Amazon’s Cloud to Portland
Over the last two years, EdgeConneX quickly built out a substantial data center fleet across secondary US markets, positioning itself as an edge data center provider in the sense that it offered data center capacity and network connectivity for caching online content in places that were otherwise served from remote hubs in core internet markets. The company is now focusing on extending the “edge” of the public clouds.
Before AWS Direct Connect became available in Portland, for example, the closest places where customers in Portland could access the service were Seattle, Las Vegas, San Francisco, and Los Angeles. West Coast availability of Azure ExpressRoute, a similar service that offers direct links to Microsoft’s cloud that bypass the public internet, has been limited to the same locations.
The focus on cloud is “the next logical step” in the development of EdgeConneX, Phil Lawson-Shanks, the company’s chief architect and VP of innovation, said. “The core is well served for the cloud,” he said, referring to the core internet markets like Silicon Valley and Northern Virginia, which have huge clusters of data centers operated by the cloud providers themselves and colocation companies, like Equinix or CoreSite, that offer private connectivity to those data centers.
EdgeConneX leadership believes we’re seeing the start of a move by either the cloud providers themselves or service providers like Megaport to bring that direct private network access to public clouds to metros that aren’t being served by the likes of Equinix and CoreSite.
The Megaport partnership makes the EdgeConneX facility in Portland an access point to much more than Azure, however. Using its SDN platform, Megaport can quickly provision connections to any of the 250 service providers and enterprise customers on its network, including, besides Microsoft Azure and Office 365, AWS, Rackspace, and CloudFlare.
Location-wise, Megaport has extensive presence in data centers around North America and Asia Pacific.
Read more: How Edge Data Center Providers are Changing the Internet’s Geography | 5:18p |
After Break, Internet Giants Resume Data Center Spending: Gadfly
(Bloomberg) — The Web’s biggest spenders have mostly started splurging again. I wrote six months ago that Google, Microsoft and Amazon — among the world’s biggest operators of Internet technologies and cloud-computing services — had slowed or reversed their perky growth in spending on huge server farms and other capital projects. I didn’t expect the breather to last — and for the most part, it hasn’t.
Collectively, capital spending by those three giants and Facebook has risen 23 percent so far this year after a collective decline of 0.9 percent in 2015. Only Google’s parent company Alphabet has bucked the capex-growth rebound.
The tech superpowers don’t disclose exactly where their capital expenditures go, but it is clear that buying and maintaining their gigantic computing networks represent a big chunk of the overall bill. It costs a ton to run seven Web services with at least a billion monthly users, as Google does, or to power other companies’ computer networks, as Amazon Web Services does.
Alphabet, Microsoft, Amazon and Facebook spent a combined $23 billion in 2015 on capital projects. During an investment flurry from 2011 to last year, the companies’ combined capex nearly tripled.
Capital spending has become essential fuel in the Internet superpowers’ war for consumers and for companies that rent computing horsepower. Even a fraction of a second delay in pushing out an ad on mobile phones, or in running a holiday retail sales forecast, is potentially lost business for the Web giants. That makes investments in computing networks an important area to watch in the jockeying for tech superiority.
Combined Increase from 2011 to 2015 in the Big Four’s Capex: 180%
Amazon’s financial disclosures say a big chunk of the company’s increased capital spending — up 35 percent in the first quarter from a year earlier — is for AWS, the fastest growing and most profitable part of the company.
Microsoft has said it plans to spend more on capital projects, particularly for its expanding cloud-computing businesses including one that competes with Amazon. Last month, Microsoft’s chief financial officer told analysts the company’s 66 percent capex increase in the three months ended March 31 was due primarily to spending on data centers and computer servers.
Read more: Microsoft Ramps Up Cloud Data Center Spend
Yet the biggest of the Big Four, Alphabet, has retained its stingier ways from 2015. Capital expenditures fell 17 percent in the first three months of this year compared with the same period a year earlier, following a 10 percent decline in 2015.
Alphabet’s capex decline largely reflects the work of Ruth Porat, the CFO imported from Wall Street last year. Porat essentially promised she would keep a lid on spending growth in at least some areas. In a conference call last month, Porat said that Alphabet still considered its sophisticated computing networks to be key assets but that the tech minions had figured out ways to squeeze more out of existing computing resources to “meet our growing Google requirements cost effectively.” That’s all music to investors’ ears.
Read more: Google to Build and Lease Data Centers in Big Cloud Expansion
Parsimony at Alphabet is all relative. The company’s $9.9 billion in capital expenditures for 2015 nearly matched the combined capex spending of Microsoft and Amazon. And Alphabet will most likely need to pick up spending for a raft of new computing networks the company pledged to open for its growing cloud-computing operation that also competes with Amazon.
I added Facebook into the calculation because the company has made the jump into the capex big leagues. The company said last week that it expected this year’s bill for data center equipment and related costs to come in at the high end of a previous forecast of $4 billion to $4.5 billion. That means the 12-year-old company is on track to spend nearly as much on capital expenditures as 22-year-old Amazon did in 2015.
Every year, each dollar invested in computing networks goes a little further. But the Big Four all want to get even bigger in cloud computing, mobile advertising or data-hogging Web services such as live Web video. And that makes it inevitable that the capital spending bills will stay as outsized as the ambitions of the Web superpowers.
This column does not necessarily reflect the opinion of Bloomberg LP and its owners.
Dutch Data Center Group Says Draft Privacy Shield Weak
An alliance of data center providers and data center equipment vendors in the Netherlands, whose members include some of the world’s biggest data center companies, has come out against the current draft of Privacy Shield, the set of rules proposed by the European Commission as a replacement for Safe Harbor, the legal framework that governed data transfer between the US and Europe until the European Court of Justice annulled it last year.
The Dutch Datacenter Association issued a statement Monday saying Privacy Shield “currently offers none of the improvements necessary to better safeguard the privacy of European citizens.”
The list of nearly 30 association participants includes Equinix and Digital Realty, two of the world’s largest data center providers, as well as European data center sector heavyweights Colt, based in London, and Interxion, a Dutch company headquartered just outside of Amsterdam.
In issuing the statement, the association sided with the Article 29 Working Party, a regulatory group that consists of data protection officials from all EU member states. Article 29 doesn’t create or enforce laws, but data regulators in EU countries base their laws on its opinions, according to the Guardian.
Related: Safe Harbor Ruling Leaves Data Center Operators in Ambiguity
In April, the Working Party said it had “considerable reservations about certain provisions” in the draft Privacy Shield agreement. One of the reservations was that the proposed rules did not provide adequate privacy protections for European data. Another was that Privacy Shield wouldn’t fully protect Europeans from mass surveillance by US secret services, such as the kind of surveillance the US National Security Agency has been conducting according to documents leaked by former NSA contractor Edward Snowden.
Amsterdam is one of the world’s biggest and most vibrant data center and network connectivity markets. Additionally, there are several smaller but active local data center markets in the Netherlands, such as Eindhoven, Groningen, and Rotterdam.
There are about 200 multi-tenant data centers in the country, according to a 2015 report by the Dutch Datacenter Association. Together, they house about 250,000 square meters of data center space.
The association has support from a US partner, called the Internet Infrastructure Coalition, which it referred to as its “sister organization.” David Snead, president of the I2Coalition, said his organization understood the concerns raised by Article 29.
“We believe that many of the concerns raised by the Working Party can be resolved with further discussions,” he said in a statement.
IT Innovators: Entrepreneur Sets Out to Redefine Cloud Management
By WindowsITPro
Today, many Fortune 500 companies are adopting a hybrid cloud approach that uses a patchwork of on-premises private cloud and third-party public cloud services, allowing workloads to move between clouds to meet evolving computing demands and cost expectations. In turn, these companies benefit from greater flexibility and more data deployment options.
However, Tom Gillis, founder of startup Bracket Computing, quickly realized that this approach, with server hardware, software applications, storage capacity, and networking services spread across data centers and multiple service providers, invites operational complexity and introduces opportunities for error. Gillis decided there was an unmet need for a new virtualization technology, one that could secure multiple cloud environments by creating a container for infrastructure, so that an enterprise could move data out to the public cloud while still maintaining the control it wanted.
On his mission to create a virtualization technology that could provide one set of infrastructure across multiple clouds, Gillis met a technical challenge: running a hypervisor on top of the cloud provider’s hypervisor was incredibly slow, cutting performance in half. Overcoming it took a lot of trial and error, fine-tuning, and tweaking to get the technology, Bracket Computing Cell, to a point that Gillis refers to as “lightning fast.”
Gillis says his most valuable strategy throughout this mission to transform how data centers are built was putting patience and extensive testing of the solution at the forefront of his priorities. In hindsight, Gillis says, this is a tactic any IT professional or entrepreneur can benefit from.
“We spent three years developing our technology in stealth mode, working very closely with a handful of very large customers that believed in our architecture and vision, and helped us test and reiterate the technology over and over again,” Gillis says. “Four years later, our product looks very different from its original design, but we now finally feel ready for more broad deployment and production,” he adds.
Another challenge Gillis encountered along the way was attracting investor interest to support his big ideas. He decided to devote time to gaining a deep understanding of his own business horizon, and then sought investors who shared that horizon.
Gillis is confident that his virtualization technology could benefit IT professionals by providing one single set of security policies that will expand across private and public clouds, in part since encryption technology is deployed within the Computing Cell. This solution, while providing a single, virtual infrastructure with consistent controls, will not only minimize exposure outside of the enterprise, but can also free up IT departments to focus on innovating and taking on other challenging issues.
“It has most certainly been a work in progress,” Gillis says of the Computing Cell, but adds that it’s very rewarding to know that broader deployment is just around the corner.
Renee Morad is a freelance writer and editor based in New Jersey. Her work has appeared in The New York Times, Discovery News, Business Insider, Ozy.com, NPR, MainStreet.com, and other outlets. If you have a story you would like profiled, contact her at renee.morad@gmail.com.
The IT Innovators series of articles is underwritten by Microsoft, and is editorially independent.
This first ran at http://windowsitpro.com/it-innovators/it-innovators-entrepreneur-sets-out-redefine-cloud-management