Data Center Knowledge | News and analysis for the data center industry
Thursday, August 4th, 2016
Equinix: Our Customers Are Pushing Out Our Global Footprint
It’s good news, or at least it should be, for the global leader in the colocation space: Equinix’ revenue for the last quarter came in a little better than 3 percent above analysts’ expectations, and somewhat above its own guidance from the previous quarter. A good chunk of that gain, the company acknowledged during its quarterly analyst call Wednesday afternoon, came from wrapping up its acquisitions of Telecity and Bit-isle.
But as Equinix executives explained to analysts, these acquisitions haven’t been mere opportunities that presented themselves out of nowhere, like four-leaf clovers in an unplowed field. Enterprise demand for cloud capacity in hybrid installations is continuing to grow, and as Equinix works to meet that demand, its executives sounded at times like they needed to stop for a moment and catch their breath.
“I think we saw a bit of a land grab going on last year, as the top (cloud service providers) really tried to quickly get into what they saw as the critical baseline set of markets for their services,” explained Equinix COO Charles Meyers. “And I think that slowed down a little bit, but now we’re seeing behavior in terms of people adding incremental services, beginning to scale their revenue lines. Obviously that’s evident in the results of the likes of (Amazon) AWS and Microsoft, and we think that’s (incenting) other CSPs to have an interest in expanding their platforms as well.”
Specifically, the Equinix C-suite believes that the need for capacity drove CSPs to acquire data center space in 2015, first in established markets, followed later by emerging markets. Now they have stakes in the ground, so for these CSPs, 2016 is about establishing a pace for growth. How much do they want to scale their capacities up?
That pace is being determined for them, it turns out, by the CSPs’ own customer bases, which are moving to service models that are more elastically scalable. While these end customers are no longer compelled to acquire and provision more capacity than they think they need — which used to be the case in the pre-cloud era — they may be consuming more capacity than the CSPs thought they would.
“They’re continuing to deploy in the big markets, and we benefit from that because we’re in the big markets,” explained Equinix CEO Stephen Smith. “But we’re also seeing them go into emerging markets now, at a pretty high clip, trying to extend their platform all over the world. And sometimes we’re there, and sometimes we aren’t.”
Smith went on to say that his company is clearly experiencing the phenomenon to which it’s most sensitive, from a monetization standpoint: pipelining, and all the cross-connects that come with it. This is where the cash register bells start clanging for Equinix. Customers are moving off of all-public cloud deployments and bringing assets back within their firewalls. So, as Equinix executives admit, demand for cross-connects from customers pursuing multi-cloud is exceeding their expectations.
“They’re pushing beyond even the footprint that we have around the world, into emerging markets,” said the leader of the provider with the biggest footprint there is. “So at some point, that will pull us into future, emerging markets. That kind of demand is what does that — a big customer driver. We watch that very closely, too.”
As CFO Keith Taylor explained, in terms of gross and net bookings, his company is delivering very much what was expected. But the complexity of hybrid cloud deployments for customers creates a backlog, which results in Equinix not always being able to bill for those bookings when it expects to.
“As we think about complexity of the global hybrid cloud implementation,” he said, “they’re extending the book-to-bill interval.” Some $4 million of delayed revenue is attributable to this trend, he said.
“We’re driving our bookings engine as we expect; and with momentum that we think will continue to come from the channel, that gives us perhaps a greater opportunity as we look forward. All that said, you can see our utilization levels moving up. . . there’s eight new projects coming on line this year, and there’s 19 that are in the hopper. It’s important for us to build out our expansion initiatives, so we can continue to sell at that same clip, with the same set of opportunities.”
For its fiscal second quarter, Equinix reported revenue of $900.5 million, a gain of better than 6 percent over the previous quarter, and a nice 35.3 percent gain over the year-ago second quarter. Adjusted funds from operations (AFFO) rose by 38.4 percent over the prior quarter to $290.5 million, a 31.2 percent gain over the year-ago quarter. Some $485 million of revenue was attributable to stabilized Equinix International Business Exchange (IBX) data centers, with $250 million from expansion IBXes, and $15 million from new IBXes.
The completion of Equinix’ acquisition of Telecity and Bit-isle, said CFO Taylor, removed much of what he called the “noise” from the previous earnings pattern, letting the company concentrate more this time on growth and profitability. Equinix’ acquisition of the St. Denis, Paris-based data center from Digital Realty — from whom Equinix was leasing capacity — plus one other Paris facility was a $211 million transaction that did factor into Wednesday’s figures.
“There’s a great opportunity as a company as we acquire assets; we can operate them differently than the landlord,” Taylor remarked, with respect to the Paris acquisition. “In this case, there’s an opportunity as we think about the revenue we can derive from the incremental customers inside those data centers.” Specifically, there are operating costs that can be recorded on the balance sheet as depreciation. “So we get the benefit attributed to revenue, but we remove the costs associated with how we treated those assets.”
It’s the type of rethinking that executives have to do about the business they run, as their customers rethink their business models around the information assets they own and operate.
How Vantage Data Centers ‘Created Land’ For a 51 MW Santa Clara Expansion Campus
Vantage Data Centers continues to find creative ways to expand its already sizable wholesale data center footprint in Santa Clara, California.
On Thursday, Vantage announced it had “secured a nine-acre site located approximately two miles from its existing Santa Clara campus. Designs for the new campus feature four separate data center buildings and a dedicated substation, with construction starting on the site in 2017. Capacity will be delivered in phases, beginning in 2018.”
This announcement sounds pretty much business as usual, but in reality there is more to this story than meets the eye. While the Bay Area’s Silicon Valley is one of the largest U.S. data center markets, it has some of the highest barriers to entry for new development.
Read more: Silicon Valley – a Landlord’s Data Center Market
Notably, data center REITs CoreSite Realty and DuPont Fabros are currently building out the final phases of their existing Santa Clara campuses. The DuPont Fabros SV1 Phase III is 100 percent leased to one hyperscale cloud provider, and CoreSite’s 230,000 SF SV7 final phase is 59 percent pre-leased.
Read more: CoreSite Realty: Strong Q2 Overshadowed By CEO Tom Ray’s Departure
Santa Clara Challenges
There is little suitable land remaining for development of large data centers, and little sub-lease space available. Sites suitable for data center development must meet an exhaustive list of requirements, which excludes property located in flood plains, flight paths, and railroad corridors. The lack of adequate power or other utilities rules out many of the remaining sites that have the proper zoning and are otherwise entitled for development in this highly desirable region of California.
Sureel Choksi, Vantage president and CEO, told Data Center Knowledge earlier this week that he and his team had evaluated close to 150 sites in order to try and solve this conundrum. In fact, that process alone took over a year to complete, according to Choksi.
The nine-acre site Vantage eventually acquired for future development was actually an assemblage of three parcels of land — all encumbered with existing buildings. Additionally, an agreement has been reached for Silicon Valley Power to build a new 51 MW substation on part of the land for the critical load.
Vantage Has Been Proactive
Earlier this year, Vantage announced it was adding 21 MW of capacity adjacent to the buildings on its existing 51 MW Santa Clara campus, in a four-story facility with one floor for support and office space and three floors of data center halls. The multi-story V5 data center design allows for higher density, which appears to be a new trend in Santa Clara.
Read more: Vantage to Add 21MW in Supply-Starved Silicon Valley Data Center Market
Emerging Trends
Vantage Vice President of Marketing Steve Lim explained that lessons learned from designing the V5 and V6 expansions were crucial to creating the design for the new campus.
In order to build the two-story V6 facility across the street from the existing campus, Vantage had to acquire a one-story office building, which will be demolished. However, the municipality had only recently begun to approve projects higher than two stories. Notably, CoreSite had previously gained approval to construct a four-story data center in close proximity to the existing Vantage campus.
Vantage also needed to get feedback from potential customers regarding their willingness to lease data center halls with concrete slabs rather than traditional raised floors. The trend toward higher density data center designs has led to greater acceptance of multi-story designs. The confluence of these two trends resulted in the four-story V5 data center design.
Solving The Puzzle
Fast-forward to August 4 and the latest Vantage announcement of a new nine-acre campus. After the demolition and utility work, four three-story data center facilities will be constructed in phases, along with generators and mechanical equipment.
CEO Choksi explained that each of the three-story buildings will contain two floors of data halls, with a minimum of 4 MW of contiguous space on each floor.
The three-story data center solution in retrospect might seem obvious. However, it was clearly the result of a multi-year learning curve. Vantage has leveraged its local market knowledge in order to perform the alchemy necessary to “create land” in Santa Clara for a new 51 MW data center campus.
Santa Clara is the key data center market for Vantage. It appears management has used its “home court” advantage to create a runway for future development of wholesale data center projects. Vantage will now be able to provide new deployments of 500 kW and larger in a supply-constrained Silicon Valley well into 2018 and beyond.
Dissecting the Data Center: What Can – and Can’t – Be Moved to the Cloud
Brought to you by AFCOM
According to the results of a recent survey of IT professionals, 43 percent of organizations estimate half or more of their IT infrastructure will be in the cloud in the next three to five years. The race to the cloud is picking up steam, but all too often companies begin implementing hybrid IT environments without first considering which workloads make the most sense for which environments.
The bottom line is your business’s decision to migrate workloads and/or applications to the cloud should not be arbitrary. So how do you decide what goes where?
The best time to consider migrating to the cloud is when it’s time to re-platform an application. You should not need to over-engineer any application or workload to fit the cloud. If it’s not broken, why move it? For the purposes of this piece, let’s assume your organization is in the process of re-platforming a number of applications and you are now deciding whether to take advantage of the cloud for these applications. There are a few primary considerations you should think through to determine if moving to the cloud or remaining on-premises is best.
Evaluating What Belongs on the Ground or in the Cloud
First, ask yourself: Is our application or workload self-contained or does it have multiple dependencies? Something like the company blog would be considered a self-contained workload that can easily be migrated to the cloud. At the other extreme, an in-house CRM, for example, requires connectivity to your ERP system and other co-dependent systems. Moving this workload to the cloud would introduce more risk in terms of latency and things that could go wrong.
You should also identify whether it is a customer-facing workload or one that is primarily accessed via the web. If so, it probably makes more sense to host it in the cloud to ensure your end users experience maximum uptime, performance and availability. Similarly, the cloud is ideally suited for workloads with variable demand. If you find it difficult to accurately predict the amount of traffic an application will receive at any given moment—and by association, its capacity needs—you’ll benefit from moving that workload to the cloud and taking advantage of the cloud’s inherent agility and additional services.
And it may seem obvious, but your organization’s level of experience with governance and security for cloud applications should play an important role in the migration conversation. If this is your organization’s first time working with a public cloud provider and the security terms and conditions of its SLA, you probably don’t want to start out with a core, mission-critical application. An experienced team, on the other hand, should feel comfortable migrating a more complex application and managing the relationship with the cloud provider.
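To make that evaluation concrete, here is a minimal sketch (in Python, purely for illustration) of how the criteria above, dependencies, customer-facing access, variable demand, and team experience, might be turned into a rough placement checklist. The Workload fields, thresholds, and recommendations are invented for this example, not drawn from any vendor's assessment tool.

```python
# Illustrative only: fields, thresholds, and recommendation wording are invented.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    dependency_count: int        # co-dependent systems it must reach (ERP, billing, identity...)
    customer_facing: bool        # primarily accessed via the web?
    variable_demand: bool        # traffic and capacity needs hard to predict?
    team_cloud_experience: bool  # has the team managed governance/SLAs with a public cloud?

def recommend(w: Workload) -> str:
    """Apply the checklist above as a rough placement suggestion."""
    if w.dependency_count >= 3 and not w.team_cloud_experience:
        return "keep on-premises for now (tightly coupled, team still learning cloud governance)"
    score = sum([
        w.dependency_count == 0,  # self-contained
        w.customer_facing,        # uptime/availability benefit
        w.variable_demand,        # elasticity benefit
    ])
    return "good cloud candidate" if score >= 2 else "revisit at the next re-platforming"

print(recommend(Workload("company-blog", 0, True, True, False)))    # good cloud candidate
print(recommend(Workload("in-house-crm", 4, False, False, False)))  # keep on-premises for now
```

In practice the weighting would come from your own cost, risk, and compliance requirements; the point is simply that the decision can be made explicit rather than arbitrary.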
What’s most important to remember is that in today’s on-demand environment, uptime and an acceptable end-user response time are expected no matter where you host applications. Many companies that go to the cloud do so with the expectation that they’re going to save money, but performance, a key enabler of uptime, is closely tied to cost and comes at a premium in the cloud. Without an understanding of each application or workload’s requirements and how they can be met by a specific cloud service provider, you could be in for a surprise when you get the bill.
When It’s Not Time to Migrate
Of course, just because certain applications or other workloads might not be ideally suited for the cloud—or vice versa—today doesn’t mean there isn’t room to reevaluate down the road. For example, it could be that your business needs considerable storage capacity, and at some point in the future Amazon Web Services announces a new, lower-cost storage service that makes a shift to the cloud affordable and meets your organization’s security and durability requirements for its archives.
Generally speaking, though, you should consider a shift only when the requirements for an application change. That could mean you’re running out of space and resources on your physical infrastructure, and rather than investing in additional on-premises hardware it may be more economical to take advantage of the cloud’s scalability.
Alternatively, if you’re currently running an application in the cloud and your provider’s SLA changes to such an extent that it’s cheaper to move back to a physical location or the workload becomes more predictable, it might be time to migrate back on-premises. Some cloud native startups have eventually reached a certain critical mass that makes it more strategic and cost effective to move portions of their infrastructure back to an on-premises (or colocation) model.
Thinking Like an IT Pro in a Hybrid World
Despite these considerations, the reality is that the data center itself is becoming increasingly hybrid, and we as IT professionals must start thinking about management and monitoring practices in a hybrid context. With that in mind, here are some best practices to help you better manage a hybrid IT environment now or in the future.
Have a hybrid cloud mindset: Despite the cloud’s growing role in traditional data center strategy, on-premises IT infrastructure isn’t going anywhere just yet. To bridge these two approaches to IT—traditional IT infrastructure and cloud services—and prepare for the hybrid IT future, take a workload-centric approach and use cloud, on-premises and a mix of these as best suits each workload.
Embrace DevOps: Independent of an application’s architecture and the logo outside the building where one’s servers reside, a modern application mindset is important. The DevOps movement has brought us new practices, tools and processes that benefit development and IT operations in general. Organizations can adopt the core principles and best practices of DevOps, including an end-user and performance orientation, end-to-end visibility and monitoring, and collaboration, to achieve a more agile, available, scalable and efficient data center.
Monitor for the hybrid era: Just as you would establish a unified view across on-premises hardware, where your infrastructure might comprise any number of disparate vendor solutions, IT professionals must implement a monitoring system that gives them a view across the entire hybrid IT environment. Such a system will allow you to make informed decisions about which workloads belong on-premises or in the cloud. You should be able to see, through a single pane of glass and at any moment in time, when an application is slowing down or underperforming, whether in the cloud or on-premises, and compare relative performance between the two environments.
At the end of the day, with so many options for where to host an application – in a container, on a virtual machine or in the cloud – companies expect both performance certainty and cost efficiency. The best way to meet this requirement is with proper monitoring tools that provide an understanding of how your applications change over time and track the actual requirements of each application and its workload.
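As a hedged illustration of the single-pane-of-glass comparison described above, the short Python sketch below pulls the same response-time metric for one application from two environments and flags whichever side is degraded. The application name, sample data, and threshold are placeholders; a real implementation would query your monitoring platform's API instead of hard-coding values.

```python
# Placeholder data: a real setup would pull these samples from your monitoring system.
from statistics import mean

response_times_ms = {
    "orders-app (on-premises)": [120, 135, 128, 510, 122],
    "orders-app (cloud)":       [140, 138, 145, 150, 142],
}

THRESHOLD_MS = 200  # assumed acceptable worst-case response time

for location, samples in response_times_ms.items():
    avg, worst = mean(samples), max(samples)
    status = "OK" if worst < THRESHOLD_MS else "DEGRADED"
    print(f"{location}: avg={avg:.0f}ms, worst={worst}ms -> {status}")
```

Seeing both rows side by side, from one tool, is what makes the relative-performance comparison, and therefore the placement decision, possible at all.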
Build a roadmap: Remember, there is no one right way to adopt elements of cloud computing and introduce hybrid IT into your organization; it’s different for every business, and it is often a multi-year journey. The best thing any IT department considering a move to the cloud can do is build a roadmap. You need to be informed to make smart decisions when it comes to the cloud, even if the decision is to do nothing in the immediate future because there is no immediate need for change.
Making such decisions requires developing specific knowledge, including an understanding of how to get the right visibility with hybrid monitoring tools, building processes for migrating and testing apps, and developing economic and capacity planning models that are independent of specific technologies like virtualization or cloud computing.
The key is to build a cloud adoption roadmap based on a workload-by-workload evaluation that considers requirements, workload variability, potential upside, costs and urgency.
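One possible (and deliberately simplified) way to turn that workload-by-workload evaluation into an ordered roadmap is sketched below: each candidate is scored on the factors just listed and sorted. The workload names, scores, and weighting are arbitrary examples, not a recommended model.

```python
# Arbitrary example workloads and weights; replace with your own evaluation data.
candidates = [
    # (name, variability 0-5, potential upside 0-5, migration cost 0-5, urgency 0-5)
    ("marketing-site",  4, 3, 1, 2),
    ("batch-analytics", 5, 4, 2, 3),
    ("core-erp",        1, 2, 5, 1),
]

def priority(entry):
    name, variability, upside, cost, urgency = entry
    return variability + upside + urgency - cost  # higher score = migrate sooner

for rank, entry in enumerate(sorted(candidates, key=priority, reverse=True), start=1):
    print(f"{rank}. {entry[0]} (score {priority(entry)})")
```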
This post originally appeared at AFCOM.com
Cash-Rich Tech Companies Can’t Help Tapping U.S. Debt Markets
Technology companies from Apple Inc. to Microsoft Corp. have sold more than $100 billion of debt this year even as they sat on more than half a trillion dollars of cash. They did it for two reasons: they wanted to avoid paying taxes, and just because they could.
Bond buyers fleeing negative yields on government debt in Europe and Japan are eager to snap up technology company bonds. The securities are seen as safe while offering higher income than U.S. Treasuries.
“There’s just an insatiable thirst for high-quality investment-grade paper,” said Christian Hoffmann, a Santa Fe, New Mexico-based money manager at Thornburg Investment Management Inc., which oversees about $54 billion in assets.
Tech companies have sold more than $28 billion of bonds in the last week, and so far this year they have already issued about as much as they did in all of 2015. They’re selling more debt even as overall corporate bond issuance this year has fallen about 1 percent to $879.6 billion.
“One of the challenges that the overall market is facing this year is that supply is actually down in an environment where there’s a tremendous amount of demand for corporate credit,” said Ashish Shah, head of fixed income at AllianceBernstein, which manages more than $480 billion of assets.
Companies including Apple and Microsoft sold debt to avoid paying taxes on cash that was earned overseas, which could be taxed at a 35 percent rate if it were brought back to the U.S. Many firms in the industry that sell debt are financing shareholder-friendly moves, like dividends, share buybacks or acquisitions.
Google Sale
In its first trip to the bond market in more than two years, Google sold $2 billion of 10-year notes to repay short-term debt on Tuesday. The debt yielded 0.68 percentage point above Treasuries. That was lower than the 0.8 percentage point discussed initially, said a person familiar with the matter who asked not to be named because the deal is private. The average spread for bonds of similar ratings and maturities is 0.91 percentage point, according to Bank of America Merrill Lynch index data.
Google had about $78 billion of cash on its books as of the end of June, according to data compiled by Bloomberg, of which about $30 billion was in the U.S.
On Monday, Microsoft sold $19.75 billion of bonds to help finance its planned acquisition of LinkedIn Corp. Last week, Apple issued $7 billion of bonds to fund dividend payments and share buybacks. Investors piled into both deals, allowing the companies to cut the interest they pay on the securities. Apple Inc., Dell Inc. and Microsoft Corp. hold three of the top five spots for total investment-grade bond offerings this year.
Microsoft Applies the ‘Metro’ Model to Operations Security
History will show that Microsoft’s biggest marketing push in the decade of the 2010s — unless the company comes up with a huge surprise right at the buzzer — will have been a design motif. At one point called “Metro,” until a lawsuit forced the company to stop using that word, it’s the idea that people accept information better when it’s summarized and partitioned in rows or columns of scrollable rectangles.
When Microsoft sprang its motif onto the public all at once in 2012 with Windows 8, the result was mass confusion, leading to one of consumer technology’s most thoroughly diagnosed failures. But maybe it wasn’t the little rectangles’ fault. The lesson the company appears to have taken from the Windows 8 debacle was that users appreciate change when it comes more gradually.
So as Microsoft moves its data center admins and DevOps customers towards its Operations Management Suite model, first introduced last year, it’s incorporating incremental additions to the new system while simultaneously encouraging continued use of the old one — in this case, System Center.
This week, the company announced the latest of these additions: a security module with links to Azure Security Center. Marketing may not do this new module justice: While OMS Security purports to maintain active, real-time investigation of potential security threats to its customers, its true value to the enterprise may lie much more deeply beneath the little rectangles on the surface.
In a company video, Azure Security Principal Program Manager Sarah Fender demonstrated drilling down from the newly amended OMS portal, into a detailed report of the status for particular domains — which in Microsoft’s parlance refers to the general topics to which a business’ security controls may pertain (e.g., units with suspected malware, policy rules that may have failed recently, failed login attempts).
Fender went on to demonstrate a baseline assessment feature, which in an earlier era was not something that a security tool could project in real-time. In the context of OMS, a baseline is a rule that is executed like a policy, and whose result is an event that the console can report.
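A conceptual sketch of that pattern, written in Python rather than anything Microsoft ships, might look like the following: each baseline is a rule evaluated against a host's current configuration, and each evaluation produces an event a console could report. All rule names and configuration keys here are invented for illustration.

```python
# Hypothetical rule names and configuration keys; not Microsoft's API.
import json
from datetime import datetime, timezone

baseline_rules = {
    "password_min_length_12": lambda cfg: cfg.get("password_min_length", 0) >= 12,
    "rdp_disabled":           lambda cfg: cfg.get("rdp_enabled", True) is False,
    "disk_encryption_on":     lambda cfg: cfg.get("disk_encryption", False) is True,
}

def assess(host: str, config: dict) -> list:
    """Evaluate every baseline rule against a host's config, emitting one event each."""
    return [
        {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "host": host,
            "rule": rule,
            "result": "pass" if check(config) else "fail",
        }
        for rule, check in baseline_rules.items()
    ]

print(json.dumps(assess("web-01", {"password_min_length": 8, "rdp_enabled": True}), indent=2))
```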
If you’re a veteran DevOps professional, you’ve already gotten a peek at where we’re going with this: a complex rule execution engine, which includes the generation of custom rules, suggests the presence of a runtime engine that can be extended for something far more practical than the “user experience”: automation.
Last May, the company began demonstrating the use of PowerShell — the scripting tool that transformed Microsoft’s entire server platform line — in implementing an OMS feature called desired state configuration (DSC). In a similar vein to how Jenkins, the open source CI/CD automation server, uses pipelines to automate the delivery of services, Microsoft has been pushing PowerShell over the past few months as a scripting tool for implementing its version of runbooks.
“OMS automation permits me to use PowerShell scripts to automate complex end-to-end processes,” writes certified trainer Ed Wilson (“The Scripting Guy”) in a company blog post last May. “I can do this with runbooks that I can run on demand, run immediately, or that I can schedule to run at a later time. Once I have the PowerShell script in the runbook, I can basically do anything I want to do with that runbook.”
Now, there was an era in Microsoft’s history when the admission that you can do anything you want with Microsoft scripting would set off red-alert klaxons. But in the spirit in which Wilson meant it in the modern era, the point is that PowerShell can implement the goals of DSC because nothing in OMS Automation restricts it from doing so.
It doesn’t take a scripting genius to infer from this that a DevOps or infosec professional could leverage the same system that delivers OMS Automation to create wide-ranging scripts that assess the security state of domains or data center principals across the globe. Those scripts could, in turn, generate alerts. And while alerts advance the goals of OMS by giving Microsoft something new to animate within the suite’s Web portal, they also give PowerShell runbooks the input they may need to automate security responses in real time.
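To illustrate that loop, here is a minimal, hypothetical sketch (in Python for brevity; in the scenario described it would be a PowerShell runbook) of alerts feeding an automated response table. The alert types and responses are invented for the example and do not correspond to any OMS feature.

```python
# Invented alert types and responses, purely to show the alert-to-automation pattern.
def respond(alert: dict) -> str:
    """Map a security alert to a runbook-style automated action."""
    playbook = {
        "failed_logins_exceeded": "disable the account and notify its owner",
        "suspected_malware":      "isolate the host from the network",
        "baseline_drift":         "re-apply the desired state configuration",
    }
    action = playbook.get(alert["type"], "escalate to a human operator")
    return f"[{alert['host']}] {alert['type']} -> {action}"

alerts = [
    {"host": "web-01", "type": "baseline_drift"},
    {"host": "db-02",  "type": "suspected_malware"},
    {"host": "app-07", "type": "unknown_signal"},
]
for a in alerts:
    print(respond(a))
```

The design point is the fallback: anything the playbook does not recognize still goes to a person, which is exactly the division of labor the article describes.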
While Fender demonstrated the extent to which an operator might see the details of security events in OMS, her demonstration was restricted to what human beings can do in response to those events. The real value of this latest suite addition may yet come from automation that reduces to a minimum what human beings have to do to respond.
Now On Its Own, Virtuozzo Seeks Container + VM Co-existence
Some 14 years ago, a company called SWSoft — the one behind the Parallels Windows virtualization technology so often associated with Mac — developed a virtual private server technology. It predated the deployment of VM-based servers in the cloud by several years, and was one of the first to be offered to customers as a hosting service rather than a stack of licensed software. It was called Virtuozzo, and deserves a measure of credit for giving rise to the cloud.
There’s something else Virtuozzo would like to receive some credit for: containers. Certainly, it did present an early framework for compartmentalizing and isolating Linux processes — these weren’t control groups (cgroups) yet, but they had the same goal in mind.
Back in May, after Parallels spun off Virtuozzo as a company unto itself, it announced its intention to join the Open Container Initiative (OCI), and to broaden its focus to incorporate more modern technologies. Now, with the release of version 7 of its staging platform, Virtuozzo is aiming to address one of the key integration issues facing infrastructure specialists: integrating virtual machines with modern, Docker-style containers, while at the same time integrating the storage volumes that both will utilize simultaneously.
“Container users can leverage dozens of Linux distributions in a container, and most Linux applications work fine and even faster in containers in comparison with VMs,” writes Virtuozzo Senior Program Manager Vladimir Porokhov in a company blog post this week. “However, for an application that needs to use a custom kernel module, depends on a kernel feature that isn’t in the host kernel, or is simply designed for a different OS (like Windows), it won’t work in a Linux container. In these cases, one can use virtual machines and avoid any OS or application compatibility issues.”
Virtuozzo 7 is not an orchestrator, like Docker Swarm, Kubernetes, or Mesosphere DC/OS — that is to say, it does not schedule workloads, oversee their operation, and migrate them from place to place. What it does offer is a common platform for staging both KVM hypervisor-based workloads and OCI container workloads on capacity that has been pooled together from bare metal servers. Those workloads may run on their own operating systems. Virtuozzo has not certified minimalistic operating systems yet, such as CoreOS or Microsoft Nano Server, although it has certified several 64-bit Linux distributions, along with their associated Linux containers (CentOS 7.x and 6.x; Debian 8.x and 7.x; Ubuntu 16.04 LTS, 15.10, and 14.04 LTS; OpenSUSE 42.x; Fedora 23; and its own Virtuozzo Linux 7.x and 6.x), as well as Windows Server 2012, 2012 R2 and 2008 R2 SP1.
Container staging takes place on Virtuozzo’s own container platform, called OpenVZ. This is the evolved form of the original platform for virtual private servers. As such, OpenVZ treats the context of a Virtuozzo container differently from a Docker or an appc container (CoreOS), and that might make a difference in whether data centers care to try out Virtuozzo’s co-existence plan. Specifically, a container and a VM are both servers in the OpenVZ environment, whereas with Docker, a container represents an application.
So Virtuozzo treats both with equal respect, so long as you’re willing to concede that a container is a complete server image, rather than the minimum dependencies needed to run an application — what Docker enabled it to be. According to Virtuozzo 7’s documentation [PDF], unlike Docker, Virtuozzo containers truly are virtual machines, rather than virtualized resources hosted by the underlying operating system that’s also hosting the container engine.
Still, this does enable one feature that Docker containers — as well as any format that’s hosted by a container daemon — had some difficulty in implementing, even now: persistent storage volumes. In Virtuozzo, there are simple storage devices that are networked together in clusters, and addressable using NFS.
This methodology could conceivably change over time, as Virtuozzo adopts more modern processes. What it has yet to demonstrate is scalability — the virtue displayed by more microservice-oriented container systems of replicating services on-demand, and eliminating them from the system when no longer needed. Conventional virtual machine environments are scalable, but only by replicating whole servers and employing load balancing, which does not always bode well for driving up utilization.
The company uses the term “hyper-converged” a bit loosely. Certainly Virtuozzo 7 is far from an orchestrator that pools compute, storage, and networking fabric at an infrastructural level, like Cisco’s concept of hyper-convergence, or HPE’s “composable infrastructure.”
But the more Virtuozzo takes in from Docker’s playground, the sooner this could change, making it a viable option in a world where integration is first and foremost on the CIO’s priority list.