Data Center Knowledge | News and analysis for the data center industry

Monday, March 7th, 2016

    1:00p
    Interxion Expands as European Data Center Market Heats Up

    The race for data center customers seeking connectivity in Europe remains hotly contested, and Interxion Holdings, one of Europe’s heavyweights in the space, is now competing with two US data center giants with global reach.

    Last week, Netherlands-based Interxion announced expansion plans in four key European data center markets and reported its 37th consecutive quarter of earnings growth.

    Interxion has been enjoying success in its core Western European markets. But competition in Europe has gotten a lot hotter now that Equinix has closed its acquisition of TelecityGroup, Interxion’s major rival, and Digital Realty, having acquired Telx, is pursuing a global strategy in which interconnection plays a key role.

    Telecity adds 23 facilities to Equinix’s European data center footprint, including presence in seven metros and six countries that are new for the Silicon Valley-based giant. In addition, Equinix is aggressively expanding its IBX data center footprint, announcing $4.5 billion in expansions on four other continents just last week.

    Notably, Equinix has agreed to sell eight Telecity data centers to secure a green light for the merger from EU antitrust regulators.

    In October 2015, Digital Realty closed its $1.9 billion acquisition of the US colocation and interconnection heavyweight Telx. Digital’s new strategy could be good or bad news for Interxion, depending on what the company does next.

    Just two months ago, a report circulated that Digital Realty might be eyeing Interxion as a way to scale up its European data center presence and the new big interconnection business it acquired in the Telx transaction.

    Interxion was well on its way to a merger with Telecity before Equinix wedged itself in with a successful bid of its own. An acquisition by Digital could be a good way for Interxion to bulk up as it takes on Equinix, which competed in Europe before the Telecity deal but at nowhere near its current scale.

    Read more: Report: Digital Realty Mulling Interxion Acquisition

    Interxion continues to focus on controlling its own destiny as a pan-European data center provider. However, as more enterprises look to deploy new applications in the public cloud and compete in the global marketplace, Interxion may need to carefully consider its options or risk being left behind.

    Interxion Forges Ahead

    On March 2, Interxion announced four expansions to go along with its Q4 earnings release and conference call. The expansion locations included:

    • Marseille – MRS1.2 to add 800 square meters of equipped space and more than 1MW of customer power, opening third quarter 2016.
    • Paris – PAR7.2 to add 1,100 square meters of space to handle existing customer expansions, to become operational in second quarter 2017.
    • Vienna – VIE2.6 to add incremental capacity of approximately 1,400 square meters and approximately 3MW of customer power to VIE2. This space will be brought online in phases during Q2 2016-Q3 2017.
    • Dusseldorf – DU2.2 to add 600 square meters of equipped space to serve digital media companies and provide a dual-site option for larger German enterprises. DU2.2 is scheduled to become operational in the second quarter of 2016.

    These builds are expected to cost a total of €50 million, adding a total of approximately 3,900 square meters of equipped space and approximately 5MW of customer power in European data center markets.

    These expansions were included in the Interxion 2016 CapEx budget of €210 million to €220 million.

    [Slide: customer ecosystem logos. Source: INXN – Q4 2015 earnings presentation, slide 8]

    Subsequent to the end of Q4, Interxion opened the first phase of its FRA10 data center in Frankfurt.

    Interxion to Start Charging Recurring Cross-Connect Fees

    On the earnings call, Interxion CFO Josh Joshi highlighted the initial results from the shift over to a recurring revenue model for cross-connects:

    “In the fourth quarter, the company transitioned away from charging solely on a one-time nonrecurring fee for cross-connects and moved to recurring-fee model across all countries for all new cross-connect sales… We do expect our quarterly nonrecurring revenue to continue to be lumpy in nature and return to the slightly lower range of between €4 million to €5 million per quarter.”

    This transition is one area investors should monitor closely going forward.
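
    To see why the model matters, consider a minimal sketch comparing cumulative revenue under the old one-time fee and the new recurring fee. Recurring billing starts slower but compounds as the installed base grows, which is consistent with the lower nonrecurring revenue guidance above. All prices and volumes here are hypothetical; Interxion has not disclosed per-cross-connect pricing.

```python
# Cumulative revenue under a one-time vs. a recurring cross-connect model.
# All figures are hypothetical; Interxion has not disclosed per-connect pricing.

ONE_TIME_FEE = 500.0          # hypothetical one-off installation charge (EUR)
MONTHLY_FEE = 50.0            # hypothetical recurring charge per connect (EUR)
NEW_CONNECTS_PER_MONTH = 100  # hypothetical sales pace

def cumulative_revenue(months: int) -> tuple:
    """Return (one_time_total, recurring_total) over the given horizon."""
    one_time = ONE_TIME_FEE * NEW_CONNECTS_PER_MONTH * months
    # A connect sold in month m bills monthly from month m to the horizon end.
    recurring = sum(
        MONTHLY_FEE * NEW_CONNECTS_PER_MONTH * (months - m)
        for m in range(months)
    )
    return one_time, recurring

for horizon in (6, 12, 24, 36):
    one_time, recurring = cumulative_revenue(horizon)
    print(f"{horizon:>2} months: one-time EUR {one_time:,.0f} "
          f"vs recurring EUR {recurring:,.0f}")
```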

    Interxion Q4 Earnings Highlights

    Interxion CEO David Ruberg was upbeat regarding leasing momentum for 2016: “Trends in IT continue to be in our favor, and Interxion spent most of 2015 strengthening our position by attracting cloud magnets to our data centers across the widest footprint in Western Europe. We believe that we are still at the beginning of the next major IT investment cycle as billions of IT dollars shift from legacy IT deployments to the cloud over the next several years.”

    Interxion Q4 results included:

    • Revenues – Q4 2015 revenues grew 12 percent, and net profit was up 64 percent to €12.1 million.
    • Revenue breakout – Recurring revenues, €95.1 million, a 13.6 percent increase; nonrecurring revenues, €5.6 million, a decrease of 10.1 percent.
    • Occupancy – Equipped space was increased by 1,000 square meters in Q4; revenue-generating space rose by 1,100 square meters. Utilization rate at year’s end was 78 percent.
    • Initial 2016 Guidance – Management expects full-year revenue of €416 million-€431 million and adjusted EBITDA of €185 million-€195 million (both in line with expectations).

    Interxion has now reported 37 consecutive quarters of earnings growth, and adjusted EBITDA margins remain at a solid 45 percent.

    The Bottom Line

    In an increasingly competitive EU environment, will Interxion’s local-market focus, its “Communities of Interest” strategy, be enough to carry the day?

    Equinix president and CEO Stephen Smith said global businesses are increasingly relying on interconnection to provide a rich user experience around the world. These trends will only accelerate as more data is stored in edge markets to support the Internet of Things.

    Interxion appears to have an excellent foothold in many European edge markets. It owns nine of its 41 data center locations and is currently exercising purchase options for PAR7 and AMS7.

    One question investors must answer for themselves: Will the Interxion piece of the pie become more attractive, or less attractive to potential suitors over time?

    Want to know how other data center providers did in the fourth quarter? Curious about investing in data center companies in general? Visit the Data Center Knowledge Investing section for everything you need to know about this high-performing sector.

    4:00p
    Is Your Data Center Protected from Drones?

    Bet you didn’t think it needed to be. It hadn’t occurred to me either, but Adam Ringle, who runs a security consulting company that bears his name, says that failing to address the Unmanned Aerial Systems threat proactively can lead to serious consequences.

    Ringle advocates both taking steps to lower the risks drones pose and using drones proactively at the data center to improve physical security. He is speaking on the subject at the Data Center World Global conference in Las Vegas this month.

    He will be talking about relevant new policies, procedures, and legislation.

    His company, Adam Ringle Consulting, guides companies on creating policies for dealing with things like drones and workplace shooters and provides emergency readiness training. It also offers expert medical, emergency services, and law enforcement witnesses to assist companies and government agencies in litigation.

    ARC partners with StarRiver, which describes itself as a Drone, Unmanned Systems Technology Defense Research and Development Lab. The lab studies and develops drone detection and response systems.

    Join Adam Ringle and 1,300 of your peers at Data Center World Global 2016, March 14-18, in Las Vegas, NV, for a real-world, “get it done” approach to converging efficiency, resiliency and agility for data center leadership in the digital enterprise. More details on the Data Center World website.

    6:20p
    Virtualization Management Tips: Optimizing Your Environment

    At this point, almost every modern data center has worked with some type of virtualization technology. The modern hypervisor has come a long way from its predecessors. Leading virtualization platforms offer enterprise-ready technologies capable of consolidating infrastructure and scaling it in step with the rest of the stack. Many products provide virtualization-ready support for a variety of workloads, and entire virtual appliances now serve specific functions like load balancing and even advanced security integration.

    As virtualization continues to improve, however, you do have to pay special attention to the hardware underneath. Remember, better hardware means more VMs per host, better app density, more efficiency, and optimal virtualization management. This means more users can be handled with less hardware and physical resources, such as power, cooling, and data center space. All of this translates into cost savings.
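
    As a back-of-envelope illustration of that density math, consider the following sketch; every figure in it is hypothetical.

```python
import math

# Back-of-envelope consolidation math; every figure here is hypothetical.
workload_vms = 400       # VMs the environment must host
vms_per_host_old = 20    # density on aging hardware
vms_per_host_new = 50    # density on newer, denser hosts
watts_per_host = 750     # assumed average draw per host

hosts_old = math.ceil(workload_vms / vms_per_host_old)  # 20 hosts
hosts_new = math.ceil(workload_vms / vms_per_host_new)  # 8 hosts

saved_hosts = hosts_old - hosts_new
saved_kw = saved_hosts * watts_per_host / 1000

print(f"Hosts required: {hosts_old} -> {hosts_new} ({saved_hosts} fewer)")
print(f"Power avoided: ~{saved_kw:.1f} kW, before cooling overhead")
```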

    Now, let’s look at the virtual side of things:

    • Using virtual images, administrators can move their workloads between distributed sites, protecting the availability of their data. Creating highly replicated hot sites, for example, becomes easier with mature replication features built directly into the hypervisor.
    • Integration with storage systems is now normal practice in virtualization management, and data deduplication and backup come standard with many hypervisors. Remember to drive storage optimization from the hypervisor level when working with various storage repositories, for example by applying different VM policies to flash and to spinning disk (a simple illustration follows below).
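
    As a rough illustration, here is a hypervisor-agnostic sketch of per-tier policies; the names and fields are invented for this example, not any vendor’s actual API.

```python
# Hypervisor-agnostic sketch of per-tier VM storage policies.
# Policy names and fields are illustrative, not any vendor's real API.

STORAGE_POLICIES = {
    "flash-tier": {
        "media": "ssd",
        "use_case": "latency-sensitive VMs (databases, VDI boot storms)",
        "dedupe": "inline",        # cheap on flash
        "replication": "sync",
    },
    "capacity-tier": {
        "media": "spinning-disk",
        "use_case": "backups, archives, low-IOPS file services",
        "dedupe": "post-process",  # spares the spindles during the day
        "replication": "async",
    },
}

def pick_policy(vm_profile: str) -> str:
    """Route a VM to a storage tier based on its workload profile."""
    return "flash-tier" if vm_profile in ("database", "vdi") else "capacity-tier"

print(pick_policy("database"))     # flash-tier
print(pick_policy("file-server"))  # capacity-tier
```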

    When working with virtualization, you have to understand the following:

    • Sizing and scaling an environment is very important. The initial planning stages are crucial to making the right hardware and resource decisions, because under-provisioning for your user count is far more costly to fix after a system has gone live.
    • Remember, as with any physical resource, the capabilities of your network, storage, and compute are finite. Administrators must carefully watch how their virtual workloads are operating and where their resources are going. Too often, administrators over-provision a VM only to see most of its resources go unused (see the sketch after this list).
    • Testing and maintenance are always important in virtualization management. Monitor logs, VM health, and accessibility regularly, and schedule DR testing during off-hours so production systems stay live.
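
    A minimal sketch of that right-sizing check appears below; the VM inventory is made up, and in practice the allocation and peak-usage numbers would come from your hypervisor’s monitoring API.

```python
# Flag over-provisioned VMs by comparing allocation to observed peak usage.
# The inventory below is made up; real numbers would come from your
# hypervisor's monitoring API.

vms = [
    # (name, vcpus_allocated, peak_cpu_pct, ram_gb_allocated, peak_ram_gb)
    ("web-01", 8, 12.0, 32, 6.5),
    ("db-01", 16, 78.0, 64, 51.0),
    ("batch-01", 4, 9.0, 16, 2.0),
]

CPU_WASTE_THRESHOLD = 20.0  # flag if peak CPU stays under 20 percent
RAM_WASTE_RATIO = 0.25      # flag if peak RAM is under a quarter of allocation

for name, vcpus, peak_cpu, ram_gb, peak_ram in vms:
    wasted_cpu = peak_cpu < CPU_WASTE_THRESHOLD
    wasted_ram = peak_ram < ram_gb * RAM_WASTE_RATIO
    if wasted_cpu or wasted_ram:
        print(f"{name}: right-sizing candidate "
              f"(peak CPU {peak_cpu}%, peak RAM {peak_ram}/{ram_gb} GB)")
```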

    Endpoint Virtualization and Optimization

    Remember, virtualization isn’t just happening at the server level. We’re also seeing advancements around endpoint virtualization and content delivery. By virtualizing the endpoint, IT administrators are able to accomplish several tasks:

    • Minimize endpoint hardware footprint
    • Convert to a thin-client environment
    • Centrally manage all desktop images
    • Deploy desktops or workloads to any device with internet connectivity
    • Secure golden images against tampering and deploy those as needed to numerous locations simultaneously

    Virtual Desktop Infrastructure is a powerful solution, but there must be a need for it. Some organizations are looking to VDI to help with compliance issues. Others see that a complete PC hardware refresh is much too expensive. In these situations, converting to a thin-client environment and then pushing down desktop images through VDI becomes a viable solution.

    When working with endpoint virtualization, there are two core delivery methods that need to be understood: application virtualization and desktop virtualization.

    • With application virtualization we still potentially keep the endpoint but deliver all of the necessary applications from a centralized platform.
    • In a VDI environment we look to completely virtualize the desktop and deliver it to the end user. Although the image lives on a server, the experience is seamless to the end user. Advancements in networking and storage technologies allow virtual desktops to be delivered quickly and smoothly.
    • There is also a hybrid approach, where an organization deploys both virtualized applications and VDI to its end users. One server environment manages the applications, while the other delivers the desktop. In these scenarios we get a true separation of duties and granular control over the entire workload.

    In working with new levels of virtualization, organizations must be ready to support their users. This means building capabilities around mobility and productivity. Whether at the server level or within the application or desktop, virtualization can create powerful efficiencies for the entire data center. Most of all, it creates new use cases that help the business stay agile in a very fluid market, letting you adapt quickly to changing demands as you support an increasingly diverse user base.

    6:43p
    SUSE’s Latest OpenStack Distro Focuses on Private Cloud
    By The VAR Guy

    SUSE says its latest OpenStack offering, SUSE OpenStack Cloud 6, finally makes open source private clouds enterprise-friendly and easy to adopt without fear of vendor lock-in.

    SUSE OpenStack Cloud 6 is the latest version of the company’s OpenStack distribution. Based on the OpenStack Liberty release, it introduces several major new features. Those include:

    • Support for upgrading to newer versions of OpenStack without disrupting cloud operations.
    • Broader virtualization support with the addition of IBM z/VM compatibility.
    • Support for Docker containers (which sort of makes SUSE OpenStack Cloud more than just a cloud platform, but we’re betting enterprises won’t mind the optional extra features).
    • “Enhanced high availability,” according to SUSE, which says the platform is designed to support enterprise workloads that demand absolute reliability.
    • Support for Manila, OpenStack’s shared file system.

    At the core of SUSE’s pitch for its new OpenStack offering is enterprise readiness. Citing a study that the company performed recently, SUSE said the vast majority of businesses “would use a cloud solution for business-critical workloads, and believe there are business advantages to implementing an open source private cloud.” Yet companies remain “concerned about installation challenges, possible vendor lock-in and a lack of OpenStack skills in the market,” SUSE said.

    See also: Mirantis Launches Latest OpenStack Cloud Distro

    “SUSE is addressing these concerns by adding non-disruptive upgrade capabilities along with a more business-friendly release cycle and longer support duration. These combine to reduce the load on limited skilled resources by requiring fewer upgrades and minimizing disruption to production environments.”

    On balance, SUSE OpenStack Cloud 6 probably won’t allay every CTO’s concerns about the open source private cloud. The cost of migrating legacy systems will no doubt remain a challenge for many, as will the perception among some that OpenStack itself, with its varied and burgeoning components, is not yet fully mature.

    But SUSE’s new upgrade features and broader OpenStack support certainly won’t hurt in attracting more companies to its enterprise platform. Some may finally see the new features as the push they need to get over the OpenStack adoption hurdle.

    This first ran at http://thevarguy.com/open-source-application-software-companies/suse-debuts-openstack-cloud-6-open-source-private-cloud-c

    7:22p
    Want to Learn about Smart Data Center Consolidation?

    Most data center servers operate at only 12 to 18 percent of their capacity, yet many companies aren’t taking advantage of the cost-saving potential offered by data center consolidation. Consider this: in the last five years, the US government saved nearly $2 billion by consolidating data centers. Companies like Microsoft, HPE, and IBM have likewise saved billions.
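
    As a rough illustration of why those utilization numbers translate into savings, consider the sketch below; the fleet size, target utilization, and cost figures are all hypothetical.

```python
import math

# If servers idle at 12-18 percent utilization, consolidation shrinks the
# fleet roughly in proportion. Fleet size and costs below are hypothetical.

fleet = 1000                 # physical servers today
current_util = 0.15          # midpoint of the 12-18 percent range cited above
target_util = 0.60           # a common post-consolidation target
cost_per_server_year = 4000  # assumed all-in annual cost (power, space, support)

needed = math.ceil(fleet * current_util / target_util)  # 250 servers
annual_savings = (fleet - needed) * cost_per_server_year

print(f"Servers needed after consolidation: {needed}")
print(f"Estimated annual savings: ${annual_savings:,}")
```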

    In an effort to cut costs and regain control of the data center environment, IT managers are asking that their environments be consolidated and made more efficient. The conversation revolves around aligning IT with business needs, which today often means greater IT agility. Managers and executives are trying to drive down cost and in doing so have prioritized data center consolidation and migration projects.

    A consolidation or data center migration plan must account for high-density server equipment, applications, virtualization technology, and end-user considerations.

    Is your company one of many that are wasting money on data center operations? At the Data Center World Global conference next week in Las Vegas, Andy Abbas, VP of technical services at Data Agility Group, will talk about data center consolidation. He brings years of experience as a data center architect and migration specialist and focuses on aligning migration and consolidation efforts with the business.

    During his session, he will highlight the following:

    • Budgets are shrinking and organizations are having to do more with less.
    • There is a lot of money to be saved by doing smart data center consolidation, freeing up resources for other more effective projects.

    “Most of all, how can you evaluate whether your company can profit from a data center consolidation?” Abbas asked. “During this session, we’ll dive into a roadmap of what to look for when considering a consolidation or data center migration project. We’ll also look at where companies can save millions, and in some cases, even billions of dollars.”

    Remember, a lot is changing in today’s business and data center world. New technologies are impacting how data centers are deployed and managed. Consider these two big trends:

    • Virtualization and Cloud: This has enabled companies to reduce their footprint and cater to the business in a more robust manner.
    • On-Premises IT Moving to Colocation Data Centers: Companies are realizing they can’t run a data center as efficiently as they thought they could. It’s often more economical to have someone else manage the facility while you focus on managing your IT infrastructure.

    In his session, Abbas will discuss:

    • How have some of the world’s largest organizations saved billions through data center consolidation? The session will provide attendees with real-life examples of organizations taking advantage of new technologies, such as virtualization and cloud, to save money and optimize their infrastructure.
    • What are the key opportunities for companies to save money through data center consolidation? Attendees will learn the key areas that give the best ‘bang for the buck.’ Several areas could offer savings, but balancing risk and reward is very important. Attendees will learn which areas deliver the most value while minimizing the risks involved with migrations.

    Join Andy Abbas and 1,300 of your peers at Data Center World Global 2016, March 14-18, in Las Vegas, NV, for a real-world, “get it done” approach to converging efficiency, resiliency and agility for data center leadership in the digital enterprise. More details on the Data Center World website.

    7:28p
    The Second Machine Age: Our Future With Artificial Intelligence 

    Sudheesh Nair is President of Nutanix.

    Recently, I went to New York and had to get a taxi at LaGuardia Airport, an arguably antiquated airport. To do so, you must stand in line and wait for an attendant to fill out a ticket with the appropriate zone for your final destination. When your taxi drives up, this person hands the ticket to the taxi driver and you can then go on your way.

    As I waited for my taxi, I couldn’t help but see this cumbersome process as a classic queuing problem. It could easily be solved with technology, but that inevitable change would leave the attendant without a job. For better or for worse, this is the fate of many jobs as artificial intelligence catches up with human intelligence — and that day is not as far off as we may think. Artificial intelligence is no longer relegated to the fictional imaginings of such worlds as “Blade Runner,” “The Matrix” or “Star Wars.”

    Computers no longer take up entire rooms, take days to crunch specific queries, or sit within reach of only a select few. Now they’re in our pockets, on our wrists, in our refrigerators and even walking around with us in our shoes. But even with this advancement, we are nowhere near the potential that the merger of computer and human intelligence has promised for so long. Human intelligence still surpasses computers in many ways, such as in common sense, empathy and critical thinking. However, the day is not too far off when artificial intelligence will change life as we know it.

    In this second machine age, technology will fit seamlessly into our lives, bringing remarkable benefits to consumers everywhere. Underlying hardware will act as an invisible infrastructure as intuitive software interfaces and automation do most of the work. Take, for example, the transportation industry. Right now it hinges on human input to drive cars and buses, conduct trains, and fly planes. We are already seeing the potential of AI in revolutionizing how we get from place to place, as driverless cars are on the cusp of coming to market. By removing the human error and the cost that employment incurs, transportation will become more affordable and will even, for many people, eliminate the need to own a car — one of the most underutilized resources we currently depend on. This will reduce waste, clean up the air and give us more room for development in urban areas.

    These consumer benefits also extend to our health care. It doesn’t seem too far off to imagine a time when a computer could make the simple diagnoses that account for a vast majority of issues presented to doctors, making medical care more affordable, more accurate and more widely available. Medical professionals would then be freed up to focus their attention on unusual or complicated cases that require their human adaptability, creativity and innovation. They would also be able to focus their attention on advancements in other areas, such as the individualized care that human genome sequencing promises.

    Of course, with all of the automation AI will bring us, it will eliminate many of the jobs currently undertaken by humans. Machines will be able to drive for us, diagnose us, run our routine IT support, give us financial guidance and hail us a cab at the airport.

    While this may seem like the end of days for employment, just remember that we have been through these shifts throughout time, even as recently as the Industrial Revolution. Advancements in technology have often improved quality of life at the expense of mundane tasks. Look no further than the Gutenberg press. While it put many a monk out of work, it also made books available to the masses, improving the lives of millions of people through accessible knowledge — much like the Internet has done today.

    This massive shift we are facing leaves us with innumerable questions and complications. What kinds of jobs will people have? How will we distribute wealth? What possibilities lie in front of us? For years, people have been clamoring for more education to stay ahead of the changes. While this is important, education is not enough. Computers can access more knowledge than is even fathomable to our limited minds.

    What we must do is adapt to the coming tide of change. We can do this by leaning into what makes us innately human: our ability to improvise, to innovate and to inspire. Machines cannot yet replace these qualities, nor can they make the genuine human connections that we all thrive on and depend on. The key is to modify our approach to the way we live with technology so people like the taxi ticket attendant will be able to ride the wave and not get pulled under the riptide.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    8:35p
    How Much Is Spotify Paying for Google Cloud?

    It’s a question worth asking. Spotify’s transition from operating its own data center infrastructure to Google’s public cloud is the first high-profile use case for a cloud service provider that is often mentioned alongside Amazon Web Services and Microsoft Azure – thanks primarily to the scale and might of its data center infrastructure – but has yet to prove it can really compete with the two public cloud giants.

    Spotify announced the move in February, saying its “lazy engineers” had been questioning the necessity of leasing data center space, managing servers and networking gear, and scaling it all to keep up with demand. The music streaming service is available in close to 60 markets globally, hosting more than 2 billion playlists and streaming more than 30 million songs.

    Spotify will be using Google’s cloud compute, storage, and networking services, as well as its data services, such as Pub/Sub, Dataflow, BigQuery, and Dataproc. The company is transitioning from an on-prem to an all-cloud infrastructure model gradually – a process it expects will take months to complete.
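
    Spotify hasn’t published its pipeline code, but to get a sense of what the Pub/Sub side of event delivery looks like, here is a minimal publish sketch using the google-cloud-pubsub Python client. The project name, topic, and payload are placeholders, not anything Spotify has disclosed.

```python
# Publishing an event to Google Cloud Pub/Sub with the google-cloud-pubsub
# Python client (pip install google-cloud-pubsub). The project, topic, and
# payload are placeholders; this is not Spotify's actual pipeline code.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "playback-events")

event = {"user": "u123", "track": "t456", "action": "play"}

# Pub/Sub payloads are bytes; extra keyword arguments become message attributes.
future = publisher.publish(
    topic_path,
    data=json.dumps(event).encode("utf-8"),
    source="example-client",
)
print("Published message ID:", future.result())
```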

    While flexibility of cloud infrastructure and the ability to outsource the scaling headaches are big benefits over running your own data centers, cost is the single most crucial factor, especially at the scale of a service like Spotify. Amazon, Microsoft, and Google have been battling each other on price for several years now.

    Spotify and Google didn’t release any details about how much the streaming company is paying for Google’s cloud services, but David Mytton, founder and CEO of Server Density, a server monitoring startup, has made an attempt to calculate what some of those costs may be, since Google’s pricing is fairly transparent.

    Mytton’s analysis is limited to the cost of Spotify’s event delivery service, which is transitioning from Apache Kafka to Google Pub/Sub. Event delivery is the only service Spotify has shared some numbers about, making it possible to deduce some cost estimates that are more or less close to reality. The full analysis is on Medium.

    His conclusion is that Spotify is paying about $290,000 per month for Pub/Sub alone, although it’s possible that it received some discount, given that Google is getting more out of this deal than just a paying cloud customer.
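
    The full analysis is worth reading, but the shape of such an estimate is easy to reproduce: multiply an assumed message rate by operations per message, then push the total through a sliding price tier. The traffic rate and tier prices below are illustrative placeholders, not Google’s actual 2016 rate card or Mytton’s exact inputs.

```python
# Rough Pub/Sub cost model in the spirit of Mytton's estimate. The traffic
# volume and tier prices below are illustrative placeholders, not Google's
# actual 2016 rate card or Spotify's real numbers.

EVENTS_PER_SECOND = 400_000  # assumed sustained event rate
OPS_PER_MESSAGE = 3          # publish + pull + ack (simplified)
SECONDS_PER_MONTH = 30 * 24 * 3600

# (tier ceiling in millions of ops, price per million ops in USD) - hypothetical
TIERS = [(250, 0.40), (750, 0.20), (1750, 0.10), (float("inf"), 0.05)]

def monthly_cost(ops_millions: float) -> float:
    """Apply a sliding tiered price to a monthly operation count."""
    cost, prev_ceiling = 0.0, 0.0
    for ceiling, price in TIERS:
        band = min(ops_millions, ceiling) - prev_ceiling
        if band <= 0:
            break
        cost += band * price
        prev_ceiling = ceiling
    return cost

ops = EVENTS_PER_SECOND * OPS_PER_MESSAGE * SECONDS_PER_MONTH / 1e6
print(f"~{ops:,.0f}M ops/month -> ${monthly_cost(ops):,.0f}/month")
```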

    In November 2015, Urs Hölzle, Google’s VP of technical infrastructure, said this year would be the year Google shows the world it is serious about its cloud services business. Deals like the Spotify one are important to telling that story.

    Read more: Netflix Shuts Down Final Bits of Own Data Center Infrastructure

