Data Center Knowledge | News and analysis for the data center industry
 

Friday, November 6th, 2015

    1:00p
    CyrusOne: No Customers Ask for Retail or Wholesale Data Center Product

    For CyrusOne, the data center business is no longer about selling traditional retail or wholesale data center space. Things are a lot more nuanced than they were in the past.

    “No customers come to us and ask to purchase our retail or wholesale product,” Gary Wojtaszek, the Carrollton, Texas-based data center provider’s CEO, said on the company’s third-quarter earnings call this week. “Rather, customers’ infrastructure purchasing decisions are based around the applications they are managing, which have very specific requirements.”

    The increasingly sophisticated data center customer who demands a wider variety of services from their provider is a recurring theme today. Companies want a variety of power densities, network connectivity, managed services, and direct links to public cloud infrastructure, and they don’t want to sign multiple contracts with multiple service providers to get them. More and more, they look to their data center provider for all of it.

    What this means for the providers is a different revenue model. The deals start smaller in terms of space and power and cost more to deliver, but generate more revenue per kW or per square foot of data center space. They also make for “stickier” relationships with customers, who become reliant on the provider for all of their infrastructure needs.

    “Over the past eight quarters, the majority of our new leases are from full-service customers based on the volume of leases,” Wojtaszek said.

    CyrusOne reported a strong third quarter overall, as did other data center real estate investment trusts. The US multi-tenant data center market is currently characterized by high demand, with new supply coming online at a pace that keeps prices from fluctuating significantly. Companies are leasing out existing capacity and expanding across all major regions.

    CyrusOne leased close to 5 MW and 30,000 square feet of data center space during the quarter, representing about $13 million in annual revenue.

    The company has had a particularly successful year in Northern Virginia – one of the world’s largest and most active data center markets. It commissioned Phase One of its first Sterling, Virginia, data center in the first quarter, and by the end of the second quarter the data hall was “essentially sold out,” Wojtaszek said.

    CyrusOne commissioned a second data hall, about 37,000 square feet in size, during the third quarter. It has also started construction of the second phase in Virginia and has already pre-leased about 10 percent of it, according to the CEO.

    While it has traditionally been an active market, Northern Virginia has seen a particular burst of activity recently. Wholesale data center provider DuPont Fabros Technology recently leased out about 20 MW of capacity in the region. Both Equinix and Digital Realty Trust announced huge expansion projects in the market.

    CyrusOne is also expanding in Austin, Houston, San Antonio, Dallas, and Phoenix.

    But CyrusOne’s biggest recent expansion was its acquisition of data center provider Cervalis earlier this year. The deal gave it instant presence in the New York market – comparable in size to Northern Virginia – with four data centers.

    In addition to geographic expansion, the deal added a substantial new source of revenue for CyrusOne and diversified its customer base. In its native Texas, CyrusOne has made lots of inroads in the oil-and-gas industry, and the Cervalis acquisition substantially increased its business with financial services companies. “The acquisition more than doubles our presence in the key financial services vertical,” Wojtaszek said.

    CyrusOne has been putting a lot more emphasis on interconnection products recently, and the strategy has been paying off. Interconnection revenue grew about 30 percent year over year. It now accounts for five percent of the company’s total revenue.

    “We expect that, as the enterprises become more familiar with an outsourced network model, similar to their acceptance of the outsourced data center model, that they look to take advantage of the lower cost afforded by an internet-based network topology,” Wojtaszek said. “This product (interconnection) should continue to grow faster than our base business.”

    5:32p
    Six Data Disasters That Could Happen Anytime

    Perry Dickau is the director of product management at DataGravity.

    It was supposed to be a quiet Saturday night, until you get a call at 2 a.m. and learn your system has been breached. It may be a CISO’s worst nightmare. A major portion of your data, structured and unstructured, is now in the public domain. However, until you get to the office and dig into the specifics of the breach, you’re not sure which files and folders have actually been exposed. Frantic, you rack your brain: What is the worst thing someone could find in your files?

    Inadvertently exposing sensitive data is never a positive experience for the people directly connected to the information – as Sony, Target or any one of the thousands of companies that suffered breaches in recent years could tell you. For many security executives, real terror strikes when they aren’t entirely sure what’s hiding in their storage. When data is untouched for long periods of time, it grows cold and eventually becomes dark, creating a breeding ground for misplaced information and potentially dangerous elements. And it’s not just enterprises that can unearth nightmares in their dark data stores; universities, hospitals, law firms and various other organizations frequently store spreadsheets, emails and other files that may contain credit card numbers, Social Security information, intellectual property, or worse.

    Imagine if, after you got that 2 a.m. phone call, you arrived at the office in the morning to learn that any one of the below examples had made its way from your server to the public realm:

    Intellectual Property

    Your company’s bread and butter in the marketplace is the advantage you hold over competing vendors, but as part of this breach, your “secret sauce” may have been leaked to your adversaries. Losing trade secrets could cause irreparable harm to your brand integrity and make your company less viable in the eyes of your consumers.

    Passwords, Credit Card Numbers and Online Login Details

    Do you have a document titled “Passwords” in one of your desktop or file share folders? Even if you don’t, one of your employees likely does. Spreadsheets and repositories that keep track of login credentials, file names and elements such as credit card numbers and Social Security numbers make it easy for interested parties to identify personal information on a company’s server. In today’s risk landscape, where security breaches are almost inevitable, companies need to make sure they’re locking down every instance of critical data.
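
    As a rough illustration of the problem, a crude first pass at finding such files on a Unix file share can be done with standard command-line tools. This is a sketch only: the /srv/fileshare path is a placeholder, and the patterns below are deliberately naive, so a real discovery effort would need proper data-classification tooling rather than a handful of regular expressions.

        # List text files whose contents match a rough Social Security number pattern
        grep -rElI '[0-9]{3}-[0-9]{2}-[0-9]{4}' /srv/fileshare

        # List text files containing something shaped like a 16-digit card number
        grep -rElI '([0-9]{4}[- ]?){3}[0-9]{4}' /srv/fileshare

        # Flag files whose names suggest stored credentials
        find /srv/fileshare -type f -iname '*password*'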

    Home Loan Information

    As a realtor, it’s part of your job to help manage clients’ private data and keep it confidential – including files such as home loan applications and approvals. If a third party gained access to these files, your customers’ personal security would be compromised, and the credibility of your business would be at stake.

    A Family’s Financial History

    Universities keep track of students’ scholarships and tuition payments, many of which are determined by their families’ income. If you’re a chief information security officer (CISO) or IT pro working in higher education, a hack could expose credit histories, loan applications and other private assets for current students and alumni alike – and financial support from both parties is often necessary to fund university programs and initiatives.

    Medical Insurance Information

    Every patient who has ever received treatment at a hospital or doctor’s office is on record, accompanied by his insurance history and medical details. Even in the event of an inadvertent breach, exposing this information can wreak personal havoc on the patient’s life, including his quality of healthcare and his personal privacy.

    Timelines and Research For a Round of Funding

    The investors are excited; the terms are nearly settled between a venture capitalist (VC) firm and your company – and suddenly, all of your negotiations are out in the open, along with your research about the firm that started the deal in the first place. Even if you don’t lose support from the VC, your reputation in the community will likely be tested.

    When your company suffers a data breach, your internal teammates aren’t the only people with private information at stake. It’s your responsibility to help protect your customers, employees, partners and any other individuals who have interacted with your organization. Be sure to ask yourself: Are you sure you know what’s hiding in the depths of your data? And what would you do differently if you could find out?

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:37p
    Docker 1.9 Offers Improved Networking, Storage Features


    This article originally appeared at The WHIR

    The latest release of application containerization software Docker features some major updates including production-ready Docker Swarm, multi-host networking, Docker Engine’s new volume management system, and improved support across multiple environments with Docker Compose.

    Among container technologies that efficiently package applications so they can run as lightweight microservices, Docker has been an extremely popular open-source choice, with its main alternative being LXC Linux containers.

    In Docker 1.9, released Tuesday, the new features work together to provide a more complete toolset aimed at running stateful and stateless distributed applications in production and at scale, according to a blog post from Docker’s Ben Firshman, co-founder of Docker web host Orchard Laboratories, which Docker bought last year.

    Essentially, Docker Swarm pools infrastructure into a single resource for hosting distributed apps, Docker Networking helps the containers making up those apps speak to each other, and Docker Engine’s new volume management system allows persistent data storage wherever those containers are. And Docker Compose helps developers more easily define and run containers in multi-container applications.

    Multi-host Networking

    First announced as an experimental Docker Engine feature at DockerCon in June, the networking improvements are now generally available and allow users to create virtual networks in Docker Engine that span multiple hosts. Containers can attach to these virtual networks and communicate over them, giving administrators complete control over the network topology and over how containers can talk to each other.

    The networking system can also be swapped out with a plugin to integrate with other networking systems without changing the application.
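
    As a rough sketch of how the feature is used, the commands below create an overlay network and attach containers to it. The names are placeholders, and in Docker 1.9 a working multi-host setup also requires the Engine daemons to be configured with a shared key-value store such as Consul, etcd, or ZooKeeper:

        # Create an overlay network that spans the hosts in the cluster
        docker network create -d overlay my-app-net

        # Start containers attached to that network (on any host);
        # they can reach each other by container name
        docker run -d --name db --net=my-app-net redis
        docker run -d --name web --net=my-app-net my-web-image

        # Inspect the network and see which containers are attached
        docker network inspect my-app-net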

    Persistent Storage

    Storing persistent data in distributed apps has been a struggle for many Docker users, according to Firshman. “Docker Engine 1.9 includes a completely redesigned volume system that makes them much easier to use and brings plugins to the forefront.”

    Docker 1.8’s pluggable storage volumes allow Docker volumes to use any third-party storage system. Storage volumes also work with Swarm, allowing persistent storage across an entire cluster. Volume drivers are available for Blockbridge, Ceph, ClusterHQ, EMC, and Portworx.
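
    A minimal sketch of the reworked volume commands, with placeholder volume and driver names (a third-party driver, such as one from the vendors listed above, has to be installed on the host before it can be used):

        # Create a named volume using the default local driver
        docker volume create --name app-data

        # Create a volume backed by an installed third-party plugin
        docker volume create --name shared-data -d <driver-name>

        # Mount the volume into a container; the data outlives the container
        docker run -d -v app-data:/var/lib/mysql mysql

        # List and remove volumes
        docker volume ls
        docker volume rm app-data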

    Native Clustering with Docker Swarm 1.0

    Docker Swarm 1.0 provides native clustering for Docker Engine. In addition to bug fixes, the latest version has been optimized, hardened, and tested at scale for production-ready applications spanning 1,000 nodes and 30,000 containers.
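
    For reference, a bare-bones cluster using Swarm’s hosted discovery token service looks roughly like the following. Addresses, ports, and the token are placeholders, exact flag names can vary between Swarm releases (check swarm join --help), and production clusters would typically use their own discovery backend such as Consul instead of the hosted token service:

        # On any Docker host: create a cluster ID and note the returned token
        docker run --rm swarm create

        # On each node: join the cluster, advertising the node's Docker port
        docker run -d swarm join --advertise=<node_ip>:2375 token://<cluster_token>

        # On the manager host: start the Swarm manager
        docker run -d -p 4000:2375 swarm manage token://<cluster_token>

        # Point a regular Docker client at the manager to schedule containers cluster-wide
        docker -H tcp://<manager_ip>:4000 run -d nginx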

    Configuring Containers with Docker Compose 1.5

    Docker Compose now provides better support for multiple environments, making it easier to specify the structure of the development, test, and production environments. When building a configuration, Docker Compose now reads two files: docker-compose.yml and an optional docker-compose.override.yml. You can specify a base file that describes the structure of the app, then keep environment-specific variations, such as those for staging or production, in the override file.
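
    A minimal sketch of how the split might look, using the Compose file format of the time and made-up service definitions:

        # docker-compose.yml -- base structure shared by every environment
        web:
          build: .
          links:
            - db
        db:
          image: postgres

        # docker-compose.override.yml -- development-only additions, read automatically
        web:
          ports:
            - "8000:8000"
          volumes:
            - .:/code

    Running docker-compose up merges the base file with the override; alternative files for staging or production can be passed explicitly with the -f flag.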

    Compose 1.5 provides more thorough validation of Compose files and outputs more informative error messages when something goes wrong.

    The new Compose also features experimental support for Docker Networking, allowing a Compose app deployed on Swarm to work across multiple hosts.

    Docker Compose also now runs on Windows and is included in Docker Toolbox for Windows.

    Running Containers on a Laptop or in the Cloud with Docker Toolbox

    Docker Toolbox includes all the latest elements of Docker for a Mac or Windows workstation, along with Machine 0.5, which creates Docker Engines on a laptop. It also includes pluggable drivers that allow users to run Docker containers on cloud or virtualization providers, or to write custom drivers for services that don’t yet have them.
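
    As an illustration, spinning up a local Engine with Machine and pointing the Docker client at it looks roughly like this (the machine name dev is arbitrary, and the virtualbox driver assumes VirtualBox is installed locally):

        # Create a VirtualBox-backed Docker host named "dev"
        docker-machine create --driver virtualbox dev

        # Configure the current shell to talk to that Engine
        eval "$(docker-machine env dev)"

        # Containers now run inside the "dev" VM
        docker run --rm hello-world

        # The same workflow works against cloud drivers, for example DigitalOcean:
        #   docker-machine create --driver digitalocean --digitalocean-access-token <token> cloud-dev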

    This first ran at http://www.thewhir.com/web-hosting-news/docker-1-9-offers-improved-networking-storage-features

    7:00p
    National Science Foundation Sponsors $5M Federated Cloud Project


    This article originally appeared at The WHIR

    The National Science Foundation is sponsoring a $5 million project to build a federated cloud of data infrastructure building blocks to support data scientists and engineers working with big data. The project, called the Aristotle Cloud Federation, is led by Cornell University’s Center for Advanced Computing.

    The federated cloud will be shared by seven science teams with over 40 global collaborators and will be deployed at Cornell, the University at Buffalo (UB), and the University of California, Santa Barbara (UCSB). The teams will study earth and atmospheric sciences, finance, chemistry, astronomy, civil engineering, genomics, and food science; they were chosen to demonstrate the value of sharing resources and data across institutional boundaries, even with diverse data analysis requirements and cloud usage modalities.

    Earlier this week the NSF announced a grant of over $5 million to establish four regional data science innovation hubs at US universities. US Ignite received $6 million from the NSF in September to support local high-speed networks.

    “This award continues NSF’s multi-year strategy to stimulate exploration of scalable and sustainable data infrastructure models that facilitate collaborative research across disciplines and institutions,” said Amy Walton, Program Director, Advanced Cyberinfrastructure Division, NSF. “By experimenting with cloud usage metrics, collaborating with a commercial cloud vendor, and exploring pricing/trading allocation mechanisms, the project will provide valuable information about how the innovations work in a range of situations, and how this ‘market approach’ integrates within the larger research ecosystem.”

    The project will use metrics from UB’s XDMoD (XD Metrics on Demand) and UCSB’s QBETS (Queue Bounds Estimation Time Series) to predict where a given workload is best run, so that the federated cloud is used efficiently. It also involves implementing a new allocations and accounting model that allows utilization data across federated sites to be tracked and used as an exchange mechanism.

    When demand exceeds the federation’s own capacity, workloads will burst to AWS, which will collaborate with federation developers and scientists.

    This first ran at http://www.thewhir.com/web-hosting-news/national-science-foundation-sponsors-5-million-federated-cloud-project

    7:56p
    Friday Funny: Data Center Thanksgiving

    Dear Data Center Knowledge readers and fans of data center humor, please note that we will be switching from a weekly publishing schedule of Kip and Gary Friday Funnies to a monthly one. From now on, the Friday Funny will appear on the first Friday of every month. Here’s this month’s edition:

    Thanksgiving comes to Kip and Gary’s data center early!

    Here’s how it works: Diane Alber, the Arizona artist behind Kip and Gary, draws a cartoon, and we challenge our readers to submit the funniest, most clever caption they can think of. Then we ask our readers to vote for the best submission, and the winner receives a signed print of the cartoon.

    Congratulations to Darrell, whose caption won the “Hole in the Wall” edition of the contest. His caption was: “The wall was between me and the coffee machine.”
    Some good submissions came in for the Halloween edition – now all we need is a winner. Help us out by submitting your vote below!


    For previous cartoons on DCK, see our Humor Channel. And for more of Diane’s work, visit Kip and Gary’s website!

    10:27p
    Amazon to Launch its First Cloud Data Centers in UK

    Amazon Web Services, the e-commerce giant’s cloud infrastructure services arm, is planning to establish cloud data centers in the UK.

    Expected to come online late next year or early in 2017, this will be the company’s first cloud region in the country and third in Europe, the company’s CTO Werner Vogels wrote in a blog post. The other two are in Germany and Ireland.

    An AWS region usually consists of a cluster of data centers interconnected by a wide area network. The company has had cloud data centers in Dublin for several years and launched its first Frankfurt data center last year.

    As Amazon and its chief competitors in the cloud services market, such as Microsoft, IBM, and Google, race to capture market share, they invest billions of dollars in data center infrastructure around the world every quarter.

    Cloud service providers, and especially Infrastructure-as-a-Service providers, compete on the basis of global reach, feature set, and pricing.

    As these service providers go after enterprise customers around the world, it becomes increasingly important to have physical infrastructure that’s local to where the customers are located.

    Physical proximity improves performance of enterprise applications, which often use data stored both in enterprise data centers and in the cloud. It also addresses “data sovereignty” concerns that have heightened since former US National Security Agency contractor Edward Snowden’s disclosures of widespread government surveillance of electronic communications.

    Physical location of data has become even more important since the European Union’s highest court struck down the so-called “safe harbor” rules governing transfers of user data between data centers in Europe and the US. The ruling has added pressure on cloud service providers like Amazon and its peers to be able to store European users’ data in European data centers.

    Today, there are 11 AWS regions around the world. Earlier this week Amazon announced plans to establish a new region in South Korea.

    The company is working to bring online new regions in China, India, and Ohio next year.

