Data Center Knowledge | News and analysis for the data center industry
Monday, March 21st, 2016
4:00a | QTS to Launch Huge Chicago Data Center in July

QTS Realty Trust is on the final stretch of the first-phase build-out at its first Chicago data center, in the former Chicago Sun-Times printing plant. The company bought the 317,000-square-foot building in 2014.
Chicago is an active data center market with tight supply downtown but plenty of capacity in the suburbs, according to the latest report on North American data center markets by Jones Lang LaSalle. While the QTS building is not downtown, it is only about three miles west of it, and that location is a significant advantage over suburban facilities 30 to 40 miles away, said Dan Bennewitz, COO for sales and marketing at QTS.
A lot of demand downtown is driven by companies wanting to be in or close to 350 East Cermak, a major network interconnection and data center hub.
 350 East Cermak in Chicago is a key hub in the region’s digital economy and one of the most connected buildings in the country. (Photo: Rich Miller)
QTS had been eyeing the Chicago data center market well before it acquired the Sun-Times building, Bennewitz said. It waited until it could buy the plant, however, because the building fit its strategy of buying large, infrastructure-rich properties cheaply and retrofitting them.
“We had our eye on the Chicago market for multiple years,” he said. “Over the last year, the opportunity in Chicago has actually gotten stronger.”
Chicago is one of several US markets where QTS expects to expand this year. The others are Richmond, Virginia; the Atlanta metro; Dallas-Fort Worth; Santa Clara, California; and New Jersey.
The company, which provides a full range of data center services, from wholesale and retail colocation space to managed hosting and cloud, is primarily targeting Fortune 500 companies in Chicago. But it is also going after healthcare, financial services, high-tech, and digital media companies, as well as government customers, Bennewitz said.

Raised floor inside the new Chicago data center by QTS (Photo: QTS)
The company expects to bring the first phase online in July. It will have about 8MW of power.
No major anchor tenant has signed up to use the data center, but QTS has already secured commitments from several colocation and managed services customers, Bennewitz said.

3:00p | Microgrid to Enable Phoenix Data Center to Unplug

An Arizona utility is building a 63MW diesel-powered microgrid in the Phoenix area that will enable a data center currently under construction to disconnect from the utility grid during high-congestion periods or other grid problems.
The utility, Arizona Public Service, designed the microgrid together with Aligned Data Centers, the subsidiary of Aligned Energy that’s building a 500,000-square-foot data center in Phoenix.
The project is a variant of demand response, in which utility customers go offline during peak-load periods in exchange for incentives from the utility.
APS will remotely control the diesel generators that will power the microgrid, according to a report by The Arizona Republic.
When it isn’t powered by the microgrid, the data center will receive grid power from a new substation being built nearby, which will be fed by four power lines connected to three different energy sources.
This is Aligned’s second data center project. The company brought its first data center online last year in Plano, Texas, featuring a modular cooling system design that enables it to scale capacity up and down and charge customers only for the capacity they use, a model common among cloud service providers but rare for physical data center services.
The cooling design was developed by Aligned’s sister company Inertech, and the data center provider said it will use a similar approach in Phoenix.
Read more: Modular Cooling System Enables On-Demand Data Center Capacity

3:00p | New Breed of Storage Providers Hopes Garbage Trucks Need Big Data Too

By WindowsITPro
If a few years ago it seemed like cavernous hard disks might finally outpace the demand for places to put all our data, the tables have since turned — and a new breed of storage startups hopes to help your business buy some room to grow.
According to IDC, the global demand for storage was 4.4 zettabytes in 2013 — but that demand is projected to skyrocket to 44 zettabytes by 2020.
Much of that data is going to live in the cloud, meaning it’s Someone Else’s Problem. But a lot of it can’t.
Sometimes, the cloud is just too expensive. Even Apple is finding that out the hard way, as it moves around its iCloud storage service providers in a bid to keep costs down while it builds its own data centers.
Other times, it’s just too much of a risk or violates compliance rules, whether because the data is too personally sensitive or too critical to the business.
So a new breed of storage providers is hoping to offer more space that’s more accessible, with a heavy focus on scaling up flash storage. The New York Times just profiled one of those companies, Pure Storage, and gave a hint at just why so many companies, including non-tech companies, are suddenly getting Big Data religion.
Pure co-founder John Hayes has found customers in some unexpected markets, like championship auto racing, where the company won over Mercedes AMG Petronas with its 16-petabyte appliance.
“Health care, manufacturing and natural resources companies can all justify owning this much storage,” one analyst told the paper. “In 10 years, a big sanitation company with sensors on its Dumpsters to manage pickups could have tens of petabytes.”
That could be a big opportunity for IT professionals — if almost every company needs Big Data, then that’s a lot of storage capacity to be managed, and even more data analysis that will need to be done. The question is how much of it will be managed by the businesses themselves, and how much will be farmed out to companies that specialize in data management and analysis.
Will every sanitation company really want petabytes of storage to manage, or will it make more sense to lean on a company that just provides sanitation-data-as-a-service?
This first ran at http://windowsitpro.com/industry/new-breed-storage-providers-hopes-garbage-trucks-need-big-data-too

4:58p | CenturyLink Continues Expanding Data Centers it May Sell

As it looks for alternatives to owning its data centers, CenturyLink continues expanding them. The company added 14MW of capacity across eight sites last year, and plans to expand four more in the first half of 2016, according to an announcement released on Monday.
When CenturyLink execs announced they were looking for ways to off-load some or all of the company’s data centers last year, they said CenturyLink would continue providing data center services. You don’t have to own facilities to do that, and many companies provide a variety of services out of data centers they lease.
But data center providers have to continuously expand their capacity for a number of reasons. Existing customers often need to expand their infrastructure within the buildings they’re already in, and it’s important to have spare capacity to capture new demand.
Besides colocation, CenturyLink provides a variety of managed hosting, cloud, network, and IT services, all of which require a lot of physical data center capacity too. The company is going after hybrid deployments, where customers combine their own infrastructure in colocation space with services further up the stack.
Read more: Why CenturyLink Doesn’t Want to Own Data Centers
Over the last five years, CenturyLink added 11 data centers to its footprint and expanded about 40 existing sites. Its total current data center footprint is about 2.6 million square feet across North America, Europe, and Asia Pacific.
CenturyLink is not the only telco giant that may sell a lot of data center capacity in the near future. Verizon is exploring alternatives to data center ownership, and so is AT&T.
Top management of the data center provider Equinix indicated on the company’s latest earnings call that it was in talks with both CenturyLink and Verizon about potentially buying their data center assets.

5:03p | The Evolving Role of DNS in Today’s Internet Infrastructure

Kris Beevers is Co-founder and CEO of NS1.
A decade or so ago, if you were building an online application, chances were it lived in physical infrastructure in a single datacenter and you managed individual servers with an esoteric set of configuration files and operator knowledge. Today, all that has changed: applications are distributed across multiple service endpoints thanks to a breadth of cloud facilities, content delivery networks, deployment automation and application technologies. And along the way, the tools you use to get traffic to your application have changed.
DNS, the “phone book” of the Internet, was in the mix then as it is now, translating your domain into IP addresses and other service information – but like the rest of your infrastructure, DNS today is vastly more dynamic and is a more important tool than ever for developers and operators to understand and leverage effectively.
Today’s most advanced online properties—think Facebook or Twitter, Amazon or Google—deliver reliably good performance regardless of a user’s location and whether they’re using a laptop or a mobile device. As a result, the expectation of fast and responsive delivery of both static and interactive content has become the norm for every online application. Users are increasingly demanding and unforgiving: Amazon calculated that a slowdown in page load time of just a single second could cost it $1.6 billion in sales each year.
DNS lookup is a critical component of an application’s performance. As the entry point into an application, the need for reliable and fast DNS lookups is obvious. Importantly, as the first indication that a user is about to interact with an application, DNS also presents a powerful opportunity to manage the performance of the application by sending users to appropriate service endpoints in today’s distributed environments.
Until recently, while application architectures and other underlying infrastructure had undergone tectonic shifts, DNS itself hadn’t kept pace and had been limited to restrictive endpoint selection techniques. Tools like round-robin DNS (rotating through a pool of interchangeable endpoints), simple health checking (shifting traffic away from failed endpoints) and simple geographic routing (attempting to send users to “nearby” endpoints) were the state of the art. In the face of modern distributed architectures, an increasingly dynamic Internet, and ever more demanding users, the need for more advanced DNS tools has rapidly emerged.
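To make that contrast concrete, here is a minimal sketch, in Python using only the standard library, of what the legacy model amounts to from a client's point of view: every address returned for a name is treated as interchangeable, and health checking is little more than seeing whether a TCP connection succeeds. The hostname and port are placeholders rather than anything referenced in this article.

```python
# Minimal sketch of the "legacy" approach described above: resolve a name,
# treat every returned address as equally good (round-robin style), and do
# a naive TCP health check before picking one. Hostname/port are placeholders.
import random
import socket

def resolve_all(hostname):
    """Return every IPv4 address the resolver hands back for a name."""
    infos = socket.getaddrinfo(hostname, 443, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

def is_healthy(ip, port=443, timeout=1.0):
    """Crude health check: can we open a TCP connection at all?"""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_endpoint(hostname):
    """Round-robin-style selection: any healthy endpoint is as good as another."""
    candidates = [ip for ip in resolve_all(hostname) if is_healthy(ip)]
    return random.choice(candidates) if candidates else None

if __name__ == "__main__":
    print(pick_endpoint("example.com"))  # placeholder domain
```

Nothing in this selection knows anything about latency, load, or where the user is, which is exactly the limitation the newer platforms described below are designed to remove.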
Modern DNS platforms, like those of today’s most advanced Managed DNS providers, enable far more complex application- and network-aware endpoint selection by leveraging real-time telemetry about infrastructure and the Internet’s health to enable active, intelligent traffic management. These platforms optimize performance by measuring and minimizing latency or packet loss between users and service endpoints, or by maximizing bandwidth throughput, or even by managing completely application-specific measures of performance. They also enable developers to leverage infrastructure more efficiently and effectively by routing around outages, and by enabling cloudbursting or shedding load to meet spikes in demand. Today’s advanced DNS platforms make automated traffic management decisions that are driven by data—often application-specific—that is ingested, aggregated and acted on in real time.
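The sketch below reduces that kind of telemetry-driven decision to its simplest form: given recent latency and packet-loss measurements per endpoint and per user region, score each endpoint and route to the best one. The region names, endpoint names, and scoring weights are invented for illustration; real platforms ingest far richer data and apply application-specific policies.

```python
# Illustrative only: telemetry-driven endpoint selection in miniature.
# The endpoint names, regions, and scoring weights are made up.
TELEMETRY = {
    # region -> endpoint -> (median latency in ms, packet-loss fraction)
    "us-east": {"dc-ashburn": (12, 0.001), "dc-dallas": (38, 0.002), "dc-frankfurt": (95, 0.001)},
    "eu-west": {"dc-ashburn": (88, 0.003), "dc-dallas": (110, 0.004), "dc-frankfurt": (9, 0.000)},
}

def score(latency_ms, loss):
    # Lower is better; weight packet loss heavily, since retransmits dominate tail latency.
    return latency_ms + loss * 10_000

def choose_endpoint(user_region, telemetry=TELEMETRY):
    """Return the endpoint with the best (lowest) score for a user's region."""
    candidates = telemetry[user_region]
    return min(candidates, key=lambda ep: score(*candidates[ep]))

print(choose_endpoint("eu-west"))  # -> "dc-frankfurt"
```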
Key Benefits of Modern DNS Platforms
Today’s next-generation DNS solutions offer intelligence and capabilities that address the challenges of modern application delivery far beyond traditional in-house or legacy solutions. Modern solutions can be leveraged in a variety of models, from high-performance, globally distributed SaaS-model Managed DNS networks to fully managed, on-premise deployments. Increasingly, an organization’s DNS systems are tightly coupled across a number of use cases for single-pane-of-glass manageability, visibility and automation of DNS and traffic management across a company’s entire infrastructure.
Beyond the intelligent traffic management capabilities and flexible deployment models of modern DNS platforms, there are a number of other motivations for shifting from a legacy DNS technology and mindset:
- Data feed integrations: Today’s most advanced DNS platforms enable you to connect your existing monitoring and analytics tools to feed data that can be used to drive intelligent traffic management decisions.
- Telemetry-driven traffic management: Beyond feeds from your existing tools, some modern DNS platforms directly incorporate their own deep monitoring and data gathering technologies, and gather telemetry straight from your systems and users to drive routing decisions.
- Visibility: In addition to acting on data, modern platforms give you great insight into your DNS, with real-time analytics about your DNS traffic and visibility into the performance of your infrastructure with respect to your audience.
- DevOps integrations: Today’s DNS technologies are addressable through full-featured APIs and DevOps tool integrations, enabling complex automation suitable for the elasticity and dynamism of a modern application infrastructure stack.
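As a concrete, if hypothetical, illustration of that last point, the sketch below updates an A record through a generic HTTP API as part of a deploy step, for example to shift traffic to the endpoints a new release is running on. The API host, auth header, and payload shape are invented stand-ins, not NS1's or any other provider's actual interface; a real integration would follow the provider's API documentation.

```python
# Hypothetical example of driving DNS from a deploy pipeline.
# The API host, auth header, and payload shape are invented for illustration.
import json
import urllib.request

API_BASE = "https://dns-api.example.net/v1"   # placeholder
API_KEY = "REPLACE_ME"                        # placeholder credential

def update_a_record(zone, record, addresses):
    """Point an A record at a new set of addresses, e.g. after a blue/green deploy."""
    payload = json.dumps({"type": "A", "answers": addresses}).encode()
    req = urllib.request.Request(
        url=f"{API_BASE}/zones/{zone}/records/{record}",
        data=payload,
        method="PUT",
        headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    # Shift app.example.com to the endpoints the new release is running on.
    update_a_record("example.com", "app.example.com", ["203.0.113.10", "203.0.113.11"])
```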
DNS isn’t what it used to be. Far beyond the basic “phone book” of the Internet, today’s DNS platforms enable application developers to leverage this ubiquitous protocol to manage and optimize application delivery and performance. As infrastructure and applications continue to evolve, and the performance and reliability demands of users become more strict, today’s DNS providers will push the envelope in enabling developers and operators to manage their traffic and optimize their delivery.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

6:12p | Are Software-Defined Data Centers Really a New Idea? No, But…

By The VAR Guy
Software-defined networking. Network functions virtualization. Virtual storage. These are the new buzzwords of the channel today. But are these trends actually as novel as they seem? Viewed from an historical perspective, not really.
SDN, NFV and scale-out storage — which we can collectively call software-defined everything, or SDx if you like acronyms — offer lots of benefits for data centers and the cloud. They abstract operations from underlying infrastructure, making workloads more portable, scalable and platform-agnostic. They also create new opportunities for building more secure infrastructure. And they can lower costs by letting you get next-generation functionality out of cheap commodity hardware.
It seems pretty certain that software-defined everything is the wave of the future. From Docker containers to carrier-grade SDN projects like ONOS, these technologies are progressing rapidly through the development and adoption stages and into production use. Some of them are not there yet, but they’re on the way.
What’s Old Is New Again
In many respects, though, software-defined everything is not really a new idea at all. Think, for example, about virtualization — a technology that’s almost as old as computers themselves, and which went mainstream in the data center world more than a decade ago.
Virtual servers — here I mean the traditional kind that use hypervisors like VMware’s or KVM, not containers — also abstract storage from the host system. They usually virtualize networking. In other words, they rely centrally on software-defined functionality, even though no one was thinking about it that way when virtualization became the next big thing in the 2000s.
Think, too, about VPNs, an even simpler technology, which has been in widespread use for a number of years. What is a VPN but a software-defined network? To be sure, traditional VPNs offered only a small slice of the functionality you can get from modern SDN infrastructure, but the core idea is the same.
Telecoms have essentially been using SDN ever since they moved from circuit-switched to packet-switched networks, too.
In a way, even plain old Network Address Translation, or NAT — the thing that lets you connect dozens of devices in your house to the public Internet without having to assign unique IPs to each of them — is also a form of software-defined networking.
As for software-defined storage, that has also long been in use. A local virtual storage volume, or a networked file system protocol like NFS, is just software-defined storage by another, older name.
What Makes Today’s SDx Different
So, if software-defined infrastructure is not actually a very new idea, what is different about it today? Is it just a buzzword that, like the cloud (which was also not a new idea when it came into vogue, by the way; the Web and even old-school Unix terminals were also basically the cloud), has become trendy without meaning much of anything?
Well, no. Software-defined everything may not be different in kind from what has been happening already for decades. But it is different in scale and sophistication. Projects like OpenDaylight are enabling a new level of flexibility in the data center. They are making software-defined storage, networking and services not just complements to physical infrastructure, but complete replacements for it.
Still, we think it’s worth bearing in mind that although software-defined everything may seem totally revolutionary, it’s firmly rooted in the past. If you want to make the most of it, you should make sure you are using it to achieve new levels of functionality — as opposed to just replacing your existing, first-generation SDx technology, like VPNs and virtual servers, with newer platforms.
This first ran at http://thevarguy.com/open-source-application-software-companies/are-software-defined-data-centers-really-new-idea-no