Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, August 19th, 2015

    12:00p
    Why QTS Dished Out $326M on Carpathia Hosting

    Scale and breadth of services offered appear to be two of the key attributes a data center provider must have to succeed in today’s market, and it’s both of those things that QTS Realty Trust was after when it acquired competitor Carpathia Hosting in a $326 million deal announced in May.

    Besides about 230 additional customers, the acquisition added substantial managed hosting capability to the QTS product portfolio and more than doubled the number of data centers in its fleet, including, for the first time, facilities overseas. International markets where QTS now has a presence include Toronto, London, Amsterdam, Hong Kong, and Sydney.

    Consolidation in the data center provider space has been a running theme of the last several years, and the deals have usually revolved around bulking up a company’s ability to be a one-stop shop for international data center infrastructure and the full gamut of services. Digital Realty bought Telx to expand its retail colocation and interconnection services, Equinix bought TelecityGroup to cement its position as the biggest player in Europe, and NTT bought a majority stake in RagingWire to grow its US footprint and acquired e-shelter in Germany to expand in the European market, to name a few of the most recent high-profile examples.

    The one thing that stands out about the QTS-Carpathia deal is Carpathia’s sizable federal government business. QTS has put a lot of effort into growing its play as a government data center service provider in recent years, which the acquisition boosted in a big way.

    Before the acquisition, government deals contributed a single-digit percentage of QTS’s total revenue, Dan Bennewitz, the company’s COO, said in an interview. That portion is now about 15 percent, including government customers at the federal, state, and local levels, he said.

    The deal brings additional authorizations for serving government clients, including FedRAMP, a must for providing cloud services to federal agencies, and HIPAA compliance, a set of security and privacy rules for individually identifiable health information.

    Much of the demand in the public sector is for hybrid services, John Lind, VP of federal markets at QTS, said. These deals are usually done through systems integrators, and QTS sees a lot of requests for proposals from integrators that include some hardware in an on-prem facility, some managed hardware in a colocation data center, and some cloud services, he said.

    The federal government’s “cloud-first” push is driving a lot of this, and so is the desire to shift IT from being a capital expense to an operational one, according to Lind.

    But government business wasn’t the only reason QTS acquired Carpathia. Another reason was its managed and cloud services portfolio, which took the size of QTS’ cloud and managed services from 10 percent of total revenue to more than 25 percent, Bennewitz said.

    Carpathia’s bread-and-butter is its managed hosting business. “They’re primarily a managed hosting company,” he said.

    The acquisition added 13 data centers to the 12 QTS had before. Unlike QTS, however, Carpathia leases its data centers. The fate of those leases will be decided on a case-by-case basis. Where it makes sense, the sites will be consolidated into the massive QTS facilities once leases expire. QTS will renew leases in locations that fit well strategically, Bennewitz said.

    3:00p
    Intelligent Controls: A Simple Way to Optimize Your Data Center Thermal Management System

    John Peter Valiulis is Vice President of North America Marketing, Thermal Management, for Emerson Network Power.

    Organizations are currently adopting a number of new thermal management strategies and technologies to remove heat from the data center while achieving capital and operational savings. One of the most effective strategies is optimizing existing thermal management systems with intelligent controls that span both the unit and system levels to enable greater availability, efficiency and decision-making.

    A typical misconception is that all controls require significant customization to either the cooling units and components or building management systems. This is not necessarily the case. Some thermal management controls that are integrated into certain air handling units designed for data centers require little if any customization. In many cases, they simply need to be “turned on” and utilized to realize benefits.

    A common issue is that thermal management systems have typically been designed with peak IT heat loads in mind. However, most data centers rarely operate at peak load. As a result, the output of the thermal management system does not match the varying IT load, delivering more or colder airflow than most areas of the data center require, and at times not delivering enough airflow to sufficiently protect the IT equipment.

    Intelligent controls enable data centers to more easily reach and maintain the optimal balance point of matched cooling capacity and IT load. They accomplish this by monitoring the data center environment through wired or wireless sensors and controlling the operation of thermal management systems. Data center managers are able to dynamically adjust airflow patterns by controlling the speed of variable speed fans or drives within the thermal management units to allow cooling unit capacities to adapt quickly to changing room conditions.
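
    The kind of feedback loop described above can be pictured with a minimal sketch. The sensor and fan interfaces (read_supply_temp, set_fan_speed) are hypothetical placeholders for whatever sensor bus and variable-speed drive a real unit exposes; this is an illustration of the concept, not a vendor’s control firmware.

    ```python
    # Minimal sketch of a sensor-driven fan-speed loop. read_supply_temp() and
    # set_fan_speed() are hypothetical placeholders, not a real unit's API.
    import time

    SET_POINT_C = 24.0                  # desired supply-air temperature
    GAIN = 5.0                          # percent fan speed per degree C of error
    MIN_SPEED, MAX_SPEED = 30.0, 100.0

    def control_loop(read_supply_temp, set_fan_speed, interval_s=10):
        speed = 60.0                    # starting fan speed, percent
        while True:
            error = read_supply_temp() - SET_POINT_C   # positive = too warm
            speed = max(MIN_SPEED, min(MAX_SPEED, speed + GAIN * error))
            set_fan_speed(speed)
            time.sleep(interval_s)
    ```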

    Spanning Unit and System Levels

    The most effective way to create an integrated cooling environment with intelligent controls is to use a technology that spans both the unit and system levels, and even integrates at the data center infrastructure management (DCIM) level for capacity and utilization monitoring. This type of system is ideal for allowing managers to protect, harmonize, and optimize thermal systems more intuitively.

    Typically, before intelligent controls were available, data centers would often attempt to create an integrated cooling environment by pairing traditional unit controls with building management systems (BMS), with the BMS acting in the supervisory role. Unfortunately, this does not provide an effective, efficient option. Since the unit controls and the BMS are two very different systems, it requires expensive programming and customization to tie them together into one system. Even once they are connected and sharing information, much of the information goes unutilized and is not robust enough for data center managers to develop effective strategies for cooling.

    Utilizing intelligent control capabilities with data center thermal management systems to gain a unified cooling environment can yield cooling energy savings of up to 50 percent, depending on data center specifications and existing equipment. Intelligent controls integrated into the next generation of thermal management solutions (new economizers, air handling units, and free cooling chillers) have been shown to help data center managers achieve an annual mechanical PUE under 1.2.

    Unit-Level Control

    At the cooling unit level, the primary focus is on protection. Intelligent controls enhance data center protection by providing local access to unit-level functions and operational data and by auto-tuning key operating parameters, such as fan speed, compressor utilization, and economization. If oscillations outside the set points are detected, the control algorithms adjust operations accordingly.

    For example, automatic cascade and lead/lag routines activate and deactivate cooling units based on room load to match cooling needs. Also, if out-of-tolerance conditions are detected, such as refrigerant pressure approaching unsafe thresholds, the intelligent controls lower fan speed and compressor capacity to avoid a unit shutdown and ensure continued operation.
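
    The cascade and lead/lag behavior can be summarized as a staging rule: run just enough units, in a fixed lead/lag order, to cover the current room load, and hold the rest in standby. The sketch below is a hypothetical illustration of that rule, with made-up capacity and headroom figures, not vendor logic.

    ```python
    # Hypothetical illustration of lead/lag staging. The capacity and headroom
    # numbers are made up; "units" is any ordered list of objects exposing
    # start() and standby() methods, with units[0] acting as the lead unit.
    import math

    def stage_units(room_load_kw, units, unit_capacity_kw=100.0, headroom=1.2):
        needed = max(1, math.ceil(room_load_kw * headroom / unit_capacity_kw))
        for i, unit in enumerate(units):
            if i < needed:
                unit.start()      # lead unit first, then lag units as load grows
            else:
                unit.standby()    # extra units wait in standby
    ```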

    While protection is the primary function, intelligent controls also provide efficiency gains at the unit level. For instance, when used with air handling units that employ economization, such as an indirect evaporative free-cooling unit, intelligent controls optimize its components for the changing data center loads; data center temperature and humidity conditions; ambient conditions outside the data center; and chiller plant operation (in the case of chilled water systems).

    With an indirect evaporative free-cooling air handling unit, which uses a heat exchanger and features two air streams, the hot data center return air (primary stream) is taken through the heat exchanger where it is cooled by outside scavenger air (secondary stream) that has been cooled or preconditioned by evaporating water sprayed onto the heat exchanger. A blower then circulates the cooled primary air stream throughout the data center, while the secondary air stream is exhausted outdoors.

    Intelligent controls at the unit level operate the scavenger fan air stream differently, depending on the unit’s current mode of operation, to control the supply air to the user-adjustable set point. During cold ambient conditions, when the unit is operated in dry mode, the unit controller adjusts cooling capacity by modulating the scavenger fan to the desired leaving air set point. The scavenger fan increases speed to increase cooling capacity or slows to reduce capacity.

    When outdoor temperatures rise and the unit is unable to achieve the desired set point efficiently by only using the heat exchanger in dry mode, the intelligent controls activate the unit pumps and water is sprayed over the heat exchanger. This increases the capacity of the heat exchanger by bringing the outdoor air down near the wet bulb condition.
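
    Put simply, the controller modulates the scavenger fan first and only starts spraying water once the fan alone can no longer hold the set point. The following is a simplified, hypothetical rendering of that sequence; the step sizes and limits are illustrative.

    ```python
    # Simplified, hypothetical dry/wet mode logic for an indirect evaporative
    # free-cooling unit. Returns the mode, a new scavenger fan speed (percent),
    # and whether the spray pumps should run.
    def select_mode(supply_temp_c, set_point_c, scavenger_fan_pct):
        if supply_temp_c > set_point_c:
            if scavenger_fan_pct < 100.0:
                # Dry mode: more scavenger airflow adds heat-exchanger capacity.
                return "dry", min(100.0, scavenger_fan_pct + 5.0), False
            # Fan is maxed out: wet the heat exchanger to approach the outdoor
            # wet-bulb temperature and gain extra capacity.
            return "wet", scavenger_fan_pct, True
        # At or below set point: back the fan off to save energy.
        return "dry", max(20.0, scavenger_fan_pct - 5.0), False
    ```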

    System-Level Control

    At the thermal management system level, intelligent controls primarily manage efficiency, providing insight for action. They enable machine-to-machine communication between multiple units to prevent them from working at cross-purposes and allow the system as a whole to more easily reach and maintain the optimal balance point of matched cooling capacity and IT load. This level of teamwork is accomplished through wired or wireless sensors and advanced algorithms that automatically adjust air flow, temperature and economizer operations based on IT loads and outdoor conditions to optimize efficiency and protection.

    This type of machine-to-machine communication can be used in small and large rooms. In small rooms with balanced heat loads, the designated “master” thermal management unit determines which operation the system is to perform (cooling, heating, humidifying or dehumidifying) and how much of the operation each individual unit is to perform (none, partial or full capacity). In large rooms with unbalanced heat loads, the master unit averages all network temperature and humidity sensor readings and determines the operation the system is to perform. Each individual unit determines how much of the operation to perform based on its local sensor readings.
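
    For the large-room case, the division of labor amounts to this: the master picks the operation from the network-wide average, and each unit scales its own output from its local sensors. Below is a hypothetical sketch of that split, with illustrative set points rather than vendor defaults.

    ```python
    # Hypothetical sketch of teamwork mode in a large, unbalanced room. Set
    # points and scaling ranges are illustrative, not vendor defaults.
    def master_decide(sensor_temps_c, set_point_c=24.0, deadband_c=1.0):
        avg = sum(sensor_temps_c) / len(sensor_temps_c)
        if avg > set_point_c + deadband_c:
            return "cooling"
        if avg < set_point_c - deadband_c:
            return "heating"
        return "hold"

    def unit_capacity_pct(operation, local_temp_c, set_point_c=24.0, full_range_c=4.0):
        if operation != "cooling":
            return 0.0
        # Each unit ramps 0-100 percent capacity over a few degrees of local error.
        error = local_temp_c - set_point_c
        return max(0.0, min(100.0, 100.0 * error / full_range_c))
    ```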

    In certain cases, four units with variable capacity fans in teamwork mode can operate approximately 56 percent more efficiently than four fixed speed units operating autonomously. At extremely low loads, the controls can place some units in standby mode for further savings.

    There are a number of fundamental steps data center managers can take to ensure their thermal management systems are intelligent and self-optimizing, enabling them to achieve the highest levels of efficiency and availability. One of the easiest is to implement intelligent controls – for the simple reason that, in many cases, it is as easy as “turning on” the controls found within many of today’s thermal management units.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:58p
    Lightning in Belgium Disrupts Google Cloud Services

    Not even the mighty Google data centers are immune to acts of God, it turns out.

    A series of successive lightning strikes in Belgium last Thursday managed to knock some cloud storage systems offline briefly, causing errors for some users of Google’s cloud infrastructure services.

    The lightning struck the electrical systems of one of three Google data centers in St. Ghislain, a small town about 50 miles southwest of Brussels. The data center hosts Google Compute Engine’s europe-west1-b zone, which experienced issues as a result.

    Besides failover systems that switch to auxiliary power when the primary power source goes offline, servers in Google data centers have on-board batteries for extra backup. That was the case with the servers supporting Persistent Disk, the cloud storage service that acts like Network Attached Storage, or storage that is independent of compute.

    But some of the servers failed anyway because of “extended or repeated battery drain,” according to the company’s incident report. “In almost all cases the data was successfully committed to stable storage, although manual intervention was required in order to restore the systems to their normal serving state,” the report read.

    Google engineers estimated that about five percent of persistent disks in the zone saw at least one I/O read or write failure over the course of the roughly five days the problems appeared. A tiny fraction of the persistent-disk space in the zone lost some data permanently: 0.000001 percent, according to Google.

    The company’s infrastructure teams are currently in the process of replacing storage systems with hardware that’s more resilient against power failure, and most Persistent Disk storage is already running on the new hardware, Google said.

    In a piece of advice cloud service providers commonly offer following cloud outages, Google reminded users that it has multiple cloud regions around the world and multiple isolated zones within each region precisely so that users can set up resilient infrastructure that can fail over from one zone to another in case of a single-zone outage.

    Google Compute Engine has three regions: Central US in Council Bluffs, Iowa, Western Europe in St. Ghislain, and East Asia in Changhua County, Taiwan. There are four zones in the Central US region and three each in Western Europe and East Asia.

    7:11p
    China: Supercomputer Back Online Following Explosion

    Government-controlled media in China have reported that Tianhe-1, one of the fastest supercomputers in the world, is back online after several days of downtime caused by last week’s massive explosion in Tianjin City.

    The reports that the system has been put back in service come from People’s Daily and China Youth Daily, official news outlets of the Chinese Communist Party and the Communist Youth League of China, respectively.

    Chinese authorities last week arrested ten senior executives of Tianjin Dongjiang Port Rui Hai International Logistics, the chemical storage and transportation firm that owned the warehouse complex where multiple powerful explosions on August 12 killed at least 114 people, according to the most recent official estimates, CNN reported.

    Yang Gongliang, director of China’s work safety agency and a former vice mayor of Tianjin, is also under investigation in connection with the blasts.

    It is assumed the explosions were caused by chemicals stored at the facility, but few details are available, and the investigation is ongoing.

    TIANJIN – AUGUST 17: Rescuers work at the blast site during the aftermath of the warehouse explosion on August 17, 2015 in Tianjin, China. The death toll has risen to 114 following last Wednesday night’s explosion at a warehouse in the Binhai New Area of Tianjin. (Photo by ChinaFotoPress/Getty Images)

    Researchers at the National Supercomputing Center in Tianjin shut down the system following the blast to prevent damage to one of the state’s crown jewels, which is 24th on the most recent edition of Top500, the list of the fastest supercomputers in the world. The system topped the list between October 2010 and June 2011.

    Its younger sibling, Tianhe-2, has been the number-one system on the list since June 2013. Tianhe-2, known in English as Milkyway-2, is housed at the National Super Computer Center in Guangzhou.

    Tianhe-2 is powered by a combination of Intel Xeon E5 CPUs and Xeon Phi co-processors. Tianhe-1 is powered by older-gen Xeon CPUs and GPU accelerators by NVIDIA.

    It’s unclear how long China will be able to maintain its supercomputing lead, since the US government banned Intel from selling its powerful Xeon chips to Chinese supercomputer makers in April. The ban applies to other US processor suppliers too.

    7:42p
    Alibaba to Launch Singapore Cloud Data Center Next Month

    Aliyun, the cloud services arm of Chinese web giant Alibaba, has officially announced plans to launch a cloud data center in Singapore in September.

    The tiny island nation has become one of the major business and technology gateways between key Asia Pacific markets and the rest of the world. As such, its data center industry is booming. A recent analyst estimate pegged the size of Singapore’s data center colocation market at $1 billion.

    Aliyun has a partnership with Singtel, a Singapore telco that’s also the biggest data center provider on the island.

    In addition to being the location of Aliyun’s seventh cloud data center, Singapore will serve as headquarters for the company’s rapidly expanding international operations.

    Alibaba recently announced a $1 billion investment program for its cloud services business, including cloud data centers in Silicon Valley, a second one in a yet-undisclosed US location, and additional facilities in Europe, the Middle East, and Asia. Its Asian data center plans include Singapore and Japan.

    “Singapore is a natural destination to be our headquarters for overseas expansion,” Aliyun VP Sicheng Yu said in a statement. “The city state is a natural springboard into the Asia Pacific region, not only for us, but for our target audience.”

    Aliyun data centers already online are in China, Hong Kong, and Silicon Valley.

    The cloud data center in Singapore will support the gamut of Aliyun’s Infrastructure-as-a-Service offerings, including cloud compute, relational database, load balancing, caching, storage, NoSQL database, and security.

    Aliyun claims to have grown revenue from its cloud computing and internet infrastructure business more than 100 percent year over year in a recently completed quarter.

    9:34p
    Dell to Ship Servers With Scality’s Software Defined Storage

    In another sign that the locus of control over data center storage is moving to the server, Dell this week announced an agreement with Scality under which it will resell the latter’s software-defined storage, which can manage petabytes of data. Under the terms of the agreement, Dell will offer Scality’s object-based storage platform, Ring, on its PowerEdge servers running Linux.

    Travis Vigil, executive director for Dell Storage, said the alliance is being driven by the fact that customers are now looking for ways to more easily scale to petabytes of storage.

    “We want to make it simpler for customers to buy a complete range of software-defined software from Dell,” said Vigil. “The issue is that existing file systems have limitations.”

    Erwan Menard, COO of Scality, said its approach to object storage differs from others in that it can still present a familiar file system option to an application. Underneath that file system, however, is an object storage system that enables applications to scale gracefully, said Menard. In fact, thanks to Scality Ring, a volume can now be any size, he said, eliminating all the management headaches associated with managing legacy file systems.

    The Ring architecture provides native REST, S3, SMB, NFS, and OpenStack support in a single system, which Menard noted makes it simpler for IT organizations to transition between legacy file systems and more modern data center storage architectures. The software requires a minimum cluster size of six storage servers that can then be scaled out to span thousands of x86 servers.
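
    Because Ring exposes an S3-compatible interface, an application that already speaks S3 can in principle be pointed at a Ring endpoint with nothing more than a change of client configuration. The snippet below is a generic illustration using boto3; the endpoint URL, credentials, and bucket name are hypothetical placeholders, not documented Scality values.

    ```python
    # Generic illustration of talking to an S3-compatible object store. The
    # endpoint URL, credentials, and bucket name are hypothetical placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://ring-s3.example.internal",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    s3.put_object(Bucket="archive", Key="logs/2015-08-19.log", Body=b"example payload")
    obj = s3.get_object(Bucket="archive", Key="logs/2015-08-19.log")
    print(obj["ContentLength"])
    ```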

    Because Ring is based on a peer-to-peer architecture that distributes both user data and the associated metadata across server nodes, the software itself has no single points of failure and requires no downtime during any upgrades, said Menard.

    Naturally, the pace of transition to object storage systems will vary widely from company to company. Menard said that partnering with Dell makes available an entire services team that will soon have the expertise needed to help customers make that transition.

    As IT organizations make the shift to public and private clouds, it’s become apparent that from a scale perspective legacy file systems are quickly becoming too complex to manage. As a result, the shift toward more agile cloud environments generally forces IT organizations to take a hard look at object-based data center storage systems in one form or another.

    Of course, most cloud service providers have for the most part already made that move. The issue now for many IT organizations is finding a way to replicate that same functionality inside private clouds that need to be able to support both legacy and modern applications with differing storage requirements.

    10:20p
    Portmapper Helps Hackers Leverage Web Hosts to Amplify DDoS Attacks

    This article originally appeared at The WHIR

    Web hosts, gaming hosts, and internet infrastructure providers are becoming unwitting participants in a new type of amplified Distributed Denial-of-Service (DDoS) attack, one that boosts attack traffic to around 20 times its original size on average.

    The DDoS vector that uses the Portmapper service to amplify DDoS traffic appeared last month, according to a blog post this week from security researchers at Level 3.

    Portmapper (also known as rpcbind, portmap, or RPC Portmapper) helps a client find the port of the appropriate Remote Procedure Call (RPC) service on a server, and it is frequently left reachable from the open internet.

    Portmapper can run on both TCP and UDP port 111. In these attacks, attackers send spoofed requests via UDP so that the amplified responses are directed at the victim. Level 3 tested the exploit by sending 68-byte queries, which produced responses ranging from 486 bytes to 1,930 bytes, an amplification factor of between 7 and 27 times.
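
    The amplification Level 3 measured can be checked conceptually by sending a small portmapper DUMP request over UDP to a host you administer and comparing request and response sizes; a 40-byte RPC payload plus 28 bytes of IP and UDP headers is consistent with the 68-byte query figure. The sketch below hand-builds that ONC RPC call and is intended only for auditing your own systems.

    ```python
    # Sketch: measure the UDP amplification factor of a portmapper DUMP call
    # against a host you administer. The request is a bare ONC RPC CALL to
    # program 100000 (portmapper), version 2, procedure 4 (DUMP), with null auth.
    import socket
    import struct

    def portmap_amplification(host, timeout=2.0):
        request = struct.pack(
            ">10I",
            0x1234ABCD,  # xid (arbitrary transaction ID)
            0,           # message type: CALL
            2,           # RPC protocol version
            100000,      # program number: portmapper
            2,           # program version
            4,           # procedure: PMAPPROC_DUMP
            0, 0,        # credentials: AUTH_NULL, zero length
            0, 0,        # verifier: AUTH_NULL, zero length
        )
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            sock.sendto(request, (host, 111))
            response, _ = sock.recvfrom(65535)
        except socket.timeout:
            return None  # closed, filtered, or not answering over UDP
        finally:
            sock.close()
        return len(response) / len(request)

    # Example, against a test host you own:
    # print(portmap_amplification("192.0.2.10"))
    ```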

    Portmapper DDoS amplification works in a similar way to other known amplified (or reflective) DDoS attacks that abuse UDP-accessible internet services. Others include Chargen (UDP port 19), NetBIOS name service (UDP port 137), and SSDP (UDP port 1900).

    Level 3 has seen the use of this Portmapper DDoS amplification vector increase steadily since its appearance in late June.

    “Clearly the success of using this method for attacks is growing aggressively,” Level 3 writes. “However, when Portmapper’s global traffic use is compared with the other popular UDP services, it is clear that the global volume of traffic is still small…. [I]t is a great time to begin filtering requests and removing reflection hosts from the internet before the attack popularity grows larger and causes more damage.”

    Level 3 recommends that server administrators review their publicly available internet services and disable Portmapper, along with NFS, NIS, and all other RPC services, on the open internet. Services that need to remain available should sit behind firewalls that block unauthorized IP addresses or be switched from UDP to TCP only.

    However, a host or infrastructure provider that locks down their own servers can’t account for servers run by other providers that have not secured themselves. Even if they’re not participants in amplified DDoS attacks, hosts can still be targets of DDoS attacks, and need to have the proper mitigation technologies in place.

    “Disabling or blocking internet facing RPCbind/portmap services is a trivial task on any single system but it is unlikely to occur anytime soon on the potentially millions of vulnerable systems accessible on the internet today,” Ashley Stephenson, CEO of security firm Corero, said in a statement provided to The WHIR. “In the meantime, organizations, regardless of industry can protect themselves against RPCbind/portmap amplification with real-time DDoS defense mechanisms, designed to detect and defeat these types of attack before they can impact their networks, or their customers.”


