Data Center Knowledge | News and analysis for the data center industry

Tuesday, January 6th, 2015

    2:10p
    Quanta Expands Presence in China and Japan

    Quanta Cloud Technology, which designs and manufactures custom hardware for web-scale data center operators, such as Facebook, has opened offices in China and Japan to provide sales and support services to customers in the two countries.

    QCT is a subsidiary of the Taiwan-based manufacturer Quanta Computer. Until it opened the new offices in December, it had facilities in Taiwan and Fremont, California.

    While Quanta initially got into the data center hardware business by making IT gear to order for big suppliers such as Dell and HP, several years ago it started making hardware and selling it directly to the big data center operators.

    But the company has also been making efforts to sell off-the-shelf hardware to smaller cloud data center operators that have more modest infrastructure needs than the web-scale giants do.

    Quanta has been involved with the Open Compute Project, the Facebook-led open source data center and hardware design initiative. The project has become a valuable point of access for vendors to big data center operators, which in addition to Facebook also include Microsoft, Goldman Sachs, Fidelity Investments, Baidu, Yandex, and Rackspace, among others.

    QCT General Manager Mike Yang said the company opened the new offices as a response to demand from cloud data center operators in the two important markets. “Our new offices in Japan and China will help us respond to the increasing demand for our products as we bring disruptive changes to the data center ecosystem in these two markets,” he said in a statement.

    2:10p
    OpenStack Private Cloud Provider Blue Box Raises $4M

    Seattle-based on-demand private cloud provider Blue Box has closed a Series B financing round, adding another $4 million to the previously announced $10 million.

    The company announced general availability of its hosted OpenStack private cloud services in May. It’s an alternative to building and managing your own OpenStack cloud. Blue Box delivers its services from data centers in Seattle, Ashburn, Virginia, and Zurich.

    The funding will go toward expanding engineering, sales, marketing, and business development teams. The company also plans to expand its channel program, which will allow other service providers to launch OpenStack-based cloud services for their customers.

    “The combination of our unique IP and true operating experience makes it possible for Blue Box to deliver consistent, reliable, efficient, and agile private cloud infrastructure as a service directly and through channel partners, and it’s that model we’re moving to implement in 2015 with our partners,” Blue Box CEO Matthew Schiltz said in a statement.

    The second strategic investor to join the round is a large integrated telecommunications provider that plans to deliver Blue Box Cloud services to its customers. Blue Box did not disclose the investor’s name.

    Additional investors in the round include Voyager Capital and Founders Collective, as well as the Blue Box executive team.

    Blue Box raised a $4.3 million Series A led by Voyager Capital in 2012.

    Investment continues to flood into the OpenStack ecosystem, as several providers look to put their own spin on the popular open source cloud architecture. One example is CloudScaling, which has an OpenStack-powered cloud offering focused on turnkey solutions. Another managed OpenStack private cloud provider, Metacloud, was acquired by Cisco in September.

    “VC investment in the OpenStack ecosystem has been vibrant for some time now,” said Jonathan Bryce, executive director of the OpenStack Foundation, in a release. “What’s new is an emerging interest among service providers and other strategic investors who back companies with products built on OpenStack, with an eye toward reselling OpenStack-powered services to their existing customer bases. It’s a new milestone in the evolution and maturation of our platform.”

    2:30p
    First Data Center Module Arrives at Keystone NAP’s Steel Mill Site

    Keystone NAP has delivered the first modular “KeyBlock” unit to its data center on the border between Pennsylvania and New Jersey. The modules, co-developed with Schneider Electric, are configured to meet each individual customer’s power, cooling, and network connectivity needs.

    There are a lot of interesting layers to the project. Keystone is converting a former steel mill in Bucks County, Pennsylvania, into a data center, leveraging the site’s existing power infrastructure and unique cooling resources. The KeyBlocks are highly configurable modules that can be stacked three high.

    The first KeyBlock arrived before the building fit-out was finished to demonstrate the capabilities to the customer — something that wouldn’t have been possible to do with a traditional approach.

    “It’s fascinating to think about how fast these things are installed and deployed versus a traditional data center,” said Keystone CEO Peter Ritz. “It allows so many things to occur in parallel. We’re finishing the preparations to the building and have the roof go up; meanwhile the trucks [carrying the KeyBlock components] show up, and there’s no disruption.”

    KeyBlock modules include racks, power distribution, and cooling gear in a variety of configurations, from a couple to a hundred racks. The solution includes scalable 2N generator and UPS capacity.

    The first KeyBlock arrived in two parts on flatbed trucks and currently sits in the former motor room of the steel mill. The foundation in the motor room is designed to withstand 80 tons.

    “If we were a typical data center building, we’d have to build out an entire floor,” Shawn Carey, senior vice president of sales and marketing at Keystone, said. “This is a more efficient use of our capital to drive a better economic solution for the customer.”

    Another Provider Goes Modular

    A number of companies, including giants like Microsoft, Google, and Amazon, have used data center modules, or containers, to add data center capacity. There are also several data center colocation providers, such as IO and Verne Global, that lease space in modules housed in massive warehouses.

    Schneider has been expanding its modular data center portfolio over the past several years, including the acquisition of Barcelona-based module maker AST Modular early last year. Among its other modular data center customers is the Chinese Internet giant Baidu.

    “The engineering team at Schneider was terrific,” said Ritz. “They were able to give us a palette of what has worked and what has been proven. Modularity is not really new. What was new was, how do we bring it in house? What if we design the chassis correctly and plug in these KeyBlocks like a blade server?”

    “It was one of the most interesting rigging jobs I’ve ever seen, and I’ve seen a lot of this stuff,” said Jason Walker, director of Schneider’s Data Center Service Provider business segment. “This project started a little bit over two years ago and is the second attempt at building a data center on the [former steel mill] property. Things really started ramping up about six months ago.”

    Second Data Center Attempt on Steel Mill Property

    The original attempt was by a company called Steel Orca. Schneider was also involved in that project.

    Ritz told the Philadelphia Business Journal that the Steel Orca project fell through, making way for the formation of Keystone, but he did not elaborate. Keystone President John Parker was involved with Steel Orca as an attorney, according to multiple online sources. The Steel Orca trademark has been abandoned, according to Justia’s trademark database.

    Walker said Keystone’s project stands to succeed because of strong financial backing and because of the modular approach, which enables the company to deploy capital incrementally. “A major reason this project got off the ground was the very nature of this prefabricated piece,” he said. “They’re retrofitting, but not in that ‘stick facility’ way. It meant Keystone reserved capital without needing to build from day one and didn’t need to do a lot of phasing.”

    Stackability, Configurability, Automation

    Stackability of the modules allows Keystone to maximize space utilization. “The architecture … allows us to take the KeyBlocks and stack them three high, two stacks right next to each other,” Carey said. Two stacks combine to create an almost-square footprint (45 feet by 42 feet). A crane will be permanently installed inside the building in the next week or so to lift and stack the modules.

    Diagram of the future Keystone NAP campus in Bucks County, Pennsylvania (Image: Keystone)

    “We have some prospects who want half a stack or three stacks at the same time,” Ritz said. The provider plans to offer power densities from 100 watts to more than 400 watts per square foot.

    Each module, secured with multi-factor authentication, acts as an individual customer’s vault. The company can provide individual power, cooling, and connectivity service level agreements for individual modules. It wants to automate every part of the configuration process as much as it can.

    4:30p
    Containment Best Practices: Tips for Maximizing Capacity and Cost Savings

    Lars Strong, senior engineer, thought leader, and recognized expert on data center optimization, leads Upsite Technologies’ EnergyLok Cooling Science Services, which originated in 2001 to optimize data center operations.

    A 2011 paper by the Green Grid provides an overview of data center containment, broadly outlining the hot aisle and cold aisle methodology and the various components of partial containment that can be used, including patch panels, grommets, the return air plenum, and cabinet configuration.

    Three years later, our understanding and management of containment solutions in data centers have greatly improved. The 2013 Uptime Institute Data Center Survey found that 72 percent of large facilities (more than 5,000 servers) and 53 percent of small facilities (fewer than 1,000 servers) had installed hot or cold aisle containment.

    Not only are containment solutions implemented at a much greater rate now, but they are also better understood, and a set of best practices and complementary airflow management techniques has emerged for maximizing containment as an effective cooling technology.

    Maximizing the Benefits of Containment

    Airflow containment solutions are a best practice for managing IT intake temperatures and keeping costs down. Since IT equipment intake temperatures in a computer room guide all efforts to improve efficiency and density, it is crucial to understand the different options. However, even after containment strategies have been implemented – partial or full, hot or cold aisle – many sites fail to take full advantage of the capacity and density improvements containment can provide.

    The first step in maximizing your containment is to understand the utilization of the computer room cooling infrastructure; this is easily done by calculating your computer room’s Cooling Capacity Factor (CCF). CCF is the ratio of total rated cooling capacity of running cooling units to the estimated heat load. Armed with this knowledge, you will be able to make informed decisions about maximizing the benefits of containment solutions in your facility.
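    As a rough illustration of the definition above, the sketch below computes CCF as the total rated capacity of running cooling units divided by the estimated heat load. The unit ratings and load figures are hypothetical examples, not measurements from any particular facility; a value well above 1.0 indicates running cooling capacity in excess of what the room needs, while a value below 1.0 indicates a shortfall.

        def cooling_capacity_factor(running_unit_ratings_kw, estimated_heat_load_kw):
            """CCF = total rated capacity of running cooling units / estimated heat load."""
            return sum(running_unit_ratings_kw) / estimated_heat_load_kw

        # Hypothetical example: four 90 kW CRAH units running against a 200 kW heat load.
        ccf = cooling_capacity_factor([90, 90, 90, 90], estimated_heat_load_kw=200)
        print(f"CCF = {ccf:.1f}")  # 1.8, i.e. 80 percent more rated cooling than the estimated load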

    Determining What is Right for You

    Facilities should determine which containment options are appropriate based on the layout of the computer room, available budget, and long-term goals. There are three primary choices to make: hot aisle versus cold aisle containment, full versus partial containment, and hard versus soft containment.

    As the names suggest, hot aisle containment encloses the hot aisle, leaving the remaining area of the room cool, while cold aisle containment has the opposite effect, maintaining appropriate intake air temperatures in the contained cold aisle and allowing discharge air to flood the rest of the computer room.

    Full containment completely encloses the open area of an aisle, while partial containment addresses specific areas of airflow (aisle ends, tops of aisles, rack gaps, etc.).

    Finally, hard containment refers to doors, baffles, partitions, and other rigid containment structures, while soft containment primarily refers to over-cabinet or aisle-end curtains.

    Implementing the Supporting Best Practices

    Data Center Knowledge’s 2012 best practices for containment explore containment as an overall cooling strategy and some specifics of hot and cold aisle containment solutions. Beyond aisle preferences, containment installations require an understanding of available cooling capacity and airflow management for the space to work to its fullest potential. There are a number of best practices that must be implemented in conjunction with the installation of any containment solution to get the full benefits:

    • Manage open areas of raised floors (proper placement of perforated tiles, seal all bypass open areas including cable penetrations, under PDUs, remote power panels, etc.)
    • Seal open areas in IT equipment enclosures (mounting rails, cabinet sides, empty rack spaces)
    • Fill open areas around enclosures (between enclosures, missing enclosures, under enclosures)
    • Reduce supply airflow rate as much as possible (balance supply flow rate to IT equipment airflow rate)
    • Increase temperature of conditioned supply air as much as possible
    • Implement containment as completely as possible across the entire computer room

    Start any airflow management optimization project by calculating the CCF for the computer room. Understanding the unique layout and architecture of each room will then reveal conditions that make certain cooling strategies preferable. After choosing the best containment option for your computer room, implementing the supporting best practices, and adjusting the controls for temperature and airflow volumes, calculate the CCF again. The results will show that relatively low-cost improvements to airflow management can lead to large savings and release large amounts of stranded cooling capacity in your data center.
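    To make the recalculation step concrete, the sketch below reuses the cooling_capacity_factor function shown earlier with purely hypothetical before-and-after numbers, illustrating how a drop in CCF translates into rated cooling capacity released for additional IT load or energy savings.

        # Hypothetical figures for one computer room (kW), before and after airflow improvements.
        heat_load_kw = 200
        ccf_before = cooling_capacity_factor([90, 90, 90, 90], heat_load_kw)  # 1.80
        ccf_after = cooling_capacity_factor([90, 90, 90], heat_load_kw)       # 1.35 with one unit idled

        # The difference is rated cooling capacity no longer needed to hold intake temperatures,
        # i.e. stranded capacity released for growth or turned into cooling energy savings.
        released_kw = (ccf_before - ccf_after) * heat_load_kw
        print(f"Released cooling capacity: {released_kw:.0f} kW")  # 90 kW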

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:30p
    How the Network Now Defines the Data Center

    Within today’s modern data center there is a pretty big disconnect when it comes to the strategic importance and business value of the data center network. Even with momentum building behind the software-defined data center (SDDC) and software-defined services, applications and networks are often ships in the night. In many cases they are designed, built, deployed, and managed separately, as discrete entities. And there is still considerable inertia when it comes to modernizing the capabilities of the data center network.

    The result is a lack of flexibility. All too often the data center network cannot easily scale to accommodate growth, cannot handle new types of traffic and workloads, cannot take advantage of cloud automation and orchestration opportunities, cannot move large volumes of data, and so on.

    This white paper from Copper River IT takes a closer look at the importance of overcoming the inertia and moving to an agile, high-performance, next-generation data center (NGDC) network, along with the three key areas that need to be addressed: the data center interconnect (DCI), cloud connectivity, and building cloud-aware networks.

    By making the move to a modern, agile data center network—one that provides optimized cloud orchestration and bisection bandwidth combined with WAN integration—enterprises and service providers can achieve multiple benefits, including the ability to:

    • Deploy applications in near-real-time
    • Try more new ideas faster
    • Survive disasters with less expense
    • Model changes in real time
    • Offer more high-bandwidth applications and services
    • Mash up services across larger metro areas
    • Reduce CapEx
    • Extract more value from commoditized network services
    • Harvest the benefits of NFV

    Many organizations are now building powerful cloud-aware networks. IT service providers spend hundreds of billions of dollars per year connecting data center systems, devices, networks, applications, and data to the cloud. To maximize this investment, it is critical to implement cloud-aware networks that adhere to open standards. Doing so reduces the cost and complexity of running a network that must scale with, and map to, software resources it does not natively integrate with.

    For example, Juniper’s Contrail, an open-source, proactive overlay SDN solution, works with existing network devices and helps address the networking challenges of self-service, automated, and vertically integrated cloud architectures. Contrail also helps address scalability and CapEx inefficiencies through an overlay virtual network: networking features such as switching, routing, security, and load balancing are moved from the physical hardware infrastructure to software running in the hypervisor kernel, managed from a central orchestration system.
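    To make the overlay idea concrete, here is a generic, minimal sketch of how an overlay virtual network carries a tenant’s Layer 2 frame across the physical underlay by wrapping it in a VXLAN header inside a UDP packet. This is not Contrail’s actual implementation (Contrail has its own control plane and supports several encapsulations), and the tunnel endpoint address and frame contents below are placeholders.

        import socket
        import struct

        VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN

        def vxlan_encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
            """Prepend the 8-byte VXLAN header: flags (I bit set), reserved bits, 24-bit VNI."""
            flags = 0x08 << 24                       # "I" flag: the VNI field is valid
            return struct.pack("!II", flags, vni << 8) + inner_ethernet_frame

        # Hypothetical use: tunnel a tenant frame to the remote hypervisor's tunnel endpoint (VTEP).
        frame = b"\x00" * 60                         # placeholder Ethernet frame
        packet = vxlan_encapsulate(frame, vni=5001)  # the VNI identifies the tenant's virtual network
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(packet, ("192.0.2.10", VXLAN_PORT))  # 192.0.2.10 is a documentation-range example

    In a production overlay this encapsulation is performed by the virtual router or switch in the hypervisor, with the central controller distributing the mappings between tenant networks, network identifiers, and tunnel endpoints.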

    Download this white paper today to learn a very important, yet simple point: To consumers of IT services and to the business itself, nothing is more important than the agility, performance, scalability, and availability of the data center network. Thus, from a strategic perspective, nothing is more important than ensuring that the data center network fully delivers on these requirements.

    6:01p
    Heatwave, Cooling Failure Bring iiNet Data Center Down in Perth

    A data center in Western Australia was knocked offline by equipment failure and record-breaking temperatures in the area. A heatwave that pushed outside temperatures to about 112°F (44°C) brought Internet service provider iiNet’s data center down.

    Those who operate data centers are very familiar with the importance of cooling: servers give off tremendous heat, and a lack of adequate cooling can lead to disastrous results.

    An unusually warm day, coupled with what iiNet said was the failure of multiple air conditioners, meant that some servers had to be shut down.

    It was the area’s hottest January day since 1991, and the heatwave is expected to continue into the week.

    iiNet is the second-largest DSL Internet provider in Australia. Email and corporate websites went down; thousands of customers ended up offline.

    Problems at the Perth data center started around 4:30pm Australian Eastern Daylight Saving Time. The outage lasted between six and seven hours.

    “We have had multiple air conditioners fail on site causing temperatures to rise rapidly,” company representative Christopher Taylor said in a forum post. “We have additional cooling in now. We will begin powering services back up once the room has cooled adequately. If we are premature the room won’t recover and risk the A/C failing again.”

    The heat necessitated a shutdown of a portion of servers in the facility as a precautionary measure. The company stated that network redundancy plans ensured 98 percent of customer broadband services were unaffected.

    The iiNet data center in Perth also saw problems in October, when the company’s email platform went down for several days.

    Heat bringing a data center down is a common occurrence, although most of the time it happens because of cooling or power infrastructure problems, and not because of outside weather.

    A data center heat spike caused a Hotmail outage in 2013. A failed software update caused the heat to rise rapidly in one part of a Microsoft data center, causing an outage of up to 16 hours for Outlook.com, Hotmail, and the SkyDrive storage service.

    Back in 2007, Rackspace suffered an outage when a traffic accident took out a transformer, causing the facility to switch over to generator power. However, two chillers failed to restart, which ultimately caused the outage.

    Data centers are designed with the outside environment in mind; heat-based outages are usually the result of an internal problem, not the weather.

    7:43p
    Rackspace Readies Customers for Final Migration from First Gen Cloud Servers


    This article originally appeared at The WHIR

    Rackspace customers still on First Generation Cloud servers based on Slicehost will start receiving notices this month of the final migration to the company’s newer OpenStack-based servers, according to a Gigaom report on Monday. Customers will receive the notice 30 days prior to the start of their migration period.

    Slicehost was acquired by Rackspace almost seven years ago, and a major migration effort followed in 2011. At the time Rackspace said it hoped to complete “all the transitions” within 12 months, or by May 2012. In August 2012, however, Rackspace CTO John Engates told The Register that it would probably take 12 to 18 months for the last customers to migrate.

    Some legacy Slicehost code has remained in use by some Rackspace customers, and it is this final piece of the old company, its remaining legacy, non-IPv6-compatible servers, that is being retired. Rackspace’s Next Generation Cloud servers run on OpenStack and are controlled via the python-novaclient tool and a new API.

    Rackspace confirmed to Gigaom that the notices emailed to customers are legitimate.

    “If you choose not to self-migrate, Rackspace will migrate your First Generation servers on your behalf at the end of the self-migration window,” Gigaom reports the notice as saying.

    Rackspace’s work with OpenStack began in 2010, and the company moved its worldwide public cloud onto the platform in 2012.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/rackspace-readies-customers-final-migration-first-gen-cloud-servers

    10:00p
    Upcoming Verizon Cloud Downtime May Be Wakeup Call for Some

    For some Verizon Cloud customers, this coming weekend’s prolonged downtime will be a test of their architecture’s resiliency; others will simply experience a period of downtime they knew could come at some point; and for some, the outage may serve as a wakeup call.

    Last week, Verizon notified users of the new Verizon Cloud services that all nodes of the cloud will undergo maintenance, beginning early morning Saturday, 10 January. The provider told users to be prepared for up to 48 hours of downtime, although the process is anticipated to take less time than that, Verizon Enterprise Solutions spokesman Kevin King said in an email.

    Unlike September’s unplanned security update to the XenServer hypervisor that necessitated widespread cloud reboots by numerous providers, including among others Amazon Web Services, Rackspace, and IBM SoftLayer, Verizon is planning to take its cloud down for routine maintenance.

    The company announced the new cloud services, built on an entirely new platform, in 2013. This weekend’s maintenance will not affect customers of the company’s “legacy” cloud platforms, which include enterprise, managed, and federal cloud services.

    “The upgrade will involve installation of routine maintenance updates to Verizon Cloud production platforms at Culpeper, Virginia, and other data center locations supporting Verizon Cloud,” King said. “Updates of this nature typically require some system downtime, and we notified customers in advance, so they could plan accordingly.”

    Unusually Long Maintenance Window

    While planned cloud outages for maintenance are common, what’s uncommon about this one is its duration and scope.

    Bill Peldzus, vice president of consultancy Cloud Technology Partners, said that even though Verizon did not anticipate the maintenance taking the full 48 hours, the worst-case scenario would make for an unusually long outage. “Something that takes you down for up to two days is fairly significant,” he said.

    Customers that have built the capability to fail over to another cloud provider into their architecture will simply get to see how well their resiliency scheme works when Verizon takes its cloud down. For customers who don’t have a disaster recovery scheme in place but do have critical applications that need to stay up around the clock, “this could be their wakeup call,” Peldzus said.

    Failover Options May Be Limited

    Big cloud providers generally have their own failover systems in place that enable customers to move their workloads from one data center to another during maintenance periods. Peldzus said it was uncommon for a provider to simply notify users that their entire cloud would not be available during a maintenance window without providing some place to transfer users’ workloads temporarily.

    The option to fail over within Verizon Cloud may not be available to customers, however, since all nodes of the cloud are being upgraded. King did not respond to a request to clarify this in time for publication.

    Understanding Cloud SLAs is Paramount

    It is likely that Verizon Cloud has some users who simply accept a potential period of downtime as part of their agreement with the provider. “It is up to the cloud user to understand the SLA of their cloud provider and design [or] plan accordingly,” Peter Roosakos, principal at Foghorn Consulting, another cloud consultancy, said in an email.

    If you have an application that needs to be up around the clock and your cloud provider practices “rolling maintenance” (taking individual zones offline one by one), then you can serve your high-availability workload from zones within that same cloud that are not offline. If your provider doesn’t offer such an operations model, you have to use multiple providers for your critical applications, Roosakos explained.
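    As an illustration of that zone-aware approach, here is a minimal sketch that steers requests away from a zone that is offline for maintenance, using a simple HTTP health check. The zone endpoints are entirely hypothetical; a production setup would more likely rely on load balancers, DNS failover, or the provider’s own traffic management.

        from urllib.request import urlopen

        # Hypothetical health-check endpoints, one per availability zone.
        ZONE_ENDPOINTS = {
            "zone-a": "https://app.zone-a.example.com/health",
            "zone-b": "https://app.zone-b.example.com/health",
            "zone-c": "https://app.zone-c.example.com/health",
        }

        def healthy_zones(timeout_seconds: float = 2.0) -> list[str]:
            """Return the zones whose health endpoint answers with HTTP 200."""
            healthy = []
            for zone, url in ZONE_ENDPOINTS.items():
                try:
                    if urlopen(url, timeout=timeout_seconds).status == 200:
                        healthy.append(zone)
                except OSError:
                    pass  # unreachable, e.g. the zone is down for rolling maintenance
            return healthy

        # Route new traffic only to zones that are currently up.
        print("Serving from:", healthy_zones())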

    “As you’re implementing your applications on external cloud providers, you should always plan for and architect for failure,” Peldzus said.

    While the multi-provider architecture is the best option for mission critical applications, managing such an infrastructure is significantly more complex. Ultimately, the nature of the business and the application should dictate how much thought each company puts into architecting for resiliency.

