Data Center Knowledge | News and analysis for the data center industry

Thursday, November 27th, 2014

    1:00p
    The Cloud in 2014 and Beyond

    As we wrap up 2014, it’s time we took a look at some of the biggest cloud technologies that made an impact over the course of the year and thought about cloud predictions for 2015. I’m most likely not going to list all of the technologies that were big this year, so if you feel I missed something, feel free to add it in the comments section!

    That said, the focus on the user and on how information is delivered has allowed the modern data center, and cloud infrastructure in general, to truly evolve. We’re seeing new methods of optimization, new approaches to cloud control and entirely new ways of shaping the user experience. So what were some of the big technologies that impacted the cloud?

    • APIs (cloud apps). This has been a big one. Platforms from VMware, OpenStack, CloudStack, Eucalyptus, and Amazon all expose APIs that make it easier to connect to and through the cloud. APIs are creating intelligent infrastructure cross-connects that reduce the amount of resources required, and APIs at both the software and hardware layers will continue to make cloud communication easier at the application and infrastructure level (see the short sketch after this list).
    • Software-Defined Everything (SDx). We’re really taking off with the whole virtualization concept. Software-defined platforms revolve around virtualizing specific components – storage, networking, security, or even an entire data center platform. With technologies like SDN, we’re able to create intricately connected data centers capable of greater resiliency and business continuity.
    • The Hybrid Cloud. There is going to be a lot of blurring when it comes to cloud model definitions. The future of the cloud will pretty much see everyone adopt some type of hybrid cloud platform. Why? Firstly, most organizations are already in the cloud. Secondly, there are a lot of new options for connecting a private cloud with public cloud resources, and more companies are moving just a part of their environment into various cloud options. The reason it’ll all start to blur together is that the management framework is evolving: new cloud management solutions aim to control your cloud regardless of the platform, so hybrid, public, private and even community clouds can all be controlled from a single console.
    • Mobility. Forget about devices. The fad of defining mobility around the device is over. Now, mobility revolves around the delivery of applications, workloads and data to an ever-mobile user, on any device. In the future, the goal will be to deliver the best possible user experience regardless of the device. Here’s something we all need to come to terms with: the age of the PC, as we know it, is coming to a close. In fact, this is being written from a Surface Pro. And a keyboard. Look for a much more mobile user, and a much more mobile data layer.
    • Data, security, and compliance. This had to change. Even big regulations like SOX, PCI/DSS and HIPAA are making technology adjustments. The recent Omnibus Rule, a modification to HIPAA, actually allows you to store data for collaboration in the cloud. Solutions like Citrix Sharefile Cloud for Healthcare jumped all over this, signed a business associate agreement (BAA), and can now process protected health information (PHI) directly from the cloud.
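
    To make the API point above a bit more concrete, here is a minimal sketch of what programmatic access to a cloud platform looks like, using Amazon’s boto3 SDK in Python. It assumes boto3 is installed and AWS credentials are already configured; the region name is purely illustrative.

        # A minimal sketch of programmatic cloud access via an API.
        # Assumes boto3 is installed and AWS credentials are configured;
        # the region name below is illustrative.
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Ask the cloud for its current inventory instead of tracking it by hand.
        response = ec2.describe_instances()

        for reservation in response["Reservations"]:
            for instance in reservation["Instances"]:
                print(instance["InstanceId"], instance["State"]["Name"])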

    Alright, so what did I forget? Well, no list of cloud predictions is complete without robotics. There is an entire conversation about robotics controlling the cloud and the data center model of the future. Robotics aside, we know that cloud and data center automation are becoming big topics as well. The future cloud platform will blur a lot of lines when it comes to the compute process. Ultimately, the goal is to create the most positive user experience possible. Which technologies do you see making the most impact? Will user devices create even more ways to connect? Will we see a “personal cloud” follow us around permanently? When it comes to cloud computing, the next couple of years will be interesting.

    4:30p
    Air Circulation in Data Centers: Rethinking Your Design

    Michiel de Jong is an engineer at Low Speed Ventilation Datacenters.

    A number of data center issues are related to the circulation of air – local pressure differences, short-circuiting of the air flow and the resulting hotspots, among others. Most of these issues, if not all of them, stem from the basic design of the data center.

    Air Velocity and Hot Spots: Combating the Problem

    Transferring heat out of the data room calls for substantial volumes of air – some 1.2 million gallons per minute for each megawatt of IT load. During its recirculation, this air has to pass through air coolers. It is therefore no surprise that air velocity is high within some areas of the data center, especially close to the air coolers.
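
    As a rough plausibility check of that figure, the required airflow follows from the heat balance Q = P / (rho x cp x deltaT). The short Python sketch below uses assumed values – air density of 1.2 kg/m3, specific heat of 1005 J/(kg.K) and an 11 K temperature rise across the servers – none of which come from the article itself.

        # Back-of-the-envelope check of the airflow needed per megawatt of IT load.
        # Assumed values (not from the article): density 1.2 kg/m^3,
        # specific heat 1005 J/(kg*K), 11 K temperature rise across the servers.
        it_load_w = 1_000_000                              # 1 MW of IT load
        rho = 1.2                                          # kg/m^3
        cp = 1005.0                                        # J/(kg*K)
        delta_t = 11.0                                     # K

        flow_m3_per_s = it_load_w / (rho * cp * delta_t)   # volumetric flow required
        flow_gal_per_min = flow_m3_per_s * 264.172 * 60    # m^3/s -> US gallons per minute

        print(f"{flow_m3_per_s:.0f} m^3/s, or about "
              f"{flow_gal_per_min / 1e6:.1f} million gallons per minute")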

    Due to a phenomenon called the Venturi effect, relatively high air velocity will produce local pressure differences within the data room. This effect is a major cause of hot spots. Spots with low pressure may hold back the flow of air through the server, or suck air back from locations where it has already been used and is therefore hot. Both instances lead to high temperatures in the servers.
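
    The relationship behind this is Bernoulli’s principle: where the air moves faster, its static pressure is lower. The sketch below quantifies that trade-off for illustrative velocities – the 2 m/s and 8 m/s values are assumptions, not measurements from a real data room.

        # Velocity/pressure trade-off behind the Venturi effect.
        # Bernoulli (incompressible, frictionless): delta_p = 0.5 * rho * (v2^2 - v1^2).
        # Both velocities below are illustrative assumptions.
        rho = 1.2              # kg/m^3, air density
        v_room = 2.0           # m/s, bulk air speed in the open room (assumed)
        v_near_cooler = 8.0    # m/s, air speed close to an air cooler (assumed)

        pressure_drop_pa = 0.5 * rho * (v_near_cooler**2 - v_room**2)
        print(f"Static pressure is roughly {pressure_drop_pa:.0f} Pa lower "
              f"where the air moves at {v_near_cooler} m/s")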

    This problem is usually addressed by applying overpressure and by strict separation of hot and cold areas in hot/cold containment. Blanking panels, adjustable floor tiles, a tight control of temperatures and pressures within the entire data center – the efforts to “cure the symptoms” are well-known and multiple. Whether or not the problem of hot spots is solved may vary, but in any case a lot of resistance is built into the air circulation pattern.

    The Not-So-Cool Side of Powering the Fans

    Seven to nine percent of the total energy cost in a data center comes from fan power. Consequently, 7 to 9 percent of the total heat generated within the data center is caused by the fan motors, so the fans require 7 to 9 percent of the cooling capacity for themselves. Moreover, the pressure they build up may lead to air leakage. Due to these two factors, significantly more air is circulating within the data center than is required for cooling the servers.
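
    The chain of consequences in that paragraph is easy to put into numbers. The sketch below assumes a 1 MW facility and the midpoint of the 7 to 9 percent range; both figures are illustrative.

        # Compounding effect of fan power: electricity in, heat out, cooling needed.
        # The facility size and the 8 percent fan fraction are illustrative assumptions.
        total_power_kw = 1000.0
        fan_fraction = 0.08

        fan_power_kw = total_power_kw * fan_fraction   # electricity drawn by the fans
        fan_heat_kw = fan_power_kw                     # practically all of it ends up as heat
        extra_cooling_kw = fan_heat_kw                 # ...which the cooling plant must remove too

        print(f"Fans draw {fan_power_kw:.0f} kW, add {fan_heat_kw:.0f} kW of heat, "
              f"and claim about {extra_cooling_kw:.0f} kW of cooling capacity for themselves")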

    Is this an inevitable consequence of cooling by means of air? Not necessarily. An alternative approach to air circulation in data centers can eliminate hotspots, reduce fan energy costs and significantly trim down air cooler maintenance.

    Rethinking Data Center Equipment and Design

    An alternative approach to air circulation in data centers is a system designed for low speed ventilation. Using this method would lead to coolers with a relatively large cross-sectional air-flow area – a design that would require minor adjustments to the layout of the data room.
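
    To give a sense of scale, here is a rough sizing of that cross-section. The ~75 m3/s per megawatt follows from the earlier airflow estimate; the 1.5 m/s face velocity is an assumed low-speed target, not a figure from the article.

        # Rough sizing of the cooler cross-section for low speed ventilation.
        # Assumptions: ~75 m^3/s of air per MW of IT load (earlier estimate)
        # and a target face velocity of about 1.5 m/s through the coolers.
        flow_m3_per_s = 75.0
        face_velocity_m_per_s = 1.5

        wall_area_m2 = flow_m3_per_s / face_velocity_m_per_s
        print(f"About {wall_area_m2:.0f} m^2 of cooler cross-section per MW of IT load")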

    Instead of a row of air coolers mounted along the wall of the data room, these thin but large coolers would themselves serve as a wall. Air would flow at a low speed from a corridor between the outside wall and the air coolers, through this wall of air coolers, into the data room.

    After the air has absorbed the heat from the servers, the hot air is channelled back into this corridor via a plenum. The low speed ventilation thus created eliminates local pressure variations, makes pressure control unnecessary and brings down the need for fan power. The coolers are large and uncomplicated, with minimal fan wear and modest maintenance requirements.

    In the absence of pressure differences, climate control is also simplified. Instead of chasing the volatile pressure and temperature situation in the data center, climate control in a low speed ventilation approach focuses on the idea of “cool air availability.” By measuring the air flow through a tube between the hot and cold compartments in the data room, you get a direct shortage/surplus measurement of the amount of air demanded by the servers versus the amount supplied by the air coolers. The fan speed is then adjusted to balance demand and supply.
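
    In code, that control idea might look something like the sketch below. The sensor and fan interfaces are hypothetical placeholders (simulated here), and the proportional gain is an assumed tuning value – this illustrates the balancing logic, not any vendor’s actual controller.

        # Hedged sketch of the "cool air availability" control loop: measure the
        # net flow through the tube between compartments and nudge fan speed until
        # supply matches demand. Sensor and fan interfaces are simulated placeholders.
        import random
        import time

        def read_tube_flow():
            # Positive = surplus of cool air, negative = shortage (simulated here).
            return random.uniform(-1.0, 1.0)

        def set_fan_speed(fraction):
            # Placeholder for the real fan controller; just prints in this sketch.
            print(f"fan speed -> {fraction:.2f}")

        GAIN = 0.02        # proportional gain, tuned per installation (assumed)
        fan_speed = 0.5    # start at half speed

        for _ in range(5):                  # a real controller would loop continuously
            surplus = read_tube_flow()      # > 0: coolers supply more than servers draw
            fan_speed -= GAIN * surplus     # surplus -> slow fans, shortage -> speed up
            fan_speed = max(0.1, min(1.0, fan_speed))
            set_fan_speed(fan_speed)
            time.sleep(0.1)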

    An additional advantage of this approach is the easy application of free cooling using outside air. The corridor between the outer wall and the coolers provides an ideal pre-treatment space for letting in the outside air, mixing it with return air, filtering it and raising the humidity if necessary. The temperature range for free cooling can be extended from 54 F to 75 F – and even above these temperatures the reduction in cooling costs is significant. For a moderate climate a PUE of 1.07 is possible.
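
    A simple way to picture that operating envelope is a mode decision based on outside temperature. The 54 F to 75 F window comes from the text; the behaviour outside that window and the mixing logic are illustrative assumptions.

        # Illustrative mode selection for free cooling based on outside temperature.
        # The 54-75 F window is from the text; the surrounding logic is assumed.
        def cooling_mode(outside_temp_f):
            if outside_temp_f < 54:
                return "mix outside air with warm return air before supplying it"
            elif outside_temp_f <= 75:
                return "free cooling: supply filtered outside air directly"
            else:
                return "partial free cooling plus mechanical cooling"

        for t in (40, 60, 80):
            print(t, "F ->", cooling_mode(t))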

    Deciding What’s Right for Your Data Center

    The main reason operational costs are lower is obvious: a cut of approximately 6 percent in energy consumption implies a proportional reduction of the energy bill. Likewise, the considerably reduced load on the fans in the air coolers leads to lower maintenance costs.

    The reason why the investment in a low speed ventilation data center will be lower than in a conventional data center is less obvious. The aforementioned separation wall will certainly bring additional cost. However, the capacity of the power supply equipment can be reduced by 7 percent due to the structural reduction in energy consumption, and since less heat is generated by the fans, the cooling equipment can be scaled down as well.

    When compared to investment in a conventional solution, the example above will bring a saving of approximately 4 to 5 percent of the total capital engaged (excluding IT-equipment). It may not sound like much, but a 6 percent savings on the energy bill combined with a reduction of the investment by approximately 4 percent makes a huge difference in overall financial performance.
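
    Putting rough numbers on that comparison: the 6 percent energy saving and the 4 to 5 percent capital saving come from the text, while the baseline energy bill and build cost below are purely illustrative assumptions.

        # Rough financial comparison. Percentages are from the text; the baseline
        # annual energy bill and capital cost are illustrative assumptions.
        annual_energy_bill = 1_000_000     # assumed baseline, e.g. USD per year
        capital_cost = 10_000_000          # assumed baseline build cost (excluding IT)

        energy_saving = 0.06 * annual_energy_bill
        capital_saving = 0.045 * capital_cost    # midpoint of the 4-5 percent range

        print(f"Yearly energy saving: {energy_saving:,.0f}")
        print(f"One-off capital saving: {capital_saving:,.0f}")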

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

