Data Center Knowledge | News and analysis for the data center industry
 

Monday, April 20th, 2015

    Time Event
    12:00p
    ASHRAE’s Pursuit of Lower Humidity in Data Centers

    It’s a well-known fact that data center cooling systems are some of server farms’ biggest energy consumers. One of the most effective ways to reduce cooling energy costs has been to maximize the use of naturally cool outside air, a method called airside economization, or free cooling.

    Climate imposes obvious time and place limits on the use of free cooling. You cannot use it just anywhere, and you cannot use it all the time.

    But a group within ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) has for many years worked on expanding those limits. Until recently, the efforts of ASHRAE’s TC 9.9 (the technical committee focused on data centers) have been to show that if operators accept temperatures on the data center floor that are just a little higher than customary, they not only save energy by using less mechanical-cooling capacity but can also use free cooling in more places and for longer periods of time – all while keeping temperatures within the ranges covered by IT equipment manufacturers’ warranties.

    After raising the recommended inlet-air temperature three times (in 2004, 2008, and 2011), the committee is now working on expanding the relative-humidity part of the envelope. The “relative” part is very important here, because humidity and temperature are inseparable.

    TC 9.9 develops its recommendations together with IT manufacturers. There are three types of envelopes in its data center cooling guidelines:

    • Recommended: Environmental conditions that ensure high reliability while operating in the most energy efficient manner.
    • Allowable: A wider envelope than the recommended range. Manufacturers test their equipment within this envelope to make sure it functions under the conditions it describes.
    • Prolonged Exposure: These are operating conditions well outside the recommended range, encompassing extremes of the allowable range. While short-term “excursions” into this envelope may be acceptable, anything more than a short excursion can reduce equipment reliability and longevity.

    There is a broad trend among operators to dial up data center cooling set points, raising operating temperature and humidity while staying within the ASHRAE envelopes. It lets them save on both operating and capital costs while keeping expensive IT gear under warranty.

    Impact of Lower Relative Humidity Minimal

    TC 9.9 recently completed a study together with the University of Missouri on the impact of low humidity on electrostatic discharge (ESD) on the data center floor. The study showed that with ESD-rated floors, the difference in the effect of discharge on IT equipment failure at 8 percent relative humidity versus 25 percent relative humidity is minimal, Don Beaty, the committee’s co-founder, said.

    Based on the study’s results, ASHRAE is planning to expand its relative-humidity recommendations for data centers, which will ultimately further expand the amount of free-cooling opportunities around the world.

    The committee is planning to publish results of the study later this year and then work on updating its official recommendations. “The point of expanding the relative-humidity envelope is to enable more free-cooling hours per year in cold climates without the need for artificial humidification,” Beaty said.

    In Data Center Thermodynamics Everything is Linked

    The relationship between temperature and humidity is straightforward: cooler air holds less moisture, so when cold outside air is brought in and warmed to data center temperatures, its relative humidity drops. The danger of dry air for electronics lies in static electricity, which builds up more readily the less humid the room is and leads to ESD.

    TC 9.9’s envelope takes the form of a psychrometric chart, since it’s more complex than simply a temperature range and a humidity range. There are different classes of equipment, and the recommendations differ for each class. In fact, the way the committee expanded temperature ranges last time was by adding more classes of equipment rated to perform in hotter conditions.

    The humidity part itself is really multiple parameters: relative humidity, minimum dew point, and maximum dew point.
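
    To make those parameters concrete, here is a minimal sketch, in Python, of the standard psychrometric relationships behind the envelope: the Magnus approximation for dew point, and how relative humidity falls when cold outside air is warmed to cold-aisle temperatures. The constants are common textbook values and the scenario is an illustrative assumption, not part of ASHRAE’s guidance.

        import math

        # Magnus approximation constants (common textbook values, used for illustration)
        A, B = 17.62, 243.12  # B is in degrees Celsius

        def saturation_vapor_pressure(temp_c):
            """Saturation vapor pressure in hPa at a given dry-bulb temperature."""
            return 6.112 * math.exp((A * temp_c) / (B + temp_c))

        def dew_point_c(temp_c, rh_percent):
            """Approximate dew point from dry-bulb temperature and relative humidity."""
            gamma = math.log(rh_percent / 100.0) + (A * temp_c) / (B + temp_c)
            return (B * gamma) / (A - gamma)

        def rh_after_warming(outside_temp_c, outside_rh, room_temp_c):
            """Relative humidity of outside air once heated to room temperature,
            assuming no moisture is added or removed along the way."""
            vapor_pressure = saturation_vapor_pressure(outside_temp_c) * outside_rh / 100.0
            return 100.0 * vapor_pressure / saturation_vapor_pressure(room_temp_c)

        # Example: 0 C outside air at 60% RH brought up to a 24 C cold aisle
        rh = rh_after_warming(0, 60, 24)
        print(round(rh, 1))                   # about 12% relative humidity after warming
        print(round(dew_point_c(24, rh), 1))  # dew point of that warmed air, about -7 C

    The example shows why expanding the low end of the relative-humidity envelope matters: without it, cold-climate free cooling would require artificial humidification to stay inside the recommended range.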

    ESD Shoes Not Terribly Important

    The study measured electrostatic discharge under less humid conditions in data centers with ESD floors, which are made of material that conducts and dissipates static electricity.

    The researchers tested ESD floors both with and without ESD shoes (also frequently used in data centers) and found that the presence or absence of the shoes did not have a significant impact on hardware failure rates either. This should come as a relief to many. “People aren’t that disciplined about using ESD shoes,” Beaty said.

    3:00p
    Microsoft Follows Google’s Stream Analytics Announcement With Its Own

    Microsoft last week announced general availability of Azure Stream Analytics (ASA), a fully managed cloud service for real-time processing of streaming data, shortly after one of its chief competitors, Google, launched a beta version of Dataflow, its own stream analytics service.

    The big cloud providers continue to expand their offerings into the realm of big data and Internet of Things. ASA’s goal is to make it easy to set up real-time analytic computations on data coming in from sensors and other devices, collectively known as the Internet of Things, as well as websites, applications, and infrastructure systems.

    Such analytics capabilities used to be the domain of big enterprises, but the big public cloud providers are beginning to offer users sophisticated analytics without the hefty price tag and complexities that often come along. IBM and HP have also been active in offering cloud analytics services, and so has Amazon.

    ASA is a secure multi-tenant service, according to Microsoft. Customers allocate and pay for the resources they use, so they can start small and scale out as needed. Resiliency and checkpointing for automatic recovery are built into the platform.

    The cloud analytics service supports a high-level SQL-like language that simplifies the logic needed to visualize and act on data in real time. ASA can serve as the backend for a variety of IoT applications, such as remote device management or getting insights from connected cars.

    “Simple configuration settings allow developers to tackle the complexities of managing network latencies from sensors sending data to the cloud, correctly ordering events across thousands of sensors to find correct patterns etc,” wrote Joseph Sirosh, corporate vice president of Information Management and Machine Learning at Microsoft. “The stream analysis logic can also be easily tested and debugged within the internet browser itself before deploying it live in the cloud.”

    ASA provides a rapid development experience, removing all unnecessary overhead of traditional programming languages such as Java, according to Sirosh. An example given is computing a moving average on a temporal window. In ASA it takes five lines of code compared to hundreds in Java/Storm.
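
    For readers unfamiliar with windowed stream computations, the sketch below shows the idea of a moving average over a temporal window (here a fixed, tumbling 30-second window) in plain Python. It is only an illustration of the concept; ASA expresses the same logic declaratively in its SQL-like query language, and the event structure and field names here are assumptions.

        from collections import defaultdict
        from datetime import datetime, timedelta

        WINDOW_SECONDS = 30  # tumbling-window size, chosen for the example

        def tumbling_window_averages(events):
            """Average the 'temperature' field per device over fixed 30-second windows.
            Each event is assumed to be a dict with 'device_id', 'timestamp', 'temperature'."""
            buckets = defaultdict(list)
            for event in events:
                # Align each event's timestamp to the start of the window it falls into
                epoch = event["timestamp"].timestamp()
                window_start = datetime.fromtimestamp(epoch - epoch % WINDOW_SECONDS)
                buckets[(event["device_id"], window_start)].append(event["temperature"])
            return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

        # Example usage with two synthetic readings from one sensor
        start = datetime(2015, 4, 20, 15, 0, 0)
        readings = [
            {"device_id": "sensor-1", "timestamp": start, "temperature": 21.0},
            {"device_id": "sensor-1", "timestamp": start + timedelta(seconds=10), "temperature": 23.0},
        ]
        print(tumbling_window_averages(readings))  # {('sensor-1', ...15:00:00): 22.0}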

    Two of the early users are NEC, which uses ASA for face detection among other things, and Fujitsu, which is using it for environmental monitoring and management in manufacturing, collecting data from a variety of end points and machines. ASA provides real-time analytics to accelerate factory-wide optimization.

    “In the past, CEP (Complex Event Processing) in the cloud was not a realistic option, but Azure Stream Analytics showed great performance running on Azure,” Hiromitsu Oikawa, a director at Fujitsu, wrote on Microsoft’s blog.

    3:30p
    How Hosting Firms Can Help Clients Survive Site Blacklisting

    Ridley Ruth is COO at Dropmysite, a cloud backup company.

    Blacklisting is probably more common than you think. While Google eschews the term “blacklist,” the search giant has quarantined as many as 10,000 websites per day in recent years, typically because the sites have been infected with malware and expose unsuspecting visitors to malicious software that can cause harm to their computers and put sensitive personal information at risk.

    What happens when a website is blacklisted? Too often, site owners panic and web hosting providers suffer disruptions to everyday business, as they scramble to help anxious customers clean up their sites and get back online.

    Unfortunately, businesses that can’t afford to hire IT security specialists or install expensive monitoring tools are often slow to realize their website has been blacklisted. In fact, nearly half of business owners are alerted to a compromised website by a browser, search engine, or other warning when trying to visit their own websites. That’s when the fire drill begins. For blacklisted sites, time is the enemy. Every minute a website is blocked represents lost revenues, not to mention immediate—and sometimes lasting—damage to an organization’s reputation. This problem is particularly acute for startups and small and medium-sized businesses, which lack the infrastructure and deep pockets to weather an extended storm. Moreover, when a customer’s site is blacklisted, the business stands to lose nearly all of the organic traffic its marketing activities generate, which can have a devastating impact on sales.

    The time required to remove malware and secure a site can range from hours to days, depending on the severity of the infection and whether the site is protected by a frequent and effective backup regimen. Removal of malware and site restoration is the first part of the fix. Once that process is complete, site owners still need to request a review from Google before blocking is removed. A recent study of 500 blacklist removals by SucuriLabs found that the average time for blocking removal was 10 hours and 23 minutes, with actual removal times ranging from 2 hours and 20 minutes to 23 hours.

    For web hosting providers, blacklisted customer sites can be a real nightmare, putting a strain on operations and potentially undermining their credibility. Customers typically don’t understand why their site was blacklisted and will often unfairly blame their hosting provider for the problem. But regardless of where the fault actually lies in individual incidents, blacklisting isn’t going to go away anytime soon, and smart hosting providers will position themselves to help customers remediate the problem as quickly as possible. Providers that offer robust tools to get their clients through the process expediently will ultimately inspire enhanced confidence and loyalty; those that don’t are likely to squander significant resources on remediation support and lose customers in the process.

    The good news is that blacklist remediation doesn’t have to be a nightmare or a lengthy ordeal, particularly if affected website owners are already using intelligent automated backup regimens and can easily restore the affected website files and functionality on their own with the appropriate tools.

    To protect themselves, hosting providers should familiarize themselves with the following steps for remediation so they can implement them quickly and efficiently once a customer discovers their site has been blacklisted:

    1. Check for viruses on administrators’ systems by running reputable antivirus (AV) scanners on every computer used by an administrator to log in to the site. Then check server logs for activity by any administrator whose computer turns out to be infected.
    2. Change passwords for all site users and accounts, including logins for FTP, database access, system administrators, and CMS accounts. Remember that strong passwords combine letters, numbers, and punctuation, and exclude words or slang that might be found in a dictionary. The more sophisticated web hosting companies allow customers to easily make these changes in a dashboard interface as part of a self-service automated backup offering.
    3. Educate customers to check that they have installed the latest versions of their operating system, CMS, blogging platform, apps, plug-ins, etc.
    4. Delete all files added to or modified on the server after the time the issue was first detected, and then perform a complete system restore (see the sketch after this list for one way to find those files). If you offer a cloud-based automated backup and disaster recovery service to customers, it may be possible to complete the restoration with a single click. Otherwise, your customers will need to find and manually download the last clean versions of each of the modified files.
    5. Request a review by Google to remove the blacklist flagging. The process is described in Google’s documentation; keep in mind you will need to use Google Webmaster Tools to carry out the required steps.
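
    As a rough illustration of step 4, the hypothetical script below walks a site’s web root and lists files modified after the time the compromise was detected, so they can be reviewed against the last clean backup before deletion or restoration. The path and cutoff time are assumptions made for the example; this is a sketch, not a complete remediation tool.

        import os
        from datetime import datetime

        WEB_ROOT = "/var/www/example-site"          # hypothetical web root
        DETECTED_AT = datetime(2015, 4, 20, 9, 30)  # when the issue was first detected

        def files_modified_since(root, cutoff):
            """Yield (path, mtime) for files whose modification time is later than the cutoff."""
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        mtime = datetime.fromtimestamp(os.path.getmtime(path))
                    except OSError:
                        continue  # file disappeared or is unreadable; skip it
                    if mtime > cutoff:
                        yield path, mtime

        for path, mtime in files_modified_since(WEB_ROOT, DETECTED_AT):
            print("{:%Y-%m-%d %H:%M}  {}".format(mtime, path))

    Keep in mind that attackers sometimes backdate file timestamps, so a file-level comparison against a known-good backup remains the more reliable check.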

    How can web hosting providers best prepare for possible infections and blacklisting? Act now to control your own destiny. Assume the worst will happen, and then make sure you have access to tools that will get your customers’ sites and data back online as quickly as possible.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:00p
    How Data Center Virtualization Shrinks Physical Distance

    Over the past couple of years a lot of new technology buzz terms have emerged. One of them revolves around “software-defined” platforms that aim for entire data center virtualization, well beyond server virtualization alone. The idea is simple: unify processes to make the IT environment easier to manage and more efficient. In working with software-defined technologies, specifically around data center infrastructure, we see an underlying theme: removing the complexity of distance.

    Of course software-defined technologies have other great purposes, but for the sake of this conversation we are going to examine exactly how data center virtualization, or software-defined tools, help bring data centers closer together.

    • Virtual and Physical WANOP. We’ve discussed this before, but it’s very important to note that WAN optimization has really come a long way. Not only are we trying to improve the end-user computing experience, we are also improving communication between data centers. Now, edge devices are able to optimize communication between multiple data centers spanning a country or even the globe. Furthermore, clientless WANOP solutions allow you to optimize user sessions in completely new ways. Finally, new capabilities around virtual WANOP appliances allow you to save on hardware space while still optimizing the cloud and user experience.
    • Global Server Load Balancing. To make the data center more resilient, engineers continuously look for ways to eliminate single points of failure. In that sense, technologies around GSLB have allowed geographically dispersed data centers to become more agile as disaster recovery sites. Now users can be load-balanced between entire sites to keep in line with business continuity requirements (see the sketch after this list). It’s also important to note that virtualizing traffic and deploying these powerful load-balancing controllers can now be done entirely in the logical layer. In the past, organizations had to buy physical appliances to accomplish GSLB; virtual instances and network function virtualization now let you deploy virtual load balancers that bridge data centers efficiently.
    • DCIM and DCOS. Think of the data center operating system (DCOS) as the next evolutionary step for Data Center Infrastructure Management (DCIM). Data center administrators are going to build in infrastructure automation at a multi-site level. As one site needs more resources or needs to offload some users, another location can pick up the slack. This is much easier to do with the next generation of data center management technologies. Here’s the other big point to remember: as you bridge your data centers, you’re also building bridges to the cloud. The data center management layer can now extend from private to public and help you manage the resources in between.
    • Open-Source Computing. Surprised? Open-source technologies have been taking the networking, virtualization, and management industry by storm. Technologies like CloudStack and OpenStack allow for direct cloud infrastructure integration. The challenge with open-source technologies has been standardization, but much of that is changing. Open-source solutions are helping service providers and some large organizations rethink how they build and bind their data centers. The revolution driven by the likes of Cumulus Networks, which builds Linux-based network management software for commodity switches, is only one example. Cumulus Linux is a software-only solution that provides the flexibility needed for modern data center networking designs and operations on a standard operating system.
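
    To make the GSLB idea above concrete, here is a minimal, hypothetical sketch of the decision logic behind DNS-based global server load balancing: given health checks and measured latencies for each site, a lookup is answered with the address of the best healthy site. Real GSLB appliances layer weighting, persistence, and geo-awareness on top of this; the names and addresses below are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class Site:
            name: str
            vip: str           # virtual IP returned in the DNS answer
            healthy: bool      # result of the latest health check
            latency_ms: float  # measured latency from the requesting region

        def pick_site(sites):
            """Return the virtual IP of the lowest-latency healthy site, or None if all are down."""
            candidates = [s for s in sites if s.healthy]
            if not candidates:
                return None  # every site is down: fail over to DR or serve a maintenance page
            return min(candidates, key=lambda s: s.latency_ms).vip

        # Example: the primary site fails its health check, so traffic shifts to the other site
        sites = [
            Site("us-east", "203.0.113.10", healthy=False, latency_ms=20.0),
            Site("us-west", "198.51.100.10", healthy=True, latency_ms=65.0),
        ]
        print(pick_site(sites))  # 198.51.100.10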

    There’s no doubt that optimization technologies will continue to evolve. One of the key technologies driving the data center virtualization push is, of course, software-defined networking. We can do so much more with a physical switch now than we ever could before. We even have network virtualization and the ability to quickly create thousands of vNICs from physical devices. Dynamically creating LANs, VLANs, and other types of connectivity points has become easier with more advanced networking appliances.

    This goes far beyond just optimizing the links between data center environments. Moving forward, we are creating a truly distributed system capable of resiliency and business continuity. Even now the open-source community is picking up the pace with various distributed environment management platforms. As the market continues to evolve, it will be critical for your organization to keep pace with user and industry demands. One important approach will be to closely tie your data center services to those of the cloud. Using cloud, virtualization, and optimization technologies not only creates good traffic flow and user controls, it also helps create a much more agile data center ecosystem.

    4:35p
    SmartCube Gains Modular Data Center Patent

    In a decision that may portend future litigation, the U.S. Patent Office has awarded a patent to SmartCube covering the way the provider of modular data centers cools them and provides accessibility to IT equipment from a cold aisle inside its containers.

    SmartCube President Tom Oberlin said the patent specifically addresses the rack design inside its modular data center, which allows the rack to be spun around so that all the equipment in it is accessible from a cold aisle. The alternative is requiring IT staff to work in a hot aisle, where temperatures rise to the point that they can only work comfortably for a few minutes at a time.

    “It’s really about worker convenience and safety,” Oberlin said. “IT staff can not only do everything they need to do from the cold aisle; they’re also closer to aisle exits.”

    The company has offices in Rio de Janeiro and targets the Brazilian data center market.

    Oberlin says the cold aisle within a SmartCube is created using a unique on-board chiller attached to the walls of the container. From the cold aisle, the racks inside a SmartCube data center can be spun around, making it possible to service IT equipment without having to move into the hot aisle.

    The 2015DX, one of the models of SmartCube’s modular data center

    SmartCube is currently evaluating at least one rival modular data center platform for possible violations of its patent, Oberlin said.

    SmartCube continues to see significant adoption of modular data centers within traditional enterprise IT environments, where local real estate is at a premium, the company’s president said. For example, in a hospital, where floor space can instead be allocated to beds that generate revenue, it makes better financial sense to house data center resources in the parking lot or a nearby warehouse, he said.

    Part of the issue many IT organizations face is getting IT staff to want to work inside those modular containers. Not only is IT talent often hard to find, but the quality of the working environment is a major factor in the ability to retain it.

    IT administrators come in all shapes and sizes these days, so asking them to work inside modular containers with aisles only a few feet wide can be a challenge. Oberlin says that issue has been one of the major reasons many IT organizations have opted for SmartCube containers.

    How many data centers will ultimately morph into logically connected modular containers that offer new ways to scale IT infrastructure resources remains to be seen. But clearly the quality of the work environment in those containers and IT staff turnover rates are going to remain intrinsically linked.

    5:09p
    Broadcom Unveils Faster Data Center Switch Platform

    With IT organizations starting to deploy as many as 30 virtual machines per physical server, demand for network bandwidth within the data center is increasing rapidly. To address that issue, Broadcom today unveiled an upgrade to its System-on-Chip (SoC) platform that can deliver 1.2 terabits per second of performance in a top-of-rack data center switch.

    Dubbed the Trident-II+ Series within the Broadcom StrataXGS Trident Ethernet switch portfolio, the new SoC will enable manufacturers to start delivering, as early as this year, a new generation of switches that are not only twice as fast but also consume 30 percent less power.

    Rochan Sankar, director of product management and marketing for Broadcom’s Infrastructure and Networking Group, said the Trident-II+ Series is designed to be upgradable within the existing data center switch architecture employed by most vendors. As a result, switch vendors do not have to design a new chassis to bring the Trident-II+ Series to market.

    Sankar said that over the next several years the bulk of data centers will be making the transition from 1G to 10G Ethernet. As they make that shift, the cost advantages of employing data center switches based on merchant silicon rather than proprietary ASICs will increasingly become apparent, he said.

    “This platform is really targeting the long tail of the enterprise market that is still moving off of 1G Ethernet,” Sankar said. “We think that transition will be playing out over the next three to four years.”

    Like the previous generation of Trident series switch platforms, the Trident-II+ is optimized to process VXLAN traffic generated by VMware virtual machines. Rather than relying on servers, which need to allocate as much processing horsepower as possible to applications, the Trident series enables the processing of VXLAN traffic to be offloaded to the switch itself. The end result is a more balanced data center environment, Sankar said.
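
    For context on what processing VXLAN traffic involves, the sketch below builds the 8-byte VXLAN header (defined in RFC 7348) that is prepended to an inner Ethernet frame before the whole package is wrapped in an outer UDP datagram on port 4789. Offloading this encapsulation work and the associated lookups from server CPUs is the kind of task described above; the code is a generic illustration, not Broadcom’s implementation.

        import struct

        VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

        def vxlan_header(vni):
            """Build the 8-byte VXLAN header: a flags word (I bit set, 24 reserved bits),
            then a word holding the 24-bit VXLAN Network Identifier plus 8 reserved bits."""
            if not 0 <= vni < 2 ** 24:
                raise ValueError("VNI must fit in 24 bits")
            return struct.pack("!II", 0x08000000, vni << 8)

        def encapsulate(inner_ethernet_frame, vni):
            """Prepend the VXLAN header; the result becomes the payload of the outer UDP datagram."""
            return vxlan_header(vni) + inner_ethernet_frame

        print(vxlan_header(5000).hex())  # 0800000000138800 -> flags word, then VNI 5000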

    IT organizations don’t tend to upgrade switches as often as servers. But the number of virtual machines deployed on each physical server is building enough critical mass to start forcing the issue. In fact, the rate at which VM deployments are growing suggests that IT organizations might upgrade switches more frequently in the years ahead than they have historically.

    As that trend plays out, Broadcom is betting that switches based on merchant silicon manufactured in volume will have the same impact on switch economics that x86 processors had on the economics of the servers those switches connect.

    6:37p
    Daniel “Rudy” Ruettiger Sets Tone for Data Center World

    Daniel “Rudy” Ruettiger gave an inspiring keynote to start off the Data Center World conference in Las Vegas Monday. Rudy is best known for the 1993 movie “Rudy” about an undersized kid with a dream of playing for Notre Dame.

    “Rudy” is bigger than the man – it is a symbol of the triumphant underdog. The data center industry is undergoing drastic change with the emergence of cloud, as a handful of giant tech companies with massive data centers house more and more of the world’s servers, and it can use a good underdog story.

    There are a lot of surprising parallels between a kid with a dream of playing for Notre Dame and the data center industry. Many of us hold a dream, and like Rudy, we’re inundated with negative voices. Rudy’s message was to focus on one voice and get rid of the negative voices. The most important factor is not where you are or what you have today but your passion.

    Rudy attributes where he is today to passion. He was not without his share of negative voices but managed to block out all the naysayers and hold a singular vision. Because of his positive attitude, he was often a leader despite not being the strongest or the smartest – he suffered from learning disabilities and Catholicism, he said.

    Rudy emphasized the importance of collaboration. A positive attitude gives birth to others wanting to know you. “Most people in leadership understand the importance of collaboration and helping one another,” said Ruettiger. “I had to make sure they knew me and to let people know who I was in a positive way.”

    Rudy’s story goes beyond the movie. He ultimately did play for a moment in a game to the cheers of a crowd, giving birth to one of the most inspirational underdog stories of all time. However, his singular positive vision changed the course of his life in several ways. Each time he was faced with adversity, he said, he overcame through patience and trust in his vision.

    Making the movie itself wasn’t a sure thing. Rudy spoke of the relationships it took to make the movie happen, and how that same resiliency that led to his Notre Dame moment led to the creation of his tale.

    He first had to convince the writer — who hated Notre Dame — that this was a story that needed to be told. He recalled sitting in a restaurant in Santa Monica for four hours, only to have the writer never show up.

    Most would feel dejected, but he struck up a conversation with an older man outside who happened to know where the writer lived. The two had an instant connection, which he said was something that happened to him all the time, because everyone can identify with the “Rudy” mentality. He went to the writer’s house and that singular positive attitude — the same that got him into Notre Dame following several rejections — got the movie written. It’s easy to guess how the rest of the ducks lined up.

    Whether a startup or a sizable company, it all starts with a dream. The most successful people all share the kind of determination that Rudy has.

    You can make magic happen even if you’re not an Amazon or a Microsoft, even if you don’t have all the resources in the world. Resiliency, collaboration, keeping your focus positive, and, most of all, trusting in your dreams will not only take you far; they will make you someone people want to know. “A lot of people quit when it’s darkest,” he said. “Listen to that positive voice. Define who you are through patience.”

    6:55p
    IBM Launches Program to Share 700 Terabytes of Global Cyber Threat Data in the Cloud

    This article originally appeared at The WHIR

    A 700-plus-terabyte database full of raw cyber threat data will be available to any company that wants it through a new IBM program called X-Force Exchange. The company announced that it is granting access to its database that includes threat data from 270 million devices, spam and phishing attack emails and 25 billion web pages and images. The service is powered by IBM Cloud.

    The platform also provides real-time indicators of live attacks. The program’s goal is to help companies mobilize against network threats. The company says there currently isn’t a single point of contact for this kind of information.

    “We’re taking the lead by opening up our own deep and global network of cyberthreat research, customers, technologies and experts,” said Brendan Hannigan, general manager for IBM Security. “We’re aiming to accelerate the formation of the networks and relationships we need to fight hackers.”

    Threat sharing has been a hot topic recently. Obama recommended new cybersecurity legislation in February and proposed a $14 billion cybersecurity budget that includes $227 million for construction of a civilian cyber campus to better share information on cyber threats.

    However, according to the 2015 Data Breach Investigations Report released Tuesday, cyber threat sharing may not actually be that effective. “It is hard to draw a positive conclusion from these metrics, and it seems to suggest that if threat intelligence indicators were really able to help an enterprise defense strategy, one would need to have access to all of the feeds from all of the providers to be able to get the ‘best’ possible coverage,” said the report. “This would be a herculean task for any organization, and given the results of our analysis, the result would still be incomplete intelligence.”

    The IBM program offers a wealth of resources, including malware threat intelligence, data on spam and phishing attacks, and reputation data.

    The cloud-based platform allows for collaboration among community members. It features a social interface that lets users easily communicate and validate information. The platform supports the emerging threat-sharing standards STIX and TAXII.

    Finding solutions to cybersecurity threats is becoming big business. With big data breaches such as those at JPMorgan, Kmart, Dairy Queen, Home Depot, Xbox, and Sony costing millions of dollars in damages, companies are looking for ways to protect themselves from hackers and data loss.

    This first ran at our sister site The WHIR: http://www.thewhir.com/web-hosting-news/ibm-launches-program-share-700-terabytes-global-cyber-threat-data-cloud

