Data Center Knowledge | News and analysis for the data center industry
 

Monday, July 22nd, 2013

    11:30a
    Cedexis Now Tracks Performance for 50 Clouds

    Internet performance and control company Cedexis announced an extension to its Radar community services with the addition of many new clouds, and the availability of Radar reports as an app for Windows 8. The Cedexis Radar community now measures more than 50 cloud platforms, the company said.

    The recent additions to the Cedexis Radar Cloud benchmark include Aruba Cloud, ASPserveur, CloudSigma, eNocloud, Rackspace Cloud, Savvis Cloud, SFR Cloud and Virtacore. Cedexis Radar is an independent benchmark that gives companies intelligence about which clouds, CDNs and data centers are best suited for their online assets. Collecting nearly 1 billion measurements per day around the world, Radar provides metrics on cloud platform performance, both in absolute terms and in comparison to other providers.
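
    As a rough illustration of the kind of comparison Radar data supports, the sketch below ranks a handful of providers by median measured latency. The provider names and numbers are invented for the example; they are not actual Radar measurements.

        # Toy ranking of providers by median measured latency (Python).
        # Hypothetical data only -- not actual Cedexis Radar measurements.
        from statistics import median

        latency_ms = {
            "provider_a": [38, 41, 40, 39, 95],
            "provider_b": [52, 50, 49, 51, 53],
            "provider_c": [33, 36, 210, 34, 35],
        }

        # Median is used so a single slow sample does not dominate the ranking.
        ranked = sorted(latency_ms, key=lambda name: median(latency_ms[name]))
        print(ranked)  # ['provider_c', 'provider_a', 'provider_b']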

    To help customers mine this wealth of data, Cedexis has developed a new Radar app for Windows 8 that provides content presentation and analysis. The application is highly interactive, generates comparative reports, and makes Radar cloud and CDN performance data more accessible than ever.

    “As Cloud and CDN adoption continue to accelerate, enterprise IT professionals require objective third-party data to make increasingly strategic cloud vendor and cloud region/location decisions,” said Marty Kagan, co-founder of Cedexis. “The new Windows 8 app and automated alerts provide IT decision makers with quicker access to the performance data of both existing and potential providers, so they can make well-informed decisions, in real time.”

    The Windows 8 Radar app is available immediately for free on the Windows App Store.

    12:30p
    Harvesting Big Data: How Farm Fields Boost Data Center Demand

    A monitoring system from PureSense tracks conditions in a field of crops. The company recently expanded its data center to add more capacity for data storage. (Photo: PureSense)

    Those fields of crops you drive past out in the country could be generating data center demand. PureSense, which provides “irrigation intelligence” for farmers to manage and automate irrigation systems to improve crop yields, is expanding its data center to accommodate 14 terabytes of data and 4 billion data records.

    Data is permeating everything we do. The “Internet of things” and “big data” are oft-used terms these days, but PureSense is a specific example of how data is being generated all around us. Irrigation isn’t something we would necessarily expect to generate terabytes of data, yet watering crops is filling up servers.

    PureSense uses technology to help farmers boost the yield of their crops. The company deploys monitoring systems that gather data on soil moisture and the flow of irrigation systems. The data is analyzed and used to optimize irrigation scheduling, which can then be automated and managed remotely. The company’s equipment provides real-time monitoring for more than 4,000 fields in the western United States. When you offer real-time reporting, you need uptime.
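
    To make the idea concrete, here is a minimal sketch of how soil-moisture telemetry could drive an automated watering decision. It is purely illustrative and does not represent PureSense’s actual algorithms; the threshold values are assumptions.

        # Minimal sketch: turning soil-moisture readings into an irrigation decision.
        # Illustrative only -- not PureSense's algorithm. Thresholds are assumed.

        def should_irrigate(moisture_pct, target_pct=30.0, deadband_pct=5.0):
            """Start a watering cycle when average root-zone moisture drops below target."""
            avg = sum(moisture_pct) / len(moisture_pct)
            return avg < target_pct - deadband_pct   # dry enough to water

        print(should_irrigate([24.1, 22.8, 23.5]))   # True: field is drying out
        print(should_irrigate([33.0, 31.2, 34.8]))   # False: moisture within target band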

    “We decided a year ago that given our growth, we needed to upgrade our data center to assure uninterrupted services in hosting our online data management and software services for our customers,” said David Termondt, CEO of PureSense. “Based on the problems our competitors’ customers have experienced in having reliable access to their information, we believe that this is a substantial differentiator for us in the market.”

    The company is touting the data center as a way to convey value to its customers. The more than 4 billion data records in its databases are available with sub-second (300 ms) query response times. The new data center offers redundant application server hardware with instant local failover, and uses RAID 5 and RAID 10 arrays with hot spare drives so that no data is lost in the case of a drive failure.

    “This project has been a great opportunity to re-architect our hosting services to higher levels of redundancy and reliability,” says Ryan McNeish, Senior Director of Engineering for PureSense. “Since completing the upgrade, our hosting and data management services have only been offline for routine maintenance events, resulting in 99.7% uptime. We’re excited about the opportunity to improve our data center and to be assured that we are ready to meet the needs of our customers as we continue to grow.”

    1:30p
    LoPoCo Launches Low Power Server Line

    A look at the Low Power Company’s LP-4240-10H Ultra server. (Photo: Low Power Company)

    Low Power Company (LoPoCo) believes high-efficiency servers can save businesses more than half of what they spend to operate their data centers. This week the company launched a line of low-power servers, which it says can use up to 80 percent less power than some mass-market servers. But the LoPoCo team is cagey about exactly how it achieves those energy savings.

    “It’s almost always the first question to come out of people’s mouths,” said CEO Andrew Sharp. “We are making machines that are way more efficient and perform smartly. A lot of R&D is required to do what we do. To eliminate every last ounce of waste, we start with the best processors with the best tradeoffs of performance and efficiency. Then we handcraft (the server) around them.”

    LoPoCo is the latest player in the growing market for low-power server options. Some have pursued strategies with their own silicon (Tilera), or multi-node designs using low-power chips from ARM (Calxeda) or AMD (SeaMicro). LoPoCo says it is using Intel Xeon “brawny core” chips and standard form factors, and gaining energy savings through its hardware design.

    “Custom chips represent a risk on multiple fronts,” said Sharp. “In the current market today the Intel/AMD juggernaut is really hard to beat. Down the line, we’ll take a look and see if it makes sense, but you can absolutely believe that Intel/AMD will respond. They always have in the past.”

    Design Refinements on Existing Tech

    Sharp compares the company’s approach to the gains captured by hyper-scale players who build efficient systems around existing technologies.

    “There’s a reason that Google and Facebook make their servers – because with a few steps, they can get more efficiencies,” said Sharp. “It took a huge pile of rejected hardware to get to this point. It was a lot of engineering work.”

    Sharp has worked on server technology at Silicon Valley firms since the 1980s, most recently at LSI. VP of Engineering Jack Mills has worked on the enterprise platforms team at Intel, while CTO and co-founder Peter Theunis has experience at Yahoo.

    Sharp argues that the industry’s historical approach to server design is no longer the right one for today’s web-scale data centers.

    “You really couldn’t buy enough CPU for common server applications in the past,” said Sharp. “It was a limiting factor.” Sharp says servers were made to fit a wide range of options, which isn’t the right fit today. “There’s parts of the market that we’re not going after – like High Performance Computing. We’re happy to leave that to traditional vendors.”

    The company claims its servers decrease overall power consumption, reduce HVAC requirements, dramatically lower the noise level in the data center, and actually increase throughput. For example, the LP-4240 family of 1U LoPoCo servers consumes 28 watts when idle and has a 100-watt thermal design power (TDP), the maximum power it is designed to draw. That works out to 40 servers on a single 40-amp, 110-volt rack circuit. A company white paper walks through the math against some typical setups.
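
    A quick back-of-envelope check on those rack numbers, using only the figures quoted above (this is not the white paper’s analysis):

        # Rack power check using the figures above: 28 W idle, 100 W TDP per server,
        # 40 servers on a 40 A, 110 V circuit. Illustrative only.
        servers = 40
        idle_w, tdp_w = 28, 100
        circuit_w = 40 * 110                              # 4,400 W nominal circuit capacity

        print("rack at idle:", servers * idle_w, "W")     # 1,120 W
        print("rack at TDP: ", servers * tdp_w, "W")      # 4,000 W
        print("circuit:     ", circuit_w, "W")            # 4,400 W
        # Even at full TDP the rack fits under the nominal 4,400 W; with the common
        # 80% continuous-load derating (3,520 W) it relies on servers rarely all
        # running at maximum power at once.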

    According to the company, the LP-4240 provides the same or nearly the same throughput as a conventional server, because customers rarely use all of that CPU capacity. “The fact is, most normal server applications have a mix of I/O and CPU requirements, but the x86 CPUs of the last 10 years are so powerful that they spend about 95% of their service life waiting for I/O, even when utilizing very fast I/O such as SSDs or 10G Ethernet,” according to the white paper.
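
    The 95 percent figure is easy to reproduce with made-up per-request timings; the split below is an assumption chosen to match the claim, not a measurement:

        # If a request needs 1 ms of CPU time and spends 19 ms waiting on I/O,
        # the CPU is busy only 5% of the time. Hypothetical numbers.
        cpu_ms, io_wait_ms = 1.0, 19.0
        busy = cpu_ms / (cpu_ms + io_wait_ms)
        print(f"CPU busy {busy:.0%}, waiting {1 - busy:.0%}")   # busy 5%, waiting 95%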

    LoPoCo counts a number of enterprises as customers, including Light and Motion Industries.

    “We have purchased systems from Lopoco and all are performing flawlessly,” said Daniel Emerson, CEO of Light and Motion Industries.  “We are also pleased with the power savings. I would recommend them to any small business looking to move off the power hogs that pass for servers these days.”

    The company offers four server types (micro, 4-, 8- and 12-core), all using Intel chips:

    • Intel D525, dual-core, four threads
    • Intel Xeon E3, dual-core, 2.3 GHz, four threads
    • Intel Xeon E3, quad-core, 2.4 GHz, eight threads
    • Intel Xeon E5, six-core, 2.0 GHz, 12 threads

    “The power consumption of our servers is part of our warranty,” said Sharp.

    2:36p
    Analytics Re-shaping IT Operations & Gaining Strong Momentum

    Sasha Gilenson, founder and CEO of Evolven, is a thought leader in the emerging field of IT Operations Analytics, tackling 15 years of chronic change and configuration challenges.


    The pace of change in today’s IT world is truly astonishing. IT is expected to support a wider range of technologies and platforms and accelerated release schedules while still solving time-sensitive business challenges, despite cost pressures and a level of complexity that frustrates many IT operations teams. The result is ever more complex, multi-layered IT systems that produce volumes of data far beyond what was once imaginable, yet must keep up with a dynamic environment that is critical to daily performance and operations.

    Dealing with this data volume, variety, velocity and complexity is really a “big data” problem for IT operations, forcing many traditional approaches in IT to change and ushering in IT Operations Analytics solutions to take on the challenge.

    IT Operations Analytics is better equipped to manage this kind of big data challenge. So, it is no wonder that IT Operations Analytics is gaining both industry interest and expert attention – moving IT management technology into a new S-curve growth cycle.

    Investments in Traditional IT Management Tools Deliver Only Marginal Returns

    For IT operations, change is a fact of life, taking place at every level of the application and infrastructure stack and affecting nearly every part of the business. Today’s IT environments are complex and generate huge amounts of data. Managing that data becomes even more difficult as the rate of change grows, and departmental silos create further obstacles to gaining a clear perspective on the issues affecting service management.

    Traditional IT management tools have been applied to collect enormous amounts of raw data, but now lack the analytics capabilities to make sense of today’s “big data” operations. As the recent Forrester report, titled “Turn Big Data Inward With IT Analytics,” noted, “The tools present us with the raw data, and lots of it, but sufficient insight into the actual meaning buried in all that data is still remarkably scarce.”

    The frustration with traditional IT management tools has been further demonstrated in a report published recently by Gartner Research which concluded that “the Big Four surrendered share and stunted market growth, while a new generation of ITOM (IT Operations Management) vendors grew significantly faster than the market.” (Source: Market Share Analysis: IT Operations Management Software, Worldwide, 2012: by Laurie F. Wurster et al).

    According to this Gartner report, two of the leading providers in the sector, IBM and BMC, did show modest year-on-year growth in 2012 of 0.8 percent and 0.9 percent respectively, but CA and HP declined 0.6 percent and 4.3 percent. In contrast, a group of the fastest-growing companies mustered growth rates ranging from 45.5 percent to 84.4 percent.

    Moving to a New Technology S-curve

    The life cycle of innovation has often been described using the technology S-curve model, mapping the progress of technology innovation against new performance challenges. During the last 15-20 years in IT operations, enterprises have made huge investments in IT management tools.

    As complexity grew, the IT landscape changed: the volume of operations data that must be tracked expanded, straining IT operations’ ability to maintain performance and availability. The long implementation periods these technologies require, and the little actionable information they yield, have left IT operations vulnerable when failures hit. To address this problem, a new approach – IT Operations Analytics – has emerged, tackling the complexity and dynamics in a more innovative way. This is driving a transition to a new S-curve, shifted to the right of and above the original one, delivering better data center performance.

    [Figure: Technology S-curve]

    The emergence of IT Operations Analytics solutions is creating an environment in which many traditional elements of IT are also shifting. IT Operations Analytics can quickly discover the root causes of system performance problems, assess the relative impact when multiple causes are involved, analyze service cost and anticipate performance-impacting events, among other tasks under the responsibility of IT operations management.

    Gartner Research VP Will Cappelli, explained in a recent report, “IT Operations Analytics Technology Requires Planning and Training,” that the “operational data explosion has sparked a sudden and significant increase in demand for ITOA [IT Operations Analytics] systems.”

    IT Operations Analytics Is Redefining IT Operations

    As a discipline, IT Operations Analytics combines complex-event processing, statistical pattern discovery, behavior learning engines, unstructured text file search, topology mapping and analysis, and multidimensional database analysis. IT Operations Analytics solutions, in the spirit of business intelligence (BI), are penetrating IT operations.
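
    As a flavor of one of those ingredients, statistical pattern discovery, the sketch below flags outliers in an operations metric using a median-based modified z-score. It is a conceptual illustration only, not Evolven’s or any other vendor’s implementation.

        # Conceptual sketch of statistical anomaly detection on an ops metric.
        # Not any vendor's implementation; thresholds and data are illustrative.
        from statistics import median

        def anomalies(samples, threshold=3.5):
            """Indices whose modified z-score (median/MAD based) exceeds threshold."""
            med = median(samples)
            mad = median(abs(x - med) for x in samples)
            if mad == 0:
                return []
            return [i for i, x in enumerate(samples)
                    if 0.6745 * abs(x - med) / mad > threshold]

        response_ms = [120, 118, 125, 122, 119, 121, 480, 123, 117]
        print(anomalies(response_ms))   # [6] -- the 480 ms spike stands out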

    Analysts such as Gartner and other industry experts are enthusiastic about the technology. Vendors are also approaching company decision-makers, urging them to consider how to take advantage of their vast stores of data and apply advanced analytics in the context of IT operations.

    IT Operations Analytics can provide visibility, extracting insight buried in piles of complex data and helping IT operations teams proactively determine the risks, impacts, or potential outages that may result from granular configuration changes.

    This expectation is underscored by the outlook described in Gartner’s recent Hype Cycle in IT Operations report, stating that “IT Operations Analytics will provide CIOs and senior IT operations managers radical benefits toward running their businesses more efficiently…“

    Improving IT Operations Performance

    Most IT operations teams spend a disproportionate amount of time chasing various root causes for performance issues, primarily because poor technology obscures clues to resolution. IT operations leaders need new ways to deliver more value to their business. Tools for effective decision-making can improve the infrastructure and operations (I&O) team’s ability to allocate resources to the right types of activities.

    Industry analyst firm Ovum believes that “gaining actionable information from the wealth of data generated through change and configuration management activities can help IT work proactively, reduce disruptions to normal service, and help more effectively manage change.” Enterprises that have already implemented IT Operations Analytics solutions report significant cuts in MTTR, a reduction in the number of incidents and in downtime, and smoother, error-free releases.

    With all this data, IT Operations Analytics tools stand as powerful solutions for IT, helping to sift through big data to find patterns; that is what IT Operations Analytics is all about. Otherwise, IT management will continue to struggle and slide into a downward spiral. It is time to apply the same thinking behind business intelligence to the work of IT and turn big data analysis inward for IT operations.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:52p
    Equinix Testing Fuel Cell for Use in Fire Suppression

    Some of the power infrastructure at an Equinix data center. The company has installed a fuel cell in its facility in Frankfurt, Germany. (Photo: Equinix)

    Equinix has become the latest provider to test drive the use of fuel cells, installing a 100-kilowatt system to generate energy for its FR4 data center in Frankfurt, Germany, the company said. In a new wrinkle, the demonstration system is designed to also provide fire suppression by managing the oxygen level in the room, leaving enough oxygen for staff to breathe comfortably but not enough oxygen to support a fire.

    Equinix says the demonstration project is the first data center implementation of a system from N2telligence that uses the nitrogen-rich exhaust from the chemical reaction inside a fuel cell to manage the surrounding data center environment. The N2telligence system monitors the room to maintain oxygen content at the proper level, and can adjust the fuel cell’s settings to compensate for any changes.
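
    Conceptually, the control problem looks something like the sketch below: hold room oxygen inside a band that people can tolerate but a fire cannot. The band and step sizes are assumptions for illustration; this is not N2telligence’s actual control logic.

        # Conceptual oxygen-band control loop for hypoxic fire suppression.
        # Band and step sizes are assumed; this is NOT N2telligence's logic.
        O2_LOW, O2_HIGH = 15.0, 16.0     # assumed breathable-but-fire-suppressing band (%)

        def adjust_fuel_cell(o2_pct, output_kw, step_kw=5, max_kw=100):
            """Nudge fuel cell output so its oxygen-depleted exhaust keeps the room in band."""
            if o2_pct > O2_HIGH:
                return min(output_kw + step_kw, max_kw)   # deplete more oxygen
            if o2_pct < O2_LOW:
                return max(output_kw - step_kw, 0)        # let oxygen recover
            return output_kw                              # in band: hold steady

        output = 50
        for reading in (16.4, 16.2, 16.0, 15.8):          # oxygen drifting after a door opens
            output = adjust_fuel_cell(reading, output)
            print(reading, output)                        # output steps up, then holds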

    Most data centers use fire suppression systems based on either water or gas. Water-based systems typically use a “pre-action” approach, in which the pipes above the data hall remain empty until an initial alarm is received; they then fill with water and are triggered by a second, confirming alarm. Gas systems now rely primarily on “clean agents” based on hydrofluorocarbons (HFCs), which lack the ozone-depleting characteristics of older halogenated agents such as halon, once widely used in data centers.

    N2telligence will work with Equinix to monitor the fuel cell’s performance and efficiency, as well as the environment in the fire suppression test area. The Frankfurt system is a Quattro-generation hydrogen fuel cell which is expected to produce approximately 800,000 kilowatt hours of electricity annually.

    N2telligence is based in Wismar, Germany and specializes in fire protection systems for sustainable energy management.

    “This project is the first worldwide use of fuel cell technology combining power, heat, refrigeration and fire protection in a data center,” said Lars Frahm, founder and CEO of N2telligence. “This high level of integration gives the fuel cell system an efficiency that conventional technologies cannot achieve. It shows us that, after many years of development work, the fuel cell has arrived in the market. We are delighted to have won Equinix as the pioneer of this technology.”

    N2telligence isn’t the only company focused on the use of low-oxygen environments for data center fire suppression. Prevenex offers a “hypoxic suppression” system that reduces room oxygen levels to 16 percent, which the company says remains safe for people but will not support a fire. It likens the environment to being at an altitude of about 7,000 feet, roughly the altitude near Denver, Colorado.

    6:32p
    OVH Builds Custom Servers at its Data Centers

    Rows and rows of custom-built servers inside an OVH data center. (Photo: OVH)

    When the staff at OVH’s newest data center need to install a new server, they don’t have far to go to find one. The huge web hosting firm builds its own custom servers on-site at its facility in Beauharnois, Quebec, creating perhaps the shortest supply line in the industry. OVH, which is based in France but entered the North American market last year, says that a server ordered on its web site can be built on demand in just 15 minutes and installed in a rack within an hour.

    OVH’s latest server models, known as “2013 Reloaded,” integrate the latest generation of RAID cards with 6Gbps of bandwidth and three options for access acceleration: CacheVault technology (with ultra-fast flash memory), FastPath (which improves I/O performance on SSD drives) and CacheCade (which combines SSD and SATA/SAS drives, using SSDs for frequently accessed data and SATA/SAS for primary storage).

    A video from OVH provides an overview of the company’s manufacturing operation in Beauharnois, along with a look at the data center design, which features a peaked roof similar to the Yahoo “chicken coop.” The video runs about two minutes.

    7:15p
    CloudFlare Shifts to Quanta for New Servers

    A look at one of the new CloudFlare G4 servers with the top off. (Photo: CloudFlare)

    Content delivery and security specialist CloudFlare has been growing rapidly as it builds out a global network of data centers. The company has been working with HP to build its servers, but with its latest hardware refresh (known as G4) has opted for customized servers built by Quanta, a Taiwanese company that has built servers for Facebook and Rackspace.

    CEO Matthew Prince provided an in-depth overview of CloudFlare’s hardware development in a blog post today, reviewing its existing G3 servers and some of the ways it has fine-tuned its G4 model to boost performance. The updates feature Intel SSD drives and Sandy Bridge Xeon 2630L processors, adjustments to caching, and a shift in network cards.

    “The biggest change from our G3 to G4 servers was the jump from 1Gbps to 10Gbps network interfaces,” Prince writes. “We ended up testing a very wide range of network cards, spending more time optimizing this component in the servers than any other. In the end, we settled on the network cards from a company called Solarflare. (It didn’t hurt that their name was similar to ours.)

    “Solarflare has traditionally focused on supplying extremely performant network cards for the high frequency trading industry. What we found was that their cards ran circles around everything else in the market: handling up to 16 million packets per second in our tests (at 60 bytes per packet, the typical size of a SYN packet in a SYN-flood attack), compared with the next best alternative topping out around 9M PPS. We ended up using the Solarflare SFC9020 in our G4 servers.”
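
    A rough conversion of those packet rates into bandwidth, using the 60-byte packet size cited in the post (payload only; on-the-wire framing adds further per-packet overhead):

        # Back-of-envelope: packets per second -> Gbps at 60-byte packets.
        packet_bytes = 60
        for name, pps in (("Solarflare", 16_000_000), ("next best", 9_000_000)):
            gbps = pps * packet_bytes * 8 / 1e9
            print(f"{name}: ~{gbps:.1f} Gbps of 60-byte packets")   # ~7.7 vs ~4.3 Gbps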

    For more details, see A Tour Inside CloudFlare’s Latest Generation Servers.

    8:00p
    Facebook’s Power Footprint Growing, Moving East

    A breakdown of power usage at Facebook’s data centers during 2012, from the company’s annual sustainability report.

    Facebook’s data center energy use grew 33 percent in 2012, as the company installed tens of thousands of servers in its new company-built data centers. The growth of the company’s power usage is disclosed in the company’s latest sustainability report, which also documents the company’s move to reduce its computing footprint in Silicon Valley, even as it boosts its reliance on leased space in northern Virginia.

    The surge in Facebook’s energy usage was expected, as the company has been massively scaling up its data center infrastructure to keep pace with growth in its audience, which now includes more than 1 billion monthly users.

    Facebook’s server farms used 678 million kilowatt hours (kWh) of electricity in 2012, up from 509 million kWh in 2011, the company said. That reflects major expansions of its company-built data center campuses in Prineville, Oregon and Forest City, North Carolina. Facebook has invested hundreds of millions of dollars in ultra-efficient facilities at both sites, which slash the energy bill for its vast armada of servers. Facebook’s company-built data centers have an average Power Usage Effectiveness (PUE) of 1.08, among the best marks on the widely used metric for “green” operations.
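
    For reference, PUE is total facility energy divided by IT equipment energy, so a PUE of 1.08 means about 8 percent overhead on top of the IT load. The absolute numbers below are hypothetical; the report gives only the ratio.

        # PUE = total facility energy / IT equipment energy. Hypothetical IT load.
        pue = 1.08
        it_kwh = 100_000_000                  # assumed IT energy for illustration
        total_kwh = pue * it_kwh
        overhead_kwh = total_kwh - it_kwh
        print(f"overhead: {overhead_kwh:,.0f} kWh ({overhead_kwh / it_kwh:.0%} of IT load)")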

    Shifting Capacity from Silicon Valley

    The report confirmed that Facebook has been reducing its server footprint in Silicon Valley, where its power usage declined 18 percent during 2012, dropping from 227 million kWh to 185 million kWh.  That reduction, which we noted last month, has left the company with surplus data center space, which it is now seeking to sublease to third parties.

    Things are different on the East Coast, where Facebook leases wholesale space from multiple providers in Ashburn, Virginia. Facebook’s energy footprint in its Ashburn data centers grew 15 percent last year, from 205 million kWh to 237 million kWh. Updated tax incentives in Virginia are one reason. Facebook says its investments in Virginia “could reach hundreds of millions of dollars” in coming years.

    Here are some other data points on Facebook’s data center energy use from its sustainability report:

    • Facebook’s power usage is nearly evenly split between its facilities on the east coast (335 million kWh) and the west coast (338 million kWh). This trend is driven by the growth in both Forest City and Virginia.
    • Leased data center space accounted for 62 percent of its power footprint at the end of 2012, compared to 38 percent for company-built facilities. This reflects both the efficiency of the new facilities and the company’s continued reliance on third-party “wholesale” data center space.
    • Power usage in Prineville grew from 71 million kWh to 153 million kWh during 2012.
    • In Forest City, which came online at the end of 2011, power usage increased from just 6 million  kWh in 2011 to 98 million kWh in 2012.

    Carbon Footprint Growing

    The report also provided some data relevant to the headline-making scuffle in which the environmental group Greenpeace criticized Facebook for building its data centers in areas that relied on coal for much of the local energy mix. The two sides have since announced a truce, with Facebook adopting a new siting policy reflected in its hydro-powered data center in Lulea, Sweden.

    The carbon footprint of Facebook’s server farms grew from 196,000 metric tons of carbon dioxide in 2011 to 298,000 metric tons in 2012, an increase of 52 percent – a faster pace than the 33 percent rise in the power used by those server farms.

    Why did Facebook’s data center carbon footprint grow faster than its data center energy use? There are a number of factors that may contribute to this. One is the growth of the new server farm in Prineville, Oregon. As has been highlighted by Greenpeace, the Prineville facility initially used energy sourced predominantly from coal. As a result, it has a larger carbon impact than other parts of Facebook’s infrastructure.
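
    Dividing the reported totals gives the fleet’s implied average carbon intensity, which rose year over year. The report does not publish this ratio directly, so the division below is our own arithmetic on its figures.

        # Implied carbon intensity from the report's totals (our arithmetic, not Facebook's).
        co2_metric_tons = {2011: 196_000, 2012: 298_000}
        energy_million_kwh = {2011: 509, 2012: 678}

        for year in (2011, 2012):
            kg_per_kwh = co2_metric_tons[year] * 1000 / (energy_million_kwh[year] * 1_000_000)
            print(year, f"~{kg_per_kwh:.2f} kg CO2e per kWh")   # ~0.39 in 2011, ~0.44 in 2012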

    The Prineville campus used less power than Facebook’s leased Silicon Valley data centers in 2012, but its output of carbon dioxide, in metric tons, was nearly three times higher. That’s likely because most of Facebook’s data centers in Silicon Valley get electricity from Silicon Valley Power, the Santa Clara municipal utility, whose energy mix includes just 9 percent coal-sourced energy.

    Although the sustainability report presents an interesting snapshot of Facebook’s infrastructure, the data is now six months old and Facebook has been busy building in all four of its company-operated campuses – including Lulea, which will improve the company’s carbon-intensity going forward.

    Facebook says its per-user carbon footprint changed only slightly, from .000249 metric tons of CO2e (or 249 grams) per monthly active user in 2011 to .000294 MT of CO2e (294 grams) in 2012. “Put another way, one person’s Facebook use for all of 2012 had the same carbon impact as about three bananas, just as it did in 2011,” the company said.

