Data Center Knowledge | News and analysis for the data center industry
 

Monday, September 14th, 2015

    12:00p
    How to Make Your Data Center PUE Calculation More Accurate

    Victor Avelar is a Senior Research Analyst for Schneider Electric's Data Center Science Center.

    The quest to conserve energy in the data center is ongoing. It has motivated data center managers to seek a simple, standard means of tracking a facility's total power usage against the amount of power used by the IT equipment. To address the need for an industry-wide benchmark, The Green Grid developed the power usage effectiveness (PUE) calculation in 2007 as a principal way to measure data center infrastructure efficiency.
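    The Green Grid's definition reduces to a single ratio: total facility energy divided by the energy delivered to IT equipment. A minimal sketch of that arithmetic, with illustrative figures only:

    ```python
    def pue(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
        """Power usage effectiveness: total facility energy divided by IT energy.

        A value of 1.0 would mean every kilowatt-hour goes to IT equipment;
        real facilities land somewhere above that.
        """
        if it_equipment_energy_kwh <= 0:
            raise ValueError("IT energy must be positive")
        return total_facility_energy_kwh / it_equipment_energy_kwh

    # Illustrative example: 1,600,000 kWh total vs. 1,000,000 kWh of IT load
    print(pue(1_600_000, 1_000_000))  # 1.6
    ```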

    While PUE has become the de facto metric for measuring infrastructure efficiency, data center managers must clarify three things before embarking on a measurement strategy: there must be agreement on exactly what devices constitute IT loads, what devices constitute physical infrastructure, and what devices should be excluded from the measurement. Without first clarifying these three things, it is difficult for data center managers to ensure the accuracy of their PUE. This is easier said than done, however, as a number of issues can make it problematic to classify power-consuming subsystems as 1) IT loads, 2) physical infrastructure, or 3) neither:

    • Some devices found in data centers draw power, but it is unclear how (or whether) their power data should be counted in the efficiency calculation.
    • Some power-consuming subsystems, such as outdoor lighting or the Network Operations Center (NOC), are not present in every data center.
    • Some subsystems support a mixed-use facility and are shared with other non-data center functions (for example, cooling towers and chiller plants) so fractions of the power attributable to the data center cannot be directly measured.
    • Some practical power measurement points include loads that are unrelated to the data center, but cannot be separated from the measurement.

    Compounding the issue further is the fact that commonly published efficiency data is not computed using a standard methodology, and the same data center can have a different energy efficiency rating when different methodologies are applied. So what can a data center or facility manager do?

    A Three-Pronged Solution to PUE Calculations

    Since most data center operators who attempt to determine PUE will encounter one or more of the above problems, a standard way to deal with them should be defined. The three-pronged approach outlined below can be used to effectively determine PUE.

    This methodology defines a standard approach for collecting data and drawing insight from data centers. It also helps data center managers understand how to use this approach to calculate PUE, with a focus on what to do with data that is either misleading or incomplete.

    One: Establish a Standard for Classifying IT Loads and Physical Infrastructure

    The first part of this methodology is to establish a standard that categorizes each data center subsystem as (a) IT load, (b) physical infrastructure, or (c) excluded from the calculation. While it's fairly simple to designate servers and storage devices as IT load, and to lump the UPSs and HVAC systems into physical infrastructure, there are subsystems in the data center that are harder to classify. For example, the personnel spaces, switchgear, and the NOC all consume power but do not clearly fall into these categories. If these subsystems are not uniformly classified for all data centers, it is not possible to directly compare computed efficiency results across different data centers within your portfolio. Since many customers, government bodies, utilities, and data center providers are looking for a standard benchmark for data center efficiency, clear guidelines for what is classified as IT load or physical infrastructure are critical to establishing a benchmark that can be used across various data centers.
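    One way to make such a standard concrete is to maintain an explicit classification table that every facility in the portfolio uses. The sketch below is illustrative only; the category assignments are placeholders rather than recommendations, and should follow whatever standard your organization adopts.

    ```python
    from enum import Enum

    class Category(Enum):
        IT_LOAD = "IT load"
        INFRASTRUCTURE = "physical infrastructure"
        EXCLUDED = "excluded from the PUE calculation"

    # Placeholder classification table. The easy cases mirror the examples in
    # the text; the harder cases (NOC, switchgear, personnel spaces) are shown
    # with placeholder assignments that your own standard may override.
    CLASSIFICATION = {
        "servers": Category.IT_LOAD,
        "storage": Category.IT_LOAD,
        "UPS": Category.INFRASTRUCTURE,
        "HVAC": Category.INFRASTRUCTURE,
        "switchgear": Category.INFRASTRUCTURE,   # placeholder
        "NOC": Category.INFRASTRUCTURE,          # placeholder
        "personnel spaces": Category.EXCLUDED,   # placeholder
    }

    for subsystem, category in CLASSIFICATION.items():
        print(f"{subsystem}: {category.value}")
    ```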

    Two: Calculate PUE for Shared Devices

    Some power-consuming devices associated with a data center are shared with other uses, such as a chiller plant or a UPS that also provides cooling or power to a call center or office space.

    Even an exact measurement of the energy use of these shared devices doesn’t directly determine the data center PUE, since only the device’s data center-associated power usage can be used in the PUE calculation. One way to handle this is to omit the shared devices from the PUE, but this approach can cause major errors, especially if the device is a major energy user like a chiller plant.

    A better way to handle a shared device is to estimate the fraction of its losses that are associated with the data center, and then use those losses to determine the PUE. There are three ways to do this, using a chiller plant as an example:

    • Measure or estimate the thermal load on the chiller from the electrical losses of all the other data center loads, then measure or estimate the chiller's performance. This approach provides a good estimate of how much of the chiller's power the data center is using.
    • Measure the fractional split of the thermal load between the data center and the other loads using water temperature, pressure, pump settings, etc. Then measure the chiller input power and allocate that fraction of the chiller power to the data center according to the split (a sketch of this approach appears after this list).
    • Shut off the non-data-center loads on the chiller, and then measure it to determine the power offset attributable to the data center. These indirect estimates are best made during an expert data center energy audit; once the technique is established, it can be reused over time when efficiency trending is important.
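    As a rough illustration of the second approach, the sketch below allocates a shared chiller's energy to the data center by the fractional split of thermal load. The function name and the figures are hypothetical, not measurements from any particular facility.

    ```python
    def data_center_chiller_energy(chiller_energy_kwh: float,
                                   dc_thermal_load_kw: float,
                                   other_thermal_load_kw: float) -> float:
        """Allocate a shared chiller's energy to the data center by the
        fractional split of thermal load (the second approach above)."""
        fraction = dc_thermal_load_kw / (dc_thermal_load_kw + other_thermal_load_kw)
        return chiller_energy_kwh * fraction

    # Illustrative numbers: the chiller used 500,000 kWh over the period and the
    # data center accounted for 800 kW of a 1,000 kW thermal load -> 80% share.
    dc_share = data_center_chiller_energy(500_000, 800, 200)
    print(dc_share)  # 400000.0 kWh counted as data center physical infrastructure
    ```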

    Three: Provide an Estimate for Devices that are Impractical to Measure

    While the energy use of every device in the data center can in principle be measured, doing so can be impractical, complex, or expensive. Consider a power distribution unit (PDU). In a partially loaded data center, the losses in PDUs can be in excess of 10 percent of the IT load. These losses can significantly impact PUE, yet most data center operators omit PDU losses from PUE calculations because they can be difficult to determine using the built-in PDU instrumentation.

    Fortunately, the losses in a PDU are quite deterministic and can be calculated directly from the IT load with good accuracy if the load is known in watts, amps, or VA. In fact, this tends to be more accurate than relying on the built-in instrumentation. Once the estimated PDU losses are subtracted from the UPS output metering to obtain the IT load, they can be counted as part of the infrastructure load. This improves the PUE calculation compared to ignoring PDU losses altogether.
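    A hedged sketch of that adjustment: the PDU loss model below uses a fixed no-load term plus a load-proportional term with illustrative coefficients; real values would come from the PDU's loss curve or an energy audit.

    ```python
    def estimated_pdu_loss_kw(it_load_kw: float,
                              no_load_loss_kw: float = 1.5,
                              proportional_loss: float = 0.02) -> float:
        """Estimate PDU losses from the IT load with a simple two-term model:
        a fixed no-load loss plus a component proportional to the load.
        The coefficients here are illustrative, not vendor data."""
        return no_load_loss_kw + proportional_loss * it_load_kw

    ups_output_kw = 250.0                      # metered at the UPS output
    pdu_loss_kw = estimated_pdu_loss_kw(ups_output_kw)
    it_load_kw = ups_output_kw - pdu_loss_kw   # PDU losses move to infrastructure
    total_facility_kw = 400.0                  # metered at the utility feed
    print(total_facility_kw / it_load_kw)      # PUE with PDU losses accounted for
    ```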

    With this three-pronged standard methodology, data center managers can accurately and effectively determine PUE to ensure their data centers meet not only energy efficiency regulations but larger business goals as well.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:00p
    Kaminario Upgrades All-Flash Arrays, Drops Flash Cost below $1 per GB

    Making use of a new generation of 3D TLC solid-state drives, Kaminario unveiled upgrades to its all-Flash K2 arrays that drop the cost of usable Flash storage to less than $1.00 per GB. In addition, Kaminario announced that it is now including native replication in its K2 arrays with the release of version 5.5 of the company’s storage management software.

    Kaminario CEO Dani Golan said the latest version of the K2 provides a sustainable edge over competitors that cannot pack 360TB into a base unit that customers can scale up or out as they see fit. All told, Golan said the K2 systems can provide access to up to 2 petabytes of all-Flash storage at speeds of 1.5 million IOPS per rack.

    “We’re addressing scalability, cost effectiveness and ease of use,” said Golan. “It’s up to the IT organization to decide if they prefer to scale up or scale out.”

    Golan added that Kaminario is the only all-Flash array vendor willing to guarantee effective capacity. If a customer is unable to store the capacity that is guaranteed, they will receive additional hardware at no expense to expand their system and fulfill that guarantee. All told, Golan said the latest editions of the K2 all-Flash array are priced anywhere from 40 to 60 percent less than equivalent systems from competitors.
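    How an effective-capacity claim translates into cost per usable gigabyte is straightforward arithmetic. The sketch below uses hypothetical figures, not Kaminario's actual pricing or data-reduction ratios.

    ```python
    def effective_cost_per_gb(system_price_usd: float,
                              raw_capacity_tb: float,
                              data_reduction_ratio: float) -> float:
        """Effective (usable) cost per GB after inline data reduction.
        All inputs are hypothetical; 1 TB is treated as 1,000 GB."""
        effective_gb = raw_capacity_tb * 1000 * data_reduction_ratio
        return system_price_usd / effective_gb

    # A hypothetical $300,000 array with 100 TB raw capacity and 4:1 data reduction
    print(round(effective_cost_per_gb(300_000, 100, 4.0), 2))  # 0.75 (dollars/GB)
    ```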

    Finally, Golan said the K2 is optimized for writing to SSDs, not just reads, and includes advanced inline data reduction, which together ensure that the 3D TLC technology used in the SSDs can meet the rigors of enterprise computing applications.

    Of course, the debate over whether to use all-Flash arrays versus hybrid storage systems continues unabated. In some quarters, hybrid storage is viewed as being more efficient because only a small amount of SSD capacity is needed to support primary storage and caching. The rest of the data can then be stored on traditional hard disk drives.

    However, Golan noted that with the advent of SSDs based on 3D TLC, the price differential between SSDs and HDDs is narrowing to the point where the other benefits of SSDs outweigh the cost-per-gigabyte issue. Those benefits include less physical space consumed in the data center by storage, less heat generated, and less complexity in moving data between different types of storage media.

    Ultimately, the decision to go with an all-Flash array versus a hybrid array may come down to personal preference. The one thing that is for certain is that the I/O performance issues that used to consume so much time and energy in the data center may soon very well be a thing of the past.

    3:30p
    Hortonworks Acquires Onyara for Analytics on Data in Motion

    Hadoop provider Hortonworks announced the addition of a product called DataFlow to its Open Enterprise Hadoop platform, the result of its acquisition of Onyara, a specialist in analytics on data in motion.

    Hortonworks hopes these new capabilities will help its customers address the velocity of streaming data coming from the Internet of Things by automating and securing data flows and by collecting, conducting, and curating real-time business insights and actions. In a Hortonworks blog post, CTO Scott Gnau said that big data implementations need a Rosetta Stone: the ability to integrate data from different sensors, created at different times, will define true success at the enterprise level.

    Washington, D.C.-based Onyara was founded only five months ago, but its engineers have worked for the National Security Agency over the past decade and developed the technology that would eventually evolve into the Apache NiFi project. NiFi aims to make the remote acquisition of data in motion easier, providing a robust, enterprise-class dataflow solution for managing massive-scale streams of data. The Hortonworks Data Platform will add DataFlow functionality, powered by Apache NiFi.

    “Nearly a decade ago when IoAT began to emerge, we saw an opportunity to harness the massive new data types from people, places and things, and deliver it to businesses in a uniquely secure and simple way,” said Joe Witt, chief technology officer at Onyara. “We look forward to joining the Hortonworks team and continuing to work with the Apache community to advance NiFi.”

    In an Onyara blog post, Witt explains that a few distinct features differentiate NiFi from a growing field of dataflow solutions. The first is NiFi's HTML5, browser-based, interactive drag-and-drop interface for dataflow management and testing. The second is its metadata-based chain of custody, which gives users insight into the 'who, what, when, and where' of the discrete bits of data in a NiFi dataflow.
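    As a conceptual illustration of what a chain-of-custody record captures, the sketch below models the 'who, what, when, and where' of a single data item. It is not NiFi's actual data model or API; the field names are hypothetical.

    ```python
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class ProvenanceEvent:
        """Conceptual sketch of a chain-of-custody record for a piece of data
        moving through a dataflow. This illustrates the idea only; it is not
        NiFi's actual data model."""
        flowfile_id: str      # what: identifier of the data item
        component: str        # who: the processor that acted on it
        event_type: str       # e.g. RECEIVE, ROUTE, SEND
        timestamp: datetime   # when the event occurred
        location: str         # where: the host or site it occurred on

    event = ProvenanceEvent(
        flowfile_id="sensor-7f3a",
        component="RouteOnAttribute",
        event_type="ROUTE",
        timestamp=datetime.now(timezone.utc),
        location="edge-node-02",
    )
    print(event)
    ```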

    NiFi technology was tried and tested in the intelligence community, but Hortonworks will look to build new products and services based on it and broaden its appeal to other industries. Witt adds that resources for NiFi within Hortonworks will grow as it looks to accelerate the pace and depth at which Onyara had already been working with Hadoop, Apache Spark, Storm, Kafka, and others.

    4:00p
    Oracle Launches All-Flash FS1 SAN

    Joining the all-flash storage array party, Oracle announced an all-flash version of its FS1 Flash Storage System, engineered expressly for Oracle software. Unique to the new FS1-2, Oracle says, is its Hybrid Columnar Compression data reduction technology, which typically delivers a 10:1 compression ratio, almost twice the reduction usually obtained with deduplication techniques.

    Designed for concurrent mixed workloads, the new system, Oracle says, will scale to as many as 64 all-flash domains for highly secure data isolation in multitenant cloud environments, with I/O prioritization based on business value. With a focus on high IOPS and throughput, Oracle says the FS1-2 can scale to 912TB of flash and up to 2.8PB of combined flash and disk. The company also claims a sub-30-minute pallet-to-power-on setup time.

    While defending its storage portfolio, Oracle drives home the performance benefits of co-engineering hardware and software compared with others, focusing specifically on EMC. In tests it performed for customers, Oracle says it demonstrated sub-one-millisecond latency when running simultaneous workloads across small to large block sizes, with up to 8x faster IOPS and 9.7x faster write throughput than EMC XtremIO.

    Oracle’s senior vice president of Flash Storage Systems Mike Workman says that customers are looking to flash to run workloads at full speeds and prevent slow OLTP response times. Workman notes that the new FS1-2 system “dramatically reduces I/O wait times typically seen in today’s highly virtualized, transaction-driven enterprises where low latency is critical to response time.”

    IDC Storage research director Eric Burgener places the new offering in the category of true All-Flash Arrays (AFAs) and says that “AFAs feature unique designs that are specifically optimized for flash media, delivering more consistent performance across their entire throughput range than Hybrid Flash Arrays, and making them the storage platform of choice for application environments that demand the highest levels of performance.”

    4:45p
    Thomson Reuters to Serve Singapore Exchange Data Center Customers

    Trading companies using colocation services in the Singapore data center operated by the Singapore Exchange now have the option to connect directly to the market data platform operated by New York-based media giant Thomson Reuters.

    Singapore is already one of Asia’s primary business and telecommunications hubs, and as its importance continues to grow, so do the local data center infrastructure needs of companies both foreign and domestic. Many Singapore firms have “small infrastructure footprints,” according to Reuters, but have large market data requirements to access global markets, which can be expensive.

    Reuters already offers access to its Elektron suite as a managed service out of its own Singapore data center. The addition of Elektron Managed Services inside Singapore Exchange's colocation facility simply expands the number of market players who can potentially use it.

    Most key players in Singapore’s trading ecosystem are likely to have servers in the Singapore Exchange data center anyway. Now, if they want a direct link to Elektron, they can simply buy a cross-connect between their engines and the Elektron servers in the same data center.

    Elektron is a link to global markets, offering low-latency market data feeds and analytics. Reuters also provides Elektron Managed Services in data centers in Hong Kong, Shanghai, Mumbai, Tokyo, and Sydney, as well as in Frankfurt, London, New York, Chicago, Toronto, Mexico City, and Sao Paulo.

    The Singapore data center colocation market is already large for the city state’s size and growing quickly. Together with Hong Kong, it is viewed as a primary connectivity portal to mainland China, as well as a hub for connecting to the rest of the markets in Asia Pacific.

    One analyst report published earlier this year estimated that Singapore was a $1 billion colocation market in 2014. The report projected the market would add another $200 million in revenue next year.

    There are about 50 data center providers on the island. The biggest players in the market are Singaporean telecommunications company Singtel and Silicon Valley-based colocation giant Equinix. Other major players are UK’s Global Switch, Singapore’s Keppel, and San Francisco-based Digital Realty.

    Only four of the top 10 colocation companies in Singapore are local.

    5:00p
    Microsoft Appoints Smith First Company-wide President in 13 Years


    This post originally appeared at The Var Guy

    By DH Kass

    Microsoft, which hasn’t had a company-wide president since 2002, elevated Brad Smith, its top lawyer and 22-year company veteran, to president and chief legal officer.

    Smith, who helped reform the company’s long-standing aggressive legal posture, slides into the slot last occupied by Rick Belluzzo in 2002. Smith was promoted to general counsel that year as the vendor famously battled with antitrust regulators, bitter rivals including its now-collaborator Apple, and foreign lawmakers, at times settling cases or making deals involving billions of dollars.

    Microsoft chief executive Satya Nadella, in an internal company email, said Smith will continue his legal responsibilities and, in his new role will “lead the work needed to accelerate initiatives that are important to our mission and reputation such as privacy, security, accessibility, environmental sustainability and digital inclusion, to start.”

    Smith has played a key role in Microsoft’s ongoing legal battle with US law enforcement officials over access to a Microsoft customer’s emails stored in the company’s Dublin, Ireland, data center.

    Nadella said Smith “will work with me and others on the senior leadership team in the coming weeks to help us organize ourselves for success, identifying the right way to have impact on these cross-company initiatives.”

    In addition, he thanked Smith for his “ongoing contributions to Microsoft–I learn from you constantly and deeply value your advice,” he said. “You exemplify the growth mindset that we’re working so hard to permeate our culture. You look for new opportunities, you listen, you learn and you push us forward. I look forward to what you will do in the years ahead.”

    Lawyers on Smith’s team will pick up some of his regular legal responsibilities, Nadella said.

    Smith’s appointment as president comes some three months after Nadella enacted a wide-ranging structural overhaul, naming a new 12-person senior leadership team, handing more responsibility to Windows head Terry Myerson, and showing the door to Stephen Elop, head of its devices business.

    This first ran at http://thevarguy.com/business-technology-solution-sales/091415/microsoft-appoints-smith-first-company-wide-president-13-years

    5:57p
    Thomas Cook’s Ex-CEO Harriet Green to Lead New IBM IoT Business Units

    IBM has formed two new business units that will apply the company’s Big Data, analytics, and cognitive computing capabilities to the Internet of Things and education markets. The company appointed Harriet Green, former CEO of the Thomas Cook Group, to lead the new units.

    The two IBM IoT units are part of the $3 billion investment initiative the company announced in March. The four-year spending program’s goal is to develop solutions that take advantage of cognitive computing, cloud data services, and developer tools geared for specific industries, all meant to help organizations address the Internet of Things.

    IBM’s cognitive computing work encompasses artificial intelligence and machine learning as well as machine-human interaction. The company’s flagship group of technologies representing this work is Watson, which first appeared in public as a supercomputer that played Jeopardy against the TV game show’s past winners in 2011 and won.

    IBM has been quickly productizing the technology behind Watson in a multitude of ways, from specialized solutions for industries, such as financial services or healthcare, to general-purpose Big Data analytics services delivered via public cloud.

    Green was appointed CEO of Thomas Cook, the British travel industry giant, in 2012 but was ousted abruptly late last year, which caused a big drop in the company’s share price. She joined when the company’s shares were at one of their lowest points of the decade and was credited with turning the ailing company around.

    As VP and general manager, Green will be responsible for developing the new IBM IoT and Education business units. The company plans to grow her team to more than 2,000 consultants, researchers, and developers, IBM said in a statement.

    IBM has been involved in coalitions to encourage interoperability between IoT technologies. Earlier this month, it announced an alliance with the processor company ARM to make its IoT products and services compatible with ARM’s mbed operating system.

    Last year, together with AT&T, Cisco, GE, and Intel, IBM formed the Industrial Internet Consortium to define open interoperability standards and common architectures for interconnecting “devices, machines, people, processes, and data.”

    8:36p
    Power Outage in Hostway’s Chicago Data Center Impacts DNS Service for Some Customers


    This article originally appeared at The WHIR

    Web hosting provider Hostway has experienced a localized power failure in its Chicago data center that is impacting DNS service for some of its customers, according to a message on Twitter by the company on Monday afternoon. Nearly half of its affected customers are back online as of 4 pm ET, about an hour and a half after the company first reported the issue.

    According to Hostway’s Twitter account, technicians were on site as of 3:27 pm ET and had restored a portion of the impacted services. Hostway started to report issues at 2:33 pm ET on Twitter.

    The company said it will post an update every 15 minutes on Twitter and on its blog until it restores service completely.

    For web hosts communicating outages or service issues to customers, being up front about the communication plan is a good idea. Transparency during outages builds trust with users, and letting them know specifically when they can expect updates is an easy way to stay in touch with customers.

    Hostway explained that the issue started with one of its three uninterruptible power supplies (UPS) “which caused a critical fault and triggered safety breakers.”

    “Bypassing the affected systems could have resulted in electrical damage to impacted servers,” Hostway said in a tweet.

    The WHIR has reached out to Hostway for a comment and will update the story when we hear back.

    This first ran at http://www.thewhir.com/web-hosting-news/power-outage-in-hostways-chicago-data-center-impacts-dns-service-for-some-customers

