Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, December 11th, 2012

    12:30p
    Opera Picks Latisys Ashburn for Latin America Expansion

    The interior of new space in the Latisys data center in Ashburn, Virginia (Photo: Latisys).

    Opera Software, the developer of the Opera web browser, has selected a Latisys data center in Ashburn, Virginia to support international expansion and growth in Latin America. Opera is leasing 270 kW of critical power under a three-year agreement, but the company expects its usage to grow to 1 megawatt or more over that period.

    After a long evaluation process that spanned dozens of providers, Opera determined Latisys Ashburn was right for it. “Latisys was the best choice for Opera,” said Trygve Jarholt, Director of Hosting, Opera Software ASA. “They were technically very excellent, able to support Opera’s HPC type of requirements, future growth needs, and had a green profile with a low PUE.”

    Opera is the provider of one of the most popular mobile browsers, Opera Mini, which touts 191 million users worldwide. Opera Mini compresses data before it reaches the end user, which speeds up mobile browsing, reduces latency and cuts overall bandwidth costs. In regions where bandwidth is at a premium, like Latin America, this compression feature is a key selling point. That helps explain why the company is growing at a significant clip, roughly 100 percent year over year, with even more impressive growth in Latin America.
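
    As a loose illustration of why server-side compression matters on constrained links, the short Python sketch below gzips a sample HTML payload and reports the byte savings. It is a conceptual example only; Opera Mini’s actual proxy renders and transcodes pages in its own format, which this does not reproduce.

        import gzip

        # Conceptual only: Opera Mini's servers render and transcode pages in a
        # proprietary format; plain gzip is used here just to show the bandwidth effect.
        html = ("<html><body>"
                + "<p>Sample page content for a mobile browser.</p>" * 200
                + "</body></html>").encode("utf-8")

        compressed = gzip.compress(html)
        print(f"original: {len(html)} bytes, compressed: {len(compressed)} bytes, "
              f"saved: {1 - len(compressed) / len(html):.0%}")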

    Latisys will support Opera’s ongoing expansion and services in Latin America, and it is a sizable deal, considering how much of the heavy lifting Opera’s servers handle in delivering the Opera Mini experience.

    Why Latisys?

    Opera looked at potential providers in the eastern United States and Canada. It identified 27 candidates, from Toronto in the north all the way down to Atlanta in the south, and soon narrowed the list to six providers that could meet its technical requirements. After technical due diligence and commercial discussions, the company determined that Latisys in Ashburn was right for it. In Latisys, the company says, it has found a provider that can expand to fit its growth, both in power and in space. Opera was also attracted to Latisys’ low PUE (Power Usage Effectiveness), which is less than 1.4 on an annualized basis.
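
    For context, PUE is the ratio of total facility power to the power delivered to IT equipment, so an annualized PUE below 1.4 caps the overhead Opera pays for cooling and power distribution. The rough Python sketch below uses the contract figures cited in this article; the PUE 2.0 comparison point is an assumed figure for a less efficient site, not a Latisys number.

        def total_facility_power(it_load_kw, pue):
            """PUE = total facility power / IT power, so total = IT load * PUE."""
            return it_load_kw * pue

        for it_kw in (270, 1000):  # current contract vs. the roughly 1 MW forecast
            efficient = total_facility_power(it_kw, 1.4)
            typical = total_facility_power(it_kw, 2.0)  # assumed less efficient site, for contrast
            print(f"{it_kw} kW IT load: {efficient:.0f} kW total at PUE 1.4 "
                  f"vs {typical:.0f} kW at PUE 2.0")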

    “Opera is dedicated to providing users with the best possible experience on any phone, device or network,” said Jarholt. “The underlying IT infrastructure is critical, and only Latisys was able to deliver the high-density platform we needed, meeting power requirements of 14KW per cabinet and beyond.”

    Why Ashburn?

    Latisys was able to provide Opera with a high-density, low-latency environment. “From a hosting perspective it is important that the latency is shorter from Ashburn to South America than we can have from other places,” said Jarholt.

    Another factor that came into the decision is location of content providers. “Opera is always looking for locations where we can peer up with content providers,” said Jarholt. “It’s really important that we can peer up with important peering partners, and by doing so, we can reduce latency and increase the end user experience. Seattle and Ashburn are very good locations for peering today, and we believe this is good location for the future as well.”

    Opera already has an agreement in Seattle with Digital Fortress, and the Latisys deal gives it a location in the East to match the one in the West. The company said it is currently evaluating another location in North America as well. If needed, that new location will be added at the beginning of 2014.

    More on Latisys

    Latisys’ data center platform is operated under SOC 2 Type 2 and SOC 3 audited controls. These standards replace SAS 70 and reaffirm Latisys’ commitment to meet the highest standards for availability and security, while making sure all of the appropriate controls and safeguards are firmly in place.

    Earlier this year, Latisys announced that it is adding 72,000 square feet of highly secure, high density, carrier grade IT infrastructure space along with significant power upgrades. After opening DEN2 — Latisys’ newest state-of-the-art data center in Denver — and its 22,000 square foot DC5 expansion in Ashburn, Latisys’ total data center platform now exceeds 343,000 square feet across seven data centers in four major markets.

    12:45p
    NJ Tech Council Holds Data Center Summit

    The New Jersey Technology Council will hold a Data Center Summit (Working in the Clouds) this Thursday, Dec. 13 at the Eisenhower Conference Center in Livingston, N.J.

    The event kicks off at 9:30 a.m. with a morning keynote from Joe Weinman, Senior VP of Cloud Services at Telx and the author of “Cloudonomics,” followed by a morning panel on DCIM moderated by Pete Sacco of PTS Data Center Solutions. The afternoon keynote will be by Rich Miller, Editor in Chief of Data Center Knowledge, followed by a special panel on “Hurricane Sandy: A Discussion on Lessons Learned.”

    For more information and to register, visit the NJ Technology Council web site.

    1:15p
    Financial Platform Hits 500,000 Transactions Per Second

    A financial analytics platform from Gresham Computing is now able to process 500,000 transactions per second, the company said, as financial services companies continue to push technology for blazingly fast processing.

    Gresham’s reconciliation solution, Clareti Transaction Control (CTC), demonstrated that it can process 500,000 transactions per second, equivalent to 1.8 billion transactions per hour. The platform uses “in-memory” matching atop GigaSpaces’ XAP platform, running the application in a pure data and compute GigaSpaces grid with no database attached. The tests were conducted on Intel Xeon processor E7 family hardware at Intel’s computing lab in Reading, United Kingdom.

    CTC set a benchmark of 50,000 transactions per second earlier this year when using a database for reading and writing the data. Reaching 500,000 transactions per second is certainly a feat, and a win for “in-memory” matching when raw speed is the goal.

    Who needs this kind of transaction speed? There are a variety of enterprise uses, but the financial world is a prime example. “The ability to process massive amounts of data in real-time from multiple sources, while also retaining transactionality, scalability, and high availability is the IT holy grail for many financial services enterprises,” said Nati Shalom, CTO of GigaSpaces Technologies.

    Clareti Transaction Control uses its speed and power to sort through financial transactions, helping trading firms analyze risky trades, banks manage cash flows, and enterprises track accounts receivable. An example: CTC is used to closely monitor trades that trigger higher margin requirements, allowing rapid notifications for trading positions that incur outsize risk. The benchmarking of its platform’s processing power aids Gresham Computing in its pitch to the financial world.

    “The high performance achieved during this most recent benchmark shows that CTC can be used in extremely high-volume, low-latency environments to give operational certainty to financial organizations and infrastructure providers conducting billions of transactions per hour in fast-moving markets,” said Neil Vernon, Development Director, Gresham Computing plc.

    Clareti Transaction Control was built using GigaSpaces’ XAP elastic application platform as the foundation of its infrastructure. CTC is optimized to operate in two modes: pure in-memory matching, as in the benchmark above, and matching with a database attached. Pure in-memory, or in-grid, matching is aimed at real-time service in demanding trading environments. With a full database attached, throughput is 50,000 transactions per second, with all transactions written to the database.
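
    Neither Gresham nor GigaSpaces publishes the CTC matching code, so the Python sketch below is only a generic illustration of what “in-grid” matching means: records from two feeds are paired in an in-memory structure, with no database round-trip on the hot path. The record fields and function names are assumptions for the example, not the XAP API.

        from collections import defaultdict

        # Hypothetical, simplified reconciliation: pair records from two feeds by a
        # shared transaction ID, holding unmatched records in memory rather than
        # writing them to a database on every event.
        pending = defaultdict(list)
        matched = []

        def ingest(record):
            key, side = record["txn_id"], record["side"]
            other_side = "B" if side == "A" else "A"
            if pending[(key, other_side)]:
                matched.append((pending[(key, other_side)].pop(), record))
            else:
                pending[(key, side)].append(record)

        ingest({"txn_id": "T1", "side": "A", "amount": 100.0})
        ingest({"txn_id": "T1", "side": "B", "amount": 100.0})
        print(len(matched), "matched;", sum(len(v) for v in pending.values()), "pending")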

    2:00p
    QTS Richmond Data Center Gains LEED Gold Status

    One of the power rooms inside the QTS Richmond Data Center. (Photo: QTS)

    QTS (Quality Technology Services) has achieved Leadership in Energy and Environmental Design (LEED) Gold Certification for Data Center 1 at the QTS Richmond Data Center. Developed by the U.S. Green Building Council (USGBC), LEED is an internationally-recognized green building certification system.

    QTS has been retrofitting the 1.3 million square foot former semiconductor plant for data center use since the first customer installation was completed in November 2010. Since 2011, the company has recycled more than six million pounds of materials from its data centers, including copper, aluminum, steel, plastic and concrete, with the majority of those recoveries occurring in Richmond.

    “It is an impressive achievement to attain LEED Gold in a facility that was not originally built to be a data center,” said Rick Fedrizzi, President and CEO of the U.S. Green Building Council. “QTS’ engineering and data center operation teams, as well as its building and engineering contractors, should be justifiably proud of attaining LEED Gold with a broad array of energy efficient initiatives on the Richmond campus. Their hard work has led to the QTS Richmond Data Center being certified as one of the world’s greenest high-performance buildings.”

    QTS is taking advantage of existing infrastructure that includes 22,000 tons of chiller capacity on site, and a campus power capacity of 100 megawatts.

    “QTS federal and enterprise customers are extremely interested in hosting their IT infrastructure in an energy efficient environment to be good stewards of resources and to reduce their energy consumption costs,” said Jim Reinhart, chief operating officer, development and operations at QTS.

    2:34p
    U.S. Defense Department to Cool Servers With Hot Water

    A server tray using Asetek’s Rack CDU Liquid Cooling system, which is being implemented in a U.S. Department of Defense data center. The piping system connects to a cooling distribution unit. (Source: Asetek)

    The U.S. Department of Defense (DoD) will soon begin cooling its servers with hot water. The DoD said this week that it will convert one of its data centers to use a liquid cooling system from Asetek Inc. The move could clear the way for broader use of liquid cooling in high-density server deployments at the DoD, which says it will carefully track the efficiency and cost savings from the project.

    Asetek was selected for the $2 million project to retrofit a major DoD data center with its direct-to-chip liquid-cooling technology, RackCDU (short for Rack Coolant Distribution Unit), which brings high-performance cooling directly to the hottest elements inside every server in a data center, removing processor heat without the use of traditional computer room air conditioners or water chillers.

    Benefits of Hot Water Cooling

    The RackCDU solution uses hot water cooling, which allows it to work using only outside ambient air (free cooling). While most air cooling systems use chilled water at temperatures as low as 45 degrees, higher water temperatures are possible in tightly designed and controlled environments that focus the cooling as close as possible to the heat-generating components. Turning off the chillers and CRAC units allows data center operators to slash the amount of power required to support their cooling systems. (See Hot Water Cooling: Three Projects Making it Work for more examples of this approach.)
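
    As a back-of-the-envelope sketch of why shutting off chillers matters, the calculation below compares annual cooling energy for an assumed 1 MW IT load under two assumed overhead ratios: 0.40 W of cooling power per watt of IT for a conventional chilled-water plant versus 0.05 W per watt for pumps and dry coolers in a hot-water, free-cooled design. Both ratios and the load are illustrative assumptions, not DoD or Asetek figures.

        IT_LOAD_KW = 1000        # assumed IT load, for illustration only
        HOURS_PER_YEAR = 8760

        # Assumed cooling overheads (watts of cooling power per watt of IT load).
        overheads = {
            "chilled water (chillers + CRACs)": 0.40,
            "hot water / free cooling (pumps + dry coolers)": 0.05,
        }

        for label, overhead in overheads.items():
            annual_kwh = IT_LOAD_KW * overhead * HOURS_PER_YEAR
            print(f"{label}: ~{annual_kwh / 1e6:.1f} GWh of cooling energy per year")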

    “The Department of Defense has become very serious about improving data center efficiency, and they are seeking new approaches to address this mission-critical problem,” said Andre Eriksen, Asetek’s CEO and founder. “Hot water direct-to-chip liquid-cooling is a powerful approach that can capture more than 80% of the heat generated by a data center and remove it from the building, where it can be cooled for free by ambient air or even reused for building heating and hot water. No power whatsoever goes into actively chilling the water.”

    Multiple federal mandates are driving the DoD to increase energy efficiency, increase the use of renewable energy and to consolidate data centers. Similar mandates affect data centers operated by other departments of the Federal government.

    Extending Liquid Cooling to Pizza Boxes and Blades

    Liquid cooling has been used in government data centers that house supercomputers, such as those operated by the Department of Energy (such as Oak Ridge National Laboratory) and the National Security Agency (NSA). The DoD initiative extends liquid cooling to rack servers and blade servers as well.

    The project will convert an existing air-cooled enterprise data center into a liquid-cooled data center without disrupting operations during the transition, while delivering significant improvements in energy consumption and density (enabling consolidation within existing facilities) and creating opportunities to reuse energy by capturing the waste heat from servers.

    Asetek has been a leading supplier of liquid cooling solutions for high-performance gaming PCs and workstations, and recently announced its entry into the data center market.

    The National Renewable Energy Lab (NREL) will analyze energy efficiency performance, savings, lifecycle cost, and environmental benefits of RackCDU, while McKinstry will install investment grade monitoring and collect data results. Measured, validated energy savings and performance results could qualify Asetek liquid cooling technology for broader adoption across the DoD.

    Johnson Controls Federal Systems, a business unit of Johnson Controls, was chosen for the installation and integration of the system. “This new liquid cooling technology has the potential to shape the future of this industry and will provide a low cost retrofit solution that can be applied to virtually all data centers,” said Mark Duszynski, Vice President, Johnson Controls Federal Systems.

    In this video from SC12, Asetek founder and CEO André Eriksen describes the company’s innovative hot water cooling technologies for HPC and cloud computing in an interview with Rich Brueckner of InsideHPC.

    3:30p
    Data Center Links: Equinix, PEER 1, Tier 3, IceWeb

    Here’s our review of some of this week’s noteworthy links for the data center industry:

    Equinix selected by ChinaNetCenter. Equinix (EQIX) announced that ChinaNetCenter, a CDN services provider in China, has joined Platform Equinix to expand its service delivery into the United States. The company deployed a content delivery network (CDN) node in Equinix’s Los Angeles data center, LA1, to serve as a hub from which it can provide its CDN and Internet data center (IDC) services. “To remain a CDN and IDC leader in China, it’s critical for ChinaNetCenter to utilize high-quality data center resources so that we can better serve our customers worldwide,” said David Liu, vice president, ChinaNetCenter. “By choosing to work with Equinix, we now have the agility we need to meet growing customer demands. Equinix provides consistent, reliable and extensible service to us, which is fundamental to our business development.”

    PEER 1 and Tier 3 bring enterprise cloud to Canada. PEER 1 Hosting and enterprise cloud platform company Tier 3 announced that their VMware-based, enterprise-grade cloud platform is now available in Canada. Businesses can now deploy applications on secure, enterprise cloud services in nodes located in Toronto today and Vancouver early next year. The joint offering addresses the enterprise cloud requirement for “data sovereignty” – ensuring that data is stored and managed according to Canadian rules and regulations. “PEER 1 Hosting and Tier 3 operate as very successful providers on their own, making it a mutually beneficial relationship,” said Carl Brooks, infrastructure services analyst at 451 Group. “The move into North America further validates their partnership: Tier 3 gets access to PEER 1 Hosting’s international audience and established hosting expertise, while PEER 1 gets a top-of-the-line IaaS service offering for its customers. The result is a compelling enterprise-grade cloud for end users that the company can expand swiftly to meet demand.”

    IceWEB launches cloud services unit. IceWEB (IWEB), a provider of Unified Data Storage appliances for cloud and virtual environments, announced the launch of its IceWEB Cloud Services Unit, providing secure, infinitely expandable, ubiquitous cloud capabilities to its Unified Storage customers.  IceWEB CEO Rob Howe also hinted at a future content delivery network (CDN) service, for companies that must maintain their data ‘everywhere at once’, and who have been using overpriced solutions previously.  “We are seeing more and more that the cloud as a local service in government, education, healthcare and corporate entities is rocketing to the front of required services,” said Howe. “Using a ‘public’ cloud has caused great harm to many organizations as they see their precious data assets leaving the enterprise. Those entities now urgently need to offer their own cloud services on their own campus or premise, and they will now be able to do that with IceWEB. We will launch our IceFOLDER mobile client software, enabling business associates, students, faculty, customers or constituents to manage their own personal folder in the business or campus cloud from any device—a smartphone, tablet, notebook, whatever, while securing that data according to the controls required by the enterprise on IceWEB appliances running in their on-premise cloud.”

    3:30p
    Survey Sees Big Year Ahead for Big Data

    Enterprises are recognizing big data as mission critical, increasingly need real-time event processing, and are looking to move it all to the cloud, according to a new survey from GigaSpaces. Big Data is all the rage, and the survey of 243 IT professionals from various backgrounds had three major findings:

    • An overwhelming majority of organizations view their Big Data processing as mission-critical, as rated by 80% of respondents.
    • For companies handling Big Data, the need for real-time functionality is both significant and growing – over 70% already need to process streaming Big Data, with half that number needing to handle both high volume and high velocity. The survey indicated that there is increasing readiness to use streaming solutions to deal with the challenges of Big Data and speed up Big Data processing.
    • Most companies have plans to move their Big Data to the Cloud, or are considering the option. Only 20% of the IT professionals surveyed indicated that their company had no plans to move their Big Data to the cloud.

    The last point contained an interesting wrinkle – the Big Data workloads headed for the cloud include mission-critical data and apps. “Eighty percent of those defining their Big Data apps as ‘Mission-Critical’ to the business are planning or considering a move to the cloud, while of those who consider their Big Data ‘Somewhat Important’ the number was 75% for the cloud move,” GigaSpaces reported.

    The survey was conducted via an online survey service aimed at IT audiences, and distributed to IT and business professionals at several industry trade shows. Respondents hailed from a range of positions in a variety of industries, such as Financial Services, Telecom, Retail, Insurance, Media, Government, E-Learning, Software Development, SaaS, and more.

    Both the benchmark and the survey findings indicate a good road ahead for XAP, GigaSpaces’ application platform geared for big data. XAP employs an in-memory data grid for pure processing speed, what it calls “share-nothing” partitioning for reliability and consistency, and an event-driven architecture that enables real-time processing of massive event streams and unlimited processing scalability.
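
    The article does not describe XAP internals, so the Python sketch below is only a generic picture of what “share-nothing” partitioning means: each worker owns a disjoint slice of the key space, events for a given key always land on the same partition, and no state is shared across partitions. The names and structure are invented for the example, not the XAP API.

        # Generic illustration of share-nothing partitioning of an event stream.
        NUM_PARTITIONS = 4
        partitions = [dict() for _ in range(NUM_PARTITIONS)]

        def route(event):
            # Events for the same key always land on the same partition, so each
            # partition can process and store its slice without coordination.
            p = hash(event["key"]) % NUM_PARTITIONS
            partitions[p][event["key"]] = event
            return p

        for i in range(12):
            route({"key": f"user-{i}", "value": i})

        print("events held per partition:", [len(p) for p in partitions])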

    GigaSpaces provides end-to-end scaling solutions for distributed, mission-critical application environments. Its solutions are designed to run on any cloud environment – private, public or hybrid. Its customers include several of the Fortune Global 500, among them financial services firms, e-commerce and online gaming (gambling) providers, and telcos.

    4:15p
    Best of the Data Center Blogs for Dec. 11

    Here’s a roundup of some interesting items we came across this week in our reading of data center industry blogs.

    The Data Center Outsourcing Decision Process Needs Serious Rethinking - Mark Thiele at Switch examines the changing criteria for picking colo: “The nature of how you acquire and use IT has changed, and continues to change to a model more aligned with buying ‘services.’ Even though you may still decide to build and operate your own infrastructure the way you add or remove capabilities will be (should be) much more Lego like (See Fluid IT). My suggestion on the IT environment of the future is that the flexibility and speed of service change everyone will drive towards, will force us to rethink how we buy. It will also force us to seriously consider ‘where’ we buy. No, I don’t mean HP vs. Amax or Cisco vs. Juniper, it’s more of a ‘location’ where.”

    Top Problems with the TOP500 – Where was Blue Waters in the recent Top 500? The new NCSA system didn’t field an entry, and NCSA’s Bill Kramer explains why: “the TOP500 does not provide comprehensive insight for the achievable sustained performance of real applications on any system—most compute-intense applications today stress many features of the computing system and data-intensive applications require an entirely different set of metrics. Yet, many sites feel compelled to submit results to the TOP500 list ‘because everyone has to.’ It is time to ask, ‘Does everyone have to?’ and more specifically, ‘Why does the HPC community let itself be led by a misleading metric?’”

    Cloud-native, cloud-centric, or cloud-ready? – While I’m weary of efforts to define and parse cloud terminology, the latest jargon is discussed in a thread convened by CloudScaling’s Randy Bias: “I am trying to understand if we are close to a consensus on the new apps driving all of the cloud growth. Cloud-native is a term positioned by the folks at MessageBus, cloud-ready is what we at Cloudscaling have been using for a while, and cloud-centric is an IBM-ism.”

    When an HP cloud is not an HP cloud (and whether it matters) - At GigaOm, Barb Darrow looks at the rise of third-party clouds: “Much of Hewlett-Packard’s new cloud relies on non-HP technology. The foundation is OpenStack; the content delivery network (CDN) is from Akamai; and the latest piece, unveiled this week, is a Platform as a Service from ActiveState. That outsourcing of key technologies from a company once known for its “invent” motto, is jarring to some. Others think it’s weird given the tens of billions HP spent on technology acquisitions in the past few years. With all that tech in-house, why send out for more?”

    Why Virtual Machine Recovery Is No Piece of Cake – Madhu Reddy discusses VM disaster recovery at the SunGard Availability blog: “Some vendors like to tout how easy DR becomes with virtualization. Sure, provisioning, deploying, and even moving virtual machines (VMs) to new servers are relatively simple operations. But recovering VMs in large-scale virtual applications environments presents several protection and recovery challenges.”

    5:50p
    Irving Named New CEO of Go Daddy

    Hosting provider Go Daddy has named veteran executive Blake Irving as its Chief Executive Officer, effective January 7. Irving, a former Microsoft executive and most recently Chief Product Officer of Yahoo, will also join the Go Daddy Board of Directors. He succeeds Scott Wagner of KKR, a Go Daddy investor, who has held the position of interim CEO since July.

    “Blake Irving’s deep technology experience and his history of developing new cutting-edge products and leading large global teams make him an enormously compelling choice to drive Go Daddy to the next level of its domestic and global growth,” said Executive Chairman and founder Bob Parsons in a release. “Go Daddy has made great strides with Scott Wagner as CEO, and we look forward to building on that in the future.”

    The new CEO reflects Go Daddy’s strategic shift, as the world’s largest mass-market hosting provider has been expanding its products and services beyond domains and hosting. The Scottsdale, Ariz. company has nearly 11 million customers, a good majority of them small businesses. As part of its effort to expand into ancillary and complementary services for small businesses, Go Daddy recently acquired Outright.com, a cloud-based financial management application. The company also formed a partnership with DudaMobile to launch mobile Website Builder, a service that automates the integration of Web and mobile sites to save businesses time and expand their customer reach.

    In addition to his role as Chief Product Officer at Yahoo, Irving was also corporate VP of Windows Live, Microsoft’s online services portfolio that includes Hotmail and MSN Messenger. There, he had a part in the creation and operation of Microsoft’s global online services, launching several new products, including MSN Messenger, and growing Hotmail from 7 million to 290 million users. As Chief Product Officer at Yahoo, he developed and rolled out the company’s unified product vision and strategy.

    So Irving is a product-and-services type, and Go Daddy is getting deeper into products and services, so this seems like a good fit. Irving left Yahoo in April 2012, when Marissa Mayer was coming in as CEO and many product people were being laid off.

    “As a long time Go Daddy customer, I have seen firsthand an organization that is committed to its customers 24 hours a day and seven days a week and yet there still remains much more we can do to enhance the customer experience,” said Irving. “This is a fantastic opportunity, and I can’t wait to hit the ground running in January.”

    6:43p
    Intel Unveils Low-Power Atom Chips for Servers

    The new Intel Atom S1200 offers a system-on-chip design that uses just 6 watts of power (Photo: Intel Corp.)

    Intel’s first Atom server-class processor has arrived, and some major hosting companies say they’re ready to start filling their racks with servers using the low-power chips. The Atom S1200 features a 64-bit system-on-chip (SOC) that uses as little as 6 watts per processor.

    Today’s announcement is the latest milestone in the ongoing push to reduce energy usage in data centers, which has prompted server vendors to adapt low-power smartphone chips for the server market, including both Intel’s Atom and chips from ARM that are widely used in iPhones and iPads. It continues the development of both “brawny” and “wimpy” cores for different types of computing tasks within the data center, a trend that gives end users additional options for matching processors with workloads to find the best economics and performance.

    “Our strategy continues to be to address all the workloads in the data center,” said Diane Bryant, vice president and general manager of the Datacenter and Connected Systems Group at Intel, who said the S1200 could allow end users to pack as many as 1,000 cores into a standard rack. “As more and more workloads emerge, we’re committed to providing the optimal solution for any given workload.”

    The SoC includes two physical cores and a total of four threads, 64-bit support, a memory controller supporting up to 8GB of DDR3 memory, up to eight lanes of PCI Express 2.0, and Error-Correcting Code (ECC) support for higher reliability. The new product family will consist of three processors with frequency ranging from 1.6GHz to 2.0GHz.
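
    A bit of rough arithmetic against the figures above shows how the 1,000-cores-per-rack claim and the 6-watt rating relate; the per-rack SoC count and the exclusion of memory, storage, fan and power-supply losses are simplifying assumptions for illustration, not Intel’s published configuration.

        # Rough arithmetic from the figures in this article; anything beyond the
        # CPU itself (memory, storage, fans, PSU losses) is deliberately ignored.
        cores_per_soc = 2
        watts_per_soc = 6            # "as little as 6 watts" per processor
        target_cores_per_rack = 1000

        socs_needed = target_cores_per_rack // cores_per_soc
        cpu_watts = socs_needed * watts_per_soc
        print(f"{socs_needed} SoCs -> roughly {cpu_watts / 1000:.1f} kW of CPU power "
              "per rack, before memory, storage, fans and power-supply losses")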

    Ideal for ‘Light Scale-Out’ Workloads

    With today’s announcement, Intel meets its announced timetable to offer a commercial version of the Atom “Centerton” processor in the fourth quarter of 2012. The chip has been in use by early adopters including HP, which has made Atom the focus of the “Gemini” line of servers within its broader Project Moonshot initiative to build low-power next-generation servers.

    “One size no longer fits all,” said Paul Santeler, VP of the Hyperscale Business Unit at HP, who said the new Atom servers are ideal for “light scale-out” workloads including content delivery, caching technologies like memcached, and some “big data” processing requirements. Bryant cited dedicated hosting as a promising area for the use of Atom, and HP has deployed its Gemini with hosting customers.

    “Based on what we have seen so far, HP appears to have developed a highly efficient solution with the Intel Atom S1200 product family that is ideal for light scale-out workloads,” said Marc Burkels, manager of dedicated servers and colocation at LeaseWeb in Amsterdam. “LeaseWeb continuously innovates in its own product portfolio and services, and is always looking for energy-efficient solutions. That’s why we tested an HP Gemini beta system, and compared it with our dedicated server products.”

    Not the First Atom Servers

    Atom chips have been used in production servers by companies that proceeded without a commercial product release from Intel. These include SeaMicro (which was acquired by Intel rival AMD) and France’s OVH Hosting, which builds its own custom servers.

    “We already sell lots of Atom-based systems, but the added reliability features and virtualization support of the Atom processor S1200 will expand the market for lowest-power dedicated servers,” said Miroslaw Klaba, Director of R&D at OVH. “The power efficiency of this new platform will allow OVH to double the density of our entry-level server designs and to offer our customers unmatched flexibility and TCO.”

    Hosting companies aren’t the only interested customers. Facebook hardware honcho Frank Frankovsky was on hand at the Intel briefing to discuss the merits of low-power systems for customers like Facebook.

    “We’re facing unprecedented scale requirements,” said Frankovsky. “That’s why we’re excited about promoting this move to system-on-chip. You really dramatically drop the power required. We believe this is going to use one-half to one-third the watts (as Xeon).”

    Facebook has tested many-core designs from other vendors, including Tilera. Several of the original design manufacturers getting big business from the hyperscale universe, including Huawei, Quanta and Wiwynn, were identified among the server builders that have developed designs incorporating the S1200.

    When asked about competition from ARM-based offerings, Bryant noted that “today there are no ARM-based enterprise servers, so that is not an apples-to-apples comparison. We believe we have a good view into competing processors and we have an advantage and a compelling solution.”

    6:44p
    How To Measure IT Value Is The Real Issue

    Hani Elbeyali is a data center strategist for Dell. He has 18 years of IT experience and is the author of Business Demand Design methodology, which details how to align your business drivers with your IT strategy.

    HANI ELBEYALI
    Dell

    Showing IT value to the enterprise is challenging. The problem is not what value IT creates, but how to measure and communicate that value. Current practices in IT performance measurement, metrics and reporting do not help, as they concentrate on reporting how IT spends money rather than the value created from those expenditures. Businesses usually measure success in monetary terms, profit and loss, and other attainable targets. Investments are made only in initiatives yielding a positive return on investment (ROI).

    However, there is no clear correlation between IT contributions and monetary dividends or gains for the business. In many cases, IT projects are long term, or they start with cost recovery in mind and have a long payback period. These are elements that businesses don’t like to hear. Cost reallocation can make things worse, because each line of business argues that it is paying too much. Therefore, changing the perception of IT into that of a business driver is fundamental to measuring the value of IT.

    What Would Happen if IT Impact on Business Were Reported Correctly?

    To best understand the dynamics of technology’s return to the business, we need to take a closer look at the full picture. Rubin Worldwide published an excellent report on top financial services companies from 2006 to 2010; the report shows evidence of the value strategic technology investment can add to business performance [1], even in a downturn economy. Some interesting results of the report:

    • IT expenses appear flat as a percentage of the firm’s revenue (Figure 1.0)
    • IT expense as a percentage of gross revenue went down by 1.6 percent (Figure 1.0)
    • IT expenses rise relative to business expenses
    • 70 percent more compute at a 9 percent expense increase
    • Technology expenses are up 17 percent per employee, but revenue is up 26.7 percent
    • Number of IT employees remains flat (1.1M)
    • Business grew 19.4 percent

    Figure 1.0: IT Expense vs. Income

    By analyzing the report we can derive two important results:

    • IT helped grow the business by 20 percent and drive 26.7 percent top-line revenue growth, while keeping its operating expenses flat
    • The business continues to squeeze out and shrink the wrong operating expense; IT is only 8.4 percent of the firm’s revenue

    Measuring IT Based on Demand

    Based on the trends provided, IT is evidently essential in creating competitive advantage. It is worth noting that companies that made surgical investments in IT when the economy contracted had an advantage over those that didn’t. Notably, those who held back on IT had to invest at a later time to catch up, but by then they were already behind in market position.

    What I’m proposing is a simple solution called Business Demand Design (BDD), which is a new way of developing an IT approach — measuring IT value from a top-down and bottom-up perspective. The BDD framework creates a real-time infrastructure by guiding the transformation of today’s IT practices to better align with the business as it faces a changing competitive landscape. This effectively helps delineate the benefits of IT for each line of business, essentially providing a way to report business value creation to the business in real time and on demand.

    Figure: Supply and demand.

    A BDD approach would include methods to:

    • Profile the demand characteristics of each line of business (LoB) so that they can be turned into design criteria when optimizing platforms for specific application workloads.
    • Identify and group similar workload types so that they can be properly mapped to applications (a rough sketch of this grouping step follows this list).
    • Run these workloads on the right-fit IT architecture.
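
    As a loose illustration of the grouping step above, the toy Python sketch below classifies a few workloads by simple demand attributes and suggests a platform profile for each. The attribute names, thresholds and platform labels are invented for the example and are not part of the published BDD methodology.

        # Toy illustration of grouping workloads by demand profile; attributes and
        # thresholds are invented for the example, not part of BDD itself.
        workloads = [
            {"name": "trading-risk",  "latency_sensitive": True,  "peak_to_avg": 5.0},
            {"name": "payroll-batch", "latency_sensitive": False, "peak_to_avg": 1.2},
            {"name": "web-frontend",  "latency_sensitive": True,  "peak_to_avg": 3.0},
        ]

        def platform_profile(w):
            if w["latency_sensitive"] and w["peak_to_avg"] > 2:
                return "low-latency, bursty -> scale-out / in-memory platform"
            if not w["latency_sensitive"]:
                return "batch -> consolidated, utilization-optimized platform"
            return "steady interactive -> general-purpose virtualized platform"

        for w in workloads:
            print(w["name"], "->", platform_profile(w))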

    Editor’s note: Previously, this author wrote on Business Demand Design in this column, Demand Design: Beyond ‘Keeping the Lights On’.

    Please note the opinions expressed here are those of the author and do not reflect those of his employer.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

     

    [1] Dr. Rubin, Rubin World Wide (http://www.rubinworldwide.com/)

