Data Center Knowledge | News and analysis for the data center industry

Tuesday, March 12th, 2013

    12:30p
    Why eBay’s Digital Service Efficiency Changes the Game

    Winston Saunders has worked at Intel for nearly two decades and currently leads server and data center efficiency initiatives. Winston is a graduate of UC Berkeley and the University of Washington. You can find him online at “Winston on Energy” on Twitter.

    Winston Saunders, Intel

    My two days earlier this March at the 2013 Green Grid Forum in Santa Clara were well spent. There were many highlights, to name a few: the roll-out of an equipment disposal metric, earthquake- and natural-disaster-proof data center design in Japan, and an informative panel discussion with leading industry experts on the efficiency challenges of Exascalar.

    But the real game changer for me was the presentation by Dean Nelson (Vice President, Global Foundation Services, eBay) of eBay’s Digital Service Efficiency indicator, which is available online at dse.eBay.com. (See DCK coverage of the announcement: eBay’s DSE: One Dashboard to Rule Them All.)

    Total Transparency

    The eBay team has solved two of the most persistent problems in IT — with full disclosure on how to do it.

    The first persistent problem is getting the IT, Finance, Business Unit, and Facilities teams on the same page. Now, to be fair, eBay is not the only company doing this. Joe Kava (Vice President – Data Centers at Google) has also disclosed the organizational structure Google uses to optimize the overall efficiency of its most costly asset (i.e., its data centers). But where eBay breaks the mold is in disclosing the next level of detail. As Stanford professor Jon Koomey says so eloquently: “Fixing misplaced incentives is the most important step toward realizing (the potential for efficiency improvement).”


    The eBay Digital Service Efficiency (DSE) dashboard helps eBay to see the full cost, performance and environmental impact of customer buy and sell transactions, giving eBay a holistic way to balance and tune its technical infrastructure.

    The second problem eBay solved is communicating the detailed management of its core business, as shown in the Digital Service Efficiency indicator reproduced above.

    An important element of this data is that, through open disclosure, it permits us to quantify the sustainability benefits of our digital economy. It is relatively easy to calculate the carbon and water impact of each transaction from the data. For instance, at the roughly 46,000 transactions/kWh recorded in 2012, eBay used about 0.053 mL of water (See calculation) and emitted about 22 mg (See calculation) of carbon per transaction. To put this into perspective, that equates to about one drop of water (See calculation) and a mass of CO2 equivalent to a small grain of rice (See calculation) for each transaction.
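    To make the arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The per-kWh intensities are my own assumptions, inferred simply by inverting the per-transaction figures above (roughly 1 kg of CO2 and 2.4 L of water per kWh); they are illustrative, not eBay’s published inputs.

        # Back-of-envelope check of the per-transaction figures above.
        # The per-kWh intensities are assumptions inferred from the article's
        # numbers, not values published by eBay.
        TX_PER_KWH = 46_000        # transactions per kWh, from eBay's 2012 DSE data
        CO2_G_PER_KWH = 1_000      # assumed carbon intensity, grams CO2 per kWh
        WATER_ML_PER_KWH = 2_440   # assumed water intensity, mL per kWh

        co2_mg_per_tx = CO2_G_PER_KWH * 1_000 / TX_PER_KWH   # ~22 mg CO2 per transaction
        water_ml_per_tx = WATER_ML_PER_KWH / TX_PER_KWH      # ~0.053 mL per transaction
        print(f"{co2_mg_per_tx:.0f} mg CO2 and {water_ml_per_tx:.3f} mL water per transaction")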

    Compare your impact with eBay's app.

    Now you can compare your carbon impact: brick-and-mortar retail vs. online retail.

    That’s impressive when you consider the carbon footprint of a physical transaction. Imagine I drive to a store five miles from my home to make a similar purchase. Assuming an efficiency of 20 mpg, I’ll end up generating about 4.5 kg of CO2 (See EPA calculation) for the round trip, or about the mass of a decently sized kettlebell. Here I’ve associated only the driving with the transaction. One might argue that shipping and other factors should be included, but those details vary greatly and really only obscure the comparison of virtual to physical transaction cost. The main point is that transactions through a well-managed data center have an astoundingly lower resource impact than physical activity: in this case, almost a factor of 200,000.
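    As a quick sanity check on that factor, here is a sketch of the comparison under the same assumptions (the EPA’s roughly 8.9 kg of CO2 per gallon of gasoline is a published figure; the 22 mg per online transaction is the number derived above):

        # Rough comparison: one driving trip to a store vs. one online transaction.
        ROUND_TRIP_MILES = 10          # five miles each way
        MPG = 20                       # assumed vehicle efficiency
        CO2_KG_PER_GALLON = 8.89       # EPA estimate for burning a gallon of gasoline

        drive_co2_kg = ROUND_TRIP_MILES / MPG * CO2_KG_PER_GALLON   # ~4.4 kg CO2
        online_co2_kg = 22e-6                                       # ~22 mg per transaction

        print(f"driving: {drive_co2_kg:.1f} kg vs. online: {online_co2_kg * 1e6:.0f} mg")
        print(f"ratio: ~{drive_co2_kg / online_co2_kg:,.0f}x")      # ~200,000x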

    True Game Changer

    That’s why eBay has changed the game. The company has provided a role model, organized around common metrics, for optimizing the overall effectiveness of IT, and it has openly disclosed the metrics and indicators it uses to do so. This has the huge side benefit that we, as eBay customers, can understand the impacts of our own actions in terms of carbon and water use. And while there is at this point scant publicly available data on water use, eBay’s early leadership with the Green Grid’s WUE metric sets the stage for greater openness throughout the industry.

    Kudos to the eBay team for their openness, this disclosure breakthrough, and their leadership in data center management.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:00p
    Cordys Enables Cloud Brokerage For Telcos, Service Providers

    Cordys believes its diverse portfolio of services forms the ideal platform to help businesses transition to the new cloudy world. Cordys helps companies move from Infrastructure as a Service (IaaS) to providing a Platform as a Service (PaaS) and serving as a cloud broker of partner services offered atop that platform. Companies like Savvis and Clouditalia have used Cordys to transform their businesses.

    In short, Cordys provides a platform on which a telecom or systems integrator can build a cloud business. It provides the Lego-like building blocks, and you build your castle (or Death Star, depending on your predilections). It enables IT collaboration and supports any type of workflow.

    Technically speaking, Cordys is a global Business Process Management System (BPMS) and Platform as a Service (PaaS) software vendor. It’s a single-stack business operations platform that bridges three seemingly disconnected worlds: Business Process Management (BPM), SOA integration, and Composite Applications Development. By providing a composite application framework (CAF), it serves to enable three major groups of potential cloud broker providers: telcos, systems integrators, and new dedicated cloud providers.

    Moving Up The Cloud Value Chain

    That’s an acronym-rich way of saying that Cordys allows customers to move up the cloud value chain.

    “The market is changing,” said Art Landro, CEO of Cordys. “You see the adoption of cloud and cloud brokers. We have a huge solution. It increases the flexibility and functionality of current infrastructure and data.”

    Managed hosting provider Savvis started using Cordys as an on-premises tool to provision its internal requirements in its data centers. It soon realized that it could use the technology to extend that model to value-added services.

    The best example, Landro says, is telcos. “Telecoms have been commoditized to just supplying the pipe,” said Landro. “We now provide them a provisioning tool where telcos can be more than just pipe. They can offer highly privatized services – building-block services out to customers.

    “It’s a platform that encapsulates all the good elements of existing ERP ‘behind the scenes’ and at the same time makes use of all that Internet has to offer as the ‘front end’,” said Landro. “That is the Cordys solution, which reuses structured business logic together with the unstructured information to establish flexible business processes and operations that can be continuously optimized and improved, faster and quicker than ever.”

    One Telco’s Story

    Clouditalia is using Cordys to create a service platform atop its existing server and network infrastructure, evolving its business from traditional telecom services to becoming a PaaS and SaaS provider. Clouditalia used the Cordys platform to broker SaaS, IaaS and telecom services, create new cloud-based products and take its customers to the cloud.

    Mark de Simone, the CEO of Clouditalia, credits Cordys as the engine that helped Clouditalia transform its business, flipping the model on its head. Traditionally, the telecom provider wants the client relationship, with the partner in the back. In the Cordys model, the partner handles the relationship for services offered on Clouditalia’s PaaS platform – which provides those partners with access to the telco’s 30,000 business clients.

    “You need to create Lego blocks,” said de Simone. “We’ve made it easy by creating replicable processes atop Cordys. Examples of replicable processes are internet banking processes, credit card closing processes, and linking to social networks – generally making it simple for people, regardless of whether they’re business developers, service developers or even enterprises.” Clouditalia has created a repository of these processes.

    Within a month of its launch, Clouditalia had an ecosystem of 600 partners and integrators. “I’ll give you a piece of the action on the app, the infrastructure as a service, and the network,” said de Simone. “I’ll deliver all three to you and you don’t have to take a euro out of the bank because all you’re getting is a share of my revenues. For how long? Forever.”

    Speaking of the telecom world in general, “We should not be the Titanic – we shouldn’t be changing the deck chairs, we should be doing something completely different,” de Simone said. “We have to change the model or there will not be any value.”

    Cordys’ flagship product, the Business Operations Platform, helps organizations improve business operations by leveraging new process-oriented solutions and services from the cloud while respecting existing enterprise software. Built on an open, state-of-the-art SOA foundation, it is, Cordys says, unique in offering one single platform for deployment on premises or in the cloud.

    2:24p
    ColoATL Builds Out Space in Atlanta

    News from the colocation and service provider sector includes expansions and customer wins for ColoATL, QTS and RagingWire Data Centers.

    ColoATL builds out 55 Marietta Street space. ColoATL announced a significant build-out of its space at 55 Marietta Street in downtown Atlanta. The build-out includes increased colocation and data center space and a number of new amenities, including a conference facility and “meet me” area operated by ColoATL’s sister company, The Georgia Technology Center. The expansion was necessitated by demand for colocation and Software Defined Networking (SDN) peering. The facility is home to the Southeast Network Access Point (SNAP), which provides next-generation Internet Exchange solutions, including SDN peering, testing, collaboration and implementation. “Our goal has always been to provide affordable interconnection options, quickly,” said Tim Kiser, owner and founder of ColoATL. “Additionally, we must deliver the network technologies that meet the objectives of the global Internet community. The ColoATL facility expansion is designed to meet those needs.” Established in November 2001, the facility’s fully built-out infrastructure is located on the 5th and 8th floors of 55 Marietta Street.

    WellStar Health System Selects QTS as its Primary and Secondary Colocation Provider. QTS is expanding its healthcare client base, with WellStar Health System using colocation services at QTS’ Atlanta Metro and Suwanee facilities, which will act as its primary and secondary data centers, respectively. WellStar selected QTS in part due to the company’s ability to provide two facilities in the metro Atlanta area with enough space to accommodate private colocation suites. “The attention to detail and infrastructure redundancy in QTS’ metro Atlanta facilities were very attractive to WellStar’s cross-functional site evaluation team,” said Jon Morris, M.D., senior VP and CIO of WellStar. “Both physical security and HIPAA compliance were of critical importance in our selection of a colocation provider.”

    The Atlanta Metro and Suwanee facilities collectively offer more than 1.3 million square feet of total data center space.

    RagingWire customer Unleashed Technologies expands at The Bolt. RagingWire customer Unleashed Technologies announced the opening of a greatly expanded data center presence at RagingWire’s Ashburn, Virginia campus, known as “The Bolt”. “RagingWire designs, builds and operates innovative, highly advanced data centers backed by superior connectivity, power and cooling systems that will enable Unleashed Technologies to deliver flexible, highly secure solutions for our partners and clients,” said Ryan Barbera, president of Unleashed Technologies. “Our expansion into RagingWire’s Ashburn data center greatly enhances the offerings our growing list of partners can provide and improves our capacity to deliver enterprise-class hosting solutions to our clients nationwide.”

    The RagingWire Ashburn data center boasts 14.4MW of critical IT load capacity and 150,000 square feet. The company announced achieving SSAE 16, Type 2 compliance last November. It is also PCI DSS 2.0 compliant and FISMA moderate.

    2:30p
    Cooling Capacity Factor (CCF) Reveals Stranded Capacity and Data Center Cost Savings

    Today’s data center is built around efficiency and the ability to reduce costs by letting equipment run closer to its optimum. When designing a solid data center environment, there are several best-practice considerations. Allocating space, flooring, power and equipment is a tedious task that takes time and a lot of planning. During environment setup, there are further steps to ensure that your data center runs as optimally as possible. One of those steps is making sure that heat and cooling are carefully controlled and monitored.

    In this white paper, Upsite shows how the cooling capacity factor (CCF) can help a data center save money and create a more efficient platform. Data center decisions must be made around logical processes and known metrics, which is why decisions around cooling require real knowledge of how these resources are being used.

    CCF Cooling Inefficiencies

    [Image Source: Upsite Technologies]

    To illustrate: across the 45 sites that Upsite reviewed, the average running cooling capacity was an astonishing 390% of the computer room heat load; in some cases it exceeded 3,000%. In other instances, data centers never really balance their cooling, which can result in an insufficient volume of conditioned air being delivered to the contained space, and unsealed openings in the cabinets that allow conditioned air to flow out of the space and exhaust air to flow in.

    By understanding resource utilization, administrators are able to build a better data center. There are several benefits to knowing your CCF and using it to right-size your cooling infrastructure. Upsite’s white paper covers several of these benefits (a sketch of the CCF arithmetic follows the list), including:

    • The computer room environment improves.
    • Hot and cold spots are eliminated.
    • The throughput and reliability of IT equipment increase.
    • Operating costs are reduced through improved cooling effectiveness and efficiency.
    • Released stranded capacity increases room cooling capacity while deferring capital expenditure for additional cooling infrastructure, enabling business growth through additional IT projects or other investments.
    • Supported IT load increases through improved utilization of airflow.
    • The carbon footprint shrinks via reduced utility usage.
    • Capital expenditure is deferred by increasing the utilization of existing infrastructure.
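    As promised above, here is a minimal sketch of the CCF arithmetic. It follows Upsite’s definition as I understand it (the total rated capacity of the running cooling units divided by 110% of the IT critical load, with the 10% uplift approximating ancillary room loads); the input values are hypothetical.

        # Hypothetical CCF calculation; definition per Upsite as I understand it:
        # CCF = total rated capacity of running cooling units / (1.1 * IT critical load)

        def cooling_capacity_factor(running_rated_cooling_kw: float, it_load_kw: float) -> float:
            # The 1.1 multiplier approximates room loads beyond the IT equipment.
            return running_rated_cooling_kw / (1.1 * it_load_kw)

        ccf = cooling_capacity_factor(running_rated_cooling_kw=1_950, it_load_kw=500)
        print(f"CCF = {ccf:.2f}, i.e. running cooling is ~{ccf:.0%} of the adjusted heat load")
        # Values far above ~1.5 suggest stranded capacity, consistent with the
        # 390% average (CCF ~3.9) that Upsite reports across the 45 sites.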

    Download Upsite’s white paper to learn how to create a more efficient data center cooling environment. Or, if you’re stuck with existing cooling problems, the white paper also describes key remediation steps for regaining control of the infrastructure. As more organizations turn to data center providers for their hosting needs, there will be greater demand to run efficient data center infrastructure. To optimize and save costs, organizations should remember that environmental measurements like CCF are central efficiency metrics.

    3:39p
    Verizon to Help U.S.D.A. Move to the Cloud

    News from the cloud computing sector includes developments from Verizon, HP and Dell:

    Verizon to Help U.S. Department of Agriculture Move to the Cloud. The U.S. Department of Agriculture (USDA) recently awarded the department’s Enterprise Data Center cloud blanket purchase agreement to Verizon under the company’s GSA Schedule 70 program. With its Federal Edition enterprise cloud, Verizon will present its infrastructure-as-a-service offering to 34 USDA agencies and offices. Under the blanket purchase agreement, the Verizon cloud product addresses the stringent security and reliability requirements of federal agencies. It is designed to meet the risk management framework outlined in NIST 800-53, a set of recommended security controls for federal information systems. In addition, the Verizon-Terremark purpose-built Tier III federal data centers in Culpeper, Virginia, and Miami feature multiple layers of redundancy – facility, power, HVAC – and meet or exceed FISMA High criteria for physical and environmental controls. “My cloud-computing conversations with federal IT leaders have definitely changed over the past couple of years,” said Susan Zeleniak, senior vice president – public sector markets, Verizon Enterprise Solutions. “It’s no longer a question if cloud initiatives will be successful, but how can they determine the best applications to move to the cloud and quicken the pace of cloud migrations and data center consolidation programs.”

    HP Selected by Molson Coors. HP (HPQ) announced that Molson Coors Brewing Company is working with HP to transform some of the brewer’s finance and human resources processes and systems in order to reduce costs, increase efficiency and improve access to critical information. The move to a cloud hosting environment via the new HP Business Process Outsourcing platform will standardize Molson Coors’ HR and finance processes, and their corresponding business applications, globally. Molson Coors will receive BPO services from a blend of HP global centers of excellence located in Canada, Costa Rica, India, Poland and the United States. “With top brewers joining forces to compete, it’s critical for Molson Coors to use the latest processes and technologies to drive even more operational efficiency,” said Dennis Stolkey, senior vice president and general manager, Americas, HP Enterprise Services. “Under these market conditions, Molson Coors will depend on HP as a trusted advisor to lead its transformation to deliver world-class finance and HR services, allowing the Molson Coors team to focus on what they do best.”

    Dell Expands Master Data Management Cloud Solution. Dell launched its next-generation Dell Boomi Master Data Management (MDM) solution, which delivers a comprehensive set of features and capabilities in a single cloud-managed offering. This allows mid-sized companies to take advantage of an MDM solution that simplifies data management, data integration and assurance of data quality, at a fraction of the traditional big-vendor cost. Delivered as a 100 percent cloud-based offering, it provides multi-domain support, near real-time synchronization and bi-directional data flow, and includes web service calls that support enriching and validating data. “MDM tools based on software as a service (SaaS) and/or clouds are still relatively new,” said Philip Russom, research director for data management, The Data Warehousing Institute (TDWI). “Even so, there is a growing demand for and trust in cloud-based MDM tools and solutions. For example, a TDWI survey run in 2012 showed that 25 percent of organizations are planning to adopt cloud-based MDM within three years. This amounts to a potential growth rate of 20 percent, which is quite healthy given the current economy. So we can expect cloud-based MDM solutions to be in use by more organizations soon.”

    To keep up with cloud computing news, visit our Cloud Channel.

    5:28p
    IBM Watson Could be Offered as Technology as a Service
    The IBM Watson supercomputer.

    After much media attention when IBM’s Watson took on top Jeopardy champions in 2011 and won, the team behind the cognitive computer has helped it gain greater understanding of content and context. Today, IBM is experimenting with offering a way to reach Watson through a cloud service, and recently asked USC students for creative ways to apply Watson to business and social challenges.

    Watson in the Cloud

    IBM’s Stephen Gold, a director with IBM Watson Solutions, shared with Datanami that IBM has begun an internal pilot and is currently working on ways to infuse the Watson technology into its entire portfolio. One possible scenario is to make it available as a service through the cloud. Fronted by a unique interface that ingests natural language data, Watson can parse massive amounts of data looking for candidate answers. IBM hopes to accelerate the development of its platform so it can offer the Watson technology as a service.

    Since its Hollywood-style debut on Jeopardy, Watson has been used in limited applications. In the fall of 2011, IBM and WellPoint announced intentions to build commercial healthcare applications based on IBM Watson technology. Watson’s ability to understand the meaning and context of human language and to process large amounts of information will be used to help evaluate options for a patient’s circumstances and help physicians identify the most likely diagnosis and treatment options for their patients. About a year ago, IBM Watson went to Wall Street, where it was employed to advise Citigroup on portfolio and client risk. In this case, Watson was delivered as a cloud-based service, with IBM earning a percentage of the additional revenue and cost savings it helps financial institutions realize. Earlier this year, IBM donated a powerful version of its Watson computing system to Rensselaer Polytechnic Institute for research and development in big data, analytics and cognitive computing.

    IBM Taps Academia for Where Watson Should Work Next

    Recently, IBM turned to young minds at the University of Southern California, inviting students to compete in the IBM Watson Academic Case Competition. With IBM business leaders present and listening, the challenge put students in the spotlight to create business plans for applying Watson to pressing business and societal challenges. To set the stage, IBM demonstrated how Watson is helping WellPoint and Memorial Sloan Kettering Cancer Center improve the speed and quality of treatment for cancer patients.

    As part of the competition, students were assigned to 24 teams and given 48 hours to define a new purpose for Watson, develop a business plan, and present it to a panel of judges comprising school officials, IBM executives and local business leaders. To foster interdisciplinary collaboration, each team was required to include at least one business student from USC’s Marshall School of Business and one engineering student from the Viterbi School of Engineering. Three winning ideas were selected. First place went to legal research – letting Watson do the discovery for your next legal case. Second place was an idea for employee training – having Watson uncover the keys to success for your employees. The third-place idea addressed post-traumatic stress disorder – having Watson help doctors find patients.

    “Partnering with universities such as USC gives IBM a unique opportunity to tap into the minds of our next-generation of leaders, whose training, skills and ideas for changing the world are all forward-thinking and based on a desire to make a meaningful impact,” said Manoj Saxena, IBM General Manager, Watson Solutions. “These students see what Watson is doing right now and think — how else will cognitive computing impact my life and career in the years to come? To us, that’s exactly the mindset that should be fueling IBM innovations, and the very reason we host Watson Academic Case Competitions.”

    The Watson Case Competition at USC, the third in a series hosted by IBM, is the latest example of IBM’s work with academia to advance student interest in Science, Technology, Engineering and Math (STEM) curricula that lead to high-impact, high-value careers. Over the last two years, students at USC’s Annenberg Innovation Lab have been using big data analytics technologies to conduct social sentiment analyses and gauge public engagement on topics such as sports, film, retail and fashion. Two of the biggest projects looked at Major League Baseball’s World Series and the Academy Awards, projects developed for students to explore and expand their skills as they prepare for new data-intensive careers. A recent Gartner report estimates that 1.9 million big data jobs will be created in the U.S. by 2015.

    6:00p
    Virtualization: The Next Frontier…


    Several years ago, we began using virtualization technologies as a means to test servers and use resources more effectively. In VMware’s early days as a hypervisor vendor, very few vendors actually supported a virtual infrastructure. So in many organizations, virtualization was relegated to classroom and development environments.

    But administrators soon saw that server resources were being dramatically wasted and that virtualization was a way to curtail that, making it easier to consolidate servers and boost resource utilization. With that, the pressure rose on vendors to support a virtual state. From there, server virtualization made its way into almost all data center environments as more organizations adopted the technology to help meet their business needs.

    Now – we’ve entered the next frontier …

    We have better servers, more bandwidth, and greater amounts of resources to work with. To put it in perspective, Tilera recently released its 72-core GX-72 processor. The GX-72 is a 64-bit system-on-chip (SoC) equipped with 72 processing cores, four DDR memory controllers and a big-time emphasis on I/O. This type of technology now allows administrators to create a “hyper-connected” infrastructure focused on improving performance and removing bottlenecks.

    We’re way beyond simple server virtualization and are exploring new avenues to make virtualization an even more powerful platform. Let’s take a look at some of these technologies.

    • Application Virtualization. If we can virtualize a server, why not apps? The popularity of products like XenApp and ThinApp continues to increase. Administrators are able to stream or deliver applications to the end user without actually deploying them at the user premises. This sort of control and manageability makes application virtualization very plausible. In fact, many of the big Fortune 500 organizations have already deployed some type of application virtualization.
    • End-point Virtualization. The conversation here isn’t only around virtual desktop infrastructure (VDI). Many organizations are now redefining how they see the end-point and how they can best utilize those resources. Often referred to as hardware abstraction, the end-point has changed from a big PC to thin and zero clients. These devices lend themselves to a rip-and-replace methodology and will soon be breaking the $100 barrier. This means that all administration is done at the data center level and that the end-point is becoming easier to manage.
    • Network Virtualization. Also known as software-defined networks (SDN), network virtualization has allowed the administrator much greater control over a network infrastructure. Where one physical NIC had its limitations, new technologies allow for numerous virtual networking designations on a corporate network.
    • Security Virtualization. Hardened physical appliances aside, more organizations have deployed security platforms on top of a virtual machine (VM). The flexibility to clone security appliances, place them at various points within the organization and assign specific functions to them makes security virtualization very appealing. Imagine having a security appliance VM only doing data-loss prevention (DLP) or intrusion prevention/detection services (IPS/IDS). This type of deployment can be very strategic and beneficial.
    • User Virtualization. With IT consumerization and BYOD making their presence felt, more organizations are looking for ways to abstract the user layer from devices, applications and end-points. And so, user virtualization was born. Solutions from AppSense, RES and LiquidWare all provide a way for a user to transfer their personalized settings from application to application and from platform to platform. Basically, users are able to carry their settings with them as they migrate between systems and applications.
    • Storage Virtualization. Storage can be pricey – so why not maximize efficiency? A single storage controller can be carved up logically so cleanly that each partition appears to the administrator as its own standalone unit. Using storage more efficiently is on the front page of many project lists. Controller multi-tenancy is just one example of how storage virtualization plays a role in today’s IT world.
    • Server Virtualization. This stays on the list only because server virtualization continues to evolve and expand. With entire platforms being designed for server virtualization, more emphasis is being placed on how to better use a virtual environment. There continues to be a need to virtualize servers and to better incorporate virtualization efficiencies into the modern data center. In fact, the server virtualization market is beginning to heat up again. Adoption of Microsoft’s Windows Server 2012 Hyper-V continues to rise, and more organizations are revisiting their virtual server infrastructure than ever before. With server density increasing and new types of platforms becoming available, it won’t be surprising to see even more innovation around server virtualization.

    The list will most likely grow as more environments seek ways to be even more efficient. With advanced virtualization technologies, organizations can begin to grow more organically. Without having to deploy massive amounts of infrastructure at a new site or location, companies can design an IT platform around a “business-in-a-box” mentality. In creating such an agile environment, entire data centers can be provisioned quickly to help a company stay ahead of its competition. Already, virtualization technologies are helping many businesses cut costs, regain control, and achieve greater growth with their infrastructure. Moving forward, virtual platforms will only continue to expand as they further help shape the structure of the business IT environment, the overall technological landscape, and the future of cloud computing.

