Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, July 24th, 2013

    11:09a
    IBM, Pivotal Team to Boost CloudFoundry


    Continuing to sharpen its focus on enterprise cloud computing, IBM is joining forces with Pivotal to support CloudFoundry, the versatile platform as a service (PaaS) framework that allows developers to build applications that can run on multiple clouds. IBM said today that it will make CloudFoundry a component of its open cloud architecture and work with Pivotal on further development of the CloudFoundry open source project and establishing an open governance model for the community.

    CloudFoundry is open source software developed by VMware and now part of the Pivotal Initiative, an independent entity funded by VMware and EMC. The framework allows developers to create apps that can run on Amazon Web Services, OpenStack clouds or VMware’s vCloud and vSphere environments.

    IBM and Pivotal have been working together since March, and an early fruit of the collaboration is a preview version of WebSphere Application Server Liberty Core, IBM’s lightweight version of the WebSphere Application Server, which simplifies development and deployment of web, mobile, social and analytic applications.

    IBM Pledges Full Support for CloudFoundry

    “Cloud Foundry’s potential to transform business is vast, and steps like the one taken today help open the ecosystem up for greater client innovation”, said Daniel Sabbah, general manager of Next Generation Platforms, IBM. “IBM will incorporate Cloud Foundry into its open cloud architecture, and put its full support behind Cloud Foundry as an open and collaborative platform for cloud application development, as it has done historically for key technologies such as Linux and OpenStack.”

    “We believe that the Cloud Foundry platform has the potential to become an extraordinary asset that many players can leverage in an open way to enable a new generation of applications for the cloud,” said Paul Maritz, CEO of Pivotal. “IBM’s considerable investment in Cloud Foundry is already producing great results with application-centric cloud offerings such as making IBM WebSphere Liberty available on Cloud Foundry. We look forward to growing and expanding an open Cloud Foundry community together with IBM.”

    Here’s a look at early analysis and commentary from around the web:

    The Register – “The partnership follows IBM executives saying in March that they were going to put OpenStack at the heart of future SmartCloud technology, and the company’s buy of SoftLayer for a rumored $2bn and its blessing of the MongoDB query language within DB2 in June. These three moves represent changes within the company as it turns away from developing a monolithic cloud software stack, and instead puts its technical efforts into community technologies and its dosh into a well-regarded infrastructure underlay.”

    Wall Street Journal - “Offerings like Cloud Foundry, and competitors such as Salesforce.com’s Heroku and OpenShift backed by software firm Red Hat, are jockeying to win over developers working on cloud apps and other technologies for corporations. Amazon and Google have a big head start on developer loyalty. But many companies are just getting started in the cloud, giving newer entrants like Pivotal the confidence they can win over developers.”

    TechCrunch – “The fuller picture for the partnership is in the community focus. IBM is a governance shop. A core value to customers is providing technology to large companies that have to pay close attention to compliance issues. As part of the partnership, the companies are advocating for an “open governance” model for application development in the cloud. For Pivotal, it’s an opportunity to make a play as the dominant PaaS provider.”

    11:30a
    Cisco Acquires Sourcefire For $2.7 Billion

    Cisco (CSCO) announced a definitive agreement to acquire cybersecurity solutions provider Sourcefire. Cisco and Sourcefire will combine their products, technologies and research teams to provide advanced threat protection across the entire attack continuum – before, during and after an attack – and from any device to any cloud.

    The acquisition is expected to close later this year and will boost Cisco’s security portfolio and strategy. Under the terms of the deal, Cisco will pay $76 a share in cash, nearly 30 percent higher than Sourcefire’s closing price on Monday. The offer includes retention-based incentives for Sourcefire’s executives.

    “The notion of the ‘perimeter’ no longer exists and today’s sophisticated threats are able to circumvent traditional, disparate security products. Organizations require continuous and pervasive advanced threat protection that addresses each phase of the attack continuum,” said Christopher Young, senior vice president, Cisco Security Group. “With the acquisition of Sourcefire, we believe our customers will benefit from one of the industry’s most comprehensive, integrated security solutions – one that is simpler to deploy, and offers better security intelligence.”

    “Cisco’s acquisition of Sourcefire will help accelerate the realization of our vision for a new model of security across the extended network,” said Martin Roesch, founder and chief technology officer of Sourcefire. “We’re excited about the opportunities ahead to expand our footprint via Cisco’s global reach, as well as Cisco’s commitment to support our pace of innovation in both commercial markets and the open source community.”

    For more analysis and commentary, see coverage from The New York Times, Reuters and GigaOm.

    12:00p
    Looking Beyond Linpack: New Supercomputing Benchmark in the Works

    The addition of GPUs helped the Titan supercomputer at Oak Ridge National Laboratory place atop the Top500 last fall. But the system’s CPUs handle much of the workload on most apps that use Titan. (Photo: Oak Ridge)

    Has the performance metric used to rank the world’s top supercomputers become dated? With so much emphasis and funding invested in the Top500 rankings, the 20-year-old Linpack benchmark has come under scrutiny, with some in the community suggesting it needs to evolve. Now even University of Tennessee professor Jack Dongarra, who helped found the Top500 list, believes it is time for a change.

    Dongarra and his colleague Michael Heroux of Sandia National Laboratories are developing a new benchmark, expected to be released in time for the next Top500 list in November. The proposed benchmark, called the High Performance Conjugate Gradient (HPCG), should correlate better with the computation and data access patterns found in many of today’s applications. HPCG won’t replace Linpack; both metrics will be used to evaluate contenders in the November Top500.
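
    For readers unfamiliar with the method behind the name, the sketch below is a minimal conjugate gradient solver in Python. It is not the official HPCG code; the 1-D Laplacian matrix and problem size are assumptions chosen only to illustrate the sparse matrix-vector products and irregular memory access patterns the new benchmark is meant to stress.

        # Minimal conjugate gradient sketch -- NOT the official HPCG benchmark,
        # just an illustration of the sparse, memory-bound work it measures.
        import numpy as np
        import scipy.sparse as sp

        n = 100_000                                   # assumed problem size, for illustration only
        # A 1-D Laplacian: a sparse, symmetric positive-definite stand-in matrix.
        A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        x = np.zeros(n)
        r = b - A @ x                                 # residual
        p = r.copy()
        rs_old = r @ r

        for _ in range(1000):
            Ap = A @ p                                # sparse mat-vec: irregular memory access
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < 1e-8:                # stop once the residual is small enough
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new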

    The primary objective of the Top500 list of the top supercomputers in the world is to provide a ranked list of general purpose systems that are in common use for high end applications. The list has been released twice a year for the past twenty years, with Linpack serving as the standard yardstick of performance. The High Performance Linpack (HPL) was introduced by Dongarra and selected for the Top500 in 1993 because it was widely used and performance numbers were available for almost all relevant systems.  It measures the ability of a system to solve a dense system of linear equations.
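
    By contrast, an HPL-style measurement amounts to timing a dense solve and converting the conventional operation count, 2n³/3 + 2n², into a FLOPS figure. The toy sketch below shows the idea with NumPy; it is only an illustration, not the real HPL, which is a heavily tuned distributed-memory code, and the problem size is an assumption.

        # Toy HPL-style measurement: time a dense linear solve and report GFLOPS.
        # Illustrative only -- the real HPL benchmark is a tuned distributed code.
        import time
        import numpy as np

        n = 4000                                      # assumed problem size, for illustration only
        rng = np.random.default_rng(0)
        A = rng.random((n, n))
        b = rng.random(n)

        t0 = time.perf_counter()
        x = np.linalg.solve(A, b)                     # dense solve, the operation HPL measures
        elapsed = time.perf_counter() - t0

        flops = (2.0 / 3.0) * n**3 + 2.0 * n**2       # conventional HPL operation count
        print(f"{flops / elapsed / 1e9:.1f} GFLOPS in {elapsed:.2f} s")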

    Designing for a Benchmark, or Applications?

    The performance measurement for Linpack is FLOPS, short for Floating Point Operations Per Second. On the very first Top500 list, the Los Alamos National Laboratory CM-5 supercomputer ranked number one at 59.7 gigaflops. Twenty years later, the top spot went to China’s Milky Way-2 at 33.86 petaflops. Alongside advances in memory, storage and interconnects, shifts among vendors and many other changes, that climb from gigaflops to teraflops and then petaflops has led many to speculate on what it will take to achieve exaflop levels of performance.
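
    To put that progression in perspective, here is the arithmetic behind the twenty-year jump, using the two figures cited above:

        # Growth in Top500 #1 performance, 1993 -> 2013, from the figures above.
        cm5 = 59.7e9            # 59.7 gigaflops (Los Alamos CM-5, first Top500 list)
        milky_way_2 = 33.86e15  # 33.86 petaflops (Milky Way-2, June 2013 list)

        factor = milky_way_2 / cm5
        cagr = factor ** (1 / 20) - 1   # compound annual growth rate over 20 years
        print(f"~{factor:,.0f}x overall, roughly {cagr:.0%} per year")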

    A Sandia National Laboratories report released by Dongarra and Heroux offers an example of how Linpack has lost its relevance.

    “The Titan system at Oak Ridge National Laboratory has 18,688 nodes, each with a 16-core, 32 GB AMD Opteron processor and a 6GB Nvidia K20 GPU,” the report notes. “Titan was the top ranked system in November 2012 using HPL. However, in obtaining the HPL result on Titan, the Opteron processors played only a supporting role in the result. All floating-point computation and all data were resident on the GPUs. In contrast, real applications, when initially ported to Titan, will typically run solely on the CPUs and selectively off-load computations to the GPU for acceleration.”

    “We have reached a point where designing a system for good Linpack performance can actually lead to design choices that are wrong for the real application mix, or add unnecessary components or complexity to the system,” said Dongarra. “The hope is that this new rating system will drive computer system design and implementation in directions that will better impact performance improvement for real applications.”

    Where Linpack has not kept pace with these more complex computations, Dongarra believes the new benchmark can adapt to emerging trends. The HPCG measurement will debut this November at the Supercomputing Conference (SC13) in Denver, Colorado. Linpack will not be laid to rest, though: HPCG will serve as a companion ranking of the Top500 list, much as the Green 500 re-ranks the Top500 according to energy efficiency. The HPCG metric will continue to be developed, undergoing verification and extensive validation testing against real applications on existing and emerging platforms.

    Performance for Real-World Applications

    The poster child in the debate over Linpack has been the NCSA supercomputer Blue Waters at the University of Illinois, which was brought online earlier this year. The Cray system posted an impressive 11.6 petaflops of performance but has not been submitted to the Top500 for ranking consideration. Blue Waters deputy project director William Kramer has been one of the critics of the Top500 Linpack benchmark.

    Writing in November 2012 Kramer said that the Top500 list “does not provide comprehensive insight for the achievable sustained performance of real applications on any system.” Noting that Blue Waters would not be submitted for Top500 evaluation, with the blessing of its NSF funding source, Kramer outlined some issues and opportunities for improvement with the Linpack benchmark and measurements, and other perceptual and usability issues with systems that are submitted to the Top500 list.

    Kramer joins others who have questioned the worth of measuring HPC systems against Linpack. In a recent RFI, the Intelligence Advanced Research Projects Activity (IARPA) acknowledged that benchmarks are valuable metrics in general, but noted that HPC benchmarks have “constrained the technology and architecture options for HPC system designers.”

    MathWorks founder Cleve Moler compared Linpack to home runs in baseball. Home runs don’t always decide the result of a baseball game, or determine which team is the best over an entire season – but they are interesting to track over the years.

    12:30p
    Refining Your Criteria for Data Center Site Selection

    Prashant Baweja is a senior associate consultant with Infosys Ltd., working in the infrastructure domain, with about six years of experience.

    PRASHANT BAWEJA
    Infosys

    Determining the location of a data center is one of the most crucial decisions for a company, as it flows from the company’s strategy and goals. In any discussion of data center sites, a few factors come up frequently: power, telecommunications, data center tiers, clean power, site selection and so on. Management evaluates Total Cost of Ownership (TCO) in light of its long-term and short-term goals.

    Site selection plays an important role here, as it has a direct impact on cost and TCO. The sections below discuss the factors affecting site selection and a high-level process for making the decision.

    Factors Affecting Site Selection

    Many factors affect site selection. All of them should be viewed from a strategic perspective, since the industry now changes its focus every 3 to 5 years and transforms or adopts new technology every 7 to 10 years. The main factors affecting site selection for a data center are:

    • Geographic location – The first and foremost factor is the geographical location of the site, which has to be thought through at the outset. Points to examine include: natural disasters (flood, hurricane, tornado, etc.) – the probability and frequency of occurrence at the location; environmental hazards – their impact and degree of effect; and climate – a climate that supports free cooling (outside-air cooling) is an added advantage.
    • Electricity – Electricity, or power, is a key factor because it is one of the chief constituents of a facility’s operating cost. Factors include: availability – weigh options such as access to more than one grid, grid maturity, generation options and the power transmission mechanism; cost – compare the cost of electricity across locations, looking for a low tariff per kWh (a simple cost comparison sketch follows this list); and alternative sources of power – management can look into renewable options such as solar and wind, which help the company become greener.
    • Telecommunication infrastructure – Telecommunications is one of the most important components of a data center. When selecting a site, consider: fibre backbone route and proximity – how near the fibre backbone is to the selected site, which helps gauge the additional investment required to reach the exact data center location; type of fibre – which affects speed and transmission; carrier type and support – which carriers are present in the vicinity and the support and service models they have in place; and latency – transaction time will be an important factor.
    • Tax rates – Another important aspect is the tax regime at a particular location, including property tax, corporate tax and sales tax.
    • Construction – Construction cost and options play a major role when building at any particular location. Consider the maturity, experience, processes and technology of the local construction industry, as well as the availability and cost of labour.
    • Transportation – The availability and proximity of various modes of transportation affect site selection, since equipment must be delivered, workers must commute to the location and vendors must visit it.
    • Cost of living – The availability of day-to-day necessities and the local cost of living must also be analyzed when selecting a site.
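
    To make the electricity factor concrete, here is a back-of-the-envelope annual energy cost comparison across candidate sites. The tariffs, IT load and PUE figures are hypothetical assumptions for illustration, not data from this article.

        # Back-of-the-envelope annual electricity cost per candidate site.
        # All figures below are hypothetical assumptions, for illustration only.
        it_load_kw = 2000          # assumed average IT load in kW
        pue = 1.5                  # assumed power usage effectiveness
        hours_per_year = 8760

        tariffs_per_kwh = {        # assumed $ per kWh at each candidate site
            "Site A": 0.05,
            "Site B": 0.08,
            "Site C": 0.11,
        }

        annual_kwh = it_load_kw * pue * hours_per_year
        for site, tariff in tariffs_per_kwh.items():
            print(f"{site}: ${annual_kwh * tariff:,.0f} per year")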

    Process for Site Selection

    The methodology for site selection should be customized to the company’s requirements. The process below provides a high-level approach that can be detailed as needed:

    1. Scope and requirements – Company management and the various stakeholders decide the scope of the project and its requirements in relation to site selection.

    2. Location vetting – Based on those requirements, the team vets candidate locations at a high level.

    3. Site visits and meetings – A team of experts visits each site, meets with government agencies and service providers, and performs a detailed analysis.

    4. Site shortlisting – After the visits and analysis, the top three or four sites are shortlisted. One option is to use a weighted-score method across the various factors (a minimal sketch of this approach follows the list).

    5. Detailed investigation – Once a few sites have been shortlisted, it is time for deeper analysis: detailed discussions with stakeholders, review of hard data such as geotechnical studies, evaluation of construction options and costs, and price negotiations.

    6. Analysis, recommendation and approval – Finally, the team analyzes the data, makes a recommendation and presents the options to management for feedback and approval.
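
    As a concrete illustration of the weighted-score method mentioned in step 4, the sketch below ranks hypothetical candidate sites. The factor weights and scores are assumptions made up for the example, not figures from this article.

        # Weighted-score shortlisting sketch -- weights and scores are hypothetical.
        weights = {                # assumed relative importance of each factor (sums to 1.0)
            "power": 0.30,
            "telecom": 0.25,
            "tax": 0.15,
            "construction": 0.15,
            "transport": 0.10,
            "cost_of_living": 0.05,
        }

        sites = {                  # assumed factor scores per site, 1 (poor) to 10 (excellent)
            "Site A": {"power": 8, "telecom": 6, "tax": 7, "construction": 5, "transport": 6, "cost_of_living": 7},
            "Site B": {"power": 6, "telecom": 9, "tax": 5, "construction": 7, "transport": 8, "cost_of_living": 5},
            "Site C": {"power": 7, "telecom": 7, "tax": 8, "construction": 6, "transport": 5, "cost_of_living": 8},
        }

        def weighted_score(scores):
            return sum(weights[factor] * scores[factor] for factor in weights)

        for site, scores in sorted(sites.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
            print(f"{site}: {weighted_score(scores):.2f}")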

    Building a data center is a crucial decision for a company. With the site selection factors and process above in place, a company lays a strong foundation for success.

     

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:00p
    Going Dutch: Digital Realty Partners With KPN in Netherlands

    It’s been a busy week in the Netherlands for Digital Realty Trust. Just a day after announcing a new project in Amsterdam, the data center developer said it will partner with Dutch telecom provider KPN to build a data center in Groningen.

    KPN has signed a long-term, triple net lease with Digital Realty for an existing data center in a 7,000 square foot building at the site, which has a potential power capacity of 3.5 megawatts of critical load.

    “The Netherlands is a strategically important market for our European customers as they seek to implement robust data centre solutions in order to facilitate business growth,” said Michael Foust, Chief Executive Officer of Digital Realty. “We are delighted to support KPN in this mission critical project and we look forward to supporting their future data centre needs. Furthermore, the deal demonstrates our ability to collaborate to provide the robust backbone that organisations today rely on to deliver flawless IT services. We are delighted to welcome KPN to our portfolio.”

    “This transaction underscores our ability to complete a highly structured transaction that enables our customer to provide solutions in its controlled data centre real estate assets with a well-capitalised, long-term data centre owner,” said Bernard Geoghegan, Managing Director, EMEA for Digital Realty. “This deal continues our strategy of expanding our European footprint by investing in high-quality data centre facilities that are home to top-tier global brands.”

    In data center circles, Groningen is known as the location for an early European data center for Google.

    4:24p
    Should the OpenStack Community Embrace Amazon & Google?


    Should the OpenStack community focus on creating an open source alternative to Amazon and Google? Or should it make it easier to shift OpenStack clouds to Amazon Web Services and Google Compute Engine for companies that want to use both approaches in a hybrid infrastructure?

    CloudScaling today called on the OpenStack community to work more closely with Amazon and Google. In an open letter to the OpenStack community, co-founder and CTO Randy Bias makes the case that OpenStack must embrace other public clouds rather than set itself up as an open alternative to the mega-clouds. If the future is hybrid, he argues, OpenStack must play with the dominant public clouds.

    “For three years, elements of the OpenStack community have arbitrarily and unfairly positioned OpenStack against the incumbents, especially Amazon Web Services and VMware,” writes Bias. “The practical expression of this view is that OpenStack should build and maintain its own set of differentiated APIs. I’ve made no secret of my belief that this choice will harm OpenStack, and perhaps already has.”

    This position aligns with CloudScaling’s interests, but it also raises broader questions about OpenStack’s role going forward and how it relates to other clouds. At the heart of the topic are application programming interfaces (APIs). Bias reviews the history of OpenStack, noting that it initially had no native APIs but now has a native API that is largely identical to the API for the public cloud of Rackspace Hosting, the project’s initial patron and largest user.

    Cloud APIs at Heart of Debate

    The issue is governance of OpenStack. The community now controls the direction of the project, and Bias believes that native APIs are dangerous to the project. “It’s time we advocate a public cloud compatibility strategy that is in all our best interests, not just those of a single, albeit substantial, contributor,” writes Bias. “Failing to make this change in strategy could ultimately lead to the project’s irrelevance and death.”

    Given where CloudScaling is positioned, there is no question that the evolution of OpenStack is of utmost importance to the company. CloudScaling has focused on AWS compatibility, seeing it as imperative to customer needs. The company provides a production-grade version of OpenStack, designed for as much resiliency and fault tolerance as possible, with an emphasis on compatibility with Amazon.

    “What we’re seeing with our customers is that they’re all looking at the public cloud,” said Bias in a DCK interview last June. “They want to make sure they’re future-proofing themselves in that regard … people want a private, hybrid cloud that they can interconnect with public.”

    Innovation vs. Compatibility?

    AWS is the big player in the public cloud space, so it makes sense that CloudScaling would ensure compatibility for these customers. Architectural compatibility means that apps can move freely from private to public and back when needed, as the sketch below illustrates.
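
    To show what API-level compatibility looks like in practice, the sketch below uses boto, the Python AWS SDK of the era, to point the same EC2-style client code at either AWS or an OpenStack cloud that exposes an EC2-compatible endpoint. The OpenStack host, port and path shown are assumptions for a typical deployment, and the credentials are placeholders; none of these values come from the article.

        # Illustration of API compatibility: the same EC2-style client code can target
        # either AWS or an OpenStack cloud exposing an EC2-compatible endpoint.
        # The OpenStack host/port/path below are assumptions for a typical deployment.
        import boto
        from boto.ec2.regioninfo import RegionInfo

        def connect(target):
            if target == "aws":
                # Native AWS endpoint (placeholder credentials).
                return boto.connect_ec2("ACCESS_KEY", "SECRET_KEY")
            # Hypothetical OpenStack EC2-compatible endpoint.
            region = RegionInfo(name="openstack", endpoint="openstack.example.com")
            return boto.connect_ec2(
                "ACCESS_KEY", "SECRET_KEY",
                region=region, port=8773, path="/services/Cloud", is_secure=False,
            )

        # Identical application code runs against either cloud.
        conn = connect("aws")
        for reservation in conn.get_all_instances():
            for instance in reservation.instances:
                print(instance.id, instance.state)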

    “We don’t see ourselves as being part of the OpenStack market,” Bias said. “We see ourselves as part of the private cloud market. Businesses are realizing that they need more of a scale-out model to increase time to market, get solutions out faster and move in a more agile fashion.”

    Bias’ open letter goes on to discuss how AWS has grown and controlled the innovation curve in the public cloud. He argues that OpenStack can be in control of the innovation curve in private and hybrid cloud. But to do this, “it must embrace the public clouds, to which enterprises want to federate.”

    “I have long believed that private and public clouds needed to look similar and connect if we’re to have massive cloud adoption,” writes Bias. “We are now seeing enterprise customers demand a hybrid cloud solution: a private cloud connected to a public cloud so they can run workloads in both places and generally have choice and control that drive positive economics and business agility.”

    Ultimately, Bias is advocating a full embrace of Amazon and the AWS APIs. He offers a proposal that he argues results in a win for everyone, including Rackspace, which holds a major interest.

    Cloud Wars

    Dubbed in the media as “the cloud wars,” there are a handful of open source frameworks competing (or not, depending on who you ask). OpenStack, CloudStack, Eucalyptus, and OpenNebula all play on the open source front. Bias wants OpenStack to win, and to do so, he argues it needs to embrace the public clouds.

    AWS and Google are seen as the competition – and in some ways, they are. However, from a hybrid perspective, embracing the public clouds rather than trying to overtake them is the way to go. The biggest concern is enabling cloud agnosticism. Lock-in is a major worry for enterprises, and Bias argues that control over the API is where lock-in potentially happens. His concern is the downside risk: by not enabling hybrid solutions, a potentially superior technology stands to lose if it goes the closed route.

    Think of the format wars between Beta and VHS.  JVC quickly licensed VHS, giving up control, while Sony didn’t take what the consumers wanted into account. Beta, after all, was superior to VHS, but VHS was more consumer-friendly. This is about making OpenStack friendly to use. It’s about moving the competitive positioning so that it rules for hybrid and private, not so that it competes directly with the major public clouds.

    CloudScaling recently received $10 million in funding from Seagate and Juniper, among others. The company also plans to focus on compatibility with Google Compute Engine and others, so that customers have as much choice as possible.

    6:12p
    Intel Wants To Re-Architect Data Center Services

    Diane Bryant, senior vice president and general manager of the Datacenter and Connected Systems Group at Intel, discusses the company’s vision for the data center at an event Monday. (Photo: Intel)

    Intel (INTC) has set out to re-architect the data center, hoping to help data center operators keep pace with the massive growth of information technology and services. During a media event this week, Diane Bryant, senior vice president and general manager of the Datacenter and Connected Systems Group at Intel, set the stage for Intel’s approach to re-architecting the data center as the industry shifts from network-centric to human-centric models. New Atom and Xeon processor announcements were also made.

    With software defined infrastructure as the strategic foundation, Bryant explained plans for how Intel will invest in rack scale architecture, big data and high performance computing (HPC) technologies.

    “Data centers are entering a new era of rapid service delivery,” said Bryant. “Across network, storage and servers we continue to see significant opportunities for growth. In many cases, it requires a new approach to deliver the scale and efficiency required, and today we are unveiling the near and long-term actions to enable this transformation.”

    New Atom and Xeon Processors

    As part of its software defined infrastructure strategy, Intel revealed new details of the forthcoming Intel Atom processor C2000 family, aimed at low-energy, high-density micro servers and storage (code named Avoton) and network devices (code named Rangeley). The 64-bit System on Chip (SoC) is the second generation of Atom processors and will feature up to eight cores with integrated Ethernet and support for up to 64GB of memory. Expected energy efficiency is up to four times that of the first-generation Atom, with up to seven times the performance. These chips are based on the Silvermont microarchitecture, announced in May. Intel has been sampling the processors to select partners since April, and general availability is expected later this year.

    With one family of processors announced, Intel follows its tick tock cadence of innovation and unveils the next processor generation. Third generation Atom processors will be based on Intel’s forthcoming 14nm process technology and are scheduled for 2014. The Atom SoC, code named Denverton, will enable even higher density deployments. The 2014 Xeon E3 family SoC processor is code named Broadwell, and will also be 14nm. It will be built for processor and graphic-centric workloads such as online gaming and media transcoding.

    In conjunction with the 14nm Atom and Xeon additions, Intel announced a new SoC designed from the ground up for the data center, based on Intel’s next-generation Broadwell microarchitecture, which follows today’s Haswell microarchitecture.

    To empower data centers to become more agile and service-driven, Intel is addressing technology challenges with customized solutions for an ever-expanding set of diverse workloads that place varying degrees of demand on CPU, memory and I/O. The foundation of Intel’s software defined infrastructure will take on servers, storage and networking to help take data centers from static to dynamic, and from manual to automated. Acting as the catalyst for these transformations, Intel hopes to drive both its traditional product lines and as many customization opportunities as possible.

    Customized Silicon for Rackspace

    During the media event, Jason Waxman, vice president and general manager of Intel’s Cloud Platforms Group, talked about architecting cloud infrastructure for the future. Intel’s Rack Scale Architecture (RSA), an advanced design, promises to dramatically increase the utilization and flexibility of the data center to deliver new services.

    To highlight a customer customization example, Waxman brought Rackspace Hosting COO Mark Roenigk to the stage to discuss the company’s integration story. Roenigk said Rackspace will deploy a new generation of rack designs as part of its hybrid cloud solutions, aligned with Intel’s Rack Scale Architecture vision. In the first commercial rack scale implementation, Rackspace will use the Open Compute Project rack design, powered by Intel Ethernet controllers, with storage accelerated by Intel solid state drives.

    With a vast ecosystem of technology partners, Intel has a growing set of solutions to match the spectrum of workloads in information technology. The first generation of Atom processors had 20 designs; the new Atom C2000 family has over 50. Additional examples Intel cited included customized silicon for eBay and Facebook, accelerators for F5, Vantrix and QuickFire Networks, and extreme low power solutions for OVH and Savvis.

    With new CEO Brian Krzanich coming from Intel’s technology and manufacturing group, and with the company continuing to invest billions in silicon manufacturing, Intel is well positioned to benefit from silicon customization opportunities.

