Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, May 17th, 2016

    12:00p
    How OCP Server Adoption Can Accelerate in the Enterprise

    Jason Waxman, Intel VP and general manager of the chipmaker’s Cloud Platforms Group, has been on the board of the Open Compute Project since its inception in 2011. He has watched OCP grow from an open source data center project in which only Facebook and a handful of Asian design manufacturers actively participated into a vibrant ecosystem whose members include Microsoft, Apple, Google, and some of the biggest IT and data center infrastructure vendors, such as IBM, Dell, HPE, Cisco, and Schneider Electric, among many other companies.

    While the list of the non-profit’s members and sponsors has grown over the past five years, however, the variety of OCP hardware users hasn’t expanded nearly as quickly. In addition to Facebook, Microsoft, and Rackspace, OCP-style gear has enjoyed some adoption by a handful of big financial services firms, and, more recently, interest from some of the big telcos. But the pool of buyers is still limited to big users with massive data centers and very deep pockets. There’s little evidence of adoption by smaller enterprise IT shops.

    In a recent interview, we asked Waxman why OCP hardware hasn’t seen much adoption by those smaller IT organizations and what has to happen for that to change:

    Data Center Knowledge: Is there a compelling story in Open Compute for traditional enterprise IT shops?

    Jason Waxman: There’s the business need, and then there’s the barrier to entry. If the barrier to entry is high, it’s very difficult for smaller companies to go invest in that infrastructure.

    It comes down to what they do and how core the IT infrastructure is to their business. Companies that are doing engineering services, companies in healthcare, companies that are their own SaaS provider – there are a lot of [companies] that aren’t huge, but owning their infrastructure is required.

    One company, a medium-size business, they do SaaS. One of the reasons they need their own infrastructure is control. They have to meet certain compliance requirements for their customers that maybe they can’t get in a bigger general-purpose cloud. They may need to decide, “Hey look, if there’s a security patch, I don’t want somebody else telling me that I have to have mandated downtime; I want to be able to make that decision on my own.”

    Read more: Why OCP Servers are Hard to Get for Enterprise IT Shops

    And then there’s just the economics of it. I can go and find a lot of general-purpose things, and there’s some benefits to that, but if I really need to tune my infrastructure to what I do, then having control of the hardware can be more efficient. So there are definitely a lot of reasons, and we see people moving both ways.

    We see companies that are going full-scale into the cloud [like Netflix]. And I see companies going the other direction as well, saying I’ve got a big-enough scale now and I need to manage my own infrastructure, and it will be lower-cost and more efficient for me in the long run to have something that really suits my needs.

    DCK: What has to happen before smaller enterprises start deploying Open Compute or something similar?

    Couple of things have to happen. The hardware building blocks for compute at scale need to be available and they need to be efficient. Right now, the divide between the standard off-the-shelf system and what you can get for example through Open Compute or what large cloud services are deploying is just huge. If through Open Compute there’s greater access to more efficient solutions, then that brings down the overall cost to deploy.

    Read more: Guide to Facebook’s Open Source Data Center Hardware

    DCK: Is it still difficult to source OCP gear?

    It is. Even within Open Compute, we’ve had a lot of fragmentation. Some of the solutions have been optimized for the way Facebook does it, or the way Microsoft does it, or the way that Rackspace does it, so there are these variants, and that’s good because it’s highly optimized for their solutions, but to get an ecosystem going, you need more standardization of those building blocks. Otherwise, companies that want to participate in this ecosystem go, “Well, if I design something, how many other customers am I going to get?”

    So you have this barrier for other vendors wanting to participate in the ecosystem, and when the vendors aren’t participating in the ecosystem, you’ve got fewer choices. Then you go back to the end user and the end user says, “Well, I don’t see any choice.”

    I think the way you break the cycle is by driving more efficient building blocks. More standardization of the building blocks that allow more companies to participate in that ecosystem. Then you’ve got more places where you can buy around the world, you’ve got support, it’s easier for vendors to justify investment.

    DCK: But major OEMs have been on board with OCP and have had solutions based on its designs on the market. Wouldn’t that be an example of the vicious cycle being broken?

    When you peel the onion a little bit – and I think it’s well-intentioned – many of the products have been sort of derivatives of standard products under a kind of Open Compute-inspired umbrella. But it lacks some of the consistency. So I’m still making a variety of choices: A or B or C or D, at the end of the day, and each one of them has trade-offs, versus saying, “I know that I want this, and now I can find different sources or multiple sources to buy that type of product.” That may seem like a subtle difference, but I think it’s a crucial one to really getting the Open Compute ecosystem going.

    That said, the number of systems being deployed through Open Compute has been growing. I think we’re starting to see that turn. Now we’re starting to really see it.

    See also: Intel: World Will Switch to Scale Data Centers by 2025

    3:00p
    Orchestration and Automation: The Enterprise’s Best Kept Secret

    Trey Layton is Chief Technology Officer for the Converged Platforms Division of EMC.

    One of the biggest challenges the typical IT organization faces today is that the systems they have in place are not agile enough to keep up with the rapidly changing needs of the business. It may take a few minutes to provision a virtual machine, but it can take the average IT organization several weeks to provision all the storage and networking services associated with that virtual machine. As a result, the business winds up pushing more application workloads onto public clouds, which are more agile but become more costly over the long term. To address those issues, forward-looking organizations are increasingly embracing converged and hyperconverged infrastructure platforms to build agile private clouds.

    There are basically two ways to bring about the level of IT automation and orchestration required to create an agile private cloud: IT organizations can add an overlay of software, or they can acquire converged and hyperconverged platforms that embed that capability within the underlying infrastructure. Adding a layer of software on top of an existing IT environment to achieve that goal is problematic on several levels. IT automation frameworks such as Puppet and Chef require programming skills that the vast majority of IT operations teams don’t have. So to make use of these frameworks, an IT organization needs to either teach its IT operations teams how to code or divert developer resources to managing IT infrastructure. Given the application development backlogs inside most organizations these days, allocating developer resources to manage IT infrastructure is not the most efficient use of a limited resource.

    A far more practical approach to IT automation and orchestration can be found by embedding these capabilities in converged and hyperconverged infrastructure. Rather than requiring IT organizations to deploy and manage a separate overlay, modern infrastructure platforms provide all the capabilities an IT organization needs to manage the underlying software-defined infrastructure using declarative commands.

    The IT organization simply defines a set of policies using templates. Those templates are then used to automatically provision all the infrastructure resources required by any given application workload. The end result is a much more agile IT organization capable of dynamically responding to any and all new application requirements.
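
    To make the template-driven model above more concrete, here is a minimal Python sketch of how declarative, policy-based provisioning might look. All class and field names are hypothetical illustrations, not any particular vendor’s API.

        # Minimal sketch of declarative, template-driven provisioning.
        # Names are hypothetical; this is not a specific product's API.
        from dataclasses import dataclass

        @dataclass
        class WorkloadPolicy:
            """Template describing what an application workload needs."""
            name: str
            vcpus: int
            memory_gb: int
            storage_gb: int
            storage_tier: str       # e.g. "gold" for all-flash, "bronze" for capacity
            network_segment: str    # pre-approved network segment

        class InfrastructurePlatform:
            """Stand-in for a converged platform's control plane."""
            def apply(self, policy: WorkloadPolicy) -> None:
                # The platform, not the operator, works out the imperative steps:
                # carve out storage, attach the network segment, size the VM, etc.
                print(f"Provisioning '{policy.name}': {policy.vcpus} vCPU, "
                      f"{policy.memory_gb} GB RAM, {policy.storage_gb} GB "
                      f"{policy.storage_tier} storage on {policy.network_segment}")

        # The IT organization declares *what* the workload needs once...
        erp = WorkloadPolicy("erp-prod", vcpus=8, memory_gb=64,
                             storage_gb=500, storage_tier="gold",
                             network_segment="prod-app")

        # ...and the platform turns that declaration into provisioning actions.
        InfrastructurePlatform().apply(erp)

    The point of the sketch is the division of labor: the template captures intent, and the platform’s control plane translates that intent into the storage, network, and compute operations that would otherwise be scripted by hand.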

    Once that automation capability is in place, the IT organization gains the ability to holistically orchestrate sets of infrastructure services that function as a cloud, right down to defining what infrastructure resources can be made available to a specific application. In the truest sense of a cloud, IT organizations can even allow developers to self-service their own infrastructure requirements within a set of well-defined guidelines set by the IT organization. That not only reduces DevOps friction inside the organization, it by definition creates the documented IT policies and procedures needed to operate in any highly regulated environment.
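
    As a rough illustration of self-service within IT-defined guardrails (again, all names and limits here are hypothetical), a platform might validate each developer request against published policy before provisioning anything, and the request/decision log then doubles as the audit trail regulators expect:

        # Sketch of developer self-service checked against IT-defined guardrails.
        # Hypothetical limits and names; not a specific product's API.
        GUARDRAILS = {
            "max_vcpus": 16,
            "max_memory_gb": 128,
            "allowed_storage_tiers": {"silver", "bronze"},
        }

        def request_environment(vcpus: int, memory_gb: int, storage_tier: str) -> bool:
            """A developer asks for infrastructure; the platform enforces policy."""
            if vcpus > GUARDRAILS["max_vcpus"]:
                print(f"Denied: {vcpus} vCPUs exceeds the limit of {GUARDRAILS['max_vcpus']}")
                return False
            if memory_gb > GUARDRAILS["max_memory_gb"]:
                print(f"Denied: {memory_gb} GB exceeds the limit of {GUARDRAILS['max_memory_gb']}")
                return False
            if storage_tier not in GUARDRAILS["allowed_storage_tiers"]:
                print(f"Denied: storage tier '{storage_tier}' is reserved for IT")
                return False
            # Each approved request is itself a documented policy decision.
            print(f"Approved: {vcpus} vCPU / {memory_gb} GB / {storage_tier} storage")
            return True

        request_environment(vcpus=8, memory_gb=32, storage_tier="silver")   # approved
        request_environment(vcpus=32, memory_gb=32, storage_tier="silver")  # denied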

    While integrated DevOps is a noble goal, an IT organization should not have to be turned upside down to achieve it. IT automation overlays were created to compensate for the limitations of existing legacy IT infrastructure. Modern IT infrastructure provides access to a control plane through which compute, storage and networking resources can be managed as a common pool of software-defined resources.

    At this juncture it’s obvious that, in the age of the cloud, legacy IT infrastructure architectures are not sustainable. Not only do they cost more to own, they also make the internal IT organization operationally inefficient. At a time when IT operations consume as much as 70 percent of the IT budget, it is clear that organizations of all sizes need to change the way they stand up, provision and orchestrate IT services. That’s fundamentally going to be much easier to achieve using converged IT infrastructure platforms that were designed from the ground up with that very goal in mind.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:53p
    Microsoft and SAP’s Cloud Computing Platforms to Interoperate

    (Bloomberg) — Microsoft and SAP are linking up their software products to make it easier for businesses to rent applications online and workers to manage travel and meetings.

    SAP’s Hana data analysis tools and S/4 Hana applications will be able to run on Microsoft’s Azure cloud computing service by the third quarter of 2016, the companies announced Tuesday. The move will let Microsoft customers run SAP software without installing it on their own servers. Workers using Microsoft’s Office 365 suite of applications will also be able to plan trips and manage expenses using SAP’s Concur software directly within Outlook.

    For employees, the upshot is teams can use SAP tools to complete a range of common tasks — book travel, automatically extract receipts from e-mails, recruit job candidates — without leaving familiar Microsoft environments, said Steve Singh, a managing board member at SAP. With Microsoft, the German software maker is adding a second partner for hosting its flagship business applications in the cloud; it already offers S/4 through the Amazon Web Services platform.

    SAP CEO Bill McDermott has been expanding technology partnerships with the industry’s biggest companies in recent weeks to spur sales of SAP software at a time when license sales have started to fall. Earlier this month SAP and Apple unveiled an alliance that will put more SAP apps on the iPhone and iPad. Despite fast growth of cloud computing services, traditional licenses and support of those programs still account for two-thirds of SAP’s revenue.

    Microsoft CEO Satya Nadella plans to appear on stage with McDermott today at SAP’s Sapphire customer conference in Orlando, Florida.

    The agreement comes as Microsoft adds more outside programs to its Azure service, which lets businesses rent processing power, data storage and other underlying web services instead of buying and maintaining them on site. Microsoft is also linking Office 365 to widely used software from competitors and partners alike that can feed it useful data.

    Customers can use Oracle Corp.’s database and middleware, for example, on Azure, as well as run a variety of computing services on the open-source Linux operating system.

    Microsoft has also been promoting apps called Outlook Add-Ins to enhance the software with services including PayPal and Evernote.

    10:05p
    Cisco Told to Pay $23.5M Over Hacker-Security Patents

    (Bloomberg) — Cisco, the biggest maker of networking equipment, was ordered by a jury to pay more than $23.5 million to a nonprofit research center for infringing network-surveillance patents designed to identify hacking attacks on computer systems.

    Jurors in federal court in Wilmington, Delaware, concluded last week that San Jose, California-based Cisco used technology owned by SRI International, the former research arm of Stanford University, without permission. The panel rejected Cisco’s arguments that it didn’t infringe or that the two at-issue patents weren’t valid.

    Officials of Menlo Park, California-based SRI sought more than $50 million in damages for Cisco’s unauthorized use of the patented technology, which allows computers to automatically detect and record suspicious activity on networks.

    Cisco officials were disappointed with the jury’s finding that the networking company’s products infringed SRI’s patented technology, Robyn Blum, a company spokeswoman, said in an e-mailed statement.

    Appeal Grounds

    “Cisco’s technology was independently developed by highly-regarded industry innovators, and we see several grounds for appeal that will be pursued,” she said.

    The case focused on whether sensors created by Cisco to detect suspicious computer traffic incorporated SRI’s patented technology without its permission. Companies such as Home Depot and TransUnion, a provider of consumer-credit reports, use Cisco’s surveillance system, according to court testimony.

    SRI’s lawyer told jurors in closing arguments Wednesday that Cisco’s system for tracking computer-network intrusions wrongfully incorporated SRI’s technology and the company encouraged its customers’ unauthorized use of SRI’s invention.

    “Cisco instructs, guides and encourages its customers’ infringement,” Frank Scherkenbach, a lawyer for the research center, told the panel. SRI developed technologies such as the computer mouse and Siri, the personal-assistant program on Apple’s iPhones.

    Company Arguments

    Cisco’s attorney countered that SRI failed to prove the networking company infringed on its technology and didn’t deserve a damages award. SRI contends it deserves damages in the form of reasonable royalties for Cisco’s use of its inventions without permission.

    “This is not even a close case” of infringement, Steven Cherny, Cisco’s lawyer, said in closing arguments.

    The case is SRI International v. Cisco Systems Inc., 1:13-cv-01534, U.S. District Court, District of Delaware (Wilmington).

    10:16p
    Arizona Law Would Force Agencies to Use Cloud Services
    By Talkin’ Cloud

    The State of Arizona is proposing a law that would require state agencies to migrate IT resources and operations to the cloud. The law, S.B. 1434, now awaits the approval of Governor Doug Ducey.

    According to a report by InfoWorld, the law mandates that agencies use cloud computing, and if they don’t, CIOs could risk jail time for noncompliance.

    The State of Arizona has already migrated its DNS solution to AWS, which saves it approximately 75 percent in annual operating costs compared to its previous on-premises infrastructure.

    The law states that departments must adopt a policy “that establishes a two-year hardware, platform and software refresh evaluation cycle for budget units that requires each budget unit to evaluate and progressively migrate the budget unit’s information technology assets to use a commercial cloud computing model or cloud model as defined by the National Institute of Standards and Technology.”

    The rest of the text is fairly standard in that it will require government agencies to utilize resources that comply with regulations such as FedRAMP, HIPAA, and PCI DSS, to name a few. It also requires the cloud data to be stored in the United States.

    Arizona CIO Morgan Reed told StateTech that Governor Ducey’s vision for Arizona is for it “to move at the speed of business. So the question should be, why not the cloud?”

    This first ran at http://talkincloud.com/cloud-computing/arizona-considers-law-would-require-agencies-use-cloud-computing

