Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, December 5th, 2018

    5:11p
    Gartner: Replace Your Data Center with a ‘Digital Toolbox of Possibilities’

    The analyst firm lays out ways enterprise IT leaders can stay relevant in a future where “infrastructure is everywhere.”

    While enterprise data centers aren’t going away completely, the data center as we know it is in its final days. It plays an ever-smaller role in enterprise technology infrastructure, which now also consists of a multitude of cloud-based services. That demands an entirely new mindset from those working in IT and operations.

    This was the overall message Gartner analysts delivered in the opening keynote of the research firm’s annual IT Infrastructure, Operations, and Cloud Strategies conference in Las Vegas Monday. Instead of a rigid on-premises infrastructure, designed to support specific applications, IT leaders have to develop a “toolbox” of infrastructure options and “pre-defined patterns” to support the constantly changing requirements of business units and software developers.

    “The future of infrastructure is everywhere. That’s right, everywhere,” said Gartner distinguished VP analyst David Cappuccio, whose blog post titled ‘The Data Center Is Dead’ earlier this year, carrying more or less the same message, sent ripples through the data center industry. “Whether as individuals we like it or not, it’s out of our hands,” he said in the keynote.

    Why may a VP of IT not like the concept of infrastructure being everywhere instead of being confined to the four walls of their data center? The short answer: control.

    “We’re asked to support agility,” Bob Gill, also a Gartner VP, said, “but agility lies in diversity.” Agility in IT is “the monster under the bed,” because the more diverse a family of technologies you support, the more control you relinquish. And less control means a tougher time with security and compliance.

    “It used to be that you controlled everything” as an enterprise IT leader, Gill said. That control is quickly slipping away, especially as a function of controlling the data center. “There are still data centers, but the idea of the data center as the core factor that controls everything is outdated.”

    In the world of digital business, a business leader or developer is no longer limited by the technology their company’s IT department provides. There’s a myriad of services they can buy with a corporate credit card without ever getting IT involved, and there’s no guarantee those services will meet the organization’s security and compliance requirements.

    Having a “digital toolbox of possibilities,” or a diverse set of well-architected solutions and services, is a short-term way for IT leaders to provide the agility developers need while regaining some control, Gartner analysts suggested.

    Control itself isn’t the end goal, of course. “The business unit still needs our steady hand of governance, of security, of application reuse,” Gill said.

    In the long term, IT leaders should get away from being project-focused and become more product-focused, because that’s the way businesses look at things, he said. “Infrastructure does not make money; the applications make money.”

    That means infrastructure decisions should be driven by applications. Gartner calls the various new infrastructure options available to companies “execution venues.”

    IT can leverage its knowledge to guide the selection of the best execution venue for each application, which means collaborating more closely with business units and developers – the colleagues who build the products that drive revenue for the organization.

    5:15p
    Goldman Uses Gear From This Startup to Run Some Data Centers

    Barefoot says a new version of its networking chip will help its effort to shake up how computer networks are built and operated.

    Ian King (Bloomberg) -- Barefoot Networks Inc., a rare semiconductor startup, said a new version of its networking chip will help its effort to shake up how computer networks are built and operated. At least one customer, Goldman Sachs Group Inc., is betting it will succeed.

    Barefoot makes switch semiconductors that direct the flow of data between computers. Its new Tofino 2 chip is faster and uses less power than the original. Like the original, it can be programmed to do different things via P4, an open-source programming language Barefoot helped create.
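    The core idea behind P4-programmable switches is the "match-action" table: a packet's header fields are matched against table entries installed by the operator, and the matching entry picks the action applied to the packet. The toy sketch below illustrates that abstraction in plain Python; it is not P4 code, and the MAC addresses, port numbers, and function names are invented for illustration.

```python
def make_table():
    """A toy match-action table: exact match on destination MAC -> action."""
    return {
        "aa:bb:cc:dd:ee:01": ("forward", 1),  # send out port 1
        "aa:bb:cc:dd:ee:02": ("forward", 2),  # send out port 2
    }

def apply_table(table, dst_mac):
    """Look up a header field; fall back to a default action on a miss."""
    return table.get(dst_mac, ("drop", None))

table = make_table()
print(apply_table(table, "aa:bb:cc:dd:ee:01"))  # ('forward', 1)
print(apply_table(table, "ff:ff:ff:ff:ff:ff"))  # ('drop', None)
```

    In a real P4 program the table structure and actions are compiled into the chip's pipeline, while the entries are installed at runtime by a control plane; that split is what lets operators add new behaviors without waiting for new silicon.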

    That flexibility makes Barefoot products different from rival networking chips, such as the Tomahawk range from Broadcom Inc. The startup argues the technology will bring the networking sector up to speed with the innovation sweeping through other areas of computing. Backers include Google and Tencent Holdings Ltd.

    Goldman has used some Barefoot-based networking equipment in some of its data centers. The investment bank plans to increase deployments as more mainstream makers of gear, such as Cisco Systems Inc. and Arista Networks Inc., offer switching machinery using Barefoot technology.

    “It’s an elegant solution. It doesn’t solve every problem, and it needs to evolve, but it just keeps improving,” said Josh Matheus, a managing director of technology at Goldman Sachs. “We’re pretty happy with it. I see it proliferating.”

    For Goldman, there are multiple advantages in using Barefoot-based networking. The ability to customize the chips, and the P4 language, mean new functions can be added quicker, usually within a couple of months, Matheus said. That compares with waiting more than a year for new versions of standard switch semiconductors to be designed and manufactured.

    A key goal for computer network owners is real-time monitoring of information as it flows through their data center equipment. Barefoot chips help with this, making it easier to spot and fix outages and slowdowns. Efficiency and security are also enhanced by seeing what happens to information after it enters a network and before it leaves. These extra insights help Goldman improve products such as its high-frequency trading systems, Matheus said.
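    One way per-hop visibility pays off: if each switch on a packet's path stamps it with a timestamp (as in-band telemetry schemes on programmable switches can do), the operator can compute the latency of every hop and pinpoint which link is slow. A minimal sketch, with switch names and timestamps invented for illustration:

```python
def hop_latencies(path):
    """path: list of (switch_name, timestamp_ns) in traversal order.
    Returns (from_switch, to_switch, latency_ns) per consecutive hop."""
    return [
        (a[0], b[0], b[1] - a[1])
        for a, b in zip(path, path[1:])
    ]

# A hypothetical three-switch trace; the last hop clearly dominates.
trace = [("tor-1", 1_000), ("spine-3", 6_000), ("tor-7", 106_000)]
for src, dst, ns in hop_latencies(trace):
    print(f"{src} -> {dst}: {ns} ns")
```

    With only end-to-end measurements, the operator would see a slow path but not which hop caused it; per-hop timestamps make the slow link stand out immediately.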

    5:16p
    RISC-V Summit Debuts to Showcase Open Source ISA

    Vendors at the first annual RISC-V Summit announce new products built around the open source processor specification.

    SANTA CLARA, Calif. -- A year or so ago you'd have had trouble finding someone who knew anything about the open source silicon project RISC-V. Then SiFive, a startup built around the reduced-instruction-set architecture, raised over $143,000 in a crowdfunding campaign taking pre-orders for HiFive Unleashed, a $999 single-board computer (SBC) for developers. It featured what was then the most powerful RISC-V system: the Freedom U540, a multi-core processor clocked at 1.5 GHz.

    That proved there was considerable developer interest in the project, and it served as notice that this open source processor architecture was ready to tackle a wide range of workloads. Within weeks after the SBCs were delivered, the Linux distributions Debian and Red Hat's Fedora announced they were working on ports to the architecture -- a feat that was accomplished by both distros in near record time.

    By this time NVIDIA had already been on the RISC-V bandwagon for a couple of years, having announced in 2016 that it was using RISC-V for the next generation of the Falcon micro-controller for its GPUs. An ecosystem of companies had also been growing around the ISA, mostly centered around using the technology in embedded IoT devices, an area where the technology was already well-suited and poised to take on the likes of ARM.

    This week there's further proof that RISC-V has arrived. More than 1,000 professionals, mostly from the hardware side of tech, are attending the first-ever RISC-V Summit at the Santa Clara Convention Center in Silicon Valley.

    At a keynote on Tuesday, the conference's second day, Krste Asanovic -- co-founder and chief architect at SiFive, as well as board chair of the non-profit RISC-V Foundation, which oversees the project's development -- said that RISC-V interest and adoption are being driven more by the project's openness than by its impressive specs in areas such as speed, scalability, and power use.

    Not that openness and capability don't go hand-in-hand.

    "Whatever's broken or missing in RISC-V is going to get fixed," he added. "Again, because of its open source nature."

    That's a textbook example of the open source development model, because as user companies come on board and begin to develop products around the specification, they add features that are important to them, as well as fix issues that affect them.
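    The openness is concrete: RISC-V's base instruction formats are fully published, so anyone can build cores, assemblers, or tooling against them without a license. As a small illustration, this sketch encodes one RV32I R-type instruction following the format from the public spec (funct7 | rs2 | rs1 | funct3 | rd | opcode); the specific registers are chosen arbitrarily.

```python
def encode_r_type(funct7, rs2, rs1, funct3, rd, opcode):
    """Pack the fields of an RV32I R-type instruction into a 32-bit word."""
    return (funct7 << 25) | (rs2 << 20) | (rs1 << 15) \
         | (funct3 << 12) | (rd << 7) | opcode

# add x1, x2, x3  ->  funct7=0, funct3=0b000, opcode=0b0110011
word = encode_r_type(0, 3, 2, 0, 1, 0b0110011)
print(hex(word))  # 0x3100b3
```

    That any developer can write (and verify) such a ten-line encoder from the freely available spec is exactly the kind of accessibility proprietary ISAs don't offer.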

    In another keynote, Martin Fink, CTO at Western Digital, which is in the process of shifting its entire product line to RISC-V processors, announced that the company has developed its first RISC-V core to be used in its products. In addition, it's working with SiFive and Barefoot Networks on OmniXtend, an open coherence standard for RISC-V, and has built an instruction set simulator (ISS) that will be released as open source.

    Another announcement came from Patrick Johnson, a VP at chip maker Microchip Technology, which acquired longtime RISC-V developer Microsemi earlier this year. During his keynote he revealed Microsemi's PolarFire SoC, billed as the first RISC-V-based system-on-chip FPGA.

    Although many expect RISC-V to eventually find a place in data center servers, some work remains to be done to make the technology ready for those workloads.


    7:57p
    A Funny Thing Happened on the Way to the Cloud
    Integration PaaS needs to go beyond basic application and data integration to address the full capabilities of cloud data management.

