Data Center Knowledge | News and analysis for the data center industry
Tuesday, January 24th, 2017
1:00p |
HPE to Acquire Cloud Cruiser to Measure Cloud Consumption
Brought to You by Talkin’ Cloud
Cloud Cruiser, the San Jose, Calif.-based cloud software provider that helps businesses control cloud spend, has been acquired by HPE in a deal announced on Monday.
According to a blog post by HPE’s Scott Weller, SVP of Technology Services Support, the addition of Cloud Cruiser will help HPE’s customers manage cloud spend, a task that becomes more difficult as workloads are spread across on-premises infrastructure and public cloud.
HPE plans to use Cloud Cruiser’s technology to accurately meter and bill for customers’ consumption of IT, as part of HPE Flexible Capacity, a consumption model which allows customers to manage infrastructure in their own data center but pay for it as-a-service.
“A critical piece of HPE Flexible Capacity is measurement – the ability to accurately meter and bill for customers’ consumption of IT– that differentiates Flexible Capacity from other offers,” Weller said.
“As a Cloud Cruiser customer, we have seen first-hand the value that Cloud Cruiser’s technology creates by enabling HPE Flexible Capacity to meter and bill for usage of on-premise IT infrastructure in a pay-as-you-go model. By continuing to enhance the Cloud Cruiser platform and SaaS app Cloud Cruiser 16, more tightly integrating it into HPE Flexible Capacity and leveraging the deep domain expertise of the Cloud Cruiser team, we are excited about the opportunity to accelerate the adoption of innovative consumption-based IT offerings and simplify hybrid IT for our customers.”
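The consumption model Weller describes comes down to metering raw usage and pricing it. As a rough illustration only (the rates, metric names, and sample data below are hypothetical, not Cloud Cruiser’s or HPE’s actual schema), a pay-as-you-go charge might be computed like this:

```python
# Illustrative only: a toy pay-as-you-go metering calculation.
# Rates and sample figures are made up, not Cloud Cruiser's actual model.
from collections import defaultdict

# Hypothetical hourly rates per resource type, in dollars.
RATES = {"vm_hours": 0.12, "storage_gb_hours": 0.0002}

def bill(usage_samples):
    """Aggregate metered samples by metric, then price each total."""
    totals = defaultdict(float)
    for sample in usage_samples:
        totals[sample["metric"]] += sample["quantity"]
    return {metric: round(qty * RATES[metric], 2) for metric, qty in totals.items()}

samples = [
    {"metric": "vm_hours", "quantity": 720},           # one VM, a 30-day month
    {"metric": "vm_hours", "quantity": 360},           # a second VM, half the month
    {"metric": "storage_gb_hours", "quantity": 72000}, # 100 GB for the month
]
charges = bill(samples)
```

Real metering platforms layer rating tiers, currency handling, and showback/chargeback reporting on top of this basic aggregate-and-price step.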
Cloud Cruiser works with a variety of cloud providers, including AWS and Microsoft Azure, so customers can measure cost and usage for workloads regardless of where they sit. In a recent interview with Talkin’ Cloud, HPE’s Eugene O’Callaghan, vice president of workload and cloud, said that though some of its clients are completely in the public cloud, the majority have a mix of on-premises infrastructure, private cloud, and public cloud.
With this strategy shift, a tool to measure usage and control costs across a customer’s infrastructure in an automated way is a welcome addition to HPE’s portfolio, which also recently added managed services for Microsoft Azure.
Cloud Cruiser will sit under the Data Center Care portfolio within HPE’s Technology Services Support organization. The company’s co-founder and CEO, David Zabrowski (an HP alum), will report to Weller.
“Our relationship with HP started in 2010 as they became our first partner, first product integration, and our first joint customer win. Fast forward a few years, and HPE is now one of Cloud Cruiser’s largest customers, utilizing Cloud Cruiser to power Flexible Capacity, a consumption based, pay-as-you-go service. Over this time, our teams worked together collaboratively, which I only expect to deepen as we become one,” Zabrowski said in a blog post on the Cloud Cruiser website.
HPE has started 2017 off strong with its $650 million acquisition last week of SimpliVity, a provider of software-defined, hyperconverged infrastructure.
As a report by The VAR Guy notes, the acquisition of SimpliVity will help HPE gain traction in the hyperconverged infrastructure market, which is expected to reach $5 billion by the end of 2019.
This article originally appeared here, on Talkin’ Cloud.
4:00p |
Oracle Cuts 450 Jobs in Hardware Systems Division
Brought to You by Talkin’ Cloud
In a sign of the times, Oracle has laid off hundreds of its employees in its hardware unit in Santa Clara, according to a report by The San Jose Mercury News.
According to the paper, Oracle will lay off about 450 employees in its Santa Clara hardware systems division, confirming rumors that had been swirling online and speculation that job losses may be afoot after hardware’s poor showing in its second quarter earnings.
Companies like Oracle are focusing investments on faster-growing areas of the business, like cloud, while cutting from divisions, such as hardware, that are in decline. Earlier this month Oracle announced plans to open three new cloud regions in the next six months, and it has opened an Israeli tech accelerator to help boost its cloud innovation.
According to Oracle, which sent a letter to the Employment Development Department, the facility in Santa Clara will stay open, but the company is “refocusing its Hardware Systems Business, and for that reason, has decided to lay off certain of its employees in the Hardware Systems Division.” Oracle sent the letter to the government on Wednesday as part of its legal obligation to provide notice of impending mass layoffs, the report said.
Hardware and software developers will account for the majority of the lost jobs, according to the report, along with a handful of managers, technicians, and administrative assistants. The affected employees were notified last Thursday.
On TheLayoff.com, a discussion board where employees can report workforce reductions, Oracle employees put the layoff numbers much higher, with some reporting as many as 1,700 jobs cut, not just in Santa Clara but in other areas of the U.S., including Colorado and Burlington, Mass.
The reports come as Oracle has been sued by the Obama administration over claims that its compensation policies discriminate against women and black and Asian employees.
This article originally appeared here, on Talkin’ Cloud.
4:30p |
IBM Touts Trump-Pleasing Hiring Plans While Firing Thousands
By Jing Chao (Bloomberg) — On the eve of a summit last month between technology executives and then President-elect Donald Trump, IBM Chief Executive Officer Ginni Rometty publicly pledged to hire about 25,000 U.S. workers and spend $1 billion on training over the next four years.
She didn’t mention that International Business Machines Corp. was also firing workers and sending many of the jobs overseas.
In late November, IBM completed at least its third round of firings in 2016, according to former and current employees. They don’t know how many people have lost their jobs but say it’s probably in the thousands, with many of the positions shipped to Asia and Eastern Europe. During the presidential campaign, Trump routinely criticized offshoring, although he didn’t specifically mention IBM. The firings—known internally as “resource actions”—have continued into the new year. This month, IBM started notifying more U.S. workers that they would be let go, according to a current employee, who says colleagues in the services business are bracing for further rounds.
Many industries and companies have been sending jobs overseas to take advantage of lower wages and to be closer to local markets. In some cases, technological advances, including automation, have made positions obsolete. In an e-mailed statement, IBM spokesman Doug Shelton pointed out that the company generates more than two-thirds of its services revenue overseas. He reiterated plans to hire 25,000 people in the U.S. and said IBM will end up adding a net number of jobs over the next four years. “If we are able to fill these positions,” he said, “we expect IBM U.S. employment to be up over that period.” Shelton declined to say how many IBM workers were fired in 2016 but said IBM enjoys an attrition rate that’s historically much lower than the industry average.
Rometty’s hiring pledge prompted current and former IBM workers to vent on message boards and Facebook groups. Some complained that the new recruiting drive wouldn’t offset jobs sent overseas in recent years. Others said Rometty had neglected to mention whether and how many people would be fired in the meantime. Some urged online communities to contact the Trump transition team and educate his aides about IBM’s history of layoffs and outsourcing.
“Ginni Rometty is terminating thousands of IT workers and touting herself as some hero who’s out to hire 25,000 workers,” says Sara Blackwell, a Sarasota, Florida-based lawyer and advocate for Protect U.S. Workers, who represents about 100 IBM ex-employees who have filed discrimination and other complaints. “To me, that’s hypocritical.”
When Rometty became CEO in 2012, she began pushing IBM into new businesses such as cloud computing and artificial intelligence. So far, results have been mixed. On Jan. 19, IBM reported that fourth-quarter sales fell 1 percent to $21.8 billion and operating margins narrowed, evidence that revenue from cloud computing and AI hasn’t yet replaced sales from older businesses. However, some areas showed promise. Sales in the unit that houses analytics and AI software increased for the third quarter in a row, and the technology services and cloud platforms segment also recorded year-over-year growth.
In a call with analysts, Chief Financial Officer Martin Schroeter said that the company expects to improve its margins this year, partly from “savings we have from workforce rebalancing as we continue to remix the workforce.” The shares rose 2.2 percent to $170.55 on Friday, but have fallen about 7 percent since Rometty became CEO.
IBM’s re-organization inevitably meant some workers would lose their jobs. Automation wiped out some positions, and the company says some of the firings were done to make room for people with skills in IBM’s new businesses. Layoffs have slowed but are continuing. In an Op-Ed published in USA Today in December to explain the creation of 25,000 new positions over the next four years, Rometty wrote: “We are hiring because the nature of work is evolving.”
At the same time, IBM has sent thousands of jobs offshore. It’s not alone in doing this. In the last 15 or 20 years, Accenture Plc, Cap Gemini SA and Hewlett Packard Enterprise Co. have all done the same. IBM doesn’t disclose how many of its 300,000-plus employees work in the U.S. or in its various divisions. But the services businesses, which generate more than half the company’s sales, have taken the brunt of the offshoring.
For example, early last year, the technology services division aimed to have just 30 percent of permanent employees located in the U.S. by the end of 2016, according to two former managers, who received the information from superiors. Later in the year, the target had been reduced to 20 percent, said one of the people, who asked not to be named to discuss internal matters. Shelton said the numbers are inaccurate.
“The thing I found really insulting was that the quote-unquote reason for my termination was it was a ‘skills transformation,’” says Sean Ott, a 20-year IBM veteran asked to clear out his desk a couple of months ago despite what he describes as a strong performance rating. “But at the same time, they have my exact job out there, they have my colleague’s job out there for other people to get hired into.”
As recently as three years ago, it was easier to track IBM’s staff cuts. Employees losing their jobs received a list of colleagues being fired too, along with their positions, departments, ages and other information. They were able to use the information to determine whether they wanted to waive age-discrimination claims, which they needed to do to receive a severance package. Then in 2014, IBM ended the disclosures, making it much more difficult to tally job cut totals.
The company hasn’t entirely sidestepped criticism for firings. In 2015, local and federal officials lambasted IBM for dismissing 1,200 workers in Iowa and Missouri just five years after opening facilities there. The firings were a blow to the cities of Dubuque and Columbia, which spent a combined $84 million on sweeteners to lure Big Blue in the hopes of incubating a startup scene. Republican Iowa Senator Chuck Grassley wrote to the company to express concern about the firing of 700 workers.
Today, Big Blue gets rid of people quietly and in smaller batches, former and current employees say. The firings have become so commonplace, they say, that many workers are resigned to losing their jobs and simply wait for their names to be called. Then, many are asked to train potential replacements overseas. In Ott’s case, those people were in China, India and Argentina. “Clearly,” he says, “my skills are still relevant.”
5:00p |
Vendors Take Facebook Data Center Switches to Market
A group of network hardware and software companies is bringing to market data center network hardware Facebook has designed in-house for its own use.
Cumulus Networks, the maker of a full Linux distribution designed for white box switches, is set to deliver today what its CEO described as the first public demo of its OS running on configurations of Facebook’s Backpack and Wedge 100 switch platforms. This demo is scheduled to take place at an invitation-only technical conference hosted today by Facebook at its Menlo Park, California, headquarters.
As both Cumulus and Facebook confirmed, Cumulus will make Backpack configurations with pre-loaded Linux generally available for pre-order today. The hardware manufacturer is Cumulus partner Celestica.
“[Backpack] is the first modular system ever to be running Cumulus,” explained Cumulus Networks CEO Josh Leslie, in an interview with Data Center Knowledge.
“At a high level, Facebook has been breaking a lot of ground, building very cost-effective, very efficient data center systems. They’re ground-breaking in the sense that they’ve been incredibly open with how they’ve done things. In parallel, Cumulus has been building an open, disaggregated network operating system that has gotten an increasing amount of maturity and traction in the marketplace. And what’s new is that, this is the first time the two will meet.”
The move is part of a string of announcements made today in support of Facebook’s Backpack and Wedge 100 switches as commercial products. Barefoot Networks announced a variation on one of Facebook’s themes: a so-called “Wedge 100B” that swaps in the company’s Tofino fully programmable PISA-based switch silicon. In a similar vein, Cavium inserts its own XPliant programmable switch chip for what’s being called “Wedge 100C.”
See also: Guide to Facebook’s Open Source Data Center Hardware
Disaggregation You Can See
Facebook released its Backpack modular data center switch platform last November, using a hardware architecture that models the way software-defined networks typically subdivide traffic. With Backpack, traffic on the data, control, and management planes is delegated to separate blocks of switch elements. These elements are fitted to the chassis orthogonally, with the line and fabric cards mounted horizontally and the control and management modules mounted vertically, in a configuration that opens up natural airflow.
Its thermal design, Facebook claims, enables its optical fabric to tolerate operating temperatures of 55 degrees Celsius / 131 degrees Fahrenheit, which is at the high end of network engineers’ typical tolerance scales. (Although when Facebook originally said it supported “55C optics,” some folks were Googling the phrase to find out what grade of fiber it meant.)
“We create hardware that can run multiple software options,” wrote Facebook consulting engineer Luis MartinGarcia for his company’s engineering blog touting today’s news, “and we develop software that supports a variety of hardware devices, enabling us to build more efficient, flexible, and scalable solutions. By building our data centers with fully open and disaggregated devices, we can upgrade the hardware and software at any time and evolve quickly.”
Fitting One Decoupling with Another
Cumulus Linux amends the typical Linux operating system configuration to include management and support functions for switch hardware. Network control protocols such as LLDP are treated by the Linux kernel as though they were applications in the user space. As the company’s chief scientist, Dinesh Dutt, explained in a company video, this enables an opportunity for automation that closed switch architectures could not possibly enable: integration of network control tasks with configuration management and CI/CD tools, such as Puppet and Chef.
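Because ports on a Cumulus switch appear as ordinary Linux network interfaces, a configuration-management tool can template interface definitions the way it templates any other file. Here is a minimal sketch of that declarative idea in Python (the `swp` port names follow Cumulus naming convention, but the rendering function and output format are simplified illustrations, not an actual Puppet module or Cumulus schema):

```python
# Illustrative sketch: render a declarative port description into an
# ifupdown-style interfaces stanza, the kind of file a tool like Puppet
# or Chef would manage on a Linux switch. Fields here are simplified,
# not a specific Cumulus Linux schema.

def render_interface(name, spec):
    """Turn one port's declarative spec into a config stanza."""
    lines = [f"auto {name}", f"iface {name}"]
    if "address" in spec:
        lines.append(f"    address {spec['address']}")
    if "mtu" in spec:
        lines.append(f"    mtu {spec['mtu']}")
    return "\n".join(lines)

# Desired state for two hypothetical front-panel ports.
desired_state = {
    "swp1": {"address": "10.0.0.1/31", "mtu": 9216},
    "swp2": {"address": "10.0.0.3/31", "mtu": 9216},
}

config = "\n\n".join(
    render_interface(name, spec) for name, spec in sorted(desired_state.items())
)
```

A configuration-management tool would render output like this into /etc/network/interfaces and reload networking, keeping the switch’s running state converged with the declared one — the integration Dutt describes.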
Arguably, this opportunity makes white-box switches a more suitable fit for the more microservices-oriented architectures that Web-scale organizations like Facebook need to run. Granted, there are very few of these organizations in the world, at least in number. And more commonly-sized enterprises have historically been more comfortable with all-in-one hardware configurations and out-of-the-box management tools.
So it’s interesting that Cumulus should drive an evolutionary course toward the enterprise side, kicking white-box switches more in the direction of everyday businesses.
“These are loosely-coupled systems,” said Cumulus’ Leslie, speaking about his company’s architecture. “We don’t want to do a bunch of special things to make our software work on their hardware. We want to do a bunch of simple things. That’s where the value is for customers.
“If they want to move later to next-generation Facebook hardware; or to a fixed configuration system from a Dell, a Mellanox, or an HPE, or if they want to graduate off a chassis later to a more modern architecture, they can do all those things and keep the software and operational model constant. And that’s very much part of the value proposition of this type of model.”
Making Network Configuration Ordinary Again
Cumulus Linux began making serious inroads in 2015, through deals with vendors including HP (now HPE) and Supermicro to provide bare-metal switches that were certified as “Cumulus-ready.”
A December 2016 Puppet webinar [registration required] told the story of implementing Puppet-automated switches running Cumulus Linux at New York-based 3D printing service Shapeways. It’s an example of a hyperscale architecture finding a home in a situation that’s not nearly as hyper as Google or Facebook.
“No more having a tool to handle an initial deployment of a switch, another tool to monitor the configuration of that switch, and then another tool to handle ad hoc remediations as necessary,” explained Carl Caum, Puppet’s senior technical marketing manager, during the webinar.
“Traditionally, when you’re trying to get a switch onto the network… it takes hours or days to get it live, and get it actually serving value. Then for your ongoing management, you have these manual and custom processes. But with Cumulus Linux and Puppet, you can have ONIE Boot install the operating system onto your new rack switch, and then hand off to Puppet to handle… not just the initial configuration, but also ongoing management, which reduces your time-to-value to minutes or seconds.”
Cumulus’ efforts notwithstanding, there may be two emerging classes of data center switch hardware taking shape, in which the on-premises data center looks and works fundamentally differently from the data centers of cloud service providers, ISPs, and Web-scale providers.
In an effort to prevent a barrier from forming between those classes, last week Cumulus unveiled its own white-box switch components, called Cumulus Express, with its network Linux pre-loaded. Tuesday’s demonstration appears to show, however, that Cumulus has no intention to compete with its own hardware partners. Rather, a plurality of options plays into what Leslie described as his company’s value proposition.
“If there’s some function that you want from the physical network,” he said, “you don’t want to be beholden to the vendor that built those tools, to make it available to you or not. If you’re building some [network] layer above us, we’re going to work well for you, because we’re going to give you as much control over that environment as is possible, and that’s going to be valuable to you in terms of achieving your business objectives.”