Data Center Knowledge | News and analysis for the data center industry
Wednesday, November 2nd, 2016
9:00a | Cisco Unveils Modular, High-Capacity S-Series Storage for UCS

Making the case that public cloud-based storage loses its cost effectiveness beyond a certain retention period or data volume, Cisco announced Tuesday the availability, beginning next week, of a new series of modular storage appliances geared for high-bandwidth applications, such as the emerging field of live digital video analytics.
Called S-Series (but not to be confused with other classes of network product that the company has called “S-series” in the past), Cisco’s new design offers up to 600 TB of total storage capacity from 60 3.5-inch devices in 4U of space, and connectivity by way of Cisco’s Virtual Interface Cards, with up to 256 virtual adapters per node. Some of that space may be traded out for compute nodes.
“If you take all of the modules out of this system, it’s essentially an empty metal box,” admitted Todd Brannon, Cisco’s product manager for unified computing, in an interview with Data Center Knowledge. Think of it like a blade server, he said, except focusing on storage optimization as opposed to compute density.
“The modularity does two important things for customers: One is, we can dial the storage, compute, caching, and I/O up and down in different ways, right-sizing the infrastructure to the workload,” Brannon continued.
“Second, in a traditional server, it’s pretty much a fixed bill of goods. You buy it, and when you want to upgrade some component, you typically rip it out and replace it. Here, we can decouple all the refresh cycles of the server subsystems.”
For example, he said, once Cisco completes its company-wide transition from 40 Gbps to 100 Gbps connectivity per wire, all the data center operator needs to do is swap out the I/O modules, leaving storage and compute modules in place. Similarly, when Intel upgrades its CPU technology, existing compute modules may be swapped out for newer ones. The management system, now provided by way of Cisco’s ONE Enterprise Cloud Suite, will immediately make adjustments.
Brannon noted that the component parts of modern servers each have different, usually non-synchronized, lifecycles from one another. The goal of modularity through disaggregation is to facilitate the management of resource classes on their own cycles.
In an internal study, Cisco engineers modeled the cost of staging 420 TB of storage on Amazon’s S3 public cloud over a three-year period against 600 TB of on-premises S-Series storage (the extra capacity accounting for overhead) over the same period, factoring in labor and maintenance. While S3 incurs no up-front charges, versus a significant initial investment for S-Series, Cisco believes enterprises will reach the break-even point in 13 months.
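As a purely illustrative companion to that comparison, the short Python sketch below models the same kind of break-even calculation. Every input (the assumed S3 price per GB-month, the hypothetical S-Series purchase price, and the monthly operating cost) is an assumption chosen for illustration, not a figure from Cisco's study.

# Minimal cloud-vs-on-premises break-even sketch.
# All inputs are illustrative assumptions, not Cisco's internal figures.

S3_PRICE_PER_GB_MONTH = 0.025   # assumed blended S3 price, $/GB-month
CLOUD_CAPACITY_GB = 420_000     # 420 TB kept in the public cloud

ONPREM_UPFRONT = 100_000        # assumed S-Series purchase price, $
ONPREM_MONTHLY_OPEX = 3_000     # assumed power, labor, and maintenance, $/month

def cloud_cost(months):
    """Cumulative cloud cost: no up-front charge, pay per GB per month."""
    return CLOUD_CAPACITY_GB * S3_PRICE_PER_GB_MONTH * months

def onprem_cost(months):
    """Cumulative on-premises cost: large up-front buy plus monthly opex."""
    return ONPREM_UPFRONT + ONPREM_MONTHLY_OPEX * months

def break_even_month(horizon=36):
    """First month at which on-premises becomes cheaper, within the horizon."""
    for month in range(1, horizon + 1):
        if onprem_cost(month) < cloud_cost(month):
            return month
    return None

print("Break-even month (illustrative inputs):", break_even_month())

With these particular inputs the crossover lands in month 14; where it lands in practice depends almost entirely on the up-front hardware cost relative to the gap between monthly cloud spend and on-premises operating cost.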
“The public cloud is fantastic for its immediacy, and the scaling that you can do there,” said Brannon. “We can offer the same type of rapid scaling on-prem for much, much lower costs.
“Customers are starting to recognize that, for these large data sets, the cloud’s not going to be the knight in shining armor. They’re still going to need to build these platforms on-prem. With S-Series, we’re trying to make that a lot easier for them, and give them the TCO that they need without all the operational headaches.”
But recognizing that customers will use public cloud storage for any number of purposes — including launching an application with the intent of moving it back on-premises — the UCS platform underlying S-Series will include CliQr, Cisco’s cloud orchestration software, giving them a means of migrating workloads on- and off-premises.

S-Series will be made available as part of Cisco’s UCS 3260 platform, which can be configured to order or assembled around general use cases, beginning November 7.

9:00a | High Density Compute is Here; Are You Keeping Up?

Steve Lim is Vice President and Head of Marketing for Vantage Data Centers.
The connected lifestyle is here, and whether you are reaching consumer or business users, the growth in the use of devices and data is staggering. In the U.S., the number of devices and connections is expected to grow from 7.3 per person in 2015 to over 12 per person in 2020. Video traffic continues to grow as well: according to Cisco’s Visual Networking Index (VNI), business internet video will grow 4.2-fold between 2015 and 2020 to reach 4.8 exabytes, and consumer video 3.1-fold to reach 29.1 exabytes. This will have a big overall impact on the data center, as more than 83 percent of all data center traffic will be in the cloud by 2019.
To meet this rapid growth in data usage, high-density data centers will be critical to scaling in support of cloud, big data, and new data-intensive technologies. And since data centers are all about power and cooling, high density is how you maximize the use of both.
Defining High-Density Computing
So what does ‘high density’ mean? Today, most racks in a data center support between 3 kW and 4 kW of power, which limits the number of blade servers and micro-servers you can put in each rack. A high-density rack supports nearly three times that amount, between 10 kW and 12 kW, and thus supports nearly three times as many servers in the same footprint. High density also refers to the use of virtualization to run more applications on each server, increasing processor utilization and reducing idle time. Applications should also be written to fail over automatically to another server to maintain uptime and minimize application downtime.
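As a rough, hypothetical illustration of that arithmetic, the Python sketch below counts power-limited servers per rack; the per-server draw is an assumed average, not a vendor specification.

# Back-of-the-envelope rack-density arithmetic.
# Rack budgets follow the figures above; per-server draw is an assumed average.

LEGACY_RACK_KW = 4.0          # typical rack power budget
HIGH_DENSITY_RACK_KW = 12.0   # high-density rack power budget
WATTS_PER_SERVER = 350        # assumed average draw per blade/micro-server

def servers_per_rack(rack_kw, watts_per_server=WATTS_PER_SERVER):
    # Power-limited (not space-limited) server count for one rack.
    return int(rack_kw * 1000 // watts_per_server)

legacy = servers_per_rack(LEGACY_RACK_KW)
dense = servers_per_rack(HIGH_DENSITY_RACK_KW)
print(f"~{legacy} servers at 4 kW vs ~{dense} at 12 kW ({dense / legacy:.1f}x)")

At 350 W per server, the 12 kW rack holds roughly three times as many machines as the 4 kW rack, which mirrors the 'nearly 3x' figure above.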
High Density Compute Strategies:

The benefits of a high density compute data center include:
- Lower Costs: High density delivers the lowest-cost data center service. By doing more in the same amount of space, costs are lowered and future growth is supported as data usage is expected to double over the next three years.
- Greater Efficiency: High-density compute maximizes both power and cooling to get the most out of each footprint and lower total operating costs. High-density data center management tools are optimized for running servers at peak efficiency.
- Higher Capacities: Racks can carry up to 5x more power than just 10 years ago. High-density racks are able to support more: more power, more compute – the high density effect.
- Going Virtual: Virtualization is vital to getting the most out of your data center spend. It allows you to run more applications on each machine, maximizing utilization, and to move applications off failed machines to maintain uptime.
Run Hotter but More Efficiently, and Plan for Failure
Data center operators are working hard to maximize the dollar value of every rack by going as dense as possible, making the most of both the space within each rack and the power supplied to it. Moreover, virtualization is a key technology that enables other servers to pick up the slack and continue operations in the event a physical server fails.
Traditional forced-air cooling methods are less effective at delivering uniform cooling at densities above 10 kW per rack, and this inconsistency can negatively impact server performance and lifespan. In a high-density compute data center, cooling and power systems are optimized to allow dense server configurations to run at optimal temperatures, which may mean running hotter than in current configurations, but within controlled parameters.
At the same time, the trend toward virtualization of applications provides two benefits in a high-density environment. First, since server utilization changes constantly, virtualized apps can take advantage of processor ‘down time’ to run more workloads rather than wasting power on idling machines. Second, if a server fails for any reason, applications are moved to other servers and loads are quickly rebalanced. Thus servers can be run at maximum performance (hotter), because failures do not result in significant application downtime.
By relying on high density and virtualization and planning for failure, customers can run the data center hotter, use less energy to cool the air, and save more money because when a physical server goes down, no productivity is lost.

Why a Special High Density Data Center?
High density does require a different approach to data center management. Heavier racks may require reinforced floors; cooling may require a rethink, not to mention a mental adjustment to letting servers run hotter, with plans in place to swap out failed systems. New management systems are needed to monitor and control high-density systems, and increasingly this may include machine learning to fully maximize data center economics. But the result of all this rethinking is a more efficient, higher-performing data center.
High-density compute data centers maximize your data center spend so enterprises can continue to adopt new digital transformation business models that rely on virtualization and cloud technologies to stay competitive.
To put it simply, any data center purpose-built for high density will be able to support next-generation cloud, enterprise, and high-performance computing infrastructure, optimizing the data center footprint and lowering overall costs.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
3:18p | Broadcom to Buy Brocade for $5.9 Billion to Expand in Cloud

(Bloomberg) — Broadcom Ltd, a maker of semiconductor chips, agreed to buy Brocade Communications Systems Inc. for $5.9 billion, a move that will help it play a greater role in meeting increasing demand for cloud computing.
Broadcom will pay $12.75 a share for the company in an all-cash transaction valued at about $5.5 billion plus $400 million in net debt, the company said in a statement Wednesday. That’s a 47 percent premium to Brocade’s closing share price on Friday. Brocade shares rose 22 percent on Monday after Bloomberg reported the deal was in the works. After the transaction closes, Broadcom said it plans to divest Brocade’s internet gear business, including Ruckus Wireless, which Brocade acquired earlier this year in a deal valued at $1.5 billion.
Owning Brocade’s networking gear is attractive for component makers like Broadcom, which makes the chips that power network switches. Such an acquisition would help Singapore-based Broadcom play a greater role in the build-out of data centers needed to fulfill rising capacity demand for storing information in the cloud.
Broadcom Chief Executive Officer Hock Tan, who’s built a $67 billion chipmaker through a string of acquisitions, had said he was in the market for more purchases. Tan already has chips that power network switches and control storage devices, markets that provide Brocade with most of its income.
Adding Brocade’s Fibre Channel switching business is “strategic and complementary for Broadcom,” Tan said on a conference call. “Demand for storage continues to grow rapidly and this acquisition fills a key area within the enterprise storage product line.”
Tan also emphasized that he doesn’t plan to cannibalize the chip business.
“Let me put everyone at ease by confirming that we are not getting into the systems business,” Tan said on a conference call Wednesday. “We have built a great company selling primarily semiconductors to OEM system vendors. We consider our OEM customers to be strategic partners and have no desire to compete with them. We will be keeping Brocade’s Fibre Channel switching business, which sells to many of the same OEMs our semiconductor business does, but we will divest other businesses.”
San Jose, California-based Brocade has struggled to find growth in networking, where it’s dwarfed by Cisco Systems Inc. Customers are turning away from proprietary hardware and software combinations that Cisco specializes in, opting instead for open-source software and cheaper hardware built on the kind of chips that Broadcom makes.
Last year, Brocade sales rose 2 percent to $2.3 billion. That’s less than Cisco gets from its switch business in one quarter.
Upon closing, the transaction is expected to add immediately to Broadcom’s adjusted free cash flow and earnings per share, the company said. Broadcom anticipates the new business will add about $900 million to adjusted EBITDA in fiscal 2018. Broadcom also increased its long-term operating margin target to 45 percent from 40 percent, Chief Financial Officer Tom Krause said on the call.
Broadcom shares rose 2 percent to $172 in premarket trading while Brocade rose 9 percent to $12.25.

7:11p | AWS Adds Wind Farm as it Targets 40 Percent Renewable Energy by Year-End

Brought to you by The WHIR
As the behemoth of the public cloud market, Amazon Web Services uses an enormous amount of energy to run its data centers. To offset this, AWS has announced the construction of a new 189 megawatt wind farm in Hardin County, Ohio, that will generate 530,000 megawatt hours (MWh) of renewable electricity annually, starting in December 2017.
The wind farm is AWS’ second in Ohio and its fifth renewable energy project in the U.S. Together, the projects will generate a combined 2.2 million MWh of renewable energy annually, which AWS says is enough to power almost 200,000 U.S. homes.
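As a quick sanity check of those figures, the short calculation below derives the implied capacity factor and the homes-powered equivalence; the average household consumption of about 11 MWh per year is an assumption roughly in line with U.S. averages, not a number from AWS.

# Sanity-checking the announced generation figures.
NAMEPLATE_MW = 189            # Hardin County nameplate capacity
ANNUAL_OUTPUT_MWH = 530_000   # expected annual generation
FLEET_OUTPUT_MWH = 2_200_000  # combined output of AWS' five U.S. projects
MWH_PER_HOME_YEAR = 11        # assumed average U.S. household usage

HOURS_PER_YEAR = 8760
capacity_factor = ANNUAL_OUTPUT_MWH / (NAMEPLATE_MW * HOURS_PER_YEAR)
homes_powered = FLEET_OUTPUT_MWH / MWH_PER_HOME_YEAR

print(f"Implied capacity factor: {capacity_factor:.0%}")  # about 32%
print(f"Homes equivalent: ~{homes_powered:,.0f}")         # about 200,000

The roughly 32 percent result is a typical capacity factor for onshore wind, and 2.2 million MWh divided by the assumed 11 MWh per home lands on the nearly 200,000 homes AWS cites.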
The project will be constructed, owned, and operated by EverPower, which already operates seven projects with a combined capacity of 752 MW.
“We remain committed to achieving our long-term goal of powering the AWS Cloud with 100 percent renewable energy,” Peter DeSantis, Vice President, Infrastructure, AWS said in a statement. “There are lots of things that go into making this a reality, including governments implementing policies that stimulate cost-effective renewable energy production, businesses that buy that energy, economical renewable projects from our development partners and utilities, as well as technological and operational innovation that drives greater efficiencies in our global infrastructure. We continue to push on all of these fronts to stay well ahead of our renewable energy goals.”
AWS says it is on track to reach 40 percent renewable energy use this year, and 50 percent by the end of 2017.
Amazon’s Paulding County, Ohio, facility is scheduled to begin operation in May 2017. The company currently has a wind farm in Indiana and a solar farm in Virginia in operation. The Hardin County facility will be the second largest by capacity, behind the Amazon Wind Farm US East project in North Carolina, which is expected to begin producing electricity in about a month. Amazon also has several renewable projects beyond those that power AWS infrastructure, including a recently unveiled 253 MW wind farm in Texas.
The U.S. Environmental Protection Agency recently released a list of data center providers using the most renewable energy, topped by Digital Realty Trust. A survey by Data Center Knowledge found that 70 percent of colocation and wholesale data center service users consider sustainability when selecting a provider.

7:18p | SonicWall Returns to Independence, Names New CEO

Brought to you by MSPmentor
SonicWall has officially become a standalone company again, amid a rollout of slick new branding and updated programs.
But it was a significant personnel development that dominated headlines.
Bill Conner, a Dallas-area technology executive and 30-year veteran in networking and security, was named SonicWall’s new CEO, following its spin-off from the tech giant formerly known as Dell.
No one openly said nimbleness was lacking during SonicWall’s four-year existence as part of Dell Software Group.
But Conner – the handpicked choice of the new owners, private equity firm Francisco Partners and hedge fund Elliott Management – said his first priority is to restore a sense of creative dynamism to the security software vendor.
“Literally, it’s been to return the pace and frequency of the innovation in our core products,” he said of his mandate. “To build back the channel momentum and scale that we know this company can have and has had.”
Quest Software and One Identity, also components of the Dell Software Group sale, have been rolled into a single company called Quest Software.
The top of the new firm’s to-do list also includes doubling down on its cloud initiative, SonicWall Cloud GMS (global management system), a cybersecurity software solution with more than 100,000 firewalls under management.
“We are now hosting it to allow partners to focus on doing the business,” SonicWall vice president of worldwide security sales Steve Pataky told MSPmentor in August at Dell Peak 16 in Las Vegas.
“We take care of all the back-end infrastructure,” he said, adding that partners with their own infrastructure are free to host their own solutions. “Smaller operations will be able to get up and going with hosted GMS on a cost-efficient basis without having to put out a lot of capital costs.”
Pataky said the company had successfully executed on its commitment to set up a standalone partner program by the time the deal went final. Today also marks the start of the migration period, which will continue through the end of 2017.
“We’re watching the numbers today,” Pataky said. “Hundreds and hundreds of partners are already migrating.”
Management said to expect new capabilities around mobile in the coming months, as well as innovation around Internet of Things (IoT).
“You look at IoT, it’s clearly a new attack vector,” Conner said.
The new SonicWall plans to compete in the enterprise space.
“We’ll use our skill sets and our own grid networks and capabilities,” Conner said.
The new CEO said he feels fortunate to sit at the helm of such a major cybersecurity player, at such an exciting time.
“I truly knew the asset and the leadership team,” Conner said.
“I’m fortunate enough to be selected by Francisco and Elliott,” he added. “My focus will now be to take it to the next level…and returning some of the innovation and speed (to) SonicWall.”

8:04p | Not Really a Bro-mance: Broadcom Wanted Brocade’s FC Storage

When Brocade Communications completed its acquisition of municipal Wi-Fi provider Ruckus Wireless last May, it informed investors that the deal would create a new kind of pure-play networking company. That seemed to be what the networking technology market wanted… up until this morning, when networking IP giant Broadcom Limited announced its intent to acquire Brocade, in a deal which Broadcom values at $5.9 billion.
“We were not looking to sell the company,” wrote Brocade CEO Lloyd Carney, in a company blog post Wednesday morning. “However, when Broadcom approached us with a compelling offer, we had an obligation to consider that offer, along with other alternative opportunities.”
It was almost as if Carney succumbed to the idea as the inevitable fate of doing business. One wonders whether the Ruckus deal would have been completed had either party been able to peek just six months into its future.
“The nice thing about this ecosystem is, things don’t change,” pronounced Broadcom CEO Hock E. Tan, during an investors’ presentation Wednesday morning.
So Much for Pure-Play
Broadcom made it clear that it was not actually acquiring a pure-play networking company. In fact, two of the components which, as late as yesterday, analysts speculated would make such a deal attractive were not even on the table: Ruckus, and the virtual IP networking business that put Brocade on the map. Broadcom only wants Brocade’s Fibre Channel SAN business, and not even because it foresees growth for that market.
“We’re not buying this business, Fibre Channel SAN, because we think it will grow dramatically over the next five years or ten years,” said Tan. “We foresee it to be in the range of what it was over the last five years.”
Broadcom will help Brocade to sell off both Ruckus and its IP networking business, said Tan. He phrased that statement more than once as a reassurance to Broadcom’s partners that it is not getting into the systems business and will not be competing with its own customers, even though Brocade holds some very valuable intellectual property in the SDN category.
At almost the same time, though, he also praised Brocade’s FC SAN portfolio for its intellectual property value as well as for its value as storage appliances. But repeatedly, Tan and CFO Tom Krause acknowledged that FC SAN is not, and probably will not be, a growth business per se for Broadcom.
Rather, it provides a necessary component to Broadcom’s catalog that had been missing. Tan painted a picture of a data center systems and components market where customers are capable of selecting parts from multiple vendors, but at the same time rate the quality of those vendors on the breadth of their platform portfolio. And he had reason to believe Broadcom was deficient in this category.
“We also do perceive a large number of customers — particularly larger enterprises like financial institutions, telcos, cable operators, government — need dedicated and highly secure storage infrastructure, for managing and sharing mission-critical data used often in private data centers or on-premises. Storage-area networks are a key solution to meet this need,” said Tan, at one point citing AT&T and Bank of America as example customers in this space.
Slow and Steady
Later in the call, responding to an analyst who had pointed out Brocade’s own previous assessment of its FC SAN business as a no-growth, or even steadily declining, revenue channel, Tan went so far as to predict those numbers would not change — insofar as they apply to the FC SAN market separately.
But as a component of a larger platform that must sell to certain classes of enterprises that Tan described as highly “risk-averse,” FC SAN offers particular value to Broadcom in this particular situation. The transition of enterprise storage to a hybrid model stops, he explained, when high security and high availability factor into the equation. If Broadcom doesn’t provide those options, he said, customers will look someplace else.
About five to ten percent of the FC SAN market is moving to lower-cost alternatives, he said, utilizing public cloud infrastructure, and integrating iSCSI and IP-over-Ethernet. But Tan believes — in his own words, “ironically” — that a Broadcom presence in the FC SAN market will help even those customers transition to lower cost storage, involving technologies where Broadcom has already established an intellectual property presence.
“But I would say this is a very gradual move,” remarked Tan, “because storage, as you know, is an extremely risk-averse customer base, especially mission-critical storage. Any move is slow.”
There are technological shifts under way, the CEO pointed out, from 16G Fibre Channel standards to 32G. But the amount of time it would take this class of customers to make the move starting now, he predicted, could be as long as six years.
Un-Wed
The deal is currently pending approval from Brocade’s and Broadcom’s shareholders, and will face regulatory scrutiny. Still, it leaves up in the air the fate of two businesses which even Broadcom’s Tan acknowledged are high-growth, lucrative, and desirable.
Ruckus Wireless burst onto the scene in 2012 with an ambitious move to offer free-to-the-public municipal Wi-Fi service to San Francisco and San Jose. Launched two years later, Ruckus’ project replaced an earlier venture with MetroFi, which was canceled after that company left its half-built towers derelict, forcing angry taxpayers to fork over funds to deconstruct them.
The jewel in Brocade’s crown is Vyatta, its line of software-based SDN controllers, acquired in 2012. It became a true market disruptor in 2014, integrating open source technology from OpenDaylight and making it feasible for organizations to stage their network infrastructure completely within x86 boxes.
Last May, legendary Brocade engineer Tom Nadeau spoke with Data Center Knowledge, discussing how network functions virtualization could enable very high-availability networks to run on OpenStack. The driving need for organizations, he told us, was to be weaned off of their first generation of virtualization, into an environment where multiple generations of software workloads could co-exist.
These institutions, by Nadeau’s characterization, were not risk-averse. And some of them were the same class of customer that Broadcom CEO Tan singled out. Evidently there’s a difference of opinion. That difference may be the thread upon which the future of Brocade’s SDN/NFV portfolio presently hangs.