Data Center Knowledge | News and analysis for the data center industry
Monday, May 19th, 2014
12:00p
Is Reported High Interest in ACI Enough for Cisco to “Lead SDN”?
Application Centric Infrastructure was front and center in Cisco’s latest earnings announcement and its top executives’ call with analysts last week, signaling an acknowledgement from the world’s biggest data center switch vendor that software-defined networking is the direction the industry is now moving in.
The data center networking market is going through a big transition, and so is Cisco, whose massive Cisco Live! conference kicked off in San Francisco this week. Numerous SDN vendors have emerged, challenging Cisco’s dominance by promising sophisticated networking technology where the switching hardware is simple and cheap but managed intelligently by software that sits outside of the physical network.
Cheap commodity networking hardware is a problem for Cisco, because its traditional value has been in sophisticated proprietary software and hardware that are inseparable, which has enabled it to sell its products at extremely high margins.
“We’re going to lead SDN”
At last year’s Cisco Live!, which took place in Orlando, Florida, the networking giant announced its answer to the SDN movement – its Application Centric Infrastructure technology. Instead of the common SDN approach of using virtual network overlays, which communicate specific configuration commands to hardware using the open SDN protocol called OpenFlow, Cisco’s ACI approach is to communicate high-level application requirements to intelligent network hardware, which self-configures accordingly.
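To make that contrast concrete, here is a minimal, purely illustrative Python sketch. It is not Cisco’s ACI API or a real OpenFlow controller; the rule and policy structures, the names (“web,” “db,” the contract fields) and the toy expansion function are all hypothetical, meant only to show the difference between pushing low-level rules and declaring high-level application intent.

```python
# Purely illustrative sketch -- not Cisco's ACI API or a real OpenFlow
# controller. An overlay/OpenFlow controller pushes low-level match/action
# rules to switches, while an ACI-style fabric takes a high-level
# application policy and derives the forwarding behavior itself.

# OpenFlow-style: the controller spells out each forwarding rule explicitly.
flow_rule = {
    "match": {"eth_type": 0x0800, "ipv4_dst": "10.0.20.5", "tcp_dst": 443},
    "actions": [{"type": "OUTPUT", "port": 7}],
    "priority": 100,
}

# ACI-style: the operator declares what the application needs; the fabric
# self-configures. All names and fields here are made up.
app_policy = {
    "application": "storefront",
    "endpoint_groups": ["web", "db"],
    "contract": {
        "from": "web",
        "to": "db",
        "allow": [{"protocol": "tcp", "port": 3306}],
    },
}

def rules_from_policy(policy):
    """Toy expansion of a declarative policy into low-level permit rules."""
    contract = policy["contract"]
    return [
        {"match": {"src_group": contract["from"], "dst_group": contract["to"],
                   "protocol": rule["protocol"], "port": rule["port"]},
         "actions": [{"type": "PERMIT"}]}
        for rule in contract["allow"]
    ]

print(flow_rule)
print(rules_from_policy(app_policy))
```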
Whether ACI wins the battle with OpenFlow-based SDN technologies remains to be seen. Cisco CEO John Chambers is optimistic. “We’re going to lead SDN,” he said on last week’s earnings call. But Cisco is not cutting itself out of the OpenFlow market. Its latest Nexus 9000 switches – the first line of ACI-enabled switches – also support OpenFlow, so when ACI finally becomes available later this year, Cisco customers will have a choice between the two.
Chambers claimed there was a lot of interest in ACI among CIOs he had spoken to and boasted that about 50 customers were already trying it out in their data centers. He said it was only a matter of time before those trials turned into sales: “I think you’ll just see us knock ’em off one after the other.”
Cisco’s three problems
Data center switching, and the high-end switch-and-router market overall, is one of three problems Cisco has today. The other two are slumping sales in the emerging markets and lack of momentum with service providers. Those were the three problem areas Chambers identified on the earnings call for the third quarter of the company’s fiscal year 2014.
While there are signs of things picking up in the service-provider and high-end-switching segments, the company expects sales in countries like China, India, Russia, Brazil and Mexico to continue declining due to macroeconomic challenges in those countries. “We expect these challenges to continue,” Chambers said.
Cisco is the world’s biggest vendor of networking equipment and trends that affect it are indicative of trends affecting the market overall. In 2013, the company had more than 60 percent market share in the Ethernet switching and routing market alone, which includes its data center switching products, according to IDC.
Its closest rival in this space is HP, but HP’s market share in Ethernet switches and routers pales in comparison to Cisco’s. HP made about $550 million in sales in this space in the fourth quarter of 2013, for example, compared to Cisco’s $3.65 billion in sales that quarter.
Cisco’s recent weakness in high-end switching and routing was felt primarily in its mobility and access-layer segments, which dragged down the third fiscal quarter’s switching revenue in general. “Overall switching revenue declined by 6 percent,” Chambers said.
He added that he was “pleased” with data center switching momentum but said it would take several more quarters before overall switching returns to growth. “We will continue to take it one quarter at a time as you would expect,” he said.
Cisco reported $11.5 billion in revenue for the third quarter of fiscal 2014, down 5.5 percent year over year. Its net income was $2.2 billion, down 12 percent, and earnings per share came in at $0.42, down 8.7 percent.

12:30p
Hortonworks Buys Hadoop Security Startup XA Secure
Hortonworks has acquired XA Secure in an Apache Hadoop security play. Hortonworks has one of the top distributions of the popular open source software that turns a group of commodity servers into a powerful parallel-processing compute cluster.
The buyer primarily targets enterprises, which are one of the most security-conscious customer segments. XA offers centralized policy management, fine-grain access control, encryption management and other enterprise-friendly security features.
Hadoop is growing in popularity within the enterprise, but simplicity and security need to be added to truly open up the market’s potential. The need for better security was heightened following the addition of the YARN resource-management tier in Hadoop 2 last year. YARN allows multiple workloads to run on Hadoop, and customers have requested simple, centralized security following its release.
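To illustrate what centralized, fine-grained access control means in practice, here is a hedged sketch; the policy schema and helper below are hypothetical and are not XA Secure’s actual data model.

```python
# Hypothetical sketch of centralized, fine-grained Hadoop access control in
# the spirit of what XA Secure offers. Field names are illustrative only.
policies = [
    {"resource": "/warehouse/sales/*", "groups": ["analysts"],
     "permissions": ["read"], "audit": True},
    {"resource": "/warehouse/sales/pii/*", "groups": ["compliance"],
     "permissions": ["read", "write"], "audit": True},
]

def is_allowed(user_groups, path, action):
    """Check a request against the central policy store. For brevity the
    first matching rule decides (real systems use longest-match, deny
    rules, and more)."""
    for policy in policies:
        prefix = policy["resource"].rstrip("*")
        if path.startswith(prefix) and action in policy["permissions"]:
            if set(user_groups) & set(policy["groups"]):
                return True
    return False

print(is_allowed(["analysts"], "/warehouse/sales/q2.csv", "read"))      # True
print(is_allowed(["analysts"], "/warehouse/sales/pii/x.csv", "write"))  # False
```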
“XA Secure will play an instrumental role in our company’s strategic vision for unlocking the potential of enterprise Hadoop,” said Rob Bearden, CEO of Hortonworks. “This move is consistent with our approach of delivering the enterprise capabilities that Hadoop users expect, completely in open source.”
In keeping with its commitment to open source software, Hortonworks plans to open-source the XA technology later this year but will provide commercial support for it along with its other Hadoop services.
Terms of the deal were not disclosed, but XA is a small firm of around 10 people. The company was founded by former Oracle and VeriSign employees in January 2013 and joined Hortonworks’ technology partner program last December.
Hadoop for enterprises is a busy and competitive market. Hortonworks is up against both younger companies that, like it, built entire businesses around Hadoop, and big established IT vendors that are not letting startups run away with the relatively new, growing market. The list of competitors includes MapR, which has made friends with some of the biggest cloud providers, such as Amazon and Google; Cloudera, which recently received a $740 million equity investment from Intel, becoming the processor giant’s official Hadoop distribution; and Pivotal, an EMC-, VMware- and GE-owned software company run by former VMware CEO Paul Maritz.
Hortonworks, a Yahoo spin-off formed as a stand-alone company together with Benchmark Capital in 2011, announced a $100 million venture round in March. That is a lot of capital to keep it going, but its competitors have a lot of firepower behind them as well.

1:00p
Cloudera Hosts Big Data With Vantage in Santa Clara
Enterprise Hadoop specialist Cloudera will house its data center infrastructure with Vantage Data Centers in Santa Clara, the companies announced today. The deal includes “significant data center space,” according to Vantage, with the ability to expand across an even larger footprint if Cloudera needs more space.
Cloudera is a leader in the fast-growing “Big Data” sector, offering customers enterprise data hub software built on Apache Hadoop, providing a single place to store, process and analyze all their data. Earlier this year the company lined up $900 million in funding, including a $740 million investment from Intel Capital (INTC).
That funding allows Cloudera to expand its data center infrastructure. Vantage operates campuses in Santa Clara and Quincy, Washington. The lease with Cloudera will help fill space in Santa Clara, where the company has more than 24 megawatts of capacity available, making it easy for Cloudera to expand if needed.
“We’re very excited to have one of the leading big data companies on our campus,” said Sureel Choksi, President and CEO of Vantage. “Big Data drives massive and growing data center requirements, making Vantage an ideal long-term partner for Cloudera.”
“Vantage offered us an excellent customer experience, one of the lowest total costs of ownership, and the ability to scale quickly as we grow,” said Steve Hirai, Senior Director Corporate Services at Cloudera. “They also had outstanding references from current customers, which made the decision easy.”
Vantage is one of several data center providers with space available in Santa Clara, which is the busiest data center market in Silicon Valley due to affordable power from the local utility, Silicon Valley Power.
Last month Vantage increased its revolving credit facility from $210 million to $275 million. The company said the additional credit will allow it to continue its expansion, including forays into new geographic markets. Vantage is backed by Silver Lake, a global technology investment firm with over $20 billion in combined assets under management.

1:41p
Notes and Observations from Data Center World
Matt Bushell leads Nlyte’s product and corporate marketing efforts as its Senior Product and Corporate Marketing Manager. Prior to joining Nlyte, Matt worked at IBM for more than ten years, helping to launch multiple products in its Information Management and SMB groups.
I recently had the pleasure of attending AFCOM’s Data Center World both as an exhibitor and as a session attendee, and am compelled to share some of the key trends and themes I observed (albeit through the lens of a Data Center Infrastructure Management, or DCIM, provider), in no particular order:
1) Data Centers themselves are changing more than we realize – In his opening keynote, Scott Noteboom (who has worked for Apple and Yahoo) posited how much energy data centers consume per human walking the earth (answer: roughly 7 watts per person). His vision (and no doubt his startup) looks to disaggregate CPU, memory and storage, apparently far beyond just virtualization. The other trend is that heat density is ramping up almost like a hockey stick, starting next year: from roughly 3-4 kW per rack today to 8-10 kW in a couple of years, and an expected monstrous 30 kW in 10+ years. So those are two trends, one a bit ambiguous, the other quite real and tangible.
2) AFCOM Data Center World’s audience is changing – The audience is shifting from a mostly facilities crowd to a strong mix of IT. In a Monday morning session on metrics, an informal show of hands among roughly 150 attendees revealed a fairly even split, which in my opinion is a good thing. (Noteboom, a longtime attendee of the show, commented as much.) In the session “Software – The Universal Link in Tomorrow’s Data Centers,” a question was asked about the rack-and-stack mentality of data center operators and how software can play a role in changing that. The short answer is that there needs to be a bridge across the gap between facilities, server deployment and applications, and software is an answer, but not the complete answer. Given that, the shift in audience participation at AFCOM is little surprise.
3) CFOs and CIOs don’t understand data center terminology – This is both a surprising statement and a disappointing one, and the point is that speaking in terminology they DO understand is key. If data center management wants to communicate power costs, executives don’t care about kilowatts; they care about dollars. And because the data center has a square footage, and executives are used to real estate costs, a data center manager should express costs in dollars per square foot, fully loaded (all equipment: CRAC/CRAH, IT, even staffing and building maintenance, etc.). Executives also want information by business unit, yet fewer than 10 percent of data centers have the ability to do chargebacks. Going further, some want custom metrics, e.g. by service tickets filled. (One of the presenters put it, in so many words: “If you can measure what you do, you can show, without being showy, how great a job you are doing and improving upon, and get you and your staff promoted.”)
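As a back-of-the-envelope illustration of that dollars-per-square-foot translation, here is a small Python calculation; every dollar figure and the square footage are hypothetical placeholders, not numbers from the conference.

```python
# Back-of-the-envelope translation of data center costs into a term CFOs
# already use: dollars per square foot. All figures are hypothetical.
annual_power_cost  = 1_200_000   # utility bill, $/yr
annual_staffing    =   900_000   # operations staff, $/yr
annual_maintenance =   450_000   # CRAC/CRAH service, building upkeep, $/yr
annual_it_depr     = 2_500_000   # IT hardware depreciation, $/yr
raised_floor_sqft  =    10_000

fully_loaded = (annual_power_cost + annual_staffing +
                annual_maintenance + annual_it_depr)
print(f"${fully_loaded / raised_floor_sqft:,.0f} per square foot per year")
# -> $505 per square foot per year
```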
4) Data Center Infrastructure Management doesn’t have to be done “all at once” – This came up a couple of times, both in a metrics session and in a roundtable DCIM panel hosted by Jennifer Koppy of IDC. The concept is “walk before you run.” In another session, “DCIM – How to Justify,” the idea of deploying via a limited pilot came up, with the bigger-picture drivers being (no surprise):
- I have solid processes in place, just need tools to help streamline
- There is a major project or initiative coming, and DCIM would be a big boon
- There is an issue looming (e.g. capacity running out) and DCIM could help solve it.
- DCIM can be justified on wage savings alone: count how many items are placed or removed per month, then multiply the time saved (for example, 30 minutes on each end) by the cost of personnel time. A back-of-the-envelope sketch follows this list.
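Here is that wage-savings sketch in Python, following the logic of the last bullet; all of the numbers are made up purely for illustration.

```python
# Hypothetical wage-savings justification for DCIM: minutes saved per
# move, times moves per month, times a loaded labor rate.
moves_per_month = 120        # devices placed or removed each month
minutes_saved   = 30 + 30    # 30 minutes saved on each end of a move
loaded_rate     = 75.0       # $/hour, fully loaded personnel cost

monthly_savings = moves_per_month * (minutes_saved / 60) * loaded_rate
print(f"${monthly_savings:,.0f}/month, ${monthly_savings * 12:,.0f}/year")
# -> $9,000/month, $108,000/year
```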
5) The DCIM industry finally gets some clarification – My colleague Mark Harris noted in a roundtable: “To talk about the DCIM market, or to say that you are a vendor in the DCIM market, is analogous to an automotive vendor saying they are in the automotive market: they may make tires, seat belts or powertrains, but they don’t make a car per se. It is similar in the DCIM market, where there are power and monitoring providers, asset management and capacity management providers, and so forth.” Understanding that not all DCIM vendors are alike is important.
Clearly an immense number of additional topics and findings were covered, enough to fill a book, but these are the key themes that stood out for me during the conference.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

2:00p
Data Center Migration: Critical Lessons Learned
Today’s reliance on technology in the business world has placed many more workloads into the modern data center platform. As your organization grows, your data center must stay agile as well.
This is where the challenge comes in. When planning a data center project, how do you know what the right direction is? What best fits your business plans? In this eBook from Raritan, we learn how organizations designed their data center changes and what they learned after the process was complete.
There are some key steps to consider during the data center planning phase. At this point, you know your organization is getting bigger and new IT needs are pushing your data center platform to the brink. To better understand these steps, the eBook outlines:
- Deciding whether to build, co-locate or retrofit
- Selecting a site
- Determining data center tier rating, size and design
- Getting ready for the move
- Ensuring a successful moving day
- Additional pearls of wisdom (from organizations that have gone through it all)
There’s really no question that the modern business model will continue to evolve. Technology is acting as the direct enabler for much of the market, and organizations are trying to keep up. Download this eBook today to learn the critical lessons around data center migration planning. Remember, as you design your data center platform, it’s critical to communicate often and ensure that all key teams are involved throughout the entire planning process. If successful, you’ll create direct positive impacts for your entire organization; unsuccessful data center migrations or changes, however, can result in serious business slowdowns.

2:30p
Data Center Jobs: Pinebreeze Technologies, Inc.
At the Data Center Jobs Board, we have a new job listing from Pinebreeze Technologies, Inc., which is seeking a Data Center Technician in Seattle, Washington.
The Data Center Technician is responsible for maximizing uptime by providing rapid response to all incident requests and troubleshooting to resolve them, providing remote access to systems when required, maintaining client hardware throughout its life cycle, installing and wire-managing network cable infrastructure, monitoring power and cooling, and maintaining complete documentation of all activities required for regulatory compliance. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

8:32p
SAP Business Apps to Run On Microsoft Azure
Microsoft and SAP announced that many SAP applications will run on Microsoft’s Azure cloud infrastructure, and that both new and existing SAP customers will be able to consume these applications in a Software-as-a-Service, pay-as-you-go manner.
German software giant SAP is a dominant presence in enterprise software, and the ability to run its applications on Azure gives Microsoft a boost in the contest among cloud service providers to capture a share of the enterprise market.
“Microsoft and SAP have a great history together, and we are committed to meeting the diverse needs of our enterprise customers,” said Scott Guthrie, executive vice president of cloud and enterprise at Microsoft. “Our expanded partnership with SAP demonstrates our continued commitment to deliver the applications and services our customers need — in their private clouds, service provider clouds, Microsoft Azure and Microsoft Office.”
Many of SAP’s applications will be certified to run on Azure by the end of the second quarter. The German company’s Business Suite, Business All-in-One solutions, Mobile Platform, Adaptive Server Enterprise and the developer edition of its HANA in-memory database are slated for Azure availability.
Besides certifying business-critical SAP applications for Azure, the partnership focuses on better integration and connectivity between SAP back-office applications and Microsoft Office, and between SAP BusinessObjects and Microsoft’s Power BI.
Microsoft’s recent Azure moves, such as offering private network connections to the cloud through multi-tenant data center providers like Equinix and TelecityGroup, are targeted squarely at enterprise cloud usage. Another example of the push for the enterprise market was Oracle making its apps available on Azure last year. While SQL Server is the dominant database on Azure, Oracle’s heavy-duty options expanded the variety. The trick is tuning these applications for multi-tenancy, something SAP and Oracle used to be reluctant to do.
SAP’s initial foray into SaaS, billed as a trial version of its on-premise applications, took place around 2008. The company has since changed tack, increasing its focus on SaaS following a battle on the Customer Relationship Management front with Salesforce.com. Today, SAP already runs on Amazon Web Services and Verizon Terremark cloud services.

9:00p
Raritan’s Intelligent Power Transfer Switch Targeted At Cloud
Raritan on Monday introduced an intelligent rack power transfer switch targeted at cloud infrastructure and racks with single-power-supply devices. Called the PX3TS, it helps equipment operate through power failures. The switch transfers from one power source to another when it senses a power loss, and does so faster than current market offerings, according to the vendor.
“Every millisecond counts when there is a power failure,” Greg More, senior product marketing manager at Raritan, said. “Our new hybrid rack transfer switch is one of the fastest in the industry — twice as fast as standard automatic transfer switches, which take from 10 to 16 milliseconds to transfer loads.”
The PX3TS, announced at the Cisco Live! conference taking place this week in San Francisco, samples the current 4,800 times per second with built-in sensors and transfers the load within 4 to 8 milliseconds of sensing a problem, according to the company. It uses a “hybrid” design, combining electromechanical relay and silicon-controlled rectifier (SCR) technologies.
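As a quick sanity check on those figures, using only the numbers quoted in this article, the sampling interval and the number of samples that fit inside each transfer window work out as follows:

```python
# At 4,800 samples per second, each sample is ~0.21 ms apart, so a 4-8 ms
# transfer spans only a few dozen samples between detecting the fault and
# completing the switch-over.
samples_per_second = 4_800
sample_interval_ms = 1_000 / samples_per_second
print(f"{sample_interval_ms:.3f} ms between samples")   # 0.208 ms

for transfer_ms in (4, 8, 10, 16):
    print(f"{transfer_ms} ms transfer ≈ "
          f"{transfer_ms / sample_interval_ms:.0f} samples")
# PX3TS claim per the article: 4-8 ms; standard ATS: 10-16 ms.
```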
Designed for cloud computing infrastructures and data center racks filled with single-power-supply devices, it adds another piece of automation to an increasingly automated data center.
The switch also provides remote monitoring and PDU-level metering to help with capacity planning decisions and more efficient use of power resources. It can also act as an extension of Raritan’s data center infrastructure management (DCIM) software, sharing the real-time power information it gathers. Raritan’s DCIM monitors the health of the rack and the data center.
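As a loose illustration of what this kind of remote monitoring involves, the sketch below polls a reading over SNMP with the pysnmp library, one common collection path for intelligent rack devices. The OID is a placeholder, and nothing here reflects Raritan’s actual interfaces, which may differ entirely; consult the device’s MIB for real metering OIDs.

```python
# Hedged sketch: polling a rack power reading over SNMP v2c with pysnmp.
# The OID below is a placeholder, not a real Raritan OID.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

POWER_OID = "1.3.6.1.4.1.99999.1.1"   # placeholder; check the device MIB

def read_power(host, community="public"):
    """Fetch one reading from the device and return the raw SNMP value."""
    error_ind, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),            # SNMP v2c
        UdpTransportTarget((host, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity(POWER_OID)),
    ))
    if error_ind or error_status:
        raise RuntimeError(str(error_ind) or error_status.prettyPrint())
    return var_binds[0][1]

# print(read_power("192.0.2.10"))   # example invocation
```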
A number of options are available, as well as a range of voltage and plug types. The PX3TS comes with an intelligent network-ready controller with display, two USB-A ports and one USB-B port to support Wi-Fi networking, webcams and cascading to share IP drops. The switch’s sensor ports support optional Raritan plug-and-play environmental sensors for monitoring conditions in the racks.