Data Center Knowledge | News and analysis for the data center industry
Tuesday, October 13th, 2015
| 12:00p |
Next-Gen Enterprise Network Doesn’t Stop at Corporate Firewall
Enterprise IT is now expected to add business value – so goes the current mantra – while the company’s customers, partners, and employees expect access to the applications and services they use anytime, anywhere, and from any device – so goes another one.
To deliver on those expectations, the enterprise network cannot be limited to its corporate users. It has to be interconnected with the world of service providers that can bring the revenue-generating applications the enterprise creates to its customers around the world and give its employees frictionless, around-the-clock connectivity to partners and clients.
Equinix, the Redwood City, California-based colocation giant, wants this next generation of interconnection to happen inside its data centers. Providing interconnection as a service is itself a revenue-generating business for the company, but Equinix also views it as a driver for future growth in its colocation business – the business of providing data center space, power, and cooling in facilities around the world.
A recent study by market research firm Execute Now, sponsored and published by Equinix, found (predictably) that enterprise IT decision makers are increasingly aware of the need for interconnection with a broad ecosystem, and that demand for interconnection services is growing. The study’s results of course serve Equinix well and, like other vendor-sponsored market studies, should be taken with a grain of salt, but as Equinix’s vertical development director Lou Najdzin put it, “it’s really hard to argue” that the need for wide-reaching interconnection is growing.
“We’re entering an interconnected era,” he said. “Companies have to collaborate in communities with other enterprises,” as enterprise business models are now interdependent.
In a way, the interdependency is similar to the financial services industry, where traders, market operators, and market data companies are interconnected and rely on each other. The interconnection ecosystem covered in Equinix’s study, however, is different in that it crosses not only the enterprise firewall but also industry borders.
Will Cloud Hurt Colo Revenues?
Many if not most of those new revenue-generating enterprise applications will be hosted in data centers operated by public cloud providers, such as Amazon Web Services or Microsoft Azure. The messaging at Amazon’s AWS re:Invent conference last week was clear: the company wants all enterprise applications, not just the new ones, to run in its cloud.
And AWS has had some success in this pursuit. General Electric and Capital One, for example, are both consolidating data centers, moving most of their applications and data to AWS. Capital One’s latest mobile banking application will run entirely on AWS, according to the company’s CIO Rob Alexander.
Enterprises reportedly aren’t satisfied with the performance and security of cloud services accessed over the public internet, however, which is where Equinix and many other data center and network service providers come in. They provide private network connectivity to public clouds – a set of services Equinix claims has been one of its fastest growing business segments.
But as corporate data center footprints shrink, replaced by cloud services, and as customers use colocation facilities to connect directly to those cloud providers, what should we expect the impact to be on colocation revenues? In Najdzin’s opinion, the impact can only be positive.
The interconnected era might actually mean enterprises that haven’t traditionally used colocation services will now consider them as they transition to models that rely on cloud services. Enabling access to enterprise networks for employees, partners, and customers anywhere at any time also requires interconnection with more service providers, which may translate into new colocation revenue.
Finally, as enterprises look to take advantage of modern Big Data analytics tools – especially the ones that operate in real time – they will want to store more data close to their points of access to those analytics tools and connect to them via private network links, Najdzin said.
Cloud Gatekeepers Abundant
Equinix, as mentioned earlier, is not alone in this space. Companies like CoreSite, Interxion, and, more recently, Digital Realty – which gained a substantial US colocation and interconnection play through its $1.9 billion acquisition of Telx – also want to be at the network intersections between enterprise customers and cloud providers. Major network carriers, such as Level 3, NTT, AT&T, Orange, and Verizon, are also competing for this business, and some of them offer data center colocation services of their own.
Which vendors enterprise customers choose to enable their new interconnection architecture will depend on a host of factors, including, to a great extent, their business goals and the state of their existing infrastructure. But from a data center provider’s perspective, the key to winning in this space will be providing access to a wider variety of service providers in more locations than competitors can offer. | | 3:00p |
New Methods of Maximizing Your Oldest Data Center Technology
Brian Hanking is the CTO of Canara.
No matter how “high tech” your data center, there is a high likelihood that the backup power system of your critical facility is completely dependent on a room full of batteries. Data center surveys have shown that anywhere from 65 percent to as much as 85 percent of unplanned downtime can be attributed to battery failure of some kind. This means your facility is almost certainly at the mercy of a room full of what is, at its core, 1800s technology.
It only takes a single unit failure within a string of lead-acid batteries to make the entire string useless, so it follows that a few bad units scattered across several strings can render the entire emergency power system useless. Even when monitored locally, battery monitoring systems typically produce a tsunami of data points, but rarely, if ever, do they have the intelligence to put the data together into useful, actionable insights. The result is that the battery room and related backup power infrastructure remain among the most opaque components of a critical facility.
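To make the series-string arithmetic concrete, the toy sketch below (a hypothetical layout, not data from any real facility) shows how just three bad units, one per string, can take every string out of service at once:

```python
# Toy illustration: a lead-acid string only works if every unit in it works,
# so a few bad units spread across strings can disable the whole battery plant.
# The layout below is hypothetical.
strings = {
    "string_1": [True, True, False, True],   # one failed unit
    "string_2": [True, False, True, True],   # one failed unit
    "string_3": [True, True, True, False],   # one failed unit
}

usable = [name for name, units in strings.items() if all(units)]
print(f"Usable strings: {usable or 'none'} -> backup power {'OK' if usable else 'LOST'}")
```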
Today there is a much better way to use this technology and dramatically improve how you understand your batteries, identify vulnerabilities, predict failures, and ultimately avoid downtime. The combination of advanced sensors and predictive analytics has proven incredibly effective at keeping important machines up and running in other fields by providing clear, actionable alerts about when interventions, maintenance, and pre-emptive repairs should be done.
So what does a more effective battery strategy look like?
Decide What You Are Going to Do With the Data
It seems obvious, but knowing what you are going to do with the data is the very first step. Are you going to have a battery expert look at the data daily to evaluate alarms and determine asset life, or are you merely going to feed basic alarm information to the building management system?
Select Your System
There are several systems on the market today, and it seems a new one pops up every week. It may seem superfluous to say, but any monitoring system has to be more reliable than the system it is monitoring. When selecting a system to suit your needs, consider the company that makes it, how long it has been in business, and how long the product itself has been in the field. Ask for references from other users.
Install the System
This step is very often overlooked, but make sure the system you choose is installed by someone who has a vested interest in that system working correctly for you. Ensure also that the contractor allows for service access to the batteries during installation. The installation step is as important as selecting the right equipment. Many good systems are being ignored today simply because bad installation practices left a very expensive investment alarming falsely until it became a nuisance and then, very quickly, a boat anchor.
Consider Your Alarm Limits
It would be much simpler if every battery had one simple set of parameters, but the reality is that these parameters vary by battery manufacturer and battery model. There are many considerations, from simple float voltage to the temperature-compensated settings of the rectifier being used. Alarms can signal issues with string voltage, unit voltage, impedance, ambient temperature, unit temperature, and ripple, and can record discharge events. These alarm limits have different priorities, ranging from lower-priority maintenance pointers to more immediate critical issues. So which are the important ones? All of them. If unsure, ask the battery manufacturer where to set the alarm limits.
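As a rough illustration of how such limits might be evaluated in software, the sketch below checks one unit’s readings against a hypothetical limit table. The parameter names, thresholds, and priorities are illustrative assumptions, not manufacturer or vendor defaults:

```python
# Minimal sketch of threshold-based alarm evaluation for one battery unit.
# Parameter names, limits, and priorities are illustrative only; real limits
# come from the battery manufacturer and the monitoring vendor.

# Each parameter maps to (low_limit, high_limit, priority); None means "no limit".
ALARM_LIMITS = {
    "unit_voltage_v":   (13.2, 14.1, "critical"),     # per-unit float voltage window
    "impedance_mohm":   (None, 5.5,  "maintenance"),  # rising impedance flags aging
    "unit_temp_c":      (None, 35.0, "critical"),     # overheating unit
    "ripple_current_a": (None, 5.0,  "maintenance"),  # excessive ripple from the rectifier
}

def evaluate_unit(readings):
    """Return (parameter, value, priority) alarms for one unit's readings."""
    alarms = []
    for param, (low, high, priority) in ALARM_LIMITS.items():
        value = readings.get(param)
        if value is None:
            continue  # sensor missing or offline
        if (low is not None and value < low) or (high is not None and value > high):
            alarms.append((param, value, priority))
    return alarms

# Example: normal voltage and temperature, but impedance above its limit
print(evaluate_unit({"unit_voltage_v": 13.5, "impedance_mohm": 6.1, "unit_temp_c": 27.0}))
# -> [('impedance_mohm', 6.1, 'maintenance')]
```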
Turn Data Into Knowledge with Predictive Analytics
This is definitely the most efficient and effective way to use your monitoring equipment. You have invested a large sum of money to purchase and install a monitoring system, so why not get the most out of it? Certainly your own staff could check for simple alarms (or, more likely, be notified via the email alarm list), but having an expert collate and review the data every day and apply predictive analytics brings many more benefits: earlier warning of possible issues, more assured uptime, elimination of false alarms, flagging of additional issues, longer useful battery life, better asset management, and assistance with warranty claims.
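To make the predictive piece concrete, here is a minimal sketch, assuming a simple linear trend and not representing any vendor’s actual models: fit a line to a unit’s impedance history and estimate when it will cross its alarm limit, so a replacement can be scheduled before the unit fails.

```python
# Minimal sketch of trend-based prediction on a unit's impedance history.
# A real analytics service would use richer models and many more signals;
# this only illustrates turning raw readings into an early warning.

def days_until_limit(history, limit_mohm):
    """history: list of (day_index, impedance_mohm) readings for one unit.
    Fits a least-squares line and returns the estimated days from the last
    reading until the limit is crossed, or None if the unit is not degrading."""
    n = len(history)
    xs = [d for d, _ in history]
    ys = [z for _, z in history]
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in history)
             / sum((x - x_mean) ** 2 for x in xs))   # mohm per day
    if slope <= 0:
        return None                                  # flat or improving
    intercept = y_mean - slope * x_mean
    crossing_day = (limit_mohm - intercept) / slope
    return max(0.0, crossing_day - xs[-1])

# Example: impedance creeping upward toward a hypothetical 5.5 mohm alarm limit
readings = [(0, 4.0), (30, 4.2), (60, 4.3), (90, 4.5), (120, 4.6)]
print(f"Estimated days until alarm limit: {days_until_limit(readings, 5.5):.0f}")
```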
Batteries and critical backup power systems can and do fail. There is no avoiding that fact, but monitoring and predictive analytics can have a dramatic effect, identifying warning signs early and eliminating problems before they occur. | | 4:07p |
Linux Foundation and ONOS Partner on Open Source SDN and NFV Networks 
This post originally appeared at The Var Guy
ONOS, a carrier-grade open source software-defined networking (SDN) operating system, received a big endorsement this week from the Linux Foundation. Starting today, the two organizations will partner to develop open source SDN and NFV software.
The ONOS project develops an SDN operating system for carrier-grade networks. Designed for high availability, high scalability, and high performance, the platform is funded and supported by a range of industry partners, including AT&T, NTT Communications, SK Telecom, China Unicom, Ciena, Cisco, Ericsson, Fujitsu, Huawei, Intel, and NEC.
The ONOS platform was open sourced in December 2014 and has seen four new releases since then.
As part of the partnership with the Linux Foundation, ONOS will “transform service providers’ infrastructure for increased monetization by achieving high capex and opex efficiencies and creating new innovative services using the power of open source SDN and NFV,” the Linux Foundation said in a statement. “The Linux Foundation will assist ONOS to organize, grow and harness the power of this global community to take ONOS and the solutions enabled by it to the next level of production readiness and drive adoption in production networks.”
The Linux Foundation sees the initiative as a way to help drive open source forward in the carrier-grade networking space. “Service providers are increasingly adopting open source software to build their networks and today are making open source and collaboration a strategic part of their business and an investment in the future,” said Jim Zemlin, executive director of the Linux Foundation. “The Linux Foundation recognizes the impact the ONOS project can have on service provider networks and will help advance ONOS to achieve its potential.”
For its part, ONOS will gain the support of a big name in the open source world. “Now is the perfect time to partner with the Linux Foundation,” said Guru Parulkar, executive director and board member at ON.Lab, which oversees ONOS. “They bring a number of resources and also provide a measure of trust and sustainability through a well-built brand that delivers extended reach to our collaborative community and accelerates innovation on an even larger scale.”
This isn’t the Linux Foundation’s only SDN venture. The organization has been closely supporting OpenDaylight, another SDN project, since April 2013. The ONOS partnership apparently doesn’t entail the same level of commitment from the Linux Foundation, which has not made ONOS one of its official collaborative projects. (OpenDaylight is a Linux Foundation collaborative project.) Still, the partnership opens new doors for an SDN platform with strong industry backing, while providing the Linux Foundation with another way to promote open source within the rapidly evolving SDN networking world.
This first ran at http://thevarguy.com/open-source-application-software-companies/101315/linux-foundation-and-onos | | 5:00p |
Sunbird Aims to Make DCIM Software Useful for CIOs and CEOs
Sunbird Software, the DCIM software company Raritan spun off before it was acquired by Legrand, has launched the latest release of its data center management software suite, adding an easy-to-understand dashboard for C-level execs and a host of new and enhanced monitoring, auditing, and analytics capabilities.
Enterprise CIOs and CEOs are increasingly interested in what’s going on with their companies’ data center infrastructure, since data centers and the IT departments that oversee them are more and more seen as strategic assets that can add business value, rather than in their traditional role as costly infrastructure needed to support internal operations and business functions.
“Sunbird DCIM enables data center resources to be used more efficiently and to be shifted to meet new and changing business demands,” Sunbird President Herman Chan said in a statement.
Since DCIM software tools track a wide variety of data center parameters, they already have the information those executives need, so meeting that demand is simply a matter of translating the often complex data sets the tools collect into easy-to-understand visualizations. This is what Sunbird’s new Enterprise Dashboard aims to do:
[Screenshot: Sunbird Enterprise Dashboard]
It presents a company’s data center infrastructure as a series of tiles. Each tile represents a data center site and displays some basic data about it, such as its current load, total power capacity, and available power capacity, as well as unusual events.
A CIO can use the configurable dashboard simply to track infrastructure health and capacity from a bird’s-eye perspective, while an operations manager can drill down deeper into each site by clicking a tile to see data on individual cabinets, individual servers, and the status of their ports, power connections, and energy usage.
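As a rough sketch of the per-site roll-up such a tile implies (the field names below are illustrative assumptions, not Sunbird’s actual data model or API), the current load can be aggregated from cabinet readings and the available capacity derived from the site total:

```python
# Hypothetical sketch of the per-site roll-up behind a dashboard tile.
# Field names are illustrative, not Sunbird's data model or API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Cabinet:
    name: str
    measured_kw: float          # current draw reported by the rack PDUs

@dataclass
class SiteTile:
    site: str
    total_capacity_kw: float    # total power the site can deliver
    cabinets: List[Cabinet] = field(default_factory=list)

    @property
    def current_load_kw(self):
        return sum(c.measured_kw for c in self.cabinets)

    @property
    def available_capacity_kw(self):
        return self.total_capacity_kw - self.current_load_kw

site = SiteTile("Site-A", total_capacity_kw=1200.0, cabinets=[
    Cabinet("A01", measured_kw=5.2),
    Cabinet("A02", measured_kw=3.9),
])
print(site.current_load_kw, site.available_capacity_kw)  # e.g. 9.1 1190.9
```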
Here is a list of other updates in the latest Sunbird DCIM software:
- A new user interface to DCIM’s Paperless Audit Mobile App makes it easier to conduct audits using iOS and Android mobile phones and tablets. Coupled with a Bluetooth handheld scanner, the audit app’s new bar code search and input capabilities make it easier to track equipment and to make updates on the go. From the data center floor, the app also can be used to initiate requests to install, move, and decommission a device. It can be used to view and complete work orders, as well as verify that an installation was done according to plan.
- Predictive Analytics — Modeling/Simulation Capabilities
- New Live Charts were added to make it easier to identify stranded power capacity in data centers, helping to avoid the expense of deploying new cabinets (see the sketch after this list). Side-by-side views of redundant cabinet PDUs provide valuable information on potential power supply failures. Additional views allow for trend analysis of circuit breaker limits and real-time loads to predict potential failures.
- Health Map provides a bird’s-eye view of entire white space and visual alerts of abnormal operation conditions that might lead to data center downtime. An inspector with Real-time Readings and alarm details enables drill down capabilities to investigate the root cause and possible remediation.
- New Specialized Polling Engine optimized for early health alerts. The new alert engine augments the existing data polling engine that is focused on highly accurate data collection at a one-second frequency.
- Enhanced support for LDAP and Active Directory
- Enhanced Import and Export makes it easier to add large amounts of data quickly and resolves data quality issues with automatic validation.
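The stranded-capacity idea behind the Live Charts item above can be illustrated with a short, hypothetical calculation, not Sunbird’s actual algorithm: power budgeted to cabinets that their measured peak load never approaches is capacity that could be reclaimed before buying new cabinets.

```python
# Hypothetical illustration of stranded power capacity: power budgeted to
# cabinets that their measured peak load never approaches. The cabinet names,
# numbers, and headroom factor are illustrative, not Sunbird's algorithm.

cabinets = [
    # (cabinet, budgeted_kw, peak_measured_kw)
    ("A01", 8.0, 4.1),
    ("A02", 8.0, 6.9),
    ("A03", 8.0, 2.3),
]

HEADROOM = 1.25  # keep 25% headroom above measured peak before calling power stranded

stranded = {
    name: round(budget - peak * HEADROOM, 2)
    for name, budget, peak in cabinets
    if budget - peak * HEADROOM > 0
}
print(stranded)                          # cabinets with reclaimable kW
print(round(sum(stranded.values()), 2))  # total kW that could be reallocated
```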
| | 5:49p |
IBM’s New Cloud Data Center in India to Serve Its Exploding Developer Population
The only country that has more developers than India today is the US, and that’s going to change by 2018, when India will have the world’s largest developer population, according to estimates by Evans Data Corp, a market research firm that tracks developer population globally.
There are about 2.75 million developers in India today, but EDC expects that number to nearly double three years from now, reaching 5.2 million developers, at which point India’s developer population will surpass the number of coders in the US.
These dynamics aren’t lost on the world’s largest cloud service providers, such as Microsoft, Amazon, and IBM. Microsoft announced the launch of three Azure cloud data centers in India in September to improve services for users located in the country; an Amazon Web Services cloud data center in India is expected to launch next year; and IBM announced the launch of its first SoftLayer cloud data center in India today.
According to Gartner’s estimates, India’s public cloud services market will reach $838 million by the end of this year – up nearly 33 percent from 2014 – including revenue from Infrastructure-as-a-Service, Business Process-as-a-Service, and Software-as-a-Service.
IBM has had a data center in Mumbai for some time, but the new facility in Chennai is the first to support its SoftLayer cloud services. The Mumbai facility has been used to support a host of IBM’s other IT outsourcing services, including non-SoftLayer cloud offerings.
Before the Chennai cloud location was launched, SoftLayer users in India had to access the services from a SoftLayer server in Singapore, their data traveling between Singapore and India over the public internet, former SoftLayer CEO Lance Crosby wrote in a 2014 blog post.
“When we add a SoftLayer data center in India, you’ll obviously access servers in that facility much more quickly, and when you want content from a server in our Singapore data center, you’ll be routed through that new data center’s network point of presence in India so that the long haul from India to Singapore will happen entirely on the private network we control and optimize,” Crosby wrote.
IBM expects thousands of startups to launch in India in the future. The company has partnered with an Indian IT trade association called NASSCOM (National Association of Software and Services Companies) to establish Techstartup.in, a networking hub it hopes will attract the country’s tech ecosystem. | | 6:30p |
VMware Expands Hybrid Cloud Platform for Development and Visibility 
This article originally appeared at The WHIR
VMware has expanded the application development and visibility capabilities of its Unified Hybrid Cloud offering, the company announced Tuesday at VMworld 2015 Europe. As several new features become available, VMware says they will provide performance, analytics, ease of management, and security benefits to customers using its hybrid cloud platform.
Google Cloud DNS, the new release of VMware vCloud Director, VMware vCloud Air Monitoring Insight and Enhanced Identity Access Management, and expanded support for VMware vSphere Integrated Containers are all being added to the Unified Hybrid Cloud offering.
“VMware’s approach is about empowering organizations to securely build, run and deliver any application across any environment,” said Bill Fathers, VMware executive vice president and general manager, Cloud Services Business Unit. “Our public cloud, vCloud Air, and global service provider ecosystem, vCloud Air Network, form the core components of our Unified Hybrid Cloud and remove that compromise from consideration.”
Google Cloud DNS will be offered on VMware vCloud Air upon its release to general availability, which is expected in the first half of 2016. The addition will provide organizations hosting web-facing applications on vCloud Air, or moving email servers to the cloud, with the reduced latency, high availability and scalability, and ease of management of the Google service.
Service providers can use VMware vCloud Director 8 to deliver self-service cloud orchestration to customers. The new release supports vSphere 6 and VMware NSX 6.1.4, and includes virtual data center templates, vSphere vApp enhancements, and OAuth support for identity sources.
The Insight service provides cloud monitoring and analytics to maintain application health and platform performance while optimizing infrastructure use. Access Management provides simple single identity, single sign-on authorization, governance and role management through token authentication. Both are expected to be available for vCloud Air Virtual Private Cloud in Q4 2015.
VMware is also increasing support for its VMware Photon OS runtime environment for Linux containers with vSphere Integrated Containers on vCloud Air. The addition provides developers with container use and orchestration flexibility and is expected to be available in 2016.
The company also announced a technology preview of Project Michigan, which extends Advanced Networking Services and Hybrid Cloud Manager to all vCloud Air services by enabling access to elastic Virtual Private Cloud OnDemand.
VMware announced it would integrate Google Cloud DNS with some of its offerings in February, as it was introducing its unified hybrid platform.
This first ran at http://www.thewhir.com/web-hosting-news/vmware-expands-hybrid-cloud-platform-for-development-and-visibility | | 9:02p |
What about Dell’s Own Huge Data Center Software Portfolio?
While there may be a lot of cross-sell opportunities for hardware and software between Dell and VMware, Dell has a substantial portfolio of data center software of its own, accumulated over the years primarily through acquisition. This portfolio overlaps to a great extent with that of VMware, the data center software giant and EMC subsidiary that Dell will gain control of if its $67 billion acquisition of EMC successfully closes.
In 2010, Dell bought Boomi, a specialist in integration between cloud services and on-premises systems. The same year, it acquired Scalent, whose software manages virtualized data center environments. Quest Software, acquired by Dell in 2012, provides IT management software. Also in 2012, the acquisition of Gale Technologies gave Dell advanced infrastructure-automation software for on-premises and hybrid clouds.
If VMware remains part of EMC and Dell, Dell will have the opportunity to become even more of a one-stop shop for a variety of data center and cloud management tools, Thomas Bittman, distinguished analyst at Gartner who covers VMware, said in an interview.
Dell has been pursuing the one-stop-shop status since at least last year, when it launched into public beta the Dell Cloud Marketplace, a platform for single-pane-of-glass shopping, purchasing, and management of public cloud services, starting with Amazon Web Services, Google Cloud Platform, and Joyent. VMware’s vCloud Air could potentially become the next cloud service available through the marketplace.
Dell’s cloud strategy has so far focused on small and mid-size customers. Adding VMware to the mix could give it a substantial play in the large enterprise market, Bittman said. But, “everything at this point is just speculation,” he added. “We really don’t know.”
“The question is whether EMC and Dell will maintain the company as it is, or whether they might sell parts of it,” Bittman said.
Presumably because of the uncertainty about VMware’s future, news of the deal sent its share price tumbling.
VMware’s stock hasn’t done particularly well this year overall, starting the year around $80 per share, getting close to $90 mid-year, but sliding down continuously since. VMware was trading at about $80 per share just last week but dropped to just below $70 per share Tuesday afternoon, a drop attributed to the acquisition news.
Citing competitive pressure from Amazon Web Services, a JMP Securities analyst downgraded VMware’s stock from “market outperform” to “market perform” Tuesday, according to MarketWatch. Pacific Crest Securities lowered its 12-month stock target for VMware, citing near-term financial challenges for the company.
The MarketWatch report cited Morgan Stanley saying the proposed deal can potentially influence VMware stock “for some time.” |