Data Center Knowledge | News and analysis for the data center industry
Wednesday, May 21st, 2014
Schneider to Integrate Vigilent Into DCIM Suite

Schneider Electric is going to integrate Vigilent’s cooling-system optimization solution into its data center infrastructure management software suite, StruxureWare for Data Centers.
This is the first time Schneider is getting into controlling mission-critical systems in data centers. Until now, its DCIM software has not gone beyond monitoring and asset management.
“Schneider Electric has been … careful [about] moving into controlling things on the IT floor,” Henrik Leerberg, global product line director for data center software for the French power distribution and automation giant, said. “We’ve been controlling buildings for years … but in the IT space, having a central piece controlling a lot of critical equipment has been kind of … foreign waters for us.”
As vendors compete in the growing DCIM market, they have been expanding capabilities to provide more and more of a holistic view of a data center’s status. They expand capabilities by acquiring smaller vendors or partnering and integrating solutions.
Schneider has done a little bit of both. In 2011, it bought intellectual property for a piece of DCIM software from a company called Viridity.
Vigilent offers “sophisticated machine-learning control software” that dynamically controls the capacity of air conditioning units based on demand, its CEO Dave Hudson said.
Before the control software is deployed for on-going management, however, Vigilent instruments a data center with sensors, installs control kits for air conditioning units and goes through an initial monitoring period. Control decisions are then based on that baseline assessment.
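Vigilent has not published the details of its control algorithm, but the monitor-then-control approach described above can be sketched as a simple feedback loop. Everything below (the `CRACUnit` class, the setpoint, the step size) is hypothetical and purely illustrative; the real product uses machine learning rather than a fixed rule.

```python
# Illustrative sketch only: a naive demand-based cooling control loop,
# not Vigilent's actual algorithm. Each cycle nudges a CRAC unit's
# output toward a temperature setpoint, with a deadband to avoid hunting.

from dataclasses import dataclass

@dataclass
class CRACUnit:
    name: str
    capacity_pct: float = 100.0   # current output, percent of maximum

SETPOINT_C = 24.0    # hypothetical target inlet temperature
STEP_PCT = 5.0       # adjustment per control cycle

def adjust(unit: CRACUnit, sensor_temp_c: float) -> float:
    """Raise output when the zone runs hot, lower it when overcooled."""
    if sensor_temp_c > SETPOINT_C:
        unit.capacity_pct = min(100.0, unit.capacity_pct + STEP_PCT)
    elif sensor_temp_c < SETPOINT_C - 1.0:   # deadband below setpoint
        unit.capacity_pct = max(20.0, unit.capacity_pct - STEP_PCT)
    return unit.capacity_pct

unit = CRACUnit("CRAC-01", capacity_pct=60.0)
print(adjust(unit, 26.1))  # hot zone: output rises to 65.0
print(adjust(unit, 22.4))  # overcooled: output drops back to 60.0
```

The initial monitoring period Vigilent describes would, in this framing, establish the baseline (the setpoint and safe operating ranges) before the loop is allowed to act.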
The deal is significant for Vigilent, a 70-plus-employee Oakland, California-based startup. Schneider’s sales and marketing muscle has the potential to get its software into thousands of data centers around the world, Hudson said.
StruxureWare comes in a variety of flavors, and the data center version is only one of them. The company sells StruxureWare in modules, so customers can pick and choose from a list of DCIM capabilities and buy only the pieces that fit their needs.
Schneider plans to complete the integration with Vigilent by November, Leerberg said. Once that is complete, the StruxureWare portfolio will gain a new module called Cooling Optimize. It will be based entirely on Vigilent’s technology but branded as StruxureWare.
To use it, however, customers will need to have at least one other base component of StruxureWare in place, Leerberg said.
Both vendors will market the integrated solution, but Vigilent will continue selling a stand-alone product as well.
A Vigilent deployment in a typical data center with about 50,000 square feet of white space costs about $250,000 on average, Hudson said. Such a deployment will usually pay for itself within two years, he estimated.
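Hudson’s payback estimate is easy to sanity-check: a two-year payback on a $250,000 deployment implies roughly $10,400 a month in savings. The figures come from the article; the function itself is just back-of-the-envelope arithmetic.

```python
def implied_monthly_savings(deployment_cost: float, payback_months: int) -> float:
    """Monthly savings required for the deployment to pay for itself."""
    return deployment_cost / payback_months

# Figures quoted in the article: $250,000 cost, two-year payback.
print(round(implied_monthly_savings(250_000, 24), 2))  # 10416.67
```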
Schneider charges StruxureWare customers differently, on a per-rack basis, and it plans to adopt the same billing scheme for the future cooling-optimization module. Leerberg said the module will cost about $300 per rack, roughly what the vendor charges for other StruxureWare modules. But the bigger the deployment, the lower the cost per rack, he added.
HP Automates Data Center for DevOps

HP has introduced an automation and cloud management solution called Orchestrated Datacenter, which helps optimize data center operations to meet infrastructure and application delivery requirements. The solution combines several existing products and features and integrates with a number of open source technologies, such as Chef, OpenStack and Hadoop, but includes some new enhancements as well.
Manoj Raisinghani, senior director of global product marketing in automation and cloud at HP, said the solution cuts out a lot of repetition and duplication, automating and orchestrating the entire data center stack. It is meant to enable IT teams to design, customize and deploy services in test and production across traditional environments, as well as public and private clouds.
“We’re aiming at the DevOps paradigm,” he said. “We’re bringing together [application development] teams and IT.”
According to IDC, data center automation and orchestration software is one of the IT industry’s fastest growing management software categories.
“IT decision makers recognize there are significant business agility improvements as well as cost savings to be had by automating and orchestrating a wide range of infrastructure, middleware and application provisioning and configuration activities,” said Mary Johnston Turner, an IDC analyst. “IT architects and CIOs want simpler, more integrated and effective orchestration solutions that deliver value right out of the box. HP’s Orchestrated Datacenter Solution addresses these priorities by combining important open source technologies such as OpenStack and Chef with HP’s full-stack automation technologies and operational best practices.”
A brand new offering that is part of the announcement is HP Enterprise Maps, an ArchiMate-certified enterprise architecture solution. “We talk about plan, build and run, and we spend a lot of time on build and run,” said Raisinghani. “On the plan side, they use things like Excel and Word and discussions – basic service-level planning.
“Enterprise Maps is brand new. It lets architects come together and look at all the business service requirements, see what’s the actual transformation and the actual map. You can take requirements and map them back to a service. Figure out which components have to be architected, which apps lend themselves well to cloud, which don’t. It brings business-level requirements to the operations level.”
Chef support out of the box
Cloud Service Automation 4.1, the latest release of HP’s solution for designing services through a self-service catalog, can now incorporate Chef recipes in the design. “Now you’re creating an entire service model,” Raisinghani said.
Out-of-the-box Chef support is an example of the company’s increased focus on open-source technologies to appeal to developers. “We hear about [application development] teams doing stuff in the open world and we are addressing this very heavily,” he said.
Chef is emerging as an open-source standard to define application and system configurations and has growing appeal in modern application development communities.
“App teams are doing their own things using Chef. They’re using it because it’s open source. Teams are using this to create basic recipes. Now comes the time for IT to put it into production. IT tools are not matching up with what Dev teams are using, so we embraced Chef. It’s policy-based automation. You can import the recipe, put all the controls and effectively from here on you don’t need to recreate it. It’s bringing together IT and AppDev guys.”
The new Operations Orchestration 10.1, an orchestration tool, has more than 5,000 operation workflows. Server Automation 10.1 has a new Automation Insight capability, which unifies reporting across products and data sources to measure status, compliance, server operations and frequency of updates.
Support for OpenStack stems from the same motivations. “We embraced OpenStack,” Raisinghani said. “It used to be a lot of manual work. Not anymore.”
There’s also out-of-the-box support for Hadoop and NoSQL databases like MongoDB, CouchDB and HBase.
Key Signs That You Need a DCIM Solution

Data centers are complex, with many distributed components. So how do you keep an eye on it all? How do you make sure that your data center is running optimally? The truth is that with information pouring in from so many different sources, it is almost impossible to parse through all the noise and figure out whether it is time to consider starting a Data Center Infrastructure Management (DCIM) project.
With all of that in mind – is this the right time? If your data center is rapidly changing and you need better insight into its activity, it might be. In this eBook from Raritan, we take a look at the critical signs that indicate you may very well need a solid DCIM solution.
As your data center evolves and changes, there is a direct need to better control resources, users and workloads. As the eBook outlines, there are eight key signs to be aware of:
- Planning a Data Center move
- Considering building a new Data Center
- Struggling to figure out where to put new servers, storage and network equipment
- Struggling to understand the status of work in the Data Center
- Paying for Data Center space and power in a colocation or hosted facility
- Relying on vendors to remotely perform maintenance activities (moves, adds, changes)
- Dispatching remote staff to repair system troubles
- Needing to increase the energy efficiency and uptime of your Data Center
Download this eBook today to really understand where you are with your data center model and how DCIM can help. Remember, the idea is to create a powerful data center platform capable of dynamic scale, efficiency and multi-tenant user controls. The only way to get to that point is through intelligent resource and data center management. DCIM not only provides a real-time representation of your data center – it helps administrators make proactive infrastructure decisions.
Sumo Logic Nets $30 Million for Global Expansion

Log management and big data analytics company Sumo Logic has raised $30 million to take its machine data analytics platform into the EMEA and Asia Pacific regions, bringing its total funds raised to $80.5 million. The latest round was led by Sequoia Capital, with participation from existing investors Greylock Partners, Sutter Hill Ventures and Accel Partners. Accel also participated in Sumo Logic’s $30 million Series C round in 2012 and is joining this round through its second $100 million big data fund.
The startup’s patent-pending Elastic Log Processing and LogReduce technologies transform machine data into actionable insights for IT operations, application management and security and compliance teams.
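Sumo Logic has not published how LogReduce works internally, but the general idea of log reduction, collapsing many near-identical log lines into a handful of patterns, can be illustrated with a toy fingerprinting script. The masking rules and sample log lines below are my own simplification, not Sumo Logic’s algorithm.

```python
# Toy illustration of log reduction: mask the variable parts of each
# line so that structurally similar lines collapse into one pattern.

import re
from collections import Counter

def fingerprint(line: str) -> str:
    """Replace hex ids and numbers so similar lines compare equal."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

logs = [
    "GET /api/user/1042 took 31ms",
    "GET /api/user/2213 took 8ms",
    "GET /api/user/977 took 112ms",
    "disk /dev/sda1 at 91% capacity",
]

patterns = Counter(fingerprint(l) for l in logs)
for pattern, count in patterns.most_common():
    print(count, pattern)
# 3 GET /api/user/<NUM> took <NUM>ms
# 1 disk /dev/sda<NUM> at <NUM>% capacity
```

Four raw lines reduce to two patterns; at production scale, millions of lines reducing to a few hundred patterns is what makes the data reviewable by a human.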
Big growth expected for big data analytics
“Machine data used to be little more than exhaust. Now it’s the fuel that powers operational efficiency, information security and insights into customer behavior,” said Pat Grady, a partner at Sequoia Capital. “We were shocked to learn that customers can start analyzing production data in minutes, which makes Sumo Logic an easy choice over on-premise alternatives.”
The four-year-old startup expects the market for machine data to grow 15 times by 2020 and has witnessed its customer base grow more than 200 percent year over year. It has been rumored that Sumo Logic will file for an IPO in the coming years. Big data companies Splunk and Tableau Software filed for IPOs last year and have since done very well.
Sumo Logic customers conduct more than 5 million searches per month. To support this continued growth, the company is investing in the addition of personnel dedicated to customer satisfaction.
To better service enterprise customers located in EMEA and Asia Pacific, it now provides its service from data centers located in Sydney, Australia and Ireland. By offering access to Sumo Logic’s machine data analytics platform in-region, customers will experience lower latency and address data sovereignty requirements.
“By every measure, we have experienced explosive growth over the last year and this lead investment from Sequoia provides another huge validation of our business and large upside,” said Sumo Logic CEO Vance Loiselle. “As we continue executing on our vision in 2014, we are seizing the opportunity to expand our talent roster and extend our business into the wide-open European and Asia Pacific markets as demand for machine data analytics soars.”
Over the past year, in addition to introducing key integrations with Akamai, Amazon Web Services CloudTrail and ServiceNow, Sumo Logic has enhanced its platform with enterprise security analytics, anomaly detection and an application library. The company’s cloud-based Log Management and Analytics service is used by companies like Netflix, McGraw-Hill and GoGo Inflight.
OneNeck IT Building $20M Colorado Data Center, Lines Up Incentives

OneNeck IT Solutions is investing $20 million in a new data center in Colorado, its first in the state. OneNeck parent company TDS acquired Englewood-based MSN Communications in October 2013, unifying it under the OneNeck IT brand.
At the time of the acquisition, the company announced intentions to build, which it has now confirmed. OneNeck has lined up construction companies and has secured incentives from Douglas County Commissioners. The project leverages a personal property tax rebate and construction fee waivers.
“In an effort to continue strategically supporting investing in projects that provide a strong economic foundation for Douglas County, we are proud to support OneNeck in this endeavor,” Jill Repella, Douglas County commissioner, said. “We believe in providing an environment where businesses can succeed. For this reason, it is a pleasure to team up with OneNeck on their data center build, a project that will certainly add to our community’s economic growth.”
The planned 35,000 square foot data center is being built on 11.2 acres of land on Concord Center Drive in Englewood. Expected completion is early 2015. OneNeck will deploy its ReliaCloud Infrastructure-as-a-Service in the new data center in addition to offering managed services. This will be its seventh data center. Its other facilities are in the Midwest and Arizona.
The overall project is designed for up to five phases totaling 160,000 square feet and it will support data center modules. The electrical system will have multiple levels of redundancy and backup. The company expects the facility to provide cabinet power densities of 5kW on average, with some capable of going up to 20kW.
“We are excited to make this additional investment in the Denver area,” says Phil LaForge, president and CEO of OneNeck. “Our data center will be built to withstand natural disasters, which means area businesses can rest assured [that] their IT infrastructure is safe, protected and always accessible in our new … data center.”
The MSN Communications acquisition gave the overall company strong footing in the area ahead of this data center build. TDS has been bolstering its cloud strategy via a mix of internal development and targeted acquisitions. It acquired OneNeck in 2011 for $95 million as a major foundation of its cloud business.
About 30 local companies will be involved in the construction of the building. Denver-area contractor JE Dunn Construction will coordinate the project with support from INVISION Architecture, Faith Technologies and North American Mechanical.
ReliaCloud is an IaaS solution built on an infrastructure stack from Cisco, EMC and VMware. It is designed for resource-intensive applications and databases.
NSA’s Hardware Tampering May Alter Global Product Flow

The long-term effect of the latest Snowden-Greenwald revelations that American cyber spies intercept exported IT gear before it leaves the U.S. to install “backdoor surveillance tools” may be “a fragmented Internet, where the promise of the next Internet is never fully realized,” as Cisco CEO John Chambers wrote in his letter to President Barack Obama earlier this month. But the most immediate effect may be a wholesale change in the way IT equipment by American vendors is shipped around the world.
Few of the products in question are actually manufactured in the U.S. IDC analyst Kuba Stolarski estimated that between 10 percent and 20 percent of servers by American vendors are built domestically. Some systems, however, are built by Taiwanese manufacturers and shipped to the U.S. before being exported, which brings the percentage of servers that pass through U.S. borders higher, but that may soon change.
“If there are concerns among some non-U.S. customers about potential tampering, then hardware vendors may find themselves having to create new distribution paths that circumvent the U.S. entirely for those customers,” Stolarski wrote in an email.
The revelations are also an opportunity for American vendors’ foreign competitors. “There is also a small chance that the FUD (fear, uncertainty and doubt) from this story could open up a window of opportunity for non-U.S. vendors who could play a ‘safe haven’ role for non-U.S. customers,” Stolarski added.
Cisco CEO calls on Obama to lead on change
Allegations of tampering by the U.S. National Security Agency earlier this month caused yet another stir among IT vendors who worry that public knowledge of the practice may erode confidence in their products and ultimately hurt their businesses.
Chambers voiced the concern in a May 15 letter to Obama, asking him to lead on a set of new rules for the government’s electronic surveillance.
Chambers sent the letter after Glenn Greenwald, the reporter at the U.K. newspaper The Guardian who has been publishing information from secret NSA documents leaked by former NSA contractor Edward Snowden, published an article based on a 2010 report by the head of the NSA’s Access and Target Development Department. The report outlines how the agency intercepts servers and networking gear en route to foreign countries, implants “backdoor surveillance tools,” repackages and reseals them before putting them back on the road.
“We simply cannot operate this way,” Chambers, CEO of the world’s largest network technology company, wrote.
Serious issue for all vendors
Chambers is not alone in his outrage. A spokesperson for Juniper, a major Cisco competitor, sent us a statement saying the company took such allegations seriously. “To be clear, we do not work with governments to purposely introduce weaknesses or vulnerabilities into our products,” the statement read.
Dell spokesman David Graves echoed the Juniper statement, saying the company did not work with governments to undermine security of its products.
“Dell systems are produced in Dell’s global network of manufacturing facilities, using open technologies acquired through a thoroughly vetted supply chain,” he wrote in an emailed statement. “We continually monitor our manufacturing process and products and take seriously any issue that might impact the integrity of our products or customer privacy.”
Are Your Enterprise Applications Ready for an OpenStack Neighborhood?

Billy Cox is the general manager of Service Assurance Management for Intel.
The typical enterprise application grew up on its own block. By that I mean the application lived in its own environment served by a dedicated pool of bare-metal servers. For the administrator, there was no question the application would get the resources it needed to meet the appropriate service-level agreements (SLAs). That was all hard wired into the system.
Today, with the advent of cloud computing and the maturation of the supporting technologies, a big change is under way. Enterprise applications are now moving into multitenant OpenStack neighborhoods where they have to contend with potentially “noisy neighbors.”
A so-called “noisy neighbor” is a virtual machine that consumes more than its fair share of the shared resources within the system and therefore degrades the performance of the other VMs running on that same system. And this is where things get even trickier: When you move an application into a shared environment, you might encounter performance problems but have no way of knowing they are due to a noisy neighbor.
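The effect is easy to model. In the toy fair-share sketch below (all names and numbers are invented), a host’s CPU is divided among its VMs in proportion to demand; when one VM demands far more than its fair share, the others’ allocations shrink even though nothing about their own workloads changed.

```python
# Toy model of the "noisy neighbor" effect on a shared host.
# Real hypervisor schedulers are far more sophisticated; this just
# shows how one greedy VM squeezes everyone else's allocation.

def allocate_cpu(capacity: float, demands: dict) -> dict:
    """Naive proportional sharing: scale every VM's demand down
    uniformly when the host is oversubscribed."""
    total = sum(demands.values())
    if total <= capacity:
        return dict(demands)  # everyone gets what they asked for
    scale = capacity / total
    return {vm: d * scale for vm, d in demands.items()}

# Quiet neighborhood: the host satisfies every VM in full.
print(allocate_cpu(100, {"app": 40, "batch": 30}))
# A noisy neighbor arrives, demanding 130 units by itself:
# "app" drops from 40 to 20 units through no fault of its own.
print(allocate_cpu(100, {"app": 40, "batch": 30, "noisy": 130}))
```

This is exactly the diagnostic problem the article describes: from inside the “app” VM, the only visible symptom is degraded performance, with no indication that a neighbor caused it.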
At Intel, we are addressing this problem with software, called Intel Data Center Manager: Service Assurance Administrator (Intel DCM: SAA). This software plugs into OpenStack to extend its functionality in a natural way. It uses OpenStack methods to determine the best placement for a particular virtual machine and the workload it runs.
Intel DCM: SAA has built-in capabilities that use Service Level Agreement (SLA) specifications as performance targets. The details on these capabilities are beyond the scope of this blog post. For now, let’s just cut to the chase: By assigning a specific SLA to an application, all stakeholders can have the confidence that—regardless of other activities in the neighborhood—the application will get the CPU resources it needs to meet the expected performance criteria.
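Intel has not published the placement logic inside DCM: SAA, so the sketch below only illustrates the general SLA-driven idea: treat the SLA as a CPU reservation and place a VM only on a host with enough unreserved headroom. The host names, capacities and `pick_host` function are all hypothetical.

```python
# Hypothetical sketch of SLA-aware placement, not Intel's algorithm:
# a VM's SLA is modeled as a hard CPU reservation, and placement
# picks the host with the most unreserved headroom.

from typing import Optional

def pick_host(hosts: dict, vm_sla_cpu: float) -> Optional[str]:
    """Return the host best able to honor the reservation, or None
    if no host can guarantee the SLA."""
    candidates = {h: cap - reserved
                  for h, (cap, reserved) in hosts.items()
                  if cap - reserved >= vm_sla_cpu}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

hosts = {          # host: (total CPU units, units already reserved)
    "node-a": (32, 28),
    "node-b": (32, 12),
}
print(pick_host(hosts, vm_sla_cpu=8))   # node-b (20 units unreserved)
print(pick_host(hosts, vm_sla_cpu=24))  # None: no host can guarantee it
```

The key property, mirroring the article’s point, is that a placed VM’s reservation survives noisy neighbors: later arrivals can only claim the unreserved remainder.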
Your workloads also want to live in a secure and trusted neighborhood. You need the assurance that the neighborhood is in fact the right neighborhood (the right hypervisor) and is running in compliance with required regulations (configured to meet requirements).
What application owners need are assurances about the performance and trust of workloads running in a multi-tenant environment. Service providers, in turn, gain the means to create new offerings with assured performance or to pack workloads more densely into existing server racks—with the confidence that they will meet their SLAs.
Looking ahead, Intel plans to add to the software with quality of service capabilities for storage and networking resources—all in the interest of maintaining good neighborly relations on OpenStack blocks.
For a closer look at the capabilities in the initial release of Intel DCM: SAA, watch this video and take a look under the hood. Let me know what you think on Twitter @IntelITCenter.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Intel Launches Performance Monitoring Software for OpenStack Clouds

Intel has added another member to what is now the Data Center Manager family of products, announcing a fine-grained server performance monitoring and management capability for OpenStack environments.
Data Center Manager started as middleware that aggregated CPU power and temperature data for server energy management software. Later, Intel added a virtual KVM gateway to the family, and this week, the company announced the addition of monitoring capabilities for hardware running OpenStack clouds.
Sold directly as a stand-alone product and through third-party vendors, the DCM Service Assurance Administrator installs an agent on each node in an OpenStack environment and reports that node’s performance, expressed in the number of instructions per second it processes. It also includes security and capacity management features.
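Intel has not publicly documented the agent’s reporting interface, so the sketch below only illustrates the shape of the idea: each node’s agent reports an instructions-per-second figure, and a central view flags nodes falling below an expected baseline. Every name and threshold here is invented for illustration.

```python
# Hypothetical roll-up of per-node performance reports, in the spirit
# of (not the actual API of) DCM Service Assurance Administrator.

EXPECTED_MIPS = 90_000  # invented baseline: millions of instructions/sec

def underperforming(node_reports: dict, threshold: float = 0.8) -> list:
    """Return nodes reporting less than `threshold` of the baseline."""
    return sorted(node for node, mips in node_reports.items()
                  if mips < EXPECTED_MIPS * threshold)

reports = {"node-1": 95_000, "node-2": 60_000, "node-3": 88_000}
print(underperforming(reports))  # ['node-2']
```

In practice this is the payoff the article describes: instead of inferring cloud health from VM-level symptoms, the administrator sees directly which hardware is not delivering expected throughput.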
The idea is to give administrators a way to see whether their cloud infrastructure performs the way they expect it to perform and to provide security assurances for enterprise customers, most of whom are reluctant to adopt OpenStack primarily because of security concerns that arise with open source software.
Intel has been a supporter of OpenStack previously and has been contributing code to the open source cloud infrastructure software project. Boyd Davis, vice president of Intel’s data center group and general manager of the data center software division, said DCM SAA was “a utility that works with OpenStack, so we’re now part of the OpenStack ecosystem ourselves.”
Davis could not say whether Intel had plans to contribute the code it has developed for SAA to the OpenStack community, but he did not rule it out. The company has to be careful about opening up such code because of the deep level of access SAA has to the inner workings of its CPUs. Intel has to make sure it does not expose intellectual property or create a security threat.
“Whenever you start to talk about accessing instrumentation or telemetry deep within the CPU, you have to be very careful that you don’t open up unintended threat surface,” Davis said.
Offering the product directly and through other vendors is the same go-to-market strategy the company has used for the other DCM offerings. DCM Energy Director, for example, is sold by hardware vendors and by data center infrastructure management companies which integrate it with their software.
Third-party vendors Intel hopes will sell SAA include converged-infrastructure suppliers as well as OpenStack distribution vendors, Davis said. At a press conference in San Francisco Wednesday, Intel announced that Redapt Systems, a Redmond, Washington-based system integrator, was the first company that had agreed to include it in its offerings.
SAA costs $200 per server per year, Davis said. The product is available now.