Data Center Knowledge | News and analysis for the data center industry
Monday, August 24th, 2015
5:33a | Intel Leads $100M Round for OpenStack Cloud Heavyweight Mirantis

Looking to help make OpenStack a more accessible management platform for a much broader range of IT organizations, Mirantis announced it will apply $100 million in additional funding from investors toward making the open source framework much simpler to deploy and manage.
This latest round of funding is led by Intel, which Mirantis president Alex Freedland said is also committing to making engineering and lab resources available to Mirantis to help make OpenStack a platform the average IT organization can easily administer. Besides Intel Capital, other investors participating in this round of funding include Goldman Sachs, August Capital, Insight Venture Partners, Ericsson, Sapphire Ventures, and WestSummit Capital.
Freedland said while OpenStack is being used in production environments by many IT organizations today, it requires a large team of IT experts to deploy and maintain it. To drive further adoption of OpenStack it is clear the cloud framework needs to be deployable at the push of a button, said Freedland.
As part of that effort Mirantis plans to help refine all the components of OpenStack in a series of stages that will ultimately enable IT organizations to fire up as many as 2,000 OpenStack nodes right out of the box.
The cost of managing IT on-premises is one of the primary reasons so many application workloads are moving to public clouds, which, Freedland said, winds up being a threat to the continued existence of internal IT organizations. Just as concerning from a vendor’s point of view: if the vast majority of application workloads move to the cloud, a handful of cloud service providers will essentially decide which new innovative products and services actually get a chance to come to market.
“We need to drive IT costs down to the point where IT organizations can compete with all the hyperscalers,” said Freedland. “If all workloads move to the hyperscalers they ultimately become a choke point for innovation.”
While Freedland said it’s important that the OpenStack community continue to partner with providers of other technologies, it’s also critically important that OpenStack be able to compete against VMware and Microsoft by offering as many native services as possible. Freedland said both Microsoft and VMware will continue to expand the scope of the services they embed in their respective platforms to provide a better lifecycle management experience.
The OpenStack community can’t be limited in the types of native services that can be embedded in the platform if it expects enterprise IT organizations to replace those proprietary management frameworks with distributions of OpenStack, said Freedland.
At the same time, it’s important for the OpenStack community to, for example, work closely with Kubernetes, Google’s open source orchestration project for Docker containers; with providers of Platform-as-a-Service environments based on Cloud Foundry; and with Mesosphere, whose advanced data center scheduling software is based on the open source Apache Mesos project.
While OpenStack has obviously made a lot of progress in a short period of time, the number of IT organizations that have the engineering resources needed to master it in its current form is clearly limited. To drive the next phase of adoption, much more elegant implementations of OpenStack will clearly be required.

3:00p | Droughts, Heat Waves, and High Data Center Cooling Costs

Jeff Klaus is General Manager of Data Center Solutions at Intel Corporation.
Summer is not all fun. In the data center, IT and facilities teams are happy to see an end to summer and the extra strain it puts on the air handlers and cooling systems. Finance teams similarly celebrate an end to the higher utility bills.
As autumn approaches, it is a good time to evaluate the effectiveness of the in-place cooling solutions, and consider changes that can cut costs and help everyone keep their cool next year.
The $$$$ of Climate Control in the Data Center
Lowering cooling costs should never start with blindly raising the set point on the thermostat in server rooms. Yes, bumping it up a single degree can translate into a noticeable reduction of cooling costs. However, temperature in the data center is never uniform. It can vary drastically throughout a room, a row, or even within a single rack.
At a minimum, undetected hot spots must be identified and addressed before making any set point changes, which calls for a tool for monitoring temperature patterns. Fortunately, data center equipment providers have built in the intelligence that supports fine-grained thermal and power monitoring. Rather than relying solely on the return-air temperature at the cooling systems, site managers can benefit from temperature and power data provided by individual servers, blades, power distribution units, air handlers, and other intelligent devices in the data center.
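In practice this collection is handled by DCIM and energy-management suites, but as a rough illustration of the idea, here is a minimal Python sketch that polls inlet temperatures from server BMCs via ipmitool and aggregates them per rack. The inventory, credentials, and rack mapping are hypothetical, and the parsing assumes ipmitool's typical pipe-delimited sensor rows.

```python
# Minimal sketch: poll inlet temperatures from server BMCs with ipmitool and
# aggregate per rack. Inventory, credentials, and rack labels are hypothetical.
import subprocess
from collections import defaultdict
from statistics import mean
from typing import Optional

INVENTORY = {  # BMC address -> rack label (hypothetical)
    "10.0.1.11": "rack-A1",
    "10.0.1.12": "rack-A1",
    "10.0.2.21": "rack-B3",
}

def read_inlet_temp(bmc_host: str) -> Optional[float]:
    """Return the first temperature reading (deg C) the BMC reports, if any."""
    out = subprocess.run(
        ["ipmitool", "-H", bmc_host, "-U", "admin", "-P", "secret",
         "sdr", "type", "Temperature"],
        capture_output=True, text=True, timeout=10,
    ).stdout
    for line in out.splitlines():
        # Typical row: "Inlet Temp | 04h | ok | 7.1 | 24 degrees C"
        fields = [f.strip() for f in line.split("|")]
        if len(fields) == 5 and "degrees C" in fields[4]:
            return float(fields[4].split()[0])
    return None

per_rack = defaultdict(list)
for host, rack in INVENTORY.items():
    temp = read_inlet_temp(host)
    if temp is not None:
        per_rack[rack].append(temp)

for rack, temps in sorted(per_rack.items()):
    print(f"{rack}: mean={mean(temps):.1f}C min={min(temps):.1f}C max={max(temps):.1f}C")
```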
Continual Monitoring – Automatically
An energy management solution can automate the ongoing collection and aggregation of these data points. IT and facilities teams can then view and study real-time thermal conditions as well as long-term patterns.
The best-in-class energy management solutions feed a steady stream of this data to a console, with various display options including at-a-glance thermal maps. Once hot spots are exposed, adjustments can be made to cooling systems or rack densities for immediate improvements in temperature consistency and cooling efficiency.
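Building on that, exposing hot spots from the aggregated data can be as simple as flagging any rack whose hottest reading sits well above the room-wide mean; the readings and the five-degree margin below are hypothetical and would be tuned per site.

```python
# Toy hot-spot check over per-rack maxima (deg C, hypothetical readings).
from statistics import mean

rack_max_temps = {"rack-A1": 24.5, "rack-A2": 25.0, "rack-B3": 33.5}
room_mean = mean(rack_max_temps.values())
HOTSPOT_MARGIN = 5.0  # flag racks more than 5 C above the room mean

hotspots = {r: t for r, t in rack_max_temps.items() if t - room_mean > HOTSPOT_MARGIN}
print(f"room mean {room_mean:.1f}C; hot spots: {hotspots}")
# -> room mean 27.7C; hot spots: {'rack-B3': 33.5}
```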
Over time, monitoring and analyzing logged temperature data can provide insights about the correlations between cooling costs and outside weather and environmental conditions.
IT can also monitor the ongoing efficiency of cooling systems to better schedule preventative maintenance. In data centers that employ water-based cooling, this ongoing oversight of cooling efficiency can directly lower water consumption and help comply with any restrictions imposed during droughts.
Mitigating Cooling System Failures
The most energy-efficient data centers constantly check for under- or over-cooled spots, since these are signs of flaws in the cooling plan. A sudden increase in temperature can also alert the data center team to a failing cooling system.
The cooling plan should include steps that will be taken to mitigate any such cooling system failures. These same steps can apply during periods of power outages or restrictions, or any time that the temperature in the data center starts to increase.
The mitigation plan might include bringing in outside air (via economizers, or other ventilation), setting up fans (if power is available), or taking non-essential systems and services offline to reduce demand. Thermal monitoring capabilities and analysis of temperature correlations make it possible to define a mitigation plan that will optimize results under any conditions.
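As a sketch of what such a plan might look like in code, the escalation ladder below maps rising room temperature to the mitigation steps listed above; the trigger temperatures and the wording of the actions are illustrative assumptions, not a standard.

```python
# Hypothetical mitigation ladder: each action fires once its trigger
# temperature (deg C) is crossed.
MITIGATION_LADDER = [
    (27.0, "bring in outside air via economizers or other ventilation"),
    (30.0, "set up auxiliary fans (if power is available)"),
    (33.0, "take non-essential systems and services offline to reduce demand"),
]

def mitigation_actions(room_temp_c: float) -> list:
    """Return every action whose trigger has been crossed, in order."""
    return [action for trigger, action in MITIGATION_LADDER if room_temp_c >= trigger]

print(mitigation_actions(31.2))
# -> ['bring in outside air via economizers or other ventilation',
#     'set up auxiliary fans (if power is available)']
```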
The combination of ongoing monitoring and a detailed failure plan puts IT and facilities teams in a position of readiness, and also promotes proactive maintenance practices that minimize server failures due to hot spots and cooling system failures.
Go Ahead – Turn Up the Temperature
With an understanding of the temperature patterns and correlations, IT can now confidently raise the set point in the data center. While most data centers still operate at 70 degrees Fahrenheit or lower, a growing number of sites are pushing set points up to 80, 82, or even higher. Each degree of increase trims roughly two percent from the cooling portion of the power bill, which adds up significantly year after year.
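Taking that two-percent-per-degree rule of thumb at face value, the arithmetic is easy to sketch; the $500,000 annual cooling bill below is a hypothetical figure, and the linear model ignores equipment limits and local climate.

```python
# Worked example of the ~2% cooling savings per degree F of set-point increase.
annual_cooling_cost = 500_000.0  # USD per year, hypothetical
savings_per_degree = 0.02        # per the article's rule of thumb

for raise_f in (1, 5, 10):       # e.g., 70F raised to 71F, 75F, or 80F
    saved = annual_cooling_cost * savings_per_degree * raise_f
    print(f"+{raise_f}F set point: ~${saved:,.0f} saved per year")
# +1F: ~$10,000   +5F: ~$50,000   +10F: ~$100,000
```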
There is no magic temperature setting rule, and every data center team must still consider the local climate conditions, budget, and equipment requirements. Today’s servers can definitely handle higher operating temperatures, but some storage systems and legacy tape backup systems are more sensitive to temperature and humidity. Adjusting room layouts and consolidating sensitive equipment in cool zones can still allow higher ambient temperatures and, therefore, cooling reductions in major portions of any data center.
The same monitoring solutions that automatically collect and aggregate temperature data can support threshold definitions and automatic alerts and actions to protect equipment and maintain conditions that maximize the lifespan and reliability of critical equipment.
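A threshold scheme along those lines might look like the sketch below, with tighter limits for the temperature-sensitive storage and tape gear mentioned above; the device classes and limits are illustrative assumptions.

```python
# Hypothetical per-class temperature limits (deg C) and a simple alert check.
THRESHOLDS_C = {"server": 35.0, "storage": 30.0, "tape": 27.0}

def check(device: str, device_class: str, temp_c: float) -> None:
    limit = THRESHOLDS_C[device_class]
    if temp_c > limit:
        # A real energy-management console would page staff or trigger an action.
        print(f"ALERT: {device} ({device_class}) at {temp_c:.1f}C exceeds {limit:.1f}C")

check("tape-lib-01", "tape", 28.4)
# -> ALERT: tape-lib-01 (tape) at 28.4C exceeds 27.0C
```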
Be Cool
By adopting the temperature monitoring practices of the world’s largest and most energy efficient data centers, even a small data center can achieve cost savings and improved uptimes. Before next summer has IT and facilities sweating about cooling solutions, consider a free trial or pilot deployment of a real-time temperature monitoring solution. Better yet, look at a solution that combines temperature and energy monitoring. Very cool.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

3:30p | LogicMonitor Gives C-Suite Execs a View of the Data Center They Can Understand

While there has never been a shortage of tools to help IT operations teams figure out what’s occurring inside the data center, only recently have stakeholders outside the data center started taking an interest. Now that it’s apparent the data center is the economic engine that makes the digital economy run, C-level executives want to know more about its inner workings.
To provide that capability, LogicMonitor, a provider of Software-as-a-Service applications for data center monitoring, has added an Executive Dashboards Suite that makes it easier for CIOs, CTOs, and other C-level executives to visualize what’s happening inside the data center in real time.
LogicMonitor CEO Kevin McGibben said that while the IT team needs access to more granular data, C-level executives want a view of the same data that enables them to discern overall performance and trends.
While not everyone on the IT operations team might welcome that additional level of scrutiny, McGibben said, providing more transparency into the data center ultimately makes it easier to get executive support for upgrades. Rather than having to make the case for those upgrades without any hard data to justify the investment, the dashboards present data center performance in a way that lets C-level executives easily correlate upgrades with events that have a material impact on the business.
“A lot of times the biggest issue that IT people have when trying to get funding is simply accessing the data,” said McGibben. “With LogicMonitor they can track what is happening on a per-minute basis.”
[Screenshot: the executive dashboard, courtesy of LogicMonitor]
Specifically, the suite provides widgets that track network and storage utilization, metrics such as CPU status relative to peak performance, and an overview of activities occurring in the network operations center.
Because the underlying dashboards are powered by a SaaS application, McGibben said it’s not unusual to find customers that have configured them for large displays hanging on a wall, where every employee in the company can view them.
Of course, the degree to which the C-suite wants access to that level of information about what is occurring inside the data center depends on just how strategic IT is to the business. But in general, the less mystery there is surrounding the data center these days, the more likely the business is to appreciate not only the value of that investment but also the expertise of the people who keep it running.

4:00p | Red Hat Certifies Midokura’s SDN for OpenStack

Looking to keep its options open when it comes to networking in virtualization environments, Red Hat has certified Midokura Enterprise MidoNet network virtualization software for use on the Red Hat Enterprise Linux OpenStack Platform Version 7. The announcement comes ahead of the OpenStack Silicon Valley event kicking off later this week.
Adam Johnson, VP of business for Midokura, said the company is seeing a lot of interest in using RHEL to run OpenStack. But given ongoing concerns about the quality of Neutron, the network virtualization software that comes with OpenStack, interest in alternative forms of network virtualization is running equally high, he said.
In general, Johnson said that as IT organizations begin to find themselves managing multiple clouds and making use of technologies such as Docker containers and software-defined networks, they find that network virtualization is no longer optional as the IT environment as a whole becomes more distributed.
“We see microservices and SDNs driving adoption,” said Johnson. “IT operations teams discover a need to upgrade their operations.”
As a member of the Red Hat Connect for Technology Partners program, Midokura has an existing relationship with Red Hat. The certification, however, takes that relationship a level deeper at a time when IT organizations are looking to expand usage of OpenStack in production environments, said Johnson.
Like it or not, the rise of microservices is going to drive more IT organizations to manage their data center environments at higher levels of abstraction. The days when IT organizations could keep pace with changes and updates to the environment using command line interfaces are coming to an end.
Network virtualization provides an overlay across physical networks that SDN software then invokes to automate the management of functions across multiple networks. In effect, SDNs turn those networks into a set of programmable resources that IT organizations can access, often using the same management frameworks they use to manage servers and storage.
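The article stops at the concept, but as a rough sketch of what "programmable resources" means in practice, the snippet below creates a network and subnet through the OpenStack networking API using the openstacksdk Python client. It works the same against Neutron or a certified plugin such as MidoNet; the cloud name, network name, and CIDR are arbitrary examples.

```python
# Minimal sketch: drive the network as a programmable resource via openstacksdk.
import openstack

conn = openstack.connect(cloud="mycloud")  # credentials come from clouds.yaml

net = conn.network.create_network(name="app-tier")
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="app-tier-subnet",
    ip_version=4,
    cidr="192.168.10.0/24",
)
print(f"created {net.name} ({net.id}) with subnet {subnet.cidr}")
```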
The degree to which those frameworks will consolidate job functions across the data center remains to be seen. But it’s already apparent that demand for IT administrators who have only one primary skill is declining. In contrast, there is considerable demand for IT administrators with programming skills that can be applied across multiple types and classes of IT infrastructure.
Naturally, it will take some time before higher levels of abstraction begin to drive consolidation of IT management functions inside the data center. But the process almost always starts with some form of network virtualization overlay on which the rest of the modern IT management stack now depends.

5:41p | New Dell Division to Sell Hyperscale-Style Data Center Gear to Not-Quite-Hyperscale Companies

This article originally appeared at The WHIR.

Dell will introduce a product line later this fall targeted specifically at customers that want hyperscale-style data center infrastructure but aren’t hyperscale operators themselves. Called Datacenter Scalable Solutions (DSS), the division will sit within Dell’s Enterprise Solutions organization and will address the market of businesses just below the hyperscale space in size.
Dell said this segment is growing three times faster than the traditional x86 server market, and that these types of businesses, including telecommunications companies, hosting providers, oil and gas firms, and research organizations, require “semi-custom solutions.” Dell DSS will design new systems for this market, while also offering supply chain optimizations and custom configurations.
“Dell was the first major server vendor that recognized the unique requirements of the hyperscale market when it introduced DCS in 2007. They are now taking the best practices and learnings from their DCS business and addressing the distinct needs of the space just below the top tier hyperscalers,” said Matt Eastwood, senior vice president, IDC. “As a private company, Dell continues to make long-term strategic customer centric investments and they are proving their new operating model is dynamic, thoughtful and unique in the industry.”
Last year, Dell competitor HP partnered with Foxconn to develop servers targeted at the huge cloud builders, according to a report by the WHIR’s sister site Data Center Knowledge. A Q1 2014 report by IDC showed that HP led the x86 market with 29.6 percent revenue share, while Dell had 22 percent.
“Dell Datacenter Scalable Solutions is a prime example of how Dell, as a private company, is able to be more nimble, make faster decisions and – most importantly – drive innovation on behalf of its customers,” said Ashley Gorakhpurwalla, vice president and general manager, Server Solutions, Dell. “While others in the IT industry have been focused on marketing hype or reducing capex costs only, we’ve created a new operating model that is centered around flexibility. DSS is about understanding customers’ goals and enabling them to achieve those objectives by giving them purpose-built solutions that are designed when and how they want it.”
Dell is also providing a range of financing options through its Dell Financial Services arm.

Last month, Dell added support for Windows Azure Pack and enhanced support for Microsoft Azure to its cloud management software.
This first ran at http://www.thewhir.com/web-hosting-news/dell-forms-new-line-of-business-for-web-hosts-telecommunications-service-providers

8:57p | Los Angeles Explosion Affects Data Center Cooling Systems, Connectivity

In addition to interrupting connectivity for customers using Level 3’s network services, last week’s basement explosion in downtown Los Angeles briefly left a CoreSite data center’s cooling system without power, causing temperatures in the facility, located within the gigantic One Wilshire carrier hotel, to exceed the levels the data center provider guarantees to its customers.
The guarantee that customers’ IT equipment will be supplied with air of certain temperature and humidity levels is one of the most important services data center providers offer, on par with the guarantees of uninterrupted supply of power and physical security. Loss of cooling capacity for an extended period of time can lead to failure of customer equipment and prolonged outages.
“CoreSite’s LA1 data center’s customers’ critical power was not affected by the loss of utility power, however, the data center did experience an associated mechanical outage that resulted in increased temperatures exceeding our SLAs (Service Level Agreements),” CoreSite representatives said in a statement emailed to Data Center Knowledge. “Corrective action was taken, and temperatures were restored to normal operating levels.”
Multiple companies operating data centers in the area affected by the blast on Thursday evening reported loss of utility power and said they had switched to backup generators without power interruption to their customers. The biggest impact of the explosion felt by data center customers seems to have been interruption of connectivity on Level 3’s network.
The explosion, whose cause is still under investigation, took place in the basement of a 22-story commercial building at 811 West Wilshire Blvd. around 10 pm on August 20, according to a statement by the Los Angeles Fire Department. Firefighters discovered smoke, blast damage, and a large volume of water from damaged pipes when they arrived on the scene.
Four people suffered minor injuries, and two of them were taken to a hospital, one with a headache, and the other with back pain.
The blast damaged an on-site power station and caused power outages for 12 buildings in the area. The Los Angeles Department of Water and Power restored power to all affected buildings other than 811 Wilshire by Friday evening, according to an LADWP statement.
Level 3’s downtown Los Angeles facility is at 818 W. 7th St., one block away from the site of the explosion. According to a statement by Internap, a data center service provider that offers network connectivity in the area out of Level 3’s facility, connectivity was interrupted because the facility did not successfully switch to generator power when it lost utility service.
“Level 3 experienced a delay in restoring power via backup generators and, as a result, some customers of Internap’s network services were impacted by the outage,” Internap’s statement read. “The site has returned to full utility power.”
A Level 3 spokesperson confirmed it had experienced a network outage in Los Angeles because of a utility power outage but did not provide any details. In an email Monday afternoon, the spokesperson said Level 3’s network was operational and “performing as it was designed to do.”
One of the Internap customers that experienced an outage as a result of the explosion was LogMeIn, a Software-as-a-Service company. Craig VerColen, a LogMeIn spokesman, told us the company contracted for data center services in an Equinix facility in Los Angeles through Internap. Internap and Equinix have a partnership whereby Internap provides connectivity services to Equinix customers.
The Equinix data center downtown, called LA2, was in one of the buildings that lost power Thursday night, an Equinix spokesperson said via email. It switched to generator power without impacting customers, he said.

10:01p | Switch Joins Obama’s Business Climate Pledge, Plans 100MW Solar Project in Nevada

Switch, the operator of a large data center campus in Las Vegas called SuperNap, has joined the second round of pledges by private-sector companies to invest millions of dollars in combating climate change under a White House-led initiative.
President Barack Obama’s administration unveiled the initiative, called the American Business Act on Climate Pledge, in July and announced that 13 of the country’s largest companies, including Microsoft, Apple, and Google, had made pledges. Google’s pledge, for example, included a commitment to powering its operations, including data centers, with 100 percent renewable energy. Microsoft’s and Apple’s commitments were along similar lines.
The companies also promised things like offsetting emissions from business travel, reducing water consumption, and investing in renewable energy development. Some of the biggest investments in the sector are in renewable energy generation projects associated with data center construction projects by the likes of Apple, Google, Microsoft, and Facebook.
Data center providers like Switch generally do not invest in renewable energy themselves. One of the exceptions is QTS, which has built a massive solar farm to offset emissions associated with the power consumption of its New Jersey data center.
Some data center providers will also help their customers source renewable energy. San Francisco-based Digital Realty, for example, offers its customers the option to use renewable energy at any of its data centers around the world while paying regular energy rates for the first year. When the year is up, they can either switch to regular power or pay a premium to continue using renewables.
The tech sector in general has stepped up its investment in renewable energy, according to a recent study by the US Department of Energy.
When the White House announced the pledge, it also said it would announce a second round of companies in the fall. Switch is the first colocation company to come on board; it announced its participation today at the Clean Energy Summit in Las Vegas, where Obama is expected to deliver a keynote address in the evening.
Switch’s pledge is to power its data centers with 100 percent renewable energy by building renewable energy generation facilities in Nevada, according to a company statement.
The first such project is a 100-megawatt solar farm in the state, expected to start construction in October. The data center provider is partnering on the project with NV Energy, the public utility that serves Las Vegas, Reno, and the surrounding areas, and First Solar, a manufacturer of photovoltaic panels.
Switch also pledged to work on reducing water use by partnering “for new water technologies” and by getting the new data center campus it is building in Reno to rely 100 percent on recycled water from municipalities and local water utilities in northern Nevada. The company has signed eBay as the anchor tenant at SuperNap Reno.
As it builds in Reno, Switch also continues to expand in Las Vegas. Earlier this month, the company announced the opening of the ninth SuperNap facility in its hometown.