Data Center Knowledge | News and analysis for the data center industry
Friday, January 24th, 2014
1:00p |
Schneider Electric, HP Team To Bring Facilities and IT Together
Schneider Electric has partnered with HP on solutions featuring its DCIM offering, shown here being demonstrated at Data Center World. (Photo: Colleen Miller)
Bridging the silos between facility and data center managers continues to be a top concern in the Data Center Infrastructure Management (DCIM) space. This is the impetus behind today’s announcement that Schneider Electric will collaborate with HP to deliver a converged data center and IT management platform. The joint solution features HP Converged Management Consulting Services (CMCS) combined with Schneider Electric’s DCIM solution, StruxureWare for Data Centers.
The converged data center and IT management platform will link DCIM capabilities with IT service management (ITSM), effectively connecting physical infrastructure assets to business processes.
“By collaborating with HP to provide a holistic approach to managing IT business process assets and workloads, we are continuing to bridge the gap between IT and facilities,” said Soeren Jensen, vice president, Enterprise Management and Software for Schneider Electric. “Enabling IT service providers to instantly view the impact of any changes in their data center, as well as the operational costs associated with these changes is an important step towards improving energy efficiency in data centers and IT.”
The platform provides consistent views for both facilities and IT professionals, allowing them to reconcile asset data between DCIM and ITSM and improving holistic business impact analysis. Schneider Electric provides an in-depth look into the physical infrastructure, while HP provides asset management under one holistic view. Together, the combined technologies promise energy savings and more efficient management of customers’ IT services and assets.
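To make the reconciliation idea concrete, here is a minimal sketch of what matching DCIM asset records against ITSM configuration items on a shared key could look like. The field names and matching logic are illustrative assumptions, not the StruxureWare or UCMDB API.

```python
def reconcile_assets(dcim_assets, cmdb_items, key="serial_number"):
    """Pair DCIM inventory records with CMDB configuration items by a shared key.

    Returns (matched pairs, records only in DCIM, items only in the CMDB),
    so discrepancies can be flagged for review. Inputs are lists of dicts;
    the field names are hypothetical.
    """
    cmdb_by_key = {item[key]: item for item in cmdb_items}
    matched, dcim_only = [], []
    for asset in dcim_assets:
        item = cmdb_by_key.pop(asset.get(key), None)
        if item is not None:
            matched.append((asset, item))
        else:
            dcim_only.append(asset)
    return matched, dcim_only, list(cmdb_by_key.values())

# Example: a rack PDU tracked in DCIM but absent from the CMDB lands in dcim_only.
matched, dcim_only, cmdb_only = reconcile_assets(
    [{"serial_number": "PDU-001", "rack": "A12", "power_kw": 4.2}],
    [{"serial_number": "SRV-042", "service": "billing"}],
)
```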
“DCIM provides a holistic view of the data center to enable better operational efficiency and capacity planning; however, many organizations lack the unique mix of internal IT, facility and service management expertise needed to make the most of DCIM’s benefits,” said Rick Einhorn, vice president, Technology Services, Datacenter Consulting, HP. “HP Converged Management Consulting Services help customers capitalize on Schneider Electric’s top DCIM solution, StruxureWare, by delivering deep expertise in ITSM, IT infrastructure, and facilities, as well as a framework to connect business goals, systems and data center processes.”
Big Moves For Schneider Electric
Schneider Electric has been keeping busy. The company has been expanding its capabilities, its partnerships, and the scope of what it provides, making for an eventful couple of months.
Last month, the company acquired AST Modular, beefing up its position in the market for pre-fabricated data centers. The AST deal reflects Schneider’s growing focus on modular solutions, coming just three months after it rolled out a new line of 15 modular enclosures and reference designs. With AST, Schneider added a major global player in the modular market.
The company also announced a collaboration with Intel on DCM, building Intel’s recent Virtual Gateway technology into a new StruxureWare product module that provides full server lifecycle access and power cycling for remote management. It expanded its global service offerings as well, moving deeper into services and concerning itself with the entire lifecycle of the data center. Then there’s the potential partnership with Sears/Ubiquity, which is working with Schneider Electric on a proposal to build and operate mission-critical facilities in a number of markets around the country, converting former Sears auto center stores into data centers.
Integration Details
Schneider Electric and HP will integrate StruxureWare for Data Centers with HP’s Universal Configuration Management Database (UCMDB), enabling the DCIM and ITSM platforms to communicate and reconcile asset data. HP will also map the functions and features of StruxureWare for Data Centers into its proprietary Converged Management Consulting Framework. This mapping will allow consultants to make informed recommendations about deploying the solution into customers’ data centers and to determine the best way to integrate the overall ITSM and DCIM systems within the environment. | 1:30p |
How to Prepare Private Cloud Services with a Hybrid Cloud Future in Mind
Adam Fore is Director of Marketing for Virtualization and Cloud Solutions at NetApp, a network storage company that creates innovative storage and data management solutions that deliver cost efficiency and accelerate business breakthroughs.
The value of hybrid cloud environments for enterprises continues to grow as cloud services—both private and public—become more and more refined with each passing quarter. Although perceived risks remain, new management tools are quickly closing the gap, allowing data and applications to be managed seamlessly in blended IT environments.
The types of hybrid models available are also expanding. They include familiar models in which applications and operations span public and private environments, such as cloud-based backup, DR, and cloud bursting. They also include new hybrid data-center architectures that connect public compute with private storage. The latter allows businesses to reap the cost benefits, elasticity, and responsiveness of the public compute cloud while maintaining control of their data and continuing to use the same data management services and tools they use internally.
This growth in the use of cloud services requires IT managers to re-evaluate their role.
Instead of building infrastructure to support applications, IT managers need to use their expertise to assess cloud services and deploy applications to the best-suited resource, whether public cloud, private cloud, or in-house infrastructure, becoming service brokers instead of infrastructure builders. This is an important change and an opportunity to further elevate the value of IT.
Anticipating these changes in key areas can accelerate your transition to this model. Start preparing your organization and staff, the business users of IT, and the IT infrastructure itself.
There are three critical areas that companies need to prepare for a successful transition to the hybrid model: the business, the IT organization, and the IT infrastructure.
Prepping the Business
Many companies’ internal organizations are looking externally to cloud-based services that they feel are more responsive to their IT needs. A first step is to change the way businesses look at IT. It’s important for IT departments to be seen as the “go-to” resource for any IT-related matter.
It’s important to build the larger organization’s trust in IT’s ability to select the right service without significantly impacting speed, cost, or capabilities. This can be achieved by providing deep expertise on cloud service options with a balanced perspective and then educating the business on the risks and implications of choosing the wrong service provider.
Also, by factoring in the organization’s existing policies (regulatory, SLAs, and so on) when choosing a new cloud service, application, or management tool, IT can better maintain consistent services without impeding an organization’s workflow.
As brokers, IT departments will add value to the operational performance of the business, providing assurances that selected cloud services or hybrid services meet the company requirements and ensuring that services employed aren’t putting the company at risk.
By becoming the intermediary between the business and third-party service providers, IT departments can be the gatekeepers of the data and applications. This will enable better management across hybrid environments and eliminate shadow IT, so business units don’t try to provision IT resources on their own.
Prepping Your Staff and the IT Organization
Traditionally, IT has been organized in a siloed fashion. However, the advent of virtualization, in which server, network, and storage are more tightly integrated, has started to collapse this model. Cloud services require even tighter coordination because infrastructure is highly automated. Infrastructure management needs to be centralized and horizontal. New skills are also required to consult with the business and to broker cloud services.
To minimize disruption during the transition, it is critical to restructure the organization and elevate roles. By doing so, people won’t see outsourcing as a loss of control or a loss of responsibilities. As brokers in a horizontal model, they will be accountable for managing the resources, which elevates the roles of IT staff.
It is important that IT staff be trained beyond traditional technology. The IT team requires new skills when they move from being service providers to being brokers of services. In their new roles, IT staff must understand how the business operates and how it can be affected by such things as the health of service providers, contract terms, sensitivity of data, and so on.
A stronger business acumen will help to limit risk when making critical IT decisions.
Prepping the IT Infrastructure
Once the macro (business and IT organization) changes have been implemented, IT workers can turn their attention to the micro changes: prepping the IT infrastructure.
IT workers need to develop a framework that identifies which workloads and applications can move to the public cloud and which need to be kept in a private environment, based on cost, security, risk profile, governance, compliance, and regulatory measures. Additionally, understanding the larger business objectives will aid in determining the technological requirements around portability and management that are necessary to make the move.
How and where workloads and applications can be moved can be determined by evaluating their strategic value and their dynamic nature, as the chart below illustrates.

The X-axis shows the strategic value of applications and workloads: the further out they are, the more strategic they are. Usually, the more strategic they are, the more likely they are to be kept in a private cloud. The Y-axis illustrates operational flexibility: the further out, the greater the flexibility needed, which lends itself to the public cloud.
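As a rough illustration of how the two axes might drive a placement decision, here is a minimal sketch. The 1-to-5 scoring, the thresholds, and the hybrid case are illustrative assumptions layered on the chart’s logic, not part of the article’s framework.

```python
def placement_hint(strategic_value, flexibility_needed):
    """Suggest a placement from the two chart axes, each scored 1 (low) to 5 (high).

    High strategic value pulls a workload toward the private cloud; a high
    need for operational flexibility pulls it toward the public cloud. When
    both are high, a hybrid split (private data, public compute) is one option.
    """
    if strategic_value >= 4 and flexibility_needed >= 4:
        return "hybrid: keep data private, use public cloud compute"
    if strategic_value >= 4:
        return "private cloud"
    if flexibility_needed >= 4:
        return "public cloud"
    return "either; let cost and existing policies decide"

print(placement_hint(strategic_value=5, flexibility_needed=2))  # -> private cloud
```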
It is important to build a blended IT environment that has the flexibility to allow data to move freely between different resources. When it comes to data management tools, a lot of options are available, since more vendors offer hybrid management solutions (for example, the Data ONTAP operating system, OpenStack, and CloudStack).
Whatever solution the business chooses, it’s critical to buy into a platform that has a strong commitment to building hybrid cloud management capabilities.
Ultimately, as hybrid cloud services evolve and their benefits to businesses become more pronounced, the pressure on IT workers from the C-Suite to make the transition will grow. By taking the necessary steps now to prepare the IT organization and IT infrastructure, businesses will be ready to extend their IT environments with a hybrid-cloud future in mind.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | 2:30p |
Terremark Expands by 4MW in Santa Clara with PowerHouse
An Active Power PowerHouse unit providing containerized power infrastructure for a modular data center. (Photo: Active Power)
Verizon Terremark had a problem. It wanted to expand its data center in Silicon Valley, but was short on space and backup power. The solution: Add four megawatts of power protection by deploying containerized PowerHouse systems from Active Power.
Due to the lack of real estate on its Santa Clara, Calif. campus, the managed hosting and cloud service provider wound up deploying the new units on the roof. Each PowerHouse modular power system includes an uninterruptible power supply (UPS) system, switchgear and monitoring software.
“As we update our infrastructure, we are continually looking for ways to make our data centers more efficient,” said Ben Stewart, senior vice president, Facility Engineering, at Verizon Terremark. “PowerHouse modular units are energy and space efficient, which give us flexibility to manage power consumption to best serve the needs of our clients and limit energy and equipment waste.”
Flywheel UPS Specialists
Active Power makes UPS units that use a flywheel, a spinning cylinder that stores kinetic energy and converts it to electricity when grid power is interrupted. In most data centers, the UPS system instead draws power from a bank of large batteries to provide “ride-through” electricity that keeps servers online until the diesel generators can start up and begin powering the facility.
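For a rough sense of how long a flywheel can carry the load before the generators take over, the usable stored kinetic energy divided by the protected load gives the ride-through time. This is a back-of-the-envelope sketch with made-up numbers, not Active Power’s specifications.

```python
import math

def ride_through_seconds(mass_kg, radius_m, rpm, load_kw, usable_fraction=0.9):
    """Estimate ride-through time for a solid-cylinder flywheel UPS.

    Stored energy is E = 0.5 * I * w^2 with I = 0.5 * m * r^2 for a solid
    cylinder. Only part of that energy is usable before the wheel drops below
    its minimum operating speed; usable_fraction lumps that and conversion
    losses together (an illustrative value).
    """
    inertia = 0.5 * mass_kg * radius_m ** 2        # kg*m^2
    omega = rpm * 2 * math.pi / 60.0               # rad/s
    stored_joules = 0.5 * inertia * omega ** 2
    return stored_joules * usable_fraction / (load_kw * 1000.0)

# Illustrative only: a 600 kg, 0.3 m radius wheel at 7,700 rpm backing a 250 kW load.
print(f"{ride_through_seconds(600, 0.3, 7700, 250):.0f} seconds of ride-through")
```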
Terremark has been among the leading users of flywheel UPS units.
“The power density and flexibility of PowerHouse allows us to offer one of the most compact modular power solutions in the industry that enables the operator to reduce the size and cost of their land and building,” said Todd Kiehn, senior director, Product Management, at Active Power. “The philosophy behind our PowerHouse product line is to simplify the design and build process for our customers.”
“This simplification comes in the form of taking factory built components, integrating them into a purpose built enclosure, and testing the entire system in advance of delivery as opposed to doing all of this work in the field,” continued Kiehn. “This saves the customer time and money so they can better manage the infrastructure supporting their data center power requirements.” | 3:00p |
Equinix Expands in Melbourne and London
Cabling inside an Equinix data center. The company has expanded its data center footprint in Melbourne and London. (Photo: Equinix)
Equinix has updated its global expansion plans with a new Melbourne, Australia, data center and a sixth data center for its successful Slough campus in London.
New Equinix data center in Melbourne. Equinix (EQIX) announced plans to build a new data center in Melbourne, to address strong demand from customers for premium data center services, as data consumption and cloud services experience continued growth in the Australian and global IT markets.
The roughly $60 million investment in the ME1 facility will provide 1,500 cabinets in a 105,000 square foot building with room for future development. ME1 will be built to meet the LEED Green Building Rating System and include plans for evaporative coolers to support a lower PUE (power usage effectiveness). ME1 will be Equinix’s fourth Australian data center and the latest in a series of expansions across Asia-Pacific markets, enabling Equinix to assist multi-national customers looking to expand into local and regional markets.
“We continue to see a surge in demand for data center and interconnection services in the Asia-Pacific (APAC) region and the addition of ME1 will aid enterprises looking to gain proximity to customers and partners and to improve application performance for employees and end-users in Australia,” said Tony Simonsen, managing director at Equinix Australia.
Sixth Equinix data center in London. Equinix also announced it will build a new data center called LD6 on its highly successful London Slough campus in response to consistent demand from customers in the financial services, content, cloud and enterprise segments. Scheduled to open in the first half of 2015, the $79 million data center will have an initial capacity of 1,385 cabinets across 8,000 square meters (86,000 square feet).
With LD6, Equinix will aim for LEED Platinum status, with the help of an innovative air system that will utilize mass air cooling technology with indirect heat exchange and 100 percent natural ventilation. This will contribute to LD6 having lower energy consumption and a smaller carbon footprint than other facilities of its kind. An additional $37 million is being spent on the LD4 and LD5 buildings on the Slough campus. With LD6 complete, the Slough campus will provide over 36,000 square meters (388,000 square feet) of colocation space, interconnected by more than 1,000 diverse dark fiber links and served by over 90 network service providers.
“LD6 is a hugely exciting project; the facility will be the most advanced data center in the UK. We are committed to providing not just continuity but also continuous improvement for our current and future customers; this latest addition to our thriving campus will also set new standards in efficiency and sustainability,” said Russel Poole, managing director at Equinix UK. | 3:30p |
The Future of Bitcoin: Corporate Mines and Network Peering?
At the Inside Bitcoins Conference in Las Vegas in December, Josh Zerlan of Butterfly Labs spoke about the future of Bitcoin mining and transaction fees. (Photo: Rich Miller)
This is the second in a two-part series on the boom in Bitcoin computing infrastructure, and what it means for the data center industry. See Part One, Mining Heads to the Data Center.
LAS VEGAS - What’s the end game of the Bitcoin mining arms race? Miners are building ever-more powerful hardware and larger data centers, trying to stay a step ahead of their rivals and keep pace with “the difficulty” – algorithm changes that make it progressively harder to earn new bitcoins.
Some Bitcoin watchers believe the network will ultimately shift from mining for new coins to a model based on transaction fees, which could accelerate a shift of Bitcoin hardware into data centers and the creation of peering networks to manage fees, just as current peering agreements seek to reduce network transit costs.
The long-term outlook for Bitcoin is important for the data center industry, where some leases can run from three to 10 years. The emergence of Bitcoin has seen the cryptocurrency soar in value, accompanied by rapid advances in the hardware required to successfully capture new coins. The Bitcoin protocol is designed so that these rewards will become harder to earn and will shrink over time. That means that the economics and business models of bitcoin could shift over the life of a data center lease.
Fees and the Future
The Bitcoin economy is supported by a global network of computers that use processing power to verify transactions between Bitcoin owners. Those who participate can benefit in two ways:
- The issuance of new bitcoins, which happens about every nine minutes with a “block reward” of new bitcoins for the miner that processes that block of transactions. The block reward diminishes over time: it was initially 50 bitcoins, is currently 25 bitcoins (about $23,000), and in 2016 will be reduced to 12.5 bitcoins (see the sketch of the halving schedule after this list).
- Miners earn transaction fees, which can be awarded in every bitcoin purchase or transaction, and have historically been a tiny amount (often less than a cent) left as a gratuity for the miner. Slightly larger fees can be offered for transactions that require more data crunching, to ensure that the transactions are processed without delay.
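The diminishing block reward mentioned in the first item is fixed in the Bitcoin protocol: the subsidy started at 50 bitcoins and halves every 210,000 blocks, roughly every four years. A minimal sketch of that schedule:

```python
def block_subsidy(height, initial_btc=50.0, halving_interval=210_000):
    """Return the block reward in BTC at a given block height."""
    return initial_btc / (2 ** (height // halving_interval))

# 50 BTC at launch in 2009, 25 BTC after the late-2012 halving, 12.5 BTC in 2016.
for height in (0, 210_000, 420_000):
    print(height, block_subsidy(height))
```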
Over the past two years, gaining block rewards has become progressively more difficult, forcing miners to upgrade their hardware from CPUs to GPUs and then FPGAs (Field Programmable Gate Arrays) and finally specialized ASICs (Application Specific Integrated Circuits) optimized for bitcoin data-crunching. As the hardware has become more expensive, many enthusiasts have been priced out of the mining market.
Princeton University computer science researchers Ed Felten, Joshua Kroll and Ian Davey have studied the bitcoin reward system and foresee a shift ahead.
“At present, the mining reward seems to be large enough, but under the current rules of Bitcoin the reward for mining will fall exponentially with time,” the Princeton team wrote in a recent paper on Bitcoin economics. “Transaction fees, which are voluntary under the current rules, cannot make up the difference. The only way to preserve the system’s health will be to change the rules, most likely by either maintaining mining rewards at a higher level than originally envisioned, or making transaction fees mandatory. The choice is likely to drive political disputes within the Bitcoin community.”
Researchers from Microsoft and Cornell have also explored this scenario and outlined refinements that would be needed to make incentives work in a shift to transaction fees.
The bitcoin community is “debating that (shift),” said Emmanuel Obiodun, founder and CEO of Cloudhashing, which leases computing power to customers. “It’s becoming more expensive to mine coins. But transaction fees are very low right now, and have very small profit margins. For now, there’s still a lot of upside in bitcoin mining.”
One Vision of a Fee-Based Future
The future of mining was a hot topic at the Inside Bitcoins conference in Las Vegas in December, where Josh Zerlan, Chief Operating Officer of Butterfly Labs, gave a presentation on the future role of transaction fees.
“In the future, there will not be much incentive to mine (for block rewards),” said Zerlan. As rewards become harder to achieve and the growth of bitcoin leads to more transactions, Zerlan says that fees will need to increase to ensure that miners continue to support the network. As this happens, miners will gravitate towards transactions with higher fees attached to them, which will be processed before those with smaller rewards.
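The behavior Zerlan describes, where higher-fee transactions get processed first, can be sketched as a simple greedy selection over pending transactions. This is an illustrative simplification that ignores transaction dependencies and the details of real mining software.

```python
def select_transactions(mempool, max_block_bytes=1_000_000):
    """Fill a block with the highest fee-per-byte transactions first.

    Each transaction is a dict with 'fee' (in satoshis) and 'size' (in bytes);
    the structure is hypothetical. Low-fee transactions that don't fit simply
    wait for a later block.
    """
    chosen, used_bytes = [], 0
    for tx in sorted(mempool, key=lambda t: t["fee"] / t["size"], reverse=True):
        if used_bytes + tx["size"] <= max_block_bytes:
            chosen.append(tx)
            used_bytes += tx["size"]
    return chosen

# A transaction offering 10x the fee per byte is picked ahead of cheaper ones.
block = select_transactions([
    {"fee": 10_000, "size": 250},
    {"fee": 1_000, "size": 250},
])
```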
If bitcoin gains wide acceptance as a payment platform or even as a currency, the growth of fees will present several challenges, Zerlan said.
“If you’re a large company, you have a problem (with paying transaction fees),” he said. “The solution is to maintain a large mining farm in your data center to process your own transactions, and your customers’ transactions, for free. You can also earn extra income by processing others’ transactions.” | 4:00p |
Geeky Fun for Friday: A Beginner’s Guide to Raspberry Pi
Raspberry Pi, the little case-less computer that can fit in your pocket, packs enough computing power to run your home media center, a VPN, and a lot more. But before you can kick back and watch movies on this $35 machine, you need to configure it and install an operating system. This video from Lifehacker shows you how to get up and running.
Also, you can find detailed instructions on Lifehacker.
For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube. | 7:16p |
Friday Funny: What’s the Best Caption?
It’s Friday! As the work week ends, it’s time for a little humor, and our Data Center Knowledge caption contest is a great way for readers to have a chuckle on this winter Friday. Before you take off, take a moment to vote on the reader-suggested captions for our new Data Center Knowledge cartoon, What’s Up With That Phone?
New to the caption contest? Here’s how it works: We provide the cartoon and you, our readers, submit the captions. We then choose finalists, and readers vote for their favorite. The winner receives a hard copy print with his or her caption included in the cartoon!
For the previous cartoons on DCK, see our Humor Channel. Please visit Diane’s website Kip and Gary for more of her data center humor.
Take Our Poll | 8:01p |
Why Does Gmail Go Down? January 2014 Edition
Don’t worry, your Gmail hasn’t evaporated. In worst-case scenarios, Google can restore lost Gmail messages from huge tape libraries like this one, as the company did after a 2011 outage. (Photo: Connie Zhou for Google)
We’ve written many times about the breadth of Google’s data center infrastructure and its focus on reliability. So how does a widely used app like Gmail go down, as it has today? There have been a number of Gmail outages over the years, usually involving software updates or networking issues, or in some cases a software update causing a networking issue.
Google is acknowledging reports of issues, which appear to be global. “We’re investigating reports of an issue with GMail,” the company said on its status dashboard. “We will provide more information shortly.”
On at least four occasions, Gmail downtime has been traced back to software updates in which bugs triggered unexpected consequences. A pair of outages in 2009 involved routine maintenance in which bugs caused imbalances in traffic patterns between data centers, causing some of the company’s legendarily large pipes to become clogged with traffic. That was the case in February 2009, when a software update overloaded some of Google’s European network infrastructure, causing cascading outages at its data centers in the region that took about an hour to get under control.
In Sept. 2009, Google underestimated the impact of a software update on traffic flow between network equipment, overloading key routers. In the Sept. 2009 outage, Google addressed the problem by throwing more hardware at it, adding routers until the situation stabilized.
In a December 2012 outage, the culprit was once again a software update causing a networking issue, this time in Google’s load balancers. “A bug in the software update caused it to incorrectly interpret a portion of Google data centers as being unavailable,” Google reported.
Despite the sophistication of Google’s networks, updates sometimes bring surprises.
“Configuration issues and rate of change play a pretty significant role in many outages at Google,” Google data center exec Urs Holzle told DCK in a 2009 interview. “We’re constantly building and re-building systems, so a trivial design decision six months or a year ago may combine with two or three new features to put unexpected load on a previously reliable component. Growth is also a major issue – someone once likened the process of upgrading our core websearch infrastructure to ‘changing the tires on a car while you’re going at 60 down the freeway.’ Very rarely, the systems designed to route around outages actually cause outages themselves.”
But don’t worry that Gmail might lose your data. In addition to storing multiple copies of customer data on disk-based storage, Google also backs up your data to huge tape libraries within its data centers. The company restored some customer data from tape in a 2011 outage, also caused by a software bug. | 8:12p |
SunGard to Split Off Availability Services Business
SunGard Availability Services, which supplies disaster recovery and business continuity services, is being split off to parent company SunGard’s investors. SunGard AS can deliver services using mobile units that can be used in the event of a disaster, like this one displayed at the Gartner Data Center conference. (Photo by Colleen Miller.)
SunGard Data Systems will split off its Availability Services business, which operates its data centers and disaster recovery business. The SunGard Availability Services unit will be spun off to SunGard’s existing stockholders, including its private equity owners, as a separate company in a tax-free transaction. The new company will continue to use the SunGard Availability Services name, and SunGard says customers should see no impact from the change.
Both SunGard and SunGard Availability Services will continue to be owned principally by the consortium of private equity investment funds from Bain Capital, The Blackstone Group, Goldman Sachs, Kohlberg Kravis Roberts, Providence Equity Partners, Silver Lake and TPG, which acquired SunGard in a leveraged buy-out in August 2005.
With annual revenue of over $4 billion, SunGard is one of the largest privately held IT software and services companies. SunGard Availability Services provides disaster recovery and managed IT services, operating approximately five million square feet of data center and operations space.
“Greater Clarity and Alignment”
“This separation will bring greater clarity and alignment to each company’s mission,” said Russ Fradin, SunGard’s president and chief executive officer. “We believe a strategic separation of SunGard into two financially strong, independent companies will allow each to better focus on its distinct type of business and better pursue its own growth opportunities. While both businesses have been together as part of SunGard for a long time, they serve vastly different customer needs and have very different business profiles, with distinct capital requirements, sales forces and competitors.
“We are confident that two more focused and autonomous companies – each with significant size and capabilities – will be better positioned to drive long-term growth and value for customers, employees and investors,” said Fradin. “Both companies have compelling value propositions and growth opportunities, as well as industry leadership positions, strong customer relationships, experienced management teams and specialized workforces. With each company having a strong footing from which to build and years of experience running independent operations within SunGard, the split-off should not have any impact on customers.”
Andrew Stern, currently chief executive officer of SunGard Availability Services, will become CEO of the independent company after the separation.
“We’ve made great progress at SunGard Availability Services to broaden our portfolio beyond traditional disaster recovery, with significant growth in our Cloud, Recovery-as-a-Service (RaaS), and Enterprise Managed Services businesses,” said Stern. “Customers around the world now rely on SunGard Availability Services to help ensure the availability of the IT systems, data and infrastructure that are critical to their business. As an independent company with $1.4 billion in revenue, we will have the scale, services and focus to continue bringing unique solutions to our customers’ availability challenges.” |