Data Center Knowledge | News and analysis for the data center industry
Wednesday, September 23rd, 2015
12:00p
Equinix to Power All California Data Centers with Solar Power

Equinix, the world’s largest data center service provider, has contracted with SunEdison for enough solar energy to offset the carbon emissions associated with the energy its entire California footprint consumes, including data centers in Silicon Valley and Los Angeles and its global headquarters in Redwood City.
The company has signed a five-year power purchase agreement with SunEdison to buy most of the energy output from a planned 150-megawatt solar farm in Imperial Valley, California, due to come online by the end of 2016. This is Equinix’s first utility-scale data center power purchase agreement, which will bring its global energy consumption from 30 percent renewable to 43 percent renewable.
The 105MW share of the Mount Signal Solar II plant’s total capacity claimed by Equinix will produce about 300,000 megawatt-hours of energy per year, which will be enough to offset energy consumed by its California operations, David Rinard, Equinix’s senior director of global sustainability and procurement, said. The portion consumed by the company’s headquarters is almost negligible compared to the amount of energy consumed by its data centers in the state, he said.
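As a sanity check on those figures, a utility-scale solar plant’s annual output can be estimated from its nameplate capacity and an assumed capacity factor. The short sketch below is purely illustrative; the capacity factor is an assumed value typical of desert solar and is not a figure from Equinix or SunEdison.

```python
# Rough estimate of annual energy from a utility-scale solar share.
# The 0.33 capacity factor is an assumption typical of desert solar,
# not a figure disclosed by Equinix or SunEdison.

HOURS_PER_YEAR = 8760

def annual_output_mwh(capacity_mw: float, capacity_factor: float) -> float:
    """Annual energy (MWh) = nameplate capacity x hours in a year x capacity factor."""
    return capacity_mw * HOURS_PER_YEAR * capacity_factor

if __name__ == "__main__":
    share_mw = 105        # Equinix's claimed share of Mount Signal Solar II
    assumed_cf = 0.33     # assumed capacity factor for Imperial Valley solar
    print(f"~{annual_output_mwh(share_mw, assumed_cf):,.0f} MWh per year")
    # Prints roughly 303,000 MWh, consistent with the ~300,000 MWh cited above.
```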
While internet giants and some of the biggest enterprises have been investing heavily in renewable data center power in recent years, the vast majority of data center service providers, who serve thousands of customers around the world, have not, citing the high cost of renewables and a lack of interest among customers. Customer interest has been rising recently, however, according to multiple data center providers, including Equinix.
San Francisco-based Digital Realty, one of Equinix’s biggest competitors, which also happens to be one of its biggest landlords, announced earlier this year it would give its customers one year of premium-free renewable energy anywhere in the world. Interxion, one of Equinix’s biggest rivals in Europe, claims that 100 percent of its data center operation is powered by renewable energy.
Securing enough renewable energy for data centers remains a big challenge in most of the US. Many utilities don’t provide renewable energy as a product, and the ones that do often ask for premiums that are too high for data center operators to absorb. In states with regulated energy markets, such as California, utilities are prohibited from making power purchase deals with corporations.
California also has some of the highest electricity rates in the world, which in combination with the difficulty of sourcing renewables in the state made for an especially challenging route for Equinix. But it was important for the company to get renewable data center power in California, rather than in states like Texas or Oklahoma, where it would have been much easier, Rinard said.
Equinix’s global headquarters is in California, and Silicon Valley is one of the world’s biggest and most important data center markets. The company has seven data centers in Silicon Valley, a footprint stretching from Palo Alto to San Jose.
For those reasons the company accepted a data center power purchase agreement that was perhaps less attractive than would have been possible in other markets and one that was for a shorter term than it would have preferred, Rinard explained. It was important to solve the puzzle in California “because California was so hard to solve for,” he said.
Equinix also had to accept a degree of risk in entering into the agreement with SunEdison. Because it does not have a subsidiary registered as a power utility – unlike Google – it cannot sell power on the wholesale market. For many of its big renewable contracts, Google buys the energy from a project, strips it of Renewable Energy Credits and sells it on the market, applying the RECs to its data centers.
In exchange for RECs, Equinix essentially agreed to pay SunEdison the difference between the cost of solar energy the plant generates and what the energy producer will be able to get for non-renewable energy on the market. Equinix doesn’t know exactly what that difference is going to be over the next five years — it will vary — “but we feel the risk is reasonable and justifiable,” Rinard said.
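The mechanism Rinard describes works like a contract for differences: Equinix keeps the RECs and covers the gap between the solar contract price and the prevailing wholesale price. The sketch below illustrates that settlement logic with made-up numbers; none of the prices or volumes are actual deal terms.

```python
# Simplified contract-for-differences settlement, as described above:
# Equinix keeps the RECs and pays SunEdison the gap between the solar
# contract price and the prevailing wholesale (non-renewable) price.
# All numbers here are hypothetical illustrations, not deal terms.

def settlement_payment(energy_mwh: float, ppa_price: float, market_price: float) -> float:
    """Amount Equinix pays for a settlement period, in dollars."""
    return energy_mwh * (ppa_price - market_price)

if __name__ == "__main__":
    monthly_energy_mwh = 25_000   # hypothetical monthly output of the share
    solar_ppa_price = 55.0        # hypothetical $/MWh contract price
    wholesale_price = 48.0        # hypothetical $/MWh market price that month
    payment = settlement_payment(monthly_energy_mwh, solar_ppa_price, wholesale_price)
    print(f"Equinix pays SunEdison ${payment:,.0f} this month for the RECs")
    # If wholesale prices drift toward or above the contract price, the payment
    # shrinks; that variability is the risk the article says Equinix accepted.
```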
The contract also includes “bridge RECs,” or RECs Equinix can apply to its energy consumption between the time it signed the contract and the time the solar plant in Imperial Valley comes online.
Rinard declined to disclose financial terms of the deal, citing a non-disclosure agreement.

12:02p
Microsoft Azure, Rackspace to Sell NoSQL Cassandra to Enterprises

With interest in the open source NoSQL Cassandra database as a platform for highly distributed applications picking up, a race is on to provide cloud and hosting services for it. The platform is starting to gain significant traction at the higher end of the enterprise IT market.
At the Cassandra Summit 2015 in Santa Clara, California, Microsoft announced it will make an enterprise edition of Cassandra by a company called DataStax available as a service on its Azure cloud. That move comes a day after Rackspace announced that it will provide a managed instance of Cassandra by DataStax as part of its service portfolio.
DataStax claims to have more than 1,000 customers running its Cassandra implementation, known as DataStax Enterprise (DSE). Cassandra itself was designed from the ground up to support transaction processing and analytics applications simultaneously. That’s appealing to IT organizations that have traditionally had to bear the cost of supporting separate databases for transaction processing and analytics.
Because of that capability, Cassandra is currently being widely tested as an alternative to traditional relational database systems from Oracle and IBM. To make NoSQL Cassandra more appealing to those organizations, it supports a query language, CQL, that is semantically similar to traditional SQL.
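To illustrate how SQL-like CQL feels in practice, here is a minimal sketch using the open source DataStax Python driver; the contact point, keyspace, and table are hypothetical placeholders.

```python
# Minimal CQL example via the DataStax Python driver (pip install cassandra-driver).
# The contact point, keyspace, and table below are hypothetical placeholders.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])   # assumes a locally reachable Cassandra node
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.orders (
        order_id uuid PRIMARY KEY,
        customer text,
        total double
    )
""")

# Read and write statements look much like their SQL counterparts.
session.execute(
    "INSERT INTO demo.orders (order_id, customer, total) VALUES (uuid(), %s, %s)",
    ("Acme Corp", 199.99),
)
for row in session.execute("SELECT customer, total FROM demo.orders LIMIT 10"):
    print(row.customer, row.total)

cluster.shutdown()
```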
While it’s unlikely that Cassandra will replace Oracle or IBM relational databases any time soon, there has been a noticeable decline in new database licenses for both vendors. The degree to which Cassandra may be responsible for that is difficult to assess.
Cassandra as a service can also be found on Amazon Web Services and a variety of other hosting services, including one managed by DataStax itself. Earlier this year DataStax also partnered with HP to provide HP Moonshot servers that come prepackaged with DSE.
In addition to the Microsoft and Rackspace announcements, DataStax today also announced the general availability of version 4.8 of its Cassandra implementation, which among other things adds tighter integration with Apache Spark, the popular open source in-memory computing framework, and with Docker containers. DataStax also announced Titan 1.0, a graph database built on a distributed framework that, like Cassandra, scales out easily across x86 servers.
Robin Schumacher, vice president of products for DataStax, said another big advance in DSE 4.8 is that Cassandra now consumes 50 percent less storage space.
In both cloud and hosting scenarios, IT organizations that don’t have a lot of expertise in deploying Cassandra are being offered the option of relying on third-party expertise to run both Cassandra and the applications that run on it. In fact, in an environment such as Azure, Schumacher said, DSE is the only true distributed database available, because Microsoft SQL Server is based on a master-slave architecture.

3:00p
Data Center World Panel Rings Alarm Bells about Electromagnetic Pulse Danger

NATIONAL HARBOR, Md. — Data center managers used to be relegated to the bottom floors and only showed their faces when end users experienced computer problems, but things have changed for them. Today, they participate in board meetings and have a tremendous influence on C-level decision makers.
With that in mind, the three panelists who spoke at the Data Center World conference here Tuesday about their growing concern with the potential impact of an Electromagnetic Pulse (EMP) attack on critical infrastructure called upon attendees to use their influence to bring awareness to a critical issue that could literally “take down our nation.”
“Can you imagine a United States without electricity?” Frank Gaffney, founder and president of the Center for Security Policy, a conservative national security think tank in Washington, D.C., asked the audience.
It may be hard to conceive, but that’s what would happen if the US electrical grid suffered an EMP attack, Gaffney said. If a terrorist or a terrorist nation got hold of a conventional nuclear weapon and detonated it high in the earth’s atmosphere over a specific target, the resulting powerful current would fry and disable all electrical equipment for thousands of miles, he said.
Considering that a “recently translated military doctrine from Iran mentioned EMP as a weapon more than 20 times,” and that the US has already experienced attempts to bring down parts of the grid before, it’s a very real threat, Gaffney said. Besides, if a terrorist doesn’t strike, it’s just a matter of time before the sun does. On July 23, 2014—the second anniversary of a near-miss with a Carrington-class solar storm—NASA reported there was a 12-percent probability one would hit the earth in the next decade.
“Every 850 years solar flares grow very large and one hits us,” Gaffney explained. “The last time it happened, one hit the earth square on in 1859. Do the math. We’re in the zone. Three years ago, one came within a week to where the earth would have been in its rotation.”
“It would, in effect, reduce our technology-based society to pre-industrial modes of communication and subsistence,” he said.
Lloyd’s of London, one of the world’s largest insurance providers, published a “what-if” scenario in July. If 15 states were blacked out for up to two weeks, as many as 93 million people would be affected, causing up to $1 trillion in economic damage. In the event that power stays out for an entire year, elaborated Gaffney, nine out of 10 people would die without food, water, and medication.
“This is the most important issue you’ll address today, or in the course of this conference, or in the course of the year,” Gaffney told conference attendees. “If we can’t address the problem the panel is going to talk about today, nothing else gets addressed, nothing.”
Calling a failure of the US grid both a nation-ending event and a data center-ending event, he stressed the importance of data centers taking measures to protect critical infrastructure and data should the grid go down.
“Data centers are quickly replacing telecommunication centers. Everything’s a data center,” said Michael Caruso of ETS-Lindgren Inc., a company that provides solutions for detecting, measuring, and shielding EMP waves. “Folks that operate them are absolutely critical to the survival of our society. This subject should be on the table in board meetings when talking about disaster recovery and abatement plans.”
While the US military is protecting nuclear assets from EMP threats, the government has been slow to create legislation that would make such protection mandatory for utilities. Ironically, the utilities resist because of the cost, yet they have the most at stake, he said.
The good news for data centers is that the technology that can protect from EMP threats exists and is in use today. Caruso said RF shields and Faraday cages are affordable ways to protect either new facilities or existing ones. Both deflect electromagnetic energy.
He also outlined three levels of protection: Level One would use a Faraday cage to house data center equipment and protect all points of entry during an event. The facility, support, and utilities might go down, but the actual equipment would be protected and be able to be brought back online once the utilities were restored.
Level Two is an auxiliary-protected building where a Faraday cage would house power units, back-up generators, and cooling systems. This would allow a facility to operate through an event without disruption.
Level Three represents a multi-story facility that could be retrofitted with a number of small Faraday cages that house the most critical equipment or infrastructure.
All three levels should be discussed with an organization’s senior executives, especially with non-government and non-military entities—most of which are not even aware of the threat, said Caruso.
“We could use your help to spread the word,” concluded Scott Tucker, an architect who builds data centers. “All of us have a stake in getting these problems resolved. I hope we have moved you to become part of the solution for your own personal and corporate good.”

3:30p
Six Reasons to Get User Experience Right

Simon Townsend is Chief Technologist, EMEA, for AppSense.
For as many as 10 million desktops, user environment management (UEM) technology optimizes the user computing experience while reducing IT management complexity and cost. A solution that truly delivers end-to-end UEM not only provides the fastest, easiest, and lowest-cost desktop possible, it can do so regardless of the mix of physical and virtual devices or the combination of devices, locations, and delivery mechanisms.
The benefits of using an end-to-end UEM solution are both pervasive and persistent. It can produce a better user experience and lower both capital and operational expenses, providing a significant return on investment. Consider these six reasons to get the user experience right for both the business and its employees.
Achieve a Seamless User Experience
Advanced end-to-end UEM solutions deliver a seamless user experience across all desktops and devices. This includes efficient access to both applications and data without slowing server performance or increasing storage requirements.
Easier Migration and Upgrades
Environments that use end-to-end UEM never have to worry about migrations or upgrades again. Because the user has been decoupled from the underlying system, it’s effortless to migrate the user profile and data to new devices and operating systems. This reduces the time, cost and complexity of migration, a process that has traditionally been very tedious, and eliminates user disruption with literally zero-downtime migrations.
Better IT Control
The best end-to-end UEM solutions put granular control into IT’s hands so that it may more effectively and efficiently manage corporate and application policies. This can also assure accurate use of user privileges and works to prevent costly security breaches.
Better-Performing Desktops
An end-to-end UEM solution can dramatically speed the user environment experience, improving employee acceptance and productivity. With more efficient distribution of profile and application policies, users no longer need to wait for pieces of their environment to load that they don’t need. In addition, smart controls allocate CPU, memory and disk resources to improve the overall quality of service, increase user density and reduce hardware requirements.
Dramatic Cost Reductions Across the Board
End-to-end UEM solutions reduce costs across the board for desktop infrastructure, delivering both capital expense (CAPEX) and operational expense (OPEX) advantages. They can dramatically reduce desktop and support costs, conserve infrastructure expenses and optimize application license expenditures. In addition, with the advantages listed above, end-to-end UEM fuels user productivity for greater workforce efficiencies overall.
Efficient Data Access and Management
With end-to-end UEM, enterprises and their users further benefit from seamless access to data via secure and efficient processes that offer additional granular policy control and end-to-end security. As a result, users can access their work content on any Windows PC, Mac, iPad, iPhone or Android-based device with confidence and ease. Furthermore, data is future-proofed: as storage requirements change, there is no need to migrate data when new devices are deployed.
When it comes to the user acceptance of new desktop implementations, it’s all about experience. To power user adoption while producing the greater IT efficiencies that will result in exponential value both today and into the future, select an end-to-end UEM solution that will help get user experience right. It will lower overall desktop costs, deliver the fastest logon and in-session performance and give users the consistent, reliable experience they expect. This will not only improve user experience, but the business experience overall.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

4:00p
Digital Realty Gets into Colocation Market for Electronic Trading

Digital Realty, the real estate investment trust that’s been changing direction away from its traditional strategy of leasing data center space wholesale, has taken another step toward a more diversified model, partnering with GMEX Technologies Limited to establish numerous electronic trading hubs in its data centers around the world, the company said this week.
In theory, the move will attract more retail colocation customers to the facilities that host GMEX servers, since those customers will want to participate in the markets the trading technology company provides access to. Digital has been putting a lot more emphasis on retail colocation recently, aiming to leverage its global reach and build interconnection hubs in its data centers similar to what its rival and customer Equinix has built.
Earlier this year Digital took its biggest step yet in the new direction, acquiring Telx, one of Equinix’s biggest US competitors, for $1.9 billion. The deal gave Digital 1.3 million square feet of retail colocation space across 20 facilities and a large number of new retail colocation customers. It also gave Digital control over several key network interconnection hubs in the US.
The partnership with GMEX starts with Digital’s data center in Chessington, just outside of London. GMEX servers hosted in the facility will provide access to trading in emerging markets in Europe, including securities, commodities, derivatives, and foreign exchange.
The companies expect to bring the London hub online this October. Next year, they plan to launch a similar hub in Chicago for access to emerging markets in Central America and South America and in Singapore for access to Asian markets.
Traders and companies that serve the high-frequency trading ecosystem pay premium colocation rates to host their servers close to exchange servers to reduce latency. Both colocation companies like Equinix and exchange operators like Nasdaq and ICE provide such services in their data centers.
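The value of proximity comes down to propagation delay: light in fiber travels at roughly two-thirds of its speed in a vacuum, so every kilometer between a trading server and an exchange’s matching engine adds measurable round-trip latency. The estimate below is a back-of-envelope illustration with assumed distances, not figures from any provider.

```python
# Back-of-envelope propagation delay, ignoring switching and queuing overhead.
# The 0.67 fiber factor is an approximation for light slowed by the glass.

SPEED_OF_LIGHT_KM_S = 299_792.458
FIBER_FACTOR = 0.67   # assumed refractive-index slowdown

def round_trip_us(distance_km: float) -> float:
    """Round-trip propagation delay in microseconds over a fiber path."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return one_way_s * 2 * 1_000_000

if __name__ == "__main__":
    for km in (0.1, 10, 100):   # same hall, same metro, neighboring city
        print(f"{km:>6} km fiber path: ~{round_trip_us(km):,.1f} microseconds round trip")
    # Even 100 km adds on the order of a millisecond, which is why
    # high-frequency traders pay to sit in the same building as the exchange.
```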
8:08p

Michigan Sues HP Over Botched $49M Project
This post originally appeared at The Var Guy
By DH Kass
HP has been hit with a lawsuit from officials in Michigan over a $49 million project 10 years in the making that the state claims is some five years late.
The project was intended to replace aging computer systems at Michigan’s Secretary of State offices throughout the state.
“I inherited a stalled project when I came into office in 2011 and, despite our aggressive approach to hold HP accountable and ensure they delivered, they failed,” said Ruth Johnson, Michigan Secretary of State. “We have no choice but to take HP to court to protect Michigan taxpayers.”
Michigan officials said they negotiated with HP for months before issuing a termination for cause letter on Aug. 28. Under terms of the contract, even if terminated, HP must provide support to ensure services to Michigan are not affected.
HP staff has failed to report to work since Aug. 31, Michigan said.
Other states, including California, Minnesota, New Jersey, New Mexico and Vermont, also have reportedly terminated contracts with HP over non-performance.
The IT vendor has been on the job since 2005, officials said, as primary contractor for the Business Application Modernization project, intended to replace the Secretary of State’s mainframe-based computer system used by all 131 offices and many internal work areas. The legacy system is some 50 years old with outdated coding and is costly to maintain and update, Michigan said.
Because the 2010 deadline for HP to deliver the system replacement was not met, the department is saddled with using old technology.
In 2011, Johnson publicly addressed the project’s lack of progress after the state had already paid out $27.5 million to HP for a system that was not operational. Johnson demanded HP reset the terms of the contract to put in place clear timelines for delivery and penalties if HP was unable to deliver and HP agreed, Michigan said.
“Our DTMB (Department of Technology, Management and Budget) partners and I are gravely disappointed that this action to sue is necessary, but HP simply failed the state of Michigan,” Johnson said. “Our focus now will be on looking for options that allow us to continue to provide the best possible service at the lowest possible cost to our customers.”
In an emailed statement to ComputerWorld, HP said, “It’s unfortunate that the state of Michigan chose to terminate the contract, but HP looks forward to a favorable resolution in court.”
This first ran at http://thevarguy.com/business-technology-solution-sales/092215/hp-sued-michigan-over-botched-49-million-technology-job

8:16p
CloudFlare Raises $110M from Google, Microsoft, Baidu, Others 
This article originally appeared at The WHIR
CloudFlare has raised $110 million in equity capital in a funding round led by Fidelity Management and Research Company, with strategic participation from Google Capital, Microsoft, Baidu, and Qualcomm, through its venture investment arm Qualcomm Ventures. With the latest investment, CloudFlare has raised $182 million total in equity funding.
According to the company’s announcement on Tuesday, it will use the new funding to acquire customers, grow its product capabilities, and roll out rapid expansion into new international markets. CloudFlare recently expanded in China and Australia, and plans to expand its global network at the pace of one new data center location per week.
CloudFlare has more than four million customers worldwide, and has a presence in more than 30 countries. The funding round comes as CloudFlare readies its San Francisco headquarters for its first ever Internet Summit on Thursday.
“There’s an inevitability around our business,” Matthew Prince, co-founder and CEO of CloudFlare said. “Traditional on-premise solutions such as firewalls, load balancers, and DDoS mitigation appliances are becoming obsolete as organizations distribute their applications across geographies and cloud environments. CloudFlare offers these edge functions as a service without any additional hardware or software, irrespective of where the applications reside.”
With the growth of mobile Internet traffic, CloudFlare will continue to invest in product ranges to increase the performance and security of mobile applications.
As part of Microsoft’s strategic participation, CloudFlare’s capabilities will be “seamlessly” extended to the Azure ecosystem, according to Microsoft CVP of business development Bob Kelly.
Scott Sandell, managing general partner at New Enterprise Associates said: “We view CloudFlare as a category-defining company that is fundamentally changing the way enterprises deploy applications on the Internet. The broad strategic participation in the round reflects the degree to which this proposition has entered the mainstream, and is recognized by forward-thinking companies as a massive opportunity.”
In 2013, CloudFlare quietly raised a $50 million Series C funding round. In 2011, the company raised $20 million led by New Enterprise Associates.
This first ran at http://www.thewhir.com/web-hosting-news/cloudflare-raises-110-million-gains-strategic-partners-for-rapid-expansion-plan

8:29p
Report: Citrix in Final Attempts to Find Buyer Before Asset Sale 
This post originally appeared at The Var Guy
By DH Kass
Two months ago, virtualization provider Citrix appeared to acquiesce to pressure from activist investor Elliott Management, agreeing to give up at least one board seat, position its GoTo webconferencing line for sale, and move aside longtime chief executive Mark Templeton.
Jesse Cohn, Elliott portfolio manager and head of U.S. equity activism, who in the last two years has compiled an impressive list of IT conquests, immediately joined the Citrix board, gaining a say-so in Citrix’s search for another independent board member.
Citrix also agreed to sell or spin off its GoTo webconferencing portfolio, apparently yielding to pressure from Elliott there as well.
Now Citrix is making a final move to sell itself before it’s forced to sell off assets, according to a Reuters report. The vendor’s market capitalization stands at about $11.6 billion.
According to the report, Citrix is newly engaged in buyout talks with private equity firms and some technology companies. Dell is said to have some interest, the report said.
Citrix reportedly has held off selling assets such as its GoTo webconferencing and associated services, including GoTo Meeting, waiting to see if it can snag a buyer at an acceptable valuation, the report said. Should Citrix fail to find a suitable buyer, the company will also consider selling other assets as well, Reuters sources said.
In the wake of Templeton’s exit, Citrix formed an Operations Committee headed by board member Robert Calderoni to review its operations and capital structure, specifically its overall product portfolio and profit potential. As part of that deal, Calderoni was named executive chairman of the board, with Thomas Bogan assuming the role of lead independent director.
The moves came amid a favorable FQ2 2015 in which the company posted a 94 percent year-over-year spike in net income to $103 million, or $0.64 a share, on a 2 percent revenue increase to $797 million.
This first ran at http://thevarguy.com/information-technology-merger-and-acquistion-news/092315/report-citrix-final-attempts-find-buyer-asset-sale

8:45p
What’s Your Data Center IQ? The Survey Says …

NATIONAL HARBOR, Md. — Ponemon Institute and Emerson Network Power teamed up recently to develop a 36-question Data Center IQ test to gauge key best practices that 560 IT operations, facilities engineering, and senior management professionals in North America may know about but may not necessarily implement.
In order to identify personal and industry strengths and weaknesses, the multiple-choice test focused on five areas: availability, speed of deployment, cost control, productivity, and risk management.
The full Ponemon report won’t be released until next month, but Dan Draper, director of data center programs for Emerson, shared a portion of that test and results with Data Center World attendees on Tuesday. Some questions were thought-provoking and some simply “trivia,” according to Draper. All in all, though, the results were interesting, to say the least.
Some of the more serious questions revolved around downtime and arc flashes in the work environment.
When asked about the average duration of a complete data center outage according to Ponemon, the majority knew that 107 minutes was the correct answer. That figure is based on the time during which nothing is computed or processed in the data center, as determined from on-site triage audits, said Draper.
The majority also knew that battery failure in a UPS was the number one root cause of data center outages. However, Draper questioned whether knowing this fact and having a protocol in place for testing and replacing weak batteries are two different things. “You know that batteries are the weakest link in your data center. You knew that coming in, but do you test your batteries? I just want to get you thinking,” he told attendees.
The question about how many US workers wind up in burn centers each year because of arc flashes, and how many attendees had experienced one in their own data centers, raised some eyebrows. Most respondents knew it was the highest answer option provided: 2,000. Yet 5 percent had first-hand experience. It gave Draper cause to question how well-prepared industry professionals actually are.
In fact, revisions to the 2015 National Fire Protection Association’s Standard for Electrical Safety in the Workplace (NFPA 70E 2015) mandate annual arc flash training, according to an article in Data Center Management magazine.
The new arc flash requirements also force data centers to calculate the arc flash boundary distance for each piece of hazardous equipment and to determine what arc flash protection and personal protective gear are required.
“Do you have areas roped off while doing maintenance that involves electricity, to protect a vendor or yourself?” asked Draper.
Not every test question focused on such dire issues. For example, when respondents were asked if they knew their PUE, 87 percent did; yet only 32 percent actually knew what PUE (Power Usage Effectiveness) stands for. When Draper asked everyone in the room if they had to report that number to a company executive, everyone raised their hands. When test-takers were asked if turning off the lights was part of the calculation, only half answered “true.”
“So, we all have to measure and report the PUE to someone, yet most don’t know what the letters stand for and only half know that turning off the lights is part of the equation,” quipped Draper. “Is this a meaningless, worthless metric? I don’t know, but it’s something to think about.”
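For readers keeping score, PUE is total facility energy divided by IT equipment energy, so overhead loads such as cooling and lighting sit in the numerator. The short sketch below spells out that arithmetic with hypothetical kilowatt figures.

```python
# PUE = total facility energy / IT equipment energy.
# Lighting and cooling count toward total facility energy, which is why
# turning the lights off nudges the ratio down. Numbers below are hypothetical.

def pue(it_load_kw: float, cooling_kw: float, lighting_kw: float, other_kw: float = 0.0) -> float:
    total_facility_kw = it_load_kw + cooling_kw + lighting_kw + other_kw
    return total_facility_kw / it_load_kw

if __name__ == "__main__":
    with_lights = pue(it_load_kw=1000, cooling_kw=500, lighting_kw=30)
    without_lights = pue(it_load_kw=1000, cooling_kw=500, lighting_kw=0)
    print(f"PUE with lights on:  {with_lights:.2f}")    # 1.53
    print(f"PUE with lights off: {without_lights:.2f}") # 1.50
```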
He then asked attendees how they pronounced DCIM. Some said each letter individually, and others called it D-CIM. Draper’s point in asking? “There are a lot of new terms out there, but how well do we really understand them?”
While some of the questions were light-hearted and others very serious, combined they all served to bring awareness to key issues facing data center and facilities professionals.
Draper concluded: Ultimately, it’s one thing to know best practices, but if you’re not implementing them, what’s the point?