Data Center Knowledge | News and analysis for the data center industry
Thursday, September 11th, 2014
| Time | Event |
| 12:00p |
Salesforce to Move Into Interxion’s Paris Data Center

Software-as-a-Service giant Salesforce.com has chosen an Interxion data center in France to expand its European capacity. It is one of three data centers the company is opening in Europe to support its business in the region.
It’s a big win for Interxion, one of Europe’s largest colocation service providers, which has several data center expansions slated across its footprint.
The largest SaaS provider, Salesforce.com is primarily known for its Customer Relationship Management offering, but it also offers a Platform-as-a-Service called Force.com and the AppExchange application marketplace. Salesforce.com has tapped colocation providers for all of its locations and continues to do so.
It has seen significant customer momentum across Europe, with record revenue growth of 41 percent last quarter, according to the company. In addition to the three data centers, it also recently announced plans to add more than 500 jobs across Europe in fiscal 2015.
The other two new Salesforce.com data centers will be in the UK and Germany. The company has chosen NTT for its UK expansion but has not announced who will be its provider in Germany, where it plans to launch a data center in 2015. Both Interxion and NTT have data centers in Germany, but so does Equinix, its primary data center provider in the U.S.
In addition to the new data center in France, Salesforce.com recently unveiled a new French headquarters in an iconic building in the heart of Paris. The new headquarters will house the industry’s first Digital Transformation Hub, a dedicated space to foster innovation and collaboration within the company’s ecosystem of customers, partners, developers and employees.
“Salesforce.com’s new data center demonstrates our ongoing commitment to France and supports the success of our growing base of customers and partners in the region,” said Olivier Derrien, the company’s senior vice president for France, Southern Europe and the Middle East. “Salesforce.com continues to increase its strategic investments in France, enabling local companies to harness the latest cloud, social and mobile technologies and power their digital transformation.”
Salesforce.com participates in the EU-U.S. Safe Harbor framework, which permits EU-based customers to store some customer information on Salesforce.com’s U.S.-based servers. The new data centers will provide in-country locations as an option, making them attractive to customers with in-country data requirements. | | 12:30p |
DataBank Accelerates North Dallas Expansion

DataBank has accelerated expansion plans at its Richardson, Texas, facility. It recently deployed a second 10,000-square-foot pod at the Dallas-suburb site, and several customer deployments since have the company building out a third one faster than anticipated.
The 60,000-square-foot data center was commissioned in February of 2013 and features 20 megawatts configured in 2N (complete redundancy) for both utility and on-site power generation.
“We have been experiencing rapid growth in Texas, specifically in our north Dallas data center,” said Tim Moore, DataBank’s CEO. “With several recent large-scale deployments, we positioned ourselves to respond with a timeline designed to meet these expansion requirements.”
DataBank’s strategy to grow beyond its original footprint in Dallas is alive and well. After eight years and nearly filling up 130,000 square feet of data center space in the original DataBank building (which formerly served as the main Federal Reserve Bank office in Dallas), the company began extending its model – first with another data center in the Dallas metroplex in Richardson, and then in other cities around the U.S.
Its first Minnesota location came through acquisition of VeriSpace and its site in Edina, Minnesota. The company recently broke ground on a second, 88,000-square-foot data center in the south Minneapolis suburb of Eagan.
It also has two data centers in Kansas City, where it acquired a company called Arsalon earlier this year. The provider moved up the stack to offer managed services in 2013. | | 1:00p |
Skybox Outsources Facilities Management to T5

T5’s newly formed facilities management group (T5FM) will manage wholesale provider Skybox’s data centers. Skybox will build and offer move-in ready wholesale data centers in mid-market locations, where T5 will provide facilities support and management.
Using this wholesale data center business model, Skybox will lease self-contained data center halls to customers, each with its own dedicated mechanical and electrical infrastructure, allowing customers to retain total operational control.
Skybox recently started construction on the first data center, called Skybox Houston One. The 86,960-square-foot facility is expected to be finished in November and will be commissioned in January 2015. The company is a joint venture between Rugen Street Capital and Bandera Ventures.
T5 recently started a facilities management services company, offering its domain-specific expertise and protocols to data center operators across North America. T5FM offers the same comprehensive facilities management used in T5’s own data centers, which are primarily located in larger markets.
Houston is a competitive market, serving as a historic hub for both CyrusOne and SoftLayer (now IBM). It is home to a total of about 30 data centers. Skybox will benefit from T5’s operational expertise in the competitive Houston market, as well as in any future markets it decides to build.
“This partnership allows us to focus on our core development business while knowing that the T5FM team will keep operations running smoothly,” said Rob Morris, managing partner at Skybox. “It’s a great marriage.”
T5 gains a valuable client for its newly formed division.
“Skybox and T5 have very similar philosophies as to how to best serve data center customers, and through this partnership we can bring T5’s facilities management expertise to new customers in mid-market regions,” said Mike Casey, chief operating officer at T5. “The expertise and protocols that existing T5 customers rely on will adapt quite well to Skybox’s clientele.”
The Skybox Houston One facility will have four independent data halls and will focus on serving the so-called Houston energy corridor. The facility is designed to support the high-power densities required by the industry, with dedicated halls from 1.2 to 2.4 megawatts available.
Skybox’s unique spin on wholesale is that it offers tenants the ability to procure and negotiate their own power contracts directly with the provider of their choice.
T5 operates its own data centers in Atlanta, Colorado, Dallas, Los Angeles, New York, North Carolina and Portland. Its facilities management subsidiary launched in July with an executive management team touting more than 100 years combined experience.
Casey called the subsidiary the next logical step for the company, given the complexity of operating data centers versus other types of property. A month after launch, T5FM added John Ducic, former senior facilities manager at CoreSite Realty Corp., as director of its west coast operations. | | 2:00p |
Armed With 10TB Drives and 3.2TB Flash, HGST Aims to Own Data Center Storage

HGST (formerly Hitachi Global Storage Technologies), a Western Digital (WDC) company, announced six new data center storage infrastructure offerings, including NVMe-compliant PCIe SSDs and an extension of its use of Intel NAND Flash technology. In response to exponential growth in storage and the need for it to be accessible, elastic and fast, HGST says it has developed methods to extract greater efficiency, performance and reliability through advanced software and firmware that tightly integrates underlying device capabilities with higher-level functions at the subsystem, system and application levels.
For server-side clustering and volume management software, HGST launched Virident Space, which the company says will cluster up to 128 servers and 16 PCIe storage devices to deliver one or more shared volumes of high-performance Flash storage. With capacity for more than 38TB, Virident Space looks to replace SAN environments for shared-storage applications like Oracle RAC and Red Hat Global File System. It joins the existing Virident ClusterCache (for SAN acceleration) and Virident Share products in HGST’s vision for a Flash fabric suite.
HGST said it is now shipping an 8TB Ultrastar He8 hard drive, joining the 6TB helium hard drive launched last year. HGST’s HelioSeal platform is a hermetically sealed, helium-filled hard drive design aimed at delivering maximum capacity and high density for OEMs, enterprise and cloud customers. Also introduced was a data center-class 10TB hard drive for cloud and cold storage applications. Utilizing HelioSeal technology and Shingled Magnetic Recording, the new 10TB drive aims to lead the industry on cost-per-terabyte and watts-per-terabyte metrics.
“At every step, we are innovating with purpose and pace to exceed the expectations of our customers by offering the broadest portfolio of SSDs, HDDs, software and solutions in the industry,” said Mike Cordano, president of HGST. “By providing complete solutions for both performance and capacity centric environments, we’re enabling our data center customers and partners to focus on developing new services and capabilities that drive competitive differentiation and profitability for their businesses. The products and solutions announced today ensure that HGST sustains its heritage as the most trusted provider of innovative data storage offerings to maintain market leadership.”
Taking aim at those still using tape for active archives, HGST says it will offer hard drive solutions providing a 10x increase in storage density and power efficiency over traditional enterprise data center solutions and a 5x increase over commonly used scale-out cloud data center solutions. Although the archive platforms can be configured for a complete range of storage architectures, the company says the greatest cost and efficiency gains come in extremely large-capacity environments.
HGST launched a new series of NVMe-compliant Ultrastar SN100 PCIe SSDs, which integrate Toshiba’s current MLC NAND Flash, and the company says these SSDs will enable broad system interoperability and ease of deployment, resulting in lower cost of ownership. The Ultrastar SN100 series will be offered as a half-height, half-length (HH-HL) add-in card, as well as in a standard 2.5-inch drive form factor, with up to 3.2TB of capacity. To aid integration and ease of use, the company says it will offer NVMe-compatible extensions aimed at delivering new levels of application integration by enabling value-add software layers to interface with the PCIe SSD’s NAND Flash management.
HGST also announced it has extended its use of Intel NAND Flash technology as part of a cooperative agreement with Intel on Serial Attached SCSI (SAS) solid state drives (SSDs). After four generations of HGST SAS SSDs, the products will be marketed and sold exclusively by HGST for another three years.
Rob Crooke, corporate vice president and general manager of Intel’s Non-Volatile Memory Solutions Group, said: “Our work with HGST underscores the importance we place on developing the best SSD solutions for high-performance enterprise workloads. This product development effort advances our NAND Flash technology innovations into the important SAS SSD market by addressing critical data center pain points related to scalable and flexible solutions that help deliver system compatibility and ease of integration into new and existing system designs.” | | 2:30p |
Future Data Center Trends

In the data center industry, many professionals not only work hard to keep the data center facility of today humming along, but also think deeply about where technology and data centers are headed.
Hector Diaz, president of AFCOM’s Denver chapter, who manages 1 million square feet of data center space for a large multinational enterprise, will lead a panel discussion on “The Future of the Data Center” at the upcoming Orlando Data Center World conference.
Data Center Knowledge asked him what top trends the panelists will discuss. Diaz said there are several major considerations:
- The data center of the future is the one you don’t own.
- There will be a continuous focus on energy efficiencies, higher power densities and liquid cooling.
- The industry will go beyond peer ratings and existing standards, pushing ASHRAE standards further.
Colocation and Cloud Will Rule The Day
Diaz said the AFCOM audience mostly runs smaller enterprise data centers of 5,000 to 6,000 square feet. These data centers will be replaced by colocation and cloud Software-as-a-Service products, such as Salesforce.com.
“No business on its own has the economies of scale,” he said. “When you do the economic analysis, the return on the investment is not there. Operating at a small size you don’t get the economies of scale that the colocation provider or a large enterprise with 250,000 square feet or more of data center space can get.
“Server huggers are usually in the IT camp. But they don’t have access to the data on what it is costing the company to run the data center, including all the costs.” He pointed out that a comprehensive look at cost must include not only hardware, utilities and personnel, but also the depreciation cost of the data center asset itself.
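Diaz’s point can be made concrete with a back-of-the-envelope model. Every dollar figure below is an illustrative assumption, not a number from the article:

```python
# Back-of-the-envelope annual cost of running a small enterprise data
# center, illustrating why a "comprehensive" view must include facility
# depreciation, not just hardware, utilities and personnel.
# All dollar figures are assumed placeholders.

def annual_cost(hardware: float, utilities: float, personnel: float,
                facility_capex: float = 0.0,
                depreciation_years: int = 1) -> float:
    """Total annual cost, with straight-line depreciation of the facility."""
    return hardware + utilities + personnel + facility_capex / depreciation_years

# What an IT team often sees (the facility asset isn't on its budget):
partial = annual_cost(hardware=500_000, utilities=300_000, personnel=400_000)

# The comprehensive view: an assumed $15M facility depreciated over
# 15 years nearly doubles the apparent annual cost.
full = annual_cost(hardware=500_000, utilities=300_000, personnel=400_000,
                   facility_capex=15_000_000, depreciation_years=15)

print(f"without depreciation: ${partial:,.0f}/yr")
print(f"with depreciation:    ${full:,.0f}/yr")
```

With these placeholder inputs, the facility’s depreciation alone adds as much as all utilities, hardware and staffing combined, which is the gap Diaz says server huggers never see.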
Large online mega-scale companies are even leveraging others’ expertise. “Netflix doesn’t own any infrastructure,” Diaz said.
The colocation provider has a set of in-house experts constantly monitoring the equipment and the environment, and the Service Level Agreement (SLA) assures the company of uptime, he said. Bandwidth and latency are also less and less of an issue: a single packet takes about 14 milliseconds to get from New York to Chicago, while a human eye blink lasts about 100-400 milliseconds. The network pipeline can be better externally than the ones within a corporate campus.
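The latency figure can be sanity-checked from first principles: light in optical fiber travels at roughly two-thirds the vacuum speed of light, so route distance sets a hard floor on packet delay. A minimal sketch, where the ~1,900 km fiber-route length is an assumption rather than a measured path:

```python
# Estimate the propagation floor for a packet over long-haul fiber.
# Light in fiber travels at roughly 2/3 of c because of the glass's
# refractive index; switching and queuing delay add more on top.

SPEED_OF_LIGHT_KM_S = 299_792   # km/s in a vacuum
FIBER_FACTOR = 0.66             # typical slowdown inside optical fiber

def propagation_delay_ms(route_km: float) -> float:
    """One-way propagation delay in milliseconds over a fiber route."""
    return route_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000

# Fiber routes run longer than the straight-line distance; ~1,900 km is
# an assumed New York-Chicago path, not a measured route.
one_way = propagation_delay_ms(1900)
print(f"propagation floor: {one_way:.1f} ms one-way, {2 * one_way:.1f} ms round trip")
```

The model yields a one-way floor of roughly 10 ms, so the ~14 ms figure quoted above is consistent with propagation plus a few milliseconds of equipment delay.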
In terms of cloud, he said, “It’s time to take off the blinders and embrace cloud computing.” The enormous data centers run by cloud providers can truly leverage economies of scale that the small data center in an enterprise cannot reach.
Liquid Cooling Will Be Ubiquitous
Diaz thinks that liquid cooling, bringing water to the rack, will become prevalent. “I certainly see that in the future and the future is now,” he said, noting that a recent tour of the National Renewable Energy Laboratory (NREL) data center showed what can be done with liquid cooling.
“It’s a matter of physics. Water is a more efficient coolant than air,” he said. “There are solutions that prevent leaks in the data hall. The water flow is put through the pipes under negative pressure. If a leak occurs, the water goes back in the pipe, not out.”
While there is increased upfront cost with water cooling at the rack level, this is offset by the reduced cost of purchasing air-handling equipment, Diaz said.
Standards Will Change
In terms of industry standards, Diaz predicted that the industry will push for more clarity. The Tier standards offered by the Uptime Institute can often be confusing at present. “There needs to be more understanding conceptually,” he said. “Most people who talk about Tiers aren’t certified by the Uptime Institute.” He also said the Tiers are related to energy efficiency and not necessarily a reflection of reliability.
“I have seen a Tier II facility with 100 percent uptime for many years,” he said.
As for the ASHRAE standards, he also expects them to be adjusted again. “They have changed over time, there’s no reason they won’t change again.”
Discuss the Data Center of the Future
Want to explore this topic more? Attend the session on “The Future of the Data Center” or dive into any of the other 20 topical sessions on trends curated by Data Center Knowledge at the event. Also visit our previous post, Data Center Commissioning: What’s the Best Approach?
Check out the conference details and register at Orlando Data Center World conference page.
| | 5:25p |
IBM Updates x86 Server Line Despite Sell-Off Plans

IBM launched a new line of M5 x86 servers earlier this week to support a range of enterprise workloads and computing environments. This will likely be the last series designed by IBM: the company announced at the beginning of the year that Lenovo would acquire its x86 server business for $2.3 billion, and the deal was recently cleared by the U.S. government.
The M5 portfolio is a standard line of x86 servers for the enterprise, but IBM hopes to expand the use case opportunities with additions like IBM Trusted Platform Assurance security features. The M5 platform comes in configurable models of rack and tower servers, dense systems, blades and integrated systems.
IBM may be getting out of the x86 server business, but it continues to command the second-largest share of the global server market. IBM made $2.8 billion in revenue from server sales in the second quarter of this year, while HP, whose market share is the largest, made about $3.2 billion, according to Gartner.
All new M5 servers will come equipped with the new Intel Xeon E5-2600 v3 processors and energy-saving TruDDR4 memory. The servers can come loaded with up to 1.5 TB of memory and range from 1U to 5U in height.
Bill Parker, account manager at Logicalis, said, “In today’s environment of ever-growing workloads and fewer IT resources, customers are demanding the highest levels of reliability, efficiency and automation. We look forward to sharing more with our customers about the increased levels of security, reliability and efficiency architected into these new systems.”
IBM says M5 servers add hardware support for the latest version of the Trusted Platform Module (TPM 2.0) to enable more encryption algorithms and Windows support. A new Secure Firmware Rollback feature prohibits any unauthorized updates of previous firmware versions.
IBM made energy efficiency gains with the M5 series as well, noting new features such as extended operating temperature ranges, dual fan zones and active/standby mode for power supplies. One of the new products, the M5 NeXtScale system, is a direct-water-cooled server.
Building on the M5 series, IBM also launched several tailored enterprise solutions: a System x solution for VMware VSAN, a System x solution for Microsoft Fast Track DW for SQL Server 2014, SmartCloud Desktop Infrastructure with Atlantis Computing ILIO, and an IBM Flex System solution for Microsoft Hyper-V.
The deceptive x86 growth numbers
x86 servers are the only server category that showed year-over-year revenue growth for the second quarter, according to Gartner. RISC/Itanium Unix servers fell 23.2 percent, and the category made up primarily of mainframes (a big business for IBM) fell 2.2 percent.
It is telling that IBM is second in the market by revenue but third by the amount of units shipped (behind HP and Dell). It means the company is making more revenue from selling fewer big expensive systems than from selling lots of commodity x86 servers, which explains its urge to divest the x86 business.
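The revenue-versus-units gap reduces to one line of arithmetic: average selling price (ASP) is revenue divided by units shipped. Gartner’s unit counts aren’t given in the article, so the counts below are assumed placeholders chosen only to show the shape of the comparison:

```python
# Average selling price (ASP) = revenue / units shipped. A vendor can rank
# second by revenue yet third by units if its ASP is much higher, i.e. it
# sells fewer, pricier systems. Unit counts here are assumptions; only the
# quarterly revenue figures come from the article.

def average_selling_price(revenue_usd: float, units_shipped: int) -> float:
    """Revenue per server shipped, in dollars."""
    return revenue_usd / units_shipped

ibm_asp = average_selling_price(2_800_000_000, 150_000)  # assumed unit count
hp_asp = average_selling_price(3_200_000_000, 600_000)   # assumed unit count

# A much higher ASP is consistent with a mix weighted toward big systems.
print(f"IBM ASP ≈ ${ibm_asp:,.0f}; HP ASP ≈ ${hp_asp:,.0f}")
```

Under these assumptions IBM’s ASP comes out several times HP’s, which is exactly the pattern that makes a low-margin commodity x86 business unattractive to IBM.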
Total worldwide server revenue in the second quarter increased 1.4 percent year over year. The other companies with biggest market share by revenue are Oracle and Cisco. | | 6:08p |
Compass: We’ll Build Data Center in 6 Months or Pay $100,000

Exclusive: Banking on the short time it takes to deliver one of its 1.2-megawatt facilities, Compass Datacenters will now guarantee that it will finish a data center within six months of signing, complete with Tier III and LEED Gold certification, or pay the customer $100,000 if it misses the deadline.
Chris Crosby, co-founder and CEO of the Dallas-based developer, said the three-year-old company’s clients have said that speed of delivery has become a big differentiator for Compass. “We’ve built all of the sites in this six-month window,” he said.
The countdown starts from the moment a construction site is “pad-ready.”
“No one is doing greenfield [in a timeframe] anywhere close to that. It’s unheard of.” A typical greenfield build takes more than a year.
One of the biggest benefits of a shorter timeframe is the ability to better align IT with the data center that supports it.
In the 18 or so months you usually have to wait for a greenfield data center, your entire IT environment may go through a major transition, Crosby said. With a guaranteed six-month delivery time, you have a better idea of what exactly will go into the facility and can plan the layout and supporting infrastructure accordingly.
Compass has a single standard data center design for a 1.2-megawatt facility and all supporting infrastructure. The company targets underserved second-tier U.S. markets.
Because it has one design, Compass is in the position to guarantee both Uptime Institute’s Tier III and the U.S. Green Building Council’s LEED Gold certifications. All four data centers the company has built so far have successfully completed both certifications.
The six-month guarantee applies to clients who own the data centers Compass builds and clients who lease the facilities.
The developer’s clients include CenturyLink, Windstream, Iron Mountain and American Electric Power. Compass announced the AEP deal earlier this week.
In May, the company closed a $100 million funding round to fund expansion. | | 6:28p |
Juniper Networks Adds Security Options to Stop Malware and Emerging Threats
This article originally appeared at The WHIR
With IT environments facing risks such as increasingly advanced malware, Juniper Networks has announced new advancements to its Spotlight Secure threat intelligence platform and the ability for Juniper’s SRX firewall to incorporate outside threat intelligence data for a more complete view of threats and compliance issues.
“We hear all the time that security continues to be a major concern, particularly in the cloud and data center environments,” Juniper’s Calvin Chai said in an interview with the WHIR. “So, what we’re doing is we’re taking our Spotlight Secure and extending the threat intelligence platform that it covers. With this announcement, Spotlight Secure is going to provide an open and scalable platform that enables us to link in real time with the Juniper Networks SRX firewall.”
Rather than being locked into the intelligence data offered by their firewall vendor, customers can use this approach to choose the most appropriate threat detection technologies for their business.
In addition to Juniper’s Spotlight Secure threat feeds (which include attacker-device fingerprinting), other threat feeds can be incorporated into a “common feed” that can be used to, for instance, tell SRX firewalls to cut off command-and-control traffic or isolate infected systems.
Enhancements to the SRX firewall allow it to now consume and enforce policy based on the aggregated threat intelligence, including insight into the network gained from Metafabric, the data center architecture that Juniper introduced less than a year ago.
A new cloud analytics engine from Juniper helps make sense of network information and threat feeds. “The cloud analytics engine is a new solution that includes data collection, analysis and correlation tools, and it also includes a visualization component,” Chai said. “And really what this does is it allows network users to better understand the behaviour of their workloads and applications across both the physical and virtual networks.”
While the new security capabilities are largely geared towards enterprise customers, Juniper equipment is used by several web hosts including French cloud provider CloudWatt.
According to Chai, Juniper’s open and scalable approach to network security is appealing to many customers because changing technologies or adapting to new scale doesn’t necessarily mean the customer has to abandon their existing investment.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/juniper-networks-adds-security-options-stop-malware-emerging-threats | | 7:01p |
Waveform Adds 1MW at Detroit Data Center for Bitcoin Miners

Michigan’s Waveform Technology has upgraded its data center in Troy (just outside of Detroit) to add 1 megawatt of capacity to serve its growing Bitcoin mining customer base.
Bitcoin mining is done by specially designed servers that “mine” the virtual currency by solving complex mathematical problems. Economics of mining Bitcoin aside, these miners are a new and rapidly evolving segment of data center customers, and they require high power densities. At least $600 million is expected to be spent on Bitcoin mining infrastructure in the second half of this year, according to a recent estimate. Data center providers CenturyLink and Latisys have recently scored big Bitcoin mining deals.
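The “complex mathematical problem” is a proof-of-work search: miners repeatedly double-SHA-256 a block header with different nonces until the hash falls below a difficulty target. A toy sketch, where the header string and 16-bit difficulty are illustrative assumptions (real mining hashes an 80-byte binary header against a vastly harder target):

```python
import hashlib

def mine(header: str, difficulty_bits: int) -> int:
    """Return the first nonce whose double SHA-256 hash of header+nonce
    has at least `difficulty_bits` leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(f"{header}{nonce}".encode()).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Expected work doubles with each extra difficulty bit; 16 bits means
# ~65,000 hashes on average -- trivial here, astronomical on the real
# network, which is why miners need dense, power-hungry hardware.
nonce = mine("example-block-header", 16)
print("found nonce:", nonce)
```

The brute-force loop is why mining hardware is measured in hashes per second and why these customers drive rack densities far above typical enterprise loads.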
Waveform has allocated 40,000 square feet of data center space for Bitcoin mining equipment. There are two separate electrical connections to the local utility serviced by two separate electrical substations with a total of 4 megawatts to the building.
Waveform isn’t the first provider to specifically market to Bitcoin mining. Another example is Arizona’s NextFort data center, which partnered with Gray Matter Industries last month.
Bitcoin mining equipment maker GAW Miners recently bought up data center capacity and launched a flurry of new “Hashlets,” which provide cloud-based data crunching power to Bitcoin enthusiasts.
“This upgrade leverages Waveform Technology’s 14 years of experience providing data center colocation,” said Rich Tota, director of sales. “We believe hosting Bitcoin servers is a natural extension of a business we have been proficient in for over a decade. We believe our track record of technical expertise and reliability will provide a better alternative to many of the unproven data center startups entering this arena.”
Bitcoin and other cryptocurrencies have grown an entire industry around these mining servers. The Bitcoin data center market is a big and growing opportunity, but not one every provider is positioned to capture.
Tota said that since many of Waveform’s staff members have been Bitcoin enthusiasts, the transition to a new service offering was an easy one.
There’s also a flurry of data center and hosting providers now accepting Bitcoin as payment. Canadian colocation company ROOT announced earlier this month that it would accept Bitcoin from customers. U.S. data center companies C7 and Server Farm Realty take Bitcoin as payment as well.
Many online businesses accept the cryptocurrency too. eBay’s Paypal recently started accepting Bitcoin, among other companies like Overstock.com, helping further usher Bitcoin into pop culture. | | 7:30p |
Did the FBI Use Illegal Techniques to Find Silk Road Server?
This article originally appeared at The WHIR
A cybersecurity expert has accused the FBI of lying about how it found the Icelandic server hosting Silk Road. Nik Cubrilovic, an information security consultant and former TechCrunch writer, says in a lengthy blog post that the FBI explanation of how it beat the Tor network and found the server, and ultimately the site’s operator, is “impossible,” and at best incomplete.
Silk Road (in its first iteration) was shut down by the FBI in 2013, and Ross Ulbricht was charged with being the site’s operator, known as the Dread Pirate Roberts. Ulbricht’s defense sought to have evidence thrown out on the grounds that the server was identified through illegal means, and the FBI defeated the motion by explaining its methods.
That method, according to court documents, allowed FBI investigators to identify a “non-Tor source IP address reflected” in CAPTCHA-related packet headers.
Cubrilovic says that the IP could not be obtained from “leaky CAPTCHA” because CAPTCHA was not being served from a live IP.
“The idea that the CAPTCHA was being served from a live IP is unreasonable,” Cubrilovic writes. “Were this the case, it would have been noticed not only by me – but the many other people who were also scrutinizing the Silk Road website.”
He goes on to detail several other flaws in the official FBI explanation, including a failed test to replicate the results, and suggests several alternative methods the agency might have used, and reasons for not disclosing them.
Ulbricht has been charged with narcotics trafficking and money laundering conspiracy.
One of the four Pirate Bay co-founders was arrested in June after four years on the lam. His eight-month prison sentence and $6.9 million fine could be dwarfed by Ulbricht’s sentence if he is convicted.
Tor has been targeted by attacks aimed at identifying users in the past, including a prolonged attack this year which may have unmasked some Anonymous members.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/fbi-use-illegal-techniques-find-silk-road-server | | 8:57p |
HP Buys AWS-Compatible Cloud Builder Eucalyptus

HP has acquired Eucalyptus Systems, which builds Amazon Web Services-compatible private and hybrid clouds for enterprises, the Palo Alto, California-based giant announced Thursday.
HP did not disclose the acquisition price, but anonymous sources have told Re/code that it was less than $100 million. HP expects to close the deal in the fourth quarter.
The company has not been acquisition-happy since the botched Autonomy deal in 2011. The $10.3 billion acquisition resulted in a massive value write-off later and led to shareholder lawsuits.
The company has also been restructuring over the past several years and streamlining its operations, which left little room for expansion through acquisition. It did make one small acquisition in March, buying network virtualization company Shunra.
The latest acquisition brings an important new member to HP’s leadership team. Eucalyptus CEO Marten Mickos will join HP as senior vice president and general manager of its cloud business.
He will take the cloud-business responsibilities over from Martin Fink, who will remain in his other two roles as the company’s CTO and director of HP Labs.
Mickos is an influential figure in the world of open source software for enterprises. Between 2001 and 2008 he was CEO of MySQL AB, the open source database company Sun Microsystems bought in 2008. MySQL is one of the most popular open source relational database management systems.
Eucalyptus products are also based on open source software.
At HP, he will be charged with building out the company’s Helion portfolio of cloud services. HP announced Helion in May, saying it would invest $1 billion into the Infrastructure-as-a-Service and Platform-as-a-Service initiative.
The IaaS piece is built on OpenStack, an open source cloud infrastructure architecture, and at the core of the PaaS portion is Cloud Foundry, an open source PaaS developed by EMC’s Pivotal. HP says it is the leading code contributor to the upcoming release of OpenStack, expected in October.
“Enterprises are demanding open source cloud solutions, and I’m thrilled to have this opportunity to grow the HP Helion portfolio and lead a world-class business that delivers private, hybrid, managed and public clouds to enterprise customers worldwide,” Mickos said in a prepared statement. |