Data Center Knowledge | News and analysis for the data center industry
Wednesday, March 13th, 2013
11:30a | Data Center Jobs: First Citizens Bank
At the Data Center Jobs Board, we have a new job listing from First Citizens Bank, which is seeking an Infrastructure Architect in Raleigh, North Carolina.
The Infrastructure Architect partners with senior IT management, including the Manager of IT Storage and Disaster Recovery, to develop the long-term technical vision for disaster recovery (DR) that will meet the business needs of all operational areas of the Bank. The role involves designing, developing and integrating a DR technology strategy that fundamentally changes the way the Bank accomplishes its DR goals and recovery objectives; utilizing the Technology Domain Strategy to direct the design and build of the solution; providing cohesive technical guidance and direction for the bank’s annual investment in all technology areas, including Data, Applications, and Infrastructure; facilitating the development and evolution of the architecture and enterprise governance processes; and creating and communicating a clear technical vision for management and senior technical staff to ensure maximum leverage of both purchased and developed technology. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
12:00p | Salesforce.com To Sell $1 Billion in Notes, Commits to Renewables
Enterprise cloud computing company Salesforce.com (CRM) announced its intention to offer $1 billion aggregate principal amount of convertible senior notes, with the potential for $150 million more, in long-term convertible debt. The company recently announced fiscal 2013 fourth quarter and full year results, with operating cash flow totaling $737 million, up 25 percent year-over-year. Total cash, cash equivalents and marketable securities finished the quarter at $1.8 billion.
Salesforce.com intends to use a portion of the up to $1.15 billion in net proceeds to pay the cost of the convertible note hedge transactions, and to fund working capital, capital expenditures and possible acquisitions of, or investments in, complementary businesses, services or technologies. The company presently has a market cap of almost $27 billion.
Salesforce.com issued a sustainability commitment memo recently, citing a goal of becoming fully powered by renewable energy. It will work to steadily increase the amount of renewable energy used in its data center operations. The company memo lists four steps that it will take this year towards achieving the goal:
- Adopting a data center siting policy that states a preference for access to clean and renewable energy supply
- Researching energy efficiency and renewable energy solutions for future data centers
- Encouraging data center energy providers to increase the supply of renewable energy
- Convening peers, sustainability specialists and energy experts around data center energy issues
Salesforce.com added more than 100,000 square feet of new data center space in 2012. The company currently houses much of its operations in colocation space from third-party companies, including Equinix, and thus is currently reliant on the power mix from these providers. Salesforce.com has said that it will evaluate building its own data centers as its infrastructure expands.
With the company recently celebrating its 14th birthday, Salesforce.com CEO Marc Benioff reflected on the journey of turning a simple idea into a high-growth company.
12:30p | Highlights from DatacenterDynamics Converged
The expo hall was bustling with activity at the DatacenterDynamics Converged conference held yesterday at the Marriott Marquis hotel in New York. (Photo: Rich Miller)
More than 1,500 data center professionals from the New York region gathered Tuesday for the DatacenterDynamics Converged conference at the Marriott Marquis. Featured topics included the lessons learned from Superstorm Sandy, cooling guidelines from ASHRAE, the latest industry research findings, and risk management planning for data centers. Check out our photo feature, Scenes from DatacenterDynamics Converged NYC, for a recap of the conference highlights.
12:30p | Six Tips for Selecting HDD and SSD Drives
Gary Watson is Chief Technology Officer of Nexsan, an Imation company.
With today’s wide variety of storage devices comes plenty of confusion about which types of drives to use for which data types. Adding to the confusion are Serial ATA (SATA) and SAS, which refer to disk drive interfaces, and Solid State Drive (SSD), which refers to a particular kind of internal storage technology. Then there are considerations of random access performance, sequential performance, cost, density and reliability.
All these factors make selecting the right drives a challenge. This article offers six tips for navigating through this complexity to help you pick the right solutions for your needs.
1. Don’t Confuse Interface Type with Disk Performance or Reliability.
In the past, SAS and SATA were used as convenient shorthand for fast (SAS) or dense (SATA) disk drives. Now, however, we have SSDs with SATA interfaces as well as inexpensive and dense but relatively low-IOPS 7200 RPM drives with SAS or even Fibre Channel interfaces. Users can no longer make blanket assumptions like “SAS is better for databases.” For example, if we compare a blazing fast SLC SSD with a SATA interface against a relatively sluggish 7200 RPM NL-SAS drive, that assumption can be off by a factor of 1,000.
Users can’t even use SAS or SATA as shorthand for desired drive reliability. There are several SATA drives that have a claimed 2.0M hour MTBF (mean time between failure), for example a 4TB enterprise hard drive from one of our technology partners. This is in contrast to the typical 1.6M hour MTBF number for many 3.5-inch 15,000 RPM SAS drives, or the even lower 1.4M hour MTBF number for some 2.5-inch small form factor (SFF) 7200 RPM NL-SAS drives.
Think about that last number for a minute. For a 40TB system, users would need 40 of the 1TB SFF NL-SAS drives, but only 10 of the 4TB drives referenced above, one fourth as many. Furthermore, and this is crucial, because the SFF configuration needs four times as many drives and each has a lower MTBF, you could expect roughly five times as many SFF drive failures per year. Additionally, the 4TB drive configuration would consume only about 113 Watts, whereas the SFF drives would consume over 200 Watts for the same capacity. When power is a concern, 3.5-inch drive systems often deliver twice the gigabytes per Watt of 2.5-inch drive systems.
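To see how the reliability and power claims interact, here is a quick back-of-the-envelope check (a minimal Python sketch using only the MTBF, capacity and wattage figures quoted above; it treats MTBF statistically, as expected failures per drive-hour, not as a vendor reliability model):

HOURS_PER_YEAR = 8760

def expected_annual_failures(drive_count, mtbf_hours):
    # Expected failures per year = number of drives x (hours per year / MTBF)
    return drive_count * HOURS_PER_YEAR / mtbf_hours

target_tb = 40
big_drives = target_tb // 4    # 10 x 4TB 3.5-inch SATA drives, 2.0M hour MTBF
sff_drives = target_tb // 1    # 40 x 1TB 2.5-inch SFF NL-SAS drives, 1.4M hour MTBF

print(expected_annual_failures(big_drives, 2_000_000))   # ~0.04 expected failures per year
print(expected_annual_failures(sff_drives, 1_400_000))   # ~0.25 expected failures per year, roughly 5x more
print(113 / target_tb, 200 / target_tb)                  # watts per usable TB: ~2.8 vs ~5.0

Run as-is, the arithmetic reproduces the roughly five-fold difference in expected annual failures and the roughly two-fold difference in power per terabyte described above.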
2. For Best $/GB, 3.5-inch 7200 RPM SATA Is Still King.
Storage vendors have a seemingly endless variety of pricing models, but one constant seems to be that 2.5-inch systems cost twice as much per gigabyte as 3.5-inch systems, assuming both are using “enterprise-grade” drives. But as previously noted, the 3.5-inch solution will be far more reliable.
10K and 15K SAS solutions in either 2.5-inch or 3.5-inch form factor will be approximately 3X to 6X more expensive per gigabyte. SSD solutions can be from 10X to 50X more per gigabyte than comparable SATA drives.
3. HDD Performance is Mostly Dictated by Density and Mechanical Speed.
The random or transactional (IOPS) performance of spinning drives is dominated by the access time, which in turn is determined by rotational latency and seek time. Interface performance has almost no influence on IOPS, except in the negative sense that complex or new interfaces sometimes have bloated or immature driver stacks which can hurt IOPS. Highly random applications which benefit from high IOPS drives include email servers, databases and hypervisor environments.
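Because access time is just seek time plus average rotational latency, the transactional gap between drive classes can be estimated directly (a minimal Python sketch; the seek times below are typical published figures used for illustration, not measurements of any particular drive):

def estimated_iops(rpm, avg_seek_ms):
    # Average rotational latency is half a revolution; assume one random I/O per access.
    rotational_latency_ms = 0.5 * 60_000 / rpm
    return 1000 / (avg_seek_ms + rotational_latency_ms)

print(round(estimated_iops(7200, 8.5)))    # ~79 IOPS for a typical 7200 RPM nearline drive
print(round(estimated_iops(15000, 3.5)))   # ~182 IOPS for a typical 15K RPM drive

Note that the interface (SAS, SATA or Fibre Channel) never enters the calculation, which is exactly the point of tip 1.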
Sequential performance, which is important for applications like video and D2D backups, is dominated by the RPM of the drive times the bits per cylinder. This number will decrease 50 percent or more as the drive moves from the outermost to the innermost cylinders. Again, as long as the interface is fast enough to keep up (and it is in all modern hard drives), the interface speed (or even the quantity of interface ports) has no measurable effect on sustained performance. The fastest drives today sustain less than 200 MB/s, which is less than the bandwidth of a single 3 Gb/s SATA port.
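A two-line check shows why the port is not the bottleneck (a sketch; the 200 MB/s figure is the one cited above, and the usable-bandwidth estimate assumes SATA's 8b/10b line encoding, i.e. 10 bits on the wire per data byte):

sata_port_mb_s = 3_000 / 10                 # 3 Gb/s SATA port: roughly 300 MB/s usable
fastest_hdd_mb_s = 200                      # sustained rate of today's fastest spinning drives
print(sata_port_mb_s > fastest_hdd_mb_s)    # True: the platters, not the interface, limit throughput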
4. Consider SSD Instead of 10K or 15K Drives for Transactional Workloads.
Due to their ever-increasing performance and reliability, 7200 RPM SATA drives are taking on more types of workloads including moderate transactional applications. However, 15,000 RPM drives can deliver roughly two to three times as many small block random transactions as 7200 RPM drives due to their lower rotational latency and much more powerful actuator arm. As a result, they are often used for demanding database or email server workloads.
Recently, SSDs have become mainstream options from most storage vendors. Though not faster at sequential workloads, they are incredibly fast at random small block workloads and may be a superior choice for demanding SQL, Oracle, VMware, Hyper-V and Exchange requirements. Many customers report that they can support more guest virtual machines (VMs) per physical server due to the lower latency of SSD solutions, which may offer tremendous cost savings depending on specifics of licensing and hardware.
SSDs continue to advance at a very fast pace, and are now the leading technology in terms of dollars per IOPS as well as IOPS per watt. Today it is very likely that an all-SSD solution will have lower overall capital and operational cost than one made from 15,000 RPM drives, due to the reduction in total slots required to achieve a given transaction performance and the greatly reduced power footprint compared to spinning drives for a given number of transactions. Some enterprise SSDs meet or even exceed the reliability and durability of 15,000 RPM drive systems because far fewer SSDs are required to achieve any given IOPS level.
12:54p | DCK Webinar: Data Center Transformation
The Data Center Knowledge Webinar series continues with “A Roadmap for Data Center Transformation.”
You’re invited to learn more about data center transformation when IO Senior Vice President Aaron Peterson has an in-depth conversation with the Editor-in-Chief of Data Center Knowledge, Rich Miller, during the next DCK Webinar. The discussion will revolve around the increasing pace of global demands being placed on data center providers.
Register now for the DCK Data Center Transformation Webinar on March 28 at 2 p.m. EDT. The event lasts one hour.
The webinar will cover several vital topics around current and future data center transformation, including:
- The predominantly static data center and its limitations.
- The approaching crisis, a “perfect storm” on the data center horizon, made up of supply and demand constraints, which must be prioritized and transformed.
- Data Center 2.0, the technology-based, sustainable solution that represents a fundamental transformation of data center DNA.
Register for this event on March 28 to gain a greater understanding of the current and future roadmap for data center transformation.
1:27p | How Sandy Has Altered Data Center Disaster Planning
The Empire State Building stands out as a beacon of light in a darkened Manhattan landscape during the widespread power outages following Superstorm Sandy. (Photo by David Shankbone via Wikimedia Commons)
NEW YORK – Keep your diesel supplier close, and your employees closer. These were among the “lessons learned” from Superstorm Sandy, according to data center and emergency readiness experts at yesterday’s Datacenter Dynamics Converged conference at the Marriott Marquis, which examined the epic storm’s impact on the industry and the city.
The scope of Sandy has altered disaster planning for many data centers, which now must consider how to manage regional events in which travel may be limited across large areas due to fallen trees and gasoline shortages, restricting the movement of staff and supplies. Yesterday’s panel also raised tough questions about New York’s ability to improve its power infrastructure, as well as the role of city policies governing the placement of diesel fuel storage tanks and electrical switchgear.
A clear theme emerged: Data center operators must expand the scope of their disaster plans to adapt to larger and more intense storms, weighing contingencies that previously seemed unlikely. The power, size and unusual storm track for Sandy proved to be a deadly combination, bringing death and destruction on an unparalleled scale.
Superstorm Sandy caused $19 billion of damage in New York City, leaving more than 900,000 employees out of work at least temporarily, according to Tokumbo Shobowale, the Chief Business Operations Officer for New York. Shobowale said the storm has led FEMA to redraw the storm surge maps and flood zones for the city.
“We have 200 million square feet of commercial space in the flood plain now,” said Shobowale, who said the city struggled to adapt to unprecedented flooding that damaged critical infrastructure for transit, telecommunications and power. “A lot of our response was figured out on the fly. Now that experience allows you to create standard operating procedures for next time.”
Focus on Fuel and Personnel
The data center industry has begun that process in earnest. New York area facilities experienced both direct and indirect impacts from Sandy. A handful of data centers in the financial district were knocked offline as the storm surge flooded basements housing critical equipment. Nearly all of lower Manhattan was left without power when ConEd was forced to shut down key parts of the power grid, forcing major carrier hotels and data centers to operate on backup generators for three to seven days. Facilities in New Jersey also faced local power outages and road closures as trees fell across streets and power lines.
Planning ahead is more important than ever, as data centers will need to consider padding their inventories to ride out longer periods in which they must operate independently.
“If you didn’t have your service providers and employees on-site at the time of the storm, they weren’t going to get there,” said Paul Hines, VP of Operations and Engineering at Sentinel Data Centers, which has a data center in central New Jersey. “That’s affected our planning.” That includes keeping more spare parts at the facility, bringing more staff on-site, and doing more advance planning with maintenance contractors and fuel suppliers.
Several questions for the panel focused on the availability of diesel fuel for emergency backup generators, which was a key concern in the storm’s aftermath. Data center providers typically arrange priority contracts with fuel suppliers. But what happens when a regional disaster tests supply and creates dueling priorities?
Providers in New Jersey reported no problems finding fuel, although some had to go outside the region to ensure a continuous supply. “We had 10 days of fuel, and contracts with two fuel suppliers,” said Hines. “You also have to make sure your fuel suppliers can operate with no power, and have gravity-fed systems. We’ve now found an out-of-region supplier as well. But that doesn’t solve the problem of access to facilities.”
That was also a pressing problem in Manhattan, where flooding made some roads impassable. Building owners worked with city officials to ensure the availability of telecom services, for example. One of the city’s largest data hubs, 111 8th Avenue, had a high priority because the building also houses a hospital.
The Role of the City
Audience members at DatacenterDynamics Converged also pressed Shobowale about the city’s response to Sandy, especially the vulnerability of the utility grid. One questioner noted the failure of a major ConEd substation built alongside the East River.
2:00p | Spring Data Center World Ready for Vegas
The AFCOM Data Center World Spring 2013 conference will bring together data center professionals in a dialogue about real-world events and needs. In response to feedback from data center and facilities management professionals, new AFCOM President and Data Center World Chairman Tom Roberts has put together an educational program of more than 60 sessions that addresses topics such as disaster recovery, cloud, management, data protection and data center design.
Slated for April 28 – May 3, 2013, at Mandalay Bay in Las Vegas, the conference will feature thought leaders and subject matter experts from across the industry hosting educational sessions and presentations that address those issues.
Brian Janous, Utility Architect, Data Center Advanced Development, Microsoft, will deliver the keynote address, “Commoditization of the Cloud: Ensuring Sustainability and Resiliency for Data Centers,” which will focus on how data center operators must become well versed in the dynamics of commodity markets for power, water, carbon and materials and develop strategies to mitigate the long-term impacts on operating costs.
Other session topics and speakers include:
Avoiding Data Center Disasters: What Professionals Need to Know
James Nelson, President BCS & Chairperson ICOR
The International Consortium for Organizational Resilience (ICOR)
Finding the Total Cost of Ownership (TCO)
Paul Schlattman, VP Mission Critical Facilities
Environmental Systems Design, Inc. (ESD)
Impact and Learning from Hurricane Sandy (Panel Discussion)
Donna Manley, IT Senior Director
President AFCOM Midlantic Chapter, DCI Board Member, University of Pennsylvania
Alex Delgado, Global Operations and Data Center Manager
International Flavors and Fragrances (IFF)
The Future of Disaster Recovery
Pete Manca, President & CEO, Egenera
Building IT Resilience: Why Systems and Data Need Full Protection
Ralph Wynn, Sr Manager, Product Marketing
FalconStor
Does Your Cloud Really Need Its Own Data Center? (Panel Discussion)
Margaret Dawson, Vice President of Product Marketing & Cloud Evangelist
HP Cloud Services
Mark Thiele, EVP of Data Center Technologies
Switch
Intelligent Modular Design and Build–A Case Study
Martin Olsen, Vice President
Active Power
Dave Rotheroe, Senior Technologist, Global Data Center Services
HP IT
See the full program on the Data Center World website.
Increased Roundtables, Unconference Sessions
This year, there are more open roundtable discussions on tap, focused on providing practical, real-life solutions organized around specific topics including disaster recovery/business continuity, energy efficiency rebates, DCIM, and data center builds.
Unconference Sessions, which are educational sessions organized for attendees from distinct industries, will be available for a variety of areas including health care, education, government, colocation and finance. Unconference meetings allow peers to share best practices and benchmarks, and to identify and resolve problems together. Subject matter experts from each industry segment are available for Q&A and networking.
There are also opportunities to network with peers at a variety of venues during the conference. On the expo floor, there’s the chance to demo, discuss and see the latest in data center and facilities management technology.
Data Center World is both an educational conference and an expo for data center and facilities management professionals, which provides real-world solutions, vendor-neutral education, peer-to-peer networking and access to technology service providers. The optional tutorials and virtual data center tours offer an added in-depth experience for areas of particular interest.
For more information and registration, see the Data Center World website.
2:14p | Network News: Juniper Selected by PEER 1
Here’s a roundup of some of this week’s headlines from the network industry:
Juniper selected by PEER 1. Juniper Networks (JNPR) announced that PEER 1 has deployed Juniper’s integrated routing, switching and security technology in its U.K. data center. PEER 1 uses the Juniper EX Series Ethernet Switches with Virtual Chassis technology as distribution devices beneath the core level, handling most of its routing changes. The Juniper solution supports PEER 1’s backbone traffic and internal data center traffic, and provides the flexibility of 10GbE hot-swappable modules, which allows PEER 1 to run huge amounts of bandwidth within the data center network and over the backbone. Juniper MX 3D Universal Edge Routers interconnect to PEER 1’s FastFiber Network and to the London Internet Exchange (LINX) for high-speed Internet access. “Our rapid growth in the past few years means that we required support from a company such as Juniper Networks, ensuring that we remain on track to deliver the great hosting and great service that PEER 1 has become known for,” said Dominic Monkhouse, managing director, EMEA, PEER 1. “Juniper’s innovation has enabled us to provide the highest level of secure user services at very low levels of energy consumption; and it supports our ‘Green’ mission with its high-performance network infrastructure built on reliable, high quality products that are environmentally friendly.”
Juniper also announced that its next-generation core routers have been deployed by the China Education and Research Network (CERNET) to establish the country’s first 100 Gigabit Ethernet (GbE) backbone network.
Ciena selected for submarine upgrade. Ciena (CIEN) announced that communications service provider SEACOM has selected Ciena’s 6500 Packet-Optical Platform and OneControl Unified Management System for the upgrade of its submarine network across the Southern and Eastern African coastlines. The upgrade includes key countries in SEACOM’s 17,000km undersea network, including India, Egypt, Djibouti, Kenya, Tanzania, Mozambique, and South Africa. The solution will allow SEACOM to deliver its capacity in very short timeframes and provide for future demands. The deployment will initially use Ciena’s 40G coherent transport technology, with ultra-long distance 100G wavelengths planned for future upgrades. “Connectivity services in Africa are booming due to the growing needs of business IT users, the rise of ‘cloud’ based services, and growing requirements for the processing and storing of personal data,” said Claes Segelberg, chief technology officer at SEACOM. “Ciena’s technology will enable us to cost-effectively scale our capacity to address this growing demand for connectivity throughout the continent. The company’s future-proof network design has mitigated the risks associated with the upgrade project, ensuring a seamless transition for SEACOM’s carrier customers and end users.”
Ciena also announced that the Utah Education Network (UEN) and the University of Utah have deployed Ciena’s 6500 Packet-Optical Platform, equipped with WaveLogic Coherent Optical Processors, to provide high-speed, high-capacity 100G connectivity between the University and its new downtown Salt Lake City data center, and to UEN member organizations.
Windstream expands Carrier Switched Ethernet. Windstream (WIN) announced a nationwide expansion of its Carrier Switched Ethernet service. This expansion will allow Windstream to have ubiquitous availability within its entire footprint through existing network interconnects. The increased coverage area will enhance the current service offerings to include major metro areas across the United States such as New York, Philadelphia, Baltimore, Chicago, Houston, Dallas, Denver and Phoenix. With the solution, interconnect ports of 100 Mbps, 1 Gbps, and 10 Gbps are available, and end user loops from 3 Mbps to 1 Gbps are supported. “By expanding our Carrier Switched Ethernet solution, our customers now have a broader ability to reach businesses with dependable Ethernet access,” said Don Perkins, Windstream senior vice president of Business Marketing. “Ethernet expansion represents continued network and product integration, resulting in greater efficiency, consistency, and reliability for many organizations. This growth further positions Windstream as a key service provider in the carrier Ethernet industry.”
3:30p | Defeating Cyber Threats Requires a Wider Net
As more organizations and users utilize the Internet, there will be more data, more management needs, and more worries around security. The big push around cloud and the modern cloud-ready data center revolves around IT consumerization and newly available resources. Like any infrastructure, the bigger and more popular it gets, the bigger the target it becomes.
Cyber threats have been growing at an alarming rate. Not only have frequencies increased; the creativity of the intrusions and attacks is staggering as well. There is plenty of evidence supporting this:
- Malware is reaching new all-time highs – Trend Micro, for example, had identified 145,000 malicious Android apps as of September 2012. Keeping malware at bay, already a “treading water” challenge, is only getting harder.
- BYOD is a growing threat vector – Frost & Sullivan estimates smartphone shipments in 2012 will reach 558 million, and tablets will reach 93 million. With more users on more cloud networks, the targets will only grow larger as well.
- Distributed Denial of Service (DDoS) attacks are approaching mainstream – In a 2012 survey of network operators conducted by Arbor Networks, over three-quarters of the operators experienced DDoS attacks targeting their customers.
- Exposure footprint is expanding – According to a Frost & Sullivan 2012 global survey of security professionals, slightly more than one-third of the respondents cite cloud computing as a high priority for their organizations now, and that percentage increases to 54 percent within two years.
With this evident change in the technological landscape, there will undoubtedly be a need to re-evaluate existing security environments. Why is that the case? Simply put, many existing security platforms are just not enough to handle today’s demands around cyber security. In this white paper, new types of security platforms are explored. Specifically, Arbor’s ATLAS platform is presented as a leader in enterprise-ready security and traffic monitoring. Between its two data sources, Arbor collects data across the full range of assigned IP addresses: service-active IP addresses from deployed Arbor platforms and service-inactive IP addresses from darknet-hosted ATLAS sensors.
Launched in 2007, ATLAS collects network traffic data from sensors hosted in carriers’ darknets, as well as from carrier- and enterprise-deployed Arbor monitoring platforms. Download this white paper to see how the ATLAS platform delivers direct benefits for carriers and enterprises, including:
- More threats are proactively mitigated, resulting in a lower overall risk posture.
- Less remediation occurs. With fewer attacks being successful, remediation efforts will be fewer in number and smaller in scale.
- As ATLAS researchers monitor and assess traffic data from Arbor platforms and darknet sensors, carrier and enterprise security analysts gain the benefits of this threat analysis without incurring the work effort.
Remember, the cyber threat environment will only continue to grow and evolve. Whether your environment utilizes the WAN or some type of cloud platform, it’s time to evaluate your security infrastructure and see how new, advanced platforms can help.
5:39p | Ascent Goes Downtown With New Chicago Data Center
The Ascent CH2 data center in suburban Chicago. (Credit: Tara Wujcik)
Ascent, which has built several large data centers in the suburban Chicago market, is going downtown. The developer confirmed this week that it will team with Sterling Bay Cos. to renovate a property on South Desplaines Street as its CH3 data center.
Ascent has built two data centers on property it owns in Northlake. The first was leased and eventually sold to Microsoft, which used the building as a “container colo” site for its modular design. The second, CH2, is a 250,000 square foot multi-tenant facility whose tenants include Comcast Corp. and a national retail chain.
The Desplaines site was purchased by Sterling Bay in December. An existing building on the property will be partially demolished. Ascent will retrofit the remaining structure for data center use, and build an addition from the ground up.
Focus on Flexible Design
The project will feature Ascent’s build-to-suit “Dynamic Data Center Suites,” which offer infrastructure that can be customized for customer requirements. Each customer in the multi-tenant facility can have its own entrance, security access and shipping and receiving area, as well as dedicated mechanical and power infrastructure. The approach provides Ascent with the flexibility to offer different suite designs within the same property. The site will support high-density installations and low-latency connectivity, a key requirement given Chicago’s concentration of financial trading operations.
“CH3 is incredibly flexible, unlike the standard multi-story data centers in Chicago, making it adaptable to new technology and server rack designs that some of the older constructions are unable to accommodate,” said Phil Horstmann, CEO of Ascent. “Downtown Chicago is an attractive location for data center space, but previously didn’t offer the options the market is now demanding. We’re talking with companies about their current data center needs and developing CH3 to meet those market demands.”
The project is the latest in a series of data center developments in downtown Chicago, where data center space has historically been in limited supply. Last year Server Farm Realty opened a data center building on S. Canal Street, and earlier this year Digital Realty Trust deployed new space at its facilities on South Federal Street. There are also reports that Equinix may participate in a proposed project at 111 Cermak Road.
Ascent says the CH3 project has full capital backing from its current equity partners, enabling fast-tracked development of the space.
6:37p | Data Center Links: Elastichosts Expands With Equinix
Here’s our review of some of this week’s noteworthy links for the data center industry:
Equinix selected by Elastichosts. Equinix (EQIX) announced that Elastichosts, a global Cloud Server provider, has deployed in Equinix’s International Business Exchange (IBX) data center in Hong Kong. Elastichosts chose Equinix for its rich cloud ecosystem, to grow its business in Hong Kong, as well as in neighboring countries such as Taiwan, China, Singapore, and the rest of ASEAN (Association of South East Asian Nations). The broad reach of ecosystems available with Platform Equinix gives Elastichosts access to a large marketplace of potential customers for revenue generating opportunities. “Deploying with Equinix in Hong Kong brings us immediate benefits as we enter the Asia-Pacific market,” said Richard Davies, CEO of Elastichosts. “We are in close proximity to hundreds of networks and through these networks, closer to our customers. Its rich ecosystem gives us the confidence that we can gain access to a broad pool of potential customers and partners to grow our revenue, while its global footprint ensures we can achieve rapid deployment and scalability as we expand our presence across Asia-Pacific.”
CentriLogic to acquire Capris Group. Canadian data center operator CentriLogic announced an agreement to acquire the Capris Group, an Ontario-based IT services provider. “This is the first step in our expansion plans across Canada,” said chief executive officer Robert Offley. “The acquisition of Capris rounds out our capabilities in Ontario and the [Greater Toronto Area] and we’ll now look to other provinces.” In addition to western Canada, CentriLogic is also scouting for potential acquisitions in Quebec. Internationally, it plans to acquire one data center in the United States, where it currently operates two facilities. It will also add a second facility in the United Kingdom.
7:07p | Oracle Buys Nimbula as Tech Giants Wake to Cloud Potential
Last fall Oracle CEO Larry Ellison announced a new Infrastructure as a Service cloud computing offering. Today Oracle said that it has bought Nimbula, which makes private cloud technology. (Photo: John Rath)
Larry Ellison might love the cloud after all. Oracle has acquired open cloud player Nimbula, a provider of private cloud infrastructure management software. Nimbula’s product is complementary to Oracle’s growing cloud play, with Nimbula expected to be integrated with Oracle’s cloud offerings. The transaction is expected to close in the first half of 2013.
Nimbula’s flagship product is Nimbula Director, which allows enterprises and service providers to build large-scale, fully functional infrastructure services from bare metal in a matter of hours. Nimbula Director differentiates itself through its high level of self-service, automation, application orchestration features, and ease of use. Providing a one-stop virtual data center management solution, Nimbula Director isolates customers from the operational and hardware complexities associated with deploying a private, hybrid or public cloud. Nimbula joined the OpenStack movement last October.
Oracle most likely was attracted to Nimbula because it addresses a private cloud management need, as well as adding some heavy cloud talent in the form of the company’s founders. So we have one of the most promising early entrants in the cloud landscape joining with one of the most misunderstood (in terms of cloud) tech giants. The details of the acquisition are sparse, but the deal indicates that Oracle is continuing to get serious about its cloud play.
Nimbula, which emerged from stealth mode in June 2010, was founded by former Amazon executives Chris Pinkham and Willem van Biljon, who led the development of the Amazon EC2 public cloud service. It was an early player on the scene, and one that was surrounded by a lot of hype thanks to its founders. The company never quite seemed to live up to its promise, mainly because its promise was astronomical and market confusion around cloud has been pervasive.
Oracle’s Misunderstood Cloud Ambitions
Also misunderstood has been Oracle’s cloud strategy. By many accounts, Oracle and CEO Larry Ellison used to be a bit disdainful about the cloud. This perception was built on a few comments by Ellison rather than the actual business, as Oracle has a growing play across all parts of the cloud stack (Infrastructure as a Service, Platform as a Service, and Software as a Service). However, perception is a driving force in the market.
Oracle has been forward-thinking in terms of cloud in some regards; it has several SaaS-based enterprise applications and has pushed to bring social capabilities across the portfolio. Ellison provided the early backing for NetSuite, one of the earliest SaaS players. Most of the disdain Oracle and Ellison have displayed in the past appears to be directed at “cloud washing,” the industry-wide rebranding of everything and anything as “cloud.” However, Google “Ellison Cloud” and you’ll see quite the controversial history.
There are several promising cloud players out there that offer a piece of the larger puzzle, but the overall picture remains fragmented. However, deals such as this one are occurring more frequently as traditional technology giants finally move away from legacy practices (namely, license and maintenance fees) as the driving force behind revenue.
Cloud flips long-established business models on their heads, which is why there’s been some hesitancy on the part of the largest technology companies. These tech giants, particularly the public ones, are under pressure from investors to maintain license and maintenance revenue, and cloud and recurring-revenue services have historically been seen as cannibalizing those revenues. However, both investors and enterprise tech giants are realizing that cloud is the way of the future, so there has been, and will continue to be, consolidation in the market. Companies like Oracle will continue to pick up important cloud pieces to build out full cloud plays.