Data Center Knowledge | News and analysis for the data center industry
Monday, August 11th, 2014
12:30p |
Rackspace Clears Up Confusion: It is Not Getting Out of IaaS Market
Rackspace announced in July that it would no longer provide bare Infrastructure-as-a-Service. Every flavor of IaaS the company offers would come packaged with some level of managed services. After the announcement, several articles surfaced online proclaiming – either out of confusion or to get attention – that Rackspace was getting out of the IaaS market.
That, of course, is not true, and the company's CTO John Engates took to its official blog to dispel the rumors. The truth is that Rackspace is getting out of the business of providing raw, commodity cloud infrastructure services, where it had been competing with a number of giants locked in a vicious price war: Amazon, Google and Microsoft. Instead, the company has decided to leverage its reputation for "fanatical support" to differentiate its cloud with managed services.
Engates discourages users who want raw cloud compute, storage and network resources from turning to Rackspace. "If you're looking for raw infrastructure that you have to manage yourself, we may not be the best provider for you," he writes.
“We’re not interested in offering pure commodity cloud Infrastructure-as-a-Service with no support. Instead, we offer cloud IaaS that comes standard with built-in managed services to help our customers manage that infrastructure. IaaS is and remains a critical component in the Rackspace managed cloud.”
Rackspace is after a very specific cloud user – one that does not have (or does not want to have) the resources to manage its cloud infrastructure. Gartner recently named Rackspace a leader in the managed cloud market. The company has already done well in this market niche, and July's announcement was simply an attempt to solidify its identity as a major player in this space. | 1:00p |
Nlyte Adds Smart Blueprints for Easy Data Center Design Replication
Data center infrastructure management software company Nlyte unveiled a new feature called Smart Blueprints in version 7.7 of its DCIM suite. Smart Blueprints enables IT to preserve and copy any portion of a data center's design, which can help optimize hardware configurations, capture comprehensive designs and easily reuse or share them with others.
Nlyte’s suite includes extensive real-time monitoring of power and virtualized resources, as well as centralized management of all the assets of the data center. Smart Blueprints captures best practices for existing data center designs for use in future projects.
Users capture comprehensive designs at different levels of granularity, including materials, configurations, connectivity information and three-dimensional visual representations. The blueprints are saved and can be reused by the entire organization.
“Nlyte Smart Blueprints is an industry first,” said Mark Harris, vice president of strategy at Nlyte. “It allows for the complete capture of designs from the detailed specifications, up through all elements of an asset, including how they are visually rendered in 3D. So rather than forcing data center professionals to start with a blank slate each time they create specific asset designs, Nlyte Smart Blueprints simplifies the process, saving time and capital resources, by enabling them to leverage pre-existing assets.”
Common designs for Nlyte Smart Blueprints are:
- Highly configurable chassis-based devices which require individual blades, NICs and other modules
- Fully contained or functional racks, e.g. compute, mail or storage services
- Compilation of multiple functions that are individually housed in complete racks or rows
- Replication of entire pod-level designs for dense computing- or storage-intensive applications
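Nlyte has not published the data model behind Smart Blueprints, so the following is only a rough Python sketch of the general idea behind designs like those listed above: capture a configuration once as a template and stamp out copies instead of starting from a blank slate. Every field name and value here is hypothetical.

```python
import copy

# Hypothetical "blueprint": a saved rack design that can be cloned for reuse.
blueprint = {
    "name": "compute-rack-42U",
    "chassis": [
        {"model": "blade-chassis-10U", "blades": 16, "nics_per_blade": 2},
    ],
    "power_kw": 12.0,
    "connectivity": {"uplinks": 2, "uplink_speed_gbps": 40},
}

def instantiate(template, rack_id):
    """Create a concrete rack record from a saved blueprint."""
    rack = copy.deepcopy(template)   # never mutate the shared template
    rack["rack_id"] = rack_id
    return rack

# Reuse the same design across a row of racks.
row = [instantiate(blueprint, f"EDC-A-{i:02d}") for i in range(1, 5)]
print(len(row), "racks created from one blueprint")
```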
The company is adding new licenses and customers every week, including big-name customers with 100 racks and above, said Harris. License revenue is up 120 percent year over year. The largest deployment is 25,000 racks in one site.
Interest in DCIM is high but confusion remains, he said. “DCIM is so much more than power and cooling. With customers we’ve been engaging with, it’s not just about lowering power consumption. It’s almost always about workflow and business process engineering. It’s a lot more than raising the temperature of the data center, it’s comprehensive. How can you possibly price out your user, unless you know the entire cost? It’s not just the server, the power, it’s ‘what’s the burden cost to deliver email to a user?’”
| 1:29p |
Standards & Ratings: Maximizing Generator Set Reliability for Data Center Applications
Jason Dick is a senior-level applications engineer at Minnesota-based MTU Onsite Energy Corp. and a power generation problem-solving expert with nearly a decade of experience.
Data centers can consume up to 100 times more energy than a standard office space, and often rely on dependable backup power supply systems to prevent data loss, customer dissatisfaction or lost business in the event of a power outage.
Selecting the right generator for a data center’s needs requires an understanding of several considerations, including ratings and performance standards.
As a first step, you must define ratings such as total kW output, running time, load factors and emissions regulations. However, while all manufacturers comply with basic standards, some rate their generator sets in ways that require careful consideration.
So, how do you make sense of it all? Here are the four types of requirements that you must consider:
ISO-8528-1:2005
This industry standard defines performance parameters in onsite power applications based on four operational categories: emergency standby, prime power, limited-time running power and continuous power. A generator set’s rating is determined by the maximum allowable power output within each category in relation to running time and load profile.
Manufacturer ratings
With ISO-8528-1:2005 as the guiding standard, manufacturers can customize units to best serve the specific needs of data center customers, and can often exceed industry standards. Because of this, understanding terminology can help you make informed decisions. However, as technology advances so does the associated terminology, which can cause confusion. Here are a few commonly confused terms that often require clarification:
Net vs. Gross Power Output: Think of this as you would your pay. Your gross pay is what you make; your net pay is what you put in the bank after taxes and other deductions. When comparing generator set ratings, evaluate them based on the complete system power output, which should include the power draw for the cooling system, since it’s required for the system to perform.
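As a minimal sketch of that comparison, with made-up figures rather than any manufacturer's published ratings:

```python
# Compare generator sets on net output: gross output minus the power the
# unit's own cooling system draws while running. Figures are illustrative only.
gross_kw = 2000.0        # published gross rating (assumed value)
cooling_draw_kw = 60.0   # radiator fan / cooling parasitic load (assumed value)

net_kw = gross_kw - cooling_draw_kw
print(f"Net usable output: {net_kw:.0f} kW "
      f"({cooling_draw_kw / gross_kw:.1%} of the gross rating feeds the cooling system)")
```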
Overload Capacity: Overload capacity limits vary by manufacturer. Prime power systems, which supply power in lieu of commercially purchased power from a utility, generally have a 10 percent overload capacity that should not be used for more than a set number of hours per year. That number of hours ranges from 25 to 87 annually depending on the manufacturer.
Load Factor: When comparing products with different published load factors, it is important to consider some of the advantages of a generator set with a higher load factor. Electrical engineers often prefer a more complex, soft-loading method that minimizes starting power requirements, which has led to the use of smaller generator set systems and reduced system costs, but at the expense of higher load factors.
Maximum Run Time: Maximum run time is perhaps one of the most important ratings to consider and is often defined based on real-life experience from the field, which varies greatly by manufacturer. Typically, 50-200 hour limits are declared; however, manufacturers with more advanced technology are able to recommend a 500-hour annual runtime. The biggest question data centers ask is: what if I exceed this time in the event of a utility outage? That answer also varies by manufacturer. Some units will sound an alarm or force a shutdown, while others will continue to produce power without issue.
Emissions ratings
For diesel generator sets, the Environmental Protection Agency (EPA) regulates federal engine exhaust emissions standards by engine type: stationary emergency, stationary non-emergency and mobile. However, regional and local jurisdictions may impose stricter regulations. The key is to know your local area’s requirements to ensure compliance.
Uptime Institute performance standard
The Uptime Institute was founded in 1993 to set global power design standards and improve reliability and uninterrupted availability (uptime) in data center facilities. The Institute certifies facilities against performance requirements defined by "Tier" levels, and each of the four Tiers corresponds to a level of availability.
To help navigate the many ratings and industry-based Tier certifications, manufacturers have created their own ratings that meet all of the requirements outlined above. MTU Onsite Energy recently launched its own Data Center Power rating, which meets Uptime Institute Tier III and IV Certification, offering unlimited runtime and increased usable power.
Reliability and continuous uptime are central to every data center. The online retailers, search engines, social media and software companies that form the backbone of the online economy need to be operational 24/7/365.
Now, armed with the knowledge needed to select a generator set that can cope with costly power failures – whether brief or extended – you can protect the critical power used to run servers, as well as the air conditioning needed to remove the heat the servers produce, ensuring safety and reliability no matter the circumstance.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | 1:30p |
Oracle Launches Vault to Lock Encryption Keys in Enterprise Data Center
Last week, as the connected world panicked over the massive login credentials heist by a Russian crime ring, Oracle launched a new product for protecting and managing encryption keys used on systems in enterprise data centers.
Oracle Key Vault, a "software appliance," is an addition to the Oracle Database security portfolio. It centralizes management of encryption keys and credential files, such as Oracle wallet files, Java KeyStores, Kerberos keytab files, SSH key files and SSL certificate files.
The solution is optimized for the Oracle stack, including database and Fusion middleware. It uses Oracle Linux and Oracle Database for security, availability and scalability.
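Oracle has not detailed Key Vault's interfaces here, so the snippet below is only a generic sketch of the key-wrapping idea behind centralized key management, written with the third-party Python cryptography package; none of the names refer to Oracle's actual API.

```python
# Generic key-wrapping illustration (not Oracle Key Vault's API): a master key
# held only by the central vault encrypts ("wraps") per-system data-encryption
# keys, so only wrapped keys are ever stored or shipped around the data center.
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()   # lives only inside the vault
vault = Fernet(master_key)

data_encryption_key = Fernet.generate_key()        # key a database would use
wrapped_dek = vault.encrypt(data_encryption_key)   # safe to store elsewhere

# Later, an authorized system asks the vault to unwrap the key.
assert vault.decrypt(wrapped_dek) == data_encryption_key
```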
Vipin Samar, vice president of database security product development at Oracle, said there was growing regulatory pressure worldwide to encrypt more data, which created a need for centralized management of encryption keys and credential files in data centers. “Oracle Key Vault is a modern, standards-based product that allows organizations to reduce the overhead of regulatory compliance with a solution that protects Oracle Database encryption master keys, Oracle wallet files, Java KeyStores and other credential files,” he said in a statement.
The worldwide push to step up encryption in data centers did not come solely out of fear of criminals. Companies doing business online have been encrypting more and more of the data stored in and traveling between their data centers in response to the recent revelations by former U.S. National Security Agency contractor Edward Snowden that the agency had been collecting data wholesale from the world's digital communications networks.
Companies like Google, Yahoo, Twitter and Microsoft have announced that they would encrypt ever more data moving through or stored on their infrastructure to prevent government spying. | 2:00p |
IBM's New Chip Has Brain-Inspired Computer Architecture
Claiming a big step toward cognitive computing, IBM unveiled a chip called SyNAPSE that uses a brain-inspired non-von Neumann computer architecture. The von Neumann architecture has been in use almost universally since 1946. The neurosynaptic computer is the size of a postage stamp and runs on the amount of energy equivalent to a hearing aid battery.
The company says this technology could transform science, technology, business, government and society at large. This is the first neurosynaptic computer chip to achieve the scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt.
The 5.4-billion-transistor chip is built on Samsung's 28nm process technology. The new chip aims to pair brain-like cognitive capability with ultra-low power consumption. It is one of the largest CMOS chips ever built, yet it consumes a meager 70 milliwatts during real-time operation, far less than a modern processor.
The second-generation chip is the culmination of almost a decade of research and development. It brings us a potential future of neurosynaptic supercomputers. A single-core hardware prototype was unveiled in 2011. In 2013, a new software ecosystem with new programming language and chip simulator was released.
The chip's brain-mimicking design means it can enable vision, audition and multi-sensory applications.
The new cognitive chip architecture has an on-chip two-dimensional mesh network of 4,096 digital, distributed neurosynaptic cores. Each core module integrates memory, computation and communication and operates in an event-driven, parallel, fault-tolerant fashion.
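A quick back-of-envelope pass over the published figures, as an estimate rather than anything IBM has specified, gives a feel for the per-core scale and the implied throughput:

```python
# Back-of-envelope arithmetic from the figures quoted above (estimates only).
neurons = 1e6
synapses = 256e6
cores = 4096
efficiency_sops_per_watt = 46e9   # synaptic operations per second per watt
power_w = 0.070                   # roughly 70 milliwatts in real-time operation

print(f"Neurons per core:  ~{neurons / cores:,.0f}")
print(f"Synapses per core: ~{synapses / cores:,.0f}")
print(f"Implied throughput at 70 mW: ~{efficiency_sops_per_watt * power_w:.2e} synaptic ops/s")
```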
To enable system scaling beyond single-chip boundaries, adjacent chips, when tiled, can seamlessly connect to each other, building a foundation for future neurosynaptic supercomputers. To demonstrate scalability, IBM also revealed a 16-chip system with sixteen million programmable neurons and four billion programmable synapses.
“IBM has broken new ground in the field of brain-inspired computers, in terms of a radically new architecture, unprecedented scale, unparalleled power-area-speed efficiency, boundless scalability and innovative design techniques,” said Dharmendra Modha, IBM fellow and chief scientist for brain-inspired computing at IBM Research. “We foresee new generations of information technology systems – that complement today’s von Neumann machines – powered by an evolving ecosystem of systems, software and services.
“These brain-inspired chips could transform mobility, via sensory and intelligent applications that can fit in the palm of your hand but without the need for Wi-Fi.” | 2:11p |
Hosting + Cloud Transformation Summit 2014
The 451 Research 10th Annual Hosting + Cloud Transformation Summit will be held Monday, October 6 through Wednesday, October 8 at the Bellagio Resort & Hotel in Las Vegas, Nevada.
Join corporate leaders, end users, industry visionaries, IT practitioners, and financial professionals as they learn, network and map out strategies for today’s rapidly changing IT landscape.
This power-packed two-day event provides a complete overview of the opportunities and challenges facing the hosting and cloud industries.
For more information – including speakers, sponsors, registration and more – follow this link.
To view additional events, return to the Data Center Knowledge Events Calendar. | 2:30p |
Procera Adds 100GE-optimized I/O module for 600 Gbps in Single Chassis
Procera Networks announced a new 100GE-optimized I/O module for its PacketLogic PL20000 platform. Responding to the increasing demands of high-performance broadband networks, the new module gives the PL20000 a configuration of up to 4x100GE and 16x10GE channels and increases the platform's scalability to 480 million flows, at a setup rate of 8 million flows per second, for up to 10 million broadband subscribers.
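A rough reading of those platform figures, as estimates only, works out as follows:

```python
# Rough arithmetic on the published PL20000 figures (estimates only).
max_flows = 480e6
flow_setup_rate = 8e6        # new flows per second
throughput_bps = 600e9       # 600 Gbps per chassis
subscribers = 10e6

print(f"Time to populate a full flow table: {max_flows / flow_setup_rate:.0f} s")
print(f"Concurrent flows per subscriber at capacity: {max_flows / subscribers:.0f}")
print(f"Average concurrent bandwidth per subscriber: "
      f"{throughput_bps / subscribers / 1e3:.0f} kbps")
```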
“The PL20000 was specifically designed to operate in 100GE environments, not just for performance, but also session scalability and performance,” said Alexander Haväng, CTO of Procera. “Fixed and mobile consumers use more devices and more applications simultaneously than ever before. The PL20000 can handle bandwidth-intensive applications like video as well as session-intensive applications like social networking, allowing operators to continue to grow their subscribers and service offerings.”
Ed Page, director of product management at Procera, said that with 600 Gbps of throughput per chassis, multi-system performance can scale up to 10 Tbps. That headroom means the PL20000 can handle bandwidth-intensive applications running concurrently on a variety of devices and help ensure a good user experience.
With its products deployed by network operators worldwide, Procera offers Internet intelligence solutions to service providers: analytics on how their network pipes are performing, along with some deep packet inspection that looks specifically at what content is being transmitted. The company has, for instance, estimated the number of people watching Netflix shows.
It says its PacketLogic software suite delivers the insights to take action on broadband traffic in order to enhance the subscriber experience.
Intelligence for mobile networks
Procera also announced the launch of RAN Perspectives, a new subscriber experience solution for mobile broadband operators that delivers real-time location and RAN (Radio Access Network) QoE (Quality of Experience) awareness. The company says the solution provides network intelligence for mobile operators and reduces the need for probes and the costs associated with drive testing. It will integrate with a PacketLogic deployment and give the operator real-time location and RAN quality information.
“The proliferation of mobile devices coupled with the expanding use of data-intensive applications, such as mobile video, has led to increasing congestion in the RAN that can dramatically reduce subscriber QoE,” said Shira Levine, an analyst at Infonetics Research. “A device-based solution such as RAN Perspectives has the potential to more effectively manage congestion while enabling a broad array of additional services, solutions and applications, which could have dramatic implications for mobile networks.”
RAN Perspectives is available for beta testing and is expected to be generally available in the first quarter of 2015. | 5:15p |
IBM Beefs Up Cloud Security Portfolio With Another Acquisition
IBM has acquired Lighthouse Security Group, a subsidiary of its partner Lighthouse Computer Services, for an undisclosed sum.
The acquired company provides an access management solution for enterprises that need to give users in multiple geographic locations secure access to sensitive data. The deal comes less than two weeks after IBM announced the acquisition of CrossIdeas, another company in the enterprise data access management space.
IBM’s security business is vast. The company has made more than a dozen acquisitions in the field over the past decade and has built out a massive security research and development organization. It holds more than 3,000 security patents.
The recent acquisitions indicate a rapid expansion of its enterprise cloud security play.
IBM said it will integrate Lighthouse and CrossIdeas technology into its identity and access management offering, turning it into a “full suite of security software and services.” The package will also include expert managed services.
Lighthouse's customer roster includes companies in financial services, healthcare, retail, manufacturing and higher education, as well as agencies in the U.S. Department of Defense. The company's engineering roots grow out of IBM and Lockheed Martin – both major defense contractors.
Lighthouse Gateway, the Lincoln, Rhode Island-based company's identity management solution, can be deployed from a hosted environment, a customer's own data center, cloud infrastructure or a combination. It protects sensitive corporate data stored on premises or in the cloud from access by unauthorized people.
An automotive company, for example, may need to share proprietary specs on a line of cars with sales managers around the world. Lighthouse would ensure those managers can access the data from their laptops or smartphones while on their show floors, but prevent anyone who is not authorized from seeing it.
The platform of Rome, Italy-based CrossIdeas works along lines similar to Lighthouse's, but has more of a focus on role-based access management to prevent compliance violations. It also has analytics capabilities, generating visualizations for insight into user access, alignment and compliance with access risk requirements.
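Neither vendor's interface is described in detail here, so the following is just a generic sketch of what a role-based access check looks like; the roles, permissions and resource names are invented for illustration.

```python
# Minimal, generic role-based access check (not CrossIdeas' or Lighthouse's API).
ROLE_PERMISSIONS = {
    "sales_manager": {"read:product_specs"},
    "engineer":      {"read:product_specs", "write:product_specs"},
    "contractor":    set(),   # no access to proprietary specs
}

def is_allowed(user_roles, permission):
    """Grant access if any of the user's roles carries the requested permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

print(is_allowed(["sales_manager"], "read:product_specs"))   # True
print(is_allowed(["contractor"], "read:product_specs"))      # False
```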
Since IBM has been a partner of Lighthouse Security Group’s parent company, the Lighthouse Gateway is already integrated with IBM’s platform. Its functionality is based on IBM’s identity and access management capabilities, including user provisioning, identity lifecycle governance, single sign-on, enterprise user registry services, federation and user self-service.
Lighthouse CTO Eric Maass said the company stood out in the crowded cloud-security space because of its ability to match the might of traditional enterprise security products in a cloud-based service. “We are excited to become part of IBM’s security offering,” he said. | 6:35p |
Airlines Save Cash Using GE and Pivotal's 'Data Lake' Tech
GE has stood up a system it uses to analyze data generated by aircraft engines in flight. The company recently conducted a pilot run of the system and said it already helped some of its airline clients cut operational costs.
The system is based on the concept of a "data lake," cooked up by Pivotal, a software company in which GE owns a 10-percent stake. The rest of Pivotal, led by former VMware CEO Paul Maritz, belongs to storage giant EMC and to VMware, in which EMC owns a majority stake.
A data lake is a collection of disparate data sources – where data is stored in different formats – that can be analyzed by a single analytics engine. Pivotal and GE pitch it as a faster-performing and cheaper alternative to traditional enterprise data warehousing, where data has to be organized and converted to a uniform format before it can be analyzed.
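As a toy Python sketch of that idea, not Pivotal's or GE's actual stack, the point is that sources stay in their native formats and structure is imposed only at query time; the sample data below is invented.

```python
# Toy "data lake" illustration: two sources in different native formats,
# queried together without first converting everything to one warehouse schema.
import io
import json
import pandas as pd

csv_source = io.StringIO("flight_id,fuel_kg\nAB123,5400\nAB124,5275\n")
json_source = '[{"flight_id": "AB123", "engine_temp_c": 612}]'

flights = pd.read_csv(csv_source)                   # tabular source
telemetry = pd.DataFrame(json.loads(json_source))   # semi-structured source

# A single analytics pass over both, joined on a shared key.
print(flights.merge(telemetry, on="flight_id", how="left"))
```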
Citing IDC, GE said it could take as much as 80 percent of project time to gather and prepare data for analysis using the conventional data warehousing approach.
The data lake concept, however, is problematic, according to a Gartner analyst. One fundamental issue is the assumption that anyone in an organization has the skills necessary for Big Data analytics. The other is the security and access-control risk that arises when data is placed into a data lake without discerning what a particular piece of data is and who is authorized to access it.
“The fundamental issue with the data lake is that it makes certain assumptions about the users of information,” Nick Heudecker, research director at Gartner, said. “It assumes that users recognize or understand the contextual bias of how data is captured, that they know how to merge and reconcile different data sources without ‘a priori knowledge’ and that they understand the incomplete nature of data sets, regardless of structure.”
In GE’s case, user sophistication is not an issue since the vendor seems to be providing analytics as a service to its customers, doing the heavy lifting itself.
GE Aviation’s pilot project, which took place in 2013, collected data on 15,000 flights from 25 different airlines. Each flight generated about 14 gigabytes of metrics data.
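A rough calculation from those figures, as an estimate rather than GE's stated total, gives the scale of the pilot data set:

```python
# Approximate size of the pilot data set from the figures above.
flights = 15_000
gb_per_flight = 14

total_gb = flights * gb_per_flight
print(f"Pilot data set: ~{total_gb:,} GB (~{total_gb / 1000:.0f} TB)")
```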
The data lake approach enabled GE to integrate all that flight data and run analytics against the massive data set. The process produced measurable cost savings, such as a one-percent reduction in the yearly fuel bill of GE customer AirAsia, according to the vendor.
GE shrunk the time it took to run analytics against the data set from months (which would be required to do the job using the data warehousing method) to days.
The data lake itself is built on technology by Pivotal and integrates with GE's own software, called Predix. The GE solution is a way to connect a massive number of machines, people's devices and analytics systems in a standard, secure way.
The software that did the actual analysis in the pilot project was GE’s Predictivity. The company expects to collect data from 10 million flights by 2015, a 1,500-terabyte data set Predictivity will get to crunch through.
David Joyce, president and CEO of GE Aviation, said, “Gathering and analyzing data to improve our customers’ operations is no longer a futuristic concept, but a real process underway today, and growing in magnitude.” | 9:31p |
How AWS, Google, Microsoft and SoftLayer will Change the Hosting Market
This article originally appeared at The WHIR
By: Craig Deveson
The Big Four hyperscale vendors (AWS, Microsoft, Google and SoftLayer) are going to radically alter the public cloud hosting market in the next couple of years. The hosting industry has its eyes on Amazon but now it is time to look at the Big Four. They will own the majority of the market in years to come.
What do these vendors have in common?
- Lots of cash
- Other profitable businesses
- They are all price matching, often within days of each other (with the exception of SoftLayer)
- Large partner ecosystems
- Appeal directly to end users
- They are investing large sums of money on customer and partner training and education programs
In terms of training, for example, there are now thousands of AWS-certified professionals. I am sure the others will have similar programs in the next year.
This is radically different from what your local data center provider does currently.
All these vendors can cross-subsidise the growth of their cloud business with their other profitable businesses. Think of it like a freemium model where they get customers to start using the platform. Once customers move their data over, they are unlikely to leave.
Why basic prices won’t increase (Compute, Storage and Networking)
The Big Four are using lower (basic) pricing to grow market share. Once customers are on the platform, the objective is to get a greater percentage to upgrade and/or add on higher-margin services. These are high-value services that they will continue to develop, such as AWS Zocalo and RDS.
The internet giants invested $27B last year. Some vendors are investing $1B per quarter.
An area of major expansion is in Sales and Operations:
- The AWS team grew by thousands of employees this past year, expanding AWS infrastructure, enterprise and public sector sales capabilities and allowing the team to innovate at an accelerating pace. (Source: Amazon second-quarter guidance, 2014)
There is debate over how revenue is calculated for the Big Four, so the comparison depends on what you count. For example, Microsoft includes Office 365, Google includes cloud products, and IBM throws in services. If you look at IaaS and PaaS alone, AWS is the clear leader, Microsoft Azure is a clear second, and Google and SoftLayer follow.
Where is your opportunity in the public cloud?
Are your customers asking you about the public cloud? If not, they are either trailing behind or asking someone else. More and more service providers are going to realize that it's better to say "yes, we can help you with that."
The public cloud is complex, and Barb Darrow recently said that AWS's Achilles heel is complexity. However, this is the service providers' opportunity to remove the customer's pain. Enterprises are planning to move to the cloud; it's just a matter of when and with whom.
According to Gartner, torrential changes and a confidence crisis will reshape the service provider landscape.
Cloud Pricing is Confusing
AWS's pricing options are on-demand, reserved and spot pricing; Azure has PAYG and term plans; and Google has sustained-use and hourly billing models.
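To make that trade-off concrete, here is a minimal sketch of an on-demand versus reserved/term break-even calculation; the rates are placeholders invented for the example, not any provider's actual 2014 prices.

```python
# Illustrative break-even between on-demand and a reserved/term commitment.
# All rates below are made-up placeholders, not real provider pricing.
on_demand_per_hour = 0.10   # hypothetical $/hour
reserved_upfront = 300.00   # hypothetical one-time fee for a one-year term
reserved_per_hour = 0.04    # hypothetical discounted hourly rate

hours_per_year = 8760
on_demand_cost = on_demand_per_hour * hours_per_year
reserved_cost = reserved_upfront + reserved_per_hour * hours_per_year

breakeven_hours = reserved_upfront / (on_demand_per_hour - reserved_per_hour)
print(f"Full-year on-demand cost: ${on_demand_cost:,.0f}")
print(f"Full-year reserved cost:  ${reserved_cost:,.0f}")
print(f"Break-even utilization:   {breakeven_hours:,.0f} hours/year "
      f"({breakeven_hours / hours_per_year:.0%})")
```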
There are a number of revenue models; however, they all include three components:
- Promoting value
- Preventing customer pain
- Productizing your portfolio
In summary, the hyperscale vendors will have the majority of the market share in the next few years, and they will aim to build larger ecosystems to deliver more value-added services on their platforms.
The cloud complexity issue could be overcome by a single platform or a managed service provider relationship. Cloud service brokers will manage multiple clouds and make it easier for customers.
*Note: I left OpenStack out of this article because my view is that OpenStack will mainly be used for private clouds and it’s struggling to get traction.
Craig presented at Hostingcon Australia Symposium on August 4th 2014. Check out his presentation on slideshare “Public Cloud: Add New Revenue & Profitability to Your Existing Hosting Business”
About the author: Craig partners with cloud leaders such as Amazon to build products and solutions for the local and international market. His first company Devnet was sold to Cloud Sherpas (one of the first Google Cloud Partners in the world and first in Asia Pacific). He is active in the AWS users group and many startup groups. His current startup is Cloud Manager: the easiest way to connect to the AWS Cloud. Email : craig.deveson@cloudmgr.com T:@cloudmgr W: www.cloudmgr.com
This article originally appeared at: http://www.thewhir.com/blog/aws-google-microsoft-softlayer-will-change-hosting-market | 10:00p |
IO Building Massive Beachhead in New Jersey
EDISON, N.J. - The view from the balcony overlooking Hall 2 of the IO New Jersey data center tells the story of the company's progress.
Two years ago, this vast hall was nearly empty, with several container-style modules tucked in a corner. Today there are dozens of modules housing servers and storage, deployed in row after row across the hall.
This 829,000-square-foot building overlooking the New Jersey Turnpike was once a printing plant for The New York Times. It now serves as the East Coast beachhead for IO and as a proving ground for the company’s bid to transform the way data centers are built and deployed.
IO has been a pioneer in the market for modular data centers built in a factory using repeatable designs that can be shipped to either an IO data center or a customer’s premises. The company has also developed its own software to manage IT infrastructure.
After building a large installed base of modular data centers in its home base of Phoenix, IO introduced its offering in New Jersey, where it was the first service provider to offer pre-fabricated IT infrastructure.
“When we had the first few modules installed, some people thought it was a science project,” said Jason Ferrara, vice president of marketing at IO.
That curiosity has given way to sales. IO says it has deployed 88 data center modules in Edison, each representing between 200 and 400 kilowatts of IT capacity.
Rows of data center modules within the IO New Jersey data center in Edison, N.J.
Tough competition
Back in 2011, we noted that IO New Jersey would serve as the first major test of whether modular data centers represent a major disruptive force in the multi-tenant market or become a niche market for specialized requirements.
New Jersey is one of the most active data center markets in the country, boosted by its proximity to Wall Street and the state’s robust supply of healthcare firms and enterprises. It’s also one of the most competitive, with virtually all of the industry’s major players locking horns. That’s a key difference from Phoenix, where IO is the dominant provider.
Tenants in IO New Jersey include investment firms like Goldman Sachs and Allianz and service providers like Fortrust, Ajubeo, CCNet and Nothing But Cloud. A major travel site and a Bitcoin mining operation also have space in the Edison facility.
New Jersey hasn’t always been an easy market for new entrants. Wholesale specialist DuPont Fabros Technology entered just before IO did and has taken three years to sell 11 megawatts of space in its Piscataway facility.
At the time, the new arrivals created the perception of an oversupply of space. But new players continue to arrive. Internap and CoreSite have both opened facilities in north Jersey, and Digital Realty has begun building a new campus in Totowa.
Given that competition, the deployment of 88 modules at IO New Jersey suggests the state’s data center customers are growing comfortable with modular designs as an alternative to traditional raised floor space.
Playing at all levels of the market
One factor helping IO, Ferrara says, is that its offerings allow it to play at all levels of the market. IO offers colocation by the cabinet within multi-tenant modules, leases dedicated modules for wholesale-style requirements and also operates its IO.Cloud service in Edison.
“This is a competitive space,” said Ferrara. “For us, having visibility across modular, colo and cloud is an advantage. We have strong competitors in each of those areas, but we can accommodate customers around any of them. Our question is ‘what are your application requirements?’”
IO’s cloud is powered by OpenStack and runs on servers and storage from the Open Compute Project. The company is positioning its cloud as an alternative to pushing data to Amazon Web Services (AWS).
“We know that AWS is pushing hard to get into the enterprise,” said Ferrara. “For customers who are dipping their toe into cloud, they can easily push workloads across a cross-connect within the data center. There’s no traversing the public Internet or paying data transfer costs. It’s colo-to-cloud within the same data center. We’re starting to see that as a differentiator.”
IO New Jersey is among a small number of modular facilities optimized for multi-tenant operations. Other examples include CapGemini’s Merlin data center in the UK, a Tier 5 facility in Australia, and Colt data centers in Europe.
Room to grow
The data center's site overlooking the Turnpike is significant, as the highway access simplifies delivery of the IO.Anywhere modules, which are built in the company's factory in Phoenix. The 42-foot modules, which weigh as much as 25 tons, are loaded onto trucks for the cross-country journey to New Jersey, where they enter the building through a 20-by-20-foot entrance and are wheeled into position.
The company has filled nearly all of its first phase and a large chunk of Hall 2. But there’s plenty of room to grow, as the cavernous third and fourth phases of the building are even larger and can bring the total capacity to 650 modules and 108 megawatts of power.
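As a rough cross-check of the figures in this article, as estimates only rather than IO's own numbers:

```python
# Rough capacity arithmetic from the figures reported above (estimates only).
modules_deployed = 88
kw_per_module_low, kw_per_module_high = 200, 400
buildout_modules, buildout_mw = 650, 108

deployed_low_mw = modules_deployed * kw_per_module_low / 1000
deployed_high_mw = modules_deployed * kw_per_module_high / 1000
print(f"Deployed IT capacity today: ~{deployed_low_mw:.1f}-{deployed_high_mw:.1f} MW")
print(f"Full build-out averages ~{buildout_mw * 1000 / buildout_modules:.0f} kW per module")
```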