Data Center Knowledge | News and analysis for the data center industry
Tuesday, October 1st, 2013
11:30a
Data Center Jobs: The Vault by BendBroadband
At the Data Center Jobs Board, we have a new job listing from The Vault by BendBroadband, which is seeking a Solutions Architect – Tier III Data Center in Bend, Oregon.
The Solutions Architect – Tier III Data Center is a high-visibility, high-impact role. Reporting to the VP of Sales and Business Development, you will have his ear on account and corporate strategy. The Solutions Architect must become our product expert, craft creative and complex solutions for clients, speak the same language as our clients, and act as the bridge between the client team and our engineering and sales teams throughout implementation. The role must sell racks, whether virtual, physical or both, and be part of the data center frontier as we invest in new technology and live on the leading edge. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

12:00p
SoftLayer: An Autonomous Shade of Blue 
The acquisition of SoftLayer by IBM was one of the most noteworthy deals of the last year. It was the combination of one of the most successful hosting companies of all time with a true technology giant. SoftLayer’s automation platform will play an important role in IBM’s overall cloud.
IBM has acquired 110 cloud companies over the years. Right now, the company is evaluating which of those pieces should run on SoftLayer’s automation platform and porting them over. Expect SaaS and PaaS offerings focused on big data and analytics to be ported to the SoftLayer platform.
“IBM is hyperfocused on cloud,” said SoftLayer CEO Lance Crosby. “They really liked our tech and platform.” What does SoftLayer add? Crosby uses an apt sports analogy: “IBM was winning the Super Bowl when it came to enterprise cloud, but they had no draft picks or farm team. That’s what we bring.” SoftLayer extends IBM’s capabilities outside the Fortune 2500, as well as enhancing them within the enterprise.
Transforming IBM for the Cloud Age
IBM CEO Virginia Rometty has now been at the helm for two years, and has been integral in transforming Big Blue for the cloud age. IBM is known for its high-touch managed enterprise services, while SoftLayer is renowned for its top-notch automation platform. Combining them provides the breadth to serve everyone from the developer at a web-scale company to the giant Fortune 100 enterprise.
The road to the acquisition started in 2012, when SoftLayer decided to do a market check. Within the initial 60 days, the company saw significant interest. IBM provided the most attractive avenue for the company to grow going forward.
IBM is letting SoftLayer function autonomously for two years. “This will take us through 2014, as we figure out each other,” said Crosby. “They’re taking the approach that no one knows the business as well as we do.”
In the meantime, IBM is putting tremendous faith in the company, giving it the full gamut of resources it needs to expand in ways that weren’t possible as a stand-alone company. “I never anticipated a full blessing from IBM, but that’s what I got,” said Crosby.
Aggressive Expansion for SoftLayer
SoftLayer has aggressive expansion plans, and under the umbrella of IBM, has the resources to grow. On the docket are eight new sites worldwide, including expansions in Canada, Europe and Asia.
Further down the line, the company is looking deeply into some core, emerging markets that might have been outside of its resources prior to acquisition. “We’re looking at China by 2015, as well as India and South America,” said Crosby. “We’ll form in-country entities.”
The company continues to find strong demand. An example: Crosby says he initially thought SoftLayer overcommitted on space in Singapore because at the time of the expansion, there was a mad rush to get space in that particular Equinix facility. However, Crosby says that space is now almost full.
AWS in the Crosshairs
SoftLayer and Rackspace Hosting used to be fierce competitors in the hosting space. On one side of the battle was SoftLayer, a company that focused on automation and giving control to customers. On the other was Rackspace, a company that built the business on high-touch, fanatical support. Perhaps the biggest change post-acquisition is that Rackspace is no longer in SoftLayer’s crosshairs: now it’s Amazon Web Services.
IBM gives SoftLayer the resources and scale to compete with the cloud giant. AWS doesn’t have the high-touch options that SoftLayer’s parent, IBM, brings to the table.

12:30p
Using a Total Cost of Ownership (TCO) Model for Data Centers
Harry Handlin is Director of Critical Power Applications, GE Critical Power. He co-authored this column with Brad Thrash, Product Manager, Global Three-Phase UPS, GE Critical Power. Handlin and Thrash are part of the senior engineering and product management team within GE Critical Power’s AC Power Group, responsible for helping to develop and bring to market energy-efficient UPS technology, including GE’s new TLE Series UPS with eBoost technology.
Harry Handlin, GE Critical Power
Most people in the data center world understand the basic concepts of total cost of ownership (TCO): the sum of initial capital expenditures (CapEx) and ongoing, long-term operational expenditures (OpEx). TCO is a critical metric when designing a new data center facility or selecting equipment. Yet with the explosion of data center expansion, identifying and weighing the value of TCO variables when specifying, building and operating a data center can be elusive. A simple miscalculation can cost companies millions of dollars every year.
We know that energy is certainly one of those critical TCO variables, as data centers are significant consumers of energy. Servers and data equipment account for 55 percent of the energy used by a data center, followed by 30 percent for the cooling equipment that keeps the facility operational. Even electrical power distribution losses, including uninterruptible power supply (UPS) losses, account for a significant 12 percent of energy consumption. Interestingly, only 3 percent is consumed by lighting.1
Power Equipment Purchases: Why 1 Percent Energy Efficiency Matters
Energy efficiency gains in any of these areas have a significant impact on TCO and annual operating expenses, especially for high-power, long-life assets. For example, let’s look at just a 1 percent efficiency improvement for a UPS deployment at a 10 megawatt (MW) data center. As the chart (Figure 1) below shows, while CapEx is fixed, the 10-year OpEx of the UPS drops by $1.4 million with an energy efficiency improvement of just one percent, from 93 to 94 percent. With newer eco-mode or multi-mode UPS technologies that offer up to 96.5 percent efficiency (or higher), that savings jumps to almost $3.4 million. So, applying a TCO model uncovers how just a single percentage-point gain in energy efficiency adds up.
 Figure 1. TCO vs. Efficiency.
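To make the arithmetic behind Figure 1 concrete, here is a minimal sketch of the comparison in Python, assuming a 10 MW IT load, an electricity rate of $0.10 per kWh and round-the-clock operation. The load and rate are illustrative assumptions, not figures from the chart, so the dollar amounts will not match Figure 1 exactly; they depend heavily on the utility rate and on whether cooling overhead is included.

```python
# Rough 10-year UPS operating-cost comparison for a 10 MW IT load.
# Assumed values (illustrative only): $0.10/kWh utility rate, 24x7 operation.

IT_LOAD_KW = 10_000        # 10 MW of critical IT load
RATE_PER_KWH = 0.10        # assumed electricity price, $/kWh
HOURS_PER_YEAR = 8_760
YEARS = 10

def ups_loss_cost(efficiency: float) -> float:
    """Ten-year cost of the energy lost in the UPS at a given efficiency."""
    input_kw = IT_LOAD_KW / efficiency   # power drawn from the utility
    loss_kw = input_kw - IT_LOAD_KW      # power dissipated by the UPS
    return loss_kw * HOURS_PER_YEAR * YEARS * RATE_PER_KWH

baseline = ups_loss_cost(0.93)
for eff in (0.94, 0.965):
    savings = baseline - ups_loss_cost(eff)
    print(f"93% -> {eff:.1%}: roughly ${savings:,.0f} saved over {YEARS} years")
```

Under these assumptions the 93-to-96.5 percent step lands near the $3.4 million cited above, while the single-point step comes in around $1 million, which shows how sensitive the result is to the assumed electricity rate and any cooling losses rolled into the model.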
Wrestling with TCO OpEx versus CapEx Demands
So, as mentioned earlier, if TCO models are understood by the data center industry, why aren’t many of the TCO criteria established for a data center’s design applied during the final equipment procurement phases of data center construction or upgrades? Unfortunately, the TCO model is often abandoned when the project phase turns to selecting power system components, such as a UPS, because of short-term CapEx concerns over the initial cost.
While the capital expenditures for competing UPS systems are roughly comparable, the OpEx for UPS energy consumption can easily exceed the CapEx over the life of the equipment due to differences in energy efficiency. Deciding what equipment to buy on price alone is analogous to buying a car on sticker price, without considering the cost of gasoline, fuel economy and maintenance.
This tension between short-term CapEx purchasing criteria and long-term OpEx TCO evaluations is easy to understand. Decisions for purchasing power systems typically are driven by two groups: (1) the real estate organization or project team, with a mandate to reduce capital expenditures and deliver a data center “on time and on budget”; and (2) data center operators responsible for reducing operational expenditures, including energy consumption and maintenance costs, over the life of the system.
As stated earlier, CapEx includes the cost of equipment and installation expenses. Oftentimes, equipment efficiency is specified at a minimum level, and the purchasing decision is based solely on meeting that minimum, with no evaluation credit given for exceeding it. Instead, the purchase price, how it compares to budgeted amounts, and the immediate availability of equipment often play a larger role in procurement decisions.
Even when senior management identifies OpEx as a key factor early in the planning of a power system, buying decisions made with the more immediate pressures of component price and availability are often made at the expense of OpEx criteria. When purchasing is outsourced to contractors, a divergence from management’s original intent is even more likely.
Reinforcing this short-term CapEx versus OpEx view is the fact that most of the “hardware” in a data center consists of servers and networking equipment. A typical data center TCO evaluation (Figure 2) for an asset like a server, with a life span of two to three years, is a very different calculation than one for the long-term critical power infrastructure of a complete data center with a life span of 10 to 15 years.

Savings Realized Through Use of TCO Models
Data center design and operational teams can ultimately realize millions of dollars in savings by applying TCO models to the evaluation and purchasing of components for critical power systems. This is particularly true for energy-intensive equipment, such as UPS systems, where electricity costs can easily exceed the purchase price in only a few years, and where energy efficiency gains of a few percentage points can add up to significant operational cost savings.
To realize the long term benefits and cost savings of a TCO evaluation and purchase model, data center managers have an opportunity to align their CapEx-centric purchasing team with the OpEx goals of their operational teams. TCO can become a common metric for both groups as they work together to create energy efficient data centers that return long-term value.
Endnotes:
1 Uptime Institute – May 2012
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

12:45p
Scenes from Data Center World Fall 2013
Data center managers enjoy the opening networking reception at Data Center World Fall 2013 in Orlando, Fla. (Photo: Colleen Miller)
ORLANDO, Fla. - AFCOM’s Data Center World autumn event, held in central Florida, kicked off this week with a theme of “aligning the data center with business strategy.” Conference attendees heard from industry leaders about today’s challenges and tomorrow’s innovations to meet the ever-growing demand for data center space, flexibility and connectivity. We present a photo overview of Day 1 of the event. See Data Center World Opening Day Highlights.

1:30p
What The Cloud Engineer Must Know
Cloud engineers must master a range of skills and technologies and understand how they relate.
Before the cloud boom, there were virtualization, storage, networking (WAN/LAN) and data center engineers. These folks were the pillars and pioneers of what we know as the modern cloud infrastructure. These are the people that helped build the foundation of the cloud in conjunction with application and software teams. Today, we still have these positions. However, new job titles have been created as well.
The growing demand for cloud services has created a rapidly growing need for cloud architects and engineers. A new IDC report sponsored by Microsoft and published by Forbes indicates that the demand for a cloud-ready IT workforce will grow by 26% through 2015. Furthermore, over the next two to three years, more than 7 million new cloud-related industry positions will become available globally. Here’s the reality: although this industry is expected to grow rapidly in the future, there is a demand for cloud engineers now.
The IDC report shows that IT hiring managers reported about 1.7 million cloud-related positions that were available but went unfilled. What was the problem? Candidates lacked cloud-related training and key certifications. Among IT-related fields, cloud computing jobs are growing the fastest. With that in mind, what does the modern cloud engineer need to know?
- Learn the language of business. Today’s modern organization relies heavily on IT and the services that technology provides. With so many companies moving to the cloud, the cloud engineer must understand the language of business. This means improving communication skills, becoming more involved in business meetings and understanding where technology can resolve business issues. Business and technology are forever intertwined. If you only know technology, your job prospects may be limited. However, if you understand where your organization is going, where cloud can help, and how you can deploy it, you can make a direct, positive business impact. Aside from better understanding your own organization, cloud architects must also evolve their project management skills. Because there are so many technologies involved, it’s important to understand where each piece fits and how the entire cloud deployment process can be properly controlled.
- Understand the logical and physical. Cloud engineers must understand a breadth of different technologies and platforms. Yes, there will still be experts within various areas, but the true architect has to understand the underlying technological foundations. This includes storage, networking, compute, user management, open-source solutions, security, virtualization, optimization options, application/services delivery, and much more. Cloud computing is not one product. Rather, it is a combination of key technologies which all work together to bring data and resources down to the end user. And so, cloud engineers can have specific areas of expertise, but they should always retain the knowledge of how their world interconnects with the rest of the cloud.
- Know about operations – DR, HA, Business Continuity. Because the cloud has become such an integral part of many organizations, cloud engineers must know how to create true infrastructure resiliency. This means creating an architecture where data can be replicated and good DR strategies are in place for critical workloads. Even today we see regular cloud outages. What if you were hosting your entire data center from an Amazon cloud and that cloud went down? Did you spread your workloads across availability zones? How is replication handled? (A minimal sketch of the availability-zone idea follows this list.) Creating a resilient cloud infrastructure is a key knowledge component that many cloud-ready organizations require. Downtime results in lost dollars, so knowing operations and how the cloud environment behaves is critical to designing a solid DR plan.
- Begin to think way, way outside the box. The cloud has created new industries and even sub-industries. We have new service models which strive to create the data center of everything. There are expanding areas around fog computing and big data. All of this is the result of an ever-expanding cloud environment. When working with cloud computing, it’s important to be creative in solving problems. It’s not always about throwing resources at the challenge. Virtualization, high-density computing, and various other technologies can dynamically optimize your cloud environment. The key is having knowledge of these solutions and understanding where they fit in. For example, instead of buying more disks, why not deploy a VM with Atlantis ILIO as a RAM-based storage repository for VDI or application virtualization? Or, instead of dishing out more cash for extra bandwidth, deploy virtual WANOP appliances from Silver Peak to greatly improve traffic. The point is that there are great technologies out there that cloud engineers must know about. These are the solutions that will save money, improve the infrastructure, and create a more resilient cloud.
- Applications, security, and the end-user. Because so much is being delivered via the cloud, there are more targets for security teams to worry about. The end-user now utilizes two or more devices to access their data, and the application delivery process is becoming even more important. As a cloud engineer, you must understand applications, how they interconnect with the cloud, and how they impact the end-user. Alongside all of that, there must be an understanding of security as well. There are new technologies revolving around next-generation security platforms which protect cloud-facing resources. For example, the NetScaler Application Firewall is a heuristic learning engine which understands and learns the normal behaviors of an application. Should there be an anomaly – a SQL injection, for example – the operation is halted immediately. Finally, there needs to be an understanding of how these services and applications affect the end-user. Ultimately, cloud engineers must understand that the end-user experience is one of the most important criteria for a successful cloud deployment.
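As a minimal illustration of the availability-zone point raised in the operations bullet above, the sketch below uses boto3 to spread identical instances across the zones of one AWS region so a single-zone outage does not take everything down. The AMI ID and instance type are hypothetical placeholders, and a real DR design would also address data replication, health checks and failover between regions.

```python
# Sketch: distribute identical instances across availability zones so that
# a single-zone outage does not take down the whole deployment.
# The AMI ID and instance type are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Discover the zones currently available in the region.
zones = [z["ZoneName"]
         for z in ec2.describe_availability_zones()["AvailabilityZones"]
         if z["State"] == "available"]

# Place one instance in each zone.
for zone in zones:
    ec2.run_instances(
        ImageId="ami-PLACEHOLDER",      # hypothetical AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": zone},
    )
    print(f"Requested an instance in {zone}")
```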
The cloud market will continue to grow and create new types of positions. Already we have engineers focusing on big data analytics, edge networking, and even creating the “Internet of Everything.” The consumer has driven a lot of demand for data and services that are available on any device, anytime and anywhere. As more devices connect to the cloud, more types of unified services will be created to meet that demand. This is where the future cloud architect and engineer will translate user and business needs directly into technological solutions.

2:00p
Top 10 Data Center Stories, September 2013
Denise Harwood of Google’s Technical Operations team works inside the company’s data center in The Dalles, Oregon. Our DCK story about Google’s data center investment topping $21 billion led the list of most popular stories in September. (Photo by Connie Zhou for Google)
Data center expansions by Google, Microsoft and eBay captured the attention of readers in September, along with a cloud outage for Amazon Web Services. Here are the most viewed stories on Data Center Knowledge for September 2013, ranked by page views.
- Google Has Spent $21 Billion on Data Centers – September 17, 2013
- Microsoft Data Center Expansion – September 9, 2013
- Microsoft to Build $250 Million Data Center in Finland – September 3, 2013
- Network Issues Cause Amazon Cloud Outage – September 13, 2013
- eBay Goes Live with its Bloom-powered Data Center – September 26, 2013
- Google Buys Former Gatorade Plant Near Oklahoma Data Center – September 23, 2013
- Intel Targets Cloud Data Centers with Atom C2000 Chips – September 4, 2013
- Vegas’ Switch Adds 19 MW of Colo as Supernap Sales Grow – September 6, 2013
- LinkedIn Raising $1.15 Billion; Will Invest in Infrastructure – September 4, 2013
- AWS and DevOps Skills Sought: Hiring Very Healthy According to Dice – September 11, 2013
Stay current on Data Center Knowledge’s data center news by subscribing to our RSS feed and daily e-mail updates, or by following us on Twitter or Facebook. DCK is now on Google+.

2:30p
Colocation Will Be a $10 Billion Market by 2017, Research Firm Says
During a panel discussion of colocation at Data Center World, a participant from Wyoming asks how he can make his geographic location attractive to possible data center development. At left is Jason dePreaux, Associate Director of IMS Research, who moderated the session. (Photo: Colleen Miller)
ORLANDO, Fla. – The colocation sector generated $6.5 billion in revenue in 2012, and is expected to grow to $10 billion by 2017, according to new data from IMS Research (an IHS Company). That total includes both retail colocation and wholesale data center operations, and reflects the strong growth in the market for multi-tenant data center space.
IMS Associate Director Jason dePreaux discussed his firm’s projections for market growth as part of a panel discussion on the colocation sector Monday at Data Center World. dePreaux said wholesale data center operators account for about $2 billion of the 2012 total, with retail colocation representing the remaining $4.5 billion.
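For context, those two figures imply a compound annual growth rate of roughly 9 percent over the five-year span; the quick check below is a back-of-the-envelope sketch, not a number from the IMS report.

```python
# Implied compound annual growth rate from the figures cited above:
# $6.5 billion in 2012 growing to $10 billion in 2017 (five years).
start_billion, end_billion, years = 6.5, 10.0, 5
cagr = (end_billion / start_billion) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # about 9% per year
```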
The session covered a range of topics relevant to the colocation market. Here’s a roundup:
More Power, Scotty! The colocation industry was once defined by networks, but panelists say that has shifted. “The colo business began as an outreach of the telecom industry,” said Jim Leach, Vice President of Marketing at RagingWire Data Centers. “What’s really driving this industry is power. Can I expand and contract power? Can I move power around the data center? Going forward it will be about power and how we provision it.”
Both Leach and John Dunaway, the Director of Data Center Sales at Data Foundry, cited the attractiveness of metered power, in which customers are billed based on usage rather than the full capacity of their circuit.
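To illustrate why metered power appeals to customers, here is a toy comparison of the two billing models for a single cabinet. The circuit size, average draw and rate are invented for illustration; real colocation pricing structures vary widely and often include separate space and cross-connect charges.

```python
# Toy comparison of metered vs. full-capacity power billing for one cabinet.
# All numbers are invented for illustration; real colo pricing varies widely.

CIRCUIT_KW = 5.0          # provisioned circuit capacity
ACTUAL_DRAW_KW = 2.8      # average power the gear actually pulls
HOURS_PER_MONTH = 730
RATE_PER_KWH = 0.12       # assumed blended $/kWh charged by the provider

# Full-capacity (breakered) billing: pay for the whole circuit regardless of use.
full_capacity_bill = CIRCUIT_KW * HOURS_PER_MONTH * RATE_PER_KWH

# Metered billing: pay only for the energy actually consumed.
metered_bill = ACTUAL_DRAW_KW * HOURS_PER_MONTH * RATE_PER_KWH

print(f"Full-capacity billing: ${full_capacity_bill:,.2f}/month")
print(f"Metered billing:       ${metered_bill:,.2f}/month")
```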
Watch Out for Water Fees: Water management is a growing concern for data center and colocation operators. Now there’s a new component in the water equation: up-front access fees from utilities. “The water authorities in many areas are introducing connection fees,” said Steve Spinazolla, Vice President at RTKL Associates, an architectural firm that works on many advanced data centers. “Now you really have to do your due diligence up front, and not just look at power.”
Spinazolla said one data center project in Denver had been asked to pay a $5 million connection fee.
The Colo-Cloud Connection: The panel was asked whether cloud computing represented a threat to colocation services. Panelists said that the two are complementary, although providers are taking a variety of approaches.
“All these clouds have to sit in a data center,” said Dunaway. “Several years from now, every colo will have some type of cloud just to keep people from leaving the building.”
Some colocation providers are partnering with cloud providers or hardware vendors to offer “turnkey clouds” for their colo tenants that desire to add a cloud component. Some providers, like RagingWire, want to ensure that they’re not competing with their customers.
“Do you want your colo provider to also be your cloud provider?” asked Leach. “We’ve made the strategic decision that we’re a colocation provider, not a cloud provider. We fundamentally believe the two businesses are very different. Colo is the business we want to be in, and cloud is the business we want to enable for our customers.”
Compliance Matters: Regulatory compliance remains a key concern for many end users. To address diverse customer scenarios, colocation providers must support many different compliance requirements.
“The majority of these compliance requirements are checkmarks for your CTOs,” said Dunaway. “We try to take a broad stroke on compliance to address a broad range of requirements. The bulletproof glass in my entryway may only appeal to one out of 10 customers. But for that one customer, it really matters.”
Leach said customers can benefit from the relationship by including their colo providers’ compliance audit documents with their own compliance submissions.
Hottest Markets: Northern Virginia and Dallas are among the hottest geographic markets, according to Todd Cushing, Senior Consultant with the Technology Practice Group at CBRE. The Chicago area is shy of supply, he added, while one of the hottest regional markets is Minneapolis/St. Paul.
Understanding regional demand patterns in the colocation market is essential, said Leach. “To look at the US colo market collectively is really difficult,” he said. “You really need to look at each of these areas as micro-climates. Each of them has its own market dynamics that can be extremely important to your decision.”
“The majority of our customers come from within 100 miles of our data centers,” said Leach. “The data center industry has always been a location-driven industry. But with smart operations folks and good remote management tools, that’s starting to change.”
Leach is based at the RagingWire site in Ashburn, Virginia, a market where customers have an unusual variety of options.
“Coming to Ashburn, Virginia for data centers is like going to Napa Valley for wine,” said Leach.

3:04p
Savvis CenturyLink Plans Minneapolis Data Center
Savvis, the cloud infrastructure arm of CenturyLink, is the latest data center provider to expand into Minneapolis/St. Paul, one of the hottest regional markets in the country. Savvis today announced plans to open a new data center in Shakopee, Minn., in spring 2014.
To build the facility, Savvis is teaming with Compass Datacenters, which is seeing increased traction for its strategy of building modestly-sized wholesale facilities in secondary markets. Compass is building partnerships in the service provider market, where it has teamed on projects with Windstream (Raleigh and Nashville), Iron Mountain (Boston area) and now Savvis CenturyLink.
The Savvis MP2 Minneapolis data center will support 4.8 megawatts of IT load and offer 100,000 square feet of raised floor space. It will open with an initial 1.2 megawatts and 13,000 square feet of raised floor space.
“The Twin Cities area has long served as a hub of activity to retailers, consumer brands, healthcare and media companies – all of which need more convenient and secure ways to access, maintain and manage their rapidly expanding data,” said Jeff Von Deylen, president, Savvis. “Our investment in the MP2 data center signifies our strong commitment to providing businesses in the Minneapolis-St. Paul region with access to world-class colocation, cloud and managed-hosting services.”
Compass will develop the data center on 10 acres of property it owns in Shakopee, and lease the facility to Savvis.
“We are excited to partner with CenturyLink’s Savvis organization, which combines a global leadership position in data center excellence with a deep understanding of Minnesota market needs through the existing local CenturyLink presence,” said Chris Crosby, chief executive officer of Compass Datacenters. “Working with Savvis to quickly facilitate expansion in the Minneapolis-St. Paul market, we’ve developed a streamlined strategy for future expansion and response to the growing demands of businesses in the region.”
Savvis operates more than 50 data centers worldwide, with more than 2.4 million square feet of gross raised floor space throughout North America, Europe and Asia. It was acquired by CenturyLink in 2011.