Data Center Knowledge | News and analysis for the data center industry
Thursday, May 22nd, 2014
11:30a | Study: Facebook’s Data Center Created Thousands of Jobs in Oregon

Facebook’s data center construction over five years has created about 650 jobs in Central Oregon and nearly 3,600 jobs in the State of Oregon overall. Construction of one of the social network’s most important nerve centers has also led to $573 million in capital spending statewide.
That is according to economic consultants ECONorthwest, who recently completed a Facebook-commissioned study of the project’s economic impact. The company’s campus in Prineville now has two massive primary data centers and a smaller “cold-storage” facility.
Data center construction in the U.S. has been booming over the past several years, and state and local governments have been competing to attract data center projects to their jurisdictions, primarily by offering tax breaks and by making it easier for the companies to go through planning processes. While day-to-day operations of a data center do not require a big staff, local economies benefit from activity around the massive construction projects.
Personal income from Facebook’s construction in Prineville between 2009 and 2013, for example, generated more than $6.5 million in state income taxes, according to the ECONorthwest report. The social network’s ongoing non-construction-related data center operations there in 2013 alone were linked to $45 million in economic output and more than 200 jobs in Central Oregon, and $64.7 million in output and about 270 jobs in the state overall.
Its 2013 operations were associated with about $500,000 in property taxes and about $750,000 in state income taxes.
Facebook also makes charitable donations in the community. Since 2011, the company has awarded about $1 million to Crook County schools and non-profits through grants and local donations.
“We’re pleased to continue to have a positive impact on the economies of Prineville, Crook County and Central Oregon, from construction activity to full-time employment, direct job creation and indirect ‘multiplier spending effects’ to our charitable giving initiatives,” Facebook representatives wrote in a statement announcing results of the study.
Another benefit of the project, one the study did not measure, is that Facebook put Prineville on the map as a potential data center location for other companies. Several years after Facebook’s arrival, Apple built a massive data center in Prineville too.
The first building on Facebook’s Prineville campus was the first data center the social network designed and built for itself. Prior to that, it leased space from commercial data center providers. Today, it also has its own data centers in Forest City, North Carolina; Altoona, Iowa; and Luleå, Sweden.

12:00p | Metacloud to Integrate VXLAN into OpenStack, Plans to Open Source the Code

Private and hosted OpenStack cloud provider Metacloud announced plans to integrate Virtual Extensible LAN (VXLAN) into its cloud stack.
VXLAN is a networking technology that tries to fix scaling problems associated with large clouds. Metacloud plans to contribute the code for its VXLAN-for-OpenStack solution to the open source cloud software development community. The contribution may advance the state of virtualized networking in OpenStack, enhancing big data workload capabilities.
“We’re contributing this back,” said Sean Lynch, Metacloud founder and CEO. “Everyone that uses Nova network — assuming our contribution is accepted — will have a very robust high speed networking option. Neutron (an OpenStack network project) isn’t there yet, and the Nova networking project has been officially reopened and is actively being contributed to yet again.”
Traditional multi-tenant OpenStack deployments are built on Layer 2 isolation. VXLAN instead allows all compute nodes and hypervisors to have Layer 3 relationships, routing traffic without blocking any switch ports, which is important for latency-sensitive workloads.
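To make the Layer 2-over-Layer 3 idea concrete, here is a minimal Python sketch of the encapsulation VXLAN performs: it builds the 8-byte VXLAN header defined in RFC 7348 and wraps a raw Ethernet frame so it can travel as ordinary UDP traffic on port 4789. This is purely illustrative and not Metacloud’s implementation; the VTEP address and inner frame are placeholders, and real deployments do this in the hypervisor’s virtual switch or in hardware.

```python
import socket
import struct

VXLAN_PORT = 4789            # IANA-assigned UDP port for VXLAN
VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid

def vxlan_encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    """Wrap a raw Layer 2 frame in an 8-byte VXLAN header (RFC 7348).

    The result is sent as an ordinary UDP payload, so a tenant's Layer 2
    traffic can be routed across a Layer 3 network instead of being
    confined to a VLAN.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # Byte 0: flags, bytes 1-3: reserved, bytes 4-6: 24-bit VNI, byte 7: reserved.
    header = struct.pack("!B3x", VXLAN_FLAG_VNI_VALID) + struct.pack("!I", vni << 8)
    return header + inner_ethernet_frame

if __name__ == "__main__":
    # Tag a dummy inner frame with tenant VNI 5001 and send it toward a
    # hypothetical remote VXLAN tunnel endpoint (VTEP); the address is a placeholder.
    packet = vxlan_encapsulate(b"\x00" * 64, vni=5001)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet, ("192.0.2.10", VXLAN_PORT))
```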
“The issue is OpenStack’s NovaNet model does things where tenants sit on VLANs and use Layer 2 networking semantics to separate tenants,” Lynch explained. “What happens when you deploy this purely at Layer 2 is switch ports get blocked to ensure you don’t have loops on the network. It’s all well and good from a high-availability standpoint. This type of redundancy has been around for 25-plus years. It’s OK for general cloud computing needs, but for big, scale-out clouds and stuff like Hadoop and Cassandra, it’s not enough.”
Metacloud is working with Cumulus Networks, a startup that sells a Linux-based network operating system for bare-metal networking switches. Metacloud will integrate its solution on Cumulus’ OS. The integrated solution will be an important proof point, as it will validate the potential of open networking on Linux.
“The combination of Cumulus Linux and Metacloud’s VXLAN implementation is a clear example of why open networking is more of a priority now than ever,” said Nolan Leake, co-founder and CTO of Cumulus Networks.

2:00p | Environmental Monitoring: You Can’t Manage What You Don’t Monitor

Data center administrators are tasked with running an optimally efficient environment. This means understanding resource utilization, how equipment is being deployed and, of course, the environmental impact of a new high-density, multi-tenant data center platform.
Here’s the challenge: the modern data center is far more complex than it used to be, and because of that, data centers are taking the heat, literally and figuratively. With equipment generating enormous amounts of thermal energy, data centers continue to shovel operational funds into cooling as energy costs steadily climb. Environmental optimization is so demanding that Emerson Network Power estimates cooling and energy make up 44% of the average data center’s cost of ownership. These levels of energy consumption have drawn considerable criticism in the media, particularly over the environmental impact of the data center industry’s staggering 200% increase in power consumption between 2000 and 2005 and the more modest, but nonetheless concerning, 36% increase between 2005 and 2010.
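The 44% figure bundles cooling and power overhead into total cost of ownership. As a rough back-of-the-envelope illustration (not taken from the RF Code whitepaper), the Python sketch below shows how an assumed IT load, PUE and electricity tariff translate into an annual energy bill and how much of that bill is overhead that better monitoring could attack.

```python
def annual_energy_cost(it_load_kw: float, pue: float, price_per_kwh: float) -> dict:
    """Rough annual energy cost split for a data center.

    Illustrative only: assumes a constant load year-round and treats all
    non-IT power (the PUE overhead) as cooling, distribution losses, etc.
    """
    hours_per_year = 8760
    total_cost = it_load_kw * pue * hours_per_year * price_per_kwh
    it_cost = it_load_kw * hours_per_year * price_per_kwh
    return {
        "total_cost": round(total_cost, 2),
        "it_cost": round(it_cost, 2),
        "overhead_cost": round(total_cost - it_cost, 2),
        "overhead_share": round(1 - 1 / pue, 3),
    }

# Example with assumed figures: 1 MW of IT load, PUE of 1.8, $0.07 per kWh.
print(annual_energy_cost(it_load_kw=1000, pue=1.8, price_per_kwh=0.07))
```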
In this whitepaper from RF Code, we learn how the addition of the right intelligent environment monitoring system will replace a seemingly inescapable chain of costs and inefficiencies with savings and smarter management. The right system provides the insight needed to lower energy consumption.
When it comes to understanding the direct benefits of a powerful monitoring and management solution, consider the following:
- Understanding Cost Savings: Lower Energy Expenditure and Beyond
- Incorporating Lower Installation Costs
- Delivery of Lower Energy Expenditures
- Improving Infrastructure Management
- How Monitoring Will Lower Downtime Risk
- Creating a Smaller Carbon Footprint
Your data center will only grow in importance to your organization. Download this whitepaper today to learn how advanced environmental monitoring technology can optimize data center infrastructure, reduce downtime risk, and provide the metrics needed to publicly demonstrate that your company has lowered its carbon footprint. Ultimately, all of these benefits help ensure a good return on investment (ROI) and savings that continue to grow.

4:00p | Three Key Elements for Successfully Migrating Data Into the Cloud

Kevin Leahy is the group general manager for the Data Center Business Unit at Dimension Data, with expertise in the areas of cloud, service management, virtualization and IT optimization.
It’s one thing to handle simple test and development workloads or to move non-mission-critical data to the cloud. When the data is sensitive or mission-critical, enterprises are right to be concerned about its ongoing security and reliability, and they should carefully consider how they will migrate that data into, and out of, the cloud environment whenever necessary.
For starters, companies need to know what tools are available to automate the process of moving data into the cloud. They should also establish controls and policies around what happens once the data is outside their network, and will need answers to the following questions before getting started:
- Who should have access to the data?
- How do you scale resources to support the data or applications?
- How will scalability work from an integration perspective?
Automation, control and integration are the three important areas enterprise architects need to understand before migrating applications or data into the cloud. By focusing on these, enterprises can lay the groundwork for addressing many of the other common high-level concerns around data migration, such as security, performance, privacy and data ownership.
Key Element: Automation
Automation is first about orchestration. Understanding how to move applications into the cloud, how orchestration works, and how both stack up against enterprise and architectural requirements is what makes automation possible. For enterprise IT to truly serve the needs of end users and reduce rogue usage of public cloud services, CIOs must embrace automation and orchestration that extend self-service provisioning down to those end users. While that is the goal, the first step is usually the hardest: defining the policies that can be automated.
For many organizations, those policies do not exist, and without them in place rogue IT may or may not be an issue. Automating the policies is complex, but with the DevOps tools and APIs available today, organizations can usually start with basic rules and make them more sophisticated over time with event triggers and real-time analysis. The ideal end result is a fully self-provisioning model that autoscales and uses cloud resources efficiently. Ultimately, only the applications and data the company wants in the cloud, governed by the rules under which the company operates, should be moved there.
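As a minimal sketch of what one of those basic, codified policies might look like, the example below evaluates a single invented CPU-utilization rule once per cycle. The metric name and thresholds are assumptions for illustration; a real deployment would feed this from monitoring events and act through the cloud provider’s or orchestration tool’s API.

```python
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    """A basic autoscaling rule of the kind described above."""
    metric: str
    scale_up_above: float
    scale_down_below: float
    min_instances: int
    max_instances: int

def decide(policy: ScalingPolicy, current_value: float, current_instances: int) -> int:
    """Return the desired instance count for one evaluation cycle."""
    if current_value > policy.scale_up_above:
        return min(current_instances + 1, policy.max_instances)
    if current_value < policy.scale_down_below:
        return max(current_instances - 1, policy.min_instances)
    return current_instances

# Example: a CPU-utilization policy evaluated against a reading of 87 percent.
cpu_policy = ScalingPolicy("cpu_percent", scale_up_above=80, scale_down_below=30,
                           min_instances=2, max_instances=10)
print(decide(cpu_policy, current_value=87, current_instances=4))  # -> 5
```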
Key Element: Control
Control of resources such as servers, storage and networking gear is key to ensuring security no matter where the data is located. That includes understanding the role-based access controls (RBAC) that dictate what each person, as distinct from administrators, can touch and control within a cloud environment, as well as what type of control the enterprise retains over its resources. Control is about gaining visibility into, and monitoring, usage and activities. It is achieved through authentication, including multi-user and identity-based access management, and depends on how you enable single sign-on, for example using SAML and/or LDAP.
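In practice RBAC is enforced by the identity layer (for example, a SAML- or LDAP-backed directory), but the underlying model reduces to a mapping from roles to permitted actions. The roles and permissions in this short Python sketch are invented for illustration only.

```python
# Minimal role-based access control check; roles and permissions are hypothetical.
ROLE_PERMISSIONS = {
    "cloud_admin": {"provision", "deprovision", "read_metrics", "modify_network"},
    "developer":   {"provision", "read_metrics"},
    "auditor":     {"read_metrics"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("developer", "provision")
assert not is_allowed("auditor", "modify_network")
```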
Control must enforce the policies the company has defined, and the more automated that enforcement is, the fewer escapes there are and the easier it is to demonstrate compliance. A simple example is ensuring private data is removed, anonymized or encrypted before it is moved to the cloud. Equally, backup and recovery copies and archival storage are part of the overall data architecture, and ensuring those copies are protected in the right places, and deleted when required, is all part of control.
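One way to automate that remove-or-encrypt control is a pre-upload scrubbing step. The sketch below, with an invented field classification and salt, pseudonymizes private fields by salted hashing; a production pipeline would more likely encrypt with managed keys, but the shape of the control is the same.

```python
import hashlib

PRIVATE_FIELDS = {"name", "email", "ssn"}  # assumed data classification for the example

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace fields classified as private with salted hashes before upload."""
    cleaned = {}
    for key, value in record.items():
        if key in PRIVATE_FIELDS:
            cleaned[key] = hashlib.sha256((salt + str(value)).encode()).hexdigest()
        else:
            cleaned[key] = value
    return cleaned

print(pseudonymize({"name": "Jane Doe", "region": "EMEA"}, salt="example-salt"))
```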
Key Element: Integration
Integration is a complicated and often underestimated element of data migration. Getting APIs, enterprise data, applications and systems to work seamlessly with the existing network takes expertise that is not always available within the IT department. In fact, as cloud use has matured beyond simple web-facing applications and development-and-test environments to complex, interdependent applications, integration has become the biggest cost and time factor in implementing hybrid clouds. And while tools have advanced, the application management rules in place are usually built around on-premise response times and assume security handling issues have already been resolved.
Consider what consulting and professional services, as well as managed services, may be required to speed a successful data migration to the cloud. Integration tools worth exploring include configuration management tools such as Chef and Puppet, and other DevOps tools that tie application needs to the underlying infrastructure. Cloud management platforms that balance resources across several cloud providers can be used to stitch together disparate platforms, but standards adoption is not yet at the point where that approach performs as well as a well-architected hybrid platform.
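Because many application management rules quietly assume on-premise response times, one simple pre-migration integration check is to measure latency to the current service and to its prospective cloud-hosted counterpart. The sketch below uses only the Python standard library; the two URLs are placeholders for illustration.

```python
import time
import urllib.request

def median_latency_ms(url: str, samples: int = 5) -> float:
    """Time a few HTTP round trips to an endpoint and return the median in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as response:
            response.read()
        timings.append((time.perf_counter() - start) * 1000)
    timings.sort()
    return timings[len(timings) // 2]

# Placeholder URLs: compare an on-premise service against its cloud-hosted copy.
for label, url in [("on-prem", "http://app.internal.example/health"),
                   ("cloud", "https://app.cloud.example/health")]:
    print(label, round(median_latency_ms(url), 1), "ms")
```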
The Result: Business Agility
Migrating data and applications into the cloud can provide better data access and availability and, depending on the types of services you implement, can deliver added protection, security and reliability for your sensitive and mission-critical data. The three key elements of automation, control and integration required to migrate your data successfully, mission-critical or otherwise, add up to one significant overall benefit: business agility.
It is well known that the cloud delivers flexibility, scalability, a pay-as-you-go consumption model, and the ability to outsource some or all infrastructure and IT functions for improved efficiency and, typically, better ROI. Just as important, migrating to the cloud frees critical and valuable resources from day-to-day IT management and maintenance to focus on more strategic business goals. This manifests itself in a number of positive outcomes beyond reducing upfront capex and ongoing opex.
IT and engineering can instead focus on strategic goals such as speeding time to market by shortening test and development cycles, delivering more robust services, improving existing products, and creating new ones. The result is satisfied customers and, ultimately, increased growth and profitability for your enterprise.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

5:00p | AMD Launches 2nd-Gen Embedded R-Series APUs and CPUs

AMD announced the second generation of its embedded R-series accelerated processing unit (APU) and CPU family (previously codenamed “Bald Eagle”) for embedded applications. Built on AMD’s Steamroller CPU architecture and the Graphics Core Next (GCN) graphics architecture, the new R-series APUs and CPUs are designed for mid- to high-end visual and parallel compute-intensive embedded applications, with support for Linux, real-time operating systems (RTOS) and Windows.
With graphics performance and performance-per-watt as the focus, the new R-series APU is the first embedded processor to incorporate Heterogeneous System Architecture (HSA) in its design, enabling applications to distribute workloads to the best-suited compute element, whether CPU, GPU or a specialized accelerator such as a video decode engine.
At its 2013 Developer Summit, AMD discussed APU product roadmaps and the HSA architecture with a spotlight on partner collaboration and empowering developers. The same focus applies to the embedded market, where the AMD R-series, aided by AMD’s latest Graphics Core Next (GCN) architecture, will vie for market share against Intel’s Haswell chips.
Through an agreement with Mentor Graphics, and with AMD being a gold-level member of the Yocto Linux collaboration project, embedded systems developers now have access to customized embedded Linux development and commercial support on the second generation AMD Embedded R-series family. The new APUs feature dual-channel memory with error-correcting code (ECC), DDR3-2133 support and configurable TDP (thermal design power) for system design flexibility to optimize the processor at a lower TDP. AMD has pledged a 10-year support lifetime for the R-Series chips.
“When it comes to compute performance, graphics performance and performance-per-watt, the second-generation AMD Embedded R-series family is unique in the embedded market,” said Scott Aylor, corporate vice president and general manager of AMD’s Embedded Solutions division. “The addition of HSA, GCN and power management features enables our customers to create a new world of intelligent, interactive and immersive embedded devices.”
The embedded market AMD is targeting has a diverse set of needs. For embedded visual applications such as gaming machines and digital signage, the new R-series APUs provide flexibility and scalability, with support for up to nine independent displays and 4K resolution when combined with the new AMD Radeon E8860 embedded discrete GPU. For medical imaging device vendors, the new APUs deliver high image-transformation performance and low latencies in a low-power, highly integrated package.
The advanced parallel-compute graphics engine in the new APUs gives networking companies a high-performance GPU for accelerating parallelizable functions such as deep packet inspection, encryption and decryption, search, and compression and decompression, leaving more CPU headroom for customers to increase feature velocity.
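The APUs themselves are programmed through OpenCL and HSA toolchains rather than Python, but the class of work described here, independent chunks processed in parallel, is easy to illustrate. The CPU-only sketch below splits a payload into chunks and compresses them across worker processes; the payload and chunk size are arbitrary.

```python
import zlib
from concurrent.futures import ProcessPoolExecutor

def compress_chunk(chunk: bytes) -> bytes:
    """CPU-heavy, independently parallelizable work: compress one chunk."""
    return zlib.compress(chunk, 9)

def parallel_compress(data: bytes, chunk_size: int = 1 << 20) -> list:
    """Split a payload into chunks and compress them across worker processes."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor() as pool:
        return list(pool.map(compress_chunk, chunks))

if __name__ == "__main__":
    payload = b"example payload " * 500000  # roughly 8 MB of repetitive data
    compressed = parallel_compress(payload)
    print(len(compressed), "chunks compressed")
```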
10:00p | Savvis Colo Roots Remain Core to CenturyLink’s Strategy

With all the marketing and publicity outreach CenturyLink Technology Solutions has been doing around its cloud services, it is easy to lose sight of the fact that Savvis, the company CenturyLink acquired to kick off its foray into the infrastructure services market, was and still remains very much about providing traditional data center services.
The company’s aggressive data center expansion this year is a good reminder that colocation is still at the core of what became CenturyLink’s infrastructure services business. Drew Leonard, vice president of colocation product management at CenturyLink, said the company was adding about 20MW of data center capacity across nine locations in 2014. Most of it is in North America, and some is in Europe and Asia.
Chasing large colocation deals
And it is not cloud that is driving this expansion. It is demand for colocation space. “There’s more large deals on the table,” Leonard said. “What traditionally would be earmarked for wholesale type of providers [is] being shopped through retail providers as well.”
CenturyLink brought nearly 10MW online in Phoenix, 2MW each in Toronto and Orange County (Southern California), and 1.2MW in Washington, D.C., to name some of this year’s expansion projects. It is expanding to avoid running out of capacity in any of those markets (or “going dark,” as Leonard put it) and missing opportunities as a result.
“If you don’t plan ahead and have an available 1MW, an available 2MW [in a given market], one deal fills you up,” he said. “And now, all of a sudden, you go dark in the market.”
Hybrid infrastructure is key
CenturyLink’s differentiation lies in providing core colocation space and packaging it with managed services, cloud, or both. Joel Stone, the company’s vice president of global data center operations, said its value proposition was flexibility.
CenturyLink’s $2.5 billion acquisition of Savvis in 2011 gave the business the firepower needed to capitalize on that demand. “CenturyLink has access to a lot of capital,” Stone said.
It has enough capital to invest in new facilities and to modernize existing data centers so the company is not stuck with dated infrastructure. “That’s a challenge you have with any provider that’s been around for as long as we have,” he said.
Expanding quickly but carefully
Location decisions for entering new markets are dictated by CenturyLink’s existing footprint. The company prefers to add data center capacity in areas where it already has network connectivity (CenturyLink operates a Tier 1 backbone) and some sales presence with knowledge of the local market.
In locations without existing footprint, the company prefers to partner with companies that have local presence rather than go in on its own. The point is to expand while keeping the cost down as much as possible, Leonard said.
Recently, CenturyLink began standardizing the design of the data centers it builds on its own to further cut cost and time to market. But it is not afraid to experiment with other innovative models. In Phoenix, for example, the company used data center modules by IO, and its most recently launched data center, in Minnesota, was built by Compass Datacenters, a Dallas-based developer that uses its own standard design to build 1.2MW pods for customers in second-tier North American markets.