Data Center Knowledge | News and analysis for the data center industry
Thursday, October 1st, 2015
12:00p
North Carolina Makes Data Center Tax Breaks Easier to Get
After many years of providing generous tax exemptions to companies that build and occupy massive data centers on their own, North Carolina has made tax breaks much more accessible to smaller data center users.
On Wednesday, Governor Pat McCrory signed an economic-development bill that, among other things, extends tax breaks to companies that occupy multi-tenant data centers. The legislation also lowers the minimum investment data center users must commit in order to qualify for tax breaks, from $150 million to $75 million.
Numerous other states, including neighboring Virginia, provide tax incentives to data center tenants, so the new law makes North Carolina more competitive in the contests states frequently wage to attract big data center construction projects. Tax breaks have become one of the most important factors companies weigh during data center site selection, along with electricity rates, availability of fiber-optic network infrastructure, climate, and skilled workers.
Data Centers as Employment Drivers Controversial
At least 23 states have data center tax incentives written into law today, according to a recent analysis by the Associated Press. Another 16 states have provided incentives for data center projects using general economic-development programs. Collectively, states have doled out about $1.5 billion worth of data center tax breaks over the past 10 years, according to AP.
Government economic-development officials are often eager proponents of data center tax breaks, but the economic impact of a data center on its surrounding area is controversial. While data centers are big and expensive construction projects, they don’t employ many people once built, yet officials often paint tax incentives for data centers as a way to drive job growth.
Multi-Tenant Facilities Create More Jobs
McCrory’s announcement was no exception. The data center incentives were part of a larger piece of economic-development legislation called the NC Competes Jobs Plan.
By extending tax breaks to data center tenants, however, the bill encourages construction of the type of data center that can create more jobs than a single-user facility. Besides people employed by the data center operator, tenants often have their own staff on site or use third-party contractors to look after their equipment.
“As the provider, our number of direct employees will probably be comparable to that of a large single-user facility,” Todd Aaron, co-president at Sentinel Data Centers, operator of a large multi-tenant facility in Durham, North Carolina, said. “Each of our users then typically brings many of their own employees.”
He declined to say how many tenants Sentinel had in the 420,000-square-foot Durham data center, but said the first phase, which contains 50,000 square feet of raised floor, is nearly full, occupied by “several” companies.
Tax Breaks on Equipment, Electricity Contracts
Besides lowering the investment threshold to $75 million, the bill allows investment by all tenants in a single data center to be combined to meet the minimum. This means even customers with the smallest footprint in a facility can benefit from the tax breaks.
The savings can be substantial. The bill exempts qualifying companies from the 7-percent sales tax on electricity purchases, which for a 1-megawatt user can translate to $3,000 to $4,000 in savings per month, according to Aaron.
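Those figures are easy to sanity-check. A quick sketch of the arithmetic, assuming a typical industrial electricity rate of 6 to 8 cents per kWh (the rate is an assumption, not a number from the bill):

```python
# Rough check of the quoted savings, assuming an industrial electricity rate
# of 6-8 cents/kWh (an assumption; the rate is not stated in the article).
HOURS_PER_MONTH = 730   # ~24 hours x 365 days / 12 months
SALES_TAX = 0.07        # North Carolina's 7-percent sales tax on electricity

def monthly_tax_savings(load_mw: float, rate_per_kwh: float) -> float:
    """Sales tax avoided per month by a data center load running around the clock."""
    kwh = load_mw * 1000 * HOURS_PER_MONTH
    return kwh * rate_per_kwh * SALES_TAX

for rate in (0.06, 0.08):
    print(f"1 MW at ${rate:.2f}/kWh -> ${monthly_tax_savings(1.0, rate):,.0f}/month saved")
# -> roughly $3,066 to $4,088 per month, consistent with the $3,000-$4,000 quoted
```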
It also provides a sales-tax exemption for property and equipment purchases. A single tenant can spend tens of millions of dollars buying and installing servers, storage arrays, network switches, and electrical equipment.
Stimulus for North Carolina’s Colo Market
North Carolina has been successful in attracting a number of major single-user data centers, including massive facilities built by Facebook, Apple, and Google, but it hasn’t been a big multi-tenant data center market. The new incentives may make its multi-tenant market more active.
“Kudos to North Carolina, because it will open up demand,” Bo Bond, managing director at the commercial real estate brokerage Jones Lang LaSalle, said. He expects the incentives to produce both demand for data center services as well as supply, because data center providers will want to take advantage of more favorable market conditions.
The lower investment commitment required to qualify for incentives and the ability for tenants to pool investment will, in Bond’s opinion, have a positive impact on the market. “That’s a game changer, because you can combine the investment,” he said.
Qualifying Threshold One of Lowest in US
While it isn’t the lowest investment threshold required to receive data center tax breaks in the country, North Carolina’s $75 million minimum is lower than in many other states, where the threshold can be hundreds of millions.
To qualify for Ohio’s sales tax breaks, for example, a data center developer has to invest at least $100 million, according to the AP report. The investment threshold in Tennessee is $250 million, and Texas offers sales-tax exemptions to developers that put at least $200 million into their data center sites there.
Texas in June enacted another set of data center tax breaks for especially large builds. It provides a temporary exemption from sales and use tax on personal property for data centers that contract for at least 20 megawatts of power and spend at least $500 million. The month after the legislation was enacted, Facebook announced it would build a $1 billion data center in the Dallas-Fort Worth area.
Developers usually have to make other commitments to qualify as well. Many states attach a minimum number of new jobs that has to be created. Some also have a facility size requirement.

3:00p
CyrusOne CFO Kim Sheehy to Leave
CyrusOne CFO Kimberly Sheehy, who oversaw the Carrollton, Texas-based data center service provider’s IPO and conversion to Real Estate Investment Trust status in 2013, is leaving the company.
CyrusOne has appointed Gregory Andrews, a seasoned REIT CFO, as her replacement, the company announced this week. Andrews, 53, starts on October 19, with Sheehy staying on as an employee until the end of the year to help with the transition and then as a consultant to help prepare the company’s annual earnings report for this year.
The company did not specify a reason for the change. Its CEO Gary Wojtaszek said in a statement that CyrusOne was grateful for Sheehy’s contributions and leadership in the IPO and REIT conversion and after.
She joined CyrusOne as chief administrative officer in 2011, coming from Cincinnati Bell, which acquired CyrusOne in 2010 before spinning it out three years later. She had been with Cincinnati Bell since 1996.
Most recently, Sheehy’s replacement Andrews was CFO at Ramco-Gershenson Properties Trust for about five years. Prior to that, he served for three years as CFO at Equity One. Both Ramco-Gershenson and Equity One are publicly traded REITs.
His annual base salary will be $425,000, plus stock options and bonuses, according to documents filed with the SEC.
CyrusOne reported $89 million in revenue for the second quarter, up 9 percent year over year. It had about 1.35 million square feet across 27 facilities in its data center portfolio as of the end of the quarter.
Its shares debuted on NASDAQ in January 2013 at $19 per share and, after a somewhat rocky first half of 2013, have been rising steadily. They were trading at $32.66 in after-hours trading Wednesday.
CyrusOne was one of three data center providers that converted to REITs in recent years. Its competitor QTS floated on NYSE and flipped to REIT status in October 2013. Equinix started operating as a REIT early this year, but it has been a publicly traded company since 2000.
By converting to REITs, data center providers substantially reduce their corporate tax burden. Other major data center services players, Digital Realty Trust and CoreSite, have been operating as REITs for years.
QTS shares debuted in 2013 at $21 per share and have generally risen in price since, trading Wednesday at more than $43 per share.
Equinix started 2013 at $217 per share and fell to $180 per share by the end of the year but has since been steadily rising, trading today at above $273 per share.

3:30p
Mirantis Makes OpenStack Distribution More Enterprise-Friendly
Marching toward hardening OpenStack to the point where the average IT organization can deploy it, Mirantis released version 7.0 of its distribution of the open source cloud software.
Amar Kapadia, senior director of product management for Mirantis, said this latest version of OpenStack from Mirantis, based on the Kilo release of OpenStack, is specifically geared toward enabling IT organizations to achieve operational stability using OpenStack.
“We’re trying to provide stability at scale,” said Kapadia. “We’re driving out all the intermittent bugs.”
Mirantis has also added several improvements to the Murano app catalog in OpenStack, including support for pluggable infrastructure orchestrators, improved role-based access controls and support for VMware and Windows apps.
Everything can look perfect on the first day of deploying any technology, Kapadia said, but the real issues IT organizations need to address start manifesting themselves on the second day after installation. For that reason, Mirantis in this release focused specifically on change management, updates, upgrades, monitoring, diagnosis, and workload deployment.
Mirantis has also improved its automated test suites, which now include tests for the Murano, Sahara, and Ceph components of OpenStack, and has back-ported select fixes from the Liberty release of OpenStack.
Mirantis has also enhanced its Fuel GUI to enable IT organizations to filter and sort nodes, which makes it possible for customers with large Mirantis OpenStack deployments to see their entire cloud on one dashboard.
Finally, Mirantis now supports NSX-v and the vSphere Distributed Switch, which interoperate with other software-defined networks (SDNs) based on Neutron, and there is also now a plug-in for Hadoop. Mirantis is promising to support plugins for SDNs from Calico and storage systems from SolidFire in the next few weeks.
In general, Kapadia said, Mirantis is not trying to use OpenStack to usurp platforms such as VMware, but rather support specific classes of cloud-native workloads that Mirantis contends will run better on OpenStack.
Kapadia said that as the only remaining independent supplier of an OpenStack distribution, Mirantis is also free from any legacy IT products or technologies that would inhibit its ability to keep pace with the latest OpenStack innovations, including unified management of containers using Kubernetes and a distribution of OpenStack that can run on any hypervisor an IT organization chooses to deploy.
While OpenStack has gained a fair amount of momentum in the past five years, challenges remain in the ability of the average IT organization to deploy and maintain it. But with each passing release of the framework, OpenStack comes closer to being something more IT administrators can master.

6:20p
Google Launches First East Coast Cloud Region in S. Carolina Data Center
Google has launched a new cloud region at its South Carolina data center, the first Google cloud region hosted on the East Coast of the US, the company announced Thursday. It now provides public cloud infrastructure services out of four data centers: two in the US, one in Belgium, and one in Taiwan.
It is important for a public cloud provider, especially one that provides general-purpose cloud infrastructure services, to have as many locations hosting the underlying servers as possible. More locations mean customers have more choice when setting up their cloud infrastructure.
Some replicate cloud VMs across multiple remote locations for better reliability. For some users location is important because of security or compliance concerns. Organizations in some industries, such as healthcare or government, are required to host their data within country borders. In many cases, physical proximity of servers to end users also means better performance and lower data transport costs.
While Google is often mentioned as one of the biggest public cloud providers, along with Amazon Web Services and Microsoft Azure, the list of Google data centers that host its cloud infrastructure services is much shorter than those of the other two providers. AWS has four regions in the Americas (plus one more just for government customers), two in Europe, and four in Asia Pacific.
Microsoft’s Azure cloud is served out of more locations than either of its two big competitors’ clouds: seven in the Americas (plus two government cloud regions), two in Europe, and nine in Asia Pacific. Three of its Asia Pacific regions – central, south, and west India – came online only recently and were announced earlier this week.
There are 14 Google data centers around the world the company has talked about publicly, so the internet giant doesn’t have to start from scratch and build a new data center every time it needs to add a new cloud region.
Google launched its Berkeley County, South Carolina, data center in 2008, and in 2013 announced a $600 million expansion project at the site. Here’s a video tour of the Google data center in South Carolina.
Google’s second US cloud region is in Council Bluffs, Iowa.

6:30p
Have it Your Way! You Pick the Cloud Model
Rick Vincent is Cloud Business Development Manager for Faction.
There are two types of IaaS cloud providers in the marketplace, and oftentimes they are compared and contrasted only on their consumption model: clouds that are consumed by the instance, a.k.a. hyperscale clouds, and clouds that are consumed as a resource pool, a.k.a. enterprise-grade clouds. Just as important as the consumption model, though, are the criteria on which we have traditionally compared and contrasted cloud services.
Cloud computing offers many advantages to businesses: more flexibility, lower costs, greater availability, and access to deep staff knowledge. When strategies are developed to ensure close alignment with current and future business needs, the cloud becomes an asset to the organization. Cloud services have become increasingly essential to business success, as they broaden companies’ agility and align with business objectives. It’s not really a question of whether companies should move to the cloud, but when.
Two Cloud Models
First, let’s define the two cloud types by their consumption model. A by-the-instance cloud provider is usually the simplest way to consume cloud. By-the-instance is ideal for companies that wish to scale horizontally and companies with read-only applications.
In contrast, there are no t-shirt sizes (S, M, L, or XL) for clouds consumed as a resource pool, as there are in hyperscale clouds. This isn’t the simplest way to consume cloud, but it is far more suitable for the complexities of most businesses. Resource pool clouds are ideal for companies with transactional applications and those that wish to scale both horizontally and vertically.
Attention to consumption models can help customers discover how well their cloud services are protecting their organizations and give them a broad view of their cloud usage. It can also help cloud service providers better meet the needs of internal clients and predict future needs, and it can help you better govern your organization’s cloud adoption from end to end.

Which Cloud Consumption Model is Best?
Ask yourself how important performance and availability are to your company. Is 100 percent availability essential to your company’s survival? Do you have a highly available architecture? Is robust performance critical to your operational success? With a by-the-instance cloud, you create both scale and availability by distributing content across multiple instances and geographies.
Alternatively, resource pool-based clouds are ideal for environments with large transactional systems; they have an underlying HA infrastructure and scale at a granular level, letting you fine-tune compute, network, and storage independently of one another.
By-the-instance clouds tend to be less expensive on the front end since they do not include HA redundancy in the infrastructure. They are not ideal, though, if your environment requires seeding, retrieval, and frequent movement of your data. In addition, when scaling horizontally, by-the-instance clouds produce additional copies of the data. Resource pools may carry higher front-end costs, but they are the most efficient option at scale, because fewer data copies are required and fewer data movements occur in a resource pool cloud.
At first look, by-the-instance clouds appear flexible; however, they are constrained by the attributes of each instance (the t-shirt sizes S, M, L). There is no customization, and they must be purchased off the shelf. A resource pool cloud is typically more flexible because you can customize each element of the infrastructure to your needs. Customers can change resource allocations without rebuilding or moving data and can more easily scale into larger resource pools than in per-instance clouds, as the sketch below illustrates.
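To see why fixed instance sizes can be constraining, consider a toy comparison with hypothetical t-shirt sizes (the numbers are made up for illustration, not any provider's catalog):

```python
# Toy comparison of fixed "t-shirt" instances vs. a granular resource pool.
# The instance sizes below are hypothetical, not a real provider's catalog.
import math

SIZES = {"S": (2, 4), "M": (4, 8), "L": (8, 16), "XL": (16, 32)}  # (vCPUs, GB RAM)

def instances_needed(vcpus: int, ram_gb: int, size: str) -> int:
    """Whole instances of one size needed to cover a workload's footprint."""
    c, r = SIZES[size]
    return max(math.ceil(vcpus / c), math.ceil(ram_gb / r))

# A RAM-heavy workload: 10 vCPUs and 96 GB of RAM.
n = instances_needed(10, 96, "XL")
print(f"by-the-instance: {n} x XL = {n * 16} vCPUs, {n * 32} GB for a 10 vCPU / 96 GB need")
# -> 3 x XL = 48 vCPUs, 96 GB: the CPU dimension is heavily over-provisioned.
# A resource pool provisions exactly 10 vCPUs and 96 GB, scaling each dimension independently.
```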
A by-the-instance cloud requires deep employee knowledge of the application(s) in order to operate and scale; resource pool clouds do not. You can achieve an HA architecture and availability even without deep employee knowledge of the application internals.
Ask the Right Questions
Don’t just look at the consumption model when weighing your cloud options. Most importantly, consider performance, costs, flexibility, and staff knowledge. Thinking this way will allow you to make intelligent choices in the initial design, so you can avoid potentially disastrous application redesigns as the application grows or changes and you face new challenges.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

6:57p
What Microsoft Announced at its Big Azure Cloud Event 
This article originally appeared at The WHIR
At AzureCon this week, Microsoft announced new solutions spanning containers, security, infrastructure and the Internet of Things.
Azure Data Centers in India
Microsoft announced the availability of Microsoft Azure services in India with the opening of three new regions: Central India in Pune, South India in Chennai, and West India in Mumbai. This makes Microsoft the first major global public cloud provider serving Indian customers out of local data centers.
This is important for Indian customers, who can now meet data residency requirements since data stays in India, and it also allows data replication across multiple regions for increased availability and backup redundancy.
Including these new regions, there are now 24 Azure regions worldwide.
Office 365 in India is slated for October, and Dynamics CRM Online for the first half of 2016.
Modern Applications
Microsoft announced a new Azure Container Service that will combine the openness of Apache Mesos and Docker with the hyper-scale of Azure for container orchestration and management.
This lets organizations using Azure easily deploy and configure Mesos to cluster and schedule Dockerized applications across multiple virtual hosts. The service will be available for preview by the end of the year and will support Windows Server containers in the future.
“With large-scale production users like Airbnb, Twitter and Apple, Apache Mesos is the most scalable and flexible container orchestration platform available on the market today,” said Florian Leibert, co-founder and CEO, Mesosphere. “At the same time, Microsoft continues its rapid growth and enterprise cloud leadership, with more than 80 percent of the Fortune 500 using Microsoft’s cloud to power their businesses. The partnership between Mesosphere and Microsoft will give customers unmatched choice and flexibility in managing their container investments, delivering a first-class implementation and enterprise support experience on Azure.”
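Mesos clusters are commonly driven through Mesosphere's Marathon scheduler, which exposes a REST API for launching Dockerized apps. A minimal sketch of that pattern (the endpoint and app definition are hypothetical, and this shows generic Marathon usage rather than the Azure Container Service API, which had not yet been published):

```python
# Launching a Dockerized app on a Mesos cluster via Marathon's REST API.
# The endpoint host and app details are hypothetical placeholders.
import json
import urllib.request

app = {
    "id": "/demo/nginx",
    "instances": 3,          # Marathon spreads three copies across the cluster
    "cpus": 0.25,
    "mem": 128,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "nginx:latest",
            "network": "BRIDGE",
            "portMappings": [{"containerPort": 80}],
        },
    },
}

req = urllib.request.Request(
    "http://marathon.example.com:8080/v2/apps",   # hypothetical endpoint
    data=json.dumps(app).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))
```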
Azure IoT Suite, Expanded Azure Data Lake
The Azure IoT Suite, which helps share data across connected assets, devices and systems, is now available for customers to purchase. It integrates with a company’s existing processes, devices and systems to quickly and easily build and scale IoT projects using preconfigured solutions.
Microsoft also announced the new Microsoft Azure Certified for IoT program, an ecosystem of partners whose offerings have been tested and certified so businesses can take their next IoT project from testing to production more quickly. Current partners include BeagleBone, Freescale, Intel Corporation, Raspberry Pi, Resin.io, Seeed Technology Inc., and Texas Instruments Inc.
Additionally, Microsoft announced Monday it had expanded Azure Data Lake, the offering it originally launched in April that provides an Apache Hadoop cluster service collecting all data in a single place prior to any formal definition of requirements or schema. Microsoft calls it a “hyper-scale repository”.
As of this week, Azure Data Lake now includes Azure Data Lake Analytics, Azure Data Lake Store, the new U-SQL programming language, and Azure HDInsight general availability on Linux.
Like Hive, U-SQL is a SQL-based language, but it aims to make big data processing easier. Notably, it supports unstructured data and files by not requiring file data or remote sources to be cataloged and schematized before they are queried.
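That schema-on-read approach is easiest to see in a sample. Here is a minimal U-SQL job of the kind Microsoft has demonstrated (a sketch; the file paths and column names are illustrative, and it is shown in U-SQL itself since that is the language being described). The schema is declared inline at EXTRACT time rather than in a pre-built catalog:

```
// Read a TSV file with the schema declared inline, then write results back out.
@searchlog =
    EXTRACT UserId  int,
            Start   DateTime,
            Region  string,
            Query   string
    FROM "/Samples/Data/SearchLog.tsv"
    USING Extractors.Tsv();

@results =
    SELECT Region, COUNT(*) AS Queries
    FROM @searchlog
    GROUP BY Region;

OUTPUT @results
    TO "/output/QueriesByRegion.tsv"
    USING Outputters.Tsv();
```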
Azure HDInsight is a fully managed Apache Hadoop cluster hosting service that supports analytics engines such as Hive, Spark, HBase, and Storm. Organizations can now manage clusters on Linux with a 99.9 percent uptime SLA.
Azure Security Center
One of the roadblocks to cloud adoption has been concern over security, especially among organizations outside the US.
To hopefully ease some of these worries, Microsoft’s new Azure Security Center is designed to give customers visibility and control of their Azure resources with integrated security, monitoring and policy management. Microsoft is able to analyze information gathered from customer deployments and compare it with global threat intelligence to detect threats and provide security recommendations for clients.
It integrates with security solutions from Barracuda, Checkpoint, Cisco Systems Inc., CloudFlare, F5 Networks, Imperva, Incapsula, and Trend Micro.
Azure Security Center will be broadly available for Azure customers by the end of the year.
NVIDIA GPU-Powered Azure VMs
Finally, continuing its investments in industry-leading compute capacity, Microsoft announced the N-series, a new family of Azure Virtual Machines (VMs) powered by NVIDIA GPUs, which have long been used for compute- and graphics-intensive workloads. Microsoft is the first hyper-scale provider to announce VMs featuring NVIDIA Grid 2.0 technology and the Tesla Accelerated Computing Platform for professional graphics applications, deep learning, high-performance computing, and more. A preview will be available in a few months.
Cost Savings From Buying Azure Instances Upfront
Available globally starting Dec. 1, the Azure Compute Pre-Purchase Plan allows customers to pay for an entire year of Azure compute capacity upfront at a discount of up to 63 percent. This makes sense for customers with steady-state, predictable workloads on Azure.
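To make the discount concrete, here is the arithmetic on a hypothetical workload (the hourly rate is an assumption for illustration, not an actual Azure price):

```python
# Illustrative math on the pre-purchase discount. The hourly rate is a made-up
# placeholder, not an actual Azure price.
RATE_PER_HOUR = 0.50       # hypothetical pay-as-you-go rate for one VM
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

pay_as_you_go = RATE_PER_HOUR * HOURS_PER_YEAR
prepaid = pay_as_you_go * (1 - 0.63)  # applying the maximum advertised discount

print(f"pay-as-you-go: ${pay_as_you_go:,.0f}/year")
print(f"pre-purchased: ${prepaid:,.0f}/year, saving ${pay_as_you_go - prepaid:,.0f}")
```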
This article first ran at The WHIR.

7:41p
Google, Microsoft Wave White Flag to End Patent Battles 
This post originally appeared at The Var Guy
By DH Kass
After five years battling one another on multiple fronts over patents and technology innovations, IT titans Google and Microsoft said they would end hostilities by dropping some 20 lawsuits in the US and Germany.
The companies have battled each other over the use of various patents for mobile phones, gaming consoles, royalties, and more.
Neither company disclosed financial terms of the settlement agreement. But in a gesture reflecting a new era of collaboration between former rivals, the two companies said they will work together to develop a royalty-free video compression technology to boost download speeds and to push for a unified patent system in Europe.
“Microsoft and Google are pleased to announce an agreement on patent issues,” the companies said in a joint statement. “As part of the agreement, the companies will dismiss all pending patent infringement litigation between them, including cases related to Motorola Mobility,” the vendors said.
“Separately, Google and Microsoft have agreed to collaborate on certain patent matters and anticipate working together in other areas in the future to benefit our customers,” they said.
In recent months, both Google and Microsoft have sought to move away from their formerly litigious postures, with the search giant last year putting aside its patent differences with Apple after four years of legal wrangling and asking a federal appeals court in Washington to dismiss their cases against each other.
And earlier this month, Microsoft elevated Brad Smith, its top lawyer, a 22-year company veteran, and the architect of the company’s kinder, gentler legal posture, to president and chief legal officer, a move reflective of its more collaborative profile under chief executive Satya Nadella.
The trend toward legal detente among once-bitter IT rivals is a welcome respite from the wars over intellectual property that have dominated the industry for years, and a counter to the torrent of lawsuits filed by technology patent trolls, which consultant Unified Patents said could reach an all-time high this year.
This first ran at http://thevarguy.com/business-technology-solution-sales/100115/google-microsoft-wave-white-flag-end-patent-battles

8:57p
Which Data Center Skills are in Demand Today
Today’s data center is an evolving engine supporting new types of workloads and users. New demands are being placed on resource controls, facilities management, and data center optimization, and businesses are spending more money on their environments to better compete in an evolving world.
Cloud is a big part of that evolution. Global spending on Infrastructure-as-a-Service is expected to reach about $16.5 billion in 2015, an increase of 32.8 percent from 2014, with a compound annual growth rate from 2014 to 2019 forecast at 29.1 percent, according to Gartner.
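Those Gartner figures imply a 2014 base and a 2019 endpoint that are easy to back out (a quick check on the quoted numbers, not figures from the report itself):

```python
# Back-of-the-envelope check on the Gartner IaaS figures quoted above.
spend_2015 = 16.5    # $B forecast for 2015
growth_2015 = 0.328  # 32.8 percent increase over 2014
cagr = 0.291         # 29.1 percent CAGR, 2014-2019

spend_2014 = spend_2015 / (1 + growth_2015)
spend_2019 = spend_2014 * (1 + cagr) ** 5

print(f"implied 2014 spend: ${spend_2014:.1f}B")  # ~$12.4B
print(f"implied 2019 spend: ${spend_2019:.1f}B")  # ~$44.6B
```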
As their approach to infrastructure changes, organizations are looking to invest in good people to support new initiatives. Personnel need to evolve with the new demands of the market, which have created new positions, changed others, and forced the data center industry to invest in new talent.
About one third of respondents to our sister company AFCOM‘s latest State of the Data Center Survey, for example, said their personnel costs have increased because of new training and certification requirements. Yet traditional data center infrastructure roles remain hard to fill.
It’s a competitive market, and it’s not always easy to find the right people. Respondents to the survey have had challenges filling the following top three roles:
- Data center facility technicians, engineers, and operators: 42 percent
- IT systems and/or applications personnel: 20 percent
- Network and telecommunications personnel: 19 percent
Furthermore, 69 percent of respondents indicated that they’ve had to increase investment in data center IT and facility personnel within the past three years, while 71 percent said they will have to increase investment in data center IT and facilities professionals over the next three years. So, what are the driving factors for these increases?
- 53 percent indicated that there is increased demand for onsite coverage.
- 38 percent said that retention costs for existing staff have increased.
- 33 percent said that increased training and certification requirements drove up costs.
All of this translates to more career opportunities and growth within the data center space. Organizations are looking for diverse skill sets and data center professionals who can have more impact on business processes.
Here are the top responses to the question about skills data center and IT operations managers are looking for in potential hires:
- Automation and cloud
- Cisco UCS capabilities
- DCIM experience
- Skills around load balancing and unified communications
- Network engineering
Certifications are critical as well. The top five job skills where certifications are required for a data center operations role are:
- Facilities management
- Data and/or network security
- Network engineer/training
- Operations and process management
- Project management
The demand around data center and cloud services will only continue to grow. Data center professionals will need to know how their underlying architecture impacts the business, the users, and the overall ecosystem. They’ll need to understand more about workloads they support and how to best align with business goals.
Download the full State of the Data Center Survey for more details.