Data Center Knowledge | News and analysis for the data center industry
Thursday, February 26th, 2015
1:00p
What Affects the Dynamics of Data Center Pricing
Multi-tenant data center pricing trends are not an easy subject to broach. In North America, each market has different dynamics. And with the industry evolving beyond power and ping to other services, separating raw space and power from everything else has become even harder.
For these and many other reasons, it’s nearly impossible to come up with a blanket statement that pricing is going up or going down. However, the consensus among many is that it has generally bottomed out in the U.S., and much firmer pricing is expected in the near term. These findings largely agree with a recent Cushman & Wakefield report. However, several industry watchers mentioned an abundance of X-factors that will impact data center pricing.
The Floor is Firm
Industry players and observers from Colliers, Wired Real Estate, Jones Lang LaSalle, and North American Data Centers all predict flat but stable pricing. What happened to get here, and what’s going to happen going forward?
In 2011, colocation providers and industry observers had noticed an upward trend in pricing during the previous years as the industry took off. The sentiment was that increasing competition would slow or reverse the trend, and that sentiment proved right.
“Over the last 4-5 years, we’ve seen a 7.5 percent drop in pricing,” said Bo Bond, managing director, Jones Lang LaSalle. However, Bond also noted that last year pricing was down just a few points. “The trend of falling price compression is starting to level out,” he said.
Dynamic Ceiling Harder to Peg
Another big discussion in 2011 was around blurring lines, or convergence, between wholesale and retail colocation and its potential impact on pricing.
Convergence is once again the big trend – the convergence of infrastructure and IT. Infrastructure providers are increasingly becoming IT service providers as well, offering services beyond running the facility such as cloud or managed services.
“A key driver that’s affecting pricing is the notion of ancillary services,” said Tim Huffman, executive vice president, Colliers Technology Group.
Because deals are increasingly about more than space and power, the other elements of a deal influence the price of the space and power itself.
This affects data center pricing in many ways. Huffman argues a provider is more likely to cut a deal on the space and power if it sees opportunity in other buckets, like services.
It also means providers will continue to price aggressively on large strategic wholesale deals and make up revenue across other higher-margin customers. Providers will try to “average out” the revenue per square foot between low priced big deals and higher margin, hybrid customers.
Raising the overall average shifts the conversation on Wall Street away from weak pricing on big deals and toward rising average price per square foot. This phenomenon can be seen in recent Internap and QTS earnings.
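To make that averaging concrete, here is a minimal sketch with invented numbers; the deal sizes, rates, and customer mix are hypothetical and not drawn from any provider's earnings.

```python
# Hypothetical illustration of how a provider's blended revenue per square foot
# can stay healthy even when large wholesale deals are priced aggressively.
# All figures are invented for the example.
deals = [
    # (square feet, monthly rate in $ per sq ft)
    (50_000, 8.00),   # large strategic wholesale deal, priced aggressively
    (5_000, 30.00),   # retail colocation customer
    (3_000, 45.00),   # hybrid customer also buying managed/cloud services
]

total_sqft = sum(sqft for sqft, _ in deals)
total_revenue = sum(sqft * rate for sqft, rate in deals)
blended_rate = total_revenue / total_sqft

print(f"Blended revenue: ${blended_rate:.2f} per sq ft per month")
# The low-priced deal dominates the footprint, yet the higher-margin customers
# pull the blended average well above the wholesale rate of $8.
```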
Huffman explained that in addition to protecting a provider from raw data center pricing sensitivity, services also make customers stickier. They can also enable flexible deals: if a customer doesn’t use the space it expected to, the difference can go toward services.
“Providers purely in the middle [retail colocation] are going to suffer if they don’t have services,” said Huffman. “That bucket will incrementally shrink.”
The largest retail colocation player Equinix doesn’t offer direct cloud services but differentiates beyond space and power with interconnection and private cloud connectivity. It also recently acquired professional cloud services firm Nimbo.
Other Factors Impacting Pricing Going Forward
Location has always been a factor in data center pricing and will continue to be so. For example, Cushman & Wakefield’s pricing range per market varies greatly in locations like Chicago — around $125 to $185 per kilowatt per month — a massive window.
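To put that window in dollar terms, a quick back-of-the-envelope calculation helps; the 500 kW deployment size below is an assumption for illustration, not a figure from the report.

```python
# Rough annual cost range implied by the quoted Chicago pricing of roughly
# $125 to $185 per kW per month. The 500 kW load is a hypothetical example.
low_rate, high_rate = 125, 185   # $ per kW per month
load_kw = 500                    # hypothetical deployment size

low_annual = low_rate * load_kw * 12
high_annual = high_rate * load_kw * 12

print(f"Annual spend: ${low_annual:,} to ${high_annual:,}")
# Roughly $750,000 to $1,110,000 per year for the same load, a swing of about
# $360,000, which is why per-market ranges are hard to generalize.
```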
It is hard to create an overall trend for every market in the country, said Jim Kerrigan, managing principal, North American Data Centers.
Kerrigan also notes that even in strong markets, the volume is often too low to project a trend. “Dallas, Virginia, Chicago are three huge markets with very little supply online currently, and a lot coming online this summer,” he said.
“That supply-demand curve can turn based on a few large deals,” said Bond.
Data centers are also spreading out. Emerging markets are landing more deals that arguably would have gone to core markets, increasing competition and aggressive pricing.
“If there’s no supply, it doesn’t mean the market doesn’t need supply; it means guys are fulfilling space elsewhere,” said Kerrigan.
Further complicating the issue is that pricing doesn’t always move with supply-demand dynamics, as Jeff West, a director at Cushman & Wakefield, recently noted. Kerrigan argues that supply can create demand.
Another factor in pricing is that not all data center space is created equal. Space with more redundancy is more expensive to deliver than space with less. Customers are getting better at accurately assessing their needs, and providers are more flexible in delivering custom space, making pricing fluctuate even within a single facility.
Just-in-Time Builds Stabilize Pricing
This flexibility is the result of better “just-in-time,” or incremental, building. Just-in-time building means there are fewer speculative builds, which firms up pricing.
“Just-in-time inventory is changing things around to the detriment of tenants and favor of landlords,” said Kerrigan.
Another theory around just-in-time is that pricing decreased slightly in the last few years because more efficient building allowed providers to price aggressively. This partially explains softer pricing leading into the last quarter of 2014.
Pricing is expected to be firmer going forward because these efficiencies have already largely been factored in, resulting in a new, firmer floor being set.
“A few years ago it was more expensive to build,” said Bond. “Now their margins are a little better. But pricing is starting to bottom out.”
There has been criticism of public companies aggressively pricing big-name deals. Cushman & Wakefield noted that there would be increasing investor pressure on public companies not to price aggressively. Both Kerrigan and Bond are skeptical that aggressive pricing for banner deals will stop.
“Are they really as firm on pricing as they say [on investor calls]? I don’t think so,” said Kerrigan.
“Credit is king with a REIT,” said Bond. “I do believe in core principles, that bigger credit and larger deals get better pricing.”
There will always be customers that only need white space. However, big wholesale deals, a popular measuring stick for industry health in the past, are a less effective measure today.
2:00p
Docker Releases First Orchestration Tools for App Containers
Orchestration has been a top priority for Docker, and the three orchestration toolsets first announced in December are now available for download: Docker Machine, Swarm, and Compose.
All three are meant to make it easier to build, ship, and run multi-container distributed applications.
Docker has approached orchestration with a “batteries included, but swappable” approach to keep Docker extremely integration and ecosystem-friendly.
The second part of the announcement has to do with wider orchestration ecosystem participation and support on the whole. Docker has seen massive support across industry leaders, including Amazon Web Services, Google, IBM, Microsoft, Joyent, Mesosphere, and VMware.
Machine allows a user to set up any host that Docker engine will run on with one command. Swarm provides native clustering, and Compose makes the developer’s life easier when updating and tracking multi-container applications.
Machine simplifies portability. It allows a user to set up the host that Docker Engine will run on with one command. Before, it was a multi-step process: users had to log into each host and install and configure Docker for that specific host and operating system.
“It allows you to have infrastructure at the ready,” said David Messina, vice president of enterprise marketing at Docker. “These infrastructures, uniformly with one command, can start working with Docker, and there is no need to relearn different environments.”
Swarm provides native clustering, ensuring a uniform developer experience as multi-container, multi-host distributed apps are built and shipped. Previously, there was no native solution available, with each Docker Engine independent of each other.
Swarm is comparable to Kubernetes or Mesosphere and comes with a “rip-and-replace” capability where partner integrations can replace and augment aspects of Swarm’s capabilities. This is what Docker execs mean by “batteries included, but swappable.”
“We ‘ship with batteries included’ but wanted swappable solutions,” said Messina. “Multi-container applications are always portable, but providers can optimize their infrastructure for Docker. There are a ton of integrations already working with Machine, and several integrations planned with Swarm.”
Swarm API and driver integrations with other container orchestration products, and with cloud providers that offer orchestration services, are underway. Reference implementations are documented with Apache Mesos and its corporate sponsor, Mesosphere.
The third toolset is Docker Compose. Modern applications are dynamic, and Compose helps to make sure changes are accounted for.
While Swarm is handy for both developers and operations, Compose is pure magic for the developer, said Messina. “Through a single .yml file, I can define which containers are part of my application, what sequence I want to start up, and with a file I describe my app, Compose gets that app up and running instantaneously,” he explained.
If a developer iterates several times a day, they can automatically update a distributed application.
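As a rough sketch of the idea rather than Docker's implementation, the toy reader below loads a Compose-style .yml document and works out a start order from the declared links. The service names and file contents are hypothetical, and the layout only loosely approximates the early Compose format.

```python
# Illustrative only: a toy reader for a Compose-style .yml file, showing the
# idea of declaring an app's containers in a single document. This is not
# Docker Compose itself.
import yaml  # PyYAML

COMPOSE_YML = """
web:
  build: .
  ports:
    - "5000:5000"
  links:
    - redis
redis:
  image: redis
"""

services = yaml.safe_load(COMPOSE_YML)

def start_order(services):
    """Start linked dependencies before the services that depend on them."""
    ordered, seen = [], set()
    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in services[name].get("links", []):
            visit(dep)
        ordered.append(name)
    for name in services:
        visit(name)
    return ordered

for name in start_order(services):
    spec = services[name]
    source = spec.get("image", "local build from " + str(spec.get("build")))
    print(f"would start container '{name}' using {source}")
```

Docker Compose itself does far more (container lifecycle, volumes, log aggregation), but the single declarative file describing the whole application is the core of what Messina describes.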
Image: Orchestration is a top priority for the project, and its orchestration ecosystem already contains a who’s who in the industry (Source: Docker)
There was a bit of controversy around the same time these native orchestration toolsets were first announced. CoreOS CEO Alex Polvi and his colleagues have a difference in philosophy with Docker when it comes to containers.
The three issues, according to Polvi, were around security, composability, and open standards. A blog post revealed a slight bifurcation in vision for the application container’s future.
“The issue is if Docker becomes a platform itself,” said Polvi. “It appears Docker is going open core.” Part of that reasoning is that Docker builds all kinds of tools, like these, into its Docker runtime.
Containers were forced into puberty overnight, and technology is ahead of business model, meaning potential growing pains. Docker needs to evolve like any technology, and massive interest means it has to do so at an accelerated pace. These tools address a big need around containers in orchestration.
Orchestration, Security, Networking Are Docker Priorities
The three toolsets address what a recent survey found to be some of the biggest needs: orchestration, tooling, and security. The survey was partially commissioned by StackEngine.
Docker ecosystem player StackEngine builds software to manage and automate Docker applications. StackEngine CEO Bob Quillin spoke of the importance of orchestration in the bigger picture.
“There is no longer a one-to-one relationship between applications and the resources they use, and there are too many services moving too fast to manage by hand. Automation through orchestration is the only answer,” said Quillin. “By building common orchestration libraries and APIs, Docker is on the right track to enable organizations to choose the scheduler or orchestration tool best suited to their application needs.”
Orchestration was the single most commented-on thing in terms of what people want, according to Messina. “Security is an ongoing area with any technology, especially with something less than 2 years old. It’s an iterative process.”
Messina calls networking a third very hot topic in the community. “As you build multi-container applications, people want to figure out the most dynamic way to network,” said Messina.
The rapidly growing project recently underwent a change in operational structure. “The company has grown four-fold,” said Messina. “One of the things I’d be remiss in not mentioning is the technology of the project has been able to scale thanks to an enormous community of contributors, specifically with orchestration tools.”
4:30p
Modern Collaboration is a Double-Edged Sword
Vijay is CEO and founder of SysCloud; he was previously a marketing executive for the RAID division at LSI Logic K.K.
It’s never been easier to share information on the Web, and the workflows and habits formed through the use of consumer applications have led users to expect the same ease of use and control over data in their professional lives.
From the versatile collaboration options of Google Drive (which has surpassed 190 million monthly active users) to the ease of sending files via Dropbox to niche collaboration tools for the enterprise, there are a plethora of sharing tools that fit the needs of nearly every user.
However, modern collaboration within the enterprise is a double-edged sword. As sharing data gets easier, users subject themselves to greater information security risks. Organizations, from small businesses to global enterprises, are particularly susceptible to these security crises, as demonstrated by the multitude of high-profile information hacks and data compromises in the past few months alone. This vulnerability isn’t limited to malicious, Target-sized attacks – something as simple as an accidental typo on an email address can lead to major complications due to the unintentional disclosure of sensitive information.
So, when do the risks of sharing information over the Web outweigh the benefits? Before examining what companies can do to prevent collaboration security breaches, it’s important to understand the advantages and potential pitfalls of data sharing in organizations.
The Good, The Bad, The Ugly
The largest benefit of data sharing is frictionless collaboration, and this seamless sharing has obvious advantages for organizations. Data sharing and access have become frictionless through streamlined workflows that let users make broad changes and share folders of documents instantly. The efficiency of cloud-based collaboration suites has led to increased employee productivity and accelerated the ability to leverage tools globally from any device. Files stored on Google Drive, for example, can be accessed from phones, tablets, personal laptops, and work desktops under a single sign-in.
However, this advantage is also collaboration’s biggest downfall. With an abundance of devices used both inside and outside the workplace, the task of securing data has become more difficult. Individuals are surrounded by consumer apps in their personal lives that are highly focused on connecting and sharing with friends, family, and the public. These applications put total control in the hands of the user through interfaces that prominently display the ability to connect and share, and users expect access to these same applications in the workplace, often without regard to security standards. Combined with bring-your-own-device (BYOD) practices, organizational networks are potentially exposed to a multitude of external threats.
Aside from device risks, the online file sharing afforded by frictionless collaboration also poses a problem for organizations. Collaboration tools have put the power to make broad changes to data access, and to delete documents, in the hands of users, placing a data exposure or loss just a click away. Features like type-ahead search, which makes it easy to add accounts to a sharing list, or suggestions on who should be included on an email, only increase the likelihood of accidental changes, data breaches, and data loss.
What Organizations Can Do
So, what can businesses do? The seemingly obvious answer is to shut down external collaboration and place strict controls on data sharing within an organization. However, efficient collaboration cannot happen in a walled garden: users have legitimate needs to share data with clients, vendors, and partners, and that sharing is critical to business operations.
Additionally, this approach often backfires and creates “shadow IT,” or information technology services that operate outside the oversight of an organization. Though this gives users the free rein they often crave, shadow IT isn’t subject to the same organizational security standards and can undermine the organization’s existing security measures.
Despite external threats, collaboration is necessary in today’s work environment and there are steps businesses can take to minimize security risks. These include:
- Education. Outline collaboration policies and ensure all users are educated on the organization’s policies, proper sharing processes, and authorized tools for use.
- Classification. Classify data into multiple types, such as sensitive and confidential, internal, and public. Based on this classification, advise users which documents are appropriate to share both inside and outside the organization.
- Application Evaluation. Evaluate applications to ensure they provide a security framework for collaboration and sharing at an administrator level that allows for granular control and auditing. To meet internal security policies and compliance requirements, third-party applications should be integrated to overlay and fill any gaps to meet an organization’s specific collaboration policies.
- Control Without Interference. Security controls are needed that are easily manageable, operate without slowing down users or changing the way they work, and enforce policies automatically without unnecessary intervention. (A minimal sketch of classification-driven sharing control follows this list.)
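As a toy illustration of classification-driven sharing control, here is a minimal sketch; the three tiers, the rules, and the function are invented for the example and do not represent any specific product's policy engine.

```python
# A minimal sketch of enforcing sharing policy from data classification.
# Tiers, audiences, and rules are hypothetical examples.
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3

# Audiences that each classification tier may be shared with.
SHARING_RULES = {
    Sensitivity.PUBLIC: {"internal", "partner", "external"},
    Sensitivity.INTERNAL: {"internal", "partner"},
    Sensitivity.CONFIDENTIAL: {"internal"},
}

def can_share(doc_sensitivity: Sensitivity, audience: str) -> bool:
    """Return True if policy allows sharing this class of document."""
    return audience in SHARING_RULES[doc_sensitivity]

# A confidential document addressed to an external recipient is blocked
# automatically, without relying on the user to remember the policy.
print(can_share(Sensitivity.CONFIDENTIAL, "external"))  # False
print(can_share(Sensitivity.INTERNAL, "partner"))       # True
```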
Each organization has its own unique needs, so there is no set formula for the amount of collaboration controls or the “right” education program. The most important component of any collaboration program is thorough user education with organizational safeguards, including third-party apps. With this in mind, and the proper tools and infrastructure in place, executives and IT leaders can have peace of mind in regard to data security while still giving users the benefits of online collaboration and sharing.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
5:42p
Report: Arizona Tax Bill Would Benefit Apple Data Center
The Arizona legislature is trying to extend renewable energy tax credits and exemptions that may benefit the recently announced Apple data center project in the state, the Associated Press reported. Apple is converting a 1.3 million-square-foot manufacturing plant into a data center in the Phoenix suburb of Mesa, with construction expected to start next year.
The bill breezed through the state’s House of Representatives. It would expand a $5 million sales tax credit passed last year to include international operations centers that make a capital investment of $1.25 billion. The bill was amended to match the Senate version to speed up final approval by the governor once it passes there.
To qualify, the Cupertino, California-based company would have to invest at least $100 million in new renewable energy facilities and use some portion of that energy to power its data center. The company is very likely to do that, given its recent history of large renewable energy investments.
The bill would also exempt the Apple data center from sales taxes on electricity and natural gas, which equates to a $1.3 million loss of revenue for the state’s general fund.
Arizona hopes to get its foot in the door with Apple partially because it could lead to bigger projects, Senator Bob Worsley told the AP. “We think we have a real shot at being a secondary place for Cupertino,” he was quoted as saying.
Apple acquired the facility in 2013. GT Advanced had planned to make sapphire glass for Apple products there, but it went bankrupt in 2014. Instead of seeing it as a worm in the Apple (groan), Apple committed to converting the plant and building out 70 megawatts of new solar power generation.
The facility will control Apple’s global networks, employing about 150 workers and running on 100 percent renewable energy.
The Greater Phoenix Economic Council supported the Senate version (Senate Bill 1468). Opponents called the bill “specialty legislation.”
This week, one of the state’s largest data center players, IO, began offering renewable energy options for customers under an agreement with utility provider Arizona Public Service.
Arizona passed data center tax incentives in 2013. Microsoft is also said to be eyeing Arizona for a data center and has been looking into tax incentives there.
5:53p
Dell Invests in Object Storage Startup Exablox
Scale-out object storage startup Exablox is tackling the growth of unstructured data, and it has Dell’s attention. The company recently raised a $16 million Series C funding round, with Dell Ventures joining as a strategic investor and all previous investors participating. The company has raised $38.5 million to date.
Exablox provides turn-key, enterprise-grade object-based storage largely targeted at the mid-size market. The company emerged from stealth last year and has now passed the 100-customer mark, with petabytes of storage deployed. The new funding will be split evenly between research and development and marketing and sales.
Exablox developed its own object store, tightly integrated with a distributed file system. An appliance called OneBlox and the OneSystem management software couple several features in a solution the company said is easier to use than legacy storage. It comes with enterprise-class features, such as inline deduplication, continuous data protection, remote replication, and seamless scalability for SMB- and NFS-based applications.
“It’s blending AWS S3-like scalability down into a device that anyone can deploy,” Exablox CEO Douglas Brockett said.
The storage startup said it tackled the barriers to getting started with next-gen storage, such as complicated installation, cumbersome management, and forklift upgrades. Customers also get the benefits of shared deduplication and data protection.
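For readers unfamiliar with the technique, here is a generic sketch of inline, content-addressed deduplication; it illustrates the general idea behind features like those listed above, not Exablox's actual design.

```python
# Generic content-addressed deduplication sketch (not Exablox's design).
# Chunks are keyed by their SHA-256 digest, so identical data written by
# different objects is stored only once.
import hashlib

class DedupStore:
    def __init__(self, chunk_size=64 * 1024):
        self.chunk_size = chunk_size
        self.blocks = {}     # digest -> raw chunk bytes
        self.manifests = {}  # object name -> ordered list of chunk digests

    def put(self, name, data):
        digests = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.blocks.setdefault(digest, chunk)  # store new chunks only
            digests.append(digest)
        self.manifests[name] = digests

    def get(self, name):
        return b"".join(self.blocks[d] for d in self.manifests[name])

store = DedupStore()
payload = b"x" * 200_000
store.put("backup-monday", payload)
store.put("backup-tuesday", payload)  # identical data adds no new blocks
assert store.get("backup-tuesday") == payload
print(len(store.blocks), "unique chunks stored for two 200 KB objects")
```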
Brockett said Dell’s investment is great validation for the technology. Brockett dealt with Dell extensively during his time at SonicWall, a company Dell eventually acquired in 2012 for $1.2 billion.
“We have been feeling, internally, as if the market has a need and is ready for this type of solution,” he said. “From our perspective, a company with as much market touch as Dell to validate us with investment reinforces this belief. What Dell saw is this is a way that a broad segment will get their hands on next-gen storage technology. It’s easy and simple to adopt.”
While Brockett touted a seamless user experience, he emphasized that ease of use does not mean technical simplicity. “We’ve taken a large technical bite,” he said. “I think we can take lessons from companies like Apple, who provide a great user interface. A simple UI doesn’t mean the technology is simple – it means doing further work into a great UI.”
Brockett said the Exablox approach to the market is different. Many tech companies with enterprise offerings reach down to the mid-market by “dumbing down” features. “What [people] need to realize is you don’t take stuff away when you go to mid-market, you add stuff in,” he said.
The storage startup is tackling a big market opportunity. Research firm IDC, in its Worldwide Object-Based Storage 2014-2018 Forecast report, projects that the file-and-object-based storage (FOBS) market will reach $43.4 billion in 2018. Scale-out solutions will represent 80 percent of that market by 2018.
7:59p
China Removes Cisco, Others from List of Approved Government Service Providers
This article originally appeared at The WHIR
China has removed several large US IT companies including Cisco and Apple from a list of approved government product and service providers, Reuters reported Thursday. The move, while disruptive for the companies involved, is just the latest in a series of steps taken by the Chinese government reducing the reach of US IT companies in China.
An analysis by Reuters of a list provided to state agencies by China’s Central Government Procurement Center (CGPC) showed that Cisco had 60 products approved for government use, but all had been removed by late 2014. Products from Apple, Citrix, and McAfee were also removed from the list, which swelled over the same period from under 3,000 products to almost 5,000.
The CGPC banned foreign-owned cybersecurity companies Symantec and Kaspersky from government use in August.
Why the companies were removed is a matter of speculation, with the Washington Post noting that a rift has been growing since the US NSA’s PRISM program was leaked and that China has begun touting its need for “cyber-sovereignty.” New regulations and VPN disruptions in January continued a trend of growing Chinese internet insularity.
Previously, the Chinese government’s approach to foreign tech companies has been indicated by raids on Microsoft offices, a review of the national financial security implications of Chinese banks using IBM servers, and blockages of services like Dropbox and Gmail.
These moves have delayed investment and negatively affected business, but they also have surely protected domestic competitors.
With the list of approved products expanding, the Chinese government investing in cloud development, and the information and communications technology market in China expected to reach $465 billion in 2015 according to IDC, limiting foreign companies’ market access could have a major impact on both those companies and the market itself.
“There’s no doubt that the SOE (state-owned enterprise) segment of the market has been favoring the local indigenous content,” another anonymous Western IT company executive told Reuters.
There are still some American companies that have approved products on the CGPC list, including Dell and HP.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/china-removes-cisco-others-list-approved-government-service-providers
9:33p
Your 2015 Cloud Security Update
There are a few challenges floating around the cloud and security world, and it’s time for a bit of an update. Between Sony and a few major retailers, there have been a lot of conversations around cyber-security, cloud security, and how all of this impacts the end user.
Since there’s really no slowdown in sight – and more organizations are touting the era of the “Internet of Things” – we have to take a look at security from a truly holistic level. That said, here are some security updates that can help you stay ahead of the bad guys… or, as much ahead as you can be.
- Client-less security. Take security and efficiency and mash them together! We now have security technologies that integrate directly into the hypervisor layer. This means VMs running on top won’t need a clunky client. There has been a resurgence of virtual application and desktop delivery, where both end-user efficiency and security at the VM level are a must. For example, a virtualization-aware AV engine runs at the hypervisor level, scanning all of the traffic that flows into and out of each VM. Trend Micro introduced its Deep Security platform to do just that; it integrates directly with VMware Tools to facilitate virtualization-ready security at the hypervisor layer. Another good example is 5nine’s security model and how it interacts with Hyper-V. This way, administrators don’t have to install AV clients on the workloads, and the AV process becomes much more streamlined and efficient. The result is new levels of security and efficiency for your virtual platform.
- The adoption of virtual platforms. Virtual security appliances are agile, powerful, and can be deployed anywhere in your infrastructure. The other big part is that these security platforms can be service-oriented. This means you can monitor specific network nodes and data points within a very distributed environment. Check Point has their Virtual Systems which deploy as Software Blades on any virtual system for customized protection. Similarly, Palo Alto has their VM-Series virtual appliances which support the exact same next-generation firewall and advanced threat prevention features available in their physical form factor appliances. Furthermore, automation features such as VM monitoring, dynamic address groups and a REST-based API allow you to proactively monitor VM changes and dynamically integrate this into your security policy architecture. The cool part is that the VM-Series is supported on VMware, XenServer, KVM, Ubuntu, and even AWS.
- Cloud and compliance can happen! Cloud is growing up and playing nice with various compliance regulations. You now have the ability to deploy powerful cloud platforms that are ready for PCI DSS, HIPAA, and many others. Just make sure your cloud provider is compliant and ready to deliver that type of cloud solution. Let me give you an example. Have you heard of FedRAMP? Basically, FedRAMP is the result of close collaboration with cyber security and cloud experts from the GSA, NIST, DHS, DOD, NSA, OMB, the Federal CIO Council and its working groups, as well as private industry. Already, cloud providers like AWS, IBM, HP, Microsoft, and Akamai have become FedRAMP-certified cloud service providers.
- Next-gen security feature sets. Geo-fencing, advanced DLP, node-based IPS/IDS, application firewalls, and even new types of DDoS protection are all powerful features which live on virtual and physical appliances. But the really cool part is just how much you can pack into a virtual appliance. Next-generation security features are here to help with the advanced persistent threats that traditional UTM security appliances simply can’t handle. Virtual security features now can include advanced network and firewall configurations, clientless VPN, application control, URL filtering, AV services, identity awareness and mobile access controls.
- Creating a new security policy. This is a constantly evolving process. Keep your organization as well as your user base continuously updated. Have you updated your computer policy? Do you have a mobility policy? There are new ways that organizations must secure their data, and many times that starts with informing the user. That said, you should also review your data control policies and how users are accessing your networks. Now is a great time to look across your entire IT environment and identify places where older security policies might have holes. Creating good corporate, mobility, data, and security policies keeps your overall environment a lot more proactive. When it comes to a security breach, spend the money now so that you don’t have to pay even more if an incident occurs. Consider this: a recent IBM-sponsored report looked at the actual cost to a company of a data breach. The total? $3.5 million – 15 percent more than the year before (see the quick arithmetic after this list).
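For reference, the prior-year figure implied by that 15 percent increase works out as follows; this is a quick check, not a number taken from the report itself.

```python
# Backing out the prior-year average from the quoted numbers: $3.5 million is
# said to be 15 percent higher than the previous year's average breach cost.
current_cost = 3.5e6
previous_cost = current_cost / 1.15
print(f"Implied prior-year average: ${previous_cost:,.0f}")  # about $3,043,478
```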
Let’s be realistic here: securing your infrastructure is a never-ending struggle. It’s not just about the bad guys, either. We are tasked with balancing effective security against an optimal user experience. With so many devices and data points, it has become even more challenging to secure critical data. Still, data and workload centralization has allowed administrators to keep a closer eye on their information while still controlling user access. Consider some of these best practices:
- Always be vigilant; it’s one of the best ways to stay proactive.
- Stay updated on patches and fixes.
- Read security blogs, posts and articles – seriously. They help a lot.
- Test your system for flaws! Sometimes a pen-test can be very useful.
- Continuously update your security policies. This goes for both your users and the IT infrastructure.