Data Center Knowledge | News and analysis for the data center industry
Monday, November 9th, 2015
1:00p
US Data Center REITs Enjoying a Booming Market

As people watch more and more online video, as cloud service providers grow, and as enterprises move more of their IT infrastructure out of their on-premises data centers, the biggest companies that provide data center space as a business are enjoying a tremendous market boom.
The biggest ones in the market, US data center providers operating as Real Estate Investment Trusts, all reported high rates of revenue growth in the third quarter compared to one year ago. All of them are building out more capacity across major US markets in response to high demand.
Massive data center construction projects are ongoing in Northern Virginia, Chicago, Dallas, Phoenix, and Silicon Valley, representing hundreds of millions of dollars of investment.
Digital Realty Trust, the second-largest data center REIT after Equinix, announced it will restart construction after an 18-month near-total halt to expansion. The company, whose biggest customers include IBM, Facebook, Oracle, eBay, Amazon, and LinkedIn, recently bought a 2-million-square-foot property in Ashburn, Virginia, with potential for 150 MW of data center capacity.
Digital reported $436 million in revenue for the third quarter, up six percent year over year. The company signed a total of 180,000 square feet and 18 MW of capacity in new and renewed leases during the quarter, representing $54 million in annualized rent.
It’s unclear how Digital will be affected by CenturyLink’s decision to rethink its data center strategy, a decision driven primarily by poor performance of its colocation business. The telecommunications company, Digital’s second-largest customer after IBM, announced last week it is exploring alternatives to owning data centers, including potentially selling some or all of its data center assets.
Since CenturyLink leases most of its data centers, the announcement concerns infrastructure inside leased facilities more than the facilities themselves, but the move could still create uncertainty for Digital, depending on the decisions CenturyLink makes going forward. CenturyLink occupies about 11 percent of all space in Digital’s portfolio and accounts for nearly 7 percent of the annualized rent the landlord receives.
Here’s our coverage of Q3 2015 earnings by each individual US data center REIT:
Equinix, the world’s biggest data center REIT that also happens to be Digital’s third-largest customer, is doing much better than CenturyLink’s troubled colocation unit. The company reported $687 million in revenue for the third quarter – up 11 percent year over year.
Equinix is also expanding capacity, its largest upcoming project being a new campus in Ashburn. The campus has the potential to grow to 1 million square feet of building space, costing the company an estimated $1 billion at full build-out.
Another REIT building large in Northern Virginia is DuPont Fabros Technology. While DuPont leased less than 1 MW of capacity in the third quarter itself, it closed deals on nearly 20 MW in Northern Virginia immediately after the quarter ended.
DuPont brought about 7 MW of capacity online in the third quarter and has nearly 40 MW of capacity under construction in Ashburn and Chicago.
The company reported $115 million in revenue for the quarter – up 6 percent year over year.
CoreSite, a REIT somewhat smaller than DuPont, reported one of the highest revenue growth rates in the group. The company reported about $87 million in revenue for the third quarter, which represented a 23-percent increase from the third quarter of last year.
CoreSite brought 24,000 square feet of data center space online in Q3 and has 100,000 square feet under construction in Northern Virginia. It also has more than 400,000 square feet under construction in Silicon Valley, about 140,000 square feet of which it is building for a single customer.
The data center REIT that saw the biggest jump in revenue in Q3, due primarily to its recent acquisition of Carpathia Hosting, was QTS Realty Trust. Its revenue went up more than 50 percent, reaching $90 million in the third quarter.
QTS closed leases amounting to $15 million in annualized rent during the quarter. It brought online 22,000 square feet and 3.5 MW of capacity and has 45,000 square feet under construction.
The Carpathia acquisition gave it some international footprint, a sizable managed hosting business, and a substantial boost to its customer base among US federal government agencies.
CyrusOne, the last of the publicly traded US data center REITs to report third-quarter earnings, grew its revenue by more than 30 percent year over year. Its Q3 revenue was $111 million. The company also recently made an acquisition, buying Cervalis, a data center provider in the New York market.
CyrusOne leased out about 30,000 square feet of data center space during the quarter, adding $13 million to its annual rent income. It completed build-out of nearly 40,000 square feet of data center space in Q3 in Northern Virginia and has a total of 350,000 square feet under construction in Texas and Arizona.

4:00p
Five Security Best Practices for Cloud and Virtualization Platforms

The growth of data, users, virtual systems, and the cloud itself has created new security concerns spanning the entire data center. There are new types of targets, advanced attack vectors, and a great deal of valuable information that can be compromised. In March 2013, a DDoS attack against Spamhaus shook the cloud world, registering a peak of more than 300 Gbps. A recent Arbor Networks article puts it into perspective: “This is the largest known DDoS attack to date by a significant margin. The previous largest reported (and verified) attacks were at around 100Gb/sec. However, this is not the only example of a large (damaging) DNS reflection/amplification attack to have taken place this year.” More recently, Juniper Research pointed out that the rapid digitization of consumers’ lives and enterprise records will increase the cost of data breaches to $2.1 trillion globally by 2019, almost four times the estimated cost of breaches in 2015.
It’s no wonder that respondents to the latest AFCOM State of the Data Center survey indicate that security is still a top concern when implementing a cloud architecture. Not surprisingly, 32 percent said that security continues to be a big concern around both physical and logical aspects of the cloud.
What are you doing to better protect your virtual platform and cloud environment? Are you creating an intelligent system that can handle these new types of threats? Although there are many solid ideas and best practices, here are five that will get you ahead of the game.
- Utilize intuitive management. Virtualization and cloud computing have helped expand the modern data center. Just like these systems, your security platform must be able to scale. This means utilizing one intuitive security management console for multiple data center and virtualization points. Using permissions, administrators can have specific access to authorized security areas. This unified console allows for improved visibility into the complete virtual layer as well as the workloads accessing virtual resources.
- Create security efficiency. One misconception is that security is just an extra layer creating additional overhead. The reality is that it doesn’t have to be. Better integration at the hypervisor layer, improved resource utilization, and even agentless technology are all improving how your security platform integrates with your data center. Security can help improve efficiency by allowing administrators to deploy solid policies while still maintaining VM density.
- Integrate security scalability. Modern cloud and virtualization environments now span multiple locations. In the same respect, your security platform must be able to scale as well. Whether you have instances in public, private, or hybrid cloud, your security must be able to handle new types of requirements. Scalable security means the capability to handle high-density multi-tenant cloud and virtualization environments. So, if you’re spanning multiple cloud types and data centers, make sure to utilize a security solution that can span as well.
- Be proactive! Imagine being able to capture malicious attacks before they even hit your VM, or to enforce policies while a virtual workload is still being provisioned (a minimal sketch of this idea follows the list). Creating security intelligence and automation allows administrators to focus on new types of deployment methodologies while the security engine does its work. As attacks against the modern data center continue to evolve, your security infrastructure will need to stay proactive and agile.
- Integrate compliance and regulation. For those organizations bound by compliance and regulations, cloud infrastructure can be a bit of a challenge. With that in mind, you can still deploy security platforms that enforce PCI-DSS, HIPAA, and Sarbanes-Oxley compliance and security standards. These security platforms go far beyond standard AV services. Compliance-ready security platforms will utilize integrated firewalls and intrusion detection services, and even ensure complete traffic control and isolation between virtual workloads.
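As a concrete illustration of the “be proactive” practice above, here is a minimal sketch of tag-driven policy enforcement at provisioning time. Everything in it is hypothetical (the `Workload` class, the `POLICIES` catalog, the tag and rule names); real platforms expose this through their own orchestration and security APIs.

```python
# Minimal sketch: attach a security policy to a VM at provisioning time,
# before it ever receives traffic. All names here are hypothetical.
from dataclasses import dataclass, field

# Hypothetical policy catalog: tag -> rules the security engine should enforce.
POLICIES = {
    "pci": ["deny-all-inbound", "allow-payment-subnet", "enable-ids"],
    "web": ["allow-http-https", "rate-limit-inbound", "enable-ids"],
    "default": ["deny-all-inbound"],
}

@dataclass
class Workload:
    name: str
    tags: list = field(default_factory=list)
    rules: list = field(default_factory=list)

def enforce_at_provisioning(workload: Workload) -> Workload:
    """Attach security rules before the workload is powered on."""
    matched = [t for t in workload.tags if t in POLICIES] or ["default"]
    for tag in matched:
        workload.rules.extend(POLICIES[tag])
    return workload

# Usage: an orchestrator would call this hook during provisioning.
vm = enforce_at_provisioning(Workload("billing-vm-01", tags=["pci"]))
print(vm.rules)  # ['deny-all-inbound', 'allow-payment-subnet', 'enable-ids']
```

The point of wiring the hook into provisioning, rather than applying rules after first boot, is that the workload is never exposed without a policy, even briefly.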
Remember, there are a lot of great ways to optimize and enhance your virtualization and cloud infrastructure. However, it’s always critical to take security into direct consideration. With modern security platforms, data center administrators can truly leverage scalable intelligence. Policies, controls, and visibility now scale between data centers and various data points. Security isn’t just a component of your environment; it is also a means to directly optimize and enhance the performance of your virtual platform.

5:50p
Emergence of 64-bit ARM in Today’s Data Centers

John Williams is Vice President of Marketing and Product Management for AppliedMicro.
The technologies that power the data center have grown at an incredible rate in recent years. What was once considered “state of the art” is now deemed outdated.
Next-generation servers have broken the commodity mold, offering capabilities ranging from customizable solutions with application accelerators to appliances that address specific workloads. Data center software is also evolving at breakneck speed. Use of high-level languages, virtualization, and open source software has opened the door to a new generation of server solutions and deployment models – solutions that are not necessarily constrained to the almost 40-year-old x86 instruction set.
In short, the “one-size-fits-all” data center is dead. To achieve order-of-magnitude increases in application performance and reductions in operational costs, new approaches are required. The future of the data center is a broad set of solutions using cost-effective, energy-efficient processors, new platform architectures, and workload accelerators to achieve maximum performance, power efficiency, and scalability.
Server-based compute is rapidly becoming a commodity. In the past, symmetric multiprocessing (SMP) was used to scale compute resources with a unified memory and input/output (I/O) subsystem. There was always more demand for compute cycles. For years, enhancements to pipeline architectures and new fabrication process technologies drove performance upwards – gigahertz was a relevant performance metric. All of that changed in 2005. Dual-core server processors were introduced and dramatically changed the amount of compute available and the level of power efficiency for these devices. Today, server processors with 8, 16, or more compute cores are common.
Server utilization in 2005 was generally poor, with rates of 10 percent or less not uncommon. Virtualization enabled server consolidation to better utilize compute resources, but that merely moved the bottleneck. Compute has become a commodity in today’s data center. IT organizations rarely invest in the highest-performance “top-bin” processors. Why? They are expensive, and the compute they provide is difficult to monetize in the majority of data center workloads. Servers need more memory and better I/O subsystems to scale performance, not more compute. Few workloads are compute-bound today. The problem this creates is that to get access to badly needed memory, one must add processors whose additional compute resources are generally of little value in achieving higher workload performance (the back-of-the-envelope sketch below illustrates the imbalance).
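To make that imbalance concrete, here is some illustrative arithmetic. The core counts, memory capacities, and workload figures below are hypothetical round numbers, not vendor data; the point is simply that when memory capacity is tied to socket count, hitting a memory target forces the purchase of compute the workload cannot use.

```python
# Back-of-the-envelope: how many sockets (and therefore cores) must be
# bought just to reach a memory target? All figures are hypothetical.

CORES_PER_SOCKET = 16        # hypothetical server processor
MAX_MEM_PER_SOCKET_GB = 128  # memory addressable per socket (hypothetical)
CORES_NEEDED = 8             # what a non-compute-bound workload actually uses

def sockets_for_memory(target_mem_gb: int) -> int:
    # Ceiling division: partial sockets cannot be purchased.
    return -(-target_mem_gb // MAX_MEM_PER_SOCKET_GB)

for target in (128, 256, 512):
    sockets = sockets_for_memory(target)
    cores = sockets * CORES_PER_SOCKET
    idle = max(0, cores - CORES_NEEDED)
    print(f"{target:>4} GB -> {sockets} socket(s), {cores} cores, "
          f"{idle} idle ({idle / cores:.0%} of purchased compute)")
```

With these assumed numbers, reaching 512 GB means buying 64 cores of which 56 sit idle, which is exactly the mismatch the paragraph above describes.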
So what does this all mean to the data center of tomorrow?
- Adoption of scale-out compute platforms running distributed workloads across servers with a healthy balance of compute, memory and I/O will continue at a rapid pace.
- Performance will increasingly be a workload metric – not a processor metric. Synthetic CPU benchmarks will become an increasingly weak predictor of delivered application performance (see the measurement sketch after this list).
- Adoption of new server compute architectures like ARM with the 64-bit ARMv8 architecture will accelerate based on a rapidly expanding enterprise software ecosystem.
- Data center costs will drop. With ARM and other broadly available architectural alternatives, multiple suppliers will offer differentiated, workload-optimized solutions driving competition and innovation – something that has been sorely lacking in recent years.
- Server platforms will become more differentiated based on rack density, underlying compute architecture, memory, storage and networking. Platform vendors will offer more appliance-like solutions – ‘the right tool for the job.’
- You won’t care what instruction set the processor is running.
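To illustrate the workload-metric point from the list above, here is a trivial measurement harness: instead of quoting a synthetic CPU score, it reports what the application path actually delivers. The `handle_request` function is a hypothetical stand-in for a real request handler.

```python
# Minimal sketch: report delivered throughput (requests/sec) for a
# workload instead of a synthetic CPU score. `handle_request` is a
# hypothetical stand-in for a real, likely memory-bound, handler.
import time

def handle_request(payload: bytes) -> int:
    # Stand-in work: one pass over the payload (memory traffic, not math).
    return sum(payload)

def measure_throughput(seconds: float = 2.0) -> float:
    payload = bytes(64 * 1024)  # hypothetical 64 KB request body
    done, deadline = 0, time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        handle_request(payload)
        done += 1
    return done / seconds

print(f"delivered throughput: {measure_throughput():.0f} requests/sec")
```

Two servers with very different synthetic CPU scores can report similar numbers on a harness like this when the handler is bound by memory or I/O rather than compute.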
The availability of ARM-based solutions is an important step in the evolution of the data center. IT organizations are clearly seeing that a solution offering a balance of strong compute, high integration, large memory, and excellent power efficiency is a powerful tool for critical workloads ranging from web serving and caching to in-memory databases and data analytics. What is key to these new solutions?
- Large memory is not an upsell. The processor solution is the same whether one chooses to address 32 gigabytes of memory or 256 gigabytes of memory.
- Power efficiency is not an upsell. ARM is inherently power efficient. There is no premium for a low-power 35-watt processor – that is just what the product is.
ARM-based server platforms for both compute and storage workloads are in production and available today. The software ecosystem has developed and matured rapidly and is enterprise-ready. Data centers are deploying the technology now. The list of silicon and platform suppliers continues to grow as we enter 2016. After multiple years of ARMv8 development by silicon vendors, original equipment manufacturers (OEMs) and software vendors, the ARM adoption cycle is accelerating. The data center will never be the same.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

6:01p
How the Trans-Pacific Partnership Will Affect Cloud Service Providers

This article originally appeared at The WHIR
While certain chapters had been previously leaked, the text of the controversial Trans-Pacific Partnership trade agreement has now been officially released, outlining measures that could eliminate local data storage requirements and enforce common standards for the protection and enforcement of intellectual property rights.
Described as “NAFTA on steroids”, the TPP is a trade and foreign investment agreement between 12 nations: Australia, Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore, the US, and Vietnam.
Internet activists have warned politicians that TPP could force sites to remove content that allegedly infringes on copyright without a court order, introduce harsh criminal penalties for journalists and whistleblowers, punish internet users who share copyrighted material, and put restrictive limits on “Fair Use”. More than 250 tech companies and digital rights organizations have stated their opposition to the TPP Fast Track plan.
In June, the Senate voted 60-38 to give the president fast-track authority to pass the TPP and other trade deals, barring lawmakers from making amendments to such agreements.
Fast Track authority helps the President pass other secretive trade deals such as the Trans-Atlantic Trade and Investment Partnership (TTIP) between the US and the EU, and the 51-nation Trade in Services Agreement (TiSA).
Removing Barriers to Cloud Services and Data Transfers, But at What Cost?
International trade policy is still catching up to the exchange of digital goods and services.
When the TPP first became known, software industry association BSA noted that an international agreement could potentially work against a “new wave of IT-focused protectionism” fueled by data security and privacy fears that have been keeping online entrepreneurs out of international markets.
It would do this by ensuring data can flow easily across borders, that participating countries would honor a common set of intellectual property protections, and that incumbent service providers would be forced to compete with international competitors so that consumers have access to the most innovative options.
Some big tech companies like aspects of the TPP that restrict data localization laws, which could require hosting data within a country’s borders. TPP member countries could not require companies to build data centers to store data as a condition for operating in a TPP market, nor could they require that the source code of companies’ software be transferred or made accessible.
But, as the EFF points out, “a trade agreement is the wrong place for a sweeping prohibition of such practices.” They explain that it does too little to ensure that sensitive user data is safely transferred and stored overseas.
In fact, the TPP chapters dealing with electronic commerce and telecommunications essentially prioritize trade interests over privacy rights. The security and confidentiality of messages and privacy of end-user personal data is clearly secondary to unfettered trade in telecommunications services, according to the EFF’s interpretation.
Changes to Copyright for TPP Member Countries
Service providers obviously handle a lot of user data, and copyright law is a huge area of concern for them.
As described in Chapter 18, TPP member countries would have to comply with a notice-and-takedown regime similar to the US Digital Millennium Copyright Act (DMCA), as opposed to other schemes of enforcement such as Canada’s notice-and-notice system.
This means that Internet Service Providers must “promptly remove or disable access to the identified materials upon receipt of a verified notice [of infringement]; and be exempted from liability for having done so in good faith in accordance with those guidelines.” This could lead ISPs to remove any hosted material over which someone claims copyright infringement, simply because challenging copyright claims could expose them to legal liability.
Of course, there are fines for “misrepresentation in a notice or counter-notice that causes injury to any interested party,” but it’s unclear how effective these fines would be.
ISPs are also required to forward notices of alleged infringement to their customers or else be fined.
Also, under TPP, half of the 12 negotiating countries would see their copyright terms extended by 20 years, beyond the life-plus-50-years minimum specified in the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS).
TPP member countries could also be sued by a company under the agreement’s “Investor-State Dispute Settlement” (ISDS) provisions if a country passes a law that undermines the company’s ability to exploit its intellectual property. This means that democratically decided user protections could be attacked through an ISDS challenge.
In essence, the TPP would open up new markets to large tech companies, but it would do little to make cross-border data sharing safer. It would also change how many service providers police their services, requiring them to handle domestic and foreign copyright infringement requests differently – which could be more costly and subject their users to new takedown notices. This all factors into the costs of this trade agreement.
This first ran at http://www.thewhir.com/web-hosting-news/how-the-trans-pacific-partnership-will-affect-cloud-service-providers

6:30p
Microsoft Acquires Israeli Security Firm Secure Islands

This article originally appeared at The WHIR
Microsoft announced on Monday that it has acquired Israeli security company Secure Islands. The terms of the deal were not disclosed.
According to Microsoft, the acquisition will help its customers secure their business data regardless of its storage location – whether it’s on-premises, in Microsoft cloud services like Azure and Office 365, or in third-party services.
In a blog post by Microsoft’s Takeshi Numoto, corporate vice president, cloud and enterprise marketing, Microsoft said that Secure Islands provides data classification, protection and loss prevention technologies for “virtually any type of file.”
Secure Islands is just one of the security companies from Israel that Microsoft has scooped up in recent years. In September, the company acquired cloud application security company Adallom, and last year Microsoft bought enterprise cloud security company Aorato.
These cross-platform security solutions seem to align with the belief that multi-cloud will be a common approach for enterprises, and they will need security solutions that protect data seamlessly across these environments.
“By joining Microsoft, we will be able to extend and expand our vision,” Secure Islands CEO Aki Eldar said in a statement. “Microsoft has been a long time partner and its leadership in enterprise IT, its resources and global reach will help us innovate and deliver new information protection capabilities to both our current and new customer base.”
After the acquisition is completed, Secure Islands’ technology will be integrated into Azure Rights Management Service. Secure Islands will continue to sell its existing solutions and provide support to customers.
This first ran at http://www.thewhir.com/web-hosting-news/microsoft-acquires-israeli-security-firm-secure-islands

7:26p
Report: Verizon May Sell Former Terremark Data Centers

Verizon Communications has retained Citigroup to help it explore a potential sale of its enterprise services assets, including those gained when it acquired business landline and internet service provider MCI and those gained through the $1.4 billion acquisition of data center provider Terremark in 2011, Reuters reported, citing anonymous sources.
The total value of the assets in question is about $10 billion, the sources told the news service. A Verizon spokesperson declined to comment.
Verizon is one of several major telcos looking for alternatives to ownership of their data center assets.
Verizon’s Terremark deal was one of two big data center provider acquisitions by telcos in 2011. The second was CenturyLink’s $2.5 billion acquisition of Savvis. A smaller deal along similar lines that year was Time Warner Cable’s $230 million NaviSite acquisition.
The wave of acquisitions was at the time explained by telecommunications companies wanting to leverage their network assets to expand into cloud services. But the handful of giants in the cloud market — Amazon Web Services, Microsoft Azure, and to a lesser extent Google Cloud Platform and IBM SoftLayer — have since then only increased their dominance and pushed down pricing, making it more and more difficult for others to compete.
CenturyLink announced last week it was exploring alternatives to owning its massive data center fleet, a substantial portion of which it gained from the Savvis deal. CenturyLink’s colocation business has not been growing revenue, and company officials said they wanted to focus investment on more profitable businesses.
AT&T, Verizon’s biggest competitor in the wireless market, has also been evaluating a potential sale of about $2 billion worth of data center assets, Reuters reported earlier this year also citing anonymous sources.
Last month, Windstream Communications agreed to sell its data center business to TierPoint, a roll-up buying data centers in underserved regional markets, for $575 million.
Like its peers, Verizon is facing a highly competitive data center and enterprise IP networking market. In comments on the company’s third-quarter earnings call in October, Verizon CFO Francis Shammo indicated that the company was taking a hard look at its enterprise portfolio.
“Within the data center space, now there is an awful lot of competition happening with price compression,” Shammo said. “I think what you are seeing is the trend that we think will continue as we revamp the portfolio, if you will, and come into more of whether we’re going to [be] willing to compete.”
Verizon’s enterprise revenue in the third quarter declined 5 percent year over year. Shammo attributed the decline to “secular and economic challenges.”

7:59p
Linus Torvalds: Perfect Security in Linux is Impossible

This post originally appeared at The Var Guy
Does Linus Torvalds fail to take security in the Linux kernel seriously, and is the world doomed because of it? That’s what the Washington Post suggests in a recent article about security in the open source OS.
The Post sums up Torvalds’s take on security as follows: “Security of any system can never be perfect. So it always must be weighed against other priorities — such as speed, flexibility and ease of use — in a series of inherently nuanced trade-offs.”
The Post also describes Torvalds as “the man who holds the future of the Internet in his hands.”
Taken together, the two points suggest that Torvalds is not serious enough about security in Linux, and that his lackadaisical approach endangers everyone who uses the Internet.
Both claims are problematic. First, it’s a pretty big — if flattering — stretch to say that Torvalds holds the Internet in his hands. The Linux kernel is an important part of the Internet because it powers many servers and networking devices, but there is much, much more to the Internet than Linux. The developers of the Apache HTTP server, PHP or MySQL, among other software platforms that play central roles in the Internet, are just as significant as the man behind the Linux kernel.
More important, there is arguably much to be said for Torvalds’s attitude toward security. Torvalds recognizes and is willing to admit that a completely secure system can simply never exist, since it’s impossible to be certain that no security vulnerability exists in any layer of a software stack.
That makes his message different, and less comforting, than that of developers who promise to deliver hacker-proof platforms. But those are false promises. It’s much healthier to admit that limitations exist than to cling to a fantasy where there are never security vulnerabilities.
Of course, if CTOs of major companies frankly admitted to the public that their software systems almost definitely have security flaws, and always will, their businesses would suffer. Torvalds can get away with more candor when it comes to Linux security. He doesn’t have a job to keep or a company’s image to promote.
All the same, it’s disappointing to see a platform like the Post — which has many non-technical readers who may think making software secure is just a matter of investing enough money in security — defame the Linux kernel for security issues.
After all, Linux has powered millions of servers for more than two decades without being the source of security breaches that have resulted in the theft of millions of people’s personal information. Increasingly few developers of other platforms can say the same in an era of recurring disclosures about massive security breaches of the software systems at businesses and government agencies.
This first ran at http://thevarguy.com/open-source-application-software-companies/linus-torvalds-perfect-security-open-source-linux-os-impo