Data Center Knowledge | News and analysis for the data center industry
Tuesday, December 30th, 2014
1:00p | Equinix to Start 2015 as Data Center REIT
Colocation services giant Equinix will start 2015 as a real estate investment trust. The company’s board of directors unanimously approved the conversion earlier this month, and the company expects to receive approval by the end of 2014 or early 2015. It will begin operating as a REIT on January 1, 2015.
A REIT is a corporation that pools the capital of many investors to purchase and manage income property. The structure was introduced in 1960 to give small investors a way to invest in large-scale commercial real estate through securities. Data centers are unique real estate, but real estate nonetheless, so many of the large providers have either converted or contemplated conversion.
In a REIT, income comes from rent on its properties, and REITs are legally required to distribute at least 90 percent of their taxable income to investors. That means Equinix will make regular distributions of its earnings. In addition to boosting investor interest, analysts say a REIT conversion would result in lower taxes for Equinix.
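As a rough illustration of that payout rule (the income figure below is hypothetical, not Equinix’s actual financials):

```python
# Hypothetical illustration of the REIT distribution rule described above.
# The taxable-income figure is invented for the example; it is not Equinix data.
taxable_income = 500_000_000                     # hypothetical annual taxable income, in dollars
minimum_distribution = 0.90 * taxable_income     # REITs must pay out at least 90 percent

print(f"Minimum required distribution: ${minimum_distribution:,.0f}")
# -> Minimum required distribution: $450,000,000
```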
“Equinix’s REIT conversion equates to substantial tax savings, which could potentially be reinvested back into the business in the form of additional capital to further strengthen its global position in new and existing markets,” said Jabez Tan, senior analyst at Structure Research. “Converting to REIT status also adds more transparency and visibility to the broader colocation market with data center asset disclosures.”
The decision isn’t as cut-and-dried as it looks on paper, however. Several factors need to be weighed in making the conversion.
“Due to sizable initial costs associated with REIT conversions, a certain level of stability in the business is required before pursuing such a move,” said Tan. “We expect to see more such REIT conversions, as the data center colocation market continues its current trajectory of growth and maturity.”
Equinix’s journey to become a REIT began a few years ago, and its shares surged when its intentions became official. When CFO Keith Taylor announced on the company’s fourth-quarter 2011 earnings call that it was exploring the merits of a conversion, he said the company was analyzing various structures, including alternative financing, capital, and tax strategies, in a bid to maximize long-term shareholder value.
“I see it as a tax strategy, not a business model,” said Compass Data Centers CEO Chris Crosby. “It makes a lot of sense for them. This was not a rash decision; they’ve been studying it for a while. The cash they can generate is much better from a tax perspective. It’s the right structure for the right time.”
Crosby was formerly with Digital Realty Trust, the biggest data center REIT. Since leaving the company, he has grown Dallas-based Compass from an upstart to a serious player in just a few years. Crosby said the REIT model is good for stabilized assets, but there’s a question mark when the capital is going toward growth.
The REIT Class
Several of the major data center players are structured as REITs, including Digital Realty, DuPont Fabros, CoreSite Realty, QTS Realty Trust and CyrusOne. Iron Mountain recently joined the REIT fray after announcing its intentions around the same time as Equinix. Windstream Communications is planning on splitting the company in two, creating a separate operating business for REIT assets and one for networking assets.
The IRS put data center REIT conversions on hold in 2013 because of gray areas around data center revenue from services and the colocation leasing model. Data centers are a unique form of real estate in terms of construction and business model, so there was a question as to whether they qualified.
Data centers are big business. The colocation market was pegged at $25 billion by 451 Research, with half the revenue coming from the top 60 providers. Equinix is a major player in both primary domestic and international markets.
4:00p | Apprenda, Piston Partner on Joint Turnkey Private PaaS
Platform-as-a-Service provider Apprenda is partnering with Piston Cloud Computing, a web-scale infrastructure and automation company, to provide the technology companies need to stand up a private PaaS. The two are delivering an integrated, turnkey cloud solution for building Java and .NET cloud applications and microservices. Piston provides the OpenStack private cloud, while Apprenda provides the PaaS layer for application development.
Both companies have focused on on-premises cloud enablement. Combined, they form a turnkey private PaaS with policy-based access to OpenStack APIs. Enterprises can use the two in conjunction to develop and launch cloud apps or to modernize on-premises applications for SaaS delivery.
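For a sense of what programmatic access to OpenStack can look like underneath a PaaS layer, here is a minimal sketch of a request to the OpenStack Compute (Nova) API. The endpoint, token, and IDs are placeholders, and this is not Apprenda’s actual integration with Piston.

```python
# Minimal sketch: launching a VM through the OpenStack Compute (Nova) REST API.
# The endpoint, token, and image/flavor IDs below are placeholders; this only
# illustrates the kind of API a PaaS layer would drive underneath.
import requests

NOVA_ENDPOINT = "https://cloud.example.com:8774/v2.1"   # placeholder OpenStack endpoint
AUTH_TOKEN = "placeholder-keystone-token"                # obtained from Keystone beforehand

server_request = {
    "server": {
        "name": "apprenda-worker-01",
        "imageRef": "IMAGE-UUID-PLACEHOLDER",
        "flavorRef": "FLAVOR-ID-PLACEHOLDER",
    }
}

resp = requests.post(
    f"{NOVA_ENDPOINT}/servers",
    json=server_request,
    headers={"X-Auth-Token": AUTH_TOKEN},
)
resp.raise_for_status()
print("Requested server:", resp.json()["server"]["id"])
```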
Piston provides enterprise-grade Infrastructure-as-a-Service private clouds. It integrates several technologies into the standard, open-source OpenStack framework to provide a more complete and easily deployable private cloud solution.
Piston was founded in early 2011 by technical team leads from NASA and Rackspace. Co-founder Joshua McKenty, who is now with Pivotal, was an OpenStack founding father.
Piston was early to market with its first commercial OpenStack product in 2012 and raised $8 million in 2013. CEO Jim Morrisroe recently commented on what he said was the negative influence of legacy vendors on the open source cloud project.
“From day one, Piston set out to build a product that would eliminate the complexity associated with managing and deploying a traditional on-premise environment,” Morrisroe said in a joint release. “And together with Apprenda, we provide customers additional freedom from data center complexity by delivering a scalable turn-key IaaS and PaaS solution, out of the box.”
Apprenda’s PaaS provides policy-based access controls that make it suitable for the enterprise. Its enterprise customers include AmerisourceBergen, JPMorgan Chase, and McKesson. Apprenda, originally a .NET-focused platform, added Java support early this year.
4:30p | Data Center Design: Five Key Areas of Focus
Yann Morvan is the Director of Product Management at Legrand.
From Google Glass to Nest thermostats to Fitbit wristbands and beyond, the Internet of Things is here to stay and will drastically shape our digital economy for many years to come.
According to Cisco, there will be more than 5 billion connected devices worldwide by 2020 – a tsunami of data that all flows through data centers. Dealing with this exponential growth is no easy task, especially when planning today means keeping an eye 10 to 15 years into the future.
A data center solution that considers performance, time, space, sustainability, and experience will be reliable, flexible enough to grow, and efficient in many ways. Let’s demonstrate the value of these five key elements by assessing a newer technology such as refrigerant-based close-coupled cooling.
Key Elements at Work
Performance. Uptime, speed, latency … optimum performance ultimately comes from the quality of your structured copper and fiber cabling systems working seamlessly with your switching, computing, and storage gear. As such, protecting the cabling system is essential and can be managed through efficient cooling design along with adequate airflow management. Close-coupled cooling provides the shortest route between the heat source and the cooling unit, while leveraging the benefits of integrated cable management inside the cabinet. It can also give you the unique benefit of 20 kW (kilowatts) of N+1 in-rack redundancy.
Time. Data centers are growing in size and complexity but often require faster deployment times. Consider also that 90 percent of active equipment will be replaced within five years or less. A modular solution offers scalability: capacity can be added when needed to support densities from 10 to 30 kW per rack within the same infrastructure. When co-engineered with the enclosure, the cooling solution can be installed easily, saving significant time and reducing waste and packaging.
Space. Space is at a premium in the data center. Optimizing computing power per square foot is only possible with higher cooling efficiency. With close-coupled cooling, you can increase the density of a rack to 30 kW over time without changing the current setup of your infrastructure. Another trend is to grow vertically. Traditional racks and cabinets are 7 feet tall, or 42 RU. We now see some 9-foot racks (up to 58 RU), offering 38 percent more space. Integrated cooling compatible with taller racks fully realizes this benefit. It is now an option to reduce the dedicated cooling footprint in the white space by 90 percent by eliminating the need for CRACs.
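A quick back-of-the-envelope check of the rack-height arithmetic above, using the 42 RU and 58 RU figures from that paragraph:

```python
# Back-of-the-envelope check of the rack-space claim above: a traditional
# 42 RU cabinet versus a taller 58 RU cabinet.
standard_ru = 42
tall_ru = 58

extra_space = (tall_ru - standard_ru) / standard_ru
print(f"Additional usable rack units: {extra_space:.0%}")
# -> Additional usable rack units: 38%
```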
Sustainability. Sustainability can mean many different things, but at the end of the day a data center manager has a growing responsibility to look for solutions that lower environmental impact (and reduce OPEX). A comprehensive approach, taking into account active and passive cooling, power distribution, airflow control, and physical support with cable management, is needed to ensure optimal energy efficiency and performance. When integrated within the enclosure, close-coupled cooling provides a 95 percent reduction in annual power consumption versus traditional CRACs of equal cooling capacity, since it captures heat much closer to the source. Local utility incentives may also be available as a result of significant efficiency gains.
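To put that efficiency claim in concrete terms, here is a hypothetical calculation; the CRAC baseline figure is invented for illustration, and only the 95 percent reduction comes from the discussion above:

```python
# Hypothetical illustration of the cooling-energy comparison above. The CRAC
# baseline figure is invented for the example; only the 95 percent reduction
# comes from the discussion above.
crac_annual_kwh = 200_000        # assumed annual energy use of a traditional CRAC setup
reduction = 0.95                 # claimed reduction for enclosure-integrated close-coupled cooling

close_coupled_kwh = crac_annual_kwh * (1 - reduction)
print(f"Close-coupled cooling: {close_coupled_kwh:,.0f} kWh/year "
      f"(saving {crac_annual_kwh - close_coupled_kwh:,.0f} kWh/year)")
# -> Close-coupled cooling: 10,000 kWh/year (saving 190,000 kWh/year)
```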
Experience. Putting the pieces together can be very daunting. A data center build can be made easier by selecting the right partner, one who will provide guaranteed performance, the ability to customize solutions, and assurance that those solutions work seamlessly together. Look for a manufacturer offering a single point of contact with expertise across all the components of the overall solution and the resources to help coordinate the project, from solution design through logistics and installation.
These five key elements are critical to achieving complete efficiency in data center design. As important as cooling is to your bottom line, you should carefully evaluate your connectivity and physical infrastructure solutions as well. When designed together, they should form a complete, integrated data center. That “connected infrastructure” is what truly meets the needs of today and tomorrow.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
7:02p | Emerson Sells Power Transmission Business to Regal for $1.44B
Emerson Electric has agreed to sell its Power Transmission Solutions business unit to Regal-Beloit Corp., which owns a multitude of brands manufacturing electric motors and other components for commercial, residential, and industrial equipment.
The transaction values Emerson’s business unit at $1.44 billion. Emerson Chairman and CEO David Farr said the unit did not fit the company’s present strategy. “While the Power Transmission Solutions business no longer aligns with the strategic direction of our portfolio, it is a strong business with an outstanding management team that has created significant value while part of Emerson,” he said in a statement.
Emerson’s Network Power business is one of the biggest suppliers of electrical and mechanical equipment for data centers. It is well known in the data center industry for its Chloride UPS systems, Liebert cooling and power products, and Trellis data center infrastructure management software, among other brands.
According to Emerson, its Power Transmission Solutions business, which produces couplings, bearings, and drive components, among other products, as well as supporting services and solutions, reported 2014 revenue of more than $600 million. That makes it a major addition to Regal, which made about $195 million in profit on about $3.10 billion in revenue in 2013.
Emerson’s market capitalization is about $43.30 billion, while Regal’s is about $3.43 billion.
7:30p | Using Storage APIs and Creating the Kinetic Cloud
Recently, Seagate announced a revolutionary way for applications to communicate with their storage and resource back-end. According to Rocky Pimentel, Seagate executive vice president and chief sales and marketing officer, “With the Seagate Kinetic Open Storage platform, our internal R&D teams have designed a unique, first-of-its-kind storage architecture to enable cheaper, more scalable object storage solutions that free up IT professionals from having to invest in hardware and software they don’t need, while empowering them with the most innovative storage technology available. This technology optimizes storage solutions for a new era of cloud storage systems, while drastically reducing overall costs.”
So what does this all really mean? The reality is that infrastructure, applications, and the cloud are becoming a lot more intelligent. Just as the modern hypervisor did for virtualization, we are trying to eliminate extra resource hops between our workloads and the resources they utilize. Optimizing that communication, now at the application and cloud layer, creates quite a few new kinds of benefits for your organization.
- Better application performance. Imagine cutting out the proverbial “middle man” when it comes to storage communication. By incorporating the Kinetic Open Storage platform, applications will be able to manage storage features and capabilities directly and be rapidly implemented and deployed in any cloud storage software stack.
- Simplified scale-out architecture. Redefining hardware and software capabilities, the platform enables cloud service providers and independent software vendors to optimize scale-out file and object-based storage. This means that new applications can leverage storage at a truly distributed cloud layer.
- Creating the next-generation open source storage platform. This could have gone one of two ways: proprietary or open source. Fortunately, the storage API structure being developed by Seagate will leverage open source technology capable of rapid application-centric deployment. This open source storage API will be able to integrate with a variety of systems on numerous storage platforms.
- Optimizing storage and data controls. Beyond what Seagate can do, we are seeing a direct change in how data is processed and controlled within the cloud. The proliferation of big data and mobile computing has changed how data traverses both the data center and the cloud. New extensions into OpenStack, CloudStack, and other cloud management platforms are creating truly elastic storage designs. Ultimately, this helps both large and small organizations improve how they control and distribute their data.
- New use-cases for storage control and delivery. There’s just so much new data out there to manage. In fact, a recent Gartner report indicates that by 2016, 75 percent of currently deployed data warehouses will not scale sufficiently to meet the new velocity and complexity of information demands. Without good data control, there’s just no way corporations can deliver the right information at the right time to support enterprise outcomes. This can become a serious business-model issue very quickly. Organizations will be looking at ways to optimize data control and make their storage environments a lot more agile.
Seagate went on to define its Kinetic Open Storage platform as the first device-based storage platform that enables independent software vendors, cloud service providers, and enterprise customers to optimize scale-out file and object-based storage, delivering lower TCO. Seagate Kinetic Storage comprises storage devices + key/value API + Ethernet connectivity.
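To make the key/value idea more tangible, below is a rough sketch of how an application might address an Ethernet-attached, key/value drive. This is not Seagate’s actual Kinetic client library; the class, methods, and default port are invented purely for illustration.

```python
# Hypothetical sketch of an Ethernet-attached, key/value drive interface.
# This is NOT Seagate's actual Kinetic client library; the class, method
# names, and default port are invented for illustration only.
from typing import Dict, Optional


class KineticStyleDrive:
    """Illustrative key/value store addressed over the network like a Kinetic-style drive."""

    def __init__(self, host: str, port: int = 8123):
        # A real implementation would open a network connection to the drive here.
        self.address = (host, port)
        self._store: Dict[bytes, bytes] = {}   # stand-in for on-drive storage

    def put(self, key: bytes, value: bytes) -> None:
        # The application talks to the drive directly: no block layer, no file system.
        self._store[key] = value

    def get(self, key: bytes) -> Optional[bytes]:
        return self._store.get(key)


# Usage: objects are addressed by key rather than by block or file path.
drive = KineticStyleDrive("192.0.2.10")              # placeholder drive address
drive.put(b"user:42:avatar", b"...image bytes...")
print(drive.get(b"user:42:avatar"))
```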
This is a good place to pause and understand that modern APIs are the connection point for the cloud.
- Many organizations already use more than one cloud provider or platform. More providers are offering generic HTTP and HTTPS API integration to give their customers greater cloud versatility (a minimal sketch of such an integration follows this list). Furthermore, cross-platform APIs give cloud tenants the ability to access resources not just from their primary cloud provider, but from others as well.
- The rapid provisioning and de-provisioning of cloud resources is something an infrastructure API can help with. Network configuration and workload (VM) management are also areas where these APIs are used.
- We are already seeing a very rapid change in storage utilization because of cloud and IT consumerization. The move toward an ever-connected user and the Internet of Everything is creating a new level of data in the cloud. APIs, open source platforms, and even big vendors are making it much easier to integrate storage technologies.
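As a sketch of the generic HTTPS API integration mentioned in the first bullet, the snippet below pulls an instance inventory from two clouds through one helper. The endpoints, tokens, and response shapes are placeholders rather than any specific provider’s API.

```python
# Minimal sketch of generic HTTPS API integration across two cloud providers.
# Endpoints, tokens, and response shapes are placeholders; real providers
# differ in paths, authentication, and payload formats.
import requests

PROVIDERS = {
    "primary":   {"base": "https://api.primary-cloud.example/v1",   "token": "TOKEN-A"},
    "secondary": {"base": "https://api.secondary-cloud.example/v2", "token": "TOKEN-B"},
}

def list_instances(provider: str) -> list:
    """Fetch the tenant's instances from whichever provider is named."""
    cfg = PROVIDERS[provider]
    resp = requests.get(
        f"{cfg['base']}/instances",
        headers={"Authorization": f"Bearer {cfg['token']}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("instances", [])

# A tenant script can then pull inventory from both clouds with the same call.
for name in PROVIDERS:
    print(name, len(list_instances(name)), "instances")
```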
Beyond cool technological advancements, the reality is that this is the trend for modern applications that need to reside in the cloud. The days of the PC are numbered, and the web-enabled device is becoming king. This means organizations are being driven to become application-centric data center shops; more and more, apps hold the keys to a business’s kingdom. The future application will have fewer layers to communicate with to ensure optimal resource utilization, and most storage systems of the future will probably revolve around flash-based arrays as the spinning disk becomes obsolete.
8:00p | Website for DNS Organization ISC Down After Malware Discovery
This article originally appeared at The WHIR
The website of the Internet Systems Consortium, the non-profit organization behind the BIND Domain Name System software, is down for maintenance after administrators found signs of a possible malware infection.
Since ISC also operates the F-root name server, one of the 13 Internet root name servers underpinning the global Internet, some worry that this infection could have an enormous impact, despite the organization saying otherwise.
According to the message displayed on ISC.org, the WordPress CMS is likely the point of infection, and other network resources, including the FTP site from which BIND can be downloaded and the ISC Knowledge Base documentation, are hosted separately.
ISC notes that the malware incident has resulted in no infections of client machines, but it is advising those who have recently accessed the site to scan their systems for malware. ZDNet’s Steven J. Vaughan-Nichols further recommends that site admins monitor their DNS logs for suspicious activity.
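A minimal sketch of that kind of DNS log check might look like the following; the log path, log format, and watchlist are assumptions that would need to be adapted to your own resolver.

```python
# Minimal sketch of the DNS log check recommended above: scan a BIND query log
# for lookups of domains on a local watchlist. The log path, log format, and
# watchlist below are assumptions; adjust them to your resolver's configuration.
import re

QUERY_LOG = "/var/log/named/query.log"                                  # assumed BIND query log location
WATCHLIST = {"example-malware-domain.test", "angler-landing.example"}   # placeholder indicators

query_re = re.compile(r"query:\s+(?P<name>\S+)\s+IN\s+", re.IGNORECASE)

with open(QUERY_LOG) as log:
    for line in log:
        match = query_re.search(line)
        if not match:
            continue
        name = match.group("name").rstrip(".").lower()
        if name in WATCHLIST:
            print("Suspicious lookup:", line.strip())
```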
According to Cyphort Labs, which detected the infection on Dec. 22, the main page had been modified so that visitors are redirected to a landing page for the Angler Exploit Kit, which serves various exploits that download and execute a malicious binary in memory (nothing is written to disk) on Windows systems.
Some speculate that if ISC’s front-end WordPress server was compromised, other parts of the organization’s infrastructure could be too, including the BIND code. A server updated with compromised BIND DNS code would, for instance, provide a security hole for malicious hackers.
As for the F-root servers, the ISC’s Dan Mahoney told The Register that “service and security is absolutely unaffected” by the website compromise – being entirely separate from the front-end servers.
Meanwhile, ISC is rebuilding its front-end website with a clean database and CMS, which will undoubtedly be more reassuring for site visitors aiming to download DNS software than a malware warning.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/website-dns-organization-isc-malware-discovery
8:30p | Cisco’s Acquisition of Neohapsis Will Bolster Its Cloud Security and Compliance Offerings
This article originally appeared at The WHIR
More than a year after its $2.7 billion acquisition of Sourcefire in an effort to bolster its threat protection across the “entire threat continuum”, Cisco has announced a deal to acquire Neohapsis, a privately held security advisory company that provides risk management, compliance, cloud, application, mobile, and infrastructure security solutions.
A blog post from Hilton Romanski, who leads corporate development at Cisco, outlines the deal which was made public this month.
“Together, Cisco, Neohapsis and our partner ecosystem will deliver comprehensive services to help our customers build the security capabilities required to remain secure and competitive in today’s markets,” writes Romanski. “This will help our customers overcome operational and technical security vulnerabilities, achieve a comprehensive view of their risks, take advantage of new business models, and define structured approaches for better protection.”
Security proves to be a major area of neglect for many businesses, and one where Cisco sees opportunities.
Earlier this month, IDC Canada and Cisco released a report stating that 60 percent of Canadian businesses lacked a security strategy. Furthermore, 22 percent of those surveyed reported a breach within the last 12 months, and 8 percent wouldn’t be able to say if they had experienced a breach or not.
In a blog post, Neohapsis president and CEO James Mobley wrote that Cisco is a “perfect strategic match” given its “services and research mission” including emerging threats around mobile and cloud, as well as the “Internet of Everything”. “Together, what we bring to enterprise customers, IoT device manufacturers, and associated service providers will be unique in the market.”
The Neohapsis team will join the Cisco Security Services organization under the leadership of SVP and GM Bryan Palma.
The acquisition is expected to close in Q2 2015. Financial details around the transaction were not made public.
As part of its strategy of moving beyond networking and towards the cloud and software market, Cisco has been making other notable acquisitions (and acqui-hires).
This trend includes the September acquisition of Metacloud, an OpenStack private cloud provider. Metacloud’s employees will join Cisco’s Cloud Infrastructure and Managed Services organization when the deal closes in Q1 2015.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/china-blocks-access-gmail-via-third-party-email-clients
9:00p | Cavern Doubles Size of Underground Data Center Near Kansas City
Cavern Technologies has nearly doubled the amount of space built out for tenants in its underground data center near Kansas City. The data center is located in a massive cave, where its biggest neighbor is the U.S. National Archives and Records Administration.
The data center provider commissioned a 60,000 square foot Phase I of an ongoing expansion project last week, John Clune, Cavern’s president, said. The company had built out 65,000 square feet of data center space for customers prior to the expansion.
There are a number of underground data centers around the world. Besides the obvious disaster-protection benefits, underground facilities save on cooling costs, some using cool underground water and some, like Cavern, benefiting from stable temperatures throughout the year.
Ambient air in Cavern’s facility in Lenexa, Kansas, (15 miles south of Kansas City) remains at around 68F throughout the year, unaffected by the region’s extreme temperature swings, Clune said.
Cavern, which has been operating in the underground facility since 2007, provides a mix of data center options, from small private suites to large custom build-to-suit data centers with dedicated security and electrical infrastructure.
It has a 10,000 square foot dedicated data center occupied by a hospital group, and a 6,000 square foot one for another single tenant. The smaller private suites range from 200 square feet to 3,000 square feet.
The next phase of the current expansion will add another 40,000 square feet. “Currently we have plans for 150,000 square feet that’s already designed out,” Clune said.
The most recently commissioned pod has 2.5 megawatts of power. Cavern can get up to 50 megawatts total at the site.