Data Center Knowledge | News and analysis for the data center industry
Monday, May 4th, 2015
12:00p
Exclusive: Digital Realty Sells Huge Healthcare, Data Center Building in Philly
Digital Realty has sold a 14-story mixed-use building in downtown Philadelphia for $161 million to a national healthcare-oriented real estate investment trust, company executives told Data Center Knowledge. This is Digital's third transaction this year, as the company continues to sell off properties it doesn't consider core to its strategy.
The building has some retail, telecommunications, and data center tenants, but most of its 700,000-plus square feet is occupied by medical offices. Bought by the San Francisco-based data center provider and developer in 2005, the building is a prime example of the "non-core" properties Digital has been shedding under an initiative announced last year.
Indicative of the changes affecting the entire data center provider industry, Digital, one of the world's largest data center landlords, has been going through a major transformation. In addition to optimizing its asset portfolio, the company, whose business has historically relied on big space-and-power leases, has been partnering with providers that can add value to its buildings by offering tenants managed hosting, cloud, or cloud connectivity services. It has also placed more emphasis on the retail colocation business than it has in the past.
New C-Level Exec Team in Place
There have been big changes at the top of Digital’s management team as well, starting with the departure of its founding CEO Michael Foust in March 2014. The company’s long-time CFO William Stein was appointed permanent CEO last November.
Just this April, Digital appointed Andrew Power, a former Bank of America Merrill Lynch exec, to replace Stein in the CFO role, and former CoreSite COO Jarrett Appleby became COO at Digital. Also in April, the company appointed Michael Henry, who previously served as CIO at Rovi, as the first CIO in its history.
Demand for Healthcare Space Outpaces Data Center Demand in Philly
With a new senior leadership team in place, Digital continues to dispose of real-estate assets that are not essential to its newly refined strategy.
As the healthcare sector in and around Philadelphia grew, driving demand for medical offices in the market, the building at 833 Chestnut St. picked up more traction over time among healthcare tenants than it did among data center tenants, Peter Rosenbaum, vice president of acquisitions and investments at Digital, said. It is adjacent to the main building of the Thomas Jefferson University Hospital, and its location has proven to be a deciding factor in its fate.
Today, only 5 to 10 percent of the building is occupied by data center space, including a meet-me room. The sale to a national healthcare-oriented REIT, whose name Digital execs declined to disclose, doesn't mean companies taking data center space there will become second-class tenants, assured Michael Darragh, senior vice president of acquisitions and investments at Digital.
“Most of these data center clients are also clients in [our] other buildings,” he said. “We don’t want to leave them with a bad taste in their mouth.”
The building’s new owner has hired a national company that will take over management of the data center space there.
Not a Big Wholesale Data Center Market
While the region has numerous tenant-owned and operated enterprise data centers, Philadelphia has traditionally been primarily a retail colocation market as far as data center providers are concerned, Darragh said. Such markets usually have one or two major multi-tenant data centers that sufficiently address the demand. Philly’s big data center hub and carrier hotel is 401 North Broad Street, owned by New York-based Amerimar Enterprises.
After Profitable Deal, Six More Buildings on the Market
Digital expected the Chestnut Street property to generate cash net operating income of about $9.3 million this year, which at the $161 million sale price represents a cap rate of 5.8 percent. The company expects to book a gain of about $77 million in the second quarter on net proceeds of approximately $149 million from the sale.
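A cap rate is simply annual net operating income divided by the purchase price. A quick sketch of the arithmetic, using the figures reported above, confirms the numbers line up:

```python
# Cap rate = annual net operating income / sale price
noi = 9.3e6          # expected cash NOI for the year, per Digital
sale_price = 161e6   # reported sale price

cap_rate = noi / sale_price
print(f"{cap_rate:.1%}")  # -> 5.8%
```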
Also this year, Digital sold a 170,000-square-foot office building in suburban Boston for $31 million, and a vacant former 70,000-square-foot data center in Brea, California, for $14 million.
The company has identified six more properties in this portfolio it wants to part with this year. Ranging in size from 14,000 square feet to about 310,000 square feet, all six are non-data-center facilities, housing technology office or technology manufacturing space.
3:00p
Cisco Boosts ACI Security With FirePOWER Integration
Cisco announced that it is integrating the FirePOWER Next Generation Intrusion Prevention System with its Application Centric Infrastructure automated policy fabric, Cisco's alternative to other Software Defined Network solutions on the market. Cisco acquired Sourcefire in 2013 and now offers the threat protection software through both physical devices and virtual appliances. The FirePOWER services have also been integrated with the 5500 series of Cisco ASA firewalls.
With ACI enabling a policy-based multi-tenant infrastructure, the addition of NGIPS will let companies dynamically detect and block advanced threats with continuous visibility and control across the full attack continuum, according to Cisco. The integrated ACI security solution will be available in June; Cisco says it will protect data centers before, during, and after attacks, dynamically detecting threats and automating incident response.
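To make the automation concrete: the pattern Cisco describes amounts to an IPS detection event triggering a policy change in the fabric, such as moving a compromised workload into a quarantine group. The sketch below is purely illustrative; the controller URL, endpoint path, payload fields, and quarantine_endpoint helper are hypothetical stand-ins, not the actual APIC or FirePOWER APIs.

```python
import requests

CONTROLLER_URL = "https://apic.example.com/api"  # hypothetical address

def quarantine_endpoint(session, tenant, endpoint_mac):
    """Hypothetical sketch: move a flagged endpoint into a quarantine
    group so fabric policy blocks its east-west traffic."""
    payload = {
        "tenant": tenant,
        "endpoint": endpoint_mac,
        "targetGroup": "quarantine-epg",  # assumed pre-provisioned group
    }
    resp = session.post(f"{CONTROLLER_URL}/policy/move-endpoint", json=payload)
    resp.raise_for_status()
    return resp.json()

# An IPS alert handler might call this when a threat score crosses a threshold:
# quarantine_endpoint(session, "prod-tenant", "00:50:56:ab:cd:ef")
```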
Cisco also announced that ACI is now validated by independent auditors for deployment in PCI-compliant networks. Cisco touts a broad ecosystem of partners for ACI with Intel Security, Check Point Software, Infoblox, Radware, Symantec, and most recently Fortinet’s FortiGate.
Automation, integration, and ease of use are the focus here: Cisco cites Enterprise Strategy Group (ESG) research showing that 68 percent of IT security professionals find it difficult to remove expired or out-of-date access control lists (ACLs) or firewall rules because doing so is time-consuming and involves many manual processes.
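The cleanup problem ESG describes lends itself to simple automation. Below is a minimal sketch of flagging stale rules, assuming a rule export with an expiry field; real firewalls expose rule metadata in vendor-specific formats.

```python
from datetime import date

# Assumed rule-export format; actual firewalls vary.
rules = [
    {"id": "acl-101", "action": "permit", "expires": date(2015, 1, 31)},
    {"id": "acl-102", "action": "permit", "expires": date(2016, 6, 30)},
]

def stale_rules(rules, today=None):
    """Return rules whose expiry date has already passed."""
    today = today or date.today()
    return [r for r in rules if r["expires"] < today]

for rule in stale_rules(rules):
    print(f"flag for removal: {rule['id']}")
```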
An extensive ESG survey report on ACLs found that 74 percent of midmarket and enterprise respondents said firewall or routing ACL changes took, on average, days or weeks to complete.
3:30p
The IT X-Factor to Gain Business Agility
Paul Miller is VP of Product Marketing for HP Converged Data Center Infrastructure.
I visit many customers a year, and most of the IT executives I talk with are considering flexible, scalable infrastructure that can be deployed quickly and support their most critical workloads. They need solutions architected to stand up on premises for private or hybrid cloud delivery. They need to gain real-time insights from the big data pools they are storing, increase sales productivity through mobile apps, and flex to whatever the business hot button is at the moment. IT executives realize their organizations need to operate more efficiently so they can redirect investment into innovation.
These requirements demand a higher degree of automation within the data center, so IT can deliver new services at cloud scale and keep the business ahead of the competition. Conventional approaches to IT are becoming less effective, much like the aging IT model they were built upon.
So what is the new IT X-factor that can give you the improved business agility you need to stay competitive even as you continue to reduce costs?
Converged systems (also known as integrated systems and unified computing) are rapidly gaining acceptance as the way to improve overall business agility, which increases the productivity of IT staff and the quality and speed of services delivered to clients. IDC forecasts the total integrated systems market will grow at a compound annual growth rate (CAGR) of 19.6 percent to $17.9 billion in 2018, up from $7.3 billion in 2013.
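Those IDC figures are internally consistent; a quick sketch of the CAGR arithmetic (just a sanity check, not IDC's model):

```python
# $7.3B in 2013 compounding at 19.6% annually for five years
base = 7.3   # $B, 2013
cagr = 0.196
years = 5    # 2013 -> 2018

projected = base * (1 + cagr) ** years
print(f"${projected:.1f}B")  # -> $17.9B, matching the forecast
```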
A converged system that is pre-built, workload-optimized and governed by software-defined management software, can now be efficiently delivered as infrastructure services. This sets a new standard for how IT can successfully manage, automate and deploy infrastructure within the data center.
Let’s look at three key benefits that IT can gain using converged systems.
Fast IT
Converged systems improve operational efficiency through hardware compatibility, optimized workload density that increases performance and reliability, and modular scalability. They free up IT staff to research, design, and tune infrastructure for the workloads that drive your operations. Policy-based automation, such as software-defined templates, and easy-to-use management software further maximize those operational efficiencies and free up IT staff to pursue revenue-generating opportunities and other strategic work.
Simple IT
Converged systems can help minimize upfront design and testing costs and deployment time because they come pre-integrated and optimized, reducing complexity and improving system performance and uptime. Add in a single layer of software-defined management that unifies your existing and new infrastructure, tools, and processes, then manages them as one, all from a single management console. Now you can simplify everyday administrative tasks, reduce the number of tools you use to manage your infrastructure, and practically eliminate costly errors.
Efficient IT
Converged systems can help you transition your IT organization to a cloud-scale delivery model with an open, modular, scalable architecture and cloud economics for private and hybrid cloud. This enables you to become a builder and broker of services. IT can now efficiently deploy blocks of scalable infrastructure that serve as dynamic pools of resources, are workload-optimized for specific applications, and are managed through a single management console. The infrastructure is interoperable with existing and future infrastructure and can quickly adjust to fit your business requirements.
The Best of Both Worlds
With the introduction of hyper-converged systems, IT executives have more choice in how they deploy infrastructure. With a condensed footprint and enterprise-grade features, performance, and resiliency built in, hyper-converged systems are gaining attention as a quick and affordable way to modernize IT infrastructure while operating securely, efficiently, and at scale. They retain the attributes of a larger system yet are simpler turnkey appliances made up of integrated server, storage, and networking building blocks. Their versatility offers fast setup and easy administration, lowering costs and speeding responses to business demands. And they scale just as easily; each additional system seamlessly adds the power of four servers and associated storage. Hyper-converged systems seem a perfect fit for small and medium-size businesses, remote or branch offices, and lines of business with limited IT support. Complemented by a larger converged system, you can choose which system best matches the location needing resources.
In a recent study to measure the business value of converged systems, IDC selected and interviewed 20 companies at different convergence maturity levels based on a composite ratio that included percentage of nodes using virtualized storage, percentage of storage linked via virtualized I/O, percentage of OS images configured or provisioned automatically and other measures of standardization and best practices. The results indicated “a marked correlation between higher levels of convergence and reduced IT costs per unit of workload, faster deployment, optimization of IT staff, and reduced downtime.”
Clearly, the speed, simplicity and efficiency of converged and hyper-converged systems are causing IT executives to rethink the way to improve business agility in their organizations, as these systems represent a very effective way to modernize their IT infrastructure.
Are these the makings of an X-factor for business agility? I’ll let you be the judge.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
4:00p
CoreOS Gives Up Control of Non-Docker Linux Container Standard
Taking a major step forward in its quest to drive a Linux container standard that's not created and controlled by Docker or any other single company, CoreOS spun off management of its App Container project into a stand-alone foundation. Google, VMware, Red Hat, and Apcera have announced support for the standard.
Now a more formalized open source project, the App Container (appc) community has a governance policy and has added a trio of top software engineers who work on infrastructure at Google, Twitter, and Red Hat as "community maintainers."
When CoreOS founder and CEO Alex Polvi announced the appc standard late last year, he used the same blog post to slam Docker for having a "broken security model" and for building tools into its container platform instead of focusing on the simple basic building block. Docker supporters did not take kindly to his statements. Given how popular Docker had become in a very short period of time, the anger was understandable.
However, support from VMware, Red Hat, and Google demonstrates that there is growing momentum behind appc, and that the alternative Linux container effort was not a sneak attack launched by a startup desperate to establish its market identity, as some characterized the CoreOS announcement at the time. Polvi has always maintained that his team's intentions were simply to address engineering issues Docker had not addressed.
Mesosphere and EMC’s Pivotal have supported appc from the beginning. Google Ventures recently invested $12 million in CoreOS. At the same time, the San Francisco-based startup unveiled its core commercial product: Tectonic, a platform that combines Google’s open source Linux container management system Kubernetes and CoreOS, the Linux distribution optimized for large-scale compute clusters the startup is best known for.
Standard Separate from Company or Product
You can’t really do an apples-to-apples comparison between Docker and appc, since Docker doesn’t necessarily have a container standard, CoreOS Product Manager Kelsey Hightower said. Docker’s technology is open source, and the company has started creating some documentation around things like image format, but there’s no company-independent foundation.
appc is a spec that now has its own namespace and a governance structure that makes it independent from CoreOS. "That group of people will push the standard forward," Hightower said. "We won't be controlling the standard."
Docker is a technology others can build on top of, which is what CoreOS has done early on, and so has Google’s Kubernetes team. This is different from what Apcera, for example, is doing with appc, building its own execution environment for running apps in containers using the spec, Hightower explained.
“appc is detached from any product,” he said. “It’s just an idea.”
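For a sense of what the spec covers: an appc image is a tarball containing the app's root filesystem plus a JSON image manifest describing how to run it. Below is a minimal sketch of such a manifest, built in Python for illustration; the field names follow the early appc spec, and the app name and values are made up.

```python
import json

# Illustrative appc image manifest; the app name and values are hypothetical.
manifest = {
    "acKind": "ImageManifest",
    "acVersion": "0.5.1",
    "name": "example.com/hello",
    "labels": [
        {"name": "os", "value": "linux"},
        {"name": "arch", "value": "amd64"},
    ],
    "app": {
        "exec": ["/bin/hello"],
        "user": "0",
        "group": "0",
    },
}

print(json.dumps(manifest, indent=2))
```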
No Plans to Stop Docker Support
CoreOS and its technologies still support Docker, and there are no plans to discontinue that support any time in the near future. “I ask that question a lot internally,” Hightower said about the possibility that one day the company might discontinue support for Docker altogether. “That’s one thing that can’t be on our radar right now. We have to offer choice on our platform.”
The company does, however, make it possible to use Docker containers on Rocket, its container runtime, without running the Docker daemon, he said. Forcing users to run the daemon was one of the things Polvi said was wrong with Docker last year. According to him, because the entire Docker platform is a daemon that runs as root, it is fundamentally insecure.
Joyent, a San Francisco-based cloud infrastructure service provider, does a similar thing on its platform, giving users the ability to pull Docker images to a hard drive using the Docker client through an API but then execute on a different engine, Hightower said.
Not a Zero-Sum Game
With formal support from Google, which commands universal respect from engineers, and from VMware and Red Hat, two behemoths in the enterprise data center software space, appc joins the big league.
There are different kinds of Linux containers, with differing functions and differing philosophies among the companies behind them. One way CoreOS's philosophy differs from Docker's: CoreOS thinks there should be a basic container standard that's independent of any software stack, Hightower said.
CoreOS doesn’t subscribe to Docker’s famous “batteries-included-but-removable” approach, where the technology comes with all the bells and whistles by default, and it’s up to the user to customize if they need to, he explained. Docker by default points to the Docker Hub for container image hosting, for example. You can change that default to store images anywhere you need to, but it takes a small workaround.
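As an illustration of that workaround: with Docker's tooling, pointing at a different registry means re-tagging the image with the registry's address before pushing. A sketch using the docker-py client library (the registry address is a placeholder, and API details may vary by docker-py version):

```python
import docker

# Connect to the local Docker daemon over its default Unix socket.
client = docker.Client(base_url="unix://var/run/docker.sock")

# Pull from the default registry (Docker Hub), then re-tag the image
# with a private registry's address and push it there instead.
client.pull("busybox", tag="latest")
client.tag(image="busybox:latest",
           repository="registry.example.com:5000/busybox",
           tag="latest")
client.push("registry.example.com:5000/busybox", tag="latest")
```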
Hightower doesn’t think it will be a two-standards-enter-one-standard-leaves kind of a situation though. “There will always be multiple standards for everything,” he said.
Besides appc and Docker, there’s LXC, an OS-level virtualization environment for Linux containers with its own format. There’s also Oracle’s Solaris, which does container images differently from all of the above.
"There will always be more than one way of specifying utility of the container," Hightower said.
5:20p
vXchnge Buys Eight Sungard Facilities in Edge Data Center Markets
vXchnge has acquired eight data centers from Sungard Availability Services, extending its reach to a total of 15 geographic markets. The data centers are in underserved metros, aligning with vXchnge's edge data center strategy. Financial terms of the deal were not disclosed.
vXchnge acquired the assets and operations teams associated with the former Sungard AS data centers, which remain in use by customers. vXchnge's target user base consists of network-centric businesses needing to extend geographic reach in underserved big metro areas.
vXchnge expands its edge data center footprint while Sungard AS frees up capital to invest in other markets. The data centers largely house colocation customers, according to Sungard AS.
“We will continue to serve our customers located in those centers who are leveraging our managed and cloud services,” Sungard AS said in a statement. “We still support colocation services and it remains a growing business for us.”
The new vXchnge edge data centers are in:
- Portland, Oregon
- Austin, Texas
- Minneapolis, Minnesota
- St. Paul, Minnesota
- St. Louis, Missouri
- Pittsburgh, Pennsylvania
- Raleigh, North Carolina
- Nashville, Tennessee
Acquiring the data center operations teams in each location was a key part of the acquisition, said vXchnge CEO Keith Olsen, since it makes the acquisition seamless for the existing customers.
There will still be a period of integration. Sungard AS and vXchnge are different types of companies. Sungard AS focuses on disaster recovery, a different business than vXchnge's edge data centers but one that happens to do well in the same types of markets.
Content providers want to deliver content from data centers at the network edge, so they desire colocation in or near big metros, while disaster recovery is a big need for enterprises in these cities.
Sungard AS isn’t getting out of the colocation business, but it is focusing investment in cloud and managed services. The company was recently named a leader in Gartner’s Disaster Recovery as a Service Magic Quadrant, and the company has been busy expanding its footprint as well.
“This agreement supports our overall business strategy calling for us to invest in the markets where we can deliver our broader, integrated solutions,” the company said in a statement. “That’s why, in the past few years, we have opened or expanded more than 10 data centers, serving Philadelphia, New York, Denver, the Midlands, U.K., Stockholm, Toronto, and other key markets.”
Upcoming Sungard AS expansions this year include Houston, Carlstadt, New Jersey, Dublin, and “potentially other locations,” the company said.
The two companies happen to have Philadelphia in common. The city is home to Sungard AS headquarters and was home to the first Switch & Data data center. vXchnge was founded by former Switch & Data executives following that company's acquisition by Equinix in 2009 and started construction on a 70,000-square-foot Philadelphia data center last July.
vXchnge was formed after the Stephens Group acquired Bay Area Internet Services (BAIS), a colocation provider in Santa Clara, California, in 2013. The Little Rock, Arkansas-based private equity firm partnered with what is now the vXchnge management team on the transaction and formed vXchnge.
"We see tremendous opportunities directly tied to this transaction," said Keith Olsen, vXchnge CEO, in a press release. "It accelerates our strategic presence in 'edge' based marketplaces for companies to grow their businesses. These marketplaces operate as deployment points for our customers' network-enabled applications and cloud services that require safe, secure and resilient data centers."
5:58p
IBM Stakes POWER8 Claim to SAP Hana Hardware Market
After entering a period of enterprise software détente with SAP in 2014, IBM today announced a series of POWER8 servers optimized for SAP Hana in-memory computing applications.
Doug Balog, general manager of Power Systems at IBM, says that given the price-performance attributes of POWER8 servers, IBM expects to best x86 servers as SAP Hana hardware. Intel's x86 chips, he says, don't match POWER8 in the number of threads per core that can be processed or in the ability to move massive amounts of data quickly through the system.
“We see SAP Hana applications as being a sweet spot of the type of workloads that lend themselves to POWER8 servers,” Balog says. “We’re pretty confident we can compete against Intel in this space.”
IBM plans to make two configurations of the Power Systems Solution Editions for SAP Hana available. The first offering is based on the IBM Power Systems S824 with 24 POWER8 processor cores and up to 1TB of memory. IBM says this system is ideally suited for the SAP Business Warehouse application running on SAP Hana, with databases up to 512GB (compressed) in size.
The second offering is based on the IBM Power Systems E870 with 40 POWER8 cores and up to 2TB of memory. IBM says this platform is ideal for databases up to 1TB (compressed) in size.
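The threads-per-core claim is easy to quantify: POWER8 supports up to eight hardware threads per core (SMT8), versus two on a Hyper-Threaded Xeon. A quick sketch of what that means for the two Solution Editions, assuming SMT8 is enabled:

```python
# Hardware-thread math for the two Solution Editions, assuming SMT8.
SMT_POWER8 = 8  # threads per core in POWER8 SMT8 mode
SMT_XEON = 2    # threads per core on a Hyper-Threaded Xeon, for comparison

for name, cores in [("S824", 24), ("E870", 40)]:
    print(f"{name}: {cores} cores -> {cores * SMT_POWER8} POWER8 threads "
          f"(vs {cores * SMT_XEON} on an x86 chip with the same core count)")
```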
While IBM and SAP initially circled each other warily when SAP first launched its in-memory database management system, the two companies have since decided to cooperate around SAP Hana hardware while continuing to compete at the database level. IBM has made several in-memory computing enhancements to its DB2 database that the company says provide a more comprehensive database environment spanning in-memory and magnetic storage.
In the meantime, HP, Dell, and Lenovo have all brought to market servers based on high-end Intel Xeon processors that are optimized for SAP Hana. While IBM has been slow to bring POWER8 systems to market to counter those offerings, adoption of SAP Hana in production environments has been steady. Most SAP Hana use cases involve SAP applications at this point, and SAP has been making a decided effort to push as much of that adoption as possible into cloud computing environments it manages.
But for enterprise IT teams that prefer to run SAP applications on premises, IBM is betting that the traditional database strengths of POWER8 will continue to prevail. In addition, IBM now makes POWER8 servers available via the IBM SoftLayer cloud alongside x86 servers.
Obviously, IBM sees application workloads running on SAP Hana as a critical component of the POWER8 base of database applications. While POWER8 servers have been gaining share in the RISC/UNIX server market for several years, that category as a whole continues to lose share to x86 servers, which have relentlessly expanded the base of application workloads running on the Intel platform.
Nevertheless, the number of SAP Hana applications being deployed in production is expected to increase significantly in 2015. As such, IBM sees SAP Hana as a growth opportunity in the months and years ahead.
6:30p
Equinix Targets Enterprise SaaS Space With Office 365 Services
Equinix is now offering private, managed connections to Microsoft Office 365, the Software-as-a-Service version of Office. The service will be available in 15 markets worldwide in the third quarter.
Equinix's cloud connectivity services are "moving up the stack," reaching a true enterprise SaaS application for the first time. Until now, Equinix's Cloud Exchange has focused on providing connectivity to cloud compute and storage services. Private, managed connections to the application make the SaaS offering more suitable for the enterprise.
The announcement is big on a few fronts. The move will likely trigger similar private connectivity to other enterprise SaaS applications and suites out of Equinix and other colocation providers. In addition to making Microsoft’s SaaS offering a little more attractive to the enterprise, it makes Equinix more attractive to an enterprise looking to use SaaS.
“This removes another barrier of control and visibility from the end-consumer experience,” said Chris Sharp, vice president of cloud innovation at Equinix. “There’s been some great discussions on how customers will have to re-architect their architectures for these apps because of the delivery method. You need a different way to consume these services, and now we’re providing private connectivity down to the app.”
Microsoft's Office 365 has enjoyed explosive growth, but many enterprises remain reluctant to move to the online office suite, with unpredictable SaaS performance a key concern. Direct Access is a more secure, reliable, and guaranteed way to connect to these apps, according to Equinix.
A recent Forrester report, Beware the Pitfalls Within Networking for Hybrid Cloud (registration required), found end-user experience should be a company’s top priority when considering cloud connectivity options for business productivity applications.
“The enterprise is looking for a multi-cloud solution,” said Sharp. “We’re creating an environment with as much choice as possible.”
Equinix isn’t selling Office 365; it’s providing the connectivity through its cloud exchange, and customers will still have to buy through Microsoft. Equinix and Microsoft are enabling the enterprise to use existing MPLS or wide area network (WAN) infrastructure to get high-speed secure access to Office 365 for a better end-user experience.
As with the Azure offering, customers initiate the sale through a Microsoft portal. Once they set it up, it flows through Equinix's APIs, and customers are able to dynamically map connections through the APIs or through Equinix portals.
Office 365 Direct Access will require an existing port on the Equinix Cloud Exchange. If companies access Azure ExpressRoute through Equinix Cloud Exchange, they can dynamically manage and allocate their bandwidth requirement, ensuring that specific applications get the priority they need to deliver the performance required.
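Programmatic bandwidth allocation of this sort typically reduces to REST calls against an exchange provisioning API. The sketch below is hypothetical: the URL, endpoint path, and payload fields are invented stand-ins illustrating the pattern, not Equinix's actual API.

```python
import requests

EXCHANGE_API = "https://api.cloudexchange.example.com"  # hypothetical URL

def set_circuit_bandwidth(token, circuit_id, mbps):
    """Hypothetical sketch: resize a virtual circuit to a cloud provider
    so a priority application gets the bandwidth it needs."""
    resp = requests.put(
        f"{EXCHANGE_API}/circuits/{circuit_id}",
        headers={"Authorization": f"Bearer {token}"},
        json={"bandwidthMbps": mbps},
    )
    resp.raise_for_status()
    return resp.json()

# e.g., bump the Office 365 circuit to 500 Mbps ahead of month-end reporting:
# set_circuit_bandwidth(token, "o365-circuit-1", 500)
```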
Equinix and Microsoft began working on private Azure connectivity in 2013, with the relationship formally launching in April 2014, offering ExpressRoute in 15 Equinix markets. The two share a lot of multi-national customers in need of consistent global access.
Direct Access to Office 365 will be available in all 15 shared markets: Amsterdam, Atlanta, Chicago, Dallas, Hong Kong, London, Los Angeles, New York, Osaka, Seattle, Silicon Valley, Singapore, Sydney, Tokyo, and Washington, D.C.
7:00p
Amazon Acquires ClusterK to Help Users Spot Cloud Savings
This article originally appeared at The WHIR
Amazon acquired Palo Alto-based software company ClusterK this week for a reported $20 million to $50 million. ClusterK optimizes AWS infrastructure and helps significantly lower the cost of compute.
As public cloud providers continue to drop prices to stay competitive, AWS will integrate ClusterK’s technology into its EC2 Spot Instances in order to offer instances at a fraction of the price, according to a report by VentureBeat.
ClusterK has two products: ClusterK Balancer, which leverages the AWS Spot Market to achieve up to 90 percent savings; and ClusterK Cirrus, a cloud-native HPC grid scheduler.
The Amazon EC2 Spot Market allows customers to name their price for compute capacity and is made up of hundreds of smaller compute markets with real-time, supply-and-demand-based pricing. "At this price point we fundamentally believe the total cost of ownership for cloud compute is materially cheaper than even the most efficient enterprise data centers," the company says on its website.
While any single Spot Market “can be highly volatile and, in isolation, not appropriate for mission critical applications…ClusterK automates the use of multiple instance types, across multiple availability zones to create a highly available platform ideal for mission-critical applications.”
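The diversification strategy ClusterK describes, spreading bids across instance types and availability zones so no single volatile spot market takes down the fleet, can be sketched with the AWS SDK. A minimal illustration using boto3 (the bid price, instance types, and AMI are placeholders, and this is not ClusterK's actual implementation):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder fleet spec: diversify across instance types and availability
# zones so a price spike in one spot market doesn't take out the whole fleet.
combos = [
    ("m3.large", "us-east-1a"),
    ("m3.large", "us-east-1b"),
    ("c3.large", "us-east-1a"),
    ("c3.large", "us-east-1b"),
]

for instance_type, zone in combos:
    ec2.request_spot_instances(
        SpotPrice="0.05",   # placeholder max bid, USD per instance-hour
        InstanceCount=2,
        LaunchSpecification={
            "ImageId": "ami-12345678",  # placeholder AMI
            "InstanceType": instance_type,
            "Placement": {"AvailabilityZone": zone},
        },
    )
```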
ClusterK’s team will relocate from Palo Alto and work in Amazon’s Seattle office, VentureBeat said.
“We firmly believe that market based mechanisms such as AWS Spot is the future of cloud computing which will lead to better resource allocation, better data center utilization by cloud providers and lower cost of ownership for customers. We design software and solutions to enable our customers, ISVs, and cloud providers efficiently and reliably use market based pricing mechanisms at scale,” ClusterK said.
For many startups and bootstrapped companies, price is still a main factor in choosing a cloud infrastructure provider. By keeping an eye on technologies that can help lower compute costs, cloud providers will appeal to a wider range of budget-conscious cloud consumers.
Cloud hosting providers that leverage AWS infrastructure may also be interested in ClusterK's technology, as they may be able to pass the savings on to their customers.
Even with the price break, AWS will likely maintain its cloud lead, as its usage grows faster than revenue. AWS revenue in the year's first quarter grew nearly 50 percent year over year, reaching $1.57 billion.
This first ran at http://www.thewhir.com/web-hosting-news/amazon-acquires-clusterk-help-users-spot-cloud-savings