Data Center Knowledge | News and analysis for the data center industry
Monday, March 2nd, 2015
1:00p | Is it Time to Put a ‘For Sale’ Sign on Your Data Center?

If you are a corporation with a reasonably up-to-date data center, this may be a good time to sell it.

You’ll get out of the business of data center management, recoup millions of dollars of capital, and stop losing money to depreciation. If you sell it, you can stay in the facility as a tenant and use the proceeds of the deal to upgrade the infrastructure.
That, essentially, is the case Sean Brady, senior director at the commercial real estate firm Cushman & Wakefield, is making in an appeal to operators to consider a data center sale-leaseback transaction. Benefits of the sale aside, now is a good time to sell simply because there are currently a number of buyers in the market seeking such deals, money in hand.
“They are primarily private equity firms, who have got a technology background and have invested in data center companies over the last several years,” Brady said about the buyers. “They understand the industry, and they understand real estate, and they are very confident in buying.”
They want data centers because a data center is a very stable asset: a lease renewal by the tenant, who has spent millions upon millions on equipment inside the facility, is very likely.
Brady said he was currently involved in three such deals. One data center is owned by an enterprise, the second by a service provider, and the third by a local landlord, he said.
Some of the biggest examples of active data center buyers are Carter Validus Mission Critical REIT and GI Partners, a private equity firm. The former’s modus operandi has been buying occupied data centers and hospitals in exactly the kind of sale-leaseback transactions Brady is talking about. In fact, the real estate investment trust announced its most recent data center purchase just this month: a $56.7 million acquisition of a data center in Alpharetta, Georgia, that was built for a financial services company.
These companies don’t limit themselves to corporate data centers. Carter Validus, for example, bought two data centers from IO in the Phoenix market last year, leasing them back to the colocation provider. Also last year, GI Partners acquired service provider Peak 10 and its 24 data centers.
A corporate data center is an attractive proposition for investors, however, only if it is not nearing its end of life. It has to be a facility the occupant wants to stay in; otherwise, it doesn’t make much sense for an investor to buy it and then spend millions of dollars renovating it to bring it back to market.
A data center that’s run down to the point where the occupant wants to get out of it would be an example of the owner having waited too long to get rid of the property, Brady said. For a company in this situation, the best way to go may be to take down some space with a wholesale provider.

4:30p | The Business Case for Mainframe Modernization or Migration

Adam Redd is vice president of development for GT Software.
New expectations from customers, internal operations leaders, employees and competitors are forcing companies in a variety of industries to innovate, and to do so quickly. So how can an IT department best align technology with business objectives to meet this escalating pressure for innovation? And, more importantly, how can it do so while staying within ever-tightening budget constraints?
What’s Keeping Business and IT Leaders Up at Night?
With these complex business demands taking center stage, CIOs and IT professionals—along with their functional peers in marketing, finance and operations—must recognize the need to come together to create innovation while simultaneously holding the line on costs. If not, they stand to lose a competitive edge.
Where do we look for cost-reduction opportunities and efficiency? At first glance, the “big iron” in the data center may not seem like a revenue generator or an asset that can support innovation. But guess again. By modernizing the powerful mainframe hardware that already supports an organization’s most critical business applications and houses its valuable data, businesses can unlock tremendous cost savings, and pave the way to efficiency and new revenue streams.
Correctly Accounting for a Mainframe Asset
Modernization or migration should begin with an assessment of the data center IT infrastructure. The idea is to create a roadmap for achieving improved performance, operational support and cost-management.
This assessment should incorporate an audit of current (hard) IT investments, including software and license fees, maintenance and support, IT talent, and hardware expenses. In addition, the soft costs should be evaluated. It is important to examine the potential risk and costs associated with disrupting the current applications on the system by evaluating the time, effort, and knowledge capital required for the project.
With a handle on the costs of current operations, you can then evaluate modernization or partial migration projects and the savings associated with them; the cost differences between pre- and post-modernization operations can be staggering. The average actual savings from migrating off the mainframe to Microsoft Windows can be seen in the figure below. As the chart suggests, the ROI of a mainframe migration can be significant and can arrive quickly after the migration is complete.
[Figure: average actual savings from migrating off the mainframe to Microsoft Windows]
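To make that kind of comparison concrete, here is a minimal sketch of the arithmetic an assessment might produce, expressed in Python. Every dollar figure below is a hypothetical placeholder rather than a number from the article or the chart; only the shape of the calculation (annual run rates, a one-time project cost, and a simple payback period) is the point.

```python
# Hypothetical worked example: compare annual mainframe run costs with
# post-migration costs to estimate savings and a simple payback period.
# All dollar figures are illustrative placeholders.

mainframe_annual = {
    "software_and_licenses": 1_200_000,
    "maintenance_and_support": 450_000,
    "it_staff": 800_000,
    "hardware": 300_000,
}

post_migration_annual = {
    "software_and_licenses": 400_000,
    "maintenance_and_support": 150_000,
    "it_staff": 650_000,
    "hardware": 200_000,
}

one_time_migration_cost = 1_500_000  # project, tooling, and testing (hypothetical)

current_run_rate = sum(mainframe_annual.values())
future_run_rate = sum(post_migration_annual.values())
annual_savings = current_run_rate - future_run_rate

# Simple payback: years of savings needed to recover the one-time project cost.
payback_years = one_time_migration_cost / annual_savings

print(f"Current annual run rate:   ${current_run_rate:,}")
print(f"Projected annual run rate: ${future_run_rate:,}")
print(f"Annual savings:            ${annual_savings:,}")
print(f"Payback period:            {payback_years:.1f} years")
```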
Modernize or Migrate?
Generally, decision makers who choose to modernize or migrate legacy systems are part of the IT group and don’t always interact with their functional peers across lines of business.
This can be a drawback since an entire organization’s operational processes should be considered when measuring cost savings or potential operational improvements. IT leaders with a big-picture-perspective frequently include operational business units such as marketing, customer service and procurement in their decision-making. Doing so not only helps to build the business case and internal buy-in, it can also better showcase IT’s strategic value. Additionally, getting this perspective helps provide insight into ways for IT to offer further innovation and added value to the business.
In the end, a thorough understanding by all stakeholders can mean the difference between success and failure, and no matter which path is chosen – modernization or migration – there is little room for error.
Modernization—Integrating with newer technologies and modernizing the mainframe is a dependable way to extend the ROI of IT systems, as well as improve strategic services in marketing, finance, sales and other areas of the organization. Whether you want to improve operational performance with easy mobile access to business-critical solutions, or unify your data from disparate sources for a comprehensive view of enterprise data, integration using the right tools can make it simple.
You’ll want to find a solution that enables easy interaction, integration and information orchestration across your mainframe and other platforms to give your mainframe a new lease on life and help empower customers and employees via easy access to mainframe information and applications.
Migration—Migration involves moving legacy technology to newer platforms. Like modernization, a migration path can deliver significant cost savings, yet it poses substantial inherent risk. Migration can be simplified, however, by using tools that automatically convert data from one form to another. There are also tools that convert code from one platform to another, to be either compiled or interpreted on the target. An alternative to converting the code is to employ software that translates and runs the old system’s code on the new platform. For companies where full migration is simply not an option, targeted migration of specific applications or batch processing is a viable alternative that still yields significant savings.
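A common building block in that kind of data-conversion tooling is translating fixed-width EBCDIC records from the mainframe into a character set and layout the target platform understands. The sketch below is a simplified illustration, not any vendor's actual tool: it assumes EBCDIC code page 037 and a made-up three-field record layout, whereas a real migration would derive the layout from the application's copybooks.

```python
import csv

# Hypothetical fixed-width record layout: (field name, start offset, length).
# A real migration tool would derive this from the application's COBOL copybooks.
LAYOUT = [("cust_id", 0, 8), ("name", 8, 20), ("balance", 28, 9)]

def convert_record(raw: bytes) -> dict:
    """Decode one EBCDIC (code page 037) record into a plain dict of strings."""
    text = raw.decode("cp037")  # EBCDIC -> Unicode
    return {name: text[start:start + length].strip()
            for name, start, length in LAYOUT}

def convert_file(ebcdic_path: str, csv_path: str, record_length: int = 37) -> None:
    """Read fixed-length EBCDIC records and write them out as CSV."""
    with open(ebcdic_path, "rb") as src, open(csv_path, "w", newline="") as dst:
        writer = csv.DictWriter(dst, fieldnames=[f[0] for f in LAYOUT])
        writer.writeheader()
        while chunk := src.read(record_length):
            writer.writerow(convert_record(chunk))
```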
Third-Party Analysis Reduces Risk
Whether migrating or modernizing, the best solution is to work with a vendor that has no stake in that decision. Such a provider will drive the project based on objective evaluations of the current infrastructure and strategic business needs, taking into account existing systems, costs, potential savings, time commitments and risk. An effective third-party vendor can also navigate the perceptions and preferences of internal staff, as there will be favoritism of one platform over another and perhaps even some political motivation behind well-intended decisions.
Innovation: The Mainframe Holds the Key
In the end, when aligned with strategic business objectives, new applications and services enabled by the mainframe can add significant value by empowering customers, business partners and employees. They can also improve customer service, reduce administrative time and costs, and greatly improve an organization’s operational efficiency and thus extend the ROI of mainframe investments to support continued innovation and maintain a competitive edge.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

5:12p | Report: NTT In Talks To Acquire German Data Center Provider E-Shelter

Japan’s NTT Communications is in talks to acquire German data center service provider e-shelter, the Nikkei business daily reported. The price tag is around $830 million, according to a Reuters report that cited an anonymous source.
It would give NTT a robust data center footprint in Germany, the second-largest data center market in Europe. E-shelter has an almost singular focus on that market, with eight data centers in Germany. In the Frankfurt region alone, the company operates 650,000 square feet across three data centers. NTT operates data centers globally and has one data center in Frankfurt.
NTT is an international player investing heavily in data centers. The acquisition would provide e-shelter with a parent company capable of funding costly data center expansion plans.
NTT acquired a controlling interest in U.S. provider RagingWire in 2013, doubling NTT’s footprint in the U.S. RagingWire has campuses in Ashburn, Virginia, and Sacramento, California, providing ample room to grow. NTT acquired enterprise IT and network services company Virtela Technology Services in 2013.
In 2010, the company bought South Africa’s Dimension Data for $3.2 billion to boost its data center business.
There has been a lot of activity in the data center market in Germany, and Europe in general. TeleCityGroup and Interxion are in the process of merging, creating Europe’s largest data center provider.
Consolidation is expected among the European players with heavy footprints in singular markets to compete against those with international footprints.
[Map: e-shelter’s data center locations (Source: e-shelter)]
Frankfurt is the main campus for e-shelter, with room for close to 650,000 square feet of data center and technical space. It consists of five free-standing buildings, the latest built in 2012. The company has a lot of planned growth in Frankfurt, with a planned expansion of close to 200,000 square feet.
Frankfurt 2 is one of the smaller facilities, at 20,000 square feet of data center space supported by 3 megawatts. The first stage of Frankfurt 3 is close to 45,000 square feet. Construction of the second of three planned stages started last year; combined, the additional stages will provide 190,000 square feet.
The Berlin data center is located in the administrative district of Spandau. The site is 430,000 square feet, with 140,000 square feet of data center space.
Vienna is another opportunity to grow. Construction of the first building began last year. The company has a capacity of 20 megawatts in Vienna and is planning 260,000 square feet of gross floor space. Three planned building stages will create roughly 90,000 square feet of data center space.
Munich 1 is close to 22,000 square feet, while Munich 2 is being developed. The company owns a plot of close to 200,000 square feet there. The first building will be 60,000 square feet; the company announced its first stage last year.
Outside of Germany, the company owns a data center in Zurich, which received LEED Platinum pre-certification.

7:04p | Red Hat Forms OpenShift Commons to Drive Open Source PaaS Innovation

Red Hat has unveiled OpenShift Commons, a new community initiative around its open source Platform-as-a-Service OpenShift and the technologies it is based upon.
Red Hat’s OpenShift combines best-of-breed open source technologies, including OpenShift Origin, Docker, Google’s Kubernetes, and Project Atomic. Commons is a community of communities: it can be seen as a bridge between those projects, designed to facilitate sharing of knowledge. The Commons will work toward best practices and enable collaboration on shared dependencies to advance open source PaaS.
OpenShift is already a very open project, with an upstream community called Origin, where all the code is free.
“As we went on this journey, customers would say they want knowledge of best practices,” Ashesh Badani, a general manager at Red Hat, said. “We run big operations, and we had partners working with us, but they also want to hear from other peers. Instead of having these one-to-one conversations, why not have a transparent community?”
OpenShift Commons launches with participants across the technology landscape, from big vendors like Cisco and Dell to service providers like Orange, integrators like CSC, as well as startups and end users.
Over 2.2 million applications have been created on the four-year-old platform.
Red Hat hopes the community will direct Commons completely going forward. “We’re looking to get it seeded, but over time I expect more organic ideas,” said Badani.
The Battle for Top Open Source PaaS
There is another popular open source PaaS out there: the Pivotal-led Cloud Foundry project. Similar to Red Hat with Commons, Cloud Foundry launched its own independent governing body in a bid to establish a formal governance model, so that the project is not driven by a single vendor.
There is also a PaaS project within OpenStack, called Solum, which popped up roughly a year ago. OpenStack is important to open source cloud, so Solum is seen as something that will play a big role in the future of open source PaaS.
Red Hat, a massive contributor to OpenStack overall, was asked to contribute to Solum, and the company was interested. It even appeared at one point that OpenShift could merge into the PaaS component of OpenStack. Architect and founding OpenShift member Matt Hicks posted a blog that illustrated the excitement around the project.
Rackspace (one of the major driving forces behind OpenStack from its birth) and Red Hat appeared to collaborate for a few months, but then the effort somewhat fizzled.
There was some industry speculation about what happened. Cloud Foundry started the CF Foundation (the independent governing body), and speculation ramped up further when Rackspace decided to throw its weight behind it.
Around this time, OpenStack Foundation director Josh McKenty made a controversial prediction in a Forbes interview: OpenStack would abandon its own PaaS efforts, and Red Hat would eventually join the CF Foundation. Would Red Hat abandon its PaaS ambitions? The creation of the Commons suggests it would not.
Mirantis co-founder Alex Freedland posted his thoughts on the interview last year. “It will be extremely hard for Red Hat to abandon its current messaging around OpenShift and its elegant attempt to morph it into OpenStack via project Solum,” wrote Freedland. “For many years now, Red Hat has been promoting OpenShift to its customers and admitting that it was a mistake and betting on a different horse would be a strong blow to the company’s credibility as its customers’ most reliable adviser in the ever-changing world of open source software.”
Badani suggested the drama was a little exaggerated.
“There were a few folks in OpenStack and a few folks out of Rackspace that were interested in creating a PaaS project,” said Badani. “We were approached and asked if we were interested. We’re happy to support any technology that drives OpenStack. That product has gone through transformations. We’re watching it, and if we see a lot of demand, we’re happy to help it grow. I have no opinion and don’t believe there was anything going on in regards to any ‘project creep.’ We do see a lot of folks that want OpenShift on OpenStack.”
Red Hat outlined its overall cloud strategy to DCK last month. In terms of the technology, its primary interest seems to be advancing the next-generation technology platform through community.
“As a company, Red Hat’s had a lot of success in enterprise Linux and the Linux marketplace in general, for both OS and middleware,” said Badani. “In the last three years there’s been a huge transformation. Today Red Hat is focused on commoditizing existing markets and emphasizing the need for forward-looking innovation in areas not fully defined yet. PaaS is in that area. There’s also a big investment in container technology and Project Atomic. There are big investments in OpenStack and Infrastructure-as-a-Service.”

7:50p | Report: IBM to Conduct Cloud Reboot to Patch Xen Issue

IBM is another cloud provider that will reboot its infrastructure because of the most recent Xen hypervisor security issue.
Similar to the cloud reboot to address a potential security flaw five months ago, providers are rushing to reboot and patch before more details of the vulnerability are disclosed.
IBM notified its SoftLayer cloud customers that it will reboot some instances between now and March 10, GigaOm reported. Much like last time, the IBM notification follows Amazon Web Services and Rackspace notifications.
These issues have always existed. However, they’re more visible these days because of cloud’s mainstream usage and visibility.
The vulnerability has the potential to affect all of Xen hypervisor land, not just individual clouds. In order to minimize potential impact of the rebooting process, providers are staggering the reboots in different regions rather than hitting the big reset button on everything.
Some service providers fared better than others during the last big cloud reboot. The previous maintenance affected less than 10 percent of AWS’ EC2 fleet and nearly a quarter of Rackspace’s 200,000-plus customers.
Customers themselves can also help minimize downtime by spreading workloads across multiple availability zones. Netflix lost over 200 database servers during the last reboot but managed to stay online. Linode provides a handy Reboot survival guide.
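As a rough sketch of how a customer might check that exposure ahead of a maintenance window, the snippet below counts running EC2 instances per availability zone so lopsided placement stands out. It is an illustration only: it assumes the boto3 library, already-configured AWS credentials, and a single region, none of which come from the article.

```python
from collections import Counter

import boto3  # assumes AWS credentials are already configured


def instances_per_az(region: str = "us-east-1") -> Counter:
    """Count running EC2 instances by availability zone in one region."""
    ec2 = boto3.client("ec2", region_name=region)
    counts = Counter()
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                counts[instance["Placement"]["AvailabilityZone"]] += 1
    return counts


if __name__ == "__main__":
    for az, count in sorted(instances_per_az().items()):
        print(f"{az}: {count} running instances")
```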
The Xen project publishes a detailed security policy on its website.

9:16p | Open-Source Database Company MariaDB Gets $3.4M from Runa Capital
This article originally appeared at The WHIR
The MariaDB Corporation, a software vendor specializing in open-source database solutions, has received €3 million ($3.4 million) in financing from Moscow-based Runa Capital, which will help the company further the development of its product and markets.
According to the announcement late last week, Runa Capital managing partner and co-founder Dmitry Chikhachev will join MariaDB’s Board of Directors as part of the investment deal.
MariaDB is designed as “a drop-in replacement for MySQL” that provides more robust, scalable, and reliable SQL server capabilities. It was originally a fork of MySQL and has been chosen by the likes of Google and Wikipedia as their SQL database technology. MariaDB released a new stable version (10.0.17) and a beta release (10.1.3) this week.
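A small illustration of the “drop-in replacement” claim: client code written against a MySQL driver will typically connect to a MariaDB server unchanged. The sketch below uses the mysql-connector-python driver; the host, credentials, and database name are hypothetical placeholders.

```python
import mysql.connector  # the same MySQL driver works against a MariaDB server

# Hypothetical connection details; MariaDB listens on MySQL's default port 3306.
conn = mysql.connector.connect(
    host="127.0.0.1", user="app", password="secret", database="appdb"
)

cur = conn.cursor()
cur.execute("SELECT VERSION()")  # returns a MariaDB version string, e.g. '10.0.17-MariaDB'
print(cur.fetchone()[0])

cur.close()
conn.close()
```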
Runa Capital has deep roots in funding emergent players in the hosting technology industry. The VC fund itself was co-founded by Serguei Beloussov, who was CEO of virtualization software company Parallels and who co-founded backup and disaster recovery software provider Acronis along with other companies.
In the past few years, Runa Capital has made significant investments in the hosting space. It invested around $2 million (€1.5 million) in Series A funding for Berlin-based web hosting startup cloudpartner.de.
In 2013, it invested in web security solution Wallarm, led a $2 million funding round for cloud backup technology provider BackupAgent, and participated in a $10 million funding round for Nginx. That year it was named one of the top 25 most active Russian VC funds. It also made a “significant investment” in cloud platform ThinkGrid in 2011 and contributed seed funding to cloud hosting platform Jelastic.
Chikhachev said that MariaDB’s open source model makes it an attractive investment and will help give it a leading edge in web and database development, as well as in the lucrative enterprise market. “MariaDB has assembled the greatest tech talent in the community and keeps driving innovation in the database space; it’s obviously growing in the enterprise as well,” he said in a statement.
This article originally appeared at http://www.thewhir.com/web-hosting-news/open-source-database-company-mariadb-gets-3-4m-runa-capital

10:53p | Five Ways Next-Gen Data Centers Will Be Different from Today’s

Cloud and virtualization will become the norm for the modern data center as new technologies improve density, efficiency and management. There is clear growth in both virtualization and cloud services all over the world. In fact, a recent Gartner report says that cloud computing will become the bulk of new IT spending by 2016. “In India, cloud services revenue is projected to have a five-year projected compound annual growth rate (CAGR) of 33.2 percent from 2012 through 2017 across all segments of the cloud computing market. Segments such as software as a service (SaaS) and infrastructure as a service (IaaS) have even higher projected CAGR growth rates of 34.4 percent and 39.8 percent,” said Ed Anderson, research director at Gartner. “Cloud computing continues to grow at rates much higher than IT spending generally. Growth in cloud services is being driven by new IT computing scenarios being deployed using cloud models, as well as the migration of traditional IT services to cloud service alternatives.”
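To put those quoted growth rates in concrete terms, here is a quick compounding calculation. The base-year figure is a made-up index value, not a Gartner number; only the CAGRs come from the quote above.

```python
def compound(base: float, cagr: float, years: int) -> float:
    """Grow a base value at a constant compound annual growth rate."""
    return base * (1 + cagr) ** years

base = 100.0  # hypothetical index value for 2012, not a Gartner figure
for label, rate in [("All cloud segments", 0.332), ("SaaS", 0.344), ("IaaS", 0.398)]:
    print(f"{label}: {base:.0f} in 2012 -> {compound(base, rate, 5):.0f} by 2017")
```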
With so much new cloud data traversing the data center – and the increased number of users utilizing cloud services – what will the next-generation data center resemble? What are some of the efficiencies that administrators can utilize? How will the business evolve around new data center demands?
Let’s look at five ways the next-generation data center will evolve.
- The software-defined data center (SDDC). Think of this as the logical layer within the data center. Security, storage, networking and even the data center itself now fall within the software-defined technologies (SDx) realm. This logical layer allows for even greater control of both physical and virtual resources. Let me give you some specific examples – Storage: Atlantis USX and VMware vSAN. Networking: Cisco NX-OS and VMware NSX. Security: Palo Alto PAN-OS and Juniper Firefly. Data center: VMware SDDC and IO.OS. These are solid platforms that help control many new aspects of cloud computing and the next-generation data center.
- Multi-layered data center control. The data center is hosting a number of different systems. With that in mind, the control layer must be extremely diversified. This management console now integrates with APIs to span an ever-growing data center footprint. New integrations allow for big data control, data manipulation, and even resource allocation. Here’s a specific example from OpenStack’s Havana release: the networking component, Neutron, allows administrators to do some pretty amazing things with their cloud model. With direct integration with OpenFlow, Neutron allows for greater levels of multi-tenancy and cloud scaling by adopting various software-defined networking technologies into the stack (a minimal sketch of creating a tenant network through Neutron’s Python API appears after this list).
- The data center operating system (DCOS). A spanning data center needs a spanning control layer. Already, global data center providers are deploying data center operating control layers that manage policies, resources, users, VMs, and much more. Most of all, you’re creating a proactive management infrastructure capable of greater scale. For example, IO’s IO.OS environment helps control many of the absolutely critical components – from chip to chiller. The great part is that this DCOS layer has visibility into every critical aspect of the data center.
- Infrastructure agnosticism. To be completely honest, the future data center won’t care which hypervisor, storage layer, or server platform you’re running. Layered management tools will be able to pool resources intelligently and present them to workloads. This type of infrastructure and data center agnosticism will allow administrators to scale better and create more powerful cloud platforms. Vendors like BMC are beginning to explore the concept of agnostic cloud control. By connecting with major control planes and interfacing with solid APIs, the cloud computing concept and everything beneath it can be better abstracted.
- Data center automation (and robotics). The next-generation data center will revolve around better workflow orchestration and automation services. Resources will be provisioned and de-provisioned dynamically, users will be load-balanced intelligently, and administrators will be able to focus on providing even greater levels of efficiency. Know what else the next-gen data center might have more of? Robotics. Big robotics makers like FANUC are already developing smaller, smarter and much faster robots. Here’s another interesting example: a recent article discusses how IBM is using robotics to plot temperature patterns in data centers to improve their energy efficiency. Basically, IBM is using robots based on iRobot Create, a customizable version of the Roomba vacuum cleaner, to measure temperature and humidity in data centers.
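Picking up the Neutron example referenced in the multi-layered control item above, here is a minimal sketch of creating an isolated tenant network and subnet through python-neutronclient. The endpoint, credentials, and resource names are hypothetical, and a real deployment would usually drive this through an orchestration layer rather than a standalone script.

```python
from neutronclient.v2_0 import client

# Hypothetical credentials and Keystone endpoint for an OpenStack deployment.
neutron = client.Client(
    username="demo",
    password="secret",
    tenant_name="demo",
    auth_url="http://controller:5000/v2.0",
)

# Create an isolated tenant network, then attach an IPv4 subnet to it.
net = neutron.create_network({"network": {"name": "tenant-net-1"}})
net_id = net["network"]["id"]

neutron.create_subnet({
    "subnet": {
        "network_id": net_id,
        "ip_version": 4,
        "cidr": "10.10.1.0/24",
        "name": "tenant-subnet-1",
    }
})

print("Created network", net_id)
```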
There’s really no question that data center technologies are progressing quickly. New ways to integrate at the API layer, improved methods of optimization, and greater overall density are all reshaping data center platforms. It doesn’t stop here, though. Trends show that more users are embracing IT consumerization and pushing even more work through the cloud. This means that data centers will have to evolve even further.