Data Center Knowledge | News and analysis for the data center industry
Monday, December 21st, 2015
4:03p
Getting to the True Data Center Cost
Will it be cheaper to run a particular application in the cloud than keeping it in the corporate data center? Would a colo be cheaper? Which servers in the data center are running at low utilization? Are there servers that have been forgotten about by the data center manager? Does it make sense to replace old servers with new ones? If it does, which ones would be best for my specific applications?
Those are examples of the essential questions every data center manager should be asking themselves and their team every day if they aren’t already. Together, they can be distilled down to a single ever-relevant question: How much does it cost to run an application?
Answering it is incredibly complex, which is why startups like TSO Logic, Romonet, and Coolan, among others, have sprung up in recent years. Answer it correctly and the payoff can be substantial, because few data centers run as efficiently as they could, and there is always room for optimization and savings.
These aren’t data center infrastructure management (DCIM) products, and they aren’t IT service management (ITSM) tools. The companies, while different from each other, have one thing in common: they focus squarely on cost, distilling the long list of factors and interrelationships that determine data center cost into numbers that make sense not only to data center managers but also to business executives.
“Really, what we’re selling is savings,” Aaron Rallo, CEO and founder of Vancouver-based TSO Logic, said. Put simply, TSO’s software analyzes as much operational data from a data center as it can get to, figures out how much it is costing the company to run a certain application on its current infrastructure, and estimates how much it would cost to run it on something else underneath.
“If we find that you have 1,000 VMs that are doing nothing at all, before you go out and buy more licenses, buy more hosts, buy more compute, let’s repurpose the ones that you have,” Rallo said.
TSO is the company that put out the well-publicized report, together with Stanford research fellow Jonathan Koomey, estimating that about 30 percent of servers deployed worldwide weren’t delivering any computing services.
Looking at physical and virtual infrastructure day in and day out, TSO routinely sees that about one-third of deployed servers are completely forgotten about, Rallo said. The report estimated that these idle servers together represent about $30 billion in sunk investment.
Being Useful to Financial Stakeholders
Reflecting the change in the role the data center plays today within the enterprise – being looked at as a strategic asset that helps generate revenue, rather than a necessary cost of doing business – TSO’s user within a customer organization is rarely the data center manager. Most often, it’s someone who is a financial stakeholder: a CIO, someone who oversees applications or application provisioning, and sometimes even a CFO, Rallo said.
Being useful to people like that is TSO’s key strength, Jeff Klaus, GM of data center solutions at Intel, a TSO partner, said. The platform presents information in a “reportable format, so it can go beyond the guy who’s running the data center,” he said. “It can go to the COO or the finance individual in a way that he doesn’t have to massage or get an interpretation from the IT guy or the facilities person.”
TSO can collect data from a DCIM software solution underneath, tap directly into CPU power, temperature, and utilization metrics through Intel’s Data Center Manager middleware, analyze data from the virtualization platform, and know who within the organization a VM or an application belongs to using data from ITSM software. The wider the variety of data the platform ingests, the more useful its output will be.
“We start with the applications and work our way down to the physical,” Rallo said. “It’s exceptionally important to have a holistic view of the data center; not just a physical view, and not just an application view.”
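To make the arithmetic behind that question concrete, here is a minimal sketch of how server-level power draw and amortized hardware cost can be rolled up into a monthly cost per application. It is not TSO Logic’s model; the servers, electricity rate, and PUE value are hypothetical placeholders.

```python
# Toy illustration (not TSO Logic's actual model) of rolling server-level metrics
# up into a monthly cost per application. All figures below are hypothetical.

ELECTRICITY_RATE = 0.10   # assumed utility rate, $ per kWh
PUE = 1.7                 # assumed power usage effectiveness of the facility
HOURS_PER_MONTH = 730

servers = [
    # name, average power draw (watts), monthly amortized hardware/license cost ($), application
    {"name": "web-01",  "watts": 350, "amortized": 180, "app": "storefront"},
    {"name": "web-02",  "watts": 340, "amortized": 180, "app": "storefront"},
    {"name": "db-01",   "watts": 450, "amortized": 420, "app": "storefront"},
    {"name": "idle-07", "watts": 210, "amortized": 150, "app": None},  # forgotten server
]

def monthly_cost(server):
    """Energy cost (grossed up by PUE for cooling and distribution) plus amortized capital."""
    kwh = server["watts"] / 1000 * HOURS_PER_MONTH
    return kwh * PUE * ELECTRICITY_RATE + server["amortized"]

by_app = {}
for s in servers:
    by_app.setdefault(s["app"], 0.0)
    by_app[s["app"]] += monthly_cost(s)

for app, cost in by_app.items():
    label = app if app else "UNASSIGNED (candidate for reclamation)"
    print(f"{label}: ${cost:,.2f}/month")
```

Even this toy version makes the “forgotten server” problem visible as a line item: anything that cannot be attributed to an application shows up as unassigned spend.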
Reaching Into the Cloud
The company recently struck an agreement with Amazon Web Services so that its platform can add data from one more essential layer of modern enterprise infrastructure to the mix: the public cloud. AWS will be its first cloud integration, but the plan is to add Microsoft Azure, the second-biggest public cloud, and then also smaller, more specialized cloud providers.
The data set a platform like TSO can draw from AWS is “pretty extensive,” Rallo said. The cloud exposes numerous APIs to third parties that provide data on utilization levels and essentially everything else TSO gets from an on-premises virtualization stack.
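As an illustration of the kind of utilization data such APIs expose, here is a hedged sketch that pulls average CPU utilization for a single EC2 instance from Amazon CloudWatch using the boto3 library. The region, instance ID, and look-back window are placeholders, and this is a generic example rather than TSO’s actual integration.

```python
# Illustrative only: fetching average CPU utilization for one EC2 instance from
# CloudWatch, the kind of data a cost-analytics platform could ingest from AWS.
# The region and instance ID are placeholders.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=3600,                 # hourly samples
    Statistics=["Average"],
)

samples = sorted(response["Datapoints"], key=lambda d: d["Timestamp"])
if samples:
    avg = sum(d["Average"] for d in samples) / len(samples)
    print(f"14-day average CPU utilization: {avg:.1f}%")
```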
Not a Replacement for DCIM
Software by TSO and the others doesn’t necessarily replace DCIM or ITSM software. As described, TSO uses both to enrich the data set it analyzes. The company isn’t interested in developing the capabilities to monitor power consumption of cooling units or humidity levels on the data center floor, for example.
The DCIM integration opportunity is important, however, and it’s important to the DCIM vendors themselves. TSO has DCIM partnerships with Siemens and others. “They want to tightly couple the physical to the application,” Rallo said.
Most DCIM solutions strive for a holistic view of the data center, but no matter how complete your view of the physical infrastructure underneath is, it can never be called a “holistic view” if you don’t have visibility into the software that infrastructure is supporting.
4:30p
Is Your Organization Ready for Application and Desktop Virtualization?
Sachin Chheda is Director of Solutions for Nutanix.
Exporting applications and desktops to a remote screen isn’t a new concept. Since before even the days of UNIX and the X Window System, IT has been able to deliver applications and desktops to end users. However, the real renaissance for app and desktop delivery started when users were able to connect to their Windows apps and desktops from any device over any network.
Independent of vendors, there are several different ways for IT to deliver apps and desktops to their end users, each delivering a different set of benefits. This article discusses some of the popular options IT can utilize to deliver virtualized applications and desktops to the end user and the functions of each:
Hosted Shared Desktops (HSD)
HSDs serve desktops to multiple end users from a single (shared) VM running a server operating system. HSD represents the majority of deployments supporting end users and is often referred to as Server-Based Computing (SBC).
There is an additional category of software that delivers application virtualization. Here, the application is encapsulated from the underlying operating system on which it is executed. Virtualization in this case means encapsulation and not running on a hypervisor. This allows for different versions of the same application or mutually exclusive applications to co-exist without having to natively install them. Microsoft App-V and VMware ThinApp are options for application virtualization.
Hosted Virtual Desktops (HVD)
HVDs give each end user his or her own individual desktop, typically running a desktop OS. They are becoming increasingly popular in environments requiring full-featured desktops.
There are additional factors associated with virtualized desktops. IT administrators can provision personal/persistent desktops or pooled/non-persistent desktops. In the former case, the end users get their own desktops to run and customize. User files can be stored within the virtual desktop itself, but they would have to be protected individually.
In the other, “pooled” scenario, users log into one of the desktops drawn from a pool, and the user’s data typically isn’t stored within the virtual desktop. Customizations associated with the user’s desktop are maintained through roaming profiles and folder redirection, using any of a myriad of third-party user environment and profile management tools.
Recently, another set of technologies, app and profile layering, has hit the app/desktop virtualization market. These tools allow IT to create multiple virtual desktops from one golden image by choosing the appropriate layers of applications. This may even include a personalization layer, so end users get their own desktop settings when they log in, essentially creating personalized desktops from non-persistent pools.
Streaming and Local Desktops
Streaming and local desktops come into play where end users run their desktops locally, with OS images that are either streamed from a central location or presented locally. In certain cases, the virtualized desktop itself might run on top of another OS as a virtual machine. These types of desktops require significant bandwidth (when streamed), adequate compute on the end-user device to meet performance needs, and, in the case of local virtualized desktops, additional software.
Streaming is finding a niche in the connected-kiosk space, where network connectivity and compute at the endpoint aren’t an issue and centralized management and control are needed. Local desktops are gaining traction in the PC/Mac space where there is a need to run other OSes locally (Linux on a PC running Windows, or Windows on a Mac). This is seen in small and midsize environments lacking corporate-level resources for HSD or HVD/VDI.
Citrix, Microsoft, and VMware have played key roles in the application and desktop virtualization and delivery space through innovation in protocols, virtualization software (including the broker), and operating systems. Here are some of the components of the stack and the different options and vendors in each:
- App/desktop virtualization software: This is the nerve center of any app/desktop virtualization solution. Offerings include Citrix XenApp/XenDesktop, Microsoft RDSH and VMware Horizon View. Citrix and VMware also deliver functionality in their respective stacks to help with provisioning of desktops, leveraging golden images and cloning.
- Protocols: This is what is used to stream desktops from the server to the end-user device and capture end-user input. Protocols now also cover areas such as printing, USB redirection for end-user peripherals, and so on. Citrix has been the industry pioneer with its ICA and HDX technologies; Microsoft has the pervasive RDP and RemoteFX technologies; and VMware has PCoIP, which is licensed from Teradici, and also supports RDP. (A quick way to check which of these protocol ports a host exposes is sketched after this list.)
- Client: This is the application running on the end-user device that provides access to the desktop/application. Citrix has Receiver, Microsoft has the RDC client, and VMware has the View Client. End users can now also use HTML5-capable web browsers to access their desktops and applications.
- Networking and security: This is a critical, but often overlooked, component of the overall app/desktop virtualization and delivery stack, with features like load balancing, encryption, network acceleration, and firewalling. This layer enables secure and highly available remote access to virtualized desktops and applications. Major offerings in this space include Citrix NetScaler and F5 BIG-IP.
- Server hypervisors: A variety of hypervisors can be used with the app/desktop virtualization software to run the end-user desktops including VMware vSphere, Microsoft Windows Server 2012 R2 (with Hyper-V) and the Nutanix Acropolis Hypervisor (AHV).
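As a small, hypothetical illustration of the protocol layer above, the following sketch probes the well-known default TCP ports used by RDP, ICA/HDX, and PCoIP to see which a given broker or host exposes. The host name is a placeholder, and in practice PCoIP also uses UDP, which a plain TCP check won’t verify.

```python
# Hypothetical helper: check which remote-display protocol ports are reachable on a
# given broker or host. Port numbers are the well-known TCP defaults for each protocol;
# the host name is a placeholder.
import socket

PROTOCOL_PORTS = {
    "RDP (Microsoft)": 3389,
    "ICA/HDX (Citrix)": 1494,
    "ICA session reliability (Citrix)": 2598,
    "PCoIP (VMware/Teradici)": 4172,
}

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "vdi-broker.example.com"  # placeholder broker/host name
    for name, port in PROTOCOL_PORTS.items():
        state = "reachable" if port_open(host, port) else "unreachable"
        print(f"{name:35s} tcp/{port:<5d} {state}")
```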
There is also the topic of underlying compute and storage infrastructure. If you haven’t already, it is definitely worthwhile to research the advantages of hyperconverged infrastructure for app and desktop virtualization, especially ones that come from using a linearly scaling web-scale architecture.
The app and desktop virtualization and delivery space has come a long way from where it used to be even five years ago. There are numerous vendors mentioned here that can help you simplify the process of app/desktop delivery. Do your research to evaluate what tools can help solve your organization’s needs.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
5:00p
Amazon AWS IoT Cloud Service Now Generally Available 
Article courtesy of Talkin’ Cloud
Amazon has announced the general availability of its AWS Internet of Things offering, which allows small devices to connect to the company’s cloud platform.
Although AWS IoT was introduced as a beta offering back in October, Amazon said the full launch of the service is the company’s effort to make the cloud more IoT-friendly. The new service was designed specifically for users with small devices running on limited system resources who still need to connect to the cloud, according to AWS Chief Evangelist Jeff Barr.
Toward that end, AWS IoT features a SQL-like programming interface and lightweight communication protocols. It’s also designed to be highly scalable so that it can accommodate an ever-changing and ever-growing number of devices, to which the service assigns unique identifiers.
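To illustrate what those lightweight protocols look like from the device side, here is a hedged sketch of a device publishing a JSON reading to AWS IoT over MQTT with TLS mutual authentication, using the generic paho-mqtt client rather than Amazon’s device SDK. The endpoint, topic, certificate paths, and the sample rule query in the closing comment are placeholders.

```python
# Minimal sketch of a device publishing telemetry to AWS IoT over MQTT with TLS
# mutual authentication, using the generic paho-mqtt client. The endpoint, topic,
# and certificate paths are placeholders obtained when registering the device.
import json
import ssl

import paho.mqtt.client as mqtt

ENDPOINT = "abc123example.iot.us-east-1.amazonaws.com"  # placeholder account endpoint
TOPIC = "sensors/device42/telemetry"

client = mqtt.Client(client_id="device42")
client.tls_set(
    ca_certs="AmazonRootCA.pem",          # root CA certificate
    certfile="device42-certificate.pem",  # device certificate
    keyfile="device42-private.key",       # device private key
    tls_version=ssl.PROTOCOL_TLSv1_2,
)

client.connect(ENDPOINT, port=8883)
client.loop_start()
client.publish(TOPIC, json.dumps({"temperature": 63.5}), qos=1)
client.loop_stop()
client.disconnect()

# On the cloud side, the SQL-like rules interface routes messages, for example:
# SELECT temperature FROM 'sensors/+/telemetry' WHERE temperature > 60
```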
Amazon announced the general availability of AWS IoT on Dec. 18. It also said several companies, including Philips and Scout Alarm, are already using the service.
To be sure, bringing the cloud and IoT together basically means marrying the biggest buzzwords from 2012 and 2015, respectively (even though IoT is not actually new and neither, for that matter, is the cloud). But by focusing on technology that is actually tailored to IoT devices in a legitimate way — not just rebranding an existing cloud service by adding another buzzword to its name — Amazon seems to be doing something truly worthwhile at the intersection of IoT and the cloud.
This first ran at http://talkincloud.com/cloud-computing/amazon-aws-iot-cloud-service-now-generally-available
5:30p
IBM and Box to Integrate Enterprise Solutions 
Article courtesy of theWHIR
Cloud-based file sharing provider Box is expanding its strategic partnership with IBM to sell joint solutions.
The two companies have agreed on stronger go-to-market and sales commitments and will find ways to leverage Box’s collaboration platform with IBM’s analytics, content management, and social solutions, its security technologies and the global footprint of IBM’s SoftLayer cloud.
The companies this week announced the general availability of Box integrations with IBM Case Manager and IBM Datacap. IBM Case Manager with Box enables sharing content on the Box platform with external participants. IBM Datacap with Box helps businesses capture documents across multiple sources, extract key information, and store them in Box.
These two integrations complement existing product integrations with IBM Content Navigator and IBM StoredIQ. IBM Content Navigator with Box lets customers search, access, and share content across both on-premises and Box environments, and IBM StoredIQ with Box does in-depth assessment and classification of unstructured data across Box and on-premises environments to inform business decisions.
With many existing cloud storage and collaboration solutions treating security and data retention as secondary concerns, the collaboration between Box and IBM seems squarely aimed at making cloud solutions suitable for enterprises that value security and are required to handle data carefully.
This first ran at http://www.thewhir.com/web-hosting-news/ibm-partners-with-box-on-enterprise-oriented-integrations
7:45p
Report: Google May Turn Tennessee Silicon Plant into Data Center
Google is eyeing a brand new but defunct silicon plant in a small town not far from Nashville as a potential data center site.
Hemlock Semiconductor finished building the $1.2 billion polysilicon plant in Clarksville, Tennessee, in 2013 but decided not to launch it, citing an oversupply of the material, used to make photovoltaic panels, caused by a spike in its production in Asia. The internet giant is in talks to buy the plant and the property it sits on to convert it into a Google data center, The Tennessean reported, citing local officials.
Big manufacturing plants and especially semiconductor plants are attractive to companies that build data centers at grand scale, of which Google would be one. These sites have access to a lot of power and all the expensive infrastructure to deliver that power already in place. The Hemlock site, for example, has its own electrical substation.
Google has repurposed a former paper mill in Finland, adapting a lot of its power and cooling infrastructure for data center use. This year it announced a plan to turn a former coal power plant in Alabama into a Google data center.
QTS Realty Trust is an example of a commercial data center service provider that has perfected the art of buying massive defunct industrial plants for pennies on the dollar and turning them into data center facilities. QTS has done this with former semiconductor plants in Virginia and Texas and with a former Sun-Times newspaper printing plant in Chicago.
The Clarksville-Montgomery County Industrial Development Board was expected to vote on a land deal with Google Monday, according to The Tennessean. Local officials hope the Google data center deal, if it goes through, will ease some of the angst caused when Hemlock pulled out of a project that was expected to provide a major boost to the local economy.
8:41p
What (Hardware) You Need to Build an Azure Cloud in Your Data Center
Hoping to exploit the edge the massive scale of its public cloud gives it over VMware in the enterprise data center, Microsoft is preparing to launch the first preview release of Azure Stack, a private Azure cloud environment a company can stand up in its own data center that will look exactly like the public version of Azure to users and be seamlessly integrated with the public cloud.
This is a similar angle on hybrid cloud to the one VMware has been pursuing since 2013, when it announced its vCloud Hybrid Service, later rebranded as vCloud Air. VMware promised a virtual extension of a customer’s on-premises VMware environment into the cloud.
The public cloud portion of VMware’s hybrid cloud is hosted in fewer data centers than Azure, relying on a smaller footprint in colocation facilities, while Microsoft spends billions of dollars on massive data centers around the world, in some cases building its own and in other cases leasing large facilities wholesale.
While it is working with hardware vendors to bring to market pre-integrated hardware-software packages for quick setup of private Azure clouds, Microsoft has released hardware specs for those who want to try the preview version of the Azure Stack software it expects to launch next quarter.
Microsoft Technical Fellow Jeffrey Snover has described what you need to have in your data center if you want to build your own private Azure cloud; you can find the specs in list form on the Microsoft Server and Cloud Blog.
11:51p
NetApp Buys All-Flash Storage Vendor SolidFire for $870M
Enterprise storage giant NetApp has acquired SolidFire, one of the leading vendors in the all-flash storage array market, for $870 million, the companies announced Monday.
NetApp has struggled to grow revenue since 2013 and stock value since 2011, but on a call with analysts company execs shot down any suggestion that the move was meant to compensate for shrinking revenue from the Sunnyvale, California-based company’s other products, painting it instead as more of a strategic deal.
Flash has been quickly improving in performance and cost in recent years, and the fast storage medium has been seeing more and more use in enterprise data centers. While NetApp has a varied flash-based product portfolio, it has struggled to keep up with some of the biggest rivals in this space.
SolidFire specializes in scale-out all-flash arrays for web-scale data centers, the kinds of data centers operated by cloud infrastructure providers or other massive internet services. It has extensive integration with VMware and OpenStack and strong multi-tenancy capabilities.
NetApp CEO George Kurian said the company made the acquisition to add those capabilities to its portfolio, saying SolidFire had “the best and the most differentiated capabilities for the customers’ data centers of the future.”
“Performance and economics of flash are continuously improving,” he said. Flash is being used to address a broader and broader range of use cases, and NetApp wants to target customers making the transition from disk to flash, on their way to an “all-flash data center.”
SolidFire’s web-scale flash capabilities complete the range of all-flash use cases NetApp is addressing, adding to its existing all-flash lineup for performance and enterprise use cases.
Gartner named EMC, IBM, HP, and Pure Storage leaders in its 2015 Magic Quadrant for solid-state arrays. The market research firm named NetApp a “challenger” in the space, saying it was a late entrant to the market.
Pure Storage stands out in the leaders quadrant as a startup among long-time IT giants. The startup’s shares debuted on the New York Stock Exchange in October in a less-than-stellar IPO, ending the first day lower than the opening price. After some ups and downs, Pure Storage closed today at the same price it was at the start of its IPO day.