Data Center Knowledge | News and analysis for the data center industry

Thursday, June 12th, 2014

    12:00p
    Nlyte Says It Can Now Integrate DCIM With Any ITSM Software in Days Instead of Months

    When a customer asks whether a DCIM vendor’s software integrates with the customer’s IT service management system, the vendor has to say ‘yes.’ Very often, however, that ‘yes’ is an uneasy one.

    If the customer happens to be using an ITSM system the vendor has not already built a connector for, that ‘yes’ means there will potentially be months of programming work and tons of money spent on consultants.

    Nlyte, a San Mateo, California-based DCIM vendor, says this is a problem it no longer has. The company’s ITSM framework, announced Wednesday, gives it a way to plug its DCIM software into any ITSM solution in a matter of days, Mark Harris, Nlyte’s vice president of marketing and strategy, said.

    ITSM-DCIM integration no longer optional

    ITSM is a process-oriented approach an IT organization takes to providing and maintaining high-quality IT services for its customers, and a multitude of software solutions has been built to enable it. The two most common are Configuration Management Databases (CMDBs) and workflow systems for change management.

    Just about all customers that undertake modern DCIM (Data Center Infrastructure Management) implementation want some kind of integration of DCIM with these systems. “Almost every customer buys more than one of those integrations,” Harris said. This means vendors who want to stay competitive in the space have to say they can do it.

    Hard-wiring DCIM with ITSM is a bad idea

    Every other DCIM vendor Harris has heard of takes what he calls a “brute force” approach to integration, spending vast programming and consulting resources and a lot of time to hard-wire its software to customers’ ITSM solutions. As soon as one of the hard-wired systems changes, say through a software update, programmers have to roll up their sleeves again, he said.

    With its ITSM framework, it is now a lot easier for Nlyte to say yes to those integration questions. “It makes our ability to create connectors much, much simpler,” Harris said.

    Nlyte knows this because, like others, it has spent years building and releasing various hard-wired ITSM connectors.

    After about six months of work, the framework started shipping as part of Nlyte’s software last month. Delivered as a module in the DCIM solution, it can talk to internal and external programming interfaces used in ITSM systems and understands how to deliver information between them and the DCIM software. It is essentially an orchestration module.
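    Nlyte has not published the framework's internals, but the general shape of an orchestration module that maps DCIM records onto whatever interface a given ITSM system exposes can be sketched in a few lines. Everything below (class names, the field mapping, the stand-in HTTP call) is hypothetical and illustrative only, not Nlyte's actual code or API.

```python
# Illustrative sketch only -- not Nlyte's actual framework or API. It shows the
# general shape of an "orchestration module" that maps DCIM asset records onto
# whatever interface a given ITSM system exposes.
from abc import ABC, abstractmethod


class ITSMConnector(ABC):
    """One subclass per ITSM system; only the mapping logic changes."""

    @abstractmethod
    def push_asset(self, asset: dict) -> None:
        """Send a DCIM asset record to the ITSM system (e.g. its CMDB)."""


class HypotheticalCMDBConnector(ITSMConnector):
    # Field names below are invented for illustration.
    FIELD_MAP = {"asset_id": "ci_identifier", "rack": "location", "power_kw": "power_draw"}

    def push_asset(self, asset: dict) -> None:
        payload = {self.FIELD_MAP[k]: v for k, v in asset.items() if k in self.FIELD_MAP}
        print(f"POST /cmdb/records {payload}")  # stand-in for a real HTTP call


def sync_assets(assets: list[dict], connector: ITSMConnector) -> None:
    """The 'orchestration' step: one loop, any connector."""
    for asset in assets:
        connector.push_asset(asset)


if __name__ == "__main__":
    sync_assets([{"asset_id": "srv-001", "rack": "A-12", "power_kw": 0.45}],
                HypotheticalCMDBConnector())
```

    The point of the pattern is that supporting another ITSM system means writing one new connector subclass rather than re-wiring the DCIM product itself.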

    Connector licenses will cost the same

    Nlyte licenses connectors for different ITSM and server virtualization systems separately from the main DCIM software license. It charges per rack, with list prices ranging from $50 to $100 per connector per rack, depending on the connector.

    The ITSM framework should reflect positively on Nlyte’s bottom line, since connector pricing will remain the same, but the company will not have to spend money on consultants and thousands of programmer hours.
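    As a rough illustration of what that per-rack pricing implies, the snippet below runs the numbers for a hypothetical deployment using the $50-$100 list-price range quoted above; the rack and connector counts are invented.

```python
# Back-of-the-envelope connector licensing cost, using the list-price range
# quoted above ($50-$100 per connector per rack). All figures are illustrative.
racks = 500
connectors = 2          # e.g. a CMDB connector and a change-management connector
low, high = 50, 100     # list price per connector per rack (USD)

print(f"Low estimate:  ${racks * connectors * low:,}")    # $50,000
print(f"High estimate: ${racks * connectors * high:,}")   # $100,000
```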

    12:30p
    Peer 1 Launches Hosted Trial of Private Cloud GPU Offering


    This article originally appeared at The WHIR.

    Peer 1 has launched a hosted trial of its private cloud GPU offering, the company announced Wednesday. The offering delivers NVIDIA GRID GPU acceleration from the data center to any device.

    GRID graphics boards use NVIDIA’s Kepler architecture, which allows GPU hardware virtualization. The service is geared toward SMBs, which can use it to deliver graphics-rich and 3D-intensive content to multiple users with full performance, stability and compatibility, but without the associated infrastructure costs and concerns.

    “A big challenge for businesses today is the ever-increasing volumes of data, and the increasing demand for fast processing,” said Donya Fitzsimmons, Channel Account Executive at Peer 1. “Harnessing the power of GPU-based servers allows them to maximise their performance and focus on their core business, rather than have to worry about hardware and the high costs associated with procuring, building, managing, scaling and upgrading a solid infrastructure.”

    NVIDIA GRID became available on VMware’s Horizon virtual desktop in April. Peer 1’s GRID GPU virtualization will also be available on Citrix’s and Microsoft’s virtualization solutions.

    Peer 1 SVP of Business Development Robert Miggins told The Register that the CPU-GPU cloud offering is a reaction to interest from 20 Peer 1 customers, and it has met with “tremendous demand.”

    The new offering is the first major product released by the UK host since Gary Sherlock became CEO in January. Peer 1 is offering 30-, 60- and 90-day trials, and will lease GPU cloud capacity starting at $2,000 a month, according to The Register.

    SoftLayer and Peer 1’s own public cloud division Zunicore have offered GPU hybrid cloud solutions since 2012, and Peer 1 will find out if there is also demand for an HPC capacity private GPU cloud.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/peer-1-launches-hosted-trial-private-cloud-gpu-offering

    1:00p
    Bitcoin Miners Teach Pivotal a Lesson About Isolating Cloud Workloads

    When Pivotal started offering free 60-day trials of its public Platform-as-a-Service cloud, giving people free data center infrastructure for Bitcoin mining wasn’t exactly what the company had in mind.

    Of course, a free trial is a free trial and users are free to use it however they want to. The problem with Bitcoin-mining applications for Pivotal, however, was that these apps use very little memory and a lot of CPU horsepower, Mark Kropf, Cloud Foundry product manager at Pivotal, said.

    Until recently, the company charged for Pivotal Web Services based on the amount of memory a workload consumed, regardless of the CPU resources involved. Its free trial originally gave a user 2GB of memory and unlimited CPU.

    With this approach, Pivotal very quickly ended up with lots of Bitcoin mining applications – which only require about 8 megabytes of memory – running in its cloud and hogging a massive amount of the infrastructure’s compute capacity, Kropf said.

    To stop the problem, Pivotal changed the model, tying CPU capacity to memory so that the amount of memory an application uses dictates the amount of physical CPU capacity available to it. And it worked: the miners went away. “That [change] made it financially not viable for Bitcoin miners,” Kropf said.
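    The article does not say how Cloud Foundry enforces that tie under the hood. One common way to get this kind of proportional behavior on Linux is to scale a container's CPU weight with its memory quota, which is what the sketch below illustrates; the constants are invented, and this is not presented as Pivotal's actual implementation.

```python
# Sketch of proportional CPU allocation: an app's CPU weight scales with its
# memory quota, so an 8 MB Bitcoin miner gets a tiny CPU share while a 2 GB app
# gets a large one. Constants are invented for illustration; this is not
# necessarily how Pivotal Web Services or Cloud Foundry implements it.

CPU_SHARES_PER_GB = 1024  # hypothetical weight granted per GB of memory


def cpu_shares_for(memory_mb: int) -> int:
    """Return a cgroup-style CPU weight proportional to the memory quota."""
    return max(2, int(memory_mb / 1024 * CPU_SHARES_PER_GB))


for name, mem in [("bitcoin-miner", 8), ("web-app", 512), ("java-service", 2048)]:
    print(f"{name:14s} {mem:5d} MB -> cpu shares {cpu_shares_for(mem)}")
```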

    To be sure, Pivotal does not view Bitcoin mining as a workload that should be kept away from its cloud at all costs. More than anything, the experience taught the company a lesson about isolating workloads in its cloud to prevent applications from hogging CPU capacity to a point where they harm other applications. “We are able to better segregate workload between apps,” Kropf said.

    Pivotal Web Services is based on Cloud Foundry, an open source PaaS technology the company spearheaded and continues to be heavily involved in. Like Amazon Web Services’ Elastic Beanstalk or Salesforce’s Heroku, it enables developers to easily deploy and scale applications in the cloud.

    Public cloud is only one of the ways Bitcoin miners deploy their workloads. Because it requires a lot of processing horsepower, Bitcoin mining has become a rapidly growing niche market for data center service providers.

    A company called KnC Miner, for example, is building a 10 megawatt data center in Sweden where it will provide dedicated servers for the sole purpose of mining digital currency. C7 Data Centers has about 4.5 MW of data center capacity dedicated to Bitcoin mining and companies in the cryptocurrency ecosystem.

    1:30p
    Moving From Cloud Back to Data Center – Not as Easy as You May Think

    What if you went down the cloud path and realized it was a mistake? What if you deployed a massive data set or critical application into a cloud data center and it’s not working very well? Now what? How do you pull back and place your environment back into your own data center? Regardless of your organization’s size, migrating critical applications and workloads can be a bit scary. But it doesn’t have to be. Many IT shops have had to move large workloads between public and private cloud instances. In some cases, it simply has to be done, whether for performance reasons or use-case issues.

    Let’s take a look at some of these use cases and what organizations have done to migrate their cloud workloads:

    • Finding the issue. There needs to be a reason. It’s sometimes as easy as that. Many organizations follow trends or adopt technologies without understanding the impact on their existing business model. So what happens if you’re in a situation where a piece of your cloud-based environment just isn’t working the way it needs to? Before ripping everything out, find the issue at hand. Is it a specific application or possibly a resource dependency? Did a piece of your platform greatly outgrow your own expectations? Organizations looking to migrate back to the data center should do so cautiously. In many situations, ripping everything out and making it private again is actually yet another mistake. Taking time to identify a cloud challenge will not only help you understand which piece of your infrastructure should be moved, it’ll also help you proactively plan out your platform.
    • Planning the migration (always try to do this in a parallel deployment). In my numerous experiences and architectural projects, one of the best ways to migrate a system, application or platform is to do it in parallel. Even in large organizations emotions can get the best of people, so creating the least disruptive migration plan is important. You can plan for transitional phases, you can test and develop against the system, and you can effectively train your users and your staff as needed. A good migration architecture can take a single application – or an entire platform – and run it in parallel to your existing systems. Metrics around capacity, bandwidth and other resources must be taken into consideration during planning.
    • Replicating data points, resources and users. Just because it works one way in the cloud does not mean it’ll work the same way in a private data center. Take the time to understand the impact of the migration. There are tools out there, like LoginVSI, which can create “shadow” users to test against a system. Can you handle the additional workloads? Do you have the resources needed to scale? Will the application you just migrated work optimally in a new setting? Proactively answering these kinds of questions will ensure a smooth migration process (a minimal capacity headroom check is sketched after this list). Storage, networking, compute and the end-user experience must all be considerations. Fortunately, data center and infrastructure technologies have come a long way. Our ability to replicate data and networks and to span users across environments is far more advanced than ever before. Consider using these tools.
    • Creating the new use case. Now that you’ve elected to migrate a piece of your environment, it’s important to create your new use case. Why? Because you don’t want to have the same challenges more than once. Deploying a resource on a system requires planning and a good use-case scenario. The application which was functioning in a public cloud environment will operate and interact differently when you bring it in-house. Even before you migrate something from a public cloud environment, you need to create a good use case, because there is a solid chance that this same resource also won’t work well in a private data center.
    • Ensuring workload longevity. Workload, application or even infrastructure migration is never easy. In today’s ever-evolving technology world it’s critical to ensure the longevity of your data and workloads. During the migration process (and after), take the time to plan out your resources and requirements. Scalability and supporting the end user from a performance perspective will go a long way in allowing your platform to run well. Look for optimizations to help. There have been lots of innovations in the software-defined world. These types of technologies help abstract the physical and create a powerful logical layer that helps with application and infrastructure resiliency.
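    As a minimal illustration of the headroom questions raised in the replication bullet above, the sketch below compares a workload’s observed peak usage against the free capacity of the target environment, with a safety margin. All figures and the 20 percent margin are invented; a real assessment would rely on proper capacity-planning and load-testing tools.

```python
# Minimal headroom check referenced in the "Replicating data points" bullet:
# before migrating a workload back in-house, compare its observed peak resource
# usage against the free capacity of the target environment. All figures and
# the safety margin are made up for illustration.

def has_headroom(peak: dict, free: dict, safety_margin: float = 0.2) -> bool:
    """True if every resource fits in the target with a safety margin to spare."""
    return all(peak[r] * (1 + safety_margin) <= free.get(r, 0) for r in peak)


workload_peak = {"cpu_cores": 24, "memory_gb": 96, "storage_tb": 8, "bandwidth_gbps": 2}
onprem_free   = {"cpu_cores": 40, "memory_gb": 256, "storage_tb": 20, "bandwidth_gbps": 10}

print("Safe to migrate in parallel:", has_headroom(workload_peak, onprem_free))
```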

    Cloud computing can be a powerful tool. However, not all workloads are designed the same and certainly not all cloud models are built for the same purpose. Fortunately, cloud and replication technologies have come a long way, making it much easier to migrate massive workloads. Regardless, this does take up extra business cycles that could have been dedicated to more proactive projects. One of the best recommendations around the cloud is to plan twice and deploy once.

    2:00p
    Do You Know Your Data Center Cooling Profile?

    The modern data center can be as complex as it is unique. New levels of convergence and high-density computing have created new types of cooling and management profiles for the next-generation data center platform.

    Here’s the reality – your data center model will only continue to change and evolve. Through it all, managing data center environmental variables becomes critical for the health and longevity of your data center infrastructure.

    In this whitepaper from HP and Intel, we learn that a recent data center cooling survey found that the majority of respondents used traditional, well-established cooling technologies with a long track record. Smaller numbers had ventured into newer technologies such as free air cooling and water-based cooling systems designed to remove heat from the entire data center.

    The key takeaways included the following:

    • Only 1 in 5 people have tried liquid cooling in production, and only 6 percent are experimenting with it.
    • There are significant benefits to using it, especially newer technologies such as immersion.
    • Significant numbers of people are interested in it, but many still think that it involves a major upheaval.
    • Many people don’t know what would prompt them to consider it, but they are interested in it, suggesting a large and untapped market for a technology with lots of potential.

    With all of this in mind, it’s no wonder that cooling is so important to the people responsible for keeping the nation’s data centers running. Eighty-five percent of respondents answering the question agreed or strongly agreed that reducing cooling costs is one of their highest priorities, and the same proportion are always on the lookout for new opportunities to reduce cooling overhead.

    There are two main ways to deal with the cooling issue. You can adjust your existing data center infrastructure to mitigate the heat problem, or you can improve the efficiency of your cooling systems. Ideally, you’ll do the former before attempting the latter to ease the strain on your cooling systems. Download this whitepaper today to learn more.

    New cooling technologies, particularly in areas such as liquid cooling, are evolving rapidly, but many people still don’t have much experience with them. The survey goes on to show that a significant proportion of people will consider these technologies only when they are undergoing major upheavals in other areas, such as infrastructure changes or new facility builds.

    The most important aspect to remember is the constant change being experienced by your organization and the end-user community. With more cloud, IT consumerization and a lot more data, the data center will be a critical component of any business. DCIM and other proactive ways to monitor and control your environment will be critical in helping you out-compute the competition.

    5:13p
    TierPoint Management Teams Up With Investors to Buy, Recapitalize the Company

    TierPoint, a data center service provider, has been acquired by its management and a group of investors experienced across the communications infrastructure industry.

    The company was founded in July 2010 as Cequel Data Centers after its owners acquired Colo4. Cequel acquired TierPoint in 2012, and that name became the brand for the entire portfolio. The deal, which recapitalizes the company, puts TierPoint in a good position for future investments and acquisitions. Terms were not disclosed.

    Several new investors are aboard to help expand the company as it has expanded in the past: through targeted acquisitions in underserved regional markets. TierPoint operates six WAN-connected data centers in Dallas, Oklahoma City, Tulsa, Spokane, Seattle and Baltimore, a portfolio built through acquisitions.

    The investor group includes Chairman Jerry Kent, RedBird Capital Partners, The Stephens Group, Jordan/Zalaznick Advisers and Thompson Street Capital Partners. TierPoint CEO Paul Estes and the existing management team will continue to lead the business.

    “Our new financial partners bring a long-term orientation and expertise in building high-growth communications businesses.  With their involvement and this recapitalization we are well-positioned to continue investing in our infrastructure, technologies and people,” Estes said.  “We plan to acquire additional strategically sound assets and continue building TierPoint into an industry-leading company.”

    Classic data center rollup

    TierPoint is an example of what is often referred to as a “rollup” play, where investors — usually private equity — buy up small companies that compete in the same market and gain an advantage through economies of scale.

    St. Louis telco- and cable-focused investment and management firm  Cequel III formed Cequel Data Centers in 2010 and bought Colo4Dallas (which later became Colo4). Two private equity firms, Thompson Street Capital Partners, also of St. Louis, and Charterhouse Group of New York, then joined Cequel’s data center business. Later, Cequel acquired Perimeter Technology, which formed its Oklahoma footprint, and then TierPoint.

    Today, the company manages about 140,000 square feet of data center space, a footprint primed to grow with the new investors aboard.

    “This transaction is a confirmation of our strategy led by our talented management partners at TierPoint to create a leader in data center services, targeting underserved regional markets,” said Jim Cooper, managing partner of Thompson Street Capital Partners. “The recapitalization is a win-win, providing additional resources to fund the company’s unique growth strategy, while simultaneously generating a strong return for our investors.”

    6:06p
    Verizon’s IaaS Cloud Storage Built on Intel-Backed Amplidata’s Himalaya

    To serve and scale cloud-based workloads with immense storage requirements, Verizon Enterprise Solutions’ recently launched Cloud Storage offering uses the new Amplidata Himalaya storage architecture to power distributed object management and global file accessibility.

    The architecture is designed for enterprise cloud workloads to quickly and securely store and retrieve large and diverse data sets. Himalaya is a core component of the offering, managing an object-addressable, multi-tenant storage platform.

    “Verizon understands the needs of its diverse enterprise customers and laid out the demands for a storage architecture that supports its vision of a complete enterprise-level cloud offering,” Mike Wall, CEO of Amplidata, said. “Himalaya is the architecture that brings that vision to life and helps Verizon renew its leadership as the cloud provider for enterprises large and small.”

    Verizon currently hosts its Infrastructure-as-a-Service platform and cloud-based object storage service at its Culpeper, Virginia, data center, but plans to expand to several other global sites in 2014. The cloud storage will then offer a “geo-redundant” fault tolerance option, replicating data across three sites. Its no-charge beta period that began when it launched last fall ends this month.

    Amplidata said the cloud service provider chose its technology because it could store and manage zettabytes of data and trillions of stored objects under one global namespace.

    The Himalaya architecture is at the foundation of Amplidata’s two new offerings, one for cloud service providers and the other for OEMs (original equipment manufacturers). Using off-the-shelf Intel-based hardware, Himalaya supports multi-tenancy and heterogeneous SLAs, as well as non-disruptive changes in storage configuration and allocation.
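    Amplidata has not published API details here, but the idea behind “trillions of objects under one global namespace” with multi-tenancy is usually a flat key space in which tenant and bucket are folded into the object key. The sketch below illustrates that generic pattern only; it is not Amplidata’s or Verizon’s interface, and all names are invented.

```python
# Generic illustration of a flat, tenant-scoped object namespace -- the usual
# pattern behind claims like "trillions of objects under one global namespace."
# This is not Amplidata's or Verizon's API; names are invented.

class FlatObjectStore:
    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}  # one global key space

    @staticmethod
    def _key(tenant: str, bucket: str, name: str) -> str:
        # Tenant and bucket are folded into the key, so multi-tenancy needs no
        # separate namespaces -- just key prefixes plus access control.
        return f"{tenant}/{bucket}/{name}"

    def put(self, tenant: str, bucket: str, name: str, data: bytes) -> None:
        self._objects[self._key(tenant, bucket, name)] = data

    def get(self, tenant: str, bucket: str, name: str) -> bytes:
        return self._objects[self._key(tenant, bucket, name)]


store = FlatObjectStore()
store.put("acme", "backups", "2014-06-12.tar", b"...")
print(store.get("acme", "backups", "2014-06-12.tar"))
```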

    Earlier this year Amplidata secured $11 million from Intel Capital, Quantum and others to help drive its next phase of growth in advancing cloud-scale storage economics.

    6:38p
    HP Launches Helion Network, a Meeting Place For All Things HP Cloud

    HP announced the Helion Network, an attempt to create an ecosystem of independent software vendors, developers, system integrators and value-added resellers that will drive adoption of open standards-based hybrid cloud services.

    The network is essentially a meeting ground for those developing cloud solutions that are, to varying degrees, HP-centric. Those that join the Helion Network have a chance to tap into a wider customer base for cloud services that harness HP and partner technology and expertise.

    Service providers in the network can sell each other’s cloud solutions as well. HP says the Helion Network will be hardware-agnostic.

    It will also provide access to HP training, collateral and go-to-market support. The Helion Network will become a part of a wider portfolio the company announced recently, including HP’s own Helion OpenStack distribution and the Helion Development Platform (a Platform-as-a-Service offering based on the open source PaaS technology Cloud Foundry). Both OpenStack and Cloud Foundry form the basis of HP’s Helion cloud strategy.

    It will also be assisted by the HP CloudAgile Service Provider program, which includes many service providers and cloud deployments around the world. Network participants, such as initial partners AT&T, HKT, Intel and Synapsis, will also work with HP to evolve the network.

    “Global enterprises grapple with a daunting array of cloud products and services across locations, which creates challenges that include security, data sovereignty, interoperability and quality of service,” said Martin Fink, executive vice president and CTO of HP. “The HP Helion Network leverages HP’s expertise gained from running OpenStack technology at scale and our ability to unite service providers and technology partners. Together, we’re building a federated ecosystem that enables organizations to deploy services on the right platform at the right time and at the right cost.”

    Its enterprise offerings will include:

    • An open, secure and agile hybrid IT environment with no vendor lock-in, which enables workload portability between on- and off-premises environments.
    • Access to an expanded enterprise-grade cloud services portfolio that includes horizontal and vertical applications, as well as network-enhanced services such as secure cloud networking, enabling customers to meet local and multinational hybrid requirements.
    • The ability to meet country-specific data regulations regarding data sovereignty, retention and protection.

    “Our enterprise customers are looking for a secure cloud solution for applications that require heightened levels of information protection, allowing them to gain economic, speed and control benefits,” said Andrew Geisse, CEO of AT&T Business Solutions. “We are excited to integrate our patented NetBond technology with HP Helion and look forward to finding ways to further extend these capabilities across The HP Helion Network ecosystem.”

