Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, August 13th, 2014

    1:43p
    What Infrastructure and Operations Has to Do With DCIM (Hint: A lot!)

    Matt Bushell is director of product and corporate marketing for Nlyte Software, a DCIM software company.

    I recently had the opportunity not only to exhibit at the Gartner Infrastructure and Operations show, but also to attend several sessions in an effort to soak up the latest and greatest in the industry. I was pleasantly surprised to hear Gartner saying a lot of what we at Nlyte have been observing. Notably:

    Change is a collaborative process – Professor Eddie Obeng of Henley Business School gave the opening keynote, “Transforming with Confidence.” I found myself thinking about how his animated presentation could apply to someone who is trying to bring DCIM into their organization but is meeting resistance because their audience isn’t familiar with DCIM. Professor Obeng indicated that in order to see change, you need to reduce the level of fear, and data can help do that. What he described was a good change model:

    1. Issue – Identify the issue with the other parties (always a good practice, vs. making declarative statements).
    2. Data – Provide facts, not statements. This is huge, as DCIM provides insightful reports and dashboards, whereas status quo solutions (data in spreadsheets) make reporting tedious and incomplete.
    3. Question – Ask questions. For example, “What will we do together?” or “What might need to change?” Nlyte takes pride in its Workflow, where processes are mapped collaboratively, cross-functionally, often for the first time, in the Nlyte system, step by step. Often we’ll see a reduction in the process steps once they question what they see “on paper.”
    4. Build – Build the solution together, and build trust. With the latter, the formula is Promise, Do it, Remind/Confirm. Repeat this at least four times. Clearly, a well-controlled DCIM process is one that is repeatable and thus builds trust in your ability to execute.

    Business value trumps all – Jeff Brooks, the event co-host, made this point in his session, “Tell the Story with Business Value Dashboards.” It came up repeatedly: infrastructure and operations teams need to convey what 99.5 percent uptime means to the business, and therefore to the executive team.

    For example, X hours of downtime means Y processes don’t happen, which means Z sales don’t occur; the statement becomes a dollarized one instead of a process one. One instant poll of session attendees revealed that 66 percent planned to adopt ‘Business Value Dashboards’ as a primary means of communicating value to the business within the next 12 months (5 percent already had, 24 percent planned to within the next two years, and 5 percent were not ready to think about it).

    While this may have been a leading question, session attendees would be quick to realize that DCIM can offer customizable dashboards, which bodes well for selling DCIM to infrastructure and operations teams that want business value dashboarding capabilities for their data center.
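
    To put numbers on the kind of dollarized statement described above, here is a minimal sketch of the calculation; the transaction volume and revenue figures are hypothetical, not numbers from the session.

        # Hypothetical illustration of a "dollarized" downtime statement:
        # X hours down -> Y transactions missed -> Z dollars of revenue not booked.
        def downtime_cost(hours_down: float,
                          transactions_per_hour: int,
                          revenue_per_transaction: float) -> float:
            """Translate an outage duration into lost revenue."""
            missed_transactions = hours_down * transactions_per_hour
            return missed_transactions * revenue_per_transaction

        # A 99.5 percent uptime target allows roughly 43.8 hours of downtime per year.
        allowed_hours = (1 - 0.995) * 365 * 24
        print(f"Allowed downtime at 99.5% uptime: {allowed_hours:.1f} hours/year")
        print(f"Revenue at risk: ${downtime_cost(allowed_hours, 1_200, 25.0):,.0f}")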

    IT services are built on assets and processes – In order to build an IT service portfolio, you need a catalog of IT services; those services are built upon processes and assets, and it is on top of the services that value is finally created.

    Debra Curtis and Suzanne Adnams detailed an ITSM framework for infrastructure and operations leaders, where IT maturity increases as you build upon the assets and processes.

    Figure 1: Nlyte’s interpretation of a presentation by Debra Curtis and Suzanne Adnams of Gartner.

    At Nlyte, we have demonstrated that it is best to focus first on Assets, or even on things like Asset Lifecycle Management, and then to focus on what to do with those assets process-wise (for Nlyte, this is our Workflow capability). The value and data center maturity level build from there.

    Business value has a specific “speak” – Consistent with and building upon Jeff Brooks’ session was Robert Naegle’s, “Business Value: Laying the Ground Work for High Value I&O.”

    Robert had a great chart that compared IT (Feature/Function) vs. Business (Need/Benefit). In order to convey things like revenue impact, risk mitigation or cost reduction, you need specific metrics tailored to those outcomes, and fortunately, infrastructure and operations has strong levers for them.

    Business value metrics such as transactions per hour, capacity utilization versus plan and unplanned downtime can be greatly affected by the things the infrastructure and operations team focuses on: Connectivity, Security and Compliance, Service Support and Continuity, as well as Hardware and Software – all things that DCIM helps manage.

    More expensive assets can have lower total cost of ownership – Jay Pultz’s session, “Significantly Reducing I&O Costs,” covered many areas, but the one that stood out showed two bar charts side by side, with a more expensive asset on the left and a less expensive asset on the right, and with two-thirds of the cost difference coming from the staff needed to support the “cheaper” server. One server cost $6,500, the other $5,200, but the staff cost difference was $1,700 per year per server. One can only imagine what a server kept beyond its three-year cycle will cost in staff support.
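
    Using the figures above and a three-year life, a quick back-of-the-envelope sketch shows how the pricier server wins; the staff figure is modeled as the extra cost attached to the cheaper box, which is an illustration rather than Gartner's exact model.

        # Rough three-year TCO comparison using the figures quoted above.
        YEARS = 3

        # Model the staff cost as the *difference* between the two servers,
        # attached entirely to the cheaper one.
        expensive_server = {"purchase": 6500, "extra_staff_per_year": 0}
        cheap_server = {"purchase": 5200, "extra_staff_per_year": 1700}

        def relative_tco(server: dict, years: int = YEARS) -> int:
            """Purchase price plus the extra staff cost over the asset's life."""
            return server["purchase"] + server["extra_staff_per_year"] * years

        print("Expensive server:", relative_tco(expensive_server))  # 6,500
        print("Cheaper server:  ", relative_tco(cheap_server))      # 10,300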

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:46p
    UK Startup ClusterHQ Makes Docker Containers Data-Aware

    Riding the rising wave of popularity of application container technology, UK-based HybridCluster has renamed itself ClusterHQ and put a focus on data portability in containers with the launch of Flocker.

    Positioned as the ‘data people for Docker,’ ClusterHQ hopes to increase adoption of Platform-as-a-Service with its data-aware container approach. Flocker is an open source container management tool focused on the data, solving the operational challenges that come from running data-backed services inside containers. The company says Docker containers are not data-aware, which is a gap it aims to fill with Flocker 0.1, an open source volume manager for running databases, key-value stores and queues inside Docker containers.
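
    For context, here is a minimal sketch, using the Docker SDK for Python rather than anything from the Flocker announcement, of how a stateful service is typically run today: the named volume lives only on the host that runs the container, which is the portability gap ClusterHQ says Flocker addresses. The image name and paths are arbitrary examples.

        # Illustrative only: start a database container whose data volume is
        # bound to the local Docker host.
        import docker

        client = docker.from_env()

        db = client.containers.run(
            "postgres:13",  # any stateful service would do
            detach=True,
            volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
        )

        # The "pgdata" volume exists only on this host; plain Docker has no notion
        # of moving the data along with the container to another machine.
        print(db.short_id, client.volumes.get("pgdata").name)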

    Modern applications are distributed systems

    ClusterHQ CEO Luke Marsden says that the company “will do for containers what vCenter did for VMs.” The company is backed by industry veterans and Flocker is the result of more than five years of research, development and operational experience combining distributed storage with containers.

    HybridCluster was a container management platform the company launched two years ago to prove the resilience and scalability of the technology. The company estimates the burgeoning market for Linux containers will reach $1.7 billion by 2018, about one-quarter of the overall $6.94 billion PaaS market.

    Containers are an enabling technology for PaaS. Modern PaaS offerings are built to enable developers to build applications that can run on different kinds of infrastructure, be it a variety of cloud platforms or an on-premise environment, and Docker containers make that cross-platform portability possible.

    Infrastructure as code

    Flocker aims to solve some of the problems with today’s approach to PaaS and containers by treating infrastructure as code and making the entire application portable by managing both its stateful and stateless components. The company says Flocker will provide a lightweight volume and container manager that lets developers deploy their entire app, including its databases, to multiple hosts and then migrate them later as operations demands.

    It lists three primary benefits for Flocker: capturing the entire application and its data, easily migrating databases, queues and key-value stores along with the rest of the application, and supporting deployment to and between any public, private or bare-metal clouds.

    Colin Humphreys, CEO of CloudCredo, said, “Production-ready data-backed services represent the biggest challenge for PaaS adoption. Solving the data problem such that databases and other data-backed services can be run inside containers will unleash a massive amount of pent-up demand for containers and PaaS.”

    Containers are in

    Docker, a San Francisco startup, recently released its lightweight runtime offering for moving apps from one place to another. Docker 1.0 was supported by AWS, Google, Microsoft, Red Hat and others and the startup was recently reported to have closed on a funding round of more than $40 million.

    Another major supporter is CenturyLink Technology Solutions, which yesterday announced Panamax, an open-source solution for multi-container application management.

    Fueled by the success of containers, ClusterHQ looks to make Docker production-ready for delivering modern applications.

    Flocker is available on GitHub under an Apache 2.0 license. The 0.1 release is not quite ready for prime time, but the company hopes to gain some traction for the product with this release and have full functionality with a general availability launch later.

    3:24p
    VMware Vets Launch Private Cloud Startup Platform9, Raise $4.5M

    Emerging from stealth Tuesday, Sunnyvale, California-based startup Platform9 launched a Software-as-a-Service platform designed to quickly transform on-premise enterprise servers into an agile, self-service private cloud. The company also announced a $4.5 million Series A funding round from Redpoint Ventures.

    AWS-like efficiency in the enterprise

    Consisting of a group of ex-VMware employees, Platform9 set out to abstract on-premise infrastructure to deliver it as a private cloud that can be used like the Amazon Web Services cloud but in-house. The solution forms a private cloud of servers and storage arrays and is delivered from the vendor’s cloud as a service.

    The company said several mid-size and large companies were currently trying out a beta version of its product.

    Platform9 co-founder and CEO Sirish Raghuram said he and his team “founded Platform9 because as early engineers at VMware, we observed how customers were struggling to achieve AWS-like efficiency with increasingly archaic management software. We believe that just like SaaS revolutionized the world of enterprise applications, it can do the same for enterprise data centers.”

    The company is certainly not alone in the market for private cloud offerings, but it does have a unique product. Redpoint Ventures partner Satish Dharmaraj, who has joined Platform9’s board, said, “Platform9 is uniquely positioned to seize this opportunity with their all-star founding team and cloud-based delivery model.”

    Platform9 uses OpenStack APIs and offers automation and policy control similar to that of public cloud services.

    In addition to pooling servers, storage and networking, the platform’s intelligent placement technology ensures optimal hardware allocation and enables rich policies for tiered consumption of resources. Although it initially supports only KVM, the platform will eventually let administrators mix and match virtualization and container technologies, including Docker containers, KVM and VMware vSphere.

    Platform9 says it will give developers UI- and API-based self-service access to private clouds to provision instances, or to automate their build-test-release pipeline, by leveraging OpenStack APIs or libraries.
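
    As a rough illustration of that kind of API-driven self-service provisioning, here is a minimal sketch using the openstacksdk client against a standard OpenStack endpoint; the cloud name, image, flavor and network are placeholders, not Platform9 specifics.

        # Provision an instance through OpenStack APIs (illustrative placeholders).
        import openstack

        conn = openstack.connect(cloud="my-private-cloud")  # reads clouds.yaml

        image = conn.compute.find_image("ubuntu-20.04")
        flavor = conn.compute.find_flavor("m1.small")
        network = conn.network.find_network("dev-net")

        server = conn.compute.create_server(
            name="build-agent-01",
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )
        server = conn.compute.wait_for_server(server)
        print(server.name, server.status)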

    4:00p
    Want a “Warehouse-Scale” Computer? CoreOS is Working to Get You One

    CoreOS, the San Francisco startup with a Linux distribution for companies that operate massive data centers, has acquired Quay.io, a New York-based company started by two former Google engineers that offers a solution for centralized hosting of application images using Docker containers.

    The acquisition is yet another step toward CoreOS’ goal of offering a complete solution that will enable any enterprise to stand up and manage data center environments the way web-scale companies, such as Facebook and Google, have been doing it.

    The CoreOS Linux distribution is designed for large compute clusters. The operating system and tools that come with it ensure consistency among nodes in a distributed infrastructure – a must for architectures where clusters are designed to withstand individual node outages.

    Docker containers are a way to deploy an application on any type of infrastructure, be it a bare-metal server, a cloud infrastructure or a laptop. The technology is enjoying a lot of interest because it divorces an application from the infrastructure it runs on.

    The current default solution for hosting “Dockerized” application images is Docker Hub, a hosted service offered by the app container company. The problem with that approach for enterprises is that the Docker Hub registry is public. “Anybody can view the image and download it,” Alex Polvi, CoreOS founder and CEO, said in an interview.

    Quay.io’s registry is deployed on customers’ internal infrastructure. “It’s a place where you can store images, share them with your colleagues and use it as a centralized place to distribute it to all of your servers,” Polvi explained.
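
    As a sketch of that workflow, the snippet below tags a locally built image for a self-hosted registry and pushes it there using the Docker SDK for Python; the registry hostname and repository are hypothetical, not Quay.io specifics.

        # Illustrative only: publish an image to a private, self-hosted registry
        # instead of the public Docker Hub.
        import docker

        client = docker.from_env()

        image = client.images.get("myapp:1.0")  # an image built locally
        image.tag("registry.internal.example.com/team/myapp", tag="1.0")

        for line in client.images.push(
            "registry.internal.example.com/team/myapp",
            tag="1.0",
            stream=True,
            decode=True,
        ):
            print(line.get("status", ""))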

    Besides security, enterprise customers may want to opt for the self-hosted version to protect proprietary applications or because their applications are large and would be cumbersome to work with over a web-based service.

    Quay.io also has a hosted registry alternative to the enterprise product, and CoreOS is offering it free for six months to the first one-thousand users who sign up for Quay.io’s 20-repository plan. This is a way to make it easier for users to test the technology before they commit to the enterprise product, Polvi said.

    CoreOS, Docker and Quay.io all sprang up very recently – little more than a year ago – but the former two have already created a lot of buzz because they are offering an entirely new way of thinking about using data center infrastructure.

    Quay.io was founded by Jacob Moshenko and Joseph Schorr, both of whom used to work on Google’s APIs. Google uses containers to deploy its own applications and has been a strong public proponent of Docker.

    CoreOS did not disclose how much it paid for Quay.io, whose team still consists of its two founders. The San Francisco firm is also using the deal to expand operations to the east coast, taking over Quay.io’s office in Manhattan and planning to expand there, Polvi said.

    7:28p
    Instor to Fit Out Data Center Halls for Digital Realty Customers

    Instor, which provides infrastructure solutions for data center floors, has been added to the list of Digital Realty Trust’s official contractors that outfit data halls for its customers.

    Digital Realty Trust is one of the biggest data center providers overall and the biggest wholesale data center provider in the world, and the deal is a big win for Instor. The contractor has been doing work for Digital Realty for about a year, and the wholesaler recently made the relationship official by naming Instor an “alliance partner.”

    “It’s a big deal for us,” Jack Vonich, vice president of sales and strategic partnerships at Instor, said. “It could be one of the largest deals that we’ve done as a partnership.”

    Instor will provide remote power panels, power distribution units, IT racks, hot- and cold-aisle containment solutions and other infrastructure components. It will both supply the equipment and hire subcontractors to install it, managing the process from start to finish.

    The company will compete for projects with the general contractors Digital Realty also uses for this kind of work. Vonich says Instor can get the components cheaper and quicker than those contractors do, since it buys them directly from manufacturers instead of going through the multi-stage supply chain general contractors usually rely on.

    Instor has been serving the lab and data center industry since 1988, but has recently been branching out beyond its traditional end-user focus. “We do this kind of business direct with end users a lot, and we just started doing more of it with colocation facilities and their customers,” Vonich said.

    To date, the company has worked on about 40,000 square feet of data center space for Digital Realty. The projects include a 10,000-square-foot facility in Santa Clara, California, for a software company, 14,000 square feet in Ashburn, Virginia, for a media corporation and about 15,000 square feet of space total at a Franklin Park, Illinois, facility for a telco, a financial services firm and another software provider.

    While Digital Realty is going through a portfolio-optimization process, selling off underperforming or “non-core” properties, its global data center portfolio is going to remain massive. The company continues to lease out big chunks of data center space every quarter, which means a lot of potential business for a partner like Instor.

    In the second quarter, for example, Digital Realty leased out about 285,000 square feet of space. It signed leases for about 370,000 square feet of space in the first quarter. Not all, but most of that space is in North America.

    9:31p
    Vantage Offers Private Direct Links to Amazon, Microsoft Clouds

    Vantage Data Centers has partnered with Level 3 to provide private network connections to Amazon Web Services, Microsoft Azure and other public clouds within its Santa Clara, California, data centers. Data center providers continue to enable direct private links to cloud in order to meet hybrid computing needs of customers.

    Direct links to cloud make it easier for customers to integrate public cloud services into enterprise environments. They also offer better privacy, security and performance than basic public cloud connectivity because they bypass the public Internet. Multi-tenant providers are using direct link capabilities as another feature to entice enterprises to use colocation. While cloud providers might be considered competitors to colocation, many colocation providers are treating cloud as a complement.

    Vantage has had options for customers to connect to the cloud in the past, but this is the first formal partnership designed to “productize” its cloud connection options. “Cloud is becoming so important to our customers that we’ve decided to invest in creating a clearly defined cloud connect solution,” said Vantage Senior Vice President of Operations Chris Yetman. “We have decided to begin with Santa Clara, as many of our customers and prospects here are running hybrid solutions.  As we continue to build out Quincy [Washington], we may begin to offer cloud direct connection products there as well.”

    The “Wholo” trend

    Like other wholesale data center companies, Vantage is evolving beyond its traditional wholesale roots, offering smaller deals and more services to customers. This is an industry-wide trend. The lines between traditional data center provider business models are blurring, giving rise to the term “wholo,” a portmanteau of wholesale and colo. Vantage’s roots are in wholesale, but the company recently noted it has been accepting smaller-size deals and becoming increasingly hands-on with customers as a result.

    Direct links to cloud are one example of this ongoing evolution. “Smaller customers in the ‘wholo’ segment are typically earlier in the development of their infrastructure, and many of them started with the cloud and are now moving to a dedicated data center space for the first time,” said Yetman. “These customers need options for connecting their data center to their existing cloud infrastructure.”

    The company says that many of its larger customers also have some workloads in the cloud and need direct-connection options. “The reality is that most companies have some combination of data center and cloud infrastructure and need a performant and cost-effective way to connect the two,” said Yetman.

    Direct links to cloud providers continue to be one of the most requested features for multi-tenant data center providers. “Many customers have applications that exist partially in their data center and partially in the cloud,” said Greg Vernon, senior vice president of sales for Vantage. “This connectivity between Vantage and Level 3 makes it much easier and more cost-effective for businesses to integrate the two.”

    Colocation continues to see tremendous growth amongst web-based companies and enterprises.  Enterprises that still host on premises are a large potential market.

    Many providers are offering direct links to cloud, either through portals that link up customers with other tenants that provide cloud, or with public cloud providers. Equinix, Telx, Interxion, DuPont Fabros and Digital Realty have all expanded their cloud connectivity services.

    Vantage’s campuses in Santa Clara and Quincy consist of four enterprise-grade data centers totaling over 100 megawatts of potential capacity.

    10:00p
    BGP Routing Table Size Limit Blamed for Tuesday’s Website Outages

    Many websites, including Data Center Knowledge, responded only sporadically from certain locations Tuesday, but the outages did not result from loss of power at a hosting company’s or a cloud provider’s data center, a flood or a network cable severed by a squirrel. They were attributed to a structural problem in the way the Internet is built.

    That issue is the capacity of a certain type of memory chip on older-generation router hardware used in many service providers’ infrastructure. Ternary Content-Addressable Memory (TCAM) is the memory routers use to store the Internet’s routing table. In very simple terms, it is a combination of an address book and a map of the routes Internet traffic travels on.

    The number of routes TCAMs can store is finite, as a post on The IPv4 Depletion Site blog, run by a group of network and IT experts, explains. While workarounds have been developed to deal with this limit, not all routing equipment (especially older routing equipment) has been upgraded to use them. On Tuesday morning, the Internet felt a very distinct tremor that resulted from the size of the routing table reaching the magic number of 512,000 BGP routes. BGP (Border Gateway Protocol) is the protocol used to exchange routing information between networks.
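
    The headroom math is simple; here is a minimal sketch of the kind of check an operator might run against a router whose TCAM is provisioned for 512,000 IPv4 routes. The route count below is a hypothetical input, not a live feed.

        # Compare the global BGP table size against a router's TCAM route capacity.
        DEFAULT_TCAM_ROUTES = 512_000  # common IPv4 allocation on older routers

        def tcam_headroom(bgp_routes: int, capacity: int = DEFAULT_TCAM_ROUTES) -> float:
            """Return remaining TCAM capacity as a fraction (negative means overflow)."""
            return (capacity - bgp_routes) / capacity

        current_routes = 512_400  # hypothetical count just past the limit
        headroom = tcam_headroom(current_routes)
        if headroom <= 0:
            print("TCAM exhausted: excess routes spill to slower software forwarding")
        elif headroom < 0.05:
            print(f"Warning: only {headroom:.1%} TCAM headroom left")
        else:
            print(f"{headroom:.1%} TCAM headroom remaining")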

    Representatives of the hosting company Liquid Web (which hosts Data Center Knowledge, among many others) indicated on the company’s Twitter feed that the issue had been attributed to the table size hitting the TCAM limit.

    Since the issue affected numerous network operators, it was not easy to send traffic around affected areas of the Internet. “Generally, we would reroute traffic, but this is being hindered by the amount of providers experiencing outages,” the Liquid Web team tweeted.

    According to downdetector.com, service providers that had network issues Tuesday morning included Comcast, Level 3, AT&T, Cogent, Verizon, Time Warner and possibly others. Outage start times, courtesy of downdetector:

    • Comcast is having issues since 8:30 AM EDT
    • Level 3 is having issues since 9:55 AM EDT
    • AT&T is having issues since 9:35 AM EDT
    • Cogent Communications is having issues since 10:10 AM EDT
    • Verizon Communications is having issues since 10:41 AM EDT
    • Time Warner Cable is having issues since 10:01 AM EDT

    Things began looking up in the afternoon, when Liquid Web tweeted, “As ISP’s have recovered from #512k active bgp routes being reached, many of our customers affected by these carrier issues have regained ability to reach their sites.”

    The hosting company updated its Twitter feed around 3 pm Pacific, saying all of its customers had regained connectivity from all locations.

