Data Center Knowledge | News and analysis for the data center industry - Industry's Journal
 

Monday, August 5th, 2013

    11:22a
    Data Center Jobs: Telx

    At the Data Center Jobs Board, we have a new job listing from Telx, which is seeking a Process Analyst in New York, New York.

    The Process Analyst is responsible for ensuring adherence to Telx's established policies and processes for managed projects, developing processes to support new product launches, providing end-user support following the launch of a new or updated process, maintaining existing processes (updates, tweaks, enhancements), managing the document repository, providing expertise in Telx certifications and industry compliance standards, providing ongoing support for certifications, and managing auditors and vendors as needed. To view full details and apply, see the job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    12:28p
    The Human Cost of Data Backup Sprawl

    Eric Silva is Director, Product Marketing, Sepaton, Inc. He has more than thirteen years of experience in the storage industry, and more than five years of hands-on experience as an IT director and an IT solutions architect.

    ERIC SILVA
    Sepaton

    As discussed in the recent article, The Hidden Costs of Sprawl – Total Cost of Ownership, many large enterprise data centers have taken an “add another system” approach to dealing with rapid backup data growth. With this approach, they have simply added more single-node, disk-based backup targets every time they run out of capacity, or fail to meet their backup windows.

    In an enterprise data center, where data is measured in tens of terabytes and growing at a rate of 20 to 40 percent compounded annually, the “add another system” approach often takes a toll on the human beings who manage the IT data center. Large organizations can mitigate this human toll by consolidating data backups onto enterprise-class systems that are designed to enable a single administrator to manage tens of petabytes without stress. They not only scale to meet growing performance and capacity needs, but also automate a variety of disk management, monitoring and reporting tasks.

    Disruption to Ongoing Backup Processes

    Every time a new backup target is added in a non-scalable environment, IT staff face the time-consuming tasks of re-allocating backup volumes, load balancing all of the backup systems in the environment, and tuning the entire environment to restore optimal efficiency. They also need to make difficult decisions about how to go about dividing their backup volumes among existing and new systems.

    For example, should they divide the backup onto multiple systems? Move backups to the new system and let the older data expire off the older systems? Or, should they disrupt ongoing operations for a significant change in backup strategy or use a less disruptive, more costly “Band-Aid” approach? As a result, adding a backup target to a non-scalable backup environment means increasing workloads, adding complexity, and placing more stress on an IT organization.

    A better alternative is to use a grid scalable system that enables IT staff to increase performance by adding processing nodes or to increase capacity by adding disk shelves. These systems integrate the added performance or capacity with existing resources and perform all load balancing, tuning and management tasks automatically and seamlessly, without disruption to ongoing operations or the need to make difficult decisions.

    More Systems Mean More Maintenance

    Every new machine added to the backup environment requires added IT time to maintain software licenses, upgrades, updates and patches, as well as hardware maintenance. In short, it means more tedious, time-consuming tasks for an IT staff that is probably already stretched thin.

    With fully loaded full-time IT employees costing the company $150,000 per year, companies should look for ways to make better use of their time with systems that increase the terabytes (TB) of data that a single sysadmin can safely manage. Consolidating backups onto a single, scalable system dramatically reduces total cost of ownership (TCO).
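    The staffing arithmetic behind that claim is simple to sketch. The following example uses the article's $150,000 fully loaded cost per admin; the TB-per-admin figures are hypothetical assumptions chosen only to illustrate the comparison, not numbers from the article:

    ```python
    import math

    # Fully loaded cost per full-time admin, USD/year (from the article)
    ADMIN_COST = 150_000


    def admins_needed(total_tb: float, tb_per_admin: float) -> int:
        """Whole admins required to manage total_tb of backup data."""
        return math.ceil(total_tb / tb_per_admin)


    def annual_staff_cost(total_tb: float, tb_per_admin: float) -> int:
        """Yearly staff cost of managing total_tb at a given TB-per-admin ratio."""
        return admins_needed(total_tb, tb_per_admin) * ADMIN_COST


    # 2 PB of backup data, comparing a hypothetical sprawl environment
    # (250 TB per admin) against a consolidated one (2,000 TB per admin)
    sprawl = annual_staff_cost(2000, 250)         # 8 admins -> $1,200,000/year
    consolidated = annual_staff_cost(2000, 2000)  # 1 admin  -> $150,000/year
    ```

    Even under these rough assumptions, the gap between the two staffing models dwarfs most hardware price differences.
    
    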

    The Stress of Uncertainty

    With key information about backups divided among multiple individual systems, IT staff have more uncertainty to deal with and a harder time making holistic, informed decisions about their company-wide backup environments.

    Enterprises should choose a solution that enables IT data managers to get fast, accurate information on the status of their entire backup environment. Robust dashboard functionality can not only enable a single administrator to manage more backup data, it can also enable them to reduce inefficiencies in their backup environment, plan for future capacity needs, and ensure restore Service Level Agreements (SLAs) are achievable. For example, a single dashboard on a grid scalable solution can put the following information at an administrator’s fingertips:

    • Which backup volumes have been backed up and to which backup target?
    • How efficient was the deduplication process in saving capacity requirements?
    • What’s the status and efficiency of data replication? (Has replication completed? How much bandwidth was required to complete it?)
    • Are backup targets operating efficiently? Are any systems in danger of failing?
    • Can I meet backup windows consistently? Restore service level agreements (SLAs)?
    • What is the cost of data loss to the business unit or organization I’m serving?
    • Am I adding risk to the business by continuing with this strategy?
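    The deduplication question in the list above comes down to simple arithmetic a dashboard can report directly. A minimal sketch, with made-up TB figures purely for illustration:

    ```python
    def dedup_stats(logical_tb: float, physical_tb: float) -> dict:
        """Deduplication ratio and percent capacity saved, as a backup
        dashboard might report them.

        logical_tb:  data the backup application wrote to the target
        physical_tb: disk capacity actually consumed after deduplication
        """
        ratio = logical_tb / physical_tb
        saved_pct = 100.0 * (1 - physical_tb / logical_tb)
        return {"ratio": ratio, "saved_pct": saved_pct}


    # Example: 500 TB of backups stored in 25 TB of physical disk
    stats = dedup_stats(500, 25)  # 20:1 ratio, 95% capacity saved
    ```

    A 20:1 ratio means each physical terabyte is holding twenty logical terabytes of backup data, which is the kind of figure an administrator needs when planning future capacity.
    
    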

    When Less is More...

    While the impact on morale is harder to quantify, the human costs of sprawl can quickly affect the bottom line through increased staff turnover, low staff productivity and increased overtime. By consolidating backups and automating backup tasks, companies can free key IT staff for more productive and more gratifying work and, most importantly, achieve their mission: meeting their SLAs and reducing the business risk of data loss and downtime.

    Scalable, enterprise-class systems also provide a tighter level of management control and reporting, helping IT departments get the most value from their backup investment and plan more accurately for future needs.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:48p
    How A Switch Failure in Utah Took Out Four Big Hosting Providers


    On Friday morning, two network switches failed in a data center near Provo, Utah. As the impact of the failed switches rippled through the facility’s network, the downtime spread across four major U.S. web hosting firms, affecting millions of customers.

    How could an equipment failure in a single facility knock out four large national brands, including BlueHost, HostGator, HostMonster and JustHost? The simultaneous downtime reflects the ongoing consolidation in the hosting industry, as well as the tendency for large firms to congregate in many of the same data center facilities. It’s not a new trend, and can also be seen in cloud computing, where power problems at a single Amazon facility can quickly ripple across popular start-ups and social media sites.

    The answer lies in the growth of the Endurance International Group (EIG), which is well known in the hosting industry, but not a household name outside of it. Endurance has grown through a series of acquisitions, as it has pursued a “roll-up” of shared hosting companies. In 2012, Endurance made a huge splash in the industry, acquiring HostGator and its $100 million business, as well as Intuit’s $70 million web hosting business, turning the company into one of the biggest mass market hosting operations in the world. However, EIG’s operations remain something of an enigma, as it owns and operates so many brands that there isn’t even a definitive list of its properties.

    Endurance owns and operates several big-name brands in the market, including A Small Orange, Bluehost, FatCow, iPower and JustHost. With each acquisition, Endurance has maintained the acquired company’s established brand. This strategy has its advantages, allowing it to target specific markets with specific brands. But as EIG moves these brands onto the same hosting platform, an outage at a single data center can take out several services.

    Simultaneous Outages

    One result of the consolidation of the shared hosting industry is the convergence of infrastructure into fewer data centers. When those data centers suffer downtime, several brands can be knocked offline. Previously, the hosting landscape was spread across multiple data centers, but roll-up plays such as EIG, along with the emergence of cloud providers such as AWS (particularly its East region), mean that outages are felt by more customers and can do more damage.

    The Provo, Utah data center outage began Friday morning, and by 5:30 p.m., many sites continued to experience problems. The outage knocked out some of Endurance’s most well known brands, including BlueHost and HostGator.

    Company Response

    Endurance created a dedicated web site to update customers, and an executive also addressed the incident in comments and a statement at The WHIR.

    “During routine data center network maintenance, two of our core switches failed,” wrote Ron LaSalvia, COO of Endurance International. ”This resulted in a significant service disruption for many of our customers, for our own websites, and for our phone systems. Our entire team spent the day diagnosing and repairing the switches and restoring customer sites. At this point, almost every site is back online. We will continue work to ensure that our services are fully restored for every customer, and will do an extensive analysis to improve our network stability.”

    “Clearly today was not good enough,” LaSalvia added. “Nothing bothers me more than when we do not deliver what you expect and deserve.”

    Chatter among customers suggested that Endurance was migrating one of its major 2012 acquisitions, HostGator, from SoftLayer to the Ace Data Center facility in Provo when the failure occurred.

    Customer Relations, Communications Need to Be Prioritized

    Whatever the cause, outages happen in this industry and they’re terrible for all involved – the company and its customers. In a crisis, the best way for a company to navigate out of an outage is to communicate excessively with customers. A Twitter feed with granular details on the recovery progress goes a long way toward preserving transparency and alleviating customer concerns. Even if customers don’t understand the complexity of the language, frequent updates let them know that you’re working hard to recover.

    Endurance has always been somewhat mysterious, and perhaps that doesn’t lend itself to transparency. The company was apologetic, but there seems to be plenty of customer appetite for a detailed post-mortem to regain their trust.

    Because of Endurance’s separate branding, many customers aren’t aware that these properties operate on the same platform. The outage knocked out several major brands and several million websites, many of whose owners probably didn’t realize they were Endurance customers.

    Consolidation and roll-up are inevitable in the hosting industry, as competitive pressures come from all angles, including cloud, and even social media such as Facebook, LinkedIn, Google+ or Pinterest. These social networks have taken a lot of personal web page business away from the giants of the hosting world. The focus has shifted in the mass market hosting industry to small business clients, yet these new bread-and-butter customers might not fully understand what’s going on behind the curtain.

    As more companies, large and small, use cloud service providers and as the shared hosting industry consolidates to a few major players, the Endurance outage provides a reminder of the need to address single points of failure, and their close relative, the perfect storm of improbable events. The Endurance outage hit seemingly disparate hosting brands, amplifying an incident in a single location across multiple services, just as an outage in Amazon Web Services’ East Region impacts the many services built on its cloud infrastructure.

    This seems to point to a need for increased transparency and information for the customer about underlying infrastructure. Maybe the old adage of “Let the buyer beware” should be modified to “Let the customer be informed.”

    6:58p
    Strong Growth Continues for Cloud Provider DigitalOcean

    DigitalOcean’s marketing pitch leads with SSDs and simplicity.

    Cloud provider DigitalOcean has been growing at an impressive clip. The company just opened its second New York data center, taking space with Telx in the Google-owned 111 Eighth Avenue. The location is live, with users now able to spin up virtual machines, which the company calls “droplets.”

    “Googleplex East is among the world’s most wired buildings and one of the two key Internet intersections in Manhattan,” the company wrote in a blog. “In addition to significantly increasing East Coast capacity, this new data center brings DigitalOcean even closer to rolling out private networking for our customers, which will premiere in NY2.”

    In addition to the new private networking offering, the company plans to add object storage down the line. It also announced that it has added a significant amount of server capacity at its Amsterdam location.

    Getting Rid of the Blank Slate Problem

    One of the issues with raw compute and storage cloud providers is that they aren’t necessarily user-friendly, which means the low end of the market can find these services complex to use. The company’s aim is to make cloud computing easier than what’s currently offered. The message is working: Netcraft found 7,000 web-facing servers at DigitalOcean in June, up from 1,000 last December, and the company has accumulated more than 35,000 customers since its launch in 2011.

    The company’s other NYC metro area data center is in Northern New Jersey. New customers in the past month have been told they can’t host their accounts there, leading some to believe that Jersey is filled to capacity. Customers were pointed to other DigitalOcean locations in Amsterdam and San Francisco.

    The new data center adds what appears to be much needed space, though the company hasn’t stated it was running out of space in New Jersey.

    Making it Easier

    A lot of cloud providers have put an emphasis on making it easier to operate in the cloud. Rather than the “blank slate” approach of market leader Amazon Web Services (though to be fair, Virtual Machine Images and recent tutorials from AWS have made it more user-friendly), providers such as DigitalOcean differentiate themselves by providing simple deployment options to get up and running. It’s an approach targeted to developers, as well as startups and small businesses. The company plans to expand the types of virtual machine images that are ready to go, pre-loaded with applications like WordPress.

    The company claims one-minute setup times. Customers can resize servers to another pricing tier in the same amount of time.

    It’s also another cloud provider using solid state drives (SSDs) and software-defined networking (SDN) for a performance boost (another example in the space is CloudSigma). Its cloud is based on the KVM hypervisor.

    Growing Pains: Virtual Currency & Security Mishap

    Last month, speculators in a new virtual currency named Primecoin briefly threw a wrench into DigitalOcean’s operations. The company was forced to stop provisioning new droplets on its cloud as users spun up machines for cheap compute cycles to mine the currency. Like the better-known Bitcoin, Primecoin relies on cryptographic proof-of-work as a mechanism to control growth of the money supply, and the computational demands of generating new currency increase over time.

    Bitcoin’s price sky-rocketed to over $100 per coin as speculators drove up the price of the virtual currency. Folks were banking on Primecoin to do the same, and began mining for it.

    Another problem along the road of growth was a security vulnerability discovered by a user, who found that the service was using identical SSH host key fingerprints for multiple Ubuntu droplets generated within a single account. The company fixed the problem.
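    For context, an OpenSSH host key fingerprint is just a hash of the public key blob, which is why duplicate keys across droplets are easy to spot. A minimal sketch of that check, using fabricated key material for illustration (in practice the keys would come from something like ssh-keyscan):

    ```python
    import base64
    import hashlib


    def ssh_fingerprint(pubkey_line: str) -> str:
        """SHA256 fingerprint of an OpenSSH public key line
        ("<type> <base64-blob> [comment]"), in the unpadded-base64
        form that `ssh-keygen -lf` prints."""
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")


    # Fabricated key material: two "droplets" sharing one host key, a third distinct
    key_a = "ssh-ed25519 " + base64.b64encode(b"droplet-key-material-1").decode()
    key_b = "ssh-ed25519 " + base64.b64encode(b"droplet-key-material-1").decode()
    key_c = "ssh-ed25519 " + base64.b64encode(b"droplet-key-material-2").decode()

    assert ssh_fingerprint(key_a) == ssh_fingerprint(key_b)  # duplicate key: red flag
    assert ssh_fingerprint(key_a) != ssh_fingerprint(key_c)  # distinct keys, as expected
    ```

    Matching fingerprints on supposedly independent machines mean the same private key is installed on both, so anyone holding one droplet's key could impersonate the other.
    
    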

    The company handled both problems promptly and transparently. It’s growing at a solid pace; expect it to continue at a quick clip.

    7:40p
    Compass Moving Away From Modular Marketing

    A look at one of Compass Datacenters’ facilities. The company says it will no longer emphasize modularity in its marketing.

    Compass Datacenters said today that it will no longer be emphasizing “modular” in its marketing, saying the term has become too closely identified with containers and other portable IT enclosures.

    CEO Chris Crosby outlined the change in “We Made A Mistake,” a blog post explaining that Compass’ efforts to differentiate as a “Truly Modular Data Center” had not cleared up the confusion.

    “I was meeting with a prospective customer who expressed to me that they had reservations about working with us because we offered a modular solution,” Crosby writes. “But when he actually saw one of our videos of a facility being built he was immediately relieved and said, ‘That’s not modular; that’s a building.’

    “Unfortunately for our industry, the term modular seems to have become synonymous in the minds of many end users with the ‘trailer-like’ structures,” he continued. “We’ll still let customers know that they can grow their operations incrementally, but we won’t be throwing the ‘M’ word around with the degree of frequency that we used to. We’ll also argue strenuously with anyone who tries to pigeonhole us as a ‘modular provider.’”

    On one level, Compass’ decision is a refinement of one company’s marketing. But it reflects a larger debate in the industry about what “modular” means, with some applying the term to factory-built enclosures for IT gear, while others use it to describe a multi-stage deployment of traditional data center space, perhaps using pre-fabricated components for mechanical and electrical infrastructure. IO and AST Modular have been leading proponents of modular as factory-built, while Compass, CyrusOne and Digital Realty have all used the term to describe phased deployment of raised-floor space.

    The use of “modular” was a key step in differentiating more advanced factory-built designs from the ISO-compatible shipping containers that defined the early days of portable IT enclosures. The term gained buzz and became more widely used in 2010.

    Crosby says the effort to refine the definition of modular to highlight incremental expansion was an error.

    “By focusing on the methodology we used to build our data centers, we were missing what was most important to the customer: the permanence of the solution,” he wrote. “Although modular construction is an advance in building development, we were promoting how we built the clock rather than telling people what time it was. A building connotes the permanence, security and stability that most customers are looking for when they make their 25-year data center investment.

    “Sometimes reality just has to hit you in the face to confirm what you suspected,” Crosby concluded. “So folks, you can be sure that here at Compass we’ll never forget: ‘That’s not modular. That’s a building.’”

    8:59p
    NASDAQ FinQloud Hosts Cloud-Powered Compliance


    Is regulatory compliance a barrier to cloud adoption? Not at NASDAQ OMX FinQloud, which is now hosting a service that monitors compliance requirements for high-frequency trading operations. NASDAQ FinQloud, which launched a year ago to provide specialized cloud services for the financial industry, is hosted entirely on the Amazon Web Services platform.

    Jordan & Jordan will host its Execution and Compliance & Surveillance Service (ECS) on FinQloud, becoming the service’s 19th client, NASDAQ said. The ECS migration to FinQloud gives Jordan & Jordan the ability to easily boost the application’s power and storage capacity. FinQloud will host ECS as it manages large volumes of trade and quote data, provides storage and retrieval of full reports and individual records, and allows read-only access for inquiries and auditing.

    “Compliance needs are increasing, as is the use of cloud services, and this is an exciting match of needs and solutions,” said Tom Jordan, Chief Executive Officer of J&J. “We believe this relationship with FinQloud will allow J&J and our clients to reach new levels of reliability, redundancy and scalability while continuing to demonstrate our commitment to ensure regulatory compliance. In addition, the FinQloud ecosystem provides new opportunities for J&J to enhance the range of services offered to ECS customers and enables easy access to our services for other FinQloud customers.”

    As a service operated within FinQloud, Jordan & Jordan’s ECS will enable customers to analyze their stored trade data without the maintenance costs associated with daily data transfers. An independent assessment confirmed that FinQloud’s R3 data storage solution meets the stringent recordkeeping requirements of SEC and CFTC regulations.

    “We are pleased that J&J decided to use FinQloud to optimize their compliance offering for broker-dealers. The financial services community is readily adopting scalable tools which decrease operational expenses, considering transaction volumes and regulatory requirements,” said Stacie Swanstrom, Head of Access Services at NASDAQ OMX. “The ecosystem we are fostering within FinQloud paves the way for our industry to build, provision, and manage mission critical applications for the future.”

