Data Center Knowledge | News and analysis for the data center industry

Wednesday, October 1st, 2014

    Time Event
    7:00a
    Docker Management Startup StackEngine Emerges From Stealth With $1M in Funding

    StackEngine is an early-stage startup that helps to manage and automate Docker containers. Containers are being lauded as the biggest thing since virtualization, promising to change the way IT is done. But the parallels between virtualization and containerization don’t stop there. While they fix major problems, they also both require a new approach to operations and management.

    StackEngine wants to solve what it says is an emerging Docker management bottleneck. While developers love the new approach that makes it easier to deploy and move applications, the wider organization needs tools and best practices to help them quickly and simply launch “Dockerized” products and services.

    Containers don’t remove the complexity of managing the overall infrastructure: change and configuration control, along with capacity and resource management for the infrastructure underneath, still have to be handled.

     

    StackEngine is part of a new breed of companies emerging to offer Docker management. Shippable is another example of an early-stage company trying to solve these problems.

    StackEngine has raised a $1 million round of seed funding to help it mature its platform and set the stage for future growth. The funding comes from Silverton Partners and LiveOak Venture Partners.

    StackEngine is currently in “private alpha” with about 15 customers and plans to use the round to move towards general availability in the fourth quarter.

    The company was co-founded by Bob Quillin and Eric Anderson, both veterans of CopperEgg (monitoring), Hyper9 (acquired by SolarWinds to become its virtualization management product) and VMware.

    “Virtualization was the predecessor,” said Anderson. “Docker is potentially the next VMware. Just as an ecosystem evolved around helping operations around virtualization, we saw the same ingredients now brewing for the Docker ecosystem.”

    Containers solve a huge problem in IT by packaging an application and taking its dependencies with it as it moves to where it needs to go. “Docker solves the huge problem of systems management, virtualization management,” said Quillin. “This has been an issue for the last 10-20 years. Now there’s no more dependency issue on the server, but I have new problems up the stack. How do I harness all of that power Docker gives me? Where should I run it?”

    Channeling Spider-Man, Quillin said, “With all that power comes some responsibility. There’s cost controls, performance constraints…

    “StackEngine agents run across the host and virtual machine forming a management mesh. It gives real time understanding of the containers that are running, and how the host is performing. There’s a lot of value in providing management visibility of what you have. We’re adding in the ability to do actions as well. Visibility and monitoring is moving to actions and orchestration.”
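
    StackEngine has not published its agent internals, but the Docker API already exposes the raw per-host data such a “management mesh” would collect. The sketch below, using the Docker SDK for Python, shows one way an agent might snapshot the containers running on a single host; it illustrates the general approach, not StackEngine’s implementation.

```python
# Minimal sketch of the per-host visibility a container-management agent can
# gather via the Docker API, using the Docker SDK for Python (docker-py).
# Illustrative only; StackEngine's actual agent implementation is not public.
import docker


def snapshot_host():
    client = docker.from_env()  # connect to the local Docker daemon
    report = []
    for container in client.containers.list():  # running containers only
        stats = container.stats(stream=False)   # one-shot stats sample
        mem_bytes = stats.get("memory_stats", {}).get("usage", 0)
        report.append({
            "id": container.short_id,
            "image": container.image.tags[0] if container.image.tags else "<untagged>",
            "status": container.status,
            "memory_bytes": mem_bytes,
        })
    return report


if __name__ == "__main__":
    for entry in snapshot_host():
        print(entry)
```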

    The company believes that, given the early momentum and groundswell of interest in Docker, this next wave of containerization will help move the industry to the next level: getting containers into the enterprise.

    3:30p
    Explaining the Uptime Institute’s Tier Classification System

    Matt Stansberry has researched the convergence of technology, facility management and energy issues in the data center for over a decade. Since January 2011, he has been Director of Content and Publications at Uptime Institute.

    Uptime Institute’s Tier Classification System for data centers is approaching the two decade mark. Since its creation in the mid-1990s, the system has evolved from a shared industry terminology into the global standard for third-party validation of data center critical infrastructure.

    Over the years, some industry pundits have expressed frustration with the Tier System for being confusing. In many cases these writers have misrepresented the purpose and purview of the program.

    Invariably, these authors and interview subjects have never been involved with a Tier Certification project. Typically, the commentator’s understanding of the Tiers is entirely secondhand and ten years out of date. And yet, when a commentator manages “1 million square feet of data center space for a large multinational enterprise” and represents a respected organization like AFCOM, we feel the need to respond.

    I would like to take this opportunity to explain what the Tiers look like today, illustrate how Tier Certification works, list some companies that have invested in Tier Certification and offer Uptime Institute’s vision for the future.

    What are the Tiers?

    Uptime Institute created the standard Tier Classification System to consistently evaluate various data center facilities in terms of potential site infrastructure performance, or uptime. The Tiers (I-IV) are progressive; each Tier incorporates the requirements of all the lower Tiers.

    Summary definitions of the Tiers I-IV are available here.

    Data center infrastructure costs and operational complexities increase with Tier Level, and it is up to the data center owner to determine the Tier Level that fits his or her business’s need. A Tier IV solution is not “better” than a Tier II solution. The data center infrastructure needs to match the business application, otherwise companies can overinvest or take on too much risk.

    Uptime Institute recognizes that many data center designs are custom endeavors, with complex design elements and multiple technology choices. As such, the Tier Classification System does not prescribe specific technology or design criteria. It is up to the data center owner to meet a Tier Level in a method that fits his or her infrastructure goals.

    Uptime Institute removed reference to “expected downtime per year” from the Tier Standard in 2009. The current Tier Standard does not assign availability predictions to Tier Levels. This change was due to a maturation of the industry and an understanding that operations behaviors can have a larger impact on site availability than the physical infrastructure.

    Tier Certification

    The Tier Certification process typically starts with a company deploying new data center capacity. The data center owner decides to achieve a specific Tier Level to match a business demand.

    Data center owners turn to Uptime Institute for an unbiased, vendor neutral benchmarking system, to ensure that data center designers, contractors and service providers are delivering against their requirements and expectations.

    The first step in a Tier Certification process is a Tier Certification of Design Documents (TCDD). Uptime Institute Consultants review 100% of the design documents, ensuring that each subsystem (electrical, mechanical, monitoring and automation) meets the fundamental concepts and that there are no weak links in the chain.

    Uptime Institute has conducted over 400 TCDDs, reviewing the most sophisticated data center designs from around the world. As you might imagine, we’ve learned a few things from that process. One of the lessons is that some companies would achieve a TCDD, and walk away from following through on Facility Certification for any number of reasons. Some organizations were willfully misrepresenting the Tier Certification, using a design foil to market a site that was not physically tested to that standard.

    The TCDD was never supposed to be a final stage in a certification process, but rather a checkpoint for companies to demonstrate that the first portion of the capital project met requirements. Uptime Institute found that stranded Design Certifications were detrimental to the integrity of the Tier Certification program. In response, Uptime Institute has implemented an expiration date on TCDDs. All Tier Certification of Design Documents awards issued after 1 January 2014 will expire two years after the award date.

    Data center owners use the Tier Certification process to hold the project teams accountable, and to ensure that the site performs as it was designed. Which brings us to the next phase in a Tier Certification process: Tier Certification of Constructed Facility (TCCF).

    During a TCCF, a team of Uptime Institute consultants conducts a site visit, identifying discrepancies between the design drawings and installed equipment. Our consultants observe tests and demonstrations to prove Tier compliance. Fundamentally, this is the value of the Tier Certification, finding these blind spots and weak points in the chain. When the data center owner addresses the deficiencies, Uptime Institute awards the TCCF letter, foil and plaque.

    Does the industry find value in this process? The clearest proof is the list of companies investing in Tier Certification. There are more Certifications underway at this moment than at any other point in the 20-year history of the Tiers.

    Look at adoption among the telecommunications companies, colocation providers and data center developers: Digital Realty, Compass Data Centers, CenturyLink, and Switch. We have been pleased to impress each and every one of those companies with our dedication to quality and thoroughness, because we understand all that is on the line for them and their clients.

    As the IT industry moves further into the cloud and IaaS mode of IT service delivery, the end user has less control over the data center infrastructure than ever before. Tiers and Operational Sustainability provide third-party assurance that the underlying data center infrastructure is designed and operated to the customer’s performance requirements.

    Here is the full list of Tier Certification awards.

    Beyond Tiers: operations

    As mentioned previously, Uptime Institute recognizes the huge role operations plays in keeping data center services available. To that end, Uptime Institute developed a data center facilities management guideline in 2010 (Tier Standard: Operational Sustainability) and certifies data center operations. This is a site-specific scorecard and benchmarking of a facilities management team’s processes, with an on-site visit and detailed report.

    For companies with existing sites, or those that for whatever reason have chosen not to certify their data center facilities against Tiers, the operations team can be certified under the Management & Operations (M&O) Stamp of Approval.

    The key areas reviewed, observed, and validated include staffing, training and maintenance. The full criteria are described in Tier Standard: Operational Sustainability.

    By covering these essential areas, a management team can operate a site to its full uptime potential, obtain maximum leverage of the installed infrastructure/design and improve the efficacy of operations.

    The path forward?

    In addition to the certifications listed above, Uptime Institute is delivering and developing further services for the IT industry around corporate governance and IT resource efficiency. As we bring those services to market, we will commit to being more present in the public forum.

    With further education in the market, we hope to engage in substantive debates about our processes and approach, rather than defending claims from individuals with incorrect or incomplete knowledge of the Tiers program.

    Fundamentally, it is our responsibility to better explain our approach and intellectual property. We owe it to our hundreds of clients who have invested in Tier Certification.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

     

    5:35p
    Cosentry Gets Foothold in Milwaukee With Red Anvil Acquisition

    Midwest IT solutions provider Cosentry has acquired full-service managed data center provider Red Anvil. Cosentry bought all assets of the company, expanding its operations into the Milwaukee data center market. Terms of the deal were not disclosed.

    Cosentry continues to build out its footprint in the Midwest. Its roots are in colocation, but the company has recently expanded into cloud services and hosting infrastructure. Its acquisition strategy this year has been to acquire strong managed services players in active Midwest data center markets.

    Earlier this year it acquired managed services provider XIOLINK in St. Louis. The company now has nine data centers across five markets in the region: Kansas City, Milwaukee, Omaha, St. Louis and Sioux Falls, all of which Red Anvil customers now have easy access to.

    Cosentry will interconnect Red Anvil’s data center with its other locations to provide high-speed, redundant backup and disaster recovery services. It will continue to offer Red Anvil’s complete set of services.

    “Cosentry is well-known throughout the Midwest for its impressive data center and managed services,” said Neil Biondich, CEO of Red Anvil. “The combination of our services and customer support will enable our region’s businesses to take advantage of world-class business continuity, cloud, colocation and managed IT services right from their own hometown.”

    The company anticipates expanding its current Milwaukee facility with a new full-service data center in the first half of 2015.

    “Cosentry is excited to enter the Milwaukee market with the acquisition of Red Anvil, a data center provider who has established themselves as a regional leader over the last 10 years,” said Brad Hokamp, Cosentry’s CEO. “The combination of the two companies will give businesses in Milwaukee and the surrounding area access to world-class IT solutions and a great alternative to Chicago or larger city providers.”

    The company completed a refinancing of its existing credit facilities last year that provided it with up to $100 million of capital for expansion.

    Hokamp joined in 2013 with a vision to turn the company into a Midwest data center juggernaut. The industry veteran has in the past worked at Savvis, Telx and most recently Layered Tech, where he served as president.

    5:55p
    Prince William County Says It Too is N. Virginia’s Data Center Magnet

    Prince William County in Virginia now has 2 million square feet of data center space, county economic development officials announced.

    Prince William is located just south of Loudoun County, which has the highest concentration of data centers in Northern Virginia. Nearby Prince William has been growing steadily and wants to remind data center operators that it is a popular destination for their industry.

    Both counties are considered part of Northern Virginia, which is expected to surpass New York as the largest data center market in the U.S. In total, the region has more than 5.2 million square feet of data center space.

    Multi-tenant providers in Prince William include COPT, EvoSwitch (which operates out of the wholesale COPT data center) and Verizon Terremark. It is home to several enterprise data centers as well and is a popular data center destination for the federal government.

    The county is in close proximity (20 miles) to Washington, D.C. and has easy access to Interstate 95 and Interstate 66.

    More than 75 percent of Virginia workers live within a 30-minute commute to the center of Prince William, giving it access to a rich talent pool. It had a 57-percent increase in the number of business establishments over the past decade.

    Close proximity to Loudoun County means it benefits from the infrastructure and workforce there as well.

    Data centers are a targeted industry in the county, which makes them eligible for expedited permitting, fast-track site plan approval and other special incentives.

    Some of the benefits include a low tax burden on computers and equipment with an accelerated depreciation schedule. There are two power grids and a lot of fiber.

    “We’re delighted to have surpassed the 2 million square foot data center capacity threshold,” said Jeffrey Kaczmarek, executive director, Prince William County Department of Economic Development. “Data center projects yield significant capital investment and highly-skilled, high-paying jobs to Prince William County.”

    Prince William is one of the highest-income counties in the U.S. Its large population centers include the independent City of Manassas and Dumfries.

    6:14p
    Verne Signs Effects Firm With “Gravity” Credits for Iceland Data Center

    RVX, a visual effects rendering company, is using Verne Global in Iceland for its data-intensive work. RVX took part in effects work for the Oscar-winning film “Gravity” and will use the Verne data center campus for some of Hollywood’s highest-profile film projects, such as “Everest,” set to be released in 2015.

    The movie industry relies heavily on High Performance Computing for post-production needs. Verne puts a green spin on its pitch to HPC users, touting the use of 100 percent renewable energy (a combination of hydroelectric and geothermal) by its Iceland data center. Iceland’s grid is supplied almost entirely by renewable sources, which makes it an attractive location for energy-hungry workloads.

    “We are always being asked to push the envelope to create more visually stunning, hyper-realistic special effects in the projects we develop,” said Dadi Einarsson, co-founder and creative director for RVX. “The graphics requirement of each film is ten times more complex than the film before it. The last thing we want is for the skill and talent of our artists to be constrained by the technical infrastructure and computing power needed to create those graphics.”

    Iceland has a far lower carbon footprint than the popular film industry hubs of London, New York, Amsterdam and Paris, according to Verne.

    “The film and digital media industry, like other compute intensive sectors, relies heavily on the power and infrastructure a data center provides,” said Jeff Monroe, CEO of Verne. “By hosting their HPC render farm at Verne Global’s campus, RVX now has maximum flexibility in their business operations and can focus solely on creating stunning visual effects for some of Hollywood’s biggest movies.”

    Other recent HPC customers at Verne Global include BMW, managed hosting provider Datapipe, CCP Games, GreenQloud and Opin Kerfi.

    6:46p
    AWS, Rackspace Complete Cloud Reboots to Patch Xen

    The Xen-based cloud reboot post mortems are up. Last week, Amazon Web Services and Rackspace both had to reboot parts of their clouds to fix a known security vulnerability affecting certain versions of Xen, a popular open source hypervisor.

    There were no reports of compromised data, although some reboots didn’t go as smoothly as others. The maintenance affected less than 10 percent of AWS’ EC2 fleet and nearly a quarter of Rackspace’s 200,000-plus customers.

    The Xen project has a detailed security policy available here. It includes the protocols and processes for dealing with these kinds of issues.

    It’s important to note that these issues affect both open source and proprietary technology. This patch was not limited to AWS and Rackspace. They are just two big examples of cloud providers faced with a challenge that was quickly overcome.

    The issue can finally be revealed without potential security repercussions. The vulnerability could have allowed those with malicious intent to read snippets of data belonging to others or to crash the host server by issuing a certain series of memory commands.

    Rackspace worked with Xen partners following the security issue to develop a test patch and organize a reboot plan. The patch was ready the night of September 26. With the technical details scheduled to be publicly released today, the company had to work quickly.

    “Whenever we at Rackspace become aware of a security vulnerability, whether in our systems or (as in this case) in third-party software, we face a balancing act,” wrote Rackspace CEO Taylor Rhodes. “We want to be as transparent as possible with you, our customers, so you can join us in taking actions to secure your data. But we don’t want to advertise the vulnerability before it’s fixed — lest we, in effect, ring a dinner bell for the world’s cyber criminals.”

    “The zone-by-zone reboots were completed as planned and we worked very closely with our customers to ensure that the reboots went smoothly for them,” wrote AWS chief evangelist Jeff Barr.

    AWS advised customers to re-examine infrastructure for possible ways to make it even more fault tolerant, including the use of Chaos Monkey, pioneered by Netflix to induce various kinds of failures in a controlled environment.
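
    Netflix’s Chaos Monkey is its own open source tool, but the underlying idea is simple enough to sketch: randomly terminate instances from a fleet that has opted in, then confirm the service keeps running. The example below uses boto3, the AWS SDK for Python; the tag key, tag value and region are assumptions made for illustration.

```python
# Chaos-Monkey-style sketch: terminate one random instance from an opted-in,
# tagged fleet and watch whether the service keeps running. Illustrative only;
# Netflix's Chaos Monkey is a separate open source tool. The tag key/value and
# region below are assumptions made for this example.
import random

import boto3


def kill_one_random_instance(tag_key="chaos-optin", tag_value="true",
                             region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:" + tag_key, "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if not instances:
        print("No opted-in running instances found; nothing to do.")
        return None
    victim = random.choice(instances)
    ec2.terminate_instances(InstanceIds=[victim])
    print("Terminated %s; verify the service tolerates the loss." % victim)
    return victim


if __name__ == "__main__":
    kill_one_random_instance()
```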

    7:00p
    Pivotal Scales GemFire In-Memory Database Capacity With Latest Release

    Pivotal announced a new release of its GemFire distributed in-memory database, which is part of its Big Data Suite. Pivotal GemFire was formerly known as VMware vFabric GemFire.

    GemFire 8 scales across nodes and clusters and responds to thousands of concurrent read and write operations on many terabytes of data, the company said. With advances in the new release, each node can manage up to 50 percent more data than before. Much of the increase in capacity comes from an appropriately named compression codec called Snappy, which is optimized for speed.
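
    Snappy itself is an open source codec that can be exercised outside GemFire. The short example below, using the python-snappy bindings, shows the trade it makes: modest compression ratios in exchange for very fast compression and decompression. It is a standalone illustration, not GemFire code.

```python
# Standalone illustration of the Snappy codec (via the python-snappy bindings):
# modest compression ratios in exchange for very fast compression and
# decompression. This is not GemFire code.
import snappy

payload = b'{"order_id": 12345, "status": "shipped", "items": ["a", "b"]}' * 1000

compressed = snappy.compress(payload)
restored = snappy.uncompress(compressed)

assert restored == payload
print("original:   %d bytes" % len(payload))
print("compressed: %d bytes (%.1f%% of original)"
      % (len(compressed), 100.0 * len(compressed) / len(payload)))
```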

    A new RESTful API allows developers to enhance the performance and resilience of a wider range of high-scale applications, such as those developed in Ruby, Scala or Node.js. Other new features include node reconnection and data restoration, as well as the ability to update software serially on the nodes of a live cluster, eliminating the need for planned downtime during upgrades.
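
    A hedged sketch of what using such a REST API can look like from a scripting language follows, using the Python requests library. The base URL, port and region name are assumptions for illustration; the exact endpoint layout of an actual GemFire deployment should be taken from its REST API documentation.

```python
# Hedged sketch of reading and writing region entries over an HTTP/JSON REST
# API such as the one GemFire 8 introduces, using the requests library. The
# base URL, port and region name ("orders") are assumptions for illustration;
# consult the GemFire REST documentation for the exact endpoint layout.
import json

import requests

BASE = "http://gemfire-server.example.com:8080/gemfire-api/v1"  # assumed
REGION = "orders"                                                # assumed


def put_entry(key, value):
    # Create or replace a single entry in the region.
    resp = requests.put("%s/%s/%s" % (BASE, REGION, key),
                        data=json.dumps(value),
                        headers={"Content-Type": "application/json"})
    resp.raise_for_status()


def get_entry(key):
    # Read a single entry back by key.
    resp = requests.get("%s/%s/%s" % (BASE, REGION, key))
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    put_entry("12345", {"status": "shipped", "total": 42.5})
    print(get_entry("12345"))
```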

    GemFire 8 uses distributed commodity hardware in a ‘shared-nothing’ architecture. Although configurable, it emphasizes partition tolerance and consistency over availability of data, Pivotal’s enterprise software marketing executive Gregory Chase pointed out. Product features that help balance the need for availability include node failover to replicas in the event of system failure or network partitioning, and the ability to run multiple clusters connected via a WAN, which provides multi-site disaster recovery capability.

    7:30p
    New Open Platform for NFV Project Aims to Standardize How Entire Networks are Virtualized

    This article originally appeared at The WHIR

    A new project known as Open Platform for NFV (or OPNFV) aimed at standardizing a way of virtualizing entire networks has just launched with the participation of nearly 40 founding member companies that represent all areas of IT from cloud computing to the telecom industry.

    NFV, short for Network Function Virtualization, is essentially about virtualizing every part of the network, not just the software controller, which is the goal of Software-Defined Networking. NFV represents a growing field that envisions software being able to further deal with the complexity of massive networks and their underlying infrastructure.

    According to the announcement this week from The Linux Foundation, OPNFV will be a carrier-grade, integrated, open-source reference platform that will essentially standardize how entire networks are virtualized.

    Linux Foundation executive director Jim Zemlin told Gigaom that NFV was originally seen as a way to help carriers better manage their complex networks, but it could also help enterprises with complex infrastructure.

    NFV was a major topic of conversation at OpenStack Silicon Valley. The OpenDaylight project has been an NFV component for use in the OpenStack cloud orchestration platform. OpenStack Foundation executive director Jonathan Bryce told the WHIR (in a video interview) that NFV is becoming a rallying point for “telco service providers and telco operators around the world who are looking to modernize their infrastructure and the way that they provide voice services, data services, [and] messaging services.”

    OPNFV will incorporate different open-source networking technologies such as OpenDaylight and OpenStack, and other key technologies that would be required to fit into a standardized OPNFV framework. By standardizing NFV protocols, OPNFV hopes to encourage the introduction of new products and services that are NFV compatible.

    OpenDaylight Project executive director Neela Jacques said in a statement, “Open source is quickly becoming a de facto standard for cloud platforms (Openstack), SDN (OpenDaylight) and virtual switches (Open vSwitch) because it’s a neutral playing field that everyone can build on and integrate with. I see strong interest from carriers to leverage these open source projects for their NFV deployments and we look forward to collaborating with OPNFV as they work to stitch these technologies together.”

    OPNFV founding members include Platinum members: AT&T, Brocade, China Mobile, Cisco, Dell, Ericsson, HP, Huawei, IBM, Intel, Juniper Networks, NEC, Nokia Networks, NTT DOCOMO, Red Hat, Telecom Italia and Vodafone.

    Silver founding members include 6WIND, Alcatel-Lucent, ARM, Broadcom, CableLabs, Cavium, CenturyLink, Ciena, Citrix, ClearPath Networks, ConteXtream, Coriant, Cyan, Dorado Software, Ixia, Metaswitch Networks, Mirantis, Orange, Sandvine, Sprint and Wind River.

    Zemlin told Gigaom that OPNFV is aiming to release its platform by next year.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/new-open-platform-nfv-project-aims-standardize-entire-networks-virtualized

    8:00p
    Concerns of Identity Theft, Personal Cybersecurity Keep Americans Up at Night

    This article originally appeared at The WHIR

    Americans are more concerned with identity theft and personal cybersecurity than any other security threats, according to a survey released by the University of Phoenix on Monday. Respondents were also asked what threats concerned them more now than five years ago, and again more people cited identity theft and personal cybersecurity than any other.

    Harris Poll surveyed over 2,000 people online in August for the University of Phoenix College of Criminal Justice and Security. It is worth noting that comparing on and offline threats as perceived by people online may yield different results than surveying people at the local community center.

    Seventy percent of respondents said identity theft is among the areas they are most concerned with, while 61 percent said personal cybersecurity. Among other answers, 55 percent said terrorism, 47 percent said neighborhood crime, 44 percent said natural disasters, and 31 percent said organizational security, which includes corporate cybersecurity but not workplace violence.

    Personal cybersecurity and identity theft, at 61 and 60 percent respectively, topped national security (50 percent) as the threats people are more concerned with than previously. Part of what could be contributing to this concern is the fallout of the massive data breaches at retail stores including Home Depot and Target.

    Data breaches compromising people’s privacy have become regular items in mainstream news. Even governments are sources of personal cybersecurity mistrust for regular netizens, in light of the startling data collection practices of the NSA revealed in July.

    A June report by the RAND corporation found that a shortage of cybersecurity professionals is increasing risk on a national level.

    Service providers have recognized and begun responding to people’s concerns. Earlier this month Google, Dropbox, and the Open Technology Fund announced Simply Secure, which aims to improve consumer adoption rates of security tools, and security providers continue to be targeted in mergers and acquisitions.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/identity-theft-tops-list-americans-security-concerns-university-phoenix-report

    10:51p
    Investors in $12.5M Pica8 Round to Build SDN Startup’s Asia Presence

    Software defined networking startup Pica8 has raised $12.5 million in a Series B funding round. A sizable round for a startup, it brings the company’s total funding to date to more than $20 million.

    The round does more than beef up the Palo Alto, California-based company’s war chest. It brings on board two investors that will help grow its presence in Asia.

    One of them, Cross Head, will focus on growing Pica8’s revenue in Japan, where it does investment, training and system integration, according to Pica8.

    The other new investor is Taiwan-based Pacific Venture Partners, which will work to ensure visibility of Pica8 and its technology among the country’s massive hardware manufacturers.

    California venture capital firm VantagePoint Capital Partners, an early Pica8 investor that led the latest round, lists Tesla Motors and BrightSource Energy among its portfolio companies. The latter was behind Ivanpah, a project to build a huge solar thermal power plant in California’s Mojave Desert that ultimately fell short of expectations.

    Pica8 has developed a network operating system that works on commodity network switches – the opposite of expensive Cisco or Juniper gear that runs proprietary software. Called PicOS, the software is based on Open vSwitch, an open source virtual switch.

    PicOS is designed for new-breed SDN-style automation but supports traditional switching and routing protocols.
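
    Because PicOS builds on Open vSwitch, the basic plumbing it automates can be illustrated with standard ovs-vsctl commands: create a bridge, attach a port and hand forwarding decisions to an external OpenFlow controller. The Python sketch below simply wraps those commands; the bridge, port and controller address are assumptions for the example, and this is not PicOS’s own CLI.

```python
# Sketch of the Open vSwitch plumbing a PicOS-style switch builds on: create a
# bridge, attach a physical port, and delegate forwarding decisions to an
# external OpenFlow controller. These are standard ovs-vsctl commands, not
# PicOS's own CLI; bridge, port and controller address are assumptions.
import subprocess


def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.run(cmd, check=True)


def wire_up_bridge(bridge="br0", port="eth1",
                   controller="tcp:192.0.2.10:6633"):
    run(["ovs-vsctl", "add-br", bridge])          # software bridge on the switch
    run(["ovs-vsctl", "add-port", bridge, port])  # attach a physical interface
    run(["ovs-vsctl", "set-controller", bridge, controller])  # point at SDN controller


if __name__ == "__main__":
    wire_up_bridge()
```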

    The company takes PicOS to market as stand-alone software and as part of a fully integrated data center solution, including software and hardware.

    VantagePoint Chief Operating Officer Patricia Splinter said the venture capital firm believed SDN was on its way to becoming the standard for enterprise operations. “Pica8 is on its way to becoming a leading supplier of SDN technology because it has the right team, unique technology and deep market understanding to make the most of this significant transition,” she said.

    11:19p
    Closing its IBM Deal, Lenovo Becomes China’s Top Server Vendor

    Lenovo is now the biggest x86 server supplier in China, according to IDC. Its acquisition of IBM’s x86 server business, closed this week, gives Lenovo the biggest x86 server market share and strengthens the company’s overall server portfolio.

    The acquisition also puts Lenovo in the top three in terms of x86 server market share worldwide. IDC pegged its global share at 11.7 percent in the first half of 2014.

    There is little overlap between the System x product line and the rest of IBM’s portfolio, according to IDC. The merger also combines Lenovo’s regional reach with IBM’s product R&D.

    This is not the first time Lenovo has acquired an IBM business line to later dominate a market. Its 2005 acquisition of Big Blue’s PC business made Lenovo one of the industry leaders in that space.

    “And the successful experience will help Lenovo realize smooth integration of the newly-acquired server business,” said IDC. “This will also help Lenovo expand its market overseas by replicating its success in the PC field.”

    Lenovo shipped 99,101 x86 server units pre-acquisition in the first half of 2014. Combined with IBM’s x86 shipments, its market share in China is 23.9 percent.

    Former leader Dell is still within reach of the top spot at 20.36 percent. IDC expects the competition for top spot will heat up.

    Lenovo has a slight advantage in this race as it’s a domestic provider. IDC believes that its ability to say “made in China” will help tilt the scales in Lenovo’s favor.

    The acquisition is a recent one, so it will take a few quarters to stabilize and integrate the business internally, which will add difficulty in the near term. IDC expects Lenovo to make up any lost ground during the integration period.
