Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, January 28th, 2015

    1:00p
    Newly Identified Linux Vulnerability Gives Full Access to Servers

    Software security researchers recently identified a bug that provides hackers with an open door to the bulk of the world’s servers running Linux.

    The vulnerability in the Linux GNU C Library (shorthand: glibc) “allows attackers to remotely take control of an entire system without having any prior knowledge of system credentials,” according to a statement released Tuesday morning by Qualys, a Redwood Shores, California-based security firm.

    “Most Linux servers will have the vulnerable glibc version in which the issue was identified,” Qualys Director of Engineering Amol Sarwate said in an email.

    Getting complete access to a vulnerable machine can be as simple as sending an email to that machine.

    Linux plays a huge role in Internet infrastructure. More than one-third of websites run on servers using various Linux distributions, according to W3Techs, which specializes in collecting data about technologies used for building and running websites. Another third runs on Windows, and the rest run on non-Linux variants of Unix.

    The newly discovered Linux vulnerability, known as GHOST, has been there for more than a decade. The first vulnerable version of the Linux GNU C Library, glibc-2.2, was released in November 2000, Sarwate said.

    glibc is a standard C library for basic Linux facilities. According to the GNU project website, it is used in GNU systems and most systems with the Linux kernel.

    Qualys identified GHOST earlier, but did not disclose it until Tuesday because it needed to give Linux distribution vendors enough time to update their respective software packages with a patch.

    “It was discovered by Qualys, and they used responsible disclosure to alert the security teams of various Linux distributions prior to making a public announcement,” Dustin Kirkland, Ubuntu cloud solutions product manager at Canonical, said Tuesday. “Canonical had sufficient time to prepare and test updated packages for Ubuntu prior to this morning.”

    Ubuntu is one of the most popular Linux distros, second to Debian in market share. GHOST affected Ubuntu 10.04 LTS and 12.04 LTS releases, Kirkland said. Ubuntu 14.04 and newer releases were not affected.

    While, according to Sarwate, most Linux servers were affected, “the actual threat depends on what services are running on a given system,” Josh Bressers, lead of the product security team at Red Hat, another major Linux distro provider, said. Still, the company recommends that all users upgrade vulnerable glibc versions.

    Several factors, including a fix that was released in May 2013, mitigate the bug’s impact. The fix, however, was not classified as a security advisory, which means most stable and long-term-support distributions were left exposed, Qualys reps explained.

    They include:

    • Debian 7 (wheezy)
    • Red Hat Enterprise Linux 6 and 7
    • CentOS 6 and 7
    • Ubuntu 12.04

    Now that the Linux vulnerability has been recognized, the best course of action for all users is to consult with their Linux vendors to identify current glibc versions and apply updates if needed and available, Bressers said.
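
    For administrators who want to test a specific host rather than rely on version strings, the Qualys advisory circulated a small canary-based C test. The sketch below follows that widely reported approach but should be treated as illustrative only, not as an official tool; it has to be compiled and run on the GNU/Linux machine being checked.

        /* Minimal sketch of the canary-style GHOST check (CVE-2015-0235),
         * modeled on the test circulated with the Qualys advisory. It feeds
         * gethostbyname_r() an all-digit "hostname" sized so that a vulnerable
         * glibc writes past the supplied buffer and into the adjacent canary.
         * Build: cc ghost_check.c -o ghost_check */
        #define _GNU_SOURCE
        #include <netdb.h>
        #include <errno.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define CANARY "in_the_coal_mine"

        static struct {
            char buffer[1024];            /* deliberately undersized result buffer */
            char canary[sizeof(CANARY)];  /* clobbered only if the overflow occurs */
        } temp = { "buffer", CANARY };

        int main(void)
        {
            struct hostent resbuf, *result;
            int herrno, retval;
            size_t len;
            char name[sizeof(temp.buffer)];

            /* Hostname length chosen so the parsed reply needs slightly more
               room than temp.buffer offers, per the advisory's calculation. */
            len = sizeof(temp.buffer) - 16 * sizeof(unsigned char)
                  - 2 * sizeof(char *) - 1;
            memset(name, '0', len);
            name[len] = '\0';

            retval = gethostbyname_r(name, &resbuf, temp.buffer,
                                     sizeof(temp.buffer), &result, &herrno);

            if (strcmp(temp.canary, CANARY) != 0) {
                puts("vulnerable");       /* canary overwritten by glibc */
                return EXIT_SUCCESS;
            }
            if (retval == ERANGE) {
                puts("not vulnerable");   /* patched glibc reports the buffer as too small */
                return EXIT_SUCCESS;
            }
            puts("inconclusive");
            return EXIT_FAILURE;
        }

    A patched glibc simply rejects the oversized request with ERANGE; only a vulnerable version silently overwrites memory beyond the buffer.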

    1:00p
    Docker Reorgs Open Source Project’s Structure

    Docker has changed the operational structure of the popular open source application container project, appointing two people to take over some of the responsibilities of overseeing the project from Solomon Hykes, who started both Docker the open source project and Docker Inc., the company that sells products and services around Docker container technology.

    The point is to enable the already rapidly growing project to continue to scale. Docker has been one of the fastest-growing open source projects in history.

    Instead of overseeing architecture, operations, and maintenance of the project, Hykes will now be chief architect, while remaining CTO of Docker the company. Michael Crosby, also a Docker employee, will be chief maintainer.

    The company made an outside hire to fill the role of chief operator. Steve Francia, who has played an oversight role at MongoDB, a major open source NoSQL database project and company, will now do similar things at Docker.

    Francia said he’s been impressed with Docker’s rate of growth: “It scaled at a pace I’ve never seen any open source project scale in the history of open source.”

    The open source community around MongoDB is larger than the Docker community, but it took MongoDB a lot longer to get to where Docker is today, he said.

    MongoDB and Docker have similar business models. Most of their products are open source, with the exception of a few proprietary technologies; Docker Hub, the online repository for Docker container images, and its derivative, Docker Hub Enterprise, are examples.

    Being accountable to a commercial entity while overseeing an open source project that entity’s business depends on could put a person in situations of having to choose between what is in the best interest of the company and what is in the best interest of the community.

    Francia said such situations are a possibility but what’s best for the open source project is usually best for the company. “I see where there could be a potential for that,” he said. “I believe that the most successful open source companies recognize that their success depends on the success of the project.”

    He does not remember any instances in his experience at MongoDB of project and company priorities being out of alignment, and he believes the same will be true in Docker’s case.

    “I wouldn’t call it a delicate balance. I would call it a complete consistent prioritization across the company and across the community.”

    4:30p
    Seven Questions to Ask Service Providers Before Signing Your Next Contract

    Bhavesh Patel is Director of Marketing and Customer Support at ASCO Power Technologies, Florham Park, NJ, a business of Emerson Network Power.

    At a data center, the health, maintenance, and monitoring of the emergency backup power system are of paramount importance. You want the backup power system to kick in and run flawlessly as soon as it is called for. There is a direct relationship between thorough, regularly scheduled preventative maintenance on system components and the reliability of that system to operate and deliver peak performance when called upon. Selecting the best service provider for that maintenance is an important process.

    Generally, it is preferable to directly hire a specialty service provider who can address your needs and priorities rather than rely on the property management firm, which wears many hats. Adding a layer of responsibility and dealing with an intermediary can result in delays and misrepresentation or misunderstanding of issues.

    Property management firms are a good choice for maintaining grounds, A/C and heating systems, elevators, pest control, and vending machines but less suitable for maintaining a facility’s critical power, including stand-by generators, transfer switches, and UPS/power distribution. In this age of specialization and evolving technology, you want to make sure your service provider is the best choice for all your needs.

    Here are seven questions to ask service providers before signing the next contract. The responses will help you home in on the best-qualified (not necessarily least costly) service provider to forestall problems and, should any develop, fix them as quickly and efficiently as possible. (And that’s the ROI you are really looking for!)

    Before You Sign on the Dotted Line

    What is the size of your organization? How many crews do you have, how many technicians are trained and authorized to work on my critical equipment, and do you respond 24/7/365? You don’t want the person who answers the phone to also be the person who goes out in the van to service your equipment and then needs to rush off to respond to the next call. Understaffed operations can leave subsequent callers reaching only voice mail.

    How long have you been in business? Does the company have a track record with data centers or other critical facilities? Will it provide references to current customers, especially those running equipment from the same manufacturers as yours? Check out the website and links for company background and associations, as well as business review sites.

    How do your crews respond to service calls? The answer you’re looking for is “quickly, in a well-stocked truck,” with quick and easy access to any needed parts not already on board. If a needed part is not in their inventory and not available locally, how long will it take to get it? The logistics of delivery could greatly hinder the repair. And if the service provider is relying on the manufacturer to have the part available, that might prove not to be the case. If the provider is the OEM, the tech likely has access to the warehouse off-hours.

    Are your technicians trained specifically to handle repairs on installed equipment from my manufacturer, and will only those technicians be sent in response to my call? System components may relate to one another differently among manufacturers. You want technicians trained on your equipment to fix your equipment. Quality service providers often require that technicians undergo mandatory initial factory training on a specific manufacturer’s equipment and periodic “brush-up” training every year to keep up with new equipment and protocols.

    Alternately, or in addition, a service provider could require its technicians to earn certification in an industry-neutral program that entails both classroom learning and hands-on work experience and which may require renewal annually. (The service division of the OEM may have the most informed technicians, with the quickest knowledge of service bulletins and upgrades and the most vested interest in preventing failures of system components.)

    Do your field forces use tablets? A “yes” indicates a provider that keeps up with technology. Instant access to data such as customer history, notes on past service calls, maintenance schedules for critical equipment, technical data from a service library, and the ability to query technical experts offsite can speed up repair time and enhance accountability on service calls.

    At large data centers, a service provider might even have access to an electronic map showing locations of equipment along with history of service of that equipment. Furthermore, inputting data such as arrival time, length of service call, and/or invoicing on the spot is a lot more accurate than relying on hand-written (often illegible) service tickets and notes.

    Will you train my personnel at my facility on the basic operation of critical components of the backup power system so they can recognize developing problems and quickly and accurately inform you about them? Ideally, the service provider should educate a few key in-house “first responders” as possible points of contact who can understand issues that may require a service call.

    How do you structure fees for comprehensive annual preventative maintenance and emergency repairs around the clock? How are repairs billed – per visit, per length of visit, immediacy of response, day of week and time of service or other parameters? Are optional upgrades offered?

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:32p
    IBM Confirms Departure of SoftLayer CEO Lance Crosby


    This article originally appeared at The WHIR

    IBM confirmed on Wednesday that SoftLayer CEO Lance Crosby has resigned. Crosby is expected to leave the company this week, several reports said.

    IBM acquired SoftLayer in 2013 for $2 billion to expand its cloud services portfolio, and SoftLayer CEO and founder Crosby stayed on for about 19 months after the acquisition, according to a report by Re/Code.

    “We wish Lance Crosby the best as he takes a well-deserved break before pursuing new endeavors,” an IBM spokesperson said in a statement. “Lance has left his mark on IBM. SoftLayer has become an important part of IBM’s cloud portfolio, and has played a big role in our success. IBM reported $7 billion in cloud revenues for 2014, and we will continue to build on that momentum with our clients.”

    Crosby ran SoftLayer for eight years prior to its acquisition by IBM, launching the Dallas-based company in 2005.

    Crosby said in a statement: “I am very proud of the business we built and the team who continue to evolve SoftLayer at IBM. Now that the business is successfully integrated into IBM, I am ready to take some time off before I pursue my next challenge.”

    In January, IBM appointed IBM veteran Robert LeBlanc to run the new dedicated cloud business unit, which oversees SoftLayer. He reports to IBM CEO Virginia Rometty.

    Since the acquisition of SoftLayer, IBM has put a lot of emphasis on growing its cloud business. Cloud services and software revenue accounted for about eight percent of IBM’s sales in 2014, most of which came from SoftLayer. In an interview with the Wall Street Journal, LeBlanc said that the cloud group has about 1,000 job openings, despite cutbacks in other areas expected to take hold this week.

    Last year, IBM announced a commitment of $1.2 billion to grow its global cloud data center footprint. The company has rapidly announced data centers in London, Melbourne, Hong Kong and Toronto, among other locations. With the support of its partner Equinix, it plans to have 49 cloud data centers around the world.

    Crosby has been a fixture of the hosting industry for many years, having seen SoftLayer through multiple acquisitions, financing rounds, data center launches, and expansions. He also guided the company through a significant transition: from meeting customer needs with traditional hosting solutions to delivering cloud services.

    Crosby has been at the helm of the company since 2005, so it will be interesting to see what he does beyond SoftLayer, and the WHIR will certainly be watching.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/ibm-confirms-departure-softlayer-ceo-lance-crosby

    5:38p
    Emerson Deploys 1.1MW Modular Data Center for T-Systems

    T-Systems, the IT services subsidiary of Deutsche Telekom, has deployed a 1.1 megawatt modular data center in Barcelona to support its cloud services in Europe. The data center, designed and built by Emerson Network Power, consists of 38 modules.

    The main advantage of modular data centers is the ability to add data center capacity quickly. Modules are usually pre-fabricated at a factory and shipped to the user’s site, where they are assembled and plugged into network, power, and chilled water.

    It took nine months to design and build the data center, which, according to Emerson, would have taken two to three years to build using the traditional construction approach.


    This is the second announcement of a modular data center deployment this month, albeit a much larger one. The first was the deployment of a CommScope data center by the University of Montana.

    Other recent deals include Chinese search giant Baidu’s installation of Schneider Electric modules, and Australian data center services provider Red Cloud’s planned expansion using T4 data center modules by Cannon Technologies.

    The T-Systems project in Barcelona is unusually large for a modular data center deployment – the method is typically used to deploy smaller amounts of capacity – indicating that T-Systems is seeing a lot of opportunity for cloud services in the region.

    The data center supports about 300 racks and has received Tier III certification from the Uptime Institute. The Tier certification is an official third-party confirmation of reliability of the facility’s infrastructure.

    Emerson’s Liebert HPC chillers at T-Systems’ modular data center in Barcelona, designed and built by Emerson. (Photo: Emerson)

    T-Systems has been expanding its data center capacity in Europe rapidly. In July 2014, the company announced the launch of a large data center in Biere, Germany.

    Raul Saura, head of dynamic platforms services at T-Systems Iberia, said the Barcelona facility played a key role in an ongoing consolidation and transformation program. “It was absolutely strategic to quickly and successfully deploy this data center to fulfil T-Systems’ vision of the future,” he said in a statement.

    7:34p
    etcd Plugin Coming to Mesosphere’s Data Center OS

    Mesosphere, a startup built around Apache Mesos, open source software that makes big server clusters behave like a single computer, is working to build support for etcd, a system for maintaining consistent configuration across all nodes in a cluster, into its data center OS.

    etcd was created by CoreOS, a San Francisco startup that makes software for companies that want to operate their data centers the way web-scale operators like Google and Facebook do. CoreOS announced the first stable release of etcd, also open source, today.

    “Shared configuration and shared state are two very tricky domains for distributed systems developers as services no longer run on one machine but are coordinated across an entire data center,” Benjamin Hindman, chief architect at Mesosphere and chair of Apache Mesos, said in a statement. “Apache Mesos and Mesosphere’s Data Center Operating System will soon have a standard plugin to support etcd.”

    Mesosphere launched an early-access preview of its data center OS in December and plans a public launch sometime in the first half of 2015.

    etcd is a distributed key-value store. One of the things it helps ensure is that a failure in a single node does not bring down the entire cluster.
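
    For a sense of how applications talk to it, here is a minimal sketch (not taken from the article) that writes and reads one key through etcd’s v2 HTTP API using libcurl. The local endpoint, the default client port 2379, and the key name /message are illustrative assumptions; adjust them for a real deployment.

        /* Hypothetical sketch: store and fetch one key via etcd's v2 HTTP API.
         * Assumes a local etcd listening on the default client port 2379.
         * Build with: cc etcd_demo.c -o etcd_demo -lcurl */
        #include <curl/curl.h>
        #include <stdio.h>

        /* Issue one HTTP request; the JSON response body is printed to stdout
           by libcurl's default write handler. */
        static void request(CURL *curl, const char *method,
                            const char *url, const char *body)
        {
            CURLcode rc;

            curl_easy_reset(curl);                       /* start from a clean handle */
            curl_easy_setopt(curl, CURLOPT_URL, url);
            curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, method);
            if (body)
                curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
            rc = curl_easy_perform(curl);
            if (rc != CURLE_OK)
                fprintf(stderr, "%s %s failed: %s\n", method, url,
                        curl_easy_strerror(rc));
            printf("\n");
        }

        int main(void)
        {
            CURL *curl;

            curl_global_init(CURL_GLOBAL_DEFAULT);
            curl = curl_easy_init();
            if (curl == NULL)
                return 1;

            /* Store a value under the key /message, then read it back. */
            request(curl, "PUT", "http://127.0.0.1:2379/v2/keys/message", "value=hello");
            request(curl, "GET", "http://127.0.0.1:2379/v2/keys/message", NULL);

            curl_easy_cleanup(curl);
            curl_global_cleanup();
            return 0;
        }

    In a real cluster a client would typically be configured with the URLs of several etcd members, so that the loss of a single node does not make the shared configuration data unreachable.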

    Google’s open source application-container management system Kubernetes and Pivotal’s open source Platform-as-a-Service technology Cloud Foundry both use etcd. It is an important component of the Linux variant that is the chief product of CoreOS.

    etcd was inspired by Chubby, a software tool Google designed to manage consistency across server clusters in its own data centers. The big difference between the two is the “consensus protocol” each of them uses: Chubby relies on a protocol called Paxos, while etcd uses Raft. etcd itself is written in Go.

    Mesos is used by a swath of high-profile Internet companies, including Twitter, Airbnb, and Hubspot. Hindman, one of the original creators of Mesos, came to Mesosphere from Twitter, where he and his colleagues applied the software to make the social network’s infrastructure more resilient.

    Mesosphere raised a $10.5 million Series A funding round in 2014.

    Hindman said Mesosphere’s data center OS users have been asking for etcd support, so the company is responding to demand.

    CoreOS raised $8 million of venture capital in a Series A round last year.

    Toward the end of 2014 the company created some controversy by introducing an application container standard called App Container as an alternative to Docker, the highly popular open source container technology that CoreOS itself had been a big supporter of.

    The company said Docker’s security model was flawed. It also accused Docker Inc., the company, of steering away from its original mission by adding container management features to what CoreOS apparently expected to remain a simple, basic container technology.

    8:02p
    RF Code Sells 3M RFID Tags for Data Center Management

    Asset management solutions specialist RF Code announced it hit the 3 million mark for RFID tags tracking data center assets. RF Code does a lot of business in the data center industry, due to the importance of tracking servers and other IT equipment in data center management. That business added 1 million tags since hitting the 2 million mark in 2013, and 750,000 tags in the last year.

    Radio Frequency Identification (RFID) tags are used in data centers to keep track of IT gear. Some of RF Code’s big newer customers include CenturyLink, Vodafone, and various other cloud and managed services providers. The tags address regulatory, financial, and resource demands faced by critical IT facilities, which is why the company does well in the financial, healthcare, and oil and gas verticals.

    RF Code’s RFID tags are attached to servers and racks for asset tracking and can also monitor environmental conditions. The company partners and integrates with many data center infrastructure management (DCIM) vendors, as well as electrical and mechanical infrastructure vendors. The tags are an important element in data center management, providing accurate tracking of what you have and where it is.

    The messaging has shifted for the company. Current CEO Ed Healy took the helm in 2014, positioning the company as an Internet of Things play for data centers. The company’s goal has been to provide more value through software. It offers a single platform, incorporating power, environmental, security, and asset management data.

    “2015 will see the company take major steps forward in providing executives with the metrics required to turn the data center from an insatiable cost center to a profit-making service,” said Healy in a release.

    RF Code may benefit from DCIM adoption and the growing Internet of Things, with many calling 2015 the year of inflection for IoT. There has been an unprecedented number of mergers and acquisitions in the IoT space, according to 451 Research. More connected devices mean a greater need to track them all.

    RF Code’s Workplace IoT offering joins the data center, office, field operations and supply chains together.

    The bottom line is whether or not it saves a company money. Healy believes RF Code provides good TCO, and three million tags sold suggests the same.

    9:10p
    Report: Bitcoin Mining Firm CoinTerra Files for Bankruptcy

    CoinTerra, an Austin-based vendor of bitcoin mining servers and provider of hosting services for mining equipment, has filed for bankruptcy, the Austin Business Journal reported.

    The company is involved in at least two ongoing lawsuits. One of them was brought by data center provider C7 Data Centers, which is suing to recover millions of dollars in allegedly unpaid colocation fees. CoinTerra has filed a countersuit against C7, claiming it hasn’t been able to make money and pay its customers and lenders because C7 shut down its servers, preventing it from mining the cryptocurrency.

    A rapid drop in bitcoin value, from about $1,100 at one point in 2013 to about $240 today, is creating a stir in the bitcoin ecosystem. Some companies announced they would stop their bitcoin mining servers temporarily, until the market recovers, while others, such as CoinTerra, are taking what is likely to be a fatal blow.

    Because CoinTerra, and some of its peers, have taken some data center space from commercial providers, those providers are feeling the impact as well. CenturyLink is another major data center provider for CoinTerra, but the company has been tight-lipped about its dealings with the customer.

    Because of C7 shutting down its servers, CoinTerra defaulted on its debt, the company’s CEO Ravi Iyengar told Data Center Knowledge earlier this month. Iyengar did not respond to a request to comment on the bankruptcy filing.

    According to the Business Journal report, the company has between 200 and 999 creditors and owes between $10 million and $50 million. Its assets are also valued in the $10 million to $50 million range. CoinTerra has reportedly noted that it will not be able to pay unsecured creditors.

    The bankruptcy petition was filed by Timothy Davidson of Andrews Kurth LLP, a bankruptcy attorney.

