Data Center Knowledge | News and analysis for the data center industry
 

Monday, January 6th, 2014

    12:30p
    Data Center Jobs: Compass Datacenters

    At the Data Center Jobs Board, we have a new job listing from Compass Datacenters, which is seeking a Director of Sales in Dallas, Texas.

    The Director of Sales is responsible for sourcing new opportunities; leveraging relationships, networking and Compass marketing to uncover new leads and opportunities; managing sales opportunities through the sales cycle; qualifying opportunities to ensure a fit; leveraging internal resources wisely to advance opportunities through the sales cycle; working with the real estate brokerage community to ensure Compass is top of mind; and creating proposals that demonstrate Compass’s value to prospects and clients. To view full details and apply, see the job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    12:55p
    Lengthy Outages for Hacker News, FastHosts

    It was a rough weekend for uptime, with significant outages at UK hosting provider FastHosts and the startup news portal Hacker News.

    Customers of FastHosts were offline for up to seven hours Saturday after a utility power outage triggered connectivity problems. “Our datacentres switched over to UPS (Uninterruptable Power Supply) power whilst our backup generators started up for prolonged backup power,” the company said on its status page. “Mains power to the site returned after a few moments, however although power was quickly fully restored to both datacentres we experienced a network issue that occurred as a result of this interruption to service.”

    Meanwhile, visitors to Hacker News have been greeted by timeouts and an error message (“Yeah, that didn’t work. Try again, perhaps later?”). The outage began Sunday and the site remained down as of early Monday. The error message contains a reference to CloudFlare’s network, but the CDN says the problem is at Hacker News itself. “They’re having problems with their server,” tweeted CloudFlare CEO Matthew Prince. “Not a problem on our network. We’ve been in touch to see if we can help.” Hacker News is a social news site operated by startup incubator Y Combinator.

    1:42p
    Hypervisor 201: The 2014 Market Update

    The hypervisor market has undergone some changes over the last year, including a shift in focus to the cloud.

    Just over a year ago we took a close look at the hypervisor market. We examined the top three players, reviewed their features and offered some ideas regarding direction and technological innovation.

    A lot can change in a year.

    Over the course of 2013, we saw a huge increase in data center adoption, hybrid cloud models, and an even greater push toward hypervisor and infrastructure agnosticism. That’s where we find our biggest changes. The hypervisor market is no longer defined by the paravirtualization drivers it has to optimize, nor does it care as much about the hardware it sits on. Of course, those elements are still crucial to the virtualization experience.

    But now the big connection point revolves around the cloud. How well can you integrate with a hypervisor sitting thousands of miles away? How well can your platform extend an existing data center into a hybrid cloud model? Can your hypervisor integrate with critical APIs to increase efficiency and optimize the end-user computing experience?

    In our previous discussion, we took a look at the definitions behind what comprises a hypervisor. Let’s revisit those key definitions and add some more:

    • Type I Hypervisor. This type of hypervisor is deployed as a bare-metal installation: the hypervisor is the first thing installed on the server, and it acts as the operating system. The benefit is that the hypervisor communicates directly with the underlying physical server hardware. Those resources are then paravirtualized and delivered to the running VMs. This is the preferred method for many production systems.
    • Type II Hypervisor. This model is also known as a hosted hypervisor. The software is not installed onto bare metal, but instead is loaded on top of an already running operating system. For example, a server running Windows Server 2008 R2 can have a hosted hypervisor such as VMware Workstation installed on top of that OS. Although there is an extra hop for resources to take as they pass through to the VM, the latency is minimal, and with today’s modern software enhancements the hypervisor can still perform optimally.
    • Guest Machine. A guest machine, also known as a virtual machine (VM), is the workload installed on top of the hypervisor. This can be a virtual appliance, an operating system or another type of virtualization-ready workload. The guest machine will, for all intents and purposes, believe that it is its own unit with its own dedicated resources. So, instead of using a physical server for just one purpose, virtualization allows multiple VMs to run on top of that physical host, with resources intelligently shared among them.
    • Host Machine. This is the physical host. Within virtualization there may be several components – SAN, LAN, cabling, and so on – but here we are focusing on the resources located on the physical server, chiefly RAM and CPU. These are divided between VMs and distributed as the administrator sees fit. So a machine needing more RAM (a domain controller, for example) would receive a larger allocation, while a less important VM (a licensing server, for example) would get fewer resources. With today’s hypervisor technologies, many of these resources can be allocated dynamically.
    • Paravirtualization Tools. After the guest VM is installed on top of the hypervisor, there usually is a set of tools which are installed into the guest VM. These tools provide a set of operations and drivers for the guest VM to run more optimally. For example, although natively installed drivers for a NIC will work, paravirtualized NIC drivers will communicate with the underlying physical layer much more efficiently. Furthermore, advanced networking configurations become a reality when paravirtualized NIC drivers are deployed.
    • APIs. Application programming interfaces (APIs) dictate how some infrastructure components interact with other resources within a data center. Until recently, these software-based components were confined to narrow corners of IT. Now there is quite a bit of interaction between APIs and the hypervisor specifically. There are new ways to tie in resources or integrate directly with a hypervisor to reduce the number of hops resources have to take. Client-less security, application interdependence, and integration with key hardware components are all things that APIs can help with.
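
    To make the host and guest definitions above concrete, here is a minimal sketch using the libvirt Python bindings. This is an illustrative assumption on our part – it presumes a KVM/QEMU host with libvirt-python installed rather than any particular vendor’s stack – and simply reads the host machine’s physical resources and the allocation each guest VM has received:

        # Minimal sketch: list the host machine's resources and its guest VMs.
        # Assumes a KVM/QEMU host with the libvirt-python bindings installed;
        # the connection URI may differ on other systems.
        import libvirt

        conn = libvirt.open("qemu:///system")  # connect to the local hypervisor

        # Host machine: the physical resources the hypervisor divides among guests
        model, mem_mb, cpus, mhz, *_ = conn.getInfo()
        print(f"Host: {cpus} CPUs @ {mhz} MHz, {mem_mb} MB RAM ({model})")

        # Guest machines: each VM sees only the resources allocated to it
        for dom in conn.listAllDomains():
            state, max_mem_kb, mem_kb, vcpus, _cpu_time = dom.info()
            print(f"Guest '{dom.name()}': {vcpus} vCPUs, {max_mem_kb // 1024} MB allocated")

        conn.close()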

    With that in mind, let’s examine how the hypervisor market has changed.

    3:12p
    Why In-Memory Computing Technology Will Change How We View Computing

    Nikita Ivanov founded GridGain Systems, which is funded by RTP Ventures and Almaz Capital, in 2007. He has led GridGain in developing advanced distributed in-memory data processing technologies, including a Java-based in-memory computing platform.

    NIKITA IVANOV
    GridGain Systems

    Big Data is BIG — in fact, it’s too big to be of any use without technology capable of handling it. With businesses creating 2.5 quintillion bytes of data every day, it’s nearly impossible for them to gain timely, actionable insights without the right tools. To remain competitive, organizations must be able to make data-derived decisions from the unwieldy amounts of information springing from countless sources: geothermal sensors, social media sites, CRM data, GPS, supply chains, and virtually any other segment of an organization where information can be analyzed. Unfortunately, most organizations create and ingest much more data than they can make sense of. Think of it as organizational sensory overload.

    In order to get past organizational sensory overload and actually derive intelligence from its data, businesses must rethink computing. Traditionally, data is brought to the computation – a time-consuming, resource-intensive process. Due to the way that data is stored and accessed with traditional computing, latency actually increases as more data is placed in storage.

    Think of this bottleneck as the traffic caused by multiple lanes of a highway merging into a single lane during rush hour. As the number of cars on the highway increases, so does the time stuck in traffic. Likewise, with traditional computing there is a positive correlation between the amount of data stored and latency.

    On the other hand, there is In-Memory Computing (IMC). IMC technology essentially reverses a fundamental tenet of computing by bringing the computation to where the data is – in memory, which is orders of magnitude faster and frees resources. It is this reduced computational latency that makes deriving live action from streaming data possible. In fact, IMC is the only way to address data in-flight.
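
    As a toy sketch of the idea (not GridGain’s platform – a real IMC product keeps a distributed, RAM-resident data grid, so the gap is far larger than this single-file example suggests), the same aggregation can be run against an on-disk store and an in-memory copy of the data:

        # Toy comparison: one aggregation run against on-disk storage,
        # then against an in-memory copy of the same data.
        import sqlite3
        import time

        N = 500_000
        rows = [(i, i % 100) for i in range(N)]

        # "Traditional" path: the data lives on disk and is pulled to the computation.
        disk = sqlite3.connect("events.db")
        disk.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER, bucket INTEGER)")
        disk.execute("DELETE FROM events")
        disk.executemany("INSERT INTO events VALUES (?, ?)", rows)
        disk.commit()

        t0 = time.perf_counter()
        disk.execute("SELECT bucket, COUNT(*) FROM events GROUP BY bucket").fetchall()
        print(f"on-disk aggregation:   {time.perf_counter() - t0:.3f}s")

        # In-memory path: the working set sits in RAM, where the computation runs.
        mem = sqlite3.connect(":memory:")
        mem.execute("CREATE TABLE events (id INTEGER, bucket INTEGER)")
        mem.executemany("INSERT INTO events VALUES (?, ?)", rows)

        t0 = time.perf_counter()
        mem.execute("SELECT bucket, COUNT(*) FROM events GROUP BY bucket").fetchall()
        print(f"in-memory aggregation: {time.perf_counter() - t0:.3f}s")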

    Moving Beyond the Traditional Methods

    IMC technology makes it possible for organizations to conquer challenges that are beyond the capabilities of traditional technology. With IMC, businesses and organizations are able to provide real-time answers, while looking across vast amounts of data, expanding the possibilities for real-time decision making. This means that businesses could then use this data to better understand customer preferences and behaviors, and detect other correlations that they’d otherwise overlook.

    Examples of data-oriented tasks that require the high performance computing capabilities of In-Memory Computing include:

    • In the financial services industry, organizations may only have a fraction of a second to analyze a wide range of datasets in order to detect fraud and/or market risk.
    • In the oil and gas industry, companies must analyze large volumes of real time data to monitor pipelines and seismic sensors.
    • In the logistics industry, real time data is used to calculate pickup and delivery routes, based on package location, traffic and weather conditions.
    • In sales and marketing, businesses must be able to track every single item in every store, and analyze sales patterns and trends in real time.
    • In healthcare, organizations could use real time data to diagnose and treat patients.

    At the rate organizations are creating data, it’s unsurprising that much of it goes unused and insights go undiscovered. If an enterprise could move a hundred or even a thousand times faster, the possibilities it could derive from its data would be endless. With the adoption of IMC, these possibilities are becoming realities every day.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    9:50p
    CloudSigma Adds Live Snapshotting To Its Cloud

    Public cloud IaaS provider CloudSigma has introduced Snapshot Management Technology to its SSD storage offering to improve data protection and access in the cloud. The technology greatly enhances cloud-based disaster recovery capabilities and has other potential use cases, such as copying production environments for testing and development, enabling a company to test changes before deploying them. The Snapshot Management Technology allows customers to incorporate enterprise storage strategies into their cloud infrastructure while meeting complex compute requirements, all through a web application.

    The new snapshot management technology is fully integrated into CloudSigma’s existing platform. Customers are able to take a snapshot of live running drives and also clone them to create full drives to run separate Virtual Machines. The new drive can be at the same site or a second site for off-site data recovery.  Data can be recovered on a drive-by-drive basis.

    “What’s great about our snapshot technology is that it is completely non-disruptive,” said Robert Jenkins, CEO, CloudSigma. “Once the drive snapshot is created, an exact replica can be generated quickly at any time based on that snapshot. This replica can then be used as a ready-to-go replacement, should a disaster occur. Combining this with our existing private patching capability, companies can also easily pull that clone back into their private compute environment, as if nothing happened. We are truly transforming storage management in the cloud, even for companies with the most complex compute and security needs who can now make our public cloud a key part of their business continuity strategy.”

    Customers are able to completely automate their snapshot management processes by creating snapshot management policies. These policies allow customers to define scheduled snapshots, snapshot retention policies and more. Multiple policies are possible and each policy can be applied to one or more drives.
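
    As a hypothetical sketch only – this is not CloudSigma’s actual API, just an illustration of the concepts described above – a schedule-plus-retention policy covering several drives might be modeled like this:

        # Hypothetical illustration -- not CloudSigma's API -- of a snapshot
        # policy that pairs a schedule with a retention rule for a set of drives.
        from dataclasses import dataclass, field
        from datetime import datetime, timedelta

        @dataclass
        class SnapshotPolicy:
            name: str
            interval: timedelta                          # how often each covered drive is snapshotted
            retention: int                               # how many snapshots to keep per drive
            drives: list = field(default_factory=list)   # identifiers of the drives this policy covers

            def due(self, last_snapshot: datetime, now: datetime) -> bool:
                """True if a covered drive needs a new snapshot."""
                return now - last_snapshot >= self.interval

        # Multiple policies are possible, and each can be applied to one or more drives.
        nightly = SnapshotPolicy(name="nightly-dr",
                                 interval=timedelta(hours=24),
                                 retention=7,
                                 drives=["drive-a", "drive-b"])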

    10:00p
    NSA Will Cool its Secret Servers With Waste Water

    A new data center being built by the National Security Agency (NSA) will use up to 5 million gallons a day of treated wastewater from a Maryland utility. The agency last week reached an agreement with Howard County to use treated waste water – also known as “gray water” – that would otherwise be dumped into the Little Patuxent River, according to the Baltimore Sun.

    As part of the agreement, the NSA will spend $40 million to build a pumping station that will supply up to 5 million gallons a day of water for the cooling systems at the NSA data center under construction at Fort Meade, which is scheduled to come online in 2016. The agency is investing $860 million to build the 600,000 square foot facility, which will require 60 megawatts of power and will include at least 70,000 square feet of data center space.

    The NSA is already building a massive data center in Utah, investing up to $1.5 billion in  a project that will feature up to 1 million square feet of facilities. The NSA says both facilities will be used to protect national security networks and provide U.S. authorities with intelligence and warnings about cyber threats. But the agency data centers have become a flash point for controversy in the wake of public disclosures about the NSA’s covert data collection efforts.

    Following Google’s Lead

    With its use of local waste water, the NSA is taking a page out of Google’s playbook. A Google data center near Atlanta is recycling waste water to cool the thousands of servers housed in the facility, and then purifying the excess water so it can be released into the Chattahoochee River. The plant builds on concepts Google used in Belgium, where it treats water from an industrial canal for use in its data center cooling system, allowing the facility to operate without chillers.

    The enormous volume of water required to cool high-density server farms is making water management a growing priority for data center operators. The move to cloud computing is concentrating enormous computing power in mega-data centers containing hundreds of thousands of servers. In many designs, all the heat from those servers is managed through cooling towers, where hot waste water from the data center is cooled, with the heat being removed through evaporation.

    As the scale of these huge facilities has increased, data center operators have begun working with local municipalities, water utilities and sewage authorities to reduce their impact on local potable water supplies and sewer capacity.

    At a potential capacity of 5 million gallons a day – even more than the reported 3 million gallons a day required to cool its Utah data center – the NSA would be using an enormous amount of water. By using gray water, the data center’s operations have less impact on the supply of potable water available to local residents.
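
    As a rough back-of-the-envelope calculation from the article’s own figures (assuming the Fort Meade facility ran continuously at its full 60 megawatts), 5 million gallons a day works out to roughly 3.5 gallons of cooling water per kilowatt-hour consumed:

        # Back-of-envelope arithmetic from the figures reported above:
        # 5 million gallons/day of gray water for a facility drawing 60 MW.
        gallons_per_day = 5_000_000
        power_mw = 60

        kwh_per_day = power_mw * 1_000 * 24              # 1,440,000 kWh per day
        gallons_per_kwh = gallons_per_day / kwh_per_day
        print(f"~{gallons_per_kwh:.1f} gallons of cooling water per kWh")  # ~3.5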

