Data Center Knowledge | News and analysis for the data center industry

Thursday, May 7th, 2015

    12:00p
    New Relic Aims to Preempt Docker Container Sprawl

    Although Docker containers may not be widely deployed in production environments just yet, monitoring those containers represents the next major challenge facing IT operations teams.

    To address that issue New Relic announced that it will provide support for Docker containers within its Software Analytics Platform, a software-as-a-service platform used for application performance.

    In addition, the company said it was making available a service map application to better identify IT resources strewn across the data center and streamlining its alert system to reduce IT operator fatigue.

    Finally, New Relic also announced that it became a member of the Cloud Foundry Foundation, which oversees the development of an open source Platform-as-a-Service environment.

    Patrick Lightbody, vice president of product management for New Relic, says the company has updated its agent software to integrate its Software Analytics Platform with the container management layer of Docker containers.

    “A Docker container looks like just another host and operating system inside an IT environment,” says Lightbody. “We’re taking what is an opaque environment and making it transparent.”

    He notes that while container sprawl has not yet become a major data center issue, it’s only a matter of time before it does. A physical server that can support 10 to 25 virtual machines can run hundreds of Docker containers. To make matters more challenging, those containers can run on physical servers, on virtual machines, or in a cloud environment.
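
    New Relic hasn’t detailed its agent internals here, but as a rough illustration of the kind of per-container visibility being described, the sketch below uses the Docker Engine API through the Docker SDK for Python (an assumption for illustration, not New Relic’s code) to enumerate running containers and read their CPU and memory statistics.

        # Illustrative sketch only: list running containers and print per-container
        # memory and CPU figures via the Docker Engine API. Requires the Docker SDK
        # for Python (pip install docker); this is the raw data a container-aware
        # monitoring agent can collect, not New Relic's implementation.
        import docker

        client = docker.from_env()  # connect to the local Docker daemon

        for container in client.containers.list():      # running containers only
            stats = container.stats(stream=False)       # one-shot stats snapshot
            mem_usage = stats["memory_stats"].get("usage", 0)
            mem_limit = stats["memory_stats"].get("limit", 1)
            cpu_total = stats["cpu_stats"]["cpu_usage"]["total_usage"]
            print(f"{container.name:30s} image={container.image.tags} "
                  f"mem={mem_usage / 2**20:.1f} MiB ({100 * mem_usage / mem_limit:.1f}%) "
                  f"cpu_total_ns={cpu_total}")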

    In general, containers are driving the emergence of micro-service architectures across the enterprise. While the concept of service-oriented architecture has been around for a long time, containers enable a more granular approach to delivering application services that developers are embracing from the bottom up.

    For developers, containers represent a faster way to provision IT infrastructure resources. IT operations teams are now being challenged with finding ways to make sure those containers remain secure and isolated when running in a production environment.

    To help better visualize an IT environment that increasingly resembles the proverbial rat’s nest, Lightbody says New Relic created a service map application as an extension of the company’s application performance management service.

    The issue that IT operations teams will soon face, says Lightbody, is not only tracking services across virtual and physical machines running in and out of the cloud, but also inside Docker containers that can be spun up by developers almost anywhere.

    At this stage, most IT operations teams are not prepared to manage Docker containers in a production environment. But given that it’s only a matter of time before Docker containers show up in large numbers, IT operations teams may want to invest in an ounce of monitoring prevention now rather than pay for a ton of container management cure later.

    3:00p
    Google to Provide Its Internal NoSQL Database as a Cloud Service

    Google’s Bigtable, the fully managed, highly scalable NoSQL database that powers the company’s biggest services, such as search, Gmail, and Analytics, is now available as a service.

    The cloud NoSQL database service can handle huge volumes of data and is being positioned as a valuable tool for Internet-of-Things players. Bigtable is designed for large-scale ingestion, analytics, and data-heavy service workloads; its track record behind Gmail and search backs up those claims.

    The technology has backed Google’s mission critical applications for more than a decade, and as a result, the company is promising at least two times the performance per dollar over unmanaged NoSQL alternatives. Google’s economies of scale make it a potentially disruptive force in the NoSQL market.

    Cloud Bigtable is also built for security: it uses a replicated storage strategy, and all data is encrypted both in flight and at rest. The NoSQL database service is accessible through the open source Apache HBase API.
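
    For illustration, a minimal write/read round trip against Cloud Bigtable might look like the sketch below. It assumes the google-cloud-bigtable Python client rather than the Java-based HBase API mentioned above, and the project, instance, table, and column-family names are placeholders (the table and its “metrics” column family are assumed to already exist).

        # Minimal sketch of a Cloud Bigtable write/read round trip using the
        # google-cloud-bigtable Python client (pip install google-cloud-bigtable).
        # Project, instance, table, and column-family names are placeholders.
        from google.cloud import bigtable

        client = bigtable.Client(project="my-project", admin=True)
        instance = client.instance("my-instance")
        table = instance.table("sensor-readings")

        # Write one cell; the row key encodes device and timestamp, a common
        # pattern for time-series data in Bigtable.
        row = table.direct_row(b"device-42#2015-05-07T12:00:00Z")
        row.set_cell("metrics", b"temperature_c", b"21.4")
        row.commit()

        # Read the row back.
        fetched = table.read_row(b"device-42#2015-05-07T12:00:00Z")
        print(fetched.cells["metrics"][b"temperature_c"][0].value)  # b'21.4'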

    Google also has a service partner ecosystem offering managed Cloud Bigtable services, including SunGard, Pythian, CCRi and Tellit Wireless Solutions.

    Bigtable is available as a beta release in multiple locations.

    The Growing Cloud NoSQL Market

    Cloud Bigtable is a potential competitive threat to other cloud NoSQL service providers, given the price point and track record.

    NoSQL, in general, stands to capitalize on the Internet of Things, with many players tuning their message to capture the massive, emerging market. NoSQL databases are well suited to unstructured data, such as the data generated by a multitude of connected devices. There’s a growing opportunity for offering managed services around NoSQL.

    “As businesses become increasingly data-centric, and with the coming age of the Internet of Things (IoT), enterprises and data-driven organizations must become adept at efficiently deriving insights from their data,” wrote Cory O’Connor, product manager at Google. “In this environment, any time spent building and managing infrastructure rather than working on applications is a lost opportunity.”

    IoT is a major factor behind increasing investment in new players, as well as service providers getting into the act. New offerings continue to come to market; last month, for example, RethinkDB rolled out the first commercially supported release of its NoSQL database designed for real-time applications.

    Amazon Web Services also has a NoSQL DB-as-a-Service play, DynamoDB. CenturyLink recently acquired NoSQL-as-a-service player Orchestrate to complement its more traditional managed Oracle and Microsoft SQL database services. Rackspace is another service provider big in the database services game, having acquired ObjectRocket in 2013 and expanded its managed database services from there.

    Other NoSQL DBaaS offerings include IBM-owned Cloudant, a company in which longtime SoftLayer competitor Rackspace previously invested, as well as Tesora’s Trove and Cumulogic, which offer both SQL and NoSQL as a service.

    DB-as-a-Service offerings also stand to benefit from the increasing amount of database activity moving to the cloud in general. While this is nowhere near a wholesale migration, as-a-service database offerings will certainly continue to grow.

    3:03p
    Equinix Makes TelecityGroup Bid

    TelecityGroup is reviewing a cash-and-stock takeover bid from Equinix, months after a merger was proposed between European data center giants Telecity and Interxion, according to documents filed by TelecityGroup. Equinix’s bid values Telecity at around $3.4 billion.

    The Equinix bid potentially threatens the Telecity-Interxion merger, which was proposed as an all-share deal. Much of the storyline around a Telecity-Interxion combination was that the merged company would leapfrog Equinix as the biggest provider in Europe. Equinix is the largest provider worldwide in terms of revenue.

    Equinix has offered a roughly 50/50 split of stock and cash, or about $17.42 per share, a premium of 27 percent over Telecity’s current share price. While it’s too early to tell which company will ultimately win out once the smoke clears, one clear winner is Telecity’s investors.

    Telecity is prohibited from seeking alternative proposals under the merger agreement with Interxion, but takeover discussions are allowed in limited circumstances.

    “Having carefully considered the Equinix proposal in the light of this exception, the Board of TelecityGroup has determined that it is required by virtue of its fiduciary duties to enter into discussions with Equinix and has decided to permit Equinix to undertake a short period of due diligence,” said TelecityGroup in a press release.

    Equinix will have a 28-day period ending June 4th to announce either a firm intention to make an offer or to state it does not intend to make an offer.

    Consolidation is rampant in the data center industry. Earlier this week QTS agreed to acquire government cloud provider Carpathia. Telx is rumored to be exploring a potential $2 billion sale as well.

    3:30p
    In-Memory Computing: Transforming Business Intelligence and the Internet of Things

    Dr. William L. Bain founded ScaleOut Software in 2003.

    We are in the midst of a data revolution – deriving business intelligence (BI) from data has graduated from a nice-to-have feature to a business-critical function. As connected devices, sensors, and machines become ubiquitous, organizations increasingly require a solution that can keep up with the real-time data delivered by the Internet of Things (IoT).

    Traditional techniques for business intelligence cannot handle the demands of IoT initiatives; their ever-growing streams of data require real-time tracking and analysis to be effective. Businesses are looking toward the next generation of analytics technology – operational intelligence (OI) – to handle live, fast-changing data and provide immediate feedback.

    Operational intelligence for IoT requires a computing platform that can store, update and continuously analyze data sets representing dynamic real-world entities or business assets. In-memory computing, which can perform these functions with scalability and extremely low latency, provides the computing power required for OI. For example, it can analyze a terabyte of continuously changing data in a few seconds and can ingest and analyze events from millions of sources within milliseconds. Tracking and analyzing events from a huge collection of dynamic assets and quickly generating feedback opens up new revenue streams and business opportunities that were previously impossible.
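
    As a toy, single-process illustration of that pattern – per-asset state held in memory, updated as each event arrives, and analyzed immediately with no disk round trip – consider the sketch below. A production in-memory data grid partitions this state across a cluster; none of that distribution is shown here, and the asset names and thresholds are invented.

        # Toy illustration of operational intelligence over in-memory state:
        # each incoming event updates an asset's in-memory history and returns
        # immediate feedback. Not a distributed data grid, just the core pattern.
        from collections import defaultdict, deque
        import statistics
        import time

        WINDOW = 100                                    # last 100 readings per asset
        state = defaultdict(lambda: deque(maxlen=WINDOW))

        def ingest(asset_id, value):
            """Update in-memory state for one asset and return immediate feedback."""
            readings = state[asset_id]
            readings.append(value)
            mean = statistics.mean(readings)
            # Flag readings that deviate sharply from this asset's recent behavior.
            alert = (len(readings) >= 10 and
                     abs(value - mean) > 3 * statistics.pstdev(readings))
            return {"asset": asset_id, "mean": round(mean, 2),
                    "alert": alert, "ts": time.time()}

        # Stream a few synthetic events through the pipeline.
        for event in [("turbine-7", 70.1), ("turbine-7", 70.3), ("turbine-7", 70.2)]:
            print(ingest(*event))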

    Here are some scenarios in which OI can introduce disruptive changes and add significant business value for IoT.

    Financial Services

    The financial industry has been one of the fastest to integrate high-performance computing technology into its day-to-day business, moving trading off the exchange floor and onto electronic networks. Investment firms now rely on this technology to drive financial decision making with real-time analysis and forecasting. Using in-memory computing, investment firms can analyze several trading algorithms simultaneously, comparing historical stock prices, market fluctuations, and equity positions to determine whether a trade should be initiated. A hedge fund can track the effect of market fluctuations on its portfolios, allowing long and short equity positions to be quickly evaluated against proprietary rebalancing strategies. Through in-memory computing, financial institutions can obtain the OI they need to make trading decisions faster and more successfully.
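
    A toy illustration of that “many strategies, one in-memory data set” shape – not any firm’s actual system; the price series and rules are invented – might look like this:

        # Toy sketch: evaluate several trading rules concurrently against a shared
        # in-memory price history. Strategies and prices are invented placeholders.
        from concurrent.futures import ThreadPoolExecutor

        prices = [101.2, 101.5, 100.9, 102.3, 103.0, 102.7, 104.1]  # in-memory history

        def momentum(prices):
            return "buy" if prices[-1] > prices[-5] else "hold"

        def mean_reversion(prices):
            avg = sum(prices) / len(prices)
            return "sell" if prices[-1] > 1.01 * avg else "hold"

        strategies = {"momentum": momentum, "mean_reversion": mean_reversion}

        with ThreadPoolExecutor() as pool:
            futures = {name: pool.submit(fn, prices) for name, fn in strategies.items()}
            signals = {name: f.result() for name, f in futures.items()}

        print(signals)  # {'momentum': 'buy', 'mean_reversion': 'sell'}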

    Manufacturing

    Outfitting “smart” machines on factory floors with real-time telemetry enables OI that can monitor performance and identify early indicators of problems, preventing costly failure scenarios. The losses associated with unexpected outages build up quickly; the costs of repair, replacement of expensive equipment, and lost productivity reverberate throughout the business. Depending on what a factory produces, a minor mechanical failure that is not immediately recognized and addressed can endanger consumers and even risk non-compliance with industry regulations. Rather than relying solely on periodic inspections and component replacements, manufacturers can use in-memory computing to continuously analyze live data and cross-reference it with historical patterns, proactively heading off impending failures.
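
    Continuing the toy sketches above, checking live telemetry against an in-memory historical baseline might look like the following; the machine model, metrics, baseline values, and tolerance are all invented for illustration.

        # Toy sketch: compare live machine telemetry with an in-memory historical
        # baseline for that machine model and surface deviations immediately,
        # rather than waiting for a scheduled inspection. All values are invented.
        baselines = {  # per-model historical norms kept in memory
            "press-A": {"vibration_mm_s": 2.0, "bearing_temp_c": 65.0},
        }

        def check(model, reading, tolerance=0.25):
            """Return the metrics that deviate more than `tolerance` from baseline."""
            baseline = baselines[model]
            return {
                metric: value
                for metric, value in reading.items()
                if abs(value - baseline[metric]) > tolerance * baseline[metric]
            }

        live = {"vibration_mm_s": 2.9, "bearing_temp_c": 66.0}
        print(check("press-A", live))  # {'vibration_mm_s': 2.9} -> flag for maintenance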

    Retail

    The proliferation of e-commerce has created intense competitive pressure on traditional brick-and-mortar stores, necessitating the integration of IoT technologies to add OI. E-commerce sites have long used in-memory computing to evaluate shopping behavior, purchase history, and spending patterns, providing real-time customized offers to prospective customers while they browse online. Brick-and-mortar retailers have started to apply the same techniques to create a personalized shopping experience within their stores. Shoppers can opt in to a personalized experience, which uses OI to combine demographics, brand preferences, shopping history, and current offers to generate immediate, personalized recommendations that assist sales associates. By employing RFID tags attached to merchandise, OI also allows retailers to track inventory changes precisely and lower costs.

    Media Providers

    Similarly, cable TV providers can stream telemetry from a vast network of set-top boxes to an OI platform, enrich the data with relevant historical information, and make timely, personalized recommendations to viewers as they watch TV. Data from customer history, demographic patterns, and other sources can all feed into a new way for brands to reach and engage viewers in the moment. By using in-memory computing, cable providers can perform this analysis in real time, not just to provide tailored offers and entertainment recommendations that compete with streaming powerhouses like Netflix and Hulu, but also to immediately identify and correct technical issues.

    Summing Up: IoT and In-Memory Computing

    By using in-memory computing to continuously ingest, correlate, and analyze real-time data enriched with historical information, OI detects patterns and trends on a second-by-second basis. This powerful technology can provide immediate feedback that steers behavior, optimizes performance, avoids downtime, and captures important new business opportunities. Many industries have been quick to implement OI in order to maximize the value of their IoT deployments. Because of this technology’s impressive power and flexibility, countless new use cases will undoubtedly emerge.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:00p
    “Not Your Father’s EMC” Open Sources Software Defined Storage Controller

    EMC is releasing its ViPR Controller storage automation and control software as an open-source project. ViPR creates a virtual pool of storage and helps automate policy-based capacity provisioning for applications.

    EMC will continue to sell a commercial version with services and support, but third parties will be able to develop their own services and applications around ViPR. Open sourcing also stands to make ViPR compatible with more parts of enterprise storage environments.

    The ViPR open source project is called Project CoprHD. It will go up on GitHub next month, licensed under Mozilla Public License 2.0.

    The larger trend of the software-defined data center means all parts of the data center are being abstracted and controlled through software, destroying silos and proprietary walled gardens. While there is still a lot of demand for proprietary appliances, they increasingly need to work within a larger virtualized pool of heterogeneous hardware. What makes them work as a team is a controller like ViPR.

    Not all integrations are created equal, however, and ViPR is currently more tightly integrated with EMC products, meaning third-party products don’t enjoy the same feature set despite being integrated.

    Open sourcing a controller so it can better play that neutral role makes a lot of sense. It’s easier to be neutral if you’re not proprietary. Proprietary doesn’t play nice in the world of hardware abstraction.

    The ViPR open source project brings in the wider development community to enhance ViPR. The company also recently launched a community initiative.

    The open source project will also help the commercial version, which will incorporate improvements generated by the community. Open sourcing ViPR might also open up sales prospects for EMC, with the project acting as a customer on-ramp rather than a product in its own right.

    EMC also released a free edition of ScaleIO to the same end as the open source ViPR project. It’s about getting people into EMC’s ecosystem rather than selling specific software. EMC acquired ScaleIO in 2013.

    “For anyone left confused, today’s announcements prove this is not your father’s EMC,” said Jeremy Burton, president of products and marketing at EMC.

    “By offering access to Project CoprHD, the open source version of EMC ViPR Controller and free downloads of EMC ScaleIO – two key enabling pieces of our Software-Defined Storage portfolio – EMC has turned a corner by delving more deeply into open, collaborative software development with our customers, partners, developers, and competitors,” he said. “This represents a massive shift in our strategy that we believe will help accelerate our customers’ efforts to develop new application-centric business models.”

    4:30p
    HP Lets Loose Horde of New Servers

    HP this week launched a raft of servers spanning everything from 1U offerings to a 16-socket server powered by Intel’s new high-end Xeon processors.

    In addition, HP announced that its Integrity Superdome X platform is now certified to run Windows Server and updated its ConvergedSystem portfolio for SAP HANA with the Intel Xeon E7 v3 processors also unveiled this week. The 16-socket HP ConvergedSystem supports up to 12 terabytes of data in a single memory pool.

    At the other end of the spectrum, the HP Apollo 2000 series is a hyperscale platform designed to fit into a traditional rack server environment, says Vineeth Ram, vice president of product marketing for HP Servers. Up to four independent, hot-pluggable server nodes can be plugged into a standard 2U chassis.

    “We see this offering as a bridge to converged infrastructure for IT organizations running rack servers,” says Ram. “We’re leveraging the principles of converged infrastructure in a traditional server.”

    Meanwhile, the Apollo 4000 series includes three servers that Ram says are purpose-built for data-intensive workloads such as Hadoop. Among them is what HP claims is the industry’s densest 2U server, the HP Apollo 4200, which can support 28 large-form-factor or 50 small-form-factor drives.

    The Apollo 4510 is based on a 4U chassis with a single server node, providing rack-scale storage density of up to 5.44 petabytes in a 42U rack. The Apollo 4530 supports three server nodes in a single chassis.

    The HP ProLiant DL580 Gen9 and HP ProLiant DL560 are four-socket x86 systems in a 2U chassis that can be configured with Intel Xeon E7-4800/8800 v3 processors. In addition, the HP ProLiant BL660c Gen9 is a four-socket compute blade.

    The DL580 Gen9 supports up to 6 TB of HP DDR4 Smart Memory across 96 DIMM slots, provides up to nine PCIe Gen3 slots for greater I/O bandwidth, and includes an HP Smart Array P830i SAS controller with 2 GB or 4 GB of flash-backed write cache. It also supports up to five full-length, full-height, double-wide GPGPU cards.

    Intel claims that the Xeon E7-4800/8800 v3 processors provide, on average, a 40 percent performance improvement over the prior generation, while delivering up to a six-fold performance improvement for applications that run in memory, thanks to the new Transactional Synchronization Extensions Intel has added to the Xeon instruction set.

    Each member of the family, says Intel, can be configured with up to 18 cores, a 20 percent increase in cores compared to the prior generation, and up to 45 megabytes of last-level cache. The processors themselves can support servers configured with as many as 32 sockets.

    The end result, says Intel, is as much as a ten-fold improvement in performance per dollar depending on how those servers are actually configured.

    The degree and rate at which IT organizations embrace converged infrastructure will vary greatly. But one thing is certain: many of the management concepts and constructs originally developed for blade servers are now being applied to traditional rack servers, which may one day lead to a unification of how different server platforms are managed across the data center.

    5:06p
    Swath of Government Cloud Services Gets DISA Green Light

    The U.S. Defense Information Systems Agency (DISA) has granted Provisional Authorization to 23 government cloud services for hosting mission data up to Impact Level 2, which covers non-controlled, unclassified information: publicly released information as well as some private, unclassified Department of Defense information requiring minimal access control.

    The DoD, DISA’s parent department, has been warming to commercial cloud services, and one vocal proponent of using cloud intelligently has been DoD CIO Terry Halvorsen. Cloud promises savings, but not all data is suited for it. The first step was defining the types of data; Provisional Authorization now establishes which providers are suitable for Impact Level 2 data.

    There are four Impact Levels, each describing the sensitivity and risk associated with the data. Counterintuitively, they are levels 2, 4, 5, and 6, with 2 being the lowest and 6 reserved for classified, double-secret-probation information.

    Each provider is first granted either a FedRAMP Joint Authorization Board Provisional Authorization or a FedRAMP Agency Authority to Operate. A primer on FedRAMP is located here.

    Some of the more well-known names that were granted Provisional Authorization Level 2 include:

    • CDN provider Akamai
    • Microsoft Windows Azure public cloud
    • Amazon Web Services Redshift, a data warehouse service
    • AT&T’s Storage-as-a-Service
    • IBM SmartCloud for Government
    • Microsoft Office 365 and supporting services, such as Active Directory
    • Oracle Federal Managed Cloud Services, and its SaaS Service Cloud
    • Salesforce Government Cloud, PaaS and SaaS
    • Verizon Enterprise Cloud Federal Edition

    Many data center providers have either gone after the larger federal space or acquired positions in this growing market. QTS, which received its FedRAMP certification in 2014, acquired Carpathia Hosting this week to boost its government cloud offerings.

    What workloads are suitable for which cloud is still being established. New cloud security requirements were launched early this year.

    The other clouds granted PA are:

    IaaS: Autonomic Resources Cloud Platform (ARC-P), Clear Government Solutions FedGRID Government Community Cloud, Lockheed Martin SolaS-I Government Community Cloud, OMB MAX General Support Services, USDA National Information Technology Center

    PaaS and/or SaaS: MicroPact Product Suite, AIMS eCase, OMB MAX.gov Shared Services, U.S. Treasury Workplace.gov Community Cloud, Economic Systems Federal Human Resources Navigator (FHR Navigator), SecureKey Briidge.net Exchange for Connect.Gov, Edge Hosting CloudPlus, which provides managed, secure Windows and Linux application hosting (technically an ASP).

    Remote hosted desktops: Concurrent Technologies Corporation.


    6:11p
    Report: Defunct OpenStack Startup Nebula Engineers Join Oracle

    Oracle has hired most of the engineering team behind Nebula, the private OpenStack cloud startup that went out of business in April.

    Peter Magnusson, vice president of cloud development at Oracle, told Re/code he went after Nebula engineers within one week of the announcement that the cloud startup was going out of business. He soon offered 90 percent of them (about 40 people) jobs at Oracle.

    Engineers skilled at enterprise cloud, and especially those skilled at OpenStack, a suite of open source cloud infrastructure software, are hard to come by. Just this week we covered an analyst report that found that while the technology needed to stand up an OpenStack cloud is cheaper than proprietary alternatives on the market, the scarcity of OpenStack talent makes the open source route a more expensive proposition, since engineers who are good at it command unusually high salaries.

    Nebula built hardware-and-software packages for quick and easy private OpenStack cloud deployment by enterprise customers.

    Some members of the team behind Nebula were involved in the creation of OpenStack when it was conceived as a collaboration between engineers at NASA and Rackspace. Chris Kemp, the startup’s founder, was CIO of the NASA Ames Research Center in Silicon Valley, where he and his team built a private cloud for the center’s own use that was also named Nebula. Kemp was one of OpenStack’s creators.

    It’s unclear whether Kemp was one of the engineers that recently joined Oracle.

    Oracle has its own OpenStack distributions for both the Oracle Linux and Oracle Solaris operating systems. The company is a member of the OpenStack Foundation. But Magnusson told Re/code the former cloud startup’s team will report to him and work primarily on Oracle’s Infrastructure-as-a-Service offerings.

