Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
 

Friday, July 12th, 2013

    11:51a
    EMC Strengthens Flash Portfolio With ScaleIO Acquisition

    Accelerating its strategy to deliver Flash across enterprise servers and storage, EMC announced it will acquire privately held ScaleIO. Combining ScaleIO’s highly scalable server software with EMC XtremSF PCIe Flash cards will broaden EMC’s portfolio for Enterprise Private Cloud and Service Provider environments.

    ScaleIO Elastic Converged Storage (ECS) software will become part of the EMC XtremSW Suite of Flash storage software. The software approach that ScaleIO takes creates a virtual pool of server-based storage by logically combining SSDs, PCIe Flash cards, HDDs—or any combination of these devices. It provides support for both virtualized and non-virtualized environments and scales from tens to thousands of servers. Upon closing, ScaleIO will operate within the EMC Flash Product Division.
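    The pooling idea is simple to picture in code. The sketch below is purely illustrative (it is not ScaleIO's actual software or API): it shows how devices of different types, contributed by many servers, can be logically combined into one virtual pool whose capacity is tracked in aggregate. All names here are hypothetical.

```python
# Illustrative sketch, NOT ScaleIO's actual API: logically combining
# server-attached SSDs, PCIe flash cards, and HDDs into one virtual pool.
from dataclasses import dataclass

@dataclass
class Device:
    server: str       # host contributing the device
    kind: str         # "ssd", "pcie_flash", or "hdd"
    capacity_gb: int

class VirtualPool:
    """Aggregates devices from many servers into one logical pool."""
    def __init__(self):
        self.devices = []

    def add(self, device: Device):
        self.devices.append(device)

    def total_capacity_gb(self) -> int:
        # The pool's capacity is simply the sum of every contribution.
        return sum(d.capacity_gb for d in self.devices)

    def capacity_by_kind(self) -> dict:
        # Break the pool down by media type for tiering decisions.
        out = {}
        for d in self.devices:
            out[d.kind] = out.get(d.kind, 0) + d.capacity_gb
        return out

pool = VirtualPool()
pool.add(Device("server-01", "pcie_flash", 700))
pool.add(Device("server-01", "hdd", 4000))
pool.add(Device("server-02", "ssd", 800))
print(pool.total_capacity_gb())   # 5500
print(pool.capacity_by_kind())
```

    The point of the abstraction is that consumers of the pool see only aggregate capacity, not which server or media type backs it, which is what lets such software scale from tens to thousands of servers.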

    Earlier in the year EMC Senior Vice President and General Manager of the Flash Products Division Zahid Hussain talked about how Flash has become ubiquitous and was deployed in a number of different places with a number of different use cases. As he pointed out, software will evolve at each layer of the infrastructure stack, and with the ScaleIO acquisition EMC accelerates its ability to abstract the properties of different types of Flash with software that manages all flavors of Flash across the data center.  In a recent blog post Zahid Hussain talks about ScaleIO, its capabilities, and the future of building upon the ECS core.

    “Flash now permeates every layer of IT—in virtualized and non-virtualized environments. Enterprise workloads are diverse in nature, and EMC is committed to offering our customers and partners choice in their Flash deployments,” said David Goulden, President and Chief Operating Officer at EMC. “ScaleIO is a natural extension to our best-of-breed portfolio. It strengthens our product capabilities in the area of server-side storage and brings a world class team that will undoubtedly enable us to innovate more quickly in the future.”

    12:30p
    The Software Defined Data Center Meets Disaster Recovery

    Shannon Snowden, Senior Technical Architect at Zerto.


    The concept of the software-defined data center (SDDC) rose to prominence during VMworld 2012, with VMware touting it as the next big leap forward in information technology management. The term refers to an IT facility where the networking, storage, CPU and security are virtualized and delivered as a service. Furthermore, the provisioning and operation of the entire infrastructure is completely automated by software. This integration and automation brings everything in the infrastructure together and allows for a high degree of flexibility. It also means the person managing the data center can more efficiently utilize resources while providing better service to the company. Finally, the concept works at the hypervisor level, which allows the company to better and more fully utilize its hardware.

    The Logical Next Step

    It’s important and exciting to note that the realization of this concept is already happening. More and more companies are setting up data centers or thinking of data centers in this way, which means that related spin-off concepts have begun to appear. For example, software-defined networking has been coined as an approach in which networking control is decoupled from hardware and controlled by a software application.

    Because many IT organizations are comfortable with the concept – and implementation – of SDDC, the logical next step is the adoption of software-defined disaster recovery (SDDR). This approach represents complete hardware abstraction, meaning disaster recovery is no longer tied to the hardware. It’s now part of the software, which makes the process of replicating and recovering significantly more efficient.

    Maximum Efficiency and Flexibility

    To be specific, implementing the concept of SDDR allows for simple disaster recovery implementation by extending the benefits of virtualization, such as flexibility and portability, to disaster recovery (DR). The management of DR is also greatly improved, as SDDR incorporates replication and recovery of multiple sites and can be initiated and maintained from one central location. Individual virtual machines or entire virtualized data centers can now be moved to and from any physical location, either to avoid disasters or as part of scheduled data center relocations.

    SDDR works at the hypervisor layer and does not depend on hardware. When one compares a hardware-based deployment to an SDDR deployment, the benefits really come into focus. For a hardware-based disaster recovery solution, a multi-site deployment is very difficult, expensive and typically requires on-site configuration by the hardware and software vendors. The build time can be anywhere from several days to several weeks, depending on the number of sites. Contrast that with the typical multi-site SDDR deployment that normally takes about an hour to install and configure, is done remotely and includes testing sample failovers and failbacks.

    The SDDR concept allows for fast recovery – near-continuous data protection combined with non-disruptive DR testing allows for recovery time objectives of minutes instead of hours or days. There’s also no reliance on inefficient storage snapshots, whose associated delays in recovery can cause major performance issues. Per-VM replication provides efficient bandwidth usage with minimal network impact, and the model enables maximum flexibility, where VMs can be protected and recovered to private, public or hybrid clouds.
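    The contrast with snapshots can be sketched in a few lines. This is a hypothetical toy model, not any vendor's implementation: per-VM, journal-based replication mirrors every write with a timestamp, so recovery can roll a VM to any recent point in time rather than back to the last periodic snapshot.

```python
# Hypothetical sketch of per-VM journal-based replication, the idea
# behind near-continuous data protection: each write is mirrored into a
# time-ordered journal; recovery replays writes up to a chosen instant.
class VMJournal:
    def __init__(self, vm_name: str):
        self.vm_name = vm_name
        self.entries = []  # (timestamp, block, data), kept in time order

    def record_write(self, ts: float, block: int, data: bytes):
        self.entries.append((ts, block, data))

    def recover_to(self, ts: float) -> dict:
        """Replay all writes up to `ts`, yielding the VM's disk state."""
        state = {}
        for entry_ts, block, data in self.entries:
            if entry_ts > ts:
                break
            state[block] = data
        return state

j = VMJournal("app-vm-01")
j.record_write(100.0, 0, b"v1")
j.record_write(105.0, 0, b"v2")
j.record_write(110.0, 1, b"v3")
# Recover to just before the last write: block 0 holds "v2", block 1 absent.
print(j.recover_to(107.0))  # {0: b'v2'}
```

    With a snapshot taken only at, say, t=100, everything after it would be lost; the journal lets the recovery point land seconds before the failure, which is where the minutes-not-hours recovery objectives come from.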

    Sound familiar? It should: this is essentially the same argument made for server virtualization versus physical servers, one we have all seen proven out for years. SDDR simply extends the portability benefits across multiple geographically separated sites.

    Impact on Cloud Service Providers

    For all the talk of DR being something that could easily move to the cloud, reality shows that DR has lagged behind other services offered by cloud service providers (CSPs). DR has proven much more complex to deliver than other cloud services. There are three essential requirements for successfully providing cloud-based DR services, all of which SDDR can meet and physical hardware cannot.

    1. It must be hardware agnostic.
    In a hardware solution, the provider has to connect disparate hardware at a deeper level in the infrastructure than is normally needed for other cloud-based services. While the CSP might match one customer’s hardware, it is highly unlikely to match very many customers at the hardware level. SDDR works at the virtualization level so that cloud service providers can offer disaster recovery services to companies that do not have the same hardware as the CSP, specifically because they are leveraging the benefits of virtualization.

    2. It has to have integrated multi-tenant support.
    CSPs have to build a multi-tenant environment: they host many customer environments, each of which must be completely separate and secure. For DR purposes, multi-tenancy is quite complicated to achieve and manage if the DR tools are not designed to be multi-tenant.

    3. It has to have streamlined customer management capability.
    SDDR attracts customers to cloud-based DR unlike anything that previously existed. The customer base tends to grow very rapidly and can crush an ill-prepared support department. Having a DR tool that is designed for ease of management with profile driven, self-service customer interaction tools is critical for CSPs to successfully offer DRaaS in a cost-effective manner. Providers are able to offer global management and service level reporting for customers, regardless of VM location.

    Benefits to Providers and Consumers

    SDDR is so effective that it has essentially created a new service offering for CSPs. The SDDR model allows for rapid, flexible growth and enables CSPs to launch a first service offering, or expand an existing one, with the ability to meet aggressive service level agreements and compete for top-tier DR business.

    With its advanced multi-tenant management capability, SDDR lets CSPs offer fully managed DR as a service to some customers, or a more customer-initiated, self-service DR model if that is what the customer prefers. In fact, SDDR is a great entry point for companies that may never have considered cloud-based services before, because of the significant savings in capital and operational costs of not needing a dedicated DR site.

    SDDR makes both providing and consuming DR as a service better because it removes the dependency on hardware, supports multiple sites, streamlines DR management and leverages virtualization to its fullest extent.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:34p
    Scality Gets $22 Million for Scale-Out Storage

    Storage and big data companies Scality and WebAction receive funding to advance their offerings, and Avere Systems is selected for the Library of Congress and South American web hosting company Locaweb.

    Scality secures $22 million. Storage provider Scality announced it has closed its series C funding round for $22 million, bringing the total invested capital since company inception to $35 million. With funding led by Menlo Ventures and Iris Capital, the investment will be used to strengthen its worldwide sales & marketing initiatives targeting enterprise and service provider markets, and to increase investment in its world-class R&D team. Customers deploy Scality’s software to provide large-scale storage for Cloud, Big Data, and Backup and Archive applications. “The growth opportunity in the software storage market is very exciting for us,” said Doug Carlisle, managing director, Menlo Ventures.  “The intersection of Mobile, Social, Big Data and Cloud Infrastructure is creating a disruption in technology innovation.  Scality is leading the disruption in software defined storage technology.  Since its introduction in 2010, Scality’s RING software storage product has seen rapid adoption by companies eager to try a new storage approach.  We look forward to watching Scality continue to innovate in data storage solutions for the web scale infrastructure markets.”

    Customer wins for Avere. Avere Systems announced that the Library of Congress has selected the company to increase the efficiency and performance of its storage infrastructure. The Library of Congress website and file repositories will be supported by Avere’s FXT Series Edge Filers, enabling congressional and public users quick access to valuable content. “The Avere FXT Series, with its ability to deliver up to 150 TB of Flash in a single cluster, was built to address the difficulty of providing fast and scalable access to large amounts of content typified by the Library of Congress data environment,” said Ron Bianchini, president and CEO, Avere Systems. “This is a tremendous achievement for Avere and we are excited that our products are helping the Library of Congress deliver its massive data stores to users.” Avere also announced that the large South American web hosting company Locaweb has selected Avere FXT Series Edge Filers to realize performance and cost improvements in hosting more than 500,000 web sites. Locaweb first leveraged Avere to consolidate and simplify the management of thousands of storage appliances and drives into a single global namespace. In addition, Avere helped improve Locaweb’s performance by dynamically tiering active data to high-performance FXT Series Edge Filers and offloading the ZFS-based Core filers.

    WebAction receives $11 million investment. Real-time big data server company WebAction announced that it has closed an $11 million Series B financing led by Summit Partners. The new capital will allow the company to expand research and development, and increase investments in sales, marketing, and other business development activities. Its Real-Time Big Data Server is an end-to-end platform that enables the next generation of real-time, data driven applications by acquiring, processing, and delivering structured and unstructured data. “We are delighted to have Summit Partners join the team as their experience and success with rapidly growing companies makes them the ideal partner for WebAction,” said Ali Kutay, Chairman and CEO of WebAction. “Given our prior joint successes with Summit Partners,” he added, “we are well positioned to deliver industry-leading solutions to the enterprise market.”

    1:51p
    Friday Funny: Close Encounters of the Data Center Kind

    Happy Friday! At the end of the work week, we take a little time for some data center humor. Before your weekend, submit caption suggestions for our latest cartoon about a “close encounter” in the data center. Many thanks to all who submit some pretty humorous caption suggestions each week. Diane Alber, our fav data center cartoonist, writes, “I thought I would take sending your information to the ‘cloud’ to a whole new level...”

    The caption contest works like this: We provide the cartoon (drawn by Diane) and you, our readers, submit the captions. We then choose finalists and the readers vote for their favorite suggestion. After reader voting, the winner will receive his or her caption in a signed print by Diane.

    Please visit Diane’s website Kip and Gary for more of her data center humor. To see our previous cartoons, visit Data Center Knowledge’s Humor Channel.

    1:58p
    With Latest DCIM Update, FieldView Focuses on Integration

    FieldView Solutions is announcing version 6.0 of its Data Center Infrastructure Management Suite today. FieldView says its latest update focuses on addressing a critical gap in DCIM solutions: sharing the data that is gathered, stored and analyzed with other applications.

    “FieldView 6.0 represents a wealth of customer feedback and the realization to conform our solution to better meet the needs of a global marketplace,” said Tim Regovich, Chief Technology Officer, FieldView Solutions. “We are proud to offer a data center tool that’s adaptable enough to integrate into any existing set of data center management tools while flexible enough to conform to individual requirements.”

    To address these gaps, FieldView is changing from providing a custom integration for individual applications to creating two different data links that streamline and share power, cooling, historical and other information critical to optimizing today’s data centers.  The data links are called DataView and LiveView.

    DataView is a non-compressed cache of data for a wide variety of applications to access or publish historical and trending data for asset management and capacity planning needs. LiveView is a live temperature and power feed that offers the most recent measurement readings for an at-a-glance view of global data center operations.
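    The two access patterns are easy to illustrate. The sketch below is hypothetical (it is not FieldView's actual API; all names are invented): one store of sensor readings can serve both a DataView-style historical query over a time window and a LiveView-style lookup of only the most recent measurement.

```python
# Hypothetical sketch, NOT FieldView's actual API: one reading store
# serving the two access patterns described in the article.
class SensorStore:
    def __init__(self):
        self.readings = {}  # sensor_id -> list of (timestamp, value)

    def ingest(self, sensor_id: str, ts: int, value: float):
        self.readings.setdefault(sensor_id, []).append((ts, value))

    # DataView-like: full history over a window, for trending and
    # capacity planning.
    def history(self, sensor_id: str, start: int, end: int):
        return [(t, v) for t, v in self.readings.get(sensor_id, [])
                if start <= t <= end]

    # LiveView-like: only the latest measurement, for at-a-glance views.
    def latest(self, sensor_id: str):
        series = self.readings.get(sensor_id)
        return series[-1] if series else None

store = SensorStore()
store.ingest("rack-12-temp", 1, 21.5)
store.ingest("rack-12-temp", 2, 22.0)
store.ingest("rack-12-temp", 3, 22.4)
print(store.history("rack-12-temp", 1, 2))  # [(1, 21.5), (2, 22.0)]
print(store.latest("rack-12-temp"))         # (3, 22.4)
```

    Splitting the interfaces this way lets an asset-management tool pull bulk history while a dashboard polls only the latest values, without either workload interfering with the other.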

    DataView and LiveView enable FieldView to interconnect with a wide range of applications. This simplifies integration of historical and real-time data collected by FieldView into asset, systems and network management solutions, as well as financial applications, dynamic facilities control, IT power control, and other applications.

    In addition to the new data sharing abilities, FieldView 6.0 also adds the following features, which will be delivered throughout Q3 and Q4 of 2013:

    • Extended Business Intelligence (BI) capabilities: BI functionality has been enhanced, including capacity planning of space, power and cooling.
    • Data Warehouse Excel Integration: Fully-customizable Excel features enable users to save any query or run any regression.
    • Enhanced Dashboards: With the use of configurable widgets, FieldView’s user dashboards can now be customized. Whether the end user wants to view PUE data or just alarms, this new feature provides only the information desired.
    • “What If” Planning Scenarios: Predictive analysis is critical to data center operations and FieldView 6.0 has enhanced functionality to help forecast space, power and cooling requirements vs. available capacity, and to simulate the impact of potential deployments.
    • Ticketing System Integration: Full integration with industry-leading ticketing systems enables integration with customers’ operational processes for resolving critical alerts. FieldView 6.0 generates alarms, and aggregates alarms generated by the systems it monitors.
    • Mobile: Leveraging HTML5, FieldView 6.0 information is now optimized for smart phones and tablet viewing and interaction.
    • Internationalization: FieldView will offer five languages: Chinese, Portuguese, Japanese, German, and Spanish as well as inherent capability of operating and capturing data in Metric or Imperial units.
    • Energy Optimization: Enhanced reports provide power, cooling and space trending information to identify servers with long-term power draws and other anomalies.
    • IT Asset Management Integration: A newly enhanced version now has a common format for importing and synchronizing data – making a simpler and deeper connection to asset management tools to automatically share information.

    2:32p
    Intel Rolls Out Development Kit for KVM Tools

    Intel is offering a new Software Development Kit for data centers that complements its Data Center Manager (DCM) product, a key ingredient in more than 80 percent of today’s Data Center Infrastructure Management (DCIM) offerings. The company announced Virtual Gateway, a cross-platform keyboard-video-mouse (KVM) software development kit (SDK) that helps solution providers offer enhanced capabilities for diagnosing and troubleshooting data center hardware.

    It’s a virtual solution rather than a hardware approach to managing and troubleshooting. The solution is vendor agnostic, so it will appeal to those who have IT devices from multiple vendors (most people). It offers visibility and control for IT assets, consolidated central access to racks, blades, network and storage from one hub. The Intel Virtual Gateway can support up to 10,000 managed devices and 50 simultaneous sessions of remote access.
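    Those capacity limits describe a gateway that tracks a device inventory and caps concurrent remote sessions. The toy sketch below is purely illustrative of that shape (it is not Intel's SDK; all class and method names are invented): registration is bounded at 10,000 devices and session grants at 50.

```python
# Illustrative toy model, NOT Intel's actual SDK: a gateway enforcing
# the stated limits of 10,000 managed devices and 50 concurrent
# remote-access sessions.
class KVMGateway:
    MAX_DEVICES = 10_000
    MAX_SESSIONS = 50

    def __init__(self):
        self.devices = set()
        self.sessions = set()

    def register(self, device_id: str) -> bool:
        # Refuse registration once the device inventory is full.
        if len(self.devices) >= self.MAX_DEVICES:
            return False
        self.devices.add(device_id)
        return True

    def open_session(self, device_id: str) -> bool:
        # A session requires a known device and a free session slot.
        if device_id not in self.devices:
            return False
        if len(self.sessions) >= self.MAX_SESSIONS:
            return False
        self.sessions.add(device_id)
        return True

    def close_session(self, device_id: str):
        self.sessions.discard(device_id)

gw = KVMGateway()
gw.register("switch-07")
print(gw.open_session("switch-07"))   # True
print(gw.open_session("unknown-01"))  # False
```

    In a real deployment the session cap is what bounds the gateway's bandwidth and security exposure, while the large device limit lets one hub front an entire multi-vendor estate.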

    IT managers can use Virtual Gateway to securely configure or fix compatible components remotely, whether servers, network switches or storage devices, in a “one-to-many” model. The company is positioning Virtual Gateway as a natural complement to data center power management. The first solution provider to integrate Intel Virtual Gateway is Schneider Electric, in its StruxureWare datacenter product.

    A Shift in KVM Appliances

    “This technology represents an evolutionary shift from the traditional KVM hardware-based control and management appliances to a virtual solution,” said Jennifer Koppy, Research Manager, IDC. “With the goal of creating more efficient and agile data centers, IT managers and CIOs are exploring datacenter infrastructure management (DCIM) solutions to manage and control IT infrastructure. Intel’s newly announced Virtual Gateway, as well as its existing DCM tool, help provide insight and analytic capabilities for datacenter managers.”

    Because it’s being distributed as an SDK, it can be integrated into existing consoles. It is also a natural extension of Data Center Management “IT device” monitoring.

    “One player we’re enabling is the traditional DCIM player,” said Jeff Klaus, General Manager of Intel DCM. “It’s a more natural model to sit for that constituent. We’re helping them to get to the IT management space, bringing IT devices and facilities together.”

    “By integrating IT device control developed and supported by a trusted industry provider such as Intel, DCIM providers that go to market with Intel Virtual Gateway are benefiting from Intel’s credibility and position in the market,” said Koppy.

    Zhao Ming, Operation Manager at China Telecom Cloud Computing Co provides an example of how Virtual Gateway has helped operations:

    “With the increasing number of data centers and IT devices operated by China Telecom, we were facing a challenge in configuring and managing the important hardware in our environment. Most remote management solutions were hardware-based, expensive, or proprietary. The ZZnode ALOES vKVM solution, which is based on Intel Virtual Gateway technology, now offers us cross-OEM visibility and remote access with the capability to support up to 10,000 managed devices and 50 simultaneous sessions of remote access.”

    For more on Intel’s approach to KVM and data center management, see The Tower of Babel Invades the Data Center & The NOC at Industry Perspectives.

    3:15p
    Peak Hosting is Anchor for Digital Realty Project

    An artist’s illustration of Building G at Digital Realty’s Digital Loudoun campus in Ashburn, Virginia. (Image: Digital Realty)

    Peak Hosting will lease more than 1 megawatt of space in Digital Realty’s newest building in Ashburn, Virginia, the companies said this week. Peak Hosting is the first operational customer in the 200,000 square foot first phase of Building G at the Digital Loudoun campus, with additional tenants currently being installed.

    The Peak Hosting deal is the latest in a series of announcements highlighting the expansion of Digital Realty’s operations in Ashburn, as well as the broader growth trends for the data center industry in Loudoun County. Digital Realty said Wednesday that it would invest an additional $150 million in Building G, a 400,000 square foot project in two phases. The first phase includes space for 10 Turn-Key Flex PODs, each spanning about 10,000 square feet, as well as 100,000 square feet of powered shell space and 30,000 square feet of offices. Phase two will feature 12 Turn-Key Flex data center PODs.

    Peak Hosting is a managed hosting provider that describes its business as “Operations as a Service.” The San Francisco-based company provides dedicated, cloud, and hybrid enterprise-class architectures, promising to manage “everything but your code.”

    Targeting ‘Cloud Fatigue’

    “For companies with less than 1,000 servers, the economic incentive to do things in-house simply isn’t there,” said Matt Lewin, Chief Executive Officer for Peak Hosting. “It just becomes a distraction. We are eliminating ‘cloud fatigue’ for businesses across the nation and providing the best engineers, architects, and support staff on the Internet. This isn’t just ‘managed hosting’; it’s a fully-outsourced, customized technical operations service.”

    “Several factors led to our choosing Digital Realty to expand our east coast footprint,” said Jeffrey Papen, founder of Peak Hosting. “We committed to over a megawatt in one of Digital Realty’s newest properties where we can leverage our deep consulting and technological experience to build, from scratch, a managed hosting environment where we can implement custom designs to solve the specific business problems that our clients face.”

    Digital’s Turn-Key Flex solution lets customers select from a catalogue of components during the design and construction process to meet their data center specifications. The program offers customers finished “plug and play” raised-floor data center space, which shifts the data center development costs from the tenant to the landlord, and allows for much quicker deployment than if the customer built a new facility on its own.

    “We are very pleased to expand our relationship with Peak Hosting at our Digital Ashburn Datacampus,” said Michael Foust, Chief Executive Officer of Digital Realty.  “Understanding their business needs in terms of geographic diversity and flexibility enabled us to deliver a data center solution that will support their immediate and long-term growth plans.”

    5:00p
    IBM Acquires CSL to Advance the Cloud on System z

    IBM boosts its System z portfolio by acquiring CSL International, Actian leverages previous acquisitions to launch new cloud and big data platforms, and EastWest Bank selects HP to build a private cloud to update its infrastructure.

    IBM acquires CSL International. As a strategic investment to further its System z portfolio, IBM has announced a definitive agreement to acquire CSL International. Privately held CSL is a leading provider of virtualization management technology for IBM’s zEnterprise system. CSL’s CSL-WAVE software enables companies to monitor and manage their z/VM and Linux on System z environments using a powerful and easy-to-use interface. The zEnterprise System enables clients to host the workloads of thousands of commodity servers on a single system. “As clients create smarter computing environments, they are looking for ways to manage IT costs and complexity without sacrificing security or the ability to scale,” said Greg Lotko, IBM business line executive, System z. “The response by clients to the advantages of Linux on System z has been tremendous, with the shipped capacity nearly doubling in 1Q13 year to year. With the acquisition of CSL International, IBM expands its cloud virtualization capabilities, making it even easier for clients to take advantage of Linux on System z.”

    Actian launches DataCloud and big data analytics platforms. Actian announced its plans to leverage the assets from its recent acquisitions of ParAccel, Pervasive Software and Versant Corporation by offering two new platforms, the Actian DataCloud and ParAccel Big Data Analytics platforms, to tackle the challenges of the Age of Data. The DataCloud platform integrates cloud and on-premises applications while providing robust data quality and other data services. It offers three tiers of data and application integration: Invisible Connect, a prebuilt one-click integration between existing SaaS applications; Basic Connect for prepackaged applications; and Advanced Connect for enterprise-grade environments. The big data analytics platform offers unconstrained analytics – a real-time, high-performance platform extracting analytic value from data while removing constraints around size, speed and complexity of analytics. “We believe that every company, no matter its size, should harness the promise of big data and analytics, and we’ve invested hundreds of man-years and millions of dollars to deliver two scalable, completely modern platforms that marry our decades of data expertise with innovative, cutting-edge architectures,” said Steve Shine, chief executive officer of Actian. “To win in the Age of Data, organizations must become action-enabled enterprises with access to unconstrained analytics and frictionless data integration. Our modular approach delivers the quickest time to value with phenomenal scalability to future-proof your business.”

    HP builds private cloud for EastWest Bank.  HP (HPQ) announced that EastWest Bank has selected HP Converged Infrastructure to streamline IT operations, reduce costs and scale its IT infrastructure in response to increasing customer transactions. The upgraded infrastructure includes HP 3PAR StoreServ T400 Storage System and HP ProLiant DL360 Servers. EastWest Bank realized that its legacy IT infrastructure hindered productivity, and began evaluating solutions from IBM, EMC and HP. “Our previous system didn’t give us enough flexibility to meet our daily business needs; the infrastructure was limiting because we were unable to allocate storage between systems,” said Randy Evangelista, vice president and head, Information Technology Group, EastWest Bank. “By implementing HP Converged Infrastructure, we’ve taken an important step towards building a private cloud and delivering a dynamic and flexible IT environment that can keep pace with our business demands.”

    6:52p
    CNBC Features CoreSite Trading Hub

    An article over at CNBC looks at the growing importance of the Washington, D.C. market for high frequency trading, especially on days when the U.S. government issues potentially market-moving reports. The story focuses on the concentration of trading activity at the CoreSite data center building at 1275 K Street. CNBC writes: “The idea: Get access to federal data milliseconds faster than those traders waiting patiently for it to travel at the speed of light up fiber optic lines to markets in New York, New Jersey and Chicago.”

    Long-time readers of Data Center Knowledge will be familiar with the value of 1275 K Street as a low-latency hub, as we wrote about this topic three years ago. But the CNBC story is likely to increase the profile of the CoreSite facility, and the prospect of infrastructure in Washington itself having a latency advantage over trading hubs in northern Virginia and New Jersey. Check out the CNBC web site for the full story.

