Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, August 27th, 2013

    12:30p
    Data Storage in Flux – Time for a Radical Change?

    Dan Crain is CEO of Whiptail, a provider of flash storage solutions, based in Whippany, New Jersey.

    DAN CRAIN
    Whiptail

    Over the last few months, two diametrically opposed data points about the state of the storage industry were published by credible sources, in well-known, trusted journals.

    John Webster – one of the most intellectually honest and rational storage industry analysts I know – published a piece on Forbes.com titled “No New Storage in 2013.” He cites discussions with various IT executives about their 2013 intentions to acquire additional data storage equipment, and concludes that there will be little spending on new storage this year.

    On the other hand, Information Week posted a story citing data from its “Information Week Outlook 2013” survey of IT executives, reporting that 44 percent of respondents said upgrading their storage infrastructure was a top priority for this year.

    Data Storage Outlook in 2013

    So which is it, up or down, and where do we go from here? Let’s examine some additional factors.

    The earnings reports of the top five sellers of commercial and enterprise storage systems – IBM, EMC, NetApp, HP and Dell – show large declines in storage system sales over the past four consecutive quarters. This would clearly support Mr. Webster’s thesis. The declines range from a 19 percent year-over-year contraction to barely flat sales. These somewhat shocking declines in revenue from storage systems – the large, complex storage arrays that store nearly all the world’s important data – across vendors that together hold most of the industry’s market share are exactly the opposite of what has been happening for nearly a decade and a half.

    The perennial discussion of the “data explosion” has been a reliable leading indicator of constantly rising storage system sales at these large technology firms, as well as numerous small ones. Indeed, data storage is one of the few remaining segments of technology manufacturing that supports both private and publicly traded “pure play” companies – firms that derive most of their revenue from sales of one primary product type – such as NetApp and EMC. Most of the other large technology firms have diversified substantially beyond their original core business. And to be fair, while EMC does offer a large selection of products to their customers, data storage devices are still a large part of their revenue and company identity.

    According to IDC, the data storage business will have an estimated total market value of between $90 billion and $100 billion this year. That is enormous! The technology is not for the faint of heart, though – M&A valuations tend to be very large, and there are limited quality properties in the market at any given time. Because storage is one of the most complex parts of the computer industry, it has resisted the trend toward rabid commoditization; the barriers to entry are high. We’re not talking about three college kids building a smartphone app at the local coffee shop here. These storage systems are designed to make sure the data is always available.

    So where’s the truth? Is the industry shrinking, growing, or flat? In the spirit of public clarity, I’d like to suggest that the answer is yes, maybe and no. Will the industry have another down year, as John Webster suggests, or will a very large group of customers execute large transitional storage strategies, as the Information Week survey suggests?

    I think the traditional, non-innovative storage industry will receive the brunt of Mr. Webster’s “No New Storage” thesis, as we are already witnessing a dramatic shrinkage in the top five providers’ market share. And the reasons for this are simple. The data storage industry, born in the 1950s, has plateaued. Today it is dominated by a handful of players who are struggling with several issues, including:

    • Boredom that has led to more innovative marketing than technical innovation
    • A radical disconnect between the wonks and the customers
    • An interdependent ecosystem that is highly resistant to change
    • Hyper aggressive sales tactics that promote ineffective solutions and alienate customers

    So what trajectory will the industry take? We are seeing a new generation of data storage technology from companies that are pioneering new ways to think about “Big and Fast Data.” The next several years will finally be transformative for the data storage business. New entrants will rise, and established suppliers will continue to wither.

    It might be the most exciting time in this industry since Professor Patterson’s team brought us the ideas that created the now-aging generation of storage machines that house the world’s data more than 30 years ago.

    The storage industry has huge market potential, but opinions on where this complicated market is going vary widely. Where do you think the market is going? Will it reach the $90-100 billion value that IDC predicts, or will it fall short?

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:30p
    Understanding Geographic and Risk Mitigation Factors in Selecting a DC Site

    Let’s face facts – the modern data center has become an absolutely integral part of any organization. We are no longer working with single data center nodes. Now, organizations must find ways to be truly distributed and ensure the highest resiliency for their data and infrastructure. A major part of this process is selecting the right data center based on the appropriate risk factors.

    Data center platforms must be built not only around infrastructure best practices – geographic and risk mitigation factors must be taken into consideration as well. Far too often, geographic factors are overlooked in site selection, or at best are incompletely examined. And most data center providers, quite frankly, don’t do much to help in that process. Many publish information about hardware reliability or facility security, but geography as a measure of a facility’s ability to competently serve its clients is often neglected.

    In this white paper, FORTRUST examines several key factors in selecting the right type of data center environment for your organization; a toy scoring sketch follows the list below. These factors include:

    • Seismic Activity (Seismic zone data and fault line analysis)
    • Threat of Flood (Defined flood zones according to varying levels of risk)
    • Incidence of Tornadoes (Zones which are more prone to tornadoes than others)
    • Hurricane Activity (Regional risk based on weather patterns and historical data)
    • Snow and Wildfires (Regional data on snow accumulation and seasonal wildfires)
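
    As a rough illustration of how such factors might be combined in a site selection exercise, here is a minimal weighted-scoring sketch. The weights and per-site scores are invented for the example and are not drawn from the FORTRUST white paper.

        # Hypothetical weighted risk scoring for candidate data center sites.
        # Scores run 0 (negligible) to 10 (severe); a real assessment would
        # draw on seismic zone maps, flood zone designations and weather data.

        WEIGHTS = {
            "seismic": 0.30,
            "flood": 0.25,
            "tornado": 0.15,
            "hurricane": 0.20,
            "snow_wildfire": 0.10,
        }

        def composite_risk(scores):
            """Weighted sum of per-factor risk scores; lower is better."""
            return sum(WEIGHTS[f] * s for f, s in scores.items())

        sites = {
            "Denver": {"seismic": 3, "flood": 2, "tornado": 4, "hurricane": 0, "snow_wildfire": 5},
            "Miami":  {"seismic": 1, "flood": 8, "tornado": 3, "hurricane": 9, "snow_wildfire": 1},
        }

        for name in sorted(sites, key=lambda n: composite_risk(sites[n])):
            print(f"{name}: composite risk {composite_risk(sites[name]):.2f}")
        # -> Denver: 2.50, then Miami: 4.65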

    Although environmental variables are extremely important – it’s also crucial to take into consideration the availability of resources. Download this white paper today to learn about:

    • Grid maturity
    • On-site power requirements
    • Multi-grid access
    • Carrier presence
    • Fiber backbone and proximity

    As the data center continues to play a key part for any organization, managers must take into consideration the environmental and geographic impact of a data center site. Remember, logical data center clusters and high levels of efficiency can only be attained when the infrastructure is built around best practices, resiliency and optimal location selection metrics.

    2:30p
    Cut Datacenter Costs and Complexity by Unifying and Simplifying DCIM

    New technologies are being born within the data center. New applications, workloads and service models are all being driven from the data center down to the end-user. As more organizations digitize their environments, the push for scalable and efficient data center services will only continue to grow. Let’s face facts, the modern data center is under intense pressure from every direction—and is challenged to keep up. Disparate teams, management tools and performance metrics are all adding to this operational challenge. With no central visibility of power consumption, asset utilization or service levels, IT and facilities departments are struggling to deliver the availability, capacity and efficiency that the business demands of today’s data center.

    Because of the growing needs around data center services, performance must be monitored and controlled at every level. Erratic and uncontrolled performance can place the entire organization at risk. At the end of the day, there is an inseparable linkage between every aspect of the data center. Without an accurate 360-degree view of how the data center is performing—whether it involves humidity levels or server response rates—organizations will quickly lose control of every operational facet including:

    • Availability
    • Resources
    • Capacity
    • Sustainability

    To combat this, organizations need to stop data center costs, complexity and capacity from spiraling out of control before business outcomes are irreparably damaged. Data center infrastructure management (DCIM) systems bridge the operational gap by providing organizations with access to centralized information and processes. In this white paper, we see that by combining insightful real-time metrics with predictive analytical tools, visualization and integrated workflows, DCIM systems can provide the ‘connective tissue’ needed to achieve a service-oriented approach. This will enable the optimization of critical data center assets and resources. Remember, a DCIM suite should cover the following three pillars (a simple illustration of the first pillar appears below):

    • Power and cooling
    • Capacity and inventory
    • IT management and business services


    [Image source: CA Technologies – DCIM: A simplified and unified approach]
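
    To make the power-and-cooling pillar concrete, here is a minimal sketch of one metric a DCIM system centralizes: power usage effectiveness (PUE), the ratio of total facility power to IT equipment power. The readings are invented for the example; this is not code from the CA Technologies white paper.

        # Toy PUE calculation over hypothetical power readings (kW).
        # PUE = total facility power / IT equipment power; values near
        # 1.0 mean almost all power reaches the IT load.

        it_load_kw = 850.0            # servers, storage, network gear
        cooling_kw = 320.0            # CRAC/CRAH units, chillers
        distribution_kw = 95.0        # UPS and power distribution losses
        lighting_misc_kw = 15.0

        total_facility_kw = it_load_kw + cooling_kw + distribution_kw + lighting_misc_kw
        pue = total_facility_kw / it_load_kw
        print(f"PUE: {pue:.2f}")      # -> PUE: 1.51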

    Download this white paper today to see how you can maximize availability, capacity and sustainability. The ability to make the right decisions about future initiatives as well as current operations will help organizations stay in control of:

    • Availability
    • Resources
    • Capacity
    • Sustainability

    Data centers are becoming increasingly difficult to control, thus putting business outcomes at risk. DCIM systems provide the connective tissue needed to address today’s operational challenges and exploit tomorrow’s opportunities. From decreasing power consumption and costs to increasing capacity and availability, DCIM systems will enable a more proactive and optimized approach. As a result, organizations will be able to safeguard availability, maximize resource utilization and achieve enhanced sustainability. With data center performance increasingly linked to business success, these improvements will all help to drive profitable growth and competitive advantage.

    4:30p
    Learn How to Extend the Life of Your Data Center

    The modern data center has become one of the core components of any company. In fact, because of cloud computing, IT consumerization and big data, data center platforms are growing at a very rapid pace. So what does your organization do when it’s time to make that data center buying or expansion decision? Does your organization have both the infrastructure resources and the budget to deploy the data center that the business needs?

    Planning an extension to the life of a data center that appears to be out of cooling and power is not merely a matter of eliminating hot spots and recapturing stranded capacity to supply a static environment. After all, when data center managers are forecasting hitting a capacity wall, they are envisioning some continued growth to support the business’ mission critical activities. This growth is a combination of increased traffic, incremental applications, and technology refreshes. However, all these stimuli for growth do not necessarily translate directly to a linear growth in IT power and cooling load.

    In creating an optimal data center environment, there must be a clear understanding of what the data center is housing now and what its resource requirements might be in the near future. This white paper outlines the importance of deploying core data center efficiencies, not only to prolong the life of your data center but to help reduce operational costs as well.

    Download this white paper today to learn how intelligent hot- and cold-aisle designs are able to create highly effective containment solutions. In creating the most optimal data center for your organization, it’s critical to architect a platform built around airflow, power and aisle control best practices. This will help your organization align IT and data center needs directly with the goals of the business.

    5:08p
    Fusion-io Accelerates Virtual Desktops with ioVDI Software

    At the VMworld 2013 event in San Francisco this week, Fusion-io (FIO) announced Fusion ioVDI software for virtual desktop infrastructure (VDI) acceleration. Based on the Fusion ioTurbine virtualization acceleration software, ioVDI expands the Fusion-io software portfolio with a virtual desktop solution optimized for flash memory.

    Fusion-io seeks to deliver data faster, with hardware and software capabilities. According to the company, ioVDI delivers:

    • Inline file deduplication, which accelerates VDI responsiveness by reducing virtual machine disk reads by over 95 percent
    • Transparent File Sharing, which reduces boot times by 5x or more, enabling a fully loaded server with 200 running VMs to reboot in 8-12 seconds
    • Write Vectoring, which reduces latency and prolongs the life of shared storage resources by keeping non-critical transient data on server-side flash rather than on shared storage (a toy sketch of the idea follows)
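
    To make the Write Vectoring idea concrete, here is a minimal sketch of routing writes by data criticality. It is a toy illustration under our own assumptions – the tier class, paths and routing rule are hypothetical, not Fusion-io’s actual API.

        # Toy sketch of "write vectoring": transient, non-critical writes go
        # to server-side flash; durable writes go to shared storage. All
        # class and path names are hypothetical, not Fusion-io's API.

        TRANSIENT_PREFIXES = ("/tmp/", "/swap/", "/pagefile")

        class Tier:
            """Minimal stand-in for a storage tier."""
            def __init__(self, name):
                self.name = name
                self.blocks = {}

            def write(self, path, data):
                self.blocks[path] = data

        def vectored_write(path, data, local_flash, shared_storage):
            # Transient data need not survive the VM, so it stays on the
            # local flash tier; everything else goes to shared storage.
            tier = local_flash if path.startswith(TRANSIENT_PREFIXES) else shared_storage
            tier.write(path, data)

        flash = Tier("server-side flash")
        array = Tier("shared array")
        vectored_write("/tmp/scratch.dat", b"...", flash, array)       # lands on flash
        vectored_write("/home/user/report.doc", b"...", flash, array)  # lands on the array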

    “With ioVDI software, enterprises can finally deliver a virtual desktop experience that is just as responsive as physical hardware,” said Vikram Joshi, Fusion-io Chief Technologist and Vice President. “Writing data creates a storage bottleneck in virtual desktop infrastructure that is uniquely solved by ioVDI software. Using highly optimized algorithms, ioVDI fetches data from server-side flash, intelligently offloading most reads and up to 80 percent of the writes from primary storage to ensure ample storage resources for applications delivered virtually. Making efficient use of CPU and I/O resources with ioVDI allows for hundreds of desktops to be hosted on a single server without compromising end-user experience, which is absolutely critical to the success of VDI.”

    Fusion ioVDI features flash-optimized VDI performance, transparent file sharing, write vectoring, dynamic flash allocation, flash-aware HA and VMware integration. Using servers from Cisco, Dell, HP or IBM, ioVDI can be deployed and integrated with existing VAAI-compliant shared storage. It supports multiple guest operating systems, including Windows XP, Windows 7 and Windows 8 running on vSphere 5.0 and 5.1. Virtual desktops are managed using VMware Horizon View and VMware vCenter Server, with event monitoring and customizable email alerts provided through vCenter.

    “Consumer adoption of tablets and smartphones as windows to the cloud is driving wider acceptance of virtual desktops (VDI) in the workplace – but only when those virtual desktops match the performance and deliver the same experience as the physical desktops they replace,” said Tim Stammers, 451 Research Senior Analyst. “Fusion ioVDI is designed to combine the performance of server-side flash with the capacity and data protection provided by conventional back-end shared storage, using features that streamline the stack for intelligent, flash-optimized performance.”

    Support for VMware Virtual SAN

    Fusion-io also announced that Fusion ioMemory products are supported by and compatible with VMware Virtual SAN, which was announced at VMworld 2013. Fusion-io offers a reliable flash tier with a unique architecture that VMware Virtual SAN customers can leverage to maximize the performance of their VMware Virtual SAN deployments.

    “As virtualization and cloud solutions continue to help enterprises deliver on business objectives, storage performance is an increasingly critical factor for meeting user expectations in virtualized infrastructure,” said Jeffrey Treuhaft, Fusion-io Executive Vice President of Products. “VMware Virtual SAN is a unique software-defined storage solution based on a resilient, highly available scale out architecture that intelligently eliminates performance bottlenecks with caching. By leveraging Fusion ioMemory in their VMware Virtual SAN deployments, customers can achieve superior VMware Virtual SAN peak performance using the server infrastructure of their choice.”

    6:30p
    Widespread Adoption of VMware NSX Network Virtualization

    An overview of VMware’s NSX network virtualization technology. (Image: VMworld)

    VMware’s new NSX network virtualization platform has hit the ground running, with a long list of partners announcing support for the new operational model. VMware touts its list of NSX partners in a blog post, saying that VMware NSX was architected to ensure that the services offered on NSX virtual networks deliver next-generation functionality, and are as co-existent, transparent and effective as those deployed on physical networks. As an extensible platform, it leverages a distributed service framework for easy insertion of partner services. A standard NSX API exposes platform capabilities to partners and allows them to consume the network through a single API.

    Cumulus Networks

    Cumulus Networks announced integration with VMware NSX, and hardware Layer 2 gateway services on networking gear running Cumulus Linux, extending virtual networks to physical workloads. The combined Cumulus Linux and VMware NSX solution empowers service providers and enterprises to rapidly provision physical and virtual networks and on-board new applications within minutes, significantly decreasing time and maintenance demands on IT. Cumulus Linux enables high capacity IP fabrics on industry-standard networking hardware. VMware NSX network virtualization will decouple logical networks from the underlying physical infrastructure.

    “With VMware NSX integration, we are helping to drive the networking industry forward with flexible, affordable solutions that adapt to our customer needs,” said JR Rivers, co-founder and CEO of Cumulus Networks. “Building on the incredible response we’ve had with Cumulus Linux, we are excited to bring this latest iteration to our customers and provide them with even faster, simpler, and more cost effective networking.”

    Juniper

    Juniper Networks (JNPR) announced that it has expanded its partnership with VMware to deliver a broad range of solutions for unifying virtual and physical networks within a virtualized data center environment. Aimed at making management of workflows across virtualized and non-virtualized systems easier, new solutions include VMware NSX L2 Gateway integration and VXLAN routing capabilities across access, aggregation, core and edge tiers of the data center network. VMware NSX L2 Gateway Services will be offered on Juniper EX Series and QFX Series core, aggregation and access switching platforms, and MX Series edge routers.

    “Network virtualization is rapidly transforming virtual data centers today, and as enterprises seek to simplify IT by accelerating their use of private and hybrid cloud environments, physical and virtual networks must be viewed and managed as a single unified infrastructure. Juniper Networks offers a broad level of VMware NSX integration across its routing and switching platforms, which not only provides enterprise customers with unprecedented levels of IT flexibility, but also allows them to fully maximize their data center investments,” said Hatem Naguib, vice president, Cloud Networking and Security, VMware.

    Brocade

    Brocade (BRCD) announced the Brocade VCS Gateway for VMware NSX, a new network virtualization gateway that integrates VMware NSX with Brocade VCS Fabric technology to unify physical and virtual environments.

    “Today’s introduction of the Brocade VCS Gateway for VMware NSX advances the capabilities of our VCS Fabric technology by unifying physical and virtualized resources within the data center,” said Jason Nolet, vice president, Data Center Networking, at Brocade. “The Brocade VCS Gateway for VMware NSX provides our joint customers with an essential element for ensuring consistent and seamless connectivity between virtualized workloads and physical resources within the data center.”

    NSX Partners

    A wide range of additional technology partners announced support for the NSX network virtualization platform, including F5 Networks, Fortinet, Dell (with its S6000 data center switch gateway for VMware NSX), Arista Networks, HP, Citrix, McAfee, Symantec and others.

     

    7:01p
    VMware Launches vCloud Hybrid Service

    A visual overview of VMware’s vCloud Hybrid Service. (Image: VMware)

    At the VMworld 2013 event underway this week in San Francisco, VMware (VMW) announced new data center locations and new capabilities for its vCloud Hybrid Service, bringing existing and new cloud-native applications to the public cloud. The new infrastructure-as-a-service (IaaS) cloud, operated by VMware, was unveiled in May.

    “Since its debut on May 21, VMware vCloud Hybrid Service has experienced great momentum and success with an over-subscribed Early Access Program, acquiring a strategic beachhead of customers taking full advantage of the ability to extend their applications to the cloud,” said Bill Fathers, senior vice president and general manager, Hybrid Cloud Services Business Unit, VMware. “With the new data centers and important new capabilities, we’re executing quickly against our vision of a hybrid cloud service that is completely interoperable with existing infrastructure and enables new and existing applications to run without compromise.”

    U.S. service availability for the vCloud Hybrid Service begins in September, delivered from three data centers – in Santa Clara, California; Sterling, Virginia; and Las Vegas, Nevada. VMware is also expanding its partnership with Savvis to accelerate adoption of the new service. VMware and Savvis will deploy VMware vCloud Hybrid Service within Savvis’ North American data center footprint in 2013 and 2014, giving customers new data center locations for vCloud Hybrid Service.

    “Savvis is proud to expand our VMware partnership to integrate our offerings with VMware vCloud Hybrid Service,” said Jeff Von Deylen, president of Savvis. “Customers using VMware vCloud Hybrid Service will benefit from our broad portfolio of network, colocation, managed hosting, managed services and cloud services to enable custom IT solutions. Hybrid cloud customers are looking for the kind of secure, low-latency network connectivity solutions that Savvis and CenturyLink can provide.”

    New hybrid capabilities will make application migration easier. Direct Connect will let customers connect to their data center networks directly, over private dedicated networks. Disaster Recovery as a Service will automatically replicate applications and data to vCloud Hybrid Service, securely protecting applications at a fraction of the cost of building out additional physical capacity. Cloud Foundry Platform as a Service will provide full support for the open source Cloud Foundry distribution and Pivotal CF. Finally, VMware Horizon View Desktop-as-a-Service will let customers run Horizon View desktops on vCloud Hybrid Service and rapidly deploy new desktops.

    “Columbia Sportswear has continued to grow over the past few years, driving the company to transform its business and IT infrastructure by implementing a software-defined data center architecture powered by VMware’s vCloud Suite,” said Michael Leeper, director, global technology, Columbia Sportswear Company. “VMware has an excellent track record of deploying reliable, high-performance, secure clouds that are compatible with existing applications and data center operations. Our long-term vision is to move beyond compute and memory to a software-defined data center, which we believe VMware’s vCloud Hybrid Service can be an integral part of reaching that goal.”

    vCloud Hybrid Service Dedicated Cloud will provide physically isolated and reserved compute resources, with pricing starting at 13 cents an hour for a fully protected, fully redundant 1 GB virtual machine with one processor. vCloud Hybrid Service Virtual Private Cloud will offer multi-tenant compute with full virtual private network isolation, with pricing starting at 4.5 cents an hour for the same fully protected, fully redundant 1 GB virtual machine with one processor.
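
    For a rough sense of scale, here is a back-of-the-envelope comparison of the two advertised rates for a single always-on VM. The hourly prices come from the announcement above; the 730-hour month is our own assumption.

        # Monthly cost comparison for one always-on 1 GB / 1 vCPU virtual
        # machine at the announced vCloud Hybrid Service rates.

        HOURS_PER_MONTH = 730          # ~24 * 365 / 12, our assumption

        dedicated_rate = 0.13          # $/hour, Dedicated Cloud
        vpc_rate = 0.045               # $/hour, Virtual Private Cloud

        print(f"Dedicated Cloud:       ${dedicated_rate * HOURS_PER_MONTH:.2f}/month")
        print(f"Virtual Private Cloud: ${vpc_rate * HOURS_PER_MONTH:.2f}/month")
        # -> roughly $94.90 vs. $32.85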

    VMware also launched a new vCloud Hybrid Service Marketplace, where customers can find all the software they need to extend their data centers to the cloud. In the Marketplace customers can discover, download and launch solutions for the vCloud Hybrid Service.

    7:30p
    Violin Memory Files for $172.5 Million IPO

    The Violin Memory series 6000 storage array.

    Violin Memory, a flash storage provider, announced that it has filed an S-1 registration statement for a proposed initial public offering (IPO) of its common stock. The initial plans call for an offering estimated at $172.5 million. The company plans to list its common stock on the New York Stock Exchange under the ticker symbol “VMEM.” The funds will be used for working capital and general corporate purposes, including further expansion of sales and marketing efforts and continued investment in research and development.

    Toshiba, with a 14 percent stake, is currently listed as the largest shareholder; other investors over the years have included GE Asset Management, Highland Capital Partners, Juniper Networks and SAP Ventures, the venture arm of SAP. Since Violin Memory was founded in 2005, the market has become crowded. Gartner ranked Violin Memory No. 1 in a flash array market share report, showing Violin with nearly 20 percent market share in the flash-based storage array category.

    Listed as a risk in the Violin Memory S-1 filing is its history of dependence on large purchases. In fiscal 2012, Hewlett-Packard represented 65 percent of total revenue, while in fiscal 2013 and the six months ended July 31, 2013, Hewlett-Packard represented less than 10 percent of total revenue. The filing also notes that it is a competitive market, with incumbent vendors including Dell, EMC, Hitachi, NetApp, IBM and others.

    Flash competitor Fusion-io (FIO) held its IPO in 2011 and has struggled on the stock market since. Another competitor, Pure Storage, has been growing rapidly and has been rumored to be considering an IPO as well.

     

    7:47p
    VMware Drives Software-Defined Data Center Vision Forward

    Pat Gelsinger, VMware CEO, chats with Martin Casado, CTO of Networking, VMware, at VMworld. (Photo: VMware)

    Celebrating its 10th annual event this year, VMworld took over San Francisco Monday, as VMware unveiled its next-generation architecture for further enabling the software-defined data center. With attendance over 22,000 at the event, VMware took the opportunity to make multiple introductions — including new innovations in network virtualization, Virtual SAN, vCloud, and vSphere with Operations Management 5.5. The event conversation can be followed on Twitter via the hashtag #VMworld.

    Accelerating Software-Defined Data Center Architecture

    A year after introducing its software-defined data center framework, VMware announced a wave of new products and services designed to take advantage of advanced virtualization in areas such as networking and security, storage and availability, and management and automation. A recent survey conducted by the company found that among businesses able to take full advantage of a complete software-defined data center architecture, 85 percent were able to generate new revenue (as much as 22 percent) for their businesses.

    “With today’s news, VMware is further empowering IT to help organizations become more agile, responsive and profitable,” said Raghu Raghuram, executive vice president, Cloud Infrastructure and Management, VMware. “New products such as VMware NSX and VMware Virtual SAN will fundamentally redefine the hypervisor and its role in the data center. Along with the recently introduced VMware vCenter Log Insight, these products represent the next wave of innovation at VMware. We continue to evolve the software-defined data center architecture to address IT’s critical needs – enabling them to build infrastructure that is radically simpler and more efficient while delivering the agility and flexibility to support the velocity of their businesses.”

    Network Virtualization

    VMware launched VMware NSX, a network virtualization platform that will deliver the entire networking and security model in software, decoupled from networking hardware. Its approach to this new operational model for networking enables data center operators to treat their physical network as a pool of transport capacity that can be consumed and repurposed on-demand.

    One unified platform marries Nicira NVP and VMware vCloud Networking and Security to create an entire networking and security model (Layers 2-7) in software. This architecture enables NSX to handle as much as 1 TB per second of network traffic per cluster of 32 hosts (roughly 31 GB per second per host). In addition, VMware NSX virtual networks support existing applications, unchanged, on any physical network infrastructure.

    VMware NSX technology partners are organized into service categories that follow the network virtualization lifecycle. More than 20 partners showed support for VMware NSX at VMworld 2013. VMware NSX will be available in the fourth quarter of 2013.

    Virtual SAN

    VMware rolled out Virtual SAN, an innovative technology that extends vSphere to pool compute and direct-attached storage. It will deliver a virtual data plane that clusters server disks and flash to create high-performance, resilient shared storage designed for virtual machines. Built on a distributed architecture, Virtual SAN will allow storage services to scale out linearly with the needs of the application. VMware has redefined the role of the hypervisor to deliver virtualized compute and storage services in an elastic, flexible fashion. The distributed architecture enables VMware Virtual SAN to deliver I/O performance comparable to mid-range storage arrays while leveraging the economics of direct-attached storage.
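
    As a back-of-the-envelope illustration of what pooling direct-attached storage means for capacity, here is a minimal sketch. The host count, disk sizes and replication factor are our own invented assumptions, not VMware sizing guidance.

        # Pooling direct-attached disks across a cluster: raw capacity
        # scales with hosts, while usable capacity is divided by the
        # replication factor used to survive host failures.

        hosts = 8
        disks_per_host = 5
        disk_tb = 1.2        # usable TB per disk (assumed)
        replicas = 2         # each object stored twice (assumed)

        raw_tb = hosts * disks_per_host * disk_tb
        usable_tb = raw_tb / replicas
        print(f"Raw pool: {raw_tb:.1f} TB; usable at {replicas}x replication: {usable_tb:.1f} TB")
        # -> Raw pool: 48.0 TB; usable at 2x replication: 24.0 TB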

    VMware vCloud Suite 5.5

    VMware vCloud Suite 5.5 was announced, with new features and enhanced product functionality, as well as broad product integrations. vSphere-based private clouds can be built using the software-defined data center architecture, providing virtualized infrastructure services with built-in intelligence to automate on-demand provisioning, placement, configuration and control of applications based on policies.

    vSphere 5.5 introduces vSphere App HA to detect and recover from application or operating system failure. The new VMware vSphere Flash Read Cache virtualizes server-side flash to dramatically lower application latency. VMware vSphere 5.5 supports configurations with twice the previous physical CPU, memory and NUMA node limits. A new vSphere Big Data Extension enables customers to run Apache Hadoop and big data workloads on VMware vSphere 5.5, alongside other applications. In addition, VMware vSphere 5.5 now supports the next-generation Intel Xeon processor E5 v2 and Intel Atom processor C2000.

    VMware’s new vSphere with Operations Management combines the vSphere virtualization platform with insight into workload capacity and health. It will enable customers to optimize their environment through integrated capacity planning while proactively monitoring and maintaining overall performance.

    VMware also announced new strategic and technology consulting services, as well as new levels of VMware Certifications.

    8:00p
    Chris Downie Named CEO of Telx

    Chris Downie, shown speaking at an industry event last year, is the new CEO at Telx. (Photo: Rich Miller)

    The Telx Group has announced that Chris Downie, who has served as the President and Chief Financial Officer of Telx since 2007, will assume the role of Chief Executive Officer effective immediately. Downie succeeds Eric Shepcaro, who passed away earlier this year.

    Downie has 22 years of experience in the communications and finance industries and has provided leadership to Telx in finance, strategic planning, business development, customer service and operational support.

    “Chris has been a compelling leader for Telx over the past six months,” said John Kelly, Chairman of the Board of Directors. “The board and the company’s investors have great confidence in Chris and his ability to drive Telx’s continued success as a premier provider of interconnection and data center solutions to our customers.”

    “I am honored to follow in the steps of such a capable leader as Eric Shepcaro,” Downie said of his new role. “Eric provided tremendous guidance as the company expanded into a number of new markets and developed innovative interconnection products and data center services for our clients. He served as a great mentor to me and we all miss him.” Downie added, “I look forward to leading Telx in the next stage of its growth.”

    Prior to joining Telx, Downie served as CFO and COO for satellite services company Motient Corporation. He also has held leadership positions at Communications Technology Advisors (CTA), BroadStreet Communications, Daniels & Associates and Bear Stearns. He holds an MBA from New York University and a BA from Dartmouth College.

    Telx provides colocation and interconnection services to more than 1,100 customers in 13 markets, with six data centers across the New York/New Jersey metro area, two facilities in Chicago, two in Dallas, four in California (Los Angeles, San Francisco, and two in Santa Clara), two Pacific Northwest facilities (Seattle and Portland), and facilities in Atlanta, Miami, Phoenix and Charlotte, N.C.

    8:05p
    London Internet Exchange Enters U.S. Market With EvoSwitch in Virginia

    The exterior of the data center in Manassas, Virginia, where the London Internet Exchange (LINX) will open an exchange with EvoSwitch, one of the tenants. (Photo: EvoSwitch)

    The London Internet Exchange (LINX) is entering the U.S. market, opening a neutral interconnection service in the EvoSwitch WDC1 data center in Manassas, Virginia. The exchange says its presence at EvoSwitch is the first step in establishing a multi-site Internet exchange championing the European interconnection model, in which Internet traffic exchanges are managed by participants, rather than the colocation providers hosting the infrastructure.

    The arrival of LINX fulfills a key goal for EvoSwitch, an Amsterdam-based colocation provider that entered the U.S. market roughly a year ago by leasing space in the COPT data center in Manassas. LINX says it intends to ultimately extend its infrastructure to additional data centers in northern Virginia.

    Both EvoSwitch and the London Internet Exchange are aligning their efforts with Open-IX, a new community effort to improve the landscape of Internet peering in the U.S. Open-IX is looking to endorse data centers and Internet exchange points (IXPs) to encourage neutral, public peering. EvoSwitch and LINX expect to receive an early endorsement from Open-IX, and to be the first to market in northern Virginia, one of the most fiber-dense regions in the U.S.

    A Beachhead for the European Model

    The new exchange introduces an approach that has thrived in London, Amsterdam, Frankfurt and several Asian markets. It represents an alternative to the market leadership of Equinix, whose Ashburn campus is the focal point for interconnection activity in northern Virginia. The European model has struggled to establish itself in the U.S., but EvoSwitch’s entry into the U.S. market and the emergence of the Open IX movement provide an alignment of interest and opportunity for the London Internet Exchange.

    The LINX Network consists of two separate high-performance Ethernet switching platforms installed across ten locations in London, including data centers for Telehouse, Telecity, Equinix and Interxion.

    “EvoSwitch approached us early November 2012 with the idea to bring the powerful IXP model that LINX represents to the USA,” said John Souter, CEO of the London Internet Exchange. “Out of those conversations, and in close collaboration with Open-IX since it was formed early 2013, we are now close to launching LINX USA in north Virginia.

    “There is a strong demand for a change in the way networks interconnect across the United States,” said Souter. “Neutral, multi-site IXPs where peers are members with a clear say in running the Exchange as stakeholders, provide real choice. They add resilience in the network, reduce latency and ultimately lower the cost of exchanging Internet traffic, which in the end stimulates growth which benefits all.”


