Data Center Knowledge | News and analysis for the data center industry
 

Thursday, January 17th, 2013

    12:30p
    Cloud News: Verizon Terremark Enhances Enterprise Cloud

    Here’s a roundup of some of this week’s headlines from the cloud computing sector:

    Verizon Terremark enhances enterprise cloud. Terremark announced that it has increased availability, security and flexibility for its Enterprise Cloud service. To meet increasing demand for cloud resources, Terremark will expand cloud platforms in data centers located in Dallas and London. Additionally, in response to strong demand for hybrid clouds and to further simplify adoption, Verizon Terremark will extend the Enterprise Cloud service to include instance-based compute and storage. This feature allows customers to pay for their cloud services per virtual machine rather than reserving resource capacity, while retaining complete visibility into usage through embedded CloudSwitch software technology. “The ability to provide rapid access to cloud environments with a high level of security shows our leadership and commitment to the Infrastructure-as-a-Service market, not only domestically but on a global scale,” said Chris Drumgoole, Verizon Terremark’s senior vice president of global operations. “Through the enterprise-scale cloud ecosystem we have built, Verizon Terremark is in the best position to serve enterprises and governments and enable them to improve the lives of consumers and citizens through the use of the best technology available.”

    SOASTA awarded cloud provisioning patent.  SOASTA announced that the United States Patent and Trademark Office has issued SOASTA the industry’s first-ever patent for Cross-Cloud Grid Provisioning, U.S. Patent No. 8,341,462. The technology enables SOASTA and its customers to realistically simulate mobile and web traffic by deploying thousands of servers across different cloud providers simultaneously. The global test cloud from SOASTA is able to leverage more than 500,000 servers in 60 global locations running on 20 providers, including Amazon, Rackspace, IBM, Microsoft, and GoGrid. “Cloud computing depends on rapid deployment and on-demand access,” said Melinda Ballou, Program Director for IDC’s Application Life-Cycle Management research. “Workloads like load and performance testing that can depend on a large number of variegated servers driving traffic from different locations are a logical application for cloud computing. Grid provisioning technology like SOASTA’s can provide immediate access to these load servers across environments to help with the problems users face when trying to utilize different cloud platforms for testing.”
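    SOASTA’s patent covers its own grid-provisioning technology, but the underlying idea of driving traffic from many clouds at once can be illustrated with a general-purpose library. Below is a minimal, hypothetical sketch using Apache Libcloud, not SOASTA’s implementation; the credentials, image and size choices are placeholders.

        # Minimal cross-cloud provisioning sketch with Apache Libcloud.
        # This is NOT SOASTA's patented method, just an illustration of spinning up
        # one load-generator node per provider through a single API.
        from libcloud.compute.types import Provider
        from libcloud.compute.providers import get_driver

        # Placeholder credentials; each entry maps a label to (provider, driver args).
        CLOUDS = {
            "amazon":    (Provider.EC2,       ("AWS_KEY", "AWS_SECRET")),
            "rackspace": (Provider.RACKSPACE, ("RS_USER", "RS_API_KEY")),
        }

        def provision_grid(name_prefix="loadgen"):
            nodes = []
            for label, (provider, creds) in CLOUDS.items():
                driver = get_driver(provider)(*creds)
                size = driver.list_sizes()[0]    # smallest available flavor
                image = driver.list_images()[0]  # first listed image, for illustration only
                nodes.append(driver.create_node(
                    name="%s-%s" % (name_prefix, label), size=size, image=image))
            return nodes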

    Microsoft launches new management tools for Cloud OS.  Microsoft (MSFT) announced the availability of new solutions to help enterprise customers manage hybrid cloud services and connected devices with greater agility and cost-efficiency. System Center 2012 Service Pack 1, the enhanced Windows Intune, Windows Azure services for Windows Server and other new offerings deliver against the Microsoft Cloud OS vision to provide customers and partners with the platform to address their top IT challenges. “With Windows Server and Windows Azure at its core, the Cloud OS provides a consistent platform across customer datacenters, service provider datacenters and the Microsoft public cloud,” said Michael Park, corporate vice president of marketing for Server and Tools, Microsoft. “Powerful management and automation capabilities are key elements of the Cloud OS, taking the heavy lifting out of administration and freeing IT organizations to be more innovative as they embrace hybrid cloud computing and the consumerization of IT.”

    1:00p
    OVH Goes Big and Green With New Quebec Data Center

    A look at the new OVH data center in Quebec, which will eventually be able to accommodate 360,000 servers. (Photo: OVH)

    Web hosting giant OVH has officially opened up a massive, energy-efficient data center in Beauharnois, Quebec to act as its North American presence. The facility, located in a suburb of Montreal, is a behemoth of a data center with capacity for 360,000 servers.

    The new facility is distinctive: built in a former Rio Tinto Alcan aluminum plant, it uses an airflow design reminiscent of the Yahoo Computing Coop, allowing waste heat to rise and exit through a central ceiling vent.

    This isn’t the first time OVH has focused on green design. DCK readers may be familiar with OVH for the innovative cube-shaped data center it opened in 2011 in Roubaix, France, which houses servers in an exterior corridor built around an open center, allowing for easy airflow through the facility. One of the reasons the company chose Quebec is that the location works with this RBX-4 “cube” design (more on that in the April coverage).

    Just 300 Meters from Hydro Dam

    Beauharnois is just outside the island of Montreal and uses renewable energy from a hydro-electric dam located just 300 meters from the building. The facility was also designed to operate without air conditioning, relying on proprietary cooling technology developed in-house. All of this results in a claimed Power Usage Effectiveness (PUE) of less than 1.1.
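    For context, PUE is simply total facility power divided by the power delivered to IT equipment, so a value below 1.1 means less than 10 percent overhead for cooling and power distribution. A quick, purely illustrative calculation (the numbers below are hypothetical, not OVH’s):

        # Illustrative PUE arithmetic; figures are hypothetical, not OVH's actual load.
        it_load_kw = 1000.0      # power drawn by servers, storage and network gear
        overhead_kw = 90.0       # cooling, power distribution losses, lighting, etc.
        pue = (it_load_kw + overhead_kw) / it_load_kw
        print("PUE = %.2f" % pue)   # 1.09, under the claimed 1.1 threshold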

    “We set our standards very high and our prices are competitive,” said Octave Klaba, the CEO and Founder of OVH.com. “We expect to meet North American users’ needs as we do in Europe, where we already manage 140,000 servers.”

    OVH designs and builds its own servers in order to control all of the components and optimize performance. The 2013 roll-out focuses primarily on Intel’s Ivy Bridge/Sandy Bridge server technologies, hosted on a guaranteed network pipe with unlimited bandwidth. Since 2006, OVH.com has deployed its own fiber-optic network, which is today one of the world’s largest, covering Europe and North America and including 33 points of presence (PoPs). It offers 2.5 Tbps of bandwidth capacity and continues to grow.

    The company offers what it calls a dedicated cloud, built on VMware’s vSphere. Customers can provision an initial virtual data center in under half an hour and adjust it afterward in less than two minutes. Customers don’t share server resources; OVH provides dedicated physical servers to each customer (hence the initial half hour of provisioning time). OVH guarantees 99.99% availability for the dedicated cloud.
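    Because the dedicated cloud is built on standard vSphere, a customer could in principle manage the provisioned environment with ordinary VMware tooling. The sketch below uses pyVmomi to connect and list the dedicated hosts; the endpoint, credentials, and the assumption that OVH exposes a customer-facing vSphere API endpoint are illustrative, not taken from OVH documentation.

        # Hypothetical pyVmomi sketch: connect to a dedicated cloud's vSphere endpoint
        # and list its physical hosts. Hostname and credentials are placeholders.
        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        ctx = ssl._create_unverified_context()   # skip cert checks for the example only
        si = SmartConnect(host="pcc-example.ovh.com", user="admin", pwd="secret", sslContext=ctx)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.HostSystem], True)
            for host in view.view:
                print(host.name, host.summary.hardware.numCpuCores, "cores")
            view.Destroy()
        finally:
            Disconnect(si)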

    This is a sizeable hosting provider that has a business model based on total ownership and control of the supply chain. It’s a leader in Europe, and now it has a significant presence in North America, with more planned.

    “North America is a territory so vast that it takes at least three large data centres to cover the entire population,” CEO Octave Klaba said when the plans were revealed. “We are in phase A, which covers the East Coast. Later we plan to do the same for the west and certainly the center.”


    A wall of servers inside the OVH data center in Quebec. (Photo: OVH)

    2:00p
    Telx Speeds Cloud Connections With AWS Direct Connect

    Hurricane Sandy provided a reminder that businesses need to consider business continuity and disaster recovery as part of overall business and IT strategies. Increasingly demanding customer expectations and accelerating usage of cloud services and hybrid architectures require a variety of network strategies to connect users, enterprise data centers, colocation facilities, and the cloud.

    These trends are behind recent moves by Telx, as the company continues to up its interconnection game. Telx has announced the availability of Amazon Web Services (AWS) Direct Connect for Telx clients across the United States.

    This means customers in Telx data centers can now leverage private interconnection to an expanded roster of AWS facilities, and this can be combined with network services from the hundreds of carriers and cloud providers interconnecting at Telx’s facilities.

    “Extending Telx’s service offerings with AWS enables clients to design their applications to achieve faster end-user response times, greater security and availability, and better protection from potential regional failures,” said Joe Weinman, senior vice president of cloud services and strategy for Telx. “Today’s enterprises face an array of complex application and architectural requirements. Telx’s continued investments in state-of-the-art facilities and ever-increasing ability to provide a rich variety of interconnection solutions between facilities and to AWS continues to increase the value of the Telx ecosystem.”

    This announcement means Telx clients can procure secure, private access to the AWS network to support requirements for mission-critical systems in the event of a disaster. Clients with international end-users can also leverage Telx’s gateway facilities, international network service providers, and Telx’s AWS Direct Connect and Datacenter Connect to enable localized access to their global user base. In short, the company is taking several steps to make sure a wide range of connection options is available nationwide to its customers.

    This widening interconnection strategy is applicable in a variety of scenarios. The company provided a few in the release:

    • Connectivity for direct storage – Private, high-throughput access to AWS facilitates periodic migration/replication of data to or from AWS into customer-managed storage solutions; this enables a wide spectrum of business continuity/disaster recovery policies and data retention strategies.
    • Big Data – High-bandwidth connectivity provides customers with the ability to efficiently transfer large data volumes into and out of AWS, allowing for import/export of large-scale data sets for high performance “big data” computing applications.
    • High-performance applications – High bandwidth combined with low-latency connectivity for improved I/O and API response times allows customers to use AWS as an extension of their data center LAN and integrate hybrid cloud strategies into a wide range of applications.
    • Custom hybrid architectures – High-throughput access to AWS enables integration of custom hardware solutions into the AWS management system, with real-time streaming of data to and from AWS.
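    For readers curious what ordering a Direct Connect circuit looks like programmatically, here is a hypothetical sketch using the present-day boto3 SDK (in 2013 this would have gone through the AWS console, the earlier boto library, or a Telx account team). The facility code and connection name are placeholders.

        # Hypothetical boto3 sketch: list Direct Connect locations and request a 1 Gbps
        # connection at a chosen facility. Location code and names are placeholders.
        import boto3

        dx = boto3.client("directconnect", region_name="us-east-1")

        # Each Direct Connect location is identified by a short facility code.
        for loc in dx.describe_locations()["locations"]:
            print(loc["locationCode"], loc["locationName"])

        conn = dx.create_connection(
            location="EqDC2",                   # placeholder facility code
            bandwidth="1Gbps",
            connectionName="telx-to-aws-example",
        )
        print(conn["connectionId"], conn["connectionState"])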
    2:30p
    Hyve Unveils Open Compute Servers and Storage

    A participant at the Open Compute Project Summit talked with a Hyve Solutions staffer at the company booth. An Open Compute Project compliant rack developed for gaming company Riot Games was available for attendees to view at the conference. (Photo by Colleen Miller.)

    Hyve Solutions, a division of SYNNEX Corporation (SNX), rolled out Open Compute Project compliant hardware at Tuesday’s Open Compute Project (OCP) Summit, which drew 1,900 participants. Many attendees at the event, held in Santa Clara, Calif., stopped by the company’s booth to review the production-grade rack on hand.

    Riot Games, an online PC game developer and publisher whose flagship title is “League of Legends,” is using Hyve’s OCP hardware within its data center environment. Hyve Solutions worked with Riot Games engineers to tailor an OCP solution that would meet Riot Games’ needs around scalability, reliability, cost efficiency, and environmental responsibility in data centers.

    “Ultimately, our investment in OCP provides an even better player experience for the more than 12 million core gamers who play League of Legends every day,” said Ron Williams, Riot Games’ Vice President of Operations. “We especially like the vanity-free aspect of OCP. By eliminating unnecessary components, plastic, paint and overhead, we’re able to spend more of our technical dollar where it counts: on performance that will translate into an even more responsive and fun in-game environment.”

    Steve Ichinaga, Senior Vice President and General Manager of Hyve Solutions, said Hyve’s ability to deliver servers quickly is a differentiator. “This experience shows that OCP solutions can be designed, validated and deployed within a very short time frame: less than three months from discovery to delivery of a fully integrated, plug-and-play production solution,” said Ichinaga.

    Additionally, Hyve announced the creation of a new OCP server platform, the Hyve Solutions 1500 Series, which fits in a standard 19-inch rack. The new 19-inch form factor of OCP V2 (the 21-inch Open Rack size was announced in May) uses building blocks similar to existing OCP designs, but makes for easier evaluation and more flexible deployment of OCP solutions without major data center retrofitting.

    “Hyve Solutions’ customers want the power efficiency and price benefits of OCP but are often constrained by existing data center requirements for standard 19” racks,” said Ichinaga. “Our new Hyve Solutions 1500 Series platform delivers these benefits without removing existing rack infrastructure.”

    OCP Summit Hackathon

    The OCP Summit Hackathon is also taking place during the conference. Hyve, which provides customers with purpose-built data center servers and storage, is also contributing a high-density storage design concept to the OCP community. Hyve engineers will participate in the brainstorming (“hacking”) during the summit.

    The design concept for the storage solution is 2 OpenU high and can accommodate fifteen 3.5-inch drives. Hyve said it addresses many points along the storage spectrum, including capacity/bulk storage, cloud storage and “cold” storage. It is form-factor compatible with the Open Rack compute node and designed to occupy similar space in an Open Rack innovation zone, the company said.

    “In the spirit of OCP’s openness and collaboration, Hyve Solutions is contributing our Open Rack storage design concept to the community to help continue to develop the most efficient computing infrastructure possible,” Ichinaga noted.

    3:00p
    Rackspace: We’ll Fill Data Centers With Open Compute Gear

    Mark Roenigk, COO, Rackspace, shared how Rackspace based its new infrastructure on Open Compute standards and that it will be moving all its cloud infrastructure to OCP compliant hardware this spring. (Photo by Colleen Miller.)

    SANTA CLARA, Calif. – From this point forward, all new servers added to the Rackspace Cloud will run on Open Compute hardware, the company said yesterday. The announcement at the Open Compute Summit marked one of the largest commitments to hardware designs based on the Open Compute Project.

    “Every expansion for our cloud business will be 100 percent Open Compute hardware,” Rackspace chief operating officer Mark Roenigk said in an interview with Data Center Knowledge.

    That deployment will begin this spring, when Rackspace will install its Open Compute servers in a new data center in Ashburn, Virginia that has 6.2 megawatts of IT capacity. The servers will reside in a modified version of the Open Rack, which is wider than traditional racks.

    “I believe we’re going to fill it up fast, all 6 megawatts,” said Roenigk. “It will take the load off of other data centers where we’re straining our capacity. When we move to this new platform, we can point every new customer there.”

    Rackspace is one of the largest providers of cloud computing services, and spends more than $200 million a year on servers and storage. A significant chunk of that will shift from previous suppliers Dell and HP to Quanta and Wistron, two Taiwan-based firms that build custom servers based on Open Compute designs.

    Last May, we outlined Rackspace’s plans to use designs from the Open Compute Project to reduce its costs on servers and operations. The company has spent the last year developing servers and racks based on Open Compute designs, including servers from Quanta and Wiwynn (Wistron’s U.S. business) and a version of the Open Rack built by Delta, another Taiwan firm.

    Rackspace’s new hardware design will require some slight modifications from the landlord of its Virginia data center, DuPont Fabros Technology. The electrical distribution, which is typically routed below a raised floor, will be shifted to overhead trays. Running power overhead gives Rackspace more flexibility in where it places equipment in its data center, a consideration driven by the need for flexibility as the company’s storage hardware evolves.

    3:30p
    Fusion-io Sets the Stage for the All-Flash Data Center

    Fusion-io has introduced ioScale (pictured above), which features pricing that promises to make the all-Flash data center more accessible. (Photo: Fusion-io)

    At the Open Compute Summit in Santa Clara on Wednesday, Fusion-io (FIO) announced its newest product line, Fusion ioScale. Aimed at hyperscale and cloud companies, ioScale provides up to 3.2 terabytes of ioMemory capacity and is available to order in minimum quantities of 100 units. Pricing starts as low as $3.89 per gigabyte, with increasing discounts based on volume.

    “By making ioScale available to growing webscale and emerging cloud companies, Fusion-io is at the forefront of the transition to the all-flash hyperscale datacenter, powered by open software defined solutions,” said David Flynn, Fusion-io CEO and Chairman. “Hyperscale companies are an entirely different market with different needs compared to enterprise organizations. Fusion ioScale has been specifically designed with the input of existing hyperscale market leaders to maximize the simplicity of the all-flash datacenter and meet the unique needs of webscale customers.”

    Evolving from past Fusion-io products, ioScale targets the unique needs of webscale and emerging cloud companies. “We’ve been involved in all stages of the product’s research and development, and we’re excited by this technology’s potential to help the industry meet its rapidly growing storage demands,” said Frank Frankovsky, Chairman of the Open Compute Foundation.

    Fusion ioScale contains up to 3.2TB of capacity on a single half-length PCIe slot, enabling a small form factor server to reliably scale to 12.8TB or more. Servers that support UEFI (Unified Extensible Firmware Interface) can boot from ioScale, further eliminating the need for RAID controllers or disk infrastructure. It also includes enterprise reliability with the Self-Healing, Wear Management, and Predictive Monitoring capabilities of Fusion ioMemory. ioScale is compatible with the Fusion ioMemory software development kit (SDK) to leverage application programming interfaces (APIs) like Atomic Writes and directFS, permitting applications to run natively on flash.
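    A bit of back-of-envelope math on those figures: at the quoted starting price and card capacity, and assuming a server with four half-length PCIe slots (the slot count and undiscounted price are assumptions for illustration), the per-card and per-server numbers work out as follows.

        # Back-of-envelope arithmetic from the figures above; the four-slot server and
        # undiscounted list price are assumptions for illustration only.
        card_capacity_gb = 3200        # 3.2 TB per half-length PCIe card
        price_per_gb = 3.89            # quoted starting price, before volume discounts
        cards_per_server = 4           # assumed number of half-length slots

        card_price = card_capacity_gb * price_per_gb
        server_capacity_tb = card_capacity_gb * cards_per_server / 1000.0

        print("per-card list price: $%.0f" % card_price)            # ~$12,448
        print("per-server flash:    %.1f TB" % server_capacity_tb)  # 12.8 TB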

    “Hyperscale companies architect their infrastructure with bare-bones servers and open source software that scales out cost-effectively in the hundreds and thousands,” said David Floyer, Wikibon Chief Technology Officer. “These organizations focus on capital expenses; this is very different from the operating expense focus seen at traditional enterprises that implement feature-rich infrastructure with long lifespans.”

    “The data requirements of these hyperscale companies are growing astronomically fast, much faster than the enterprise market,” said Floyer. “The design of the ioScale flash memory products will enable Fusion-io to reach a broader range of webscale and cloud companies, including the emerging hyperscale leaders who will power the services consumers will enjoy in an always-connected world.”

    6:58p
    Scenes from Day 1 of Open Compute Summit

     


    Attendees to the Open Compute Summit IV converse at the Avnet booth. (Photo by Colleen Miller.)

    The first day of the Open Compute Project Summit IV drew a crowd of 1,900 hardware and technology professionals to the Santa Clara Convention Center, and there was plenty of buzz about the technological innovations on display. The Open Compute Project (OCP) is now in its second year, and its emphasis is on advancing technologies and releasing the designs as open hardware, with the aim of improving servers and changing data centers. Here’s a photo gallery of the first day: Highlights from Open Compute Summit IV.

    9:12p
    Digital Realty Buys 3 Data Centers in Paris

    Digital Realty Trust has completed the acquisition of a three-property data center portfolio in the Paris area from Bouygues Telecom for €60 million ($80.3 million US). The deal was structured as a sale-leaseback transaction, with Bouygues selling the three properties to Digital Realty and then immediately signing long-term, triple-net leases for all three facilities. It’s a mutually beneficial transaction that boosts Digital Realty’s portfolio and frees up capital for Bouygues to use for other purposes.

    The portfolio consists of one Tier III+ facility at Montigny-le-Bretonneux and two Tier III facilities in Bievres and Saclay.  The properties total approximately 87,000 rentable square feet, with nearly five megawatts of IT capacity.

    “The acquisition of this institutional-quality portfolio further expands our footprint in this key European market and, equally important, adds a new network and IT service provider to our global customer base,” said Michael Foust, Chief Executive Officer of Digital Realty.

    The deal works for Bouygues by unlocking that capital for other productive uses. “We’re very pleased to have established a long-term strategy for our infrastructure assets with a leading global provider of data centre solutions,” said Richard Viel, Deputy Chief Executive Officer of Bouygues Telecom. “Digital Realty’s global portfolio, financial strength and proven long-term investment philosophy provides the stable operating environment we were seeking to continue to grow our business in France.”

    The deal works for Digital Realty as it provides three more fully leased data center properties to add to an ever-expanding portfolio. “This acquisition demonstrates our ability to source and complete a highly structured transaction, enabling the seller to monetize its data centre real estate assets while continuing to maintain its critical operations with a well-capitalized, long-term data centre owner,” added Scott Peterson, Chief Acquisitions Officer of Digital Realty.  “Similar to the Sentrum portfolio in London that we acquired in July 2012, this is a continuation of our strategy of expanding our European footprint by acquiring high-quality, operating data centre facilities that are home to top tier global brands.”

