Data Center Knowledge | News and analysis for the data center industry

Thursday, October 16th, 2014

    3:00p
    What’s New in OpenStack Juno

    Now in its fourth year, OpenStack has become the go-to software suite for anyone wanting to stand up a cloud environment outside of an Amazon, a Microsoft or a Google data center. Legacy IT vendors have based their cloud strategies on the open source architecture, and countless startups have devised entire business models around it.

    Juno, release number 10 of the suite, is coming out this week. OpenStack Foundation leaders gave Data Center Knowledge a preview of some of the key improvements.

    Native big data analytics controls

    It was possible to use OpenStack together with big data processing frameworks such as Apache Hadoop and Spark before, but you had to roll up your sleeves and do all the integration work manually. That changes with OpenStack Juno, which introduces a new “data processing” interface for these frameworks.

    “This data processing part of OpenStack takes a little bit of pain and management out of that,” Mark Collier, chief operating officer of the OpenStack Foundation, said. “It shows up in the dashboard as a data processing tab.”

    You can select a Hadoop distribution of your choice, for example, and create templates for different compute cluster configurations, assign worker and control nodes and so on. It will be up to distribution providers to create plug-ins for this feature in OpenStack. Hortonworks is among distros that already have one, Collier said. “I would expect we’ll see additional types of frameworks in the future.”
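
    To make that workflow concrete, here is a minimal sketch of what driving the new data processing interface over its REST API might look like. The endpoint, port, tenant placeholder, flavor value and field names below are illustrative assumptions rather than the exact Juno schema.

        # Illustrative sketch only: the endpoint path, port and field names
        # approximate the OpenStack data processing (Sahara-style) REST API
        # and may differ from the exact Juno schema. Auth is assumed done.
        import requests

        API = "http://controller:8386/v1.1/<tenant-id>"      # assumed endpoint
        HEADERS = {"X-Auth-Token": "<keystone-token>"}

        # A worker node group template: which Hadoop processes run on these
        # nodes and what instance flavor they boot on.
        worker = {
            "name": "hadoop-worker",
            "plugin_name": "vanilla",        # distro plug-in chosen in the dashboard
            "hadoop_version": "2.4.1",
            "flavor_id": "<flavor-uuid>",
            "node_processes": ["datanode", "nodemanager"],
        }

        # A control node group template for the master services.
        master = dict(worker, name="hadoop-master",
                      node_processes=["namenode", "resourcemanager"])

        for template in (worker, master):
            resp = requests.post(f"{API}/node-group-templates",
                                 json=template, headers=HEADERS)
            resp.raise_for_status()
            print("created", template["name"], resp.status_code)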

    OpenStack will disrupt the telcos (in a good way)

    OpenStack has made an immeasurable impact on the cloud service provider market. All of a sudden, companies with zero or very little cloud infrastructure experience could stand up private clouds for customers or public clouds of their own to act as service providers. It has shaved off some of the technological edge proprietary cloud operators like Amazon, Microsoft and Google have had.

    But there is another sector OpenStack is quietly disrupting – the telcos. Pretty much every major telco has been actively participating in the various open source development projects that make up the OpenStack movement, Collier said. They include AT&T, Telefonica, Verizon, Orange, NTT and their peers.

    For them, OpenStack is an opportunity to rid themselves of having to spend tons of cash on expensive proprietary network management technology they buy from the incumbent vendors. Over time, they hope to be able to buy commodity off-the-shelf server hardware and use open source software to manage traffic on their networks. OpenStack is one component of that vision.

    “It’s not something that is going to be widely deployed across all of their networks in a short period of time, but that’s where they’re headed,” Collier said.

    This effort is still in its early stages, today focused primarily on improving reliability of the hardware orchestrated by OpenStack. With millions of subscribers, carriers require “a level of engineering precision that maybe isn’t needed for a traditional workload in the cloud,” Collier said. Juno is a more reliable release of OpenStack, and the telco effort deserves a lot of the credit for that.

    Object storage tiering comes to OpenStack

    Swift brought reliable, low-cost object storage technology to OpenStack in 2010. It automatically stores multiple copies of an object across multiple inexpensive hardware boxes, so “you can have a lot of hardware failure and still recover the files,” Collier said.

    With OpenStack Juno, however, Swift has gotten smarter about how it does its job. With new tiering capabilities, a user can be selective about the number of copies that are made for each object and where those copies are stored. A less critical piece of data, for example, can be copied once and stored on a low-performance box, while an important object that is frequently accessed by applications can be replicated many times, stored on flash, with its copies distributed across multiple geographic locations.
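
    The mechanism behind this in Swift is storage policies: the operator defines named policies, each backed by its own ring, replica count and class of hardware, and a client picks one per container. A minimal client-side sketch, assuming a policy named "reduced-redundancy" has already been configured and that an auth token and storage URL are at hand:

        # Client-side sketch of Swift storage policies. The policy name and
        # endpoint are assumptions; the policies themselves are defined by the
        # operator, each with its own ring and replica count.
        import requests

        STORAGE_URL = "http://swift-proxy:8080/v1/AUTH_demo"   # assumed account URL
        AUTH = {"X-Auth-Token": "<token>"}

        # The policy is chosen once, when the container is created.
        requests.put(
            f"{STORAGE_URL}/cold-archive",
            headers={**AUTH, "X-Storage-Policy": "reduced-redundancy"},
        ).raise_for_status()

        # Every object written to that container inherits the policy: how many
        # copies are kept and which tier of hardware they land on.
        with open("report.tar.gz", "rb") as f:
            requests.put(f"{STORAGE_URL}/cold-archive/report.tar.gz",
                         headers=AUTH, data=f).raise_for_status()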

    Tiering has been part of Cinder, the block storage component of OpenStack, for about a year and now comes to object storage as well.

    Cinder has been enhanced in the latest release too. There are 10 additional drivers for block storage systems in Juno, including drivers by Fujitsu, FusionIO, Hitachi, Huawei and EMC, among others.
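
    In practice, block storage tiering is consumed through Cinder volume types: the operator maps a named type to a backend, and users request that type when creating volumes. A rough sketch using python-cinderclient-style calls; the backend name, credentials and the "fast-ssd" type are assumptions, and exact call signatures vary between client releases:

        # Rough sketch of tiered block storage via Cinder volume types.
        # Backend name, credentials and the "fast-ssd" type are assumptions;
        # exact client signatures vary between client releases.
        from cinderclient import client

        cinder = client.Client("2", "admin", "secret", "demo",
                               auth_url="http://controller:5000/v2.0")

        # Operator side: define a "fast" tier and pin it to an SSD-backed driver.
        fast = cinder.volume_types.create("fast-ssd")
        fast.set_keys({"volume_backend_name": "ssd-backend"})

        # User side: request a volume of that tier; the scheduler picks a
        # backend whose capabilities match the type's extra specs.
        volume = cinder.volumes.create(size=100, name="db-data",
                                       volume_type="fast-ssd")
        print(volume.id, volume.status)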

    Mo’ better plug-ins for Neutron

    A lot of work has gone into improving Neutron, the portion of OpenStack created for advanced software-defined networking capabilities, Collier said. You can now use Neutron to implement more traditional network models, which makes it easier to migrate your network from Nova (the original network component of OpenStack) to Neutron and start using those advanced SDN features.

    There are now more SDN plug-ins for Neutron, and those plug-ins have been tested for reliability. In fact, all new drivers written for OpenStack are more reliable thanks to a new testing system that has been implemented. “For each of those drivers to be included in the release, they have to be passing a set of automated tests,” Jonathan Bryce, executive director of the OpenStack Foundation, said.

    There are about 35 Neutron plug-ins now, including new ones by IBM, Mellanox, Juniper and Brocade, among others. All of them have passed tests both in environments hosted by the vendors themselves and in a central testing environment operated by the OpenStack community. The system is essentially a globally distributed testing lab that feeds data into development servers, Bryce explained.

    5:15p
    Dell and Cloudera Roll Out Spark-Powered In-Memory Processing Appliance for Big Data

    Dell, Intel and Cloudera have co-engineered an in-memory processing appliance to run Apache Spark, a stream processing framework for real-time big data analytics. Running in memory, it is much faster than Hadoop MapReduce, according to Dell.

    Pitched as a quick way to deploy a Hadoop cluster, the solution scales up to 48 nodes.

    With roots in a project at the University of California, Berkeley, Spark is built on top of the Hadoop Distributed File System but isn’t limited to MapReduce, which was designed for batch processing. Spark was made for cluster computing, storing data in the memory of the cluster nodes for quick access.
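
    The practical difference from batch-oriented MapReduce is that a data set can be loaded once, cached in the cluster’s memory and then queried repeatedly without re-reading it from disk. A minimal PySpark sketch (the HDFS path is a placeholder):

        # Minimal PySpark sketch: read once from HDFS, cache in cluster memory,
        # then run several queries against the cached data. Path is a placeholder.
        from pyspark import SparkContext

        sc = SparkContext(appName="cached-log-analytics")

        events = sc.textFile("hdfs:///logs/2014/10/*.log").cache()

        errors = events.filter(lambda line: " ERROR " in line).count()
        warnings = events.filter(lambda line: " WARN " in line).count()

        print("errors:", errors, "warnings:", warnings)
        sc.stop()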

    Dell’s new appliance combines Spark with Cloudera Enterprise, essentially a Hadoop distribution, but one that comes with features like Spark, as well as cluster management and support.

    5:30p
    Nutanix Intros All-Flash Storage Appliance for Web-Scale Data Centers

    Web scale infrastructure provider Nutanix introduced a new all-flash storage appliance and a new feature that enables continuous availability across data centers. The announcements were made at VMworld 2014 Europe last week in Barcelona.

    NX-9000 is built for applications that require extreme speed and have large data sets, where predictable and consistent I/O latency is required, the company said. To further address these needs, Nutanix adds its scale-out compression and de-duplication technologies to leverage all compute capacity in a cluster.

    Nutanix says it provides greater application support by giving high performance to all applications and all I/O sizes, thus eliminating guess-work for how to match storage to various workload types. Data is localized for each application so that read requests are handled directly by server-attached flash, avoiding read-heavy traffic and network latency inherent in the deployment of all-flash arrays.

    The NX-9000 is available immediately. Pricing begins at $110,000 per node.

    New Metro Availability technology will be integrated into the upcoming Nutanix Operating System 4.1, the company said, giving the ability to stretch datastores for virtual machine clusters across two or more sites located up to 400km apart. Going beyond just basic hardware redundancy and other legacy technologies, the feature will let teams non-disruptively migrate virtual machines between sites during planned maintenance events, providing continuous data protection with zero recovery point objective and a near-zero recovery time objective.

    5:34p
    Isolationism, Globalization and the Role of the Colocation Provider

    Kate Baker is a business strategist at Custodian Data Centre.

    A need to know and a thirst for content have always invigorated the way the Internet has grown. Technology has always been the means by which the world has become simultaneously tangible, touchable and close, yet widened as our understanding of the world around us expands.

    The World Wide Web and data have become a means by which the traditional map boundaries of old are challenged, stretched and tested. It is the ultimate enabler of a global community that exists outside the borders and confines of land and sea. Yet as this web of interconnections has quickly developed, creating a myriad of networks inextricably linked, the political world has struggled to keep up with the speed of technological innovation.

    Whether it be online gambling, diversity issues or matters surrounding personal data, the world that we live in has to constantly balance a globalization of information and data use, with the idiosyncrasies of nations and regions. The World Wide Web has opened Pandora’s box leading to retrospective policies across many different legal and political systems.

    Tech challenges and international law

    The breadth of global legal permutations is endless, meaning that technological innovation not only has to conquer new technical challenges but also has to be mindful of international laws. Recent legislation concerning social media and bloggers that came into force in Russia on August 1, 2014, is simply a stepping stone when coupled with new legislation due to come into effect in September 2016, which states that all personal data relating to Russian citizens will have to be stored in Russia from that date.

    As legal commentator Peter Brophy has pointed out, foreign companies that need to store personal information as part of their processes will now need to have servers located physically in Russia itself. Brophy points out the potential ramifications for businesses such as the aviation industry, which uses online booking and ticketing systems for global travel, and it leads me to wonder whether this policy is even workable, technically and commercially speaking.

    This new legislation is by no means unique. In Brazil, following the revelations from Edward Snowden regarding the NSA PRISM program, lawmakers have been working on the Marco Civil da Internet (Brazilian Civil Rights Framework for the Internet). It is an act designed to guarantee and safeguard civil liberties and rights in relation to Internet use and to require companies to store information about Brazilian users in Brazil – which has serious implications for businesses looking to operate there.

    Implications for providers

    For many companies that operate their own data centers, such as Google or Facebook, this potentially means building and operating new facilities in these regions in order to continue doing business there.

    Colocation providers face a different challenge, as they provide the space that clients rent to house their own servers. They are not in direct control of what application or business is being run by their clients. So if those servers hold Russian or Brazilian citizens’ data, or serve Russian or Brazilian customers, where do the providers legally stand?

    The UK Data Protection Act’s eighth principle states that “personal data shall not be transferred to a country or territory outside the European Economic Area unless that country or territory ensures an adequate level of protection for the rights and freedoms of data subjects in relation to the processing of personal data.” When you read the interpretation of the principle, it suggests that a colocation provider could be subject to “the law in force in the country or territory in question.” Does this mean that despite many colocation providers not knowing what is on a client’s server, they could be unwittingly contravening an international law and in doing so also contravening the UK Data Protection Act?

    It would seem to me that colocation providers need to ensure that they have robust contracts and procedures in place to protect themselves in a world where some isolationist data laws seem to be the start of a series of 21st-century data-related iron curtains.

    The downside for colocation providers is that if a series of data isolationist policies were to develop, they could see their potential client pool diminish, as they could no longer vie for business on a truly global scale.

    Conversely, isolationist data center storage policies needn’t always be a negative development. The need for personal data to be stored in individual countries means that there is a strategic opportunity for companies to develop new sites abroad, and those with the financial ability to do so could corner new geographies.

    The tip of the iceberg

    In a turbulent world where differences in ideologies continue to spark wars, the Internet is at risk of becoming increasingly politicized. Isolationist data policies are the tip of the iceberg when you look at the potential for political turmoil surrounding the Internet and its ability to challenge national borders.

    Bitcoin, a peer-to-peer payment network, was born in the virtual world, and its growth has led to real-world retrospective policies such as the Russian Ministry of Finance drafting regulations to outlaw cryptocurrencies. France has also recently released a report entitled “Regulation in the face of innovation: public authorities and the development of virtual currencies,” which discusses how technological advances are affecting the global legal system.

    Government policies can only ever react to technological innovation and the disparity between technological change and legal change is one of the greatest tests facing nations.

    The growth of decentralized networks such as Diaspora, the Tor project and even email means the ability to control or switch off certain parts of the Internet is becoming a bigger challenge, leading to an underground world without borders. A court order in one country may demand an email provider be closed down, yet email itself will always continue to exist due to the number of alternate email providers out there.

    Common sense suggests that the international legal cases currently being tried, such as the U.S. government issuing a warrant to Microsoft for access to emails stored on servers in Ireland, are only the beginning of an international battle to balance isolationist and globalized views of the virtual world. Technology companies such as colocation providers, along with their legal advisors, are left in the unenviable position of trying to strategically comply and operate on a global legal scale.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:00p
    Brocade Pitches Extension Switch as Replacement for WAN Optimization Appliances

    Brocade has enhanced remote data center disaster recovery and replication capabilities with a new extension switch and enhancements to its Fabric Vision technology. The extension switches are an alternative to Wide Area Network (WAN) optimization appliances and are used for remote replication and backup.

    Brocade said its enhanced remote data center DR is achieved through what it calls superior WAN link utilization and new unique failover functionality. The company said it provides shorter recovery points and faster recovery times in the event of a disaster or in routine backup and replication tasks, over what the company calls “unreliable” WAN links.

    The DR platform also extends and enhances Brocade’s Fabric Vision technology, a fabric network monitoring solution. It has improved long-distance monitoring of remote data centers, useful when a data center isn’t physically reachable. It helps companies troubleshoot over distance and detects WAN anomalies. Companies can monitor dispersed data centers for performance and hopefully avoid unplanned downtime. It speeds up replication as well.

    Brocade is tackling two trends with its fabric networking solutions: the explosion of data and the pressure on IT to keep this data highly available across multiple data centers, hence better DR and replication capabilities over distance.

    The new Brocade 7840 Extension Switch is denser, improving performance, and offers simplified remote data center management. Replication performance over distance has been improved using data compression and disk and tape protocol acceleration. Part of Brocade’s Gen 5 Fiber Channel SAN portfolio, the switch delivers 80 Gbps of application data throughput over long distances while securing data flows with 256-bit IPsec encryption, without a performance penalty, according to the company.

    Other enhancements to the 7840 Extension Switch include:

    • WAN-side non-disruptive firmware upgrades
    • Extension trunking: combines multiple WAN connections into a single, logical, high-bandwidth trunk, providing active load balancing and network resilience to protect against WAN link failures.
    • Adaptive rate limiting: dynamically adjusts bandwidth sharing between minimum and maximum rate limits to optimize bandwidth utilization and maintain WAN performance during disruptions (a simple illustration of the idea follows this list).
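
    Brocade doesn’t describe the algorithm, but the general idea behind adaptive rate limiting is easy to sketch: probe toward the configured maximum while the WAN link is healthy, and back off toward the minimum when loss spikes. A purely illustrative sketch of that idea, not Brocade’s implementation:

        # Purely illustrative adaptive rate limiter between a configured floor
        # and ceiling. This is NOT Brocade's algorithm, just the general idea.
        def adapt_rate(current_mbps, loss_ratio, min_mbps=1000, max_mbps=10000):
            """Return the next sending rate given measured loss on the WAN link."""
            if loss_ratio > 0.01:
                # Congested or unreliable link: back off multiplicatively,
                # but never below the guaranteed minimum.
                return max(min_mbps, current_mbps * 0.7)
            # Healthy link: probe upward additively toward the ceiling.
            return min(max_mbps, current_mbps + 250)

        rate = 5000.0
        for observed_loss in [0.0, 0.0, 0.02, 0.0, 0.05, 0.0]:
            rate = adapt_rate(rate, observed_loss)
            print(f"loss={observed_loss:.2f} -> rate={rate:.0f} Mbps")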

    The Brocade 7840 offers both 40 Gigabits per second (Gbps) and 10 Gbps Fiber Channel over IP (FCIP) connectivity options.

    Fabric Vision technology is delivered through a combination of hardware and software. It helps enterprises optimize resources and overcome infrastructure complexity between data centers, and it includes tools and automation capabilities that simplify management and provide proactive monitoring, among them:

    • A Monitoring and Alerting Policy Suite (MAPS): simplifies monitoring with policy-based automation between data centers to automatically detect WAN anomalies and avoid unplanned downtime.
    • Flow Vision: accelerates troubleshooting of end-to-end I/O flows over distance with integrated diagnostics and eliminates the need for expensive third-party tools.
    • Dashboards for discovering and resolving WAN network issues via root-cause analysis and point-in-time playback. Administrators can view all critical information from thin-client, Web-accessible dashboards.

    “Managing multiple data centers is inherently complex and time-intensive,” said Jack Rondoni, vice president, Storage Networking, at Brocade. “By deploying the Brocade 7840 Extension Switch with enhanced Fabric Vision technology, organizations can create better solutions that meet or exceed their requirements and expectations for faster replication and recovery to achieve always-on operations.”

    The company also introduced a new port blade for the Brocade 8510 DCX Director, part of the DCX Backbone family. Called the Brocade FC16-64, the port blade consumes less power, increases port density and reduces cabling requirements. It scales up to 512 Gen 5 Fiber Channel ports and a total system bandwidth of 10.2 terabits per second.

    6:30p
    Terascala and Dell Upgrade High Capacity Storage Solution for HPC

    Terascala announced updates to the high capacity storage solution (HSS) for high performance computing it offers jointly with Dell, improving storage management in technical computing environments. The software and hardware improvements reflect the growing demands of university and HPC customers for a high-throughput, efficient storage environment that supports large-scale data sets.

    In keeping with what Terascala products are known for, the iterative advances maximize data throughput in storage for big data needs. Hardware improvements in the HSS offering include arrays that feature a quad-port 12 Gbps SAS interface, compared with the 6 Gbps interface in previous arrays. With that improvement the company says that peak read performance is up 59.7 percent to 10.7 Gbps, and write performance increased 74 percent to 6.09 Gbps. The HSS appliances have also been advanced from the Dell MD32xx line to the new Dell MD34xx series.

    To complement the hardware improvements, the TeraOS intelligent operating system helps speed big data workflows by simplifying management of Lustre-based storage. Lustre is the Linux-based parallel, distributed file system designed for the high-capacity storage environments used in HPC applications.

    The updated HSS product includes Lustre 2.1.6 and TeraOS version 5.3.1. To further address high-speed data movement demands, Terascala introduced its Intelligent Storage Bridge product earlier this year, which further accelerates big data with automated transfers and support for connecting Lustre, NFS, CIFS and cloud file systems.

    7:00p
    Huawei and Accenture Partner to Offer Enterprise Private Cloud Solutions


    This article originally appeared at The WHIR

    Huawei Technologies and Accenture are expanding a strategic alliance to offer cloud services in China and Southeast Asia, the companies announced Thursday. Together the companies will offer solutions for communications service providers (CSPs) and enterprise private clouds.

    The companies will leverage Accenture’s IaaS offering with Huawei’s private cloud infrastructure to offer private cloud solutions to a range of industries. They will also offer CSPs business support systems and systems integration services to support functions like billing and customer care.

    “In an era where the physical and digital worlds are increasingly converging, no enterprise is able to address all customer needs alone. Enterprises need to collaborate openly and integrate their resources and capabilities to help customers succeed,” said Mr. Eric Xu, Rotating CEO of Huawei. “Our collaboration with Accenture will further augment Huawei’s business in the enterprise ICT market, enabling us to build on our diverse product portfolio to offer our enterprise and carrier customers even more innovative software and services solutions that support them in boosting efficiency and driving revenue growth.”

    The alliance builds on a partnership established in 2010, when Huawei and Accenture began offering joint business support software services to telecommunications companies.

    Since announcing a target of $10 billion in enterprise revenue by 2017, Huawei has been expanding its enterprise services reach. The Chinese company began a partnership with Infosys in September to provide enterprise cloud and big data services.

    Huawei remains focused on expanding its service and customer base in China and emerging markets, as it faces significant barriers to some lucrative developed markets due to security concerns.

    The enterprise cloud market is in a strong growth period, and multinational service providers like CenturyLink are partnering with Chinese companies to address the tightly regulated domestic market.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/huawei-accenture-partner-offer-enterprise-private-cloud-solutions

    7:30p
    How Cloud has Changed Data Center Technology

    Let’s face it, if you’re a technologist and you’re reading this article, you’re tied to the cloud in one way or another. Whether you have Gmail synced to your phone or you upload photos to Dropbox, you’re utilizing cloud computing. Over the past few years, cloud services have become more capable and prevalent. There is more emphasis on data delivery, our ability to continuously stay connected and how our information is distributed.

    Data centers and other technologies have had to adapt to these growing trends by deploying, in reality, some pretty cool technologies. This is happening at the IT consumerization level and within the data center:

    • Cloud Computing – We know about the cloud. We know that there are now four general models to work with (private, public, hybrid and community). The really amazing part is the open-source and cloud connectivity movement that’s been happening. People behind open source projects like OpenStack and CloudStack are creating powerful cloud API models to interweave various services and even platforms. The great part is that these technologies are still evolving and becoming better. Cloud APIs and connection models push the industry toward a more unified cloud architecture. Now, new concepts around software-defined technologies are helping push the cloud boundaries even further. Software-defined storage is a lot more than just a buzz term. It’s a way for organizations to manage heterogeneous storage environments under one logical layer. When convergence around network, storage and compute intersects with software-defined technologies, you create the building blocks for a commodity cloud data center.
    • Network Communications – This is where it gets really interesting. We’ve heard about software-defined networks, but the reality is that cloud-based networking has become pretty advanced. Cloud providers are deploying highly intelligent switching components capable of handling thousands of virtual connections. Furthermore, they’re able to present multiple networks to one another and still keep various services segmented. We are seeing more converged and unified systems where advanced networking capabilities are built directly into the rack, server, and storage infrastructure. Layer 4-7 switches are not only controlling traffic – they’re intelligently manipulating it based on variables such as geographic policies, connection points, and even device interrogation rules. This is also where we begin to include software-defined networking as a powerful cloud data center concept. SDN can create very intelligent, globally connected environments. Furthermore, SDN can help with load-balancing cloud and data center infrastructures. SDN already helps with global traffic management by logically sending traffic to the appropriate data center. Moving forward, SDN will strive to create even more fluid data center traffic flow automation. These types of efforts will help with downtime, data resiliency, and disaster recovery planning.
    • Disaster Recovery/Business Continuity – Emergency events can happen at any time – and for any reason. This is where the cloud has helped many organizations create solid disaster recovery or business continuity environments. Whether it’s an active site or a “pay-as-you-go” public cloud model, DR strategies are becoming feasible for more organizations. A lot of this has to do with better global server load balancing (GSLB) and global traffic management (GTM) techniques. Our ability to route traffic based on numerous variables has empowered organizations to distribute their environments and their data. Not only do GSLB and GTM help by creating one logical network flow for data traffic and user access, administrators are also able to keep users closer to their data centers. By identifying the user’s geo-location, cloud technologies are able to route users to the data center nearest to them. In the event of a failure, GTM and GSLB are able to immediately and transparently route the users to the next available set of data center resources (see the sketch after this list).
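
    To make the GSLB/GTM idea concrete, here is a toy sketch of nearest-data-center selection with failover; the site names, coordinates and health states are invented for illustration and bear no relation to any particular product.

        # Toy GSLB-style routing: send each user to the nearest healthy data
        # center, failing over when a site goes dark. Sites and coordinates
        # are invented for illustration.
        from math import radians, sin, cos, asin, sqrt

        DATA_CENTERS = {
            "us-east": (39.0, -77.5),
            "eu-west": (53.3, -6.3),
            "ap-southeast": (1.35, 103.8),
        }
        HEALTHY = {"us-east": True, "eu-west": False, "ap-southeast": True}

        def distance_km(a, b):
            """Great-circle distance between two (lat, lon) points."""
            lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
            h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
            return 2 * 6371 * asin(sqrt(h))

        def route(user_location):
            """Pick the closest data center that is currently passing health checks."""
            candidates = [dc for dc, up in HEALTHY.items() if up]
            return min(candidates, key=lambda dc: distance_km(user_location, DATA_CENTERS[dc]))

        print(route((48.85, 2.35)))   # a Paris user lands on us-east while eu-west is down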

    The latest Cisco Global Cloud Index shows just how fast everything in the cloud is growing (a quick consistency check of the figures follows the list):

    • Annual global cloud IP traffic will reach 5.3 zettabytes by the end of 2017. By 2017, global cloud IP traffic will reach 443 exabytes per month (up from 98 exabytes per month in 2012).
    • Global cloud IP traffic will increase nearly 4.5-fold over the next 5 years. Overall, cloud IP traffic will grow at a CAGR of 35 percent from 2012 to 2017.
    • Global cloud IP traffic will account for more than two-thirds of total data center traffic by 2017.
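
    Those figures hang together; here is a quick back-of-the-envelope check using only the numbers quoted above.

        # Back-of-the-envelope check that the quoted Cisco figures are consistent.
        monthly_2012_eb = 98          # exabytes per month in 2012
        monthly_2017_eb = 443         # exabytes per month projected for 2017
        cagr = 0.35                   # quoted compound annual growth rate
        years = 5

        print("growth multiple:", round(monthly_2017_eb / monthly_2012_eb, 2))          # ~4.5-fold
        print("implied by 35% CAGR:", round(monthly_2012_eb * (1 + cagr) ** years))      # ~440 EB/month
        print("annual traffic 2017 (ZB):", round(monthly_2017_eb * 12 / 1000, 1))        # ~5.3 zettabytes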

    More is being stored within the cloud and more devices are connecting to the Internet. There are more services delivered via the web and entire organizations can go live without purchasing a single piece of equipment. As the reliance on cloud computing and the Internet continues to expand, the need for continuous innovation will help create even better platforms.

    11:17p
    Google Tells Users to Move Cloud VMs for Infrastructure Refresh

    Google is asking its cloud infrastructure service users to move their VM instances from one of the cloud availability zones in Europe to another to upgrade data center equipment that supports the zone.

    A typical IT refresh cycle in a data center is three to four years, and Google is no exception. In a notice about the changes posted to the public Google Compute Engine Operations forum, a Google Cloud Platform representative said GCE cloud zones were refreshed every three to five years.

    The company recently stood up an entirely new availability zone in Europe, called europe-west1-c, which is one of the zones it suggests users move their VMs to from europe-west1-a – the zone that’s being prepared for upgrades.

    Europe-west1-c, expected to come online in two weeks, will have “all new, shiny Ivy Bridge machines with our latest and greatest infrastructure,” the Google rep wrote in the forum post. The zone will be supported by Google’s latest servers, power, cooling and network fabric technologies.

    GCE currently has three availability zones in the U.S., two in Europe and three in Asia. The three Asian zones run on Intel’s Ivy Bridge chips (Ivy Bridge is the newer 22-nanometer cousin of the Sandy Bridge microarchitecture that used 32 nm process technology). All other zones, save for one of the three U.S. ones, still run on Sandy Bridge, which means refreshes for them are also around the corner.

    The European zone that’s up for an upgrade now is going offline at the end of March 2015, and all VMs and persistent disks (storage attached to cloud VMs) still running on it at that time will be terminated.

    Inter-zone VM migration tools in the pipeline

    Google is suggesting that users with infrastructure in this zone make disk snapshots and use them to launch new instances in the new zone or in the existing europe-west1-b.
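
    A hedged sketch of that snapshot-and-recreate workflow, driving the gcloud command-line tool from Python. The disk, snapshot and instance names are placeholders, and the flag spellings shown are assumptions to verify against the installed Cloud SDK version.

        # Sketch of the snapshot-and-recreate workflow for moving a VM between
        # zones with the gcloud CLI. Resource names are placeholders and flag
        # spellings should be checked against the installed Cloud SDK version.
        import subprocess

        OLD_ZONE, NEW_ZONE = "europe-west1-a", "europe-west1-c"

        def gcloud(*args):
            cmd = ["gcloud", "compute"] + list(args)
            print("+", " ".join(cmd))
            subprocess.check_call(cmd)

        # 1. Snapshot the persistent disk in the zone that is being retired.
        gcloud("disks", "snapshot", "web-disk", "--zone", OLD_ZONE,
               "--snapshot-names", "web-disk-snap")

        # 2. Recreate the disk from the snapshot in the new zone.
        gcloud("disks", "create", "web-disk", "--zone", NEW_ZONE,
               "--source-snapshot", "web-disk-snap")

        # 3. Boot a replacement instance in the new zone from the recreated disk.
        gcloud("instances", "create", "web-1", "--zone", NEW_ZONE,
               "--disk", "name=web-disk,boot=yes")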

    The company’s Live Migration feature for moving VMs from one physical host to another only works within zones. In the post, however, the rep wrote that Google was working to deliver tools that would help automate the inter-zone VM migration by the end of January 2015.

    Quarterly data center spend continues to grow

    The amount of money Google spends on data centers and the equipment inside them has been growing steadily. In its third-quarter earnings report, released today, the company said it spent $3.35 billion in the “other cost of revenue” category, which is mostly data center operational expenses, hardware inventory and content acquisition costs, among other expenses. That’s 20 percent of the company’s revenue and up from $2.44 billion spent in this category in the third quarter of last year.

    That’s operational expense. In the capital expense bucket, Google reported spending $2.42 billion during the quarter, most of it on data center construction, production equipment and real-estate purchases. That’s up from $2.29 billion spent in this category during the same three months one year ago.
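
    For perspective, the year-over-year growth implied by those figures, computed only from the numbers in this report:

        # Year-over-year growth implied by the spending figures quoted above
        # (all values in billions of US dollars, as reported).
        opex_q3_2014, opex_q3_2013 = 3.35, 2.44        # "other cost of revenue"
        capex_q3_2014, capex_q3_2013 = 2.42, 2.29      # capital expenditures

        print(f"opex growth:  {100 * (opex_q3_2014 / opex_q3_2013 - 1):.1f}%")    # ~37%
        print(f"capex growth: {100 * (capex_q3_2014 / capex_q3_2013 - 1):.1f}%")  # ~6%
        print(f"implied quarterly revenue: ${opex_q3_2014 / 0.20:.2f}B")          # opex = 20% of revenue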

    11:30p
    Shutterfly Deploys 1,000 Cabinets at Switch SUPERNAP

    In one of the largest colocation deals ever, Shutterfly Inc. is deploying 1,000 cabinets at the SUPERNAP data centers in Las Vegas. The photo editing and sharing service has signed a seven-year contract for colocation and connectivity services with Switch, which operates the huge – and growing – SUPERNAP campus.

    Shutterfly Inc. operates a family of digital lifestyle brands that create books, cards, event invitations and gifts. Its flagship Shutterfly brand allows users to upload photos and create their own products. It’s a business with significant storage requirements.

    “At Shutterfly, the safety and security of our customers’ memories is paramount,” said Shutterfly CFO Brian Regan. “As we expand our storage and technology platforms, it is crucial to have partners that can offer a comprehensive technology solution. We looked at dozens of colocation facilities around the country and the Tier IV Gold-rated SUPERNAP offers best in class design, security and scalability.”

    Switch operates two huge data centers on its main Las Vegas campus spanning more than 750,000 square feet. Its newest facility, SUPERNAP 8, earlier this year became the first data center to receive both of the Uptime Institute’s highest certifications, the Tier IV Constructed Facility and Tier IV Gold certification for Operational Sustainability.

    On an adjacent piece of land, Switch is now building its largest project yet, the 600,000-square-foot SUPERNAP 9.

    Switch now has more than 1,000 customers, including more than 40 cloud computing companies and a dense concentration of network carriers. The company’s roster of clients includes tech industry heavyweights like eBay, Google, Cisco, VMware and Microsoft’s Xbox One.

    The newest client relationship is a familiar one for Switch CEO and founder Rob Roy. “My friends and family have been using Shutterfly for years, and love the product,” said Roy, who has numerous patents for innovations in cooling and data center design implemented at the SUPERNAP complex.

    “We are thrilled to welcome Shutterfly to the SUPERNAP ecosystem,” Roy added. “As an industry leader, Shutterfly holds billions of photos and precious memories for people around the globe. Our SUPERNAP data centers are the perfect place for Shutterfly to secure and store their most valuable assets.”

