Data Center Knowledge | News and analysis for the data center industry
Thursday, May 16th, 2013
11:30a | HostingCon 2013 Gears Up for Austin

The Austin Convention Center, the conference site for HostingCon 2013.
HostingCon, which will be held June 17 – 19, is an industry event designed to serve the professional interests of the web hosting and cloud services community. Participants will discuss the challenges of the present and think broadly about the future of the cloud computing and web hosting industry.
This year’s event will be located in one of North America’s top technology hubs, Austin, Texas, at the Austin Convention Center.
Thought leaders, subject matter experts and professionals convene at HostingCon for three days, with the intention of networking, making connections and furthering their business interests. The exhibit hall space will host more than 150 vendors.
According to organizers, the mission of HostingCon is to provide essential industry knowledge and intelligence on an unbiased platform. It is the only North American event that seeks to comprehensively meet the needs of the web hosting and cloud community.
Attendees who purchase the full conference pass get access to HostingCon Connect, which lets them line up meetings with the people they most want to reach before and during the conference. Pass holders can send messages to up to 30 people and search the HostingCon attendee database by name, company and title to find the right decision makers.
More information and registration are available on HostingCon’s website. The early bird registration rate ends today, May 16, at midnight. DCK readers can use the coupon code DCK2013 when registering to receive a discount on the registration fee.

12:00p | Box is Beefing Up its Network for the Enterprise
Box is one of those Cinderella technology stories. The cloud file-sharing and storage company started with just a couple of guys and has grown to serve more than 150,000 businesses, including 92 percent of the Fortune 500. Its vision: to let you share, manage and access your content from anywhere.
With half of its activity coming from outside the U.S. and 40 percent coming from mobile devices, its customers have put that mission statement to the test. The company has been boosting Accelerator, its global data transfer network, as well as adding several key certifications in a bid to make its global enterprise customer base happy. Further infrastructure expansion lies ahead.
“We really think we’re solving a problem for an end user,” said Jeff Queisser, VP of Technical Operations for Box. “But we’re also solving an IT concern; they can get all the auditing and compliance they need. This can be run in a very safe way.”
Engineering for the Enterprise User
The company is still seeing triple digit growth year over year, with over 150 percent growth last year. That has prompted the company to tailor its service in the best ways possible to serve the enterprise crowd, which requires fast uploads and often has geographically dispersed workloads and workforces.
An astounding 50 percent of Box activity is happening outside the U.S., either from international firms or U.S. enterprises with a global presence.
“It’s a tipping point where it became a first class problem,” said Queisser. “Speed is absolutely critical. If you have sites all around the world, you need blazing fast download speeds.”
Accelerator: Infrastructure Plus Intelligent Routing
This enterprise customer need was the impetus behind Box Accelerator. The company has established upload endpoints in key global data center hubs, with end-to-end encryption, and built patent-pending intelligent routing and optimization technology that delivers uploads 2.5 times faster on average. The result is a network designed to get data into Box as fast as possible.
“(With) most consumer operating systems, networking stacks are not optimized,” said Queisser. “There’s the bandwidth-delay problem. TCP is an amazing protocol, but wasn’t made for these types of distances and this kind of bandwidth. It’s a testament to how amazing the protocol is that it’s done what it’s done.”
“What we’ve done is unique in that it’s optimizing inbound data,” Queisser added. “How do you ingest 100MB rather than send it out? The other piece is that we built these nodes, and a routing feedback loop technology. It determines the fastest way to get to Box. Sometimes it’s an accelerator node, but there are times when direct is the fastest path.”
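To give a feel for the kind of feedback loop Queisser describes, here is a minimal illustrative sketch in Python. It is not Box’s implementation: the endpoint hostnames are hypothetical, and using TCP connect time as the latency signal is an assumption made purely for illustration; the real system presumably weighs richer throughput measurements.

```python
# Illustrative sketch only, not Box's routing code. It shows the general idea:
# probe each candidate ingest path (regional accelerator nodes plus the direct
# origin) and upload via whichever currently responds fastest. Hostnames are
# hypothetical.
import socket
import time

CANDIDATE_ENDPOINTS = [
    "direct.example-origin.com",   # direct path to the origin data center
    "accel-tokyo.example.com",     # regional accelerator nodes
    "accel-dublin.example.com",
    "accel-saopaulo.example.com",
]

def probe_latency(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Return TCP connect time in seconds, or infinity if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def pick_upload_endpoint() -> str:
    """Feedback loop: measure every path and choose the fastest one right now."""
    latencies = {host: probe_latency(host) for host in CANDIDATE_ENDPOINTS}
    return min(latencies, key=latencies.get)

if __name__ == "__main__":
    print("Uploading via:", pick_upload_endpoint())
```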
Neustar conducted a performance analysis test and found that “Box had the lowest average upload time across all locations, about 66% faster than the closest competitor.”
More Cloud-Based Endpoints and an API in Box’s Future
Accelerator started off as nine new points of infrastructure, but has been growing. It’s a small footprint that provides a big performance boost. The ultimate goal is to have cloud-based endpoints in all regions.
The locations of the Box accelerators are also telling in that these are the areas where the company is seeing the most growth, and/or anticipating the most growth. If you see an endpoint pop up, it means a combination of latency mapping and customer growth gave birth to it. For example, one of the latest endpoints not yet on the official map is Dublin, an area that has seen its fair share of Internet infrastructure growth as a key European market.
The future for the company is more Accelerator locations, and an upcoming API that will allow developers to leverage the work that Box has done for its own apps.
API on the Way
“We will have a beta for an API that lets any developer in the world use what we’ve built,” said Queisser. “If you’re trying to build something that’s as fast as possible, you don’t want to have to do all we had to do. Instead you get all of that with an API call.”
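Box has not published details of the API, so the following Python sketch is purely hypothetical; the endpoint URL, response fields and function name are invented to illustrate the kind of single-call experience Queisser describes: ask the service for the fastest ingest path, then upload through it.

```python
# Hypothetical only: Box had not published the Accelerator API when this was
# written. The endpoint URL, response fields and function below are invented
# to illustrate the "one API call" experience described in the article.
import requests  # third-party: pip install requests

def accelerated_upload(file_path: str, api_token: str) -> None:
    # Ask the (hypothetical) service which ingest endpoint is fastest right now.
    resp = requests.get(
        "https://accelerator.example.com/v1/fastest-endpoint",
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=5,
    )
    resp.raise_for_status()
    upload_url = resp.json()["upload_url"]

    # Upload the file through the recommended endpoint.
    with open(file_path, "rb") as fh:
        requests.post(upload_url, files={"file": fh}, timeout=300).raise_for_status()
```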
The company is also planning to apply this technology to file downloads. Accelerator has added speed to enterprise uploads, but the company says it is looking to speed up downloads in similar fashion. “We need to do that in a way where it’s encrypted and it isn’t cached,” said Queisser.
In terms of certifications, Box just added ISO 27001 this week, and announced support for HIPAA last quarter. ISO 27001 is the international standard for information security management systems (ISMS) and demonstrates how the policies and controls put in place at Box protect user data. In short, the standard prescribes requirements and best practices for systematically building, deploying, verifying and managing information, content and data. Box also holds SOC 1/SSAE 16 Type II and SOC 2 Type II reports.

12:30p | Mellanox To Acquire Kotura in Photonics Deal

Mellanox (MLNX) announced its intent to acquire privately held Kotura, a leading innovator and developer of advanced silicon photonics optical interconnect technology for high-speed networking applications.
The approximately $82 million deal will boost Mellanox’s ability to deliver high-speed networks with next-generation optical connectivity. Kotura holds over 120 granted or pending patents in CMOS photonics and packaging design. Combined with Kotura technology, Mellanox interconnect products will reach 100Gb/s and beyond and offer longer-reach optical connectivity at a lower cost, allowing users to further reduce their capital and operating expenses.
“We believe that silicon photonics is an important component in the development of 100 Gigabit InfiniBand and Ethernet solutions, and that owning and controlling the technology will allow us to develop the best, most reliable solution for our customers,” said Eyal Waldman, president, CEO and chairman of Mellanox Technologies. “We expect that the proposed acquisition of Kotura’s technology and the additional development team will better position us to produce 100Gb/s and faster interconnect solutions with higher-density optical connectivity at a lower cost. We welcome the great talent from Kotura and look forward to their contribution to Mellanox’s continued growth.”
Kotura launched its low-power 100 gigabits per second (Gb/s) optical engine to support the interconnect fabric at the OFC/NFOEC conference last year. Mellanox expects to establish its first R&D center in the United States at Kotura’s Monterey Park, California location, and retain Kotura’s existing product lines to ensure continuity for customers and partners. It also believes the proposed acquisition will enhance its competitiveness and its position as a leading provider of high-performance, end-to-end interconnect solutions for servers and storage systems.
“This acquisition is important for both companies to enable interconnect innovation for data centers that require solutions that move data faster and more efficiently. Together, we can execute faster and deliver better solutions based on Kotura’s silicon photonics platform that delivers the demands of 100Gb/s interconnects and beyond,” said Jean-Louis Malinge, president and CEO of Kotura, Inc. “We are delighted to join the Mellanox team and look forward to working together to propel the combined company’s further growth.”

2:00p | 400G Network Deployed Using Cyan Blue Planet SDN

Here’s our review of some of this week’s noteworthy links for the data center industry:
Cyan selected by GlobalConnect for 400G network. Cyan (CYNI) customer GlobalConnect, the leading Danish alternative provider of network and hosting services, announced it has completed the deployment of a 400G network employing the Cyan Blue Planet software-defined networking (SDN) system and Z-Series packet-optical platform throughout Denmark. The Denmark-wide 400G rollout is anchored by Cyan Z-Series packet-optical transport platforms (P-OTPs). The GlobalConnect network spans Denmark with 12,000 fiber route kilometers and includes extensions into northern Germany and southern Sweden. The network upgrade allows GlobalConnect to deliver a wide variety of ultra-high-capacity wavelength and Ethernet services to enterprise customers and data center operators, as well as wholesale services to other carriers. “Many people think that the name of the game for service providers is simply adding more and more capacity,” said Peter Olsen, GlobalConnect chief technical officer. “While capacity is certainly crucial, unless we can drive enhanced capabilities and features into our network we will suffer from bandwidth commoditization. Working with Cyan, we’ve been able to architect a next-generation network that is operationally more efficient, delivers the scalability we need, and provides a means to deliver enhanced services to our customers.”
Yahoo! Japan deploys Juniper QFabric. Juniper Networks (JNPR) announced that Yahoo! Japan has deployed a Juniper Networks QFX3000-M QFabric System at its new environmentally friendly Shirakawa Data Center, a large-scale IT facility serving the Tokyo metropolitan area and East Japan. The site now operates a single-layer network fabric based on Juniper Networks QFabric technology to deliver more cost-effective, low-latency performance and linear scalability. The deployed system will support up to 768 ports of 10 Gigabit Ethernet (GbE) with low latency, providing Yahoo! Japan customers with a high-quality online experience, no matter where they are. An MX960 3D Universal Edge Router is also deployed at the site to support advanced services. “Yahoo! Japan is a pioneer in Internet service delivery, and Juniper Networks has been proud to play a part in this success for many years,” said Douglas Murray, senior vice president, Juniper Networks Asia Pacific. “Implementing QFabric as the network foundation of its new state-of-the-art data center will support the company’s expansion and future innovation in Japan seamlessly. It also serves as a testament to QFabric traction in Japan and throughout the Asia Pacific region.”
EdgeCast and TeliaSonera partner. EdgeCast Networks and TeliaSonera announced a managed CDN agreement that will bring new site acceleration and CDN solutions to TeliaSonera’s customer base while expanding the reach and capacity of the EdgeCast network. Under the agreement, both companies will make significant investments in high-performance network infrastructure, along with sales and marketing efforts, to address the massive demand for acceleration and CDN solutions across TeliaSonera’s Nordic markets. TeliaSonera will leverage its massive regional network presence, along with EdgeCast’s proven CDN technology, to build out a regional CDN that will be directly connected to EdgeCast’s worldwide network. “TeliaSonera is the premier, dominant provider in a critical region — and after many years working with them, I’m excited we’re bringing our relationship to this new level,” said James Segil, co-founder and president of EdgeCast Networks. “They have a huge number of deep customer relationships, and will now be able to offer those customers powerful and proven CDN solutions.”
2:57p | Real-Time Command and Control in NOCs

Simon Clew is Sales Director at Adder Technology Limited.
Since the arrival of virtualization, the Keyboard, Video and Mouse (KVM) market for data centers has changed dramatically. KVM is no longer as critical within the data center as it once was; rather, the focus has shifted to improving KVM within the network operations centers (NOCs) that run and manage multiple data sources.
Today’s NOCs and command/control centers (CCCs) are characterized by vast arrays of screens and control panels being used and managed by a team of busy, and more than likely stressed, individuals. In these hubs of activity, the ability to notice and react quickly to any situation is critical; otherwise the result could be a catastrophic data center shutdown. For example, Emerson Network Power surveyed 41 data center companies and found that the average cost of an outage was $507,000.
A key element for ensuring responsiveness in NOCs and CCCs is giving operators the ability to see clearly, and in real time, what is occurring in the systems they are managing. Historically this caused huge problems for many NOC/CCC operators because ineffective KVM solutions were used to view and control what was happening on the network. Often the image on the screen would be poor quality or pixelated. An image that looks acceptable at the desk, once enlarged onto a video wall, has its small imperfections hugely magnified, underlining the limitations of analog systems. Add to this the inherent latency of legacy KVM solutions and the lack of support for input devices such as touch screens, and the operator’s ability to act quickly is severely limited.
Real-time Control
The industry has moved toward NOCs with a real focus on real-time control. For example, in NOCs running a range of data centers, operators are looking at a multitude of data sources. If any of these are affected by a serious situation, the controller will need to act immediately. Those looking after data centers that are part of a critical infrastructure system, such as power distribution and management, will not have time to wait for a system to boot up and connect to the affected machine.
Fortunately, with the advent of digital video and USB connectivity, real time control and low latency video are a reality. Another benefit offered by improved KVM in NOCs is the simplification of operator functions through the use of specialist input devices such as keyboards containing a number of unique keys or touchscreens and tablets – combined with common access card readers and multifunction mice.
Digital KVM is also making an impact in NOCs through providing the ability to command and control multiple screens (and computers) seamlessly using just one keyboard and mouse, a capability offered in switching solutions such as the Adder CCS4USB. Allowing an operator to monitor several systems from one work station has a range of benefits, not least of which is improved efficiency.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

3:00p | PernixData Raises $20 Million for Software-Defined Storage

Here’s our review of some of this week’s noteworthy links for the data center industry:
PernixData closes on $20 million round. Software-defined storage platform company PernixData announced the close of an over-subscribed $20 million Series B financing. The round was led by Kleiner Perkins Caufield & Byers (KPCB) with additional support from existing investors Lightspeed Venture Partners and industry leaders Mark Leslie, John Thompson and Lane Bess. The Flash Virtualization Platform (FVP) from PernixData disrupts the storage market by enabling virtualized data centers to take advantage of an architecture that decouples storage performance from storage capacity. With this additional investment, PernixData will grow its sales and marketing globally, continue its ambitious R&D roadmap, develop a channel ecosystem and accelerate go-to-market plans with leading players in the server and storage industries. “PernixData is solving one of the biggest outstanding issues in enterprise data centers: the cost and performance of storage,” said Matt Murphy of KPCB. “Pernix has the opportunity to do for storage what VMware did for compute. The technical team they’ve assembled for such an ambitious mission is unparalleled.”
Violin Memory speeds Oxford Press. Violin Memory announced it has helped Oxford University Press improve its SAP IPM application performance threefold, cutting month-end processing by two and a half days and reducing average dialog response time (the time it takes from the first dialog request to the presentation of the final data) by 30 percent. Working with solution partner SCC, Oxford University Press selected the Violin Memory 6212 Flash Memory Array, an all-silicon shared storage system with industry-leading performance (up to 1 million IOPS) and ultra-low latency. “Most importantly we’ve removed the daily impact and the pressing risk to our month end close caused by the ever-lengthening batch processing times required as the data-set grew,” said Mark Harwood, SAP Lifecycle for Oxford University Press. “The decision to choose Violin Memory was made much easier by the professional approach shown by their team throughout our engagement, which has helped us to better understand all-flash memory arrays, an area of technology that is new to us.”
HP expands liquid cooling to Z820 workstations. Asetek announced that HP (HPQ) has expanded the availability of Asetek liquid cooling to include single-processor Z820 workstations. Previously only available in dual-CPU configurations, the more affordable single-CPU Z820 with Asetek’s integrated sealed-loop liquid cooling provides reduced system noise and increased productivity. The design win is expected to translate into an 8-10 percent increase in Asetek’s workstation business. “HP has always been on the cutting edge of workstation technology,” said Scott Chambers, Senior Director of Marketing at Asetek. “The expansion of Asetek liquid cooling within the Z820 line further validates Asetek liquid cooling as a valuable addition for improved productivity.”
3:29p | Bionimbus Applies Cloud Power to Genetic Data-Crunching

A look at the Beagle supercomputer at the Computation Institute at the University of Chicago. It’s one of the tools the university is using in its biomedical computation program. (Photo: University of Chicago)
An ambitious project at the University of Chicago aims to lead the nation in biomedical computation, by making the region the largest hub in the world for genetic and medical information.
At the forefront of the effort is Bionimbus, an open source cloud-based system for managing, analyzing and sharing genomic data. Developed by the Institute for Genomics and Systems Biology (IGSB) at the University of Chicago, the Bionimbus community cloud is operated by the Open Cloud Consortium’s Open Science Data Cloud, and an open source version of Bionimbus is available to those who wish to set up their own clouds.
Bionimbus is designed to support next-generation gene sequencing instruments and integrates technology for analyzing and transporting large datasets. The Open Cloud Consortium (OCC) currently distributes around one petabyte of scientific data to interested users and plans to roughly double that amount of data in each of the next several years. Most OCC users are at universities and institutes on high-speed networks such as Internet2 or National LambdaRail.
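For a rough sense of scale, the snippet below plays out that stated growth rate; the one-petabyte starting point and four-year horizon are assumptions used purely for illustration.

```python
# Back-of-the-envelope projection of the growth the OCC describes: roughly one
# petabyte today, roughly doubling each year. The starting point and horizon
# are assumptions for illustration.
data_pb = 1.0
for year in range(1, 5):
    data_pb *= 2
    print(f"Year {year}: ~{data_pb:.0f} PB distributed")
# Year 1: ~2 PB, Year 2: ~4 PB, Year 3: ~8 PB, Year 4: ~16 PB
```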
Pritzkers Assist With Fundraising
Recently Hyatt Hotels Chairman Tom Pritzker and his wife Margo hosted a fundraiser to introduce the project to about 50 influential friends. Pritzker is a university trustee and has hosted many annual dinners for the University of Chicago Medicine.
“Frankly, I’ve walked away from any one of the dinners really excited about whatever the topic was because it’s like a window into the future,” Pritzker told the Chicago Tribune. “You get to sit here, and for two hours someone is painting a picture for you of what the world is going to be like 10 to 15 years from now.”
During the fundraiser University of Chicago computer scientist Ian Foster presented a map of global fiber-optic networks, highlighting the densely populated Chicago area. With Chicago being the crossroads of information, the big data project hopes to leverage that geographic advantage for building the genome storage hub.
“Business, innovation, discovery, jobs still depend on taking raw materials and turning them into refined products,” Foster said. “Often, nowadays, the raw material is data and the refined material is knowledge.”
Leveraging Beagle Supercomputer
University of Chicago Computation Institute (C.I.) senior fellow and IGSB associate senior fellow Robert Grossman has been working on the Bionimbus Cloud for approximately four years. He says it is currently one of the largest clouds to hold genomic data, and the first project of its kind authorized by the National Institutes of Health (NIH) to use public genome data for biomedical research.
Argonne National Laboratory and IGSB are collaborating on two big data projects, using the Beagle supercomputer and the Bionimbus Cloud. The Beagle supercomputer was launched last month by the University of Chicago Biological Sciences Division and the Computation Institute. The 150 Teraflop system contains 186 blades, housed in 8 Cray XE6 cabinets.
With a goal of revolutionizing the way clinical researchers collect and analyze medical data, the big data projects will simulate biological processes in order to understand the causes of diseases such as cancer, and will compile knowledge about patient outcomes and recent medical discoveries in order to discern more effective diagnoses and treatments.

4:05p | Design Lifecycle: Leading Edge vs. Current Practice

This is the fifth article in a series on the DCK Executive Guide to Data Center Design.
One of the design issues facing data center operators is the projected life cycle of the facility, and the ability of its infrastructure systems to be upgraded in order to feasibly and cost-effectively extend its long-term viability. The data center has evolved at a much faster pace over the last several years, especially when compared with the pace of change over the previous 35 years. Designs and systems that were once considered leading edge can become the new normal: reliable, state-of-the-art modern facilities with a long life cycle, provided they have been well planned and have solid technical underpinnings. One such example is the use of “fresh air free cooling,” which would have been seen as unthinkable less than 10 years ago but is becoming more common (see Part 3, Energy Efficiency).
The Software Defined Data Center
IT systems have moved to virtualize every aspect of the IT landscape: the virtual server, storage and network. The next step is the virtualization of the data center itself – the “Virtual Data Center,” a term that has begun to appear alongside “Software Defined Data Center.”
While this sounds a bit fanciful, it does not mean that the physical walls and rows of racks of the data center will literally move or morph with the click of a mouse. Rather, it refers to the concept that all the key IT components (servers, storage and networking) will be fully virtualized and transcend the underlying limitations of a physical data center. This does not mean the physical data center will cease to exist, but it does imply that new data centers must be flexible enough to accommodate ongoing changes in IT hardware designs and their new requirements. Virtualization has improved availability and resource allocation, yet in many cases physical facility designs have not reflected the changes that can result from a fully virtualized IT architecture.
The complete Data Center Knowledge Executive Guide on Data Center Design is available in PDF, compliments of Digital Realty. Click here to download.

6:14p | Latisys Launches Disaster Recovery as a Service

Cloud service provider Latisys has launched Disaster Recovery as a Service (DRaaS), a tailored service requiring no capital investment by the customer. The portfolio of DRaaS solutions ranges from simple offsite data backup to near-instantaneous continuous availability (geoclustering) services.
Latisys DRaaS services feature a consultative approach that begins with a business impact analysis, distilling the need for disaster recovery into three key concepts (a brief worked example follows the list):
- Recovery point objective (RPO) – how much data can you afford to lose?
- Recovery time objective (RTO) – how soon do you need to have your systems up and running?
- Cost of downtime – how much does an hour of downtime actually cost?
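As an illustration of how those three questions feed a DR business case, here is a minimal worked example with made-up numbers; none of the figures come from Latisys.

```python
# Minimal worked example of the three questions above, with made-up numbers.
# It estimates annual downtime exposure so a DR tier's price can be weighed
# against the cost of doing nothing. None of these figures come from Latisys.
hourly_downtime_cost = 50_000   # assumption: revenue plus productivity lost per hour
outages_per_year = 2            # assumption
recovery_time_hours = 12        # RTO of the current plan
recovery_point_hours = 24       # RPO: up to a day of data could be lost

annual_exposure = hourly_downtime_cost * recovery_time_hours * outages_per_year
print(f"Annual downtime exposure: ${annual_exposure:,}")                 # $1,200,000
print(f"Worst-case data loss per incident: {recovery_point_hours} hours")
# If a warmer replication tier (say, RPO/RTO under 12 hours) costs less than
# the exposure it removes, the upgrade pays for itself.
```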
“Our customers are increasingly asking for comprehensive DR solutions,” said Christian Teeft, VP of Engineering, Latisys. “In the past you had to maintain a completely redundant infrastructure at a cost of 2X, putting DR out of reach for most small-to-medium enterprises. Today we have a range of options that can be tailored to your specific RPO (recovery point objective) and RTO (recovery time objective), making DR more affordable, more powerful and more effective.”
Based on the traditional concept of cold, warm, and hot site Disaster Recovery, Latisys’ DRaaS offerings include a wide range of options:
- Data Protection (cold) – Managed data protection using EMC Avamar. These managed backup services ensure that data can be restored from disk – a good option if the cost of downtime is low, or if there is a contractual or regulatory obligation to fulfill.
- Storage Replication (warm) – Several options for storage replication are available, including using the HP 3PAR StoreServ storage platform to replicate from the storage system to a remote location. This is ideal if RPO and RTO both need to be less than 12 hours.
- Workload Replication with VMware (even warmer) – VMware Site Recovery Manager (SRM) maintains a scripted recovery plan to shut down specified virtual machines and automatically restore them to a recovery site. RPO is less than one hour and RTO is less than four hours.
- Workload Replication with Microsoft (warmer still) – The Microsoft Hyper-V Replica function performs asynchronous replication over commercially-available broadband networks, enabling enterprises to perform manual failover in the event of a disaster. This form of hypervisor replication is a good option for smaller enterprises or those already invested in Microsoft technologies.
- Geoclustering (hot) – When cost of downtime is very high, Latisys can design active/active geoclustered database replication and globally load-balanced sites with nearly instant failover. This is a good option for companies with thousands of transactions per hour.
- The Latisys Cloud – Powered by the HP CloudSystem Matrix, Latisys’ enterprise cloud infrastructure provides flexible and cost-effective access to DR compute resources.
The portfolio features a range of options, balancing the continuous availability needs of the high end of the market with the simplicity and flexibility needed by the large part of the market still looking to put a DR plan together.
“Latisys has made the capital investment in hardware as well as the operating investment in people and processes to tailor a DR solution to specific customer requirements,” said Pete Stevenson, CEO of Latisys. “DR is an increasingly important component of any enterprise IT business strategy, and Latisys is focused on making DR both affordable and available so resources are actually there when businesses need them most.”

6:24p | CyrusOne Launches Internet Exchange Across Sites

An aerial view of the new CyrusOne data center in Phoenix. The company today launched a national Internet exchange. (Photo: CyrusOne)
Colocation provider CyrusOne continues building up its connectivity story, introducing its National Internet Exchange (IX), an on-net platform deployed across CyrusOne facilities in Texas and Arizona. The platform enables high-performance, low-cost data transfer and accessibility for customers, uniting 12 CyrusOne sites in Dallas, Houston, Austin, San Antonio and Phoenix, with other locations coming online soon.
CyrusOne first entered the interconnection market back in February 2012. Earlier this year, it launched a Texas IX, and now is focusing on a wider build-out of the exchange.
“With the launch of our Texas IX earlier this year, and with the recent opening of our data center site in Phoenix, CyrusOne has completed the important first steps in building out the CyrusOne National IX,” said Josh Snowhorn, vice president and general manager of Interconnection for CyrusOne. “No matter what kind of scalability our customers choose, the National IX will deliver robust, national connectivity and enable content and ISP peering that brings the heart of the Internet closer to CyrusOne data centers. Enterprises benefit from core access to the most powerful networks in the world—opening the door to facility-to-facility interconnection at costs and performance metrics that were previously not available to them.”
The interconnection play is a great business, complementing existing assets while not requiring a great deal of capital. National IX delivers interconnection across states and between metro-enabled sites within the CyrusOne facility footprint and beyond. CyrusOne customers have the ability to “mix-and-match” solutions to unite top-tier data centers within and across metro areas for both production and disaster recovery needs.
The CyrusOne IX gives customers choice when building out capacity to transport large amounts of data. Customers may choose CyrusOne’s bandwidth marketplace, its Internet Exchange platform, or cross-connect to cloud services. Customers using the CyrusOne National IX have the ability to connect between CyrusOne facilities region-to-region at greatly reduced wholesale cost via terabit-class capacity. This capability can also provide cross-connection with any on-net third-party facility within metro regions for a minimal charge.
“CyrusOne remains ahead of the multi-site deployment curve by continuously anticipating needs and changes while aggressively building and integrating data centers throughout the U.S.,” said Snowhorn. “We are excited about the opportunities associated with our new CyrusOne National IX.”
While it’s billed as a national exchange, CyrusOne currently doesn’t have data centers in several major Internet markets, including Silicon Valley, northern Virginia and the greater New York market.