Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, March 11th, 2014

    12:00p
    The SUPERNAP Goes Global, as Switch Adds International Partners

    A look inside one of the high-density hot aisle containment systems, known as T-SCIFs, at the SUPERNAP campus in Las Vegas. Switch has announced an international expansion. (Photo: Switch)

    The SUPERNAP is going global. Colocation pioneer Switch has formed a joint venture to launch an international expansion, teaming with Accelero Capital Holdings and Orascom TMT Investments (OTMTI) to build SUPERNAP data centers around the world.

    The three companies have formed SUPERNAP International to build Tier IV data centers in multiple global markets, the companies said. No sites have yet been announced for the expansion, which will feature a new data center design developed by Switch founder and CEO Rob Roy. The design, which had its initial rollout at the new SUPERNAP 8 in Las Vegas, allows facilities to operate with high reliability in a wide range of climates and weather conditions.

    Accelero and Orascom are part of the telecommunications and investment empire built by billionaire Egyptian businessman Naguib Sawiris. They are among the investors in the cloud computing provider Joyent, which houses key infrastructure in the SUPERNAP 7 facility in Las Vegas.

    “The Partners to Make this Happen”

    “The formation of SUPERNAP International will help further drive our client initiatives worldwide by providing enhanced accessibility to SUPERNAP ecosystems,” said Roy. “Accelero and OTMTI have a successful track record globally of strategic investments and operational management in telecommunications and digital media and are the partners to help make this happen.”

    The joint venture will take Switch’s high density data center designs beyond Las Vegas, where the firm has established a colocation empire built atop the SUPERNAP, a 400,000 square foot facility completed in 2009 that featured innovations in cooling, airflow containment and power design.

    Accelero Executive to Head International JV

    “SUPERNAP International marks a significant milestone for Accelero to expand in the media and technology sector,” said Khaled Bichara, CEO and co-founder of Accelero Capital. “We are very excited to be part of this journey and to provide innovative methods to change the way data centers are engineered and utilized worldwide.”

    Bichara will serve as President and CEO of SUPERNAP International, which will have an exclusive license for Switch’s data center technology and Roy’s patented designs. Bichara is a veteran of Wind Telecom and VimpelCom, the huge Russian mobile provider.

    In addition to Joyent, Accelero’s brands include European telcos Wind Telecom (WIS) and Italia Online. In North America, Accelero is known for its unsuccessful 2013 effort to buy the Canadian telco MTS Allstream for $500 million.

    “We believe that the collection of our experiences, combined with state-of-the-art SUPERNAP technology, will provide another dimension to how data centers are engineered worldwide,” said Bichara. “We look forward to working closely together to define the company’s strategy to enable the vast reach of this superior technology around the globe.”

    The design for SUPERNAP 8 can operate effectively in any climate, providing an ultra-efficient template for global growth. Some of Switch’s innovations include:

    • Super-sized custom cooling units, each providing 1,000 tons of air handling and supporting six different modes of cooling. The software that manages the system selects the most efficient cooling method based on the exterior temperature and other conditions. The new units at SUPERNAP 8 feature distinctive hoods, which protect the units from ice and snow accumulation in colder climates and also allow Switch to use exhaust air from the data center’s hot aisles to melt snow.
    • The Rotofly system, which uses 2,000 pounds of rotary flywheels to provide extended runtime for each HVAC unit. In the event of a power outage, this capability ensures that the cooling units will continue to move air through the data halls.
    • The data hall features a steel framework known as the Black Iron Forest. The steel serves a dual role, providing physical support for Switch’s aisle containment system (known as a T-SCIF) and acting as thermal storage that helps cool the surrounding air and provides a cushion during cooling failures.
    • SwitchSHIELD, a double-roof system that can protect the data center from wind speeds of up to 200 miles per hour. The two roof decks are located nine feet apart, are attached to the concrete and steel shell of the facility, and contain no roof penetrations. This allows Switch to replace either roof level without any loss of protection for the servers housed in the data hall.

    Switch will continue to own 100 percent of its U.S. holdings.

    1:00p
    How to Create an Agnostic Cloud

    It’s time to look at the cloud as an agnostic ecosystem.

    In today’s inter-connected global infrastructure, cloud computing has created new ways to compute and process information. Originally, we had a few cloud models, each with specific use cases where it was a solid fit. Now, the entire data center and cloud landscape has evolved.

    Mobility has become the new normal.

    A new “on-demand” generation spans both the user and the modern organization. Small and large enterprises are all looking at various cloud services to help them grow and evolve with the needs of the industry. All of this introduced a new way to look at the cloud, and the data center that supports it.

    Traditionally, we saw cloud models emerge with specific uses. Private, public, community and even hybrid cloud platforms enabled organizations to do great things. But what does the future cloud infrastructure look like? What if you want to leverage critical services spanning numerous platforms? How can you create intelligent interconnects which leverage numerous resources regardless of where they’re located?

    This is why it’s time to start looking at the cloud model from an even higher level. It’s time to look at the cloud as an agnostic ecosystem capable of handling any service, on any device, any time, and anywhere.

    Moving forward, organizations are going to look at new technologies which help them connect existing resources with powerful cloud platforms. Today, solutions are beginning to emerge and take shape which help create an agnostic cloud – one that can truly look at underlying resources and allow you to manipulate them as needed.

    • Software-Defined Technologies (SDx). If you haven’t read our Official DCK Guide to Software-Defined Technologies, you should take a look. This is the platform for building an agnostic cloud. These are new types of virtual services that can sit on any hypervisor, at any location in the world, and still communicate intelligently with critical resources. SDx is not just a buzzword; there are very real technologies behind these platforms. Software-defined networking has the potential to revolutionize both data center and cloud interconnectivity. VMware’s NSX and Cisco’s NX-OS technologies abstract the network layer to allow for greater efficiency throughout the entire data center. Other technologies, like software-defined storage (see Atlantis ILIO USX), effectively abstract the storage layer: the controllers do the disk work, while the virtual layer allocates capacity or performance workloads to the appropriate storage repository. The point is to establish a logical layer that simply utilizes whatever resources you present to it. This applies to security, your cloud, and even the software-defined data center.
    • The Hybrid Cloud Methodology. The future cloud model will essentially be a hybrid platform, though not in the sense you may traditionally define it. An agnostic cloud is a hybrid cloud, but not just a combination of public and private. Future cloud models will have all sorts of services pointing to a management platform that controls resources, users, and critical workloads. Begin to evolve your thinking of the hybrid cloud toward one where a variety of outside services can help your environment operate even better. There’s a good reason the data center market is booming: cloud platforms have allowed a number of traditional data center providers to offer new types of services. The idea behind an agnostic cloud revolves around simple access to cloud resources regardless of where they are. In this scenario, you begin to explore the powerful data center operating system that integrates with all of your distributed nodes to handle the environment at the uppermost layer – all the way down to automated resource control.
    • Cloud Automation. Take a look at what technologies like CloudPlatform and OpenStack are doing. These technologies are creating a logical management layer for your entire cloud infrastructure. They don’t care where it’s located or what’s sitting underneath. These solutions care about automation and cloud efficiency. With these management platforms, you’re able to create powerful multi-tenant cloud environments based on best practices and workload automation. The beautiful piece here is that this technology can span data centers, regional offices, and entire countries. Intelligent load-balancing and resource allocation allows for very smart delivery of applications, desktops, and entire workloads to a very distributed workforce. The cloud management layer manages what you present to it – whether that is sitting in Amazon AWS or at one of the IO Data Centers facilities. The management of these systems is becoming much more abstracted and much more powerful. Cloud control and automation spans the traditional definition of cloud to create an agnostic management environment capable of empowering both your users and your organization.
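    As a toy sketch of the management layer these bullets describe – not any vendor's actual API; the provider names and capacities below are invented for illustration – an "agnostic" scheduler treats every registered resource pool the same and places workloads wherever headroom exists:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    """One resource pool: a public cloud region, a private data hall, etc.
    Names and capacities here are hypothetical, not real service data."""
    name: str
    capacity_vcpus: int
    used_vcpus: int = 0

    def free(self) -> int:
        return self.capacity_vcpus - self.used_vcpus

class AgnosticCloud:
    """Logical layer that uses whatever resources you present to it."""

    def __init__(self) -> None:
        self.providers: list[Provider] = []

    def register(self, provider: Provider) -> None:
        self.providers.append(provider)

    def place(self, workload: str, vcpus: int) -> str:
        # Pick the pool with the most headroom; the workload does not care
        # whether that pool is public, private, or hosted by a partner.
        best = max(self.providers, key=Provider.free, default=None)
        if best is None or best.free() < vcpus:
            raise RuntimeError(f"no capacity for {workload}")
        best.used_vcpus += vcpus
        return best.name

cloud = AgnosticCloud()
cloud.register(Provider("public-east", capacity_vcpus=64))
cloud.register(Provider("private-dc1", capacity_vcpus=32))
print(cloud.place("web-tier", vcpus=16))  # prints "public-east" (most headroom)
```

    Swapping a registered pool for an AWS region or a colocation hall changes nothing above the `Provider` abstraction – which is the point of the agnostic model.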

    There is so much more data being passed through the data center that something had to evolve. This agnostic cloud layer allows organizations to not focus on the type of infrastructure they have, but rather how to deliver content more securely and optimally to a dispersed end-user environment. Through 2018, more users and devices will come online. More information will pass through your data center and this content will be a lot richer.

    Traditional concepts around computing are slowly fading away. The latest Cisco Visual Networking Index report clearly shows that data traffic will reach some truly amazing milestones within the next five years.

    • Monthly global mobile data traffic will surpass 15 exabytes by 2018.
    • The number of mobile-connected devices will exceed the world’s population by 2014.
    • Smartphones will account for 66 percent of mobile data traffic by 2018, driven by increased usage per device.
    • Tablets will exceed 15 percent of global mobile data traffic by 2016.
    • More traffic will be offloaded from cellular networks (onto Wi-Fi) than will remain on cellular networks by 2018.

    All of these new metrics indicate that organizations will see a massive increase in cloud utilization and mobile data traffic. To positively impact your user environment, begin to look at your cloud model from a much higher level. Start to build your agnostic cloud to help support your next-generation infrastructure.

    1:17p
    Navigating New Networking Trends: Beware of the Urge to Over-Engineer

    Scott T. Wilkinson is senior director, technical marketing, MRV Communications. He leads a team of industry and technology experts focused on educating internal resources and external customers on market trends, new technologies, and recent innovations in MRV’s optical and packet product lines.

    SCOTT WILKINSON
    MRV Communications

    Emerging applications – including on-demand video, cloud services and synchronous replication – are increasing data center interconnect bandwidth demand more than ever before. Operators are grasping at the optical trends they hope will support the mass amounts of data crossing their network. However, it is crucial to recognize that networking approaches that are successful for traditional service providers might not translate into the right approach for the data center.

    Service providers and data centers each have a specific role in keeping the data flowing across the network. Service providers offer end-user access and are responsible for transporting users’ information across the network. Data centers are responsible for storing, managing and disseminating that data. Service providers make money from the bits flowing across the network. Data centers make money from the applications and information at the ends of those connections.

    Many data centers, however, are actually using equipment and techniques that were designed for traditional service providers. Operators need to be careful they do not fall prey to over-engineering their networks to telecom standards when it’s not required. Instead, they should focus on building a network that delivers the functionality to support current services while providing a future path to meet growing customer demand.

    There are many differences between service provider and data center optical networks, but the main areas where data centers can drive efficiency and innovation fall into a few categories.

    1. Transport Layer vs. Application Layer
    While service providers make their revenue from transport, where reliability and Service Level Agreements (SLAs) drive network design, data center operators make their revenue from applications and content, where cost and capacity drive network design.

    Service providers are required to have multiple layers of redundancy in the transport layer in order to ensure the highest level of service and customer satisfaction. Generally, these transport layer requirements do not exist for data centers, as they rely on the packet layer for resiliency.

    To increase revenue, data centers aim to quickly boost capacity to locations. Data center operators should focus on innovation at the application layer and consider new options such as leasing capacity and alternate routing. While these alternative approaches would not meet service provider requirements, these options can provide data centers with benefits like a more predictable expenditure model, the flexibility to scale capacity as needed and the ability to better safeguard data by rerouting at the packet layer.

    2. Port Density vs. Cost Per Port
    For service providers, port density is a paramount concern, while data centers focus on cost per port. This difference is reflected in the split between 100G port deployment by service providers and the continued use of 10G ports by data center operators. Service providers will pay extra for high-density ports, because each port is a revenue-generating entity and translates into more revenue in less space.

    Data centers are less concerned with density and more concerned with cost per port and internal network flexibility. With the high cost of 100G ports on routers and the need to re-purpose ports quickly, a network based on dense 10G ports is more appropriate.

    However, with the rapidly decreasing cost and size of 100G optics and the ability of emerging pluggable 100G coherent solutions to coexist with 10G signals, data center operators should be preparing for 100G deployments within their network. Deploying platforms that offer plentiful 10G port capacity, and include the option to add on additional 100G interfaces as either 10x10G muxponders or as native 100G interfaces, can be a very effective approach.
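    The cost-per-port trade-off above can be made concrete with back-of-the-envelope arithmetic (the prices here are invented placeholders; only the shape of the comparison matters):

```python
# Back-of-the-envelope comparison of 10G vs. 100G economics for a data
# center interconnect. All prices are hypothetical placeholders.

def cost_per_gbit(port_price_usd: float, port_gbits: int) -> float:
    """Dollars per gigabit of capacity for a single port."""
    return port_price_usd / port_gbits

ten_gig = cost_per_gbit(port_price_usd=1_000, port_gbits=10)
hundred_gig = cost_per_gbit(port_price_usd=15_000, port_gbits=100)

# A 10x10G muxponder aggregates ten existing 10G ports onto one 100G
# wavelength, deferring the jump to native 100G router ports.
muxponder_total_usd = 10 * 1_000 + 5_000  # ten 10G ports + hypothetical muxponder
muxponded = muxponder_total_usd / 100

print(f"10G: ${ten_gig:.0f}/Gbit  100G: ${hundred_gig:.0f}/Gbit  "
      f"muxponded: ${muxponded:.0f}/Gbit")
```

    The operator's decision then reduces to whichever dollars-per-gigabit figure is lowest at the port prices actually quoted, plus the flexibility value of keeping re-purposable 10G ports.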

    3. Fiber Standards: Long-Distance vs. Short-Distance
    Long-distance dark fiber is generally difficult to lease and requires more advanced optical knowledge and access to intermediate sites for amplification and regeneration. This usually leaves data center operators considering hardware requirements for shorter reach and metro distances (where fiber is available) and using leased bandwidth for longer spans. Service providers, who own long-distance fiber, must consider hardware requirements for a much wider variety of applications, including long-distance routes that may carry a wide spectrum of signal types.

    Because the majority of data centers lease short-distance fiber for a smaller portfolio of service types, there is no need to select networking platforms built for long-distance, multi-service deployments, which tend to be over-engineered and complex.

    4. Support Staff and Simple Solutions
    Finally, data center operators need simple, point-to-point solutions that do not have complex operations, are easy to deploy and do not need to be touched after installation. While service providers maintain dedicated support teams to organize part numbers, learn complicated software structures, and keep software licenses up to date, this is not the case for data centers. Data centers require simple, low-cost solutions that can easily be operated without worrying about the details of optical technology. Optical interconnect should look like virtual fiber, and data center operators should evaluate their equipment choices accordingly.

    It can be easy for data center operators to become enchanted by emerging and exciting service provider-designed solutions for optical networking. However, it is critical that the data center operator remains focused on the unique needs—and capabilities—of the data center network. By addressing their current network challenges and opportunities, they can easily avoid over-engineering the network and continue to successfully manage their high-profile role in the data revolution.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:30p
    DBT-DATA Sees Opportunity In Federal Space
    The DBT-DATA Cyber Integration Center in Harrisonburg, Virginia. (Photo: DBT-DATA)

    DBT-DATA is an interesting provider in the Virginia market that has been timely in capitalizing on data center investment opportunities. DBT-DATA is a spinoff from DBT Development Group, a Washington-based commercial real estate firm.

    The company’s history includes several examples of savvy moves at just the right time. Its latest property is in Ashburn, where it sees tremendous opportunity in the federal space.

    Eight years ago, DBT-DATA Chief Executive Dave Tolson was a residential and commercial real estate developer. Then, the economy hit hard, and he began looking at new options.

    “I had the opportunity to invest in a data center project,” said Tolson. “As an economist, I have a big belief in the supply-demand curve. The more I dug into the need for digital storage, the more I noticed there was a lack of supply.”

    Risk Pays Off

    Despite the shaky environment in the real estate world, Tolson took a risk, and it paid off. “After the dot com bust, real estate became a pariah asset class,” said Tolson. “It became hard to get financing. You also had the power density issues that come with data centers.”

    Now the company is humming along. “We’re very familiar with the infrastructure,” said Tolson. “We’re strong in construction. Data centers are 100 percent different than other real estate. We had a good initial facility and went from there.”

    That initial facility was in Harrisonburg, Virginia, where the company had Carpathia Hosting as its anchor tenant. The market DBT was targeting was the federal community, especially disaster recovery and even primary facility customers looking for space outside Washington, D.C. Harrisonburg is about a two-hour drive from Washington.

    The company sold that data center in 2010 to Harris Corp., which bought the property for $41.6 million and reportedly invested $200 million in the facility through 2012 on the back of its cloud ambitions. Harris exited the cloud hosting business and sold the facility back to DBT last year for a reported $35 million.

    “I sold it to Harris Corporation, then Harris built out 350 kW pods that are Tier 3 certified,” said Tolson. “Then Harris got a new CEO who was not supportive. He actually closed it, took a $140 million charge in the fourth quarter of 2012. Long story short, I bought it back.”

    Prime Property in Ashburn

    After the initial sale to Harris in 2010, DBT-DATA came to Ashburn, where its strategy was to market the facilities as a powered shell – undeveloped space with the power and fiber connectivity already in place. This allows for easy expansion for companies with the capital to build the data center infrastructure themselves.

    DBT-DATA bought a corporate campus at Beaumeade Circle in Ashburn, right in the heart of Loudoun County’s “Data Center Alley.” It did a deal with RagingWire Data Centers for a 123,000 square foot powered shell, which became the hub of RagingWire’s East Coast operations. Another of the buildings was recently sold to an unnamed global telecom provider.

    The company offers powered shell out of Ashburn, while Harrisonburg is full turn-key.

    “I think the prospects going forward are excellent,” said Tolson. “Ashburn as a market continues to grow, driven largely by interconnection opportunities and the rich fiber. In Ashburn, due to connectivity and pricing availability of power and the necessary talent pool from the human capital side, we will continue to expand and be very robust.”

    The Federal Opportunity

    At the heart of the Ashburn opportunity for DBT-DATA is the federal space.

    “I’m very bullish on the federal space right now,” said Tolson. “There’s a data center consolidation initiative that’s gaining momentum. There’s budget certainty. I think you’ll see a lot of postponed projects in the federal space come to fruition this year.”

    “If you look at federal IT spending, it’s $83 billion in annual spend, and a good proportion stays here,” said Tolson. “Northern Virginia has a technology center that is highly undervalued in terms of how successful and robust the technology sector is.”

    Tolson also believes that cloud will continue to bolster the market. “Cloud will have a bigger impact on managed hosting providers,” said Tolson. “Even with the adoption of cloud and IaaS, you will see there’s still the need for space, power and cooling. Cloud adoption will actually bode well for data center demand.”

    DBT-DATA’s strengths include its flexibility, access to capital, and the ability to be a capital partner to customers as well. Going forward, the company isn’t limiting itself to one particular model.

    “We’re looking for opportunistic opportunities,” said Tolson.

    2:00p
    VMware Launches Horizon Desktop as a Service

    VMware (VMW) launched Horizon DaaS (desktop as a service), a new cloud-based desktop service that delivers virtual desktops running on VMware vCloud Hybrid Service. Last fall at VMworld 2013, the company announced that it had acquired leading DaaS company Desktone and its platform for service providers. At its VMware Partner Exchange event last month, VMware partnered with companies like Google to give businesses secure cloud access to Windows applications, data and desktops on Google Chromebooks.

    “Our experience working with customers deploying DaaS the last several years have shown that the majority prefers a blended environment with both on-premise and cloud desktops,” said Sumit Dhawan, vice president and general manager, desktop products, End-User Computing, VMware. “However, very few solutions in the market can deliver a seamless end-user experience across multiple clouds like VMware Horizon DaaS.”

    Multiple Platforms, Seamless Experience

    The new DaaS solution gives customers the ability to blend public cloud desktops and on-premise VMware Horizon View private cloud desktops for a seamless end-user experience. In addition to Remote Desktop Services (RDS) sessions, VMware Horizon DaaS supports full Windows client desktops. It supports enterprise mobility by enabling end-user access to full Windows desktops and applications through the cloud using desktops or laptops (PC or Mac), zero/thin clients, Chromebooks or mobile devices (Android or iOS).

    “Businesses today are betting big on the cloud and Chromebooks used with web applications like Google Apps for Business fit the way we work today,” said Caesar Sengupta, vice president, product management, Google. “VMware Horizon™ DaaS helps companies transition to the cloud while supporting existing Windows applications. Chromebooks are secure, manageable and cost effective, making them a great option for businesses with the impending end of support and security updates on Windows XP.”

    Customers can choose between three virtual desktop deployment models from a single vendor: off-premise public cloud desktops, on-premise private cloud desktops, and hybrid cloud desktops. The DaaS offering will be delivered by partners and be compatible with other VMware-based cloud services.

    VMware Horizon DaaS can be sold the same way as on-premise VMware virtual desktops with a standard SKU, and partners can retain the billing relationship with customers. In addition, cloud service providers can deliver value added services around the desktops to increase market opportunity.

    VMware Horizon DaaS is available immediately from VMware or through select channel partners and resellers in the United States with global expansion expected later in the year.

    6:58p
    5th Annual 21st Century Data Center Symposium

    Integrated Design Group, a provider of green and modular data center design, will host an informative full-day conference focused on the “Future of Data Centers: Thinking Locally, Delivering Globally” on March 20, 2014 in Dallas, Texas.

    The event is intended for C-Level executives, IT leadership, data center managers, and real estate and facilities professionals. The 21st Century Data Center Symposium is sponsored by Integrated Design Group, Inc., which has offices in Dallas, Texas and Boston, Mass. For more than 10 years, Integrated Design Group (ID) has served as an architectural and engineering firm dedicated to the innovative design of data centers.

    Focusing on data center design, the symposium will educate and allow for the free exchange of ideas about data center design and construction. Presenters include:

    • James Brownrigg, Vice President and Manager of Business Development at Turner Construction
    • B.J. Butler, President of Butlers Lakeside Services
    • Shane Campbell, Client Executive for Guidepost Solutions Technology Design Consulting
    • Steve Carter, Vice President of 451 Planning Advisors
    • Don Hodges, Vice President of State Street Corporation
    • David Ibarra, Advanced Technology-Mission Core Market Leader at DPR Construction
    • Dennis Julian, P.E., ATD, DCEP, Principal at Integrated Design Group
    • Matthew Koerner, P.E., LEED AP, Principal at Critical Project Services LLC
    • Michael Lewis, Senior Director of Engineering for Fidelity Investments
    • Patrick Markham, Regional Vice President at Guidepost Solutions Technology Design Consulting
    • Jack McCarthy, P.E., DCEP, Principal at Integrated Design Group
    • Carlos Osuna, CIO and Head of Colocation Services at RedIT
    • Ira L. Schimmel, Executive Vice President at Terremark Technology Contractors, Inc. and Director of Real Estate Operations at Verizon Terremark
    • Dax Didier Simpson, ATD, CDCMP, Data Center Facilities Manager at KIO Networks
    • Roberto Velasco, Architect and General Manager at KMD Architects
    • Dan Verity, Project Manager at Guidepost Solutions Technology Design Consulting
    • Rick Wilson, Manager of International Trade Compliance at Emerson Network Power

    Venue

    Cityplace Conference Center
    2711 North Haskell Avenue
    Dallas, TX

    For more details visit the website for the 21st Century Data Center Symposium.

    For more events, return to the Data Center Knowledge Events Calendar.

    7:00p
    Velocity Conference 2014

    Velocity is about the people and technologies that keep the Web fast, scalable, resilient and highly available. From e-commerce to mobile to the cloud, Velocity is where the future of the Web takes shape. This year the event is in Santa Clara from June 24 through June 26.

    This conference focuses on the core aspects of building a faster and stronger web. Speakers represent a wide variety of companies and organizations.

    For more information and registration, visit Velocity 2014. Register before April 3 and get the best price available.

    Venue
    Santa Clara Convention Center
    5001 Great America Parkway
    Santa Clara, CA 95054

    Hotel
    The Hyatt Regency Santa Clara
    5101 Great America Parkway
    Santa Clara, CA 95054 (map)
    Phone: (408) 200-1234
    Fax: (408) 980-3990
    (Hotel is connected to the Santa Clara Convention Center)

    For more events, please return to the Data Center Knowledge Events Calendar.

    8:25p
    Northern California Data Center Summit

    The Third Annual Northern California Data Center Summit by CapRate Events will convene April 10 in San Francisco, drawing executives from Silicon Valley and San Francisco’s most active and innovative data center real estate and technology infrastructure companies.

    The one-day event features 50 speakers in 13 panel discussions; attendees can join the expected 400+ participants for discussion, debate and networking.
    Topics include:

    * How are large end-users and tech start-ups in the region planning their data center growth?
    * How have data center lease terms and the transaction time frame changed in recent years?
    * Exploring the concepts of “rightsizing,” “scalability,” and cloud delivery
    * How much data center space will end-users in SV and SF truly require?
    * How are prominent end-users and operators/service providers looking at needs today vs. tomorrow?
    * How are public agencies viewing data privacy vs. private companies?
    * Analysis of the impact of regulation on the data center industry and high-frequency trading

    Venue:

    St. Francis Yacht Club
    On the Marina
    San Francisco

    For more information and registration visit the Data Center Summit website.

    For more events, return to the Data Center Knowledge Events Calendar.

