Data Center Knowledge | News and analysis for the data center industry
Thursday, May 8th, 2014
Vacant Facebook Space Gets Leased Up in Silicon Valley
When Facebook migrated servers from leased data centers to its ultra-efficient server farm in Oregon, it left behind acres of empty space in Silicon Valley. The shift provided a major test of how a key data center market handles the exit of a major player, particularly when Facebook announced plans to sublease its vacant space, a strategy that loomed as a potential headache for landlords.
These data centers are now being filled with servers from companies like Alibaba, Groupon and Internap. About 7 megawatts of data center capacity has been leased by new tenants, most of whom have wound up working directly with landlords.
“Facebook has had success subleasing their space in Santa Clara,” Avison Young principals Jim Kerrigan and David Horowitz reported in their most recent market update.
The Challenges of Subleasing
The success in filling Facebook’s old space in Silicon Valley offers a hopeful sign for other data center markets where large tenants are vacating data center space, either to move into company-built data centers or to shift cloud capacity across a national or global footprint. That is especially true in northern Virginia, where Yahoo has announced plans to sublease 24MW of leased data center space in the heart of the nation’s busiest data center market.
Last June, Facebook confirmed that it was seeking to sublease space in Santa Clara, Calif. This prompted fears that the move would disrupt the pricing dynamics in the Santa Clara market. Facebook would have to continue paying its landlords, but could sublease the space at a discount to recoup some of those costs.
The wholesale data center providers in Silicon Valley have been actively involved in the deals. These landlords do not name their tenants. But industry sources, along with market reports from Avison Young, provide outlines of the deals that have filled the former Facebook space:
- Colocation and cloud services provider Internap has leased 3 MW of space previously occupied by Facebook in a Digital Realty Trust facility in Santa Clara. In its earnings report yesterday, Digital Realty said that it had signed a $12 million lease with a “former subtenant” in Santa Clara.
- Wholesale data center provider CoreSite restructured the lease at its single-tenant SV3 data center in Santa Clara, allowing it to reclaim 28,000 square feet of space, which was then leased to a new tenant identified as a “global end user.” CoreSite did not identify the companies involved, but Facebook is known to be the tenant at SV3, and Avison Young says Chinese Internet titan Alibaba is the new customer.
- E-commerce company Groupon is reported to have leased 1.5 MW of space at Vantage Data Centers in Santa Clara, which had previously been leased by Facebook on a short-term basis.
The CoreSite deal reflects the pragmatism with which landlords have been working with both Facebook and prospective tenants in the Silicon Valley market.
“We believe it’s an attractive win for both the existing customer at SV3 and the new customer at SV3,” said Tom Ray, CEO and president of CoreSite, in the company’s earnings call. “We were able to offer an aggressive, customer-friendly rental rate for the new customer, by which CoreSite, the new customer and the old customer all came out ahead of where they otherwise would have been.”
Ray said the five-year deal with the new customer was at below-market rates, but noted that “the old customer will continue to pay some degree of rent, and the net of the new rent and old rent is favorable compared to the old rent.”
Yahoo Marketing Virginia Space
Will this approach work in northern Virginia? Last month wholesale data center specialist DuPont Fabros Technology said that Yahoo has decided to sublease 24 MW of space in two DFT data centers in the Ashburn market.
In a company earnings call, Hossein Fateh, DFT’s CEO and president, said, “I believe subleasing this space may be difficult. Yahoo tells us they’re not vacating ACC2 until the end of the year, leaving only 9 months of term. The remaining term at ACC4 is between 3 and 5 years. Customers who spend this much capital require a longer lease term and visibility for extension rights to justify that expense.”
Fateh also noted that tenants who sublease space are not eligible for state-level incentives, including Virginia’s sales tax exemption on servers and storage equipment. “There is specific language within the legislation that there must be a direct relationship between the landlord and customer in order to receive the benefits,” said Fateh.
If the Silicon Valley experience is any indicator, Yahoo and DuPont Fabros will likely need to collaborate to succeed. “If they do subleasing, we’ll obviously be happy to work with them,” said Fateh.
In the Blink of an Eye: Does Your Business Operate with Web Scale IT?
Brandon Whichard is director of product management at Boundary.
In a recent adaptation of his bestselling book about high-speed trading, “Flash Boys: A Wall Street Revolt,” author Michael Lewis describes a five-millisecond window in late 2013 during which General Electric (GE) stock experienced heavy bid and offer activity resulting in 44 trades. Just keep in mind, these are milliseconds. Human beings perceive one second as a very short amount of time; a computer sees one second as a long time during which thousands of things can (and do) happen. One minute of computing can produce tens of thousands of application flows, and an hour, millions.
Research in the book showed that once a few curious traders realized the mind-blowing speed at which trades were happening was costing them money, they developed a trading strategy that helped guarantee they could get the trades they wanted without having the stock price artificially inflated by high-frequency traders. While milliseconds may not matter outside the stock market, seconds do matter, and to more industries than you might think.
More than Just Investing
If you are streaming a movie, music or a video game and the content buffers for a few seconds, you’re going to be annoyed, right? If it happens for a few minutes, you’re going to shut it down altogether. HBO Go recently buckled during the April 7 Season Four premiere of “Game of Thrones,” leaving many anxious viewers in the dark as it tried to unclog the network.
Take a more staid industry, such as hotels. It’s spring break, and dozens of families are checking in at the same time. Suddenly, your reservation system stalls and people are waiting, impatiently. The line grows longer. Will those people come back next year? Perhaps not. We can only guess at what (if any) checks, balances and redundancies the ObamaCare website had when it launched. Real-time alerts of application and network status may have been in the strategy, but the site and its underlying infrastructure were clearly not designed for Web scale.
For years, companies have been operating under the presumption that monitoring system health every few minutes is adequate. In the days of client-server and, more recently, three-tier web applications, application models were relatively static and engineered for highly predictable workloads in terms of load, usage patterns and application functionality. Now applications are dealing with unpredictable loads and are dynamically scaling to meet the needs of that load. The applications themselves are changing at a faster rate than ever before, as time to market becomes a critical component of business success.
Monitoring Time Frames
This means that today, monitoring performance metrics every minute or two is woefully inadequate, because you are missing the data points that indicate a brewing problem. The bottom line is that the longer it takes to detect a problem, the larger the impact it will have on your customers. You have to monitor more frequently to detect problems faster and prevent issues from affecting users. Waiting to address the problem after the fact is too late.
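To make the frequency argument concrete, here is a minimal, hypothetical sketch of a watcher that samples a metric every second; a once-a-minute poll would never see a spike that appears and clears between samples. The metric source, threshold and alert action are placeholders, not any particular vendor’s API.

```python
import random  # stand-in for a real metrics source
import time

THRESHOLD = 0.8      # e.g. 80 percent utilization
POLL_INTERVAL = 1.0  # seconds; a 60-second interval would miss short-lived spikes


def read_utilization():
    """Placeholder for a real collector (agent, SNMP, flow records, etc.)."""
    return random.random()


def watch():
    while True:
        value = read_utilization()
        if value > THRESHOLD:
            # A real system would page someone or trigger automated remediation here.
            print("ALERT: utilization %.2f exceeded %.2f" % (value, THRESHOLD))
        time.sleep(POLL_INTERVAL)


if __name__ == "__main__":
    watch()
```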
Operating differently, using second-by-second knowledge across a fault-tolerant, open-source architecture, is sometimes called “Web Scale” IT. Gartner describes it as “a pattern of global-class computing that delivers the capabilities of large cloud service providers within an enterprise IT setting.” The research firm predicts that by 2017, Web Scale IT will be an architectural approach found in 50 percent of global enterprises, up from less than 10 percent in 2013.
Organizations that focus on the tenets of Web Scale IT (open architecture, full automation, rigorous testing and continual monitoring of their service) not only prevent problems but can also influence buying decisions faster and grow sales, move markets or even help elect a president. President Obama’s 2012 reelection campaign staff made daily decisions affecting fundraising, messaging and many other strategies, relying on the ability to crunch massively large data sets all the time. The campaign’s website, databases and overall technology infrastructure were hosted on Amazon and incorporated a slew of open source technologies. You can bet that frequent testing and modern operations management tools were used to keep the machine running smoothly.
Web Scale IT Requires Rethinking How We Run IT
Here are the core methods that progressive companies are using to build a Web-Scale IT operation:
1. Architect for horizontal scalability. The high-volume, web-based architecture is built on peer-to-peer relationships, not a single control point. The new architecture provides a high tolerance for failure, as you have many components working with each other but completely independent of each other. Therefore, if one node goes down, the whole system isn’t brought to its knees but dynamically adjusts to accommodate the change. Open-source software models are intrinsic to this architecture.
2. Automate everything. IT automation and change management tools like Puppet and Chef are popular these days because they help companies deploy tens of thousands of instances in a few seconds. That’s not a task anybody can do manually, and if you are a Web Scale business, automation is your friend. Such tools can auto-scale components based on load, so that while you sleep, the system is scaling up and down as needed to maintain performance levels. These corrective actions are based on scripted, well-defined processes, and require detailed upfront work and ongoing maintenance (see the sketch after this list). Yet automating all facets of operations is what makes Web Scale IT possible.
3. DevOps all the way. DevOps enables rapid release cycles and that is part of the Web Scale attribute set. Without a well-managed DevOps culture enabled by modern, collaborative tools, Web Scale isn’t possible.
4. Use the cloud. Web Scale infrastructure is viable thanks to cloud computing. The cloud, with its global reach and elasticity, is the ideal place to house a Web Scale application. You need a provider that can let you spin up hundreds or thousands of instances to accommodate spikes in demand. You also want to make sure you can spread your instances across multiple availability zones so problems in one geographic region don’t take down your whole site. Amazon Web Services (AWS) is the preferred cloud platform for many organizations today and is a logical place to start.
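As an illustration of the kind of scripted, well-defined scaling process described in item 2, below is a minimal, hypothetical threshold policy for growing and shrinking an instance pool with load. The thresholds, limits and load source are invented for the example; in production this logic lives in tools like Puppet and Chef or in the cloud provider’s own auto-scaling service.

```python
import random

# Hypothetical thresholds and limits; real policies are tuned per workload.
SCALE_UP_LOAD = 0.75
SCALE_DOWN_LOAD = 0.25
MIN_INSTANCES = 2
MAX_INSTANCES = 100


def desired_capacity(current_instances, avg_load):
    """Return the instance count a simple threshold policy would request."""
    if avg_load > SCALE_UP_LOAD and current_instances < MAX_INSTANCES:
        return min(MAX_INSTANCES, current_instances * 2)   # scale out aggressively
    if avg_load < SCALE_DOWN_LOAD and current_instances > MIN_INSTANCES:
        return max(MIN_INSTANCES, current_instances // 2)  # scale in gradually
    return current_instances


if __name__ == "__main__":
    instances = 4
    for hour in range(6):
        load = random.random()  # stand-in for load measured by your monitoring
        instances = desired_capacity(instances, load)
        print("hour %d: load=%.2f -> %d instances" % (hour, load, instances))
```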
It won’t be long before every business will need to operate at Web scale just to stay alive. Being prepared begins with conversations with your team to assess how far along you are in the journey. With more and more business being done online and consumer expectations for speed growing all the time, seconds are fast becoming the lifeblood of IT.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Eucalyptus 4.0 Addresses Enterprise Private Cloud Pain Points
Eucalyptus Systems, provider of open source software for building private and hybrid clouds compatible with Amazon Web Services, has released Eucalyptus 4.0. The release addresses network, storage and other pain points that occur as private or hybrid cloud deployments grow, and builds on the software’s deep compatibility with AWS.
Enterprise-scale private clouds face some challenges as deployments grow. Growth can cause overload in the network that dramatically slows cloud application performance. Storage presents a major challenge, as it must scale with the cloud deployment.
Tim Zeller, vice president of sales and marketing at Eucalyptus, said most customers still used Eucalyptus for development and testing environments before deploying to Amazon.
“But we’re seeing more data analytics: they have a massive data set and want to run jobs on Eucalyptus,” he added. “We’ve had some early adopters here but we’re seeing more people coming to the forefront.”
Customer feedback was the primary driver of the release, Zeller said. “There’s existing customers that have started small to medium in size, but the growth that they’re seeing internally for the service has grown immensely.”
The new features include:
- The release adds the ability to use scale-out storage on top of commodity resources using open source and commercial solutions that implement the S3 interface (e.g. RiakCS, Ceph); a brief sketch of working against such an S3-compatible endpoint follows this list. The company has also partnered with Basho to resell and deliver commercial support for RiakCS Enterprise. The Object Storage Gateway allows users to leverage the active-active failover capabilities of their storage choice, simplifying large-scale deployments.
- Edge Networking simplifies cloud deployments by enabling the cloud to fit into existing network infrastructure. The goal is to make it easy to deploy Eucalyptus into existing network topologies. It also reduces potential bottlenecks by separating data and control paths, allowing more direct connection to cloud applications.
- Improved load balancing on the front end to support a large number of users. Clouds can now use multiple instances of front-end services to balance user traffic. Front-end services are the endpoints that service API calls, such as EC2, S3 and IAM. Deploying and load-balancing across many machines increases service availability and removes the potential for service bottlenecks.
- Dynamic cloud configuration is meant for those that outgrow existing infrastructure and need to move clouds. It allows administrators to change the configuration of cloud services (e.g. cloud controller, cluster controller) without reinstalling. This is used if a cloud has outgrown the capacity of a current network and needs to be installed in a larger network partition, or if a cloud needs to move due to IT consolidation or in the event of a merger or acquisition.
- The Eucalyptus hybrid cloud user console has been redesigned to improve user efficiency and support large-scale Eucalyptus deployments. The console, first launched in version 3.4, is accessible from desktop, phone or tablet, and the redesign incorporates the six to nine months of user feedback gathered since that release.
- There is new support for multiple security groups and S3 bucket lifecycle, enabling consistent resource policies across clouds. Multiple security groups give users the ability to define and combine fine-grained network access rules. The Elastic Load Balancer service now supports SSL termination and session stickiness, helping users deploy secure, scalable cloud applications. Eucalyptus 4.0 includes additional improvements to API fidelity to expand the breadth and diversity of cloud workloads that can seamlessly shift between public and private clouds.
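The practical value of an S3-compatible interface, referenced in the first item above, is that existing S3 tooling can be pointed at the private cloud unchanged. The sketch below shows the general idea using the standard boto3 Python client against a hypothetical private endpoint; the URL, region and credentials are placeholders, not Eucalyptus defaults.

```python
import boto3  # AWS SDK for Python; works against any service that speaks the S3 API

# Hypothetical endpoint and credentials for an S3-compatible object store
# (e.g. RiakCS or Ceph behind an object storage gateway).
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal",  # placeholder URL
    region_name="us-east-1",  # placeholder; many S3-compatible stores ignore the region
    aws_access_key_id="YOUR-ACCESS-KEY",
    aws_secret_access_key="YOUR-SECRET-KEY",
)

# The same calls that work against AWS S3 work against the private endpoint.
s3.create_bucket(Bucket="backups")
s3.put_object(Bucket="backups", Key="db/dump-2014-05-08.sql", Body=b"example payload")

for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```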
Maginatics Integrates Cloud Storage Platform With EMC’s ViPR
Distributed enterprise storage provider Maginatics launched Cloud Storage Platform (MCSP) version 3.0, targeted at enterprise and service provider customers. New features include enhancements to the Maginatics Virtual Filer, or MVF, and the introduction of two new performance features: Content Delivery Cache and Site Cache.
MVF also now integrates with an EMC ViPR object storage system. ViPR is EMC’s software-defined storage technology announced in 2013.
“With the addition of support for EMC ViPR object storage in particular, customers deploying an EMC-Maginatics solution will be able to seamlessly migrate their legacy applications to a software-defined cloud environment,” Jay Kistler, CTO and co-founder of Maginatics, said. “The combined platform delivers the vastly enhanced agility enterprises demand.”
MCSP 3.0 provides enhanced multi-layer adaptive caching for performance optimization over greater distances. To reduce WAN traffic and speed access for LAN-connected users who share constrained WAN pipes, a new Site Cache feature allows data to be cached at a local branch office.
Augmenting Site Cache, the new Maginatics Content Delivery Cache capability gives customers the ability to directly utilize a Content Delivery Network (CDN), without sacrificing data consistency. This enables them to provide an efficient read-cache experience for users who are continents away from their data.
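To make the layering concrete, here is a deliberately simplified, hypothetical read-through cache with a branch-office tier in front of a CDN tier. It illustrates the general pattern only; it ignores the invalidation and consistency machinery a real product needs and is not a description of Maginatics’ implementation.

```python
class ReadThroughCache:
    """Two-tier read cache: a local site cache backed by a CDN edge cache."""

    def __init__(self, fetch_from_origin):
        self.site_cache = {}   # fast, local to the branch office LAN
        self.cdn_cache = {}    # geographically distributed edge copy
        self.fetch_from_origin = fetch_from_origin

    def get(self, key):
        if key in self.site_cache:   # LAN hit: no WAN traffic at all
            return self.site_cache[key]
        if key in self.cdn_cache:    # edge hit: short hop instead of a cross-continent read
            value = self.cdn_cache[key]
        else:                        # miss: go to the object store of record
            value = self.fetch_from_origin(key)
            self.cdn_cache[key] = value
        self.site_cache[key] = value
        return value


if __name__ == "__main__":
    cache = ReadThroughCache(fetch_from_origin=lambda k: "contents of %s" % k)
    print(cache.get("reports/q1.pdf"))  # first read goes all the way to the origin
    print(cache.get("reports/q1.pdf"))  # second read is served from the site cache
```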
Maginatics says it has also improved flexibility and control while enabling the manipulation of shares and MVFs from the IT admin’s tool of choice. It introduced enhanced security and audit control capabilities with support for multi-user-name admin accounts, ensuring that all actions performed on the virtual filer are attributed to the user invoking the action.
The new version also introduces improved disaster recovery and virtual machine healing functions aimed at reducing the number of steps admins must perform during DR and failover scenarios, helping maintain business continuity through major disasters.
The Power of Port Replication in the Data Center
With so much more hitting the data center platform these days, how do admins keep up? How can you continue to deliver powerful performance and create that great end-user experience?
As most data center professionals can attest, bandwidth needs continue to increase for today’s business-driven technologies. This demand for higher bandwidth results in an increase in fiber optic ports for networking and storage. Over the past several decades of data center cabling evolution, this increase has led to significant challenges in managing fiber cabling infrastructures. In this whitepaper from CABLExpress, we learn how one key component makes managing this growth in fiber ports and density much easier: port replication in patch panel design.
Here’s the reality: Proper port replication simplifies fiber cable management and increases uptime potential.
Your data center platform will continue to change as business needs evolve. A big part of this change will revolve around application and data delivery. As mentioned earlier, keeping up with bandwidth requirements is critical for an optimal end-user experience. Part of optimizing the data center platform has to involve both the logical and physical networking layers. Again, this is where port replication can help. In addition to mapping active hardware ports one-to-one to patch panel ports, port replication effectively reduces the distance between active hardware ports. When used in combination with the recommended TIA-942 structured cabling design, all ports are replicated in a Main Distribution Area (MDA), which significantly reduces the physical distance between ports and further minimizes the opportunity for human error.
There are other direct benefits as well. These include:
- Increase in fiber ports and density.
- Minimizing user error and eliminating mistakes.
- Reducing the likelihood of downtime.
- Reducing the distance between active hardware ports.
- Minimizing the risk of increased signal loss by reducing the opportunity for excessive bending and stress on the connectors plugging into the switch.
Download this whitepaper today to learn how port replication can greatly reduce installation time and help mitigate the risk of costly mistakes that lead to downtime. Remember, the greater the resiliency you can achieve within your data center, the better your business can continue to operate. Effectively, port replication can significantly increase efficiency and manageability in the data center by simplifying the cabling process and eliminating opportunities for error.
Dell Reaffirms Commitment to Open, Agnostic Cloud
Dell and HP, while growing from similar original equipment manufacturer (OEM) roots, are taking two very different approaches in their transitions to the world of cloud services.
HP is investing in its own public Infrastructure-as-a-Service and Platform-as-a-Service offerings and the data centers to underpin them, while Dell is taking an open, ecosystem approach. Both are betting on OpenStack, the open source cloud technology that has become the de facto standard for building cloud services as an alternative to proprietary clouds from the likes of Amazon Web Services, Microsoft Azure or Google Compute Engine.
HP made a $1 billion commitment yesterday to building its Helion IaaS and PaaS offerings. In response, Dell reaffirmed its commitment to providing customers with open, standards-based architectures on which to build public, private and hybrid clouds.
Dell’s cloud-agnostic approach is a change from an earlier strategy. The company made a similar $1 billion commitment back in 2011, but later changed the strategy.
The move made it easier for Dell to get into bed with all the leading public clouds, including AWS, CenturyLink, Microsoft and Google, and meant less potential conflict of interest. The company also argues its approach will help customers avoid lock-in to a single cloud provider.
“Dell has a long-standing and proud heritage of delivering open computing and standards-based technology for consumers and businesses worldwide,” Sam Greenblatt, vice president and CTO of engineered solutions at Dell (and a former high-level HP tech executive), wrote in a blog post published Wednesday. “This has been a core part of Dell’s DNA since the company’s founding 30 years ago. There is a lot of industry chatter today about approaches to cloud, open source and different ideas on how to guide businesses toward deploying clouds, much of which feels like a step back to closed architectures.”
Dell continues to invest in its Cloud Partner Program and agnostic management capabilities for third-party clouds. The company serves as a single-source supplier, providing expert guidance to help customers assess, build and operate cloud environments.
One of Dell’s most significant cloud partnerships is with Red Hat, the open source software vendor. Together, the two companies build an enterprise-ready OpenStack-based private cloud solution.
The co-engineered solution is built on Dell hardware and the Red Hat Enterprise Linux OpenStack Platform, and Dell was the first OEM to offer the Red Hat platform.
Dell and HP are examples of a now-common story: tech giants readjusting to a rapidly changing technology landscape in which they must compete with cloud market incumbents, such as Amazon, for enterprise cloud dollars. Other examples are Cisco’s recent announcement of a $1 billion investment in cloud services of its own, IBM’s aggressive investment in the space and the rapid expansion of cloud capabilities by CenturyLink.
Facebook Taps Emerson for Rapid Deployment Design
Facebook has chosen Emerson Network Power to help implement its vision for a “rapid deployment data center” (RDDC) that will combine factory-built components with lean construction techniques, the companies said today. The first implementation of the new concept will be Facebook’s second data center building in Luleå, Sweden.
“Because of our relentless focus on efficiency, we are always looking for ways to optimize our data centers including accelerating build times and reducing material use,” said Jay Park, director of data center design at Facebook. “We are excited to work with Emerson to pilot the RDDC concept in Luleå and apply it at the scale of a Facebook data center.”
Emerson will deliver over 250 shippable modules to Luleå, including power skids, evaporative air handlers, a water treatment plant, and data center superstructure solutions. Facebook says the new approach to data center construction, which Data Center Knowledge described in detail in February, will be more efficient, use less material and be faster to deploy.
Facebook likens its RDDC approach to the snap-together furniture developed by Swedish retailer IKEA. The Facebook project builds on industry innovations in modular deployment, and could benefit Emerson if the concepts gain wider adoption as companies seek to speed time-to-market for data center construction projects.
“We worked with Facebook to understand their wants and needs, and we collectively developed an integrated, cost-effective, tailored solution,” said Scott Barbour, global business leader of Emerson Network Power. “This collaboration with Facebook illustrates our competencies in modular construction and showcases next-generation thinking. Emerson is able to deliver innovative, global, turnkey data center solutions comprising design, construction, critical infrastructure equipment, building management system, and services.”
The RDDC design will eliminate Facebook’s distinctive penthouse cooling system, which uses the entire second story of the building to process fresh air to cool its servers. This will dramatically shrink the amount of real estate required for cooling.
Panduit Beefs Up DCIM With SynapSense Acquisition
Physical infrastructure vendor Panduit has acquired SynapSense, a data center infrastructure management vendor, for an undisclosed sum, to enhance cooling monitoring capabilities of its DCIM offering. Cooling is a major cost driver in the data center, and SynapSense specializes in thermal risk management and cooling energy savings.
Tinley Park, Ill.-based Panduit’s move is an example of ongoing consolidation in the growing DCIM field, where vendors race to beef up their offerings with best-of-breed features through acquisitions, partnerships and integration. Analysts expect this market to continue to consolidate throughout this year.
Panduit has a broad portfolio of physical infrastructure hardware, services and DCIM software, while SynapSense specializes in wireless monitoring and cooling-system control for enterprise and service-provider data centers. SynapSense monitoring and cooling automation technology will complement Panduit’s current strengths in power, asset-management and connectivity.
The deal also means broader worldwide reach for SynapSense products, which have only been available in North America, much of Asia and, through a partnership with Stulz, in Australia and New Zealand. Panduit plans to launch the solution globally in the coming months.
Panduit President Tom Donovan said SynapSense had a proven track record of helping data centers reduce energy cost and unlock stranded capacity. “This has resulted in strong, trusted partner relationships with leaders in the banking, ISP, hosting, retail, corporate and government sectors,” he said.
In 2012 Panduit acquired Unite Technologies, expanding its capabilities for energy management and environmental monitoring solutions.