Data Center Knowledge | News and analysis for the data center industry
Wednesday, November 20th, 2013
| 12:30p |
Carter Validus Buys AT&T Data Center

Sale-leasebacks continue to be a popular option for data center real estate investment trusts (REITs). Today Carter Validus Mission Critical REIT announced that it has conducted its second sale-leaseback transaction of an AT&T data center, paying $110 million for an AT&T facility in the Nashville suburb of Brentwood, Tenn.
The 103,000 square foot property serves as AT&T’s main communications hub for Tennessee and Kentucky, and features 75,000 square feet of mechanical space, with the remainder utilized for engineering and support functions.
“This acquisition continues to reinforce our commitment to purchase high quality, mission critical real estate leased to creditworthy tenants in strategic markets throughout the United States,” said Michael Seton, President and Chief Investment Officer, Carter/Validus Advisors, LLC.
Last month Carter Validus conducted a sale-leaseback deal with AT&T for a Wisconsin data center.
A sale-leaseback option typically involves a property owner selling their building to a second party, while agreeing to continue to lease space in the building. The transaction generates cash for the former owner (now the tenant), and provides the new owner steady rent from the lease. These deals are particularly attractive when the initial owner is a blue-chip company with a strong credit rating.
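To make the economics concrete, here is a toy calculation of a sale-leaseback. The $110 million purchase price comes from the Brentwood deal above; the 8 percent capitalization rate is a hypothetical assumption for illustration only.

```python
# Hypothetical sale-leaseback economics. The purchase price is from the
# Brentwood deal above; the cap rate is an illustrative assumption.
def annual_rent(purchase_price: float, cap_rate: float) -> float:
    """Rent the new owner collects: price times capitalization rate."""
    return purchase_price * cap_rate

price = 110_000_000    # purchase price of the Brentwood facility
cap_rate = 0.08        # assumed 8% cap rate (hypothetical)

rent = annual_rent(price, cap_rate)
print(f"Seller's cash raised:    ${price:,.0f}")
print(f"New owner's annual rent: ${rent:,.0f}")  # $8,800,000 at these assumptions
```

The seller converts an illiquid building into cash on day one, while the buyer locks in a rent stream whose reliability depends on the tenant's credit rating, which is why blue-chip tenants like AT&T make these deals attractive.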
Carter Validus Mission Critical REIT is focused on two sectors, data centers and healthcare, citing societal trends that it believes will boost demand for data storage and outpatient healthcare.

| 1:00p |
Facebook Ops: Each Staffer Manages 20,000 Servers

Delfina Eberly, Director of Data Center Operations at Facebook, presented the Tuesday morning keynote about optimizing data center operations. Because Facebook runs such an enormous volume of servers, its hardware strategy focuses on serviceability, starting from the ground up by influencing server design so that equipment in the data hall can be repaired as quickly and easily as possible. (Photo by Colleen Miller.)
SAN ANTONIO - Facebook has been an industry leader in building its Internet infrastructure for scalability. That includes the scalability of the people that work in the company’s data centers.
Each Facebook data center operations staffer can manage at least 20,000 servers, and for some admins the number can be as high as 26,000 systems, according to Delfina Eberly, Director of Data Center Operations at Facebook. Eberly was the keynote speaker Tuesday morning at the 7×24 Exchange 2013 Fall Conference, speaking on “Operations at Scale.”
Facebook’s performance appears to break new ground in the server-to-admin ratio, which has rarely exceeded 10,000 to 1 (see High Scalability for more). The company’s success affirms the potential of an integrated approach in which the operations team works closely with other teams in IT and facilities.
Data center operations is a critical skill at Facebook, which now has 1.15 billion users, including 720 million who log in daily. Each day, Facebook users share 4.75 billion content items and “like” 4.5 billion items. The company now stores more than 240 billion photos, and adds 7 petabytes of photo storage each month.
Automated Troubleshooting
To manage all that activity, Facebook has developed software to automate many aspects of data center operations. That includes software known as CYBORG, which detects problems with servers and attempts to fix the problems. If CYBORG exhausts automated repair options, it will send an alert to dispatch a data center staffer to investigate the issue.
“Our goal is not to deploy a technician to the data center floor unless they actually have to physically handle a server,” said Eberly.
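The triage flow Eberly describes can be sketched as a simple loop: check health, attempt remote fixes in order of cost, and dispatch a person only as a last resort. This is a hypothetical illustration of the pattern, not Facebook's actual CYBORG code; every function name and fault code below is invented.

```python
# Hypothetical sketch of the automated-remediation pattern described above.
# This is NOT Facebook's CYBORG code; names and fault codes are invented.

def detect_issue(server):
    """Pretend health check: returns a fault code, or None if healthy."""
    return server.get("fault")

def restart_service(server, fault):
    # In this toy model, a soft restart clears transient faults.
    if fault == "transient":
        server["fault"] = None
        return True
    return False

def reimage(server, fault):
    # Reimaging handles software corruption, but not hardware faults.
    if fault == "software":
        server["fault"] = None
        return True
    return False

def automated_repairs(server, fault):
    """Try cheap remote fixes first; return True if any succeeds."""
    for fix in (restart_service, reimage):
        if fix(server, fault):
            return True
    return False

def handle(server):
    """Dispatch a technician only when remote repairs are exhausted."""
    fault = detect_issue(server)
    if fault is None:
        return "healthy"
    if automated_repairs(server, fault):
        return "auto-repaired"
    return "ticket: dispatch technician"

print(handle({"fault": "transient"}))  # auto-repaired
print(handle({"fault": "bad_disk"}))   # ticket: dispatch technician
```

The design choice mirrors the quote above: hardware-level faults fall through every automated fix and only then generate a ticket that puts a technician on the floor.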
The emphasis on automation is not because Facebook is interested in unmanned data centers, or having robots operate facilities. It’s because Facebook values its workers, according to Eberly.
“We want to hang onto our talent,” she said. “The way you do that is to give them the opportunity to work on high-value tasks. We want them to stay and improve. This matters to us.”
Eberly is a veteran of the data center industry, beginning her career at McKesson in 1998, followed by stints at colocation pioneer Exodus Communications and Critical Path.
Design Supports Serviceability
At Facebook, the operations team’s time and workloads are considered during Facebook’s hardware design. An example: all servers are designed to be serviced from the front, so data center staffers have no need to enter the hot aisle. The server is designed so drives and components can be replaced without tools. The result: Facebook has reduced the time needed to repair servers by 54 percent.
The Facebook operations team carefully tracks equipment failure rates, and the data is reviewed when the company makes supply chain decisions, Eberly said. The company’s asset management and ticketing systems track hard drives and other components by serial numbers, providing a complete insight into the life cycle of each piece of hardware.
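Tracking failure rates per component model, of the kind described above, can be pictured with a toy calculation; the drive models and records below are entirely hypothetical, not Facebook data.

```python
# Toy failure-rate-by-model calculation. All serials, models and
# outcomes below are hypothetical, not Facebook data.
from collections import defaultdict

# serial number -> (model, failed?)
fleet = {
    "SN001": ("drive-A", True),
    "SN002": ("drive-A", False),
    "SN003": ("drive-B", False),
    "SN004": ("drive-B", False),
}

def failure_rate_by_model(fleet):
    """Aggregate per-serial records into a failure rate per model."""
    totals, failures = defaultdict(int), defaultdict(int)
    for model, failed in fleet.values():
        totals[model] += 1
        failures[model] += failed
    return {m: failures[m] / totals[m] for m in totals}

print(failure_rate_by_model(fleet))  # {'drive-A': 0.5, 'drive-B': 0.0}
```

Rolling serial-level records up into per-model rates is what lets such data feed supply chain decisions, as the article notes.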
Eberly said these systems are sophisticated, but didn’t require an army of developers. Facebook has three software engineers dedicated to the operations team. “They’re absolutely vital to the work we do in the data centers,” she said.

| 1:30p |
Savvis to Add Data Center in Toronto

Yesterday was a big day for CenturyLink’s cloud computing business. While CenturyLink was busy unveiling its acquisition of Tier 3, the company’s Savvis cloud unit was announcing some expansion of its own.
Savvis, which CenturyLink bought back in 2011, announced plans Tuesday to expand its presence in Canada with the launch of a new data center in the Toronto region in mid-2014. The 100,000-square-foot TR3 data center will support up to five megawatts of IT load.
“Toronto holds international status as a leading center of business across industries that rely on secure, agile IT services to grow and innovate,” said Ash Mathur, regional vice president and country manager for Savvis’ Canadian operations. “This data center signifies our commitment to powering business in Canada through world-class infrastructure services, complemented by advanced solutions for big data, business applications, content management and e-commerce initiatives for Canadian and global organizations.”
Within five years, 70 percent of Canadian IT leaders plan to outsource a majority of their infrastructure to colocation, managed hosting and cloud services, according to a recent survey conducted by international research firm Vanson Bourne for Savvis. The TR3 data center – Savvis’ second in the Toronto region and fourth in Canada – will enable businesses to deploy agile hybrid solutions that access these outsourcing services through Savvis.
“As we expand our global data center footprint, experience shows our clients expect more than one type of infrastructure service,” said David Meredith, senior vice president and general manager, Savvis. “We have designed a broad portfolio to drive greater business value through carrier diversity, interconnectivity and our Savvis ClientConnect service, which lets organizations promote services, drive efficiencies and generate new opportunities with others inside the data center ecosystem.”
Savvis operates 55 data centers worldwide, with more than 2.4 million square feet of gross raised floor space throughout North America, Europe and Asia.

| 1:30p |
Evolution of Storage: VM-Aware Storage for Virtualization

Sachin Chheda is the Director of Product and Solution Marketing at Tintri; he has long been involved in information technology, with positions at HP, NetApp and Nimble Storage, developing and taking to market products that power some of the largest enterprises. This is part two of a two-part series about the mismatch between virtualization and storage.
SACHIN CHHEDA, Tintri
In part one, we discussed the mismatch between storage and virtualization in management, performance and scalability, data protection, and TCO and ROI.
Solving the mismatch between virtualization and storage can help IT in its efforts to move to 100 percent virtualization using software-defined infrastructure. The first requirement is for storage to understand and operate at the VM level, which means doing away with the traditional storage constructs of LUNs and volumes. This is one of the main concepts behind a VM-aware storage architecture, which represents an evolution of storage beyond SAN and NAS specifically for the virtualized data center.
Storage and Virtualization Need to be Intelligent
Operating with, and serving up, individual VMs is not enough. To make virtualization predictable for tier-one apps and end-user desktops, and to allow workloads to be mixed and matched, storage should be intelligent enough to determine which data is active at the individual virtual machine (VM) level and to guarantee equitable performance, or quality of service (QoS), for all VMs.
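One common way to enforce per-VM QoS is to give each VM its own token bucket that limits its IOPS. The sketch below illustrates that general technique; it is not Tintri's or any vendor's actual implementation, and all names and rates are invented.

```python
import time

class TokenBucket:
    """Per-VM IOPS limiter sketch: a generic token-bucket rate limiter,
    not any storage vendor's actual QoS implementation."""

    def __init__(self, rate, capacity):
        self.rate = rate          # IOs refilled per second (the VM's share)
        self.capacity = capacity  # burst allowance in IOs
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, n=1):
        """Admit an n-IO request only if the VM is within its share."""
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

# One bucket per VM: a noisy neighbour exhausts only its own bucket.
buckets = {"vm-web": TokenBucket(rate=1000, capacity=100),
           "vm-db":  TokenBucket(rate=5000, capacity=500)}

print(buckets["vm-web"].allow())  # True: the first IO fits in the burst allowance
```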
Complementing QoS, data reduction techniques that maximize the use of expensive storage such as flash, together with hybrid flash-plus-disk setups, are a must for virtualized environments. When properly paired with high-capacity, dense disk storage, compression and deduplication of data on flash can cost-effectively deliver both performance and capacity beyond what flash-only storage can claim.
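The combination of deduplication and compression can be sketched as a toy block store: each unique block is hashed, stored once in compressed form, and duplicates cost only a reference. This is a minimal illustration of the general technique, not a production design.

```python
# Toy block-level dedup + compression sketch (not a production design).
import hashlib
import zlib

def dedupe_and_compress(data: bytes, block_size: int = 4096):
    """Keep each unique block once (compressed) plus a recipe of
    block hashes that can rebuild the original data."""
    store, recipe = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:       # new content: compress and store it
            store[digest] = zlib.compress(block)
        recipe.append(digest)         # a duplicate costs only a reference
    return store, recipe

def rebuild(store, recipe):
    """Reassemble the original bytes from the dedup store."""
    return b"".join(zlib.decompress(store[d]) for d in recipe)

data = b"A" * 4096 * 3 + b"B" * 4096  # three identical blocks, one unique
store, recipe = dedupe_and_compress(data)
print(len(recipe), "logical blocks,", len(store), "stored")  # 4 logical blocks, 2 stored
```

VM images are a favorable case for this approach, since many VMs cloned from the same template share large runs of identical blocks.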
Monitoring Analytics Is Critical
Performance analytics are also a must for storage in virtual environments. Deep insight into the performance of individual VMs (not LUNs or volumes holding multiple VMs) can simplify troubleshooting, help identify the impact of new workloads and detect trends. Jeff Boles from the Taneja Group estimates that an end-to-end view of VM performance can yield substantial time savings — on the order of days — every time administrators must troubleshoot performance issues in their virtualized environments.
Adopting a modular approach to scaling using VMs and virtual disks as the unit for deploying storage is now possible using virtualization functionality such as VMware Storage DRS to load balance across different storage systems. This greatly simplifies how administrators can scale their environment without the complexity of scale-out or scale-up storage solutions. Adding the ability to control and monitor individual storage systems from a centralized administrative interface can further reduce the overhead IT faces with storage.
Lastly, VM-aware storage must provide administrators the ability to protect individual VMs (and not LUNs/volumes) with efficient snapshots and WAN-efficient replication. Jason Buffington, a senior analyst with Enterprise Strategy Group, has analyzed and written extensively about the concept of snapshots for protecting data. In his October 2012 blog post, “Snapshots vs. Backups — a great debate, no longer,” he advises first deciding how to recover and then picking the protection method. VM-aware storage must also offer the ability to elegantly and quickly recover individual VMs from backups, in addition to enabling quick and efficient backups of individual VMs. Efficient VM-granular replication over the WAN is also a must to facilitate disaster recovery.
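The difference between per-VM and per-LUN protection can be illustrated with a toy model in which snapshots capture one VM's disks only, so a single VM can be rolled back without touching its neighbors. All class and method names below are invented for illustration.

```python
# Toy model of per-VM (rather than per-LUN) snapshots.
# All names are invented; this is not any vendor's actual design.
import copy

class VMStore:
    def __init__(self):
        self.vms = {}        # vm name -> current disk contents (block -> value)
        self.snapshots = {}  # (vm name, tag) -> frozen copy of that VM only

    def write(self, vm, block, value):
        self.vms.setdefault(vm, {})[block] = value

    def snapshot(self, vm, tag):
        """Capture one VM's state; other VMs are not included."""
        self.snapshots[(vm, tag)] = copy.deepcopy(self.vms.get(vm, {}))

    def restore(self, vm, tag):
        """Recover a single VM without touching its neighbours."""
        self.vms[vm] = copy.deepcopy(self.snapshots[(vm, tag)])

s = VMStore()
s.write("vm-db", 0, "v1")
s.snapshot("vm-db", "nightly")
s.write("vm-db", 0, "corrupted")
s.restore("vm-db", "nightly")
print(s.vms["vm-db"][0])  # v1
```

With a per-LUN scheme, rolling back the same snapshot would revert every VM sharing that LUN, which is exactly the granularity problem VM-aware storage sets out to fix.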
Using VM-aware storage with cost-effective performance and capacity, simplified management and scalability, deep insight, and efficient protection and disaster recovery can enable IT to achieve 100 percent virtualization with excellent ROI.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

| 2:30p |
Intel Launches Data Tools, New Phi HPC Accelerator 
Intel (INTC) has launched new software tools to gain greater insight into data, and disclosed the final form factors and memory configurations for the second generation Phi processor, code named “Knights Landing.” The announcement was made at the Supercomputing Conference (SC13) in Denver this week.
“In the last decade, the high-performance computing community has created a vision of a parallel universe where the most vexing problems of society, industry, government and research are solved through modernized applications,” said Raj Hazra, Intel vice president and general manager of the Technical Computing Group. “Intel technology has helped HPC evolve from a technology reserved for an elite few to an essential and broadly available tool for discovery. The solutions we enable for ecosystem partners for the second half of this decade will drive the next level of insight from HPC.”
Second Generation Xeon Phi
Demonstrating its continued parallel architecture prowess, Intel was featured in the November 2013 Top500 list of the most powerful supercomputers in the world, with two of the top 10 systems using Intel Xeon Phi, and 82.4 percent of all Top500 systems having Intel processors.
At SC13 Intel unveiled how the next generation Intel Xeon Phi product (codenamed “Knights Landing”), available as a host processor, will fit into standard rack architectures and run applications entirely natively instead of requiring data to be offloaded to the coprocessor. This will reduce programming complexity and eliminate “offloading” of the data, thus improving performance and decreasing latencies caused by memory, PCIe and networking. The new design uses 14nm manufacturing technology and will be available as a host CPU with high-bandwidth memory on a processor package.
Knights Landing will also offer developers three memory options to optimize performance. Unlike other exascale concepts that require programmers to develop code specific to one machine, new Intel Xeon Phi processors will provide the simplicity and elegance of standard memory programming models.
Data-Driven Discovery
Targeting the increasing growth of unstructured data, Intel announced its HPC Distribution for Apache Hadoop software. The new release combines the Intel Distribution for Apache Hadoop software with the Intel Enterprise Edition of Lustre software to deliver an enterprise-grade solution for storing and processing large data sets. This combination allows users to run their MapReduce applications, without change, directly on shared Lustre-powered storage, making big data workloads fast, scalable and easy to manage.
Offered through the Amazon Web Services Marketplace, a new Cloud Edition of Lustre is available from Intel as a scalable, parallel file system to maximize storage performance and cost effectiveness. The software is ideally suited for dynamic applications, including rapid simulation and prototyping. For urgent or unplanned work that exceeds a user’s on-premise compute or storage capacity, the software can be used to cloud-burst HPC workloads, quickly provisioning the needed infrastructure before moving the work into the cloud. Intel and its ecosystem partners are bringing turnkey solutions to market to make big data processing and storage more broadly available, cost effective and easier to deploy.

| 2:53p |
Photo Highlights from 7X24 Exchange Sessions & Events

SAN ANTONIO – The 7X24 Exchange organization, whose members manage mission critical facilities in all parts of the United States, marks its 24th year this year. The fall conference drew about 820 participants to discuss issues significant to running the data centers that power enterprises, academia, mega-scale Internet services and more. Check out our photo highlights from the sessions and evening events — Scenes from 7X24 Exchange Conference: Sessions and Evening Events.

| 3:30p |
Salesforce.com Introduces Salesforce1 Platform 
At Dreamforce 2013 this week in San Francisco Salesforce.com (CRM) introduced Salesforce1, a new social, mobile and cloud customer platform, built to transform sales, service and marketing apps. Salesforce1 is the first CRM platform for developers, ISVs, end users, admins and customers moving to the new social, mobile and connected cloud. Major ISVs, such as Dropbox, Evernote, Kenandy and LinkedIn, are now on the Salesforce1 Customer Platform.
The Internet of Customers
Built for next-generation applications, Salesforce1 has 10 times more APIs and services built in, letting developers quickly and easily create personalized experiences for connected smartphones and wearable smart devices. With Salesforce1, every ISV can accelerate its growth by building, selling and distributing apps for the connected customer.
ISVs such as Evernote and Kenandy are building mobile-ready apps on the platform and leveraging the power of the Salesforce1 AppExchange to market and sell these apps. A new mobile app built on the Salesforce1 Customer Platform allows users to access and experience Salesforce everywhere on any form factor.
With Salesforce1, the more than 10 million Visualforce pages and custom actions are mobile-enabled. Now with the new Visualforce1, administrators can combine fields, objects and even other services into pages, components and apps that run within Salesforce1, making it simple to build and distribute apps through a single mobile platform.
With the new Salesforce1 Communities, Heroku1 and ExactTarget Fuel, companies have the customer platform to connect with their customers in a whole new way. Companies can now build and deploy thriving communities that connect customers, partners and products with the all-new Salesforce1 Communities. Heroku and Salesforce.com alum Adam Gross founded Cloudconnect, which is now part of Heroku, a Salesforce.com company.
A new Salesforce1 AppExchange is built on the Salesforce1 Customer Platform, allowing major ISVs to build, market and distribute the most engaging apps possible to connect with customers and end users in a whole new way. With new APIs and mobile-ready tools, any ISV can build custom apps, from accounting to contract management to ERP, faster than ever before and bring the most important data into the central feed of Salesforce.
The new Salesforce1 Customer Platform is now generally available and included with all user licenses of Salesforce CRM and the Salesforce Platform.