Data Center Knowledge | News and analysis for the data center industry
Friday, May 8th, 2015
12:00p
How Facebook Cut 75 Percent of Power It Needs to Store Your #tbt Photos
One reason a service like Facebook grows and maintains popularity is by ensuring high performance for users. If it took too long to upload or view photos on Facebook, for example, the social network would probably not be growing as quickly as it has been.
Optimizing everything for performance is a costly exercise, however, and at Facebook’s scale it is an extremely costly exercise. The good news is that, at that scale, even a small infrastructure efficiency improvement can translate into millions of dollars saved.
The company’s infrastructure team, including software, hardware, and data center engineers, spends a lot of its time thinking about where that next degree of efficiency is going to come from. Early last year, the product of one such project came to life.
Two Facebook data centers designed and built specifically to store copies of all user photos and videos started serving production traffic. Because they were optimized from the ground up to act as “cold storage” data centers for a very specific function, Facebook was able to substantially reduce its data center energy consumption and use less expensive equipment for storage.
This week, the Facebook team that designed these cold storage facilities in Prineville, Oregon, and Forest City, North Carolina, shared details about the design and the savings they were able to achieve.
Data Center Design Optimized for Single Purpose
The way Facebook ensures user content is always available and retrieved quickly is by storing lots and lots of copies of every media file in its data centers. Copies of all those files are stored in the primary Facebook data centers and in the cold storage facilities.
A newer, more frequently accessed file, however, gets more copies in the “hot” data centers than older files that are not in as much demand. The main function of the cold storage sites is to make sure a file is always available for retrieval, regardless of how many copies of it are stored in a hot data center, explained Kestutis Patiejunas, a Facebook software engineer who has been involved in the cold storage project from day one.
Because they store replicated files, the cold storage facilities could be built without any redundant electrical infrastructure or backup generators that are traditionally present in data centers. That’s one way the team was able to cut cost.
The cold storage systems themselves are a modified version of the Open Vault, a Facebook storage design open sourced through its Open Compute Project initiative. The biggest and most consequential modification was making it so that only one hard drive in a tray was powered on at a time.

A modified Open Rack, the cold storage rack has fewer power supplies, fans, and bus bars (Photo: Facebook)
The storage server can power up without sending power to any of the drives. Custom software controls which drive powers on when it is needed.
This way, a cold storage facility only needs to supply enough power for six percent of all the drives it houses. Overall, the system needs one-quarter of the power traditional storage servers need.
This allowed the team to further strip down the design. Instead of three power shelves in the Open Rack, a cold storage rack has only one, and there are five power supplies per shelf instead of seven. The number of Open Rack bus bars was reduced from three to one.
Because the hard drives are never all powered on at the same time, the number of fans per storage node was reduced from six to four.
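To make the one-drive-per-tray constraint concrete, here is a minimal sketch of the kind of gating the custom software has to enforce. This is a hypothetical illustration, not Facebook’s actual software; spin_up and spin_down stand in for whatever hardware interface the real system uses.

```python
# Hypothetical sketch of the one-drive-per-tray power constraint
# described above -- not Facebook's actual software.
import threading
from contextlib import contextmanager

def spin_up(tray_id, drive_id):    # stand-in for a real hardware call
    print(f"tray {tray_id}: powering on drive {drive_id}")

def spin_down(tray_id, drive_id):  # stand-in for a real hardware call
    print(f"tray {tray_id}: powering off drive {drive_id}")

class TrayPowerGate:
    """Allows at most one powered-on drive per tray at any time."""

    def __init__(self, tray_ids):
        self._locks = {t: threading.Lock() for t in tray_ids}

    @contextmanager
    def powered(self, tray_id, drive_id):
        with self._locks[tray_id]:  # block until the tray is free
            spin_up(tray_id, drive_id)
            try:
                yield               # caller accesses the drive here
            finally:
                spin_down(tray_id, drive_id)

gate = TrayPowerGate(tray_ids=[0, 1])
with gate.powered(tray_id=0, drive_id=5):
    pass  # read or write drive 5 on tray 0; tray 0 is locked meanwhile
```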
10+10=14
In addition to optimizing the hardware architecture, Patiejunas and his colleagues used a technique called Reed-Solomon error correction to reduce the amount of data storage capacity needed to store copies. The technique basically allows a user to store less than a full redundant copy of a file in a different location but still be able to recover it in full if one of the locations becomes unavailable.
If, for example, a file is split into 10 parts, the number of parts needed to store it across two locations would be 14 instead of the full 20.
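For readers who want to see the technique in action, here is a minimal sketch using the open source reedsolo Python package. It illustrates the general 10-data-plus-4-parity idea at the level of byte symbols; it is not Facebook’s implementation, and the chunk sizes are toy values.

```python
# Minimal Reed-Solomon erasure-coding sketch using the open source
# "reedsolo" package -- an illustration of the technique, not
# Facebook's code.
from reedsolo import RSCodec

rsc = RSCodec(4)                 # 4 parity symbols per codeword

data = bytes(range(10))          # stand-in for 10 chunks of a file
codeword = bytearray(rsc.encode(data))
assert len(codeword) == 14       # 10 data + 4 parity, not 20

lost = [0, 3, 7, 12]             # simulate 4 chunks lost with a site
for i in lost:
    codeword[i] = 0

# With the erasure positions known, all 4 losses are recoverable.
# In reedsolo 1.x, decode returns (message, full_codeword, errata_pos).
recovered = rsc.decode(codeword, erase_pos=lost)[0]
assert bytes(recovered) == data
```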
What’s Next?
While the cold storage setup is working well, and both data centers have lots of room to accommodate more data, Patiejunas and his team are already thinking about their next move. Besides adding another cold storage facility elsewhere in the U.S., one of the next steps would be to do cold storage replication across multiple data centers.
Today, data from a hot data center on the West Coast is backed up in a cold storage site on the East Coast and vice versa. The next step would be to apply the Reed-Solomon technique across multiple geographically remote cold storage sites, Patiejunas said.
When there is a third cold storage site, the topology would be virtually infallible. If data is spread across three data centers, a file will be available even if an entire site goes down completely, and the chances that two out of three data centers will go down are very slim, he said.

3:00p
CoreOS, Supermicro Partner on Web-Scale Servers for Enterprises
At its CoreOS Fest conference this week in San Francisco, CoreOS revealed it has formed partnerships with Supermicro and Redapt through which white box servers based on the latest generation of Intel processors will be certified to run the company’s distribution of Linux.
The partnerships are aimed primarily at enterprise IT organizations that are looking to move away from commercial server platforms in favor of lower-cost white box servers. According to CoreOS CEO Alex Polvi, those organizations are starting to copy the data center architectures that companies such as Google have developed.
The rate at which enterprise IT organizations will embrace new data center platforms remains to be seen. Most of the usage of containers and microservices to date has been confined to application development projects running in the cloud.
But as more of those applications begin to find their way into production, it’s only a matter of time before many of them get deployed on private clouds inside enterprise data centers and hosting facilities. As that process occurs, Supermicro appears to be betting that enterprise IT organizations will simultaneously reevaluate IT infrastructure investments that were originally made to support an entirely different era of computing.
“IT organizations are moving to containers and microservices,” says Polvi. “We’re starting to change the way people think about infrastructure altogether.”
While the latest Tectonic release of CoreOS is not yet generally available, it’s already clear that organizations are looking for server platforms that are much simpler to manage, Polvi adds. To that end, Tectonic embeds Google’s open source Kubernetes framework for container management in CoreOS to make it simpler to manage and orchestrate containers.
As part of that effort, CoreOS is also moving to make sure the security frameworks surrounding those containers are robust enough to support production application environments.
Under the terms of the alliance, Supermicro will work with Intel to develop a pre-built rack, while Redapt will provide systems-integration expertise.
Beyond simply copying the data center architectures used by large-scale web companies, enterprise IT organizations are also looking to employ more automation across converged sets of IT infrastructure. CoreOS has been working with Intel on implementing a software-defined infrastructure architecture for the data center.
As defined by Intel, that architecture consists of an orchestration layer to manage workloads, a composition layer to manage configurations and performance, and a hardware pool that keeps track of physical hardware resources. At the same time, Intel has extended its Trusted Execution Technology (TXT) to include support for Trusted Compute Pools (TCP) that better isolate application workloads.
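As a loose sketch of how those three layers relate (my own illustrative modeling of Intel’s description, not Intel code; every name below is hypothetical):

```python
# Illustrative-only model of the three SDI layers described above.
# All names are hypothetical; this is not Intel software.
from dataclasses import dataclass, field

@dataclass
class HardwarePool:
    """Keeps track of physical hardware resources."""
    free_cpus: int = 64

    def claim(self, cpus: int) -> int:
        assert cpus <= self.free_cpus, "not enough hardware in the pool"
        self.free_cpus -= cpus
        return cpus

@dataclass
class CompositionLayer:
    """Manages configurations drawn from the hardware pool."""
    pool: HardwarePool = field(default_factory=HardwarePool)

    def compose(self, cpus: int) -> dict:
        return {"cpus": self.pool.claim(cpus)}

@dataclass
class OrchestrationLayer:
    """Places workloads onto composed configurations."""
    composer: CompositionLayer = field(default_factory=CompositionLayer)

    def schedule(self, workload: str, cpus: int) -> None:
        config = self.composer.compose(cpus)
        print(f"running {workload!r} on {config}")

OrchestrationLayer().schedule("web-tier", cpus=4)
```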
3:30p
Friday Funny: Pick a Caption for Smoking Rack
I can think of about a million reasons why your data center could start smoking… and I’m pretty sure this was 100 percent Kip’s fault!
Diane Alber, the Arizona artist who created Kip and Gary, has a new cartoon for Data Center Knowledge’s cartoon caption contest. What do you think would be the funniest text for the bubble? Post your caption in the comments. Then, next week, our readers will vote for the best submission.
Congratulations to Ben from Data Cave, whose caption for the “Data Center Treadmill” edition of Kip and Gary won the last contest. Ben won with: “When we talked about testing alternative forms of electricity, this isn’t what I had in mind!”
Here’s the poll for the caption contest for last week’s “Beached Data Center” edition. Please vote!
For more cartoons on DCK, see our Humor Channel. For more of Diane’s work, visit Kip and Gary’s website.

4:00p
Google Tackles PaaS Lock-In Fears With AppScale Collaboration
Google is addressing one of the biggest potential knocks against Platform-as-a-Service through a collaboration with AppScale. AppScale makes it easy to migrate out of Google App Engine and run App Engine applications on any physical or cloud infrastructure.
By contributing engineers to drive compatibility and interoperability between App Engine and AppScale, Google is putting general PaaS lock-in fears to rest with respect to infrastructure.
However, AppScale still requires building applications to App Engine specifications, which means that some PaaS lock-in exists in terms of platform.
AppScale also helps companies better serve customers that have custom integration requirements. Google proposes hybrid PaaS as a potential use case: a company could serve worldwide customers on App Engine and better serve individual customers who need custom, private installations.
“We know how important flexibility is to you in the languages you write in, the deployment model you use, the tools you build with, and the infrastructure on which your software runs,” wrote Miles Ward, global head of solutions for Google Cloud Platform.
The big cloud providers want to enable at-home development on their platforms. Being able to take an app outside of the cloud isn’t a competitive threat, but a complement and enabler of hybrid scenarios. Similarly, Microsoft’s recently announced Azure Stack is an on-premises complement to its public cloud.
AppScale exposes a subset of the App Engine 1.8 API, but Google is working with AppScale to make it compatible with the newest version, 1.9.
4:30p
Brocade Storage Networking Built for XtremIO All-Flash Arrays
Brocade announced that its Fibre Channel and IP-based storage networking technologies will be used by EMC in customer solutions based on the new XtremIO 4.0 all-flash storage arrays that EMC unveiled on Monday. With the version 4.0 software upgrade for EMC XtremIO arrays, larger all-flash configurations are possible, enabling petabyte scale-out storage consolidation across all workloads.
Brocade said its storage network switches and related storage management software will be branded and sold by EMC as part of the EMC Connectrix family. Josh Goldstein, vice president of marketing and product management for EMC XtremIO, noted, “EMC Connectrix Gen 5 Fibre Channel and IP storage networking solutions, based on Brocade storage fabric technologies, have become a bedrock for XtremIO customers for better enabling the new wave of consolidated database, analytics, private/hybrid cloud, VDI, and business application workloads.”
Recent Wikibon research shows that flash is now cheaper than disk for active data, and that flash is a better foundation for the next generation of big data applications. Brocade cited Gartner research that stated, “By 2019, 50 percent of traditional general-purpose storage arrays used for low-latency, high-performance workloads will be replaced by SSAs [solid-state arrays].”
Brocade storage networking technologies used in EMC customer solutions include Gen 5 Directors and switches, VDX 6740 switches for IP storage, 7840 extension switches for Fibre Channel and IP storage, Fabric Vision technology, and Network Advisor for Fibre Channel and IP storage.
Brocade also announced that its Gen 5 Fibre Channel SAN technologies are an integral part of the new EMC VSPEX with VMAX3 100K converged infrastructure solution that was revealed at EMC World recently.

5:54p
HostingCon Global Unpacks Conference Schedule: Here are Some Highlights
HostingCon Global, the premier industry conference and tradeshow for the hosting and cloud community put on by DCK’s sister company, has released its schedule for the upcoming show in San Diego, California, from July 27-29, 2015.
New this year are pre-conference training workshops on Sunday, so if you’re coming in early and looking to pack in as much learning as possible, be sure to check those out. To kick off the afternoon, William Bell, VP of product development at Phoenix NAP, will talk at 1 pm about how to use containers to grow business and generate revenue.
On Monday, Lance Crosby, founder of SoftLayer, an IBM company, will be delivering the conference keynote, his first keynote since leaving SoftLayer in January. Following the keynote, the opening networking reception will give attendees the opportunity to reconnect with friends and meet new people – a great way to start the week and get down to business!
Highlights from Tuesday’s schedule include a session delivered by Maria Karaivanova, head of the Strategic Partnership team at CloudFlare. Her session, “Clicks Not Cash: Give Users a Free SEO Boost,” will help hosts understand how to improve customers’ search rankings, a particularly relevant topic as Google continues making changes to its ranking algorithms.
A panel discussion on Tuesday, “Staying Successful in the Days of DIYs” will explore how to remain competitive and create new revenue opportunities in the era of DIY low-cost websites.
On Wednesday, ResellerClub will present on how hosts can manage a business that has a global audience and “glocal” strategy. It should be a great session for hosts looking at ways to expand their business internationally.
Shameless plug alert: I will be moderating a panel discussion on new gTLDs on Wednesday morning at 10 am, evaluating the first year of new TLDs and adoption rates, as well as identifying sales strategies that are working. It should be a lively discussion and have many takeaways for attendees!
This is just a snippet of the content at HostingCon, so be sure to follow along as we release more information about the event on the WHIR, as well as the HostingCon blog.
Early birds can save $100 off their HostingCon registration right now, so register today!
The WHIR wants to know: Are you attending this year’s HostingCon in San Diego? What sessions are you looking forward to? Let us know in the comments – we’d love to hear from you!

8:31p
Virtual Container Security Suite Twistlock Launches with $2.5M Seed Funding
This article originally appeared at The WHIR
Tel Aviv- and San Francisco-based startup Twistlock announced on Thursday that it has received $2.5 million in seed funding from YL Ventures. The company will use the funding to launch its container security suite.
Twistlock seeks to solve what it sees as two problems with container technology: the new security risks containers introduce, and the difficulty of monitoring and responding to security incidents associated with containerized applications. The security suite allows developers to implement “quality gates” using a customizable open source security framework before applications go into production.
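As a generic illustration of the quality-gate idea (not Twistlock’s actual API; the scan-report format and policy below are hypothetical), a CI step might refuse to promote a container image whose scan report contains critical findings:

```python
# Generic illustration of a pre-production "quality gate" -- not
# Twistlock's API. The report format and policy here are hypothetical.
import sys

def gate(scan_report: dict, max_critical: int = 0) -> bool:
    """Return True if the image may be promoted to production."""
    critical = [f for f in scan_report["findings"]
                if f["severity"] == "critical"]
    return len(critical) <= max_critical

report = {"findings": [{"id": "EXAMPLE-0001", "severity": "critical"}]}
if not gate(report):
    sys.exit("quality gate failed: critical findings present")
```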
The author of a Gartner report published in January, “Security properties of Containers managed by Docker,” wrote in a blog post: “Security properties of containers are a largely unexplored field and there is a lot of controversial discussion about whether containers do contain or not.” Overall, the report found that Docker is fairly secure despite a few shortcomings and a current dearth of tools to address enterprise needs.
Twistlock says that when containers are used, it is usually impossible for operations teams to see what is happening inside the container environment; from a monitoring standpoint, all they see is virtual machines running unknown processes. Twistlock gives teams a way to monitor risks “within the application of the container, enabling enterprises to consistently enforce security policies, monitor and audit activity and identify and isolate threats in a container or cluster of containers,” according to the release.
“Enterprises are in the midst of a data center revolution,” said Ben Bernstein, CEO and co-founder of Twistlock. “Twistlock’s container security suite provides the fuel enterprises need to accelerate their ability to use containers to develop, share and scale the applications that drive their business forward. With our solution, security operation teams finally have the visibility and granular controls they need over their ‘Dockerized’ workloads.”
The company was founded by Ben Bernstein and Dima Stopel, both of whom bring enterprise security experience from Microsoft’s R&D center in Israel and from service in the Israel Defense Forces’ (IDF) intelligence corps.
“YL Ventures is proud to back the first company capable of making containers, and more specifically Docker containers, secure enough for the enterprise,” said Yoav Leitersdorf, managing partner at YL Ventures. “Twistlock provides the missing critical security features enterprises require before they can put containers in production environments. With the launch of Twistlock, the container market as a whole can take a huge leap forward.”
YL Ventures invests in cybersecurity, cloud computing, big data, and SaaS companies, with a focus on the Israeli market.
Although containers are not new, the popularity of Docker has driven a resurgence of interest in the industry. Docker received $15 million in Series B funding last year and recently purchased Kitematic to make Docker easier to run on Macs.
Startups such as Asigra and SocketPlane have been adding support for Docker, while larger, more established companies such as Microsoft have been doing the same.
This first ran at http://www.thewhir.com/web-hosting-news/virtual-container-security-suite-twistlock-launches-with-2-5m-seed-funding