Data Center Knowledge | News and analysis for the data center industry
Friday, March 14th, 2014
12:34p
Facebook Dials Up Infrastructure to Render “Look Back” Videos
Facebook’s armada of servers delivered up to 9 million “Look Back” videos per hour during its recent 10th anniversary celebration. (Photo: Rich Miller)
Aiming to celebrate its 10th anniversary in style last month, Facebook (FB) developed “Look Back” videos and ended up challenging even its own massive infrastructure, rendering more than 720 million of them. Conceived as a last-minute idea within the company, the Look Back tool let people generate one-minute videos highlighting memorable photos and posts from their time on Facebook.
The project was a challenge both for the engineers, who had less than a month to build it, and for Facebook’s infrastructure and network, which had to absorb the additional processing, storage and network load. With a mission to render faster and store better, the team rose to the challenge: Facebook figured it would need around 25 petabytes of storage and an estimated 187 Gbps of additional bandwidth, based on a potential 25 million videos generated within a day of the anniversary, each shared to and viewed by five people.
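As a sanity check, the bandwidth figure can be reproduced with simple arithmetic. The sketch below is not from Facebook; the average video size is an assumption chosen to illustrate the calculation (a one-minute video of roughly 16 MB lands near the quoted estimate):

```python
# Back-of-envelope math behind the launch-day bandwidth estimate quoted above.
AVG_VIDEO_MB = 16       # ASSUMED average size of a one-minute rendered video
VIDEOS_PER_DAY = 25e6   # potential videos generated within a day (from the article)
VIEWS_PER_VIDEO = 5     # each shared to and viewed by five people (from the article)
SECONDS_PER_DAY = 86_400

video_bits = AVG_VIDEO_MB * 1e6 * 8
bandwidth_gbps = (VIDEOS_PER_DAY * VIEWS_PER_VIDEO * video_bits
                  / SECONDS_PER_DAY / 1e9)
print(f"Extra egress needed: {bandwidth_gbps:.0f} Gbps")  # ~185 Gbps, near the 187 Gbps estimate
```

The 25-petabyte storage estimate follows the same pattern: video count times average video size times the replication factor.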
Animate Photos for a Billion People
The compute power available in Facebook’s global infrastructure would make any rendering project easy, but a key challenge for the Look Back project was to avoid disrupting regular Facebook operations. On top of everyday infrastructure demands, Facebook was about to launch its Paper app, and a week later the site would be inundated with traffic from the start of the 2014 Winter Olympics.
In a recent blog post, the Facebook infrastructure team explains that “for compute, although we had tens of thousands of servers available, our data center power efficiency plans were not designed for all of them to be running at full capacity at all times. We decided to track power usage while the jobs were running and added dials to the software to allow us to slow down if needed.”
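The “dials” the team describes amount to a feedback loop between power telemetry and the job scheduler. Here is a minimal sketch of that idea; the names, the power cap and the simulated meter readings are all hypothetical, not Facebook’s actual tooling:

```python
import random
import time

POWER_CAP_WATTS = 500_000   # ASSUMED per-cluster power budget, purely illustrative

def read_cluster_power() -> float:
    """Stand-in for real telemetry (e.g. polling PDU or busway meters)."""
    return random.uniform(300_000, 550_000)

def render_batch(n: int) -> None:
    """Stand-in for dispatching n Look Back render jobs."""
    print(f"rendering {n} videos")

def power_aware_render_loop(batch_size: int = 1_000, rounds: int = 5) -> None:
    # The "dial": scale batches down as measured draw nears the cap,
    # and pause entirely when the cap is exceeded.
    for _ in range(rounds):
        draw = read_cluster_power()
        if draw >= POWER_CAP_WATTS:
            time.sleep(1)   # back off, then re-check the meters
            continue
        headroom = 1.0 - draw / POWER_CAP_WATTS
        render_batch(max(1, int(batch_size * headroom)))

power_aware_render_loop()
```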
The team reviewed configurations on its Haystack media servers, especially their geographical locations, to help prevent disruptions to normal site operations. The Look Back videos were isolated on their own servers on both U.S. coasts and treated as a distinct media type rather than as regular uploads. The rendering jobs were distributed and scheduled so that when users edited their videos, the new versions could be rendered quickly on demand.
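Isolating a media type can be as simple as routing requests to dedicated server pools keyed by type and region. A toy sketch of that pattern, with hypothetical pool and host names:

```python
import random

# Hypothetical server pools: Look Back videos live on dedicated machines on
# each coast, kept separate from the pools serving regular photo uploads.
POOLS = {
    ("lookback", "east"): ["lb-east-01", "lb-east-02"],
    ("lookback", "west"): ["lb-west-01", "lb-west-02"],
    ("upload", "east"): ["up-east-01"],
    ("upload", "west"): ["up-west-01"],
}

def pick_server(media_type: str, coast: str) -> str:
    """Route a request to the isolated pool for its media type and coast."""
    return random.choice(POOLS[(media_type, coast)])

# Look Back traffic never lands on the regular-upload machines.
print(pick_server("lookback", "east"))   # e.g. lb-east-02
```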
9 Million Videos Rendered per Hour
After quickly prototyping and testing the tool internally, the team was ready for launch. In the days and hours before launch, they watched the space used in Haystack grow from 0 bytes to 11 petabytes as the system rendered more than 9 million videos per hour at peak. The feature went live on February 4th, and people quickly checked out their videos and shared them with friends. Facebook had estimated that about 10 percent of the people who saw their video would share it; to the team’s surprise, well over 40 percent of viewers shared.
Without a single breaker tripping, Facebook’s Look Back video project logged some impressive statistics:
- Over 720 million videos rendered
- More than 11 petabytes of storage (reduced from estimates due to optimizing video size and reducing the replication factor)
- More than 450 Gbps of outgoing bandwidth at peak, and 4 PB of egress within days
- Over 200 million people watched their Look Back movie in the first two days, and more than 50 percent shared their movie
Outgoing bandwidth of 450 Gbps is impressive – as the blog post states, “even by Facebook standards.” It is the equivalent of roughly 450 people simultaneously saturating a full 1 Gbps connection. The success of the tool is a testament to the capacity of Facebook’s global infrastructure and to the almost internally crowdsourced team of engineers and developers that put it together in such a short amount of time.
“The Look Back project represented the things we love about Facebook culture: coming up with bold ideas, moving fast to make them real, and helping hundreds of millions of people connect with those who are important to them,” the company wrote.
1:00p
FedRAMP OnRamp Seeks to Ease Path to Secure Government Clouds
Steve O’Keeffe of MeriTalk introduces the FedRAMP OnRamp tool during yesterday’s Data Center Brainstorm event for the federal IT community at the Newseum in Washington, D.C. (Photo: Rich Miller)
WASHINGTON, D.C. - Ordering a pizza over the Internet is easy. Provisioning compliant cloud services for federal government agencies is hard.
Steve O’Keeffe would like to change that. O’Keeffe is the founder of MeriTalk, a public/private partnership focused on improving government IT, which has launched a new tool to help federal agencies find cloud providers that have received security certifications under The Federal Risk and Authorization Management Program (FedRAMP).
The FedRAMP OnRamp was launched Thursday at the Data Center Brainstorm, a conference at the Newseum that brought together IT managers from federal agencies, along with representatives of leading vendors and service providers to the government sector.
“The challenge with FedRAMP is that it hasn’t been particularly transparent until now,” said O’Keeffe. “There are different flavors of FedRAMP, and they’re all about risk management.”
Cloud First, But Only With FedRAMP
FedRAMP is designed to centralize the process of certifying vendors to offer cloud computing services that meet the strict security requirements of federal agencies. Cloud providers must gain FedRAMP certification to provide cloud services to federal agencies. Without FedRAMP, service providers would need to individually certify cloud installations at each agency they serve.
That would be an expensive undertaking. MeriTalk estimates the average cost for the government to perform a FedRAMP cloud security certification at $250,000. Using FedRAMP has already saved service providers more than $37.5 million in certification costs (at that rate, roughly 150 avoided per-agency certifications), according to estimates from MeriTalk and the General Services Administration.
That doesn’t mean that it’s always user-friendly. One of the goals of the FedRAMP OnRamp is to provide quick access to information about which companies have gained certification as Cloud Service Providers. That number currently stands at 14: AINS, Inc., Akamai, Amazon, AT&T, Autonomic Resources, CGI, Concurrent Technologies, HP, IBM, Lockheed Martin, Microsoft, Oracle, and the U.S. Department of Agriculture.
Another 15 cloud providers are currently in the FedRAMP approval process, including Acquia Inc., CA Technologies, CenturyLink Technology Solutions, Clear Government Solutions (CGS), Economic Systems, Fiberlink, HP, Layered Tech Government Solutions, Microsoft, Oracle, Salesforce.com, SecureKey Technologies Inc., Verizon Terremark, Virtustream, and VMware.
Immense Opportunity for Cloud Providers
The government cloud opportunity is immense. The U.S. Federal government spends more than $80 billion each year on IT. The Office of Management and Budget (OMB) has directed federal agencies to embrace a “Cloud First” policy to improve the efficiency of government IT spending and slash spending on data centers and applications.
The FedRAMP program was introduced in 2010, and the federal government has invested $15 million in the FedRAMP certification process. O’Keeffe says that MeriTalk’s data shows that this has been a winning investment for U.S. taxpayers.
“The centralized FedRAMP security certification process is accelerating Uncle Sam’s jump to the cloud,” said O’Keeffe. “So far, we’ve realized $37.5 million in savings. We’re not just talking cost avoidance. We have the investment numbers to map against the cost avoidance, showing a $3.50 return for every $1 invested. As agencies use these secure cloud offerings, that number will continue to grow over time. Hats off to GSA and other agencies that are changing the economics of government IT.”
An “Invaluable” Connector
Service providers see clear benefits from a portal that makes it easier for end users to connect with vendors and understand their offerings.
“This tool offers clarity through an intuitive portal, allowing agencies to effectively evaluate approved Cloud Service Providers,” said John Keese, President and CEO, Autonomic Resources. “FedRAMP OnRAMP is the connector between agencies and their optimum cloud services, and it will soon prove invaluable for the ‘Cloud First’ initiative.”
“As government computing efforts continue to become more ‘cloud focused’ it’s important that Federal IT staffs have a convenient way to know which of their vendors are FedRAMP compliant,” explained Tom Ruff, Vice President, Public Sector, Akamai Technologies. “Participating in the OnRAMP program is designed to give our federal customers confidence that Akamai cloud services can be part of ‘end-to-end’ FedRAMP compliant solutions.”
O’Keeffe sees FedRAMP OnRamp as a small part of the government’s long, slow shift to a more economical and effective IT infrastructure. It’s not quite as easy as Internet pizza. But it should be.
“We’re used to consumerization in our private lives,” said O’Keeffe. “Why not in FedRAMP?”
O’Keeffe with a screen from the OnRamp tool during Thursday’s event. Federal agencies spend $80 billion annually on IT, and are mandated to pursue a “Cloud First” policy. (Photo: Rich Miller)
1:58p
U.S. Navy Shifting Public Data to Amazon Cloud
Terry Halvorsen, the Chief Information Officer for the U.S. Navy, says his IT operation is moving most of its public data to the Amazon Web Services cloud computing platform. (Photo: Rich Miller)
WASHINGTON, D.C. - The U.S. Navy is shifting large amounts of data to the Amazon Web Services cloud, and expects the move to produce huge savings.
“We are in the process of putting most of our public-facing data in an Amazon cloud service,” said Terry Halvorsen, the Chief Information Officer of the Department of the Navy, in a keynote at Meritalk’s Data Center Brainstorm event Thursday. Halvorsen said the move could save the Navy as much as 60 percent versus the cost of managing that data in its own data centers.
“There is still a place for the data center non-cloud solution,” Halvorsen said. “Getting that balance right is my mission.”
Cloud First
The Navy’s use of Amazon Web Services (AWS) is the latest example of how organizations focused on security and compliance are finding ways to use public cloud services. Halvorsen said this is part of a larger shift of IT assets to commercial service providers as agencies seek to slash costs under the mandates of the Federal Data Center Consolidation Initiative (FDCCI) and the Obama administration’s “Cloud First” focus.
The FDCCI is still very much an ongoing process, and Halvorsen believes it goes much deeper than counting data centers. “In the end, it’s about counting dollars,” Halvorsen said of the consolidation efforts. “I don’t like the word consolidation.” He prefers the term “Application Kill.”
“It’s not just about rationalization or consolidation,” he said. “To save money, you have to kill things.”
There are many complexities to the FDCCI effort. Halvorsen said there are around 150 Department of the Navy data centers with 50 servers or more. “The Navy owns a lot of old buildings,” said Halvorsen. “I need to get this down to 25 or less.”
Beyond Closing Facilities
In a sentiment echoed by many presenters, the consolidation effort goes beyond simply closing federal data centers. There are several issues to keep in mind.
The first stage of the FDCCI involved a lot of “lift & shift” – forklift moves of small data centers and closets into central facilities. The next stage was virtualization and application rationalization; there is a lot of application sprawl in federal IT.
To move forward with consolidation, “you have to go through all the data,” said Halvorsen. Data needs to be evaluated for what can go to the public cloud, what can go to a private cloud, and what is critical. Halvorsen says his department needs to get 50 percent of its data into some type of commercial solution; most of the public data is going to Amazon Web Services.
“During the next 5-6 year window, we have a glut of capacity in the commercial world,” said Halvorsen. “How do we capitalize on that?”
Service-level requirements need to be standardized for anyone the Department of the Navy, and the rest of government, deals with. “If I’m just parking data, I want to take advantage of those locations,” said Halvorsen. “If I make something less expensive, but maintain the same risk, it’s OK.”
The bottom line, according to Halvorsen, is that the Navy needs to be more transparent about its needs, and anyone seeking contracts with the Navy needs to be transparent in return. If the Department of the Navy were a Fortune 500 company, it would be number three or four on the list, according to Halvorsen. These are big needs.
“Do your homework, know my business, understand my needs,” said Halvorsen.
2:00p
Mellanox and Ranovus Champion OpenOptics Multi-Source Agreement
Mellanox and Ranovus introduce an OpenOptics Multi-Source Agreement to standardize WDM technologies, Level 3 enables Northrop Grumman to deliver a distributed simulation training network for the U.S. Air Force, and Windstream selects Infinera to power its 100G long-haul express network.
Mellanox and Ranovus champion OpenOptics Multi-Source Agreement. Mellanox Technologies (MLNX) and Ranovus have founded an industry consortium to standardize Wavelength Division Multiplexing (WDM), creating an interoperable 100G WDM standard with 2-kilometer reach. The OpenOptics multi-source agreement (MSA) combines 1550 nm WDM lasers and silicon photonics in QSFP-based solutions, aiming at low-cost, high-density, high-bandwidth single mode fiber (SMF) connectivity that improves the ROI of terabit-scale data center infrastructure. “Our cloud customers want to deploy data center infrastructure that allows seamless upgrades to the interconnect just as they do in server, storage and network hardware,” said Shai Cohen, COO of Mellanox Technologies. “With 100G interconnects approaching commercialization in data centers, OpenOptics MSA brings 100G WDM technology to data center economics, density, power consumption, and 2 km link scalability on single mode fiber infrastructure.”
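For context, WDM reaches 100G on one single-mode fiber by multiplexing several wavelength channels onto it. A quick illustrative calculation follows; the lane count and per-lane rate are generic 100G examples, not the OpenOptics MSA’s actual channel plan:

```python
# Illustrative WDM arithmetic: several wavelength channels share one
# single-mode fiber, so port bandwidth = lanes x per-lane rate.
LANE_RATE_GBPS = 25    # ASSUMED data rate carried on each wavelength
LANES_PER_PORT = 4     # ASSUMED wavelengths multiplexed onto one QSFP port

port_gbps = LANE_RATE_GBPS * LANES_PER_PORT    # 100G per port on a single fiber
ports_per_terabit = 1_000 // port_gbps         # ports needed for 1 Tb/s of fabric
print(f"{port_gbps}G per port; {ports_per_terabit} ports per Tb/s")
```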
Level 3 selected by Northrop Grumman. Level 3 Communications (LVLT) announced that it has been selected by Northrop Grumman Corporation to deliver network services as part of the defense contractor’s multi-million dollar agreement with the U.S. Air Force to support its distributed simulation training network. As part of the Distributed Mission Operations Network (DMON), Level 3 has connected more than 50 sites worldwide. “The Level 3 network is incorporated into the distributed training solutions Northrop Grumman provides to the U.S. Air Force,” said Edward Morche, senior vice president of East Region Enterprise and Government Markets for Level 3 Communications. “The U.S. Air Force benefits from the global reach and low-latency performance of the Level 3 network to deliver the highest-quality training experience and realize efficiencies across various training networks and programs both in the United States and abroad.”
Windstream deploying Infinera for long-haul express network. Infinera (INFN) announced that Windstream (WIN) is deploying the Infinera DTN-X platform, featuring 500 gigabit per second (Gb/s) super-channels, across Windstream’s long-haul express network. The Infinera Intelligent Transport Network enables Windstream to differentiate its services, protect its investment and lower operational expense as it scales its network. Single-card 500 Gb/s FlexCoherent super-channels, based on Infinera’s widely deployed photonic integrated circuits, give Windstream a solution that integrates DWDM optical transmission with five terabit per second non-blocking OTN switching in a single platform. “By deploying the Infinera Intelligent Transport Network, Windstream will significantly increase the capacity of our network infrastructure to meet the needs of our customers,” said Randy Nicklas, executive vice president of engineering and chief technology officer for Windstream. “The DTN-X platform enables us to offer services that result in lower latency for mission critical applications while providing a network that is even more reliable and enables rapid provisioning of services.”
3:40p
Friday Funny: Vote for the Best “Leprechaun” Caption WOOHOO! Oops, didn’t mean to go all caps. But we’re excited that it’s Friday, because that means it’s time for our caption contest, with cartoons drawn by Diane Alber, our favorite data center cartoonist! Please visit Diane’s website Kip and Gary for more of her data center humor.
Here’s how it works. We provide the cartoon and you, our readers, submit the captions. We then choose finalists, and readers vote for the funniest suggestion. The winner receives a hard-copy print with his or her caption included in the cartoon.
This week, we are voting on the last cartoon. Please vote below, and have a good weekend!
Take Our Poll
For the previous cartoons on DCK, see our Humor Channel.