Data Center Knowledge | News and analysis for the data center industry
Friday, December 11th, 2015
1:00p | Facebook to Open Source Custom AI Hardware
Facebook wants to replicate the success of its open source server designs in hardware for artificial intelligence. The social network announced it plans to contribute the designs of the latest generation of custom high-performance servers it uses to build neural networks to the Open Compute Project, the open source hardware initiative Mark Zuckerberg claimed last year had saved the company $1.2 billion.
Facebook’s new AI hardware is powered by GPUs, processors built originally to run graphics but increasingly used in high-performance computing. The company’s latest AI server, called Big Sur, packs eight of Nvidia’s Tesla GPUs, each consuming up to 300 watts.
Big Sur is twice as fast as Facebook’s previous-generation AI servers. Neural networks are trained by processing data. To recognize an object in an image, for example, a neural network has to analyze lots of different images containing the same object, and the more of them it gets to “see,” the better it gets at recognizing the object. Doing this kind of “training” across eight GPUs enables Facebook to build neural networks that are twice the scale and speed of networks it built using its older off-the-shelf hardware, Facebook engineers Kevin Lee and Serkan Piantino wrote in a blog post.
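For readers unfamiliar with how that scaling works in practice, the following is a minimal, hypothetical sketch of data-parallel training in Python using PyTorch's nn.DataParallel, which splits each batch across all visible GPUs and averages the gradients. The framework, toy model, and synthetic data are illustrative assumptions, not details of Facebook's actual software stack.

```python
# Hypothetical sketch of data-parallel neural network training across GPUs.
# The framework (PyTorch), model, and synthetic data are illustrative only;
# they are not a description of Facebook's Big Sur software.
import torch
import torch.nn as nn


def train(num_epochs: int = 2, batch_size: int = 256) -> None:
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # A toy image classifier standing in for a real convolutional network.
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(3 * 32 * 32, 512),
        nn.ReLU(),
        nn.Linear(512, 10),
    )

    # nn.DataParallel splits each batch across all visible GPUs (for example,
    # the eight Tesla cards in a Big Sur chassis) and averages the gradients,
    # which is how more GPUs translate into larger, faster-trained networks.
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)
    model = model.to(device)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(num_epochs):
        # Synthetic stand-in for a labeled image dataset.
        images = torch.randn(batch_size, 3, 32, 32, device=device)
        labels = torch.randint(0, 10, (batch_size,), device=device)

        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")


if __name__ == "__main__":
    train()
```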

Facebook’s Big Sur server for artificial intelligence (Image courtesy of Facebook)
GPU-assisted architectures are increasingly common in supercomputers. The presence of GPUs on Top500, the list of the world’s fastest supercomputers, has been growing steadily. More than half of all systems on the latest list – published last month – relied on Nvidia GPUs as co-processors. That’s up from less than 15 percent one year ago.
Big Sur servers are compatible with Open Rack, the custom server rack Facebook designed and contributed to the Open Compute Project. The AI server can run in the company’s regular data centers, which is unusual for high-performance computing hardware; such systems typically require specialized high-capacity cooling, often using liquid rather than air as the heat-exchange medium.

Big Sur is designed for easy serviceability. All individual parts except CPU heat sinks can be replaced without tools. (Image courtesy of Facebook)
Facebook saves on infrastructure through Open Compute in multiple ways. Because the hardware is “vanity-free” and optimized for its applications, it is cheaper. And because the design is open source, multiple suppliers, including traditional vendors such as HP Enterprise and original design manufacturers such as Quanta and Hyve, compete for Facebook’s business on price alone, since they are all building the same hardware.
The company hasn’t open sourced Big Sur yet, but Lee and Piantino wrote that it plans to submit the design materials to OCP. Facebook has been open sourcing its AI software code and publishing papers on its discoveries in the field.
7:48p | Weekly DCIM Software News Update: December 11
Nlyte receives ServiceNow integration certification, Tier44 joins the VMware Technology Alliance Partner program, and Baselayer integrates Intel DCM into its RunSmart software portfolio.
Nlyte receives certification of ServiceNow integration. Nlyte Software announced that it has received certification of its integration with ServiceNow. The certification signifies that Nlyte is the first DCIM solution to successfully complete a defined set of tests focused on integration interoperability, security, and performance, and that best practices were followed in the design and implementation of Nlyte’s integration with ServiceNow.
Tier44 joins VMware Technology Alliance program. Tier44 announced it has joined the VMware Technology Alliance Partner (TAP) program and that its flagship product, EM/8, is now listed in the VMware Solution Exchange. Members of the TAP program collaborate with VMware to deliver solutions for virtualization and cloud computing, and the diversity and depth of the TAP ecosystem gives customers the flexibility to choose a partner with the right expertise for their needs.
Baselayer integrates Intel DCM into RunSmart portfolio. Baselayer announced the integration of Intel DCM into its RunSmart software portfolio. Formerly called Baselayer OS, the software is an infrastructure intelligence product, with editions available to customers in the DCIM and service provider markets and an IoT edition to come. The robust IT device data provided by Intel DCM supports Baselayer’s RunSmart philosophy of “Sophistication Simply Delivered.” According to Samir Shah, VP of product management and marketing at Baselayer, “We are creating a sophisticated product that is simple to connect, simple to use, simple to upgrade, and simple to add new functionality. The Intel DCM integration allows us to give our customers the most value and best user experience.”
8:55p | NTT Launches Thailand, Hong Kong Data Centers
The latest data centers to come online as part of the rapid global expansion of data center capacity by the Japanese telecommunications and IT services giant NTT Communications are in Hong Kong and Thailand. The company announced completion of construction projects in both countries this week.
Hong Kong is one of Asia’s most important business centers. As such, it has become a key network interconnection and data center hub, used by non-Asian companies to serve Asian markets, primarily mainland China, and by Asian companies to connect to and serve markets around the world.
The only other hub that’s close in importance for access between China and other key global markets is Singapore.
The new Hong Kong data center, called FDC2, completes the multi-phase build-out of NTT’s massive Hong Kong Financial Data Center complex. The complex’s total capacity is 7,000 racks.
Thailand has a robust manufacturing industry, its largest sector being electronics. The country is one of the two top suppliers of hard disk drives in the world (the other is China), a fact that became widely known when hard drive shortages hit the high-tech industry following massive floods in Thailand in 2011.
According to NTT, there is demand for data center capacity in Thailand from local financial institutions as well as from multinationals looking to use outsourced data centers in the Southeast Asian nation.
Not surprisingly, flooding is a concern for those using and building data centers in Thailand. The NTT data center is in the Bangkok suburbs, in an area four meters above sea level, where the risk of flooding is lower, according to the company. The facility is also surrounded by floodwalls and dykes.
This is the company’s second data center in the Bangkok market. It has about 40,000 square feet of data center space, enough for 1,400 IT racks, NTT said in a statement.
NTT has been expanding its global data center infrastructure rapidly over the past several years, primarily by acquiring controlling stakes in data center providers around the world and helping them finance expansion. Examples of companies NTT has bought include RagingWire in the US, e-shelter in Europe, PT Cyber in Indonesia, and NetMagic in India.
11:10p | Report: Google May Turn Caching Sites into Mini Data Centers
Google is considering expanding data center capacity in many places around the world where it currently caches content to deliver it faster to users in those locations, anonymous sources told Fortune. The 70 caching sites make up the company’s content delivery network.
Google is often mentioned as a cloud-services peer to Amazon Web Services and Microsoft Azure, but the scale of the data center network that supports Google’s cloud infrastructure services pales in comparison to that of the two other cloud giants.
The subsidiary of Alphabet does have a wide-reaching, global fleet of massive data centers, but its infrastructure services, collectively referred to as the Google Cloud Platform, are served out of only four locations: South Carolina and Iowa in the US, Belgium, and Taiwan. Amazon’s cloud customers can choose from 11 regions around the world, while Microsoft’s cloud is supported by data centers in 17 geographic areas.
Having a wide variety of data center locations matters for a cloud service. One big reason is performance. Cloud users often set up redundant infrastructure across multiple locations, and having many location options to choose from makes that easier to do.
Another reason, which has grown in importance since the Edward Snowden disclosures, is data sovereignty. More and more cloud users care about where their data and applications physically reside, either out of their own concerns or because regulations require it.
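As a rough illustration of why region count matters, the hypothetical Python sketch below tries to place redundant replicas while honoring a data-residency constraint. The region names, jurisdiction mapping, and helper functions are assumptions made purely for illustration, not a description of any provider's actual footprint or tooling.

```python
# Hypothetical sketch: why a cloud provider's region count matters to users
# who want both cross-region redundancy and data-residency guarantees.
# Region names and jurisdiction labels are illustrative assumptions only.
from typing import Dict, List


def eligible_regions(regions: Dict[str, str], allowed: List[str]) -> List[str]:
    """Return regions whose jurisdiction satisfies the user's residency rules."""
    return [name for name, jurisdiction in regions.items() if jurisdiction in allowed]


def plan_deployment(regions: Dict[str, str], allowed: List[str], min_copies: int = 2) -> List[str]:
    """Pick distinct regions for redundant replicas, or fail if there aren't enough."""
    candidates = eligible_regions(regions, allowed)
    if len(candidates) < min_copies:
        raise RuntimeError(
            f"only {len(candidates)} eligible region(s); need {min_copies} for redundancy"
        )
    return candidates[:min_copies]


if __name__ == "__main__":
    # A small footprint (loosely modeled on the four locations mentioned above)
    # versus a larger, hypothetical one.
    small = {"us-east": "US", "us-central": "US", "europe-west": "EU", "asia-east": "APAC"}
    large = dict(small, **{"europe-north": "EU", "asia-southeast": "APAC", "us-west": "US"})

    for label, footprint in [("small footprint", small), ("large footprint", large)]:
        try:
            print(label, "->", plan_deployment(footprint, allowed=["EU"]))
        except RuntimeError as err:
            print(label, "-> cannot deploy:", err)
```

With only one EU region available, the sketch cannot place two in-jurisdiction replicas; with a broader footprint it can, which is the practical effect of the location gap described above.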
Google is reportedly considering ways to add capacity at its CDN caching sites by deploying data center pods that are much smaller than its core data centers. The company did not respond to a request for comment.
If confirmed, the plan would be a way to leapfrog Amazon’s and Microsoft’s cloud data center expansion. There are many ways to deploy small amounts of data center capacity quickly, for example by using pre-fabricated modules shipped to the site.
While this way of expanding is faster, it doesn’t offer the economies of scale Google and other data center operators of its caliber achieve by constructing massive web-scale facilities. Fortune’s sources said that if the plan comes to fruition, users may pay more for placing their cloud infrastructure in these smaller-capacity locations.