Data Center Knowledge | News and analysis for the data center industry
Wednesday, July 17th, 2013
11:00a
Yahoo Expands its Vision for Wind-Cooled Server Coops

Yahoo’s “Computing Coops” in Lockport, New York feature a series of louvers that allow cool air to enter through the side of the data center, and server exhaust heat to exit through the cupola at the top of the facility. (Image: Yahoo)
For many years, Yahoo was one of the Internet’s master builders, deploying server farms far and wide as it built upon its status as a Web pioneer. But over the past two years, as rivals like Google, Apple and Facebook have invested billions of dollars to build ultra-efficient data centers, things have been pretty quiet over at Yahoo.
But a change may be in the offing, as Yahoo appears to be ready to expand its vision for a large campus of wind-cooled “computing coops” in upstate New York.
Yahoo has been in a state of transition under new CEO Marissa Mayer. One year into Mayer’s tenure, Yahoo shares are up 70 percent as investors have been buoyed by the new leadership and a flurry of 17 acquisitions. It has rolled out new features that could require significant data storage, including the acquisition of blogging service Tumblr and the launch of the “new Flickr,” which offers users up to a terabyte of photo storage.
Looking at Third Phase
That’s why it was interesting to see reports from Buffalo-area media that Yahoo wants to buy an additional 20 acres of land at its “Computing Coop” data center campus in Lockport, New York. The land would be for a third phase of the Yahoo project, even though local officials say the company has yet to begin work on the $168 million second phase that it announced in March.
The first phases of the Lockport project, built in 2010-11, featured 275,000 square feet of data center space housed in five 120-by-60 foot prefabricated metal structures using the Yahoo Computing Coop data center design. The project was part of a global initiative to make Yahoo’s data center footprint more efficient and sustainable, saving millions of dollars in power costs along the way.
The cool, windy weather in Lockport is a major element of the plan. The campus is fed by hydro-electric power generated from dams on the Niagara River, rather than coal. The Yahoo Computing Coops use fresh air to cool their servers, rather than energy-hungry chillers and air conditioning systems. They feature pre-fabricated buildings modeled on the thermal design of chicken coops, which use the shape of the building to guide air where it is needed to efficiently cool the interior.
Each coop has louvers built into the side to allow cool air to enter the computing area. The air then flows through two rows of cabinets and into a contained center hot aisle, which has a chimney on top. The chimney directs the waste heat into the top of the facility, where it can either be recirculated or vented through the cupola.
Refinements Ahead?
The result is an extraordinarily efficient and sustainable facility, which boasts a Power Usage Effectiveness (PUE) of 1.07 while earning rare praise from the environmental group Greenpeace for its clean power sourcing.
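For reference, PUE is simply the ratio of total facility power to the power delivered to IT equipment, so a 1.07 figure means only 7 percent overhead for cooling, power distribution and lighting. A minimal Python sketch of the arithmetic, with illustrative load figures that are hypothetical rather than Yahoo’s actual numbers:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Hypothetical loads chosen to reproduce a 1.07 PUE.
it_load_kw = 1000.0   # servers, storage, network gear
overhead_kw = 70.0    # cooling, power distribution, lighting
print(f"PUE = {pue(it_load_kw + overhead_kw, it_load_kw):.2f}")  # -> PUE = 1.07
```

For comparison, a conventional enterprise data center with a PUE of 2.0 burns an extra kilowatt of overhead for every kilowatt of IT load.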
It’s not clear whether Yahoo’s move to procure more land suggests that it may be shifting into a more active building mode. Officials in Lockport say Yahoo has yet to file site plans for the second phase of the project, but they understand the company may seek to begin construction in September. David Kinyon, the executive director of the Lockport Industrial Development Agency, said there appear to be “extensive internal discussions at Yahoo about what the second complex should look like.”
That sure sounds like modifications to the design, which would be in keeping with the company’s history of innovation. Many of the industry’s design thought leaders have spent time on the Yahoo team, including Facebook’s Tom Furlong, Apple’s Scott Noteboom, CyrusOne’s Kevin Timmons and uber-consultant K.C. Mares. If Yahoo is going back to the drawing board, the next phase of the Computing Coop campus will be a project to watch.
12:30p

Managing What Matters In the Cloud: The Apps

Paul Speciale is Chief Marketing Officer at Appcara, a provider of a model-based cloud application platform. He has more than 20 years of experience helping cloud, storage and data management technology companies, as well as cloud service providers, address the rapidly expanding Infrastructure-as-a-Service and big data sectors.
Numerous IT management tools are available today for use with the cloud, but the rubber meets the road at the level of the application because this is what a user will actually “use.” The need for application management tools is particularly critical when applications go beyond typical single server Web sites into the more complex category of multi-tier enterprise applications. In these applications, multiple Web servers are needed to address a large load of user requests, which in turn depend on the business logic in a tier of application servers – and they, in turn, must access data in a tier of database servers. That’s a lot of inter-dependencies to manage so that users experience consistently fast performance, at agreed-upon SLAs.
To address this complexity, it is critical to carefully manage and automate the application layer – and while various kinds of solutions are available to help, not all target the same layer of the classic application stack.
Cloud Management
Many companies have now transitioned to using clouds for access to IT resources such as servers and storage. The term “Cloud Management” is nebulous, but it typically refers to a distinct set of tasks for managing the infrastructure-level (IaaS) layers of a cloud – the physical and virtual infrastructure, as well as the cloud orchestration layer – whether inside a corporation or a service provider cloud.

This pertains to managing the infrastructure elements on which the cloud is running – the physical elements such as servers, networks and storage, as well as the virtualization layer and the cloud stack. The latter can be open-source software such as OpenStack or CloudStack, or commercial products such as Citrix CloudPlatform or VMware vCloud Director (vCD). Managing the cloud stack itself is typically done through vendor-provided UIs, augmented by third-party tools. The user-level elements managed within such an IaaS cloud are virtual servers, cloud storage and shared resources such as load balancers and firewalls.
Cloud Application Management
In contrast to Cloud Management, the emerging category of “Cloud Application Management” addresses the next level above the infrastructure cloud. It is implicitly driven by the enterprise shift of moving more and more application workloads to cloud-style deployments. Critical enterprise applications that run businesses will soon make the shift to cloud, including the proprietary apps that capture the competitive advantage in sales, marketing and operations for many companies. Until now, too many obstacles have inhibited migrating, deploying and managing these applications in cloud environments; once those obstacles are removed, the advantages of on-demand usage and billing, fast provisioning and agility can make an IT environment much more productive than before.

The application layer can therefore include, but is not limited to:
- The application packages and any dependent packages (such as PHP and Java)
- The application directories and configuration files
- Security policies
- Firewall rules
- The metadata describing inter-dependencies between components
The last point is key, since modern enterprise apps are typically constructed on multiple servers. This is the important transition that enables enterprise apps in the cloud: the ability to holistically and simply manage an application consisting of multiple components as a single entity, as the sketch below illustrates.
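As a rough illustration of what “single entity” management implies, here is a minimal Python sketch – all component names are hypothetical, not Appcara’s model – that records inter-dependency metadata for a multi-tier stack and derives a safe provisioning order from it:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    depends_on: list = field(default_factory=list)  # inter-dependency metadata

# A three-tier application described as one entity (illustrative names).
app_stack = [
    Component("database"),
    Component("app-server", depends_on=["database"]),
    Component("web-server", depends_on=["app-server"]),
    Component("load-balancer", depends_on=["web-server"]),
]

def provision_order(stack):
    """Topologically order components so each dependency starts first."""
    done, order = set(), []
    def visit(c):
        for dep in c.depends_on:
            visit(next(x for x in stack if x.name == dep))
        if c.name not in done:
            done.add(c.name)
            order.append(c.name)
    for c in stack:
        visit(c)
    return order

print(provision_order(app_stack))
# -> ['database', 'app-server', 'web-server', 'load-balancer']
```

Real cloud application management platforms capture far more – packages, configuration files, firewall rules – but the core idea is the same: the dependency metadata travels with the application, so the whole stack can be deployed, moved or cloned as one unit.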
1:00p

Telx Expands to Another Major NYC Data Hub

The 27-story data hub at 32 Avenue of the Americas is framed by the cloudscape over Manhattan. Also visible in the foreground: a billboard for Telx, the building’s newest tenant and Meet-Me-Room operator. (Photo: Telx)
For Telx, the shortest path to the cloud runs through the skyscrapers of Manhattan. The fast-growing provider is expanding to a third major telecom hub in New York City, leasing a floor of space at 32 Avenue of the Americas in Tribeca, where it will also manage the building’s Meet-Me-Room (MMR) for landlord Rudin Management.
The new facility continues a major expansion for Telx in the New York market, where it now operates six data centers. Telx is already a major player at the city’s two largest carrier hotels, 60 Hudson Street and 111 Eighth Avenue, and now adds a third base of operations at 32 Avenue of the Americas, a historically important telecom hub. The colocation and interconnection specialist also operates three data centers in northern New Jersey, with a facility in Weehawken and two data centers in Clifton, including a brand new $200 million greenfield project.
Once the new NYC3 data center comes online in the second quarter of 2014, Telx will operate a total of 550,000 square feet of data center space in the greater New York market.
“Telx’s Backyard”
“New York City has always been Telx’s backyard, and although we have expanded throughout the country in recent years, we always remain focused on providing unparalleled network connectivity and datacenter solutions within the New York City metro area,” said Chris Downie, President and Chief Financial Officer for Telx. “The establishment of NYC3 at 32 Avenue of the Americas provides Telx’s clients and prospects with a comprehensive connectivity solution and another option for expansion in New York City that is equal to the expertise and operational compliance that Telx delivers at NYC1 and NYC2.”
The deal brings together two veteran players in the Manhattan telecom landscape. The Rudin family bought its first property in Manhattan in 1902, and has built a portfolio spanning 14 million square feet of space, including some of the city’s premier commercial properties. Telx’s history is briefer, dating to 1992, but the company was a pioneer in developing the market for telecom services at 60 Hudson Street and has since expanded nationally across 20 sites.
The new Telx NYC3 data center will be a 72,000 square foot facility occupying the entire 10th floor of 32 Avenue of the Americas, with an option to expand to “multiple additional floors” in the building. Telx will also manage and operate the HUB, the building’s carrier-neutral meet-me room. The Rudin organization’s strategic vision is for the HUB to aggregate voice, data and wireless service providers in a single network-dense facility.
“We’re proud to be partnering with Telx on this exciting and innovative venture at 32 Avenue of the Americas, which is widely acknowledged as among the world’s most user-friendly centers for telecommunications, technology and media firms,” said William Rudin, Vice Chairman and CEO of Rudin Management. “This partnership further demonstrates 32 Avenue of the Americas and New York City as a technology and media center. As technology needs continue to grow in New York, the infrastructure will be there to meet the growing demands.”
“Off Island” Options Offered
Telx will offer its portfolio of interconnection services, and will tie all three Telx NYC metro facilities together via the Telx Metro Cross Connect network. It will also offer both “on island” network diversity and “off island” options, with Tier III environments outside the 500-year flood plain.
32 Avenue of the Americas also offers line-of-sight advantages for point-to-point microwave data transmission, which are of growing interest as a low-latency connectivity option for high frequency traders with gear at the major financial data centers in the New York area.
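The attraction is straightforward physics: radio waves travel through air at very nearly the vacuum speed of light, while light in fiber travels at roughly two-thirds of it, and fiber routes are rarely straight lines. A back-of-the-envelope Python sketch; the 50 km distance and the 1.4 route factor are hypothetical, not figures for any actual link:

```python
C_KM_S = 299_792.458          # speed of light in vacuum, km/s

distance_km = 50.0            # hypothetical straight-line metro path
route_factor = 1.4            # hypothetical detour factor for a fiber route
fiber_speed = C_KM_S / 1.468  # light in glass, refractive index ~1.468

microwave_ms = distance_km / C_KM_S * 1000               # line-of-sight, ~c in air
fiber_ms = distance_km * route_factor / fiber_speed * 1000

print(f"one-way: microwave ~{microwave_ms:.3f} ms, fiber ~{fiber_ms:.3f} ms")
# -> one-way: microwave ~0.167 ms, fiber ~0.343 ms
```

Halving one-way latency is a rounding error for most workloads, but it is decisive for high frequency trading strategies that race to the exchange.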
32 Avenue of the Americas was built in 1932 by AT&T, and for many years was one of the city’s critical communications hubs, along with the Western Union building at 60 Hudson Street. The 27-floor, 1.15 million square foot property was purchased by the Rudin Organization in 1999. The building’s existing base of 40+ key participants and 90+ network service providers have established the initial infrastructure to attract other cloud service providers.
CBRE, Inc. (CBRE) represented Telx on the transaction, through a team led by Robert Meyers and Amanda Bokman. Robert Steinman, Vice President at Rudin Management Company, represented the building’s ownership in-house. Terms of the deal were not disclosed.
1:15p

IT Automation Provider Puppet Labs Acquires Cloudsmith

IT automation software provider Puppet Labs has acquired Cloudsmith, a provider of development tools for rapidly building and testing infrastructure automation. The acquisition will integrate the Cloudsmith engineering team and products into Puppet in a bid to accelerate enterprise adoption of IT automation. Terms of the deal were not disclosed.
Cloudsmith provides tools for system administrators and developers that make it easier to automate management of IT resources through intuitive GUIs and SaaS applications. Puppet Labs plans to tightly integrate Cloudsmith’s products with its flagship Puppet Enterprise offering to boost customers’ ability to automate and manage complex infrastructures.
“The Cloudsmith team is stacked with an exceptional group of technologists who are influential and heavily involved in both the Puppet and Eclipse communities,” said Luke Kanies, CEO and founder of Puppet Labs. “Like Puppet Labs, Cloudsmith cares about its products’ ease of use and has tremendous expertise in building great tools. We look forward to leveraging Cloudsmith’s experience and technology in our product suite to make automation accessible to an even larger segment of enterprise IT professionals.”
Two notable Cloudsmith offerings are Geppetto and Stack Hammer, both of which will continue to be developed and supported post-acquisition. Geppetto is an integrated development environment (IDE) for building and publishing Puppet modules. Stack Hammer is a service for integrating, testing, and deploying collections of Puppet modules as complete “stacks” through integrations with GitHub and Amazon EC2.
“We have been working with the Puppet Labs team and community for years and find we have a shared vision, culture, and passion around helping IT teams improve their lives through automation,” said Mitch Sonies, CEO and founder of Cloudsmith. “And while the Cloudsmith Geppetto community has grown to more than 10,000 users in a short period of time, together with Puppet Labs we’re looking forward to bringing Geppetto’s ease-of-automation benefits to many, many more.”
According to technology research firm Gartner, automation is one of the top initiatives for enterprise IT in 2013. Puppet Labs’ mission is making it easier to manage the flow of changes in complex IT environments.
2:30p

Data Center Jobs: Cushman and Wakefield

At the Data Center Jobs Board, we have a new job listing from Cushman and Wakefield, which is seeking a Project Engineer in Shelton, Connecticut.
The Project Engineer will manage engineering support for a portfolio of critical operating environments, including a 250,000 square foot Tier IV data center and a 1.4 million square foot banking headquarters. The role supports a Senior Property Manager and Operations Manager and reports to the Regional Critical Systems Manager. Responsibilities include providing technical engineering, process and other required resource support to the facilities, managing the implementation of the capital project program for the facility, and managing all construction managers, consultants, vendors and management processes – i.e., the budget, design, build and commissioning of capital improvements. To view full details and apply, see job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
3:36p

Micron Unveils 16 Nanometer Flash Memory

Micron is sampling 16-nanometer process technology, enabling 128 gigabit (Gb) multi-level cell (MLC) NAND Flash memory devices. (Photo: Micron)
Micron announces a super-small 16-nanometer flash technology, and Violin Memory and Fujitsu partner on extreme-scale, flash-memory-based Microsoft SQL data warehouse solutions.
Micron’s 16-Nanometer NAND
Micron (MU) announced that it is sampling 16-nanometer process technology, enabling 128 gigabit (Gb) multi-level cell (MLC) NAND Flash memory devices. The 16nm node is not only the leading Flash process; it is also the most advanced process node of any semiconductor device now sampling. Targeted at both retail applications and data center cloud storage, the new 128Gb NAND Flash memory provides the greatest number of bits per square millimeter and the lowest cost of any MLC device in existence. Full production is planned for the fourth quarter of this year, with a new line of solid-state drive solutions expected to ship in 2014.
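To put the density figure in everyday terms: MLC stores two bits per cell, so a 128 Gb die holds 16 GB of raw capacity. A quick arithmetic sketch in Python – the die counts are hypothetical package choices for illustration, not Micron specifications, and Gb is treated as 10^9 bits for simplicity:

```python
DIE_GBIT = 128       # announced per-die density, in gigabits (10**9 bits here)
BITS_PER_CELL = 2    # MLC stores two bits per cell

cells_per_die = DIE_GBIT * 10**9 // BITS_PER_CELL
die_gbyte = DIE_GBIT / 8  # 128 Gb -> 16 GB per die
print(f"~{cells_per_die:.2e} cells per die, {die_gbyte:.0f} GB per die")

for dies in (4, 8, 16):  # hypothetical package configurations
    print(f"{dies} dies -> {dies * die_gbyte:.0f} GB raw capacity")
```

Usable drive capacity would be lower once over-provisioning and error-correction overhead are subtracted.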
“Micron’s dedicated team of engineers has worked tirelessly to introduce the world’s smallest and most advanced Flash manufacturing technology,” said Glen Hawk, vice president of Micron’s NAND Solutions Group. “Our customers continually ask for higher capacities in smaller form factors, and this next-generation process node allows Micron to lead the market in meeting those demands.”
Violin and Fujitsu partner for Flash Data Warehouses
Violin Memory and Fujitsu Technology Solutions announced a partnership to offer three Microsoft SQL data warehouse solutions certified for 20TB, 60TB and 240TB of all-flash memory. These data warehouse solutions scale from 20TB (5U) to 240TB (24U) and do not require re-architecting of the data warehouse environment, making the deployments simple and predictable. The certified solutions provide customers with performance previously achieved only from in-memory server solutions.
“Delivering the right solution to meet individual customer needs is important to Fujitsu. By combining Fujitsu PRIMERGY Server, Fujitsu ETERNUS Storage and Violin Memory all-flash arrays we can offer database acceleration based on industry standards. Compared to other market offerings, we provide customers with access to some of the fastest and affordable solutions for running their SQL environment,” said Bernhard Brandwitte, Vice President Storage Business, Fujitsu Technology Solutions.
5:00p

Audit-Buddy Wants to Befriend Your Environmental Data
Purkay Labs has a simple, elegant solution for measuring data center temperature and humidity. At a time when many DCIM providers are trying an “everything but the kitchen sink” approach, Purkay Labs’ Audit-Buddy is a simple system that does one thing well: it gathers environmental data in the white space. It’s portable, easy to use, and by design has no links to existing infrastructure.
“It’s a product that we’ve developed so you can get the temp across the entire aisle,” said CEO Indra Purkayastha. “They now have a tool they can easily use rather than call somebody.”
Audit-Buddy is a portable system that takes minutes to install and requires little back-end management. Data is either displayed on the unit or transferred by USB. It’s essentially a low-cost way to check environmentals over the short or long term, from an hour to a week (modes called QuickScan and LongScan).
Audit-Buddy consists of three modules and an adjustable carbon fiber rod that measure the air quality at three different heights. A patent-pending fan design draws external air into each module, rapidly measuring outside air quality without requiring the unit to reach thermal equilibrium with the surrounding air. It quickly surveys the site to pinpoint problem areas and collects time-stamped data to provide information for corrective action.
Potential uses include the following (a minimal analysis sketch follows the list):
- Track temperature and humidity variations at server racks;
- Establish a 3-D baseline thermal survey profile of the site inexpensively;
- Measure inlet air quality;
- Detect a heat leak or cold air loss;
- Evaluate the cooling and containment performance in hot or cold aisles;
- Measure airflow at three different heights in multiple locations in the facility.
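As a rough illustration of what three-height data enables, here is a minimal Python sketch of the kind of check an operator might run. The readings and the 5 C spread threshold are hypothetical; the 27 C limit reflects the upper end of the commonly cited ASHRAE recommended inlet range:

```python
# Hypothetical inlet temperatures at one rack face, in degrees C.
readings = {"bottom": 21.5, "middle": 23.0, "top": 29.0}

ASHRAE_MAX = 27.0   # recommended upper inlet limit (ASHRAE, simplified)
SPREAD_LIMIT = 5.0  # illustrative threshold for vertical stratification

spread = readings["top"] - readings["bottom"]
if readings["top"] > ASHRAE_MAX:
    print(f"top-of-rack inlet {readings['top']} C exceeds {ASHRAE_MAX} C limit")
if spread > SPREAD_LIMIT:
    print(f"vertical spread {spread:.1f} C suggests hot-exhaust recirculation")
```

A large top-to-bottom spread at a rack face is a classic sign that hot exhaust air is wrapping over or around the row and being drawn back into server inlets.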
This fits a niche. It’s simple, easy to use and solves a problem. It’s targeted to smaller deployments like server closets or data centers under 10,000 square feet. What might not be directly apparent, the company says, is the appeal Audit-Buddy may have at larger facilities.
“For people who are thinking of investing in a DCIM system, this might give them the data that they need to justify it to the CIO,” said Purkayastha. At a fraction of the cost of a DCIM suite – the unit sells for $1,449 – it can serve as a first step into DCIM. Purkayastha says the company also found another interesting market: colo operators and their tenants. “The operators can use the product to justify that they’re delivering the SLAs, and the tenants can check to see if they’re getting the right air quality,” said Purkayastha.
“The people who commission data centers are another market,” said Purkayastha. “They can prove to the owner that the whole environment is indeed what it should be. My personal hope is that operators will use this.”
The company has launched Audit-Buddy out of stealth after first revealing it at a conference last month. “Last week a large cloud provider invited us in to do a pilot,” said Purkayastha.
The competition includes IR guns and wireless sensors, and Purkayastha believes Audit-Buddy has advantages over these tools. “IR guns are not accurate, plus they need a surface,” said Purkayastha. “Audit-Buddy is, by design, made to monitor the air temperature accurately. As for wireless sensors, they need to be placed somewhere, there’s a logistics issue, and they need a monitoring system.”
A low price point and ease of use means that Audit-Buddy might be of interest to the data center operator, serving a niche as the simple entry point into larger DCIM systems for users seeking to cost-justify an investment. Operators can quickly check out environmentals and put the unit away, and customers can check their colo space to see if they’re getting the environment they need.
5:22p

Riverbed Releases Enterprise-Class Server Consolidation Solution

Riverbed (RVBD) has introduced a new release of its Granite product family, delivering an enterprise-class solution for server and data consolidation. With the new release, IT managers can now extend the benefits of Granite to larger branch offices and data-intensive applications that previously were difficult or impossible to consolidate.
Granite 2.5
The new release includes new Granite 2.5 software that adds support for Fibre Channel alongside the existing iSCSI support, enabling Granite solutions to support over 90 percent of the current enterprise-class storage array market. Granite 2.5 also improves data protection with automated snapshots and simplified support for existing data center-class backup and recovery software. Fibre Channel support extends to the virtual Granite Core as well, broadening integration with enterprise-class storage arrays, including those from EMC, NetApp and IBM.
“We see the expanded flexibility of Granite 2.5 in the data center as a big win – both from a storage and a data protection standpoint,” said Mike Rinken, director of information technology at Mazzetti, an engineering design and consulting firm. “In our business where large CAD files have been moving toward a more collaborative parametric modeling data set, more resources for Granite Edge appliances make perfect sense. We can consolidate even more data and still make sure branch users get the local performance they need to be effective. We’re excited to have the flexibility of utilizing Fibre Channel storage and how Riverbed is extending their storage footprint by offering more robust devices.”
New Steelhead EX 1360
Riverbed also announced a new Steelhead EX 1360 appliance, delivering a local-speed experience for applications that need high-performance storage access, such as high-volume transaction databases and geographic information systems (GIS) data modeling files that require high input/output operations per second (IOPS). The new Steelhead appliance also enables organizations with larger data workloads to increase the amount of data that can be locked within the appliance cache, improving the ability of branch offices with larger data sets to maintain productivity, even during WAN outages.
Granite 2.5 and Steelhead EX 1360 model appliances are expected to be generally available in the third quarter of 2013.