Data Center Knowledge | News and analysis for the data center industry
Thursday, May 2nd, 2013
11:30a | David Shaw of IO is AFCOM’s Data Center Manager of the Year

LAS VEGAS – David Shaw of IO has been named Data Center Manager of the Year by AFCOM, the leading association for data center managers. Shaw, the Senior Vice President of IO, was honored Wednesday night in an awards ceremony at Data Center World in Las Vegas.
Shaw manages more than 1.5 million square feet of data center capacity for IO, which has been a pioneer in deploying modular data center designs. He oversees IO’s “Data Center as a Service” offering, which deploys managed data center space in IO’s factory-built modules and includes the company’s IO.OS data center management software.
The other finalists were Tate Cantrell, the Chief Technology Officer at Verne Global, and Donna Manley, the Senior IT Director at the University of Pennsylvania.
Shaw, who has been working in the industry since 1987, oversaw the opening of the world’s largest modular data center in Edison, New Jersey, and ensured that the facility remained 100 percent operational during Hurricane Sandy and in its aftermath.
The Power of People
In accepting the award, Shaw noted that data center managers work with six things – power, cooling and connectivity, and people, process and technology. Of those, he said, people are the most important part of the equation. He dedicated the award to his team of 50 staff members, who work in four data centers across the U.S.
Shaw also emphasized the importance of bringing “new blood” into the industry by drawing the next generation of IT workers into the field.
Prior to joining IO, Shaw led the greenfield build and operational implementation of a 7 megawatt data center supporting electronic medical records and critical patient care systems for 26 hospitals and services across five states. He was also responsible for global data center management at Perot Systems over a 10-year period.
He received in-depth specialist data center and operations training from IBM and the UK Ministry of Defence, and is certified in ITIL service management. While serving in the Royal Air Force, he graduated in electronic engineering and studied computer-aided engineering, specializing in robotics.
The Data Center Manager of the Year was selected through a blind judging process conducted by three past winners. The award is named for Len Eckhaus, the founder of AFCOM.
AFCOM was founded in 1980 to support the educational and professional development needs of data center and facilities management professionals around the globe. The association has more than 3,500 members and 40 chapters worldwide, and provides education and networking for data center managers through its Data Center World conferences, regional chapters, and Data Center Management magazine.

12:00p | Surviving Sandy: Two Views of the Superstorm

A look at some of the damage wrought by Superstorm Sandy on a property adjacent to the IFF data center in Union Beach, New Jersey. (Photo: IFF)
LAS VEGAS – For Alex Delgado, things were going from bad to worse as Superstorm Sandy slammed the Jersey Shore. It was high tide, during a full moon. There was a 13-foot storm surge, and the data center was less than a mile from the beach. Six hours into the storm, the company’s operations team in India had to be evacuated due to a cyclone.
The staff at the International Flavors & Fragrances (IFF) data center in Union Beach, N.J. used to joke about a single telephone pole that carried “half of the Internet and half of its power.” As Sandy came ashore, that was the pole that fell. In short, had Delgado won a raffle that week, it would have been for the Hunger Games. Everything was going wrong.
The campus was swamped with six to seven feet of water. Both of its power substations were under water, as were the diesel fuel pumps. The UPS batteries were nearing their end of life. Street power was out, and access to the facility was hindered by a partially collapsed road.
Different Scenarios, Different Considerations
Delgado, the Global Operations and Data Center Manager for IFF, shared his experience this week as part of a keynote panel at Data Center World in Las Vegas. The panel showcased two stories of Sandy’s impact: one from the Jersey Shore at the heart of the damage, another from Philadelphia.
The data center in Union Beach supports more than 50 manufacturing facilities around the world for IFF, a chemical manufacturing company that did over $2.8 billion in revenue last year. While Delgado and his team struggled with the storm, the event had no major impact on customers, as the company didn’t lose a single order.
The 4,500 square foot facility is a single-tenant building with 30 minutes of UPS backup. Its disaster recovery site is two hours away at an IBM facility in Sterling Forest, New York. As the storm intensified, IFF was able to shift its critical operations to the backup facility.
The damage in Union Beach was severe. The data hall stayed dry, as it was on the second floor of the building. But the storm surge took out power and mechanical infrastructure, and flooded the machine shop, ruining most of the facility’s power tools and spare parts. With the power out and roads closed or blocked, staff stayed in place for days. With provisions exhausted after the first 48 hours, IFF staff subsisted on vending machine food as they began the recovery effort, Delgado said.
The data center was returned to service on Dec. 8 with new generators and infrastructure. Delgado wound up procuring 300 batteries and three generators.
Delgado’s key “lessons learned” included vendor support. “If you don’t have a good relationship with your vendors, start shaking some hands today,” he said. He also noted that the company had moved to cloud email, which saved a ton of headaches in terms of communication.
The View From Philly
Donna Manley, IT Senior Director at the University of Pennsylvania, showed a different side of the storm. While Philadelphia wasn’t impacted nearly as hard, the operational impact of the storm was significant.
The university’s main data center occupies 4,850 square feet in a multi-tenant building in the University City section of Philadelphia. Manley’s story is important because it revealed a larger concern than just the data center: the city of Philadelphia’s aged infrastructure.
A week prior to the storm, Manley and her team started the planning process. They identified teams, began tarping the windows, and put disaster recovery provider SunGard on alert. “We started our crisis command center on the 29th, setting up a separate Box.net instance just in case we lost power and were in an emergency situation,” said Manley.
Understanding the geographic diversity of the staff was important, as some employees lived in areas where the storm hit hard. “We had very few individuals who could have been on site,” said Manley. “We needed to make sure there was technical and management staffing.”
Cloud Services Play a Role
Manley leveraged online storage provider Box.net to get her team through the storm. “Resourcing doesn’t just mean people,” said Manley. “One of the big things we have going on is our documentation. Up until recently, we had it in SharePoint. We made it available on Box.net, and we didn’t have to worry about servers going down and documentation not being available to us.”
Manley’s advice is to have a data center crash kit checklist. “Because we’re an urban campus, we have a couple of unique items on there – respirator masks, subway tokens to get to the disaster recovery site at SunGard,” she said.
She said it’s also important to read the fine print on disaster recovery agreements to see whether a fee is required to put your provider on standby. There’s also food: workers may have to stay put at the data center for extended periods, and the local restaurants aren’t as committed to staying online as a data center.
Both organizations said managed services and hosting now appeal to them somewhat more than they did before the storm. Cloud services played an important role in both disaster plans, even if only to keep communications open through email.

12:39p | New Arista Switch Beefs Up Network Scale and Speed

Arista Networks announced several enhancements to its flagship 7500 modular switching platform Wednesday. With improvements to density, scale and speed, Arista says the 7500E is its fastest and most scalable Ethernet switch ever, and will enable cloud networks to scale to over 100,000 servers and millions of virtual machines while delivering a network architecture that can concurrently support cloud, big data, Web 2.0 and virtualization workloads.
“The 7500E Series is a major engineering achievement, offering the industry’s highest throughput and three times the capability of the original Arista 7500 in every dimension – performance, density and power without a chassis upgrade,” said Andreas Bechtolsheim, Arista’s Chairman and Chief Development Officer. “It enables customers to build the world’s largest switching infrastructures that handle the most demanding workloads with ease.”
The Arista 7500E offers configurations with a choice of 1,152 wire-speed 10GbE ports, 288 40GbE ports, or 96 100GbE ports. Key enhancements include triple the fabric bandwidth at 30 terabits per second, triple the packet buffer at 144 GB per switch, triple the control plane performance, triple the power efficiency at less than 4 watts per 10GbE port, and a triple-speed 10/40/100GbE line card.
New Line Cards, SDN Features
Four new line cards are available for the 7500E, including the 10/40/100G line card with integrated MXP (multi-speed port) optics that can be software-configured on a per-port basis, delivering constant price-per-bandwidth at every port speed. All line cards offer the same deep packet buffers – 128 MB per 10G port, 512 MB per 40G port and 1.5 GB per 100G port – along with large L2/L3 lookup tables and wire-speed VXLAN processing on every port.
The Arista 7500E, together with Arista EOS, includes software-defined networking capabilities that support programmatic control of the switch. With a Layer 3 load-balancing architecture, a universal cloud networking infrastructure can be built to support data centers with more than 100,000 servers, delivering consistent performance for dynamically scaling workloads in public or private clouds.
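EOS exposes that programmability in several ways; one is a JSON-RPC command API (eAPI) that returns structured data instead of screen-scraped CLI text. The snippet below is a rough, hypothetical sketch of scripting against it, not Arista documentation: it assumes eAPI has been enabled on the switch, and the address and credentials are placeholders.

```python
# Minimal sketch: drive an EOS switch through its JSON-RPC eAPI using
# the jsonrpclib package. Assumes eAPI is enabled on the switch; the
# address and admin credentials below are placeholders.
from jsonrpclib import Server

switch = Server("https://admin:admin@192.0.2.1/command-api")

# runCmds(version, commands) executes CLI commands and returns
# structured JSON, so scripts read fields rather than parsing text.
response = switch.runCmds(1, ["show version"])
print(response[0]["version"], response[0]["modelName"])
```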
“The high density 40GbE and 100GbE interfaces, deep packet buffers, SDN features and the robustness of EOS make the Arista 7500 an ideal spine platform for our network,” said Benjamin Nathan, Director, IT Operations and Infrastructure at Weill Medical College of Cornell University. “Arista continues to innovate on programmability with its Linux-based EOS, which was a key factor in deciding on the 7500E for our Big Data needs.”
The 7500E series switches and line-card modules are generally available now.

1:06p | 365 Main Expands Data Center in New York City

Data center developer 365 Main is expanding its New York data center, doubling the size of the facility to address increasing demand. The company is adding 16,000 square feet of space at its existing facility at 65 Broadway.
365 Main said it has made “considerable investments” to build out the facility, which is located near Wall Street and enables the company to increase its base of financial services customers.
“365 Main is committed to meeting rapidly growing customer demand in New York and other critical geographies,” said Chris Dolan, CEO. “Strategic expansion has been at the core of the 365 Main customer service plan from the moment we acquired our 16 data centers in 2012. We are translating our vision into meaningful action – our customers and partners are queuing up to capitalize on our increased size and markets.”
Dolan and partner James McGrath bought 16 data centers from Equinix last year, including a number based in secondary markets where 365 Main sees growth potential. But the company also sees opportunity in historic data center hubs, including Manhattan.
365 Main is a privately held company based in San Francisco. Its financial partners include Housatonic Partners, Crosslink Capital and Brightwood Capital.

4:33p | Brocade Rolls Out Broad-Based SDN Strategy

A look at the Brocade MLXe module. (Image: Brocade)
Network equipment vendor Brocade (BRCD) this week rolled out a broad strategy to boost its offerings for software-defined networking (SDN). The initiative spans both software and hardware, with the Brocade VCS Fabric technology as a central component. Brocade describes its approach as the “On-Demand Data Center,” and said it is a logical step on the path toward mass customer adoption of SDN.
“While one of the more attractive benefits of virtualization is a reduction in capital expenses, we are starting to see the operational expenditures of highly virtualized environments increase because they lack proper orchestration, automation and management tools,” said Zeus Kerravala, founder and principal analyst with ZK Research. “Brocade’s On-Demand Data Center strategy provides a resilient and complete blueprint that unifies vital areas of the data center, from Fabrics to storage to physical and virtual infrastructure. Additionally, this strategy provides a pragmatic route for the adoption of emerging Software-Defined Networking technologies.”
Software Solutions
For software networking solutions, Brocade announced release 6.6 of its vRouter family, which is based on technology acquired in its purchase of Vyatta and will support multicast routing and Dynamic Multipoint VPN (DMVPN), two technologies that are critical for large enterprises and cloud service providers. The Vyatta vRouter is platform- and hypervisor-agnostic and is deployed in environments ranging from virtual private data centers to public clouds.
As part of its application delivery controller portfolio, Brocade announced the Virtual ADX, a virtual application delivery platform that speeds the deployment of application resources and differentiated services in dynamic cloud environments. The company also enhanced its cloud provisioning capability with an update to Brocade Application Resource Broker and continued work on the OpenStack plugin for load balancing as a service.
Hardware Solutions
Brocade announced a new four-port 40 GbE MLXe module for data-intensive services that delivers wire-speed performance. For smaller networks, Brocade announced new versions of its compact NetIron CER routers, featuring up to four ports of 10 GbE. NetIron software updates were also introduced, including enhancements for high-performance routing and SDN capabilities. The new release supports OpenFlow Hybrid Port Mode technology, which lets customers deploy OpenFlow and traditional routing simultaneously on the same port for a seamless migration path to SDN.
Brocade was an integral vendor in the March 2013 launch of CyrusOne’s Texas Internet Exchange, an interconnection platform deployed across CyrusOne facilities in Austin, Dallas, Houston and San Antonio. “Given the massive data we transport for some of the world’s largest companies, CyrusOne cannot solely rely on traditional networking practices to solve modern business challenges,” said Josh Snowhorn, vice president and general manager of InterConnection at CyrusOne. “The Brocade MLXe routers enable CyrusOne’s infrastructure to evolve with our customers’ needs, whether that requires scaling from 10 Gigabit to 100 Gigabit Ethernet or implementing new technologies that allow us to better serve them.”
Fabric Orchestration for OpenStack
Brocade announced its continued support for the OpenStack initiative and its ongoing commitment to bringing open network solutions to enterprise and service provider customers. The centerpiece is a new Brocade VCS fabric plugin that delivers on-demand fabric provisioning capabilities in OpenStack-based cloud environments. The VCS plugin is available as a component of the OpenStack Grizzly release and is an essential part of the On-Demand Data Center strategy.
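To a cloud tenant, a core plugin of this kind is transparent: networks are requested through the standard OpenStack networking API, and the operator-configured plugin translates those calls into fabric provisioning. The sketch below is a hypothetical illustration of that tenant workflow using the Grizzly-era python-quantumclient; the endpoint, credentials and network name are invented.

```python
# Hypothetical tenant-side sketch: create a network through the
# standard OpenStack (Grizzly-era Quantum) API. Whichever core plugin
# the operator has configured – a fabric plugin such as Brocade’s, for
# example – decides how the network is realized; this client code does
# not change. Endpoint, credentials and names are placeholders.
from quantumclient.v2_0 import client

quantum = client.Client(username="demo",
                        password="secret",
                        tenant_name="demo",
                        auth_url="http://controller:5000/v2.0")

# Request a tenant network; the plugin provisions it on the fabric.
net = quantum.create_network({"network": {"name": "app-tier",
                                          "admin_state_up": True}})
print(net["network"]["id"])
```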
“The benefit of deploying a private or public cloud built on OpenStack is that it gives the customer the utmost level of flexibility and control when provisioning essential components within an open cloud architecture. The inclusion of Brocade as a part of the Rackspace Private Cloud reference architecture addresses the desire of our customers to have choice in their distributions,” said John Igoe, vice president of Rackspace Private Cloud. “Innovative companies like Brocade, who are committed to supporting open initiatives, enable customers to fully capitalize on their IT investments without risk of vendor or technology lock-in associated with proprietary platforms.”
Additionally, the company is taking a leadership role in the OpenStack development community with the delivery of a Fibre Channel blueprint for storage networking. Brocade is also a founding board member and platinum sponsor of OpenDaylight, a common software-defined networking platform.

6:50p | MapR Updates Big Data Platform for NoSQL and Hadoop

MapR Technologies has announced an update of its Big Data platform that provides performance improvements for NoSQL and Hadoop applications. With the MapR M7 Edition, MapR says it has removed the trade-offs organizations face when looking to deploy a NoSQL solution.
M7 delivers over one million operations per second on a 10-node cluster and can support up to one trillion tables across thousands of nodes, according to MapR. It performs automatic region splits and self-tuning, with no downtime required for any operation, including schema changes. The MapR M7 Edition is available immediately.
“The number of enterprise-level deployments of Hadoop MapReduce is rising quickly, driven by a need to understand and potentially adopt this new business analytics platform for business applications,” said John Webster, principal analyst, Evaluator Group. “Responding to this demand, MapR delivers a distribution of Apache Hadoop that addresses many of the enterprise quality issues currently limiting its adoption in production data centers. With M7, HBase applications can access data directly without the redundancy of extra layers of communication, yielding a single, scalable and more reliable data store that offers high performance and is easier to develop for and administer.”
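Because M7 presents standard HBase semantics, the pitch is that existing HBase client code runs against M7 tables unchanged. As a rough, hypothetical sketch of what such a client workflow looks like (generic HBase-style code, not MapR-specific), here is a Python example using the third-party happybase library; it assumes an HBase-compatible Thrift gateway is reachable, and the host, table and column names are invented.

```python
# Hypothetical HBase-style client workflow of the kind MapR says runs
# unchanged against M7 tables. Uses the third-party happybase library
# and assumes an HBase-compatible Thrift gateway at thrift-host:9090;
# host, table and column names are placeholders.
import happybase

connection = happybase.Connection("thrift-host", port=9090)
table = connection.table("sensor_readings")

# Write one row: keys are column-family:qualifier, values are bytes.
table.put(b"row-2013-05-02", {b"cf:temp": b"21.5", b"cf:site": b"lab-1"})

# Read the row back; happybase returns a dict keyed by column name.
row = table.row(b"row-2013-05-02")
print(row[b"cf:temp"])

connection.close()
```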
MapR LucidWorks Search
MapR also announced the distribution of LucidWorks Search with the MapR platform for Apache Hadoop. The combined platform features predictive analytics, full search and discovery, and the ability to perform advanced database operations.
“Using search and Big Data isn’t just about analyzing social media content and Web traffic,” said Ted Dunning, chief application architect, MapR Technologies. “There is a wide array of new applications for combining fast Hadoop with real-time, ad hoc data accessibility to mine raw data and find useful patterns of behavior. With the MapR/LucidWorks solution, users gain a compelling alternative that is less time consuming and more unified without the need to convert, transfer or move data as required with other approaches.”
This product integration and bundling lets customers benefit from the added value that LucidWorks Search delivers in the areas of security, connectivity and user management for Apache Lucene/Solr – capabilities users would otherwise have to develop from scratch using Solr alone. “LucidWorks and MapR share a common vision for enterprise-grade enhancements users need on top of open source software for rapid development, deployment, ease of use and production quality,” said Grant Ingersoll, LucidWorks CTO.