Data Center Knowledge | News and analysis for the data center industry
Monday, December 10th, 2012
1:42p
Pica8 Announces Open SDN Reference Architecture
Palo Alto-based open networking startup Pica8 today announced an open software defined network (SDN) reference architecture. Designed specifically as a network development platform for cloud providers, the architecture combines a physical switch with a hypervisor virtual switch and an SDN controller, using the OpenFlow 1.2 standard to communicate among the three components.
It has been an extremely busy year for SDN technologies and initiatives, and Pica8 is looking to take on the challenge of being more open and avoiding the proprietary path for SDN. Integrating seamlessly with leading and emerging open network protocols, the new reference architecture is best suited for cloud, portal and service providers, where the biggest and most dynamic data centers reside today. It combines the Open vSwitch (OVS) 1.7.1 virtual switch, which is licensed under the open source Apache 2.0 license, with Pica8’s OpenFlow-compliant PicOS operating system. With support for OpenFlow 1.2, PicOS integrates with the Ryu 1.4 OpenFlow controller, designed by NTT Laboratories specifically for SDN applications for service providers.
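For readers curious what the controller side of such a stack looks like, below is a minimal sketch of a Ryu application restricted to OpenFlow 1.2, the protocol version the reference architecture uses to tie the physical switch, the virtual switch and the controller together. This is an illustrative example only, not code from Pica8's reference design; the class and handler names are hypothetical.

```python
# Minimal, hypothetical Ryu controller app: negotiate OpenFlow 1.2 and log
# each switch (physical or virtual) that completes the handshake.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_2


class HelloOpenFlow12(app_manager.RyuApp):
    # Restrict the controller to the OpenFlow 1.2 protocol version.
    OFP_VERSIONS = [ofproto_v1_2.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        # Fires once a switch finishes the OpenFlow feature exchange.
        datapath = ev.msg.datapath
        self.logger.info("switch connected: datapath id %016x", datapath.id)
```

Run under ryu-manager, any OpenFlow 1.2 switch pointed at the controller's address – an OVS bridge, for example – should show up in the log once it connects.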
“Cloud service providers are drawn to SDN for its technological, operational, and business benefits,” says Brad Casemore, Research Director, Data Center Networks at IDC. “In offering integrated SDN components, Pica8 and other vendors are trying to make it easier for cloud providers to quickly test, validate, and deploy SDN for cloud-service delivery.”
After several years in stealth mode, Pica8 emerged in 2012 with a Series A round of financing and a significant expansion of its executive management team. With sales and support offices worldwide and R&D facilities in Beijing, Pica8 strives to deliver open, flexible and adaptive Ethernet switches. The new reference design was developed on the building blocks the company has produced to date, including traffic switching, mirroring, filtering and aggregation.
The new reference architecture is the first such demonstrable product tested at the new Pica8 open network architecture lab. In the coming weeks, Pica8 will continue to test configurations with customers and partners. The testing results will contribute to reference designs that can help the development of cloud-specific applications.
2:57p
Equinix Plans Major Expansion in Toronto
Inside a data hall in an Equinix data center in Silicon Valley. The company continues to expand its data center footprint. (Photo: Equinix)
Equinix is preparing a major expansion in Toronto, where it has leased substantial space in a new building on Front Street, not far from the company’s existing facility, housed in the city’s primary telecom hub at 151 Front Street. The company disclosed its plans in an SEC filing late last week.
The new data center will be a 220,000 square foot facility. The first phase will include 137,000 square feet of data halls, supported by an initial 8 megawatts of power, with the option to expand to 20 megawatts.
The expansion in Toronto is part of Equinix’ ongoing focus on providing colocation space and interconnection centers in leading financial markets. Toronto is home to the Toronto Stock Exchange, the largest financial exchange in Canada.
The site is part of a larger tract known as the “First Parliament” property that was the original location of Canada’s Parliament buildings, which occupied the land from 1798 to 1824. The parliament buildings were burned in an American attack during the War of 1812. The site currently houses a car wash and car rental agency. The data center project was included in a land exchange deal between the city of Toronto and the Bresler development firm. City documents indicate the building at 271 Front will also include a new branch of the city library.
Equinix has signed a lease for 15 years, with options to renew for longer terms, and a total rent obligation of approximately $141 million over the initial term. Equinix expects to invest approximately $42 million to build Phase 1 of the new data center, creating capacity for more than 675 cabinet equivalents. Equinix expects the new data center to open in the fourth quarter of 2014.
3:11p
3 Steps to Keeping Your Cool in Cold Weather Months
As marketing manager for Emerson Network Power, Liebert Services, Mark Silnes is responsible for the development of new and existing service offerings related to thermal management. Joining Liebert Services in July 2006, Silnes has held roles within application engineering and business development.
 MARK SILNES
Emerson/Liebert Services
Winter is coming, bringing with it environmental changes that can negatively affect your data center. Now is a good time to prepare to ensure data center performance isn’t left out in the cold when frigid temperatures arrive.
When the temperature drops, so too does the humidity. In the data center, low humidity produces static electricity that can build up and discharge, which may damage sensitive IT equipment or cause data loss. Trying to control the air’s moisture within a narrow band when outdoor humidity is low can also cause computer room precision cooling units to fight one another inefficiently. Lack of service on the systems that maintain humidity in the recommended range can create problems as well, such as water flowing onto the data center floor.
As you prepare for cold weather, here are three simple steps to help eliminate these problems and minimize the potential for downtime:
1. Keep temperature and humidity in the American Society of Heating, Refrigerating and Air-Conditioning Engineers’ (ASHRAE) recommended range.
For most data center operations, staying within the ASHRAE recommended ranges provides the most efficient and reliable operation:
Recommended ranges:
- Temperature: 64.4°F – 80.6°F
- Humidity: 41.9°F DP – 59°F DP (dew point)
Adhering to recommended ranges for data centers ensures reliability in all environmental conditions. When operating a mission-critical facility, you should be concerned that operating with a lower-than-recommended humidity can result in loss of data, which may disrupt business continuity. If considering pushing or exceeding the recommended ranges, you should first conduct a detailed data center assessment.
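As a concrete illustration of what staying in the recommended envelope means in practice, the hypothetical monitoring snippet below flags readings that drift outside it. The thresholds simply restate the ASHRAE figures quoted above; the function name is our own, not part of any vendor's tooling.

```python
# Hypothetical helper: flag sensor readings that fall outside the ASHRAE
# recommended envelope cited in this article (values in Fahrenheit).
TEMP_RANGE_F = (64.4, 80.6)        # recommended dry-bulb temperature
DEW_POINT_RANGE_F = (41.9, 59.0)   # recommended dew point (humidity proxy)


def check_reading(temp_f: float, dew_point_f: float) -> list[str]:
    """Return a list of warnings for readings outside the recommended range."""
    warnings = []
    if not TEMP_RANGE_F[0] <= temp_f <= TEMP_RANGE_F[1]:
        warnings.append(f"temperature {temp_f}F outside {TEMP_RANGE_F}")
    if not DEW_POINT_RANGE_F[0] <= dew_point_f <= DEW_POINT_RANGE_F[1]:
        warnings.append(f"dew point {dew_point_f}F outside {DEW_POINT_RANGE_F}")
    return warnings


# Example: a dry winter reading that should trigger a low-humidity warning.
print(check_reading(temp_f=72.0, dew_point_f=35.0))
```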
2. Use intelligent cooling controls for maximum control of temperature and humidity across a room or zone.
Intelligent control systems increase efficiency by allowing multiple cooling units to work together as a single system utilizing teamwork. During the cold months, intelligent controls prevent units in different locations from working at cross-purposes. Without this type of system, a precision cooling unit in one area of the data center may be humidifying, while at the same time a unit across the room is dehumidifying. The control system gives you visibility into conditions across the room and the intelligence to determine whether humidification, dehumidification or no action is required to maintain conditions at target levels. This way, reliability is automatic.
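To make the cross-purposes problem concrete, here is a rough, hypothetical sketch of the supervisory idea: rather than each unit reacting to its own return-air sensor, one room-level dew point drives a single humidify/dehumidify/hold decision for every unit in the zone. The thresholds and names are assumptions for illustration, not any vendor's control logic.

```python
# Hypothetical supervisory logic: decide one room-wide humidity action so that
# individual cooling units do not humidify and dehumidify at cross-purposes.
LOW_DEW_POINT_F = 41.9   # below this, add moisture (assumed setpoint)
HIGH_DEW_POINT_F = 59.0  # above this, remove moisture (assumed setpoint)


def room_humidity_action(unit_dew_points_f: list[float]) -> str:
    """Return 'humidify', 'dehumidify', or 'hold' for every unit in the zone."""
    room_dew_point = sum(unit_dew_points_f) / len(unit_dew_points_f)
    if room_dew_point < LOW_DEW_POINT_F:
        return "humidify"
    if room_dew_point > HIGH_DEW_POINT_F:
        return "dehumidify"
    return "hold"


# Without coordination, unit A (38F dew point) would humidify while unit B
# (61F) dehumidifies; the room-level average (49.5F) says neither is needed.
print(room_humidity_action([38.0, 61.0]))
```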
3. Schedule winter maintenance to inspect environmental infrastructure equipment.
If you have regular maintenance from the cooling system’s original equipment manufacturer (OEM), you likely are getting an inspection to check components that can cause cold-weather water leaks or equipment failure. If you aren’t working with the OEM on preventive maintenance, you need to find the right service team to prevent inconvenient and/or costly problems such as these:
- Clogged drain due to mineral deposits in the humidifier pan.
- Electrode failure in a steam humidifier, causing the humidifier to malfunction.
- Buildup from calcium deposits, causing humidifier to overflow onto the data center floor.
- Heat rejection fluid freezing because the right amount of inhibitor or glycol was not added (similar to adding antifreeze to a car radiator).
- Inefficient cooling system operation from tree leaves and other debris getting pulled into the condenser and dry coolers, depending on their location.
- Heater pad failure in condensers with Lee Temp receivers; failed pads must be replaced for reliable winter operation.
An annual pre-winter inspection can uncover these and other issues that could prevent your data center thermal management system from performing optimally during the cold months.
Cooling Efficiency with Adjusted Set Points
Beyond taking these three steps to prevent problems caused by winter weather, you can also realize seasonal efficiencies by resetting the heat rejection set point to take advantage of economization. Fans on dry coolers and chiller plants work to drive the temperature of the heat rejection fluid below 85°F and exhaust the heat out of the building. In winter, you can get the fluid temperature down to 45°F, which allows that cooling capacity to be captured and used to cool the space without the compressors running.
With no compressors running, economization increases cooling system efficiency by 30-50 percent, depending on the application and your geographic location. The OEM or your service team can reset the set point on the cooling fluid so you can reap one of the few benefits of the cold season.
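Reduced to its simplest form, the economization decision described above is a comparison between outdoor air temperature and the fluid set point, allowing for how closely a dry cooler can approach outdoor temperature. The sketch below is a hypothetical illustration: the set points come from the article, but the 10°F approach margin and the function name are assumptions.

```python
# Hypothetical economizer check: can the dry cooler alone hit the fluid set
# point, given the outdoor temperature and an assumed approach margin
# (how close the fluid can get to outdoor air temperature)?
SUMMER_SETPOINT_F = 85.0   # normal heat rejection fluid set point (from the article)
WINTER_SETPOINT_F = 45.0   # reset set point for economization (from the article)
APPROACH_F = 10.0          # assumed margin between outdoor air and fluid temperature


def can_economize(outdoor_temp_f: float, setpoint_f: float) -> bool:
    """True if free cooling alone can reach the set point, so compressors stay off."""
    return outdoor_temp_f + APPROACH_F <= setpoint_f


print(can_economize(30.0, WINTER_SETPOINT_F))   # True: a 30F day holds a 45F set point
print(can_economize(60.0, WINTER_SETPOINT_F))   # False: compressors still needed
```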
The Cure for Data Center Winter Doldrums
Cold weather can cause problems for your data center that result, at best, in inefficiencies, and at worst, in loss of data and equipment failure – both of which can take down your data center. As we head into the colder months of the year, the above steps are just as important for data centers as boots and shovels are to homeowners in high-snow areas. Weather conditions cannot be prevented, but conditions within the data center can be controlled, so long as data center managers take the time to be proactive and prepared.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
3:35p
AT&T Partners With Akamai on CDN Suite
Here’s a roundup of some of this week’s headlines from the content delivery network (CDN) sector:
Akamai and AT&T form alliance. Akamai Technologies (AKAM) and AT&T announced a strategic alliance to deliver a global suite of content delivery network (CDN) solutions to companies. The AT&T global network and Akamai CDN platform will be combined to deliver to companies an exclusive suite of global CDN and telecom solutions that will be jointly marketed, managed and supported by the two companies. Under the terms of the agreement, Akamai will deploy CDN servers at the edge of AT&T’s IP network and in AT&T facilities throughout the United States. The companies have also agreed to dedicate shared resources including technical support, customer care, marketing and professional services support. “Aligning Akamai’s services with the global reach, scale and product depth of AT&T creates a powerful relationship aimed at helping enterprises optimize their online businesses,” said Paul Sagan, President and Chief Executive Officer of Akamai. “Together with AT&T, we share a common goal of developing solutions to maximize the Web and mobile end user experience, while driving down network-related costs and improving network efficiencies. We believe there will be tremendous value to customers in deploying within AT&T’s robust IP network, and in jointly going to market with leading content delivery and cloud infrastructure offerings.” For additional background and context, see Dan Rayburn’s analysis.
TED selects Level 3 for video streaming. Level 3 Communications (LVLT) announced it has signed an agreement to serve as the primary streaming partner for TED, the global nonprofit devoted to “ideas worth spreading.” The videos, known as TED Talks, just surpassed one billion views, and Level 3 will ensure TED’s international audiences receive an optimized viewing experience through its highly scalable, global media delivery platform. “We are truly honored to help deliver TED’s extensive and continually growing repository of powerful, insightful talks from thought leaders around the world,” said Mark Taylor, vice president of Media and IP Services for Level 3. “TED distributes content globally, making it critically important that they have a high-quality, rapid CDN that can easily scale to handle large amounts of content. We are pleased to support them with our global CDN platform.”
Octoshape and Juniper collaborate. Octoshape announced it has developed an integrated technology showcase with Juniper Networks to provide a foundation for Broadband TV with the scale, quality and cost efficiency of broadcast TV. The showcase is being hosted at Juniper Networks’ New Jersey-based OpenLab, the Junos Center for Innovation, which facilitates a collaborative environment for Juniper’s customers, partners and academia to learn about and develop new network-integrated software applications. The showcase integrates Octoshape’s Infinite HD-M Federated Multicast platform with Juniper Networks MX Series 3D Universal Edge Routers for both native multicast and Automatic Multicast Tunneling (AMT). The HD-M solution for high definition Internet video has been in production since April 2012, and now utilizes the first commercially available AMT relay from Juniper. “Octoshape’s advanced video distribution technology provides high definition Internet video regardless of the geographic location, connectivity or network conditions of the viewer. Combined with Juniper’s AMT technology on its MX Series routers, it enables us to transform the quality and economics of Internet-based video,” said Mike Bushong, senior director product line management, Junos, Juniper Networks.
5:35p
Video: CoreSite Focuses on Cloud-Ready Data Centers
At the fall Gartner Data Center Conference in Las Vegas, Data Center Knowledge interviewed Jarrett Appleby, the Chief Operating Officer of CoreSite Realty Corporation (COR). CoreSite, a publicly traded company, is a national provider of data center products and interconnection services, with more than 750 customers, including Global 1000 enterprises, communications providers, and cloud and content companies. In this video, Appleby discusses CoreSite’s move to leverage direct connects between its data center campuses to support the “old” and “new” worlds, as well as partnering with companies such as Amazon Web Services (AWS), RiverMeadow Software and Violin Memory. These partnerships enable carriers and cloud service providers within the CoreSite ecosystem to deliver high-performance cloud on-boarding, disaster recovery and storage services over their Infrastructure as a Service (IaaS) platforms. The video runs about 4 minutes.
For additional video on data centers, check out our DCK video archive and the Data Center Videos channel on YouTube.
6:07p
Why Does Gmail Go Down?
Don’t worry, your Gmail hasn’t evaporated. In worst-case scenarios, Google can restore lost Gmail messages from huge tape libraries like this one, as the company did in an outage last year. (Photo: Connie Zhou for Google)
We’ve written many times about the breadth of Google’s data center infrastructure and its focus on reliability. So how does a widely-used app like Gmail go down, as it has today? There have been a number of Gmail outages over the years, usually involving software updates or networking issues – or, in some cases, a software update causing a networking issue.
The reports of a Gmail outage are widespread, but don’t appear to be uniform. Some users are able to access their Gmail boxes (as I just did). Google is acknowledging reports of issues, but not really confirming them yet. “We’re investigating reports of an issue with Google Mail,” the company said on its status dashboard. “We will provide more information shortly.”
UPDATE: As of 1:10 p.m. Eastern, Google says Gmail is back up. “The problem with Google Mail should be resolved. We apologize for the inconvenience and thank you for your patience and continued support. Please rest assured that system reliability is a top priority at Google, and we are making continuous improvements to make our systems better.”
On at least three occasions, Gmail downtime has been traced back to software updates in which bugs triggered unexpected consequences. A pair of outages in 2009 involved routine maintenance in which bugs caused imbalances in traffic patterns between data centers, causing some of the company’s legendary large pipes to become clogged with traffic. That was the case in February 2009, when a software update overloaded some of Google’s European network infrastructure, causing cascading outages at its data centers in the region that took about an hour to get under control.
In Sept. 2009, Google underestimated the impact of a software update on traffic flow between network equipment, overloading key routers. One element of that outage may offer clues to today’s issues. In that event, the Gmail web interface was unavailable, even as access to IMAP and POP continued to work – which is also being reported with today’s issues. It turns out the web and IMAP/POP traffic uses different routers. In the Sept. 2009 outage, Google addressed the problem by throwing more hardware at it, adding routers until the situation stabilized.
Despite the sophistication of Google’s networks, updates sometimes bring surprises.
“Configuration issues and rate of change play a pretty significant role in many outages at Google,” Google data center exec Urs Holzle told DCK in a 2009 interview. “We’re constantly building and re-building systems, so a trivial design decision six months or a year ago may combine with two or three new features to put unexpected load on a previously-reliable component. Growth is also a major issue – someone once likened the process of upgrading our core websearch infrastructure to ‘changing the tires on a car while you’re going at 60 down the freeway.’ Very rarely, the systems designed to route around outages actually cause outages themselves.”
But don’t worry that Gmail might lose your data. In addition to storing multiple copies of customer data on disk-based storage, Google also backs up your data to huge tape libraries within its data centers. The company restored some customer data from tape in a 2011 outage, also caused by a software bug.