Data Center Knowledge | News and analysis for the data center industry
Tuesday, June 3rd, 2014
12:00p
Facebook Testing Broadcom’s Open Compute Switches in Production
It has been a year since Facebook announced that its Open Compute Project had an initiative focused on defining a network switch that could be used with a variety of operating systems, so that data center operators would not get locked into using a single vendor’s software once they bought that vendor’s hardware.
Facebook’s wish to disaggregate networking hardware from networking software has now been granted. Two switch designs (one by Mellanox and the other by Broadcom) were submitted to Open Compute for approval, and Facebook is already testing a handful of the Broadcom boxes in production in its data centers, Najam Ahmad, director of network engineering at Facebook, said.
Facebook started the Open Compute Project in 2011 as an open source data center and hardware design effort. OCP has since grown into an active ecosystem of vendors and end users focused on web-scale data center infrastructure.
Customizing throughout the stack
The beauty of an OCP switch is that Facebook will soon be able to deploy network hardware made by a variety of vendors with its own network management software. “The key idea was, if we actually disaggregate, we can mix and match,” Ahmad said. “Buy some and make some. We don’t have to buy a complete vertically integrated solution at this point. That’s really why we’re driving it.”
Given its substantial in-house engineering capabilities, Facebook wants the flexibility to choose between different vendors’ solutions and its own homegrown technology across its entire stack. This allows it to optimize the whole system for its applications and to win on price from competition among vendors. It already uses its own servers and storage arrays, both available as open source designs through OCP, and network gear has been the remaining piece of the puzzle.
The test switches currently running in production are based on Broadcom’s design, but that does not mean hardware from Mellanox, Big Switch Networks or other vendors will not also be deployed in the future. “We expect a lot more switches to come through the OCP pipeline in that manner,” Ahmad said.
Both Broadcom and Mellanox designs are close to being approved as the official OCP designs, which will mean anybody will be able to manufacture and sell them.
Any OS Facebook’s heart desires
Facebook can use its home-brewed network management software on OCP switches because of one key piece of technology: the Open Network Install Environment (ONIE), an open source boot loader. A switch with ONIE boots up, then finds and loads whatever operating system is available on the network.
The ONIE project was founded by Broadcom, Mellanox, Big Switch, Agema, EdgeCore, Penguin Computing, Quanta and Cumulus Networks. The software was contributed to the Open Compute Project in November of 2013.
SDN for path selection at the edge
Facebook’s network operating system is a Linux variant. The company has a Software Defined Network controller for centralized network management. “We’re very big believers in SDN,” Ahmad said.
One example of where SDN helps is selecting the optimal network path for data at the edges of the network around the globe. The primary protocol Facebook uses at the edge is BGP, which is good for setting up sessions, path discovery and policy implementation, but not very good at path selection. BGP selects the shortest path and uses it without considering capacity or congestion, Ahmad explained. Facebook’s SDN controller looks at the paths BGP discovers and selects the best one using an algorithm that also takes into consideration the state of the network, ensuring the most efficient content delivery.
The network is not making routing decisions at the edge on its own. The decisions are instead made by a central controller and pushed back into the fabric. As a result, Facebook has been able to increase utilization of its network resources to more than 90 percent, while running the application without any packet backlog, Ahmad said.
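Facebook has not published the controller’s algorithm, but the general idea (choosing among BGP-discovered paths by filtering on spare capacity and then preferring the least-utilized candidate rather than simply the shortest one) can be sketched as follows. All class names, fields and numbers here are illustrative assumptions, not Facebook’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Path:
    """A candidate path discovered by BGP (illustrative model)."""
    name: str
    hop_count: int
    capacity_gbps: float
    current_load_gbps: float

    @property
    def utilization(self) -> float:
        return self.current_load_gbps / self.capacity_gbps

def select_path(paths, demand_gbps):
    """Pick a path for new demand.

    Plain BGP would use the shortest path regardless of congestion; this
    sketch drops paths that cannot absorb the demand, then prefers the
    least-utilized of what remains, breaking ties on hop count.
    """
    feasible = [p for p in paths
                if p.current_load_gbps + demand_gbps <= p.capacity_gbps]
    if not feasible:
        return None  # a real controller would shed or reroute load here
    return min(feasible, key=lambda p: (p.utilization, p.hop_count))

# The shortest path is nearly full, so traffic is steered onto a longer
# but lightly loaded alternative.
paths = [
    Path("edge-A", hop_count=2, capacity_gbps=100, current_load_gbps=92),
    Path("edge-B", hop_count=3, capacity_gbps=100, current_load_gbps=40),
]
print(select_path(paths, demand_gbps=10).name)  # -> edge-B
```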
SDN for bulk Hadoop data transfer
Another example of SDN implementation at Facebook is management of bulk data transfer between data centers. The company’s Hadoop system lives across multiple facilities, which means a lot of traffic is traveling between data centers just for Hadoop. Such transfers, which often involve several terabytes of data, cause substantial congestion in different parts of the network. “You can see congestion for hours at a time,” Ahmad said.
Facebook’s bulk traffic management system enables applications to register what data they need to copy where and over what period of time. It then automatically identifies paths on the network with available capacity and uses that capacity to transfer the requested data. It essentially reshapes inter-data center traffic to avoid congestion that can affect performance of other applications.
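The article does not describe the system’s interface, but conceptually it behaves like a reservation scheduler: an application declares a volume, endpoints and a deadline, and the system fits the copy onto a link with spare capacity. The sketch below is a rough, hypothetical illustration of that idea; the class and link names are invented for the example.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Link:
    """An inter-data center link with spare capacity (illustrative model)."""
    name: str
    spare_gbps: float
    reservations: List[Tuple["TransferRequest", float]] = field(default_factory=list)

@dataclass
class TransferRequest:
    src: str
    dst: str
    terabytes: float
    deadline_hours: float

def required_gbps(req: TransferRequest) -> float:
    """Average rate needed to finish the copy before its deadline."""
    gigabits = req.terabytes * 1000 * 8            # TB -> Gb (decimal units)
    return gigabits / (req.deadline_hours * 3600)

def schedule(req: TransferRequest, links: List[Link]) -> Optional[Link]:
    """Reserve capacity on the first link that can absorb the request."""
    rate = required_gbps(req)
    for link in links:
        if link.spare_gbps >= rate:
            link.spare_gbps -= rate
            link.reservations.append((req, rate))
            return link
    return None  # no spare capacity: defer the copy or stretch its deadline

links = [Link("dc1-dc2-primary", spare_gbps=2.0), Link("dc1-dc2-alt", spare_gbps=40.0)]
req = TransferRequest("dc1", "dc2", terabytes=50, deadline_hours=6)
print(schedule(req, links).name)  # -> dc1-dc2-alt (~18.5 Gbps reserved)
```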
12:30p
Humidity Control in Data Centers Using Air Side Economizers
Robert F. Sty, PE, SCPM, LEED AP, is Principal of SmithGroupJJR’s Technologies Studio in Phoenix, AZ, and focuses on the architecture and engineering design of mission-critical facilities. Robert is on LinkedIn.
Air side economizer systems, or “free air-side cooling” as the approach is known, have gained popularity as energy efficiency has become a major factor in data center operation. When outdoor conditions are appropriate, outside air is introduced into the data center for cooling, reducing the number of hours in which mechanical cooling is required. This strategy has led to an overall reduction in energy use and in Power Usage Effectiveness (PUE) ratios.
In 2011, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) expanded its recommended and allowable temperature and humidity ranges for server inlet conditions (see Table 1 below).
[Table 1: ASHRAE recommended and allowable server inlet temperature and humidity ranges]
Introducing outside air into the data center in certain climates can offer significant energy savings, but it adds another set of variables to the control strategy that maintains the recommended set points at the server inlet. Air side economizer cycles cannot be driven by dry bulb temperature alone. The dry bulb temperature may be suitable for full or partial economization, yet the amount of moisture in the air may still require additional conditioning through humidification or dehumidification, which could offset potential energy savings and increase operational expenses.
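A minimal sketch of that decision logic might look like the following. The thresholds approximate the ASHRAE recommended envelope discussed later in this piece (roughly 64.4-80.6°F dry bulb and a 41.9-59°F dew point window); a real sequence of operations would be considerably more nuanced, and the mode names are invented for the example.

```python
def economizer_mode(outdoor_db_f: float, outdoor_dp_f: float) -> str:
    """Decide how to condition outside air, based on dry bulb AND dew point.

    Thresholds approximate the ASHRAE recommended envelope cited in this
    article; a real control sequence would be far more nuanced.
    """
    DB_MIN, DB_MAX = 64.4, 80.6   # deg F, recommended server inlet dry bulb
    DP_MIN, DP_MAX = 41.9, 59.0   # deg F, recommended dew point window

    if outdoor_dp_f > DP_MAX:
        return "dehumidify (or revert to mechanical cooling)"
    if outdoor_dp_f < DP_MIN:
        return "humidify before supplying outside air"
    if outdoor_db_f > DB_MAX:
        return "partial economizer plus mechanical trim cooling"
    if outdoor_db_f < DB_MIN:
        return "full economizer, mix with return air to warm the supply"
    return "full economizer, no additional conditioning"

# A dry-bulb-only strategy would count both of these as free-cooling hours:
print(economizer_mode(70.0, 52.0))  # in the moisture window: free cooling
print(economizer_mode(70.0, 35.0))  # too dry: humidification energy needed
```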
The Issue: Relative Humidity or Dew Point?
Should the data center mechanical systems respond to Relative Humidity, or should Dew Point temperature sensors be used? There is little argument that Dew Point sensors give a more accurate picture of the moisture content of the air stream, but to many it is unclear just why that is. Relative Humidity (RH) is just that: relative. It is the ratio of the amount of moisture in the air to the maximum amount the air could hold at a given temperature and pressure, and it is typically expressed as an easy-to-understand percentage.
If the RH percentage rises while the temperature is held constant, then the amount of moisture in the air has indeed increased. However, if the temperature increases or decreases, the RH percentage can also increase or decrease without any moisture being added or removed. The RH value is relative to the temperature of the air and changes with it.
The Dew Point temperature (DP) is a measure of the actual amount of moisture in the airstream. Should the temperature of the air fall below the dew point, condensate will form and collect on surfaces. In sensible heat transfer (processes staying above the dew point temperature), the amount of moisture in the air is neither increased nor decreased. This is exactly what happens as air passes from the inlet to the outlet of a server cabinet: the air temperature increases, but the amount of moisture in the air stays the same, so the dew point temperature stays the same.
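The relationship is easy to see numerically. The sketch below uses the Magnus approximation, a standard psychrometric formula that is not part of the article, to convert between temperature, RH and dew point: heating the same air across a server leaves the dew point unchanged while the RH reading drops sharply.

```python
import math

# Magnus approximation (commonly used coefficients, adequate for typical
# data center conditions; this is a textbook formula, not from the article).
A, B = 17.62, 243.12  # dimensionless, deg C

def saturation_vapor_pressure(t_c: float) -> float:
    """Saturation vapor pressure in hPa at temperature t_c (deg C)."""
    return 6.112 * math.exp(A * t_c / (B + t_c))

def dew_point(t_c: float, rh_pct: float) -> float:
    """Dew point (deg C) from dry bulb temperature and relative humidity."""
    gamma = math.log(rh_pct / 100.0) + A * t_c / (B + t_c)
    return B * gamma / (A - gamma)

def relative_humidity(t_c: float, dp_c: float) -> float:
    """RH (%) of air at t_c whose moisture content is fixed by dew point dp_c."""
    return 100.0 * saturation_vapor_pressure(dp_c) / saturation_vapor_pressure(t_c)

# Sensible heating across a server: the moisture content (dew point) is
# unchanged, but the *relative* humidity reading falls.
inlet_t, inlet_rh = 22.0, 50.0           # ~72 F, 50% RH in the cold aisle
dp = dew_point(inlet_t, inlet_rh)        # ~11 C (~52 F)
outlet_rh = relative_humidity(35.0, dp)  # ~95 F in the hot aisle
print(f"dew point: {dp:.1f} C, outlet RH: {outlet_rh:.0f}%")  # ~11.1 C, ~23%
```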
[Table 2: percentage of annual hours in each conditioning mode under DP control versus RH control, for five cities]
Location, Location, Location
What effect does the type of control have on data center operation? Table 2 above highlights the percentage of operating hours spent in various conditioning modes for five different cities. The DP Control column shows a strategy that controls to a dew point temperature window spanning the full range of recommended ASHRAE conditions (from a 41.9°F dew point up to 60 percent RH or a 59°F dew point). The RH Control column shows a strategy that also stays within the recommended window but maintains a tighter RH level in the data center, in the 40-50 percent range many operators feel comfortable with. Both strategies assume a data center utilizing full and partial air side economization.
In each case, the number of hours per year spent in both humidification and dehumidification modes is increased by controlling to a specific RH range instead of a set dew point temperature range. Over the life of the data center, controlling to a tight RH window can greatly increase the number of hours of required humidification and dehumidification cycles, which can lead to a large increase in operational expenses for energy and water treatment.
So is controlling humidity through RH sensors instead of DP sensors wrong? Not necessarily. Both strategies can control supply air to the window of server inlet conditions recommended by ASHRAE. It is important to note that, per ASHRAE, the risk of electrostatic discharge (ESD) is driven more by the absolute humidity level (as indicated by the dew point temperature) than by the relative humidity of the air.
Also, controlling to a tight RH band can increase the hours of operation of the moisture control systems. By understanding the impact of each control strategy, data center managers and operators can select the mode of control that best suits their needs.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
12:30p
Foxconn Upgrading Data Center Containers With Solar Panels
Taiwanese electronics manufacturing giant Foxconn announced plans to design containerized data centers that come with renewable energy generation capabilities, such as photovoltaic panels.
While famous for manufacturing consumer electronics for U.S. companies – Apple’s iPhone and iPad and Amazon’s Kindle are the most prominent examples – Foxconn, which operates massive manufacturing plants around the world, has been expanding its business in the data center market.
It produces HP’s containerized data centers called EcoPod. In April, it announced a deal with HP to co-develop and manufacture servers targeted at the so-called “hyper-scale” market. Hyper-scale customers are companies like Facebook, Amazon, Google and Yahoo, which operate huge data centers globally and buy massive amounts of custom-designed servers.
Also in April, Foxconn announced that 21Vianet, a data center service provider in China, had retained it to build data centers. When that announcement was made, a Foxconn official said the company was going through a transformation aimed at leveraging its technology development and manufacturing experience.
Foxconn now also reportedly has its own containerized data center product line, which it is upgrading with renewable energy sources, according to a report by CIOL, an Indian business technology publication.
Containerized data centers, also referred to as modular data centers, are used for rapid deployment of IT infrastructure. Customers deploy them when they need to quickly expand existing data center capacity or to add capacity temporarily, near oil fields or in areas of military operations, for example.
1:00p
HP Launches Big Data-Based IT Service Management Solutions
HP has introduced service management offerings that leverage big data generated in customer IT ecosystems. The announcement included Service Anywhere, a Software-as-a-Service service desk offering, and Propel, a solution for building enterprise service catalogs that enables IT organizations to deliver and broker traditional, cloud and hybrid IT services and to address point-to-point integration issues.
Machine learning for help desk
Service Anywhere is a big data service management SaaS, which aggregates data from multiple sources, such as social media and machine data, to automate, deliver and assure IT operations.
“Here at HP we’ve been doing this a long time; we lived in this world,” said Tony Sumpster, vice president and general manager of service portfolio management at HP Software. “Enterprises silo incidents and knowledge. They’re not shared. All knowledge in context is valuable. We’re suggesting solutions to what’s coming in and increasing IT agent productivity.”
Service Anywhere includes Autonomy IDOL (Intelligent Data Operating Layer) and Vertica (a real-time analytics platform). “It helps users become self-sufficient and allows IT to proactively tackle issues through big data analytics and unique context relevant knowledge,” said Sumpster. Through gathering information from previous requests and solutions, it eliminates many tier-one service requests, freeing up IT to handle more important, pressing projects.
Propelling IT into service brokerage
Propel is partially a response to shadow IT, Sumpster said. By offering a central services catalog and generating information about the usage of those services, it keeps an enterprise from procuring services in individual silos within the organization, instead allowing it to price and bundle those services effectively.
“Going out and procuring SaaS services is perfect from a line-of-business perspective,” Sumpster said. “From a corporation’s perspective, there are different responsibilities. It has governance; it has risk management, procurement goals. Central IT tries to provide the overarching support. It provides them a way to do that.”
A Propel catalog captures all IT demand and provides a way to integrate fulfillment engines that fit behind those requests, he explained. “It’s an internal marketplace … and it looks like Amazon or other consumer offerings and can be themed in many different ways, by company, brand, sub-brand.”
1:30p
Heartbleed Happened – What You Can Do to Stay Proactive
It was a rough couple of weeks for cloud, networking, security and application professionals. The Heartbleed bug impacted everyone, from vendors such as Cisco and Juniper to a variety of online services. As the dust settles, security engineers must analyze what happened and how to make sure something of this nature doesn’t happen again.
The issue was that so many different types of services were using one very popular cryptography library. OpenSSL is still a widely adopted security tool, and as we all change or update our passwords and read the numerous software releases, it is worth knowing what some shops did right during the fallout caused by the vulnerability. Several organizations were prepared, to some extent, to deal with this type of issue.
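Before looking at those practices, a quick, concrete first check is simply whether a host is running an affected OpenSSL build: versions 1.0.1 through 1.0.1f were vulnerable, 1.0.1g fixed the bug, and the 0.9.8 and 1.0.0 branches were never affected. The sketch below is a rough local heuristic only, since distributions often backport fixes without changing the version string.

```python
import re
import subprocess

# OpenSSL 1.0.1 through 1.0.1f shipped the Heartbleed bug; 1.0.1g fixed it,
# and the 0.9.8 / 1.0.0 branches were never affected.
VULNERABLE = re.compile(r"1\.0\.1($|[a-f]\b)")

def local_openssl_is_vulnerable() -> bool:
    """Check the locally installed OpenSSL version string.

    This is only a heuristic: distro vendors often backport fixes without
    bumping the upstream version number.
    """
    out = subprocess.run(["openssl", "version"],
                         capture_output=True, text=True, check=True).stdout
    # Typical output: "OpenSSL 1.0.1f 6 Jan 2014"
    version = out.split()[1]
    return bool(VULNERABLE.match(version))

if __name__ == "__main__":
    print("version string in the affected range" if local_openssl_is_vulnerable()
          else "version string not in the affected range")
```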
Here’s what they did:
- Effective security policies. Good security policies, user controls and general infrastructure best practices can help control or mitigate a situation very quickly. Here’s the important piece to understand: even though Heartbleed was a software vulnerability, security policies must span the physical aspect as well. Numerous breaches happen because of unlocked doors or poorly monitored systems. Remember, when creating a good security policy, take into consideration your entire infrastructure. This will span everything from passwords to locked and monitored server cabinets.
- Proactive monitoring across the entire platform. Many organizations will have monitoring set up on their local as well as cloud-based environments. Physical appliances monitor traffic and report malicious users. New types of monitoring systems are able to aggregate firewalls, virtual services and even cloud components. There are many new aspects to consider within the modern infrastructure. The logical layer is only continuing to grow, and monitoring it is becoming even more critical. With that in mind, it is important to ask yourself a few visibility questions about your cloud and data center platform. How well can you see data traverse your cloud? How secure is your data at rest and in motion? Can you effectively monitor traffic extending out to your end-users? Proactive monitoring can help find spikes, anomalies and even security holes in your environment.
- Using next-gen security services. This is where it gets interesting. There are powerful physical appliances that can sit at the edge or internally within an environment. One security professional working at a large enterprise told me how his organization was impacted by Heartbleed. Although it had vulnerable services, its IPS/IDS solution spotted the bots and alerted the engineers to shut down the services being affected. They still released a bulletin to alert their users, but the ramifications were much smaller. Virtual security appliances can be application firewalls, virtual firewalls or just security services running within your infrastructure. These powerful agents can create a very good proactive system capable of advanced security monitoring.
- Logging and event correlation. The expanse of the cloud has created a bit of a logging issue. Organizations must build event correlation and security logging into their security planning methodologies. Powerful correlation engines can surface issues before they become incidents and alert administrators to change or update specific settings (a minimal correlation sketch follows this list). Here’s the other reality: if a breach happens, this will be your best piece of trackable documentation. In the case of Heartbleed, many organizations were able to see the source of a bot or tracking tool on their network. Not only were they able to block the sources, they were able to quickly restrict access to corporate resources.
- Vulnerability testing. How well is your system running? How secure are your virtual servers? What about your physical infrastructure? When was the last time you ran an application vulnerability test? For some it’s an easy answer, while for others it’s a bit more eye-opening. The only way to stay ahead of the bad guys is to find problems before they do. The process is helped by all the technologies mentioned earlier. However, finding faults in scripts, ports, security updates and even user actions can proactively help you fix problems before they become breaches. Large organizations have a healthy vulnerability testing practice: some test on a fixed cycle, others run random ongoing tests, and others include specific application and data testing protocols. Regardless of the scenario, you’ll be much better off finding the issue before anyone else does.
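As a rough illustration of the event correlation point above, the sketch below buckets parsed log events by time window and flags source IPs whose request counts spike. The tuple format, window size and threshold are invented for the example; a real SIEM correlates far richer signals across many more sources.

```python
from collections import Counter, defaultdict
from datetime import datetime

def correlate_events(events, window_minutes=5, threshold=100):
    """Flag source IPs whose request count in any window exceeds a threshold.

    `events` is an iterable of (timestamp, source_ip, service) tuples already
    parsed out of firewall, web server and IDS logs.
    """
    buckets = defaultdict(Counter)      # window start -> Counter of source IPs
    services_seen = defaultdict(set)    # source IP -> services it touched

    for ts, src, service in events:
        window = ts.replace(minute=(ts.minute // window_minutes) * window_minutes,
                            second=0, microsecond=0)
        buckets[window][src] += 1
        services_seen[src].add(service)

    alerts = []
    for window, counts in buckets.items():
        for src, n in counts.items():
            if n >= threshold:
                alerts.append({
                    "window": window.isoformat(),
                    "source": src,
                    "requests": n,
                    # A source probing many services at once is a stronger signal.
                    "services_touched": sorted(services_seen[src]),
                })
    return alerts

# Usage: feed parsed log lines in, then block or alert on what comes out.
events = [(datetime(2014, 4, 8, 10, 0, i % 60), "203.0.113.9", "https")
          for i in range(250)]
print(correlate_events(events)[0]["source"])  # -> 203.0.113.9
```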
Ultimately, there is no silver bullet for every security issue out there. New types of advanced persistent threats are taking aim at the modern data center. Remember, as cloud computing adoption and IT consumerization continue, there will be more data targets for the bad guys to go after. Staying proactive means continuously testing your own systems and ensuring effective infrastructure monitoring. Regardless of the industry, ensuring data security and integrity is a critical piece of the overall IT infrastructure plan.
4:44p
Telefonica, Red Hat and Intel Partner on Open Source NFV for Telcos
Intel, Red Hat and Telefonica announced plans to create a virtual infrastructure management platform based on open source software running on standard Intel-based servers.
The collaboration will be part of Telefonica’s Network Functions Virtualization Reference Lab, which hopes to help an ecosystem of partners and network equipment providers test and develop virtual network functions along with upper service orchestration layers. Telefonica is a part of the European Telecommunications Standards Institute’s NFV group, which is working to define a common framework architecture for network services based on standard virtualized commercial off-the-shelf hardware running the network functions in software.
The open source solution will be based on Intel Xeon E5-2600 V2 processors, Red Hat Enterprise Linux, a Kernel-based Virtual Machine (KVM) hypervisor, Red Hat’s OpenStack platform, and OpenFlow-enabled switching equipment. Earlier this year Telefonica signed a co-innovation agreement with Alcatel-Lucent to define NFV-evolved architectures and identify and test different NFV scenarios and environments.
In support of the lab, Telefónica, Red Hat and Intel will each commit engineering and testing resources and collaborate with partners and the open source community to enable these technologies to achieve the levels of performance and functionality NFV workloads require to make the vision of an “Open Digital Telco” a reality.
“For NFV we need to avoid closed and non-interoperable environments, which would hamper its widespread adoption,” said Enrique Algaba, network innovation and virtualization director at Telefónica. “For that purpose, we have launched the Network Functions Virtualization Reference Lab, where Telefónica, along with key players from the industry, is working to enhance baseline virtualization technologies from the open source community and contributing them back to the upstream community to avoid technological fragmentation.”
Cyan also announced that it is collaborating with Telefónica and Red Hat to develop an NFV architecture. Cyan is delivering the NFV orchestrator that will make use of enhancements being made to OpenStack developed by Red Hat in close collaboration with Telefónica. Using the Red Hat Enterprise Linux OpenStack Platform, Cyan’s SDN platform Blue Planet will be designed to orchestrate the deterministic placement of virtual network functions in the server infrastructure.
“NFV offers communications service providers the ability to reduce infrastructure cost via simplified network design and management, while increasing agility to achieve faster time to revenue,” said Radhesh Balakrishnan, general manager for virtualization and OpenStack at Red Hat. “Along with OpenStack, NFV truly has the ability to revolutionize the way CSPs build and operate their networks.”
5:23p
Big Data Infrastructure Provider GoGrid Joins Equinix’s Cloud Bazaar
Equinix has landed another cloud for its cloud exchange: GoGrid, a provider of purpose-built hosted infrastructure for big data applications. GoGrid will connect its Cloud Bridge solution to the Equinix Cloud Exchange in Ashburn, Virginia, with future expansion plans in Europe.
Equinix has always placed emphasis on robust interconnection options at its data centers, and Cloud Exchange is a cloud-focused variant of that strategy. It allows seamless, direct, on-demand access to multiple cloud providers and networks around the globe. GoGrid’s big data cloud, called Open Data Services, is a way to deploy multi-server big data clusters across virtual machines at the click of a button.
Equinix has been focused on expanding connection capabilities with cloud service providers because it makes colocating within its facilities more alluring to enterprises. Transferring and analyzing massive amounts of data securely and quickly is critical and becoming more routine for many companies. This is better done over private connections rather than over the public Internet.
It makes sense for GoGrid to join the Cloud Exchange — where it will be in the company of many other cloud providers, including Amazon Web Services and Microsoft Azure — because its Open Data Services are better suited for such secure, dedicated network connections. Joining it also enhances its overall market reach. GoGrid has been a tenant in several Equinix facilities, so this is a fresh spin on an existing relationship.
“As a company that specializes in cloud infrastructure and Open Data Services, we understand the many intricacies that are incorporated into developing successful cloud strategies,” said Mark Worsey, chief operating officer at GoGrid. “The Equinix Cloud Exchange simplifies the process of establishing hybrid cloud deployments by connecting enterprises and services providers – delivering the many benefits associated with cloud environments and private access.”
The Equinix Cloud Exchange is currently available in 13 markets globally – Silicon Valley, Washington D.C., New York, Toronto, Seattle, Los Angeles, Atlanta, Dallas, Chicago, London, Amsterdam, Frankfurt and Paris – with plans to expand to 19 markets by the end of 2014.
6:12p
McDermott Bets on HANA-Powered Cloud to Make SAP Simpler
Is SAP too complex? The CEO keynote message at SAP’s Sapphire Now conference this morning was that the German business software giant is focused on running simple.
The company has been criticized in the past for being too complex. In his keynote, CEO Bill McDermott committed to simplifying the way SAP does things. “There’s a huge chip on my shoulder,” he said. “We chose to fight complexity because no company understands the problem [of] complexity like SAP.”
Cloud is the path SAP has chosen to address the problem of complexity, and the cornerstone of its cloud strategy is HANA, the company’s in-memory computing system.
CEOs want the integrated enterprise, McDermott said. “Eighty percent of companies that move to the cloud save money [and] 2014 is the first year the majority of new workloads will be performed in the cloud. Transition is happening rapidly and SAP is with you every step of the way.”
SAP cloud powered by HANA is running in 20 data centers and now has 36 million users, running a variety of applications. “You can run your entire enterprise on HANA in the cloud,” said McDermott. HANA integrates all of SAP’s solutions in the cloud.
“HANA removes redundancy, reduces complexity and simplifies the IT stack,” he said. But it is about more than just a different way to deploy applications businesses have been using in the past. “This enables a new class of applications. We can use HANA to simulate and predict actions of consequence before they actually happen.”
Many of those applications are developed by companies other than SAP that use HANA. There are more than 1,500 companies in the HANA startup program, McDermott said. Synerscope, for example, has an application that analyzes transactions looking for fraud and does so on HANA.
SAP itself is a 70,000-person company running its entire business on its enterprise cloud. It has reduced its data consumption from 11 terabytes to two in the transition, which McDermott largely attributed to HANA.
Free Fiori apps, eBay partnership
The CEO dedicated some time in his keynote to announce that Fiori, SAP’s collection of about 300 role-based applications for user productivity and personalization, is now included for free in maintenance contracts. Customers who previously paid for it will receive a credit (not money back). Fiori apps are available for customers using SAP Business Suite on any database and SAP Business Suite powered by SAP HANA.
SAP also announced a partnership with eBay and Ariba Network to build a cloud-based enterprise service procurement catalog. It allows setting spending restrictions and creating a system for service approvals. It is an effort to shut down “maverick buying.” SAP is offering this service free for the next 30 days. “We wanted to partner with a company that understands marketplaces,” said McDermott about the choice to partner with eBay.
The company also announced a number of industry-specific cloud services. Check them out on our sister website theWHIR.