Data Center Knowledge | News and analysis for the data center industry

Wednesday, February 10th, 2016

    4:00p
    Look Mom: No Hands! Robotic Data Center Switch Automates Physical Connections

    There has been slow but steady progress toward automating tasks that IT and network managers have long done manually. Automating network management has been the big frontier in that movement in recent years, driven primarily by network virtualization software that creates virtual networks which can be reconfigured on the fly. But those virtual networks still run on physical switches, servers, and routers linked by physical cables, and plugging and unplugging network cables has been the physical limit to how far automatic network configuration can go.

    But one Silicon Valley company thinks that ceiling can be broken. Wave2Wave builds robotic network switches for data centers and telecommunications facilities that automate physical network configuration, plugging and unplugging physical ports on command from software, so physical data center network connections can be managed quickly and remotely.

    Today, the company announced major additions to its line of robotic switches, introducing 500-port, 1,000-port, and 2,000-port models. Until today, it had been shipping switches with 360 ports. The 500-port switch is already shipping, while the higher-capacity models are expected to hit the market in the second half of the year.

    Besides providing higher port counts, the new switches come in 19-inch chassis, the standard for data center racks. The previous-generation switch came in a telco-standard 23-inch chassis.

    Essentially, a Wave2Wave ROME switch (ROME stands for Robotic Optical Management Engine) acts as a big, 10-rack-unit patch board. All networked devices in the data center plug into the board, and software takes it from there. “All roads lead to Rome” is a proverb the company’s CEO, David Wang, likes to use.

    “Traditional physical fiber connections through ROME can be made automatically, remotely, and much more quickly, without human intervention,” he says. Thousands of physical connections between devices in a data center are reduced to a few hundred, and there’s no need for patch panels.

    ROME can complement Layer 2 network infrastructure at the spine layer. A connection from the spine to a top-of-rack switch, for example, can be made in seconds. It also offers the added benefit of offloading some work from the spine switch ASICs.

    There are three ways to manage the switch: through an API, a command-line interface, or a graphical user interface.
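
    The company hasn’t published details of that API, so as a purely hypothetical sketch, software driving a robotic patch could look something like the Python below. The host name, endpoint, payload fields, and port numbers are all invented for illustration.

        # Hypothetical sketch: the article confirms ROME exposes an API, a CLI,
        # and a GUI, but documents none of them. The endpoint and payload names
        # below are invented for illustration only.
        import requests

        ROME_HOST = "https://rome.example.internal"  # assumed switch address

        def connect_ports(src_port: int, dst_port: int) -> None:
            """Ask the robotic switch to physically patch two fiber ports."""
            resp = requests.post(
                f"{ROME_HOST}/api/v1/connections",
                json={"source": src_port, "destination": dst_port},
                timeout=30,
            )
            resp.raise_for_status()

        def disconnect_ports(src_port: int, dst_port: int) -> None:
            """Ask the switch to physically unplug an existing connection."""
            resp = requests.delete(
                f"{ROME_HOST}/api/v1/connections",
                json={"source": src_port, "destination": dst_port},
                timeout=30,
            )
            resp.raise_for_status()

        # Example: patch a spine uplink (port 12) to a top-of-rack switch
        # (port 347) without anyone touching a cable.
        connect_ports(12, 347)

    The point is less the specific calls than the workflow: a physical re-cabling that once required a technician on site becomes a request an orchestration system can issue in seconds.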

    Wave2Wave has been around since 2003. With headquarters in Milpitas, California, and Dublin, Ireland, offices in Hong Kong, and R&D and manufacturing facilities in China and Israel, it does design, engineering, and manufacturing for customers that operate data centers globally. Its presence in Silicon Valley and its global reach have helped it win a lot of business with some of the world’s biggest Web 2.0 companies and telcos, Wang says, though he adds that he’s unable to share any customer names.

    Many of its product design decisions are informed by existing customer requirements, and several dozen customers have deployed ROME 360 in their data centers. Customers with multi-location data center infrastructure who want to manage it remotely are a big driver for ROME.

    “This is a customer-proven product and technology and has already been deployed for more than the last five years,” Wang says.

    5:29p
    How to Improve Reliability in Data Centers with Cogen Plants

    Christian Mueller is a sales engineer at Minnesota-based MTU Onsite Energy.

    Combined heat and power (CHP), also known as cogeneration, is the simultaneous production of heat and electric power from the same source of fuel. From data centers to universities, interest in CHP systems as a sustainable standby power supply is rising.

    Historically, CHP was reserved for very large installations. For example, waste heat from a coal-fired power plant could be used for greenhouses or large apartment complexes. Today, significantly smaller facilities, such as hospitals, hotels, commercial buildings, and some data centers, are reaping the benefits of utilizing heat that would otherwise be wasted in the production of electricity. Because CHP systems require less fuel than separate heat and power systems, they can reduce operating costs even as energy prices rise. Over the long term, CHP can significantly cut energy expenditures, savings that flow directly to the bottom line—as long as there is a simultaneous need for electric power and heating (or cooling) for most of the year.
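
    To see where the savings come from, here is a back-of-the-envelope sketch in Python. All efficiency figures and demand numbers are illustrative assumptions, not figures from the article.

        # Compare fuel needed by one CHP module versus separate grid power and
        # a gas boiler. All numbers are illustrative assumptions.
        electric_demand_kwh = 1000.0   # electricity required
        heat_demand_kwh = 800.0        # useful heat required

        # Separate systems (assumed typical efficiencies)
        grid_efficiency = 0.35         # fuel-to-wire efficiency of a power plant
        boiler_efficiency = 0.85       # gas boiler efficiency
        fuel_separate = (electric_demand_kwh / grid_efficiency
                         + heat_demand_kwh / boiler_efficiency)

        # CHP: one fuel stream yields both outputs (assumed 40% electric
        # efficiency plus 40% recoverable heat)
        chp_electric_eff = 0.40
        chp_thermal_eff = 0.40
        fuel_chp = max(electric_demand_kwh / chp_electric_eff,
                       heat_demand_kwh / chp_thermal_eff)

        print(f"Separate systems: {fuel_separate:.0f} kWh of fuel")
        print(f"CHP module:       {fuel_chp:.0f} kWh of fuel")
        print(f"Fuel savings:     {1 - fuel_chp / fuel_separate:.0%}")

    With these assumed numbers, the CHP module needs roughly a third less fuel, which is the mechanism behind the operating-cost reduction described above.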

    Read more: Why Combined Heat and Power for Data Centers Makes Sense

    In a conventional data center, all electrical power is supplied by a local utility. If heat is needed, the facility typically has a gas-fired boiler to supply hot water for space heating or process heat. Additionally, separate water heaters running on natural gas or electricity provide hot water to the facility. In contrast, a facility with a properly sized CHP module running on natural gas can supply most of its own electrical and heating loads, cutting grid energy usage and expenditures.

    Defining Uptime

    The concept of uptime is not a new one for data centers. The Uptime Institute was founded in 1993 to set global power design standards while improving reliability and uninterrupted availability—uptime—in data center facilities. Uptime is best described as the opposite of downtime, when a system becomes inoperable for any number of reasons, including malfunction, scheduled maintenance, or an operator’s decision not to run a unit. Unexpected downtime can result in significant loss of business and revenue, and for data centers, critical information is put at risk during downtime.

    To avoid downtime, it is important to first identify its causes.

    Read more: Good Time to Consider Microgrids for Data Centers?

    Causes of Downtime

    Natural Wear and Tear: It should come as no surprise that simple wear and tear can create unexpected downtime. Not much can be done to prevent downtime in these cases, so the first line of defense is preparation. The goal during any instance of downtime should be to return to uptime as quickly as possible, and operator training can be a key factor. As the first person to arrive during a shutdown, the facility operator should have basic troubleshooting knowledge of the system. It is also important to establish a Long Term Service Agreement (LTSA) with a local distributor from the start to ensure a record of parts and their scheduled replacement timing. The fixed cost of an LTSA can also help control a facility’s overhead spending.

    Ambient Conditions: Certain climates create harsher ambient conditions than others. With proper planning, downtime caused by hot or cold temperatures, or by extreme altitude, can be reduced. A CHP system’s design should be customized to the ambient conditions of its site.

    Fuel: CHP modules depend on fuel arriving through a pipeline, and many factors can affect operability. In addition to ensuring optimal pressure, operators must safeguard the composition of the fuel. Impurities like sulfur and siloxanes can corrode the engine if they exceed acceptable limits. To combat this, gas treatment systems are needed to remove contaminants before they reach the engine.

    Demand of Output (Electricity and Heat): Appropriately sizing a CHP system for the specific application has a major impact on reliability. CHP modules are designed to run when needed: 50-100% load is the optimal operating range, and the greatest efficiency is achieved between 80% and 100% load. If the electricity load varies widely, sizing becomes even more critical to cover that range. Careful load matching is key to ensure the unit is neither over- nor under-sized (a simple sizing check is sketched below).

    Peripheral Equipment: Heat recovery, cooling, ventilation, electrical, and control systems are examples of peripheral equipment that help a CHP system operate. Operability of a CHP system depends on this complex network of highly engineered modular pieces and components communicating and working together seamlessly. For example, if one piece of equipment is sized incorrectly, the entire CHP unit suffers.

    Scheduled Service and Maintenance: As with any standby power system, CHP modules require regular service and maintenance per the manufacturer’s recommendations. Scheduled parts replacements, visual inspections, hose replacements, filter changes and other routine checks safeguard against downtime. Parts availability is also integral to system maintenance; to maintain uptime, it’s important to keep various levels of spare parts stocked in close proximity to the unit.

    Support from a trained technician with mechanical and electrical systems knowledge is vital. Operators and technicians should undergo specialized training based on manufacturer specifications for a unit’s generator, engine, controls and heat recovery system to be fully prepared for the inevitable.

    Even with capable on-site support, remote access is recommended. Web access is ideal, as it grants the manufacturer access to the control unit from anywhere in the world. With the ability to view key information, such as operating conditions, power output and other historical data, the factory can troubleshoot many issues remotely, helping the system return to uptime quickly.
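
    As a rough illustration of the load-matching point above, the toy Python below estimates how much of the time a candidate CHP rating would spend in its operable (50-100%) and most efficient (80-100%) load bands. The load profile and candidate ratings are invented for illustration.

        # Toy sizing check: fraction of hours a candidate CHP rating spends in
        # the 50-100% (operable) and 80-100% (most efficient) load bands.
        # The profile and ratings are fictitious.
        def load_band_stats(loads_kw, rating_kw):
            """Return fractions of hours in the 50-100% and 80-100% bands."""
            operable = sum(1 for l in loads_kw if 0.5 * rating_kw <= l <= rating_kw)
            sweet = sum(1 for l in loads_kw if 0.8 * rating_kw <= l <= rating_kw)
            n = len(loads_kw)
            return operable / n, sweet / n

        # Fictitious 24-hour electric load profile for a small facility (kW)
        profile = [620, 600, 590, 580, 600, 650, 720, 800, 880, 920, 950, 960,
                   970, 960, 950, 930, 900, 870, 820, 760, 710, 680, 650, 630]

        for rating_kw in (800, 1000, 1200):
            operable, sweet = load_band_stats(profile, rating_kw)
            print(f"{rating_kw} kW unit: {operable:.0%} of hours operable, "
                  f"{sweet:.0%} in the 80-100% sweet spot")

    In this fictitious example, a unit rated near the top of the profile (1,000 kW) stays operable all day and spends half its hours in the high-efficiency band, while an oversized unit spends most of the day below its efficient range.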

    Reliability and continuous uptime are central to every data center. Power disruptions can cost some enterprises more than $1 million a day in lost revenue. Because power reliability is a key factor in the economic viability of a business, ensuring it is imperative for data centers that demand continuous power 24 hours a day, 365 days a year. Preventive planning, contingency planning and a collaborative relationship with the factory and local distributor will help ensure continuous uptime and availability of a CHP system.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    10:17p
    IT Innovators: Navigating the Challenges of Integrating Cloud Technologies into Your Organization


    By WindowsITPro

    Has the cloud cast a shadow on your IT organization? Research firm IDC predicts that by 2016, 11 percent of IT budgets will shift away from traditional in-house IT delivery toward various forms of cloud computing as a new delivery model.

    Tony Savoy, senior vice president and general manager of managed hosting and cloud services at Hostway Services, Inc., recently sat down for a Q&A with IT Innovators to provide readers with insight into navigating the challenges of integrating cloud technologies into their organizations.

    What are your customers’ unique needs in the cloud?

    Customers need to fit their applications to the landing zone that is the right fit for their business, the needs of their applications and the needs of the consumers of those applications. What we’re finding is that across an entire IT shop, there are various types of applications the shop wants to outsource. Some of those applications are a fit for the public cloud; some are a good fit for the private cloud. And essentially, that’s the definition of a hybrid cloud deployment.

    Do customers know which type of cloud services they need?

    More often than not, companies just don’t know. That’s where cloud readiness assessments can help. They help customers look at the applications currently deployed and define which ones are a good fit for the cloud and which ones are not. Then, for the ones that are a good fit, they determine which landing zone is better suited to the application’s needs—it might require a private cloud because of compliance or guaranteed-performance requirements.
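
    The interview describes these assessments only at a high level; the criteria and decision rules in the Python sketch below are invented to show the general shape of such a triage, not Hostway’s actual methodology.

        # Illustrative triage only: the criteria and rules here are invented,
        # not Hostway's methodology.
        def landing_zone(app):
            """Suggest a landing zone for one application from coarse traits."""
            if not app["cloud_ready"]:
                return "keep on-premises (refactor first)"
            if app["compliance_bound"] or app["needs_guaranteed_performance"]:
                return "private cloud"
            return "public cloud"

        apps = [
            {"name": "marketing site", "cloud_ready": True,
             "compliance_bound": False, "needs_guaranteed_performance": False},
            {"name": "payments ledger", "cloud_ready": True,
             "compliance_bound": True, "needs_guaranteed_performance": True},
            {"name": "legacy ERP", "cloud_ready": False,
             "compliance_bound": True, "needs_guaranteed_performance": False},
        ]

        for app in apps:
            print(f'{app["name"]}: {landing_zone(app)}')

    A mix of “public cloud” and “private cloud” answers across one portfolio is, in the terms used above, a hybrid cloud deployment.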

    Why do customers need assistance deploying cloud technologies?

    Many companies don’t have cloud experts on staff. Some are just dabbling in cloud services. They know they need to adopt cloud; they just don’t know to what extent or how to actually leverage it. So they look to consulting companies and hosters to help them get there. Getting to the cloud is one thing, but managing workloads in the cloud is a little different. From a hosting perspective, we need to help support the delivery and sustainment of those applications once they’re in the cloud, because they simply operate differently.

    What are some of the new IT challenges that have emerged?

    One challenge is the concept of shadow IT, where lines of business go around IT because IT is just not nimble enough. Mid-market companies need to figure out how to get their arms around that. Mid-market IT departments have to become more of a service provider for the business, and one way to become more agile and scalable is through the use of software, and through the use of cloud services as well.

    What advice would you give IT professionals dealing with shadow IT?

    Don’t continue to ignore it; embrace it. Resolve those problems. Through the use of cloud, software and automation, you can enforce policy. Embrace users’ requirements for cloud by developing cloud capabilities either internally or by partnering with a provider. If organizations aren’t adopting cloud and investing in software and automation, their competitors are going to pass them by.

    What are some pitfalls you’d warn against in cloud implementations?

    Consider the security constructs. Sometimes things move so fast that you have a hard time ensuring they are secure, and that’s one of the biggest fears with the adoption of cloud. Is the cloud that you built inherently secure? Is the way you’re interacting with and using the cloud secure? That boils down to your processes and procedures, your IT service controls. Are they inherently secure, or do they need to be modified to support the cloud? Treat security as a first-class citizen; don’t try to address it after the fact.

    What have you learned at Hostway that could apply to any IT organization looking to utilize the cloud?

    Ensure that you have a substantial training, education and awareness program for your internal organization. Cloud is a new thing for many people. If you try to build a practice around individuals who don’t understand the technology, you’re only going to get as far as the few people who understand it. Invest broadly in training and education for your staff, and turn them into skilled workers rather than stressed workers who lack the right skill set to perform the job.

    You’re predicting that in 2016, the stigma associated with the cloud will continue to subside. Where do you see this happening?

    If companies don’t have a strategic roadmap to get to the cloud, they’ll be forced to develop one, and 2016 is a pivot point for that. Companies are investing in it, and many companies are all in. There are too many benefits associated with cloud to pass it up. If you’re not thinking cloud, if you don’t have plans for cloud, you’re going to be left behind.

    Christy Peters is a writer and communications consultant based in the San Francisco Bay Area. She holds a BS in journalism and her work covers a variety of technologies including semiconductors, search engines, consumer electronics, test and measurement, and IT software and services. If you have a story you would like profiled, please contact her at christina_peters@comcast.net.

    The IT Innovators series of articles is underwritten by Microsoft, and is editorially independent.

    This first ran at http://windowsitpro.com/it-innovators/it-innovators-navigating-challenges-integrating-cloud-technologies-your-organization

    10:25p
    HPE Introduces Linux Server for Data Analytics and Real-Time Computing


    By The VAR Guy

    Hewlett Packard Enterprise has rolled out a new Linux-based server that it says will help enterprises manage high-performance, large-volume data analytics and real-time processing workloads.

    The platform, called the HPE Integrity MC990 X Server, was announced Tuesday. The company says it was developed in response to growing demand for more efficient and scalable computing power in the data center.

    The MC990 X is a rack-mounted server that features eight sockets, Intel Xeon E7-8800 v3 processors and up to six terabytes of memory. That hardware gives the server the power to handle “large business processing and decision support Linux workloads,” HPE says.

    But hardware specifications are only part of HPE’s pitch. The company also hopes to attract customers to the new server by integrating the offering into its consulting services, which it says can help businesses determine how best to leverage the MC990 X and other hardware to meet data analytics and real-time processing needs.

    The new server may not revolutionize highly scalable data analytics and other high-performance workloads, but it is an incremental step toward making them easier, especially in Linux-based environments. It reflects vendors’ continuing efforts to meet the steady demand from enterprises in this space — as well as to pair hardware with consulting services in order to offer a more integrated package than customers have traditionally received from large OEMs.

    This first ran at http://thevarguy.com/open-source-application-software-companies/hpe-introduces-linux-server-data-analytics-and-real-time-

