Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
Wednesday, December 18th, 2013
| Time | Event |
| 1:00p |
The Role of Robotics in Data Center Automation
The hot aisle of a Google data center. Google has been acquiring robotics firms. Will that lead to a deeper level of data center automation? (Photo: Google)
Today we present part two in Data Center Knowledge’s three-part series on data center automation and the potential role of robotics.
Why is Google buying up robotics companies? No one knows for sure, but the technology titan is known for its intense focus on customization and efficiency in its data centers. Whatever else Google may have up its sleeve in robotic applications, the data center seems a natural opportunity for automation.
We’ve previously discussed how the data center of the future will likely incorporate more robotics-driven technologies. Already, we’re seeing robotics deployed at scale in warehousing, industrial, and manufacturing settings. Data center administrators looking to improve optimization and efficiency are turning to robotics to help address these challenges.
Our previous articles have examined the concept of a “lights-out” data center, and drew comments and thoughts from both ends of the spectrum. Many embraced this new vision for a more efficient data center. Others very clearly wanted it to be known that robotics have no place in the data center. Why not meet in the middle?
Right now, it’s pretty clear that robotics will not (at least not for quite some time) replace the need for unique human interaction within the data center. What robotics can help with is creating automation around repetitious human labor. By freeing up professionals to do bigger and better things, robotics can enable a more automated environment and increase productivity.
Already, robotics, automation, and services around these technologies are helping define the next-generation data center model. Let’s take a look at a couple of quick examples:
- A recent article discusses how IBM is using robotics to plot the temperature patterns in data centers to improve their energy efficiency. As a post on Slashdot points out, IBM is using robots based on iRobot Create, a customizable version of the Roomba vacuum cleaner, to measure temperature and humidity in data centers. The robot looks for cold zones (where cold air may be going to waste instead of being directed to the servers) and hotspots (where the air circulation may be breaking down). IBM is putting the robots to commercial use at partner sites, while EMC is at an early stage on a strikingly similar project.
- Panduit, a leader in Unified Physical Infrastructure, just recently announced the launch of advisory services aimed at assessing, designing, and deploying optimized physical infrastructures for industrial organizations. These services help customers create industrial network systems designs that reduce deployment time, exceed performance requirements, and reduce maintenance and repair costs. Panduit goes on to state that as networks converge, the physical infrastructure becomes even more critical to support the demands of real-time control, data collection, and device configuration. Other data center and infrastructure shops have already begun exploring automation and robotics advisory services and roles as well.
- Another example is Blue Prism. This organization has developed a robotic automation technology to enable business operations teams to configure their own local business process automations and rapidly design, build, test and deploy new business process automations with all the functionality and IT governance required to support enterprise operations. At this point, more than 300 processes have been automated by over 1,000 robots. Already, organizations like Fidelity Investments, Telefonica, University Hospitals Birmingham, and RWE npower have jumped on the robotics train.
- Robotics manufacturers – the big ones – are already looking at ways to place their robotics into a data center. Big robotics makers like FANUC are already developing smaller, smarter, and much faster robots. The idea is to create new lines of data center-ready robotics capable of scaling racks and truly optimizing the data center. The future of the physical robot is very bright. These machines will continue to get smaller, gain new sensors capable of analyzing equipment and computer parts, and learn to negotiate intelligent routes.
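The temperature survey described in the IBM example above can be sketched in a few lines: classify each floor tile's reading against a target temperature band. The tile labels, readings, and band edges below are illustrative assumptions, not IBM's actual algorithm or data (the band loosely follows the ASHRAE-recommended 18–27°C envelope).

```python
# Illustrative sketch of the kind of survey the IBM robots perform.
# Tile labels, readings, and thresholds are made-up examples.

LOW_C = 18.0   # below this: cold air is likely going to waste
HIGH_C = 27.0  # above this: air circulation may be breaking down

def classify_tiles(readings):
    """Map each floor tile to 'cold', 'ok', or 'hot'."""
    zones = {}
    for tile, temp_c in readings.items():
        if temp_c < LOW_C:
            zones[tile] = "cold"
        elif temp_c > HIGH_C:
            zones[tile] = "hot"
        else:
            zones[tile] = "ok"
    return zones

survey = {"A1": 16.5, "A2": 21.0, "B1": 29.2, "B2": 24.8}
print(classify_tiles(survey))
# {'A1': 'cold', 'A2': 'ok', 'B1': 'hot', 'B2': 'ok'}
```

A real deployment would tie each tile to the robot's position log and feed the zone map into the CRAC control loop; the classification step itself is this simple.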
The future of the data center will have to incorporate new levels of automation. Whether this is cloud-based software automation or direct data center robotics automation protocols, next-generation data center models will require new levels of control.
Before we get into the next section of this article, it’s important to note a few points. First of all, robotics are NOT here to replace you. Although companies like Blue Prism certainly spell trouble for some outsourcing organizations, it really revolves around the evolution of both technology and the data center. You can either embrace new ways to compute and control your data center, or you can stand to the side.
Secondly, robotics is not here to replace intricate business processes or people who need to do unique job tasks. Robots, ideally, replace repetitive tasks and allow those administrators to focus on higher-level business and technology-oriented projects. These knowledge workers and thought-leaders will always be necessary to make a data center run optimally.
Finally, the major obstacles currently facing robotics fall on two fronts: technology and economics. It might not make sense for a data center to attempt a retrofit with robotics when its existing facility is old and outdated. Similarly, for some data center operators there may not be a financially feasible fit for robotics at this time. Either way, before jumping into the robot pool, make sure you do quite a bit of research to ensure the longevity of your robotics initiative.
| 1:31p |
Building a Business Continuity Plan: You Will Need One
David Van Allen, VP of Operations, INetU. Van Allen has been involved with INetU since its inception over 16 years ago, but with his own company at the time, FASTNET.
 DAVID VAN ALLEN
INetU
Disaster does strike. It is likely that you and your business will experience a very disruptive and costly interruption. The question is: are you ready?
IT professionals always hear the phrase “plan for the unexpected,” but this way of thinking ends up making us too overwhelmed to actually plan for an (unlikely) ominous, “unexpected” event. Relax, the meteor will not hit us, trust me. What businesses should be doing is planning for the “expected.”
Big catastrophic events are not your most probable threat to business continuity. In reality, smaller, disruptive events can cause the same damaging effect and are far more likely to occur.
For example, pipeline gas leaks, an isolated tornado, a railway accident with a spill, HAZMAT incidents or a plane crash, can all affect your business’s continuity. Even though your office may not be directly damaged, if an accident or incident takes place in close proximity, you and your employees may not be allowed to access your facilities for an extended period. If this happens, your organization is now set up to suffer from sudden customer loss, project delays, significant unplanned expenses, employee issues and more. However, by planning for the expected, you can overcome this and build a business continuity plan that is both usable and valuable.
To begin building a plan, ask yourself the following questions:
What if you and your employees could not get into your facility for just a few days – what would happen to your business?
- Without access to the building, could you have communicated with customers, employees and vendors?
- If you possess critical systems, can they be maintained remotely?
- For your IT infrastructure – could you keep the ‘must have’ systems up and running?
- For things like your web sites, payment processing and ecommerce systems, healthcare information storage, ERP systems, email, electronic records, etc., could you have accessed these systems in whole, in part, or at all?
When you take the time to slow down, dissect, and consider the unique aspects of your business, you will find that you are asking important questions you may have missed before. Answering these questions will help you identify what you really need to prepare for, even if the interruption lasts only a few days.
Once you’ve taken measures to answer these questions, you can begin to create a plan based on your needs. While drafting a business continuity plan, it’s important to include four key factors: consequence-based planning, identifying critical functions, planning for the expected and preparing a check list to cover all your bases.
Consequence-based Planning
When putting together a business continuity plan, use real consequence-based planning for this exercise. For example, “What would we do if our building were hit by a tornado?” Be realistic and specific in identifying threats for your location. Make a list of things that are critical to keeping your organization functioning within those first few hours after an interruption, and another list of things that will be important later on.
Business Critical Functions
Be sure to identify your business’s critical IT functions. These are the most basic systems or processes that must continue or be restored first to keep your organization functioning. They may include a customer-facing website, mission-critical applications, or your email or accounting systems. Once you identify what technology must be maintained, you can begin to design, prepare, and test a cost-effective fail-over solution: the process of switching from your main systems to your backup systems, either manually or automatically. The most important part of this is to be realistic in your expectations during a fail-over.
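The automatic variant of fail-over usually boils down to a small decision loop: probe the primary, and switch traffic to the backup after enough consecutive failures. The sketch below is a minimal illustration under that assumption; real deployments add probe timeouts, alerting, and a deliberate fail-back policy, and the threshold value here is arbitrary.

```python
# Minimal sketch of automatic fail-over logic for a critical system,
# assuming an external health probe of the primary returns True/False.

FAILURE_THRESHOLD = 3  # consecutive failed probes before switching

class FailoverController:
    def __init__(self):
        self.active = "primary"
        self.failures = 0

    def observe(self, primary_ok):
        """Record one health-probe result and return the active system."""
        if primary_ok:
            self.failures = 0
        else:
            self.failures += 1
            if self.active == "primary" and self.failures >= FAILURE_THRESHOLD:
                self.active = "backup"  # fail over; fail-back stays manual
        return self.active

controller = FailoverController()
for ok in (True, False, False, False):
    state = controller.observe(ok)
print(state)  # backup, after three consecutive failed probes
```

Note that the controller does not fail back automatically when the primary recovers; as the article suggests, being realistic about a fail-over usually means keeping that step a human decision.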
Make and Use a Checklist
Here is a framework for a sample checklist/questionnaire that will get you thinking in the right direction:
- Who do I call first?
- Do I have a way to conference together key decision makers?
- If I can’t get access to my building, is there a way I can contact those employees that may be inside?
- If I have to evacuate the building, who goes where?
- Is there a physical administrative ‘meet’ point, and an alternate?
- Is there a location I can use as a temporary operations site? Can I make a mutual deal now with another business owner a few miles away?
- Which employees are key decision makers/key tactical assets?
- Do I bring in additional people to help run the business?
- Are we able (and set up) to telecommute and operate remotely?
- What contracts, covenants, compliances or laws might I breach during this situation?
When building a business continuity plan, think about the real and possible events that can occur, and disregard the rest. Once you have a final, well-thought-out, and tested plan, you will feel more confident should something bad happen. Along with that confidence, your business will be positioned to survive without losing the productivity, profit, and hard work you and your employees have put in every day to get it where it is now.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
| 2:00p |
CyrusOne Houston Now Offers Up To 900 Watts Per Square Foot
The exterior of CyrusOne’s Houston West campus, which specializes in hosting high-density installations for the energy industry. (Photo: CyrusOne)
CyrusOne has added three megawatts of power to its Houston West campus and says it can now deliver up to 900 watts per square foot across 24,000 square feet of the 300,000 square foot facility. Houston West now has a total of 31 megawatts of critical power.
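The quoted figures are internally consistent, as a back-of-the-envelope check shows: the 900 W/sq ft density applies only to the 24,000 sq ft high-density zone, and the power that zone can draw fits within the facility's 31 MW of critical power.

```python
# Back-of-the-envelope check of the quoted CyrusOne figures.
watts_per_sqft = 900
hd_area_sqft = 24_000
total_critical_mw = 31

# Density times floor area gives the power available to the zone, in MW.
hd_power_mw = watts_per_sqft * hd_area_sqft / 1_000_000
print(hd_power_mw)  # 21.6
print(hd_power_mw <= total_critical_mw)  # True: within the 31 MW total
```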
The Houston West campus is optimized for energy companies, providing seismic exploration computing to the oil and gas industry. It offers high-performance computing and high density colocation space to serve these needs. Its HPC offering enables companies to align performance computing directly to project periods and refresh cycles — optimizing budgets, time commitments, and technology for oil and gas customers managing seismic processing demands.
“What we’ve created at CyrusOne’s Houston data center campus is a geophysical computing center of excellence with 31 megawatts of critical power load available for customers and an HPC area that can deliver up to 900 watts per square foot,” said Gary Wojtaszek, president and chief executive officer of CyrusOne. “This means that companies involved in oil and gas exploration research and development can deploy the latest high-performance technology they need to manage the extreme levels of computing required in processing seismic data. We also see HPC demand growing across other industries as companies seek to more efficiently collect, process, store, and report data.”
The Houston data center campus has undergone several expansions. The company recently acquired 32 acres adjacent to the facility.
Upon completion, the Houston campus will have total power capacity approaching 100 megawatts, more than 1 million square feet of data center space, and 200,000 square feet of Class A office space.
| 2:00p |
Intel Adds Graph Builder to Big Data Tools
Intel adds graph and other big data software updates, Trinity Pharma raises $15 million to grow its health care analytics offering, and Splice Machine and MapR collaborate for real-time SQL-on-Hadoop databases.
Intel updates big data tools. Intel (INTC) announced several updates to its data center software products that provide enhanced security and performance for big data management, as well as a suite of tools that simplify deployment of machine learning algorithms and advanced analytics, including graph analysis. The announcements include the release of Intel Graph Builder for Apache Hadoop software v2.0, Intel Distribution for Apache Hadoop software 3.0, Intel Analytics Toolkit for Apache Hadoop software, and the Intel Expressway Tokenization Broker. Intel Graph Builder for Apache Hadoop software v2.0 is a set of pre-built libraries that enable high-performance, automated construction of rich graph representations. Intel Distribution for Apache Hadoop software 3.0 includes a number of security enhancements to the second generation of the Apache Hadoop architecture recently released by the open source community, with support for Apache Hadoop 2.x and YARN and major upgrades to MapReduce, HDFS, Hive, HBase, and related components. “Some of the leading data-driven companies have invested heavily to create and implement their own big data analytics solutions,” said Boyd Davis, vice president and general manager of Intel’s Datacenter Software Division. “Intel is bringing this capability to market by providing software that is more secure and easier to use so that companies of all sizes can more easily uncover actionable insight in their data.”
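Graph Builder's core task, turning tabular records into a graph representation, can be illustrated in miniature. This is a generic adjacency-list sketch with made-up record names, not Intel's libraries or API; Graph Builder performs the same kind of construction as distributed Hadoop jobs over much larger datasets.

```python
# Generic illustration of graph construction from tabular records,
# the kind of task Graph Builder automates at Hadoop scale.
from collections import defaultdict

def build_graph(edges):
    """Turn (source, target) records into an undirected adjacency-list graph."""
    graph = defaultdict(set)
    for src, dst in edges:
        graph[src].add(dst)
        graph[dst].add(src)
    return graph

# Hypothetical clickstream records: which users touched which pages.
records = [("userA", "page1"), ("userB", "page1"), ("userA", "page2")]
g = build_graph(records)
print(sorted(g["page1"]))  # ['userA', 'userB']
```

Once records are in graph form, questions like "which users share pages" become neighborhood lookups rather than repeated table joins, which is the payoff of the graph representation.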
Trinity Pharma raises $15 million in growth capital. Big data health care analytics company Trinity Pharma Solutions announced that it has raised $15 million in growth and expansion funding. The investment will enable Trinity to extend its solutions in response to strong demand for its cloud-based, big data healthcare analytics. “Over the last 12 years, Trinity has built a proven business and technology that is solving the increasingly complex challenges that life sciences and healthcare companies face worldwide. The rapid change in healthcare requires real-time analytics to provide value beyond counting pills and with the potential to improve patient outcomes,” said David Tamburri, HEP General Partner. “With a seasoned management team, led by Co-Founder and CEO Zackary King, we see tremendous opportunity to accelerate Trinity’s growth in support of demand for its cloud-based software.” Trinity expects to invest in the areas of sales, marketing, and technology to better serve its rapidly expanding customer base. The company also plans to double its employee base and expand its geographic reach with offices in New Jersey and California.
Splice Machine and MapR partner for Hadoop database. Real-time transactional SQL-on-Hadoop database provider Splice Machine announced a partnership with MapR Technologies. The partnership brings Splice Machine to the MapR enterprise Hadoop platform, enabling companies to use the MapR Distribution for Hadoop to build their real-time SQL-on-Hadoop applications. Splice Machine enables MapR Distribution users to tap into real-time updates with transactional integrity, an important feature for companies looking to become real-time, data-driven businesses. “This partnership is another step in the progression of Hadoop, from a highly scalable data store, to a real-time, high-performance platform for operational and analytical applications,” said Bill Bonin, VP of business development of MapR Technologies. “Now, companies have the ability to combine our enterprise-grade Hadoop distribution with Splice Machine to build real-time, transactional applications that are also dependable, scalable and secure.” |