Data Center Knowledge | News and analysis for the data center industry
Tuesday, July 2nd, 2013
11:30a | Data Center Jobs: Jones Lang LaSalle
At the Data Center Jobs Board, we have three new job listings from Jones Lang LaSalle, which is seeking a Data Center Lead Facility Technician, a Data Center Facility Technician, and a Data Center BMS Monitor in Kings Mountain, North Carolina.
The Data Center Lead Facility Technician is responsible for the installation, maintenance, operation and repair of mechanical and electrical equipment and systems, including but not limited to electrical switchgear, diesel generators, HVAC/CRAC systems, UPS systems, PDUs, RPPs, BMS/EPMS systems, and fire alarm and suppression systems. The role ensures that systems operate in compliance with required regulations and codes; tests, maintains and evaluates equipment using instrumentation; tests and calibrates electronic HVAC and building environmental controls to ensure that equipment is functioning properly; and performs, as required, skilled maintenance activities including construction, welding, soldering and plumbing. To view full details and apply, see job listing details.
The Data Center Facility Technician must be able to independently plan work assignments, perform duties with a minimum of direct supervision, and assist as a helper in other trades and in the general maintenance and operation of buildings and grounds. In the absence of a supervisor, one of the tradesmen must be capable of acting as the working foreman or lead man. The role is responsible for the installation, maintenance, operation and repair of mechanical and electrical equipment and systems, including but not limited to electrical switchgear, diesel generators, HVAC/CRAC systems, UPS systems, PDUs, RPPs, BMS/EPMS systems, and fire alarm and suppression systems. To view full details and apply, see job listing details.
The Data Center BMS Monitor is responsible for monitoring and immediately responding to building system alarms via the computer and control systems found within the Plant Services Room, contacting and notifying the appropriate on-site and off-site personnel and using the systems at the Plant Services monitoring desk. Duties include making building-wide announcements, operating CCTV systems, dispatching events via two-way radio and telephone, maintaining shift logs and detailed documentation of events and activities during the scheduled shift, interfacing and communicating with various authorities, building management and operations support staff, and inputting data into various systems within the facility. To view full details and apply, see job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
12:00p | Which Cloud Delivery Model is Right for Your Business?
Cloud models continue to evolve as business drivers push more organizations toward globally distributed infrastructure. Many companies are actively moving parts of their environments into the cloud for greater efficiency, agility and room to grow. This push has brought data centers to the forefront of the technology landscape and allowed more organizations to leverage their services. Costs are coming down, and the environments supporting the cloud are becoming more stable.
Now, with mobility, consumerization and big data all playing a major role in how organizations develop their IT and business plans, it’s important to understand the various cloud models.
The beauty of cloud computing is its ability to create truly diverse and flexible environments that directly fit what a business is trying to accomplish. Many organizations are seeing the benefits of cloud computing because they can expand their environments without committing a massive upfront budget to servers and data center capacity. Depending on the needs of the organization, IT managers may decide to go with one or several options when picking the right cloud model. For the most part, there are four major types of cloud technologies an organization can adopt.
- Private clouds are great solutions for organizations looking to keep their hardware locally managed. A good example here would be application virtualization technologies requiring a local and private presence. Users have access to these applications both internally and externally from any device, anytime and anywhere. Still, these workloads are privately managed by the organization and delivered over the WAN down to the end user. These private clouds can be located either on premises at an existing data center, or remotely at a privately held data center location. Either way, this private cloud topology is not outsourced and is directly managed by the IT team of a given organization.
- Public clouds are perfect for organizations looking to expand their testing or development environment. Many companies simply don’t want to pay for equipment that will only be used temporarily. This is where the “pay-as-you-go” model really works out well. IT administrators are able to provision cloud-ready resources as they need them to deploy test servers or even create a DR site directly in the cloud. With a public cloud offering, businesses can take advantage of third-party providers and use non-corporate owned equipment only as the IT environment requires. This is where economies of scale truly work best. The ability to provision new servers and resources without having to pay for physical hardware can be a very efficient model for organizations looking to offload their hardware footprint. Still, even in a public cloud, monitoring the environment and managing resources will be very important.
- Hybrid clouds are being adopted by numerous organizations looking to leverage the benefits of both private and public cloud environments. In a hybrid cloud, companies can still leverage third-party cloud providers in either a full or partial manner, which increases the flexibility of computing. The hybrid cloud environment is also capable of providing on-demand, externally provisioned scalability: augmenting a traditional private cloud with the resources of a public cloud helps absorb unexpected surges in workload. This is where workflow automation can really help. If an organization has peak usage times, it can offload its user base to cloud-based machines that are provisioned only on demand, so those resources are used only as needed (see the sketch after this list). For organizations that want to keep a portion of their cloud environment private while still using elements of a public cloud offering, moving to a hybrid cloud may be the right solution.
- Community clouds are a relatively new breed in the cloud computing world. Many organizations are beginning to use a community cloud to test-drive high-end security products or try out features of a public cloud environment. Instead of simply provisioning space in a public cloud, organizations can test and work on a cloud platform that is secure, “dedicated” and even compliant with certain regulations. The interesting part is that with a community cloud, the presence can be either onsite or offsite. Another example is a provider hosting a specific application on a set of cloud-based servers: instead of giving each organization its own server in the cloud for the application, the hosting company allows multiple customers to connect into its environment and logically segments their sessions. Although each customer is hitting the same server as others for that application, the session itself is completely secure and segmented.
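To make the hybrid bursting idea concrete, here is a minimal Python sketch of the kind of decision an automation layer might make when private capacity runs out; the provider interface, thresholds and numbers are hypothetical, not drawn from any specific product.

```python
# Illustrative sketch only: a simple "burst to public cloud" decision,
# assuming a private environment with fixed session capacity. The
# CloudProvider class and its methods are hypothetical placeholders,
# not any vendor's API.

class CloudProvider:
    """Stand-in for a public cloud provider's provisioning interface."""

    def provision_instances(self, count):
        print(f"Provisioning {count} on-demand public cloud instance(s)")


def burst_if_needed(active_sessions, private_capacity, sessions_per_instance,
                    provider):
    """Provision just enough public cloud instances to absorb overflow."""
    overflow = active_sessions - private_capacity
    if overflow <= 0:
        return 0  # the private cloud can handle the load on its own
    needed = -(-overflow // sessions_per_instance)  # ceiling division
    provider.provision_instances(needed)
    return needed


if __name__ == "__main__":
    # 1,200 active users against 1,000 sessions of private capacity,
    # 100 sessions per burst instance -> 2 public cloud instances.
    burst_if_needed(1200, 1000, 100, CloudProvider())
```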
When working with a private, public, hybrid or community cloud environment, planning will be the most important deployment step. During the planning phase, engineers and architects will examine how to build out their cloud environment and size it for future growth. By forecasting growth over a span of one, two and three years, IT managers can be ready for spikes in usage and be prepared for the growth demands of the business. This level of preparedness is called cloud growth agility. This means that the environment has been proactively sized and is ready to take on additional users as required by organizational demands.
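As a rough illustration of that sizing exercise (the growth rate and headroom figures below are assumptions for the example, not guidance from the article):

```python
# Illustrative capacity forecast: project user counts over a planning horizon
# and size the environment with extra headroom. The growth rate and headroom
# factor are made-up example values.

def project_capacity(current_users, annual_growth_rate, years, headroom=1.2):
    """Return (year, projected_users, suggested_capacity) tuples."""
    plan, users = [], current_users
    for year in range(1, years + 1):
        users *= (1 + annual_growth_rate)
        plan.append((year, round(users), round(users * headroom)))
    return plan


# Example: 5,000 users today, 25% annual growth, three-year horizon.
for year, users, capacity in project_capacity(5000, 0.25, 3):
    print(f"Year {year}: ~{users} users, size for ~{capacity}")
```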
Before the cloud, companies looking to expand their environments had to buy new space and new hardware and deploy workloads on a fixed infrastructure. Now that WAN connectivity has greatly improved, cloud-based offerings are much more attractive. The emergence of the cloud has helped many organizations expand beyond their physical data centers. New types of cloud-based technologies allow IT environments to consolidate and grow their infrastructure quickly and, more importantly, affordably.
12:30p | Eleven Points to Consider Before Buying a Data Protection Solution
Jarrett Potts is director of strategic marketing for STORServer, a provider of data backup solutions for the mid-market. Before joining the STORServer team, Potts spent the past 15 years working in various capacities for IBM, including Tivoli Storage Manager marketing and technical sales. He has been the evangelist for the TSM family of products since 2000.
The term “solution” is not something to use lightly as it refers to a product or set of products that provide the total package. While no single vendor can be the answer to every pain and problem a business encounters, there are important items to consider before making an investment in a data protection solution.
In this three-part series, I will discuss the 11 major items that must be considered before purchasing a data protection solution as well as the issues affecting businesses on multiple levels, including total cost of ownership and time spent on administration, maintenance, support and recovery.
More than Just Backup and Recovery
When choosing a product to protect a company’s data, it must offer more than just backup and recovery. Sure, backup and recovery are important—cornerstones even—but they are not everything.
When talking about data protection, there is so much more to consider. Long-term data storage, otherwise known as data archiving, is not just another copy of a backup tape. All data protection solutions should have a way to separate data into classes that include, but are not limited to, backup data or archived data.
Backup data is usually short-term in nature (60 seconds to 60 weeks), while archive data is typically kept for six months to 60 (or more) years. Because of the difference in retention times, it is important to manage the media and the information differently. That’s not to say that the media should be set on a shelf with a label and forgotten about. On the contrary, long-term data must be actively managed.
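A minimal sketch of how those two classes might be modeled in software; the names and retention windows below simply mirror the ranges mentioned above and are not tied to any particular product.

```python
# Illustrative only: modeling backup and archive as distinct retention classes
# so each can get its own media handling policy. The windows mirror the ranges
# described in the text.
from dataclasses import dataclass
from datetime import timedelta


@dataclass
class RetentionClass:
    name: str
    min_retention: timedelta
    max_retention: timedelta
    actively_managed: bool  # long-term media must be audited, not shelved


BACKUP = RetentionClass("backup",
                        min_retention=timedelta(seconds=60),
                        max_retention=timedelta(weeks=60),
                        actively_managed=False)

ARCHIVE = RetentionClass("archive",
                         min_retention=timedelta(days=182),       # ~6 months
                         max_retention=timedelta(days=365 * 60),  # up to 60+ years
                         actively_managed=True)

print(BACKUP, ARCHIVE, sep="\n")
```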
When an archive is created and stored for several years, it is important to know the data can be retrieved when needed. Therefore, media needs to be checked on a periodic basis to verify the data can still be retrieved and used. To do this, a proper solution should be able to audit the media and ensure the data is viable.
Not only should it be able to verify the contents of the media, but it should be able to roll that data forward in time. In seven years, LTO5 libraries may no longer exist; the industry may be on LTO10 or even some new technology, such as “data flux capacitor storage.” If there is a need to retrieve that old data and there is no longer an LTO5 drive, users may be in trouble. The solution needs to be able to roll data forward in time, moving it from one type of media to a newer type while keeping all metadata intact and verifying the validity of the data. Without this feature, long-term data is stuck and married to the device that created it.
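A minimal sketch of that audit-and-roll-forward idea, assuming a catalog that stores a checksum and metadata for each archive object; the media classes here are toy stand-ins, not any vendor's interface.

```python
# Illustrative sketch: verify each archive object against its recorded
# checksum, then copy it to newer media while carrying its metadata along.
import hashlib


class InMemoryMedia:
    """Toy stand-in for a tape library or disk pool (illustrative only)."""
    def __init__(self):
        self.store = {}

    def read(self, obj_id):
        return self.store[obj_id][0]

    def write(self, obj_id, data, metadata):
        self.store[obj_id] = (data, metadata)


def audit_object(data, expected_sha256):
    """Return True if the stored data still matches its recorded checksum."""
    return hashlib.sha256(data).hexdigest() == expected_sha256


def roll_forward(catalog, old_media, new_media):
    """Copy verified archive objects to newer media, metadata intact."""
    failed = []
    for obj in catalog:
        data = old_media.read(obj["id"])
        if not audit_object(data, obj["sha256"]):
            failed.append(obj["id"])  # surface unreadable or corrupt objects
            continue
        new_media.write(obj["id"], data, metadata=obj["metadata"])
    return failed


if __name__ == "__main__":
    old, new = InMemoryMedia(), InMemoryMedia()
    payload = b"quarterly financials 2013"
    old.write("obj-1", payload, metadata={"retention": "archive"})
    catalog = [{"id": "obj-1",
                "sha256": hashlib.sha256(payload).hexdigest(),
                "metadata": {"retention": "archive"}}]
    print(roll_forward(catalog, old, new))  # [] -> everything verified and migrated
```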
Archive data is only one example. There is also hierarchical storage management (HSM), application aware and encrypted data, and so much more. A true solution allows users to separate this data and treat different types of data in different ways. Without this ability, users are getting a backup and recovery product and not a data protection solution.
Subscription and Support Should Not Be Forgotten
Tired of calling the data protection vendor, being told the issue is a hardware problem, and then having the hardware vendor point the finger back at the solution provider? Finger pointing is a huge waste of time and can cause major disruptions in service.
When choosing a data protection solution, look for one that offers peace of mind through world-class customer support as well as subscription and support (maintenance) contracts, which provide enormous financial value to customers.
For example, all improvements made to a solution in the past three years as well as the planned improvements for the next three years should be taken into consideration. This includes source and target data deduplication, an eight-fold increase in scalability and substantial improvements in reliability, performance and ease of use.
The best of both worlds occurs when all of these new features are provided to existing customers at no additional charge through subscription and support contracts. These customers enjoy all the benefits of new versions as they are released without incurring additional costs. As a result, the solution is significantly enhanced over its lifespan through subscription and support.
Another factor that is just as important is the quality of support. If there are multiple products cobbled together to create a solution, it can cause a real problem as there is no single owner to any particular issue that occurs. The appliance model of data protection excels here as the provider can support the hardware, software, operating systems and everything in between.
When choosing a data protection solution, do not discount the value of support and subscription. Having a good support organization standing behind the solution can make life a lot easier.
Reliability: Key Measure of a Data Protection Solution’s ROI
At the end of the day, the real value of a data protection system lies in its ability to restore data when and where needed. If data fails to restore successfully, some part of the business is going to suffer, possibly with costly consequences. As a result, reliability is a key measure of a solution’s ROI.
A user may have the best, most expensive, fastest data protection solution in the world, but if data cannot be retrieved and used in a timely fashion, the point is moot.
When choosing a solution for data protection, look for experience in the data protection market—years spent protecting the business critical data of some of the largest organizations in the world. This is not a professional sports draft, so do not look for the diamond in the rough that can be molded into the perfect player (solution).
Instead, look for a solution that has a track record of being reliable over many years. The longstanding ability to migrate between storage devices as needed helps to ensure that organizations enjoy longevity on the platform, protecting their investments and saving on costs over the long-term.
Without this factor, all solutions become short-term fixes instead of long-term strategies that help businesses focus on future growth instead of today’s problems.
When choosing a solution, the ability to know the data can be recovered must be more important than any other factor. Yes, it is actually more important than price.
Users do not want to end up replacing the solution in just a few years (or less) because it cannot keep up with growth and continue to recover data.
In the second part of our series, I will discuss the importance of finding a solution that’s easy to use, why different data should be treated differently, how to eliminate the burden of virtual machine backups and why all the talk shouldn’t focus on deduplication.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
1:00p | DCIM Roundup: CommScope, Nlyte & RF Code
DCIM providers continue to flesh out their offerings, adding a slew of functionality, integrations and capabilities. iTRACS, which was recently acquired by CommScope, released the latest iteration of its DCIM software. Several providers have been integrating RF Code, with RFID capabilities in high demand for DCIM. Nlyte has released its own barcode reader, built with insights derived from thousands of man-hours of auditing experience.
These are just some of the recent changes across the DCIM landscape. Here’s a look:
iTRACS Adds Integrations, Remote Management
iTRACS was acquired by CommScope and has been adding functionality ever since. Earlier this month, CommScope announced three integrations and a slew of other new functionality. Converged Physical Infrastructure Management (CPIM) is now at version 2.8. New to CPIM are purpose-built browser and iPad user interfaces, along with deep integrations with VMware, RF Code and HP. The offering leverages the DCIM Open Exchange Framework and the new myDCIM role-based user experience.
“CPIM 2.8 provides data center owners and operators with a new level of decision-making in the areas key to their business success,” said George Brooks, senior vice president of Enterprise Product & Market Management, CommScope. “From capacity planning and change management to virtualization and asset lifecycle management, iTRACS helps customers optimize the data center as a performance-based resource. With the myDCIM role-based user experience, customers will be able to tailor their interfaces to their exact needs for even greater speed, convenience, and efficiency.”
CPIM 2.8 has boosted asset management, performance optimization, change management and mobility capabilities. The CPIM browser interface and iPad mobile app mean infrastructure can be viewed and managed anywhere. These interfaces join the existing CPIM Windows interface.
The three integrations are also key. The VMware integration boosts capabilities around visualizing and analyzing the relationships between physical assets and logical hosts. This integration reflects a larger trend of DCIM providers improving how their tools handle virtual assets.
The HP System Insight Manager integration adds features around asset discovery and operational data collection to allow CPIM users to understand, manage, and improve the energy usage and efficiency of their assets. It offers a “single pane” view to bring together multiple vendor data sets in iTRACS’ seamless management interface.
The RF Code integration allows CPIM users to collect, manage and analyze asset location and environmental information. There have been several RF Code integrations of late, as RFID is becoming integral to proper DCIM.
Larger Trend: RF Code Integration
Real-time asset management company RF Code has seen several integrations of late. Server Tech announced an integration with RF Code last month, enabling its SPM power monitoring solution to take full advantage of environmental data captured by RF Code sensors and provide a consolidated view of environmental and power information.
In May, CA Technologies announced it was integrating RF Code technology with CA DCIM, helping customers gain greater business returns on their investments in data center infrastructure.
The ability to integrate RFID sensor technology into a DCIM solution is becoming a standard, much-needed feature. RF Code integrations make monitoring and managing the physical assets of a data center much easier.
Nlyte Adds Barcode Reader
Nlyte recently announced a Barcode Reader for DCIM. The company set out to create a barcode reader that builds extreme discipline surrounding workflows and change orders to keep the Nlyte Content Repository up to date, accurately reflecting the data center at every point in time.
The company decided against existing barcode systems, citing their lack of integration with other management systems, and spent time building a user-friendly experience. Barcode Reader enables the portable auditing of cabinets, IT devices and their network and power connections, greatly reducing the time to install equipment and capture changes while improving data accuracy and reliability – at the location of the change itself.
It uses any inexpensive handheld barcode scanner in conjunction with a tablet, allowing a user to scan an asset tag and confirm its information or add it to the Nlyte Central Repository database online (or sync the changes later when working offline).
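As a rough sketch of that scan, confirm-or-add workflow (the repository here is a plain dictionary standing in for a DCIM database; none of this reflects Nlyte's actual implementation):

```python
# Hypothetical sketch of a scan-and-audit loop: look each scanned asset tag
# up in a central repository, confirm it if known, queue it for review if not.

def audit_scan(scanned_tags, repository, audit_date):
    """Return (confirmed, unknown) asset tags from one cabinet audit pass."""
    confirmed, unknown = [], []
    for tag in scanned_tags:
        if tag in repository:
            repository[tag]["last_audited"] = audit_date
            confirmed.append(tag)
        else:
            unknown.append(tag)  # new or mislabeled asset: needs follow-up
    return confirmed, unknown


repository = {"ASSET-0001": {"cabinet": "A12", "last_audited": None}}
confirmed, unknown = audit_scan(["ASSET-0001", "ASSET-0042"],
                                repository, "2013-07-02")
print(confirmed, unknown)  # ['ASSET-0001'] ['ASSET-0042']
```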
DCIM Market Still a Scattered Landscape
The DCIM market is expected to grow significantly, despite a very crowded competitive landscape. There remains some confusion among customers when it comes to differentiating the offerings. Holding true to predictions, integrations that tackle more of what goes on in the data center, consolidation and acquisition, and a push toward real-time data capabilities are all driving the market. Expect more integrations and more functionality to come out at a rapid clip as providers try to break away from the pack in a big way.
1:08p | Apple Planning Solar Farm in Reno, Gets the Thumbs Up From Greenpeace
Apple is planning to build a solar farm next to its planned Reno, Nevada data center. The solar farm will eventually power the data center as well as provide power to the nearby community. Information on the project is rolling out slowly, including some recently taken aerial photos.
This continues Apple’s commitment to using 100 percent renewable energy in its data centers. The company already has a huge solar array in North Carolina and also uses biogas from nearby landfills. The Nevada complex will reportedly generate between 18 and 20 megawatts of power, similar to North Carolina, but using a different kind of technology.
Upon completion, the 137-acre solar array in Nevada will generate approximately 43.5 million kilowatt-hours of clean energy for Sierra Pacific’s power grid, which provides power to Apple’s data center.
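As a back-of-envelope check (my arithmetic, assuming the 43.5 million kilowatt-hours is an annual figure, which the article does not state explicitly), that output is consistent with a roughly 20 MW array running at about a 25 percent capacity factor:

```python
# Back-of-envelope check, not figures from the article: the capacity factor
# implied by ~43.5 million kWh per year from ~20 MW of nameplate capacity.
annual_kwh = 43_500_000
nameplate_kw = 20 * 1000
hours_per_year = 8760
capacity_factor = annual_kwh / (nameplate_kw * hours_per_year)
print(f"Implied capacity factor: {capacity_factor:.0%}")  # roughly 25%
```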
The new facility might support more than 1,000 jobs both directly and indirectly in and around Washoe County. The project is expected to result in a total of $24.1 million in direct and indirect revenues in Nevada over a ten year period, according to a survey by Applied Economics for the state of Nevada.
Greenpeace has some positive words for Apple regarding its latest investment in solar energy in Nevada. Greenpeace International Senior IT Analyst Gary Cook issued the following statements:
“Apple’s latest investment in solar energy in Nevada shows that the company is making good on its promise to power its iCloud with 100 percent renewable energy. The detailed disclosure that Apple gave today can give confidence to Apple’s millions of users that the company is powering its corner of the Internet with clean energy.”
“With Google, Facebook, and now Apple all announcing major new deals in recent months for new renewable energy to power their data center operations, the race to build an internet powered by renewable energy is clearly in full swing. Tech companies are showing they have the ability to use their influence and buying power with utilities to change their supply of electricity away from coal and toward renewable energy.”
Greenpeace also called out others to look towards Apple’s efforts as inspiration. “Microsoft and Amazon – both of which still power their Internet using the dirty electricity that causes global warming – ought to take notice. In the race for a clean Internet, Apple is leaving both of those companies in the dust.”
Greenpeace will release an update to its “How Clean Is Your Cloud?” analysis ranking IT companies for their energy choices later this year.
2:00p | Appfluent Unlocking the Power of Big Data
Thinking about how Hadoop can help your business? Appfluent is one provider that thinks it can help you.
Appfluent believes it is transforming the economics of data warehousing. The big data analytics player is young but growing: it has about 30 employees and is currently hiring aggressively.
“We’re at a unique time in history,” said Frank Gelbart, President and CEO. “The relational database is under siege. (There’s) exploding data volumes, machine data, sensor data. The relational database infrastructure is just not adequate.”
The company is a provider of enterprise BI and data warehouse management software. Appfluent software non-disruptively captures and correlates user activity, data usage and query performance for detailed usage-analysis over time. “We are seeing an acceleration in the amount of data that companies are starting to collect, and they need to do something with it,” said Gelbart.
Insight Into Data Warehouses
Appfluent Visibility integrates with leading BI applications to provide insight into business activity and data usage for Data Managers, Data Architects, BI Application Managers and Data Warehouse Managers. It provides information to measure the value being delivered by your data warehouse so you can prioritize your initiatives based on your business activity. Appfluent provides granular information on how data is used and what data is unused, enabling a company to reduce the complexity and costs of data management.
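To illustrate the kind of usage analysis being described (this sketch is mine, not Appfluent's implementation), a query log can be tallied to separate heavily used tables from ones nothing touches:

```python
# Illustrative sketch, not Appfluent's implementation: tally which warehouse
# tables the query log actually touches so unused data can be offloaded.
from collections import Counter


def usage_report(all_tables, query_log):
    """Return (hit_counts, unused_tables) from (user, tables_touched) entries."""
    hits = Counter()
    for _user, tables in query_log:
        hits.update(tables)
    unused = sorted(set(all_tables) - set(hits))
    return hits, unused


all_tables = ["sales_2013", "sales_2009", "clickstream_raw"]
query_log = [("bi_dashboard", ["sales_2013"]),
             ("analyst", ["sales_2013", "clickstream_raw"])]
hits, unused = usage_report(all_tables, query_log)
print(hits.most_common())  # most frequently used tables first
print(unused)              # ['sales_2009'] -> candidate for archive or offload
```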
We’re at a time when new technologies like Hadoop will continue to grow in importance, as companies need to deal with Internet-scale volume problems in a cost-effective way. Appfluent develops technologies that help companies scale much more cost effectively. “Out of the box, we’re going to slash costs, improve efficiency and help you make the move to Hadoop,” said Gelbart.
The company’s customers come from several verticals. Financial services is a growing segment, Gelbart says; examples include credit card companies trying to mitigate credit risk by crunching through current transactions to immediately identify a potential problem. The company also has telecommunications customers, as well as government and defense customers. Other areas it plays in are healthcare and retailing. “We don’t have direct competitors,” said Gelbart. “We’re competing against a myriad of homegrown solutions and some database monitoring. But those solutions can’t accomplish as much; they’re bare bones.”
State of NJ A Happy Customer
The company has signed up companies like Pfizer, the Union Bank of California and SAP Business Objects, and also works with the states of Pennsylvania and New Jersey. “Since implementing Appfluent, we have improved the efficiency and productivity of our Data Management Services unit. It is able to quickly pinpoint application usage and performance inefficiencies, slow reports and bad queries, allowing us to optimize our data stores and enhance performance. We are providing better service and will be able to support more projects more effectively using our existing staff,” said Dan Paolini, director of Data Management Services for the State of New Jersey.
“We can deliver a real ROI (return on investment) today by just taking existing workload and moving it,” said Gelbart. “This isn’t pie-in-the-sky stuff. This is hard ROI dollars: you can wheel Hadoop into your enterprise data warehouse and do it for a tenth of the cost.”
The company is partnered with Cloudera, one of three major independent Hadoop distribution companies.
Technologies like Hadoop have provided the data processing power to accomplish great things. “Big companies were managing big data warehouses in the dark,” said Gelbart. “Companies can really understand today how their business users are interacting with data, and stop spending money on existing legacy data warehouse platforms. It’s about optimizing what you have, understanding usage so you can offload.”
Big Data Getting … Bigger
Big data, and by extension Hadoop usage, is on the rise. “The market would be growing even faster if more people were up to speed on Hadoop,” said Gelbart. “Part of the expertise comes down to new applications that can leverage Hadoop.” The demand for Hadoop expertise is outpacing the supply, leaving a massive opportunity for companies positioned where Appfluent is: companies helping the evolution to Hadoop and driving real, usable analytics around it.
“We are continually developing products that address the evolving challenges of big data,” said Gelbart. “Our goal is to help businesses significantly reduce their data warehouse costs by pinpointing underutilized data, optimizing the most frequently used data and enabling them to make the smart move to Hadoop and Big Data.”
2:00p | Video: Software Release Automation
At Velocity 2013, DevOps – a combination of the words development and operations – was much discussed. DevOps is a software development method that stresses communication, collaboration and integration between software developers and information technology (IT) professionals. Jonathan Thorpe, DevOps evangelist and product marketing manager at Serena Software, discusses software release automation and how it works with DevOps, which is defined as a combination of Culture, Automation, Management and Sharing (CAMS). The video runs 5:22.
For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.