Data Center Knowledge | News and analysis for the data center industry
Tuesday, January 29th, 2013
12:08a
Data Center Outage Cited in Visa Downtime Across Canada
A data center power outage is being blamed for payment processing problems that prevented Visa cardholders in Canada from using their cards for most of Monday. Total System Services Inc. (TSS), one of the largest processors of card-payment transactions in North America, said late Monday that its systems were back in service after a power outage at a data center interrupted its ability to authorize transactions.
The problems rippled through to Visa customers whose cards were issued by major Canadian banks, including CIBC, Royal Bank of Canada and TD Canada Trust.
Visa Canada said it had restored service to its users as of 5 p.m. Eastern, but as of 7 p.m. RBC was still telling Twitter users that “the VISA system is currently experiencing tech issues.”
12:00p
Storage News: SGI and Scality, Micron, Nimbus Data
Here’s a roundup of some of this week’s headlines from the storage industry:
SGI partners with Scality for scale-out storage solution. SGI announced a strategic OEM agreement with Scality, a leader in software-defined storage, to provide a unified scale-out storage solution that helps customers manage massive unstructured data sprawl. The Scality RING Organic Storage software will be bundled with SGI’s Modular InfiniteStorage Platform to enable a converged multi-petabyte storage architecture. A single 19-inch rack can house nearly three petabytes of scale-out storage with total data security and no single point of failure, to enable an infrastructure that can grow as needed with no data migration or downtime required. “Scale-out object-based solutions are designed to address this particular set of problems by minimizing manual intervention for storage expansions, migrations, and recoveries from storage system failure,” said Ashish Nadkarni, research director, Storage Systems at IDC. “Such a dispersed, fault-tolerant architecture enables IT organizations to more efficiently absorb data growth in a manner that is predictable for the long term.”
Micron introduces new solid state drive. Micron (MU) announced its next-generation solid state drive (SSD) for data center servers, appliances, and storage platforms. The P400m SSD is a high-endurance SATA caching and storage solution designed to handle the mounting petabytes of structured and unstructured digital information created, stored and accessed every day in data centers. The P400m is designed for high endurance and high performance, featuring XPERT, Micron’s Extended Performance and Reliability Technology, which closely integrates the storage media and controller through highly optimized firmware algorithms and hardware enhancements. “The growth in big data is placing tremendous pressure on IT administrators,” said Ed Doller, VP and general manager of Micron’s Enterprise SSD division. “Users require fast, on-demand access to data. This means data centers must deliver more data, faster than ever before—in an environment that has zero tolerance for data loss. Integrating flash storage into the data center is the preferred way administrators can meet these growing demands. The Micron P400m delivers the endurance, reliability, and performance critical for data center storage.”
Nimbus Data deploys over 1PB in one month. Nimbus Data announced that it has marked a major company milestone by shipping over one petabyte (1,024 terabytes) of flash memory last month, a record-setting performance that contributed to a 415 percent sales increase for the company in 2012 compared to the prior year. The bulk of the shipments consisted of Nimbus Data’s latest Gemini flash memory array, a third-generation array that combines improved performance with greater density and power efficiency. “We are delighted to have achieved this tremendous milestone,” stated Thomas Isakovich, CEO and founder of Nimbus Data. “Shipping over a petabyte of flash memory in the past month demonstrates flash memory’s rise as the torch-bearer for next-generation primary storage and Nimbus’ technology leadership and operational maturity. We foresee continued rapid adoption of Nimbus Data solutions as the storage industry goes through perhaps its greatest paradigm shift in decades.”
12:30p
Data Center Jobs: FortressITX
At the Data Center Jobs Board, we have a new job listing from FortressITX, which is seeking a Linux System Admin in Clifton, New Jersey.
The Linux System Admin must have:
- The ability to research solutions to problems independently using common support channels (Google, mailing lists, forums, etc.)
- In-depth experience with Linux (Red Hat and Debian variants) and FreeBSD
- Expert experience with Apache (1.x, 2.x), PHP (4.x, 5.x), MySQL (4.x, 5.x), BIND and Exim
- Experience supporting and troubleshooting a wide range of hardware
- A core understanding of how the Internet and networks operate
- The ability to lift 50+ pounds to a height of 6 feet
The candidate must also be a self-motivated, hands-on person who can multi-task, play nicely with others, work solo, and manage his or her own time. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
1:11p
Cisco Unifies Wired & Wireless Access In New Catalyst Switch
With its new Catalyst 3850 switch, Cisco is integrating wired and wireless access with a Unified Access Data Plane ASIC (Image: Cisco Systems).
At the Cisco Live! London conference this week Cisco (CSCO) announced new solutions under the Cisco Unified Access umbrella that simplify network design by converging wired and wireless networks. The technologies come together in the new Cisco Catalyst 3850 Switch, which offers integrated wired and wireless LAN controller functionality.
The new products address an evolving environment in which employees may access corporate networks from a variety of devices, locations and networks.
The strategy for Cisco’s Unified Access is to unify wired, wireless and virtual private networks (VPNs) into a single, secure network based on one policy source and one management solution for the entire campus network. To realize this strategy, Cisco spent several years and millions of dollars developing a platform with a Unified Access Data Plane (UADP) ASIC, the modular IOS-XE network operating system, and onePK software-defined networking APIs.
The new ASIC (Application-Specific Integrated Circuit) terminates wired and wireless traffic and enables consistent services to be applied to both. Since the ASIC is programmable, it offers extensive flexibility and scale, and it is compliant with OpenFlow 1.3. This new UADP ASIC is featured in the new access switch and WLAN controller being announced.
New Unified Access equipment
The Catalyst 3850 Switch brings the best of wired and wireless features into a single platform, built on the new UADP ASIC and powered by the modular IOS-XE network operating system. It supports network-wide visibility and analytics for faster troubleshooting, and granular hierarchical quality of service across the entire wired-wireless infrastructure. A total of 480Gbps of throughput can be achieved by stacking four Catalyst 3850s together. The new Catalyst 3850 performs seven times faster than the previous Catalyst generation.
Cisco also introduced a new Cisco 5760 Wireless LAN Controller, which features the UADP ASIC and runs the IOS-XE network operating system. Designed for high-performance, large campus deployments, a single controller can support up to 12,000 connections on one Layer 3 architecture. The 5760 delivers 60Gbps of capacity with hierarchical QoS capabilities for a centralized deployment.
“Customers want a simple, highly secure network with reduced TCO that allows them to address new access requirements such as BYOD and new innovative line of business applications,” said Rob Soderbery, senior vice president, Enterprise Networking Group at Cisco. ”Cisco Unified Access allows customers to achieve these goals by moving away from individual vertical stacks of technology and disparate components toward a single architecture for an intelligent network.”
Policy and Management Solutions
The new Cisco Identity Services Engine (ISE) 1.2 release adds mobile device management (MDM) integration with top industry solutions, including Good, AirWatch, MobileIron, Zenprise and SAP, to improve mobile device management and deliver a single, simplified policy management solution. Cisco Prime Infrastructure 2.0 has also been updated to let IT manage the new Unified Access infrastructure components, including the Cisco Catalyst 3850 switch and Cisco 5760 Wireless LAN Controller. Cisco also introduced onePK, its open architecture for software-defined networking: a developer toolkit that allows applications to receive information from Cisco switches and routers, offering a programmable data plane that enables investment protection through fast feature rollout.
“At University Hospital of Wales, our ‘dream’ is to deliver highly secure and sustainable healthcare without boundaries,” said Gareth Bulpin, technical development network and support manager, Cardiff and Vale University Health Board. “The successful partnership between Cisco, Deamovo UK and the Hospital Network Team delivered our ‘dream’ into reality in 58 days, with Cisco Unified Access being the ‘game changer.’ Cisco Unified Access has enabled us to apply consistent wired and wireless policies at the access layer of our network, which now supports our mobile workforce in leveraging BYOD in a sustainable and highly secure fashion.”
1:30p
Demonstrating IT Value, Illustrated
Hani Elbeyali is a data center strategist for Dell. He has 18 years of IT experience and is the author of the Business Demand Design methodology, which details how to align your business drivers with your IT strategy. His previous post was “How to Measure IT Value is the Real Issue.”
HANI ELBEYALI, Dell
The value IT creates for the organization is the benefit gained by the business minus the total cost of ownership, divided by the total cost of ownership. In other words:
IT Value = (Expected Return – Total Cost of Ownership) / Total Cost of Ownership
The organization only cares about the bottom line, and as such, organizations only want to spend money on projects with positive ROI. The ROI must be higher than the organization’s internal rate of return (IRR), therefore the enterprise only cares about the left side of this equation the “IT value” or “Benefits.” The problem is — most IT organizations only focus on the cost side of the equation. Unfortunately, reducing cost alone will not increase value. Here is why:
Assumptions
- Expected Return is the benefit gained from the IT organization, such as increased sales due to better IT infrastructure, higher customer satisfaction, customer retention, and reduced customer turnover. Simply put, it is all tangible and intangible benefits gained from IT, quantified in monetary value.1
- Total Cost of Ownership is the sum of capital and operating spend for IT projects over their sustainable life.
- Net IT value is the value IT created after subtracting all other associated expenses2 at year end.
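To make the equation and assumptions above concrete, here is a minimal Python sketch of the calculation; the dollar figures are hypothetical placeholders, not the values behind Table-1.

```python
# Minimal sketch of the IT value equation above, using hypothetical figures.
# The dollar amounts below are illustrative and are not taken from Table-1.

def it_value(expected_return, tco):
    """IT Value = (Expected Return - Total Cost of Ownership) / TCO."""
    return (expected_return - tco) / tco

# Hypothetical baseline: $1,100M expected return against $100M TCO.
print(f"Baseline:              {it_value(1_100, 100):.2f}")
# Lowering TCO alone raises the ratio...
print(f"Lower TCO only:        {it_value(1_100, 60):.2f}")
# ...but lowering TCO while also growing ER 2% per year raises it further (year 4).
print(f"Lower TCO and grow ER: {it_value(1_100 * 1.02 ** 4, 60):.2f}")
```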
Using the equation above, Table-1 below shows two organizations. The top organization successfully lowered its OpEx spend from $100M to $60M but kept its expected return constant at $1,100M; the result at year four is $58M of net IT value. The bottom organization did the same thing and, in addition, increased its expected return by 2 percent year over year; the result is a whopping $135M in net IT value at the end of year four.
Benefits of Lowering Cost and Increasing Return, Illustrated – Table-1
Plotting the data in a graph (Figure-1) shows the incremental difference in IT value over time between the two organizations (A and B): the difference in year three is $56M, and in year four it is $77M. Organization B had better results because it lowered cost while increasing expected returns, or “benefits.”
Figure-1
So What Does This Mean?
It’s been said that the definition of insanity is “doing the same thing over and over again and expecting different results.” IT organizations have tried the same classical bottom-up approaches, with an emphasis only on TCO, for two decades, and as a result they can’t get ahead of the usual “keeping the lights on” problem.
This vicious cycle causes business people to say, “IT failed us, and we are not sure why the IT budget is close to eight percent of total company gross revenue.”3 Essentially, reducing cost alone year over year will cause IT to lose most of its budget over time, and the function may end up outsourced.
What I’m promoting is a new approach called “Business Demand Design,” because IT value corresponds directly to both Total Cost of Ownership and Expected Return, and increasing the value of IT requires balancing both ER and TCO. Reducing TCO alone is not enough; it will lower the expected return to IT end-users and, in turn, reduce IT value. IT has become the single face of the company to its customers, and staying in today’s competitive market requires increasing IT value, not decreasing it.
Proposed Solution
Business Demand Design is a new practical approach, striking the balance between:
- Expected Return (Demand): aligning the firm’s business drivers with IT strategy, i.e. profiling each line of business’s demands into distinct workloads and running the workloads over fit-for-purpose, real-time infrastructure. This effectively squeezes out wasted resources.
- Total Cost of Ownership (Supply): by spending a little more on planning and designing, the IT organization can break free from the vicious cycle of the keeping-the-lights-on syndrome and stop living with the consequences of past poor planning decisions. This has a positive impact on fixed and variable IT spend, and will transform the organization into a real-time enterprise running a fit-for-purpose design, or what we call “Business Demand Design.”
Business Demand Design
Business Demand Design (BDD) is an all-encompassing framework. It provides a guide to striking the balance between TCO (supply) and the ER (demand). Business Demand Design has two principles:
1) Business demands are unique in each organization, and IT provides the supply for those unique demands. In some cases a one-hundred percent match between business and IT drivers is not possible, so you should design for as much efficiency and effectiveness as possible; this can be done by profiling and aligning the business drivers to the IT drivers for the organization.
2) Designing the infrastructure should be top-down and bottom-up. This means taking a radical slant at minimizing classical IT cost cutting approaches, avoiding one-platform-fits-all solutions and the supply-driven IT market, and spending a little more on planning and designing to drive higher IT value for the organization.
Fundamental Steps in Achieving Business Demand Design
The first few preliminary steps in achieving Business Demand Design are:
- Understand the organization’s business drivers. List those drivers based on their priority level and how they are measured by the business; later, these drivers will be used in creating the business profile.
- Understand the supporting IT drivers for each of the business drivers. List them according to their Key Performance Indicators (KPIs) and how they benefit the business; this will be important in creating the tangible and intangible expected return.
- It is essential to understand the internal rate of return (IRR) for the organization, the cost of money, and how you’re going to quantify and measure the success of each IT project over its sustainable life against its total cost of ownership.
Endnotes
1Assuming ER of 10%. Time value of money and other financial factors are not discussed at this high level.
2To simplify, I have omitted depreciation, inflation and taxation.
3Dr. Rubin, Rubin World Wide (http://www.rubinworldwide.com/)
Please note the opinions expressed here are those of the author and do not reflect those of his employer.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
3:49p
IBM Goes After Enterprise Apps With Its Public SmartCloud
The enterprise giants continue to wake to the potential of cloud. IBM today announced a global expansion of its enterprise-grade cloud. IBM’s infrastructure-as-a-service cloud, SmartCloud Enterprise+ (SCE+), is now live in Japan, Brazil, Canada, France and Australia as well as the U.S. and Germany. IBM is also announcing an SAP-based service unique to Big Blue, called IBM SmartCloud for SAP Applications.
IBM is placing a stake in the ground for its public cloud play here. It believes cloud computing is creating the next wave of IT services sourcing, and that cloud is the natural evolution of its services business. It is also ensuring that its traditional outsourcing models blend seamlessly with newer cloud delivery models, and believes cloud computing is a net add, not a cannibalization of traditional business lines.
This continues Big Blue’s momentum in cloud computing. The company announced last week that cloud revenue grew 80 percent in 2012. IBM says it now has more than 9,000 enterprise cloud clients and expects cloud computing to account for $7 billion in revenue by 2015.
The SCE+ service combines the best features of sourcing (high service level agreements, security and reliability) with the best features of cloud (elasticity and subscription-based pricing). SCE+ offers the same level of assurance normally associated with a hosted service, so clients can always access their core applications for ERP, CRM, analytics, social business and mobile computing from the cloud.
Decidedly Enterprise Positioning
This is a public cloud service, but the company is distancing itself from other public cloud plays by touting a decidedly enterprise customer base. The company is differentiating its new cloud from “one size fits all” services you can buy with the swipe of a credit card, and it also hopes to distance itself from recent outages on some public cloud platforms. For this reason, it’s touting some of its more outage-sensitive clients, such as financial institutions, telecoms and other big businesses. Clients listed include the Philips Smart TV platform for internet services; Summit Health, a healthcare management company; and the Generalitat de Cataluña, a regional government in Spain, which plans to use SCE+ in a new IBM cloud data center in Spain to improve its healthcare system and share resources among its universities and town halls. These are all heavy-duty, enterprise-level customers that can’t afford instability in their infrastructure, which highlights how IBM is positioning its SCE+ cloud toward the upper end of the market.
SCE+ is offered from IBM’s cloud centers in Japan, Brazil, Canada, France, Australia, the U.S. and Germany, giving clients broad geographic choice of where their data resides. IBM also announced today the opening of its first cloud center in Spain, located in Barcelona, to serve clients worldwide; it will be operational by mid-2013. The SCE+ environment can offer service levels that guarantee availability for each single OS instance from 98.5 percent up to 99.9 percent.
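For a rough sense of what that availability range implies, the short Python sketch below converts an availability percentage into allowable downtime per year; it is simple arithmetic, not IBM’s published SLA terms or credit schedule.

```python
# Rough arithmetic only: translate an availability percentage into the
# downtime it allows per year. Illustrative, not IBM's SLA definitions.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours, ignoring leap years

def downtime_hours_per_year(availability_pct):
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (98.5, 99.5, 99.9):
    hours = downtime_hours_per_year(pct)
    print(f"{pct}% availability allows about {hours:.1f} hours of downtime per year")
```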
Also new is IBM Migration Services for SmartCloud Enterprise+, which helps clients migrate to cloud more quickly and cost-effectively by determining which workloads are best suited to the SmartCloud Enterprise+ environment. Standardized and automation-assisted, IBM Migration Services are economically priced and aim to deliver ROI in 6 to 18 months.
IBM is also bumping up security by managing patch updates and identity management. SCE+ is strictly touting enterprise apps, whereas cloud plays like AWS have always relied on a vibrant startup customer base.
“This is a logical evolution of IBM’s sourcing business that gives us an advantage both in our services relationships and the cloud market as we define a new enterprise-grade cloud today,” said Jim Comfort, general manager of IBM SmartCloud Services. “Our clients want sophisticated, economical cloud-based services that provide the same quality and service level as a private, hosted IT environment. With that assurance, they can focus more on driving business value from their data and operations, and less on managing their IT.”
SmartCloud for SAP Applications available
SAP Applications are decidedly enterprise, and the company concurrently revealed SmartCloud for SAP applications, an enterprise service unique to IBM. The service is available globally.
Operating and managing IT environments running SAP solutions requires an advanced infrastructure and strong SAP operational skills. SmartCloud for SAP applications automates and standardizes provisioning of IT environments, backed by expert certified staff.
One interesting feature of SmartCloud for SAP applications is the ability for clients to develop and test operations on IBM’s public cloud service. This lets customers “try before you buy” complicated SAP enterprise applications. If they like what they see, clients can transition applications to the SCE+ platform for production.
This service is available for SAP Business Suite software and the SAP BusinessObjects solution portfolio as an enterprise-class, fully managed Platform-as-a-Service (PaaS) offering for running SAP solutions in a production environment.
“IBM’s new cloud service for SAP applications exemplifies our two companies’ work together in the last 40 years in delivering enterprise value to thousands of clients,” said Dr. Vishal Sikka, member of the SAP Executive Board, Technology and Innovation. “Cloud computing is helping our clients transform their IT infrastructures and businesses. We are confident that our partnership with IBM — using their SmartCloud platform and our business applications – will help drive differentiated value to clients around the globe.”
IBM is also marrying its Global Business Services deep expertise, tools and processes with SmartCloud for SAP applications to deliver LifeCycle as a Service. This handles implementations of SAP applications end to end—from sandbox to production. IBM takes responsibility and control of the SAP applications and provides management, including software patching of SAP solutions as well as support for the underlying operating system, database and middleware.
In the last few weeks, the company also announced an IBM private cloud with new predictive cloud provisioning running web and mobile access for the Australian Open; its 20th year of patent leadership, including cloud breakthroughs; and its cloud-based “smarter home” project at CES, showing the business and technical value of appliances connected to the cloud.
4:00p
Video: What Lurks Beneath Your Raised Floor?
There might be lots of “cruft” down there. This video, produced by a data center cleaning company, shows some examples of what really can be happening below the tiles of your data center’s raised floor. Obviously there’s a pitch for data center cleaning and decontamination, but we thought readers might like to peek under the gleaming white tiles and see what can happen in the space below them. Video runs 3:41.
For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.
4:01p
NCS Launches Ruggedized 1U Server 
In an offering targeting military users, computer manufacturer NCS Technologies announced the availability of the Bunker XRV-5241 rugged 1U short-depth server, a powerful and reliable system engineered to fit into tight spaces and endure harsh conditions on land or at sea.
Designed specifically for rugged service, it supports everything from tactical military deployments to civilian first responders, outdoor construction and transportation. The Bunker XRV-5241 meets stringent MIL-STD-810G, MIL-S-901D and MIL-STD-167 environmental, shock and vibration requirements. MIL-STD-810 is a U.S. military standard for equipment’s environmental design and test limits for the conditions it will experience throughout its service life. MIL-S-901D is a military specification for high-impact mechanical shock that applies to equipment mounted on ships; the D at the end of MIL-S-901 specifies that the equipment meets requirements to withstand shock loadings that may be incurred during wartime service due to the effects of weapons.
The server features dual Xeon E5-2600 series processors, 256GB of DDR3-1600MHz memory and four 2.5-inch hot-swappable disks. Highly efficient, 1+1 redundant 750W hot-swappable power supply modules can be fitted for AC or DC input power, which is ideal for aircraft, command shelters and ships. Low-voltage kits are also available for high-temperature environments. Its 18-inch depth is perfect for use in short-depth server rack environments.
NCS designs, manufactures, distributes and supports a wide range of computing products for government, education, enterprise and OEM customers. In 2012, NCS Technologies was named Microsoft US OEM Partner of the Year and was also named a Dell OEM Partner.
4:29p
Leveraging Cloud and Virtualization for Disaster Recovery
As the data center continues to become an integral part of any organization, administrators are working hard to find ways to be as resilient as possible. The data center environment is a lot more complex now, with many moving parts, all of which are vital to the efficiency of an infrastructure. With cloud computing, more users, and an increase in data, the challenge has become disaster recovery and business continuity. All distributed systems have to be checked and the data points must all be monitored. In working with these more complex data centers, many administrators are turning to the cloud and virtualization to help them create a more robust DR plan.
The reality is this: a well-planned out cloud and virtualization solution can truly help any organization create a more agile environment. There are inherent benefits to working with specific types of cloud models and virtualization platforms. A large part of IT is creativity – that’s why using new types of technologies can help reduce management costs and keep an environment running longer.
The following are some ways that cloud and virtualization can be leveraged for disaster recovery:
- Cloud for replication. Site-to-site replication has become easier with the utilization of both private and public cloud technologies. With better storage systems and more control over the WAN, organizations are able to better replicate their environments. This can be entire virtual machines, specific databases, or just data points. Furthermore, cloud computing has made disaster recovery much more financially feasible for more organizations. Why? The inherent flexibility of the cloud means you can dictate exactly how much downtime your organization can tolerate and where the costs break even. With that, companies that are trying to stay financially conscious are able to design a solution that fits both the data center and the budget.
- Virtualization as a mechanism for backup and recovery. The idea is simple. It’s much easier to recover a virtual machine than it is a physical device. So, administrators with numerous VMs in their environment are working with virtualization-ready backup strategies to keep their data centers agile. Snapshots can be distributed to various points using private networks or cloud technologies. Furthermore, administrators are able to have “mirrored” VMs running at a remote site ensuring an active/active scenario should one of the vital corporate VM workloads fail.
- Using software-defined technologies. The conversation around software-defined technologies continues to grow, with some seeing the term as an extension of virtualization. Software-defined platforms can range from networking equipment to security appliances. The idea is to create a truly agile environment where various “virtual” networking instances can be deployed. For example, Global Server Load Balancing (GSLB) can greatly assist a DR strategy. By load balancing data centers on a distributed level, administrators are able to seamlessly (and automatically) point users to a different data center should the primary go down (a minimal failover sketch follows this list). In fact, a similar connection can be created whether that secondary data center is privately held or provisioned by a data center provider. For the end-user, the transition is almost transparent. For the IT administrator and the business manager, this means, very simply, less downtime and faster recovery.
- IaaS or “Data Center On-Demand.” Cloud and virtualization platforms now allow for the very fast provisioning of needed resources. In a disaster scenario, the ability to recover VMs and data quickly is essential. This is where cloud computing and virtualization can help. Administrators are able to create active/active or active/passive IaaS solutions which can be very cost-effective. For example, an engineer can set a backup cycle with a given provider. That information is then housed in the form of dormant VMs and backup data. Then, if an emergency happens, the organization can ask the provider to immediately spin up new VMs and attach its data to the newly provisioned environment. Although recovery isn’t immediate, it’s still very quick. Furthermore, organizations are able to adopt a “pay-as-you-go” model so that VMs are only paid for when used. On the other hand, organizations requiring high levels of uptime may simply have the VMs already running. The point behind a data center on-demand platform is the flexibility: depending on your recovery needs, you can use a provider to help you gain that DR agility.
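As referenced in the software-defined technologies item above, here is a minimal Python sketch of the kind of health-check-and-failover logic a GSLB tier performs; the endpoints and the update_dns_record helper are hypothetical placeholders, not any specific vendor’s API.

```python
# Minimal sketch of GSLB-style failover between a primary and a DR data center.
# The endpoints and update_dns_record() are hypothetical placeholders,
# not any specific load balancer's or DNS provider's API.
import urllib.request
import urllib.error

PRIMARY_DC = "https://app.primary-dc.example.com/health"
DR_DC = "https://app.dr-dc.example.com/health"

def is_healthy(url, timeout=3):
    """Return True if the site's health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def update_dns_record(hostname, target):
    """Placeholder: point the service hostname at the chosen data center."""
    print(f"[dns] {hostname} -> {target}")

def failover_check():
    if is_healthy(PRIMARY_DC):
        update_dns_record("app.example.com", "primary-dc")
    elif is_healthy(DR_DC):
        update_dns_record("app.example.com", "dr-dc")
    else:
        print("[alert] both data centers unreachable; escalate to on-call")

if __name__ == "__main__":
    failover_check()  # in practice this would run on a schedule
```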
The current trend clearly points to more information being present in any given infrastructure. This means more users, more devices and much more data. For any organization, there needs to be some type of disaster recovery plan in place. Remember: never place all of your “technology” eggs in one basket. Although cloud computing is resilient, it’s not without its faults.
Over the past year, major cloud vendors have seen serious outages that left big companies unable to run production systems for extended periods of time. In these situations, having a plan in place to ensure that users and customers can still access data is vital.
A large part of a good disaster recovery solution is the thought and creativity behind it. One of the first steps in creating such a solution is identifying the data and infrastructure components that are required to keep the business running. It’s in these cases that a business impact analysis (BIA) is strongly recommended. Not only will this report show you the most crucial IT parts of your environment, it will also help you prioritize and plan out a solid DR platform.
Remember, a DR strategy is something you may have to seriously invest in and hope you never have to use. However, if an emergency ever happens, a well-planned and well-executed disaster recovery initiative can save time, management overhead and, very importantly, the costs associated with extended outages.
8:15p
Compass Offers Data Center Comparison Shopping Tool 
One of the Internet’s great gifts to consumers is comparison shopping, with an explosion of services and apps providing guidance on everything from cameras to Camaros. Then there’s the Progressive Insurance model, in which a company provides information on competitors’ products and rates, with the confidence that its offerings may benefit from the comparison.
Compass Datacenters today released an online tool providing data center users with comparisons between different deployment models and providers. Compass says this “online data center configurator” is being offered as an educational resource for decision-makers involved in the planning phase for a data center project, directing users to the type of facility and the providers that meet their objectives.
“The process of choosing the right solution and provider must start with an understanding of the company’s specific technical and business needs,” the company said in a press release. “Too often, the process starts with a solution in mind and then tries to impose that on a company’s needs, but the data center configurator helps companies avoid that mistake by walking users through a decision-making process that matches the best option to the specific set of challenges and needs of their organization.”
“Cutting Through The Noise”
CEO Chris Crosby says Compass is adopting the Progressive approach in offering a wide range of products and providers. ”This is the equivalent of tools offered by insurance companies to help customers select the level of coverage that best suits their needs,” said Crosby. “We want it to be a valuable resource to help them make the right choice for their next data center project. There are a lot of providers that try to be all things to all people, and it is difficult to discern in the decision-making process. This is our attempt to cut through the noise and offer help that wants to educate rather than sell.”
The challenge with this approach is whether the tool’s output is seen as a useful comparison, or as a way to steer business to a particular provider – which can also be a factor in comparison sites run by brokers of colocation services, who often receive fees from referrals to providers.
“The self-serving side is that this may help some users understand how Compass can help them,” said Crosby. “But we’re only a solution for a small subset of users. If this just promotes us, no one’s going to use it. There’s also information about many other providers.
“There’s a whole lot of confusion out there about who does what,” Crosby added. “Knowing the difference between modular – what IO does, versus what Digital Realty does versus what we do – is important. We provide three different types of products.”
Reducing Up-Front Legwork
One industry analyst believes the Compass tool can help reduce work for companies approaching a data center expansion project.
“Selecting the right data center can be a confusing process because of the increasing complexity of these projects coupled with information overload,” said Jeff Paschke, Research Director at 451 Research. “This complexity provides data center providers with the opportunity to educate and help customers. Compass’ configurator will help point people in the right direction so they avoid the false starts and costly mistakes that too often happen with major data center projects.”
The tool has two parts: an education segment that provides definitions of different data center deployment models, and a configuration section featuring drop-down menus to select the options for a requirement. For example, selecting “Mission Critical,” “1-4MW” and “Traditional” generates a list of pros and cons of that selection, as well as a list of 12 vendors offering this product (which doesn’t include Compass). The education segment may prompt discussion about how it frames the different deployment models, two of which are described as “monolithic,” probably not a term that would be readily embraced by providers of these products.
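As a purely illustrative sketch of the kind of lookup such a configurator performs, the Python below maps one set of selections to a canned result; the options, pros and cons, and vendor names are hypothetical placeholders, not data from Compass’s actual tool.

```python
# Toy illustration of a configurator-style lookup: a set of selections maps
# to pros/cons and a vendor list. All entries are hypothetical placeholders,
# not data from Compass's actual configurator.

CONFIGURATIONS = {
    ("Mission Critical", "1-4MW", "Traditional"): {
        "pros": ["proven design approach", "wide provider choice"],
        "cons": ["longer build time", "larger up-front capital commitment"],
        "vendors": ["Example Provider A", "Example Provider B"],
    },
}

def configure(criticality, capacity, deployment_model):
    result = CONFIGURATIONS.get((criticality, capacity, deployment_model))
    if result is None:
        return "No matching deployment profile; refine the selections."
    return result

print(configure("Mission Critical", "1-4MW", "Traditional"))
```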
Crosby says Compass welcomes feedback and wants the tool to be relevant for the industry. “It’ll be interesting to see what kind of feedback we get – who we piss off, and who calls to thank us,” said Crosby. “We’ve tried to set it up for continuous improvement. If nothing else, it will definitely start some conversations.”