Data Center Knowledge | News and analysis for the data center industry
Wednesday, September 30th, 2015
12:00p

Six Facts in High-Availability Data Center Design

As the data center increasingly becomes the heart of the enterprise, data center reliability needs increase. But data center design isn’t simply about infrastructure redundancy. As senior company executives pay more attention to what’s happening in the data center, it is more important than ever for a data center design to match specific company needs.
More redundancy than necessary means overspending and, as you’ll learn later in the article, can actually work against reliability. Steven Shapiro, mission critical practice lead at Morrison Hershfield, an engineering firm that does a lot of data center projects, said companies have to align business mission with expectations of data center performance when deciding how redundant the design should be.
Shapiro talked about the basics of data center design decisions from the availability perspective in a presentation at last week’s Data Center World conference in National Harbor, Maryland. Here are some of the highlights from his presentation:
More redundancy doesn’t always mean more reliability
It is important to design as closely as possible to the actual reliability needs of the applications, and more infrastructure redundancy doesn’t automatically make a system more reliable. In fact, there is a point at which increasing component redundancy lowers reliability, because the system becomes more complex and difficult to manage, Shapiro said.
Tier IV costs twice as much as Tier II
Infrastructure reliability level has to match the needs of the applications the data center is supporting. Simply designing and building the most reliable data center you can afford is not the smart way to go, especially considering the cost of redundancy.
The difference in cost between an Uptime Institute Tier I and a Tier II design, or between a Tier III and a Tier IV one, is small, but the jump from Tier II to Tier III is enormous: almost 100 percent. Citing Uptime’s own estimates, Shapiro said a Tier I data center with 15,000 square feet of computer space would cost $10,000 per kW of usable UPS-backed power capacity. The cost goes up to $11,000 per kW for a Tier II facility, but to $20,000 for Tier III and $22,000 for Tier IV.
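To make those per-kW figures concrete, here is a minimal sketch that computes total build cost at each tier for a hypothetical 1,500 kW of UPS-backed capacity. The capacity figure is an assumption (roughly 100 W per square foot over 15,000 square feet); the per-kW costs are the Uptime estimates cited above.

```python
# Per-kW build costs from the cited Uptime estimates; the 1,500 kW of UPS-backed
# capacity is a hypothetical figure used only to show the tier-to-tier jump.
COST_PER_KW = {"Tier I": 10_000, "Tier II": 11_000, "Tier III": 20_000, "Tier IV": 22_000}
ASSUMED_CAPACITY_KW = 1_500

for tier, per_kw in COST_PER_KW.items():
    print(f"{tier}: ${per_kw * ASSUMED_CAPACITY_KW:,}")
```

Under that assumption, the Tier II build comes in around $16.5 million while the Tier III build roughly doubles to $30 million, which is the jump Shapiro highlighted.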
2(N+1) UPS config not much more reliable than 2N UPS
In another example where more redundancy doesn’t mean more reliability, Shapiro said a design doesn’t get much more reliable by going from a 2N UPS configuration, which has enough UPS modules for the IT load times two, to a 2(N+1) configuration, which has IT load plus one more module times two.
The probability of failure for a system that has 2N UPS, N+1 generator capacity, dual utility feeds, an alternate-source transfer switch, and IT gear with dual power cords is 4.41 percent, according to Shapiro. A system that is the same in every other respect but has a UPS configuration of 2(N+1) has the same probability of failure.
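To see why the extra UPS modules barely move the needle, consider a toy reliability model (this is an illustration, not Shapiro’s methodology): with two independent UPS systems, the load is dropped only if both sides fail at once, and that joint probability is already tiny before the extra module per side is added. The module count and per-module failure probability below are assumptions chosen only for illustration.

```python
from math import comb

def side_works(modules, needed, p_fail):
    """Probability that at least `needed` of `modules` UPS modules on one side
    are healthy, assuming independent failures with probability p_fail each."""
    return sum(
        comb(modules, k) * (1 - p_fail) ** k * p_fail ** (modules - k)
        for k in range(needed, modules + 1)
    )

def dual_system_failure(modules_per_side, needed, p_fail):
    """Both independent sides must fail before the UPS layer drops the load."""
    p_side_fail = 1 - side_works(modules_per_side, needed, p_fail)
    return p_side_fail ** 2

N = 4      # hypothetical number of modules needed to carry the full IT load
p = 0.01   # hypothetical per-module failure probability over the study period

print(f"2N     UPS-layer failure: {dual_system_failure(N, N, p):.2e}")
print(f"2(N+1) UPS-layer failure: {dual_system_failure(N + 1, N, p):.2e}")
```

Under these assumptions, both configurations contribute failure probabilities that are orders of magnitude below the roughly 4 percent figure Shapiro quotes for the system as a whole, which is why adding a spare module per side doesn’t change the overall number.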
2N generator config is marginally more reliable than N+1
A 2(N+1) generator configuration makes a difference in availability compared to an N+1 config, albeit a small one. In a system with 2(N+1) UPS, dual utility feeds, an alternate-source transfer switch, and dual-corded IT equipment, the difference in failure probability between an N+1 generator configuration and a 2(N+1) configuration is about 1.5 percent – 4.41 percent for the former and 2.94 percent for the latter.
Even Tier IV, the highest level in Uptime’s rating system, doesn’t require redundant generators. Uptime’s requirements call for a generator that will run continuously, even during maintenance. That’s a guarantee all major generator manufacturers will readily provide, satisfying the requirement, Shapiro said.
Tier III and Tier IV requirements do, however, call for redundant power distribution from the generator plant and for the fuel supply infrastructure to be concurrently maintainable or fault-tolerant.
15 percent of generators fail after eight hours of running
Generator redundancy is important because generators aren’t infallible. Even if a generator starts successfully, and the facility switches to backup power without incident, things change when generators have to run for prolonged periods of time.
Hurricane Sandy’s aftermath in New York provided that rare test of generator reliability over long runs, and many generators failed the test. A number of facilities operated by Morrison Hershfield clients switched to generator power and saw the lights go down after hours of operation, Shapiro said. The failures happened for different reasons, but in one case, a genset failed when it reached the bottom of its fuel tank and took in impurities that had accumulated there and weren’t filtered out.
He cited a study by the Idaho National Engineering Laboratory which found that 2 percent of emergency diesel generators failed to start; 5 percent failed after half an hour of continuous operation; 15 percent failed after eight hours; and 1 percent failed after 24 hours.
Tier requirements alone won’t determine reliability
While Uptime’s Tier system defines the reliability of an infrastructure design, many factors beyond design affect reliability. They include site location, construction of the building, quality of the equipment, the commissioning process, age of the site, the operations and maintenance practices of the management, personnel training, and level of personnel coverage.

3:00p

When Monitoring Runs Amok

Gerardo Dada is VP of Product Marketing and Strategy for SolarWinds Cloud.
Anyone who’s ever binged on their favorite junk food (or even Netflix these days) knows that “too much of a good thing” can quickly become a reality. All too often the same principle can be applied to organizations that leverage cloud monitoring tools. With so many tools available, it’s tempting to monitor everything. However, an abundance of metrics can be redundant, confusing and ultimately transform monitoring from a simple, behind-the-scenes action to a daily responsibility that requires a significant investment in time from your engineering team.
That’s not to say monitoring isn’t important; the cost of downtime alone makes monitoring operational metrics a necessity. Still, without proper planning and implementation, monitoring can eventually become a project in and of itself, one that demands attention and resources better spent elsewhere while failing to provide the value technology companies expect from monitoring systems.
The truth is, building an effective monitoring infrastructure is no easy task. Although the number of commercial and open source monitoring tools on the market has exploded in the last 10 years, it’s unlikely that any one tool can deliver exactly the data and insights that are most valuable to the business. From bandwidth, security systems, servers, and code management and implementation metrics all the way to high-level business metrics, there is a plethora of data points available to collect. On top of that, as businesses expand, they often feel they need more monitoring power to keep things running smoothly. Consequently, organizations will patch several tools together – each providing different metrics – to create a massive, complex monitoring system that requires dedicated management resources. In these cases monitoring becomes a task in itself, rather than providing the business with a seamless foundation of actionable data.
Business expansion doesn’t need to herald the creation of a “Frankenstein” monitoring system, though. Many start-ups have successfully grown their businesses without losing valuable resources to managing monitoring infrastructure. Take Slack, for example: a San Francisco-based start-up that delivers a team communications application to 1.1 million users each day – and doubles its user base every three months. With that kind of exponential growth, not to mention a small team of engineers, Slack’s Ops team has to be smart to keep pace, and that means avoiding mammoth, costly monitoring systems. Deploying one solution that neatly aggregates metrics and results has allowed Slack engineers to become more engaged with data, which translates into better availability, security, and performance.

Unfortunately, streamlining a monitoring system can be an arduous process. In many instances large monitoring systems are simply too interconnected to dismantle without causing significant outages and downtime; at other times they may seem like a necessary evil for a rapidly expanding business. But at a time when engineering talent comes at a premium, you should look to leverage tools that don’t require two engineers to manage them.

So what does an ideal monitoring infrastructure look like? It’s different for every organization, but by keeping the following best practices in mind you can ensure the days of managing a massive, patchwork monitoring system are safely behind you.
Avoid Over-Monitoring
In many cases this issue can be addressed by evaluating your organization’s response to two questions. The first: for whom am I monitoring? Are metrics more important to the operations engineer, the product manager, or the C-suite? Even within the engineering contingent there may be a wide array of monitoring needs. The second: what do I really need to monitor? It’s tempting to monitor everything, but organizations currently bogged down with complex infrastructures likely already know that too many metrics can be redundant and confusing. IT pros should ask themselves what metrics they really need to keep things running smoothly without drowning in alerts and data. At the end of the day, investing in a separate tool for each group in your organization is costly and inefficient. Your organization must identify the most valuable audience and metrics to avoid requiring multiple tools.
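As a thought exercise, here is a minimal sketch of what a pared-down metric catalog might look like once the “who” and “what” questions are answered. The metric names, audiences, and thresholds are hypothetical and not tied to any particular monitoring tool.

```python
# Hypothetical pared-down metric catalog: each metric is tied to the one audience
# that acts on it and to a single alert threshold. Names and numbers are
# illustrative only.
KEY_METRICS = [
    {"name": "p99_request_latency_ms",  "audience": "ops",     "alert_above": 500},
    {"name": "error_rate_pct",          "audience": "ops",     "alert_above": 1.0},
    {"name": "daily_active_users",      "audience": "product", "alert_below": 100_000},
    {"name": "infra_cost_per_user_usd", "audience": "exec",    "alert_above": 0.05},
]

def breaches(metric, value):
    """True if the observed value crosses the metric's single alert threshold."""
    if "alert_above" in metric:
        return value > metric["alert_above"]
    return value < metric["alert_below"]

print(breaches(KEY_METRICS[0], 730))   # a 730 ms p99 latency would alert ops
```

The point of a structure like this is less the code than the discipline: every metric has exactly one owner and one action attached to it.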
Assess Resources as you Grow
For organizations that are currently expanding, it may seem necessary to substantially grow your monitoring capabilities as well. But it’s important to note that monitoring is a means to an end – not a task itself. And while monitoring is mission critical and can significantly improve time to resolution, it is not itself a profitable business asset. As your business grows, you should assess the resources you’re allocating to accurately determine what, if any, expansion or investment is needed.
Focus on the Data
It’s easy to lose sight of the point of monitoring – the resulting data that informs business and operational decisions – when your business is weighed down with a complex infrastructure. To see the bigger picture, your business should invest in a new, comprehensive monitoring tool based on who truly utilizes the metrics, which metrics will deliver the most valuable data for actionable insights, and how much it makes sense to spend on a self-sufficient monitoring tool. Once you’ve dismantled your existing, inefficient infrastructure and replaced it with a streamlined tool, you will be able to better realize the benefits of monitoring while more thoughtfully deploying additional systems or metrics.
In today’s digital environment, quality of user experience can mean the success or failure of a business. Whether your company provides a specialized service to a small user base or is experiencing explosive growth, maintaining insights into the infrastructure and application level is critical for the operations team. Almost more critical for organizations, however, is to remember that monitoring should not be a task at the expense of valuable IT talent and resources. Rather, your business must employ strategic, streamlined monitoring that provides real business value in and out of the data center.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

5:46p

Rackspace Launches New Managed Security and Compliance Offerings
This article originally appeared at The WHIR
Rackspace on Tuesday announced new Managed Security and Compliance Assistance offerings in which Rackspace security experts help customers with strategic planning for security monitoring and threat analysis.
The offering will initially be in limited availability for Rackspace’s global customers, delivered out of its US offices.
Though demand for DDoS mitigation skills is on the rise for employers, not all companies have the on-site team able to effectively address security and compliance concerns. This is where a managed service comes in. Rackspace’s offering is backed by a 24/7/365 Customer Security Operations Center (CSOC), which will open at Rackspace headquarters in October.
“Every day, businesses are at risk of being affected by a security threat or data breach,” Perry Robinson, vice president and general manager of Managed Security at Rackspace said. “These threats often occur without warning, can be directed at any part of the business, and come from anywhere in the world. Damage from malicious parties can range from lost revenue and recovery costs to potential liability costs and compliance-related fines.”
The services that are part of the Managed Security offering include host and network protection, security analysis, vulnerability management, threat intelligence, compliance assistance, configuration hardening and monitoring, patch monitoring, user monitoring, and file integrity management. Onboarding consultation and deployment are also part of the service.
“Cloud vendors need to be more proactive and help their customers understand and manage security in the cloud, and with this new Managed Security offering, Rackspace is responding to that need,” Christopher Wilder, practice head and senior analyst, cloud services at Moor Insights & Strategy said in a statement. “As organizations increasingly adopt complex cloud environments and cyberattacks become more frequent, customers can benefit from the 24/7 security expertise and support from Rackspace to help them keep their information and data secure. These services are even more important to smaller and mid-sized firms who might not be able to find or afford the top security talent to run their IT organizations.”
Rackspace’s new managed security offering comes as the company has added Managed Cassandra to its NoSQL data services portfolio through a partnership with DataStax.
This first ran at http://www.thewhir.com/web-hosting-news/rackspace-launches-new-managed-security-and-compliance-offerings

5:53p

Linux Foundation: Open Source Code Worth $5B
This post originally appeared at The Var Guy
By Christopher Tozzi
What is open source software worth? That’s the difficult question the Linux Foundation is aiming to help answer with a new report that measures the development costs of its Linux-related Collaborative Projects.
Placing a price tag on Linux and other open source platforms is tough for several reasons. Most obviously, a lot of open source software is available at no charge, which means there’s no clear answer to how much people would be willing to pay for it if it cost money. In addition, open code is often shared freely between projects, and some developers are paid for their work by companies while others volunteer their time.
But over the years, people have developed methods for measuring the value of open source code. In studying its Collaborative Projects, the Linux Foundation relied largely on the SLOCCount Model, which David A. Wheeler created in 2002 to determine the financial worth of a Linux-based operating system. The SLOCCount Model evaluates the total lines of code in a software project.
The study also took into account estimates of the number of person-hours required to produce the Collaborative Projects code, as well as development costs.
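For readers unfamiliar with this approach, here is a rough sketch of the kind of estimate SLOCCount produces from a line count, using the basic COCOMO “organic mode” formulas. The salary and overhead defaults below are illustrative assumptions, not figures taken from the Linux Foundation report.

```python
def cocomo_basic_estimate(sloc, annual_salary=56286, overhead=2.4):
    """Basic COCOMO 'organic mode' estimate of the kind SLOCCount produces.
    The default salary and overhead factor are illustrative assumptions."""
    ksloc = sloc / 1000.0
    effort_pm = 2.4 * ksloc ** 1.05        # person-months of development effort
    schedule_m = 2.5 * effort_pm ** 0.38   # estimated elapsed calendar months
    cost_usd = (effort_pm / 12.0) * annual_salary * overhead
    return effort_pm, schedule_m, cost_usd

# Example: a hypothetical 10-million-line collection of projects
effort, schedule, cost = cocomo_basic_estimate(10_000_000)
print(f"{effort:,.0f} person-months, {schedule:.0f} months, ${cost:,.0f}")
```

Applied across the full Collaborative Projects code base, estimates of this kind add up to the multi-billion-dollar figure in the report.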
So, what’s the total value of the Linux Foundation’s Collaborative Projects code? A whopping $5 billion, according to the report, which is freely available from the Linux Foundation’s website.
The Collaborative Projects code is not owned by the Linux Foundation. The projects are instead a set of independently funded ventures that the Linux Foundation helps organize. More than 500 companies and thousands of developers drive them.
Still, the Linux Foundation is happy to be able to place a more definitive financial value on open source code.
“Over the last few years every major technology category has been taken over by open source and so much opinion has been shared about the proliferation of open source projects, but not about the value,” said Amanda McPherson, vice president of Developer Programs and CMO at Linux Foundation, and co-author of the report. “As the model for building the world’s most important technologies have evolved from the past’s build vs. buy dichotomy, it is important to understand the economic value of this development model. We hope our new paper can help contribute to that understanding.”
This first ran at http://thevarguy.com/open-source-application-software-companies/093015linux-foundation-study-open-source-collaborative-code-wor

7:03p

AP: States Issued $1.5B in Data Center Tax Breaks over Past Decade

Competing for big construction projects whose economic-development value remains controversial, state governments around the US have committed to a total of about $1.5 billion in tax breaks for data center builds over the past decade, an analysis by the Associated Press concluded.
The high rate of growth in demand for content and services delivered over the internet has driven a lot of data center construction around the country. Big internet companies and cloud service providers, such as Facebook, Google, Amazon, and Microsoft, continuously build massive data centers in rural areas.
As more and more people in population centers other than the biggest metros connect to the internet and consume online content and cloud services, companies are building more data centers in places not traditionally known as major data center hubs, so they can deliver that content and those services to customers in those areas with better quality.
Data centers have become something state and local economic-development officials pursue, and tax incentives are one of the things they leverage to attract them. Data center developers examine a long list of factors during the site-selection process, and availability of tax breaks is high on the list, along with things like cost of energy, fiber-optic network infrastructure, climate, population density, cost of real estate, and risk of natural disasters.
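One way to make those site-selection trade-offs concrete is a simple weighted scoring model over the factors just listed. The sketch below is purely illustrative; the weights and scores are assumptions, not anything the article or the AP analysis prescribes.

```python
# Hypothetical weighted-scoring sketch for comparing candidate sites. The factor
# list mirrors the article; the weights and 1-10 scores are purely illustrative.
WEIGHTS = {
    "tax_incentives": 0.20,
    "energy_cost": 0.20,
    "fiber_infrastructure": 0.15,
    "natural_disaster_risk": 0.15,
    "climate": 0.10,
    "population_density": 0.10,
    "real_estate_cost": 0.10,
}

def site_score(scores):
    """Weighted average of per-factor scores (1 = poor fit, 10 = excellent fit)."""
    return sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS)

candidate = {
    "tax_incentives": 9, "energy_cost": 8, "fiber_infrastructure": 6,
    "natural_disaster_risk": 9, "climate": 7, "population_density": 4,
    "real_estate_cost": 8,
}
print(f"Composite site score: {site_score(candidate):.1f} / 10")
```

In a model like this, generous tax incentives carry meaningful weight but still compete with energy cost, connectivity, and disaster risk, which matches how developers describe the process.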
At least 23 states have passed legislation to provide data center tax breaks specifically, the AP said. Another 16 have offered incentives to data center developers through general economic-development programs.
A number of states have recently passed new data center tax breaks, extended existing ones, or introduced new data center tax legislation.
Most of the controversy around data center tax breaks centers on their ability to create jobs. Even the largest data centers don’t need big teams to run them, but government officials usually pitch tax-break legislation as stimulus for job growth.
Another point of controversy is state governments spending state resources on projects that benefit only one area in a state.
The economic impact of a data center construction project on an area is hard to quantify. The positive impact is a combination of things like a boost to local tax revenue while construction is going on, the taxes operators pay on energy purchases, and the proverbial place on the map for otherwise little-known towns, which acquire a tech-hub status of sorts.
In some more ideal cases data center hubs form around an initial facility. This has been the case in Prineville, Oregon, where Facebook built its first data center early this decade. Besides the growing Facebook data center campus there, Prineville is now also home to an Apple data center.
Another success story is Quincy, Washington, a rural town where sales tax breaks for data center operators have attracted projects by Microsoft, Yahoo, and Dell, among others. Data centers have more than quadrupled local tax revenues in Quincy over the past decade, according to the AP report.
One of the newest hubs starting to form is in Reno, Nevada, where an Apple data center project was followed by a massive $1 billion build by Las Vegas-based Switch, which claims its future Reno data center will be the biggest in the world. The Switch facility’s anchor tenant will be eBay.
Rackspace is also said to be evaluating Reno as a potential site for its next data center, seeking tax incentives there.