Data Center Knowledge | News and analysis for the data center industry
Thursday, November 12th, 2015
1:00p
When is the Best Time to Retire a Server?

There’s no magic number for the length of the hardware refresh cycle that works for everyone, but the set of variables that together determine the ideal time to replace a server is fairly uniform across the board. Identifying those variables and analyzing how they interact is the question Amir Michael and his team at Coolan, a data center hardware operations startup, recently set out to answer.
Everything that has to do with managing and designing data center infrastructure cost-efficiently has occupied Michael for many years now. After five years as a hardware engineer at Google, he spent four years working on hardware and data center engineering teams at Facebook. While at Facebook he co-founded the Open Compute Project, the open source hardware and data center design initiative.
He and two colleagues founded Coolan in 2013 with the idea of using their years of experience with web-scale data center infrastructure to help other types of data center operators run their infrastructure more efficiently and cost-effectively.
In a recent blog post, Michael outlined the basics for calculating the best time to retire a server. Sometimes because of tight budgets and sometimes because it’s hard to predict demand for IT capacity, companies wait too long to replace aging hardware and pay penalties in hidden costs as a result.
Michael heard from one of his customers who said their company had some servers that were more than eight years old. “We’re just keeping it around because it’s there, and it’s an easy thing to do,” goes the typical explanation, Michael said in an interview. “It’s hard to think about all the different factors that go into making this decision.”
There is always a point in time at which holding on to a server becomes more costly than replacing it with a new one. Finding out exactly when that point comes requires a calculation that takes into account all capital and operational expenditures associated with owning and operating that server over time.
According to Michael, the basic factors that should go into the calculation are:
- Cost of servers
- Data center CapEx
- Cost of cluster infrastructure and UPS
- Cost of network equipment
- Cost of data center racks and physical equipment
- Data center OpEx
There are other considerations, such as increased failure rate as hardware ages, that weren’t included in the analysis on purpose.
The full breakdown of how all the factors combine over time to create a clear picture of the total cost of ownership is in Michael’s blog post. Essentially, the idea is that as hardware gets better, you can do more with fewer boxes, but that doesn’t mean replacing those boxes with new ones as soon as the new ones come out will add up to a lower TCO. There is a “magic number” of years at which point CapEx and OpEx intersect in a way that makes it more cost-effective to upgrade, but, as in the scenario he outlined, it’s usually not the first year.
In the hypothetical example Michael used, applying Coolan’s TCO model to a fleet of storage servers totaling 100 PB of capacity over a six-year period, replacing the servers with newer, more efficient boxes after just one year cost close to $8 million more than holding on to them for the entire six years. The gap narrowed to about $3 million with a refresh after two years and disappeared completely with a refresh after three years. In other words, the six-year TCO for the same storage capacity came out the same whether the servers were replaced with newer ones after three years or kept the whole time.
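To make the break-even idea concrete, here is a minimal sketch of that kind of calculation. It is not Coolan’s actual model: the dollar figures, the six-year horizon, and the assumption that new hardware of equivalent capacity gets about 20 percent cheaper to buy and to run each year are all hypothetical placeholders.

```python
# Minimal break-even sketch, not Coolan's model. All figures are hypothetical.
# Assumption: each year, a replacement fleet of equivalent capacity gets
# cheaper to buy and cheaper to run, while the old fleet's OpEx stays flat.

HORIZON_YEARS = 6
OLD_OPEX = 1_500_000          # hypothetical annual OpEx of the aging fleet
NEW_CAPEX_TODAY = 3_000_000   # hypothetical price of a replacement fleet today
IMPROVEMENT = 0.20            # hypothetical yearly gain in price/performance

def tco(refresh_year: int) -> float:
    """Total cost over the horizon if the fleet is replaced at `refresh_year`
    (refresh_year == HORIZON_YEARS means it is never replaced)."""
    factor = (1 - IMPROVEMENT) ** refresh_year
    capex = NEW_CAPEX_TODAY * factor if refresh_year < HORIZON_YEARS else 0
    new_opex = OLD_OPEX * factor          # denser boxes cost less to run
    return (refresh_year * OLD_OPEX
            + capex
            + (HORIZON_YEARS - refresh_year) * new_opex)

baseline = tco(HORIZON_YEARS)             # hold the old fleet the whole time
for year in range(1, HORIZON_YEARS + 1):
    print(f"refresh after year {year}: delta vs. no refresh = "
          f"{tco(year) - baseline:+,.0f}")
```

With these placeholder inputs, an early refresh costs more than standing pat, the gap shrinks each year, and the total bottoms out a few years in, which is the shape of the curve the article describes.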
Newer servers are denser, so you need fewer of them, which lowers OpEx. Keep old servers too long, and your OpEx stays at roughly the same level while yielding less and less useful work, until it becomes cheaper to replace the machines than to keep supporting them.
The problem of holding on to servers for too long looks even bigger when you consider that companies are not only supporting underperforming machines; many also have servers in their data centers that don’t run any useful workloads at all. According to recent research conducted by TSO Logic, a company that also studies the efficiency and cost of IT operations, together with Stanford University research fellow Jonathan Koomey, about 30 percent of servers deployed worldwide do no computing, representing roughly $30 billion worth of idle assets.
Coolan’s TCO model for hardware is available for free on the company’s website (Google Docs spreadsheet). As Michael put it in his blog post, aging infrastructure costs more than many people think, but deciding when is a good time to spend the capital on new hardware doesn’t have to be a guessing game.
“With each new generation of hardware, servers become more powerful and energy efficient,” he wrote. “Over time, the total cost of ownership drops through reduced energy bills, a lower risk of downtime, and improved IT performance.”

4:00p

Don’t Forget About Memory: DRAM’s Surprising Role in the High Cost of Data Centers

Riccardo Badalone is co-founder and CEO of Diablo Technologies.
Ask the average CEO to explain why data centers are getting bigger and more expensive, and you’d likely get a lot of wrong answers before you got to the right one: DRAM.
System memory, “commodity” silicon that rarely gets any attention, is more often than not the hidden issue forcing companies to build new data centers – facilities that can cost $1 billion or more.
The reasons are subtle – though not to the data center architects. Fortunately, that issue is on the radar of enterprise hardware vendors, and there are potential solutions.
The problem starts with the fact that having vast amounts of system memory – generally DRAM – has become crucial to getting the necessary performance out of the new breed of enterprise and data center applications.
Big Data is called that for a reason. One large company I’ve talked with is scaling up to run a web application that contains 10 petabytes of data, and that data needs to be in system memory at all times in order to be useful. Even with the fastest storage technologies, performance would degrade to unacceptable levels if the servers’ CPUs were continually waiting for data to move back and forth between storage and memory.
This is where DRAM economics enters the picture. Most people unfamiliar with the DRAM market would predict that if you plotted the price points of DRAM modules with 8, 16, 32 and 64 gigabytes of memory, you’d see a line angling slowly and gently upwards. What you actually get, though, is quite different. While the price actually declines from 8 GB to 16 GB per module, it increases sharply between there and 32 GB, and more sharply still from 32 GB to 64 GB. A 32 GB memory module is 2.5 times more expensive than a 16 GB module; a 64 GB module is 7 times more expensive than a 32 GB one.
The bottom line: DRAM technology is struggling to deliver the capacity and low energy use that data centers need. As a result, system architects often fill up their motherboard memory module slots with lower-capacity, and thus less expensive, modules. The average data center server today contains less than 256 GB of system memory. It’s a reflection of the fact that a server fully loaded with 64 GB modules spends 17.5 times as much on memory as one loaded with 16 GB modules. With that kind of economics, it’s cheaper to simply add more servers, despite the added capital and operating expenses.
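A quick arithmetic sketch shows how steep that curve is. The base price of a 16 GB module and the per-server slot count below are hypothetical placeholders; only the 2.5x and 7x step-ups between module sizes come from the figures above.

```python
# Sketch of the non-linear DRAM pricing described above. The 16 GB base
# price and the slot count are hypothetical; the 2.5x and 7x step-ups are
# the ratios cited in the text.

BASE_16GB_PRICE = 100.0                      # hypothetical, in dollars
module_prices = {
    16: BASE_16GB_PRICE,
    32: BASE_16GB_PRICE * 2.5,               # 2.5x the 16 GB module
    64: BASE_16GB_PRICE * 2.5 * 7,           # 7x the 32 GB module
}

SLOTS_PER_SERVER = 16                        # hypothetical DIMM slot count

for size, price in module_prices.items():
    per_gb = price / size
    fully_loaded = price * SLOTS_PER_SERVER
    print(f"{size:>2} GB module: ${price:>8.2f} (${per_gb:.2f}/GB), "
          f"fully loaded server: ${fully_loaded:,.2f} "
          f"for {size * SLOTS_PER_SERVER} GB of memory")
```

With these placeholder numbers, the fully loaded 64 GB configuration costs 17.5 times as much as the 16 GB one for only four times the capacity, which is exactly the trade-off that pushes architects toward adding servers instead of memory.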
The company I referenced earlier uses 100,000 servers to run its application with responsiveness that customers demand. In addition to a $300 million capital expenditure just for the servers, it pays substantial bills for both data center space and power.
What the industry needs is a way to cut costs while maintaining or improving quality of service. Adding more servers is a weak solution. Data center owners require vastly more system memory than what today’s generations of machines can support; they need to fit more into their billion dollar data centers while cutting their power bills. It’s a challenge enterprise vendors are tackling.
Intel and Micron, for example, cited the DRAM issue when they announced a new “3D XPoint” memory substrate that they say will be significantly denser and less expensive than current DRAM. Unfortunately, that technology won’t be ready for a number of years.
But data center architects have a problem right now, and there’s a solution that’s ready: flash memory. When most IT people think of flash, they think of solid-state drives replacing hard drives for data storage. For all its cost advantages, they say, flash is just too slow for system memory. But that’s no longer true. Now, with the right design, you can use flash as memory and still get web-scale performance.
You can currently buy a product that makes it possible to have four times the memory density of DRAM for a fraction of the cost – and other companies are likely taking their own approaches to the issue.
As an industry, it’s fair to say that until recently, we had “forgotten about memory” and the role it plays in making data centers an ever-bigger line item on income statements. The good news is that the vendors are now on the case. The problem will be solved, and every company that runs data centers will benefit.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

6:14p
New York Proposes New Cybersecurity Regulations for Financial Institutions 
This article originally appeared at The WHIR
The New York Department of Financial Services has sent a letter to Financial and Banking Information Infrastructure Committee members outlining potential new cybersecurity regulations. The letter (pdf), dated Monday, provides a review of the assessment measures taken by the organization, as well as proposed regulatory criteria including the establishment of policies and procedures, use of multi-factor authentication, and employment of Chief Information Security Officers and other cybersecurity personnel.
The letter by Acting Superintendent of Financial Services Anthony Albanese is part of an ongoing process which has previously included the introduction of cybersecurity questions into the regulatory approval process and a proposal for new legislation from state attorney general Eric T. Schneiderman. The FBIIC consists of regulators and industry groups including the Securities and Exchange Commission, the Federal Deposit Insurance Corporation, and the Federal Reserve Bank of New York.
Surveys and analysis the NYDFS began conducting in 2013 kicked off a financial cybersecurity review process, which continued with risk assessments and a further survey, this time focused on interactions with third-party service providers. That process has produced the proposed regulations in eight areas outlined in the letter.
The NYDFS proposes that financial institutions adopt:
- Cybersecurity policies and procedures addressing 12 topics
- Third-party service provider contracts that include six security provisions
- Multi-factor authentication for both customers and employees
- Chief Information Security Officers
- Application security procedures, guidelines, and standards
- Cybersecurity personnel and intelligence, which could be provided by a third party
- Audit trail systems
- Notice of cybersecurity incident requirements
Albanese notes in the letter that the list is neither final nor complete, and that additional dialogue among industry and regulatory stakeholders is necessary to finalize the new requirements.
Also this week US prosecutors announced charges against conspirators in the 2014 JP Morgan data breach, which remains the most high-profile hack ever on a financial institution.
This first ran at http://www.thewhir.com/web-hosting-news/new-york-proposes-new-cybersecurity-regulations-for-financial-institutions

6:32p
Logicalis US: CIOs Get No Respect From LOB Executives 
This post originally appeared at The Var Guy
In the world of enterprise IT, many corporate CIOs can relate to legendary comic Rodney Dangerfield, in that both are often treated with “no respect”.
But unlike Dangerfield, CIOs are refusing to take such treatment lying down, and are fighting back against Line of Business (LOB) executives who refuse to include them in the decision-making process for major technology investments, according to a new study from Logicalis US.
The IT solutions and managed services provider recently published its third annual Global CIO Survey, which polled more than 400 CIOs worldwide to gauge their opinions on the state of enterprise security decision-making and shadow IT.
According to the study, IT leaders are struggling to cut down on shadow IT because LOB executives continually bypass both the CIO and IT departments when making major technology investments. Logicalis found that 31 percent of CIOs globally are routinely bypassed by LOB in IT purchasing decisions, while 90 percent are bypassed at least some of the time.
While CIOs have typically had limited means of pushback in regard to their exclusion from technology purchases, about 42 percent are now actively utilizing a new internal service provider model that will help them increase business value and relevancy in the decision-making process, according to Logicalis. This new model will help CIOs to regain their status as security experts and make them more relevant in the eyes of Line of Business executives, according to Vince DeLuca, CEO of Logicalis US.
“The consumerization of IT and the widespread availability of as-a-service cloud options has, therefore, made it both easy and, in many cases, practical to bypass the IT department,” said DeLuca in a statement. “These actions, however, have yielded significant consequences for the IT professionals tasked with corporate IT governance and security measures – a fact which has forced many CIOs to redefine their role from that of technologist to what is fast becoming known as the ‘internal service provider.’”
Despite the amount of disregard shown toward CIOs, the survey found that 66 percent of CIOs do hold the balance of power over technology spending, in that they are responsible for more than half of all the IT purchasing decisions in their organizations. However, this number has decreased by six percent since last year’s study.
So how can CIOs regain their status as security experts in a time when everyone considers themselves to be an IT pro? Logicalis suggests they continue to focus on becoming internal service providers, so they can ultimately create a leaner and more efficient department capable of managing services for LOB executives. Currently, 42 percent of CIOs spend nearly half of their time developing their internal service provider model, according to Logicalis.
So CIOs, take heart: You are not irrelevant or obsolete. You just need to redefine your role in the larger organizational hierarchy so executives understand just how vital you are to the business. Reinvention may be easier said than done, but it certainly isn’t impossible.
This first ran at http://thevarguy.com/my-world/logicalis-us-cios-get-no-respect-lob-executives

7:29p

How Cloud Computing Changes Storage Tiering

Storage has always been an interesting topic when it comes to cloud. Organizations today are making big changes in the way they manage and control their cloud storage environments, as the amount of data they have to manage explodes. Cisco’s latest cloud index predicts that annual global data center IP traffic will reach 10.4 zettabytes by the end of 2019, up from 3.4 ZB per year in 2014, growing three-fold over five years.
New challenges in controlling data traversing the data center and the cloud have emerged. How do we handle replication? How do we ensure data integrity? How do we optimally utilize storage space within our cloud model? The challenge is translating the storage-efficiency technology that’s already been created for the data center — things like deduplication, thin provisioning, and data tiering — for the cloud.
I recently had a chat with Jeff Arkin, senior storage architect at MTM Technologies, who argued that cloud computing adds an extra tier. “Cloud introduces another storage tier which, for example, allows for moving data to an off-premise location for archival, backups, or the elimination of off-site infrastructure for disaster recovery,” he said. “This, when combined with virtual DR data center, can create a very robust cloud-ready data tier.”
Before we get into moving and manipulating cloud-based data, it’s important to understand how data tiers work. Tiering means assigning data to different types of storage based on the following factors (a minimal example of such an assignment rule follows the list):
- Protection level required – RAID 5 v. RAID 0 v. mirrored sync or a-sync
- Performance required – Low-latency application requirements
- Cost of storing data – SSD v. SAS v. SATA
- Frequency of access – Less accessed data stored on cheaper near-line storage, such as SATA
- Security – Requirements for encryption of data at rest or compliance issues with multi-tenancy or public clouds
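Here is a minimal sketch of what an assignment rule built on those criteria might look like. The tier names, thresholds, and workload attributes are hypothetical; a real policy would be driven by an organization’s own SLAs, cost models, and compliance requirements.

```python
# Hypothetical tier-assignment rule based on the criteria above.
# Tier names, thresholds, and attributes are placeholders for illustration.

from dataclasses import dataclass

@dataclass
class Workload:
    latency_sensitive: bool      # needs low-latency media (e.g., SSD)
    days_since_last_access: int  # proxy for frequency of access
    requires_encryption: bool    # data-at-rest / compliance requirement

def assign_tier(w: Workload) -> str:
    if w.requires_encryption:
        return "on-prem encrypted tier"   # keep compliance-bound data local
    if w.latency_sensitive:
        return "SSD tier"                 # performance-critical workloads
    if w.days_since_last_access > 90:
        return "cloud archive tier"       # cold data moves off-premise
    return "near-line SATA tier"          # everything else stays cheap

print(assign_tier(Workload(False, 180, False)))  # -> cloud archive tier
print(assign_tier(Workload(True, 1, False)))     # -> SSD tier
```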
This methodology scales across your on-premise data centers and your entire cloud ecosystem. When creating storage and data tiers, it’s absolutely critical to look at and understand your workloads. Are you working with high-end applications? Are you controlling distributed data points? Maybe you have compliance-bound information which requires special levels of security. All of these are considerations when assigning data a specific tier.
Data can be moved between tiers in different ways (a small post-process example follows this list):
- Post process analysis – Running scheduled analytics to determine historical hot v. cold data
- Real time analysis – Moving hot data (blocks) into SSD or other cache in real-time, based on current activity
- Manual placement – Positioning of data and information based on location, user access, latency, and other variables
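As an illustration of the post-process approach, the following sketch flags blocks that have not been touched within a window as candidates for demotion to a cheaper tier. The record format, the 30-day threshold, and the block IDs are hypothetical; in practice the access statistics would come from the array or file system.

```python
# Hypothetical "post process" pass over access records: anything not touched
# within the window is flagged for demotion to a cheaper tier.

from datetime import datetime, timedelta

COLD_AFTER = timedelta(days=30)   # hypothetical cold threshold

# (block_id, last_access) pairs; placeholders standing in for real
# access statistics exported by the storage system.
access_log = [
    ("blk-001", datetime(2015, 11, 10)),
    ("blk-002", datetime(2015, 8, 1)),
    ("blk-003", datetime(2015, 11, 1)),
]

now = datetime(2015, 11, 12)
cold_blocks = [bid for bid, last in access_log if now - last > COLD_AFTER]
print("demote to near-line/cloud tier:", cold_blocks)
```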
Here’s another example Arkin gave: Modern cloud and on-premise storage providers actually offer solutions that scan Tier-One data, checking for inactive data and moving stale data to private or public storage.
This is done automatically to ensure the best possible utilization of your entire storage ecosystem. Remember, we’re not just trying to control data in the cloud. For many organizations, storage spans on-premise and cloud resources. Intelligent data tiers and good automation practices allow the right repository to sit on the proper type of array and have the appropriate services assigned.
This type of dynamic storage management creates efficiencies at a whole new level. Not only are we able to position storage where we need it, this also helps deliver data faster to the end user. Remember, storage tiers can be contained within an array or across arrays and physical locations. One advantage to using cloud-based storage is the ability to reduce or increase capacity (and even performance) on demand. This means data tiers can be applied to cloud bursting requirements where storage is delivered on a pay-as-you-go basis.
Cloud has broken traditional storage models. Elastic storage means vendors and cloud providers will have to continue to adapt to the growing needs of the market and the modern consumer.