Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, May 24th, 2017

    12:00p
    How Much of the $900M in Wasted Cloud Spend Are You Responsible For?

    Brought to you by MSPmentor

    One of the greatest advantages of cloud computing is its scalability: the ability to use precisely the resources you need, precisely when you need them.

    But too often, managed service providers manage cloud environments without paying close enough attention to how wisely their customers’ cloud spend is being used.

    That’s the premise behind a new product from startup ParkMyCloud, billed as a way to instantly and dramatically reduce the costs of using public cloud services.

    See also: Google Opens a New Front in Cloud Price Wars

    Initially, the technology worked only with Amazon Web Services but was expanded in recent months to include Microsoft Azure.

    Functionality for Google Cloud Platform and integration with IBM SoftLayer are said to be in the works.

    In short, ParkMyCloud can be set to automatically shut down – or park – public cloud resources during no-use hours.

    “Reduce cloud cost by 65 (percent),” the company’s website reads. “Automate AWS (and) Azure scheduling in 15 (minutes).”

    In a news release announcing $1.65 million in seed funding last September, ParkMyCloud co-founder and CEO Jay Chapel described the startup this way:

    “Companies are spending billions of dollars a year on 24 x 7 servers, when many of those resources are idle much of the time,” the statement said. “ParkMyCloud customers can ‘park’ or pause non-production servers such as development, staging, testing and QA when they aren’t needed, saving companies up to 60 (percent) on their AWS bills.”

    That 60 percent figure was calculated for AWS public cloud.

    Azure, the fastest-growing public cloud service, was added later.

    “Dominant competitor (AWS) has had more time to feel and subsequently address growing pains that Azure users are now starting to feel,” Chapel wrote in a recent article for the publication CloudTech.

    “This means that AWS users have more options available to them to address certain concerns that come with using public cloud,” the piece goes on. “Chief among these concerns is managing costs.”

    Citing research from RightScale’s 2017 State of the Cloud report, Chapel offers the following analysis of cloud waste:

    • The public cloud IaaS market is $23 billion
    • 12 percent of that IaaS market is Microsoft Azure, or $2.76 billion
    • 44 percent of that is spent on non-production resources – about $1.21 billion
    • Non-production resources are only needed for an average of 24 percent of the work week, which means up to $900 million of this spend is completely wasted (a back-of-envelope check of this arithmetic appears below).
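
    The chain of figures above can be reproduced with simple arithmetic. The snippet below is only a back-of-envelope check in plain Python using the percentages Chapel cites; it is not ParkMyCloud’s own model.

    # Back-of-envelope check of the cloud-waste estimate cited above.
    iaas_market = 23e9        # total public cloud IaaS market (USD)
    azure_share = 0.12        # Azure's share of that market
    non_prod_share = 0.44     # portion of Azure spend on non-production resources
    needed_fraction = 0.24    # fraction of the work week those resources are needed

    azure_spend = iaas_market * azure_share            # ~$2.76 billion
    non_prod_spend = azure_spend * non_prod_share      # ~$1.21 billion
    wasted = non_prod_spend * (1 - needed_fraction)    # ~$0.92 billion, i.e. "up to $900 million"

    print(f"Azure spend: ${azure_spend / 1e9:.2f}B, "
          f"non-production: ${non_prod_spend / 1e9:.2f}B, "
          f"potentially wasted: ${wasted / 1e9:.2f}B")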

    ParkMyCloud connects to AWS via an IAM role or user credential, and to Azure via a dedicated credential.

    “Your compute resources will be displayed in ParkMyCloud’s single-view dashboard, across all availability zones and any number of AWS and/or Azure accounts,” the website states.

    “To ‘park’ your compute resources, you assign them schedules of hours they will run or be temporarily stopped…” it continues. “Most non-production resources…can be parked at nights and on weekends, when they are not being used.”
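
    As a rough illustration of the parking idea described above (not ParkMyCloud’s actual implementation), the sketch below uses boto3 to stop tagged non-production EC2 instances outside business hours and start them again during the work day. The Environment=dev tag and the weekday 08:00-to-19:00 schedule are assumptions made up for the example.

    # Hypothetical "parking" scheduler for non-production EC2 instances.
    # Assumes instances are tagged Environment=dev and that this script runs
    # periodically (for example, hourly via cron or a scheduled Lambda).
    from datetime import datetime
    import boto3

    ec2 = boto3.client("ec2")
    WORK_HOURS = range(8, 19)    # assumed business hours, local time

    def tagged_instances(state):
        """Return IDs of Environment=dev instances in the given state."""
        resp = ec2.describe_instances(Filters=[
            {"Name": "tag:Environment", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": [state]},
        ])
        return [i["InstanceId"]
                for r in resp["Reservations"] for i in r["Instances"]]

    def park_or_unpark(now=None):
        now = now or datetime.now()
        in_hours = now.weekday() < 5 and now.hour in WORK_HOURS  # Mon-Fri only
        if in_hours:
            ids = tagged_instances("stopped")
            if ids:
                ec2.start_instances(InstanceIds=ids)   # "unpark" for the work day
        else:
            ids = tagged_instances("running")
            if ids:
                ec2.stop_instances(InstanceIds=ids)    # "park" nights and weekends

    if __name__ == "__main__":
        park_or_unpark()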

    The dashboard provides a running tally on how much money the user is saving.

    Client dollars saved on cloud compute services can be put to work for other purposes, Chapel argued in another CloudTech piece.

    “This keeps the end user satisfied by giving them more value per dollar,” the article states. “The MSPs are satisfied by providing more, and stickier, services to their customers.”

    This article originally appeared on MSPmentor.

    4:46p
    SoftBank Takes $4B Stake in GPU Titan Nvidia

    Ian King (Bloomberg) — SoftBank Group Corp. has quietly amassed a $4 billion stake in Nvidia Corp., making it the fourth-largest shareholder in the graphics chipmaker, according to people familiar with the situation.

    The Japanese company, which just closed its Vision Fund, disclosed it owned an unspecified amount of Nvidia stock when it announced $93 billion of commitments to the technology investment fund on Saturday. A holding of 4.9 percent, just under the amount that would require a regulatory disclosure in the U.S., would be worth about $4 billion.

    A stake in Nvidia fits with SoftBank founder Masayoshi Son’s plans to become the biggest investor in technology over the next decade, with bets on emerging trends such as artificial intelligence. Under its co-founder and CEO, Jen-Hsun Huang, Nvidia has become one of the leaders of the charge by chipmakers to provide the underpinnings of machine intelligence in everything from data centers to automobiles.

    See also: Nvidia CEO Says AI Workloads Will Flood Data Centers

    SoftBank spokesman Matthew Nicholson declined to comment. In announcing the Vision Fund’s capital commitments, SoftBank said the fund will have the right to acquire several investments including its Nvidia stake.

    SoftBank shares reversed losses and closed mostly unchanged on the news. Nvidia shares rose as much as 3 percent in New York Wednesday to an intraday record high of $141.07.

    Depending on when the shares were acquired, Son may have made a savvy wager. Nvidia’s stock tripled last year and is up 28 percent again this year, giving the company a market value of more than $80 billion. Its worst annual gain since it started rallying in 2013 was the 25 percent run-up achieved in 2014.

    Nvidia, which is the biggest maker of graphics chips used by computer gamers, earlier this month countered concern among analysts that its share price appreciation had outrun its ability to grow profit by reporting earnings that beat estimates and forecasting a further improvement. The results showed that gains are being driven by progress expanding into new markets, such as automotive and data centers.

    Son set up the planned $100 billion Vision Fund so he can pursue even more ambitious deals than he’s been able to do on his own. He has invested in startups in China, India and the U.S. and acquired control of larger companies such as U.K. chipmaker ARM Holdings Plc and U.S. wireless operator Sprint Corp.

    See also: Tencent Cranks Up Cloud AI Power With Nvidia’s Mightiest GPU Yet

    SoftBank invested $5 billion into the Chinese ride-hailing giant Didi Chuxing last month in the largest-ever venture fundraising. This month, the Japanese company put $1.4 billion into the digital payments startup Paytm in the largest funding round from a single investor in India’s technology sector.

    Son has made the U.S. a particular focus after meeting with President Donald Trump in December and pledging to create 50,000 new jobs by investing $50 billion in startups and new companies. That month, SoftBank contributed $1 billion to a funding round in OneWeb Ltd., a satellite startup based at Exploration Park, Florida, near Kennedy Space Center. In March, SoftBank invested $300 million in WeWork Cos., a U.S. startup that rents out office space and desks to small businesses and freelancers.

    5:15p
    AIM for Infrastructure Visibility in the Data Center

    LeaAnn Carl is the product line manager for CommScope’s Automated Infrastructure Management (AIM) solution. 

    Data center managers are always implementing new applications and network technologies, and that often means deploying new optical fiber and equipment. The migration to higher speeds can add complexity as new optical interfaces are deployed: Some equipment is suited to duplex fiber, while parallel fiber paths are best suited to other types of network equipment.  As a result, it’s becoming increasingly difficult to track and manage the large number and variety of connections to ensure that work proceeds efficiently and without the risk of costly application outages.

    Automated Infrastructure Management (AIM) systems provide the visibility that is critical to support the rapid migration to higher speeds, and the fabric-like nature of modern data center networks, where everything is connected to everything else with an evolving mix of duplex and parallel connectivity. In this article, we’ll look at how AIM systems improve data center visibility and governance.

    Visibility Challenges in the Modern Data Center

    A modern data center can house thousands of devices connected by tens of thousands of optical circuits, some duplex and some parallel. Manually drawn network diagrams are often obsolete as soon as the ink is dry, and using other primitive tools like Excel spreadsheets does not provide an overall view of the data center network with the ability to drill down to specific connections or devices. This state of affairs presents challenges in five key data center management scenarios:

    • Unscheduled connections or disconnections: Too often (and just once may be too often), network technicians will inadvertently plug a cable into the wrong port or disconnect the wrong cable. In most cases, the network administrator has no knowledge of this error until it causes downtime for users or applications, with ever costlier consequences.
    • Troubleshooting: One key goal in the data center is to reduce downtime and mean time to repair (MTTR). When a situation arises that needs repair, every minute counts, and it can take hours for a technician using a manual network map to trace the problem to the right cable or device.
    • Finding the location of a network device: Manual record-keeping systems typically list the characteristics of networked devices, but do not necessarily keep track of the device’s location and end-to-end connectivity trace. Even if the location is listed, it may have been necessary to move the device or its connections at some point, and this change was never documented. This can make finding a particular device a challenging and time-consuming exercise.
    • Service provisioning: When provisioning services, technicians or administrators often have to hunt for available ports on switches and patch panels, and they may not take the shortest or most efficient route from the switch to the service device.
    • Server decommissioning: When retiring or replacing a server, it is important to ensure that the server connectivity is also removed and the status of infrastructure and switch ports is updated in the process—freeing up switch port and panel capacity. In a typical data center, verifying status of connectivity elements left over after server decommissioning may require a technician to manually trace cables to see where they lead – a process that can take hours.

    How AIM Systems Help

    AIM systems combine automated network mapping, connectivity state awareness, reporting, work order management, and alarms to provide the critical visibility needed into the interdependent physical layer as well as the myriad devices connected to it. An AIM system knows where every cable goes, where every networked device is located, and which ports are in use or idle. An AIM system performs the following functions:

    • Accurately documents the end-to-end connectivity between networked devices
    • Tracks, in real time, all changes to physical-layer connections
    • Generates alerts for unauthorized or unplanned changes
    • Issues alerts when changes occur on critical circuits
    • Discovers and tracks network-connected devices
    • Generates electronic work orders for guided deployment
    • Delivers full reporting capabilities, including generation of custom reports
    • Identifies unused IT assets and cabling available for reuse
    • Simplifies and streamlines work flows through powerful process automation
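
    To make the connectivity-tracking and alerting functions above a little more concrete, here is a deliberately simplified, hypothetical sketch (not CommScope’s implementation) of the core idea: compare the documented connectivity state against the state reported by intelligent patch panels, and flag any change that has no corresponding work order.

    # Toy illustration of physical-layer change detection. All names and data
    # structures are hypothetical; a real AIM system does far more than this.
    documented = {                       # port -> far-end port, per the records
        "panelA:01": "switch1:Gi1/0/1",
        "panelA:02": "switch1:Gi1/0/2",
    }
    observed = {                         # port -> far-end port, as sensed in real time
        "panelA:01": "switch1:Gi1/0/1",
        "panelA:03": "switch1:Gi1/0/5",  # a patch nobody documented
    }
    open_work_orders = set()             # ports with an approved, pending change

    def unplanned_changes(documented, observed, open_work_orders):
        alerts = []
        for port, far_end in observed.items():
            if documented.get(port) != far_end and port not in open_work_orders:
                alerts.append(f"Unplanned connection: {port} -> {far_end}")
        for port in documented:
            if port not in observed and port not in open_work_orders:
                alerts.append(f"Undocumented disconnection on {port}")
        return alerts

    for alert in unplanned_changes(documented, observed, open_work_orders):
        print(alert)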

    AIM is an innovative way to help administrators manage their networks and drive more value out of their physical-layer investments while addressing the three main challenges IT managers face today: optimizing capacity, optimizing availability, and optimizing efficiency.

    Optimizing capacity – With its ability to track utilization of panels, cabling, and switch ports, an AIM system provides real-time data on how physical-layer assets are being used while assisting in the planning process. As a result, addressing capacity challenges is not achieved solely by purchasing additional infrastructure; rather, the AIM system’s sophisticated tools reveal all active and inactive ports, enabling administrators to purchase only what is needed.

    Optimizing availability – An AIM system reduces time-intensive manual processes, generating electronic work orders and enabling guided administration of connectivity changes. As a result, both human errors and network downtime are minimized. Visibility into end-to-end circuits means all changes are fully documented, and in the event of a network failure, a root cause analysis can be quickly established, with service quickly restored.

    Optimizing efficiency – Real-time management of the physical layer ensures that stranded switch ports are identified and repurposed, rather than remaining idle while continuing to consume power. AIM systems provide this real-time management and also optimize server deployment and decommissioning.

    As an analogy, an AIM system provides Google Maps-like functionality for the network infrastructure. It provides a holistic view of the network and how the infrastructure and assets are being used, as well as the optimal connectivity routes between devices, enabling users to optimize the allocation and use of the network’s resources and to troubleshoot problems quickly.

    AIM Systems Benefits

    By delivering greatly improved visibility into the network, AIM systems improve security, MTTR, and service agility.

    AIM improves security because the longer a security breach goes unaddressed, the more dangerous and costly it becomes, and AIM systems shorten that window. They alert administrators to security issues and help them pinpoint problems by identifying the location of an improperly connected cable, for example, or by finding the location of an affected device so a technician can be dispatched directly to it.

    MTTR can be slashed with an AIM system because the system can not only identify the precise location of the problem, but also generate an automated work order that walks the technician through the steps required to fix the problem. An AIM system enables guided patching, where the system can send an electronic work order down to each rack and guide technicians with an electronic display and blinking lights on ports to direct them to where they should connect a patch cord, for example. Guided electronic work orders improve speed and accuracy while keeping track of activity and automatically updating documentation, even for complex parallel and duplex circuits.

    Service agility improves greatly with an AIM system because the network administrator can know in advance how a change in services will impact the network infrastructure, and can plan the service rollout with detailed instructions about how to connect cables, switches, servers, and other devices. Service provisioning capabilities eliminate the need to manually select connectivity routes and ports, automatically selecting the optimal primary and backup connectivity routes. Trial and error is eliminated, and with it potential hours of delayed service implementation.

    AIM systems now conform to several international standards, including the definition of a standardized API that can integrate the AIM system’s information with other network and data center management tools such as change management tools, DCIM systems or network management platforms.

    As data center networks migrate to higher speeds, the data center infrastructure is becoming more complex as equipment and applications are added over time, and manual record-keeping is no longer sufficient to optimize the network. AIM systems that are capable of tracking duplex and parallel optical connectivity can deliver an accurate, real-time view of all data center connections to improve security, MTTR and service agility.

    6:00p
    DevSecOps, Machine Learning and Beyond: How IT Security is Changing

    Brought to you by MSPmentor

    Cybersecurity threats are changing, and so are the tools and strategies available for combating them.

    If you want to keep infrastructure and software secure, you need to familiarize yourself with concepts like machine learning and their role in IT security.

    Today’s security threats are wildly different from those of the past.

    If you pay any attention to the news, you already know that.

    In the past, computer viruses amounted to a nuisance more than a grave threat.

    They disrupted local systems rather than entire networks.

    They were generally not hard to spot.

    Today, however, it’s common for attackers to target mission-critical systems with malware that shuts them down entirely or, as was the case in this month’s ransomware attack, demands large sums of money to restore access to critical data.

    New Security Paradigms

    Fortunately, along with the new generation of IT security threats comes a new set of paradigms for preventing and responding to attacks.

    Today’s security landscape is defined by trends like the following:

    • Immutable infrastructure. Technologies like Docker containers enable immutable infrastructure. By design, immutable infrastructure cannot be modified once it is running unless you wipe it out and create an entirely new version. From a security perspective, immutable infrastructure is an advantage because it makes it easier to detect anomalies that could signal a threat. When there is no legitimate reason for a running application to be patched or modified, changes stand out more clearly.
    • The DevSecOps (or Rugged Ops) concept. This is an extension of the DevOps philosophy. DevSecOps emphasizes the importance of integrating the security team into all parts of software development and deployment, rather than leaving them disconnected. When security experts are involved in designing, testing and managing code, they stand a better chance of helping an organization to discover and fix vulnerabilities before software goes into production.
    • Machine learning. Relying on humans to detect and interpret security problems is error-prone and doesn’t scale. For that reason, today’s generation of security tools leverage machine learning to detect and respond to anomalies automatically (a toy illustration follows this list).
    • Automated security policy configuration. If you want to create large software environments, you can’t configure security policies for them manually. You need to rely on automated tools that use machine learning to generate and update security policies automatically, in real time.
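
    As a loose illustration of the machine-learning point above, the sketch below trains scikit-learn’s IsolationForest on samples of “normal” traffic (requests per minute and payload size) and flags outliers. The features and numbers are invented for the example; production security tooling uses far richer signals.

    # Toy anomaly detector: learn what "normal" traffic looks like, flag outliers.
    # Data and features here are invented purely for illustration.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # "normal" traffic: roughly 100 requests/min with ~2 KB payloads
    normal = np.column_stack([rng.normal(100, 10, 500), rng.normal(2.0, 0.3, 500)])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    new_samples = np.array([
        [104.0, 2.1],    # looks like ordinary traffic
        [950.0, 45.0],   # burst of huge requests, likely anomalous
    ])
    for sample, label in zip(new_samples, model.predict(new_samples)):
        status = "anomaly" if label == -1 else "ok"   # predict(): -1 = outlier, +1 = inlier
        print(sample, status)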

    Embracing these concepts is key if you want to thrive in the face of today’s security threats.

    The recent spate of breaches shows that old-generation security practices are not working.

    While perfect security is not possible, strategies like those outlined above bring it closer.

    This article originally appeared on MSPmentor.

    6:53p
    Facebook Building Own Fiber Network to Link Data Centers

    Facebook is putting its own high-capacity fiber in the ground to connect its future Los Lunas, New Mexico, data center to the other server farms that host most of the social network’s infrastructure. The company expects the special type of fiber it is using to build the network to make it enormously more efficient than most cables that exist today.

    The underground cable system will be 200 miles long and provide three diverse network paths to the Facebook data center in Los Lunas, company representatives wrote in a post on the Facebook page for the future Los Lunas facility. Facebook claims it will be “one of the highest-capacity systems in the US.”

    To satisfy their hunger for network bandwidth, operators of hyper-scale data center platforms like Facebook, Google, Microsoft, and Amazon have been pouring a lot of money into the physical network infrastructure that makes up the internet, altering market dynamics for traditional network owners and operators. They’ve been investing in construction of intercontinental submarine cables – projects that cost hundreds of millions of dollars and have traditionally been backed almost exclusively by telco consortia.

    More: Here are the Submarine Cables Funded by Cloud Giants

    Facebook is now making a similar play on land, choosing to invest in its own terrestrial fiber instead of leasing network capacity from long-haul connectivity providers such as Level 3, CenturyLink, or XO Communications. And it’s building a network it expects to scale and perform better than the services those companies provide:

    “With state-of-the-art optical fiber being deployed, it will be 50 percent more efficient when moving information compared to most high-capacity cables previously built. Specifically, we can move information 50 percent farther without needing additional equipment to regenerate the signal — helping our bandwidth demands scale.”

    Facebook revealed earlier this month that it has been running two separate network backbones, one for connecting its data centers to the internet, and another to connect its data centers to each other. While its traffic to the internet has remained at a fairly steady level over the years, replicating rich content like photos and video across multiple data centers has driven a massive spike in data center-to-data center bandwidth needs:


    (Chart: Facebook)

    It appears that those needs are now massive enough to justify the investment in its own physical underground fiber.

    See also: Everything You Wanted to Know about Facebook Data Centers

    7:24p
    Data Center Strategy: Tips for Better Capacity Planning

    The world is littered with examples of the problems caused by data center strategy mistakes around capacity and performance.

    For example, Lady Gaga fans brought down Amazon.com’s vast server resources soon after her album “Born This Way” was offered online for only 99 cents. Similarly, a deluge of online shoppers crashed Target.com’s data center during a mammoth sales event. And, of course, there was the famous healthcare.gov debacle, when an ad campaign prompted millions of Americans to rush to the website for healthcare coverage only to face long virtual lines and endless error messages. At its peak, an estimated 40,000 or more people at a time were forced to sit in virtual waiting rooms because available capacity had been exceeded.

    Each of these examples highlights why data center managers have to make sure their data center strategy stays ahead of the organization’s expansion needs while also watching out for sudden peak requirements that have the potential to overwhelm current systems. The way to achieve that is via data center capacity planning.

    “When organizations lose sight of what is happening or what might happen in their environment, performance problems and capacity shortfalls can arise, which can result in the loss of revenue, reduced productivity, and an unacceptable customer experience,” says John Miecielica, former product marketing manager at capacity management vendor TeamQuest, now a consultant for Stratagem, Inc.

    “Data center managers need to ensure that business capacity, service capacity and component and resource capacity meet current and future business requirements in a cost-effective manner. This has everything to do with managing and optimizing the performance of your infrastructure, applications and business services.”

    See also: AI Tells AWS How Many Servers to Buy and When

    If It Ain’t Broke …

    The old saying, “If it ain’t broke, don’t fix it,” might be a workable principle in many different scenarios. When it comes to data center strategy for capacity, however, it can be a deadly philosophy as the above examples illustrate.

    One European data center, says Miecielica, implemented capacity planning to transition from only being able to fix things when they broke to being able to right-size its virtual environment based on accurate capacity forecasts. Result: That organization avoided infrastructure costs totaling $65,000 per month. Further, its ability to pinpoint bottlenecks helped it eliminate hundreds of underperforming virtual machines (VMs).

    Users tell a similar story. Enterprise Holdings, Inc. (EHI), corporate parent of Enterprise Rent-A-Car, Alamo Rent A Car, National Car Rental and Enterprise CarShare, is the largest car rental service provider in the world. In the past, forecasting and modeling of data center capacity was done via manually collected data that was typed into Microsoft Excel and Access. As well as being resource-intensive and error-prone, it also tended to be inaccurate. This was something EHI could ill afford in a competitive marketplace. Slow systems could mean hundreds of car rentals being lost within a few minutes, as well as delays in getting vehicles to the places they were needed the most, leading to low customer satisfaction ratings.

    “Dozens of resources and countless hours were consumed in data collection, guesstimating growth and presenting a forecast on a quarterly and annual basis,” says Clyde Sconce, former IT systems architect at EHI.

    His company had been guilty of a common data center strategy mistake: oversimplification of demand. One example was the practice of creating a forecast by taking current CPU usage and then using a linear trend to predict all future requirements.

    “If you do it that way, you will be mostly wrong,” says Sconce.

    EHI implemented TeamQuest Surveyor to streamline forecasting, automate the process and heighten accuracy. This made it possible for forecasts and reports to be made available and updated weekly and daily if necessary.  That enabled the data center to move out of reactive mode, understand changes as they happened and take action to ensure its systems never suffered from a Lady Gaga-like event.

    Capacity forecast inputs were obtained from Surveyor, and combined with a variety of business metrics and data gathered from a collection of Java tools. This was then translated into projections for CPU and business growth, dollar cost per server, forecasts relevant to different lines of business and executives, and even ways to check the accuracy of earlier forecasts.

    The point here is not to try to predict the future based on one or two metrics. Instead, EHI extracted a wide range of parameters from a variety of sources that included database information such as server configuration (current and historical), resources consumed (CPU, memory, storage) and business transactions (via user agents). Specific to its UNIX AIX environment, metrics like rPerf (relative performance) helped the data center understand whether it needed to add or remove CPUs to improve performance.

    Sconce cautioned data center managers to watch out for exceptions that can trip up forecasting when working on data center strategy. Take the case of historical data being incomplete or non-existent for a new server. That can result in an anomaly such as a fairly new server being forecast as having 300 percent growth.

    “We go in and override numbers like that in our forecasts, correcting them to a known growth rate for servers that house similar applications,” says Sconce. “Bad data, too, needs to be removed, and you have to watch out for baseline jumps such as shifts in resource consumption without changes in growth rates.”

    An example of the latter might be where two servers are merged into one. In that case, the workload has doubled but the growth rate has not changed. But the biggest lesson, says Sconce, is to align data forecasting to current as well as historical business transactions as that ultimately represents the whole point of the exercise: how the business is currently driving the resources being consumed in the data center, and how business or market shifts might overhaul internal resource requirements.

    The most important statistic at EHI is the number of cars rented per hour. Therefore, instead of feeding executives incomprehensible technical metrics, Sconce always translates them into how they relate to the cars per hour statistic to facilitate better understanding with management. Being able to achieve this, he says, requires close contact with business heads to accurately correlate business transactions to resources consumed in the data center and to then create a realistic estimate of their cost to the organization.

    “Throwing all your data and inputs into a blender won’t work very well,” says Sconce. “An accurate forecast must employ a sophisticated analytical tool that can do things like cyclical trending, anomaly removal, baseline shifts, hardware changes, cost correlations and flexible report groupings.”

    The values EHI relies on the most are peak hourly averages at the server level. The organization has also found it useful to have exception reports generated to flag servers with missing data or anomalies that need to be investigated.

    One final tip from Sconce: Base data center capacity forecasts on both cyclical growth as well as linear projections. EHI calculates annual growth but applies a cyclical pattern to that forecast based on monthly usage. This approach to data center strategy accounts for potential leaps in demand due to seasonal peaks, or campaign launches. A linear projection, for example, may show that a purchase should be made in June, but cyclical data highlights where surges in business usage may occur. This allowed EHI to defer capital expenditures or speed up purchases based on actual business needs instead of just projecting usage forward as an orderly progression.
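
    A very rough way to picture the approach Sconce describes is to grow a baseline linearly at the annual rate and then scale each month by a seasonal factor derived from historical usage. The sketch below is a simplified, hypothetical illustration, not TeamQuest Surveyor’s model; the growth rate and seasonal shape are invented.

    # Simplified capacity forecast: linear annual growth plus monthly seasonality.
    # All numbers below are made up for illustration.
    annual_growth = 0.20                 # assumed 20% year-over-year growth
    baseline = 1000.0                    # current peak-hour demand, arbitrary units

    # Relative monthly usage derived from history (average factor of about 1.0).
    seasonal = [0.90, 0.90, 1.00, 1.00, 1.05, 1.10,
                1.20, 1.15, 1.00, 0.95, 1.05, 1.30]

    monthly_growth = (1 + annual_growth) ** (1 / 12)

    level = baseline
    for month, factor in enumerate(seasonal, start=1):
        level *= monthly_growth              # trend component
        projected = level * factor           # cyclical adjustment on top
        print(f"month {month:2d}: projected peak demand {projected:7.1f}")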

    “By implementing capacity planning in this way, we dramatically reduced our resource time commitments; we were able to automate the forecasting process and implement daily/weekly reporting,” says Sconce.  “TeamQuest Surveyor enabled us to develop a standardized forecasting strategy and to conduct historical forecast tracking to identify areas of improvement.”

    Data Center Complexity

    While capacity planning has always been important, its star has risen in the era of virtualization, cloud computing, BYOD, mobility and Big Data. To cope with this, Gartner analyst Will Cappelli says capacity planning needs to be supported by predictive analytics technology.

    “Infrastructures are much more modular, distributed and dynamic,” he says. “It is virtually impossible to use traditional capacity planning to effectively ensure that the right resources are available at the right time.”

    This entails being able to crunch vast amounts of data points, inputs and metrics in order to analyze them, quantify the probabilities of various events and predict the likelihood that certain occurrences will happen in the future. Therefore, data center managers are advised to lean toward capacity planning tools that enable them to conduct that analysis in such a way that they can run a variety of “what if” scenarios. This allows them to determine their precise requirements, thereby reducing both cost and risk.

    Miecielica agrees. He says that the challenge for organizations is to understand how they can slice and dice all of the data coursing through the data center and the organization. By compartmentalizing all this data into actionable information, capacity planners can share this in the form of a dashboard with metrics that the business can understand and use to make strategic business decisions.

    However, the need to solve the issue of future capacity requirements is urgent. Bernd Harzog, CEO of OpsDataStore, says that conversations with enterprise users confirm that the typical data center server operates at 12 percent to 18 percent of capacity. This number is borne out by an extensive data center survey by Anthesis Consulting Group, published in a report entitled “Data Center Efficiency Assessment.”

    “The standard method for adding capacity is to use resource utilization thresholds as triggers to purchase more hardware, but this results in excess hardware purchases as it does not factor in the requirements of the workloads (the applications) running on the infrastructure,” says Harzog. “The trick is to be able to drive up utilization without risking application response time and throughput issues.”
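
    The difference Harzog describes comes down to the trigger condition itself. The hypothetical sketch below contrasts a purely utilization-based purchase trigger with one that also requires evidence that application response time is actually suffering; the thresholds are invented for the example.

    # Hypothetical purchase triggers: utilization-only vs. workload-aware.
    def naive_trigger(cpu_util):
        # standard approach: buy hardware whenever utilization crosses a threshold
        return cpu_util > 0.70

    def workload_aware_trigger(cpu_util, p95_response_ms, target_ms=250):
        # only add capacity when high utilization is actually hurting the application
        return cpu_util > 0.70 and p95_response_ms > target_ms

    print(naive_trigger(0.78))                 # True: triggers a purchase
    print(workload_aware_trigger(0.78, 120))   # False: the application is still healthy
    print(workload_aware_trigger(0.82, 310))   # True: users are feeling the load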

    One possible way to minimize the complexity inherent in the modern data center is via the creation of dashboards. The data center manager from a large telecom company, for example, recently implemented capacity management with goals set for cost reduction, risk avoidance and efficiency.

    “The project leader focused on dashboards first, and the visibility of the project changed in a dramatic way leading to the capacity management project team becoming in demand,” says Bill Berutti, president of cloud management, performance and availability and data center automation at BMC.

    Previously within this telecom data center, various storage, server and operations managers had periodic meetings to decide where to spend money in the data center. The first dashboard produced by BMC for the storage team provided actual usage numbers that led to about 40TB of storage being eliminated from a purchasing contract.

    Hardware Overspend

    As organizations strive to curtail data center costs, the first places they are likely to snip are planning and management tools such as capacity planning. Yet that one little red line in the expense budget could result in millions overspent on hardware, software or networking.

    “Most organizations are underinvested in capacity management, both as a process discipline and also in the tools required to support the process,” says Ian Head, an analyst at Gartner.

