Data Center Knowledge | News and analysis for the data center industry
Tuesday, August 25th, 2015
12:00p
Brocade Unveils Easy-to-Use Analytics Platform for SAN Monitoring

Moving to reduce the cost of storage monitoring, Brocade today announced an analytics platform that enables IT organizations to more easily track both application- and device-level I/O performance and traffic behavior.
Designed to be deployed as a Fibre Channel-based appliance, the Analytics Monitoring Platform analyzes traffic across all network-connected devices, including data flows between servers and storage devices, to provide end-to-end visibility into performance issues, said Jack Rondoni, VP of storage networking at Brocade.
“To get this visibility previously you would have to buy a third-party product that is both expensive and difficult to deploy,” he said. “The reason a lot of organizations never used those tools is that they were just too complicated.”
In contrast, as an appliance, Brocade’s new storage monitoring platform is designed to be deployed in a matter of minutes in a storage area network (SAN) environment. Rival solutions depend on a complex array of taps and probes deployed across a limited number of ports to capture data before it can be sent to an appliance for analysis, Rondoni explained.
Capable of analyzing 20,000 data flows and millions of IOPS using a single appliance, the platform makes it simple to uncover the actual causes of infrastructure issues that diminish performance and availability. Access to performance history and trends also enables IT organizations to discover those issues more proactively.
Based on the Gen5 platform architecture that Brocade uses for its storage and networking products, the Brocade Analytics Monitoring Platform comes in a 2U form factor that can be configured with up to 24 Fibre Channel ports. The appliance itself sports two dedicated multi-core processors for frame processing and an onboard solid-state disk drive.
From a software perspective, it runs an implementation of Brocade’s Fabric OS (FOS) that includes analytics capabilities and can be integrated with Brocade Network Advisor software.
Rondoni said IT organizations can use the platform to generate customized reports that correlate and summarize trends and specific events. They can also set thresholds and automate alerts on application behavior to help detect potential issues earlier.
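Brocade has not published the alerting interface here, but as a rough sketch of the kind of threshold-and-alert logic Rondoni describes, the following Python snippet is illustrative only; the metric names, flow identifiers, and threshold values are all assumptions, not the platform's actual API.

```python
# Minimal sketch of threshold-based alerting on SAN I/O metrics.
# All metric names and threshold values are hypothetical; Brocade's
# platform exposes its own configuration and alerting interfaces.

from dataclasses import dataclass

@dataclass
class FlowSample:
    flow_id: str          # e.g. "host42 -> array7:lun3" (illustrative)
    iops: float           # I/O operations per second
    latency_ms: float     # average completion time in milliseconds

# Hypothetical thresholds an administrator might set per environment.
THRESHOLDS = {
    "iops_max": 50_000,      # alert if a single flow exceeds this rate
    "latency_ms_max": 20.0,  # alert if average latency exceeds this value
}

def evaluate(samples):
    """Return a list of alert strings for samples that cross a threshold."""
    alerts = []
    for s in samples:
        if s.iops > THRESHOLDS["iops_max"]:
            alerts.append(f"{s.flow_id}: IOPS {s.iops:.0f} above limit")
        if s.latency_ms > THRESHOLDS["latency_ms_max"]:
            alerts.append(f"{s.flow_id}: latency {s.latency_ms:.1f} ms above limit")
    return alerts

if __name__ == "__main__":
    demo = [FlowSample("host42 -> array7:lun3", 62_000, 4.2),
            FlowSample("host17 -> array2:lun9", 8_500, 31.0)]
    for alert in evaluate(demo):
        print(alert)
```

In practice, the platform's own reporting and its integration with Brocade Network Advisor would take the place of a hand-rolled check like this.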
With more cores available to run analytics, Rondoni said, it’s become a lot more feasible for vendors such as Brocade to develop their own analytics applications that should ultimately help IT organizations drive up utilization rates in a way that lowers costs while still improving overall application performance.
Want to know what Brocade CEO Lloyd Carney thinks the future of the data center network market is? Here’s what he told us in a recent interview.

12:30p
How Cloud Redefined Data Center Resource Utilization

Cloud computing isn’t going anywhere. In fact, the proliferation of cloud computing and various cloud services only continues to grow. Gartner recently estimated that global spending on IaaS will reach almost US$16.5 billion in 2015, an increase of 32.8 percent from 2014, with a compound annual growth rate (CAGR) of 29.1 percent forecast from 2014 to 2019. There is a very real digital shift happening for organizations and users utilizing cloud services.
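To put those figures in perspective, here is a quick back-of-the-envelope calculation; only the 2015 spend, the 32.8 percent increase, and the 29.1 percent CAGR come from the Gartner estimate above, and the derived 2014 and 2019 numbers are illustrative.

```python
# Back-of-the-envelope projection from the Gartner figures cited above.
# Only the 2015 spend ($16.5B), the 32.8% year-over-year increase, and the
# 29.1% CAGR are from the article; the derived values are illustrative.

spend_2015 = 16.5          # billions of US dollars
yoy_growth = 0.328         # 2014 -> 2015 increase
cagr = 0.291               # forecast 2014 -> 2019

spend_2014 = spend_2015 / (1 + yoy_growth)   # implied 2014 baseline
spend_2019 = spend_2014 * (1 + cagr) ** 5    # compounded over five years

print(f"Implied 2014 IaaS spend: ~${spend_2014:.1f}B")    # ~ $12.4B
print(f"Projected 2019 IaaS spend: ~${spend_2019:.1f}B")  # ~ $44.6B
```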
The digitization of the modern business has created a new type of reliance around cloud computing. However, it’s important to understand that the cloud isn’t just one platform. Rather, it’s an integrated system of various hardware, software and logical links working together to bring data to the end-user.
With that in mind, your organization must understand how the concept of the cloud has completely redefined the way we use data center resources.
- Cloud-ready hardware resources. Cloud computing revolves around the proper utilization of data center resources. This is why it’s important to understand your cloud and see where the latest high-density, multi-tenancy hardware is required. Don’t forget, cloud sprawl can be a very real issue, and controlling cloud resources is an important part of the deployment process. One big ask of data center administrators is to create a cloud environment that is both resilient and cost-effective. Properly sizing and utilizing hardware resources is a great way of optimizing your cloud.
- Data control and security. The distributed nature of cloud computing necessitates planning around good data controls and logical infrastructure security. This might mean creating user-based roles for storage systems or placing entire applications behind virtual application firewalls. Remember, the cloud isn’t perfect; this is why it’s important to be proactive around your information and the security platform. I won’t get into all of the latest breaches that have happened; let’s just say there have been many. Security in your multi-tenant environment is critical for your users, the data, and your own brand image.
- User transparency. One of the key components of any cloud deployment will be the end user. As users consume your cloud, the entire process must be as seamless as possible. You can have the best infrastructure in place, but if the user experience is degraded, your cloud will face adoption challenges. In creating user cloud profiles, know what workloads they are accessing, the information they require, and the amount of resources you’ll need to provision to ensure a good user experience. Here’s another tip: new kinds of cloud tools can help you dynamically optimize user experiences based on specific contexts like latency, location, and even device used.
- The elastic cloud. Cloud-based resources not only have to be carefully managed, they have to be logically distributed as well. This is where the power of an elastic cloud really comes in. Modern organizations are able to leverage resources as needed and only pay for them when required. This “pay-as-you-go” model not only increases the efficiency of your cloud model, it also allows you to elastically utilize the resources that you need, which in turn helps prevent cloud sprawl. By utilizing the elastic cloud, administrators are able to dynamically provision and de-provision resources as needed (see the sketch after this list). Remember, the cloud is a diverse, moldable, service-driven technology which can be custom-tuned to your requirements.
- Capacity planning. Whether you’re working with a private, public, or hybrid cloud, the capacity planning process is integral. You’re not only planning for today’s needs; you’re also ensuring that you have resource capacity to grow. Cloud computing is designed around placing numerous workloads and users onto single, redundant cloud systems. These platforms must have the right resources to process data, user requests, and the various cloud services which are becoming available. The capacity planning portion of a cloud deployment is actually an ongoing process which must continuously evaluate resource utilization to ensure optimal cloud infrastructure performance. There are powerful tools, ranging from DCIM to ITSM capacity planning features, which can help you proactively size your data center and your cloud. Use these tools to stay ahead of demand and keep pace with the cloud.
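As a rough illustration of the elastic provision/de-provision decision referenced above, here is a minimal Python sketch; the utilization thresholds, instance counts, and limits are hypothetical, and a real deployment would call a cloud provider’s scaling API rather than print a decision.

```python
# Minimal sketch of an elastic scaling decision: provision capacity when
# utilization runs hot, de-provision it when demand falls off.
# Thresholds, counts, and limits below are hypothetical.

SCALE_UP_THRESHOLD = 0.80    # add capacity above 80% average utilization
SCALE_DOWN_THRESHOLD = 0.30  # release capacity below 30% average utilization
MIN_INSTANCES, MAX_INSTANCES = 2, 20

def scaling_decision(current_instances: int, avg_utilization: float) -> int:
    """Return the target instance count for the next interval."""
    if avg_utilization > SCALE_UP_THRESHOLD and current_instances < MAX_INSTANCES:
        return current_instances + 1
    if avg_utilization < SCALE_DOWN_THRESHOLD and current_instances > MIN_INSTANCES:
        return current_instances - 1
    return current_instances

if __name__ == "__main__":
    instances = 4
    # Simulated average utilization over several intervals.
    for util in (0.85, 0.90, 0.75, 0.25, 0.20):
        instances = scaling_decision(instances, util)
        print(f"utilization {util:.0%} -> instances now {instances}")
```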
Already we are seeing entire organizations being born within the cloud. As IT consumerization continues and more devices connect to cloud services, it’ll be crucial to work with a partner that understands the big cloud picture. When creating your cloud infrastructure, planning around resources not only creates a more robust platform, it also saves your organization money. It’s time to better understand resource utilization within your cloud – and how you can align key cloud services with your organization’s goals.

3:00p
Evaluating Ongoing Total Cost of Ownership (TCO) for UPS Systems

Sarju Tailor is Senior Product Manager for GE’s Critical Power Business.
As Einstein taught us, measuring anything is relative. In the data center arena, we measure processing speed, computing capacity, data input and output, and demands for storage, over months, weeks, days and even seconds. Yet, when we measure the cost of acquiring and operating our data center equipment, we too often select a single point in time to measure cost. We calculate one-time, up-front capital expense (CapEx) and initial start-up operating expenses (OpEx), such as power equipment’s rated energy efficiency.
Calculating total cost of ownership (TCO), however, takes the long-term view, beginning well before equipment is even selected and ending far past the operational life of the system. Looking at TCO for data center power systems, particularly uninterruptible power supply (UPS) units, offers a vantage point on how to evaluate TCO across the full life cycle of data center systems. For example, typical TCO metrics include single point-in-time factors such as UPS power conversion energy efficiency. This is certainly a key OpEx consideration, but when looking at CapEx and OpEx across the full life cycle of a UPS system, a number of other seemingly small but significant TCO variables emerge.
Defining a Life Cycle
The first core question to ask is, “how long is the life cycle of the system that is being measured?” Is the life cycle based on the practical use-life of the data center system or, more often, just the UPS design life cycle? Defining the practical use limits of a system affects both the CapEx amortization schedule and long-term OpEx and maintenance expenses that are vital for an accurate UPS TCO evaluation.
The TCO of Evaluation and Selection
When beginning the evaluation and request-for-proposal process for a UPS system, businesses must factor in the up-front design and specification time, since they typically engage third-party consulting engineers to handle the design, specification, and evaluation of power protection providers. Part of the evaluation process is the testing and maintenance costs associated with large or multi-location deployments. These points all factor into how to amortize the cost of the system over the life of the power protection system.
Deployment and Infrastructure
Most data center engineers and managers understand the costs related to deploying new or upgraded power protection systems. Certainly the time and staff costs for installation and initial testing are traditionally part of the overall cost evaluation and TCO metrics. Yet, there are often some hidden costs that also need to be factored into the TCO calculation. During an upgrade or expansion project, do some systems have to be taken off-line, and is the cost of this downtime factored into the TCO? Have new or expanded power distribution infrastructure issues such as expanded or redundant wiring been factored into the overall system cost and TCO?
Floor Space
As data center floor space is a fixed—and valuable—asset, it too becomes a major consideration for calculating overall TCO. Creating a smaller power equipment footprint means more space for the server cabinets which ultimately increases processing capacity and earns revenue.
Maintenance and Support
The costs related to ongoing maintenance, warranty costs, etc., are common metrics for measuring TCO; however, there are some less obvious maintenance-related costs that also need to be rolled into TCO evaluations.
For example, does the UPS topology have sufficient redundancy that allows a single UPS unit to be taken off-line for maintenance or evaluation, or does the entire power plant need to shut down while maintenance or repair is performed? Even scheduled maintenance has an effect on uptime, data and processing transfer time and costs, including labor costs.
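As a hypothetical illustration of that redundancy question, the sketch below checks whether the remaining UPS modules can still carry the critical load when one unit is taken offline for maintenance; the module capacities and load figure are invented for the example.

```python
# Hypothetical check: can the remaining UPS modules carry the critical load
# if one unit is taken offline for maintenance? (A simple N+1 style test.)

def can_service_one_unit(unit_capacities_kw, critical_load_kw):
    """True if the load is still covered with the largest single unit offline."""
    remaining = sum(unit_capacities_kw) - max(unit_capacities_kw)
    return remaining >= critical_load_kw

if __name__ == "__main__":
    ups_modules_kw = [500, 500, 500]   # three 500 kW modules (illustrative)
    load_kw = 900                      # critical IT load (illustrative)
    if can_service_one_unit(ups_modules_kw, load_kw):
        print("One module can be serviced without dropping the load.")
    else:
        print("Servicing a module requires a shutdown or load transfer.")
```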
Scheduled battery replacement is probably the major OpEx cost of a UPS, representing a significant part of a maintenance budget. If TCO is a critical evaluation factor, then understanding which battery technologies can extend the life cycle of a UPS becomes important. The same is true for remote UPS monitoring systems that improve battery life, maintenance and upgrade strategies.
Lastly, it’s important to understand that maintenance costs escalate over time. Just as an older car costs more to maintain, older UPS units require more maintenance, skewing later OpEx costs and ultimate TCO evaluations.
End-of-Service
What are the end-of-service costs in terms of decommissioning data center power infrastructures, and what are the hard costs (e.g., battery disposal) and less-defined carbon-impact costs of disposing of or recycling power products? Do decommissioning costs for batteries, for example, net out against the recycling value of their core components and materials?
So as we look at the important CapEx and OpEx factors used to calculate the TCO of data center UPS systems, we need to be diligent in understanding and fully measuring these costs at points all along the system’s life cycle. From the earliest evaluation steps, across operational and maintenance costs, and through end-of-service factors, an accurate TCO analysis demands this insight.
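As a rough sketch of rolling these life-cycle categories into a single figure, the example below sums illustrative costs over a ten-year life cycle; every dollar value is an assumption, not GE guidance, and a real analysis would also discount future cash flows and model site-specific factors such as floor space and downtime.

```python
# Illustrative life-cycle TCO roll-up for a UPS system.
# All cost figures are invented for the example; a real analysis would use
# site-specific data and typically discount future cash flows.

LIFE_CYCLE_YEARS = 10

costs = {
    "evaluation_and_specification": 40_000,   # consulting, RFP, factory testing
    "capex_equipment": 250_000,               # purchase price of the UPS system
    "deployment_and_infrastructure": 60_000,  # install, wiring, commissioning
    "annual_energy_losses": 18_000,           # from conversion (in)efficiency
    "annual_maintenance_year1": 10_000,       # contracts, inspections
    "maintenance_escalation_rate": 0.05,      # older units cost more to maintain
    "battery_replacements": 2 * 35_000,       # two scheduled replacements
    "end_of_service": 15_000,                 # decommissioning and disposal
}

def lifecycle_tco(c, years=LIFE_CYCLE_YEARS):
    one_time = (c["evaluation_and_specification"] + c["capex_equipment"]
                + c["deployment_and_infrastructure"]
                + c["battery_replacements"] + c["end_of_service"])
    energy = c["annual_energy_losses"] * years
    # Maintenance escalates each year, like the "older car" analogy above.
    maintenance = sum(c["annual_maintenance_year1"]
                      * (1 + c["maintenance_escalation_rate"]) ** y
                      for y in range(years))
    return one_time + energy + maintenance

if __name__ == "__main__":
    print(f"Illustrative {LIFE_CYCLE_YEARS}-year TCO: ${lifecycle_tco(costs):,.0f}")
```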
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

3:30p
Dell Partners With Ingram Micro, Synnex to Sell IT Gear to Feds

Dell’s revenue from its U.S. federal business rose approximately 18 percent year over year to $178 million in 2014, according to its website.
While growth in the sector has slowed this year, it may pick up steam with the company’s recently announced partnerships with resellers Ingram Micro and Synnex, who can now access Dell’s sizable federal portfolio of products and services, reported our sister site, The VAR Guy. It’s the first time the hardware giant has granted distribution rights to its government business.
The news comes on the heels of an announcement by Ingram Micro that its partners could expect “a simplified and shortened sales cycle” with the beefing up of its Federal Advantage Program, which includes cloud, data center, networking and security solutions. Through the program, partners have access to a GSA schedule and GSA order desk via Ingram Micro’s Promark division as well as additional engineering, sales and professional services to grow their federal business opportunities.
The move further builds on the existing partnership between the two companies; Dell has named Ingram Micro its “distributor of the year” two years in a row.
“Expanding our relationship with Ingram Micro to include the federal market further enables our mutual channel partners to take advantage of the busy buying season,” said Frank Vitagliano, vice president of global partner strategy and programs, Dell, in a press release.
Meanwhile, Synnex can now help build Dell’s Federal channel business through its extensive list of Federal resellers, part of the company’s largest and most developed vertical practice.
“The way Synnex helps its customers win business through programs like its Diversity Alliance, alternative financing and dedicated Federal teams that understand the distinct complexities of the Federal IT space, made the decision to have them grow Dell’s Federal business even more of a win-win,” said Vitagliano.
The timing couldn’t have been better for all three companies. The announcement comes at a time when federal spending is heating up. The Federal Times expects IT spending by the government to grow nearly 3 percent in 2016 to $86.4 billion.
In fact, less than a month ago, the Department of Defense awarded the most lucrative health IT contract in history to Leidos and Cerner Corp., potentially worth $10.5 billion over 18 years, according to the Washington Post, to help overhaul the Pentagon’s crippled electronic records system.
The complete post can be found at: http://thevarguy.com/information-technology-distribution-channels-news/082415/dell-taps-ingram-micro-synnex-federal-sales

5:06p
QTS Wants to Wipe Kim Dotcom’s Megaupload Servers Stored by Carpathia
This article originally appeared at The WHIR
Carpathia Hosting’s parent company QTS has asked for a federal court’s permission to destroy all data on the servers that once belonged to Megaupload, the now defunct file-sharing service founded by Kim Dotcom.
Data center provider QTS, which operates as a Real Estate Investment Trust, acquired Carpathia for $326 million in May.
The US government seized more than 1,000 servers belonging to Megaupload at a Carpathia data center in 2012. The court urged the hosting company to preserve the evidence, and at one time Carpathia estimated it was costing the company $9,000 per day to store the hardware. That cost is now $5,760 per month, since the servers have been moved to a storage facility, according to a report by TorrentFreak.
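For a sense of scale, here is a quick comparison of the two reported figures; the per-day and per-month costs come from the article, and the month-length conversion is an approximation.

```python
# Quick comparison of the storage costs reported above (figures from the
# article; the 365/12 days-per-month conversion is an approximation).

daily_cost_then = 9_000          # dollars per day while hosted at Carpathia
monthly_cost_now = 5_760         # dollars per month in the storage facility

monthly_cost_then = daily_cost_then * 365 / 12   # ~ $273,750 per month
reduction = monthly_cost_then / monthly_cost_now

print(f"Previous monthly cost: ~${monthly_cost_then:,.0f}")
print(f"Current monthly cost:   ${monthly_cost_now:,}")
print(f"Roughly {reduction:.0f}x cheaper in storage")    # ~48x
```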
QTS filed the motion in a Virginia federal court earlier this month, arguing that it shouldn’t have to preserve the servers.
“As the servers have not been used for the purposes of any litigation since the filing of Carpathia’s Motion for Protective Order on March 20, 2012, QTS seeks an Order from the Court allowing for disposition of the servers and data,” QTS said.
Megaupload lawyer Ira Rothken plans to submit an opposition brief asking that the US be required to preserve the Megaupload data. Megaupload founder Dotcom hopes that the service’s users will one day be able to retrieve their files.
The Electronic Frontier Foundation (EFF) is helping at least one Megaupload user pursue legal action to get his files back. Kyle Goodwin, a sports reporter, used Megaupload to store work files. With the EFF’s help, he has filed at least six requests asking the court to find a solution that gives him back his files.
This first ran at http://www.thewhir.com/web-hosting-news/carpathia-hosting-asks-for-court-permission-to-delete-megaupload-files

7:03p
Telco That Brings Netflix Video to Rural Midwest Expanding Underground Data Center

Midwestern telco Bluebird Network is expanding an underground data center in Springfield, Missouri, that it bought last year from the city for $8.4 million.
The company’s core business is data transport over its 6,000-mile fiber-optic network spanning Missouri, Illinois, and the surrounding states. The company owns the physical fiber and provides transport services on it.
Bluebird plays an important role in the delivery of internet content to rural areas across the Midwest. The company is owned by 25 rural telephone companies, and until it bought the Springfield facility, its owners were its only customers, served out of its other two data centers in Kansas City and St. Louis, CEO Michael Morey said.
But Bluebird also peers with Netflix – responsible for about one-third of all traffic on the internet – at numerous points on its network and with other online video companies Morey could not name, so that their content can be picked up by its owner companies and delivered to their customers in remote areas.
“We do have other video providers that are peering with us, because we can get them close to the rural areas,” Morey said.
It used to be that satellites were the only way for content companies to deliver their content to rural users. Satellite services are expensive, however, and by peering with the likes of Bluebird, those firms can reduce their expenses.
“Many of these providers are actually dropping satellite delivery of some of these services,” Morey said.
The data center isn’t large, but its purchase represented a shift in strategy for the company, which is now going beyond just serving the 25 telcos that own it. Its power capacity is about 1.2 megawatts and is currently being expanded to about 2 MW.
Bluebird bought the data center primarily to make it easier to connect with the likes of AT&T, Verizon, and other customers. Without the data center, exchanging traffic with those so-called “access” networks – networks that serve internet traffic to end users – would require the company to build fiber to whatever facilities they are in, Morey said.
The data center is a place where the networks can interconnect with Bluebird’s network and avoid the costly fiber build-out. Springfield was already on the company’s network.
“We were able to basically leverage our existing fiber network by buying the data center,” Morey said.
When Bluebird bought the facility inside a mine 85 feet below ground, it was at capacity. The city sold it because it didn’t have the capital to expand it, he explained.
There are about 80 customers in the data center today, including healthcare companies, cloud service providers, government agencies, and others. The city had been acting as a data center service provider to these tenants.
The main advantage of an underground data center is of course security. “Yes, it is a little more complicated [to build underground], and yes it is a little more expensive, but it is far more secure,” Morey said. “You can’t see it from a satellite; you can’t see it from a Google Maps truck.”
In addition to stealth, being underground makes the facility immune from the region’s frequent tornadoes and thunderstorms.
Correction: A previous version of this post said the mine where Bluebird’s data center is located was defunct. The mine is actually operational, and Bluebird is one of the tenants there. The article has been corrected accordingly.