Data Center Knowledge | News and analysis for the data center industry
 

Monday, May 12th, 2014

    11:30a
    Schneider Adds More Colo-Friendly Functionality To DCIM

    The latest iteration of Schneider Electric’s data center infrastructure management software adds more functionality for colocation providers. The multi-tenant data center industry is a major focus for vendors because it is growing much faster than the enterprise data center space.

    Companies like Digital Realty, CoreSite and RagingWire are using DCIM to improve efficiency as well as to bring differentiated value to customers. It improves transparency and operations from both the operator and customer perspectives.

    Several DCIM vendors, such as Fieldview Solutions, have noted increased uptake of DCIM among colocation providers. Tenants increasingly want visibility into temperature and the power chain in the data center, and some also need asset management solutions.

    Schneider’s StruxureWare Data Center Operation v7.4 features updated capabilities that help colocation data center operators accurately bill tenants, determine available leasing space and avoid overspending on capital and operational expenses.

    “We made it a point to fill in some unmet needs we were hearing from customers with other industry DCIM solutions, particularly when it comes to data and actionable intelligence,” Soeren Brogaard Jensen, vice president of Schneider’s Solution Software division, said. “There is a great demand for accurate monitoring and measurement of colocation environments, as well as the need to reduce stranded capacity, right-size the data center and make the most of existing DCIM systems for all types of data centers.”

    StruxureWare has offered colocation, power and network management before, but the latest release provides deeper granularity in each area and includes some visual upgrades, such as visual port mapping.

    Here are the major enhancements in 7.4:

    • Colocation management: new capabilities include paired receptacles to provide an overview of complete power redundancy at the cage or rack level; cage- and rack-based power overviews; accurate tenant billing (a simple billing calculation is sketched after this list); enhanced tenant data; and improved cage drawings.
    • Power monitoring: the ability to manage capacity and conduct impact analysis down to the breaker level (a feature unique to Schneider Electric’s StruxureWare solution); branch circuit monitoring; and the ability to print breaker panel schedules.
    • Network management: graphic network connections; visual port mapping; cable route and type visualization; and network impact analysis.
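
    To make the tenant billing item above concrete, here is a minimal sketch of the kind of calculation a colocation operator runs when billing metered power back to tenants: sum each tenant's metered energy, gross it up for facility overhead and apply a rate. The class, the rate and the PUE value below are illustrative assumptions, not part of StruxureWare.

        from dataclasses import dataclass

        # Hypothetical illustration of metered tenant power billing in a colo
        # facility; names, rate and PUE are assumptions, not StruxureWare details.

        @dataclass
        class CircuitReading:
            tenant: str
            kwh: float  # metered energy for the billing period

        def tenant_power_bill(readings, rate_per_kwh=0.12, pue=1.6):
            """Sum metered kWh per tenant and gross it up by facility overhead (PUE)."""
            bills = {}
            for r in readings:
                billed_kwh = r.kwh * pue  # IT energy plus cooling and power losses
                bills[r.tenant] = bills.get(r.tenant, 0.0) + billed_kwh * rate_per_kwh
            return bills

        readings = [
            CircuitReading("tenant-a", 3200.0),  # e.g. rack A1 for the month
            CircuitReading("tenant-a", 2950.0),  # rack A2
            CircuitReading("tenant-b", 4100.0),
        ]
        print(tenant_power_bill(readings))  # approx. {'tenant-a': 1180.8, 'tenant-b': 787.2}

    In practice, a DCIM platform would pull these readings automatically from branch circuit monitors rather than from a hand-built list.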

     

    12:00p
    Barclays ‘Fires’ 9,000 Idle Servers From Data Centers

    As it prepares to make huge cuts to its investment banking business and lay off 7,000 employees, Barclays has also been busy firing idle servers from its data centers.

    The British banking giant decommissioned more than 9,000 servers in 2013 alone, following the removal of about 5,500 “comatose” servers from its global data center footprint in 2012. The effort to reduce wasted space and energy in its data centers has made Barclays the winner of an annual data center industry “biggest (server) loser” contest for two consecutive years.

    Organized by the Uptime Institute, the Server Roundup contest is a way to inspire companies to take a hard look at their IT footprint and find and eliminate the machines that are not doing anything except sucking up precious data center power and cooling capacity.

    Last week Uptime announced that Barclays had split first place in the 2013 Server Roundup with Sun Life Financial, the Toronto-based financial services multinational known primarily for its life insurance business. Last year’s winner was AOL, which in 2012 retired and recycled about 9,500 servers.

    Getting Rid Of Idle Servers Not Commonplace

    As common-sense as it may sound, removing idle servers from data centers is not commonplace in the industry, and companies need as much inspiration – which in this case comes in the form of a contest – as they can get. The common way of structuring data center management provides little incentive for roundups of comatose servers.

    Uptime says there are two reasons for this problem: fear and misplaced accountability for efficiency.

    Data center managers are simply afraid to unplug servers in complex IT environments, where one misstep can lead to downtime, since their job is to keep things running.

    Also, in about 80% of the cases, facilities managers or corporate real estate departments are the ones paying data center power bills. Since IT teams at these companies never see the power bill, they have little concern for energy consumption of the hardware they maintain.

    Lots of Money and Data Center Space Saved

    Optimizing the IT footprint, however, can save a company a lot of money. The 9,000 servers Barclays removed in 2013 were consuming about 2.5 megawatts of power, and by removing them the company estimates it has reduced its annual power bills by about $5.4 million.
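
    As a rough sanity check on those figures, 2.5 megawatts of continuous IT load plus typical facility overhead lands in the neighborhood of $5.4 million a year. The PUE and electricity rate below are assumptions chosen for illustration, not numbers reported by Barclays or Uptime.

        # Back-of-envelope check of the reported savings; the PUE and
        # electricity rate are assumed values, not reported figures.
        it_load_mw = 2.5        # decommissioned IT load reported by Barclays
        pue = 1.7               # assumed facility overhead (cooling, power losses)
        rate_per_kwh = 0.145    # assumed blended electricity rate in $/kWh

        annual_kwh = it_load_mw * 1000 * 8760 * pue
        print(f"Annual energy: {annual_kwh:,.0f} kWh")              # ~37,230,000 kWh
        print(f"Annual cost:   ${annual_kwh * rate_per_kwh:,.0f}")  # ~$5,398,350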

    By removing the servers, the company also freed up nearly 600 server racks, which translates into further savings by staving off expansion of data center space. Finally, the bank estimates it has saved about $1.3 million on legacy hardware maintenance and freed up more than 20,000 network ports and 3,000 SAN ports.

    “We are seeing reductions in power, cooling, rack space and network port utilization – all of this while our usable compute footprint goes up, giving us the room to continue to grow the business,” Paul Nally, a director at Barclays, said.

    A Rare Bit of Good News

    The reduction in cost is likely to be welcome news for a business going through major restructuring. Barclays announced last week that it would be slashing its investment banking division and letting go more than a quarter of the division’s employees, The New York Times reported.

    The company is going to sharpen focus on its core businesses: credit cards, retail and corporate banking in Britain and banking in Africa.

    Sun Life Gets Rid of 400 Servers

    Sun Life, the other winner of the latest Server Roundup, retired about 440 servers in 2013. The company replaced 54 of them with newer, more efficient models and converted 75 of them to virtual machines.

    Sun Life expects the project to result in a total reduction in power requirements of about 115 kilowatts and savings of about $100,000 on energy costs.

    This is the third year Uptime has conducted the roundup.

    12:30p
    Big Data News: Zettaset, RapidMiner

    Zettaset announces Secure HBase for a hardened Hadoop offering, and predictive analytics provider RapidMiner launches a new Platform-as-a-Service version of its RapidMiner Server solution.

    Zettaset: The company added Secure HBase as the latest feature in its Zettaset Orchestrator secure management platform for Hadoop. Secure HBase addresses the need for hardened Hadoop components that incorporate the security, reliability and process automation enterprises expect from commercial software applications. Zettaset Orchestrator works in conjunction with branded open-source Hadoop distributions, providing security and administration functionality along with a web-based interface that automates and simplifies routine tasks.

    RapidMiner: The predictive analytics provider’s new version of its RapidMiner Server solution is a fully managed service, hosted on Amazon Web Services, aimed at enabling a greater range of organizations to leverage predictive analytics. The solution can schedule execution of resource-intensive processes, share models and integrate predictive analytics with other business applications. The RapidMiner team performs all installs, configurations, backups, maintenance, monitoring, tuning and updates.

    12:30p
    View IT Assets as an Investor, Not a Consumer

    Frank Muscarello, founder and CEO, MarkITx, an exchange for used enterprise IT equipment.

    As consumers in a short-attention-span world, we face a constant onslaught of new technology that has led all of us to mindlessly collect countless computers, phones, TVs, routers, cables, stereos and video consoles, which we later stow away to gather dust in our basements, attics and garages. If we feel particularly industrious, we may go so far as to sell old technology for a few bucks at a garage sale or on Craigslist, or, if we don’t want to haggle with strangers, we can drop our gear at the curb or take it to a recycling center.

    We know it’s wasteful, but it’s our money, so no one is harmed.

    But what about when it’s an enterprise taking this same consumerist approach? Then, rather than an individual being impacted, the waste hits shareholders with reduced returns, employees with smaller bonus pools and customers with higher prices.

    But what if we adopted a different mindset–one that was less like that of a consumer and more like an investor? What if we managed our data center equipment the way an investor would manage a stock portfolio?

    An investor would never buy a stock and then sell it for less than it’s worth when it came time to reallocate. But that’s basically what data center operators have done for years. They invest huge amounts in equipment, and then, when it’s time to refresh, they sell it for pennies on the dollar to middlemen, pay a recycler to haul it away, or simply store it in a dusty backroom.

    This investor mindset question isn’t rhetorical – the stakes are too high, with data center costs rising 20 percent per year and having doubled in just the last five years.

    Thankfully, thinking like an investor when it comes to enterprise IT – for more informed buy, sell and hold decisions – can start with just a few steps:

    1. Track Your Portfolio’s Value

    Investors regularly monitor the value of their holdings and make allocation decisions as those holdings change in value. For IT equipment, value is instead based on fixed depreciation schedules, so it is little surprise that most of us are happy to get anything in return when selling aged gear.

    Certainly, it hasn’t always been as easy to track the price of old infrastructure as it has been to track the price of an exchange-traded stock, but with the commoditization of enterprise IT and a deep $300+ billion used equipment marketplace, it can be done. Whether through automated price tracking via a third-party marketplace like MarkITx, or having an intern monitor equipment values in a DIY spreadsheet, understanding the true value of your equipment can help you time your sale and purchase decisions.

    Monitoring the price of used gear can also give CIOs leverage: when the depreciated book value of equipment falls below its true market value, the decision to sell and reinvest becomes obvious to even the most obstinate CFO.
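
    For illustration only, the comparison described above can be as simple as lining up a straight-line depreciation schedule against a tracked market quote and flagging the gap. The assets, prices and quotes below are invented; in practice the hand-entered quotes would come from a marketplace or a regularly updated spreadsheet.

        # Hypothetical "DIY spreadsheet" comparison of book value vs. market value.
        # Purchase prices, useful life and market quotes are invented for illustration.

        def book_value(purchase_price, age_years, useful_life_years=5):
            """Straight-line depreciation, floored at zero."""
            remaining = max(useful_life_years - age_years, 0)
            return purchase_price * remaining / useful_life_years

        portfolio = [
            # (asset, purchase price, age in years, tracked market quote)
            ("2U dual-socket server", 8000.0, 4, 2600.0),
            ("48-port 10GbE switch", 12000.0, 4, 1900.0),
        ]

        for name, price, age, market in portfolio:
            book = book_value(price, age)
            signal = "consider selling" if market > book else "hold"
            print(f"{name}: book ${book:,.0f} vs. market ${market:,.0f} -> {signal}")

    Here the server's market quote exceeds its remaining book value, which is exactly the signal the author suggests putting in front of the CFO.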

    2. Be Opportunistic

    Great investors often sit on cash, waiting to strike when the price is right. While data centers don’t have piles of unused cash lying around, there is cash value in your current equipment. If you are tracking its value, you are positioned to sell it more rapidly, and thus to buy when prices are right – at the end of a sales month, quarter or year, or just before the vendor’s hot new product is released.

    3. Manage Risk and Reward

    The best investors in the world aren’t looking to double their money with every investment (though they’ll certainly accept it if it happens!). Instead, they recognize that if they effectively manage the risk and reward of each investment, over time the end result will be strong overall performance. In the data center, the “risk” would be paying top dollar for new equipment even if there was not yet a compelling case for it. To make the risk-reward equation more compelling, balance equipment performance with the actual demands that will be placed on it. You may find that certified used equipment, or something less than top of the line, is sufficient for your lower performance, non-mission critical applications.

    4. Reinvest Dividends

    Income investors love dividend-paying stocks, and they constantly reinvest the dividends to lower their cost basis, increase their holdings and improve performance. While equipment doesn’t pay cash dividends (though that would certainly improve Cisco’s fortunes), data centers should factor in the true value of their current holdings when budgeting their next purchase cycle.

    5. Avoid Fear, Greed and Envy

    Fear, greed and envy are death to any investor, as they lead to rash, foolhardy and ill-conceived decisions. And it’s easy to fall into their trap in a data center. There’s the fear of falling behind the competition, the greed of wanting the most state-of-the-art infrastructure, and the envy of peers who do, in fact, have the most state-of-the-art infrastructure. Instead, data centers should set an equipment refresh plan based on their specific business objectives and financials. Do what’s right for your business, and you’ll be the CIO that others envy, regardless of the age or sophistication of your infrastructure.

    Moore’s Law has brought revolutionary change and improvement to enterprise IT, and IT equipment manufacturers spend billions of dollars on advertising and tradeshows to remind us. But if we’re not seeking to maximize the value of our current equipment and if we’re not making logical, informed decisions, then we’re cursed to be consumers – spending money needlessly, wasting resources and hurting our businesses.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    12:30p
    Brocade Updates SAN Management for EMC Storage

    At last week’s EMC World conference in Las Vegas, Nevada, Brocade announced updates to its fabric technology for customers deploying virtualization and private cloud architectures on the new EMC VNXe 3200 storage array and EMC Data Domain deduplication storage system. The enhancements complement the Fibre Channel connectivity offered by the two new entry-level EMC systems.

    “Midrange enterprises deploying EMC VSPEX solutions have many of the same requirements faced by large enterprises in supporting virtualization and private cloud initiatives, including maintaining high application availability and responsive support level agreements,” Jack Rondoni, vice president of data center storage and solutions at Brocade, said. “Because SANs play a crucial role in each of these areas, we optimized Brocade Fabric Vision technology and Brocade Network Advisor to enable customers to better leverage the Fibre Channel functionality designed into the new EMC VNXe and Data Domain solutions.”

    Brocade Fabric Vision technology aims to simplify the setup and ongoing operation of the new EMC storage and data protection systems.

    Customizable dashboard views provide early warning of potential problems and help IT staff to visualize overall fabric health and performance to reduce operational costs. Brocade Network Advisor Professional, an essential element of Brocade Fabric Vision technology, has new features to support mid-sized environments, including dual-fabric redundant paths between servers and storage that deliver enterprise-class reliability and availability.

    Brocade Fabric Vision technology is available now and the new Brocade Network Advisor Professional version will be available in the third quarter through EMC and its global network of channel partners.

    2:00p
    Understanding the World of Cloud Automation

    The modern data center is getting a lot smarter. We’ve got better systems, more optimally controlled resources, and an end-user experience that is continuously improving. But there’s a lot more intelligence being built into your cloud platform than ever before. The sheer number of new users and workloads accessing the cloud has forced data centers and various organizations to adopt new, powerful methods to deliver rich content.

    With that in mind, cloud and data center automation have helped shape how we provision resources and control workloads. The important note here is that just like any other piece of technology, the world of cloud automation continues to evolve. Here are several examples:

    Automating your next-generation cloud. The next-generation cloud environment happens to be very diverse. Compliance and regulations are opening up a bit to allow more workloads to reside on a cloud system, and many automation tools now build governance and advanced policy control directly into their products. Technologies like RightScale allow cloud admins to control security aspects of their cloud and gain quite a bit of visibility. Aside from helping control costs around resource utilization, this type of platform creates a very dynamic, automated cloud environment. Scaling, orchestration and even multi-cloud controls are all built in. Here’s the cool part: your cloud automation platform now becomes proactive as well. By combining automation with analytics, you’re able to visualize and forecast requirements for your cloud infrastructure.
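
    The scaling behavior described above typically reduces to policy rules evaluated against monitored metrics on a fixed interval. The sketch below shows the general shape of such a rule; it is not RightScale’s API, and the metric, thresholds and limits are assumptions.

        # Generic threshold-based scaling policy; not tied to any vendor's API.
        # Thresholds, limits and the metric name are assumed for illustration.

        def scaling_decision(avg_cpu_pct, current_instances,
                             scale_up_at=75, scale_down_at=25,
                             min_instances=2, max_instances=20):
            """Return the desired instance count for one evaluation interval."""
            if avg_cpu_pct > scale_up_at and current_instances < max_instances:
                return current_instances + 1
            if avg_cpu_pct < scale_down_at and current_instances > min_instances:
                return current_instances - 1
            return current_instances

        # Example: a busy interval followed by a quiet one
        print(scaling_decision(avg_cpu_pct=82, current_instances=4))  # 5
        print(scaling_decision(avg_cpu_pct=18, current_instances=5))  # 4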

    How end-point and workspace delivery play a role in automation. The end-point compute model is evolving. Organizations like Google are delivering Chromebooks, which are lightweight, browser-based platforms that act as gateways to the cloud. The idea is to centralize and deliver complex workloads through non-complex means. These workloads now include entire desktops as well as virtual applications. DaaS models and virtual application delivery can be great tools to empower your end users, but how can you automate the entire process? The good news is that automation platforms are certainly catching up, if not there already. You can deploy your own infrastructure within Amazon Web Services and then manage the entire platform with an intelligent, automated system from Eucalyptus. Or you can deploy VMware’s DaaS model while managing that environment through its vCloud Automation Center. The point is that there are now very diverse and feature-rich automation tools designed for cloud workloads and a variety of delivery models.

    Spanning automation between data centers and cloud models. The cloud is becoming a system of logical interconnects where a variety of data points can now communicate. This means that traditionally isolated cloud models can now share resources and information. The goal is a powerful cloud infrastructure that can be controlled and automated regardless of the underlying platform. Technologies like Puppet Labs aim to do just that. The idea is to create a unified management and automation approach to very heterogeneous environments. Puppet is capable of controlling environments – cloud, virtual and physical – and allows you to automate the management of compute, storage and network resources. The cool factor is that you can be using a VMware platform, Apache CloudStack, OpenStack, Eucalyptus, Amazon or even your own bare-metal data center. Above it all sits this IaaS automation platform, helping you control and better manage your entire ecosystem.
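
    As a toy illustration of that unified layer, the sketch below hides two very different platforms behind a single provisioning interface. The driver classes are stand-ins invented for this example; they are not Puppet, CloudStack or AWS APIs.

        # Toy sketch of one automation layer spanning heterogeneous platforms.
        # The driver classes are hypothetical stand-ins, not real vendor APIs.

        class NodeDriver:
            def provision(self, name: str) -> str:
                raise NotImplementedError

        class BareMetalDriver(NodeDriver):
            def provision(self, name):
                return f"PXE-booted physical node {name}"

        class PublicCloudDriver(NodeDriver):
            def provision(self, name):
                return f"launched cloud instance {name}"

        def provision_fleet(driver: NodeDriver, names):
            """Same declarative request, regardless of the underlying platform."""
            return [driver.provision(n) for n in names]

        print(provision_fleet(BareMetalDriver(), ["db-01"]))
        print(provision_fleet(PublicCloudDriver(), ["web-01", "web-02"]))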

    A look at the future of cloud and data center automation. As the cloud platform evolves, there will be even more initiatives to unify a variety of functions within the data center. One emerging trend and conversation point is robotics in the data center. The concept of a “lights-out,” robot-driven data center takes the idea of automation to the ultimate level. We’re not quite there yet, but the future automation model will certainly begin to involve automated mechanics located within the data center. Another emerging trend is the abstraction and automation of the entire data center. The data center operating system aims to control everything from the chip to the cooler. Moving forward, these types of proactive data center systems will strive to create an easier-to-control infrastructure model.

    Your cloud and data center model will only continue to evolve. There is an increased level of interconnectivity happening within the organization where entire platforms are becoming truly distributed. Your users are demanding more, your applications are becoming more critical, and your delivery model must keep pace with the industry.

    Since resources are still a critical piece of the cloud equation, controlling the delivery of data and resources must be a priority. This is especially the case since most new technologies are now born and live within the data center. So, as you build out your cloud platform, make sure to make the process as intelligent as possible, and build in automation wherever it makes sense.

    2:09p
    Confirmed: Peak 10 Acquired By GI Partners

    Private equity firm GI Partners has signed a definitive agreement to acquire IT infrastructure and cloud provider Peak 10, Inc. from investor Welsh, Carson, Anderson & Stowe, the companies said today. Terms of the deal were not disclosed, but a report from Reuters last week said market rumors valued the deal at between $800 million and $900 million.

    “This is a pivotal time in the IT infrastructure space,” said David Jones, President and Chief Executive Officer of Peak 10. “Our industry continues to experience dynamic change with shifts and enhancements to virtual environments and the promise of the cloud. GI Partners’ deep experience in value creation within our sector forms a strategic partnership that aligns with our aggressive plans to expand geographically and deliver new capabilities to the marketplace for companies who need a partner like Peak 10.”

    Peak 10 operates 270,000 square feet of space in 24 data centers in 10 markets, primarily in the southeastern U.S., and has a history of calculated expansions. The Charlotte-based company has 2,500 customers, including GuideStar, the PGA of America and the Florida Panthers. Peak 10 has been around since 2000, safely navigating through the dotcom bubble as well as the major economic downturn in 2008.

    How has the company survived, and even thrived? Historically it has followed a conservative game plan, choosing to focus on smaller, regional data centers rather than speculative builds. It staffs these regional data centers with local talent that knows the area, quickly establishing Peak 10 as a staple in local business communities. The company takes smart, calculated plays in markets showing potential.

    Peak 10’s product strategy has evolved with the times, offering a unique mix of colocation, managed hosting and, most recently, cloud. It provides tailored solutions, often winning a piece of a customer’s IT infrastructure and further growing the relationship as time goes on.

    “I think it will be interesting to see if GI Partners and Peak 10 are going to be active in follow-on acquisitions to either get more scale or specific capabilities,” said Philbert Shih, Managing Director at Structure Research. “Peak 10 has not been acquisitive of late, but in the past it has shown a willingness to acquire and add facilities and customers in geographies it prioritizes.”

    GI Partners: A Veteran of Data Center Deals

    GI has invested in seven other IT infrastructure businesses, including Digital Realty Trust (NYSE: DLR), The Telx Group, and SoftLayer Technologies. GI’s investment in Peak 10 will be made from GI Partners Fund IV, a private equity fund with $2 billion of capital commitments.

    “This investment leverages GI’s long history of investing in technology infrastructure,” said Rick Magnuson, Founder and Executive Managing Director of GI Partners. “We see a significant value creation opportunity for businesses such as Peak 10, which form the backbone of the internet.”

    “Peak 10’s talented and experienced management team has built an outstanding organization with considerable scale,” said David Mace, Director at GI Partners. “We look forward to supporting the company as it continues to expand its leadership position.”

    Completion of the transaction, which is subject to regulatory approvals and customary closing conditions, is expected in the second quarter of 2014.

    Private equity firm Welsh, Carson, Anderson & Stowe acquired a majority stake in Peak 10 in 2010. Welsh Carson is a veteran investor in the data center industry, having previously owned positions in Savvis Communications, Amdocs Limited and SunGard Data Systems.

    7:30p
    We Welcome Your Submissions

    Do you have your own perspective on a current data center topic? Do you want to respond to a column you’ve read on Data Center Knowledge?

    If so, we’d like to have you produce a guest column for the Industry Perspectives section of Data Center Knowledge, the leading source of news and information for the data center industry. More than 200,000 data center professionals visit Data Center Knowledge each month to stay informed as they plan, design, build, equip and manage world-class data centers.

    Our Industry Perspectives columns provide industry professionals with the opportunity to contribute articles sharing their insight, expertise and opinions with their colleagues and peers.

    The content focus is education and thought leadership, rather than marketing. Submissions are welcome on any topic, but we are particularly interested in these areas:

    • Industry best practices
    • Data center design
    • Energy efficiency
    • Measurement and metrics
    • Next-generation technologies
    • Regulatory updates
    • Green technology
    • Security/Disaster Recovery
    • Storage
    • Cloud computing

    Submissions can be as long as is needed to convey the core message, but 500 to 750 words is a good target length, and about 1,000 to 1,200 words is the maximum. (If a column is longer, we can break it into multiple pages, or multiple columns, if the text supports it.)

    Each Industry Perspectives article will feature the author’s head shot and one-line bio, which can include a link to a company home page or personal blog. We include a single link, plus links to objective resources; we do not link repeatedly to a company website, sales materials or documents. We are looking for original and exclusive articles (not published elsewhere online or in print) that offer “thought leadership” content.

    We welcome submissions from industry leaders. Email perspectives@datacenterknowledge.com to get started. Also, review our guidelines and submission page for information. View previously published Industry Perspectives in our Knowledge Library.

    9:42p
    IBM’s Watson Behind Software Defined Storage Offering

    The software defined storage game has just gotten a little hotter, with IBM throwing its hat into the ring. Big Blue’s Elastic Storage, which uses Watson technology, pools storage resources across a variety of storage systems and locations, abstracting them to look like one big storage system. IBM will offer it as software and make it available as a cloud-based service through its SoftLayer Infrastructure-as-a-Service platform.

    “Digital information is growing at such a rapid rate and in such dramatic volumes that traditional storage systems used to house and manage it will eventually run out of runway,” said Tom Rosamilia, senior vice president, IBM Systems and Technology Group. “Our technology offers the advances in speed, scalability and cost savings that clients require to operate in a world where data is the basis of competitive advantage.”

    Born in IBM Research Labs, the patented Elastic Storage technology is suited for the most data-intensive applications, which require high-speed access to massive volumes of information. Examples include seismic data processing, risk management, financial analysis, weather modeling and scientific research.

    The Elastic Storage architecture was part of Watson, the “cognitive-computing” system that famously won the TV quiz show Jeopardy! in 2011 against two previous human winners. Watson was able to load 200 million pages (more than 5 terabytes of data) into its memory within minutes during its Jeopardy! appearance.

    Watson has become the foundation of many of IBM’s recent technology initiatives. The company plans to offer cognitive computing as a service for applications that include retail, healthcare and beyond.

    The benefits of software defined storage include moving less-used data to less expensive commodity storage and reserving faster, more expensive storage, such as Flash, for important, urgently needed data. Guided by real-time, Watson-powered analytics, the software’s policies automatically match types of data to the most appropriate storage systems.
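
    As a generic illustration of that kind of policy-driven tiering (this is not IBM’s Elastic Storage implementation; the thresholds and tier names are assumptions), placement can be as simple as routing data sets by recent access frequency.

        # Generic illustration of access-frequency-based tiering; thresholds and
        # tier names are assumptions, not details of IBM Elastic Storage.

        def choose_tier(accesses_last_30_days: int) -> str:
            if accesses_last_30_days >= 100:
                return "flash"    # hot data: low-latency, expensive capacity
            if accesses_last_30_days >= 5:
                return "disk"     # warm data: commodity spinning disk
            return "archive"      # cold data: cheapest capacity tier

        datasets = {"trading-risk-models": 450, "2013-weather-runs": 12, "old-logs": 0}
        for name, hits in datasets.items():
            print(f"{name}: {choose_tier(hits)}")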

