Data Center Knowledge | News and analysis for the data center industry
 

Thursday, October 22nd, 2015

    12:00p
    DCIM: Managing Power and Cooling and Improving Resilience

    Sev Onyshkevych is the Chief Marketing Officer at FieldView Solutions.

    TechTarget defines resilience as “the ability of a server, network, storage system or an entire data center to continue operating even when there has been an equipment failure, power outage or other disruption.”

    No doubt, the more complex the system, the more complex the definition and calculation of resilience. At the data center level, resilience is especially complex and fraught with challenges because data centers are like living things – they’re constantly changing. The methods used to keep a data center up and running have to evolve with them.

    Traditionally, data center managers ensured uptime with redundancy. They had duplicates of everything – two power sources, two servers, two connections, in effect two whole data centers – plus spare units, backup and disaster recovery sites, and extra capacity for that “black swan” day. In many cases all that redundancy meant running at only 10 to 15 percent of total capacity. That way, if something failed, a cascading hierarchy of redundant systems could take over.

    That’s great for peace of mind, but not great for the bottom line – especially as data centers expand and operational costs increase. There had to be a better way to ensure uptime than having equipment sit idle, doing no work while still drawing power, creating heat, and taking up space.

    Enter Data Center Infrastructure Management (DCIM) monitoring. DCIM software monitors all the critical systems in a data center in real time so users know how to optimize the use of space, power, cooling, and network capacity. What’s more, DCIM monitoring raises an alarm when something is headed for trouble, so changes can be made before a problem becomes a catastrophe.
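
    As a rough illustration of that kind of threshold-based alerting – a minimal sketch in Python, with sensor names, readings, and limits invented for the example rather than taken from any real DCIM product – a monitoring check might look like this:

        # Minimal sketch of threshold-based DCIM alerting.
        # Sensor names, readings, and limits are hypothetical examples,
        # not output from any real DCIM product.

        THRESHOLDS = {
            "rack12_inlet_temp_c": 27.0,   # upper inlet temperature limit
            "pdu3_load_percent": 80.0,     # leave headroom on the PDU
            "ups1_battery_minutes": 10.0,  # minimum acceptable runtime
        }

        def check_readings(readings):
            """Return alarm messages for any reading past its limit."""
            alarms = []
            for sensor, value in readings.items():
                limit = THRESHOLDS.get(sensor)
                if limit is None:
                    continue
                # Battery runtime alarms when it drops below the limit;
                # temperature and load alarm when they rise above it.
                breached = value < limit if sensor.endswith("_minutes") else value > limit
                if breached:
                    alarms.append(f"ALARM: {sensor} = {value} (limit {limit})")
            return alarms

        if __name__ == "__main__":
            sample = {"rack12_inlet_temp_c": 29.5, "pdu3_load_percent": 64.0,
                      "ups1_battery_minutes": 22.0}
            for msg in check_readings(sample):
                print(msg)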

    While it’s important for data center operators to see what’s happening in their facility in real time, DCIM can also help with future planning. When data center managers know what equipment they currently have, how much power it’s drawing, and where that power is coming from, among other vital information, they can determine how much more equipment their facility can handle. And by optimizing capacity, they can delay, or eliminate altogether, the need to construct a new facility.
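
    To make that capacity math concrete, here is a back-of-the-envelope sketch; the provisioned power, measured peak, derating factor, and rack density below are illustrative assumptions, not figures from the article:

        # Rough capacity-headroom estimate: how much more IT load a room
        # can take, given its provisioned power and the peak draw seen by
        # DCIM monitoring. All numbers are illustrative assumptions.

        provisioned_kw = 1200.0   # total critical power provisioned for the room
        measured_peak_kw = 780.0  # peak draw observed by monitoring
        derating = 0.80           # keep a 20 percent safety margin

        usable_kw = provisioned_kw * derating
        headroom_kw = usable_kw - measured_peak_kw

        avg_kw_per_rack = 6.0     # assumed average rack density
        additional_racks = int(headroom_kw // avg_kw_per_rack)

        print(f"Usable capacity: {usable_kw:.0f} kW")
        print(f"Headroom: {headroom_kw:.0f} kW, roughly {additional_racks} more racks "
              f"at {avg_kw_per_rack} kW each")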

    A recent article in DataCenter Dynamics states: “In the case of data centers, most root causes (of failure) are down to human error… But very often failures occur through a combination of two or three faults happening simultaneously, none of which would have caused an outage on its own.”

    That’s why failure simulation is a valuable tool for mission-critical facility operators.

    The ability to simulate device failures and review “what if?” scenarios gives data center managers the information they need to make wise business-critical decisions and avoid disasters. It answers questions like:

    • If I took this piece of equipment off-line for updates or maintenance, what would happen?
    • What if something else failed while I was doing maintenance?
    • Where would the load go?
    • Would something else fail as a result?
    • Would there be a cascading failure situation?

    Redundancy may be reduced during planned or unplanned outages, or because of connection errors. Identifying potential single points of failure, knowing where your system is most vulnerable, and understanding how resilient the data center really is are all critical to improving infrastructure and reliability.
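
    One way to picture how such a “what if?” simulation works – a simplified toy model in Python, not how any particular DCIM product implements it – is to describe the power chain as a dependency map and check which loads lose every remaining feed when devices are taken offline:

        # Toy failure simulation over a simplified power chain.
        # The topology is a made-up example: each load lists the PDUs that
        # can feed it, and each PDU lists the UPS units that feed it.

        FEEDS = {
            "rack_A": {"pdu_1", "pdu_2"},
            "rack_B": {"pdu_2"},
            "rack_C": {"pdu_3"},
        }
        UPSTREAM = {
            "pdu_1": {"ups_1"},
            "pdu_2": {"ups_1", "ups_2"},
            "pdu_3": {"ups_2"},
        }

        def simulate_outage(failed_devices):
            """Return the loads that would lose every remaining power source."""
            failed = set(failed_devices)
            # A PDU is effectively down if it failed or all of its UPS feeds failed.
            down_pdus = {pdu for pdu, ups in UPSTREAM.items()
                         if pdu in failed or ups <= failed}
            return [load for load, pdus in FEEDS.items()
                    if pdus <= down_pdus | failed]

        # "What if UPS 1 failed while PDU 2 was out for maintenance?"
        print(simulate_outage({"ups_1", "pdu_2"}))  # -> ['rack_A', 'rack_B']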

    The Building Blocks of DCIM

    Let’s also be clear that DCIM is not a single piece of software but a software category. It consists of two core building blocks: DCIM monitoring and IT Asset Management (ITAM).

    DCIM monitoring concentrates on collecting data about what is going on in the physical data center environment. It tells you what’s happening.

    ITAM keeps you informed about the IT equipment that’s inside your data center. It tells you what you have.

    When selecting a DCIM solution, make sure that it can provide:

    • Monitoring of power and environmental factors. Energy expenses account for 25 percent of a typical data center’s operating costs, covering both the power to run equipment and the cooling to remove the heat that equipment produces. If you know the temperature at a variety of spots in your facility, you can raise the ambient temperature slowly, eliminate hot and cold spots, and safely repeat the process to spend less on cooling without endangering your equipment. A clear view of your power chain shows where power comes from, where it goes, and what’s connected to what – so you can understand the upstream and downstream impact of an element failing or being removed, see available capacity clearly, and grow responsibly.
    • Alarms and alerts. Knowing when a value reaches a pre-established limit helps you take corrective action before problems become critical.
    • Trending. Real-time information is important, but trending values over time supports responsible future planning (see the sketch after this list). When is your data center busiest? How much power does it draw at those times?
    • Scalability. As data centers grow bigger, more complex, and more dense, a DCIM system must scale to keep up with the constant change in these packed facilities.
    • Failure simulation. Resilience metrics and information about system vulnerability and failure modes help maintain the highest possible uptime and predict what would happen in the event of a single failure or multiple failures in the power chain.
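
    As a minimal sketch of that kind of trending, the Python below aggregates hourly power readings by hour of day to surface the busiest periods; the sample readings are made-up data, not measurements from any real facility:

        # Minimal trending sketch: average power draw by hour of day from
        # historical readings. The readings are made-up sample data.
        from collections import defaultdict
        from datetime import datetime

        readings = [
            ("2015-10-19 09:00", 812.0),
            ("2015-10-19 14:00", 956.0),
            ("2015-10-20 09:00", 808.0),
            ("2015-10-20 14:00", 944.0),
            ("2015-10-21 02:00", 610.0),
        ]

        by_hour = defaultdict(list)
        for stamp, kw in readings:
            hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M").hour
            by_hour[hour].append(kw)

        # Print average draw per hour of day, busiest hours first.
        for hour, values in sorted(by_hour.items(),
                                   key=lambda item: -sum(item[1]) / len(item[1])):
            print(f"{hour:02d}:00  avg {sum(values) / len(values):.0f} kW")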

    Cisco’s Global Cloud Index predicts that annual data center traffic will reach a total of 6.6 zettabytes by 2016. With data processing and storage demands on the rise, operations are becoming more costly and workloads are shifting toward virtualization. This underscores the value of DCIM as a critical, “must-have” component of any well-run data center.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:00p
    Red Hat and Black Duck Partner on Open Source Container Security


    This post originally appeared at The Var Guy

    As container adoption via platforms such as Docker grows, who will keep containers free of security vulnerabilities? That’s the quandary Red Hat and Black Duck hope to solve through a partnership that focuses on security for open source containers.

    Security issues in the container market are a real concern. A study by BanyanOps found this year that 30 percent of the images in the official Docker repository contain “high priority security vulnerabilities.” That risk is not lost on executives or IT admins, who cited security problems as a leading obstacle to container adoption in a survey Red Hat conducted this summer.

    On Oct. 21, Red Hat and Black Duck announced a collaboration to screen containerized apps for security vulnerabilities and certify them as free of known risks. The offering will be based on Black Duck Hub, a service for scanning containers for security vulnerabilities, in combination with Red Hat’s OpenShift PaaS.
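
    In very simplified terms, that kind of scan can be pictured as inventorying the packages inside an image and checking them against a list of known-vulnerable versions. The Python sketch below is only a conceptual illustration with an invented advisory list and a hypothetical image name – it does not reflect how Black Duck Hub or Red Hat’s certification workflow actually works:

        # Conceptual sketch of a container image vulnerability check: list the
        # RPM packages inside an image, then flag any that appear in a made-up
        # advisory list. This is NOT how Black Duck Hub actually works.
        import subprocess

        # Hypothetical advisory data: package name -> vulnerable versions.
        ADVISORIES = {
            "openssl": {"1.0.1e-30.el6"},
            "bash": {"4.1.2-15.el6"},
        }

        def list_packages(image):
            """Return {name: version-release} for RPMs in the image (needs Docker)."""
            out = subprocess.check_output(
                ["docker", "run", "--rm", image,
                 "rpm", "-qa", "--qf", "%{NAME} %{VERSION}-%{RELEASE}\n"],
                text=True)
            return dict(line.split(None, 1) for line in out.splitlines() if line)

        def scan(image):
            findings = []
            for name, version in list_packages(image).items():
                if version in ADVISORIES.get(name, set()):
                    findings.append(f"{image}: {name}-{version} matches a known advisory")
            return findings

        # Example with a hypothetical image name:
        # print(scan("registry.example.com/myapp:latest"))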

    The companies also say that they “plan to include Black Duck technologies as a set of complementary services within Red Hat’s current container certification workflow for application builders such as Independent Software Vendors (ISVs).” That effort will be part of Red Hat’s comprehensive enterprise-focused container certification strategy, which it introduced in spring 2015.

    Both companies see this move as a way to speed enterprise adoption of containers, especially those based on Linux and open source technologies. “A significant part of an enterprise-ready container strategy is the ability to trust the code across the entire lifecycle of a containerized application, from development to management,” said Lars Herrmann, general manager, Integrated Solutions at Red Hat. “This collaboration demonstrates Red Hat’s continued commitment to delivering not only Linux container-based innovation, but also the tools and ecosystem to help enterprises adopt containerized applications that are secure, certified and supported.”

    Black Duck CEO Lou Shipley added, “Container technology is another breakthrough in the constant drive to increase development agility and get products to market more quickly. Speed and agility are key drivers for container adoption in the enterprise, but not at the expense of security. The Black Duck-Red Hat collaboration is rooted in the collective value that we deliver from an open source perspective, by helping to make containers safe for enterprise use.”

    This first ran at http://thevarguy.com/open-source-application-software-companies/102115/red-hat-and-black-duck-partner-open-source-container-secu

    4:13p
    HP to Shutter Helion Public Cloud in January


    This article originally appeared at The WHIR

    The challenges of turning a profit at public cloud’s low prices, and the popularity of hybrid in all its forms and definitions, have led HP to shutter Helion Public Cloud and increase its focus on private cloud and services. HP’s OpenStack-based Helion CloudSystem private cloud and its managed and virtual private cloud offerings will continue, while the sun sets on HP Helion Public Cloud on Jan. 31, 2016, the company said in a blog post.

    In the blog post, Bill Hilf, SVP and GM of HP Cloud, explained: “Today, our customers are consistently telling us that in order to meet their full spectrum of needs, they want a hybrid combination of efficiently managed traditional IT and private cloud, as well as access to SaaS applications and public cloud capabilities for certain workloads. In addition, they are pushing for delivery of these solutions faster than ever before.”

    Hilf said HP Helion OpenStack is growing well and HP Helion CloudSystem revenue is growing by double digits. HP plans to aggressively grow its partner ecosystem and integrate with different public clouds; it has already added support for HP Helion Eucalyptus and has worked with Microsoft on Office 365 and Azure support.

    The company will offer developers HP Helion OpenStack and HP Helion Development Platform to build cloud-portable applications.

    HP entered the public cloud market in 2011, and the Helion-branded portfolio of cloud products was introduced to the public only last May. The company admitted in April that it was not going to be able to compete head-to-head with the hyperscale public cloud giants, but it has continued to make inroads, such as Ormuco’s launch of a hybrid offering based on HP Helion OpenStack this summer.

    A report released in September by Synergy Research showed $20 billion in revenue being generated quarterly in public cloud, but to hold onto any of that market, HP is sacrificing its dream of Helion capturing a huge share of it.

    This first ran at http://www.thewhir.com/web-hosting-news/hp-to-shutter-public-cloud-in-january

    4:19p
    Western Digital Acquires SanDisk for $19B


    This post originally appeared at The Var Guy

    Western Digital has agreed to purchase all outstanding shares of SanDisk, the storage giant announced earlier this week.

    The deal, which is valued at an estimated $19 billion, or $86.10 per share, is set to close in Western Digital’s third quarter for 2016.

    The exact purchase price is liable to change based on the closing of Chinese company Unisplendour’s investment in Western Digital, announced last month, according to the announcement.

    Western Digital said the acquisition will further the company’s mission to become a global storage solutions company, and will effectively double its addressable market by providing access to high-growth segments. A merger will also enable Western Digital to access solid state technology at a lower cost, the company said.

    “This transformational acquisition aligns with our long-term strategy to be an innovative leader in the storage industry by providing compelling, high-quality products with leading technology,” said Steve Milligan, chief executive officer for Western Digital, in a statement. “The combined company will be ideally positioned to capture the growth opportunities created by the rapidly evolving storage industry.”

    Toshiba will honor its previous partnership with SanDisk following the acquisition. Western Digital said it will work with Toshiba to enable “vertical integration through a technology partnership driven by deep collaboration across design and process capabilities,” especially in terms of its NAND flash architecture.

    Steve Milligan will serve as CEO of the combined company, with SanDisk President and CEO Sanjay Mehrotra expected to join the Western Digital board of directors.

    The company will remain headquartered in Irvine, California.

    “Western Digital is globally recognized as a leading provider of storage solutions and has a 45-year legacy of developing and manufacturing cutting-edge solutions, making the company the ideal strategic partner for SanDisk,” said Mehrotra. “Joining forces with Western Digital will enable the combined company to offer the broadest portfolio of industry-leading, innovative storage solutions to customers across a wide range of markets and applications.”

    SanDisk’s acquisition marks the third major technology company purchase in the past week, following the announcements of Silver Lake Partners and Thoma Bravo’s acquisition of SolarWinds N-Able and Thales Security’s purchase of Vormetric.

    This first ran at http://thevarguy.com/information-technology-merger-and-acquistion-news/102215/western-digital-purchases-sandisk

    4:28p
    EMC, VMware Launch Joint Enterprise Cloud Business Under Virtustream Brand


    This article originally appeared at The WHIR

    EMC and VMware have formed a new business unit that will sell cloud services from EMC, VMware, and Virtustream, the cloud services provider that EMC acquired in May for $1.2 billion.

    The new cloud services business will operate under the Virtustream brand, and be jointly owned 50:50 by VMware and EMC. Virtustream’s financial results will be consolidated into VMware’s financial statements starting Q1 2016.

    EMC, which owns most of VMware, is in the process of being acquired by Dell in the biggest tech acquisition in history. The companies announced the $67 billion deal earlier this month.

    Under the direction of CEO Rodney Rogers, Virtustream is expected to generate “multiple hundreds of millions of dollars in recurring revenue in 2016” with a focus on enterprise cloud services.

    According to Tuesday’s announcement, VMware will establish a Cloud Provider Software business unit that includes existing VMware cloud management offerings and Virtustream’s software assets, including the xStream cloud management platform.

    The joint company will offer a range of managed services for on-premises infrastructure and IaaS.

    “Through Virtustream, we are addressing the changes in buying patterns and IT cloud operation models that we are seeing in the market. Our customers consistently tell us that they are focused on their IT transformations and journeys to the hybrid cloud. The EMC Federation is now positioned as a complete provider of hybrid cloud offerings,” said Joe Tucci, EMC Corporation Chairman and CEO.

    The new business will offer VMware vCloud Air, VCE Cloud Managed Services, Virtustream’s IaaS, and EMC’s storage managed services and object storage services.

    This first ran at http://www.thewhir.com/web-hosting-news/emc-vmware-launch-joint-enterprise-cloud-business-under-virtustream-brand

    6:22p
    Facebook Data Center in Sweden Has New Hydro-Powered Neighbor

    London-based data center provider Hydro66 has brought online its 100 percent hydro-powered data center in Boden, Sweden, about 10 miles from the $1 billion Facebook data center in Luleå. The facility’s anchor tenant is Hydro66’s sister company MegaMine, which provides bitcoin mining services.

    The data center came online with about 11,000 square feet of computer-room space and 3.2 MW of power capacity, which is relatively low as far as colocation data centers go, but with a 120 MW substation next door and the 78 MW Boden hydropower plant nearby, there’s plenty of opportunity to expand.


    Inside the data hall at Hydro66’s Boden data center (Photo: Hydro66)

    Hydro66 held a grand opening at the site this week, with Magdalena Andersson, Sweden’s finance minister, in attendance. The data center provider started construction about one year ago.

    The company is backed by David Rowe, of venture capital firm Black Green Capital. Rowe is known for having founded several early internet businesses in the UK in the 90s, including internet service provider Easynet and Cyberia, a cyber café.

    Black Green also funded MegaMine.


    Air vents line the side walls of the data hall to take in outside air for free cooling (Photo: Hydro66)

    Hydro66 is in Sweden for the same reasons Facebook is. There’s cheap and relatively clean hydro-power, a cool climate for free cooling, robust network infrastructure, and lots of support from government economic development groups.

    Officials’ largesse may increase in the near future, following the completion of a government-commissioned study which concluded that the data center industry should get the same tax breaks Sweden gives other energy-intensive industries that compete internationally.

    Hydro66 isn’t the only data center in the area that houses a bitcoin mining operation, and Facebook isn’t its only neighbor. KnC Miner, one of the largest players in the bitcoin mining space, operates a massive 30 MW data center in Boden, also taking advantage of low-cost hydro-power.

