Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
 

Tuesday, December 17th, 2013

    Time Event
    1:33p
    Preparing for DCIM in 2014: Best Practices for Getting It Right

    Lara Greden is a senior principal, strategy, at CA Technologies. Her previous post was DCIM Makes A Difference for Colo Providers.

    LARA GREDEN
    CA Technologies

    In 2014, many organizations will implement DCIM for the first time or expand or replace their existing implementation. Several critical factors will help organizations succeed. Here are recommended best practices based on experiences with our customers.

    1. Enable users
    First and foremost, enabling users means choosing a DCIM solution with usability in mind. The software should be role-based, meeting your users where they are. Seemingly small things, such as allowing single sign-on access, increase usability. One of the key benefits of DCIM is that its architecture brings together a large, valuable data set. To enable users, you need to know how they can leverage that data themselves. For instance, can they create their own metrics, or do new metrics require additional services costs?

    2. Integrate where it makes business sense
    Integration is at the core of DCIM. The most common first phase is integration with devices, power and cooling equipment, and the BMS, i.e., the physical data center. Ultimately, DCIM can even replace some of the tools previously used to monitor and manage the physical data center.
    But integration in the IT stack is also important. To identify what makes business sense for your first phase, evaluate the workflows you are supporting or enabling with DCIM. Then, map the IT systems for which data sharing is critical to supporting accurate decision making and reducing manual efforts. For instance, if you are already using intelligent alerting and users receive alerts through a service desk, then you probably have a solid business justification for prioritizing integration of the DCIM alerts with your service desk.

    3. Talk to others
    Implementing and making use of DCIM technology is not a one-off project, as discussed further in this white paper on DCIM implementation success. Having trust in the underlying technology is key to success, but so is confidence that your team will be successfully up and running in the expected time frame. That’s why talking with other organizations with DCIM experience is so valuable.

    2013 has seen an increase in uptake of DCIM. Seeing a live implementation of the technology at peer organizations and having a frank conversation with those peers will be valuable for your DCIM acquisition and deployment process. It will help you gain insights for defining the critical requirements you will want to clearly spell out as part of your procurement process.

    With December being a particularly critical time of year for many in data center infrastructure and operations, use your observations to plan for your DCIM implementation and capitalize on the awareness building and early wins you’ve achieved. Organizations that plan for their DCIM implementations with a focus on enabling users, integrating where it makes business sense, and learning from peers achieve greater business value from their implementations.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:42p
    Avago Plunges into Storage with $6.6 Billion LSI Acquisition

    Avago Technologies (AVGO) will acquire LSI for $11.15 per share in an all-cash transaction valued at $6.6 billion, the companies said yesterday. Avago is a leading global supplier of a broad range of analog semiconductor devices, and with LSI will add enterprise storage to its mix of wired, wireless and industrial businesses.

    “This highly complementary and compelling acquisition positions Avago as a leader in the enterprise storage market and expands our offerings and capabilities in wired infrastructure, particularly system-level expertise,” said Hock Tan, President and Chief Executive Officer of Avago. “This combination will increase the company’s scale and diversify our revenue and customer base. In addition to these powerful strategic benefits, as we integrate LSI onto the Avago platform, we expect to drive LSI’s operating margins toward Avago’s current levels, creating significant additional value for stockholders.”

    Avago said it expects to achieve annual cost savings of $200 million in the first full fiscal year after closing. Avago will fund the deal with $4.6 billion in loans from a group of banks and a $1 billion investment from private equity firm Silver Lake Partners. The transaction has been approved by the boards of directors of both companies and is subject to regulatory approvals in various jurisdictions, customary closing conditions, and the approval of LSI’s stockholders.

    “This transaction provides immediate value to our stockholders, and offers new growth opportunities for our employees to develop a wider range of leading-edge solutions for customers,” said Abhi Talwalkar, President and Chief Executive Officer of LSI. “Our leadership positions in enterprise storage and networking, in combination with Avago, create greater scale to further drive innovations into the datacenter.”

    2:00p
    The Automated Data Center: Two Layers of Technology Innovation

    What’s on the technology horizon for data centers? Automation!

    This week Data Center Knowledge presents a three-part series on data center automation and the potential role of robotics.

    There’s an interesting conversation taking place that revolves around automation, robotics and the future of the data center. We helped jump-start the discussion in May with The Robot-Driven Data Center of Tomorrow, and this week we’re going to look at what’s happening in the world of data center automation and how robotics may make an impact.

    Data centers will only become more critical over the coming years. As more users consume content delivered directly from cloud resources, the data center will need to handle the influx of new demands. This means creating efficiencies at all levels within the data center. And so, we’re seeing automation happen within the modern infrastructure at two layers: logical and physical.

    • Automation at the logical layer. Virtualization, cloud computing, and the modern data center are intertwined to deliver some pretty amazing workloads. In an ever-connected world, automating workflow at the logical layer is absolutely crucial. Why? It is the only way to dynamically handle user influx, new types of cloud content, and the new ways that organizations interact with the data center platform. Provisioning services are one example: platforms like Citrix Provisioning Server or the Unidesk infrastructure connect directly into virtualization brokers to help deliver and control both desktops and applications. Other platforms, like CloudPlatform, OpenStack and Eucalyptus, go further, enabling true cloud orchestration: organizations can granularly control hosts, clusters, zones, and even core virtual machine resources. Then there are technologies that push IT automation and configuration management even further. Solutions like those from Puppet Labs give administrators a unified approach to automation. Under this type of umbrella, an admin can manage a completely heterogeneous infrastructure, controlling platforms like VMware, Amazon EC2, Juniper Networks, Google Compute Engine, and even bare-metal systems. Furthermore, these tools allow organizations to enforce security and compliance policies by defining the desired state of a system and automatically monitoring all changes against that baseline.

    At the logical layer, this level of automation and orchestration will continue to advance. More logical systems are becoming interconnected as the resources they utilize become more streamlined and efficient, and intelligent APIs are reducing the number of hops that applications and data must take to reach the resources they need for optimal operation.

    • Automation at the physical layer. Although fully automated data centers aren’t quite here yet, we are seeing more robotics and intelligent hardware solutions appear within the data center environment. Robotic arms already control massive tape libraries for Google, and robotic automation is a widely discussed concept among other large data center providers. Furthermore, technologies like Cisco’s UCS chassis allow administrators to create powerful “follow-the-sun” data center models in which hardware automatically re-provisions itself for the appropriate set of new users. In a recent article, we discussed the concept of a “lights-out” data center. Major data center vendors are taking notice. In fact, Panduit is jumping on the automation bandwagon very quickly: it recently launched its new Industrial Automation Advisory Services, which bridge the gap between IT and control engineers for connecting, managing, and automating industrial networks and control systems. The news release goes on to explain that today’s industrial organizations are driven to increase production and reduce costs while maintaining quality and safety. As networks converge, the physical infrastructure becomes even more critical to support the demands of real-time control, data collection, and device configuration.

    To paint an even clearer picture: according to Gartner Research, 80 percent of mission-critical outages through 2015 will be caused by people and process issues, and more than 50 percent of those outages will be caused by change/configuration/release integration and handoff issues.
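    The configuration-management idea described here, declaring a desired state and converging the system toward it, can be sketched in a few lines of Python. This is a hypothetical illustration, not Puppet's actual model or API; real tools compile declared resources into a catalog and apply them through resource providers for packages, services, users and more:

    ```python
    # Hypothetical desired-state catalog: file path -> required contents.
    # Real configuration-management tools support many more resource types
    # and a richer declaration language; this only shows the core loop.
    DESIRED_STATE = {
        "/tmp/demo-app.conf": "port = 8080\n",
    }

    def drifted(path: str, wanted: str) -> bool:
        """Return True if the resource differs from its declared state."""
        try:
            with open(path) as f:
                return f.read() != wanted
        except FileNotFoundError:
            return True

    def converge() -> list:
        """Bring each declared resource to its desired state; list what changed."""
        changed = []
        for path, wanted in DESIRED_STATE.items():
            if drifted(path, wanted):
                with open(path, "w") as f:
                    f.write(wanted)
                changed.append(path)
        return changed

    converge()          # first run repairs any drift
    print(converge())   # a convergent second run reports no changes: []
    ```

    Run repeatedly, the loop is idempotent: once the system matches the declared state, converge() reports nothing, and any out-of-band edit is detected and repaired on the next pass, which is the "monitor changes against a baseline" behavior described above.
    
    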

    2:17p
    DCIM News: Nlyte Releases SaaS Version of DCIM Platform

    News from the DCIM marketplace includes: Nlyte releasing a SaaS version of its DCIM platform, iTRACS building a DCIM vendor ecosystem, and ABB entering a partnership with DataCenterVision.

    Nlyte Releases SaaS Version of DCIM Platform

    Nlyte released a SaaS version of its platform that allows users to get up and running with DCIM quickly. The SaaS version isn’t a stripped-down edition for trial purposes; it uses the same technology the company has always offered. The only things absent are the integrations with BMC and HP.

    “Our tool was designed to be completely web-based,” said Mark Harris, Vice President, Marketing and Data Center Strategy at Nlyte. “There’s no client to download, no application to install. We designed it from day one to be server-based with any browser access.”

    The company is strictly focused on managing the lifecycle of the data center.  “It is very clear that the time has come for enterprises to properly understand what assets they have in their data center and to properly manage the entire lifecycle of each of those assets in the context of a data center as a whole,” said Doug Sabella, Nlyte Software President and CEO.

    “Nlyte streamlines the workflows to get gear in there, keep it running, and get it off the floor,” said Harris.

    Harris believes that the emergence of the software-defined data center has put the company in a great position going forward. “The software-defined data center creates that abstraction layer between all the intricacies. It creates a delineation to the physical layer. The customers have always wanted to manage the physical components, but moving boxes that were connected and not broken was risky business. So what’s happened is that with the software-defined data center, they literally can lift out the body, swap out the chassis, without any change of business.”

    Nlyte allows people to swap out pieces or perform tech refreshes without the risky business of unplugging boxes with no knowledge of how it will affect the infrastructure. Customers benefit through better utilization and a more efficient infrastructure overall.

    The SaaS version is offered on a subscription model, so the company has removed many significant barriers to entry, such as big upfront costs and complexity. A SaaS-based version also allows the company to roll out updates to its customers more effectively. Customers are able to try new versions of the platform in a test environment before it’s rolled out, so they’re not surprised by updates and potential migration issues.

    iTRACS Building DCIM Vendor Ecosystem

    iTRACS launched its new “ourDCIM Developer Community,” an industry-wide grassroots movement to build an open, best-of-breed DCIM vendor ecosystem for managing and optimizing physical infrastructure.

    The community’s goals are to:

    • Facilitate the exchange of information between DCIM, information technology, IT service management (ITSM), facilities, building management systems and energy management systems vendors;
    • Expand the coverage of DCIM, providing customers with a truly holistic approach to infrastructure management across both IT and Facilities; and
    • Advance the continued evolution of infrastructure management technology.

    ABB Enters Partnership with DataCenterVision

    Power and automation technologies giant ABB is getting deeper into the data center. The company and Switzerland-based DataCenterVision have announced a global partnership agreement.

    ABB will resell DataCenterVision’s software application for data center asset, operations and IT infrastructure management. DataCenterVision will resell ABB’s DECATHLON software application for data center power and power-infrastructure management. The products are complementary, and both companies quickly saw the benefits of a combined offering.

    “The data center market is now a key priority and business for ABB, as it is the digital factory fueling the world’s contemporary economy: e-commerce, online business, cloud computing, etc.” said Wolfgang Felber, Group Vice President and Systems Group Manager at ABB Switzerland.

    DataCenterVision has designed and developed a full-web, client-server software application for data center asset, operations and IT infrastructure management.

    3:00p
    Currency Miners Cause Spot Shortages of Dedicated Servers
    Heavy buying by miners of digital currency is testing the capacity of some dedicated hosting and cloud computing service providers. (Photo by Zach Copley via Flickr).


    A number of dedicated hosting providers are running short of servers to lease, and attributing the shortages to heavy buying by customers mining for digital currencies. The surge in leasing of cheap servers appears to be tied to the soaring profile of Bitcoin, the flagship digital currency, which is driving interest in lesser known currencies.

    Some of these newer currencies, such as Primecoin and Litecoin, offer profit opportunities to those who apply computing power to verifying transactions, who are rewarded with newly-issued digital coins, a process that has come to be known as mining.

    To be clear, there are still plenty of dedicated servers available, as the dedicated hosting market is a competitive business with many players. But some of the more popular server configurations are under pressure, reflecting the influence of a relatively new class of customers.

    Primecoin is Likely Focus

    The server inventory shortages, first highlighted at Web Hosting Talk, have been seen at discount dedicated hosting providers, and focused on more powerful servers with excellent network connectivity. OVH is sold out of its higher end servers, as are Wholesale Internet and Datashack.

    “We’ve been slammed,” reported Aaron Wendel of Wholesale Internet, a provider in Kansas City. “We’ve sold three times as many servers in the last 30 days as we do in a normal month. The strange thing is people are buying them 50-100 at a time.”

    Hosting professionals attributed the shortages to mining for Primecoin (XPM), a newer cryptocurrency that can still be mined using standard servers, as opposed to the customized ASIC boxes now required to successfully mine bitcoins. In recent weeks it has become more difficult to profitably mine Primecoin, which is prompting some miners to add more computing capacity to try to keep pace.
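    For background, Primecoin's proof-of-work rewards finding chains of related primes, such as Cunningham chains, where each prime p is followed by 2p + 1. The sketch below is a toy illustration of that search on small numbers, not Primecoin's real algorithm (which uses probabilistic primality tests on enormous candidates), but it hints at why the work favors ordinary CPU-heavy servers:

    ```python
    def is_prime(n: int) -> bool:
        """Trial division; fine for small demo values, far too slow for real mining."""
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def cunningham_chain(p: int) -> list:
        """Cunningham chain of the first kind starting at p: keep doubling
        and adding one (p, 2p+1, 4p+3, ...) while each term stays prime."""
        chain = []
        while is_prime(p):
            chain.append(p)
            p = 2 * p + 1
        return chain

    # Longer chains are exponentially rarer, which is what makes the
    # search usable as a proof-of-work.
    print(cunningham_chain(2))   # [2, 5, 11, 23, 47]
    print(cunningham_chain(89))  # [89, 179, 359, 719, 1439, 2879]
    ```

    A miner effectively runs this kind of search over vast candidate ranges, so throughput scales with raw integer-arithmetic capacity, the resource cheap dedicated servers offer in bulk.
    
    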

    “Most of these shortages are with the budget providers, which definitely points towards mining activities,” added one forum participant.

    Testing Cloud Capacity

    It’s not the first time that interest in Primecoin mining has caused capacity problems for hosting and cloud providers. When the currency launched in early July, cloud computing provider DigitalOcean experienced capacity problems as Primecoin miners spun up huge numbers of virtual servers in an effort to earn coins. The surge in buying was prompted by a post on the Bitcoin Talk forum, which reportedly sent more than 18,000 users to DigitalOcean.

    “We’ve actually never seen anything like it before,” Mitch Wainer, DigitalOcean’s head of marketing, told The Register. “This increase in usage was the equivalent of our normal growth that we would do in 90 days, now in 2 days. We temporarily disabled NY1 and AMS1 for new customers only.”

    At least one cloud provider sees an opportunity in Primecoin, and is actively marketing its service to miners. CloudSigma is offering a turn-key Primecoin mining service, with a disk image that can be launched across multiple virtual servers to create a cloud mining farm.

    “What makes Primecoin in particular interesting is that you cannot (currently) use GPUs for the mining,” writes Viktor Petersson, the platform evangelist at CloudSigma. “This makes Primecoin extra attractive from a scalability perspective, as you can easily set up a large scale mining farm in the cloud and avoid having a lot of servers running in your house.”

    What’s not yet clear is whether miners of Primecoin and other digital currencies represent a customer category with staying power, or a source of periodic “surge” buying driven by fluctuations in the value and challenge of mining particular currencies.

    4:00p
    Intel Acquires Mindspeed for Wireless Infrastructure

    Intel acquires wireless assets from Mindspeed Technologies, Cisco and Ontario plan a $4 billion investment over 10 years for job growth, and Mellanox and Dell collaborate to integrate network cards into PowerEdge servers.

    Intel acquires Mindspeed wireless infrastructure division. Intel (INTC) announced a definitive agreement to acquire certain assets of the wireless infrastructure division of Mindspeed Technologies. Intel said the team and technology it is acquiring will make important contributions to Intel Architecture-based solutions for wireless access within mobile network infrastructure. Mindspeed (MSPD) provides network infrastructure semiconductor solutions and recently announced a merger agreement with M/A-COM Technology Solutions Holdings (MACOM). Speaking to the convergence of IT and communications technologies, Intel’s GM of the Communications Infrastructure Division, Rose Schooler, said: “The scalability, flexibility and operating expenses of ‘classic’ data centers have become a role model for carrier network operators looking to improve their economic profile. IT technologies and approaches are increasingly being used throughout the telecom infrastructure that manages our phone calls, provides wireless access, routes traffic, and enables internet surfing on mobile devices using 3G or 4G.”

    Cisco and Ontario plan job creation initiatives. Cisco (CSCO) and Ontario Premier Kathleen Wynne announced a 10-year agreement that focuses on adding up to 1,700 high-tech jobs, with a focus on research and development within the first six years. The agreement also includes a framework with the potential to grow Cisco’s total Ontario employee footprint to up to 5,000 by 2024, reflecting a potential total investment of up to $4 billion, including $2.2 billion in salaries alone. The Province of Ontario will provide up to $220 million in support of the total initiative. This new initiative builds on Cisco’s growing presence in Ontario, including investments in university chairs, planning for a new and expanded Toronto headquarters, the Pan Am Games technology infrastructure sponsorship, and Smart + Connected community initiatives. “Today marks a significant milestone for Cisco Canada and the Province of Ontario,” said Nitin Kawale, President, Cisco Canada. “This announcement builds on our existing partnership and our mutual commitment to drive productivity and create new economic opportunities through innovation. Together with the Province, we will create high-value jobs that will stimulate the economy. This initiative will also ensure that Ontario continues to be a leader in the information and communications technology industry, with a vast talent pool representing the country’s next generation of innovation.”

    Mellanox and Dell collaborate. Mellanox (MLNX) announced that its dual-port ConnectX-3 10/40GbE Network Interface Cards (NICs) are now fully compatible with qualified Dell PowerEdge servers and Dell networking solutions. Mellanox 10/40GbE NICs are backwards compatible with 10GbE and support RDMA over Converged Ethernet (RoCE). “Customers deploying Dell solutions with Mellanox dual port 10/40GbE benefit from our industry-leading performance and efficiency combined with the power of Dell’s server and networking solutions,” said Chuck Tybur, vice president global accounts and Americas OEM sales at Mellanox Technologies. “This combination results in a high performance solution with low total cost of ownership in power efficiency, system scaling efficiency and compute density.”

    5:30p
    Cisco Launches Desktop as a Service

    Cisco (CSCO) announced a new Cisco Desktop as a Service (DaaS) solution, which adds highly agile, cloud-based desktop virtualization to Cisco’s existing desktop virtualization product portfolio.

    The solution features deployment options from ecosystem partners Desktone by VMware and Citrix, and is available through partners such as ChannelCloud, Logicalis, Proxios, Netelligent, Quest and others. Customers can choose to build their own on-premises desktop virtualization solution or buy a cloud-based, as-a-service solution with Cisco-powered performance, scalability, and security.

    “Desktone by VMware and Cisco have delivered a complete and proven blueprint for enabling service providers to deliver a DaaS solution for delivering Windows desktops and applications as a cloud service based on VMware Horizon View,” said Peter McKay, vice president and general manager of DaaS, End-User Computing, VMware. “Our joint customers such as Logicalis, Quest, Netelligent, Dimension Data, Adapt and ANS are utilizing our unique architecture that includes multi-tenancy, self-service provisioning, multi-data center management, multi-desktop model support, grid-scale, and security to deliver DaaS and provide differentiated value-add services to their customers.”

    The DaaS solution is based on the Cisco Unified Computing System and the Cisco Unified Data Center architecture. To further strengthen this platform, Cisco has improved its underlying desktop virtualization solution to deliver greater scalability and performance, optimized cost, and wider client use-case support for DaaS deployments. DaaS can support up to 252 virtual desktops on a single UCS blade server, as well as end-user applications that require high-quality rendering of immersive 3D graphics. New UCS solution accelerator packs for desktop virtualization offer simplified ordering.

    “Citrix is working alongside Cisco and its new DaaS solution to help Service Provider partners scale quickly, easily and economically, while delivering high levels of performance and service. This enables partners to differentiate with flexibility in desktop deployment models while preserving margins to help them capture new service and revenue opportunities. Our multi-tenant solution based on XenDesktop and XenApp delivers a rich user experience, desktop, mobile and thin-client support and self-service ease-of-use,” said Mitch Parker, group vice president and general manager of Citrix.

