Data Center Knowledge | News and analysis for the data center industry
 

Friday, June 12th, 2015

    12:00p
    ViaWest Boosts Security and Compliance With AppliedTrust Acquisition

    The data center and service provider space continues to see a flurry of merger and acquisition activity. ViaWest has acquired AppliedTrust, a Boulder, Colorado-based security, consulting, and infrastructure services company.

    ViaWest isn’t just a data center facilities provider; it is also a provider of more hands-on managed and cloud services. The acquisition boosts its capabilities around managed services, particularly meeting security and compliance needs of its customers. The two companies will work toward integrating their services into one another’s platforms.

    AppliedTrust’s capabilities augment the ViaWest platform with fast enablement of secure hybrid services. Key services include IT assessment, migration, compliance consulting, cloud readiness, and deeper application support.

    “It expands our offerings to include broader risk management and compliance consulting services, where today we focus on infrastructure services and how they fit in our compliant data center and cloud product sets,” said ViaWest CTO Jason Carolan. “It’s also scale — in an area that continues to expand rapidly with new threats and dynamic needs on a global level.”

    “It’s all about speed to market — with cloud-first applications having attributes that now make sense across the application stack,” he said. “Companies are trying to move forward with continuous integration and development. We are especially finding that our mid-market customers need help to get there in order to compete.”

    In order for service providers to meet these needs, they are expanding services via acquisition, said Philbert Shih, managing director of Structure Research.

    “This deal and other recent ones reflect growing interest among hosting providers and data center operators to use M&A as a way to add specific capabilities and expertise in the area of managed services and consulting,” said Shih. “Hosting services are becoming more complex and multi-faceted, and service providers are moving to address the pain points that are being clearly articulated by end-user customers.”

    Recent examples of professional services acquisitions include regional player Involta acquiring Data Recovery Services and Tech Data’s acquisition of professional services company Signature Technology. The larger enterprise cloud providers are also using acquisitions to fill out professional services around cloud, such as IBM’s acquisition of BlueBox and Cisco’s acquisition of Piston.

    There’s a sea change occurring across both infrastructure and organization. Often described as IT moving from cost center to profit center, the evolution is about treating infrastructure not as a necessary burden but as a way to gain a competitive edge.

    That infrastructure edge requires a human counterpart. DevOps is driving an organizational evolution around agility, which requires infrastructure to evolve in support. Outsourcing to service providers is one way to stay agile in quickly changing times. Security and compliance are particularly big and time-consuming pain points that may be best left to a service provider.

     

    1:23p
    AWS Launches M4 General Purpose Instances

    Amazon introduced M4 instances for workloads requiring a balance of compute, memory, and network resources. The powerful general purpose instances come in five sizes and are a step up from M3 in terms of specifications. M4 also marks the first time Enhanced Networking has been available on general purpose instances.

    Amazon said the instances are suited for applications such as relational and in-memory databases, gaming servers, caching fleets and batch processing, as well as applications like SAP and Microsoft SharePoint.

    Last April, Amazon said it wants to replace the enterprise data center. It also wants everything else and is casting a wide net. To do that, it needs to offer instances that match all needs. Amazon already has compute-optimized (C4 family) and memory-optimized (R3 family) instances available. The M family is the porridge Goldilocks chose: powerful but balanced.

    M4 features dedicated bandwidth to Elastic Block Store and Enhanced Networking for higher packet per second (PPS) performance, low network jitter and low latency. Enhanced Networking delivers up to 4 times the packet rate of instances without Enhanced Networking, and consistent latency even under high network I/O.
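
    For readers who want to try the new instance type, requesting an M4 looks like any other EC2 launch; only the instance type string changes. Below is a minimal sketch using the boto3 SDK, assuming a placeholder AMI ID, key pair, and security group, that asks for an EBS-optimized m4.xlarge to take advantage of the dedicated EBS bandwidth.

        import boto3

        # Minimal sketch: launch one EBS-optimized M4 instance.
        # The AMI ID, key pair name, and security group ID below are placeholders.
        ec2 = boto3.client("ec2", region_name="us-east-1")

        response = ec2.run_instances(
            ImageId="ami-xxxxxxxx",            # placeholder AMI
            InstanceType="m4.xlarge",          # any of the five M4 sizes works here
            MinCount=1,
            MaxCount=1,
            KeyName="my-key-pair",             # assumed existing key pair
            SecurityGroupIds=["sg-xxxxxxxx"],  # assumed existing security group
            EbsOptimized=True,                 # M4 offers dedicated EBS bandwidth
        )
        print(response["Instances"][0]["InstanceId"])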

     

    The specs and pricing for the M4 family (source: AWS Blog)

    Inbound marketing provider HubSpot is using the new instances. “M4 instances offer an optimal balance of compute and memory for our cluster, and the m4.10xlarge, with 40 vCPUs and 160 GiB of memory, will allow us to significantly reduce our cluster size while driving down costs through better hardware utilization,” said Whitney Sorenson, Vice President of Platform Infrastructure at HubSpot in a release.

    The M4 instances use a custom Intel Haswell processor optimized for EC2. Intel also designed custom Xeon processors for Amazon Web Services to power C4. M4 processors run at a base clock rate of 2.4 GHz and can go as high as 3.0 GHz with Intel Turbo Boost, according to Amazon. C4 processors run at a base speed of 2.9 GHz, but with Turbo Boost can reach 3.5 GHz.

    AWS also lowered on-demand and one-year Reserved Instance pricing for the M3 and C4 instances by 5 percent in several regions as part of the M4 launch.

    3:00p
    Pluribus Joins Dell’s Open Networking Ranks

    Pluribus Networks has become the latest provider of an open source operating system to have its offerings distributed by Dell.

    Dell will make the Linux-based Netvisor operating system available on its Open Networking series of 10/40G switches.

    Pluribus Networks CEO Kumar Srikantan says the shift to open networking platforms is all but inevitable, as IT organizations look to reduce their internal operating costs to be more cost-competitive with external cloud service providers.

    “Legacy network vendors charge $100 a port and make 70 percent margins,” says Srikantan. “IT organizations today need to be able to get that number down to $50 a port.”

    However, reducing the cost of the switch can’t come at the expense of management functionality. What differentiates Pluribus, according to the CEO, is its software-defined network controller, which gives IT organizations greater visibility into their networking environments than proprietary networking software provides.

    Pluribus joins the ranks of IP Infusion, Cumulus Networks, Big Switch Networks, and Midokura as a provider of open networking software running on Dell network hardware.

    Dell has proposed that the time has come for the networking industry as a whole to standardize on a common set of open interfaces to facilitate adoption of open networking technologies.

    Of course, Dell also still sells proprietary networking software in a networking market it entered primarily by acquiring Force10 Networks. Coupled with its server and storage portfolio, Dell clearly sees extending its reach into networking as critical to competing with rivals such as HP and Cisco that have equally comprehensive IT infrastructure portfolios.

    At the same time, Dell is also using its new-found status as a private company to enter new market segments at disruptive price points that in the case of networking are enabled by reliance on open source software.

    The degree to which Dell and its open source allies can usurp Cisco’s dominance of the enterprise networking market remains to be seen. But given the fact that many web-scale companies have already adopted open source networking software running on white-box switches based on Intel servers or commodity processors, it’s only a matter of time before a significant portion of the data center market beyond web-scale follows suit to at least some degree.

    3:30p
    Friday Funny: Pick the Best Caption for “Data Center Colors”

    I wonder whose data center this is…

    Here’s how it works: Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon and we challenge our readers to submit a humorous and clever caption that fits the comedic situation. Then we ask our readers to vote for the best submission and the winner receives a signed print of the cartoon.

    Congratulations to Mike W., whose caption for the “Air” edition of Kip and Gary won the last contest with: “Oops, I thought the tiles with holes in them were defective. Guess I should put some of them back in.”

    Several great submissions came in for last week’s cartoon: “Twitter” – now all we need is a winner. Help us out by submitting your vote below!


     

    For previous cartoons on DCK, see our Humor Channel. And for more of Diane’s work, visit Kip and Gary’s website!

    4:00p
    DCIM Benefits Versus Costs: Direct, Indirect, and Hidden

    This is Part 3 of our five-part series on the many decisions an organization needs to make as it embarks on the DCIM purchase, implementation, and operation journey. The series is produced for the Data Center Knowledge DCIM InfoCenter.

    In Part 1 we gave an overview of the promises, the challenges, and the politics of DCIM. Read Part 1 here.

    In Part 2 we described the key considerations an organization should keep in mind before starting the process of selecting a DCIM solution. Read Part 2 here.

    In the first two parts of this series we examined the vendor promises for DCIM and guidelines for developing an RFI or an RFP. In order to maximize the potential benefits, each organization needs to define its own objectives, so you should be prepared to do a self-evaluation: how well is your organization currently managing its own data center and IT resources? This will highlight which areas require the greatest improvement and help you focus on which aspects and features of the myriad available DCIM tools are most needed to address those issues.

    Potential Facility Benefits

    Some of the potential facility benefits are direct and more readily quantified. The most well-promoted and well-understood is measurement and improvement of facility energy efficiency, expressed as PUE, or Power Usage Effectiveness. DCIM in and of itself cannot fix a major fundamental problem. However, it can provide a deeper view into the individual elements that collectively consume energy (electrical and cooling systems), which in many older facilities are not monitored separately. For example, in a large older facility with relatively poor energy efficiency (e.g., a PUE of 2.5), the largest energy waste will typically be the cooling system. There are some basic practices that can help improve cooling efficiency (such as raising the temperature a few degrees and installing blanking panels in the racks) without investing in a DCIM system. In many cases some simple changes may reduce cooling system energy usage (and cost) by 5-10 percent.
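
    As a quick illustration of the metric itself (not of any particular DCIM product), PUE is simply total facility energy divided by IT equipment energy, so even coarse meter readings yield a first estimate. The numbers below are invented for illustration.

        # Rough PUE estimate from two meter readings (illustrative numbers only).
        total_facility_kwh = 1500000.0   # annual energy drawn by the whole facility
        it_equipment_kwh = 600000.0      # annual energy delivered to the IT load

        pue = total_facility_kwh / it_equipment_kwh
        print("PUE = %.2f" % pue)        # 2.50 -- most of the overhead is typically cooling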

    While simple changes are cost-effective, further improvements can be more elusive. One of the direct benefits of a DCIM system is that it can measure and track all cooling system components (CRAC, CRAH, chillers, pumps, etc.) individually while correlating that data to the IT heat load. This allows it to show how much a change in one sub-system impacts the operation and performance of individual components and overall energy efficiency. The result can be a more substantial reduction in cooling energy, saving recurring energy costs, which is one element of the overall ROI justification for DCIM, a topic we’ll address in detail in Part 5 of the series. However, if the site is already relatively efficient (e.g., a PUE of 1.5 or less), DCIM is less likely to provide as much energy savings, which will reduce the value of that aspect of the system.

    Regardless of how much energy is saved, by implementing row- or rack-level environmental sensors within the whitespace, DCIM can help ensure that when temperatures are raised, no racks are running too hot (i.e., outside of ASHRAE envelopes). It can also display any “hot spots” (typically due to improper airflow). It can then measure the benefit of any remediation efforts, helping optimize them by monitoring the results as they are deployed. This rack- or row-level monitoring has an indirect but very significant benefit: potentially improving IT equipment reliability by helping maintain ASHRAE-specified environmental conditions, especially in higher-density data centers.
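
    An alerting rule of this kind is conceptually simple. The sketch below, with invented sensor readings and assuming the ASHRAE recommended inlet range of 18-27°C, flags any rack running outside that envelope.

        # Flag racks whose inlet temperature falls outside the ASHRAE
        # recommended envelope (18-27 C). Readings are invented for illustration.
        RECOMMENDED_LOW_C, RECOMMENDED_HIGH_C = 18.0, 27.0

        inlet_temps_c = {
            "rack-A01": 22.5,
            "rack-A02": 28.3,   # hot spot, likely an airflow problem
            "rack-B07": 17.1,   # overcooled -- wasted cooling energy
        }

        for rack, temp in sorted(inlet_temps_c.items()):
            if not (RECOMMENDED_LOW_C <= temp <= RECOMMENDED_HIGH_C):
                print("%s: inlet %.1f C is outside the recommended range" % (rack, temp))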

    Depending on the capabilities of the DCIM vendor’s offerings, Computational Fluid Dynamics-type modeling (or actual CFD capabilities) can provide graphic thermal mapping functions. This feature can also calculate and display “what-if” airflow scenarios for new equipment placement, which is also an element of overall capacity management. It also allows for cooling system failure impact-zone analysis, for example when a CRAC unit fails or needs to be shut down for maintenance. This helps improve overall availability, which has indirect but very valuable benefits. CFD functionality, however, can be a relatively expensive option. You may want to treat it as an add-on feature, if it can be purchased later on, once the basic system has been successfully installed.

    System Integration

    In many cases the Building Management System has existing sensors connected to it, which can provide information to the DCIM system. This requires a hardware gateway to interface with the existing BMS. The implementation usually involves system integration services, which typically require the BMS vendor’s cooperation and payments to that vendor. From a practical perspective, if the existing BMS vendor is proposing its own DCIM package, it should be in a position to offer low-cost (or no-cost) BMS integration as part of the enticement to purchase its DCIM system. This also presumably would avoid multi-vendor finger pointing. However, if the BMS vendor is not the selected DCIM vendor, you should anticipate significant system integration costs, since the BMS vendor will have no economic incentive to unlock its proprietary system.

    Direct Costs

    There are obviously several major cost areas associated with the above simplified facility example, starting with the basic cost of the core DCIM software and any optional feature modules. Furthermore, many vendors have additional license costs, which are typically based on the number of monitored devices (per electrical circuit, for example). In addition, there is the cost for a substantial number of environmental sensors for row or rack monitoring of the whitespace, as well as their installation. Finally, there is the cost of the dedicated DCIM server (or servers) that the software is installed on.
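
    The arithmetic behind those line items is straightforward once the license model is known. The sketch below uses invented prices purely to show how per-device licensing and sensors quickly dominate the core software cost as the monitored-point count grows.

        # Back-of-the-envelope direct cost model (all prices are invented placeholders).
        core_software = 50000          # core DCIM license plus optional modules
        license_per_device = 100       # per monitored device or circuit
        monitored_devices = 2000
        sensor_cost_installed = 100    # row/rack environmental sensor, incl. installation
        sensors = 400
        dcim_servers = 2 * 8000        # dedicated servers the DCIM software runs on

        total = (core_software
                 + license_per_device * monitored_devices
                 + sensor_cost_installed * sensors
                 + dcim_servers)
        print("Estimated direct cost: $%d" % total)  # 50k + 200k + 40k + 16k = 306000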

    Potential IT Systems Benefits

    The facility aspect is the most mature and developed area of DCIM. While IT capabilities, features, and benefits have been available for a while, they have been lagging for several reasons. First and foremost, there is still a major cultural divide between facilities and IT. Furthermore, IT systems have many more functional domains and system administrators, and in many cases diverse and proprietary management consoles that are closely tied to each system and may or may not be interoperable with a centralized IT system management console. Moreover, IT software is in a constant state of change, while hardware is also evolving, albeit at a slower pace. Even well before DCIM, many major (and minor) software vendors were trying to be the “uber” centralized IT management console.

    On the practical level, most DCIM implementations try to provide a granular look at the energy used by IT hardware. In fact, virtually all modern IT equipment has onboard management systems that can be queried via SNMP to allow DCIM packages to provide real-time power use and environmental conditions. However, despite the fact that this is technically possible, in most cases power monitoring at the rack level is gathered by branch circuit monitoring at floor-level PDUs, or by metered PDUs within the rack (intelligent power strips, some of which can meter per outlet to track energy used by the IT device). One of the main benefits of this is better rack-level capacity management and planning. It also significantly improves availability by helping to avoid circuit overloads and tripped circuit breakers if additional servers are blindly added to a cabinet. This is an indirect, yet very tangible and important benefit.
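
    As an illustration of the kind of polling involved (not any specific DCIM product’s code), the sketch below uses the pysnmp library to read a single value from a metered rack PDU. The address, community string, and OID are placeholders; the real OID depends on the PDU vendor’s MIB.

        from pysnmp.hlapi import (
            getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
            ContextData, ObjectType, ObjectIdentity,
        )

        # Poll one power reading from a metered rack PDU over SNMP v2c.
        # The IP address, community string, and OID are placeholders.
        error_indication, error_status, error_index, var_binds = next(
            getCmd(
                SnmpEngine(),
                CommunityData("public", mpModel=1),               # v2c, placeholder community
                UdpTransportTarget(("192.0.2.10", 161)),          # placeholder PDU address
                ContextData(),
                ObjectType(ObjectIdentity("1.3.6.1.4.1.0.0.0")),  # placeholder, vendor-specific OID
            )
        )

        if error_indication:
            print(error_indication)
        else:
            for name, value in var_binds:
                print("%s = %s" % (name, value))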

    Nonetheless, there is movement and progress to bring the holistic promise of DCIM to fruition, despite, and perhaps because of, the shift to the Software Defined Data Center, Software Defined Everything, and of course cloud-based resources. This will be driven by vendors and end-users, as well as service providers. In fact, late last year The Green Grid formed a DCIM work group to help examine the elements, aspects, and benefits of a unified DCIM system. However, IT security concerns represent a substantial impediment to widespread IT adoption of DCIM.

    Indirect Costs

    While vendors provide costs in their proposals, there are many indirect costs attributable to your own organization’s internal resources, such as overall project management during the installation process, system administration, and training. These costs should not be underestimated.

    Administration

    One of the promised benefits of DCIM is improved productivity, presumably by reducing the manual-process workload on staff through automatic polling of data from a multitude of devices. While this is a core function of every DCIM platform, the system still needs an administrator to handle tasks such as user access rights (permission levels, user screen views, etc.), report generation, adding new monitoring devices, and system database backups, to name a few. Like any other critical system, other technical staff should also be trained to provide extended coverage hours and back-up for the primary administrator. Overall, a properly implemented DCIM system can substantially reduce the workload of manually taking and logging measurements, especially in larger facilities (and across multiple sites), but the added administrative burden should be taken into account.

    User Training

    While the DCIM dashboard should provide a unified view and simplify monitoring and management of all the elements of the data center, like any other system, user training is required to make the most effective use of it. Some DCIM systems have more intuitive interfaces than others (some of this will become apparent during a product demo), which will affect how much user and administrator training is required. The first basic session may be included in the vendor’s price, but additional sessions for users, as well as higher-level technical administrator training, may come at additional cost. Moreover, the training time spent by end-user staff is itself an indirect cost that should be considered.

    The Bottom Line

    The above examples represent only a fraction of the potential benefits of DCIM. The size of your data center(s) and IT systems will obviously affect the relative value of those benefits, as well as the overall costs. When crafting your basic requirements or creating your ultimate DCIM “wish list,” plan for the end goal: a comprehensive DCIM system that improves operational and energy efficiency, as well as overall data center availability. But be realistic in your projected timelines and expectations, and expect to execute in phases in order to be successful. Consider doing a proof-of-concept baseline project first to gain experience, and then use that experience for more accurate budgeting and for selecting the most practical functions for a full-scale deployment.

    What are the challenges in DCIM implementation? How do you justify the price tag of DCIM? Come back to read parts four and five of this series in the Data Center Knowledge DCIM InfoCenter.

    Read Part 1 here

    Read Part 2 here

    4:30p
    QTS Joins Cisco Intercloud Fabric

    QTS Realty Trust announced at Cisco Live this week that it is now offering solutions built on Cisco Intercloud Fabric. QTS has a longstanding relationship with Cisco; its Infrastructure-as-a-Service is already Cisco-powered, so joining Intercloud is a natural progression.

    Intercloud is essentially a federated cloud, with Cisco acting as a neutral traffic cop of sorts as these disparate cloud providers hook up on what it calls the Intercloud Fabric. Applications can move between clouds in the Intercloud Fabric, and the security framework comes along for the ride.

    Cisco is making progress in becoming a key federation point thanks to its focus on the network. There’s a desire for federation in the market, according to the company, because it helps different clouds compete and extend, but providers have been hesitant to federate for the sake of federation alone. Cisco is acting as a trusted gateway for enterprise cloud activity and keeping security consistent across clouds within its fold.

    QTS offers cloud services in what it calls the 3 Cs portfolio: C1 is wholesale, C2 retail colo, and C3 cloud and managed services. It also recently boosted its cloud offerings with the acquisition of Carpathia Hosting, a big player in the government space. The company has a big stake in secure, compliant solutions, and Cisco Intercloud gives it capabilities to extend beyond its provider cloud securely. QTS can offer extended, validated services easily and deliver SLA commitments.

    In terms of service providers, there are over 30 Intercloud members, with a few other notables in the data center world being Sungard Availability Services and Peak 10. Other members include big tech vendors, systems integrators, and end user enterprises. Cisco recently invited cloud application providers to the Intercloud party.

    Intercloud also acts as a selling point for Cisco to come in and help service providers build their clouds. Joining Intercloud is a perk of building an IaaS offering on Cisco, as one smoothly leads into the other. Cisco has been bulking up its enterprise private cloud capabilities through the acquisitions of Metacloud and, most recently, Piston Cloud Computing.

    “As our customers’ IT needs continue to evolve, QTS is dedicated to elevating our hybrid cloud offering and providing the most flexible, scalable IaaS solutions available to support their applications,” said Frank Eagle, vice president of business development at QTS, in a release. “Integrating Cisco Intercloud Fabric into our solution portfolio is one way that we are acting on that dedication.”

    QTS wrote on its blog about the three big lessons service providers have learned about cloud: lock-in is bad; solving business problems is the main driver; and security and integration are the big pain points. With enterprise cloud, don’t be a walled garden, but be secure. Those goals often pull against one another, but that’s a contradiction Cisco is hoping to address with its Intercloud strategy.

    5:00p
    Hurricane Electric Connects To 100 Internet Exchanges

    This week Hurricane Electric announced that it has connected to 100 different Internet exchanges. Hurricane Electric president Mike Leber says its Internet Backbone now has connections to more Internet peering exchanges than any other network on the Internet.

    “Our network now spans 23 countries on four continents,” says Leber. “Our goal is to soon be in 100 countries.”

    In the age of the cloud, Internet exchanges have taken on added significance because cloud application providers are often trying to target specific geographic markets. As a result, there’s a lot of interest in making sure that application traffic remains in the same geographic region as the data centers being used to deliver access to a specific class of application services.

    Among large network operators such as Hurricane Electric, the proliferation of those services creates a race to sign up Internet exchanges that enable application providers to send traffic directly to any number of end users in a specific geographic region.

    As Internet networking services become more critical to the economic vitality of any given region, the number of Internet peering exchanges has been increasing dramatically, especially among commercial Internet peering exchange operators that focus on commercial applications.

    Leber adds that the rise in mobile, high definition video and Internet of Things (IoT) applications is driving most of the demand all across the globe. In fact, a recent Visual Networking Index: Forecast and Methodology, 2014–2019 report from Cisco forecasts that global IP traffic will surpass the zettabyte threshold in 2016 and the two zettabyte threshold in 2019.

    Hurricane Electric is betting that most of those new applications will use native IPv6 addresses rather than IPv4 addresses, which are becoming increasingly hard to come by. To support that anticipated demand for Internet services, the company has been adding 100G circuits between core routers in its network in Europe, North America, and Asia.

    Correction 6/12: incorrect references to peering agreements have been removed.

