Data Center Knowledge | News and analysis for the data center industry

Monday, May 18th, 2015

    12:00p
    Check Out the New DCIM InfoCenter by Data Center Knowledge

    Today we launch the DCIM InfoCenter on Data Center Knowledge. While there has been some consolidation in the DCIM space in recent years, the number of DCIM tools and their providers in the market is steadily growing.

    What is also growing is the need for IT and facilities staff in data centers to collaborate or at least speak the same language. This is quickly becoming a developer’s world, where technology vendors at every layer of the technology stack are doing everything possible to enable developers, startup or enterprise, to write and deploy more software and to do it faster.

    Implications for the data center infrastructure underneath are obvious. More software means more demand for IT capacity. The way to expand that capacity in the most efficient manner is to align facilities resources with IT resources to the maximum extent possible, and DCIM tools, a category created about five years ago, promise to help.

    As the need becomes more and more acute, companies need guidance in going through the long and complex set of decisions on the road from learning to deployment. This is where the DCIM InfoCenter comes in. As part of Data Center Knowledge, the data center industry’s journal of record that is now 10 years old, the InfoCenter can be your primary go-to information resource for DCIM decisions.

    In this new section we will go from explanations of the basics to nuanced how-to guides. We’ll cover case studies of real-world DCIM deployments, help you learn from what others have done, and educate you about all the key vendors in the space. And, true to DCK’s heritage, the InfoCenter will bring you the latest news in the DCIM space.

    12:14p
    Dupont Fabros Sees Positive Data Center Leasing Trends But Flat Revenue

    Dupont Fabros revealed key data center leasing activity in upcoming properties in its latest earnings report. The wholesale data center provider executed its first lease at its upcoming CH2 Chicago facility and opened a phase in Santa Clara fully pre-leased.

    The Santa Clara facility is the company’s first property outside of Virginia that has opened 100 percent pre-leased and Dupont Fabros no longer has any vacant space in the Santa Clara market. The upcoming Chicago facility is now 20 percent leased for the first phase, which is anticipated to open in the third quarter.

    The company reported earnings last week. Revenue for the first quarter of 2015 was $107.3 million, up from $102 million in the same quarter last year but slightly down from $108 million in the previous quarter. However, there were extenuating circumstances.

    An undisclosed customer filed for bankruptcy, negatively impacting Dupont Fabros’ earnings. The customer filed for bankruptcy in February and did not pay base rent owed for January and February, but did pay for operating expenses, direct electric and management fees in those months. The customer’s leases total over 6 megawatts across Virginia and New Jersey.

    Dupont Fabros also announced its first mini-wholesale lease for 200kW. The lines between retail and wholesale colocation have blurred over the past several years, with wholesale providers pursuing increasingly smaller deals than what has traditionally been called wholesale. The demarcation line between wholesale and retail keeps dropping.

    These smaller deals mean wholesale providers can grow customers, often high-growth companies, into wholesale space. They also open up wholesale as an option to customers in the 200-500kW range. Several providers have announced their intentions to pursue smaller deals, including Dupont Fabros.

    The leases signed in the quarter represent 2.2 megawatts and over 9,000 square feet of space. The company also renewed two leases with one customer totaling 2.3 megawatts and close to 11,000 square feet of space and signed close to 5 megawatts worth of deals post-first quarter results.

    The company also noted that average base rent per kilowatt per month increased to $98, up from $95 in 2014. Dupont Fabros commented that the uptick was due to the rationalization of available supply in their markets.

    The uptick in base rent is a positive sign for the wholesale provider space. In 2013, analysts were concerned with rising capital expenses and declining rents. Analysts are currently positive on REITs, following five publicly traded data center REITs delivering healthy returns over the past 12 months with better growth rates than the REIT category as a whole.

    Dupont Fabros also began development of the second phase of its Virginia ACC7 data center during the quarter. The expansion will add close to 9 megawatts and 50,000 square feet.

    This is the first quarter for Christopher Eldredge, who was recently appointed president and chief executive officer of Dupont Fabros. “The first ten weeks of my tenure as CEO have been very exciting,” he said. “Demand for DFT’s product remains strong in our prime markets. We continued to experience leasing success in Santa Clara as SC1 Phase IIB opened 100% leased, and we also executed our first lease at CH2. We have also embarked on the development of a strategic plan, the results of which we plan to share with you when complete.”

    1:57p
    Red Hat Introduces Cloud Suite, Its Take On End-To-End Cloud Stack

    Red Hat introduced Red Hat Cloud Suite, its take on the full stack, which includes Infrastructure-as-a-Service, Platform-as-a-Service and a unified management framework, Red Hat CloudForms, that supports a hybrid deployment model. The suite is a fully open source solution from bare metal to applications.

    Red Hat is combining three of its popular offerings here in a bid for deeper integration and an easier, turnkey solution across the stack. The IaaS is based on Red Hat Enterprise Linux OpenStack Platform, the PaaS is based on OpenShift while management comes via Red Hat CloudForms and Red Hat Satellite.

    “Not everybody knows that we have products across the full stack,” said Rob Cardwell, vice president, Application Platforms, Red Hat. “This provides customers with more speed, agility and consistency.”

    There is a convergence occurring, according to Cardwell. Infrastructure people want to move to the application layer in terms of what they’re provisioning and the application guys want to move down to the infrastructure layer.

    “They’re certainly starting to overlap,” said Cardwell. “The reason for the third leg (unified management via CloudForms) is we want to make it easier for the Ops guy to handle Dev and vice versa.”

    All three products do have out-of-the-box integration with the other layers, and there are reference architectures that describe how to go about deploying these three pieces. However, this is a tightly coupled combination.

    Significant General Availability (GA) releases of each component are happening over the next several months. Each will have its own release cadence, but as the components go through major upgrades, the integration points become even better, according to Cardwell.

    Those upcoming releases will include better support for Docker Containers and container management systems like Kubernetes.

    The unified management piece, CloudForms, stems from the company’s acquisition of ManageIQ. Its capabilities extend across the different layers, providing management, monitoring and deployment both on-premises and off-premises. “We’re fulfilling our open hybrid cloud vision,” said Cardwell. “CloudForms is the central control point.”

    Red Hat will continue to see competition at each layer. In terms of IaaS, there are a multitude of OpenStack vendors and distribution providers. In terms of PaaS, OpenShift is most often compared to Cloud Foundry. There is also competition from other vendors with their own combinations of OpenStack and Cloud Foundry, such as the recently announced Mirantis offering.

    “We have customers using each of these pieces, but it’s also serving notice that Red Hat is serving the full stack,” said Cardwell. “What won’t be highlighted too much in this announcement is our middleware offerings. JBoss is available in OpenShift if you want to extend the cloud suite.”

    2:00p
    Mirantis Extends OpenStack Alliances

    Mirantis, a provider of a distribution of OpenStack, last week revealed that it has inked alliances with both Oracle and Pivotal, a unit of EMC.

    Under terms of the deal with Oracle, the Oracle database running on Oracle Solaris operating systems can now be provisioned via the OpenStack Murano application catalog project.

    The alliance with Pivotal calls for integration of the Cloud Foundry Platform-as-a-Service (PaaS) environment commercially sold by Pivotal and the distribution of OpenStack created by Mirantis. Under the terms of that deal Mirantis has also agreed to resell the Pivotal Cloud Foundry PaaS.

    Boris Renski, chief marketing officer for Mirantis, says that while not every customer is trying to implement OpenStack and Cloud Foundry at the same time, most installations of the Cloud Foundry PaaS are dependent on a version of OpenStack being installed.

    To that end, IBM, Hewlett-Packard and others have gone to great lengths to package OpenStack and Cloud Foundry together, a move that Mirantis is now able to counter via its relationship with Pivotal.

    The alliance with Pivotal comes on the heels of an agreement between Mirantis and EMC to create a reference architecture for deploying OpenStack across server and storage infrastructure.

    In general, Renski says that IT organizations that make use of hardened distributions of OpenStack from vendors such as Mirantis are enjoying considerably more OpenStack success than those that choose to download raw open source OpenStack code.

    “We have customers that have scaled OpenStack to thousands of physical hosts,” says Renski. “There are a lot of OpenStack configuration issues so if you use the raw bits OpenStack may not scale or even work at all.”

    Renski says that OpenStack continues to gain momentum as an alternative to commercial management frameworks that wind up being a lot more expensive to deploy. Most existing IT organizations, however, already have major commitments to platforms such as VMware. Over time it will be interesting to see whether OpenStack simply supplants VMware or if the two frameworks will be deployed in parallel to support different classes of workloads inside the data center.

    Nor is it clear just how entrenched VMware is inside the data center. While the VMware hypervisor is widely deployed, there has never been much consensus surrounding management frameworks inside the data center. Now, as IT organizations ponder just how they will make the transition to a new era of software-defined data centers, it’s clear that the battle for management supremacy inside those next-generation data centers is only just beginning.

     

    2:05p
    How Green is your Data Center?

    Kobi Haggay is Vice President of Products and Marketing for RiT Technologies.

    Demand for data center operations is rising, and so are energy costs. Global data center traffic is estimated to grow threefold from 2012 to 2017, and power requirements will increase as well, according to Devva Bodas, lead architect for The Green Grid Association.

    Many organizations are announcing green initiatives. Apple announced that its massive data centers in Ireland and Denmark will be powered entirely by renewable energy, and Amazon is investing in an Indiana wind farm. But it’s not just the huge enterprises that feel the pressure to be energy efficient. Specifications for communications rooms and data centers are also getting more challenging because of tougher space, power and environmental constraints, and reducing energy costs requires more precision than ever for companies of all sizes.

    Here are some measures you can take to reduce your carbon footprint.

    You Can’t Manage What You Can’t Measure

    The first step in greening your data center is understanding, in detail, what you have, what you use and don’t use, and why. To do so, companies need to take inventory of their data center resources. Everyone wants to skip this task because it’s tedious. There is a temptation not to “waste” time and money gathering and analyzing data, and instead to do something immediately to make things better. But you can’t make informed decisions unless you can show on paper what you have, what you use, and how much it costs.

    There are tools that can provide complete real-time control of all network physical components and their connections. The AIM (automatic infrastructure management) standard, due to be released later this year, describes systems that will enable the automatic documentation of all network components and their connections, resulting in a better-managed network with greater energy efficiency.

    The idea behind AIM is simple yet powerful: to enable real-time management of the network infrastructure by using self-aware network components, a central data repository and intelligent processes. To accomplish this task, IP discovery is used to build an accurate topological map in real time.
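    To make the IP discovery step concrete, here is a minimal sketch, not any vendor’s actual implementation: it pings every address in a /24 and records which hosts answer, the kind of raw reachability data a real AIM system would enrich with SNMP, LLDP and patch-panel information. The subnet, the timeout and the Linux-style ping flags are assumptions.

```python
# Naive discovery sweep: ping each address in a subnet and collect responders.
# Assumes Linux ping flags (-c count, -W timeout in seconds).
import ipaddress
import subprocess

def discover_hosts(subnet="10.0.0.0/24", timeout_s=1):
    """Return the addresses in `subnet` that answer a single ICMP echo."""
    live = []
    for ip in ipaddress.ip_network(subnet).hosts():
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), str(ip)],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode == 0:
            live.append(str(ip))
    return live

if __name__ == "__main__":
    for host in discover_hosts():
        print("responding:", host)
```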

    Everything can be documented in the database: racks, servers, switches, routers, patch panels, storage, PDUs, power strips, UPSs, switchgear, switchboards and more. This information can be stored at a high level of granularity, where all ports and all slots are described and named, to optimize space in equipment rooms and in racks. Configurable devices, such as enclosures, can be managed with a high degree of accuracy, showing the exact configuration and making sure that the right blades are put in the right places.
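    As a rough illustration of that level of granularity, the sketch below models a tiny inventory in which every device, rack slot and port is named; the field names are illustrative and not taken from any specific AIM product.

```python
# Hypothetical inventory records: every device knows its rack, slot and ports,
# so free rack space and unpatched ports can be found with a simple query.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Port:
    name: str                 # e.g. "Gi1/0/1" or "eth0"
    connected_to: str = ""    # far-end device:port if patched, empty if free

@dataclass
class Device:
    name: str                 # e.g. "sw-row3-top"
    kind: str                 # "switch", "server", "PDU", "patch panel", ...
    rack: str                 # rack identifier
    slot: int                 # rack unit position
    ports: List[Port] = field(default_factory=list)

inventory = [
    Device("sw-row3-top", "switch", rack="R3", slot=42,
           ports=[Port("Gi1/0/1", connected_to="srv-db-01:eth0"), Port("Gi1/0/2")]),
    Device("srv-db-01", "server", rack="R3", slot=20,
           ports=[Port("eth0", connected_to="sw-row3-top:Gi1/0/1")]),
]

# Unpatched ports are candidates for reclaiming or for new connections.
unused = [(d.name, p.name) for d in inventory for p in d.ports if not p.connected_to]
print("unpatched ports:", unused)
```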

    These systems can also be used to identify underutilized components for better capacity management. When capacity isn’t optimized, it results not only in higher energy costs, but it can also contribute to longer response times and lower productivity.

    In addition, more accurate and complete documentation reduces the labor required to repair, move or change devices. Human error, such as miscalculations, incorrect port locations or mispatches, leads to wasted resources and unnecessary downtime.

    Even more importantly, AIM can support remote network moves, adds, and changes that eliminate travel time and expenses.

    Monitoring for Energy Efficiency

    The same system can be used to monitor temperatures, including the return water temperature of cooling systems, to reduce energy costs. Real-time monitoring of power and energy ensures that cooling matches the real need.

    In addition, by monitoring when IP addresses are used, it is possible to pinpoint underutilized work areas and to dim lights and reduce cooling or heating requirements accordingly. This more surgical approach to energy deployment can result in significant savings.

    According to Gartner, intelligent infrastructure management in the data center can cut operational costs by 20 to 30 percent, including optimized power and space utilization. This is why AIM platforms are advocated by industry leaders as a new best-practice for managing data centers.

    It may be difficult to compete with Amazon and Apple when it comes to energy efficiency. However, when the planning and organization of the data center is supported by AIM, data center managers can start with a complete and accurate knowledge of what they have. This level of precision can only result in reduced energy requirements and more efficient operations.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:00p
    Cloud Security – Tips for a Better Cloud Architecture

    In our previous article, we took a look at how cloud providers are actively creating policies that ensure optimal cloud security. With so much cloud growth, it’s becoming very clear that more organizations are adopting some kind of cloud model to optimize their own businesses. Still, while the big cloud service providers are doing a good job around security, there are areas for improvement within the private data center. Smaller cloud providers, too, must always ensure the integrity of their client base.

    Consider this very recent Ponemon study looking at data breaches. Although the study looks at a number of different security elements, here are three important points to consider:

    • The cost of data breaches increased. Breaking a downward trend over the past two years, both the organizational cost of data breaches and the cost per lost or stolen record have increased. On average, the cost of a data breach for an organization represented in the study increased from $5.4 million to $5.9 million. The cost per record increased from $188 to $201.
    • Malicious or criminal attacks result in the highest per capita data breach cost. Consistent with prior reports, data loss or exfiltration resulting from a malicious or criminal attack yielded the highest cost at an average of $246 per compromised record. In contrast, both system glitches and employee mistakes resulted in much lower average per capita costs at $171 and $160, respectively.
    • The results show that the probability of a material data breach over the next two years involving a minimum of 10,000 records is nearly 19 percent.

    With that in mind, what areas need improvement when it comes to cloud security, and what are some overlooked security aspects that should be considered when creating a cloud platform? Let’s look at a few areas that have challenged organizations when it comes to cloud and multi-tenancy.

    • Checking for port openings. If you’re a small organization, this might be a bit easier for you. But what if you’re a large data center or cloud organization? What if you have multiple data center points and different firewalls to manage? How well are you keeping an eye on port controls, policies and how resources are distributed? Most of all, if you decommission an application using a specific port, do you have policies in place to shut that port down (see the sketch after this list)? Port, network and security policy misconfigurations are potential causes for a breach. Even if you have a heterogeneous security architecture, know that there are tools that will help you monitor security appliances even if they’re from different manufacturers.
    • Improperly positioning hypervisors and VMs to be outside-facing. I still see this happen every once in a while. In some cases, a VM must be externally facing or a hypervisor needs to be positioned in the DMZ. However, it’s critical to take extra care with these kinds of infrastructure workloads. Are they interacting with other internal resources? How well are network policies controlling access to that hypervisor and the VMs? Remember, your hypervisor has access to a lot of critical components within your data center. Even host-level access can be dangerous if not properly locked down.
    • Not properly locking down portals, databases, and applications. You can have the best underlying server, hypervisor and even data center architecture; but if your applications have holes in them, you’ll have other problems as well. Some pretty big breaches have happened because a database wasn’t properly locked down or an application wasn’t patched. This is a critical piece which can’t be overlooked, especially if these applications are being provided via the cloud.
    • Not ensuring critical data is locked down properly. There are powerful new tools around IPS/IDS and data loss prevention (DLP). Are you deploying them? Do you have policies in place for monitoring anomalous traffic hitting an application? Do you know if a user is accidentally (or maliciously) copying data from a share or network drive? How good are your internal data analytics? These are critical questions to ask to ensure that your environment is locked down and that data isn’t leaking. Big cloud providers go out of their way to ensure that multi-tenant architectures stay exactly that – multi-tenant. Your data must be isolated when needed and have very restricted access. Furthermore, that information must regularly be tested and truly segmented using next-generation networking and security policies. If not, the results can be similar to what Sony, Target, or even Anthem experienced.
    • What are you monitoring externally vs. internally? Visibility and monitoring are critical to keeping a secure cloud and data center architecture. Log correlation and event management allow you to catch issues quickly and even isolate them to network segments, VMs, or even a physical server. New security tools allow you to control the flow of information very granularly within your own ecosystem, so much so that you can specify that just one server communicates over a specific VLAN pointing to a specific port on a unique switch. And you can encrypt that data internally and externally. The key is being able to monitor all of this in the process and automate responses. This not only creates better visibility, but allows your security model to be even more proactive.
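    As a starting point for the port-opening check in the first bullet above, here is a minimal sketch, not a substitute for a proper scanner or firewall audit: it tests whether ports that should have been closed after an application was decommissioned still accept TCP connections. The host address and port list are placeholders.

```python
# Check whether supposedly decommissioned ports still accept TCP connections.
import socket

def open_ports(host, ports, timeout_s=2):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout_s)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.append(port)
    return found

# Ports a retired application used to listen on; anything still open here
# should be reconciled against the current firewall policy.
decommissioned = [8080, 8443, 9200]
print(open_ports("203.0.113.10", decommissioned))
```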

    Remember, the cloud has a lot of moving parts. Much like gears, these parts all work together to allow complex workloads to be delivered to a variety of users spanning the world. It’s important to note that cloud adoption will only continue to grow. By monitoring and testing your own cloud and data center environment and applying security best practices, you will be prepared for whatever comes your way.

    4:05p
    Internap Advances OpenStack Offerings

    At the Vancouver OpenStack Summit Monday, Internap announced two key initiatives to broaden and enhance its OpenStack-powered cloud portfolio. The company added a new bare-metal service and advanced interoperability through identity federation and DefCore standards.

    Using its bare-metal AgileSERVERs with OpenStack, Internap says it will offer the ability to provision and manage bare-metal instances, choosing from a range of server options, through the OpenStack Horizon management dashboard. The new offering will also feature IPMI, NIC bonding and up to 10 VLANs per environment, according to the company. The bare-metal product will initially be offered out of Internap’s Montreal data center for the beta program.

    Internap has been an ongoing contributor to the OpenStack project since 2010. Its bare-metal AgileSERVERs are a core product offering across its global data centers, and its AgileCloud is 100 percent OpenStack under the hood. The OpenStack Ironic Bare Metal Provisioning program was released in the Icehouse OpenStack release a year ago, and can be thought of as a set of hypervisor APIs and plugins for physical servers.
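    For a sense of what provisioning against such a service could look like from the API side, here is a minimal sketch using python-novaclient that boots an instance against a bare-metal flavor exposed by the operator. The flavor and image names, credentials and auth URL are placeholders, not Internap’s actual values or API.

```python
# Hypothetical example: boot a server against an operator-provided bare-metal
# flavor using python-novaclient. All names and credentials are placeholders.
from novaclient import client

nova = client.Client(
    "2",                                    # Compute API version
    "demo-user", "demo-password",           # username, password
    "demo-project",                         # project/tenant name
    auth_url="https://cloud.example.com:5000/v2.0",
)

flavor = nova.flavors.find(name="baremetal-general")  # hypothetical flavor name
image = nova.images.find(name="ubuntu-14.04")         # hypothetical image name

server = nova.servers.create(name="bm-node-01", image=image, flavor=flavor)
print(server.id, server.status)
```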

    Internap also announced that it will take full advantage of OpenStack’s Kilo release and corresponding federated identity enhancements in 2015. It also completed the OpenStack Foundation’s new set of interoperability tests for products branded as “OpenStack-powered.”

    “The true power of the OpenStack ecosystem is realized through the broad support of our community partners in leveraging the platform’s flexibility to offer developers increased choice in building their applications,” said Jonathan Bryce, executive director at the OpenStack Foundation. “Internap is a longtime leader in the OpenStack community. Their work in bare-metal provisioning provides a hosted alternative for businesses looking for options beyond VMs.”

    6:00p
    Alcatel-Lucent Innovates Undersea Cable System Technology

    Alcatel-Lucent announced advancements for its undersea cable systems and a technological record for subsea communications.

    In its Alcatel-Lucent Submarine Networks (ASN) division, the company launched the ASN 1620 SOFTNODE platform, which it says is capable of 240-terabit-per-second transmission across multiple fiber pairs. In an early trial, the company says the new platform demonstrated 12.6 Tbps of data per fiber pair over the Africa Coast to Europe (ACE) system.

    The new platform is targeted at replacing legacy undersea platforms, offering a larger spectrum and more fiber pairs to serve modern needs. Alcatel-Lucent says the new SOFTNODE engineering allows bit rates to be fine-tuned according to each specific application – for example, 400G for regional use and 300G for transoceanic transit. It can also support distances up to 14,000 kilometers in multi-PoP (point of presence) to multi-PoP network configurations, with multi-degree nodes connected to more fibers, according to the company.

    Alcatel-Lucent also announced that it has achieved a record 610-kilometer (approx. 380-mile) distance for 100Gbps subsea communications, using the same fiber for both signal and amplifier transmission. Having deployed more than 12,500 miles of unrepeatered cable systems across more than 40 projects, the company says this breakthrough will lead to significant improvements in cable system efficiency, creating benefits for operators in terms of total cost of network ownership.

    These undersea cable projects are built without repeaters to amplify the signal, and are used to add capacity and/or complement terrestrial networks for resiliency. They can also offer a design capacity of 25 Tbps, depending on the system configuration.

    Olivier Gautheron, Chief Technology Officer of Alcatel-Lucent Submarine Networks, noted that ASN’s breakthrough “further demonstrates the application of our own innovation to extend the reach of unrepeatered submarine cable systems in order to enhance their performance to help operators get the most out of their networks and build new revenue streams.”

    7:07p
    Google Introduces Preemptible VMs and Another Round of Cloud Price Cuts For Compute

    Google cut cloud pricing across all Google Compute Engine instance types by up to 30 percent and introduced a new class of preemptible virtual machines (VMs) that provide short-term capacity for a low, fixed cost.

    Google committed to following Moore’s Law with cloud pricing last year – you can set your watch to cloud price cuts. Google Compute Engine pricing is now roughly half of what it was when Compute Engine first launched in 2013.

    The preemptible VMs are a way for Google to sell capacity on its cloud that would otherwise go idle, at a reduced cost to the customer. Meant for flexible workloads and short duration batch jobs, they are identical to regular VMs, except when it comes to availability and price. They are 70 percent cheaper than regular instances, with the caveat that availability is subject to system supply and demand.

    The preemptible VMs are similar to Amazon Web Services’ EC2 Spot Instances in that they’re suited for interruption-tolerant tasks. Unlike AWS’ Spot Instances, however, the price is fixed rather than driven by the market. Both approaches have their advantages, with preemptible VMs more suited for those seeking fixed costs. Spot Instances have been around for several years and have occasionally been subject to big price variations.
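    For illustration, here is a minimal sketch using the Google API Python client, showing the scheduling block that marks a Compute Engine instance as preemptible. The project ID, zone, machine type and boot image are placeholders, and the snippet assumes application-default credentials are already configured.

```python
# Hypothetical example: create a preemptible Compute Engine instance via the
# Compute API. Project, zone and image values are placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

project = "my-project"        # placeholder project ID
zone = "us-central1-f"
config = {
    "name": "batch-worker-1",
    "machineType": "zones/%s/machineTypes/n1-standard-1" % zone,
    "scheduling": {
        # This flag makes the VM preemptible: Compute Engine may reclaim it
        # when it needs the capacity back, in exchange for the lower price.
        "preemptible": True,
        "automaticRestart": False,
    },
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-8",
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}

operation = compute.instances().insert(project=project, zone=zone, body=config).execute()
print("started operation:", operation["name"])
```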

    The preemptible VMs continue Google’s pattern of adding functionality found in other clouds, but with its own spin: this time it’s fixed costs. Google recently introduced a low-cost cold storage service called Cloud Storage Nearline, which put its own spin on cold storage: Nearline makes data accessible in a matter of seconds rather than the standard hour or so for other cold storage offerings.

    The price cuts to Compute Engine were:

    • Standard configuration: 20 percent reduction
    • High Memory: 15 percent reduction
    • High CPU: 5 percent reduction
    • Small: 15 percent reduction
    • Micro: 30 percent reduction

    Europe and Asia also received similar cuts.

    Google claims its Cloud Platform costs 40 percent less for many workloads compared to other public clouds when it comes to price/performance.

    “Our continued price/performance leadership goes well beyond list prices,” wrote Urs Hölzle, Senior Vice President, Technical Infrastructure. “Our combination of sustained use discounting, no prepaid lock-in and per-minute billing offers users a structural price advantage which becomes apparent when we consider real-world applications.”
