Data Center Knowledge | News and analysis for the data center industry

Thursday, March 27th, 2014

    1:20p
    Schneider Electric Takes StruxureWare to the Azure Cloud

    Schneider Electric can now develop custom, cloud-based energy management tools thanks to an alliance with Microsoft. The partnership combines Schneider’s StruxureWare Resource Advisor with the Windows Azure cloud platform, allowing companies to build tools designed to meet specific energy efficiency needs and improve processes and operations.

    “We are seeing a major shift in the industry away from one-size-fits-all solutions, and the cloud-enabled capabilities of the combined StruxureWare Resource Advisor and Windows Azure technology allow us to deliver right-sized tools based on customers’ specific energy management needs,” said Pascal Brosset, Chief Technology Officer, Schneider Electric. “By leveraging key aspects of our respective technologies, we’re able to produce high performance, cloud-based solutions, creating true value and competitive advantages for customers across multiple industries.”

    The Schneider collaboration with Microsoft will enable the creation of some very industry-specific energy and sustainability tools. How specifically can these be tailored? The first StruxureWare Resource Advisor on Azure cloud tool to be delivered is the Sustainable Apparel Coalition (SAC) Higg Index 2.0. It’s a web-based tool that helps retail organizations standardize how they measure and evaluate the environmental performance of apparel products across the supply chain. That’s pretty specific. By using the StruxureWare/Azure combo, the tool is made available and accessible to the SAC’s global membership.

    StruxureWare Resource Advisor provides secure access to data, reports and summaries to drive customer energy and sustainability programs. It’s very configurable, and made even more flexible when deployed atop Azure. Schneider Electric will continue to roll out the Windows Azure platform globally to support StruxureWare cloud-based software offerings.

    The StruxAzure combo (note: they don’t officially call it that, but you know what I mean) provides:

    •  Accelerated deployment with instant access to online, web-accessible sustainability and energy management software
    •  Global reach and low cost of entry as software tools can be scaled quickly, replicated and rolled out globally with local language support
    •  Reduced cost resulting from eliminating the need to install or maintain additional on premise IT infrastructure
    •  Ability to leverage installed investments with secure cloud connectivity to existing onsite facility energy metering, monitoring and building management systems (see the sketch after this list)
    •  Increased mobility for employees to access information wherever they are, rather than having to remain at their desks
    • Flexible system architecture based on open standards and protocols, promoting systems integration.
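
    As a rough illustration of the onsite-metering connectivity point above, the sketch below pushes an on-site meter reading to a cloud endpoint over HTTPS. It is not Schneider Electric’s actual API; the endpoint URL, payload fields, and token are hypothetical placeholders.

        # Hypothetical sketch: push an on-site energy meter reading to a cloud
        # service over HTTPS. The endpoint, payload fields, and token are invented
        # for illustration and do not represent Schneider Electric's actual API.
        import json
        import time
        import urllib.request

        CLOUD_ENDPOINT = "https://resource-advisor.example.invalid/api/meter-readings"
        API_TOKEN = "REPLACE_WITH_SITE_TOKEN"  # placeholder credential

        def push_reading(meter_id: str, kwh: float) -> int:
            """Send one meter reading to the cloud service and return the HTTP status."""
            payload = json.dumps({
                "meterId": meter_id,
                "kwh": kwh,
                "timestamp": int(time.time()),
            }).encode("utf-8")
            request = urllib.request.Request(
                CLOUD_ENDPOINT,
                data=payload,
                headers={
                    "Content-Type": "application/json",
                    "Authorization": "Bearer " + API_TOKEN,
                },
                method="POST",
            )
            # HTTPS provides the transport encryption; certificates are verified by default.
            with urllib.request.urlopen(request) as response:
                return response.status

        # Example call: push_reading("bldg-3-main-feed", 1482.6)
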
    1:45p
    Zettaset Adds Data-in-Motion Encryption for Hadoop

    Big data management company Zettaset has added a new data-in-motion encryption capability to its Orchestrator management and security add-on application for Hadoop. Data-in-motion encryption provides organizations with an additional layer of protection for their Hadoop clusters and sensitive data, helping block access by unauthorized users.

    Orchestrator data-in-motion encryption ensures that all networking connections to the Orchestrator web-based console are secured within a Secure Sockets Layer (SSL) tunnel. Communication links between all cluster nodes are encrypted and authenticated to prevent unauthorized access to data by anyone within the corporate network or Hadoop cluster.
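
    For readers unfamiliar with what an SSL/TLS tunnel involves, the minimal sketch below wraps a client connection to a management console in TLS using Python’s standard ssl module. It is a generic illustration, not Zettaset’s implementation; the host, port, and CA file are assumptions.

        # Generic illustration (not Zettaset's code): wrap a client connection to a
        # management console in TLS so data in motion is encrypted and the server's
        # certificate is verified against a trusted CA.
        import socket
        import ssl

        def open_secure_channel(host: str, port: int, ca_file: str) -> ssl.SSLSocket:
            """Return a TLS-wrapped socket; raises if certificate verification fails."""
            context = ssl.create_default_context(cafile=ca_file)  # checks hostname and chain
            raw_sock = socket.create_connection((host, port))
            return context.wrap_socket(raw_sock, server_hostname=host)

        # Hypothetical usage:
        # with open_secure_channel("orchestrator.example.com", 8443, "cluster-ca.pem") as tls:
        #     tls.sendall(b"GET /status HTTP/1.1\r\nHost: orchestrator.example.com\r\n\r\n")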

    “Risks associated with Hadoop projects include security and privacy challenges,” wrote Gartner Research Director and Analyst Bhavish Sood. “Hadoop certainly wasn’t built with enterprise IT environments in mind because there is a shortage of robust security controls in Hadoop.”

    The addition of in-motion encryption comes after the company announced data-at-rest encryption capabilities last fall. Node performance with encryption is especially critical as more enterprise customers scale up their Hadoop clusters and move from pilot to production. Orchestrator’s management console automates virtually all manual configuration processes within Hadoop, eliminating the need for professional services and reducing IT resource requirements in the enterprise.

    Data-in-motion encryption joins Orchestrator’s existing list of capabilities, which includes fine-grained role-based access control, automated Active Directory and LDAP integration, activity monitoring, automated installation and configuration of security components into the Hadoop cluster, and patented high availability that protects every Hadoop service with automated failover. Data-in-motion encryption for Hadoop will be bundled with every Orchestrator application license.

    1:47p
    Leaders Achieve Best Practice With DCIM

    Lara Greden is a senior principal, strategy, at CA Technologies. Her previous post was titled Bringing DCIM Technology into Your Data Center. You can follow her on Twitter at @laragreden.

    When you look around at the people responsible for driving the use of DCIM software at their organization, you will find people in highly visible roles who are leading their organizations to drive significant growth. Recently, I heard from a person in a DC Ops role about how DCIM is part of their tactical plan to help the company meet a goal of nearly 50 percent growth over the next three years on today’s $30B revenue base. Those are not small numbers.

    Implementing DCIM software to drive best practices in data center operations and services, and to help an organization grow, requires leadership. Here are three key principles of effective leadership, demonstrated by true stories of how DCIM software is helping real people lead change for their business.

    Data-driven Decision Making

    One of the main drivers for adopting DCIM is the need to get data to help make better decisions. In this day and age, leaders expect decisions to be data-driven. For example, is there enough power, space, and cooling to meet the upcoming rollout plans? Which data center is the best choice given the expected workload profile for a new app? And not only do executives expect such decisions to be data-driven, they expect the underlying data to be high quality and increasingly real time, not just nameplate values. Thus, understanding how easy it is to get data into a DCIM system and use it to make decisions is a key aspect organizations need to focus on during vendor selection.

    Data center operations are dynamic by nature, but the lack of an accurate view of power and cooling introduces risk as new devices are provisioned.

    DCIM offers a way forward by providing data collection, visualization in the context of the power chain and 3D views of the environment, and analytics to make data-driven decisions. A manager of critical infrastructure for a global financial institution recently summarized it well, saying that by having accurate power, temperature, and other environmental data, they are able to make better decisions on using the capacity in their infrastructure without putting customer experience at risk. This data also enables the organization to more easily plan for future capacity needs.
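
    To make the measured-versus-nameplate point concrete, here is a minimal sketch of the kind of capacity check this enables. The rack budget and wattages are invented for the example; real figures would come from the DCIM system’s collected data.

        # Illustrative only: compare measured peak draw against a rack's power budget
        # rather than relying on nameplate ratings, which usually overstate real draw.
        def rack_headroom_kw(rack_budget_kw, per_server_peaks_kw):
            """Remaining power headroom for a rack based on per-server peak draw."""
            return rack_budget_kw - sum(per_server_peaks_kw)

        rack_budget_kw = 12.0          # example rack power budget
        nameplate_kw = [0.75] * 16     # 16 servers at 750 W nameplate
        measured_kw = [0.42] * 16      # measured peaks are typically far lower

        print(round(rack_headroom_kw(rack_budget_kw, nameplate_kw), 2))  # 0.0 kW left on paper
        print(round(rack_headroom_kw(rack_budget_kw, measured_kw), 2))   # 5.28 kW left in practice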

    Anticipate the Future

    In order to lead the adoption of best practices in your data center, you need to be able to anticipate the future, communicate it clearly, and craft it into reality.

    The future is DevOps, and part of that future means being able to access data from all parts of data center operations so that Development can use it to improve the quality of product releases. Likewise, data center ops knows all too well that increased speed is the marching order in today’s world. However, leadership is about crafting the vision for speed and reliability into reality.

    Tools such as DCIM help the organization get there. Imagine a future in which no servers are placed on breakers that would trip because of insufficient power, and one in which you don’t have to waste significant investment on overprovisioning or overbuilding data center space because safety factors are set unnecessarily high.
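
    As a rough sketch of the kind of placement check that avoids both problems, the example below tests whether adding a server would push a circuit past its derated capacity. The 0.8 continuous-load derating is a common North American practice; the circuit and server figures are invented for illustration.

        # Illustrative placement check: would adding a server push a circuit past its
        # derated capacity? The 0.8 derating reflects the common continuous-load rule;
        # the other numbers are made up for the example.
        def can_place(circuit_amps, volts, current_load_w, new_server_w, derate=0.8):
            usable_w = circuit_amps * volts * derate   # usable capacity after derating
            return current_load_w + new_server_w <= usable_w

        # 30 A, 208 V circuit: 30 * 208 * 0.8 = 4992 W usable
        print(can_place(30, 208, current_load_w=4300, new_server_w=500))  # True  (4800 W)
        print(can_place(30, 208, current_load_w=4700, new_server_w=500))  # False (5200 W)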

    DCIM software can help turn that vision of the future into reality. It gives a consolidated view of real-time data across power, space and cooling, across one or many data centers. Couple it with a view across other sources of machine data in the data center, such as compute, memory, and I/O, and you have an even more powerful tool to help anticipate the future and craft it into reality.

    Make Meaningful Commitments

    Leaders need to make meaningful commitments, and hold themselves and others accountable with systems for measuring success.

    When I speak with people across various sectors – including manufacturing, retail, financial, government – as well as the leading colo providers and service providers who use DCIM in serving their customers, their key focus is on excellence. They are using DCIM for monitoring and intelligent alerting to save costs, as well as to make commitments when planning for future infrastructure needs. Above all, DCIM technology is helping them break down silos so they can intelligently manage power, space, and cooling to meet business needs.

    When it comes to making commitments for driving data center best practices, don’t make the mistake of viewing the major commitment around DCIM as being about cost cutting. While cost cutting is important and part of overall continuous rationalization efforts, it is not the type of bold commitment that demands CIO attention, and so the best, most capable leaders will not be assigned to such projects. Rather, if you are looking to drive best practices in your organization’s data center operations, you need to view DCIM as a means of achieving the ultimate customer experience, speed, agility, and growth.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:54p
    Abstracting the Data Center: A look at the DCOS Platform

    It’s time to take a step back and look at the data center model that’s impacting today’s business. It’s time to see just how far this platform has come and exactly where it’s going. It’s time to say hello to the truly agnostic data center. Almost every new technology is being pushed through some type of data center model.

    Inside your current data center model, what do you have under the hood?

    • Storage, Networking, Compute
    • Power, Cooling, Environmental Controls
    • Rack and Cable Management
    • Building and Infrastructure Security

    Although some of these underlying components have stayed the same, the requirements from the workloads that live on top have drastically evolved. Through it all, we’ve also seen an evolution of the physical aspect of the data center. We’re creating powerful multi-tenant, high-density platforms capable of handling users and the new data-on-demand generation. With all of these new technologies and demands, the modern data center has truly become a distributed node infrastructure.

    So here is the real challenge: how do you control and manage it all? How do you control practically every aspect that is critical to data center functionality? Most of all, how do you do it on a distributed plane?

    Data Center Abstraction

    Data center abstraction is an emerging field where all physical and logical components are abstracted and inserted into a powerful management layer. This new model is sometimes referred to as the software-defined data center. However, today we’re focusing on the management layer. The data center of the future will be truly agnostic where all resources become presented to a powerful management layer, which can then be controlled anywhere and anytime. This is the data center operating system.

    One example of this data center operating model is provided by IO and its IO.OS environment, which helps control many of the absolutely critical components, from chip to chiller. The great part is that this DCOS layer has visibility into every critical aspect of the data center.

    This conversation takes us far beyond standard DCIM. We’re now looking at an open data center and open cloud architecture. So what makes up a solid data center operating system? What has IO.OS done to really help organizations regain control of their global data center footprint? Let’s examine what it takes to create a powerful DCOS framework.

    The Control Layer: Let’s start at the top. The control panel and management layer of a DCOS platform incorporates an easy-to-follow yet very granular interface. There is direct visibility into everything residing within your data center. This includes energy management, controlling QoS, monitoring the current state of all VMs, and even creating sensor setpoints throughout your data center. Here’s the important piece: you will have visibility into your entire data center and cloud environment.
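
    To make the setpoint idea concrete, here is a toy sketch of what a control-layer view of per-zone setpoints might look like. It is not IO.OS’s actual interface; the zone names, setpoints, and tolerance are assumptions for illustration.

        # Toy control-layer data model (not IO.OS's actual interface): each zone
        # tracks a temperature setpoint and its latest reading, and the layer can
        # report which zones have drifted out of band.
        from dataclasses import dataclass

        @dataclass
        class Zone:
            name: str
            setpoint_c: float
            reading_c: float
            tolerance_c: float = 1.5

            def out_of_band(self):
                return abs(self.reading_c - self.setpoint_c) > self.tolerance_c

        zones = [
            Zone("row-A-cold-aisle", setpoint_c=22.0, reading_c=22.4),
            Zone("row-B-cold-aisle", setpoint_c=22.0, reading_c=25.1),
        ]

        for zone in zones:
            if zone.out_of_band():
                print(zone.name, "is outside its setpoint band at", zone.reading_c, "C")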

    The Integration Layer: What if you have outside cloud instances? What if you have big data engines? What if you need visibility into some resources that are “outside” of your data center? A big piece of the DCOS framework revolves around creating a more open infrastructure. Whether this means integrating with a big data engine or applying key APIs to allow communications between applications and resources, your DCOS model must help extend your infrastructure. This means incorporating front- and back-end resources and pushing critical data to the control layer of the DCOS. Now, imagine integrating with automation, logging, and other critical systems that were once islands within a data center.
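
    A hypothetical integration adapter might look like the sketch below: poll an external cloud API for a metric and hand the result to the control layer. The URL and JSON fields are invented for the example; a real integration would use the outside system’s published API.

        # Hypothetical integration adapter: pull a metric from an outside system and
        # record it in the control layer's view. Endpoint and fields are invented.
        import json
        import urllib.request

        def fetch_external_metric(url):
            """Fetch a JSON metric document from an external API."""
            with urllib.request.urlopen(url) as response:
                return json.loads(response.read().decode("utf-8"))

        def push_to_control_layer(control_view, source, metric):
            """Append the latest reading from an external source to the control layer's view."""
            control_view.setdefault(source, []).append(metric)

        # Hypothetical usage:
        # control_view = {}
        # metric = fetch_external_metric("https://cloud.example.invalid/api/vm-utilization")
        # push_to_control_layer(control_view, "public-cloud-east", metric)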

    The Proactive Layer: Imagine changing the temperature settings within a specific section or rack within your data center based directly on pre-set thresholds. How about modifying environmental and resource variables based on compliance with set application requirements? Automation and maintaining an intuitive data center infrastructure are key pieces in the DCOS model. Proactively staying ahead of service impacts across your entire data center (both logical and physical) allows administrators to focus on efficiencies, not fires. You basically have an intelligent proactive DCOS layer which allows you to make granular system adjustments on the fly. Ultimately, this allows your data center to change at the speed of business. Which, in today’s world, is pretty fast. Here’s the reality: it’s not even about “real-time” any more. A solid data center operating system provides the proactive element, as well as an intuitive structure, where metrics can be gathered and system changes can be planned around existing and future demands.
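
    Here is a minimal sketch of the threshold-driven behavior described above: when an inlet temperature crosses a preset threshold, lower the cooling setpoint and record why. It is illustrative only, not a real product API; the temperatures and step size are assumptions.

        # Illustrative threshold rule (not a real product API): if an inlet temperature
        # exceeds its preset threshold, nudge the cooling setpoint down and log the reason.
        def apply_threshold_rule(inlet_temp_c, threshold_c, cooling_setpoint_c, step_c=0.5):
            """Return (new_setpoint, note) for one evaluation of the rule."""
            if inlet_temp_c > threshold_c:
                new_setpoint = cooling_setpoint_c - step_c
                note = "inlet %.1f C over threshold %.1f C; setpoint lowered to %.1f C" % (
                    inlet_temp_c, threshold_c, new_setpoint)
                return new_setpoint, note
            return cooling_setpoint_c, "within threshold; no change"

        print(apply_threshold_rule(27.3, threshold_c=26.0, cooling_setpoint_c=22.0))
        # (21.5, 'inlet 27.3 C over threshold 26.0 C; setpoint lowered to 21.5 C')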

    2:30p
    Embrane Gets $14 Million to Accelerate App-Centric Network Services

    Cisco leads a $14 million funding round for Embrane to enhance its go-to-market strategy, Arista expands its cloud platform with two new 100GbE modules, and Dell launches new networking solutions, featuring the Z9500 Fabric Switch.

    Embrane raises $14 million. Embrane announced it has raised $14 million in Series C funding, led by Cisco. The round also includes new Embrane investor Presidio Ventures, and participation from existing investors Lightspeed Venture Partners, New Enterprise Associates and North Bridge Venture Partners. The new funds will be used to enhance its go-to-market strategy and expand support for third-party virtual network service appliances, bringing Embrane’s total funding to $41 million to date. “We are always looking for innovative technologies that can enable new solutions. For the SDN market, which we believe is poised for significant growth, Embrane is the only purpose built end-to-end solution,” said Shunichi Aramaki, CEO of Presidio Ventures. “After careful analysis, it became clear Embrane has a unique offering that not only enable first movers, but also bridges any size traditional networks to transition to SDN without operational disruption – all of which are important to its continued leadership in the space.” “As the dust settles on all of the SDN hype, it has become clear that companies with real products and real customer deployments today will become the leaders of tomorrow,” continued Pete Sonsini, General Partner at NEA and current Embrane board member. “Adding Cisco as a strategic investor as well as Presidio Ventures certainly gives Embrane the resources it needs to accelerate its momentum.”

    Arista adds new 100GbE modules. Arista Networks announced the expansion of its cloud platform with two new 100GbE modules. The Arista 7500E 100GbE line cards provide full support for IEEE 100GbE standards on both single-mode and multi-mode fiber, and broaden the choices for 10/40/100G with investment protection for existing customers. Additionally, those deploying the Arista DANZ features within Arista EOS can now get intelligent traffic visibility on 100GbE networks by leveraging Tap Aggregation on Arista 7500E modular platforms. “We selected the Arista 7500E platform for our data center to provide customers with true 100 Gigabit Ethernet services,” said Ihab Tarazi, Chief Technology Officer at Equinix.  “Because of the open architecture and cloud interconnect protocols, including VXLAN, the programmability of Arista EOS and multi-speed mode, we are able to allocate data center resources at 10/40 or 100 gigabits in a reliable and cost-effective manner.”

    Dell launches networking fabrics and control for SDN, NFV. Dell announced new networking solutions, featuring the Z9500 Fabric Switch. The new Dell Networking Z9500 is an energy-efficient 10/40 GbE core switch with high density per rack unit. It is designed to address data center 10/40 GbE aggregation requirements through centralized core or distributed core architectures for high performance enterprise data centers, cloud computing, provider hosted data centers, and enterprise LAN cores. The 3RU switch features 132 40GbE ports, expandable to 528 10GbE ports via breakout, and has pay-as-you-go licensing for 36-, 84-, or 132-port options. A new Active Fabric Controller is a purpose-built SDN platform designed to simply and securely configure and deploy networking functionality in cloud and XaaS environments. “Dell is committed to changing the game in networking. As a follow on to our recent Open Networking announcement, I’m excited about demonstrating more innovation in bringing new and open solutions to our customers regardless of size,” said Tom Burns, vice president and general manager, Dell Networking. “We’re extending our leadership in SDN, NFV, and advanced new architectures that maximize customer choice and provide superior economics to the way networking has always been done.”

