Data Center Knowledge | News and analysis for the data center industry

Wednesday, September 16th, 2015

    12:00p
    Maya’s Approach to DCIM Software is Maximum Interoperability

    Data center management isn’t rocket science, but having some rocket-science chops can’t hurt.

    Canadian software company Maya HTT started in the early 80s making software that ran thermal analysis on spacecraft for the Canadian Space Agency. It’s branched out widely since, but about eight years ago its management team realized a lot of the knowledge the company had could translate well to software used to manage data centers.

    That’s when Maya embarked on creating what today is called Datacenter Clarity LC, its DCIM software. The idea to create a DCIM product came when Maya was doing a Computational Fluid Dynamics analysis of energy efficiency of a data center for a customer in Europe, Inta Zvagulis, the company’s CEO, said. As a software-development house familiar with modeling complex physical objects and managing massive software projects, Maya management felt the company could create a strong DCIM offering, she said.

    DC Clarity, which Maya brought to market about four years ago, includes features like real-time monitoring, alarms, event notifications, asset management, asset visualization, and energy efficiency management with CFD integration, among other capabilities. The company has secured Siemens as an official reseller of the software.

    Olivier Allard, a senior DCIM exec at Maya, said one of the main strengths of the solution is the breadth of data types it can ingest and process. The software supports more than 800 protocols, he said. It can integrate with an existing BMS (Building Management System) and existing sensors, or simply collect data directly from servers and switches over the network.
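    To illustrate the kind of direct, agentless collection Allard describes, here is a minimal Python sketch of polling a device over the network and normalizing the reading into a common record. The endpoint URL, field names, and schema are hypothetical; this is not Maya’s actual implementation or any of the 800-plus protocols it supports.

```python
# Hypothetical sketch: poll a device's management endpoint over the network
# and normalize the reading into a common record, roughly what a DCIM
# collector does when it pulls data straight from servers and switches
# rather than through a BMS. URL, field names, and schema are illustrative.
import json
import urllib.request
from datetime import datetime, timezone

def poll_device(host: str) -> dict:
    """Fetch raw metrics from a (hypothetical) on-device REST endpoint."""
    with urllib.request.urlopen(f"http://{host}/api/metrics", timeout=5) as resp:
        return json.load(resp)

def normalize(host: str, raw: dict) -> dict:
    """Map vendor-specific fields onto one schema the DCIM layer can store."""
    return {
        "device": host,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inlet_temp_c": raw.get("temperature", {}).get("inlet"),
        "power_watts": raw.get("power", {}).get("input"),
    }

if __name__ == "__main__":
    for host in ("rack01-server01.example.net", "rack01-switch01.example.net"):
        print(normalize(host, poll_device(host)))
```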

    Companies that have deployed Maya’s DCIM software include large IT service providers, telcos, a major utility, and colocation providers. The company considers its play in the colo market particularly strong. “We have a particularly wonderful capability for colos that far exceeds what we’ve seen in the marketplace,” Zvagulis said. Those capabilities are visualization, ease of use, and real-time monitoring.

    The solution enables colo providers to offer their customers remote monitoring of their environment in the data center, Allard said. Through a web browser, they can track things like their electrical consumption, or temperature. It gives them a higher-than-usual level of understanding of their colo environment, he said.

    Like other DCIM software vendors, Maya has been receiving more and more requests to integrate its software with IT Service Management solutions. It has integrated with BMC Remedy Service Management, but DC Clarity also has an open API, which enables it to integrate with any software, Allard said.

    Some customers have asked to integrate DC Clarity with ERP (Enterprise Resource Planning) software. One colocation customer now uses DC Clarity as a communication link between salespeople and data center managers: when a salesperson closes a sale, they enter it into the ERP system, which, through DC Clarity, alerts the data center manager that a new customer is coming in so the facility can be ready.
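    That hand-off is essentially an event pushed from the ERP system to the DCIM layer. The Python sketch below shows the pattern; the endpoint path, payload fields, and sizing details are invented for illustration and are not DC Clarity’s actual open API.

```python
# Hypothetical sketch of the ERP-to-DCIM hand-off described above: when a
# sale closes, the ERP side posts an event to the DCIM system's API so the
# data center manager is alerted that a new customer is coming in.
# The endpoint and payload fields are invented, not DC Clarity's actual API.
import json
import urllib.request

DCIM_EVENTS_URL = "https://dcim.example.net/api/v1/events"  # placeholder

def notify_dcim_of_sale(customer: str, cabinets: int, power_kw: float) -> int:
    """Push a 'sale closed' event from the ERP system to the DCIM system."""
    payload = json.dumps({
        "type": "sale.closed",
        "customer": customer,
        "requested_cabinets": cabinets,
        "requested_power_kw": power_kw,
    }).encode()
    request = urllib.request.Request(
        DCIM_EVENTS_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as resp:
        return resp.status  # e.g. 202 once the DCIM side has queued the alert

# Called from the ERP system's post-sale hook:
# notify_dcim_of_sale("Example Colo Customer", cabinets=4, power_kw=20.0)
```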

    To get to know more DCIM software companies, learn how to go through the product selection process, find out how to deploy and manage DCIM software, or stay abreast of the latest news in the DCIM world, explore the Data Center Knowledge DCIM InfoCenter.

    3:00p
    Atlantic.Net Launches First International Data Center in UK


    This article originally appeared at The WHIR

    SSD cloud hosting provider Atlantic.Net launched its first international data center on Tuesday, partnering with Infinity to enable customers to spin up cloud servers in its Slough, UK, data center.

    The data center will offer lower latency for Atlantic.Net’s customers in Europe, Marty Puranki, founder, president, and CEO of Atlantic.Net, told the WHIR in an interview. “[A European data center] is something our customers have been asking for so we’re happy to do it,” he said.

    Puranki said that Atlantic.Net expects Europe to account for about one-third of its customer base.

    “What happens is we have European customers that haven’t brought all their workloads over to us or they’ve signed up and they aren’t using our service, they are waiting for us to open in Europe,” he said. “We think it’s going to be a pretty big deal for our business because it’s such a large part of the market.”

    The UK infrastructure is Atlantic.Net’s sixth data center. The company has data centers in the US and Canada as well. Customer support will continue to be based in the US, which won’t be a problem for European customers, Puranki said, since it is 24/7.

    Initially, European customers will be able to spin up virtual machines in the UK data center; eventually, everything available in Atlantic.Net’s US and Canadian data centers will be offered there as well. The rollout is made easier by a platform that, on a “go-forward basis,” will let the company launch new services in the US and then bring them to the UK.

    Puranki said that in addition to partnering with Infinity for the data center, the company has signed agreements with EU Networks and Cogent for Internet connectivity.

    He said that while the UK data center has been in the works for quite a while, several things slowed the company down in the short term, such as compliance, importing into the UK, and registering for VAT so it could collect the tax from its customers.

    Puranki said Atlantic.Net also has its eye on expanding into Asia, a cloud market that is growing as more enterprises get comfortable with hosting their infrastructure in the cloud.

    Earlier this year, Atlantic.Net launched HIPAA-compliant hosting services.

    This first ran at http://www.thewhir.com/web-hosting-news/atlantic-net-launches-first-international-data-center-in-uk

    3:30p
    One Size Does Not Fit All: The Compute Era Begins

    Susan Blocher is VP of Global Marketing for HP’s Servers Business Unit.

    If there’s a defining characteristic of business in this millennium, it’s that failure comes faster. Don’t take my word for it; ask the 60,000 or so people who were working in Blockbuster’s 9,000-plus retail stores in 2004. Six years after that peak, the company filed for bankruptcy. Failing to adapt to the changing technology landscape, the on-demand economy and consumer behavior proved to be catastrophic.

    There’s a lesson here for others. According to recent IT surveys from leading industry analysts and consultants, line of business executives believe that IT will play a substantial role in transforming their business over the next five years. Just enough time for the next Blockbuster to find itself looking for a bailout suitor.

    At the same time, emerging technologies such as cloud computing, advanced mobility and Big Data present new business opportunities. The trouble is, too many executives believe their IT organizations aren’t equipped to capitalize on these trends quickly enough to deliver differentiated services as they’re created. Simply put, they are saddled with traditional IT systems that are inefficient, slow and manually-driven.

    A new approach is needed. Rather than seeing infrastructure as a collection of servers, storage, and networking gear, forward-thinkers are aggregating pools of end-to-end Compute resources for use from edge to core, up and down an integrated workload stack, and with an advanced set of economics and automated operational approaches to power a New Style of Business.

    The Compute Paradigm: Flexible Consumption Models

    There was a time when technology needed to be a fixed point. Servers and software could be tightly configured to handle a limited number of operations, squeezing cost out of the enterprise. Automation allowed for efficient handling of processes that rarely changed, because they didn’t need to. This “one size fits all” approach will no longer work.

    In the Compute era, IT leaders need to offer users and departments flexible consumption models for achieving business outcomes. We’re already seeing this dynamic at work in the public cloud as online retailers scale up resources to handle the holiday shopping rush. What if this same flexibility were afforded to the business unit manager needing to unify a distributed development team ahead of a key deadline? What if business leaders could simply define their goals and order internal IT resources to support them, on-demand, like any other service?

    Financing should be just as flexible. Traditional, top-down IT may work for some companies. Others may prefer a managed hosting model where owned resources are governed and apportioned by a third-party. Others may prefer to rely on the public cloud. At HP, we see a growing number pooling all their in-house gear and software for use as a service that IT leaders broker and departments consume according to budgetary limits.

    We don’t see this as a nice-to-have but rather as a strategic imperative. Business moves too fast, especially when so much of it is governed by systems of engagement. Adapting to the users that “engage” with these systems — from mobile banking and e-commerce to online music stores — is no longer optional. Systems in the Compute era are designed with this sort of flexibility in mind, breaking the fixed, brittle molds created by their predecessors and built with three distinct characteristics for serving business needs:

    1. Converged. Discrete servers are ineffective for serving ever-changing markets. Instead, we need pools of resources, virtualized and converged with networking, storage, and management, that can be shared by many applications as well as managed and delivered as a service.
    2. Composable. In the Compute era, infrastructure isn’t metal, it’s fluid. Pools of processing power and storage are captured in a networked fabric and disaggregated so they can be quickly composed to service workloads and then decomposed back into the pool for others to use as the occasion calls for it. Importantly, this work is performed entirely in software and, as such, requires no new architecture to implement. (A minimal sketch of this compose-and-decompose cycle follows this list.)
    3. Workload-Optimized. There’s a reason why legacy IT systems are rigidly implemented: rigidity, when applied to a specific problem, puts optimal resources to work in the right place. Flexible, assemble-on-demand Compute infrastructures confer this same level of customized performance but without calcifying the underlying system.
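    To make the compose-and-decompose cycle concrete, here is a minimal, vendor-neutral Python sketch of a shared resource pool that allocates capacity to a workload and returns it afterward. It illustrates the concept only and is not HP’s implementation.

```python
# Minimal, vendor-neutral sketch of composable infrastructure: resources are
# drawn from a shared pool for a workload and released back when it finishes.
# This illustrates the concept only; it is not HP's implementation.
from dataclasses import dataclass

@dataclass
class Pool:
    cpu_cores: int
    storage_tb: int

    def compose(self, cores: int, tb: int) -> "Allocation":
        """Carve capacity out of the pool for one workload."""
        if cores > self.cpu_cores or tb > self.storage_tb:
            raise RuntimeError("pool exhausted")
        self.cpu_cores -= cores
        self.storage_tb -= tb
        return Allocation(self, cores, tb)

@dataclass
class Allocation:
    pool: Pool
    cores: int
    tb: int

    def decompose(self) -> None:
        """Return the resources to the pool for other workloads to use."""
        self.pool.cpu_cores += self.cores
        self.pool.storage_tb += self.tb

pool = Pool(cpu_cores=512, storage_tb=100)
alloc = pool.compose(cores=64, tb=10)   # compose resources for a workload
# ... run the workload ...
alloc.decompose()                        # hand the capacity back to the pool
```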

    The Evolving Enterprise: Predictive, Autonomic Compute Power at Work

    How far can we push the Compute model? That remains to be seen, but there’s no doubt we’ve come a long way already. Organizations that used to spend thousands of dollars on licensing to slice up inefficient servers to get more value from them are now designating their IT chiefs to build service bureaus that collect and distribute precious Compute resources where they’re needed, just in time.

    We’ve already added analytics capabilities that allow systems to preemptively add Compute power to departments known to need it at certain times of the day or year, the way an e-tailer needs extra processing to handle traffic on Black Friday and Cyber Monday. Longer term, we’ll have autonomic systems that mirror the human immune system, applying software patches the way white blood cells are dispatched to heal a wound, such as a cybersecurity breach.
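    As a toy illustration of that kind of preemptive, calendar-driven allocation, the Python sketch below adds capacity ahead of a known peak rather than reacting to it. The peak calendar and sizing rule are invented for the example.

```python
# Toy sketch of preemptive, calendar-driven capacity allocation: add Compute
# ahead of a known peak (e.g. Black Friday) instead of reacting to load.
# The peak calendar and sizing rule are invented for illustration.
from datetime import date, timedelta

KNOWN_PEAKS = {date(2015, 11, 27): 3.0,   # Black Friday: 3x baseline
               date(2015, 11, 30): 2.5}   # Cyber Monday: 2.5x baseline

def capacity_for(day: date, baseline_nodes: int, lead_days: int = 2) -> int:
    """Scale up ahead of any peak that falls within the lead window."""
    factor = max(
        (mult for peak, mult in KNOWN_PEAKS.items()
         if day <= peak <= day + timedelta(days=lead_days)),
        default=1.0,
    )
    return int(baseline_nodes * factor)

print(capacity_for(date(2015, 11, 25), baseline_nodes=40))  # 120, pre-peak
print(capacity_for(date(2015, 12, 5), baseline_nodes=40))   # 40, normal day
```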

     

    In that sense, Compute isn’t so much a technology model as it is an approach that’s flexible, service-oriented and designed to capture opportunities as they happen — and head off disasters before it’s too late.

    Don’t let your company fail to capture the opportunities when the technology is at your doorstep. Get ready for the next era – the era of Compute.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

     

     

    5:42p
    HP to Lay Off Another 25K-30K Workers Ahead of Corporate Split


    This post originally appeared at The Var Guy

    By DH Kass

    HP said it will cut another 25,000 to 30,000 jobs, mostly from its enterprise services unit, as part of an overall plan to save some $2.7 billion in costs as the vendor careens toward its planned November 1 split into two separate entities.

    The vendor divulged the planned cuts along with financial projections for the new HP Enterprise operation, which will formally begin operations in about a month, at a meeting on Tuesday with financial analysts.

    HP said its printer and PC business will pare some 3,300 jobs over the next three years.

    The layoffs are in addition to the 55,000 job cuts HP previously planned and the recent transfer of enterprise services staffers to IT consultant Ciber. This round of cuts amounts to just shy of 10 percent of HP’s total workforce, a significant staff reduction by any measure.

    HP said it will take a $2.7 billion charge to its earnings for restructuring costs over a three-year period beginning with its FQ4 financial statement for the period ending October 31. The charges include $2 billion in severance payments and some $700 million in cost reductions from lease and property disposals associated with the split.

    In HP’s FQ3 2015, sales in the enterprise services unit fell 11 percent to $4.9 billion. HP chairman Meg Whitman, however, trumpeted its performance, saying on an earnings call that ES “significantly improved its sequential revenue trajectory and delivered another quarter of sequential and year-over-year profit improvement.”

    At the time, HP chief financial officer Cathie Lesjak said some 3,900 employees had exited the company during FQ3 and suggested that by the end of FQ4 another 5 percent could be laid off. That figure, as it turns out, was short of the actual number of job cuts.

    “It has been a bumpy road. There’s no question about it,” Whitman told analysts. “These restructuring activities will enable a more competitive, sustainable cost structure for the new Hewlett Packard Enterprise,” she said. “We’ve done a significant amount of work over the past few years to take costs out and simplify processes and these final actions will eliminate the need for any future corporate restructuring.”

    HP said that it expects HP Enterprise will produce more than $50 billion in annual sales. The company projects fiscal 2016 non-GAAP diluted net EPS to be in the range of $1.85 to $1.95, and estimated GAAP diluted net EPS to be in the range of $0.75 to $0.85.

    The vendor said that about 37 percent of HP Enterprise’s sales will come from enterprise services, 7 percent from software and about 50 percent from servers, storage, networking and technology services sales.

    The vendor also said it expects cloud revenue in fiscal 2015 to be approximately $3 billion and to grow more than 20 percent annually for the next several years.

    HP Enterprise will not rely on a few customers to generate the lion’s share of its sales, HP said. Mike Nefkens, HP Enterprise Services head, said that no one account comprises more than 10 percent of HP Enterprise’s sales.

    The PC- and printer-focused HP Inc. will concentrate on returning cash to shareholders and expanding into new markets such as 3-D printing, Whitman said.

    This first ran at http://thevarguy.com/business-technology-solution-sales/091515/hp-layoff-another-25k-30k-workers-ahead-corporate-split

    6:00p
    Salesforce Unveils IoT Business Process Engine


    This article originally ran at Talkin’ Cloud

    At the Dreamforce 2015 conference this week Salesforce unfurled Salesforce IoT Cloud, an instance of the Salesforce App Cloud based on an event processing engine, dubbed Thunder, that is optimized to drive business processes within the context of an Internet of Things (IoT) environment.

    The new Salesforce cloud service is primarily designed to give end users greater control over IoT business processes, said Dylan Steele, senior director of product marketing for Salesforce IoT Cloud. It makes use of open source big data technologies such as Apache Spark, Storm, and Kafka to create a real-time processing engine on top of a microservices architecture. Layered on top of that is a rules engine through which end users can take the analytics gathered by Salesforce IoT Cloud and apply them to processes involving specific customers.

    Rather than creating just another platform for developers to build IoT applications, Steele said Salesforce is more focused on turning the billions of events that can be generated in an IoT environment into information that can be consumed by the average business executive and then applied to customer records residing in other software-as-a-service (SaaS) applications managed by Salesforce. End users then use graphical tools in Salesforce to set filters that identify relevant data in event streams, creating real-time, actionable intelligence that can be fed into Salesforce applications, Steele said.
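    The filter-the-stream pattern Steele describes can be pictured in a few lines of Python: a user-defined rule runs over incoming events, and matching events become customer-facing actions. The events, rule, and action below are invented; this is not Salesforce’s Thunder engine or its API.

```python
# Illustrative stream-filter pattern along the lines Steele describes: rules
# run over an event stream and matching events become customer-facing actions.
# The events, rule, and action are invented; this is not Thunder or the
# Salesforce API.
from typing import Callable, Iterable, Iterator

Event = dict
Rule = Callable[[Event], bool]

def apply_rule(events: Iterable[Event], rule: Rule) -> Iterator[Event]:
    """Yield only the events a business user's rule considers relevant."""
    return (e for e in events if rule(e))

def low_battery(e: Event) -> bool:
    """Rule a business user might define: flag devices reporting low battery."""
    return e.get("type") == "telemetry" and e.get("battery_pct", 100) < 15

def open_service_case(event: Event) -> None:
    """Stand-in for updating the customer's record in a downstream SaaS app."""
    print(f"Open case for customer {event['customer_id']}: "
          f"battery at {event['battery_pct']}%")

stream = [
    {"type": "telemetry", "customer_id": "C-1001", "battery_pct": 9},
    {"type": "telemetry", "customer_id": "C-1002", "battery_pct": 87},
]
for event in apply_rule(stream, low_battery):
    open_service_case(event)
```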

    In order to support all those end users, Steele said, the Salesforce IoT Cloud has been designed from the ground up to ingest billions of events.

    Down the road, Steele said, Salesforce will focus more on attracting developers to build applications on top of its cloud platforms, using a set of developer tools constructed for that specific purpose. For now, the primary focus is on showing users of Salesforce applications how to tap into the potential of IoT without having to wait for an internal IT organization to stand up a massive big data platform on its own.

    Obviously, Salesforce is looking to steal a march on IoT rivals by leveraging its traditional strengths with end users to deliver a platform that makes good on many of the promises of IoT sooner rather than later. Steele said Salesforce isn’t interested in managing all the sensors and gateways that make up an IoT environment. Rather, Salesforce views those IoT endpoints as a means through which data can be ingested into the Salesforce IoT Cloud.

    The degree to which Salesforce can leverage its large base of customer relationship management and marketing applications to establish a dominant position in IoT environments remains to be seen. But given the potential trillions of dollars of business value IoT systems are expected to generate, what matters most for solution providers and their customers now is figuring out how to turn all that potential into a new everyday business reality as quickly as possible.

    This first ran at http://talkincloud.com/cloud-computing/salesforce-unveils-iot-business-process-engine

    6:28p
    RagingWire Data Center to Host Shopify Merchant Storefronts

    While the Facebook pages of Shopify merchants will be served out of Facebook’s data centers, those businesses’ digital storefronts created by Shopify will soon be hosted in RagingWire facilities.

    Following Facebook’s announcement this week of a partnership with Shopify to enable Facebook users to buy products from Shopify’s 175,000 sellers without leaving Facebook, NTT Communications-owned US data center provider RagingWire said it has added Shopify to its list of colocation customers.

    RagingWire has data centers in Sacramento, California, and Ashburn, Virginia. It is also building a new campus in the Dallas-Fort Worth market. A company spokesman declined to provide details on Shopify’s deployment, such as the amount of capacity the web company is taking and where, citing confidentiality agreements with the customer.

    The deal is part of a data center expansion initiative for Shopify, according to a statement by Dale Neufeld, the company’s director of technical operations. The company said it has processed more than $10 billion in sales across 150 countries since its founding in 2004.

    RagingWire, which builds large-scale data center campuses and counts Twitter as one of its customers, was acquired by NTT last year for $350 million as part of the Japanese telco’s global data center business expansion push. NTT bought an 80-percent stake in the company that started in Sacramento.

    The acquisition gave RagingWire access to capital to fund further expansion, such as the latest Dallas build.

    7:30p
    VMware Ships New Software-Defined Data Center Tech, and Private OpenStack Cloud is Front and Center

    Right on the heels of the VMworld 2015 conference at the start of the month, VMware this week announced that the raft of products launched at the show is now generally available.

    The updates stretch across the entire VMware data center stack. VMware Integrated OpenStack 2, VMware Site Recovery Manager 6.1, VMware Virtual SAN 6.1, VMware vRealize Log Insight 3, VMware vRealize Operations 6.1, and VMware vSphere APIs for IO Filtering are now shipping.

    The most significant of those offerings is VMware Integrated OpenStack 2, which is based on the Kilo release of the open source cloud infrastructure framework. While in many ways the stack of technologies called OpenStack represents a potential threat to VMware, from a practical perspective it may be years before all the technologies included in that stack are mature enough to support every class of workload running in the data center.

    More on VMware’s easy button for private OpenStack clouds here

    In the meantime, by layering support for OpenStack APIs on top of VMware data center software, the company is making it possible for IT organizations to deploy cloud-native application workloads alongside more traditional enterprise applications – one example would be ERP – which OpenStack is not ready to support, Mark Chuang, senior director of product marketing and product management for VMware, said. As such, IT organizations that have invested in VMware management tools can extend the reach of the VMware management framework into the realm of OpenStack on an as-needed basis.

    Chuang said IT teams need to approach OpenStack as a modular framework that they can implement when and where they see fit. For example, the Neutron virtual networking layer does not have to be deployed in order to support applications looking to invoke an OpenStack API. Nor are they obliged to run KVM, the open source hypervisor used in most typical OpenStack clouds, said Chuang.
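    For readers who want to see what “invoking an OpenStack API” looks like in practice, the sketch below uses the standard openstacksdk Python client to boot an instance. The cloud name, image, flavor, and network IDs are placeholders; the same call should work against VMware Integrated OpenStack or any other OpenStack endpoint, which is the point of the API layer.

```python
# A minimal example of an application invoking the OpenStack compute API via
# the standard openstacksdk client. The cloud name, image, flavor, and network
# IDs are placeholders; the call is the same regardless of whether the
# endpoint is backed by VMware Integrated OpenStack or another deployment.
import openstack

# Reads credentials for the named cloud from clouds.yaml.
conn = openstack.connect(cloud="my-vio-cloud")

server = conn.compute.create_server(
    name="demo-workload",
    image_id="IMAGE_UUID",                # placeholder
    flavor_id="FLAVOR_UUID",              # placeholder
    networks=[{"uuid": "NETWORK_UUID"}],  # placeholder
)
server = conn.compute.wait_for_server(server)
print(server.status)  # ACTIVE once the instance is up
```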

    The key components of the VMware data center stack now available make it possible to implement disaster recovery across a virtual network using VMware vMotion software and to protect data better across multiple clusters using VMware Virtual SAN 6.1 software. With VMware vRealize Log Insight 3, VMware is raising the rate at which log messages can be analyzed to 15,000 per second, while VMware vRealize Operations makes it simpler to visually keep track of application workloads.

    Finally, VMware vSphere APIs for IO Filtering extends replication and caching to third-party products from Asigra, EMC, Infinio, PrimaryIO, Samsung, SanDisk, and StorageCraft.

    Put it all together, and VMware is steadily filling in all the components of a software-defined data center architecture that can span private and public clouds running any type of application workload, Chuang said. In some cases, IT organizations may want to make use of OpenStack APIs to invoke those workloads. But if they do, it shouldn’t prevent them from extending a proven IT management framework to manage those workloads alongside every other workload in the enterprise.

    “Whether it is public or private, it’s still accessible by a software-defined architecture,” Chuang said. “All the resources can be defined in software.”

