Data Center Knowledge | News and analysis for the data center industry
 

Thursday, September 24th, 2015

    12:00p
    Schneider Building SaaS Version of DCIM Software

    NATIONAL HARBOR, Md. – Schneider Electric, the French energy management and automation giant, is working on a Software-as-a-Service version of StruxureWare for Data Centers, its data center infrastructure management (DCIM) software suite.

    The cloud-hosted version, billed on a pay-as-you-go basis, will be part of the next release of the DCIM software, release 8.0, expected early next year, Domenic Alcaro, Schneider’s VP of mission critical services and software, said in an interview on the sidelines of the Data Center World conference here this week. Today, the software can only be deployed on premises.

    DCIM software delivered via cloud is rare in the market. One of the vendors that has a SaaS DCIM offering is Nlyte. Australian data center provider NextDC provides DCIM as a cloud service to its colocation customers. Broadly, however, it is a category of software products that has not made a big push into the cloud.

    That Schneider, one of the leading vendors in the comparatively small DCIM software market, is working on a SaaS version of StruxureWare may be a sign that DCIM is starting to mature as a category. In Alcaro’s opinion, the technology has recently risen out of the “Trough of Disillusionment” phase of Gartner’s Hype Cycle for emerging technologies.

    The market research firm defines five stages all new technologies generally go through: Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, and Plateau of Productivity. Alcaro thinks DCIM is now exiting the trough and starting its way up the slope.

    As it enters this new phase, some things about DCIM software have become clearer, and the biggest of them all is that DCIM is not something a data center operator can implement simply, without paying for services. It’s less like Microsoft Office, where you buy the package, install it, and get to work, Alcaro said; it’s more like an enterprise software product by Oracle or SAP, where implementation is a big job in itself.

    People “don’t blink an eye” when they hire an SAP or Oracle implementation team, he said, yet a lot of the recent disillusionment with DCIM can be attributed to the complexity of implementation.

    Companies need to allocate resources for proper DCIM implementation and be prepared to change processes and dedicate staff to it, he said.

    One of Schneider’s DCIM software clients, a company with a global data center infrastructure, has allocated several million dollars in internal resources for DCIM implementation, Alcaro said, and that’s a good thing, because it’s a sign that the customer is realistic about what it will take while recognizing the product’s value. That figure excludes software licenses, maintenance costs, and any vendor services the company will incur; Alcaro declined to name the client, citing confidentiality agreements.

    The implementation is complex because of the deep integration with existing systems it requires. Integrating with Building Management Systems and IT Service Management Systems requires professional services, whether the DCIM software is hosted on-prem or in the cloud. Without those integrations, you simply will not get the full benefits of DCIM, regardless of how advanced the analytics engine is, Alcaro said.

    Along with the costs of implementation, the value of DCIM has also become clearer. “The ROI is there,” he said. “That’s the good news.” It brings operational savings from energy efficiency and labor and capital savings from reclaimed data center capacity and improved downtime avoidance. If you use your existing infrastructure more efficiently, you need less of it.
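
    As a rough illustration of the reclaimed-capacity argument (the figures below are hypothetical, not Schneider’s or any customer’s), recovering stranded capacity defers new build-out:

        # Back-of-envelope sketch of the reclaimed-capacity savings described above.
        # All figures are hypothetical and for illustration only.

        racks_total = 500
        utilization_before = 0.55        # share of provisioned capacity actually used
        utilization_after = 0.70         # after DCIM exposes stranded power/cooling/space
        cost_per_rack_buildout = 25_000  # capex to add an equivalent rack of capacity (USD)

        # Capacity effectively recovered without building anything new.
        reclaimed_racks = racks_total * (utilization_after - utilization_before)
        deferred_capex = reclaimed_racks * cost_per_rack_buildout

        print(f"Reclaimed capacity: {reclaimed_racks:.0f} rack equivalents")
        print(f"Deferred build-out: ${deferred_capex:,.0f}")
        # -> Reclaimed capacity: 75 rack equivalents
        # -> Deferred build-out: $1,875,000

    Energy and labor savings would come on top of any deferred construction.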

    It also presents a revenue-generation opportunity for colocation providers. Another new feature in release 8.0 of StruxureWare for Data Centers will be the ability for colos to provide customers with a portal for visibility into their data center environment.

    For in-depth guides to selecting, implementing, and operating DCIM software, visit the DCIM InfoCenter on Data Center Knowledge.

    Note: AFCOM, the company behind Data Center World, is a sister company to Data Center Knowledge.

    3:00p
    Netflix to Use StackStorm for IT Automation Under Cassandra

    StackStorm this week revealed that it has signed Netflix as one of its first major customers, while at the same time unveiling an enterprise edition of its open source event-driven IT automation framework.

    During a keynote presentation at the Cassandra 2015 conference in Santa Clara, California, Christos Kalantzis, director of cloud database engineering for Netflix, described how Netflix will rely on StackStorm, initially to automate the management of its implementation of the open source Cassandra database and later the rest of the Netflix IT environment.

    Netflix currently stores all its metadata about usage and billing in the NoSQL Cassandra database. StackStorm will give Netflix a more comprehensive framework for troubleshooting and auto-remediation, said StackStorm CEO Evan Powell.

    As the developer of the original open source project, StackStorm provides commercial services around a framework designed to be a superset of multiple existing IT automation tools. As such, StackStorm provides a unified approach to invoking those tools.

    In the case of Netflix, Powell said, StackStorm is being used to monitor the Simple Queue Service (SQS) on Amazon Web Services, where the implementation of Cassandra that Netflix uses resides. Powell said StackStorm sensors listen to SQS and issue triggers into the StackStorm internal message bus. A rules engine then listens to that message bus and does pattern matching to identify performance threshold violations. The system then checks to see if there is a tool that can be invoked to automatically remediate the problem. In cases where no solution to the problem can be found, StackStorm sends an alert to the appropriate admin.
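
    The pattern Powell describes (sensors that turn external events into triggers, a rules engine that matches them, and actions that either remediate or escalate) can be sketched in simplified form. The snippet below is a conceptual illustration only, not StackStorm’s actual sensor or rule API; the queue URL, threshold, and action functions are hypothetical.

        # Conceptual sketch of the flow described above, not StackStorm's API:
        # poll SQS for events, pattern-match them against rules, then either run
        # an automated fix or page a human.
        import json
        import boto3

        sqs = boto3.client("sqs", region_name="us-east-1")
        QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/cassandra-events"  # hypothetical

        def restart_cassandra_node(event):      # hypothetical remediation action
            print("restarting node", event.get("node"))

        def page_on_call_admin(event):          # hypothetical alert action
            print("paging admin about", event)

        # "Rules engine": predicate -> remediation action for known problems.
        RULES = [
            (lambda e: e.get("metric") == "read_latency_ms" and e.get("value", 0) > 500,
             restart_cassandra_node),
        ]

        def handle(event):
            for predicate, action in RULES:
                if predicate(event):
                    action(event)               # auto-remediate a recognized problem
                    return
            page_on_call_admin(event)           # no matching rule: escalate to a human

        while True:
            resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                                       WaitTimeSeconds=20)
            for msg in resp.get("Messages", []):
                handle(json.loads(msg["Body"]))
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])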

    The end result, said Powell, is not only less fatigue for the IT staff, but also a process through which IT organizations can actually gain insight into how their processes actually work.

    “A lot of the IT automation systems in place today are opaque,” said Powell. “Our approach is designed to build trust over time.”

    With the release of the enterprise edition, StackStorm is making available a visual automation authoring and management utility, called StackStorm Flow, as well as support for LDAP, role-based access control and integration packs that connect StackStorm to thousands of applications and systems.

    In addition, StackStorm is making available a quick-start program alongside professional services that make use of telemetry data delivered via a private channel set up by StackStorm.

    Pricing for the enterprise edition, which can be used for free for 30 days, starts at $500 per month.

    3:30p
    How to Optimize Your Data Center and Save Millions

    Gary Oliver is President and CEO of Blazent.

    Here’s a helpful tip: Don’t try using the aphorism, “What you don’t know can hurt you,” with your data center professionals. Trust me, they know it already. And they live in daily fear that this lack of insight could strike down their infrastructure (and your company) at any minute.

    Every CEO, CIO and IT professional operates with the knowledge that they could potentially be tomorrow’s headline. Often those headlines are due to security breaches, but those breaches can also result from an unprotected or unrecorded IT asset. The recent four-hour New York Stock Exchange outage, for example, which led the NYSE to cancel all open orders, stemmed from issues around updating software/lifecycle management tools, an eminently preventable occurrence – if IT knew what it didn’t know.

    Why are there so many headlines when we’re spending more on IT than ever before? It’s simple: We are involved in a struggle with our data – and the data is winning. Think for a moment about what the IT department has to track and utilize: data center hardware and software; virtual assets; a dizzying array of end-user computing devices; the networking infrastructure; public and private clouds; and, of course, all the data and applications at the core of their operation.

    As today’s enterprise IT environments increase in complexity, dynamism and scale, the key to surviving and thriving in this new business and IT environment lies in the data center, because a complete and accurate picture of an enterprise’s data and infrastructure is the only way to make factual and informed decisions today while planning for tomorrow’s IT and business future.

    Here are four tips to gaining and maintaining a complete and accurate picture of your entire data operation:

    Create a Strong, Functioning 1.0 IT and Data Infrastructure

    It may feel like we’re in the middle of a data storm, buffeted from all angles, but the pace and quantity are leisurely compared to what is to come (Data 2.0 – the world of Big Data – and Data 3.0 – the Internet of Things). The inability to articulate, gather and manage all your data leaves you open to any number of IT and financial risks. To list just a few:

    • IT compliance issues can drive up costs
    • Operational shortfalls can increase risk
    • Change collisions can cause significant outages (NYSE)
    • Failing IT or software audits
    • Significant delays in identifying and resolving incidents

    The good news is: if you put the correct foundation in place – platforms and tools capable of identifying and managing all your data and IT assets – that foundation should be able to scale to handle your Big Data and IoT data.

    Get Your Hands Around ALL Your Data

    And by all we mean machine and human data, structured and unstructured. Assets and data. There are over 200 data types out there (the standard enterprise works with about 25, but that’s still a lot to understand and track), not to mention shadow IT operations. The tools and platforms are now there to identify and manage all of this data, but you need the commitment and resources to apply and manage them.

    Once You’ve Got All that Data, Put it to Use

    Even the most sophisticated enterprises use only a fraction of their data. And many smaller enterprises, envious of their bigger compatriots, acquire data sets that they are unable to process and manage. The key, again, is to gather all your existing data and put it to good use (find those servers you didn’t even know existed, which software isn’t upgraded, etc.) then build upon this foundation. Once you have corralled and applied your existing resources, you will be ready for the machine learning and predictive tools and systems that can take what you have and optimize it.

    Push Yourself as Hard as You Push Your Data

    Only by truly and efficiently using your data can you take your IT and business decisions to the next level. Ask the tough questions that would make you more competitive, efficient or profitable, and then task your IT department and data professionals with providing the right answers. Finally, as with Data 1.0 and its successors, scale your questions so that you are pushing your data—and yourself—to new levels of operational efficiency.

    In most enterprises, the CIO now has a seat at the Big Table, which is as it should be. Data is no longer just a part of an organization’s supporting infrastructure; it is often the difference between profit and loss, even solvency and insolvency. And the CIO will be tasked with not only making sense of the enterprise’s data, but of optimizing it – both for IT operational efficiency and business advantage.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:30p
    Here’s What Caused Sunday’s Amazon Cloud Outage


    This article originally appeared at The WHIR

    A minor network disruption at Amazon Web Services led to an issue with its NoSQL database service DynamoDB, causing some of the internet’s biggest sites and cloud services to become unavailable on Sunday, and problems continued to plague the public cloud service with a further disruption Wednesday.

    In a blog post outlining the issue experienced Sunday, AWS explained that the US-East region experienced a brief network disruption impacting DynamoDB’s storage servers.

    Understanding How DynamoDB Manages Table Data and Storage Server Requests

    DynamoDB’s internal metadata service manages the partitions of sharded table data, keeping track of partitions spread across many servers. The specific assignment of a group of partitions to a given server is called a “membership”.

    The storage servers that hold data in partitions periodically confirm that they have the correct membership, and check for new table data present on other partitions, through a request to the metadata service. Storage servers also send a membership request after a network disruption or on startup.

    If storage servers don’t receive a response from the metadata service within a specific time period, they retry, but also disqualify themselves from accepting requests. These membership requests were already taking longer to fulfill because a new feature had been expanding the amount of membership data being requested.

    The new DynamoDB feature, called “Global Secondary Indexes,” allows customers to access table data using multiple alternate keys. For tables with many partitions, this means the overall size of a storage server’s membership data could increase two- or three-fold, causing it to take longer to fulfill membership requests. AWS said it wasn’t doing detailed tracking of membership size, and didn’t provide enough capacity to the metadata service to handle these larger requests.
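
    For context, a Global Secondary Index is declared alongside a table’s primary key and queried by name. The boto3 sketch below illustrates the alternate-key access the feature enables; the table, attribute and index names are hypothetical and have nothing to do with DynamoDB’s internal metadata service.

        # Minimal boto3 sketch: a table keyed by user_id/event_time, plus a
        # Global Secondary Index that lets clients query by device_id instead.
        import boto3

        ddb = boto3.client("dynamodb", region_name="us-east-1")

        ddb.create_table(
            TableName="PlaybackEvents",                      # hypothetical table
            AttributeDefinitions=[
                {"AttributeName": "user_id", "AttributeType": "S"},
                {"AttributeName": "event_time", "AttributeType": "S"},
                {"AttributeName": "device_id", "AttributeType": "S"},
            ],
            KeySchema=[
                {"AttributeName": "user_id", "KeyType": "HASH"},
                {"AttributeName": "event_time", "KeyType": "RANGE"},
            ],
            GlobalSecondaryIndexes=[{
                "IndexName": "device-index",                 # the alternate key path
                "KeySchema": [
                    {"AttributeName": "device_id", "KeyType": "HASH"},
                    {"AttributeName": "event_time", "KeyType": "RANGE"},
                ],
                "Projection": {"ProjectionType": "ALL"},
                "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
            }],
            ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        )

        # Query by the alternate key via the index rather than the table's primary key.
        ddb.query(
            TableName="PlaybackEvents",
            IndexName="device-index",
            KeyConditionExpression="device_id = :d",
            ExpressionAttributeValues={":d": {"S": "device-1234"}},
        )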

    Again, storage servers unable to obtain their membership data within an allotted time retry, and remove themselves from taking requests.
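
    In simplified form, the renewal behavior AWS describes looks roughly like the sketch below. It is a conceptual model only, not AWS’s implementation; the timeout value and the stand-in metadata service are illustrative.

        # Conceptual model of the storage-server behavior described above: a server
        # only serves traffic while its membership data is current, and a slow
        # metadata service pushes it into a retry loop.
        import random
        import time

        MEMBERSHIP_TIMEOUT = 2.0   # seconds to wait for the metadata service (illustrative)

        class OverloadedMetadataService:
            """Stand-in for the metadata service; times out when overloaded."""
            def __init__(self, overloaded=False):
                self.overloaded = overloaded

            def get_membership(self, timeout):
                if self.overloaded:
                    raise TimeoutError("membership request exceeded %.1fs" % timeout)
                return {"partitions": ["p1", "p2", "p3"]}   # illustrative membership list

        class StorageServer:
            def __init__(self, metadata_service):
                self.metadata_service = metadata_service
                self.membership = None
                self.serving = False   # only accept requests while membership is current

            def renew_membership(self):
                try:
                    # Larger membership lists (e.g. many GSIs) make this call slower
                    # and more likely to exceed the timeout.
                    self.membership = self.metadata_service.get_membership(
                        timeout=MEMBERSHIP_TIMEOUT)
                    self.serving = True
                except TimeoutError:
                    # The failure mode in the outage: the server stops taking traffic
                    # and keeps retrying, adding yet more load to the metadata service.
                    self.serving = False
                    time.sleep(random.uniform(0.1, 1.0))
                    # caller retries renew_membership()

        server = StorageServer(OverloadedMetadataService(overloaded=True))
        server.renew_membership()   # leaves server.serving == False, as in the outage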

    A Network Disruption Triggers a Meltdown

    When the network disruption hit on Sunday morning, storage servers simultaneously requested membership data, including some enormous membership lists, essentially flooding the metadata service with many large requests that timed out. As a result, storage servers could not complete their membership renewal and became unavailable for customer requests, yet they continued to retry their membership requests, further clogging DynamoDB.

    By 2:37 am PDT, around 55 percent of requests to DynamoDB failed.

    AWS attempted to fix the issues by adding capacity to the metadata service to deal with the additional load of the membership data requests. However, the service was under such high load that administrative requests couldn’t get through.

    At around 5am, AWS paused metadata service requests, which decreased retry activity and relieved much of the load on the metadata service so that it would respond to administrative requests, allowing admins to add capacity. DynamoDB error rates dropped to acceptable levels by 7:10am.

    Preventing DynamoDB From Causing More Problems

    AWS said it is doing four things to ensure a similar event doesn’t happen again: increasing the capacity of the metadata service; implementing stricter monitoring of DynamoDB performance; reducing the rate of storage node membership data requests and allowing more time to process queries; and segmenting DynamoDB so there are essentially many metadata services available for queries.
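
    One common way to reduce the rate of retries after a disruption, so that a fleet of clients does not hammer a recovering service in lockstep, is capped exponential backoff with jitter. The sketch below is a generic illustration of that technique, not AWS’s internal implementation.

        # Generic capped exponential backoff with full jitter: each failed attempt
        # waits a random interval up to an exponentially growing (but capped) bound.
        import random
        import time

        def call_with_backoff(request_fn, max_attempts=8, base=0.5, cap=60.0):
            for attempt in range(max_attempts):
                try:
                    return request_fn()
                except TimeoutError:
                    # Randomized sleep spreads retries out across the fleet.
                    time.sleep(random.uniform(0, min(cap, base * (2 ** attempt))))
            raise RuntimeError("service still unavailable after retries")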

    Ensuring DynamoDB works correctly is especially important because many services such as Simple Queue Service (SQS), EC2 Auto Scaling, CloudWatch, the AWS Console and others were affected by DynamoDB’s high error rates.

    Meanwhile, a second – albeit less critical – issue was reported on Wednesday, with latency and errors affecting the DynamoDB metadata service, along with disruptions to Elastic Block Store (EBS), new instance launches, and Auto Scaling services in the US-East-1 region.

    Such errors are a serious concern for enterprises that rely on AWS to run their businesses.

    Some of the companies reportedly impacted in the Sunday outage included Netflix, Reddit, Product Hunt, Medium, SocialFlow, Buffer, GroupMe, Pocket, Viber, Amazon Echo, Nest, IMDb, and Heroku.

    In its message to customers, AWS apologized for the cloud outage, noting, “For us, availability is the most important feature of DynamoDB, and we will do everything we can to learn from the event and to avoid a recurrence in the future.”

    This first ran at http://www.thewhir.com/web-hosting-news/amazon-sheds-light-on-dynamodb-disruption-that-caused-massive-outage

    5:00p
    Analytics Becomes Next Great Cloud Service


    This article originally ran at Talkin’ Cloud

    When it comes to analytics, there are primarily two types of use cases. The first generally involves fairly sophisticated end users who access analytics applications via a traditional user interface. The second, less conventional but growing, involves users of other types of applications invoking analytics engines via an application programming interface (API).

    Case in point is Salesforce, which at the recent Dreamforce 2015 conference announced that it has opened the programming model surrounding the Salesforce Wave Analytics Cloud. Anna Rosenman, senior director of product marketing for Salesforce Wave Analytics Cloud, says that while the first version of this cloud offering was aimed primarily at line-of-business users, Salesforce is now also focusing on recruiting independent software vendors (ISVs) to make use of Salesforce Analytics Cloud as part of applications that invoke Salesforce customer records.

    At Dreamforce, 13 ISVs showcased applications that invoke Salesforce Wave Analytics Cloud, including a quote-to-cash application from Apttus; a customer lifecycle management application from SteelBrick; a customer churn application from Vlocity; and a set of ERP applications from FinancialForce.com. All told, Salesforce claims that more than 80 companies have now joined the Salesforce Wave Analytics Cloud partner ecosystem.

    When all is said and done, it’s likely that a lot more end users are going to wind up invoking analytics via APIs that are exposed by cloud services than those that currently do so using standalone applications. It’s not that users of standalone analytics applications will give up on those applications. Rather, more end users will be exposed to analytics within the context of whatever application they happen to be using because it’s becoming simpler for ISVs to invoke analytics as a service in the cloud.
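
    In practice, embedding analytics this way usually amounts to an application calling out to a cloud analytics service over REST at the moment a user needs an answer. The sketch below illustrates the general pattern only; the endpoint, payload and response shape are hypothetical and are not Salesforce’s documented Wave API.

        # Hypothetical example of an application embedding a cloud analytics call.
        import requests

        ANALYTICS_ENDPOINT = "https://analytics.example.com/api/v1/query"  # hypothetical
        API_TOKEN = "..."  # obtained from the provider's normal auth flow

        def churn_risk_for_account(account_id):
            # Ask the analytics service for a score and surface it in the app's UI.
            resp = requests.post(
                ANALYTICS_ENDPOINT,
                headers={"Authorization": "Bearer " + API_TOKEN},
                json={"metric": "churn_score", "account_id": account_id},
                timeout=10,
            )
            resp.raise_for_status()
            return resp.json().get("score")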

    Of course, not every ISV is going to do that of their own accord. In fact, many IT organizations will more than likely have a strong preference for one analytics engine that serves multiple applications. As such, many of them will be looking to solution providers to integrate both new and existing applications with a specific cloud analytics service.

    Regardless of who actually does the integration, one thing is certain: analytics is about to become a mainstream part of just about every application environment. In fact, it may prove impossible to sell any application that doesn’t provide access to a rich set of analytics services in one form or another.

    This first ran at http://talkincloud.com/cloud-computing/analytics-becomes-next-great-cloud-service

    5:30p
    Dell Enlists Cloudera and Syncsort to Drive Hadoop Adoption


    This post originally appeared at The Var Guy

    One of the rarer opportunities in the channel is a truly “greenfield” application that pulls along a massive amount of IT infrastructure. For that reason the channel as a whole should be looking to drive more adoption of Big Data analytics applications. In fact, with that goal in mind, Dell has been steadily building out a reference architecture for Hadoop that runs on top of its PowerEdge servers.

    This week Dell extended that architecture to include the distribution of Hadoop from Cloudera that has been validated alongside extract, transform and load (ETL) software from Syncsort. The end goal, said Armando Acosta, Hadoop product and planning manager for Dell, is to make use of ETL software that traditionally has been used to move large amounts of data in mainframe environments and apply those concepts to Hadoop.

    Most IT organizations have a large amount of data they have been collecting for years residing in a data warehouse. Rather than continuing to pay for the commercial software licenses associated with storing that data, many IT organizations are looking to shift that data in bulk into an open source Hadoop environment. And rather than having to validate an ETL solution themselves, Dell is working with Cloudera and Syncsort to create a reference architecture that will prove to be the foundation for a modern data warehouse, Acosta said.
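
    The bulk-offload pattern Acosta describes can be sketched in a few lines of PySpark: read a warehouse table over JDBC and land it in Hadoop in a columnar format. This is an illustration of the general approach, not the Dell/Cloudera/Syncsort reference architecture itself, and the connection details and paths are hypothetical.

        # Minimal PySpark sketch of offloading a warehouse table into Hadoop.
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("warehouse-offload").getOrCreate()

        orders = (spark.read.format("jdbc")
                  .option("url", "jdbc:oracle:thin:@//dw.example.com:1521/DWPROD")  # hypothetical
                  .option("dbtable", "SALES.ORDER_HISTORY")
                  .option("user", "etl_user")
                  .option("password", "...")
                  .option("fetchsize", "10000")
                  .load())

        # Land the data in HDFS as Parquet so Hadoop-native BI and analytics tools
        # can query it without the original warehouse's commercial license.
        orders.write.mode("overwrite").parquet("hdfs:///warehouse_offload/order_history")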

    The rate and degree to which modern data warehouses based on Hadoop will replace traditional data warehouses, however, is the subject of some debate. Many IT organizations have made massive investments in existing data warehouse applications that they are not ready to part with. In those scenarios Hadoop is being used primarily as a low-cost storage environment for data that needs to move into the data warehouse to be processed. At the other end of the spectrum, a significant number of business intelligence and analytics applications that work natively against Hadoop are becoming available. As these applications become more widely adopted, the need for legacy data warehouse applications may be sharply reduced.

    Whatever the ultimate outcome, one thing is for certain: A lot of data will be moving in and out of Hadoop in bulk for a long time to come. As such, solution providers would do well to start thinking about not only how to stand up a Hadoop cluster but also all the related technologies required to make that Hadoop cluster operational.

    This first ran at http://thevarguy.com/big-data-technology-solutions-and-information/092415/dell-enlists-cloudera-and-synsort-drive-hadoop-adoptio

