Data Center Knowledge | News and analysis for the data center industry
 

Thursday, May 29th, 2014

    11:00a
    IT Ops Analytics Startup ExtraHop Closes $41M Series C

    ExtraHop, a Seattle, Washington-based IT operations software startup, has closed a $41 million Series C funding round – its biggest to date.

    The seven-year-old company, founded by two former F5 Networks engineers who were involved in the creation of TMOS, the traffic management software that underpins F5's networking products, has now raised a total of $61.6 million and sold to some big-name customers, such as Lockheed Martin, Morgan Stanley, McKesson and Purdue Pharma.

    Gathering big data for IT operations

    ExtraHop’s solutions gather network data and apply analytics to distill IT operations intelligence. Its latest product, ExtraHop for AWS, enables IT managers to monitor performance of applications running in Amazon Web Services’ infrastructure cloud in real time. The company’s flagship product is the EH8000 appliance launched in April 2013.

    The main appeal of ExtraHop's technology is how much it simplifies gathering data from the network for the user. By tapping into wire data, ExtraHop collects real-time information about every aspect of the network, from physical connections to the way applications relate to the network. Monitoring physical and logical connections, it is able to decode all wire protocols, from HTTP/S through LDAP and SQL.

    It does not require the user to install software agents on their servers or do anything special to make the data available, according to video testimony by Jim Hutchins, CTO of T2 Systems, one of ExtraHop's customers.
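    To make the agentless approach concrete, here is a minimal sketch of passive wire-data capture on a mirrored switch port, written in Python with the third-party scapy library. It illustrates the general technique only, not ExtraHop's actual software; the SPAN port, the port-80 filter and the HTTP request-line parsing are all simplifying assumptions for the example.

        # passive_wire_sniff.py -- illustrative sketch of agentless wire-data
        # capture; a real analytics platform reassembles streams and decodes
        # many protocols, not just HTTP request lines.
        from scapy.all import sniff, TCP, Raw

        def inspect(pkt):
            # Examine TCP payloads and print anything that looks like an
            # HTTP request line. No agent runs on the monitored servers;
            # traffic arrives via a SPAN/mirror port or network tap.
            if pkt.haslayer(TCP) and pkt.haslayer(Raw):
                data = bytes(pkt[Raw].load)
                if data.startswith((b"GET ", b"POST ", b"PUT ", b"DELETE ")):
                    print(data.split(b"\r\n", 1)[0].decode(errors="replace"))

        sniff(filter="tcp port 80", prn=inspect, store=False)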

    New VC on board

    Technology Crossover Ventures joined the group of venture capitalists funding the company and led the latest round. Companies in TCV’s portfolio include the likes of Splunk, Zillow, Expedia and Spotify. Ted Coons, a TCV principal, has also joined ExtraHop’s board of directors.

    “Just as Splunk’s platform fundamentally changed the way businesses leverage machine data, ExtraHop has transformed wire data into a key source of visibility and intelligence for IT operations teams with its Wire Data Analytics platform,” Coons said, drawing a parallel with the successful vendor of analytics software for machine data that had a blockbuster IPO in 2012.

    Previous investors Meritech Capital Partners and Madrona Venture Group also joined the latest round. ExtraHop closed its Series B round in 2011. The company said its revenue grew 150 percent between 2012 and 2013.

    Its founders are Jesse Rothstein (CEO) and Raja Mukerji (president), who started the company after spending six and seven years (respectively) working at F5, a major Seattle networking technology vendor.

    12:00p
    Cisco Kicks Off Manufacturing of UCS Servers in Brazil

    Cisco's manufacturing operation in Brazil has recently delivered its first Unified Computing System (UCS) servers. The San Jose, California-based vendor committed to investing over R$1 billion in Brazil over four years back in 2012, which included plans to expand local manufacturing in the country.

    “Expanding local output by manufacturing Cisco’s UCS servers in Brazil, alongside the Innovation Center in Rio de Janeiro and additional investments, reinforces our long-term commitment to the country,” said Rodrigo Dienstmann, president of Cisco in Brazil. “The UCS server has been an unprecedented global success for Cisco. By optimizing delivery schedules and with more market competitiveness, we expect the Cisco UCS to become the foundation of a wider adoption of cloud computing and convergent systems in Brazil, adding value and increasing productivity.”

    Where servers are built and how they are shipped around the world has gained relevance in light of recent revelations by Edward Snowden and The Guardian that the U.S. National Security Agency may have been tampering with IT gear exported from the U.S., allegedly to install “backdoor” surveillance devices. While Cisco’s investment plans in Brazil long predate the scandal, local production might mean more than just local economic benefit now that the information about potential NSA tampering has been released.

    Cisco CEO John Chambers was vocal about the issue. In a letter to President Barack Obama earlier this month, he wrote that the practice might lead to “a fragmented Internet, where the promise of the next Internet is never fully realized.”

    The company is starting local production of blade and rack servers from the UCS family, a converged infrastructure stack that combines computing, networks, management, virtualization and storage access into a single integrated architecture. Launched in 2009, UCS took networking company Cisco to second place in the blade segment rankings.

    Brazil is the main battleground for cloud in South America, and Cisco is investing heavily to secure market share in the emerging market. The two main reasons the company decided to manufacture UCS locally were Brazil's increasing adoption of cloud computing and rising demand from Brazilian companies for efficient data center solutions. As local manufacturing expands, Cisco says it expects to become one of the main server suppliers in Brazil and Latin America.

    As part of its investment plan, Cisco also committed to opening an innovation center in Rio de Janeiro and investing in a Brazil-focused ICT and digital economy venture capital fund. The company also plans to strike intellectual property agreements and partnerships with Brazilian companies and entities to collaborate on innovation.

    12:00p
    IBM Risks Losing Traditional Enterprise Deals in Name of Cloud Solutions

    IBM is faced with the difficult task of balancing its traditional enterprise business with a growing cloud business. Its new Cloud Business Solutions portfolio has components you'd find in a traditional enterprise deal, but IBM sells them “as a service”: consulting services, pre-built assets, software products, ongoing support and SoftLayer cloud infrastructure, all in a single client agreement.

    The company plans to deliver a total of 20 solutions in the portfolio this year, each aligned to a specific industry need, from mobile to fraud prevention. They integrate assets and expertise from IBM Research with advanced analytics “as a service” and are packaged for fast, simple access and rapid implementation. Each solution has six to eight industry-specific models.

    Traditionally, these types of package deals would require a multi-year commitment and a ton of upfront capital. The new breed of solutions from IBM doesn't require anywhere near that level of commitment. “It makes a ton of sense if there’s technology risks, limited capital; maybe the usage patterns are highly variable, so they want to leverage the cloud,” said Dave Seybold, vice president of IBM’s Global Business Services division. “We are going beyond infrastructure services. These are industry-, client-specific mission-critical solutions. What we’re trying to do is get clients to take risks. Leveraging labor and support models without term.”

    IBM is focusing more on systems of engagement than systems of record. “We’re renting cloud, software, passing that license through the client and offering managed infrastructure services on a monthly basis; same for application management,” Seybold said. “We’re putting it all together in a single contract with a baseline charge and a usage fee model. This is the way clients need and want to buy.”
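    As a rough illustration of how such a "baseline charge plus usage fee" contract differs from an upfront license, consider the sketch below; the figures are invented for the example and are not IBM's actual rates.

        # billing_sketch.py -- hypothetical numbers, for illustration only
        BASELINE_PER_MONTH = 10_000.0  # fixed monthly baseline charge (assumed)
        RATE_PER_UNIT = 0.25           # fee per metered usage unit (assumed)

        def monthly_bill(units_used: float) -> float:
            """Client pays the baseline plus a fee scaled to actual usage."""
            return BASELINE_PER_MONTH + RATE_PER_UNIT * units_used

        print(monthly_bill(50_000))  # 22500.0 -- cost tracks consumption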

    Risking loss of lucrative long-term contracts

    There are risks for IBM here, namely threatening a traditional enterprise IT business that has been very lucrative for the vendor. The new approach to services takes away guaranteed multi-year contracts or big upfront software license sales, but these are risks IBM is willing to take. “It’s recognizing revenue only as the client takes advantage,” Seybold said. “The risk is if the solutions don’t succeed, there’s less ability to recoup cost on our part. But if we do this appropriately, we shouldn’t have to worry. That’s the way clients want to buy. This is the way we need to do business.”

    He said it was all about flexibility. There are enterprises that know what they want; in those cases, they can still sign those multi-year deals. “We have to be willing to sell it or rent when required. In general, this is going to become a business model that clients are going to choose. Maybe if the solution is [needed for] seven to 10 years, it’s probably more cost effective to buy upfront. When that’s the right model, they’re going to buy it. We’ll execute a more traditional model. But when there’s technology risk, any number of factors, uncertainty, this is an alternative.”

    IBM is making a lot of changes to the way it does business to accommodate this initiative, Seybold said. “We’re made up of divisions; we’re bringing all of those groups together. You have to eliminate overlap. You have to make it simple, easy.”

    The first Cloud Business Solutions include:

    • Care Coordination: The solution enables stakeholders in a healthcare ecosystem to collaborate. It integrates the capabilities of care coordination, management, analytics and patient engagement.
    • Customer Data: This solution brings together disparate internal and external data sources and applies analytics to improve marketing operations and planning performance across all channels.
    • Mobile: This is a suite of solutions for industries such as banking, retail, healthcare, insurance and travel and transportation. It includes mobile accelerators, mobile designs and development models that support an agile, iterative development process.
    • Predictive Asset Optimization: The solution enables a proactive analysis process using equipment and application data to plan, monitor, manage and mitigate equipment failures.
    • Smarter Asset Management: Asset management is critical to business operations, but not an area in which business leaders want to invest significant capital. This solution gives clients optimized asset performance with minimal resources and low capital investment.
    12:30p
    Are All Security Vulnerabilities Preventable?

    Winston Saunders has worked at Intel for nearly two decades and currently works on making the data center more secure and efficient. Winston is a graduate of UC Berkeley and the University of Washington. You can find him online at “Winston on Energy” on Twitter.

    As a relative newcomer to the security arena, it’s taken some OJT (on-the-job training), a baptism (or two) by fire, and some blood, sweat, and tears to start “seeing my way.” While information security is a complex field, I like to think that what I may still lack in direct experience I can, in some ways, make up for with an open mind, broad experience, and some “out of the box” thinking.

    A big part of my job is developing systems and processes to detect and prevent product vulnerabilities at their earliest possible (and least costly) points in the architecture and design phases of program execution.

    Changing the Mindset

    Several years ago, I worked in a manufacturing environment where one had to be careful around hazardous energies, chemicals, ergonomic risks, etc. The risks are very real; a mistake in the wrong place at the wrong time could result in serious chronic or acute injury. But in reality, actual workplace exposures were very low.

    The difference was mindset. In some industrial environments, workers just acquiesce to the idea that “accidents are unavoidable.” They presume because it’s dangerous stuff that danger is inherent in the work. But experience shows that is flat-out wrong.  The correct mindset is that “all accidents are preventable.” In industrial settings, systems and procedures can and should be put in place to reduce risk and prevent accidents. And, if an accident happens, systems should be examined and improved to ensure the accident doesn’t occur again. All accidents are preventable. This mindset change has been shown over time to be incredibly effective in reducing accidents and injuries.

    Security Vulnerabilities Are Preventable

    Today one can read that “many hackings were preventable,” as if all were not, or that “many device vulnerabilities are preventable,” as if some vulnerabilities were unavoidable.

    But aren’t designed-in security vulnerabilities just “accidents” in the development process, and aren’t all accidents preventable? It’s time for us, as an industry, to borrow a page from the industrial safety playbook. Just as industrial accidents are preventable, so are security vulnerabilities. Looking at the vulnerability detection and prevention measures common in software and hardware development, I believe we need to adopt a more aggressive stance. The same applies, in my view, to building automation systems.

    “All vulnerabilities are preventable.”
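    As a toy example of catching defects at the earliest, cheapest point, the sketch below scans a C codebase for a few classically unsafe calls before code is merged. It is a stand-in for a real static analyzer, and the file layout and banned-function list are assumptions made for illustration.

        # shift_left_check.py -- toy pre-merge gate; real teams use proper
        # static analysis, this only flags a handful of unsafe C calls.
        import pathlib
        import re
        import sys

        BANNED = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

        def main() -> int:
            failures = 0
            for path in pathlib.Path(".").rglob("*.c"):
                text = path.read_text(errors="replace")
                for lineno, line in enumerate(text.splitlines(), start=1):
                    if BANNED.search(line):
                        print(f"{path}:{lineno}: unsafe call: {line.strip()}")
                        failures += 1
            return 1 if failures else 0  # nonzero exit blocks the merge

        if __name__ == "__main__":
            sys.exit(main())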

    Can a mindset change alone guarantee that no security breaches will happen? Of course not. Systems are not perfect. When problems occur, effort must be expended to understand and address the root cause. But if we accept that we can learn and improve systems and processes to continuously eliminate vulnerabilities, we will at least steer away from the dangerous attitude of inevitability. All vulnerabilities are preventable.

    One of my colleagues here at Intel is fond of saying, “security isn’t special.” In a sense he may be right. Isn’t information security in many ways a play from an older playbook of risk mitigation and continuous improvement, applied to a new and exciting context?

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.


    1:52p
    7×24 Exchange Spring Conference

    7×24 Exchange will host its spring conference June 1-4 at the Boca Raton Resort & Club in Boca Raton, FL. The theme of the conference is The Data Revolution.

    7×24 Exchange is aimed at knowledge exchange among those who design, build, operate and maintain mission-critical enterprise information infrastructures. Its goal is to improve end-to-end reliability by promoting dialogue among these groups.

    For more information and to register, visit the 7×24 Exchange website.

    Venue

    Boca Raton Resort & Club

    501 E Camino Real, Boca Raton, FL 33432

    See hotel website for more information.

    For more events, return to the Data Center Knowledge Events Calendar.


    6:18p
    Colos Spend More on Data Centers as Enterprises Tighten Purse Strings

    Companies leasing out data center space as a business are spending more and more on expanding data center capacity, as enterprise data center spending continues to decline.

    That is according to results of this year’s data center industry survey by the Uptime Institute. The organization released results of the latest annual survey at the Uptime Symposium in Santa Clara, California, earlier this month.

    “Budgets are up, but they’re not up for everybody,” said Matt Stansberry, Uptime’s director of content, who presented the survey results at the Symposium.

    Nearly 90 percent of colocation providers surveyed have seen their data center budgets grow year over year, and about half of those budget increases are fairly large. Excluding those in the financial services industry, only half of the enterprises surveyed received data center budget increases, while the other half reported either flat or shrinking budgets.

    The numbers indicate that the trend of enterprises outsourcing data center operation to commercial providers is continuing. For the past several years, enterprise data center spending has been going down as providers build more and more data center capacity.

    Financial services companies were a notable exception among enterprises. More than 60 percent of companies in this category reported budget increases, which is indicative of a rapidly growing role technology plays in financial services.

    Here is Stansberry’s drill-down into the budget trends:

    [Slide: year-over-year data center budget trends]

    Source: Uptime Institute’s 2014 Data Center Industry Survey Results slide deck

    The survey group was split almost equally between enterprises and third-party service providers.

    About half of survey respondents were from North America; about 20 percent from Europe. The remaining 30 percent was split predominantly between Asia Pacific and Latin America, with some participation from Russia, Africa and the Middle East.

    Of them, more than 50 percent were facilities managers; about one-fourth were IT managers, and the rest vice presidents and C-level execs.

    Further shift from on-premises is slow

    Enterprises plan to have more computing capacity in colocation facilities or in public clouds over the next 12 months, but not by a lot.

    Currently, 25 percent of the respondents’ total capacity runs in colos and seven percent is outsourced to public cloud providers. The colo portion is expected to grow by only one percentage point over the next 12 months, and the public cloud portion by two percentage points, Uptime concluded.

    Of the companies with 5,000 servers or more that use colocation providers, 37 percent use more than five different providers. The majority of that 37 percent are financial services companies.

    [Slide: colocation provider use among respondents with large server fleets]

    Source: Uptime Institute’s 2014 Data Center Industry Survey Results slide deck

    IT operations teams within organizations exert the most influence on selection of colocation providers, the survey found. The primary selection criterion is availability, and the primary drivers of outsourcing to colocation providers are ability to scale geographically and reduction of capital costs.

    About half of financial services companies surveyed have a formal process for defining requirements of their third-party data center providers, while only about 30 percent of traditional enterprises use a formal process for this. The rest of the respondents did not know whether there were such processes in their organizations.

    7:41p
    Supermicro Launches Low-Power Atom-Based Cold Storage Servers

    Supermicro introduced new SuperStorage Server solutions that minimize power consumption and reduce cooling requirements by spinning down or powering off idle drives. The solutions include configurations based on the vendor’s compact low-power Intel Atom C2750-based server board for cold storage and 3.5-inch hard drives for maximum capacity. The server can also be configured with Intel Xeon E3-1200 v3 and E5-1600/2600 v2 processors for more data-intensive workloads.

    Using the Intel Atom C2000 series of processors provides a way to drive power efficiencies in the hyperscale environment of cloud data centers. Intel has positioned the 22nm Atom chip for areas such as cold storage and entry-level networking. Many hardware vendors have utilized Intel Atom processors, a family originally aimed at low-power mobile devices, to help deliver high density and efficiency in a small footprint.

    Cold storage is low-performance storage infrastructure designed for infrequently accessed data. Facebook, for example, has built a separate small cold-storage data center on its Prineville, Oregon, campus just to store old content users look at rarely.

    “Supermicro’s new 1U storage server is exactly the best solution for today’s tiered storage architectures that need rapid access to data with minimum power consumption and heat dissipation,” said Charles Liang, president and CEO of Supermicro. “Our new system is designed to save energy while providing maximum accessibility to infrequently accessed data. With our compact Atom C2000 serverboard and Xeon UP configurations, we have achieved the perfect balance between performance, capacity and power savings for a wide range of applications while maintaining a highly scalable, cost-effective storage solution.”

    The high-density storage servers come in a standard 1U 32-inch-deep chassis and can be configured for a variety of targeted workloads, from cloud-based cold storage with drive spin-down to big data and object storage platforms, Hadoop and analytics. While highly configurable, the systems are sold only as completely assembled units due to their complexity.
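    The spin-down technique itself is standard: on Linux, an idle timeout can be set per drive with the common hdparm utility. The snippet below is a generic sketch of that mechanism, not Supermicro's management software, and the device path is an assumption for the example.

        # spin_down_idle.py -- generic illustration of drive spin-down
        import subprocess

        DEVICE = "/dev/sdb"  # a cold-storage data drive (assumed; not the OS disk)

        # hdparm -S 241 asks the drive to spin down after 30 minutes idle
        # (values 241-251 encode (n - 240) * 30 minutes). Requires root.
        subprocess.run(["hdparm", "-S", "241", DEVICE], check=True)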

    Each server can support up to 48TB or 72TB across 12 hot-swappable bays (twelve 4TB or 6TB drives, respectively). The A1SA7-2750 contains one eight-core Intel Atom and redundant 400W power supplies, while the X9SRH-TPF has one Intel Xeon E5-1600/2600 v2 series processor with six to 12 cores and redundant 600W power supplies.

    10:48p
    Salesforce and Microsoft Partner to Integrate Products

    Microsoft and Salesforce are integrating Salesforce’s highly successful Customer Relationship Management Software-as-a-Service offering with Microsoft’s Windows operating systems for PCs and mobile devices and its Office 365 suite of SaaS applications.

    Microsoft CEO Satya Nadella and Salesforce CEO Marc Benioff announced the partnership on a call with press and analysts Thursday. The expanded partnership between the companies – which have been and will continue to be competitors in some areas – also includes use by some Salesforce development teams of Microsoft’s Azure cloud offerings.

    The partners expect to start previewing Salesforce1 for Windows and Windows Phone 8.1 in the fall of 2014, with general availability slated for 2015. A timeline for interoperability between Salesforce and Office 365 was not provided.

    The companies did not disclose terms of the agreement.

    Both firms focus on core strengths

    The partnership is about combining Microsoft’s core strategy, focused on Office 365 and Windows, with the core strategy of Salesforce, which revolves around its CRM applications, to create more value for users, Benioff said.

    “The reason that these relationships work is because Microsoft’s core strategy is Windows and Office,” he said. “That’s where the revenue comes from.” The same is true for Salesforce and its CRM products.

    “We both want to grow our revenues, so we know we need to be investing in our core and our strategies.”

    Nadella said that going forward such partnerships would be crucial for Microsoft, which is focused on building a strong platform play, suggesting that more deals along similar lines were to be expected in the future.

    “We want to be a broad platform provider in this mobile-first, cloud-first world,” he said. “I want to approach partnerships that really add value to the entire industry.”

    While there will continue to be areas where the two companies will compete, Microsoft’s recently appointed CEO said, the market calls for providers with broad partnership bases and a platform approach.

    One of the biggest areas the companies compete in is Platform-as-a-Service. In addition to CRM, Salesforce has a very popular PaaS offering called Heroku, which competes with Microsoft’s Azure PaaS.

    Azure just another tool in the bag

    More use of Azure by Salesforce developers simply meant more variety in the way the company builds its services, Benioff said. Heroku, for example, is built on Amazon Web Services’ cloud infrastructure, while developers of Salesforce’s core CRM products deploy on dedicated infrastructure in the company’s data centers.

    The partnership announced Thursday is an opportunity to make more services from Azure available as appropriate.

    But Salesforce customers should not have to worry about the nuts and bolts of the infrastructure their CRM service runs on. While the company makes its own infrastructure choices transparent to its customers, “in no case do the customers have to be aware of what’s underneath,” Benioff said.

    All they need to know is whether they are having a great experience using Office 365 or Salesforce, he explained.

