Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, December 16th, 2014

    1:00p
    Facebook to Deploy TrendPoint Data Center Power Meters

    Facebook is deploying power quality meters by Corona, California-based TrendPoint Systems across its data centers, the vendor announced Tuesday.

    The social network’s data center facilities team has been a TrendPoint customer for about two years, since it first used the vendor’s products to implement branch circuit monitoring across its busway and power distribution infrastructure.

    For Facebook and other operators of massive data center facilities, even incremental improvements in energy efficiency can translate into big savings because of their scale. Data center infrastructure is among the biggest expenses for these companies, so they are constantly optimizing the way they design and operate it.
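
    To illustrate that scale argument, here is a hedged back-of-the-envelope sketch in Python. Every number in it (IT load, PUE values, electricity price) is an illustrative assumption, not a figure from Facebook or TrendPoint.

        # Back-of-the-envelope sketch: why a small efficiency gain matters at scale.
        # All numbers below are assumptions for illustration only.
        IT_LOAD_MW = 30.0          # assumed IT load of one large facility, in megawatts
        PUE_BEFORE = 1.10          # assumed power usage effectiveness before tuning
        PUE_AFTER = 1.08           # assumed PUE after better power monitoring
        PRICE_PER_MWH = 50.0       # assumed electricity price, dollars per MWh
        HOURS_PER_YEAR = 24 * 365

        def facility_power_mw(pue: float) -> float:
            """Total facility power is the IT load multiplied by PUE."""
            return IT_LOAD_MW * pue

        saved_mwh = (facility_power_mw(PUE_BEFORE) - facility_power_mw(PUE_AFTER)) * HOURS_PER_YEAR
        print(f"annual savings per facility: ~${saved_mwh * PRICE_PER_MWH:,.0f}")  # about $262,800 with these assumptions

    Multiply a figure like that across several facilities and a two-point PUE improvement becomes a meaningful line item.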

    The power quality meters, called Ensure Enkapsis, will integrate with TrendPoint’s other products in Facebook data centers as well as with Facebook’s data center infrastructure management (DCIM) software, a solution custom built by CA Technologies.

    The Enkapsis meters provide waveform capture metrics as well as all downstream power and environmental monitoring.

    “TrendPoint Systems will provide us with a granular understanding of power consumption across our distribution equipment – information that will help us improve our IT infrastructure,” Tom Furlong, vice president of infrastructure at Facebook, said in a statement.

    Facebook has three large data centers it owns and operates in the U.S. and one in Sweden. The newest Facebook data center, located in Altoona, Iowa, came online in November.

    Facebook also leases space from wholesale data center providers on the East and West Coasts of the U.S., but it has been gradually moving infrastructure out of those facilities.

    2:00p
    Snappy Ubuntu for Docker Containers Comes to Google’s Cloud

    Canonical, the company behind Ubuntu, one of the most popular Linux distributions, has made snappy Ubuntu Core, a version of the operating system designed for cloud infrastructure and optimized for running Docker containers, available on Google’s cloud.

    Canonical first announced snappy Ubuntu Core earlier this month and made it available on Microsoft’s Azure cloud. Now, developers can use the lightweight OS as an option on Google Compute Engine, the giant’s Infrastructure-as-a-Service offering.

    Docker is a San Francisco company built around technology developed by the eponymous open source project. At the heart of Docker is the application container, essentially a standard way to package an application and declare its infrastructure requirements. Containers make applications easily portable across different kinds of infrastructure, from a developer’s laptop to a virtual machine in a data center or any public cloud.
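
    As a rough illustration of that portability, the short Python sketch below runs a throwaway container using the Docker SDK for Python (the "docker" package, a client library that postdates this article); the image name and command are arbitrary examples.

        import docker  # Docker SDK for Python; assumes a local Docker daemon is running

        client = docker.from_env()  # connect using the standard Docker environment variables

        # The image plus the command describe everything the application needs, so the
        # same container runs unchanged on a laptop, a VM, or a public cloud instance.
        output = client.containers.run("ubuntu:14.04", "echo hello from a container", remove=True)
        print(output.decode().strip())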

    Docker the company designs and sells enterprise-hardened tools for deploying applications that consist of multiple Docker containers across clusters of servers.

    Earlier this month Alex Polvi, CEO and co-founder of CoreOS, which also provides a lightweight Linux distribution for cluster deployments, and which has been a major Docker supporter, questioned Docker’s pursuit of an orchestration-tool business and said its technology had some serious security flaws. CoreOS has proposed its own container standard, which Polvi said addressed problems in Docker.

    Canonical’s snappy Ubuntu Core competes with CoreOS. It is also a lightweight version of Linux and, like CoreOS, it is designed to make updates fast and easy.

    “This is the smallest, safest platform for Docker deployment ever, and with snappy packages, it’s completely extensible to all forms of container or service,” Mark Shuttleworth, Canonical founder, said in a statement.

    The OS is now available on Google Compute Engine, which Canonical said was the “fastest cloud in the industry.”

    Google itself was a pioneer of application containers, which have played a big role in the company’s cloud architecture. Google is a major supporter of Docker and has its own open source orchestration technology for Docker containers called Kubernetes.
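
    For a sense of what an orchestrator exposes, here is a minimal, hedged sketch using the official Kubernetes Python client (a library that arrived after this article was written); it assumes a working kubectl configuration and simply lists running pods.

        from kubernetes import client, config  # assumes the 'kubernetes' package is installed

        config.load_kube_config()   # reuse the credentials already configured for kubectl
        v1 = client.CoreV1Api()

        # Kubernetes groups containers into pods and schedules them across a cluster;
        # listing them is the simplest possible interaction with the API server.
        for pod in v1.list_pod_for_all_namespaces().items:
            print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)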

    Google, Amazon Web Services, and Microsoft Azure also provide commercial container orchestration services on their public clouds.

    2:00p
    Atlantic.net Launches First West Coast Data Center

    Cloud provider Atlantic.Net has opened its first West Coast data center in Santa Clara, California. The new location joins the Florida-based company’s three data centers in the Southeast.

    Atlantic.Net is tapping Telx, which has a facility on the massive Vantage Data Centers campus in Silicon Valley, as its California data center provider, starting with one cage and room to expand.

    The cloud provider recently announced the opening of its first international data center in Toronto and a new center in Dallas, with plans to add more domestic and international locations.

    Atlantic.Net CEO Marty Puranik said the company has done particularly well with developers, and the huge number of developers in Silicon Valley holds a lot of promise. Among the moves it has made recently to further entice developers are the introduction of a 99-cent budget VPS server and support for additional operating systems.

    The company’s branding may help it get the attention of California data center customers, Puranik hopes. “It actually stands out,” he said. “In a way it works out for us.”

    The company is in the process of hiring in the area.

    As for where it will take its cloud next, Puranik said the company is seeing a lot of demand for managed cloud. Offering compliant infrastructure is also a big growth area.

    In terms of customers, it is seeing developers, marketing agencies, and other hosting providers sign up. Resellers have been a solid area of growth.

    Outside of the U.S., the company is seeing good traction with Brazilian customers. “I thought it was because we’re in Florida, and it’s a gateway market, but it turns out it’s been a good market for hosting providers,” said Puranik. “Import taxes and bandwidth means it’s so expensive to host in that country.”

    Atlantic.net started as a dial-up provider, before evolving into colocation and finally cloud. The cloud business has grown nicely. Puranik said that the company is seeing 20 percent annual growth and its cloud user base has grown 60 percent annually.

    Other Vantage customers in Santa Clara include Cloudera and MarkLogic.

    4:30p
    Security is Key: Five Data Center Protection Questions to Keep Your Data Safe

    Joe Sturonas, a 25-year veteran of the commercial software industry, is responsible for product development at Smart Encryption provider PKWARE, including software engineering, documentation, quality assurance and technical support.

    Thanks (or no thanks) to Edward Snowden and Jennifer Lawrence, encryption is cool again. Of course, security architects and CISOs have known encryption was hip since the Clipper chip was dismissed. For years, the challenge was balancing the strength of good encryption with the competing interests of budget and usability.

    A Software-Defined Perspective

    If you could redo your entire business stack, the experts wouldn’t suggest going the new-security-appliance route. At Gartner’s big IT Security and Risk Management event this summer, the preferred emerging security option centered on a software-defined approach. Software-defined has different connotations for different architects, developers, and data center operators, but from a security perspective it means strong security centered on information (read: people) yet flexible across hardware.

    Gartner security pro Neil MacDonald and his colleagues framed the “transformational” software-defined security changes in a release tied to the event.

    Software-defined “is about the capabilities enabled as we decouple and abstract infrastructure elements that were previously tightly coupled in our data centers: servers, storage, networking, security and so on. … Software-defined security doesn’t mean that some dedicated security hardware isn’t still needed – it is. However, like software-defined networking, the value and intelligence moves into software.”

    One Call for a New Security Approach

    Over the last year, we’ve worked with a large national telecommunications provider. Like any enterprise, they’ve spent decades acquiring new tools and hardware, slipping each shiny new object into their stack. After a while they look back at all these layers and wonder how it all got so out of hand.

    Most glaring was how this provider protected information coming in and out of its data center. A few years back, it had shoehorned a proprietary crypto appliance into its on-premises environment, and that appliance was essentially impenetrable when it came to anything inside the data center itself. The problem was today’s business inevitability of IP and customer information creeping out of the data center. The security architects knew it was happening and, at first, felt their crypto appliance could handle whatever they weren’t able to squash in terms of unapproved devices or external storage connections.

    What they realized, however, was that their “magic box” (the security appliance) forced them to serialize every crypto operation, and employees were getting around the resulting slowdown by skipping the security steps. The chief security officer and her team also realized that the proprietary nature of the appliance’s security setup would have pushed them toward an uncomfortable remedy: opening a hole in their firewall so other information sources could get secure access. The security architects were faced with the prospect of revealing their secrets, the biggest no-no for any magician. Security, they determined, could not be an all-or-nothing proposition.

    Questions for Better, Usable Protection

    From those conversations, and based on what we’ve heard from analysts, we developed five key questions that have helped data center customers get the most from their security plans. By no means comprehensive – security is a process, after all – this handful of questions is meant to give you and your team of architects and developers a rounded view.

    What is realistic to use for data protection given existing systems, platforms and languages? Maybe an obvious first step, but you have to start somewhere. Here’s where that pesky budget discussion comes back up. A word of caution: the compact, easy-to-implement “magic box” described above cost the security team much more in add-ons and headaches down the road. Security is not Boolean. Prioritize, classify and protect the most sensitive, valuable data first. When you are hacked (because every organization will be hacked), if the attackers take only public and unclassified data, you have protected the most important information.

    What happens if “Vendor X” is hacked or goes out of business? There is lock-in risk with any software or hardware. With security solutions in particular, do your homework on a vendor’s history and on the benchmarks by which it defends its protection claims. In addition, give heavy consideration to leaving some wiggle room for growth in external data sharing and for changes in internal programming preferences.

    Who in the data center chain of command should see what? This issue emerged with another customer recently, where database administrators watched encrypted data decrypt into credit and debit card numbers all day long. Protection features for both structured and unstructured data – along with the right administrative approvals – guard against insider threats and common errors.

    How will you handle the key management headache? Managing keys is the tough part of encryption. It’s also the part where businesses are most tempted to settle for simplistic, one-stop-shop appliances. Get the background on the types of keys that fit your risk appetite and user needs.

    Do you have cryptographers on staff? If you do, you’re one of the few. Developers are awesome at learning how to craft features to scroll and correlate sales trends. Without a cryptographic background, they may be setting themselves up for a trial by fire in constructing in-house security solutions. To make sure crypto is actually used for information moving across and out of the organization, developers should come at implementation with a focus on the business drivers, treating security as an enabler rather than a hurdle.
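
    One practical consequence of that last question: developers without crypto training are usually better served by a vetted library than by home-grown algorithms. The Python sketch below uses the "cryptography" package’s Fernet recipe purely as an illustration; the sample record and the in-memory key are placeholders, and the library choice is ours, not the author’s.

        from cryptography.fernet import Fernet  # vetted library; handles algorithm choice and authentication

        # Key management remains your problem: in production this key would live in an
        # HSM or key-management service, not in source code or a plain file on disk.
        key = Fernet.generate_key()
        f = Fernet(key)

        record = b"4111 1111 1111 1111"      # a placeholder sensitive field, e.g. a card number
        token = f.encrypt(record)            # authenticated encryption
        assert f.decrypt(token) == record    # round-trips only with the correct key
        print(token)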

    Thinking Security, Freely

    It’s not sleight of hand that pushes a security leader or team into choices around data use, storage or security. But critical thinking can keep enterprises nimble and ready for whatever change is to come in securing data and data centers. Harry Houdini explained his illusions this way: “My brain is the key that sets me free.” With the right questions, you can set your team and business free to move toward a stronger security and encryption process that doesn’t depend on any one magic box.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    8:02p
    Big Switch SDN Fabric Gets Red Hat OpenStack Certification

    Big Switch Networks, one of the hottest startups building software-defined networking solutions for data centers, has gotten its SDN fabric for bare-metal switches certified as compatible with Red Hat’s OpenStack distribution.

    SDN is a way to make data center networks agile by creating virtual (or logical) networks on top of static hardware infrastructure. Because the network is defined in software, configuration changes can be made automatically and quickly, which opens up a lot of automation possibilities.
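
    The automation angle is easiest to see in code. The Python sketch below posts a logical network segment to an SDN controller’s northbound REST API; the controller address, endpoint path, and payload fields are all hypothetical placeholders, not Big Switch’s actual API.

        import requests  # assumes a controller with a REST (northbound) API; everything below is hypothetical

        CONTROLLER = "https://sdn-controller.example.com:8443"   # placeholder address
        TOKEN = "replace-with-a-real-token"                      # auth token obtained out of band

        # Define a logical network segment in software; the controller is responsible
        # for pushing the resulting configuration down to the physical switches.
        segment = {
            "name": "web-tier",
            "vlan": 210,
            "members": ["leaf-01:eth10", "leaf-02:eth10"],
        }

        resp = requests.post(
            f"{CONTROLLER}/api/v1/segments",
            json=segment,
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()
        print("segment created:", resp.json())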

    Web-scale data center operators, such as Google and Facebook, have been using SDN software they write internally to manage networks built on low-cost commodity hardware. Companies like Big Switch are trying to sell more traditional enterprises on the web-scale approach as an alternative to the usual route of buying all-in-one hardware-software bundles from the likes of Cisco and Juniper.

    Even some of the so-called “incumbent” network vendors have been adjusting to the new market, where competition from commodity hardware vendors and SDN startups has been heating up. Juniper announced a white-box data center switch of its own this month, and earlier this year Dell opened two of its switch product lines to non-Dell operating systems – one by Big Switch and the other by Cumulus Networks.

    This week Dell also announced its “open” switches would support OpenStack SDN by a startup called Midokura.

    Cisco’s reaction to the SDN movement has been to support leading SDN standards, but at the same time to focus on its own proprietary SDN technology called Application Centric Infrastructure.

    Support for Multiple Open Source Clouds

    OpenStack is a widely used open source cloud architecture, and Red Hat is one of the better-known OpenStack distribution vendors. Big Switch has an OpenStack SDN partnership with another major distribution provider, Mirantis.

    A company can use the combination of OpenStack and Big Cloud Fabric to create a private or a public cloud, build a big data analytics environment with virtualized compute, or build infrastructure to provide virtual desktops to employees.

    OpenStack isn’t the only cloud architecture Big Switch supports. Big Cloud Fabric has also been certified to work with Citrix CloudPlatform, which is based on another open source cloud project called CloudStack.

    Commenting in a statement on Tuesday’s announcement, Mike Werner, senior director for global technology ecosystems at Red Hat, said, “This collaborative approach of engineering, testing, and certification between our two companies allows our mutual customers to confidently deploy next-generation private or public clouds.”

    8:46p
    Rackspace Building OpenPOWER-Based Open Compute Server

    Rackspace is working with three open communities to form a mega-open platform. Two of those communities are widely known: OpenStack for cloud software and the Open Compute Project for data center hardware. Its part in the third was formally announced today.

    Rackspace is now an official member of the IBM-led OpenPOWER Foundation, which takes an open approach to server firmware. The effort is about innovating at the firmware level, one of the big holdouts for openness. Rackspace has been involved with OpenPOWER behind the scenes for more than 18 months.

    The company announced it has joined the consortium and that it is building an OpenPOWER-based Open Compute server platform that will run OpenStack services. It will engage with partners in the community to build the platform and contribute it and its open source POWER server firmware set to OCP.

    Firmware can be a tricky area, as developers don’t normally worry about firmware or about managing memory requirements. “These are parts of the system many developers are not even aware of,” said Aaron Sullivan, director and principal engineer at Rackspace. “Firmware has different programming models. The OpenPOWER needs the community involved, needs developers on board to achieve real performance gains.”

    Firmware: Last Element to Open Up

    Rackspace believes it’s important to rally the wider open source community to effect real progress. “Server chips, firmware, and buses, this is the last batch [to go the open source route],” Sullivan said.

    “There were some features of [IBM’s] POWER platform that initially intrigued us for its efficiency gains and existing applications. But there were all kinds of aspects to it that needed to be changed – firmware and management stacks grew up differently. We grew up in a world of Linux and HP. We started giving IBM feedback on that and they were responsive, making changes.”

    IBM saw a lot of similar customer response and decided to go the open route with the POWER architecture. The consortium was launched in Summer 2013 with IBM and Google as two prominent members.

    The aim is to build advanced server, networking, storage, and GPU technology on the POWER server platform. The consortium makes POWER IP licensable to others and makes POWER hardware and software available to open development.

    Project’s Roots in Bare Metal Cloud

    It wasn’t until the development of Rackspace’s bare-metal cloud service OnMetal that the company really got its hands into the firmware.

    “With OnMetal, we had to get into system firmware for the first time in a serious way,” said Sullivan. “We found that doing it in the open community is a lot easier, but it was still hard. We couldn’t release those aspects we did outside of the development – someone can’t take what we did because they don’t have the right firmware, and then there’s NDAs and firewalls between us and the engineers and all that stuff. But those changes are starting to happen in the system. You really need that stack open all the way down so [the community] can get involved.”

    OpenPOWER now has 80 member organizations, including Google, IBM, Canonical, Nvidia, Samsung, and Mellanox. It has also partnered with the Linux Foundation. Rackspace expects its involvement to prompt more community participation.

    Open technology is driving the cloud, and OpenPOWER is driving openness in one of the last holdouts: server firmware. “We look forward to more and more progress in the open space,” said Sullivan.

    9:30p
    Cloud Stocks New Relic and Hortonworks Make Strong Market Debut


    This article originally appeared at The WHIR

    Two companies in the cloud and big data industry, New Relic and Hortonworks, launched initial public offerings last week and were well received by traders.

    New Relic, which debuted Thursday on the New York Stock Exchange under the ticker “NEWR”, provides cloud application performance monitoring. And Hortonworks, a provider of software and services around the open-source Hadoop software for big data, started trading on the NASDAQ on Friday as “HDP”.

    According to the Wall Street Journal’s MoneyBeat blog, some worried that the IPOs were priced below what the companies’ privately raised capital suggested, leading many to assume their valuations might have been inflated as part of a collective delusion about the worth of companies within a tech bubble. The strong response from investors allayed many of those concerns.

    New Relic priced at $23 a share and closed its first day of trading at $33.99 – a 48 percent increase.

    New Relic could be in a good place to take advantage of the growth in big data. Yet, as big data is becoming big business, larger companies have been taking notice. For instance, IBM and Apple have recently been collaborating on big data solutions, and Cisco has unveiled major plans around big data within the data center as well as the Internet of Things.

    Hortonworks was initially offered at $16 per share, and closed the day at $26.38.

    Over the past few years, enterprises have been given more options for using Hortonworks Hadoop as a cloud service. It has been available from Rackspace for more than a year. And in October, the Hortonworks Data Platform became Microsoft Azure certified, with interoperability with Microsoft Azure HDInsight and the Microsoft Analytics Platform System for deploying Hadoop in the cloud.

    Earlier this year, Hortonworks acquired XA Secure, a developer of security and governance tools for big data platforms including Hadoop. It did this in an effort to bolster the security of Hadoop deployments as enterprise adoption increases.

    Another cloud tech IPO from last week was Workiva, which provides a cloud platform for enterprises to collect, manage, report and analyze business data in real time. It started trading on the NYSE on Friday at $14 and closed the day 30 cents lower.

    While all three companies may have great revenue figures to go along with their IPOs, according to SmallCap Network contributor John Udovich, they also all have big losses. That means the stocks may be volatile, he said, but not something investors should avoid altogether.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/cloud-stocks-new-relic-hortonworks-make-strong-market-debut

    10:00p
    Private Equity Firm and Pension Fund Buy Riverbed for $3.6B

    After rejecting a $3 billion takeover bid from hedge fund group Elliott Management at the start of the year, Riverbed Technology closed out the year by announcing that it has agreed to be acquired by private equity investment firm Thoma Bravo and the Ontario Teachers’ Pension Plan for approximately $3.6 billion.

    After a complete strategic review of options and continued pressure from Paul Singer’s Elliott Management to sell, the Riverbed board unanimously decided that partnering with Thoma Bravo was the right choice for the company, according to a release. Chicago-based Thoma Bravo has investments in many companies complementary to Riverbed and its portfolio, as well as one, Blue Coat Systems, that is a competitor to Riverbed.

    A little over a month ago Riverbed offloaded its SteelStore backup appliance line to NetApp for $80 million. After entertaining several acquisition offers in the past year, Riverbed can close this chapter in company history and get Jesse Cohn, Elliott’s activist portfolio manager who has been pressuring the company, off its back. Elliott has been publicly involved in many technology deals lately, some, like Riverbed and Dell, taking companies private, and others leading to acquisitions by larger companies.

    In a statement on Riverbed acquisition, Jerry Kennelly, the company’s chairman and CEO, said, “Thoma Bravo is a highly regarded private equity firm with deep experience in the technology industry and a 30-year track record of helping companies like ours flourish. With the benefit of Thoma Bravo’s knowledge and insights, combined with the added flexibility we will have as a private company, Riverbed will be able to focus on reaching the next level of growth, which will benefit our employees, customers, and partners.”

    This is the largest acquisition ever for the tech-focused Thoma Bravo. The $3.6 billion offer equates to $21 per share, a premium over Riverbed’s current share price of about $20. A week ago Thoma Bravo agreed to sell Tripwire to Belden for $710 million, and just last Friday it completed its $2.4 billion acquisition of Compuware.

    “Riverbed’s strong product portfolio provides unmatched optimization, visibility and control across the hybrid enterprise, which has positioned the company extremely well in a rapidly-changing landscape,” Orlando Bravo, a managing partner at Thoma Bravo, said in a statement.

    Hardware and tools vendor Riverbed is a global company serving many Fortune 100 and Forbes Global 100 clients. The company began as a WAN optimization appliance provider and grew into a leader in the application performance infrastructure space. Its SteelHead WAN optimization appliance is still a flagship offering, but the company has built a platform for optimizing scalable delivery of applications from private or public clouds to remote and mobile workers everywhere. In recent years it has also broadened its portfolio by acquiring Mazu Networks and CACE Technologies for network optimization, as well as OPNET and others.

    The Riverbed acquisition is expected to close in the first half of 2015, although it faces regulatory approvals and antitrust review in several countries. Riverbed says it will keep Jerry Kennelly, who also co-founded the company, as CEO.

