Data Center Knowledge | News and analysis for the data center industry
 

Thursday, January 24th, 2013

    Time Event
    12:30p
    Does the Supply Chain Figure Into Microsoft-Dell Chatter?
    Serious Server Density: Microsoft buys a lot of servers, as seen in these packed racks in an IT-PAC at the company’s data center in Quincy, Washington. (Photo: Microsoft Corp.)

    Why is Microsoft interested in an ownership stake in Dell? Bloomberg reported this week that Dell is discussing a leveraged buyout by private equity firm Silver Lake, in which Microsoft may contribute about $2 billion toward the deal. Early analysis has focused on pressures in consumer products like PCs and tablets, or on the enterprise IT market for servers and hypervisors. Other coverage has looked at how a Microsoft investment in Dell would excite Microsoft’s channel partners but might strain relations with OEMs.

    So here’s another element to add to the mix: Dell has historically played a meaningful role in the supply chain for Microsoft’s cloud computing infrastructure, supplying many of the servers that fill the company’s global network of data centers – as well as some of its modular data centers.

    Dell’s Data Center Solutions (DCS) unit, which specializes in large shipments of custom servers for cloud customers, has built many of the servers powering the Windows Azure platform and Bing Maps, to name just two areas where the companies have publicly acknowledged their collaboration. Microsoft hasn’t provided a look inside its newer data centers in several years, so it’s hard to know if Dell continues to provide its servers.

    It remains to be seen whether a Dell buyout materializes, and if so, whether Microsoft participates as an investor. But it’s worth noting that Dell DCS is a significant supplier to the largest data center providers, having shipped more than 1 million servers in its first five years. Here’s a video overview of that milestone:

    1:00p
    Super Micro Shares Surge on Positive Earnings

    One of Super Micro’s FatTwin servers, which the company says can operate at ambient temperatures of up to 47 degrees C (116 degrees F). Strong FatTwin sales boosted SMCI’s earnings. (Photo: Super Micro)

    Shares of server vendor Super Micro Computer (SMCI) soared nearly 19 percent Wednesday in trading on the NASDAQ stock market after the company reported earnings that exceeded the expectations of the analyst community. Super Micro reported revenue in the quarter of $291.5 million, well above the $280 million consensus among securities analysts. The company also issued strong guidance for the next quarter.

    On the strength of that report, investors bought up Super Micro shares, which gained $1.96 to close at $12.49 a share, an increase of 18.6 percent. Volume was 1.57 million shares, well above the average volume of 207,000 shares traded per session.

    “Net sales were a record high this quarter as we achieved 16.6% growth over last year, further demonstrating our ability to grow market share even during uncertain economic times,” CEO Charles Liang said in a statement. “Our rackmount servers, especially FatTwin solutions, and our storage products were key drivers of our revenue this quarter. Profitability improved due to slightly better margins and better operating expense leverage.”

    Liang said sales had been particularly strong for FatTwin servers optimized for Hadoop-based cloud applications. Super Micro says the FatTwin can operate at ambient temperatures of up to 47 degrees C (116 degrees F), allowing users to save energy and cut their total cost of ownership.

    3:16p
    AppDynamics Snags $50 Million Funding

    Application performance management provider AppDynamics announced a $50 million Series D growth financing round. The financing will support expansion into the global enterprise market, increase research and development investment, and accelerate hiring to meet the high demand for its platform. The round was led by new investor Institutional Venture Partners (IVP), and also included AppDynamics’ current investors: Greylock Partners, Kleiner Perkins Caufield & Byers, and Lightspeed Venture Partners.

    “AppDynamics is poised to become the leader in a multi-billion dollar market with its unique technology and its customer-friendly sales model,” said Steve Harrick, General Partner at IVP. “IVP is a believer with conviction in AppDynamics’ vision, market opportunity, and leadership team. The rapid growth and customer momentum at AppDynamics points to a bright future for AppDynamics. Supporting a company that solves practical, real-world problems in large markets fits squarely with IVP’s investment strategy and we believe AppDynamics is on a trajectory to become the next great enterprise software company.”

    The latest funding round follows two years of explosive growth, with sales compounding at a 300 percent annual rate. The San Francisco company was founded in 2008 and recently strengthened its executive leadership team. AppDynamics currently monitors over 51 billion transactions per day across the customers of its Pro product.

    “Finding the root cause of an application performance problem in a complex, distributed app isn’t like trying to find a needle in a haystack – rather, it’s like trying to find a needle in a stack of needles,” said Ryan Aylward, CTO at Glassdoor.com. “AppDynamics actually makes this possible. It’s an easy-to-use solution that really helps my team ensure that our web site is always operating at peak performance.”

    “Businesses all over the world are adopting cloud and agile architectures to run their mission-critical applications and AppDynamics is becoming the leading management platform to help them through that transition,” said Jyoti Bansal, CEO of AppDynamics. “In 2012 we saw rising demand for the AppDynamics solution, particularly from organizations looking for the next generation of an Application Performance Management platform that can effectively scale with their growth.”

    3:30p
    Protecting your Business with Juniper Security

    The modern infrastructure has evolved to support more users, more applications and a distributed data center. Cloud computing and virtualization have shifted how IT administrators, managers and even C-level executives look at and deploy security. Simply from a user’s perspective, there are more devices accessing corporate data. Furthermore, the current “data-on-demand” environment is forcing security professionals to look at new avenues to efficiently secure their environments.

    This is where the Protecting your Business with Juniper Security webinar can really help. Join Chris Hoff, Chief Security Architect at Juniper, as he dives into the many questions that IT security professionals have around a heavily accessed, distributed and virtualized environment. The idea here is to open a discussion around a new type of data center infrastructure and how security will play a vital role.

    New initiatives are being planned every day by CIOs, CTOs and CSOs. These initiatives include:
    • Employee productivity and satisfaction
    • Advanced levels of infrastructure agility
    • Creating new cost and optimization efficiencies
    • BYOD and IT consumerization
    • Deploying application, desktop and infrastructure virtualization
    • Data center consolidation and modernization
    • New and agile deployment options
    • Scalability and operational simplicity

    As more devices connect to the network, administrators must figure out a way to secure these endpoints while still delivering a powerful end-user experience. The Protecting your Business with Juniper Security webinar analyzes the best ways to secure a now much more diverse infrastructure. This webinar covers everything from BYOD to data center virtualization. With any environment, security is always going to be an important consideration. The webinar not only outlines Juniper’s unified security approach – it also shows how to simplify the deployment and security management process. Click here to view this network security webinar.

    3:45p
    VMware Invests $30 Million in Puppet Labs

    IT automation software maker Puppet Labs has received a $30 million investment from VMware, the company said this week. The funding is part of a broader strategic partnership under which the two companies will work to integrate their products more tightly.

    It makes sense for VMware to invest, as Puppet Labs makes a compatible management tool used by much of VMware’s user base. Puppet Labs will use the funding to accelerate product development, increase market adoption in new geographies, and work jointly with VMware to market and sell its solutions. VMware continues to make moves in the software-defined data center world, and Puppet Labs is one example of the rise of DevOps – the tight collaboration and integration between software developers and IT operations professionals, a relatively new approach to delivering software.

    “Puppet Labs is at the forefront of IT automation, and is a catalyst in the DevOps movement – accelerating service delivery and business agility” said Ramin Sayar, Vice President and General Manager, Virtualization & Cloud Management, VMware. “This strategic investment and partnership will further accelerate the software-defined data center, and will allow a more extensive automation and orchestration solution across infrastructure and application elements for VMware-based private and public clouds, physical infrastructures, OpenStack and Amazon Web Services.”

    Puppet Labs has raised money before, most recently an $8.5 million Series C at the tail end of 2011. Total funding in the company now stands at $45.5 million.

    There’s already considerable interoperability between the VMware and Puppet Labs products. Puppet Enterprise, Puppet Labs’ flagship commercial product, enables system administrators to automatically and dynamically provision and manage virtual machines, taking advantage of the VMware vSphere API. Customers use the integration of VMware vFabric Application Director with Puppet Enterprise to dynamically provision private cloud services, leveraging the more than 750 ready-to-run application configuration modules available on Puppet Forge, Puppet Labs’ online content marketplace. As part of the strategic partnership, VMware and Puppet Labs will collaborate on additional product integrations, including VMware vCloud Automation Center™, VMware vCenter Operations Manager, and VMware vCenter Configuration Manager.
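    The core idea behind tools like Puppet – declare the desired end state of a system, and let the tool compute and apply only the changes needed to get there – can be sketched in a few lines of Python. This is a toy illustration of declarative, idempotent convergence, not Puppet's actual implementation, and the resource names are hypothetical:

```python
# Toy sketch of declarative configuration management: compare a
# desired state against the current state and emit only the actions
# needed to converge them. Running it again after convergence is a
# no-op (idempotence), which is what makes such tools safe to re-run.

def converge(desired, current):
    """Return the list of actions needed to reach `desired` from `current`."""
    actions = []
    for resource, state in desired.items():
        if current.get(resource) != state:
            actions.append(f"set {resource} -> {state}")
    return actions

# Hypothetical resources: two services and a package that must be absent.
desired = {"ntp": "running", "httpd": "running", "telnet": "absent"}
current = {"ntp": "running", "httpd": "stopped"}

print(converge(desired, current))
```

    Only the drifted resources generate actions; `ntp` is already in its desired state and is left alone. Puppet applies the same model to packages, services, files and users across thousands of machines.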

     

    4:15p
    Video: Servers Stacked Like Books on a Shelf

    One of the interesting facets of the open hardware movement is the potential to bring different approaches to common problems. We saw several different approaches to server chassis design at last week’s Open Compute Summit. An example can be seen in Dell’s latest generation of Open Compute hardware. In this video, Dell Solutions Architect Rafael Zamora demonstrates Dell’s C8000 Open Compute chassis (originally codenamed Zeus). Either 19 inches or 21 inches wide (the Open Rack standard), it holds computational units and power units positioned vertically. The units are slotted in sleds and line up like “books on a shelf.” This enables either a single-wide or double-wide configuration and the ability to slide sleds out and “hot swap” drives, thus maintaining systems without powering down the entire server. Dell, which has a long relationship with Facebook, has been collaborating on the Open Compute Project and working with Facebook since 2008. This video runs 5 minutes, 30 seconds.

    For a comparison of other Open Compute designs, see our coverage of Facebook’s three-wide server design, and the rackmount designs from AMD and others.

    For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.

    4:30p
    Joyent Offers Hadoop Solution for Big Data Challenges

    Joyent announced a new Apache Hadoop-based solution, built using the Hortonworks Data Platform (HDP), that allows companies to run enterprise-class Hadoop on the high-performance Joyent Cloud.

    As a new entrant into the big data landscape, Joyent is addressing industry demand to reduce costs and decrease query response times. Software product development services company Altoros Systems said that Hadoop clusters on the Joyent Cloud produced nearly 3x faster disk I/O versus identically sized infrastructure. Through its operating system virtualization and CPU bursting technology, Joyent says it is able to extract better response times and deliver results to data scientists and analysts faster.

    “We are pioneering a new era of big data and our Hadoop offering is just the start of our 2013 agenda,” said Jason Hoffman, CTO and Founder, Joyent. “We intend to continue bringing our technical expertise to the market and reverse the typical understanding of big data implementations – that they’re expensive and hard to use. We’re committed to meeting the insatiable demand for faster analytics and data retrieval, changing how computing functions for the enterprise.”

    Global telecom Telefonica was an early adopter of Joyent’s Hadoop solution. “Joyent technology powers our service – Instant Servers – and is providing Telefonica Digital an advantage to deliver the fastest performing Hadoop big data solution in our marketplace,” said Carlos Morales Paulin, Global Managing Director, M2M, Cloud Computing and Apps, Telefonica. “Joyent has inversed the big data cost equation while at the same time innovating how computing on large distributed and unstructured data can be accomplished for large enterprises. Our customers can now get insight from their data quicker than ever before without the massive cost that’s typically associated with high-performance big data solutions.”

    The Apache Hadoop solution is available immediately for Joyent customers.

    7:38p
    Host.net Acquired by Canadian Private Equity NOVACAP

    Canadian private equity firm NOVACAP announced the acquisition of US-based Host.net, a network infrastructure services provider that focuses on colocation, cloud computing, virtualization and storage. The deal signifies continued engagement by private equity firms in the internet infrastructure space, marks NOVACAP’s entrance into the U.S. market, and represents the 100th transaction for DH Capital, which served as exclusive financial advisor to Host.net. Terms of the deal were not disclosed.

    Host.net is based in South Florida and has over 700 customers, ranging from small businesses to large multinationals. Private equity backing will allow Host.net to continue its growth and expansion. “The transaction will allow Host.net to continue to lead the industry, to grow to the next stage by adding more data centers, and to expand their portfolio of services to remain the industry’s benchmark,” said Ted Mocarski, Senior Advisor at NOVACAP, which has $790 million in assets under management and is one of Canada’s leading private equity firms.

    NOVACAP will leverage experience acquired in the Canadian market as it expands its investment strategy to the United States. “It is part of a plan to increase our presence in the United States, and this agreement shows that we are a serious player in the market,” said Pascal Tremblay, President of NOVACAP Technologies. “Our expansion will benefit our portfolio of companies, and will help us find additional opportunities throughout North America and in international markets.”

    “We are delighted to be working with NOVACAP, whose insight and investment will definitely benefit Host.net’s growth strategy,” said Jeffrey Davis, Co-Founder & Chief Executive Officer of Host.net, in the release. “Their experience in the industry will bring focus to the strategic steps needed in order to grow the company.”

    Host.net’s management team will remain in place and will be supported by newly appointed board members. “With this new acquisition, NOVACAP wishes to show its confidence in Mr. Davis’ team,” said Tremblay.

    Host.net was founded in 1996 and is headquartered in Boca Raton, FL. The company operates multiple enterprise-class data centers connected to an extensive fiber-optic backbone delivering Internet, MPLS and layer 2 communications using a wide array of last-mile options. It serves customers in most major metropolitan regions of North America as well as portions of Europe.

    10:01p
    Google’s Site Reliability Team: Ask Them Anything!

    The Google Site Reliability Team is currently taking questions over at Reddit. “We make Google’s websites work. Ask Us Anything!”  Participants include Site Reliability Engineers Kripa Krishnan, Cody Smith, Dave O’Connor and John Collins.

    Among the questions: When was the last time Google’s main page was down? “Home page outages almost never affect all users simultaneously,” Smith writes. “There are many different systems involved in simply connecting users to Google, and most incidents happen outside of our network. We do occasionally have network outages, which are regional, e.g. a few states or countries. We also occasionally introduce language-specific bugs, e.g. garbling CJK. As far as I can recall, the last global outage was back in 2005.”

    They’ll be wrapping up soon, but readers interested in wading through the AMA will find some interesting insights into how Google manages web reliability.

