Data Center Knowledge | News and analysis for the data center industry

Tuesday, February 5th, 2013

1:37p
Morphlabs Offers Service Providers a Turnkey Public Cloud

Morphlabs is targeting service providers’ public cloud needs, believing the road to prosperity for them runs through achieving higher densities on its all-in-one solution. The company, a provider of fully integrated Infrastructure as a Service, is courting hosting providers with a modular OpenStack public cloud platform called mCloud Osmium, which it says is competitive with Amazon Web Services on both cost and performance, allowing them to compete in the new cloud world.
The company also announced that it has enhanced its Private Cloud offering, mCloud Helix Private Cloud, for increased capacity and resiliency. The compute power is up 2.5x over the previous mCloud Helix in the same footprint, and increased options for modularity have been added.
Higher Densities = Higher Margins
mCloud Osmium is a multi-tenant, scalable public cloud solution with built-in billing software. Through a highly configurable structure, service providers can begin building public cloud infrastructures in 100 vCPU and 15 terabyte (TB) blocks that scale as needed. mCloud Osmium eliminates complex software licensing: service providers simply subscribe to the software at $10 per vCPU per month and $100 per TB per month and can immediately begin offering OpenStack-driven public cloud solutions.
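As a back-of-the-envelope illustration of that pricing model, the sketch below computes the monthly software subscription for a given number of blocks. The per-unit prices and block sizes come from the article; the block counts are arbitrary examples.

```python
# Published pricing: $10 per vCPU per month, $100 per TB per month,
# purchased in 100 vCPU / 15 TB blocks (figures from the article).
VCPU_PRICE = 10        # USD per vCPU per month
TB_PRICE = 100         # USD per TB per month
BLOCK_VCPU, BLOCK_TB = 100, 15

def monthly_subscription(blocks):
    """Monthly software subscription cost for a given number of blocks."""
    vcpus = blocks * BLOCK_VCPU
    terabytes = blocks * BLOCK_TB
    return vcpus * VCPU_PRICE + terabytes * TB_PRICE

for blocks in (1, 2, 5):
    print(blocks, "block(s):", monthly_subscription(blocks), "USD/month")
```

A single block works out to $2,500 per month in subscription fees, before hardware, bandwidth and staffing are factored in.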
mCloud Osmium is a fully integrated solution that Morphlabs says is competitive with AWS on both cost and performance. The company believes that its solution drives much higher densities and therefore offers a very compelling story in total cost of ownership (TCO).
“We can drive better price and performance to compete effectively with other solutions,” said Yoram Heller, Vice President of Corporate Development for Morphlabs. “It allows offering virtual machines that are priced competitively – a third of the cost – and outperform AWS.”
The company claims a return on investment (ROI) with a 50 to 60 percent margin. “Our goal is to enable the long tail – a low barrier to entry and transparent pricing,” said Heller.
Morphlabs says it’s been seeing a broadening customer set. Historically, the majority of its customers have been enterprises looking to reduce TCO, but it has been gaining traction among service providers for the private cloud integrated solution (mCloud Helix Private Cloud). Service provider demand for the capability to launch public cloud was the genesis of the new offering. mCloud Osmium allows service providers to get to market with a public cloud offering quickly, without the research and development or large capital expenditures.
“We had customers who were asking us several things,” said Heller. “How do you make your environment more resilient? How do you add just more compute or more storage? Can I scale beyond 500vCPU? The answer is the new product geared towards service providers and public cloud – you can scale it in small chunks, and we’ve made it cost competitive compared to AWS.”
Payment Gateway Included
The Stripe payment gateway is integrated into the platform to make it easier to manage the creation and billing of the cloud offering out of the gate. “Stripe allows a provider to set different plans and subscriptions – you can do it all here, don’t have to worry about integrating into anything,” said Heller. “Our belief is that people are going to use it if it’s extremely simple to turn on.”
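Heller describes the integration as already wired into the platform; for readers unfamiliar with how Stripe subscriptions work, the minimal sketch below uses Stripe’s standard Python bindings to define a recurring plan and subscribe a customer to it. The API key, plan ID, amounts and card token are placeholders, and this is a generic illustration rather than Morphlabs’ actual integration.

```python
# Minimal sketch of a recurring plan and subscription with Stripe's
# Python bindings. All identifiers and amounts below are illustrative.
import stripe

stripe.api_key = "sk_test_YOUR_KEY"   # placeholder test-mode secret key

# Define a recurring plan, e.g. a small public cloud VM tier.
plan = stripe.Plan.create(
    id="cloud-vm-small",
    name="Small Cloud VM",
    amount=3000,        # 30.00 USD per month, expressed in cents
    currency="usd",
    interval="month",
)

# Subscribe a new tenant to that plan using a card token produced by
# the provider's checkout page (Stripe.js would normally supply this).
customer = stripe.Customer.create(
    email="tenant@example.com",
    card="tok_visa",    # placeholder card token
    plan=plan.id,
)
print(customer.id, "subscribed to plan", plan.id)
```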
The company also provides a calculator that shows the return on investment based on what the provider decides to charge per VM.
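In the same spirit as that calculator, here is a rough, hypothetical margin calculation a provider might run. It is not Morphlabs’ calculator; the subscription rate comes from the article, while the VM price, vCPU count and other per-VM costs are assumptions to be replaced with a provider’s own numbers.

```python
# Rough per-VM margin sketch. VCPU_COST comes from the published
# subscription pricing; everything else is an assumed example input.
VCPU_COST = 10.0    # USD per vCPU per month (software subscription)

def provider_margin(price_per_vm, vcpus_per_vm, other_costs_per_vm=0.0):
    """Margin percentage for one VM at a chosen monthly price."""
    cost = vcpus_per_vm * VCPU_COST + other_costs_per_vm
    return (price_per_vm - cost) / price_per_vm * 100

# Example: a 2-vCPU VM sold at $50/month with $5/month of other costs
# lands at a 50 percent margin, in line with the range Heller cites.
print(round(provider_margin(50.0, 2, 5.0), 1), "% margin")
```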
mCloud Helix is now at version 2.0, with compute power up 2.5x over the previous mCloud Helix in the same footprint and more options for modularity.
Available either on-premises or hosted with a service provider, mCloud Helix 2.0 features larger internal storage and compute – moving from 3TB to 4TB and from 80 vCPU to 200 vCPU, respectively – while delivering both storage and compute expansion capabilities to modularly grow the infrastructure as needed. Scalability of mCloud Helix 2.0 has also increased greatly, allowing for private cloud deployments of up to 2,000 vCPU.
1:40p
Piston Cloud Gets $8 Million to Accelerate OpenStack Adoption

Piston Cloud Computing has secured $8 million in Series B funding from Cisco and other partners to fuel its product development in the OpenStack enterprise cloud marketplace. Cisco Systems, Data Collective and Swisscom Ventures as well as Divergent Ventures, Hummer Winblad and True Ventures joined together as principal investors.
This is the second round of financing for Piston Cloud, which gained $4.5 million in a first round of financing in July 2011.
“Our investors’ confidence in Piston Cloud validates our strategy and the pioneering work we have done over the last two plus years in the OpenStack community,” said Jim Morrisroe, CEO, Piston Cloud. “This new investment will enable us to build on this foundation and to accelerate our growth as we work to enhance our products, grow our customer base and establish new partnerships.”
Founded in early 2011 by technical team leads from NASA and Rackspace and based in San Francisco, Piston Cloud is built around OpenStack, a scalable open source cloud framework. In its brief history, Piston Cloud has reached quite a few major milestones, including launching the first commercial OpenStack distribution, Piston Enterprise OpenStack, which is software for building, scaling and managing a private Infrastructure-as-a-Service (IaaS) cloud on bare metal.
Piston Cloud also introduced a Virtual Desktop Infrastructure (VDI) solution built on OpenStack, via an exclusive licensing agreement with Gridcentric.
“Working with Piston Cloud from day one, we have been thrilled with the company’s growth and traction in the market. We believe the company has the talent, technology and vision to deliver next-generation cloud to the enterprise and beyond,” noted Kevin Ober, Managing Director, Divergent Ventures.
The OpenStack market continues to heat up, with two significant announcements in January. Mirantis, an OpenStack cloud integrator, announced a $10 million round of funding and Rackspace Hosting announced partnerships with leading hardware and software providers to create three new Private Cloud Open Reference Architectures.
For more news on cloud computing, bookmark our Cloud Computing Channel.
2:30p
Equinix Monitors Global Infrastructure with ScienceLogic

The interior of an Equinix data center in Silicon Valley. (Photo: Equinix)
Equinix is monitoring its worldwide footprint through the help of ScienceLogic. The global data center behemoth is leveraging ScienceLogic’s unified monitoring platform to increase productivity and improve IT operations.
ScienceLogic’s global monitoring and alerts provide immediate visibility to the global technical team and integrate seamlessly with the global ServiceNow ticketing system for end-to-end service management. “The openness of the ScienceLogic platform allowed us to use it where we needed it most, as a global IT infrastructure monitoring solution,” said Equinix CIO Brian Lillie. “I am very happy with the ScienceLogic solution.”
“We had several key requirements for an IT infrastructure management platform, including single pane of glass monitoring of our information assets, visibility into our infrastructure that is tied into alerting and ticket management, and of course a system that was easy to deploy and easy to use,” said Steve Upp, Director of Server Operations at Equinix. “We previously had monitoring from different solutions that never completely met all of our needs. You can’t operate as a knowledge-aware IT organization without knowing what’s going on throughout your global IT infrastructure. The ScienceLogic platform does all the heavy lifting and really works out-of-the-box, allowing us to be more proactive and service-oriented.”
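The alert-to-ticket handoff Equinix describes is handled inside ScienceLogic’s product, but for readers curious what raising a ticket in ServiceNow programmatically looks like, here is a generic, hypothetical sketch against ServiceNow’s REST Table API. The instance URL, credentials and field values are placeholders, and this is not ScienceLogic’s actual integration.

```python
# Generic illustration: create a ServiceNow incident from a monitoring
# alert via the REST Table API. Instance, credentials and field values
# are placeholders; real integrations map far more context than this.
import requests

INSTANCE = "https://example.service-now.com"   # placeholder instance
AUTH = ("api_user", "api_password")            # placeholder credentials

def open_incident(short_description, description):
    """Post a new incident record and return its ticket number."""
    response = requests.post(
        f"{INSTANCE}/api/now/table/incident",
        auth=AUTH,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json={"short_description": short_description, "description": description},
    )
    response.raise_for_status()
    return response.json()["result"]["number"]

print(open_incident("Device unreachable",
                    "Core switch in SV5 stopped responding to polling."))
```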
Multiple Benefits for Equinix, Customers
David Link, co-founder and chairman of ScienceLogic, said this was a win for everyone along the value chain. Equinix consolidates operations alignment with globally distributed infrastructure, streamlining management and ongoing support for service delivery. Business users can use custom dashboards related to their job function, and Equinix’s end customers benefit from strong service delivery on customer-facing portals and applications.
“Equinix is a perfect example of a global company getting real value from a globally consistent monitoring solution,” said Link. “Many of our customers came to us because the multitude of monitoring tools they were using just weren’t working. We founded ScienceLogic for exactly that reason. Equinix is now in a position to offer their customers more value and enhanced services because they can proactively monitor and manage their systems with the ScienceLogic Smart IT platform.”
ScienceLogic continues to win customers across a variety of businesses and industries. This is a big win in the service provider segment, where the company continues to gain traction.
“We have a really elegant solution for service providers, where multi-tenancy support shines in our product and many of our Service Provider customers use our product to lower their operations costs and build new service offerings that they sell to their customers,” said Link.
3:00p
Get Hands-On Practice With GNS3

Keith Barker is a trainer and consultant with more than 27 years of IT experience. He has been named a Cisco Designated VIP and is the author of numerous Cisco Press books and articles. He is also the host of a comprehensive online GNS3 video training series through CBT Nuggets.

KEITH BARKER
CBT Nuggets
To become an expert in Cisco technologies or products, you need both study and hands-on practice. Study resources, including books, classes, and videos, are easy to find. But it can be very challenging for a Cisco student to access live gear to practice the implementation, verification and troubleshooting of both simple and complex network configurations.
How then does a network engineer or aspiring network engineer get hands-on practice? One solution is GNS3.
What is GNS3?
GNS3 is a hardware emulator (called a hypervisor) that creates a virtual environment on a host computer running Windows, Linux or Mac OS X. By running Cisco IOS software on the hypervisor, the user can create complex networking scenarios with the real look and feel of hardware devices, because the learner is interacting with the real Cisco IOS operating system running on virtualized routers.
GNS3 was introduced many years ago, when an open-source hypervisor called Dynamips was written to emulate Cisco routers. Dynamips was intended to emulate Cisco router hardware – and was fairly complicated to implement – which put it out of reach of the average learner. More recently, a GUI was added to manage the hypervisor, and that GUI is currently called the Graphical Network Simulator (GNS3).
GNS3 is the front-end for multiple hypervisors, including Dynamips and Qemu, with the latter being used to emulate hardware used by the Cisco Adaptive Security Appliance (ASA) firewall.
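As a small illustration of that front-end relationship, Dynamips can run in a “hypervisor mode” that listens on a TCP control port and accepts plain-text commands, which is how GNS3 drives it behind the scenes. The sketch below is a hypothetical probe of that interface; the port, the command name and the response handling are assumptions to verify against the Dynamips documentation for your version.

```python
# Hypothetical probe of a local Dynamips hypervisor control port.
# Port 7200 and the "hypervisor version" command are assumptions based
# on Dynamips' documented hypervisor mode; adjust for your own setup.
import socket

HOST, PORT = "127.0.0.1", 7200

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall(b"hypervisor version\n")
    # Print whatever the hypervisor answers; the exact reply format is
    # defined by the Dynamips hypervisor protocol.
    print(sock.recv(4096).decode(errors="replace"))
```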
Benefits of GNS3
The primary draw of GNS3 is that it affords anyone with a computer a way to practice network topologies. But it goes beyond that: users can quickly set up routers and firewalls, using simple drag-and-drop actions on the screen. It’s easy to add Ethernet connections between the devices, or to add hardware modules to the routers in the virtual GNS3 topology for more complex network designs.
Though GNS3 supports hardware emulation for several models of Cisco routers, it doesn’t provide the actual IOS. Many learners choose to purchase a single physical router, and use the related IOS image in their virtual GNS3 topology.
Layer 2 Ethernet switching inside of a GNS3 topology is limited to the switchport modules that can be added to the virtual routers inside of GNS3, which doesn’t support the full layer 2 switching capabilities that a physical switch would provide. As a result, many people want to have interaction between their GNS3 network and live network gear, which they can do by using logical Ethernet connections. And, if desired, they can set up 802.1q or ISL trunks between the devices in GNS3 and live physical network devices.
Several interesting possibilities for GNS3 exist, including:
- Using Windows, Linux or Mac OSX as the host computer for GNS3
- Integrating the host computer as a node on the GNS3 network(s)
- Virtual PC integration (VMware, VirtualBox) into GNS3 topologies
- Virtual PC Simulator to create several virtual “PCs” on a GNS3 network
- Ethernet and Trunking capabilities between GNS3 devices and live networks
- Virtual “appliances” as nodes on the GNS3 network
- Distributed computing GNS3 hypervisor support
- Router, ASA, and Switchport module emulation
- Wireshark Integration for GNS3 protocol analysis of GNS3 network traffic
- Cisco Configuration Professional (CCP) access to GNS3 routers
- ASDM (ASA Security Device Manager) GUI access to GNS3 firewalls
Challenges of GNS3 and Tips for Success
In my experience, Cisco students who have tried GNS3 but haven’t kept using it are those who ran into one of two common issues: They just didn’t feel they had the time needed to get proficient with GNS3, or their CPU pegged at 100 percent and they gave up.
Online training materials are available for anyone who has run into the first issue, to help learners become proficient with creating topologies within GNS3. And some basic mistakes which result in 100 percent CPU utilization include:
- Not understanding/tuning the “idle-PC” value
- Not opening a console session after starting the virtual router, or
- Allowing a console session to reach the inactivity timer.
Each of these issues is simple to fix, but users must be aware of them.
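For the second and third issues, a console session simply needs to be opened and kept active on each running virtual router. GNS3 and Dynamips normally expose those consoles as telnet ports on the local machine, so a tiny keep-alive script is one way to handle it. The ports below are assumptions; check the console port assigned to each router in your own topology.

```python
# Keep console sessions open and active on running GNS3 virtual routers
# so idle routers do not peg the CPU. The console ports listed here are
# assumptions; use the ports shown in your own GNS3 topology.
import telnetlib
import time

CONSOLE_PORTS = [2000, 2001, 2002]

sessions = [telnetlib.Telnet("127.0.0.1", port, timeout=5)
            for port in CONSOLE_PORTS]

try:
    while True:
        for session in sessions:
            session.write(b"\r\n")       # nudge the console so it stays active
            session.read_very_eager()    # drain any pending router output
        time.sleep(60)
except KeyboardInterrupt:
    for session in sessions:
        session.close()
```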
Often, a basic explanation is all it takes to change the situation from GNS3 being something students try and leave alone, to GNS3 being one of the most valuable tools they use for practicing, and for proof-of-concept designs, protocol analysis and verification.
For those interested in learning more about GNS3, I’ve compiled a series of “MicroNuggets,” or short, free video tutorials, for further learning.
Note: GNS3 (and its associated hypervisors) can be downloaded at http://www.GNS3.net
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
3:09p
Dell Confirms $24 Billion Buyout Led by Silver Lake

Dell has been a major provider of servers and modular data centers to customers like eBay, whose module of Dell gear is pictured above. Today, Dell confirmed plans to go private in a $24 billion buyout led by Silver Lake. (Photo: eBay)
PC and server vendor Dell Inc. has agreed to be taken private in a $24.4 billion leveraged buyout led by Silver Lake Partners and Michael Dell, the tech industry legend who founded the company in his dorm room and built it into a juggernaut. The long-rumored deal comes at a crucial point in the life of the company, which is facing challenges to its core markets for PCs and volume servers, and is seeking to build its own suite of cloud computing services.
Under the terms of the agreement, Dell stockholders will receive $13.65 per share in cash, a 25 percent premium to the price of Dell’s shares before deal rumors went public. Michael Dell, who owns approximately 14 percent of Dell’s common shares, will continue to lead the company as Chairman and Chief Executive Officer and will contribute his shares to the new company, as well as making a “substantial additional cash investment.”
“I believe this transaction will open an exciting new chapter for Dell, our customers and team members,” said Michael Dell. “We can deliver immediate value to stockholders, while we continue the execution of our long-term strategy and focus on delivering best-in-class solutions to our customers as a private enterprise.
“Dell has made solid progress executing this strategy over the past four years, but we recognize that it will still take more time, investment and patience, and I believe our efforts will be better supported by partnering with Silver Lake in our shared vision,” Dell continued. “I am committed to this journey and I have put a substantial amount of my own capital at risk together with Silver Lake, a world-class investor with an outstanding reputation. We are committed to delivering an unmatched customer experience and excited to pursue the path ahead.”
Dell’s once-dominant position in the consumer computing market has been challenged by the shift to mobile devices and tablets, as well as smaller notebook formats like netbooks and ultrabooks. Dell has been an important player in the data center market with its Data Center Solutions (DCS) unit, which has focused on custom products to support bulk sales to the largest cloud computing companies. DCS has developed a modular data center design that has been used to deploy more than 1 million servers for customers including Microsoft and eBay.
Last year, Dell announced plans to build a global network of data centers to support its own suite of cloud computing offerings, and has opened facilities in London and Quincy, Washington. The company’s cloud push has provided it with a services play in the fast-growing cloud computing market, where its hardware position is being challenged by the rise of open hardware designs emerging from the Open Compute Project.
Silver Lake has been an active investor in the data center sector, backing Vantage Data Centers. Silver Lake has also worked closely on several deals with Microsoft, which will kick in $2 billion in debt financing to support the deal. In addition to equity investments from Silver Lake, Michael Dell and his MSD Capital and the debt from Microsoft, the deal will be backed by loans from BofA Merrill Lynch, Barclays, Credit Suisse and RBC Capital Markets.
“Michael Dell is a true visionary and one of the preeminent leaders of the global technology industry,” said Egon Durban, a Silver Lake Managing Partner. “Silver Lake is looking forward to partnering with him, the talented management team at Dell and the investor group to innovate, invest in long-term growth initiatives and accelerate the company’s transformation strategy to become an integrated and diversified global IT solutions provider.”
“Microsoft has provided a $2 billion loan to the group that has proposed to take Dell private,” the company said in a statement. “Microsoft is committed to the long-term success of the entire PC ecosystem and invests heavily in a variety of ways to build that ecosystem for the future. We’re in an industry that is constantly evolving. As always, we will continue to look for opportunities to support partners who are committed to innovating and driving business for their devices and services built on the Microsoft platform.”
3:30p
The Total Economic Impact of Desktop Virtualization

There is little argument that virtualization can help an organization become more resilient and agile and support more growth. Over the last couple of years, almost every IT department has worked with or at least tried some type of virtualization technology. Whether it is server, application or even desktop virtualization, each platform has great benefits and important considerations. A series of in-depth interviews with four existing Cisco VXI customers – two in financial services, one in education, and one in healthcare – revealed that these organizations reduced their desktop acquisition costs, ongoing IT management and operational costs, and future IT headcount growth costs.
The core of the conversation revolves around deploying the right systems, with the right amount of planning, for an environment with a solid use case. The Total Economic Impact of Desktop Virtualization white paper looks at how implementing Cisco’s VDI/VXI solution can deliver operational benefits as well as cost savings. This in-depth analysis from Frost & Sullivan presents the structure of those benefits and costs and how they fit into an IT plan. Each of the benefits and costs below is elaborated upon in the interviews conducted with Cisco’s VXI customers. For example:
Benefits. The composite organization experienced the following benefits that represent those reported by the interviewed companies:
o Endpoint cost savings.
o Reduction in future IT growth headcount.
o IT staff performance improvement.
o Server, storage, and network refresh savings.
o Data center space savings.
o Power and cooling savings.
o Desktop administrator productivity improvements.
o Ongoing desktop management productivity savings.
Costs. The composite organization experienced the following costs:
o Cisco hardware and annual maintenance costs.
o Internal resources allocated to planning, testing, and deployment.
o Third-party desktop software and storage cost estimates.
o Professional fees.
Download this white paper today to see how desktop virtualization can help your organization. The paper also outlines the various factors that affect the benefits and costs of deploying a virtual desktop solution. When designing a VDI platform, organizations that plan out their endpoints, their infrastructure, and the end-user experience will be able to launch a much better solution. Once those questions are answered, the organization will need to see and understand the cost/benefit analysis to see how the VDI cost structure breaks down. This white paper shows how users utilize a VDI environment, where the strengths lie, and how best to plan for such a deployment.
4:00p
Ravello Raises $26M To Let Users Roam From Cloud to Cloud

There has been a lot of activity lately around enabling cross-cloud portability, and Ravello Systems has just closed a Series B round to do just that. The round brings Ravello Systems’ funding to $26 million in total. Ravello is now backed by Sequoia Capital, Norwest Venture Partners and Bessemer Venture Partners.
Ravello’s Cloud Application Hypervisor allows enterprises to take any application and run it in any cloud without any modifications. The company also announced the public beta of its service for developers.
Formerly from Red Hat, the company founders were the original team behind the now-standard KVM Hypervisor.
“The leadership team behind Ravello has a track record of developing innovative technologies in the virtualization infrastructure space and backing it up with solid execution,” said Adam Fisher, partner, Bessemer Venture Partners. “Their previous virtualization initiative, KVM has been a tremendous success in the market with record breaking virtualization performance and scalability. This time around, HVX is another ground-breaking technology and if any team can deliver, it’s these guys.”
“We have developed a Cloud Application Hypervisor that encapsulates multi-VM applications along with their entire environment including the VMs, networking, storage etc. so that enterprises can run any application in any cloud without making any changes,” said Benny Schnaider, president and chairman of the board, Ravello Systems.
The cloud application hypervisor consists of three core technology components:
- a new, high-performance nested hypervisor, HVX, the engine behind Ravello’s ability to normalize application environments across any cloud without any changes;
- an IO overlay that consists of software defined networking and storage, enabling any networking topology on top of any cloud; and
- an application framework that enables a monolithic definition of an end-to-end multi-VM application including all of its infrastructure.
Ravello makes it possible to move a multi-VM application from on-premises VMware infrastructure to AWS, or vice versa, without any changes to the application. It sounds simple enough, but there is a lot of complexity in moving something from one type of cloud to another. This capability is applicable to disaster recovery, bursting to the cloud, switching providers without the lock-in worries that continue to plague potential cloud customers, or moving between off-premises and on-premises environments in general.
“Enterprises cannot use the public cloud the way that they would like to, which is to be able to rent capacity on demand and simply spill over bursty workloads,” said Rami Tamir, CEO, Ravello Systems. “That’s not possible today because the public cloud environment is completely different from the enterprises’ internal data center. The industry needs a solution to normalize the application environment across the private and public cloud, so that enterprises can truly begin using the public cloud.”
A Team With History and Expertise
Founded in 2011 by Tamir and Schnaider, Ravello’s executive team brings deep expertise in virtualization, cloud, networking and storage technologies, and is the same team behind Qumranet. That company developed the KVM Hypervisor (acquired by Red Hat in 2008), and it’s now the standard virtualization technology in Linux. Also joining the team is Navin R. Thadani (SVP, Products), also formerly of Red Hat.
“Ravello’s technology has broad applications in terms of hybrid cloud computing and true application mobility,” said Vab Goel, partner, Norwest Venture Partners. “An end-to-end solution in this space will go a long way in getting real enterprises to adopt the public cloud.”
“The rest of the market is focused on solving the enterprise public cloud adoption problem with ‘management only’ solutions – which is like managing complexity with more complexity,” said Shmil Levy, partner, Sequoia Capital. “Ravello Systems is the first company to tackle the problem head-on from an infrastructure perspective. It’s very much like what VMware did back in the early 2000s to the enterprise data center.”
Public Beta – Ravello’s Cloud Application Hypervisor
Ravello Systems announced the public beta of its service, designed to enable developers to harness the public cloud to develop and test applications. By easily enabling cloud usage, developers overcome internal data center capacity constraints and leverage the unlimited resources of the public cloud to develop and test their applications.
“Traditional cloud management vendors have been positioning the cloud as a journey. That’s because with existing cloud management tools, it is,” said Navin R. Thadani, SVP Products at Ravello Systems. “Ravello’s unique technology enables organizations to encapsulate multi-VM applications and deploy them instantly on any cloud without making any changes. Development and test is a classic use-case for developers to start using the public cloud today. We are pleased to open up our beta and look forward to working with the extended developer community.”
There can be cost savings in using the cloud over traditional approaches: eliminating contention for resources and ensuring developers always have access to replicas of production instances means they can collaborate, work in parallel and be much more efficient. “Enterprises looking for a more cost-effective way to develop and test their applications are increasingly looking to the public cloud as the answer. Yet, they’re held back by the difficulty of making that transition between private data center and public cloud because of differences in infrastructure and, often, significant changes to their code,” said Donnie Berkholz, analyst, RedMonk. “Ravello’s novel approach to providing a consistent environment has the potential to enable more businesses to benefit from the agility of the public cloud without all of its costs.”
“Our organization runs production deployments on-premise, but we lack capacity internally for application development and testing,” said Sreesan P.D., Systems Engineer, Ideamine Technologies. “So we tried using the cloud to develop our multi-tier application, but had to make extensive changes and ended up with two completely different environments. With Ravello, we get exactly the same environment in the cloud, and can now develop and test our application on replicas of our production – all with the unlimited resources of the cloud.”
Ven Shanmugam, senior manager of corporate strategy at Rackspace, noted, “Enterprise customers look for innovative ways to use the cloud for development and test and then deploy their applications back on premise. Ravello Systems enables customers to leverage the Rackspace Open Cloud for their development. We are excited to have Ravello Systems as part of our Cloud Tools Marketplace and to further accelerate enterprise adoption of our open cloud services.”
For more news on cloud computing, bookmark our Cloud Computing Channel.
6:49p
Cisco Expands SDN, Unveils New Nexus Switch

The Nexus 6004 is Cisco’s highest-density 40 Gigabit Layer 2/Layer 3 fixed switch. (Photo: Cisco Systems)
Cisco (CSCO) announced a variety of innovations for its Unified Data Center strategy, with a high density 40 Gigabit switch, a hybrid cloud solution, and a new controller.
Cisco unveiled the Nexus 6000, a 96-port, line-rate 40 Gigabit fixed form factor switch with Ethernet and Fibre Channel over Ethernet (FCoE). Running Cisco NX-OS, the 6000 series is purpose-built for high-performance access and leaf/spine architectures for virtual and cloud deployments of converged networks. Two new Nexus 6000 models are being introduced – the 4U Nexus 6004 with 96 ports of line-rate 40GE, and the 1U Nexus 6001 with 48 fixed GE/10GE ports.
“As a leading service provider for cloud, managed hosting and colocation, Savvis relies upon technologies such as Cisco’s Data Center Unified Fabric solutions to help us quickly deliver secure, robust, agile infrastructure to our clients,” said Ken Owens, cloud CTO, at Savvis, a CenturyLink company. “Cisco continues to build on its reputation as an industry leader in providing end-to-end cloud-ready architecture through its recent cloud innovations, which enable service providers like Savvis to enhance their cloud capabilities.”
Cisco has also added a Network Analysis Module (NAM) services blade to bring application awareness and performance analytics to the Cisco Nexus 7000. A new Nexus 5500 40GE module provides the option of deploying 40GE uplinks in an existing Nexus 5500 to reduce over-subscription, and the new Nexus 2248PQ switch provides a 10GE top-of-rack fabric extender with 40GE uplinks.
“Like many IT organizations, we are tasked with increasing business agility while lowering the total-cost of ownership,” said Ansh Kanwar, director, network services, Citrix. ”We have standardized on FEX architecture which gives us a choice for deploying different server environments with operational simplicity. The Cisco Nexus 6004 along with the new Nexus 2248PQ offers a solution to manage large pool of 10G servers. Introduction of 40G technology on these platforms allows us to build a robust DC fabric that can scale significantly to meet our DC growth in the coming years.”
New InterCloud Designs
To address security and complexity issues in the hybrid cloud, Cisco announced the Nexus 1000V InterCloud. This new design provides the foundation for a secure hybrid cloud, bridging the enterprise and cloud providers and preserving existing networking capabilities and L4-7 services. It is based on Nexus 1000V switches and Cisco NX-OS software. The Virtual Network Management Center (VNMC) InterCloud offers new capabilities, including a single policy point for network services across both enterprise and provider domains; the ability to manage virtual machine lifecycle across multiple hypervisors in hybrid clouds; and the ability to manage multiple provider clouds via APIs.
Building on the summer 2012 introduction of the Open Network Environment (ONE), Cisco continues to open the fabric with a new ONE Software Controller that supports a highly available, scalable and extensible architecture. It interfaces with OpenFlow, provides consistent management, troubleshooting and security features, and includes built-in applications such as network slicing functionality for logical partitioning of network resources. Cisco has also expanded platform support for OpenFlow to the Nexus 3000, Nexus 7000, ASR 9000 and Catalyst 6500 models.
7:29p
Deal News: Oracle Acquires Acme Packet

Oracle, IBM, The 451 Group and Twitter have announced acquisitions recently:
Oracle to acquire Acme Packet. Oracle (ORCL) announced it has entered into an agreement to acquire Acme Packet (APKT), a global provider of session border control technology, for approximately $1.7 billion. Deployed widely across global enterprises and 89 of the world’s top 100 communications companies, Acme Packet solutions enable trusted, first-class delivery of next-generation voice, data and unified communications services and applications across IP networks. The combination of Oracle and Acme Packet will deliver an end-to-end portfolio of technologies that will support the deployment, innovation and monetization of all-IP networks. “The proposed acquisition of Acme Packet is another important piece in Oracle’s overall strategy to deliver integrated best-in-class products that address critical customer requirements in key industries,” said Oracle President Mark Hurd. “The addition of Acme Packet to Oracle’s leading communications portfolio will enable service providers and enterprises to deliver innovative solutions that will change the way we interact, conduct commerce, deliver healthcare, secure our homes, and much more.” Shares of Acme Packet gained 22 percent on Monday after the deal was announced.
IBM to acquire Star Analytics. IBM announced a definitive agreement to acquire the software portfolio of Star Analytics, a privately held business analytics company headquartered in Redwood City, California. Star Analytics software automatically integrates essential information, reporting applications and business intelligence tools across the enterprise, on premises or from cloud computing environments. As a complement to IBM’s business analytics initiatives, organizations will gain faster access and real-time insight into specialized data sources. “Star Analytics software allows organizations to move critical analytics source data at will and use it regardless of which application they need to use it with, providing both flexibility and accessibility,” said Quinlan Eddy, CEO, Star Analytics. “As part of IBM, we can now bring our technologies to a broader range of clients to help them uncover new, untapped growth opportunities.”
The 451 Group acquires Tech:Touchstone. The 451 Group announced the acquisition of Tech:Touchstone, a UK-based events company cofounded in 2007 by ex-Gartner executive Simeon Turner. Tech:Touchstone events facilitate in-person sharing of insight between senior IT executives and create an environment where they have an opportunity to learn directly from their most trusted sources of information – peers and industry analysts. Turner will become Managing Director, 451 Group Events, which is responsible for producing all events, such as the Uptime Institute Symposium. The 451 Group will expand the locations of its events throughout Asia Pacific, Latin America, Europe and other geographical areas, reflecting the global nature of the firm’s business and clientele. “Joining a dynamic and rapidly growing firm like The 451 Group is an exciting opportunity for me and the Tech:Touchstone team, and significantly enhances the value we can bring to our clients,” said Turner. “451 Research, Uptime Institute and The 451 Group’s recent addition, Yankee Group, provide the rich source of thought leadership and global presence necessary for a thriving events portfolio that delivers highly valuable, actionable insight and facilitates quality networking for all of our clients. We look forward to ensuring that The 451 Group Events will be the preeminent producer of IT events within the industry.”
Twitter acquires Bluefin Labs. Business Insider reports that Twitter has acquired social TV analytics company Bluefin Labs. Although a price was not mentioned, it was supposedly Twitter’s largest acquisition to date, topping its $40 million acquisition of TweetDeck in 2011. Bluefin Labs offers detailed reports about which brands are discussed most on social media. Twitter scored better than both Facebook and Google+ on Super Bowl Sunday with 27.7 million public tweets, and has long been a conversation hub for television. Late last year Twitter and Nielsen entered an exclusive multi-year agreement to create the Nielsen Twitter TV Rating research metric.