Data Center Knowledge | News and analysis for the data center industry
Tuesday, June 25th, 2013
12:18p
Cisco Unveils Major Refresh of Catalyst Switch Family
The new Cisco Catalyst 4500 Supervisor 8E extends wired and wireless convergence to chassis-based switches. (Photo: Cisco Systems)
Taking care to nurture a dominant position in its routing and switching franchise while still evolving its products, Cisco launched a significant refresh of its service provider and enterprise networking portfolio. At the annual Cisco Live event in Orlando on Monday, Cisco (CSCO) announced new and updated products for its Catalyst switch and Integrated Services Router families. The event conversation can be followed on the Twitter hashtag #clus.
Catalyst Innovations
Revitalizing and innovating on its decade-old Catalyst 6500 switching family, Cisco announced the Catalyst 6800 backbone switching line. Fostering simplicity and investment protection, Cisco laid out its future of enterprise networking with several campus and branch additions that are backwards compatible with the Catalyst 6500. The Catalyst 6807-XL is a 7-slot, 10-rack-unit modular chassis with up to 880 Gbps of per-slot capacity and 11.4 Tbps of total switching capacity. It is optimized for 10/40/100 Gbps and designed for the next-generation campus backbone.
The Catalyst 6880-X is a 3-slot switch with a Supervisor engine carrying 16 fixed 10 Gbps ports, plus four half slots for optional 10 Gbps or 40 Gbps line cards. The Catalyst 6800ia simplifies campus network IT by virtually consolidating access switches across multiple locations into one extended switch, and includes all the feature richness of the Catalyst 6800 and 6500 switches for access networks.
Updates to the Catalyst 4500 and Integrated Services Router (ISR) families were made as well. The Cisco Catalyst 4500E Supervisor Engine 8E extends wired and wireless convergence to chassis-based switches by adding the fully programmable Cisco UADP ASIC, originally introduced in January with the Catalyst 3850 switch. A new ISR 4451-AX boosts the branch router portfolio with up to 2 Gbps of forwarding performance, native services and a pay-as-you-grow purchase model. The new Aggregation Services Router (ASR) 1000-AX provides services for WAN aggregation by integrating Application Visibility and Control and AppNav with Virtual Wide Area Application Services (vWAAS) into popular customer configurations.
“The network is more important than ever before in enabling the user experience in today’s applications. Network intelligence, simplicity and innovation will be the key factors in unlocking new business opportunities and competitive differentiation for our customers,” said Rob Soderbery, senior vice president and general manager, Cisco Enterprise Networking Group. “Today’s launch of the Catalyst 6800, Catalyst 4500 Supervisor 8E, ISR 4451-AX and ASR 1000-AX demonstrates how Cisco’s Enterprise Network Architecture is delivering against these promises.”
Cisco ONE
Cisco ONE is Cisco’s portfolio that addresses software-defined networking (SDN) and, more broadly, incorporates programmability across the network infrastructure to enable innovation, investment protection and lower operating expenses. A key element of the Cisco ONE strategy is the onePK API toolkit, which allows developers to create networking applications that help business applications.
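The "write applications once, run them anywhere on the network" idea behind an API toolkit like onePK can be sketched abstractly. The following is plain Python with hypothetical class names for illustration only, not the actual onePK bindings: the application codes against one common interface, and each network element type supplies its own implementation.

```python
class NetworkElement:
    """Common interface an application codes against (hypothetical, not onePK's API)."""
    def apply_qos(self, policy: str) -> str:
        raise NotImplementedError

class WiredSwitch(NetworkElement):
    def apply_qos(self, policy: str) -> str:
        # A wired element might map the policy to egress queueing.
        return f"switch: queued traffic per '{policy}'"

class WirelessController(NetworkElement):
    def apply_qos(self, policy: str) -> str:
        # A wireless element might map the same policy to SSID marking.
        return f"wlc: tagged SSID traffic per '{policy}'"

def prioritize_voice(elements):
    # The application is written once and runs unchanged across element types.
    return [el.apply_qos("voice-priority") for el in elements]

results = prioritize_voice([WiredSwitch(), WirelessController()])
print(results)
```

The point of the sketch is the inversion: the application never branches on whether an element is wired or wireless; that variation lives behind the shared interface.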
Cisco will support onePK across its entire enterprise routing and switching portfolio within the next 12 months, beginning with the ISR 4451-AX and ASR 1000-AX routers announced today, which will support onePK in the first quarter of 2014. This open API, combined with the control services layer, allows application developers to view the network as a single entity, so they can write applications once and run them across multiple network environments, whether wired or wireless, with consistent application performance.

12:30p
Cloudy Gathering at GigaOm Structure 2013 GigaOm Structure 2013 brought together cloud enthusiasts – from those who develop, implement and deploy cloud technology to end-users at Fortune 500 companies – in San Francisco last week. The event was filled with the traditional tech companies such as Microsoft and megascale companies such as LinkedIn and Twitter, startups such as Cumulus Networks, as well as CIOs from old-school companies such as Clorox Co., Revlon and Pabst Brewing Co.
For photo highlights of the event, see GigaOm Structure 2013: Maturation of the Cloud.

12:30p
The Modern Data Center: An Integrated World
Vanessa Alvarez is head of marketing at Scale Computing (@ScaleComputing) and a former Forrester analyst.
The data center of yesteryear is no longer enough for businesses today. Businesses, large and small, are in a fast paced competitive landscape right now, regardless of industry. A big driver in this is the fact that technology has truly become part of a business’s competitive advantage. Virtualization and cloud have had a tremendous impact on IT. Mid-market and large enterprises are at different stages in their virtualization initiatives and have started to think strategically about private and public cloud and what it means to them.
These shifts inevitably change how data centers, and IT environments in general, operate. IT is forced to do more with less. The technology silos of server, storage and networking are breaking down, largely due to virtualization, which created interdependencies between them. As a result, organizational silos have been forced to work together, something once unheard of in IT. Legacy infrastructure can no longer meet the demands of new business initiatives, but large capital investments need to be carefully considered and planned.
What Choices Do Enterprises Have?
Luckily for businesses, there are many options to consider. One possibility is leveraging public clouds to achieve some of the cost savings, agility and flexibility they offer. But starting at home is important: creating more efficient and flexible data centers and IT environments is the first place to start. It’s no longer efficient or smart to manage three technology silos that are highly interdependent on one another. Integrating them into a single model that abstracts away their complexity and brings together server, storage, virtualization, networking, automation, agility and flexibility in a single platform is the first step in creating these efficiencies.
An integrated approach allows infrastructure resources to be optimized automatically for business applications through intelligent software, eliminating the inefficiencies of over-provisioning and providing automatic cost savings. It also frees IT from the guessing game of how many resources are needed; from managing three different boxes and multiple platforms; from struggling through the myriad licensing fees and multi-vendor red tape; and ultimately from the shackles of legacy infrastructure that no longer works. These operational challenges are often difficult to quantify but end up costing businesses a great deal of money. An integrated approach lets IT focus on other areas and become more valuable to the overall business.
When doesn’t this approach work? There are many instances where businesses have not quite gotten to the point where they’re able to let go of technology silos. It’s difficult when you’ve lived in one world for a long time and all of a sudden there’s change. Many businesses also have long-standing relationships with their existing vendors and can’t let go of the brand name IT infrastructure. The reality is that the status quo may work for a while, but in the long term, it’s costing the business money and resources.
Integration Preparation
Prepare your organization for the next generation of integrated infrastructure for your data center and IT environment. If you’re down the path of virtualization, look for solutions that will bring everything you need in one platform. Looking to build your own private cloud and achieve the same efficiencies of public cloud, but within your own environment? Look to integrated platforms that can serve as the foundational infrastructure for your private cloud.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

1:00p
Fusion-io Upgrades Caching Software to Boost Virtualization Performance
Fusion-io (FIO) announced a unified virtualization software solution, an upgrade of its ioTurbine software that improves performance and enhances the value of virtualization investments.
ioTurbine delivers the option for hypervisor caching, virtualization-aware caching in the guest VM, dynamic reallocation of cache memory during live migration of virtual machines in VMware vMotion, and unified management of caching across virtual and physical environments.
“Just as Fusion-io revolutionized storage by integrating flash in the server to bring performance closer to enterprise applications, enabling virtualization-aware caching is a significant breakthrough in intelligent virtual performance acceleration,” said Vikram Joshi, Fusion-io Chief Technologist and Vice President. “As adoption of virtualization is at different stages at different companies, ioTurbine provides customers with the option to cache in the guest VM or the hypervisor. Caching at the guest brings performance to the application where it is needed without costly and ineffective storage sprawl. Fusion ioTurbine makes it possible to further streamline the stack with optimized software to accelerate the adoption of flash in virtual environments, while giving customers flexibility and control over their infrastructure.”
Through virtualization-aware, guest-level caching, ioTurbine improves application performance by identifying an application’s most immediate data requirements. It supports a choice of server and storage systems and multiple operating systems, including VMware ESXi, Windows and Linux. The software provides terabytes of auto-tiered caching for up to 40 times faster database performance. It maximizes hardware investments by doubling VM density and offloading SAN workloads by up to 96 percent in environments with a variety of read/write mixes.
Fusion ioTurbine is available as stand-alone software or bundled with the Fusion ioCache data accelerator platform for virtualization. The ioCache platform features 750GB of Fusion ioMemory capacity for ultra low latency application acceleration. The ioTurbine software caches data to the ioCache platform for rapid processing of critical application data, while non-critical data moves across the network to slower, high capacity storage arrays.
“Fusion-io is extending the ioTurbine caching technology to include the option to cache on physical servers, in the hypervisor, and the guest virtual machine, while preserving the ability to support native vMotion,” said David Floyer, Wikibon Chief Technology Officer. “This unified approach gives leading deployment flexibility and control to enterprise users requiring high performance cache in the migration to total server virtualization.”

1:30p
United Metal Products Sells Chil-Pak
A look at a Chil-Pak modular chilled water plant installation, with one module in place and space for additional units to expand when more capacity is needed.
United Metal Products, Inc. announced today that it has sold its interest in Chil-Pak, a manufacturer of modular chilled water plants, to Texas-based Air Matrix, L.P. The sale allows United Metal Products to continue its focus on providing energy-efficient cooling solutions to the data center industry, including direct evaporative and indirect evaporative cooling systems and custom air handlers.
“United Metal Products has been about providing the best in class innovative evaporative solutions and custom air-handlers since we were founded in 1978,” said Steve Kinkel, President of United Metal Products. “Over the last year we have seen a significantly increased demand for our indirect solutions, in particular the IRA and the IRAe. The sell-off of Chil-Pak allows UMP to continue to focus our efforts, resources, and personnel on providing innovative energy efficient cooling solutions to the data center industry.”
United Metal Products will continue to manufacture all of its equipment in the United States at its Tempe, Arizona facility, and industry leader Harold Simmons will remain in his role as Global Director of Strategy & Mission Critical Solutions for United Metal Products, Inc.
United Metal was founded in 1978 and specializes in evaporative cooling, and has deployed its cooling units for a broad range of industrial customers to cool more than 300 million square feet of space. For the first 25 years of the company’s existence, data centers weren’t a meaningful customer segment. But as data centers grew larger, their operators began seeking more efficient ways to manage their cooling, and many turned to evaporative cooling.
Chil-Pak makes modular chilled water plants for data centers that can be deployed to speed a greenfield build or to add cooling capacity to a facility that has maxed out its existing chiller plant.

2:15p
Puppet Labs Rolls Out Puppet Enterprise 3.0
Puppet Labs, a provider of IT automation software for system administrators, has made Puppet Enterprise 3.0 available. It delivers a complete cloud automation solution for enterprise customers, providing scalability and orchestration capabilities and a unified, software-defined approach to automation of compute, network and storage layers. This enterprise edition enables customers to reap the full benefits of cloud computing.
“Some of today’s largest and most successful SaaS clouds use our solutions to automate their infrastructure,” said Luke Kanies, CEO and founder of Puppet Labs. “With Puppet Enterprise 3.0, we’re making the scalability, orchestration and software-defined capabilities that power those SaaS clouds accessible to any IT organization, enabling customers to take full advantage of their dynamic infrastructure.”
Jay Lyman, senior analyst, 451 Research noted, “Today’s enterprise data centers and IT operations are often built around silos of compute, network and storage resources managed separately by different IT teams with different tools, making it difficult to move quickly or drive efficiency, innovation and business value. Puppet Enterprise 3.0 is a cloud automation solution that manages compute, network and storage resources whether on-premises or in the cloud so IT teams can break out of silos and speed operations across the organization.”
New Capabilities in Puppet Enterprise 3.0
Scalability and Performance
With today’s on-demand and flexible cloud architecture, customers expect to create and manage hundreds—or even thousands—of applications and virtual machines in just minutes. Puppet Enterprise customers can expect a nearly 200 percent increase in performance from the latest release. They can also expect a nearly 100 percent scalability improvement, enabling them to manage twice as many cloud nodes in the same deployment.
Orchestration
It’s almost impossible to discover and manage large numbers of virtual, transient cloud nodes with traditional methods, such as CMDBs, custom scripts and spreadsheets. The orchestration engine in Puppet Enterprise 3.0 has new capabilities tailored for customers automating cloud infrastructure, enabling them to efficiently discover and then orchestrate complex management operations on large volumes of cloud nodes.
Software-Defined Infrastructure
The management of compute, network and storage resources is typically siloed into separate organizations using separate tools, which slows time to production. Puppet Enterprise 3.0 enables IT teams to deploy cloud applications faster, with fewer errors, by providing a unified, software-defined approach to automating management of compute, network and storage resources. Pre-built configurations for automating Juniper, Cisco, NetApp and many other devices are freely available on the Puppet Forge, Puppet Labs’ online repository of more than 1,200 ready-to-run configurations.
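The declarative, desired-state model Puppet applies across these resource types can be sketched in a few lines. This is plain Python for illustration (Puppet itself uses its own manifest DSL): you declare what a resource should look like, and the engine converges actual state toward it, making no changes when the two already match.

```python
def converge(actual: dict, desired: dict) -> dict:
    """Apply only the changes needed to reach the desired state (idempotent)."""
    changes = {k: v for k, v in desired.items() if actual.get(k) != v}
    actual.update(changes)
    return changes

# One node's actual settings have drifted from the declared state.
node = {"ntp": "absent", "vlan": 100}
desired = {"ntp": "present", "vlan": 100}

print(converge(node, desired))  # first run applies only the drifted setting: {'ntp': 'present'}
print(converge(node, desired))  # second run is a no-op: {}
```

The idempotence shown by the second call is what lets a tool like this run continuously against thousands of nodes without repeatedly reapplying settings that are already correct.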
Other features:
- Discover and Browse Resources: Puppet Enterprise’s graphical user interface enables the user to discover and browse any and all service resources throughout their infrastructure, in real time.
- Enterprise Platform Support: Support for all major enterprise platforms, including Microsoft Windows, Red Hat Enterprise Linux, IBM AIX, Solaris, and all major Linux distributions.
- Reusability of 1,200+ Puppet Forge modules: Users can now take advantage of pre-built configurations available on the Puppet Forge, without the need to modify modules for their specific environments.
Pricing and Availability
Puppet Enterprise 3.0 is available at www.puppetlabs.com. Anyone can download and use Puppet Enterprise to manage up to 10 nodes free of charge. Pricing starts at $99 per node. Documentation for installing and using Puppet Enterprise can be found here.

3:15p
With Manta Storage, Joyent Eyes Data Services
There’s an old saying from Wayne Gretzky that great hockey players “skate to where the puck is going” rather than where it’s been. Joyent believes the intersection of cloud computing and Big Data is where the puck is going. To that end, the company today rolled out Manta, a new object storage service that allows Joyent customers to bring together compute and data analysis on the same cloud infrastructure.
Joyent describes the new service as “the first true convergence of compute and data on the market … offering managed utility compute in-place on unstructured data.” Manta allows for the execution of compute tasks such as log analysis, search index generation and financial analysis without any data movement or any setup of compute clusters or processing software. Code is brought in parallel to physical servers in secure containers, while data is automatically merged using the industry-standard Map/Reduce pattern.
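The Map/Reduce pattern Manta applies can be illustrated with a minimal in-process sketch (plain Python, not Manta's actual interface): map tasks run against each stored object where it lives, and a reduce step merges the per-object partial results.

```python
from collections import Counter

def map_task(log_text: str) -> Counter:
    # Map phase: runs against one object in place; here, count HTTP status codes.
    return Counter(line.split()[-1] for line in log_text.splitlines() if line)

def reduce_task(partials) -> Counter:
    # Reduce phase: merge the per-object partial counts into one summary.
    total = Counter()
    for partial in partials:
        total += partial
    return total

# Three "objects", each standing in for a stored log file.
logs = [
    "GET /a 200\nGET /b 404",
    "GET /c 200",
    "GET /d 500\nGET /e 200",
]
summary = reduce_task(map_task(log) for log in logs)
print(summary["200"])  # → 3
```

In Manta the map phase would run inside containers colocated with the objects, which is what removes the copy-data-to-the-cluster step the customers below describe.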
Moving Up the Stack
CEO Jason Hoffman says Joyent and the Big Three infrastructure as a service (IaaS) players – Amazon, Google and Microsoft – have pretty much standardized on eight virtual machine instance types. That means the playing field must shift from pricing to services.
“I think the four of us have pretty much commoditized those instances,” said Hoffman. “There is now a common core of offerings. What we’re now focusing on is how to add data services. I suspect we are going to see IaaS moving up the stack toward managed data services. I think that’s what the next 18 to 24 months will look like.”
In introducing Manta, Joyent shared the experiences of early adopter customers who are using the new service to streamline their data analysis.
“Copying data across a network from storage onto a compute cluster can take hours,” said Konstantin Gredeskoul, CTO of the online shopping community Wanelo (short for Want. Need. Love). “Joyent Manta Storage Service strips the need to invest any time moving the data around, making ad-hoc querying and analysis near-instantaneous, seamless and cost-effective. We are now able to perform complex cohort analysis and retention reports across hundreds of gigabytes of data in a couple of minutes. When compared to traditional methods such as data warehousing, this is game changing.”
“Fifty percent of the world’s smartphone traffic goes through Ericsson and we are continuously evaluating new technologies to increase the ability of the network to manage growing data volumes in the most responsive, secure, cost effective ways possible,” said Vish Nandlall Ph.D, CTO & Head of Strategy and Marketing for Ericsson North America. “Joyent’s new compute-on-storage innovation is a fundamental paradigm shift that changes the economics and utility of object storage and high-performance big data analysis.”
Additional features of Manta include:
- a multi-datacenter object store with fine-grained replication controls;
- no object size limits;
- strongly consistent writes and highly available reads;
- per object replication policies; and
- a filesystem-like namespace, including directory queries.
Joyent has partnered with data storage management company Panzura to enable enterprise customers to securely migrate data from existing NAS, backup and archive storage solutions to Manta.
“Joyent Manta Storage Service represents an exciting alternative for customers looking to offload and consolidate storage services to the cloud,” said Jim Thayer, vice president of channels and business development at Panzura. “By combining the Joyent and Panzura platform and integrating with Dell hardware, enterprises can efficiently and quickly access and process unstructured data with lower cost and a streamlined infrastructure.”

3:30p
Channel IaaS Provider PeakColo Secures $3 Million Debt Facility
The executive team at Denver-based PeakColo, which has experienced strong growth by focusing on sales of turnkey “white label” cloud offerings through reseller channels. (Image: PeakColo)
Channel-centric Infrastructure as a Service (IaaS) provider PeakColo has secured a $3 million debt facility from Square 1 Bank. The funding will support what the company says is explosive demand in its existing markets and provide further working capital for growth into new markets.
PeakColo also closed an additional $1.5 million in follow-on equity funding to the $7.5 million in growth equity capital that was secured in August 2012. This financing will be used to support PeakColo’s expansion of its cloud node capacities in existing locations. The follow-on funding was secured from Meritage Funds and Sweetwater Capital, for a total of $9 million in funding to date.
PeakColo offers turnkey infrastructure services, including hosted private and public clouds and white-label cloud services, on a VMware vCloud-powered platform. The company currently operates in six geographies across the United States and Europe: Seattle, Denver, Chicago, New Jersey, New York and the United Kingdom. More on PeakColo can be found in an April profile.
“PeakColo has experienced tremendous growth over the past few years, growing 100 percent year over year, with targets to continue the same aggressive growth rate in the years to come,” said Luke Norris, CEO and Founder of PeakColo. “Because of the high demand for enterprise class cloud services, we already have existing cloud nodes that require capacity expansions. PeakColo’s 100% channel centric model drives us to constantly expand and update our equipment, network, and IT architecture to keep up with the demand from our channel partners. We continue to see existing financial institutions make further investments in PeakColo, which illustrates a true confidence in our business model.”
“Square 1 is working closely with us to help support our high-growth, capital-intensive business along with our innovative technology initiatives,” said Sharon Kincl, Vice President of Finance and Administration for PeakColo. “Because of Square 1’s extensive experience working with venture capital-backed technology companies, they understand our Infrastructure-as-a-Service business model. Square 1 will be an excellent resource for us as we build out robust environments that can scale quickly and provide storage and network resources for our channel-centric providers and their end-user clients.”

5:00p
Big Data News: MapR and VMware Partner on Hadoop
The MapR Hadoop distribution is now certified for VMware vSphere, RainStor gains Hortonworks Data Platform certification, and Fujitsu refreshes its Big Data products and services.
VMware and MapR partner on Hadoop. MapR Technologies and VMware announced that MapR’s Distribution for Apache Hadoop is now certified for VMware vSphere. After the companies completed an extensive validation process, joint customers may now easily deploy and run the MapR Distribution for Hadoop on VMware vSphere and receive commercial support. By virtualizing the MapR Distribution for Hadoop, IT departments can achieve the benefits of running Hadoop in a pool of existing compute and storage resources, on one common platform such as VMware vSphere. Enterprises are now capable of rapidly deploying new virtual Hadoop clusters in minutes using Project Serengeti and VMware vSphere. “MapR and VMware have worked closely to offer enterprises a dependable, high-performing and easy-to-use Hadoop distribution to help them gain insights from Big Data,” said Fausto Ibarra, senior director, Product Management, VMware. “As Hadoop technology continues to evolve, customers are realizing the benefits of combining a Hadoop distribution explicitly designed for enterprise-use cases with a powerful and flexible virtualization platform such as VMware vSphere. Our joint efforts will help enterprises to accelerate their adoption of Hadoop.”
RainStor certified for Hortonworks Data Platform. RainStor announced that it has successfully completed product testing and benchmarks resulting in certification of its designed-for-Big Data database on the Hortonworks Data Platform (HDP). Built and packaged by the core architects of Hadoop, HDP includes all the necessary components to manage a cluster at scale and uncover business insights from existing and new data sources. The RainStor solution was tested for data load, data compression and query response performance, achieving the loading of 1 billion records in 47 minutes, a 29X data compression rate and a query response rate for RainStor SQL that is on average 4 to 10 times faster than Hive. “RainStor’s database product is architecturally designed to run on HDFS and with its efficient way of storing data in a highly compressed format, you not only gain savings as your data volumes grow but you also speed up query performance,” said Ajay Singh, Director of Technical Channels, Hortonworks.
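The quoted load benchmark implies a sustained ingest rate that is easy to back out (a rough calculation from the figures above, not an additional RainStor claim):

```python
# 1 billion records loaded in 47 minutes, per the benchmark cited above.
records = 1_000_000_000
minutes = 47

rate = records / (minutes * 60)  # sustained records per second
print(f"{rate:,.0f} records/sec")  # roughly 354,610
```

At a 29X compression rate, those billion records would also land on disk at roughly 1/29th of their raw size, which is where the storage savings mentioned in the quote come from.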
Fujitsu restructures big data products and services. Fujitsu announced that it is restructuring its lineup of big data products and services under the FUJITSU Big Data Initiative. The big data initiative center organization will comprise 800 people and will serve as the foundation for delivering a complete array of big data products and services that accelerate innovation for customers and society. In addition to its Cloud PaaS Utilization Platform services launched last year and a high-performance PC cluster system that was developed for the K computer, Fujitsu will launch a support program for utilizing big data and bring these previous offerings within a single structure. A new facility will be opened within Fujitsu Trusted Cloud Square to hold workshops with customers on the use of big data and for performing verification testing.