Data Center Knowledge | News and analysis for the data center industry

Friday, May 22nd, 2015

    12:00p
    Moving to the Cloud and Understanding ROI

    No longer just a general buzz term, cloud computing has established itself as a real technology with various uses. In fact, Cisco reports that global data center traffic is firmly in the zettabyte era and will triple from 2013 to reach 8.6 zettabytes annually by 2018. A rapidly growing segment of data center traffic is cloud traffic, which will nearly quadruple over the forecast period and represent more than three-fourths of all data center traffic by 2018.

    Adoption rates are increasing as more organizations turn to a truly distributed infrastructure model and use more WAN-based tools. As underlying hardware improves and more bandwidth becomes available, cloud computing has become a serious consideration for a wide array of industry verticals. Everyone should be either adopting it, or at least considering it… Right?

    In numerous conversations with customers using various technologies, cloud computing remains a very active topic. However, these conversations are changing. Managers are no longer asking what the cloud is; now they want to know where they can actually apply it.

    “There is a flawed perception of cloud computing as one large phenomenon,” said Chris Howard, research vice president at Gartner. “Cloud computing is actually a spectrum of things complementing one another and building on a foundation of sharing. Inherent dualities in the cloud computing phenomenon are spawning divergent strategies for cloud computing success. The public cloud, hybrid clouds, and private clouds now dot the landscape of IT based solutions. Because of that, the basic issues have moved from ‘what is cloud’ to ‘how will cloud projects evolve’.”

    The cloud is everywhere

    Let's take the word "cloud" and break it down quickly. The term really just means data distribution over the WAN, whether in a private, public, or hybrid model. Because of the massive presence of the Internet, most organizations are already using the cloud without even knowing it. The difference is that they use only the cloud components that fit their means and needs: no more than they require, often just the basic building blocks of the Internet.

    On the other hand, some organizations keep a completely localized environment and only use WAN-based technologies to share files, store backups, or host sites on the Internet. Really, all of these technologies were available before “cloud” became popular. Because of this, administrators are asking, “Why do I need more when I already have so much?” The answer may be very simple – depending on the industry, they may be quite right.

    Understanding ROI and Use-Cases

    There are distinct advantages to moving to a cloud model.

    • Disaster Recovery
    • Backup and Storage
    • Testing and Development
    • Easier Management
    • Data Center Consolidation
    • Enabling Mobility
    • Offloading Security
    • And so on…

    Let me give you two specific examples around ROI:

    • A recent study by the International Working Group on Cloud Computing Resiliency found that, since 2007, roughly 568 hours of downtime have been logged across 13 major cloud providers, costing customers an estimated $72 million. For disaster recovery alone, a cloud model might absolutely make sense for you and present an excellent ROI.
    • Similarly, a recent Ponemon study looked at data breaches. Breaking a downward trend of the past two years, both the organizational cost of a data breach and the cost per lost or stolen record have increased: the average cost of a breach for organizations in the study rose from $5.4 million to $5.9 million, and the cost per record rose from $188 to $201. By moving a workload to a secure, cloud-hosted architecture, you can reduce the chances that sensitive data leaks. After all, looking at pretty much all of the major breaches that have happened recently, none of them happened within a major cloud provider. (A quick back-of-envelope calculation based on both studies follows below.)
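    Here is a minimal sketch in Python of that back-of-envelope math, using the figures cited above; the 100,000-record breach size is a hypothetical chosen only to illustrate the arithmetic, not something from either study.

```python
# Figures from the IWGCR and Ponemon studies cited above; the breach size is
# a hypothetical used only to illustrate the arithmetic.
downtime_hours = 568            # hours of downtime across 13 providers since 2007
downtime_cost_usd = 72_000_000  # estimated cumulative cost to customers

print(f"Average cost per downtime hour: ${downtime_cost_usd / downtime_hours:,.0f}")

cost_per_record = 201           # Ponemon: average cost per lost or stolen record
hypothetical_records = 100_000  # hypothetical breach size
print(f"Hypothetical 100,000-record breach: ${cost_per_record * hypothetical_records:,.0f}")
```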

    Although these examples are compelling, there is a challenge here as well. Too many organizations got caught up in the hype of the cloud conversation without doing a true cost/benefit analysis. Such an analysis can involve several business stakeholders, interviews with internal and external resources, and a clear vision for where the organization is going. Instead of jumping on a technological bandwagon, organizations should take the time to understand where a given technology fits into their business strategy.

    In some cases, a certain type of cloud model is simply not the right fit. Whether it is cost-prohibitive or just doesn't provide any additional benefits, cloud computing can deliver an ROI or it can be a hindrance.

    Where use cases become everything

    I’m not saying to be complacent. Complacency in IT can ruin an organization or a career. In the case of cloud computing, it may be the right move to wait on the technology. The advice is simple: Take the time to understand the cloud model and how it fits your business.

    Just like any technology, there will be benefits and challenges. In some cases, moving to the cloud may just not be conducive to the goals of the organization. It’s quite possible that a company has no intention of expanding, or moving their infrastructure to the Internet. Or, there may not be a need to offload workloads into the cloud. Also, there may be other good technologies to help deliver data and content to the end-user.

    The bottom line is this – the cloud model has a powerful presence and many organizations are adopting some part of the technology. But like any tool, piece of software or technological advancement, there needs to be a fit.

    1:00p
    Rackspace and CERN Unite OpenStack Clouds for Scientific Discovery

    Two years after signing a contributor agreement with CERN openlab to deliver a massive OpenStack-powered hybrid cloud solution, Rackspace announced an extension of that agreement to continue work on a reference architecture and operational model for federated cloud services.

    CERN, the European Organization for Nuclear Research, and Rackspace have been partnering to support CERN's large computing environment, and just this week saw the culmination of some of that work with the announcement at the Vancouver OpenStack Summit of broad support for OpenStack Identity Federation.

    Rackspace says the next iteration for CERN openlab is to extend the federation concept to standardized templates for full multi-cloud, open-standard orchestration, which it expects will enable customers to spin up an environment across multiple cloud platforms with a single action.

    For its part, Rackspace will fund a research fellow at CERN to help with the federation project, as well as provide services and remote assistance in design and implementation from Rackspace's product teams. CERN, in turn, will be using Rackspace Public Cloud and OnMetal services for testing. Just as the work the two did during the first phase of the project was contributed back to OpenStack, the next phase is expected to give back to the OpenStack Heat orchestration, Glance image, Keystone service catalog, and Nova compute projects.
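    As a rough illustration of what "spin up an environment across multiple cloud platforms with a single action" can look like for an operator, here is a minimal sketch using the openstacksdk Python library. The library, the cloud names, and the image and flavor names are assumptions for illustration only, not part of the CERN/Rackspace work described above.

```python
# A minimal sketch, assuming two clouds ("public-cloud" and "onprem-cloud") are
# defined in clouds.yaml and share federated Keystone identities. Image and
# flavor names are placeholders.
import openstack

SPEC = {"name": "batch-worker", "image": "fedora-atomic", "flavor": "m1.medium"}

for cloud_name in ("public-cloud", "onprem-cloud"):
    conn = openstack.connect(cloud=cloud_name)       # reads credentials from clouds.yaml
    image = conn.compute.find_image(SPEC["image"])
    flavor = conn.compute.find_flavor(SPEC["flavor"])
    # Network selection is left to each cloud's defaults in this sketch.
    server = conn.compute.create_server(
        name=SPEC["name"], image_id=image.id, flavor_id=flavor.id)
    print(f"{cloud_name}: booted server {server.id}")
```

    The point is less the specific SDK than the pattern: with federated identity and a common template, "boot the same environment in both clouds" collapses into a loop.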

    “Our CERN openlab mission is to work with industry partners to develop open, standard solutions to the challenges faced by the worldwide LHC community. These solutions also often play a key role in addressing tomorrow’s business challenges,” said Tim Bell, infrastructure manager in the IT department at CERN. “After our work on identity federation with Rackspace, this is a very important step forward. For CERN, being able to move compute workloads around the world is essential for ongoing collaboration and discovery.”

    Earlier this spring Seagate announced that it was also partnering with CERN openlab to help manage the 100 petabytes of data that the Large Hadron Collider has generated to date. Using the Seagate Kinetic Open Storage platform will enable CERN to improve performance and save costs by connecting object-oriented applications directly to the storage device, and will allow Seagate to further the open storage platform by testing it in an unparalleled data creation environment.

    2:00p
    C7 Expanding Utah Data Center

    C7 Data Centers has commenced construction of Phase 2 at its flagship Granite Point II Utah data center. The first phase of 35,000 square feet, opened in 2013, is nearing full occupancy, so the company is adding another 30,000 square feet. Phase 2 is expected to be available in October 2015.

    Phase 2 will complete Granite Point II's 95,000 square feet of space. The company is expanding power capacity in conjunction with the expansion: the 250,000-square-foot Granite Point campus will have over 11 megawatts, with an onsite substation planned for 2016.

    The Utah data center provider leverages the geographical advantages of Utah's cold desert locations for cooling efficiencies, using ambient air, cold-air containment, and cooling actuated to variable server heat loads. Because of these cooling efficiencies, C7 said it can deliver 75-80 percent of its power capacity to the critical load, versus an average of 50-60 percent.
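    As a rough sanity check on what those percentages imply, assume "power delivered to the critical load" maps to the IT share of total facility power, which is the inverse of PUE (an assumption; C7 may define its figure differently):

```python
# Assumption: percent of capacity reaching the critical load == IT load / total facility power.
for label, it_fraction in [("C7 claim (midpoint of 75-80%)", 0.775),
                           ("industry average (midpoint of 50-60%)", 0.55)]:
    print(f"{label}: implied PUE of roughly {1 / it_fraction:.2f}")
```

    Under that assumption, the claim works out to an implied PUE of roughly 1.3 versus roughly 1.8 for the quoted industry average.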

    Utah features some of the nation's lowest power rates, ambient air cooling, and a low disaster risk profile. While the region has traditionally been a disaster recovery hotspot, C7 has noted an increase in production customers. CEO Swenson also said that, while local customer growth has been solid, the company is seeing an increase in clients from outside the state and around the world.

    “Granite Point II is an extreme departure in data center design from the typical industry standard; in its aesthetic, ‘just in time’ effectiveness, and efficiency in provisioning and cooling,” said CEO Wes Swenson in a release. “We have had an overwhelming response to the product; it’s not like anything else in the market.”

    2:29p
    Friday Funny: Pick The Best Caption For iRobot

    So if you know me, you know how much I LOVE my iRobot Roomba; so much, in fact, that I actually did a full review on YouTube. It seems Kip shares the same excitement over the iRobot, too; unfortunately, I don't think it's the best way to keep the data center clean…

    Here’s how it works: Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon and we challenge our readers to submit a humorous and clever caption that fits the comedic situation. Then we ask our readers to vote for the best submission and the winner receives a signed print of the cartoon.

    Congratulations to Andrew H., whose caption for the “Smoking Rack” edition of Kip and Gary won the last contest with: “Hey Kip, will you flip my grilled cheese? It looks like that side is done.”

    Several great submissions came in for last week’s cartoon: “Help Wanted” – now all we need is a winner. Help us out by submitting your vote below!

    Take Our Poll

    For previous cartoons on DCK, see our Humor Channel. And for more of Diane’s work, visit Kip and Gary’s website!

    3:00p
    Colo Provider Digital Fortress Adding Cloud and Managed Services

    Pacific Northwest data center operator Digital Fortress is moving up the stack into managed services. The company also announced two big colocation deals with undisclosed companies in Seattle totaling almost 1 megawatt.

    CEO Matt Gerber joined the company last September to help lead the building of managed services and cloud businesses. Many mid-tier data center operators are finding that customers want more than power and pipe, so regional players are extending beyond facility management.

    However, it's not as simple as offering new services; a company must make an organizational shift as well. To this end, Digital Fortress hired key talent: Steve Voit as new vice president of sales and marketing and Robi Johnson as new vice president of operations.

    Voit is a former T-Mobile executive who will be charged with expanding the customer base and product lines. Johnson helped 2nd Watch develop and operate a top AWS-partner managed cloud business and will be instrumental in building the managed services practice at Digital Fortress.

    The challenge in building a managed services business is acquiring the skill sets required to step up and across the stack, said Gerber, and both hires are key to that organizational shift.

    Digital Fortress will first begin offering core IT infrastructure management. A natural extension of what it already provides, the service will extend monitoring and management, showing how well customer infrastructure is running, down to disk utilization, and will add more proactive hands-on help with getting up and running as well as monthly maintenance such as patching. The company has engaged a handful of existing customers and is running a proof of concept with one of its largest customers, said Gerber.
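    That "down to disk utilization" level of monitoring is the kind of check that is easy to automate. A minimal sketch in Python using the psutil library (an assumption on my part; the article does not say what tooling Digital Fortress uses), flagging partitions above a utilization threshold:

```python
import psutil

def disk_report(threshold_pct: float = 80.0) -> None:
    """Print any mounted partition whose utilization exceeds the threshold."""
    for part in psutil.disk_partitions(all=False):
        usage = psutil.disk_usage(part.mountpoint)
        if usage.percent >= threshold_pct:
            print(f"{part.mountpoint}: {usage.percent:.1f}% used, "
                  f"{usage.free / 2**30:.1f} GiB free")

if __name__ == "__main__":
    disk_report()
```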

    As for cloud offerings, Gerber said the company is still evaluating potential private cloud offerings and will also offer managed public cloud alongside colocation and hands-on management.

    “While colocation is showing healthy growth, we do see this great secular rotation out of retail colo and into cloud,” said Gerber. “If you’re going to be a successful operator, you need to be on the receiving end or help the customers ascending into that.”

    Regarding the two recent colocation deals, Gerber couldn't disclose the names or sectors of the companies involved, but said it was one of the largest data center deals in play in Seattle over the last 12 months.

    The company was created in 2012 when two local operators named Fortress and Digital Forest merged. The merged company then acquired 18,000 square feet of additional space in a downtown Seattle facility, which it upgraded to the tune of millions of dollars.

    “We’re always actively looking at other markets,” said Gerber. “When you look at our vision for the business going forward, the model we’re creating here in the northwest is applicable globally.”

    3:00p
    OpenStack Magnum: Containers-as-a-Service For Cloud Operators

    OpenStack recently demonstrated software at the OpenStack Summit in Vancouver that will allow cloud operators to offer Containers-as-a-Service as a managed hosted service.

    Called Magnum, it is a multi-tenant Containers-as-a-Service designed for OpenStack that combines OpenStack, Docker, Kubernetes, and Flannel to produce a container solution that works like other OpenStack services. Magnum exists not to provide a better kind of container or to reinvent the wheel, but to make current container technology and tools work well with OpenStack. Magnum officially joined the OpenStack project list in March.

    The OpenStack Containers Team developed the API service to make container management tools such as Docker and Kubernetes available as first-class resources in OpenStack. Application containers have a distinctly different lifecycle and operational model than Nova (machine) instances. In the same way that the Nova project provides pluggability for hypervisors, Magnum does the same for containers.

    Magnum offers a choice of Container Orchestration Engines to deploy and manage containers in arrangements called “bays.” Currently, Magnum uses Swarm and Kubernetes for clustering, with others expected in the future. Thanks to its modular architecture, it is easy to unplug one engine and switch to another if the prevailing technology shifts.
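    For a sense of what driving Magnum looks like, here is a rough sketch against its v1 REST API using plain Python requests. The endpoint, port, and field names reflect my understanding of the Kilo-era API and should be treated as assumptions rather than a definitive reference.

```python
# A minimal sketch, assuming a Magnum endpoint on its default port (9511) and a
# valid Keystone token. Field names follow the Kilo-era v1 API and may differ
# in later releases.
import requests

MAGNUM = "http://controller:9511/v1"                      # assumed Magnum endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>",            # token obtained from Keystone
           "Content-Type": "application/json"}

# 1. A bay model captures the Container Orchestration Engine choice and image.
baymodel = requests.post(f"{MAGNUM}/baymodels", headers=HEADERS, json={
    "name": "k8s-model",
    "image_id": "fedora-21-atomic",
    "keypair_id": "ops-key",
    "coe": "kubernetes",
}).json()

# 2. A bay is the running cluster built from that model (Magnum drives Heat).
bay = requests.post(f"{MAGNUM}/bays", headers=HEADERS, json={
    "name": "k8s-bay",
    "baymodel_id": baymodel["uuid"],
    "node_count": 3,
}).json()
print(bay.get("status"))
```

    In practice you would go through python-magnumclient or a keystoneauth session rather than raw tokens; the raw calls are shown only to make the bay/baymodel relationship visible.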

    “It provides them a way to future proof their bet,” said Adrian Otto, the Rackspace architect who leads the OpenStack Containers Team. “If we don’t provide a solution for extracting, they either need to wait and see, which is bad, or fork out ahead of time – also bad.”

    Cloud and containers are increasingly built around the idea of interconnection of services and this is leading to a standardization of parts.

    Magnum shares its name with a type of firearm, and the firearms industry shares a historical manufacturing progression with the cloud.

    “There was a time when gun manufacturers and car manufacturers moved from handcrafting to standardized parts and assembling,” said Van Lindberg, VP of legal at Rackspace and an OpenStack board member. “It became about speed of production. Cloud architectures are increasingly built around the idea of standardized images, containers and parts, in the service sense. We’re getting to the place where we’re starting to not hand create.”

    While the container story has largely focused on making developers' lives easier by packaging dependencies along with an app so it just works anywhere, another big driver for adoption is overall efficiency. Otto believes that containers will bring another order of magnitude in efficiency, comparable to what virtualization did in the 2000s. However, containers are a complement to, not a replacement for, virtualization, he stressed.

    The ability to create an encapsulation that makes an application portable, with all its dependencies attached, solves a very important operational problem, said Otto. With complex applications, many times the app doesn't work by the time you deploy it into a production environment, a failure caused by “environmental drift.” Containers solve the problem by carrying the environment along and eliminating a lot of that overhead.

    Containers also make what you move around smaller, which means better operational efficiency: no more unnecessary repetition of a common stack, operating system, and so on. For organizations that run hundreds of different applications, this adds up significantly.
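    Here is a tiny sketch of that portability argument using the Docker SDK for Python (the SDK itself is an assumption; the article discusses Docker but not this library): the image pins the runtime and dependencies, so the same call behaves the same on a laptop, in a bay, or in production.

```python
# Minimal sketch: the python:3-alpine image carries the interpreter and its
# dependencies, so there is no environmental drift between hosts that run it.
import docker

client = docker.from_env()
output = client.containers.run(
    "python:3-alpine",
    ["python", "-c", "print('hello from a portable container')"],
    remove=True,  # clean up the container once it exits
)
print(output.decode().strip())
```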

    “In cloud today, 80 percent is similar from customer to customer. There’s no need to carry that around; that can be another level of commonality,” said Otto, adding that containers are not a security isolation instrument, while hypervisors are.

    “Containers are convenient, but not a security barrier,” he said. “They can be arranged in more secure ways but only if you know what you’re doing. Containers are not a replacement for virtualization.”

    The OpenStack Containers Team was founded around the same time as the OpenStack Atlanta Summit last year. The Magnum project is diverse, with 18 companies contributing more than 100,000 lines of code and 1,800 patch sets. “It’s a very active project by any measure and speaks to the excitement around bringing the functionality to users,” said Otto.

    3:36p
    DCIM News 5/22/15

    Device42 adds software license management to its software, Schneider Electric wins a top DCIM product of the year award and talks about keeping DCIM from becoming isolated from other enterprise systems, and Vigilent is named a top startup by the TiEcon entrepreneur conference.

    1. Device42 launches new software license management module. Device42 has added a Software License Management module to its infrastructure management software. The new module provides Windows and Linux autodiscovery, easy setup of software licensing models, and complete auditing of software licenses.
    2. Schneider Electric wins DCIM award at DCS Awards 2015. The Schneider Electric StruxureWare for Data Centers DCIM software won the DCIM product of the year award at the 2015 European DCS Awards. The integrated DCIM suite has won the award for the second year running, and provides live dashboards, mobile operation for real-time tracking, and on-the-go access via smartphone and tablet apps.
    3. Schneider Electric on stopping DCIM from becoming isolated. A Schneider Electric blog post discusses the future of DCIM software and the integration, interaction, and interfaces that DCIM should have with other enterprise management systems.
    4. Vigilent named TiE50 top startup. Dynamic cooling management systems provider Vigilent announced that it was named a TiE50 Top Startup winner at TiEcon 2015, a conference for entrepreneurs. This was the fourth consecutive win of the award for Vigilent, which was recognized for demonstrating market leadership, innovation with demonstrable business results, and unique intellectual property.

    4:00p
    Top Five Benefits of Transaction-Centric NPM

    While Application Aware Network Performance Monitoring (AA NPM) solutions have improved IT’s visibility into application usage and behavior, most organizations continue to struggle with many of the performance management challenges they hoped AA NPM would address. In contrast, a transaction-centric perspective of network performance brings diverse IT and business teams to a collaborative table via a common language.

    In this webinar we’ll highlight the importance of transaction recognition for effective – meaning business-savvy – performance management and collaboration. Included in the webinar:

    • The evolution of the modern application
    • A look at how cloud, the user, and the modern workload are all impacting the application and your performance
    • The importance of a common definition of the term “transaction”
    • The top five NPM benefits of having a true transaction perspective
    • Examples of how network operations teams can effectively collaborate with development and business peers

    Register Now


    Meet the Presenters

    Gary Kaiser
    Product Management
    Dynatrace

    Bill Kleyman
    VP of Strategy and Innovation
    MTM Technologies

    4:30p
    Load DynamiX Extends Storage Testing Reach

    There's no doubt that, from a storage perspective, cloud computing introduces a lot of additional complexity into the data center environment. To help IT organizations better understand the nuances of cloud storage, Load DynamiX, a provider of a storage I/O testing platform, has added the ability to run tests against NFSv4.1, Ceph, Amazon S3, and OpenStack Swift cloud storage, along with support for Fibre Channel over Ethernet (FCoE).

    Len Rosenthal, vice president of marketing for Load DynamiX, says that more often than not, IT organizations wind up overprovisioning storage because they don't have much visibility into actual I/O performance inside their data center environments.

    “Storage today on average makes up 40 percent of the IT budget,” says Rosenthal. “We enable organizations to cut costs by simulating storage I/O performance against production data.”

    The Load DynamiX platform consists of a dedicated appliance to run the tests and the modules needed to create them. Priced starting at $100,000, the platform is designed to take a lot of the trial and error out of deploying storage systems, Rosenthal says, with an eye toward reducing the amount of physical space needed for storage systems inside the data center.

    In terms of OpenStack support, Load DynamiX currently supports both the Swift and Cinder storage protocols. Later this year, Load DynamiX will add support for the Manila file-based protocol once it is finalized by the OpenStack community.

    Originally developed to enable storage vendors to test their offerings, Rosenthal says Load DynamiX is now being embraced by IT organizations that need to address I/O storage bottlenecks across block, file, and object-based storage systems.

    To simplify that process in complex data center environments, Load DynamiX in this release is also making available a Composite Workload Editor facility that can be used to create multi-threaded I/O patterns across multi-tiered storage infrastructure and access protocols.

    While vendors generally rate their storage systems in terms of maximum throughput, the number of IOPS a storage system can support tends to vary widely by application environment. Making things more challenging, in the era of software-defined storage many IT organizations have started to embrace white box storage systems that can be configured with any number of solid-state and magnetic storage devices.
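    That variance shows up even in a crude measurement. Here is a minimal sketch in Python (standard library only, unrelated to the Load DynamiX product) that estimates 4K random-read IOPS against an existing file; note the operating system page cache will inflate the numbers unless it is bypassed.

```python
import os
import random
import time

def random_read_iops(path: str, block_size: int = 4096, duration: float = 2.0) -> float:
    """Very rough 4K random-read IOPS against an existing file (POSIX only)."""
    blocks = max(os.path.getsize(path) // block_size, 1)
    fd = os.open(path, os.O_RDONLY)
    ops = 0
    try:
        deadline = time.monotonic() + duration
        while time.monotonic() < deadline:
            os.pread(fd, block_size, random.randrange(blocks) * block_size)
            ops += 1
    finally:
        os.close(fd)
    return ops / duration

if __name__ == "__main__":
    # Placeholder path to a pre-created test file.
    print(f"{random_read_iops('/tmp/testfile.bin'):.0f} IOPS (cache-inflated)")
```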

    Optimizing and then validating each storage system configuration across any number of types of application workloads represents a major investment in both time and money that most IT organizations would probably prefer to be able to apply somewhere else.

    4:45p
    Report: Facebook Interested In $1B Fort Worth Data Center Project

    Documents unveiled by the Dallas Morning News reveal Facebook is potentially attached to a massive, $1 billion data center project in Fort Worth, Texas. The project includes a planned 750,000 square foot data center, built in increments of 250,000 square feet.

    The name of the company behind the secretive project is being closely guarded. Documents filed with the state of Texas reveal that the planning team working on the project comprises the same companies behind Facebook's $300 million data center in Des Moines, Iowa, which opened last year. The law firm representing the unknown company also works for Facebook.

    Fort Worth recently approved financial incentives for the 110-acre data center project, planned in the AllianceTexas development. The Northwest Independent School District is also working on tax abatements.

    Facebook is also reported to be considering a data center in Ireland. Yesterday, DuPont Fabros revealed that the social network signed up for another 7.5 megawatts at the company's Ashburn campus. Facebook also has data centers in Oregon and North Carolina.

    Dallas-Fort Worth is a bustling data center market. Rich in fiber-optic infrastructure, the Dallas-Fort Worth area is a key Internet hub. It is the birthplace of several of the biggest hosting and cloud providers in the world such as Rackspace and IBM’s SoftLayer.

    5:00p
    Infoblox Unveils Secure DNS Server

    Moving to address emerging security issues relating to how DNS is used inside the data center, Infoblox this week unveiled an appliance that can detect and block DNS attacks.

    Designed to be deployed in a network rack, the Infoblox Internal DNS Security appliance prevents hackers from launching attacks against the DNS servers that data center operators deploy inside the data center to manage external requests.

    DNS servers inside the data center have become targets in recent months because firewalls don't inspect DNS queries, says Arya Barirani, vice president of product marketing for Infoblox. Unfortunately, as one of the earliest Internet technologies ever developed, DNS assumes a level of trust that doesn't exist in the IT world today, Barirani says.

    “Hackers are now going after the soft underbelly of the network,” says Barirani. “We’re starting to see more attacks aimed at the DNS server inside the data center.”

    Specifically, Barirani says the Infoblox Internal DNS Security appliance is designed to harden the DNS server in a way that prevents distributed denial of service (DDoS) attacks from being launched. Barirani says it also prevents malware from hijacking the server to communicate with a botnet or steal data using DNS queries.
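    One common heuristic for spotting that kind of DNS data theft is flagging query names whose labels look like encoded payloads rather than hostnames. A minimal, illustrative sketch in Python follows; the length and entropy thresholds are assumptions for illustration and have nothing to do with how the Infoblox appliance actually works.

```python
import math
from collections import Counter

def entropy(s: str) -> float:
    """Shannon entropy of a string, in bits per character."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def suspicious_query(qname: str, max_label: int = 40, max_entropy: float = 3.8) -> bool:
    """Flag names with very long labels or high-entropy host parts (illustrative thresholds)."""
    labels = qname.rstrip(".").split(".")
    host_part = "".join(labels[:-2])  # crude: ignore the registered domain and TLD
    return any(len(label) > max_label for label in labels) or (
        len(host_part) > 16 and entropy(host_part) > max_entropy)

# Example: a base32-looking label is flagged, an ordinary hostname is not.
print(suspicious_query("nbswy3dpeb3w64tmmqqho33snrscc.evil.example."))  # True
print(suspicious_query("www.datacenterknowledge.com."))                 # False
```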

    The Infoblox Internal DNS Security appliance is a complement to the Infoblox External DNS Security appliance, which is designed to mitigate external threats such as volumetric DDoS, DNS hijacking, DNS-based exploits, and reconnaissance attacks. When a DDoS attack is detected, the appliance is designed to block hostile DNS traffic.

    In addition, Infoblox has exposed a set of application programming interfaces through which its appliance can consume threat intelligence provided by third-party security intelligence services. In the event of a DNS attack, the Infoblox appliances can be alerted to the threat before those attacks cripple a local DNS server.

    At this point, however, the biggest challenge may be finding who inside the data center is actually responsible for securing the DNS server. The networking team often tends to think of anything to do with security to be outside their purview. The IT security team, conversely, assumes anything associated with DNS is being handled by the networking specialists inside the data center.

    Hackers, meanwhile, are not only getting savvier about exploiting seams in how teams inside data centers are organized, they also have access to advanced analytics tools that make it easy to identify vulnerable DNS servers. Given the sophistication of those tools, it's probably only a matter of time before an existing DNS server gets compromised.

    6:00p
    Peer 1 Hosting Launches New Hybrid Cloud Solution


    This article originally appeared at The WHIR

    Peer 1 Hosting launched a hybrid cloud solution this week that it calls True Native Hybrid Cloud, a unified platform that it says represents a shift in the industry definition of “hybrid.” True Native Hybrid Cloud was built specifically for hybrid customers from the ground up, enabling them to maximize cloud benefits and flexibility without burdening the enterprise with the complexity of other self-service platforms, the company says.

    Peer 1 claims that its platform is a different kind of hybrid from competitor hybrid cloud offerings, which it describes as standard cloud platforms with integrated dedicated servers. Its approach uses the infrastructure of its enterprise private cloud, Mission Critical Cloud, with bare metal servers and a complete range of services and cloud solutions, all accessible through its On Demand Cloud Platform web portal.

    The ease of scaling up and down, activating and decommissioning services from a single web interface gives enterprise customers the agility to take advantage of utility billing.

    “The hybrid cloud market is accelerating considerably and True Native Hybrid Cloud has been designed specifically to address the market need for ‘true’ hybrid, providing businesses with the ability to instantly provision and easily control bare metal and virtual cloud services via a single web interface,” said Toby Owen, VP of Product at Peer 1. “It puts the power back in the hands of the customer, giving them exactly what is required to adapt quickly to changing business requirements.”

    Owen points to the reputation of self-service hybrid solutions for being too technical as an argument for Peer 1's “native” hybrid cloud. Hybrid cloud has been growing rapidly, and a survey released by Peer 1 last month shows adoption could rise from 10 percent to 28 percent by 2018. Gartner said earlier this month that 2016 will be a defining year for cloud, with hybrid pushing past private cloud as the first choice of businesses. A Carbonite and IDC survey last month also showed SMBs targeting hybrid, while remaining concerned about complexity.

    This first ran at: http://www.thewhir.com/web-hosting-news/peer-1-hosting-launches-new-hybrid-cloud-solution

    7:16p
    Cisco Extends Storage Switch Lineup

    Looking to make it easier to consolidate the number of racks and amount of cabling required in the data center, Cisco this week added a 96-port Fibre Channel switch and a 40G converged Ethernet switch to its storage portfolio.

    In addition, Cisco announced it is adding 16G support to its existing MDS 9700 and 9250i platforms and support for IBM FICON Distance Extension using 10G FCIP and specialized acceleration technologies.

    Nitin Garg, senior manager for product management in the Cisco Data Center Group, says that with these additions Cisco is trying to make it simpler to scale storage deployments in the data center while also reducing the number of switches that actually need to be deployed.

    “We’re trying to make it easier to adapt to changing needs over time,” says Garg. “That means being able to support multi-protocol capabilities.”

    In general, Garg says IT organizations need to be able to deploy Fibre Channel to support high-performance workloads. But there are also scenarios where IP Ethernet and Fibre Channel over Ethernet need to be supported. The Cisco lineup of network storage switches is designed to all run the same operating system, which Garg says makes it simpler to manage the overall storage environment.

    Garg says the 96-port Cisco MDS 9396S Fabric Switch can be configured with as few as 12 ports and then upgraded in 12-port increments, which means customers aren't forced to pay for 96 ports when, for example, they only need 48 at the moment.

    The 40G Ethernet support in the Cisco Nexus 7700 and Nexus 7000 platforms, meanwhile, enables IT organizations to support FCoE, NAS, iSCSI, IP-based object storage, and LAN connectivity on a single platform.

    While Cisco has become a force to be reckoned with in terms of delivering blade servers, the company’s efforts in leveraging its switching expertise in the realm of storage have been somewhat overshadowed. Garg says the end goal is to be able to not only mix and match Cisco switches as needed, but also be able to connect to both Cisco Unified Compute Systems and mainframes.

    Given the amount of diversity that currently exists inside most data center environments, being able to use a common base of switches to connect to multiple classes of servers no doubt offers a certain amount of appeal. Of course, Cisco is not the only switch vendor pursuing that strategy. But it may be one of the few vendors with the dedicated networking and server expertise needed to make it really work.
