
Thursday, July 30th, 2015

    12:00p
    Brocade CEO: Specialized Data Center Network Gear on Its Way Out

    The days of having x86 servers by Intel or AMD running business apps on one side of the data center and specialized network chips by Cisco, Juniper, or Brocade running networking apps on the other are numbered. That’s according to Brocade CEO Lloyd Carney, who believes x86 servers will replace most if not all specialized networking hardware in data centers of the future.

    “Ten years from now, in a traditional data center, it’ll all be x86-based machines,” he says. “They will be running your business apps in virtual machines, your networking apps in virtual machines, but it’ll all be x86-based.”

    Software-defined networking and network function virtualization, in combination with constant advances in capabilities of the x86 server architecture, are changing everything about data center networking technology and economics, and in Carney’s opinion it’s only a matter of time before most of the world’s data center operators delegate the majority of network functions to x86 servers.

    That’s the data center architecture of the future, Carney says, because of the performance and price-point gains Intel, followed by AMD, continue to churn out and because of the focus the chipmakers now have on packet forwarding. “Two years ago we did 1 Gig packet forwarding in an Intel pizza box; two years later we’re doing 10 Gig packet forwarding in an Intel pizza box, using the same software by the way. Intel just turned the crank on the architecture, and the same software gave you an extra 10X of performance.”

    Bigger Piece of a Smaller Pie

    So, what does that mean for Brocade? His forecast is that the overall size of the data center networking market will shrink, while Brocade’s share of that market will increase. The Silicon Valley company has been investing in all things networking software, and Carney sees the imminent sunset of dedicated networking hardware as a big opportunity for Brocade, whose current data center market share is fairly small compared to Cisco’s and those of other incumbents such as HP, Juniper, Arista, and Huawei.

    For the sake of comparison, Brocade’s 2014 revenue was about $2.2 billion, while Cisco reported more than $47 billion in revenue in fiscal 2014. But as software replaces the dedicated hardware that has traditionally handled network functions such as routing, application delivery control, and load balancing, the incumbent vendors’ hardware revenue will shrink.

    The incumbents have not been ignoring this trend, but unlike the incumbents, Brocade doesn’t have a massive data center hardware business to protect. “We don’t sell hardware components that compete in that space,” Carney said. “We’re displacing others. It’s always tough for the incumbent to pull the trigger on displacing themselves; for us, it’s a green field.”

    As Fibre Channel Market Starts to Flat-Line, Strong Ethernet Play is Crucial

    The company’s SDN play revolves to a great extent around an SDN controller technology it gained with its acquisition of a startup called Vyatta in 2012. The Vyatta controller, which is now called the Brocade SDN Controller, is based on OpenDaylight, an open source SDN technology and a Linux Foundation project the networking company is deeply invested in. Brocade also supports OpenFlow, a popular open SDN protocol.
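
    For readers who want a concrete sense of what supporting OpenDaylight looks like in practice, here is a minimal sketch that queries an OpenDaylight-style controller’s RESTCONF interface for the switches it currently manages. It is an illustration only: it assumes RESTCONF is enabled on the default port 8181 and uses a placeholder controller address and default-style credentials, not anything specific to the Brocade SDN Controller.

        import requests
        from requests.auth import HTTPBasicAuth

        # Placeholder controller address and credentials (assumptions).
        CONTROLLER = "http://203.0.113.10:8181"
        URL = CONTROLLER + "/restconf/operational/opendaylight-inventory:nodes"

        resp = requests.get(URL,
                            auth=HTTPBasicAuth("admin", "admin"),
                            headers={"Accept": "application/json"},
                            timeout=10)
        resp.raise_for_status()

        # Each "node" in the operational inventory is typically an OpenFlow
        # switch, with an ID such as "openflow:1".
        for node in resp.json().get("nodes", {}).get("node", []):
            print(node.get("id"))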

    Growing revenue in the data center Ethernet switch market, virtual or physical, is important to Brocade long-term. While it’s been in the Ethernet switch space for years, it’s known primarily for its enormous presence in the Fibre Channel storage networking market, which is showing signs of slowly drying up. It is still a $1 billion-plus market, but it’s starting to flat-line, Rohit Mehra, a VP at IDC Research covering enterprise and data center network infrastructure, says. The Fibre Channel market will be there for a while, but most of the growth is going to be in the IP and Ethernet networking space.

    “They need to be playing the game from both angles,” Mehra says about Brocade. “They need to understand that the growth is going to be on the Ethernet side and continue to position their wins appropriately. Because they have had success in the sectors that are not Fibre Channel.”

    How Open is Too Open?

    Among those wins in the data center space is Brocade’s network fabric technology. Analysts and customers have cited automation and ease of use as its strengths. According to Carney, the company has its heritage in storage networking to thank for that. “We kind of cut our teeth in fabrics on the storage side of the house,” he says.

    Data center operators don’t need to do a lot of manual configuration work when scaling their infrastructure using the fabric. “You plug the boxes in; they auto-learn,” Carney says. The fabric knows what should connect to what and load-balances automatically. “In the old IP world, that’s a whole lot of spanning-tree entries.”

    All key vendors claim to have good automation tools, Mehra says. But there’s still a lot of work to be done, especially in the area of interoperability between disparate tools. “We have to find ways to have common automation tools, as opposed to [the] entire suite being completely proprietary,” he says.

    As vendors like Brocade collaborate on standards like OpenDaylight and OpenFlow, they have to strike a balance between standardization and differentiation. It’s easier to automate one company’s proprietary stack, but customers generally don’t like locking themselves into a single vendor’s technology, and automation across multiple companies’ products is a lot more complicated. Yet if vendors agree on a common set of automation tools, how does a company like Brocade differentiate? Finding that balance will be key as the data center networking market moves forward.

    3:00p
    Five Tips for Eliminating Migration Migraines

    David Wegman is Senior Vice President of Integrated Accounts for Vision Solutions.

    Many CIOs are concerned about the fallout of failed migrations, which are a painful waste of time and resources, and their concerns are not unfounded.

    In its 2015 State of Resilience report, Vision Solutions revealed that more than one-third of respondents had experienced a migration failure. While failures are a relatively common occurrence, they are not inevitable.

    Uncertainty, risk and extended downtime don’t need to be migration realities. By choosing a trusted partner and utilizing modern technology and methodology, companies can achieve near-zero downtime during migrations, minimizing their impact.

    Companies should consider the following when selecting a migration solution:

    Make Real-Time Replication a Priority

    The solutions that offer the most flexibility and the most current data, while minimizing impact on users during testing and migration, are typically software-based and replicate activity from the production server to the target server in real time. This allows IT to keep the production server up and running rather than freezing it or periodically pausing it for snapshots. The production server remains fully functional, data is as current as the last transaction, and users continue working. IT can test applications on the new server and prove out the migration methodology and plan without impacting the production environment. Ultimately, this makes IT more productive, all while the migration is taking place.
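
    A minimal sketch of the idea, not Vision Solutions’ actual product: a queue stands in for the replication channel, a background thread stands in for the production workload, and the target applies each change as it arrives, so it is current as of the last transaction.

        import queue
        import threading
        import time

        changes = queue.Queue()    # stand-in for the replication channel
        target_db = {}             # stand-in for the target server's data

        def production_workload():
            """Production keeps serving users; every write is also queued for replication."""
            for txn_id in range(1, 6):
                change = {"txn": txn_id, "key": f"order-{txn_id}", "value": txn_id * 100}
                changes.put(change)        # sent as it occurs, no snapshot pause
                time.sleep(0.1)
            changes.put(None)              # end-of-stream marker for this demo

        def apply_to_target():
            """The target applies changes in order, staying current as of the last transaction."""
            while (change := changes.get()) is not None:
                target_db[change["key"]] = change["value"]

        threading.Thread(target=production_workload).start()
        apply_to_target()
        print("target is in sync:", target_db)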

    A second consideration is the ability to take the distance between the production server and the target server out of the equation. Because real-time replication sends changes as they occur, it minimizes the amount of data on the communication line, and distance becomes less of an issue. When coupled with compression and throttling, this creates a high degree of efficiency.
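
    Another illustrative sketch, with made-up numbers: compress each replicated change before it crosses the link, and throttle the send rate so replication traffic does not saturate the line.

        import json
        import time
        import zlib

        MAX_BYTES_PER_SEC = 1_000_000      # illustrative throttle: roughly 1 MB/s

        def send_change(change: dict) -> int:
            payload = zlib.compress(json.dumps(change).encode("utf-8"))
            time.sleep(len(payload) / MAX_BYTES_PER_SEC)   # crude rate limit
            # a real product would now write `payload` to the replication socket
            return len(payload)

        raw = {"table": "orders", "row": 42, "cols": {"status": "shipped"}}
        sent = send_change(raw)
        print(f"compressed change to {sent} bytes before sending")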

    Finally, because databases and servers are in sync at all times, IT does not need to freeze production and wait for final validation of the testing server. Weekend migrations are no longer the norm, as the switch to new environments can occur at any time the business is ready and take place in as little as 20 minutes, a notable improvement over switch times in traditional migrations.

    Unify Consoles to Simplify the Process

    Another feature companies should demand in their migration solution is a unified console that allows IT to work on all types of migrations with a common workflow across operating systems and platforms. This provides a major advantage as it mitigates the need for different skillsets typically required for different types of migrations by platform or workload.
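
    The sketch below illustrates the design idea in code: one workflow (validate, sync, switch over) regardless of platform, with platform-specific details hidden behind a common interface. The class and method names are hypothetical and are not any vendor’s actual API.

        from abc import ABC, abstractmethod

        class MigrationJob(ABC):
            """One common workflow, whatever the platform underneath."""
            def run(self):
                self.validate()
                self.sync()
                self.switch_over()

            @abstractmethod
            def validate(self): ...
            @abstractmethod
            def sync(self): ...
            @abstractmethod
            def switch_over(self): ...

        class LinuxToCloudMigration(MigrationJob):
            def validate(self):     print("checking source and target hosts")
            def sync(self):         print("replicating data in real time")
            def switch_over(self):  print("cutting over to the cloud target")

        # Same workflow, same operator, different platform underneath.
        LinuxToCloudMigration().run()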

    While IT staff certainly need to understand the underlying architectures and databases, a uniform console and workflow reduces training time and makes the most of the existing team’s skillsets. A single operator can perform parallel migrations across multiple platforms after product training, minimizing the drain on resources.

    Consolidate Migration Streams for Faster Execution

    Simultaneous executions also ease the impact on the business. A solution that allows users to run parallel streams of migrations saves companies significantly more time than traditional methods. This approach facilitates near-zero downtime, shortens time to completion, mitigates costs, and frees up IT resources to focus on other projects.
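
    As a rough sketch of the parallel-stream idea (the workload names and one-second “migrations” are placeholders):

        import time
        from concurrent.futures import ThreadPoolExecutor

        def migrate_workload(name: str) -> str:
            time.sleep(1)                  # stand-in for the actual copy/sync work
            return f"{name}: migrated"

        workloads = ["erp-db", "web-frontend", "file-server", "mail"]

        # Four streams run concurrently, so wall-clock time is roughly one
        # stream's duration instead of the sum of all four.
        with ThreadPoolExecutor(max_workers=4) as pool:
            for result in pool.map(migrate_workload, workloads):
                print(result)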

    Minimize Risk Via Automation

    Traditional methods typically require a fair degree of manual work, which translates into a higher degree of risk. While no migration can happen without people, automated solutions diminish risk by reducing the amount of human interaction. This is important for organizations to keep in mind: a solution that does not provide APIs and the ability to automate as much of the work as possible introduces additional human interaction, and therefore risk. Without automation, migrations are more likely to fail, run over budget, or last longer than expected.
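
    The sketch below shows what API-driven automation looks like in principle. The endpoint, payload, and job object are entirely hypothetical; the point is that a scripted job replaces a manual checklist.

        import time
        import requests

        API = "https://migration.example.com/api/v1"      # hypothetical endpoint

        # Kick off an automated migration job instead of running steps by hand.
        job = requests.post(f"{API}/migrations", json={
            "source": "prod-db-01", "target": "cloud-db-01", "mode": "real-time"
        }, timeout=30).json()

        # Poll until the job finishes; no manual checklist to mis-execute.
        while True:
            status = requests.get(f"{API}/migrations/{job['id']}", timeout=30).json()
            if status["state"] in ("completed", "failed"):
                print("migration", status["state"])
                break
            time.sleep(60)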

    Gain Flexibility Through Hardware- and Software-Independent Solutions

    Every server is different, and topologies change rapidly. Companies need to address migrations across server types, chipsets, storage devices, databases, versions and the like in any migration plan. A hardware- and software-independent solution reduces the risk potential in these areas. This model allows users to migrate data seamlessly from any one type of environment to another. The options are virtually endless – from physical to virtual to cloud across any operating system, chipset or storage device.

    Using platform-independent technology makes many scenarios possible, including migrating between storage from different vendors, migrating to a server located anywhere in the world, consolidating servers with many-to-one migrations, and moving operations to a new data center across extended distances with very little downtime.

    While data migrations will always entail a certain amount of risk and downtime, modern solutions have greatly improved the chances of a positive outcome. By following these steps, companies can act confidently, instead of out of fear, to execute migraine-free migrations.

     

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:57p
    RF Code Unveils Data Center Asset Management Framework

    As the data center itself becomes more of a virtual entity, managing all the assets that make up that entity has become more challenging. In a world where virtual data centers are really data centers within data centers, IT organizations have a hard time keeping track of where anything is actually located.

    To address that issue, RF Code this week launched CenterScope, a framework consisting of best practices and streamlined methodologies built around the company’s real-time environmental monitoring and asset management software.

    CenterScope is designed to make it simpler to track assets not only inside traditional data centers but also in small server rooms, colocation sites, and other globally distributed locations. Richard Jenkins, vice president of marketing and business development for RF Code, said the goal is to provide a lightweight framework that can be easily deployed to track IT assets.

    Rather than requiring a large-scale data center infrastructure management (DCIM) application to discover and track that information, CenterScope provides an alternative way to capture much the same level of detail, which may be enough for most organizations or can be fed into a DCIM application running in a centralized data center facility.

    Tools included within CenterScope include predictive analysis, U-level rack asset management, 3D data center visualization, asset lifecycle reporting, dynamic global mapping, and a set of open application programming interfaces (APIs).
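
    RF Code has not published endpoint details here, so the following is a hypothetical sketch of how such open APIs might be consumed, with a made-up base URL, token, and response shape; the point is that asset location data can be pulled into an organization’s own tooling or handed off to a DCIM system.

        import requests

        BASE = "https://centerscope.example.com/api"      # hypothetical base URL
        TOKEN = "replace-with-real-api-token"             # hypothetical credential

        resp = requests.get(f"{BASE}/assets", params={"site": "colo-east"},
                            headers={"Authorization": f"Bearer {TOKEN}"},
                            timeout=15)
        resp.raise_for_status()

        for asset in resp.json():
            # e.g. which rack and U position a tagged server was last seen in
            print(asset.get("asset_id"), asset.get("rack"), asset.get("u_position"))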

    “We’re providing a bag of tools in a framework to easily capture data,” Jenkins said. “In some cases that’s enough; in other cases that data gets fed into a DCIM application.”

    The challenge, he said, is that IT infrastructure tends to move around. All told, industry analysts estimate that as much as 20 percent of all IT infrastructure moves within a given year.

    The RF Code approach to asset management is designed to more flexibly adapt to the requirements of distributed computing environments that are subject to frequent changes. With the rise of the Internet of Things, he added, those IT infrastructure assets are becoming more distributed than ever.

    As the lines where one data center ends and another begins continue to blur, IT organizations are challenged from an asset management perspective like never before. From a business perspective there’s clearly more riding on those IT assets.

    In fact, Jenkins noted, the data center is now the single most expensive investment many companies make, which in many cases means that for the first time businesses are holding IT organizations truly accountable for how that asset is actually managed.

    7:30p
    Cloud Provider ElasticHosts Launches Container Hosting Service Springs.io


    This article originally appeared at The WHIR

    London-based cloud hosting provider ElasticHosts has launched a container-based cloud infrastructure service, Springs.io, which features elastic capacity scaling and pay-per-use billing at lower prices than equivalent virtualization-based cloud services like Amazon Web Services.

    Springs.io builds on ElasticHosts’ container technology, which automatically scales to give customers capacity that matches their needs at any given time. As capacity demands increase, new containers can boot in just two seconds, and they don’t carry the overhead of virtualization.

    The usage fees are relatively simple and based on actual loads. Processing is priced at $0.008 per MHz used per hour, and a GB of RAM is $0.011 per hour. Storage is $0.25 per month per GB of SSD used. After a free TB of initial data transfer, additional transfers are charged at $0.050 per GB. And a static public IP address is $2 per month.

    Users can set a limit on how far their applications automatically scale, ranging from 500MHz to 20,000MHz of CPU and from 256MB to 32GB of RAM.

    While it’s not using Docker or Linux Containers (LXC), Springs.io uses similar Linux kernel containerization technologies for container isolation and control. It is also built on high-speed SSD storage to enhance performance.
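
    For background (and not a description of Springs.io’s implementation): container isolation on Linux is built from kernel namespaces, with cgroups handling resource control. The short sketch below, which runs on any Linux machine, lists the namespaces the current process belongs to; a containerized process would show different identifiers for pid, net, mnt, uts, and so on.

        import os

        ns_dir = "/proc/self/ns"
        for ns in sorted(os.listdir(ns_dir)):
            # e.g. 'pid -> pid:[4026531836]'; processes sharing an identifier
            # share that view of the system.
            print(ns, "->", os.readlink(os.path.join(ns_dir, ns)))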

    The aim behind Springs.io is to provide a simple, economical, and highly scalable hosting service that meets the needs of most Linux developers, agencies, and SMBs, and the company believes containers are the right technology to achieve this.

    “We believe this is how all cloud infrastructure will look in years to come and are proud to be leading the charge,” Springs.io founder Richard Davies said in a statement.

    “We have been listening to the market and what we are hearing is that people are craving simplicity,” he said. “While some customers need greater support and configuration, many don’t, and we wanted to provide a service for users that are looking for a more simplistic offering… Businesses need a whole new service that strips away any complications, which is what Springs.io offers.”

    As Computer Weekly notes, Springs.io provides a similar proposition to what ElasticHosts offers with its Elastic Containers product, but launching Springs.io as a separate entity appears to be a way to capitalize on recent interest in containerization technologies. Creating a new division also ensures that Springs.io can navigate a new container-based hosting business model without carrying the baggage of a traditional hosting provider.

    This first ran at http://www.thewhir.com/web-hosting-news/cloud-provider-elastichosts-launches-container-hosting-service-springs-io

    9:22p
    Keeping the Airflow Going in Smaller Data Centers

    Just because a data center is small doesn’t mean that it’s not faced with the same heating and cooling challenges faced by data center operators running much larger facilities.

    At the Data Center World conference in National Harbor, Maryland, this September, Daniel Kennedy, director of sales engineering for Tate, a provider of data center airflow management products and services, will showcase how one small data center operator saved on energy costs by reworking airflow through its data center, which enabled it to shut down one of its coolers.

    When it comes to operating a data center that may only be 1,000 to 2,000 square feet, Kennedy said, many organizations assume there’s not much to be done in terms of making the overall environment more energy efficient. In reality, data center operators can increase the overall IT infrastructure density of those environments without actually increasing their energy costs or exceeding the thermal limits of the environment.

    “Many of the operators of smaller data centers have not optimized airflow in years,” Kennedy said. “But it turns out they can take advantage of many of the same advances that have been made by larger data centers in recent years.”

    Despite the overall trend towards data center consolidation, Kennedy said, most of the smaller data centers that already exist are not going away anytime soon. Most of the organizations that run these data centers have already made significant capital investments in them. In addition, those smaller data centers tend to be in locations that, for one reason or another, are crucial to the organizations that make use of them.

    The critical thing, he said, is to get a better handle on the requirements of the application workloads actually running in those environments. Only then, says Kennedy, can the organizations that depend on those applications make the most efficient use of the space allocated, in ways that are far more energy efficient than most organizations would have thought possible.

    For more information, sign up for Data Center World National Harbor, which will convene in National Harbor, Maryland, on September 20-23, 2015, and attend Daniel’s session titled “The Small Data Center Efficiency Potential.”

     

    10:25p
    AT&T’s Eddie Schutter Takes Helm as eBay Data Center Chief

    Eddie Schutter, who until recently oversaw AT&T’s enormous data center fleet, has joined eBay as head of global data center network and security services. Schutter had been at AT&T for about 20 years.

    In his new role, he will be managing the online auction giant’s Global Foundation Services organization, which is responsible for the entire eBay data center infrastructure. He will report to Dean Nelson, who has led the organization for more than three years.

    Nelson is a well-known data center executive who spearheaded eBay’s adoption of some of the industry’s most innovative approaches to data center design, including the use of data center containers, water-cooled IT cabinets, and on-site fuel cells as the primary source of data center energy.

    Schutter announced his new role at the DCD Internet conference in San Francisco Thursday. The move is a “big shift for me, obviously,” he said, adding that AT&T’s is a “great data center organization” that has done a lot of things for the data center industry in the past and is doing a lot of things now that will impact it in the future.

    AT&T and eBay are obviously two very different businesses, but they both have some common data center challenges. Around-the-clock availability, global scale, and data center energy consumption are challenges most companies that provide services worldwide share.

    The AT&T data center organization he led until now has about 10 million square feet of data center space under management, Schutter said.

    While most people know AT&T as a cellular network carrier, the company has a massive portfolio of data center and cloud services for enterprises. And both telephone users and enterprises demand that their devices and services are always on.

    eBay’s Global Foundation Services has traditionally pursued innovative technologies to increase energy efficiency of its data centers. The eBay data center in Phoenix, for example, has a mix of traditional raised-floor space and data center containers by both HP and Dell on the roof.

    The newest eBay data center in Salt Lake City, Utah, uses fuel cells by Bloom Energy, fueled by natural gas, as its primary source of power. It relies on the local utility grid for backup and has no diesel generators or UPS systems – both mainstays in most of the world’s data centers.

    eBay is currently building a new data center in Reno, Nevada, together with Switch SuperNAP, a data center provider known for its massive Las Vegas data center campus. The facility, Switch’s first outside of Las Vegas, is in the Reno Technology Park, a large site with access to a variety of renewable energy sources that’s also home to an Apple data center campus and neighbors a Tesla battery manufacturing plant.

    10:44p
    Rackspace Announces VMware-based Private Cloud Solution


    This article originally appeared at The WHIR

    Hosting provider Rackspace launched Rackspace Private Cloud, a new offering that runs on single-tenant dedicated architecture and uses VMware’s virtualization software vCloud.

    Rackspace Private Cloud is aimed at providing greater control and security than multi-tenant clouds, but also with the scalability, flexibility and resource optimization typical of shared cloud offerings.

    Rackspace CTO John Engates said Rackspace is aiming to provide a “true hybrid experience” with its options ranging from dedicated hosting, to multi-tenant cloud, to specialized application hosting (such as email hosting), to the new private cloud offering.

    “We view the Rackspace Private Cloud as having a large impact for our enterprise customers who need the flexibility, scalability, and reliability provided by VMware’s virtualization, paired with the control and security of a dedicated environment,” Engates stated. “It is the best of both worlds for those enterprises looking to explore the benefits of external cloud solutions, without as much risk.”

    Rackspace Private Cloud is an evolution of Rackspace’s dedicated virtual server offering, which has generated significant revenue in the past year due to its flexibility, asset utilization, and lower capital and operating costs achieved through VMware virtualization, according to the company.

    Customers are provided monitoring and maintenance of the VMware software stack, including installation and configuration of the vCenter Server, vCenter Server backups, and hypervisor monitoring. Rackspace also backs up user VMs, and patches and monitors the guest OS and the antivirus software on Rackspace-provided OS images.
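
    As an illustration of the kind of hypervisor monitoring described above (not Rackspace’s actual tooling), the sketch below uses the pyVmomi library, assuming it is installed, to connect to a vCenter Server and report the health status of each ESXi host; the address and credentials are placeholders.

        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        # Placeholder vCenter address and credentials (assumptions); the
        # unverified SSL context is a lab-style shortcut for self-signed certs.
        ctx = ssl._create_unverified_context()
        si = SmartConnect(host="vcenter.example.com",
                          user="monitor@vsphere.local",
                          pwd="change-me",
                          sslContext=ctx)

        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder,
                                                       [vim.HostSystem], True)
        for host in view.view:
            # overallStatus is green/yellow/red/gray in the vSphere object model
            print(host.name, host.summary.overallStatus)

        view.Destroy()
        Disconnect(si)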

    Rackspace Private Cloud can be deployed in any Rackspace data center where there are available servers.

    This first ran at http://www.thewhir.com/web-hosting-news/rackspace-announces-vmware-based-private-cloud-solution

