Data Center Knowledge | News and analysis for the data center industry

Thursday, December 11th, 2014

    12:55a
    SF Bay Area Data Center Operators Prepare for Storm

    If weather forecasts play out the way they are supposed to this week, San Francisco Bay Area data center operators will get a chance to show their customers what they are paying the big bucks for.

    Data center uptime, or keeping servers running no matter how rough conditions get, is of course the core of any data center provider’s value proposition, and disaster preparedness is something data center operators spend a good chunk of their time on throughout the year.

    In the U.S. the biggest test for resiliency in recent years – a test not every data center passed – was Hurricane Sandy, which made landfall on the East Coast in late October 2012. Sandy caused widespread and prolonged power outages, and flooding that followed damaged critical infrastructure.

    The torrential rain and hurricane-strength winds meteorologists are saying will besiege the Bay Area Thursday may cause power outages, according to utilities PG&E and Silicon Valley Power. The former serves most of the region and the latter serves Santa Clara – the heart of Silicon Valley and home to a dense cluster of data centers.

    Precautionary Measures

    While data center providers do a lot of preparation and drills to maintain data center uptime during outages as part of their regular routine throughout the year, they usually take a few extra precautions if they know that a storm is on its way.

    Staff at a Telx data center located within a Digital Realty Trust facility at 200 Paul Avenue in San Francisco have contacted all contractors that serve the facility, from generator fuel suppliers to UPS and generator vendors, to make sure they have resources on standby, Paul Sidore, director of west region operations at Telx, said.

    The staff also checked the facility’s roof to make sure there weren’t any loose objects there or objects that would cause water build-up. “Our engineering staff will be there, ready to go 24 by seven,” Sidore said.

    All maintenance activities scheduled for Thursday have been postponed, he said.

    Telx has taken similar precautions to ensure data center uptime in Santa Clara, where it leases space from Vantage Data Centers.

    Chris Yetman, senior vice president of operations at Vantage, which has a massive campus in Santa Clara, has put together a list of 10 steps every data center operator should take to maintain data center uptime and staff safety when they know a particularly bad storm is coming. The full list appears in the next entry.

    All Vendors on Standby

    Digital Realty staff have also checked their buildings’ roofs and confirmed availability of fuel with the company’s suppliers, David Schirmacher, senior vice president of portfolio operations at Digital Realty, said in an email.

    “In the event of a power outage, Digital Realty’s fuel supplier program is one of the largest in the industry; we have first-response status on par with the U.S. Federal Emergency Management Agency and the Department of Defense,” Schirmacher said. “So, if the need arises, we have ready access to diesel fuel to power the generators in our data centers.”

    The data center landlord also notified all building systems vendors that they may be required to respond on short notice. Vendors contacted include suppliers of generators, UPS systems, chillers, fuel delivery systems, and switchgear.

    The company also put on hold all non-essential maintenance activities.

    Equinix Tests Gensets, May Book Nearby Hotels for Staff

    Operations personnel at Equinix have tested generators and verified fuel levels, David Morgan, the company’s vice president of IBX operations, said. The company also made special arrangements to make sure staff is available on site to address any unforeseen problems.

    If the situation becomes extreme, Equinix has cots on site for staff to sleep on, and it will secure hotel rooms for staff nearby if necessary.

    CoreSite Confident in Facilities Staff Competence

    Billie Haggard, senior vice president of data centers at CoreSite, another major Silicon Valley data center provider, said the company was monitoring the situation.

    “We are closely monitoring weather forecasts in preparation for the large amount of rain that is predicted to fall in the Silicon Valley area over the next few days,” he said in an emailed statement. “CoreSite facilities are constructed to withstand heavy amounts of rain, and we have highly-skilled facilities staff on-site that have been trained to deal with weather-related situations.”

    12:55a
    10 Things Data Center Operators Should Do Before a Storm

    As residents of the San Francisco Bay Area brace themselves for torrential rain and hurricane-strength winds forecast to descend on the region early Thursday morning, data center operators around the region have taken extra precautionary measures to ensure the safety of their staff and data center uptime. PG&E and Silicon Valley Power, electrical utilities that serve the region, have both said the storm may cause power outages.

    Chris Yetman, senior vice president of operations at Vantage Data Centers, which has a massive campus in Santa Clara, has put together a list of steps every data center operator should take to maintain data center uptime and staff safety when they know a particularly bad storm is coming.

    “This should be a living list that you walk through even with no storm on the horizon, and that you can add to from what you’ve learned during prior events for future planning,” he said.

    Here it is: Chris Yetman’s list of things to do if you are a data center operator and you know a bad storm is coming:

    1. Safety is the number-one concern. Make sure people know how to report anything that looks unsafe in and around the campus.

    2. Walk the entire campus to ensure there are no loose items that can blow around. This is important for safety as well as for avoiding the damage of having something slam into a wall.

    3. Have a list of available standby staff in the event of an emergency. Know who lives close by and could come in even under tough circumstances. Consider reserving a few nearby hotel rooms if you are concerned about poor driving conditions and want to minimize the risk of employees having to drive in a severe storm.

    4. If you have on-shift people who may be stuck because travel is unsafe, make sure you have a stash of food and water to allow them to be comfortable while they wait out the storm. (In some environments I have literally had bedding and cots available in storage.)

    5. Know the locations of all your storm and roof drains. Keep them clean and clear of debris. Ahead of the storm, walk the site and inspect every one of them to be sure they are clear and that you know what to do in the event one gets clogged.

    6. Inspect all rooftops to ensure all loose items have been removed or secured properly. Review the panels on any rooftop equipment to be sure they are all properly secured.

    7. If you have known building leaks that you have not yet repaired, then prepare your response ahead. Have buckets, towels, squeegees, and whatever is likely to be needed near the leak location.

    8. Place any moveable outside gear indoors. This can be anything from unsecured benches and tables to the forklift you might normally park outside on the campus.

    9. Check all generators to ensure there are no pending issues. Make sure they are clear of any nearby debris and fuel is at an acceptable level for extended run time if needed.

    10. If for any reason it’s been more than 30 days since your last generator run/test, then you should run them to be sure they are ready.

    1:00p
    Mesosphere Turns Data Center into One Huge Computer

    Mesosphere stands to make a major impact in the data center. The company formally launched its Data Center Operating System (DCOS) into early access earlier this week and plans public launch in the first half of 2015. It also recently raised more than $36 million in a Series B funding round, which the company will use to build out its engineering and sales teams even further.

    Remember the software that ran on computers before you got Windows? Remember how that whole world opened up even more once you had a visual operating system, and how Windows ushered in the creation of other very cool software? Mesosphere stands to do the same for data centers. It is accomplishing this task in an age when the data center is becoming increasingly distributed and virtualized: the age of web scale.

    DCOS improves data center resource utilization and fault tolerance. Mesosphere abstracts away the complexity and handles the overall orchestration, helping prevent failures, ensure failover, and respond to demand surges. There’s even a tool called Chaos for simulating failure, modeled after Netflix’s Chaos Monkey.

    The data center OS provides an API and software development kit that lets programmers develop for a data center as if it were one big computer.

    “You can write new applications and fire off tasks into your cluster by specifying how many resources each task should occupy,” Mesosphere CEO Florian Leibert said. “You don’t need to wire different machines, just use the Mesosphere fabric.”
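
    To make Leibert’s point concrete, here is a minimal sketch of what “firing off a task by specifying resources” can look like against Marathon, the distributed init system bundled with DCOS (described under “What’s in DCOS” below). The endpoint host and the app definition are illustrative assumptions, not Mesosphere sample code.

```python
import json
import urllib.request

# Hypothetical Marathon endpoint; host and port are placeholders.
MARATHON_URL = "http://marathon.example.com:8080/v2/apps"

# An app definition: what to run, how much of the pooled resources each
# copy should occupy, and how many copies to keep running.
app = {
    "id": "/hello-web",
    "cmd": "python3 -m http.server $PORT",  # Marathon injects $PORT
    "cpus": 0.5,       # fraction of a CPU per instance
    "mem": 128,        # MB of memory per instance
    "instances": 3,    # keep three copies running, restarting them on failure
}

req = urllib.request.Request(
    MARATHON_URL,
    data=json.dumps(app).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```

    The operator never picks machines; the scheduler matches the declared CPU and memory against the cluster’s pooled resources.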

    Mesosphere does a lot of complex things but makes them look really easy. For example, Chronos is a distributed and fault-tolerant job scheduler that supports complex job topologies. The tool is normally used by sophisticated engineers, but Mesosphere makes it dead simple to install it on a Mesosphere cluster and use it across data centers.
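
    As a rough illustration of the kind of job definitions Chronos accepts, the sketch below submits a repeating job and a dependent job over Chronos’ REST API. The host, port, and job details are assumptions made for this example; the endpoint paths follow the Chronos REST API of the time.

```python
import json
import urllib.request

# Placeholder Chronos endpoint for illustration only.
CHRONOS = "http://chronos.example.com:4400"

def post(path, payload):
    req = urllib.request.Request(
        CHRONOS + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)

# A repeating job: run every 24 hours starting at 02:00 UTC (ISO 8601 interval).
post("/scheduler/iso8601", {
    "name": "extract-logs",
    "command": "python3 /opt/jobs/extract_logs.py",
    "schedule": "R/2015-01-01T02:00:00Z/PT24H",
    "owner": "ops@example.com",
    "cpus": 0.25,
    "mem": 256,
})

# A dependent job: runs only after its parent completes, which is how more
# complex job topologies are expressed.
post("/scheduler/dependency", {
    "name": "build-report",
    "command": "python3 /opt/jobs/build_report.py",
    "parents": ["extract-logs"],
    "owner": "ops@example.com",
})
```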

    VMware recently integrated Mesosphere with VMware vSphere to help run applications and services at scale. “Mesosphere will have a positive impact on the data center,” Kit Colbert, VMware’s vice president and CTO for cloud-native apps, said via email. “As applications become more distributed, their scale and complexity will increase.”

    Colbert said non-web-scale customers can reap the same benefits from Mesosphere that web-scale companies like Twitter and Airbnb have. “The challenge is how these technologies can be implemented in the data center so they meet all enterprise requirements around security, compliance, SLA management, and more.”

    Khosla: a Perfect Philosophical Fit

    New investor Khosla Ventures led the recent round with additional investments from Andreessen Horowitz, Fuel Capital, SV Angel, and others. The company raised a $10.5 million Series A in June, and its total funding is now approximately $50 million.

    In June, venture capitalist Vinod Khosla said at GigaOm’s Structure conference in San Francisco that the most important opportunity in the business of IT was getting rid of all the IT people. The opportunity is in building a data center OS that will automate resource management much like a computer OS does.

    He made the comments directly after Mesosphere had raised its Series A, which was led by Andreessen Horowitz. Now, it comes as no surprise that Khosla Ventures would take a more active role in shaping Mesosphere’s future.

    “The industry needs a new type of operating system to optimize and automate the complex landscape inherent to the agile IT era: a growing fleet of distributed web, mobile, analytic server applications, operated as application-centric abstractions on commodity server and storage pools in dedicated data centers and public clouds,” Khosla said in a statement this week.

    Foundation of a Data Center

    DCOS is built on Apache Mesos, the tool famous for helping Twitter get a handle on data center operations and killing the “fail whale.” Mesos abstracts CPU, memory, storage, and other compute resources, creating virtual pools.

    The work Mesosphere has done around Mesos is significant. Its data center OS isn’t just a commercial version of Mesos, but a wider platform and ultimately operating system that uses Mesos as a cornerstone.

    “When it comes to commercial pieces, Mesosphere adds modules and extensions to the open source that are really relevant to large deployments,” said Leibert. “Most of the things we’ve developed that are all parts of the bigger picture are free and open source and widely used, while some are not — like allowing you to start up a service and making sure that it fails over and scales up.”

    A user interface abstracts these complex functions and makes sure DCOS is not only powerful, but intuitive and pretty. There’s also a command line interface if a user wants to bypass the visual interface.

    The company recently acquired a design studio in New York. “Design is important. It’s a challenging task,” said Leibert. “We’re coming from Airbnb, where we had a lot of design resources in terms of public-facing, but internally we also took design seriously. Sophisticated engineering was previously done with unsophisticated design tools, and we wanted to change that.”

    Roots at Twitter and Airbnb

    The connection between the founders goes back some years, said Leibert. Co-founder Benjamin Hindman was a co-creator of Mesos and a former Twitter lead engineer; while Hindman was creating Mesos, his family hosted Leibert as an exchange student. Leibert was involved in helping both Twitter and Airbnb scale. Another co-founder, Tobi Knaup, is also a longtime friend and a former tech lead at Airbnb.

    All three led very interconnected and parallel lives, and like many other startups, saw what was bugging them and fixed it. “I can count back to the Airbnb days; in the middle of night I’d get a call and have to go in,” said Leibert. “This automates the solutions to those problems.”

    Beyond Web Scale Market

    Mesosphere is often only spoken of in the context of the largest web-scale companies, but its market is wider. “If you are starting a new company, you should build atop of Mesosphere day one,” said Leibert. “Our system is built for the enterprise as well.”

    Its wide appeal prompted developer-focused cloud provider DigitalOcean to partner and offer Mesosphere. DigitalOcean has grown immensely due to its developer-friendly take on cloud.

    “It’s the new and better way to sort of extract DevOps work,” Mitch Wainer, DigitalOcean co-founder and chief marketing officer, recently commented on Mesosphere. “To organize and structure your large-scale infrastructure environments.”

    What’s in DCOS

    The DCOS consists of a distributed systems kernel with enterprise-grade security, based on Mesos. It includes a set of core system services, including a distributed init system (Marathon), distributed cron (Chronos), service discovery (DNS), storage (HDFS), and others, all capable of launching containers at scale.

    The company’s growing team has worked to extend the libraries, platforms, cloud hosts, and Linux distributions supported by DCOS, as well as features around security, cost-accounting, alerting, and other core enterprise features.

    Apache Spark, Apache Cassandra, and Google’s Kubernetes are all natively supported; Kubernetes helps manage the deployment of Docker workloads. DCOS supports all modern versions of Linux and runs on premises on bare-metal servers or in a virtualized private cloud, such as VMware or OpenStack.

    4:30p
    IBM Gets Two Patents for Cloud Data Control

    IBM has added a few more cloud patents to its holster. While not immediately impactful, the patents will help shape the company’s cloud delivery models going forward.

    Two new inventions have been patented that use analytics to gain more control over cloud data. One deals with dynamically moving workloads based on an automatic analysis, while the other is dubbed an “express lane” that gives certain data priority during analytics processing.

    IBM is looking not only to enable cloud infrastructure, but also to make working with the data inside it more effective. Both patents are about the level of control over where and how data is stored, accessed, and processed, and both have implications for analytics, an area of particular interest for IBM.

    Many of the big tech giants are taking an enterprise slant to their cloud offerings to further distinguish them from commodity public clouds. Big data and analytics are driving cloud innovation. The two patents deal with prioritizing data processing more effectively and dynamically moving workloads around based on cost.

    HP is also focusing on Big Data cloud offerings, and so is SAP. Service providers who don’t necessarily have a direct analytics play are also getting in on the act. OVH is launching a big data cloud based on IBM systems. Rackspace recently launched a big data cloud as well.

    The express lane patent isn’t a net neutrality workaround; it is about moving certain processes ahead of the pack when conducting an analysis. U.S. Patent #8,639,809, “Predictive Removal Of Runtime Data Using Attribute Characterizing,” organizes the data queue for efficiency when performing analysis.

    “Processing data in a cloud is similar to managing checkout lines at a store — if you have one simple item to purchase, an express lane is preferable to waiting in line behind someone with a more complicated order,” said IBM inventor Michael Branson, who co-invented the patented technique with John Santosuosso. “Cloud customers don’t want data that can be analyzed and dealt with simply to sit idle behind data that needs more complex analysis. Applying real-time analytics in a cloud can help ensure each piece of data gets the proper attention in a timely manner.”
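
    The checkout-line analogy can be pictured with a toy priority queue. The sketch below is emphatically not IBM’s patented technique; it only illustrates the general “express lane” idea of letting records that look cheap to analyze jump ahead of records that need deeper analysis.

```python
import heapq

# Toy illustration of the "express lane" idea; not IBM's patented method.
# Records predicted to need little analysis are dequeued ahead of records
# predicted to need a lot of it.

def estimated_cost(record):
    # Hypothetical attribute-based estimate: more attributes to analyze,
    # higher predicted processing cost.
    return len(record.get("attributes", []))

class ExpressLaneQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker preserves insertion order for equal costs

    def put(self, record):
        heapq.heappush(self._heap, (estimated_cost(record), self._counter, record))
        self._counter += 1

    def get(self):
        return heapq.heappop(self._heap)[2]

q = ExpressLaneQueue()
q.put({"id": 1, "attributes": ["a", "b", "c", "d", "e"]})  # complicated "order"
q.put({"id": 2, "attributes": ["a"]})                      # express-lane item
print(q.get()["id"])  # -> 2: the simple record gets attention first
```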

    The other patent deals with dynamically moving workloads between or within clouds to increase performance and lower costs. U.S. Patent #8,676,981 B2 is for routing service requests based on lowest actual costs in a federated cloud service. It’s based on an automatic analysis that determines the most efficient and effective use of available resources. For cloud and service providers, it can help isolate and automatically support their customers’ workloads and better fine-tune usage-based consumption.
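
    In the same illustrative spirit (and again not the patented mechanism itself), cost-based routing in a federated cloud boils down to sending each request to the destination that reports the lowest cost and still has capacity, as in this hypothetical sketch.

```python
# Toy illustration only; not the patented mechanism. A request goes to the
# federated cloud that reports the lowest cost and still has capacity.

def route_request(request, clouds):
    candidates = [
        c for c in clouds
        if c["free_cpus"] >= request["cpus"] and c["free_mem_gb"] >= request["mem_gb"]
    ]
    if not candidates:
        raise RuntimeError("no cloud has capacity for this request")
    return min(candidates, key=lambda c: c["cost_per_hour"])

clouds = [
    {"name": "on-prem",  "cost_per_hour": 0.12, "free_cpus": 8,  "free_mem_gb": 32},
    {"name": "public-a", "cost_per_hour": 0.09, "free_cpus": 64, "free_mem_gb": 256},
    {"name": "public-b", "cost_per_hour": 0.15, "free_cpus": 64, "free_mem_gb": 256},
]
print(route_request({"cpus": 4, "mem_gb": 16}, clouds)["name"])  # -> public-a
```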

    IBM invests more than $6 billion a year in research and development and holds more than 1,560 cloud patents.

    4:30p
    Reducing Energy Consumption and Cost in the Data Center

    Rich Gadomski is a member of the Active Archive Alliance and VP of Marketing at Fujifilm Recording Media.

    It is probably safe to say that most data center managers are dealing with the challenge of increasing data growth and limited IT resources and budgets. With “save everything forever” strategies becoming more prevalent for many organizations, the strain on IT resources will only get worse over time.

    Data center managers are faced with planning for the future and the mandate to change their current rate of spending on equipment and operations. One area of focus has been the massive energy consumption of data centers and the impact of storage.

    Energy Consumption in the Data Center

    Although the dire predictions of the 2007 EPA report on data center energy consumption have not panned out, there are still ongoing energy consumption concerns and data centers are not off the hook.

    Earlier this year, a report by Greenpeace criticized big data centers for using dirty energy (coal, gas, nuclear) as opposed to clean energy (wind, solar). A more recent report from the Natural Resources Defense Council (NRDC) claims that U.S. data centers, which consumed a massive 91 billion kWh of electricity in 2013, are rife with waste and inefficiency, and that their consumption will grow to 140 billion kWh by 2020, the equivalent of 50 large (500 megawatt) power plants. However, the 2014 Uptime Institute annual data center survey reveals that data center power usage effectiveness (PUE) metrics have plateaued at around 1.7 after several years of steady improvement.
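
    For readers unfamiliar with the metric, PUE is total facility energy divided by the energy delivered to IT equipment, so a PUE of 1.7 means roughly 0.7 watts of cooling and other overhead for every watt of IT load. A quick back-of-the-envelope example with made-up numbers:

```python
# PUE = total facility energy / energy delivered to IT equipment.
# The numbers below are made up purely to show the arithmetic.
it_load_kw = 1000.0     # hypothetical IT equipment load
facility_kw = 1700.0    # IT load plus cooling, power distribution, lighting
pue = facility_kw / it_load_kw
print(pue)  # -> 1.7: every watt of IT work carries about 0.7 W of overhead
```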

    One reason for the heavy energy consumption by data centers is that they rely heavily on spinning hard disk drive technology to store their data. Often the response to increasing data growth has been to add more disk arrays to solve the problem. A hard disk drive platter spinning 24/7/365 at 7,200 or 10,000 RPM requires power not only to spin it, but to cool it as well. Otherwise the heat generated by the constant spinning would corrupt and eventually destroy the data.

    Managing Data Center Growth

    While every organization will have different data profiles, studies show that during an average life cycle data becomes inactive after a short period of 30 to 90 days. If that is the case, it makes sense to move that data from expensive primary disk storage to more economical tiers of storage, such as low-cost capacity disk and/or tape.

    In the process of moving files from high cost to low cost tiers, data accessibility does not need to be sacrificed. One method of ensuring it’s not is with active archiving strategies. An active archive file system gives you the ability to store and access all of your data by extending the file system to all tiers of storage and does so transparently to the users. As a result, active archives provide organizations with a persistent view of the data in their archive and make it easy to access files whenever needed.

    By policy, files migrate from expensive primary storage to economy tiers freeing up primary storage for new data, reducing backup volume and reducing overall cost by effectively storing data according to its usage profile. The idea here is to stop backing up (copying) data that is not changing or seldom retrieved anymore, and move it into an active archive.
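
    As a rough sketch of the age-based migration policy described above, the script below moves files that have not been accessed in more than 90 days from a primary tier to an archive tier. The paths and the threshold are assumptions for illustration; real active-archive software does this transparently inside the file system rather than with a standalone script.

```python
import os
import shutil
import time

# Hypothetical tier locations and policy threshold, for illustration only.
PRIMARY = "/mnt/primary"
ARCHIVE = "/mnt/archive"
MAX_IDLE_DAYS = 90

cutoff = time.time() - MAX_IDLE_DAYS * 86400

for dirpath, _dirnames, filenames in os.walk(PRIMARY):
    for name in filenames:
        src = os.path.join(dirpath, name)
        if os.path.getatime(src) < cutoff:  # last access is older than the cutoff
            rel = os.path.relpath(src, PRIMARY)
            dst = os.path.join(ARCHIVE, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)  # frees primary capacity; the file stays accessible
```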

    Optimizing Performance in Storage Environments

    Today, automated tape libraries capable of scaling into the petabytes play a key role in active archiving, where the data is still easily accessible and well protected, but consumes no energy until it is retrieved. TCO studies of disk vs. tape show a significant advantage for tape, with disk consuming up to 105 times more energy than the equivalent amount of tape storage.

    While disk and now flash technology get a lot of attention, today’s tape is performance optimized with LTFS (Linear Tape File System) and tape NAS appliances. Its capacity is constantly increasing thanks to advanced tape drives, high density automated tape libraries and new media innovations like Barium Ferrite (BaFe), shown to have superior performance and longer archival life compared to conventional metal particle (MP) tape. BaFe particles have been used in LTO-6 tape and in the production of multi-terabyte enterprise data tape, supporting the needs of various customers in diverse industries for their large volume backup and archival needs.

    Tape can serve as an efficient active archive tier that fits in the same commonly used network storage environments. Organizations can easily move hundreds of terabytes or petabytes of content onto a network share powered by tape without introducing any change for their users. By moving data from high-energy-consuming, expensive primary disk storage to cost-effective tape technology within an active archive environment, organizations can significantly decrease energy consumption and space requirements, leading to an overall decrease in data center expenses.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:30p
    Cisco and IBM Team Up on Converged Infrastructure

    Cisco and IBM have teamed up on an integrated platform that combines Cisco Unified Computing System network and server hardware and IBM Storwize storage. It’s yet another turnkey cloud box play along the lines of VCE.

    The partnership combines the best of breed from two tech heavyweights. VersaStack will target big enterprise data center, private cloud, big data, and analytics needs.

    A massive convergence continues to occur. The aim of these joint out-of-the-box solutions that combine infrastructure and platform is to make IT data center architectures simple. Infrastructure density is key to ruling the cloud and data kingdom, as DCK’s Bill Kleyman puts it.

    Cisco has generally upped its convergence game and has treated UCS as the centerpiece of its cloud strategy. Cisco has had its hands in two of the top three converged infrastructure plays, according to Gartner’s Magic Quadrant for the category. Cisco also has a NetApp alliance for FlexPod and plays a big role in VCE. Another vendor in the “leaders” category is Oracle with its Exadata systems.

    VCE is a company formed by Cisco and EMC. Its Vblock also contains Cisco UCS servers. EMC recently took full control of VCE, and Cisco waited only a few months before hooking up with IBM here in a similar play.

    IBM sold off its x86 server business, which arguably partially set the stage for this partnership. IBM and Cisco have long partnered, but are reuniting in a way.

    Way back in 2009, IBM partnered with Brocade right after IBM’s partner Cisco Systems entered the blade server market, potentially putting Cisco and IBM in one another’s cross-hairs – though Cisco denied this and said the IBM move was anti-HP driven. Fast-forward to today and there’s a converged infrastructure play and IBM doesn’t provide the server. Strange world.

    HP, incidentally, recently opened up its converged infrastructure for use with Cisco switches.

    UCS has served as an integration point for anything you can imagine. A partnership with Red Hat combined UCS with Red Hat OpenStack. Schneider Electric’s StruxureWare DCIM integrated UCS. What’s the point of all of this? Cisco recently found out it was its own grandpa.

    Converged infrastructure is a big and growing business, and Cisco’s UCS is a fundamental piece of a lot of it. The company recently unveiled two types of UCS servers for scale-out data centers and edge locations.

    5:22p
    Datto Acquires Backupify to Expand Cloud DR and Backup Capabilities

    Datto is acquiring Backupify to expand backup and cloud disaster recovery capabilities. The companies play in different markets and back up different types of data. Terms of the deal were not disclosed, but Backupify has about 100 employees, while Datto has about 400. Together the two will have close to 2 million customers and 8,000 partners worldwide.

    Consolidation continues to occur in the cloud-enablement world, and the acquisition is the latest example. Both companies are niche players with little overlap in terms of customers and backup offerings. Datto provides hybrid cloud-based backup, protecting apps running on-prem or in a private cloud, and Backupify provides cloud-to-cloud backup for Software-as-a-Service applications like Salesforce.

    The business models are also different. Datto focuses on the channel, particularly service providers who in turn serve SMBs. Backupify has bigger accounts.

    The focus in the backup and disaster recovery worlds has been on how to deal with the changing delivery models for data. CenturyLink recently acquired a cloud DR company called DataGardens, which is particularly strong in moving data between on-prem and cloud, or from one cloud to another.

    Mozy, Carbonite, Dropbox, SugarSync, and Box all offer backup in some form or another and have growing enterprise cloud DR plays. Consolidation is occurring between companies with strong niches like Datto and Backupify, driven by the desire to provide a wider set of capabilities and keep up with some of the big general cloud storage players moving deeper into business-focused offerings.

    The two combined can offer backup across a wider swath of services delivered both locally and in the cloud. The result is what the company calls a “Total Data Protection Platform.” It will appeal to customers that use a mixture of SaaS and on-prem applications and will expand Datto’s potential audience. Support has also been expanded.

    Datto appeals to those with a lot of privately hosted apps, but the general trend is moving more services to the cloud through multi-tenant apps from Google and Salesforce. To address this trend, the company either had to build more products in-house or acquire the capability. It chose the latter route.

    Both companies have raised funding in the past. Backupify raised about $20 million, and Datto raised about $25 million. These rounds are relatively small compared to the $100 million round for Box. Dropbox was considering an Initial Public Offering at one point, but the plans appear to have been shelved. Box has filed for an IPO, but the float is yet to happen.

    “At a time when data lives in and flows freely from on-premise servers and systems, virtualized environments, and third-party clouds, data protection and recovery takes on a whole new meaning,” Austin McChord, founder and CEO of Datto, said in a statement. “Backupify provides must-have solutions for companies entrusting their data to SaaS applications, and our team will help complete the vision of creating a Total Data Protection Platform that extends across a company’s entire digital ecosystem, as well as expand to new global markets, including Europe, Asia, and Latin America.”

    5:38p
    Pure Storage Launches Converged Infrastructure Line

    Pure Storage announced FlashStack CI, a new line of converged infrastructure solutions that consists of reference architectures, deployment and sizing guidelines, and single-call support. Delivered as a pre-validated turnkey solution, the initial configurations combine Pure Storage FlashArray 400 Series arrays with Cisco UCS blade servers, Cisco Nexus switches, VMware vSphere 5, and VMware Horizon 6.

    The new converged infrastructure offering from the all-flash enterprise storage company caps off a busy and successful year. Pure Storage reached a $3 billion valuation after raising $225 million last spring, and then purchased 100 patents from IBM in the summer. Last month the company introduced its “forever flash” program as a fresh approach to array lifecycle management – taking the storage acquisition and maintenance business model to a new level.

    Pure Storage says it will use FlashStack authorized support partners within its Pure Storage Partner Program to deliver the solutions to enterprise and service provider customers. Prime use cases for FlashStack storage are virtual server and virtual desktop deployments. Working closely with VMware, Pure Storage offers FlashStack CI for VMware Horizon for virtual desktops and FlashStack CI for VMware vSphere for virtual servers.

    “As an industry leading integrator of Cisco-centric converged data center infrastructures, Datalink is pleased to add FlashStack to our suite of offerings. We have already installed FlashStack in our lab and in production environments, and stand ready to provide our customers with a single point of accountability – spanning design, installation, support, and management – for this multi-vendor, converged solution,” Shawn O’Grady, chief operating officer at Datalink, said in a statement. “We are pleased to be expanding our partnership with Pure Storage as the first FlashStack Authorized Support partner in the United States.”

    The FlashStack CI solutions with single-call support are now available in select regions supported by FlashStack ASPs, with global availability to be added next year.

    8:00p
    Dell to Ship Open Switches with Midokura’s OpenStack SDN

    Dell and Cumulus Networks have brought a Software Defined Networking startup called Midokura into their open and disaggregated data center network alliance. Midokura has an overlay network virtualization solution for OpenStack that will now be available together with Dell’s commodity hardware and the Linux-based network operating system by Cumulus.

    There is a growing number of data center operators, such as telcos, cloud service providers, and to a lesser extent enterprises, who are interested in low-cost commodity network hardware that is not tied to proprietary software – the opposite of the full-solution model major network vendors have traditionally sold. There is also growing interest in being able to virtualize and automate network management capabilities, such as the ones Midokura’s OpenStack SDN technology provides, to make networks more agile.

    Dell’s first foray into the world of disaggregation between network hardware and software occurred in January, when it announced it would offer the Cumulus OS as an alternative to its own Dell OS software on two of its top-of-rack data center switches. In April, Dell added a third network OS option – the Switch Light Operating System by Big Switch Networks.

    Earlier this month Juniper, previously one of the monolithic proprietary network vendors, took things a step further in the direction of open networking, announcing it would not only ship a commodity white box switch that would be open for use with any OS the customer desires, but also that it would contribute the box’s design to the Open Compute Project, the Facebook-led open source data center and hardware design initiative.

    OpenStack SDN Built for Multitenancy

    Midokura’s SDN solution is called Enterprise MidoNet. It is based on an open source SDN project the company started earlier this year, Midokura CEO Dan Dumitriu said.

    Its strength is in its multitenant capabilities. The virtual network overlay, which runs on servers and network switches, logically separates applications in a private cloud environment or users in public clouds.

    “Initially … we designed MidoNet for a multitenant public cloud use case,” Dumitriu said. But his team quickly saw that there was a growing market for multitenant virtual networks in the enterprise data center space as well.

    The software uses network switches to do what they do best – reliably move packets. All network management functions are done at the edge of the network on x86 servers.

    The solution uses Dell switches running Cumulus for non-virtualized workloads that have to stay on physical hardware.

    In addition to OpenStack, Dell supports cloud architectures by VMware and Microsoft. The company works closely on OpenStack with Red Hat.

    Dell Sharpens Focus on SDN, Open Networks

    Dell has been placing a lot of focus on data center networking in recent years, and SDN and open networking are two big new areas of focus, Tom Burns, vice president and general manager of Dell Networking and Enterprise Infrastructure, said.

    There are two Dell switches that can be used with Cumulus or Big Switch network OS software: the 10 Gig S48 and the 40 Gig S6000. The company is working to add a 1 Gig product to the open network portfolio in the first quarter of next year, Burns said.

    Technologically, the company’s approach to SDN has been to ensure the least amount of disruption possible for the user. “All of our data center products today enable customers to go to SDN when they are ready to move to SDN,” he said. “We do not require customers to rip and replace, update software, or do anything from a physical perspective.”

    So far, open networking has not eaten into Dell’s traditional networking business, Burns said. Revenue from the sales of disaggregated switches – which a lot of customers are running in trials – is tiny compared to overall Dell networking sales.

    But, Dell expects this business to pick up over the next two years. “My goal is to bring these types of solutions to the masses as quickly as possible,” Burns said.

    10:00p
    Parallels Cloud Server Plans To Add Docker Support in Q1 2015


    This article originally appeared at The WHIR

    Parallels Cloud Server is planning to offer native support for Docker application deployment, allowing service providers to deliver container-based Virtual Private Servers to the growing number of developers building Docker applications.

    Parallels Cloud Server is a containers-based virtualization platform for web hosts and cloud service providers. An update scheduled for the first quarter of 2015 will provide Docker integration, allowing service providers to run Docker images in a secure, high-density virtual server environment.

    This also essentially allows them to compete with larger cloud hosts, such as Amazon Web Services and Microsoft Azure, which already offer Docker support.

    Parallels virtualization CTO James Bottomley said in a statement, “With Parallels solution, the service provider can offer the customer an environment where Docker is running on top of Parallels Cloud Server containers-based virtual machines for optimal performance and at high-density.”

    Parallels Cloud Server will allow service providers to support Docker developers on their current infrastructure and with the same management tools with which they’re familiar.

    “Support for Docker applications on Parallels Containers will be an innovative addition to our stack of services,” said Laurens van Alphen, an infrastructure architect at Dutch cloud and managed services provider Keenondots. “Our customers are using Docker and we will have the ability to offer them the benefits of container density and better performance than competitors using hypervisors-based virtual machines.”

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/parallels-cloud-server-plans-add-docker-support-q1-2015

