Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, March 31st, 2015

    1:00p
    Symantec Signs Multi-Megawatt Lease at Santa Clara Data Center

    Vantage Data Centers has won a multi-megawatt deal with security-software powerhouse Symantec for its Santa Clara data center campus. This is the company’s second lab customer in recent months but its first mixed-use infrastructure deal ever. Symantec will deploy both lab space and critical IT infrastructure at the facility.

    Outsourcing lab infrastructure to a colocation facility doesn’t always make sense, because many wholesale facilities are designed uniformly for high redundancy. Vantage tailored the space instead, which allows Symantec to consolidate its lab operations into a single location at the provider’s Silicon Valley campus and ensures that lab customers won’t end up paying for redundancy they don’t need.

    The Symantec deal with Vantage is a hybrid of lab-style space and traditional 2N space. “It’s the first mixed deal that we’ve done,” said Chris Yetman, chief operating officer at Vantage. “It required significant collaboration.”

    There are varying security standards and security zones that are separately caged off, all within the same environment. Part of the space will be 2N, while Symantec’s lab environment will have UPS power.

    Vantage won its previous lab-style deal in the competitive Santa Clara data center market because it was able to provide flexible space to accommodate 12 kilowatts per rack in a non-redundant (‘N’) power configuration. The latest deal and a few other similar ones in the works are a sign of a mini-trend. However, most of Vantage’s pipeline consists of more traditional wholesale data center leases.

    The company’s CEO Sureel Choksi recently explained that demand for lab space in Silicon Valley data centers is being driven by skyrocketing office-space rent. Companies have traditionally built IT labs in their office buildings.

    These types of customers often don’t make sense for wholesale providers with uniform space, and wholesale doesn’t make sense for lab customers who don’t want to pay for redundancy they don’t need or use. Vantage was able to tailor the space and make it a win-win for both parties.

    “We need to build in a way that accommodates a 2N, or what could be a lab requirement,” Choksi said. “While we continue to offer traditional 2N space, we have the ability to tailor lab and production environments. The deal we announced late last year I think created a little bit more visibility in the market.”

    Choksi believes the company’s ability to tailor wholesale space to specifications contributed to the win.

    “Customers are becoming more savvy, particularly when it comes to business requirements and how that drives data center requirements,” he said. “The notion of offering a flexible tailored data center is resonating well and acting as a huge differentiator.”

    3:00p
    Why PayPal Replaced VMware With OpenStack

    Close to 100 percent of traffic running through PayPal web and API applications, as well as mid-tier services, is now served by the company’s own private OpenStack cloud.

    OpenStack has replaced VMware in the eBay-owned online-payment firm’s data centers. The transformation was gradual and started during the 2011 holiday shopping season, when PayPal’s infrastructure team started routing about 20 percent of workloads to the OpenStack cloud.

    The reasons the company made the switch are among those most frequently cited for moving from a proprietary platform to an open source one: to have more freedom to customize and to avoid vendor lock-in.

    “With OpenStack, PayPal has more control over customization and more choice in the vendors it uses for its hybrid cloud environment,” Sri Shivananda, vice president of global platform and infrastructure at PayPal, said via email.

    In a blog post, Shivananda wrote that it now also takes minutes to provision capacity and deploy new Java applications – something that used to take days.
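    To make the “minutes to provision” claim concrete, here is a minimal sketch of provisioning a server against an OpenStack cloud with the openstacksdk Python library. The cloud name, image, flavor, and network names are hypothetical placeholders, and this illustrates the general OpenStack workflow, not PayPal’s internal tooling.

        # Minimal sketch: provisioning a server on an OpenStack cloud via openstacksdk.
        # The cloud, image, flavor, and network names are hypothetical placeholders.
        import openstack

        # Reads credentials for the named cloud from clouds.yaml or environment variables.
        conn = openstack.connect(cloud="example-private-cloud")

        image = conn.compute.find_image("ubuntu-14.04")      # placeholder image
        flavor = conn.compute.find_flavor("m1.medium")       # placeholder flavor
        network = conn.network.find_network("app-tier-net")  # placeholder network

        server = conn.compute.create_server(
            name="java-app-01",
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )

        # Block until the instance reaches ACTIVE, typically a matter of minutes.
        server = conn.compute.wait_for_server(server)
        print(server.name, server.status)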

    A company can build an OpenStack cloud on its own or hire a specialist with its own distribution of the popular open source cloud architecture. PayPal chose the DIY route, Shivananda said.

    Its private OpenStack cloud runs on 8,500 physical servers.

    VMware and OpenStack don’t have to be mutually exclusive. The Palo Alto, California-based server-virtualization and cloud giant has its own OpenStack distribution that’s integrated with its hypervisor and other data center infrastructure solutions. VMware Integrated OpenStack comes with the latest vSphere 6 suite.

    The vendor’s pitch for using its flavor of the open source cloud is having a production-ready solution and support. “We want to help [customers] leverage their existing investments and expertise to confidently deliver production-grade OpenStack, backed by a unified support from VMware,” Roger Fortier, a VMware spokesman, wrote in an email.

    OpenStack adoption is on the rise. Examples of other big-name users that have deployed it in production include Walmart Labs, Time Warner Cable, and CERN, the European Organization for Nuclear Research, which operates the Large Hadron Collider. As of November of last year, CERN was well on its way to growing the size of its OpenStack cloud to 150,000 processor cores.

    BMW has set up an OpenStack cloud as well and has been testing the waters.

    A recent survey of IT professionals found that about one-third of all cloud infrastructure users had private clouds, and that about half of those private clouds were built using OpenStack.

    But the open source cloud software has also become a way for service providers to build public clouds. Rackspace has one of the biggest public OpenStack clouds. Another example is Internap, which launched its public cloud services underpinned by OpenStack earlier this year.

    Walmart’s private OpenStack cloud served traffic during last year’s holiday shopping season too, John Engates, CTO of Rackspace, said. Rackspace helped Walmart Labs’ team build the cloud infrastructure.

    “They’re using our distribution of OpenStack,” he said. “They’re paying us a great deal for support and services to go with that cloud.”

    As users increasingly opt for the open source cloud, vendors that have built businesses around proprietary cloud technologies are getting involved in the OpenStack project and coming up with products and services around it to avoid losing market share. There are also companies that are dedicated to standing up OpenStack clouds for customers as their core business.

    But there are also users like PayPal that have enough of their own engineering muscle to build cloud infrastructure using OpenStack on their own. Convincing customers like these to pay for a vendor’s package of a piece of open source technology is a tough sell.

    3:30p
    Revising the Data Backup and Business Continuity Handbook

    George Bedocs is the vice president of infrastructure engineering for Datto.

    Every enterprise IT team carries some version of “The Worst Case Scenario Handbook” in its collective psyche. In the data center, that mental handbook includes envisioning what might happen if a storm takes out your power or – worse – an employee makes a serious error that yields Mother Nature-level repercussions.

    Without adequate backup, downtime can translate into tens or hundreds of thousands of dollars or more per incident. But, as we all know, traditional backup is not the same as true business continuity.

    When seeking business continuity, sometimes the best resource is a knowledgeable channel partner with a long resume of handling backup solutions for its partners. Channel partners can draw on this experience to help businesses move past traditional backup methods and toward modern data protection solutions.

    Traditional Backup Versus Business Continuity

    There are a few different strategies enterprises employ to back up their data. The first, unfortunately, is to take the data center copy of “The Worst Case Scenario Handbook” and hide it behind the bookshelf. Maybe the team decides the enterprise is in a “safe zone,” far from hurricane-, tornado- or earthquake-prone regions. The unpredictability of natural disasters aside, the “it-won’t-happen-to-us” approach ignores the fact that network outages, human error and equipment failures are far more likely to cause downtime than natural disasters are.

    The second backup strategy is better, but not by much. It relies on a traditional approach that goes back more than 40 years. And yes, can you believe we’re still talking about tape backup? These businesses save their data to tape and ship it to remote locations from which they can retrieve it when disaster occurs. With legacy backup technologies such as tape, downtime can stretch into days or even weeks. Further, traditional approaches rely on manual processes, which increase the risk of human error, and they’re difficult to test to ensure reliability.

    Enterprises that bring in channel partners to help them move beyond the above limitations quickly learn that business continuity, not just backup, should have been the focus from the start. In terms of recovery time, avoiding human error, verifying systems, achieving faster backup times, improving security, complying with regulations and more, smarter business continuity solutions are far better equipped to not only back up the data, but make sure it is accessible and always up to date.

    The Cost of Backup-Only Plans

    Downtime is expensive, both in terms of productivity and data loss. The details depend on the size of your company, but in general, a mid-size company loses $215,638 per hour of downtime, while an enterprise loses $686,250 per hour, according to the Aberdeen Group. That price goes up quickly when businesses are ill-equipped to retrieve their data after a disaster and return to business as usual. The span of time between a typical data loss incident and resumption of normal business operations is seven hours, according to IDC research, but the firm says 18 percent of IT managers report it takes them much longer to get back to business – between 11 and 24 hours, or even more.

    Data backup is important, but it doesn’t ensure business continuity, which is essential to keeping costs down after a disaster. A channel partner can help you determine what lost data would actually cost you, and then set the recovery time objective (RTO) and recovery point objective (RPO) targets that are within your company’s acceptable threshold for downtime.
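    As a rough illustration of that arithmetic, the sketch below compares the cost of two hypothetical recovery times using the Aberdeen per-hour figures cited above. The recovery times are illustrative assumptions, not benchmarks.

        # Back-of-the-envelope downtime cost comparison using the Aberdeen Group
        # per-hour figures cited above. The recovery times are hypothetical.
        COST_PER_HOUR = {
            "mid-size company": 215_638,  # dollars lost per hour of downtime
            "enterprise": 686_250,
        }

        RECOVERY_HOURS = {
            "tape restore (days)": 48.0,           # assumed legacy recovery time
            "image-based restore (minutes)": 0.5,  # assumed modern recovery time
        }

        for company, hourly_cost in COST_PER_HOUR.items():
            for method, hours in RECOVERY_HOURS.items():
                print(f"{company}, {method}: ~${hourly_cost * hours:,.0f}")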

    Hybrid, Image-Based Backup Keeps Business Running

    Hybrid, image-based backup is at the core of successful business continuity solutions today. A hybrid solution combines the quick restoration benefits of local backup with the off-site, economic advantages of a cloud resource. Data is first copied and stored on a local device, so that enterprises can do fast and easy restores from that device. At the same time, the data is replicated in the cloud, creating off-site copies that don’t have to be moved physically.
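    The pattern is simple enough to sketch in a few lines. The example below keeps a copy of a snapshot on a local restore target and replicates the same file to S3-compatible object storage with boto3; the paths, bucket name, and snapshot file are hypothetical, and real business continuity products wrap this flow in scheduling, verification, and retention logic.

        # Sketch of the hybrid backup pattern: a local copy for fast restores plus
        # an off-site replica in object storage. Paths and bucket are hypothetical.
        import shutil
        import boto3

        SNAPSHOT = "/backups/staging/app-server-2015-03-31.img"        # hypothetical
        LOCAL_TARGET = "/backups/appliance/app-server-2015-03-31.img"  # hypothetical
        BUCKET = "example-offsite-backups"                             # hypothetical

        # 1. Local copy: restores from this device stay fast because nothing
        #    has to come back over the WAN.
        shutil.copy2(SNAPSHOT, LOCAL_TARGET)

        # 2. Off-site replica: the same image goes to cloud object storage,
        #    so no media has to be moved physically.
        boto3.client("s3").upload_file(SNAPSHOT, BUCKET, "app-server/2015-03-31.img")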

    Channel partners are also helping enterprises make a critical shift from file-based backup to image-based. With file-based backup, the IT team chooses which files to back up, and only those files are saved. If the team overlooks an essential file and a disaster occurs, that file is gone. With image-based backup, the enterprise can capture an image of the data in its environment. You get exact replications of what is stored on a server — including the operating system, configurations, settings and preferences. Make sure to look for a solution that automatically saves each image-based backup as a virtual machine disk (VMDK), both on the local device and in the cloud. This ensures a faster virtualization process. If a server goes down, the team can restore it in seconds or minutes, rather than spending hours or days requisitioning a new server and installing and configuring the operating system.

    This is the difference between traditional backup and business continuity. In the event of the worst case scenarios, which happen far more often than most enterprises would like to imagine, the ability to keep your business running has immeasurable value.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:03p
    Open Source Router Aims to Transform Data Center Networks

    An industry consortium has launched a Linux distribution optimized to support deployment of routing software and software-defined networks (SDN) on x86 servers.

    The open source CloudRouter Project, led by CloudBees, Cloudius Systems, IIX, NGINX, and the OpenDaylight Project, is promising to give IT organizations more control over networking functions deployed on x86 servers in the cloud. Open source router software such as this is a challenge to traditional networking equipment deployed inside most data centers today.

    Massive-scale data center operators, such as Facebook, have been building their own networking equipment to have more control of the functionality. But there are also various open source or open standards organizations working to change the way data center networking has been done traditionally.

    The open source router project is part of an Open Networking movement that is starting to gain traction in data center environments. Jay Turner, CloudRouter Project lead and senior director of DevOps at IIX, says that as a cloud service provider IIX saw a need for an open source implementation of networking software that combined routing and SDN functionality.

    “We saw a need for a high-quality SDN and router distribution,” says Turner. “We developed this initially to meet our own internal needs.”

    While most open networking technologies are still relatively immature, IT organizations are looking to reduce the cost of networking by replacing proprietary routers and switches based on custom ASIC processors with software that runs on standard x86 servers.

    Turner concedes that it will take a fair amount of time for this transition to play out across most data center environments. But as more application workloads make the shift into the cloud, service providers not only want to reduce networking costs; they also want a simpler way to expose to customers network resources that can dynamically scale up and down.

    Based on the Fedora distribution of Linux managed by Red Hat and the Helium release of the open source controller created by OpenDaylight, the open source router provides support for Docker containers and for Cloudius Systems’ OSv and KVM images, along with network connectivity that supports IPsec VPN, SSL, or L2TP security options. In addition, the CloudRouter Project includes tools for analyzing network protocol traffic, along with high availability and failover capabilities.
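    Because CloudRouter builds on the OpenDaylight controller, the network state it manages is generally reachable through OpenDaylight’s RESTCONF API. The sketch below reads the operational network topology from a Helium-era controller; the controller address and the default admin/admin credentials are assumptions for illustration, and CloudRouter’s own packaging may expose this differently.

        # Sketch: querying the operational network topology from an OpenDaylight
        # (Helium-era) controller over RESTCONF. The address and default
        # credentials are assumptions for illustration only.
        import requests

        CONTROLLER = "http://192.0.2.10:8181"  # hypothetical controller address
        URL = CONTROLLER + "/restconf/operational/network-topology:network-topology"

        resp = requests.get(URL, auth=("admin", "admin"),
                            headers={"Accept": "application/json"})
        resp.raise_for_status()

        for topology in resp.json()["network-topology"]["topology"]:
            nodes = topology.get("node", [])
            print(topology["topology-id"], "-", len(nodes), "nodes")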

    Optimized for providing network services, the CloudRouter distribution also has a much smaller operating-system footprint, in terms of the IT infrastructure resources it consumes, than alternative approaches to cloud networking, Turner says.

    It is one of several open source networking projects that promise to transform the economics of networking in the age of the cloud. The degree to which any one of these projects winds up succeeding remains to be seen. But one thing is clear: open source networking software as an alternative to proprietary networking technologies is here to stay.

    4:30p
    New AWS Instances Chew Through Very Large Data Sets

    Amazon Web Services introduced new dense-storage instances for its EC2 cloud meant for processing multi-terabyte data sets.

    The new D2 instances provide additional compute power and memory compared to HS1 instances, as well as the ability to sustain high rates of sequential disk I/O for access to extremely large data sets (or, if you’re reading this 10 years from now, small data sets). People are getting comfortable with storing and processing larger amounts of data in the cloud, so instance sizes are growing in turn to handle more heavy-duty jobs.

    The instances are based on Intel’s Haswell processors running at a base clock frequency of 2.4 GHz. Each virtual CPU (vCPU) is a hardware hyperthread on an Intel Xeon E5-2676 v3 chip.

    The largest of the new instances are capable of providing up to 3,500 MB/second read and 3,100 MB/second write performance with Linux.

    [Table: Specs and pricing for the new D2 instance family. Pricing is based on the US-East and US-West AWS regions.]

    The largest instance also comes with bonus features of NUMA support and CPU power management. NUMA (Non-Uniform Memory Access) allows specifying an affinity between an application and a processor that will result in use of memory that is “closer” to the processor and therefore more rapidly accessed.

    It’s possible to launch multiple D2 instances in a placement group (a logical grouping of instances in a single availability zone, meant for applications that need low network latency, high network throughput, or both).
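    For readers who want to see what that looks like in practice, here is a minimal sketch using the boto3 SDK to create a cluster placement group and launch two D2 instances into it. The AMI ID, key pair, and group name are hypothetical placeholders, and the parameters shown are standard EC2 API calls rather than anything specific to this announcement.

        # Sketch: launching two d2.8xlarge instances into a cluster placement group
        # with boto3. The AMI ID, key name, and group name are hypothetical.
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # A cluster placement group keeps instances in a single availability zone
        # with low-latency, high-throughput networking between them.
        ec2.create_placement_group(GroupName="d2-mpp-cluster", Strategy="cluster")

        ec2.run_instances(
            ImageId="ami-00000000",     # hypothetical HVM AMI with a recent Linux kernel
            InstanceType="d2.8xlarge",
            KeyName="example-keypair",  # hypothetical key pair
            MinCount=2,
            MaxCount=2,
            Placement={"GroupName": "d2-mpp-cluster"},
        )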

    The D2 instances provide the best disk performance when you use a Linux kernel that supports Persistent Grants – an extension to the Xen block ring protocol that significantly improves disk throughput and scalability.

    Storage on D2 is local, so it’s advised to build redundancy into the storage architecture and use a fault-tolerant file system. Each instance is EBS-optimized by default. “EBS” stands for “Elastic Block Store.”

    Enhanced networking is available on D2, joining availability on C3, C4, and I2 families. Enabling enhanced networking results in higher performance (packets per second), lower latency, and lower jitter.

    “With Enhanced Networking and extremely high sequential I/O rates, these instances will chew through your Massively Parallel Processing (MPP) data warehouse, log processing, and MapReduce jobs,” AWS chief evangelist Jeff Barr wrote in a blog post. “They will also make great hosts for your network file systems and data warehouses.”

    5:38p
    APAC Cloud Growth Depends on Stable Infrastructure: Report


    This article originally appeared at The WHIR

    Small and medium-sized enterprises represent well over 90 percent of all businesses in Asia, and businesses in Indonesia and India seem to have a particularly healthy appetite for cloud services. On Wednesday, the Asia Cloud Computing Association released a report that is among the first to publish statistics on 14 Asia-Pacific (APAC) markets, offering cloud providers important insights for growing their business in these countries.

    “[T]hese statistics all appear to under represent and underplay both the opportunity and the impact of cloud computing services on the SME landscape across Asia,” the report said. It argues that cloud technology can level the playing field for SMEs in developing economies because it allows businesses to use the latest technology on a pay-per-use basis, making cloud services affordable and scalable as the business grows.

    “Cloud computing promises to be the great leveller, bringing enterprise grade tools and capacity within reach of SMEs,” Lim May-Ann, Executive Director of the ACCA said in a statement. “In addition, it will bring next generation infrastructure benefits within reach of emerging economies without the need for crippling capital expenditure.”

    Successful providers can draw on three areas of focus: education and awareness, implementation, and selling additional services. There have been many conversations in the industry about working with SMEs to add services and scale as they grow. The best providers give customers the ability to add on as they go and make the process easy to understand on both a cost and a technical basis.

    Education may be even more important in developing countries: the study found that in Indonesia, for example, only three percent of SMEs knew the basics of cloud computing, while even in more developed countries such as Japan only one-third of business owners understood it. Educating customers on the benefits and scalability of cloud may open up new revenue in these markets.

    Indonesia and China have the most SMEs but may not be the best places for providers to focus on growth. Cloud services may still be too expensive for those SMEs, and the infrastructure is not yet stable enough to support reliable cloud services.

    Providers may find greater opportunity by focusing on SMEs in more developed markets such as Australia, Hong Kong and Japan, where SMEs generate over half of GDP and may be better able to afford cloud services. “Thus, the current sales pitch, encouraging the adoption of cloud computing so as to reduce IT costs may well be ignored in much of the developing world,” said the report. “Instead, focusing the capabilities of software and making cloud computing more user-friendly may have stronger results.”

    The report found that the drivers of cloud adoption, and the sectors best suited to purchase new services, differed in each country. Cloud providers should examine expansion into new geographies with an eye toward the industries most ready to adopt and the reasons businesses there are ready for cloud. “While factors like the absolute and economic size of the market, and the contribution that the SME base makes to GDP are important, our research shows that policy and market approach are just as vital,” said John Galligan, Chairman of the Cloud Segments Working Group and Regional Director of Government Relations, Asia Pacific, for Microsoft.

    With broadband in these countries often accessed via mobile devices, the report suggests that developing services for mobile first may be a good move. This is consistent with other reports showing mobile payments on the rise in China. Mobile payments overall are growing at 60 percent, indicating increased use of mobile devices worldwide. By 2020, broadband subscriptions are expected to reach 8.4 billion, with many of these coming from China and other APAC markets.

    This piece originally ran at http://www.thewhir.com/web-hosting-news/apac-cloud-growth-depends-stable-infrastructure-report

    6:32p
    Atlantic.net to Expand International Cloud Data Center Footprint

    Florida-based Atlantic.net’s infrastructure is set to expand with another cloud data center in New York, as well as new international locations in Singapore and the U.K.

    Atlantic.net began as a dial-up provider, moved into the colocation business, and eventually launched a regional Virtual Private Server (VPS) offering. The VPS business grew, prompting expansion to Toronto, Canada, and Dallas, followed by the company’s first West Coast data center location with Telx in San Francisco. CEO Marty Puranik said San Francisco was the fastest-growing region in company history.

    The company will soon add a location in New York, followed by the two international locations before July.

    With New York, the company wanted a second East Coast location in addition to its cloud data center location in Florida. Florida acts primarily as a gateway to Latin America and as an attractive location for Florida locals.

    The data center market in Ashburn, Virginia, is growing at a faster clip than New York, but the company decided New York was a better place to be.

    “We found with our customer base, on a pure network basis, Ashburn would have more carrier density,” said Puranik. “However, 60 percent of our customers are international, but they don’t know where Ashburn is. A lot of our customers want to be in, and requested, New York. They want us in premier cities.”

    For international cloud data center expansion, the company’s reps flew out to several markets to tour facilities. Puranik said it was important to be where customers are going rather than where they are, and again, city recognition came into play.

    In the U.K., Puranik felt that the action was going further inland and chose a location in Slough, a town just west of London.

    “We’re getting a lot of traction in the U.K.,” said Puranik. “Again, if I was going by number of peers or network, I’d probably go Amsterdam. My customers want a U.K. product and they want it basically in London. We’re on a different map than different clouds.”

    Other factors that went into the decision included data-sovereignty and accessibility issues.

    Singapore continues to be a popular first location in Asia Pacific, as it acts as a gateway to China and other markets in the region. Quality of facilities, accessibility and widespread use of the English language were also important factors in that decision.

    Following an extensive tour of facilities in several markets, Puranik noted that he saw a lot of segmentation happening in the industry. Some providers specialize in high density, while others push the number of carriers. New builds are often focused on high density, while older data centers formerly used for legacy internet-facing properties are being repurposed. Different data centers are tuned for different customers, and the customers are noticing.

    He uses a car analogy: “There’s the Lexus of the market, and the Camry of the market – we go for something in between,” he said. What does Atlantic.net go for? “The high end of the Toyota market.”

    The do-it-yourself and developer cloud market is growing. There are newer entrants like DigitalOcean and long-time VPS providers like Linode that have always served this market, and giants like Microsoft Azure are adding developer-friendly services.

    “The developer market is a focus for everybody. In terms of who’s buying, nobody looks for box software anymore, so developers are all focused on SaaS. All the development is going into automating everything. In terms of competition, really, it’s a brand new market, so everybody is growing.”

    The company has also recently given a face lift to its web page and control panel. The control panel was redesigned to be more intuitive and to form the framework for an upcoming one-click app-deployment feature, which Puranik said was very close to launching.

    A new how-to section helps the company catch up with the rest of the developer cloud market as well. How-to sections aren’t new and have long been a staple of the web hosting market. Cloud is enabling a new generation of DIY users. What used to be a niche is now mainstream.

    8:53p
    Ushering in the Era of Commodity Storage in the Data Center

    Just imagine: commodity servers and off-the-shelf drives as far as the eye can see, all managed by virtual servers and logical controllers. Well, we’re not quite there yet, but the wheels of software-defined technologies are certainly pushing the modern data center in this direction.

    Over the course of a few years, data center giants like Facebook, Google, and Amazon began developing their own networking, servers, and even storage platforms. Why? Because it simply made sense for them.

    • They had the manpower to support hardware systems.
    • They had developer support internally to create software and code.
    • They were able to create management run-books to control parts, assets, and the overall data center.
    • They had a very well developed cloud management layer capable of dynamic scale.

    Now let’s take a look at the modern organization. We see how server virtualization has impacted enterprises all over the world. With all of these advancements, other pieces of the data center were bound to catch up to the virtualization revolution as well. We saw this with software-defined networking, and now we’re seeing it with storage.

    Software-defined storage is a lot more than just a buzzword. It’s a way for organizations to manage heterogeneous storage environments under one logical layer. When the convergence of network, storage, and compute intersects with software-defined technologies, you create the building blocks for a commodity data center.
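    To make the idea of one logical layer concrete, here is a deliberately simplified toy sketch (not any vendor’s API) of a pool that places volumes across heterogeneous backends based on policy rather than on which box the capacity happens to live in.

        # Toy illustration of software-defined storage as "one logical layer":
        # volumes are placed by policy across heterogeneous backends.
        # This is a simplified sketch, not any vendor's API.
        class Backend:
            def __init__(self, name, media, free_gb):
                self.name, self.media, self.free_gb = name, media, free_gb

        class StoragePool:
            def __init__(self, backends):
                self.backends = backends

            def provision(self, name, size_gb, policy="capacity"):
                # "performance" prefers flash if available; "capacity" just finds space.
                candidates = [b for b in self.backends if b.free_gb >= size_gb]
                if policy == "performance":
                    flash = [b for b in candidates if b.media == "flash"]
                    candidates = flash or candidates
                if not candidates:
                    raise RuntimeError("no backend can satisfy the request")
                target = max(candidates, key=lambda b: b.free_gb)
                target.free_gb -= size_gb
                return f"volume '{name}' ({size_gb} GB) placed on {target.name}"

        pool = StoragePool([
            Backend("legacy-SAN", "disk", 2000),
            Backend("whitebox-flash", "flash", 800),
            Backend("server-side-DAS", "disk", 1200),
        ])
        print(pool.provision("vm-datastore-01", 500, policy="performance"))
        print(pool.provision("archive-01", 900, policy="capacity"))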

    But this conversation isn’t entirely about software-defined storage. Today, we look at three technologies that are directly allowing commodity storage and data center platforms to become realities. These three technologies involve your hypervisor, the software layer, and new kinds of physical commodity platforms.

    Let’s look at each of these (although there are more vendors doing SDS) and see what they really give you.

    • Your next-generation hypervisor. One of the most powerful server virtualization technologies is now acting as a converged storage hub for your data needs. Let me give you an example: with vSAN you have a platform capable of abstracting and pooling server-side flash and disk to deliver high performance, resiliency, and persistence at the logical storage tier. If you’re an organization running multiple data centers on top of a vSphere hypervisor, vSAN should be a consideration. Through this layer, you’re basically creating a hardware-independent architecture where everything is controlled from the hypervisor. VM-centric policies can be applied, you can scale up and out without disruptions, and you can even build redundancy into hardware that can be commodity. The amazing piece here is that vSphere can also help you manage SDN, allowing for true software-defined data center capabilities. Still, from a storage perspective, you suddenly abstract the hardware and allow your VMware hypervisor to take over. This type of virtual control can scale private and even public cloud environments.
    • The software control layer. Your workloads simply require resources to let them run. Your challenge revolves around effectively presenting those resources to your applications and users. The future of commodity storage won’t really care what kind of hardware is running underneath. Furthermore, it won’t care what kind of hypervisor you’re running either. The concepts behind USX are similar to vSAN: USX runs as a virtual appliance, consolidates storage resources, and allows for unified management. One of the biggest differences is that USX is being designed around the concept of hypervisor agnosticism. Currently, it works on VMware, but future releases aim to take the hypervisor question completely out of the equation. In this scenario, imagine running two data centers with different server virtualization platforms. One can be XenServer and the other can be VMware. Both of them have USX running on top. The USX virtual machine will allow for the passing of data, policies and entire workloads regardless of the underlying hypervisor. On top of all of this you have a hybrid cloud model with software-defined storage managing, pooling, accelerating and optimizing existing SAN, NAS, RAM and any type of DAS (SSD, flash, SAS). Software-defined technologies aren’t limited to private data center platforms. USX, for example, has integration with OpenStack and even VMware’s vCAC automation technologies. Administrators can have heterogeneous storage platforms in multiple locations running on different hypervisors, all managed by one logical storage solution.
    • The commodity data center layer. Let me give you a quick example. Cumulus Networks has its own Linux distribution, Cumulus Linux, which was designed to run on top of industry-standard networking hardware. Basically, it’s a software-only solution that provides the ultimate flexibility for modern data center networking designs and operations with a standard operating system, Linux. Furthermore, Cumulus can run on “bare-metal” network hardware from vendors like Quanta, Accton and Agema. Here’s the big part: customers can purchase that hardware at costs far lower than what incumbents charge. And hardware running Cumulus Linux can run right alongside existing systems, because Cumulus Linux uses industry-standard switching and routing protocols. Hardware vendors like Quanta are now making a direct impact on the commodity conversation. Why? They can provide vanity-free servers with storage options capable of supporting a much more commoditized data center architecture.

    There’s a reason we looked at these three technologies. One offers direct integration with an existing, powerful hypervisor model, while another abstracts the hypervisor and acts as its own VM. Finally, the physical data center piece allows all of this commoditization to actually happen. The point is that these virtual machines and services simply don’t care about the brand feeding them storage resources. To them, a flash array is a flash array. These logical controllers care about efficiency, resiliency, and the ability for you to manage your data center more easily.

    Before everyone jumps into the commodity data center argument, there are a couple of things to be aware of. Many organizations are simply not ready to take on the hardware management project. And there is the very real fact that software-defined storage is barely entering its 2.0 days. But the thought process is nevertheless interesting. In working on a number of projects across various industries, we’re seeing many more organizations introduce “white-box” hardware and have it managed by a virtual machine. This is now happening at the storage layer in IT shops much smaller than Amazon’s.

    Over the next few years, many more organizations are going to offset a part of their data center with commodity gear which will be managed at the virtual layer. It simply makes sense. Everything from management, workload migration, and even optimization is controlled from one plane. The big question revolves around adoption and adoption pace. How fast will your organization adopt a software-defined technology? Or, maybe it has already.

