|
Understanding the Different Kinds of Infrastructure Convergence

As company computing demands change, what will the architecture that supports modern businesses and their cloud initiatives look like? One of the hottest concepts to emerge is infrastructure convergence. We have unified architecture, converged storage, converged infrastructure, and now hyper-convergence. But what does it all mean? How can convergence apply to your business and your use cases? Let's look at each type of converged infrastructure separately.

Unified Infrastructure

This is where the conversation begins. Traditionally, rack-mounted servers supported a one-application-per-server scenario. Virtualization changed all that. Unified infrastructure commonly describes a chassis and blade server environment. Here's the big point to consider: the modern blade and chassis backplane has come a long way. You can now integrate directly into fabric interconnects to provide massive blade throughput, and you can create hardware and service profiles that set hardware-based policies around things like UUIDs, WWNs, and MAC addresses. Using this kind of architecture, you could build a follow-the-sun data center capable of on-boarding new sets of users on the same hardware by dynamically re-provisioning chassis resources through those hardware and service profiles. Although these systems are powerful and extremely agile, they can be pricey; high-end blade architectures cost more than many alternatives. The most critical thing to understand, however, is your use case and how blades might apply to it.
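
To make the idea of hardware and service profiles more concrete, here is a minimal Python sketch of what such a profile might capture and how it could be re-associated with a different blade. The field names and the apply_profile function are hypothetical illustrations, not any specific vendor's API.

```python
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    """Hypothetical hardware/service profile: identity and policy settings
    that can be stamped onto any compatible blade in the chassis."""
    name: str
    uuid: str                 # server identity presented to the OS
    wwn: str                  # Fibre Channel World Wide Name for SAN access
    mac_addresses: list[str]  # NIC identities
    boot_policy: str          # e.g. "san-boot" or "local-disk"
    firmware_package: str     # pinned firmware bundle for consistency

def apply_profile(profile: ServiceProfile, blade_slot: int) -> None:
    """Illustrative placeholder: a real chassis manager would push these
    identity and policy values to the blade in the given slot, letting the
    same identity 'follow the sun' onto different physical hardware."""
    print(f"Associating profile '{profile.name}' with blade slot {blade_slot}")

# Example: re-provision the same identity onto a different blade overnight.
web_tier = ServiceProfile(
    name="web-tier-01",
    uuid="8d3f6a2e-0000-0000-0000-000000000001",
    wwn="20:00:00:25:b5:aa:00:01",
    mac_addresses=["00:25:b5:aa:00:01"],
    boot_policy="san-boot",
    firmware_package="bundle-4.2",
)
apply_profile(web_tier, blade_slot=3)
```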
Converged Node-Based Architecture

The evolution of compute and storage took a turn when converged infrastructure was introduced. These are smaller node-based units combining storage and compute in one box, sometimes referred to as an appliance. Need to grow? Simply add another node or a full appliance and go. This has become a fantastic way to improve data center resource utilization: instead of purchasing pricier gear and storage, organizations can offload big workloads to smaller converged infrastructure nodes. Because resources are pushed directly into the workloads sitting on top, converged infrastructure is a great scale-out solution. There are some cautions, though. Many converged infrastructure solutions support only one hypervisor or another; if you're a XenServer shop, for example, check what you can integrate with your environment. Many converged infrastructure technologies also won't integrate with things like FC/FCoE. Still, if you have a solid use case for converged infrastructure, you'll be happy with great performance at a solid price.
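
As a rough illustration of the scale-out model, the following Python sketch sums cluster capacity as appliances are added. The node specifications are invented numbers used only to show the "add a node and go" growth pattern.

```python
from dataclasses import dataclass

@dataclass
class ConvergedNode:
    """One converged appliance node: compute and storage in a single box.
    The specs below are illustrative, not any vendor's actual sizing."""
    cores: int
    ram_gb: int
    usable_tb: float

def cluster_capacity(nodes: list[ConvergedNode]) -> dict:
    """Scale-out capacity is roughly the sum of the nodes you add."""
    return {
        "cores": sum(n.cores for n in nodes),
        "ram_gb": sum(n.ram_gb for n in nodes),
        "usable_tb": sum(n.usable_tb for n in nodes),
    }

# Start with a three-node appliance...
cluster = [ConvergedNode(cores=24, ram_gb=512, usable_tb=20.0) for _ in range(3)]
print(cluster_capacity(cluster))

# ...then simply add another node and go.
cluster.append(ConvergedNode(cores=24, ram_gb=512, usable_tb=20.0))
print(cluster_capacity(cluster))
```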
Hyper-Converged Infrastructure

This is where it gets more interesting. First, let's differentiate between converged and hyper-converged infrastructures. The key differentiating point, and the whole premise behind hyper-convergence, is that this model doesn't actually rely on the underlying hardware, at least not entirely. This approach truly converges all aspects of data processing at a single compute layer, dramatically simplifying storage and networking through software-defined approaches. The same compute system now works as a distributed storage system, removing much of the complexity of storage provisioning and bringing storage technology in tune with server refresh cycles. Here's the big piece to remember: because software provides the storage controller functionality, hyper-convergence is completely hardware-agnostic. It abstracts the management process and lets you custom-build the underlying hardware stack, which in turn can lead to serious cost savings. Prefer one vendor because your entire data center is built around them? Fine. Like white-box or commodity servers? Those work too. As long as a hyper-converged virtual appliance is running in the hypervisor, you can control the underlying set of resources. Furthermore, this level of convergence opens up a new level of API integration. With an open API architecture and a lot of intelligence in the software, new hyper-convergence technologies can integrate with OpenStack, CloudStack, IBM, vCenter, vCAC, VVOLs, VAAI, S3, and more. This takes the convergence conversation to a whole new level by combining the functionality of compute, storage, and networking on a single device through intelligent software and basic hardware components. Suppose one vendor's hardware runs one kind of hypervisor in a primary data center, while a different vendor's hardware runs another hypervisor at a secondary data center. As long as the same hyper-convergence virtual appliance controls the underlying resources at both data centers, entire data sets and VMs can be migrated between those heterogeneous infrastructures.
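
To illustrate the idea of a single software control plane spanning heterogeneous sites, here is a minimal Python sketch. The HyperConvergedAppliance class, its methods, and the site details are hypothetical stand-ins for whatever API a real hyper-converged appliance would expose; they are not a specific product's interface.

```python
from dataclasses import dataclass, field

@dataclass
class HyperConvergedAppliance:
    """Hypothetical sketch of the software layer in a hyper-converged stack:
    it owns storage and VM placement, while the hardware vendor and
    hypervisor underneath can differ per site."""
    sites: dict = field(default_factory=dict)  # site name -> site details

    def register_site(self, name: str, hypervisor: str, hardware_vendor: str) -> None:
        # Each site presents one pool of compute and storage to the appliance.
        self.sites[name] = {"hypervisor": hypervisor,
                            "hardware": hardware_vendor,
                            "vms": []}

    def provision_vm(self, site: str, vm_name: str) -> None:
        self.sites[site]["vms"].append(vm_name)

    def migrate_vm(self, vm_name: str, source: str, target: str) -> None:
        # Because the same software controls both sites, the VM can move
        # between heterogeneous hardware and hypervisors.
        self.sites[source]["vms"].remove(vm_name)
        self.sites[target]["vms"].append(vm_name)

# Example: two sites, different vendors and hypervisors, one control plane.
hci = HyperConvergedAppliance()
hci.register_site("primary-dc", hypervisor="vSphere", hardware_vendor="vendor-a")
hci.register_site("secondary-dc", hypervisor="KVM", hardware_vendor="white-box")
hci.provision_vm("primary-dc", "analytics-vm-01")
hci.migrate_vm("analytics-vm-01", source="primary-dc", target="secondary-dc")
print(hci.sites)
```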
The reality here is that we're creating a much more fluid data center architecture. Soon, an entire hardware stack will be abstracted and managed from the virtual layer. The ultimate goal is to let data, VMs, and applications flow from on-premises data centers to the cloud and everywhere in between. This agility allows organizations to respond quickly to new business demands by applying resources precisely where they're needed. The future of the data center revolves around supporting an ever-evolving user. Hyper-convergence lets you use heterogeneous hardware systems, coupled with different hypervisors, to deliver dynamic resources to a variety of endpoints. Moving forward, businesses will depend more and more on the underlying data center. Keeping your infrastructure agile will help you retain your competitive edge.