Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, December 2nd, 2015

    1:00p
    Understanding the Different Kinds of Infrastructure Convergence

    As company computing demands change, what will the architecture that supports modern businesses and their cloud initiatives look like?

    One of the hottest concepts to emerge is infrastructure convergence. We have unified architecture, converged storage, converged infrastructure, and now also hyper-convergence. But what does it all mean? How can convergence apply to your business and your use cases? Let’s take a look at each type of converged infrastructure separately.

    Unified Infrastructure

    This is where the conversation begins. Traditionally, rack-mounted servers supported a one-application-per-server model. Virtualization changed all that. Unified infrastructure commonly refers to a chassis and blade server environment. Here's the big point to consider: the modern blade and chassis backplane has come a long way. In fact, you can now integrate directly into fabric interconnects to provide massive blade throughput. Furthermore, you can create hardware and service profiles that let you set hardware-based policies around things like UUID, WWN, and MAC addresses. Using this kind of architecture, you could build a follow-the-sun data center capable of onboarding new sets of users on the same hardware components by dynamically re-provisioning chassis resources through those hardware and service profiles. Although these kinds of systems are powerful and extremely agile, they can be pricey. High-end blade architectures can be costly compared to alternatives. The most critical thing to understand, however, is your use case and how blades might apply to it.
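    To make the idea of hardware and service profiles concrete, here is a minimal, hypothetical sketch (not any vendor's actual API) of how a profile carrying identities like UUID, WWN, and MAC could be bound to a blade, then swapped to re-provision the same hardware:

    ```python
    # Hypothetical sketch: the identities (UUID, WWN, MAC) live in the profile,
    # not in the blade, so re-provisioning a chassis slot is simply a matter of
    # binding a different profile to the same physical hardware.
    from dataclasses import dataclass

    @dataclass
    class ServiceProfile:
        name: str
        uuid: str          # server identity presented to the OS
        wwn: str           # Fibre Channel world wide name for SAN zoning
        mac: str           # NIC MAC address for network policies
        vlan: int          # network segment the workload should land on
        boot_target: str   # SAN LUN or image the blade boots from

    def apply_profile(chassis_slot: int, profile: ServiceProfile) -> None:
        """Pretend to push a profile to a blade; a real system would call the
        chassis or fabric interconnect management API here."""
        print(f"Slot {chassis_slot}: UUID={profile.uuid}, MAC={profile.mac}, "
              f"WWN={profile.wwn}, VLAN={profile.vlan}, boot={profile.boot_target}")

    # Follow-the-sun example: the same blade serves EU users by day, APAC by night.
    eu_profile = ServiceProfile("eu-web", "uuid-0001", "wwn-aa01",
                                "00:25:b5:00:00:01", 110, "san-lun-eu")
    apac_profile = ServiceProfile("apac-web", "uuid-0002", "wwn-aa02",
                                  "00:25:b5:00:00:02", 210, "san-lun-apac")

    apply_profile(chassis_slot=3, profile=eu_profile)
    apply_profile(chassis_slot=3, profile=apac_profile)  # same hardware, re-provisioned
    ```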

    • Use cases: A chassis and blade environment is great for a big scale-out architecture, such as a large telecom or service provider. This kind of environment utilizes a vast number of resources and might need to deploy hundreds if not thousands of racks of gear. Blades can isolate workloads, create powerful orchestration rules, and provide dynamic support for business needs.

    Converged Node-Based Architecture

    The evolution of compute and storage took a turn when converged infrastructure was introduced. Basically, these are smaller node-based units combining storage and compute in one box, sometimes referred to as an appliance. Need to grow? Simply add another node or a full appliance and go. This has become a fantastic way to improve data center resource utilization. Instead of purchasing pricier gear and standalone storage, organizations can offload big workloads to smaller converged infrastructure nodes.

    Because resources are pushed directly into the workloads sitting on top, converged infrastructure is a great scale-out solution. There are some cautions, though. Many converged infrastructure solutions only support one hypervisor or another. For example, if you're a XenServer shop, be aware of what you can integrate with your environment. Also, many converged infrastructure technologies won't integrate with things like FC/FCoE. Still, if you've got a solid use case for a converged infrastructure technology, you'll be happy with great performance at a solid price.
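    A minimal sketch of why node-based scaling is attractive (the per-node figures below are made up for illustration): appliance capacity is just the sum of its nodes, so growth means adding a node rather than sizing a separate SAN purchase.

    ```python
    # Illustrative only: per-node capacities are assumptions, but the point is
    # that a node-based appliance scales linearly -- each new node adds a known
    # slice of compute and storage.
    from dataclasses import dataclass

    @dataclass
    class Node:
        cores: int
        ram_gb: int
        storage_tb: float

    def appliance_capacity(nodes: list[Node]) -> dict:
        return {
            "cores": sum(n.cores for n in nodes),
            "ram_gb": sum(n.ram_gb for n in nodes),
            "storage_tb": sum(n.storage_tb for n in nodes),
        }

    appliance = [Node(cores=24, ram_gb=256, storage_tb=8.0) for _ in range(4)]
    print(appliance_capacity(appliance))   # four-node appliance

    appliance.append(Node(cores=24, ram_gb=256, storage_tb=8.0))
    print(appliance_capacity(appliance))   # "need to grow? add another node"
    ```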

    • Use cases: A medium-sized organization wanting to offload VDI architecture from traditional rack-mount servers and a standard SAN to a more efficient, price-conscious platform may choose a four-node appliance that can be upgraded later. A compact converged appliance will help eliminate several servers in a rack, free up a lot of disk, and improve performance. Furthermore, the desktop and application delivery architecture all sits under one hypervisor, making it even easier to manage resources between the converged infrastructure unit and the VMs.

    Hyper-Converged Infrastructure

    This is where it gets a bit more interesting. First, let's differentiate between converged and hyper-converged infrastructures. The key differentiating point – and the whole premise behind hyper-convergence – is that this model doesn't actually rely on the underlying hardware. Not entirely, at least. This approach truly converges all aspects of data processing at a single compute layer, dramatically simplifying storage and networking through software-defined approaches. The same compute system now works as a distributed storage system, removing chunks of complexity from storage provisioning and bringing storage technology in tune with server technology refreshes.

    Here's the big piece to remember: since the key aspect of hyper-convergence is software handling the storage controller functionality, it's completely hardware-agnostic. This means that hyper-convergence completely abstracts the management process and allows you to custom-build the underlying hardware stack, which in turn could lead to some serious cost savings. What if you prefer one vendor because your entire data center is built around them? Fine. Maybe you like white-box or commodity servers? Those work too.

    As long as a hyper-converged virtual appliance is running in the hypervisor, you can control the underlying set of resources. Furthermore, this level of convergence opens up a new level of API integration. With an open API architecture and a lot of intelligence in the software, new kinds of hyper-convergence technologies can integrate with OpenStack, CloudStack, IBM, vCenter, vCAC, VVOLs, VAAI, S3, and so on. This takes the conversation around convergence to a whole new level by combining compute, storage, and networking functionality on a single device through intelligent software and basic hardware components.

    Let's assume one set of hardware is running on one kind of hypervisor in a primary data center, while another (different vendor's) set of hardware runs on a different type of hypervisor at a secondary data center. As long as the same hyper-convergence virtual appliance controls the underlying resources – while connected to both data centers – entire data sets and VMs can be migrated between the heterogeneous infrastructures.
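    A rough sketch of the abstraction being described, assuming hypothetical drivers for two different hypervisor stacks (these are stand-ins, not real hypervisor APIs): because the control logic lives in software above both sites, migrating a VM between heterogeneous infrastructures becomes a single call against a common interface.

    ```python
    # Hypothetical sketch: a software control plane treats two different
    # hardware/hypervisor stacks as interchangeable resource pools.
    from abc import ABC, abstractmethod

    class HypervisorDriver(ABC):
        @abstractmethod
        def export_vm(self, vm_name: str) -> bytes: ...
        @abstractmethod
        def import_vm(self, vm_name: str, image: bytes) -> None: ...

    class PrimarySiteDriver(HypervisorDriver):       # e.g. vendor A hardware, hypervisor X
        def export_vm(self, vm_name):
            print(f"[primary] exporting {vm_name}")
            return b"disk-and-config-stream"
        def import_vm(self, vm_name, image):
            print(f"[primary] importing {vm_name}")

    class SecondarySiteDriver(HypervisorDriver):     # e.g. vendor B hardware, hypervisor Y
        def export_vm(self, vm_name):
            print(f"[secondary] exporting {vm_name}")
            return b"disk-and-config-stream"
        def import_vm(self, vm_name, image):
            print(f"[secondary] importing {vm_name}")

    def migrate(vm_name: str, src: HypervisorDriver, dst: HypervisorDriver) -> None:
        """Move a VM between heterogeneous sites via the common abstraction."""
        dst.import_vm(vm_name, src.export_vm(vm_name))

    migrate("payroll-app", PrimarySiteDriver(), SecondarySiteDriver())
    ```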

    • Use cases: Your organization is growing very quickly, both organically and through acquisitions. This means a constant rotation of different hardware sets, new data center additions, and support for an ever-growing number of users. This is where hyper-convergence really shines. To absorb such a large number of new users, you deploy two 24TB appliances built around your required vendor. From there, you deploy software-defined storage policies and work to create a central storage control infrastructure. Now you have complete visibility into all storage resources while still processing workloads on the hyper-converged platform. As another initiative, you plan to migrate the workloads controlled by the hyper-converged VM appliance into the cloud. The cool aspect of working with this kind of VM-based appliance is the ability to integrate with OpenStack, vCAC, and other cloud orchestration platforms. Now the organization can control resources located both on-premise and in its cloud.

    The reality here is that we're creating a much more fluid data center architecture. Soon, an entire hardware stack will be abstracted and managed from the virtual layer. The ultimate goal is to allow data, VMs, and applications to flow from on-premise data centers to the cloud and everywhere in between. This agility allows organizations to quickly respond to new kinds of business demands by applying resources precisely where they're needed. The future of the data center revolves around supporting an ever-evolving user. Hyper-convergence allows you to utilize heterogeneous hardware systems, coupled with different hypervisors, to deliver dynamic resources to a variety of points. Moving forward, businesses will continue to depend more and more on the underlying data center. Keeping your infrastructure agile will help you retain your competitive edge.

    4:00p
    Why You Still Have a Bandwidth Problem

    Dave Ginsburg is CMO at Teridion.

    Innovation is moving at electric speeds, bringing us new and exciting technology on a daily basis. We’ve witnessed many things – the inception of self-driving cars, smart watches, fitness trackers and home automation. These innovations hit the market over the past few years and have changed how we fundamentally interact with technology. We can stream videos on our devices regardless of whether we are on a bus or on an airplane, and cloud-based music services know our tastes better than we do. But even with all the innovation, videoconferences still lag, files upload slowly and ads on websites take forever to load. So why do these bandwidth issues still exist in a time of such great innovation, and what can we do to move past them?

    Applications Growing Up Fast

    The Internet has been unable to keep pace with advancements in personalized, user-driven applications and services, such as unified communications, social media, or even news services riddled with multimedia advertisements. The complexity of these technologies is putting a major strain on infrastructure because they require speed, always-on reliability, and a high-quality end-user experience. The sheer volume of Internet traffic flooding our networks only exacerbates the issue. As a result, we never get everything an application has to offer and is capable of achieving.

    For cloud-based services, latency and packet loss are killers. A major reason this remains an issue is that distributed applications are growing in popularity. Applications and content now need to be served to users in remote parts of the world without sluggish response times or, worse, downtime. When users are far from servers, performance issues are much more likely to ensue. For example, a low-loss connection between London and Frankfurt that supports 1.8 Gbps drops to just 2.2 Mbps to Singapore at only 0.1 percent packet loss, a typical figure between regions. That is close to a 1,000-fold decrease.
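    The distance-and-loss effect can be estimated with the well-known Mathis approximation for steady-state TCP throughput, roughly MSS / (RTT × √loss). The round-trip times and loss rates below are illustrative assumptions, not the article's measurements, but they land in the same order of magnitude:

    ```python
    import math

    def tcp_throughput_bps(mss_bytes: int, rtt_s: float, loss: float) -> float:
        """Mathis et al. approximation: throughput ~ MSS / (RTT * sqrt(p))."""
        return (mss_bytes * 8) / (rtt_s * math.sqrt(loss))

    # Assumed figures: ~15 ms RTT London-Frankfurt vs ~200 ms London-Singapore.
    nearby  = tcp_throughput_bps(mss_bytes=1460, rtt_s=0.015, loss=1e-6)  # very low loss
    faraway = tcp_throughput_bps(mss_bytes=1460, rtt_s=0.200, loss=1e-3)  # 0.1% loss

    print(f"Nearby path : {nearby / 1e6:,.0f} Mbps")   # hundreds of Mbps
    print(f"Distant path: {faraway / 1e6:.1f} Mbps")   # a few Mbps, orders of magnitude lower
    ```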

    Limitations of Content Delivery

    The Internet’s routing and transport protocols are no longer sufficient. In an attempt to remedy this ongoing problem, companies have built CDNs and WAN acceleration solutions, but there’s a limit to their positive impact on user experience. Geography, the need for pre-provisioned PoPs, and cloud provider limitations all play a role in end-user experience, or lack thereof.

    Content delivery networks are created to, of course, improve how content is delivered. But even these technologies are limited by poor Internet performance across regions when the CDN operator doesn’t control the underlying path. Attempting to stay in stride with the many types, sizes and complexities of new content hitting the market, content delivery providers are turning to traditional methods, such as adding new data centers. While more capacity marginally aids bandwidth problems, building out infrastructure is time consuming and cost intensive – especially when the goal is for content to reach many regions with no change in user experience.

    Published CDN statistics can also be misleading. Admittedly, the majority of bits will be carried by CDNs in the coming years, a result of the mass adoption of streaming media. However, the number of discrete applications served will move in the opposite direction. Distributed gaming, video conferencing, ad serving, and even social networks are all increasingly personalized and non-cacheable. Some providers instead employ a quicker refresh of edge caching, which only adds cost and complexity to the problem – a Band-Aid for a bullet hole.

    Networking Optimized in the Cloud

    Network architectures are advancing, but not fast enough. The biggest promises of the Internet are within reach but service providers are holding back because they cannot guarantee quality, stability and speed. Networks need to address the vast number of applications and devices, while removing previous geographical and device constraints. Since the volume of Internet traffic feeds the problem, it’s also important to take into account Internet traffic congestion in as close to real time as possible. The network can then be adjusted to handle Internet traffic in accordance with the status of the network to better serve users.
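    A toy sketch of what adjusting the network "in accordance with the status of the network" can mean in practice, assuming hypothetical near-real-time measurements of candidate paths: steer traffic over the route whose current latency and loss give the best expected experience, and re-evaluate as fresh measurements arrive.

    ```python
    # Toy example of traffic-aware path selection: the measurements and scoring
    # weights are hypothetical; the point is routing on live network status
    # rather than static paths.
    paths = {
        "via-pop-frankfurt": {"rtt_ms": 95,  "loss_pct": 0.02},
        "via-pop-singapore": {"rtt_ms": 180, "loss_pct": 0.10},
        "direct":            {"rtt_ms": 240, "loss_pct": 0.40},
    }

    def score(metrics: dict) -> float:
        # Lower is better; weight loss heavily because of its outsized effect on TCP.
        return metrics["rtt_ms"] + 1000 * metrics["loss_pct"]

    best = min(paths, key=lambda name: score(paths[name]))
    print(f"Steering traffic over: {best}")
    ```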

    The value of the cloud has always been rooted in its flexibility – capacity on demand, consumption-based pricing, and more. This elasticity, however, has yet to carry over to networking to the extent it should. Compute and storage in the cloud are nothing new, but networking has yet to reap the full benefits of the cloud. If networks can take advantage of the flexibility of the cloud and become less tied to physical infrastructure, businesses will get a similar on-demand approach to networking.

    A Proactive Internet

    Rather than taking a reactive posture, as is common with legacy solutions, networks need to be proactive. Otherwise, businesses will continue to be stunted by the abundance of traffic coming from users all over the world. We need to know what’s happening at a granular level in our networks in order to enable greater flexibility, and act on that knowledge through innovative cloud-based routing architectures. That way, we are better positioned to solve problems, support these new applications and ultimately bring users an experience they can enjoy.

     

    7:13p
    Is Facebook Really Planning a Taiwan Data Center?

    It’s no secret that the market for internet services in Asia is growing fast, so it’s not surprising that Facebook may be looking for a good place to build a data center in the region.

    “May be” is the important bit here. Like other mega-data center operators – companies like Microsoft, Google, or Amazon – whose customer base spans the globe, Facebook is always looking for a good place to build its next data center. Because it can sometimes take several years to get a location approved and secured for one of these massive projects, data center site selection in multiple places around the world is an ongoing affair for Facebook and its web-scale peers, and the fact that Facebook is evaluating a site says little about its actual construction plans.

    A county official in Taiwan recently told Reuters that Facebook was evaluating a potential site for a data center in the country, which would be the social network's first data center in Asia Pacific. A local newspaper in Bozeman, Montana, reported this week that local officials there have met with Google about a potential data center site, and that Facebook has also shown interest.

    These reports mean the companies may be looking at potential sites to do as much groundwork as possible in advance, in case they make the decision to expand in one of the locations, at which point they need to get the data center up and running quickly. They also simply show how eager local officials are to attract these mega construction projects, which often cost hundreds of millions of dollars.

    Facebook may be looking at one or more sites in Taiwan — it will eventually need a data center in Asia — but it may also be looking at sites in Hong Kong, Singapore, Indonesia, or India.

    Rumors that Facebook was interested in a data center site in Taiwan surfaced as far back as 2011. The company promptly denied the reports. As of today, Facebook does not have a data center in Asia, company spokesman Michael Kirkland said.

    If Facebook announces a new data center location any time soon, it will be a second data center in Europe, he said. It is currently looking at multiple sites in Ireland and elsewhere in Western Europe and is close to making a decision. The company launched its first European data center, in Luleå, Sweden, in 2013.

    The rest of its data centers are all in the US: Prineville, Oregon; Forest City, North Carolina; and Altoona, Iowa. It also leases wholesale data center space in Silicon Valley and Northern Virginia.

    8:01p
    VCE and Cisco Team Up on Data Center Security


    This post originally appeared at The Var Guy

    VCE has teamed with Cisco Systems to optimize its data center security for enterprises running on Cisco infrastructure. The partnership is expected to help businesses gain more flexibility and agility when managing their hybrid networks.

    VCE has unveiled the next-generation version of its Vblock System integrated with Cisco Application Centric Infrastructure (ACI) to help customers build flexible, highly secure data centers that can meet the demands of cloud computing, VCE said in a press release.

    VCE’s Vblock System combines compute, network and storage technologies from Cisco, EMC and VMware to provide dynamic pools of resources that an enterprise can intelligently provision and manage depending on their changing business needs.

    The combination of Vblock and Cisco ACI allows data center managers to define a policy based on what an application requires, such as adherence to security compliance and data governance mandates, and continue to enforce the policy even as an app scales up, according to VCE. Customers also can quickly manage applications across leading cloud management platforms, and choose network and security services from more than 45 technology partners in the ACI ecosystem, the company said.
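    As an illustration of the "define the policy once, enforce it as the app scales" idea, here is a hand-rolled, hypothetical application policy in that spirit. It is not Cisco ACI's actual object model; the tier names, ports, and compliance tags are assumptions.

    ```python
    # Illustrative only: a declarative application policy that keeps applying the
    # same rules no matter how many instances each tier scales to.
    app_policy = {
        "application": "order-processing",
        "tiers": {
            "web": {"exposed_ports": [443],  "may_talk_to": ["app"]},
            "app": {"exposed_ports": [8443], "may_talk_to": ["db"]},
            "db":  {"exposed_ports": [5432], "may_talk_to": []},
        },
        "compliance": ["pci-dss", "data-residency-eu"],  # governance mandates to enforce
    }

    def allowed(policy: dict, src_tier: str, dst_tier: str) -> bool:
        """The same rule holds whether a tier runs on 3 instances or 300."""
        return dst_tier in policy["tiers"][src_tier]["may_talk_to"]

    print(allowed(app_policy, "web", "db"))   # False: web may not reach the database directly
    print(allowed(app_policy, "app", "db"))   # True
    ```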

    “Customers are finding that with ACI on a VCE Vblock System they have better visibility across the entire enterprise and application deployment time, and time spent on networking administrative activities are significantly reduced,” said VCE’s COO Tim Page. “Most importantly, making networking changes are much faster with significantly lower risk, enabling IT to quickly respond to new or changing business requirements.”

    Arqiva, a communications infrastructure provider, is one of the customers that says it has seen the benefits of using VCE with Cisco ACI. The company needed a new network to adopt a cloud strategy and improve the efficiency of its IT stack, and it also wanted to invest in new services for its customers, according to VCE.

    Arqiva chose a Cisco software-defined network and is extending its VCE converged infrastructure with application-level control and visibility across its entire data center to provide network consistency and high-level security, according to VCE.

    “The combination of Vblock with Cisco ACI is allowing the company to simplify troubleshooting and rapidly resolve problems for physical, virtual and cloud workloads to enable faster deployment and operational control,” said Paul Freemantle, IT and connectivity director for Arqiva, in the release.

    “It became evident pretty quickly that Cisco ACI on a VCE Vblock System will enable Arqiva to deploy more quickly and have greater operational control through multi-layered security capabilities, to meet business and IT objectives,” he said. “We also feel confident that the deep engineering knowledge that Cisco and VCE have of one another’s products will ensure we can deploy our new network with minimum risk, and at speed so we can realize the benefits immediately.”

    This first ran at https://thevarguy.com/network-security-and-data-protection-software-solutions/vce-teams-cisco-boost-data-center-security
