Data Center Knowledge | News and analysis for the data center industry
Thursday, February 11th, 2016
Next-Generation Convergence is the Future of Cloud and Data Center
New levels of resource management are introducing new challenges in cloud computing and the modern data center. We’re seeing different kinds of applications, users, and even entire business units accessing data center resources, and there are no signs of data center and cloud utilization slowing down.
Cloud computing adoption is growing, and by 2016 will increase to become the bulk of new IT spend, according to Gartner. 2016 will be a defining year as private cloud begins to give way to hybrid cloud, and nearly half of large enterprises will have hybrid cloud deployments by the end of 2017.
“Overall, there are very real trends toward cloud platforms, and toward massively scalable processing. Virtualization, service orientation, and the internet have converged to sponsor a phenomenon that enables individuals and businesses to choose how they’ll acquire or deliver IT services, with reduced emphasis on the constraints of traditional software and hardware licensing models,” said Chris Howard, research vice president at Gartner. “Services delivered through the cloud will foster an economy based on delivery and consumption of everything from storage to computation to video to finance deduction management.”
Today, the data center is tasked with supporting more users, who are accessing more applications and resources. All of this translates to creating better data center controls and enabling even greater levels of multi-tenancy.
But what type of architecture can actually support this level of growth? What kind of ecosystem can aggregate resources and allow entire data centers to be truly agile? What options are out there which can redefine data center economics and improve management, automation, and even cloud integration?
The answer may very well revolve around new technologies from various converged and hyperconverged architectures. According to a recent Gartner report, “hyperconverged integrated systems will represent over 35 percent of total integrated system market revenue by 2019.” This makes it one of the fastest-growing and most valuable technology segments in the industry today.
Read more: Why Hyperconverged Infrastructure is so Hot
And there are good reasons for this kind of growth. Let’s examine where converged technologies will be impacting both the data center and the cloud architecture of the future.
- Convergence will create new levels of resource controls. There’s been a major resurgence behind a number of virtualization technologies, including VDI, application delivery, and user management. Because of this, resource management has been a critical initiative for a number of organizations. The challenge revolves around resources which are isolated, hard to get to, or not properly utilized. This is where converged infrastructure comes in. Remember, this spans logical and physical deployments of convergence. Converged infrastructure solutions act as central points for resource control; it’s as simple as that. Data center and cloud administrators have fewer management points and greater levels of control over their critical (and expensive) resources. Moving forward, there will be more virtual technologies and even more integration with cloud. Organizations challenged with growth and resource management need to look at converged architecture solutions to simplify management.
- Converged infrastructure integrates APIs and creates a more open architecture. There are two ways to look at converged infrastructure: either you have a physical architecture or you have a virtual appliance which aggregates resources. Both will create a much more interconnect-friendly architecture and allow for the integration of third-party solutions. This can be monitoring, security, networking services, application management, cloud services, and more. One of the biggest goals of a good converged infrastructure solution is to create an open architecture capable of integrating with other data center and cloud resources. All of this allows administrators to quickly integrate tools which help keep the overall architecture healthy. It also helps prevent infrastructure lock-in. Even if you choose one converged infrastructure vendor, the best will still allow you to integrate with other ecosystem components. This is a big initiative for many cloud and data center environments – the ability to create a robust architecture that can support the business, while still integrating with a number of external resources.
- Converged infrastructure allows for better data center economics. Data center environmental variables are always critical concerns for data center efficiency. Converged infrastructure systems actually help with the overall data center footprint, power utilization, and even cooling efficiencies. As you structure your next-generation data center or cloud platform, converged infrastructure can help redefine data center economics. During hardware refreshes, it’s critical for organizations to understand their use cases, where their business is going, and how IT will support these goals. Also, they need to look at the underlying ecosystem built around efficiency. Converged infrastructure can remove older, isolated components and allow you to aggregate critical resources. Similarly, you can create greater levels of multi-tenancy where you can segment user groups and applications, while still delivering a powerful user experience.
- Convergence will allow for data center and business agility. Converged infrastructure will absolutely allow for greater levels of business agility. You can provision and deprovision resources based on context, application, desktops, and even business unit. This means that organizations can adapt to very quickly changing business dynamics. Ultimately, by being able to be truly agile, convergence helps create very real competitive advantages. By supporting more business use cases and allowing for greater resource agility, your business can quickly support a very diverse user-base and a growing business. The ability to respond to market dynamics is absolutely impacted by the capabilities of your cloud and data center ecosystem. This is why it’s so critical to deploy components which are capable of rapid scale and business support.
- Converged architecture will impact security design. With the rise of virtualization delivery and new levels of content delivery, the conversation around security is only heating up. Converged infrastructure actually plays a big role in creating the next-generation security architecture. You can aggregate resources, set very specific control and management policies, and even allow access to workloads based on user context. Lost resources can be very real security concerns. By aggregating data around application, desktop, and content delivery you create controls over sensitive data points. From this converged architecture you can control where these resources go and how they interact with cloud technologies. Moving forward, organizations will be fighting a number of advanced persistent threats (APTs). Converged infrastructure introduces new levels of management, workload controls, and cloud integration. Now, organizations can deploy convergence in more use cases, especially where security might be a priority. From there, they can set strict multi-tenancy policies and manage access based on a number of granular administrative controls.
New converged infrastructure solutions come in both physical and logical forms. You can have a virtual appliance which aggregates distributed resources or a physical converged infrastructure deployment. This can be a multi-rack ecosystem or even a smaller, node-based, architecture supporting a specific use case. The point is that converged infrastructure helps combine critical resources into one logical management layer.
One point to remember here is that the concept of “converged infrastructure” isn’t entirely new. Many will argue that converged systems have been around for some time. The major difference, however, has been the optimizations, management tools, and API integration points that now make up the modern converged ecosystem. Furthermore, connecting into the entire backplane and reducing the number of hops that data must take greatly helps with content and resource delivery.
So, although the concept has been around, the next-generation convergence environment is helping redefine how organizations deploy cloud and data center solutions. Your future data center and cloud ecosystem may very well be a part of a powerful converged infrastructure solution. Either way, these types of systems (both logical and physical) can impact a number of different organizations and many different verticals.
Eight Key Features for IT Managers in Latest Docker Release
As Docker the company continues to push its eponymous platform for building, deploying, and running applications in Linux containers into more enterprise data centers, more and more improvements are focused on the traditional enterprise IT musts, such as high availability and security.
The latest release of Docker, Docker 1.10, which came out earlier this month, adds a lot of features that are important to IT managers who deploy containerized applications in their data centers. Scott Johnston, Docker’s senior VP of product, ran us through some of the key additions in the latest release that will matter to them the most:
1. Automatic Rescheduling when Servers Fail
Swarm, Docker’s tool for managing the server clusters that containerized applications run on, can now automatically reschedule containers when a node in the cluster fails. Because it is aware of which containers run on which node, if one of the nodes fails, it will schedule those containers to run on a healthy node in the cluster.
The feature is experimental, meaning Docker isn’t ready to commit to productizing it just yet or guarantee that it will work as expected. Rescheduling workloads upon failure is fundamental to high-availability IT systems, and Docker wants to bring that to infrastructure that hosts Docker containers.
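For readers who want to try the experimental behavior, here is a minimal sketch of how a rescheduling policy can be requested, based on the reschedule environment variable documented for Swarm at the time. The Swarm endpoint address and image are placeholders, and the docker-py client usage is an assumption for illustration, not Docker's official example.

```python
# Minimal sketch: ask a Swarm manager to reschedule this container if the
# node it lands on fails. Assumes the docker-py 1.x client and a reachable
# Swarm manager endpoint (the address below is a placeholder).
from docker import Client

swarm = Client(base_url="tcp://swarm-manager.example.com:2375")

container = swarm.create_container(
    image="nginx:latest",
    # Swarm reads this environment variable as the experimental
    # rescheduling policy; without it, a failed node's containers stay down.
    environment=["reschedule:on-node-failure"],
)
swarm.start(container)
```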
Read more: Making the Internet Programmable One Docker Container at a Time
2. Better Clustering Capabilities for Servers
Until this release, if a node failed to attach itself to its intended cluster, the cluster would just launch without it. Now the node will keep trying to join the cluster, giving up only after a pre-determined number of failed attempts.
3. Separate Privileges for Container and Host
Addressing a security issue, Docker has separated access privileges inside a container from access privileges outside. In other words, if a user has a certain set of privileges inside a container, those privileges do not necessarily apply at the host level. A user with root access inside a container who manages to install malicious code won’t necessarily be able to do the same to the host, which contains the scale of damage they can do.
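The mechanism behind this is the user namespace support added in this release, which remaps root inside a container to an unprivileged ID on the host. One informal way to see the effect, assuming a Linux host with user namespaces enabled on the daemon and the docker CLI on the PATH, is to compare the UID a containerized process reports with the UID the host kernel actually assigns it; the container name below is a placeholder.

```python
# Informal check (not an official Docker tool): compare the UID a process
# reports from inside a container with the UID the host kernel runs it as.
import subprocess

NAME = "some-container"  # placeholder: any running container's name

# Inside the container, root still reports UID 0 ...
uid_inside = subprocess.check_output(
    ["docker", "exec", NAME, "id", "-u"]).decode().strip()

# ... while on a user-namespace-enabled host, the same process belongs to a
# remapped, unprivileged UID.
pid = subprocess.check_output(
    ["docker", "inspect", "-f", "{{.State.Pid}}", NAME]).decode().strip()
uid_on_host = subprocess.check_output(
    ["stat", "-c", "%u", "/proc/" + pid]).decode().strip()

print("inside the container:", uid_inside, "| on the host:", uid_on_host)
```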
4. Simpler Way to Lock down Docker Engine
In the past, Docker has exposed application syscalls to system administrators to help them secure the Docker environment, but taking advantage of that required some deep kernel knowledge. The latest release introduces seccomp profiles, an industry-standard way to limit the types of syscalls an application can make (seccomp stands for “secure computing mode”).
The feature abstracts a highly technical level of kernel calls to the level of security policy. In many cases sysadmins are likely to already have seccomp profiles for Linux hosts, and now they can apply those to the Docker Engine and containers running on it.
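To make the idea concrete, here is a rough sketch of what a whitelist-style profile looks like, built as plain JSON from Python. The field names follow the documented seccomp profile format, but the tiny syscall list is purely illustrative; a real profile needs a far longer allow list, and the filename and run command are placeholders.

```python
# Illustrative whitelist-style seccomp profile: deny every syscall by
# default and explicitly allow a handful. Not sufficient for a real workload.
import json

profile = {
    "defaultAction": "SCMP_ACT_ERRNO",            # deny by default
    "syscalls": [
        {"name": "read",       "action": "SCMP_ACT_ALLOW"},
        {"name": "write",      "action": "SCMP_ACT_ALLOW"},
        {"name": "exit_group", "action": "SCMP_ACT_ALLOW"},
    ],
}

with open("profile.json", "w") as f:
    json.dump(profile, f, indent=2)

# The profile is then attached when the container starts, roughly:
#   docker run --security-opt seccomp:profile.json <image>
```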
5. Content-Addressable Image IDs
This new feature derives a Docker image’s ID from the content inside it. That adds an extra level of security: if an image gets tampered with, its content no longer matches its ID, making the tampering easy to detect.
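Conceptually, content addressing is simple: the identifier is a cryptographic digest of the bytes themselves, so changing the bytes changes the ID. The few lines below are a generic illustration of that idea, not Docker’s actual implementation.

```python
# Content addressing in miniature: the ID is a digest of the content itself,
# so any tampering with the content invalidates the ID.
import hashlib

def content_id(blob: bytes) -> str:
    return "sha256:" + hashlib.sha256(blob).hexdigest()

layer = b"original image layer contents"
image_id = content_id(layer)

tampered = layer + b" plus injected code"
assert content_id(tampered) != image_id  # the mismatch exposes the tampering
```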
6. Authorization Plugins
New plugins give the administrator an easy way to set policies for access to the Docker daemon.
7. DNS Server Comes to Docker
DNS, the well-known and well-loved system for resolving hostnames to IP addresses, is now embedded in Docker Engine. Hostname lookups among containers can now be handled by that embedded DNS server, which makes the system as a whole more reliable and scalable.
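As a small, hedged sketch of what that looks like in practice, assuming the docker-py 1.x client against a local Docker 1.10 engine: containers attached to the same user-defined network can reach each other by container name, with the engine’s embedded resolver answering the lookups. The network and container names here are invented for illustration.

```python
# Sketch: a container on a user-defined network becomes reachable by name.
# Assumes the docker-py 1.x client and a local Docker 1.10 engine.
from docker import Client

client = Client(base_url="unix://var/run/docker.sock")

client.create_network("app-net", driver="bridge")

web = client.create_container(image="nginx:latest", name="web")
client.connect_container_to_network(web, "app-net")
client.start(web)

# Any other container attached to "app-net" can now reach this one simply
# as "web" (e.g. `ping web`), courtesy of the embedded DNS server.
```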
8. Network Becomes an Object for Compose
Compose, Docker’s system for defining a containerized application and all of its infrastructure requirements so that it can be deployed in any environment, now treats networks as objects, the same way it treats containers and storage.
When developers build containerized apps, they usually don’t know what network stack their apps will run on in the data center. To them, the network is an abstraction. When IT managers deploy the app, they want to reference a specific network stack, and this new feature allows easy mapping between the application’s abstraction of the network as defined by the developer and the implementation of that network interface in the data center.
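As a hedged sketch of how that mapping can be expressed: the version-2 Compose file format adds a top-level networks section, and marking a network as external tells Compose to attach to one that already exists rather than create its own. The snippet below writes such a file from Python; the service, image, and network names are invented for illustration, and PyYAML is assumed.

```python
# Sketch of a version-2 Compose definition with networks as first-class
# objects. "datacenter-prod-net" stands in for whatever real network the
# operations team actually provides.
import yaml  # PyYAML

compose = {
    "version": "2",
    "services": {
        "web": {
            "image": "nginx:latest",
            "networks": ["frontend"],   # the developer's abstract network
        },
    },
    "networks": {
        # At deploy time, "frontend" maps onto a pre-existing network
        # managed by IT rather than one Compose creates itself.
        "frontend": {"external": {"name": "datacenter-prod-net"}},
    },
}

with open("docker-compose.yml", "w") as f:
    yaml.safe_dump(compose, f, default_flow_style=False)
```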
IBM SoftLayer Blocks Services in Iran as US Lifts Sanctions 
By The WHIR
IBM’s SoftLayer services are not available in Iran as the cloud giant has blocked all IPs coming from the country, according to a report by VentureBeat on Wednesday. The ban has impacted SoftLayer customers including Café Bazaar, an app store in Iran, and Elex-Tech, the developer of the Clash of Kings game.
The blocking measures support IBM’s compliance process for foreign countries, according to the report. IBM blocks services in countries that don’t comply with standard procedures, and will unblock services if the content is not banned under US trade and economic sanctions.
Martin Blanc of Bidness Etc. notes that “the discontinuation of service doesn’t make much sense as sanctions have already been lifted from the country.” After the sanctions lifted in January 2016, Microsoft resumed email services in Iran while Apple and Lenovo are among US technology companies exploring a possible return to the Iranian market, according to a report by the International Campaign for Human Rights in Iran. Docker is also looking into resuming services in Iran.
“US sanctions on Iran have drawn harsh criticism over the years. But sanctions have intensified under five presidents, indicating a broad bipartisan consensus that sanctions, for all their faults, are an important part of the US policy mix towards Iran,” Patrick Clawson, director of research at the Washington Institute for Near East Policy, where he directs the Iran Security Initiative, said in The Iran Primer.
Experts said the removal of sanctions will have a positive impact on Iranian internet users, including the immediate benefit of being able to access international SSL certificates, according to a report by the Campaign.
Iranians were previously blocked from purchasing international SSL certificates because of financial sanctions, and the national SSL certificates available allowed state authorities to decrypt the connection.
Iranians will also be able to purchase hosting services from companies based outside the country, which will prevent state authorities from gaining access to user accounts, something that was possible with domestic hosting companies.
The lifted sanctions should also enable more investment in the country’s telecommunications infrastructure, which is in dire need of modernization, and, activists hope, lead to improved Internet speeds.
The WHIR has reached out to IBM SoftLayer for comment.
This first ran at http://www.thewhir.com/web-hosting-news/ibm-softlayer-blocks-services-in-iran-as-us-lifts-sanctions
Curb Data Center Downtime with Predictive Maintenance
Paul Lachance is President of Smartware Group.
As the world becomes increasingly dependent on the Internet, data centers have come to power our everyday lives. In fact, the average US consumer spends roughly six hours a day online. When a data center goes down, it can negatively impact everything from professional and personal communications to finances and travel.
The financial implications of data center downtime are outrageous. Organizations lose an average of $138,000 for one hour of downtime. To put this in perspective, Amazon stands to lose $1,104 for every second Amazon.com is down. What’s more, 59 percent of Fortune 500 companies experience a minimum of 1.6 hours of downtime per week, which could lead to a loss of $46 million in labor costs annually.
According to the Uptime Institute, human error causes almost three-fourths of all data center outages. However, many other factors like cybercrime, natural disasters or flaws within the data centers themselves can also cause downtime. Even something as seemingly innocuous as a squirrel chewing through a cable can cause major damage to a data center.
Given the costs of downtime, data centers need to implement a more modern maintenance strategy. However, many of these centers still rely on spreadsheets or even pen and paper to track each piece of equipment, essentially adopting a reactive approach to upkeep. As a result, occasional downtime is expected and all too common. However, many of these outages can be prevented or minimized with the right maintenance approaches and technology.
Preventive and Predictive Maintenance
An ounce of prevention is worth a pound of cure—or in this case, millions of dollars. And with the prevalence of data-driven maintenance software, data centers can better equip themselves to identify and avert potential failure points, saving both themselves and their customers time and money.
One of the most basic approaches to avoiding downtime is calendar-based maintenance, in which maintenance teams schedule upgrades and routine upkeep on a weekly, monthly, or annual basis, depending on the machine. This approach prevents downtime and ultimately cuts costs by using the typical lifespan of each piece of machinery to anticipate usual wear and tear and fix any issues before significant problems arise.
The more advanced, data-driven approach to data center maintenance is predictive. This strategy relies on a computerized maintenance management system (CMMS), which monitors machine components before they break down and generates work orders to address issues that arise outside of regular calendar-based maintenance. In data centers, where uninterrupted service is critical, a CMMS offers a clear benefit by showing the actual condition of each machine rather than an assumption based on a schedule or historical data.
While a predictive approach may be a better fit for data centers, the key for all data facilities is to avoid reactive maintenance. Fixing issues once they’ve already occurred is exponentially more costly.
Data Centers Are a Perfect Fit for CMMS
It wasn’t until the last 10 years that we started to see CMMS as mission critical to data centers, which is interesting given the fact that no one pays a bigger price for downtime than data centers.
The good news is that unlike other industries or large facilities, data centers are generally more technologically inclined. This industry is far more likely to have a maintenance team embrace an advanced solution that would help prevent major outages, while a legacy manufacturing company with a less technologically-adept maintenance team may be more prone to resistance.
What’s more, modern CMMS vendors with cloud-based applications are run on data centers themselves, which means that they truly understand the nuances of the industry. These vendors complete regular audits and necessary certifications to ensure that all data stored is safe and secure as cyber security continues to cause issues within the industry. These CMMS vendors should also be well-versed in maintenance safety and offer regular notifications for technicians to refresh skills or complete professional certifications.
Avoiding data center downtime will only become more critical as our reliance on the internet continues to increase. Given the massive amount of machinery and intricacies involved in data centers, it only makes sense for a data-driven system to manage the maintenance of these facilities. For data centers, investing in CMMS technology now can prevent a significant loss of money down the road.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
What’s Behind Rackspace’s Private OpenStack Cloud Partnership with Red Hat 
This month we focus on data centers built to support the Cloud. As cloud computing becomes the dominant form of IT, it exerts a greater and greater influence on the industry, from infrastructure and business strategy to design and location. Webscale giants like Google, Amazon, and Facebook have perfected the art and science of cloud data centers. The next wave is bringing the cloud data center to enterprise IT… or the other way around!
OpenStack is hard. It’s hard to take the conglomeration of about 20 open source projects, each at its own stage of maturity, collectively referred to as OpenStack, and turn it into a functioning cloud. This has created a whole services market for companies that can help users stand up their own OpenStack clouds, and Rackspace is going after this market hard.
Managed services, making difficult things easier for customers, are bread and butter for the Windcrest, Texas-based company that played a key role in the creation of the original open source cloud infrastructure project. Seeing an opportunity in helping enterprise IT shops that aren’t married to VMware and Microsoft – and therefore aren’t naturally inclined to go with those two vendors’ respective private cloud products – Rackspace has partnered with Red Hat to provide private cloud infrastructure based on Red Hat’s OpenStack distribution.
Why Red Hat? Because Enterprise
Of all the companies that have attempted to build an enterprise business around open source software, Red Hat has been one of the more successful ones, thanks to its enterprise distribution of Linux. Nearly two-thirds of all Linux servers running in enterprise data centers are running on Red Hat.
If enterprise adoption of OpenStack is to follow the same trajectory as the enterprise adoption of Linux – a parallel OpenStack supporters like to draw, and draw often – aligning with the company that played a key role in pushing Linux along its trajectory makes a lot of sense. Besides, Red Hat already has more than a foot in the door with many enterprise IT shops, and all that’s left to do is sell them on OpenStack.
Others with a lot of traction in the enterprise – the likes of Cisco, IBM, and Oracle – have already laid the groundwork for getting into the business of making OpenStack easy for customers, mostly by acquiring small firms with OpenStack chops.
“Rackspace wants to get a piece of that too, because there’s a lot of untapped cloud revenue potential for service providers locked inside enterprise data centers,” Melanie Posey, a research VP at IDC who covers cloud, hosting, and managed network services, said.
Read more: Red Hat’s OpenStack Ambitions Move it Beyond its Linux OS Core
Rackspace is now offering its well-known managed services and support around Red Hat-based private OpenStack clouds that can be hosted in Rackspace data centers, in customers’ own data centers, or in third-party colocation facilities.
“A lot of enterprises are being more receptive to that message coming from Rackspace if they go in there with Red Hat,” Posey said.
There’s also a long history between the two companies. Red Hat was an early investor in Rackspace, and Rackspace provides numerous services around Red Hat’s enterprise software products.
“We have a long history of being able to provide Fanatical Support on Red Hat products and solutions,” Darrin Hanson, VP and general manager of OpenStack Private Cloud at Rackspace, said.
Not the Only Way to do Cloud – by Far
The challenge for Rackspace, Red Hat, or anyone else selling OpenStack private clouds to enterprises is the massive install base of VMware and Microsoft software in enterprise data centers. VMware has its own software that can essentially turn existing VMs into private clouds that can then plug into public clouds. Microsoft this month rolled out the beta version of Azure Stack, which provides a way to stand up a private cloud that feels to the user like Azure, Microsoft’s massive public cloud, and, naturally, plugs into that public cloud, which becomes its virtually infinite extension.
Read more: What (Hardware) You Need to Build an Azure Cloud in Your Data Center
There’s also the challenge of explaining to enterprise IT managers why they should go through all the trouble of standing up private clouds in their data centers at all, when they can simply use Azure or Amazon Web Services. Posey said not all of those enterprise customers necessarily want to learn how to use the big public clouds, and many will stick with doing some projects in-house simply because that’s the way they’ve always done it.
Enterprises Choose Hybrid Cloud
Private OpenStack cloud also doesn’t preclude anyone from using public cloud services too, and there are ways to connect OpenStack to various public clouds. In fact, Rackspace provides managed services and support for both Azure and AWS, and a combination of OpenStack and one or more public cloud services is likely to become one of the more popular types of environments it will stand up for users.
Hybrid cloud, where a company mixes private cloud with public cloud, is on the rise. Only 18 percent of cloud-using respondents to the latest State of the Cloud survey by RightScale use public cloud only, and only six percent use just private cloud. The large majority (71 percent) use some form of hybrid cloud.
VMware’s virtualization and cloud software vSphere is the most popular way to build private clouds, according to the survey. More than 40 percent of respondents said they used vSphere as a private cloud. OpenStack and VMware vCloud Suite each have 19 percent of all private cloud deployments among survey respondents.
Netflix Shuts Down Final Bits of Own Data Center Infrastructure
It’s done and dusted. Since sometime last month, everything Netflix does runs on Amazon Web Services, from streaming video to managing its employee and customer data.
In early January, whatever little bits of Netflix that were still running somewhere in a non-Amazon data center were shut down, Yuri Izrailevsky, the company’s VP of cloud and platform engineering, wrote in a blog post Thursday.
To be sure, most of Netflix had already been running in the cloud for some time, including all customer-facing applications. Netflix has been one of the big early adopters of AWS who famously went all-in with public cloud. Thursday’s announcement simply marks the completion of a seven-year process of transition from a data center-based infrastructure model to a 100-percent cloud one.
Read more: Cloud Reboot Causes Cold Sweat at Netflix
Those last bits to migrate were billing and customer and employee data management.
The reason? Scale. Netflix today has eight times more customers using its video streaming service than it did in 2008, when it started using AWS. The streaming application is also constantly changing as more and more features get added and relies on more and more data.
“Supporting such rapid growth would have been extremely difficult out of our own data centers; we simply could not have racked the servers fast enough,” Izrailevsky wrote.
In January, Netflix expanded to more than 130 countries, and AWS, with its global footprint, made it a lot easier to do.
Read more: Condé Nast Parts With Delaware Data Center in Favor of Amazon’s Cloud (VIDEO)
Cloud Changed the Way Netflix Runs the Company
It took seven years because Netflix didn’t simply take everything it had running in its data centers and replicate it on AWS (that would have been the easiest way to go). This was a seven-year fundamental transformation of the way the entire company runs, Izrailevsky wrote, and it involved a lot of learning.
Instead of a big monolithic application, where every change is centrally coordinated, the new Netflix app is a series of micro-services, each of which can be changed independently. “Budget approvals, centralized release coordination and multi-week hardware provisioning cycles made way to continuous delivery, engineering teams making independent decisions using self-service tools in a loosely coupled DevOps environment, helping accelerate innovation,” he explained.
Cloud Lowers Cost for Netflix
Interestingly, Netflix found that its operating costs had actually gone down as a result of moving to AWS. You often hear that once a company reaches a certain scale, continuing to rely on public cloud becomes more expensive than building its own data centers.
But that wasn’t the case for Netflix, according to Izrailevsky, who said the company’s cloud cost per streaming start was a “fraction” of the cost of streaming from its own data centers. Its cost went down because of the flexibility of being able to constantly adjust the mix of cloud instances in use and to grow or shrink capacity as needed.
In other words, a monolithic data center where physical boxes take time to install, decommission, or upgrade, gave way to a fluid infrastructure that can be adjusted on the fly for more efficient utilization of resources.
More details about the final step in Netflix’s transition to the cloud are available in Izrailevsky’s blog post.