Data Center Knowledge | News and analysis for the data center industry
Tuesday, July 26th, 2016
Microsoft: Azure Stack Will Be Sold Separately, Eventually
During Microsoft’s Worldwide Partner Conference last week, the company published several blog posts on the status of Azure Stack, its forthcoming hybrid cloud platform that extends Azure into customer data centers and is currently being tested in preview. At least two of these posts — one from Corporate Vice President Mike Neil, the other from CVP Takeshi Numoto — used the term “prioritize” to describe how Microsoft will introduce Azure Stack as an integrated turnkey platform through server partners Dell, HPE, and Lenovo.
That immediately led to press reports stating that the company had decided to tie Azure Stack directly to these three server makers, and would not enable the final release version to be installed on existing customer hardware. In an exclusive interview with Datacenter Knowledge Monday, Mark Jewett, the company’s director of product marketing for Cloud Platform, expressly denied those reports.
Jewett explained that the company’s plan, at least for now, is to begin the general release of Azure Stack through integrated systems — which, more than once, he called “the starting point” — and then learn how those systems are being utilized on customer premises. From there, he said, Microsoft can work out a plan for rolling out a general release of the infrastructure software on its own.
Lessons Being Learned
“The first learning is an operational learning about what it takes not just to deploy, but continue to operate and update, what turns into a relatively complex system,” explained Jewett. In previous experiences with its Cloud Platform System, he said, Microsoft learned several lessons about how to work with joint teams of engineers.
That work will be critical, he went on to say, as the company determines how best to implement rolling firmware updates to individual servers in Azure Stack clusters. Such updates take place behind Microsoft’s own firewall every day, but on server hardware that it already knows, and that has passed its testing.
In customer environments where Azure Stack will need to co-exist with other infrastructure platforms — particularly with VMware’s vSphere, and with OpenStack deployments from firms such as Red Hat and Mirantis — Jewett said, “I think it’s fair to say that those solutions face some challenges, in terms of getting deployed and being operational.”
Mesosphere would appear to provide one mechanism for rolling out software deployments for distributed systems. Microsoft and HPE have both been partnering with Mesosphere in the deployment of Azure Container Service, the public cloud’s system for deploying Docker containers and microservices. But as Jewett told Datacenter Knowledge, such a system probably would not be feasible for deploying low-level software and server firmware.
Rather, he explained, in order for Azure Stack to maintain seamless compatibility with Azure — among other reasons — Microsoft will need to engineer a kind of synchronous rollout system that produces updates to its hybrid cloud platform on the same agenda as updates for its public platform.
“We believe that part of the value proposition of Azure Stack is its extension of Azure,” said Jewett. “Part of that, fundamentally, is pace of innovation.
“The promise of Azure Stack says it will operate with the same level, or pace of innovation, that Azure has. And I think the eye-opener for us is the extent to which customers and service providers embraced that.”
Maintaining Alignment
A typical server deployment philosophy in a data center, as he described it, is to optimize a server image for maximum performance and then freeze it so it cannot be touched. The Azure public platform does not presently work that way; the platform evolves through incremental updates that improve performance without customers experiencing downtime.
Evidently, prospective Azure Stack customers were sold on the idea of seeing that same, dynamic pace implemented in their own data centers, with Microsoft managing the agenda. Enacting that promise, said Jewett, “is where we maybe learned a little differently than what we had gone in anticipating.”
Microsoft introduced its Patch and Update Framework (P&U) with the standard edition of its Cloud Platform System (CPS), and has been maintaining it since last October through partners such as Dell. CPS was designed to run Microsoft’s previous Azure Pack software, and has been re-engineered to be an on-ramp of sorts for Azure Stack.
Microsoft had been planning to open that on-ramp this year, although this critical phase of Azure Stack’s development agenda appears to be responsible for delaying the rollout until “mid-2017,” according to the company.
“One of the challenges that people are having with existing solutions today is, those updates can come from a variety of different sources,” Microsoft’s Mark Jewett told Datacenter Knowledge. “Part of what we deliver with [CPS], and we will deliver with the Azure Stack integrated system, is a coordinated patch and update process that takes care of the updates, from the firmware all the way through to the software and services; covers not just troubleshooting issues, but also adding new services; and does that in a way that recognizes the system needs to continue to run… while that updating is done, in a smart way.”
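Microsoft has not published how P&U works internally, but the behavior Jewett describes maps onto the familiar rolling-update pattern: take one node at a time out of service, update it from firmware up through software, validate it, and return it before touching the next one. A generic, illustrative sketch of that loop (the drain, update, and health-check helpers are hypothetical placeholders, not Microsoft’s tooling):

```python
import time

# Hypothetical cluster-management helpers; the real P&U internals are not public.
def drain(node):            # migrate workloads off the node
    print(f"draining {node}")

def apply_updates(node):    # firmware, OS, and service updates applied as one package
    print(f"updating {node}")

def healthy(node):          # post-update validation check
    return True

def rolling_update(nodes, max_retries=3):
    """Update one node at a time so the cluster keeps serving traffic."""
    for node in nodes:
        drain(node)
        apply_updates(node)
        for attempt in range(max_retries):
            if healthy(node):
                break
            time.sleep(30)  # wait and re-check before rejoining the node
        else:
            raise RuntimeError(f"{node} failed validation; halting the rollout")
        print(f"{node} rejoined the cluster")

rolling_update(["node-01", "node-02", "node-03", "node-04"])
```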
Update packages will go through a rigorous validation process before they’re used by customers. But as Jewett described, that validation will need to be aligned with the same process for Azure running in Microsoft’s own data centers.
In order to achieve that alignment, he said, Microsoft will need to work more tightly with system vendors and service providers. We asked whether these working relationships would include Intel or ARM, which would obviously be responsible for producing firmware for vendors, though Jewett declined to go into that level of specifics.
Jewett said Microsoft should have more specifics to reveal when its Ignite conference kicks off in Atlanta on September 26.
Mirantis to Fuse Kubernetes, CI/CD with Commercial OpenStack
In a move with serious implications for the lowest software layers of data center infrastructure, commercial OpenStack producer Mirantis this morning announced it is partnering with two of the most important players in the infrastructure space — Google and Intel — to produce a new version of the OpenStack platform designed to run inside Linux containers (such as Docker), for deployment through Google’s Kubernetes orchestration platform.
“We are containerizing all of the OpenStack services,” explained Boris Renski, Mirantis’ co-founder and CMO, in an interview with Datacenter Knowledge, “and making it possible to natively run OpenStack on top of Kubernetes — to make it be orchestrated by Kubernetes.”
In a world where the components of the stack are so loosely layered, and the preposition “on” is sometimes used interchangeably with “in” or “under,” it’s easy to miss the meaning of what should otherwise be a simple statement. What Renski is telling us is that Mirantis’ commercial OpenStack will itself be deployed within containers, whose coordination with one another will be maintained using Kubernetes.
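Mirantis has not published its packaging details, but the shape of the idea can be sketched with the official Kubernetes Python client: each OpenStack service (Keystone in this hypothetical example, with a made-up image name) becomes a Kubernetes Deployment that the orchestrator schedules and keeps at a desired replica count.

```python
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig

# Hypothetical image name; Mirantis' actual packaging is not public.
KEYSTONE_IMAGE = "example-registry.local/openstack/keystone:latest"

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="keystone"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # each OpenStack service scales independently of the others
        selector=client.V1LabelSelector(match_labels={"app": "keystone"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "keystone"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="keystone",
                    image=KEYSTONE_IMAGE,
                    ports=[client.V1ContainerPort(container_port=5000)],
                )
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="openstack", body=deployment)
```

Because each service lives in its own Deployment, Keystone can be scaled or patched without touching Nova, Neutron, or any other component.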
Google Leads the Way
As a result, OpenStack itself could become highly scalable on a per-component basis, like a microservices architecture. OpenStack’s own contributors currently acknowledge that the platform gets stretched to its limits when supporting massively scalable infrastructure. But in a containerized system managed by Kubernetes, as opposed to bare metal or virtual machines managed by its own native Fuel component, OpenStack could become not only more elastic but much, much easier to maintain.
That’s a very different thing than running Kubernetes, and staging a self-contained, scalable, containerized environment within Kubernetes, on top of an OpenStack infrastructure.
“Most commonly, folks run container orchestration frameworks on top of a VM orchestration fabric,” explained Renski. “We are reversing the paradigm indeed. . . We’re trying to follow the established Google design pattern.”
Renski reminded us that it was Google that first introduced control groups (cgroup) into Linux, creating an effectively partitioned architecture that could be much more easily managed. While Docker Inc. was the first to popularize containers, especially on developers’ sandbox platforms, Google was deploying a primordial form of Kubernetes in-house, called “Borg.”
Now, the wish of data center operators has become to run their data centers the way Google runs its own. Kubernetes does bring that goal somewhat closer. But for data centers that are in the process of migrating to OpenStack, and trying to integrate their old, VM-based workloads with newer, containerized ones, the process has been (as this publication has explained not once but twice) “notoriously difficult.”
“We’re taking this established design pattern that is known to scale very well, and that is known to be the easiest to manage and operate design pattern for distributed cloud systems, and introduce them to OpenStack,” said Renski. “In terms of tangible benefits to end users, it makes it much simpler to patch and upgrade OpenStack, and makes the whole fabric much more stable.”
The CMO admitted to Datacenter Knowledge that his company’s working relationship with Google is not exclusive, although he did characterize their collaboration as tight.
Intel will also be involved with this project, Mirantis announced. The CPU maker is expected to grant Mirantis early access to its rack scale architecture projects, along with Intel’s next-generation monitoring libraries and tools, which involve new on-chip technologies being built into Xeon processors. Intel previewed some of those features last April, during its Cloud Day event in San Francisco.
As Renski understands things, some Intel engineers who work on OpenStack, along with others who contribute to Kubernetes, will be delegated responsibilities for driving the merged architecture going forward.
The three companies’ joint work, said Renski, should culminate in Mirantis OpenStack 10, currently scheduled for release in Q1 2017.
Micro-management
Mirantis itself will try to be first to take advantage of some of these architectural gains by adopting a CI/CD-based deployment scheme in which the company delivers patches and improvements to OpenStack on a more frequent, incremental basis. By letting the many services that together make up OpenStack inhabit their own respective apartments, it becomes feasible for a managed service provider to maintain each OpenStack service independently.
“For us, this solves the problem of finally making OpenStack into a true microservices application,” remarked Renski, “that we can continuously patch and update following CI/CD principles — by effectively shipping containers to the customers, dropping them onto the Kubernetes substrate, and to some extent, solving the very acute problem of OpenStack lifecycle management and operation.”
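What “shipping containers to the customers” could look like in practice, again sketched against the Kubernetes Python client rather than Mirantis’ actual tooling (the image tag is made up): once a CI pipeline has built and validated a new image, it simply patches the Deployment, and Kubernetes rolls the service over pod by pod.

```python
from kubernetes import client, config

config.load_kube_config()

def release(service_name, new_image, namespace="openstack"):
    """Point an OpenStack service's Deployment at a freshly built image.

    Kubernetes then performs a rolling update, replacing pods one at a time
    so the service stays available while the patch lands.
    """
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{"name": service_name, "image": new_image}]
                }
            }
        }
    }
    client.AppsV1Api().patch_namespaced_deployment(
        name=service_name, namespace=namespace, body=patch
    )

# Hypothetical image tag produced by a CI run.
release("keystone", "example-registry.local/openstack/keystone:2016.07.26-build42")
```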
But whether this solves an existing problem or creates entirely new ones may depend on whether admins and DevOps professionals have changed their minds about rigorous deployment since Microsoft began rolling out Windows updates more aggressively than once per month. If continuous delivery hasn’t exactly been warmly embraced by enterprises, it has been begrudgingly accepted, at least insofar as applications are concerned.
Continuous deployment of infrastructure may be another matter. Mirantis’ Renski acknowledged during our conversation that adopting this principle, at this level, will require customers to undergo a degree of cultural change. But to the extent that some customers and prospective customers are unwilling to consider the need for such a change, Renski says he can actually do just fine without them.
“This has actually been a big point of contention for us, in trying to push OpenStack into the enterprise in general,” said Renski. Although most enterprises tell him they’re making investments in cloud infrastructure to improve their speed and agility, he acknowledged that infrastructure management patterns today prohibit them from implementing any changes whatsoever, to any layer in the stack, without significant testing in sandbox environments first.
“Our approach from day one with the customer has been to educate them, and explain to them, that OpenStack and cloud are basically means to an end,” he continued. “There is a very particular way in which we do cloud, and that way involves adopting CI/CD mechanisms, and this notion of continuously updating the fabric. . . All of that education has to be done for an organization, an enterprise, to really succeed with cloud.”
Renski believes the reason OpenStack deployments tend to fail in enterprises is that customers expect the platform to be a drop-in replacement for VMware. If that’s all a customer expects, he asserted, there’s no point in trying to move them toward cloud-native architectures, where applications are built for scalability within the cloud.
“The short, brutal answer to your question is that enterprises are choosing consciously to stick with the approach of testing everything and updating once a year,” said Mirantis’ Renski. “They will fail regardless of whether they want cloud, and we are very up-front with them about it. And they’re just not a target customer for Mirantis specifically, or for cloud in general.”
Is This a Fork?
Does Mirantis’ move mean that Kubernetes effectively becomes the de facto orchestration layer for OpenStack? Renski told us that Kubernetes will become, starting with version 10, the orchestration layer for Mirantis’ own distribution. Mirantis’ contribution will be open source, and thus available for others to incorporate into their own distributions.
“But we’re not going to do anything in the community that will effectively preclude anybody who doesn’t want to use Kubernetes, from not using it,” he added. “That’s simply not possible to do.”
Renski has a reputation in the OpenStack community for outspokenness. During a keynote appearance at OpenStack Summit in Austin, Texas, last April, he called out a Gartner analyst who spoke before him for daring to appear at a conference supporting a product three years after declaring, in his words, “OpenStack is crap.” From there, he laid into Gartner’s celebrated “bimodal IT” metaphor, receiving some cheers from the DevOps crowd for doing so.
“Mode 1 / Mode 2 is a pretty disastrous concept for me,” Renski reiterated.
“Success with OpenStack is one part technology and nine parts people and process,” the CMO told OpenStack Summit last April 25. “If you’re trying to succeed with OpenStack in your organization, and you’re embracing OpenStack just as technology, then you will most likely fail.”
Virtual Infrastructure Resource Monitoring Best Practices
Today, we’re going to take a look at a critical monitoring and resource management aspect of the modern data center: your virtualization (logical) layer.
Analysts broadly agree that today’s data center is the driving engine behind major business initiatives, and organizations rely heavily on it to enable real-world strategies and capabilities. The big challenge, however, is building monitoring and alerting systems that can look into some of the most advanced functions of the data center.
We know that virtualization continues to revolutionize how we deliver applications, workloads, and critical data. We also know that the data center has evolved to support greater density and new business initiatives. But how do you keep an eye on it all? How do you proactively manage one of your most critical business components – the data center? Most of all, how do you optimize your entire virtualization ecosystem to ensure proper alignment between the business and the data center?
The best way to do this is to look at new ways to monitor and manage data center and virtualization systems.
With that in mind, let’s start with the logical layer: virtualization.
The days of one application per server are coming to an end. With virtualization, IT shops can pack numerous virtual machines, running full operating systems and workloads, onto a single piece of hardware – which was unheard of just a few years ago. The best part is: You can run these workloads concurrently with negligible performance loss.
When it comes to maintaining a healthy virtualization ecosystem, there is a core set of metrics that should be monitored (a minimal host-level monitoring sketch follows the list). This includes:
- Memory
  - Host RAM utilization.
  - VM RAM usage.
- Storage
  - Disk space on the SAN.
  - Space utilization on the VM.
- CPU
  - Both vCPU and host CPU utilization should be checked.
- Network I/O
  - Check for heavy traffic patterns around VMs. Bottlenecks are not fun.
- WAN
  - Ensure that remote links are operating properly.
  - Link saturation between sites must be monitored.
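Here is that host-level sketch, assuming the Python psutil library is available. It covers only the host side of the checklist; per-VM, SAN, and WAN figures come from your hypervisor, storage array, and network monitoring APIs instead. The thresholds are placeholders to tune against your own baseline.

```python
import psutil

# Rough warning thresholds for illustration only; tune them to your environment.
RAM_WARN = 90.0   # percent
DISK_WARN = 85.0  # percent
CPU_WARN = 85.0   # percent

ram = psutil.virtual_memory().percent
disk = psutil.disk_usage("/").percent
cpu = psutil.cpu_percent(interval=1)
net = psutil.net_io_counters()

print(f"Host RAM utilization: {ram:.1f}%{'  <-- investigate' if ram > RAM_WARN else ''}")
print(f"Disk space used:      {disk:.1f}%{'  <-- investigate' if disk > DISK_WARN else ''}")
print(f"Host CPU utilization: {cpu:.1f}%{'  <-- investigate' if cpu > CPU_WARN else ''}")
print(f"Network I/O:          {net.bytes_sent} bytes sent, {net.bytes_recv} bytes received")
```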
Remember, many events can cause a resource spike: a runaway programming loop can peg a CPU, a network error can saturate links, and problems elsewhere in the environment will surface as new issues. You must plan proactively to keep your systems up and running. That means forecasting infrastructure spikes and keeping enough capacity in reserve to absorb them.
Consider this example:
You’re a travel agency with all of your systems virtualized. You know usage will spike in certain seasons, so during peak holiday or sales periods your servers may take a massive hit. To accommodate this, companies worried about overworked VMs utilize something called Workflow Automation and Infrastructure Orchestration: if a host is pegged with resource requests and its running VMs can no longer handle the load, automation software kicks in and spins up additional VMs on separate hosts to help carry it. The great part is that this process can be entirely automated to ensure business continuity and minimal disruption.
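A bare-bones sketch of that decision loop looks something like the following. The `hypervisor` object and its methods are hypothetical stand-ins for whatever API your automation or orchestration product exposes; the point is the pattern, not the product.

```python
import time

CPU_SPIKE = 90.0      # percent of host CPU that counts as "pegged"
CHECK_INTERVAL = 60   # seconds between checks

def autoscale(hypervisor, app="booking-frontend"):
    """Spin up extra VMs on other hosts when a host is overloaded.

    `hypervisor` is a hypothetical management-API client exposing hosts(),
    host_cpu_percent(), least_loaded_host(), and clone_vm().
    """
    while True:  # runs continuously as a watchdog
        for host in hypervisor.hosts():
            if hypervisor.host_cpu_percent(host) > CPU_SPIKE:
                target = hypervisor.least_loaded_host(exclude=host)
                hypervisor.clone_vm(app, on_host=target)
                print(f"{host} pegged; cloned {app} onto {target}")
        time.sleep(CHECK_INTERVAL)
```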
So, when looking at purchasing any sort of resource monitoring software for your virtualization ecosystem, make sure it can answer the following questions:
- How many VMs do I have, and which ones are over or under provisioned?
- Where are the performance bottlenecks in my virtualized environment?
- How are my VMs configured?
- How many app servers will fit in my current environment, and when will I need more resources?
- What departments are using which resources?
- How is my server utilization being tracked over a period of time?
Furthermore, there are three major features that many IT managers will generally want in their management software (a simple sprawl-detection sketch follows the list):
Capacity Management
- Proactively monitor, predict, detect, and troubleshoot capacity bottlenecks with real-time dashboards and alerts
- Determine optimal VM placement, explore what-if scenarios, identify capacity shortfalls, and determine application-specific capacity needs
VM Sprawl Control
- Find idle/stale VMs, orphaned files, and over-allocated VMs
Performance Monitoring
- Proactively monitor virtualization-unique performance problems
- Deeply analyze storage I/O problems unique to virtual and private cloud deployments
- Troubleshoot application and workload issues
- Quickly discover and act on performance issues using flexible alerts and integrated recommendations
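As noted above, the sprawl-control feature is easy to prototype once you are collecting metrics: flag any VM whose average CPU over the observation window is essentially zero. A toy sketch, with hard-coded samples standing in for data you would pull from your monitoring tool:

```python
IDLE_CPU_THRESHOLD = 2.0   # average percent CPU below which a VM looks idle

# Example per-VM CPU samples (percent) gathered over the past week;
# in practice these would come from your monitoring tool's API.
cpu_samples = {
    "web-01":   [35, 42, 51, 38],
    "build-07": [1, 0, 2, 1],
    "test-old": [0, 0, 0, 0],
}

idle_vms = [
    vm for vm, samples in cpu_samples.items()
    if sum(samples) / len(samples) < IDLE_CPU_THRESHOLD
]

print("Candidates for reclamation:", idle_vms)   # ['build-07', 'test-old']
```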
Final Thoughts
Creating a VM has never been easier: with just a few mouse clicks you have a new virtual machine ready to go. That simplicity, however, makes planning even more important. Take the time to study your environment and understand its needs, and only then deploy the VMs.
Too often, IT administrators get “click-happy” and deploy VMs at will. The result is VM sprawl, which quickly becomes difficult to manage.
When gathering metrics to understand your unique environment, monitor your results over a span of time. This way, you’ll know peak usage times, which machines are most heavily utilized, and where bottlenecks or I/O issues are occurring.
The more IT managers use their metric data, the better the decisions they can make about their virtual infrastructure, and the better they can deploy environments that make the most of their precious resources.
Nearly Half of All Corporate Data is Out of IT Department’s Control
Brought to You by The WHIR
Many organizations are not responding to the continuing spread of “Shadow IT” and cloud use with appropriate governance and security measures, and more than half do not have a proactive approach, according to research released Tuesday. The 2016 Global Cloud Data Security Study, compiled by the Ponemon Institute on behalf of Gemalto, shows that nearly half of all cloud services (49 percent) and nearly half of all corporate data stored in the cloud (47 percent) are beyond the reach of IT departments.
The report is drawn from a survey of more than 3,400 IT and IT security practitioners from around the world. It shows only 34 percent of confidential data on SaaS is encrypted, and members of the security team are only involved in one-fifth of choices between cloud applications and platforms.
IT departments are making gains in visibility, with 54 percent saying the department is aware of all cloud applications, platforms, and infrastructure services in use, up from 45 percent two years ago. The number of respondents saying it is more difficult to protect data using cloud services also fell, from 60 percent to 54 percent; however, those gains were offset by more broadly reported challenges in controlling end-user access.
“Cloud security continues to be a challenge for companies, especially in dealing with the complexity of privacy and data protection regulations,” Dr. Larry Ponemon, chairman and founder, Ponemon Institute said. “To ensure compliance, it is important for companies to consider deploying such technologies as encryption, tokenization or other cryptographic solutions to secure sensitive data transferred and stored in the cloud.”
The number of companies storing customer data in the cloud is increasing, with nine percent more organizations reporting the practice than in 2014, despite 53 percent still saying that is where it is most at risk.
Almost three-quarters say encryption and tokenization are important, and even more think they will be important over the next two years. However, almost two-thirds (64 percent) said their company does not have policies requiring safeguards like encryption for certain cloud applications.
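The encryption the report recommends does not have to wait on the cloud provider; data can be encrypted client-side before it ever leaves the building. A minimal illustration with the Python cryptography package (key management, arguably the harder problem, is left out of this sketch):

```python
from cryptography.fernet import Fernet

# Generate and safeguard this key yourself; whoever holds it can read the data.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer: Jane Doe, card ending 4242"
ciphertext = cipher.encrypt(record)     # what actually gets stored in the cloud
plaintext = cipher.decrypt(ciphertext)  # only possible with the key you kept

assert plaintext == record
```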
Seventy-seven percent say managing identities is harder in the cloud than on-premises, yet only 55 percent have adopted multi-factor authentication.
“Organizations have embraced the cloud with its benefits of cost and flexibility but they are still struggling with maintaining control of their data and compliance in virtual environments,” said Jason Hart, Vice President and Chief Technology Officer for Data Protection at Gemalto. “It’s quite obvious security measures are not keeping pace because the cloud challenges traditional approaches of protecting data when it was just stored on the network. It is an issue that can only be solved with a data-centric approach in which IT organizations can uniformly protect customer and corporate information across the dozens of cloud-based services their employees and internal departments rely [on] every day.”
The report recommends organizations set comprehensive policies for data governance and compliance, as well as guidelines for sourcing cloud services, and cloud data storage rules.
A study released in June by Alert Logic indicated that workloads were subject to the same security operations strategy regardless of the infrastructure they are on.
This article was first published by The WHIR.
ZENEDGE Launches Single IP Protection at HostingCon
Brought to You by The WHIR
ZENEDGE launched Single IP Protection to general availability on Tuesday at HostingCon to provide enterprise-class network DDoS mitigation to organizations with smaller networks.
Network DDoS mitigation traditionally requires Border Gateway Protocol (BGP) for routing decisions, which means it only works on networks that can announce at least a class C subnet (256 total IP addresses, 254 of them usable), according to the company.
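The class C arithmetic is easy to verify with Python’s standard ipaddress module, using a documentation-reserved prefix as the example block:

```python
import ipaddress

subnet = ipaddress.ip_network("203.0.113.0/24")   # a "class C"-sized block
print(subnet.num_addresses)        # 256 total addresses
print(len(list(subnet.hosts())))   # 254 usable (network and broadcast excluded)
```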
With the new offering, ZENEDGE assigns clients a DDoS-protected IP address range from its own IP pool, establishes a GRE tunnel to route traffic between the client’s servers and the ZENEDGE-protected IP network, and then directs new traffic through ZENEDGE via a DNS change.
“ZENEDGE serves many gaming companies, SaaS providers and organizations who are hosting their solutions in a colocated data center or in the cloud,” Leon Kuperman, CTO of ZENEDGE said in a statement. “While these organizations operate smaller networks and don’t control their routers, they are nevertheless consistently targeted with volumetric DDoS attacks.”
The company says the offering is aimed at gaming companies and others that use proprietary protocols, UDP, VPN, or non-standard TCP ports.
With network layer DDoS attacks costing up to $40,000 per hour according to a 2015 report, the solvency of smaller organizations without protection could be at risk.
ZENEDGE received $4 million in a Series B funding round late last year.
This post was first published by The WHIR.