Data Center Knowledge | News and analysis for the data center industry

Thursday, August 27th, 2015

    12:00p
    How to Build Your Own Hybrid Cloud

    These days, many organizations are finding ways to leverage the benefits of cloud computing in one form or another. Over the course of the last five years, the cloud model has evolved to support a number of new use cases, users, and applications. Through that evolution, several different deployment models have emerged. Some of the most popular are:

    • Private
    • Public
    • Hybrid
    • Community

    Now we see a new trend emerging. Organizations have grown with the capabilities of cloud computing and are building better deployments around the right type of model. One of the most popular cloud architectures revolves around hybrid cloud infrastructure. Interconnectivity capabilities have improved, and organizations are able to distribute their environments much more effectively. These improvements in bandwidth, storage, network, and compute allow public and private data center resources to be shared more efficiently.

    Let’s create a hypothetical scenario. You’re an organization of a few thousand users. You see sporadic shifts in user count and data loads because of the nature of your business. Basically, there are times when resource constraints and uptime become serious concerns for your organization. For the most part, your data center is privately held with a variety of virtual workloads including:

    • Virtual applications
    • Virtual desktops
    • Databases
    • Mail servers
    • Other hosting servers

    Now a decision has been made to extend your existing infrastructure into a public cloud. This doesn’t necessarily have to be AWS or Azure. In fact, many organizations select popular data center providers to build their own cloud model. With that in mind and the goal set, what are the right steps to create a hybrid cloud environment? What are the right ingredients to help distribute data center resources and create an even more robust infrastructure?

    Although not all-encompassing, these are some of the recommended steps to consider when building out your own hybrid cloud platform:

    • The data center or cloud provider. What you’re essentially doing is extending your existing platform into a cloud model. One of the first things your organization must do is conduct a Business Impact Analysis as well as a Cloud Readiness Assessment. These two planning projects allow your organization to understand its existing workloads, which of them need to be extended into the cloud, and how the move will impact the business. A Readiness Assessment lets you further determine whether your applications, users, and even data sets are ready for a cloud migration. Based on these analyses and an ROI report, you’ll have a few options: build, lease, or go with a cloud provider. The exact answer will revolve around the findings in the respective reports.
    • Selecting your hardware. Now that you’ve completed your analysis and know where you’re deploying your cloud, it’s time to look at a variety of hardware options. Server, storage, and network platforms have come a long way, and new converged platforms are a primary reason the cloud’s physical footprint is shrinking. There are trade-offs among rack-mount, blade, and converged platforms, and it’s critical to look at purpose-built systems as well. If you’re looking to process numerous parallel workloads for a specific task, you’d probably look at HP’s Moonshot chassis. However, if you’re deploying a branch cloud location with virtual applications and some data sets, maybe a Nutanix platform is the right choice. In other instances, such as a much larger and more powerful deployment, you may need more horsepower. When that scenario arises, Cisco UCS or other powerful blade-based systems are amazing hardware options. Their level of scale and integration with the virtual layer make them highly efficient systems.
    • Creating a virtual platform. The modern data center is now being defined as the software-defined data center. Let’s be honest: if you’re moving to the cloud, you’ll need to look at logical and virtual controls. This starts at the hypervisor and can extend all the way to managing Big Data on a Hadoop cluster. In between you’ll have virtual security services, logical management and monitoring controls, and software-defined technologies. The really cool part about the modern data center is all of the amazing logical controls we now have. Network, storage, compute, and even the cloud itself can fall into the software-defined category. When creating your cloud platform, make sure to look at these systems to help you scale, be much more efficient, and improve cloud resiliency.
    • Integrating replication and distribution mechanisms. A big part of a hybrid cloud is the ability to replicate and distribute data. First of all, it’s important to understand what you’re replicating and to where. Many organizations deploy hybrid cloud platforms to get applications and data closer to their users. Others use a hybrid cloud to control bursts and branch locations. Regardless, it’s important to know how data is being moved, backed up, and optimized. Data replication can be a tedious process if not done properly. It’s also important to take security into consideration: your data is a critical asset, and it must be secured at the source, in transit, and at the destination. Fortunately, virtual security appliances and services can help make this process a bit easier.
    • Incorporating automation and orchestration. Part of the beauty of a hybrid cloud is the ability to set automation tasks and watch it all go. Resources can be provisioned and de-provisioned based on demand, management can be a lot more proactive, and your ability to control various components of your hybrid infrastructure can all fall under one management layer. Open-source management systems allow you to easily replicate data center resources into a hybrid cloud model. Technologies like CloudPlatform, OpenStack, and Eucalyptus all provide direct extensibility into a hybrid cloud model. Furthermore, automation tools allow for easier replication and control of critical resources. As you build your hybrid model, make sure to look at cloud-based orchestration and automation tools for help.
    • Balancing your workloads. Load-balancing technologies allow you to point users, data, and even applications to the appropriate data center resources. For example, your load-balancing platform can be intelligent enough to direct a user to the closest data point while controlling resources and application access. Next-generation load-balancing platforms are a lot more than just load balancers: they provide aspects of next-gen security, offer a variety of virtual services, can act as an application firewall, and can control a variety of on-prem and off-prem resources. Best of all, these platforms can be virtual or physical.
    • Management and control. One of the most important pieces to your hybrid cloud platform will revolve around monitoring, management, and control. Staying proactive and catching challenges before they become real issues is a big piece of running an efficient cloud environment. Remember, you now have an ecosystem of technologies all working together to replicate resources between a private data center and one that is now your cloud-based environment. Having that single pane of glass allows you to delegate controls and permissions and have direct visibility into every aspect of your extended data center model.
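
    The replication step above can be sketched in a few lines. This is a minimal, hypothetical Python example (the function names are mine, not any vendor's API) showing the core idea behind secure, verified data movement: copy the data to the other side of the hybrid environment and confirm it arrived bit-for-bit before trusting it.

    ```python
    import hashlib
    import shutil
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Stream the file through SHA-256 so large files never load fully into memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def replicate(source: Path, destination: Path) -> str:
        """Copy a file to the 'remote' side and verify the copy, returning its checksum."""
        destination.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, destination)  # copies data plus timestamps/permissions
        src_digest = sha256_of(source)
        dst_digest = sha256_of(destination)
        if src_digest != dst_digest:
            raise IOError(f"checksum mismatch replicating {source} -> {destination}")
        return dst_digest
    ```

    A real deployment would replicate over an encrypted channel to remote storage rather than a local path, but the verify-after-copy pattern is the same.
    
    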
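
    The provision/de-provision behavior described under automation and orchestration boils down to a reconciliation loop: compare current demand against private capacity, and burst the overflow to the public side. Here is a toy Python sketch of that policy logic; the `ScalingPolicy` abstraction is invented for illustration and stands in for what a real orchestration tool would compute before calling its provisioning API.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ScalingPolicy:
        """Burst to the public cloud when private capacity runs short."""
        private_capacity: int      # workloads the private data center can host
        target_utilization: float  # fraction of private capacity we aim to stay under

        def public_instances_needed(self, demand: int) -> int:
            """Workloads that should burst to the public cloud at this demand level."""
            comfortable_private = int(self.private_capacity * self.target_utilization)
            return max(0, demand - comfortable_private)

    def reconcile(policy: ScalingPolicy, demand: int, current_public: int) -> int:
        """Return how many public instances to add (positive) or remove (negative)."""
        return policy.public_instances_needed(demand) - current_public
    ```

    An orchestration layer would run `reconcile` on a timer and translate a positive result into provisioning calls and a negative one into de-provisioning calls, which is exactly the "set automation tasks and watch it all go" behavior described above.
    
    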
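
    One of the simplest forms of the workload balancing described above is a least-connections picker: each request goes to whichever backend currently has the fewest active connections. This is a self-contained Python sketch of that selection logic (backend names are illustrative, not a real product's API); commercial platforms layer health checks, geography, and security policy on top of the same core idea.

    ```python
    class LeastConnectionsBalancer:
        """Route each request to the backend with the fewest active connections."""

        def __init__(self, backends):
            self.active = {backend: 0 for backend in backends}

        def acquire(self) -> str:
            """Pick the least-loaded backend and count the new connection against it."""
            backend = min(self.active, key=self.active.get)
            self.active[backend] += 1
            return backend

        def release(self, backend: str) -> None:
            """Mark a connection to this backend as finished."""
            self.active[backend] -= 1
    ```
    
    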

    Building the right type of hybrid cloud platform will require a bit of planning and preparation. However, when done right, a hybrid cloud becomes an extremely powerful extension of your existing infrastructure. With use cases spanning disaster recovery and business continuity (DRBC) to seasonal workload bursting, hybrid platforms create great ways to utilize resources over a distributed plane. Next-generation load-balancing technologies now allow for the seamless transfer of users and workloads between vast data center points. All of this helps improve both data center and business operation processes.

    3:00p
    The Challenges of IT Maintenance

    Joel Nimar is the President of Pyramid Technology Services.

    The 21st century has brought about a boom in IT infrastructure, allowing companies to do business with greater speed and profit than ever before. Yet this massive growth has created challenges as well, in the form of legacy equipment that is typically maintained by costly OEM support structures. In fact, Forrester Research reports that the average business spends 72 percent of its total IT budget on sustainment, and only 28 percent on new technological investments. This pattern harms companies’ profitability.

    A solution, however, lies in employing an Independent Service Organization (ISO), dedicated to managing companies’ IT infrastructure needs, streamlining their services, and increasing their profits.

    The typical company uses many different technologies from multiple vendors, meaning that each unit has different maintenance needs and different warranty expiration dates; in fact, some may be past their warranty expiration dates entirely. To make matters more difficult, the quality of service differs widely depending on the unit, the geographic location of the company, and that organization’s specific needs.

    An oil pipeline company, for instance, that operates 24/7 in a remote area cannot wait days for a customer service call to make its way through a slow, decentralized bureaucracy. A government organization, meanwhile, can only turn to employees who can pass strict security requirements. Companies with a global presence face additional administrative and financial burdens as they deal with multiple international vendors in a variety of currencies. Outsourcing IT service to an ISO improves service quality across the entire infrastructure. In short, companies with a large IT infrastructure will save a great deal of money and manpower by having an ISO handle the logistical and administrative issues of managing multiple contracts, both domestically and internationally.

    ISOs, the 21st Century Solution

    ISOs provide a simple and elegant solution: a single point of contact for all of a 21st century company’s IT infrastructure requirements. These organizations operate globally, which means that companies that partner with ISOs will experience a consistent quality of service no matter where they are operating, or what kind of technology they are employing. In fact, by consolidating service contracts and streamlining IT maintenance processes, ISOs not only provide companies with reliable IT infrastructure sustainment, but also help their partners to enhance their own efficiency. Any global organization that requires hardware maintenance across a broad range of equipment makes and models will benefit from working with an ISO.

    Tools to Evaluate an ISO

    Of course, it is important to evaluate potential ISOs before partnering with them, and there are certain questions that any company should ask. These include:

    • Does your ISO have experience with the manufacturers whose technologies you own?
    • Is your ISO willing to customize their service plans to meet your company’s needs?
    • Will your support engineer be trained to maintain both hardware and software?
    • Does your ISO have experience maintaining legacy systems past their warranties?
    • Can your ISO support you in a variety of geographic locations?
    • Will it respond to a service call within an hour, and provide onsite service when and where you need it?
    • Will the ISO designate one individual to be responsible for your maintenance needs?
    • Is the system to contact your ISO convenient and navigable?
    • Does the ISO offer services from start to finish in order to ensure that your needs have been met?

    Simplify and Streamline with an ISO

    An ISO that meets these requirements will allow your company to take an aggressive approach to technology management, empowering you to customize your IT maintenance to fit your specific needs. As IT infrastructures become increasingly elaborate and complex, savvy CFOs and IT managers employ ISOs to maintain their technology, to cut costs, to streamline their businesses, and to remain cutting edge in the modern world.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:10p
    ZeroStack Comes Out of Stealth to Help Enterprises Take Control, Simplify Private Cloud


    This article originally appeared at The WHIR

    A former VMware engineer has brought what he learned from years of working with enterprise clients on their cloud needs to a new company that came out of stealth this week. ZeroStack, co-founded by Ajay Gulati, CEO, launched on Wednesday to make consuming private clouds as easy as possible without forcing enterprises to relinquish control.

    Along with co-founder and CTO Kiran Bondalapati, who was a founding engineer at Bromium, Gulati founded ZeroStack last year to help enterprises deploy private clouds without needing specific cloud expertise. The team includes senior employees from VMware and early employees from companies including Nutanix, Google, and Riverbed.

    The company had its first version ready in May and started doing customer deployments in June 2015. On Wednesday, ZeroStack also disclosed $5.6 million in Series A funding led by Foundation Capital, and appointed Mark Leslie, founder and former CEO of Veritas, to the board.

    “[W]e are disrupting the private cloud market with a unique solution that can provide self-service cloud to enterprises in literally 15 minutes versus three to six months that it takes to build a private cloud and stitch together hardware and software technologies today,” Gulati said in an interview with the WHIR.

    To give enterprises simple deployment coupled with maximum control, ZeroStack combines on-premise deployment and a SaaS platform. The ZeroStack Cloud Platform offers compute, storage, networking and management software, according to a press release. The cloud platform is built on OpenStack and consists of the ZS1000 hyper-converged system and the ZeroStack SaaS platform, which provides monitoring, troubleshooting, capacity planning and chargeback.

    “When we started ZeroStack we looked at the existing two dominant cloud models where one is a private cloud that comes with high operational complexity…but once you set it up you have complete control, it’s behind your firewall, you have much better performance,” he said. “You look at public cloud on the other hand that has no operational complexity for the end users, there are no infrastructure decisions that you have to make, the vendor has already made [them] and they are providing this as a service. But now everything is out of your control, you have unpredictable performance.”

    “In my mind these are two extreme scenarios where in one case you have complete control and headaches, and in the other scenario no control and no headache. At ZeroStack what we have built is a new way of deploying cloud that gives you the best of both of these options.”

    ZeroStack is targeting two kinds of markets with its private cloud solution: the first is tech companies with 50-1000 developers, and the second is mid-to-large enterprises in non-technical sectors including education, media, pharmaceutical and retail. Gulati said that the fact that ZeroStack takes out a lot of the complexity of building a private cloud is particularly appealing to these less technical users. Nearly half of developers prefer using an on-premises, private cloud for development over public clouds like AWS or Azure, according to a recent report.

    “The solution doesn’t require any cloud experts to operate,” he said. “In order to build a private cloud with any stack…you have to hire experts to do that. This is not something regular IT in a company can do.”

    Gulati said that another benefit to ZeroStack is users can “start really small…and scale based on demand” which saves enterprises money by not needing to invest a lot upfront.

    This first ran at http://www.thewhir.com/web-hosting-news/zerostack-comes-out-of-stealth-to-help-enterprises-take-control-simplify-private-cloud

    6:02p
    Managed Private Cloud on the Rise as Data Center Outsourcing Service

    Managed virtual private cloud services are gaining popularity as part of data center outsourcing contracts enterprises make with the likes of IBM, HP, or Dell.

    That’s according to market research firm Gartner, which published its annual Magic Quadrant report on the North American data center outsourcing market late last month.

    Gartner puts managed private cloud in the category of “Infrastructure Utility Services,” or services that companies pay for based on resource usage or number of users served. “Increasingly, IUS are based on managed virtual private cloud services,” the report read.

    Industrialized infrastructure services are what is going to drive future growth in the data center services market in general, according to the analysts. That growth will come at the expense of growth and margins for traditional services, which Gartner expects to face further pressure.

    This category of services includes both Infrastructure-as-a-Service and Platform-as-a-Service.

    IBM and HP remained top data center outsourcing providers on the latest Magic Quadrant for North America, ahead of everyone else in terms of both vision and execution. Other providers Gartner named leaders were Dell, HCL Technologies, and CSC.

    Here’s Gartner’s 2015 Magic Quadrant for Data Center Outsourcing in North America:

    Gartner Data Center Outsourcing MQ NAM 2015

    While it named HP one of the leaders in the category, Gartner estimated that its data center outsourcing and infrastructure utility services revenue in 2014 was down about 5 percent from the prior year.

    HP’s key strengths in the outsourcing market are its infrastructure scale – close to 80,000 physical servers across about 30 data centers – its ability to provide just about every kind of data center outsourcing service imaginable, and strong management and budget oversight in service engagements.

    While a leader in every other respect, however, HP’s cloud services haven’t done as well as the company may have hoped, according to Gartner. HP says its managed cloud server offering called Helion has seen double-digit growth, but the penetration of its virtual private cloud, utility, or managed private cloud offerings remains below 15 percent, according to the analysts.

    HP’s rival IBM is the largest player in the market for both cloud and traditional enterprise data centers. Gartner estimates that Big Blue makes about $3 billion in annual sales on its data center outsourcing services.

    Its main strengths are focusing on solving specific business issues for clients rather than simply providing standard technology and support services, breadth and depth of resources, and willingness to switch to new service models.

    One of Gartner’s cautions about IBM was the potential impact on existing interfaces and processes of the company’s recent restructuring, which included melding its former Strategic Outsourcing and Integrated Technology Services units. The new unit, called Infrastructure Services, combines everything from networking to mobility.

    IBM’s Global Technology Services segment, which includes Infrastructure Services, lost more than $1 billion in revenue in 2014, according to Gartner’s estimates. The analysts attributed this loss to new and emerging providers in the market who are agile and extremely competitive.

    Gartner said IBM needs to review its strategy and be more open to enabling its clients to use competitors’ cloud services, such as AWS and Azure, as opposed to driving them squarely to its own SoftLayer cloud.

    6:14p
    Executive Exodus Continues at VMware with Hollis Departure

    An exodus of executives and talks of reverse mergers over the past couple of days have put a cloud over VMware and its majority stakeholder, EMC, just ahead of next week’s VMworld in San Francisco.

    The revolving executive door continued to turn on Tuesday at Palo Alto-based VMware with the departure of Chuck Hollis, the company’s chief strategist and its fourth C-level departure in a month, reported our sister site, The VAR Guy.

    It’ll be a bit of a reunion for Hollis, who after working 21 years for the EMC Federation, will join Oracle and report to his former boss from the early EMC days, Dave Donatelli.

    Also on the personnel front, VMware just appointed insider Ray O’Farrell as chief technology officer, a role in which he will also oversee the company’s R&D, according to The VAR Guy. He replaces the company’s third-ranking executive, Ben Fathi, who has not yet announced where he’ll land next.

    If all the exits weren’t enough to make investors nervous, Fortune recently reported that EMC’s board may be looking at a proposal that would allow VMware to buy back its 80 percent holding, separating the two companies.

    When news of a possible merger hit the streets, shares of EMC stock rose nearly 4 percent, while VMware dropped by 5 percent.

    Read the complete post at: http://thevarguy.com/information-technology-channel-leadership-news/082515/chuck-hollis-leaves-vmware-oracle-prompts-leadership.

    6:26p
    Time to Test Your Data Center IQ

    The people who run data centers tend to work in isolation. More often than not, the rest of a company doesn’t actually know what IT folks do all day inside those data centers. As a result, the people running data centers have no easy way to compare their knowledge and expertise with the rest of the overall IT community.

    To provide some insight into the relative depth of knowledge of the people who run the data center, Emerson Network Power commissioned The Ponemon Institute to launch a Data Center IQ benchmark survey of over 750 IT professionals that work in data center environments.

    At the Data Center World conference in September, Dan Draper, director of data center programs for Emerson, will not only present the results of that survey but also distribute a subset of the questions asked by The Ponemon Institute to IT professionals attending his session.

    “We’re going to have a lot fun,” Draper said. “There will be quizzes and trivia, and we may give away a prize to whoever scores the highest IQ.”

    Draper said that rather than focusing solely on energy issues, the survey encompasses five areas that pertain to data center operations to help identify the areas of data center management that are best and least understood.

    Not surprisingly, there is a significant knowledge gap between older and younger members of the IT staff in terms of depth of knowledge, with older workers tending to be able to draw on more experience to answer a broader range of questions, he said.

    Of course, nearly everyone who works in a data center considers themselves an expert in their field. The Ponemon Institute Data Center IQ Benchmark survey will enable attendees at the conference to actually put those beliefs to the test.

    For more information, sign up for Data Center World National Harbor, which will convene in National Harbor, Maryland, September 20-23, 2015, and attend Dan’s session titled “Measuring Your Data Center IQ: A National Study on Data Center Operational Expertise and Knowledge.”

