Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
Wednesday, April 27th, 2016
12:00p | Server and Application Management in the Hybrid IT Era

Gerardo Dada is Vice President of Product Marketing for SolarWinds.
As a recent survey of IT pros revealed, moving some parts of an organization’s infrastructure to the cloud is a priority, but one that presents a challenging management scenario. Server and application management in the cloud doesn’t have to be a daunting prospect, however. IT professionals can better equip themselves to manage—or prepare to manage—servers and applications in a hybrid IT environment by addressing several key considerations and leveraging certain best practices for an optimized data center.
To start, one of the most important things to remember in the hybrid IT era is that the cloud is not for everything. Too many companies begin implementing hybrid IT environments without first considering which workloads make the most sense for which environments. While it’s tempting to look at the growing popularity and benefits of cloud computing and say, “Let’s move some of our applications to AWS and see how it works,” without a fundamental understanding of all your workloads and what they require for optimal performance, you will more than likely hinder your organization’s efforts to achieve cost savings, greater performance and agility, or any other anticipated benefit of cloud computing.
For example, it’s easy to assume that the cloud is inexpensive and will save your business a lot of money. That’s certainly true if you think strategically about what to put there—maybe it’s an infrequently accessed web server or an application that is scaling so quickly that it’s more efficient to grow in the cloud. All too often, however, organizations underestimate service fees and the architecture required to meet SLAs and discover down the road that operating that workload in the cloud is actually more expensive. Even born-in-the-cloud startups may eventually grow to a size where it makes more sense to move portions of their infrastructure back to a cheaper, physical location.
And without the proper research, an unsuspecting administrator might move a mission critical workload to a cloud environment that is not designed to provide the level of uptime or security required, leading to a myriad of performance and data compliance issues.
Another challenge inherent to managing servers and applications in a hybrid IT environment is that with at least some portion of your infrastructure in the cloud, you’re left with not only less visibility, but less control. Imagine one of the recent widespread cloud outages where an entire geographical region of service goes down. Instead of being able to run down the hall and physically diagnose and troubleshoot the problem as you could with an all on-premises infrastructure, your time-to-resolution instead hinges on how quickly the provider can identify and solve the issue.
So, how do you best manage servers or applications that are hosted in the cloud while simultaneously maintaining your on-premises infrastructure? What is the best way to look at the data from both locations in the same way, and in a way that allows you to optimize your environment and end-user experience? Here are a few best practices you can leverage to help align the management of on-premises and cloud-based infrastructure and applications in the hybrid IT era.
- Monitor from the ground to the cloud – Just as you would establish a unified view across on-premises hardware, where your infrastructure might consist of any number of disparate vendor solutions, IT professionals must implement a tool that provides a view across the entire hybrid IT environment. The data these tools generate will allow you to make informed decisions about which workloads belong on-premises and which belong in the cloud. For example, with an effective monitoring tool you might see that some components in the cloud are running slower or costing more, so you can bring them back on-premises. The opposite could also be true: monitoring data may reveal you’re running out of space and need to move some items to the cloud for quick and easy scalability. (A minimal cross-environment check is sketched after this list.)
Beyond its importance in identifying workload requirements, disciplined monitoring in a hybrid environment also ensures the data center is operating as efficiently as possible. You should be able to see, through a single pane of glass and at any moment in time, when application performance in the cloud is slowing down or one of your physical servers is over capacity and in need of reprovisioning. This allows you to proactively identify problem areas and speed time-to-resolution before end-users are impacted and 100 help desk tickets show up in your team’s inbox.
- Identify your workload metrics – For anything you consider moving to the cloud, it’s critical to consider what kind of response time you want and expect, and how you’ll measure it. How mission-critical is the application you’re considering shifting to the cloud? What are the SLAs? How stable is the load, and how will the workload grow or evolve over time? How will you charge the cost back to the business? Start with these types of questions and work backwards to identify the most appropriate technology for the workload. As IT professionals, we love to start with the technology and identify workloads later, but the reverse will help you create a chargeback/showback history that demonstrates to the business how a hybrid IT environment is beneficial and effective.
- Have a plan B – Some IT professionals expect cloud providers to ensure that things like security and network performance will “just work.” But at the end of the day, you and the rest of your IT department are ultimately responsible for infrastructure and application performance, and everything that is done as-a-service needs a plan B. How will you know when there’s a problem? How will you know whether the problem is yours or the provider’s, and what is your mitigation plan? What are the provider’s SLA details? What is their recommended architecture? Think through the worst-case scenarios of a hybrid IT environment early on, so you can prevent the problems that are preventable and be prepared for the ones that aren’t.
A unified monitoring strategy is the best way to stay ahead of potential performance, security and capacity problems, identify the root cause—is it a problem on your end or the cloud provider’s—and know when it’s time to turn to plan B. (A small example of this kind of “ours or the provider’s” check also follows the list.)
- Remember that the cloud is not for everything – And that’s okay. The cloud is here to stay, and for many businesses it represents the future of IT. But that doesn’t spell the demise of on-premises IT infrastructure anytime soon, and the point of a hybrid IT strategy is to optimize your workloads based on their specific components and their requirements. If your database requires extremely high performance and is already perfectly functional on-premises, leave it on-premises. On the other hand, there is often plenty that can be moved to the cloud. For example, most web applications should be storing graphics, large files and videos in the cloud where they can enjoy the benefits of a CDN and take the load off web servers.
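To make the first item above concrete, here is a minimal sketch of the kind of cross-environment check a "ground to the cloud" monitoring tool performs: it pulls one metric from a cloud provider and one from an on-premises host and applies the same threshold to both. It assumes AWS CloudWatch via boto3 on the cloud side and a host running psutil on the on-premises side; the instance ID and the 85 percent threshold are illustrative placeholders, not values from the article.

```python
# Minimal sketch of a single check spanning both environments.
# Assumes boto3 credentials are configured and psutil is installed;
# the instance ID and threshold below are illustrative placeholders.
from datetime import datetime, timedelta, timezone

import boto3   # AWS SDK, used here only for CloudWatch metrics
import psutil  # local host metrics stand in for the on-premises agent

CPU_ALERT_PERCENT = 85  # one threshold applied to both environments


def cloud_cpu_percent(instance_id):
    """Average EC2 CPU utilization over the last 15 minutes, via CloudWatch."""
    cloudwatch = boto3.client("cloudwatch")
    now = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - timedelta(minutes=15),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0


def on_prem_cpu_percent():
    """CPU utilization of the local (on-premises) host."""
    return psutil.cpu_percent(interval=1)


def check(label, value):
    status = "ALERT" if value >= CPU_ALERT_PERCENT else "ok"
    print(f"[{status}] {label}: {value:.1f}% CPU")


if __name__ == "__main__":
    check("cloud web tier (i-0123456789abcdef0)", cloud_cpu_percent("i-0123456789abcdef0"))
    check("on-prem database host", on_prem_cpu_percent())
```

A commercial monitoring suite would of course handle collection, storage and visualization for you; the point is simply that metrics from both environments feed the same decision rule, which is what makes a single-pane-of-glass comparison possible.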
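The "plan B" item can be illustrated the same way. Before escalating, it helps to establish whether a failure is inside your application or upstream at the provider. The sketch below is a rough, assumption-laden example: both URLs are hypothetical placeholders, and reaching a provider's status page is only a crude proxy for provider health.

```python
# Rough sketch: is the outage ours or the provider's?
# Both URLs are hypothetical placeholders, and a reachable status page is
# only a crude proxy for provider health.
import requests

APP_HEALTH_URL = "https://app.example.com/healthz"          # your application
PROVIDER_STATUS_URL = "https://status.example-cloud.com/"   # provider status page


def is_up(url, timeout=5.0):
    """True if the URL answers with HTTP 200 within the timeout."""
    try:
        return requests.get(url, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False


def diagnose():
    app_ok = is_up(APP_HEALTH_URL)
    provider_ok = is_up(PROVIDER_STATUS_URL)
    if app_ok:
        return "Application healthy; no action needed."
    if not provider_ok:
        return "Provider-side incident suspected: open a ticket and execute plan B (failover/DR)."
    return "Provider reachable but application down: escalate internally."


if __name__ == "__main__":
    print(diagnose())
```

In practice you would also consult the provider's status API and your own synthetic transactions, but even a check this simple shortens the "is it us or them?" conversation described above.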
Ultimately, despite the race to the cloud, there is no “right” way to adopt elements of cloud computing and introduce hybrid IT into your organization—it’s different for every organization, and is more than likely a multi-year journey. Your business should develop a roadmap that helps chart future cloud integration based on a workload by workload evaluation that considers requirements, potential upside, costs and urgency.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

5:42p | Texas Data Center Boom Filling up Public Coffers

Texas has been enjoying a data center boom over the last two years or so. Companies ranging from data center providers like Digital Realty Trust, RagingWire, and Aligned Data Centers to big internet players like Facebook are attracted to major Texas markets by their central US location, quickly growing population and workforce, strong industry and public-sector user base, relatively low energy costs, and a regulatory and tax environment that favors business.
According to Jones Lang LaSalle, the Houston market was fourth in the nation by the number of square feet of data center space under construction in 2015; Dallas was sixth; Austin and San Antonio together were in the ninth spot.
The New York Times covered the Texas data center boom this week, highlighting some of the biggest recent construction projects and pointing out how much of a money maker the data center industry has been for state and local tax coffers.
While data centers create relatively few jobs – nowhere near the number factories bring – the property and sales taxes governments collect on the expensive equipment purchases their users make are well worth the tax breaks offered to lure them in.
“Each one of these data centers is a little gold mine cranking out wealth for the city,” John Jacobs, executive VP of the Richardson, Texas, chamber of commerce, told NYT.
Here is our coverage of the most recent data center construction projects in Texas:
Skybox Building Large in Dallas Market
Equinix to Open New Data Centers on Four Continents
RagingWire Takes Its Massive-Scale, Luxury-Amenities Data Center Model to Texas
Texas Colo with Efficient Data Center Cooling System Launched
$1B Facebook Data Center Project Underway in Texas
Read the full New York Times article on the Texas data center boom here.

6:13p | Out with the PC: Intel CEO Presents Optimistic Vision for Company Amid Layoffs

By Talkin’ Cloud
A week after announcing that Intel would be cutting 11 percent of its total workforce – or 12,000 jobs – CEO Brian Krzanich has shared more insight into how these changes will position Intel in the changing tech landscape.
In a blog post on Tuesday, Krzanich said that investments in high-performance computing, big data and machine learning capabilities, the Internet of Things (with a focus on autonomous vehicles, industrial and retail), 5G connectivity, and more will help position the 48-year-old company as a leader in cloud, Internet of Things and modernized data centers.
“The data center and Internet of Things businesses are now Intel’s primary growth engines, and combined with memory and FPGAs, form and fuel a virtuous cycle of growth. Together, these businesses delivered $2.2 billion in revenue growth last year, made up 40 percent of our revenue, and the majority of our operating profit,” Krzanich said in an email to employees on Apr. 19.
Read more: Intel: World Will Switch to “Scale” Data Centers by 2025
Pink slips are already being delivered this week in California and Oregon, where 784 and 565 employees, respectively, will be laid off, according to a report by The Oregonian. An Arizona manufacturing plant will also see layoffs this week, with 560 workers expected to get the news.
The majority of the layoffs are expected to take place over the next 60 days.
In his blog post, Krzanich outlined five core beliefs on which Intel has based its strategy:
- The cloud is the “most important trend” shaping the future.
- The connection to the cloud makes the “many ‘things’ that make up the PC Client business and the Internet of Things…much more valuable.”
- “Memory and programmable solutions such as FPGAs” will support new products for the data center and the Internet of Things.
- Access to the cloud will be driven by 5G connections.
- “Intel’s industry leadership of Moore’s Law remains intact, and you will see continued investment in capacity and R&D to ensure so.”
This first ran at http://talkincloud.com/cloud-computing/out-pc-intel-ceo-presents-optimistic-vision-company-amid-layoffs
6:22p | Microsoft Shares How It Hunted Rogue Actor Siphoning Corporate Data

By WindowsITPro
Last year, Microsoft made a big deal about how it was investing a billion dollars in building out its security apparatus. On the Microsoft Malware Protection Center’s Threat Research & Response blog, the company shared a little bit about how that investment has paid off, telling the story of how the Windows Defender Advanced Threat Hunting team, or just Hunters for short, thwarted a long-running attack that relied on a series of malicious patches and deep discretion.
The attack utilized Hotpatching, which had been discussed as a possible threat vector a decade ago but, until now, never seen in the wild. It was performed by a rogue actor group Microsoft has codenamed Platinum (Microsoft uses chemical elements as code names for rogue actors).
What’s interesting in the case study is that this wasn’t exactly an unknown vulnerability: the technique was publicly discussed more than a decade ago, yet its first successful use in the wild came almost a decade after that disclosure. Because the technique turns Windows Server 2003’s own update system against it, it bypassed most common security scanners.
But beyond the technical details, there’s a lot of high drama. Unlike many of the threats that are reported on publicly, Microsoft said Platinum seeks to keep a low profile for years, targeting “governmental organizations, defense institutes, intelligence agencies, and telecommunication providers in South and Southeast Asia.”
Windows 10 is not vulnerable to this attack, according to Microsoft.
This first ran at http://windowsitpro.com/security/microsoft-shares-how-it-hunted-secretive-rogue-actor-siphoning-corporate-data

7:22p | Nlyte Cooks Up DCIM Software Module for Government Agencies

Along with the freeze on data center construction by government agencies, the most recent White House initiative to optimize the government’s data center infrastructure included a mandate for agencies to use DCIM software to monitor and manage their IT facilities.
The mandate is potentially a boon for DCIM software vendors, and at least one of them is looking to maximize business benefit from the latest Data Center Optimization Initiative, rolled out in March.
Read more: White House Orders Federal Data Center Construction Freeze
Nlyte Software, which claims it already has a lot of customers among federal agencies, announced an add-on to its software product designed specifically to help agencies comply with the many DCOI requirements, set goals and track their progress toward achieving them.
Only about 30 agencies have deployed DCIM software to date, Nlyte said, citing 451 Research. In a statement, Nlyte CEO Doug Sabella said the “mandate has significantly increased the number of federal agencies looking to deploy a DCIM solution to help optimize their data centers.”
The company’s new DCOI module was designed to make it easier for agencies to improve the efficiency of their infrastructure and to document the improvements. It helps them compare current performance to DCOI objectives and adds:
- Goal and target-date configuration within charts, to help facility managers understand where mandates and objectives are being achieved.
- Multivariate analysis that establishes a regression line to predict a “realization date,” the point at which the agency should be in compliance with each focus area. (A simplified, single-variable version of this idea is sketched after this list.)
- New micro-permissions capabilities grant individuals specific and detailed dashboard access as necessary — throughout any part of the data center complex or across an entire global portfolio.
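To illustrate the regression idea in the second bullet above, here is a simplified, single-variable sketch (not Nlyte's implementation, which is not public): it fits a linear trend to a hypothetical monthly PUE series and extrapolates the date at which the metric should cross a DCOI-style target of 1.5. The data points and the target are made up for illustration.

```python
# Rough sketch of a "realization date" projection: fit a linear trend to a
# compliance metric and extrapolate when it crosses the target.
# The PUE series and the 1.5 target are illustrative, not Nlyte data.
from datetime import date, timedelta

import numpy as np

TARGET_PUE = 1.5                # DCOI-style efficiency target (illustrative)
START = date(2016, 1, 1)        # month 0 of the hypothetical series

# month index -> measured PUE (made-up numbers for illustration)
observations = {0: 1.92, 1: 1.88, 2: 1.85, 3: 1.80, 4: 1.78, 5: 1.74}

months = np.array(sorted(observations), dtype=float)
pue = np.array([observations[m] for m in sorted(observations)])

# Least-squares line: pue ~ slope * month + intercept
slope, intercept = np.polyfit(months, pue, 1)

if slope >= 0:
    print("Metric is not improving; no realization date can be projected.")
else:
    months_to_target = (TARGET_PUE - intercept) / slope
    realization = START + timedelta(days=30.44 * months_to_target)
    print(f"Projected DCOI realization date: {realization.isoformat()}")
```

A multivariate version would regress on several focus-area metrics at once, but the extrapolation step (solve the fitted line for the target value and convert the result back to a calendar date) is the same.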
10:53p | Report: VMware Cloud Chief Bill Fathers to Step Down

Bill Fathers, who heads VMware’s public cloud business, a unit whose future has become increasingly uncertain, is leaving the company, Fortune reported, citing anonymous sources and a VMware spokesperson. The company announced his departure in an internal memo sent Tuesday.
Now called vCloud Air, the influential enterprise IT software company’s foray into the public cloud services market started in 2013, the same year Fathers came on board following about six years at Savvis, a data center provider CenturyLink acquired in 2011. He led Savvis as president during the two years he stayed on after the acquisition.
Launched initially as vCloud Hybrid Service, VMware’s public cloud was an attempt to make headway into the enterprise cloud market. The company’s strategy rested to a great extent on the hope that the enormous enterprise user base it had already amassed by 2013 would also use its cloud services if they could connect the VMware environments in their data centers to the public cloud seamlessly.
But as the biggest public cloud players, Amazon, Microsoft, and Google, pumped billions upon billions into their cloud services and the infrastructure to support them, it became increasingly difficult for any company that couldn’t match that level of investment to compete in the market.
Read more: The Billions in Data Center Spending Behind Cloud Revenue Growth
Adding to the uncertainty of vCloud Air’s future is the merger between VMware’s parent company EMC and Dell that’s currently in the works. After investors rejected a proposed plan to combine the cloud service with Virtustream, EMC’s other cloud services subsidiary, EMC and VMware said the plan was off the table but didn’t offer a clear alternative.
A VMware spokesperson told Fortune that vCloud Air would focus on more niche cloud services, such as disaster recovery or “data center extension” projects.
Fathers will be replaced by vCloud Air VP and general manager Allwyn Sequeira and VP of sales and customer success for VMware cloud services Laura Ortman. The two will act as co-managers of the unit.