Data Center Knowledge | News and analysis for the data center industry
 

Monday, April 11th, 2016

    9:00a
    Monitoring as a Discipline and the Systems Administrator

    Gerardo Dada is Vice President of Product Marketing for SolarWinds.

    Today’s rate of change in the data center is rapidly accelerating. From simply racking and stacking servers decades ago to the recent integration of new technologies like virtualization, hyperconvergence, containers and cloud computing, to name a few, traditional data center systems have undergone considerable evolution.

    And with the new reality of hybrid IT, in which an organization’s IT department must manage a set of critical services on-premises that are connected with another set of services in the cloud, the systems administrator’s role has become that much more complex. More importantly, businesses today run primarily on software and applications, and the expectation that these will always work and work well (fast) has never been higher.

    Thus, as systems complexity continues to grow alongside the expectation that an organization’s IT department should deliver a quality end-user experience 24/7 (meaning no glitches, outages, application performance problems, etc.), it’s important that IT professionals give monitoring the priority it deserves as a foundational IT process.

    Making the Case for Monitoring as a Discipline

    Traditionally, monitoring in the data center has been something of an afterthought. For most organizations, it’s “a necessary evil”: a resource the IT department turns to when there’s a problem to solve, and often a job done with just a free tool, either open source or whatever the hardware vendor included.

    The truth is, without better visibility into the health and performance of its systems, and without a tool that can provide early warnings, an IT department will always be stuck in reactive mode (troubleshooting). By establishing monitoring as a core IT function (a.k.a. monitoring as a discipline), businesses can benefit from a more proactive, early-action IT management style, while also improving infrastructure performance, cost efficiency and security.

    In the face of enterprise technology’s exponential rate of change, monitoring as a discipline is a concept that calls for monitoring to become the defined job of one or more IT administrators in every organization. The most important benefit of such a dedicated role is the ability to turn data points from various monitoring tools and utilities into more actionable insights for the business by looking at all of them from a holistic vantage point, rather than each disparately.

    Of course, a dedicated monitoring role may not be feasible for organizations with budget and resource constraints, but the primary goal is to put a much larger emphasis on monitoring in daily IT operations, using a comprehensive (although not necessarily expensive) suite of tools.

    Consider the host of data breaches that took place in 2015. Networks, systems and cloud providers alike were infiltrated, millions of individuals’ personal information was leaked or stolen and the monetary consequences totaled hundreds of millions of dollars. Many of these breaches could have been prevented with a holistic and dedicated approach to monitoring that included tracking network traffic, logs, software patches, configuration changes, credentials and which users attempted to access server data.
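
    To make this concrete, here is a minimal sketch of one small piece of that monitoring puzzle: scanning an SSH authentication log for repeated failed logins. The log path, threshold and Python implementation are illustrative assumptions for the example, not a reference to any specific product.

        import re
        from collections import Counter

        # Assumed log location and alert threshold -- adjust for your environment.
        AUTH_LOG = "/var/log/auth.log"
        FAILED_LOGIN_THRESHOLD = 5

        # Matches the "Failed password" lines sshd typically writes to the auth log.
        FAILED_RE = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

        def suspicious_sources(log_path=AUTH_LOG, threshold=FAILED_LOGIN_THRESHOLD):
            """Return source IPs with at least `threshold` failed logins."""
            failures = Counter()
            with open(log_path) as log:
                for line in log:
                    match = FAILED_RE.search(line)
                    if match:
                        failures[match.group(1)] += 1
            return {ip: count for ip, count in failures.items() if count >= threshold}

        if __name__ == "__main__":
            for ip, count in suspicious_sources().items():
                print(f"ALERT: {count} failed logins from {ip}")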

    In addition, more strategic monitoring—meaning tracking only select metrics that provide actionable insights and align with business needs—will help systems administrators fine-tune infrastructure. As much as 50 percent of an IT department’s infrastructure spend can be wasted as a result of inaccurate capacity planning, overprovisioning, zombie resources and resource hogs.

    This is a concern especially for systems administrators in hybrid environments, where careful attention should be paid to provisioning and workload allocation to realize maximum cost efficiency. For applications or workloads that may be hosted offsite, poor performance monitoring can also result in an inability to diagnose problems or latency issues.

    By leveraging insights from proactive and targeted monitoring, like historical usage and performance metrics, systems administrators can better optimize resources, save their organizations money, and address performance issues before the end user even notices anything is wrong.
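
    As an illustration of what targeted monitoring can look like in practice, the sketch below flags likely zombie resources from a hypothetical export of 30-day utilization averages. The file name, column names and idle thresholds are all assumptions made for the example.

        import csv

        # Assumed export of historical utilization data, one row per VM.
        USAGE_CSV = "vm_usage_30d.csv"
        IDLE_CPU_PERCENT = 5.0   # example threshold for "barely used" CPU
        IDLE_MEM_PERCENT = 10.0  # example threshold for "barely used" memory

        def zombie_candidates(path=USAGE_CSV):
            """Flag VMs whose 30-day averages suggest wasted spend."""
            candidates = []
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    cpu = float(row["avg_cpu_percent"])
                    mem = float(row["avg_mem_percent"])
                    if cpu < IDLE_CPU_PERCENT and mem < IDLE_MEM_PERCENT:
                        candidates.append((row["name"], cpu, mem))
            return candidates

        if __name__ == "__main__":
            for name, cpu, mem in zombie_candidates():
                print(f"Review {name}: avg CPU {cpu:.1f}%, avg memory {mem:.1f}%")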

    Getting Started

    Of course, refining or redesigning the way a business approaches monitoring will take time, and not every organization will have the resources to dedicate a person exclusively to monitoring. But there are several ways systems administrators and all other IT professionals can bolster their skillsets and integrate the principles of monitoring as a discipline into daily operations to increase efficiency and effectiveness in the data center.

    • Establish metrics that matter to your business. Monitoring can be very tactical. Many IT professionals rely on the data that monitoring tools generate by default, often hundreds of resource metrics of little value and a barrage of alerts. To create a more thoughtful monitoring strategy, IT departments should identify which metrics matter most to the business, such as overall system throughput, efficiency and health of crucial application components and services, and from there assign alerts.
    • Define alerts that are actionable and tied to your usable metrics. Many monitoring tools provide data at a very granular level. When IT professionals get tactical alerts every time a resource metric falls outside the acceptable range, most alerts end up being ignored. Alerts should be sent only when action is required, and each alert should provide the proper context to guide that action. Good alerts start with the user experience. For example, an alert should notify an admin when website response time degrades, not when one of the Web server CPUs crosses the 80 percent threshold (a minimal sketch after this list illustrates the approach). This focus helps systems administrators concentrate on what is important and avoid being bogged down in endless, often irrelevant metrics and alerts.
    • Ensure your organization leverages a monitoring tool that provides full-stack visibility. It’s no secret that IT has traditionally functioned in silos: IT professionals have managed servers, storage and other infrastructure elements separately for decades. But today’s businesses run on software and applications, which draw on resources from the entire system: storage, server compute, databases, etc., all of which are increasingly interdependent. IT professionals need visibility into the entire application stack to identify the root cause of issues quickly, and to proactively catch problems that could hurt the end-user experience, and the business’s bottom line, if not corrected.

    Without the benefit of a comprehensive monitoring tool, systems administrators are forced to jump back and forth between multiple software tools—and in the case of hybrid IT, between tools for physical hardware and tools for cloud-based applications—to troubleshoot issues. The result is often finger-pointing and hours of downtime spent looking for the problem rather than fixing it, or better yet, preventing it. Organizations should look for and invest in a tool that consolidates and correlates data to deliver more breadth, depth and visibility across the data center.

    • Embrace performance as a requirement. In today’s business, uptime is not enough. End users’ performance expectations have increased dramatically, thanks largely to the speed at which most of today’s websites function. An application that takes seconds to respond is almost as bad as an application that is down; the acceptable page-load time for customer-facing applications is now under two seconds. Furthermore, there is an increasingly obvious link between performance and infrastructure cost, especially in virtualized and cloud environments. As a result, applications need to perform at their best, and understanding what drives performance and what degrades it over time is another aspect of monitoring that IT departments must embrace.
    • Be proactive. The daily job of some IT teams feels like a game of whack-a-mole, moving from fire drill to fire drill and consuming all of the team’s time and energy. When IT adopts monitoring as a discipline, problems can be caught and solved when the first warning signs appear, preventing fire drills and avoiding business impact. Being proactive also means doing proper capacity planning, security assessments, software patching, compliance reporting, fine-tuning and other maintenance tasks that can be automated or simplified with the insights a proper monitoring process provides. A proactive IT team suffers less downtime and spends more time on strategic initiatives that continuously improve the technology foundation on which the organization runs.
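
    To make the alerting advice above concrete, here is a minimal sketch of a user-experience check in the spirit of the bullets above: it alerts only when the user-facing symptom appears (a slow or failed page load) and stays silent otherwise, rather than paging on a raw CPU threshold. The URL is a placeholder, and the two-second SLA mirrors the page-load expectation discussed above.

        import time
        import urllib.request

        URL = "https://www.example.com/"  # placeholder endpoint
        RESPONSE_TIME_SLA_SECONDS = 2.0   # echoes the two-second page-load figure

        def check_response_time(url=URL, sla=RESPONSE_TIME_SLA_SECONDS):
            """Return an alert string only when the user-facing SLA is broken."""
            start = time.monotonic()
            try:
                with urllib.request.urlopen(url, timeout=10) as response:
                    response.read()
            except OSError as exc:
                return f"ALERT: {url} is unreachable ({exc})"
            elapsed = time.monotonic() - start
            if elapsed > sla:
                return f"ALERT: {url} responded in {elapsed:.2f}s (SLA {sla:.1f}s)"
            return None  # Within SLA: stay silent rather than add to alert noise.

        if __name__ == "__main__":
            alert = check_response_time()
            if alert:
                print(alert)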

    In sum, monitoring as a discipline is a practice designed to help IT professionals escape the short-term, reactive mode of administration that insufficient monitoring so often causes, and become more proactive and strategic. From there, organizations can invest the time saved in building the right monitoring system for their business, one that intelligently alerts administrators to problems. As the data center continues to integrate new technology and grow in complexity, and especially as hybrid IT adoption increases, IT professionals should establish monitoring as a discipline, adopt best practices to improve systems awareness, tune performance and deliver the highest-quality end-user experience possible.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:36p
    Microsoft Embraces EU-U.S. Privacy Shield
    By WindowsITPro

    Microsoft has formally endorsed the EU-U.S. Privacy Shield, becoming the first major US technology company to do so.

    The deal would replace the International Safe Harbor Privacy Principles agreement that previously governed data being passed between the US and European Union countries, but which was invalidated last October.

    “First and foremost, at Microsoft we believe that privacy is a fundamental human right,” wrote John Frank, Microsoft Vice President of EU Government Affairs, in a post announcing the endorsement. “In a time when business and communications increasingly depend on the transmission of personal data across borders, no one should give up their privacy rights simply because their information is stored in electronic form or their technology service provider transfers it to another country.”

    Frank highlighted that the Privacy Shield would provide a strong starting point for data transfer agreements, while offering broader transparency on how both companies and government entities manage and share data.

    “People won’t use technology that they don’t trust,” Frank wrote. “Legal rules that clearly delineate individual rights, ensure transparency in how those rights are protected, and offer due process when people believe their rights have been violated provide a foundation for trust that is essential to realizing the full power of these new technologies to drive innovation and advance human progress.”

    According to Reuters, the endorsement is the first to come from a major US tech company. Microsoft has been working to appeal to privacy-wary European companies and to court European regulators; in March, it set up an Azure data center in Germany that sits outside its own legal control.

    Original article appeared at http://windowsitpro.com/cloud/microsoft-embraces-eu-us-privacy-shield-people-won-t-use-technology-they-don-t-trust

    8:06p
    Study: No Cybersecurity Training Required at Top US Universities
    By The VAR Guy

    None of the top 10 computer science programs at American universities requires a single course in cybersecurity before graduation, signaling a dearth of proper training for the next generation of IT administrators and developers.

    The statistic comes via an independent study of 121 universities commissioned by CloudPassage, a company specializing in cloud infrastructure security solutions.

    In fact, the only school among the top 36 ranked computer science undergraduate programs in the country to require a cybersecurity course was the University of Michigan, which placed 12th in last year’s rankings, according to the report.

    Furthermore, three of the top 10 computer science programs don’t even offer a cybersecurity class as an elective, the study found.

    Rochester University offers the most cybersecurity electives of any computer science program in the country, with 10 available. The University of Alabama is the only school to require three or more cybersecurity classes for an information systems or computer science degree.

    CloudPassage CEO Robert Thomas said cybersecurity automation will only go so far in preventing the next major cyberattack – the long-term solution will be in educating IT professionals to identify these threats so they can develop more secure code.

    “I wish I could say these results are shocking, but they’re not,” said Thomas, in a statement. “With more than 200,000 open cybersecurity jobs in 2015 in the U.S. alone and the number of threat surfaces exponentially increasing, there’s a growing skills gap between the bad actors and the good guys. It’s not good enough to tack cybersecurity on as an afterthought anymore. This is especially true as more smart devices become Internet accessible and therefore potential avenues for threats.”

    All of the schools studied were drawn from three 2015 rankings: U.S. News and World Report’s Best Global Universities for Computer Science, Business Insider’s Top 50 best computer-science and engineering schools in America, and the QS World University Rankings 2015 – Computer Science & Information.

    These figures are especially worrying to members of the channel community, because a lack of educated professionals could lead to lost revenue or customer dissatisfaction through unforeseen security breaches. Currently, the average cost of a cyberattack is $551,000 for enterprises and $38,000 for small businesses, according to Kaspersky Lab’s 2015 IT Security Risks Survey. And with the amount of new malware created increasing monthly, channel companies need to take every precaution necessary to ensure that their customers’ sensitive data remains safe.

    Original article appeared at http://thevarguy.com/network-security-and-data-protection-software-solutions/study-no-cybersecurity-training-required-top

    10:14p
    WordPress.com Secures Millions of Domains with Free, Automatic HTTPS
    By The WHIR

    The web just got a whole lot more secure.

    WordPress.com is offering free, automatic HTTPS for all custom domains hosted on WordPress.com, according to a recent announcement by the company.

    This is significant because there are more than one million custom domains hosted on WordPress.com, including automattic.com, the website of its parent company, Automattic.

    The Let’s Encrypt project enabled WordPress.com to provide SSL certificates for a large number of domains, the company said, and after launching its first batch of certificates in January, it immediately started working with Let’s Encrypt to “make the process smoother” for its growing list of domains.

    Let’s Encrypt launched in December, and has issued more than one million SSL certificates since then.
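
    For readers curious which certificate authority a given site actually uses, a short standard-library sketch like the following reports the issuer and expiry of the certificate a host presents (the host name is a placeholder; any HTTPS site works):

        import socket
        import ssl

        HOST = "wordpress.com"  # placeholder host

        def certificate_summary(host=HOST, port=443):
            """Fetch and summarize the TLS certificate a host presents."""
            context = ssl.create_default_context()  # verifies against the system trust store
            with socket.create_connection((host, port), timeout=10) as sock:
                with context.wrap_socket(sock, server_hostname=host) as tls:
                    cert = tls.getpeercert()
            # The issuer field is a tuple of relative distinguished names.
            issuer = dict(rdn[0] for rdn in cert["issuer"])
            return issuer.get("organizationName"), cert["notAfter"]

        if __name__ == "__main__":
            org, expires = certificate_summary()
            print(f"Certificate issued by {org}, expires {expires}")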

    Making HTTPS easier for website owners to implement is not just beneficial for security, as WordPress points out; it also has implications for search rankings. In 2014, Google made waves by announcing that it would use HTTPS as a ranking signal, giving HTTPS websites a boost over HTTP websites.

    WordPress.com joins other hosting providers in offering support for Let’s Encrypt. In January, DreamHost announced integrated support for Let’s Encrypt for all of its managed hosting customers. With the integration, customers can enable free HTTPS protection with a single click.

    Original article appeared at http://www.thewhir.com/web-hosting-news/wordpress-com-secures-millions-of-domains-with-free-automatic-https

