Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, September 8th, 2015

    12:00p
    Native Cloud and Data Center Tools: Advantages and Challenges

    Building a cloud requires granular visibility not just into hardware components but into running workloads as well. Software vendors have taken their monitoring and management tool sets to entirely new levels in direct response to customer demands. Packages now ship with strong tool sets capable of watching over both the hardware and the cloud-facing workloads that run on top of it.

    With native tools, many powerful features come enabled by default, so administrators do not have to pay extra for those necessary monitoring capabilities. Before looking at other tool sets, IT managers and their staff should get closely acquainted with the features that come natively with their software. This means taking the time to learn the capabilities of these tools and how they directly impact the environment.

    What to Look for with Native Tools?

    Several key components must be present when working with native tools. You need two types of visibility into the cloud environment: the hardware layer and the software layer. These are some of the important components to evaluate and configure when working with native tool sets (a minimal monitoring sketch follows the list):

    • CPU. Cloud-facing servers will have certain CPU requirements. CPU cores should always be monitored and properly balanced between workloads.
    • Memory. One of the most important resources in a virtual and cloud environment. Memory needs to be well managed and controlled.
    • Fans. Native tools have the ability to see hardware components, fans included. This type of host health monitoring system can help catch major issues before they actually happen.
    • Temperature. Monitoring the temperature and environmental aspects of a cloud data center should not rely on a hypervisor’s native tool set alone; there should be dedicated sensors and environmental controls in place. That said, native tools provide an extra layer of protection by monitoring internal host temperatures.
    • Voltage. Good PDU systems will help monitor power fluctuations coming into the rack. However, some tools can help you monitor voltage going into a given server.
    • Network. Monitoring aspects of the LAN is always important and is a great addition to a native tool set. Use this feature to check for anomalies and manage physical as well as virtual NIC configurations.
    • Storage. Native tools help administrators connect their environment to their SAN backbone. From there, these tools can monitor how storage is being used and distributed within the environment.
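
    To make the combined hardware-and-workload visibility concrete, here is a minimal polling sketch in Python. It assumes a hypothetical REST endpoint exposed by the native tool set; the URL, JSON field names, and threshold values are illustrative only and are not taken from any specific vendor’s API.

        import requests  # third-party HTTP client, assumed to be installed

        # Hypothetical endpoint exposed by the native monitoring tool set.
        MONITOR_URL = "https://hypervisor.example.com/api/hosts"

        # Illustrative thresholds covering the components listed above.
        THRESHOLDS = {
            "cpu_percent": 85,      # per-host CPU utilization
            "memory_percent": 90,   # per-host memory utilization
            "inlet_temp_c": 27,     # internal inlet temperature
            "fan_rpm_min": 2000,    # minimum healthy fan speed
        }

        def check_host(host):
            """Compare one host's reported metrics against the thresholds."""
            alerts = []
            if host["cpu_percent"] > THRESHOLDS["cpu_percent"]:
                alerts.append(f"{host['name']}: CPU at {host['cpu_percent']}%")
            if host["memory_percent"] > THRESHOLDS["memory_percent"]:
                alerts.append(f"{host['name']}: memory at {host['memory_percent']}%")
            if host["inlet_temp_c"] > THRESHOLDS["inlet_temp_c"]:
                alerts.append(f"{host['name']}: inlet temperature {host['inlet_temp_c']} C")
            if any(fan["rpm"] < THRESHOLDS["fan_rpm_min"] for fan in host["fans"]):
                alerts.append(f"{host['name']}: fan below minimum speed")
            return alerts

        def main():
            response = requests.get(MONITOR_URL, timeout=10)
            response.raise_for_status()
            for host in response.json()["hosts"]:
                for alert in check_host(host):
                    print("ALERT:", alert)

        if __name__ == "__main__":
            main()

    In practice the same loop could also cover voltage, NIC, and datastore metrics; the point is simply that a native tool set exposing these readings lets a small script watch both layers at once.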

    Where to Avoid Challenges

    The most important thing to remember is that no single tool set is ever truly all-encompassing. The drawback of some native tools is their limited ability to see outside of a given data center. Below is a short list of cautions when working with native tools:

    • Distributed data center visibility. Native tools can become limited when the demands of a distributed cloud infrastructure are placed upon them. Although great locally, native tools sometimes have limited visibility into the operations of other data centers.
    • User count load-balancing. Cloud environments will be successful if user count per server is well managed. Some native tools are simply not designed to manage cloud environments and are geared more toward localized virtual infrastructures. So, even though they can see user count, managing the balance of which user goes where can be challenging.
    • Chargeback visibility. Large environments have been looking to tool sets to help them with their chargeback methodology. Native tool sets can be limited in this type of visibility (a simple example of the calculation appears after this list).
    • Graphs and charts. Although most native tools will provide graphs and charts, some administrators require more. If that’s the case, there will be native tools that just won’t cover these needs. Be prepared to know the level of graphing required for your environment.
    • Alerting. The alerting and alarm capabilities of native tools have come a very long way. Still, for environments looking to leverage advanced alerting functions, native tools can sometimes fall short. For DR environments the alerting needs are great. In a cloud environment, administrators should look for tools with multiple levels of alerting capabilities.
    • Application visibility. Cloud environments are built to give end users easier access to their applications and workloads. Many times native tools can see what is running on top of a cloud server, but not necessarily how the application is performing. This is where administrators must know the needs and demands of their application environment in order to obtain the right tool sets to monitor the workloads.
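
    To illustrate what chargeback visibility boils down to, here is a minimal sketch in Python. It assumes per-tenant usage data has already been collected by some tool; the rate card, tenant names, and usage numbers are invented for the example.

        # Hypothetical per-unit rates; real chargeback models vary widely.
        RATES = {
            "vcpu_hours": 0.03,     # dollars per vCPU-hour
            "ram_gb_hours": 0.005,  # dollars per GB-hour of memory
            "storage_gb": 0.10,     # dollars per GB provisioned per month
        }

        def monthly_charge(usage):
            """Sum a tenant's charges from its monthly resource usage."""
            return sum(usage.get(resource, 0) * rate for resource, rate in RATES.items())

        tenants = {
            "finance": {"vcpu_hours": 1440, "ram_gb_hours": 5760, "storage_gb": 500},
            "marketing": {"vcpu_hours": 720, "ram_gb_hours": 2880, "storage_gb": 200},
        }

        for name, usage in tenants.items():
            print(f"{name}: ${monthly_charge(usage):.2f}")

    If a native tool cannot break usage out per tenant or business unit in the first place, this kind of calculation has nothing to work with, which is where third-party chargeback tooling tends to come in.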

    Working with cloud and data center providers shouldn’t be a hair-pulling experience. Today’s providers offer very granular visibility into a variety of metrics across your entire environment. It really all comes down to understanding your requirements, the business use case, and how tool sets can help you out. There are some very powerful features offered natively with cloud and data center packages. Take the time to understand what they can and can’t do; where they fall short, you may have to look at third-party solutions. Either way, cloud and data center tools can be direct enablers for your IT environment and for your business.

    3:00p
    Are You Ready for Appageddon?

    John Yung is CEO of Appcara.

    There’s no doubt about it — enterprises are moving applications to the cloud in greater numbers than ever before. According to a recent Cloud Business Summit prediction, more than 60 percent of business infrastructure will reside on cloud-based platforms by 2018. CIOs and IT team leaders are scrambling to find new ways to accommodate the fast growth of enterprise applications for an increasingly mobile-first workforce.

    To succeed in this rapidly evolving environment, enterprises need an advanced software platform for provisioning and managing applications in public and private cloud ecosystems. Companies require a scalable, enterprise-class app management system that enables the launch of apps on a standalone basis or as components of more complex workloads.

    So how do you get your business ready for the influx of mobile spending and proliferation of mobile applications? Here are three strategies to help you make it happen.

    Improve Agility With On-demand App Provisioning Capabilities

    One of the top business challenges today is that users demand app access via mobile devices, but IT has to bridge the gap across platforms and enable central data storage. Virtualization can provide on-demand app provisioning for content management, development, databases, CRM, ERP, life sciences, collaboration, big data and much more. With the right solution, CIOs and IT leaders can make their organizations more agile by supporting full Office 365 functions on mobile devices via the cloud.

    Use Automation to Cut Costs and Eliminate Manual Deployment

    To meet the rising demand for mobile capabilities, companies must find a more efficient way to deploy enterprise apps, such as Microsoft Exchange. With an advanced software platform, it’s possible to reduce an Exchange deployment process that used to take two weeks to a single click. A platform can eliminate the need for an onsite consultant — at the cost of about $20,000 for a two-week deployment process — and enable simplified backup customization as well as user interfaces for direct task completion.

    Access a Control Panel to Manage Apps in Real Time

    In addition to cutting costs via automation, businesses need a way to improve deployment accuracy and more tightly control app automation. A single-interface solution is ideal, with a control panel that enables the IT team to control app automation in real time. The solution should also automate system documentation. In order to enhance scalability, the solution should enable portability so that developers can drag and drop applications from private to public clouds.

    As mobility becomes a must-have and a key differentiator for enterprises across all sectors, CIOs and IT leaders will face increasing demand to get applications to production faster, manage apps across multiple platforms in a BYOD environment, improve app deployment accuracy, and manage, document and automate the entire application lifecycle. The ability to effectively manage apps is rapidly becoming a critical success factor.

    Appageddon is coming, but deploying enterprise apps is only part of it. Enterprises must also effectively update, scale and monitor apps. CIOs and IT leaders can confidently meet these challenges and even turn Appageddon into a growth opportunity with an advanced software platform that allows them to efficiently provision and manage applications in public and private cloud environments. With an enterprise-class app management system, your company can compete for customers in this rapidly changing marketplace and win.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:48p
    Oracle Launches Cloud Data Center in Canada

    Oracle announced the launch of a cloud data center in Canada, saying there was demand for cloud services hosted within the country’s borders driven by concerns over data sovereignty.

    Data sovereignty has been a popular angle for public cloud providers going after specific markets since the Edward Snowden scandal. Microsoft said it was catering to Canadians’ data sovereignty “concerns” when it launched its first Azure cloud data centers in Canada earlier this year; Amazon cited similar concerns in Germany when it announced a cloud data center in Frankfurt in 2014.

    Oracle has about 20 data centers around the world. The company doesn’t share many specifics about its data center infrastructure, but when it announced the launch of a cloud facility in Japan earlier this year, it said it was its 19th worldwide.

    The new site in Canada will support Oracle’s cloud application services for human resources, enterprise resource planning, and customer relationship management. The company did not say where in Canada its new cloud data center is located.

    Oracle opened a data center in Brazil in June.

    7:00p
    AWS, GoDaddy Named in Ashley Madison Lawsuit

    A lawsuit has been filed against Amazon Web Services (AWS) and GoDaddy by three anonymous plaintiffs for hosting websites that reproduced the user data stolen from Ashley Madison. The suit seeks $3 million in damages, and also names 20 other defendants, three of whom operated sites which sold the leaked user data.

    The suit was filed in an Arizona federal court, and can be read via The Register (PDF). It acknowledges separate legal action directly against Ashley Madison, but alleges those named in the suit have been involved in the possession and distribution of stolen property.

    “(T)his action deals with a different injury inflicted upon Ashley Madison users by persons and entities who have obtained the stolen data, repurposed it such that it is more readily accessible and searchable by the media and curious Internet users, and actively distributed it for their own gain,” the lawsuit said.

    It also cites a ruling by an Ontario court that issued a restraining order against sharing the leaked data, referring to it as stolen property. The identities of the three site operator defendants are unknown to the plaintiffs, but AWS and GoDaddy were identified as hosts of the three sites through WHOIS records.

    “John Doe” anonymous defendants 4-20 are unnamed persons or entities, leaving room for the plaintiffs and their legal representative Internet law firm Kronenberger Rosenfeld to add more hosts or website operators in the future.

    The legal question from a hosting perspective is the accuracy and applicability of the phrase “are in willful, knowing possession of the Stolen Data.” This is the accusation brought against all defendants, and while the site operators will have a hard time claiming that their possession was not “willful” and “knowing,” AWS and GoDaddy will likely defend their role as, if anything, one of unwilling and unwitting use by the real wrongdoers.

    The good news for GoDaddy and AWS is that there is likely precedent that they are not responsible for policing the content of the sites they host, such as GoDaddy’s 2014 court of appeals victory related to hosting “revenge porn” site Texxxan.com.

    A Russian court decision last year that a web host was responsible for pirated content on its servers is generally considered part of a draconian approach to Internet governance. Meanwhile, it is not clear that web hosts even have the legal right to erase customer content, given the ongoing struggle Carpathia is having with legacy Megaupload servers.

    This first ran at http://www.thewhir.com/web-hosting-news/aws-godaddy-named-in-ashley-madison-lawsuit

    9:42p
    ASHRAE Invites Industry Comment on Data Center Efficiency Standard Draft

    ASHRAE, the body that creates standards for heating and cooling of buildings, has opened the comment period on the second draft of a new standard for data center energy efficiency.

    ASHRAE doesn’t write building codes itself, but local and state officials in the US who do rely heavily on its standards. The data center standard’s first draft, announced in February, came under criticism from some in the data center industry for its use of PUE (Power Usage Effectiveness) to set efficiency benchmarks and, more generally, for its prescriptive nature.
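
    For reference, PUE compares everything a facility consumes to what its IT equipment alone consumes:

        PUE = total facility energy / IT equipment energy

    A hypothetical facility drawing 1.5 MW in total to run 1.0 MW of IT load has a PUE of 1.5; an ideal facility approaches 1.0.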

    The standard, 90.4P, is meant to be used in tandem with another data center standard, 90.1. An update to the latter drew similar industry criticism several years ago for favoring the use of economizers to achieve energy efficiency rather than making efficiency the goal in itself and leaving it to the data center operators to decide how to get there.

    “Reviewers of this draft should understand that the Committee intended for this standard to allow innovation while still saving energy in data centers,” Ron Janagin, chairman of the ASHRAE 90.4 Committee, formed to create the new standard, said in a statement. “The Committee believes that it has recommended the requirements for PUE in the Standard based on a justifiable 80/20 rule where only the lower performing systems will be affected.”

    One of the biggest changes in the second draft of 90.4P is removal of telecommunications facilities from its scope. A previous version would have applied to data centers and telco buildings. The current one covers data centers only.

    The draft also revises data center, computer room, and telecommunications exchange definitions as well as paths to compliance with its PUE requirements.

    The standard would apply to new data centers, expansions of existing data centers, and modifications of existing systems. In other words, facilities and systems already in place would not have to comply unless they are being changed in some way.

    Here’s a digital copy of the second draft of ASHRAE Standard 90.4P.

    You can comment on the draft here.

    The second review period ends October 19.

    A separate effort is also underway to expand ASHRAE’s envelopes for humidity in data centers to allow for more use of free cooling.

    10:01p
    Easynet Acquisition Puts Interoute Closer to Revenue-Doubling Goal

    After the company landed two new investors back in March, the CEO of European cloud provider Interoute set a goal of doubling its revenue over the next five years.

    Today, just six months later, Interoute announced its acquisition of British telco Easynet, a move it expects to help accomplish just that in its division that sells telecom services to large companies and government departments, reported Business Cloud News.

    The company said that by including Easynet in the bottom line equation, it would have generated revenues in excess of $784 million by the end of June. Interoute will reportedly pay $619 million for “one of the champions of broadband competition in Britain.”

    No doubt when Aleph Capital Partners and Crestview Partners invested an undisclosed amount in Interoute for a 30 percent stake, it paved the way for acquisitions of this nature that would fortify its position in Europe.

    It’s safe to say that Interoute has clearly kicked into growth mode and is not likely to be acquired in what has become a fragmented European market. It does raise the question of whether the company will elect to go public sooner, however.

    Last year, in an effort to build out its network, data center, and cloud platform, the European Infrastructure-as-a-Service provider launched close to 10 new zones, including its first two in the U.S.

    According to Interoute, the acquisition means that enterprise, government and service provider customers of the two companies will be privy to a fuller suite of products, services and skill sets. Interoute’s portfolio includes 12 data centers, 14 virtual data centers and 31 colocation centers along with connections to 195 additional third-party data centers across Europe. It owns and operates 24 connected city networks within Europe’s major business centers.

    Easynet has established relationships with key customers (Sports Direct, EDF, Bouygues, Anglian Water, Bridgestone, Levi Strauss and Campofrio Food Group), government bodies and industry standards accreditors, and operates to a number of recognized industry standards. In an effort to meet in-country data needs, the company’s cloud platform is location-sensitive and meant to make compliance with European data-sovereignty laws easier.

    Additionally, the company is one of a select number of managed service providers approved and appointed by the Government Procurement Service to assist the UK Government in its mission to create a network of networks.

    “These are exciting times for our customers,” said Interoute CEO Gareth Williams, in a press release. “Interoute is creating a leading, independent European ICT provider. This is the next step in our acquisition strategy and moves us much closer to our goal of being the provider of choice to Europe’s digital economy.”

    Meanwhile, Easynet CEO Mark Thompson reassured customers that the merger will result in better service to clients. “The combined companies can offer broader and deeper connectivity options, as well as an expanded portfolio of products and services,” said Thompson. “The acquisition will expand an already market-leading cloud hosting capability in Europe.”

    Neither company provided a closing date for the deal.

