Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, June 8th, 2016

    12:00p
    Cold Storage in the Cloud: Comparing AWS, Google, Microsoft

    As the volume of data that companies generate and need to retain balloons, the top cloud providers have come up with a type of cloud service that may replace at least some portion of the market for traditional backup products and services. Cold storage delivered as a cloud service is changing the way organizations store and deliver vast amounts of information. The big question is whether cold storage can deliver better backup economics.

    Amazon Web Services, Google Cloud Platform, and, since April, Microsoft Azure all offer cloud cold storage services. Each takes a different approach, so how do they stack up against each other?

    Addressing the Data Deluge

    Virtually all analysts predict that the cloud services market will keep growing, and growing quickly. Gartner said recently that cloud will constitute the bulk of new IT spending this year and that this will be a defining year for the space, as private cloud begins to give way to hybrid cloud; nearly half of large enterprises will have hybrid cloud deployments by the end of 2017.

    So how much data are we creating? Cisco estimates that global data center traffic is firmly in the zettabyte era and will grow from 3.4ZB in 2014 to 10.4ZB in 2019. A rapidly growing segment of data center traffic is cloud traffic, which by 2019 will account for 8.6ZB of that projected 10.4ZB.

    With Google and Amazon already in the cold storage market, Microsoft decided to join the game as well. In April, Microsoft announced the general availability of Cool Blob Storage – low cost storage for cool object data.

    What Is It For?

    When Microsoft announced Cool Blob Storage in April, it listed example use cases such as backup, media content, scientific data, compliance, and archival data. Essentially, any data that is seldom accessed is a good candidate for cool (or cold) storage: legal data, tertiary copies of information, data that must be retained for longer periods due to compliance requirements, and archival information are all good examples. So what sets cold storage apart from more traditional storage options?

    Let’s start with a definition:

    Cold storage is defined as an operational mode and storage system for inactive data. It has explicit trade-offs when compared to other storage solutions. When deploying cold storage, expect data retrieval times to be beyond what may be considered normally acceptable for online or production applications. This is done in order to achieve capital and operational savings.

    Ultimately, this means choosing the kind of cold storage backup solution that specifically fits your business and workload needs. The reality is that not all cold storage architectures are built the same. Keeping this in mind, let’s examine the three big offerings.

    Google Nearline: Google announced its Nearline archival storage product in 2015, and it was quickly seen as a disruptive solution in the market. Why? It came with the direct promise of a very quick retrieval time of only a few seconds. Compared to market leader AWS Glacier, this is pretty fast. According to Google, Nearline offers slightly lower availability and slightly higher latency than the company’s standard storage product, but at a lower cost. Nearline’s “time to first byte” is between 2 and 5 seconds, which, when you look at other solutions, can be seen as a real game-changer. However, there are some issues.

    One is that Google Nearline limits data retrieval to 4MB/sec for every TB stored. This throughput scales linearly with increased storage consumption, so if you find yourself needing to download massive amounts of data, you may have to wait a while (see the sketch after this list for what that works out to in practice). Still, a feature called On-Demand I/O allows you to increase your throughput in situations where you need to retrieve content from a Google Cloud Storage Nearline bucket faster than the default provisioned 4 MB/s. Two things to keep in mind:

    1. On-Demand I/O is turned off by default.
    2. On-Demand I/O applies only to Nearline Storage and has no effect on Standard Storage or Durable Reduced Availability Storage I/O.
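
    To get a feel for what that default throughput means in practice, here is a minimal back-of-the-envelope Python sketch. The 4 MB/s-per-TB figure is the one quoted above; the helper function and its On-Demand multiplier are hypothetical, not part of any Google API:

        def nearline_retrieval_hours(stored_tb: float, retrieve_gb: float,
                                     on_demand_multiplier: float = 1.0) -> float:
            """Estimate Nearline retrieval time in hours.

            Assumes the default provisioned throughput of 4 MB/s per TB stored
            (decimal units), optionally scaled up when On-Demand I/O is enabled.
            This is an illustrative helper, not a Google API.
            """
            throughput_mb_s = 4.0 * stored_tb * on_demand_multiplier
            seconds = (retrieve_gb * 1000.0) / throughput_mb_s  # GB -> MB
            return seconds / 3600.0

        # Pulling back the full 1TB you have stored, at the default rate:
        print(f"{nearline_retrieval_hours(stored_tb=1, retrieve_gb=1000):.1f} hours")  # ~69.4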

    Overall, Google promises a low-cost, highly durable, and highly available storage service for data archiving, online backup, and disaster recovery. Data is available almost instantly, not within hours or days: with an approximately three-second average response time and pricing of 1 cent per GB per month, Nearline delivers solid performance at a low cost. It also lets you store “limitless” amounts of data and access it rapidly through the Google Cloud Platform Storage APIs.

    Finally, Nearline comes with some useful supporting features. Aside from On-Demand I/O, there are transfer services, which let you schedule data imports from sources such as Amazon S3, HTTP/HTTPS sites, and on-premises locations. The process can be automated for complete lifecycle management.

    AWS Glacier: As one of the first and leading cold storage solutions, Glacier was built as a secure and extremely low-cost storage service for data archiving and online backup. Customers can store large or small amounts of data, and according to Amazon, pricing starts at as little as $0.01 per gigabyte per month, a significant saving compared to on-premises solutions. To keep costs low, Glacier is optimized for infrequently accessed data where retrieval times of several hours are acceptable. Retrieving, say, 1TB plays out very differently on Glacier and Nearline: Glacier would have that storage object available in approximately three to five hours, whereas a Google Nearline customer, four hours into their download, would be about 5 percent of the way through their 1TB, with roughly 69 hours to completion at the default throughput.

    Within the Glacier environment, data is stored in “archives.” An archive can be any data, such as photos, video, or documents. You can upload a single file as an archive or aggregate multiple files into a TAR or ZIP file and upload as one archive.

    A single archive can be as large as 40TB. You can store an unlimited number of archives and an unlimited amount of data in Amazon Glacier. Each archive is assigned a unique archive ID at the time of creation, and the content of the archive is immutable, meaning that after an archive is created it cannot be updated.

    From there, Amazon Glacier uses “vaults” as containers to store archives. You can view a list of your vaults in the AWS Management Console and use the AWS SDKs to perform a variety of vault operations such as create vault, delete vault, lock vault, list vault metadata, retrieve vault inventory, tag vaults for filtering, and configure vault notifications. You can also set access policies for each vault to grant or deny specific activities to users. Under a single AWS account, you can have up to 1,000 vaults.
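
    To make the archive and vault model concrete, here is a minimal sketch using the standard boto3 SDK to create a vault, upload a file as an archive, and start an asynchronous retrieval job. The vault and file names are placeholders; the retrieval job still takes hours to complete, and its output has to be fetched separately once it is ready:

        import boto3

        glacier = boto3.client("glacier")  # region and credentials come from your AWS config

        # Vaults are containers for archives; an account can hold up to 1,000 of them.
        glacier.create_vault(vaultName="my-backups")

        # An archive is any blob of data and is immutable once created.
        with open("photos-2016.tar", "rb") as f:
            archive = glacier.upload_archive(
                vaultName="my-backups",
                archiveDescription="2016 photo archive",
                body=f,
            )

        # Retrieval is an asynchronous job that typically completes in three to five hours.
        job = glacier.initiate_job(
            vaultName="my-backups",
            jobParameters={"Type": "archive-retrieval",
                           "ArchiveId": archive["archiveId"]},
        )
        print("Retrieval job started:", job["jobId"])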

    Once your data is in a vault, administrators can use a number of granular control features, including:

    • Inventory
    • Access controls
    • Access policies
    • Vault locking (write once, read many controls, for example)
    • Audit logging
    • Integrated lifecycle management
    • High-level and low-level AWS API integration
    • Data protection
    • Data reliability

    Microsoft Cool Blob Storage: The launch of the Cool Blob Storage service in April was a catch-up move by Microsoft.

    The Azure cool storage tier is optimized for storing data that is infrequently accessed and long-lived. Costs for Cool Blob Storage range from $0.01 to $0.048 per GB per month, depending on the region and the total volume of data stored. The comparable range for the “Hot” Blob storage tier, which is for frequently accessed data, is $0.0223 to $0.061 per GB. Under some circumstances, the savings from storing data in the Cool tier can exceed 50 percent.
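
    To see where that “more than 50 percent” figure comes from, here is a quick back-of-the-envelope sketch using the lower-bound list prices quoted above (at-rest storage only; transaction, retrieval, and tier-change charges are ignored, and the workload size is made up):

        def monthly_storage_cost(gb: float, price_per_gb: float) -> float:
            """At-rest storage cost per month; ignores transaction and egress charges."""
            return gb * price_per_gb

        HOT_PER_GB, COOL_PER_GB = 0.0223, 0.01  # lower-bound prices quoted above

        data_gb = 50_000  # e.g. 50TB of rarely accessed backups
        hot = monthly_storage_cost(data_gb, HOT_PER_GB)
        cool = monthly_storage_cost(data_gb, COOL_PER_GB)
        print(f"Hot: ${hot:,.0f}/mo  Cool: ${cool:,.0f}/mo  saving: {1 - cool / hot:.0%}")
        # -> Hot: $1,115/mo  Cool: $500/mo  saving: 55%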

    Here’s an important note: keep an eye on charges and billing, because things may still be changing. In a blog post, Microsoft points out that, to let users try out the new storage tiers and validate functionality post launch, the charge for changing the access tier from cool to hot is waived until June 30th, 2016. Starting July 1st, 2016, the charge will be applied to all transitions from cool to hot.

    Microsoft highlighted that you will be able to choose between Hot and Cool access tiers to store object data based on its access pattern. Some capabilities to keep an eye on:

    • API integration (but only with other existing Blob storage offerings)
    • Security
    • Scalability
    • Multi-region distribution
    • 99% availability (the Hot tier offers 99.9%)

    A Few More Words of Caution

    Nearline, Cool Blob Storage, and Glacier may be powerful and affordable, but end-to-end integration and management can still be a challenge. Management capabilities around backup and storage will be critical.

    AWS Glacier, for example, allows customers to set policies that only allow users to retrieve a certain amount of data per day. Its users can also set a retrieval policy that stays within the free tier. Google’s Nearline seems to be missing this sort of granularity. As for Microsoft, Cool Blob Storage works well as long as your data is stored in Microsoft’s cloud to begin with.

    There’s no clear winner here; it will depend on your specific use case. As you build out your own cold storage architecture, make sure to create an environment based on integration best practices. This means understanding what kind of data you’ll be storing, retention policies, pricing, and, of course, how quickly you’ll need the information during a restore.

    4:29p
    Hosting Graphics-Rich Apps in the Data Center

    Karen Gondoly is CEO of Leostream.

    Hosting workstations in the data center is a topic that deserves a second look. The mobile era is upon us, and with everyone demanding access to resources on the go, how do you mobilize graphically demanding applications in the data center for users who usually have workstations under their desks? While popular wisdom says that hosting graphics-rich applications is hard, recent advancements in workstation and hypervisor technology mean the answer may be easier than you think.

    In today’s atmosphere of data consolidation and security, it’s important to know that you can store your corporate data in your corporate data center, and still provide users with the access and performance they need. What’s the best option for your organization? Here are a few approaches to consider:

    Dedicated Hardware

    In the past, your most viable option for running graphically demanding applications was dedicated hardware. In this scenario, a Windows or Linux client OS is installed directly on the hardware that runs the applications. The downside? This approach, and the hardware to support it, can be expensive. That said, if you have an application that requires the heavy lifting of dedicated hardware, don’t fight it. The key when using dedicated hardware is to maximize its usage by sharing the applications among users and monitoring usage so that you don’t waste resources. Connection broker technology can help in this regard by tracking resource consumption, pooling resources together, and appropriately allocating those resources out to users.

    Pass-Through GPU

    GPU technology has begun to take off, and it provides an entirely new approach for organizations looking to run high-end workloads on virtual desktops. Pass-through GPU is the name of the game, and it simply means that each physical GPU in the workstation is passed through to its own virtual machine. Pass-through GPU has opened up windows of opportunity for those running 3D, CAD, video editing, and similar workloads. How does it work? The virtual machines are hosted on a hypervisor that is installed on the workstation. For example, if your workstation has two GPUs, you can host two virtual machines: two GPUs = two virtual machines, one GPU per VM.

    With pass-through GPU, the operating system on each virtual machine has full and direct access to a dedicated GPU and can use the native graphics driver loaded in the VM. In the described environment, each physical workstation hosts multiple operating systems, which improves the density in your data center without compromising performance.

    Virtualized GPU

    Virtualized GPU takes things a step further. Instead of passing each GPU directly through to a single virtual machine, the hypervisor sits between the VMs and the GPU, and each physical GPU is shared by multiple virtual machines. (Again, the virtual machines are hosted on the hypervisor that is installed on the workstation.) The hypervisor provides additional technology that gives each virtual machine’s operating system direct access to the GPU, giving the performance of pass-through GPU while allowing greater density. Note that the virtual machines do share the GPU’s processing power.
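
    A simple way to see the density difference between the two models is to count how many virtual machines each supports per workstation. The sketch below is purely illustrative; real vGPU profiles cap the number of VMs per GPU differently depending on the card, the profile, and the hypervisor:

        def passthrough_vm_capacity(physical_gpus: int) -> int:
            """Pass-through GPU: each physical GPU is dedicated to exactly one VM."""
            return physical_gpus

        def vgpu_vm_capacity(physical_gpus: int, vms_per_gpu: int) -> int:
            """Virtualized GPU: the hypervisor shares each physical GPU among several VMs."""
            return physical_gpus * vms_per_gpu

        gpus = 2  # a workstation with two physical GPUs
        print("Pass-through:   ", passthrough_vm_capacity(gpus), "VMs")           # 2 VMs
        print("Virtualized GPU:", vgpu_vm_capacity(gpus, vms_per_gpu=4), "VMs")   # 8 VMs, assuming 4 per GPU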

    To date, only Windows operating systems are supported by virtualized GPU; for Linux users, this option is not yet available.

    Connecting the Users

    Dedicated hardware, pass-through GPU, and virtualized GPU all provide a path to securely hosting your data in the data center. Next, your users need a way to connect. There are two components that you will need to add into the mix. The first is a high-performance display protocol that is specifically designed to handle graphics-heavy applications.

    At a minimum, the display protocol connects the user’s client device to their remote desktop, and is responsible for remoting the graphical display to the user’s client device. Ideally, the display protocol goes above and beyond this and is responsible for the complete end-user experience, which includes things like redirecting USB devices from the client to the remote desktop, redirecting audio, and more.

    Second, unless you want your end users to memorize IP addresses or hostnames, you need a connection broker to offer resources out to users and connect them to those resources.

    A connection broker provides the login portal for the users who need access to the hosted desktops and applications. Behind that login portal, the administrator defines the connection broker logic that directs the user to the correct desktop based on who that user is, and where they log in from.
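
    As a rough illustration of the kind of logic a connection broker encodes (a generic, hypothetical Python sketch, not Leostream’s implementation), the broker essentially maps who the user is and where they log in from to a pool of desktops:

        from dataclasses import dataclass

        @dataclass
        class LoginRequest:
            user: str
            group: str     # e.g. "cad-engineers", "video-editors"
            location: str  # e.g. "corporate-lan", "external"

        # Hypothetical assignment rules: (group, login location) -> desktop pool.
        POLICY = {
            ("cad-engineers", "corporate-lan"): "passthrough-gpu-pool",
            ("cad-engineers", "external"): "vgpu-pool",
            ("video-editors", "corporate-lan"): "dedicated-workstation-pool",
        }

        def assign_desktop(request: LoginRequest) -> str:
            """Return the desktop pool this user should be connected to."""
            return POLICY.get((request.group, request.location), "standard-vdi-pool")

        print(assign_desktop(LoginRequest("alice", "cad-engineers", "external")))  # -> vgpu-pool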

    Connection brokers record user information for the lifecycle of the user’s connection, from the moment they log in, to when they lock the desktop, to when they log out, allowing you to track and report on resource consumption. By watching trends in application use, you know which applications are underutilized and which you need to purchase more of, ensuring that expensive applications are utilized to their greatest potential.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:46p
    HPE’s Whitman Says Open to Cloud Deals With Amazon, Google

    (Bloomberg) — Hewlett Packard Enterprise CEO Meg Whitman is open to public cloud partnerships with Amazon and Google after a deal with Microsoft’s service provided a look at how she’ll try to navigate the market with a slimmer company.

    In December, HPE teamed up with Microsoft to sell Microsoft’s Azure cloud services to customers as part of a new agreement. Whitman said the partnership is going well, helping land deals in places such as Germany. HPE said in October it would stop offering public-cloud features amid competition, while still providing other cloud services and products.

    “We may do something over time with Google and Amazon,” Whitman said Tuesday during an interview at her company’s annual event, Discover 2016, in Las Vegas. “They are not enterprise companies for the most part. They may get there. I know that is their ambition.”

    See also: Top Cloud Providers Made $11B on IaaS in 2015, but It’s Only the Beginning

    Whitman is pushing ahead with new ways to approach a fast-changing industry that’s embracing some public cloud services first made popular by Amazon. She’s investing in potential areas of growth while exiting less promising businesses, making her company more nimble and able to react to shifts in customer tastes. Public cloud services let companies easily pay for outside computing power and storage via the internet from data centers run by providers such as Amazon and Microsoft.

    “In order to be nimble and fast you’ve got to be smaller,” she said. “We had to get smaller to go faster.”

    Last month, the company announced it will spin off and merge its enterprise services division with Computer Sciences Corp. in a deal valued at $8.5 billion for HPE shareholders. The agreement is part of Whitman’s drive to reduce the size of the company, which sells corporate computers and software, and free up resources to invest in newer areas, including the Internet of Things. The number of connected devices for businesses and consumers is exploding and HPE has the assets to find success in the market, she said.

    “We’re going to be able to double down in IoT,” she said.

    6:10p
    Zenium Buys Slough Site, Enters London Data Center Market

    Zenium, a data center provider that’s based in London but hasn’t until now had a facility in the London data center market, has acquired a site in Slough, planning to launch a data center there early next year.

    The company is only a few years old and has so far focused on the two markets where it has live data centers: Frankfurt and Istanbul. Now, responding to demand from existing and potential customers, the company is entering the London market.

    While this is its first London data center, the company’s top management aren’t newcomers to the London data center market. Zenium’s founder and CEO Franek Sodzawiczny co-founded Sentrum, a London colocation provider whose three-site greater-London portfolio was acquired by San Francisco-based Digital Realty Trust in 2012 for about £716 million.

    The site, which will be called London One, will provide about 46,000 square feet of data center space and has 15MVA of power capacity available, Zenium said in a statement.

    See also: Why Equinix Data Center Deal is a Huge Win for Digital Realty

    8:56p
    SolarWinds Debuts New Features to Increase Hybrid IT Visibility
    By The VAR Guy

    The latest IT Trends Report from SolarWinds says that while 87 percent of organizations have already migrated some of their infrastructure to the cloud, 60 percent do not transition all services offsite. Hybrid deployments, the company reasoned, are clearly the most popular model for IT infrastructure. This served as the jumping-off point for the latest version of its popular Network Performance Monitor (NPM).

    Version 12 of the network monitoring software includes two new, first-to-market features that give admins visual insight into and analysis of hybrid IT, providing greater visibility into performance across networks, both internal ones and those owned by their service providers and cloud vendor partners.

    “Historically what NPM has done is create visibility into your internal infrastructure: your switches, routers, firewalls, wireless access points – all the infrastructure needed to deliver applications and make sure that users are connected,” said Mav Turner, Director of Product Strategy at SolarWinds.

    “What we’re doing with NPM 12, specifically with the NetPath feature, is moving that visibility from on-prem into the cloud. We’re providing a true hybrid IT management solution.”

    SolarWinds said the addition of NetPath gives NPM 12 users the ability to visually map hybrid network paths alongside on-premises data, meaning users can pinpoint the exact location of a performance issue, no matter where it lives.

    Turner offered Microsoft Exchange as a practical example of the benefits of NetPath. Many SolarWinds customers use Exchange on-premises, so they own and manage the entire infrastructure required to troubleshoot any performance problems. But as more and more customers adopt Office 365, Microsoft’s cloud-based software suite, visibility into network issues drastically decreases.

    “The reality that we have found—and what our customers have struggled with as they depend more on these services—is there’s a lot of uncertainty and lack of information,” Turner says. “It’s very opaque.” And just because Exchange is now hosted on an external cloud network doesn’t mean SolarWinds customers suddenly stop turning to corporate IT if there’s an issue. Administrators still have a responsibility to correct any problems, but they lack the visibility necessary to gain meaningful insights.

    Enter NetPath. The feature lets users see the path their traffic takes across the Internet and through any SaaS company’s infrastructure to reach a service. That means not only can NPM make network management across hybrid deployments more transparent, it also provides visibility into the data center and the infrastructure of cloud providers, giving administrators actionable information to remediate performance issues.
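
    NetPath itself is proprietary, but a rough way to picture hop-by-hop path visibility is a traceroute-style probe. The sketch below simply wraps the system traceroute utility and reports per-hop latency; it is a hypothetical illustration, not SolarWinds code, and it requires traceroute to be installed on the host:

        import re
        import subprocess

        def trace_path(host: str):
            """Return (hop, address, latency_ms) tuples for each responding hop on the path."""
            result = subprocess.run(["traceroute", "-n", host],
                                    capture_output=True, text=True, check=True)
            hops = []
            for line in result.stdout.splitlines()[1:]:  # skip the header line
                match = re.match(r"\s*(\d+)\s+(\S+)\s+([\d.]+)\s+ms", line)
                if match:  # hops that only time out ("* * *") are skipped
                    hops.append((int(match.group(1)), match.group(2), float(match.group(3))))
            return hops

        for hop, address, latency in trace_path("outlook.office365.com"):
            print(f"{hop:>2}  {address:<40} {latency:7.1f} ms")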

    The second new feature, Network Insight, allows admins to monitor their load balancing environments, giving them a clearer picture of the network through visualization of those environments and their component details in a single console. It even provides a graphical display of relationships and component status.

    “So if NetPath is about the breadth and seeing end-to-end what the network path looks like, Network Insight is about depth,” says Turner. “It really goes down into all the gory details of how the network devices function.”

    Over the last several years, SolarWinds has noticed an influx of new devices that have different functions in the network and different ways they monitor network performance. “What we’ve done with Network Insight is create a framework that allows us to support these modern network devices and represent them in a way that’s true to their form and function in the network,” says Turner.

    With NPM 12, SolarWinds is first launching support for application delivery controllers and load balancers, with plans to add support for other device types such as next-generation firewalls and WAN accelerators. The goal, says Turner, is to move beyond basic statistics to truly represent the form and function of these devices, whether they are physical boxes in a data center or virtualized devices running in Azure or AWS.

    The features were a direct result of daily interviews with customers, Turner said, in which SolarWinds engineers had to dig deep to understand the root cause of customer frustrations. “They didn’t come to us and say they wanted NetPath; they just kept coming to us and saying, ‘We’re having trouble with Salesforce and NetSuite.’ They have all these SaaS applications that their business depends on, and they had no visibility into them.”

    NPM 12 comes on the heels of SolarWinds’ announcement last week that it’s acquiring IT service management provider LOGICnow. While Turner expects future integration with LOGICnow services, there are no immediate plans. For now, SolarWinds is excited to be the ones to solve the problem of network visibility in hybrid deployments with these two new features.

    “That’s something people are surprised to see from SolarWinds, but it’s the same way we’ve always functioned, which is to understand our customers really well and provide solutions for them versus trying to make up a solution and educate the market on them,” says Turner. “It’s really about solving problems customers have today and doing it in a very accessible way.”

    This first ran at http://thevarguy.com/cloud-computing-services-and-business-solutions/solarwinds-debuts-new-features-increase-hybrid-it-vi

