Data Center Knowledge | News and analysis for the data center industry

Friday, July 10th, 2015

    12:00p
    What Chicago’s New ‘Cloud Tax’ May Mean for Service Providers

    Chicago recently instituted a “cloud tax” in order to capture some of the missed revenue from the shift away from brick-and-mortar stores and toward a digital, subscription-based economy.

    The recent ruling from Chicago’s Department of Finance modifies existing tax laws around amusement, extending the tax to the digital world. It means the 9-percent amusement tax now applies to services like Netflix streaming. While its creation seems tailored specifically to subscription entertainment services like Netflix, it may also apply to cloud-service providers, from infrastructure to cloud apps.

    451 Research senior analyst Liam Eagle said the language specifically mentions Infrastructure-as-a-Service as being subject to tax. This would mean cloud providers like Amazon Web Services would have to find a way to collect tax on services purchased by users in Chicago. Just how this cloud tax will translate to the IaaS world is unknown.

    “It raises the question of whether the law applies across other forms of infrastructure hosting services and which ones,” said Eagle. “I find it unlikely that the Chicago Department of Finance has an extremely nuanced view of the various types of internet infrastructure services. There are many examples of lawmakers trying to create legislation around, or apply existing legislation to technologies they don’t fully understand. But it’s probably safe to assume they intend it to apply across all types.”

    If a Chicago startup offers a streaming music service hosted on AWS, there’s potential to be taxed twice — as a user and as a provider.
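
    As a rough illustration of how the 9-percent levy could compound in that scenario, here is a minimal Python sketch; the dollar amounts, and the assumption that the tax hits both the infrastructure bill and the subscription price, are hypothetical:

        CLOUD_TAX = 0.09  # Chicago's 9-percent amusement/lease tax rate

        # Hypothetical monthly figures for a Chicago streaming startup on AWS.
        infra_cost = 1000.00        # what the startup pays its IaaS provider
        subscription_price = 10.00  # what a subscriber pays the startup

        # Taxed once as a consumer of cloud infrastructure...
        infra_with_tax = infra_cost * (1 + CLOUD_TAX)
        # ...and again as a provider of a taxable streaming service.
        subscription_with_tax = subscription_price * (1 + CLOUD_TAX)

        print(f"Infrastructure bill: ${infra_with_tax:.2f}")          # $1090.00
        print(f"Subscriber pays:     ${subscription_with_tax:.2f}")   # $10.90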

    Cloud providers in Chicago are keeping an eye on things. A Cogeco VP said that while he can see where the city is coming from, there’s a lot to be desired in terms of execution.

    “I certainly can understand where Chicago and other municipalities are struggling with the decline in sales tax, as online disruptors attract consumers away from brick-and-mortar businesses, but to tax the consumer of cloud services is a reactive measure and puts the consumer in the place to make up for the lack of planning from the city,” said Toby Owen, VP of product management for Cogeco and Peer 1 Hosting.

    Owen compared the cloud tax to the reaction of licensed cab drivers when Uber launches in a city. “They are used to the old way, failed to plan for a disruptive technology and changing consumer demand, and just (literally) park their taxis in the street in protest,” he said, citing a recent incident in which thousands of cabbies blocked traffic for hours in central London.

    “Protesting and trying to punish the consumer with a tax is no different than the taxi driver’s tantrum of parking in the street and causing huge congestion,” said Owen. “Just because you missed the boat doesn’t mean you should punish those who didn’t.”

    The shift to cloud, from capital expenditure to operational expenditure, mirrors a wholesale shift in how commerce operates. Instead of driving to a Blockbuster and renting a movie, you’re now more likely to pay a monthly fee for a streaming video library. The way we do things has changed, but the way taxing occurs hasn’t. Chicago is attempting to recoup losses felt from the shift, but that might inadvertently damage businesses, including data center service providers.

    “It is going to make certain data center locations more desirable than others when it comes to e-commerce,” said Philbert Shih, managing director at Structure Research.

    The tax issue will increasingly come onto the radar as governments look to recover dwindling tax revenue from brick-and-mortar businesses, he said. Shih also noted a particular implementation problem in defining what qualifies.

    “The impact on infrastructure service providers starts with how regional and local governments are going to pinpoint residency,” he said. “Is the tax applicable when a server is hosted in a given jurisdiction or is it where the customer logs in from? Or is the residency of the consumer the determining factor? Providers hosting retail sites are going to have to be knowledgeable about these issues when helping online retailers set up their web presence.”
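
    Which of those residency tests governs is exactly the open question; the hypothetical Python sketch below does nothing more than enumerate the three candidate rules Shih describes:

        # Three candidate rules for whether the tax applies; which one (if any)
        # actually governs is the unresolved policy question.
        def candidate_rules(server_location, login_location, customer_residence,
                            jurisdiction="Chicago"):
            return {
                "server hosted in jurisdiction": server_location == jurisdiction,
                "customer logs in from jurisdiction": login_location == jurisdiction,
                "customer resides in jurisdiction": customer_residence == jurisdiction,
            }

        print(candidate_rules("Dallas", "Chicago", "Chicago"))
        # Two of the three tests say the tax applies; a provider can't know
        # which answer is right until the rules are clarified.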

    Owen suggests a more proactive approach: incentivizing cloud providers to locate their businesses in Chicago. Revenue isn’t evaporating; it is increasingly flowing to technology solution providers rather than to sellers of physical items.

    “That means that the tax revenues collected by these service providers are going somewhere,” he said. “The smart cities who saw this trend and have been successful in wooing large tech and cloud providers to set up shop in their cities are reaping the benefits.”

    “Don’t tax the consumer; support them through partnering with cloud providers to choose your city as an operations base,” said Owen.

    Netflix publicly confirmed it will build the cloud tax into subscription costs for users in Chicago.

    4:22p
    Friday Funny: A Data Center Birthday

    It’s my birthday this month!!! And Kip was so kind to bring me a cake!

    Here’s how it works: Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon, and we challenge our readers to submit the funniest, most clever caption they think will be a fit. Then we ask our readers to vote for the best submission and the winner receives a signed print of the cartoon.

    Congratulations to Dustin, whose caption for the “Green Data Center” edition of Kip and Gary won the last contest with: “Well, at least we can smoke our own meat for Free!”

    Several great submissions came in for the “Hot Aisle” edition – now all we need is a winner. Help us out by submitting your vote below!

    Take Our Poll

    For previous cartoons on DCK, see our Humor Channel. And for more of Diane’s work, visit Kip and Gary’s website!

    5:31p
    HP Automates Storage Management for Unstructured Data

    HP this week unveiled a set of storage management applications that aim to converge storage and information management.

    The applications make it possible to analyze unstructured data that resides in files and determine where that data can be most efficiently stored.

    Joe Garber, vice president of information governance at HP, said HP Storage Optimizer will now make use of file analysis software HP developed to let IT organizations reduce the size of their storage environments, by eliminating files or moving them into an archive, and to optimize storage performance, by placing files on the storage system that most efficiently meets performance goals and compliance mandates.

    “We’re automating the sorting of the content and then creating a data map,” said Garber. “We’re making it possible to get the right information to the right cloud.”

    When it comes to storage management for unstructured data, Garber said, most IT organizations have hygiene issues stemming from the fact that there is no easy way to determine what data is stored in any given silo, let alone where each file would be optimally placed.
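
    HP hasn’t published the exact rules Storage Optimizer applies, but the general file-analysis pattern it describes (scan files, extract metadata, build a data map, suggest a placement) can be sketched roughly like this in Python; the root path and age thresholds are invented for illustration:

        import os
        import time

        ARCHIVE_AFTER_DAYS = 365       # hypothetical policy thresholds
        ELIMINATE_AFTER_DAYS = 365 * 7

        def classify(path):
            """Suggest a placement tier based on a file's last access time."""
            age_days = (time.time() - os.path.getatime(path)) / 86400
            if age_days > ELIMINATE_AFTER_DAYS:
                return "eliminate"
            if age_days > ARCHIVE_AFTER_DAYS:
                return "archive"
            return "primary"           # keep on the fast tier

        data_map = {}
        for root, _dirs, files in os.walk("/data"):
            for name in files:
                path = os.path.join(root, name)
                data_map.setdefault(classify(path), []).append(path)

        for tier, paths in data_map.items():
            print(tier, len(paths), "files")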

    To facilitate that process, HP ControlPoint, the software used to migrate that unstructured data, has been updated to provide tighter integration with HP Storage Optimizer and to support the HP Haven big data platform and HP Helion cloud environments based on the OpenStack cloud management framework.

    Many of the management challenges IT organizations face today stem from the fact that most of the ways data is managed are fundamentally inefficient.

    Organizations make multiple duplicate copies of data sets that wind up strewn all across the enterprise. The end result is a lot of investment in IT infrastructure to support all those data sets. If IT organizations can reduce the number of data sets they need to support, there can be a corresponding drop in the amount of IT infrastructure they need to acquire.
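
    Finding those duplicate copies usually comes down to hashing file contents; a minimal, hypothetical Python sketch:

        import hashlib
        import os

        def file_digest(path, chunk_size=1 << 20):
            """Hash a file's contents in chunks so large files don't exhaust memory."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        seen = {}  # digest -> first path seen with that content
        for root, _dirs, files in os.walk("/data"):
            for name in files:
                path = os.path.join(root, name)
                digest = file_digest(path)
                if digest in seen:
                    print(f"duplicate: {path} == {seen[digest]}")
                else:
                    seen[digest] = path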

    Additionally, if IT organizations actually have insight into what files are being used, it becomes possible to make more aggressive use of cloud services that store that data more cost effectively online or simply move those files offline altogether.

    As enterprise IT continues to evolve it is becoming clear that storage management and information management are converging in ways that should finally give IT organizations some long overdue control over where and how data actually gets stored.

    6:18p
    Kickstarter Project Aims to Make Desktop-Sized Docker Container


    This article originally appeared at The WHIR

    As software for containerized applications, Docker is often compared to a shipping container, allowing an application to be packaged and shipped. A new Kickstarter campaign is taking the analogy further by creating a physical, model-train-scale container housing a Docker server.

    The DC2 Desktop Container Computer is the first product from Dick Hardt’s new company Hardtware. It provides a desktop container server preconfigured with Docker, housed in a container one-thirtieth the size of a full-size, 20-foot shipping container. It’s 3 ¼ inches (83mm) wide, 3 ½ inches (87mm) high, and 8 inches (203mm) long, and weighs 2.55 lbs (1.16 kg). You can customize the DC2 by selecting from a variety of common shipping container colors and logos or adding your own logo.

    The DC2 is fanless and is built around a MinnowBoard Max with a dual-core 1.33GHz processor, a 60GB solid-state drive with 400MB/s transfer speeds, and Gigabit Ethernet. With its open hardware design, various expansion boards can be used to expand or repurpose the DC2.

    Having an external Docker container, however, isn’t just a novelty. A separate Docker container computer means Docker workloads don’t leech local machine resources, and it makes it easier to set up Docker and keep track of configurations like port mapping.
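
    In practice that separation just means pointing a Docker client at the DC2 over the network. A minimal sketch using the Docker SDK for Python; the hostname is hypothetical, and it assumes the DC2's daemon is configured to listen on the default unencrypted TCP port:

        import docker

        # Point the client at the DC2 instead of a local daemon.
        client = docker.DockerClient(base_url="tcp://dc2.local:2375")

        # Port mappings live in one place, on the container host.
        container = client.containers.run(
            "nginx",
            detach=True,
            ports={"80/tcp": 8080},  # host port 8080 -> container port 80
        )
        print(container.short_id, container.status)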

    DC2 can also act as a local testing environment. It runs Docker on a 64-bit Linux OS, meaning Docker containers built on it can later be deployed to AWS, Azure, Digital Ocean, Google, Rackspace, or any other 64-bit Linux Docker host.

    Kickstarter participants can pledge $399 for the full DC2 or $1,595 for a set of four DC2s with a shared power supply and a 5-port Gigabit switch. Those with an existing MinnowBoard Max can get an empty DC2 container by pledging $99.

    DC2 can also be considered a cheaper alternative to Amazon. At an upfront price of $399, the DC2 is comparable to two m1.small instances from Amazon, which cost $412 per year.
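
    The break-even arithmetic behind that comparison, using the article's own figures:

        dc2_upfront = 399.00   # one-time price of a DC2
        aws_per_year = 412.00  # the article's figure for two m1.small instances

        breakeven_years = dc2_upfront / aws_per_year
        print(f"DC2 pays for itself in about {breakeven_years:.2f} years")
        # -> about 0.97 years, i.e. just under a year of comparable AWS spend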

    The DC2 project is also seeking feedback, suggestions and pull requests via its GitHub community page.

    If there is interest, Hardtware is also considering add-on LED displays that could show disk activity with a blue LED and the load on each core with green-yellow-red LEDs, and even a 96×64 OLED that could show more information.

    The DC2 Kickstarter campaign launched Monday and has already raised $17,258 of its $30,000 goal with 26 days to go.

    This first ran at http://www.thewhir.com/web-hosting-news/kickstarter-project-aims-to-make-desktop-sized-docker-container

    6:25p
    Time to Rethink Data Center Design Conventions?

    Most data center operators are stuck between a rock and a hard place. They need to reduce costs in a way that doesn’t compromise application availability.

    John Sheputis, president of Infomart Data Centers, thinks they can have an easier time doing this balancing act if they embrace less-conventional approaches to data center design and data center management. Sheputis is speaking on the subject at the Data Center World conference in National Harbor, Maryland, this September.

    Of course, IT professionals in general and data center operators in particular are notoriously conservative when it comes to anything that might put application uptime at risk. After all, they generally get blamed for anything that goes wrong, so it’s often in their perceived best interest to play it safe.

    The trouble is that there’s always a competitor somewhere willing to go that extra innovative mile, generating enough savings to gain a competitive edge simply because they were willing and able to do something different.

    One example of outdated data center design is raised floor. Sheputis thinks the days of having to build raised floors inside the data center are long over.

    Not only is the IT equipment inside the data center getting too heavy for raised floors, but cold air also does not rise. That means a lot of time, money, and effort is being wasted cooling the space below a raised floor for no apparent economic benefit, he said.

    Similarly, the time has come to rethink the connectors used to distribute power across the data center and maintenance cycles that often assume equipment needs to be replaced prematurely. “They can also run electricity at a higher wattage to increase power consumption efficiency,” he added.

    “Data center operators need to make more use of predictive analytics. Decisions need to be made on hard facts,” he said.
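
    What that might look like in practice is left open; one minimal, entirely hypothetical sketch is flagging anomalous rack telemetry before it becomes a failure (all readings invented):

        from statistics import mean, stdev

        # Hypothetical hourly inlet-temperature readings for one rack (deg C).
        readings = [22.1, 22.3, 22.0, 22.4, 22.2, 22.1, 22.3, 26.8]

        baseline = readings[:-1]
        mu, sigma = mean(baseline), stdev(baseline)
        latest = readings[-1]

        # Flag any reading more than three standard deviations above baseline.
        if latest > mu + 3 * sigma:
            print(f"Anomaly: {latest} C vs baseline {mu:.1f} +/- {sigma:.1f} C")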

    Because the data center is now the economic engine of the digital enterprise, data centers need to be tuned like any other engine, Sheputis said. That means finding innovative approaches to reducing costs without compromising the integrity of the application environment.

    The challenge, according to him, is that data center operators have fallen into something of a rut. Rather than explore new alternatives, there is a tendency to do things the way they have always been done.

    That approach, however, never changes the fundamental economics of building and managing a data center.

    For more information, sign up for Data Center World National Harbor, which will convene in National Harbor, Maryland, on September 20-23, 2015, and attend John’s session, titled “Challenging Industry Conventions to Simplify Design.”

    7:00p
    IBM SoftLayer Cloud Adds Supercomputing Power with NVIDIA Tesla K80 Accelerators


    This article originally appeared at The WHIR

    IBM began offering NVIDIA Tesla K80 dual-GPU accelerators on SoftLayer cloud servers this week, bringing supercomputing capabilities to enterprises, startups and research facilities.

    According to NVIDIA, GPU acceleration helps applications run significantly faster because compute-intensive portions of the application are offloaded to the GPU, while the remainder of the code runs on the CPU. SoftLayer said customers using Tesla GPU accelerators are seeing 10 times higher performance than with today’s CPUs.
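
    The offload pattern NVIDIA describes, moving the compute-heavy piece to the GPU while the rest of the program stays on the CPU, looks roughly like this in Python with PyCUDA (a sketch assuming a CUDA-capable host with PyCUDA and NumPy installed):

        import numpy as np
        import pycuda.autoinit           # sets up a CUDA context on the default GPU
        import pycuda.gpuarray as gpuarray

        # CPU side: prepare the data as usual.
        a = np.random.randn(1 << 20).astype(np.float32)
        b = np.random.randn(1 << 20).astype(np.float32)

        # Offload: copy to GPU memory, compute there, copy the result back.
        a_gpu = gpuarray.to_gpu(a)
        b_gpu = gpuarray.to_gpu(b)
        c = (a_gpu * b_gpu).get()        # elementwise multiply runs on the GPU

        # The remainder of the code continues on the CPU.
        print(c[:4])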

    According to NVIDIA, there are six cloud providers that offer GPUs in the cloud: SoftLayer, RapidSwitch, Penguin Computing, Peer 1 Hosting, Nimbix and Amazon Web Services.

    The Tesla K80s will initially be available in SoftLayer’s Dallas, Texas data center but will eventually be rolled out to all of its data centers, according to a report by The Platform.

    “While most cloud providers talk about having infrastructure on demand, for add-ons like GPU or even for anything exotic such as needing to fire up more than a few hundred nodes at the same time, you actually have to call first. SoftLayer has some nodes with Tesla K80s installed, and will roll them out worldwide, but it is really wanting to engage with customers to figure out the demand and build this to order,” The Platform reports.

    In addition to the Tesla K80 GPU accelerator, IBM Cloud offers other NVIDIA GPUs, including the NVIDIA Grid K2 and Tesla K10.

    “The Tesla Accelerated Computing Platform is used by researchers and data scientists around the world to drive innovation and scientific discovery,” Ian Buck, vice president of Accelerated Computing at NVIDIA, said in a statement. “With the addition of Tesla K80 GPUs, SoftLayer’s unique cloud offering for HPC will dramatically expand access to supercomputing-class performance, accelerating the pace of important new advances.”

    SoftLayer already has dozens of customers using NVIDIA Tesla K80 GPU accelerators to support workloads. New York University recently used the accelerators to support a deep-learning course, while MapD uses them for data analytics.

    “While we offer on-premises appliance solutions, having a cloud GPU offering on SoftLayer is a perfect way to deploy to customers who would rather use MapD on a subscription basis or ‘try before they buy,’” MapD founder and CEO Todd Mostak said in a statement. “The fact that IBM Cloud offers SoftLayer bare metal GPU servers dovetails perfectly with our focus on overwhelming performance, as virtualization can increase query latencies.”

    This first ran at http://www.thewhir.com/web-hosting-news/ibm-softlayer-cloud-adds-supercomputing-power-with-nvidia-tesla-k80-accelerators

