Data Center Knowledge | News and analysis for the data center industry

Tuesday, August 15th, 2017

    12:00p
    Can Windows Server Make Hyper-Converged Infrastructure a Boring Commodity?

    When you hear about hyper-converged infrastructure, the conversation usually has to do with a handful of specialized startups or a couple of large hardware vendors that have placed big bets on the concept. But hyper-convergence, at its core, is enabled by software, not hardware design; as such, the market is open to entry by software makers, from tiny startups to the world’s largest producer of software: Microsoft.

    Windows Server 2016, the latest version of the giant’s ubiquitous data center OS, includes all the elements required for hyper-convergence, which means you will probably see hyper-converged systems in data centers that haven’t necessarily bought a Nutanix box, a Dell EMC VxRail appliance, or another one of the options traditionally associated with the term.

    “Microsoft has been absent in that environment (the hyper-converged infrastructure space), but from a technology standpoint their underpinnings are extraordinarily good,” Richard Fichera, a VP at Forrester Research, told Data Center Knowledge.

    But that may soon change. The software maker is actively working with numerous enterprise data center vendors to bring a variety of hyper-converged systems powered by Windows Server 2016 to market.

    One example is Storage Spaces Direct, a software-defined storage feature in Windows Server 2016 that turns standard servers with local storage into enterprise-grade storage infrastructure.

    “Storage Spaces Direct is an extremely good product,” Fichera said. “It’s got tons of the right kind of features.” And it’s been battle-tested on one of the world’s largest hyper-scale clouds.

    “It’s quietly being used as part of the runtime of Azure for quite a while,” Fichera said about Storage Spaces Direct. “It’s rock-solid, production-proven at scale, but Microsoft has not aggressively marketed hyper-converged solutions as a standalone product.”
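
    For readers who want to see the concept in code, here is a minimal Python sketch of the basic idea behind a feature like Storage Spaces Direct: local drives in every node are pooled, and each slab of data is mirrored across drives that sit in different servers, so a single node can fail without data loss. This is an illustration only, not Microsoft’s implementation, and the node and drive names are invented.

        import random

        # Four standard servers, each with two local drives (hypothetical layout).
        cluster = {
            "node1": ["ssd0", "ssd1"],
            "node2": ["ssd0", "ssd1"],
            "node3": ["ssd0", "ssd1"],
            "node4": ["ssd0", "ssd1"],
        }

        def place_slab(slab_id, copies=3):
            """Pick `copies` drives, each in a different node (fault domain)."""
            nodes = random.sample(sorted(cluster), copies)
            return [(node, random.choice(cluster[node])) for node in nodes]

        # Place a handful of slabs and show where their mirror copies land.
        for slab in range(4):
            print(f"slab {slab} -> {place_slab(slab)}")

        # Because no two copies of a slab share a node, any single server can be
        # lost (or rebooted for patching) while every slab stays readable.

    The real feature layers caching, tiering, and parity options on top of this placement idea, but the principle of spreading copies across fault domains is the same.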

    Companies are interested in hyper-converged infrastructure because of the simplicity, flexibility, and low operating costs it promises. Transparency Market Research puts the technology’s global market value at $31 billion by 2025 (up from $1.5 billion in 2016), and it’s the biggest growth area for server vendors. It’s no longer considered good only for a few specific workloads, and its suitability isn’t limited to large enterprise data centers.

    Over the last few years, hyper-converged has gone from an “esoteric niche focused initially on VDI” to something much more mainstream, as vendors have improved the maturity and scalability of their solutions, Fichera said. “Hyperconverged answers some really significant operational problems like the cost of storage management and ease of provisioning.” Half of Forrester’s clients report that they’re running an enterprise app or database on a hyper-converged platform. Yet despite the fact that Windows Server includes the kind of software-defined networking, storage, and compute needed for hyper-converged systems, it hasn’t been viewed as a player in this market.

    It’s There, But It’s Not Easy

    One reason the OS doesn’t play a bigger role in hyper-convergence today is the complexity of building and configuring Windows Server-based systems with the right mix of components: motherboards with TPMs, NVMe drives, RDMA NICs, NVDIMMs, and upcoming storage-class memory.

    The new Windows Server Software-Defined (WSSD) program is intended to simplify those decisions by offering validated hardware combinations for specific scenarios. Each configuration has been through intensive failure simulations and comes with deployment scripts and support to integrate it automatically with your infrastructure and management tools, including System Center.

    “The cost saving and the consolidation of compute, network, and storage into one node is very enticing — especially if you’re looking at refreshing your infrastructure with remote storage and different network switches,” Siddhartha Roy, principal group program manager for high availability and storage in Windows Server, told us. “We are seeing ‘software-defined’ in general creating a lot of pull.”

    But the industry “has got a little ahead of itself,” he said, with hyper-convergence seen as something anybody can do, when in reality it’s not that simple. “We found that it’s critical to have the right guard rails in many areas. The customer should not have to go through the same integration challenges and rediscover the pains that we already figured out in our integration and validation.”

    Pre-Configured and Stress-Tested

    WSSD systems are a collaboration between Microsoft and hardware vendors, turning the reference architecture that the Windows Server team comes up with into a hardware configuration to suit specific workloads. “We cover everything from capacity-optimized systems for archival and backup to multipurpose, to performance-optimised with really low latency,” Roy explained. “What is the right hard drive to SSD ratio? What type of data residency or what type of networking should I have? Do I need RDMA?”

    There are extra tests for the NICs to make sure they support the full range of software-defined networking; then a reference build of the full configuration has to pass a 96-hour run-down test that simulates failures, explained Subodh Bhargava, who runs the WSSD program. “We simulate a private cloud environment and compress a year of data center activity: all the error conditions and edge cases plus daily tasks like doing a backup and live migrating VMs, pulling out power cords, pulling cables, a NIC failing, a hard drive failing.”
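
    To give a rough sense of what such a test harness does, here is a hypothetical Python sketch of a fault-injection loop. The event names, rates, and health check are invented for illustration; the actual validation process belongs to Microsoft and its hardware partners.

        import random

        FAULTS = [
            "pull_power_cord",
            "pull_network_cable",
            "fail_nic",
            "fail_hard_drive",
            "live_migrate_vms",
            "run_backup",
        ]

        def inject(fault):
            print(f"injecting: {fault}")   # stand-in for a real fault injection

        def cluster_healthy():
            return True                    # stand-in for a real health probe

        def run_soak_test(hours=96, events_per_hour=4):
            """Compress many 'days' of data center activity into a fixed window."""
            for event in range(int(hours * events_per_hour)):
                inject(random.choice(FAULTS))
                # A real harness would wait for the fault to propagate, then
                # verify that VMs, storage, and networking all recovered.
                if not cluster_healthy():
                    raise RuntimeError(f"cluster unhealthy after event {event}")

        run_soak_test(hours=1)             # short demo run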

    You use your own Windows Server licences for WSSD, but the solutions also include a deployment service, either from the vendor or with automated scripts. “We want to make sure the customer is not left stranded trying to figure it out themselves,” said Bhargava. “We don’t want to leave them asking, how do I deploy this on my site? How do I rack and cable this? How do I bring a network switch into this environment and configure it?” The scripts “start with bare metal, all the way up to creating the cluster, creating the software-defined storage, and software-defined network, the entire stack, and also bringing in System Center.”

    Not Only for Large Enterprises

    Interest in hyper-converged systems isn’t restricted to large enterprises, says Roy. “We see it reaching all the way from branch to workgroups and departments, to private enterprise ‘cloud’ where the objective is to host traditional VMs. It starts with departmental or workgroup and even regional branch offices. You have some entry-level storage or a SAN, and your SAN is getting to end of life, and probably your compute is too, and there’s opportunity to refresh both and consolidate both.”

    Retail sites are also looking at hyper-converged infrastructure. “For point of sale, in each store they need a little local compute and storage, and today that’s usually a home-grown system that was put together locally that’s not standardized across stores. We see a lot of interest in getting a two-node system with shared-nothing storage that’s self-managed.”

    Multiple Flavors Available

    To cover that range of customers there are standard and premium hyper-converged infrastructure WSSD solutions, as well as storage-only options. “Standard is for a customer who’s looking for smaller footprint,” Bhargava told us. “They’re going from a traditional disaggregated SAN or NAS and Hyper-V hosts or a Hyper-V cluster to something collapsed into one node. That not only reduces their data center footprint, with cooling and power, but it also helps them scale out instead of up.

    “The way to scale up today is that you typically scale up your Hyper-V host on the compute side, and for storage you’re adding more storage controllers or arrays. That is inherently cumbersome; customers aren’t always able to do it, and it is difficult to size that environment. What hyper-converged infrastructure does for you is allow you to scale out in this easy form factor. You can start with three or four nodes and add one node at a time and scale out compute, storage, and network all at the same time. It’s very good for customers who don’t want to go through that hassle of managing the infrastructure.”
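
    As a rough worked example of that scale-out math, the Python sketch below assumes identical nodes and three-way mirroring for resiliency; the per-node figures are invented, and real ratios vary by product and workload.

        # Rough scale-out sizing for a hyper-converged cluster (illustrative only).
        RAW_TB_PER_NODE = 24     # e.g. 6 x 4 TB drives per node (hypothetical)
        CORES_PER_NODE = 32      # hypothetical compute per node
        MIRROR_COPIES = 3        # three-way mirror: roughly 1/3 of raw capacity usable

        for nodes in range(3, 9):
            raw_tb = nodes * RAW_TB_PER_NODE
            usable_tb = raw_tb / MIRROR_COPIES
            cores = nodes * CORES_PER_NODE
            print(f"{nodes} nodes: {raw_tb} TB raw, {usable_tb:.0f} TB usable, {cores} cores")

        # Adding one node grows compute, storage, and network ports in lockstep,
        # which is the "scale out instead of up" point made above.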

    Premium systems add load balancing, software-defined networking, and secure virtualization with shielded VMs. “That allows an enterprise customer to build almost an enterprise cloud using hyper-converged deployments to serve line of businesses and create a secure multitenant infrastructure.”

    The storage-only systems are for customers that just want the cost savings and performance of software-defined storage without having to spec out the hardware that will deliver the right performance and future-proof the system. “As well as going from a high-touch SAN or NAS environment to a very low-touch software-defined environment and having that flexibility, one of the key benefits of software-defined is that you can ride the industry curve. As new technologies come along, like persistent memory, like RDMA NICs, like faster motherboards, you can simply take advantage of them.”

    Vendors Taking WSSD to Market

    Initially, six vendors are offering WSSD systems: DataON, Fujitsu, Hewlett Packard Enterprise, Lenovo, QCT, and SuperMicro, with more joining the program later this year. These aren’t the biggest names in hyper-convergence, Fichera noted, but he expects more vendors to support the program in time.

    “Technically [Windows Server 2016] is extremely capable and creating a structure around which the community can nucleate means we should see some growth and awareness. When Microsoft officially puts its stamp on something, it usually has an impact.”

    3:00p
    Is Your Cloud Provider Protecting You?

    Matthew A. Levy is a patent attorney and former IBM software engineer.

    The cloud has gone from being a buzzword to being a central part of the modern economy. Since businesses in nearly every industry use the cloud, Amazon Web Services (AWS), Microsoft’s Azure, Google Cloud Platform and IBM Cloud Services are each making aggressive market plays to compete for customers.

    This provides more opportunity than ever for developers, start-ups, and existing companies to leverage the power of the cloud. Unfortunately, there is a fly in the cloud’s Chardonnay, if you will – the threat of patent suits.

    There are major benefits to the cloud, but using it successfully can disrupt existing industries, which might trigger a competitor to respond by filing a patent infringement suit. And patent trolls are always lurking, ready to siphon off money from businesses.

    For example, a patent troll called Autumn Cloud has sued several dozen companies using patents from 2002. Autumn Cloud has sued banks, travel sites, video streaming sites and social media platforms, all based on their use of cloud services to provide mobile apps to their customers. Just operating in the cloud successfully attracted a patent troll; and Autumn Cloud is just one of many.

    This means that patent suits are a real risk that companies need to consider when entering the cloud. Patent trolls, also called “patent assertion entities,” have 15- or 20-year-old Internet and computing patents that could read on the cloud. Ongoing businesses may also have portfolios with older patents that potentially cover cloud technology.

    Companies shouldn’t be deterred from leveraging the cloud, but they need to consider the risks as part of their business planning.

    There is a real opportunity here for providers. The cloud can only grow if new businesses adopt it.  And if new businesses get scared off by the threat of patent lawsuits – well, that’s an obvious problem. There are three main cloud providers: Amazon Web Services (AWS), Microsoft, and Google. Only one – Microsoft – is addressing cloud IP risks for its customers.

    Microsoft recently announced a patent protection program called Azure IP Advantage, which includes three main protections for its cloud customers:

    1. Microsoft indemnifies all Azure customers for any intellectual property claim, including patent infringement, based on the Azure platform. This includes open source products and there is no limit to how much Microsoft will spend defending the claim.
    2. If a “consuming customer” is sued for patent infringement, they can select a patent (a “patent pick”) from a set of 10,000 patents to use to defend itself. (A “consuming customer” is a customer who spends a certain minimum on cloud services.)
    3. Every consuming customer gets a “springing license” to Microsoft’s patent portfolio, meaning if any Microsoft patent ends up in the hands of a patent troll, the customer is automatically licensed.

    It’s also interesting that Microsoft is trying to discourage patent litigation among Azure customers. The patent pick isn’t available for companies that have sued Microsoft or Azure customers within the past two years, turning Azure into a safe community all around.

    While Google doesn’t provide as robust a solution to IP risk, it does indemnify its cloud customers against patent infringement, and there’s no limit to the amount of indemnification. However, the company excludes any open source software included in Google Cloud Platform. Those exclusions are a big potential problem for customers that use things like Hadoop or other Apache products.

    Until just last month, AWS offered customers no protections whatsoever against patent infringement suits, and it even required customers to essentially give up their own IP rights in exchange for Amazon’s services.  AWS recently added some basic indemnification to its customer agreement and removed the provision taking customers’ IP rights, probably in response to customer pressure in light of Google and Microsoft’s competitive offerings. It’s good to see that Amazon is beginning to take the IP threat seriously, but it still has a way to go. Like Google, it doesn’t offer any protections for use of open source or other third-party software in connection with AWS.

    Looking at the current options for managing IP risk, there’s a clear divide among the leading providers; this means that virtually every company innovating on the cloud is making a decision affecting their IP risk and strategy – whether intentionally or not – when they choose their cloud provider. Given the risks, it’s worth considering the IP benefits provided by the major cloud platforms when choosing a provider.

    The good news is that if Microsoft’s Azure IP Advantage succeeds in attracting customers, we can expect to see more competition to better shield customers from patent infringement lawsuits. That would be a good thing all around.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:11p
    World’s Largest IPv6 Network Reaches a Milestone

    Everybody was probably sporting smiles on Monday at Hurricane Electric’s headquarters in Fremont, California. Why? The tier 2 network operator, which runs a vital part of the internet’s backbone, announced it’s become the first network in the world to connect to over 4,000 IPv6 networks. If they handed out gold medals for such things, this would be the company’s second trophy for the same category. In 2010 it became the first to connect to 1,000 IPv6 networks.

    That likely doesn’t mean much to the average Joe or Jane on the street, and it probably won’t attract more than passing interest from most IT workers, but it’s an important milestone for anyone who earns a living by harnessing the infrastructure of the internet, like service providers and data center operators. In case you don’t know, Hurricane Electric is both.

    IPv6 — for Internet Protocol version 6 — is the new addressing system for the internet that will eventually replace IPv4, which has been used since the ancient days of ARPANET. IPv4 works just fine, but as a 32-bit system it is limited to about 4.29 billion addresses. That seemed like enough to last forever when it was first pressed into service in 1983, but that was before the days of the public internet, which began to rapidly eat through available IP addresses by the late 1990s. The limitation became even more alarming around 2007 with the introduction of the iPhone, which heralded the coming of the Internet of Things, with billions of connected devices all needing their own IP address.

    The pool of available IPv4 addresses was officially exhausted on February 3, 2011 with the allocation of the last five blocks of numbers, although individual IP addresses have remained available due to the way they’re distributed.

    Luckily, the problem was noted early, and development of IPv6 began in the late 1990s, long before the numbers started to run too low.

    Hurricane Electric president Mike Leber said:

    “We’ve been aggressively growing our next generation IPv6 network, which has created tremendous growth in the number of networks we connect to. This increase will provide more direct paths to more destinations, lower latency, and improve throughput for the networks and internet companies we serve, resulting in a better internet experience for their customers. We started building an IPv6 backbone long ago to be better prepared to best serve our customers so they can continue their growth unimpeded as the internet gradually transitions to this new protocol.”

    Being a 128-bit system, IPv6 provides 2^128 individual addresses, a number too long to be meaningful if printed out; spoken aloud, it comes to 340 undecillion, which has nine more commas than a billion. That’s enough to supply our IP address needs for a long, long time. It’s also enough to set aside large blocks of numbers to be used for other purposes. And because the addresses themselves are longer, they can contain more information, allowing for more efficient routing and packet processing, directed data flows, simplified network configuration, and more.
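
    The arithmetic is easy to check with a few lines of Python:

        # Address-space arithmetic behind the IPv4-to-IPv6 transition.
        ipv4_addresses = 2 ** 32
        ipv6_addresses = 2 ** 128

        print(f"IPv4: {ipv4_addresses:,}")    # 4,294,967,296 (about 4.29 billion)
        print(f"IPv6: {ipv6_addresses:,}")    # about 3.4 x 10**38 (340 undecillion)
        print(f"IPv6 addresses per IPv4 address: {ipv6_addresses // ipv4_addresses:,}")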

    Hurricane Electric network map

    The two systems are not backwards compatible, however: IPv4-only hosts cannot directly communicate with IPv6-only hosts, which has slowed the migration to version six, even though it was officially “launched” in 2012 (and had actually been in use for a number of years before then). That’s also why IPv6 will eventually replace IPv4 completely; having two incompatible standards is less than ideal.
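
    The split is visible even at the application level. The short Python example below resolves a hostname’s separate A (IPv4) and AAAA (IPv6) records; example.com is used purely as a convenient dual-stacked test name, and the results depend on your own network’s IPv6 support.

        # An IPv4-only stack has no route to an IPv6-only destination; the same
        # hostname resolves to distinct A (IPv4) and AAAA (IPv6) records.
        import socket

        def addresses(host, family):
            try:
                infos = socket.getaddrinfo(host, 80, family, socket.SOCK_STREAM)
                return sorted({info[4][0] for info in infos})
            except socket.gaierror:
                return []   # no records of this family (or no resolver support)

        print("IPv4 (A)   :", addresses("example.com", socket.AF_INET))
        print("IPv6 (AAAA):", addresses("example.com", socket.AF_INET6))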

    For companies like Hurricane that take part in the nuts-and-bolts running of the internet, that day can’t come soon enough, especially with version six’s added efficiency, which will allow existing bandwidth capabilities to go further. The tipping point — the day when IPv4 can be sunsetted — is definitely on the horizon. According to Google, the percentage of users that access its website over IPv6 passed 20 percent for the first time on July 22nd, 2017 — a figure that stood at 10 percent in January of 2016.

    IPv4 will still be with us for a while, however. A little over a year ago, in March of 2016, Hurricane reached the milestone of being connected to 5,000 IPv4 networks, establishing itself as the most connected IPv4 network as well. Currently, the company is connected to more than 6,400 IPv4 networks — a figure it says will continue to grow.

    5:48p
    Microsoft Plans Two Cloud Data Centers for Australian Government

    Brought to you by IT Pro

    This week, Tom Keane, head of global infrastructure for Microsoft Azure, announced that the company will be adding two new Azure regions in the land down under.

    These two new data centers are expected to become available in the first half of 2018 and will be certified to handle Unclassified and Protected government data in Australia. This partnership between Microsoft and Canberra Data Centres (CDC) will allow Microsoft to be the only major cloud provider to deliver this level of service to the Australian government.

    These two new data centers in Canberra will join the already up and running Azure regions located in New South Wales (Australia East) and Victoria (Australia Southeast).

    This brings Microsoft’s total of planned and announced Azure regions to six. The other planned regions are located in France (2) and Africa (2).

    “This announcement builds on recent news that dozens of Microsoft Azure services have received certification by Australian Signals Directorate, including services for machine learning, internet-of-things, cybersecurity, and data management. Along with Australian certifications for Office 365 and Dynamics 365, Microsoft is recognized as the most complete and trusted cloud platform in Australia. By comparison, other major cloud providers are only certified for basic infrastructure services or remain uncertified for use by the government.”

    We know Microsoft Azure is already in the process of complying with the upcoming General Data Protection Regulation (GDPR), which goes into effect next year across the European Union, so seeing Microsoft take similar steps in other regions makes sense and gives regional governments options for moving to the cloud while remaining in compliance with data laws and regulations.

    More information about these new Azure regions is available on the Microsoft Australia News Center.

    8:00p
    Ambitious One-Gigawatt Data Center Planned in Norway

    Kolos, which is described as a US-Norwegian company, is planning one of the most ambitious data center projects ever.

    The company has secured a property in a small Norwegian town, inside the Arctic Circle, for what it envisions will be a 1,000MW data center powered entirely by renewable energy. It will be one of the highest-capacity data centers in the world if it reaches even a fraction of that.

    Hyper-scale data centers, server farms built by internet giants like Facebook, Google, and Microsoft that are so large they’ve become a class of their own, usually reach several hundred megawatts. Las Vegas-based Switch says its data center campus outside of Reno, Nevada, will eventually reach 650MW, which the company claims will make it the world’s largest data center.

    If all goes as planned, the project, together with a data center inside an enormous abandoned mine by a company called Lefdal, will make Norway home to some of the world’s most ambitious and unusual data centers.

    Rendering of the planned Kolos data center campus (Image: Kolos)

    The expectation on the municipality’s part is that the project will bring new direct and indirect job opportunities to the area. Ballangen consists of a handful of villages, whose total population was about 2,600 as of 2012. A project of this scale has the potential to entirely transform such a community; Kolos is promising to eventually create up to 3,000 direct jobs.

    Similar to many other small municipalities in the Nordics, Ballangen recently lost a major industrial presence many of its residents depended on for employment. It used to be one of the country’s most important mining communities, but the last mine was shuttered in 2003, according to Norway Today. Its population is also aging, with most residents between 55 and 67 years old.

    Kolos says it’s raised “several million dollars” for the project from private investors in Norway and that it is working with an American investment bank to secure the rest of the sum it needs, BBC reported. The plan is to start with a 70MW data center.

    Kolos signed the land-purchase contract at a meeting with Ballangen residents and officials on March 30.

    Kolos execs meet with Ballangen residents and officials. (Photo: Kolos)

    At the meeting, company CEO Håvard Lillebo emphasized the region’s low-cost clean energy and access to network infrastructure:

    “In Northern Norway, we actually have Europe’s cheapest power, which is also 100% renewable. In addition, Ofoten and Ballangen have extremely good access to dark fiber, which is a prerequisite for running data centers.”

    Other factors that make the region attractive are its cold climate, which makes data center cooling cheaper, and proximity to the University of Narvik, which can supply an educated workforce for the future facility.

    Ole Petter Fjellstad, the official who signed the contract with Kolos on behalf of the municipality, said he and his colleagues were confident that the agreement was in the community’s best interests after having three lawyers review it.

    The contract also has some baked-in safeguards to protect Ballangen’s interests in case something happens to derail the project:

    “Kolos also presented several scenarios in which the city of Ballangen would secure the rights to the property in case the data center could not be established. Included was explicit language that stated Kolos would not sell, list, or use the property for other reasons than what was intended, and that Kolos would return the property, free of charge, in case of bankruptcy or if the data center ground is not broken within four years.”

    10:30p
    Microsoft Acquires Cycle Computing for HPC in Azure

    Brought to you by IT Pro

    Microsoft announced the acquisition of Cycle Computing to support customers using High-Performance Computing (HPC) in the cloud. Terms of the deal were not disclosed.

    Cycle Computing’s CycleCloud software suite provides cloud orchestration, provisioning, and data management for big compute and large technical computing applications. Its software is used by customers in life sciences, manufacturing, financial services, engineering and research.

    “Now, we see amazing opportunities in joining forces with Microsoft. Its global cloud footprint and unique hybrid offering is built with enterprises in mind, and its Big Compute/HPC team has already delivered pivotal technologies such as InfiniBand and next generation GPUs,” Cycle Computing CEO Jason Stowe said. “The Cycle team can’t wait to combine CycleCloud’s technology for managing Linux and Windows compute & data workloads, with Microsoft Azure’s Big Compute infrastructure roadmap and global market reach.”

    In a blog post, Jason Zander, corporate vice president for Azure, said: “Azure has a massive global footprint, more than any other major cloud provider. It also has powerful infrastructure, InfiniBand support for fast networking and state-of-the-art GPU capabilities. Combining the most specialized Big Compute infrastructure available in the public cloud with Cycle Computing’s technology and years of experience with the world’s largest supercomputers, we open up many new possibilities. Most importantly, Cycle Computing will help customers accelerate their movement to the cloud, and make it easy to take advantage of the most performant and compliant infrastructure available in the public cloud today.”

    “We’ve already seen explosive growth on Azure in the areas of artificial intelligence, the Internet of Things and deep learning,” Zander said. “As customers continue to look for faster, more efficient ways to run their workloads, Cycle Computing’s depth and expertise around massively scalable applications make them a great fit to join our Microsoft team. Their technology will further enhance our support of Linux HPC workloads and make it easier to extend on-premise workloads to the cloud.”

    As part of the acquisition, the Cycle Computing team will join Microsoft.

