Data Center Knowledge | News and analysis for the data center industry

Tuesday, August 23rd, 2016

    12:00p
    QTS Launches Public OpenStack Cloud

    QTS Realty Trust has added an OpenStack cloud to its arsenal of services, launching a public Infrastructure-as-a-Service cloud in one of its data centers on the East Coast powered by Canonical’s OpenStack distro. QTS and Canonical will also build private OpenStack clouds in any of QTS’s data centers for customers that need them.

    Until recently, QTS provided public and private IaaS clouds based only on VMware. With the addition of OpenStack capabilities, the company is going after customers building cloud-native applications, whose IT teams tend to be more DevOps-oriented, Jon Greaves, CTO of QTS, told Data Center Knowledge in an interview.

    These customers gravitate toward public cloud but often need a private-cloud component in their environment as well. Generally, they prefer that private cloud to be based on OpenStack rather than VMware, since many of the DevOps tools they use to enable continuous integration and delivery support OpenStack.
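
    As an illustration of what that tooling automates under the hood, here is a minimal sketch of booting an instance against an OpenStack API using the Python openstacksdk. This is a generic example, not QTS’s actual environment; the cloud profile, image, flavor, and network names are all hypothetical.

```python
# Minimal sketch: booting an instance against an OpenStack cloud with the
# Python openstacksdk -- the kind of step CI/CD tooling automates.
# The "example-cloud" profile and all resource names are hypothetical.
import openstack

# Credentials are read from a clouds.yaml profile.
conn = openstack.connect(cloud="example-cloud")

image = conn.compute.find_image("ubuntu-16.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private-net")

server = conn.compute.create_server(
    name="ci-worker-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
# Block until the instance reaches ACTIVE state.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```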

    Customers choosing QTS’s VMware-enabled cloud services are usually more traditional enterprise IT shops looking to consolidate their on-prem environments and leverage their existing investments in VMware licensing, Greaves explained.

    Two Types of Data Center REITs

    There are essentially two types of data center REITs: ones that provide services beyond the basic data center space, power, cooling, and connectivity, and ones that don’t. QTS falls in the former category.

    The ones that don’t provide those services themselves make sure their facilities house plenty of other companies that can provide them to their customers; these REITs act as middlemen, operating sophisticated platforms to enable this exchange of services.

    Basically, the more relationships a customer has inside your data center with companies other than yours, the harder it is for them to leave, so the two strategies for enabling these relationships achieve similar goals. One is obviously more costly than the other, but it also increases the amount of revenue a provider can squeeze out of every kilowatt of data center capacity.

    QTS is one of only two of the six publicly traded US data center REITs that provide those higher-level services themselves. The other one is CyrusOne, but its portfolio of higher-level services is much smaller than QTS’s. Another three REITs (Digital Realty Trust, Equinix, and CoreSite Realty) rely exclusively on partnerships with service providers, while the remaining one, DuPont Fabros Technology, has chosen to focus strictly on its bread-and-butter wholesale data center capacity product.

    See also: Why QTS Expects to Win Where DuPont Fabros Failed

    Provider and Middleman

    The two models aren’t mutually exclusive, at least in the sense that the companies that provide services up the stack also act as intermediaries between their customers and other service providers. That’s the approach QTS has taken, adding managed services to the mix as well, a capability it expanded greatly last year by acquiring Carpathia Hosting, a managed hosting heavyweight where Greaves served as CTO prior to the acquisition.

    Read more: Why QTS Dished Out $326M on Carpathia Hosting

    The latest project, for example, is to roll out managed services for Amazon Web Services, expected in the fourth quarter, Greaves said.

    Two Public IaaS Regions, Custom Private Cloud Hardware

    The public OpenStack cloud is already live in the East Coast data center, and the company plans to launch a second availability region on the West Coast sometime in the near future. Greaves declined to name the specific data centers on either coast. QTS operates data centers in New Jersey and Virginia on the East Coast and in Sacramento, Silicon Valley, and Phoenix on the West Coast.

    The company is offering a lot of hardware customization for private OpenStack clouds it plans to build for customers, be it optimization for CPU- or memory-intensive workloads or environments that rely heavily or exclusively on SSD storage. One customer that’s already signed on for the private cloud product is a fabless semiconductor designer, for example, that needs a high-memory environment to run chip simulations.
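
    One way a private cloud operator can expose that kind of hardware customization is through purpose-built flavors. The sketch below, again using the Python openstacksdk, defines a hypothetical memory-heavy flavor of the sort a chip-simulation workload might need; the name and sizing are invented for illustration, not QTS’s actual configuration.

```python
# Sketch: defining a memory-optimized flavor in a private OpenStack cloud,
# e.g. for chip-simulation workloads. Requires admin credentials; the
# flavor name and sizing are illustrative assumptions.
import openstack

conn = openstack.connect(cloud="private-admin")  # hypothetical admin profile

flavor = conn.compute.create_flavor(
    name="sim.highmem",
    ram=512 * 1024,  # RAM in MB (512 GB per instance)
    vcpus=32,
    disk=400,        # ephemeral disk in GB (SSD-backed in this scenario)
)
print("Created flavor:", flavor.name)
```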

    Integrated but Flexible

    Many QTS customers build their environments using a mix of QTS services, be they wholesale or custom data center space, retail colocation, or cloud and managed services. More than 50 percent of existing customers use more than one service at the company’s facilities, Greaves said.

    QTS’s strategy revolves around an integrated platform that provides customers with flexibility to deploy any of the service options it provides. So far, “the message seems to resonate well,” Greaves said.

    3:00p
    Microsoft PowerShell Goes Open Source
    Brought to you by MSPmentor

    Microsoft this week announced that PowerShell is now open source and available on Linux, the start of a development process aimed at enabling users to manage any platform from anywhere, on any device.

    The task-based command-line shell and scripting language is built on the .NET Framework and is widely used by managed services providers (MSPs) and other IT professionals to control and automate the administration of operating systems and applications.

    Until now, PowerShell was only available on Windows.
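
    To make the cross-platform point concrete, here is a minimal sketch of driving the same PowerShell command from Python on either Windows or Linux. The binary name is an assumption: the 2016 alpha shipped as powershell, while later open source releases use pwsh, so the sketch checks for both.

```python
# Sketch: running one PowerShell command from Python on Windows or Linux.
# The binary name is an assumption: newer open source releases install
# "pwsh", while Windows and the early alpha use "powershell".
import shutil
import subprocess

exe = shutil.which("pwsh") or shutil.which("powershell")
if exe is None:
    raise RuntimeError("PowerShell was not found on PATH")

# List the five most CPU-hungry processes -- the same syntax on every OS.
result = subprocess.run(
    [exe, "-NoProfile", "-Command",
     "Get-Process | Sort-Object CPU -Descending | Select-Object -First 5"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```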

    Opening the code to outside developers is part of Microsoft’s increasingly customer-centric philosophy, which calls for letting customers use the same tools and staff to seamlessly manage a growing number of diverse cloud and hybrid environments.

    See also: Red Hat, Microsoft and Codenvy Push DevOps With New Language Protocol

    “Microsoft wants to earn customers’ preference as the platform for running all their workloads – Linux as well as Windows,” Jeffrey Snover, a technical fellow with the Microsoft Enterprise Cloud Group, wrote in a blog post. “This new thinking empowered the .NET team to port .NET Core to Linux and that in turn, enabled PowerShell to port to Linux as well.”

    The new open source PowerShell is available immediately on Ubuntu, CentOS, Red Hat, and Mac OS X. Alpha builds and the source code are available on GitHub, the post said.

    “Now, users across Windows and Linux, current and new PowerShell users, even application developers can experience a rich interactive scripting language as well as a heterogeneous automation and configuration management that works well with your existing tools,” Snover wrote. “Your PowerShell skills are now even more marketable, and your Windows and Linux teams, who may have had to work separately, can now work together more easily.”

    Microsoft officials said they are expanding their community to encourage participation by developers, as well as working with third-party companies, including Chef, Amazon Web Services, VMware, and Google, to ensure a seamless experience across popular platforms.

    The newly created PowerShell Editor Service allows users to choose from a variety of authoring editors, and Microsoft has enhanced the PowerShell Remoting Protocol to incorporate OpenSSH as a native transport.

    Open source PowerShell will also improve the capabilities of Operations Management Suite (OMS), Microsoft’s cloud management solution.

    “OMS gives you visibility and control of your applications and workloads across Azure and other clouds,” Snover wrote. “Integral to this, it enables customers to transform their cloud experience when using PowerShell on both Linux and Windows Server.”

    “OMS Automation elevates PowerShell and Desired State Configuration (DSC) with a highly available and scalable management service from Azure,” he continued. “You can graphically author and manage all PowerShell resources including runbooks, DSC configurations and DSC node configurations from one place.”

    The current alpha release will be replaced in the future with an official Microsoft version of open source PowerShell.

    “We hope all of you will help us get it right,” the blog states.

    This first ran at http://mspmentor.net/msp-mentor/microsoft-powershell-goes-open-source

    3:30p
    Linus Torvalds on Early Linux History, GPL License and Money
    By The VAR Guy

    What drove Linus Torvalds to create Linux, the open source kernel that turns twenty-five this week? Here are some answers from Torvalds himself about Linux’s early history.

    More than a year ago, I wrote an article arguing that at the outset, open source “was about saving money, not sharing code.” At the time, I was just beginning a book project about the history of free and open source software.

    The book is now complete, and I have learned a lot since I wrote that article. One lesson was that it’s important not to mix up GNU and Linux. Both projects were crucial to forming the foundation of the free and open source platforms that power servers, clouds, containers and much more today. But philosophically, they’re quite distinct.

    In retrospect, what I should have argued in the article was that at birth, Linux was largely, though not totally, about saving money, rather than sharing code. Richard Stallman and the GNU project were a different story.

    Torvalds on Linux Before the GPL

    The GNU General Public License (GPL) has governed the Linux kernel since 1992. The GPL ensures that the kernel source code will always remain available. The GPL doesn’t require Linux to be free of cost, although the source code is distributed without charge.

    Before the GPL, Torvalds distributed Linux under a different license of his own creation. A copy is available here. This is what Torvalds told me last May when I asked him about the original Linux license:

    So that original copyright license was just me writing things up, there was pretty obviously no actual lawyerese or anything there.

    The two important parts were the “full source has to be available” and “no money may be involved”. The note about copyright notices was because I tended to hate the copyright boilerplate verbiage at the top of every single source file, so I knew there weren’t all that many notices scattered in the sources themselves.

    The “no money” part came about because I had been annoyed with (being a rather poor student) having to pay something like $169 USD for Minix, and that had been a fair amount of money to me. I felt that part of the point was to make something available to others in my situation, and that it really should be “free” in the actual money sense.

    So for me, “free” as in “gratis” was actually an earlier concern than the whole “free as in freedom”. I still happen to believe that being available even if you’re a poor person who really doesn’t have any money at all is at least as important as anything else, because that’s a basic availability issue for many people.

    Why Torvalds Switched to the GPL

    Why did Torvalds abandon the original Linux license in favor of the GPL? This is what he told me on that point:

    The “source has to be available” obviously ended up being the important thing, and what caused me to switch to the GPLv2 was that a few months later (so late 1991 or early 1992) there were people who approached me and said that they’d want to distribute copies of Linux at local unix users groups meetings etc, and said that they’d like to at least recoup their costs.

    And put that way, I felt that (a) it was obviously reasonable to charge copying costs and (b) once you start doing that, there’s no clear limit, so clearly it must not be about money after all. I felt that as long as people gave access to source back, I could always make it available on the internet for free, so the money angle really had been misplaced in the copyright. So in the meantime people have pointed me to the GPLv2, and I decided that rather than just change my license by editing it again, I should just use an existing one.

    Part of it was also because I felt that the availability of gcc was very important to the project, so picking the GPLv2 as a homage to gcc was appropriate.

    Put another way: I still think that the availability issue is very important. But I think the GPL makes that a non-issue in practice, so making the license to be about the money side is pointless. And clearly _allowing_ the commercial side has been a very good thing for everybody.

    What Did Linux Do Differently?

    Linux was hardly the first free or open source software project. Much larger, more prominent, and better-funded projects — namely, the GNU team in Massachusetts and the BSD team in Berkeley — were already trying (and, in the BSD case, had succeeded) to write a free Unix-like kernel. Torvalds had far fewer resources at his disposal. He was also someone no one had heard of.

    So, what did Torvalds do differently from his contemporaries to make Linux so successful? This is what he remembered setting Linux apart:

    Where the Linux model made a difference was that it took a rather more pragmatic approach to the code sharing notion – using the license from the FSF, but believing in it as an _engineering_ choice and as a way to allow people to improve and share rather than as a moral imperative.

    And also, what was different from Linux compared to most other projects at the time was how non-centralized and open to outsiders the project was. Part of that was technology – it became much easier to work together over email, as the internet was really taking off more widely rather than being an enclave of a few research universities. But a lot of it was cultural: I was basically working alone “in the fringes” in Finland, so unlike a lot of other projects there was no core team where people were physically close to each other and mostly worked with people inside the same CS department (or similar).

    And, to compare with the BSDs, for example, there was no historical insider group either. So we were a lot easier to approach if you came from a DOS/Windows background, for example, because there was no supercilious “here’s a nickel, kid, go get a real computer” model.

    So there was no cabal, it was easy to send me patches, I wouldn’t have stupid paperwork rules like a lot of other projects had, and it really was a much more open project than a lot of software projects that preceded it…

    So I don’t think Linux was unique in any particular way, but it was a combination of things that made it pretty special. I’m happy to say that a lot of those issues have just gone away, and most open source development today has a much more “Linuxy” approach to life than the horrible copyright license wars in the late eighties and early nineties.

    Conclusion

    In Torvalds’s own view, then, cost was a pretty important motivation for writing Linux. The sharing of code mattered as well, but on pragmatic rather than philosophical grounds. The GPL, combined with free Internet distribution of code, ended up serving both purposes.

    And that’s how we ended up where we are today — which is good, because if Torvalds had never GPL’d Linux, it’s doubtful that the Linux kernel and GNU utilities would have been combined in the important ways that they are today.

    This first ran at http://thevarguy.com/open-source-application-software-companies/torvalds-talks-about-early-linux-history-gpl-license-and-

    5:30p
    China’s Kingsoft Aims to Take on Alibaba in Cloud Computing

    (Bloomberg) — Kingsoft, the Chinese software company whose chairman is Xiaomi co-founder Lei Jun, is preparing to go head-to-head with Alibaba in the rapidly growing market for cloud computing services.

    Kingsoft CEO Hongjiang Zhang is banking on new businesses from mobile games to cloud computing to help pull the company out of the red. Cloud services in particular are expected to take off in coming years as Chinese corporations begin to move IT onto the internet, and no one company can own a monopoly in the market, Zhang told Bloomberg News.

    During Alibaba’s latest earnings call, Vice Chairman Joseph Tsai said that no Chinese company could match its firepower in the space, describing some of its rivals as “pretenders.” But Zhang said his much-smaller company can compete head-on against Alibaba and other industry titans. In its latest quarterly report, the company cited IDC research showing Kingsoft was the fastest-growing player in cloud services in 2015, when that business more than tripled.

    See also: Chinese Tech Giants Invest $300M in Data Center Provider 21Vianet

    “In gaming and video cloud and healthcare, we are the leader, so I’m quite confident we will have our place and we’re gaining market share while increasing revenue,” he said. “It’s still far from breaking even but the key point I want to make here is that break-even is not our priority and I don’t think it’s any player’s priority at this moment in China.”

    The cloud unit was a key reason for a near-30 percent jump in capital expenditure in its latest quarter. But that spending is taking its toll.

    See also: Alibaba Reaps Growth From Jack Ma’s Push Into Media, Cloud

    Kingsoft’s share price had fallen 21 percent this year before today, hammered in part by bets that have resulted in major writedowns. The company posted an 807.6 million yuan loss in the June quarter, thanks largely to provisions for impairment on the value of two of its investments. The company hopes Kingsoft Cloud and new mobile games will help reverse that trend.

    “We’re not a niche; we provide all the services our competitors provide,” he added. “This is an enterprise play and in the enterprise market there’s never one winner.”

    6:24p
    What is the Data Center Cost of 1kW of IT Capacity?

    It’s no secret that bigger data centers benefit from economies of scale. It costs less to provide X amount of data center capacity in a massive warehouse-scale facility than it does in a small data center.

    Countless factors influence total data center cost, but it is generally accepted that economies of scale are real. However, little data has been available publicly on exactly how much of a difference those economies of scale can make. A recent study by the Ponemon Institute, funded by Emerson Network Power, aims to quantify this difference.

    And, as it turns out, the difference is huge. Even if you compare a data center that is 500 to 5,000 square feet in size to one that is between 5,001 and 10,000 square feet, it costs as much as 64 percent less on average to provide 1kW of IT capacity in the larger facility, the researchers found.

    The difference is much starker if you compare the smallest data centers to the largest ones. The average annual data center cost per kW ranges from $5,467 for data centers larger than 50,000 square feet to $26,495 for facilities that are between 500 and 5,000 square feet in size.

    See also: How Server Power Supplies are Wasting Your Money

    The researchers drew their conclusions by analyzing data from a survey of annual costs for 41 data centers in North America, including amortized plant, amortized IT assets, operating costs, and energy.

    That 64 percent drop in cost per kW between the two smallest size tiers used in the study is the largest tier-to-tier difference. As you go up in size, the cost differences between neighboring tiers shrink but nevertheless remain substantial:

    • In the 25,001 to 50,000 square foot range, cost per kW was 23 percent lower than in the 10,001 to 25,000 square foot range;
    • In the over-50,000 square foot range, cost per kW was 21 percent lower than in the 25,001 to 50,000 square foot range.
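
    Only the per-kW averages for the smallest and largest tiers are published here, so as a sanity check the short Python snippet below reproduces the spread between those two extremes. Note the direction of each comparison: “X percent lower” is measured against the larger number, while “X percent higher” is measured against the smaller one, which is why the same gap yields very different-sounding percentages.

```python
# Sanity check on the two published per-kW averages from the Ponemon study.
smallest_tier = 26_495  # $ per kW per year, 500-5,000 sq ft facilities
largest_tier = 5_467    # $ per kW per year, >50,000 sq ft facilities

gap = smallest_tier - largest_tier

# "Percent lower" is relative to the larger value...
print(f"Largest facilities: {gap / smallest_tier:.0%} less per kW")   # ~79% less
# ...while "percent higher" is relative to the smaller value.
print(f"Smallest facilities: {gap / largest_tier:.0%} more per kW")   # ~385% more
```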

    [Chart: Average annual data center cost per kW by facility size]

    Source: Cost to Support Compute Capacity, Ponemon Institute, 2016

    Economies of scale make a difference in all four cost categories the study examined (plant, IT, operating, energy), but the impact is greatest in two of the biggest elements of total data center cost: operating and energy, which together constitute about 80 percent of the total.

    See also: Getting to the True Data Center Cost

    Energy cost to support 1kW of IT load in the smallest data center was 180 percent higher than in the largest data center analyzed. The difference in operating costs was 129 percent.

    The researchers also found that data center cost per kW decreases as rack density increases. It cost 68 percent less to support 1kW in facilities with average rack density of 8.5kW than in facilities with average rack density of 4.5kW.

    See also: Facebook Data Centers — Huge Scale at Low Power Density

    Download the full report on the study here.

    7:26p
    QTS to Provide Managed AWS Cloud Services

    QTS Realty Trust is planning to provide managed services for its data center customers who need help using Amazon’s cloud.

    The data center service provider plans to launch managed AWS cloud services in the fourth quarter, QTS CTO Jon Greaves told Data Center Knowledge in an interview.

    While its core business is providing data center space and power – both wholesale and retail colocation – the data center REIT’s business model consists of offering a mix of infrastructure outsourcing options, including cloud and managed services. Its managed services capability expanded greatly last year, when it acquired Carpathia Hosting, a managed hosting firm, for $326 million.

    Greaves, who was Carpathia’s CTO prior to the acquisition, was named CTO of QTS earlier this year.

    See also: Why QTS Dished Out $326M on Carpathia Hosting

    More and more managed hosting providers have been adding managed cloud capabilities to their tool chests in recent years, seeing rising demand for services like managed AWS cloud or managed Microsoft Azure. The top players in this space, which Gartner broadly describes as cloud-enabled managed hosting, are Rackspace and Datapipe, according to the market analyst firm.

    Integrating the biggest cloud providers’ services with a provider’s own platform is a major strength in the managed hosting market, Gartner said in its 2015 Magic Quadrant report covering the space.

    Carpathia already had a managed AWS cloud capability, which is now being integrated with its new parent company’s services platform.

    QTS also announced this week the launch of public and private cloud services built on OpenStack, the family of open source cloud infrastructure software. The company hopes this capability will put it in better position to attract companies with cloud-native applications and DevOps-oriented IT teams.

    These customers, Greaves said, gravitate toward public cloud services like AWS but often choose to deploy private cloud infrastructure in addition to their public cloud environments. They generally prefer OpenStack for their private clouds over the VMware-based private clouds QTS has been offering for some time now.

    “If you’re building cloud-native apps, you want more AWS-centric, cloud-native interfaces,” he said.

    Read more: QTS Launches Public OpenStack Cloud

