Data Center Knowledge | News and analysis for the data center industry
Tuesday, November 18th, 2014
12:00p | Startup Tier44 Gives Power Assure’s Tech a Second Chance
Clemens Pfeiffer, former CTO of Power Assure, the data center management software vendor that went out of business earlier this year, has founded a new data center startup called Tier44 Technologies, which has bought all intellectual property associated with Power Assure’s technology.
Power Assure was a forward-looking company with support from major investors, as well as the U.S. Department of Energy, but failed to raise enough funding to keep going. Pfeiffer plans to continue with a similar product portfolio but change the focus from energy savings to reliability.
Santa Clara, California-based Tier44 is not simply Power Assure with a different name. “It’s significantly different, because we own it,” Pfeiffer, the new firm’s president and CEO, said. “None of the investors are moving along.”
The data center startup was incorporated in October, and the deal to buy the intellectual property closed this week. It is currently funded by the founding team, but that will change going forward, he said.
The technology that has changed hands includes data center infrastructure management solution EM/4 and its more recent release EM/5, as well as the capability to move applications from one data center to another and adjust IT capacity based on power availability, cost, or participation in utility demand response programs.
Market Appetite for Energy Savings Underwhelming
Tier44 is changing the messaging from energy savings (which was Power Assure’s shtick) to reliability and IT automation because the market doesn’t care about energy savings that much, Pfeiffer explained. Power Assure’s leadership learned the hard way that energy savings alone wasn’t enough to convince data center decision makers to buy its products.
The message is reliability improvements funded by energy savings down the road, he said. “And then it’s all about commercialization at this point.”
At least for the time being, Tier44 is not going to develop any new technology. Pfeiffer believes that Power Assure was way ahead of the market when it started, but now, six years later, the market is finally catching up.
Reliability via App Mobility
Tier44’s software improves reliability by automating failover from one data center to another in case of an outage. The failover is a complex procedure that takes into account servers, network, storage, power, and cooling in both data centers.
The closest technology out there is VMware’s vMotion, Pfeiffer said, but it focuses on VMs, without the broad consideration of physical resources Tier44’s technology takes.
Details on Power Assure’s Dissolution Scant
Power Assure was founded in 2007, and Pfeiffer was a member of the founding team. It raised more than $30 million from a number of investors, including the Swiss industrial automation giant ABB, and received a $5 million grant from the Department of Energy for its energy saving capabilities.
Pfeiffer said he did not have any details about the reasons behind the company’s dissolution, other than its inability to raise enough money. He and two other former Power Assure employees who formed Tier44’s founding team were all laid off in August.
“The only thing we know is we were able to buy the IP assets,” he said.
1:00p | Keystone NAP Converting Pennsylvania Steel Mill Into Data Center
Keystone NAP is building out a data center at the site of a former steel mill in Bucks County, Pennsylvania. Commissioning is scheduled to be completed in Q1 2015.
The steel mill, which will soon be converted into a data center building, was constructed over 60 years ago and was once the largest vertically integrated steel production site in the United States. Steel production eventually slowed, but the massive power infrastructure remained. The company is tapping that power infrastructure and combining it with modular construction through a partnership with Schneider Electric.
The goal is to bring a web-scale data center to enterprises, as well as to serve web-scale companies themselves. “Take a look at what companies like Amazon, Microsoft, and Facebook are doing at scale,” said Keystone NAP CEO Peter Ritz. “These are billion dollar projects. Keystone wants to bring that kind of discipline and efficiency to the enterprise. The reality is with enterprises, because it’s not their core business, it’s very difficult to do at scale.”
Bucks County is an hour’s drive north of Philadelphia. The market is underserved, but Bucks County and Philadelphia sit in a good intermediary position between several core data center markets.
“We are seeing growing demand for data center space in the Northeast, and particularly in eastern Pennsylvania, as an alternative to markets such as New York and New Jersey,” said Kelly Morgan, research manager for data centers at 451 Research. “With its location outside of Philadelphia, as well as its combination of space, power, and application management services, we expect that Keystone NAP will see strong demand for this new facility.”
The company received Series A funding and is backed by prominent Philadelphia investors led by Ira Lubert along with additional investors arranged by DH Capital.
Lubert previously was an owner of NAP of the Americas, the massive data center building in Miami that was eventually sold to Terremark (now part of Verizon).
Standing on the Shoulders of Giants
The modular approach the company is taking is nicely aligned with its capital plan, according to Ritz. “We thought that there was a tremendous opportunity at the site we selected to deploy this type of data center,” he said. “We’re standing on the shoulders of giants that used to be here,” he added, referring to the deep engineering that went into the steel plant many years ago.
The project is similar to one Steel Orca proposed a few years back, right down to the location. Keystone is aligning many of those concepts and innovations at the right time.
The company is currently converting the former steel plant’s motor room building. The steel mill used arc furnaces that would draw 300 megawatts from the grid. “And they used to run a couple of them,” said Ritz. “All of those engineers in the 50s might not have had supercomputers, but the site has never lost power since it was built; a remarkable thing.”
Fuel Diversity
Four separate substations will make over 2,000 megawatts of onsite power available. The independent power generation plants feeding the site include a trash-to-steam feed, a coal-and-gas feed, a natural gas feed, and a landfill gas feed, which can be used as a strategic fuel reserve.
“We’re providing fuel diversity,” said Ritz. “During the Hurricane Sandy situation that the Northeast, particularly New Jersey and New York, faced, it caused a lot of enterprises to say, ‘I don’t want to be dependent on diesel providers.’ Our way is to be so close to generation and be so diverse that that issue doesn’t come into play.”
Site has access to several power sources.
In terms of cooling, the site has access to the Delaware River and the water basin and aquifer beneath it, in addition to conventional air cooling.
Schneider’s Customizable Modules
Keystone is building “KeyBlock” vaults in conjunction with Schneider. KeyBlocks are private, modular, stackable data center vaults, and each provides dedicated, customer-configurable infrastructure.
The modules are quick to deploy and offer uninterruptible power of up to 400 kW per KeyBlock. Because each KeyBlock is private, Keystone can offer custom Service Level Agreements.
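For a rough sense of what 400 kW per vault can mean in rack-density terms, here is a back-of-the-envelope sketch; the cabinet count is a hypothetical fill, not a figure Keystone has published:

```python
# Hypothetical fill: Keystone has not said how many cabinets a KeyBlock holds.
keyblock_power_kw = 400          # uninterruptible power per KeyBlock, per the announcement
assumed_cabinets_per_vault = 20  # illustrative assumption only

density_kw_per_cabinet = keyblock_power_kw / assumed_cabinets_per_vault
print(density_kw_per_cabinet)    # 20.0 kW per cabinet under this assumption -- high-density territory
```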
Services Up the Stack, Beyond Facilities Management
In addition to standard facility management, Keystone will also provide “up the stack” application services and workspace recovery solutions. This includes transition services to help customers move into the data center, as well as full responsibility for the network.
“As we looked at the landscape of data center operators, most come from real estate backgrounds,” said Shawn Carey, senior vice president of sales and marketing. “The underpinnings require a sophistication of running an application stack in a web-scale environment. We have a team and advanced services. It needs to be about more than just space and power.”
Early partnerships with Sunesys and Comcast deliver dedicated bandwidth to Keystone NAP through independent dual feeds on opposite sides of the campus. “We want to be on the hook for identifying bandwidth requirements and provisioning for building networks,” said Carey.
As for workspace recovery, the facility will have room to house hundreds of people in the event of a physical disaster like Hurricane Sandy. It is located within reasonable driving distance of all the major data center markets in the Northeast.
2:00p | European Cloud Provider Interoute Launches LA Data Center
European cloud provider Interoute has launched a Virtual Data Center (VDC) zone in Los Angeles, its second stateside and eighth new availability zone this year.
The Infrastructure-as-a-Service provider is growing out its infrastructure quickly, with eight new zones this year. The new location complements a recently opened VDC zone on the East Coast, in New York.
Interoute is going after American companies looking for a location-sensitive platform that makes compliance with European data laws easier. It makes sense for Americans who do business in Europe and for Europeans who want to do business in the States. With the new region, Interoute can provide cloud zones on both U.S. coasts, making it more practical for both groups.
Customers can restrict data to a single location or distribute across multiple locations according to needs. Data transfer is free between VDC zones.
“Interoute simplifies the complexities of doing business in Europe, giving customers the ability to reach over 500 million people whilst adhering to the continent’s different national laws and regulations,” Matthew Finnie, Interoute CTO, said in a statement. “Interoute VDC is designed to make doing business across multiple markets easier, with one price and one SLA everywhere. Opening an Interoute Virtual Data Center in L.A. allows American businesses to expand into European markets through a platform that offers public cloud convenience with private cloud confidence.”
Wyless is one example of a US-based company tapping Interoute to serve a global business. The company provides a platform for the development and deployment of Internet-of-Things and Machine-to-Machine applications. It needs to tap globally distributed infrastructure to manage large amounts of data in close proximity to its customers and partners.
Location-sensitive access is appealing to enterprises; however, the company also has a Jump StartUp program aimed at startups and developers.
The company has a total of 13 VDC locations in the U.S., Europe, and Asia, with eight coming in just the last year. In 2014, the company added VDC zones in Milan, Hong Kong, New York, London, Slough, Madrid, Frankfurt, and now Los Angeles.
Interoute VDC is also available in Amsterdam, Berlin, Geneva, and Paris. The company also has colocation in 31 data centers.
Interoute recently came up in a Gartner report on European cloud. It is also available on Digital Realty’s recently launched marketplace for cloud services.
4:30p | The Internet of Things and the Future of Storage
Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions intended to store huge data sets cost-effectively.
A “smart and safe city” initiative was recently launched in Kazan, Russia. The goal of this initiative is to transform the city gradually by creating a network of Internet-connected sensors and devices that will serve its population with greater efficiencies and better quality of life. As an example, connected cameras have been installed in the famous Gorky Park to enhance security and safety.
Kazan provides a real-world example of what the Internet of Things means for the data storage industry. Imagine all the new data the city’s interconnected devices and sensors will generate, and the storage it will require. Current storage approaches are already bursting at the seams, and the requirements for scaling up have proven costly. Service providers need to think about how to accommodate the incoming data deluge at a price they can afford.
Redundancy and Bottlenecks in the Hardware Age
Appliances form the primary storage architecture in most modern-day data centers. Storage appliances ship with proprietary, mandatory software that is designed for the hardware and vice versa, and the two come tightly wedded together as a package. The benefits of this configuration include convenience and ease of use.
Redundancy is built into the appliance model to guard against failure, since each appliance relies on a single point of entry. Traditional appliances typically include redundant copies of expensive components. This model is effective but expensive. The redundant extra components also bring with them greater energy usage and additional layers of complexity. When companies, in anticipation of growth events like the Internet of Things, begin to consider how to scale out their data centers, costs for this traditional architecture skyrocket.
These standard appliances also suffer from vertical construction. All requests come in via a single point of entry and are then re-routed. Think about a million users connected to that one entry point at the same time. That’s a set-up for a bottleneck, which prevents service providers from being able to scale to meet the capacity needed to support the Internet of Things.
Freedom from Appliance Dependency in the Software-Defined Age
Another option in data center architecture is software-defined storage (SDS). By taking features typically found in hardware and moving them to the software layer, a software-defined approach to data center architecture eliminates the dependency on server “appliances” with software hard-wired into the system. This option provides the scalability and speed that the Internet of Things demands.
Because software and hardware do not have to be sold together as a package, administrators can choose inexpensive commodity servers. This provides a real cost savings. When coupled with lightweight, efficient software solutions, the use of commodity servers can result in substantial savings for online service providers seeking ways to accommodate their users’ growing demand for storage.
In addition to choosing commodity servers, administrators can also choose the specific components and software that best support their growth goals; they are no longer bound to the software that’s hard-wired into the appliances. While this approach does require more technically trained staff, the flexibility afforded by software-defined storage delivers a simpler, stronger and more tailored data center for the company’s needs.
Storage at Scale
Software-defined storage offers the benefit of scalability as well. A telco servicing one particular area will have different storage needs than a major bank with branches in several countries, and a cloud services host provider will have different needs still. While appliances might be good enough for most of these needs, fully uncoupling the software from the hardware can extract substantial gains in economy of scale.
Using a software-defined approach eliminates the potential for bottlenecks caused by vertical, single-entry-point architecture. Its horizontal architecture streamlines and redistributes data so that it is handled faster and more efficiently, and this non-hierarchical construction can be scaled out easily and cost-effectively.
To accommodate the ballooning ecosystem of storage-connected devices all over the world, service providers, enterprises and telcos need to be able to spread their storage layers over multiple data centers in different locations worldwide. With millions of devices needing to access storage, the current storage model that uses a single point of entry cannot scale to meet the demand of the Internet of Things. It’s becoming increasingly clear that one data center is not enough to meet the storage needs of the Internet of Things; storage must instead be distributed such that it can be run in several data centers globally.
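To make the idea of a horizontal, no-single-entry-point design concrete, here is a minimal sketch of consistent hashing, a common technique in distributed storage. It is a generic illustration under our own assumptions, not Compuverde's or any particular vendor's implementation: any node, or even the client, can compute which storage node owns a given object, so requests spread across the cluster instead of funneling through one controller.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Generic illustration: map object keys to storage nodes with no central dispatcher."""

    def __init__(self, nodes, vnodes=100):
        # Place several virtual points per node on a hash ring for even distribution.
        self.ring = sorted(
            (self._hash("%s#%d" % (node, i)), node)
            for node in nodes
            for i in range(vnodes)
        )

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

    def node_for(self, key):
        # The first ring point clockwise from the key's hash owns the object.
        idx = bisect.bisect(self.ring, (self._hash(key), "")) % len(self.ring)
        return self.ring[idx][1]

# Hypothetical three-node cluster; every client computes placement locally.
ring = ConsistentHashRing(["storage-node-1", "storage-node-2", "storage-node-3"])
print(ring.node_for("sensor-42/frame-0001.jpg"))
```

Adding a node only remaps the keys adjacent to its ring points, which is what lets a layout like this scale out across servers and sites without a central gateway.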
Future-Defined Storage
Technology has produced a veritable Cambrian explosion with the vast web of interconnected sensors and devices known as the Internet of Things. It will touch every industry and organizations of every size, requiring greater storage capacity than ever before. Because traditional data center architecture has been so expensive, service providers in search of a more budget-friendly alternative are finding the answer in software-defined storage. By uncoupling from the hardware and using a horizontal architecture, software-defined storage enables cost-effective scalability and speed, both of which will serve customer needs for the long haul.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
5:30p | Case Study: CPI’s Cabinets Prepare Telefônica Vivo for the Future
Today, the home of all modern platforms and technologies is the data center, so the right model can mean everything for your business. Many organizations are working much more closely with their data center design partners to ensure that agility, scalability, and efficiency are all built in.
In this case study from CPI we look at Telefônica Vivo, the largest integrated telecom in South America, providing services to more than 100 million clients, or half of Brazil’s population. Vivo has seven data centers across the country. In an effort to consolidate and optimize them, the company wanted to build a new facility that would be robust enough to support its prepay and contract mobile segment for the next 10 years and beyond.
What was the challenge? To support all these systems, the company needed an environment that was not only reliable but also highly energy efficient.
Victor Bañuelos, CPI’s technical manager for Latin America and an expert in aisle containment, provided Vivo with custom computational fluid dynamics (CFD) models and calculations to prove savings and return on investment. CPI also provided the company with engineering models in AutoCAD Shapes and building information modeling (BIM) drawings of what the cabinet would look like.
During the selection phase, a competitor sent its cabinets for demonstration and testing. The cabinets were 600 mm wide x 1,000 mm deep, 42U high. CPI’s sample cabinet was 800 mm wide x 1,200 mm deep, 48U high with a Vertical Exhaust Duct.
Vivo initially planned to use a hot aisle containment (HAC) solution, but in the end it decided on CPI’s GF-Series GlobalFrame Cabinet System with Vertical Exhaust Ducts, which were custom made with 9-foot (2.7-meter) ducts to fit the space. CPI’s GlobalFrame cabinet supports high-density applications using CPI Passive Cooling Solutions, which isolate, redirect, and recycle hot exhaust air, all while reducing operating costs. The cabinet is available in 30 popular frame sizes.
Download this case study today to learn more about the efficiency and scale results Vivo was able to achieve by deploying a dynamic server cabinet system, as well as about the facility’s Power Usage Effectiveness (PUE) reading, which is currently even better than expected.
6:17p | China’s Milkyway 2 Ranked Fastest Supercomputer for Fourth Time
China’s Milkyway 2 retained its ranking as the world’s fastest supercomputer on the most recent Top500 list for the fourth consecutive time, with the same 33.86 petaflop/s (quadrillions of calculations per second) performance record it set in June 2013.
Milkyway 2 (also known as Tianhe-2) holding its position reflects the rest of the 44th edition of the Top500 list, which recorded no changes in the top nine systems since the June 2014 edition and similarly low performance growth in the lower portion of the list. A new Cray CS-Storm system installed at an undisclosed U.S. government site entered the list at number 10, at 3.57 petaflop/s. The November 2014 list reported the lowest turnover rate in two decades.
The fastest supercomputers on the twice-yearly list are ranked by Linpack, the favored benchmark that many have said is losing relevance as a measure of the true performance of modern high-performance computing systems. As an alternative HPC metric, the High Performance Conjugate Gradient (HPCG) benchmark was introduced last year; it attempts to better reflect the computation and data access patterns found in modern applications. The two metrics can be used together to evaluate systems and judge true performance.
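For context on where those petaflop/s figures come from: Linpack, in its HPL form, times the solution of a dense n-by-n linear system and divides the standard operation count by the runtime. The sketch below uses that formula; the problem size is purely illustrative and is not Tianhe-2's actual benchmark configuration:

```python
def hpl_pflops(n, runtime_seconds):
    # Standard HPL operation count for solving a dense n x n system: 2/3*n^3 + 2*n^2 flops.
    flops = (2.0 / 3.0) * n ** 3 + 2.0 * n ** 2
    return flops / runtime_seconds / 1e15  # petaflop/s

# Illustrative only: at Tianhe-2's reported 33.86 petaflop/s pace, a hypothetical
# problem size of n = 10,000,000 would correspond to a run of roughly 5.5 hours.
n = 10 ** 7
runtime = ((2.0 / 3.0) * n ** 3 + 2.0 * n ** 2) / 33.86e15
print(round(runtime / 3600, 1), "hours,", round(hpl_pflops(n, runtime), 2), "Pflop/s")
```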
Many organizations have placed less importance on Linpack measurements and have not submitted their systems for consideration on the Top500 list. The Blue Waters Cray supercomputer at the National Center for Supercomputing Applications has never submitted its performance results to the Top500 since it launched, but its operators recently noted that the system achieved a peak performance of over 13 petaflop/s. That result, if submitted, would rank Blue Waters at number four on the November 2014 list.
Other statistics noted in the November list showed continued growth in the use of accelerator/co-processor technology (NVIDIA GPU, Intel Xeon Phi). Systems by HP, IBM, and Cray continued to dominate the list. A total of 50 systems now have greater than 1 petaflop/s performance, up from 37 six months ago.
9:30p | Senate to Debate Surveillance Legislation Backed by Major Tech Companies This Week
This article originally appeared at The WHIR
On Tuesday, the US Senate will begin debating and amending the USA Freedom Act, a bipartisan bill authored by Democratic Senator Patrick Leahy and Republican Representative Jim Sensenbrenner and designed to put limits on mass government surveillance.
The USA Freedom Act passed the House of Representatives in May in response to government surveillance practices made public by Edward Snowden.
Privacy advocates and technology groups originally championed the bill, but some withdrew their support after compromises expanded the definition of what data the government can collect.
After weeks of analysis, Internet advocacy group the Electronic Frontier Foundation has come out in support of the current incarnation of the bill.
The group notes that the legislation could substantially improve America’s surveillance laws through its statutory limits on mass surveillance by the National Security Agency (NSA). It also appears to bring more transparency to the Foreign Intelligence Surveillance Act court (FISA court) through the addition of a special advocate to protect civil liberties, and through new reporting requirements that force the NSA to disclose how many people are actually being surveilled under its programs.
The EFF notes, however, that the bill is only a preliminary step in addressing the problems of Internet surveillance. It does not address the NSA’s programs to develop its own methods of cracking encryption standards, as reported by ProPublica. It also doesn’t effectively address the collection of information on people outside the U.S., nor does it limit the government’s ability to secretly intercept unencrypted packets passing between private data centers.
Major tech companies that handle and host data on the web have been heavily affected by government surveillance because it undermines consumer trust in the services they provide. Apple, Dropbox, Facebook, Google, LinkedIn, Microsoft, Twitter, and Yahoo have been cooperating on a campaign launched last year called “Reform Government Surveillance,” which is aimed at curbing unconstitutional government surveillance.
RGS has been urging senators to support the bill. On Sunday it sent an open letter to senators, stating: “We urge you to pass the bill, which both protects national security and reaffirms America’s commitment to the freedoms we all cherish.”
According to an Ars Technica report, senators could be voting on the legislation as soon as this week.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/senate-debate-surveillance-legislation-backed-major-tech-companies-week
10:00p | Canadian Web Host Files Lawsuit Against Police who Seized its Equipment in Citadel Botnet Investigation
This article originally appeared at The WHIR
Vancouver web host White Falcon Communications and its owner Dmitri Glazyrin have filed a lawsuit against the Attorney General of Canada and two Canadian police officers over the seizure of company servers and equipment in 2013. The seizure resulted from a US-led investigation into Citadel botnets, and the company alleges in the suit that it effectively destroyed its business.
In June 2013, as US Marshals were seizing servers from facilities in New Jersey and Pennsylvania suspected of operating Citadel botnets, Royal Canadian Mounted Police (RCMP) investigators Clint Baker and Paul Wrigglesworth executed a warrant for White Falcon’s hardware.
“I believe that a computer…that has been associated with White Falcon Communications, was operating a command and control server,” RCMP Const. Wrigglesworth said in the search warrant, according to the Vancouver Sun. “This command and control server was controlling an unknown number of infected personal computers as a Citadel botnet.”
According to the Sun, Citadel malware had affected over five million people at a cost of $500 million at the time. The FBI and Microsoft identified White Falcon servers as Citadel botnet command and control servers. The warrant also mentioned that although the origin of the Citadel botnet is unknown, it is believed to have been operated from Russia or the Ukraine. White Falcon hosted many sites with the .ru TLD, and Glazyrin was raised and educated in Russia before immigrating to Canada.
Glazyrin maintains his innocence in the suit, which was filed in early November. “It is well known in the Internet security industry that legitimate businesses can be affected by botnet infections and indeed, the United States of America have a number of legitimate online business [sic] that had been affected with the ‘Citadel’ malware,” the claim states. “It did not occur to the Defendants Wrigglesworth and Baker that White Falcon Communications may have been the victim of the Citadel botnet and malware and instead jumped to the erroneous conclusion that the Plaintiffs herein were actively engaged in the crime of unauthorized use of a computer and possession of [sic] device to obtain computer service.”
Legitimate businesses which have been affected by botnet infections include industry leaders like Amazon and GoDaddy, which Solutionary found hosted a combined 30 percent of malware in a January report.
Several other incidents of cybercrime allegedly involving Russia have occurred since the seizure, including the theft of 1.2 billion credentials in what may be the largest data breach ever, which was revealed in August, as well as a recent breach of White House computer networks.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/canadian-web-host-sues-police-seized-equipment-citadel-botnet-investigation
10:10p | Atlanta, Seattle Among Cheapest Places to Lease Data Centers
Leasing wholesale data center space is an expensive proposition for any company, and location has a massive influence on cost, which adds another layer to the already complex process of data center site selection.
In a recent report, commercial real estate firm CBRE analyzed total cost of leasing 1 megawatt of data center capacity for seven years in 23 U.S. markets and found that Atlanta, Colorado Springs, Northern Virginia, Portland, and Seattle are the most cost-effective places to do it. Boston, Des Moines, Kansas City, Northern Florida, and Omaha were on the opposite end of the spectrum.
The amount of activity in the lowest-cost markets is telling. Atlanta, for example, saw an 80,000-square-foot lease by Twitter, the opening of a Peak 10 data center, and the acquisition of a local provider by zColo this year. Northern Virginia is on its way to overtaking New York as the biggest data center market in the nation by 2015, according to 451 Research.
Here are the findings on a map, courtesy of CBRE:
“For occupiers seeking to preserve capital or lease a data center, the selection process needs to carefully consider the primary cost variables of rent, power and taxes, and recognize the variability that exists from market to market,” Pat Lynch, managing director of data center solutions at CBRE, said in a statement.
The average cost of a 1-megawatt lease over a seven-year term across the 23 markets is $45.9 million, according to the report, which breaks out the following components (a rough sanity check follows the list):
- $158 per kW per month, or $1.9 million – average first-year rent
- $0.076 per kWh, or $798,000 per year – average cost of power
- $1.9 million – average total tax payment over the life of the project
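These per-unit figures can be roughly cross-checked. The sketch below is a back-of-the-envelope reconstruction; the flat-rent, full-utilization, and PUE assumptions are ours, not CBRE's, and the itemized lines alone do not account for the full $45.9 million average, which presumably includes costs not broken out in the summary:

```python
KW = 1000        # 1 MW of leased capacity
YEARS = 7

rent_per_kw_month = 158   # $/kW/month, average first-year rent from the report
power_per_kwh = 0.076     # $/kWh, average power cost from the report
total_tax = 1.9e6         # $ average total tax payment over the life of the project

annual_rent = rent_per_kw_month * KW * 12        # ~$1.9M, matches the report
# Assuming the full megawatt runs around the clock with a PUE of roughly 1.2
# (our assumption, not CBRE's) reproduces the report's ~$798,000/year power figure.
annual_power = power_per_kwh * KW * 8760 * 1.2   # ~$0.8M

itemized_total = annual_rent * YEARS + annual_power * YEARS + total_tax
print(round(annual_rent), round(annual_power), round(itemized_total))  # ~$20.8M itemized
```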
Tax Breaks in Immature Markets Not Effective
Many states use tax breaks to attract data center construction to areas where they want to boost the economy. Some states offer tax incentives not only to data center providers but also to the companies that lease space from them, to nudge site selection decisions in their direction.
Tax incentives in areas that don’t already have active data center markets, however, do not necessarily make it advantageous for somebody to lease there, CBRE found. Markets with little competition tend to have higher prices for tenants.
A less mature, less competitive market like Des Moines will have higher lease rates than a first-tier market like Silicon Valley, according to the analysts. So, even though places like Kansas City, Des Moines, and Omaha offer good incentive packages, the incentives cannot offset rates in those markets, which are 120 percent to 140 percent higher than the average.
Government incentives usually shave off about 10 percent of the total cost of a long-term lease, according to CBRE.
Of the 23 markets the report analyzed, nine did not offer incentives for leased data centers.
Data Center Construction Boom Continues
Wholesale data center inventory across the country continues to expand to keep up with rising demand. As of the second quarter of this year, inventory in primary markets was up more than 30 percent year over year.
Primary markets boasted 1,140.9 megawatts in Q2, and another 107.3 megawatts were under construction, according to CBRE. The category includes Atlanta, Chicago, Dallas-Ft. Worth, New York-Tri State Region, Northern Virginia, Phoenix, and Silicon Valley.
10:30p | Microsoft Intros Docker Command Line Interface for Windows
Microsoft launched a command line interface for Docker that runs on Windows. Until now, users could only manage Docker containers from a Linux machine or through a virtualized Docker environment on a Windows machine.
Docker is one of the hottest emerging technologies at the intersection of software development and IT infrastructure management, and tech giants are racing to make sure their existing products and services support the open source technology as well as to develop new services around it.
Microsoft announced a partnership with Docker in October, saying it would bring native support for Docker containers to the next version of Windows Server in addition to support of the technology on its Azure cloud. Google added a Docker container management service to its cloud platform earlier this month, and Amazon Web Services did the same last week.
By standardizing the way applications communicate their IT resource requirements, Docker makes it easy to deploy an application on a variety of servers or clouds and to move it from one resource to another. It is both an open source technology and a for-profit San Francisco-based company.
The Docker CLI for Windows is now in the official Docker GitHub repository. “Today, with a Windows CLI you can manage your Docker hosts wherever they are directly from your Windows Clients,” Khalid Mouss, senior program manager for Azure Compute Runtime at Microsoft, wrote in a blog post announcing the news.
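As a rough illustration of the workflow the post describes, the Windows client can point at a remote Docker daemon with the standard -H flag. The host name and port below are hypothetical, and the remote daemon would need to be configured to listen on TCP; this is a sketch, not Microsoft's documented example:

```python
import subprocess

# Hypothetical remote Linux host running the Docker daemon on TCP port 2375.
DOCKER_HOST = "tcp://linux-docker-host.example.com:2375"

def docker(*args):
    """Invoke the Windows docker.exe client against the remote daemon via the -H flag."""
    cmd = ["docker", "-H", DOCKER_HOST] + list(args)
    return subprocess.check_output(cmd).decode("utf-8")

if __name__ == "__main__":
    print(docker("version"))   # client runs on Windows, daemon on the Linux host
    print(docker("ps", "-a"))  # list containers on the remote host
```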
Microsoft has also created a Docker image for ASP.NET, its open source server-side Web application framework. ASP.NET is based on .NET, the popular development framework Microsoft open sourced last week.