Data Center Knowledge | News and analysis for the data center industry
Thursday, March 12th, 2015
1:00p
Docker Buys Kitematic, Which Simplifies Docker on Mac
Docker has added yet another deal to its growing list of acquisitions, and this one couldn’t be a better cultural fit.
Kitematic makes it possible for the Mac world to run, build, and ship Docker containers, something Docker’s existing tooling did not make easy. Because Mac computers cannot run Docker natively, the alternative has been to follow a guide and spend 30 minutes to an hour configuring it to work. Kitematic cuts that setup time down to minutes and improves ease of use by handling all of the configuration behind an attractive user interface.
Kitematic features a one-click installer, intuitive GUI, and easy discovery and download of public images. Other automated features include mapping ports, mounting drive volumes, modifying environment variables and getting log information.
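As a rough sketch of what Kitematic automates behind its interface, the snippet below performs the same kind of port mapping, volume mounting, environment configuration, and log retrieval by hand using the Docker SDK for Python. The image name, paths, and port numbers are placeholders, and a locally running Docker daemon is assumed; this is not Kitematic’s own code.

```python
# Illustrative sketch only: doing by hand what Kitematic's one-click setup
# automates, using the Docker SDK for Python. Image, ports, paths, and
# environment values are placeholders; a running Docker daemon is assumed.
import docker

client = docker.from_env()  # connect to the local Docker daemon

container = client.containers.run(
    "nginx:latest",                       # public image discovered on Docker Hub
    detach=True,
    ports={"80/tcp": 8080},               # map container port 80 to host port 8080
    volumes={"/tmp/site": {"bind": "/usr/share/nginx/html", "mode": "ro"}},
    environment={"NGINX_HOST": "localhost"},
)

print(container.logs())                   # pull log information, as Kitematic does
```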
As a result of the acquisition, Docker expects to attract many more developers to its ecosystem. In addition to providing a way to run Docker on Mac, the deal opens the platform up to more programmers and a wider, more general audience.
The three-person Kitematic team is setting up shop in Docker’s San Francisco office and is currently looking to hire more talent. The team will continue to develop open source products, including bringing that magic to another major platform.
“We’re acquiring it to increase the size of community,” said Justen Stepka, director of product management at Docker. “One of the reasons we’re really excited is it’s easy to publish the platform over Windows. Windows distribution as well as Mac will really open up the platform.”
Kitematic is one of those cool tech stories. The trio met as seniors at the University of Waterloo and decided to develop an extremely fast and easy way to run Docker on Macs. Hacker News picked up the alpha version of Kitematic, which raised its profile. Kitematic then earned 2,600 stars on GitHub in less than six months, with downloads reaching more than 10,000. The attention from the community landed it on Docker’s radar quickly, said Stepka. The Kitematic team flew down to California and the rest is history.
Docker continues to acquire in strategic areas that extend the project and deepen its bench of talent. Stepka broke down the rationale for each acquisition so far:
- The company acquired Orchard Labs to define how to set up containers that can easily move together. Modern apps don’t run inside a single component, and Orchard delivered a Docker orchestration tool, called Compose, with capabilities to manage and monitor containers.
- The recent SocketPlane acquisition was an extension of Orchard, addressing how to network those containers. The team is working on open source software that defines APIs for building and networking containers.
- Koality then came along, and its technology was folded into Docker Hub Enterprise.
- Finally, Kitematic will open up Docker on Mac, making it easily accessible to more developers.
“Anything that increases the overall experience of the platform and improves the Docker community is of great interest,” said Stepka.
3:00p
Zuora Raises $115M to Help Cloud Service Models
Subscription management platform company Zuora, whose X-as-a-Service offering helps cloud and hosted services transform and manage billing, has raised another $115 million as the startup approaches a $1 billion valuation. That brings total funding to $250 million, fueling chatter about a potential IPO.
A veritable “who’s who” of investors in this space participated in this latest round: Benchmark Capital, Greylock Partners, Redpoint Ventures, Index Ventures, Shasta Ventures, Vulcan, and New World Capital. SaaS trailblazers such as Salesforce.com CEO Marc Benioff and Workday co-founder Dave Duffield also contributed.
This broad interest illustrates a trend toward a services economy that is touching every part of the technology landscape. Traditional upfront costs and maintenance fees are increasingly being replaced with cloud service models across the board, resulting in a shift toward subscription or utility-based billing.
Zuora certainly hit the scene at the right time. Founded in 2007, it was an early play in the cloud world that fixed a very real problem: How do you run a business when revenue comes in over a period of time instead of upfront? In addition to technical hurdles, there were and continue to be business model hurdles.
Consumption- and subscription-based pricing means more invoices, more varied invoices and more time-consuming accounting. Zuora’s aim is to alleviate some of the burden.
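As a hypothetical illustration of why those invoices multiply, consider a customer who upgrades plans mid-cycle: the single monthly invoice now carries two prorated line items. All plan names, rates, and dates below are invented for the example.

```python
# Hypothetical example: a mid-cycle plan upgrade produces an invoice with two
# prorated line items. Plans, rates, and dates are invented for illustration.
from datetime import date

def prorate(monthly_rate, start, end, days_in_month=30):
    """Charge only for the days between start and end."""
    return round(monthly_rate * (end - start).days / days_in_month, 2)

basic, pro = 50.00, 120.00
upgrade_day = date(2015, 3, 12)

invoice = [
    ("Basic plan, Mar 1-11", prorate(basic, date(2015, 3, 1), upgrade_day)),
    ("Pro plan, Mar 12-31", prorate(pro, upgrade_day, date(2015, 3, 31))),
]
total = sum(amount for _, amount in invoice)
print(invoice, total)   # [('Basic plan, Mar 1-11', 18.33), ('Pro plan, Mar 12-31', 76.0)] 94.33
```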
“The subscription economy is permeating every industry—entertainment, technology, healthcare, manufacturing with IoT, consumer products, everything,” said Tien Tzuo, Zuora co-founder and CEO, in a press release. “Customers are now subscribers, and the new way to acquire, bill, and nurture customers is through monetizing subscriber relationships.”
eVapt, an early entrant and competitor, also developed a cloud billing solution called Sure!, a Magnaquest product.
3:30p
Google Cloud Becomes Attractive Option for Businesses
Yaniv Mor is CEO & Co-Founder of Xplenty, the big data processing platform powered by Hadoop.
Big data is the fuel driving today’s business engine. According to Gartner, more than 73 percent of organizations either have invested, or plan to invest, in advanced big data infrastructure and programs within the next 24 months. And to maximize the overall effectiveness and operational efficiency of these efforts, businesses are increasingly turning to the cloud, spending $13 billion in total on the category this year.
Since 2006, Amazon Web Services (AWS) has been the industry’s dominant vendor, with roughly 80 percent market share. But as the space matures, we’ve seen increased competition from players like Rackspace, Microsoft Azure, IBM SoftLayer and, perhaps most notably, Google.
Though Google’s offering is still quite young with a customer base that is fairly small, its attractiveness for businesses is very real. Below we discuss three reasons why.
Why Google Cloud?
Price Flexibility. With Amazon in the driver’s seat, Google is very clearly trying to win the cloud computing space on the strength of its pricing structure. In October, to drive interest and put Amazon on its heels, Google announced a 10 percent price reduction on all instances across all regions (on top of price cuts in March). Unlike newer players in the space – think Digital Ocean and Profitbricks – Google can afford to wage a price war to encourage broad adoption. Its wallet is deep enough to weather cuts that drive scale.
Beyond just dropping prices, though, Google Cloud captures the cost flexibility businesses generally associate with cloud computing. One of the cloud’s main benefits is its ability to minimize cost by avoiding the often sizable upfront investments in IT hardware, as well as ongoing upgrades. The cloud also makes it possible to “pay as you go” based on infrastructure and processing needs. With Google Cloud, billing is done by the minute, compared to other major Infrastructure-as-a-Service (IaaS) platforms that charge hourly. This means a more agile and cost-effective experience for users.
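A rough, hypothetical comparison makes the point; the hourly rate below is invented, not an actual Google or Amazon price, and real per-minute billing typically carries a minimum charge as well.

```python
# Illustrative only: per-minute vs. hourly billing for a short-lived workload.
# The $0.05/hour rate is an assumption, not an actual provider price.
import math

rate_per_hour = 0.05
runtime_minutes = 95

hourly_billed = math.ceil(runtime_minutes / 60) * rate_per_hour   # rounded up to 2 full hours
per_minute_billed = (runtime_minutes / 60) * rate_per_hour        # billed for exactly 95 minutes

print(f"hourly: ${hourly_billed:.4f}, per-minute: ${per_minute_billed:.4f}")
# hourly: $0.1000, per-minute: $0.0792 -- roughly 21 percent less for this workload
```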
Big Data Analytics Services. BigQuery is among the top big data analytics services available on the cloud today and is central to the success of Google Cloud itself. Designed for scalability and ease of use, BigQuery can handle petabytes of data, which it exposes through a SQL-esque query interface. This allows a user to easily query the data, quickly identifying early trends, analyzing patterns, locating potential technical problems, and doing a host of other things that are critical for any business over the long term. Google built its cloud to handle analytics.
This simple and straightforward functionality brings users to Google Cloud, while also lowering the barriers to big data cloud adoption moving ahead.
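As a hedged sketch of what that querying looks like in practice, the snippet below runs a SQL query through the google-cloud-bigquery client library, one of several ways to reach the service. The project, dataset, table, and column names are placeholders, and default application credentials are assumed.

```python
# Illustrative sketch: querying BigQuery with the google-cloud-bigquery client.
# Project, dataset, table, and column names are placeholders; application
# default credentials are assumed to be configured.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

sql = """
    SELECT country, COUNT(*) AS pageviews
    FROM `my-project.web_analytics.events`
    WHERE event_date >= '2015-01-01'
    GROUP BY country
    ORDER BY pageviews DESC
    LIMIT 10
"""

for row in client.query(sql).result():   # rows stream back once the query job finishes
    print(row.country, row.pageviews)
```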
Google App Engine. As Platform-as-a-Service (PaaS) offerings go, Google’s App Engine is currently one of the most comprehensive services available. Compatible with popular coding languages, including Java and Python, Google’s App Engine enables users to build and run applications themselves.
Additionally, App Engine provides one of the most important things to developers: scalability. Thanks to Google’s built-in technologies such as GFS and Bigtable, all developers have to do is write the desired code and let Google handle scaling as appropriate. Given this easy-to-use blueprint, organizations can cut out the legwork while still keeping tabs on and adjusting their applications as they see fit.
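For a sense of how little code the platform asks of a developer, here is a minimal sketch of a Python request handler in the style of webapp2, the framework bundled with App Engine’s standard Python runtime at the time; routing traffic to instances and scaling them up or down is left to App Engine.

```python
# Minimal sketch of an App Engine request handler using webapp2. App Engine
# routes requests to this WSGI application and scales instances automatically.
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.write('Hello from App Engine')

app = webapp2.WSGIApplication([('/', MainPage)], debug=True)
```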
The Long Road Ahead
Still, even with its very apparent strengths, Google has a long way to go before it can overtake Amazon as the category’s dominant player.
Consider, for instance, the sheer breadth of services available through AWS versus Google Cloud. AWS offers many, many more – from a richer set of relational and NoSQL services all the way through code deployment and integration tools – and all are well integrated into the platform. That depth and flexibility is critical in accommodating the broad set of needs businesses and developers might have.
By contrast, Google Cloud features far fewer products and services, limiting its usability. Though this will likely change over time as the platform scales and matures, right now it’s well behind Amazon with regard to completeness. Separately, Amazon’s big network of partners and ISVs presents another hurdle to Google Cloud consideration, though the lack of services, in my view, is a bigger issue.
Together, these advantages make AWS the all-around better platform. But through price cuts, BigQuery, and Google App Engine, Google Cloud is beginning to win over businesses and position itself as a real challenger. Of course, the more meaningful the competition, the harder it can be to select a vendor. So businesses need to look for a player that checks as many boxes as confidently as possible for their specific needs. At the end of the day, that’s what it really comes down to.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
3:30p
Videotron Acquires Quebec Colocation Data Center
Quebec telecommunications company Videotron has acquired 4Degrees Colocation and its data center in Quebec City, Canada, for more than $31 million, with up to an additional $4 million possible if certain targets are reached. The move will allow Videotron to offer bundled services to businesses in the city.
The company’s Quebec colocation data center is a large, modern facility. Built just last year in the Quebec Metro High Tech Park, it is Uptime Institute Tier III-certified with room for 2,000 cabinets or 85,000 servers.
The shift toward hyper-consolidation is occurring throughout Canada. Telecoms, in particular, continue to invest in data centers ranging from single facilities to sizable footprints. As traditional business lines slow, data centers represent a growing opportunity to diversify and protect against threats to cable and telecom revenues.
Cogeco acquired Peer 1 Hosting for $482.5 million and is opening a massive data center in Montreal. Canada’s Shaw Communications bought Colorado-based ViaWest for $1.2 billion. Telus has also been expanding its data center business. Rogers Communications acquired Pivot Data Centers and BLACKIRON Data. Out west, Bell Canada, along with a group of investors, acquired Q9 Networks and its sizable presence.
There are also increasing investments and data center footprints on the part of CenturyLink, Equinix and Cologix. IBM just opened a SoftLayer data center in Montreal and OVH added a 10,000-server container late last year.
American companies are buying into Canada as well. For example, Internap acquired Quebec’s iWeb in 2013, and CenturyLink-acquired Savvis bought Canadian managed services provider Fusepoint way back in 2010.
Two reasons companies find Quebec City ideal for data centers are climate and clean energy. The city remains cold much of the year, which helps keep cooling costs down. Plus, it offers clean hydroelectric power. The combination creates the perfect environment for data centers, including Videotron’s newest one.
A subsidiary of Quebecor Media, the company offers cable, telephone and mobile services in addition to hosting and various data center services for business.
Rogers Communications attempted to acquire Videotron, but cultural sovereignty concerns on the part of a large stakeholder struck the deal down faster than a Maple Leafs’ hope for hockey playoff contention—its loss, Videotron’s gain.
“We are very pleased to acquire this promising business,” said Manon Brouillette, president and CEO of Videotron, in a press release. “Its expertise will benefit Quebec companies, which are looking for hosting services that meet the highest technical and technological standards in the industry. The integration of 4Degrees into our operations is very much in keeping with our vision of making life simpler than ever for our business customers. This investment also enables our teams to more effectively support Quebec companies of all sizes seeking integrated business solutions that meet their needs. By linking up with experts in a high-tech field, we are reaffirming our commitment to innovation and to supplying Quebec businesses with the technology of their choice.”
4:00p
Compass Adds $25M Credit Facility to War Chest
Compass Datacenters received an additional $25 million credit commitment to support future growth. KeyBank acted as syndication agent, with financing from CIT Bank and the credit commitment from CIT Group.
Compass provides high-end made-to-order data centers and focuses on second-tier U.S. markets. The company uses a single standard design to deliver Tier III-certified data centers wherever they are needed.
The credit facility will go toward fulfilling customer orders for data centers and toward expanding Compass’ operations into additional metro areas with burgeoning demand.
With the latest commitment, the company’s total credit facility is now $135 million. It added $100 million to the war chest in 2014. Data center construction is capital-intensive, with credit facilities playing an important role in running a smooth business.
“Strong relationships with our banking partners are critical to support the growth at Compass. I am very pleased to add CIT to our facility,” said Chris Crosby, Compass CEO, in a statement. “I envision 2015 to be a year in which we scale the business not only in terms of new customers, but also as it relates to the people, processes, and systems behind our unique approach to building and operating world-class data centers.”
Compass, which was founded about five years ago, has the data center construction and delivery process down to a science. It has delivered data centers for everyone from giant electrical utility American Electric Power to large service providers like CenturyLink. Other customers include Iron Mountain and Windstream Communications.
Other financial partners that have supported Compass include KeyBank, Regions Bank, and Raymond James.
4:00p
How Big Data and Internet of Things (IoT) Impact Data Centers
Big data just keeps expanding and expanding. Science Daily reported in 2013 that a full 90 percent of all the data in the world had been generated over the previous two years. Jorge Balcells, director of technical services at Verne Global, noted that with 2.5 billion Internet users worldwide, and about 250 million in the United States alone, the number of users has exploded, particularly in the last decade.
With devices and appliances of all types connecting to the Internet and generating data, from our Fitbits to our phone cameras, the future potential for data generation points to exponential increases in demand for compute and storage.
How does this impact data centers? That’s the topic of Balcells’s presentation at Data Center World Global Conference and Expo next month. The conference’s educational tracks will include many topical sessions, covering issues and new technologies that data center managers and operators face.
Larger Compute and Storage Demands Generate Power Demands
Balcells said his key points center on questions data center managers and operators should be asking, such as “Is the electrical infrastructure we have today (that is, from years ago) able to cope with all data generated today? Can we provide enough power?” This leads to the next consideration: “Do you know where you get power to run your data center today? And in 5 or 10 or 15 years?”
To support today’s needs of compute and storage, “we need to have power that is abundant, reliable, renewable, and energy efficient,” he said.
There certainly is agreement in the industry that growing data demands lead to greater power demand and costs. Verne Global, located in Keflavik, Iceland, has built its strategy around accessing renewable, reliable and cost-effective power. Balcells is uniquely positioned to lead a discussion of power considerations and their impacts on data centers.
The Power Bottom Line
The financial perspective on power is vitally important. As data center managers look into the future to plan costs, how does one calculate what power pricing is going to be when you don’t know what the future holds, Balcells asked.
Power costs have a huge impact on data center facility location today. When one looks at the market trends, the common denominator is “price of power,” said Balcells.
Locations Changing with Demand
“You see today, that people are not locating new data centers in large metro areas. In the last ten years, data centers are moving away from population hubs, and toward remote areas, such as the Pacific Northwest of the United States, areas such as Washington State, Oregon, and even Utah,” he noted. “Globally data centers are being located in the Nordic regions, including Iceland.”
For example, he said that Facebook built a data center in Sweden where the power grid is ultra-reliable, and Google went to Finland. For Google’s Finnish facility, the power will be coming from renewable sources starting this year. (Previously reported by DCK: Starting in 2015, the Google data center in Hamina will be primarily powered by wind energy via a new onshore wind park. The company will sign additional agreements as it grows to power the data center with 100 percent renewable energy.)
This kind of power sourcing and reliability is not currently available in the United States. “For example, the Bay Area is not sustainable. There’s not the power nor the reliability there,” Balcells said.
The other benefit of Northern climates is the lower cooling demand. “Of a data center’s overall cost, cooling cost is 30 to 40 percent of the power costs,” he said. “Data centers are looking to locate in locations where there is a cool climate year round.” This reduces the need to generate cool air (either through conventional cooling units or through evaporative cooling) to reduce the intake temperatures of the servers.
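To put rough numbers on that claim, the back-of-the-envelope sketch below assumes an annual utility bill, the midpoint of the 30 to 40 percent cooling share Balcells cites, and a guess at how much free cooling could trim; all figures are illustrative assumptions, not Verne Global data.

```python
# Back-of-the-envelope sketch with assumed figures: how much of a power bill
# goes to cooling, and what a cooler climate might save. Illustrative only.
annual_power_bill = 10_000_000    # assumed $10M/year utility spend
cooling_share = 0.35              # midpoint of the 30-40 percent range cited

cooling_cost = annual_power_bill * cooling_share
free_cooling_reduction = 0.6      # assume year-round free cooling cuts cooling energy 60%

savings = cooling_cost * free_cooling_reduction
print(f"cooling spend: ${cooling_cost:,.0f}, potential savings: ${savings:,.0f}")
# cooling spend: $3,500,000, potential savings: $2,100,000
```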
Utility Reliability
The electrical infrastructure that we depend upon 24/7/365 is not all that reliable. Balcells said that society quickly forgets the issues with power reliability. He cited Superstorm Sandy and the Northeast Blackout of 2003 as massive disruptions to the power grid.
“In 2003, 50 million people were affected. How soon we forget,” he said. “Reliability is an issue, not just in the U.S. but worldwide.”
To find out more about how big data and the Internet of Things are impacting data centers, you can register and attend the session by Balcells at the spring Data Center World Global Conference in Las Vegas. Learn more and register at the Data Center World website.
4:30p
8-Fiber Versus 12-Fiber for 10G to 40/100G Migration. Who Wins?
Determining the right fiber count is vital for the cost and performance of today’s high-density 40/100G data center and network infrastructure applications. Recently, new 8-fiber solutions have proven to be more beneficial than traditional 12-fiber infrastructures.
Join us for a new webinar to learn how the new 8-fiber ribbon solution addresses the considerable cost, optical performance, and compatibility issues of the traditional 12-fiber infrastructure. Register Now
By eliminating unused fibers, unnecessary connectivity points, and conversion module hardware, the 8-Fiber Solution ultimately yields the best possible optical performance and cost savings for your data center. In fact, the elimination of conversion modules can save hundreds of dollars per port, potentially generating hundreds of thousands of dollars in cost savings depending on your data center applications. Moreover, the 8-Fiber Solution optimizes 40G-and-beyond networks while remaining backward compatible with legacy networks (1G and 10G), creating a scalable, future-proofed network.
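A quick sketch of the underlying fiber math, assuming each 40G link lights 8 fibers (4 transmit, 4 receive, as in 40GBASE-SR4) and gets its own trunk with no conversion modules; the link count is arbitrary and this is not vendor pricing data.

```python
# Rough arithmetic sketch: stranded fibers with 12-fiber vs. 8-fiber trunks when
# each 40G link uses 8 fibers and no conversion modules regroup the spares.
fibers_used_per_link = 8
links = 96   # arbitrary example link count

for trunk_size in (12, 8):
    installed = links * trunk_size
    lit = links * fibers_used_per_link
    dark = installed - lit
    print(f"{trunk_size}-fiber trunks: {installed} fibers installed, "
          f"{dark} dark ({100 * dark / installed:.0f}% stranded)")
# 12-fiber trunks: 1152 fibers installed, 384 dark (33% stranded)
# 8-fiber trunks: 768 fibers installed, 0 dark (0% stranded)
```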
Register Now
The live event will occur on Thursday, March 26, at 1 p.m. EDT (10 a.m. PDT).
Can’t make the live event? No problem. All registrants will have access to the archived version.
5:22p
HP Launches New Cloudline Servers for Service Providers
This article originally appeared at The WHIR
HP introduced its new Cloudline servers Tuesday at the Open Compute Summit to provide service providers with low-cost, scalable infrastructure. The Cloudline family of servers is intended to maximize data center efficiency and increase cloud service agility for providers running hyperscale architectures, the company said in a blog post.
HP Cloudline servers are customizable for various workloads, including cloud computing, web servers, content delivery, hosting, and big data. Cloudline is the result of a partnership with Foxconn, which gives it the benefits of an original design manufacturer (ODM) sourcing model. That sourcing model and the minimalist design allow HP to keep the price down.
“The business success of today’s service providers is directly correlated with their ability to cost-effectively acquire and operate their IT infrastructure to meet customer demand,” said Alain Andreoli, senior vice president and general manager, HP Servers. “Built on open-design principles with extreme scalability, HP Cloudline servers help service providers reduce infrastructure cost and accelerate service delivery to improve business performance.”
The first five Cloudline products are expected to be available at the end of March, and are designed to play different roles. The CL2200, for instance, is a 2U server for big data and storage-intensive cloud applications, while the CL1100 is a 1U system at a lower price point meant for front-end web serving. They will be available in various high-volume quantities.
Cloudline systems are based on the Intel Xeon E5 v3 processor platform. They will also easily integrate into a multi-vendor environment, HP says, because of the use of open management tools like OpenStack, and common industry interfaces like IPMI in hardware and firmware. Cloudline is also optimized for HP Helion, the OpenStack-based development platform the company launched last year.
HP will also deliver support services and finance packages for service providers, bundling them with the infrastructure and software necessary to quickly deploy a service to a new market. New support services include HP Service Provider Ready Solutions for a range of support services, HP Service Provider Growth Suite for investment and asset management solutions, HP DatacenterCare for Service Providers for support and consulting services, and HP PartnerOne Service Provider for streamlining and integration.
Worldwide server sales reached $50.9 billion last year, according to the Q4 2014 server report from IDC, released earlier in March.
This article originally appeared at http://www.thewhir.com/web-hosting-news/hp-launches-new-cloudline-servers-service-providers
8:18p
How Microsoft Got Rid of the Big Data Center UPS
Big uninterruptible power supply cabinets and rows of batteries that are similar in size to the ones under the hood of your car have been an unquestioned data center mainstay for years. This infrastructure is what ensures servers keep running between the time the utility power feed goes down and backup generators get a chance to start and stabilize.
But companies that operate some of the world’s largest data centers – companies like Microsoft, Facebook, or Google – are in the habit of questioning just such mainstays. At their scale, even incremental efficiency improvements translate into millions upon millions of dollars saved, so something like shaving 150,000 square feet off the size of a facility or improving the Power Usage Effectiveness rating by north of 15 percent has a substantial impact on the bottom line.
Those are the kinds of efficiency improvements Microsoft claims to have achieved by rethinking (and finally rejecting) the very idea of the big central stand-alone data center UPS system. The company now builds what essentially is a mini-UPS directly into each server chassis – an approach it has dubbed Local Energy Storage.
This week, Microsoft announced it would contribute the LES design to the Open Compute Project like it has done with the designs of servers that support its cloud services in data centers around the world. OCP is Facebook’s open source data center and hardware design initiative that was started in 2011. Microsoft joined OCP last year and has already contributed two generations of server design specs to the open source project. LES is part of the second one, called Open CloudServer v2, which the company submitted in October.
Microsoft’s cloud server blades on display at the Open Compute Summit in San Jose, California, in March 2015 (Photo: Yevgeniy Sverdlik)
It saves physical space (150,000 square feet for a typical 25-megawatt data center, according to Shaun Harris, director of engineering for cloud server infrastructure at Microsoft, who blogged about LES this week). It is also more energy efficient, because it avoids the double conversion electricity goes through in a traditional data center UPS. Finally, Microsoft saves by not adding reserve UPS systems (in case the primary ones fail) and by not having to build a “safety margin” into the primary UPS. Data center designers usually go through a lot of trouble to make sure the central UPS plant doesn’t fail, because if it does, every server downstream will go down when the utility feed fails.
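A back-of-the-envelope sketch of why avoiding that double conversion matters at this scale: the efficiency figures and electricity price below are illustrative assumptions, not Microsoft’s published numbers.

```python
# Illustrative arithmetic only: annual cost of conversion losses for a central
# double-conversion UPS versus batteries in the server chassis. Efficiencies
# and the electricity price are assumptions, not Microsoft's published figures.
it_load_mw = 25.0                 # the 25 MW facility size mentioned above
hours_per_year = 8760
price_per_mwh = 70.0              # assumed average utility price in $/MWh

central_ups_efficiency = 0.94     # assumed AC->DC->AC double-conversion path
local_storage_efficiency = 0.99   # assumed in-chassis energy storage path

def annual_loss_cost(efficiency):
    wasted_mw = it_load_mw / efficiency - it_load_mw
    return wasted_mw * hours_per_year * price_per_mwh

savings = annual_loss_cost(central_ups_efficiency) - annual_loss_cost(local_storage_efficiency)
print(f"estimated annual savings: ${savings:,.0f}")   # roughly $824,000 under these assumptions
```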
But building less stuff is only part of the story. Another big cost advantage of LES is that it uses commodity components that have been mass-produced for other industries for a long time, so they are less expensive than specialized data center equipment. The design uses battery cells that power cordless hand tools and electric vehicles, which ensures LES can “leverage industry volume economics and supply chain,” Harris wrote.
It also leverages the power train design of power supply units that already ship with every server, using something that has proven itself over time. “The LES design innovation takes commodity energy storage devices (batteries) and an industry proven PSU power train, fusing these together in a single package to maximize energy delivery efficiency and minimize cost overheads.”
Here’s a basic diagram that explains the LES design. Read Harris’s blog post for more details.

Microsoft’s server and data center engineers are not likely to stop at bringing UPS directly to the IT load. The company is also experimenting with bringing the primary power source all the way to the rack. Last year it announced completion of a proof-of-concept for a design that puts gas-powered fuel cells directly into the server rack, which both eliminates the losses that result from energy traveling through multiple devices before reaching its destination and opens the door to using biogas to power IT – something Microsoft has been spending a lot of resources and brain power on.