Data Center Knowledge | News and analysis for the data center industry
Wednesday, October 15th, 2014
1:00p
CloudSigma Joins Short List of Canonical-Certified Ubuntu Cloud Providers
CloudSigma is now one of a handful of Canonical-certified Ubuntu cloud providers. The certification means that Canonical has validated CloudSigma and that the company’s Ubuntu server guest images have been optimized for its cloud.
CloudSigma offers SSD storage and lets users define storage topologies on its cloud. It touts flexibility in resource provisioning as a major benefit. CloudSigma is cross-platform, but it already has a significant number of Ubuntu users.
The company has been working with Ubuntu over the past few months to become a certified partner. In addition to optimized guest images, customers coming to CloudSigma can be assured the Ubuntu server images are updated daily and have access to a full repository for Ubuntu-related cloud applications.
“The main benefit of the relationship is Ubuntu is just a smoother experience for customers,” said CloudSigma CEO Robert Jenkins. “We have the full integration with Ubuntu, including orchestration, contextualization, and functionality. What they’re getting from an end-user perspective is a best-practice cloud image.”
The optimization does make a significant difference, according to Jenkins. Depending on the configuration, there is a performance gain over non-optimized Ubuntu images.
CloudSigma joins an exclusive list of Ubuntu-certified cloud providers. The others are IBM SoftLayer, VMware, Joyent, Amazon Web Services, and HP.
Ubuntu has a sustained track record as the most popular guest operating system in the world’s major public clouds, with around 70 percent of workloads running on Ubuntu thanks to its security, versatility and a policy of regular updates.
CloudSigma has been expanding its cloud’s physical footprint rapidly. It recently added three new locations: San Jose, California; Miami; and Honolulu. Its other cloud locations include dual sites in its home base of Zurich and in the Washington, D.C., area.

3:30p
The Trend for IT: Big Computing Version 2
Mark Harris is the vice president of data center strategy at Nlyte Software, with more than 30 years of experience in product and channel marketing, sales, and corporate strategy.
Information Technology has been with us for 60 years! It’s hard to believe, but the first commercial mainframe was deployed in the early 1950s, and with that introduction the world was forever changed. Information could freely be captured, massaged and reported. Analysis of information happened in seconds rather than weeks or months.
Information Technology became a new industry full of pioneering innovation with the common goal of facilitating the management of information to derive more value from it.
The first generation of IT
Keeping in mind that a generation is defined as a span of time equal to 27 years, the entire first generation of Information Technology (IT) is best characterized as centralized problem solving. Call it Big Computing Version 1.
Large centralized computing was done in elaborate structures, dominated by IBM. These large computing centers were expensive and therefore tasked with solving large problems. Business users punched control cards and later sat on the periphery of this massive computing capability, gazing at their ASCII CRT screens and taking arm’s-length sips of that centralized computing power. The typical IT new-service delivery project spanned a year or more.
Distributed computing on a global scale
In the mid-1980s distributed systems arrived (due primarily to the invention of Ethernet and the inexpensive CPU), and the IT industry almost instantly took a 180-degree turn, moving computing closer to the user.
Business problem solving could now be done on $2,500 x86 machines that sat on the desktop. Every user got one of these devices, and everything in IT was networked, which allowed information and resources corporate-wide to be accessed as if they were local to each user.
With the rise of the commercial Internet in the mid-1990s, this network-enabled and distributed model of corporate computing was extended to also include access to information that was available elsewhere.
Centralized computing meets distributed users
While the current model of computing is still dominated by this Internet-enabled distributed processing model, the whole world of IT is going through foundational changes due to public clouds, tablet/handheld, virtualized desktop initiatives and social media.
We are going back to a world where heavy processing is once again happening in hyper-scaled data centers. We are going back to the model of big centralized processing, with fairly thin user viewports that require little if any maintenance or support. Users gain access to enormous processing power and diverse information that resides in big processing centers. Remember that this big processing also relies on big networking and big storage. So, while the term “Big Data” has already become part of our commonplace vernacular, it only focuses on the storage access aspects of a bigger computing plan.
Coming of age, big computing
Thinking more broadly than just the data itself, we should start referring to today’s new generation as the era of “Big Computing Version 2,” a term that describes the huge hyper-scale data centers housing all of this enormous, centralized back-end processing.
Facebook, Apple, Google, and Amazon are all examples of these hyper-scale centers that are powering the back-end of everything we do today.
Big Computing Version 2 is much more than just Big Data. It also includes big networking, big storage, big processing and big management.
The key to success: big management
Big management is perhaps the most critical component to include in the strategic plan for these hyper-scale processing centers. The core economics of these data centers are based upon the ability to understand and optimize costs.
At the transaction level, the management of the data center itself drives the cost of processing those transactions. Big management is about managing those transaction costs through a wide range of mechanisms, physical and logical.
Big management sets the stage for data centers that can closely align supply and demand. Keeping in mind that the demand for processing changes every second, it’s easy to see how continuously optimizing the status of servers can dramatically affect the bottom line. From a business standpoint, the economics associated with Big Computing Version 2 will be defined by big management.
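To make the supply-and-demand argument concrete, here is a deliberately simplified sketch in Python. All of the figures in it (server power draw, electricity price, demand profile) are hypothetical and ours, not numbers from the article; the point is only to show why matching powered-on capacity to demand moves the bottom line.

```python
# Toy illustration of "big management" aligning supply with demand.
# All numbers below are hypothetical, chosen only to show the mechanism.

SERVER_POWER_KW = 0.4        # assumed average draw of one powered-on server
PRICE_PER_KWH = 0.10         # assumed electricity price in dollars
FLEET_SIZE = 10_000          # servers available in the data center

# Hourly demand expressed as the number of servers actually needed (24 hours).
hourly_demand = [3_000] * 8 + [9_000] * 10 + [5_000] * 6

def daily_energy_cost(servers_on_per_hour):
    kwh = sum(n * SERVER_POWER_KW for n in servers_on_per_hour)
    return kwh * PRICE_PER_KWH

static_cost = daily_energy_cost([FLEET_SIZE] * 24)   # everything always on
managed_cost = daily_energy_cost(hourly_demand)      # supply tracks demand

print(f"always-on fleet: ${static_cost:,.0f} per day")
print(f"demand-matched:  ${managed_cost:,.0f} per day")
print(f"savings:         {100 * (1 - managed_cost / static_cost):.0f}%")
```

With these made-up inputs the demand-matched fleet costs roughly 40 percent less to power each day, which is the kind of continuous optimization the column is describing.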
Value-oriented innovation
With Big Computing Version 2, there simply is no limit to the amount of processing that can be brought together to handle any type of problem. Users can engage with all of these resources in a highly interactive fashion. They can begin to march down a path to solving business challenges long before they understand the exact steps required to get there, or even know precisely where they will arrive. Through creativity and imagination, they can try various approaches and scan vast collective knowledge sets, attempting to solve their problems in real time.
The main difference from the first generation of Big Computing (25 years ago) is the need to think at the service-delivery and transaction level. Historically, the total cost for traditional IT was sunk into the overall company budget and then apportioned based upon simple and often absurd units of measure (such as employee counts). Every group paid its fair share of the total cost for IT, regardless of whether it used it or not. Today, organizations are looking to tie their cost for IT directly to their actual usage. Big management is a critical part of that IT costing requirement.
Call it what you like: big computing V2 is here!
Regardless of what this new generation of IT is called (let me suggest Big Computing Version 2), centralized and heavily managed resources are becoming king again. The mainframes have been replaced or augmented by dense server clusters and farms, applications have been decomposed and rebuilt on resilient, scalable platforms, and the ASCII screens have been replaced by thin tablets and handheld devices.
Big Computing Version 2 sets the stage for everything we do at work or at home. It enables the world’s knowledge to be gathered, centralized and accessed and will be the standard fare for years and years to come.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

4:00p
The Critical Nature of Efficiency, Optimization and Predictive Reliability
IT organizations are increasingly being called upon to cost-effectively deliver reliable support for the entire catalog of business services, or risk seeing those services outsourced to a managed service provider.
Previously, capacity planners and IT architects would use historical trends to predict capacity requirements and simply over-provision to account for any peaks caused by seasonality, error, or extraneous influences like mergers and acquisitions. Over-provisioning, combined with poor lifecycle management of new resources provisioned in the data center, has led to capacity utilization and inefficiency issues.
While historical data is great for understanding past issues and the current state of the environment, the performance of servers, hosts and clusters is not linear; at some level of saturation, the performance of that infrastructure will quickly start to degrade. The impact is that business services dependent on that infrastructure suffer, and users experience longer response times, unavailable applications and unacceptable performance.
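The non-linearity is easy to see with a classic open-queueing approximation. The sketch below is a generic M/M/1-style response-time curve, not the model the CA whitepaper uses; it simply illustrates how response time explodes as utilization approaches saturation.

```python
# Generic illustration of why performance degrades non-linearly with utilization.
# Uses the classic M/M/1 approximation R = S / (1 - U), where S is the service
# time of a request and U is utilization. This is a sketch, not CA's model.

service_time_ms = 20.0   # assumed time to serve one request on an idle system

for utilization in (0.50, 0.70, 0.85, 0.95, 0.99):
    response_ms = service_time_ms / (1.0 - utilization)
    print(f"utilization {utilization:.0%}: ~{response_ms:,.0f} ms response time")

# Response time climbs gently at moderate utilization, then explodes near
# saturation (about 2,000 ms at 99% for a 20 ms request in this sketch).
```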
Lack of predictability has caused many IT organizations to suffer “VM Stall,” meaning they have been unable to advance to a strategic deployment of virtualization and achieve expected levels of consolidation. IT must understand the impact of change on the infrastructure to ensure performance; without that insight, it may be unwilling to take the risk of consolidating further.
In this whitepaper, you’ll learn how CA Technologies’ patented predictive capacity management combines real-world performance data and financial information with modeling, simulation and automation designed to deliver highly accurate, dependable projections of future performance and service levels. The business insights derived from this unique set of inputs give you the information to help effectively plan capital budgets, improve spending on innovation, avoid costly downtime, and better manage risk across your portfolio of IT applications.
For example, you’ll find out how:
- Resource scores can help you identify how your infrastructure actually works. The CA Resource Score (Rx) is a collection (or “vector,” if you prefer) of scores that characterize both the capacity of a system to provide resources to its users and the consumption of those resources by those users.
- The resource score vector currently includes CPU, memory, network and IO characterizations. Memory, network and IO resource scores are typically intuitive to performance professionals. (A simplified illustration of the idea follows this list.)
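As a rough mental model only (CA’s actual Rx scoring is its own, and nothing below comes from the whitepaper), a resource score vector can be pictured as a small record holding one score per resource dimension:

```python
# Illustrative only: a toy stand-in for a "resource score vector" with the four
# dimensions the whitepaper names (CPU, memory, network, IO). The real CA
# Resource Score (Rx) is computed by CA's tooling; this just shows the shape.
from dataclasses import dataclass

@dataclass
class ResourceScoreVector:
    cpu: float      # e.g. fraction of CPU capacity consumed at peak
    memory: float
    network: float
    io: float

    def bottleneck(self) -> str:
        """Name the most heavily consumed resource."""
        scores = {"cpu": self.cpu, "memory": self.memory,
                  "network": self.network, "io": self.io}
        return max(scores, key=scores.get)

host = ResourceScoreVector(cpu=0.62, memory=0.81, network=0.35, io=0.48)
print(host.bottleneck())  # -> "memory"
```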
Download this whitepaper today to learn why predicting the future will always be a complicated task, and why predicting future business metrics typically holds the greatest potential for error and surprise.

4:54p
CenturyLink’s New Seattle Shop to Build Its Service Platform
How does a telecom become a leader in cloud? So far, CenturyLink has provided the blueprint.
Following a number of key technology acquisitions, the company has opened a cloud development center in the heart of the action: Seattle.
The cloud center will act as the brain trust behind the CenturyLink cloud, helping to further usher in a unified platform across the entire portfolio and development of next generation services. It will also be an education center for customers to help them shore up cloud strategies and learn how to use CenturyLink’s services more efficiently.
The new center comes roughly a year after the acquisition of Infrastructure-as-a-Service and cloud management platform provider Tier 3, which will serve as the anchor of the cloud development center. The center will drive a portfolio-wide integration of CenturyLink services into a unified platform and serve as the birthplace of new cloud services.
The unified platform will include colocation, cloud, managed services and network. The center’s cloud development work will also focus on educating customers in practices such as DevOps and on more efficient use of CenturyLink tools.
About 300 employees will ultimately work at the 30,000-square-foot facility. It will include collaborative spaces, “team room” workspaces and a large area for hosting developer and startup events.
Seattle is a logical decision for the location, given that other cloud giants are based in the area: Amazon Web Services and Microsoft Azure. The location is rich with cloud talent and innovation.
Larger companies tend to be cumbersome and resistant to change, but the CenturyLink of today looks very different than the one from a few years ago.
While building out the CenturyLink cloud, it remains a major retail colocation provider worldwide, able to provide everything from rack space to cloud to managed services. Colocation will continue to see tighter integration with the cloud platform going forward.
“Internally, CenturyLink is in a major transformation,” said the company’s cloud CTO Jared Wray. “We’re working hard and bringing in cloud-like attributes like automation to our entire portfolio.”
Wray said that the company is seeing more customer projects being born on cloud and more customers setting up “Hybrid IT”: a combination of some services in the cloud, some on-premises and some in CenturyLink colocation. A unified platform will help customers realize these mixed environments as well as help with managing hybrid infrastructure from a single portal.
CenturyLink recently integrated utility-style managed services into the platform. It used to take weeks to set up managed services, but the utility-style offering meant managed services could be provisioned on-demand through the platform. The next batch to undergo this transformation will be networking services, such as MPLS and Virtual Private Networks (VPN).
“Our vision is to have a single platform, giving customers the ability to procure or provision any of our services through a single interface,” said Wray. “We’re constantly adding to the platform. Right now CenturyLink has the networking core, which has been siloed, and it is starting to merge all together.”
Wray said that customers want more integration with cloud, and that the center will drive this innovation as well as new services down the line.
“Next-generation services will be a big push,” he said. “We want to enable new services and launch them on our cloud. From our hyperscale to recent managed services, we’re constantly evolving.”
Since the acquisition of Tier 3 roughly a year ago, the company has:
- Launched globally available private cloud services in 57 data centers
- Added several new public cloud nodes, bringing the total number of worldwide locations to 12
- Contributed Panamax, a Docker management platform, to the open-source community
- Launched Hyperscale high-performance server instances designed for web-scale workloads, big data and cloud-native applications
- Continued to build on its commitment to Pivotal and the Cloud Foundry open-source project by joining the Cloud Foundry Foundation
CenturyLink also continues to expand its data center footprint, with several notable openings this year. Most recently it launched a second data center in Toronto and its first in Shanghai.

5:16p
Docker Containers Coming to Windows Server
Docker and Microsoft have partnered to bring Docker’s open platform for distributed applications to a future release of Windows Server. So far, Docker has only supported Linux, but Docker CEO Ben Golub told Data Center Knowledge in August that support for Windows was in the pipeline.
Docker Hub, the company’s online repository of Docker images, will also be integrated into Microsoft Azure directly through the Azure Management Portal and Azure Gallery.
Docker Engine, the open source runtime that builds, runs and orchestrates Docker containers, will work with the next Windows Server release, the companies said. Docker automates the deployment of any application as a portable, self-sufficient container that can run almost anywhere, from data centers to public clouds.
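For readers new to Docker, the workflow described above (package an application into a portable container image, then run it wherever a Docker daemon is available) looks roughly like the following sketch, which uses the community docker-py Python client. The image name and command are arbitrary examples, and the snippet assumes a local Docker daemon is running.

```python
# Minimal sketch of running a containerized workload with the docker-py client.
# Assumes the "docker" Python package is installed and a Docker daemon is
# reachable from this machine; the image and command are arbitrary examples.
import docker

client = docker.from_env()          # connect to the local Docker daemon

# Pull an image from Docker Hub and run a throwaway container from it.
output = client.containers.run(
    image="ubuntu:14.04",
    command="uname -a",
    remove=True,                    # clean up the container afterwards
)
print(output.decode().strip())
```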
Docker has seen a lot of interest from developers, organizations, and tech giants wishing to ensure it is compatible with their offerings. VMware, Google and Pivotal teamed up to bring Docker to the enterprise. Microsoft integrated Kubernetes into its Azure cloud in August. Kubernetes, an open source Google project, helps manage deployment of workloads packaged in Docker containers.
Docker Engine Images for Windows Server will be available in the Docker Hub, a community and repository with more than 45,000 Docker applications.
Docker Hub integration with Azure will allow Microsoft’s ecosystem of Independent Software Vendors and cloud developers to access the work of Docker’s community to further innovate on both Windows Server and Linux.
“The strength of Windows Server in the enterprise makes its inclusion into the Docker project a watershed event for the Docker community and ecosystem,” said Solomon Hykes, CTO and founder of Docker. “Creating a common approach and user interface for containerization and distributed applications will catalyze a new wave of applications that will be transformative across all organizations.”
Microsoft is also contributing to Docker’s open orchestration APIs to help ensure portability for multi-container applications. Developers are able to directly work with pre-configured Docker Engine in Azure to create multi-container Dockerized applications.
“The power of Azure and Windows Server leveraging the Docker platform redefines what enterprises should expect and demand from their cloud,” Golub said. “Together, we will provide a framework for building multi-platform distributed applications that can be created with exceptional velocity and deployed and scaled globally.”
“We recognize the importance of providing flexibility to our customers as they look to innovate in this mobile-first, cloud-first world,” said Scott Guthrie, executive vice president of Cloud and Enterprise at Microsoft. “To deliver this flexibility, we are already providing first-class support for Docker and Linux on our rapidly growing cloud platform, Microsoft Azure. Today, our partnership with Docker further deepens our commitment to help create an open platform powered by choice, bringing together Windows Server and Linux to drive application innovation.”

6:00p
365 Pitches Small Server Cabinets to SMBs With Modest Colo Needs
365 Data Centers has made “Compact Cabinets” available across its footprint of 17 U.S. data centers. They are about one-third the size of standard server cabinets.
365 is pitching the offering to small-to-medium-sized businesses (known as SMBs), systems integrators and small managed service providers (known as MSPs). It’s a colocation offering for those not yet in need of regular-size server cabinets.
Following a recent announcement of a cloud storage offering, this is another step in the company’s journey to a more diverse portfolio of services. Together, a compact cabinet and cloud storage can make for an attractive hybrid set-up a smaller business can afford.
365 Data Centers acquired much of its footprint when Equinix divested several facilities that had belonged to Switch & Data, a provider Equinix acquired in 2009. It is targeting the SMB market’s colocation needs in several second-tier and emerging markets.
In another signal to SMBs, 365 also eschews traditional annual colocation contracts in favor of month-to-month agreements.
The starting compact server cabinet comes with the following (a rough power-capacity calculation follows the list):
- Locking cabinet (14 RU x 24″ x 36″)
- 1Mbps of Internet service with burstable pricing options
- One cross connect
- Primary AC power at 120 volts x 15 amps
- 100 percent service availability SLA.
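As a rough sanity check on what such a cabinet can host (our arithmetic, not a figure published by 365 Data Centers), the listed 120-volt, 15-amp primary feed works out to under 2 kW:

```python
# Back-of-the-envelope power budget for the compact cabinet described above.
# The 80% continuous-load derating is a common electrical-code practice and is
# our assumption, not something 365 Data Centers specifies here.

volts, amps = 120, 15
circuit_kw = volts * amps / 1000          # 1.8 kW nameplate circuit capacity
usable_kw = circuit_kw * 0.80             # ~1.44 kW usable for continuous load

servers_at_250w = int(usable_kw * 1000 // 250)
print(f"circuit capacity: {circuit_kw:.2f} kW, usable: {usable_kw:.2f} kW")
print(f"roughly {servers_at_250w} servers at an assumed 250 W each")
```

That budget comfortably fits a handful of modest 1U servers or a small hybrid footprint, which is consistent with the SMB positioning of the offering.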
“SMBs, integrators and MSPs need both dedicated colocation and cloud services,” said Keao Caindec, chief marketing officer, 365 Data Centers. “Our Compact Cabinets are a cost-effective alternative for businesses that are not ready to put their mission-critical applications in the public cloud. They need a hybrid environment.”

9:02p
IBM Gives Bluemix PaaS Users Cloud IoT Capabilities
Continuing to grow the number of things Bluemix can do, IBM has integrated Internet of Things functionality into its Platform-as-a-Service. The IBM Internet of Things Foundation is a cloud for IoT — a single service that enables someone to connect a device to the Internet, have it generate data, store that data and present it through an application the user has built on the Bluemix platform.
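In practice, the “connect a device and have it generate data” step usually amounts to publishing sensor readings over a lightweight messaging protocol such as MQTT. The sketch below uses the generic paho-mqtt Python library with placeholder connection details; the broker host, topic, and payload format are not IBM’s, and the IoT Foundation documentation is the authority on the real endpoint and credentials.

```python
# Hedged sketch: publishing a sensor reading over MQTT with the paho-mqtt
# library (constructor shown follows the paho-mqtt 1.x API). Broker host,
# topic, and payload format are placeholders, not IBM's actual conventions.
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "example-iot-broker.example.com"   # placeholder hostname
TOPIC = "devices/thermostat-42/events"           # placeholder topic

client = mqtt.Client(client_id="thermostat-42")
client.connect(BROKER_HOST, port=1883)
client.loop_start()

reading = {"temperature_c": 21.5, "timestamp": int(time.time())}
client.publish(TOPIC, json.dumps(reading), qos=1)

client.loop_stop()
client.disconnect()
```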
This is the latest in a growing set of features users can access through the PaaS. Just last week, IBM made APIs for Watson, its “cognitive computing” system, available on Bluemix. Providing a variety of advanced services on a PaaS makes it easier for developers to build applications with advanced functionality.
Equipment and asset manufacturers can use IoT to provide remote service and monitoring to residential and commercial customers. It has several potential applications across many industries. One example IBM gave was an oil-and-gas company that was remotely monitoring and providing predictive maintenance to critical equipment.
“Think of the IoT Foundation as an extremely fast on-ramp to the cloud for the millions of intelligent IoT devices that are now being shipped, and the billions already Internet connected,” said John Thompson, vice president, Internet of Things, IBM.
IBM has recruited a group of companies that signed on as initial partners to the cloud IoT initiative — a group it expects will grow. These initial partners are ARM Holdings, B&B Electronics, Elecsys, Intel, Multi-Tech Systems and Texas Instruments.

9:14p
Private OpenStack Cloud Provider Blue Box Given $10M in Series B Funding Round 
This article originally appeared at The WHIR
Blue Box, a cloud startup that aims to deploy private clouds anywhere in the world in under an hour, has completed a $10 million Series B financing round with investors Voyager Capital and Founders Collective.
Blue Box launched in general availability in May 2014 with a “best of both worlds” solution designed to provide the agility and elasticity of public cloud coupled with the control, compliance, data sovereignty, and performance benefits of a traditional, on-premise private cloud.
According to a blog post from Blue Box founder and CTO Jesse Proudman, the Series B funding will help further develop its technology, including updates to its management suite, Box Panel, and adding features to Blue Box Cloud deployments. The company will also be hiring in its engineering department, and invest in sales and marketing.
Proudman sees Blue Box as a unique solution in the marketplace. It allows private OpenStack clouds to be consumed as a service rather than requiring companies to build their own private clouds on their own infrastructure using an OpenStack distribution. Blue Box provides customers a private cloud on dedicated hardware and manages it on their behalf.
“We feel fortunate to have entered the market with a unique product, at a point in time where OpenStack has reached a level of operational and feature maturity, and customers are ready to shift their focus from running infrastructure to building apps that bring true enterprise value,” Proudman wrote.
“OpenStack is the open source cloud platform for the future, and Blue Box has delivered a service powered by the software that is precisely what many enterprises and service providers want to consume.”
Private cloud is still enormously appealing to companies. A recent report from analyst firm Technology Business Research anticipates that private cloud adoption will grow at a faster rate than public cloud adoption. The private cloud market was worth $8 billion in 2010 and $32 billion in 2013, and is expected to grow to $69 billion in 2018.
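For context, a quick back-of-the-envelope calculation on the figures above shows the compound annual growth rates the forecast implies (our arithmetic, not TBR’s):

```python
# Implied compound annual growth rate (CAGR) from the market figures cited above.
def cagr(start_value, end_value, years):
    return (end_value / start_value) ** (1 / years) - 1

print(f"2010-2013: {cagr(8, 32, 3):.0%} per year")    # roughly 59% per year
print(f"2013-2018: {cagr(32, 69, 5):.0%} per year")   # roughly 17% per year
```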
Blue Box is planning a comprehensive partnership strategy to be announced in the coming weeks and months.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/private-openstack-cloud-provider-blue-box-given-10m-series-b-funding-round

10:25p
PowerSecure Buys Data Center Business from Electrical Contractor PDI
PowerSecure, a Wake Forest, North Carolina-based vendor of industrial- and commercial-grade electrical equipment, has acquired the data center electrical services business of electrical contractor Power Design Incorporated (PDI) for $13 million cash.
The move is an attempt by PowerSecure to increase its share of the data center infrastructure market. It is a major expansion into the space for the vendor, whose play there has until now consisted of selling switchgear for projects St. Petersburg, Florida-based PDI has done for its data center customers.
“This transaction accelerates this opportunity to serve this very important customer set,” Sydney Hinton, PowerSecure CEO, said in a conference call with analysts Tuesday.
Besides being a strategic move, the deal is a very real opportunity to increase PowerSecure’s revenue. Its management was aware of PDI’s sales pipeline prior to closing the acquisition and the decision to proceed very much rested on that opportunity.
“We have visibility into a lot of potential projects but we do have to do the selling,” Hinton said. “That’s a risk that we’ve assumed.”
The deals in the pipeline amount to $15 million to $20 million of additional 2015 revenue for the vendor, which reported $270 million in sales for 2013, Hinton said. If its sales team manages to close the deals in PDI’s pipeline, that revenue will add $0.05 to $0.07 to its earnings per share for next year.
PowerSecure leadership does not assume it can convert the entire pipeline, but they are certain a lot of the opportunity is within their grasp. The company knows a lot about the deals in the pipeline because it has given quotes for switchgear on the projects.
“They’re on the green; they’re ready to be putted,” Hinton said.
Most of the revenue PDI’s data center business generates is concentrated with two big customers, he said. The company designs and deploys electrical infrastructure systems for large enterprise data centers and colocation providers.
Hinton did not specify who those clients were, but PDI’s website lists at least one major data center provider as a customer: CoreSite.
The long-term gain for PowerSecure is the specialized data center electrical infrastructure design skill-set it has gained from the acquisition. “The whole electrical design scheme is a nice pick-up for us,” Hinton said. About 20 PDI employees will join PowerSecure.

11:00p
How is a Mega Data Center Different from a Massive One?
What is a mega data center, and how is it different from a massive or a large one? What is the difference between a small data center and a mini data center, and what does the expression “high density” really mean?
Everyone has their own meaning for each of those terms, which is a problem, according to the Data Center Institute (DCI), an industry think tank that is part of AFCOM. In an attempt to establish a common language for data center industry professionals around the world when they talk about size and power density, DCI has developed a set of standards it describes in a paper published this week.
Disclosure: AFCOM and Data Center Knowledge are both properties of iNET Interactive and as such are sister companies.
“The goal is to have clear communication in the industry,” Tom Roberts, AFCOM chairman and one of the paper’s authors, said. “We’d really like this to become a standard guideline. If somebody says they have a large data center, then you’ll know what that really means.”
When Microsoft says it will build a 1.2 million-square-foot data center on a property in Iowa, you need to answer a few more questions to know what the actual compute capacity at the site will be. When Switch says its data center in Las Vegas can cool up to 1,500 watts per square foot, or when Digital Realty claims one of its facilities can support up to 15kW per cabinet, the figures, while they may be true, are of little use without further clarification.
The concept of “size” in data center discussions and reports is sometimes defined by power capacity, utility supply, number of racks, building area or compute room area. Density today also has a variety of meanings. DCI proposes that in a data center context, size should describe only the compute space, and density should be measured as peak kW load.
Size, according to the think tank, is defined using rack yield and area of the compute space. Here are DCI’s size definitions:
[Chart: DCI data center size definitions, by rack yield and compute space area]
If a facility’s rack yield and compute space area do not fall on the same row of the chart, DCI recommends using whichever of the two places the facility in the larger size category.
For density, the paper proposes that measured peak density be used, rather than design or average density commonly used in descriptions. The standard takes into account both rack density and compute space density, the latter defined as measured peak load divided by rack yield.
Here are DCI’s density definitions:
[Chart: DCI data center density definitions, by rack density and compute space density]
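As a simple worked example of the density arithmetic described above (the numbers are invented for illustration, and the thresholds that map them onto DCI’s named tiers live in the chart):

```python
# Worked example of the density measures described above. All figures are
# made up for illustration; the DCI chart supplies the thresholds that map
# them onto a named density tier.

measured_peak_load_kw = 1_200   # measured peak load of the whole compute space
rack_yield = 400                # number of racks in the compute space

# Compute space density, as the paper defines it: measured peak load / rack yield.
compute_space_density_kw_per_rack = measured_peak_load_kw / rack_yield   # 3.0 kW

# Rack density is the measured peak load of an individual rack (hypothetical value).
hottest_rack_peak_kw = 7.5

print(f"compute space density: {compute_space_density_kw_per_rack:.1f} kW per rack")
print(f"densest rack:          {hottest_rack_peak_kw:.1f} kW")
```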
In preparing the paper, DCI has vetted its standards with data center professionals in the U.S. and Asia Pacific, Roberts said. In all, about 100 people have seen the standards and had a chance to comment on them before the paper was published.
The organization is still collecting comments and plans to make revisions to the paper. The final vetting period ends October 22. Send comments to troberts@afcom.com.
The paper, titled Data Center Standards, is available for download at www.AFCOM.com.