Data Center Knowledge | News and analysis for the data center industry
Tuesday, July 8th, 2014

12:00p
CliQr Moves Your Enterprise Apps into the Cloud so You Don’t Have To

While working as key engineers on VMware’s vCenter cloud initiatives and watching the rise of the Amazon Web Services public cloud empire, Gaurav Manglik and Tenry Fu realized that although the heavyweights had built successful products on the infrastructure side of things, there was still a missing piece.
Moving applications into the cloud was too complex an affair for enterprises to really embrace cloud infrastructure. They needed help with cloud orchestration. “Companies like VMware were doing a great job in the infrastructure layer, but we needed something for app management on top,” Manglik says. “So we had that general notion and decided to leave VMware to start something in this space.”
The pair left VMware in 2010 and shortly thereafter founded CliQr Technologies, where they built a solution that automates every aspect of the cloud infrastructure needed for deployment and ongoing management of applications. Using the CliQr interface, a user simply describes their application’s topology and infrastructure requirements, and CliQr does the heavy lifting of provisioning the right infrastructure and deploying the app. The user picks from a list of supported cloud providers, but the application is not married to whichever cloud they choose.
CliQr raised seed funding and an $8 million Series A round from Google Ventures and Foundation Capital and was one of the first cloud orchestration vendors to support Google’s Infrastructure-as-a-Service offering called Compute Engine when it came out. Series B is already in the bag, but Manglik, the company’s CEO, would not share the specifics before the official announcement.
It’s not like Docker
There are plenty of cloud orchestration and automation startups promising the ability to easily deploy enterprise applications on any cloud. The one that has gotten the most hype recently is Docker, whose solution packages applications in Linux “containers,” which can then be deployed on a variety of infrastructure types.
CliQr is different from Docker, Manglik explains. In CliQr, a Docker container is an image a user can plug into the application profile they create. “What Docker does is provide portability of containers across clouds,” he says. “What Docker does not do is the orchestration.”
A complex application that’s using Docker will typically have multiple containers, and deploying it still requires calls into cloud APIs (Application Programming Interfaces) to orchestrate infrastructure provisioning and configuration. Deployment of the Docker containers will need orchestration too. “That is something that needs to be done outside of Docker, and that’s what CliQr does,” Manglik says.
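To make the distinction concrete, here is a rough sketch of that outer orchestration layer in Python. The provision_vm() helper, the host details and the image names are all hypothetical stand-ins, not anything from CliQr or Docker's actual tooling; only the docker commands themselves are standard CLI calls of the era.

```python
import subprocess

def provision_vm(cloud: str, size: str) -> str:
    """Hypothetical stand-in for a cloud API call (EC2, GCE, OpenStack...).
    Returns the address of the newly provisioned host."""
    raise NotImplementedError("wire this to your cloud provider's SDK")

def deploy(host: str) -> None:
    """Docker runs the containers, but the ordering, linking and placement
    below is the orchestration that Docker leaves to external tools."""
    def run(cmd):
        subprocess.check_call(["ssh", host] + cmd)
    run(["docker", "run", "-d", "--name", "db", "postgres"])
    run(["docker", "run", "-d", "--name", "web", "--link", "db:db",
         "-p", "80:80", "myorg/webapp"])

# host = provision_vm("aws", "m3.large")
# deploy(host)
```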
Cloud-borne in less than 15 minutes
A homegrown enterprise web application built on Java, for example, may have a topology that includes an Oracle database, a Weblogic application server, a load balancer and a NoSQL component. The user describes that topology through the CliQr UI, along with any data and data backup that needs to be plugged into the application and custom configuration requirements.
They describe interdependencies between all the moving parts and infrastructure requirements, such as the kind of storage, compute and networking resources and firewall rules. “All of that goes in a simple metadata profile,” Manglik says. Once the profile is created, the user’s job is done. After they specify which cloud they want to deploy in, CliQr picks up and does everything else.
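As a rough illustration of what such a metadata profile might capture, here is a sketch expressed as a Python dictionary. Every field name and value below is an assumption made for illustration, not CliQr's actual schema.

```python
# Hypothetical application profile in the spirit of CliQr's metadata model.
# All keys and values are illustrative assumptions, not CliQr's schema.
app_profile = {
    "name": "orders-webapp",
    "tiers": [
        {"role": "load_balancer", "depends_on": ["app_server"]},
        {"role": "app_server", "runtime": "weblogic",
         "compute": {"vcpus": 4, "memory_gb": 16},
         "depends_on": ["database", "nosql_cache"]},
        {"role": "database", "engine": "oracle",
         "storage": {"type": "block", "size_gb": 500}},
        {"role": "nosql_cache", "engine": "cassandra"},
    ],
    "network": {
        "firewall_rules": [
            {"port": 443, "source": "0.0.0.0/0"},    # public HTTPS
            {"port": 1521, "source": "app_server"},  # DB reachable from app tier only
        ]
    },
    "target_cloud": "aws",  # swapped at deploy time without changing the profile
}
```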
The platform supports VMware, OpenStack and CloudStack private clouds. On the public cloud side, CliQr orchestrates AWS, Rackspace, Windows Azure, Google Cloud Platform, HP Public Cloud, Cloud N (an NTT Communications service) and Dimension Data.
CliQr has a number of preset profiles for some of the most common applications – Hadoop, High Performance Computing, Java web apps or Ruby on Rails workloads – and these can be deployed in the cloud within an hour.
Ultimately, time to deploy depends on the complexity of the application. “I personally on-boarded applications within 15 minutes,” Manglik says. More complicated enterprise apps, such as Oracle’s Siebel Customer Relationship Management, may take two to three days, which is still far shorter than the weeks it can take a company to move them on its own.

12:30p
Cooling Trends: Cost-Cutting Opportunities

Jeff Klaus is the General Manager of Data Center Manager (DCM) Solutions at Intel Corporation. He can be reached at Jeffrey.S.Klaus@intel.com.
In the future, when you open the door to the data center, will you still hear that loud hum of the air handlers? Will the temperature drop as you step over the threshold? Some IT and facilities managers accept as inevitable that data centers have to operate between 64 and 68°F and that cooling systems have to scale in direct proportion to server and storage expansion. Fortunately, they are wrong. The latest technology advances and best practices are changing the cooling practices and approaches in the data center.
Smarter hardware and middleware
Data center equipment providers have responded to spiraling energy costs by building in more intelligence for thermal and power monitoring. Step one to reining in cooling costs: start monitoring these smart devices. Real-time temperature readings can point to hot spots where cooling needs to be adjusted. Snapshots during low and peak periods of activity can also help data center managers gauge requirements for planning purposes.
More important, the fine-grained information makes it possible to track cooling efficiencies over time. By leveraging middleware that automates the collection and logging of the temperature and power consumption data, patterns can be extracted and analyzed. The same middleware technology can drive intuitive dashboards, with data displayed in the form of thermal and power maps.
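The collection layer can be as simple as a polling loop that reads each server's inlet temperature and power draw and appends them to a log for later analysis. The sketch below is a minimal illustration, with a hypothetical read_sensor() helper standing in for whatever protocol the hardware exposes (IPMI, Redfish, vendor APIs); the host names are placeholders.

```python
import csv
import time
from datetime import datetime, timezone

SERVERS = ["rack1-node01", "rack1-node02", "rack2-node01"]

def read_sensor(host: str) -> tuple[float, float]:
    """Hypothetical stand-in for an IPMI/Redfish/vendor query.
    Returns (inlet_temp_celsius, power_watts)."""
    raise NotImplementedError("wire this to your hardware's telemetry API")

def poll_forever(interval_s: int = 60, logfile: str = "telemetry.csv") -> None:
    with open(logfile, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            now = datetime.now(timezone.utc).isoformat()
            for host in SERVERS:
                temp_c, watts = read_sensor(host)
                # One row per server per interval; downstream tools can
                # aggregate these rows into thermal and power maps.
                writer.writerow([now, host, temp_c, watts])
            f.flush()
            time.sleep(interval_s)
```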
Closed-loop control
Some of the same technology that can gather real-time temperature and power data can be used to adjust equipment on the fly and lower the demand for cooling. The world’s largest data centers employ power capping and dynamic workload management to respond to fluctuating demand while keeping energy – and therefore temperature – within pre-defined thresholds.
While tracking real-time conditions, data center managers can also employ energy management solutions that let them control server performance levels. Slower clock speeds reduce power draw and dissipated heat. When balanced against user requirements, subtle tuning of server speeds has been proven to significantly lower energy consumption and cooling requirements without affecting user experiences.
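On Linux servers, one concrete knob for this kind of tuning is the kernel's cpufreq sysfs interface. The sketch below shows the mechanism only; the right frequency cap depends entirely on the workload, and production energy management suites wrap this in policy.

```python
import glob

def set_max_freq_khz(khz: int) -> None:
    """Cap the maximum CPU frequency on every core via the standard Linux
    cpufreq sysfs interface. Requires root. A lower cap reduces power draw
    and dissipated heat at the cost of peak performance."""
    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq"):
        with open(path, "w") as f:
            f.write(str(khz))

# Example: cap cores at 2.0 GHz during low-demand periods.
# set_max_freq_khz(2_000_000)
```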
The combination of monitoring, power capping and dynamic server performance enables many cost-cutting practices. Identifying and minimizing the number of idle servers, for example, can reduce energy and cooling requirements by 10 to 15 percent in typical data centers. In general, the new approaches help data center managers avoid overprovisioning of both compute power and the related cooling systems.
Redefining “normal” operation
Armed with accurate data center energy and cooling data and closed-loop management, IT and facilities teams have been turning up the thermostats in data centers. Vendors have responded by confirming reliable operation at higher temperatures.
Bottom line: for every degree that ambient temperature is raised, cooling costs drop by roughly 4 percent. A small temperature change can yield major savings, and with monitoring in place, data center managers can easily experiment while minimizing risks to equipment life and, therefore, service continuity.
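A quick back-of-the-envelope calculation shows why: compounding roughly 4 percent per degree, raising the setpoint from 68°F to 75°F trims cooling costs by about a quarter. The figures below are illustrative, not from the article.

```python
def cooling_savings(degrees_raised: int, savings_per_degree: float = 0.04) -> float:
    """Fraction of cooling cost saved, compounding ~4% per degree raised."""
    return 1 - (1 - savings_per_degree) ** degrees_raised

# Raising the setpoint 7 degrees F (e.g., 68F -> 75F):
print(f"{cooling_savings(7):.1%}")  # ~24.9% lower cooling costs
```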
The cost of cost-cutting
How practical is real-time monitoring, the key enabler for the cost savings just described? Since the granular energy and temperature data can be collected programmatically, the solutions that unlock efficiency improvements do not require expensive hardware overlay networks. Additionally, best-in-class middleware technology is hardware agnostic, driving energy management solutions and approaches that can be applied across hardware sold by Dell, HP, IBM and Intel, among others.
The business case for a dashboard that puts IT in control of power and cooling costs offers short payback periods, and the middleware simplifies both deployment and ongoing support for these solutions. With no major obstacles to adoption, IT and facilities teams should target the reduction of cooling costs as an achievable short-term goal with very long-term benefits.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

1:00p
German Internet Exchange Operator Moves into Sabey’s Manhattan Tower

DE-CIX, operator of the world’s largest Internet exchange point in Frankfurt, is expanding into Sabey’s skyscraper data center in downtown Manhattan.
The 32-story Intergate.Manhattan will be the eighth data center in the New York metro to host a Point of Presence for DE-CIX, which has been aggressively expanding its footprint in the market since entering it in September 2013. This is the German company’s first venture into the U.S.; in addition to Frankfurt and New York, it operates exchanges in Munich, Hamburg and Dubai.
The New York and New Jersey market is one of the world’s most important network interconnection hubs, along with markets like London, Frankfurt, Singapore and Silicon Valley.
Euro exchanges moving into U.S. market
European Internet exchange operators have been expanding into the U.S. in parallel with establishment of Open-IX, an organization that endorses exchanges and data centers that satisfy its standards. Its mission is to encourage more competition and diversity in the U.S. exchange market, currently dominated by the likes of Equinix and Telx.
DE-CIX is not the first European operator to get into Intergate.Manhattan. AMS-IX (its competitor in Amsterdam) announced a PoP there in April.
DE-CIX will use Sabey’s connectivity services to plug into Manhattan’s most important carrier hotels at 111 Eighth Avenue (owned by Google) and 60 Hudson Street, both of which are also its Internet exchange point locations. Sabey will also be a DE-CIX customer, using its peering services and Layer 2 connectivity to other New York data centers.
Presence in every relevant NY data center
Frank Orlowski, chief marketing officer for DE-CIX, said the company was committed to growing its New York exchange to be one of the world’s five largest.
“Expanding to Sabey’s Intergate.Manhattan facility is part of our effort to establish a DE-CIX presence in every relevant data center and carrier hotel in this metro,” he said. “DE-CIX New York is just a cross-connect away from 99 percent of the providers in this metro market.”
The operator’s first customer in New York was Akamai, which signed up to use its exchange at 111 8th Avenue in May. Its other New York City Internet exchange point locations (in addition to 60 Hudson) are at 32 Avenue of the Americas and 85 10th Avenue.
DE-CIX New York also has PoPs in Newark, New Jersey, and in Long Island.
Transformation of the Verizon building
Seattle-based Sabey bought the 1-million-square-foot skyscraper, known as the “Verizon building,” in 2011 for $120 million. The company has been upgrading the building’s infrastructure and installing floor-to-ceiling windows in portions designated for office space.
The company’s president John Sabey told us in May that about 150,000 square feet of data center space had been built out. Its data center tenants include service providers Windstream and Datagram.
In addition to data center and office use, the developer is marketing the property to life sciences companies.

1:30p
Databricks Unveils Spark as a Cloud Service

Databricks, founded by the creators of Apache Spark, recently raised a $33 million Series B round and revealed a new cloud-based platform built around the open source cluster computing framework. The company also announced a couple of important partnerships with SAP and DataStax, bringing its Apache Spark 1.0 distribution to SAP’s HANA real-time analytics platform as well as improving compatibility with the Cassandra database.
Databricks offers a fully managed service for the open source framework. It is an integrated cloud platform that provides easy Big Data analytics and processing.
Spark is an alternative to MapReduce, the technology Google recently said it was done using. Hadoop, which was originally inseparable from MapReduce, is very much alive and thriving, however, and Spark is one of the most promising technologies in its ecosystem.
Along with announcing that it no longer uses MapReduce, Google introduced a new cloud service, called Cloud Dataflow, which combines batch analytics with streaming analytics. Databricks’ new cloud service will be competing with Dataflow.
Spark allows in-memory analytics, which is much faster than MapReduce and enables stream analytics. With more than 200 contributors, it’s one of the most active projects in the Hadoop ecosystem.
Spark in the cloud
The Databricks cloud platform is a turnkey solution that brings Spark to a wider audience. It helps companies provision a Spark cluster easily, with the platform handling all the details: provisioning servers on the fly, streamlining import and caching of data, handling all elements of security and continually patching and updating Spark.
In addition to letting users deploy and leverage the rapidly growing ecosystem of third-party Spark applications, the Databricks cloud comes with a set of built-in applications which help customers access and analyze data faster.
“One of the common complaints we heard from enterprise users was that Big Data is not a single analysis; a true pipeline needs to combine data storage, ETL, data exploration, dashboards and reporting, advanced analytics and creation of data products. Doing that with today’s technology is incredibly difficult,” said Databricks founder and CEO Ion Stoica. “We built Databricks Cloud to enable the creation of end-to-end pipelines out of the box while supporting the full spectrum of Spark applications for enhanced and additional functionality.”
Spark provides support for interactive queries (Spark SQL), streaming data (Spark Streaming), machine learning (MLlib) and graph computation (GraphX) natively with a single API across the entire pipeline.
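For a small taste of that unified API, the PySpark sketch below runs a word count with the core RDD API and then queries the same data through Spark SQL in one program. It follows the Spark 1.0-era Python API (inferSchema and registerAsTable were later superseded); the file path is a placeholder.

```python
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext("local", "unified-api-demo")
sqlContext = SQLContext(sc)

# Core RDD API: a classic word count.
lines = sc.textFile("logs.txt")  # placeholder path
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
print(counts.take(5))

# Spark SQL over the same data, in the same program.
records = lines.map(lambda line: {"text": line})
schema_rdd = sqlContext.inferSchema(records)
schema_rdd.registerAsTable("logs")
errors = sqlContext.sql("SELECT text FROM logs WHERE text LIKE '%ERROR%'")
print(errors.count())
```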
The Databricks cloud is currently in limited availability.
A deeper war chest
The Series B funding round in late June was led by New Enterprise Associates (NEA) with follow-on investment from Andreessen Horowitz. Both firms are active investors in the Big Data space.

2:00p
Location, Reliability and Service: Finding Your Colocation Provider

Your organization is growing, and its dependency on technology continues to grow as well. The modern business builds its organizational structure directly around the capabilities of IT and the data center. As your business evolves, adopts new applications and increases the number of users connecting to its infrastructure, it is critical to work with the right data center technologies to facilitate growth and expansion.
IT organizations use colocation centers for a variety of reasons. They get space, power, cooling, connectivity and security – everything needed to support their critical IT operations. They also avoid the hefty capital expenditure associated with building an onsite data center.
But not all colocation providers offer the same value and functionality, and many come with inherent risks. When choosing a colocation provider, careful consideration is essential. In this whitepaper from Cobalt Data Centers, we examine the risks and key aspects of selecting the right colocation partner for your organization.
This includes considerations around:
- Understanding data center risks
- Why location absolutely matters
- How to protect mission-critical systems and data
- Designing an advanced infrastructure
- Implementing superior service
- How to select the best colocation partner for your organization
It’s important to choose a colocation setting that is ideally located, provides the right protection, and offers the right level of service for mission-critical IT systems and data. Customers can’t afford unnecessary risk, and they rely on services that meet their most stringent IT requirements.
Download this whitepaper today to see why the right data center colocation partner is absolutely critical to the success of your organization. Remember, the capabilities of your business model are very much governed by what your data center can deliver. By working with the right data center partner, you can directly align business goals with the capabilities of IT.

3:42p
Report: Cloud Storage Firm Box Raises $150M in Run-Up to IPO

Cloud storage provider Box raised $150 million from TPG and hedge fund Coatue Management, according to a report by the Wall Street Journal. Box was revving up for an initial public offering in April but decided to push those plans back because of an unfriendly market for cloud companies.
Its S-1, filed ahead of the potential IPO, revealed it had a net loss of $168 million last year. The company’s previous funding round, in November of last year, was $100 million at a $2 billion valuation. It now has a $2.4 billion valuation.
High cost to play
Box may be one of the fastest-growing and better-known players in the cloud storage space, which also includes the similarly named Dropbox, but the online storage business it is in is rough and capital-intensive. The S-1 revelations are not unlike those of Carbonite (another player in the space) when it went public.
Box’s operational expenses and advertising budget were astronomical relative to revenue in its original S-1. The company has since updated the filing to show sales, however, and there are positive signs. Revenue was up 94 percent in the first quarter of 2014 compared with the same quarter a year earlier, rising to $45 million from $23 million. Still, Box lost $38 million on the quarter, compared to a $34 million loss a year prior.
The company has raised a staggering half a billion dollars to date across a dozen or so rounds, including angel funding from Mark “I own a ton of stuff, including a basketball team” Cuban. Other notable investors include Salesforce.com, which has built a SaaS empire, and several well-known venture capital firms.
Storage alone is a tough market, one that has seen large players like AOL and Yahoo shut down their online storage operations (about five years ago) because of struggles with profitability. Several free options offer more storage than most consumers need, which makes it hard to get customers to pay.
User-base reassurance, differentiation
Box, however, touts 8 million users, and along with its investors sees a lot of potential, not only in expanding the user base but in better monetizing the product in general.
Despite a few big-name online storage closures, companies like Box, Dropbox and Drop.io (acquired by Facebook several years ago) arose alongside the cloud revolution, differentiating themselves from plain old online storage offerings with added value such as collaboration capabilities and business-continuity-type features.
“The lines between storage and collaboration are becoming increasingly blurred,” said Philbert Shih, managing director of Structure Research. “This along with the continued growth in web-based data and content has caused a spike in demand for these services. It should come as no surprise that providers are keen to pursue this opportunity and are investing heavily as they compete on features and functionality and build out their data center footprints.”
Box’s latest enhancements include a note-taking feature for mobile devices and technology it picked up through the acquisition of a company called Streem: a file system that lets people access and view files without downloading copies onto their devices, and an on-the-fly video transcoder.
Business plan rather than potential
Companies like Box and Dropbox fought the tide. Box recognized the enterprise as an opportunity and began adding features to attract the sector.
Several companies have received astronomical valuations based on user numbers rather than revenue. Facebook, Twitter, Box, Dropbox, Instagram and other cloudy companies have all been valued based on potential rather than actual revenue at one point or another.
Box has a business plan (whereas Instagram didn’t necessarily have a clear one when it was acquired), and its success depends on differentiating in a crowded market, raising average revenue per user and continuing to evolve into new and complementary functionality.

4:11p
Data-Center-In-a-Box Startup NIMBOXX Raises $12M, Launches Product

NIMBOXX, one of the startups trying to bring hyper-scale computing to the masses, has raised $12 million in funding and launched what it calls an “atomic unit” of the software-defined data center.
The two-year-old company has developed a single converged platform of servers, storage, networking and security built from the ground up. The data center in a box recently won the GigaOm 2014 Structure Launchpad Competition, securing the most People’s Choice Award votes.
The Series A round comes from Hong Kong-based institutional investor SMC Holdings.
The company says its converged platform ties together mesh-based, scale-out, dynamic storage orchestration, self-balancing workload heuristics and a shared-nothing management model. It also provides dynamic adaptation to cache and storage changes based on multi-dimensional workload analyses.
Deployments can start with a single node and scale to hundreds. The foundation of the solution is the firm’s Mesh Operating System (MeshOS), which installs on the physical server and gives direct control over all hardware resources. This approach enables software-defined data center functionality, along with a RESTful API for integration with third-party applications.
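NIMBOXX has not published the API details referenced here, so the snippet below is purely illustrative: a generic pattern for driving a RESTful management endpoint from Python, with made-up URLs and field names standing in for whatever MeshOS actually exposes.

```python
import requests

# Hypothetical endpoint and payload; MeshOS's real API is not documented
# in this article, so every URL and field name here is an assumption.
BASE_URL = "https://meshos.example.local/api/v1"
TOKEN = "REPLACE_ME"

def create_vm(name: str, vcpus: int, memory_gb: int) -> dict:
    """Ask the management plane to provision a VM on the cluster."""
    resp = requests.post(
        f"{BASE_URL}/vms",
        json={"name": name, "vcpus": vcpus, "memory_gb": memory_gb},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# vm = create_vm("analytics-01", vcpus=8, memory_gb=32)
```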
“In working with the platform, I experienced 180,000 IOPS in repeated benchmark tests,” said Aaron Kent, IT director at Fulghum Industries and one of NIMBOXX’s early adopters. “It took less than 10 minutes to go from un-box to completed configuration set up the very first time.”
Initial plans for the company include selling its all-in-one solution (already available) to technology vendors delivering I/O-intensive applications, to service providers delivering hosted private cloud and on-premise solutions and to enterprises with Big Data applications.
“In the not-too-distant future, this will be recognized as a key step in the transformation that’s happening in the data center today,” said Rocky Bullock, CEO of NIMBOXX. “Hyper-convergence vendors have come before us, but we’ve taken an entirely different approach, building the entire software-defined data center stack from the bottom up. Our pilot results speak for themselves—the benefits of this approach are indisputable.”

4:29p
With Release 2.0, OpenStack Swift Matures, Adding Storage Policies

OpenStack Swift is cloud storage software for storing and retrieving large amounts of data through a simple API. The latest 2.0 release introduces storage policies, which arrive after a year of work by many contributors in the Swift community.
Swift was open sourced four years ago. The OpenStack storage project aims to provide a highly available, distributed object store, and release of version 2.0 means it’s growing up and gaining better consistency. Organizations can use Swift to store large amounts of data efficiently and on the cheap, across clusters of commodity servers.
Storage policies are important because they allow people to configure their Swift clusters to support the different needs of the data stored. Once policies are configured, a user can create a container with a specific policy that applies to all objects stored within.
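For example, once an operator has defined policies in the cluster's swift.conf, a client can pin a new container to one of them via the X-Storage-Policy header. The sketch below uses the python-swiftclient library with placeholder credentials, and assumes the operator has configured a policy named "reduced-redundancy".

```python
from swiftclient import client as swift

# Placeholder auth details; the policy name assumes the cluster operator
# has defined a matching [storage-policy:N] section in swift.conf.
conn = swift.Connection(
    authurl="http://swift.example.local/auth/v1.0",
    user="account:user",
    key="REPLACE_ME",
)

# Every object later written to this container inherits the policy.
conn.put_container(
    "scratch-data",
    headers={"X-Storage-Policy": "reduced-redundancy"},
)
conn.put_object("scratch-data", "report.csv", contents=b"col1,col2\n1,2\n")
```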
SwiftStack, a major contributor, highlights two specific use cases on its blog: reduced-redundancy storage policy and geography-specific storage policy.
“We normally recommend 3x replication in Swift clusters,” writes SwiftStack’s John Dickinson, director of technology. “It provides a good balance between durability and overhead for most data. However, some data is trivially re-creatable and doesn’t require the same durability.”
By not storing extra copies of objects that don’t need typical redundancy, storage costs go down. Swift allows different replication factors to be used in the same cluster.
The ability to geographically distinguish data sets means a globally distributed company can control which data are accessible from which branch offices. This new fine-grained control over where data reside lets individual offices find what they need faster.
Object-based storage is a storage architecture that manages data as objects, as opposed to file systems and block storage, which manage data as blocks within sectors and tracks. There has been a lot of investment in object storage, and several vendors, ranging from startups like SwiftStack to giants like EMC, have been building solutions for it.
SwiftStack and Intel gave a talk on storage policies at the OpenStack Juno summit. You can watch the presentation here.
Up next for Swift is erasure coding. Erasure codes build on storage policies and bring more savings and more flexibility in how data are handled in Swift.

10:06p
Apple To Build Third Solar Farm Near North Carolina Data Center

Apple is planning a third solar farm in the vicinity of its Maiden, North Carolina, data center.
Local officials approved a plan to annex land for the future solar farm to the City of Claremont on Monday, The Hickory Record reported. Claremont is about 20 miles north of Maiden.
The company is expected to invest $55 million in the 100-acre, 17.5-megawatt plant, which will be its third in the area. It has one directly at the Maiden data center site and another in Conover, another nearby town.
As Greenpeace and the press have raised concern about the Internet’s environmental impact, some data center operators have been investing in renewable energy and actively promoting their actions. Companies like Google, Facebook and Microsoft, which also operate massive data centers, have made substantial investments in renewable energy generation to support their operations.
Apple is big on solar, also planning a 20-megawatt solar farm in Reno, Nevada. The company’s data center energy supply is 100 percent renewable, making it a Greenpeace darling. Also in North Carolina, it uses fuel cells that make electricity from biogas produced by nearby landfills.
The newspaper reports that the upcoming solar project in Claremont will create 75 indirect jobs, which the company is sourcing locally. The market value of the land is more than $1.4 million. Apple also gave the city two tracts of land earmarked for greenways, recreation and public purposes.
North Carolina law says that 80 percent of the appraised value of a solar energy electric system is excluded from the city’s tax base.
Before Apple got the thumbs-up from Greenpeace, Gary Cook, senior IT analyst at the organization, called Apple out at an Uptime Symposium, saying that it and Facebook should “wield (their) power to alter the energy paradigm.” Apple has stepped up in a big way since and continues to expand its use of solar and other renewable energy.
Cook has praised what Apple has done since. In response to the news, he said, “Apple’s latest investment in solar energy shows that it is committed to maintaining its record of powering the iCloud with 100 percent renewable energy. iCloud users should feel good knowing that clean, solar energy is powering their songs, videos and photos.”
Cook also took the opportunity to call out another big fish Greenpeace has been gunning for: Amazon. “Jeff Bezos should take notice of Tim Cook’s leadership as Apple proves that we can power our online lives with renewable energy, leaving Amazon further in the dust,” he said.
“With Amazon’s release of the new Fire smartphone and the associated growth in its photo storage, Amazon needs to commit to powering its data centers with 100% renewable energy, as its peers have done, or it risks becoming an even bigger polluter.”