Data Center Knowledge | News and analysis for the data center industry
Thursday, April 25th, 2013
12:17p
DevOps Meets the Enterprise: Chef Now Supported by IBM, Microsoft

Chef is cooking in the enterprise. Opscode’s open source automation platform is now supported by IBM and Microsoft, the company said, making it easier for enterprise IT users to use Chef to manage and scale their server environments. The announcements, made today ahead of the annual #ChefConf user conference, mark a coming of age for Opscode and Chef, helping extend the benefits of the “DevOps” movement beyond its origins in the hyper-scale computing community.
DevOps, which combines many of the roles of systems administrators and developers, was popularized at large cloud builders with dynamic server environments. An example is Facebook, which recently adopted Chef to manage its fast-moving infrastructure. Opscode’s team features veterans of Amazon Web Services, a driving force in the growth of cloud services and DevOps.
“Facebook, Google and Amazon have figured out how to leverage large-scale infrastructure, and we are now seeing a similar trend in the enterprise,” said Jay Wampold, VP of Marketing at Opscode. “IT is a front-office imperative in how companies engage customers and users. Enterprises were not built from the ground up for this. Now they have to retool.”
The Code-Based Business
Chef is central to Opscode’s vision for this shift to “code-based businesses.” Chef is an open source framework using repeatable code – organized as “recipes” and “cookbooks” – to automate the configuration and management of virtual servers. It enables users to deploy infrastructure as code across any operating system, from Windows to Unix and Linux, and across physical, virtual or cloud infrastructure.
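The recipe-and-cookbook model is easiest to see in code. Below is a minimal, hypothetical Chef recipe sketch; the nginx package, template path and service name are illustrative assumptions, not details from the article:

```ruby
# Hypothetical Chef recipe: each resource declares desired state,
# and chef-client converges the node to that state idempotently.

# Install the web server package (a no-op if it is already installed).
package 'nginx'

# Render the config file from an ERB template shipped in the cookbook;
# if the rendered content changes, reload the service.
template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
  owner 'root'
  mode '0644'
  notifies :reload, 'service[nginx]'
end

# Ensure the service starts at boot and is running now.
service 'nginx' do
  action [:enable, :start]
end
```

A recipe like this lives in a cookbook alongside its templates, and the same code can converge a physical box, a VM or a cloud instance – the “infrastructure as code” portability the article describes.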
In February Opscode unveiled Chef 11, an updated version of the Chef server written in the Erlang programming language and using a PostgreSQL database. That’s a change from previous versions, which were written in Ruby and used CouchDB as the database. The shift to an SQL database has helped make Opscode’s offerings more attractive to enterprise customers using Private Chef to automate infrastructure in their own data centers.
The IBM integration will further support this shift. Chef will now support IBM Power Systems and the AIX operating system, allowing enterprise customers to use Chef to automate the configuration of AIX-based cloud infrastructure. Opscode will provide IBM customers with tools to build and manage cloud resources and applications in large-scale AIX compute environments.
“Our collaboration with IBM is addressing a major transformation facing enterprises as they code their businesses to thrive in the digital economy,” said Mitch Hill, CEO of Opscode. “Leveraging the innovation and extensibility of open source, including OpenStack and Chef, Opscode and IBM are enabling businesses to maximize the potential of the cloud in rapidly delivering goods and services to market.”
Cookbooks for WebSphere, Windows Azure
IBM and Opscode are also collaborating on creating cookbooks for the IBM Software portfolio, beginning with the WebSphere Application Server Liberty Profile. This cookbook will provide reusable content to allow the rapid provisioning and full application lifecycle management of WebSphere Application Server Liberty Profile applications.
“By collaborating on product integration and Chef Community content, we’ll be able to offer enterprise businesses comprehensive solutions for gaining the most value out of cloud, with minimal risk,” said Moe Abdula, VP, SmartCloud Foundation at IBM.
Abdula will be presenting on #ChefConf’s main stage tomorrow, one of a number of enterprise presenters from companies including Disney, Forrester Research, General Electric and Nordstrom. About 700 attendees are expected at the event in San Francisco, part of a larger open source Chef community that features more than 1,300 individual contributors, 200 corporate contributors, and 900 cookbooks. Since last year’s #ChefConf, Opscode says its commercial customer base has doubled, including many Fortune 500 enterprises using Private Chef to build in-house clouds.
“A lot of mainstream enterprises are talking about revolutionizing the enterprise, and adopting Chef broadly,” said Wampold. “That involves a cultural change, and getting the whole group trained on Chef and integrated with the community.”
ChefConf is part of that process. So is Opscode’s collaboration with Microsoft Open Technologies to deliver a series of Chef Cookbooks providing cloud automation capabilities for Microsoft Azure, including cookbooks for automating Drupal and WordPress deployments on Windows Azure. Opscode also announced today that Chef provides integration with Microsoft’s IaaS offering, Windows Azure Infrastructure Services.
“The enterprise engagement and sales process has grown,” said Wampold. “We can take credit for building a great product, but the business need has come around more quickly than anyone expected. It’s clear the traditional enterprise vendors like IBM and Microsoft are seeing this transition. These companies are knocking on our door because they see us as a key enabler.”

12:49p
Earth Day Sparks A Look At Data Center Energy

Marina Thiry is director of strategic marketing – data centers for ABB.
As we celebrated Earth Day this week, many corporations are looking at their environmental strategies and seeking to become more “green.” This raises the question: for organizations seeking to reduce their energy footprint, is it possible for a data center to employ distributed energy resources, such as solar power, without giving up reliability? It is a worthy goal, but there are major issues that need to be addressed along the way.
First, let’s be clear: yes, it is possible. The characteristics of distributed energy resources for the data center include distributed energy sources (that is, beyond the emergency back-up generator) and – this is key – centralized control that may operate with the main power grid but can also operate independently of it. The latter is sometimes referred to as on-site generation. Another characteristic of a distributed energy resource system is energy storage, though few data centers have deployed it yet in the sense discussed here.
Resilience and Sustainability
One of the advantages of distributed energy resources is that they add resilience and sustainability to the total energy system within the data center. A distributed energy unit can achieve a level of reliability as high as, or higher than, any single resource. The challenge is to manage, utilize and optimize the unit in a dynamically changing fashion.
From what I see, as businesses recognize the competitive advantages that an agile data center enables, they begin to invest in modernizing their data center infrastructure and operations so they can keep up with business requirements – whatever it takes to deliver more web services faster, and in a sustainable way. So, at ABB, we’re constantly innovating energy solutions so our customers can respond to this demand.
Consider, for example, mobile applications. Apple reported that customers downloaded over 40 billion apps, with nearly 20 billion in 2012 alone. These mobile apps, and the information and transactions collected from them, create enormous increases in data, as well as huge increases in the IT infrastructure and the energy required to support the business requirements behind those apps. At the same time, data centers are faced with the challenge of consolidating their resources. So data center operators are in dire need of finding significant ways to optimize.
DCIM Allows For Energy Monitoring & Management
If you are interested in reducing your energy footprint, then one of the most effective strategies for attaining aggressive – yet sustainable – growth is using a data center infrastructure management (DCIM) system. A DCIM system capable of managing those energy assets is vital to lowering operating costs while maximizing availability and reliability. This approach helps extend the life of the data center by safely and reliably boosting the productivity of existing assets – getting more from less – while keeping track of the return on the sustainable energy investments that also help reduce the energy footprint.
The combination of distributed energy resources and DCIM offers significant reliability and efficiency improvements that begin with the energy source and purchase, and extend to improving energy utilization. A DCIM system like ABB’s Decathlon provides the granular visibility, decision support and centralized control – including energy trading capabilities – that enable data centers to exploit these new efficiencies safely and reliably.
Tips for Managing Distributed Energy
We recognize that every data center is different, and one data center’s successful approach may not work for another. Talk to experts who are well versed with power conversion and delivery technologies, utilities, and DCIM. Here are a few tips to keep in mind as you proceed in using distributed energy resources and DCIM:
- To enable the near instantaneous balance of the data center infrastructure energy supply and demand, you will need two-way communications to deliver real-time information. Consider how your approach will manage multiple levels of integration and interoperability among various components of your data center.
- As distributed energy technologies evolve, so will the applications and benefits. Think beyond the sources of energy. Consider, too, how your data center infrastructure will be able to manage distributed power generation, storage, process automation and demand response technology.
- Examine effective ways of integrating different forms of distributed energy resources. Depending on your data center geography, some energy sources may be more practical or offer better economies of scale.
- Finally, don’t underestimate the significance of the monitoring, decision support and process automation capabilities in your DCIM system. For example, consider the extent and depth of energy management capabilities, such as alerting you to purchase energy when it’s cheaper, and when to use more energy or less by scheduling compute loads during less expensive times. It won’t matter how robust or technically advanced your energy delivery network is if your data center infrastructure management is inadequate for the task.
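The last tip’s idea of scheduling compute loads into cheaper hours reduces to a small optimization problem. The sketch below is a toy illustration under stated assumptions – the function name and the hourly prices are made up, and this is not ABB Decathlon’s actual interface – that picks the cheapest contiguous window for a deferrable batch job:

```ruby
# Hypothetical sketch: given a list of hourly energy prices, find the
# cheapest contiguous window of the required length for a deferrable
# compute load. Prices and window length are illustrative assumptions.
def cheapest_window(prices, hours_needed)
  best_start = 0
  best_cost = Float::INFINITY
  # Evaluate every contiguous window and keep the lowest total cost.
  (0..prices.length - hours_needed).each do |start|
    cost = prices[start, hours_needed].sum
    if cost < best_cost
      best_cost = cost
      best_start = start
    end
  end
  [best_start, best_cost]
end

hourly_prices = [0.12, 0.10, 0.07, 0.05, 0.05, 0.09, 0.14, 0.18]
start_hour, cost = cheapest_window(hourly_prices, 3)
puts "Run the batch job starting at hour #{start_hour} (cost #{cost.round(2)})"
```

A real DCIM system layers forecasting, demand response and reliability constraints on top of a decision like this, but the core trade-off – shift flexible load to cheap hours – is the same.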
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

1:11p
With $119M NexGen Deal, Fusion-io Targets Hybrid Storage

A NexGen hybrid storage appliance, which combines Fusion-io flash memory with the company’s software. Fusion-io has now acquired NexGen.
Marking a strategic expansion of its product portfolio, Fusion-io (FIO) announced that it has acquired hybrid storage appliance company NexGen Storage for $119 million. NexGen appliances are based on Fusion ioMemory and targeted at small to medium-size enterprises. By using software in combination with ioMemory and standard disk drives, NexGen transforms industry-leading x86 server platforms into hybrid storage systems that provide the performance of an all-flash array at a fraction of the cost.
At the core of the NexGen hybrid storage system is ioControl Management software, which shares all storage resources and maintains simultaneous performance targets for multiple applications. It enables IT teams to control and prioritize acceleration for applications.
“Many SME businesses have lean IT teams and budgets, making it critical to offer an integrated and affordable entry point for flash powered application acceleration that delivers consistent performance, even under demanding workloads like VDI and analytics,” said David Flynn, Fusion-io CEO and Chairman. “The hybrid NexGen solution combines memory attached flash and disk on leading server platforms to provide a system tuned to deliver performance, price and capacity. With this acquisition, we will maintain the current NexGen product model as we transition to supporting customers’ preferred server platforms with our OEM partners.”
Paying approximately $114 million in cash and $5 million in stock for all of the outstanding stock, warrants and vested equity awards of NexGen, Fusion-io will add around 50 NexGen employees to its team. NexGen Storage CEO John Spiers posted a note on the company blog saying there never really was an exit strategy – but that he sees this next phase of growth as a new beginning rather than an end.
“We architected our solution around Fusion-ioMemory because it offered the highest reliability, the most predictable performance, and because it is built as a platform for easy developer integration,” said Spiers, co-founder of NexGen and new Fusion-io Senior Vice President and General Manager, NexGen Products. “The NexGen ioControl software uniquely eliminates the need for another layer of latency in storage tiering and the bottlenecks introduced by SSD storage controllers, making it the ideal hybrid system to evolve into an open, software defined platform at Fusion-io.”
Fusion-io also reported financial results for the quarter Wednesday. The NexGen Storage acquisition directly addresses a strategy for small and medium-size businesses, while progress continues to be made with system vendor partners such as HP, IBM and Dell. During the quarterly earnings call the company noted that in the last quarter four customers placed orders in excess of $5 million, and that its “relationship with Facebook and Apple is strong.” It was also noted that global music streaming service Spotify was added as a Fusion-io customer.

2:25p
Pivotal Launches Enterprise PaaS, Receives $105 Million From GE

Pivotal, backed by EMC and VMware and an official company for less than a month, launched as a stand-alone company Wednesday, detailing its new enterprise Platform as a Service (PaaS) offering and a $105 million investment from GE.
Former VMware CEO Paul Maritz addressed a webcast press event Wednesday in San Francisco, delivering the story line and mission of the newly formed company, positioning it as a new platform for a new era. Citing the changing market forces of cloud, big data, mobile and social, Maritz said that the enterprise needs a new class of applications that deliver better user experiences, and that consumer-grade capabilities are needed in the enterprise. While still servicing the legacy needs of the enterprise, Pivotal One will integrate new data fabrics, modern programming frameworks and cloud portability for legacy systems.
“It is clear that there is a widespread need emerging for new solutions that allow customers to drive new business value by cost-effectively reasoning over large datasets, ingesting information that is rapidly arriving from multiple sources, writing applications that allow real-time reactions, and doing all of this in a cloud-independent or portable manner,” said Maritz, the CEO of Pivotal. “The need for these solutions can be found across a wide range of industries and it is our belief that these solutions will drive the need for new platforms. Pivotal aims to be a leading provider of such a platform. We are honored to work with GE, as they seek to drive new business value in the age of the Industrial Internet.”
Pivotal One
The Pivotal One enterprise PaaS platform combines cloud fabric, data fabric and application fabric, to address what the company sees as an $8 billion market that is expected to grow to $20 billion in five years. Three components make up the Pivotal One platform: Data Fabric, Cloud and Application Platform, and Pivotal Expert Services. The Pivotal HD data fabric was announced in February by EMC Greenplum as a SQL parallel database on top of the Hadoop Distributed File System. With the addition of Greenplum HAWQ data services and Pivotal in-memory data grid technology, Pivotal HD provides proven technologies for analytical queries and transactional environments.
Pivotal Cloud and Application Platform is based on Cloud Foundry, the open source PaaS, and Spring, the application development framework for enterprise Java. The application fabric provides a rich developer ecosystem that enables rapid application development and support for messaging, database services and robust analytic and visualization instrumentation. Pivotal Expert Services delivers the business value of agile development and sophisticated data analytics to enterprise companies on a project-by-project basis.
As a new stand-alone company, Pivotal draws on an experienced set of talent from EMC, VMware, Greenplum, and many other technology giants. Along with Maritz, former EMC Greenplum executives Scott Yara and Bill Cook join Pivotal as Senior Vice President, Products and Platform, and Chief Operating Officer, respectively. Pivotal begins operations with 1,250 employees, including over 700 developers.
Strategic Investment from GE
At the launch event Wednesday, GE announced its plans to invest approximately $105 million in Pivotal. The companies also announced their intent to enter into a broad research and development and commercial agreement aimed at accelerating GE’s ability to create new analytic services and solutions for its customers. The investment in Pivotal and new business agreement align with GE’s focus on the Industrial Internet. The partnership is key for GE, as it is working to develop a software platform that it will deliver as a service to industrial customers in aviation, transportation, healthcare, energy and manufacturing. | | 2:32p |
Unleash the Full Potential of BYOD with Confidence

In today’s business environment, managers must answer demands for more users, more data, more cloud and many more devices. Of course, organizations don’t want to take ownership of user-owned devices. However, they still want to manage and control the workload that is delivered to these end-points.
IT consumerization will only continue to grow and evolve. Even now, users are utilizing 3-5 devices to access the Internet or use corporate resources. Because of this, many organizations have run into challenges around engineering and managing a BYOD solution. These include:
- Onboarding users
- Ensuring high service quality and availability
- Maintaining security and mitigating risk
- Supporting diverse users and devices
- Enabling a consistent user experience
- Accelerating deployment cycles with security
This is where an intelligent BYOD design can really help. Instead of deploying fragmented solutions – organizations must look at a unified platform that can help them control devices and still stay secure as well as agile.

[Image source: HP - Unleash the full potential of BYOD with confidence]
In designing a unified BYOD environment, it’s important to work with technologies that are capable of supporting this type of solution. In this white paper from HP, you will learn about the unified BYOD infrastructure offering. In creating an easy-to-operate platform, administrators are able to gain more control over their infrastructure and the user end-point environment. As outlined in this white paper, the benefits of HP’s BYOD solution include:
- Universal policy provisioning and enforcement
- Flexible access control with device fingerprinting and self-registration portals
- Device posture assessment and control
- Rich traffic shaping and bandwidth management tools
- Comprehensive usage and performance reporting
- Detailed user behavior analysis
- Single pane-of-glass management across wired and wireless infrastructure
- Unified wired and wireless network
- Network ready for SDN
Download this white paper today to see how HP’s BYOD solution can help your organization deploy a logical, unified BYOD-ready environment. By creating an agile platform ready for IT consumerization, your organization can create a more powerful data center as well as a more productive workforce.

3:00p
Design West: AMD Announces New System-on-Chip

The Design West conference is underway this week in San Jose. It’s a four-day event launched last year as a technical conference for electronics design engineers, entrepreneurs, and technology professionals. AMD, Emerson and Wind River all have announcements from the event.
AMD announces new System-on-Chip. AMD announced the new AMD Embedded G-Series System-on-Chip (SOC) platform, a single-chip solution based on the AMD “Jaguar” CPU architecture and AMD Radeon 8000 Series graphics. Compared to the prior-generation AMD G-Series APU, the new G-Series chip offers up to 113 percent improved CPU performance, and up to a 125 percent advantage over the Intel Atom when running multiple industry-standard compute-intensive benchmarks. The new processor family offers superior performance per watt in the low-power x86-compatible product category with 9W – 25W options. “As the Internet of Things permeates every aspect of our life from work to home and everywhere in between, devices require high performance, I/O connectivity and energy efficiency in smaller packages,” said Colin Barnden, principal analyst, Semicast Research. “With this new AMD SOC design, the AMD Embedded G-Series platform offers the perfect mix of high performance, a small footprint, low energy use and full I/O integration to enable smaller form-factor embedded designs, cool and efficient operation, and simplified build requirements. AMD has leapfrogged the competition by combining the power of an x86 CPU and the performance of AMD Radeon graphics with the I/O interconnect all on a single die.”
Emerson launches new VPX system. Emerson Network Power (EMR) announced its latest embedded systems VPX system chassis, the KR8-VPX-3-6-1. Designed primarily for development, testing and lab duties, the KR8-VPX-3-6-1 can also be deployed in ground benign installations as it meets Emerson’s standard safety, electromagnetic compatibility (EMC) and environmental requirements. The chassis supports up to five 3U and 6U Eurocard formats that are most popular with users of VMEbus. It will enable original equipment manufacturers (OEMs) to rapidly develop, test and evolve their applications. “The new VPX-based KR8 chassis is designed to make it as easy as possible for developers to get ahead with their application development,” said Eric Gauthier, vice president product marketing for Emerson Network Power’s Embedded Computing business. “Coming hot on the heels of our recently announced VPX3000 system-level OpenVPX fanless enclosure and iVPX-7225 3U processor blade, this new development and deployment platform underlines Emerson Network Power’s commitment to having the building blocks in place to be a leading provider of custom and complete integrated solutions in this market.”
Wind River launches secure separation kernel. Intel (INTC) subsidiary Wind River introduced the latest version of its VxWorks MILS Platform, a secure separation kernel that is compliant to the Separation Kernel Protection Profile (SKPP). Part of the Wind River portfolio of trusted systems, the Type 1 hypervisor–based, multiple independent levels of security (MILS) platform is ready for use in security-critical systems that may require system-level high assurance evaluation or certification and accreditation (C&A). VxWorks MILS Platform partitions a single processor among multiple software components, with time and space resource allocation, information flow control, and fault isolation — all strictly enforced to conform to security policies defined by security architects and system integrators. “Companies responsible for creating robust infrastructure systems worldwide are demanding increased functionality and secure operation with high assurance of security from inadvertent or intentional errors or threats,” said Jim Douglas, senior vice president of marketing at Wind River. “VxWorks MILS can serve as the foundation for security-critical devices and systems in applications ranging from military and aerospace to industrial, medical, and automotive.”

3:30p
Survey Says: Multi-Cloud Usage Growing, DevOps on the Rise 
Enterprise cloud usage has begun to hit a point of maturity, and there’s increasing preference for a multi-cloud approach, according to a survey from cloud management provider RightScale. The report also documents the rise of DevOps – the marriage and integration of Development and Operations – which has grown quickly, taking hold among 54 percent of respondents.
The company assessed what stage businesses and enterprises had reached in their cloud strategies: 49 percent were either partially or heavily using cloud, 17 percent were developing cloud strategies and 26 percent were working on proofs-of-concept or first projects. That means only 8 percent of respondents weren’t thinking about or planning to use cloud.
Larger organizations are choosing multi-cloud and hybrid cloud strategies by a large margin. Seventy-seven percent of large organizations said they’re going multi-cloud, while close to half (47 percent) said they were planning or already using hybrid. Enterprises with hybrid cloud strategies are making progress toward their goals, with 61 percent of those organizations already running apps in public cloud, 38 percent in private cloud and 29 percent in hybrid cloud environments.
The trend shows organizations moving up the maturity model, and reaping increasing benefits as they do. The top benefits reported in the survey are faster access to infrastructure, greater scalability, faster time to market with apps, and higher availability. The main issues surrounding cloud tend to disappear as these organizations move up in terms of cloud maturity. For example, security and compliance is often a major concern. It’s viewed as less of a challenge as cloud usage matures: 38 percent of beginners but only 18 percent of experienced cloud users viewed it as a challenge. Thirty percent of “Cloud Beginners” reported that they were gaining faster access to infrastructure, while 87 percent of “Cloud Focused” respondents realized that same benefit.
More cloud is being used, it’s being used more often in hybrid or multi-cloud scenarios, and DevOps has indeed taken a grip on the enterprise. RightScale has interest across the board here, from helping cloud watchers and beginners to more advanced users who want to scale out their use of cloud. The full RightScale 2013 State of the Cloud report is available here (registration required), and last year’s survey here.

4:00p
Cisco Launches New MDS Storage Products

Cisco (CSCO) announced new MDS storage networking solutions for storage area networks, to help customers address rising cloud and big data requirements.
9710 Multilayer Director
Cisco says that at 24 terabits per second of total switching capacity, the new Cisco 9710 Multilayer Director delivers more bandwidth than any storage director in the industry. Powering both SAN and LAN networking operations, it will support high-density Fibre Channel and Fibre Channel over Ethernet (FCoE). The new model builds on the MDS heritage of nonstop operations, including software upgrades, by providing the highest fault-tolerant capabilities with fully redundant (N+1) fans, switching fabrics, and power supplies, or grid redundancy (N:N). The new MDS 9710 Multilayer Director supports up to 384 line-rate 16 Gbps Fibre Channel or FCoE ports in a single 14 RU chassis.
“As the leading IT service provider in Norway, we look to Cisco as a key technology partner to support us in the process of consolidating and upgrading data centers. We require storage networks to support the next generation of services and level of availability our customers demand,” said Jo Marius Pedersen, SAN specialist, EVRY. “The converged management, predictable high performance, flexibility and reliability of the 16 Gbps-optimized Cisco MDS Multilayer 9710 has, through thorough testing, shown unprecedented results. The changing technology landscape, with its exponential data growth and increasing pressure to reduce cost and complexity, forces us to constantly balance innovative and known solutions. We can definitely recommend businesses with similar challenges to evaluate the innovative MDS platform.”
9250i Multiservice Fabric Switch
The Cisco MDS 9250i Multiservice Fabric Switch delivers storage services, including Cisco I/O Accelerator and Data Mobility Manager, which improve SAN efficiency by performing important storage services centrally in the fabric. This architecture reduces the time and resources required to perform common storage management functions, and simplifies and accelerates data protection for regulatory compliance. The switch provides up to 40 line-rate 16 Gbps FC/FICON ports, 8 ports of 10 GbE FCoE, and 2 ports of 1/10 GbE FCIP/iSCSI, while delivering a rich set of storage services via licensing.
“Today’s announcement cements Cisco’s technology leadership in the storage director market,” said David Yen, senior vice president, Data Center Group, Cisco. “Cisco continues to deliver the greatest depth and breadth for an end-to-end data center unified fabric. Together with our ecosystem of partners, we are reshaping the data center into an IT linchpin that transforms business continuity and operations for customers, one that is critical to today’s competitive business environments.”
In the following video Cisco discusses the new MDS solutions, and how storage is changing, with 16 Gigabit Fibre Channel, solid state drives, and block and file-level storage designs.
5:00p
StackPop Aims to Become the Mint.com of Infrastructure Management 
There’s a flood of vendors out there looking to make infrastructure management as smooth and as manageable as possible. StackPop is a cloud-based service that helps enterprises analyze and optimize their IT infrastructure spending. Its pitch is that it’s easy to use and brings in real savings. The New York-based company addresses the disconnect between finance and IT, helping track contracts and spending, and providing useful intelligence when it’s time to renegotiate or shift vendors and technologies. It also acts as a comparison tool and marketplace for buyers and sellers, pitting it against colo brokerages in addition to spend tracking and aggregation.
While its appeal is obvious for the end users of infrastructure, for the infrastructure providers themselves, StackPop has the potential to become a very potent marketplace to drive sales. It aids in comparing, configuring and buying from over 450 infrastructure providers in 40 countries and wants customers to never buy blind again. As it grows this part of what it does, it stands to gain insight to general buying and infrastructure trends across the world, so users can see what is being paid on average.
The challenge lies in making infrastructure services like colocation as transparent as cloud or hosting. As hybrid infrastructure continues to gain traction, a management platform like StackPop stands a good chance of becoming something of note. It already touts some large customers, like gaming site IGN, social check-ins provider Foursquare, and online fashion retailer Gilt Groupe.
A large part of what StackPop does is analogous to personal finance and budgeting tools such as Mint.com, only for IT infrastructure. Mint.com, now owned by Intuit, is an application that helps people understand their finances, giving users the ability to aggregate and monitor all financial accounts from one simple, attractive, mobile-friendly app; it drew a large crowd of users as a startup before the financial software giant acquired it. Mint.com and StackPop are similar in terms of what they hope to achieve, although StackPop’s features go beyond the “read-only” data aggregation seen at Mint.
Roots at Panther Express
StackPop was founded in October of 2011. The co-founders were infrastructure guys, network and systems engineers. Co-founder and CEO Jason Evans says the talent behind the company grew its global chops at content delivery network Panther Express, where they started with 10 servers and grew the company to 45 global locations before being acquired by CDNetworks.
Evans moved to Mediamap, a real-time bidding platform that had a handful of servers on Rackspace upon arrival, which built out to 6 global data centers including Asia Pacific and Europe, Middle East and Africa.
“We’ve always had the idea of creating better tools to help grow and scale the infrastructure,” said Evans. “The original idea for StackPop came from an unused cage we had purchased on a 2-year agreement at Panther, and it was a big waste. We tried to sublease it but couldn’t.”
That unused IT purchase highlighted a problem, and an opportunity. “The idea came out of the question – ‘How do we create a second-level marketplace for space capacity at a discounted rate?’” said Evans. In April of 2011, StackPop was formed to pursue solutions.
Evans teamed with StackPop CTO Aram Grigoryan, also a Panther Express alumnus, and put together a seed round of funding in the fall of 2011. The beta for the service came out in March of last year, and the company has closed $1.2 million in transactions through the platform and through its partners. Provider feedback has been positive.
Infrastructure Spending Insight That Goes Beyond Cloud
There are a lot of web sites that provide information about infrastructure. However, there’s no one transactional platform that drives and dominates infrastructure spending. “There’s a need for a more transparent and transactional platform,” said Evans. “We had to get a little more involved from a personal level.”