Data Center Knowledge | News and analysis for the data center industry
Wednesday, May 20th, 2015
12:00p
Equinix Using Bloom Biogas Fuel Cells At Silicon Valley Data Center
Equinix is installing Bloom Energy biogas fuel cells at its Silicon Valley data center. The initial deployment will be a 1 megawatt biogas fuel cell in support of its SV5 facility. Expected to reduce CO2 emissions to near zero, the fuel cell will provide an estimated 8.3 million kilowatt-hours per year of clean, reliable electricity to power a portion of the SV5 data center.
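As a rough check on that figure (an illustrative back-of-envelope calculation, not from Equinix or Bloom), a 1 megawatt system running continuously for a full year would generate

$$1{,}000\ \text{kW} \times 8{,}760\ \tfrac{\text{h}}{\text{yr}} \approx 8.76\ \text{million kWh per year},$$

so the estimated 8.3 million kilowatt-hours implies the fuel cell is expected to run at roughly a 95 percent capacity factor.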
Equinix has taken a key step towards its goal of 100 percent clean and renewable energy across its data centers. Currently, Equinix gets approximately 30 percent of its energy from clean, renewable sources, using a variety of mechanisms including fuel cells, solar, power purchase agreements, utility programs, renewable energy credits and carbon offsets.
The fuel cells use biogas rather than natural gas; natural gas burns cleaner than other fossil fuels but is still a fossil fuel. The project also includes uninterruptible power modules configured to protect a portion of the data center’s energy load from electrical outages.
Bloom’s proprietary solid oxide fuel cell generates electricity through a clean electrochemical reaction between air and fuel, producing only water and a small amount of carbon dioxide as by-products.
The installation is clean as well. Because the fuel cell generates power without combustion, the project avoids at least 1.6 million pounds of CO2 emissions from the California grid before biogas is even factored in. There are also indirect water savings, since 1 megawatt of demand at a local gas- or coal-fired power plant is avoided.
Using the biogas fuel cells may help the colocation provider make its services more attractive to customers that care about powering their infrastructure with clean energy. Equinix customer Etsy recently discussed the importance of clean and renewable energy when selecting a data center provider.
In addition to the clean and renewable energy story, there are potential operational savings. Gas is a lot cheaper than electricity in California, a state whose electricity prices have been volatile, one example being the California Energy Crisis of the early 2000s. That is why the recent Bloom deals have involved California data centers. Projects such as these can be seen as pilots that could lead to widespread adoption, since they make both environmental and economic sense.
“This project demonstrates Equinix’s commitment to find cost-effective ways to reduce our carbon footprint and move toward 100 percent renewable energy,” said Sam Kapoor, chief global operations officer, Equinix. “By working with Bloom Energy to purchase 100 percent biogas and fuel cells, we’re able to support the energy needs of our customers in an environmentally responsible way.”
Bloom Energy continues to gain traction for its fuel cell technology with major data center operators. Its data center business has been primarily with single-tenant facilities in the past – an Apple data center in North Carolina and an eBay data center in Utah are two marquee examples. However, multi-tenant providers are starting to embrace Bloom’s technology. CenturyLink recently purchased a 500kW on-site power generation plant from Bloom for a California data center.
Providers including ViaWest, CenturyLink, and Equinix have all mentioned increased customer inquiries into renewable energy options. Fellow colocation provider and TelecityGroup bidder Interxion recently discussed the colocation provider’s role in renewable energy, as well as its own progress toward 100 percent renewable energy in all but one of its markets. Granted, Europe is friendlier to renewable and clean energy than the United States, at least for now.
Cloud providers have faced increasing pressure to go renewable from the likes of Greenpeace, and many have responded with initiatives such as Amazon’s recent pilot of Tesla batteries. Microsoft recently opened a zero-carbon, biogas-powered data center in Wyoming that combines modular data centers with an innovative way to leverage waste from a nearby water treatment facility.
As colocation adoption increases, so will the demand for renewable and clean energy.
“Companies are increasingly turning to data center colocation services in order to interconnect with other businesses and they want to do this in an environmentally responsible way,” said Peter Gross, vice president of mission critical systems, Bloom Energy.
Bloom fuel cells make a very unusual, lean electrical design possible, as demonstrated by eBay’s deployment: Bloom can replace both the UPS and the generator. Other providers that have gone with natural gas include AT&T, which purchased several Bloom boxes for use across a dozen sites a few years ago, and NTT America, which deployed Bloom Energy Servers in California.
12:01p
Hyperconverged Infrastructure Software Play Springpath Names New CEO
Hyperconverged infrastructure software company Springpath has named industry veteran Terry Cunningham as new CEO. Mallik Mahalingam, co-founder and previous Springpath CEO and CTO, will continue in his role as CTO.
Springpath, founded by VMware veterans, sells its hyperconverged infrastructure software platform as a service. The basic premise of its approach is to break the reliance on the capital-intensive hardware that this type of software and functionality are usually coupled with. The company exited stealth in February with $34 million in funding.
Cunningham spent five years as president and general manager of Seagate’s cloud storage and DR division, Evault, and was chief operating officer at Veritas following the Veritas-Seagate Software merger in 1999. He also has startup experience – his first startup was Crystal Decisions, an enterprise reporting platform, where Cunningham drove go-to-market partnerships with the likes of Microsoft and SAP. Cunningham will focus on bringing the same rich partnerships and go-to-market strategy to Springpath.
Hyperconvergence is disruptive, and Cunningham believes it will be a key enabler in making DevOps a reality for the enterprise.
“The idea of integrating compute and storage, but integrating a flash layer, allows private cloud scale-out architectures,” said Cunningham. “During the cloud era, the only way to accomplish scale was buying services on public cloud. We are bringing the same scale out but in your private data center.”
The software-only approach is key, according to Cunningham. Over the next five years, he believes, enterprises will buy a lot of servers, and they don’t want to be tied down to a particular vendor as they move toward hyperconvergence.
“We’re not in the business of building servers and customers don’t want to be buying another server with a different vendor,” said Cunningham. “So we say, stick with the vendor you want and we’ll support it. It’s not just we’re software and not hardware, it’s also because of the software platform.”
Springpath’s approach also allows customers to change components on the fly. “We provide the ability to integrate new technology, unlike anybody else,” said Cunningham. “We can scale compute, flash and hard drive layers independently. If you need more compute, you can just add blades.”
Not being tied to hardware also means the company is a Value Added Reseller’s (VAR’s) dream, according to Cunningham. For companies that want the appliance experience, VARs can provide hyperconvergence on whichever vendor’s hardware they’d like.
Springpath is at the intersection of several key trends and concepts Cunningham has worked on throughout his career; his roots are in storage and cloud services. Cunningham briefly retired, which he said gave him a chance to take a step back to look at the bigger picture – it’s easy to get bogged down in details during your day job, he said. Everything led to hyperconvergence.
“Everyone’s trying to be more agile, to run a leaner operation, and we think from a timing perspective, we’re well positioned,” said Cunningham. “I get to build from scratch, which I’ve done in the past. I’m excited to do it again.”
1:00p
Peak 10 Expands In North Carolina With Raleigh Data Center
Peak 10 has several expansions afoot, the latest being construction and renovation of a new 31,500 square foot Raleigh data center. The company is adding 17,000 square feet of raised-floor data center space.
The new data center brings Peak 10’s Raleigh data center footprint to 100,000 square feet of space. The facility is contiguous to Peak 10’s two existing data centers in the area and will accommodate new and existing customers throughout the central and eastern regions of the state.
Other expansions underway include Louisville, and an upcoming 60,000 square foot greenfield facility in Tampa Bay, Florida – its first data center built from the ground up. Last year, the company added a third facility in Atlanta, and expanded in Cincinnati. The company was acquired by GI Partners in May 2014 and has been in high growth mode since.
Chairman and CEO David Jones recently noted that the company has been taking trips and researching additional potential markets. Building, refurbishing or acquiring are all on the table.
“My strategic focus is looking at two things: what acquisitions will make sense and what markets tend to behave like markets we’re in,” said Jones. “This will be a big expansion year for our company as we make significant investments to launch new facilities in four of our established markets, and seek new locations to expand our geographic footprint.”
Peak 10 operates 25 data centers in 10 U.S. markets, with the hub of its strength in the southeastern United States. The company offers cloud, managed services, and colocation, and often acts as a trusted IT advisor to midmarket and larger companies. The company also continues to drive automation through cloud offerings, with several software projects underway. Its biggest bucket is colocation, but cloud is showing the fastest growth. Jones said that private cloud continues to drive colocation, cloud services and managed services.
“Our consistent growth in all of our markets, including Raleigh, is the result of mid-market businesses in need of a true IT partner that can offer consultative expertise to support their long-term growth goals,” said Jones.
The data center is located in North Carolina’s Research Triangle Park, a major east coast hub for technology. The 7,000 acre development is home to over 170 companies.
“The construction of our third facility here allows us to deliver our enterprise-level data center services to more customers, supporting both their business growth and that of the region,” said Kurt Mosley, director of service delivery for Peak 10’s Raleigh operations.
Jones said GI Partners has been central to its expansion strategy, both in terms of footprint and products. GI Partners has a high profile in the data center and web hosting sector from its ownership of The Telx Group and The Planet (acquired by SoftLayer prior to SoftLayer’s acquisition by IBM), as well as its role in forming Digital Realty Trust.
1:00p
Pattern Driven Enterprise Cloud Transformation – An Innovative Approach to Hybrid Cloud Adoption
Biswajit Mohapatra is an IBM Certified Consultant and Global Integrated Delivery Leader for IBM’s Application Development and Innovation Digital Modernization Services (DMS) practice. Vinay Parisa is an IBM Certified IT Architect and cloud architect for IBM’s Application Development & Innovation Digital Modernization Services practice in India.
Cloud computing offers vast technological and financial benefits for companies, providing access to the latest trends and unlimited computing capacity. While business and IT use the cloud for different reasons and with different goals, both roles are unified with respect to the cloud’s overall value: the ability to deliver IT without boundaries, create an impact with innovation and build lasting customer relationships. However, while companies recognize the benefits of cloud computing, they are often uncertain about which form suits them best.
Organizations are creating new business initiatives to meet the demands of cloud, analytics, and mobile and social (CAMS) strategies. Today’s “as-a-service” economy is changing the way business applications are developed, deployed and managed. Businesses need to be able to deploy new applications (systems of engagement) on public clouds to improve their customer engagement. In order to do so, they seek Platform-as-a-Service (PaaS) solutions for experimentation and innovation, yet are required to keep mission critical applications in on-premise traditional IT environments.
Enterprises realize that no one cloud matches their needs, and that they require a model that works best for them. Hybrid cloud and DevOps methodologies are being increasingly adopted to enable continuous delivery and to accelerate the deployments across public, private and traditional IT environments. Concepts such as cloud orchestration, automation, containers, and Software Defined Environment (SDE) have emerged as ways to develop and manage complex on-premise/off-premise/public/private infrastructures.
IBM defines an SDE as an entire IT infrastructure that is programmable and controlled by software rather than through hands-on management of individual systems and hardware. An SDE assigns workloads dynamically to IT resources based on application characteristics, best-available resources, and service-level policies.
An SDE also delivers IT services in the most efficient way possible by creating a responsive, adaptive environment built on open standards. This approach spans compute, storage, and networking. The popularity of open source configuration management software like Chef and Ansible, and of packaging and deployment platforms such as Docker, reflects IT’s interest in software defined environments.
Patterns are central to the SDE. They describe the structure of cloud services, their components, the relationships between those components, and the manageability of the services.
Patterns can be infrastructure-based, applied to one or more systems, or software- and application-based, describing a full stack to address scaling and high availability.
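To make the idea concrete, here is a minimal sketch of a full-stack pattern as a data structure (hypothetical Python, not IBM’s actual pattern format; the class names, fields, and example components are illustrative only). The pattern declares components and their relationships once; an orchestrator can then derive a deployment order and apply the same pattern across private, public, or traditional environments:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Component:
    """One element of a full-stack pattern (e.g., load balancer, app server, database)."""
    name: str
    image: str               # machine or container image to provision
    min_instances: int = 1
    max_instances: int = 1   # upper bound used by a scaling policy

@dataclass
class Pattern:
    """A declarative, reusable description of a cloud service stack."""
    name: str
    components: Dict[str, Component] = field(default_factory=dict)
    # (source, target) pairs: source depends on target
    relationships: List[Tuple[str, str]] = field(default_factory=list)

    def add(self, component: Component) -> "Pattern":
        self.components[component.name] = component
        return self

    def connect(self, source: str, target: str) -> "Pattern":
        self.relationships.append((source, target))
        return self

    def deploy_order(self) -> List[str]:
        """Return component names with dependencies listed before their dependents."""
        ordered: List[str] = []

        def visit(name: str) -> None:
            for src, tgt in self.relationships:
                if src == name and tgt not in ordered:
                    visit(tgt)
            if name not in ordered:
                ordered.append(name)

        for name in self.components:
            visit(name)
        return ordered

# A hypothetical three-tier web pattern, reusable in any target environment.
web = (Pattern("web-app")
       .add(Component("db", image="postgres", min_instances=1))
       .add(Component("app", image="app-server", min_instances=2, max_instances=10))
       .add(Component("lb", image="haproxy", min_instances=1))
       .connect("lb", "app")
       .connect("app", "db"))

print(web.deploy_order())  # ['db', 'app', 'lb']
```

Because the pattern is declarative, the same description can be versioned, reviewed, and reapplied, which is where the repeatability and standardization benefits come from.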
While a hybrid cloud may be adopted for new business initiatives, organizations are still left to transform and modernize their legacy and existing application portfolios. A pattern-driven methodology can assess the portfolio, then identify and develop full-stack application patterns that provide business agility, repeatability, and standardization.
Patterns give organizations the flexibility to streamline processes and decisions and to reduce the complexity of IT environments. Organizations are racing to embrace CAMS, but significant growth and competitive advantage can only be achieved by infusing new technologies. Patterns and SDEs provide a holistic software delivery process that enables faster cloud adoption.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
3:00p
SingleHop Acquires Datagram
SingleHop revealed that it is acquiring Datagram, a provider of hosting services located primarily in the Northeast.
That acquisition comes on the heels of a deal two months ago in which SingleHop acquired Server Intellect, a provider of consulting services focused mainly on Microsoft Windows Server.
SingleHop CEO Zak Boca says that acquisition of Datagram also represents an expansion of the data center services the company offers. Previously, SingleHop only provided access to infrastructure-as-a-service (IaaS) offerings via data centers located in Chicago, Arizona and the Netherlands. Datagram now adds hosting services in New York and Connecticut that enable customers to deploy their own IT infrastructure in data centers managed by SingleHop.
“Right now we have 1,500 servers under management,” says Boca. “Datagram enables people to now bring their own servers and SANs into our data centers.”
Datagram will continue to operate as an independent business unit of SingleHop, and effective immediately, will be able to deploy its services across all SingleHop data centers.
While Boca acknowledges that competition in the IaaS and hosting business is fierce, he says there are classes of enterprise customers that need access to IT infrastructure that is only one hop or less away from an Internet peering exchange. Applications deployed in the cloud, Boca says, are especially latency sensitive. As a result, there is something of a mass application migration underway, with enterprise applications being deployed on IaaS platforms and in hosting centers rather than inside traditional data centers or public clouds.
A big reason for that shift, says Boca, is that applications that require five milliseconds or less of latency can’t be delivered via a public cloud such as Amazon Web Services or even a local data center. As a result, there is a significant amount of industry consolidation in the hosting sector that is being driven primarily by the need to concentrate cloud application workloads around Internet peering exchanges.
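For context on why proximity matters (a rough physics-based bound, not a figure from SingleHop): light propagates through optical fiber at roughly two-thirds the speed of light in a vacuum, about 200 kilometers per millisecond, so a 5 millisecond round-trip budget caps the one-way fiber path at roughly

$$\frac{5\ \text{ms}}{2} \times 200\ \tfrac{\text{km}}{\text{ms}} = 500\ \text{km}$$

before any queuing, switching, or processing delay is counted.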
The degree to which enterprise IT organizations will want to make use of existing SingleHop servers running VMware or Microsoft Hyper-V virtual machines versus deploying their own servers and storage in a hosting center will vary.
In the case of the IaaS offering, Boca notes that all the risks associated with IT capital investments are assumed by SingleHop. In the case of the hosting service, the enterprise IT organization is responsible not only for acquiring the server and storage infrastructure but also, if it so chooses, for managing it.
Although Boca would not make any specific commitments to additional types of acquisitions, he did acknowledge the company is looking to expand the range and scope of the data center services it provides; assuming, of course, that in a highly volatile market SingleHop itself doesn’t become a target on someone else’s acquisition list.
3:01p
MuleSoft Nets Huge $128 Million Investment For API-led Connectivity
MuleSoft, an API-led connectivity platform software provider, announced it has secured a $128 million financing round led by Salesforce Ventures. MuleSoft says its mission is to connect the world’s applications, data and devices, and to be an authoritative voice on application programming interfaces (APIs).
Bringing the total financing raised to date to $259 million, MuleSoft says the new Series G financing will help the company accelerate its technology platform innovation, as well as fuel its international expansion and extend its reach to serve global enterprise customers. It also brings the company into a new league, valuing it at $1.5 billion.
Participating in this financing round was new strategic investor ServiceNow, as well as Cisco Investments, an existing strategic investor.
The nine-year-old global company said 2014 was the most successful year in its history, with a number of new customers and bookings surpassing $100 million.
MuleSoft’s product offerings show up in 3 different Gartner Magic Quadrants, for on-premise application integration suites, application services governance, and enterprise integration as a service.
MuleSoft CEO Greg Schott said that APIs are “the rocket fuel for connecting what’s never been connected before.” Schott added that his company “is positioned to seize this tremendous market opportunity with a world-class team, innovative technology and an impressive customer base. This investment validates our vision to attack the largest unsolved problem in IT with a platform for API-led connectivity.”
MuleSoft is in a good position to capture the changing nature of applications. Rather than traditional, monolithic applications, the trend is toward microservices: several smaller applications performing functions that used to be performed by one big application. These applications and services need to tie in and integrate with one another, which is where the API, and MuleSoft’s offerings, come in. Its API-led connectivity builds on the tenets of service-oriented architecture, tuned for today’s application services.
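As a generic illustration of that layering (a minimal sketch, not MuleSoft’s Anypoint platform; the service names and functions below are hypothetical), small services expose narrow APIs and a composing API stitches them together, rather than one monolithic application owning every function:

```python
# Hypothetical illustration of API-led composition (not MuleSoft code):
# two "system" APIs front individual back-end systems, and a "process" API
# composes them into a single view for callers.
from typing import Dict

def customer_api(customer_id: str) -> Dict:
    """System API: fronts a (hypothetical) CRM system of record."""
    return {"id": customer_id, "name": "Acme Corp", "tier": "gold"}

def order_api(customer_id: str) -> Dict:
    """System API: fronts a (hypothetical) order-management system."""
    return {"customer_id": customer_id, "open_orders": 3}

def customer_summary_api(customer_id: str) -> Dict:
    """Process API: composes the two system APIs instead of duplicating their logic."""
    customer = customer_api(customer_id)
    orders = order_api(customer_id)
    return {**customer, "open_orders": orders["open_orders"]}

print(customer_summary_api("42"))
# {'id': '42', 'name': 'Acme Corp', 'tier': 'gold', 'open_orders': 3}
```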
4:24p
The Uptime Institute’s New Efficient IT Stamp of Approval
The Uptime Institute, a division of 451 Research, has announced its new Efficient IT Stamp of Approval, a third-party assessment of data center efficiency and sustainability. Uptime Institute has completed an extensive pilot phase and announced the public rollout during the opening keynote of the tenth annual Uptime Institute Symposium.
The Uptime Institute has been around for two decades and is best known for its Tier Certification program. The Institute later expanded into an Operational Stamp of Approval, and now it is using its knowledge base to provide management-based recommendations around cost and resource savings. There are two award levels for Uptime Institute Efficient IT: Approved and Activated.
IT efficiency is assessed across leadership, operations, and design. Data centers have long focused on tuning facilities for power and cooling efficiency, the biggest costs; however, efficiency extends beyond nuts and bolts. Just as people are often the greatest threat to uptime, they can also be one of the greatest threats to efficiency.
“Uptime Institute Efficient IT is about better use of resources to better enable the business,” said Julian Kudritzki, chief operating officer of Uptime Institute. “Efficient IT isn’t bought, it’s managed. But in many siloed organizations, there is no effective structure to recognize achievement or set meaningful goals. Uptime Institute Efficient IT Stamp of Approval codifies the management processes and leadership behaviors that ensure sustained cost savings and resource agility.”
The stamp of approval is based on behaviors and outcomes rather than prescriptive requirements, said Uptime Institute. The approach for the Efficient IT Stamp of Approval is similar to Uptime Institute Management & Operations (M&O) Stamp of Approval. CenturyLink pursued the M&O stamp across its entire footprint.
The Uptime Institute Efficient IT Stamp of Approval benchmarks the enterprise’s achievement in terms of planning, decision making, actions taken, and monitoring to improve asset utilization and extend lifecycle across compute, storage, and network systems, and the data center itself.
Early adopters of Uptime Institute Efficient IT include leading integrated healthcare provider Kaiser Permanente, with two Stamps of Approval, and Mexico-based CEMEX, one of the world’s largest building materials companies, with an Activated stamp for its site in Monterrey.
5:00p
Storage Virtualization, Hyperscale, and the Abstraction of Data
We’ve hit an interesting point in the data center world. Organizations are now actively working on reducing complexity, optimizing their existing infrastructure, and improving the end-user experience. The interesting part is how all of these organizations, large and small, actually control all of this new data. At this point, businesses of all sizes are admitting that they have a series of storage challenges.
- How do I continue to accelerate user experience without adding more hardware?
- What are my options in using logical optimizations around storage?
- How do I seamlessly extend my data into the cloud…securely?
- Is a commodity data center a reality for me?
Believe it or not, software-defined storage (aka storage virtualization) is a way to overcome many of the challenges listed above. The amazing part is that your organization no longer has to buy expensive gear to keep pace with the constantly changing demands of users. There are always more options.
- Understanding data abstraction (virtualization) and hyperscale. New kinds of storage technologies revolving around storage virtualization and hyperscale are creating very real buzz in this industry. Software-defined storage allows you to point any and all storage resources to a virtual appliance for very granular control. From there, new kinds of hyperscale platforms allow a completely heterogeneous storage environment to work in unison. Most of all, SDS and hyperscale appliances can take older storage appliances and present next-gen storage features at the hypervisor layer. Not only can this extend the life of your existing storage arrays, it also directly optimizes any of the storage being presented to the intelligent, virtual, hyperscale (and SDS) appliance.
- Creating a logical controller head. Picking up on the last point, many organizations now have a number of storage appliances serving specific purposes. For example, a Nimble Storage array can be in place optimizing VDI workloads while a NetApp FAS acts as a filer and general data storage array. With a logical SDS storage controller, you still have these powerful technologies in place, but you are optimizing the presented storage via a software controller. Legacy storage appliances can then leverage powerful features to better enhance VM performance. For example, where an older physical appliance can’t support VAAI (and is limited to 256 managed VMs per LUN), it can now point storage to a logical layer and let that virtual appliance act as a VAAI engine, eliminating the limitation and prolonging the life of the appliance (see the sketch after this list). Here’s the other reality: there is nothing stopping you from buying your own Cisco C-Series server and provisioning it with your own disks. Then simply point those storage resources to an SDS controller and, congratulations, you just created your own storage appliance with a virtual controller.
- Using cloud extensions. This is a big one. Your data must be agile and able to scale between data centers and cloud points. SDS and data virtualization allow you to seamlessly connect with a variety of cloud providers to push and replicate data between your data center and a hybrid cloud architecture. New SDS solutions now directly integrate with technologies like OpenStack, vCAC, and other cloud connectivity platforms. What’s more, you can connect your Hadoop environment right into your SDS layer. This is true data control at the virtual layer, because the information can traverse any storage environment, cloud-based or on-premises.
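To illustrate the logical controller head described above (a minimal sketch with hypothetical classes, not any particular vendor’s SDS API), heterogeneous arrays are registered with a software controller that pools their capacity and places volumes without the consumer needing to know which physical appliance sits behind them:

```python
# Hypothetical sketch of a logical SDS controller pooling mixed storage backends.
from dataclasses import dataclass
from typing import List

@dataclass
class BackendArray:
    name: str           # e.g. "nimble-vdi" or "netapp-filer"
    capacity_gb: int
    used_gb: int = 0

    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb

class LogicalController:
    """Software controller presenting pooled storage from many arrays."""

    def __init__(self) -> None:
        self.backends: List[BackendArray] = []

    def register(self, backend: BackendArray) -> None:
        self.backends.append(backend)

    def provision(self, size_gb: int) -> str:
        """Place a volume on whichever backend currently has the most free space."""
        target = max(self.backends, key=lambda b: b.free_gb())
        if target.free_gb() < size_gb:
            raise RuntimeError("storage pool exhausted")
        target.used_gb += size_gb
        return f"{size_gb} GB volume placed on {target.name}"

ctrl = LogicalController()
ctrl.register(BackendArray("nimble-vdi", capacity_gb=10_000))
ctrl.register(BackendArray("netapp-filer", capacity_gb=20_000))
print(ctrl.provision(500))  # placed on netapp-filer, the backend with the most free space
```

Features such as caching, replication, or VAAI-style offload would live in this software layer, which is how older arrays sitting behind the controller can appear to gain next-generation capabilities.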
The proliferation of cloud computing and mobile devices will have an impact on your business. The cloud is producing so much new data that the modern data center must adopt new ways to control all of this information. Here’s the important part: controlling this data is only one of the steps. For many organizations, deploying a Hadoop cluster is critical to quantifying and analyzing vital data points.
Consider the following from Cisco’s latest Global Cloud Index:
- Quantitatively, the impact of cloud computing on data center traffic is clear. It is important to recognize that most Internet traffic has originated or terminated in a data center since 2008.
- Data center traffic will continue to dominate Internet traffic for the foreseeable future, but the nature of data center traffic is undergoing a fundamental transformation brought about by cloud applications, services, and infrastructure.
- The importance and relevance of the global cloud evolution is highlighted by one of the top-line projections from this updated forecast: by 2018, 78 percent, or over three-quarters, of data center traffic will be based in cloud data centers.
With these stats in mind, how ready are you to take on the data challenges surrounding a next-gen workload? Is your organization controlling data at the virtual layer? Remember, the amount of data hitting your storage environment will only continue to increase. Deploying powerful data abstraction and management solutions today can greatly impact your evolving business model moving forward.
5:59p
Report: Facebook Data Center Potentially Coming To Ireland
Facebook might be planning a new data center in Ireland at a cost of $220 million, reported the Irish Times. The company is also actively searching for electrical engineers with data center experience to work at its Irish operations.
The social media giant is reportedly about to file for planning permission to build a 200,000 square foot facility in Meath, about 30 minutes northwest of Dublin. Details are sparse and Facebook is not commenting; however, Ireland has been a data center hotspot in recent years.
Dublin is unique among major European data center hubs in that its initial appeal was based on climate rather than connectivity. However, connectivity tends to follow data centers. For example, a new submarine cable is in the works that will greatly boost Ireland’s direct connectivity to North America. Microsoft recently invested in the project.
Facebook’s only other European data center is in Lulea, Sweden, another location with a favorable climate for data centers.
Ireland has seen increasing data center activity. In the last few months, Apple revealed plans for a $1 billion project in Athenry, close to Galway. Apple is also investing in renewable energy projects there and across its footprint.
Microsoft has a data center in Dublin, as do Google and Amazon. Digital Realty Trust launched a new facility in Dublin last year, and TelecityGroup is also present. Technology companies have created many jobs in Ireland, and it looks like Facebook is about to add around 60 more.
Cork, located in the southwest of the country, is also particularly active. It is home to the European headquarters of Apple and Logitech. Amazon has also set up shop in the Cork Airport Business Park, and EMC filed for a new data center in Cork last year.
More than 1.2 billion people use Facebook worldwide, and the company has had to scale its data centers to keep up. Because it has faced unique infrastructure challenges, it is also innovating at the data center level.
The company has been active not only in building, but in advancing design through initiatives like the Open Compute Project (OCP). Last year, Facebook said using Open Compute designs to streamline its data centers and servers helped the company save $1.2 billion. The company showed additional OCP love by sharing its networking innovations.
The company has streamlined the way it builds data centers, adopting an Ikea-esque approach. However, in March a U.K. engineering company filed a lawsuit against Facebook, accusing it of using the firm’s proprietary data center designs and promoting their public use through the Open Compute Project.
6:00p
DataHero Raises $6.1m For Self-Service Cloud BI
Where data is stored and how it’s accessed is shifting, and business intelligence (BI) is evolving with it. Cloud BI provider DataHero has raised a $6.1 million Series A round to address the architectural shift and simplify data analysis. The funding will go towards expanding development of its core technology and scaling its operations and support infrastructure.
DataHero provides self-service BI delivered as Software-as-a-Service. It aims to take traditional business intelligence and turn it into a quick, easy-to-use service, much like Salesforce did with customer relationship management.
DataHero’s core technology consists of a data classification engine and pre-built connections to dozens of popular cloud services. Through a drag-and-drop interface, users are able to import and analyze data from popular services including Salesforce, HubSpot, Marketo, Google Analytics, Dropbox, Stripe, and Excel.
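As a rough illustration of what a data classification engine does (a minimal hypothetical sketch, not DataHero’s actual code), imported columns can be inspected and tagged as measures, dates, or categories so that a sensible chart can be suggested automatically:

```python
# Hypothetical sketch: guess the role of each imported column for charting.
from datetime import datetime
from typing import List

def classify_column(values: List[str]) -> str:
    def is_number(value: str) -> bool:
        try:
            float(value)
            return True
        except ValueError:
            return False

    def is_date(value: str) -> bool:
        try:
            datetime.strptime(value, "%Y-%m-%d")
            return True
        except ValueError:
            return False

    if all(is_number(v) for v in values):
        return "measure"    # numeric: can be summed or averaged
    if all(is_date(v) for v in values):
        return "date"       # usable as a time axis
    return "category"       # everything else: usable for grouping and filtering

rows = {
    "closed":    ["2015-05-01", "2015-05-12"],
    "deal_size": ["1200", "800"],
    "rep":       ["Ana", "Bo"],
}
print({column: classify_column(values) for column, values in rows.items()})
# {'closed': 'date', 'deal_size': 'measure', 'rep': 'category'}
```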
The round was led by existing investor Foundry Group.
“We invested in the company three years ago because we identified a real market gap for data analysis for the non-technical user,” said Ryan McIntyre, managing director and co-founder of Foundry Group. “To thrive in today’s business environment, companies need to shift from traditional gut-based decision making to a data-driven mindset for everyone in the organization. This is what DataHero is all about.”
Along with the funding, the company also named software veteran Ed Miller as new CEO. With over 25 years of experience, Miller has led several startups to successful outcomes, most recently Xythos, which was acquired by Blackboard. Blackboard was later acquired for $1.6 billion by Providence Equity Partners.
While traditional data analytics solutions assume you’re dealing with on-premises data, DataHero said its cloud BI is different in that it assumes you’re dealing with a large number of geographically distributed data sources.
As traditional enterprise software moves to SaaS, the business model is affecting the product. SaaS providers need to provide a quick way to get up and running and a compelling reason to continue to use the service, as they need to win the customer’s business every month. A company would be more inclined to stick it out with something it paid a lot of money for upfront, whereas a SaaS subscription is likely to get cancelled if it isn’t easy to use and worthwhile on an ongoing basis.
Cloud BI provider Bime said last year that, while some psychological barriers to performing BI in the cloud remain, in terms of technology SaaS BI has surpassed on-premises offerings.
Following the funding, the company announced a partnership with HubSpot to bring self-service analytics to marketers. HubSpot provides the full-funnel marketing and sales solution, while DataHero delivers the visualized analytics to support that funnel. Users can track when customers close deals, filter by custom fields, and display their data in different time groupings.