Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
Wednesday, August 7th, 2013
11:45a
Tidemark Nets $13 Million for Enterprise Analytics
Tidemark raises $13 million to continue its product momentum and innovation, Nasuni launches Cloud Mirroring, and Caringo adds Amazon S3 support in version 2.0 of its CloudScaler software.
Tidemark raises $13 million. Cloud-based enterprise analytics company Tidemark ended the first half of its fiscal year with a bang, adding numerous Fortune 1000 customers and recording 250 percent year-over-year growth in the period, which ended July 31. The company announced that it has secured $13 million in new venture financing to further propel its legacy-replacement momentum and support additional innovation. Tenaya Capital led the round, with participation from existing investors Greylock Partners, Andreessen Horowitz and Redpoint Ventures, bringing the total amount raised to over $48 million. “Tidemark is innovating in a space that hasn’t significantly changed in the last 15 years,” said Tom Banahan, Managing Director of Tenaya Capital. “After evaluating several companies in this important category, we concluded that Tidemark was the only company truly disrupting the legacy approach to analytics. Tidemark’s founders and management have deep domain experience and they have built an exceptional team across every piece of the business. We are excited about our opportunity to help accelerate Tidemark’s disruption of the enterprise analytics marketplace.”
Nasuni launches Cloud Mirroring. Enterprise storage provider Nasuni announced the availability of Cloud Mirroring, a new feature designed to give customers an even higher level of availability and redundancy. Nasuni says the feature delivers data protection stronger than any other cloud storage provider or cloud-integrated storage solution can offer: data resides in a primary cloud while a copy resides in a secondary cloud, and the entire mirroring process is managed by the Nasuni service behind the scenes. “Customers have made it clear that there’s no such thing as too much data protection when it comes to cloud storage,” said Andres Rodriguez, CEO of Nasuni. “In true Nasuni fashion, we wanted to be the first to give the enterprise Cloud Mirroring functionality that’s simple to use and adds no additional bandwidth demand. All of our customers are still covered by our SLA, but for those who want to go the extra mile to protect their data, only Nasuni offers Cloud Mirroring.”
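Nasuni has not published implementation details, but the mirroring pattern it describes is straightforward to sketch. Here is a minimal Python illustration of the general idea, with hypothetical class and client names: every write goes to a primary cloud and is copied to a secondary, and reads fail over to the mirror.

```python
# Illustrative sketch only; Nasuni has not published its implementation. The
# pattern: a synchronous write to a primary cloud, a mirrored copy to a
# secondary cloud, and reads that fail over to the mirror. Names are hypothetical.
class MirroredStore:
    def __init__(self, primary, secondary):
        self.primary = primary      # client for the primary cloud (e.g. S3-compatible)
        self.secondary = secondary  # client for a different cloud provider

    def put(self, key, data):
        self.primary.put(key, data)    # write the object to the primary cloud
        self.secondary.put(key, data)  # mirror the object to the secondary cloud

    def get(self, key):
        try:
            return self.primary.get(key)    # serve reads from the primary
        except Exception:
            return self.secondary.get(key)  # fail over to the mirrored copy
```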
Caringo launches CloudScaler 2.0. Object storage software provider Caringo announced the latest version of its CloudScaler enterprise gateway that, combined with CAStor, provides enterprises and service providers with robust and efficient object storage as the foundation for a dependable and scalable cloud storage service. New in version 2.0 are Amazon S3 API support and increased control, authentication and metering for CAStor. “For many enterprise use cases S3 is not a candidate. Cloud storage service providers want on-demand storage like S3, but still need the performance, security and control of having storage behind their firewall,” said Mark Goros, CEO of Caringo. “CloudScaler 2.0 and CAStor empower our customers to build solid and dependable storage services while maintaining control and ensuring content integrity to meet the most demanding cloud storage use case requirements.”
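To illustrate what S3 API compatibility buys in practice, here is a minimal sketch of pointing a stock S3 client (Python's boto3) at an S3-compatible gateway rather than at Amazon. The endpoint URL, credentials and bucket name are hypothetical placeholders, not Caringo's actual interface.

```python
# Minimal sketch of S3 API compatibility: a standard S3 client aimed at an
# S3-compatible gateway instead of Amazon. Endpoint, credentials and bucket
# name below are hypothetical placeholders, not Caringo's actual API details.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://cloudscaler.example.com",  # hypothetical gateway address
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="archive")                                   # same calls as against Amazon S3
s3.put_object(Bucket="archive", Key="report.pdf", Body=b"contents")
for obj in s3.list_objects_v2(Bucket="archive").get("Contents", []):
    print(obj["Key"])
```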
12:30p
Optimizing Energy Savings in Federal Data Centers
Jay Owen is Vice President, Schneider Electric IT Federal Solutions.
With less than two years remaining for managers of federal data centers to attain their consolidation goals – and achieve greater energy efficiency while doing so – strained agency budgets are growing ever tighter, and the initial capital investment is becoming a daunting hurdle. But there is a well-tested solution to this dilemma: the Energy Savings Performance Contract (ESPC).
What’s Behind the Consolidation Movement?
Most federal data centers had long been considered “excluded” from meeting federal energy mandates like those in EPAct 2005 and EISA 2007. This exclusion ended in recent years as a result of an executive order (EO 13514) and the Chief Information Officers Council’s Federal Data Center Consolidation Initiative, which require agencies to consolidate data centers and improve their efficiency and sustainability. Such requirements were driven by several key factors that increased scrutiny of these energy hogs: rising energy and operational costs, data center sprawl that quadrupled the number of federal data centers between 1998 and 2009, low utilization rates for CPUs and servers, and an increase in underutilized properties combined with the desire to remove them from the balance sheet. In addition, agencies now recognize the importance of incorporating data center efficiency into their energy management plans to comply with legislation and take full advantage of technologies that provide huge savings without sacrificing availability.
But without a capital investment, the best-laid plans cannot be accomplished. Enter the ESPC. With an ESPC, private sector investors provide the upfront capital rather than appropriations, i.e. taxpayer funds.
How Does an ESPC Work?
The beauty of an ESPC is that the agency forms a partnership with an energy service company (ESCO), which secures private sector financing to cover the upfront renovation costs. Why would a private investor want to provide that capital? Because the agency pays back the investors over a fixed period with the surplus funds created by energy savings, and the ESCO guarantees those savings. The key to leveraging ESPCs for consolidation is to identify enough savings to justify the consolidation costs. That’s typically not hard to do: most data centers consume 50 times the energy of an average office building.
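As a simplified illustration of that payback mechanic, with made-up numbers and ignoring interest and escalation:

```python
# Simplified ESPC payback arithmetic with hypothetical figures (ignores
# interest, escalation and measurement costs). The ESCO finances the retrofit;
# the agency repays the financier from guaranteed annual energy savings.
upfront_cost = 5_000_000         # hypothetical retrofit cost financed by the ESCO ($)
annual_energy_savings = 650_000  # hypothetical guaranteed savings per year ($)
annual_payment = 500_000         # fixed annual repayment to the financier ($)

term_years = upfront_cost / annual_payment               # length of the contract
agency_surplus = annual_energy_savings - annual_payment  # retained by the agency

print(f"Contract term: {term_years:.0f} years")
print(f"Agency keeps ${agency_surplus:,} per year during the term")
print(f"After the term, the full ${annual_energy_savings:,} per year stays with the agency")
```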
By incorporating energy conservation measures (ECMs) like lighting and cooling system retrofits into consolidation plans, federal data center managers can employ ESPCs to implement large-scale consolidation and optimization projects without congressional appropriations. In addition, ESPCs can help agencies overcome one of the biggest challenges of consolidation: incorporating high density equipment into an existing infrastructure that supports low density. How? By utilizing a high density pod – a pre-designed collection of IT cabinets, power distribution and dedicated cooling deployed as a unit. The high efficiencies of this pod method can significantly improve power usage effectiveness (PUE), the commonly used measure of data center efficiency. The savings from this improved efficiency make high density pods a perfect candidate for inclusion in an ESPC.
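For reference, PUE is simply total facility power divided by IT equipment power, so a value of 1.0 would mean every watt reaches the IT load. A quick sketch with hypothetical figures:

```python
# PUE = total facility power / IT equipment power; lower is better, with a
# theoretical floor of 1.0. The figures below are hypothetical illustrations.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness for a facility."""
    return total_facility_kw / it_load_kw

print(pue(2000, 1000))  # 2.0 - a legacy room where half the power feeds overhead
print(pue(1300, 1000))  # 1.3 - a high density pod with dedicated, close-coupled cooling
```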
Finally, leveraging ESPCs for data center consolidation also helps agencies meet their goal of entering into a minimum of $2 billion in performance-based energy contracts by the end of 2013, as required by the 2011 presidential memorandum, Implementation of Energy Savings Projects and Performance-Based Contracts. That’s why agencies should consider an ESPC a best practice for reaching energy efficiency and data center consolidation goals.
What About Funding?
Energy savings are just one source of funding for a data center consolidation or upgrade through an ESPC. With this source, energy savings are paid to the financier providing the upfront capital until the financing agreement is complete. Any savings above the financing payments, or those that accrue once the term is up, remain with the agency as a cost reduction.
Other ESPC funding sources include savings that result from reduced operations and maintenance (O&M) costs, as well as utility and tax incentives. The ESCO can identify and secure these incentives for the agency.
Capital dollars may also be used to help fund an ESPC. Coupling appropriated funds with private sector financing as a “down payment” can expand the facility upgrades achieved and shorten the contract term for the agency.
What’s Involved in Implementing an ESPC?
The success of an ESPC depends on clear definition of the desired outcome. To start, the ESCO will perform a feasibility assessment of the site to identify needs, the level of savings potential and performance requirements. This will enable the agency to determine a strategy based on the desired results, which could include critical infrastructure improvements to enhance mission support, establishment of a healthier, safer working environment, and thorough project commissioning along with life-cycle O&M support to sustain savings and deliver long-term improvements.
One additional, essential element of success for achieving data center consolidation and efficiency goals is the installation and utilization of an energy management system (EMS). With an ESPC, data centers can deploy scalable EMS platforms to monitor and control energy consumption from end to end.
An EMS brings greater visibility to data center efficiency, or lack thereof, by continuously collecting and reporting energy usage data for power conversion and distribution, server load and computing operations, cooling equipment, and on-site generation (renewables, waste heat for cooling, etc.). Data center managers can leverage the EMS to establish factual baselines, which they can use throughout the data center life cycle for monitoring, verification and regulation of performance.
U.S. data center efficiencies have traditionally been low due to a variety of factors, including ineffective cooling, air containment and air distribution, improper placement of IT rack enclosures (versus hot and cold aisle arrangements), and lightly loaded UPS systems. Some data center power systems operate at efficiencies of 67 percent and below. By providing visibility into energy usage and waste, an EMS helps identify measures that, when implemented, have proven to bring efficiencies into the 90 percent range – an exceptional difference.
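A quick back-of-the-envelope comparison, using a hypothetical 1,000 kW IT load, shows how much waste that efficiency gap represents:

```python
# Back-of-the-envelope view of the 67 percent vs. 90 percent power chain
# efficiencies cited above, for a hypothetical 1,000 kW IT load.
it_load_kw = 1000  # hypothetical IT load

for efficiency in (0.67, 0.90):
    input_kw = it_load_kw / efficiency   # utility power drawn to deliver the load
    wasted_kw = input_kw - it_load_kw    # lost in power conversion and distribution
    print(f"{efficiency:.0%} efficient: draws {input_kw:,.0f} kW, wastes {wasted_kw:,.0f} kW")
```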
What’s the End Result?
Through an ESPC, federal data centers can attain both consolidation and energy efficiency goals that may have been out of reach under the current fiscal constraints. Data center managers also gain access to the best technologies and custom-tailored solutions to deliver greater efficiency and optimal performance.
If they haven’t already, federal data center managers should be looking at ESPCs as a best practice for optimizing their facilities.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

1:55p
Interxion Reveals European Expansion Plans in Earnings Report
Interxion detailed several expansion projects in its earnings announcement. Expansions in Copenhagen and Stockholm have been completed, while new expansion projects in Stockholm, Vienna and Zurich were announced today.
In Stockholm, Interxion (INXN) is constructing the second phase of STO 2 (STO 2.2) in response to continued demand. STO 2 is being completed in two phases, each providing 500 square meters of equipped space. The first phase, whose completion was just announced, has 2MW of power and is available now; STO 2.2 is scheduled to be operational in the first quarter of 2014.
The company has seen strong demand in Stockholm, having expanded there three times in the last two years, and now has further expansion underway.
In Vienna, Interxion has constructed the fourth phase of VIE 1 (VIE 1.4), in a market driven by demand from financial services and cloud communities of interest. VIE 1.4 became operational in the third quarter of 2013 and provides approximately 400 square meters of equipped space.
In Zurich, Interxion is building the fourth phase of ZUR 1 (ZUR 1.4), once again in response to continued demand. ZUR 1.4 will provide approximately 500 square meters of equipped space and is scheduled to become operational in the fourth quarter of 2013.
The capital expenditure associated with these projects is approximately €11 million (approximately $14.6 million at current exchange rates) and is included in the company’s 2013 capex guidance.
The completed expansions in Stockholm and Copenhagen were announced in February and were estimated to cost a combined €17 million ($22.7 million). The Copenhagen expansion provides 300 square meters of equipped space and the Stockholm expansion added 500 square meters, for a combined increase of 800 square meters in the quarter. The company now has a total of 78,900 square meters.
Revenue-generating space increased by 1,200 square meters in the quarter, to 58,200 square meters, which puts the company’s utilization rate – revenue-generating space as a share of total equipped space – at roughly 74 percent.
“Interxion’s second quarter results reflect solid execution against our market segmentation strategy, which has delivered sustained, profitable growth despite the effects of a continued unfavourable macroeconomic environment,” said David Ruberg, Chief Executive Officer of Interxion. “Growth in our communities of interest and structural drivers, such as the onset of migration to cloud computing, are underpinning continued demand for Interxion’s highly connected data centres.”
2:30p
Procera Launches Virtualized PacketLogic Solutions
Internet Intelligence company Procera Networks (PKT) announced the launch of virtualized PacketLogic solutions based on the European Telecommunications Standards Institute (ETSI) Network Functions Virtualization (NFV) standards. By replacing dedicated appliances, the solutions will allow network operators to reduce the cost of acquisition and ownership for Internet Intelligence deployments.
“We’ve been noting a definite interest among operators toward virtualizing their policy decision and enforcement functions as a way to reduce costs and bring services to market more quickly,” said Shira Levine, Directing Analyst, Service Enablement and Subscriber Intelligence at Infonetics Research. “We believe that this trend will accelerate as Software Defined Networking (SDN) begins to gain traction, driving demand for DPI technology that can mine intelligence from the network and feed it up to the control layer.”
Any hardware platform that can run standards-based virtualization software can host the PacketLogic solution modules (PSM, PRE, and PIC), including deployments that run all of the modules on a single hardware platform. The virtual PacketLogic solutions can be used at any point in the lifecycle of a policy enforcement deployment, from initial functional evaluation and trial deployment through service rollout, bandwidth expansion, and geographic expansion.
“Virtualization is a natural evolution of the PacketLogic architecture,” said Alexander Havang, chief technology officer for Procera. “Procera has always maintained hardware independence, and has delivered the highest performing solutions available on the market using off-the-shelf hardware technology. We have used virtualization extensively internally, and our customers are asking to deploy this in their networks today. Speed to market with new services and the ability to deliver targeted niche services also becomes a much simpler and more rapid process.”
The Virtual PacketLogic solutions will be available for trial in the third quarter of 2013, and are expected to be generally available by the end of 2013.

7:39p
Shepard Takes on Dual Roles at IO, FORTRUST
The relationship between IO and FORTRUST is evolving in interesting ways. This week FORTRUST said it continues to experience strong demand for IO Anywhere data center modules, with more than 1.5 megawatts of capacity deployed in the first year of their “Powered by IO” partnership.
On Tuesday FORTRUST announced that David Shepard, Senior Vice President at IO, will assume the additional role of FORTRUST Senior Vice President of Sales and Marketing. Shepard led the development of the Powered by IO program, and will now help market the solution for both providers.
As the first participant in the Powered by IO program, FORTRUST committed to use IO modular data center technology exclusively as it expands its Denver data center, where it offers colocation and disaster recovery services. IO Anywhere is a family of modular data center components that can be combined to deploy a fully configured enterprise data center.
FORTRUST now has IO data modules in all three of its locations, which include its own data center in Denver and IO-operated facilities in Phoenix and Edison, New Jersey. By using IO Anywhere, FORTRUST can expand its capacity in increments of 200kW and 400kW within 60 days of ordering a new unit.
“David and I worked together closely to enable FORTRUST’s adoption of IO’s modular technology and the development of the FORTRUST and IO partnership,” said Rob McClary, Senior Vice President and General Manager of FORTRUST. “We welcome Mr. Shepard and look forward to advancing the industry through this truly unique partnership. With his support, FORTRUST will continue to serve customers agile, efficient and on-demand data center capacity and services.”
“David Shepard has years of experience conveying IO’s value proposition to prospective customers,” said Steve Knudson, Vice Chairman and CEO of FORTRUST. “Having that expertise at our fingertips will enable both IO and FORTRUST to maximize the value of this partnership.”
Powered by IO sites and service providers are certified by IO to host IO’s technology to provide Data Center as a Service (DCaaS) in exclusive markets. Partners will share resources, training, technology, leads, sales and marketing. IO partner certification covers operations, sales and design, and includes real-time monitoring by the IO.OS. Partner companies commit to using IO modular data center technology in their data centers.

7:43p
IBM, Google Team on OpenPOWER Consortium
Rows of custom-built servers inside a Google data center. Will these soon be powered by IBM POWER microprocessors? (Photo: Google)
In a bid to reinvigorate its POWER processor architecture, IBM this week announced a new development alliance called the OpenPOWER Consortium, with Google, Mellanox, NVIDIA and Tyan as initial members.
Battling a diminishing server market overall, on top of competition from the Open Compute Project and other industry initiatives, IBM hopes that OpenPOWER will build advanced server, networking, storage and GPU-acceleration technology on the POWER platform. The consortium makes POWER IP licensable to others and for the first time will make POWER hardware and software available to open development.
In doing this, IBM and the consortium can offer unprecedented customization in creating new styles of server hardware for a variety of computing workloads. IBM added variety to its own lineup on Tuesday with the addition of new Flex systems with POWER processors.
“The founding members of the OpenPOWER Consortium represent the next generation in data-center innovation,” said Steve Mills, senior vice president, and group executive, IBM Software & Systems. “Combining our talents and assets around the POWER architecture can greatly increase the rate of innovation throughout the industry. Developers now have access to an expanded and open set of server technologies for the first time. This type of ‘collaborative development’ model will change the way data center hardware is designed and deployed.”
Google is a large maker of its own customized servers, so its involvement in the consortium signals an interesting twist in the processor battleground. A year ago Intel noted that Google was among the top five server manufacturers that account for 75 percent of Intel’s server chip revenues. There is nothing that guarantees Google will build POWER-based systems, but given Google’s love of open systems and drive to innovate in its data centers, it is certainly a possibility. NVIDIA and IBM will work together to integrate the CUDA GPU and POWER ecosystems.
“We are happy taking part in the OpenPOWER Consortium and its mission to further accelerate the rate of innovation, performance and efficiency for advanced data center solutions,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Open source and community development are key to enabling innovative computer platforms and better serve the scalable and emerging applications in the areas of high-performance, Web 2.0 and cloud computing. Mellanox’s mission is to provide the most efficient interconnect solution for all compute and CPU architectures and deliver the highest return-on-investment to our users.”
IBM says OpenPOWER is open to any firm that wants to innovate on the POWER platform and participate in an open, collaborative effort.