Data Center Knowledge | News and analysis for the data center industry
Thursday, September 5th, 2013
12:08p
Cloudsite Acquires KyotoCooling

A KyotoCooling wheel being installed in the BendBroadband data center in Oregon. KyotoCooling has been acquired by Cloudsite Development. (Photo: Bend Broadband)
Cloudsite Development has acquired a controlling interest in KyotoCooling B.V., a provider of cooling solutions known for its “heat wheel” approach to air economization. Cloudsite has been a long-time partner of KyotoCooling, and was the first data center customer to install its technology in Asia. Cloudsite has named Earl Hoover as the new President and Managing Director of KyotoCooling.
Cloudsite Development is a Dallas-based investment company founded in 2009 by John Drossos and Leslie Alexander. Drossos is the founder of Dallas private equity firm Broadband Venture Partners, and Alexander is the owner of the Houston Rockets NBA basketball team. Cloudsite is developing data centers in the United States and China. The first Cloudsite data center is currently being developed in Tianjin, China in cooperation with the Tianjin Economic-Technological Development Area.
“KyotoCooling has become the focal point of our data center facility solutions, and we see the market for waterless, airside cooling solutions to be rapidly growing and very attractive from an investment standpoint,” said Drossos, the CEO of Cloudsite.
“We welcome the investment from Cloudsite, which further strengthens our financial position and allows us to grow our market presence worldwide,” said Pedro Matser, Principal of KyotoCooling B.V. “These new funds will be used to expand sales and marketing efforts and accelerate growth in the European and Asia-Pacific markets.”
In connection with the Cloudsite investment, KyotoCooling has entered into a licensing agreement providing cooling manufacturer Air Enterprises with the right to market and sell KyotoCooling technology throughout North America.
“Uniting KyotoCooling and Air Enterprises will revolutionize the data center cooling industry,” said Martin Ellis, CEO of Air Enterprises. “This strategic partnership will combine KyotoCooling’s research and development expertise with the engineering, manufacturing and project management skills of Air Enterprises, and enable us to deliver the leading data center cooling solution to the North American market.”
KyotoCooling uses outside air economization and the Kyoto Wheel to adjust temperatures within the data center while minimizing the effect of outside contaminants. There are now 76 installations in 10 countries using the patented technology. KyotoCooling B.V. was founded in 2007 and is based in Amersfoort, The Netherlands.

12:23p
Emerson Network Power Updates Trellis DCIM 
Emerson Network Power has updated its Trellis DCIM platform, adding greater depth to its power management and mobility capabilities.
“With every release that we do, we’re enhancing the base level platform,” said Steve Hassell, President, Data Center Solutions at Emerson Network Power. “Like any product, we take the feedback from our users. It’s about knowing what you have, knowing what you’re doing with it, then answering progressively complex questions.”
The two new modules are:
- Trellis Power System Manager, which gives deeper visualization of the power system, its utilization, and its dependencies.
- The Trellis Mobile Suite, an app that allows users to manage data center assets in real-time, anywhere and anytime, by transforming a mobile device into a portal to the Trellis platform.
“The two modules provide more in-depth capabilities,” said Hassell. “From a power standpoint, Energy Insight in Trellis was good for high level PUE (Power Usage Effectiveness) stuff. Now you can look at the data center the way that an electrical engineer looks at the data center. The power chain management piece is a key part of what we added in 2.1.”
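Hassell’s reference to PUE can be made concrete with a quick calculation. The sketch below shows the standard definition of the metric, total facility power divided by IT equipment power; the figures are hypothetical and not drawn from any Emerson customer:

```python
# PUE (Power Usage Effectiveness): total facility power over IT power.
# The closer the ratio is to 1.0, the less power is spent on cooling,
# power distribution and lighting relative to the IT load itself.

def pue(total_facility_kw, it_equipment_kw):
    """Return the PUE ratio for the given power draws (in kW)."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,500 kW overall to run 1,000 kW of IT load
# has a PUE of 1.5 (hypothetical example figures).
print(pue(1500.0, 1000.0))
```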
The Power System Manager module provides a complete visualization of the power system, including utilization and dependencies. Users can view the complete power chain from the grid to the rack via a one-line diagram. It lets users document the complete power system, view capacity utilization in real-time and forecast energy consumption. This improves business continuity and reduces operational costs. It also helps users plan for capacity more accurately.
The ability to gain insight into power usage is big. “We had one customer that thought they were out of power, getting ready to do a fairly large expansion,” said Hassell. “What they really had was stranded capacity, they were really only using 15 percent.”
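The stranded-capacity arithmetic behind Hassell’s anecdote is straightforward to sketch. The numbers below are invented solely to reproduce the 15 percent utilization scenario he describes:

```python
# Hypothetical illustration of stranded power capacity: provisioned
# capacity that is paid for but never drawn by the IT equipment.

def power_utilization(provisioned_kw, measured_kw):
    """Return the fraction of provisioned power actually being drawn."""
    return measured_kw / provisioned_kw

# A site provisioned for 2,000 kW whose racks draw only 300 kW in
# total is at 15% utilization; the other 1,700 kW is stranded.
provisioned = 2000.0
measured = 300.0
util = power_utilization(provisioned, measured)
stranded = provisioned - measured
print(f"utilization: {util:.0%}, stranded capacity: {stranded:.0f} kW")
```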
The module lets the user know exactly which racks or devices will be affected by any power system failure or by scheduled maintenance, so they can identify and address potential issues before they cause an outage. Additionally, the module lets users better plan for power maintenance and perform risk assessments.
Mobile Management Module
The Mobile Suite module adds capabilities that greatly enhance management. “One of the toughest things with the inventory management model is getting the inventory into the application and using it in a way that’s pretty productive,” said Hassell. “With this module, you can use something like an iPad. You’re able to read barcodes, we did some things with facial recognition and it will identify the server.”
While servers look similar to the naked eye, the new module means that with an iPad you can go up to a server, take a snapshot, and it identifies the machine for you. “Servers have very subtle nuances. One of the things that we store is that visual image,” said Hassell.
Before this, it was difficult to get the inventory in the first place. Many data center floors didn’t have wireless access, so the device used to take inventory was physically connected. In some cases, staff wrote on paper. “With the iPad, it can do the two-way communication dynamically; if not, it’ll store it locally.”
“Discussions with our closest customers on our advisory boards indicated they needed additional flexibility and holistic insight for power management, and that they would benefit from the ability to take Trellis anywhere, 24/7,” said Hassell.
Bridging the IT/Facilities Gap
The biggest promise of DCIM seems to be bridging the gap between IT and facilities, which often have their own ways of doing things, leading to a disconnect in both operations and understanding. “We’re very good at building closed-loop control systems that let physical devices act as one,” said Hassell. “We’re building a connection between IT and physical, and letting you make better decisions.
“This shows further commitment that it really does take a platform where different modules reinforce themselves,” said Hassell. “With Trellis, you don’t have to buy the whole thing; you can start out where you need to and expand. When you look across all our data center customers, we cover a pretty wide swath of the IT customers out there. We’re taking a lot of input and we’ll continue to release capabilities at a fast pace.”
Hassell compares DCIM to virtualization and VMware. “With DCIM, you’re abstracting the infrastructure the way server virtualization abstracted the server,” said Hassell. “Virtualization as a technology, and even VMware, went through a period of time before the big adoption wave. I’m not sure we’re really at the inflection point. I’d say the community is always very bullish. It’s gone past the frothy promise. There’s a tremendous amount of interest and a lot of proof of concept. A lot of customers are starting off in one place in their environment and moving up.”

12:30p
Big Data: What It Means for Data Center Infrastructure

Krishna Kallakuri is a founding partner, owner and vice president of DataFactZ. He is responsible for executing strategic planning, improving operational effectiveness, and leading strategic initiatives for the company.
Today, we collect and store data from a myriad of sources such as Internet transactions, social media activity, mobile devices and automated sensors to name a few. Software always paves the path for new and improved hardware. In this case, Big Data, with all its computing and storage needs, is driving the development of storage hardware, network infrastructure and new ways of handling ever-increasing computing needs. The most important infrastructure aspect of Big Data analytics is storage.
Capacity
Data over a petabyte in size is considered Big Data. Because the amount of data increases rapidly, storage must be highly scalable as well as flexible, so the entire system doesn’t need to be brought down to add capacity. Big Data translates into an enormous amount of metadata, which a traditional file system cannot support. To maintain scalability, object-based file systems should be leveraged.
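To illustrate why a flat, metadata-rich namespace scales more gracefully for Big Data than a hierarchical directory tree, here is a toy in-memory object store. The class and its interface are illustrative assumptions for this sketch, not any particular product’s API:

```python
# Toy object store: objects live in a flat namespace keyed by an opaque
# ID, and each object carries its own metadata. There is no directory
# tree to walk or rebalance as the object count grows.

import uuid

class ObjectStore:
    def __init__(self):
        self._objects = {}  # object_id -> (data, metadata)

    def put(self, data, **metadata):
        """Store data with arbitrary key/value metadata; return its ID."""
        object_id = str(uuid.uuid4())
        self._objects[object_id] = (data, metadata)
        return object_id

    def get(self, object_id):
        """Return the (data, metadata) pair for an object ID."""
        return self._objects[object_id]

store = ObjectStore()
oid = store.put(b"sensor readings", source="mobile", content_type="text/csv")
data, meta = store.get(oid)
```

Real object storage systems distribute this key-to-object mapping across many nodes, which is what lets them absorb metadata growth that would overwhelm a single file system namespace.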
Latency
Big Data analytics involves social media tracking and transactions, which are leveraged for tactical decision making in real time. Thus, Big Data storage cannot be latent, or the data risks becoming stale. Some applications might require real-time data for real-time decision making. Storage systems must be able to scale out without sacrificing performance, which can be achieved by implementing a flash-based storage system.
Access
Since Big Data analytics is used across multiple platforms and host systems, there is a greater need to cross-reference data and tie it all together in order to give the big picture. Storage must be able to handle data from various source systems at the same time.
Security
As a result of cross-referencing data at a new level to yield a bigger picture, new considerations for data level security might be required over existing IT scenarios. Storage should be able to handle these kinds of data level security requirements, without sacrificing scalability or latency.
Cost
Big Data also translates into big prices. The most expensive component of Big Data analytics is storage. Certain techniques, such as data de-duplication, using tape for backup, data redundancy, and building custom hardware instead of using off-the-shelf storage appliances, can significantly bring down costs.
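As an illustration of the de-duplication technique mentioned above, the following sketch splits data into fixed-size chunks, stores each unique chunk once under its content hash, and keeps only a list of hash references per write. The chunk size and data structures are assumptions chosen for the example:

```python
# Minimal content-hash de-duplication: identical chunks are stored once,
# so repeated data costs only a list of references, not extra capacity.

import hashlib

CHUNK_SIZE = 4096
chunk_store = {}  # sha256 hex digest -> chunk bytes

def dedup_write(data):
    """Store data as chunk references; duplicate chunks add no storage."""
    refs = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)  # keep first copy only
        refs.append(digest)
    return refs

def dedup_read(refs):
    """Reassemble the original bytes from a list of chunk references."""
    return b"".join(chunk_store[d] for d in refs)

# Two copies of the same 1 MB payload occupy the chunk store only once.
payload = b"x" * (1024 * 1024)
refs_a = dedup_write(payload)
refs_b = dedup_write(payload)
assert dedup_read(refs_a) == payload
```

Production systems typically use variable-size, content-defined chunking rather than fixed blocks, but the space-saving principle is the same.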
Flexibility
Big Data typically incorporates a Business Intelligence application, which requires data integration and migration. However, given the scale of Big Data, the storage system should be designed so that data migration is unnecessary, while remaining flexible enough to accommodate different types and sources of data, again without sacrificing performance or latency. Care should be taken to consider all possible current and future use cases and scenarios while planning and designing the storage system.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

2:30p
Optimize Storage to Improve Cloud and Virtualization Performance

The reality of today’s IT world is that there is more data, many more users and new requirements around next-generation infrastructure. As users consume more resources and more information, the IT department is left with the challenge of running a lean, optimized environment.
A large part of any modern-day infrastructure will be the storage component because it is one of the central pieces behind a cloud and virtualization environment. For many organizations, NAS implementations are growing out of control and wreaking havoc for IT. The causes are many and varied. Applications leveraging NAS vary from virtual server and virtual desktop implementations to digital imaging, computer generated animation rendering, energy exploration, and financial modeling—not to mention the billions of users around the globe who continually create and consume unstructured data such as video, audio, and text files.
So why not insert an appliance that’s capable of optimizing the entire storage operation? The Avere FXT Series provides high-performance, scale-out clustering for NAS environments. Avere FXT Edge filers are designed to optimize NAS environments by improving application performance, delivering global data access regardless of the data location, and offering both capital and operational cost savings. The Avere appliance is equipped to forgo conventional remedies calling for expensive storage controllers and disk over-provisioning that result in extra hardware, management, and power costs.
In this white paper, you will learn the key features that improve and optimize your platform. These include:
- Dynamic tiering that places data on the most effective storage tier to ensure optimal performance and efficient resource utilization of the high-speed tiers.
- Non-disruptive, scale-out clustering with linear performance scaling.
- Wide area file services to ensure high-performance data access from any location, including private compute or storage clouds.
- File system virtualization and visibility that enables all NAS Core filer devices to be viewed and managed as a single, logical file system global namespace.
This white paper also examines the Avere FXT Edge filer from a lab perspective. ESG Lab performed hands-on evaluation and testing of Avere Edge filer appliances with AOS 3.0 at the Avere corporate headquarters in Pittsburgh, Pennsylvania. Testing was designed to demonstrate how the Avere global namespace architecture enables advanced data management capabilities using industry-standard tools, protocols, and methodologies. Also covered in the validation were the advanced data management features FlashMove and FlashMirror.
Download this white paper today to learn how Avere improves both performance and management with its Edge-Core Architecture. You’ll also learn how Avere FXT Edge filers are built using RAM and SSD, as well as SAS HDDs, utilizing automatic tiering across both the Edge filer cluster tiers and the multi-vendor back-end Core filers to accelerate performance and minimize cost. As your infrastructure grows and expands, the platform remains a single, self-contained logical unit for management and client access. In turn, this creates an optimized environment capable of greater scale and resiliency.

3:30p
Strata Conference + Hadoop World

Strata Conference + Hadoop World, where the discussion is focused on big data, data science, and pervasive computing, will be co-presented by O’Reilly Media and Cloudera on October 28-30 in New York City.
The event brings together influential decision makers, architects, developers, and analysts to consider the future of their businesses and technologies. Over the course of three days participants can attend keynotes and intensely practical, information-rich sessions exploring the latest advances, case studies, and best practices.
Since the two events joined forces last year, Strata + Hadoop World has also been one of the largest gatherings of the Apache Hadoop community in the world, with an emphasis on hands-on and business sessions covering the Hadoop ecosystem.
The event also includes:
- A Sponsor Pavilion with key players and latest technologies
- A vibrant “hallway track” for attendees, speakers, journalists, and vendors to debate and discuss important issues
- Plenty of events and networking opportunities to meet other business leaders, data professionals, designers, and developers
For more information and registration, please visit the Strata + Hadoop World website.
Venue
New York Hilton Midtown (map)
1335 Avenue of the Americas
New York, New York, 10019
See website for hotel information.

5:05p
Data Center Jobs: ABM Facility Services

At the Data Center Jobs Board, we have a new job listing from ABM Facility Services, which is seeking a Chief Engineer (Critical Facilities) in Hillsboro, Oregon.
The Chief Engineer (Critical Facilities) is responsible for all start-up and continuing operations of a large data center; the role requires 7-10 years in critical facilities operations, with 3-5 as Chief or Assistant Chief. The position covers all engineering operations and mechanical/electrical maintenance of the facility under the direction of building management and the ABM Engineering Branch Manager, as well as the continued and uninterrupted operation of the raised floor 24x7x365. To view full details and apply, see job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

8:10p
EMC Revamps VNX For Flash Storage

EMC today introduced new products for its VNX series of midrange storage. (Photo: EMC)
EMC announced technology advances in the EMC VNX line of midrange storage, new capabilities for the EMC VSPEX, and the upcoming general availability of a new Software-Defined Storage platform. EMC also previewed “Project Nile”, a commercially-available complete, Web-scale storage infrastructure for the data center.
“Customers are demanding more performance and efficiency from their current data center infrastructure while, at the same time, deploying new architectures for their next generation mobile and Web applications,” said EMC President and Chief Operating Officer David Goulden. “By fully embracing and exploiting disruptive technologies such as Intel MultiCore, virtualization and flash, EMC is providing customers with the products and solutions they need to help transform their IT department, not only delivering unprecedented levels of performance and efficiency but also providing the agility needed for their business to remain competitive.”
EMC announced a new selection of midrange VNX unified storage systems targeting application performance, storage efficiency, data protection, data availability and ease of use: VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, VNX8000 and VNX-F. Through new MCx software, the new VNX fully unleashes the power of flash, accelerating application and file performance by up to four times. EMC has a number of videos of partners and customers talking about their experience with EMC VNX here.
“The midrange market faces some serious challenges, among them being incredible growth, increasing management complexity, pressures to remain competitive, and budgets that rarely grow,” said Ashish Nadkarni, Research Director, Storage at IDC. “Customers require technologies that squeeze every last drop of value out of their assets in both physical and virtual environments – and for their sake they better have flash technology as part of their plan. In keeping with its track record in the storage market, EMC has delivered a new storage platform that presents a fundamental change to how midrange storage customers can make the most out of flash within their arrays – compounded with a totally new level of price and performance that the midrange storage market has not witnessed before.”
A new flash-only VNX configuration (VNX-F) is also now available for environments demanding higher performance at lower latency for long periods of time. VNX-F delivers consistent high performance at lower latency compared to all disk or hybrid versions of the VNX. New EMC XtremSW Cache 2.0 software offers deeper integration with EMC arrays including the new VNX Series—further driving down latency by 65 percent.
EMC announced that the ViPR Software-Defined Storage Platform is planned to be generally available later this month. ViPR is scheduled to include both the ViPR Controller and ViPR Object Data Services. ViPR Object Data Services gives customers the ability to view objects as files, providing file access performance without the latency inherent in current object storage models. ViPR-supported storage includes the new EMC VNX unified storage platform as well as existing EMC VNX, EMC VMAX, EMC VPLEX, EMC Isilon, EMC RecoverPoint and third-party arrays including NetApp storage. ViPR provides the foundation for building Web-scale data centers without the need to hire thousands of technical experts to create a custom environment.
“Cisco and EMC have partnered closely to deliver three paths to speed our customers’ journey to the cloud: custom-designed infrastructure, validated reference architectures via EMC VSPEX Proven Infrastructure, and pre-integrated converged infrastructure with VCE Vblock Systems,” said Satinder Sethi, Vice President, Data Center Group at Cisco. “Over the past year, Cisco solutions integrated with offerings from EMC and VCE have generated significant momentum with customers and partners. Today, Cisco and EMC have hundreds of joint channel partners and thousands of joint customers around the world. Together, Cisco and EMC plan to accelerate this success with our mutual channel partners. We also believe that EMC’s next-generation VNX technology will complement Cisco’s Unified Compute and Unified Fabric solutions, helping customers maximize their existing infrastructure and further simplify cloud deployments.”
“Today is a significant milestone in EMC’s vision to deliver customers – both large enterprises and service providers – the foundation on which to build a Web-scale data center capable of growing to tens and hundreds of petabytes of information,” said Amitabh Srivastava, President, Advanced Software Division at EMC. “By delivering ViPR ahead of industry expectations, we will provide our customers with a lightweight, software-only approach to storage management and a foundation for next-generation applications. This approach not only solves the problems they face today, but provides a path to the future.”

8:30p
Windstream Moving Into Sabey’s Intergate.Manhattan

Windstream has leased an 11,000 square foot space at Sabey’s Intergate.Manhattan (pictured above) for a central office and satellite antenna farm. (Photo: Sabey Data Centers)
Sabey Data Center Properties has a new tenant for its Intergate.Manhattan project. Telecom provider Windstream Corp. has selected Sabey’s New York facility at 375 Pearl Street as the site for a new central office to provide capacity for future growth as well as further protection against natural disasters like Superstorm Sandy.
Windstream will be a powered shell customer at Intergate.Manhattan, building out its own 11,000 square foot facility. Windstream has a 15-year lease for 664 kW of critical power, with the option to expand to 1 megawatt. Windstream also plans to install a major satellite communications center at Intergate.Manhattan, with an antenna farm on the roof of the 32-story data center.
“Sitting at a confluence of the world’s transatlantic cable and fiber routes, Intergate.Manhattan is a crucial presence as our Sabey Data Center network expands,” said John Sabey, President of Sabey Data Center Properties. “Equally important, Intergate.Manhattan will be the next and best carrier hotel on the East Coast, offering unprecedented opportunities for network carriers to expand their customer operations.”
Intergate.Manhattan will be Windstream’s third central office in Manhattan and its fourth serving the New York metropolitan area.
“Intergate.Manhattan is a secure, hardened site that provides Windstream with important network diversity as well as added protection against natural disasters like Superstorm Sandy,” said Joe Marano, Executive Vice President of Network Operations for Windstream. “In addition, it offers ample room for expansion as more and more business customers choose Windstream’s data, voice, network and cloud solutions.”
Impact of Sandy
Sabey executives said the Windstream lease demonstrates the impact of Superstorm Sandy in the site selection decisions of data center customers in the New York market.
“Clearly, the fact that Intergate.Manhattan was untouched by Superstorm Sandy was a major factor in Windstream’s decision to expand at 375 Pearl Street,” Daniel Meltzer, Sabey Vice President of Sales and Leasing, said. “This expansion will ‘future proof’ Windstream for the next 15 years.”
Sabey has scheduled final commissioning for Intergate.Manhattan for October, and is now commencing mission-critical operations at the 1-million-square-foot tower. Sabey, a Seattle-based developer, outfitted the 375 Pearl Street property with all new core infrastructure and upgraded the power capacity from 18 megawatts to 40 megawatts.
The building was developed in 1975 as a Verizon telecom switching hub and later served as a back office facility. Verizon continues to occupy three floors, which it owns as a condominium. The property was purchased in 2007 by Taconic, which later abandoned its redevelopment plans. Sabey and partner Young Woo acquired the building in 2011.
Sabey now operates 3 million square feet of data center space as part of a larger 5.3 million square foot portfolio of owned and managed commercial real estate. The company has developed a national fiber network to connect its East Coast operations with its campuses in Washington state, where it is the largest provider of hydro-powered facilities. Sabey’s data center properties include the huge Intergate.East and Intergate.West developments in the Seattle suburb of Tukwila, the Intergate.Columbia project in Wenatchee and Intergate.Quincy.
Michael Morris of Newmark Grubb Knight Frank executed the point-of-entry lease on behalf of Sabey Data Centers. Michael Rareshide of Partners National represented Windstream in the lease negotiations.