Data Center Knowledge | News and analysis for the data center industry
Thursday, February 28th, 2013
12:30p

HP, Dell Announce New Big Data Analytics Solutions

Here’s a roundup of some of this week’s headlines from the Big Data sector:
HP leverages Hadoop to provide context for big data. HP (HPQ) announced new offerings to help organizations gain security intelligence from large data sets to better detect and prevent threats. The security information and event management (SIEM) capabilities of HP ArcSight, combined with the HP Autonomy IDOL content analytics engine, automatically recognize the context, concepts, sentiments and usage patterns in how users interact with all forms of data. “Many organizations have not been able to access the critical information they need to combat potential threats,” said Art Gilliland, senior vice president and general manager, Enterprise Security Products, HP. “With the integration of cloud monitoring, content analytics and Big Data processing, HP provides clients with the context needed to effectively stop potential breaches.”
Dell advances Kitenga Analytics. Dell announced the latest release of its Kitenga Analytics solution, which allows data scientists to analyze structured, semi-structured and unstructured data stored in Hadoop. Version 2.0 of the software adds new search, indexing and sentiment analysis functionality, has additional support for Predictive Modeling Markup Language (PMML) and combines search and analytics in a single, unified environment. “Dell is committed to building a complete analytical fabric of all data sources to reduce the cost, complexity and risk that companies face as they wade through Big Data to improve decision making,” said Darin Bartik, executive director, product management, Information Management at Dell Software. “With Kitenga Analytics and Toad Business Intelligence Suite, Dell Software is creating a world-class solution set to transform how companies search, correlate and use information to solve their business problems and achieve a bigger competitive edge.”
DataStax Enterprise 3. DataStax announced the general availability of DataStax Enterprise 3 (DSE), its Apache Cassandra-based big data platform. The new release provides what the company calls the most comprehensive security feature set of any NoSQL platform, while still delivering its core benefits of scalability, easy manageability and continuous availability. DSE 3 is a complete, integrated big data platform that combines a production-certified version of Cassandra with Apache Solr and Apache Hadoop to deliver continuous availability support and performance across multiple data centers. “DSE is already running mission-critical apps but one hurdle to widespread adoption remains – security,” said Billy Bosworth, CEO, DataStax. “Now with DSE 3, not only do we meet the needs of app and database teams, but also those of the chief security officer.”
ParStream and Colfax partner. Data analytics provider ParStream and Colfax International announced a partnership aimed at creating a plug-and-play solution for big data analytics. With the ParStream technology pre-loaded on a customized Colfax server, deployment will be faster and more seamless, since it virtually eliminates any additional effort on the part of the customer. This joint software-hardware solution will enable customers across several industries to gain new insights from big data in real time. “At ParStream, we are always on the lookout to make life simpler for our customers. We found a great partner in Colfax; they understood our goal and are working with us to achieve it,” said Michael Hummel, CEO, ParStream. “The servers are pre-qualified by ParStream for compatibility and performance. This ensures that the load on the CPU is reduced, paving the way for a less stressful I/O environment. We look at it as enabling out-of-the-box Big Data Analytics.”

1:30p
A Great Time to be in the Data Center Industry

Tom Roberts is President of AFCOM, the leading association supporting the educational and professional development needs of data center professionals around the globe.
Today, you can type the words “data center design and build services” into an Internet search engine and it will return results literally in the millions.
A decade or so ago, however, data center specialists were scarce. Finding an architectural and engineering group that understood the complexities of the data center and spoke our language proved challenging, to say the least.
It was certainly a source of frustration for me and my industry peers. I was director of data center operations for a healthcare group back then, and it became painfully obvious that we had outgrown our second-floor office building location and needed more space and efficiency to accommodate present and future growth.
We approached the project logically, looking at site locations, talking with real estate groups, reviewing utility capabilities and conducting site evaluations. Yet, each time we met with prospective builders and/or designers and brought up our needs for a “hardened” data center with built-in redundancies, N+1 cooling, hot and cold aisles, raised flooring, emergency backups, etc., their eyes glazed over.
Most of them, while completely proficient in building and designing other structures, didn’t fully grasp the concept that data centers must be able to withstand power outages, natural disasters and equipment failure on a 24/7 basis. It took just as much effort to explain the “room to grow” aspect of the project.
Then, during the actual design process, it seemed that regardless of what we discussed in meetings, something different came back in the design plans. An obvious gap in communication and imbalance between demand for, and supply of, data center specialists existed.
Different Ecosystem Today
Thankfully, that changed soon enough. IT gained clout and visibility with the maturity of companies like Yahoo and Amazon, both start-ups of 1994-1995, and later Facebook. Data centers came into a whole new light and had to step up their game to keep pace with the evolution of computing needs. It often required complete redesigns or building from scratch, and the market responded admirably.
The need for businesses to have an online presence to complement brick-and-mortar operations to stay competitive ushered in the era of more data, more applications, more servers, more end users, and it all took more energy and “out-of-the-box” thinking to accomplish.
For example, I didn’t have the multi-million dollar funding required to install dual power feeds for our facilities, so we implemented emergency redundant backup systems instead. It took a lot of meetings and conversations to reach that understanding. “Back in the day,” data centers housed a bunch of old servers that ran at 2-3 kW per rack, took up a lot of space and were not very efficient. Now it’s all about consolidation and doing more with less, with racks averaging 8-12 kW and in many cases generating much more.
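The consolidation math behind “doing more with less” is straightforward. The 2-3 kW and 8-12 kW per-rack densities are the article’s figures; the 240 kW total IT load below is an invented number purely for illustration:

```python
import math

def racks_needed(total_it_load_kw: float, kw_per_rack: float) -> int:
    """Racks required to host a given IT load at a given rack density."""
    return math.ceil(total_it_load_kw / kw_per_rack)

# Hypothetical 240 kW of IT load at legacy vs. modern densities
legacy = racks_needed(240, 3)    # ~3 kW/rack, "back in the day"
modern = racks_needed(240, 12)   # ~12 kW/rack, consolidated
print(legacy, modern)            # 80 racks vs. 20 racks for the same load
```

The same compute footprint shrinks to a quarter of the racks, which is exactly the space-and-efficiency pressure the author describes.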
Partners Abound
The good news is you won’t have any problem finding companies that not only speak our language, but do it fluently. The challenge is to find one that will work with you, and at the same time, bring fresh ideas to the table. I recommend you zero in on those that not only understand, listen, and contribute, but that fit your culture too—a major factor in the selection process.
Your company likely has one of three cultures: a process culture, which follows the letter of the law and doesn’t want to bend or break any rules; a normative culture, with very stringent procedures and very high standards of ethics, where procedures match ethics; or a cross between the two, a collaborative culture, which suggests a higher threshold for creativity and a willingness to combine efforts.
So, for example, if you come from a process culture and try to work with a company that just fires out ideas with little regard for getting from A to B to C in that order, your clashing styles will prevent progress and increase frustration. It’s in your best interest to seek out companies that match your culture, a luxury that should be appreciated and not taken for granted.
It is a great time to be in the data center industry – just think what we will know tomorrow.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

2:00p
Microsoft Joins Open Data Center Alliance

Microsoft has joined The Open Data Center Alliance (ODCA), an industry group that publishes usage models for Open Specifications for Cloud Computing. The Open Data Center Alliance comprises more than 300 companies that represent over $100 billion in annual IT spending.
“In line with Windows Azure’s commitment to openness and interoperability, we are pleased to join ODCA and work with industry leadership on standards for the cloud,” said Bill Hilf, general manager, Windows Azure. “We are dedicated to serving the industry and customers by providing an open, reliable and global approach to the cloud, and we look forward to contributing to the ODCA’s mission.”
ODCA aims to be a voice for enterprise IT in articulating the requirements for the transforming enterprise IT landscape with focus on topics including open, interoperable delivery of compute infrastructure as a service, cloud security, and best practices for adoption of big data analytics. Microsoft brings a valuable perspective to the organization.
“The ODCA brings together leaders from across industries to work together towards a vision of open, industry standard cloud solution delivery,” said Mario Mueller, BMW’s Vice President of IT Infrastructure and Chair of the Alliance. “In order to truly accelerate availability of cloud services, enterprise IT needs to work closely with cloud service and solution providers. Microsoft’s participation is a valuable addition to the organization’s mission, and we heartily welcome their membership.”

3:37p
Verne Global Orders More Modules for its Iceland Data Center

A Colt data center module being moved into its home at the Verne Global data center in Iceland. Verne has ordered additional modules for an expansion of its data center near Reykjavik. (Photo: Colt)
There will soon be more modular data centers loaded onto ships and heading for Iceland. Data center developer Verne Global has selected Colt’s ftec modular data centre for an expansion of its facility in Keflavik, Iceland. The modules will be fabricated at Colt’s manufacturing facility in northern England, shipped to Iceland in May, and assembled onsite, ready to go live in the third quarter of 2013.
The Verne Global facility, built in a former NATO command center, takes advantage of Iceland’s vast supply of renewable energy (hydroelectric and geothermal), along with a cool climate that allows the use of air-side free cooling for the entire year. Colt customized its modular data center hall design, equipping it with cooling modules that allow Verne to cool servers using air from outside the data center. In winter months, the system gives Verne the option of mixing the chilly outside air with exhaust heat from servers.
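The winter-mode mixing described above can be sketched as a simple energy balance: blend enough warm server exhaust into the cold outside air to hit a comfortable supply temperature. This is an illustrative model only, not Colt’s actual control logic, and the temperatures are invented example values:

```python
def outside_air_fraction(t_outside: float, t_exhaust: float, t_target: float) -> float:
    """Fraction of outside air to blend with server exhaust air so the
    mixed supply stream hits a target temperature.
    Simple steady-state energy balance, equal specific heats assumed:
        t_target = f * t_outside + (1 - f) * t_exhaust
    """
    if not (t_outside <= t_target <= t_exhaust):
        raise ValueError("target must lie between outside and exhaust temps")
    return (t_exhaust - t_target) / (t_exhaust - t_outside)

# Example: -5 C Icelandic winter air, 35 C server exhaust, 18 C target supply
f = outside_air_fraction(-5.0, 35.0, 18.0)
print(round(f, 3))  # 0.425 -> blend ~42.5% outside air with ~57.5% exhaust
```

Colder outside air means a smaller outside-air fraction is needed, which is why recirculating exhaust heat makes year-round free cooling workable in a climate like Iceland’s.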
Existing tenants at Verne Global include automaker BMW and managed hosting provider Datapipe. The company says demand remains strong, prompting the need for the additional modules.
“As cloud, mobile and big data applications drive organizations to look for cutting edge solutions for their data storage needs, interest in our Icelandic facility continues to gain momentum and we find ourselves needing to expand our current footprint,” said Jeff Monroe, CEO for Verne Global. “Our partnership with Colt allows for flexible and rapid expansion of our business with a superior product that meets our specific requirements.”
Focus on Flexible Design, Phased Growth
Colt’s ftec design, introduced in November, is the latest version of the UK company’s modular data center. It uses a standardized, reusable design that can deliver excellent energy efficiency. Colt introduced its modular offering in 2010, offering more than 120 design variations and the ability to deliver modules to either a Colt facility or a customer-owned site. With ftec – the “f” emphasizing the flexibility of the product – Colt has introduced features to further reduce risk and deploy capacity efficiently, particularly when the data center is running at low load.
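A toy model shows why low load is the hard case that modular, phased builds target: fixed mechanical and electrical overhead is spread over less IT load, inflating PUE (total facility power divided by IT power). All numbers below are hypothetical illustrations, not Colt figures:

```python
def pue(it_load_kw: float, fixed_overhead_kw: float, variable_ratio: float) -> float:
    """Simple PUE model: total power = IT load + fixed overhead
    (UPS losses, base fan/pump power) + overhead that scales with IT load.
    """
    total = it_load_kw + fixed_overhead_kw + variable_ratio * it_load_kw
    return total / it_load_kw

# Hypothetical facility: 100 kW fixed overhead, 15% load-proportional overhead
print(round(pue(1000, 100, 0.15), 2))  # full load: 1.25
print(round(pue(250, 100, 0.15), 2))   # quarter load: 1.55
```

Deploying capacity in modules keeps the installed fixed overhead closer to the actual load, so the partial-load penalty shrinks.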
“By putting flexibility right at the heart of our data centres both in the design phase and throughout the life cycle, we achieve market-leading cost savings for customers in terms of energy efficiency and an unrivaled time to market of less than four months,” said Guy Ruddock, Vice President of Design and Delivery for Colt. “In the case of Verne Global’s campus, we’ve specifically customized our design to fully harness Iceland’s fresh air cooling which is available 365 days a year. This, coupled with the unique 100 percent renewable, dual sourced power supplying the data centre hall, provides industry leading efficiency and reliability.”
The initial phase of the Verne Global deployment involved moving a 5,000 square foot data center nearly 1,000 miles across the ocean. Colt loaded 13 of its factory-built modules onto a container ship, which sailed them from northern England to Reykjavik. This video provides an overview of the logistics involved in this unusual deployment.

4:03p
Baidu Deploys Marvell ARM-Based Cloud Server

A look at the new Baidu Cloud server, powered by ARM chipsets from Marvell. (Photo: Marvell)
There’s been lots of buzz about adapting the ARM chips that power iPhones and iPads into servers, but few examples of these processors being used in production. Here’s one: Marvell (MRVL) said this week that its chipset is included in the first commercial deployment of ARM-based servers at Chinese search engine giant Baidu.
Baidu, one of China’s largest Internet companies, will use Marvell’s ARMADA XP server SoC (System on Chip), an implementation of the ARM architecture, in its Baidu Cloud storage application. The new servers will help slash power usage across Baidu’s growing server footprint; that growth has prompted the company to focus on energy efficiency and sustainability.
“The world’s first large-scale deployment of ARM servers in the data center represents Baidu’s leadership in cloud computing system infrastructure,” said Wang Jing, vice president of engineering at Baidu. “In order to bring greater storage density to the data center, lower TCO (total cost of ownership) and deliver efficiency to a new level, Baidu integrated leading design capabilities with Marvell’s advanced chipset solutions. This project represents Baidu’s success in building cloud computing data centers.”
Baidu has customized its ARM servers to work with its cloud storage requirements and is using the complete Marvell platform solution of quad core ARM-based ARMADA XP SoC products, including its CPU, storage controller, and a 10Gb Ethernet switch. In addition, Marvell has incorporated its low-power Ethernet physical layer (PHY) transceivers. The ARMADA XP chipset is at the core of Dell’s “Copper” ARM server as well.
“Marvell is proud that both our passion to drive breakthrough technology and innovation and our vision to make an early investment in ARM more than a decade ago has led to this important milestone of becoming the first semiconductor company in the world to commercially launch an end-to-end SoC platform that supports a new era of server demands in the modern data center,” said Ramesh Sivakolundu, vice president, Cloud Services and Infrastructure Business Unit, Marvell.
“Based on the ARM architecture and combined with our dedicated engineering, the Marvell server SoC is unique in its ability to deliver the low power consumption, high storage and compute density that can help companies cost-effectively support a new era of cloud- and Web-based services,” Sivakolundu continued. “The Baidu implementation brings Marvell’s vision for ARM architecture full circle.”
7:48p
Dell, Riverbed Announce Virtual Desktop Solutions

Dell and Riverbed announced Virtual Desktop Infrastructure (VDI) solutions with VMware as the VMware Partner Exchange conference took place this week in Las Vegas.
Dell and VMware optimize solution for desktop virtualization. Dell and VMware announced Dell DVS Enterprise - Active System 800, a pre-integrated system specifically designed for VMware-based VDI workloads. Dell’s Active System serves as the foundation for its Desktop Virtualization Solutions (DVS) portfolio for provisioning VDI workloads. DVS Enterprise – Active System 800 with VMware Horizon View 5.2 combines Dell server, storage, networking, thin clients and infrastructure management software into a pre-integrated system to provide general-purpose virtualized resource pools for virtual desktops. In addition, a full suite of consulting, deployment and support services is available to ease integration and ensure robust operation of the system once deployed.
“Dell and VMware have intensified our strategic partnership for end-to-end computing,” said Maryam Alexandrian, executive director of worldwide sales, channels and field marketing, Dell Cloud Client Computing. “Together we are deepening our engagement with our channel partners, accelerating our joint engineering and solution collaboration, and investing in go-to-market initiatives to address opportunities in the mid-market.”
Riverbed strengthens VMware alliance.
Riverbed (RVBD) announced the availability of solutions developed in collaboration with VMware that can provide a reliable and consistent desktop virtualization end-user experience for organizations that deploy VMware Horizon View 5.2. Riverbed Granite has achieved VMware Ready status, helping organizations overcome the challenges of wide area networks (WANs), such as limited bandwidth, latency, and unforeseeable outages, when delivering virtual desktops to remote locations. A new Stingray Traffic Manager gives fine-grained application-level control and high availability for end users who require constant access to desktops, applications and data from any device or location.
“By deepening our partnership with VMware, we give enterprises the ability to provide end users a seamless experience when accessing their desktop, applications and data regardless of location,” said Venugopal Pai, vice president, Global Alliances and Business Development at Riverbed. “Simultaneously, we are enabling enterprises to benefit from greater control and flexibility in distributed organizations.”

10:30p
Vantage Lines Up 1 Megawatt Lease in Santa Clara

The exterior of the V2 data center on the Vantage Data Centers campus in Santa Clara, Calif. (Photo: Vantage)
Vantage Data Centers has signed a new long-term customer lease for a 1 megawatt data hall on its campus in Santa Clara, California, the company said this week. The deal continues the busy pace of leasing in Santa Clara, the hub of data center activity in Silicon Valley.
Vantage said construction is already underway on the new customer installation, with opening anticipated in the second quarter. The company also said it has completed commissioning of an additional 3 megawatts of critical power at its V1 facility, bringing the total power capacity at its Santa Clara campus to 23 megawatts.
Vantage’s first building, a 6 megawatt retrofitted facility known as V3, is fully leased to four tenants, including Mozilla Corp. Vantage then leased its entire V2 data center to a single tenant in a 9-megawatt lease. Telx is the anchor tenant in V1, where Vantage is now building out the remainder of the building in phases.
“Our growth continues in Santa Clara and we’re excited to see that our custom solution model continues to be effective in a very competitive marketplace,” said Vantage CEO Jim Trout. “As we continue to build new solutions to support our leasing back-log and lease new space, we remain convinced that the industry leading customers with whom we work continue to value corporate cultural alignment, demonstrated delivery capabilities and customization to support their business objectives.”
Vantage says its 18-acre Santa Clara campus can support up to 47 megawatts of critical power demand and can deploy up to 15 megawatts of additional capacity. That’s a point of interest in Santa Clara, the data center capital of Silicon Valley, where the supply of data center space has been closely watched.
Santa Clara Market Dynamics
Just a year ago many believed the Santa Clara market for wholesale data center space had too much supply, with DuPont Fabros, Vantage, Digital Realty and CoreSite all bringing new space online. But active leasing in the second half of the year – including two large leases by DuPont Fabros that brought its huge facility to 75 percent occupancy – has changed the market dynamics.
Digital Realty, which has the largest data center footprint in Santa Clara, estimates that there were 23 megawatts of available wholesale data center space in Silicon Valley at the end of 2012. But the company’s data shows that the Silicon Valley market absorbed 33 megawatts of space last year and 36.2 megawatts in 2011, which suggests that at current demand levels, the existing space could be filled as soon as the third quarter of 2013.
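The runway implied by Digital Realty’s figures works out as follows. The available-supply and absorption numbers are from the article; averaging the two years of absorption into a monthly run rate is our simplifying assumption for illustration, not Digital Realty’s stated method:

```python
available_mw = 23.0          # wholesale supply available at end of 2012
absorption_2012_mw = 33.0    # space absorbed in Silicon Valley in 2012
absorption_2011_mw = 36.2    # space absorbed in 2011

# Average the two years into a monthly absorption run rate
avg_monthly_mw = (absorption_2012_mw + absorption_2011_mw) / 2 / 12

months_of_supply = available_mw / avg_monthly_mw
print(round(months_of_supply, 1))  # ~8.0 months from end of 2012
```

Roughly eight months from the end of 2012 lands in the third quarter of 2013, consistent with the article’s projection.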
That view aligns with the assessment of DuPont Fabros. “We continue to see good demand and are optimistic about fully leasing the (SC1) property by mid-2013,” said President and CEO Hossein Fateh.
But the ready inventory and competitive nature of the Santa Clara market has meant a slightly lower return. “When we delivered the property in the fourth quarter of 2011, our goal was an 18-month lease up and achieving a 12 percent unlevered return,” said Fateh. “With the most recent leasing in the fourth quarter, we’re now targeting a 10 percent unlevered return.”