Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, March 18th, 2014

    12:00p
    Servergy Sees a Future of Denser, Greener Servers

    Servergy founder and CEO Bill Mapp in front of a rack of servers at a recent industry event. The company has unveiled its initial line of CleanTech servers. (Photo: Rich Miller)

    SAN JOSE, Calif. – Bill Mapp wants to create data centers that are both denser and greener. In 2009 Mapp founded Servergy, and set out to rework server design to pack more computing horsepower into a smaller footprint.

    After four years in the lab, the company announced $20 million in funding late last year to support the launch of its first CleanTech server, a “brawny core” server about the size of a legal pad. Each CleanTech CTS-1000 server is a half rack unit deep and uses about 100 watts of power, allowing users to build high-density solutions in a compact footprint.

    But Servergy’s ambitions go beyond just rethinking the form factor and power envelope for servers. At last month’s Open Compute Summit, Servergy unveiled a prototype for a highly flexible new 64-bit server platform. The Cleantech Multi-Architecture Platform will support a range of processors – including x86, ARM, Power or MIPS – to create specialized computing solutions for applications that must move large volumes of data at high speed.

    A Sweet Spot in Acceleration

    “We’re not just a server company,” said Mapp. “I/O acceleration is really our sweet spot in this big data world we’re living in.” That need for speed and scale matters in many areas of next-generation data center architecture.

    Servergy is an IBM business partner, and in November it raised $20 million in funding from accredited investors to support the launch of its CleanTech servers.

    CleanTech servers are based on Freescale Power chips and run PowerLinux software from SUSE. They are available in either DuoPak or QuadPak configurations, giving users options for density and form factor within their racks. Here’s a look at a CTS-1000 server:


    A close look at Servergy’s CTS-1000 server, which is about the size of a legal pad. (Image: Servergy)

    The servers are designed and built in the U.S., weigh only nine pounds, and feature two 10GbE and two 1GbE ports. The CleanTech CTS line uses Power Architecture technology, an instruction set architecture that spans applications from satellites to automotive control to servers. Initially developed by IBM, Motorola and Apple, Power Architecture technology has since gained support from the world’s most innovative brands and become the preferred platform for many mission critical applications and markets.

    Engineering Key to Efficiency

    “We went all the way back to the innovations lab drawing board and used a zero-sum step-function engineering approach from the ground up to create the next generation of hyper-efficient enterprise servers,” said Mapp. “Our servers are small, fast, efficient, cool, quiet, light-weight, but also powerful and scalable enough to handle the rapidly growing data, space, cooling and energy demands of data center operators globally.”

    Servergy is active in the Open Compute Project, and at January’s Open Compute Summit in San Jose, Servergy Executive Vice President William Mapp provided an advance look at the company’s roadmap going forward.


    At the Open Compute Summit in January, Servergy’s William Mapp displays one of the company’s new motherboards that complies with the OCP GroupHug standard.

    William Mapp showed off a motherboard that aligns with the “GroupHug” project, which creates a common slot architecture specification for motherboards that supports processors from multiple vendors. He said the new board allows a holistic approach to IT infrastructure solutions that can accommodate a range of processor options.

    Mapp said the new technology could be ideal for network offload, offload engines, fabric controllers and ASIC implementations.

    “We want to be able to step back, look at the workload, and say ‘what is performing the best,’” said Mapp during his presentation. “We could go a lot of different ways. Benchmarking is required, and every watt makes a huge difference.”

    12:25p
    Tomorrow’s Data Centers: Mother Nature Cools Best

    Charles Doughty is Vice President of Engineering, Iron Mountain, Inc.

    CHARLES DOUGHTY
    Iron Mountain

    We’re all familiar with Moore’s Law, which states that the number of transistors on integrated circuits doubles approximately every two years. Whether we measure transistor growth, magnetic disk capacity, the square law of price to speed versus computations per joule, or any other measurement, one fact persists: they’re all increasing, and doing so exponentially. This growth is the cause of the density issues plaguing today’s data centers. Simply put, more powerful computers generate more heat, which results in significant additional cooling costs each year.

    Today, a 10,000 square-foot data center running about 150 watts per square foot costs roughly $10 million per megawatt to construct, depending on location, design and cost of energy. If the approximately 15 percent rate of data growth of the last decade continues over the next decade, that same data center would cost $37 million per megawatt. A full 30 percent of these costs are related to the mechanical challenges of cooling a data center. While the industry is experienced with the latest chilled water systems and high-density cooling, most organizations aren’t aware that Mother Nature can deliver the same results for a fraction of the cost.
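    A quick back-of-the-envelope check of that projection: compounding the construction cost at the annual growth rate implied by the $37 million figure (roughly 14 percent, an assumption made here for illustration) reproduces the numbers above. A minimal sketch in Python:

        # Sanity check of the cost projection quoted above.
        # Assumption: the cost per megawatt compounds at ~14 percent a year,
        # which is the rate implied by the $10M -> $37M figures.
        cost_per_mw_today = 10_000_000   # dollars per megawatt
        annual_growth = 0.14             # assumed; "approximately 15 percent" in the text
        years = 10

        projected = cost_per_mw_today * (1 + annual_growth) ** years
        print(f"Projected cost per MW after {years} years: ${projected / 1e6:.1f}M")   # ~$37M
        print(f"Mechanical/cooling share (30%): ${0.30 * projected / 1e6:.1f}M")       # ~$11M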

    The Efficiency Question: Air vs. Water

    Most traditional data centers rely on air for direct cooling. When we analyze heat transfer formulas, it turns out water is far more efficient at cooling a data center, and the difference is in the math, namely the denominator:

    [Figure: heat-transfer formulas for air and water]
    [Figure: air vs. water volume comparison]

    With the example above, the energy consumed by the 10,000 square-foot data center creates more than 5 million BTU per hour of heat rejection. Using the formulas in the figures above and assuming a standard delta T of 10 degrees, cooling the facility would require more than 470,000 cubic feet per minute (CFM) of air, but only 1,000 gallons of water per minute. The system would need 150 to 200 horsepower to convey that many cubic feet of air per minute, but only 50 to 90 horsepower to convey 1,000 gallons per minute – and the required flow of air is roughly 462 times the flow of water. If analyzed on a per-cubic-foot basis – one cubic foot of air to one cubic foot of water – water is actually about 3,400 times more efficient than air.
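    For readers who want to reproduce that arithmetic, the figures follow from the standard sensible-heat relations (roughly BTU/hr = 1.08 × CFM × ΔT for air and BTU/hr = 500 × GPM × ΔT for water). A minimal sketch, using the 5 million BTU/hr load and 10-degree delta T from the example; the exact air constant varies slightly with assumptions, which accounts for the small gap from the 470,000 CFM cited above:

        # Air vs. water flow needed to reject the example heat load, using the
        # standard sensible-heat formulas (constants are common rules of thumb).
        heat_load_btu_hr = 5_000_000   # heat rejection for the 10,000 sq ft example
        delta_t = 10                   # degrees F

        cfm_air = heat_load_btu_hr / (1.08 * delta_t)    # cubic feet of air per minute
        gpm_water = heat_load_btu_hr / (500 * delta_t)   # gallons of water per minute

        print(f"Air required:   {cfm_air:,.0f} CFM")             # ~463,000 CFM
        print(f"Water required: {gpm_water:,.0f} GPM")           # 1,000 GPM
        print(f"CFM-to-GPM ratio: {cfm_air / gpm_water:,.0f}x")  # ~463x, the '462 times' above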

    Physics 101: The Thermodynamics of the Underground

    However, for an underground data center, there’s more at work. In a subterranean environment, Mother Nature gives you a consistent ambient temperature of about 50 degrees, so you depend far less on mechanical cooling from the start, and you can gain further efficiencies by tapping an underground water source, or aquifer.

    The ideal environment for a subterranean data center is made of aquifers, or stone with open porosity such as basalt, limestone and sandstone; aquicludes, such as dense shales and clays, will not work as effectively. In a limestone subterranean environment, heat rejection can increase by anywhere from 4 to 500 percent because of the natural heat-sink characteristics of the stone: the limestone absorbs heat, which further reduces the need for mechanical cooling. The most appealing implication is that the stone can manage the energy fluctuations and peaks inherent to any data center.

    As the water system funnels 50-degree water from the aquifer to cool the data center, the heat is rejected into the water, which is then funneled back about 10 degrees warmer. Mother Nature deals with that heat by obeying the second law of thermodynamics, which governs equilibrium and the transfer of energy. For the subterranean data center operator, this means working within the conductivity of the surrounding rock, so it is important to understand the lithology and geology of the local strata, along with the effects of a continuous natural water flow and the psychrometric properties of air.

    The Cost of Efficiency

    Of course, there are other data center cooling strategies in use aside from subterranean lake designs, including well systems, well point systems and buried pipe systems, to name a few. Right now, well systems are being used in Eastern Pennsylvania to cool nuclear reactors producing hundreds of megawatts of energy with mine water. Well point systems are generally used in residential applications, but the concept doesn’t scale without becoming prohibitively expensive. Buried pipe systems are used quite a bit and require digging a series of trenches backfilled with a relatively conductive granular material, but beyond 20-30 kilowatts, this method does not scale well.

    How much does each of these methods cost? An underground geothermal lake design will cost less than $500 per ton, while well-designed chilled water systems range from $2,000 to $4,000 a ton. The discrepancy in cost is created by the mechanics – in a geothermal lake, there are no mechanics: water is simply pumped at grade. Well and buried pipe systems can cost more than $5,000 a ton, and these systems do not scale very well.

    By understanding Mother Nature and using her forces to our advantage, we can increase the capacity and further improve the effectiveness of the geothermal lake design. By drilling a borehole from the surface into the cavern, air transfer mechanisms can easily be incorporated; anytime the air at the surface is at or below 50 degrees, that cool air will drop into the mine. Even without motive force or air handling units, a four- to five-foot borehole can contribute about 30,000 cubic feet of air per minute. If an air handling unit is added, the 30,000 CFM of natural flow can easily become 100,000-200,000 CFM. What was a static geothermal system is now a dynamic geothermal cooling system with incredible capacity at minimal incurred cost.

    Opportunities for the Future

    When analyzing and predicting what data centers are going to look like in the future, a recurring theme emerges: simplicity and lower cost. Because of the cost pressures facing IT departments and CFOs alike, underground data centers using hybrid water, air and rock cooling mechanisms are an increasingly attractive option.

    There are even opportunities to turn these facilities into energy creators. For example, by adding power generating turbines atop boreholes, operators can harness the power of heat rising from the data centers below. Furthermore, by tapping into natural gas reserves, subterranean data centers could become a prime energy source, thus eliminating the need for generators and potentially achieving a power usage effectiveness measurement of less than one. The reality is that if you know Mother Nature well, you can work with her – she’s very consistent – and the more we learn, the more promising the future of data center design looks.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:00p
    Open Source Technologies Provide Cloud-Ready Big Data Control

    The amount of data traversing the modern cloud platform is breaking new ground. Annual global data center IP traffic is projected to reach 7.7 zettabytes by the end of 2017, according to the latest Cisco Global Cloud Index report. Overall, data center IP traffic will grow at a compound annual growth rate (CAGR) of 25 percent from 2012 to 2017.

    Now, much more than before, organizations are relying on large sets of data to help them run, quantify and grow their business. Over the last couple of years, already large databases have grown from gigabytes into terabytes and even petabytes.

    Furthermore, this data no longer resides within just one location. As these data growth numbers indicate, with cloud computing, this information is truly distributed.

    Big data and data science are taking off in pretty much every industry:

    • Science: The Large Hadron Collider produces about 600 million collisions per second. Even though researchers work with less than 0.001 percent of the sensor stream data, the data flow from all four LHC experiments represented an annual rate of about 25 petabytes before replication as of 2012, or nearly 200 petabytes after replication.
    • Research: NASA’s Center for Climate Simulation (NCCS) stores about 32 petabytes of climate observations and simulations on their supercomputer platform.
    • Private/Public: Amazon.com handles millions of back-end operations every day, as well as queries from more than half a million third-party sellers. The core technology that keeps Amazon running is Linux-based, and as of 2005 the company ran the world’s three largest Linux databases, with capacities of 7.8 TB, 18.5 TB and 24.7 TB.

    Organizations have been forced to find new and creative ways to manage and control this vast amount of information. The goal isn’t just to organize it, but to be able to analyze and mine the data to further help develop the business. In doing so, there are great open-source management options that large organizations should evaluate:

    Apache HBase: This big data management platform is modeled on Google’s powerful BigTable design. An open-source, Java-based, distributed database, HBase was designed to run on top of the already widely used Hadoop environment. As a powerful tool for managing large amounts of data, Apache HBase was adopted by Facebook to support its messaging platform.
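    As a rough sketch of what application access to HBase can look like, the snippet below uses the community happybase Python client against a hypothetical HBase Thrift endpoint; the host, table, column family and row keys are illustrative, not Facebook’s actual schema:

        # Minimal HBase read/write via the happybase client.
        # Assumes an HBase Thrift server is reachable at the host below;
        # table and column names are placeholders.
        import happybase

        connection = happybase.Connection('hbase-thrift.example.com')
        table = connection.table('messages')

        # Write one row; values live inside a column family ('msg' here).
        table.put(b'user42|20140318', {b'msg:body': b'hello', b'msg:sender': b'user7'})

        # Read the row back.
        row = table.row(b'user42|20140318')
        print(row[b'msg:body'])

        # Scan all rows for one user by row-key prefix.
        for key, data in table.scan(row_prefix=b'user42|'):
            print(key, data)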

    Apache Hadoop: Apache Hadoop quickly became a standard in big data management. When it comes to open-source management of large data sets, Hadoop is the workhorse for truly intensive distributed applications. The flexibility of the platform allows it to run on commodity hardware and to integrate easily with structured, semi-structured and even unstructured data sets.
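    To illustrate the programming model rather than any particular deployment, here is the classic word-count job written for Hadoop Streaming as a single Python script; the input and output paths and the streaming jar location in the comment are placeholders:

        #!/usr/bin/env python
        # Word count for Hadoop Streaming: the same script acts as mapper or
        # reducer depending on its first argument. Illustrative invocation:
        #   hadoop jar hadoop-streaming.jar -input /logs -output /counts \
        #       -mapper "python wordcount.py map" -reducer "python wordcount.py reduce" \
        #       -file wordcount.py
        import sys

        def mapper():
            # Emit "word<TAB>1" for every word on stdin.
            for line in sys.stdin:
                for word in line.split():
                    print(f"{word}\t1")

        def reducer():
            # Hadoop delivers mapper output sorted by key, so equal words are adjacent.
            current, count = None, 0
            for line in sys.stdin:
                word, n = line.rsplit("\t", 1)
                if word != current:
                    if current is not None:
                        print(f"{current}\t{count}")
                    current, count = word, 0
                count += int(n)
            if current is not None:
                print(f"{current}\t{count}")

        if __name__ == "__main__":
            mapper() if sys.argv[1] == "map" else reducer()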

    Apache Drill: How big is your data set? Really big? Drill is a great tool for very large data sets. By supporting HBase, Cassandra and MongoDB, Drill creates an interactive analysis platform that allows for massive throughput and very fast results.

    Apache Sqoop: Are you working with data locked inside an older system? That’s where Sqoop can help. It enables fast data transfers from relational database systems into Hadoop by leveraging concurrent connections, customizable mapping of data types, and metadata propagation. You can even tailor imports (such as new data only) to HDFS, Hive and HBase.

    Apache Giraph: This is a powerful graph processing platform built for scalability and high availability. Already used by Facebook, Giraph processes run as Hadoop workloads which can live on your existing Hadoop deployment. This way you can get powerful distributed graphing capabilities while utilizing your existing big data processing engine.

    Cloudera Impala: Impala sits on top of your existing Hadoop cluster and serves interactive queries against it. Where technologies like MapReduce are powerful batch processing solutions, Impala does wonders for real-time SQL queries. Basically, you get real-time insight into your big data platform via low-latency SQL.
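    As an illustrative sketch (not Cloudera’s official example), a low-latency query can be issued from Python through the impyla DB-API client; the host, port and table below are hypothetical:

        # Interactive SQL against Impala via the impyla client (DB-API style).
        # Host, port and table are placeholders; 21050 is Impala's usual
        # HiveServer2-protocol port.
        from impala.dbapi import connect

        conn = connect(host='impala-daemon.example.com', port=21050)
        cur = conn.cursor()
        cur.execute("""
            SELECT page, COUNT(*) AS hits
            FROM web_logs
            WHERE event_date = '2014-03-18'
            GROUP BY page
            ORDER BY hits DESC
            LIMIT 10
        """)
        for page, hits in cur.fetchall():
            print(page, hits)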

    Gephi: It’s one thing to correlate and quantify information – it’s an entirely different story to create powerful visualizations of that data. Gephi supports multiple graph types and networks as large as 1 million nodes. With an active user community, Gephi offers numerous plug-ins and ways to integrate with existing systems. The tool can help visualize complex IT connections, the various points in a distributed system, and how data flows between them.

    MongoDB: This solid platform has been growing in popularity among organizations looking to gain control over their big data needs. MongoDB was originally created at 10gen by veterans of DoubleClick and is now used by several companies as an integration piece for big data management. Built as an open-source NoSQL engine, it stores and processes structured data as JSON-like documents. Currently, organizations such as the New York Times and Craigslist have adopted MongoDB to help them control big data sets. (Also check out Couchbase Server.)
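    A minimal pymongo sketch of that document model, with an illustrative connection string, database and collection:

        # Store and query JSON-like documents in MongoDB with pymongo.
        # The URI, database and collection names are placeholders.
        from pymongo import MongoClient

        client = MongoClient('mongodb://localhost:27017')
        articles = client['newsroom']['articles']

        articles.insert_one({
            'title': "Tomorrow's Data Centers",
            'section': 'industry-perspectives',
            'published': '2014-03-18',
            'tags': ['cooling', 'geothermal'],
        })

        # Find every article in a section, newest first.
        for doc in articles.find({'section': 'industry-perspectives'}).sort('published', -1):
            print(doc['title'])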

    Our new “data-on-demand” society has resulted in vast amounts of information being collected by major IT systems. Whether it’s social media photos or international store transactions, the amount of good, quantifiable data is increasing. The only way to control this growth is to deploy an efficient management solution quickly.

    Remember, aside from being able to sort and organize the data, IT managers must be able to mine the information and make it work for the organization. Business intelligence and the science behind data quantification will continue to grow and expand. The organizations that gain an edge on their competition will be the ones with the most control over their data management systems.

    1:30p
    GoDaddy Interviews Underwriters as it Prepares for IPO: WSJ

    A look inside one of the server rooms at a Go Daddy data center in Phoenix. (Photo: Rich Miller)

    This post originally appeared on The WHIR.

    GoDaddy is in the process of selecting underwriters for its IPO, according to a report by the Wall Street Journal over the weekend, citing two anonymous sources familiar with the matter.

    If the rumors are true, it won’t be the first time GoDaddy has filed to go public. GoDaddy filed for an IPO in 2006, then withdrew it due to the poor performance of tech IPOs at the time. Years later, in 2011, GoDaddy was acquired by private investment firms Silver Lake Partners, Technology Crossover Ventures and KKR & Co. for $2.25 billion.

    Now, the landscape for tech IPOs is much different, with several hosting and cloud providers having gone public over the past few years. In October, Endurance International Group – the parent company of several popular hosting brands like HostGator and BlueHost – went public, raising only $250 million of an anticipated $400 million in its IPO. But since then its shares are up 18 percent, the WSJ reports.

    The past year has been transformative for GoDaddy, and it has made several moves indicative of a major shift at the company, trading in its racy GoDaddy girls and one-size-fits-all approach to focus more on the needs of the small business customer. Under its new CEO, Blake Irving, GoDaddy has grown through acquisitions, adding a number of new capabilities to its small business hosting offerings, and its acquisition of Media Temple helped it reach a new kind of hosting customer.

    Beyond that, on the technology side, GoDaddy has been overhauling its hosting, recently rolling out upgrades to its Windows hosting and Parallels Plesk support.

    GoDaddy is not commenting on the report.

    This post originally appeared at http://www.thewhir.com/web-hosting-news/godaddy-interviews-underwriters-prepares-ipo-wsj.

    1:35p
    Wine & Wisdom: Keeping the “Up” in Uptime

    Wine & Wisdom: Keeping the “Up” in Uptime will be presented on Thursday, April 3 from 4:30 p.m. to 7 p.m. at The Wiley in Atlanta, Ga.

    This complimentary, educational data center seminar (featuring a wine tasting too!) will focus on uptime and efficiency in data centers. The event also features utilities expert Mark Bramfitt and end-user Joe Parrino of T5 Data Centers, among other speakers.

    For more information, visit the event website.

    Venue:

    The Wiley (map)
    144 Walker St SW
    Atlanta, GA 30313

    For more events, return to the Data Center Knowledge Events Calendar.

    1:43p
    Structure Data 2014

    The world’s biggest and most innovative companies are using data to make better products, build bigger profits and even change the world.

    GigaOm’s Structure Data 2014 will draw 900+ big data practitioners, technologists and executives together to examine how big data can drive business success. The event will be held at Chelsea Piers in New York City, on March 19-20.

    Topics of exploration include:

    • Changing the World with Big Data: Learn how organizations like Google are leveraging big data for the greater good.
    • Toeing the Line between Privacy, Profit and Protection: Is the right to privacy paramount? Or are we putting the public at risk to protect an ideal? What will the outrage about consumer privacy mean for businesses’ bottom line?
    • Deep Learning: The Holy Grail for Big Data: From automated text analysis to natural-language processing to image recognition, new applications are delivering rich new insights.
    • The Industrial Internet: As the Internet of Things takes off, companies like Ford and McLaren are using big data to transform the way consumers interact with their products—and the way products interact with consumers.
    • Do You Need Data Scientists? What should they look like? We’ll take a deep dive into the methods companies are using to capture, store, analyze and serve the data that’s driving their businesses.

    For more information and registration, visit the conference website.

    Venue
    Pier Sixty Chelsea Piers
    23rd Street and West Side Highway, New York, NY 10011

    For more information on venue & transport, visit this website.

    For more events, return to the Data Center Knowledge Events Calendar.

    1:45p
    Pivotal Launches HD 2.0 and GemFire XD In-Memory Processing

    Pivotal builds on its Business Data Lakes architecture with the release of Pivotal HD 2.0 and GemFire XD in-memory database, and in-memory database company VoltDB raises $8 million for accelerating sales and marketing as well as global expansion.

    Pivotal Advances HD 2.0. Building on the Business Data Lake architecture, Pivotal launched HD 2.0, rebased and hardened on Apache Hadoop 2.2, and announced Pivotal GemFire XD, an in-memory database integrated with Pivotal HD 2.0. The combination of Pivotal HD 2.0, the HAWQ query engine and GemFire XD constitutes the foundation of the Business Data Lake architecture, a big data application framework that gives enterprises, data scientists, analysts and developers a more flexible, faster way to develop data-savvy software than they can with Hadoop alone. New in Pivotal HD 2.0 is enterprise integration of GraphLab, an advanced set of algorithms for graph analytics that lets data scientists and analysts apply popular algorithms for insight. Improvements to HAWQ include the MADlib machine learning library, language translation and Parquet support. Pivotal GemFire XD brings GemFire’s proven in-memory intelligence to Pivotal HD 2.0 and HAWQ, enabling businesses to make prescriptive decisions in real time, such as stock trading, fraud detection, intelligence for energy companies, or routing for the telecom industry.

    “When it comes to Hadoop, other approaches in the market have left customers with a mishmash of un-integrated products and processes,” said Josh Klahr, Vice President, Product Management at Pivotal. “Pivotal HD 2.0 is the first platform to fully integrate proven enterprise in-memory technology, Pivotal GemFire XD, with advanced services on Hadoop 2.2 that provide native support for a comprehensive data science toolset. Data driven businesses now have the capabilities they need to gain a massive head start toward developing analytics and applications for more intelligent and innovative products and services.”

    VoltDB raises $8 million. Database company VoltDB announced that it has closed $8 million in Series B funding. The round was led by a Silicon Valley luminary, with participation from two additional independent investors as well as existing stakeholders Sigma Partners and Kepha Partners. With more than 400 commercial customers, VoltDB supports next-generation “smart” applications that tap big data and the Internet of Things. The new funds will be used to accelerate sales and marketing as well as global expansion. VoltDB says its “no compromise” design gives customers a powerful platform with the speed and capacity to process, analyze and make decisions on massive amounts of incoming data in real time (milliseconds). VoltDB’s customers use the company’s in-memory architecture to power everything from mission-critical enterprise applications, transportation systems and electricity management to mobile and advertising networks. “Organizations everywhere are looking to drive competitive business value from new Big Data applications that can consume, analyze and act on massive amounts of dynamic data in real-time,” said VoltDB CEO Bruce Reading. “This is exactly what VoltDB was built for – and we are seeing demand in every corner of the world. This new round of funding will enable us to expand to serve this growing customer base.”

    3:52p
    Wise.io Raises $2.5 Million to Grow Its Machine Learning Technology

    Focusing on easy-to-use machine learning applications for the growing customer experience market, Berkeley, California-based Wise.io announced that it has raised $2.5 million in Series A funding led by Voyager Capital and has named predictive analytics industry veteran Jeff Erhardt as CEO. Company co-founder Joshua Bloom will assume a new role as CTO, leading the technology direction for Wise.io.

    “Machine Learning is unquestionably the future of advanced analytics for the enterprise.  When I first met Wise.io, I was struck by the caliber of the team and the unequaled performance of their core technology,” said Daniel Ahn, managing director at Voyager Capital who joined the Wise.io board as part of the transaction.  “Ultimately, what distinguished Wise.io from the other vendors and compelled us to invest was their focus on providing a complete turnkey product that was easily accessible to business users.”

    In developing its machine-learning technology, Wise.io identified the need to provide more than just a high-performance toolkit targeted at expert users. In addition to being highly scalable, the Wise.io technology is easy to implement and provides deep insight into the value hidden within data. Formed from a group of experts in statistics, computer science and machine learning, the Wise.io team created automated machine-learning frameworks that were used to discover and understand some of the rarest phenomena in the universe, from peculiar stars to exploding white dwarfs. The company has more than a dozen production customers, ranging from Fortune 500 enterprises to innovative startups.

    “Customer experience management needs to view the entire customer lifecycle through a data-driven lens,” said Erhardt, who most recently served as chief operating officer at Revolution Analytics.  “Leveraging the data that companies already collect, our machine learning applications for CX generate greater value than conventional solutions. More importantly, they can be quickly implemented and easily interpreted by business decision makers.”

    “As a rapidly growing company, we faced the challenge of being overwhelmed by our volume of sales leads.  Ultimately, we could not add staff quickly enough to continue manually evaluating and prioritizing our opportunities,” said Adam Breckler, Co-founder and Vice President of Products at Visual.ly.  “Without requiring any technical resources, we integrated Wise.io’s intelligent lead scoring application with our Pardot marketing automation system, and immediately achieved greater efficiency, improved management visibility, and higher conversions.”

    9:30p
    A New Look for Data Center Knowledge

    Today we’ve rolled out a new design for the Data Center Knowledge web site. If you’re reading us on email, mobile or via a third-party app, please take a moment and click through to check out our new look.

    We’ve updated the site to make it easier for you to find and share the day’s top data center stories. You’ll want to take note that we have added a “Top Trending Articles” box (to the left of this article), which provides links to the DCK stories that are popular with readers, based on social network shares. It’s a great way to find out which items are the day’s hottest stories.

    For those of you who are new to the industry, or to our site, Data Center Knowledge is a leading online source of daily news, trends, and thought leadership about the data center industry. We cover the moves in the marketplace, the cutting-edge trends driving the powerful growth in demand for mission-critical facilities, the challenges and opportunities presented by high-density computing and its impact on power and cooling, and the evolution of the industry to include cloud computing and modular data centers.

    It’s a remarkable change from DCK’s beginnings back in 2005, when I cobbled together our initial design from a template on the old “Movable Type” blog platform. Maybe some of you remember it:

    The original design of DCK in 2005.

    Nowadays, they keep me far away from the site code, and we’re blessed to have a talented team of developers, designers and product managers at our parent company, iNET Interactive, who have created DCK’s new look.

    One thing hasn’t changed: We remain committed to bringing you the latest news and analysis on the newest data center technology and infrastructure, and working to the highest standards of journalistic excellence. Enjoy! As always, we welcome your feedback.

