Data Center Knowledge | News and analysis for the data center industry
Friday, December 13th, 2013
2:00p | Convergence 3.0: SimpliVity Packs the Stack Into a Single Box

Doron Kempel, the CEO of SimpliVity, with the firm’s converged units, called OmniCubes. (Photo: Colleen Miller)
LAS VEGAS – What’s in your racks? Servers and storage, right? Maybe switch gear, too. But as the need to handle data has grown, so has the number of specialized appliances required to manage that data, including domain and protection appliances, backup appliances and WAN optimization devices. All of these items carry their own overhead in footprint and power, and each comes with its own management application and its own screens.
We are seeing technology evolve to “converge” this hardware and firmware stack into a single box. The first endeavors, from some of the largest players, combined servers, storage and switching with VMware; some call this Convergence 1.0. Then companies introduced servers, switching and storage with a virtualized environment spanning all the resources, which could be called Convergence 2.0.
Doron Kempel, CEO of SimpliVity, says his company has introduced Convergence 3.0: the whole stack in one box, including servers, switching, storage, deduplication, backup and a WAN function.
“This is the software-defined data center, right here,” said Kempel on the floor of the Gartner Data Center Conference 2013 at The Venetian. “So we de-dupe and compress the data, so that operation is only done one time, not in each appliance,” he said. By reducing IOPS, the technology writes data only once, increasing performance.
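To make the IOPS point concrete, here is a minimal, purely illustrative sketch of inline block-level deduplication in Python. It is not SimpliVity's implementation; the class name and block size are invented for the example. The idea it shows is the one Kempel describes: hash each block of incoming data, and only content that has not been seen before generates a physical write.

```python
import hashlib

class DedupStore:
    """Toy inline-deduplicating block store (illustrative only)."""

    def __init__(self):
        self.blocks = {}      # content hash -> block data (stands in for disk/flash)
        self.file_maps = {}   # object name -> ordered list of block hashes
        self.physical_writes = 0

    def write(self, name, data, block_size=4096):
        hashes = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:      # only unseen content hits the media
                self.blocks[digest] = block
                self.physical_writes += 1
            hashes.append(digest)
        self.file_maps[name] = hashes          # duplicates become metadata updates

    def read(self, name):
        return b"".join(self.blocks[h] for h in self.file_maps[name])


store = DedupStore()
payload = b"A" * 8192                 # two identical 4 KB blocks
store.write("vm-disk-1", payload)
store.write("vm-disk-2", payload)     # duplicate data, no new physical writes
print(store.physical_writes)          # -> 1
```

In the sketch, writing the same 8 KB payload twice results in a single physical block write, because every block hashes to content already in the store.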
How does the OmniCube work? The unit is a combination of hardware and software. The OmniCube is a 2U appliance that features a customized PCIe card to improve performance, along with software that can run on the OmniCube itself or in cloud instances such as AWS. The cubes plug into vCenter so the systems can be monitored and managed.
What is the benefit of approaching convergence this way? There is high availability, with no single point of failure; increased performance, with the units optimized for I/O-intensive workloads; and serviceability, with extensive reporting, alerting and “call home” capabilities.
John Doerr, General Partner of Kleiner Perkins and a highly respected VC in the technology space, has called SimpliVity “one of the biggest innovations in enterprise computing since VMware. OmniCube is radically simplifying IT infrastructure with systems that are better, faster, smaller and less expensive than competitive offerings. SimpliVity is well positioned to transform IT.”
The company started in Boston in September 2009 and has since raised $101 million in funding. The OmniCubes have been deployed in a number of scenarios, including regional banks, a large dairy distributor and a municipality that uses the infrastructure to run its 9-1-1 system. Large organizations are also using the units to build cloud infrastructure; Swisscom, a large Swiss telecommunications company, is using the cube to build out its cloud.
While these deployments may be small, such as two units in one location, and two units in another, the future will include large enterprise scalability. “These will scale to thousands of nodes,” said Kempel.
3:01p | Legacy Systems: Tried and True Systems Whose Time Has Come

Duane Harris is CEO of Nemonix Engineering.
In an era of technology turmoil—with news of security breaches, overloaded servers and major corporate and government computer failures headlining the front pages—there are unexpected islands of calm populated by extremely stable legacy hardware systems. Ironically, some of these oldest citizens of the data center also happen to be the most stable and secure. The extended value these legacy systems still generate in modern computing is a story worth noting—and perhaps learning from.
These servers are still some of the most reliable, secure and indestructible systems in the data center today. At the top of this list of end-of-life machines is hardware running OpenVMS, an operating system built by Digital Equipment Corporation back in 1977, and updated by HP ever since. New and old versions of OpenVMS still run mission-critical applications on legacy hardware, as well as modern, Intel Itanium-based hardware manufactured by HP for some of the biggest names in government and industry.
Below are some of the benefits of OpenVMS that other systems have yet to match:
- Disaster Recovery. OpenVMS’ fault tolerance and disaster recovery features are legendary. During the 9/11 tragedy, a major international bank with North American headquarters located less than 100 yards from the World Trade Center was among a mere handful of companies that remained online—primarily due to its reliance on an OpenVMS-based disaster recovery strategy. The intense heat in the bank’s New York data center crashed all but its OpenVMS-based AlphaServer hardware. The Alphas used server clustering and hard drive volume shadowing to keep the bank’s primary system running off drives located 30 miles away.
- 100 Percent Uptime. For enterprises requiring 100 percent uptime, there are few industrial-strength operating systems that can keep up with OpenVMS. For example, one of the world’s largest defense contractors has been using OpenVMS to track missile sites across the world for more than 30 years. The organization has no plans to change either the operating system or the legacy hardware it runs on. Staying operational is so mission critical that any downtime could be disastrous, and any risk of downtime due to a migration to a new platform is intolerable. Many of Nemonix’s own OpenVMS customers, in both industry and government, refuse to move from OpenVMS because it simply works—with minimal intervention and few if any patch requirements. Some of our customers, such as a major U.S.-based chemical company, have commenced a migration to Windows, only to discover that the newer systems are not necessarily more stable. In fact, downtime instances have risen sharply in comparison to OpenVMS.
- Low Cost of Ownership. Some users stay with legacy platforms because the cost of moving—in dollars and in lost production—is too high. For example, nuclear power plants must comply with very tight regulatory requirements due to the potential risk of catastrophic loss of life and property. Regulations require that if any core system hardware is changed, the entire plant must be recertified, not just the new hardware. Plant recertification could cost millions of dollars, in addition to the relatively small cost of the hardware itself. Similarly, in other commercial enterprises, the cost of new hardware and software is a fraction of the overall ripple effect on business processes, ancillary software licensing, retraining, retesting, recertification and production downtime. Changing to a new platform often has a huge cost footprint, far exceeding the actual cost of the system itself. Additionally, OpenVMS-based systems cost less to manage: a study by Wipro showed that the costs to manage an environment with 40 servers and 10 database servers were cut in half on VMS-based systems.*
- Stellar Security. Perhaps OpenVMS’ greatest claim to fame is its stellar security record. OpenVMS systems provide a level of security that is unmatched in the industry. According to the same Wipro study cited above, OpenVMS is ten times more secure than other popular operating systems available today,* and has 75 to 91 times fewer unaddressed security vulnerabilities on any given day. Additionally, while many of today’s popular anti-virus programs for newer, non-VMS systems address most infections, they are not 100 percent effective. New vulnerabilities in these newer systems continue to be discovered, potentially enabling malicious compromise or re-infection before security software publishers have a chance to update their virus and malware filters.
In the great rush to upgrade to the latest and greatest, it is worth taking note of the value that can be leveraged from your existing legacy hardware and software installations. Often, the business process surrounding the legacy platform is extremely valuable, representing many years of fine-tuning. It is worth pausing to save what is valuable before dismantling the entire architecture. The following are tactics companies can use to extend the value of their legacy systems and, in some instances, upgrade performance at the same time:
- A hardware system at the end of its factory life cycle is not necessarily at the end of its usefulness to the enterprise. There are technology refresh solutions that can add up to 10 years of productivity back into legacy hardware. Unlike a refurbish, where only apparent problems are solved, a refresh addresses the complete system, replacing every known failure point in the hardware—fans, batteries, power supplies, electrolytic capacitors and so on—and then putting these extremely reliable computer systems under a new one-year factory warranty, extendable to 10 years. You end up with “like new” hardware with up to 10 years of additional warrantied productivity.
- If the hardware must go, consider keeping the software platform in place. The pressure to upgrade hardware is understandable. Legacy hardware faces diminished parts supplies and a shrinking pool of technicians familiar with the aging platform. However, if you run mission-critical systems that rely on the stability and security of the OpenVMS software platform, you don’t want to introduce new risks of downtime or security breaches by migrating to an entirely new software platform. Fortunately, you can migrate to x86 and use emulation software to virtualize your legacy hardware.
- The benefits of emulation can be substantial:
- New hardware without the cost of new software licenses, other than the emulation package itself.
- Lower risk of downtime by keeping existing software applications in place.
- No expensive retraining required—your business processes are effectively unaffected.
Even though we live in a throwaway culture, where the next smartphone is just six months around the corner, the reality is that many of our mission-critical applications—from aerospace and defense, to energy and government—are still hosted, and will be hosted into the foreseeable future, by legacy computers. Enhancing the value of these stalwart systems, whether through refresh or emulation, only increases the efficiency, productivity, and stability of the overall enterprise.
*Source: http://download.microsoft.com/download/1/7/b/17b54d06-1550-4011-9253-9484f769fe9f/TCO_SPM_Wipro.pdf
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

3:06p | Oracle Launches Fifth Generation Exadata Database Machine

Oracle (ORCL) launched its fifth-generation database machine, with increases in performance and capacity, new software capabilities to optimize OLTP, and a new focus on Database as a Service (DBaaS) and data warehousing. The Oracle Exadata Database Machine X4 features enhanced hardware and is fully compatible and interoperable with previously released Exadata Database Machines, so customers with existing machines can easily expand with the newest system.
The new X4 offers improved end-to-end performance, with up to 2.66 million read IOPS per full rack and up to 1.96 million write IOPS to flash. It delivers a 50 percent increase in database compute performance on X4-2 systems using two 12-core Intel Xeon E5-2697 v2 processors. It also provides close to a 100 percent increase in InfiniBand network throughput and adds Network Resource Management to ensure ultra-low response times for latency-critical database operations.
Capacity
With up to 44 TB per full rack, the Exadata Database Machine X4 has a 100 percent increase in physical PCI flash capacity, and up to another 100 percent increase in logical flash cache capacity, to 88 TB per full rack. Its Exadata Flash Cache Compression transparently compresses database data into flash, using hardware acceleration to compress and decompress data with zero performance overhead at millions of I/Os per second. Using memory expansion kits, the X4-2 can have up to 4 TB of memory per full rack and over 200 TB of disk storage capacity. It also features a 33 percent increase in high-capacity disk storage, to 672 TB per full rack.
“Enterprise data center managers are often burdened with the complexity and cost associated with managing data in siloed environments,” said Carl Olofson, research vice president at IDC. “Oracle Exadata Database Machine X4 aims to provide database administrators a single, centralized and automated environment to manage databases. The new system’s technology upgrade should deliver significant performance improvements that address the demand to implement Database as a Service, offering the scalability, control and availability to support a full range of database workloads.”
With the increase in flash capacity, the X4 can hold the vast majority of OLTP databases entirely in flash. Random I/O rates, which are critical for OLTP applications, have improved close to 100 percent, to 2.66 million 8K database reads and 1.96 million writes, even with full flash compression enabled. Performance of data warehousing workloads is accelerated by new flash caching algorithms that focus on the table and partition scan workloads common in data warehouses. Tables that are larger than flash are now automatically partially cached in flash and read concurrently from both flash and disk to speed throughput.
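The "read concurrently from both flash and disk" behavior can be illustrated with a small sketch. This is not Oracle's code: `scan_table`, `read_flash` and `read_disk` are hypothetical names standing in for the two storage tiers. Blocks already resident in the flash cache are fetched from flash while the remainder stream from disk, and the results are merged back into table order.

```python
# Minimal sketch of a two-tier table scan, assuming per-block reader callables.
from concurrent.futures import ThreadPoolExecutor

def scan_table(block_ids, flash_cache, read_flash, read_disk):
    """Return table blocks in order, fetching each from the tier that holds it."""
    cached = [b for b in block_ids if b in flash_cache]
    uncached = [b for b in block_ids if b not in flash_cache]

    # Issue the flash reads and the disk reads concurrently, then merge.
    with ThreadPoolExecutor(max_workers=2) as pool:
        flash_future = pool.submit(lambda: {b: read_flash(b) for b in cached})
        disk_future = pool.submit(lambda: {b: read_disk(b) for b in uncached})
        results = {**flash_future.result(), **disk_future.result()}

    return [results[b] for b in block_ids]


blocks = list(range(8))
resident = {0, 1, 2}   # block IDs assumed to be in the flash cache
print(scan_table(blocks, resident,
                 read_flash=lambda b: f"flash:{b}",
                 read_disk=lambda b: f"disk:{b}"))
```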
3:30p | Big Data Software Specialist Talend Raises $40 Million

Talend receives $40 million to accelerate its big data efforts, and Cloudera certifies WANdisco’s Non-Stop Hadoop technology.
Talend receives $40 million investment. Big data integration software provider Talend announced the completion of a $40 million funding round from Bpifrance and Iris Capital, with participation from existing investors Silver Lake Sumeru, Balderton Capital and Idinvest Partners. Talend will use the investment to accelerate innovation, augment its portfolio and support its go-to-market efforts in the big data space.
“Our advantage is simple. The ability of Talend’s solutions to evolve within a quickly changing technology landscape enables organizations to remove barriers to the adoption of modern data platforms such as NoSQL or Hadoop,” said Bertrand Diard, co-founder and chief strategy officer of Talend. “With our unique architecture and scalable platform, Talend helps companies of all sizes obtain a fast return on their data assets, regardless of how they need to use them.”
“Talend is in a unique position to serve the fast growing big data market,” said Mike Tuchen, CEO of Talend. “Our customers are often confronted with the limits of legacy integration platforms that are unable to deal with new challenges created by the explosion of data, the ubiquity of hybrid cloud architectures and the ever growing expectations from the business. Talend’s agile and open model puts us in a unique position to quickly deliver innovative and powerful solutions to address these needs.”
Cloudera and WANdisco collaborate. Cloudera and WANdisco (WAND) announced that WANdisco’s Non-Stop Hadoop technology is certified to run on Cloudera’s Distribution for Hadoop version 4 (CDH4), providing 100 percent uptime for global multi-data center deployments. With this partnership, Cloudera and WANdisco are addressing critical enterprise requirements for continuous availability, performance and scalability as large global organizations move from using Hadoop for batch storage and retrieval to mission-critical, high-volume, real-time applications. “Hadoop is the platform for the next generation of enterprise applications,” said David Richards, Chairman and CEO, WANdisco. “Working with Cloudera is a natural fit given their significant footprint in the enterprise. We believe that continuous data availability is a ‘must-have’ for organizations looking to leverage Hadoop for strategic enterprise systems and WANdisco’s Non-Stop Hadoop Technology is the only solution that delivers it.”

3:49p | Natural Cooling: Vote For Your Favorite Caption

It’s Friday, which means it’s time for a bit of fun to roll into the weekend. DCK readers submitted quite a few good captions for last week’s cartoon from Diane Alber featuring our favorite data center staffers, Kip & Gary. Now it’s time for you to help decide which caption for the Natural Cooling cartoon is best.
Scroll down and vote for the best entries.
The caption contest works like this: We provide the cartoon and you, our readers, submit the captions. We then choose finalists, and readers vote for their favorite. The winner will receive his or her caption in a signed print from our artist, Diane Alber.
Take Our Poll
For the previous cartoons on DCK, see our Humor Channel. For more Kip and Gary humor, visit their website.

5:00p | QTS Tenant Powers Up, Adds 5 Megawatts in Atlanta Data Hub

The exterior of the QTS Metro Atlanta data center.
A major customer of QTS Realty Trust is adding five megawatts of power and 25,000 square feet of space to its existing data center space at QTS’ Metro Atlanta facility, the company said this week. QTS isn’t saying who it is, but all signs point to Twitter, the fast-growing microblogging service.
In its announcement, QTS described the customer as a California-based company that is an existing tenant leasing at least 100,000 square feet of space at the Metro facility, where it moved in during 2012 and has phased expansions running through 2014.
Those descriptions all fit Twitter. QTS doesn’t publicly discuss clients, but DCK has reported that Twitter expanded its infrastructure to the East Coast with its presence at the Metro Atlanta data center. The new deal would add capacity above and beyond its existing commitments, starting in 2015.
“We are proud that QTS is able to expand our relationship with this fast growing company,” said Dan Bennewitz, chief operating officer, sales and marketing for QTS Realty. “We believe this expanded commitment shows the unique value QTS provides to support their business and rapid growth.”
The 970,000 square foot Metro Technology Center in downtown Atlanta is one of the world’s largest data centers. The huge building offers plenty of room for expansion for growing tenants, allowing companies like Twitter to gradually expand their data center space and power over time, rather than purchasing a larger amount up front and seeing some of the capacity go unused while they ramp up operations. QTS also offers flexible pricing on power usage, which can be attractive to companies facing rapid growth. The provider’s PowerBank plan allows large customers to scale their available power up and down as their requirements change.
The customer expansion in Atlanta will help boost leasing activity at QTS Realty, which went public through an IPO in October and now trades on the NYSE under the symbol QTS. The company has 10 data centers in seven states, with 3.8 million square feet of data center infrastructure, and supports more than 875 customers.

5:48p | Hybrid is Hot: Nimble Storage Soars on IPO

Shares of Nimble Storage soared about 60 percent today after the company went public with an IPO on the New York Stock Exchange.
LAS VEGAS – Storage has been a hot topic in 2013, with demand for storage only increasing. One of the hot names has been Nimble Storage, which went public this morning, raising $168 million in an IPO on the New York Stock Exchange. Shares of Nimble (NMBL) soared 60 percent in their first trading session, closing at $33.60 a share after pricing at $21.
The Silicon Valley-based company is a leader in the market for hybrid storage, using both flash and disk in a broad-based platform, with a management layer. The hybrid approach has the benefit of increased performance and capacity, with data protection and monitoring in the mix also. That’s why hybrid storage has captured the attention of both customers and investors.
Radhika Krishnan, Vice President of Product Marketing and Alliances for Nimble, said the company has “huge momentum” and now has 2,100 customers of its storage products. The company started at the mid-market with customers with 50TB of data, and is adding large enterprise customers now, as well as cloud and service providers. The company is still relatively new, first shipping product in 2010.
“We have a 3U box with levels of storage from 10 Terabytes to 100s of Terabytes,” said Krishnan, who discussed the company’s success at this week’s Gartner Data Center Conference. “We vary the amount of flash, changing the ratio of disk to flash depending on the need.”
Differing Storage Products in the Market
In the storage market, there are traditional providers of disk storage, such as EMC or NetApp; providers of flash-only storage, such as Fusion-io or Violin Memory; and those that combine flash and disk in one unit, as Nimble does.
Within the flash market, there are three ways to consume flash today. Companies such as Fusion-io offer server-side flash storage, which is extremely fast and can scale to a large amount of storage in a small footprint. (Fusion-io recently introduced some hybrid solutions as well.) There are flash-only arrays, such as those from Nimbus and XtremIO (purchased by EMC last year). And there are hybrid arrays combining flash and disk, such as Nimble’s, which Krishnan says cost 15 to 30 times less.
Why wouldn’t clients use a flash-only solution? “Flash has endurance issues,” Krishnan said. “How frequently you write to flash wears flash out.” And it is known to be a more expensive option. If you bring down the price of flash, the endurance challenge becomes worse.
“So you want to minimize the number of writes to flash,” said Krishnan. “There are players – Dell, EMC, NetApp – who are retrofitting flash on top of the file system. They don’t get the value of the flash that way.”
Architecting From the Ground Up
Nimble Storage started from the ground up, architecting its product differently: it uses disk for sequential I/O and flash to dynamically cache hot data to accelerate reads. “So we use each for what it’s best at,” she said. “That way we get more capacity and more performance. We look at metrics such as capacity per dollar invested.”
Nimble Storage uses Cache Accelerated Sequential Layout (CASL) as the foundation for high performance, capacity savings, integrated data protection and lifecycle management. CASL provides a flash-based dynamic cache, which accelerates read access to application data by holding a copy of active “hot” data in flash (leading to high read throughput and low latency), and a write-optimized data layout: data written by a host is first aggregated, or coalesced, then written sequentially as a full stripe to a pool of disk. CASL’s sweeping process also consolidates freed-up disk space for future writes. There is also inline universal compression, at 30 to 75 percent, with no added latency.
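A simplified sketch of the two ideas attributed to CASL here, offered as an illustration rather than Nimble's actual implementation: random writes are coalesced in memory and flushed to disk as full sequential stripes, while blocks that get read are promoted into a small flash cache with least-recently-used eviction. The class name, stripe width and cache size are invented for the example.

```python
from collections import OrderedDict

STRIPE_BLOCKS = 8          # assumed stripe width for the sketch

class HybridArray:
    def __init__(self, flash_capacity=4):
        self.write_buffer = []                 # coalesces incoming random writes
        self.disk_stripes = []                 # full sequential stripes land here
        self.flash_cache = OrderedDict()       # LRU cache of "hot" blocks
        self.flash_capacity = flash_capacity

    def write(self, block_id, data):
        self.write_buffer.append((block_id, data))
        if len(self.write_buffer) >= STRIPE_BLOCKS:
            # one large sequential write instead of many small random ones
            self.disk_stripes.append(list(self.write_buffer))
            self.write_buffer.clear()

    def read(self, block_id):
        if block_id in self.flash_cache:                 # hot data: served from flash
            self.flash_cache.move_to_end(block_id)
            return self.flash_cache[block_id]
        for bid, data in reversed(self.write_buffer):    # not yet flushed to disk
            if bid == block_id:
                return data
        data = self._read_from_disk(block_id)            # cold data: read from disk...
        if data is not None:
            self.flash_cache[block_id] = data            # ...then promoted to flash
            if len(self.flash_cache) > self.flash_capacity:
                self.flash_cache.popitem(last=False)     # evict least recently used
        return data

    def _read_from_disk(self, block_id):
        for stripe in reversed(self.disk_stripes):       # newest stripe wins
            for bid, data in reversed(stripe):
                if bid == block_id:
                    return data
        return None
```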
Data protection is handled through the data management system, which uses snapshots and replication to address the backup window problem, Krishnan said. Recovery with this method is fairly straightforward, and the snapshots are highly efficient. Nimble has also partnered with backup provider CommVault, so snapshots can be stored off-premises.
Service and Support
Krishnan explained that support service for storage can be a “very painful, onerous process.” So Nimble added a system called InfoSight that monitors the arrays and flags issues proactively. “There are millions of sensor data points, and they are being analyzed continually. So users not only know something is wrong, they have a correlation of what’s gone wrong,” she said, noting that 90 percent of the company’s support cases are proactively managed with this system. The monitoring also allows customers to understand capacity trends and performance, and to take remedial action before failures.
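The kind of proactive check described here can be sketched as a continuous evaluation of telemetry against thresholds, with each alert carrying a probable cause so the operator learns not just that something is wrong but what has likely gone wrong. The metric names and thresholds below are invented for illustration and are not InfoSight's.

```python
# Hedged illustration of threshold-based proactive monitoring over array telemetry.
THRESHOLDS = {
    "cache_hit_ratio": ("below", 0.80, "working set may have outgrown flash"),
    "disk_utilization": ("above", 0.90, "capacity trending toward full"),
    "avg_read_latency_ms": ("above", 20.0, "possible failing drive or hot spot"),
}

def evaluate(sample: dict) -> list[str]:
    """Return human-readable alerts for any metric outside its threshold."""
    alerts = []
    for metric, (direction, limit, probable_cause) in THRESHOLDS.items():
        value = sample.get(metric)
        if value is None:
            continue
        breached = value < limit if direction == "below" else value > limit
        if breached:
            alerts.append(f"{metric}={value} ({direction} {limit}): {probable_cause}")
    return alerts

print(evaluate({"cache_hit_ratio": 0.71,
                "disk_utilization": 0.55,
                "avg_read_latency_ms": 31.0}))
```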
“For 2014, scale is the big thing,” Krishnan said. “Within the last year, we’ve achieved the ability to scale the customers’ environment, including 100s of TBs, 1000s of IOPS, and so on. This coming year, we will continue to focus on large enterprises’ specific requirements.”