Data Center Knowledge | News and analysis for the data center industry
Wednesday, August 14th, 2013
1:03p
Violin Memory Launches New Flash Memory Array
The Violin Memory 6000 Series storage array.
Violin Memory has launched the 6264 flash memory array, a 3U flash memory storage device with a capacity of 64 TB. The 6264 combines Violin’s flash controller technology with Toshiba’s latest generation of 19nm flash, which the company says delivers twice the density and three times better economics than its predecessor while significantly reducing power consumption. Violin Memory CEO Don Basile was a keynote speaker at the Flash Memory Summit this week in Santa Clara, California.
“Competitive architectural approaches based on SSDs short change the actual performance capabilities of flash memory,” said Basile. “As semiconductor process geometries shrink, flash memory gets slower and more error-prone. Violin’s unique flash management IP enables us to increase performance and capacity in the same footprint while ensuring the data resiliency required in Tier 1 enterprise storage deployments. Our goal is to deliver memory storage at the cost of legacy disk.”
“SSD-based approaches are challenged to deliver the performance and reliability necessary due to inherent characteristics of smaller process geometry NAND flash,” said Jeff Janukowicz, research director for solid state storage and enabling technologies at IDC. “Solutions, such as Violin’s 19nm-based 6264 memory storage system, that can overcome these technical challenges are well positioned to capitalize on the economic advantages associated with smaller geometry NAND flash.”
Also announced this week is the Violin Symphony Data Management System. Symphony centralizes and simplifies the management of petabytes of flash storage across hundreds of Violin flash memory arrays. Data center operators can measure application performance, monitor storage usage and track service-level trends using customizable dashboards on any mobile device. Symphony’s built-in advanced analytics engine offers insights into the health and performance of Violin flash memory arrays and provides proactive, real-time alerts on storage metrics, enabling monitoring and management on the go.

1:55p
Optimizing A Data Center’s Energy-Carbon Performance
Winston Saunders has worked at Intel for nearly two decades and currently leads server and data center efficiency initiatives. Winston is a graduate of UC Berkeley and the University of Washington. You can find him online at “Winston on Energy” on Twitter.
 WINSTON SAUNDERS
Intel
A recent article on the carbon footprint of data centers by Eric Masanet, Arman Shehabi, and Jonathan Koomey, published in the online journal Nature Climate Change, gives some very clear signposts to those wanting to optimize the carbon-performance of their data center. The ideas are presented in the article in a terse academic style. But as these ideas have, I think, real business value, I wanted to summarize them in a blog post for a more general audience.
The idea of the Nature Climate Change article is to take a step toward a more comprehensive look at the carbon intensity of data centers. As readers here know, a universal metric gauging the carbon intensity of data centers in terms of a “unit of work output per unit of work input” has eluded the industry.
The familiar PUE metric, which promotes infrastructure efficiency, addresses only part of the problem. Proposals for server efficiency metrics and proxy studies have looked at various computational efficiencies of the data center. But a comprehensive look requires understanding the entire data center, including network and storage energy use, definitions of useful and non-useful work, and some measure of software efficiency.
Broadening the Scope
The work of Masanet et al. takes the valuable step of extending the carbon footprint analysis to the entire data center, including the embedded carbon footprint not only of the servers, but also of the networking and storage systems.
A key point is that the embedded carbon of IT infrastructure is much smaller than the carbon from operational emissions (based on standard U.S. energy sources). Hence, to have a low-carbon data center, one must optimize work efficiency. And while PUE is an essential element of that, it is not sufficient.
To make this clearer, the authors introduce a conceptually simple yet powerful way to represent what they call the energy-carbon performance map. I reproduce it (with permission) below to highlight a couple of key points.
Reprinted with the author’s permission.
Increased IT Efficiency Leads to Improved Energy-Carbon Performance
The first key point is that IT efficiency is the dominant factor in improving the energy-carbon performance of a data center. This is consistent with the DCUE arguments I made in a blog post here at Data Center Knowledge some time ago.
The second key point is that improving infrastructure, as the authors illustrate, from a PUE of 1.8 to an optimal 1.1 improves the energy-carbon performance only marginally.
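To see the arithmetic behind these two levers, here is a minimal back-of-the-envelope sketch in Python. It is my own illustration, not taken from the paper: it assumes the simple model that operational carbon per unit of work equals PUE times IT energy per unit of work times grid carbon intensity, and the grid-intensity figure is an arbitrary assumption used only to show relative changes.

```python
GRID_INTENSITY = 0.5  # kg CO2 per kWh -- illustrative assumption, not a measured value

def carbon_per_unit_work(pue, it_kwh_per_unit_work, grid_intensity=GRID_INTENSITY):
    """Operational carbon emitted per unit of useful work delivered.
    Total facility energy = IT energy * PUE, by the definition of PUE."""
    return pue * it_kwh_per_unit_work * grid_intensity

baseline = carbon_per_unit_work(pue=1.8, it_kwh_per_unit_work=1.0)

# Lever 1: infrastructure only -- push PUE from 1.8 down to an optimal 1.1.
better_pue = carbon_per_unit_work(pue=1.1, it_kwh_per_unit_work=1.0)

# Lever 2: IT efficiency only -- double the useful work per kWh of IT energy.
better_it = carbon_per_unit_work(pue=1.8, it_kwh_per_unit_work=0.5)

print(f"PUE 1.8 -> 1.1: {(1 - better_pue / baseline):.0%} less carbon per unit of work")
print(f"2x IT efficiency: {(1 - better_it / baseline):.0%} less carbon per unit of work")
```

The point of the arithmetic is that the infrastructure lever is bounded, since PUE can never fall below 1.0, while IT work efficiency can, in principle, improve by integer multiples through better utilization, consolidation and newer hardware.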
Thus, if one truly wants to optimize a data center’s carbon footprint, one should look beyond the infrastructure to the IT equipment in the data center itself. The data center is an information factory. To optimize it overall, you need to look at the sum of its parts.
For more information you can read Jon’s informative blog item here, and you can also request a copy of the article. And I’m sure that if you have comments or questions, Jon or the other authors will address them either personally or through a reply to a comment below.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

2:00p
New Tools Are Simplifying Backup and Replication 
Creating a truly distributed environment that is both resilient and agile isn’t always easy. Site-to-site replication is far more practical now than it once was, but there are still considerations around deploying such a solution. It gets even more interesting when large amounts of data have to be backed up or replicated over the Wide Area Network (WAN). Fortunately, increased hardware performance and greater bandwidth availability make creating such an environment a realistic possibility.
As data is moved from one site to another, numerous variables come into play. Aside from just bandwidth, infrastructure components must be in place to help support this type of initiative. In designing this type of environment, administrators must be aware of the tool sets that they are using.
Using Native Tools
In deploying any type of new technology – especially when it comes to storage controllers and virtualization solutions – administrators need to start with the native tool set. The first step in deploying a system that will be used in a replication scenario is to understand the environment at a granular level. For storage platforms, native tools can deliver almost all of the functionality an environment may require. However, it’s not as easy as that. Vendors like NetApp, EMC, IBM, and VMware all release native tool sets which are extremely diverse and powerful, and the learning process becomes the challenge for administrators. Before jumping into any sort of third-party tool, admins first need solid knowledge of what they have in front of them. In cases where the technology is very new to the company, it’s recommended that engineers and even IT managers take training courses to help them better administer their platforms. From a virtualization perspective, native tools can accomplish the following tasks:
- Isolate the data which needs to be replicated.
- Set up replication scheduling.
- Help with bandwidth control.
- Copy/Clone workloads as needed.
- Interface directly between storage controllers and virtual platforms.
- Connect to remote storage systems for replication.
Always take the time to learn what your technology has to offer and how your organization can best utilize it. The sketch below illustrates, in simplified form, the kind of scheduling and throttling decisions these tools make.
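As a rough, vendor-neutral illustration of the tasks in the list above, here is a short Python sketch of the logic a native replication tool applies when it schedules jobs and enforces bandwidth control. All of the class and function names are hypothetical; none of them correspond to any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class ReplicationJob:
    """Hypothetical description of one site-to-site replication job."""
    dataset: str             # e.g. a datastore, volume, or VM folder
    target_site: str         # remote storage system to replicate to
    bandwidth_cap_mbps: int  # throttle so the job doesn't starve the WAN

def plan_nightly_window(jobs, wan_capacity_mbps):
    """Greedily admit jobs into one replication window until their combined
    bandwidth caps would exceed the WAN link; defer the rest. Illustrative only."""
    admitted, deferred, used = [], [], 0
    for job in sorted(jobs, key=lambda j: j.bandwidth_cap_mbps, reverse=True):
        if used + job.bandwidth_cap_mbps <= wan_capacity_mbps:
            admitted.append(job)
            used += job.bandwidth_cap_mbps
        else:
            deferred.append(job)
    return admitted, deferred

jobs = [
    ReplicationJob("sql-prod", "dr-site-a", bandwidth_cap_mbps=400),
    ReplicationJob("file-shares", "dr-site-a", bandwidth_cap_mbps=300),
    ReplicationJob("vdi-gold-images", "dr-site-b", bandwidth_cap_mbps=500),
]

admitted, deferred = plan_nightly_window(jobs, wan_capacity_mbps=1000)
print("replicate tonight:", [j.dataset for j in admitted])
print("deferred:", [j.dataset for j in deferred])
```

In practice the native tools make these decisions for you; the value of working through them yourself is knowing what to expect when a job is deferred or throttled.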
Using Third-Party Tools
Once a solid understanding of the native tool set has been established, administrators can look outside of their existing tool bag for some help. In some cases, organizations may need to back up large amounts of data or transfer it over longer distances. Although native tools can help with that, there may be a need to control the process at a very granular level. In those cases, third-party tools which can directly connect into your environment can really help. Some examples include products like Veeam or even Microsoft’s SCCM/SCOM platform. In using third-party tools, administrators gain more granular control over certain portions of their environment, including:
- Enhanced data migration, replication, and control policies.
- Enhanced data distribution.
- Better backup capabilities – onsite and remote.
- Better control over security.
- Better resource management and dynamic resource allocation.
There are numerous other third-party tools which promise to enhance your existing environment. In selecting the right tool set, make sure that it’s able to tie directly into your current infrastructure and support your organization’s growing needs. Some solutions have a lot of power behind them – but lack support over the long run. Take appropriate planning steps and test out the software, if possible.
Using Management/Monitoring Tools
Both native and third-party tools are able to offer a good amount of visibility into an environment. However, in some cases native tools may simply not be enough. In the backup and replication process, there are many components which need to be monitored to ensure that the entire job completes properly. If the native tool set isn’t enough, find a good third-party solution that can help. Either way, from a monitoring and management perspective, it’s important to have visibility into the following:
- Bandwidth usage.
- Data usage.
- Latency and transmission operations.
- Resource utilization.
- Alerts, alarms, and administrative notifications.
Having a proactive eye on a backup and replication process is essential, since catching a problem early in the process can mean the difference between a hiccup and downtime. The sketch below shows the kind of threshold checks involved.
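To make the monitoring list concrete, here is a small Python sketch that compares observed replication metrics against alert thresholds. The metric names, threshold values and the sample observation are assumptions made purely for illustration; in a real deployment these numbers would come from whichever monitoring tool you use.

```python
# Hypothetical thresholds for a replication job; tune to your environment.
THRESHOLDS = {
    "bandwidth_used_pct": 90,   # % of the WAN link consumed by replication
    "latency_ms": 150,          # round-trip latency between sites
    "cpu_util_pct": 85,         # resource utilization on the replication host
    "job_runtime_min": 240,     # how long the job has been running
}

def evaluate(metrics, thresholds=THRESHOLDS):
    """Compare observed metrics with thresholds and return alert messages."""
    alerts = []
    for name, limit in thresholds.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"ALERT: {name} = {value} exceeds limit of {limit}")
    return alerts

# Example observation, e.g. pulled from your monitoring system of choice.
observed = {
    "bandwidth_used_pct": 95,
    "latency_ms": 80,
    "cpu_util_pct": 88,
    "job_runtime_min": 120,
}

for alert in evaluate(observed):
    print(alert)  # catching these early keeps a hiccup from becoming downtime
```

The same pattern applies whether the checks run in a native dashboard or in a third-party monitoring suite.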
Combining It All Together
It’s important to understand that native and third-party tools aren’t an either/or matter. Good administrators will leverage the power of both to better accomplish their duties, and a solid backup and replication plan will certainly include both approaches. Where the native tools fall short, the third-party provider can take over. In combining the two, administrators should always understand how the solutions operate. Storage, bandwidth and computing power are all very precious – and often very expensive – IT resources. Because of that, using a variety of tools to best control an environment can help save time and money and reduce management overhead.
The power of technology comes from the flexibility of its design. This means that if engineers deploy a well-planned storage replication infrastructure, they’ll have more control over what they need to do. Flexibility doesn’t just mean growth capability; an environment which is capable of integrating with third-party tools is able to stay agile. Although there are many great products out there which help control replication, backup and recovery, using native tools should never be disregarded. For an organization to truly be elastic, it has to be able to adapt to both the needs of the business and the demands of the market.

2:33p
Ravello’s Cloud Hypervisor Enters General Availability
Cross-cloud enabler Ravello Systems has entered general availability for its Cloud Application Hypervisor. The company aims to eliminate the boundaries between on-premise applications and public clouds like Rackspace, Amazon Web Services, and HP. Last February, the company raised $26 million to this end.
Since February, more than 2,000 enterprises have replicated more than 30,000 applications, representing more than 1 million CPU hours deployed, according to Ravello. These applications ranged from a few VMs to complex applications spanning hundreds of VMs with multiple subnets and several virtual network appliances.
Differences in virtualization, networking and storage stand in the way of leveraging the cloud for proper development and testing. The cloud hypervisor makes public clouds look and feel like the enterprise data center, providing an on-ramp to using the cloud.
“Most enterprises recognize the need to test on replicas of their production applications,” said Paul Burns, president and IT analyst, Neovise. “However, it requires too much effort to recreate complex multi-tier production environments and there often isn’t enough capacity in the internal data center. The public cloud can solve the capacity issue but it’s still a very different environment usually requiring long migration and automation projects.”
Replicating Apps for Multiple Clouds
Ravello features high-performance nested virtualization (HVX), software-defined networking and storage, and an application framework. Enterprises can easily create replicas of their on-premise, multi-tier VMware- or KVM-based applications in any public cloud without making any changes, and can spin up as many instances on Amazon Web Services, Rackspace, or HP Cloud as needed.
There are obvious cost benefits to this: enterprises don’t need to build out massive test capacity on-site. Considering the intermittent nature of this infrastructure, it makes sense to rent it rather than have that capital expenditure sit idle most of the time.
ScanCafe, a photo digitization and photo concierge service, is one example of Ravello in practice.
“Earlier we sometimes felt we were rolling out code a bit like we were rolling dice because we were privileging agility. We’d rather spend our resources in developing features for our customers than in building the type of test infrastructure and automation that would be required for flawless deployments,” said Laurent Martin, president and CTO, ScanCafe. “With Ravello we no longer need to compromise; we are able to get our applications to market faster and with better quality.”
Ravello has a usage-based pricing model, making it economically feasible to develop and test on replicas of production with no capacity constraints. “For bursty workloads like development and test it does not make economic sense for enterprises to build internal data center capacity for peak usage, since on average, resource utilization may be as low as one percent,” said Navin R. Thadani, SVP of products, Ravello Systems. “The public cloud sounds promising but is too different an environment, and still does not solve the infrastructure automation problem. Consequently testing is still mostly on-premise. It is rarely as frequent or as efficient as it needs to be. Hence, development cycles are far too slow.”

5:08p
Netcraft: NSA Surveillance Disclosures Not Slowing US Hosting Growth
Have headline-making revelations about the National Security Agency’s surveillance programs prompted customers to rethink hosting their data in the United States? Not according to early data from Netcraft, the UK research firm that tracks trends in Internet infrastructure. Netcraft’s monthly Web Server Survey suggests that if multi-national customers have concerns about being hosted in the U.S., they’re not acting on them – at least not yet.
“Despite speculation that the recent PRISM revelations would result in a mass exodus from American data centers and web hosting companies, Netcraft has not yet seen any evidence of this,” the company wrote in its blog. “Within the most popular 10 thousand sites, Netcraft witnessed only 40 sites moving away from US-based hosting companies. Contrary to some people’s expectations, 47 sites moved to the US, which actually resulted in a net migration to the US.
“This trend is also reflected by the entire web server survey, where a net sum of 270 thousand sites moved to the US from other countries (in total, 3.9 million sites moved to the US, while 3.6 million moved from the US). Germany was the most popular departure country, with nearly 1.2 million sites moving from German hosting companies. This was followed by Canada, where 803,000 sites hopped across the border to the US.”
Some analysts have predicted that disclosures by former NSA sysadmin Edward Snowden would prompt companies to avoid hosting and cloud platforms with U.S. infrastructure that could be subject to NSA surveillance or data requests. A new report from The Information Technology & Innovation Foundation speculated that the economic impact on the American Internet industry could reach tens of billions of dollars:
“On the low end, U.S. cloud computing providers might lose $21.5 billion over the next three years,” the report states. “This estimate assumes the U.S. eventually loses about 10 percent of foreign market to European or Asian competitors and retains its currently projected market share for the domestic market. On the high end, U.S. cloud computing providers might lose $35.0 billion by 2016. This assumes the U.S. eventually loses 20 percent of the foreign market to competitors and retains its current domestic market share.”
Why has there been no change thus far? There are several possible explanations. Migrating IT infrastructure isn’t simple (as leading hosting companies can attest), and companies with concerns may be taking their time before exiting the U.S. In addition, many multi-national companies have likely already weighed the potential data privacy issues raised by the passage of the Patriot Act in the U.S., which has for years been a consideration in site selection and outsourcing decisions for overseas firms.
For more on the topic, see Bill Kleyman’s recent analysis, “How Surveillance Impacts the Cloud and the Data Center.”

6:19p
Best of the Data Center Blogs for August 14
Here’s our review of noteworthy links for the data center industry for August 14th:
3 Common Reasons Why You Can’t Meet Your Disaster Recovery RTO – At the SunGard Availability blog, Michael Maliniak looks at recovery time objectives: “Since so many customers do set such unrealistic RTOs with no hope of ever meeting them, I thought I’d share the top three reasons I’ve come across as to why people can’t meet their disaster recovery RTO.”
Top 12 Data Center Trends thru 2015 – At SwitchScribe, Mark Thiele makes the case for outsourcing: “As much as there is massive opportunity for change in IT today, many of our investments will continue to weigh us down for years to come. This anchor will slow our adoption of strategies, solutions, and technologies that have the potential to seriously improve how IT is delivered. The data center isn’t immune to these changes or to the anchor effect. In fact, the data center is for many companies their biggest technology investment ball and chain.”
California Hosts the Most Porn in the US - From The WHIR: “The US hosts 60 percent of porn websites, hosting 241.1 million more pages than the Netherlands, which takes the second spot on the list.”
Myanmar Internet Disruptions - From network monitoring specialist Renesys: “In the weeks preceding the recent 25th anniversary of the 8888 Uprising, Myanmar’s Internet experienced several technical disruptions leading to concern about the nation’s recent transition to democracy and liberalization of its economy.”

6:47p
Outages for New York Times, Microsoft Cloud Services
The New York Times published stories on its Facebook page while its web site was offline this morning.
If you were trying to use Outlook.com to email a link to a NYTimes.com story about events in Egypt, it’s not been a good day. Both Microsoft and the New York Times have been troubleshooting service problems.
NYTimes.com, the web site for the New York Times, was offline for several hours this morning, returning a “Http/1.1 Service Unavailable” message. The Times tweeted that the outage, which also affected the paper’s email, was the “result of an internal issue, which we expect to be resolved soon.” While the Times worked to restore its web site, it directed readers to the paper’s Facebook page, where it posted the full text of several stories about the political unrest in Egypt.
On its status page, Microsoft has acknowledged ongoing problems for Outlook (“We’re having a problem accessing email. You might not be able to see all your email messages”) and says it has recently resolved an outage for its SkyDrive storage service.