Data Center Knowledge | News and analysis for the data center industry
Thursday, December 10th, 2015
1:00p | Why Hyperconverged Infrastructure is so Hot

LAS VEGAS – Hyperconverged infrastructure did not exist as a concept two or three years ago. Today, it is one of the fastest-growing methods for deploying IT in the data center, as IT departments look for ways to adjust to their new role in business and new demands that are placed on them.
Gartner expects it to go from zero in 2012 to a $5 billion market by 2019, becoming the category leader by revenue in pre-integrated full-stack infrastructure products. The category also includes reference architectures, integrated infrastructure, and integrated stacks.
“Hyperconvergence simply didn’t exist two years ago,” Gartner analyst Andrew Butler said. “Near the end of this year, it’s an industry in its own right.” But, he added, the industry has a lot of maturation ahead of it, which means that not all of the vendors in the space today will still be in it a few years from now.
In a session at this week’s Gartner data center management summit here, Butler and his colleague George Weiss shared their view of what hyperconverged infrastructure is, why it’s so hot, and what it might all mean for data center managers.
They also addressed some of the most pervasive myths about hyperconvergence. Check those out in a separate post here.
What is Hyperconverged Infrastructure?
Given that the concept is only about two years old, it’s worth explaining what hyperconverged infrastructure is and how it’s different from its cousin converged infrastructure.
Hyperconvergence is the latest step in the now multiyear pursuit of infrastructure that is flexible and simpler to manage, or as Butler put it, a centralized approach to “tidying up” data center infrastructure. Earlier attempts include integrated systems and fabric infrastructure, and they usually involve SANs, blade servers, and a lot of money upfront.
Converged infrastructure has similar aims but in most cases seeks to collapse compute, storage, and networking into a single SKU and provide a unified management layer.
Hyperconverged infrastructure seeks to do the same, but adds more value by throwing in software-defined storage and doesn’t place much emphasis on networking. The focus is on data control and management.
Hyperconverged systems are also built using low-cost commodity x86 hardware. Some vendors, especially early entrants, contract with manufacturers like Supermicro, Quanta, or Dell for the hardware, adding value with their software. More recently, we have seen the emergence of software-only hyperconverged plays, as well as hybrid plays, where a vendor may sell the software by itself but will also provide hardware if necessary.
Today hyperconverged infrastructure can come as an appliance, a reference architecture, or as software that’s flexible in terms of the platform it runs on. The last bit is where it’s sometimes hard to tell the difference between a hyperconverged solution and software-defined storage, Butler said.
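To make the architectural distinction concrete, here is a minimal conceptual sketch in Python. It illustrates the model described above (compute and direct-attached storage collapsed into each node, a software-defined layer pooling storage cluster-wide, and growth by adding whole nodes); it is not any vendor's actual implementation, and the node specs are hypothetical.

```python
# Conceptual sketch only: hyperconverged nodes contribute both compute and local storage
# to a single software-defined pool, and the cluster scales out by adding whole nodes.
# Not any vendor's implementation; all specs below are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    cpu_cores: int
    ram_gb: int
    local_disk_tb: float  # direct-attached disks, no external SAN

@dataclass
class HyperconvergedCluster:
    nodes: List[Node] = field(default_factory=list)

    def scale_out(self, node: Node) -> None:
        """Growth model: add a whole node, so compute and storage grow together."""
        self.nodes.append(node)

    @property
    def pooled_storage_tb(self) -> float:
        # The software-defined storage layer aggregates every node's local disks.
        return sum(n.local_disk_tb for n in self.nodes)

    @property
    def total_cores(self) -> int:
        return sum(n.cpu_cores for n in self.nodes)

if __name__ == "__main__":
    cluster = HyperconvergedCluster()
    for _ in range(4):  # a hypothetical four-node starter appliance
        cluster.scale_out(Node(cpu_cores=24, ram_gb=256, local_disk_tb=10))
    print(f"{cluster.total_cores} cores, {cluster.pooled_storage_tb} TB pooled storage")
```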
Why is Hyperconvergence So Hot?
To understand why hyperconvergence has gotten so popular so quickly, it’s necessary to keep in mind other trends that are taking place.
There’s pressure on IT departments to be able to provision resources instantly; more and more applications are best-suited for scale-out systems built using commodity components; software-defined storage promises great efficiency gains; data volume growth is unpredictable; and so on.
More and more enterprises look at creation of software products and services as a way to grow revenue and therefore want to adopt agile software development methodologies, which require a high degree of flexibility from IT. In other words, they want to create software and deploy it much more often than they used to, so IT has to be ready to get new applications up and running quickly.
How Companies Use it
But at this point, companies seldom use hyperconverged infrastructure for those purposes. Today, it’s used primarily to deploy general-purpose workloads, virtual desktop infrastructure, analytics (Hadoop clusters for example), and for remote or branch office workloads.
In fewer cases, companies use it to run mission-critical applications, server virtualization, or high-performance storage. In still fewer instances, hyperconverged infrastructure underlies private or hybrid clouds or the agile environments that support rapid software-release cycles.
Gartner expects this to change, as the market evolves and users become more familiar with the architecture.
It Will Not Solve World Hunger
It’s important to keep in mind that hyperconvergence is just one approach to infrastructure, not the ultimate answer to the IT department’s problems. Vendors still have to prove themselves and show that their solutions have staying power, and that they can beat competition from SAN and blade solutions, which are very much alive and kicking.
Hyperconverged infrastructure’s promise is simplicity and flexibility, but those two words mean different things to different people. When thinking about hyperconvergence, Gartner’s advice is to figure out what those words mean to you and then see which vendor’s message resonates the most with that.
“It’s not going to solve world hunger,” Butler said. “It is an interesting solution [when used] in the right place.”

1:00p | Five Myths about Hyperconverged Infrastructure

As hyperconverged infrastructure emerges as one of the favorite new platforms underneath applications running in enterprise data centers, a number of myths have emerged about it. Because it is new – hyperconverged infrastructure didn’t exist two years ago – it’s natural that many people don’t quite understand it and that myths persist.
Gartner analysts Andrew Butler and George Weiss outlined the most widespread myths about these systems in a presentation at the market research firm’s data center management summit this week in Las Vegas. Here are some of the highlights:
1. It’s Open and Standards-Driven
That’s true but only to a point, according to Butler. The systems are usually built out of commodity x86 hardware and capable of running any type of Windows or Linux, so from the basic hardware perspective, yes, they’re open and standardized. But once you’ve gone with a hyperconverged vendor, if you want to grow your deployment, all you can do is add more nodes from the same vendor.
2. It’s Low-Cost
At scale, hyperconvergence doesn’t really yield any significant cost savings over traditional architectures, Butler said. Once you get into deployment of significant size, you can easily spend millions of dollars on a hyperconverged system.
3. VDI is the Top Use Case
There’s a misconception that virtual desktop infrastructure is the most common and best use case for hyperconvergence. That’s not entirely true, according to Butler. Many other use cases are emerging, and while VDI will be one of the prevalent ones, it will not be the single killer app for the architecture.
4. It Will Kill SAN
While that’s what hyperconverged infrastructure vendors want you to think, there are no indications that things are headed that way, Butler said. SAN may generally be “approaching a twilight of its life,” he said, but there is still a huge market for it, and hyperconvergence will not be the decisive nail in its coffin.
5. It Creates More Silos in the Data Center
The worry about non-interoperable silos is perennial in the data center industry, and hyperconvergence does not change that in any way. Vendors address interoperability issues in a variety of ways, and users shouldn’t approach interoperability of hyperconverged infrastructure any differently from the way they approach interoperability of other systems in their data centers.

4:00p | The Need for [Data] Speed

Joe Dupree leads the marketing team at Cleo.
Everyone knows the fable of the tortoise and the hare. But in business, forget the moral of the story. When it comes to file and data delivery methodologies, companies will always bet on the hare, never the tortoise. This only makes sense, especially when business relies on software to improve the speed and accuracy of data-driven decisions and to execute confidently ahead of the competition. On this point, forward-thinking organizations constantly seek better approaches to move and manage information throughout integrated systems and to geographically dispersed endpoints across complex business networks in the fastest possible way.
Why Speed Matters
In theory, faster data movement means business moves faster. Increasing the efficiency of business processes and operations through accelerated transfer speed is an effective pathway to faster turnaround. That, in theory, spells a quicker ROI on new software or technology designed to facilitate rapid data movement.
High-speed file transfer means the capacity to rapidly send large files to customers and other trading partners under strict time mandates. After all, time is money. In today’s business, if the data wasn’t important enough to get wherever it’s going quickly, it probably didn’t need to be sent at all.
But before all that, there was this:
The Age of the Tortoise
Predating the current state of relatively stable and comparatively quick file transfer technology was the Age of the Tortoise. The go-to methodology of that era relied on single-purpose devices combined with dedicated communication links meant to saturate available bandwidth for packet delivery. However, the persistent effort to squeeze fast transfers out of this approach ran into a common problem: bandwidth limitations and networks overloaded by transfer congestion. Making matters worse, latency and reliability constraints surfaced whenever distance or path complexity came into play. In a global economy where remote employees, international offices, and distant trading partners are most certainly present, those constraints are all but guaranteed.

In the Age of the Tortoise, IT made use of the available protocol and authentication standards. Despite running on dedicated hardware and connections, transmissions were subject to connectivity issues, giving rise to “send and pray” practices and exposing internal and external B2B data flows to integrity loss and dropped packets. Due to technological limitations, this flawed approach to high-speed transfer shifted the heavy lifting to the CPU. Not only was speed reduced by a factor of 10 or more, but throughput ended up limited to around 1Gbps – far from a favorable transfer rate, and hard to call high-speed.

Given that combination of factors, the best possible outcome during the tortoise years was not only slow but also hampered by performance inconsistency – never a good thing when dependability is an essential KPI. Instead of the lightning-quick hare businesses were hoping for, they ended up with a half-blind, sluggish, and unreliable living fossil.
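To put the latency point in concrete terms, here is a small illustrative Python calculation (not from the article) of the classic bandwidth-delay limit: with a fixed TCP window, single-stream throughput is capped at window size divided by round-trip time, no matter how much raw bandwidth the dedicated link provides. The window size and RTT values below are hypothetical examples.

```python
# Illustrative only: how a fixed TCP window caps throughput over long round-trip times.
# The window size and RTTs below are hypothetical examples, not figures from the article.

def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-stream TCP throughput: window size / round-trip time."""
    rtt_s = rtt_ms / 1000.0
    bits_per_second = (window_bytes * 8) / rtt_s
    return bits_per_second / 1_000_000  # convert to megabits per second

if __name__ == "__main__":
    window = 64 * 1024  # classic 64 KB TCP window (no window scaling)
    for rtt in (1, 20, 100, 250):  # LAN, regional, cross-country, intercontinental (ms)
        print(f"RTT {rtt:>3} ms -> at most {max_tcp_throughput_mbps(window, rtt):8.1f} Mbps")
```

The numbers make the article's point: the same link that delivers hundreds of megabits per second across a LAN is reduced to single-digit megabits per second once intercontinental round-trip times enter the picture.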
The Age of the Hare
Today’s business technology climate is shaped by emerging data security requirements that are forcing companies to adapt their information handling and file transfer practices. For the most part, gone are the tortoise-defined days when a company would overnight a hard drive or USB stick. The risks of damage, loss of intellectual property, or exposing critical data outside the corporate sphere are just too great. Instead, companies are looking to new technology to handle the movement of increasingly large files and aggregate data sets.
Even as the data explosion tests the boundaries of the file transfer tools currently in place, technology evolves in leaps and bounds. As such, we are currently in the Age of the Hare. Advances in processing speeds mean that faster data transfer techniques are continually being developed. Beyond that, software capabilities have grown exponentially, and expanded applications for traditional file transfer solutions have in many cases supplanted special-purpose hardware as the basis for point-to-point data movement in the business integration ecosystem.
The vested interest for businesses to speed up transfer rates is driving unmistakable innovation across the board. Software technology providers ushering in the Age of the Hare have responded to increasing file sizes with vanguard high-speed data movement solutions. And current high-speed transfer software is rewriting performance metrics around what constitutes next-generation file and data transfer techniques necessary for the next phase of business integration.
Application Scope and High-Speed Opportunity
The use of high-speed or extreme file transfer today is still largely confined to moving media and entertainment assets. However, potentially untapped markets, including healthcare and data-centric service industries, represent expanded applications for the technology and significant growth areas for technology providers in the Age of the Hare.
Next-wave uses for high-speed data transfer tools include:
- Eliminating data silos and aggregating global business information
- Replacing outdated file transfer techniques that cannot support increasingly large file sizes (solutions built specifically for accelerated data movement can exceed traditional FTP by a factor of 1,000)
- Supporting critical, time-sensitive SLAs/KPIs for large-file and aggregate data delivery
And although it’s more often than not out with the old and in with the new in IT, the irony is that many companies are stuck on dominant Tortoise Age technology when the parameters of doing business today call for a much more powerful solution. For many, this means trusting time-sensitive data to FTP – data that is millions of times larger than the business data FTP moved when it was invented in 1971.
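As a rough, back-of-the-envelope illustration of why the transfer method matters at today's file sizes, the sketch below computes how long a single large file takes to move at a few effective throughputs. The file size and rates are hypothetical examples, not vendor benchmarks or figures from the article.

```python
# Illustrative arithmetic only: rough single-file transfer times at different effective
# throughputs. The file size and rates are hypothetical examples, not vendor benchmarks.

def transfer_hours(file_gb: float, rate_mbps: float) -> float:
    """Hours to move file_gb gigabytes at a sustained rate_mbps megabits per second."""
    bits = file_gb * 8 * 1000**3            # decimal gigabytes to bits
    return bits / (rate_mbps * 1_000_000) / 3600

if __name__ == "__main__":
    file_size_gb = 500  # e.g., a raw media asset or an aggregated data set
    scenarios = [("congested FTP over a WAN", 50),
                 ("well-tuned gigabit link", 900),
                 ("accelerated transfer tool", 5000)]
    for label, rate_mbps in scenarios:
        print(f"{label:28s}: ~{transfer_hours(file_size_gb, rate_mbps):6.2f} hours")
```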
Speed is an Edge – In Technology and Business
High-speed file transfer enablement is about more than just the technical edge of a single technology. Business performance matters more, and what matters most in business (apart from cost) is speed. To be viable, hare rather than tortoise technology has to deliver tremendous increases in speed while retaining reliability and adding efficiency to essential operations. The current jump in high-speed file transfer capabilities will allow businesses to keep up with the larger files and aggregate volumes that increasingly rule out legacy tools and methods for the job. Ultimately, speed sharpens the business edge. And as high-speed transfer capacity is the cutting edge of data in motion, businesses should keep betting on this hare.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

10:36p | RiskIQ Makes Facebook Threat Intelligence Accessible to Security Researchers
This post originally appeared at The Var Guy
Security visibility and intelligence provider RiskIQ has integrated its PassiveTotal threat analytics platform with Facebook’s threat intelligence sharing platform, giving its customers broader access to data that could help them prevent and protect against Internet security threats and improve their overall security posture, the company said.
The integration of Facebook ThreatExchange and PassiveTotal—with the latter providing a visual front end for the former—allows RiskIQ customers to centralize data from ThreatExchange alongside critical data sets such as passive DNS, WHOIS and SSL Certificates within PassiveTotal, RiskIQ said in a press release. This can accelerate security investigations and automate the sharing of findings with the security community, the company said.
The integration also goes the other way, meaning that all members of Facebook’s ThreatExchange will now have access to high-value threat indicators from RiskIQ’s collection of malvertising and other Web-based attack activity, according to a blog post from RiskIQ Labs.
“The addition of data related to exploit kits, hijacked websites and malicious traffic distribution infrastructure to Facebook’s ThreatExchange will give members the edge to combat malvertising threats, ransomware and other criminal-based attacks without spending time doing the research,” according to the post. “Each of these threat types affects organizations on the Internet broadly, with attackers capable of penetrating perimeter controls and leveraging tactics that scale attacks beyond traditional defensive measures.”
Indeed, sharing threat intelligence is the most effective way for organizations to pre-empt and protect themselves from attacks, and more organizations are getting on board with this method of security prevention, said Elias Manousos, CEO of RiskIQ, in the press release.
“We believe the process of sharing should occur without friction and that’s why we’ve added full integration of Facebook’s ThreatExchange within the PassiveTotal platform,” he said.
PassiveTotal allows users to set global controls on how, with whom and what data is shared so they can automate intelligence sharing with the ThreatExchange community, according to RiskIQ.
Once the initial configuration is complete, users can begin searching within PassiveTotal much like they normally would. If PassiveTotal finds data related to a search within ThreatExchange, it will display a tab and show the data along with who submitted it to the exchange, according to RiskIQ. When available, PassiveTotal will also automatically extract details such as tags or the status of an indicator (malicious, suspicious, or other), the company said.
Users also can configure PassiveTotal for real-time sharing, according to RiskIQ. The platform can automatically add findings to ThreatExchange as investigations are being conducted, facilitating larger, inter-company intelligence sharing efforts that previously would only be performed through email, if at all, the company said.
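For readers curious what the underlying exchange looks like outside of PassiveTotal's front end, below is a minimal, hedged Python sketch of querying Facebook ThreatExchange directly. It assumes the Graph-API-style threat_descriptors endpoint and an app access token as publicly documented around this time; the API version, parameters, and response fields may differ, and this is not RiskIQ's integration code.

```python
# A minimal, hedged sketch of querying Facebook ThreatExchange over its Graph API.
# NOT RiskIQ's integration code; endpoint path, parameters, and response fields follow
# the publicly documented API of the time and may differ by version.
import requests

GRAPH_URL = "https://graph.facebook.com/v2.4/threat_descriptors"
ACCESS_TOKEN = "APP_ID|APP_SECRET"  # placeholder ThreatExchange app credentials

def search_descriptors(text: str, indicator_type: str = "DOMAIN", limit: int = 25):
    """Return threat descriptors whose indicator text matches the query."""
    params = {
        "access_token": ACCESS_TOKEN,
        "text": text,
        "type": indicator_type,
        "limit": limit,
    }
    resp = requests.get(GRAPH_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("data", [])

if __name__ == "__main__":
    # Hypothetical example domain, used only to show the shape of a query.
    for d in search_descriptors("example-malvertising-domain.com"):
        # Typical fields include the raw indicator, a status such as MALICIOUS or
        # SUSPICIOUS, and the submitting member ("owner").
        print(d.get("raw_indicator"), d.get("status"), d.get("owner", {}).get("name"))
```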
The integration of PassiveTotal and Facebook ThreatExchange is available now.
This first ran at http://thevarguy.com/computer-technology-hardware-solutions-and-news/riskiq-makes-facebook-threat-intelligence-accessible

11:39p | Report: Microsoft to Build Huge Texas Data Center Campus

Microsoft has bought property in Texas where it plans to build a massive data center campus over the course of five years.
As it continues to grow its cloud services business, Microsoft has been expanding the data center capacity to support those services around the world at a rapid pace. Global data center construction has been viewed as an expensive arms race with Microsoft’s chief competitor in cloud, Amazon Web Services, as the companies spend billions of dollars to improve the quality of their services to users and increase the number of locations where they can store their data and virtual infrastructure.
Microsoft announced a multi-site expansion initiative in Europe last month, and in September said it had launched three cloud data centers in India. Amazon in November announced it was preparing to bring online cloud data centers in the UK and South Korea.
News of Microsoft’s land acquisition in San Antonio was reported by the San Antonio Business Journal Thursday. The report cited officials of the Texas Research and Technology Foundation, which controls the Texas Research Park where the Redmond, Washington-based tech giant bought 158 acres of land.
The company plans to build an eight-data-center campus on the property in four phases and expects to break ground on phase one in January. The development will total 1.2 million to 1.3 million square feet, TRTF president York Duncan told the Business Journal.
In addition to building data centers, Microsoft – like all other web-scale data center operators – also leases space from data center providers. Microsoft is the single largest tenant of wholesale data center provider DuPont Fabros Technology, contributing more than one-fifth of the provider’s total annual rent revenue.