Data Center Knowledge | News and analysis for the data center industry
Thursday, August 3rd, 2017
12:00p
Red Hat’s Permabit Deal Means Smarter Storage in More Data Centers Red Hat’s announcement this week that it has “acquired the assets and technology” of Permabit Technology should eventually have a mixed bag of effects on data center operators of different types.
Permabit, a Cambridge, Massachusetts-based company, provides software for data deduplication, compression, and thin provisioning. Although the wording of Red Hat’s announcement makes it unclear whether it’s bought the company outright or just some portion of it, we know that 16 Permabit employees will be making a move to Red Hat. Also not disclosed was how much money changed hands in this transaction.
It’s easy to see why Red Hat would be interested in Permabit’s technology. Red Hat is a leader in the hybrid computing field, in which companies maintain on-premises data centers running private clouds while also incorporating public cloud services. That means a majority of its customers are in charge of their own storage, which makes Permabit’s proprietary deduplication and compression technology, and its ability to cut storage capacity requirements, a big plus. Red Hat also gains the data reduction software Permabit released last year, designed specifically for use with Linux.
Red Hat has also already had a close look at the technology it’s buying. Last year, while developing its Linux products, Permabit announced it was collaborating with Red Hat to test and certify VDO, its data reduction software for Linux, with Red Hat Ceph Storage and Red Hat Gluster Storage.
Mixed in with some impressive results from its Red Hat collaboration, Permabit claimed that use of the technology can help reduce data center floor space:
“Organizations struggling to keep up with demand for cloud storage are suddenly confronting the need for huge data center expansion which can run up to $3000/square foot.
“Combining Permabit VDO with open source private cloud storage maximizes data center density, reducing the need for expansion while also lowering operational costs of power and cooling. Permabit Labs testing of VDO with unstructured data repositories on Red Hat Storage saw data reduction rate of 2:1. Permabit Labs Testing in Virtual Disk Image environments saw VDO compression and deduplication deliver up to 6:1 data reduction rates with Red Hat Storage.”
That’s good news for colocation tenants and generally for anyone looking to find money in the budget to buy additional storage. While the implications for colo providers are not as good, the impact should be minimal.
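To put those reduction ratios into concrete capacity terms, here is a minimal back-of-the-envelope sketch. The 500 TB workload sizes are made up for illustration; only the 2:1 and 6:1 ratios come from the Permabit figures quoted above:

```python
# Back-of-the-envelope estimate of how data reduction ratios translate into
# raw capacity. The 500 TB workload sizes are hypothetical; the 2:1 and 6:1
# ratios are the figures Permabit Labs cites above.

def raw_capacity_tb(logical_tb: float, reduction_ratio: float) -> float:
    """Raw storage needed to hold `logical_tb` of data at a given reduction ratio."""
    return logical_tb / reduction_ratio

workloads = {
    "unstructured data repository (2:1)": (500.0, 2.0),
    "virtual disk images (6:1)": (500.0, 6.0),
}

for name, (logical_tb, ratio) in workloads.items():
    raw = raw_capacity_tb(logical_tb, ratio)
    print(f"{name}: {logical_tb:.0f} TB logical -> {raw:.1f} TB raw "
          f"({logical_tb - raw:.1f} TB of drives, racks, and floor space avoided)")
```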
Although Red Hat’s newly acquired technology is currently proprietary, the company says it plans to release it under an open source license. “This will enable customers to use a single, supported and fully-open platform to drive storage efficiency,” it said, “without having to rely on heterogeneous tools or customized and poorly-supported operating systems.”
If the tech works as advertised, once it’s open sourced it will eventually be incorporated into the products of other enterprise Linux distributions, such as SUSE and Ubuntu. Red Hat’s own products will get it first, since the company doesn’t have to release the source code until it begins distributing it, but it will wind its way into other distributions over time. By the same token, we’ll probably also see major public cloud providers working to integrate it into their platforms.
The technology might benefit Red Hat and Linux in another way. In an article for The Register, Simon Sharwood wondered if deduplication and other tech the company is gaining might be used to bring ZFS capabilities to Linux.
ZFS, short for Zettabyte File System, is a combined file system and volume manager developed by Sun Microsystems for its Solaris operating system. It’s considered off-limits for Linux because its CDDL license is widely regarded as incompatible with the GPL, Linux’s license. In addition to being able to manage huge amounts of data, as its name suggests, it offers improved preservation of data integrity, allows multiple drives to be combined into a single storage pool, and more.
Sharwood might have a point. If Red Hat’s data reduction stack ends up delivering comparable capabilities natively, ZFS becomes somewhat irrelevant as far as Linux is concerned. It might even cause Oracle to change the license to something that’s GPL-compatible.
3:00p
10 Things Data Center Operators Can Do to Prepare for GDPR As we explained in an article earlier this week, the new European General Data Protection Regulation, which goes into effect next May, has wide-reaching implications for data center operators in and outside of Europe. We asked experts what steps they would recommend operators take to prepare. Here’s what they said:
Ojas Rege, chief marketing and strategy officer at MobileIron, a mobile and cloud security company based in Mountain View, California:
Every corporate data center holds an enormous amount of personal data about employees and customers. GDPR compliance will require that only the essential personal data is held and that it is effectively protected from breach and loss. Each company should consider a five-step process:
- Do an end-to-end data mapping of the data stored in its data center to identify personal data (a minimal illustration of this step follows this list).
- Ensure that the way this personal data is used is consistent with GDPR guidelines.
- Fortify its protections for that personal data, since the penalties for non-compliance are so severe.
- Proactively establish a notification and forensics plan in case of a breach.
- Extensively document its data flows, policies, protections, and remediation methods for potential GDPR review.
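As a deliberately simplified illustration of the first step, the data-mapping pass mentioned above, the sketch below flags record fields that commonly hold personal data. The field names and sample records are hypothetical, and a real exercise has to cover every database, file share, log stream, and backup in the data center:

```python
# Toy data-mapping pass: flag fields that commonly contain personal data.
# Field names and sample records are hypothetical; a real GDPR data-mapping
# exercise must cover every datastore in the data center, not one table.

PERSONAL_DATA_FIELDS = {"name", "email", "phone", "ip_address", "address", "date_of_birth"}

records = [
    {"id": 1, "name": "Jane Doe", "email": "jane@example.com", "plan": "gold"},
    {"id": 2, "sku": "X-100", "warehouse": "FRA-2"},
]

def personal_fields(record: dict) -> set:
    """Return the fields in a record that look like personal data."""
    return PERSONAL_DATA_FIELDS & record.keys()

for record in records:
    hits = personal_fields(record)
    status = f"personal data in fields {sorted(hits)}" if hits else "no obvious personal data fields"
    print(f"record {record['id']}: {status}")
```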
Neil Thacker, deputy CISO at Forcepoint, a cybersecurity company based in Austin, Texas:
Data centers preparing for GDPR must be in position to identify, protect, detect, respond, and recover in case of a data breach. Some of the key actions they should take include:
- Perform a complete analysis of all data flows from the European Economic Area and establish in which non-EEA countries processing will be undertaken.
- Review cloud service agreements for location of data storage and any data transfer mechanism, as relevant.
- Implement cybersecurity practices and technologies that provide deep visibility into how critical data is processed across their infrastructure, whether on-premises, in the cloud, or in use by a remote workforce.
- Monitor, manage, and control data — at rest, in use, and in motion.
- Utilize behavioral analytics and machine learning to discover broken business processes and identify employees that elevate risk to critical data.
See also: What Europe’s New Data Protection Law Means for Data Center Operators
3:30p
NVMe 101: What Is It, and Why Should You Care? Rob Commins is VP of Marketing for Tegile.
Eight years after its introduction into the enterprise market, flash storage is finally hitting its stride. A recent survey from ActualTech Media shows that hybrid storage systems (a mix of flash storage and disk) are now found in 55 percent of data center environments, and almost one-third of data centers now have at least one all-flash array. This means that more than two-thirds of data centers now use flash technology. As storage prices continue to drop, adoption will continue to grow year-over-year.
However, despite the impressive speed and performance that flash storage brings to the data center, the reality is that businesses have hardly scratched the surface of its capabilities due to architectural limitations.
A new technology called Non-Volatile Memory Express (NVMe) is about to change that.
What is NVMe?
NVMe is a host controller interface and storage protocol designed specifically for solid state drives (flash) and any other persistent memory technology developed in the future. Currently, the two most common controller interfaces found in the data center – SAS and SATA – both act as bottlenecks to flash storage performance. This is because they were designed only with the performance characteristics of spinning disk in mind.
NVMe is designed from the ground up to take advantage of the unique capabilities of solid state drives, using Peripheral Component Interconnect Express (PCIe) as its serial expansion bus.
The development of NVMe began in 2009 as an effort to create a new industry standard for storage devices to operate with host computers. Seven years of hard work has resulted in numerous benefits compared to older common controller interfaces. Here’s what NVMe brings to the table:
Superior Performance
Most people don’t realize that flash storage is a parallel storage medium. Unlike SAS or SATA, which treat flash in a serial manner, NVMe is designed to take advantage of the awesome power of parallel processing. This means it can handle 64,000 queues of data, and each queue can process 64,000 commands – at the same time. To put this in perspective, SAS and SATA can each hold only a single queue, with just 256 and 32 commands, respectively.
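A little arithmetic shows just how lopsided that comparison is; the sketch below uses the protocol maximums cited above, which real drives and drivers never fully expose:

```python
# Theoretical maximum number of outstanding commands per device, using the
# protocol limits cited above. Real controllers and drivers expose far fewer
# queues than the NVMe specification allows.

protocols = {
    "SATA (AHCI)": (1, 32),           # 1 queue, 32 commands per queue
    "SAS":         (1, 256),          # 1 queue, 256 commands per queue
    "NVMe":        (64_000, 64_000),  # up to 64K queues, 64K commands each
}

for name, (queues, per_queue) in protocols.items():
    print(f"{name}: {queues:,} queue(s) x {per_queue:,} commands = {queues * per_queue:,} in flight")
```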
NVMe is also incredibly efficient. Its I/O request path is much shorter than that of its predecessors: the number of CPU instructions needed per I/O is roughly cut in half, and it supports interrupt steering, which lets interrupt handling be directed to specific CPU cores so completions don’t pile up on a single processor under heavy load. This translates into lower latency at both the software and hardware level, and is less taxing on power consumption.
An Architecture Built for Scaling
Every data center administrator is familiar with the perils of buying new storage: Instead of focusing simply on the best array for you, you have to address a labyrinth of questions. For example, will your array be compatible with your specific controller interface? Will it function with your operating system? Should you be thinking about a gradual pivot from SATA to SAS? Using NVMe means you no longer have to think about these questions. It is designed specifically for PCIe, and almost all vendors have agreed to adhere to NVMe protocols.
Make no mistake, the high IOPS and low latency of NVMe will deliver significant competitive advantages to any business, with a positive impact on everything from high-performance computing to virtualization and the private cloud. Prices are high at the moment, but just like flash (or any other innovative technology), NVMe will commoditize and prices will drop.
Case in point: G2M research predicts that NVMe will be a $57 billion market by 2020, with a 95% compound annual growth rate (CAGR). The way I see it, we’ve crossed the chasm, with over half of data centers deploying flash in some capacity. With NVMe finally here, the concept of an all-flash data center – once considered a pipe dream – is suddenly in view.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
4:02p
Europe’s Cyber Victims Are Racking Up Hundreds of Millions in Costs
Aaron Ricadela (Bloomberg) — Global hackers have unleashed a brace of attacks in recent months, but while their haul in bitcoin has been paltry, the revenue hit to companies infected is reaching into the hundreds of millions of dollars. The WannaCry ransomware that spread in May featured a flawed design that led to its prompt shutdown, and June’s computer virus called Petya was designed to wipe data rather than collect money. The two attacks netted the attackers around $140,000, according to analysis of their bitcoin wallets.
The fallout for companies affected is proving costlier. Nivea skin-cream maker Beiersdorf AG said Thursday that Petya cost 35 million euros ($41.5 million) in first-half sales. The company has yet to report the costs of held inventory and halted production in 17 plants. Computers at its Hamburg headquarters and nearly 160 global offices were also knocked off-line. “We have worked here day and night, 24/7, across the globe,” Chief Executive Officer Stefan Heidenreich told analysts.
See also: 5.5 Million Devices Operating With WannaCry Port Open
Reckitt Benckiser, the U.K.-based maker of Dettol cleaners and Durex condoms, last week lopped 90 million pounds off its expected sales this year after the June attack knocked 2,000 servers and 15,000 company laptops out of commission. The company was still manufacturing at less than full capacity in July. French building materials manufacturer Cie. de Saint-Gobain said July 27 the cyber-attack would drain about 250 million euros in sales this year.
“I don’t think you can model a cyber attack,” said Robert Waldschmidt, an analyst at Liberum Capital in London, who covers Reckitt and Beiersdorf. “Companies can only try their best to prepare defenses. This may mean that IT and consulting costs need to rise a bit to improve these defenses and/or implement new ones.”
Danish shipping company A.P. Moller-Maersk A/S told customers last week it’s still clearing backlog from a shutdown of its online ordering system after its machines were infected by malware.
Companies are now piling up the sandbags in the expectation of another attack. Germany’s national Deutsche Bahn railroad created a “cyber rapid deployment force” of highly trained IT specialists with computer-threat experience to be available around the clock against future attacks, a spokesman said. The group restored service to ticket machines and departure boards after the WannaCry attack, he said.
U.K. advertising agency WPP Plc plans to invest more in thwarting hackers after a Petya infection spread across the group, which Chief Executive Officer Martin Sorrell called “an increased cost of doing business.”
These costs can be hard for investors to estimate. “Saint-Gobain has spent some cash to respond to the attack and says it’s in a more solid position now to face future attacks,” said Eric Lemarie, an analyst at Bryan Garnier & Co. in Paris. “They say they will implement some IT programs a bit differently, but that’s it, really. The group hasn’t really provided a specific figure that would need to be spent in the future to manage this risk.”
The resulting hundreds of millions in lost sales among European groups may be dwarfed by the disruptions at American firms including FedEx Corp., Merck & Co. and speech software maker Nuance Communications Inc.
However, fewer European companies are insured against cyberattacks than American groups, creating market opportunity for insurers including Allianz, Zurich Insurance Group, Munich Re and Swiss Re, Charles Graham, an analyst at Bloomberg Intelligence, said. Saint-Gobain’s CFO last week said cyber-attack related damage isn’t typically covered by insurance contracts.
Lloyd’s of London said in a report last month a potential global cyber attack could wreak as much financial damage as Hurricane Katrina, and estimates the worldwide cyber-insurance market is worth between $3 billion and $3.5 billion. It could rise to between $8.5 billion and $10 billion by 2020, according to Munich Re.
Spending by companies and governments to update old systems like ones that fell prey to WannaCry and Petya makes cyber security “an attractive multi-year” investment, said Patrick Kolb, who manages a $520 million IT security and safety fund at Credit Suisse.
“’If it’s not broken do not fix it’ simply doesn’t work for IT security,” he said. “The financial impact from business disruption is likely to be far larger than $300 of ransom.”
It could cost companies a few million euros in the short term to gird IT systems against further attacks, and expenditures could hit the bottom line each year if the breaches keep coming, said Liberum’s Waldschmidt. “After the attack you can endeavor to model it and need to consider how extensive the hit was and how long business will be impacted,” he said. “It’s similar to modeling a holiday such as Easter.”
6:40p
Are You Ready for a Multi-Cloud Future? Sponsored by: Dell EMC and Intel


There are so many conversations around cloud, around moving to various types of cloud services, and around how to leverage the power of hybrid. But it’s important to note just how much cloud services, and hybrid in particular, have been growing and where they are impacting your business. A recent WSJ article points out that CIOs are knitting together a new IT architecture that combines the latest in public cloud services with the best of their own private data centers and partially shared tech resources. Demand for the so-called hybrid cloud is growing at a compound rate of 27%, far outstripping growth of the overall IT market, according to research firm MarketsandMarkets.
Here’s the big factor to consider: The cloud will be distributed with 60% of IT done off-premises and 85% in multi-cloud by 2018.
So where are you on that journey? And how ready are you for a multi-cloud environment? Most of all, do you fully realize what the biggest benefits of moving into a hybrid architecture are?
Today’s data centers aren’t always built around efficiency. Fragmentation and infrastructure complexity can lead to downtime and management challenges. Hybrid cloud systems not only integrate with underlying virtualization systems, but also allow you to create a more efficient data center management environment. You’re combining network, storage, and compute into the management layer to control your most critical resources and better serve your users. The beauty of the modern cloud and data center architecture is that you can create intelligent network and management policies that scale from on-premises systems into the cloud. This kind of seamless cloud delivery lets users stay continuously productive while accessing either on-premises or cloud-based resources.
So, with this in mind – let’s look at a few ways to prepare and deliver a hybrid cloud ecosystem.
- Get the full benefits of hybrid – involve your business! A great way to prepare for the age of hybrid is to make sure your business actually knows what this means to them. Don’t just forklift your business processes and put them into the cloud. Make sure to understand where cloud services fit in and where they can be leveraged. To get started, work with internal managers, business unit leaders, and IT champions to establish areas of concern or inefficiency. From there, leverage hybrid cloud services and partners to offload those services into a cloud model. By involving your business with the hybrid cloud architecture, you design a platform that’s continuously evolving with the market and your organization.
- Make sure to leverage hybrid cloud services where you need them. An amazing part of the hybrid cloud is being able to scale on demand. Furthermore, you can use consumption models that allow you to leverage only the services that you require at a given moment. So, if you’re deploying a new application or even an entire call center, look to hybrid to help you offload those functions. That’s the beauty of cloud and hybrid cloud in general: you’re able to deliver the specific services you need to a group of users, applications, workloads, and much more.
- Empower IT by integrating virtualization, apps, and data. Believe it or not, you may already have a lot of the necessary components for a solid hybrid cloud initiative. In fact, you can further empower your IT systems by integrating hybrid cloud into your virtualization and application delivery ecosystem. Remember, today’s virtualization platforms are designed to integrate with various types of cloud services. For example, if you have new VDI initiatives, or are trying to work with things like big data, a hybrid cloud platform can absolutely help. Or maybe you have a cloud initiative such as OpenStack. There are powerful Ready Solutions, part of a Ready Bundle from Dell EMC and Intel, designed specifically for Red Hat OpenStack and the hybrid cloud. So, not only are you able to leverage cloud resources, you can do it from an OPEX cost perspective.
- The future is multi-cloud – make sure you keep your cloud agile. A major part of working with hybrid cloud is the agility this architecture brings. However, you truly do need to involve your IT and business units to make this a successful endeavor. To make your cloud and business more agile, you’ll need to ensure that you keep pace with the evolution of technology. Remember, digital transformation can absolutely include a migration to a hybrid cloud platform. However, you don’t have to go through this journey alone. Great partners and ecosystem solution providers like Dell EMC and Intel are ready to help you on this path and enable powerful levels of business and IT agility. These types of partners are committed to simplifying the path to delivering hybrid cloud services by developing the foundational technology that powers high-performance, energy-efficient, highly available, and secure cloud environments. Working with the right team, you can create a cloud model that’s right for your business and helps create real-world competitive advantages.
As you explore your own cloud options, know that hybrid cloud allows organizations to create powerful, agile environments, enabling better business economics and faster responses to the market. However, as with any technology, planning, execution, and management are always critical.
Hybrid cloud options give organizations the ability to deliver rich content to the increasingly mobile and digitized end user. Most of all, these types of cloud environments create new kinds of business and IT economics, impacting various technologies, data center real estate, and even how new strategies are developed. As you create your own hybrid cloud architecture, make sure to factor planning, management, security, your users, and, most of all, your business into your deployment decisions.
7:37p
What’s New in Red Hat’s Latest Enterprise Linux Release  Brought to you by IT Pro
These days when a new version of an operating system is released, there’s usually not a lot of gee-whiz new whistles and bells to make the front office folks salivate — especially if it’s a point release. But there are still plenty of new features to make DevOps folks happy — nuts and bolts stuff that makes everybody’s life easier.
That’s the case with the release earlier this week of Red Hat Enterprise Linux’s latest and greatest, RHEL 7.4. It doesn’t really do anything new and exciting enough to impress non-techie bosses, but it’s loaded with new features that count where it matters.
Let’s just hit the high spots.
Security: There are many security enhancements in this version, starting with updated audit capabilities that simplify how administrators filter the events logged by the audit system, gather more information from critical events, and interpret large numbers of records.
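The specific filtering enhancements in 7.4 aren’t shown here; the sketch below just illustrates the general audit-filtering workflow with the standard auditctl and ausearch tools (run as root; the watched path and key name are arbitrary examples):

```python
# Minimal sketch: add an audit watch on /etc/passwd and tag matching events
# with a searchable key. Uses the standard auditctl/ausearch tools; run as root.
# This illustrates audit filtering in general, not a RHEL 7.4-only feature.
import subprocess

# Watch writes and attribute changes to /etc/passwd, keyed "passwd-changes".
subprocess.run(
    ["auditctl", "-w", "/etc/passwd", "-p", "wa", "-k", "passwd-changes"],
    check=True,
)

# Later, pull only the matching records out of the audit log.
subprocess.run(["ausearch", "-k", "passwd-changes", "--interpret"], check=True)
```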
There’s also USB Guard, a feature that should allow admins to sleep better at night. It allows for greater control over how plug-and-play devices can be used by specific users to help limit both data leaks and data injection.
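A typical workflow with the usbguard tooling looks something like this sketch: snapshot the currently attached devices as an allow-list baseline, then review and explicitly authorize anything new (run as root; the device ID is made up):

```python
# Sketch of a basic USBGuard workflow: snapshot the currently attached devices
# as the allowed baseline, then review and explicitly authorize new devices.
# Run as root; the device number below is hypothetical.
import subprocess

# Generate a policy that allows exactly the devices attached right now.
policy = subprocess.run(
    ["usbguard", "generate-policy"], check=True, capture_output=True, text=True
).stdout
with open("/etc/usbguard/rules.conf", "w") as rules:
    rules.write(policy)

# List devices and explicitly allow one that was blocked (device ID 7 is made up).
subprocess.run(["usbguard", "list-devices"], check=True)
subprocess.run(["usbguard", "allow-device", "7"], check=True)
```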
Also notable is the built-in enhanced container security functionality with full support for using SELinux with OverlayFS, which helps secure the underlying file system and provides the ability to use docker and namespaces together for fine-grained access control.
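A quick way to check that a host is actually running with that combination is to query the Docker daemon; this is a generic sketch using docker info’s Go-template output (field contents can vary slightly between Docker versions):

```python
# Sketch: verify that Docker is using the overlay2 storage driver and that
# SELinux support is enabled. Output formats vary slightly across Docker versions.
import subprocess

driver = subprocess.run(
    ["docker", "info", "--format", "{{.Driver}}"],
    check=True, capture_output=True, text=True,
).stdout.strip()

security = subprocess.run(
    ["docker", "info", "--format", "{{json .SecurityOptions}}"],
    check=True, capture_output=True, text=True,
).stdout.strip()

print(f"storage driver: {driver}")        # expect "overlay2"
print(f"security options: {security}")    # expect selinux to be listed here
```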
Performance: New performance features start with NVMe over Fabrics support, for increased flexibility and reduced overhead when accessing high-performance NVMe storage devices located elsewhere in the data center over either Ethernet or InfiniBand fabric infrastructures.
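On the host side, attaching remote NVMe namespaces uses the standard nvme-cli tooling; the sketch below is a rough illustration, with a placeholder target address and subsystem NQN:

```python
# Sketch: discover and connect to an NVMe-over-Fabrics target over RDMA using
# the standard nvme-cli tool. The address, port, and subsystem NQN are placeholders.
import subprocess

TARGET_ADDR = "192.0.2.10"                         # placeholder target IP
TARGET_NQN = "nqn.2017-08.example:storage-array"   # placeholder subsystem NQN

# Ask the target which subsystems it exports.
subprocess.run(
    ["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", "4420"],
    check=True,
)

# Connect; the remote namespace then shows up as a local /dev/nvmeXnY device.
subprocess.run(
    ["nvme", "connect", "-t", "rdma", "-a", TARGET_ADDR, "-s", "4420", "-n", TARGET_NQN],
    check=True,
)
```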
There are also performance enhancements for when RHEL is deployed on public clouds. These include decreased boot times and support for the Elastic Network Adapter on Amazon Web Services to enable new network capabilities.
Containers: RHEL Atomic Host is a lightweight operating system based on RHEL that’s optimized to run Linux containers, and it’s included in RHEL 7.4.
Atomic Host offers integrated support for SELinux and OverlayFS, as well as full support for the overlay2 storage graph driver, for improved security without sacrificing performance.
It also offers full support for package layering with rpm-ostree. This provides the ability to add packages, like monitoring agents and drivers, to the host operating system.
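In practice, layering a package onto an Atomic Host looks roughly like the sketch below (the package name is just an example, and a reboot is needed to switch into the new deployment unless the LiveFS preview mentioned next is used):

```python
# Sketch: layer a package onto RHEL Atomic Host with rpm-ostree and reboot into
# the new deployment. The package name is only an example.
import subprocess

subprocess.run(["rpm-ostree", "install", "sysstat"], check=True)  # layer the package
subprocess.run(["rpm-ostree", "status"], check=True)              # inspect deployments
subprocess.run(["systemctl", "reboot"], check=True)               # boot into the new tree
```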
In addition, Atomic Host offers LiveFS as a Technology Preview, which makes it possible to install security updates and layer packages without a reboot.
Management and automation: With data center footprints spanning everything from bare metal to the cloud, controlling IT environments continues to become more complex. In addition to Red Hat Satellite and Ansible Tower, RHEL 7.4 introduces Red Hat Enterprise Linux System Roles as another Technology Preview. Built on Ansible automation, System Roles let a workflow be created once and then used across large, heterogeneous RHEL deployments without additional modifications.
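As a rough illustration of the idea (the timesync role name follows the rhel-system-roles packaging; the inventory group, inventory file, and NTP server are hypothetical), driving one of the system roles from Ansible might look like this:

```python
# Rough sketch: run one of the RHEL System Roles (timesync) across a group of
# hosts via Ansible. The role name follows the rhel-system-roles packaging;
# the inventory group, inventory file, and NTP server below are hypothetical.
import subprocess
import tempfile

PLAYBOOK = """\
- hosts: rhel_servers
  roles:
    - role: rhel-system-roles.timesync
      timesync_ntp_servers:
        - hostname: 0.rhel.pool.ntp.org
          iburst: yes
"""

with tempfile.NamedTemporaryFile("w", suffix=".yml", delete=False) as playbook:
    playbook.write(PLAYBOOK)
    path = playbook.name

subprocess.run(["ansible-playbook", "-i", "inventory.ini", path], check=True)
```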
Multiple architectures: RHEL 7.4 supports just about any hardware architecture likely to be found in a data center. Supported architectures include IBM Power, IBM z Systems, and 64-bit ARM (as a Development Preview). For the IBM Power Little Endian architecture, this release enables support for the High Availability and Resilient Storage Add-Ons, as well as the Open Container Initiative (OCI) runtime and image format.
All of these new features come, of course, on a rock-solid operating system that in many ways sets the bar for enterprise Linux.