Data Center Knowledge | News and analysis for the data center industry

Tuesday, June 27th, 2017

    Time Event
    12:00p
    Will You Build a Data Center Storage Tier With Next-Gen Storage? Not for a Few Years

    Despite the way SSD prices have fallen, the all-flash data center remains something of a rarity – although Gartner has predicted that up to a quarter of data centers might use all-flash arrays for primary storage by 2020, with ROI of under six months in some cases. By then, you’ll be using NVMe over PCIe rather than SAS or SATA to connect those drives to the server or storage array, or maybe an NVMe over Fabrics (NVMf) architecture using RDMA or Fibre Channel. You might also be considering next-generation non-volatile storage like Intel’s Optane.

    Rather than replacing hard drives directly, these low latency, low-power technologies are likely to give you a high-performance storage tier for your most demanding workloads.

    Flash’s smaller size and lower power and cooling requirements looked like the perfect fit for data centers constrained by space and power. With fewer moving parts, it’s not as susceptible to shock and vibration, either; while humidity turns out to be the cause of most disk drive failures in hyperscale cloud data centers, vibration is just as much of a problem in enterprise data centers. That also makes SSDs an attractive option in less traditional data center locations, like oil rigs and container ships, where hard drives would struggle.

    But it wasn’t just the higher price of SSDs that held back adoption, especially as flash arrays use data deduplication and compression to keep their raw media costs down. Taking advantage of the higher performance of flash for the applications that use that storage means a lot of integration work. You might be able to remove existing layers of overprovisioning and caching to reduce costs when you adopt flash, but that means changes to your data center layout. And with faster storage performance, it’s easy for networking to become the bottleneck.

    Plus, the SATA and SAS protocols used to connect storage were designed for tape and hard drives, and they can’t handle the large numbers of simultaneous I/O requests that flash can. As flash capacity increases, connecting SSDs using protocols that only allow for a limited number of storage request queues becomes increasingly inefficient, which is why the storage industry is currently switching over to NVMe and PCIe to get much higher IOPS and throughput.
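    As a rough illustration of that queue limitation, the sketch below compares the command-queue limits defined by the AHCI/SATA specification with those NVMe allows. These are spec-level maximums; real devices and drivers expose far fewer queues, so treat this as an illustration rather than a benchmark.

        # Rough comparison of the queueing limits behind the SATA/AHCI and NVMe
        # specifications. The spec limits are well documented; the comparison
        # itself is illustrative only.

        SATA_QUEUES = 1            # AHCI exposes a single command queue...
        SATA_QUEUE_DEPTH = 32      # ...with up to 32 outstanding commands

        NVME_MAX_QUEUES = 65_535   # NVMe allows up to 64K I/O queues...
        NVME_QUEUE_DEPTH = 65_536  # ...each up to 64K commands deep

        sata_outstanding = SATA_QUEUES * SATA_QUEUE_DEPTH
        nvme_outstanding = NVME_MAX_QUEUES * NVME_QUEUE_DEPTH

        print(f"SATA/AHCI: up to {sata_outstanding:,} outstanding commands per device")
        print(f"NVMe:      up to {nvme_outstanding:,} outstanding commands per device")
        print(f"That is a factor of {nvme_outstanding // sata_outstanding:,} more parallelism")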

    “A huge difference between NVMe and SCSI is the amount of parallelism that NVMe enables, which means that there will potentially be a huge bandwidth difference between NVMe and SCSI devices (in densely consolidated storage environments, for example),” IDC Research Director for Enterprise Storage Eric Burgener told Data Center Knowledge. “NVMe is built specifically for flash and doesn’t have anything in it to deal with spinning disk, so it’s much more efficient (which means you get a lot more out of your storage resources). Latencies are lower as well, but that difference (around 200 microseconds faster than 12Gb SAS devices) may not make that much of an impact because of other bottlenecks.”

    So far, most NVMe SSDs have been directly attached to servers. “At this point, 99 percent of all the NVMe devices being purchased are bought after market by customers who put them into x86 servers they own that have a lot of PCIe slots,” says Burgener.

    A number of storage vendors are already using some NVMe technology in their arrays, for cache cards and array backplanes, and marketing their existing all-flash arrays as “NVMe ready”. Pure Storage had already put NVMe cache cards (customized to be hot-pluggable) in its FlashArray//M, and has now announced the FlashArray//X, which uses NVMe throughout the array – for the devices, controller and backplane – with a dedicated 50Gb/s RDMA over Converged Ethernet (RoCE) NVMe fabric. Pure has already demonstrated that fabric working with Cisco UCS servers and virtual interface cards, and Micron recently announced its SolidScale platform using a Mellanox RoCE NVMe fabric.

    “I think over the next three years we will see more array vendors using NVMe cache cards, NVMe backplanes, NVMe controllers, NVMe over fabric, and then finally all NVMe devices,” he predicts. Vendors who already have software to manage tiered data placement in hybrid flash arrays (like Dell EMC, HDS, HPE, IBM, and NetApp) might also introduce multi-tier all-flash arrays, using a small NVMe cache with SAS SSDs; these would be cheaper but more complex.

    Optane – the brand Intel uses for the 3D XPoint persistent memory it developed with Micron when it’s packaged as storage rather than memory – will also start out as direct-attached SSDs using PCIe, and move into arrays as capacities and volumes increase. In early tests, Optane is comparable to fast SSDs on throughput and latency but much better at sustaining those under load – even with a large number of writes being performed, Intel claims the read latency stays low. The write endurance is also far better than NAND flash, and Optane can read and write individual bytes, rather than the pages of flash and the sectors of a hard drive.

    “Having Optane SSDs as the caching layer within a high performance storage system will enable infrastructure teams to move even the most demanding OLTP database workloads onto a simplified shared storage pool, without having to worry about these applications being disrupted by other data center services,” claims James Myers, director of data center NVM Solutions Architecture at Intel.

    Initially, you’ll either treat Optane like an SSD or use it as slightly slower DRAM. You can use Optane SSDs in Storage Spaces Direct in Windows Server 2016 and Azure Stack, as well as in VMware vSAN, and the next release of Windows Server will support it as storage class memory.

    But in time it will show up as something between storage and memory. “Instead of doing I/O (reads and writes of 4K blocks for example) we’ll be doing loads and stores on bytes of data on our new byte-addressable persistent memory,” explains Alex McDonald from the Storage Networking Industry Association Europe. “We need a new programming model for that, because it’s not like regular DRAM. You can’t clear it by removing the power, for example, so clearing problems using the big red switch won’t work. That’s just one consideration of something that’s nearly as fast as DRAM but doesn’t drop bits when the power goes off.”
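    As a rough illustration of the load/store model McDonald describes, the sketch below contrasts rewriting a whole 4K block with updating a single byte through a memory-mapped region. On real persistent memory you would map a file on a DAX-aware filesystem (or use a library such as PMDK) and issue explicit cache flushes; this Python version, with its placeholder file name, only shows the shape of the programming pattern.

        import mmap
        import os

        # Block-style I/O: read and rewrite a whole 4K block just to change one byte.
        def update_via_block_io(path, offset, value):
            block_size = 4096
            block_start = (offset // block_size) * block_size
            with open(path, "r+b") as f:
                f.seek(block_start)
                block = bytearray(f.read(block_size))
                block[offset - block_start] = value
                f.seek(block_start)
                f.write(block)

        # Load/store style: map the file and update a single byte in place.
        # On persistent memory this becomes a CPU store plus a flush, not an I/O.
        def update_via_load_store(path, offset, value):
            size = os.path.getsize(path)
            with open(path, "r+b") as f:
                with mmap.mmap(f.fileno(), size) as region:
                    region[offset] = value   # byte-granular store
                    region.flush()           # stand-in for a cache-line flush on real pmem

        if __name__ == "__main__":
            path = "demo.dat"                # placeholder path, not a DAX mount
            with open(path, "wb") as f:
                f.write(bytes(8192))
            update_via_block_io(path, 4100, 0x7F)
            update_via_load_store(path, 4101, 0x3C)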

    The Need for Speed

    Before you invest in a new storage tier, you need to understand your workloads. Are writes more important than reads? Is latency an issue? Making the wrong choice can significantly impact application performance. As Brian Bulkowski, co-founder and CTO at database company Aerospike, notes, “transactional systems are becoming more and more complex; they now process a high volume of data while a transaction is happening”.

    You also need to make sure that your applications can take advantage of better storage performance. In-memory databases, for example, are designed on the assumption that storage is slow and likely to be distributed across different systems.

    “There are very few workloads that today require an end-to-end NVMe system, with an NVMe host connection, NVMe controllers and backplane, plus NVMe devices,” says Burgener. There are correspondingly few vendors selling them (like Apeiron Data, E8 Storage, Excelero and Pavilion Data).

    “Customers that have bought these systems tend to use them for extremely high-performance databases and real-time big data analytics where a lot of the data services needed by an enterprise (RAID, compression, encryption, snapshots, replication, and so on) are provided by the database or are not used,” he explains. Where those data services are included, he finds “they lack maturity and broad features,” making these systems more of a niche play for at least the next three years (although overall market revenues for these systems will grow every year).

    One exception is Pure Storage, which has been offering what Burgener calls “the full complement of enterprise-class functionality” on its arrays for a number of years. The all-NVMe FlashArray//X is Pure’s highest-end array; its other models still use SAS rather than NVMe. “Most customers probably won’t need the performance of the X70 for quite a while,” he notes, but when they do, Pure may have the advantage of a more mature offering.

    Bulkowski suggests there’s “a bifurcation in the flash market” due to vendors prioritizing the read-optimized, cheaper, slower, high-density MLC flash that hyperscale customers want over the faster, low-density, write-optimized SLC flash. He expects Optane, with its higher performance, to eventually replace SLC (which is dropping out of the market), though it does require rethinking how your software works with memory; for example, using Optane to store indexes while other, slower storage handles the bulk of your data.
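    A minimal sketch of that index-on-fast-media pattern, with made-up file names and record formats: a small, hot index (standing in for data held on Optane or persistent memory) maps keys to offsets in a bulk data file that would live on slower, cheaper flash.

        import os

        # Two-tier layout: a small, hot index mapping keys to offsets in a large
        # append-only data file kept on slower storage. Names are illustrative.

        class TieredStore:
            def __init__(self, data_path="bulk_data.log"):
                self.data_path = data_path
                self.index = {}                      # hot tier: key -> (offset, length)
                open(self.data_path, "ab").close()   # make sure the slow tier exists

            def put(self, key, value: bytes):
                with open(self.data_path, "ab") as f:
                    f.seek(0, os.SEEK_END)
                    offset = f.tell()
                    f.write(value)                   # bulk data lands on the slow tier
                self.index[key] = (offset, len(value))   # only the index stays hot

            def get(self, key) -> bytes:
                offset, length = self.index[key]          # fast lookup on the hot tier
                with open(self.data_path, "rb") as f:     # one seek+read on the slow tier
                    f.seek(offset)
                    return f.read(length)

        if __name__ == "__main__":
            store = TieredStore()
            store.put("txn:1", b"example transaction payload")
            print(store.get("txn:1"))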

    Some Aerospike customers are using the database for workloads that will be able to take advantage of faster storage, like web advertising (where low latency is crucial), as well as for fraud detection, threat prevention and network optimization in telecoms. IoT and machine learning will also drive some demand for extremely high write throughput, as well as large data sets. An enterprise hybrid transaction/analytical processing system might have a 10TB in-memory database; a machine learning data set for fraud detection would be closer to 100TB.

    At the other end of the scale, as Burgener points out, “for handling mainstream workloads, there is a lot of room for growth still with SAS-based SSDs and arrays.” The full range of NVMe standards for dual port, hot plug and device drivers are still in development and he suggests “there will likely still be some shifting there before those standards settle. Those features are needed for widespread enterprise use, and SCSI is already very mature in those areas.”

    Between the initially high cost of NVMe and the need for standards to mature and volumes to increase (bringing those costs down), IDC predicts that NVMe will only replace SATA and SAS in mainstream systems by 2021. Optane doesn’t ship until Q3 this year and may not ship in volume until 2018.

    “It’s an interesting product, it pushes performance limits, and it will enable the development of more real-time workloads, but volumes will be limited for the next several years.”

    “Optane will be sparingly deployed in server-based storage designs (putting Optane devices into x86 servers) and won’t start to appear in any arrays until the end of 2018 at the earliest,” Burgener predicts. “It’s just not needed for most workloads. Even at that point it may be used for cache cards and possibly a small high-performance tier to front end slower SSDs: HPE plans to use Optane as a caching layer in its 3PAR SANs.” It’s also going to be very expensive relative to SAS SSDs for probably at least the next few years. In-memory databases are one area where Optane will first be used. That’s going to offer a boost for machine learning too.

    If you do have a workload that needs this faster storage tier, Burgener cautions that you’ll also need to look at your network architecture. One NVMe device will max out a 40GbE host connection, and a single Optane device can max out four 100GbE connections. “Unless you increase the host connection bandwidth significantly, you leave most of the performance of the array inaccessible to the application workloads,” he warns. That means to get the full performance of NVMe and Optane arrays, you’ll need to move to NVMe over Fabric – which again means changes to your network infrastructure.
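    Some back-of-the-envelope arithmetic shows why the host connection becomes the limit. The link capacities below follow from the Ethernet speeds quoted above; the per-device throughput is an assumed, illustrative figure rather than a vendor specification.

        # Convert the link speeds quoted above into usable bandwidth and estimate how
        # many devices it takes to saturate them. Device throughput is assumed.

        ASSUMED_DEVICE_GBPS = 3.0                 # GB/s sustained per NVMe SSD (assumed)

        def usable_gbytes_per_sec(gigabits):
            return gigabits / 8.0                 # ignore protocol overhead for simplicity

        for label, gigabits in [("40GbE host connection", 40), ("4 x 100GbE connections", 400)]:
            capacity = usable_gbytes_per_sec(gigabits)
            devices_to_saturate = capacity / ASSUMED_DEVICE_GBPS
            print(f"{label}: ~{capacity:.0f} GB/s, saturated by ~{devices_to_saturate:.1f} "
                  f"devices at {ASSUMED_DEVICE_GBPS} GB/s each")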

    You may also need to design your storage tiers to extend beyond the traditional data center. Eventually, McDonald predicts, “this stuff [will be] so fast (and hopefully cheap relative to its performance and capacity) that it will be pretty ubiquitous as the compute storage layer.

    “In other words, your data will be in one of three categories: super-hot and on something like these technologies; super-cold and stored on much slower media; or super-distant, generated at the periphery and the edge by remote devices (think IoT, sensors, smart cities, retail outlets and so on) that will use cheap flash.” Your storage tiers could extend from super-fast persistent memory for the hottest data that’s being processed all the way to cloud storage, but making that work will need a smart, automated data fabric to make it seamless.

    3:00p
    Energy Conservation Strategy Supported by Value of Tape

    Rich Gadomski is VP of Marketing, Fujifilm Recording Media USA, Inc. and member of the Tape Storage Council.

    In today’s business climate, achieving results while maximizing resources has become more important than ever. IT leaders are focusing intently on every possible way to drive performance and extract as much value as possible from their systems and technologies while striving to conserve energy and reduce waste.

    One critical area that IT leaders are looking at continues to be data storage. In today’s era of big data and the Internet of Things (IoT), data storage and the need for long term data retention are at the forefront of delivering value to governments, businesses and organizations of all sizes. The challenge is how to store more and more data cost effectively considering the limited resources available.

    One of those limited resources is certainly energy. The world’s data centers now consume almost as much energy as the country of Spain, and data centers account for just over 2 percent of total U.S. electrical output. Two of the biggest areas of energy consumption are servers and disk storage. While cutting-edge data centers rightly focus on including renewable sources of energy to help power their facilities and reduce their impact on the environment, best practices also dictate moving less frequently accessed data from expensive flash and higher-energy-consuming disks to more energy-efficient tape systems.

    According to the Tape Storage Council’s 2017 State of Tape Industry report, energy costs for tape capacity are typically less than 5 percent of the equivalent amount of disk capacity. Since tape cartridges spend most of their life in a library slot or on a shelf, they consume no energy when not mounted in a tape drive. As capacity demands increase, tape capacity can be added without adding more drives.
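    A simple illustration of the gap: the sketch below estimates annual energy use for a petabyte kept on always-spinning disk versus in a tape library. Every wattage, capacity and duty-cycle figure in it is an illustrative assumption; the Tape Storage Council’s sub-5-percent figure above is the authoritative number.

        # Rough annual energy use of keeping 1 PB on always-spinning disk versus in a
        # tape library. Every hardware figure below is an illustrative assumption.

        PETABYTE_TB = 1000
        HOURS_PER_YEAR = 24 * 365
        COST_PER_KWH = 0.10                  # USD, assumed

        # Disk tier: drives stay powered whether or not the data is being read.
        DISK_TB_PER_DRIVE = 10               # assumed capacity per HDD
        DISK_WATTS_PER_DRIVE = 8             # assumed average draw per HDD

        # Tape tier: cartridges on the shelf draw nothing; only mounted drives do.
        TAPE_DRIVES = 2                      # assumed drives serving this pool
        TAPE_WATTS_PER_DRIVE = 30            # assumed draw while active
        TAPE_DUTY_CYCLE = 0.25               # assumed fraction of time drives are busy

        disk_kwh = (PETABYTE_TB / DISK_TB_PER_DRIVE) * DISK_WATTS_PER_DRIVE * HOURS_PER_YEAR / 1000
        tape_kwh = TAPE_DRIVES * TAPE_WATTS_PER_DRIVE * TAPE_DUTY_CYCLE * HOURS_PER_YEAR / 1000

        print(f"Disk: {disk_kwh:,.0f} kWh/year (~${disk_kwh * COST_PER_KWH:,.0f})")
        print(f"Tape: {tape_kwh:,.0f} kWh/year (~${tape_kwh * COST_PER_KWH:,.0f})")
        print(f"Tape energy is ~{100 * tape_kwh / disk_kwh:.1f}% of the disk figure")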

    Improving data center energy consumption requires long term planning and must include an effective storage strategy. The tape industry has been fueled by a decade of strong technological development and continues to play a major role in traditional backup, active archive and disaster recovery applications, in addition to effectively addressing many new large-scale storage requirements for the unknown appetite of the IoT. As a result, the role tape serves in today’s modern data centers is steadily expanding. IT leaders and cloud service providers are leveraging tape for its significant reliability, security, operational and economic advantages.

    In addition to energy savings, users can also benefit from reliability levels for tape that are improving quickly: the Bit Error Rate (BER) for enterprise tape formats and LTO-7 tape is rated at one bit in error per 1×10^19 bits read. This makes the top-rated tape drives 1,000 times more reliable than the top-rated HDDs, which are rated at 1×10^16. Tape’s BER leads the entire storage industry, and going forward, significantly higher levels of tape reliability can be expected.
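    The 1,000-times figure follows directly from the two rates quoted above:

        # Ratio of the two bit error rates quoted above.
        tape_bits_per_error = 1e19     # enterprise tape / LTO-7 rating
        hdd_bits_per_error = 1e16      # top-rated HDD rating

        print(f"Tape is rated for {tape_bits_per_error / hdd_bits_per_error:,.0f}x "
              f"more bits read per error than the best HDDs")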

    Manufacturers’ specifications indicate that today’s enterprise tape formats and LTO tape media have a life span of 30 years or more, while a tape drive is typically deployed for 7 to 10 years before replacement. By comparison, a standard disk drive is typically operational for 3 to 5 years before replacement.

    Tape cartridge capacities and data transfer speeds are also expected to grow rapidly for the foreseeable future, with no fundamental technology limitations in sight. In fact, in April 2015 Fujifilm, in conjunction with IBM, announced a new record in areal data density of 123 billion bits per square inch on data tape, achieved using Fujifilm’s Barium Ferrite magnetic particle technology. This density breakthrough equates to a standard LTO cartridge capable of storing up to 220 TB of uncompressed data, more than 36 times the storage capacity of the current LTO-7 tape. Thanks to this technology breakthrough, the long-term ability to meet the future capacity requirements of tape technology roadmaps is assured.
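    That multiple is easy to check against LTO-7’s 6 TB native (uncompressed) cartridge capacity; the short sketch below assumes that figure.

        # Sanity-check the "more than 36 times" claim, assuming LTO-7's 6 TB native
        # (uncompressed) capacity per cartridge.

        DEMONSTRATED_TB = 220   # Fujifilm/IBM areal-density demonstration, per cartridge
        LTO7_NATIVE_TB = 6      # LTO-7 native capacity

        print(f"{DEMONSTRATED_TB / LTO7_NATIVE_TB:.1f}x the capacity of an LTO-7 cartridge")
        # -> 36.7x, consistent with "more than 36 times"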

    In terms of Total Cost of Ownership (TCO) compared with other storage media, tape is the most cost-effective technology for long-term data retention. Once again, tape capacity can scale without adding more drives; that is not the case with HDDs, where each capacity increase requires another drive and so quickly becomes more costly than tape as capacity demand grows. Well-known TCO studies are publicly available and show that the TCO for HDDs is approximately six times higher than for equivalent-capacity tape systems, thanks in part to tape’s low energy consumption.

    Finally, in terms of security, tape has always been the only truly removable storage medium. This provides what is commonly being referred to today as an “air gap” advantage where, by virtue of its ability to be disconnected from the network, data contained on tape can be impervious to virus or ransomware type attacks.

    Clearly, tape technology offers the most cost-effective, reliable, safe and energy efficient solutions available to the storage industry. The continuing innovative technology advancements and compelling value proposition demonstrate that tape technology is not sitting still. Expect this promising trend to continue throughout 2017 and beyond as the march to Exascale storage solutions draws near.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

     

    6:00p
    New Cyberattack Goes Global, Hits WPP, Rosneft, Maersk

    (Bloomberg) — A new cyberattack similar to WannaCry is spreading from Europe to the U.S., hitting port operators in New York and Rotterdam, disrupting government systems in Kiev, and disabling operations at companies including Rosneft PJSC and advertiser WPP Plc.

    More than 80 companies in Russia and Ukraine were initially affected by the Petya virus that disabled computers Tuesday and told users to pay $300 in cryptocurrency to unlock them, according to the Moscow-based cybersecurity company Group-IB. Telecommunications operators and retailers were also affected and the virus is spreading in a similar way to the WannaCry attack in May, it said.

    Rob Wainwright, executive director at Europol, said the agency is “urgently responding” to reports of the new cyber attack. In a separate statement, Europol said it’s in talks with “member states and key industry partners to establish the full nature of this attack at this time.”

    Kremlin-controlled Rosneft, Russia’s largest crude producer, said in a statement that it avoided “serious consequences” from the “hacker attack” by switching to “a backup system for managing production processes.”

    U.K. media company WPP Plc’s website is down, and employees have been told to turn off their computers and not use WiFi, according to a person familiar with the matter. Sea Containers, the London building that houses WPP and agencies including Ogilvy & Mather, has been shut down, another person said. “IT systems in several WPP companies have been affected,” the company said in an emailed statement.

    Global Attack

    The hack has quickly spread from Russia and Ukraine, through Europe and into the U.S. A.P. Moller-Maersk, operator of the world’s largest container line, said its customers can’t use online booking tools and its internal systems are down. The attack is affecting multiple sites and units, which include a major port operator and an oil and gas producer, spokeswoman Concepcion Boo Arias said by phone.

    APM Terminals, owned by Maersk, is experiencing system issues at multiple terminals, including the Port of New York and New Jersey, the largest port on the U.S. East Coast, and Rotterdam in the Netherlands, Europe’s largest harbor.

    Cie de Saint-Gobain, a French manufacturer, said its systems had also been infected, though a spokeswoman declined to elaborate, while Mondelez International Inc. said it was also experiencing a global IT outage and was looking into the cause. Merck & Co. Inc., based in Kenilworth, New Jersey, has also reported that its computer network was compromised due to the hack.

    WannaCry Warnings

    The strikes follow the global ransomware assault involving the WannaCry virus that affected hundreds of thousands of computers in more than 150 countries as extortionists demanded $300 in bitcoin from victims. Ransomware attacks have been soaring, and the number of such incidents increased by 50 percent in 2016, according to Verizon Communications Inc.

    Analysts at Symantec Corp. have said the new virus, called Petya, uses an exploit called EternalBlue to spread, much like WannaCry. EternalBlue exploits vulnerabilities in Microsoft Corp.’s Windows operating system.

    The new virus has a fake Microsoft digital signature appended to it and the attack is spreading to many countries, Costin Raiu, director of the global research and analysis team at Moscow-based Kaspersky Lab, said on Twitter.

    The attack has hit Ukraine particularly hard. The intrusion is “the biggest in Ukraine’s history,” Anton Gerashchenko, an aide to the Interior Ministry, wrote on Facebook. The goal was “the destabilization of the economic situation and in the civic consciousness of Ukraine,” though it was “disguised as an extortion attempt,” he said.

    Kyivenergo, a Ukrainian utility, switched off all computers after the hack, while another power company, Ukrenergo, was also affected, though “not seriously,” the Interfax news service reported.

    Ukrainian delivery network Nova Poshta halted service to clients after its network was infected, the company said on Facebook. Ukraine’s Central Bank warned on its website that several banks had been targeted by hackers.

    9:37p
    Lenovo’s Bid for Data Center Growth

    The takeaway from Tech World Transform, the Lenovo lovefest held last week in New York City, is that Lenovo has its eyes on the data center market. So much so that Emilio Ghilardi, the company’s North America president, said he sees the data center, and especially hyper-converged infrastructure, as the company’s “growth engine.” To fuel the growth, the company announced a new portfolio of data center products under a new brand called ThinkSystems. In addition, the company introduced ThinkAgile, another new brand for turnkey infrastructure offerings.

    “You can think about [ThinkSystems] as our server portfolio, our storage, and our networking, both top of the rack and embedded into our blade infrastructure,” Kamran Amini, general manager of Lenovo’s server and storage system unit, told Data Center Knowledge. “ThinkAgile basically is our software defined platform, and that leverages the best of breed of our ThinkSystem product line, so there’s server technology that you run on.”

    The company has plenty of room for growth in the data center market. According to a March server market report from Gartner, Lenovo placed fifth by revenue in the fourth quarter of 2016, with a 6.4 percent market share. This was down from a year-earlier share of 7.5 percent, representing a 16.7 percent drop in server revenue.

    But Lenovo now seems to be going the extra mile in its bid to gain data center traction. ThinkSystems represents the first major upgrade to its server portfolio since it acquired IBM’s x86 server business more than three years ago, and includes multiple new server lines, storage systems, and the ThinkSystem RackSwitch line, which is designed to ease the deployment of hybrid cloud data centers. Indeed, all of Lenovo’s new offerings seem to spin around making things easier on data center operators.

    The new servers are an assortment of top-of-the-line next-generation products designed for a variety of use cases. The most noteworthy feature, however, might be the inclusion of a built-in management tool, XClarity Controller, that runs on its own dedicated microprocessor. “We kept getting feedback from our clients about how, as servers are getting more complicated and richer, they’re taking longer to boot,” Amini said. “With XClarity Controller, we’re doing 2X faster boots to OS, from gen to gen, and we’re getting 6X faster firmware updates on our server.”

    The feature isn’t only about shortening boot times and firmware updates; it also provides graphical, real-time access to manage everything from configuration to deployments on bare metal or virtual environments. “It’s very easy to consume. You don’t need to have a Ph.D. in management software to learn it, understand it, and actually use it. It’s really targeted to drive faster access to the server itself.”

    This is an addition to Lenovo’s existing XClarity line that includes XClarity Administrator, which offers similar functionalities but on a different scale.

    “XClarity Controller is based on the server, a one-to-one management,” Amini explained. “XClarity Admin is one-to-many, [including] not only servers but storage and networking management. It’s a virtual machine that you can put anywhere in your data center — it can run on a laptop, for example, or a server running VMs. We try to make it very easy to deploy, because we know that clients vary. An SMB client to a cloud service provider to a large enterprise or managed service provider — all do their stuff differently.”

    Also, he points out, because XClarity Admin is a software-based solution, users don’t have to purchase expensive switches or hardware to manage multiple devices as they might with some other server makers.

    Of the new servers being introduced by Lenovo, perhaps the most interesting is the ThinkAgile SX for Microsoft Azure Stack. As the name implies, this offering is designed to be a turnkey solution for companies wanting to incorporate the public cloud, specifically Microsoft Azure, into an on-prem private cloud.

    “It comes racked, cabled and configured,” Amini said. “The customer hooks up the power. The customer — or our service reps that are there, depending on the customer’s expertise — hooks up the networking. You set up your security and your IP settings and the platform is running.”

    The problem with this one is that it’s narrowly focused on Microsoft Azure. Connections with other public cloud providers are possible only by going through Azure Stack, which means there’s more than just a small amount of vendor lock-in. There might be a problem getting that to fly.

    9:43p
    Global Ransomware Attack Cripples Networks – Again

    Brought to you by MSPmentor

    For the second time in as many months, hackers today are unleashing a massive multinational ransomware attack that has crippled a host of networks across the western hemisphere.

    The attack appears to have begun sometime Monday, with the hardest-hit targets in Ukrainian infrastructure, including power companies, airports, banks, state-run television stations, postal facilities and large industrial manufacturers.

    Also affected were foreign operations of U.S. pharmaceutical firm Merck, advertising conglomerate WPP, French building materials vendor Saint-Gobain, Danish shipping giant AP Moller-Maersk and Pittsburgh, Penn.-based Heritage Valley Health Systems.

    The as-yet-unidentified hackers appear to be demanding payments of $300 (USD), and as of midday on the east coast of North America, the attack was said to still be spreading.

    “The ransomware, called Petwrap, is based on an older Petya variant, originating from the GoldenEye malware in December 2016,” Phil Richards, chief information security officer for IT services firm Ivanti – formerly LANDESK – said in a statement. “The new ransomware variant also includes the SMB exploit known as EternalBlue that was created by the United States National Security Administration, and leaked by the Shadow Brokers hacker group in April 2017.”

    “The Petya component includes many features that enable the malware to remain viable on infected systems, including attacking the Master Boot Record,” he added. “The EternalBlue component enables it to proliferate through an organization that doesn’t have the correct patches or antivirus/antimalware software.

    “This is a great example of two malware components coming together to generate more pernicious and resilient malware.”

    Early last month, a similar ransomware campaign – also using the EternalBlue exploit purportedly stolen from the NSA’s cyber weapons toolkit – resulted in more than 200,000 attacks across 150 countries.

    That attack, dubbed WannaCry, also involved demands for $300 in bitcoin digital currency.

    “This is the same EternalBlue exploit that WannaCry used,” said Allan Liska, a cyber security analyst at threat intelligence software vendor Recorded Future. “It also has a secondary capability: There’s an information stealer that is bundled in this attack.”

    “In addition to doing the ransomware, it’s also stealing credentials,” he went on. “If it can’t use the EternalBlue, it’s taking the stolen credentials from that box and jumping to another box in the network to try to copy the ransomware over that way.”

    Liska, co-author of the November 2016 book “Ransomware: Defending Against Digital Extortion,” said the new attack reflects a series of sophisticated improvements to the malware used last time.

    “Last month was just the EternalBlue,” he said. “This is the attack where all the security experts last time were saying ‘good thing they didn’t do that.’”

    “This is the stuff that WannaCry left off,” Liska continued. “It’s added additional capabilities and made it much easier to spread around networks – even those that are fully patched.”

    For IT managed services providers (MSPs), protecting clients still largely boils down to a thorough and consistent patching regimen, and user education.

    Also, Liska recommends locking down systems to prevent the running of administrative commands from too many workstations.

    “Those should be locally locked down,” he said. “As an MSP, that’s where you can help their customers architect their networks to be more secure.

    “We need to start teaching system admins that if you need to run those commands, do them from your desktop and target to workstations that you’re troubleshooting.”

    As with WannaCry, Liska expects this attack to diminish in scope and intensity during the coming days, with only occasional flare-ups of the malware popping up from time to time.

    “That’s the problem with the worm,” he said. “We’re still seeing WannaCry running around but we’re seeing less and less of that. That’s what I think will happen here.”
