Data Center Knowledge | News and analysis for the data center industry

Monday, July 17th, 2017

    4:01a
    One Click and Voilà, Your Entire Data Center is Encrypted

    IBM’s latest z Series mainframe, unveiled today, has a novel security feature the company says users have long wanted but couldn’t get: the ability to easily encrypt all their data, at rest or in motion, with just one click.

    The 14th-generation mainframe, called IBM Z, introduces a new encryption engine that for the first time will allow organizations to encrypt all data in their databases, applications, or cloud services, with no performance hit, said Mike Perera, VP of IBM’s z Systems Software unit, in an interview with Data Center Knowledge.

    “It’s a security breakthrough that now makes it possible to protect all the data, all the time,” he said. “And we’re really doing it for the first time at scale, which has not been done up to this point, because it’s been incredibly challenging and expensive to do.”

    Cybersecurity has become a top priority for IBM’s mainframe customers in recent years. This group includes government agencies and many of the world’s largest financial institutions, retailers, healthcare organizations, and insurance firms — in other words, primary targets for professional hackers.

    While much of the IT infrastructure conversation today revolves around cloud, mainframes are still a $3 to $4 billion business for IBM, according to the market research firm IDC.

    “This is a significant technology that they are bringing to the market,” Peter Rutten, analyst for IDC’s Servers and Compute Platforms Group, said about the new encryption capabilities. “Data centers previously had to decide what they would encrypt. Everything was not encrypted because it was a manual process. But as attacks on data increasingly become more frequent and intense, it has become more important to encrypt all the data, wherever they go – at rest or in flight. With this technology, the whole system in its entirety is encrypted.”

    The technology will also help mainframe users meet new data compliance requirements, such as the European Union’s General Data Protection Regulation, pointed out Judith Hurwitz, president of the market research and consulting firm Hurwitz & Associates.

    Bold Claims

    Besides serving its existing customer base, IBM is hoping the new IBM Z will attract new customers, such as companies that have traditionally used x86 servers (the bulk of the market) and companies that want to provide cloud services, Hurwitz said.

    For example, IBM also announced today the launch of IBM Cloud Blockchain data centers in six cities worldwide; these data centers secure the blockchain cloud service using IBM Z’s encryption technology.

    The company says IBM Z handles encryption 18 times faster than do x86 systems and runs Java workloads 50 percent faster. A single system can support more than 12 billion encrypted transactions per day.

    “For quite a long time, people have looked at Intel as a convenient platform, but IBM would love to break that stranglehold and have people take a look at the mainframe,” Hurwitz said. “There are companies that need that level of strength and transaction management.”

    Better Encryption through Software and Hardware Changes

    Customers of the previous z13 mainframe can take advantage of the new pervasive encryption features by upgrading the operating system and software, but they won’t get the performance boost that the new IBM Z mainframe provides, Perera said. IBM advanced its cryptographic technology through a combination of hardware and software innovations.

    These include four times more silicon dedicated to cryptographic algorithms than in the z13, as well as new processor designs and upgrades to the operating system, middleware, and databases. The result is a sevenfold increase in cryptographic performance over the z13 mainframe, he said.

    To further beef up security, IBM has also encrypted APIs and encryption keys. “If someone were to get access to the keys, they can’t do anything with them,” Perera said.
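    IBM hasn’t published the internals of that key-protection scheme, but the general pattern Perera describes resembles envelope encryption: data keys are stored only in wrapped (encrypted) form under a master key that never leaves protected hardware, so key material stolen from disk or memory dumps is useless on its own. The Python sketch below is purely illustrative of that pattern; it uses the open source cryptography package, and none of the names reflect IBM’s actual APIs.

        # Illustrative envelope-encryption sketch -- not IBM's implementation.
        # Requires the third-party "cryptography" package (pip install cryptography).
        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        master_key = AESGCM.generate_key(bit_length=256)  # stands in for a key held in an HSM

        def new_wrapped_data_key():
            """Create a data key and return it only in wrapped (encrypted) form."""
            data_key = AESGCM.generate_key(bit_length=256)
            nonce = os.urandom(12)
            wrapped = AESGCM(master_key).encrypt(nonce, data_key, b"data-key")
            return nonce, wrapped  # safe to store; useless without the master key

        def encrypt_record(nonce, wrapped, plaintext):
            """Unwrap the data key just long enough to encrypt one record."""
            data_key = AESGCM(master_key).decrypt(nonce, wrapped, b"data-key")
            rec_nonce = os.urandom(12)
            return rec_nonce, AESGCM(data_key).encrypt(rec_nonce, plaintext, None)

        nonce, wrapped = new_wrapped_data_key()
        rec_nonce, ciphertext = encrypt_record(nonce, wrapped, b"acct=12345;amount=42.00")
        print(len(ciphertext), "bytes of ciphertext; stored key material is itself encrypted")

    In a real mainframe or HSM deployment the unwrap step happens inside the hardware boundary, so the clear master key is never visible to application code.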

    The company expects to ship the mainframe in the third quarter.


    3:00p
    Here’s How Azure Stack Will Integrate into Your Data Center

    Azure Stack, the turnkey hybrid cloud system that you can now order from server vendors like Dell EMC and Hewlett Packard Enterprise or get as a managed service from providers like Avanade on hardware in your own data center, is intended to be concrete proof of Microsoft’s view that cloud is an operating model and not a place. It’s obviously designed to let you integrate private and public cloud services – but how well will it fit into your existing infrastructure?

    What it gives you is a system that’s not exactly the same as Azure running in an Azure data center but that’s consistent with it, using the same management API and portal, with many of the same services, giving you a unified development model. Think of it as a region in Azure. Not all Azure regions have exactly the same services available, but they all get the core services, ranging from storage, IaaS, and Azure Resource Manager to Key Vault, with Azure Container Service and Service Fabric coming to Azure Stack next year. Some public Azure services may never make it to Azure Stack, because some things only make sense at hyper-scale.
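    Because Azure Stack exposes the same Azure Resource Manager (ARM) REST API as public Azure, code written against one endpoint should in principle work against the other once you swap the management URL and credentials. The rough Python sketch below illustrates that idea; the Azure Stack endpoint shown is the development-kit-style default and, like the API version, is an assumption rather than a value from any particular deployment.

        # Rough sketch of the "same API, different endpoint" idea behind Azure Stack.
        # Endpoint URLs and api-version are illustrative assumptions, not deployment values.
        import requests

        PUBLIC_AZURE = "https://management.azure.com"
        AZURE_STACK = "https://management.local.azurestack.external"  # your operator supplies the real one

        def list_resource_groups(arm_endpoint, subscription_id, bearer_token,
                                 api_version="2016-09-01"):
            """Call the ARM resource-groups API; the request shape is the same for both clouds."""
            url = f"{arm_endpoint}/subscriptions/{subscription_id}/resourcegroups"
            resp = requests.get(url,
                                params={"api-version": api_version},
                                headers={"Authorization": f"Bearer {bearer_token}"})
            resp.raise_for_status()
            return [group["name"] for group in resp.json()["value"]]

        # The same function targets either cloud; only the endpoint and token differ.
        # groups = list_resource_groups(AZURE_STACK, "<subscription-id>", "<access-token>")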

    Compliance, Performance, Data

    You can use Azure Stack to run cloud workloads that you don’t want in the public cloud for compliance reasons – the most common consideration when businesses weigh cloud services. That includes both the Azure services and third-party PaaS and IaaS workloads, such as Cloud Foundry, Kubernetes, Docker Swarm, Mesosphere DC/OS, and open source stacks like WordPress and LAMP, which come as services from the Azure Marketplace rather than bits you download, install, and configure manually. Just as interesting is the ability to use cloud tools and development patterns without the latency of an internet connection – whether you have poor connectivity (on oil rigs and cruise ships, in mines, and other challenging locations) or need to process sensor data in near-real-time.

    The hybrid option is going to be the most powerful. You can use Azure services like IoT Event Hubs and Cognitive Services APIs with serverless Functions and Azure Stack to build an AI-powered system that can recognize authorized workers and unauthorized visitors on your construction site and warn you when someone who’s not certified is trying to use dangerous machinery. Microsoft and Swedish industrial manufacturer Sandvik showed a prototype of that at the Build conference this year.

    That’s the kind of system you’d usually choose to build on a cloud platform, because setting up the IoT data ingestion, data-lake, and machine learning systems you’d need before you could even start writing code would be a complex and challenging project. With Azure Stack, developers can write hybrid applications that integrate with services in the public Azure cloud, keeping the same DevOps process across both environments. That can be a first step toward an eventual migration (if the issue is data residency and a cloud data center opens in the right geography), or it can augment a system you never plan to put in the public cloud.


    You can also use Azure Stack to run existing applications, especially if you want to start containerizing and modernizing them to move from monolithic apps to microservices. “You can connect to existing resources in your data center, such as SQL or other databases via the network gateway that is included in Azure Stack,” Natalia Mackevicius, director of Azure infrastructure management solutions, explained in an interview with Data Center Knowledge.

    But even if you’re using Azure Stack to virtualize existing applications, you’re going to be managing it in a very different way from your existing data center infrastructure – even if that includes Microsoft’s current Azure Pack way of offering cloud-style services on premises.

    Step Away from the Servers

    Azure Stack does integrate with your existing tools. When you set it up, you can choose whether to manage access using Azure Active Directory in a hybrid cloud situation, or Active Directory Federation Services if it’s not going to be connected to the public cloud.

    But you skip most of the setup you would do with ordinary servers. Network configuration happens automatically when you connect the switches in Azure Stack to your network, for example. “Customers complete a spreadsheet with relevant information for integration into their environment, such as the IP space to be used and DNS. When Azure Stack is deployed, the deployment automation utilizes this information to configure Azure Stack to connect into the customer’s network,” Mackevicius said.

    You won’t monitor Azure Stack like a normal server cluster, because much of what an admin would normally do is automated and taken care of by the infrastructure system. But there are REST APIs for monitoring and diagnostics – as well as a System Center Operations Manager management pack for Azure Stack and a Nagios extension – so you can use your usual monitoring tools. Server vendors like HPE are using those APIs to integrate Azure Stack into their own tools, so if you already use HPE OneView, for example, you can manage Azure Stack compute, storage, and networking through that.

    “The switches in Azure Stack can be configured to send alerts and so on via SNMP, for example, to any central network monitoring tools,” Mackevicius said. “Each Azure Stack integrated system also has a Hardware lifecycle host (HLH), where the hardware partner runs software for hardware management, which may include tools for power management.”

    The portal on Azure Stack lets you manage the VMs that you’re running on it (and with the Windows Azure Pack Connector for Azure Stack, you can also manage VMs running on your existing infrastructure on Azure Pack), but not the IaaS service that runs them. “You can use monitoring tools such as System Center Operations Manager or Operations Management Suite to monitor IaaS VMs in Azure or Azure Stack in the same way you monitor VMs in your data centers.”

    Backup and DR

    For backup and DR, you need to think both about tenant workloads and the infrastructure for Azure Stack itself. Microsoft suggests Azure Backup and Azure Site Recovery for replication and failover, but that’s not the only option. “Tenant assets can use existing backup and DR tools such as Veeam, Commvault, Veritas Backup products,” or whatever other systems you already have in place.

    “For [its own] infrastructure, Azure Stack includes a capability which takes a periodic snap of the relevant data and places it on an externally configurable file share,” Mackevicius explained. That stores metadata like subscription and tenant-to-host mapping, so you can recover after a major failure, and you can use regions within your Azure Stack deployment for scale and geo-redundancy.

    Updates on Your Own Schedule

    Updating is also very different. Updates to the Azure services and capabilities will come whenever they’re ready; updates for the Azure Stack infrastructure will come regularly, but those are updates to the infrastructure management layer. Even though Azure Stack runs on Windows Server, you’re not going to sit there testing and applying server patches. What Microsoft calls ‘pre-validated’ updates are delivered automatically, and what you control is when they’re applied, so they happen during your chosen maintenance window.

    Getting updates to be seamless and stress-free is why Microsoft turned to specific hardware partners rather than letting customers build DIY Azure Stack configurations. “Sure, you can get it up and running … but then you need everything to update, and by the way, that needs to happen while all the tenants continue to run,” explained Vijay Tewari of the Azure Stack team. “The thing people fixate on is getting the initial deployment right, but this is about the full operational lifecycle, which is a much bigger proposition.”

    That’s one of the reasons to bring cloud to your data center in the first place. “We have a highly simplified model of operation. We don’t want our customers spending inordinate amount of their resources, time, or money just trying to keep the infrastructure running. That’s not where the value of Azure comes from; it comes from innovative services, whether it’s Service Fabric, whether it is SQL DB, or Azure Machine Learning.”

    Azure Stack gives you the option of taking advantage of that cloud value without having to give up the value you get from your own data centers, but you will be doing things differently.

    4:32p
    Supply Chain Blunder Means Cisco Servers Could Lose Data

    Oops! There’s been a snafu in the Cisco supply chain that has resulted in the shipment of UCS servers that could lose data during a power loss. The problem lies with Seagate hard drives that shipped with an incorrect setting. The drive maker has already issued a mea culpa, which might’ve absolved Cisco of responsibility if not for the fact that Cisco installed and shipped the drives without checking to make sure they were configured properly. Sometimes one hand dirties the other.

    The problem isn’t complicated. It seems that some Serial Attached SCSI 7.2K RPM Large Form Factor drives had drive write cache enabled. As Cisco put it with deadpan simplicity in its field notice: “If drive write cache is enabled during a power loss it can result in loss of data.” The issue affects Cisco UCS C220-M3/M4L, C240-M3/M4L and UCSC-C3X60 servers.

    “Cisco ships all of their hard drives from manufacturing with drive write cache disabled,” Cisco said. “During a quality audit, select units were found to have the drive write cache enabled. The issue has been remediated in the manufacturing process. Users of potentially affected devices are recommended to change the drive cache configuration.”

    The problem is that with drive write cache enabled, data the server believes has been written may still be sitting in the drive’s volatile cache; if power is lost before the cache is flushed, that data is gone. The workaround is to disable write cache, which is simple enough — although when dealing with technology, “simple” is a relative term.

    “Users will generally have two types of setups with their hard drives; just a bunch of disks (JBOD) and redundant array of independent disks (RAID). The procedure to change the drive write cache settings differs depending on the OS and which setup the drive is in. In order to use the correct tool, you will have to know which OS you have and which storage volume setup is configured.”

    The good news is that Cisco includes a chart to determine the proper tool to use, along with step-by-step instructions for resolving the issue. The bad news is that the disabling process could be a little time consuming, especially for those running the affected model UCSC-C3X60, which can house up to 56 disks.
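    For JBOD drives on Linux, the fix essentially amounts to clearing the SCSI write-cache-enable (WCE) bit on each affected disk. The Python sketch below is a rough, audit-only illustration, not Cisco’s documented procedure: it assumes the sdparm utility is installed and that you will follow the field notice’s own chart for your OS and RAID or JBOD configuration before changing anything.

        # Rough audit sketch for Linux JBOD setups -- not Cisco's documented procedure.
        # Assumes the sdparm utility is installed; run as root.
        import glob
        import subprocess

        def write_cache_enabled(device):
            """Return True if the drive reports the WCE (write cache enable) bit set."""
            out = subprocess.run(["sdparm", "--get", "WCE", device],
                                 capture_output=True, text=True, check=True).stdout
            for line in out.splitlines():          # sdparm reports a line starting with "WCE"
                if line.strip().startswith("WCE"):
                    return line.split()[1] == "1"
            return False

        suspect = [dev for dev in sorted(glob.glob("/dev/sd?")) if write_cache_enabled(dev)]
        print("Drives reporting write cache enabled:", suspect or "none")
        # Clearing the bit would then be along the lines of: sdparm --set WCE=0 --save /dev/sdX
        # (verify against the field notice before touching production drives).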

    What a way for an admin to start the week, eh?

    5:09p
    Google’s Quantum Computing Push Opens New Front in Cloud Battle

    Mark Bergen (Bloomberg) — For years, Google has poured time and money into one of the most ambitious dreams of modern technology: building a working quantum computer. Now the company is thinking of ways to turn the project into a business.

    Alphabet Inc.’s Google has offered science labs and artificial intelligence researchers early access to its quantum machines over the internet in recent months. The goal is to spur development of tools and applications for the technology, and ultimately turn it into a faster, more powerful cloud-computing service, according to people pitched on the plan.

    A Google presentation slide, obtained by Bloomberg News, details the company’s quantum hardware, including a new lab it calls an “Embryonic quantum data center.” Another slide on the software displays information about ProjectQ, an open-source effort to get developers to write code for quantum computers.

    “They’re pretty open that they’re building quantum hardware and they would, at some point in the future, make it a cloud service,” said Peter McMahon, a quantum computing researcher at Stanford University.

    These systems exploit the behavior of atoms and other tiny particles to solve problems that traditional computers can’t handle. The technology is still emerging from a long research phase, and its capabilities are hotly debated. Still, Google’s nascent efforts to commercialize it, and similar steps by International Business Machines Corp., are opening a new phase of competition in the fast-growing cloud market.

    Jonathan DuBois, a scientist at Lawrence Livermore National Laboratory, said Google staff have been clear about plans to open up the quantum machinery through its cloud service and have pledged that government and academic researchers would get free access. A Google spokesman declined to comment.

    Providing early and free access to specialized hardware to ignite interest fits with Google’s long-term strategy to expand its cloud business. In May, the company introduced a chip, called Cloud TPU, that it will rent out to cloud customers as a paid service. In addition, a select number of academic researchers are getting access to the chips at no cost.

    While traditional computers process bits of information as 1s or zeros, quantum machines rely on “qubits” that can be a 1, a zero, or a state somewhere in between at any moment. It’s still unclear whether this works better than existing supercomputers. And the technology doesn’t support commercial activity yet.
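    As a loose illustration of that “in between” state (a toy numerical model, not how real quantum hardware is programmed), a single qubit can be written as two complex amplitudes whose squared magnitudes give the probabilities of reading out a 0 or a 1:

        # Toy model of one qubit as a pair of complex amplitudes -- purely illustrative.
        import numpy as np

        zero = np.array([1, 0], dtype=complex)   # definite 0
        one = np.array([0, 1], dtype=complex)    # definite 1
        qubit = (zero + one) / np.sqrt(2)        # equal superposition of both

        probabilities = np.abs(qubit) ** 2       # Born rule: |amplitude|^2
        print("P(read 0) =", probabilities[0])   # ~0.5
        print("P(read 1) =", probabilities[1])   # ~0.5

        # A measurement forces the qubit to a definite 0 or 1 with those probabilities.
        outcome = np.random.choice([0, 1], p=probabilities)
        print("measured:", outcome)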

    Still, Google and a growing number of other companies think it will transform computing by processing some important tasks millions of times faster. SoftBank Group Corp.’s giant new Vision fund is scouting for investments in this area, and IBM and Microsoft Corp. have been working on it for years, along with startup D-Wave Systems Inc.

    In 2014, Google unveiled an effort to develop its own quantum computers. Earlier this year, it said the system would prove its “supremacy” — a theoretical test of performing on par with, or better than, existing supercomputers — by the end of 2017. One of the presentation slides viewed by Bloomberg repeated this prediction.

    Quantum computers are bulky beasts that require special care, such as deep refrigeration, so they’re more likely to be rented over the internet than bought and put in companies’ own data centers. If the machines end up being considerably faster, that would be a major competitive advantage for a cloud service. Google rents computing power by the minute. In theory, quantum machines would trim computing times drastically, giving a cloud service a huge effective price cut. Google’s cloud offerings currently trail those of Amazon.com Inc. and Microsoft.

    Earlier this year, IBM’s cloud business began offering access to quantum computers. In May, it added a 17 qubit prototype quantum processor to the still-experimental service. Google has said it is producing a machine with 49 qubits, although it’s unclear whether this is the computer being offered over the internet to outside users.
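    The 49-qubit figure is not arbitrary: simulating n qubits exactly on a classical machine means tracking 2^n complex amplitudes, and somewhere around 50 qubits that state vector stops fitting in any single machine’s memory. A quick back-of-the-envelope calculation, assuming 16 bytes per complex amplitude:

        # Back-of-the-envelope: memory to hold the full state vector of n qubits,
        # assuming 16 bytes (two 64-bit floats) per complex amplitude.
        BYTES_PER_AMPLITUDE = 16

        for n in (17, 30, 49):
            size_bytes = (2 ** n) * BYTES_PER_AMPLITUDE
            print(f"{n:2d} qubits -> {size_bytes:.2e} bytes of state vector")

        # 17 qubits: ~2 MiB (trivial); 30 qubits: ~16 GiB (a large workstation);
        # 49 qubits: ~9.0e15 bytes, roughly 8 PiB -- far beyond any single machine's memory,
        # which is part of why ~50 qubits is often cited as the edge of classical simulation.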

    See also: The Machine of Tomorrow Today — Quantum Computing on the Verge

    Experts see that benchmark as more theoretical than practical. “You could do some reasonably-sized damage with that — if it fell over and landed on your foot,” said Seth Lloyd, a professor at the Massachusetts Institute of Technology. Useful applications, he argued, will arrive when a system has more than 100 qubits.

    Yet Lloyd credits Google for stirring broader interest. Now, there are quantum startups “popping up like mushrooms,” he said.

    One is Rigetti Computing, which has netted more than $69 million from investors to create the equipment and software for a quantum computer. That includes a “Forest” cloud service, released in June, that lets companies experiment with its nascent machinery.

    Founder Chad Rigetti sees the technology becoming as hot as AI is now, but he won’t put a timeline on that. “This industry is very much in its infancy,” he said. “No one has built a quantum computer that works.”

    The hope in the field is that functioning quantum computers, if they arrive, will have a variety of uses such as improving solar panels, drug discovery or even fertilizer development. Right now, the only algorithms that run on them are good for chemistry simulations, according to Robin Blume-Kohout, a technical staffer at Sandia National Laboratories, which evaluates quantum hardware.

    A separate branch of theoretical quantum computing involves cryptography — ways of transferring data with much better security than current machines. MIT’s Lloyd discussed these theories with Google founders Larry Page and Sergey Brin more than a decade ago at a conference. The pair were fascinated and the professor recalls detailing a way to apply quantum cryptography so people could do a Google search without revealing the query to the company.

    A few years later, when Lloyd ran into Page and Brin again, he said he pitched them on the idea. After checking with the business side of Google, the founders said they weren’t interested because the company’s ad-serving systems relied on knowing what searches people do, Lloyd said. “Now, seven or eight years down the line, maybe they’d be a bit more receptive,” he added.

    5:42p
    Three Ways to Generate Profit With the Data You Already Have

    Andrew Roman Wells is the CEO of Aspirenta, and Kathy Williams Chiang is VP, Business Insights, at Wunderman Data Management. 

    Build it and they will come. That is the view many organizations maintain about their data lakes and data warehouses. Companies are rapidly investing in systems and processes to retain business data that they know is valuable but have no clue what to do with. Even the government collects vast amounts of data without specific plans for using the information at the time of collection.

    This trend only accelerates as the amount of data being produced continues to escalate. Today, it is estimated that human knowledge is doubling every 12 to 13 months, and IBM estimates that with the build-out of the “Internet of Things,” knowledge will double every 12 hours.

    Most organizations search for value in their data by throwing teams of data scientists at the various stores of data collected, hoping to find insights that are commercially viable. This approach typically results in endless hours of digging for insights, and if any are found, they rarely see the light of day. In order to monetize your data, you need a different approach, one that starts by turning the process on its head. We recommend three approaches to help you monetize your data:

    1. It’s About the Decision. A common approach when starting an analytics project is to ask what questions you would like the analysis to answer. But if your goal is to drive actionable analytics that monetize your data, you need to start at a different point: the decisions you would like the analytics to support. This approach, termed Decision Architecture, is radically different from conventional methods. Understanding the decisions you would like to support drives the direction for the rest of the analytical exercise, including the type of data and analytics needed to support the decision. The decisions you focus on determine the analytics your team will undertake, which can range from simple metrics like ROI to more sophisticated models such as a propensity or churn model.

    2. Align Decisions to Business Objectives. Knowing the goal is to provide analytics that support value-driving decisions, you need to make sure those goals align with overall corporate objectives. By mapping your decisions to the key business drivers that achieve corporate objectives, you chart a clear path to actionable analytics.

    3. Economic Value and Decision Theory. In order to monetize your data, adding economic value to your decisions through the use of data science and decision theory is a must. Whereas data science helps you generate insights from your data about actions you can take, decision theory helps you structure your decisions for maximum impact and feasibility. Economic value captures both the quantitative and qualitative aspects of an action and can come in various forms, including revenue and profitability, market growth, or process efficiency. The goal of economic value analysis is to provide the decision maker with an understanding of the economic trade-off among the set of decisions available to them. Decision theory is then applied to help decision makers select the best choice to achieve their objectives. Structuring the decision criteria into a decision matrix laying out anticipated acts, events, outcomes, and payoffs (see the sketch below) helps managers see more clearly the full scope of their proposed actions and make more objective choices, guarding against hidden or implicit cognitive biases. Cognitive biases arise where an individual holds a view of a situation that is based on prior subjective experiences but may not be completely consistent with current reality. Confirmation bias, for example, occurs when the inclination is to look for information and analytics that support pre-existing beliefs or goals.
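    To make the decision-matrix idea concrete, here is a small illustrative Python sketch; the acts, event probabilities, and payoffs are made up for the example, and each act is scored by its probability-weighted economic value.

        # Illustrative decision matrix: acts vs. uncertain events, scored by expected payoff.
        # All acts, probabilities, and payoffs are invented for the example.
        events = {"demand_grows": 0.5, "demand_flat": 0.3, "demand_shrinks": 0.2}

        # Payoff (economic value, in $ thousands) of each act under each event.
        payoffs = {
            "launch_retention_campaign": {"demand_grows": 400, "demand_flat": 150, "demand_shrinks": -50},
            "cut_prices":                {"demand_grows": 250, "demand_flat": 100, "demand_shrinks":  20},
            "do_nothing":                {"demand_grows": 100, "demand_flat":   0, "demand_shrinks": -80},
        }

        def expected_value(act):
            """Probability-weighted payoff for one act across all events."""
            return sum(events[event] * payoffs[act][event] for event in events)

        for act in sorted(payoffs, key=expected_value, reverse=True):
            print(f"{act:28s} expected value: {expected_value(act):7.1f} $k")

        # Laying the options out this way makes the trade-offs explicit and guards against
        # simply picking the act that confirms a pre-existing belief (confirmation bias).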

    If you focus your analytics on your decisions, you are already ahead of most analytical practitioners. Creating alignment from your decisions to the business drivers that achieve your corporate objectives makes your analytics actionable and relevant. Assessing the economic value of your decision choices and employing decision theory to assist the decision maker with making the best possible choice will improve the value of your decisions. These three practices will drive up the value of your analytics and enable you to monetize your data.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:35p
    Report: VMware, AWS Mulling Joint Data Center Software Product

    Amazon Web Services and VMware are in talks about potentially teaming up on developing a data center software product, The Information reported, citing anonymous sources.

    No details about what that product would be are available at this point. VMware’s value for AWS is its enormous enterprise data center install base, while VMware’s big business focus is currently on helping customers create and run environments that combine their on-premises data centers with one or more public cloud providers. Its VP John Gilmartin told us as much in a recent interview for The Data Center Podcast.

    All that makes it very likely that the product being discussed would focus on hybrid cloud capabilities. The two companies already have a partnership in place, announced last year, whose goal has been to help enterprises easily integrate their existing VMware environments with AWS. Under that partnership, VMware will essentially sell a set of tools that will enable its customers to manage AWS the same way they manage VMware on-premises. VMware CEO Pat Gelsinger said at the time that this would be “the primary public cloud offering of VMware.”

    Last year’s announcement was one of the rare acknowledgements by AWS that hybrid cloud is the preferred way forward for most enterprise IT shops. Its messaging has traditionally treated hybrid cloud as a stepping stone in transition to an all-cloud infrastructure.

    Read more: Amazon Wants to Replace the Enterprise Data Center

    Microsoft’s hybrid cloud strategy now rests on Azure Stack, its on-premises software stack that integrates with its public cloud and gives users some public cloud-like features in their own data centers. The product started shipping just this month.

    Read more: Here’s How Azure Stack Will Integrate Into Your Data Center

    Google’s boldest move in the area to date has been its recently announced partnership with Nutanix, one of the hottest hyperconverged infrastructure vendors in the market. The partners promise customers an easy way to “fuse” their on-premises Nutanix infrastructure with Google Cloud Platform.

    7:54p
    Sponsored: Groupon – A West Coast Hyperscale Cloud Case Study and Success Story

    Imagine working with a data center and hyperscale provider that helps you get up and running with all the right services and infrastructure components in just 41 days. That means delivering the entire ecosystem, serving users, and running applications, all in less than two months.

    This is the story of RagingWire and Groupon.

    It’s worthwhile to note that as more users connect to the cloud, organizations will find new ways to deliver services and offerings via a cloud solution. Cloud computing is an ever-changing, highly dynamic platform. This is why it’s critical to work with a data center partner that can take you to the cloud – or get you there in a hybrid fashion. Gartner recently pointed out that more than $1 trillion in IT spending will be directly or indirectly affected by the shift to cloud during the next five years. The market for cloud services continues to grow, say analysts, making cloud computing one of the most disruptive forces of IT spending today.

    At the heart of any organization’s IT infrastructure is a data center that is capable of scaling with the needs of the business. However, not all data centers are created equal. Furthermore, not every data center can customize a solution to best fit an organization’s IT needs. And, in many situations, the physical location of the data center can be a make-or-break factor for an organization because of costs and inherent location risks. In this white paper from RagingWire, we go on a journey with a high-profile hyperscale cloud company that went through the West Coast data center selection process: Groupon.

    Through it, we’ll get a view into the rapidly-growing company’s selection process, considerations, challenges, what the West Coast data center market looks like, and the real value brought by a powerful data center partner.

    Northern California – Making the right data center decision

    In looking at a Northern California data center, it’s essential to understand the choices that need to be made in selecting the right solution and partner. This guide will take a look at the various components a good data center can provide and how real customers are leveraging these data center resources. This includes power, cooling capacity, physical footprint, facility amenities, and, very importantly, location. For example, companies with a sizable footprint in Silicon Valley might look for a data center in the immediate vicinity, but the region is at risk of a seismic event. Locations in Sacramento, on the other hand, can still get you the data center you need, close enough to the Bay Area but outside the high-risk earthquake zone.

    Here’s the other truth: data center deployments in the West Coast area continue to increase at a staggering pace. A recent report from JLL Research shows how new initiatives are pushing demand for space through the roof in many North American markets, causing demand to spread out across both primary and secondary markets alike.

    For the data center and cloud professional, the decision process is now more extensive than ever before. And, a big part of that selection process when looking at West Coast data centers is to work with a partner who has available space and power and can help you reduce costs and offset risks while delivering superior onsite operations and customer service. That is why it’s critical to understand the selection process, and see where real-world use-cases impact true business results. Check out the white paper to get the details on modern selection criteria and which considerations are critical for you to make the right decision.

    For an organization such as Groupon, it revolved around service, delivery, and a good partnership.

    Groupon – A Case Study

    As a growing organization, Groupon knew that it needed to partner with the right type of data center provider to align technology strategies with their evolving business goals. Their provider needed to be customer-focused, with capabilities around scale, security, agility, and support. So – why did they go with RagingWire?

    • Needed to stay on the West Coast
    • Latency was a big concern – and RagingWire was able to overcome that
    • Cost of power, rent, and the proximity to the Bay Area
    • Carrier-neutrality
    • Having access to multiple, industry-leading carriers and cloud providers

    By leveraging the right type of data center partner, Groupon was able to go from deployment to service delivery in an incredibly short amount of time. RagingWire provided the service and infrastructure that allowed Groupon to set up its entire ecosystem, serving users and delivering applications, in just 41 days.

    “As a company that’s seen incredible growth throughout our seven-year history, it was important for us to find a wholesale data center provider that could meet our requirements for scalability, customizable high-density power, cooling containment, ISP neutrality and physical security,” said Groupon’s Director of Global Data Center Operations, Harmail Chatha. “RagingWire provided us with exactly what we needed in a timely and efficient manner, helping to ensure that we are able to support our increased traffic demands as our business grows.”

    Download this whitepaper to learn about the data center selection process, the research that Groupon did to select their data center partner, and why hyperscale platforms are the foundation for your business and the future cloud model.

