Data Center Knowledge | News and analysis for the data center industry
Tuesday, February 21st, 2017
4:44p | Trump Team Sounding Out Tech Firms Ahead of Delayed Cyber Order
Nafeesa Syeed (Bloomberg) — The Trump administration has quietly consulted technology industry leaders ahead of issuing a delayed executive order on cybersecurity, even as executives have clashed with the White House over policies including the president’s efforts to limit entry to the U.S.
President Donald Trump delayed the signing of a cybersecurity directive that had been planned for Jan. 31 just as legal challenges stalled his effort to ban travel to the U.S. by citizens of seven predominantly Muslim countries. While no new date has been set for signing the cyber order, executives attending a security conference in San Francisco this week said the administration has sought input to help smooth the rollout.
“People associated with the administration have reached out for feedback to myself and other experts in the industry as they’re thinking through the strategy for cybersecurity and more,” Dmitri Alperovitch, co-founder and chief technology officer of CrowdStrike Inc., said in an interview. CrowdStrike was hired by the Democratic National Committee last year to investigate Russia’s breach of its computer systems.
Michael Brown, a retired Navy rear admiral who’s vice president and general manager of cybersecurity company RSA’s Global Public Sector unit, said his and other companies have conveyed what the Trump administration should concentrate on, including public-private partnerships to bolster defenses.
‘Working Together’
“We have been working our priorities in talking to the administration,” Brown told reporters in San Francisco this week. “We’re trying to influence the conversation around the role the private sector can have with respect to policy and, in fact, engagement — the ability to respond to the threats that are out there, working together.”
After high-profile breaches at the Office of Personnel Management and the Pentagon during the Obama administration, Trump’s initial draft of the cyber order would have held government agency heads personally responsible for securing their departments’ computers against hackers, according to a Trump aide who asked for anonymity to describe it.
House Homeland Security Committee Chairman Michael McCaul, a Republican from Texas, said the administration still needs to define its cyber policy.
“I’ve been urging the administration to develop a new national cybersecurity strategy as soon as possible,” McCaul said Feb. 14 in San Francisco. “We are feeling tectonic shifts on the virtual ground beneath us, and our current cyber plans just won’t cut it.”
He and others said that based on leaked drafts they expect the administration to order a broad, government-wide cybersecurity assessment, as Trump promised during his presidential campaign.
Some in the private sector are cautious about making judgments on Trump’s cyber policies — or even being involved in the process — before the executive order comes out. There was also unease when initial leaked versions of the order had few references to the private sector and its role.
Adobe, Microsoft
The Information Technology Industry Council, which represents big companies including Adobe Systems Inc., Facebook Inc. and Microsoft Corp., is trying to cultivate ties with the administration. The council sent Trump’s transition team a list of recommendations from its members on how to improve cybersecurity at federal agencies. Based on draft versions of the executive order they say they’ve seen, the directive has “been getting better and has evolved,” said Pamela Walker, the council’s senior director for federal public sector technology.
“Everybody is waiting for the executive order” to see how it affects government agencies and the companies they work with, Walker said in a phone interview. “We thought it was heading in the right direction and aligned with things we’ve been promoting.”
Those drafts promise some continuity with former President Barack Obama’s approach, according to Lisa Monaco, who served as Obama’s homeland security adviser. Monaco cited a focus on an open and innovative internet that’s “prioritized for commerce” as well as on information-sharing.
“The draft I saw, the preamble, you basically could lift that entire paragraph out of President Obama’s cyber strategy,” she said Feb. 14 in a speech in San Francisco.
Technology firms have a complicated relationship with Trump. Many of the sector’s executives and employees helped raise money for his campaign rival, Hillary Clinton, and more than 120 companies, from Apple Inc. to Zynga Inc., filed an impassioned legal brief condemning his immigration order.
Those companies and others remain eager to see who Trump names to senior cybersecurity roles. Some advisers from the Obama era have stayed on, but key positions remain unfilled both within the White House and at some federal agencies. Daniel Lerner, a staff member of the Senate Armed Services Committee, said a big question is who will steer Trump’s policies, especially in the Defense Department.
Mattis Adviser
“There’s been some names that have been announced, but I don’t think we’ve received many formal appointments yet, especially on cyber,” he said in an interview. He said a key appointment will be who is tapped as the principal cyber adviser to Defense Secretary James Mattis.
The best-known figures working on cybersecurity in the administration so far are former New York Mayor Rudy Giuliani, who Trump has said would lead a committee to work with private-sector experts, and Thomas Bossert, the president’s assistant for homeland security. Monaco said she spent a dozen hours during the transition with Bossert, her replacement, discussing issues such as cybersecurity and the need to replace outdated federal computer systems. Bossert worked on the National Security Council during President George W. Bush’s administration.
CrowdStrike’s Alperovitch said Bossert understands cyber “really well” and realizes the government needs the private sector to combat cyber threats, so the outreach to companies is a “very encouraging sign.”

5:00p | Multi-Cloud Won’t Work Without Replication; How Do We Get There?
Avinash Lakshman is CEO of Hedvig.
The journey to the cloud is well underway. Market efficiencies, economics and technology have advanced sufficiently, and it is inevitable that virtually all organizational functions and technology infrastructure will leverage public clouds in some capacity. In fact, a recent Gartner forecast expects that by 2020 more than $1 trillion in IT spending will be either directly or indirectly affected by the shift to the cloud. Gartner notes that “this will make cloud computing one of the most disruptive forces of IT spending since the early days of the digital age.”
I don’t disagree, but there are consequences; and here’s one of the biggest: moving all your data and applications to a single public cloud provider represents a massive vendor lock-in. Even moving just a subset of your data and applications introduces significant financial and supplier risk. The obvious solution is to leverage multiple public cloud providers.
That leads to a different challenge: How do you overcome the inherent portability, locality, and availability constraints of moving data among clouds?
Reaping the Business Benefits of Multiple Clouds Requires Cross-Cloud Replication
As organizations move to multiple public clouds, they in turn will need a way to synchronize data seamlessly across multiple providers — cross-cloud replication makes this possible. Cross-cloud replication enables organizations to move applications easily among different cloud sites and, as importantly, cloud providers. It’s the missing piece that makes the multi-cloud world we hear and read so much about a reality today. It ensures that no matter where you run your app, it will have local access to its data.
Why is this important? The promise of a multi-cloud future is one in which you’re able to move your application dynamically based on business requirements. If you can replicate your data across all of the public cloud services, then you can eliminate cloud vendor lock-in by employing cloud arbitrage, reverse auction, and follow-the-sun scenarios. The bottom line is you run your application in the cloud that provides the best performance, economics, availability, or some combination of these.
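To make the arbitrage idea concrete, here is a minimal sketch of the placement decision that becomes possible once data is replicated to every candidate cloud: pick the cheapest provider that still meets a latency target. The provider names, prices, and latency figures are hypothetical illustrations, not vendor quotes.

```python
# Minimal sketch of cloud arbitrage: choose the cheapest cloud that meets a
# latency target, assuming the app's data is already replicated to every
# candidate. All names and numbers are hypothetical.
from dataclasses import dataclass
from typing import List


@dataclass
class CloudOption:
    name: str
    price_per_hour: float   # USD per instance-hour
    p99_latency_ms: float   # observed 99th-percentile latency to end users


def choose_cloud(options: List[CloudOption], max_latency_ms: float = 100.0) -> CloudOption:
    """Return the lowest-cost cloud whose latency meets the target."""
    eligible = [o for o in options if o.p99_latency_ms <= max_latency_ms]
    if not eligible:
        raise ValueError("no cloud meets the latency target")
    return min(eligible, key=lambda o: o.price_per_hour)


if __name__ == "__main__":
    candidates = [
        CloudOption("public-cloud-a", 0.90, 80.0),
        CloudOption("public-cloud-b", 0.70, 85.0),
        CloudOption("private-dc-1", 0.95, 40.0),
    ]
    best = choose_cloud(candidates)
    print(f"Run the workload on {best.name} at ${best.price_per_hour:.2f}/hour")
```

The same loop could run continuously, re-evaluating placement as prices or latencies change, which is essentially the follow-the-sun and reverse-auction behavior described above.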
Four Trends Will Make it a Reality Within Two Years
We’ve reached a point in the sophistication and evolution of IT where cross-cloud replication is necessary to realize a multi-cloud environment. In fact, I predict cross-cloud replication will be commonplace among medium to larger organizations within two years. This accelerated adoption is fueled by four converging trends:
- Infrastructure evolutions in private cloud. First, let’s start with a simple definition of a private cloud: virtualization of some form (be it VM- or container-based) combined with automation and self-service. Advances in microprocessor, memory, and storage architecture (whether HDD or SSD) make the virtualization side of private cloud more cost effective. Couple these with advances in cloud orchestration tools from Docker, Kubernetes, Mesos, and OpenStack and you have the automation and self-service. Building a private cloud with these elements creates an “AWS-like” foundation, and cross-cloud replication then allows companies to move apps and services across private cloud data centers with ease.
- Broad usage of multiple public cloud providers. Amazon Web Services (AWS) is the 800-pound gorilla in this space so far, running close to $10 billion a year in revenue. But Microsoft Azure and Google Cloud Platform (GCP) have made impressive strides in the last several years. Wanting to avoid vendor lock-in, organizations will augment private clouds with two or more public cloud providers. In fact, a recent survey shows the average enterprise using six clouds (three public, three private). These organizations will either need cross-cloud replication to keep data synchronized or risk the onerous task of lifting and shifting infrastructure silos to a multitude of public clouds.
- The emergence of DevOps talent and processes. While still a relatively scarce skill set, DevOps is no longer the unicorn it used to be. Even mainstream, so-called legacy organizations now have DevOps teams and culture, not just the cloud- and digital-native companies. Another recent survey found that DevOps adoption climbed to 74 percent in 2016 from 66 percent in 2015, and to 81 percent among enterprises (organizations with 1,000 or more employees). DevOps talent ensures companies have the know-how to build, ship, and run applications across these evolving private and public cloud trends. Cross-cloud replication gives these apps access to data regardless of where they run.
- The commercialization of AI and machine learning technologies. We’re now seeing an explosion of interest in and development of artificial intelligence and machine learning for commercial, rather than research, purposes. In fact, large organizations like Facebook, Google, Microsoft, IBM, and Intel are donating open source machine learning code so organizations can better use the intelligence in their own businesses. Machine learning expands DevOps’ value by automating decisions like where applications and services should run. Need to move an app for performance or cost reasons? Machine learning can detect the need and make the decisions while cross-cloud replication ensures data portability.
Because of the above trends, cross-cloud replication is not a question of “if” so much as “when.” Whether this replication arrives in full form in four or 24 months is difficult to say, but the true software-defined storage (SDS) we now see available in the marketplace is a good start.
A Universal Data Plane
Inherent to its design, SDS is already decoupled from the underlying hardware, the intelligence lives in programmable software, and the necessary data protection mechanisms are there. More recent SDS solutions even give each application its own unique policy, such as which cloud or clouds it should run in. But deploying a software version of your old storage array is not the right approach. You need to deploy SDS in an architecture that spans traditional storage tiers, runs in public clouds, and integrates with any of the virtualization and workload infrastructures powering your cloud. Deploying SDS in this architecture is a fundamentally different approach from how enterprises have handled storage for the last 40 years.
So what is the right architecture? I call it a Universal Data Plane.
A Universal Data Plane is a single, programmable data management layer spanning storage tiers, workloads and clouds. It replaces the need for disparate SAN, NAS, object, cloud, backup, replication, and data protection technologies. As true software-defined storage, it can be run on commodity servers in private clouds and as instances in public clouds. A Universal Data Plane also dramatically simplifies operations by plugging into modern orchestration and automation frameworks like Docker, Kubernetes, Mesos, Microsoft, OpenStack and VMware.
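As a rough illustration of the per-application policies mentioned above, the sketch below models a replication policy declared against such a data plane. The class, field names, and cloud identifiers are hypothetical and do not correspond to any vendor's actual API.

```python
# Hypothetical sketch of a per-application replication policy for a
# universal data plane. The classes and fields are illustrative only and
# do not reflect a real vendor API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ReplicationPolicy:
    app_name: str
    replication_factor: int = 3                              # copies kept in total
    target_clouds: List[str] = field(default_factory=list)   # sites where replicas may live
    rpo_seconds: int = 60                                     # tolerated data-loss window

    def validate(self) -> None:
        # Require enough distinct sites to place each replica separately.
        if self.replication_factor > len(self.target_clouds):
            raise ValueError("need at least as many target clouds as replicas")


# Example: an order-processing app whose data is kept in one private data
# center and two public clouds, so the app can be restarted in any of them.
policy = ReplicationPolicy(
    app_name="order-service",
    replication_factor=3,
    target_clouds=["private-dc-1", "public-cloud-a", "public-cloud-b"],
    rpo_seconds=30,
)
policy.validate()
print(f"{policy.app_name}: {policy.replication_factor} replicas across {policy.target_clouds}")
```

The key design point is that the policy travels with the application rather than with any one storage array or cloud, which is what lets the orchestration layer restart the app wherever a replica of its data already exists.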

Perhaps most importantly, a Universal Data Plane veers away from hyperconvergence. By definition it’s a decoupling or disaggregation of distinct tiers in your IT stack. Rather than collapsing application and orchestration layers into the same physical solution, a Universal Data Plane remains its own unique, software-defined storage layer. It provides APIs into the VM, container, cloud, and orchestration technologies. As such, it’s the right layer to provide cross-cloud replication. Tight coupling, as found in hyperconverged solutions, cannot provide this multi-cloud foundation.
If you’re looking to go multi-cloud, then the good news is that the necessary software-defined storage and cross-cloud replication technologies make this a reality today. The concept of a Universal Data Plane is not science fiction. It’s the next logical step in your cloud journey.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

9:35p | Google Launches Bare-Metal Cloud GPUs for Machine Learning
Google has rolled out in beta a new cloud service that allows users to rent Nvidia GPUs running in Google data centers for machine learning and other compute-heavy workloads.
While relatively few companies outside of a small group of web giants like Google itself use machine learning in production, there’s a lot of development work going on in the field, with computer scientists building and training machine learning algorithms and companies mulling various business cases for using the capability. Training these systems requires a lot of computational horsepower, and, at least today, companies have found that harnessing the power of many GPUs working in parallel is the best way to get it.
The problem is that building, powering, and cooling a GPU cluster is far from trivial – not to mention expensive – which makes renting one an attractive option, especially at the experimental stage, where most companies are with machine learning. It’s a business opportunity for cloud providers, who already have experience with this kind of infrastructure and the resources to offer it as a service.
Google’s biggest rivals in cloud infrastructure services, Amazon and Microsoft, launched cloud GPU services of their own earlier. Amazon Web Services has been offering its P2 cloud VM instances with Tesla K80 GPUs attached since last September, and Microsoft Azure launched its N-Series service, also powered by Tesla K80 chips, in December.
The same GPUs are now available from Google at 70 cents per GPU per hour in the US and 77 cents in Asia and Europe. Google’s pricing beats Amazon’s, whose most basic single-GPU P2 instance, hosted in the US, costs 90 cents per hour. Microsoft doesn’t offer per-hour pricing for its GPU-enabled VMs, charging instead $700 per month for the most basic configuration of N Series.
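For a rough sense of how those price points compare, the back-of-the-envelope calculation below converts the quoted hourly rates into a monthly cost at full utilization (about 730 hours per month). It ignores sustained-use and reserved-capacity discounts, so treat the results as approximations only.

```python
# Back-of-the-envelope monthly cost for one GPU running around the clock,
# using the per-hour prices quoted above (no discounts applied).
HOURS_PER_MONTH = 730  # roughly 24 hours x 30.4 days

prices_per_hour = {
    "Google (US)": 0.70,
    "Google (Asia/Europe)": 0.77,
    "AWS P2, single GPU (US)": 0.90,
}

for provider, hourly in prices_per_hour.items():
    print(f"{provider}: ~${hourly * HOURS_PER_MONTH:,.0f} per month")

# Azure N-Series is quoted as a flat monthly price for its basic configuration.
print("Azure N-Series (basic): ~$700 per month")
```

At continuous utilization the quoted rates work out to roughly $510 per month for Google in the US, about $560 in Asia and Europe, and around $660 for the basic AWS P2 instance, which puts all three per-hour offerings below Azure's flat $700 figure.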
What type of infrastructure will dominate once the machine learning space matures is unclear at the moment. Dave Driggers, whose company Cirrascale also provides bare-metal GPU-powered servers for machine learning as a cloud service, told us in an interview earlier that he believes a hybrid infrastructure is most likely to become common, where companies use a mix of on-premise computing and cloud services.
But, as one of Cirrascale’s customers also told us, even GPUs themselves may at some point be replaced by a more elegant solution that requires less power.
Read more: This Data Center is Designed for Deep Learning