Data Center Knowledge | News and analysis for the data center industry
Friday, May 19th, 2017
12:00p | Report: AI Tells AWS How Many Servers to Buy and When

Internet giants Google, Microsoft, Amazon, and Facebook use Machine Learning to enhance their services for end users, such as real-time search suggestions, face recognition in photos, voice commands, and cloud services for software developers, but they also use Artificial Intelligence to optimize their internal operations. Google revealed in 2014 that it uses Machine Learning to improve the energy efficiency of its data centers, and Amazon’s use of AI to manage warehouses for its e-commerce business hasn’t been a secret since at least 2015.
So, it comes as no surprise that Amazon Web Services, the company’s cloud services arm, also applies Machine Learning to one of the toughest puzzles in data center management: capacity planning. AWS uses Machine Learning to forecast cloud data center capacity demand and to figure out where on the planet to store additional data center components so that it can expand capacity quickly where and when it’s needed.
AWS CEO Andy Jassy revealed the practice in front of an audience at this week’s Foundations of Science Breakfast by the Pacific Science Center, GeekWire reported. The company buys an enormous number of servers on a regular basis. GeekWire quotes Jassy:
“One of the least understood aspects of AWS is that it’s a giant logistics challenge, it’s a really hard business to operate.”
“Every single day we add enough new servers to have handled all of Amazon as a $7 billion global business.”
The report doesn’t provide much detail about what kinds of input data the company’s Machine Learning algorithm uses to forecast demand, but one of the primary data sources appears to be its cloud sales team. From the GeekWire report:
For example, it can pick up signals from the process its sales teams follow (enterprise sales cycles are notoriously long) to forecast demand. A lot of new customers like to start slow on AWS and then accelerate their usage as they see more benefits, Jassy said, which can lead to spikes in demand if they move faster than anticipated.
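Neither Jassy nor the report describes the model itself, but the kind of demand forecasting described above can be illustrated with a deliberately simple technique. Below is a minimal sketch using exponential smoothing over an invented weekly demand series; the data, smoothing factor, and function are all hypothetical, and this is not AWS's actual (undisclosed) method.

```python
# Hypothetical illustration of capacity-demand forecasting using simple
# exponential smoothing. AWS's real model and inputs are not public;
# the demand history and alpha below are invented for the example.

def smooth_forecast(demand, alpha=0.5):
    """Return a forecast for the next period, weighting recent demand
    more heavily (higher alpha = faster reaction to spikes)."""
    forecast = demand[0]  # seed with the first observation
    for d in demand:
        forecast = alpha * d + (1 - alpha) * forecast
    return forecast

# Invented weekly server-demand history (thousands of servers)
history = [10.0, 11.5, 12.0, 14.5, 15.0]
print(round(smooth_forecast(history), 2))  # → 13.97
```

A real capacity planner would layer in the sales-pipeline signals Jassy mentions, seasonality, and lead times for hardware delivery, but the core idea of projecting forward from observed usage is the same.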
3:00p | As WannaCrypt Recovery Continues, Analysts Back Microsoft’s Leader

Brought to you by IT Pro
As the global WannaCry ransomware attack began spreading to computer systems around the world on May 12, Microsoft president Brad Smith quickly responded by publicly blaming part of the problem on businesses which don’t keep up with critical security patches, leaving their systems vulnerable to attackers.
Smith’s comments came in response to critics who had blamed Microsoft for leaving systems vulnerable in the first place by not doing enough sooner to assist customers and for ending security patches for older operating systems such as Windows XP and Windows Server 2003. Many enterprises, including hospitals and a wide range of businesses, still rely on systems running older operating systems or embedded operating systems, leaving them open to hackers and ransom attacks.
The problem with that argument, according to several industry analysts who spoke with ITPro, is that Smith and Microsoft are right this time to criticize IT administrators and their companies that are failing to keep their systems patched and updated.
See also: Microsoft Faulted Over Ransomware While Shifting Blame to NSA
Smith said the attack provided graphic evidence about “the degree to which cybersecurity has become a shared responsibility between tech companies and customers.” The spread and disruption of WannaCry “is a powerful reminder that information technology basics like keeping computers current and patched are a high responsibility for everyone, and it’s something every top executive should support.” But Smith didn’t stop there. He also blasted the way government agencies have handled sensitive security disclosures.
That’s not wrong at all, Charles King, principal analyst with research firm Pund-IT, told ITPro.
“For all intents and purposes, I’m with Brad Smith on this,” said King. “Microsoft sent customers a ‘critical’ advisory along with a patch to fix the vulnerability on March 14, a month before The Shadow Brokers released the attack vulnerability that ransomware hackers exploited.” In addition, Microsoft took the unusual step of recently releasing security updates to address the vulnerability for Windows XP and Server 2003, even though both are years past their Extended Support lifetimes.
“It’s hard to imagine what more Microsoft could have done,” said King. “That people are attempting to lay blame on the company says volumes about them, and about the curious view that some have of software vendors in general and Microsoft in particular.”
In addition, if critics want to point a finger, “the NSA, which reportedly discovered the vulnerability and then failed to warn hospitals and other organizations, is a better target than Microsoft,” he said. “But the fact is that many or most of those affected by WannaCrypt had the chance to secure their systems and failed to do so.”
Another analyst, Dan Olds of Gabriel Consulting Group, said Smith makes a reasonable argument that businesses must do a better job of defending themselves as well.
“Customers have to take at least a little responsibility for their own security,” said Olds. “If they don’t have an automatic update mechanism and they don’t apply patches manually, they’re going to be at risk – it’s as simple as that.”
Lots of users, particularly those overseas, don’t use automatic updates and leave their systems vulnerable, he said. “Many of these same folks are running systems with outdated operating system versions. I can see this happening with individuals, but would anyone in their right mind use an unsupported version of an operating system on a banking or hospital system? That’s insanity.”
Since Microsoft offered patches for this vulnerability before the attacks took place, “then it’s on the users to apply those patches for their own safety,” said Olds.
Jan Dawson, the chief analyst with Jackdaw Research, said “the reality is [Microsoft has] done everything it could to get people to upgrade, to provide patches for recent versions of Windows, and so on. At some point, organizations which don’t update or patch their software even in the face of a steady stream of security threats can’t expect their suppliers to fix things for them.”
Rob Enderle, principal analyst with Enderle Group, agreed.
“Microsoft rushed out a patch before the attack which is pretty much all they can do,” said Enderle. “People didn’t patch and a huge number of those hit were running versions of Windows that were either way out of date or pirated. Even so Microsoft did attempt to patch what they could at a massive cost to the firm.”
Ultimately, “Microsoft will take a lot of heat for this, but in this instance, they performed as rapidly as they could; they have a right to be [angry].”
The attack has reportedly hit 74 countries, including the U.K., U.S., China, Russia, Spain, Italy, and Taiwan. Windows 10 was not affected by the WannaCry attacks.
5:19p | CrowdStrike Raises $100 Million as Cyber Makes Headlines

Nafeesa Syeed (Bloomberg) — CrowdStrike Inc., whose digital security services helped the Democratic National Committee respond to a network breach that was linked to Russia, raised $100 million in its latest funding round.
The new money, led by investor Accel, values the company at more than $1 billion, CrowdStrike said in a statement Wednesday. With the latest investment, Irvine, California-based CrowdStrike said it has raised a total of $256 million.
“It’s great validation for what we’ve built and the growth we’ve seen both in the customer base as well as the company,” George Kurtz, president, chief executive officer and co-founder, said in a phone interview. “More importantly, it’s looking toward the future and being able to focus on the tremendous growth opportunity that we have.”
See also: FireEye, Symantec Jump as Ransomware Hack Seen Boosting Spending
The investment comes as cybersecurity companies are enjoying the limelight. A global ransomware attack over the weekend that affected hundreds of thousands of computers in more than 150 countries sent cybersecurity stocks up in anticipation of higher spending on security by companies and governments.
The widespread hack highlights the need for better endpoint security that goes beyond signatures and employs a cloud-based approach to deal with advanced threats as seen in the WannaCry attack, Kurtz said.
“This incident puts it all in perspective,” Kurtz said. “It’s more a public view into the failures of some of the existing technologies that are out there.”
CrowdStrike said it would use the cash infusion to help meet “spiking demand” for its services as organizations look to replace their antivirus software with more effective solutions. Over the weekend, the company saw a 10-fold increase in the number of people reaching out to acquire its platform, Kurtz said. It’s too early to tell yet who was behind the attack, but researchers are continuing to investigate, he added.
See also: This Hacker Can Talk His Way inside a Data Center
The DNC called CrowdStrike last year to respond to a breach of its networks that led to disclosures of committee emails and other internal data. CrowdStrike linked the attackers to Russian intelligence agencies, a finding echoed by the U.S. government, which said the campaign was ordered by Russian President Vladimir Putin.
The latest funding will feed CrowdStrike’s aims of geographic growth both within and outside the U.S. It has 15 offices, including in Europe, the Middle East and Africa. The company plans continued expansion in Europe and the Asia Pacific region, including an office in Singapore, while broadening its presence in South America, according to Kurtz.
Other goals include small acquisitions. In the current funding environment, “there’s a lot of companies that are stuck in no man’s land of $5 million to $10 million,” Kurtz said, that just don’t have enough customer traction or big enough enterprises using their product – even if they have great technology. CrowdStrike is considering how those companies could complement its work, he said.
While research firm IDC expects global spending on security solutions to accelerate slightly over the next several years, the market is also getting more competitive. It’s harder to stand out to prospective customers in the face of legacy tech companies like Cisco Systems Inc.
Other existing investors that participated in the latest round included Warburg Pincus and CapitalG, which is part of Alphabet Inc.; new investors included March Capital Partners and Telstra Corp., an Australian telecommunications company.
This is the company’s fourth funding round. In July 2015, a round led by Google raised $100 million with a valuation nearing $1 billion. Wednesday’s valuation tops that, he said.
“We haven’t raised capital in two years; we just haven’t had the need,” Kurtz said. “We still have plenty of cash, but we thought the funding environment was a good one, and we wanted to continue to bolster our balance sheet.”
8:00p | Tear Down the Silos: A Call for App-Centric Infrastructure Performance Management

Len Rosenthal is chief marketing officer for Virtual Instruments.
Applications are at the heart of today’s enterprise, but like the body’s most vital organ, applications cannot sustain life on their own. To ensure applications deliver in terms of performance and availability, organizations need a common view between application- and infrastructure-layer management. This requires the free flow of insights between what have traditionally been siloed functions: application performance management (APM) and infrastructure performance management (IPM). This is a challenge, particularly in enterprises that rely on legacy infrastructure, but it’s one data center leaders must overcome if they are to deliver on complex application requirements in physical, virtual and cloud computing environments.
Converging APM, IPM for Greater Synergy
In the past, APM and IPM played different roles, in different silos. Working from the top down, APM analyzed end-user response times, runtime application environments and portions of virtual servers. On the other end of the spectrum, IPM managed physical resources such as servers, networks and storage, as well as most of the virtual ones. The two played vital roles as part of a whole, but there was little integration between them – integration that is now critical to effective end-to-end performance optimization.
To connect application, web, database and infrastructure teams, IT needs to do more than just mash up management tools. The test of true synergy is whether executives can change the ecosystem at both the system and operational levels. One of the first steps toward that goal is to end reliance on silo-specific solutions. These tools were built for isolated application and infrastructure environments, and they can’t do the job modern data center ecosystems require. Enterprises can no longer accurately predict traffic or control resources in a shared or multi-tenant environment. And if they want to compete in a market where new offerings can rapidly overtake traditional products, they must be flexible and scalable.
Silos get in the way of that goal. Many data center teams have worked to fix that problem through virtualization, software-defined architecture and other initiatives. And yet, many still battle technological (and cultural) challenges. Some crucial applications remain on traditional infrastructure (and some crucial players remain tied to their roles as data gatekeepers). These barriers threaten real-time information flow and performance visibility across the enterprise.
The Need for App-centric IPM
By contrast, app-centric IPM enables enterprises to manage infrastructure for the express purpose of delivering application performance and availability. Such an approach does three things:
- Continuously capture, correlate and analyze data: Enterprises can compare heterogeneous infrastructure information with established response time, utilization and other metrics systemwide. Freed from silos, organizations gain advanced analytics frameworks that provide them with contextual understanding of their application environments.
- Deliver a dashboard of correlation, discovery and predictive data analytics: App-centric IPM delivers accurate, actionable and vendor-agnostic insights. Engineering and operational teams get these insights delivered in an intuitive way that supports scalability, while maintaining speed and functionality.
- Understand infrastructure in the context of the application: Data center managers need to understand which applications are running on which components of the infrastructure. They also need to understand the importance, or business value, of each application on the shared infrastructure. Finally, app-centric IPM needs to detect and understand changes in application workload behavior, so that problems can be proactively avoided.
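As a concrete, purely illustrative example of the correlation step in the first bullet, the sketch below ranks infrastructure components by how strongly their metrics track an application's response time. All metric names and numbers are invented; a production IPM platform would do this continuously, across far more metrics, and with more sophisticated analytics than a Pearson coefficient.

```python
# Hypothetical sketch of app-centric correlation: given time-aligned samples
# of an application's response time and the utilization of each infrastructure
# component, rank components by how strongly they track the app's latency.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def rank_components(response_ms, metrics):
    """Return component names sorted by |correlation| with response time."""
    scores = {name: pearson(response_ms, series) for name, series in metrics.items()}
    return sorted(scores, key=lambda name: abs(scores[name]), reverse=True)

# Invented samples: app latency rises as storage latency rises
response_ms = [120, 135, 150, 180, 240, 230]
metrics = {
    "storage_latency": [4, 5, 6, 8, 12, 11],    # tracks app latency closely
    "cpu_util":        [40, 42, 41, 43, 42, 44],
    "net_util":        [30, 28, 35, 31, 29, 33],
}
print(rank_components(response_ms, metrics))  # storage_latency ranks first
```

The output points the operations team at the infrastructure component most associated with the application's slowdown, which is exactly the kind of cross-silo insight the article argues for.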
Application-centric IPM keeps the blood of information pumping through the enterprise, so organizations can deliver high performance and availability – all the time. This approach provides a map of the data center in relation to each application, as well as the context around each application and its criticality to the business. Lastly, app-centric IPM helps IT understand how applications behave and how that behavior stresses the infrastructure. This level of understanding is essential in rapidly changing market landscapes, where services live and die on their reputations. When the user experience is paramount, enterprises can’t afford to retain the silos that separated APM and IPM for so long.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.