Data Center Knowledge | News and analysis for the data center industry
 

Thursday, March 9th, 2017

    5:17p
    Does WikiLeaks CIA Dump Suggest Cybersecurity is Largely Futile?

    Brought to you by MSPmentor

    This week’s WikiLeaks disclosure essentially confirms the harsh reality that all of the endpoints IT professionals work so hard to secure have long been cracked and left wide open to intrusion – by the U.S. government, at least.

    Tuesday’s document dump by the famous leak-disseminating online publication lays out several hundred million lines of code that appear to confirm sensitive C.I.A. methods for hacking into an unnerving array of electronic devices, including smartphones, computers and smart TVs.

    Thus far, WikiLeaks has redacted sufficient portions of the actual code used in C.I.A. cyber attacks, citing a type of ethics review. But the fear is the organization could choose to release details of the powerful, weaponized exploits at any time.

    “The…site didn’t release the code, saying it was postponing release ‘until a consensus emerges on the technical and political nature of the C.I.A.’s program,’ and how the cyberweapons could be disarmed,” USA Today reported.

    A worst-case scenario has U.S. intelligence agencies eavesdropping on anyone, anywhere, so long as an Internet-enabled electronic device is situated nearby.

    “In one revelation that may especially trouble the tech world if confirmed, WikiLeaks said that the C.I.A. and allied intelligence services have managed to compromise both Apple and Android smartphones, allowing their officers to bypass the encryption on popular services such as Signal, WhatsApp and Telegram,” according to an article in the New York Times. “According to WikiLeaks, government hackers can penetrate smartphones and collect ‘audio and message traffic before encryption is applied.’”

    That report also describes some of the other exploits:

    “A program called Wrecking Crew explains how to crash a targeted computer, and another tells how to steal passwords using the autocomplete function on Internet Explorer,” the New York Times article states. “Other programs were called CrunchyLimeSkies, ElderPiggy, AngerQuake and McNugget.”

    In a “press release” accompanying the document dump, WikiLeaks said the 8,761 files released this week comprise just the first portion in a series of records and documents stolen from one of the U.S.’s most important intelligence agencies.

    “Recently, the CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized ‘zero day’ exploits, malware remote control systems and associated documentation,” the WikiLeaks statement said.

    “This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA,” it continued. “The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive.”

    WikiLeaks said the source seeks to stoke a public debate about cyberweapons.

    But while the revelations clearly mark an upending of the cybersecurity world, multiple experts said the dump offered little in the way of surprises.

    “The types of capabilities described in the WikiLeaks (files) are not new and many of the exploits were demonstrated as technically possible for a while now,” Slawek Ligier, vice president of security engineering at Barracuda Networks, told the U.K. publication IT Pro.

    Matthew Ravden, a vice president at security systems specialist Balabit, told IT Pro that: “Assuming these revelations are true (and they certainly appear to be authentic), it’s probably fairly shocking to the general public to see the lengths to which a sophisticated government-sponsored organization will go to find ways of ‘listening in,’ through TVs, smart-phones or other ‘connected’ devices.”

    “For those of us in the security industry, however, none of this is particularly surprising,” Ravden continued. “The resources available to the CIA, (British) MI5, or the (Russian) FSB are such that they can do pretty much anything; they live by a different set of rules from the rest of us.”

    This article originally appeared on MSPmentor.

    8:47p
    Identity is the New Enterprise Security Perimeter

    Richard Walters is SVP of Security Products at Intermedia.

    To say that today’s enterprises are up against a whole new world of security threats is an understatement.

    Businesses across the globe have suffered massive data breaches affecting operations and customer trust. Oracle, for example, discovered malicious software on systems running its network of MICROS point-of-sale payment terminals—ultimately impacting hundreds of the company’s computers and its online support portal.

    Oracle is not the only organization suffering from such attacks. It will not be the last. Things are only going to get worse as new threats grow in popularity among cybercriminals. But why is enterprise security in such disarray? The problem lies with the current state of security perimeters.

    The uptick in cloud computing and mobility has rendered traditional enterprise security perimeters non-existent. Employees no longer access corporate data only through a desktop at the office; they use multiple devices and web apps at any time and from any place. Each device and web app an employee uses is a potential weak point governed by one challenge: verifying that the right person is accessing the appropriate information on the right device.

    So, how can organizations protect against sophisticated cyberthreats in this new landscape? The answer is to adopt an identity-first approach to security and access with an identity and access management (IAM) solution.

    At its core, IAM separates two groups of users: those who have permission to do certain things and those who don’t. That may seem straightforward, but in this day and age where we can conduct work anywhere, over a range of devices and using a nearly infinite number of web apps, it’s an incredibly complex challenge. Next generation IAM, however, is evolving to keep ahead of these new realities. Deployed correctly, advanced IAM can significantly reduce exposure to security risks.
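
    A toy sketch may help make that core decision concrete: given an identity, an application and a requested action, the IAM layer either permits or denies. The permission table, user names and app names below are hypothetical illustrations, not any vendor's actual policy model.

```python
# Toy illustration only: a hypothetical permission table mapping
# (user, app) pairs to the set of actions that identity may perform.
PERMISSIONS = {
    ("alice", "payroll-app"): {"read"},
    ("bob", "payroll-app"): {"read", "write"},
}

def is_allowed(user: str, app: str, action: str) -> bool:
    """Permit the request only if this identity holds the named permission."""
    return action in PERMISSIONS.get((user, app), set())

print(is_allowed("alice", "payroll-app", "write"))  # False: alice is read-only
print(is_allowed("bob", "payroll-app", "write"))    # True
```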

    Below are key considerations for IT teams taking an identity-first approach.

    Remove the Human Element in Password Management

    According to the 2016 Verizon Data Breach Investigation Report, 63 percent of all breaches leveraged a weak or stolen password. This poor state of password hygiene shouldn’t be a surprise. After all, the average enterprise, according to Netskope’s 2016 cloud report, uses 935 web apps.

    Single Sign-On (SSO) emerged as a solution to address the growing burden of creating and managing passwords. SSO reduces the tendency to use weak or common passwords that are easily cracked. But SSO alone is not enough. Dynamic password management is the next step, keeping credentials secure by reducing the human element and ensuring strong passwords are created and automatically changed on a regular basis. This greatly improves security, while also preserving convenience for users who don’t have to constantly come up with, and then remember, new passwords.
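
    As an illustration of the dynamic password management idea, the sketch below generates strong, random credentials and notionally rotates them for a single web app. The rotate_app_password helper and the app name are assumptions made for the example; a real IAM product would push the new credential through the target application's own admin API and store it in its vault so SSO can replay it on the user's behalf.

```python
# Minimal sketch (not a vendor implementation) of dynamic password
# management: credentials are generated randomly and rotated without
# any human choosing or remembering them.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_strong_password(length: int = 24) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def rotate_app_password(app_name: str, username: str) -> str:
    """Create a fresh credential for one web app (hypothetical helper)."""
    new_password = generate_strong_password()
    # In a real deployment, the new credential would be pushed to the app
    # via its admin API and stored in the IAM vault for SSO to use.
    print(f"Rotated credential for {username} on {app_name}")
    return new_password

if __name__ == "__main__":
    rotate_app_password("example-crm", "jdoe")
```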

    Understand User Behavior

    Context-based authentication and authorization helps balance the dual requirements of security and usability. By dynamically adapting authentication to the level of risk posed by the user’s current context, organizations can provide flexibility without relinquishing control. The approach takes into consideration the conditions around a request in order to verify trust. For example, a context-based authentication and authorization solution can verify a user and grant access to sensitive data by considering their role, their location (via geolocation), the time of day, the type and health of the device, and the network they’re using.

    Let’s say a known user typically accesses data remotely from a particular device during a certain time period and in a certain geographical area. Now let’s say that user falls out of their established pattern and attempts to access sensitive data on a new device, from a new location, after hours. Additionally, their location differs wildly from their previous login location that same day. A context-based authentication and authorization solution should be able to detect this discrepancy and escalate the authentication process, for example by requiring an SMS code sent to the user’s verified phone number, before allowing access to continue.
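
    To make that escalation logic concrete, here is a minimal, illustrative sketch of a risk-adaptive check. The fields, weights and thresholds (known devices, usual country, work hours, trusted networks) are assumptions invented for the example, not any vendor's actual policy engine; a production system would weigh many more signals, including device health.

```python
# Illustrative sketch of risk-adaptive (context-based) authentication.
# Field names, weights and thresholds are assumptions for this example.
from dataclasses import dataclass

@dataclass
class LoginContext:
    device_id: str
    country: str
    hour: int          # 0-23, local time of the request
    network: str       # e.g. "corporate", "home", "public"

@dataclass
class UserBaseline:
    known_devices: set
    usual_country: str
    work_hours: range  # e.g. range(8, 19)
    trusted_networks: set

def risk_score(ctx: LoginContext, baseline: UserBaseline) -> int:
    """Add points for every way the request deviates from the baseline."""
    score = 0
    if ctx.device_id not in baseline.known_devices:
        score += 2                  # unfamiliar device is a strong signal
    if ctx.country != baseline.usual_country:
        score += 2
    if ctx.hour not in baseline.work_hours:
        score += 1
    if ctx.network not in baseline.trusted_networks:
        score += 1
    return score

def required_step(score: int) -> str:
    """Map the risk score to an authentication requirement."""
    if score >= 4:
        return "deny"               # too many anomalies at once
    if score >= 2:
        return "sms_code"           # escalate, e.g. code to verified phone
    return "password_only"

baseline = UserBaseline({"laptop-123"}, "US", range(8, 19), {"corporate"})
request = LoginContext("phone-999", "BR", 23, "public")
print(required_step(risk_score(request, baseline)))  # "deny" for this request
```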

    Maintain a Trail of Accountability

    No security measure on earth is one-hundred percent effective at protecting against breaches, and often human error is to blame. According to an Intermedia 2015 Insider Risk Report, 93 percent of employees admitted to “engaging in at least one form of poor data security.” This includes sharing login credentials with multiple users (65 percent), deploying shadow IT solutions without consulting IT first (45 percent for tech-savvy users) and more.

    Therefore, it is essential for IT teams taking an identity-centered approach to maintain an audit trail that captures user interactions with web applications. That way, when an attempted or actual breach occurs, IT or the security team can quickly investigate what happened, which employees were involved, whose credentials were compromised and what data was targeted.

    Such an audit trail is also beneficial in managing work with contractors, partners and vendors. Let’s say a business unit working on a new product needs to share data with contractors. Access is needed by a growing group of people, but controlling who has access to what data is difficult. By standardizing the access processes through a uniform, company-wide IAM policy, a business can provide contractors the right data with complete visibility without putting restricted data at risk. When the contract is over, access to the data can be revoked. And, when it comes time for an audit, the company can provide clear insights into what data was accessed, when it was accessed, by whom and why without overburdening IT or any other part of the organization.
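
    A minimal sketch of what such an audit trail might look like in practice appears below: an append-only, structured log recording who did what, when, in which app and against which resource. The file name and event fields are illustrative assumptions.

```python
# Minimal sketch of an append-only audit trail: one JSON object per event,
# recording who did what, when, in which app, against which resource.
# The file name and fields are illustrative assumptions.
import json
import time

AUDIT_LOG = "iam_audit.jsonl"

def record_event(user: str, app: str, action: str, resource: str) -> None:
    """Append one access event so later investigations can reconstruct it."""
    event = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "app": app,
        "action": action,      # e.g. "login", "download", "share"
        "resource": resource,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as log:
        log.write(json.dumps(event) + "\n")

record_event("contractor-42", "example-drive", "download", "product-specs.pdf")
```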

    As with many modern upgrades, a business considering an IAM-centered approach to security will need to consider what both its IT infrastructure and the company itself will look like further down the road. Systems will need to scale, solutions will need to be compatible, and the infrastructure will need to grow as the company grows. However, the investment – especially when safeguarding against rising security threats, suspicious activity and unpredictable employee behavior – is well worth it.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
    11:04p
    Google Expands Cloud Data Center Plans, Asserts Hardware, Connectivity Leadership

    Google has added three locations to the list of cloud data center construction projects under way to expand the geographic reach of the physical infrastructure that supports its enterprise cloud services, as it invests billions to catch up with its biggest rivals in the space, Amazon and Microsoft.

    Urs Hölzle, the company’s senior VP for technical infrastructure, announced the plans from stage Thursday at the Google Cloud Next event in San Francisco. He also underlined hardware innovation inside Google’s cloud data centers and its global network backbone and announced new cost controls for cloud users, while his colleagues unveiled new security, developer, and collaboration tools.

    Cloud data center locations, connectivity, hardware, cost, security, and toolset are all levers cloud providers pull as they compete for enterprise cloud market share – a race in which Google Cloud Platform remains far behind market leader Amazon Web Services as well as Amazon’s distant-second competitor, Microsoft Azure.

    AWS had more than 40 percent market share in public infrastructure and platform cloud services at the end of last year, Synergy Research Group estimated. Microsoft, Google, and IBM collectively had 23 percent. Synergy doesn’t break out the latter three companies’ individual market share, but another market analyst firm, Structure Research, estimated that at least in the infrastructure-as-a-service space, Azure commanded close to 11 percent market share in 2015, while Google’s share was 2.5 percent, compared to AWS’s nearly 71 percent.

    But Google’s cloud business is growing. The company paraded numerous big new customers at the conference, including Verizon, Colgate-Palmolive, HSBC, eBay, and Evernote, among others. “Customers of GCP connect to a billion individual users every single day,” Hölzle said.

    Expanding the Google Data Center Empire

    The three new cloud data center locations Google is planning to launch are in California, Canada, and The Netherlands, Hölzle said. Previously announced regions that are in the works are Northern Virginia, São Paulo, London, Finland, Frankfurt, Mumbai, Singapore, and Sydney.

    All of these locations will come online this year or next, he said, bringing Google’s total to 17 availability regions and 50 availability zones. Each zone is essentially a separate data center, with its own dedicated power and cooling infrastructure. The company plans to have a minimum of three zones in each region, although some regions initially launch with two.

    Like its big rivals, Google has been spending billions of dollars each year on building out its global network, which includes data centers, hardware, and telecommunications infrastructure, including terrestrial and submarine fiber cables. Its trailing three-year capital investment totals about $26.4 billion, Hölzle said. Capex figures Google discloses usually include costs other than infrastructure, but infrastructure is responsible for the bulk of the expenses.

    The company reported $10.9 billion in capex for 2016 – up 10 percent from the year before, an increase that was in line with its recently expanded focus on cloud infrastructure investment.

    Chipping Away at Market Share

    One of the layers of the stack where Google is asserting leadership is data center hardware. In addition to designing its own servers and network switches, Google has been deeply involved in the design of CPUs that power its cloud services by collaborating with Intel, which supplies chips for virtually all computing capacity cloud providers have today.

    Last month, Google announced that it was the first company to upgrade its servers with Intel’s next-generation Xeon processors, codenamed Skylake. At the event Thursday, Raejeanne Skillern, who leads Intel’s Cloud Service Provider Group, confirmed that Google was first, and that Intel would not be putting the product on the market for some time.

    While it’s common practice for Intel to customize products for all hyper-scale cloud companies – it’s been tweaking its chips for Google since 2003 – Google was involved in the development of Skylake from the very beginning. “We take the Google feedback every step of the way,” Skillern said.

    Hölzle also shared more details about a custom security chip found on every server motherboard in Google data centers, which was only briefly mentioned in a recently published whitepaper on the company’s cloud security practices. The chip is called Titan, and it’s so tiny that Hölzle was able to wear it attached to an earring pin on stage. Titan protects hardware at the BIOS level and helps authenticate the hardware and services running on Google’s servers.

    Wiring the Planet

    Another big capital investment area for Google is global data center connectivity. It was the first non-telco technology company to invest in a submarine cable construction project nine years ago. The cable, called Unity, crosses the Pacific Ocean to link landing stations in Chikura, Japan, and Redondo Beach, California. It came online in 2010.

    Since then, Google has invested in five more submarine cable projects. In recent years, other hyper-scale data center operators, including Microsoft, Amazon, and Facebook, have also become major investors in transcontinental connectivity.

    Read more: Here are the Submarine Cables Funded by Cloud Giants

    Sharpening the Machine Learning Angle

    Google has been making good progress on differentiating itself with cloud services around Artificial Intelligence and machine learning (the most widely used type of AI), Sid Nag, a research director at Gartner, said. That has been the company’s core message since around one year ago, when it started significantly ramping up its enterprise cloud efforts.

    A key challenge going forward will be simplifying the narrative around these tools and creating some “pedestrian use cases” to help customers easily integrate Google’s machine learning capabilities into their applications, Nag said.

    Go-to-Market Improvements

    Another important development this week was Google’s announcement of a significant expansion of its channel and technology partner programs. The company hasn’t had a managed services capability to help clients get onboarded onto its cloud platform, and partnerships such as the one it announced with Rackspace will go a long way toward improving that.

    The managed service provider has been supporting AWS and Azure, saying these managed cloud services have been its fastest-growing business. Now, it has added the third massive cloud platform into the mix. Google’s “partnership with Rackspace is going to be pretty significant,” Nag said.

