Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, January 3rd, 2017

    6:09p
    US Intelligence Got the Wrong Cyber Bear

    (Bloomberg View) — The “Russian hacking” story in the U.S. has gone too far. That it’s not based on any solid public evidence, and that reports of it are often so overblown as to miss the mark, is only a problem to those who worry about disinformation campaigns, propaganda and journalistic standards — a small segment of the general public. But the recent U.S. government report that purports to substantiate technical details of recent hacks by Russian intelligence is off the mark and has the potential to do real damage to far more people and organizations.

    The joint report by the Department of Homeland Security and the Federal Bureau of Investigation has a catchy name for “Russian malicious cyber activity” — Grizzly Steppe — and creates infinite opportunities for false flag operations that the U.S. government all but promises to attribute to Russia.

    The report’s goal is not to provide evidence of, say, Russian tampering with the U.S. presidential election, but ostensibly to enable U.S. organizations to detect Russian cyber-intelligence efforts and report related incidents to the U.S. government. It’s supposed to tell network administrators what to look for. To that end, the report contains a specific YARA rule — a bit of code used for identifying a malware sample. The rule identifies software called the PAS Tool PHP Web Kit. Some inquisitive security researchers googled the kit and found it easy to download from the profexer.name website. It was no longer available on Monday, but researchers at Feedjit, the developer of the WordPress security plugin Wordfence, took screenshots of the site, which proudly declared the product was made in Ukraine.
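
    For readers who have never worked with YARA, below is a minimal sketch of how an administrator might apply such a rule to files on a web server. The rule text is a simplified, hypothetical example written purely for illustration — it is not the signature published in the Grizzly Steppe report — and the script assumes the third-party yara-python package is installed.

        # Illustration only: scan files against a simplified, made-up YARA rule.
        # The real Grizzly Steppe rule targets the PAS web shell with far more
        # specific strings and conditions than this placeholder does.
        import sys
        import yara

        HYPOTHETICAL_RULE = r"""
        rule pas_webshell_illustration
        {
            strings:
                $php_tag = "<?php"
                $marker  = "PAS tool"   // placeholder indicator, not a real one
            condition:
                $php_tag at 0 and $marker
        }
        """

        def scan(path):
            rules = yara.compile(source=HYPOTHETICAL_RULE)   # parse the rule text
            matches = rules.match(filepath=path)             # test one file against it
            print(path, "->", [m.rule for m in matches] if matches else "no match")

        if __name__ == "__main__":
            for target in sys.argv[1:]:
                scan(target)

    Anything such a scan flags still has to be investigated by hand — which is exactly why a rule that matches a widely available tool is weak evidence of who planted it.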

    That, of course, isn’t necessarily to be believed — anyone can be from anywhere on the internet. The apparent developer of the malware is active on a Russian-language hacking forum under the nickname Profexer. He has advertised PAS, a free program, and thanked donors who have contributed anywhere from a few dollars to a few hundred. The program is a so-called web shell — a script a hacker installs on an infiltrated server so that file theft and further attacks blend in with the server’s legitimate web traffic. There are plenty of these in existence, and PAS is pretty common — “used by hundreds if not thousands of hackers, mostly associated with Russia, but also throughout the rest of the world (judging by hacker forum posts),” Robert Graham of Errata Security wrote in a blog post last week.

    The version of PAS identified in the U.S. government report is several versions behind the current one.

    “One might reasonably expect Russian intelligence operatives to develop their own tools or at least use current malicious tools from outside sources,” wrote Mark Maunder of Wordfence.

    Again, that’s not necessarily a reasonable expectation. Any hacker, whether associated with Russian intelligence or not, can use any tools he or she might find convenient, including an old version of a free, Ukrainian-developed program. Even Xagent, a backdoor firmly associated with attacks by a hacker group linked to Russian intelligence — the one known as Advanced Persistent Threat 28 or Fancy Bear — could be used by pretty much anyone with the technical knowledge to do so. In October 2016, the cybersecurity firm ESET published a report claiming it had been able to retrieve the entire source code of that malicious software. If ESET could obtain it, others could have done it, too.

    Now that the U.S. government has firmly linked PAS to Russian government-sponsored hackers, it’s an invitation for any small-time malicious actor to use it (or Xagent, also mentioned in the DHS-FBI report) and pass off any mischief as Russian intelligence activity. The U.S. government didn’t help things by publishing a list of IP addresses associated with Russian attacks. Most of them have no obvious link to Russia, and a number are exit nodes on the anonymous Tor network, part of the infrastructure of the Dark Web. Anyone, anywhere could have used them.

    Microsoft Word is a U.S.-developed piece of software. Yet anyone can be found using it, even — gasp — a Russian intelligence operative! In the same way, a U.S.-based hacker aiming to get some passwords and credit card numbers or seeking bragging rights could use any piece of freely available malware, including Russian- and Ukrainian-developed products.

    The confusion has already begun. Last Saturday, The Washington Post reported that “a code associated with the Russian hacking operation dubbed Grizzly Steppe” was found on a computer at a Vermont utility, setting off a series of forceful comments by politicians about Russians trying to hack the U.S. power grid. It soon emerged that the laptop hadn’t been connected to the grid, but in any case, if PAS was the code found on it and duly reported to the government, it’s overwhelmingly likely to be a false alarm. Thousands of individual hackers and groups routinely send out millions of spearphishing emails designed to trick an unsuspecting person into clicking a link and letting the attackers into their computer. Now, they have a strong incentive to use Russian-made backdoor software for U.S. targets.

    For Russian intelligence operatives, this is an opportunity — unless they’re as lazy as the U.S. reports suggest. They, in turn, need to switch to malware developed by non-Russian-speaking software experts. Since their work tends to be attributed to the Russian government based on Russian-language comments in the code and other circumstantial evidence, and the cybersecurity community and the U.S. government are comfortable with the attribution, all they need is Chinese- or, say, German-language comments.

    The U.S. intelligence community is making a spectacle of itself under political pressure from the outgoing administration and some congressional hawks. It ought to stop doing so. It’s impossible to attribute hacker attacks on the basis of publicly available software and IP addresses used. Moreover, it’s not even necessary: Organizations and private individuals should aim to prevent attacks, not to play blame games after the damage is done. The most useful part of the DHS-FBI report is, ironically, the most obvious and generic one — the one dealing with mitigation strategies. It tells managers to keep software up to date, train staff in cybersecurity, restrict administrative privileges, and use strong anti-virus protections and firewall configurations. In most cases, that should keep out the Russians, the Chinese and homegrown hackers. U.S. Democrats would have benefited from this advice before they were hacked; it’s sad that they either didn’t get it from anyone or ignored it.

    This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners. It also does not necessarily reflect the opinion of Data Center Knowledge and its owners.

    7:04p
    Issues for 2017: Has the Enterprise Data Center Stopped Disappearing?

    If you believed everything you read, nothing would be correct.  The cloud, we’ve been told, will absorb resources and investment from enterprises, leading to smaller and fewer enterprise data centers.  Indeed, entire businesses will cease to exist as a result of a tremendous force of enterprise absorption, as predicted by former Cisco CEO John Chambers in 2015.

    The cloud, we’re told, will rejuvenate enterprises and restore their faith in their ability to own and maintain their own infrastructure.  Indeed, entirely new businesses will bloom and prosper, as predicted by the contributors to the OpenStack Foundation, one of which is Cisco.

    So what does the evidence tell us?  Last June, we reported the findings of the latest Uptime Institute survey for 2016.  Fewer respondents said their firms had built new data centers within the previous 12 months than had said so the year before.  This would appear to disprove a 451 Research report from the previous year, which predicted that nine in 10 data center operators planned to build a new facility.

    Uptime’s survey results were followed one month later by a Jones Lang LaSalle report, indicating a spike in leases.  The cloud, the commercial real estate firm predicted, would lead to a doubling of the entire data center industry within the next five years.

    You’d think the global data center industry had sent out more contradictory signals over the last year than the entire US presidential campaign did over the last four.

    See also: What 2016’s Top Data Center Stories Say About the Industry

    The End

    “I definitely think the enterprise data center, if you will, in the classic sense, is gone,” Chris Sharp, CTO and senior VP for service innovation at Digital Realty Trust, said in an interview with Data Center Knowledge.

    “The most economical and secure architecture you can possibly pull to market,” Sharp continued, “is this hybrid, multi-cloud strategy. And by ‘multi-cloud,’ I mean hybrid interconnectivity to these services. And that’s next to impossible to achieve in the traditional enterprise data center.”

    As regular DCK readers no doubt already know, Digital Realty’s key product this year has been its new Service Exchange, a connectivity-as-a-service offering that put it roughly on par with Equinix’s Cloud Exchange.  Clearly, Sharp has an interest in promoting the idea that it makes sense for companies to deploy their IT gear in spaces with greater opportunity to connect with cloud providers using high-speed, high-availability connections.

    But his broader point is difficult to dispute:  Connectivity is the actual issue here.  Virtualization would effectively render all cloud servers anywhere on the planet equivalent to one another, regardless of location, were it not for the issue of connectivity.

    In almost any survey that gives IT personnel a list of factors that go into their firms’ decisions about where to house their IT assets — on-premise, in colocation facilities, or in the public cloud — and asks them to choose the most important one, cost or price typically tops the list.  But when you examine the forces that end up positioning IT assets where they are today, and that relocate them from place to place, you find these firms are not always following price the way the ancient Egyptians built their empire along the river Nile.

    Connectivity is the commodity that data center users and operators are actively chasing.  Like a migratory species adapting to a new environment to survive, data centers will take whatever shape and function they need to adapt to the workloads they’re being given.  So shrinkage or growth may not really be what you think they are.

    “The mashup of services that enterprises have to rely on is becoming more and more complex,” Sharp said.  “Customers today have a sense of what their existing requirements are around space and power.  But I ask them, look two years, three years out.  They have a very rough time gauging what they’re going to need.  What they do know is that it’s going to grow, and that it’s going to grow rather large.”

    The Other End

    If you extend the definition of “enterprise data center” to be as virtual and as versatile as the applications and services that live-migrate between today’s servers, then those contradictions no longer seem like opposing forces.  Like the familiar metaphor of two people judging an elephant in a darkened room, they become different perspectives of the same picture.  Yes, on-premise facilities may no longer be growing.  No, enterprise investment in data center infrastructure and resources is not shrinking.

    So you have to look back at some of these surveys that ask IT personnel about “your data center,” and ask yourself whether all the respondents thought it meant the same thing.

    Steven Carlini, senior director for data center global solutions at Schneider Electric, offered this explanation in an interview with DCK:  Data centers, he said, are not homogeneous across the spectrum, and frankly never have been.

    Imagine three concentric rings.  “You have the core centralized data center in the middle; then you have the regional data centers in the middle ring; then in the outer ring, there are going to be these tiny, on-premise data centers that are going to run applications in smaller facilities.”

    Yes, Schneider has an interest in seeing smaller facilities come to fruition, having recently partnered with HPE on the “Micro data center.”  But wouldn’t you trust someone urging you to bet on a horse if that someone was betting on the same horse?

    In Carlini’s outer ring, facilities may become more application-centric.  He foresees a possibility that, for instance, the agriculture industry may invest in these trailer-sized complexes to gather Internet of Things data from sensors stationed throughout active fields, measuring soil temperature, moisture, acidity, and other factors in real time.

    By that measure, data center investment could increase, simply by giving huge industries (e.g., geology, petroleum, natural gas production, refineries, renewable energy facilities) new opportunities to use consolidated servers in nearby spaces — opportunities that SaaS providers would be hard-pressed to match.  Once again, connectivity is the key issue.

    “When you run the processing closer to the application,” Carlini pointed out, “you don’t necessarily need to save all of the data.  You only want to save the results of the IoT processing.”

    And yes, there’s a self-serving component to that argument; but again, the broader point being made here is difficult to dispute:  Big Data got big because meaningful analytical results take time to process.  Greater connectivity would reduce that interval.  As a result, many applications might not need to store so much of their captured data for the results of their analytics to be just as relevant.
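
    To make that trade-off concrete, here is a minimal sketch — using hypothetical sensor fields and values — of the edge-aggregation pattern Carlini describes: raw IoT readings are reduced locally, and only the compact summary needs to be stored or sent upstream.

        # Illustrative edge aggregation: keep the summary, discard the raw samples.
        from dataclasses import dataclass
        from statistics import mean

        @dataclass
        class SoilReading:              # hypothetical field-sensor sample
            temperature_c: float
            moisture_pct: float
            ph: float

        def summarize(readings):
            """Reduce a batch of raw samples to the few figures worth keeping."""
            return {
                "samples": len(readings),
                "avg_temperature_c": round(mean(r.temperature_c for r in readings), 2),
                "avg_moisture_pct": round(mean(r.moisture_pct for r in readings), 2),
                "ph_range": (min(r.ph for r in readings), max(r.ph for r in readings)),
            }

        batch = [SoilReading(18.2, 31.0, 6.4), SoilReading(18.5, 30.2, 6.5)]
        print(summarize(batch))         # only this result leaves the edge site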

    Maybe the enterprise data center ends up looking more like a semi-trailer than the ground floor of a high-rise.  But if it does the job reliably and affordably, who cares?

    Ends Meet

    The middle ring in Carlini’s model is easy to overlook.  He sees Schneider as having a hand in that department as well.  But because this ring involves regional data centers, there’s a major roadblock that the inner and outer rings may not face.

    To borrow Arlo Guthrie’s phraseology, ther-r-r-re was a third possibility that no one even counted upon.

    “We could drop a pre-fabricated, regional data center, basically off a truck, and have it up and running in a week,” Carlini explained.  “But because of all the permitting and regulations, we can’t do that.  And the internet [service provider] giants can’t do that either.  So what’s happening is, a lot of these colocation facilities that were built in urban areas now have these internet giants as their primary tenants.”

    Because regional data centers deal in local regions, they get caught up in regional disputes — zoning laws, rights-of-way, energy monopolies.

    ISVs are moving their SaaS services and their clouds closer to their customers, and they’re using colo providers to help them get there.  Colo gives ISVs a way around the regional regulatory problem — a shortcut to the connectivity they’re looking for.  If the regulatory environment were to change, in this country and elsewhere, conceivably a new bumper crop of mid-range data center facilities could spring up in industrial parks and central business districts.

    And the definition of “enterprise data center” would change yet again.  The thing we forget when we’re sorting out our definitions and our taxonomies from year to year is that circumstances change the way such a sorting process works.  From one milestone year to the next, the context of our industry shifts, and our kaleidoscopic view of the world no longer presents us with the same pattern.

    We call it “digital disruption,” but usually that’s just a catch-all phrase for the white noise that results from trying to perceive an evolving world through a non-evolving lens.  As any scientist will tell you, when a subject of investigation appears to be growing in one measurement and shrinking in another, that’s not the Uncertainty Principle.  That’s natural evolution.

    7:30p
    Top Open Source Software Challenges for 2017

    By The VAR Guy

    It’s a new year, and open source software is more popular than ever. But the open source community is also confronting a new set of challenges. Here’s what open source programmers and companies will need to do to keep thriving in 2017.

    There is no denying that open source has come a very long way in a relatively short time. In January 2007, only a handful of major companies were invested heavily in open source. Closed-source software vendors like Microsoft and VMware dominated the enterprise computing market. On the desktop Linux front, you were lucky if you could get your Linux PC to connect to a wireless network, let alone actually use it to do work.

    Fast forward to January 2017, however, and open source software is everywhere. More than two-thirds of companies are contributing to open source. Open source technologies like OpenStack, Docker and KVM are being used to build the next generation of infrastructure. And it has been years since I have had to fight with Xorg.conf or ndiswrapper in order to make Linux work on my PC.

    Open Source’s Top Challenges

    Yet for all that the open source community has achieved, a new set of challenges has arisen. It includes:

    • Cloud computing. Virtually everything is migrating to the cloud, and cloud computing is projected to continue growing at a compound annual growth rate of 19.4 percent for the next several years. That’s good news for open source technologies that power the cloud, like OpenStack. But it’s bad news for people who believe that the primary purpose of open source (or free software) should be to free users. Even when clouds are powered by open source code, cloud computing’s architecture denies users many of the freedoms they would otherwise gain by using open source software (as Richard Stallman keenly points out).
    • The Internet of Things (IoT). IoT presents challenges for open source that are similar to the cloud. Many IoT devices, like smart thermostats, are powered in part by open source technologies. But that doesn’t mean much to users, who usually have little ability to modify the code running on the devices — which tend to be undocumented, to lack interfaces that facilitate modification and to depend on proprietary components.
    • Apple. The open source community has won its long war against Microsoft. Redmond declared its “love” for Linux and made many open source-friendly moves in recent years. But that other major consumer computing company — Apple — remains considerably less in love with open source (which is ironic, given that macOS is built in part on open code derived from BSD). Sure, Apple publishes some open source code. But most of Apple’s products and platforms are super-proprietary and closed. (Case in point: FaceTime, which is so proprietary you can’t talk to people with non-Apple devices.) As long as Apple looms as a highly successful closed-source software company, open source will face stiff competition in the consumer market.
    • Docker. Docker containers, which provide an innovative way to isolate applications and build next-generation infrastructure, are the hottest open source story of the moment. But Docker comes with a downside for the open source community. Concern about open standards for containers helped to drive talk of forking Docker several months ago. That never officially happened, but Red Hat launched a competing container framework called OCID. Red Hat swears OCID is not a Docker fork, but it kind of looks like one. All of this competition in the container ecosystem suggests that the spirit of collaboration that has traditionally undergirded open source projects is breaking down in the container world — and that forks and rumors of forks could damage Docker’s momentum.
    • Corporate control. In elder days, most open source code was written by volunteers. Today, the vast majority of code contributions to projects like Linux and OpenStack come from programmers paid by companies like Red Hat and Intel. There’s nothing wrong with this; the fact that companies are investing so much money in open source development is a good thing. But this change does reflect a much higher degree of corporate control over open source code. That leads to tensions that the open source community must contend with — such as the kerfuffle last January about the Linux Foundation’s board of directors giving too much control to companies at the expense of individuals.

    There is no doubt that open source software will continue to thrive in this new year. But as open source reaches new frontiers, the open source landscape is changing. The open source community must adapt with it.

    This article first appeared here, on The VAR Guy.

    8:24p
    Speak Data Center But Don’t Speak Cloud? Google Wants to Help

    Google has released a guide to its cloud platform written specifically for data center professionals with minimal cloud experience.

    Like its competitors Amazon and Microsoft, Google covets the enterprise cloud market: it wants companies housing IT infrastructure in their own or colocation data centers to migrate as many workloads as possible from those facilities onto its cloud platform. The guide is meant to explain to the people running enterprise data centers how the cloud can substitute for each function of the facilities they operate.

    “Google Cloud Platform for Data Center Professionals is a guide for customers who are looking to move to Google Cloud Platform and are coming from non-cloud environments,” Peter-Mark Verwoerd, a Google solutions architect, wrote in a blog post.

    The guide covers the basics of running IT in the cloud: compute, networking, storage, and management.

    Starting in 2015, Google increased investment in its cloud services business and amplified messaging around it. The enterprise cloud opportunity is huge, but the company is far behind its biggest competitors in terms of market share.

    One analyst report last year estimated that the top Infrastructure-as-a-Service cloud providers collectively made $11.2 billion in revenue in 2015 and forecast the market to reach $120 billion by 2020. Google’s share of the market in 2015 was 2.5 percent, compared to Amazon’s 70.7 percent and Microsoft’s 10.8 percent, according to Structure Research, which published the report.
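
    Applying those shares to the $11.2 billion total puts the gap in dollar terms. A rough back-of-the-envelope calculation, using only the figures cited above:

        # Rough conversion of the 2015 market shares cited above into dollars.
        total_2015 = 11.2e9                                   # total IaaS revenue
        shares = {"Amazon": 0.707, "Microsoft": 0.108, "Google": 0.025}
        for provider, share in shares.items():
            print(f"{provider}: ~${share * total_2015 / 1e9:.1f}B")
        # Amazon: ~$7.9B, Microsoft: ~$1.2B, Google: ~$0.3B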

    See also: Here’s Google’s Plan for Calming Enterprise Cloud Anxiety

    Google’s efforts to prove its cloud’s worth included hiring VMware founder Diane Greene to lead its cloud business, investing in 10 new data center locations, and calling attention to well-known cloud customers, including Spotify, Niantic (the company behind Pokémon Go), Coca-Cola, and The Walt Disney Company.

    Read Google Cloud Platform for Data Center Professionals here.
