Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, October 25th, 2016

    12:00p
    Cloud Giants Likely to Beef Up Bandwidth to Fight IoT Botnets

    Amazon and IBM appear to be the only major cloud infrastructure providers to have been affected by the DDoS attack of potentially unprecedented scale on DNS servers operated by Manchester, New Hampshire-based Dyn last week.

    While it’s unclear exactly why the other providers, such as Microsoft, Google, and Rackspace, didn’t suffer service disruptions (the most likely reason is that they don’t use Dyn), companies whose computing and storage infrastructure underpins so many of the top internet services will have to take a hard look at their existing DDoS mitigation strategies.

    Attributed at least in part to a piece of open source software that enables attackers to use poorly secured connected devices – such as some CCTV cameras and DVRs – the series of attacks on Dyn last week shows that it will be difficult to predict just how big future attacks may be and how much bandwidth headroom companies will need to maintain in their networks to fight IoT botnets.

    “In the past month, we’ve seen a doubling of the largest DDoS attacks,” Lawrence Oran, research VP at Gartner, told Data Center Knowledge in an interview. “I think the infrastructure providers are going to be looking for ways to beef up their DDoS mitigation” capabilities.

    The scale of DDoS attacks is typically measured in Gigabits per second, since they work by flooding target networks with enough requests to exhaust their bandwidth resources. Before September’s attack on the website of cybersecurity journalist Brian Krebs, the largest known attack had been 363 Gbps. The attack on KrebsOnSecurity.com was close to 620 Gbps, Krebs wrote. That attack, however, was quickly followed by an attack on the French hosting company OVH, which was “roughly twice the size of the assault on KrebsOnSecurity,” according to Krebs.
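
    Since the arithmetic is simple, a back-of-the-envelope sketch in Python shows how modest per-device upstream bandwidth multiplies into attacks of this scale; the per-device figure is an illustrative assumption, not a measured value from the Krebs or Dyn incidents.

        # Rough estimate of aggregate DDoS bandwidth from botnet size.
        # The per-device upstream rate is an assumption for illustration only.
        def attack_size_gbps(devices: int, mbps_per_device: float) -> float:
            """Aggregate attack bandwidth in Gbps (1 Gbps = 1,000 Mbps)."""
            return devices * mbps_per_device / 1000.0

        # Example: 550,000 compromised cameras and DVRs pushing ~1.2 Mbps each
        # would exceed the ~620 Gbps reported for the KrebsOnSecurity.com attack.
        print(f"{attack_size_gbps(550_000, 1.2):.0f} Gbps")  # -> 660 Gbps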

    Dyn has not yet disclosed the size of the multi-stage attack on its infrastructure on October 21st. As of Monday afternoon, the company was still busy conducting root-cause analysis of the incident and expected to have more details to report by the middle of the week, its spokesman, Adam Coughlin, wrote in an email to DCK.

    “At this point we know this was a sophisticated, highly distributed attack involving 10s of millions of IP addresses,” Kyle York, Dyn’s chief strategy officer, wrote in a statement posted on the company’s website over the weekend. Tens of millions of discrete IP addresses used in the attack were associated with Mirai, the software that automatically detects poorly secured IoT devices and enlists them into a botnet used to conduct a massive-scale DDoS attack.

    AWS Cloud Hit in US and Ireland

    Some Amazon Web Services customers whose infrastructure is hosted in Amazon’s Northern Virginia data centers (the Amazon cloud’s largest data center cluster) could not reach “a small number of AWS endpoints” in the early hours of the morning Eastern Time, when the first attack, directed at Dyn’s East Coast data centers, took place. The second attack, which was more global in nature, had a similar impact on AWS users hosting applications in Amazon data centers in Ireland. There was a third attack that day, according to Dyn, but the company was able to prevent it from affecting customers.

    A summary of the incident is posted on the AWS Service Health Dashboard. An Amazon spokesperson pointed us to the summary in response to a request for comment. The company didn’t point to Dyn specifically, saying only that the errors resolving DNS hostnames for some AWS endpoints were caused by an “availability event” with one of its third-party DNS service providers. AWS uses several such providers, in addition to its own DNS service, Amazon Route 53.
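
    The value of spreading a zone across more than one managed DNS provider is easiest to see from the resolving side. A minimal sketch, assuming the dnspython package and placeholder nameserver addresses (documentation IPs, not Dyn’s or Amazon’s actual servers), queries the same hostname against each provider in turn:

        # Query one hostname against several DNS providers' nameservers; a zone
        # served by more than one provider stays resolvable if one is knocked out.
        # The IPs below are placeholder documentation addresses, not real servers.
        # Requires the dnspython package.
        import dns.resolver

        PROVIDERS = {
            "provider-a": ["198.51.100.10"],   # hypothetical
            "provider-b": ["203.0.113.20"],    # hypothetical
        }

        def resolve_via(hostname, nameservers):
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = nameservers
            resolver.lifetime = 2.0  # give up quickly if a provider is unreachable
            return [rr.address for rr in resolver.resolve(hostname, "A")]

        for name, servers in PROVIDERS.items():
            try:
                print(name, resolve_via("example.com", servers))
            except Exception as exc:
                print(name, "failed:", exc)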

    Asked why other cloud providers managed to avoid being hit last week, a spokesperson for the cybersecurity and intelligence firm Flashpoint said it was “because AWS is the only major cloud provider that heavily relies on Dyn for its infrastructure services.”

    Microsoft and Google cloud status dashboards did not show any disruptions during the attacks on Dyn. A Microsoft spokesperson declined to comment, while a Google spokesperson said there had been no Google Cloud Platform disruptions in connection with the incident.

    IBM PaaS Users Affected

    IBM, it appears, is another provider that relies on Dyn for at least one of its cloud services. Depending on who you ask, IBM is or isn’t one of the top cloud providers. While it has an extensive cloud services business, its cloud revenue doesn’t come close to the amount of money AWS reels in each quarter, which often causes it to be excluded from discussions about top cloud providers.

    Users of Bluemix, IBM’s Cloud Foundry-based Platform-as-a-Service, experienced DNS resolution issues in Australia, the US, and Europe within the timeframe of the attack on Dyn, according to updates on the Bluemix System Status dashboard. One of the updates pointed to Dyn’s website for details on the incident.

    The health dashboard for IBM’s Watson IoT service does not list any issues for October 21. SoftLayer, IBM’s IaaS cloud, doesn’t offer a status dashboard for non-customers. An IBM spokesperson did not respond to a request for comment.

    Rackspace Sticks to DDoS Best Practices

    Rackspace, another Infrastructure-as-a-Service cloud provider, also was not directly affected by the attacks. “Of course, as customers of companies that leverage Dyn, our customers and Rackers alike experienced degraded connectivity and/or weren’t able to access many prominent websites on Friday,” a Rackspace spokesperson wrote in an email.

    Because there was no direct impact on its infrastructure, Rackspace is not planning to take any extra measures specifically in the incident’s aftermath, she said. The company conducts regular bandwidth assessments and upgrades to ensure it can withstand DDoS attacks.

    “As a best practice for potential DDoS attacks, we leverage global redundancy, high bandwidth, and both internal and external mitigation systems to protect Rackspace’s infrastructure and our ability to provide authoritative DNS services to customers.”

    How to Fight IoT Botnets?

    But extra measures will most likely be needed, as connected devices continue to proliferate, offering hackers a way to build DDoS botnets of unprecedented scale. Armed with these IoT botnets, bad actors can inflict ever more damage and launch attacks at a higher rate than before, Gartner’s Oran warned. Nothing suggests that an attack on Dyn today couldn’t be followed by an attack on UltraDNS or another service provider tomorrow, he said.

    Cloud companies and others who provide internet infrastructure services will have to invest more in bandwidth, DDoS mitigation equipment, and experts to address new attack capabilities, Oran said. “That’s what they need to do to mitigate impact from these types of attacks.”

    3:00p
    When is a Cloud Not a Cloud? When it’s a Single-Tenant Hosted Product

    Charlie Oppenheimer is CEO of Loggly.

    Not all “clouds” are created equal – or considered clouds at all, for that matter. With all due respect, single-tenant hosted products are one such instance. Just because a traditional software product is hosted by a vendor doesn’t make it the equivalent of SaaS.  Let’s face it – it’s not uncommon for successful licensed software companies that focus on operational intelligence or enterprise compliance and security to zig and zag as they evolve their business models to the cloud. Neither is it uncommon for them to maximize their best attributes in their marketing materials.

    The difference between SaaS and a single-tenant hosted software “cloud,” however, is an important one. If you’re looking for a solution that offers the key benefits of a modern SaaS product, hold out for a provider whose underlying architectural model delivers a true cloud offering. Your first reaction might be, “Who cares? Hosted software seems like SaaS as far as the user is concerned.” But here are three reasons why customers should care about their “cloud” provider’s underlying model.

    Single-Tenant Architecture Can’t Adapt at SaaS Speed

    One of the biggest advantages a SaaS company has over a licensed software company is that development cycles are much faster and run in a tight, iterative feedback loop with customers. While licensed software products tend to deploy new versions on an annual or perhaps twice-a-year basis, a SaaS company deploys new “versions” constantly with new iterations every few days.

    When a SaaS company deploys new capabilities, they’re measuring the results in real time. They can see what’s working or what might be confusing, watch overall usage patterns and performance, and rapidly adjust to that feedback. In fact, A/B testing of alternative implementations is a foundational strategy SaaS companies employ to allow risk-taking and faster innovation. If an improvement doesn’t immediately work out as hoped, a new iteration can be quickly deployed.
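
    A minimal sketch of the deterministic bucketing behind such A/B tests, with illustrative names and percentages:

        # Deterministic A/B bucketing: the same user always lands in the same
        # variant, and a rollout percentage controls how much traffic sees the
        # new implementation. Names and percentages are illustrative.
        import hashlib

        def variant(user_id, experiment, rollout_pct=50):
            digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
            bucket = int(digest[:8], 16) % 100
            return "B" if bucket < rollout_pct else "A"

        # Route 10 percent of traffic to the new implementation; if metrics
        # regress, drop rollout_pct back to zero and redeploy within hours.
        print(variant("user-42", "new-search-ui", rollout_pct=10))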

    In contrast, a single-tenant hosted product like Splunk Cloud can’t deploy new capabilities any faster than the licensed software product does, because the software and hosted offerings are the same. Software customers can’t possibly handle new versions every few days because of the operational overhead. They don’t want to dedicate operational resources to constantly staging the next release, testing it, deploying it to production, and starting all over again. This forces licensed software companies to batch up changes and slow the pace of innovation to the rate at which their customers can migrate.

    SaaS companies are built to constantly stream new software into products with no interruption or overhead for customers. A new feature can appear on a product page without a customer even having to think about it. Obviously, bigger changes are noticeable (by design), but even then SaaS companies do everything possible to avoid requiring customers to change how they work unless they want to.

    Single-Tenant Architecture Limits User Behavior Analysis

    There is a related and subtler problem with merely hosting on-premises software. SaaS products are built with the assumption that every aspect of the product’s usage and interactions is measured constantly and in aggregate, so that anonymity is preserved. These measurements are monitored on scores of dashboards, alerts, and ad hoc analyses.

    Licensed software products, by contrast, are not built with this kind of monitoring, because customers won’t have it. Does an F500 company license software with the idea that everything it does is reported back to the vendor? Of course not. Certain high-level summary statistics can be reported back, but there’s no good way to send low-level details back anonymously, because they come from a single, uniquely identified source. When single-tenant architectures are hosted, they have the same problem as licensed software products.

    Again, SaaS products are multi-tenant by design, so the dashboards and analyses are measured at a cluster level, aggregating many customers without having to watch what individual customers are doing.
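
    As a rough illustration of what “measuring at the cluster level” means in practice, here is a sketch that rolls usage events up into anonymous counters; the field names are assumptions, not any vendor’s actual schema:

        # Roll per-tenant usage events up into cluster-level counters, dropping
        # tenant identifiers so dashboards see aggregate behavior only.
        from collections import Counter

        def aggregate(events):
            return Counter(e["feature"] for e in events)

        events = [
            {"tenant": "t1", "feature": "search"},
            {"tenant": "t2", "feature": "search"},
            {"tenant": "t2", "feature": "export"},
        ]
        print(aggregate(events))  # Counter({'search': 2, 'export': 1})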

    Single-Tenant Architectures Limit Economies of Scale

    There are other problems with hosting on-premises software. From a capacity management standpoint, for example, single-tenancy means that the capacity allocated to that tenant is all that is available, and any extra capacity made available to that one customer must be paid for.

    On the other hand, a multi-tenant SaaS product enjoys the benefits of the law of averages. They can allocate far more capacity than any individual customer would need while amortizing the cost over all of them. At any point in time, some customers are using less than their subscription capacity, while others are using more. But through years of experience, they can plan capacity so that individual customer spike requirements can be absorbed easily and cost efficiently.
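
    A quick simulation makes the law-of-averages point concrete; the demand model below is an illustrative assumption, not real customer data:

        # Toy simulation: the capacity needed to absorb every tenant's individual
        # peak is far larger than the peak of the aggregate load, because tenants
        # rarely spike at the same time.
        import random

        random.seed(1)
        TENANTS, HOURS = 200, 24 * 30

        # Each tenant usually idles near 1 unit but occasionally spikes to 10.
        demand = [[10 if random.random() < 0.02 else 1 for _ in range(HOURS)]
                  for _ in range(TENANTS)]

        sum_of_peaks = sum(max(tenant) for tenant in demand)
        peak_of_sum = max(sum(tenant[h] for tenant in demand) for h in range(HOURS))

        print("capacity if each tenant is provisioned alone:", sum_of_peaks)
        print("capacity for the shared, multi-tenant pool:  ", peak_of_sum)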

    The SaaS Differences Add Up

    A single-tenant hosted product will never match the advantages of a modern multi-tenant SaaS product. The economics benefit customers and the vendor alike. Perhaps more importantly, a product designed to be licensed and used by a single company can’t offer the key benefits of fast, iterative development, behavior analysis, and scalability that buyers are increasingly coming to expect. That doesn’t mean there isn’t room for both in this world; it simply means that an on-premises company delivers a different value proposition than a SaaS product does. It’s important to know which one you need.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:46p
    What MSPs Should Learn from the Dyn Internet Attack
    Brought to you by MSPmentor


    What can MSPs learn from last week’s DDoS attack against Dyn, which brought dozens of major websites to a crawl? It’s simple: keep users secure by changing the default access credentials on the networks and infrastructure MSPs manage.

    In case you missed it, here’s what happened: On Friday, Oct. 21, a Distributed-Denial-of-Service, or DDoS, attack against Dyn’s DNS services caused a number of big-name websites to load content very slowly or not at all. The attack was made possible because a large number of “smart” devices — meaning things like Internet-connected thermostats and cameras — were taken over by malicious hackers. The hackers then used the devices to send a flood of traffic to Dyn’s servers. As the servers grew overwhelmed with bogus requests, they stopped responding to legitimate DNS queries.
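
    A toy model of that overload mechanism, with purely illustrative numbers, shows why legitimate queries stop getting answered once bogus traffic consumes a resolver’s capacity:

        # Toy model: a resolver that can answer a fixed number of queries per
        # second starts dropping legitimate queries as bogus botnet traffic
        # consumes its capacity. All numbers are illustrative assumptions.
        CAPACITY_QPS = 1_000_000

        def answered_legit(legit_qps, bogus_qps):
            total = legit_qps + bogus_qps
            if total <= CAPACITY_QPS:
                return legit_qps
            # Under overload, capacity is shared in proportion to offered traffic.
            return int(CAPACITY_QPS * legit_qps / total)

        for bogus in (0, 5_000_000, 50_000_000):
            print(f"bogus={bogus:>11,}  legit answered: {answered_legit(200_000, bogus):,}")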

    In the days following the Dyn attack, the Internet has been awash with warnings about how the Internet of Things, or IoT, poses a huge new security threat because IoT devices could be easily leveraged for repeated DDoS incidents like the one last week. The implication is that IoT device manufacturers have not invested in proper security for their hardware, or even that we should not be connecting so many things to the Internet in the first place.

    Read more: Cloud Giants Likely to Beef Up Bandwidth to Fight IoT Botnets

    See also: Hackers Take Down Sites From New York to LA in Web-Host Siege

    The Lesson for MSPs

    Yet device vendors do not deserve most of the blame. What made the Dyn attack possible was not that devices lacked proper security features, but rather that those devices were secured with default credentials. The attackers apparently took control of the devices using a malware program called Mirai, which defeats security controls by guessing username and password combinations based on those that devices are known to use by default.

    Preventing break-ins like these is the job not just of device manufacturers, but also of the companies that install and service their devices. Yes, device manufacturers should avoid shipping the same usernames and passwords on every device they make. But service providers should make sure they change the default logins when they set up a device, and they should update login information periodically to blunt brute-force attacks, which are likelier to succeed if passwords never change and attackers can spend a long time trying every possible password until they stumble on the right one.
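
    A minimal sketch of that practice, generating a strong, unique password per managed device; the device-update call is a hypothetical placeholder, since the real mechanism depends on the device and management tooling in use:

        # Generate a strong, unique password per managed device instead of leaving
        # the factory default, and record when it was last rotated. apply_password
        # is a hypothetical placeholder for whatever provisioning API a device
        # exposes; the new password should go into a proper secrets store.
        import secrets
        import string
        from datetime import datetime, timezone

        ALPHABET = string.ascii_letters + string.digits

        def new_password(length=20):
            return "".join(secrets.choice(ALPHABET) for _ in range(length))

        def rotate(device_id, apply_password):
            pw = new_password()
            apply_password(device_id, pw)  # hypothetical device call
            return {"device": device_id,
                    "rotated_at": datetime.now(timezone.utc).isoformat()}

        # Example: rotate a (fake) camera using a stub setter.
        print(rotate("cam-001", lambda dev, pw: None))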

    This problem is not restricted to IoT devices, by the way. Security vulnerabilities resulting from default login credentials have been a danger on traditional computing devices for many years. For example, Windows XP famously allowed logins under the administrator account with no password unless that setting was changed after installation. Similarly, default access credentials on many devices that use SNMP, a common networking protocol, were one of the gravest security threats to network switches and routers before the introduction of newer versions of SNMP.

    If you’re an MSP, then, the lesson is simple. Don’t count on device manufacturers to make the hardware (or, for that matter, software) that you deploy and manage secure by default. And you shouldn’t leave it up to your users to secure themselves, either. Part of the value you provide as an MSP is knowing about and resolving security threats like the one that caused the Dyn outage. This is a lesson that will grow only more important as the IoT continues to expand and IoT devices become a more common part of the infrastructure that MSPs help to manage.

    This first ran at http://mspmentor.net/technologies/what-msps-should-learn-dyn-internet-attack

    6:02p
    Cockcroft, the Man Behind Netflix’s Move to AWS, Joins AWS

    Adrian Cockcroft, who was one of the key architects of Netflix’s move out of its own data centers and onto Amazon’s cloud, has joined Amazon as VP of Cloud Architecture.

    Part of his role will be advising Amazon Web Services customers on their cloud architecture. In a way, helping AWS users make wise choices about running in the cloud is something he’s been doing already.

    Not only was Cockcroft instrumental in migrating the popular video streaming service to a cloud-native architecture, he was also a key person behind NetflixOSS, a collection of open source software other companies can use to build resilient cloud infrastructure at scale the way Netflix built its own.
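
    For a flavor of the resilience patterns NetflixOSS popularized, here is a minimal Python sketch of a circuit breaker in the style of Hystrix, one of the NetflixOSS components; the thresholds and timings are illustrative assumptions, not Netflix’s defaults:

        # Circuit-breaker sketch: after repeated failures, calls fail fast to a
        # fallback instead of hammering a struggling dependency. Thresholds are
        # illustrative assumptions.
        import time

        class CircuitBreaker:
            def __init__(self, max_failures=3, reset_after=30.0):
                self.max_failures = max_failures
                self.reset_after = reset_after
                self.failures = 0
                self.opened_at = None

            def call(self, fn, fallback):
                if self.opened_at and time.time() - self.opened_at < self.reset_after:
                    return fallback()                 # circuit open: fail fast
                try:
                    result = fn()
                    self.failures, self.opened_at = 0, None
                    return result
                except Exception:
                    self.failures += 1
                    if self.failures >= self.max_failures:
                        self.opened_at = time.time()  # trip the breaker
                    return fallback()

        breaker = CircuitBreaker()
        print(breaker.call(lambda: 1 / 0, fallback=lambda: "cached response"))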

    Read more: Netflix Shuts Down Final Bits of Own Data Center Infrastructure

    “AWS customers around the world are building more scalable, reliable, efficient and well-performing systems thanks to Adrian and the Netflix OSS effort,” Amazon CTO Werner Vogels wrote in a blog post announcing Cockcroft’s appointment.

    Adrian Cockcroft, VP of Cloud Architecture, Amazon Web Services (Photo: Amazon)


    Providing more hands-on advice to big customers is a growing focus for cloud providers who are coveting the lucrative market for big enterprise infrastructure services. This is something legacy enterprise IT vendors, such as IBM and Hewlett-Packard Enterprise, have been doing for many years.

    Google, which is on a mission to demonstrate that it can become a major enterprise cloud player, recently rolled out an unusual program under which it will embed its own infrastructure engineers with its cloud customers’ IT teams to help them use Google’s cloud better. The first test of this program was the launch of the wildly popular augmented-reality mobile game Pokémon Go.

    Read more: Here’s Google’s Plan for Calming Enterprise Cloud Anxiety

    In addition to advising customers, Cockcroft, who most recently worked as technology fellow at the well-known Silicon Valley venture capital firm Battery Ventures, will work with AWS execs and product groups and engage with developers in open source communities the company supports.

    Prior to Netflix, he worked in high-level engineering roles at eBay and Sun Microsystems.

    See also: VMware Gives AWS Keys to Its Enterprise Data Center Kingdom

    7:16p
    These Data Center Providers Use the Most Renewable Energy

    Digital Realty Trust uses more renewable energy than any other data center provider, followed by Equinix, according to the US Environmental Protection Agency.

    Companies that use providers like Digital and Equinix are increasingly interested in data center services powered by renewable energy, partly because of their own corporate sustainability programs and partly because energy generated by sources like wind and solar has gotten a lot cheaper in recent years. In response, the providers have been sourcing more renewables to address the demand.

    A recent survey of retail colocation and wholesale data center services customers, conducted by Data Center Knowledge, found that 70 percent of these users consider sustainability issues when selecting data center providers.

    Equinix has been on EPA’s list since October of last year. However, this is the first time Digital has been included, following its announcement in July of a wind power purchase agreement to offset energy consumption of its entire retail colocation business in the US.

    Read more: How Renewable Energy is Changing the Data Center Market

    Digital Realty was the sixth-largest user of renewable energy on the latest edition of EPA’s quarterly list of users in the tech and telecom sector. The list pegged Digital’s total annual use of renewables at about 400 million kWh, or 16 percent of the company’s total energy consumption for the report period.

    The report uses annualized contracted energy purchase figures rather than energy use per calendar year.

    Equinix was seventh on the list, having contracted for about 306 million kWh – 25 percent of its total energy use – consisting of wind and on-site generation fueled by biogas. The company has deployed Bloom Energy fuel cells in at least one of its locations in Silicon Valley for on-site generation.

    Rackspace is ninth on the list after having used about 114 million kWh of wind energy, or 36 percent of its total consumption.
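
    The reported percentages imply each provider’s total consumption; a one-line calculation makes the relationship explicit, using the figures above:

        # Implied total energy use = contracted renewable kWh / renewable share.
        def implied_total_kwh(renewable_kwh, renewable_share):
            return renewable_kwh / renewable_share

        print(f"Equinix:   {implied_total_kwh(306e6, 0.25):,.0f} kWh")  # ~1.22 billion
        print(f"Rackspace: {implied_total_kwh(114e6, 0.36):,.0f} kWh")  # ~317 million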

    See also: Here’s How Much Energy All US Data Centers Consume

    Other data center providers on the list are Green House Data, which used 16 million kWh, and vXchnge, which used 14 million kWh. Iron Mountain is also on the list, although data center services are a relatively small portion of its business.

    The amount of renewable energy data center providers consume pales in comparison to some of the big tech names at the top of the list. Intel, the leader, uses about 3.42 billion kWh of renewable energy, followed by Microsoft with 2.7 billion kWh, Cisco with slightly over 1 billion kWh, and Google, whose renewable energy use is close to Cisco’s. Apple is fifth on the list, with annual renewable energy use of 830 million kWh.

    You can see the EPA’s full Green Power Partnership Top 30 Tech and Telecom list here.

    8:28p
    Report: OpenStack Deployments Move Beyond Test and Dev
    Brought to You by Talkin' Cloud


    OpenStack deployments are getting bigger and are being used for all kinds of enterprise workloads, a new study by 451 Research says.

    Released on Tuesday at the OpenStack Summit in Barcelona, the study finds that OpenStack is not limited to large enterprises: 65 percent of respondents work in organizations of between 1,000 and 10,000 employees. OpenStack noted earlier this year, while addressing common misconceptions about the platform, including the notion that only large enterprises use it, that dozens of mid-sized organizations have come on board.

    According to the study, OpenStack users are adopting containers at a faster rate than other enterprises, with 55 percent of OpenStack users also using containers, compared to 17 percent across all enterprises.

    OpenStack supports workloads including infrastructure services (66 percent), business applications and big data (60 percent and 59 percent, respectively), and web services and ecommerce (57 percent).

    “Our research in aggregate indicates enterprises globally are moving beyond using OpenStack for science projects and basic test and development to workloads that impact the bottom line,” Al Sadowski, research vice president with 451 Research, said in a statement.

    The majority of OpenStack users are in the technology industry (20 percent), while manufacturing accounts for 15 percent, retail/hospitality for 11 percent, and professional services for 10 percent. Healthcare, insurance, transportation, communications/media, wholesale trade, energy and utilities, education, financial services and government account for the remainder of OpenStack users, the report says.

    451 Research said that enterprises cite increasing operational efficiency (76 percent) and accelerating deployment speed (75 percent) as top drivers for OpenStack adoption.

    This first ran at http://talkincloud.com/cloud-computing/report-openstack-deployments-move-beyond-test-and-dev

