Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, March 14th, 2017

    Time Event
    12:00p
    IoT Spells Trouble for Data Center Security, Networks

    The Internet of Things has gone from a concept not many people grasped clearly to a tangible, living and breathing phenomenon on the verge of changing the way we live—and the way data centers strategize for the future.

    At the very least, data center managers better develop new strategies for handling the IoT and all the data that could overwhelm current systems.

    What does that volume of data look like? In the past five years, traffic volume has already increased five-fold, and according to a 2015 study by Cisco, annual global IP traffic will pass a zettabyte and reach 1.6 zettabytes by 2018. Non-PC devices—expected to number twice the global population by that year—will generate more than half of that traffic.

    That spells trouble with a capital “T”. The global growth of data is creating the need for wider information networks and tightened security controls. Each new IoT device potentially creates a new point of vulnerability.

    Next month, in a session at Data Center World titled Data Centers and IoT: There’s No Such Thing as a Free Lunch, Chris Crosby, CEO of Compass Datacenters, will identify and discuss the problems current networks pose for the IoT. He will also present a framework for planning IoT implementation from a security perspective, as well as discuss the emerging security model that can enable IT to maintain network security while increasing the scope of IT implementations.

    From a data center operations perspective, IoT translates into billions of tiny packets from billions of devices. Just a few short years ago we would have called that a denial-of-service attack; now data center professionals must develop infrastructures able to process this information in real time, or it loses its value, Crosby explained.

    For example, he referred to how a company’s IoT-based, just-in-time inventory system would suffer serious consequences if there were very long delays in its ability to track the location and volume of component parts.

    In order to prevent such delays, Crosby sees growth in more stratified structures in which data, and its processing component, are moving as close to user groups as possible in terms of edge and (small but growing) micro data centers.

    “IoT is outstripping the capability of many in-place data centers and driving the evolution to more stratified architectures,” he said.

    You might recall a recent and very real-world illustration of a cyberattack that harnessed the massive scale of IoT: on Oct. 21, 2016, many of the 3 billion internet-connected people across the globe couldn’t access social networks, download movies, or do much of anything thanks to a DDoS attack. This attack was unlike others.

    A DDoS, or Distributed Denial of Service, attack is usually achieved when hackers bombard a server with so many requests in such a short amount of time that it simply crashes. It’s no different from a site crashing under too little bandwidth and too much traffic, except that here it’s done intentionally. At a large enough scale, even the biggest servers, across the widest networks, with the best cybersecurity software in place can fall victim.
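    The mechanics described above can be illustrated with a toy model (not from the article, and deliberately simplified): a server that can serve a fixed number of requests per second, where anything beyond capacity is dropped. Once the request rate exceeds capacity, legitimate and malicious traffic are dropped alike, which is exactly why a flood "crashes" a site from the user's point of view.

```python
# Toy model: a server with a fixed per-second capacity. Requests beyond
# capacity are dropped -- the server cannot tell flood traffic from real users.

def serve(request_rates, capacity_per_sec=1000):
    """Return (served, dropped) totals for a list of per-second request rates."""
    served = dropped = 0
    for rate in request_rates:
        served += min(rate, capacity_per_sec)
        dropped += max(0, rate - capacity_per_sec)
    return served, dropped

# Normal traffic: well under capacity, everything is served.
print(serve([200, 300, 250]))   # (750, 0)
# Flood: ten times capacity -- the vast majority of requests are dropped.
print(serve([10_000, 10_000]))  # (2000, 18000)
```

    The capacity figure is illustrative; the point is only that beyond saturation, adding defenses at the server itself cannot help, which is why large-scale DDoS mitigation happens upstream in the network.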

    One reason the hackers were able to affect so many websites is that they targeted a DNS (Domain Name System) provider, in this case a company called Dyn. Knocking out a single provider that resolves domain names for thousands of sites is what made such a wide-scale outage possible.
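    To see why taking out a DNS provider has such leverage, note that every web request begins with a name lookup: the hostname must resolve to an IP address before any connection can be opened. A minimal sketch of that dependency, using Python's standard library:

```python
# Every connection starts with DNS resolution. If the DNS provider answering
# for a domain is knocked offline, clients cannot resolve the name -- the site
# is effectively "down" even though its own servers are perfectly healthy.
import socket

def resolve(hostname):
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        # Resolution failed: there is no IP address to connect to.
        return None

print(resolve("example.com"))           # an IP address, if resolution succeeds
print(resolve("no-such-host.invalid"))  # None -- nothing to connect to
```

    During the Dyn outage, sites like Twitter and Netflix were in exactly the second case for many users: their servers were up, but the lookup step failed.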

    That’s not the first time a DNS provider has been targeted and it probably won’t be the last.

    And, while DDoS attacks have been around for quite some time, this latest one that brought down the likes of Amazon, Spotify, Netflix, PayPal, Twitter, and many others, had a new and very troubling nuance. Experts believe hackers tapped into all those intelligent devices connected to the Internet (IoT) to help pull off the massive outage.

    The attack on Dyn was unique in that IoT devices – including internet-facing cameras, home routers, baby monitors, and more – made up the tens of millions of infected IP addresses conscripted into a malware-based botnet called Mirai and then used to attack Dyn’s network of servers. Mirai spread by scanning the internet for the millions of devices that are poorly guarded, rarely patched, and easy to commandeer with their default or easy-to-guess passwords. And there are a lot of IoT devices out there, and a lot of companies working on creating even more.

    But, the real story isn’t about the titans of the industry who were taken down in this attack – it’s about everyone else. Millions of other smaller domains were in this tsunami-sized path of digital destruction and businesses got crushed. Despite the associated risks, almost every CIO reading about the attack likely figures that these hackers “only go after the big guys” or “our company isn’t famous enough to get on a hacker’s radar” – think again.

    A mid-year 2015 study by HP reported that of the 10 home-based devices it tested (including door locks, thermostats and TVs), 80 percent didn’t require strong passwords and 70 percent had security holes. In fact, the devices—some of which will be used in industrial settings—averaged 25 security flaws each.
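    The weak-password finding above is the kind of exposure a basic credential audit catches. Here is a minimal sketch (device names and the default list are hypothetical) of checking an inventory against the factory-default credentials that botnets like Mirai try first:

```python
# Sketch of a default-credential audit: flag any device still using a
# factory default username/password pair. The list here is illustrative;
# real audits use published default-credential databases.
COMMON_DEFAULTS = {("admin", "admin"), ("root", "root"),
                   ("admin", "1234"), ("user", "user")}

def audit(devices):
    """devices: list of (name, username, password). Return names still on defaults."""
    return [name for name, user, pw in devices if (user, pw) in COMMON_DEFAULTS]

inventory = [
    ("lobby-camera", "admin", "admin"),      # still on defaults -- flagged
    ("baby-monitor", "root", "s8#kQ!92x"),   # changed password -- passes
    ("home-router", "admin", "1234"),        # still on defaults -- flagged
]
print(audit(inventory))  # ['lobby-camera', 'home-router']
```

    Simple as it is, a check like this would have flagged most of the devices that Mirai commandeered.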

    Keep in mind, too, that this group of hackers wasn’t going specifically after money, or ransom, or personal identifications; they simply did it to upset the proverbial apple cart—and that they did. Internet outages still disrupt business and can be very costly.

    According to Kaspersky’s “Global IT Security Risks Survey 2015 – DDoS Attacks” report, a single attack causes average damage of $52,000 to $444,000, depending on company size. Less quantifiable injuries include reputational damage and temporary loss of access to critical business information. Nearly 40 percent of those affected couldn’t perform their core functions. Additionally, one-third of the companies surveyed told Kaspersky they lost contracts and opportunities because of the attacks. Almost as many saw their credit rating decline, and 26 percent reported increased insurance premiums.

    So we’ve got nothing short of a crisis on our hands, one even bigger than originally suspected, and virtually no ceiling on what companies across every industry, in both the private and public sectors, can spend on securing our businesses, personal lives, and national security.

    In 2015, companies spent $75 billion on cybersecurity and lost $300 billion. According to Markets and Markets, IT security spending will soar to $101 billion in 2018 and hit $170 billion by 2020.

    Data Center World – Global 2017 runs from April 3-6 at the Los Angeles Convention Center. For more information on the event and a detailed look at the educational sessions, visit datacenterworld.com.

    A version of this article originally appeared on AFCOM.

    3:00p
    Africa’s Largest Data Center Firm Raises $91M for Growth

    (Bloomberg) — Teraco Data Environments, which says it is the largest provider of data-center services in Africa, said it raised 1.2 billion rand ($90 million) from South African lender Barclays Africa Group Ltd. to invest in information-technology infrastructure on the continent.

    The closely held business will use some of the cash to complete the construction of a new data center in eastern Johannesburg by the end of the year, according to a statement e-mailed by the company on Tuesday.

    Barclays Africa “understands our unique business model and the associated infrastructure funding requirements and timelines,’’ Chief Financial Officer Jan Hnizdo said. “The new site will be the largest commercial data center in Africa.’’

    Teraco is investing to meet higher demand for data services in Africa as internet access improves and businesses adopt cloud-based technology. Wireless operators including South African market leaders MTN Group Ltd. and Vodacom Group Ltd. are seeing higher growth rates in data sales than traditional voice revenue.

    3:30p
    Amazon to Add Another Bit of Ireland to Data Center Portfolio

    After the recent four-hour Amazon Web Services (AWS) outage, Amazon probably wouldn’t mind some “luck of the Irish” now that it has plans to build a $213 million data center campus in Dublin.

    And, it might give the Irish a little more to celebrate on St. Patrick’s Day this Friday, with the company saying that it expects 400 workers to be onsite during peak construction.

    In reality, luck of course has nothing to do with Amazon’s choice to add a 223,000-square foot facility to a data center portfolio already closely tied to Ireland. Analysts estimate that the company has invested more than $1 billion in Irish operations since 2004, with multiple data centers in Blanchardstown, Clonshaugh, and Tallaght. Amazon is also building a data center next to Dublin Airport.

    See also: N. Virginia Landgrab Continues: Next Amazon Data Center Campus?

    These data centers serve two purposes for the company: They host web retail services, while AWS uses them to offer data hosting to business customers globally. AWS alone created over $11 billion of revenue for the company in 2016, according to CloudPro.

    The company says it expects the data center, codenamed “Project G”, to take about 18 months to complete with groundbreaking starting this year. Amazon recently asked permission from the Fingal County Council to build it. With an answer still to come, the Amazon camp certainly hopes it won’t run into a roadblock like Apple did last year.

    Amazon also said seven smaller data centers on the 64-acre, IDA-owned site might be in the works as it looks to convert the space into a data-storage facility campus. IDA Ireland oversees foreign business interests in the country.

    Often called the “Data Center Capital of Europe,” Dublin is a draw for many of technology’s behemoths: Google, Microsoft, Apple, and Facebook all have data centers there.

    4:00p
    Bracketology: March Madness Lessons in Agile Database Management

    Mike Kelly is CTO of Blue Medora and GM of SelectStar.

    Every year, one of my colleagues takes personal leave on the first day of March Madness to watch the basketball games on television. She’s a University of Minnesota alumna, and this year the Gophers are projected to be one of the 68 teams competing. All over the country, tournament brackets will be filled out by people who will predict their Final Four teams and go nuts as at least one favorite won’t make it past the first or second round.

    I’m no bracketologist, but as a casual basketball fan and serious technologist, I have observed that the best teams don’t just score; they also play great team defense, communicate with each other on and off the court, move the ball around, and execute great plays after timeouts. You can see where I’m going with this, can’t you?

    If we can draw lessons in agile database management from March Madness, my bracket would include the top ways to increase uptime and performance for business-critical databases. Like the granny shot, traditional database management is decades behind. As companies shift to cloud and adhere to DevOps principles to deliver their applications to market, they need their own Final Four of monitoring best practices to implement agile database management.

    Run a Zone Defense

    Rather than play man-to-man defense — deploying separate monitoring tools for each database — companies should centralize database monitoring and play zone to cover everything from on-premises to cloud infrastructure. Most organizations today use dozens of applications. To bring those applications to production might require more than one instance of a database for dev, test and production. Different applications may run best on different databases – from traditional SQL Server to open-source PostgreSQL and NoSQL or even Hadoop and Cassandra.

    By unifying monitoring for multiple in-production databases for virtualized, cloud or distributed scale-out environments, you can save time and money with the ability to manage different databases in a single tool. Each database instance appears in the centralized interface with the same look and feel, as well as the deep-dive metrics that are gleaned from each database. Thus, it’s easier to pull and compare metrics to understand where database solutions may be underperforming. Having insight into these key metrics can help you leverage a zone defense into a high-performing operation with the flexibility, scalability and reliability that you need to win.
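    The "one tool, many databases" idea above can be sketched in code. This is a hypothetical design (class and metric names are illustrative, not from any specific product): each database type implements the same metrics interface, so a single dashboard can poll, say, PostgreSQL and Cassandra instances side by side and compare like with like.

```python
# Sketch of unified database monitoring: one interface, many backends.
# Metric values are stubbed; a real implementation would query pg_stat_*
# views for Postgres and JMX/nodetool counters for Cassandra.
from abc import ABC, abstractmethod

class DatabaseMonitor(ABC):
    def __init__(self, instance_name):
        self.instance_name = instance_name

    @abstractmethod
    def metrics(self):
        """Return a uniform dict of key health metrics."""

class PostgresMonitor(DatabaseMonitor):
    def metrics(self):
        return {"connections": 42, "cache_hit_ratio": 0.97, "replication_lag_s": 0.3}

class CassandraMonitor(DatabaseMonitor):
    def metrics(self):
        return {"connections": 120, "cache_hit_ratio": 0.88, "replication_lag_s": 1.1}

def dashboard(monitors):
    """One centralized view: the same look and feel for every database type."""
    return {m.instance_name: m.metrics() for m in monitors}

fleet = [PostgresMonitor("orders-db"), CassandraMonitor("events-db")]
for name, stats in dashboard(fleet).items():
    print(name, stats)
```

    Because every backend emits the same metric keys, spotting the underperforming instance is a dictionary comparison rather than a tour through a dozen vendor consoles.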

    Share the Ball

    In basketball as in IT, don’t be a ball hog. Give your DBA access to infrastructure operations data, and give your IT admin insight into database health. In addition to managing multiple types of databases, DBAs have to keep an eye on the infrastructure that supports these databases – in data centers and the public cloud. For most, diagnosing infrastructure issues means managing multiple tools or working with other teams to understand if infrastructure details like storage usage, CPU utilization, and bandwidth are slowing database performance. Virtualization-aware relationship mapping saves days of troubleshooting by pinpointing issues faster and more accurately, even if the databases are deployed in different environments. By providing equal visibility to consolidated database and infrastructure data you’re eliminating blind spots. No matter their role on the team, players don’t have to beg for the data to do their jobs, they just get it whenever they need it. Constant insight equals consistent, high performance.

    Stay in Your Lane But Switch as Needed

    Eliminate your need to be an expert in everything by relying on automated notifications and analytics. DBAs can focus on optimizing queries, without having to be DevOps pros or virtualization experts. As your business grows and leverages more tools and applications, your database team’s attention gets diluted further in an effort to support these applications, bleeding resources away from what’s most important to both your customer experience and the bottom line. This problem can be compounded by the need to cross-train team members on new systems, legacy environments, and the multitude of tools used to maintain and monitor them. In practice, this creates an impossible task for the database team to figure out where exactly a performance issue is rooted.

    Instead, you can use a system of auto-generated software recommendations and alerts to flag areas that need attention before they compromise end-user performance, such as incorrect security settings or missing backups. Recommendations can take it one step further, providing insight into how to prevent alerts from triggering again and how to modify parts of your environment to optimize performance. Consolidating platforms also enables you to manage all alerts in a single location or easily disable alerts for your development environment or other less-critical resources.
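    The alerting pattern just described can be sketched as threshold checks with per-environment muting (thresholds, field names, and instance data below are illustrative assumptions, not from any specific tool):

```python
# Sketch of threshold-based alerting with environment-level muting:
# flag problems such as high CPU or overdue backups before users notice,
# while keeping dev/test noise out of the on-call queue.
THRESHOLDS = {"cpu_pct": 90, "hours_since_backup": 24}
MUTED_ENVIRONMENTS = {"dev"}

def check(instance):
    """Return a list of alert strings for one database instance."""
    if instance["env"] in MUTED_ENVIRONMENTS:
        return []  # less-critical environment: alerts disabled
    alerts = []
    for metric, limit in THRESHOLDS.items():
        if instance.get(metric, 0) > limit:
            alerts.append(f"{instance['name']}: {metric}={instance[metric]} exceeds {limit}")
    return alerts

prod = {"name": "billing-db", "env": "prod", "cpu_pct": 95, "hours_since_backup": 30}
dev = {"name": "scratch-db", "env": "dev", "cpu_pct": 99, "hours_since_backup": 100}
print(check(prod))  # two alerts: high CPU and an overdue backup
print(check(dev))   # [] -- dev alerts are muted
```

    Keeping thresholds and muting rules in one place is the "single location" benefit: one edit changes behavior for the whole fleet.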

    Draw Up Creative Plays

    Whenever performance issues crop up, the typical response might be finger-pointing at another player. The IT teams that are Final Four-worthy understand that problems can only be solved promptly if infrastructure, database and application admins expose their metrics to all IT members who have a stake in providing excellent customer service and support. Making dashboards available to everyone from DevOps to DBAs allows them to understand and execute the same play.

    Like the winning coach’s whiteboard, monitoring dashboards always give the team the best chance to succeed. IT teammates get by-the-minute assessments of how their latest code or virtualization implementation impacts overall database and infrastructure health. Enabling deep-dive analyses helps your team troubleshoot performance problems as soon as they arise. Individual database health scores can be integrated with alerts to stay on top of performance problems. Simplifying the management interface to provide one dashboard for both on-premises and cloud allows the DBA to diagnose infrastructure issues immediately, and connect with the right team or tool to correct them quickly.

    While I’m not laying any bets on who’ll make it to the basketball Final Four, I can predict that your cloud database will only perform well if it’s surrounded by the right IT monitoring approach. The probability for success increases exponentially if IT teams — like basketball players — avoid playing hero ball and, instead, play team defense, share information equally and keep their eyes open for trouble ahead.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:29p
    Citrix Said to Work With Goldman Sachs on Possible Sale

    Alex Sherman and Kiel Porter (Bloomberg) — Citrix Systems Inc. is working with advisers to seek potential suitors for the cloud-services company, according to people familiar with the matter.

    The Fort Lauderdale, Florida-based company hired Goldman Sachs Group Inc. to sound out buyers including private equity firms, said the people, who asked not to be identified because the information isn’t public. Interest has been limited so far as Citrix’s large market valuation means buyout firms would have to team up to fund a bid, complicating any possible deal, the people said.

    An increase in the company’s market value over the past year is also making it difficult for private equity firms to offer a premium for Citrix, they said.

    The shares rose 6.8 percent to $84.93 Monday, valuing the company at about $13.3 billion. The stock climbed more than 30 percent in the 12 months through March 10, even though management warned in January that it is maintaining a “conservative outlook” for the next four quarters.

    A spokeswoman for Goldman Sachs declined to comment. Representatives for Citrix didn’t immediately respond to requests for comment.

    Strategic Review

    After reaching a standstill agreement with activist shareholder Elliott Management Corp. in 2015, and adding Elliott’s Jesse Cohn to the board, Citrix has undertaken strategic and operational reviews.

    Last July the company announced plans to spin off its GoTo business and merge it with LogMeIn Inc., in a $1.8 billion deal to combine the rival online meeting organizers.

    Citrix — which is expanding its reach into cloud-related products — provides services that help companies deliver applications and Windows desktops to mobile workers. Its management and security tools also help employees outside of an office do their jobs by accessing data and other information. The company reported net revenue for the fourth quarter of $908.4 million, narrowly beating the average estimate of $900.7 million.

