Data Center Knowledge | News and analysis for the data center industry
 

Monday, October 5th, 2015

    12:00p
    Top 10 Data Center Stories of September

    Here’s a recap of the 10 most popular stories that ran on Data Center Knowledge in September:

    EMC Rolls Out Hyper Converged Infrastructure with Arista Switches – Looking to simplify provisioning of compute, storage, and networking resources in a data center, EMC unveiled ScaleIO Node.

    How Data Center Providers Have Become Cloud Leaders – In creating a data center platform ready for the cloud, administrators must take a few important details into consideration.

    Switch Claims Reno Site Will be World’s Largest Data Center – SuperNap Tahoe Reno 1 is only the first phase of development. Switch’s future plans call for as many as seven buildings, mostly of comparable size. The campus will neighbor a Tesla battery manufacturing plant currently under construction.

    Here’s What Caused the Amazon Cloud Outage – A minor network disruption at Amazon Web Services led to an issue with its NoSQL database service DynamoDB, causing some of the internet’s biggest sites and cloud services to become unavailable.

    Data Center Consolidation: a Manager’s Checklist – We look at three key areas managers should consider when it comes to data center consolidation: hardware, software, and users.

    AWS, GoDaddy Named in Ashley Madison Lawsuit – The suit seeks $3 million in damages, and also names 20 other defendants, three of whom operated sites which sold the leaked user data.

    Who Needs Generators? Data Center Taps Directly into Grid for Power – The facility receives power directly from a bulk transmission line designed to transmit massive amounts of electricity over long distances.

    Latency, Bandwidth, Disaster Recovery: Selecting the Right Data Center – Are you working with web applications? Are you delivering virtual desktops to users across the nation? There are several key considerations around the type of data or applications an organization is trying to deliver via the data center.

    How Enterprise Cloud and Virtual Networking are Changing the Telco Market – Delivering enterprise connectivity services today is very different from even five years ago, and the telecommunications companies that have dominated the market for many years are now having to make significant adjustments to the way they do business.

    Six Facts in High-Availability Data Center Design – Data center design isn’t simply about infrastructure redundancy. As senior company executives pay more attention to what’s happening in the data center, it is more important than ever for a data center design to match specific company needs.

    Stay current on data center news by subscribing to our daily email updates and RSS feed, or by following us on Twitter, Facebook, LinkedIn and Google+.

     

    3:00p
    Evolving to Next-Gen Data Center: Cloud, Storage, Virtualization, Security

    The data center has become the home of media content, cloud services, and a new breed of industry-disruptive companies. As more organizations realize the benefits of cloud and third-party data center services, spending in the data center market will continue to grow.

    According to Gartner, in 2014, the absolute growth of public Infrastructure-as-a-Service workloads surpassed the growth of on-premise workloads (of any type) for the first time. The market research firm’s 2015 CIO survey indicates that 83 percent of CIOs consider IaaS as an infrastructure option, and 10 percent are already cloud-first with IaaS as their default infrastructure choice.

    Administrators will be looking for more ways to optimize their data centers and all the workloads they support. Most of all, they’ll be looking for smarter ways to secure their data. As more data is placed within the data center, there are more targets and more attacks. Juniper Research recently pointed out that the rapid digitization of consumers’ lives and enterprise records will push the cost of data breaches to $2.1 trillion globally by 2019, almost four times the estimated cost of breaches in 2015.

    It’s no wonder that respondents to the latest State of the Data Center Survey by our sister company AFCOM indicated that security is still a top concern among IT decision makers.

    Here are the respondents’ top concerns about cloud:

    • Security: 32 percent
    • Interoperability of cloud services with existing enterprise infrastructure: 7 percent
    • Total cost of ownership: 7 percent
    • Ownership of data: 6 percent

    The survey also asked how IT decision makers intend to meet their data center services needs over the next 12 and 36 months:

    • Expand capacity in existing data center(s)
      • 12 Months: 74 percent
      • 36 Months: 73 percent
    • Cloud
      • 12 Months: 66 percent
      • 36 Months: 81 percent
    • Colocation/managed services
      • 12 Months: 65 percent
      • 36 Months: 83 percent
    • Renovate/refurbish
      • 12 Months: 57 percent
      • 36 Months: 72 percent
    • Build new
      • 12 Months: 42 percent
      • 36 Months: 88 percent

    Beyond the conversations revolving around expansion, cloud, and security, AFCOM also saw some big shifts around controlling the explosion of data. A recent Cisco report indicates that global data center IP traffic will nearly triple (2.8-fold) over the next five years. The AFCOM survey asked respondents about storage, capacity, and how they’re reacting to this data growth.

    As it stands today, 31 percent are managing between 1PB and 10PB of data. A further 16 percent are managing 10PB to 50PB or more. Several respondents said that they anticipate their annual storage growth rate to be between 20 percent and 30 percent.
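
    As a rough sanity check, Cisco’s 2.8-fold projection over five years implies a compound annual growth rate of roughly 23 percent, which lines up with the 20 to 30 percent annual storage growth respondents anticipate. A minimal Python sketch of that arithmetic:

        # Implied compound annual growth rate (CAGR) for a 2.8-fold
        # increase spread over five years: 2.8 ** (1/5) - 1.
        growth_factor = 2.8
        years = 5
        cagr = growth_factor ** (1 / years) - 1
        print(f"Implied annual growth rate: {cagr:.1%}")  # ~22.9%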

    So, how are data center managers reacting to this storage growth? The top five responses were:

    1. 58 percent are allocating physical space within the data center
    2. 48 percent are using storage virtualization
    3. 46 percent are using storage consolidation
    4. 37 percent are using flash storage
    5. 19 percent are leveraging a cloud provider

    New storage consolidation and control systems will help control the vast boom in data growth. Most of all, organizations will have to find ways to store data more efficiently and securely. In some cases, this will mean moving to the cloud; in others, deploying better levels of virtualization.

    Download AFCOM’s entire 2015 State of the Data Center Survey for more on trends in cloud, security, virtualization, and what it all means for your data center.

    3:30p
    The Paradox of Network Blind Spots

    Frank Winter is the CEO of Auconet.

    You could drive a truck full of hackers through the blind spots on nearly any IT infrastructure, despite multiple layers of security. Go to any Gartner or Forrester conference for CIOs and IT directors, and ask the attendees what percentage of devices, ports, and endpoints on their network are unknown and uncontrolled. We do this routinely. The typical guess is 10 percent to 15 percent, or “We just can’t track that.”

    No enterprise IT pro has yet replied, “We detect everything in real-time.” Some companies, frustrated they can’t build an effective solution, turn to third-party consultancies for discovery of routers, switches, ports and endpoints. From what they tell us, the results are far from perfect. As it turns out, 100 percent discovery is not easy.

    The IT Paradox: Despite Advances Across the Spectrum of IT Security, Huge Blind Spots Persist

    The continued existence of significant infrastructure blind spots is difficult to rationalize. You cannot protect, control, or quarantine what you can’t see. Data governance, risk management, and compliance leaders live under the shadow of security and breach implications. Chief information security officers know the gap is always there. Access by unauthorized endpoints is clearly dangerous and means that advanced security measures can be for naught, if they regulate only the visible endpoints.

    The persistence of blind spots – despite the major advances in “post-logon” security – is the paradox of corporate IT this year, last year, and for the past decade. What appears to be inexplicable complacency about unknown endpoints is more likely to be grim awareness that failure to see every port in real-time puts a greater burden on other security measures.

    Paradox-Buster #1 – Bar the Door

    The best way to stop a rogue endpoint intrusion is to never let it onto the network. Don’t rely on the layers of post-logon protection that you have in place. While they are good, they aren’t good enough… nor fast enough, every minute, 24x7. Instead of banking on remediation after bad behavior is detected, intercept every unauthorized endpoint using either Layer 2 MAC-based authentication or 802.1X, and switch off the port being used before any tainted traffic can touch the network.

    This “Bar the Door” approach requires that you can also discover and persistently monitor all ports and links. Without the ability to see them all in real-time, there will be serious gaps.
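
    As an illustration of the “Bar the Door” idea (not of any particular vendor’s product), the sketch below uses the pysnmp library to administratively shut down a switch port once monitoring flags an unauthorized endpoint on it. The switch address, community string, and interface index are placeholders you would supply from your own discovery and monitoring data.

        # Minimal sketch: administratively disable a switch port over SNMP once
        # an unauthorized endpoint is detected on it. Assumes the pysnmp library
        # and SNMP write access to the switch; the host, community string, and
        # ifIndex values are placeholders.
        from pysnmp.hlapi import (
            setCmd, SnmpEngine, CommunityData, UdpTransportTarget,
            ContextData, ObjectType, ObjectIdentity, Integer,
        )

        SWITCH = "192.0.2.10"      # placeholder switch management address
        COMMUNITY = "private"      # placeholder SNMP write community
        ROGUE_IFINDEX = 17         # ifIndex of the port the rogue endpoint is on

        def shut_port(host, community, if_index):
            # IF-MIB::ifAdminStatus -- 1 = up, 2 = down
            error_indication, error_status, _, _ = next(setCmd(
                SnmpEngine(),
                CommunityData(community),
                UdpTransportTarget((host, 161)),
                ContextData(),
                ObjectType(ObjectIdentity("IF-MIB", "ifAdminStatus", if_index),
                           Integer(2)),
            ))
            if error_indication or error_status:
                raise RuntimeError(f"SNMP set failed: {error_indication or error_status}")

        shut_port(SWITCH, COMMUNITY, ROGUE_IFINDEX)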

    Paradox-Buster #2 – Replace Discovery Tools That Aren’t Fully Vendor-Independent

    Endpoint protection must start from awareness of every router, switch, and port on the network, plus real-time discovery of any endpoint attempting access. That requires discovery covering all brands, models, and versions. Realistically, your infrastructure is heterogeneous or will be soon. With the Internet of Things on the horizon, and to preserve negotiating leverage with vendors, you need fully vendor-independent discovery. It will help keep you out of costly traps.
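
    One vendor-neutral building block for that kind of discovery is the standard IF-MIB, which essentially every managed switch and router exposes over SNMP regardless of brand. Below is a minimal sketch, assuming the pysnmp library and SNMP read access, that enumerates the interfaces a single device reports; the host and community string are placeholders, and a real discovery system would of course sweep every device and correlate the results.

        # Minimal sketch: walk IF-MIB::ifDescr on one device to list its
        # interfaces, independent of vendor. Assumes pysnmp and SNMP read
        # access; host and community string are placeholders.
        from pysnmp.hlapi import (
            nextCmd, SnmpEngine, CommunityData, UdpTransportTarget,
            ContextData, ObjectType, ObjectIdentity,
        )

        def list_interfaces(host, community="public"):
            interfaces = []
            for err_ind, err_stat, _, var_binds in nextCmd(
                    SnmpEngine(),
                    CommunityData(community),
                    UdpTransportTarget((host, 161)),
                    ContextData(),
                    ObjectType(ObjectIdentity("IF-MIB", "ifDescr")),
                    lexicographicMode=False):   # stop at the end of the ifDescr column
                if err_ind or err_stat:
                    raise RuntimeError(f"SNMP walk failed: {err_ind or err_stat}")
                for _oid, value in var_binds:
                    interfaces.append(str(value))
            return interfaces

        print(list_interfaces("192.0.2.10"))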

    Paradox-Buster #3 – Study Vendor Claims and Customer Testimony Carefully

    Read vendor claims attentively. Do they state unequivocally, “We find every port and endpoint and detect/block every new endpoint and device, in real-time, regardless of vendor, even on huge networks”? For complete discovery, your vendor should be willing to commit unequivocally to near-100 percent discovery effectiveness.

    Earlier this year, a major technology publication conducted a test of several network access control (NAC) products, using a very small test network. Not one was successful in recognizing typical devices on even a tiny network. One can reasonably ask whether they could recognize everything on a network as large as yours.

    Paradox-Buster #4 – Eliminate the Paradox and Bask in Higher Productivity

    Establish 100 percent visibility of every IT asset, including ports and endpoints, to achieve efficiency, order, and higher operator productivity in security and other aspects of running a data center or network.

    Insist on hearing customer testimony that a vendor solved their discovery issues “completely.” Discovery that is successful only 99 percent of the time is inadequate; a breach that occurs in that remaining one percent can cost tens of millions of dollars. There’s only microscopic wiggle room on the need for 100 percent discovery on big, heterogeneous networks.

    Find Everything, Miss Nothing

    This will give your other investments in security and ITOM applications the full and correct picture needed to work across the entire infrastructure. Wipe out the blind spots, and you can be confident that your other security solutions will become more effective. With absolute 100 percent discovery comes freedom from the pervasive, 100 percent unacceptable paradox of modern IT: network blind spots.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:16p
    HPC Virtualization Use Cases and Best Practices

    The use of high-performance computing is continuing to grow. The critical nature of information and complex workloads have created a growing need for HPC systems. Through it all, compute density plays a big role in the number of parallel workloads we’re able to run. So, how are HPC, virtualization, and cloud computing playing together? Let’s take a look.

    One variant of HPC infrastructure is vHPC (“v” stands for “virtual”). A typical HPC cluster runs a single operating system and software stack across all nodes. This could be great for scheduling jobs, but what if multiple people and groups are involved? What if a researcher needs their own piece of HPC space for testing, development, and data correlation? Virtualized HPC clusters enable sharing of compute resources, letting researchers “bring their own software.” You can then archive images, test against them, and maintain the ability for individual teams to fully customize their OS, research tools, and workload configurations.

    Effectively, you are eliminating islands of compute and allowing the use of VMs in a shared environment, which removes another obstacle to centralization of HPC resources. These benefits can have an impact in fields like life sciences, finance, and education, to name just a few examples.

    • Combining Cloud and HPC. When an end user deploys a virtual HPC cluster, they’re doing so with a pre-validated architecture which specifies the required machine attributes, the number of VMs, and the critical software that should be included in the VM. Basically, you allow full customization to their requirements. This architecture also allows the central IT group to enforce corporate IT policies. By centralizing data, virtual resources, and user workloads, security administrators are able to, for example, enforce security and data protection policies.
    • Virtualizing Hadoop (and Big Data Engines). Not only can you virtualize Big Data clusters, you can now allow them to scale into the cloud. Project Serengeti, for example – VMware’s open source Hadoop virtualization project for its hypervisor – allows the virtual system to be triggered from VMware vCloud Automation Center, making it easy for users to self-provision Hadoop clusters. So why introduce an extra level of indirection between the Map/Reduce tasks and the storage? Here are a few reasons:
      • Virtualization ecosystems can make the extra hop very fast
      • It allows for easy elasticity of the compute part of the cluster, since compute is decoupled from storage
      • It supports multi-tenant access to the underlying HDFS file system, which is owned and managed by the DataNodes (see the sketch after this list)
    • Other key benefits to consider:
      • Simplified Hadoop cluster configuration and provisioning
      • Support for multi-tenant environments
      • Support for Hadoop usage in existing virtualized data centers
      • Big Data Extensions
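
    To make the multi-tenant HDFS point above concrete, here is a minimal sketch using the third-party hdfs Python package (a WebHDFS client). It is a generic illustration rather than anything specific to Project Serengeti, and the NameNode URL, user names, and paths are placeholders.

        # Minimal sketch: two tenants working in their own areas of a shared
        # HDFS namespace over WebHDFS. Assumes the third-party "hdfs" package
        # (pip install hdfs); the NameNode URL, users, and paths are placeholders.
        from hdfs import InsecureClient

        NAMENODE = "http://namenode.example.com:50070"  # placeholder WebHDFS endpoint

        alice = InsecureClient(NAMENODE, user="alice")
        bob = InsecureClient(NAMENODE, user="bob")

        # Each tenant writes to its own directory in the shared file system.
        alice.write("/user/alice/results/run1.csv", data=b"sample,1\n", overwrite=True)
        bob.write("/user/bob/results/run1.csv", data=b"sample,2\n", overwrite=True)

        # Each tenant sees only what its HDFS permissions allow.
        print(alice.list("/user/alice/results"))
        print(bob.list("/user/bob/results"))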

    There are two critical aspects to look out for when virtualizing HPC:

    1. Low-latency apps. Bypassing the kernel in bare-metal HPC environments is the standard way to achieve the highest bandwidth and lowest latency, which is critical for many MPI applications. The cool part here is that VMware can do the analog of this in its virtual environment. Using VMDirectPath I/O, VMware can make the hardware device (e.g., an InfiniBand adapter) directly visible to the guest, which then allows the application to have direct access to the hardware as in the bare-metal case. This capability is available now using ESXi. (A simple way to compare the two configurations is a ping-pong latency microbenchmark; see the sketch after this list.)
    2. Not all HPC workloads are made to be virtualized. We need to be realistic here. Before you virtualize an HPC cluster, make sure it’s the right thing to do. Basically, make sure to develop a use case for your application and ensure the performance metrics you require will be available for your cluster. Remember, use cases can revolve around:
      • Research requirements
      • Volume of data being processed
      • Sensitive nature of the information
      • Specific hardware requirements
      • Your user base
      • Location of the data
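
    As mentioned above, a simple way to compare bare-metal and virtualized latency is a classic MPI ping-pong microbenchmark run on both configurations. Here is a minimal sketch using mpi4py; the message size and iteration count are arbitrary, and it is meant only as a starting point for your own measurements.

        # Minimal ping-pong latency microbenchmark between two MPI ranks.
        # Assumes mpi4py; launch with two processes, for example:
        #   mpirun -np 2 python pingpong.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        iterations = 10000
        msg = bytearray(8)  # small message, so latency dominates over bandwidth

        comm.Barrier()
        start = MPI.Wtime()
        for _ in range(iterations):
            if rank == 0:
                comm.Send(msg, dest=1)
                comm.Recv(msg, source=1)
            elif rank == 1:
                comm.Recv(msg, source=0)
                comm.Send(msg, dest=0)
        elapsed = MPI.Wtime() - start

        if rank == 0:
            # Each iteration is one round trip; half of that is the one-way latency.
            print(f"One-way latency: {elapsed / iterations / 2 * 1e6:.2f} microseconds")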

    Here’s one more thing to think about: getting started with virtual HPC isn’t as hard or as risky as it may seem. Why? Because it’s virtual.

    6:57p
    HP Launches Open Source OS for Data Center Networking

    HP has open sourced a network operating system for data center switches, partnering with a handful of other major vendors in launching a full-fledged open source project that has the potential to become a major disruptor for Cisco, whose proprietary pre-integrated hardware-and-software solutions dominate the data center networking market.

    In recent years, HP and other challengers to Cisco’s dominance – companies like Dell, Arista, Juniper, and Brocade – have been moving in the direction of disaggregating network hardware from network software, separating the packet forwarding plane from the control plane. HP, Dell, and Juniper have introduced data center switch lines that gel with other vendors’ operating systems, namely Linux-based ones by Cumulus Networks, Big Switch Networks, and Pica8.

    The move to open networking is about giving users more control over the configuration of their networks, as well as enabling Software Defined Networking and Network Function Virtualization capabilities.

    Cisco has taken a different approach. Instead of opening up its hardware for third-party software, it has introduced Application Centric Infrastructure, its proprietary SDN technology, while also adding support for OpenFlow, the open SDN standard, launching the Cisco Extensible Network Controller, a distribution of the open SDN controller called OpenDaylight, and supporting BGP EVPN, an open protocol for virtual network overlays.

    Brocade, whose position in the data center networking market is strong but relatively small in terms of market share, has been an early supporter of open networking technologies. The company’s CEO Lloyd Carney told Data Center Knowledge in a recent interview that, in his opinion, specialized integrated data center networking solutions were well on their way to being replaced by commodity x86 servers where all networking functionality is handled by software.

    Vendors Group around Open Source Network OS

    HP launched its OpenSwitch NOS (network operating system) as an open source project together with Arista, a network hardware company that has been successful using the disaggregated-network model from the get-go, Broadcom, one of the leading “merchant silicon” vendors, VMware, Intel, and Accton, a Taiwan-based design manufacturer that produces hardware for HP’s open networking switch line called Altoline.

    Old Guard IT Giants Embrace Open Source

    HP, like other “incumbent” IT vendors, is increasingly embracing open source software. The company has gotten heavily invested in open source cloud projects OpenStack and Cloud Foundry, for example.

    This is a change for HP, illustrative of the changes taking place in the IT industry, where major IT suppliers recognize that open source software is something they cannot ignore or fight against. The company was one of the first big vendors to legitimize open source when it embraced Linux in the 90s, Brandon Keepers, open source lead at GitHub, told us on the sidelines of the GitHub Universe conference in San Francisco last week. “And then they flipped around and became this really rigid, typical corporate organization,” he said. GitHub hosts the most popular online repository and collaboration platform for open source code.

    Full-Featured L2/L3 Network OS

    According to HP, OpenSwitch is a full-featured network OS with support for L2 and L3 protocols. The open source project also includes a cloud database for persistent configuration. Its universal API approach ensures support for a command-line interface, REST, Ansible, Puppet, and Chef.
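
    As a purely illustrative example of what driving a switch through a REST interface can look like, here is a short Python sketch using the requests library. The base URL, resource path, credentials, and payload are hypothetical placeholders, not documented OpenSwitch endpoints; consult the project’s own API reference for the real resource model.

        # Hypothetical sketch of reading and updating switch configuration over
        # a REST API with the requests library. The URL, path, credentials, and
        # payload are placeholders for illustration only.
        import requests

        BASE = "https://switch.example.com"      # placeholder management address
        session = requests.Session()
        session.auth = ("admin", "admin")        # placeholder credentials
        session.verify = False                   # lab sketch only; use real certificates

        # Read back a (hypothetical) system resource.
        resp = session.get(f"{BASE}/rest/v1/system", timeout=10)
        resp.raise_for_status()
        print(resp.json())

        # Update a (hypothetical) hostname attribute.
        resp = session.put(f"{BASE}/rest/v1/system",
                           json={"configuration": {"hostname": "leaf-01"}},
                           timeout=10)
        resp.raise_for_status()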

    The OpenSwitch community is already operational, and an initial developer release of OpenSwitch NOS is available for download at www.openswitch.net.

    8:27p
    Gartner Symposium/ITxpo 2015: Algorithms, Not Apps, Future of IT


    This post originally appeared at The Var Guy

    By Charlene O’Hanlon

    The future of IT is digital, and algorithms are the drivers of success. That’s the message from the keynote address today at the Gartner Symposium/ITxpo 2015.

    “If the most important thing you offer is data, you’re in big trouble,” said Peter Sondergaard, senior vice president, Gartner Research. “Big data is not where the value is. By itself it may not be transformative. Algorithms are where the value lies.”

    As analog revenues flatten and decline, businesses are shifting to new revenue for growth, he said. That represents a $1 trillion digital business opportunity. And much of that lies in algorithms. “Algorithms define the way the world works,” he noted.

    The best part? Companies already have much of what can push them forward. The trick is making it work for them. To do that, companies first must know what they have.

    “You must inventory your algorithms,” Sondergaard said. “Which ones define your most important processes? Key customer interactions? Then you must assign ownership to algorithms. Prioritize which should be public and which should be private. Then license, trade or sell those that are important but not critical to your business.

    “Imagine a marketplace where tens of billions of algorithms are available,” he continued. “The algorithmic economy will power the next phase of machine-to-machine interaction.”

    Gartner predicts that by 2020 smart agents will facilitate 40 percent of business interactions. “Users will have forgotten about apps,” Sondergaard said. “The post-app era is coming. Google, Amazon, Apple platforms will be agents.

    “Agents enabled by algorithms will define the post-app era,” he said.

    But, to make this vision of the future work, “we have to get the algorithms right.”

    Sondergaard also discussed the changing role of the chief security officer in digital business, noting that Gartner predicts that by 2020, 50 percent of large enterprises will have a digital risk officer who manages IT, OT (operational technology), and IoT risk.

    “The risk and security officer of today is obsessed with hacks. But the new concerns are not just of protection, but also safety and quality,” he noted. “By 2017, IT organizations will spend 30 percent of their budget on risk, security and safety.”

    The first step, he said, will be to create an infrastructure that is more resilient and shift security spending to focus on detection and response rather than “impossible protection.”

    “You can’t control hackers, but you can control your infrastructure,” he said.

    This first ran at http://thevarguy.com/information-technology-events-and-conferences/100515/gartner-symposiumitxpo-2015-algorithms-not-apps-future

    8:37p
    CoreOS Intros AWS On-Ramp for Kubernetes


    This article originally ran at Talkin’ Cloud

    Looking to simplify deployment and management of application containers, such as those from Docker, on top of Amazon Web Services, CoreOS has launched an AWS Installer that makes use of AWS’s CloudFormation tool and kube-aws, a CoreOS tool that automates cluster deployments, to simplify deployment of Kubernetes, Google’s open source orchestration framework for managing containers.
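
    For context on the CloudFormation half of that workflow, the sketch below shows what launching a stack from an already-rendered template looks like with AWS’s boto3 SDK. It is a generic illustration, not the kube-aws tool itself; the region, stack name, template file, and key pair parameter are placeholders.

        # Generic sketch: create a CloudFormation stack from a rendered template
        # using boto3. This illustrates only the CloudFormation step; it is not
        # the kube-aws tool. Stack name, template path, and parameters are
        # placeholders.
        import boto3

        cfn = boto3.client("cloudformation", region_name="us-west-2")

        with open("kubernetes-cluster.template.json") as f:  # e.g. a template rendered by kube-aws
            template_body = f.read()

        cfn.create_stack(
            StackName="my-kubernetes-cluster",
            TemplateBody=template_body,
            Parameters=[{"ParameterKey": "KeyName", "ParameterValue": "my-ssh-key"}],
            Capabilities=["CAPABILITY_IAM"],  # the stack creates IAM roles for cluster nodes
        )

        # Block until stack creation finishes (or fails).
        cfn.get_waiter("stack_create_complete").wait(StackName="my-kubernetes-cluster")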

    Locked in a battle with providers of rival distributions of Linux as well as Microsoft, CoreOS clearly wants to become the most efficient platform available for running containers, which are being widely adopted to rapidly build and deploy applications based on a microservices architecture.

    While those applications can naturally run on premises or in the cloud, CoreOS CEO Alex Polvi says that the first place many of those applications are manifesting themselves is on public clouds. Because AWS is the largest public cloud, Polvi said it only makes sense for CoreOS to focus on making it simpler to install both Kubernetes and containers themselves on AWS.

    CoreOS has been riding a wave of developer enthusiasm for containers that has extended to adopting lighter-weight Linux distributions on which those containers run. Naturally, rivals such as Red Hat have responded with their own lighter-weight versions of Linux, but CoreOS continues to build momentum behind its own distribution and its Tectonic platform, which pairs that distribution with Kubernetes. By helping to bring the Kubernetes orchestration framework originally developed by Google to AWS, Polvi said CoreOS is hoping to significantly expand the reach of its customer base.

    For solution providers, that rapid shift to applications based on microservices architectures creates several challenges and opportunities. The rate at which applications are being deployed in the cloud is increasing significantly because developers find it much simpler to interact with containers than virtual machines. In many instances, however, those containers are being deployed both on bare metal servers and on top of virtual machines, largely because most internal IT operations teams don’t have a way to natively manage containers running on a bare metal server. There are also concerns about the security of containers running on bare metal servers. To make matters more interesting for solution providers, providers of virtual machines are in the early stages of building lighter-weight virtual machines that are specifically optimized to host containers.

    Put it all together and it’s clear that managing IT both inside and out of the cloud is about to become much more difficult than it already is. The plus side of that equation for solution providers is that demand for external expertise to help manage the multiple types of server platforms that are being deployed inside and out of the cloud these days should only increase in the months and years ahead.

    This first ran at http://talkincloud.com/cloud-computing/smoothing-kubernetes-path-aws

    9:00p
    IBM Buys Cleversafe to Boost SoftLayer Cloud’s Object Storage Chops


    This article originally appeared at The WHIR

    IBM has acquired data storage vendor Cleversafe to boost object storage capabilities of its cloud services, according to an announcement on Monday. Terms of the deal were not disclosed.

    As part of the agreement, IBM will gain Cleversafe’s 350 patents and 210 employees who will join its IBM Cloud business. Cleversafe’s Dispersed Storage Network (dsNET) solutions complement IBM’s software-defined Spectrum Storage portfolio, and will be integrated in IBM Cloud to enhance SoftLayer and the SoftLayer Object Storage platform.

    Cleversafe, a privately held company based in Chicago, was founded in 2004. The company has more than 350 patents in object-based on-premise storage solutions, and it has clients across multiple industries that use Cleversafe for content repository, backup, archive, collaboration and storage as a service.

    “Today a massive digital transformation is underway as organizations increasingly turn to cloud computing for innovative ways to manage more complex business operations and increasing volumes of data in a secure and effective way,” Robert LeBlanc, Senior Vice President, IBM Cloud said in a statement. “Cleversafe, a pioneer in object storage, will add to our efforts to help clients overcome these challenges by extending and strengthening our cloud storage strategy, as well as our portfolio.”

    The acquisition will enable IBM clients to store and manage massive amounts of data more efficiently as they are able to balance between on-premise and cloud storage deployments, IBM said in a statement. Clients will be able to use SoftLayer cloud services and IBM’s Platform-as-a-Service Bluemix to create applications with the Cleversafe technology.
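
    SoftLayer’s object storage service is built on OpenStack Swift, so a rough sense of what storing and retrieving an object looks like can be given with the python-swiftclient library. The sketch below is generic Swift usage, not Cleversafe- or Bluemix-specific; the auth URL, credentials, container, and object names are placeholders.

        # Minimal sketch: upload and fetch one object from a Swift-based object
        # store (SoftLayer Object Storage is built on OpenStack Swift). Assumes
        # python-swiftclient; the auth URL, credentials, and names are placeholders.
        import swiftclient

        conn = swiftclient.Connection(
            authurl="https://objectstorage.example.com/auth/v1.0",  # placeholder endpoint
            user="ACCOUNT:username",                                # placeholder credentials
            key="api-key",
            auth_version="1",
        )

        conn.put_container("backups")
        conn.put_object("backups", "db-dump.sql.gz", contents=b"...archive bytes...")

        headers, body = conn.get_object("backups", "db-dump.sql.gz")
        print(headers.get("content-length"), "bytes retrieved")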

    Last month, IBM acquired application development software provider StrongLoop to integrate Node.js capabilities with its software portfolio.

    This first ran at http://www.thewhir.com/web-hosting-news/ibm-acquires-cleversafe-to-boost-data-storage-capabilities

