Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, July 16th, 2014

    11:00a
    IIX Raises $10.4M For Global Internet Exchange Platform

    IIX (International Internet Exchange) has raised a $10.4 million Series A from New Enterprise Associates (NEA) and is ready to make a major market push. The company says it's ushering in the next generation of interconnection and peering with a new platform. It has been stealthily adding locations and forming connections with leading content providers and now has the funds to grow its team, expand its product roadmap and increase business development activities.

    The idea behind IIX is to make peering simple through distributed and interconnected Internet Exchange Points (IXPs). The IIX PeeringCloud enables its customers to connect to leading IXPs via a single interconnection from anywhere in the world. A simple user interface lets users make these connections with a few clicks.

    It’s not an overlay product, but a complete bypass of the public Internet, providing a one-click solution for national and global interconnection. By bypassing the public Internet, it improves latency, increases immunity to DDoS attacks and mitigates several other issues. It gives people the ability to directly connect to content providers like Box, Microsoft and others.

    IIX is making a case for distributed and interconnected IXPs and moving away from the interconnection “islands” that currently exist, where certain data center providers control the major interconnection points. The platform is designed for enterprises, content providers, cloud providers and other network operators to enhance the performance of applications and services regardless of location.

    “We’re taking advantage of a first-to-market opportunity,” said CEO and founder Al Burgio. “An analogy I use is the LinkedIn for enterprise networks. It’s a software platform for content and cloud application providers that enables them to securely interconnect regardless of location. People need to do this at scale, and that’s become cost prohibitive.”

    Management team packed with peering experts

    These are lofty goals, but Burgio has quietly built a major contender. His team includes some Equinix veterans, including Bill Norton, co-founder of Equinix and published author on peering, Morgan Snyder, who spent a decade with Equinix, and advisory board member Jay Adelson, Equinix founder and co-creator of PAIX, formerly known as the Palo Alto Internet Exchange.

    Today, Equinix controls most of the largest internet exchanges in the U.S.

    Also on the board is founder and former chairman of the London Internet Exchange (LINX) Keith Mitchell. The management team is a veritable who’s-who of the peering world. “We’ve done it before and we’re going to do it again,” said Burgio.

    “Virtualizing the core of the Internet”

    IIX launched its first IXP in 2011. “We’ve tackled a lot in stealth and really validated an IX next-gen solution,” said Burgio. The company has since established presence in various markets in the U.S., Canada and Europe and is continuing to expand globally.

    These Internet exchange points in various markets are connected not just locally but globally – an industry first, according to Burgio. “We’re virtualizing the core of the internet,” he said. “It gives the ability to directly connect regardless of scale, at the speed of business.”

    IIX also has a growing ecosystem of customers, including big names such as Microsoft, Box, Blue Jeans Network, GoDaddy, LinkedIn and TripAdvisor.

    The present-day Internet does not provide the degree of performance and predictability required by today’s mission-critical applications. The movement to cloud is undeniable, and today’s form of interconnectivity isn’t suitable for it, Burgio said.

    There is also added pressure on existing internet infrastructure. Distributed Denial of Service (DDoS) attacks are growing exponentially, and fewer than two percent of organizations are correctly connected, he said. “These organizations have little to no control over the path of their data. Now you’re leaving yourself exposed.”

    The platform isn’t trying to replace interconnection as we know it. “Our goal has never been to replace how local interconnection is done. It’s to introduce a new way that’s simple that can help you not just locally, but at scale,” said Burgio. “Some people are peering locally; the rest of the world is still heavily dependent on internet transit. We inject ourselves into that blend.”

    12:00p
    etcd: the Not-so-Secret Sauce in Google’s Kubernetes and Pivotal’s Cloud Foundry

    What do Google’s open source cluster container management software Kubernetes and Pivotal’s open source Platform-as-a-Service software Cloud Foundry have in common? The answer is etcd, the open source distributed key-value store started and maintained by CoreOS, a San Francisco startup that earlier this month announced an $8 million Series A funding round by a group of Silicon Valley venture capital heavyweights.

    As Blake Mizerany, head of the etcd project at CoreOS, explained in a blog post, cluster management across distributed systems is complicated business. etcd makes it easier by creating a hub that keeps track of the state of each node in a cluster and manages those states. It replicates the state data across all nodes in the cluster, preventing a single node failure from bringing down the whole group.
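
    To make the key-value model concrete, here is a minimal sketch of writing and reading a piece of shared cluster state through etcd's HTTP keys API. It is illustrative only: the endpoint, port and key names are assumptions (older etcd releases served the same API on port 4001), not a description of any particular deployment.

    import requests

    ETCD = "http://127.0.0.1:2379"  # assumed local etcd endpoint; older releases used port 4001

    def set_state(key, value):
        """Record a piece of shared cluster state, e.g. which node currently holds a role."""
        resp = requests.put(f"{ETCD}/v2/keys/{key}", data={"value": value})
        resp.raise_for_status()
        return resp.json()

    def get_state(key):
        """Read the value back; every node in the cluster sees the same answer."""
        resp = requests.get(f"{ETCD}/v2/keys/{key}")
        resp.raise_for_status()
        return resp.json()["node"]["value"]

    set_state("cluster/web-frontend/leader", "node-03")
    print(get_state("cluster/web-frontend/leader"))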

    Getting clustered servers to agree

    In an interview, CoreOS CEO Alex Polvi said etcd was an implementation of Chubby, a software tool Google designed to manage a key component in every distributed system: consistency. For five servers to make a decision as a cluster, they need to agree about the current state of something they are making a decision about. In the world of distributed computing, this is called consensus, and Chubby uses a “consensus algorithm,” called Paxos, to manage consensus in a cluster of servers. This consensus is key to resiliency of distributed systems.
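
    The arithmetic behind that agreement is simple, even though the protocols that enforce it are not. The toy sketch below (not Paxos itself) shows the majority-quorum rule: a decision counts once more than half the cluster has acknowledged it, which guarantees that any two majorities overlap in at least one node.

    def majority(cluster_size):
        """Smallest number of nodes that constitutes a majority."""
        return cluster_size // 2 + 1

    def is_committed(acks, cluster_size):
        """A value is considered decided once a majority has acknowledged it."""
        return acks >= majority(cluster_size)

    # A five-node cluster needs three acknowledgements, so it can keep
    # making decisions with up to two nodes down.
    assert majority(5) == 3
    assert is_committed(3, 5) and not is_committed(2, 5)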

    Google published a paper describing Chubby in 2006, which inspired Doozer, a highly available data store Mizerany wrote together with his former colleague Keith Rarick in 2011, when both were working at Heroku, the Platform-as-a-Service company that was at that point already owned by Salesforce. Doozer became the inspiration for etcd, Mizerany wrote. Both are written in Go, but a big difference is that Doozer uses Paxos, while etcd’s consensus protocol is Raft, which gives it the ability to keep the same log of state-changing commands on every node in a cluster.
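
    A toy example of why that matters: if every node applies the exact same ordered log of state-changing commands, every node ends up with the exact same state. The sketch below illustrates the idea only; it is not Raft's actual log replication.

    def apply_log(log):
        """Replay an ordered log of (op, key, value) commands into a key-value state."""
        state = {}
        for op, key, value in log:
            if op == "set":
                state[key] = value
            elif op == "delete":
                state.pop(key, None)
        return state

    log = [("set", "color", "blue"), ("set", "size", "10"), ("delete", "color", None)]

    # Any node that replays this exact log, in this exact order, converges on the
    # same state, no matter when or where it does the replay.
    assert apply_log(log) == apply_log(list(log)) == {"size": "10"}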

    Kubernetes, the Docker container manager Google open sourced in June, is a lighter version of its in-house system called Omega and relies on etcd for cluster management. “To run Kubernetes, you have to run etcd,” Polvi said. Everything CoreOS is building has been inspired by the way Google runs its data center infrastructure, so “we’re excited to see them build on top of one of our tools,” he said.
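
    One way a cluster manager can build on etcd is the watch pattern: block until a key changes, then react to the new value. The sketch below is a rough illustration using the v2 HTTP API's long-polling wait parameter; the endpoint and key name are assumptions, not a description of Kubernetes' actual integration.

    import requests

    ETCD = "http://127.0.0.1:2379"  # assumed local etcd endpoint

    def watch_and_react(key):
        while True:
            # Long-poll: the request returns only when the key is modified.
            resp = requests.get(f"{ETCD}/v2/keys/{key}", params={"wait": "true"}, timeout=None)
            change = resp.json()
            print("desired state changed:", change["node"]["value"])
            # ...a scheduler would now compare desired vs. actual state and act on the gap.

    # watch_and_react("cluster/desired/web-replicas")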

    ‘Operational utopia’

    CoreOS, the company’s main product, is a server operating system designed for companies that want to run their data centers the way web giants, such as Google, Amazon or Facebook, run theirs. Its target customers are companies that operate data centers at Google’s scale but, unlike the web giants, don’t design and build everything inside those data centers by themselves. The only customer whose name CoreOS has disclosed so far is Atlassian Software, the Australian company best known for creating JIRA, one of the top software tools used by project managers.

    As Polvi puts it, Kubernetes is a step toward the “operational utopia we’ve all been dreaming of for a long time.” That utopia is being able to treat a massive data center as a single operating system. It is too early to say whether Kubernetes will become the de facto standard management tool for doing that, but the style of infrastructure operations it represents is where things are going, he said. “It could be it. I think the market wants one. I don’t think it wants 20.”

    Others in the industry want to be involved in Kubernetes

    A group of IT infrastructure heavyweights joined Google’s open source project exactly one month after it was announced, which means some level of standardization on Kubernetes is coming. IBM, Red Hat and Microsoft all pledged to contribute to the project, as did a group of startups, including CoreOS, Docker, Mesosphere and SaltStack.

    Microsoft wants to make sure Kubernetes works on Linux VMs spun up in its Azure cloud. IBM, looking out for its primary customer base, wants to make sure Docker containers are digestible by enterprises.

    Matt Hicks, director of OpenShift engineering at Red Hat, said the software company was interested in Kubernetes because it was interested in having a common model for describing how applications packaged in Linux containers are built and interconnected. “How you orchestrate and how you combine multiple containers to create a useful application is useful technology for us,” he said.

    Red Hat’s interest in Kubernetes stems not only from its Enterprise Linux and Fedora operating systems but also from OpenShift, the company’s popular PaaS product, which has long relied on application containers as a core underlying technology.

    From common framework to full automation

    When it announced the arrival of new members to the open source Kubernetes community, Google said the goal was to make Kubernetes an open container management framework for any application in any environment. This means the community will have to come to a consensus on what attributes that common management framework will have.

    Hicks said the framework would have to address the way multi-container applications and dependencies between the underlying containers are described. Another component would be defining how the application’s containers are placed across what could be thousands of servers, so they come together cohesively. Since containers can run on shared resources, the framework would also have to address how security is handled.

    Polvi said that, essentially, you’ll want to be able to describe your goals to the system and let it figure out the best way to achieve them. With systems like Amazon Web Services or OpenStack-based clouds, you have to specify which server or which database to spin up or spin down, and when. With Kubernetes, you will ideally be able to tell the system that your app needs a database, three servers, a certain amount of storage, etc., and “go make it so and guarantee that it’s so,” he said.
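
    A rough sketch of that declarative model, using hypothetical names rather than any real Kubernetes API: the user states what the application needs, and a reconciliation loop works out which actions close the gap between desired and actual state.

    desired = {"app": "webstore", "servers": 3, "databases": 1, "storage_gb": 500}

    def reconcile(desired, actual):
        """Return the actions needed to move actual state toward desired state."""
        actions = []
        for resource in ("servers", "databases", "storage_gb"):
            gap = desired.get(resource, 0) - actual.get(resource, 0)
            if gap > 0:
                actions.append(f"provision {gap} more {resource}")
            elif gap < 0:
                actions.append(f"release {-gap} {resource}")
        return actions

    # Starting from a partial deployment, the system must "make it so":
    print(reconcile(desired, {"servers": 1, "databases": 0, "storage_gb": 500}))
    # ['provision 2 more servers', 'provision 1 more databases']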

    12:00p
    Data Center Expansion Brings CloudFlare Within 10 Milliseconds of 95 Percent of World’s Population

    Content delivery network company CloudFlare said earlier that 2014 was going to be the year of data center expansion, and it wasn’t kidding. The company is doubling its footprint in Equinix data centers to keep up with a growing customer base and web traffic passing through its network, which has seen a 400 percent increase in the last year alone. It now processes more than 300 billion page views every month for more than 1.5 million customers worldwide.

    CloudFlare provides a blend of DDoS mitigation, web acceleration and CDN services. It’s popular in part because its tools are easy to use.

    The company announced it was planning a significant expansion in the beginning of the year, adding facilities where it believes it lacks coverage. Leveraging Equinix’s global footprint, CloudFlare can also more effectively absorb and disperse massive volumes of web traffic associated with DDoS attacks to better ensure uptime.

    “What they’re doing is leveraging a combo of physical facilities in lots of geographic locations, the interconnection platform and the ecosystems we’ve built,” said Nikesh Kalra, of Global Cloud Services at Equinix. “In terms of the network ecosystem, one of the statistics we tout is we have 975-plus carriers. That’s important to a company like CloudFlare, because they need the maximum amount of end users. For them to get max exposure and include value to the service, they get into peering arrangements and interconnection arrangements.”

    A lot of what CloudFlare distributes is web content, and it also needs to be close to distributors of that content, such as large entertainment companies, bloggers and others. “We house a lot of infrastructure that hosts or originates that content, allowing for better interconnection and lower latency,” said Kalra.

    CloudFlare has been an Equinix customer since around 2010, starting with three markets: Chicago, Silicon Valley and Washington, DC. Since then it has grown substantially and is now in 15 markets worldwide. Upon initial deployment, 20 percent of the world’s population was located within 10 milliseconds of the nearest CloudFlare data center. After this expansion, it covers 95 percent of the world’s population, leveraging Equinix locations in North America, Asia and Europe.

    CloudFlare’s total Equinix footprint spans Amsterdam, Atlanta, Chicago, Dallas, Hong Kong, Los Angeles, New York, Paris, Seattle, Silicon Valley, Singapore, Sydney, Tokyo, Toronto and Washington, D.C.

    Equinix’s cloud services division largest revenue producer for first time

    Equinix markets and sells according to different verticals. The cloud and IT vertical is the largest revenue producer for the company for the first time ever this year, according to Kalra.

    “This is noteworthy because we were historically based around the network vertical,” he said. “All the networks we have create ecosystems. Cloud networks surpassing other verticals in revenue this year for the first time indicates how strong the market for cloud is.”

    CloudFlare: 2013 was for refactoring, 2014 for expansion

    CloudFlare said 2013 was its “refactoring” year, updating its architecture and upgrading existing facilities through added equipment and network redundancy. The server infrastructure was upgraded from a 1 Gbps platform to a 10 Gbps platform. The entire DNS infrastructure was rebuilt from scratch. It created a new, fully customizable, rules-based Web Application Firewall (WAF) to augment its original heuristics-based WAF.

    Now it is the year of data center expansion. “Equinix has been a key partner to us every step of the way,” said Joshua Motta, director of special projects, CloudFlare. “No other company would have been able to help us scale as quickly as Equinix did on a global level. Becoming a part of the greater content delivery network ecosystem through Equinix and taking advantage of its platform and connectivity options enables us to better serve our existing customers and grow our business within new geographies and industries.”

    12:30p
    The Role of Internet Exchanges in the Data Center Interconnect Market

    Eve Griliches is the Director of Solutions Marketing at BTI Systems

    The Data Center Interconnect (DCI) market is experiencing explosive global growth. Content, service and colo/hosting providers are seeking to directly connect content to end users at the edge of global networks.

    Providers are seeking this direct connection in order to deliver faster anytime, anywhere access to the unprecedented numbers of businesses and consumers straining these networks, driven by the game-changing dynamics of cloud computing, mobility and video.

    This shift of content closer to the user ‘eyeballs’ is causing the continued disintermediation of ISPs while fueling arguments about net neutrality and raising issues about paid and/or unpaid peering arrangements.

    The DCI market, already growing globally across national long-haul routes and subsea links, is now seeing massive and accelerating deployments in metro and regional areas. Adopted early by the largest web and content providers, DCI is now being extended by service providers building metro overlays, while colocation and hosting providers deploy DCIs as their businesses and data center real estate grow.

    The not-so-little driving force

    But an often-overlooked driving force of the DCI market is the Internet Exchange (IX). Largely nonprofit and mostly headquartered outside of North America, IXs have grown in number worldwide by 20 percent since 2012, and overall traffic on IX networks increased 26 percent last year.

    These IX players are focused squarely on moving content to local users, a strategy also reflected in the aforementioned growth of metro networks and regional data centers built to move content closer to business and consumer customers.

    At its most basic level an Internet Exchange Point (IXP) ensures local Internet traffic is kept within local network infrastructures, lowering costs and providing content to users more quickly. Content providers and operators of content delivery networks globally are pushing service providers to connect to IXs specifically to move content faster. IXs offer public peering services (which are transparent and provide information about traffic patterns) to content and service providers across Europe, with European IXs just beginning to deploy in North America.

    Growing momentum in North America

    IXs and content providers often share space in colocation data centers, driving the number of sites and players up dramatically. A recent Euro-IX report shows that in Europe local traffic is staying more local and increasing at all sites, especially at some of the largest IXs, which can peak at more than 1 Tbps. The same pattern is expected in North America, and with growth in direct peering, the disintermediation of smaller ISPs will continue, more data centers will be deployed and traffic will escalate. The DCI market will also grow exponentially as the number of data centers increases.

    As further evidence of the growing momentum of IXs in North America, Open-IX was created by data center operators, content providers and others to encourage the development of a neutral and distributed Internet Exchange model in the U.S. This newly formed group will address inconsistencies in connectivity, resiliency and security. Another key motivator is to reduce the high prices set by some colocation providers.

    A bright-looking future

    Ultimately, this rapid movement to keeping content distribution local will lead to major network advantages such as dynamic bandwidth on demand, with the ability to provision bandwidth in minutes or seconds.

    IX deployments will require a network architecture that can collapse network layers and provide immediate analytics for optimization, fueling faster service creation and content delivery and increasing customer preference by combining the requirement for massive scale with the intelligence of routing.

    With new technologies and standards available, metro networks could interface with the IX infrastructure to create Layer 2 or Layer 1 interconnects. Additionally, new tools can assist in moving a public peering connection onto an IX-hosted private session to increase security.

    These are exciting times for network operators as well as companies participating in the DCI market. IXs play a critical role in keeping traffic local, controlling costs and driving a mutual approach to peering, always putting the customer first.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.

    2:00p
    Performance Results for Oracle on a Hitachi HUS VM All Flash Array

    Your storage environment is critical for your organization. Basically, it’s the brains of the operation. All-flash arrays have been making an impact on the market as they offer a new way to optimize critical workloads.

    Databases, heavy data sets and complex applications often demand a lot of IO from storage. Trends indicate that there will only be more users, data and applications to manage moving forward. Among the most critical of those systems is an Oracle infrastructure. So how can you optimize your database storage capabilities? What can you do to get optimal performance out of your Oracle infrastructure?

    In this performance benchmark whitepaper from Hitachi, we look at performance results for an Oracle instance running on a HUS VM All Flash Array.

    There are eight key aspects to the results and understanding the benchmarks. They include:

    • System Configuration
    • Introduction to Storage Performance Tests
    • Storage Benchmark Results – Sequential IO
    • Storage Benchmark Results – Random IO
    • Introduction to Database Performance Tests
    • Database Benchmark Results – Database Load
    • Database Benchmark Results – OLTP Transactions
    • Reviewing Storage and Database Benchmark Results

    So, why measure storage performance? Storage performance is essential not only for overall Oracle database performance, but for system management tasks like backup, recovery and archiving.
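
    As a rough illustration of the difference between the sequential and random IO tests listed above (and not Hitachi's benchmark methodology), the sketch below reads the same blocks of a file in order and then in shuffled order. Real storage benchmarks also bypass the operating system's page cache, which this toy example does not.

    import os, random, time

    PATH, BLOCK, BLOCKS = "testfile.bin", 4096, 25_000  # roughly a 100 MB test file

    with open(PATH, "wb") as f:          # create the test file
        f.write(os.urandom(BLOCK * BLOCKS))

    def read_blocks(offsets):
        """Read one block at each offset and return the elapsed time in seconds."""
        start = time.perf_counter()
        with open(PATH, "rb") as f:
            for off in offsets:
                f.seek(off)
                f.read(BLOCK)
        return time.perf_counter() - start

    sequential = [i * BLOCK for i in range(BLOCKS)]
    shuffled = random.sample(sequential, len(sequential))

    print(f"sequential: {read_blocks(sequential):.2f}s  random: {read_blocks(shuffled):.2f}s")
    os.remove(PATH)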

    Download this whitepaper today to learn the power of the Hitachi HUS VM All Flash array. This type of storage model pulls together the best of all storage worlds including:

    • Seamless integration in existing SAN infrastructures
    • Proven scalability and performance for all workloads
    • Extremely high IO throughput with microsecond service times
    • Highly efficient DRAM storage cache in addition to flash technology
    • Rich and mature storage management software portfolio (cloning, snapshots, replication, dynamic provision, dynamic tiering etc.)

    The idea is to close the IO gap. The HUS VM All Flash Array reduces that gap and allows Oracle platforms to fully utilize CPU capacity without waiting for IO operations. Ultimately, this increases server CPU utilization, improves return on assets for customers and lowers CAPEX for server platforms.

    2:00p
    RightScale’s New Self-Service Portal Gives Enterprises Control Over Cloud Sprawl

    Cloud portfolio management provider RightScale has announced RightScale Self-Service, a portal that helps developers and other cloud users get instant access to cloud infrastructure. It enables enterprise IT teams to curate a catalog of applications and services across clouds within the necessary governance and cost controls of the organization.

    Reining in cloud infrastructure and applications within the confines of the enterprise has been a major initiative to counter what has been dubbed “shadow IT.” Certain Software-as-a-Service applications and cloud provisioning tools have become so easy to use that many departments adopt them outside the confines of an organization’s policy. By curating cloud services, an enterprise regains control over increasingly independent IT users.

    This is the third major RightScale product initiative, following Cloud Analytics and Cloud Management. Cloud Analytics was announced in November 2013 and entered public beta in March 2014. RightScale Cloud Management recently added vSphere integration, which went into general availability in March.

    Integrated with Cloud Management, RightScale Self-Service allows users to automatically provision instances, stacks or complex multi-tier applications from a catalog defined by IT.  Common use cases include development, testing, staging, production, demos, training, batch processing and digital marketing. The self-service portal also enables users to manage cloud applications, track costs and set automated shutdown schedules through an easy-to-use interface. The combination of management and self-service allows teams to administer applications while giving business and finance teams the ability to visualize and optimize cloud usage and costs.

    The self-service portal comes with a curated catalog of stacks and applications that can be deployed across a portfolio of clouds.  It lets enterprises leverage corporate standard technologies and control the versions, patches, configurations and security settings. There are built-in cost controls to manage costs and set quotas for users and teams. Self-Service supports all major public and private clouds, including Amazon Web Services, Microsoft Azure, Google Cloud Platform, OpenStack and VMware vSphere. It’s delivered as a service, so users can get started quickly, and RightScale has exposed an API for integration with existing systems and DevOps tools.

    “RightScale Self-Service allows Nextdoor’s operations team to provide our engineers simple one-click access to pre-defined resources,” said Matt Wise, senior systems architect, Nextdoor, a large free private social network for neighborhoods. “It also integrates seamlessly with RightScale Cloud Management and our Puppet automation framework. One of our core principles in the Ops team at Nextdoor is that we try to limit the number of technologies we leverage, but become experts in the ones we do use. We’ve chosen to leverage RightScale as our main cloud management interface.”

    Stephen O’Grady, principal analyst at Redmonk, notes that this brings in the best of both worlds: developers get the flexibility of the cloud and enterprises get the control they need. “Developers love the frictionless availability of cloud, but enterprises crave visibility into their infrastructure which is challenged by the widespread cloud adoption,” said O’Grady. “RightScale Self-Service is intended to serve as a way to provide both parties what they need from the cloud.”

    6:15p
    U.S. Army Goes After Application Sprawl Before Tackling Data Center Consolidation

    The U.S. Army has started migrating all of its enterprise applications and systems that host them to designated core data centers as part of an ongoing consolidation effort.

    The migration must be complete by the end of fiscal year 2018, according to a June memo issued by Under Secretary of the Army Brad Carson. The department is attempting to consolidate more than 1,100 data centers – a number that most likely includes server closets under the government’s broad definition of what constitutes a data center.

    The memo is the first step to establish policy and procedures for moving services from local data centers to modern, centralized environments. The effort is part of a broad Department of Defense initiative.

    Application consolidation is a good place to start. For instance, Microsoft Outlook is used across the entire DOD, but it is not necessarily all hosted in the same data center.

    By consolidating applications that are in widespread use but hosted in multiple locations, the Army stands to gain not only from a leaner infrastructure but also from economies of scale, as many applications are priced by volume. Reducing the number of application instances also makes it easier to manage security and block malware. The department is also terminating errant apps that are no longer in use, which creates further savings on hardware, licensing fees and upgrades.

    David Vergun, of the Army News Service, wrote that about 800 unused apps have been terminated out of about 11,000 the Army has in use.

    Eliminating redundant apps is not as easy as eliminating unused apps, since redundant apps are harder to identify. Unused apps may still contain valuable data, and it is difficult to collect and migrate data, much of which was generated over the past 20-plus years.

    The 11,000-application figure came from the Office of the Chief Information Officer, G-6, which is now tracking those apps. This is just one division within a massive organization, and the number indicates a potentially massive government-wide application sprawl.

    Focus shifted from data centers to applications

    The government-wide Federal Data Center Consolidation Initiative, now in its fourth year, was started to tackle the inefficiency of the government’s IT infrastructure. The initial focus was on simply reducing the number of data centers the government had running, an effort that proved extremely challenging, as agencies had trouble sorting out how many data centers they had, what constituted a data center and which agency had control over which facility.

    Lately, the focus has shifted to consolidating applications rather than buildings, an approach government IT leadership views as more rational.

    Then-Navy CIO Terry Halvorsen highlighted this shift in thinking at a MeriTalk event in Washington, D.C., in March. He believes FDCCI goes much deeper than counting data centers.

    “In the end, it’s about counting dollars,” he said. “I don’t like the word ‘consolidation.’” He prefers the term “Application Kill.”

    The Army, in similar fashion, is prioritizing reining in the application sprawl that has occurred over the years. Do this correctly, and then you can have a serious conversation about data center consolidation.

    Vergun, of the Army News Service, puts things in context: “At one time in the mid-1990s, [Neil] Shelley [chief of the Army Data Center Consolidation Division] noted HQDA (Headquarters, Department of the Army) had seven different e-mail systems running at the same time. In 2013, the Army finished migrating 1.4 million Army users to a single enterprise e-mail system with DISA (Defense Information Systems Agency) supporting the effort. The Army saved $76 million in fiscal year 2013, and expects to save $380 million through 2017.”
