Data Center Knowledge | News and analysis for the data center industry

Wednesday, June 25th, 2014

    10:00a
    SolidFire Integrates All-Flash Array With VMware vSphere Storage I/O Control

    SolidFire’s flash storage systems, which include tools that let service providers fine-tune Quality of Service (QoS) controls, are now interoperable with VMware vSphere Storage I/O Control, the all-flash array vendor announced this morning.

    SolidFire QoS settings are automatically adjusted on the fly to match any VMware vSphere Storage I/O Control changes, reducing the need for storage administrator intervention. The company says this capability will enable IT managers to deliver predictable storage performance to each individual virtual machine within their infrastructure, while enabling applications and end-user experiences to be more predictable and easier to manage.
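
    To make that mechanism concrete, here is a minimal, hypothetical sketch of the kind of orchestration involved: a per-VM vSphere Storage I/O Control policy translated into min/max/burst IOPS on the SolidFire volume backing the datastore. The endpoint, credentials and exact field names below are assumptions for illustration, not SolidFire’s documented integration.

        import requests

        # Hypothetical illustration: push a per-VM SIOC-derived policy down to the
        # backing SolidFire volume as min/max/burst IOPS via a JSON-RPC call.
        # The endpoint URL, credentials and field names are assumptions.
        SOLIDFIRE_API = "https://solidfire.example.com/json-rpc/7.0"
        AUTH = ("admin", "password")

        def apply_qos(volume_id, min_iops, max_iops, burst_iops):
            """Set storage-enforced QoS on the volume backing a VM's datastore."""
            payload = {
                "method": "ModifyVolume",
                "params": {
                    "volumeID": volume_id,
                    "qos": {
                        "minIOPS": min_iops,      # guaranteed floor
                        "maxIOPS": max_iops,      # sustained ceiling
                        "burstIOPS": burst_iops,  # short-term burst allowance
                    },
                },
                "id": 1,
            }
            resp = requests.post(SOLIDFIRE_API, json=payload, auth=AUTH, verify=False)
            resp.raise_for_status()
            return resp.json()

        # Example: a vSphere SIOC change raises a VM's share of its datastore,
        # so the orchestration layer bumps the volume's QoS to match.
        apply_qos(volume_id=42, min_iops=1000, max_iops=5000, burst_iops=8000)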

    Dave Wright, a former Rackspace executive, founded SolidFire in 2009. The company has strategic partnerships with VMware and Citrix and integrates with OpenStack. It has landed several service provider customers, including COLT, ViaWest, Internap, ServInt and SunGard.

    While it has had a lot of success with service providers, the company has also been adding features designed to entice enterprise customers.

    SolidFire’s QoS capabilities enable IT managers to designate, manage and deliver predictable virtual machine performance from the host all the way through to the underlying SolidFire storage system.

    “SolidFire with VMware vSphere Storage I/O Control provides a tunable and predictable storage infrastructure for each virtual machine datastore,” said SolidFire’s founder and CEO Dave Wright. “VMware managers can provision storage policies within the virtual infrastructure that are then enforced down to each virtual disk in the SolidFire storage system.”

    The company’s QoS architecture aims to increase VM awareness and management granularity between the host and storage system layers of the data center:

    • SolidFire’s interoperability with VMware vSphere Storage I/O Control, with storage-enforced QoS, enables predictable VM performance
    • Interoperability allows for automated storage performance allocation and manages minimum, maximum and burst performance based on per-VM SIOC requirements
    • Dynamic performance allocation to datastores reduces the need to over-provision storage, allowing more VM deployment
    • Automated orchestration dynamically adjusts volume IOPS allocation, matching each virtual machine’s VMware vSphere Storage I/O Control settings, even as those settings are changed and virtual machines are moved from datastore to datastore
    • End-to-end QoS control reduces “noisy neighbors” and allows for the consolidation of multiple performance-sensitive applications onto a shared infrastructure
    11:45a
    HP Launches Second-Gen Campus Network Switches with SDN Capabilities

    Anticipating demand from customers who are upgrading their campus networks to accommodate wireless technology, workforce mobility and much higher bandwidths, HP has launched the second generation of its 5400 zl OpenFlow switch series, called the 5400R zl2.

    Like its predecessor, the new switch supports the popular software defined networking protocol OpenFlow, but handles much higher bandwidths, provides lower latency and unifies wired and wireless policy in the vendor’s network management software.

    HP is going after customers who are upgrading their campus networks from 1 Gigabit Ethernet to 10GbE, with an eye on future support for 40GbE, said Steve Brar, global product marketing manager for HP Networking. While global Ethernet switch sales, for data center and campus core switches alike, have been flat compared to last year, 10GbE and 40GbE products have been the market’s sole growth engines, according to IDC.

    “This is definitely targeted at campuses who are transforming their networks,” Brar said.

    HP is competing with all the usual suspects in this space, including Dell, Juniper and Alcatel-Lucent, positioning 5400R zl2 as an alternative to a product by the heaviest of the heavyweights – Cisco. According to Brar, the switch moves three times more packets per second at lower latency than the San Jose, California-based giant’s comparable Catalyst 4500 series switches.

    Growing SDN capabilities

    HP’s previous-generation 5400 zl switches, the company’s first to support OpenFlow, were quite successful. The vendor launched the product line in 2007 (then part of its ProCurve family), but added OpenFlow support in 2012.

    About 10,000 customers have deployed roughly 20 million ports of 5400 zl network capacity, Brar said. The new generation is compatible with its predecessor.

    The company has been fleshing out its SDN strategy since 2012, and as it continues to do so, the range of things its OpenFlow-enabled gear can do will only grow.

    At the moment, HP is offering two major SDN applications for 5400R zl2. One of them is Network Protector, a security app that runs on its Virtual Application Networks SDN Controller, launched last year.

    Via OpenFlow, the app programs the switch to forward DNS traffic coming from wireless devices on the network to the controller, where the website the user is trying to access gets vetted, Brar explained. If the Protector decides that the site’s reputation isn’t up to snuff, the user cannot access it.
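
    Conceptually, the rule the controller installs looks something like the sketch below. The REST endpoint and payload layout are invented for illustration and are not HP’s actual controller API; the match fields and the send-to-controller action, however, are standard OpenFlow semantics.

        import requests

        # Rough illustration: match DNS lookups (UDP destination port 53) and punt
        # them to the SDN controller for reputation vetting. Endpoint and payload
        # shape are assumptions; the match/action semantics are standard OpenFlow.
        CONTROLLER = "https://sdn-controller.example.com:8443/flows"

        dns_redirect_flow = {
            "priority": 30000,
            "match": {
                "eth_type": 0x0800,   # IPv4
                "ip_proto": 17,       # UDP
                "udp_dst": 53,        # DNS
            },
            "instructions": [
                {"apply_actions": [{"output": "CONTROLLER"}]}  # send to controller
            ],
        }

        resp = requests.post(CONTROLLER, json=dns_redirect_flow, verify=False)
        resp.raise_for_status()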

    The other SDN app is Network Optimizer for Microsoft Lync, the video conferencing and instant messaging application. It dynamically and automatically configures the network according to policy, setting quality of service for each Lync call as it takes place.

    12:00p
    Twitter’s Infrastructure Team Matures, Making the Fail Whale a Thing of the Past

    Twitter’s infrastructure team has managed to kill the notorious Fail Whale, the creature that for years appeared on users’ screens in place of their Twitter feed whenever the service went down. Raffi Krikorian, the company’s vice president of platform engineering, says the team likes to think it is now a thing of the past.

    “I think we’re getting to that point that we can say confidently that we know how to do this,” he said during one of the fireside chats on stage at last week’s GigaOm Structure conference in San Francisco.

    “Back in the day we didn’t really understand how to do our capacity planning properly,” Krikorian said. Capacity planning on a global scale is a difficult thing to do and it took time to get it down.

    The team now operates at a very different level, worrying about delivering a smooth experience to the company’s software engineers. Krikorian does not want the engineers to ever worry about the infrastructure layer, which may distract them from focusing on the end-user experience.

    Companies like Twitter, which were built around Internet services unlike anything that came before them and grew at breakneck speed, had to learn from scratch how to operate data center infrastructure for their specific applications. The group also includes the likes of Facebook, Google and Yahoo.

    As they grew, they developed their own IT management software and much of their own hardware. They have open sourced many of the technologies they built for their own purposes, and many of those have been adopted by others and become central to commercial offerings from a multitude of startups.

    Bursting remains a big challenge

    The main challenges have remained the same; Twitter has simply gotten better at managing them. The two biggest are speed – or “real-time constraint,” as Krikorian put it – and burst capacity.

    “We have to get tweets out the door as fast as we possibly can,” he said. Doing that during times of high demand requires some careful engineering.

    These high-demand periods come quite often; the ongoing FIFA World Cup, for example, has generated plenty of them.

    “Every time a goal happens … there’s a huge influx of tweets happening,” Krikorian said. Every tweet “just has to come in, get processed and get out the door again.”

    To manage capacity bursting, Twitter’s infrastructure team has started breaking services down into tiers based on importance. When something like a big sporting event is taking place, all services other than the core feed will automatically degrade in performance and forfeit the spare capacity to ensure the core user experience is delivered smoothly.
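
    A simplified sketch of that tiering idea, not Twitter’s actual code, might look like this: when headroom shrinks during a spike, only the core tier keeps full capacity and everything else runs degraded.

        SERVICE_TIERS = {
            "tweet_delivery": 0,      # core: never degraded
            "search_indexing": 1,
            "trends": 2,
            "analytics_backfill": 3,
        }

        def services_at_full_capacity(current_load, capacity, tiers=SERVICE_TIERS):
            """During a spike, keep only tier-0 services at full capacity;
            lower tiers run degraded and give up their spare capacity."""
            spike = (capacity - current_load) < capacity * 0.25  # <25% headroom left
            if not spike:
                return set(tiers)
            return {svc for svc, tier in tiers.items() if tier == 0}

        # A World Cup goal pushes load to 90 percent of capacity:
        print(services_at_full_capacity(current_load=90, capacity=100))
        # -> {'tweet_delivery'}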

    Key infrastructure management tools

    One of the most important tools in Twitter’s infrastructure management toolbox is Apache Mesos, the open source cluster manager that helps applications share pools of servers. Twitter runs Mesos across hundreds of servers, using it for everything, from services to analytics.

    Another key piece of technology it has built is Manhattan, a massive real-time globally distributed multi-tenant database. “We’re migrating everything we possibly generate on Twitter into Manhattan in most cases,” Krikorian said.

    Manhattan handles things like SLAs and multi-data center replication. Both are examples of things he does not want Twitter engineers to think about when writing applications.

    The system allows an engineer to “throw some data” onto a cluster in one data center once and see it automatically show up everywhere else it is needed.
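
    The developer experience Krikorian describes might look roughly like the sketch below. The client class and its methods are invented for illustration; Twitter has not published a public Manhattan API.

        # Hypothetical sketch: the engineer writes once to the local cluster, and
        # cross-data-center replication is the storage system's problem.
        class ManhattanClient:
            def __init__(self, dataset, local_dc):
                self.dataset = dataset
                self.local_dc = local_dc

            def put(self, key, value):
                # The application only talks to its local cluster; the store
                # asynchronously replicates the write to every other data center
                # that serves this dataset.
                self._write_local(key, value)
                self._enqueue_cross_dc_replication(key, value)

            def _write_local(self, key, value):
                print(f"[{self.local_dc}] wrote {key}={value}")

            def _enqueue_cross_dc_replication(self, key, value):
                print(f"[{self.local_dc}] replication of {key} queued to remote DCs")

        client = ManhattanClient(dataset="tweet_engagements", local_dc="west-coast")
        client.put("tweet:123:favs", 42)  # shows up in every other DC automatically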

    Efficiency comes over time

    This isn’t the way Twitter has always done things. “The easier way to do it is basically just have lots of spare computers laying around,” Krikorian said. “Turns out that’s not very smart.”

    The infrastructure team cares a lot about efficiency, which is why it has implemented tiering and global load balancing. “We don’t print money … so I want to make sure that every single CPU is used as much as we possibly can but still provide the headroom for spikes.”

    Not only does it matter that a goal has been scored in the World Cup, it also matters which team is playing. “When Japan is playing in the World Cup, I know that most of the traffic [during the game] will come from Japan,” Krikorian said.

    This means data has to be shifted ahead of time to the infrastructure sitting in the company’s West Coast data center and other points of presence closest to Japan to prepare for the influx of traffic.

    Twitter has not officially disclosed its West Coast data center location, but sources have told Data Center Knowledge that it leases space at a RagingWire data center in Sacramento, California. RagingWire was also present at Structure, where it announced a major expansion at its Sacramento campus.

    Twitter also leases a lot of capacity at a QTS data center in Atlanta, where it has been since 2011.

    12:30p
    Evaluation Criteria for Your Cloud-based Data Protection Solutions

    Lynn LeBlanc, CEO and founder of HotLink Corporation, has more than 25 years of enterprise software and technology experience at both Fortune 500 companies and Silicon Valley startups.

    There’s so much talk about disaster recovery (DR) and business continuity (BC) in the cloud, you might be considering such an option for your data center. After all, the cloud promises unlimited capacity in a pay-as-you-go model – the perfect combination for affordably ensuring that your IT operations continue uninterrupted. Best of all, perhaps you can finally sleep at night without dreading the inevitable outage and fallout that ensues.

    It’s true that cloud-based resources have created a host of new products and services, many with exceptional value. Unfortunately, it’s also true that amid the cloud-washing of the past few years, some data protection vendors decided to reclassify their virtual machine (VM) backup products as cloud-based “DR” solutions.

    This shift may have started when Gartner predicted 55 percent annual growth for DR as a service (DRaaS) over the next five years. Perhaps it happened when the backup market became so crowded and undifferentiated. Maybe it’s just aspirational marketing. Whatever the reason, before signing on the dotted line, be sure your new DR/BC vendor explains exactly how they will provide seamless and automated failover of critical business operations when your disaster strikes.

    In other words, be sure your DR solution actually has an “R” and not just a “D.”

    Four essential DR/BC capabilities

    When evaluating a potential cloud-based DR/BC offering, look beyond simply replicating your data and VMs and be sure your new solution can offer:

    Recovery: Protecting critical data and VMs with no place to recover is obviously not going to be much help with continued operations following a failure. While enterprise business-critical applications may justify the expense of standby resources, most workloads don’t warrant the cost of redundant computing infrastructure. If you protect these workloads in the cloud, where will you run after a failure?

    Moreover, VMs can be very large and the Internet is slow. If the cloud-protected VMs have to be recovered to an alternate site, how long will that take? The ideal solution combines in-cloud protection with in-cloud recovery. That way you don’t need redundant hardware, and you won’t have bandwidth or latency problems.

    Automation: You will need an automated way to fail over to an alternate cloud-based computing infrastructure in the event of an on-premise failure. When a disaster strikes after hours, the last thing you need is your team scrambling around reconfiguring networking and VMs.
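
    As one hedged illustration of what “automated failover” can mean in practice, the sketch below boots pre-replicated recovery VMs and repoints DNS using the AWS SDK (boto3), since AWS is cited later in this article as a low-cost option. Instance IDs, zone ID and hostnames are placeholders, and a real runbook would also handle storage, networking and application configuration.

        import boto3

        STANDBY_INSTANCES = ["i-0abc123def4567890"]   # pre-replicated recovery VMs
        HOSTED_ZONE_ID = "Z_EXAMPLE"                  # placeholder Route 53 zone
        APP_HOSTNAME = "app.example.com"
        CLOUD_ENDPOINT_IP = "203.0.113.10"

        def fail_over_to_cloud():
            ec2 = boto3.client("ec2", region_name="us-west-2")
            ec2.start_instances(InstanceIds=STANDBY_INSTANCES)   # boot recovery VMs

            route53 = boto3.client("route53")
            route53.change_resource_record_sets(                 # repoint DNS
                HostedZoneId=HOSTED_ZONE_ID,
                ChangeBatch={
                    "Changes": [{
                        "Action": "UPSERT",
                        "ResourceRecordSet": {
                            "Name": APP_HOSTNAME,
                            "Type": "A",
                            "TTL": 60,
                            "ResourceRecords": [{"Value": CLOUD_ENDPOINT_IP}],
                        },
                    }]
                },
            )

        if __name__ == "__main__":
            fail_over_to_cloud()  # triggered by monitoring, not a 3 a.m. phone call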

    Testing: If you cannot easily test the ongoing changes to your VM infrastructure inside your DR/BC environment, you should not reasonably expect a pain-free recovery. Testing is the Achilles’ heel of any DR/BC plan, whether cloud-based or not, so testing early and often is vital. The reality is most BC solutions are labor-intensive to test, and as a result, they are tested infrequently. When evaluating cloud-based DR solutions, look closely at:

    1. What exactly is involved in testing?
    2. Can I automate the testing?
    3. How can I trigger this test myself?

    If the testing activity is difficult, it is a telling indicator of how trying the recovery process will be.

    Management: Assuming a green light on automation and testing, now consider how you will manage and monitor the servers running in the cloud. With some cloud-based DR/BC solutions, management is fully integrated with your existing on-premise management infrastructure, and you will manage the cloud-based resources in the same way as your on-premise servers. With others, you have a separate console provided by the DR/BC vendor. In yet other cases, it’s a managed service in which the vendor is running the show. Regardless of which option is best for you, it’s vital to look closely at how your operations will continue in a failure scenario.

    Make data protection solutions a priority

    Focusing on these four essential DR/BC capabilities should clear up some of the confusion when evaluating the sea of cloud-based data protection solutions. Most importantly though, IT managers need to ensure that DR/BC plans are on today’s to-do list.

    Few IT disasters take out your entire data center; usually it’s hardware failure, software bugs or security breaches, and these happen every single day. Leveraging low-cost providers such as Amazon Web Services can provide cloud-based BC at a price any IT organization can afford.

    Not taking advantage of this newest generation of cloud-based DR/BC solutions and just hoping that “the big one” doesn’t take out your data center (and your job) is the worst possible continuity strategy.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:00p
    IT Infrastructure and Expanding Global Reach

    Many new types of resources are now delivered through a cloud model: data, desktops and even entire applications are pushed out from powerfully connected data center platforms. In the mission-critical world of financial software, Software-as-a-Service (SaaS) solutions must be deployed with absolute uptime. With that in mind, meet Linedata.

    Founded more than a decade ago, the company has always strived to be on the front lines of innovation. It was a pioneer in bringing the SaaS model to the financial industry, among many other notable accomplishments.

    From pre-trade to post-settlement, the company offers comprehensive front-to-back solutions to manage all types of investment processes. Today, Linedata has more than 900 employees worldwide that serve 700 clients operating in 50 countries.

    This financial services organization needed a way to deploy powerful workloads through a distributed cloud model. When the company acquired the Longview Group in 2001, the new business unit foresaw an opportunity for front-office hosting and began working toward offering its Order Management System, LongView Trading System, as part of its already established SaaS delivery model.

    To support its SaaS offering, Linedata needed a world-class data center and a trustworthy network carrier. In this case study from CenturyLink, we learn how they found just that.

    Download this case study today to find out how, after evaluating numerous companies, Linedata knew that CenturyLink Technology Solutions had everything it was looking for and more.

    “From the start, CenturyLink demonstrated a strong understanding of the financial services market — we knew this wasn’t your average hosting and colocation provider,” said Toby Battell, Director of Hosted Infrastructure. “CenturyLink has a customer support team dedicated specifically to the financial vertical. When you call the CenturyLink Financial help desk they immediately ‘get it,’ and this was exactly the kind of partner we wanted to be working with.”

    For organizations looking to expand and operate better within their vertical, it’s critical to work with a data center and colocation provider that can match the speed of your business. Looking ahead, Linedata is eager to continue expanding its use of CenturyLink colocation services.

    As always, make sure your data center partner is capable of keeping up with your ever-evolving business demands and the demands of the industry.

    3:15p
    Aerospike Open Sources In-Memory Database, Raises $20M Series C

    Flash-optimized-database company Aerospike announced open source licensing for its in-memory NoSQL database and a $20 million Series C round of venture capital funding.

    The five-year-old company’s newly open-sourced database software has been gaining popularity because of the speed, reliability and extreme scalability it provides. It typically sits between a web application and a Hadoop instance, capturing big data for later analysis.
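
    As a rough sketch of that pattern, using the Aerospike Python client (host, namespace, set and bin names below are placeholders): a web application writes hot per-user data into Aerospike in real time, and a separate batch job can later drain it into Hadoop for analysis.

        import aerospike

        # Connect to a local Aerospike node; namespace and bins are placeholders.
        config = {"hosts": [("127.0.0.1", 3000)]}
        client = aerospike.client(config).connect()

        key = ("webapp", "user_events", "user:123")   # (namespace, set, user key)
        client.put(key, {"last_page": "/checkout", "clicks": 17})

        _, _, record = client.get(key)                # low-latency read path
        print(record)

        client.close()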

    The company claims that it is ten times faster than existing NoSQL solutions and 100 times faster than existing SQL solutions. Competitors in this space include MongoDB, Couchbase, and Cassandra.

    These features enable new business opportunities beyond the typical real-time bidding and digital marketing use cases, in enterprise, telecommunications and retail markets. According to a Forrester Research report, “storing and processing customer data in-memory enables upselling and cross-selling new products based on customer likes, dislikes, friend circles, buying patterns and past orders.”

    Joe Gottlieb, Aerospike CEO, said, “Aerospike was built to take advantage of modern processors and flash storage, and today powers many of the top digital marketing platforms that operate at extreme scale. We are now seeing strong demand from enterprises that must scale their revenue-critical applications with both predictable low latency and strong consistency.”

    Aerospike said it will use the new investment to expand its developer community and engineering and support teams, as well as its global sales and marketing initiatives. In conjunction with the round, Kittu Kolluri, a NEA general partner, Mohsen Moazami, a CNTP general partner, and Gilman Louie, of Alsop Louie Partners, are joining the Aerospike board of directors.

    Aerospike’s client software development kit will be licensed under the Apache License, Version 2.0, and the Aerospike database server will be licensed under the Affero General Public License.

    While the Community edition may give Aerospike more traction in the NoSQL field, the company will keep the Enterprise edition as a commercial offering with additional features like Cross-Data Center Replication.

    Srini Srinivasan, Aerospike co-founder and vice president of engineering and operations, said, “Now that Aerospike has been in production non-stop for close to four years in some of the world’s most data-intensive environments, we are fulfilling our commitment to developers by open sourcing the technology and expanding our community.”

    7:09p
    Google Dumps MapReduce in Favor of New Hyper-Scale Analytics System

    Google has abandoned MapReduce, the system it developed for running data analytics jobs spread across many servers and later described in a widely cited research paper, in favor of Cloud Dataflow, a new cloud analytics system it has built.

    MapReduce has been a highly popular infrastructure and programming model for parallelized, distributed computing on server clusters. It is the basis of Apache Hadoop, the Big Data infrastructure platform that has enjoyed widespread deployment and become the core of many companies’ commercial products.
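
    The programming model is easiest to see in the canonical word-count example. Below is a toy, single-machine Python sketch of the map, shuffle and reduce phases; the real systems distribute each phase across a cluster and handle fault tolerance along the way.

        from collections import defaultdict

        def map_phase(document):
            # Emit a (key, value) pair for every word in the document.
            for word in document.split():
                yield (word.lower(), 1)

        def shuffle(mapped_pairs):
            # Group all emitted values by key (done by the framework in practice).
            groups = defaultdict(list)
            for key, value in mapped_pairs:
                groups[key].append(value)
            return groups

        def reduce_phase(key, values):
            # Aggregate the values for one key.
            return (key, sum(values))

        docs = ["the fail whale", "the data center"]
        mapped = (pair for doc in docs for pair in map_phase(doc))
        counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
        print(counts)  # {'the': 2, 'fail': 1, 'whale': 1, 'data': 1, 'center': 1}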

    The technology is unable to handle the amounts of data Google wants to analyze these days, however. Urs Hölzle, senior vice president of technical infrastructure at the Mountain View, California-based giant, said it got too cumbersome once the size of the data reached a few petabytes.

    “We don’t really use MapReduce anymore,” Hölzle said in his keynote presentation at the Google I/O conference in San Francisco Wednesday. The company stopped using the system “years ago.”

    Cloud Dataflow, which Google will also offer as a service for developers using its cloud platform, does not have the scaling restrictions of MapReduce.

    “Cloud Dataflow is the result of over a decade of experience in analytics,” Hölzle said. “It will run faster and scale better than pretty much any other system out there.”

    It is a fully managed service that is automatically optimized, deployed, managed and scaled. It enables developers to easily create complex pipelines using unified programming for both batch and streaming services, he said.
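
    The Dataflow programming model was later open sourced as Apache Beam. As a hedged illustration of that unified-programming idea (the 2014 Dataflow SDK itself was Java-based), the Beam-style Python sketch below expresses the same word count as the earlier MapReduce example as a single pipeline, which can run as a batch job or, with an unbounded source, as a streaming job. Bucket paths are placeholders.

        import apache_beam as beam

        # One pipeline definition covers both batch and streaming execution;
        # only the source/sink and runner change.
        with beam.Pipeline() as pipeline:
            (
                pipeline
                | "Read"  >> beam.io.ReadFromText("gs://my-bucket/input-*.txt")
                | "Split" >> beam.FlatMap(lambda line: line.split())
                | "Pair"  >> beam.Map(lambda word: (word, 1))
                | "Count" >> beam.CombinePerKey(sum)
                | "Write" >> beam.io.WriteToText("gs://my-bucket/word-counts")
            )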

    All these characteristics address what Google thinks does not work in MapReduce: rapid data ingestion is hard, it requires a lot of different technologies, batch and streaming are handled separately, and MapReduce clusters always have to be deployed and operated.

    Hölzle announced other new services on Google’s cloud platform at the show:

    • Cloud Save is an API that enables an application to save an individual user’s data in the cloud or elsewhere and use it without requiring any server-side coding. Users of Google’s Platform-as-a-Service offering App Engine and Infrastructure-as-a-Service offering Compute Engine can build apps using this feature.
    • Cloud Debugging makes it easier to sift through lines of code deployed across many servers in the cloud to identify software bugs.
    • Cloud Tracing provides latency statistics across different groups (latency of database service calls for example) and provides analysis reports.
    • Cloud Monitoring is an intelligent monitoring system that resulted from integration with Stackdriver, a cloud monitoring startup Google bought in May. The feature monitors cloud infrastructure resources, such as disks and virtual machines, as well as service levels for Google’s services and more than a dozen non-Google open source packages.
    7:42p
    Ascent to Build Data Center in Hometown St. Louis

    After the recent sale of one of its Chicago data centers to free up capital for further expansion, Ascent has determined that its hometown of St. Louis, Missouri, is where its next opportunity lies.

    The company is developing a greenfield data center in the city on a 15-acre site that will offer 10 MVA of power or more. The design for the new project includes plans for an expandable, hardened building shell capable of withstanding tornado-speed winds, essential for the continuity of data center operations in the region.

    Located in close proximity to a range of businesses, including several medical and medical research facilities, academic institutions, startups and Fortune 500s, the facility will offer Ascent an opportunity to sell into a strong local base.

    The data center will be similar to the company’s CH1 and CH2 data centers in Chicago.

    The CH2 data center was recently acquired by Carter Validus in a modified sale-leaseback, with Ascent continuing to manage operations. Ascent sold the data center to help free up some capital for future projects, such as the one in St. Louis.

    While the developer has typically dealt with large enterprises, it has moved into providing a mix of retail and wholesale space (though still mostly wholesale). The new St. Louis facility will offer similar flexibility, including rack-ready, purpose-built, fully customizable and autonomous suites, as well as shared infrastructure for customers seeking enterprise-level wholesale solutions at a lower cost.

    “As a St. Louis-based company, we are excited to play an integral role in the city’s continued advancement as a center for job growth and innovation,” Phil Horstmann, CEO of Ascent, said. “Our offices are here, our roots are here, and our plans to develop a data center here are finally coming to fruition.”

    St. Louis data center market growing

    Missouri is occasionally lumped into what’s referred to as the “Silicon Prairie,” a group of Midwestern states with rising tech profiles.

    In 2011, St. Louis was said to be undergoing a renaissance of sorts, with a wealth of activity in the tech sector. Other data center providers there include Cosentry, which acquired XIOLINK to get into the market. Cosentry is a big player in the Midwest, with facilities in Omaha, Kansas City and Sioux Falls in its portfolio.

    Digital Realty Trust owns 210 North Tucker, a key data center and telecom hub in the St. Louis market. Within that building, Contegix operates across the entire 6th floor, offering colocation, managed services and cloud.

    St. Louis is one of several markets with sizeable cities and growing tech communities that many consider underserved. The new Ascent facility will further raise the area’s profile.

    “Ascent’s development is an important and substantial capital investment to an area undergoing significant revitalization by many key business players in St. Louis,” said Doug Rasmussen, senior vice president of the St. Louis Economic Development Partnership. “The development will not only be a boon for job growth in sectors like construction and the skilled trades but will also provide the kind of high tech jobs necessary for longevity and continued growth in today’s economy.”

