Data Center Knowledge | News and analysis for the data center industry
 

Thursday, June 20th, 2013

    11:31a
    Accel Partners Fuels Second Big Data Fund With $100 Million

    Accel Partners launches a second $100 million fund for big data, GoodData receives a $22 million investment led by Latin America's TOTVS, and Hortonworks certifies Concurrent’s Cascading application framework.

    Accel Partners announces $100M Big Data Fund 2. Seeking to further its big data portfolio, leading venture capital firm Accel Partners announced that it is allocating $100 million for a new Big Data Fund 2. The additional capital will support entrepreneurs who are using the technology platforms built in the first wave of big data startups to create Data Driven Software (DDS) designed to help the workforce at large make smarter decisions through deeper insights. “We are seeing an accelerated rate of innovation in big data, with the newest generation of entrepreneurs re-imagining ways to extract the most value out of big data and fundamentally change the way we work and process information,” said Ping Li, Partner, Accel Partners. “In addition to the capital support from Big Data Fund 2, we continue to deepen the expertise within our network with thought leaders from related fields in data driven enterprise software applications, and are excited to share that Anthony Deighton and Shlomo Kramer will be joining our Big Data Advisory Council.” Through its initial Big Data Fund, Accel helped fund companies such as Cloudera, Couchbase, Nimble Storage, Prismatic, RelateIQ, Sumo Logic and Trifacta.

    GoodData receives $22 million investment. Cloud-based big data analytics platform company GoodData announced that TOTVS Ventures, the largest enterprise software company in Latin America, is leading a $22 million Series D investment in GoodData. TOTVS Ventures joins existing GoodData investors Andreessen Horowitz, General Catalyst Partners, Next World Capital and Tenaya Capital, who also participated in the latest round. This latest round raises GoodData’s total funding to $75.5 million. With this investment, GoodData gains access to the $9 billion Latin America software market through TOTVS’s distribution channels in 24 markets throughout the region. “By combining GoodData’s intuitive analytics with our market-leading enterprise software, we are creating the cloud analytics leader in Latin America and giving access to Big Data solutions to our clients in Latin America,” said Alexandre Dinkelmann, executive vice president of strategy and finance at TOTVS. “We see enormous benefit for TOTVS’s customers, who now have access to the most innovative and advanced business intelligence platform.”

    Concurrent teams with Hortonworks. Enterprise big data application platform company Concurrent announced that Hortonworks has certified its Cascading application framework against the Hortonworks Data Platform (HDP). The certification ensures that enterprises can take advantage of Concurrent’s Cascading application framework and HDP to ease Hadoop big data application development and bring machine-learning applications to the masses. “Hadoop adoption continues to grow as organizations look to take advantage of new data types and build new applications for the enterprise,” said Shaun Connolly, Vice President, Corporate Strategy of Hortonworks. “By combining our enterprise-grade data platform and unparalleled growing ecosystem with the power, maturity and broad platform support of Concurrent’s Cascading application framework, we have now closed the modeling, development and production loop for all data-oriented applications.”

     

    12:00p
    Many Factors Boosting Data Center Demand

    An example of the hot aisle in a Savvis data center in the London market. (Photo: Savvis)

    The focus on the modern data center only continues to grow. As the business world digitizes even more infrastructure components, the data center will sit squarely in the middle of all of these new deployments. In fact, almost all of the new technologies and solutions coming to market call the data center home. We can now call it “The Data Center of Everything.”

    There are many more users coming online. These users are sharing more data and requiring more services to be delivered to them. Data and services must remain highly available and resilient. Furthermore, as more users fill the data center environment, high-density computing and highly efficient systems are making their way into data center 2.0 infrastructures.

    Large data center providers are sitting ahead of the curve for a few very specific reasons: they caught the cloud wave at the right time and deployed internal systems capable of handling the influx of data, users and business that came to the data center for help.

    Infrastructure Without Walls

    As the data center of everything, organizations are looking directly to these new types of platforms to help them with many new kinds of business challenges. Companies of all sizes and verticals now see it as financially feasible and logical to move toward a data center model, and for good reason: the data center has evolved from a brick-and-mortar shop to an infrastructure without walls. These environments are logically connected to create massive resource pools and highly available data center platforms.

    So, why is it good to be in the data center business?

    • More data center services. Today’s data center environment isn’t there just to host servers and hardware. Data center providers are proactively building services into their offering stacks to entice new, modern customers. That means offering services around virtualization, cloud computing, disaster recovery, and even hybrid data center extensions. All of these new services are evolving because more organizations are moving to a data center model. Infrastructure is becoming less expensive and the landscape is a lot more competitive. For many IT shops, it simply makes sense to move to a hosted data center model.
    • More data to be managed. The average user may utilize three or four devices to access cloud-based resources. Whether it’s a simple email or an entire desktop, these on-demand services have to be delivered from somewhere. Furthermore, all of these devices and connections transmit data and information. Data centers are becoming the hub for big data management and big data services. The highly distributed nature of big data has helped data center providers find yet another niche where they can help. By connecting their data centers together, providers can offer large networks where massive amounts of data can be analyzed and quantified. These types of services will only continue to expand as big data grows as an industry demand.
    • More users coming to the cloud. The use of Internet services and wide area networking (WAN) has truly exploded. Now, everything from IPTV to entire desktops is streamed via high-bandwidth resources, and more bandwidth is becoming available to both users and organizations. The data center of everything is also the home of the cloud. As the central hub for all connectivity and data distribution, the modern data center is tasked with hosting some of the most advanced technologies out there. Couple this with high-density computing, multi-tenancy storage, advanced networking technology, and top-down management solutions in the form of a data center operating system, and you will see the blueprint for the data center 2.0 platform.
    • Global connectivity. The world is becoming more connected, and the drive to conquer distance is driving the creation of the data center without walls. Modern technologies allow us to place more users, applications and workloads on a single blade. In turn, we are able to create large, logical networks capable of global connectivity and failover. New types of load-balancing solutions allow for global traffic management and control. This means that the data center is no longer a single entity; rather, it is a node within a large cluster of interconnected data center environments. Basically, this is the formulation of the cloud and the globally connected networking environment.

    The modern data center will only continue to evolve. Business drivers and demands are growing and more organizations are offloading services to the data center platform. Remember, the data center environment has become the heart of any organization as the central IT resource. These environments are being carefully managed, monitored and planned around for the future.

    Conversations around pre-fabricated and modularized data centers are already growing rapidly. According to the 2012 Uptime Institute survey, 41 percent of respondents said they are using traditional data center environments supplemented with pre-fabricated components. Another 19 percent said they already have a data center built entirely out of pre-fabricated systems.

    As reliance on the data center continues to grow, providers must stay ahead of the curve. This means understanding new demands and delivering data center services around those needs. Whether those are new types of cloud platforms or better big data management systems, the data center will be the home of it all.

    12:30p
    Accelerating Oracle RAC Performance with Caching SAN Adapters

    Cameron Brett is Director, Solutions Marketing, for QLogic.


    Today’s mission-critical database applications, such as Oracle Real Application Clusters (RAC), feature Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) workloads that demand the highest levels of performance from servers and their associated shared SAN storage infrastructure. The introduction of multi-processor CPUs coupled with virtualization technologies provides the compute resources required to meet these demanding workloads within Oracle RAC servers, but it also increases the demand for high-performance, low-latency, scalable I/O connectivity between servers and shared SAN storage. Flash-based storage acceleration is a high-performance, scalable technology that meets this ever-growing requirement for I/O.

    Flash-based storage acceleration is implemented via one of two fundamentally different approaches: first – flash-based technology as the end-point storage capacity device in place of spinning disk, and second – flash-based technology as an intermediate caching device in conjunction with existing spinning disk for capacity. Solutions which utilize flash-based technology for storage are now widely available in the market. These solutions, which include flash-based storage arrays and server-based SSDs, do address the business-critical performance gap, but they use expensive flash-based technology for capacity within the storage infrastructure and require a redesign of the storage architecture.

    A new class of cache-based storage acceleration via server-based flash integration, known as a Caching SAN Adapter, is the latest innovation in the market. Caching SAN adapters use flash-based technology to address the business-critical performance requirement, while seamlessly integrating with the existing spinning disk storage infrastructure. Caching SAN adapters leverage the capacity, availability and mission-critical storage management functions for which enterprise SANs have historically been deployed.

    Adding large caches directly into servers with high I/O requirements places frequently accessed data closest to the application, “short stopping” a large percentage of the I/O demand at the network edge, where it is insensitive to congestion in the storage infrastructure. This effectively reduces the demand on storage networks and arrays, improves storage performance for all applications (even those that do not have caching enabled), and extends the useful life of existing storage infrastructure. Server-based caching requires no upgrades to storage arrays and no additional appliances on the data path of critical networks, and storage I/O performance can scale smoothly with increasing application workload demands.
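    The idea can be illustrated with a short, hypothetical read-through cache sketch in Python (illustrative only; the class and names are invented for this example and do not represent QLogic's implementation): frequently accessed blocks are answered from a local cache at the server edge, and only misses generate traffic on the SAN.

        from collections import OrderedDict

        class ReadThroughBlockCache:
            """Toy read-through LRU cache: repeated reads are 'short stopped'
            at the server edge, and only misses travel to the backing SAN."""

            def __init__(self, backing_store, capacity_blocks):
                self.backing_store = backing_store   # object exposing read_block(lba)
                self.capacity = capacity_blocks
                self.cache = OrderedDict()           # lba -> block data, in LRU order
                self.hits = 0
                self.misses = 0

            def read_block(self, lba):
                if lba in self.cache:
                    self.cache.move_to_end(lba)      # refresh LRU position
                    self.hits += 1
                    return self.cache[lba]
                self.misses += 1                     # only misses reach the SAN
                data = self.backing_store.read_block(lba)
                self.cache[lba] = data
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)   # evict least recently used
                return data

    The higher the hit rate, the larger the share of I/O that never leaves the server, which is why the same technique also relieves pressure on applications that are not themselves cached.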

    To verify the effectiveness of a Caching SAN adapter, Oracle’s ORION workload tool was used to mimic and stress a storage array in the same manner as applications designed with an Oracle back-end database. Within the test, the Caching SAN adapter displayed the required performance scalability – 13x IOPS improvement over non-cached operations – to support the unique requirements of virtualized and clustered environments such as Oracle RAC.

    Orion Test Results


    In order to support solutions spanning multiple physical servers (including clustered environments such as Oracle RAC) caching technology requires coherence between caches. Traditional implementations of server-based flash caching do not support this capability, as the caches are “captive” to their individual servers and do not communicate with each other. While they are very effective at improving the performance of individual servers, providing storage acceleration across clustered server environments or virtualized infrastructures which utilize multiple physical servers is beyond their reach. This limits the performance benefits to a relatively small set of single server applications.

    Addressing the Drawbacks

    Caching SAN adapters take a new approach to avoiding the drawbacks of traditional caching solutions. Rather than creating a discrete captive-cache for each server, the flash-based cache can be integrated with a SAN HBA featuring a cache coherent implementation which utilizes the existing SAN infrastructure to create a shared cache resource distributed over multiple servers. This eliminates the single server limitation for caching and opens caching performance benefits to the high I/O demand of clustered applications and highly virtualized environments.

    This new technology incorporates a new class of host-based, intelligent I/O optimization engines that provide integrated storage network connectivity, a Flash interface, and the embedded processing required to make all flash management and caching tasks entirely transparent to the host. All “heavy lifting” is performed transparently onboard the caching HBA by the embedded multi-core processor. The only host-resident software required for operation is a standard host operating system device driver. In fact, the device appears to the host as a standard SAN HBA and uses a common HBA driver and protocol stack that is the same as the one used by the traditional HBAs that already make up the existing SAN infrastructure.

    Finally, the new approach guarantees cache coherence and precludes potential cache corruption by establishing a single cache owner for each configured LUN. Only one caching HBA in the accelerator cluster is ever actively caching each LUN’s traffic. All other members of the accelerator cluster process all I/O requests for each LUN through that LUN’s cache owner, so all storage accelerator cluster members work off the same copy of data. Cache coherence is guaranteed without the complexity and overhead of coordinating multiple copies of the same data.
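    The ownership rule is simple enough to sketch in a few lines of hypothetical Python (illustrative only; the class names, the FakeSAN stand-in and the routing logic are invented for this example and are not QLogic's implementation): each LUN is assigned exactly one cache owner in the cluster, and every member routes that LUN's I/O through the owner, so a single cached copy serves the whole cluster and nothing needs to be kept in sync.

        class FakeSAN:
            """Stand-in for the shared SAN storage (illustrative only)."""
            def read(self, lun, lba):
                return "data-from-lun%d-lba%d" % (lun, lba)

        class CachingAdapter:
            """One hypothetical member of the accelerator cluster."""
            def __init__(self, name):
                self.name = name
                self.cache = {}                      # (lun, lba) -> cached data

            def cached_read(self, san, lun, lba):
                key = (lun, lba)
                if key not in self.cache:            # miss: fetch once from the SAN
                    self.cache[key] = san.read(lun, lba)
                return self.cache[key]

        class AcceleratorCluster:
            """Each LUN has exactly one cache owner; all members route that
            LUN's I/O through the owner, so only one cached copy exists."""
            def __init__(self):
                self.lun_owner = {}                  # lun -> owning CachingAdapter

            def assign_owner(self, lun, adapter):
                self.lun_owner[lun] = adapter

            def read(self, san, lun, lba):
                owner = self.lun_owner[lun]          # non-owners forward, never cache
                return owner.cached_read(san, lun, lba)

        # Usage: two adapters, LUN 0 owned by hba_a and LUN 1 owned by hba_b.
        hba_a, hba_b = CachingAdapter("hba_a"), CachingAdapter("hba_b")
        cluster = AcceleratorCluster()
        cluster.assign_owner(0, hba_a)
        cluster.assign_owner(1, hba_b)
        print(cluster.read(FakeSAN(), 0, 42))        # always served via hba_a's cache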

    By clustering caches and enforcing cache coherence through a single LUN cache owner, this implementation of server-based caching addresses all of the concerns of traditional server-based caching and makes the caching SAN adapter the right choice for implementing flash-based storage acceleration in Oracle RAC environments.

    Notes:

    Details on Orion, the Oracle database application performance test: Orion is an I/O testing tool specifically designed to simulate workloads using the same I/O software stack as the Oracle database application.

    The following types of workloads are currently supported (a rough Python sketch of this kind of workload appears below):

    • Small Random I/O: Best suited when testing for an OLTP database to be installed on your system. Orion generates a random I/O workload with a known percentage of reads versus writes, a given I/O size, and a given number of outstanding I/Os.
    • Large Sequential Reads: Typical of DSS (Decision Support Systems) and data warehousing applications; bulk copy, data load, backup and restore are typical activities in this category.
    • Large Random I/O: Sequential streams access disks concurrently, and with disk striping (i.e., RAID) a sequential stream is spread across multiple disks, so at the disk level you may see multiple streams as random I/Os.
    • Mixed workloads: A combination of small random I/Os and large sequential (or large random) I/Os, allowing you to simulate an OLTP workload of fixed random reads and writes alongside a 512KB sequential-stream backup workload.

    At 100 percent cache coverage, the QLogic solution exhibited an average 11x IOPS improvement over the range of the testing, with a maximum of 13x.
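    For readers who want to experiment without an Oracle stack, the gist of the small random I/O case can be approximated in a few lines of Python (a rough, hypothetical sketch: it assumes a POSIX system and an existing test file larger than the I/O size, and it does not reproduce ORION's actual methodology, which drives I/O through the same software stack as the Oracle database):

        import os, random, time

        def small_random_io(path, io_size=8192, duration_s=5, write_pct=20):
            """Issue io_size-byte reads/writes at random aligned offsets in an
            existing file and report approximate IOPS (illustrative only)."""
            fd = os.open(path, os.O_RDWR)             # POSIX-only (os.pread/pwrite)
            file_size = os.fstat(fd).st_size          # assumed larger than io_size
            buf = os.urandom(io_size)
            ops, deadline = 0, time.time() + duration_s
            while time.time() < deadline:
                offset = random.randrange(0, file_size - io_size)
                offset -= offset % io_size            # align to the I/O size
                if random.randrange(100) < write_pct: # e.g. 20% writes, 80% reads
                    os.pwrite(fd, buf, offset)
                else:
                    os.pread(fd, io_size, offset)
                ops += 1
            os.close(fd)
            return ops / duration_s                   # approximate IOPS

        # Example with a hypothetical test file path:
        # print(small_random_io("/tmp/orion_testfile"))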

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:01p
    CloudSigma Version 2.0 Embraces SDN and SSDs

    CloudSigma wants to be the silent partner in your computing. The company has released CloudSigma 2.0, and the new version number isn’t just marketing: the software features a completely new codebase built to take advantage of new technologies, including software-defined networking (SDN) and solid-state drives (SSDs).

    The Zurich, Switzerland-based company is very different from most other cloud computing providers. It wants to make public cloud and owned infrastructure completely seamless. Founded in 2009, the company grew out of frustrations with public cloud offerings at the time.

    “What we found was that public cloud offerings were restrictive and proprietary,” said cofounder and CEO Robert Jenkins. The company launched in 2010, and expanded into the U.S. in 2011. CloudSigma was self-funded, but it did take on a small group of angel investors, including Anthony Foy from Interxion. The company has around 40 employees.

    The company made its platform as open as possible, focusing on the qualitative rather than the quantitative. The business model itself is a different beast.

    “It’s a challenge to explain it to people because it’s quite different than buying servers and drives,” said Jenkins. “You buy 300GB of RAM and we don’t care how you arrange it. You can spread it across a couple or hundreds of servers.  We sell resources, not servers or drives. You don’t buy on a per server basis; we look at aggregate consumption. It’s just like electricity. Customers can build it out and it’s a very efficient way to do so. Longer term usage combined with short term purchasing. If load goes up, you spin up some more servers. Our system looks every 5 minutes and we charge on what you’re using. Customers have a credit limit or pre-paid balance and purchase however they want.”
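    That consumption-based model is easy to sketch. The hypothetical Python below (illustrative only; the unit prices and resource names are invented and do not reflect CloudSigma's actual price list or billing code) totals a bill from aggregate resource samples taken every five minutes, regardless of how the resources are spread across servers.

        # Hypothetical unit prices per resource-hour; not CloudSigma's real pricing.
        UNIT_PRICE = {"ram_gb": 0.015, "cpu_ghz": 0.010, "ssd_gb": 0.0002}
        SAMPLE_HOURS = 5 / 60.0                        # usage sampled every 5 minutes

        def charge(samples):
            """samples: one dict per 5-minute interval describing aggregate use,
            e.g. {"ram_gb": 300, "cpu_ghz": 40, "ssd_gb": 500}. Billing is based
            on aggregate consumption, not on individual servers or drives."""
            total = 0.0
            for sample in samples:
                for resource, amount in sample.items():
                    total += amount * UNIT_PRICE[resource] * SAMPLE_HOURS
            return total

        # One hour (12 samples) of 300 GB RAM, 40 GHz CPU and 500 GB SSD in aggregate:
        hour = [{"ram_gb": 300, "cpu_ghz": 40, "ssd_gb": 500}] * 12
        print(round(charge(hour), 2))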

    What’s New in Version 2.0

    As mentioned before, the company started from scratch with a completely new code base to take advantage of new technologies.

    • Direct private patch/hybrid cloud capability: CloudSigma allows customers to connect their infrastructure to CloudSigma’s public cloud VLANs, improving data portability by exactly mirroring on-premise infrastructure (regardless of software or OS) and improving security, since VMs are not on public IPs. “Us and Amazon are the only two clouds that can take a physical cross connect and put it in our cloud,” said Jenkins. “With private networking, you can choose a completely private IP solution, a private line coming into the cloud. You wouldn’t know whether the server was in the cloud or yours if you want. One environment flows into another.”
    • Disaster Recovery as a Service: CloudSigma streamlines data portability through new private patching, giving customers instant data access and recovery while further protecting against cyber-attacks, data leakage and malicious hacking by avoiding public IPs. CloudSigma is completely open: anything that runs on x86 can move in and out, meaning customers are never tied to its platform. This also has the advantage of enabling cloud disaster recovery quickly and easily. “We’re allowing customers to do live snapshots and backup to another cloud location,” said Jenkins. “It’s an ideal solution – run the same environment in the public cloud. You can run any x86 in our cloud, unmodified. You don’t need to constantly run two environments. You can seamlessly have resources going private to public. You can do that in CloudSigma.”
    • All-SSD, high-performance storage: The solution incorporates all solid-state drive (SSD) storage, eliminating I/O bottlenecks and CPU wait time to provide the highest levels of speed and stability.
    • Advanced CPU options for better application performance: The company places no restrictions on VM size, offers all resources, including CPU and RAM on a utility basis, and now incorporates advanced CPU options with a fully-exposed architecture, including CPU and NUMA visibility to meet any app requirements. “What this means is people running big machines, it has a big performance difference,” said Jenkins.
    • SDN support: CloudSigma now includes full SDN support with the ability to network cloud private networks directly onto physical customer lines without the need for a VPN, offering very high network throughput with low latency.

    Jenkins disclosed that the company is working on rolling out a marketplace to offer Platform as a Service options.

    2:30p
    Using Airflow Containment to Reclaim Power Capacity

    The modern data center is home to all of the new platforms which house corporate IT environments. Now, these data center environments are tasked with hosting more users, more workloads and a lot more data. With the cloud, virtualization and high-density computing, data centers have also been tasked with operating in a more optimal state. This means finding ways to better contain airflow and overcome challenges with limited power availability.

    In this case study, we learn how Seattle-based TeleCommunication Systems (TCS) saw plans for its new data center run into a wall when the local utility company said the grid didn’t have enough available power to support the expansion.

    “We found out we had absorbed every spare ounce of energy the building had available,” said Stephen Walgren, TCS Data Facility Engineer. “For us to grow we would have to go to Seattle City Light and have them put in a new primary feed.”

    But in this case, it seemed running into a wall was exactly what TCS needed. How so? Utilizing a “cooling wall” introduced by the design-build firm of McKinstry, TCS’ new data center not only overcame the problem of limited power availability, it made TCS one of the most efficient data centers in the Pacific Northwest. Deployed through an evaporative-only cooling wall and airflow containment provided by Chatsworth Products (CPI) Passive Cooling Solutions, the TCS data center expansion went on to earn two ASHRAE awards—one national and one regional—and an average PUE of 1.15.
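    For context, power usage effectiveness (PUE) is simply the ratio of total facility energy to the energy delivered to IT equipment, so a PUE of 1.15 means roughly 15 percent overhead for cooling, power distribution and other facility systems. A minimal Python illustration follows (the numbers are illustrative only, not TCS's actual figures):

        def pue(total_facility_kwh, it_equipment_kwh):
            """Power Usage Effectiveness: total facility energy divided by the
            energy delivered to IT equipment. 1.0 is the theoretical ideal."""
            return total_facility_kwh / it_equipment_kwh

        # Illustrative numbers only: a facility drawing 1,150 kWh to deliver
        # 1,000 kWh to the IT load has a PUE of 1.15.
        print(pue(1150, 1000))   # -> 1.15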


    [Image source: Chatsworth Products, Inc.]

    As reliance on the data center continues to grow, so will the need for efficiency. Data center operators will need to work with intelligent and powerful technologies capable of reducing PUE and improving airflow. Download this case study today to learn how TCS, McKinstry and CPI have proven that an efficient data center design can overcome power limitations and drastically reduce energy consumption, in this case by an estimated 513,590 kWh per year.

    4:54p
    HostingCon Insights: Hosting Industry Growing & Evolving

    Hosting providers, gathered this week in Austin for HostingCon 2013, expressed apprehensive optimism about the current state of the hosting and cloud industry. The conference covered a range of topics, such as Internet freedom, how hosting providers can differentiate, and how and where to expand in Europe and AsiaPac.

    The conference drew more than just hosting providers. Data center providers like Cobalt Data Centers, Interxion, Cologix (which announced the completion of an expansion in Montreal), Phoenix NAP, Tierpoint, and Telecity Group were also on hand. The event also highlighted the growing role of multi-service providers, and the coming hybrid wave of colocation and cloud. Companies like Peer1 Hosting, Telx, VAZATA, OnRamp and Tierpoint all discussed finding the right mix of services for the right customers, which we’ll explore more in a future article.

    The industry is still abuzz from IBM’s acquisition of SoftLayer, a pillar of the hosting industry. IBM essentially consumed a private provider that was holding its own against increasing threats coming from outside of the traditional hosting industry, like Amazon Web Services. I learned that DH Capital took part in the transaction, and that the deal was done at a very attractive multiple. Also, it appears that IBM will let SoftLayer operate autonomously under its own brand name, and that this was a major factor in SoftLayer’s choice of suitor.

    Other hosting providers like SingleHop saw the acquisition as potential opportunity to step up. SingleHop has a lot of the same DNA as SoftLayer — it’s also a very automated, dedicated and cloud hosting provider that offers everything through a single panel. Further, SingleHop has seen triple digit revenue growth for several years. The company recently expanded to Amsterdam and has been looking heavily into AsiaPac and South America.

    International Outlook

    On a panel about international expansion to Europe and Asia, moderated by 451 Research’s Kelly Morgan, the consensus seemed to be that Amsterdam was most attractive for hosting providers looking to establish a primary footprint in Europe, while Singapore was recognized as key to AsiaPac operations. Amsterdam’s rich connectivity and favorable tax structure mean that hosting providers are looking at it most aggressively as their initial base of European operations. London, a perennial favorite locale for data centers and commerce, was described as somewhat insular and falling a bit out of favor. However, Europe is not one market; rather, it can be seen as multiple country-specific markets, because IT managers from different companies often want to keep data within borders.

    How do hosting providers choose their data center provider when expanding to Europe? Perhaps the most important consideration, as Martijn Kooiman from TeleCity Group put it: “You need to choose a partner with the ability to scale, but one that can also start with you.” The panel also included wisdom from other major data center providers in Europe, through Interxion’s Jelle Frank van der Zwet, SoftLayer’s Todd Mitchell and Readyspace’s David Loke. Loke provided insights into expanding into AsiaPac in particular.

    Industry Insights

    Structure Research’s Phil Shih outlined the industry outlook from a high level, providing a temperature check for each Internet infrastructure sector. Colocation providers have been growing at 5 to 20 percent, with cloud growing the fastest at 40 to 50 percent. The mix is trending toward managed services, said Shih, with both cloud and colo providers looking to managed services as their way to raise Average Revenue Per User (ARPU).

    “The big trends I’ve noticed is time to market has matured, and companies are having success growing customers,” said Shih. Another trend discussed was the uptick in companies leaving Amazon’s public cloud (AWS) for operator clouds. In examining two industry successes, Rackspace and SoftLayer, one point Shih made that bodes well for the data center industry is that neither company got into the data center building business. Hosting providers in general aren’t about to get into the physical infrastructure game, which means they’ll continue to be growing and lucrative customers for the data center industry.

    An Evolving Industry

    In a presentation titled “Paradigm Shift: How to React in the Face of Industry Change,” Maxwell Wessel of SAP North America described what disruptive technologies mean for the industry. He drew several parallels from industries like pharmaceuticals and steel. His five key points were:

    • Everything changes. Embrace it
    • Never fight a disruptive entrant head on
    • Disruption isn’t defeat
    • Business hinges on understanding the customer
    • Compete where you can win

    These points make a lot of sense in an industry that seems to be converging into one, with colo providers offering cloud, IBM getting deeper into hosting, and the multi-service provider on the rise.

    Wessel finds that small, nimble companies win when innovation is disruptive in nature. He talked about the theory of disruption and the importance of the “extendable core.” His major example of an extendable core was Airbnb, a disruptive player in the hotel/rented-room space that can move upmarket to threaten the hotel industry, including the likes of the Four Seasons, while also competing with and disrupting brands such as Best Western.

    Christian Dawson, co-founder and chairman of the i2Coalition, presented the general session on Tuesday and held a last-minute panel on the final day to discuss why web hosts should care about NSA government surveillance and PRISM. Dawson’s day job is with hosting provider ServInt, but he tirelessly fights for our rights via the Internet Infrastructure Coalition.

    The Tuesday session was titled “The Fight For Our Future,” and included Reddit’s Erik Martin and Fark’s Drew Curtis, as well as Michael McGeary from Engine Advocacy, Michael Petricone from CEA and Julie Samuels from the Electronic Frontier Foundation. The panel discussed why an unregulated Internet is so important and why patent trolls are so corrosive to the economy. Panelists said hosting providers have a responsibility to fight for our rights because of the position they hold; government will listen to hosting providers because they are job creators. Had it passed, SOPA (the Stop Online Piracy Act) would have been disastrous to the new Internet-based economy.

    The conference provided a great peek into the next generation of infrastructure and how the industry is evolving. Rackspace’s CTO John Engates was even on hand for a panel, providing a rare opportunity to hear one of the industry’s greatest minds speak about driving open standards and how he sees cloud shaping up in general.

    The mood was good throughout the event, with apprehensive optimism abounding. The industry is growing nicely, and considering how interrelated all companies in the Internet infrastructure world are, that’s good news for all.

