Data Center Knowledge | News and analysis for the data center industry
 

Thursday, January 30th, 2014

    12:30p
    Expedient Completes Expansion and Announces Additional Growth

    Expedient completes data center expansion in Pittsburgh and announces three new projects for 2014, Zayo upgrades electrical and security infrastructure at its Las Vegas colocation facility, Internap’s CDN helps Sportradar stream live professional sporting events, and Global Transit expands its European presence with Interxion in London.

    Expedient completes expansion and announces more. Expedient announced that it has completed an expansion project, adding more than 7,000 square feet of raised-floor space to the Pittsburgh data center at Allegheny Center Mall (ACM), which first opened in September 2008. The company spent approximately $3 million on this most recent project and now has nine data centers throughout the Midwest, Mid-Atlantic and Northeast regions, interconnected via a 10Gbps fiber network. “Now that our Pittsburgh and Baltimore expansions are complete, we are pleased to announce three additional data center expansions for 2014,” said Shawn McGorry of Expedient. Cleveland is a possible site for a $7 million, 10,000 square foot data center, which would complement the two data centers the company currently operates there. Expedient will add a second 20,000 square foot data center in the Columbus market, and in Indianapolis the company is adding 6,000 square feet adjacent to its existing data center. Upon completion of that expansion, the total raised-floor footprint at the company’s Carmel facility will exceed 25,000 square feet. These investments will bring Expedient’s total data center capacity to over 250,000 square feet. “Our solutions are unique in that we effectively offer customers one giant data center, at 9, soon to be 11 different sites, which are seamlessly connected by a fully redundant 10Gig fiber ring. This is especially appealing to clients seeking geographic diversity to help meet their security and disaster requirements,” added McGorry.

    Zayo Upgrades Las Vegas Colocation Facility. Zayo Group announced that it has completed an upgrade of its Las Vegas colocation facility that supports improved uptime and security. A new 2 megawatt utility feed is supported by a 2 megawatt generator and a redundant UPS system. Two 240 ton chillers are complemented by free-cooling pre-coolers to improve power efficiency. The facility also upgraded to a new building management system and a C-Cure security system, monitored by zColo 24/7, and added 28 security cameras. “Upgrading the Las Vegas facility enables us to provide a more resilient environment for our current customers, plus those searching for cost-effective, yet competitive, colocation solutions,” said Greg Friedman, vice president of zColo. “Gaming operators, financial services, and cloud computing companies are looking for the essential disaster recovery and business continuity services that this facility now offers.”

    Internap selected by Sportradar. Internap Network Services (INAP) announced that Sportradar, the market-leading supplier of sports and betting-related data services, is using Internap’s content delivery network (CDN) to stream live professional sporting events and data directly into 1,000 retail betting shops worldwide. Given the real-time nature of sports and betting, Sportradar needed a high-performance CDN that could minimize latency and keep up with the demands of its global customer base. “Our business extends from our experts recording statistics and analysing matches at stadiums all the way to our Live Channel displays in retail bookmakers worldwide. Delivering high quality video streams and up-to-the-minute data are critical,” said Joern Anhalt, managing director of media rights at Sportradar. “In a competitive review, Internap stood out in providing the performance and resilience required to support today’s betting landscape through a highly customizable, turnkey solution that is easy to deploy and can seamlessly support our expansion into new markets with Internap’s global infrastructure footprint.”

    Global Transit opens Interxion London presence. Interxion (INXN) announced that Global Transit has opened a Point of Presence (PoP) at its London data centre campus as part of a wider strategy to expand its international network. With PoPs already established around the globe, Global Transit views Europe as its next key hub. The new London presence gives the company enhanced connectivity to Europe and access to Interxion’s vast community of customers, as well as Europe’s leading Internet Exchanges such as LINX (UK), DE-CIX (Germany) and AMS-IX (Amsterdam). “This partnership is further evidence of Interxion’s expertise in supporting companies as they expand their reach across Europe,” said Doug Loewe, UK Managing Director at Interxion. “Our London campus offers some of the best connectivity in Europe, making it an ideal location to support Global Transit’s initial expansion across the region. Moreover, it is great to see that Global Transit is to become a member of our dynamic carrier community and we are now able to offer their services to our customers.”

     

    1:00p
    Blu-Ray in the Data Center? Facebook Creates 1 Petabyte Storage Rack

    Matthew Niewczas, a hardware test engineer at Facebook, stands by the cold storage unit, where he explained the technology of using Blu-Ray discs to store data. (Photo: Colleen Miller.)

    SAN JOSE, Calif. - Will Blu-Ray discs find a second life in the data center? Facebook has developed a storage system that packs 1 petabyte of data into a single cabinet filled with 10,000 Blu-Ray optical discs.

    Facebook hopes to put the Blu-Ray storage unit into production by the end of this year, providing “ultra-cold” storage for older photos. The company showed off its prototype this week at the Open Compute Summit. But it’s not just a novelty. Over the long term, Facebook believes Blu-Ray has the potential to move beyond its origins in consumer video and become a durable, cost-effective data storage medium.

    Blu-Ray is not ideal for primary storage because data can’t be retrieved instantly. But it has other selling points, especially cost. Using Blu-Ray discs offers savings of up to 50 percent compared with the hard disks Facebook is using in its newly completed cold storage facility at its data center in Oregon. The Facebook prototype also uses 80 percent less energy than those hard-disk cold storage racks, since the Blu-Ray cabinet only draws power when it is writing data during the initial “burn,” and doesn’t use energy when it is idle.

    Potential for Broader Use – Eventually

    “Within five years, this is going to be a new way of storing data, and will make its way into uses that have been reserved for magnetic disks,” said Giovanni Coglitore, hardware engineering director at Facebook. “I think it will start creeping into warmer and warmer storage tiers.”

    Blu-Ray is an optical data storage format created as a high-volume successor to DVDs. The format enjoyed some of its best sales yet during the holiday season, but still holds just 25 percent market share in the home video market, trailing DVD as both formats face a formidable challenge from the rapid growth in streaming video.

    Facebook stumbled into its Blu-Ray experiment as part of a broader evaluation of storage options. Facebook is eager to use NAND Flash memory wherever possible to gain benefits in performance and cost. But it soon became clear that the economics of Flash would not allow ubiquitous deployment anytime soon. So the company shifted its focus to optical drives.

    Robotic Retrieval System

    In less than six months, the Facebook team built a prototype using Panasonic discs and a robotic retrieval system similar to those used to retrieve tape from archived storage units. The end result was a seven-foot cabinet that’s compatible with the Open Rack standard. The rack includes 24 magazines, each housing 36 sealed cartridges, each of which contains 12 Blu-Ray discs. When a disc is needed, the robotics system retrieves the magazine. See this video from the Facebook Engineering team for a demonstration.
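    Those figures also square with the petabyte claim. A quick back-of-the-envelope check, assuming roughly 100GB per disc (a BDXL-class capacity; the per-disc figure is not stated here):

        # Rough arithmetic behind the 1-petabyte-per-rack figure.
        # The per-disc capacity is an assumption (~100 GB, e.g. triple-layer BDXL);
        # the article only gives the magazine/cartridge/disc counts.
        magazines_per_rack = 24
        cartridges_per_magazine = 36
        discs_per_cartridge = 12

        discs_per_rack = magazines_per_rack * cartridges_per_magazine * discs_per_cartridge
        print(discs_per_rack)  # 10368 -- the "10,000 discs" cited above

        assumed_gb_per_disc = 100  # hypothetical BDXL-class capacity
        total_pb = discs_per_rack * assumed_gb_per_disc / 1_000_000
        print(f"{total_pb:.2f} PB")  # ~1.04 PB, consistent with the 1 petabyte claim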

    Each disc is certified to retain data for 50 years. The system can operate in a wide range of environmental conditions.

    “The heat of the jungle and the cold of the arctic do not affect it,” said Coglitore. “You could dump these discs in water and not lose data.”

    Complement to HDDs, Not an Alternative

    Blu-Ray could also allow some new approaches to energy, which is needed primarily for the initial burn-in of data. “Once done with initial burn, could reallocate power,” said Coglitore. “We think it’s an opportunity to rethink power. This is a whole new opportunity to rethink how we look at storage.”

    For all its potential in cold storage, Blu-Ray won’t be a rival to hard disk drives anytime soon. Coglitore said that additional development could broaden its use cases.

    “This is not an alternative,” said Coglitore. “It’s complementary. We’ll make our coldest (storage) tiers much more financially viable. It’s very price attractive and we like the disaggregated nature.”


    The magazine extending from the cabinet houses 36 storage cartridges, which each hold 12 Blu-Ray optical discs. (Photo: Colleen Miller.)

    1:30p
    Using DCIM to Achieve Simplicity in the Face of Complexity

    Lara Greden is a senior principal, strategy, at CA Technologies. Her previous post was titled Preparing for DCIM in 2014: Best Practices for Getting It Right. You can follow her on Twitter at @laragreden.


    Data centers are complex, dynamic environments that take a team of specialists to operate. But no one relishes complexity, nor should team members confine themselves to a specialist mindset. What’s needed is a team that understands the data center as an ecosystem and how changes to one part of the system will impact other parts.

    Too often, data center and IT executives think they have this covered because of the skills, talent and occasional heroic efforts of individual specialists on their teams. But heroics are no longer sufficient in today’s world of rising demand for IT and data center services, increased power density, and growing dependency on owned, leased and cloud data center environments that support business services.

    To achieve simplicity in the face of complexity, your data center and IT staff should seek to better understand system impacts in the data center across all roles and functions. This will enable your team to enhance uptime and availability, reduce costs, and improve execution to deliver top line results.

    DCIM can help your team achieve these goals by broadening their understanding of the data center beyond their immediate specialty. That’s especially important today, when the need to manage data center infrastructure with real-time intelligence based on accurate data has never been greater.

    Streamlining and Scaling Operations

    In many organizations, staff members in a variety of functions periodically walk the data center floor to check the status of PDUs, CRACs/CRAHs, and other power and cooling equipment. While the power and cooling status of the data center is critical to ensuring uptime and availability for the end user, so are the other systems that are left unattended during the walk-throughs. This can be problematic when another data center location, or some other set of responsibilities, is added to the scope.

    This is where you can use DCIM software to connect remotely to power, cooling, and IT equipment throughout your data center – covering racks, the raised floor environment, the chiller room, batteries, generators and more. Users can access the status of equipment, regardless of original manufacturer or vendor, from a web browser or a mobile device. Staff members can receive intelligent alerts to help them reduce the occurrence of false alarms, and know when an asset is behaving abnormally. The end result is that you’ll be better prepared to cover more data centers and solve the next set of challenges. And if you have a screen in the NOC dedicated to power and cooling status, you’ll be able to provide transparency into critical metrics and indicators for people from different roles and functions.
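    As a rough illustration of the kind of remote status check and alerting described above, here is a minimal sketch in Python. The endpoint, device fields and thresholds are all hypothetical assumptions, not the API of any particular DCIM product:

        # Hypothetical sketch of a DCIM-style status poll with simple threshold alerts.
        # The endpoint, field names and thresholds are illustrative assumptions only.
        import requests

        DCIM_API = "https://dcim.example.com/api/v1/devices"  # hypothetical endpoint
        THRESHOLDS = {
            "pdu":  {"load_pct": 80},       # alert if PDU load exceeds 80 percent
            "crah": {"supply_temp_c": 27},  # alert if supply air exceeds 27 C
            "ups":  {"battery_pct": 50},    # alert if battery charge drops below 50 percent
        }

        def check_devices(session):
            """Return human-readable alerts for devices outside their thresholds."""
            alerts = []
            for dev in session.get(DCIM_API, timeout=10).json():
                for metric, limit in THRESHOLDS.get(dev["type"], {}).items():
                    value = dev["metrics"].get(metric)
                    if value is None:
                        continue
                    # Battery charge alarms on low values; the other metrics alarm on high.
                    breached = value < limit if metric == "battery_pct" else value > limit
                    if breached:
                        alerts.append(f"{dev['name']}: {metric}={value} (limit {limit})")
            return alerts

        if __name__ == "__main__":
            for alert in check_devices(requests.Session()):
                print(alert)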

    Simplifying Racking and Stacking

    There’s more to placing new servers on the data center floor than just finding space. Today, it’s no longer sufficient to rely on somebody walking the floor to identify open space, or on outdated information contained in spreadsheets. Some IT organizations pay hefty consulting fees for periodic audits of their data center space and asset inventory. Why not instead ingrain the data on space, power and cooling for each rack into the process of provisioning servers, and into the workflows that depend on that data for good outcomes? Given the dynamic nature of today’s data center, it is essential to take power and cooling into account to make a reliable placement decision.

    Data quality and consistency are essential to implementing DCIM technology. Taking the time to correctly identify and govern the information used to manage your data center infrastructure will help make your DCIM implementation a success. By applying intelligence and analytics to the process of placing new servers on the data center floor, DCIM will help your team find optimal locations for new devices, provide instructions and visualizations to install them efficiently, and help ensure that the installations occurred as expected via auto-discovery “checking” mechanisms.
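    As a toy illustration of that placement logic, the sketch below scores candidate racks on space, power and thermal headroom. The rack data, server profile and weights are invented for the example and do not come from any specific DCIM product:

        # Toy rack-placement scoring on space, power and thermal headroom.
        # All rack data, server requirements and weights are invented for illustration.
        racks = [
            {"name": "A01", "free_u": 12, "power_headroom_kw": 1.5, "inlet_temp_c": 26.5},
            {"name": "B07", "free_u": 20, "power_headroom_kw": 4.0, "inlet_temp_c": 23.0},
            {"name": "C03", "free_u": 6,  "power_headroom_kw": 6.5, "inlet_temp_c": 22.0},
        ]

        server = {"height_u": 2, "power_kw": 0.8, "max_inlet_c": 27.0}

        def placement_score(rack):
            """Higher is better; racks that cannot host the server at all score -inf."""
            if (rack["free_u"] < server["height_u"]
                    or rack["power_headroom_kw"] < server["power_kw"]
                    or rack["inlet_temp_c"] > server["max_inlet_c"]):
                return float("-inf")
            # Prefer racks that keep both power and thermal margin after the install.
            return (rack["power_headroom_kw"] - server["power_kw"]) \
                + 0.5 * (server["max_inlet_c"] - rack["inlet_temp_c"])

        best = max(racks, key=placement_score)
        print(f"Suggested rack: {best['name']}")  # C03 with these example numbers

    A real DCIM tool would draw the same inputs from live telemetry and asset data rather than hard-coded values, but the trade-off between space, power and cooling is the same.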

    Easing Capacity Planning

    In many IT organizations, capacity planning is a distinct function staffed by people with exceptional analytic and mathematical skills, and deep knowledge of data center domains. The capacity planning group addresses questions such as the potential impact of a proposed merger on data center capacity, or how a company’s growth will affect its need for power and cooling capacity in the data center. They rely on good input data for understanding historical capacity and consumption, and modeling new scenarios based on libraries of technology alternatives.

    Capacity planners believe in the old adage that “the forecast is always wrong” and use scenarios and uncertainty modeling techniques to reveal key insights. Today, they often depend on counterparts in facilities, mechanical and electrical engineering to cover the domains of power and cooling, as well as concerns related to building or adding more data center capacity. But capacity planners recognize that the power, space, and cooling domains are intricately linked with questions of compute, storage, and network capacity.

    DCIM software helps capacity planners look at the complete data center picture, beyond projections based on simple historical averages. Some DCIM software applications can even provide sophisticated analytics to help carry out capacity planning activities with embedded intelligence.
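    To make the scenario idea concrete, here is a toy projection of IT load against available power capacity under a few growth assumptions. All of the figures are invented for illustration:

        # Toy scenario model: projected IT load vs. available power capacity.
        # Baseline load, growth rates and capacity are illustrative assumptions.
        baseline_kw = 1200          # current IT load
        capacity_kw = 2000          # usable critical power
        scenarios = {               # annual growth assumptions
            "conservative": 0.10,
            "expected":     0.20,
            "merger":       0.35,   # e.g. absorbing an acquired company's workloads
        }

        for name, growth in scenarios.items():
            load = baseline_kw
            for year in range(1, 6):
                load *= 1 + growth
                if load > capacity_kw:
                    print(f"{name}: capacity exhausted in year {year} (~{load:.0f} kW)")
                    break
            else:
                print(f"{name}: within capacity through year 5 (~{load:.0f} kW)")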

    Implementing DCIM technology should be seen as part of an organizational goal for staff to go beyond their specialties and truly understand how the data center can be optimized to deliver better business results. As you architect your DCIM implementation, you will quickly see many instances of how DCIM software is fundamental to simplifying and regaining control of the complexity in your data center operations. And you’ll be well on your way to achieving the benefits of DCIM.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:00p
    CenturyLink Gets Modular, Enters Phoenix Market With IO

    A row of IO Anywhere data center modules at the IO Phoenix facility. IO and CenturyLink have announced a strategic partnership in Phoenix and Scottsdale. (Photo: IO)

    CenturyLink has announced a strategic agreement with IO to expand its hosting and colocation footprint in Phoenix and Scottsdale, Arizona. The agreement expands the colocation footprint of CenturyLink Technology Solutions (the artist formerly known as Savvis) using IO’s Intelligent Control technology platform, with initial deployments at IO Phoenix and nearby IO Scottsdale.

    CenturyLink enters the Phoenix market as a hosting and colocation provider, making up to 9 megawatts of additional capacity available through IO’s modular deployment technology. The partnership extends beyond Phoenix, however, as both CenturyLink and IO stand to gain from the collaboration.

    Why is this partnership notable? CenturyLink is the largest tenant of Digital Realty Trust, which has significant capacity in the Phoenix market at its facility in Chandler. CenturyLink said the decision to go with IO came down to choosing modular capacity over a brick-and-mortar shell data center.

    A Deal Unique in the Industry

    But there’s far more to it than that. This is a win-win for both CenturyLink and IO, and a unique deal in the industry.

    IO is making a strategic pivot away from colocation to become a product-focused company. IO customers now gain access to colocation across CenturyLink’s footprint of 55 data centers globally, while CenturyLink takes over the colocation piece for IO. It’s a very symbiotic relationship.

    “From our perspective, there’s a few things the deal does for us,” said David Meredith, senior vice president and general manager, CenturyLink Technology Solutions. “We’ve been hearing from customers more and more: what about these modules?  They say it seems more efficient from a power perspective. It allows rolling out smaller increments. Meanwhile, we’ve been spending a ton of money on the virtual side of things. This allows us to diversify our product mix. We can now deploy modules in 55 of our data centers.”

    It also means more referrals for colocation. CenturyLink already has a sizeable presence in Phoenix, employing about 1,700 people, and a solid understanding of the market, so this isn’t jumping into the deep end. The only thing the company didn’t have was a data center.

    “This is a very unique deal; we’re going to be the exclusive sellers for colocation in that market,” continued Meredith. “It’ll be CenturyLink branded for any new IO customers in Phoenix. The opportunity for us is to take over sales and operations services for any new customers.”

    Differentiators In Phoenix

    “The modules do help us to differentiate,” said Meredith. “It gives us lower PUEs (Power Usage Effectiveness) and better operational efficiency in that market.  But our biggest differentiator is hybrid. Most people need colo, but they also need to figure out a roadmap for cloud. What’s great is to have that flexibility from a provider to do more managed, cloud, colo – whatever the customer needs. We’re very flexible from a contractual standpoint and a solutions standpoint. We have a diverse portfolio, and now we’re adding diversity at the physical layer.”

    “The pioneering IO technology enhances our product mix with the latest options for delivering flexible hybrid IT solutions,” said Jeff Von Deylen, president of CenturyLink Technology Solutions. “IO’s energy-efficient data center technology platform can be deployed just-in-time, helping us to preserve capital and enabling our clients to reduce costs and operate more efficiently.”

    The IO Intelligent Control technology platform consists of IO.Anywhere data center modules and the integrated IO.OS data center operating system. IO.OS provides integrated control, data collection and business intelligence across the stack: infrastructure, IT equipment, applications and users. Layered on top of this is CenturyLink’s broad set of colocation, managed services, cloud and network offerings.

    “We believe CenturyLink’s substantial commitment to IO technology further validates that our Intelligent Control platform can meet the demands of large-scale, global service providers,” said George D. Slessman, IO chief executive officer and product architect.

    CenturyLink’s ClientConnect is also available: an online gateway and ecosystem that lets businesses expand their capabilities by locating, connecting and sharing services with other businesses across CenturyLink’s global data center footprint.

    “Armed with IO’s prefabricated modular technology and proprietary data center operating system, CenturyLink is rounding out its digital infrastructure offering by combining a sophisticated, agile and intuitive data center platform with its deep stack of IT services, optimizing customer scalability and management capabilities,” said Michael Levy, data centers senior analyst at 451 Research.

    This strategic agreement with IO follows recent CenturyLink announcements regarding investments in its hosting capabilities, including the acquisitions of Tier 3 and AppFog and data center construction projects in Toronto and Minneapolis. CenturyLink operates more than 50 data centers worldwide, with more than 2.5 million square feet of gross raised floor space throughout North America, Europe and Asia.

    2:30p
    Apprenda 5.0 Powers Transition to Hybrid Cloud

    Platform-as-a-Service provider Apprenda announced the latest version of its enterprise PaaS solution, with ambitious new features. Version 5.0 offers support for both .NET and Java applications, and features a streamlined developer portal and enterprise policy enhancements, including dynamic scaling.

    “No other private PaaS we considered delivers the depth of Java and .NET capabilities in a single technology as Apprenda does,” said Matias Klein, VP, Intelligence Hub, for San Francisco-based healthcare IT specialist McKesson. “We licensed the technology because it perfectly fit our needs as a modern enterprise.”

    Apprenda 5.0 enables businesses to seamlessly manage resources from within an enterprise’s datacenter as well as from public cloud providers, easing the transition to a fully-optimized hybrid PaaS. Enterprise customers including AmerisourceBergen, JPMorgan Chase and McKesson rely on Apprenda’s proven unified platform to run mission-critical applications and eliminate friction between developers and IT operations teams.

    “We’ve been working hand-in-hand with our customers in production to develop the most sophisticated, enterprise-grade PaaS on the market,” said Rakesh Malholtra, VP, product, Apprenda. “Our ultimate goal is to make developing applications simpler for everyone involved: developers, IT ops and production teams. Apprenda 5.0 signifies our most robust PaaS with full support for .NET and Java, and armed with this solution, customers will be able to realize new streams of revenue and prepare for private, public and hybrid cloud environments.”

    3:00p
    Creating the Next-Generation Cloud with Software-Defined App Services

    Cloud computing, end-user mobility technologies, and new delivery models have all directly impacted the modern data center infrastructure. In fact, enterprise IT departments are under constant pressure to meet user and application demands, aware that cloud deployments offer an easier and faster alternative but often pinned down by legacy deployment models. The problem stems from the inability of those legacy models to adapt to meet expectations for rapid provisioning, continuous delivery, and consistent performance across multiple environments.

    Current data center platforms now span many logical nodes where resources, data, and users are shared between infrastructures. When it comes to delivering applications, the challenges of performance, security, and reliability have not changed. What have changed are the environments and conditions under which those challenges must be addressed. In particular, the extension of the data center into cloud environments poses significant obstacles to IT operations trying to maintain consistent policies between data center infrastructure services and those in the cloud. Without architectural parity between the environments, applications may execute without consistent policies for security, performance, and availability. The results include increased risk, unpredictable performance, and loss of control over user satisfaction.

    In this whitepaper from F5 Networks, you’ll see how the evolution of cloud computing has resulted in the next generation of application delivery. The current excitement around SDN and network functions virtualization (NFV) adds programmability and extensibility to these requirements, since application delivery solutions can rapidly extend network and application services, enabling rapid service definition and quick response to changing market demands.

    Download this whitepaper today to learn about F5’s Software-Defined Application Services (SDAS), its next-generation model for delivering application services. The paper outlines the key delivery solutions around SDAS, including:

    • A fabric-based solution.
    • Automation and orchestration.
    • A unified operating framework.
    • The application service platform.
    • The application services fabric.
    • Application services.
    • Rapid system and service provisioning.
    • True context-aware application services.

    As the data center and cloud model continue to evolve, it will be crucial for administrators to control the performance, security, and availability of their respective services. Today, service providers and enterprises need to efficiently provision application services based on the demands and needs of individual subscribers and services. An application that may not need to scale today may still benefit from performance-related services, and subscribers may pay a premium for an optimized mobile experience but not for enhanced security. Operations must be able to meet these needs and more with equal alacrity and minimal cost. Remember, creating direct optimizations around your application delivery model will help reduce overall delivery costs and optimize the user experience.

    3:30p
    Syncsort, Hortonworks Help Migrate Legacy Workloads to Hadoop

    Syncsort’s Hadoop product line is now certified on Hortonworks Data Platform 2.0, Amazon Redshift launches SSD-based node types, and Pure Storage bolsters its UK operations to continue its hyper-growth.

    Syncsort and Hortonworks help migrate legacy systems to Hadoop.  Big data integration provider Syncsort announced that its high-performance Apache Hadoop-based product line, DMX-h ETL edition, is now certified on Hortonworks Data Platform 2.0 (HDP 2.0) with YARN.  This integration provides a powerful combination that helps enterprises reduce costs and better leverage information across the enterprise. “Hadoop is being adopted by innovative technology leaders within mainstream enterprises because it offers the fastest path to unlocking significant value from Big Data,” said John Kreisa, vice president of strategic marketing, Hortonworks.  “The integration between HDP 2.0 and Syncsort’s powerful Hadoop-based product line provides enterprises with a seamless way to efficiently move data transformation workloads into Hadoop.” To make it simple for customers to evaluate the joint solution, a test drive of DMX-h is available to download and deploy in the Hortonworks Sandbox immediately.

    Amazon Redshift launches SSD-based node type. Amazon (AMZN) announced the availability of Dense Compute nodes, a new SSD-based node type for Amazon Redshift. Dense Compute nodes allow customers to create very high performance data warehouses using fast CPUs, large amounts of RAM and SSDs. For data warehouses over 500GB, Dense Compute nodes are a cost-effective, high-performance option for those with a focus on performance, offering the highest ratio of CPU, memory and I/O to storage. Scaling clusters up and down or switching between node types requires a single API call or a few clicks in the AWS Console. On-demand prices for a single Large Dense Compute node start at $0.25/hour in the US East (Northern Virginia) Region and drop to an effective price of $0.10/hour with a three-year reserved instance.
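    A quick sketch of what that on-demand versus reserved gap works out to over a full three-year term, using only the two published rates above (the reservation’s upfront fee is assumed to be folded into the quoted “effective” rate):

        # Rough cost comparison using the two Redshift rates quoted above:
        # $0.25/hr on-demand vs. an effective $0.10/hr with a three-year reservation.
        HOURS_PER_YEAR = 24 * 365

        on_demand_rate = 0.25  # USD/hour, Large Dense Compute node, US East
        reserved_rate = 0.10   # effective USD/hour over a three-year term

        three_year_hours = 3 * HOURS_PER_YEAR
        on_demand_cost = on_demand_rate * three_year_hours
        reserved_cost = reserved_rate * three_year_hours

        print(f"On-demand over 3 years: ${on_demand_cost:,.0f}")     # ~$6,570 per node
        print(f"Reserved over 3 years:  ${reserved_cost:,.0f}")      # ~$2,628 per node
        print(f"Savings: {1 - reserved_cost / on_demand_cost:.0%}")  # 60%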

    Pure Storage bolsters UK operations. All-flash enterprise storage company Pure Storage announced it is experiencing unprecedented momentum in the UK market as the company nears the one-year mark since opening its UK headquarters. Last August the company snagged a $150 million funding round and a $1 billion valuation. Pure Storage has expanded the Pure Storage Partner Program (P3) to include more than 40 companies across Europe, supporting its aggressive partner-focused go-to-market plans. “Flash memory powers the experience we have all come to expect from smartphones, tablets and leading web properties, like Google Search and Facebook, today,” said Scott Dietzen, CEO at Pure Storage. “Flash has already superseded mechanical disk for consumers and Pure Storage is driving this same transformation for organisations, by removing the traditional hurdles of mainstream flash adoption — cost and compatibility. Ultimately, $15 billion in global yearly spend will shift from disk arrays to all-flash storage.” The company’s decision to expand into the UK and EMEA was driven by demand from European enterprises frustrated by poor performance, usability and features from disk or hybrid storage solutions. This, paired with the increasing market penetration of random and heavy IO workloads, has resulted in Pure seeing rapid expansion in adoption of the Pure FlashArray across the region via a broad and diverse partner network and direct sales force.

    4:00p
    Cloudyn Expands Support to Google Compute Engine

    Cloudyn breaks down cost allocation across virtual machines

    Cloudyn, provider of multi-dimensional cloud analytics, has expanded to offer support for Google Compute Engine, along with cloud comparisons and porting recommendations. The company predominantly monitors Amazon Web Services deployments, but says it is seeing healthy demand among its customers for the same service on Google’s newer offering.

    Cloudyn will provide role-based insights into Google cloud spend: a multi-dimensional view of cost, usage and performance, a granular view down to the resource level, and capacity and cost trending. It now provides a single view of AWS and Google Compute Engine, side-by-side comparisons, what-if deployment configuration simulations, and porting recommendations based on cost, performance and location.

    The company says it has moved to offer support for Google Compute Engine (GCE) based on customer interest.

    “When we opened up our beta program, our customer base exhibited strong interest, with 35 percent asking to participate,” said Sharon Wagner, CEO of Cloudyn. “We’re very pleased to have extended our offering to support our customers and the slew of enterprises migrating their business to the cloud. Going forward, what we expect to unfold is akin to an ‘Expedia for the cloud,’ where real brokerage empowerment is taking hold and the best deals are offered between sellers and buyers. I have no doubt 2014 will be an interesting year for both vendors and their customers.”

    The company conducted a study to compare and contrast AWS and Google Compute Engine. Findings show that Google is an attractive option for 53 percent of its AWS customers. “Google is attractive to half of Amazon customers, which is quite an amazing number,” said Vitally Tavor, Cloudyn founder and VP of products. “With Google, there’s much faster network and I/O performance. With Google you have high disk performance to start with. Google is using its own infrastructure. You’re able to build a fault-tolerant deployment that is much simpler and better performing.”

    Has a true competitor to AWS emerged? Wide adoption of Google Apps by DevOps teams, Google’s existing customer relationships, and its formidable infrastructure are all reasons that Cloudyn believes Google Compute Engine (GCE) will be the one to take on AWS.

    Cloudyn examined the usage of 500 of its customers and found several key statistics, including the finding that 53 percent of them would find GCE attractive. Among GCE’s advantages are faster network and I/O by default (AWS provisioned IOPS matches GCE’s default performance, but costs twice as much).

    The company shows that there are distinct pricing advantages to both platforms. GCE offers sub-hour billing, which the company says is better for short-running instances, while AWS wins on cost-performance when leveraging Reserved Instances and Spot Instances. The company gives cost results from two distinct customer use cases running MapReduce jobs, generating the findings below (a rough sketch of the billing arithmetic follows the list):

    • Customer A typically runs 1,000 x m1.large instances, with ~40 minutes average runtime per instance. The MapReduce instances run on-demand. Due to GCE’s per-minute billing and lower prices, moving to GCE would save the company ~40%.
    • Customer B’s workload is very similar to Customer A’s; however, Customer B’s typical job runs in the range of 80 minutes, with ~800 m1.large instances per job. By using Spot instances, in this case GCE would actually be ~10% more expensive, despite Google’s sub-hour billing.
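    The Customer A figure is roughly what per-minute billing implies once hourly rounding is removed. A sketch of the arithmetic, with the hourly rates treated as illustrative assumptions rather than quotes from either price list:

        # Back-of-the-envelope check of the Customer A finding. The hourly rates below
        # are illustrative assumptions, not quotes from either provider's price list;
        # the point is the effect of per-minute vs. per-hour billing on short jobs.
        runtime_minutes = 40
        instances = 1000

        aws_rate_per_hour = 0.24  # assumed on-demand rate for an m1.large-class VM
        gce_rate_per_hour = 0.21  # assumed rate for a comparable GCE machine type

        # AWS (at the time) billed whole hours: a 40-minute run was charged as 60 minutes.
        aws_billed_hours = -(-runtime_minutes // 60)  # ceiling division
        aws_cost = instances * aws_billed_hours * aws_rate_per_hour

        # GCE billed per minute (with a 10-minute minimum), so 40 minutes costs 40/60 of an hour.
        gce_billed_minutes = max(runtime_minutes, 10)
        gce_cost = instances * (gce_billed_minutes / 60) * gce_rate_per_hour

        print(f"AWS cost per job run: ${aws_cost:,.0f}")   # $240
        print(f"GCE cost per job run: ${gce_cost:,.0f}")   # $140
        print(f"Saving: {1 - gce_cost / aws_cost:.0%}")    # ~42%, in line with the ~40% above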

    Because of the pricing differences between the two services, each has advantages and disadvantages depending on the use case. This is where Cloudyn hopes to step in and help customers. The company has expanded to Google Compute Engine because it believes the ability to intelligently choose vendors and run workloads on different clouds is key for enterprises.

    Cloudyn monitors cost, performance, usage and lifecycle, offering optimization recommendations such as deployment right-sizing, resource relocation and reassignment, and pricing model modifications. It is currently monitoring over 50,000 virtual machines on AWS, which it equates to around 7 percent of AWS’ capacity. The company touts 1,200 clients with 2,000 AWS accounts, and has doubled its revenue every quarter for the last four quarters. “What we see across our customers is that 15% of capacity is systems integrators managing capacity on behalf of clients,” said Tavor. “We think this number will grow. We’re also seeing growing concern among customers about Amazon lock-in.”

