Data Center Knowledge | News and analysis for the data center industry
 

Friday, December 5th, 2014

    1:00p
    How a London Startup is Building Cloud of Clouds to Challenge the Giants

    OnApp started as a platform for hosting providers to build Infrastructure-as-a-Service offerings, later evolved to offer federated CDN, and most recently created a federated compute and storage cloud, bringing together many smaller cloud providers whose cumulative resources are presented to the user as a single, massively distributed cloud. Needless to say, the logistics of this are not simple.

    Market research firm Ovum believes the London startup’s ideas have huge market implications. In a non-commissioned report, Ovum states that the impact of the company’s work in terms of cloud federation is yet to be fully understood by the market.

    “They’re pragmatic and not trying to run before they can walk,” said Ovum analyst Laurent Lachal. “The vision, the long-term plan, and the fact they’re creating a marketplace where supply and customer come together, I like that they have this vision.”

    OnApp’s CDN has had good momentum, and the federated compute cloud is gathering steam. The acquisition of SolusVM and high-profile partnerships with IBM and Interxion are early signs of that progress.

    Enabling Cloud Resellers

    Ovum believes OnApp’s strategy is sound and its progress on the federated marketplace is ahead of anyone else’s. The marketplace is already hundreds of OnApp-based clouds strong, but it is still very early in the game, and OnApp is methodically building the foundation for its vision. The company has now reached a critical phase by opening up the federation to companies outside of its ecosystem of OnApp platform-based clouds.

    In dedicated and shared hosting, there are traditional resellers who rent resources and add services on top, usually hosting web pages or something similarly simple. What the London startup is doing is reseller enablement on steroids.

    Through federation, the pool of resources is worldwide. Through OnApp’s platform, that pool is curated, managed, orchestrated, and billed in a single platform. Traditional resellers sold web pages; the new reseller can offer the architecture, and hence the services if they have the skill, for an entire business’s IT needs.

    OnApp is also targeting the types of customers that are largely ignored by others: the small to medium regional service provider in need of an enterprise-grade cloud offering.

    A Big Engineering Project

    The first phase, getting small hosting providers to use its platform for IaaS offerings, was successful. Phase Two was getting those same providers to sell some of their capacity in the federated marketplace.

    OnApp is just starting the critical third phase, admittedly a tall order, according to Ovum. OnApp’s federation interface is available as open source code under the GNU GPL. In other words, it is reaching out to everyone, making it possible for all clouds, not just ones based on its platform, to hook in. This is meant to spur adoption.

    The value is not just the marketplace itself but the work around it: OnApp automates orchestration, server management, provisioning, scaling, failover, and billing, among other functions.

    “It is a business platform as much as a technology platform,” said Lachal. “The marketplace will challenge the major clouds. It’s an interesting alternative, and what the market needs is alternatives.”

    Modular, Composable Cloud Services

    OnApp’s federation offers hundreds of possibilities across IaaS providers worldwide. Service providers have full transparency in picking and choosing components when creating their IaaS offerings.

    AWS made it dead simple to start a virtual machine with just a credit card number. OnApp is making it dead simple to launch a complex and flexible cloud and CDN to form the raw infrastructure of a virtual cloud service provider.

    “I see quite a lot of companies happy to focus on the service and leave infrastructure to third parties,” said Lachal. “It’s already happened on the CDN level, and OnApp is the company most prepared to do this for compute.”

    An example of a provider leveraging federation is Scandinavia’s Meebox. Meebox migrated its cloud, with customers on around 700 virtual servers, over to OnApp to overcome scalability issues. It can now extend the geographic reach and capabilities of its services without constraint.

    “Our business is built on transparency, quality and service, so while it is always painful to have to switch platforms, we simply had no choice,” said Patrick Theander, co-founder and CEO of Meebox. “…with the OnApp Federation we can help them scale out to multiple clouds around the world, or scale out with capacity in specific countries, according to the kind of applications and data they need us to manage.”

    Checks and Balances

    While the federation is open, it’s not open to everyone. OnApp does have standards in place to prevent degradation in the quality of what’s in the federation.

    Those in the federated marketplace offer Service Level Agreements, and OnApp offers additional SLAs atop that, based on an assessment of the IaaS. It means that while OnApp offers access to many different provider infrastructures, the London startup shares accountability with members of the federation.

    The SLAs also serve to differentiate the quality of different offerings. OnApp has spent considerable time assessing what’s listed and benchmarking its performance. The cream rises to the top.

    Raw IaaS Commoditization Is a Moving Train

    The biggest argument lobbed against OnApp is that federated cloud essentially pushes cloud toward commoditization. But commoditization of raw cloud compute and storage resources is already happening, regardless of what OnApp does, said Lachal. The opportunity is in shaping those resources into something more useful.

    Those supplying the raw infrastructure within the federation shouldn’t be apprehensive, he said. For one, it’s a way to sell unused capacity and to extend your offering beyond what you alone are capable of delivering. You pick and choose the infrastructure resources you offer to customers. The benchmarks OnApp has in place ensure that quality remains high, so while you’re among several other IaaS providers, it is a high-quality group.

    “[IaaS suppliers] are often fairly small companies with small clouds,” said Lachal. “Taken individually, they are fairly simple. Altogether, that’s where the synergies develop to expand their reach and portfolio.”

    It allows them to expand their resources cost-effectively. “Yes, there’s an element of commoditization, because it’s the name of this particular game,” he said. “As the marketplace matures, there will be more and more services available. Once service providers have the infrastructure, they need to invest more in services.”

    4:30p
    Friday Funny Caption Contest: Present

    Although Mother Nature seems a bit confused, Kip and Gary know it’s winter. Help us spread some holiday cheer with this week’s Friday Funny!

    Diane Alber, the Arizona artist who created Kip and Gary, has a new cartoon for Data Center Knowledge’s cartoon caption contest. We challenge you to submit a humorous and clever caption that fits the comedic situation. Please add your entry in the comments below. Then, next week, our readers will vote for the best submission.

    Here’s what Diane had to say about this week’s cartoon: “So this just showed up at the data center, I wonder what it is?”

    Congratulations to the last cartoon winner, Kerry, who won with, “I’m absolutely in LOVE with the new Dell Easy-Bake servers!”

    For more cartoons on DCK, see our Humor Channel. For more of Diane’s work, visit Kip and Gary’s website.

    6:36p
    Salesforce to Open Second Data Center in Japan

    Salesforce is continuing its international data center expansion with a second data center in Japan, announced this week. When it lights up in the first half of next year, it will join the company’s first Japanese data center, which opened in 2011.

    Salesforce hasn’t disclosed the provider, but its first data center in Japan is with NTT, and the company has frequently used Equinix globally.

    In addition to growing its Asia data center footprint, the SaaS giant has been expanding in Europe, driven by healthy customer growth in the region. The company announced three new data centers in the U.K., France, and Germany, and recently launched the U.K. facility. A data center was also opened in Canada this year.

    Salesforce is trying to drive up its international revenue by opening data centers in support of its Software-as-a-Service offerings. This is particularly imperative as more companies look to keep data within their own countries and as competition heats up with the likes of SAP, IBM, and Oracle.

    It used to be a battle of SaaS versus traditional software, in which Salesforce thrived. Now traditional vendors like SAP are focusing on SaaS as well. Salesforce’s first Asia data center opened in Singapore in 2009.

    Japan has a growing digital economy, and Salesforce has made over 20 investments in the country’s technology market.

    The company also touted a new high-profile customer Thursday — Japan’s Ministry of Internal Affairs. The Ministry developed a big data solution for conducting road inspections using the Salesforce1 Platform.

    “Salesforce continues to invest in Japan,” Shinichi Koide, chairman and CEO of Salesforce Japan, said in a statement. “Our second data center, our investments in tech companies, and our foundation work will enable us to continue to build trust with our customers and partners in Japan.”

    Salesforce reported $1.38 billion in revenue in November for its most recent quarter. The company beat expectations, but there is some investor concern, particularly around profitability.

     

    7:01p
    HP Refreshes 3PAR Storage Line

    HP announced a refresh of its midrange and flash-optimized 3PAR storage systems portfolio, advancing block, file, object, and backup capabilities of its 3PAR StoreServ Converged Flash arrays. The company is releasing new 3PAR File Persona software and a new StoreServ 7440c Converged Flash Array.

    With the new 3PAR 7440c array, HP hopes the mid-market enterprise can use StoreServ as a single solution to support a variety of workloads and operating environments. The 7440c features 3.5 petabytes of usable capacity, 16Gb Fibre Channel connectivity, and all-flash-optimized performance of more than 900,000 IOPS.

    The array also, for the first time, gives users the ability to add hard disk drives to work in concert with flash capacity.

    To lure customers away from the competition, HP offers a self-managed platform migration solution for moving from EMC VMAX installations. HP has also updated the 3PAR StoreServ 7200c, 7400c, and all-flash 7450c platforms with more processing cores, larger memory caches, and support for block, file, and object access.

    The new line of 3PAR systems was launched at HP’s annual Discover event in Barcelona, where the company also announced a 50 terabyte StoreOnce VSA and expanded Kernel-based Virtual Machine hypervisor support to enable backup-as-a-service consolidation.

    HP also launched a new application-managed backup feature to help manage snapshots on 3PAR StoreServ systems.

    Offered as a feature of the 3PAR OS, the File Persona software adds support for Network File System, Common Internet File System, and object access. HP says this is a first step toward moving storage-affinity workloads such as data access, protection, and analytics directly into the storage operating system and controller. Multi-protocol access to a single shared capacity group, along with integration with a new 3PAR StoreServ Management Console, are additional benefits HP notes.

    7:30p
    Microsoft Announces Azure BizTalk Microservices Platform as Part of Cloud PaaS Strategy


    This article originally appeared at The WHIR

    Microsoft has unveiled its new Azure BizTalk Microservices platform, which combines small services, or “microservices,” like building blocks to form a platform for cloud applications.

    In a typical microservice architecture (as defined by James Lewis and Martin Fowler), microservices are independently deployable, run in their own processes, and communicate via lightweight mechanisms such as an HTTP resource API. Centralized management of microservices is also kept to a bare minimum.

    In the Azure model, these microservices will run in scalable containers.
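
    To make the pattern concrete, below is a minimal sketch of one such microservice: an independently deployable process exposing a lightweight HTTP resource API, in the Lewis/Fowler sense. The service name, route, and port are illustrative assumptions, not details of Microsoft’s platform.

        # A minimal "microservice": one process, one narrow responsibility,
        # reachable over a lightweight HTTP resource API. Stdlib only.
        from http.server import BaseHTTPRequestHandler, HTTPServer
        import json

        class OrderService(BaseHTTPRequestHandler):
            def do_GET(self):
                # GET /orders/<id> returns a JSON resource.
                if self.path.startswith("/orders/"):
                    order_id = self.path.rsplit("/", 1)[-1]
                    body = json.dumps({"id": order_id, "status": "shipped"}).encode()
                    self.send_response(200)
                    self.send_header("Content-Type", "application/json")
                    self.end_headers()
                    self.wfile.write(body)
                else:
                    self.send_response(404)
                    self.end_headers()

        if __name__ == "__main__":
            # Runs as its own process; in the Azure model a runtime like this
            # would be packaged into a container and scaled independently.
            HTTPServer(("", 8080), OrderService).serve_forever()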

    Saravana Kumar, founder and CTO of BizTalk Server software developer BizTalk360, noted in a blog post that Azure BizTalk Microservices will likely co-exist with BizTalk Server for the time being rather than replace it. Microsoft, after all, has a major BizTalk Server update coming in 2015 and guarantees support for BizTalk Server 2013 R2 until 2023.

    “In my view, both BizTalk Server and App Platform are going to co-exist together as married couples for quite some time in the future,” he wrote. However, he notes that BizTalk Services (such as business rules, validation, and authentication) could eventually evolve into microservices. Microsoft has also said it will provide support for migrating from BizTalk Server to BizTalk Microservices.

    Kumar also stated that microservices could offer many cost advantages for integrated services over the BizTalk Services model given its greater granularity of resource use. “One of the main disadvantages of BizTalk Services was that (in order to support extensibility and custom code) each BizTalk Service instance resulted in a set of dedicated virtual machines (compute) behind the scenes. And that setup resulted in a billing model that was far off the pay-as-you-go promise of cloud,” he wrote.

    “Azure [BizTalk] Microservices provides and offers a high density, scalable runtime that is built for scalability and that will probably allow a better consumption based billing model.”
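
    To illustrate the economics Kumar describes, here is a back-of-the-envelope sketch contrasting the two billing models. All rates and volumes are invented for illustration; they are not Azure prices.

        HOURS_PER_MONTH = 730

        def dedicated_vm_cost(vm_count: int, hourly_rate: float) -> float:
            # Dedicated model: the meter runs around the clock, busy or idle.
            return vm_count * hourly_rate * HOURS_PER_MONTH

        def consumption_cost(executions: int, price_per_million: float) -> float:
            # Consumption model: the meter tracks work actually performed.
            return executions / 1_000_000 * price_per_million

        # A lightly used integration service: two always-on VMs versus
        # three million pay-per-execution runs a month (illustrative numbers).
        print(dedicated_vm_cost(2, 0.20))         # 292.0 -- pays largely for idle time
        print(consumption_cost(3_000_000, 5.00))  # 15.0  -- pays only for use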

    In October, Microsoft Azure CTO Mark Russinovich mentioned that Azure Platform-as-a-Service would be more microservice-oriented.

    Microsoft said Azure BizTalk Microservices will be available through the Azure Pack, and a preview of Azure BizTalk Microservices is expected in Q1 2015. Pricing details have not yet been disclosed.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/microsoft-announces-azure-biztalk-microservices-platform-part-cloud-paas-strategy

    8:00p
    North Korea Denies Sony Hack Allegations


    This article originally appeared at The WHIR

    An anonymous North Korean diplomat told the Voice of America broadcast network Wednesday that North Korea did not hack Sony Pictures Entertainment last week. Investigators had found hacking tools previously used by North Korea were used against Sony, according to Reuters, possibly indicating otherwise.

    “My country publicly declared that it would follow international norms banning hacking and piracy,” the diplomat said. He called widespread speculation that North Korea was responsible for the attack “another fabrication targeting the country.”

    Reuters reports that an internal Sony memo told staff the company is uncertain of the “full scope of information that the hackers might have or release,” and that it is still struggling to bring its network completely back online, 10 days after the initial attack.

    The hacking tool Reuters refers to is surely the malware that The New York Times reports was the subject of an FBI warning issued Monday evening. That malware was written in Korean and is designed to erase the contents of an infected computer.

    Hackers also published confidential Deloitte data to Pastebin on Wednesday, allegedly obtained in the same attack on Sony.

    Bloomberg reports that Sony’s internal investigation has linked a North Korean group called DarkSeoul to the attack. DarkSeoul is blamed for attacks on three South Korean banks and two broadcasters last year.

    Hacks by North Korea against South Korean targets have been alleged for years, while a rash of attacks by mysterious hackers against banks and financial targets has struck the US this year, including one revealed by FireEye on Monday.

    In the immediate aftermath, a spokesman for North Korea said: “The hostile forces are relating everything to the DPRK (North Korea). I kindly advise you to just wait and see.” North Korea has previously referred to the US and South Korea as hostile forces.

    The hack and its consequences will continue to play out as the investigation continues and the hackers decide what to do with the rest of the data; by Gizmodo’s count, 40GB of 100TB have been released so far.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/north-korea-denies-sony-hack-allegations

    8:28p
    Arista CEO on Cisco’s Lawsuit: “It’s Not the Cisco I Knew”

    Saying it has not had the chance to go over the details of the accusations in Cisco’s litigation against the data center network technology vendor, Arista Networks’ management said the move was unbecoming of a market-leading company.

    “While we have respect for Cisco as a fierce competitor and the dominant player in the market, we are disappointed that they have to resort to litigation rather than simply compete with us in products,” the company’s official statement in response to the two patent and copyright infringement lawsuits brought Friday read.

    An Arista spokesperson emailed us a personal statement by the company’s president and CEO Jayshree Ullal, who said she was “disappointed at Cisco’s tactics. It’s not the Cisco I knew.”

    One of the suits accuses Arista of using in its products 12 features covered by 14 U.S. patents held by the San Jose, California-based networking giant. The other accuses it of copying and pasting chunks of Cisco’s copyrighted product manuals into its own manuals and of ripping off command line interface expressions used in Cisco’s IOS software for its own EOS software.

    Blow to a Quickly Rising Giant

    Santa Clara, California-based Arista sells data center network switches to operators of massive IT infrastructures. It lists Facebook, Morgan Stanley, Netflix, and Equinix as its customers.

    The company had a successful IPO on the New York Stock Exchange in June. Its shares were down 6 percent Friday afternoon, following the lawsuit announcement. Arista’s current market capitalization is about $4.5 billion.

    The company was founded by Andy Bechtolsheim, a co-founder of Sun Microsystems (now owned by Oracle). Bechtolsheim is also known to have been one of the first two people to invest in Google in the late 90s, when the company was just starting out.

    Arista’s Deep Cisco Roots Key to Lawsuits

    Another company Bechtolsheim founded, a Gigabit Ethernet startup called Granite Systems, was acquired by Cisco in 1996. He served as vice president and general manager of Cisco’s Gigabit Ethernet business unit from the time of the acquisition until 2003, overseeing the development of Cisco’s Catalyst 4500 family of switches.

    Arista’s leadership team has extensive roots at Cisco. Ullal, the CEO, and Kenneth Duda, co-founder and CTO of Arista, held top roles within Cisco’s data center networking business. Its top engineering, business development, marketing, operations, and human resources executives have all at some point worked at Cisco.

    Arista’s deep Cisco roots will play a big role in the current litigation, since the technologies named in one of the lawsuits “were patented by individuals who worked for Cisco and are now at Arista, or who at Cisco worked with executives who are now at Arista,” according to a blog post by Cisco General Counsel Mark Chandler.

    10:40p
    AWS Slashes Cloud Data Transfer Prices

    You can almost set your watch by cloud price cuts. The most recent cuts by Amazon Web Services, however, are not for compute and storage but for outbound data transfer. AWS has cut pricing on transfers out of its cloud, and in both directions for its CloudFront CDN.

    The cost of moving data out has been cut significantly, by a quarter to a third depending on the region. CDN pricing dropped 26 to 29 percent for the first 10 TB.

    This cut follows Amazon’s announcement of simplified Reserved Instance (RI) pricing earlier this week. The company has now made close to fifty price cuts overall, and the rest of the cloud market has continually made cuts of its own.

    The most recent pricing moves by Google, Amazon, and Azure have shifted away from compute and storage toward unique wrinkles like sustained-use discounts (Google) or simpler RIs (AWS). The good news is this is adding some differentiation to the public clouds. The bad news is you now need a math PhD to figure out your bill.

    Data transfer pricing has long been a target of criticism, with AWS historically pricier than counterparts Azure and Google, as well as IBM SoftLayer and CenturyLink. The new pricing puts outbound transfer from cloud to Internet, the most expensive bandwidth, in line with the competition.

    Price cutting by cloud players has been primarily focused on compute and storage, but data transfer is a somewhat hidden expense of cloud. For some, it’s trivial, but for companies that rely heavily on data transfer out, these costs are significant. The price cuts might make AWS more suitable for applications and processes that rely on continuous or frequent data transfer out.

    Cloud pricing in general remains complex, as there is a variety of factors to consider. Take RightScale’s analysis of AWS versus Google Reserved Instance pricing. For something meant to simplify pricing, it still takes several charts and comparisons to figure out which company is cheaper.

    There is a wealth of considerations and factors just for Reserved Instances. Even with AWS’ simplified Reserved Instance pricing, Google retains a price advantage, according to RightScale, though that conclusion comes with several caveats.

    One of the selling points of cloud has been that it makes life a little easier for IT. This is true, as it puts flexible and theoretically unlimited resources in a customer’s hands. The CFO gets to shift CapEx to OpEx and arguably sees significant savings, but his or her life now involves sprawling calculations. This has created an opportunity for third parties to offer cost calculators and for research firms to attempt simple comparison metrics.

    AWS’ first big cut in data transfer occurred in 2008, following increasing criticism of bandwidth as a somewhat hidden cost of doing business with cloud. After that cut, users transferring more than 150 terabytes a month paid 10 cents per GB of outbound transfer, compared with 17 cents for those moving less than 10 terabytes.
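
    For a sense of how tiered transfer pricing is computed, here is a back-of-the-envelope sketch using marginal tiers loosely based on the 2008-era figures above (17 cents per GB at low volume, 10 cents beyond 150 TB). The intermediate tier rate is an assumption, and none of these are current AWS prices.

        TB = 1024  # gigabytes per terabyte

        # (tier ceiling in GB, price per GB in USD); rates are illustrative.
        TIERS = [
            (10 * TB, 0.17),       # first 10 TB
            (150 * TB, 0.13),      # next 140 TB (assumed intermediate rate)
            (float("inf"), 0.10),  # everything beyond 150 TB
        ]

        def outbound_cost(gb_out: float) -> float:
            # Each tier's rate applies only to the gigabytes inside that tier.
            cost, floor = 0.0, 0.0
            for ceiling, rate in TIERS:
                if gb_out <= floor:
                    break
                cost += (min(gb_out, ceiling) - floor) * rate
                floor = ceiling
            return cost

        # A 200 TB month: small per-GB differences compound into a large bill.
        print(f"${outbound_cost(200 * TB):,.2f}")  # $25,497.60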

