Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, September 15th, 2015

    Time Event
    12:00p
    Nexenta to Bring Containers into Software-Defined Storage Fold

    Nexenta, a provider of open source software-defined storage for x86 servers, today outlined multiple paths toward adding support for Docker containers.

    Via a partnership with ClusterHQ, Nexenta announced it is adding support for Flocker, an open source data volume manager for containers, to its storage management software. At the same time, Nexenta said it will also add support for both VMware vSphere Integrated Containers, which enables containers to run on existing VMware instances, and VMware Photon OS, a project through which VMware is embedding Docker containers within a lighter-weight virtual machine platform.

    As part of that effort, Nexenta also announced today that it has become a member of both the Open Container Initiative and the Cloud Native Computing Foundation and that it has integrated NexentaEdge and NexentaStor with the Kubernetes framework for managing containers.

    Thomas Cornely, chief product officer at Nexenta, said the ultimate goal is to provide persistent block-level storage for containers running either natively on a physical server or on top of multiple iterations of virtual machines in a way that enables those containers to move across a server cluster as needed.
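    The mobility Cornely describes rests on one core idea: the storage layer has to know, at all times, which node holds a volume's data so the volume can be reattached wherever the container lands. The following toy sketch in Python (an illustration of the general Flocker-style pattern, not Nexenta's or ClusterHQ's actual implementation) shows that bookkeeping:

    ```python
    # Toy sketch of a container data-volume manager's core bookkeeping:
    # a cluster-wide registry maps each named volume to the node currently
    # holding its data, so when a container is rescheduled the volume can
    # be re-attached on the new node. (Illustrative only; real systems also
    # perform the block-level replication or detach/attach underneath.)

    class VolumeRegistry:
        def __init__(self):
            self._location = {}  # volume name -> node currently holding the data

        def create(self, volume, node):
            self._location[volume] = node

        def follow_container(self, volume, new_node):
            """Move (or re-attach) a volume when its container is rescheduled."""
            old_node = self._location.get(volume)
            if old_node is None:
                raise KeyError(f"unknown volume: {volume}")
            if old_node != new_node:
                # In a real system this is where block-level data movement
                # or a detach/attach of backing storage would happen.
                self._location[volume] = new_node
            return self._location[volume]

    registry = VolumeRegistry()
    registry.create("db-data", node="host-1")
    # Scheduler moves the database container to host-3; the volume follows it.
    print(registry.follow_container("db-data", "host-3"))  # host-3
    ```

    In practice this registry role is played by the orchestrator's metadata store (e.g., Kubernetes persistent volume objects), with the storage backend doing the actual data movement.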

    As part of the general trend towards eliminating the need for separate arrays to manage primary storage, Cornely said, enterprise IT organizations don’t want to have to deploy separate server clusters to support virtual machines and Docker containers.

    “Our focus is on scale-out storage that is simple to deploy,” said Cornely. “We’re eliminating the waste created by the iSCSI stack.”

    Cornely said much of the initial focus when it comes to deploying containers in production environments will be on top of virtual machines that already have a rich set of management tools for IT organizations to readily invoke. As the management framework surrounding containers over time continues to mature, Nexenta expects to see usage of physical servers to run applications based on Docker containers increase.

    Both virtual machines and containers will be permanent fixtures of most data center environments for years to come. In fact, Cornely said, as more IT organizations look to move to software-defined storage, one of the prerequisites will be the ability to support both containers and virtual machines in all their forms using a common persistent storage platform.

    Obviously, Nexenta is not the only storage software vendor moving down a similar container path. But as a storage management software vendor, chances are high that Nexenta will get there well before storage vendors that rely on proprietary ASICs to manage storage I/O on proprietary storage arrays.

    12:30p
    VMware’s Easy Button for Private OpenStack Cloud

    OpenStack has emerged as the leading technology option for enterprise private clouds. But when an IT manager decides they want to create a private OpenStack cloud in their data center, they quickly find that the term “OpenStack” doesn’t represent a single path with a single set of implications. It is a multitude of paths, each with its own consequences down the road.

    By introducing its own OpenStack distribution last year, VMware helped many companies narrow the choices down. The Palo Alto, California-based giant's technology is ubiquitous in enterprise data centers around the world, and its promise is that you can now have a private OpenStack cloud in your data center while using the same skill set you use to manage your VMware environment. And you don't have to switch to the open source KVM hypervisor.

    The Appeal of OpenStack

    Put simply, OpenStack is the best way to get as close to having something like an Amazon Web Services cloud internally as possible, Donna Scott, a VP and distinguished analyst at Gartner, said. More and more enterprises want to experiment with software, develop and deploy, improve, or abort, and this mode of operation, pioneered by web giants, requires infrastructure flexibility best served by the Infrastructure-as-a-Service model. OpenStack is a way to get IaaS APIs internally, and “it’s growing because of that,” she said.

    OpenStack also represents the promise of web-scale-style infrastructure, composed of commodity hardware and orchestrated by open source software. The promise of this approach is reduced cost, greater interoperability, and less vendor lock-in. That expected cost reduction comes not only from cheaper hardware but also from not having to pay for VMware licenses. Of course, those savings expectations have to be weighed against the cost of standing up and operating a private OpenStack cloud, which is an expensive undertaking in itself.

    OpenStack is Hard

    It’s an expensive undertaking because even though OpenStack has been around for about five years now, the group of technologies called OpenStack as a whole is still immature. Some parts are more advanced than others, but not all components are at the level of maturity most enterprises can deal with. “OpenStack today is still pretty tough to consume,” Scott said.

    These gaps are what VMware is offering to fill with its VMware Integrated OpenStack (VIO). Standing up a private cloud is not just about automating compute; it requires storage and network components as well, said Arvind Soni, a product line manager at VMware who leads the VIO effort.

    Switching to a KVM-based architecture means having to learn how to do networking and storage with KVM, he said.

    VMware is promising a streamlined OpenStack deployment that simply works. With VIO, tasks like provisioning VLANs or creating distributed network switches for a private OpenStack cloud are done in ways a typical IT organization is already familiar with, because it has managed a VMware environment for years.

    Going It Alone Is Expensive

    Standing up a private OpenStack cloud in-house without a pre-tested package from VMware or one of the other vendors that offer them takes specialized knowledge that is hard to come by in today’s market. The big-name companies with big publicized OpenStack deployments have in-house resources most enterprises don’t. Those companies also didn’t do it completely without vendor involvement. Rackspace helped Walmart out in its initial stages, for example, and PayPal used some help from Mirantis.

    Once the environment is up, you need to continue spending resources on managing it. A custom home-baked OpenStack cloud means you have to do all the software updates and patching yourself. This is especially difficult with KVM-powered OpenStack, which requires OpenStack software on every KVM host, Soni said. That problem doesn’t exist with VIO.

    Yes, companies that go it alone have more control over their destiny, but that control “is coming at a hefty price,” he said.

    KVM and ESX under One Roof, If That’s Your Thing

    Some users (very few today) don't want to have to choose between KVM or VMware's ESX under the hood of their private OpenStack cloud. Solutions like the one launched recently by VMware together with Rackspace address that need. The two vendors now offer, as an option, a single interface and a single authentication layer for a KVM-powered Rackspace OpenStack cloud and a VIO cloud.

    There is some interest in such solutions in the market, but not a whole lot, Scott said. Soni had a similar impression. “I have yet to see a production deployment which has multiple stacks underneath,” he said. From VMware’s standpoint, the idea behind the partnership with Rackspace is to have a partner that does OpenStack on KVM “really well” in case the need does arise.

    VMware-OpenStack Combo Will Be Hard to Ignore for a While

    VMware spending so much time and resources on OpenStack represents a recognition that OpenStack is here to stay. But the trend also represents an existential question for VMware. If the difference between VIO and KVM-based OpenStack is maturity of OpenStack technologies and the level of familiarity enterprise users have with them, the advantages VIO offers today are temporary. Of course, nobody knows whether VMware and OpenStack will co-exist in the enterprise data center 10 years from now, but the onramp to private cloud that combines the next generation of IT represented by OpenStack and the trust enterprises have in VMware today is a powerful proposition.

    1:00p
    Akana Simplifies API Management Across Multiple Data Centers

    Taking API management to a higher level, Akana today unveiled an upgrade to its namesake platform that creates a single logical instance of a control plane for API endpoints that can be distributed across multiple data centers.

    Rather than having to manage a complex web of API gateways, the latest version of the Akana API management platform lets a client establish a handshake with the platform in a federated manner to reduce overall application latency, Ian Goldsmith, VP of product management for Akana, said. Akana then routes the rest of the API calls to wherever the application resides across a distributed set of data centers.
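    The routing pattern Goldsmith describes can be sketched in a few lines of Python. All names, sites, and latency figures below are invented for illustration; this is the general federated-gateway idea, not Akana's code:

    ```python
    # Toy model of federated API routing: a client handshakes once with the
    # lowest-latency control-plane site, and subsequent API calls are
    # forwarded to whichever data center actually hosts the target app.

    APP_PLACEMENT = {           # application -> data center where it runs
        "billing": "dc-london",
        "catalog": "dc-virginia",
    }

    CONTROL_PLANE_SITES = ["dc-london", "dc-virginia", "dc-singapore"]

    def handshake(client_latency_ms):
        """Attach the client to the control-plane site nearest to it."""
        return min(CONTROL_PLANE_SITES, key=lambda dc: client_latency_ms[dc])

    def route(app):
        """After the handshake, API calls go straight to the hosting site."""
        return APP_PLACEMENT[app]

    latencies = {"dc-london": 18, "dc-virginia": 95, "dc-singapore": 210}
    entry = handshake(latencies)   # client attaches to its nearest site
    target = route("catalog")      # but 'catalog' calls land where the app lives
    print(entry, target)           # dc-london dc-virginia
    ```

    The key point is that the entry point (chosen by latency) and the destination (chosen by application placement) are decoupled, which is what makes the control plane feel like a single logical instance.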

    Scheduled to be available in the fourth quarter, the API management platform itself can either be invoked as a service managed by Akana or deployed on-premise by an internal IT organization. There is also a hybrid configuration through which an on-premise deployment can invoke additional API management resources residing in the Akana cloud when needed. The Akana API management cloud itself is distributed across multiple cloud and hosting service providers in much the same way a content delivery network operates, said Goldsmith.

    “Our API management platform is based on a multi-tenant architecture,” said Goldsmith. “We have the ability to create a multi-master approach that is … centralized but also highly distributed.”

    Other benefits of this approach, added Goldsmith, are the ability to have one server or an entire data center go offline with no interruption to the API service, the ability to distribute OAuth tokens in a way that makes them available to every server, and the ability to apply API analytics on a global basis.
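    Why does distributing OAuth tokens to every server matter for availability? A minimal Python sketch (illustrative only, not Akana's implementation) makes the point: if every node holds a replica of the token set, any surviving node can validate a session even after a server or a whole site fails:

    ```python
    # Toy replicated token store: tokens are copied to every node at issue
    # time, so validation has no single point of failure.

    class TokenGrid:
        def __init__(self, nodes):
            self.stores = {node: set() for node in nodes}

        def issue(self, token):
            for store in self.stores.values():   # replicate to every node
                store.add(token)

        def fail(self, node):
            del self.stores[node]                # node or data center goes offline

        def validate(self, token):
            # Any remaining node can answer.
            return any(token in store for store in self.stores.values())

    grid = TokenGrid(["ny-1", "ny-2", "ldn-1"])
    grid.issue("abc123")
    grid.fail("ny-1")                # lose a server; sessions survive
    print(grid.validate("abc123"))   # True
    ```

    A production system would use a replicated in-memory data grid (Hazelcast, in Akana's case, per the article) rather than naive full copies, but the availability argument is the same.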

    At its core, the Akana API management platform makes use of NoSQL databases based on MongoDB that are interconnected via a Hazelcast in-memory data grid. In addition, Akana has included ElasticSearch technology that makes it simpler for IT organizations to discover all the documentation and content associated with a particular API.

    In general, Goldsmith said, API management inside most organizations is becoming more federated. While developers still design and create APIs, other individuals inside product management teams are now responsible for version control and the portal through which those APIs are accessed. Internal IT organizations, meanwhile, are taking more responsibility for running the API management control plane in order to guarantee performance levels and maintain security, said Goldsmith.

    As the “API economy” continues to mature, so do the approaches to managing APIs, which by definition become more distributed every day.

    3:00p
    Cloud v. On-Premise: The Battle for Corporate ERP

    Gregory Belt is the Senior Director of Oracle Practice for Fujitsu Consulting.

    These days, when you hear a company announce that a solution or offering is now available in the cloud, chances are you don’t bat an eye. Nevertheless, software companies as well as service providers still take great pains to promote their cloud prowess in an effort to capitalize on the trend toward cloud migration. However, here’s the problem with trends: They can disappear as quickly as they came. As technology evolves in our personal lives, so do the options for businesses. With what seem to be limitless opportunities, many executives believe that organizations need to be all-in for cloud, but there are distinct advantages and disadvantages to being completely in the cloud or exclusively on-premise.

    They Call it 'Being on Cloud 9' for a Reason

    There are many benefits for an organization to use cloud services, which is why we continue to see widespread corporate adoption. One strong advantage of cloud is that it allows companies to streamline processes by taking advantage of best practices inherent in cloud applications. As organizations grow, restructure, merge, etc., processes often go out the window, and the ease of doing business both internally and externally becomes strained. It's imperative that these business process challenges be addressed in a timely fashion throughout the company, and cloud applications provide quicker "time-to-value" implementations.

    Cloud-based services offer the best practice processes mentioned above, as well as data management and analysis capabilities, all delivered via relatively easy-to-use tools over a relatively short time frame. These points are crucial as they enable companies to spend less time learning the technical aspects of the platform and more time putting their services into action.

    Cost is always an important consideration when deciding on a cloud service. When using the cloud, all hardware and software is taken care of by the provider, which reduces spending on IT infrastructure. This results in minimal upfront spending and lower TCO, while still maintaining access to corporate information.

    The Benefits of On-Premise

    Of course, cloud services aren’t always the magic bullet, and might not necessarily align with a company’s business needs. One disadvantage is that cloud customers need to play by the provider’s rules when implementing their business process in cloud applications. Cloud ERP suites cannot currently be modified to suit all customer needs, which might adversely affect a customer’s competitive advantage. This could be a deal-breaker for some companies and an advantage for on-premise solutions.

    Another advantage of on-premise solutions is that companies have complete control and access to their information internally. Companies don’t need to go through their provider for access and are able to manage everything in-house. On-premise is becoming more attractive to companies now that there is no practical limit to affordable storage capacity. We are seeing more products hit the market that are able to meet this need.

    As with cloud, there are disadvantages that come with on-premise. One of the main concerns is cost. As discussed earlier, cloud providers offer many services through a cost-effective “pay-as-you-grow” subscription model that can match hardware capacity and software licenses very precisely to a customer’s actual needs. This model also provides for continuing upgrades to IT infrastructure and removes that worry completely from the customer’s planning cycle. Companies that choose on-premise solutions must plan future investments based on anticipated growth; and if that growth does not materialize, they are stuck with unused hardware and “shelfware” instead of software. They also need to hire a dedicated IT staff to support that investment, which can be a heavy financial burden for companies.
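    The trade-off described above can be put into a back-of-the-envelope model. All figures below are hypothetical, chosen only to illustrate the shape of the comparison: on-premise pays up front for anticipated capacity, while pay-as-you-grow cloud tracks actual usage:

    ```python
    # Toy cost model: on-premise capex is fixed by *anticipated* growth;
    # cloud opex follows *actual* usage year by year.

    def on_premise_cost(anticipated_units, unit_capex):
        # Paid up front, whether the capacity is ever used or not.
        return anticipated_units * unit_capex

    def cloud_cost(actual_usage_by_year, unit_rate):
        # Pay-as-you-grow: billed only for what was actually consumed.
        return sum(units * unit_rate for units in actual_usage_by_year)

    # Plan for 100 units of capacity at $1,000/unit, but growth stalls at 40:
    capex = on_premise_cost(100, unit_capex=1_000)        # 100,000 up front
    opex = cloud_cost([10, 20, 40], unit_rate=1_200)      # 84,000 over 3 years
    print(capex, opex)  # 100000 84000
    ```

    When anticipated growth fails to materialize, the unused on-premise capacity ("shelfware" and idle hardware) is a sunk cost; the cloud bill simply stays proportional to usage. The comparison flips, of course, if usage grows as planned and the per-unit cloud rate stays above the amortized capex.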

    Putting it All Together

    Both on-premise and cloud have their advantages. Most companies should be able to get the best of both worlds by implementing a hybrid or co-existence cloud strategy. As noted earlier, companies don't need to commit to one option. Hybrid cloud uses a mix of on-premise, private cloud, and public cloud services with communication between the different platforms. Hybrid cloud gives companies the option to keep their competitive advantage by implementing their proprietary processes either on-premise or in a private cloud, while still using public cloud services for their other needs.

    As has been said before, “The move to the cloud is a journey; it is not an event.” Keeping a competitive advantage while still utilizing cloud services in some capacity should be the right strategy for most companies moving forward. There’s a silver lining for everyone. Now it’s time to figure out just the right mix.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

     

    5:01p
    With Sao Paulo Online, IBM Nears Completion of Global Cloud Data Center Build-Out

    IBM announced the launch of a 2.8-megawatt cloud data center in Sao Paulo.

    Part of the global network of cloud sites the company has been building out since acquiring SoftLayer in 2013, the Sao Paulo facility is the 29th data center supporting its cloud services. The network of cloud data centers stretches across 12 countries.

    All told, IBM now operates about 40 data centers, with the non-SoftLayer sites supporting specific customer engagements.

    It plans to bring one additional cloud data center online as part of the IBM SoftLayer network later this year. IBM will then reevaluate where it needs to bring on additional capacity in 2016, IBM SoftLayer COO Francisco Romero said.

    For all intents and purposes, the data center expansion plans outlined early last year are now complete, he said. Going forward, the company will decide where it might need to simply deploy additional pods in existing data centers to service additional demand versus actually building new facilities.

    Like all the IBM SoftLayer data centers, the 10,000-square-foot facility in Sao Paulo makes use of the same basic data center design. Each cloud data center can accommodate two to five pods consisting of anywhere from 4,000 to 5,000 servers each. Each pod provides access to 1.5 MW. The data centers support dual 10G Ethernet connections to servers that are no more than one network hop away from a local internet peering exchange.
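    The figures quoted above imply some quick arithmetic worth spelling out (using a mid-range assumption of 4,500 servers per pod, since the article gives a 4,000-5,000 range):

    ```python
    # Back-of-the-envelope numbers from the quoted pod design: 1.5 MW per
    # pod and ~4,500 servers per pod imply roughly 300-375 W per server,
    # and a 2.8 MW facility can feed just under two fully loaded pods.

    POD_POWER_W = 1_500_000
    FACILITY_POWER_W = 2_800_000        # the Sao Paulo site

    servers_per_pod = 4_500             # midpoint of the 4,000-5,000 range

    watts_per_server = POD_POWER_W / servers_per_pod
    full_pods = FACILITY_POWER_W // POD_POWER_W   # whole pods the site can feed

    print(round(watts_per_server), full_pods)  # 333 1
    ```

    In other words, at the midpoint density each server gets a power budget of about 333 W, and the 2.8 MW Sao Paulo facility has room for one fully loaded 1.5 MW pod plus most of a second.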

    Two major factors now drive placement of cloud data centers around the world, Romero said. The first is data sovereignty regulations in some countries that require the primary copy of any data to be stored locally. The second is latency in performance of cloud applications. Most cloud applications need to be highly distributed to guarantee satisfactory customer experience, he said.

    While enterprise IT organizations are not abandoning their existing data center investments just yet, Romero noted that the economics of the cloud are pushing new application deployments into either the public cloud or managed hosting environments.

    “A lot of organizations are deciding they want to be able to allocate their capital elsewhere in their business,” he said. “Also, a lot of the newer deployments involve infrastructure that can support real-time applications.”

    Historically, most IT organizations decided to locate data centers based on real estate costs and access to inexpensive power. While those issues matter, network connectivity is now also a much bigger factor in the age of the cloud, Romero said.

    In the case of IBM, the primary issue is to be able to build out a network of data centers that can compete with the scale of Amazon Web Services, Microsoft Azure, and Google Compute Engine.

    The Brazil cloud market, estimated to be worth $1.11 billion in 2017, is only one small piece of a global market where, much like in the restaurant business, scale and location still matter very much.

    6:19p
    vXchnge Launches Philadelphia Data Center It Wants to Turn into Interconnection Hub

    vXchnge, a data center provider formed by former Switch & Data executives that targets second-tier US markets, announced completion of its Philadelphia data center.

    This is the 15th data center in the company’s portfolio, which it has grown through both construction and acquisition. Its biggest expansion move to date was the acquisition of eight Sungard data centers announced in May.

    vXchnge markets itself as an “edge data center” provider. Edge data centers are facilities in densely populated tier-two metros where demand for internet services is growing rapidly. They are facilities where major network carriers, ISPs, and most of the major web content and cloud service providers interconnect to exchange traffic. Content providers cache popular content at these locations so it can be served to end users in the surrounding metros cheaper than delivering it over long distances.
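    The economics of edge caching described above can be sketched with a toy cost model. The per-gigabyte rates below are invented for illustration; the point is only that every cache hit in the local metro avoids the cost of hauling that content from a distant origin:

    ```python
    # Toy delivery-cost model: content served from an edge cache in the
    # local metro costs far less per GB than content fetched long-haul
    # from the origin data center.

    LONG_HAUL_COST_PER_GB = 0.050   # origin -> metro transit (hypothetical rate)
    LOCAL_COST_PER_GB = 0.005       # edge cache -> end user  (hypothetical rate)

    def delivery_cost(gb_requested, cache_hit_ratio):
        hits = gb_requested * cache_hit_ratio
        misses = gb_requested - hits
        return hits * LOCAL_COST_PER_GB + misses * LONG_HAUL_COST_PER_GB

    # 10 TB of traffic: a 90% cache hit ratio cuts delivery cost ~5x.
    print(round(delivery_cost(10_000, 0.0), 2))  # 500.0 (no edge cache)
    print(round(delivery_cost(10_000, 0.9), 2))  # 95.0
    ```

    This is why content providers cache popular objects in tier-two metros: the higher the local hit ratio, the more of the traffic is served at the cheap local rate.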

    Read our recent in-depth feature on edge data centers here.

    The company doesn't say how many such players interconnect in vXchnge's new Philadelphia data center. "We are not disclosing that information at this time," vXchnge CEO Keith Olsen said in an emailed statement, declining to say how many carriers or local ISPs are in the facility today.

    It will be a carrier-neutral data center, however, and the company positions it as a place from which customers can reach the end-user eyeballs in the metro. Whether it can deliver on that positioning will depend on its ability to bring the key players into the facility.

    Inside vXchnge's Philadelphia data center (Photo: vXchnge)

    The provider chose to expand in Philadelphia because of its population density, and because it is a major point of network interconnection between Washington, D.C., and New York City, Olsen said. With a population of about 1.5 million, Philadelphia is the fifth-largest city in the US.

    More on vXchnge’s reasons for choosing Philly here.

    vXchnge’s Philadelphia data center is a 70,000-square-foot facility with total power capacity of 2 megawatts. Olsen declined to disclose how much of that capacity is live and available for consumption today.

    Stylish interior design in vXchnge's Philadelphia data center (Photo: vXchnge)

    7:08p
    Zayo to Expand Miami Data Center, Home to Florida Internet Exchange

    zColo, the data center services subsidiary of Zayo Group, is expanding its Miami data center, home to the Florida Internet Exchange, which Zayo formed together with Netflix and south Florida data center service provider Host.net.

    With 15 network carriers in the facility, the Miami data center is a robust interconnection hub. Miami in general is an important tier-two US data center market and a network connectivity gateway between the US and Latin American markets.

    The biggest carrier hotel in the city is NAP of the Americas, owned by Verizon. Zayo’s facility connects to the big data center, which Verizon gained through its acquisition of Terremark in 2011.

    According to Zayo, demand for capacity in its data center has been high, and most of the available inventory in the facility, which was launched 18 months ago, has been leased.

    Netflix and other major web content and cloud service providers have been a major force behind new interconnection initiatives, such as FL-IX. New peering exchanges in big markets give them additional options for delivering their content to users in those markets besides the biggest exchanges controlled and operated commercially by a handful of data center providers, such as Equinix, Telx, or Verizon.

    The biggest such initiative in recent years was the formation of Open-IX, an organization that certifies internet exchanges and data centers that can host them. The internet’s biggest content providers – Google, Netflix, and Akamai, among others – formed Open-IX to stimulate creation of distributed internet exchanges that stretch over multiple data centers in a metro. By increasing their peering options, these content companies lower the cost of delivering content to users.

    7:43p
    Accenture Acquires Cloud Sherpas to Boost Cloud Consulting Chops


    This article originally ran at Talkin’ Cloud

    Accenture has acquired cloud advisory company Cloud Sherpas in order to scale its cloud consulting capabilities. The financial terms of the acquisition were not disclosed.

    According to an announcement by Accenture on Tuesday, the acquisition will bring cloud strategy, technology consulting, and application management, integration and management services to its newly formed Accenture Cloud First Applications team. The division delivers cloud services for Google, NetSuite, Salesforce, ServiceNow, Workday and other “pure play” technologies.

    With the cloud professional services market expected to be worth up to $34.41 billion by 2019, Accenture is one of the main contenders, and it continues to invest in cloud capabilities beyond its acquisition of Cloud Sherpas. Also this year, Accenture acquired consulting services company Axia and Spanish cloud consulting firm Solium, along with several other companies.

    Headquartered in Atlanta, Georgia, Cloud Sherpas has more than 1,100 employees. Since its launch in 2007, it has grown into one of the top cloud service brokerages for Google, Salesforce, and ServiceNow.

    Last year, Cloud Sherpas made a $10 million investment in its new global cloud advisory business unit designed to support large enterprises in their migration to the cloud. In April, Cloud Sherpas launched its Managed Services group and committed to hiring dozens of new employees.

    “We are thrilled to be joining forces with Accenture. Cloud Sherpas was born in the cloud and we are perfectly aligned with Accenture’s Cloud First agenda,” David Northington, Cloud Sherpas’ chief executive officer, said in a statement. “The combination of our capabilities and experience with Accenture’s scale, broad industry expertise and global cloud application capabilities represents a unique and compelling opportunity for our clients, for our people and for the future of cloud technology.”

    Cloud Sherpas made this year’s Talkin’ Cloud 100 list as #56.

    Over the past couple of years, Cloud Sherpas has grown its business organically as well as through acquisitions. In 2013, Cloud Sherpas acquired Innoveer Solutions and Navigis to help customers migrate to Salesforce and ServiceNow. It also acquired London-based Stoneburn Software Services to gain a foothold in growing markets.

    Goldman Sachs served as Accenture’s financial advisor on the deal.

    This first ran at http://talkincloud.com/cloud-computing-mergers-and-acquisitions/accenture-acquires-cloud-sherpas-boost-cloud-consulting-cho

