Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
Tuesday, October 28th, 2014
| 12:00p |
SwiftStack Gets $16M to Make Open Source Software-Defined Storage Easy for Enterprises
SwiftStack wants to make open source storage software work really well on commodity hardware, says founder and CEO Joe Arnold, whose OpenStack Swift-based software-defined storage company recently raised a $16 million Series B funding round.
The round was led by OpenView Partners, a venture capital firm that specializes in boosting growing enterprise software companies after they have found product-market fit but before they have scaled. Existing investors Mayfield Fund, Storm Ventures and UMC Capital also participated.
SwiftStack’s software-defined storage controller uses OpenStack’s object storage component Swift at its core. It virtualizes storage resources distributed across data centers, presenting them as a single centrally controlled system. Operators can manage and scale their storage environments through a single pane of glass, orchestrate cohesive upgrades and monitor utilization and performance.
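For readers unfamiliar with Swift, the sketch below shows roughly how an application talks to the object store at SwiftStack’s core. It uses Swift’s standard REST API; the account URL and auth token are placeholders, not anything specific to SwiftStack’s product.

```python
# Minimal sketch: storing and retrieving an object through the OpenStack Swift
# REST API (the object storage engine at SwiftStack's core). The account URL
# and auth token below are placeholders; a real deployment obtains them from
# Keystone or the cluster's own auth service.
import requests

STORAGE_URL = "https://swift.example.com/v1/AUTH_demo"   # placeholder account URL
TOKEN = {"X-Auth-Token": "replace-with-real-token"}       # placeholder token

# Create a container, upload an object, then read it back.
requests.put(f"{STORAGE_URL}/backups", headers=TOKEN).raise_for_status()

resp = requests.put(
    f"{STORAGE_URL}/backups/report-2014-10.csv",
    headers=TOKEN,
    data=b"date,bytes_used\n2014-10-28,1048576\n",
)
resp.raise_for_status()

obj = requests.get(f"{STORAGE_URL}/backups/report-2014-10.csv", headers=TOKEN)
print(obj.text)
```

Because every node speaks the same HTTP API, the same calls work whether the cluster spans one rack or several data centers, which is what lets a controller like SwiftStack present distributed capacity as a single system.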
“Unlike many B2B software [companies] or startups in general that touch on one macro trend, SwiftStack sits at the crossroads of so many trends,” said OpenView Vice President Mackey Craven. “The growth of unstructured data, the movement and abstraction of storage at the software level, open source technology in the enterprise — all the winds are at their back.”
Craven sees SwiftStack as the future of enterprise storage. “Despite being a small company, the technology has been proven at an incredible scale,” he said.
Making open source easier for enterprises
Tech giants are acquiring and VC firms are funding those that simplify open source adoption. “Enterprises really want to consume open source but it can be really difficult, and it’s our job to make it really simple,” Arnold said. “If you look at data center technologies that have gotten a foothold, the last proprietary one was probably VMware. We’re really seeing a desire by these enterprises for open source starting to show itself.”
Pooling legacy, commodity storage
SwiftStack launched in 2011. The company said it signed up several Fortune 500 customers, including eBay, Pac-12 Networks, and HP. Arnold said the company continues to see healthy momentum with enterprises, and the round will help in pursuing that market.
 SwiftStack enables storage cluster management across multiple data centers.
Enterprises have been tackling unstructured data storage using traditional array-based storage technologies, which can be cost-prohibitive. SwiftStack’s distributed approach is more cost effective, flexible, and simplifies the process.
Version 2.0 launched earlier this year, adding a Filesystem Gateway that integrates object storage with existing file-based applications. The new version also saw enhancements to enterprise authentication with plug-and-play integration with enterprise management systems, such as LDAP and Active Directory.
Storage is next on software-defined checklist
Defining and carving up storage resources using software tools like Swift and SwiftStack is a major shift in storage management. “There’s been a separation of hardware and software and a disaggregation of what’s on the equipment and the equipment itself,” said Arnold.
“People want to get away from appliances. It’s happened in compute; we’re starting to see this with networking as well — a separation from the control plane being placed into software. Storage is sharing some of the same design principles. Storage is now becoming about distributed systems.”
Several vendors offer scalable storage software for vendor-specific hardware, such as EMC’s ScaleIO, and for commodity hardware, such as Scality.
SwiftStack competition also includes Red Hat’s alternative to Swift called Ceph, which the open source software giant gained through the acquisition of InkTank. InkTank, which spun out of hosting provider DreamHost, offers commercial support for Ceph.
OpenStack momentum accelerates
There is an uptick in both VC funding and consolidation in the OpenStack ecosystem. Another example of a player that recently raised a major round was Mirantis, which raised $100 million.
Cisco acquired private OpenStack cloud provider Metacloud in September and EMC acquired CloudScaling earlier this month.
The OpenStack Foundation’s summit in Paris is next week, where SwiftStack customer Time Warner will be a keynote speaker. OpenStack’s tenth and latest release, named Juno, came out earlier this month.
| | 3:30p |
Navigating the Bermuda Triangle of Application Modernization
Rick Oppedisano is Vice President of Global R&D and Marketing for Modern Systems.
Mainframe applications sit right at the heart of the business. They’re one of the most mission-critical aspects of the business’ technology infrastructure, running the applications that make the business work.
However, companies are facing serious challenges around their mainframe stack. First, they’re running out of people. In a recent CIO survey, respondents said it was highly likely or certain that the original knowledge of their mainframe applications and supporting data structures is no longer in the organization. Second, the cost of maintaining legacy systems has become prohibitive. Gartner estimates the average in-house cost of managing a mainframe infrastructure environment is 7 percent of an organization’s annual IT spend—and can reach as high as 20 percent.
Another challenge is competitive advantage. Legacy systems weren’t built to integrate with external business intelligence or analytics software, so it’s difficult to leverage the systems to drive any sort of innovation.
So what’s a business to do? How can virtual infrastructure help it navigate what we call the Bermuda Triangle of Application Modernization?
Gain value in byte-sized chunks
Migrating from or even extending a mainframe application is risky because you’re tampering with an application that forms a core part of your business. The process is time-consuming because most companies typically maintain a long and growing backlog of application changes they’d like to make, and because working slowly and carefully is the only way to reduce the risk of modifying a mission-critical piece of the business.
The key is to focus on incremental solutions for specific problems, solutions that can then be replicated across the mainframe stack to reduce risk and cost while adding competitive advantage.
Liberate mainframe data
Moorcroft Debt Recovery Group, a debt collection company servicing large banks and utilities, wanted to modernize their core applications, but like many others, felt the risk was too high. For Moorcroft, downtime could equal up to $24k/hour or upwards of $330k/day.
Moorcroft’s decades-old core debt management application sat on a mainframe with data structured in flat files, requiring an entire program to be written for basic search or reporting functionality. The extensive timeframe required for reporting held back Moorcroft’s business, making it less responsive. With multiple developer resources required to complete basic tasks, technical innovation was stifled.
Moorcroft leveraged mainframe data sharing technology to structure and replicate mainframe data to a virtual instance of SQL Server. This freed up data for faster reporting and deeper queries, exposing insight not previously available. Virtual infrastructure enabled Moorcroft to move away from the peak usage-pricing model of the mainframe, reducing cost and laying the groundwork for exposing data to mobile and external sources.
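To illustrate the payoff, here is a hedged sketch of the kind of ad hoc reporting query that becomes straightforward once the data lives in SQL Server. The server, database, table, and column names are invented for the example and do not describe Moorcroft’s actual schema.

```python
# Illustrative only: a reporting query against a hypothetical SQL Server replica
# of mainframe debt-management data. Server, database, table, and column names
# are invented; pyodbc is a commonly used SQL Server client library.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=replica.example.local;DATABASE=DebtMgmt;"
    "UID=report_user;PWD=secret"
)

cursor = conn.cursor()
cursor.execute(
    """
    SELECT client_id, COUNT(*) AS open_accounts, SUM(balance) AS total_balance
    FROM accounts
    WHERE status = 'OPEN'
    GROUP BY client_id
    ORDER BY total_balance DESC
    """
)

# Top ten clients by outstanding balance, pulled in seconds rather than via a
# purpose-written mainframe batch program.
for client_id, open_accounts, total_balance in cursor.fetchmany(10):
    print(client_id, open_accounts, total_balance)

conn.close()
```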
Move workloads off the mainframe
Isolating reporting workloads and running them in virtual environments is a low-risk, non-invasive way to reduce cost. Moorcroft’s core application activity was consistent with most financial applications, with roughly 80 percent of the batch activity tied to reporting. The company leveraged off-mainframe processing power to reduce mainframe MIPS and cost of ownership. It replicated the reporting job functionality to the target Windows virtual environment, retiring and eventually eliminating the cost of the mainframe workload.
The key to this concept is doing it without impacting the developers or end users. In Moorcroft’s case, developers continued to write new reports as required from the business, except now these reports were written and executed in the Windows virtual environment.
Consider rehosting
Moorcroft’s incremental approach prepared them for broader modernization without overwhelming or disrupting the business. Once the data and reporting were modernized, only the update, batch and online modules remained on the mainframe.
Moorcroft wanted to leverage virtual infrastructure to replace the mainframe entirely. However, they didn’t want to disrupt internal resource and support models. Therefore, the company rehosted the remainder of the core application in the Windows virtual environment, eliminating the mainframe entirely. Doing so put the company in position to choose from a broader set of development resources, while also offering a more robust and affordable disaster recovery plan.
Reap the rewards
Moorcroft’s timeline for complete modernization was approximately three years. But the project’s impact will reverberate for years to come.
“The mainframe had many strengths such as familiarity, performance, reliability and security,” says Dave Pickering, Moorcroft VP of IT. “A complete big bang implementation would have been a barrier. When we designed this conversion strategy we removed this barrier and significantly reduced the risk of this project to our business.”
In the end, Moorcroft’s approach improved IT service delivery, helped the business gain competitive advantage and generated savings upwards of $400k annually.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | | 4:53p |
Ohio Governor Confirms $1.1B Amazon Data Center Project in State
An upcoming $1.1 billion Amazon data center project in central Ohio has been confirmed by none other than Ohio Governor John Kasich.
During a recent campaign stop, Kasich said he had met with top Amazon officials and that the company would make a “billion-dollar investment in cloud,” which would bring jobs to his state, the Columbus Dispatch reported. The exact location is yet to be determined, with officials from different parts of the state trying to entice the company with local incentives.
Reports that Amazon was evaluating potential data center sites in Ohio started surfacing earlier this year. The state has approved two tax credits, and officials in the cities of Dublin and Hilliard approved freebies for the company to sweeten the pot for the massive construction project.
When asked by an audience member during the campaign stop whether the Amazon data center project was coming, Kasich said the company had already committed. The Columbus Dispatch reported Kasich saying: “I just met with one of the top officials at Amazon. … They’re making a billion-dollar investment in cloud computing. So, what is coming is very exciting jobs to Ohio.”
Amazon’s response to Data Center Knowledge’s requests for comment on its Ohio plans has been that the company is always on the lookout for good data center locations.
The state tax credits approved for Vadata, an Amazon subsidiary that does data center projects on its behalf, are worth an estimated $81 million. The size and scope of the project promises to spur the local economy, as these builds usually bring in jobs and help the data center and tech industries in the areas around them blossom.
Dublin offered Vadata free city-owned land, valued at $6.8 million. The town is hoping to spur a data center industry, building on a 60,000-square-foot data center project by Expedient it recently landed.
Hilliard’s proposed package includes a 15-year, 100 percent real estate tax abatement worth $5.4 million, plus rebates and waivers of various other fees, together valued at about $200,000.
The project is expected to create 120 jobs with an average salary of $80,000 a year, according to Tax Credit Authority documents.
Earlier this month, Amazon announced the launch of a new data center in the Frankfurt area to host the infrastructure that supports its cloud services. This is the second location for AWS in Europe, adding to a previously existing one in Dublin, Ireland. | | 5:16p |
Hitachi Data Systems Intros Equinix-Hosted Private Cloud
Hitachi Data Systems this week introduced a fully managed cloud infrastructure offering that mixes on-premise and off-premise cloud services hosted at Equinix data centers.
Called Hitachi Compute – Cloud as a Service, it offers enterprises a private managed cloud with pay-as-you-go pricing and elastic compute, network and storage resources behind their firewall.
The managed cloud is now part of the Equinix Cloud Exchange program, which gives the vendor a way to expand its cloud service to the 30-plus markets around the world where Equinix has data centers.
HDS also announced updates to its converged infrastructure solution called Hitachi Unified Compute Platform. The platform can now integrate with VMware’s vCloud Air and Microsoft’s Azure public clouds.
Hitachi’s UCP Director software enables control and management of on-prem and public-cloud infrastructure through a single pane of glass, the company said. To achieve this, HDS has natively integrated UCP with both Microsoft System Center 2012 R2 and VMware vCenter Server. | | 6:15p |
QTS Takes Major Step Toward FedRAMP Compliance
QTS has gotten one step closer to being qualified as a cloud provider that’s secure and compliant enough to serve agencies of the U.S. federal government.
The company has been designated FedRAMP Ready, which means it has gone through some preliminary steps of the FedRAMP process and is now ready to initiate the FedRAMP authorization process.
FedRAMP is a set of security standards the government has put in place to streamline adoption of cloud services by its agencies. To date, 16 companies have been certified as compliant FedRAMP cloud service providers, including Amazon Web Services, Salesforce, Microsoft, IBM, HP and Oracle, among others.
QTS is joining a short list of FedRAMP Ready service providers. Others on the list are CA Technologies, OnCloud, PegaCloud, Project Hosts and IBM SoftLayer.
“By being designated as FedRAMP Ready, the QTS Federal Cloud is positioned to quickly initiate an assessment and obtain a FedRAMP authority-to-operate from a federal agency customer,” Matthew Goodrich, acting FedRAMP director at the General Services Administration, said in a statement.
QTS hosts its Federal Cloud infrastructure – brought online in early 2014 – at its Richmond, Virginia, and Atlanta, Georgia, data centers. In 2013, the company established a lab at the massive Richmond facility dedicated to developing cloud solutions specifically for federal clients. | | 6:55p |
Pacnet Opens Data Center Behind Great Firewall of China
Pacnet has opened a 225,000-square-foot data center in Tianjin, China, its fifth in the region. The facility will serve data center and managed services demand in the Beijing-Tianjin-Hebei region with capacity for 2,000 racks.
It is located in the Gaocun Science and Technology Innovation Park of Tianjin Wuqing District. The park is trying to establish itself as a major technology hub in northern China, and the new facility is a joint initiative between Pacnet and The Tianjin Wuqing Government.
“To date, the district has attracted over $1.9 billion in investment capital and is home to over 1,400 enterprises,” said Zhang Yong, secretary of Wuqing District Committee of the CPC.
The government is planning to integrate the Beijing, Tianjin and Hebei provinces the new facility will serve into a single megalopolis.
“We are excited to launch one of our largest data center facilities in the rapidly developing city of Tianjin,” Carl Grivner, CEO of Pacnet, said. Pacnet will serve both domestic and multinational corporation needs in the facility, he said.
Like other Pacnet data centers, the new one leverages software-defined networking to allow customers to self-provision bandwidth on demand, a capability the company has been a leader in. “Today’s opening of TJCS1 also positions Pacnet as an innovator in the data center industry in China, enabling the delivery of SDN solution in China,” said Grivner.
TJCS1 is connected to the Beijing data center of China International Data System via the Pacnet Enabled Network, the company’s SDN service platform.
“Through this strategic collaboration, we can leverage TJCS1’s strategic location, innovative SDN capabilities and world-class power, security, environmental and connectivity features to enable our customers to dynamically provision and scale bandwidth between our Beijing and Tianjin data center locations,” said Zhang Nianlu, CEO of China International Data System.
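To make the self-provisioning idea concrete, here is a purely hypothetical sketch of a bandwidth-on-demand request against an SDN controller’s REST API. The endpoint, fields, and credentials are invented for illustration and do not describe the Pacnet Enabled Network’s actual interface.

```python
# Purely hypothetical sketch of bandwidth-on-demand provisioning against an SDN
# controller's REST API. The endpoint, payload fields, and credentials are
# invented and do not reflect Pacnet Enabled Network's real API.
import requests

CONTROLLER = "https://sdn-controller.example.net/api/v1"  # hypothetical endpoint

order = {
    "a_end": "TJCS1",        # Tianjin site code used here for illustration only
    "z_end": "BJS1",         # hypothetical Beijing site code
    "bandwidth_mbps": 500,
    "duration_hours": 24,
}

resp = requests.post(f"{CONTROLLER}/circuits", json=order,
                     auth=("api_user", "api_key"))
resp.raise_for_status()
print("Provisioned circuit:", resp.json().get("circuit_id"))
```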
China remains an active data center market
Research firm Gartner pegs data center capacity in China to grow at a compound annual growth rate of over 11 percent through 2016. IDC expects the data center services market to grow from $1.7 billion in 2012 to $3.9 billion by 2017, a compound annual growth rate of roughly 18 percent.
However, tapping that growth is not simple. China employs the Golden Shield Project. Also known as “the Great Firewall of China,” it is a surveillance and censorship system that filters cross-border traffic. Foreign companies are also often forced to partner with Chinese companies to establish presence in the mainland.
The market remains attractive, as the Chinese economy continues to grow, and many technology giants both inside and outside the country are trying to establish themselves within its borders to capitalize on that growth. CenturyLink recently launched a data center in Shanghai, its first in mainland China. That data center is in a facility operated by GDS, a growing Chinese provider with 17 facilities.
Brian Klingbeil, CenturyLink’s senior vice president of international development, told Data Center Knowledge that the Great Firewall of China slows down cross-border traffic for service providers and sometimes causes errors, which is one of the major reasons the company went through the trouble of setting up a data center on the mainland, even though it already has significant presence elsewhere in the region.
IBM and Microsoft Azure entered China by partnering with 21Vianet. Amazon partnered with ChinaNetCenter to establish an Amazon Web Services cloud availability region there. AWS’s footprint elsewhere in the region includes Tokyo and Singapore, as well as edge locations in Osaka, Hong Kong, Singapore, and Sydney. Oracle is working on bringing a cloud data center to China as well.
In China, Pacnet operates data centers in Tianjin, Chongqing, Shenzhen, Beijing, and Shanghai. Across Asia-Pacific, it has data centers in 15 cities. It recently opened a 155,000-square-foot data center in Singapore, dubbed CloudSpace II. That facility achieved Uptime Institute’s Tier III Certification for Design. Hosting and cloud provider LeaseWeb has taken space in the facility in support of an ambitious growth plan in the region. | | 7:30p |
Cloud Sprawl: The Problem of Too Many Clouds
Several years ago, when virtualization first entered the market, adoption grew steadily until it became a mainstream platform. Before long, administrators were working with this new technology, spinning up new servers, creating new workloads, and often forgetting to manage those instances. So we had a VM sprawl issue: there were too many VMs in a given environment and not enough visibility to control those virtual instances. Now, with the wide adoption of cloud computing, we are beginning to see the same problem, but with a new name: cloud sprawl.
Managing your cloud
Believe it or not, this is actually becoming a bit of a problem. Administrators are working with a very new technology and are beginning to expand their WAN (or cloud) presence far beyond what they originally thought possible. IT consumerization has been the main driver behind this push, along with the demand for more distributed computing systems. Unlike with virtualization or even desktop sprawl, administrators have the opportunity to get control of the cloud environment sooner rather than later.
- No longer just a one-cloud option. Many organizations now have two or more cloud environments all under one roof. Some organizations have a private as well as a public cloud presence. Within those environments, there are numerous virtual instances and workloads running. Although these types of advancements are healthy, keep an eye on cloud resources and make sure that there is always an element of control.
- Get the right management tools in place. Whether they are native or third-party, you have to maintain constant visibility into your environments. With a number of different platforms in play, get a tool that’s agnostic to the underlying systems.
- Train and get certified. No need to explain this further, but learn your platform and understand how everything works together. The only way to control a diverse environment is to understand how the underlying components work.
- Have a change management platform. For large, multi-cloud environments, a change management system must be in place. Making changes on a cloud platform can include firewall, switch, storage and end-point modifications. Keeping logs and tracking what changes are being made will help control sprawl by giving administrators visibility into changes within the environment.
- Begin with the end (goals) in mind. To help control cloud sprawl, plan out your deployment(s) ahead of time. Know how many servers and systems will be required and how the cloud model will be designed. By having a hardened plan in place, it’ll be easier to follow the strategy instead of trying to develop one as you move along. One of the key points here is to create a cloud strategy that is capable of scaling with the needs of the organization. Anything too rigid may not be conducive to the goals of the business.
New tools around automation are helping ease the pain of managing a variety of cloud environments. Open source technologies now directly connect heterogeneous platforms to create one logical control plane. Still, engineers are seeing more virtual resources being used, more policies being deployed – and more users depending on cloud services. At this point, there’s really no getting away from cloud computing; this compute platform is here to stay. With that in mind, make sure to always control your resources whether in the data center or in the cloud.
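As one concrete, if simplified, illustration of the platform-agnostic tooling described above, the sketch below uses Apache Libcloud, an open source library that puts many cloud providers behind one API. The credentials are placeholders; the point is only to show how a single cross-cloud inventory call might look.

```python
# One way to get a provider-agnostic view of resources: Apache Libcloud wraps
# many clouds behind a common API. Credentials below are placeholders, and the
# same list_nodes() call works for any provider Libcloud supports.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

Ec2Driver = get_driver(Provider.EC2)
ec2 = Ec2Driver("AKIA-EXAMPLE", "example-secret", region="us-east-1")

# A single inventory loop like this, repeated per provider, is a simple first
# step toward keeping sprawl visible.
for node in ec2.list_nodes():
    print(node.name, node.state, node.public_ips)
```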
The bottom line is that use of the Internet is going to increase. Our data-on-demand society is going to demand information that is fast and flexible. Because of this market push, organizations will want to adopt some kind of cloud strategy to meet industry demands. In the process of moving to an even greater cloud model, never forget about the importance of ground control. | | 8:30p |
DigitalOcean Partners with Mesosphere to Provide Highly Scalable Distributed Application Hosting 
This article originally appeared at The WHIR
Cloud hosting provider DigitalOcean has partnered with Mesosphere, a provider of resource management software for distributed applications, to help automate cluster provisioning of its cloud servers (or “droplets”) and make it easier for developers to deploy, scale and manage applications, and spin up clusters.
Mesosphere’s software is based on Apache Mesos, which was developed by UC Berkeley’s AMPLab and is run at scale by Twitter. It essentially organizes servers, VMs, and cloud instances into a single pool of intelligently and dynamically allocated resources (CPU, memory, storage and so on) for greater efficiency, higher availability, more fault tolerance and less operational complexity.
According to DigitalOcean’s Tuesday announcement, developers can launch a Mesosphere cluster from a dedicated DigitalOcean landing page, allowing them to build and quickly deploy highly available applications that can scale with little or no changes in code.
“Mesosphere is a breakthrough for developers seeking more flexible ways to write applications directly against the datacenter,” DigitalOcean CEO and co-founder Ben Uretsky said in a statement. “The API that Mesosphere exposes for managing the underlying compute resources gives developers powerful advantages in standing up and scaling applications. The ability to manage even thousands of droplets like a single computer is very exciting to our large customer base of developers who value our fast, developer-friendly infrastructure.”
DigitalOcean makes it easy for developers to set up Mesos, which otherwise requires installing software packages like ZooKeeper and connecting various pieces of software and hardware.
With the setup out of the way, the Apache Mesos API allows applications to treat all cloud instances as though they were one, large compute fabric rather than a complicated tangle of individually connected servers and hostnames. DigitalOcean’s integration of Mesosphere also enables existing containerized applications to be orchestrated via Mesosphere’s Marathon scheduler without having to make code changes.
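To give a flavor of what orchestration through Marathon looks like in practice, here is a minimal sketch that posts an application definition to Marathon’s REST API and then scales it. The Marathon host, image, and app id are placeholders for illustration.

```python
# Minimal sketch: launching and scaling a containerized app through Marathon's
# REST API (the scheduler mentioned above). Host, image, and app id are
# placeholders.
import requests

MARATHON = "http://marathon.example.com:8080"  # placeholder Marathon endpoint

app = {
    "id": "/demo/web",
    "cpus": 0.25,
    "mem": 128,
    "instances": 3,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "nginx:1.7",
            "network": "BRIDGE",
            "portMappings": [{"containerPort": 80, "hostPort": 0}],
        },
    },
}

# POST /v2/apps creates the app; Marathon then schedules it across the cluster.
resp = requests.post(f"{MARATHON}/v2/apps", json=app)
resp.raise_for_status()

# Scaling is a PUT against the same app id with a new instance count.
requests.put(f"{MARATHON}/v2/apps/demo/web", json={"instances": 5}).raise_for_status()
```

Because Marathon keeps the declared instance count running across the pooled droplets, the developer reasons about the application definition rather than about individual servers.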
Mesosphere CEO and co-founder Florian Leibert said, “Mesosphere on DigitalOcean lets developers focus on apps, not servers.”
Mesosphere has also been active on other cloud platforms, having launched last year a utility for setting up Mesos clusters on Amazon Web Services called Elastic Mesos. It has also created a service for provisioning Mesosphere clusters on Google’s cloud platform.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/mesosphere-enables-digitalocean-provide-highly-scalable-distributed-application-hosting | | 9:00p |
GoDaddy Pushes into Developer Hosting Market with VPS, Dedicated Server Updates 
This article originally appeared at The WHIR
GoDaddy has launched updates to its VPS and dedicated server products to better meet the needs of its growing customer base of developers and web designers, which will be served under its new GoDaddy Pro brand.
While GoDaddy has traditionally focused on the needs of small businesses, it is launching more hosting products aimed at developers and designers after learning that more than half of its small business customers have enlisted a third party to build their website, says Jeff King, SVP and GM of hosting at GoDaddy.
“There’s everything from what we call moonlighters, this is your technical brother or cousin or something who builds your website for you. Or people who are basically building websites for other people for free,” King says. “There are also freelancers, to small agencies, developers, designers; there’s a whole array of different types of folks who are building ecommerce or websites for other people.”
According to an official announcement by the company last week, GoDaddy’s VPS and dedicated servers are powered by CentOS, Fedora or Windows and use cPanel or Plesk control panels.
Each product comes in three management tiers: self-managed, managed, or fully managed. The managed offering is what King calls its “standard” tier, while the self-managed offering is for developers who are looking for “super value” and can handle most of the technical requirements on their own.
The new Pro plans also include new features such as OS patching, staging, cloning, root access, backups, and disaster recovery.
Support was another huge area of improvement. With help from Media Temple, the hosting company GoDaddy acquired last October, GoDaddy has improved the support available to customers. Now employees will be required to be product certified, which GoDaddy says will result in customers receiving more knowledgeable and timely support.
“Essentially what we’ve done here is we’re taking the Media Temple playbook and we’re implementing it at GoDaddy,” King says. “This is a big advantage that we got when we acquired Media Temple and brought them into the family.”
“What we’ve done first and foremost is created a very specific hosting support team so that when you have a hosting problem…you get someone who really understands hosting,” he says.
“You essentially graduate into hosting support. We’ve executed a whole academy training program that we basically took the playbook right out of Media Temple’s world and make sure these teams are trained on everything from control panels to WordPress and PHP and other technical capabilities.”
King says there are more products to come under its GoDaddy Pro umbrella.
“We believe that designers and developers have very specific needs not only in hosting but in how they manage their clients,” he says. “We’re very excited to focus on this market and provide services and capabilities for them from the very beginning.”
This article originally appeared at: http://www.thewhir.com/web-hosting-news/godaddy-pushes-developer-hosting-market-vps-dedicated-server-updates |