Data Center Knowledge | News and analysis for the data center industry

Tuesday, January 20th, 2015

    4:53p
    Look Before You Leap to Converged Storage

    Janae Lee, senior vice president of strategy at Quantum, has over 30 years of experience in the computer and digital storage industry, as a CEO and senior executive leading sales, marketing, product and strategy organizations.

    The hottest storage offerings these days are all about making digital storage easier (and, by implication, cheaper) to manage. The move to converged storage is a perfect example.

    The drive for ease of use is natural, particularly considering the challenge and expense of finding skilled storage professionals. But given the industry’s desire to define whatever is new as the answer to world peace, it once again falls on the shoulders of IT decision makers to be sure they understand what they’re buying. Here are three tips to consider when evaluating converged storage offerings.

    Evaluating Converged Storage Offerings: 3 Areas of Consideration

    Be aware that you are buying a solution to a specific problem, not the answer to all your storage problems. Use the solution for the purpose for which it was designed.

    I’ve seen a number of articles recently implying that a converged storage solution is good for (mostly) everything. That’s just not true. If your performance and budget requirements aren’t very demanding, sure, you can buy anything and it will work okay. But the truth is that different applications need different flavors of storage performance, a mix of latency, throughput, and IOPS. You wouldn’t configure the same storage for email as you do for big data analytics. Tightly packaging compute, storage, and smart data caching software into a converged solution doesn’t change this fact.
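
    To make the trade-off concrete, here is a minimal back-of-the-envelope sketch in Python (the workload numbers below are purely illustrative assumptions, not vendor figures), using the basic identity throughput = IOPS × I/O size:

        # Rough illustration: throughput (MB/s) = IOPS x I/O size.
        # The workload profiles below are hypothetical, chosen only to
        # show how differently two applications can stress storage.
        def throughput_mb_s(iops, io_size_kb):
            return iops * io_size_kb / 1024.0

        workloads = {
            "email (8 KB random I/O)": (5000, 8),             # many small operations
            "analytics (1 MB sequential I/O)": (500, 1024),   # few large reads
        }

        for name, (iops, io_kb) in workloads.items():
            print(f"{name}: {iops} IOPS -> ~{throughput_mb_s(iops, io_kb):.0f} MB/s")

    The email profile is dominated by operation count and latency, while the analytics profile is dominated by bandwidth, which is why a converged system tuned for one can disappoint on the other.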

    To accommodate this need to match product design to requirements, when vendors create their unique version of converged storage, they have a customer ecosystem and an application, or set of applications, in mind. This influences their capacity choice for SSD and disk and whether they use object storage. It also influences their caching algorithm. It may even determine whether they include a spigot to the cloud.

    So as you look at these solutions, you must pay attention to both the technical specifications and the marketing pages on the vendor’s website. What use cases do they highlight? What applications are their customers running? If these don’t match your requirements, you may want to ask a lot more questions (or run a POC) before you put this solution on your short list. Otherwise, the solution won’t give you the performance (or price-performance) you’re seeking.

    Consider your future growth as you create your short list.

    The vendor’s converged storage model also deserves consideration in the context of time, as your needs increase. Think about your growth requirements as you evaluate the solution. Converged solutions are scale-out solutions, offering non-disruptive growth. But many of these scale-out configurations have a locked ratio of compute to storage that you buy as you grow. If this package doesn’t match your problem, your converged solution will be very expensive over time, as you buy either a bunch of storage or a bunch of processing that will sit idle. Many genomics research companies discovered this to their dismay: getting the large capacities they needed to support ongoing data growth meant buying far more processing capability than they wanted to pay for. Look for a solution that grows the way you expect to grow.
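
    As a minimal sketch of that cost dynamic, assume a hypothetical scale-out node that bundles a fixed amount of compute with a fixed amount of capacity (all figures below are invented for illustration, not taken from any product):

        # Hypothetical scale-out node: storage and compute come bundled.
        NODE_CAPACITY_TB = 100   # usable capacity per node (assumed)
        NODE_CORES = 32          # compute per node (assumed)

        def nodes_needed(capacity_tb):
            # Capacity-driven growth: buy whole nodes until the data fits.
            return -(-capacity_tb // NODE_CAPACITY_TB)  # ceiling division

        needed_capacity_tb = 2000   # e.g. a growing research archive (assumed)
        needed_cores = 128          # compute the workload actually uses (assumed)

        nodes = nodes_needed(needed_capacity_tb)
        bought_cores = nodes * NODE_CORES
        print(f"nodes bought: {nodes}")
        print(f"cores bought: {bought_cores}, cores needed: {needed_cores}, "
              f"idle: {bought_cores - needed_cores}")

    In this toy case, capacity-driven buying leaves roughly four-fifths of the purchased compute idle, the kind of mismatch the genomics example describes.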

    Be careful not to create a converged storage silo.

    The simplicity of converged storage solutions makes them very attractive as a means to satisfy today’s requirements for easier, lower-cost administration. Again, much of this ease is derived from the vendor delivering a system that is well designed to solve a particular application-centric problem. The challenge is that you have many application-centric storage needs. While these applications have very different performance requirements, they may still share data, such as your transaction storage and your analytics system. Deploying a converged solution for one of these needs may dramatically increase your data movement cost, as you suddenly need to migrate data out of the converged solution to your other application. Because you don’t want to create a storage island, consider how you will need to move data from this storage into your data pool. You may decide the volume and difficulty of data movement means the converged solution isn’t as easy as it initially seems – shared storage is more complicated to manage, but it is also able to provide more universal data access.
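
    One way to test how “easy” the silo really is: estimate the migration cost before you buy. A minimal sketch, assuming a hypothetical data volume and network link (both numbers are ours, purely for illustration):

        # Time to move data out of a converged silo over a shared network link.
        # All figures are assumptions for illustration only.
        data_tb = 100            # data that the other application also needs
        link_gbps = 10           # usable network bandwidth
        efficiency = 0.7         # protocol and contention overhead (assumed)

        seconds = (data_tb * 8e12) / (link_gbps * 1e9 * efficiency)
        print(f"~{seconds / 3600:.0f} hours to migrate {data_tb} TB at {link_gbps} Gb/s")

    If migrations like this happen regularly, the operational savings of the converged box can disappear into data movement.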

    With the expanding number of converged solutions in the market, chances are you will find one that comfortably matches your needs. But if you don’t – be aggressive with your suppliers about delivering easy storage solutions without the need to buy converged. This may give you the lower labor cost you are seeking along with greater flexibility and price performance.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:29p
    Telx Completes Expansion in Two NYC Carrier Hotels

    Telx has announced that it has completed expansions at a pair of New York data centers. The company’s CEO Chris Downie told us last year it was adding capacity in two of the city’s key carrier hotels.

    Telx has added 15,000 square feet of data center space at 32 Avenue of the Americas and 17,000 square feet at 60 Hudson St. The buildings are important connectivity hubs not just for the U.S. but for the global Internet.

    Telx’s NYC1 at 60 Hudson now has 40,000 square feet of data center space available. The provider’s customers who needed to expand in the facility have already started to move in equipment, the company said in a statement.

    The Telx facility at 32 Avenue of the Americas is the company’s third in the New York data center market, dubbed NYC3. Telx exclusively operates the interconnection room in the building, following a 2013 agreement.

    It completed a build-out of colocation capacity in April 2014. Through an agreement establishing a global telecommunications company as a long-term tenant, Telx has increased capacity by adding a new floor. Its total footprint in the building is now 120,000 square feet.

    At 60 Hudson, Telx is taking space with wholesale provider DataGryd, one of the newest players to enter the Manhattan market. DataGryd has four floors in the building. In September, it announced the launch of the facility with Telx as the anchor tenant; Telx had leased about 70,000 square feet of build-to-suit space there.

    In the wider New York Metro, Telx now serves over 870 customers across 700,000 square feet in six data centers. It has a presence at Google-owned 111 8th Avenue. Google, which bought the building for office space, has reportedly not allowed tenants to renew their leases there, including data center providers. Internap is one data center provider that recently moved out because its lease had expired.

    “New York City remains a strategic market for us as we continue to grow our customer base and foster interconnection between enterprises and service providers in the region as well as those across the globe,” Telx CEO Chris Downie said in a statement. “Expansions in NYC1 and NYC3 allow us to increase our connectivity-dense colocation capacity while securing and accelerating additional footprint in North America’s most strategic data center market.”

    5:49p
    Hawaii Data Center AlohaNAP Now Fully Operational

    The newly redesigned AlohaNAP, a data center in Kapolei, Hawaii, is fully operational and accepting tenants. AlohaNAP is a joint venture between fifteenfortyseven Critical Systems Realty and Hawaii Pacific Teleport, an international satellite and fiber-based communications company.

    The new Hawaii data center is located about 2 miles inland and 130 feet above sea level on the Island of Oahu, placing it outside flood and tsunami zones. The data center has 10,000 square feet of turn-key space. The site can accommodate an additional 50,000 square feet of white space. It is adjacent to the Hawaii Pacific Teleport facility.

    The carrier-neutral data center has access to trans-Pacific submarine cable systems, local fiber providers, and a fleet of over 40 satellites. It acts as a meet-me point and facilitates the convergence of fiber and satellite telecommunications.

    “We’re looking forward to introducing AlohaNAP to the PTC community,” Corey Welp, 1547 managing director, said in a statement. “AlohaNAP offers its clients a state-of-the-art facility to support the most demanding business applications and provide a truly secure solution for their IT infrastructure requirements.”

    1547 has expertise in both data center construction and finance. A team of industry veterans founded the company, which plays in newer markets, offering turnkey, build-to-suit, and powered shell data centers. Managing Director Todd Raymond was formerly a C-level exec at Telx. Co-founder Welp previously raised more than $4 billion for investment opportunities in various asset classes. Other co-founders, Jerry Martin and Pat Hines, were at the Martin Group, which has focused on the data center space for the last decade.

    In addition to its Hawaii data center venture, 1547 also partnered with Green House Data for a data center project in Wyoming.

    5:58p
    Oracle Launches Financial Services Cloud

    Looking to further define its version of the modern cloud and push companies to shift venerable, mission-critical enterprise resource planning applications to the cloud, Oracle has introduced the Oracle Financial Services Cloud.

    The announcement came at Oracle’s CloudWorld event in New York last week. The service is aimed at customers in the financial services industry, such as banks, insurance and investment companies, and securities institutions. Initially, only U.S.-based financial services companies are able to subscribe.

    The new service lists in-region hosting as a feature. Over the last decade, Oracle has consolidated its data centers and now operates two primary U.S. data centers, in Utah and Texas. The company has been aggressively expanding its data center presence globally, with recent announcements about data centers in Germany and China.

    The announcement builds upon the cloud applications announcements made last fall at Oracle OpenWorld. The finance cloud joins Oracle’s Industry Applications cloud portfolio.

    Addressing the primary concerns that businesses have about security, privacy, compliance, and regulatory requirements, Oracle maintains that the new Financial Services Cloud features data privacy, compliance services, enterprise-level support, and alignment with FINRA (Financial Industry Regulatory Authority) rules.

    The company said the cloud will offer complete integration with its ERP Cloud, Human Capital Management Cloud, and Customer Experience Cloud, as a part of its promise for an end-to-end solution.

    Oracle continues to bet big on the cloud market. Its CTO Larry Ellison said he expects the company to bring in more than $1 billion in new annual Software-as-a-Service and Platform-as-a-Service subscription revenue in the next fiscal year.

    7:42p
    Datapipe Buys Big Data Multi-Cloud IaaS Provider GoGrid


    This article originally appeared at The WHIR

    Datapipe, a managed service provider specializing in cloud solutions, has acquired GoGrid, a provider of multi-cloud big data Infrastructure-as-a-Service, for an undisclosed amount.

    The acquisition of GoGrid will help Datapipe advance its strategy around developing security, integration and management features across multiple cloud platforms such as VMware, AWS and its own Stratosphere private cloud.

    “GoGrid has made it easy for companies to stand up big data solutions quickly,” Datapipe CEO Robb Allen said in a statement. “Datapipe customers will achieve significant value from the speed at which we can now create new big data projects in the cloud. This acquisition advances Datapipe’s strategy to help our enterprise clients architect, deploy and manage multi-cloud hybrid IT solutions.”

    Datapipe, which already has data centers in Shanghai, Iceland, London, Singapore, and eight US locations, will also be growing its data center footprint with the addition of new GoGrid facilities in San Francisco and Amsterdam.

    In 2013, GoGrid turned its attention from general managed cloud and dedicated hosting to focus on big data infrastructure, including clustered test and production environments.

    Last year, GoGrid launched its 1-Button Deploy feature, which simplifies the process of moving big data applications from testing to full-scale production. Later in 2014, it launched its “Managed Services Stack,” which features managed backup, monitoring, and security.

    Datapipe has been growing its service capabilities over the past few years through acquisition. In September 2013, Datapipe bought AWS monitoring and optimization company Newvem, and in August 2014, it purchased managed services provider Layered Tech.

    As Gigaom notes, this is the first major cloud deal of the year, and is part of the cloud consolidation trend which has included, for instance, Cisco buying Metacloud, HP buying Eucalyptus, EMC buying Cloudscaling, and IBM buying SoftLayer.

    And while cloud companies such as Rackspace, CloudSigma, DigitalOcean, and Joyent continue to operate independently, there is speculation that Oracle, SAP, and other major IT companies might be on the lookout for cloud acquisitions.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/datapipe-buys-big-data-multi-cloud-iaas-provider-gogrid

    8:26p
    Red Hat’s OpenStack Ambitions Move it Beyond its Linux OS Core

    Red Hat’s OpenStack focus continues to move the company beyond its popular distribution of the Linux operating system, and investors are starting to notice. The company’s stock was up around 25 percent last year on the heels of several moves expanding its position in the cloud ecosystem.

    Its bigger vision and strategy is the open hybrid cloud. Red Hat provides the common management, common storage, and middleware that enable workloads to be built across footprints in heterogeneous environments. It helps an enterprise with its hybrid strategy by helping put the right workloads in the right place. In the company’s view, open source is the way to make everything work together and enable a true hybrid cloud.

    “One of the things we’ve recognized and have been building for, is you want your infrastructure to support whatever workloads you have,” Tim Yeaton, Red Hat senior vice president of infrastructure, said. “You want to be able to support them concurrently and have that commonality. We provide common management, with the ability to define workloads, provision to the right platforms, and manage them regardless. We’ve built our strategy with this in mind.”

    This message is resonating with enterprises, which are growing increasingly comfortable with open source, and with OpenStack in particular. Acceptance of open source and the maturity of OpenStack put Red Hat in a good position. The company just started its fourth fiscal quarter, and, according to Yeaton, it has already exceeded aggressive goals for new customers and new proof-of-concept OpenStack deployments.

    “If we were doing this as a proprietary company, it would be a path to lock-in,” said Yeaton. “Every single thing we do is all open source.”

    Not a Fan of Proprietary OpenStack Middleware

    He compared what’s occurring in the cloud world to the Unix and Linux worlds. Unix would be akin to “open core,” meaning only a small part of it was open and vendors built proprietary pieces on top, while Linux is fully open source.

    “If you look at how Unix and Linux played out … Unix wasn’t leveraging the community effects, so the value-add was fractured. This didn’t happen with Linux, because it was community-based.”

    OpenStack is currently facing the same learning curve. The current trend is vendors adding proprietary middleware to OpenStack and calling it their own, treating OpenStack as “open core” in a sense. Yeaton believes this is the wrong way. “You can’t let OpenStack fracture like what happened with Unix,” he said. “We’re strong advocates of the open approach; upstream is about community and innovation.”

    Cantor Fitzgerald analyst Brian White recently noted that both Red Hat’s core Linux business and its cloud initiatives are doing well. Many companies are coming to Red Hat while switching from older Unix-based systems, he recently told Investor’s Business Daily. However, its position in next-generation data centers and in the cloud market in general is where the potential lies.

    Red Hat has made several OpenStack-related moves over the past year, maintaining a leadership position in the ecosystem and focusing on running and managing the big picture.

    Gaining OpenStack Talent via Acquisitions

    Red Hat made a pair of acquisitions that boosted its cloud efforts last year. It acquired storage software company Inktank for $175 million in April and the French OpenStack cloud firm eNovance in June.

    Inktank is a firm specializing in Ceph, the open source software-defined storage system it developed. Ceph runs on commodity hardware and is a potential alternative to OpenStack’s Swift. Red Hat acquired Gluster, another open source cloud storage offering that runs on commodity hardware, in 2011.

    The acquisition of Inktank gave Red Hat top contributors to the OpenStack project. The eNovance deal deepened the talent pool further. eNovance helps service providers and enterprises build OpenStack clouds.

    “In the case of eNovance, we acquired a team that was an expert in helping customers start on this journey,” said Yeaton. “Many customers can get as far as saying, ‘I can see the value, but how do I start?’ eNovance increased our ability to start them on their OpenStack journey.”

    Playing Nice With Others

    The message at Red Hat’s March 2014 summit was cloud integration. The company can provide the unifying fabric and tools for heterogeneous IT environments. To this end, it entered into several partnerships and completed multiple integrations in 2014.

    It launched an integrated infrastructure solution for OpenStack with Cisco and a DevOps starter kit with Dell. The DevOps starter kit combines Red Hat’s OpenShift Enterprise Platform-as-a-Service, Dell’s PowerEdge R420 server, and partner services. The Cisco integration combines Cisco Unified Computing servers, Application Centric Infrastructure (Cisco’s proprietary software-defined networking technology), and Red Hat Enterprise Linux (RHEL).

    Red Hat is keeping up with the Big Data analytics space. It has partnered with enterprise Hadoop leaders Hortonworks and Cloudera for Hadoop on OpenStack.

    The company also made RHEL available on several big clouds, including Google Compute Engine. It introduced test drives on Amazon Web Services, so users could explore ready-made solutions built on Red Hat technologies. The company has made it easy to migrate subscriptions to Red Hat certified public clouds.

    The company added a marketplace in support of its OpenShift enterprise Platform-as-a-Service, connecting the OpenShift partner ecosystem directly to OpenShift customers.

    Keeping Abreast of Container Trends

    Last year also saw Red Hat support Google’s Kubernetes project, an open source application container cluster management technology. Red Hat has been a long-time supporter of container technology, but last year’s skyrocketing rise of Docker brought the technology into the spotlight.

    Red Hat launched several Linux container innovations in April, including OpenShift Origin community project GearD. GearD enables rapid application development, continuous integration, delivery, and deployment of application code to containerized application environments.

    Open source containers help to separate infrastructure services from the application, allowing portability across not only different clouds, but also physical and virtual environments.

    “We’re big advocates and have been building enablement in,” said Yeaton. “Containers are strong for modernizing IT.”

    Correction: The image that originally accompanied this story has been replaced. It pictured and named Brian Stevens as Red Hat’s CTO. Stevens left Red Hat last September to work as vice president of cloud platforms at Google. Data Center Knowledge regrets the error.

    11:26p
    Amazon Web Services Buys Wind Power for Data Centers

    Amazon Web Services has signed a long-term power purchase agreement with the developer of a wind farm in Benton County, Indiana, the cloud services provider announced Tuesday. The agreement will help the world’s largest provider of cloud infrastructure services make good on the commitment it made this past November to power its operations, including a fleet of massive data centers, entirely with renewable energy.

    AWS has been one of the big Internet businesses Greenpeace has been calling out for powering their data centers with dirty energy. AWS and others, companies like Google, Facebook, Microsoft, and Twitter, are large customers that utilities covet, so they have the bargaining power to pressure utilities to clean up their fuel mix, Greenpeace has argued.

    Google has been using PPAs, among other instruments, to make its operations carbon neutral. Facebook and Microsoft started doing the same more recently. Microsoft’s biggest PPA to date was for 175 megawatts with a wind farm developer near Chicago, announced last July. Facebook’s big wind investment was in a wind farm project in Iowa, announced in November 2013.

    The way it works is that the data center operator agrees to a long-term, high-capacity PPA with the developer, enabling the developer to finance the wind farm’s construction. Once the wind farm comes online, it feeds power into the local grid that also feeds the data center, and the data center operator applies Renewable Energy Credits to the energy its facility consumes.

    The future 150-megawatt wind farm in Indiana, which will actually be called the Amazon Web Services Wind Farm, is scheduled to come online early next year. It will generate about 500,000 megawatt-hours annually, according to AWS, which has a 13-year PPA with Pattern Energy Group, the project’s developer.
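
    As a rough sanity check on those figures (our own arithmetic, not from AWS or Pattern Energy), the announced annual output implies a capacity factor in the range typical of a good onshore wind site:

        # Implied capacity factor from the announced numbers.
        nameplate_mw = 150        # nameplate capacity, per the announcement
        annual_mwh = 500_000      # expected annual generation, per AWS
        hours_per_year = 8760

        capacity_factor = annual_mwh / (nameplate_mw * hours_per_year)
        print(f"implied capacity factor: {capacity_factor:.0%}")  # roughly 38%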

    Jerry Hunter, vice president of infrastructure at AWS, said the farm will feed clean energy into a grid that feeds a large number of Amazon data centers. “This PPA helps to increase the renewable energy used to power our infrastructure in the U.S. and is one of many sustainability activities and renewable energy projects for powering our datacenters that we currently have in the works,” he said in a statement.

    Greenpeace has criticized Amazon for not providing specifics about the energy mix that powers its data centers and also about the tactics it planned to use to achieve its goal of fully renewable operations. The environmentalist organization welcomed Tuesday’s announcement, but pointed out that it will take much more to achieve the renewable-energy goal.

    The bulk of Amazon data center capacity is in Virginia, served by Dominion Power, a utility whose fuel mix includes only 2 percent renewables, the rest coming from coal, nuclear, and gas-powered plants, according to Greenpeace. The company is expanding its capacity in Virginia substantially. A developer is building a massive data center for it in Ashburn (the one that caught on fire earlier this month). Dominion has also filed plans to build a high-capacity power transmission line for a future data center in Haymarket, whose tenant will reportedly be Amazon. Dominion’s plans have run into strong opposition from a group of Haymarket residents.

    “The massive size and rapid growth of Amazon Web Services’ infrastructure makes clear that Amazon’s Indiana wind farm ought to be the first of many similar investments,” Greenpeace senior energy campaigner David Pomerantz said in a statement.

    AWS claims three of its availability regions are carbon neutral today: U.S. West in Oregon, E.U. in Frankfurt, and AWS GovCloud. The company has not disclosed the location of the data centers that host its availability region for government customers.
