Data Center Knowledge | News and analysis for the data center industry
Thursday, July 10th, 2014
| 12:30p |
Actifio Unshackles Stranded Storage Investment
While “faster” and “bigger” are at the top of the list of the sexiest storage topics, copy data management is a silent but potentially revolutionary force in storage.
The business-as-usual way of making and storing data copies has resulted in a lot of sunk investment for IT. According to a startup called Actifio, unnecessarily stored data consumes 60 percent of disk capacity, drives 65 percent of storage software spending and 85 percent of storage hardware spending.
Actifio’s founders decided to do something about it by creating technology that tames the sprawl, and so far they have been successful. The firm has raised more than $207 million since its inception five years ago and has a valuation in the $1.1 billion range.
It has a number of enterprise and service provider customers, and there have been rumors of an Initial Public Offering in the making, but its CEO Ash Ashutosh wouldn’t comment on that.
One golden copy
Called “copy data virtualization,” Actifio’s technology removes unnecessary copies. Its Virtual Data Pipeline stores a single “golden copy,” captured at block level, in native format, according to the customer’s SLA.
Actifio moves the physical copy once and stores it anywhere, then serves an unlimited number of virtual copies for instant access and protection. This is different from deduplication, a compression technology that simply finds and removes duplicate copies of data. Beyond taming copy sprawl, Actifio’s virtual copies can be used to increase availability or in disaster recovery.
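To make the distinction concrete, the sketch below shows the general copy-on-write idea behind serving many virtual copies from one physical “golden copy”: each virtual copy stores only the blocks it has changed and reads everything else from the shared copy. The class and method names are invented for illustration and do not describe Actifio’s actual implementation.

```python
# Conceptual sketch only: many virtual copies served from one golden copy
# via copy-on-write block references. Names are invented for illustration.

class GoldenCopy:
    """Single physical copy of the data, stored as numbered blocks."""
    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))

class VirtualCopy:
    """A lightweight view: stores only blocks that diverge from the golden copy."""
    def __init__(self, golden):
        self.golden = golden
        self.overrides = {}  # block_id -> modified block data

    def read(self, block_id):
        # Serve changed blocks locally, everything else from the golden copy.
        return self.overrides.get(block_id, self.golden.blocks[block_id])

    def write(self, block_id, data):
        # Copy-on-write: only the delta is stored, not another full physical copy.
        self.overrides[block_id] = data

golden = GoldenCopy([b"block0", b"block1", b"block2"])
dev_copy = VirtualCopy(golden)       # e.g. a copy handed to a dev/test team
dev_copy.write(1, b"patched")
print(dev_copy.read(0), dev_copy.read(1))   # b'block0' b'patched'
```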
“Organizations spend a significant amount of money in making copies of data, and on their data needs,” Ashutosh said. “In the old model, every time [you needed the data], you’d make a copy of it, then run the application on top of it.”
Customers to prove it
The company touts more than 400 customers, many of them very large companies whose names it could not disclose. One customer, KKR Financial Services, replaced several disparate products with Actifio’s technology.
There are also dozens of Actifio-powered clouds, including IBM’s and NEC’s. C7 Data Centers is an example of a data center provider using Actifio.
 One of Actifio’s products is Actifio CDS, designed for large-scale deployments in heterogeneous data center environments.
‘Underbelly of the storage market’
Beyond cost considerations, the company says it increases agility. “There’s a huge change in the speed along with a lot of cost taken out in the backend,” said Ashutosh.
“This [copy data] market has been the underbelly of the storage market for a long time. There has been a revolution in the storage space — flash arrays, etc. What we decided to do is examine what exactly is being stored. We found that a lot of those arrays are storing unnecessary copies of data.”
In addition to cost and speed considerations, there’s a strong case for using Actifio for application mobility in the cloud. It can determine which applications can move and which cannot, and it is completely independent of the end user’s infrastructure. Actifio can make sure data never crosses country borders, a concern especially acute since the Snowden revelations.
Actifio Sky virtual appliance
The company’s first foray into the virtual appliance market was Actifio Sky, released in May. It allows users to protect centrally backed-up data locally at remote sites for faster restore times. In the event of a disaster, users can restore data off the central cluster to have things up and running again fairly quickly.
Actifio Sky’s capabilities include:
- Easy Download and Deploy with a consumer-grade UI
- Application-centric, SLA-based management for remote data protection and disaster recovery for applications
- Rapid recovery from remote site failure, centrally, or into the cloud
- The ability to vault data directly to public cloud storage services, including Amazon Web Services
- Patented Dedup-Async Replication for network-optimized data replication and space efficiency through Direct-to-Dedupe storage
| 12:50p |
Datapipe Offers to Take On the Risk of Running on AWS
Continuing to build out its portfolio of services around Amazon’s cloud infrastructure offerings, Silicon Valley hosting company Datapipe has designed a service that essentially enables it to transfer the risk associated with using the public cloud from the customer to itself.
The provider’s new contracting and liability arrangements mitigate risk associated with growing a customer’s deployment on Amazon Web Services. The idea is to let the customer rely on San Jose-based Datapipe’s expertise in security on AWS.
Security remains a major concern for enterprises whose IT leaders consider using public cloud services. Taking some of the Amazon cloud security risk off the customers’ shoulders may help Datapipe attract more enterprise clients.
Datapipe has also expanded the number of locations where its data center customers can plug into Amazon’s cloud privately, circumventing the public Internet. It has added the option Amazon calls “Direct Connect” in its Seattle and London data centers, which join its Singapore, San Jose and Ashburn, Virginia, sites.
The company also has data centers in New Jersey, Iceland, Hong Kong and mainland China.
Datapipe specializes in building integrated infrastructure solutions for enterprise clients that can consist of hosted private cloud, hybrid cloud, managed public cloud, traditional hosting and any number of the slew of managed services it offers.
The most recent portfolio addition was the AWS Cloud Analytics service. It has recently expanded those analytics capabilities, adding the ability to identify Key Performance Indicators and action plans customers can use to quickly optimize cost-utilization and governance in the AWS cloud.
“The rapid growth in cloud adoption is changing how IT organizations optimize their internal and external resources,” said Robb Allen, CEO of Datapipe. “By expanding our managed cloud solutions for AWS, we are addressing the key barriers to public cloud adoption and enhancing the value of AWS for our enterprise clients.”
| 2:00p |
How to Unleash Oracle Performance with Flash Storage
Your database infrastructure is absolutely critical to a variety of resources within your data center. Applications, data sets, virtual machines, and users rely on rapid storage to deliver their needed resources quickly.
When it comes to databases, having the right type of storage is absolutely key. Enterprise organizations demand increasing performance for the business-critical applications running on Oracle databases, and need to consider flash storage systems to provide these performance improvements. With superior performance to hard disks, flash storage is proving its value in the enterprise.
In this whitepaper, learn how Hitachi’s flash-based storage systems deliver the performance of flash while addressing some of the durability and reliability challenges that can be unique to flash storage. With a reliable flash storage platform from Hitachi, you can unleash Oracle performance for your business-critical applications, delivering better service throughout the enterprise.
The company recently evaluated the Hitachi Unified Storage VM (HUS VM) all-flash array, and found that HUS VM provides superior performance for Oracle environments. And because the HUS VM is built with the robust Hitachi enterprise storage technology, you get scalable, efficient, manageable and consistent high performance.
Download this paper today to learn how HUS VM all-flash array delivers consistent high performance across a wide range of real-world Oracle database configurations.
Organizations of all sizes and in all industries make increasing performance demands of Oracle environments. However, the current IO gap between server and storage performance has proven to be a formidable challenge to overcome. Businesses need high-performance Oracle storage environments to enable faster decision-making and maximize server utilization and data center efficiency.
Remember, as the gap between processor and storage performance continues to grow, enterprises need a storage solution that helps close it: one that enables high performance for business-critical applications and satisfies business demands.
In working with a Hitachi solution, you’re better equipped to handle the requests of the modern business and align them with your IT capabilities.
| 5:24p |
IBM, Microsoft, Red Hat Join Google’s Open Source Container Management Project Kubernetes
Kubernetes, the open source container management project announced by Google in June, is seeing strong support. IBM, Red Hat and Microsoft, as well as well-hyped startups CoreOS and Mesosphere, are now supporting and contributing to Kubernetes. A company called SaltStack, which is building a container automation framework for cross-cloud portability, has also come on board, as has Docker, which has built the full container stack as a product.
With Kubernetes, Google draws on its experience in efficiently building and deploying powerful apps like Search, Gmail and Maps over its massive cloud infrastructure. Kubernetes was once proprietary, but now several big players are participating and evolving it alongside Docker’s container management tech. Docker recently released version 1.0 and has brought the container buzz to the front of the industry.
Containers are lightweight and easy to move around. They comprise just the application and its dependencies, whereas a virtual machine has to store a guest operating system in addition to binaries and libraries. Kubernetes is designed for Docker container management at scale. What’s notable are the names of the companies now contributing to it: competitors are holding hands and banking on the same project. However, there’s always the “too many cooks spoil the broth” problem to watch out for, as agendas might not necessarily line up.
The hope for the Kubernetes project is to help users easily launch Docker containers onto a cluster of servers through better management. It’s comparable to Docker’s own project, Libswarm, which Docker says it is keeping aligned with Kubernetes.
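To give a feel for what “management at scale” means in practice, here is a toy placement sketch: containers declare CPU and memory needs, and a scheduler picks a node with enough free capacity. This is a conceptual illustration of the kind of decision a cluster manager like Kubernetes automates, not Kubernetes code; all names and numbers below are invented.

```python
# Conceptual sketch only: a toy scheduler that places containers onto the
# cluster node with the most spare CPU. Not Kubernetes code; names invented.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_free: float   # cores available
    mem_free: int     # MB available

@dataclass
class Container:
    name: str
    cpu: float
    mem: int

def schedule(container, nodes):
    """Return the node with the most spare CPU that still fits the container."""
    candidates = [n for n in nodes
                  if n.cpu_free >= container.cpu and n.mem_free >= container.mem]
    if not candidates:
        raise RuntimeError(f"no node can fit {container.name}")
    chosen = max(candidates, key=lambda n: n.cpu_free)
    chosen.cpu_free -= container.cpu   # reserve the capacity on the chosen node
    chosen.mem_free -= container.mem
    return chosen

nodes = [Node("node-a", 4.0, 8192), Node("node-b", 2.0, 4096)]
for c in [Container("web", 1.0, 512), Container("cache", 2.0, 2048)]:
    print(c.name, "->", schedule(c, nodes).name)
```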
IBM is looking to Kubernetes to help customers launch applications across multiple clouds. Red Hat sees it as a way to push hybrid environments and to move easily back and forth between private and public clouds. Microsoft talks about giving customers choice about what they run in its Azure cloud. Google most likely sees it as a way to make its latecomer Compute Engine stand out. CoreOS is making sure its operating system for distributed architectures works with Kubernetes, which is built on etcd, CoreOS’ service discovery tool. Google software engineer Brendan Burns tweeted yesterday about how much he likes etcd.
“Each company brings unique strengths, and together we will ensure that Kubernetes is a strong and open container management framework for any application and in any environment – whether in a private, public or hybrid cloud,” wrote Urs Hölzle, senior vice president for technical infrastructure at Google.
Google made its commitment to open container standards in June when it open sourced Kubernetes. “Everything at Google, from Search to Gmail, is packaged and run in a Linux container,” wrote Eric Brewer, vice president of infrastructure at Google. “Each week we launch more than 2 billion container instances across our global data centers, and the power of containers has enabled both more reliable services and higher, more-efficient scalability. Now we’re taking another step toward making those capabilities available to developers everywhere.”
“Predictable deployments and simple scalability are possible because Docker packages all of a workload’s dependencies with the application,” wrote Hölzle.
This allows for ultimate portability; you can avoid vendor lock-in and run containers in the cloud of your choice. It is just as important that the management framework has the same properties of portability and scalability, and that is what the community will bring to Kubernetes.
| 7:18p |
QTS Enters Chicago Market with Former Sun-Times Plant Acquisition
QTS Realty Trust has closed the acquisition of a massive building in Chicago that used to house a Sun-Times newspaper printing plant. This is the data center developer’s first property in the Chicago market.
The deal falls nicely within the company’s strategy of buying massive properties with robust infrastructure at a discount to convert them into data centers. QTS paid about $18 million for the 317,000 square foot building that sits on 30 acres of land.
Announcement of the deal comes one week after the company announced another major property acquisition. It bought a McGraw Hill Financial data center in East Windsor, New Jersey, for $75 million.
Plans to expand capacity to 37 MW
QTS said the Chicago building can accommodate about 130,000 square feet of raised floor and 24 megawatts of power. The company’s redevelopment plans include expanding its size to accommodate about 215,000 square feet of raised floor and 37 megawatts.
Because prior owners of the facility have already made substantial investment in infrastructure, QTS expects to redevelop it at “below market rates.”
A report that the company was in talks to buy the former Chicago Sun-Times plant surfaced in May. It has attracted interest as a potential data center site from developers since at least 2013, when Madison Partners and JDI Realty were reportedly looking to buy it.
Buy big, buy cheap
QTS went public as a real estate investment trust in October 2013. Its SEC filing that year indicated that it had plans to invest up to $277 million in adding more than 300,000 square feet of data center space – most of it in its massive facilities in Dallas and Richmond, Virginia.
The Richmond property was another example of the developer’s buy-big-buy-cheap expansion strategy. It bought the 1.3 million square foot Qimonda fab plant and a massive adjacent property for $12 million because the owner had gone bankrupt.
The 700,000 square foot Dallas facility was also a former semiconductor manufacturing plant, but the company has not disclosed how much it paid when it bought it in 2013.
In a statement on the Chicago deal, QTS CEO Chad Williams said, “This acquisition strongly supports our growth philosophy.”
A different play in New Jersey
These deals and the most recent Chicago acquisition are different from the McGraw Hill data center purchase announced earlier this month. The building in New Jersey is a fully-fledged data center, and the publishing and financial services giant will continue using it as a tenant.
QTS also entered into a partnership with French IT outsourcing company Atos, which will provide its services to McGraw Hill, while QTS will act as the data center provider.
The developer does plan to expand the building from its current capacity of 12 megawatts to 20 megawatts.
| 8:22p |
Cloudian to Extend Tiering Capabilities to Amazon’s S3 and Glacier
On the heels of a $24 million financing round, Cloudian announced a partnership with Amazon Web Services to expand the capabilities of its hybrid cloud storage platform by adding tiering for Amazon’s cloud storage services S3 and Glacier. Enterprises can use the object storage platform to build scalable storage infrastructure that combines on-premises storage with cloud.
Bringing the two AWS services into the fold helps users leverage agility benefits of public cloud but keep the most sensitive data in their own data centers. Cloudian challenges traditional storage approaches like SAN and NAS and has positioned itself well to benefit from enterprise demand for hybrid cloud solutions.
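For context on the AWS side, age-based tiering between S3 and Glacier can be expressed as a bucket lifecycle rule. The sketch below shows that native mechanism using the boto3 SDK; the bucket name, prefix and threshold are hypothetical examples, and this is not a description of how Cloudian’s own tiering works internally.

```python
# Illustrative sketch, not Cloudian's product: on AWS, age-based tiering from
# S3 into Glacier can be set up as a bucket lifecycle rule. Bucket name,
# prefix and threshold below are hypothetical examples.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-cold-data",
                "Filter": {"Prefix": "cold/"},
                "Status": "Enabled",
                "Transitions": [
                    # After 30 days in S3, move objects under cold/ to Glacier.
                    {"Days": 30, "StorageClass": "GLACIER"}
                ],
            }
        ]
    },
)
```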
The company has attracted customers like Bitcasa, Storage Made Easy, NTT, Nextel and Vodafone, and high-profile partnerships with the likes of Citrix, Apache CloudStack and OpenStack.
Bitcasa is a popular mobile file sharing application on AWS that is expanding its enterprise offerings. “Cloudian’s partnership with Amazon helps us deliver a more full-featured and secure hybrid file sharing with all the necessary control for critical business content,” said Joe Lyons, vice president of business development at Bitcasa.
Storage Made Easy has an enterprise file share and sync solution and tackles sprawl.
“This partnership with Amazon Web Services means that our customers can now not only deploy full-featured, on-premises, Amazon S3-compliant clouds, but easily auto-tier their data to the AWS cloud,” said Paul Turner, chief marketing officer at Cloudian. “Users have the option to leverage AWS infrastructure for long-term bulk storage while keeping their most critical data close at hand.
“This unique hybrid cloud approach ensures nearly unlimited capacity expansion, straightforward data tiering and unfettered use of the vast ecosystem of AWS S3 applications – all behind the user’s firewall.”
Expanding geographic reach
Cloudian recently closed a $24 million financing round with new investors INCJ and Fidelity Growth Partners and existing shareholders, including Intel Capital. The company said it will use the funds to extend its global sales and marketing reach through targeted programs and to amplify market development.
“This substantial investment is strong validation of our unique and leading approach to enterprise on-premises storage and hybrid cloud storage,” said Michael Tso, CEO and co-founder of Cloudian. “The rapid growth of unstructured data is transforming the data storage landscape. With this funding, we will accelerate the deployment of our production-proven storage solutions and revolutionize the cost, scalability and availability models for storing unstructured data in the enterprise.”
| 9:51p |
Microsoft to Expand Azure Data Center Infrastructure in Virginia and Iowa
Citing growing demand for Microsoft’s cloud services (more than 8,000 customers sign up for its Azure cloud every week), Phil Sorgen, the company’s corporate vice president for Worldwide Partner Channel, announced in a blog post that the company will bring two new Azure regions online in Virginia (US East 2) and Iowa (US Central). The additional regions will help the company respond to what it says is a need to double capacity every six to nine months.
There are currently four U.S. Azure regions — US East in Virginia, US West in California, US North Central in Illinois and US South Central in Texas — one in Brazil, two in Europe and four in Asia Pacific. US East 2 would be a second region in Virginia, where the company kicked off a new data center construction project last month.
In advance of its Worldwide Partner Conference next week in Washington, D.C., Microsoft announced a handful of new technologies, services and features for Azure, which Sorgen also detailed in his post.
Growing the Azure cloud
Expanding its partnership with Equinix, Microsoft will add six worldwide locations for ExpressRoute, the service that provides customers with private links to Azure from colocation data centers to circumvent the public Internet for higher security and better performance.
Microsoft will also bring new Infrastructure-as-a-Service functionality to its recently launched Azure Preview Portal, which brings cross-platform cloud technologies, tools and services together in a single environment. Adding the IaaS capabilities will let users manage virtual machines within the portal, as well as deploy SharePoint, which spans multiple virtual machines, with a single click.
The company is also planning to preview Azure Event Hubs, an event broker service that allows processing and data analytics from a number of cloud-connected smart devices.
Microsoft said it will expand the preview of the Azure Government cloud and complement it with a new Dynamics CRM Online U.S. cloud offering. The Azure machine learning service announced last month, which helps build predictive analytics into applications, will also be available for preview.
Hybrid cloud storage arrays
The company also announced the Azure StorSimple 8000 series hybrid storage arrays with new Azure-based capabilities to enable new use cases and centralize data management. The array series seeks to capture rising enterprise demand for hybrid infrastructure solutions.
The new StorSimple 8000 series features expanded access to enterprise data by cloud-resident applications and a single, consolidated management plane. Not limited to just hard drives and SSDs, the 8100 and 8600 arrays use Azure cloud storage as a hybrid cloud tier for automatic capacity expansion and off-site data protection.
To go with the new arrays, there is the Microsoft Azure StorSimple Virtual Appliance, which is an implementation of StorSimple technology running as an Azure virtual machine in the cloud.
A productivity and platform company
In a call-to-action letter to all Microsoft employees this week, CEO Satya Nadella said Microsoft’s focus for fiscal 2015 (which the company recently entered) will be on enabling a mobile-first, cloud-first world, with its core as a productivity and platform company.
| 10:00p |
High Density, Low Budget: Massive Bitcoin Mines Spring Up in Warehouses
This is the first feature in our three-part series on Bitcoin mining infrastructure.
The Bitcoin mining craze is driving the creation of a new breed of computing facilities featuring high-density hardware, low-reliability electrical infrastructure and off-the-shelf enclosures. These “hashing centers” often are built in old warehouses and house servers on shelving from hardware stores like the Home Depot.
It’s a low-tech solution, yet these facilities are supporting compute density equivalent to that seen in the largest Facebook or Google server farms. Some Bitcoin mines cool their servers with liquid instead of air.
The sudden emergence of hashing centers reflects the rapid growth of the Bitcoin network, along with the intense focus on building high-powered infrastructure at the cheapest price point possible. As industrial mining operations scale up, they are improvising a new type of infrastructure, customized for rapid changes in hardware and economics.
“It’s all about finding the right cost of power and finding enough shelving,” said Bryan Ballard, CTO for Netsolus, which has built several dedicated Bitcoin facilities for customers. “In traditional data centers, you’re trying to find the right confluence of fiber, power and bandwidth. Our bandwidth is negligible. We’re looking at old steel mills and other sturdy facilities with good power.”
Ballard, whose company hosts more than 3 megawatts of Bitcoin customers and expects to build an additional 20 megawatts of capacity, said miners are building a different breed of facility than the traditional mission-critical enterprise data center.
“Sometimes you hesitate to call these buildings data centers,” said Patrick McGinn, a product manager with CoolIT Systems. “They’re really powered shells.”
Deploying many megawatts
One of the largest of these Bitcoin mines is rising in northern Sweden, where hardware vendor KnCMiner is packing custom hardware into rows and rows of steel shelving. The company, which makes mining rigs using specialized ASICs (Application Specific Integrated Circuits), has deployed an estimated 5 megawatts of gear and expects to add another 5 megawatts. KnC sees major growth ahead.
“Mining is a billion-dollar industry. Today,” writes KnC Chief Marketing Officer Nanok Bie.
In the U.S., MegaBigPower operates a large Bitcoin hashing center in a former warehouse in central Washington, where it uses Raspberry Pi micro-computers to manage tens of thousands of ASICs, all housed on shelves and cooled with air and household fans. The Raspberry Pi serves as a low-power controller, saving on energy.
Founder Dave Carlson has announced plans to create a franchise network, in which MegaBigPower will provide Bitcoin mining hardware for franchisees who can supply industrial facilities with 1 megawatt to 5 megawatts of power.
Carlson told CoinDesk that the first franchisee, California-based Aquifer, will provide up to 50 megawatts of capacity for MegaBigPower’s custom mining rigs.
Gearing up for more growth
Bitcoin miners are scaling up to manage the next phase of expansion for a network that has experienced exponential growth. At the start of 2014, the Bitcoin network had a total compute power of 10 petahashes per second. This hash rate — a measure of the number of bitcoin calculations that hardware can perform every second – has since soared to more than 135 petahashes per second.
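That hash rate figure translates directly into expected mining output: a facility’s expected daily haul is simply its share of the total network hash rate multiplied by the day’s block rewards. The back-of-the-envelope sketch below assumes the mid-2014 block reward of 25 BTC and the network’s target of roughly one block every ten minutes; the 2 PH/s facility figure is a hypothetical example.

```python
# Back-of-the-envelope sketch: expected daily output for a mining facility,
# given its share of the total network hash rate. The 2 PH/s facility is a
# hypothetical example; 25 BTC was the block reward in mid-2014, and the
# network targets roughly one block every ten minutes (about 144 per day).

NETWORK_HASHRATE_PHS = 135.0    # network total, petahashes per second (per the article)
FACILITY_HASHRATE_PHS = 2.0     # hypothetical facility
BLOCK_REWARD_BTC = 25.0
BLOCKS_PER_DAY = 24 * 60 / 10   # one block roughly every 10 minutes

share = FACILITY_HASHRATE_PHS / NETWORK_HASHRATE_PHS
expected_btc_per_day = share * BLOCKS_PER_DAY * BLOCK_REWARD_BTC
print(f"Expected output: {expected_btc_per_day:.1f} BTC/day "
      f"({share:.1%} of network hash rate)")
```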
That equates to huge demand for power capacity to run the network. “We’re talking gigawatts, not megawatts,” said Ravi Iyengar, CEO of CoinTerra, which makes ASIC hardware and also operates a large mining facility. “There’s a whole ecosystem that will emerge.”
Silicon Valley veteran Jeremy Allaire says the Bitcoin network is in the midst of a major transition as it moves out of garages and basements and into dedicated facilities.
“Mining is undergoing a significant evolution from a hobbyist undertaking to an institutional model, with well-funded companies doing this as a business,” said Allaire, who is now the CEO of Circle, a venture-backed digital currency startup. “We’re at this institutional scale today. We’ll see investments grow to billions of dollars in coming years. We’ll see the mining pools move from being run by hobbyists to being run by large companies.”
Cheap infrastructure = more Bitcoin profit
Those large players have major resources but aren’t keen on spending them on redundant infrastructure, such as the UPS units and diesel generators that provide backup power for enterprise data centers. The ongoing arms race in Bitcoin hardware has made mining a low-margin game, with the key criteria being the cost of power and the ability to cool high-density hardware.
“The Bitcoin miners are not interested in reliability, and are really margin-sensitive,” said Wes Swenson, CEO of C7 Data Centers. “They’re really sensitive to the cost of the colo.”