Data Center Knowledge | News and analysis for the data center industry
Monday, February 9th, 2015
11:00a
Actifio Makes Its Business Resiliency Suite Available as Cloud Service

Copy data protection provider Actifio has introduced a new cloud-based version of its data management suite called Actifio One. Positioned as a “business resiliency cloud,” it delivers the functionality of the company’s appliance as a service.
Actifio helps tame the data sprawl created by copy data: the proliferation of file copies made by various enterprise systems, each operating on its own, such as backup, test and development, disaster recovery, and business continuity.
Actifio CEO Ash Ashutosh believes the cloud version of the software makes it appealing to a wider audience that wants the simplicity of consuming services on demand.
“Most of these folks don’t want to buy, train, install,” he said. “They can now access application data anywhere. It is a big mind-shift for the product.”
Actifio claims it replaces several data protection and management tools. Many mid-sized businesses have a solution in place to back up their physical machines, and another solution to protect their virtual ones. They have a solution to handle snapshots, and another for replication; a point tool for tape, because it’s inexpensive, and a point tool for disk, because it’s fast.
Actifio One is a single integrated system that the company says replaces all of these individual tools. At the center of everything are its copy data virtualization technology and Virtual Data Pipeline.
Copy data virtualization removes unnecessary copies and improves business resiliency by keeping a single “golden copy” safe. The Virtual Data Pipeline stores that copy, captured at the block level in native format, according to the customer’s SLA.
If something goes wrong, it quickly restores the application data and fails back to production, eliminating the traditional slow restore process.
The benefits are freed-up stranded storage investment, reduced backend complexity, and better resiliency.
The cloud service involves a software download, but customers then manage the system through a browser-based portal, including failing back to production when something happens.
“[Actifio provides] the ability to instantly create an extension of the data center,” said Ashutosh. “If something goes bump in the night, data center built somewhere else, or if you want to migrate applications all the while doing the usual backup, you can do that faster at a fraction of the cost on a pay-per-use model. This is the holy grail of what everyone is trying to do.”
Actifio raised $100 million last year at a staggering $1.1 billion valuation. While the cloud version is targeted at smaller-than-typical customers, Ashutosh said the company’s average deal size has gone up.
“Having that infusion of cash built confidence for some of the larger users sitting on the sidelines,” he said. “They say that ‘this is a company I can build around.’ Since then, there’s been significant growth in our average order values. There’s been several multi-million-dollar deals and an average of close to a million.”
Actifio is one of many enterprise tech vendors that have been compelled to release cloud versions of their offerings. It isn’t a matter of sticking something on a cloud and calling it cloud; the shift involved thinking about things a little differently, according to Ashutosh.
“As you make the shift from product to service, the operational aspect comes into the picture. How do you deliver an outcome, a service level? There’s a lot of work on the backend to deliver a service, such as how do you provision a user? Bill a user? How is the day-to-day different? You are in an operations business much more than a technology business.”
1:00p
The Future of Commodity Systems in the Data Center

We’re at a very interesting point in the cloud infrastructure era. The modern data center continues to evolve from physical to virtual, with numerous management stacks being abstracted into the logical, or virtual, layer. Is another data center evolution around the corner? Will new kinds of compute platforms allow for a more open data center model? We’ve already begun to see a shift in the way data centers provide services: a new kind of commodity architecture is making its way into the consumer, cloud provider, and even service provider data center.
Customers are being given far greater choice around what they want deployed and how they want it controlled. With all of this in mind, it’s important to see how commodity server platforms are already making an impact on your cloud architecture.
Bare Metal Cloud and Commodity Servers
Although the conversation has certainly picked up recently, white-box and commodity offerings from a few data center providers are already a reality. In a recent article on DCK we outlined Rackspace’s dedicated servers, which behave like cloud VMs. The offering, called OnMetal, provides cloud servers that are single-tenant, bare-metal systems. You can provision them in minutes via OpenStack, mix and match them with other virtual cloud servers, and customize performance delivery. Basically, you can design your servers around specific workload or application needs, including optimizations for memory, IO, and compute. As far as base images go, Rackspace has CentOS, CoreOS, Debian, Fedora, and Ubuntu available.
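Because OnMetal exposes bare metal through the standard OpenStack compute API, provisioning a server looks much like booting any ordinary cloud VM. Below is a minimal sketch using the openstacksdk Python library; the flavor and image names are illustrative assumptions, not Rackspace’s actual catalog entries.

```python
import openstack

# Connect using credentials from clouds.yaml or OS_* environment variables.
conn = openstack.connect()

# Flavor and image names below are hypothetical; list the real catalog
# first with conn.compute.flavors() and conn.image.images().
server = conn.create_server(
    name="onmetal-demo",
    flavor="onmetal-io1",   # assumed bare-metal flavor name
    image="ubuntu-14.04",   # assumed image name
    key_name="my-keypair",
    wait=True,              # block until the server is ACTIVE
)
print(server.id, server.status)
```

The point of the sketch is that the bare-metal machine is indistinguishable, API-wise, from a virtual one, which is what lets you mix and match the two in a single deployment.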
It’s important to note that Rackspace isn’t alone in this space. Internap has been offering powerful bare-metal servers, and so has SoftLayer, now an IBM company. These servers provide the raw horsepower demanded by processor-intensive and disk IO-intensive workloads. You can configure a server to your exact specifications via a portal or API and deploy it in real time to any SoftLayer data center. With all of that in mind, the amount of bare-metal customization you can do within the SoftLayer cloud is pretty impressive: storage, memory, network uplinks, power supplies, GPUs, and even mass storage arrays can all be customized. You can even get an entire customized physical rack.
Cloud-Ready Platforms as Commodity
Big server vendors have certainly heard the message. Cloud, data center, and service providers are all looking for better ways to control performance, price, and the compute platform. So why not get in the game and help out? Recently, HP and Foxconn formed a joint venture to create a new line of cloud-optimized servers specifically targeting service providers. According to the press release, the new product line will specifically address the compute requirements of the world’s largest service providers by delivering low total cost of ownership (TCO), scale, and services and support. The line will complement HP’s existing ProLiant server portfolio, including Moonshot. The idea is to cut out the software and the bells and whistles while keeping HP support involved. These servers are aimed at large service providers, to help them address the data center challenges of mobile, cloud, and big data.
The cool part about HP’s Cloudline servers is that these are full, rack-scale systems, optimized for the largest cloud data centers and built on open industry standards. Vendors within the enterprise community are offering options as well. Storage solutions from X-IO Technologies focus on absolutely pure performance at 100 percent capacity. They build in high availability and redundancy, but don’t offer snapshotting, dedup, replication, thin provisioning, and a few other software-level storage features. The appliance does, however, carry a five-year warranty.
Of course, there will still be places where this doesn’t work. However, for the large number of organizations moving toward a more logically controlled storage platform, this is very exciting. In some cases the hypervisor or software-defined storage layer can deliver enterprise storage features like encryption, dedup, and more directly from the virtual control layer.
Future Cloud Ecosystem Will Be More Diverse
The growth of cloud computing has also allowed for greater diversity within the data center platform. We now have more hosting options, greater delivery capacities, and more support from powerful systems located all over the world. Adoption of bare metal and commodity systems will certainly continue to grow. Fueled by new concepts around the Internet of Things and mobility, data centers will simply have to support more users carrying a lot more data.
Consider this from a recent Cisco Service Provider forecast: globally, 54 percent of mobile devices will be smart devices by 2018, up from 21 percent in 2013, and the vast majority of mobile data traffic (96 percent) will originate from these smart devices by 2018. As with everything in technology, we will continue to see systems evolve to meet modern demands. Vendors like Cisco, HP, Dell, and others – which serve the more traditional server market – will need to evolve alongside organizations seeking a more “commoditized” approach to data center architecture.
As modern organizations take on new challenges around cloud and content delivery, more options will make the design and architecture process a bit easier. In many cases, you simply need raw power, without any software-based bells and whistles. This is becoming more and more the case as software-defined solutions and virtualization help abstract the logical layer from the physical platform. We can now control resources, route traffic, and manage users from the hypervisor and the cloud. This allows the underlying hardware to focus solely on resource delivery, leaving the management layer elsewhere.
4:30p
Understanding the Business Benefits of Colocation

Rowland Kinch has led Custodian Data Centres since its inception. He not only oversees the corporate aims, but works with the technical director and his team to ensure that the business strategy supports the very best technical requirements.
As more and more companies shift from individual servers to networked systems, they are realizing that the original benefit of running their own server room is being outweighed by the advantages of a colocation solution. Many companies do not realize that they can save money and gain greater resilience in a data center.
Maximizing Business Potential
Reductions in operational expenditure and the ability to focus your IT team on your core business mean that data centers offer organizations the ability to maximize the potential within their businesses. Do businesses have a team available 24/7/365 to reboot servers when they fail at 3 a.m.? Colocation companies specialize in data center and network services so businesses don’t have to.
For financial directors and IT directors, colocation provides the perfect win-win scenario, providing cost savings and delivering state-of-the-art infrastructure. When comparing the capabilities of a standard server room to a colocation solution, an assessment of the power alone demonstrates the gap between in-house solutions and utilizing the expertise of a specialist.
While many in-house server rooms have access to power, and may well have air conditioning and battery back-ups, such a system does not fully protect infrastructure. Organizations need to consider whether their power solution also includes diverse power feeds and distribution paths, with dual generator systems that can be re-fuelled while in operation, as well as onsite fuel reserves. Do they have diverse cooling systems, with UPS support in place? Who is monitoring their power and battery levels 24/7? Do they have a 100 percent uptime solution?
Connected Globally, Quickly, Securely
When it comes to connectivity, colocation means a business is connected globally, quickly, and securely. We find that many companies with onsite server rooms do not have access to a resilient Internet connection, nor do they have dedicated personnel monitoring traffic flow to ensure they always remain online.
Colocation enables organizations to benefit from faster networking and resilient connectivity at a fairly low price. Delivering 100 Mbps of bandwidth can be hard at an office location, and trying to create a redundant solution is often financially unviable. Data centers are connected to multiple transit providers and have large bandwidth pipes, meaning that businesses often get better service for less cost.
Sustaining Your Infrastructure
With these considerations in mind, some organizations start to look to cloud solutions rather than colocation. However, cloud does not give organizations a fully auditable system or full control over their own infrastructure. Colocation also lets businesses avoid mounting cloud storage bills, as it is often cheaper to store information on their own servers.
From the periodic replacement of UPS batteries to the maintenance and testing of UPS systems, the hidden costs of sustaining your infrastructure at optimal levels can be surprising. As part of a standard colocation solution, organizations instantly benefit from high-level security, with ISO 27001-accredited processes, onsite security teams, and infrastructure.
Additionally, data centers have the time, resources, and impetus to continually invest in and research green technologies. This means that businesses can reduce the carbon footprint of their office locations and benefit from continual efficiency-saving research. Companies that move their servers out of in-house server rooms typically save 90 percent on their own carbon emissions.
Location, Location, Location
Choosing a colocation provider away from a city or data center hub – but with optimal connectivity options to the capital, Europe, and further afield – means having the advantages of a central data center with the added benefits of attractive power capabilities and the security of being away from centrally targeted terrorist activity. Out-of-town colocation providers allow businesses to take full advantage of the capital’s infrastructure without the premium costs associated with it.
A colocation solution provides companies with a variety of opportunities: exceptional SLAs and data secured off-site give organizations added levels of risk management and the chance to invest in better equipment and state-of-the-art servers. It can also give IT teams the opportunity to explore options such as virtualization and condense the number of racks and servers required.
Colocation providers are able to meet business requirements at a lower cost than if the service were kept in-house. Data centers and colocation providers can have businesses up and running within hours and provide the flexibility to grow alongside your organization. Colocation space, power, bandwidth, and connection speeds can all be increased as required, ensuring that colocation clients of all sizes can be catered to.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
5:35p
Nasdaq to Relocate Trading Data Center in Sweden

The NASDAQ OMX Group will move its primary trading data center serving the Nordic and Baltic regions from Lunda to Upplands Väsby. Both towns lie within 20 miles of Stockholm, to its north.
The new data center is purpose-built for Nasdaq’s needs within a new facility by DigiPlex, a U.K.-based data center provider. The DigiPlex facility, still under development, has access to 20 megawatts of power and can accommodate 65,000 square feet of data center space, with room for further expansion, according to the company’s website.
Nasdaq spokesman Richard Gaudy said via email that there were many reasons for the move, “one reason being [that the] contract expires with our existing data center provider, and we will be able to offer better services to our clients at the new data center.”
He declined to say how much space or power capacity Nasdaq was taking at the DigiPlex site.
Besides hosting matching engines for the Nasdaq stock exchange, the company’s trading data centers are used to provide colocation services to companies in the electronic, or high-frequency, trading space. These customers pay a premium for placing their servers in physical proximity to the matching engines, which gives them a sub-millisecond latency advantage over competitors located in different data centers.
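The value of that proximity comes straight from physics: light in fiber travels at roughly two-thirds of its vacuum speed, about 200 kilometers per millisecond, so every kilometer of cable adds on the order of 5 microseconds of delay each way. A quick back-of-the-envelope sketch in Python (the distances are illustrative, not actual Nasdaq figures):

```python
# Light in fiber propagates at ~2/3 c, i.e. ~200,000 km/s,
# which works out to ~5 microseconds of delay per kilometer, one way.
US_PER_KM = 5.0

def one_way_us(km: float) -> float:
    """Propagation delay only; ignores switching and serialization."""
    return km * US_PER_KM

in_building = one_way_us(0.1)   # colocated near the matching engine
across_town = one_way_us(30)    # a hypothetical site ~30 km away

print(f"in building: {in_building:.1f} us one way")   # ~0.5 us
print(f"across town: {across_town:.1f} us one way")   # ~150 us
# A ~150 us one-way gap (~300 us round trip) is decisive in HFT.
```

Even before switching and serialization overheads, a server a few dozen kilometers away concedes hundreds of microseconds per round trip, which is exactly the sub-millisecond edge described above.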
Nasdaq offers a variety of services besides pure colocation. This is characteristic of modern stock exchange operators, which in recent years have also become technology services companies.
Like its peers – companies such as CME Group (operator of the Chicago Mercantile Exchange) and Intercontinental Exchange (owner of the New York Stock Exchange, ICE, and Liffe exchanges, among others) – the company leverages the value of access to its infrastructure by providing various services around it. It can help with network topology decisions, private links to liquidity markets, remote hands services, managed hosting, disaster recovery, and even cold storage.
In addition to its own infrastructure, Nasdaq provides services hosted by Amazon Web Services.
In Europe, Nasdaq has points of presence at Equinix and Interxion data centers in London and in an Equinix facility in Frankfurt. Customers in the Sweden data center have network access to those locations as well.
Stateside, Nasdaq’s primary data center is located in Carteret, New Jersey.
6:35p
Stock Analysts: 2015 Looks Bright for Data Center REITs

Updated with a statement by Equinix regarding its REIT conversion.
Data center real estate investment trusts (REITs) have emerged as a stock category over the past several years and have now earned a reputation as one of the best kinds of REITs to invest in.
Barron’s, a prestigious newspaper that covers the stock market, recently published a report by analysts from the investment banking firm MLV & Co. that forecast a solid 2015 for data center REIT stocks.
There are currently five publicly traded data center REITs in the U.S., with one more (Equinix) awaiting regulatory approval. The Equinix board approved the conversion in December.
Although Equinix started operating as a REIT on January 1, it has not yet received an official private letter ruling (PLR) of approval. “Based on existing legal precedent, opinions of counsel, and the fact that many other data center companies currently operate as REITs, Equinix anticipates receiving a favorable PLR,” a company spokesman said in an email.
Once Equinix converts, it will become the largest data center REIT. Until then, that spot belongs to San Francisco-based Digital Realty, which became the world’s first data center REIT when it was formed and went public in 2004.
The category “lost favor” around mid-2013, according to a Wall Street Journal article (subscription) published then. Analysts were concerned with rising capital expenses and declining rents. There were also concerns about Internet giants, such as Amazon, Google, and Facebook, moving servers out of these providers’ facilities and into their own.
Those worries seem to have subsided: the five publicly traded data center REITs delivered healthy returns over the past 12 months and posted better growth rates than the REIT category as a whole.
Rents increased in 2014, MLV analysts wrote, according to Benzinga. Supply and demand were in balance, and the vacancy rate at the end of the year was “a healthy 9.0 percent.”
The analysts issued buy ratings for CoreSite and QTS Realty, the former because of its opportunity for growth, and the latter because of its knack for achieving high yields by finding infrastructure-rich properties it can buy at a discount and converting them to data center space.
The other three REITs – CyrusOne, Digital Realty, and DuPont Fabros Technology – received hold ratings from MLV.
The analysts are concerned about falling oil prices and their effect on demand for CyrusOne, which relies on the energy sector for 30 percent of its revenue.
While acknowledging that Digital Realty is on the right path with its new focus on build-to-suit development, MLV analysts said they did not see “enough earnings growth.”
DuPont Fabros received its hold rating because its top tenants reportedly aren’t growing with the company as quickly as they used to. MLV was also concerned that the company’s sales force wasn’t large enough to close new customers.
DuPont Fabros announced last week the appointment of former NTT executive Christopher Eldredge as its new president and CEO.
8:07p
Snappy Ubuntu Core for Raspberry Pi 2 Could Help Further IoT, Web Application Delivery
This article originally appeared at The WHIR
Canonical, the company behind the Ubuntu operating system, has collaborated with the Raspberry Pi Foundation to bring its lightweight Snappy Ubuntu Core operating system to the Raspberry Pi 2, a low-cost computer roughly the size of a credit card.
Up to six times faster than its predecessors, the latest version of the Raspberry Pi features a 900MHz quad-core ARM Cortex-A7 processor and 1GB of RAM. The announcement this week of Snappy Ubuntu Core support means the Raspberry Pi 2 can now run a variety of operating systems, including PIDORA (a “Fedora Remix”), OpenELEC, OSMC, Debian Jessie, and the default Raspbian OS.
Announced late last year, “Snappy” Ubuntu Core takes a transactional approach to software that is fast and reliable, and it also features rigorous application isolation. The new software allows “Snappy Apps” and the Ubuntu Core itself to be upgraded atomically and rolled back in the event of an error.
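The atomic upgrade-and-rollback idea is easy to picture with a toy A/B sketch: stage each version in its own directory and flip a symlink to switch between them, so the system always points at one complete version or the other. The Python below only illustrates that general pattern under those assumptions; it is not how Snappy itself is implemented.

```python
import os

def activate(version_dir: str, link: str = "current") -> None:
    """Atomically repoint the 'current' symlink at a fully staged version."""
    tmp = link + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(version_dir, tmp)
    os.replace(tmp, link)  # rename is atomic on POSIX: never a half-updated state

# Stage v2 alongside the running v1, flip to upgrade, flip back to roll back.
os.makedirs("app-v1", exist_ok=True)
os.makedirs("app-v2", exist_ok=True)
activate("app-v1")  # initial version
activate("app-v2")  # transactional upgrade
activate("app-v1")  # rollback after an error; v1 was never touched
```

Because the switch is a single rename, a crash mid-upgrade leaves the system running the old, intact version rather than a broken mixture of the two.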
“I am incredibly excited to have Snappy Ubuntu Core running on the Raspberry Pi 2,” Raspberry Pi Foundation founder Eben Upton said in a statement. “Ubuntu has been a key missing piece on our operating system support for the Raspberry Pi.”
Canonical’s Internet of Things VP Maarten Ectors said developers can use Snappy Ubuntu Core and Raspberry Pi 2 as an ideal foundation for building Snappy Apps and sharing them on the Snapp Store. And given the Raspberry Pi 2’s size and performance, it is bound to be incorporated into more smart devices that form the IoT.
Many have also considered the Raspberry Pi’s potential as a web server, and a few hosting providers have built services around it.
Austrian hosting and data center provider EDIS offers free Raspberry Pi colocation with a 100Mbit uplink and 100GB of traffic per month.
And in February 2013, Dutch web host PCextreme began giving people the opportunity to colocate their Raspberry Pi in its data center for free, and later began charging a nominal colocation fee – around $41 (€36) per year. This includes a 100 Mbit uplink and 500 GB bandwidth.
PCextreme was able to fit 150 Raspberry Pis into a single rack, but figured that higher-density configurations on custom-designed boards could fit close to 500 Raspberry Pis in a rack.
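As a quick sanity check on those density figures, here is the arithmetic under the assumption of a standard full-height 42U rack:

```python
# Assumed: a standard full-height rack with 42 rack units.
RACK_UNITS = 42

for pis in (150, 500):
    print(f"{pis} Pis in a rack -> {pis / RACK_UNITS:.1f} boards per rack unit")
# 150 -> ~3.6 per U; 500 -> ~11.9 per U
```

Roughly a dozen credit-card-sized boards per rack unit explains why the higher figure depends on purpose-built carrier boards rather than off-the-shelf shelving.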
An ecosystem is developing around the diminutive Raspberry Pi, and as new functionality gets built into this small package, there seems to be a lot of potential baked into these small systems.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/snappy-ubuntu-core-raspberry-pi-2-help-iot-web-application-delivery
8:30p
Web Design Firm and Hosting Reseller Sues GoDaddy for Using “It’s Go Time” Slogan
This article originally appeared at The WHIR
A GoDaddy reseller and web design company is suing GoDaddy for use of the “It’s Go Time” slogan, according to a report Friday by DomainNameWire. The lawsuit was filed on Tuesday in Springfield, Ill., where the plaintiff is based.
According to the report, GoWeb1, owned by plaintiff The Easy Life, LLC, had emailed newsletters using the It’s Go Time slogan to its GoDaddy account manager since at least October 2011, two years before GoDaddy trademarked the phrase. The slogan appears on GoWeb1’s website in a screenshot from Aug. 19, 2012.
GoDaddy applied for the trademark on an intent-to-use basis, and the slogan appeared in its advertising as of September 2013, when the company launched a new SMB-focused brand strategy and TV commercials. The plaintiff first notified GoDaddy of its “prior rights” and the infringing content by letter on Oct. 8, 2013.
The slogan also appeared in GoDaddy’s Super Bowl 2014 commercials, including one featuring Danica Patrick.
Currently, “It’s Go Time” appears on GoWeb1’s website, displayed prominently on the home page and again at the top of the page near its social buttons and support number. Though, as DomainNameWire points out, the company could have “ramped up this use after GoDaddy filed its trademark application.”
“As a result of Plaintiff’s excellent reputation, the services offered in connection with the IT’S GO TIME mark are widely recognized as emanating from Plaintiff and are very well received among purchasers of such services,” the lawsuit said.
“[GoDaddy’s] services significantly overlap the services offered by Plaintiff in connection with the use of the IT’S GO TIME mark.” GoWeb1 has been a GoDaddy reseller since 2006.
The Easy Life, LLC is seeking “equitable relief, compensatory damages, punitive damages, and attorneys’ fees and costs incurred by reason of Defendant’s wrongful conduct.”
In addition to damages, the plaintiff is asking the court to order GoDaddy to recall its advertising and block access in the U.S. to any of its websites that use the It’s Go Time slogan.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/web-design-firm-hosting-reseller-sues-godaddy-using-go-time-slogan
9:40p
Former Engine Yard CEO Dillon Joins Aerospike as Chief Exec

NoSQL database provider Aerospike has named John Dillon its new CEO. He was the first CEO of Salesforce, and his resume also includes stints as CEO of Engine Yard and Hyperion. Dillon also spent time at Oracle during the company’s initial rise to prominence.
Dillon’s background is in engineering, but he’s a go-to-market expert with special expertise in the “last mile” – the period in a tech startup’s life when it finally takes its product to market. Aerospike is a flash-optimized, in-memory, open source NoSQL database company that raised a $20 million Series C round last year.
The name Aerospike comes from a type of rocket nozzle that maintains its output across a wide range of altitudes – a nod to the software’s ability to scale. Aerospike is known for speed, reliability, and scalability. It will typically sit between a web application and a Hadoop instance, capturing big data for analysis.
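To an application, that layer looks like a simple, very fast key-value store. A minimal sketch with the open source Aerospike Python client follows; the host address, namespace, and bin names here are illustrative assumptions, not anything prescribed by Aerospike.

```python
import aerospike

# Connect to a (hypothetical) local Aerospike node.
config = {"hosts": [("127.0.0.1", 3000)]}
client = aerospike.client(config).connect()

# Keys are (namespace, set, user-key) tuples; records hold named bins.
key = ("test", "ad_events", "user:42")
client.put(key, {"last_ad": "creative-17", "impressions": 3})

# Single-record reads like this sit on the hot path of, say, an ad server
# deciding what to show before Hadoop ever sees the data.
_, meta, bins = client.get(key)
print(bins["last_ad"], bins["impressions"])

client.close()
```

The write-then-read round trip above is the pattern an ad platform repeats at enormous volume, which is where the speed claims get tested.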
However, it’s the type of strong technology company that might struggle during the last mile – a company whose DNA is in engineering, not go-to-market. This is why Dillon is a good fit.
“I was often hired to take the product the last mile,” said Dillon. “The last mile is where you often trip. [Aerospike has] great technology, but haven’t spent time and money on the last mile yet.”
He said he spoke with the founders of Aerospike and — equally important — its customers and realized that the right ingredients were all there.
“I like the challenge of taking a company to market,” he said. “My goal was to be very skeptical and vet the technology thoroughly. I like that I don’t have to fix the engineering, or the technology. It’s all there, only it’s waiting for something.”
Dillon dislikes the expression “big data,” arguing that data’s always been big. The difference today is in the types of workloads.
He said Aerospike has created an optimization and utility for handling massive data stores with better reliability and better performance.
At Engine Yard, a Platform-as-a-Service company, Dillon said he saw first-hand that NoSQL is the de facto choice for building out next-gen applications, as relational databases can’t tackle many emerging workloads. He also witnessed the rise of open source: many of the components Engine Yard used were based on open source technologies, and customers were increasingly turning to open source as well.
“Open source just feels more honest,” he said. “Ten or twenty years ago, nobody would dare do anything with open source. Today, if you start up a company, chances are you’ll spin up a stack with all open source.”
Real-time bidding systems are one example of the things Aerospike powers.
“The amount of processing for an ad distribution platform to know who you are, which ad to deliver, the bidding system to create the inventory, and to do it all fast enough so that you’re not annoyed …, when I look at processing at that scale, Aerospike tackles these workloads.”
What makes Aerospike different from all the other NoSQL competitors? Simply put, it keeps working with these workloads regardless of what they throw at it.
“When speaking to customers, the interesting word I kept hearing is ‘more,’” Dillon said. “It isn’t so much that [Aerospike is] faster or cheaper; it’s that these guys can’t afford to have a database go down. They can’t go down on a Black Friday or not deliver ad inventory. They like Aerospike because it handles more in a world where they are continuously growing.”
“The go-to-market is simply to look for places where people need more,” said Dillon. “I think that’s the right equation.”