Data Center Knowledge | News and analysis for the data center industry
Thursday, January 15th, 2015
1:00p
Why Zayo is Paying $675M for Latisys

Acquisitions are part of Zayo Group’s day-to-day business. Having bought 30 or so companies since its founding in 2007, the Louisville, Colorado, company has made acquisition its chief instrument of growth.
“Zayo’s been historically a consolidator of infrastructure assets,” Greg Friedman, vice president of zColo, the group’s data center colocation business, said.
But the $675 million acquisition of Latisys the company announced Wednesday is different from the deals that came before it. It isn’t the biggest transaction Zayo has done (the biggest was its $2.2 billion AboveNet acquisition in 2012), but it is the company’s first deal done primarily to expand its data center business. Past deals that included data centers were about network assets more than anything else. The Latisys acquisition signals Zayo’s serious ambitions in colocation, managed services, and Infrastructure-as-a-Service.
Four of the five markets in which Englewood, Colorado-based Latisys has data centers are new for zColo: Orange County in Southern California, Denver, Northern Virginia, and London. Because the facilities have been plugged into Zayo’s fiber network, the deal gives zColo instant presence in those markets. Chicago is the only overlap.
More Than a Colo Play
Latisys has a total of 185,000 square feet of sellable data center space, 33 megawatts of critical power, and room for expansion in the same locations, but that’s only part of the picture, albeit a big part. Latisys also has a substantial managed services and IaaS business, something zColo’s leadership realized it needed to get into only recently, following Zayo’s acquisition of AtlantaNAP in July.
AtlantaNAP was zColo’s introduction to providing services “up the stack,” beyond the simple space, power, cooling, and connectivity that are the bread and butter of colocation. That’s “something we began to fully appreciate in 2014, after the AtlantaNAP acquisition,” Friedman said.
Latisys already has customers for its IaaS and managed services, a platform, a customer portal, and consulting capabilities – things that would take Zayo a lot of time and resources to build on its own, said Philbert Shih, managing director at Toronto-based Structure Research. “They can get to market faster and also have something a little bit more mature than if they had built from the ground up,” he said.
The two other reasons Zayo leadership thought Latisys would be a good fit were cross-sell opportunities between the two companies’ customer bases and, importantly, the solid growth Latisys has shown. According to Friedman, it has seen a 20 percent annual growth rate. Latisys has had positive cash flow since 2012, the company said in a March 2014 announcement, when it also announced expansion to London, its first venture outside the U.S.
A Rare Catch
Most competitors of Latisys’ caliber have already been acquired, so in a way, it was a rare catch. There aren’t many independent companies out there that have decent scale, more than $100 million in revenue, and a mid-size-to-enterprise customer base, Shih said. After Canada’s Shaw Communications bought ViaWest for $1.2 billion in July of last year, there weren’t many comparable businesses left in the market besides Latisys, Peak 10, and Datapipe, he said.
The Latisys acquisition appears to be a solid strategic move for Zayo. The company instantly gained a major data center business and the capability to provide sophisticated services it hasn’t been able to offer before. It is also a catch-up move, since many of Zayo’s competitors in the network infrastructure business have already made acquisitions along similar lines. Now begins the hard part: competing in the crowded and convoluted world of managed services and IaaS.

4:30p
4 Steps to a Successful Data Center Migration

Art Salazar is the Director of Data Centers & Compliance at Green House Data.
As companies undergo mergers and acquisitions, on-premise facilities continue to age, and consolidation mandates are handed down, the need arises to migrate data center equipment to new facilities.
Whether you colocate or choose the best equipment for a consolidated, company-owned data center, moving IT equipment and workloads between sites is a time-consuming and potentially costly endeavor. These best practices will help you plan for a data center migration.
Step 1: Deciding What to Move
You might purchase new equipment, move just some items, or haul everything to the new site. This is a great time to phase out older equipment and trade in rentals. Equipment migration can be risky—if something breaks on the way, you might not be able to get your system running on the other side. Loaner equipment or a service contract for the migration period can help smooth the transition.
Go back through your contracts with hardware and software providers. Do any need to be terminated? Can they move with you? There might be limitations based on location or compatibility issues. Since you’re tearing everything down and setting it back up, it’s a great time to finally ditch a troublesome vendor, try out a new service, or negotiate a better deal.
You may also need to adapt your equipment to the new space. Is it time to implement aisle containment or pods? Can you design a higher density environment? A migration allows you to explore efficiencies and take a look at what is or isn’t working in your facility design.
Once you know what equipment is moving, decide whether you will move all at once or in chunks. The latter allows you to get elements of the data center running in the new location and begin to transfer systems. Otherwise, rentals or a service contract may be necessary to avoid downtime. If your organization is comfortable with downtime, that may not be a problem.
Decide if you have the resources to move yourself or if you need a service provider. This can be a professional IT company that specializes in data center work or it can be as simple as a regular mover—just make sure they have experience handling IT equipment.
Step 2: Reviewing the Environment and Performing Equipment Inventory
Before anything is unplugged or taken down to the loading dock, pull system logs and inventory documentation. Check to see if everything is there and record any new equipment. Measure utilization to discover live workloads, scheduled backups, and current software and applications. If you have service contracts, those providers will need to be notified: your disaster recovery service, for example, will need to point to the new location. Some items may need special licensing in order to run concurrently or temporarily as you cut over to the new facility.
Tag what is staying and what is going. If a piece of equipment is moving, look up and record the warranty information and serial number. Make sure nothing in the migration process will void the warranty.
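One lightweight way to keep tags, serials, warranty details, and destinations in one place is a machine-readable manifest that both the physical movers and the digital team work from. Below is a minimal sketch in Python; the field names and assets are hypothetical, not a standard format:

```python
# Hypothetical pre-migration manifest; fields and assets are illustrative.
import csv
from dataclasses import dataclass, asdict

@dataclass
class Asset:
    hostname: str
    serial: str
    warranty_expiry: str   # ISO date from the vendor's records
    destination: str       # target room/rack, or "DECOMMISSION"
    moving: bool

assets = [
    Asset("db-01", "SN123456", "2016-08-31", "RoomA/Rack12", True),
    Asset("web-07", "SN654321", "2015-03-15", "DECOMMISSION", False),
]

# Write the manifest so movers and the digital team share one source of truth.
with open("migration_manifest.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(assets[0]).keys()))
    writer.writeheader()
    for asset in assets:
        writer.writerow(asdict(asset))
```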
Now is the time to set up or adjust disaster recovery or backups. It is wise to have a physical backup as well as one in the cloud. Testing disaster recovery is a good way to prepare for the actual move.
Step 3: Gathering a Team and Making the Move
Schedule your target move date to avoid interfering with a heavy business period, like an upcoming product launch or internal project. The actual move will probably happen during off hours. Ensure you have access to all necessary building areas.
Group personnel into leaders, physical movers, and digital teams ready to monitor and migrate systems. Create a comprehensive plan for moving day that includes how and what will move, backup plans, installation and testing. Think about the risk involved in each step and seek to minimize any business impact.
Pack and organize sensibly, labeling everything. Boxes of cables need types and lengths on them. Servers should note what block and/or room they are destined for to simplify reinstallation. It might make sense to move the data center floor-by-floor, or you can use a different system like moving non-critical systems first.
Dispose of old equipment and supplies responsibly. Recycle electronics if you can and sell what is still useful. Be certain no data remains on any devices: clear-level sanitization alone may not be adequate to purge data, and degaussing or physical destruction of storage media could be necessary depending on the circumstances. Hazardous equipment like batteries must be handled properly.
Security is paramount during this process. Know your workers, track your equipment, and keep an eye on security logs. This is an easy time for people to sneak past your usual perimeter as doors are left propped to carry items or firewalls are shut down. Take or destroy security keys, documents, and access systems as required.
Step 4: Documentation and Testing
After everything is installed, begin testing. Check the equipment in the new facility against your inventory list in case anything was misplaced along the way. Check off your list of systems and applications to ensure they are all running correctly or a replacement is in place.
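Checking the new facility against the inventory can be automated from the same manifest sketched in Step 1. A minimal sketch, again with hypothetical file and field names:

```python
# Compare serials scanned at the new site against the migration manifest.
import csv

with open("migration_manifest.csv") as f:
    expected = {row["serial"] for row in csv.DictReader(f)
                if row["moving"] == "True"}

scanned = {"SN123456"}  # hypothetical: serials barcode-scanned at the new site

print("Missing at new site:", expected - scanned or "none")
print("Not on manifest:", scanned - expected or "none")
```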
Complete a project audit and review for future documentation and evaluate the success of the move. Did you hit your schedule? Were design specifications met? Ask your team for their thoughts, and ask C-levels and heads of other departments if their needs are being met post-move.
There is much to keep track of during a data center migration. These steps are broad strokes to help you think about how, what, where, when, and why you are moving equipment and systems. Perhaps the biggest takeaway is documenting the entire process, starting with a strong plan and ending with an audit. This helps you lay out the process while leaving a paper trail to help discover errors along the way and measure success at the end.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

6:38p
Cloudian Integrates With Hortonworks for Hadoop-Ready Distributed Storage

Cloudian has made its data storage platform Hadoop-ready with a Hortonworks certification. The latest 5.1 release of HyperStore also includes improvements for distributed and hybrid storage, such as better security and geo-replication enhancements that enable real-time access to metadata across multiple sites.
Making Cloudian’s storage Hadoop-ready means customers can now run analytics in place, directly against the company’s storage system without having to extract the data.
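In practice, running analytics in place against an S3-compatible store like HyperStore typically means pointing a compute engine’s S3A connector at the store’s endpoint rather than copying data into HDFS first. Here is a minimal PySpark sketch under that assumption; the endpoint, credentials, and bucket are hypothetical, and the announcement doesn’t specify Cloudian’s exact configuration:

```python
from pyspark.sql import SparkSession

# Point Hadoop's S3A connector at an S3-compatible endpoint instead of AWS.
# Endpoint, credentials, and bucket below are hypothetical; requires the
# hadoop-aws package on the Spark classpath.
spark = (
    SparkSession.builder
    .appName("analytics-in-place")
    .config("spark.hadoop.fs.s3a.endpoint", "http://hyperstore.example.com")
    .config("spark.hadoop.fs.s3a.access.key", "ACCESS_KEY")
    .config("spark.hadoop.fs.s3a.secret.key", "SECRET_KEY")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

# Run the query where the data lives -- no extract step.
logs = spark.read.json("s3a://analytics-bucket/logs/")
logs.groupBy("status").count().show()
```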
Cloudian continues to expand its capabilities beyond just storing data. It recently added tiering to Amazon Web Services’ S3 and Glacier cloud storage.
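On AWS itself, that kind of tiering is expressed as a bucket lifecycle rule. A sketch of the equivalent using boto3, with a hypothetical bucket and rule (Cloudian’s own tiering feature is configured through its own interface, which the article doesn’t detail):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical rule: move objects under archive/ to Glacier after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-to-glacier",
            "Filter": {"Prefix": "archive/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)
```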
HyperStore runs across commodity servers stitched together into a single large storage pool. A peer-to-peer scaling system based on Cassandra (a NoSQL store) provides reliable, consistent storage. It also offers optional low-cost disk compression, and data is always encrypted in-line.
Cloudian wants to disrupt traditional storage. Its initial focus was on delivering low-cost storage on commodity hardware. It has expanded functionality to create what it calls a “smart” storage system that can run on commodity hardware. “People want a storage platform that’s a bit more horizontal with multiple use cases,” said Cloudian Chief Marketing Officer Paul Turner.
To achieve Hortonworks certification, Cloudian’s system underwent a battery of tests. Down the line, Turner said, the company will support multiple Hadoop distributions. Hortonworks was chosen first because of its leadership position in the market.
The Hadoop vendor had its Initial Public Offering in December in a strong debut. It has been focusing on partnering in addition to making Hadoop more secure, most recently deepening its relationship with Talend around stream data analytics.
The two major current trends in storage, according to Turner, are smart storage and general hybridization of storage by enterprises. Companies are employing both private and public, depending on the data, and there is a split occurring between high-performance data and bulk data.
“There is a convergence of big data and analytics,” said Turner. “You can’t just have a storage system to hold data. There’s usually a lot of capacity data I want to run capacity analytics on; you don’t want to only store, you want to analyze.”
Not all data is created equal, leading to the second trend, hybrid storage. “People are truly looking at hybrid, at which data goes to the cloud, archive or backup to the cloud, long-term retention,” said Turner.
As businesses become more geographically dispersed, a storage system needs to handle more responsibilities. Those responsibilities include file distribution and sharing, tiered storage, and storage for private cloud. Turner said Cloudian can act like a private Amazon S3 but on-premise.
Two big verticals that use Cloudian are financial services and media. Examples of use cases include media content storage, advertising in need of distributed feeds, and distribution of content around the world.
Cloudian runs on appliances or software, though appliances make up the bulk of the business.
“Enterprises want out-of-box solutions,” Turner said. “The world went through a shift; first it went to appliances, then it was software-defined everything. But I think reality is enterprises want all the flexibility from software defined (abstraction, API dynamic control, keep it open). What they didn’t like about the closed systems (NetApp, EMC) is they couldn’t take control of things.”

7:25p
Consortium Previews Open Standard for Internet of Things

The Open Interconnect Consortium has launched the initial release of its standard for machine-to-machine communication as a preview. The standard, called IoTivity, is being developed as an open source project hosted by the Linux Foundation.
IoTivity is an open source framework for the Internet of Things. The Intel-backed consortium is tackling the rise of network-connected devices and the technical issues that come with it. Since the many devices and the servers they talk to come from many different vendors, there needs to be a common communications standard. OIC plans to release a reference implementation soon.
The IoT standard project includes RESTful APIs and will be available in various programming languages for a variety of operating systems and hardware platforms.
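As a rough illustration of the RESTful model, OIC-style devices advertise their resources at a well-known discovery URI (/oic/res) that clients query over CoAP. Below is a minimal sketch using the Python aiocoap library with a hypothetical device address; the actual IoTivity framework APIs differ by language and platform:

```python
import asyncio
from aiocoap import Context, Message, GET

async def discover(device_addr: str) -> None:
    """Query a device's OIC discovery resource (/oic/res) over CoAP."""
    protocol = await Context.create_client_context()
    request = Message(code=GET, uri=f"coap://{device_addr}/oic/res")
    response = await protocol.request(request).response
    print("Discovered resources:", response.payload.decode())

# Hypothetical device address; real stacks also support multicast discovery.
asyncio.run(discover("192.0.2.10"))
```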
The open source approach plays a big role in IoT, as it does in cloud computing, because it is the best instrument for bringing all stakeholders together in a collaborative effort and ensuring interoperability.
“We believe that an open source project combined with the OIC’s standards efforts is critical to driving true interoperability for the billions of IoT devices that will be coming online over the next few years,” Mark Skarpness, director of embedded software in Intel’s Open Source Technology Center, and chair of the IoTivity Steering Group, said in a statement. “We are pleased to be working with The Linux Foundation and the open source community to advance the project.”
IoTivity is governed by an independent steering group that liaises with the OIC. OIC was founded in 2014 and saw its membership jump to over 50 companies during the year.
Some potential IoT uses include smart homes, automotive industry, industrial automation, and healthcare.
451 Research reported that 2014 was a record year for mergers and acquisitions in the IoT space. A recent report from IDC said that half of IT networks will soon be strained by the influx of IoT devices.
In addition to the open source implementation, the IoT standard will include IP protection and branding for certified devices (via compliance testing) and service-level interoperability.
“The ability for devices and machines to communicate will unleash a whole new world of technology innovation. Open source software and collaborative development are the building blocks to get us there,” Jim Zemlin, executive director at The Linux Foundation, said in a statement. “IoTivity is an exciting opportunity for the open source community to help advance this work.”
The IoTivity project is licensed under the Apache License version 2.0.

7:29p
IT Disaster Recovery Workshop

Join Business Continuity and IT Disaster Recovery experts Thursday, January 22, from 8 a.m. – 12 p.m. for a free IT Business Continuity and Disaster Recovery Workshop.
The workshop will be held at Online Tech’s Indianapolis Data Center in Indianapolis, Indiana.
This half-day event for technology professionals will feature three industry experts who will guide attendees through BCDR planning strategies and real-world scenario exercises.
During the workshop you will get the opportunity to:
- walk through a Business Continuity planning exercise,
- review different IT architecture options for meeting 3 different IT Disaster Recovery scenarios,
- learn valuable lessons from teams that have seen it all in the real-life recovery of hundreds of companies from the brink of both human-made and natural disasters.
For more information, visit the IT Disaster Recovery Workshop website.
To view additional events, return to the Data Center Knowledge Events Calendar.

7:55p
IT Disaster Recovery Workshop

Join Business Continuity and IT Disaster Recovery experts Thursday, February 19, from 8 a.m. – 12 p.m. for a free IT Business Continuity and Disaster Recovery Workshop.
The workshop will be held at Online Tech’s Metro Detroit Data Center in Westland, Michigan.
This half-day event for technology professionals will feature three industry experts who will guide attendees through BCDR planning strategies and real-world scenario exercises.
During the workshop you will get the opportunity to:
- walk through a Business Continuity planning exercise,
- review different IT architecture options for meeting 3 different IT Disaster Recovery scenarios,
- learn valuable lessons from teams that have seen it all in the real-life recovery of hundreds of companies from the brink of both human-made and natural disasters.
For more information, visit the IT Disaster Recovery Workshop website.
To view additional events, return to the Data Center Knowledge Events Calendar.

9:55p
The Data Centers Behind Datto’s Backup for Everything

What does a globally distributed provider of backup services for any business data look for in a data center? Datto’s goal is to back up anything and everything and to help MSPs do the same for their customers.
Datto (not to be confused with Dato, the recently renamed GraphLab) has about 400 employees and counts over 8,000 resellers around the world providing backup and disaster recovery services to some 2 million customers. The company is powering its business out of several data centers across the country, offering customers an alternative to setting up their own DR data center infrastructure.
It operates its own private cloud in these data centers, holding over 100 petabytes of data and quickly growing. Datto recently extended its Software-as-a-Service backup capabilities with the acquisition of Backupify.
Running its own cloud means it’s not too worried about pricing pressure in the storage space affecting the backup and DR business. It’s more cost efficient to build and run the cloud in house, the company’s executives said. By using colocation, however, it can focus on data backup and not worry about managing the data center itself.
Distributed Footprint With Center in Utah
Datto’s data centers are located in Reading, Pennsylvania, and Salt Lake City, Utah, as well as in Canada, the U.K., and Australia. It uses several colocation providers, but its largest location is with C7 Data Centers in Utah.
“They do a ton of environmental stuff, such as recycled cold air,” Datto’s vice president of infrastructure engineering George Bedocs said. “They have a new a la carte design for energy consumption. We like that it’s N+1 all the way down to signal failure.”
Utah is considered a DR data center hotspot, thanks to its proximity to key West Coast markets and a relatively disaster-free geography. Given Datto’s business, it’s a logical choice: all of its data centers are in markets conducive to backup and disaster recovery.
Datto’s provider-selection criteria are fairly standard: compliance, security, room to grow, infrastructure redundancy, fiber availability. The company prefers smaller regional data center providers, because they tend to be more hands-on with their customers than the big global players.
“From my experience, at smaller hosting facilities you have more preferential treatment in terms of on-the-ground support,” said Bedocs. “At the larger global providers you get your standard wire pluggers and rack-and-stack, but no one seems to have a personal investment. Smaller providers step up and have ownership; they’re right on the floor and receptive to all kinds of ideas.”
Optimizing Server Cooling
Some of those data center ideas range from big, overarching changes, like implementing hot- and cold-aisle isolation, to smaller ones, like working on custom server fans.
“We’re playing with fan shields right now,” said Bedocs. “Each chassis has seven fans in it. These fans obviously eat up electricity and we’re trying to turn them all off but two.”
The company is 3D-printing different designs to see what works best. 3D printing allows Datto to tweak and customize each design and to experiment with different shapes and sizes, such as different fan blade curvatures.
They’re hoping to come up with a configuration and design that will allow them to use fewer fans that move the air more effectively. Fewer fans mean lower electricity costs. A byproduct of fewer fans is less vibration, which reduces hard drive failures.
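The electricity argument is easy to sanity-check with rough numbers. A back-of-the-envelope sketch, where every figure (per-fan wattage, fleet size, energy price) is an assumption for illustration, not a Datto number:

```python
# Back-of-the-envelope fan savings; all inputs are illustrative assumptions.
fans_before, fans_after = 7, 2   # fans per chassis
watts_per_fan = 10.0             # assumed average draw per fan (W)
chassis_count = 1000             # assumed fleet size
price_per_kwh = 0.08             # assumed electricity price ($/kWh)
hours_per_year = 24 * 365

saved_kw = (fans_before - fans_after) * watts_per_fan * chassis_count / 1000
annual_savings = saved_kw * hours_per_year * price_per_kwh
print(f"~{saved_kw:.0f} kW saved, ~${annual_savings:,.0f}/year")
```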
The company is also looking at different chassis designs. Currently, its storage nodes are 36 drives each, front- and back-loaded. This means that some of these hard drives are on the exhaust side, so the engineers are looking for ways to address that problem.
Differentiating in a Crowded Space
With several consumer offerings extending into business capabilities (and often bringing their lower price points along), the competitive landscape can appear crowded. Mozy, Carbonite, Dropbox, SugarSync, and Box all offer backup in some form or another, and all have growing enterprise cloud DR plays.
However, Datto’s mission of serving local needs, protecting all the data in SaaS applications, and serving the channel means it’s positioned somewhat uniquely. More firms are using SaaS for mission-critical applications, according to Gartner, which means the need to back up this data is growing. This was part of the rationale for the Backupify acquisition.
“The way that we think about it is, our job is to protect business data no matter where it lives,” said Datto CEO and founder Austin McChord. “Now, when you think about business data, it’s all over the place. A lot of businesses see a tremendous amount of value in what we offer.”
“The hot news is the SaaS backup and getting it totally integrated,” said Bedocs. “We’re also excited about all the new technology we’ll be learning about on the backend. Google and Facebook are doing interesting things in the data center (open rack designs etc.), so we’re interested in examining that.”

10:35p
Report: Flash Array Market Reached $11B in 2014

Aggressive improvements by suppliers and increasingly complex workloads in customer data centers drove the flash-based array market past $11 billion in 2014, according to new research from IDC.
Further enterprise adoption and the explosive growth witnessed in this market were driven by startups and established enterprise storage vendors alike improving the reliability and performance of flash-based arrays and lowering the effective cost per gigabyte. The IDC report looks at Dell, EMC, HDS, HP, IBM, NetApp, and Oracle, and highlights startup revenue leaders including Nimble Storage, Pure Storage, and SolidFire.
“Because of the increasing maturity of flash-based arrays, along with more widespread recognition about the secondary economic benefits of flash deployment among end users, we have seen the overall market grow faster than we had originally forecast in 2013,” Eric Burgener, research director, IDC storage systems, said in a statement. “As legacy arrays come up for technology refresh, we are seeing more and more customers evaluate flash-based options.”
When studying this market, IDC breaks the enterprise storage market down into I/O-intensive, performance-optimized, and capacity-optimized solutions. It then compares vendors by revenue and raw capacity shipped.
Flash has become a mainstream solution, with flash-optimized features like in-line compression and de-duplication, as well as technology refresh cycles, helping pave the way.

10:49p
Thousands of French Websites Face DDoS Attacks Since Charlie Hebdo Massacre 
This article originally appeared at The WHIR
Nineteen thousand French websites have been attacked since the Charlie Hebdo terrorist attacks last week, according to French military head of cyberdefense Adm. Arnaud Coustilliere. The attacks have been carried out by a variety of hackers, including “more or less structured groups” and some well-known Islamic groups, Coustilliere said.
Most have been minor DDoS attacks, carried out on sites for everything from military regiments to pizza shops.
“What’s new, what’s important, is that this is 19,000 sites — that’s never been seen before,” the Associated Press quoted Coustilliere as saying. “This is the first time that a country has been faced with such a large wave of cyber-contestation.”
The Huffington Post published a story earlier this week on Algerian hackers attacking French sites in response to the publication of offensive images by the French magazine. Those hackers included members of a group called Anonymous Algeria, though the similarly named group Anonymous explicitly expressed support for Charlie Hebdo while vowing to disrupt terrorist websites.
Coustilliere characterized the attacks as a response to the public outpouring of support for free speech and the victims of the attack.
Arbor Networks counted 1,070 DDoS attacks in a 24-hour period this week, CBC said. For comparison, Arbor says the U.S. hosts 30 times more sites but suffered only four times more attacks, meaning a French site was roughly 7.5 times (30 ÷ 4) as likely to be attacked.
Jihadist hackers also hacked U.S. military social media accounts on Monday, and the intersection of hacking with the revived “war on terror” promises to further muddy a whole raft of long-awaited regulatory reforms related to internet communication and security.
The European Union and UK have both suggested more monitoring of internet communication is necessary since the attacks.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/thousands-french-websites-face-ddos-attacks-since-charlie-hebdo-massacre |