Data Center Knowledge | News and analysis for the data center industry
Tuesday, July 15th, 2014
DigitalOcean Expands into London With Equinix

Cloud provider DigitalOcean has expanded to an Equinix data center in London. The company says this is just the beginning of a larger worldwide expansion. Fresh off a funding round, the company is gearing up to bring its easy-to-use cloud to developers everywhere.
DigitalOcean has been growing rapidly. In 2013, Netcraft noticed it was growing at an impressive clip despite being a fairly late entrant into the cloud provider game. The high adoption rates are often attributed to the user-friendly nature of its cloud.
“Our mission is to simplify the complexity of cloud infrastructure for developers, and user experience is number one on the priority list,” said Mitch Wainer, co-founder and chief marketing officer at DigitalOcean. “AWS wants to offer anything and everything under the sun, but you can see they don’t prioritize user experience. We make it easy to use and easy to understand how much you’ll spend a month.”
This is the company’s third location in Europe. It has two data centers in the Amsterdam market with TeleCity Group.
Wainer said London was the next stop for DigitalOcean because it has been a heavily requested region by customers. “The UK is going through a tech renaissance, and we want to provide startups with the infrastructure to grow.”
Another reason for the location is the increasing need to host data within country borders. The London data center gives UK developers a local facility from which to serve their customers.
The European Union’s Data Privacy Directive currently makes it difficult for data to be moved outside of the region. This is an issue that will keep coming up as the company grows its geographic reach.
Wainer said DigitalOcean is also currently looking to expand into other countries, such as Canada, which has similar laws and regulations when it comes to data. “It’s going to be interesting to see what the adoption rate is in these new regions that prohibit storing data [on foreign territory].”
The recent funding round gives the company enough cash to invest in expansion, Wainer said. “I’m happy to say, as of recent, we’ve secured a massive amount of credit to help us secure enough capacity for the future without limitations, driven by demand.”
The company recently announced the launch of a Singapore data center, its first location in Asia.
LON1 will be running the latest version of the company’s backend codebase, allowing for IPv6 support on all “Droplets” – the company’s branded term for cloud servers. The new codebase provides a number of benefits, including actions that can be initiated without needing to power off a Droplet (e.g. snapshots and enabling/disabling networking services), as well as a more reliable backup service architecture overall.
IPv6 can also be added to existing Droplets without rebooting. DigitalOcean will be mandating IPv6 support as the standard for all locations moving forward.
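For readers who want to try this, the no-reboot IPv6 change maps to a droplet action in DigitalOcean’s public v2 API. The sketch below is only an illustration, not DigitalOcean’s own tooling; the token and Droplet ID are placeholders, and the exact response fields may vary.

```python
# Minimal sketch: enable IPv6 on an existing Droplet through the DigitalOcean
# v2 API's droplet-actions endpoint, without rebooting the Droplet.
# API_TOKEN and DROPLET_ID are placeholders, not real credentials.
import requests

API_TOKEN = "your-personal-access-token"
DROPLET_ID = 1234567

response = requests.post(
    "https://api.digitalocean.com/v2/droplets/{0}/actions".format(DROPLET_ID),
    headers={
        "Authorization": "Bearer {0}".format(API_TOKEN),
        "Content-Type": "application/json",
    },
    json={"type": "enable_ipv6"},  # the droplet action that adds IPv6
)
response.raise_for_status()
print(response.json()["action"]["status"])  # typically "in-progress"
```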
“With this expansion, we’re using Equinix again, who have been a great partner,” said Wainer. “We feel very comfortable using them, and we’ve used them for years.”
Equinix was home to the company’s first data center in New Jersey. DigitalOcean also takes space at a Telx data center within the Google-owned carrier hotel at 111 8th Avenue in New York City.
Cloud Protection: How to Avoid Emergency-Related Outages

Brian Burns is the Director of Cloud Services for Agile Defense, Inc.
‘Tis the season for hurricanes, twisters, tornadoes, floods and, worse, outages. Companies hope their providers have properly prepared their applications and data centers for safety and security during unexpected and often disastrous weather conditions.
In an age of advanced technology, with many excellent preemptive tools and systems available, it’s hard to imagine an entire data center losing power. Yet it was only two years ago that Hurricane Sandy hit the East Coast, knocking out data centers from Virginia to New York and New Jersey and causing them to lose public power and go dark for days.
For government agencies or large enterprise organizations that use internal data centers to house their applications, public multi-tenant clouds offer a lower-cost, easy-to-deploy disaster recovery/continuity of operations (DR/COOP) solution. The following steps can help these data centers plan and execute effectively with minimal to no disruption in the production environment.
Plan for the worst, hope for the best
Identify mission-critical applications
Begin by determining which web-based applications cannot go down for even a short period of time. Make a list of these applications, their dependencies, and minimal hardware requirements to operate.
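As a purely hypothetical illustration of such an inventory, the structure below captures applications, dependencies and minimum hardware in a form that can be sorted by urgency; every name and number is a placeholder.

```python
# Hypothetical inventory of mission-critical applications, their dependencies,
# and the minimum hardware needed to bring each one up at the DR/COOP site.
# All names and figures are placeholders.
critical_apps = [
    {
        "name": "customer-portal",
        "dependencies": ["postgresql", "redis", "ldap"],
        "min_hardware": {"vcpus": 4, "ram_gb": 16, "disk_gb": 200},
        "max_downtime_minutes": 15,
    },
    {
        "name": "payments-api",
        "dependencies": ["postgresql", "message-queue"],
        "min_hardware": {"vcpus": 2, "ram_gb": 8, "disk_gb": 100},
        "max_downtime_minutes": 5,
    },
]

# Mirror the most time-sensitive systems first.
for app in sorted(critical_apps, key=lambda a: a["max_downtime_minutes"]):
    print(app["name"], app["dependencies"], app["min_hardware"])
```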
Identify a compliant cloud service provider OR give a checklist to the one you have
Identify the right cloud service provider (CSP) that can support your business and technical requirements. If possible, choose a CSP that uses the same hypervisor that you use in-house; this will make your mirroring a lot easier, faster and cheaper in the long run.
Configure remote mirrored virtual machines
Depending on the hypervisor and the contractor assigned to handle the virtualization, either set up the data center to automatically mirror these virtual machines (VMs) or arrange to manually set up the remote VMs. Either way, make sure there is a VM set up for each production system that needs the emergency backup.
Set up the failover to be more than just DNS
With the mirrored VMs tested and in place, it’s time to select a technology that will handle the failover if and when a disaster occurs. When selecting this technology, avoid one that only offers DNS changes. While a DNS change will work, in most cases there will be downtime of many hours, or possibly more than a day, before users can reach the DR/COOP site. Therefore, seek a technology that can detect a failure in your primary data center and redirect end users instantly to the DR/COOP solution.
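As a rough sketch of that detect-and-redirect idea, the loop below polls a primary site’s health endpoint and calls a failover routine after several consecutive failures. The URL, threshold and the trigger_failover body are assumptions; in practice the redirect would be performed by a load balancer, global traffic manager or the CSP’s API rather than a standalone script.

```python
# Minimal failure-detection sketch for a DR/COOP setup. The URL, threshold
# and failover action are placeholder assumptions, not a specific product.
import time
import requests

PRIMARY_HEALTH_URL = "https://primary.example.com/healthz"
FAILURE_THRESHOLD = 3            # consecutive failed checks before failing over
CHECK_INTERVAL_SECONDS = 30

def primary_is_healthy():
    try:
        return requests.get(PRIMARY_HEALTH_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False

def trigger_failover():
    # Placeholder: point traffic at the DR/COOP site, e.g. by updating a
    # load balancer pool or a global traffic manager through its API.
    print("Primary down -- redirecting traffic to the DR/COOP site")

failures = 0
while True:
    failures = 0 if primary_is_healthy() else failures + 1
    if failures >= FAILURE_THRESHOLD:
        trigger_failover()
        break
    time.sleep(CHECK_INTERVAL_SECONDS)
```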
Perform regular failover tests
With the above complete, the final step is the end-to-end failover test, which must be performed routinely against the DR/COOP site. Depending on internal policies, this test may be as small as a single application’s failover or as large as a scheduled full-site failover. Whichever is done, it is important to document the process and the steps taken during the test, with a clear record of the results after each run.
If the failover test completed without a hitch, you now have a documented failover plan! If something did not fail over as expected, refer back to your documentation, identify what did not work, make the adjustments to your plan (and documentation), and test again. You may need to do this multiple times until you have a bulletproof failover plan.
While some predictions suggest fewer hurricanes than in previous years, the storms that do arrive could very well eclipse previous years in intensity. It only takes one emergency to take down a data center, but a simple plan and proper preparation can prevent it. Whether you bring the expertise in-house or outsource it, make the time and budget available to plan properly so you are not out of luck during the outages!
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Chef Adds Docker Container Automation

DevOps software company Chef has added end-to-end management capabilities for Linux container workflows to its IT automation platform.
A new build of the Chef client, called Chef Container, integrates with all Linux containers, including those by Docker. Docker is a big name in the world of application containers. It is both an open source software project and a company that provides commercial offerings based on the technology.
Containers are a way to describe an application’s infrastructure requirements so that the application can be deployed quickly in various IT environments. The most well-known user of application containers is Google, which recently open sourced a version of its container management technology called Kubernetes.
Chef Container brings container management into the company’s overall IT automation framework. A new plug-in for Chef’s Knife command-line tool, called knife-container, enables users to launch, configure and manage Docker containers.
The Chef client runs inside the container and communicates with the Chef server, Colin Campbell, director of patterns and practices at Chef, explained.
Chef goes beyond just container management. “We’re really about configuring the environment for the applications,” he said.
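Chef’s own tooling here is Ruby-based (the chef-container client plus the Knife command line), so the snippet below is not Chef code. It is only a hypothetical Python illustration of what “launch and configure a container programmatically” looks like, using the Docker SDK for Python; the image, port mapping and environment values are invented.

```python
# Not Chef's tooling: a hypothetical illustration of launching and configuring
# a Docker container from code, using the Docker SDK for Python
# (pip install docker). Image, ports and environment are placeholders.
import docker

client = docker.from_env()  # connect to the local Docker daemon

container = client.containers.run(
    "nginx:latest",                  # placeholder image
    detach=True,                     # run in the background
    ports={"80/tcp": 8080},          # map container port 80 to host port 8080
    environment={"APP_ENV": "demo"},
    name="example-web",
)

print(container.name, container.status)
```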
Chef buys Tower 3, adds analytics platform
Chef also announced a new analytics platform, which provides visibility into activity on the Chef server for audits and compliance. Specifically, Chef users now have access to an action log, which documents activity in the environment, such as cookbook usage, roles and infrastructure changes, presented in a single dashboard.
The company recently gained some Big Data expertise by buying a startup called Tower 3 for an undisclosed sum. Jay Wampold, vice president of marketing at Chef, said the deal was closed several weeks ago.
The acquisition, he said, was not about a specific product or technology; it was focused purely on Big Data and analytics talent.
Adapting traditional software testing for IT automation
Another technology addition to Chef’s portfolio is the introduction of test processes used in software development into the realm of software for IT automation. Called Test-Driven Infrastructure, it applies programmatic, automated testing to the entire stack to ensure consistency.
A new installation package, called Chef DK, includes open source tools that cover the entire test and development workflow. Chef is planning to make money by offering commercial support of the open source tools to take some risk out of the equation for enterprise customers, Campbell said.
Chef’s strong business metrics indicate growing demand for web-scale IT
Chef also reported a number of business metrics that show its own rapid growth, as well as the growth of interest in IT automation tools that enable enterprises to run infrastructure the same way web giants, such as Google or Amazon, run theirs. The approach is referred to as “web-scale IT.”
The company said its recurring revenue in the second quarter was more than 180 percent higher than in the second quarter of last year. It also had 100-percent growth in new customers, compared to the same three-month period last year.
Wampold said Chef was enjoying “tremendous” growth in the enterprise space: “Seventy percent of our total sales are coming from the Fortune 1000.”
Rackspace Makes Big Push into Managed Services, Tweaks Cloud Pricing

Building on its tradition of extreme hands-on support, Rackspace added two managed services offerings, provided and billed for on a pay-as-you-go basis.
The cloud provider is offering two levels of managed services, an entry-level one, called Managed Infrastructure, and a more involved and expensive Managed Operations package. Customers using the latter can have Rackspace techs do everything from helping them design and deploy cloud infrastructure to installing, managing and monitoring their applications.
“We’re going all-in in the managed cloud space,” Rajeev Shrivastava, vice president of product marketing at Rackspace, said. “The [managed services] market is growing, and we think we have a niche in the managed cloud space.”
The services are aimed at companies that have fairly sophisticated IT organizations but for whom IT operations are not a crucial strategic differentiator. They are also for companies that simply don’t have IT staff who can deploy and run cloud workloads.
Shrivastava made it clear that the two service packages are not for companies with unique, workload-specific IT requirements that treat the way they run IT as a business advantage.
Managed Infrastructure, the entry-level package, offers staff experts available to provide technical guidance when needed. The all-inclusive Managed Operations package provides the customer with an account manager and staff to manage their cloud infrastructure and applications on an on-going basis.
The second package includes DevOps automation, or automated operations aimed to help customers deploy applications faster.
A back-to-back comparison table of the two new service packages accompanied the announcement but is not reproduced here; its footnote noted that one set of features is available with the DevOps Automation service only.
Rackspace sells both services as utilities (on-demand, pay-as-you-go) together with its cloud infrastructure, but requires minimum commitments from the customers to use them: $50 minimum for Managed Infrastructure and $500 minimum for Managed Operations.
One of Rackspace’s close competitors, CenturyLink Technology Solutions, also recently introduced utility-style managed services. Billing them as managed services for smaller companies that don’t have the deep pockets usually required to use traditional managed services, CenturyLink emphasized that it did not require minimum commitments from customers.
More cash back for breaking SLAs
Rackspace also said it will now return 10 times its standard compensation rate, up to 100 percent of the customer’s monthly invoice, if it breaks the SLA for managed cloud.
It also introduced another new SLA scheme, where a customer can warn the provider ahead of time about an upcoming event that will require a burst of cloud capacity and have the provider handle infrastructure provisioning to absorb the “peak event.” If Rackspace fails to maintain uptime during the event, it will pay the customer 10 times the standard SLA compensation, up to 200 percent of their monthly invoice.
“If you give us seven to 10 days’ notice, we will make sure that your application is up and running,” Shrivastava said. “We will give you all your money back, if we don’t perform for you.”
The 10x SLA pay-out scheme is currently in test mode with a number of select customers, but the company plans to roll it out into general availability in the near future, he said.
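To make the payout caps concrete, here is a small sketch of the arithmetic described above; the invoice and standard-credit figures are invented, and only the 10x multiplier and the 100 percent and 200 percent invoice caps come from the announcement.

```python
# Sketch of the SLA credit caps described above. Example numbers are made up;
# only the 10x multiplier and the 100% / 200% invoice caps come from the article.
def sla_credit(standard_credit, monthly_invoice, peak_event=False):
    cap = monthly_invoice * (2 if peak_event else 1)
    return min(10 * standard_credit, cap)

monthly_invoice = 10_000   # hypothetical monthly bill
standard_credit = 1_500    # hypothetical standard compensation per SLA breach

print(sla_credit(standard_credit, monthly_invoice))                   # 10000 (capped at the invoice)
print(sla_credit(standard_credit, monthly_invoice, peak_event=True))  # 15000 (10x, under the 200% cap)
```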
Comparing apples to apples
In addition, Rackspace said it will now break out separately the rate it charges customers for raw cloud infrastructure services and the charges for additional support and services. By doing this, the company hopes to give customers a clearer picture of what it charges when they compare it to other cloud infrastructure providers.
Rackspace’s cloud services cost more than the raw infrastructure offered by Amazon Web Services or Google Compute Engine, and the provider wants to emphasize that its offering includes much more than simply cloud servers or storage capacity.
Microsoft’s 175MW Wind Farm Deal is its Biggest Power Purchase Agreement to Date

Microsoft took a big step toward transforming the energy supply chain with its biggest power purchase agreement to date: the Pilot Hill Wind Project, a 175 megawatt wind farm near Chicago, Illinois.
Microsoft is among a handful of major tech companies spending heavily on renewable energy, much of it to balance out the coal-based energy used by their data centers. Microsoft began bulk purchases of wind energy last November, signing a 20-year PPA in Texas, its first “utility scale” move.
The latest deal is even bigger. “The Pilot Hill Wind Project is our largest wind investment to date,” wrote Robert Bernard, the company’s chief environmental strategist. Pilot Hill is nearly 60 percent larger than Keechi (the Texas wind farm), at 175 megawatts versus Keechi’s 110 megawatts. The agreement with EDF Renewable Energy means Microsoft will purchase up to 675,000 megawatt-hours of renewable energy from Pilot Hill every year. This is enough to power 70,000 homes.
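The headline figures hang together with some simple back-of-envelope math, sketched below. Only the 175 MW, 110 MW, 675,000 MWh and 70,000-home numbers come from the announcement; the implied capacity factor and per-home consumption are derived estimates, not Microsoft’s figures.

```python
# Back-of-envelope checks on the figures cited above. The derived values
# (size ratio, capacity factor, MWh per home) are estimates, not from Microsoft.
pilot_hill_mw = 175
keechi_mw = 110
annual_mwh = 675_000
homes_powered = 70_000

size_increase = pilot_hill_mw / keechi_mw - 1           # ~0.59 -> "nearly 60 percent larger"
capacity_factor = annual_mwh / (pilot_hill_mw * 8760)   # ~0.44, a plausible wind capacity factor
mwh_per_home = annual_mwh / homes_powered               # ~9.6 MWh per home per year

print("{:.0%} larger than Keechi".format(size_increase))
print("implied capacity factor: {:.0%}".format(capacity_factor))
print("{:.1f} MWh per home per year".format(mwh_per_home))
```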
Pilot Hill will supply clean energy to the Illinois power grid, which powers the company’s Chicago data center. The plant is 60 miles from the Windy City.
Construction has already started, and Pilot Hill is slated to come online in 2015.
One of many environmental initiatives
Microsoft included its commitment to green power in its Global Public Policy Agenda, which goes beyond investments in wind energy. Its massive Quincy, Washington, campus is powered by hydro.
Last fall, the company began a new proof of concept that involved using fuel cells within the rack, with a successful demonstration this year. “Over the past fiscal year, we have purchased more than 3 billion kilowatt-hours of green power, equivalent to 100 percent of our global electricity use,” wrote Bernard.
Greenpeace, which has been pushing large data center operators like Microsoft to clean up the energy mix they use to power their operations, welcomed the announcement. In a statement, the environmental activist organization’s senior energy campaigner David Pomerantz said, “Microsoft’s wind energy purchase shows that it intends to compete in the race among cloud computing companies to power their operations with renewable energy.”
Another company on Greenpeace’s watch list is Apple, which announced earlier this month that it is building its third solar farm, which will supply power to its Maiden, North Carolina, data center. Apple also received kudos from Greenpeace.
Another public shot at AWS
One of the worst offenders on Greenpeace’s dirty cloud list is Amazon, primarily because the company does not share much information about its operations publicly, and Pomerantz used the Microsoft announcement to call Amazon out one more time: “Microsoft’s large purchases of wind energy in Illinois and Texas, taken alongside the commitments by cloud competitors Rackspace and Google to power their respective operations with 100 percent renewable energy, highlight the failure by Amazon Web Services to reach even the starting line in the race to build a clean cloud and green internet.
“As other companies move to embrace solar and wind, AWS risks losing business from customers that are beginning to expect their cloud to be powered by renewable energy.”
Gold Data Centers Sells Sacramento Facility to Single Buyer

Gold Data Centers has sold its latest Sacramento, California-area data center before finishing construction of the facility. The company began development of a multi-tenant, 30,000 square foot data center a few months ago in Rancho Cordova. An undisclosed national Internet and networking company has purchased the entire facility, according to the Sacramento Business Journal.
Sacramento is a data center market growing in popularity with those looking for space in Northern California. The area is far more seismically stable than the Bay Area, sitting away from the region’s major fault lines, while remaining within driving distance.
Gold Data Centers’ intention was to build a shell capable of housing up to four tenant companies. The data center has up to 3 megawatts of power available and is located in the territory of the Sacramento Municipal Utility District, which has lower power rates than many of the surrounding for-profit utilities.
Principal and CEO of Gold Data Centers Bill Minkle didn’t want to sell, but the unnamed company didn’t want to lease. Minkle told the SBJ that they were able to work out a deal because there was not a large piece of wholesale data center space available in the area.
While it is not as active a market as Silicon Valley, Sacramento is home to numerous massive data center properties owned by national- and global-scale providers.
RagingWire is one of the bigger providers in the area, first opening a Sacramento data center in 2000. It has a data center campus known as “The Rock,” which consists of its CA1 and CA2 data centers and is undergoing a significant expansion right now. More than half of the sizable new facility is reportedly pre-sold. One of the biggest tenants on the campus, reportedly, is Twitter.
“We are preparing to open our new CA3 data center in Sacramento with 180,000 square feet of space, 14 MW of critical IT power and four data center vaults,” said Jim Leach, vice president of marketing at RagingWire. “We are offering CA3 preview tours now and accepting day-one reservations. First customer installations in Vault 1 of CA3 are expected by year-end 2014.”
QTS Realty Trust expanded into the Sacramento market with the acquisition of the Herakles data center, which joined its two other California facilities in Santa Clara. Digital Realty Trust has a fully-leased property in the area.
Gold Data Centers is also looking for other locations in Sacramento, Rancho Cordova, Roseville or Rocklin. It has another building in the area under contract.
The sale of the entire data center in one shot, together with the early interest in RagingWire’s new CA3, indicates that supply is not keeping up with rising demand in the area.
Think Loud Data Center Project by Members of Band “Live” Alive With $5M Grant

Members of the 90s rock band Live decided data was the new punk rock and started Think Loud, a project to build data centers in four Pennsylvania cities, as well as a fiber line from New York to Northern Virginia. It’s been relatively quiet since the initial announcement, but new funding suggests the project is alive and well (unlike the band…).
Think Loud Development has been given a $5 million state grant to be used for construction of a data center and office building that will house Think Loud’s headquarters. The organization had originally applied for $10 million. The grant comes from the state’s Redevelopment Assistance Capital Program, which views Think Loud as having potential to create nearly 700 jobs in addition to boosting infrastructure in the area. Think Loud is about giving back to its home state and is an example of broadening investor interest in the data center business.
“Economic growth initiatives are energizing local economies around our state,” said Pennsylvania Governor Tom Corbett. “Think Loud saw the promise that its involvement in downtown York could bring to revitalizing the community. This project will serve as a technology hub and bring good-paying jobs back to the downtown.”
The money will be used to renovate an old printing warehouse into a 54,000 square foot office building on a portion of the company’s “multiple, contiguous parcels of land” near Santander Stadium. Think Loud purchased a bunch of old houses that it plans to demolish.
This first foray will house the flagship offices of Think Loud Development and United Fiber and Data, the latter overseeing the planned data centers. The initial plan was to bring a data center to York, with three others to follow. Construction began in 2012 on a $16.8 million project at 210 York Street but appears to have stalled while Think Loud waited on the grant. The grant will let the company finish the project.
Next up will be a 40,000 square foot data center at an expected cost of $30 million, according to the York Dispatch.
Think Loud Development includes former Live band members Chad Taylor, Chad Gracey and Patrick Dahlheimer, along with business partner Bill Hynes. Live sold 20 million albums back when people bought albums.
“You might call data the new punk rock,” said Live guitarist Chad Taylor, speaking at the original launch event at the Strand-Capitol Performing Arts Center in York. “The company, like our band, would have to give voices to the masses.”
Also in Think Loud and United Fiber and Data’s plans is a major fiber line. From New York, the fiber line bypasses the traditional Interstate 95 route taken by most major networks, instead traveling west through New Jersey. The four data centers were planned for Allentown, Reading, Lancaster and York, along the fiber line route. The data centers are expected to be around 20,000 square feet each.
In eastern Pennsylvania, iNetU has built a cluster of data centers in the Lehigh Valley. A state-backed initiative known as Wall Street West sought to establish the region as a hub for disaster recovery services for financial firms with limited results. The primary success of the effort was a facility built by DBSi, which is now part of Xand.
Bitcoin Infrastructure May Grow by $600M in Second Half of 2014

This is the third feature in our three-part series on the growing data center market for Bitcoin mining. Read the first one about a new breed of colocation for Bitcoin infrastructure or the second one about the tough economics of being a data center provider for customers in the Bitcoin mining business.
Bitcoin mining operations are expected to spend at least $600 million to deploy infrastructure in the second half of 2014 to process transactions using the fast-growing cryptocurrency.
That investment will continue into 2015, according to a panel featuring some of the largest players in Bitcoin mining at Friday’s CoinSummit conference in London. A significant portion of that spending will benefit data center providers with clients in the cryptocurrency game.
Projections of future spending are often optimistic. But these experts say their projections are, like much of the Bitcoin world, based on the math.
“It comes down to the financial incentives,” said Dave Carlson, founder of MegaBigPower, a large Bitcoin operation in Washington state. “Over the next two years there will be 2.6 million Bitcoin mined.”
At the current price of about $630, that adds up to at least $1.6 billion in Bitcoin that will be earned by mining operations through “block rewards,” an incentive paid out roughly every 10 minutes to miners for using their computers to process transactions on the Bitcoin network. If the price moves higher, as many Bitcoin miners anticipate, that opportunity will be even greater.
“How much does it take to capture a share of that $2 billion?” Carlson asked. “You could invest $100 million and do very well. That’s how I’ve been thinking.”
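Carlson’s 2.6 million figure and the roughly $1.6 billion total follow from the block-reward schedule. The sketch below redoes the arithmetic, assuming the 25 BTC per-block subsidy that was in effect in 2014 (the reward amount itself is not stated in the article).

```python
# Back-of-envelope math behind the figures quoted above. The 25 BTC block
# reward is an assumption (the subsidy in effect in 2014); the $630 price
# and two-year horizon come from the article.
block_reward_btc = 25        # assumption: pre-halving block subsidy
blocks_per_day = 24 * 6      # one block roughly every 10 minutes
years = 2
btc_price_usd = 630

btc_mined = block_reward_btc * blocks_per_day * 365 * years   # ~2.6 million BTC
total_value_usd = btc_mined * btc_price_usd                   # ~$1.66 billion

print("{:,} BTC mined over {} years".format(btc_mined, years))
print("~${:.2f} billion at ${} per BTC".format(total_value_usd / 1e9, btc_price_usd))
```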
Network capacity soaring
The growing interest in Bitcoin and other virtual currencies has spurred massive investments in servers and infrastructure to process transactions. Computing power in virtual currency mining is measured by the “hash rate,” the number of hash calculations the hardware can perform every second. Since January, the Bitcoin network has grown from a total compute power of 10 petahashes per second to 135 petahashes per second.
The mining panelists at CoinSummit expect that growth to continue through the fall, resulting in a year-end hash rate of between 400 petahashes and 700 petahashes per second. The cost of deploying a petahash ranges from $1.5 million to $2 million, depending on where the capacity is deployed. Operators like Carlson that deploy in warehouse-based hashing centers are at the lower end of that range, while providers like PeerNova, which use wholesale data center space, are at the higher end.
That suggests a range of $600 million to $1.4 billion in new investment by the end of 2014. The industry’s largest players are already expanding their infrastructure.
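The $600 million to $1.4 billion range appears to come from multiplying the panel’s year-end hash rate estimates by the quoted per-petahash deployment cost; the quick sketch below redoes that multiplication (the inputs are the panel’s figures, while combining them this way is an inference).

```python
# Investment range implied by the CoinSummit panel's figures: a year-end
# hash rate of 400-700 petahashes/sec and $1.5M-$2M deployed per petahash.
low_ph, high_ph = 400, 700            # projected year-end petahashes per second
low_cost, high_cost = 1.5e6, 2.0e6    # dollars per petahash deployed

low_estimate = low_ph * low_cost      # $600 million
high_estimate = high_ph * high_cost   # $1.4 billion

print("${:.0f}M to ${:.1f}B in new investment".format(low_estimate / 1e6, high_estimate / 1e9))
```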
“You’ll see some announcements from us shortly on data centers we’re opening,” said Marc Aafjes, chief strategy officer at BitFury, an ASIC manufacturer that recently raised $20 million from venture capital firms. “I think that will be a differentiator in the near future.”
Hardware vendors now building huge mines
BitFury illustrates a trend in which hardware vendors that make custom ASICs (Application Specific Integrated Circuits) for Bitcoin are shifting their focus to mining. While they continue to sell mining rigs to customers, these companies are also using them to populate large industrial mines. BitFury operates large hashing centers in Iceland, Finland and the Republic of Georgia.
“I think the economies of scale favor data center installations,” said Timo Hanke, the CTO at CoinTerra, an ASIC manufacturer that has also begun developing data centers to host cloud mining services, including one in Utah that was featured in the Wall Street Journal.
PeerNova was formed by the merger of ASIC vendor HighBitCoin and cloud mining pioneer CloudHashing, creating a vertically-integrated company that can handle the entire supply chain, from design of the chip to deployment in the data center.
“When we are building our machines, we build at the data center level,” said Naveed Sherwani, the CEO of PeerNova, which leases wholesale data center space in Iceland and Dallas. “Machines can be tuned to their location within the data center aisle. We need to figure out how to run 150,000 to 200,000 machines.”
PeerNova’s current mining ASIC, the PetaOne Blade, features a 28nm ASIC chip. Sherwani says that PeerNova’s roadmap calls for future chips with 16nm and 9nm semiconductors, putting it in the same neighborhood as Intel’s current roadmap. “I think in the next two years we are going to see a dramatic improvement in ASICs in power and CapEx,” he said.
Building franchise networks to scale up
As the mining action scales up and larger players emerge, it becomes challenging for smaller operations to compete. Carlson has recently launched a franchise program to help MegaBigPower stay competitive.
“If we’re going to scale along with the network and maintain our market share, we’re going to have to (grow) quicker,” said Carlson. “Our way of doing that is to decentralize. We’re seeing an opportunity for the small business startup mine that has the ability to make the economics work.”
MegaBigPower will provide Bitcoin mining hardware for franchisees who can supply industrial facilities with 1 megawatt to 5 megawatts of power. A similar program has just been announced by ZeusMiner, which makes ASIC hardware focused on AltCoin mining using the Scrypt protocol (Bitcoin mining uses the SHA-256 protocol).
“We want to find global partners who can provide a site to host hundreds of miners with cheap power supply to deliver more cost-efficient hashing rates,” ZeusMiner said.