Data Center Knowledge | News and analysis for the data center industry
Friday, August 15th, 2014
| 12:00p |
The Bitcoin Service Provider: New Tenant Class for Data Centers
As more money flows into the Bitcoin mining sector, new companies are emerging to provide managed services for large-scale mining operations. These service providers need data center space to deploy their clients’ equipment.
An example of this trend is Rocky Mountain Miners, a new company specializing in Bitcoin hosting, which is leasing space with IT infrastructure provider Latisys. Under a multi-year agreement, Rocky Mountain Miners will colocate a portion of its high-density Bitcoin mining operations in the Latisys DEN2 data center in Centennial, Colorado.
Rocky Mountain Miners (RMM) provides tailored solutions and consulting for customers doing large-scale cryptocurrency mining. As the Bitcoin sector attracts new players, providers like RMM can streamline those newcomers' market entry by handling the installation and configuration of mining hardware, which can be a tricky task. There’s been an arms race in Bitcoin mining hardware over the past year, with a flurry of new entrants offering more powerful mining rigs.
“There have been a lot of changes in this business,” said John Logan Jones, the CEO of Rocky Mountain Miners. “We’ve learned a lot about this equipment and how to get the most out of the hardware.”
More uptime means more profit
When he looked for hosting infrastructure, Jones considered many options. “We were looking at opening our own space,” he said. “We concluded that for the amount of money we needed to invest, it made more sense to go with Latisys. Their cutting edge, reliable infrastructure enables us to focus on the installation, monitoring and maintenance of our customers’ hardware, ensuring maximum hashrates, uptime and profits.”
RMM started out building custom mining hardware for Litecoin, one of the many “AltCoin” cryptocurrencies adapted from the original Bitcoin code. “It was a hobby that turned into a business,” said Jones, a former Air Force cryptographer. “We started out building systems for individuals, but are now focused on large-scale mining. Now we’re a service provider to the mining industry and one of the first mining services companies.”
From its inception, Rocky Mountain Miners has focused on making mining equipment easy to use. Bitcoin and other major cryptocurrencies are mined by custom hardware, often powered by Application Specific Integrated Circuits (ASICs). Many early ASICs were manufactured in China, with other leading vendors based in Sweden, Ukraine and Israel, as well as the U.S. Configuring drivers and software can be difficult, as popular management software (including cgminer, BFGMiner and cpuminer) has been forked to create versions that support new hardware, often with limited GUI support.
Although mining hardware can be managed using Windows computers, most large-scale operators use low-power controllers like the Raspberry Pi. With limited documentation for hardware and software, troubleshooting often takes place on industry forums like BitcoinTalk or Reddit.
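For remote monitoring, tools like cgminer expose a JSON API over TCP (enabled with its --api-listen option, default port 4028). Below is a minimal polling sketch; the rig address is a placeholder, and the exact keys in the reply vary across cgminer versions.

```python
# Minimal sketch of polling a cgminer instance over its JSON API.
# The host address and reply field names here are illustrative only.
import json
import socket

def cgminer_command(host, command, port=4028, timeout=5.0):
    """Send a single API command to cgminer and return the parsed reply."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(json.dumps({"command": command}).encode())
        reply = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            reply += chunk
    # cgminer terminates its reply with a NUL byte, which json.loads rejects.
    return json.loads(reply.rstrip(b"\x00").decode())

if __name__ == "__main__":
    summary = cgminer_command("10.0.0.21", "summary")  # hypothetical rig address
    stats = summary["SUMMARY"][0]
    print("Accepted shares:", stats.get("Accepted"))
    print("Average hashrate:", stats.get("MHS av") or stats.get("GHS 5s"))
```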
Density drives Bitcoin hosting
That’s why Jones sees an opportunity in managed hosting and consulting for miners. This approach requires a platform, which is where Latisys comes in. A key requirement in a Bitcoin mining data center is high density, as racks of mining rigs can run at 20 kilowatts and higher. The 87,000-square-foot DEN2 facility, built in 2012, was designed to support higher loads.
“The hosted Bitcoin solution is a great example of why ultra-high-density infrastructure is the key consideration for optimizing a variety of hosting and hybrid IT infrastructure solutions – particularly those that require massive parallel processing and scale,” said Pete Stevenson, CEO of Latisys.
RMM has already developed liquid-cooled rigs and sees that as part of its future roadmap. “As we grow our operation, chances are we’ll have to go to glycol cooling,” said Jones.
In recent months Bitcoin mining has become a game for large players with lots of financial strength, making it nearly impossible for solo miners to compete. Jones is convinced that the cryptocurrency world has plenty of room for innovation and says smaller players can continue to find niches. That’s especially true, he said, in the world of AltCoins, where new entries are building distributed storage and other services atop the core blockchain technology that powers cryptocurrencies.
“Bitcoin hashing is very important, but there will be different kinds of services that require different infrastructures,” he said.
 This rack of Bitcoin mining equipment will run at 20 kW in a Latisys data center. (Image: Rocky Mountain Miners)
Visit the dedicated Bitcoin data center market section on Data Center Knowledge for more coverage of this space. | | 12:30p |
Cooling and Powering Florida Poly’s New Supercomputer
As the new futuristic campus of Florida Polytechnic University prepares to welcome its first 500-student class later this month, the university, together with IBM, announced installation of an IBM supercomputer in the building.
The school will teach science, technology, engineering and mathematics, and the supercomputer will support its cybersecurity, Big Data and analytics, cloud and virtualization programs.
While not extremely large, the system is powerful and requires very high power density. Ed Turetzky, senior architect at Flagship Solutions Group, who worked on the team that installed the system, said the three racks currently populated would consume 32.7 kW when running at full steam.
For comparison, median power density in today’s enterprise and colocation data centers is lower than 5 kW per rack, according to a recent report by the Uptime Institute. High performance computing systems, such as Florida Poly’s new supercomputer, usually need much higher power densities than systems housed in enterprise and colocation facilities and require a different approach to data center design, especially mechanical and electrical infrastructure.
Chilled water to the rack
Florida Poly’s 1,000-core system is cooled with a rear-door heat exchanger bolted onto the back of one of the racks. The exchanger is made by IBM, and its cooling capacity is about 50,000 BTU per hour, Turetzky said.
The supercomputer needs about 70,000 BTU per hour of cooling capacity, and the difference is taken care of by ambient air conditioning in the room. As the system scales and more compute nodes are added, another heat exchanger will be installed on another rack.
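To put those figures in electrical terms, 1 kW of IT load produces roughly 3,412 BTU per hour of heat. A quick back-of-the-envelope conversion, using only the figures quoted in this article, shows how the load splits between the rear-door exchanger and the room's air conditioning:

```python
# Rough conversion between electrical load (kW) and cooling (BTU/hr),
# using 1 kW ≈ 3,412 BTU/hr. Capacity figures are from the article.
BTU_PER_HR_PER_KW = 3412

def btu_hr_to_kw(btu_hr):
    return btu_hr / BTU_PER_HR_PER_KW

rear_door_capacity = 50_000   # BTU/hr handled by the rear-door exchanger
total_demand = 70_000         # BTU/hr the supercomputer needs overall

print(f"Rear door removes ~{btu_hr_to_kw(rear_door_capacity):.1f} kW of heat")
remainder = total_demand - rear_door_capacity
print(f"Room air conditioning covers the remaining {remainder:,} BTU/hr "
      f"(~{btu_hr_to_kw(remainder):.1f} kW)")
```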
 The new Florida Polytechnic University will open its doors to the inaugural 500-student class later this month. (Photo by Jeane H. Vincent/Florida Polytechnic University)
The system pumps deionized water through the heat exchanger, which is part of a closed loop dedicated to the supercomputer. The loop extends outside of the computer room, where it goes through another heat exchanger that cools it with the regular chilled water used by the building’s own air conditioning system.
“There’s an apparatus outside of the computer room that monitors the heat that is exchanged, so it will regulate itself to keep that [de]ionized water at a steady temperature,” Tom Mitchell, vice president of sales at Flagship, said. Water used in the supercomputer’s cooling loop is treated for impurities to prevent corrosion in the pipes, he explained.
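The article does not describe the controller's internals; as a toy illustration only, a proportional control loop of the following shape is one common way such an apparatus holds loop water at a setpoint (all numbers invented):

```python
# Toy illustration (not the actual Flagship/IBM controller): a
# proportional controller opens the chilled-water valve further as
# the loop water drifts above its target temperature.
def valve_position(loop_temp_c, setpoint_c=18.0, gain=0.25):
    """Return a valve opening in [0.0, 1.0] from the temperature error."""
    error = loop_temp_c - setpoint_c          # positive when too warm
    return max(0.0, min(1.0, gain * error))

for temp in (17.5, 18.0, 19.0, 21.0, 23.0):
    print(f"loop at {temp:.1f} C -> valve {valve_position(temp):.0%} open")
```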
Power transformer directly on data center floor
The HPC system does not have a dedicated utility feed and relies on the building’s electrical infrastructure for power. Building power comes into the 2,500-square-foot room, where it goes through a “step-down” transformer and an Eaton uninterruptible power supply system. Turetzky said this set-up, where the transformers are placed directly on the data center floor, was unusual.
Typically, power is stepped down in a separate closet outside of the data hall, and when an operator needs to add more power, they have to bring an additional feed from that closet. The setup at Florida Poly makes it easier to add more power when the time comes.
With the transformer right there next to the IT racks, adding power capacity is simply a matter of adding circuits, he said. There is capacity to support 28 circuits, and the system is currently using only eight.
Building a Big Data work force
The campus is the only one in the State University System of Florida dedicated strictly to science, technology, engineering and mathematics. It is one of 28 schools IBM said it would partner with to train students for the millions of jobs it expects to be created in various Big Data fields around the world by next year. | | 1:00p |
Rackspace’s ObjectRocket Launches Managed Redis Service
Rackspace-owned ObjectRocket, known for providing managed MongoDB services, is now helping customers manage Redis at scale. The company is providing full automation, support and management of Redis — a service now available out of Rackspace’s northern Virginia data center, with Dallas, Chicago and London locations expected to come online by the end of the month.
Rackspace acquired ObjectRocket in February 2013, establishing itself in the high-growth managed database market. By taking on management of the intricacies of Redis, the company lets developers devote more time to building applications.
Redis is an open source advanced key-value cache and store. It is often referred to as a data structure server, and it is commonly used as an ephemeral store, with the dataset ceasing to persist once a computation finishes. This helps optimize resource utilization.
“Redis is becoming even more of a focus for developers,” wrote Sean Anderson, product marketing manager for data services at Rackspace. “Redis is easy to set up, replicate and code to, which makes it an important part of the modern data architecture.”
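As a brief illustration of that "easy to code to" point, here is a minimal sketch using the open source redis-py client; the host and key names are placeholders, and a managed service such as ObjectRocket would supply its own connection details.

```python
# Minimal sketch of Redis as an ephemeral cache and data structure
# server, using the redis-py client. Host and keys are placeholders.
import redis

r = redis.StrictRedis(host="localhost", port=6379, db=0)

# Store an intermediate computation result with a 60-second TTL;
# once the TTL expires, the data simply ceases to exist.
r.set("report:42:partial", "in-progress payload", ex=60)
value = r.get("report:42:partial")      # bytes, or None after expiry
print(value)

# Redis also serves structured data: lists, sets, hashes, and more.
r.lpush("recent:logins", "user_1001")
print(r.lrange("recent:logins", 0, 9))  # up to ten most recent entries
```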
Users can deploy a fully managed Redis service backed by certified engineers specializing in the open source technology. It speeds up adoption and implementation and comes with around-the-clock expert support. The service offers high availability with free backups, simplified operations, high performance and high bandwidth.
Users can provision and manage Redis instances and highly available cluster nodes of up to 50 gigabytes through the ObjectRocket control panel and API.
Major companies that use Redis include photo site Flickr, which uses automated Redis master failover for an important subsystem, and social site Pinterest, which uses it for its following model and interest graph.
Managed MongoDB has been available for two years. NoSQL databases are easy to adopt and free to obtain, but troubleshooting and administering the full environment can be difficult. | | 1:30p |
Enterprise Cloud Storage Startup Nasuni Raises $10M
Enterprise storage provider Nasuni has closed a $10 million round of venture capital funding, bringing the total amount of money it has raised to date to $53 million. Previous investors Flybridge Capital Partners, North Bridge Venture Partners and Sigma Partners, as well as a new investor, participated in this extension of the company’s C round.
The Boston-based startup is possibly a few years away from an IPO and will use the new funds to scale engineering, sales and marketing efforts.
“Simply put, we wanted a bigger share of Nasuni,” said Paul Flanagan, managing director at Sigma Partners. “With their disruptive technology and approach to delivering enterprise storage as a service, Nasuni is revolutionizing the way data storage is deployed. Clearly, we’re excited about the company and have been super impressed with its growth. The opportunity here is enormous, and Nasuni is perfectly positioned to take full advantage of IT’s shift to the cloud.”
Nasuni’s unified storage infrastructure looks to take on entrenched vendors EMC and NetApp with a patented UniFS Global File System, which gives users fast access to a global file share no matter where they are located.
The company logged a 232-percent increase in bookings in the second quarter of 2014 and 181-percent sales growth.
Version 6 of its service was launched recently, adding cloud-scale global file locking and mobile file synchronization for access to corporate data. This cloud-centric service is complemented by the Nasuni Filer, an appliance that provides WAN optimization and acts as a cloud gateway with sophisticated caching algorithms for local NAS and SAN workloads.
Founder and CEO Andres Rodriguez said, “The Nasuni Service liberates data from the limitations and high cost of traditional storage silos. With this new financing, we will expand our outreach and accelerate innovation and market adoption.” | | 3:50p |
A New Predictive Approach to Energy Efficiency
The reality of the modern data center is that energy costs are continuing to rise. The need to control those costs requires data center managers to increase energy efficiency not only in the data center, but across the entire organizational estate.
For a majority of organizations, improving energy efficiency is about saving money. Therefore, managing power consumption and optimizing cooling are at the top of their to-do list.
Managing power consumption could involve deploying new energy-efficient equipment, but due to the expense, most companies put that off until a scheduled technology refresh. That leaves cooling as the single largest data center operational and energy cost that can generate a return on investment when remediated. Optimizing cooling is therefore one of the first areas data center managers should look to in order to reduce costs and increase efficiency.
The increasing demand for energy efficiencies has led organizations to investigate or implement data center infrastructure management (DCIM) tools to automate the creation of documentation and the collection of energy and environmental data upon which optimization decisions can be made.
In this whitepaper from Panduit, we learn how to create a more proactive – and predictive – approach to creating better energy efficiency.
When looking at good ways to measure and optimize energy efficiency, it’s critical to look at the tools you’re using. As the white paper points out, the primary functions of energy and environmental management solutions are to:
- Provide detailed information on power consumption and environmental conditions
- Accurately and dynamically map a holistic view of the energy flow within the facility, from the point of entry to an individual payload or supporting plant
- Provide easy-to-use, easy-to-understand reporting, tailored to customer-specific requirements
- Provide accurate energy consumption performance metrics, such as PUE (see the sketch after this list)
- Allow multiple departments to input and view data collaboratively
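As a minimal sketch of one such metric, the calculation below computes Power Usage Effectiveness (PUE), the ratio of total facility power to the power delivered to IT equipment; the sample readings are invented.

```python
# PUE = total facility power / IT power; 1.0 is the theoretical ideal.
# Sample readings below are invented for illustration.
def pue(total_facility_kw, it_load_kw):
    """Return the Power Usage Effectiveness for one set of readings."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

readings = [
    {"hour": 0, "facility_kw": 820.0, "it_kw": 500.0},
    {"hour": 1, "facility_kw": 790.0, "it_kw": 495.0},
]
for sample in readings:
    print(f"hour {sample['hour']}: PUE = "
          f"{pue(sample['facility_kw'], sample['it_kw']):.2f}")
```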
Technologies like the Panduit SmartZone Solution can deliver comprehensive energy and physical infrastructure efficiency through a range of intelligent products, systems and services.
Download this whitepaper today to see how DCIM tools can help in your quest for greater energy efficiency by automating the creation of documentation and the collection of energy and environmental data. | | 4:23p |
Friday Funny: Pick the Best Caption
Another busy week is nearing its end and that can only mean one thing (hint: starts with Friday, ends with Funny)! Let’s finish the week out strong with our Data Center Knowledge Caption Contest!
Several great submissions came in for last week’s cartoon – now all we need is a winner! Help us out by scrolling down to vote.
Here’s how it works: Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon and we challenge our readers to submit a humorous and clever caption that fits the comedic situation. Then we ask our readers to vote for the best submission and the winner receives a signed print of the cartoon!
Take Our Poll
For previous cartoons on DCK, see our Humor Channel. And for more of Diane’s work, visit Kip and Gary’s website! | | 4:56p |
TierPoint Breaks Ground on Oklahoma Data Center
TierPoint broke ground on a new 69,000-square-foot data center in Oklahoma City. The new data center will be the largest of three facilities on the company’s 15-acre campus there, adding 30,000 square feet of net usable raised-floor space.
Fresh off of a recapitalization deal, the company has been building out its footprint this year through acquisition and now a greenfield build. It was founded as Cequel Data Centers but took on the moniker of TierPoint following an acquisition in 2012.
Additionally in the region, TierPoint operates a 22,000-square-foot facility in Oklahoma City and a data center in Tulsa with around 9,000 square feet of raised floor — facilities it acquired from Perimeter Technology in 2011. It has six WAN-connected data centers total, the others located in Dallas, Spokane, Seattle and Baltimore. The company’s current footprint totals about 141,000 square feet of space.
The new Oklahoma data center will be carrier-neutral and will include N+1 redundant power. The company says it will be able to accommodate up to 30 kW per rack in some areas. The facility will also include 10,000 square feet of business-continuity office space and offer a private customer lounge and work areas with multiple large staging rooms for customer installs and upgrades.
Security will include onsite personnel around the clock, biometrics, proximity sensors and logging at all entrances. The company also plans to complete an SSAE 16 audit.
“Our focus is on customizable services and solutions,” said Todd Currie, general manager for TierPoint Oklahoma. “This design gives us the ability to accommodate most any customer configuration and power requirements.”
TierPoint has predominantly relied on acquisitions to expand so far. The acquisition strategy has been to buy data center properties in underserved regional markets. In June, the company acquired Philadelphia Technology Park, its sixth acquisition.
“The new data center will enhance our ability to meet growing customer demand for secure colocation, managed hosting and cloud services and demonstrates TierPoint’s commitment to growing the Oklahoma market,” Currie said. “Oklahoma’s low cost of living, land and electrical power have made it one of the nation’s most inviting data center locations.” | | 5:28p |
Apple Outsources iCloud Storage in China to State-Controlled China Telecom
Apple is storing user data in China using state-controlled China Telecom’s cloud storage services. A statement on the Fuzhou city government’s website confirmed the arrangement, but the page was removed after the story broke.
iCloud is Apple’s cloud storage service, used to back up all of its devices, including iPhones and iPads. The company is most likely using China Telecom instead of its own data centers because Chinese regulations make it very difficult for a foreign company to build there.
Apple’s data centers are in Prineville, Oregon; Maiden, North Carolina; Newark, California; and Reno, Nevada (a new site currently running at low capacity). It also uses colocation providers sparingly but says that most of its workload is served out of its own facilities.
Strategically, outsourcing to a local service provider in a foreign country is a typical approach to expanding geographic reach of a company’s IT infrastructure. Outsourcing to a third party means smaller upfront expenditure and faster time to market, both keys in growing a local user base. Locating within China’s borders means a better user experience for Chinese customers.
There are some concerns, however, regarding data privacy, since China is known to be filtering and controlling content within its borders. However, Apple told the Wall Street Journal that all data it stores is encrypted, meaning China Telecom cannot see the content.
State-run China Central Television recently made claims that the iPhone poses a national-security concern due to its location-based services. The service must be turned on by the users themselves, however, and Apple says it does not keep track of locations.
Other companies, such as Microsoft, have chosen to go the third-party route to build a local presence in the country. Microsoft launched Azure in China via local partner 21Vianet Group. Chinese investigators raided Microsoft offices last month as part of an anti-monopoly investigation.
Amazon Web Services partnered with Chinese company ChinaNetCenter for data center space for the China region of its cloud services and signed with local company Sinnet as the service provider.
Google, however, chose to build out its own data centers to serve the region, which meant it could not build in mainland China. The company has built data centers in Hong Kong, Taiwan and Singapore.
A statement from a China Telecom business unit said Apple tested and evaluated its service for 15 months before choosing the company as its first and only cloud provider in the country. Data Center Knowledge reached out to Apple for comment, and we will update the post if we hear back.
China is a massive growth market for cloud services, and U.S. providers are still at early stages of establishing themselves there, with local companies landing most of the business.
Other cloud service providers with locations in China include managed services and cloud provider Datapipe, which opened operations in Hong Kong in 2007. Another provider that was quick to enter the market was cloud storage provider Carbonite, which opened a data center in Beijing back in 2008. U.S. giant IBM has been doing business in China for years, which includes building data centers for cloud services. | | 10:34p |
LeaseWeb Launches CloudStack-based Private Cloud Service in Germany 
This article originally appeared at The WHIR
LeaseWeb is evolving its cloud service to keep pace with international data-residency concerns, launching LeaseWeb Private Cloud in Germany. The solution is a flat-fee, plug-and-play platform based on CloudStack and is targeted at systems administrators, application developers, MSPs, and other savvy cloud users.
The new solution features dedicated resources reserved for individual clients. This enables users to create and manage multiple instances with dedicated cores, RAM, storage, virtual networks, firewalls, and load balancers to tailor their virtual infrastructure to their requirements.
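Under the hood, CloudStack exposes an HTTP API in which every request is signed with HMAC-SHA1 over the sorted, lowercased query string. The sketch below shows the shape of a signed deployVirtualMachine call; the endpoint, keys, and offering/template/zone IDs are all placeholders, not LeaseWeb's actual values.

```python
# Sketch of a signed CloudStack API call. Endpoint, keys, and UUIDs
# below are placeholders; a real deployment supplies its own values.
import base64
import hashlib
import hmac
import urllib.parse
import urllib.request

API_URL = "https://cloud.example.com/client/api"   # hypothetical endpoint
API_KEY = "your-api-key"
SECRET_KEY = "your-secret-key"

def signed_request(command, **params):
    params.update({"command": command, "apikey": API_KEY, "response": "json"})
    # CloudStack signs the alphabetically sorted, URL-encoded query
    # string, lowercased, with HMAC-SHA1 under the account's secret key.
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(),
                      hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    with urllib.request.urlopen(f"{API_URL}?{query}&signature={signature}") as resp:
        return resp.read().decode()

# Deploy an instance with a chosen compute offering, template, and zone
# (placeholder IDs); this is the call a control panel makes under the hood.
print(signed_request("deployVirtualMachine",
                     serviceofferingid="placeholder-offering-id",
                     templateid="placeholder-template-id",
                     zoneid="placeholder-zone-id"))
```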
The platform is connected to LeaseWeb’s network, which includes 52 points of presence, 33 Internet exchanges, and 4.0 Tbps of bandwidth capacity.
“Germany is a key market for the LeaseWeb brand, and introducing LeaseWeb Private Cloud in this region is an important step in making innovative cloud infrastructure solutions available around the globe,” said Herke Plantenga, managing director of LeaseWeb Germany. “The platform will provide customers high performance, high flexibility, and cost-efficiency, while fully meeting German data residency requirements. Together with the comprehensive set of features, this offering will allow German customers to establish fully customized private cloud solutions at public cloud price levels.”
LeaseWeb Private Cloud Germany is similar to the company’s private cloud platform launched in the US in June 2014 and the platform launched previously in LeaseWeb’s native Netherlands. Each platform is independent of the others, to meet the increasingly strict compliance and data-integrity requirements being considered in the EU. The data center for the German cloud is located in Frankfurt.
Local storage is a customer relations issue as well as a regulatory one in Germany, where alternatives to the industry’s American giants have received a boost since NSA surveillance programs were revealed to the world.
LeaseWeb opened up its CDN to resellers earlier this month, and also opened a data center in Singapore in July.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/leaseweb-launches-cloudstack-based-private-cloud-service-germany |