Data Center Knowledge | News and analysis for the data center industry
Friday, June 5th, 2015
12:00p
Project Seeks to Combine Sustainable Fish Farm and Data Center
An organization called the Foundry Project has a plan for an 8-acre brownfield in Cleveland where data center waste heat would be reused to warm an aquaculture farm. The project is still in its early stages, but ideally it would involve a bunker-style data center piping exhaust heat to warm water for a sustainable sea bass fish farm onsite.
There have been a few projects where data center waste heat has been recycled to heat nearby condos or offices. The Foundry Project is similar in intention but different in approach, finding a symbiotic relationship between heat-generating servers and sea bass.
This is a rare example of a project that attempts to combine a data center with a completely unrelated facility in a way that is mutually beneficial. Because a data center is a massive power and water consumer and a huge source of excess heat, people are often compelled to look for creative ways to use those aspects of mission-critical facilities. Another example is a project in California’s drought-stricken Monterey County, where a group of entrepreneurs wants to combine a data center with a water desalination plant.
The first initiative is the aquaculture facility, a fish farm that will produce 500,000 pounds a year of Mediterranean sea bass. A tech incubator is also planned for the site.
However, key to the project is a 20,000-40,000 square foot data center built underground. The data center would be occupied by a service provider or another customer looking for a sustainable, unique facility, not the fish farm itself. Details about the build aren’t finalized, with Foundry currently looking for the right data center developer.
A rendering of the proposed Foundry Project, which seeks to combine a fish farm and a data center. The data center would be the two-story building; Foundry also has plans for a tech incubator in the tall building. (Image: Foundry Project)
“We’ve found components that are compatible and want to turn weakness into strengths,” said J. Duncan Shorey, an environmental attorney, consultant, and geothermal expert who’s involved in the project. “The big home run on the data center is the recovery of the waste heat and repurposing that heat.”
There’s a sustainability story and an economic development story in the Foundry Project’s vision. The site is in an urban zone in a neighborhood that’s in need of economic stimulation. The right infrastructure for a data center is in place: Foundry said there’s ample power coming in, and the project’s leaders have been assured the location will be able to tap the 100-Gigabit-per-second fiber network Cleveland is in the process of building.
Shorey said the redevelopment would employ an extremely simple system that would not only help lower the carbon footprint but also simplify data center design by eliminating the need for chiller towers. Given that water could be a concern for a data center, the heat recycling process would occur in a separate structure from the data center, said Shorey.
The process is simple: take hot air, run it through a water-source heat pump, extract the heat, and pipe it into the fish tanks. Depending on the season, the energy comes from either chilled or hot water and is used to condition the space. In the summer, hot water would be pumped into a geothermal well; in the winter, it would be pumped into the agriculture center or offices.
Geothermal wells have a huge capacity for absorbing energy. All winter long, that stored energy can be used as a source of heat.
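The seasonal heat-routing logic described above can be summarized in a short sketch. This is a minimal illustration only, not part of the Foundry Project's actual design; the thermal figure, season labels, and destinations below are assumptions made for demonstration.

```python
# Minimal sketch of the seasonal heat-routing logic described above.
# The thermal figure, season labels, and destinations are illustrative
# assumptions, not values from the Foundry Project.

from dataclasses import dataclass


@dataclass
class HeatLoad:
    source: str        # e.g. "data center exhaust"
    kilowatts: float   # thermal energy recovered by the water-source heat pump


def route_heat(load: HeatLoad, season: str) -> str:
    """Route recovered heat the way the article describes: summer surplus is
    banked in a geothermal well, winter heat warms the aquaculture center or
    offices."""
    if season == "summer":
        return f"{load.kilowatts:.0f} kW from {load.source} -> geothermal well (storage)"
    if season == "winter":
        return f"{load.kilowatts:.0f} kW from {load.source} -> fish tanks and offices"
    # Shoulder seasons: split between storage and direct use (assumption).
    return f"{load.kilowatts:.0f} kW from {load.source} -> mixed storage and direct use"


if __name__ == "__main__":
    exhaust = HeatLoad(source="data center exhaust", kilowatts=500.0)  # hypothetical load
    print(route_heat(exhaust, "summer"))
    print(route_heat(exhaust, "winter"))
```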
The project is currently looking for the right data center developer, said Michael Dealoia. Dealoia has done economic development work with Cleveland, acting as the city’s tech czar of sorts. His background is in data centers and networks, including roles with Expedient. He is also a co-founder of local service provider BlueBridge Networks.
“It’s a high-visibility opportunity,” said Dealoia regarding the project. Foundry has spoken with Cleveland Economic Development and other local officials who are very supportive of the project, as it could be a good generator of jobs and provide economic stimulation to an area in need.
It is an economically challenged area, but there’s also an opportunity to build something impactful, said Dealoia, adding that the city would welcome the data center with open arms. The area's economic condition also means that land is inexpensive.
There’s a significant amount of money at the city, county, and state levels to help get the project rolling. Ohio has attractive data center incentives, and local government officials all want to support economic stimulation.
3:00p
DE-CIX Outlines Global Expansion Plans
After establishing itself as a dominant provider of internet exchange services in Germany, DE-CIX will spend most of the coming year expanding its footprint into the Middle East, Africa, and North America.
Celebrating its 20th anniversary, DE-CIX is investing in building out data center facilities in Istanbul, Palermo, and Marseilles, says Frank Orlowski, chief marketing officer for DE-CIX.
“The Istanbul facilities will initially focus on the local Turkish market, but in time we think it will serve international traffic as well,” says Orlowski. “The Palermo and Marseilles facilities will connect our network to Africa.”
DE-CIX will also probably extend its presence in North America beyond the facilities it currently has in New York.
While the vast majority of the data centers that DE-CIX operates are in Germany, the company has also built out some data center capacity in the United Arab Emirates.
Meanwhile, back in Frankfurt, where DE-CIX claims to manage the world’s largest internet exchange, the company this week revealed that it has recorded an all-time peak throughput of 4 Terabits per second. In addition, the company noted that in the first quarter of 2015, customers ordered the same number of 100 Gigabit Ethernet ports as they did in all of 2014.
As a result, DE-CIX reports that it is planning for an increase in IP traffic levels of 20 percent per year for the foreseeable future.
Other strategic initiatives, says Orlowski, include rolling out software development kits (SDKs) that would enable developers to invoke application programming interfaces (APIs), allowing organizations to programmatically self-service their own networking needs within a set of defined parameters. Based on API management software that DE-CIX developed internally for its own needs, Orlowski says, DE-CIX will begin testing those services in the coming year.
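As a rough illustration of what programmatic self-service might look like, here is a minimal sketch. DE-CIX had not published its SDK or API at the time, so the endpoint, field names, and parameter values below are hypothetical assumptions, not the company's actual interface.

```python
# Hypothetical sketch of self-service provisioning through an exchange API.
# The endpoint, token, fields, and values are illustrative assumptions only;
# they do not reflect DE-CIX's unpublished SDK/API.

import json
import urllib.request

API_BASE = "https://api.example-ix.net/v1"  # placeholder, not a real exchange endpoint
API_TOKEN = "YOUR_TOKEN"                    # placeholder credential


def order_port(speed_gbps: int, location: str) -> dict:
    """Ask the exchange to provision an access port within defined parameters."""
    payload = json.dumps({"speed_gbps": speed_gbps, "location": location}).encode()
    request = urllib.request.Request(
        f"{API_BASE}/ports",
        data=payload,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


if __name__ == "__main__":
    # Example: order a 100 GbE port in Frankfurt, mirroring the demand noted above.
    print(order_port(speed_gbps=100, location="FRA"))
```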
Most of the increased demand for internet exchange traffic, says Orlowski, is being generated by high-definition applications, content delivery networks such as Akamai, and cloud applications that are particularly sensitive to network latency.
The challenge most internal IT organizations face, he says, is that their own internal networks easily become congested. Once that begins to occur, it’s only a matter of time before those organizations look to move those applications off their internal networks.
Naturally, pricing competition across the global internet exchange market these days is especially fierce. That means the ability to operate at scale is now a life-or-death matter for all concerned.
4:53p
Friday Funny: Pick the Best Caption for “Twitter”
Kip seems to be getting a little carried away with social media…
Here’s how it works: Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon and we challenge our readers to submit a humorous and clever caption that fits the comedic situation. Then we ask our readers to vote for the best submission and the winner receives a signed print of the cartoon.
Congratulations to Rob Golding, whose caption for the “iRobot” edition of Kip and Gary won the last contest with: “And amazingly, it never needs emptying – all the dust goes straight up to the cloud.”
Several great submissions came in for last week’s cartoon: “Air” – now all we need is a winner. Help us out by submitting your vote below!
For previous cartoons on DCK, see our Humor Channel. And for more of Diane’s work, visit Kip and Gary’s website!
5:49p
UK Government Taps IBM OpenPower, Watson for $475M Big Data Initiative
IBM, in partnership with the UK government, is investing more than $475 million to boost Big Data and cognitive computing research in the UK through the Hartree Centre, an organization formed by the UK government’s Science and Technology Facilities Council.
IBM put together a package valued at $300 million consisting of OpenPower and Watson technology and onsite expertise consisting of at least 24 IBM researchers. The UK government earlier made a $175 million commitment to expand the Hartree Centre over the next five years.
The partnership aims to make it easier for non-computer experts and local businesses to derive insights from data in new ways and from new angles. IBM’s Watson cognitive computing system works in a more intuitive way than traditional computing, while OpenPower is architected specifically for Big Data workloads. Together, the partners will develop a systematic way to analyze data sets.
The work done at Hartree could transform how local businesses leverage data, changing the way they operate and engage with customers, suppliers, and employees.
The partnership is another big government win for OpenPower, the open development community around IBM’s Power processor architecture backed by Google, Canonical, NVIDIA, Mellanox, and well over 100 other organizations worldwide.
The US Department of Energy earlier selected OpenPower technology for the next generation of supercomputers at the Oak Ridge and Lawrence Livermore National Labs.
Lenovo also has a partnership with Hartree Centre and is developing an ARM server powered by Cavium’s 64-bit Thunder System-on-Chip. The project is part of a computing energy efficiency research effort funded by STFC.
The fruits of the partnership will be the commercialization of intellectual property assets produced with STFC, which runs Hartree.
STFC and IBM will engage in collaborative projects with third parties as well as UK universities to develop advanced software solutions to address real-world challenges in academia, industry, and government.
“Data-intensive techniques are transforming every discipline of science, and connecting these capabilities to the needs of industry has the potential to revolutionize every business sector,” said John Womersley, professor and chief executive of STFC.
David Stokes, chief executive for IBM in the UK and Ireland, called this the dawn of a new era of cognitive computing.
The partnership was unveiled earlier this week by Universities and Science Minister Jo Johnson. The Hartree Centre is already helping businesses like Unilever and GlaxoSmithKline use high-performance computing to improve the stability of home products such as fabric softeners and to pinpoint links between genes and diseases.
6:09p
Weekly DCIM News Roundup: June 5
Device42 adds new features for managing network software in release 7 of its DCIM software; a new report shows $30 billion worth of idle servers are in data centers, thanks in part to a lack of capable asset management tools; Cormant adds the ability to manage multifiber push-on connectors to its DCIM offering; and research from the DCD Intelligence census says that things are finally starting to change for DCIM and those planning to deploy it.
- Device42 adds capabilities in version 7 release of software. DCIM vendor Device42 announced new capabilities for managing physical and virtual instances of network software installed on enterprise IT infrastructure. The new version 7 release of its software enables organizations to identify, manage, and support a comprehensive, accurate profile of the software deployed throughout their network.
- New report says $30 billion worth of idle servers sit in data centers. Anthesis Group announced the results of a report conducted with Jonathan Koomey, Research Fellow at Stanford, using data from TSO Logic. Findings from the report showed that there are about 10 million comatose servers worldwide, which translates into at least $30 billion in data center capital sitting idle.
- Cormant-CS DCIM solves MPO management headaches. DCIM vendor Cormant announced that its DCIM software now provides extensive documentation support for managing complex MPO (Multifiber Push-On) connectors, including MPO hydra cables. This new feature is offered at no extra charge, and allows operations staff to quickly designate an array of sub-channels in high density fiber environments.
- DCIM begins to blossom. Research from the DCD Intelligence Census reveals that things are changing for DCIM and that the gap between the number of companies thinking about it and the number going on to deploy it is narrowing.
6:29p
Canadian Telco Ormuco Launches HP Helion-Based Cloud
Montreal-based Ormuco has unveiled a hybrid cloud offering called Connected Cloud, based on HP Helion OpenStack infrastructure. The offering aims to make it easy for mid-size enterprises to migrate between cloud environments and to help them understand the costs associated with different parts of a project.
Connected Cloud is a single platform that provides a web portal through which customers can provision a hybrid cloud environment that transitions from private to public easily and on a pay-as-you-go basis. Ormuco focuses on providing mid-size enterprises with workload portability, data protection, and security.
At the center of the hybrid cloud offering is a longstanding relationship with HP. HP has been building its Helion Network, a worldwide network of service providers that can attach to Helion and deliver cloud services globally. Ormuco is one of the charter members, alongside British Telecom, Telefonica, Intel, and numerous big service providers.
“Ormuco requires extensive geographic reach and the ability to meet customers’ in-country or cross-border cloud requirements,” said Steve Dietch, vice president for HP Helion. “With HP Helion OpenStack and the HP Helion Network, Ormuco’s Connected Cloud customers will have access to hybrid cloud services from a global, open ecosystem of service providers.”
One of the advantages a telecom has in cloud is its existing relationships with enterprises and an understanding of their needs. A disadvantage for many telecoms in cloud is lack of agility, having to evolve the organizational thinking from legacy telco operations. Ormuco is a different kind of telecom in that its roots aren’t in the traditionally defined telco world.
For one, its CEO Orlando Bayter is in his early 30s. Bayter was a prodigy who started his first tech business when he was 11. He started Ormuco seven years ago as a managed services company.
Several years ago, Bayter had a revelation that a great sea change was occurring in the space: enterprises that currently have infrastructure on-premises will increasingly watch those costs become prohibitive.
Data is growing so fast that enterprises will not be able to increase their processing power quickly enough to keep up. Several years ago, Ormuco began to work on a way for enterprises to use cloud in practice, combining cloud’s promise of agility while making it practical for enterprises with all of their needed functionality.
The forward-thinking telecom is also one of a few telecoms in Canada that holds a CRTC (Canadian Radio-television and Telecommunications Commission) license, putting it on equal footing with Verizon, Bell Canada, and Telus, said Ormuco COO Michael Malynowsky.
Ormuco has data centers in Montréal, Dallas, and Sunnyvale, California, in addition to several other locations. Through the Helion Network, customers can extend beyond this footprint globally.
The company also has data center expansion plans throughout 2015, with data centers planned for New York City and Seattle, and internationally in London and Frankfurt, to serve hubs near key technology and financial markets and expand its reach throughout Europe.
The company will also look to establish data centers in the Middle East and Asia Pacific for increased global coverage and scope.
Ormuco does particularly well with the gaming industry. One customer is Square Enix, a company best known for some of the biggest role-playing game franchises, including Final Fantasy.
Game development is resource-intensive, said Malynowsky, as it’s prohibitive to buy the thousands of servers needed for select, limited purposes like stress testing. Its work with the gaming vertical has proven its capabilities around extreme use cases such as testing and launching games played by millions.
Ormuco provides a real-time environment with whatever features are needed in its cloud, said Malynowsky. Once development is done, the customer can push into production, either in its own environment or with Ormuco managing and supporting it completely.
“It’s the flexibility of going between private and public cloud,” said Malynowsky. “It’s full flexibility of development and operations. It’s the full flexibility of the corporate enterprise to decide what critical elements they want to keep in cloud and to do so very easily.”
The billing platform helps large enterprises understand their exact costs for granular portions of projects, said Malynowsky. The company had several ISVs signed up and using the new platform ahead of launch.
7:05p
What University of Cambridge is Learning from Its DCIM Journey
University of Cambridge has a rich history and legacy. It’s the second-oldest university in the English-speaking world and fourth-oldest active university overall. It also has a legacy IT footprint spread across campus, including a diverse set of server rooms and closets serving individual departments without much unified data center management.
Cambridge University Data Center Manager Ian Tasker and his team are undertaking the project of a lifetime in updating the university’s infrastructure and have chosen Emerson’s Trellis data center infrastructure management software as the centerpiece for ongoing consolidation into a new data center.
“The initial decision to look at DCIM was a historic one,” said Tasker. “There’s a diverse set of server rooms that we look after – some are managed, some are not. One of the things we were keen to do with the new data center was put in tools for proper management of infrastructure.”
Cambridge’s infrastructure situation is not unique; universities around the world often keep server rooms just as departmentalized as the staff itself.
Through DCIM, Tasker hopes to move from the standard reactive approach to data center management to a proactive, centralized, and organized one. The university’s aim is to introduce more shared services, reduce its carbon footprint, and drive better intelligence into operations. All were factors that had the team turning to DCIM.
Implementation Taking Much Longer Than Expected
Tasker said that looking back, it surprised him how relatively easy it was to initially integrate everything into Trellis. However, he said, the one thing he didn’t anticipate was the amount of time it would take to complete the project. What he thought would be a six-to-12-month project is still 12 to 18 months away from completion, not counting the six months already spent.
“It’s a living system,” said Tasker. “It will evolve for the next however many years. Once we get it fully established, and we’re exploiting it to its full effect, we’re looking to expand it to more server rooms.”
Fighting Resistance to Change
Business organizations can be extremely siloed, and universities are no different. Departments act as individual entities. Because not only does the infrastructure need to change but also the way the people behind that infrastructure work, Tasker said, his team has had trouble getting everyone on board.
“The university had never had a proper bespoke data center, but rather machine rooms in office buildings – the concept of a purpose-built data center was a bit alien,” said Tasker.
However, now that Cambridge has been equipping the data center, many realize how beneficial the new approach to data center management will be over time.
While the consolidation has been slower than anticipated, different stakeholders are coming into the fold.
Recently, the high-performance computing service, which consists of two main supercomputers, came aboard. Tasker said they were able to relocate 60 racks of HPC in a short space of time, around six weeks. It was a big win for the new way of doing things.
The overall consolidation is taking the most time. “The rest of the university, we’re doing department by department,” said Tasker. “We are talking to each department as we bring them on board and looking to deploy more shared services across departments.”
While three server rooms are on deck, the university currently has more than 200. “If we consolidate, it will drive powerful information,” he said.
Picking Trellis
“It’s been a difficult journey in picking a new tool,” said Tasker. “I think Trellis was easy to pick up, but it does take a long time to set in across the university. Emerson has been absolutely brilliant during this process. They’re always on the end of the phone and they regularly come to the site.”
Tasker and his team did a lot of pre-work in evaluating different DCIM tools to eliminate those that weren’t the right fit. “Right from the start we had wide-ranging requirements,” he said. In total, they had about 35 pages of requirements just for the DCIM solution across a dozen functional areas.
“We wanted a fairly comprehensive view to provide data center monitoring entirely across UPS, generators, and systems to actual monitoring of individual devices,” said Tasker. “We have a lot of requirements around asset management and integration capabilities.”
The financials of DCIM took a backseat, said Tasker; energy efficiency and help in consolidating other machine rooms were the main drivers behind DCIM. The university aims to cut its carbon footprint by 20 percent through the new data center, for which it had to have tooling in place from the very start.
Many universities treat infrastructure as something they need to simply keep running, taking a duct-tape approach to fixing problems as they arise. One hand often doesn’t know what the other is doing, leading to inefficiency and a communication barrier. It takes a lot of courage to admit there’s a problem, let alone to do something about it. With Trellis, the university is laying the groundwork for the future and realizing along the way that DCIM is not a destination, but a journey.
Visit the Data Center Knowledge DCIM InfoCenter for more case studies of real-life DCIM deployments and a wealth of other information about DCIM solutions, suppliers, purchasing, and implementation guidance.
9:03p
ProfitBricks Launches Early Preview of Docker Hosting Platform
This article originally appeared at The WHIR
IaaS provider ProfitBricks launched an early preview of its new Docker hosting platform on Wednesday. With the platform, customers can build applications in the ProfitBricks cloud and access dedicated CPU cores and RAM, with autoscaling of the Docker hosts.
ProfitBricks said that early access customers will be able to use up to 2,500 CPU core hours as part of its Docker platform preview. The company is currently accepting early access sign-ups on its website.
In a statement, ProfitBricks claims to be the first provider to solve the noisy neighbor problem with its Docker hosting platform. The platform also offers SSH key download support and a full control panel designed to give DevOps teams an easy-to-use Docker hosting platform.
“When we first embarked on our Docker product architecture, we knew that it would not only need to meet the standards of existing Docker platforms, but also incorporate new, industry-leading features that addressed existing problems encountered with Docker,” ProfitBricks CEO and co-founder Achim Weiss said. “Docker has been taking the DevOps world by storm since its widespread introduction to the community in 2014, and our platform will combine its impressive features with our flexible and painless cloud computing infrastructure.”
ProfitBricks has maintained that simple and cost-effective cloud pricing combined with performance is its secret sauce. In March, the company announced a price/performance guarantee: any workload deployed on its cloud will cost less than the same workload running at the same performance level on Amazon, Google, or Microsoft.
Amazon, Google and Microsoft have all launched support for Docker through various services recently. In November, AWS launched a service designed to make it easy to deploy huge amounts of Docker containers on EC2.
Recently, ProfitBricks has focused on courting DevOps users with the launch of its DevOps Central and REST API, along with support for three multi-cloud libraries and a Python SDK.
This first ran at http://www.thewhir.com/web-hosting-news/profitbricks-launches-early-preview-of-docker-hosting-platform