Data Center Knowledge | News and analysis for the data center industry
Thursday, September 19th, 2013
1:00p
The Evolution of as-a-Service Data Center Technologies
Different types of “as a service” offerings are emerging in the cloud computing market.
The types of services that the modern data center delivers have evolved drastically over the past few years. Driven by the consumer and the end-user, new technologies are paving the way for how we actively consume information and incorporate data into our daily lives. The average user already utilizes two or more devices to connect to their workloads. These trends will only become more prominent as more users find their way into the modern cloud model.
In a recent article I explored the idea that the modern data center is becoming “The Data Center of Everything.” As the data center model continues to evolve, the way that end-users consume data center resources will evolve as well.
This is where the notion of “Everything-as-a-Service” comes to mind. Data center providers are actively trying to become your one-stop destination for all of your compute needs. It makes sense, too: the data center market has heated up as demand for data center services continues to increase.
So how have end-user requirements changed? What new types of services are emerging around the modern data center that translate directly into user consumption?
- Network-as-a-Service. As more users connect to the cloud, data centers will need to find better ways to deliver high-quality, low-latency network services. Already, we’re seeing NaaS become a key category of cloud computing, where specific delivery models define how users consume these services. For example, Bandwidth-on-Demand (BoD) can be considered a NaaS service model in which bandwidth dynamically adapts to the live requirements of traffic. Furthermore, it can be configured based on the number of connections, the nodes connected to the data center, and how traffic priority policies are applied. As more users connect to the data center for streaming, data sharing, and consuming compute cycles, delivering high-quality network services will be an absolute necessity.
- Data-as-a-Service. With more users comes a lot more data. In this service model, data is delivered on demand in a form that keeps the underlying information clean and agile. The idea is to offer data to various systems, different types of applications, and different user groups, and to make that data available whether the user is inside or outside the organization. Furthermore, policies can be wrapped around the data to further enhance QoS, integrity, and agility. Already, big cloud vendors are utilizing a variety of DaaS models to enhance the data delivery process. Providers like Microsoft Azure deliver and store data via three different methods: queues, tables, and blobs (see the storage sketch after this list). The future of DaaS is bright. Organizations will want to further control and optimize both structured and unstructured data sets, with applications ranging from optimized data delivery to big data analytics.
- Backend-as-a-Service. This one is becoming very popular, very fast. The major influx of users coming in via mobile devices has created a boom in mobile application development. BaaS allows both web and mobile application platforms to link to backend cloud storage services, providing features such as push notifications to a variety of devices, complete user management, and integration with social networking platforms. By utilizing SDKs and APIs, BaaS can directly integrate various cloud services with both web and mobile applications (a REST sketch follows this list). Already, open BaaS platforms aim to support every major mobile platform, including iOS, Android, Windows, and BlackBerry, and to further enhance the mobile computing experience by integrating with cloud-ready hosting vendors like Azure, Rackspace, and EC2. Still curious? Take a look at what some BaaS providers have been doing. For example, DreamFactory provides a truly open-source software platform capable of integrating with any cloud or data center provider. Basically, they give you the back-end and you create the front-end app.
- Everything-as-a-Service. Just imagine using your favorite data center or cloud hosting provider as the source of everything you’ll ever need for the compute experience. As a subset of cloud computing, EaaS aims to provide the services associated with numerous core components: communication, infrastructure, data, various platforms, cloud APIs and more. SaaS is one example of EaaS: the idea that a piece of software can be delivered completely via the cloud, agnostic of the connecting device, with a consistent user experience across the board. Already, we’re seeing large data center providers and vendors embrace the EaaS trend. Organizations like Microsoft, HP and Google are all looking to become your one-stop shop for all cloud needs. Google, in fact, is a great example of EaaS: from a data center perspective, it delivers workloads, applications, data, IT services, streaming services, and much more. By integrating core cloud components into one service, cloud and data center providers are able to deliver all the services needed for a complete cloud compute experience.
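For the DaaS model described above, here is a minimal sketch in Python of delivering a piece of data on demand through Azure blob storage, one of the three Azure delivery methods mentioned (queues, tables, and blobs). It assumes the current azure-storage-blob SDK; the connection string and container name are placeholders.

```python
# Minimal DaaS sketch: store and fetch a named object in Azure blob storage.
# Assumes the azure-storage-blob SDK (pip install azure-storage-blob); the
# connection string and container name are placeholders for illustration.
from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."
CONTAINER = "daas-demo"  # assumed to already exist in the storage account

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container = service.get_container_client(CONTAINER)

# Any authorized client, inside or outside the organization, can read or
# write the same named object on demand.
container.upload_blob(name="customer-report.json",
                      data=b'{"status": "clean"}', overwrite=True)
payload = container.download_blob("customer-report.json").readall()
print(payload)
```

The BaaS model is typically consumed as a hosted REST API behind a mobile or web front end. The sketch below uses Python’s requests library against an entirely hypothetical endpoint and API key; the URL, headers, and record shape are illustrative and do not reflect any particular provider’s API.

```python
# BaaS sketch: the provider hosts storage, user management and push
# notifications behind a REST API; the front-end app only makes HTTP calls.
# The endpoint, API key and payload below are hypothetical placeholders.
import requests

BASE_URL = "https://api.example-baas.com/v1"   # hypothetical BaaS endpoint
HEADERS = {"X-Api-Key": "YOUR_APP_KEY"}        # hypothetical auth header

# Create a record in a backend "todos" collection from a mobile/web client.
resp = requests.post(f"{BASE_URL}/collections/todos",
                     json={"title": "Ship the app", "done": False},
                     headers=HEADERS, timeout=10)
resp.raise_for_status()

# Read the collection back; the same API could also trigger a push notification.
todos = requests.get(f"{BASE_URL}/collections/todos",
                     headers=HEADERS, timeout=10).json()
print(todos)
```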
The consumption of cloud and data center resources is the primary reason that so many new types of models have emerged. Consumers are constantly looking for ways to be better connected and have their data delivered to them as quickly as possible. This means that new types of data services, cloud applications, and delivery models are going to have to be developed within the data center.
As more providers and hosting companies try to lock in more users – they will actively try to offer more bang for the user’s buck. In the past, organizations and technology would mainly dictate the direction of the data center. Now, the user has a lot more say in the process. As new services emerge – the hardware/software infrastructure for the data center will need to adapt to the ever-expanding requirements of the modern user and business entity.
1:15p
Servers Will Lead the Data Center Evolution
Young-Sae Song is Corporate Vice President of Product Marketing for Data Center Server Solutions at AMD. In this role, he leads the outbound marketing, branding, and demand generation functions for AMD’s push into next-generation fabric-based computing systems.
Young-Sae Song, AMD
The last decade has seen the data center focus on a number of key technologies in order to improve efficiency. Since 2000, virtualization has been at the heart of increasing server utilization, allowing businesses to consolidate hardware and reap significant cuts in operating expenses. This was followed by a holistic focus on data center design, from the layout of suites to the efficiencies of HVAC and electricity supply. However, data center design will now turn its focus to the server in order to take the next step in efficiency.
Data centers have always been evolving, from the placement of cables [1] to the way servers are positioned in a rack to provide hot and cold aisles [2]. In the bid to increase efficiency within the data center, the server has largely been overlooked in favor of low-hanging fruit, such as cooling infrastructure and, on a macro scale, data center location.
Virtualization tapped unused resources and its popularity grew as processor performance gains mitigated the overhead of running a hypervisor. Servers are set to be the focus of the data center as virtualization expands from general compute to networking and storage, demanding more from hardware, alongside a need for increased density and improved manageability.
While virtualization has helped increase hardware utilization, two-socket servers remain the most commonly deployed server hardware in the data center, yet they offer enterprises little flexibility when it comes to swapping out parts to best suit emerging workloads, or to procuring components from multiple vendors without interoperability concerns.
The Facebook-initiated Open Compute Project will give power back to enterprises, allowing them to work around the familiar two-socket server platform with silicon they are accustomed to. The advantage of open source hardware such as AMD’s Open Server 3.0 [3] doesn’t end at familiarity; it also offers enterprises the ability to shop for a motherboard that meets their hardware and budget requirements without having to purchase a new chassis.
Empowering enterprises to make decisions about server hardware, beyond simply buying a badge, banishes the notion of ripping and replacing infrastructure whenever new workloads or use cases appear. It allows enterprises to focus capital expenditure on components that directly result in revenue growth.
The Open Compute Project is about much more than giving enterprises access to open source hardware, because investing in Open Compute servers also provides access to open source management tools. Gone are the days of system administrators having to learn different management tools for different server vendors. Instead, Open Compute offers a single specification for hardware management [4], greatly reducing complexity and enabling system administrators to deal with management tasks quickly and effectively.
Changing the Paradigm of Equipment
While Open Compute is focused on modernizing the ownership of traditional two-socket servers, high-density servers such as the SeaMicro SM15000 [5] offer enterprises the ability to tackle new workloads in a cost-effective manner through a step change in the number of sockets and cores that can be placed in a single rack. The upshot is that more compute can be squeezed into a single rack, thanks to power-efficient processors and the ability for servers within a single chassis to serve completely different workloads.
Going High-Density
Dense servers will be expected to do more than just undertake menial data center workloads such as serving up web page front-ends. Network virtualization is set to be the next big consolidation in the data center, but this can only occur if the server has the performance and cost effectiveness to make it a viable proposition. Key to meeting performance and economic goals is the interconnect between the processors, and it is this technology that will differentiate servers, with bandwidth and protocol design playing pivotal roles in overall system performance.
The ability to have thousands of cores in a rack increases the need for tighter integration between bare metal and management software. In the same way that Open Compute is bringing together server hardware and software, dense servers will raise the bar when it comes to offering system administrators greater control over the provisioning and management of bare metal.
The Power Will Be in The Customers’ Hands
We have seen significant efficiency improvements in the data center over the last decade, but it is the workhorse of the data center – the server – that will evolve in the coming years. Servers will stop being a prescribed piece of hardware that enterprises have to work around; instead, customers will have greater power to demand that the hardware meet their needs.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Endnotes
[1] How overhead cabling saves energy in data centers – Victor Avelar, APC
[2] Hot and Cold Aisle Layout – ENERGY STAR
[3] AMD Open 3.0 modular server specification – AMD and Open Compute Project
[4] Hardware management specification – Open Compute Project
[5] SeaMicro SM15000 Fabric Compute Systems
2:00p
Cooling Capacity Factor Reveals Stranded Capacity
The digitization of the business world has placed extra requirements around data center platforms. New IT solutions have created environments capable of more user density, more resiliency and even greater amounts of consolidation. Remember, although these systems do increase efficiency, they also modify the dynamics of data center cooling capacities.
Consider this: the average computer room today has cooling capacity that is nearly four times the IT heat load. Using data from the 45 sites it reviewed, this white paper by Upsite Technologies shows how you can calculate, benchmark, interpret, and benefit from a simple and practical metric called the Cooling Capacity Factor (CCF).
Across the 45 sites that Upsite reviewed, the average running cooling capacity was an astonishing 3.9 times (390 percent) the IT heat load. In one case, Upsite observed 30 times (3,000 percent) the load. Yes, some sites really are this inefficient. When running cooling capacity is this excessively over-provisioned, potentially large operating cost reductions are possible by turning off cooling units and/or reducing fan speeds on units with directly variable fans or variable frequency drives (VFDs).
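As a rough illustration of the metric, the sketch below computes a cooling capacity factor as the ratio of running cooling capacity to IT heat load. The formula as written here and the sample numbers are assumptions for illustration, not taken from the Upsite white paper.

```python
# Illustrative Cooling Capacity Factor (CCF) calculation: the ratio of
# running cooling capacity to the IT heat load it serves. The formula as
# written here and the sample numbers are illustrative assumptions; see the
# Upsite white paper for the exact methodology.

def cooling_capacity_factor(running_cooling_kw: float, it_heat_load_kw: float) -> float:
    """CCF = total running cooling capacity / IT heat load."""
    return running_cooling_kw / it_heat_load_kw

# Hypothetical computer room: 1,560 kW of running cooling for a 400 kW IT load.
ccf = cooling_capacity_factor(running_cooling_kw=1560, it_heat_load_kw=400)
print(f"CCF = {ccf:.1f}x ({ccf:.0%} of IT heat load)")  # 3.9x, matching the average Upsite observed
```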
Download this white paper today to learn about the numerous benefits CCF can provide for your data center. This includes:
- Improved cooling unit efficiency through increased return-air temperatures
- Improved IT equipment reliability by eliminating hot and cold spots
- Reduced operating costs by improved cooling effectiveness and efficiency
- Increased room cooling capacity from released stranded capacity
- Enabled business growth by increasing capacity to cool additional IT equipment
- Deferred capital expenditure for additional cooling infrastructure, or construction of a new data center
- Improved PUE from reduction of cooling load
- Reduced carbon footprint by reducing utility usage
Proper data center care not only increases the life of the equipment, it also allows for greater cost savings and better infrastructure efficiency. Calculating the CCF is the quickest and easiest way to determine cooling infrastructure utilization and the potential gains. Furthermore, the white paper outlines other core metrics, including PUE and AFM, and walks through how to calculate the CCF. As your data center becomes more important, it will be vital to implement as many core efficiencies as possible to keep your environment running optimally.
2:01p
German Internet Exchange DE-CIX Enters NY Market
The European invasion of open Internet exchanges continues, and the landscape of Internet exchanges is changing drastically. DE-CIX (Deutsche Commercial Internet Exchange) is opening a new exchange in New York City, following the recent, similar move by the London Internet Exchange (LINX) to open a site in northern Virginia. Both efforts are bolstered by the Open-IX movement, which continues to gain steam stateside.
DE-CIX is a major Internet exchange operator headquartered in Frankfurt, Germany, where its platform handles more than 2.5 terabits per second of peak traffic from 500 participants. DE-CIX successfully operates and manages Internet exchanges around the globe, including UAE-IX in Dubai, United Arab Emirates, which recently announced record growth.
In the past, the open European model for Internet exchanges has struggled to establish itself in America. In the current U.S. model, data centers primarily serve their own on-campus tenants or connect customers with passive Private Interconnects (PI). This has resulted in a concentration of activity in a handful of providers, most notably Equinix. The recent rise of the Open-IX movement has reinvigorated efforts to bring the open European approach to exchanges to the U.S.
The plan is for DE-CIX to establish Internet exchanges in major U.S. metro regions. New York City is the first to receive a DE-CIX distributed, carrier- and data center-neutral Internet exchange. Built on the same successful open model as DE-CIX in Frankfurt, DE-CIX North America will engineer the new U.S. Internet exchanges for high-capacity, high-performance Layer 2 interconnection, known as peering.
Following the open Internet exchange approach, DE-CIX will continue to support the Open-IX initiative, which is a new industry group that supports, among other topics, the creation of neutral and cost-efficient Internet exchanges that are backed by the Internet community at large.
DE-CIX will deploy a large-scale Ethernet switching fabric, combined with an all-fiber metro optical backbone that supports traffic volumes of multiple terabits per second across multiple data centers in the selected metropolitan areas. This will enable all types of Internet providers to exchange traffic across the neutral, distributed infrastructure.
New York City Ideal Starting Point
The City of New York has already taken action to address digital infrastructure development with programs such as the ConnectNYC Fiber Access program. These efforts are an essential element of making the city a better place for both companies and individuals.
“We are in the midst of transforming New York City into the world’s leading digital city,” says Rachel Haot, New York City’s Chief Digital Officer. “Our work is about ensuring that digital technology touches all New Yorkers to improve their lives. Implementing this neutral Internet exchange in our City is an important step forward in improving the fundamental technology infrastructure that fuels our New York City and enables it to compete globally with other digital hubs.”
DE-CIX is tackling New York first, while LINX has chosen Northern Virginia as its initial entry point, providing European-style exchange plays in the two biggest data center hubs in the U.S. DE-CIX says it will follow up with exchanges in Silicon Valley and Los Angeles.
“New York is a world-class city with an active interest in evolving its digital infrastructure to the highest levels,” said Harald Summa, CEO for DE-CIX. “The city is the top choice for Internet companies right now. We therefore selected New York City as the headquarters of our new U.S. operations and also decided to make New York the home of our first Internet exchange deployment in North America.”
“NYC is home to a huge concentration of ISPs, including broadband providers, content companies and a thriving and growing tech community,” said Summa. “Establishing and developing the new exchange will go far in elevating New York’s reputation as one of the world’s great technology hubs, in addition to its other first-rate attributes. DE-CIX’s new exchange in New York will be operational soon, while we are also planning for deployments in Silicon Valley and Los Angeles. DE-CIX will help these selected cities to grow their role as Internet hubs and finally achieve the major role they deserve in the global Internet ecosystem.”
So far, the company isn’t disclosing specific sites or providers. “Our typical approach is to follow our customers,” said Frank P. Orlowski, Head of Marketing at DE-CIX. “This means we build out to facilities where we find a lot of potential clients. For NYC, we plan to add some other interesting sites, too. There is a window of opportunity for new carrier hotels such as 325 Hudson and we are ready to take a look at some of those new facilities, too.”
2:30p
NetApp Outlines Strategy For Data Management Across Clouds
With a vision for an enterprise data management solution, NetApp (NTAP) outlined its strategy to use its clustered Data ONTAP operating system to provide seamless cloud management across any blend of private and public cloud resources. The vision is founded on a universal data platform, the introduction of technology for cloud portability, and a continued commitment to delivering solutions that support customer choice.
“Regardless of the ultimate computing destination, the CIO will maintain ownership of the organization’s data,” said Jay Kidd, Senior Vice President and Chief Technology Officer, NetApp. “The introduction of new multicloud architectures makes data governance more complex because data is distributed, and not under direct control. Our vision is to create an enterprise data management solution, with the clustered Data ONTAP operating system at its core, which will span the customers’ data storage landscape, irrespective of data type or location.”
Expanding on 20 years of ONTAP innovation, NetApp will further integrate its software into existing and forthcoming private cloud, large-scale public cloud, and hyperscale cloud service provider solutions to help organizations optimize IT delivery and harness the speed, flexibility, and economics of the public cloud. It will also utilize its universal data container to make it easier to move data and workloads across instances of Data ONTAP in a multicloud environment, boosting IT efficiency and enabling new and innovative hybrid cloud architectures.
Building on its ecosystem of cloud provider, application, and technology partner options, NetApp and its global partners will announce new solutions that accelerate the transition to hybrid cloud architectures. These announcements will include deeper integrations for business applications in the cloud, new converged infrastructure reference architectures, and secure cloud backup and disaster recovery solutions. NetApp supports all major cloud operating environments, virtualization frameworks, application deployment models, and cloud management solutions, and in the coming months will announce integrations with flagship providers in each of these key areas, including new contributions to OpenStack and CloudStack, and new partnerships with hyperscale cloud service providers.
In this video, NetApp’s Phil Brotherton talks about the company’s vision for the cloud and how NetApp technology benefits its customers and partners.
3:00p
New Brocade Offerings Target Cloud Migrations
Brocade (BRCD) has rolled out new features for its VCS fabric and VDX switch portfolio, seeking to offer an end-to-end multitenancy blueprint for migration to the cloud. The additions include VCS fabric features that provide native multitenancy, storage-aware networking and 100 Gigabit Ethernet (GbE) performance. Last May the company rolled out its software-defined networking strategy, with software innovations coming from its acquisition of Vyatta.
“As cloud computing matures and is increasingly adopted in production environments, new requirements are emerging and deficiencies in legacy architectures are becoming more pronounced,” said Jason Nolet, vice president of data center switching and routing at Brocade. “Our continued innovation in Brocade VCS Fabric technology addresses the most challenging data center requirements, including network multitenancy, network intelligence for exploding storage growth and the emerging adoption of 100 GbE for ever-increasing bandwidth consumption.”
The VCS Virtual Fabric now automatically recognizes and prioritizes storage traffic. A new VDX 6740 family of 10/40 GbE switches with VCS Virtual Fabric support offers 40 GbE to 160 GbE trunks, with 32 Flex Ports (Fibre Channel/Ethernet/FCoE), providing flexibility and investment protection. The switches also have ASIC support for OpenFlow 1.3. Brocade also announced a new 100 GbE line card for the VDX 8770 modular chassis.
“The ability to securely isolate tenants in a shared infrastructure environment is paramount to today’s cloud-service providers and to enterprises adopting private cloud,” said Brad Casemore, Research Director, Data Center Networks at IDC. “Traditional approaches to network segmentation have been around for years, and virtualization and cloud computing have exposed their inherent limitations, especially in relation to flexibility and scalability. Technologies such as Brocade’s VCS Virtual Fabric promise to address this challenge by delivering multitenancy at scale, enabling enterprises to maximize the number of tenants they can support by leveraging segmentation constructs with which network administrators are familiar, thus minimizing both the learning curve and operational overhead.”
New Vyatta 5600 vRouter
Brocade also announced the new Vyatta 5600 vRouter as an addition to its NFV (Network Functions Virtualization) product portfolio. The new router leverages Brocade vPlane technology, which is capable of 10 Gbps of throughput per x86 core. For carriers, cloud providers and enterprises alike, it directly addresses use cases such as BGP routing, ACL offload and virtual BGP route reflection. The Vyatta 5600 delivers extremely cost-efficient networking power by leveraging software instead of purpose-built hardware.
“In proof-of-concept tests with large carriers, we are seeing a CapEx savings potential of 90 percent or more when replacing purpose-built hardware with a high-performance x86 server and the Brocade Vyatta vRouter,” said Kelly Herrell, vice president and general manager, Software Networking Business at Brocade. “This system architecture shift is central to the larger pursuit of NFV architectures where CapEx and OpEx advantages go hand-in-hand with radical improvements in network service agility and time-to-service.”
3:41p
Vantage Completes Construction of its First Quincy Data Center
The exterior of the first Vantage data center in Quincy, Washington. (Photo: Vantage)
Vantage Data Centers has completed construction, commissioning and delivery of its first build-to-suit data center in Quincy, Washington, the company said today. Designed in conjunction with an undisclosed enterprise customer, the first facility on Vantage’s Quincy campus will support up to 9 megawatts of critical IT load at full capacity. The 133,000 square foot building features 61,000 square feet of raised-floor data center space, and is designed to operate at an annualized Power Usage Effectiveness (PUE) of 1.3.
That PUE is achieved partially through a custom indirect evaporative cooling system designed to eliminate impact from outdoor conditions through a closed loop delivery infrastructure. The Vantage facility is also using generators that meet the EPA Tier 4 standard and reduce emissions by 90 percent compared to traditional generator deployments, and has installed LED site lighting designed to significantly reduce energy use.
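For context on what an annualized PUE of 1.3 implies at the facility’s full 9 megawatts of critical IT load, here is a back-of-the-envelope sketch. The formula is the standard PUE definition; the derived totals are illustrative and not figures disclosed by Vantage.

```python
# Back-of-the-envelope: what a design PUE of 1.3 implies at full load.
# PUE = total facility power / IT power (the standard definition); the
# derived overhead figure is illustrative, not a number disclosed by Vantage.

it_load_mw = 9.0     # critical IT load at full capacity (from the article)
design_pue = 1.3     # annualized design PUE (from the article)

total_facility_mw = it_load_mw * design_pue
overhead_mw = total_facility_mw - it_load_mw  # cooling, power distribution, lighting, etc.

print(f"Total facility load: {total_facility_mw:.1f} MW")  # 11.7 MW
print(f"Non-IT overhead:     {overhead_mw:.1f} MW")         # 2.7 MW
```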
The Vantage Quincy campus features 68 acres of land and low cost hydro-electric power. The site will ultimately house four buildings spanning about 529,000 square feet and 55 megawatts of critical IT load. With significant additional power at the site, the company will be able to support other large customers in either a powered shell or turnkey facility.
“The completion of the Quincy project in support of a large enterprise customer marks the opening of Vantage’s second data center campus,” said Sureel Choksi, President and CEO of Vantage. “Consistent with the company’s Santa Clara campus, the Vantage Quincy campus has been designed for massive scalability, high energy efficiency and operational excellence in support of large enterprises and web companies.”
The Vantage campus in Quincy offers an opportunity for larger space requirements. With its low-cost hydro power, Quincy has been an attractive market for companies with web-scale operations, including Microsoft, Yahoo and Dell.
“Quincy benefits from lower power costs,” said Rick Kurtzbein, Analyst at 451 Research. “With abundant hydroelectric power, network connectivity, tax incentives and low risks of natural disasters, Quincy is an emerging datacenter market for enterprises and web companies for either primary datacenter sites or for secondary, disaster recovery sites. The Quincy market allows for lower, ongoing operational costs that benefit datacenter providers, as well as customers in their facilities.”
Following the standards the company employed in Santa Clara, the Quincy facility has been built to achieve LEED Platinum certification. It was also awarded the Uptime Institute’s Tier III certification for both design documents and constructed facility.
7:16p
Data Center Customers Warming to Iceland
Verne Global CEO Jeff Monroe calls its Iceland-based data center “the ultimate energy hedge” for its ability to provide long-term price visibility through 12 to 20-year contracts. (Photo: Colleen Miller)
KEFLAVIK, Iceland - Data center customers are warming up to Iceland. It’s been five years since Verne Global announced plans to build a data center business in Iceland, which offers nearly ideal scenarios for power and cooling servers. The company’s facility on a former NATO base is now filling with customers, with a boost from cloud hosting provider Datapipe.
The latest arrival is RMS, which specializes in modeling catastrophe risk for the insurance industry. RMS will use Datapipe’s Stratosphere high-performance cloud to support RMS(one), a new service that combines “big data” analysis and a cloud delivery model to provide insurers with real-time information on the risks posed by natural disasters.
The computing and storage horsepower RMS is housing in Iceland will make it easier for insurers to quickly assess looming disasters. An example: as a hurricane approaches, risk models shift along with the path of the storm, and the RMS application must quickly scale its capacity as many clients simultaneously seek to update their projections.
“Datapipe’s Stratosphere HPC green cloud platform delivers on-demand scalability combined with the power efficiencies of the Verne Global facility,” said Robb Allen, CEO of Datapipe. “As a result, RMS has an infrastructure solution providing the reliability, security and efficiency required by high performance, big data applications.”
Traction With Data-Crunching Apps
RMS isn’t the only one crunching big data inside the Verne Global facility. Automaker BMW recently moved a group of applications to Verne Global, including crash simulations, aerodynamic calculations and computer aided design and engineering for BMW’s next generation of cars.
“We’ve been able to make a lot of headway in the high-intensity computing market,” said Tate Cantrell, the CTO of Verne Global. “That’s where we really see the interest.”
The success hasn’t come overnight for Verne. Shortly after it announced its project, Iceland was hit hard by the global financial meltdown, and Verne postponed construction for a year. Shortly after it began building, ash from a volcano in Iceland disrupted global travel.
Through it all, the Verne team has persevered with its vision for Iceland as a hub for the data center industry, offering an abundance of cheap, green power and low operating costs from the free cooling enabled by the cool climate. As with any new location, the “proof of concept” provided by early customers is critical in building momentum and winning over skeptics.
“We are very happy with the reception we’ve received,” said Cantrell. “We’ve gone from literally being heckled, to the point where name brands work with us. We’re just continuing to raise awareness about the opportunity in Iceland. We believe in it. Our customers believe in it. We’ve still got to continue to educate people.”
Green Power & Predictable Pricing
This week Verne, Datapipe and RMS held a press event to discuss the rollout of RMS(one), which allows Data Center Knowledge to provide our readers with a closer look at Iceland as a data center destination. Today and in coming days, we’ll take a look at Iceland’s current data center industry, the kind of applications being hosted here, and the renewable power resources that power the nation’s pitch to server farms.
Iceland’s power grid draws entirely upon hydro-electric and geothermal power, ensuring a totally “green” power supply. That’s become an issue in the data center industry in recent years, as the environmental group Greenpeace has targeted both Facebook and Apple with high-profile campaigns blasting them for using coal-based electricity to power their servers.
One of the most appealing facets of Verne’s green power play is its ability to arrange long-term contracts that can provide predictable power pricing for 12 to 20 years. Power in Iceland is available at 4.5 cents per kilowatt hour, with lower pricing available for bulk purchases. That’s an attractive pitch when you consider the potential for fluctuations in power pricing in countries like the U.S., Britain and Germany.
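To see why a fixed rate at that level is attractive, here is a quick sketch of the annual energy bill for a hypothetical 1 MW IT load running around the clock. Only the 4.5 cents per kilowatt hour figure comes from Verne Global; the 1 MW load and the comparison rate are illustrative assumptions.

```python
# Rough annual energy cost for a hypothetical 1 MW continuous IT load.
# Only the 4.5 cents/kWh Iceland rate comes from the article; the 1 MW load
# and the 9 cents/kWh comparison rate are illustrative assumptions.

HOURS_PER_YEAR = 8760
it_load_kw = 1000  # hypothetical 1 MW of IT load, running 24x7

def annual_cost(rate_usd_per_kwh: float) -> float:
    return it_load_kw * HOURS_PER_YEAR * rate_usd_per_kwh

iceland = annual_cost(0.045)    # Verne Global's quoted rate
elsewhere = annual_cost(0.09)   # assumed comparison rate in a volatile market

print(f"Iceland:    ${iceland:,.0f} per year")     # $394,200
print(f"Comparison: ${elsewhere:,.0f} per year")   # $788,400
```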
“We are the ultimate energy hedge for companies,” said Verne Global CEO Jeff Monroe. “Energy costs are not going down. In most cases, they’re escalating.”
While some U.S. utilities offer access to power sourced from renewables, they often charge a higher price for that power.
“We’re green, but without the premium,” said Monroe. “Instead, you save money on that hedge.”
7:28p
Free Cooling in Iceland: A Closer Look at the Verne Global Data Center
KEFLAVIK, ICELAND - Verne Global, whose client Datapipe announced a cloud launch this week with its own client, risk-modeling specialist RMS, is uniquely positioned from a geographical and business perspective. Verne takes advantage of the geography of Iceland to operate a data center that runs on 100 percent renewable energy sources and leverages the chilly climate of Iceland, located just below the Arctic Circle. The country’s geography and geology allow the local power companies to use natural resources such as hydro power and geothermal energy to produce electricity. Data Center Knowledge took a tour of this unique facility this week. Our photo feature gives insight into the facility, which is being deployed with a modular approach and seeks to draw clients from both the United States and Europe. See Verne Global Data Center Leverages Iceland Power, Cooling.