Data Center Knowledge | News and analysis for the data center industry
Monday, January 12th, 2015
10:00a
Verne Raises $98M to Expand Iceland Data Center Campus
Verne Global, a U.K.-based data center developer with a massive data center campus on the site of a former NATO base in Iceland, has raised $98 million of equity funding in a Series D round.
The company has so far attracted more than a dozen customers to its facility, most of whom host high-performance computing systems for compute-intensive applications such as big data analytics, predictive modeling, and video rendering. Iceland is attractive as a data center location primarily because of its abundance of low-cost electricity and the reliability of its national electrical grid.
Verne has designed the next phase of capacity expansion at its campus in Keflavik, just west of Iceland’s capital Reykjavik, and the new round will help fund the build-out. “This round of investment is just the capital that we need to continue to grow the business on that campus,” Jeff Monroe, Verne CEO, said.
New Player Joins Data Center Investment Round
All previous investors participated in the round, and one new one came on board. The new investor is Stefnir, an Icelandic asset management and private equity firm that manages investment for numerous major pension funds in the country.
Verne’s largest shareholder is the Wellcome Trust, a U.K. charity organization that funds biomedical research.
Verne CTO Tate Cantrell said Wellcome Trust became interested in data center investment after the charity saw scientists spend much of the grant money it awarded on data center capacity.
The trust remained the biggest shareholder after the most recent round.
Novator Partners and General Catalyst, which, according to Cantrell, were the data center developer’s founding investors, also participated in the round.
Popular Campus for Compute-Intensive Applications
Verne’s 45-acre campus has two buildings and access to 120 megawatts of power. One of the buildings is 100,000 square feet, and the other is 125,000 square feet, Cantrell said.
Of the dozen or so existing customers, the company has publicly named seven, including BMW, data center service provider Datapipe, and RMS, which designs software that analyzes risk for insurance companies.
According to Cantrell, most growth for Verne comes from high performance computing workloads, and those types of workloads represent the bulk of capacity currently deployed at the campus.
“We’re seeing sector after sector starting to go into this high [performance] analytics type environment,” he said. “That’s probably the biggest area of growth that we’re seeing.”
Aluminum Sector Spurs Robust Infrastructure
Electricity in Iceland is relatively cheap and abundant. It is generated by hydropower and geothermal plants, which makes it more environmentally friendly than fossil energy.
Not only is the cost low, it is guaranteed to stay low over a long term, up to 20 years, Cantrell said. Such long-term price stability is impossible when a power company relies on fossil fuels, whose prices are volatile, he explained.
The only cost risks associated with power in Iceland are the costs to operate hydropower plants and the investigative costs associated with geothermal energy. “That can be pretty well amortized into the future development and growth, so [the cost] is very predictable,” Cantrell said.
There are about 2.5 gigawatts of power available on the national grid, even though only about 320,000 people live in Iceland. The grid itself is highly reliable.
The main reason for the abundance of power, the predictability of pricing, and the reliability of the grid is the industry that is the predominant energy consumer in Iceland: aluminum smelters, Cantrell said.
These customers operate buildings that can demand as much as 500 megawatts. They deal in commodities, so when they invest in a costly construction project that is going to churn out a commodity over a long term, they need to be able to predict their long-term costs.
Power reliability for aluminum smelters is paramount, since the buildings don’t have diesel generators for backup. If power goes out, electrolysis, the process used in aluminum extraction, stops. “The actual smelting pot can start to freeze, and you’ll lose your investment,” Cantrell said. “So, it’s not an option for them to go down.”
The government and the power companies in Iceland had to commit to ensuring that the electrical grid was extremely reliable to get the aluminum companies to build their plants there.

3:59p
Open Compute Project (OCP) U.S. Summit 2015
The Open Compute Project (OCP) U.S. Summit 2015 will be held March 10-11 at the San Jose Convention Center in San Jose, California. This event is free and open to the public, but it does require registration.
New this year is a full day of OCP Engineering Workshops, held the day before the 2015 OCP U.S. Summit at the same venue. This additional event is reflected in the registration options.
The Summit will host keynote addresses from industry leaders, technical workshops, and educational tracks where you’ll have the opportunity to help drive the future of open hardware. For the third year in a row, there will be a software and hardware hackathon at the Summit, where anyone who’s interested will be able to hack their own OCP implementations.
For more information about this year’s conference, visit the Open Compute Project (OCP) U.S. Summit 2015 website.
To view additional events, return to the Data Center Knowledge Events Calendar.

4:00p
The Green Data Center Conference
The Green Data Center Conference will be held February 24-26, 2015, at the Marriott San Diego La Jolla in San Diego, California.
The conference examines the ROI of enhanced data center efficiency, and will bring together data center owners and operators for a series of interactive workshop sessions and case study presentations from some of the most innovative facilities in the world.
This year’s installment will focus on innovations in “green” power and cooling. It will also highlight some pressing building and development issues to consider when commissioning a new facility. In addition, a tour of the San Diego Supercomputer Center has been incorporated into the program to provide real-life examples and best practices that your organization can institute immediately.
For more information about this year’s conference, visit the Green Data Center Conference website.
To view additional events, return to the Data Center Knowledge Events Calendar.

4:30p
“Just When You Thought”: How Predictive Analytics Will Impact the Data Center
Bill Jacobs is VP of Product Marketing and field CTO for Revolution Analytics.
In the last few years, the value of predictive analytics has become clear, and businesses are clamoring for rapid adoption and deployment.
As pressure mounts to build out analytical frameworks atop the big data “lakes” or “landfills” started in 2013 and 2014, data center teams will be driven to meet a few immediate needs and begin considering some longer-range trends.
Production Analytics: New Platforms
Deployment of predictive analytics to production will deliver operational, performance and risk improvements across large organizations in 2015.
Whether by integrating on-demand analytics into BI dashboards and reports, or by computing predictions as part of the business application logic, predictive analytics will go into production for many this year.
For those already in production, historical methods such as re-coding predictive models into Java or C++ prior to deployment have reached end-of-life. As application teams work to accelerate model deployment cycles, lower development costs, and minimize coding errors, they need to deploy new models directly.
For IT, this means onboarding new production systems that compute predictions, and with them come new challenges.
Where transaction or event volumes are high, latency requirements are tight, or both, IT will need to deploy in-database, in-Hadoop, and streaming analytical components into the application stack. This will keep IT operations, security, and business continuity teams busy characterizing and deploying these new components to achieve fast, reliable computation of model predictions.
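To make this concrete, below is a minimal, hypothetical Python sketch (illustrative only, not Revolution Analytics or any vendor’s code) of direct model deployment in a streaming context: the model’s coefficients are loaded as data, and each incoming event is scored in place instead of the model being re-coded into the application. All names and values are illustrative assumptions.

import json
import math

# Hypothetical artifact: a logistic-regression model exported by the modeling
# team as plain coefficients, consumed by the scoring service as data rather
# than re-coded into Java or C++.
MODEL = json.loads("""
{
  "intercept": -1.2,
  "coefficients": {"txn_amount": 0.004, "txn_count_24h": 0.15, "is_new_device": 1.1}
}
""")

def score(event):
    """Return a predicted probability for a single incoming event."""
    z = MODEL["intercept"]
    for feature, weight in MODEL["coefficients"].items():
        z += weight * event.get(feature, 0.0)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

# In production this loop would consume from a message queue or stream;
# a small in-memory list stands in for the event flow here.
events = [
    {"txn_amount": 250.0, "txn_count_24h": 3, "is_new_device": 1},
    {"txn_amount": 12.5, "txn_count_24h": 1, "is_new_device": 0},
]
for event in events:
    print(round(score(event), 3))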
Sensor Data: Big Volume, Here and Now
The “Internet of Things” or IoT will bring large flows of new data into the data center in 2015. However, while many focus on new types of sensors such as smart thermostats, smart cars, fitness bands and Wi-Fi light bulbs, IT should be wary of the potential impacts of existing data sources as well.
Today, many cars sold in the last five years are already producing data. Most manufacturing machinery also produces large data flows, particularly in the electronics business. Telecom networks have produced large historic data sets as well as large real-time flows of data. Weather and geo-spatial data are widely available, growing rapidly in detail, and can be embraced to great positive effect as modeling techniques and deployment infrastructures make their inclusion a reality.
It is the here and now data that will impact the data center first, providing a training ground for larger and more numerous data flows from new sensor networks this year and next.
Predictive Analytics: A Fit for the Hybrid Cloud?
Much has been written about hybrid clouds, and scores of hosting and platform companies are working toward generalized solutions for combining public, private, and on-prem assets. The reality remains that bandwidth is not free, nor will it be anytime soon.
We predict that in 2015, many organizations will realize that hybrid clouds can prove a good foundation for predictive analytics before generalized solutions are widely adopted.
With the lion’s share of new data originating outside the organization, it makes sense to store and analyze large, new data assets near that origin. However, movement of data between cloud and on-prem remains expensive.
Big data analytics offers a natural division between model development and model execution that fits reasonably well into hybrid cloud realities for many problem sets. By storing large new data assets in cloud-based infrastructure and selecting data modeling tools that can run there, IT can help data modelers simplify and accelerate the ingestion, blending, exploration and model estimation on vast data assets without moving the data.
By pairing in-cloud analysis with on-prem model execution, IT can bridge data modeling directly to production applications without re-engineering mission critical apps to run in the cloud, and without moving massive data sets between the cloud and on-prem infrastructures.
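As a rough sketch of that division of labor (illustrative only; scikit-learn and NumPy stand in for whatever modeling tools actually run in the cloud), the model is estimated next to the data, and only the small fitted artifact crosses the cloud/on-prem boundary:

import json
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- In the cloud, next to the large data asset ---
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))                      # stand-in for features built from the data lake
y = (X @ np.array([0.5, -1.0, 2.0]) > 0).astype(int)

model = LogisticRegression().fit(X, y)
artifact = json.dumps({                               # a few hundred bytes, cheap to move on-prem
    "intercept": model.intercept_.tolist(),
    "coefficients": model.coef_[0].tolist(),
})

# --- On-prem, inside the production application ---
loaded = json.loads(artifact)
w = np.array(loaded["coefficients"])
b = loaded["intercept"][0]

def predict(features):
    """Score a single on-prem record with the cloud-trained model."""
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))

print(predict(np.array([0.2, -0.4, 1.5])))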
The future will reveal new architectures for hybrid clouds and analytics, but modeling atop data lakes in the cloud and deploying those models on-prem matches the realities of today’s hybrid clouds and provides a good starting point. As hybrid cloud architectures evolve, the seams visible today will be smoothed over, and maturing hybrid cloud facilities will also pave the way for innovative analytical techniques, such as online learning algorithms, that blur the current distinction between model development and model execution.
Deploying predictive analytics to production, embracing big data from sensor-based networks, and fitting production analytics to the realities of hybrid clouds are but a few of the interesting challenges to be met in 2015. I welcome your comments on what you envision as the most important barriers for your company or organization. Good luck in 2015!
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

6:12p
Finning to Distribute BaseLayer Modular Data Centers in UK, Ireland
BaseLayer, the Chandler, Arizona-based maker of prefabricated modular data centers being spun out of a company called IO, has signed Finning UK & Ireland as its exclusive distributor in the U.K. and Ireland.
Phoenix-based IO is in the process of being split into two companies: BaseLayer, which manufactures and sells modular data centers and data center infrastructure management software, and IO, a colocation and cloud services provider.
Finning UK & Ireland is a subsidiary of Finning International, a vendor of power solutions for various industries, including power generation and oil and gas.
“Finning is known and respected for providing efficient energy and power solutions to large enterprises,” Peter McNamara, executive vice president of worldwide sales and customer operations at BaseLayer, said in a statement. “The partnership expands BASELAYER’s reach and strengthens our ability to transform the data center industry.”
IO has used a similar approach in several Asian markets. In 2013 it signed Tractors Machinery International as an exclusive distributor of its modular data centers in Singapore, Malaysia, and Brunei.
Until now, IO was operating as a single business, designing and manufacturing modules (previously called io.Anywhere), writing DCIM software, and providing data center space. It provided space within the modules, housed in large warehouses, and traditional raised-floor space.
IO execs said the split would enable better focus for each part of the business.
The data center modules are manufactured at a factory in Chandler (just outside of Phoenix), which is where BaseLayer is now also headquartered.

6:33p
EMC and Elliott’s Feud Over VMware Spin-Off on Hold
Hedge fund Elliott Management and EMC have reached an agreement, for now. Elliott has been pushing EMC to spin off VMware, the virtualization giant that EMC owns a controlling stake in. Now, EMC has added two Elliott-approved directors to its board, and Elliott has agreed to a limited standstill and voting provisions through September 2015.
Elliott is EMC’s fifth largest shareholder with a $1 billion stake in the company. The hedge fund has been critical of EMC’s “federation” structure. In a letter to its board in October, Elliott criticized EMC’s corporate structure, calling EMC a “company of companies”. The argument centers on a belief that the companies are of more value to shareholders as separate entities. EMC disagreed.
The two Elliott-approved board directors mean a (likely temporary) halt to the criticism. But the appointment also means a potentially stronger spin-off argument down the line.
For now, the two are playing nice. Elliott has worked collaboratively with EMC to identify and review candidates, according to an EMC release. Elliott will vote for EMC’s proposed slate of directors at the annual shareholder meeting on April 30.
EMC is under pressure much like other big publicly traded technology companies with interests in multiple markets. HP announced in October that it would split into two companies, one focused on the enterprise and the other on the consumer PC and printer markets. IBM followed the sale of its PC business to Lenovo with the sale of its x86 server business to the same company.
EMC “proper” leads in traditional storage offerings. In addition to VMware, EMC has a 62 percent stake in Pivotal, a software company led by former VMware CEO Paul Maritz in which VMware owns a sizable chunk as well.
EMC’s board now totals 13, with the addition of José Almeida and Donald Carty. Almeida is chairman and CEO of Covidien, and Carty is a retired chairman and CEO of AMR.
“Joe Almeida and Don Carty are two highly experienced leaders of major global corporations who will bring added depth and insights to our Board of Directors,” said EMC Chairman and CEO Joe Tucci in a release. “We are delighted to welcome both Joe and Don to the Board.”
Tucci is set to retire in 2015.
VMware and EMC are rumored to be developing an EMC-branded converged infrastructure appliance, code-named Project Mystic.
7:55p
AWS Launches Supercharged C4 Cloud Instances
Amazon Web Services has introduced new gigantic cloud instances for compute-heavy workloads. C4 is the compute-optimized family of instances on Amazon EC2, available in seven regions for now, with more regions coming in the near future.
The instances are based on the Intel Xeon E5-2666 v3 (code name Haswell) processor. Intel customized its Xeon E5 chips specifically for Amazon to support the C4 instances.
Amazon’s announcement follows a similar move by one of its biggest rivals in cloud services, Microsoft Azure. Last week, Microsoft announced availability of G-series instances on its cloud, which go up to 32 cores, 448 GiB of RAM, and 6,596 GB of local SSD storage. Microsoft’s high-octane instances were initially rolled out in the cloud’s West US region only.
New Instances Meant to Attract New Workload Types to AWS
Amazon pre-announced the new instances late last year at its re:Invent conference. The new capabilities make AWS more relevant for several use cases.
“Our customers continue to increase the sophistication and intensity of the compute-bound workloads that they run on the cloud,” AWS Chief Evangelist Jeff Barr wrote in November. The instances provide higher packet-per-second performance, lower network jitter, and lower network latency through Enhanced Networking, which is based on single root I/O virtualization (SR-IOV).
The new cloud instances are aimed at workloads such as top-end website hosting, online gaming, simulation, and risk analysis.
The instances complement SSD-Backed Elastic Block Storage, with EBS optimization enabled by default. SSD-Backed EBS also targets high-end performance use cases.
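For illustration only, here is a minimal sketch using the boto3 SDK of launching a c4.8xlarge with EBS optimization and then checking the Enhanced Networking (sriovNetSupport) attribute; the AMI ID and region are placeholder assumptions, and the EbsOptimized flag is passed explicitly even though the C4 family enables it by default.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Launch a single compute-optimized instance; EBS optimization is requested
# explicitly here, although it is enabled by default for the C4 family.
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",        # placeholder HVM AMI ID
    InstanceType="c4.8xlarge",
    MinCount=1,
    MaxCount=1,
    EbsOptimized=True,
)
instance_id = response["Instances"][0]["InstanceId"]

# Enhanced Networking (SR-IOV) can be confirmed via the sriovNetSupport attribute.
attribute = ec2.describe_instance_attribute(
    InstanceId=instance_id,
    Attribute="sriovNetSupport",
)
print(attribute.get("SriovNetSupport", {}).get("Value"))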
(Table: specs and pricing for the new C4 family. Source: AWS Blog)
The largest instance, c4.8xlarge, has 36 virtual CPUs, 60 GiB of RAM, and 10 Gbps network performance. A user also has the ability to fine-tune the processor’s performance and power management (which can affect maximum Turbo frequencies) using P-state and C-state control.
C-state control manages power consumption on a per-core basis by limiting how deeply idle cores can sleep, while P-state control sets the desired performance level (CPU clock frequency). More information on how these instances can be optimized can be found on the AWS blog.
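As a generic Linux-side sketch (not AWS tooling; the paths follow the standard cpufreq/cpuidle sysfs layout and may vary by kernel and instance configuration), the P-state governor and the C-states exposed for a core can be inspected like this:

from pathlib import Path

CPU0 = Path("/sys/devices/system/cpu/cpu0")

# Current P-state (frequency scaling) governor for cpu0, if cpufreq is exposed.
governor_file = CPU0 / "cpufreq" / "scaling_governor"
if governor_file.exists():
    print("P-state governor:", governor_file.read_text().strip())

# C-states (idle states) the kernel exposes for cpu0, with their exit latencies.
cpuidle_dir = CPU0 / "cpuidle"
if cpuidle_dir.exists():
    for state in sorted(cpuidle_dir.glob("state*")):
        name = (state / "name").read_text().strip()
        latency_us = (state / "latency").read_text().strip()
        print(f"C-state {state.name}: {name} (exit latency {latency_us} us)")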
Before rolling out the new monster cloud instances, AWS kicked off 2015 by adding some new cloud features, including early-warning notifications of spot instance termination and updates to its GovCloud.

9:18p
Microsoft Makes Moving Apps Between Azure Data Centers Easier
Application migration between Microsoft Azure data centers has become a little easier thanks to a free open source solution from Persistent Systems. Developed in collaboration with the Azure CAT team in Bangalore, Azure Data Center Migration Solution helps migrate cloud assets from one Azure data center to another.
Licensed under Apache v2.0, customizable ADCMS replaces the need to develop a custom application migration solution. It automatically copies an entire deployment from one location to another.
Microsoft and its largest competitor in cloud services Amazon Web Services kicked off 2015 with a bang, both rolling out additions and improvements to their cloud service portfolios. Just last week, Azure announced availability of new high-performance cloud instances. AWS made a similar announcement earlier today, following the roll-out of numerous new cloud features last week.
ADCMS produces a JavaScript Object Notation (JSON) template of subscription configuration metadata. This template is used to stand up a replica or a modified version of the infrastructure setup.
Persistent Systems architect Satish Nikam listed some of the potential reasons for data or application migration in his LinkedIn post:
- Moving IaaS deployments to a data center that is closer to you or your customers
- Creating multiple data center deployments
- Providing a backup solution to work around IaaS maintenance plans
- Transitioning between subscriptions
Microsoft operates Azure in data centers around the world and continues to expand to new locations. One use case may be migrating a workload to a new location because it provides better performance.
The solution is also useful in test scenarios and for migrating between cloud subscriptions.
ADCMS addresses potential problems mid-migration. It is designed to handle interruptions and either start from where it left off or roll back.
“A migration can encounter two kinds of faults: transient and permanent,” Nikam wrote. “The solution uses early validations and retries, and compensations to implement a limited level of ‘atomicity.’ It also supports automatic rollback in case of permanent failure.”
If a migration stops, it can resume at the point of error. To avoid inconsistency, the solution also shuts down virtual machines just before migrating them.
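As a generic illustration of that pattern (a sketch of the idea, not ADCMS code), a migration step can retry transient faults and fall back to a compensating rollback when a fault is permanent:

import time

class TransientError(Exception):
    """A fault expected to clear on retry (e.g., a throttled API call)."""

class PermanentError(Exception):
    """A fault that will not clear on retry (e.g., a quota or validation failure)."""

def migrate_resource(copy_step, rollback_step, retries=3, delay_seconds=2):
    """Retry-then-compensate pattern, similar in spirit to the 'limited
    atomicity' described above: retry transient faults, roll back on permanent ones."""
    for attempt in range(1, retries + 1):
        try:
            return copy_step()
        except TransientError:
            if attempt == retries:
                rollback_step()   # exhausted retries: compensate, then surface the error
                raise
            time.sleep(delay_seconds)
        except PermanentError:
            rollback_step()       # automatic rollback on permanent failure
            raise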
The solution creates resources in parallel wherever possible, supports resource name mapping, and enables customization.
“Writing a script to add automation, customization and repeatability to your data center migration can become a major programming project, with extensive investment in error handling in case a problem occurs mid-migration,” Guy Bowerman, senior program manager, Azure, wrote on the Azure blog. The new solution “takes much of the pain away from these types of migration.”
The solution was developed using .NET Framework 4.5 and uses the Microsoft Azure Management APIs to interact with Azure. ADCMS can run on premises or on a virtual machine in the cloud.

9:57p
Verizon: Taking Cloud Down for Upgrades Thing of the Past
Upgrade work that took Verizon Cloud down early Saturday morning was complete on Sunday evening, the company said in a statement issued Sunday.
Even though the provider notified customers about the upcoming downtime more than a week in advance, the anticipated length of the cloud outage (up to 48 hours) was unusual. Since all data centers hosting Verizon Cloud would be affected, customers that had not set up architectures enabling them to fail over to a different cloud provider would have to accept a potentially lengthy downtime window.
It turns out that part of the upgrade was meant to address precisely this kind of issue. Over the weekend, the provider implemented “seamless upgrade functionality,” meant to enable it to upgrade the cloud infrastructure without impacting customer operations.
Major cloud providers usually conduct “rolling maintenance,” or upgrading one availability zone at a time. This way customers can shift workloads from data center to data center to avoid downtime.
In its announcement, Verizon spun the upgrade as one that enables its users to live through upgrades more easily than competitors’ customers can, making rolling maintenance unnecessary.
“Many cloud vendors require customers to set up virtual machines in multiple zones or upgrade domains, which can increase the cost and complexity,” the statement read. “Additionally, those customers must reboot their virtual machines after maintenance has occurred.”
All maintenance and upgrades to Verizon Cloud will happen in the background without taking the cloud down and affecting customers, according to the provider.
The company announced Verizon Cloud in 2013. The new cloud platform operates separately from the company’s older enterprise, managed, and federal cloud services. Customers on those “legacy” platforms were not affected by the weekend’s outage.
Data centers that host Verizon Cloud are in Culpeper, Virginia; Englewood, Colorado; Miami; Santa Clara, California; Amsterdam; London; and São Paulo, among other locations.