Data Center Knowledge | News and analysis for the data center industry
Wednesday, July 2nd, 2014
10:00a
Internap Expands Bare-Metal Cloud Servers to London, Hong Kong Data Centers
Data center service provider Internap has expanded its bare-metal public cloud service to data centers in London and Hong Kong. The service was previously available at the company’s Amsterdam, Singapore, Dallas, New York and Santa Clara, California, locations.
Bare-metal cloud servers are essentially dedicated managed servers customers can provision like they provision cloud virtual machines. The offering, also available from Internap rivals like Rackspace and IBM SoftLayer, provides performance of dedicated servers along with elasticity of a public cloud service.
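To make "provision like a cloud virtual machine" concrete, the sketch below requests bare-metal servers from a hypothetical provisioning REST API; the endpoint, request fields and token are illustrative assumptions, not Internap's actual interface.

```python
# Minimal sketch: provisioning bare-metal servers through a hypothetical
# REST API, the same way one would request cloud VMs. The endpoint URL,
# request fields and auth token are illustrative assumptions, not a real API.
import requests

API = "https://api.example-provider.com/v1/bare-metal"   # hypothetical endpoint
TOKEN = "REPLACE_WITH_API_TOKEN"

payload = {
    "location": "lon",          # e.g. the new London site
    "flavor": "16-core-64gb",   # a dedicated physical configuration
    "image": "ubuntu-14.04",
    "count": 2,
}

resp = requests.post(
    API,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for server in resp.json().get("servers", []):
    print(server["id"], server["status"])   # dedicated machines come up in minutes
```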
Internap said it was addressing demand by companies running globally distributed data-intensive applications, such as Big Data analytics, mobile and digital advertising and online gaming. Such applications can suffer from performance issues when sharing physical resources with other applications.
The provider’s bare-metal cloud offering, called AgileCloud, is coupled with its patented Managed Internet Route Optimizer technology, which continuously analyses Internet performance and routes customer traffic over the best available path.
“Organizations deploying real-time, data-intensive applications are increasingly seeking cloud services that provide flawless performance, reliability and cost efficiency across globally distributed environments,” said Christian Primeau, senior vice president and general manager of cloud and hosting at Internap. “Our bare-metal cloud uniquely addresses these demands, and the addition of our London and Hong Kong locations delivers broader reach in key end user markets.”
Internap’s competitors in the space have been very active.
IBM announced earlier this year that it would invest $1.2 billion in expanding the physical footprint of the SoftLayer cloud. Earlier this week the company announced the most recent addition to that footprint: a new data center in London.
Rackspace in June rolled out a brand-new bare-metal cloud offering, called OnMetal. The company custom-built servers to support the offering, using Open Compute designs as a basis.
12:00p
CoreOS Gets $8M in VC Cash to Bring Web-Scale Computing to the Masses
CoreOS, a startup with a Linux OS distribution that can update simultaneously across massive server deployments, announced an $8 million Series A funding round and the release of CoreOS Managed Linux, a commercial version of the open source technology that comes with support.
The company, which has in the past raised undisclosed sums from Andreessen Horowitz and Sequoia Capital, pitches its “Operating System as a Service” as a lightweight OS that automatically updates and patches lots of servers at once, enabling highly resilient massive-scale infrastructure.
“Not a lot of startups go out and build new Linux distros,” founder and CEO Alex Polvi joked. “The original premise was around web security, with the amount of folks getting compromised and hacked [so] ridiculous. We started rethinking things as low as we could possibly go – going down to the Linux kernel.
“If we could deliver the OS as a service, providing a continuous stream of patches so you’re always running the latest version, we could improve the backend security of the Internet.”
Taking a note from Google Chrome
Polvi used to work at Mozilla, where his focus was on the front end. He noted that around that time in his career, Google Chrome came out with a feature that allowed Google to patch the browser and push out updates automatically.
This revolutionized front-end web security. Firefox and Internet Explorer now do the same.
“So now we have a secure front end,” he said. “On the server side, we have nothing like this at all. State of the art is get running and don’t touch it.”
CoreOS helps companies build environments where it doesn’t matter if an individual machine goes down. “You can reboot any machine and it’s OK,” Polvi said. “If you can handle that extreme, you can handle anything.”
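The "you can reboot any machine" property comes from coordinating updates so that only a bounded number of nodes are down at once. The toy Python simulation below illustrates that idea with a simple shared lock; it is a conceptual sketch and does not use CoreOS's actual update or locking machinery.

```python
# Toy simulation of a coordinated rolling reboot: each "node" applies an
# update only after acquiring a shared lock, so at most one node is down
# at any moment. Conceptual illustration only, not CoreOS's implementation.
import threading
import time
import random

reboot_lock = threading.Semaphore(1)   # cluster-wide "only one reboot at a time"

def node(name: str) -> None:
    print(f"{name}: serving traffic, update staged")
    with reboot_lock:                  # wait for our turn to reboot
        print(f"{name}: acquired reboot lock, rebooting into new OS image")
        time.sleep(random.uniform(0.1, 0.3))   # simulated reboot
        print(f"{name}: back up on the new release")

threads = [threading.Thread(target=node, args=(f"node-{i}",)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("fleet updated with no more than one node offline at a time")
```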
Last Linux distro server migration you will ever need
“This is a big day for us,” Polvi said. “Not only are we announcing funding from one of the top Silicon Valley venture capital firms, we have also worked hard to deliver Managed Linux. Businesses today can begin to think of CoreOS as an extension of their OS team, and for enterprise Linux customers this is the last migration they will ever need.”
CoreOS has been around for about one year, but this is not Polvi’s first foray into entrepreneurship. In 2010 he sold Cloudkick, a cloud server management and monitoring Software-as-a-Service solution, to Rackspace.
CoreOS has already been tested by a lot of companies. “We haven’t shipped our fully production ready version yet, but we’re starting to support people with what we have now,” said Polvi. “We’re tackling it; we have the right team in place.”
In the short time since inception, CoreOS has had over 150 contributors to its GitHub projects, which have collectively gathered over 5,000 stars. The team has contributed features and fixes to other important open source projects, including Docker, the Linux kernel, networkd, systemd and more.
Support from modern infrastructure leaders
CoreOS images are currently available through Google Compute Engine, Rackspace (including the new OnMetal service) and Amazon.
It also supports Docker 1.0, the container runtime for application packaging that has attracted a lot of buzz recently.
“CoreOS is providing an innovative approach to running Docker on an exceptionally lightweight, easy-to-update, minimal OS, and we can’t wait to see what is in store for the company in the future,” said Solomon Hykes, CTO and founder of Docker.
The Series A round was led by Kleiner Perkins Caufield & Byers, with follow-on investments from existing investors Sequoia and Fuel Capital. The funding will go towards growing the company, product development and managing the increasing global interest in CoreOS.
Kleiner Perkins Caufield & Byers general partner Mike Abbott believes that CoreOS is game-changing. “CoreOS is solving infrastructure problems that have plagued the space for years with an operating system that not only automatically updates and patches servers with the latest software, but also provides less downtime, furthering the security and resilience of Internet architecture,” he said.
Scalable infrastructure for the masses
“CoreOS is not your typical Linux distribution,” wrote Kelsey Hightower, developer advocate and toolsmith at CoreOS, on Rackspace’s blog. “You won’t find a package manager or have to deploy additional automation tools – they’re built in. Our goal is to make sysadmins’ lives easier by making your infrastructure more stable, reliable and resilient, but we don’t stop there. CoreOS raises the bar by providing a platform that enables organizations big and small to easily build highly scalable infrastructures like those used by Internet giants.”
12:30p
The Business of Data Centers: Efficiencies and the Role of DCIM
Kate Baker is a business strategist at Custodian Data Centre.
In January 2014 a Silicon Angle blog post estimated that global data center traffic will grow threefold (a 25 percent CAGR) from 2012 to 2017, whilst Uptime Institute figures have noted that data centers consume up to 3 percent of all global electricity production.
Between 2011 and 2012, public cloud adoption increased from just 2 percent to 25 percent, and other commentators, such as the International Energy Agency, have suggested that the world’s energy needs could be 50 percent higher in 2030 than they are today.
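As a quick sanity check on the first of those figures, a 25 percent compound annual growth rate over the five years from 2012 to 2017 does indeed work out to roughly a threefold increase:

```python
# Sanity check: does a 25% CAGR over 2012-2017 (five growth years)
# really amount to roughly threefold growth?
cagr = 0.25
years = 2017 - 2012
growth_factor = (1 + cagr) ** years
print(f"Traffic multiple after {years} years at {cagr:.0%} CAGR: {growth_factor:.2f}x")
# -> about 3.05x, i.e. roughly a threefold increase
```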
Increased usage means increased efficiency
With growth figures such as these and most industry commentators agreeing that trends show a shift from data stored in-house to either cloud providers or data centers, the onus is on data centers to deliver their services as adeptly as possible.
Efficiencies in the data center ecosystem are vastly important – from data center design to which server to use, the amount of processing that can be achieved on a physical footprint can differ enormously.
At the heart of a data center is its ability to maintain and sustain access to an energy source 100 percent of the time. Every adjustment and efficiency initiative needs to ensure resilience is built into the decision-making process. It is more important than ever for organizational change to be inextricably linked with this modus operandi of the data center.
But it is not enough to simply say we must stay on. Power at all costs is not a transaction that data centers can afford; there is not an infinite pool of money to pay for soaring energy bills and inefficient data center solutions.
DCIM development: off-site and in-house
Data center infrastructure management (DCIM) has been heralded by many organizations as the solution companies need in order to drive forward efficiencies and provide data center operators with a universal set of metrics to enhance their strategic planning. It also helps to ensure that a data center has the capacity to deliver what is required and the awareness to always remain on.
DCIM is seen as an effective way of addressing core infrastructure challenges on a smaller budget. However, can the same be said of a DCIM product? Can out-of-the-box DCIM solutions truly mesh and interlink seamlessly with a data center if they are not developed by the data center itself?
Whilst the overarching trends in data center needs are similar, two data centers rarely have identical requirements. The key to DCIM is the management of the data and information created; if it is developed off-site, it is imperative that the engineers utilizing the information can use it effectively for what they need it to do.
Some data centers, rather than operate DCIM products, design their systems in-house so that if there is a problem or a fault, they can find and fix it themselves. They are writing bespoke code specific to their own infrastructure.
Some of these data centers have calculated that, during the first few weeks of deploying their own data center information management systems, they recouped more than the cost of all the hardware through the energy savings they made. This is the usual justification for most DCIM systems.
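To make that payback argument concrete, here is a back-of-the-envelope calculation; every number in it is a hypothetical assumption chosen only to illustrate the arithmetic, not a figure from any particular facility.

```python
# Back-of-the-envelope payback estimate for a home-grown monitoring system.
# All numbers are hypothetical and chosen only to illustrate the arithmetic.
hardware_cost_usd = 8_000           # sensors, meters, collection servers
power_saved_kw = 120                # average load shed by tuning cooling/airflow
electricity_price_usd_per_kwh = 0.12

savings_per_week = power_saved_kw * 24 * 7 * electricity_price_usd_per_kwh
weeks_to_payback = hardware_cost_usd / savings_per_week

print(f"Weekly savings: ${savings_per_week:,.0f}")
print(f"Hardware cost recouped in about {weeks_to_payback:.1f} weeks")
# With these assumptions the hardware pays for itself in a few weeks.
```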
Finding the right solution
Not all data centers have this level of in-house expertise, and for companies that choose to adopt DCIM products, the benefits can include streamlined infrastructure management. Yet with so many aspects of DCIM to cover, can a single complete vendor DCIM product be the best solution for every system? That is not to say that DCIM products are good or bad; it is more a question of finding the right solution for each data center, once again bearing in mind the myriad variables between data centers and their designs.
As such, one could suggest that data centers might be best served by taking a modular approach to their DCIM strategy, rather than adopting one complete solution and thus risking a compromise on one part of their management system.
The counter-argument to that approach is the question of whether different vendor solutions can work compatibly with each other. Data center operators naturally want the best for their facility, and compromise is not a term that sits well within the day-to-day ethos and running of a facility. They demand the best. If one vendor supplies the best DCIM capacity-planning solution for them and a competitor holds the key to the right network solution, they will expect and demand that the two work together.
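One way to picture a modular DCIM strategy, and the interoperability question it raises, is as a set of vendor modules behind one thin shared interface. The sketch below is a hypothetical illustration of that idea in Python, not the plugin model of any real DCIM product.

```python
# Hypothetical illustration of a modular DCIM strategy: each concern
# (capacity planning, network, power) is a separate module behind one
# small shared interface, so best-of-breed vendor tools can be combined
# as long as they expose their metrics in a common shape.
from abc import ABC, abstractmethod
from typing import Dict


class DcimModule(ABC):
    """Minimal contract every module (vendor-supplied or in-house) must meet."""

    @abstractmethod
    def metrics(self) -> Dict[str, float]:
        """Return current readings as metric-name -> value."""


class CapacityPlanningModule(DcimModule):
    def metrics(self) -> Dict[str, float]:
        return {"rack_space_used_pct": 68.0, "projected_full_months": 14.0}


class NetworkModule(DcimModule):
    def metrics(self) -> Dict[str, float]:
        return {"core_uplink_utilisation_pct": 41.5}


def dashboard(modules: list) -> Dict[str, float]:
    """Aggregate readings from whichever modules a facility has chosen."""
    combined: Dict[str, float] = {}
    for module in modules:
        combined.update(module.metrics())
    return combined


print(dashboard([CapacityPlanningModule(), NetworkModule()]))
```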
DCIM and employee skills gap
In July 2013 the Data Centre Alliance raised concerns about skills shortages in the data center industry, with a real focus on ensuring prospective data center employees have the right critical-thinking skills to work in the field. Crucially, they need the ability to see the parts and the whole of the data center, as well as their inter-relations, simultaneously, in order to prevent a major outage.
DCIM is the software buttress to those skills and, whether developed in-house or by a vendor, is a crucial tool for managing the thousands of variables occurring daily within a data center.
For some data centers, their role within a business is to support the day-to-day workings of the company’s business operations, which in turn can have an impact on the company’s bottom line. For a colocation provider, their whole business is being a data center – their bottom line depends solely on the data center performing at optimal levels. Regardless of the type, data center infrastructure management, whether developed in-house or bought as a vendor solution, must be carefully evaluated and implemented.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
12:30p
WANdisco Acquires OhmData for HBase Availability Talent and Technology
WANdisco has acquired OhmData, a developer of Apache HBase database solutions.
HBase is a core component of the Hadoop stack designed for real-time read and write processing of large-scale data sets. It’s used in mission-critical applications at companies such as Facebook and Bloomberg.
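For readers unfamiliar with HBase, the short example below shows what real-time reads and writes look like from an application, using the open source happybase Python client; the table name, column family and a locally running HBase Thrift server are assumptions made for the example.

```python
# Illustrative HBase read/write from Python via the happybase client.
# Assumes a local HBase Thrift server and an existing table called
# 'events' with a column family 'd' -- both are assumptions for the example.
import happybase

connection = happybase.Connection("localhost")   # Thrift gateway, default port
table = connection.table("events")

# Real-time write: a single cell keyed by row key
table.put(b"user42#2014-07-02T10:00", {b"d:action": b"login"})

# Real-time read of that row
row = table.row(b"user42#2014-07-02T10:00")
print(row.get(b"d:action"))   # -> b'login'

connection.close()
```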
WANdisco provides enterprise-grade software solutions for globally distributed organizations. Its active-active replication technology helps satisfy continuous availability requirements at many Fortune Global 1000 companies.
The acquisition will help WANdisco make HBase continuously available. Terms of the deal were not disclosed.
The OhmData team will continue to develop WANdisco’s Big Data product portfolio and develop better integration between WANdisco’s products and the Hadoop platform.
OhmData has a patent-pending technology that addresses efficient database server power consumption. While that was an appealing piece to WANdisco, the talent that comes with the acquisition was also a major driver, the company said.
OhmData co-founders Alex Newman and Ryan Rawson will join WANdisco as part of the acquisition.
Newman is an open source Hadoop committer with expertise in big data, network protocol design, software security and application development. He also worked on HBase at Cloudera.
Rawson is a core committer on the Apache HBase open source project and a member of the HBase Project Management Committee. He previously worked as a software engineer at Google, where he worked on Bigtable, a forerunner of HBase, as well as at Amazon, where he developed synchronous cross-data center applications.
Their status as senior open source committers who can contribute to Hadoop’s code base will further improve integration between WANdisco’s products and the Hadoop platform.
“We have again demonstrated that WANdisco can attract the top talent in the industry,” said David Richards, CEO at WANdisco. “Alex and Ryan come to us having worked as core technologists at some of the biggest names in the industry, such as Google, Cloudera and Amazon.
“HBase is a critical component of the Hadoop stack and customers have told us that they need it to be continuously available. We are delighted to have added the OhmData team to our renowned Hadoop organization”.
The recently released second version of Apache Hadoop has been driving a wave of acquisitions in the big data space, as it has opened up a wealth of possibilities to differentiate services built on the platform. It has also helped kick off a major funding spree across providers of the various distributions.
MapR, Hortonworks and Cloudera have all raised cash and made acquisitions. Despite all of this activity, there also continues to be new entrants in the space, like Pepperdata.
More customers are using search and analytics capabilities in conjunction with Hadoop.
4:56p
Tokyo’s Tsubame-KFC Remains World’s Most Energy Efficient Supercomputer
After the most powerful supercomputers in the world were announced last week at the International Supercomputing Conference, the Green500 published its semi-annual list of the world’s most energy-efficient supercomputers, with Tokyo’s Tsubame-KFC immersion-cooled system remaining in the number-one spot.
Relying on the liquid cooling system from Green Revolution Cooling, which submerges electronics in dielectric oil for most efficient heat transfer, the Tsubame-KFC supercomputer is also still the only system above the 4,000 MFLOPS/watt mark. The Green500 list ranks according to the millions of floating point operations per second per watt, or MFLOPS/watt.
Tsubame-KFC is approximately 20 percent more efficient than Cambridge University’s Wilkes supercomputer, ranked number two.
Heterogeneous systems gaining
Like the June 2014 edition of the Top500 most powerful supercomputers, the top 10 supercomputers on the most recent Green500 remained mostly unchanged as well. The list shows an increasing trend of heterogeneous supercomputing systems at the top. The top 17 spots on the list are heterogeneous systems containing two or more types of traditional CPUs, GPUs and co-processors.
For instance, the first 15 systems pair NVIDIA Kepler K20 GPUs with Intel Xeon CPUs.
The Tsubame-KFC supercomputer uses 1U 4x GPU SuperServer systems and operates with high energy efficiency by submerging the SuperServer nodes in a CarnotJet cooling enclosure from Green Revolution. The dielectric oil absorbs the heat from the compute nodes and transfers it into water loops via heat exchangers, with the warm water finally releasing the heat into the air via a cooling tower.
Green500 asserts that if Tsubame-KFC’s energy efficiency could be scaled linearly to an exaflop supercomputing system (a quintillion floating-point operations per second), such a system would consume on the order of 225 megawatts of power.
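That extrapolation is easy to reproduce. Using an efficiency of roughly 4,390 MFLOPS/watt for Tsubame-KFC (an assumed figure consistent with the "above 4,000 MFLOPS/watt" statement earlier) and scaling linearly:

```python
# Reproducing the Green500 extrapolation: scale Tsubame-KFC's efficiency
# linearly up to one exaflop (10**18 FLOPS) and see what power that implies.
# The efficiency figure used here (~4,390 MFLOPS/watt) is an assumption based
# on the "above 4,000 MFLOPS/watt" statement in the article.
efficiency_mflops_per_watt = 4_390
efficiency_flops_per_watt = efficiency_mflops_per_watt * 1e6

exaflop = 1e18                      # one quintillion floating-point ops per second
power_watts = exaflop / efficiency_flops_per_watt

print(f"Implied power draw: {power_watts / 1e6:.0f} MW")
# -> roughly 228 MW, i.e. on the order of the 225 MW quoted above
```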
While NVIDIA has to worry about the next generation of Intel’s Xeon Phi (Knights Landing) next year, it has its own plans to improve efficiency, with the help of GPU-assisted 64-bit ARM SoCs for supercomputer workloads. NVIDIA’s Sumit Gupta notes that “future GPU accelerators achieve new levels of energy-efficient performance, and with the introduction of new, more efficient processor architectures for HPC, like ARM64, we expect to continue the steady advance to exascale.”
Green and powerful is a tall order
Illustrating how challenging it is to build a supercomputer that is both powerful and energy efficient, only one of the 10 most powerful supercomputers on the Top500, Piz Daint at number six, made it into the top 10 of the Green500 list (at number four).
Tsubame-KFC ranked number 437 on the June Top500 list. The number-one spot on the Top500, the Milkyway-2, ranked number 49 on the Green500 list, with 1,902 MFLOPS/watt.
5:25p
Racemi Raises $10M to Move Workloads Between Data Centers and Clouds
Racemi, provider of software for server provisioning and cloud migration, has raised $10 million in a Series C funding round led by new investor Milestone Venture Partners with participation from existing investors Harbert Venture Partners and Paladin Capital Group. The company said it will use the capital for product development and geographic expansion.
Racemi’s software enables businesses to move server workloads to and from the cloud, between clouds and across data centers. It has had strong partner momentum in the past year, bringing on board several data center providers, including Phoenix NAP, Windstream and IBM SoftLayer, among others, and says its software works with all major cloud platforms.
Businesses are becoming mobile and remote work is on the rise. Cloud is one way to enable workers to work securely from home or across devices. The need for collaboration and access anywhere has been driving many applications to a Software-as-a-Service model.
“There are two strong forces propelling Racemi growth — one is the overall markets’ increasing adoption of cloud computing and the other is market validation of Racemi’s technology for the rapid migration of workloads into public, private and hybrid clouds,” said Morgan Rodd, of Milestone Venture Partners.
Racemi’s server imaging technology quickly clones server workloads between dissimilar physical, virtual or private cloud platforms, substantially shortening the migration process. The software captures the entire server stack, including operating system, applications and configuration, and automatically converts it to run on the target platform by applying the necessary tools and drivers during the migration process.
This ensures existing workloads are migrated in their time-tested configuration, which leads to better reliability and supportability.
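Conceptually, such a migration pipeline has three stages: capture the full stack, adapt it to the target platform, then deploy it there. The outline below is an illustrative Python sketch of those stages; the stage names and data structure are assumptions, not Racemi's implementation.

```python
# Conceptual outline of an image-based workload migration pipeline:
# capture the full server stack, adapt it for the target platform,
# then deploy. Purely illustrative; not Racemi's actual software.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ServerImage:
    hostname: str
    operating_system: str
    applications: List[str]
    config_files: List[str]
    injected_drivers: List[str] = field(default_factory=list)


def capture(hostname: str) -> ServerImage:
    """Snapshot the source server's OS, applications and configuration."""
    return ServerImage(hostname, "CentOS 6.5", ["nginx", "postgresql"],
                       ["/etc/nginx/nginx.conf"])


def convert(image: ServerImage, target_platform: str) -> ServerImage:
    """Inject the drivers/tools the target platform needs (e.g. virtio for KVM clouds)."""
    image.injected_drivers.append(f"{target_platform}-guest-drivers")
    return image


def deploy(image: ServerImage, target_platform: str) -> None:
    print(f"Booting {image.hostname} on {target_platform} "
          f"with drivers {image.injected_drivers}")


image = convert(capture("app-01"), "example-cloud")
deploy(image, "example-cloud")
```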
“The increasing adoption of cloud computing presents a tremendous opportunity for Racemi and this round will help accelerate our efforts to enter new markets and deliver the truly unique capabilities of our solution,” Racemi CEO Lawrence Guillory said. “We are continuing to build on our momentum and strong growth.”
Racemi’s previous $7 million round came in July 2012.
7:23p
Google, Microsoft Partner With Network Vendors on New Ethernet Standard
Google and Microsoft have teamed up with Arista Networks, Broadcom and Mellanox to create a consortium around a new Ethernet standard for connecting servers to top-of-rack switches in data centers to support the next generation of cloud computing network requirements.
The 25 Gigabit Ethernet Consortium’s aim is to promote quick development of higher-bandwidth access networking equipment needed to support workloads in hyper-scale data centers whose requirements will soon surpass the 10 Gigabit per second Ethernet and 40 GbE protocols being supported today, according to the members.
The specification is available royalty-free to any vendor that joins the consortium. It prescribes single-lane 25 GbE and dual-lane 50 GbE link protocols.
The consortium’s goal is to define the standard for physical and media access control layers to enable rollout of compliant equipment over the next 12 to 18 months.
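For context on why single- and dual-lane links matter: 40 GbE bonds four 10 Gbps lanes, while the new spec runs one or two 25 Gbps lanes (the same per-lane signalling rate used in four-lane 100 GbE). The quick comparison below works through the per-lane arithmetic; the lane counts are standard Ethernet background rather than anything from the consortium's announcement.

```python
# Bandwidth-per-lane comparison: the 25G/50G spec gets more bandwidth out of
# each switch/server lane than 10GbE or 40GbE, which is why it maximizes
# switch radix (ports per chip) for a given amount of I/O.
standards = {
    "10 GbE": {"lanes": 1, "gbps_per_lane": 10},
    "40 GbE": {"lanes": 4, "gbps_per_lane": 10},
    "25 GbE": {"lanes": 1, "gbps_per_lane": 25},
    "50 GbE": {"lanes": 2, "gbps_per_lane": 25},
}

for name, s in standards.items():
    total = s["lanes"] * s["gbps_per_lane"]
    print(f"{name}: {s['lanes']} lane(s) x {s['gbps_per_lane']} Gbps = {total} Gbps")
# A 25 GbE server port consumes one lane instead of the four needed for 40 GbE,
# so the same switch ASIC can serve far more ports at 2.5x the per-lane speed.
```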
Companies like Google and Microsoft, which build massive data centers to support delivery of their services globally, will benefit from both capital and operational expense savings since the standard will enable them to reduce overprovisioning of resources.
“The new Ethernet speeds proposed by the Consortium give superior flexibility in matching future workloads with network equipment and cabling, with the option to ‘scale as you go,’” said Yousef Khalidi, a distinguished engineer at Microsoft.
“In essence, the specification published by the 25 Gigabit Ethernet Consortium maximizes the radix and bandwidth flexibility of the data center network while leveraging many of the same fundamental technologies and behaviors already defined by the IEEE 802.3 standard.”
Networking hardware vendor Arista and silicon vendors Broadcom and Mellanox are major suppliers for the hyper-scale data center operators.
Mellanox and Broadcom both have switch designs based on Facebook’s Open Compute specifications. Arista, which went public last month, lists Facebook, Microsoft, Yahoo and eBay as customers.
8:00p
Zayo’s zColo Enters Atlanta Market With AtlantaNAP Acquisition
zColo, the colocation division of Zayo Group, has acquired service provider Colo Facilities Atlanta and its AtlantaNAP data center, which adds 42,000 square feet of usable data center space inside a 72,000-square-foot facility to Zayo’s data center fleet, extending it to a total of 28 locations.
This is zColo’s first entry into the Atlanta colocation market, although the parent company Zayo has an Atlanta fiber network that spans more than 600 miles. The company has been steadily expanding its national data center presence via acquisition.
The company recently acquired Dallas provider CoreXchange, which added a standalone data center as well as a 12,000 square foot suite at the Dallas Infomart to the fleet. About one year ago the company entered the Austin, Texas, market by purchasing local provider Core NAP.
Zayo’s total data center footprint is now more than 570,000 square feet.
AtlantaNAP, located at 1100 White St. SW, offers 5 megawatts of fully redundant 2N UPS power, serviced from diverse utility feeds. The site will offer Zayo’s full suite of lit bandwidth services and access to Zayo’s metro and long-haul dark fiber backbones.
Zayo’s roots are in bandwidth, a space recently marked by constant consolidation, and the company is one of the players taking an active role in the trend (its recent acquisition of French network operator Neo Telecoms is a good example). zColo is an additional line of business that gives Zayo another way to monetize its network assets.
Atlanta colocation market attributes are solid
Atlanta is a healthy data center market to expand into. There is a wealth of service providers and Fortune 500 companies, comparatively low power costs, a large IT workforce, a major airport and a generally favorable business climate.
Greg Friedman, vice president of zColo, said, “Atlanta is a high-demand colocation market and a growing hub for healthcare, technology, and large enterprises.”
AtlantaNAP “will now be able to leverage Zayo’s existing fiber footprint to provide a connectivity-driven colocation offering in Atlanta for new and existing customers.”
The biggest player in Atlanta is QTS, which also has been trying to diversify beyond its core market. There are also wholesale providers like T5 and ByteGrid, as well as mixed services players like Peak 10, Colo ATL and DataSite, which offers what it calls hybrid colo, where customers can own or share pieces of the critical infrastructure that they choose.
Two major facilities in Atlanta are 55 Marietta and 56 Marietta, a carrier hotel owned and operated by Telx that houses Atlanta’s Internet Exchange Point (IXP) and is home to several other colo players.