Data Center Knowledge | News and analysis for the data center industry
 

Friday, November 7th, 2014

    1:00p
    CERN’s OpenStack Cloud to Reach 150,000 Cores by 2015

    PARIS – Building the Large Hadron Collider itself was doubtless a massive feat, but the machine – a nearly 17-mile ring more than 300 feet underground on the Franco-Swiss border – is useless without the huge data storage and computing capacity needed to analyze the enormous amount of data it generates as subatomic particles are recorded smashing into each other at nearly the speed of light.

    That computational power at the European Organization for Nuclear Research (CERN) is delivered through four cloud environments the organization’s IT team created using OpenStack, the suite of open source cloud software that is quickly becoming the industry standard for building clouds. Tim Bell, CERN’s infrastructure manager who oversees that team, spoke about the organization’s cloud during a keynote at this week’s OpenStack summit in Paris.

    CERN currently has four OpenStack clouds spread across two data centers – one in Meyrin, Switzerland, where Bell’s office is located, and the other in Budapest, Hungary, a remote business continuity site for the primary Swiss facility. The largest of the four has about 70,000 compute cores on about 3,000 servers; the other three comprise a total of about 45,000 compute cores.

    Bell’s team started building its cloud environment in 2011 using the Cactus release of the open source cloud software. They went into production with the Grizzly release in July 2013, and today all four clouds run the Icehouse release. Bell said he has about 2,000 additional servers on order to increase the cloud’s capacity, since the upcoming increase in the energy of the particle beams inside the collider means the machine will generate even more data than the roughly 1 petabyte per day it already produces when running at current capacity.

    The architecture of CERN’s cloud is a single system that scales across the two data centers. Each data center, in Switzerland and in Hungary, has clusters of compute nodes and controllers for those clusters. Both controller “cells” report to a master controller cell in Switzerland, and upstream from the master controller cell sits a load balancer.
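
    The cell layout described above can be sketched as a simple hierarchy. The snippet below is purely illustrative – it is not CERN’s tooling, and the split of cores between the two child cells is an assumption made for the example; only the rough totals come from the article.

        # A minimal, illustrative model (not CERN's actual code) of the cell
        # hierarchy: a load balancer fronts a master API cell in Switzerland,
        # which delegates to child compute cells in Meyrin and Budapest.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Cell:
            name: str
            location: str
            compute_cores: int = 0
            children: List["Cell"] = field(default_factory=list)

            def total_cores(self) -> int:
                return self.compute_cores + sum(c.total_cores() for c in self.children)

        # Core counts are the article's rough totals; how they are split
        # between the child cells is assumed for illustration only.
        master = Cell("api-cell", "Meyrin, CH", children=[
            Cell("compute-cell-meyrin", "Meyrin, CH", compute_cores=70_000),
            Cell("compute-cell-budapest", "Budapest, HU", compute_cores=45_000),
        ])

        print(master.total_cores())  # ~115,000 cores today, heading toward 150,000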

    An OpenStack cloud is never built using only components of the OpenStack suite, and CERN’s cloud is no exception. The other tools in its box (all open source) are:

    • Git: a software revision control system
    • Ceph: distributed object storage that runs on commodity servers
    • Elasticsearch: a distributed real-time search and analytics system
    • Puppet: a configuration management utility
    • Kibana: a visualization engine for Elasticsearch
    • Foreman: a server provisioning, configuration, and monitoring tool
    • Hadoop: a distributed computing architecture for doing big data analytics on commodity-server clusters
    • Rundeck: a job scheduler
    • RDO: a package of software for deploying OpenStack clouds on Red Hat’s Linux distribution
    • Jenkins: a continuous integration tool

    Bell’s team recently devised a federated cloud system that currently works across CERN’s cloud and public cloud resources offered by Rackspace, whose cloud is also built on OpenStack. Users can rely on a single federated identity across the Rackspace cloud and CERN’s private cloud. In the future, Bell expects more public clouds and other research organizations’ clouds to be able to federate with CERN’s.

    CERN’s OpenStack environment is already massive, and it is going to become even bigger when the collider is upgraded to double its energy in 2015, so scientists who have already used the LHC to confirm the existence of the Higgs boson (about 50 years after Peter Higgs and five other physicists theorized the “God particle’s” existence) can continue looking for answers to some of the most fundamental questions about the universe. Why is gravity so weak? Are there dimensions we’re not aware of? Do gravitons, the still-hypothetical carriers of gravity, exist?

    Those are things physicists worry about when they wake up in the morning, Bell said. What fills his mornings with worry is how to make sure those physicists have an IT environment capable of crunching through the amount of data required to answer questions of that magnitude.

    4:00p
    ScienceLogic’s Free Tool Maps AWS Cloud Resources and Interdependencies

    IT and cloud monitoring provider ScienceLogic introduced MapMyCloud.net. The service allows enterprises to automatically map and visualize all of their Amazon Web Services cloud resources and associated technology interdependencies in real time. The free service leverages the company’s CloudMapper technology.

    MapMyCloud runs as an Amazon Machine Image inside AWS and helps organizations get a grip on the IT assets they have running there. The service visualizes what a company is running across both its private and public AWS cloud environments.
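
    As an illustration only – this is not ScienceLogic’s CloudMapper code – the sketch below shows the kind of raw AWS inventory and dependency data such a mapping tool has to collect, using the boto3 library; the region and credentials are assumptions.

        # Illustrative sketch: gather each EC2 instance's security groups,
        # EBS volumes, and subnet -- the raw material for a dependency map.
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an example

        dependencies = {}
        for reservation in ec2.describe_instances()["Reservations"]:
            for instance in reservation["Instances"]:
                dependencies[instance["InstanceId"]] = {
                    "security_groups": [g["GroupId"] for g in instance.get("SecurityGroups", [])],
                    "volumes": [m["Ebs"]["VolumeId"]
                                for m in instance.get("BlockDeviceMappings", [])
                                if "Ebs" in m],
                    "subnet": instance.get("SubnetId"),
                }

        print(dependencies)  # feed this into whatever visualization layer you use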

    As AWS assets accumulate, cloud management becomes difficult, with many resources deployed and interconnected. Cloud sprawl is a real phenomenon: it not only leaves capacity unused and stranded, it also obscures the big picture.

    Growing and increasingly complex deployments make it difficult to figure out what can be added or removed safely, and there is a significant financial impact to not understanding the big picture in cloud management.

    In addition to intelligent mapping, the service provides a reporting history that helps organizations understand past changes and make better planning and investment decisions going forward.

    It’s a free service and is the result of customer feedback. “It’s all about providing education for anybody — current customers and new customers alike,” said Jeremy Sherwood, vice president of strategy at ScienceLogic. “It encompasses everything around visibility. It gives visibility into the interdependencies of stacks and services. It covers the big, broad spectrum. It’s about showing what you might not know.”

    MapMyCloud extends the company’s AWS capabilities for existing customers and newcomers.

    The company is tackling the biggest cloud first and is currently working on expanding the service to Microsoft Azure and Google Cloud Platform.

    ScienceLogic provides all-in-one centralized IT monitoring, and the free service may be a good way to attract new customers to the paid solution.

    It works across hybrid infrastructures, including the data center, cloud, system, and network. “We provide the business intelligence so you make intelligent orchestration solutions,” said Sherwood.

    There are also plenty of individual, complementary “niche” tools – ticketing systems, for example. If a company is already using another provider’s specific tool or point system, it can continue using it alongside ScienceLogic if it chooses. “We don’t displace, we integrate,” said Sherwood. An open architecture and a REST API mean a company doesn’t need to rip and replace.

    4:30p
    Friday Funny: Pick the Best Caption for Giant Pumpkin

    Halloween may have come and gone, but Friday Funnies are always in season. Help us complete this week’s Kip and Gary cartoon by scrolling down to vote!

    Several great submissions came in for last week’s cartoon – now all we need is a winner! Help us out by scrolling down to vote.

    Here’s how it works: Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon and we challenge our readers to submit a humorous and clever caption that fits the comedic situation. Then we ask our readers to vote for the best submission and the winner receives a signed print of the cartoon!

    For previous cartoons on DCK, see our Humor Channel. And for more of Diane’s work, visit Kip and Gary’s website!

    5:00p
    VMware’s VP of Engineering Tessel Joins Docker

    Marianna Tessel, who until recently worked as vice president of engineering at VMware, has joined Docker, the San Francisco-based startup whose software helps enterprises streamline the process of developing, testing, and deploying software quickly, the way Internet giants like Google and Facebook do.

    Docker is an open source technology, but Docker the company is the primary driving force behind it. At its core is the concept of application containers, which enable an application to be easily deployed on any type of infrastructure, be it a laptop, a public cloud, or a bare-metal server cluster.
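
    To make the idea concrete, here is a minimal sketch – not from Docker or the article – using the Docker SDK for Python; the nginx image and port mapping are just examples. The same few lines work whether the Docker host is a laptop, a public cloud VM, or a bare-metal server.

        # Run the same container image on whatever Docker host the environment points at.
        import docker

        client = docker.from_env()

        # Start a stock web server image in the background, mapping port 80 to 8080.
        container = client.containers.run("nginx:latest", detach=True,
                                          ports={"80/tcp": 8080})
        print(container.id)

        # Tear it down when finished.
        container.stop()
        container.remove()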

    Docker has been around for only about a year and a half but has already gained massive popularity and is being heralded as the next big thing in IT and cloud computing. Many expect its impact on the IT and software development industry to be comparable to the impact VMware’s server virtualization technology had when it created a way to turn a single physical server into multiple virtual machines.

    Tessel, a former captain in the Israel Defense Forces, had been at VMware since 2008 before joining Docker this month as senior vice president of engineering. At VMware, she oversaw the company’s collaborations with partners in the compute, storage, networking, and cloud categories, helping grow the massive ecosystem around its technology.

    Docker is focused on growing an ecosystem around itself as well. The company’s CEO Ben Golub told us in an earlier interview that growing a “huge ecosystem” was crucial for it to succeed. Ecosystem building is the reason Docker decided to open source its technology, he said.

    “We are confident that her familiarity with the landscape and experience in scaling engineering teams will enable us to take full advantage of our accelerated enterprise opportunity, while putting systems in place that drive the ongoing success of our Docker ecosystem partners,” Golub said in a statement commenting on Tessel’s appointment.

    In September, Docker raised its Series C funding round, beefing up its war chest with an additional $40 million of venture capital.

    5:30p
    VDI and the Cloud – Understanding the Scenarios

    Initially, virtual desktops were touted as the replacement for physical desktops and a simple transition to a new type of end-point architecture. Managers and admins were promised a whole new kind of environment that would make their organizations more BYOD-friendly and assist with the move to Windows 7 (and now Windows 8). The problem became clear when underlying infrastructure components began to suffer: running a VDI platform required more resources than expected. Greater demands on network bandwidth, storage and compute forced organizations to rethink exactly how they were going to deploy VDI and make it work.

    So, let’s take a new approach to the VDI conversation. Instead of bashing the technology or dwelling on what it needs to work properly, we can examine where exactly VDI fits and where it has been most successful:

    • Labs. Labs, kiosks and any other environment where many users access the same hardware are great use cases for VDI. Once a user is done with the end-point, the OS is reset to its pristine state. This is perfect for healthcare laboratories, task workers, libraries and even classrooms. Several large educational VDI deployments are already under way as thin/zero clients begin to replace older fat clients. Furthermore, these lab environments can be hosted entirely in either a private or public cloud environment. By using non-persistent cloud-based desktops, administrators can quickly provision and de-provision these labs.
    • Testing and Development. What better way to test out an application, service or new product than on an efficiently provisioned VDI image? Administrators can deploy and test new platforms within “live” environments without having to provision hardware resources. Once testing is complete, they can simply spin down the VDI instance and roll out the new update, application or desktop environment. This can be done either internally or through a cloud provider.
    • Application compatibility. Recent platform updates within organizations have pushed a move to 64-bit technologies, and some older applications simply won’t run on such a platform. So administrators have been forced to get creative, and this is where VDI can help. For those select finicky applications, VDI within a private cloud environment can be a lifesaver: virtual desktops can run as 32-bit or 64-bit instances, allowing administrators to continue supporting older apps.
    • Contractors and outside employees. Some organizations have numerous contractors working on site, and a great way to control contractor access is through a private cloud VDI platform. Give a user access via controlled AD policies and credentials and allow them to connect to a virtual desktop. From there, administrators can quickly provision and de-provision desktop resources as needed for a given contractor. This allows outside consultants to bring in their own laptops, access centralized desktops and do their jobs. Then, once they are done, simply power down or reset the VM. This creates a quick, easy-to-manage contractor VDI environment.
    • Controlled BYOD. Application virtualization aside, delivering desktops to BYOD devices can be a great solution for the end-user. Whether they’re working from home, internally or even internationally, users can be presented a desktop with all of their settings intact. IT consumerization has created a true demand for BYOD, and this is where VDI can help: the end-point never retains the data, and the desktop and applications are always controlled at the data center level.
    • Heavy workload delivery. That’s right – you read that correctly. New technologies, like Nvidia GRID, allow powerful resource sharing while still using a single GPU. Solutions like GRID essentially accelerate virtual desktops and applications, allowing enterprise IT to deliver true graphics from the data center to any user on the network. Unlike in the past, you can now place more heavy resource users on a multi-tenant blade and GPU architecture. This opens up new possibilities for those few users who always needed a very expensive end-point.

    VDI can be very successful if deployed in the right type of environment. One of the first steps in evaluating a VDI solution is to understand how this type of platform will work within your organization. Is there a use case? Is there an underlying infrastructure that will be able to support a VDI platform? By identifying the direct fit for VDI within an organization, the entire solution can deliver some real benefits.

    6:00p
    Microsoft Opens Zero-Carbon Methane-Powered Data Center In Wyoming

    Microsoft has opened a zero-carbon, biogas-powered data center in Wyoming that combines modular data centers with an innovative way to leverage waste from a nearby water treatment facility. The data center is independent of the power grid, an achievement that certainly warrants the “cable-cutting” ceremony that was held in its honor.

    This is the culmination of an ambitious research project from Microsoft. It serves as a research center for biogas and fuel cell technology, two alternatives to drawing from the power grid. FuelCell Energy of Connecticut developed the fuel cell technology that converts otherwise unused biogas into ultra-clean power. Siemens worked with Microsoft and FuelCell to engineer and install power monitoring equipment for the data center.

    The data plant doesn’t look technologically advanced and is arguably underwhelming visually: the shipping containers scattered around look more akin to a trailer park than a next-gen data center. However, the ideas beneath the data plant are far more advanced than what most of the industry is doing today.

    Data center containers filled with servers are deployed next to a water treatment plant in Cheyenne, Wyoming. The servers are powered using electricity from a fuel cell running on methane biogas from the plant. The fuel cell uses an electrochemical reaction to generate electricity and heat, and because there is no combustion, virtually no air pollutants are released. Each fuel cell generates around 300kW of renewable power, of which the data center uses about 200kW.

    The data center operates entirely off the power grid, which in itself is a major feat. It not only makes use of waste, it converts an environmentally dangerous byproduct into something positive: methane is normally a damaging greenhouse gas.

    Siemens’ power monitoring system measures the performance and energy output of the fuel cell so that power delivery remains consistent. It tracks the amount of biogas being sent to the fuel cell, the conversion to usable energy, and the fuel cell’s output to ensure that enough electricity is generated throughout the process to reliably power Microsoft’s data center. It also includes a predictive demand alert capability, so data center operators are made immediately aware of any power quality or energy demand issues. The system is an integral part of making biogas and fuel cells feasible in an industry that needs dedicated, predictable power.

    A coalition of industry, the University of Wyoming, the Wyoming Business Council, Cheyenne LEADS, the Cheyenne Board of Public Utilities, the Western Research Institute, and state and local government partners brought the project to fruition. University of Wyoming students will have access to the plant for further research.

    “This project has been a collaboration of many organizations. We are very proud to have had the opportunity to be a part of this fascinating project,” said Randy Bruns, CEO of Cheyenne LEADS.

    Putting Wyoming on the data center map

    The idea was initially proposed in 2010. Microsoft has made significant data center investments in the state, including a major $274 million expansion announced in April of this year. The company’s investment in Wyoming has put the state on the data center map and is approaching half a billion dollars.

    “Growing Wyoming’s technology sector has been a priority and Wyoming is seeing results,” Governor Matt Mead said. “This alternative energy project is not only a zero-carbon data center, it is more. It is a laboratory for biogas and fuel cell research. Wyoming is on the cutting edge.”

    The State Loan and Investment Board, made up of the five statewide elected officials, approved a $1.5 million Wyoming Business Council Business Ready Community grant request from the city of Cheyenne in 2012 to help fund the $7.6 million plant. Microsoft covered the remaining cost.

    6:30p
    Research Firm 451 Sets Cloud Pricing Benchmark

    Establishing a cloud pricing benchmark is hard work, given the different configurations, pricing and services across cloud offerings. 451 Research has attempted to index cloud pricing, creating a benchmark for the hourly price of a typical Web application.

    The average hourly price currently stands at $2.56. The average cost for “hyperscalers” – the term for the biggest public clouds, Amazon Web Services, Microsoft Azure and Google Compute Engine – is slightly lower, at $2.36 per hour.

    The benchmark was established through quotes from several cloud providers and creates a baseline for examining how pricing changes over time. Aggressive price cuts continue across the industry, whether from pure-play public clouds like Google and AWS, targeted clouds like ProfitBricks, or big service provider clouds like CenturyLink, to name only a few. The differences between clouds make for apples-to-oranges comparisons, and the nature of the cuts makes it difficult to gauge how much pricing is changing industry-wide.

    The benchmark tries to reflect pricing in a real-world situation for a typical application and acts as a measuring stick going forward. Certain clouds still remain better for certain apps and pricing is dependent on specific needs, but the benchmark gives cloud users a starting point for their pricing assessments.

    While there appears to be a race to the bottom in pricing among the hyperscalers – occasionally pejoratively referred to as commodity cloud – other cloud providers that focus on value-added services still need to keep the pricing of their raw resources competitive. The index gives an idea of the pricing disparity.

    451 is simplifying its Virtualization Price Index (VPI), which represents the average hourly price of a basic three-tier Web application based on quotes from a range of hosting and cloud service providers. The VPI is currently $0.73 across all providers, whereas the hyperscalers are slightly more expensive at $0.78.

    “The current average cost of running a multi-service cloud application is $2.56 per hour, or around $1,850 per month, which includes bandwidth, storage, databases, compute, support and load balancing in a non-geographical resilient configuration,” said Owen Rogers, senior analyst for 451 Research’s Digital Economics unit.

    “At this hourly price for an application that potentially could deliver in excess of 100,000 page views per month, it’s easy to see how cloud is a compelling proposition for enterprises. Our research indicates that savings of up to 49 percent can be achieved by committing to a minimum usage level, so enterprises should consider alternatives to on-demand if they wish to secure cost savings.”
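
    The arithmetic behind those figures is straightforward; the sketch below assumes a 30-day month and simply applies the quoted 49 percent maximum discount.

        # Rough math behind 451's quoted figures (30-day month assumed).
        hourly_rate = 2.56                      # benchmark price per hour
        monthly = hourly_rate * 24 * 30         # ~$1,843, i.e. "around $1,850 per month"

        max_savings = 0.49                      # up to 49% off for committed usage
        committed_monthly = monthly * (1 - max_savings)

        print(f"on-demand: ${monthly:,.0f}/month")
        print(f"committed: ${committed_monthly:,.0f}/month (best case)")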

    The Cloud Price Index (CPI) is based on cloud quotes from a number of providers – including AWS, Google, Microsoft Azure, Swisscom, Verizon, UpCloud, Gandi, Lunacloud, Internap, Peak10 and Windstream.

     

    7:00p
    Apple Acquires Cloud Infrastructure Startup Union Bay Networks: Report


    This article originally appeared at The WHIR

    Apple has reportedly acquired stealth cloud infrastructure startup Union Bay Networks and has opened a Seattle engineering office, according to The Seattle Times. An Apple spokesman confirmed the new office, but none of the involved parties has commented directly on the Union Bay acquisition.

    Several employees of the startup have changed their LinkedIn profiles to reflect a move to Apple, and one even posted a hiring announcement for the Seattle area before removing it. The deleted post said that Apple is “looking for talented multidisciplinary engineers to design and develop the core infrastructure services and environments driving every online customer experience at Apple ranging from iCloud to iTunes.”

    Union Bay was formed by F5 Networks veterans and backed by venture capital firms Madrona, Greylock and Divergent to “enable the next generation of networking for cloud computing and software defined datacenters,” according to its website. Union Bay’s email address is no longer in service, lending further credence to the reports that it has been acquired.

    Union Bay was formed in May 2013, and secured $1.85 million in seed funding last year, according to GeekWire.

    Apple is ramping up its cloud services portfolio with offerings including mobile cloud solutions for enterprises, jointly developed with IBM, and a June iCloud Drive upgrade. However, it has had to deal with PR stumbles related to iCloud hacks, which resulted in a user data breach, and to Chinese government access to user accounts.

    Thirty LinkedIn users currently identify themselves as software engineers for Apple working in the Seattle area.

    Numerous tech companies have gained or expanded presences on Microsoft’s home turf, including GoDaddy, which officially opened a Seattle office a year ago.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/apple-acquires-cloud-infrastructure-startup-union-bay-networks-report

    7:30p
    CloudMGR Creates New Module for WHMCS to Make AWS Integration Easier


    This article originally appeared at The WHIR

    Integrated AWS platform CloudMGR has released a new integration module for WHMCS, the company announced this week. Integrating WHMCS’s automation solution with AWS will make it easy for web hosts and IT service providers to sell AWS, according to CloudMGR.

    CloudMGR, an AWS Technology partner, says the module deploys in minutes and allows management of server images, storage and backups, and includes a product wizard in both the admin and client areas.

    The module allows service providers to sell and provision AWS EC2 and S3. Admins can also create products from pre-existing images, including WordPress, Drupal, and Joomla. It is offered out of AWS data centers in the US, Australia, the EU, Singapore, Tokyo, South America and Germany.

    Previously, CloudMGR could be linked to WHMCS, but customer feedback led the company to build an API to support its first module.

    OnApp responded to service provider demands for WHMCS integration with a module released in January. As public cloud and AWS adoption continue to grow quickly, automation should enable more hosting, agency, and IT service providers to more easily leverage AWS.

    “By extending the functionality to include cloud management tools, CloudMGR provides hosting companies and service providers a unique opportunity to extend their product portfolios and tap into cloud hosting for their businesses,” said Matt Pugh, CEO of WHMCS.

    Leveraging AWS by providing visibility or management services is a growing trend. AWS partner 2nd Watch announced it had upgraded its AWS Enterprise App management service with performance monitoring by New Relic in October. Cloudyn launched a multi-cloud cost management tool for its largely AWS customer base last week.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/cloudmgr-creates-new-module-whmcs-make-aws-integration-easier

