Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, November 4th, 2014

    1:00p
    Immersion Cooling Comes to the Hosting Market

    Submerging servers in fluid isn’t for everyone. But immersion has proven effective for cooling high-density server configurations, including high-performance computing clusters for academic computing, seismic imaging for energy companies, and bitcoin mining.

    Now CH3 Data is making immersion cooling available to the broader web hosting market. The service provider just completed work on a new data center in Austin, Texas, featuring servers submerged in a dielectric fluid. CH3 Data was founded in 2011 as Midas Green Technologies and has since split into two companies, with CH3 handling data center services while Midas Green Technologies focuses on commercializing the immersion cooling technology.

    In addition to supporting extreme power density, immersion cooling has the potential to produce significant savings on data center infrastructure, allowing users to operate servers without a raised floor, computer room air conditioning (CRAC) units or chillers.

    Chris Laguna, data center manager for CH3 Data, provides some additional details on CH3’s use of immersion cooling and the company’s new facility. Here’s a Data Center Knowledge Q&A with Laguna:

    Data Center Knowledge: What prompted you to adopt immersion cooling for your business?

    Chris Laguna, CH3: Our CTO Chris Boyd started looking for cooling alternatives back in the mid-2000s. During this period, our electric provider increased rates substantially, so Boyd started thinking about ways to improve our data center efficiency. Better heat transfer was an obvious way, so he looked into liquid-to-the-server solutions. These didn’t exist in a useful form at the time, but gamers had liquid-to-the-chip systems available in the market. However, these did not fit the ideal 1U (1.75”) or less form factor, so deployment of massive water-cooling systems for individual servers didn’t make financial sense. In addition, “dripless” connectors were not guaranteed to be dripless for enough cycles of server installation and refresh over the life of a data center to make deployment practical.

    In March of 2011, we formed the company Midas Green Technologies in Austin, Texas, and became the first commercial data center in the world running servers immersed in tanks of mineral oil (immersion cooling has been used as far back as the 1940s, and more recently in the Crays of the 1980s, but never in a hosted data center form). Over the first eighteen months of operation, we identified changes we felt were imperative to make the system a proper data center solution. We decided that if we were going to improve things, we would need to do it ourselves and use our team’s extensive knowledge and experience in data center operations to better the product. To accomplish this, we have worked with several engineering and legal firms, and simply put, we believe we have a better way to build a data center.

    Midas Green Technologies LLC filed its IP and today has 1 patent pending with over 28 claims. That design has been realized and is currently deployed in CH3 Data’s new facility where we continue to run servers and other devices submerged in our tanks.

    DCK: What are the advantages and challenges in using this approach?

    Laguna: The advantages are quite abundant. During the summer of 2014, we were able to maintain a PUE of 1.09 in Austin, Texas, during 95F+ heat and relatively high humidity. Our GPU and mining clients are especially in love with the immersion-cooled environment for the densities and over-clocking benefits it provides.
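
    For context, PUE (power usage effectiveness) is simply total facility energy divided by the energy consumed by the IT equipment itself. The short Python sketch below shows the arithmetic behind a 1.09 figure; the load numbers are illustrative assumptions, not CH3’s actual measurements.

        def pue(total_facility_kw, it_equipment_kw):
            """Power usage effectiveness: total facility power / IT equipment power."""
            return total_facility_kw / it_equipment_kw

        # Illustrative numbers only, not CH3's measurements. A PUE of 1.09 means
        # cooling, power distribution and other overhead add only about 9 percent
        # on top of the IT load.
        it_load_kw = 1000.0      # hypothetical IT load
        overhead_kw = 90.0       # hypothetical cooling and distribution overhead
        print(pue(it_load_kw + overhead_kw, it_load_kw))  # -> 1.09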

    As for challenges, we do not yet have a broad range of immersion-certified servers and networking gear, but this is changing very rapidly. We primarily use Super Micro servers, but Dell has some exciting opportunities that will give us and our customers many more options in the near future.


    Several of the immersion cooling tanks being used at CH3. Each is filled with a dielectric fluid, which is similar to mineral oil. It transfers heat almost as well as water, but doesn’t conduct an electric charge. (Photo: CH3)

    DCK: We don’t see many examples of immersion being used in multi-tenant or hosting environments. How has the immersion cooling approach been accepted by customers? Is education an important component of your sales process?

    Laguna: Our customers love it, but surprisingly not for the reasons you would expect. When searching for a reliable data center solution, many factors come into play: price, SLAs, security, and so on, and we can check all those boxes. We have a great customer base who use us because of the environmentally friendly nature of immersion cooling. However, with the majority of our customers, it ultimately comes down to the bottom line. It is a divide that can almost be cleanly drawn between Main Street and Wall Street. We are unique in that we offer a “green” solution that will actually cost you LESS.

    DCK: Does immersion cooling create any considerations on the staffing front? Is it easy to find staff with experience with this equipment, or is it primarily a training issue?

    Laguna: At the end of the day we are still performing the same functions as a traditional data center (except we never have to change out server fans, because we do not use them!), so our staff only needs a small amount of additional training to work with servers in the immersion environment. That training mostly covers handling and processes; performing server repairs remains the same.


    A look at some of the piping that supports the liquid cooling system at CH3 Data in Austin. (Photo: CH3)

    4:00p
    OpenStack COO: Days of AWS as Cloud Monolith are Numbered

    PARIS – The culminating sequence of Stanley Kubrick’s 2001: A Space Odyssey has Dave Bowman, the film’s protagonist, transform while lying in bed in front of Kubrick’s iconic black stone monolith. OpenStack Foundation COO Mark Collier used a still from the bedroom scene as an image to represent Amazon in his keynote Tuesday morning at the OpenStack summit in Paris.

    “There is a monolith in the room,” Collier said, the monolith in the background featuring Amazon’s curved-arrow logo. While there are many public cloud providers, including a couple of giants (Microsoft and Google), according to Collier Amazon remains a single massive monolith in the cloud market. But this state of affairs is going to change in the very near future, he said.

    “There’s not going to be one cloud strategy that’s going to work for everybody,” Collier said from the stage at the Palais des congrès de Paris. One vendor is simply not going to cut it, and the number of OpenStack clouds that already exist in many places where AWS does not have data centers is proof, according to him.

    “That’s pretty self-evident at this point, or else we wouldn’t be seeing clouds all over the planet,” he said, showing a map of the world indicating about 20 OpenStack cloud locations and about half that number of AWS regions.

    AWS v. OpenStack is a choice rarely faced

    It’s not a simple dichotomy, however. AWS and OpenStack are two different things.

    The former is a company that provides a specific set of services from a proprietary platform it has designed on its own, and the latter is an open source cloud architecture, which in its current state emerged from an industry-wide collaboration effort. Some companies have used OpenStack to build cloud service businesses, but there are also many examples of companies using it to stand up private clouds for their own use.

    Cloud ambitions of companies like Microsoft, Google, IBM SoftLayer, and CenturyLink Technology Solutions are another reason the AWS v. OpenStack dichotomy doesn’t exist.

    Some customers do face the choice of AWS versus OpenStack in some situations. They could be choosing between AWS and another public cloud service that’s built on OpenStack (such as Rackspace’s), or they could be choosing between AWS and an internal cloud of their own.

    Collier’s point was that there is definitely room in the cloud market for players other than Amazon, which he said he had nothing against. “I actually think that they’re a very impressive technology company.”

    OpenStack creates freedom of choice

    In situations where it is a matter of choosing between the two, OpenStack wins on flexibility of the hardware deployed underneath. One size doesn’t fit all, but a few sizes don’t fit all either, as Wes Jossey, head of operations at Tapjoy, pointed out in his presentation during the morning keynote.

    Tapjoy, which provides an advertising and monetization platform for mobile applications, is an example of a user that runs on both AWS and OpenStack. The company has grown up on AWS, and since June of this year has been running its real-time data analytics engine on an internal OpenStack cloud.

    Essentially, only seven “modern” server configurations are available to AWS users. When Tapjoy was designing the infrastructure for its OpenStack cloud, Jossey was amazed at the level of configurability that was possible. “We got to define exactly what we wanted to build, exactly how we wanted it to look, and exactly the right ratios [between CPU, RAM, and IO].”

    Cloud is hard

    The problem is that standing up an OpenStack environment is a pretty involved and lengthy project, regardless of whether you’re doing it alone or going to vendors for help. As Jossey put it, he’d gotten so good at using AWS, he could “go in and shoot the shit with the best of them.” With OpenStack, not so much. There was a steep learning curve.

    “Operation of a public cloud, or any cloud, is non-trivial,” Bill Hilf, senior vice president of product management for Helion, HP’s cloud business, said. “It’s a real investment. It takes real time and real expertise to do it.”

    Much of the Helion business revolves around OpenStack. Hilf also said that there is a great need for cloud vendors beyond not just Amazon, but also Microsoft and Google.

    Scale of the big public clouds makes them really good for certain types of workloads but not for all workloads. If you’re streaming the Olympics, World Cup, or the Academy Awards, AWS is a great platform for you. But if you’re running a bank in a highly regulated environment with ongoing budget cuts, you live in a different world that requires a very different set of solutions, Hilf explained.

    4:30p
    The Laws of Technology: Driving Demand in the Data Center

    Bob Landstrom is the Director of Product Management at Interxion.

    What Moore, Kryder, Nielsen, Koomey and Jevons can teach you about the data center.

    Data centers have a seemingly insatiable appetite for space, power and bandwidth. But why does this happen? Why does the demand seem to grow and grow, rather than level off? Let’s begin with the five laws that drive demand for data center services.

    #1 Moore’s Law

    In 1965, Gordon Moore, co-founder of Intel, wrote a paper in which he predicted the number of transistors manufactured onto a common semiconductor chip would double roughly every two years. This prediction, which has held true, has come to be known as Moore’s Law, and it describes the steady growth of computational capability on a given amount of chip space.

    For the data center, this ever increasing density of data processing electronics correlates to increasing heat produced by those electronics, which in turn drives cooling resource demands.

    #2 Kryder’s Law

    Mark Kryder was a researcher and CTO at Seagate in 2005 when he was credited with the creation of Kryder’s Law, the observation that magnetic disk storage density roughly doubles every thirteen months. As disk storage density has approached the physical limits of magnetic media, new storage techniques have been introduced. These include solid state storage (based on transistors, and thus following Moore’s Law again), which opens the gates to a new era of storage density increases for years to come.

    Together, Moore’s Law and Kryder’s Law point toward greater and greater data processing capacity for a given physical footprint. For the data center, this is the continued trend of increasing power consumption and subsequent cooling demand of data processing equipment.

    #3 Nielsen’s Law

    Jakob Nielsen is a researcher in Web usability who, in the 1990s, gave his name to the observation that common high-end consumer network speeds double approximately every 21 months.

    The increasing consumption of content and data services by end users drives growth of demand in the data processing capacity hosted in the data center. Nielsen’s Law reveals the scalability of computing delivered to the masses of end users. It connects people and things to a global ecosystem of data processing, and with that access comes demand from both users and the data processing systems delivering it.

    #4 Koomey’s Law

    Jon Koomey is credited with documenting a trend of increased efficiency of data processing equipment. Koomey’s Law states that the number of computations per joule of energy dissipated doubles approximately every 18 months.

    Some have pointed to Koomey’s Law as a reflection of natural energy efficiency as data processing technologies scale. While it certainly does reflect computational efficiency improvements, there are also energy-specific features increasingly built into original equipment manufacturer (OEM) devices, such as 80 Plus-rated power supplies and clock control using dynamic voltage and frequency scaling, that are primarily responsible for reduced energy consumption by servers.

    Koomey’s Law is also what is unleashing the Internet of Things. The idea is that, at a fixed computing load, the amount of battery energy needed falls by a factor of two every 18 months. This enables wide-scale proliferation of mobile and miniaturized computing, sensing, and telemetry applications, facilitating the Internet of Things, which in turn drives growth of data processing and content management in the data center.
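
    Taken together, these laws are all simple doubling curves with different periods. The short Python sketch below, which treats the doubling periods cited in this article as rough rules of thumb rather than exact constants, shows how quickly each factor compounds over a decade.

        # Doubling periods in months, as cited above; rough rules of thumb.
        LAWS = {
            "Moore (transistor density)": 24,
            "Kryder (disk density)": 13,
            "Nielsen (consumer bandwidth)": 21,
            "Koomey (computations per joule)": 18,
        }

        def growth_factor(months_elapsed, doubling_period_months):
            """How much a quantity grows if it doubles every given period."""
            return 2 ** (months_elapsed / doubling_period_months)

        for name, period in LAWS.items():
            print(f"{name}: ~{growth_factor(120, period):.0f}x over 10 years")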

    With all of these laws rooted in some way in the physical nature of data processing technology, we now come to perhaps the single most important factor driving growth of data center services.

    #5 Jevons’ Paradox

    William Stanley Jevons was a 19th-century English economist, working long before the advent of data processing technology. Even so, he observed that increased efficiency in the use of a resource results in increased consumption of that resource, rather than the increased efficiency satisfying demand.

    As an example, notice that as devices become more computationally effective, the price comes down. In the 1960s, the U.S. spent over $25 billion to land humans on the moon. Today, there is more computational power in a $200 smart phone than existed for the entire lunar program. We are achieving exponential steps in data processing capacity every 18 months, reducing the price of devices, and with each step we’re using it all and wanting even more.

    Continuous growth, now and forever

    These laws point to growth in your data center service needs as your business grows. You need more from your data center provider today than you did five years ago, and you will need even more in the future.

    As well as providing a strong track record of high availability to minimize risk to your business, your data center provider should be positioned to support growth in the data processing demands of your business and that of evolving data processing technology. This is done when your provider understands how to operationally support contemporary data processing environments integrated with the cloud and pervasive connectivity to your end users and customers.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.

    5:00p
    TeraGo Acquires Former BlackBerry Data Center In Toronto Area

    Telecom TeraGo has acquired a former BlackBerry-managed data center in Mississauga, Ontario, its fourth acquisition in the last two years. The company has been expanding its data center footprint in Canada and now operates around 40,000 square feet of data center space.

    TeraGo is another telecom getting deeper into cloud services in order to diversify revenue with business-friendly data center services. It is primarily known for wireless services. The national service provider owns and manages its IP network, serving over 4,100 business customers in major markets across Canada.

    The 10,000-square-foot data center has close to 5 megawatts of power, an immense amount for its size, suggesting BlackBerry operated it at high density. The former enterprise data center was designed with N+2 availability of generators and has room for 1,600 square feet of expansion.
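
    That works out to an extraordinary power density. A rough back-of-the-envelope calculation, assuming the full 5 megawatts is spread across the whole 10,000 square feet (a simplification), looks like this:

        # Rough, illustrative arithmetic only; assumes all 5 MW serves the full floor.
        power_watts = 5_000_000
        area_sqft = 10_000
        print(power_watts / area_sqft)  # 500 W per square foot, several times the
                                        # 100-200 W/sq ft commonly cited for
                                        # enterprise data centers of that era.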

    This is TeraGo’s second data center in Greater Toronto, joining another in Vaughan. It started building out its data center footprint and kicked off an acquisition spree by purchasing the Vaughan facility in May 2013. It acquired its first downtown Vancouver data center in February of this year, followed by a third acquisition and data center in Vancouver a month later.

    The company said the facility supports its growing cloud services. A recent study by TechNavio said that Canada’s cloud computing market will grow at a compound annual growth rate (CAGR) of close to 18 percent between 2013 and 2018.

    “This unique, high power facility was important for us to add to our footprint as we have seen a vast increase in the amount of power required to service individual customers,” said Stewart Lyons, CEO and President of TeraGo. “Customers are focusing more and more on high density server capability and this site will future proof us in this regard. It’s the perfect site from which to grow our cloud computing environment.”

    Research In Motion (RIM) and BlackBerry have struggled since the introduction of the iPhone and additional competition from Android. In terms of data centers, the BlackBerry name has appeared in the past thanks to highly public outages. There were outages in 2007, 2008 and twice in 2011. The heavily criticized outages prompted the company to boost its infrastructure.

    While this data center has nothing to do with the outages, its fate might have been sealed because of them. Several years removed from its peak, BlackBerry may be shedding this data center in part because of the hit to its market share, as it looks to become operationally lean and resize itself. That means TeraGo is able to pick up high-density, enterprise-grade space in a key Canadian market.

     

    5:30p
    C Spire Opens Tier III Mississippi Data Center

    C Spire has opened a $23 million Mississippi data center. The facility has achieved Tier III Concurrently Maintainable certification from the Uptime Institute, making it one of approximately 30 data centers in the U.S. that can say the same. Construction took 12 months from foundation to completion of phase one.

    C Spire is a telecommunications company that offers colocation, cloud and hosted exchange services in addition to Internet and wireless. The 24,000-square-foot Mississippi data center is located on a 6.5-acre site at The Cochran Research, Technology and Economic Development Park. The Park is a growing technology hub adjacent to Mississippi State University, the area’s largest employer.

    The new data center joins two others in Mississippi – one in Ridgeland and the other in downtown Jackson.

    C Spire now has a Tier III data center to entice government and businesses in the Southeast. The company claims it is the only commercially available center of its class within 250 miles. That claim isn’t completely outlandish, as it’s one of around 30 facilities with the Tier III certification in the U.S., and there hasn’t been much data center activity in Mississippi. Local officials in Starkville and Oktibbeha County provided tax incentives to support the project, in hopes of attracting more data centers.

    Starkville is considered an anchor of the Golden Triangle region in northeast Mississippi, along with Columbus and West Point. The facility’s Tier III certification means it might appeal to the sizable federal presence in the area. Mississippi is (or was, depending on consolidation efforts) home to a U.S. Department of Homeland Security disaster recovery data center, and a Navy DOD Supercomputing Resource Center (Navy DSRC).

    The data center is also located in a low-risk geographic zone.

    “Our center is purpose-built with world-class solutions for the complex data, storage and cloud service needs of our customers,” said Brian Caraway, senior vice president of Enterprise Markets for C Spire. The center is built to withstand the most extreme weather conditions and natural disasters and is located in one of the lowest composite geo-risk zones in Mississippi. “Our customers can rest easy knowing their data is safe and secure and that business continuity is safeguarded.”

    6:00p
    HP Helion and Wind River to Build OpenStack-Based NFV Solutions for Carriers

    PARIS – HP has partnered with Intel subsidiary Wind River to design and sell Network Functions Virtualization (NFV) solutions for carriers based on OpenStack.

    NFV gives telcos a way to virtualize and automate many of the functions they usually have to buy specialized hardware for. Virtualizing some common network management functions means having to buy less hardware, which translates into reduced cost.

    “Savings that they get is tremendous,” Bill Hilf, senior vice president of product management for Helion, HP’s cloud business, said in an interview with Data Center Knowledge at this week’s OpenStack summit in Paris.

    Virtualizing network management also means telcos can deploy in new markets faster. Additionally, many of them want to build cloud services to open new revenue avenues for their network assets, and OpenStack provides a path to those capabilities.

    The combined solutions will leverage HP’s OpenStack distribution and numerous Wind River capabilities, including its vSwitch (virtual switch) technology. Wind River also has a carrier-grade server and NFV technology called Wind River Open Virtualization, which is based on its own version of Linux.

    OpenStack is a natural fit for the telco industry, which relies to a great extent on open source software. “That community of customers is very oriented around open source and Linux,” Hilf said. The affinity for open source exists primarily because of scale and cost, he explained.

    Telcos also rarely buy out-of-the-box solutions, and when they do, they modify them extensively to fit their environment.

    The solution with Wind River will include Linux and the open source KVM hypervisor, with high-availability add-ons for the OpenStack control plane. vSwitch will be at the core to optimize performance, utilization, and reliability. It will also provide workload scheduling and orchestration, carrier-grade security features, and open APIs.

    The companies plan to have the solutions available sometime next year.

    6:30p
    Snowflake Raises $26M to Deliver Data Warehouse as a Service

    Snowflake Computing came out of stealth with a $26 million funding round, intent on modernizing data warehousing with a from-scratch cloud-based approach. The company has developed a patent-pending architecture that promises to disrupt the decades-old data warehouse market with a solution that decouples data storage from compute.

    To help launch the product, the two-year-old San Mateo, California-based startup raised a Series B round from Redpoint Ventures, Sutter Hill Ventures, and Wing Ventures.

    Snowflake’s CEO Bob Muglia is a former Microsoft executive. The team as a whole brings vast experience to the table, from companies including Actian, Cloudera, Google, Microsoft, Oracle, and Teradata.

    The startup says its approach builds on top of a relational database with standard SQL and adds the elasticity, scalability, and flexibility of cloud. It will feature native semi-structured data storage.

    Dubbed the Snowflake Elastic Data Warehouse, the product aims to bring users, data, and all workloads together in a single SQL data warehouse at much lower cost than on-premises data warehouses.

    7:00p
    Want a Cloud Job? You Should Be Learning Some New Things

    There is a new type of role quickly evolving in the IT industry. Cloud architect and cloud engineer are becoming highly sought-after positions in the modern technology environment. Now, more than ever before, it’s making a lot of sense to jump into the cloud job arena. Why? Data centers and cloud providers are desperately looking for people who understand cloud communication, various delivery models and, most of all, security.

    • New Service Delivery Models. With new service models like “Everything-as-a-Service,” cloud providers are striving to become your one-stop shop for everything that is cloud related. But that’s not where it stops. You now have Data-as-a-Service, Security-as-a-Service, and even Backend-as-a-Service, where developers can create cloud-based repositories for applications that still reside locally in a data center. Cloud engineers must stay on top of these new delivery models, as more will be emerging. Don’t be surprised if you see even more options in the future. Compliance and regulation also play a big role in how service delivery models are applied. As a cloud architect, knowing the business impact of a service delivery model can make all the difference in a competitive situation.
    • Understand APIs. APIs are not only optimizing the application, they are directly impacting resource and data center utilization. Administrators are bypassing hardware layers and creating logical connections directly into required resources. Cloud-based APIs are making waves as technologies that optimize the application and cloud communication layer, helping organizations interconnect vastly distributed resources. These are the connection points for very heterogeneous technologies. As a cloud engineer, it’s important to understand how all of these logical operations work together to deliver rich content and resources (a minimal illustration appears after this list). On a related note, engineers should know how software-defined technologies are directly impacted by today’s API architecture.
    • Security, Security, and More Security. A new area of security has been created to directly address cloud security challenges. Solutions like next-generation firewalls are making their way into the modern data center. IPS/IDS, DLP, application firewalls, and virtual security appliances all fall into the next-generation security conversation. It’s no wonder that the folks at Palo Alto Networks and Check Point are doing so well. There is a big need for cloud engineers who understand the intricate workings of LAN, WAN, and cloud services security models. Beyond next-gen security topics, security virtualization has become a big part of cloud deployments. Engineers must understand that security in the cloud is no longer only at the physical layer. New types of virtual appliances and software services are enhancing the way organizations secure and protect their data.
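
    As a concrete, purely illustrative example of the API-driven provisioning described in the second bullet above, the sketch below uses Python’s widely used requests library to call a hypothetical cloud provider’s REST endpoint. The URL, token, and payload fields are made up for illustration and do not refer to any particular vendor’s API.

        import requests  # widely used third-party HTTP client

        # Hypothetical endpoint and credentials; not any real provider's API.
        API_BASE = "https://cloud.example.com/v1"
        TOKEN = "replace-with-your-api-token"

        def create_instance(name, flavor, image):
            """Provision a compute instance through a (hypothetical) REST API."""
            resp = requests.post(
                API_BASE + "/instances",
                headers={"Authorization": "Bearer " + TOKEN},
                json={"name": name, "flavor": flavor, "image": image},
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()

        # Example: spin up a small web server without touching any hardware layer.
        # print(create_instance("web-01", "small", "ubuntu-14.04"))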

    The evolution of the cloud will continue to change the IT professional as well. There’s no doubt that the proliferation of cloud services will push engineers and architects to understand new technologies that will optimize their infrastructure. There is a boom in data center cloud services. There are more cloud connection points, and now, with “fog computing,” engineers are bringing massive amounts of content to the user and to the edge.

    As you evolve your own cloud career, always make sure to stay innovative and keep pace with the speed of technology. The pace of innovation is much greater than it was even a few years ago. Architects and engineers are no longer in silos; they must interact with other engineers, the end user, and, in many cases, the entire business organization.

    7:30p
    Salesforce Data Center Opens In UK

    Software-as-a-Service giant Salesforce has opened its first data center in the U.K. The company has been boosting its strategic investments in Europe, with plans to open three new European data centers.

    Europe is the company’s fastest growing market, with 41 percent growth during fiscal year 2014. To support this growth, Salesforce is also opening data centers in France and Germany and will add 500 employees to its European workforce. All three planned markets are key European traffic hubs.

    Salesforce is primarily known for Customer Relationship Management (CRM) software, but also offers a PaaS called Force. A large customer base supports a large ecosystem of third-party apps in the AppExchange marketplace. The company also recently entered the Business Intelligence (BI) field with Wave, hoping to make BI user-friendly in the same way it did with CRM. Wave is also contributing to data center expansion.

    The company can be partially credited for making businesses comfortable with cloud-based applications. Part of that comfort comes from maintaining strict uptime and consistent performance for its apps, meaning a large data center footprint in support. The company was not without hiccups early on, but over the years has maintained solid uptime.

    A large portion of the Salesforce footprint is in Equinix data centers worldwide, but the company has taken a different approach to tackling Europe so far. NTT was announced as its U.K. data center provider, while Interxion won the contract for the Paris data center. The German provider has not yet been named.

    “The opening of Salesforce’s first European Data Center underscores our commitment to customers and partners in the U.K.,” said Andrew Lawson, SVP for U.K. and Ireland, Salesforce. “The new data center will support the unprecedented growth we’ve seen in the region and further accelerates the adoption of cloud, social and mobile technologies, empowering U.K. companies to connect with their customers in a whole new way.”

    Market growth is driving European expansion for several data center and cloud providers. However, the Snowden revelations have kicked in-country data residency efforts into high gear, which could cause problems for providers serving customers from outside the country.

    7:30p
    Google Brings Containers to its Cloud With Hosted Kubernetes

    Google made several big announcements and cut prices once again for its cloud during its Google Cloud Platform Live event.

    Containers and Docker were heralded by several Google execs as the current revolution in cloud. The company announced Google Container Engine, a fully hosted version of Kubernetes, a container management system. Kubernetes is an open source technology that allows you to orchestrate containers.

    A slew of announcements were made including:

    • Google Cloud Interconnect, which will allow customers to hook into Google cloud via VPN. Carrier Interconnect is the direct-link option for carriers and data centers, which can then provide a dedicated, secure connection into Google cloud for their customers.
    • Canonical’s Ubuntu is now available on Google’s Cloud Platform for the first time. Ubuntu was the last major Linux distro not available on GCP.
    • The company also rolled out Compute Engine auto-scaling into wide release. It allows a customer to grow or shrink a fleet of virtual machines on demand, based on metrics the customer sets.
    • The company also added local SSDs to Compute Engine, joining the range of network storage options currently available. This is for certain classes of applications with large I/O requirements, like a Cassandra or SQL cluster. It’s available for any machine type, in one to four 375GB SSD partitions. With four disks, it gives customers 680,000 read and 660,000 write operations per second. The cost is $0.28 per gigabyte per month (see the arithmetic sketch after this list).
    • The Cloud Debugger is now publicly available in beta. This modern take on debugging in the cloud was first previewed in June. It allows debugging of code running in production, across any number of instances.
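
    As a rough check on the local SSD numbers in the list above, here is the capacity and monthly cost of a maxed-out four-partition configuration, assuming the $0.28 per gigabyte per month reading of the price is correct:

        # Rough arithmetic for the Compute Engine local SSD option described above.
        partition_gb = 375
        partitions = 4                 # maximum of four partitions per machine
        price_per_gb_month = 0.28      # assumes the price is $0.28/GB/month

        total_gb = partition_gb * partitions
        print(total_gb)                        # 1500 GB of local SSD
        print(total_gb * price_per_gb_month)   # $420.00 per month at the assumed price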

    Container revolution

    Containers make developing and deploying applications on cloud easier. They package an application and its dependencies into a single portable unit, which means not having to worry about the configurations and nuances of the underlying platform. They also make for a quick development cycle, as they’re quick to spin up and tear down.
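
    As a minimal illustration of that spin-up-and-tear-down workflow, the sketch below shells out to the Docker command line. It assumes Docker is installed, and the standard nginx image is used purely as an example:

        import subprocess

        # Minimal illustration: run a containerized web server, then tear it down.
        # Assumes the Docker CLI is installed; "nginx" is just an example image.
        container_id = subprocess.check_output(
            ["docker", "run", "-d", "-p", "8080:80", "nginx"],
            text=True,
        ).strip()
        print("started container", container_id[:12])

        # Tearing the container down is just as quick.
        subprocess.check_call(["docker", "rm", "-f", container_id])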

    Managing containers at scale can be complicated, which is where Kubernetes comes in. Kubernetes provides an API that makes deploying a fleet of containerized applications across the cloud easy. Kubernetes was originally developed by Google and open sourced in June. Google Container Engine is a hosted, formalized version for Google’s cloud.

    “We want to change what our users are able to do, not just where they’re doing it,” said Vice President Brian Stevens. “Just as we’re getting our head around public cloud comes the next disruption: containers. The reason why it’s gotten so popular is that even in early stages it’s delivered great benefits.”

    “A data center is not a collection of computers. A data center is a computer,” said Greg DeMichillie, director of product management at Google Cloud. “And we think containers are the technology that will make this possible.”

    The new hosted Kubernetes option is an alternative for those who don’t want to work in the open source project. It makes Google cloud and containers easier to manage. Google Container Engine is now in alpha, but is open to everyone immediately. “We want your help in guiding and shaping this product,” he said.

    Containers have been used in the Platform-as-a-Service (PaaS) App Engine since day one, but their versatility was limited by the nature of PaaS. “There’s a drawback with PaaS: you have to color in the lines,” said DeMichillie. “We knew that was a problem and set out to solve it with Managed VMs.”

    Managed VMs let customers use whatever libraries and open source frameworks they want on App Engine, taking away one of the larger limitations of PaaS: being confined to certain setups. Customers can use the complete range of virtual compute with Managed VMs.

    More price drops for GCP

    Google has also cut cloud prices once again. Following a 10 percent cut for Compute Engine last month, the company announced another 10 percent cut today.

    Sizable cuts were made to several other services:

    • 23 percent drop for BigQuery
    • 79 percent drop for Persistent Disk Snapshots
    • 48 percent drop for Persistent SSD
    • 25 percent drop for large cloud SQL instances.

    Other cloud progress, customer announcements

    Google has made three acquisitions since May in support of its cloud:

    • Stackdriver became the backbone of its monitoring system
    • Zync is a new rendering service for the movie and entertainment vertical
    • Firebase is going to be the centerpiece of the mobile developer offering

    Google noted great partner and customer momentum. Amazon Web Services’ ecosystem of third-party cloud management and enhancement platforms was considered a competitive advantage when Google’s cloud first launched, but Google’s own ecosystem of partners and consultants is growing dramatically, closing the gap.

    Several customers were named, with Office Depot, Wix, and Atomic Fiction highlighted, presenting a wide swath of use cases.

    • Office Depot runs its “My Print Center” offering on Google cloud. The service lets customers order print jobs from their computer for pickup at any store. It uses App Engine and Google Cloud Storage, cutting the time to execute an order by 40 percent.
    • Wix hosts its Wix editor on App Engine and uses Google Cloud storage to store static media files. It now serves production media traffic from Compute Engine. Wix sees 11 million files uploaded per day, manages 600TB and its users resize 8.6 million images per day.
    • Atomic Fiction handles the visual effects for big blockbusters. This work includes rendering images, a normally cost- and time-intensive process.

    Atomic Fiction presented its use of Google’s cloud. The company couldn’t afford to build its own giant data center and instead focused on building cloud tools, said founder Kevin Baillie.

    One of those tools is an effects rendering tool called Conductor, which the company is making available in 2015.

    Rendering is a compute-intensive job, and the company demonstrated how much easier the cloud makes the process. In a live demonstration, it rendered one frame by splitting it into 700 chunks across several instances in real time. Seven hundred and fifty instances and 12,000 cores rendered the frame in minutes, versus the hours and hardware expense it would take to do it in a data center. The cost of rendering for minutes on several machines was the same as one machine processing over hours. Cross-site interoperability means that the rendered frame is available to several dispersed teams, said Baillie.

    Google and its customers both touted per-minute pricing versus the standard practice of rounding up to the hour. Atomic Fiction said the difference between hourly and per-minute billing equated to 10 percent savings for long time frames and nearly 40 percent savings for short time frames.
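
    A quick sketch of that billing arithmetic, using made-up job lengths and an arbitrary hourly rate chosen only to show where savings of that order could plausibly come from:

        import math

        # Illustrative only: compare per-minute billing with rounding up to the hour.
        # The rate and job lengths are made up; only the ratios matter, and rounding
        # to the nearest minute is ignored for simplicity.
        rate_per_hour = 1.00

        def hourly_cost(minutes):
            return math.ceil(minutes / 60) * rate_per_hour

        def per_minute_cost(minutes):
            return (minutes / 60) * rate_per_hour

        for minutes in (36, 324):  # a short render job vs. a long one
            saved = 1 - per_minute_cost(minutes) / hourly_cost(minutes)
            print(f"{minutes}-minute job: {saved:.0%} saved with per-minute billing")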

     

    8:00p
    Internap Expands OpenStack Cloud Platform to Amsterdam, Integrates Horizon Dashboard


    This article originally appeared at The WHIR

    Internap announced on Monday at the OpenStack Summit in Paris that it has expanded its OpenStack-based cloud platform AgileCLOUD to Amsterdam. It is also introducing an integrated OpenStack Horizon dashboard in the platform to give customers more control over cloud and infrastructure management.

    Internap launched its AgileCLOUD platform last year at the company’s Dallas data center after months of beta testing. The platform is now available in its Amsterdam, Dallas and Montreal data centers.

    Horizon is the official OpenStack management dashboard, and Internap says it is one of a few OpenStack cloud providers to expose the native console.

    “We use OpenStack’s Horizon dashboard to manage our extensive public cloud footprint with Internap. It gives us the ability to access the latest OpenStack services and manage and automate our cloud resources, along with an extensible framework to easily plug in third-party tools,” said Rock Rockenhaus, cloud infrastructure architect at CheapCaribbean.com. “By integrating the Horizon dashboard, Internap has eliminated the complexity that customers running their own instance of Horizon need to contend with, such as setting up and maintaining API servers and installing the latest software – delivering the management benefits straight to customers’ fingertips and freeing them up to focus on their business.”

    Internap’s Amsterdam location and Horizon portal are available through an early access program and will be generally available by December 2014.

    “As the cloud becomes a cornerstone of today’s infrastructure mix, more organizations are looking for the flexibility to deploy and manage large-scale, globally distributed application environments,” said Satish Hemachandran, Internap’s vice president of product management, cloud and hosting. “The expanding footprint of our next-generation AgileCLOUD, powered by OpenStack, provides customers with worldwide reach and powerful, extensible management capabilities, like the Internap Horizon dashboard, needed to meet these demands with ease.”

    AgileCLOUD includes all-SSD storage and Internap’s Performance IP bandwidth to deliver higher network performance and lower latency.

    Internap is offering two server tiers with AgileCLOUD. Series A servers are designed for average workloads, while Series B servers include dedicated cores and a higher RAM-to-CPU ratio, which makes them ideal for more demanding workloads like medium-sized databases, complex websites, and scheduled batch processing.


    This article originally appeared at: http://www.thewhir.com/web-hosting-news/internap-expands-openstack-cloud-platform-amsterdam-integrates-horizon-dashboard

    8:49p
    Public Cloud Spending to Reach $127 Billion by 2018: Report


    This article originally appeared at The WHIR

    The cloud market is growing six times faster than the overall IT market, according to a forecast released Monday by International Data Corporation (IDC). By 2018, public cloud services spending is expected to reach more than $127 billion, a compound annual growth rate of 22.8 percent from the $56.6 billion in spending expected this year.
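
    The headline numbers are consistent with compound annual growth. A quick sketch, assuming the 22.8 percent figure is a CAGR applied over the four years from 2014 to 2018, lands close to the $127 billion projection:

        # Sanity check: does 22.8% compound annual growth take $56.6B (2014)
        # to roughly $127B by 2018? Assumes 22.8% is a CAGR over four years.
        spending_2014_billion = 56.6
        cagr = 0.228
        years = 4

        projected_2018 = spending_2014_billion * (1 + cagr) ** years
        print(round(projected_2018, 1))  # about 128.7, in line with the >$127B forecast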

    This is consistent with other recent news that shows the enterprise is moving to the cloud. Over half of the attendees of the 2014 New York Cloud Expo and AWS Summit are building hybrid cloud solutions for their organizations, according to a survey by Avere Systems. Thirty-five percent of those surveyed said they are building their own hybrid cloud in the next two years.

    The IDC report says factors driving public IT cloud services growth are cloud-first initiatives and innovation in the cloud.

    “Over the next four to five years, IDC expects the community of developers to triple and to create a ten-fold increase in the number of new cloud-based solutions,” IDC SVP and chief analyst Frank Gens said. “Many of these solutions will become more strategic than traditional IT has ever been. At the same time, there will be unprecedented competition and consolidation among the leading cloud providers. This combination of explosive innovation and intense competition will make the next several years a pivotal period for current and aspiring IT market leaders.”

    The trend of moving to the cloud is reflected in both the private sector and many governments. The IDC report expects SaaS will “continue to dominate public IT cloud services spending.” SaaS accounted for 70 percent of cloud spending in 2014.

    Service providers are already benefiting from this trend, including cloud SaaS provider OpenText, which recently reported a growth rate of 260 percent over last year. IaaS and PaaS will also continue to grow, with PaaS and cloud storage being the fastest-growing categories.

    Reports of cloud growth are coming from most of the world. Over half of Asian companies say they are moving to cloud-based systems. In South Africa, infrastructure is being built to support growth. Recently, the Australian federal government announced a cloud-first policy that is expected to save taxpayers 30 percent. The US and UK governments also have cloud-first guidelines in place that will facilitate even more data being stored in the cloud.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/public-cloud-spending-reach-127-billion-2018-report

