Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, July 29th, 2014

    12:00p
    Rackspace and the Cool Kids on the Web-Scale Data Center Scene

    In keeping with its image as a forward-looking, technologically sophisticated hosting company, Rackspace held its first Solve event in San Francisco on Monday, with the day’s speaker presentations bookended by an opening keynote from Docker CEO Ben Golub and a closing keynote from CoreOS CEO Alex Polvi.

    Docker and CoreOS are venture-capital-backed San Francisco startups working to bring Web-scale computing to the masses. Both have open source technology at their core, and both are trying to convince the world that the approach to infrastructure taken by the likes of Google and Facebook is the right approach for anybody doing computing at scale.

    The goal is to make it easy for developers to build and deploy applications quickly, without having to worry about the infrastructure. Rackspace has designed its services and marketing messaging to drive home a narrative of its own: the cloud may be great, but it still needs management, which is hard, and Rackspace is there to make it easier.

    The company recently changed its tagline to “#1 managed cloud company,” but, more importantly, added a series of managed services for cloud. “We’re all in on managed cloud,” Rackspace CTO John Engates said during his opening remarks. “Managed cloud is everything we do.”

    Earlier this month, Rackspace launched two managed services offerings – an entry-level service with around-the-clock access to engineers and general guidance and assistance for deploying applications in the Rackspace cloud, and a high-end one that enables a customer to essentially outsource management of their entire cloud infrastructure to Rackspace.

    “It’s about taking the pain out of managing the cloud,” Engates said.

    Building WordPress for developers

    If Rackspace is about taking the pain out of managing the cloud, Docker is about taking the pain out of deploying applications on any cloud – Rackspace’s or anyone else’s. It’s that simple. The startup’s ultimate goal is to be the WordPress, Blogger or Tumblr of applications: all you’ll need is an idea and some free time, and the platform and infrastructure underneath will never be anything you have to worry about.

    Golub said we’re not there yet, but he sounded 100 percent certain when promising that Docker would get us there eventually. To get there, however, some fundamental notions about infrastructure will have to change, he said. It will not be easy, since the majority of IT infrastructure that exists today has been built on these notions.

    The three notions the Internet has been built on over the past 15 years that are now breaking down are that applications are long-lived, that they are monolithic and built on a single stack, and that each application is deployed on a single server. None of these is true of modern applications, which are constantly under iterative development, built from loosely coupled components and deployed across multiple servers. Manually tracking how all these components interact with each other and how the servers that host them are set up, and reconfiguring the infrastructure by hand every time something new is deployed, simply isn’t going to work.

    “It’s simply impossible,” Golub said.

    Docker’s solution is to pack the application and all of its diverse components into a standard “container.” While the contents of two containers may be different, everything looks exactly the same on the outside. Outside, in this case, means what the application’s infrastructure requirements look like to an infrastructure platform, be it Amazon Web Services’ or Google’s cloud, a dedicated server in a Rackspace data center or a developer’s sticker-ridden MacBook.
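
    To make that concrete, here is a minimal, hypothetical sketch of that workflow using the Docker SDK for Python (the “docker” package), rather than any tooling shown at the event. It assumes a local Docker daemon and an application directory containing a Dockerfile; the image name and port are invented for illustration.

        import docker

        # Connect to the local Docker daemon (assumes Docker is installed and running).
        client = docker.from_env()

        # Build a container image: the application and all of its dependencies are
        # packed into one standard unit, whatever language or stack is inside.
        client.images.build(path="./my-app", tag="my-app:latest")

        # Run that same image anywhere a Docker daemon exists -- a laptop, a
        # dedicated Rackspace server or a public cloud VM -- without changing
        # the application itself.
        container = client.containers.run(
            "my-app:latest",
            detach=True,               # run in the background
            ports={"8000/tcp": 8000},  # map the app's port to the host
        )
        print(container.short_id)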

    “If you really want to understand what the future of applications is, all you really need to remember is that developers, at the end of the day, are authors,” Golub said. “They’re content creators.”

    To illustrate the point, he drew parallels between IT and publishing. It used to be that the only way to create a book was to sit in a dark cave and write one. When movable type arrived, the act of creation was separated from the act of replication and distribution, and when the Internet arrived, anybody could be an author without ever having to worry about the nuts and bolts of getting their content in front of an audience.

    CoreOS builds resilient compute clusters

    CoreOS, the startup behind a lightweight Linux distribution for servers that updates automatically across every node it is deployed on (much like Google’s Chrome browser, which served as inspiration for the product), incorporates Docker containers as a “design requirement,” according to Polvi, its founder and CEO.

    The startup has generated a lot of buzz and attracted some heavyweight venture capital backing, promising essentially the same future Docker is promising: a Web-scale infrastructure for everyone. Still in its early stages, the company already has attracted a number of customers large and small, including Rackspace.

    The Texas hosting firm uses CoreOS to enable its new bare-metal cloud offering, called OnMetal, which went into general availability last week. Before anything is loaded onto an OnMetal server, the machine temporarily boots into CoreOS, which sets it up with whatever operating system and other software the customer needs.

    CoreOS launched its first stable release last week. “It’s a Linux-based operating system built for running large server deployments,” Polvi said. The product makes management of server clusters easier by providing consistency among the nodes through its global updates. A major value proposition of that consistency is the ability to build compute clusters in which the outage of a single node does not affect the uptime of the cluster as a whole or of the applications it is running.
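
    CoreOS also ships etcd, a distributed key-value store the cluster uses to keep a consistent, shared view of itself. As a rough, hypothetical illustration of that idea (not CoreOS’s actual tooling), the sketch below uses the third-party python-etcd client to have each node register itself under a key with a time-to-live, so a dead node simply disappears from the cluster’s shared view; the host, port and key names are assumptions.

        import socket
        import time

        import etcd  # third-party python-etcd client

        # Connect to the local etcd member (4001 was etcd's original client port).
        client = etcd.Client(host="127.0.0.1", port=4001)
        node = socket.gethostname()

        while True:
            # Refresh this node's registration with a 60-second TTL. If the node
            # dies, the key expires and the rest of the cluster stops sending
            # work to it -- no single machine is a special point of failure.
            client.write("/cluster/nodes/%s" % node, "alive", ttl=60)
            time.sleep(30)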

    Will data center reliability be less important in the future?

    While there are applications for which something like CoreOS is not necessarily the ideal technology – massive Oracle databases are one example Polvi brought up, saying these are better off running on the big, expensive metal they run on today – a big chunk of the world’s workloads would benefit from the resiliency of CoreOS or the flexibility of Docker. If the future does shape up along the lines people like Golub and Polvi envision (and they are not the only ones who envision it that way), the implications for the data center industry are huge.

    A big part of what a data center provider brings to the table is reliability. That’s their expertise, and that’s where they spend the big bucks so that their customers don’t have to. If an environment using CoreOS is designed to withstand the outage of several individual servers in a large cluster, the value proposition of redundant generators, UPS systems and expensive switchgear, all carefully orchestrated to make sure servers don’t lose power, is diminished. That is a lot of ifs, but the movement to bring what Google in 2009 referred to as “warehouse-scale machines” to the broader market is not a trend to ignore.

    2:00p
    Understanding the Types of Prefabricated Modular Data Centers

    The modern data center has truly evolved. We have branch locations, complete systems for disaster recovery, and cloud platforms capable of handling large influxes of users. Similarly, the model of the data center has also evolved.

    Data center systems or subsystems that are preassembled in a factory are often described as prefabricated, containerized, modular and skid-based, among other terms. There are, however, important distinctions between the various types of factory-built building blocks on the market.

    This paper from Schneider Electric looks at standard terminology for categorizing the types of prefabricated modular data centers. It also defines and compares their key attributes, and provides a framework for choosing the best approach (or multiple approaches) based on business requirements.

    The idea is to present a framework for classifying the different types of prefabricated modular data centers so that ambiguity is eliminated. The framework is based on three attributes (a brief illustrative sketch follows the list):

    • Functional Block – Power, Cooling, IT
    • Form Factor – ISO container, Enclosure (non-ISO), Skid-mounted
    • Configuration – Fully prefabricated data centers, semi-prefab data centers, and all-in-one data centers
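
    As a purely illustrative aside (not part of the Schneider Electric paper), the three attributes can be thought of as a small data model, with each factory-built block classified along all three axes; the class and field names below are invented for the example.

        from dataclasses import dataclass
        from enum import Enum


        class FunctionalBlock(Enum):
            POWER = "Power"
            COOLING = "Cooling"
            IT = "IT"


        class FormFactor(Enum):
            ISO_CONTAINER = "ISO container"
            ENCLOSURE = "Enclosure (non-ISO)"
            SKID_MOUNTED = "Skid-mounted"


        class Configuration(Enum):
            FULLY_PREFABRICATED = "Fully prefabricated data center"
            SEMI_PREFABRICATED = "Semi-prefabricated data center"
            ALL_IN_ONE = "All-in-one data center"


        @dataclass
        class PrefabBlock:
            """One factory-built building block, classified along the three attributes."""
            functional_block: FunctionalBlock
            form_factor: FormFactor
            configuration: Configuration


        # Example: a skid-mounted power module used within a semi-prefabricated design.
        block = PrefabBlock(FunctionalBlock.POWER,
                            FormFactor.SKID_MOUNTED,
                            Configuration.SEMI_PREFABRICATED)
        print(block)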

    Download this whitepaper today to learn about the various types of prefabricated data center platforms – including fully prefabricated, semi-prefabricated and all-in-one data centers – and how each approach impacts power, cooling, container considerations and more.

    There are many approaches to implementing prefabricated modular data centers. However, a lack of standard terminology for describing them has made selecting the appropriate type difficult. The optimal configuration, including the right functional blocks and form factors, depends on the application and specific business requirements.

    In some cases, a fully prefabricated data center is the best approach, and for others, a semi-prefabricated approach with a mix of prefabricated modules and traditional systems is best. Understanding the limitations and benefits of each form factor helps ensure the optimal approach is selected. Ultimately, business needs around speed of deployment, scalability, space constraints, capacity, and cash flow drive the decision.

    2:00p
    Atlantic.Net Expands Cloud to Dallas, Toronto Data Centers

    Florida-based Atlantic.Net has launched its cloud services in data centers in two key locations: Dallas and Toronto.

    The facility at 2323 Bryant in Dallas is a major Internet hub. The company chose Toronto to offer a Canadian cloud to a growing customer base that doesn’t want to keep data within U.S. borders.

    Atlantic.Net has evolved over the years, going from dial-up provider to colocation provider to cloud provider. The company has found success with its homegrown cloud offering despite a fairly grassroots effort and minimal advertising.

    The infrastructure its cloud runs on is high-performance, interconnected with InfiniBand and built on solid-state storage. The company offers per-second billing and Windows support, which it believes to be a differentiator.

    “Windows is a quarter to a third of the market,” said Marty Puranik, founder, president and CEO of Atlantic.Net. “If you’re a Windows developer, there’s nothing like this for Windows.”

    The popular 2323 Bryant facility, owned by Digital Realty Trust, was attractive because it is well connected and because the location does not have that many natural disasters, Puranik said. “We have a lot of customers on the west coast concerned about earthquakes, so this location will speak to them.”

    The Toronto cloud is in Cogeco’s facility and is the company’s first international location. “There are a lot of concerns with the NSA,” said Puranik. “Canadians don’t want their data in the U.S. The Canadian market doesn’t have a product like we have: everything included.”

    He added that a lot of Europeans were choosing Canada for their production infrastructure to serve U.S. customers. Canada is part of the Commonwealth of Nations, so it’s a big deal for UK customers looking to serve North America.

    A relatively small firm out of Florida, Atlantic.Net first built its cloud to provide temporary capacity for colocation customers, but the offering eventually grew into a product of its own. “We started with individuals, developers within larger companies, but we’ve been moving up the value chain,” said Puranik.

    The company currently has three open facilities, with plans to continue expanding to hot spots of the cloud ecosystem in North America, Latin America, Europe and the Asia-Pacific region.

    3:45p
    Businesses and the Big Data Skills Shortage

    Rick Delgado is an enterprise tech commentator and writer.

    Although Big Data is no longer a foreign concept to businesses, they still face significant challenges when implementing it in their organizations. The problem doesn’t stem from understanding its importance; rather, it lies in finding the right people for the job.

    Addressing the shortage

    There are many reasons the Big Data skills shortage exists. Put plainly, there are too few true data scientists out there, and that problem is only expected to grow if no action is taken. McKinsey & Co. has placed the shortfall of Big Data experts at anywhere from 140,000 to 190,000 by 2018.

    The technical and IT experts that companies have now, while perfectly capable and intelligent individuals, don’t currently have the skill sets needed to handle the Big Data revolution. For most of their careers they have been trained to use more traditional databases, and that training is difficult to apply to new Big Data techniques.

    Without the needed Big Data skills, plenty of jobs go unfilled. Some of the most in-demand openings are for NoSQL experts (people who have experience with unstructured data systems), Apache Hadoop and Python experts, ETL developers, data warehouse appliance specialists, predictive analytics developers, and information architects.

    Confronting the challenge

    Companies have responded to the Big Data skills shortage in a number of ways. First, they’ve identified the skills current and future employees need in order to meet the challenges Big Data presents. Some are fairly obvious, such as computing and analytical skills. Others are geared more toward specific expertise in Big Data, particularly being able to understand it, collect it and preserve it. Knowledge of statistics, mathematics, and data visualization techniques, including charting, mapping and graphing, is also needed.

    With these skills in mind, companies can narrow their focus in the search for the right candidates, but even then they might find few who match the criteria. That’s when companies look for different but related skills, casting a wider net and considering people with backgrounds in astrophysics or computational chemistry. While they might not have the exact skills, they have enough of an analytical background that they can be trained to become data scientists.

    Providing the tools for success

    Training is another area where companies are working to improve in order to bring up a new generation of Big Data experts. Some are taking employees already experienced with relational database management systems (RDBMS) and training them to use Big Data platforms like Hadoop. With some of the latest tooling, techniques developed for RDBMS environments can even be carried over to Big Data platforms, as the sketch below illustrates. By training people already with the company, businesses keep valuable team members who already have experience with the enterprise while giving them some much-needed skills. Plus, those with skills in SQL or virtualization technology may come up with their own unique solutions to Big Data problems.
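
    As a hedged illustration of how existing SQL skills can carry over, tools such as Apache Hive expose data stored in Hadoop through a SQL-like interface. The sketch below uses the third-party PyHive library; the host, table and column names are invented for the example.

        from pyhive import hive  # third-party PyHive library

        # Connect to a Hive server (hostname and port are illustrative).
        conn = hive.connect(host="hadoop-gateway.example.com", port=10000)
        cursor = conn.cursor()

        # The same aggregation an RDBMS developer would write against a relational
        # database, executed here over data stored in the Hadoop cluster.
        cursor.execute(
            "SELECT region, COUNT(*) AS orders "
            "FROM sales_events "
            "GROUP BY region"
        )
        for region, orders in cursor.fetchall():
            print(region, orders)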

    Take the team approach

    Companies have also responded to the Big Data skills shortage by taking a team approach. A data science team combines people with various areas of expertise and sets them loose on mastering Big Data and making it work for the business. The team can include analytics managers, algorithm scientists and project managers, each bringing their own set of skills to the table. Together, they can make up for any Big Data skills shortage present in the company.

    It starts in the classroom

    Perhaps most important of all, businesses are getting help with this skills shortage from universities. Years ago, most colleges weren’t equipped to prepare students with the skills needed to get jobs as data scientists, but that is quickly changing. Many universities all across the world are adding courses and postgraduate degrees that teach students what they need to know to land a career as a Big Data professional.

    For example, a university in Australia is offering a master’s degree in data science. Stateside, Columbia is providing students with its Institute for Data Sciences and Harvard has the Institute for Applied Computational Science, each offering postgraduate master’s programs around Big Data. There are also schools with master’s programs in business analytics: New York University has an MBA with a specialization in analytics and information management, North Carolina State and Drexel have one-year business analytics programs, and there’s a business and Big Data program at the University of Tennessee.

    These are all different approaches to meeting the demands Big Data is placing on businesses. The benefits can be immense, which is why companies are willing to invest so much in making sure their employees are equipped to handle the challenges. Over time, the hope is that the Big Data skills shortage will shrink until supply meets demand.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:11p
    Bright Computing Raises $14.5M for OpenStack, Hadoop, HPC Cluster Management

    Bright Computing raised a $14.5 million Series B funding round to accelerate its High Performance Computing (HPC), Hadoop and OpenStack cluster management software business. The company designs software that makes it easy to deploy and manage Linux server clusters in your data center. The round was co-led by Draper Fisher Jurvetson (DFJ) and DFJ Esprit, with participation from Prime Ventures and existing shareholder ING Corporate Investments.

    The money will go toward developing Bright Computing’s flagship cluster management technology. The company will focus especially on serving the emerging Hadoop and OpenStack ecosystems, which it considers its major growth markets.

    Founded in 2009, Bright Computing today touts more than 400 customers, including 20 of the Fortune 500.

    The timing is good for a cluster management product. “Bright is in markets as competitive as we’ve ever been, especially in the Big Data, Hadoop and OpenStack private cloud spaces,” wrote CEO Matthijs van Leeuwen. “I believe we will thrive in these markets because we are bringing years of relevant experience to bear in new ways that will accelerate the adoption of these new technologies.”

    Partnerships with Dell, Cisco, Cray, HP and others are one avenue through which Bright Computing reaches customers. The company will extend its channel by adding partners, as well as by providing higher levels of support to its direct enterprise accounts.

    This is yet another investment in a company that eases management of large-scale computing. Just as management platforms like RightScale rode the coattails of the rise of cloud, companies like Bright Computing have an opportunity to ride the coattails of big computing. This has been the summer of big-computing management, with many investments going into enterprise technology that enables operating at large scale.

    Startups like CoreOS, which also has a product that makes it easy to set up and manage compute clusters, have generated a lot of buzz and attracted a lot of venture-capital cash. Companies building businesses around Hadoop, such as Hortonworks and Cloudera, have seen major recent wins as well. The OpenStack ecosystem is thriving, with backing from a multitude of companies large and small.

    “Data analysis has become the lifeblood of modern business, but it can be complex and challenging for corporations to manage,” said Alexander Ribbink, partner at Prime Ventures. “Using Bright Cluster Manager to manage clouds, clusters and data infrastructure can turn this challenge into a strategic advantage. This round of funding will help bring that advantage to more customers around the world.”

    4:28p
    Fujitsu Launches All-Inclusive Hardware-Middleware IT Solutions

    Fujitsu has launched a full line-up of integrated solutions for what it calls the business-centric data center, with eight principal use cases for specific workloads. The company says these straightforward reference architectures provide pre-validated, pre-integrated systems geared toward addressing the specific workload requirements.

    Joining competitors that have launched converged systems, Fujitsu is combining business applications with a stack of compute, networking, storage and middleware. To start, Fujitsu’s entry-level system consists of dual Windows servers, shared storage and networking in a single standalone or rackmount enclosure. The systems will also support SSDs and automatic storage tiering for increased performance, or they can operate as NAS storage accommodating up to 140 disks.

    The eight principal use cases range from SAP and Microsoft environments to Big Data, IT security, server virtualization, virtual desktop infrastructure, private cloud and high-performance computing. The Integrated Systems portfolio features PRIMERGY and PRIMEQUEST servers, as well as Fujitsu ETERNUS and NetApp FAS storage platforms. Fujitsu also hopes to help enterprises with a pre-installation staging center, Lifecycle Management services and data center services such as Managed Datacenter, Managed Hosting and Fujitsu Cloud IaaS.

    Fujitsu has been investing heavily in data centers and technology partnerships lately. It acquired shares of Panasonic Information Technology Solutions, made numerous mobile partnerships and product announcements, and has enhanced its private cloud platform products with support for OpenStack. Together with Intel, Fujitsu demonstrated silicon photonics technology in an Optical PCI Express (OPCIe) design, which allows the storage and networking to be disaggregated, or moved away from the CPU motherboard.

    Jens-Peter Seick, senior vice president of product development at Fujitsu, said, “Quite simply, the introduction of Silicon Photonics-based technology means that businesses can be confident that their ICT departments will be able to service bigger, better, faster and more needs than ever before. As a result, Fujitsu envisages that the data center of the future will become much more of a technology enabler for business velocity.

    “The introduction of disruptive technology such as Silicon Photonics-based systems creates new opportunities for environments where the ability to process large volumes of data has, until now, been a major bottleneck.”

    6:33p
    IBM Donates Supercomputer Access for Obama’s Climate Data Initiative

    IBM has pledged a ton of compute in support of the White House’s Climate Data Initiative, announced in March. Eligible scientists studying climate change will be given free access to dedicated virtual supercomputing and a platform to engage the public with their research, the company said Tuesday.

    Extreme weather events caused by climate change, such as floods and droughts, can have a drastic impact on food production. This is a move for the greater good on the part of IBM, making sure scientists have the resources necessary to solve the problems they are working on. Intel is another high-tech giant that has contributed resources to scientific efforts that use Big Data to tackle issues humanity faces. Intel is working with California universities on several projects aimed at addressing the state’s water shortages.

    A team working on a project approved for IBM’s program will have access to up to 100,000 years of computing time, valued at $60 million. All work will be performed on IBM’s philanthropic World Community Grid platform.  The World Community Grid has already provided sustainability researchers with many millions of dollars of computing power to date. It has been used to facilitate research into clean energy, clean water and healthy foodstuffs, as well as cures for cancer, AIDS, malaria and other diseases.

    Some past WCG projects include a partnership with the University of Virginia to study the effects of human activity on the Chesapeake Bay. Harvard’s Clean Energy Project identified more than 35,000 materials with the potential to double carbon-based solar cell efficiency, after screening and publicly cataloging more than 2 million compounds on WCG.

    The University of Washington has a project called Nutritious Rice for the World. It modeled rice proteins and predicted their function to help farmers breed new strains with higher yields and greater disease and pest resistance, potentially providing new options for regions facing changing climate conditions.

    “Massive computer power is as essential to modern-day scientific research as test tubes and telescopes,” said Stanley Litow, IBM vice president for corporate citizenship and corporate affairs. “But due to scarce funding for research, pioneering scientists often don’t have access to supercomputers vast enough to meet their research objectives. At IBM, we hope that the equivalent of 100,000 years of computing time per scientist will speed the next major breakthrough to help the world meet the challenge of climate change.”

    Nearly 3 million computers and mobile devices, used by more than 670,000 people and 460 institutions in 80 countries, have contributed computing power to projects on WCG over the past nine years. Since the program’s inception, WCG volunteers have powered more than 20 research projects, donating nearly 1 million years of computing time to scientific research and enabling important scientific advances in health and sustainability.

    “Through his Climate Data Initiative, President Obama is calling for all hands on deck to unleash data and technology in ways that will make businesses and communities more resilient to climate change,” said John Holdren, the president’s science advisor. “The commitments being announced today answer that call by empowering the U.S. and global agricultural sectors with the tools and information needed to keep food systems strong and secure in a changing climate.”

    8:34p
    Causam Energy Acquires Power Analytics in Smart-Grid Play

    Causam Energy is acquiring Power Analytics, an energy management software and professional services company whose platform Paladin is used in many data centers. It also has customers among utilities, government agencies and commercial microgrids.

    Causam focuses on bringing greater intelligence to the electric power grid. It will use Power Analytics’ talent and software capabilities in development of an advanced communications and analytics solution for the power grid. Power Analytics will operate as a wholly owned subsidiary of Causam.

    The deal is a smart-grid play. Paladin will be a core component in development of applications for real-time communication, advanced energy settlements, distributed generation and intelligent analytics.

    Power Analytics’ software is used for electrical system planning and operation in energy-intensive, mission-critical facilities and microgrids. Its software products are used worldwide and currently protect more than $120 billion in customer assets, according to the vendor.

    “Power Analytics’ products and people are the best in the business,” said Joe Forbes, CEO of Causam. “Having a proven suite of high-value products with a significant portfolio of intellectual property behind them, substantially greater resources and strong, experienced management, this new company is uniquely positioned to develop and deliver the critical solutions required by the impending next generation energy grid.

    “Microgrids and other distributed energy resources hold great promise for improving energy reliability and security, balancing loads and reducing our dependence on fossil fuels,” he continued. “This combination will accelerate the introduction of additional grid solutions, improving access, grid security and data analytics.”

    Power Analytics president and CTO Kevin Meagher said Causam was an ideal fit for the company. “This union empowers us to continue providing high-quality products and service to customers while adding resources to extend and enhance our existing products that will help our customers meet the demands of the rapidly evolving grid.”

