Data Center Knowledge | News and analysis for the data center industry
Thursday, June 19th, 2014
11:00a
Intel to Offer Hyper-Scale Operators Ability to Reconfigure CPUs on a Dime

Getting deeper into the business of building customized processors for hyper-scale data centers, Intel has cooked up a hybrid chip that bolts a Field-Programmable Gate Array onto its high-end Xeon E5 server chip. The two work coherently and will come in a single package, socket-compatible with regular Xeon chips.
FPGAs are a popular way for customers to test different chip configurations before hard-wiring them into their infrastructure. Intel is offering the hybrid chip as a dynamically configurable destination for off-loading certain workloads from the main CPU.
Diane Bryant, general manager of Intel’s data center group, said the new chips will act as accelerators for applications that can take advantage of the off-loading capabilities. Users will be able to customize the accelerator, however, for their specific workload.
Bryant announced the product at GigaOm Structure in San Francisco Wednesday. The chips are not yet available, and she could not say when exactly they would become available other than suggesting it would be within about a year.
Using FPGA for off-loading provides between 10 and 30 times better performance, Bryant said. “By moving it into the Xeon package we’ll double that performance benefit.”
Intel plans to offer the solution as a way for a customer to test what configurations work and then order Server-on-Chip cards with those specific configurations as a permanent solution.
The second option will be to just deploy the original hybrid chips at scale to gain the ability to constantly reconfigure them as needs change. “You can reprogram for a different algorithm or a different workload on the fly,” Bryant said.
A huge opportunity
Custom chips have become a big business for Intel. Hyper-scale data center operators, such as Facebook, Google or Amazon, want to customize every single piece of their infrastructure stacks (including server chips) for their specific needs, because at their scale even modest performance gains mean many millions of dollars in savings.
Last year, Intel said it had delivered 15 custom CPUs to end users, including Facebook and eBay. More than double that amount is in the pipeline this year, Bryant said.
In addition to the Internet giants, Intel considers cloud service providers and telco companies a big market for custom chips.
The chipmaker has been engaged with many of the large-scale cloud service providers since about 2007, when it realized how much they were affected by their data center technology. “This has been a continuous progression of customization,” Bryant said.
“These are highly technical corporations. These folks know exactly what they need to accelerate their workload.”
‘Amply paranoid’ about competition
Intel is not alone in the custom-chip market. It faces competition from AMD and other companies that license processor architecture from ARM Holdings and modify it to optimize for various workload types. The company does not take this competition lightly.
“We have always had competition and we will always have competition,” Bryant said. “We are amply paranoid about competition.”
Because of this paranoia, the company always invests heavily in adding products to its portfolio to make sure every workload runs best on Intel.
Having run Intel’s entire IT operation as CIO in the past, Bryant is very conscious of the fact that customers do not like to have only one vendor for a piece of technology in their data centers. “I’m not naïve to the fact that everyone would like a second source,” she said.

11:30a
Oak Hill Acquires European Data Center Provider Pulsant

New York-based private equity firm Oak Hill Capital Partners has acquired European cloud, managed hosting and colocation provider Pulsant for an undisclosed sum, though some are pegging the deal at £200 million. Oak Hill is a savvy Internet infrastructure investor with several past dealings in the space.
In 2005, Oak Hill co-led consolidation of the European colocation industry by acquiring TelecityGroup and subsequently making a number of other strategic acquisitions. In 2010 it sold part of its position in TelecityGroup for a nice return on its investment. Pulsant is a rollup and consolidation play itself, and it is likely that Oak Hill will try to repeat history.
“We expect continued significant consolidation amongst suppliers in this dynamic market, and we believe Pulsant is well-positioned to expand its leadership position by executing on targeted acquisitions, bringing additional capabilities to benefit Pulsant and its customers,” said Mark Howling, CEO of Pulsant.
Investment firm Bridgepoint Development Capital built Pulsant from a series of acquisitions, beginning with Lumison in 2010. The other deals included Blue Square Data, Dedipower and Scotland’s ScoLocate, with its 75,000 square foot Edinburgh facility, in 2012.
Bridgepoint is exiting with what is expected to be a handsome windfall as Pulsant goes to Oak Hill.
Oak Hill’s past investments in data center companies include Savvis and Cincinnati Bell, former owner of CyrusOne, which went public last year. It partnered with GI Partners to acquire ViaWest in 2010.
One good example of the way Oak Hill builds up a company is Microsoft Exchange specialist Intermedia, acquired in 2011. Former Savvis CEO Phil Koen came aboard as CEO following the acquisition, and the company grew its Exchange business and broadened its services strategy, expanding into hosted SMB applications as well as buying into VoIP services.
“We are pleased to be investing in Pulsant and to be supporting Mark Howling and the rest of the Pulsant management team as the company continues its strong growth trajectory,” said David Scott, principal at Oak Hill. “With this transaction, Oak Hill builds upon a decade of significant data center expertise.”
Pulsant has a European data center footprint of more than 240,000 square feet across 10 facilities. It is currently in the process of completing a series of multi-million-pound upgrades at its Maidenhead data center. It is upgrading its UPS architecture and Network Operations Center, as well as investing £200,000 to upgrade its core switch network across the campus.
12:30p
Preparing the Workforce to Manage Today’s Data Center

Antonella Corno is the product manager for the Data Center/Virtualization, Cloud and SDN product lines within Learning@Cisco, creating certifications and training for customers, partners and Cisco employees.
Many IT departments are being confronted with parallel problems in light of the evolution of data center technology. In addition to investing in more powerful equipment, IT departments are realizing that they must invest in their employees’ skills if they hope to thrive in the modern data center environment. Many of the technology investments made in the data center may not see a return unless all levels of staff are properly trained.
Virtualization and the changing environment
The virtualization of the data center continues to change the status quo and requires data center operators and engineers to be flexible and adaptable. The advent of the virtual switch, for example, has significantly changed operations. Even though virtual switches have been available in some server operating systems for more than 10 years, their level of use in modern implementations is unprecedented.
Whereas roles, workflows, and skill sets are well established in more traditionally structured computing environments, the very open-endedness that makes a virtual data center so powerful can be overwhelming. Given all the possibilities, each company, and each team, will find itself at a different stage with data center technology. The size, history, and culture of a company, as well as the nature of its real-world projects, will also play an important role in how it decides to approach the data center.
While some data center vendors will be focused on promoting specific products and solutions, comprehensive solution providers will ensure that all of the options are made available to customers, and that customers are educated enough to make the right choices for their business.
Because technology changes often modify a customer’s IT organizational structure, they must represent a joint investment between the solution provider and the customer. The solution provider needs to invest capital and resources in enablement, and the customer has to be willing to step up and make the changes necessary to fully exploit the technology’s potential.
Comprehensive training, preparing the workforce
When the technologies are so extensive and so new, the purpose of training should not be to simply teach employees how to join a crowd of already-trained individuals doing a particular job. The training must go beyond this and help companies create a new workforce—a workforce in which IT individuals are prepared to work within and outside of their current comfort zone.
An argument for this much more comprehensive approach to data center training can be found by looking at how data centers have historically been set up. Traditionally, companies have compartmentalized their IT department, with the result that there have been data center siloes: a computing department, a networking infrastructure department, and a storage department.
Virtualization is continuing to merge these functions, and as a result the separate siloes have started to break apart. While this dramatic shift has not been fully realized in every company yet, a pathway now exists to allow individuals to embrace data center innovation in full and bring their organizations to the next level. We have entered the era of the “data center architect.”
Identifying a data center architect
The term “data center architect” invites parallels with the construction industry. An architect designing a structure drafts a blueprint of the construction. Next the architect gathers needed information from a team of experts, who might otherwise be challenged attempting to coordinate with each other. Similarly, the data center architect, or cloud architect, looks at a company’s data center operation holistically and unites those with specific expertise in the server, network, storage, security, or software application arenas.
In the early phase of adoption, companies would do well to tap those most capable of serving as architect. These will be individuals who are not only technical experts in their discipline, but also fully capable of reaching out to the computing side of the house, extending their data center knowledge to them and interacting effectively with them. The architect will need to be a leader who can use vast experience in the field to harmonize the efforts of individuals as diverse as a server expert and a storage expert.
Data center architects appreciate the details but do not get mired in them. Instead, they function as a bridge across the complexities presented by virtualization, software integration, and application integration. Increasingly, they become less hands-on, but they must always maintain the capacity to understand, learn, and, where necessary, embrace the latest innovations.
The data center evolution
Once it has identified its data center leaders, an organization can decide how much it wants to evolve: whether to merge all the skill sets (or most of them), or maintain a compartmentalized structure. The latter model must still rely upon strong leadership that coordinates the various technologies through a robust design, albeit with a slower cross-pollination of knowledge, and consequently a slower convergence.
There is no right or wrong evolution path for organizations embracing data center virtualization technology, but there is an ongoing need to understand the complexity, to identify individuals capable of making the right decisions, and to implement changes according to what is best for the company.
As part of this, organizations would be well served to maintain an open pipeline to those newly entering the workforce. While just a few years back, colleges emphasized exposure to basic networking knowledge, more and more educators are realizing the importance of imbuing students with awareness of new technologies, understanding that by doing so, there will be fewer gaps to be filled later.
Bridging the gap for future success
There exists today a data center knowledge gap across all job roles from administrators and engineers to business services and technology architects. This talent shortage continues on to specialized engineering roles as well. It is a knowledge gap that must be bridged from both directions: by a new workforce, fresh from colleges and universities and already somewhat trained as it enters the work space, and by the current workforce moving from a traditional environment to advanced technologies that call for a new and different structure and skill set.
Individual employees and organizations able to bridge this gap will find themselves well positioned for the even greater and more disruptive innovations to come.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

1:00p
Real-Time Search and Analytics Platform Elasticsearch Gets Hadoopier

Fresh off a $70 million Series C funding round, real-time search and analytics platform provider Elasticsearch announced release 2.0 of its Hadoop connector, called Elasticsearch for Apache Hadoop, along with certification on Cloudera Enterprise 5. This means Elasticsearch is now compatible across all Apache-based Hadoop distributions, including the other two big distros, Hortonworks and MapR.
Elasticsearch helps pull data from any environment and put it into the hands of the developers, engineering leads, CTOs and CIOs who need insight into the moving parts of their business at the rate those parts are moving. The connector gives users the ability to read and write data between Hadoop and Elasticsearch. When Elasticsearch is used in conjunction with Hadoop, organizations no longer need to run a batch process and wait hours to analyze their data. It takes minutes.
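As a rough sketch of what that low-latency access can look like in practice, the snippet below uses the official Elasticsearch Python client to query an index that a Hadoop job is assumed to have populated through the connector. The host, index and field names are hypothetical, not part of the announcement.

    from elasticsearch import Elasticsearch

    # Connect to the cluster; the node address is illustrative.
    es = Elasticsearch(["http://es-node-1:9200"])

    # Query an index assumed to have been written by a Hadoop job via the
    # Elasticsearch for Apache Hadoop connector (index and field names are
    # hypothetical).
    results = es.search(
        index="clickstream-2014.06",
        body={"query": {"match": {"user_id": "12345"}}, "size": 10},
    )

    for hit in results["hits"]["hits"]:
        print(hit["_source"])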
Elasticsearch offers what it calls the ELK stack. In addition to powering search functionality, it includes the log management tool Logstash and the data visualization tool Kibana to help businesses gain immediate insights from their data stores. Combined with Hadoop, the stack simply becomes more powerful.
There’s native integration and support for popular Hadoop libraries. Users can run queries natively on Hadoop through the MapReduce, Hive, Pig or Cascading APIs. Another benefit is snapshot and restore: the two technologies combined make it easy to take a snapshot of data within Elasticsearch – perhaps a year’s worth – and archive it in Hadoop. At any time the snapshot can be restored back to Elasticsearch for additional analysis.
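The snapshot workflow might look something like the following sketch, again using the Python client. It assumes the HDFS snapshot repository plugin that ships alongside the Hadoop connector is installed on the cluster; the repository, index and path names are illustrative.

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://es-node-1:9200"])

    # Register an HDFS-backed snapshot repository (requires the HDFS
    # repository plugin; the URI and path are placeholders).
    es.snapshot.create_repository(
        repository="hdfs_archive",
        body={
            "type": "hdfs",
            "settings": {"uri": "hdfs://namenode:8020", "path": "/backups/elasticsearch"},
        },
    )

    # Archive a year's worth of indices into Hadoop...
    es.snapshot.create(
        repository="hdfs_archive",
        snapshot="logs_2013",
        body={"indices": "logs-2013.*"},
        wait_for_completion=True,
    )

    # ...and restore them to Elasticsearch when they are needed for analysis again.
    es.snapshot.restore(repository="hdfs_archive", snapshot="logs_2013")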
“Hadoop was created to store and archive data at a massive scale, but businesses need to be able to ask, iterate and extract actionable insights from this data, which is what we designed our products for,” said Steven Schuurman, Elasticsearch cofounder and CEO. “With today’s certification from Cloudera, Elasticsearch now works with all Apache-based Hadoop distributions, and with it solves the last mile of Big Data Hadoop deployments by getting big insights fast.”
High-profile clients: check
Elasticsearch has an impressive and growing customer roster, featuring the likes of Comcast, eBay, Facebook, Mayo Clinic, Foursquare, SoundCloud and Tinder. The company highlighted some additional customers, as well as how important Hadoop integration is to their use of Elasticsearch. Two examples named were Klout and MutualMind.
Klout, an online reputation management firm, connects petabytes of data on its 400 million-plus users, stored in the Hadoop Distributed File System, to Elasticsearch so it can deliver query results in seconds rather than minutes and quickly build targeted marketing campaigns for its customers.
“Elasticsearch has a very good integration with Hadoop,” said Felipe Oliveria, director of backend engineering at Klout. “It allows us to export a Hive table to an index on Elasticsearch very easily. HBase is a great data store, and it allows random access to the data, which Elasticsearch is perfect for. Elasticsearch fits very nicely into our data pipeline.”
MutualMind provides brand monitoring on social networks for customers like AT&T, Kraft, Nestle and Starbucks. After its Hadoop batches started taking more than 15 minutes, the company moved to Elasticsearch to power its real-time analytics, while continuing to use Hadoop for statistical analysis.

2:00p
Experience Rapid Growth and Fast Time to Market Speed

In today’s ever-evolving business and IT world, it has become even more critical to keep pace with the speed of the market. This holds especially true for the always fluctuating financial services industry.
Take Scivantage for example.
Scivantage enables financial institutions and financial professionals to dramatically improve productivity, strengthen customer relationships and reduce operational costs. The company’s proven, back-office independent solutions offer a full suite of powerful applications that span the workflow of financial professionals and support the complex investment needs of the retail investor.
With more than a decade of experience serving the ever-changing financial sector, Scivantage understands that some things remain constant. Chief among them is the need for scalable IT infrastructure that is always secure, reliable and available.
“In the financial services industry, you have to host operations in a data center that is up and running 24/7,” says Mike Felice, Manager of Technology Infrastructure at Scivantage. “There’s just no getting around that fact.”
In this whitepaper and case study from CenturyLink, we learn how a powerful hosting and colocation solution can greatly improve the elasticity of the modern organization. As Scivantage expanded its Software-as-a-Service (SaaS) offerings, the organization chose CenturyLink for hosting and colocation needs.
“The combination of services and capabilities that CenturyLink provides has been perfect for us,” Felice says. “CenturyLink Markets Infrastructure gives us the flexibility and scalability to pick what solutions are required for Scivantage, and most importantly our clients, at the right time.”
Download this white paper today to learn how CenturyLink’s hosting and colocation services have allowed Scivantage to focus on its core software business and support significant company growth.
Looking ahead, Scivantage plans to continue exploring new ways to utilize the CenturyLink portfolio of products and services. Remember, the ultimate goal is to partner with a data center provider that can continuously scale to meet the ever-evolving needs of your organization.

4:30p
Puppet Labs Raises $40M to Grow DevOps IT Automation Biz

DevOps-style IT automation software company Puppet Labs has raised a $40 million funding round, nearly doubling the $46 million total it has raised previously. Existing investors Cisco, Google Ventures, Kleiner Perkins Caufield & Byers, Triangle Venture Capital Group, True Ventures and VMware pitched in on the latest round.
Puppet is riding the wave of DevOps in the enterprise. DevOps combines many of the roles of systems administrators and developers, and enterprises look to it to add some agility and automation. Based on an open source architecture, Puppet has seen remarkable adoption since its inception in 2005, but the company did not commercialize it until about three years ago.
It has added more than 500 enterprise users since then. The Puppet Enterprise product, which automates the configuration and deployment of virtual servers, is targeted at streamlining the workload of IT system administrators.
Much of the investment prior to this round, $30 million of it, was from VMware. It makes sense for VMware to invest, as Puppet Labs is a compatible management tool used by much of VMware’s user base.
The company said it will use the funding to invest in talent, product and global expansion.
IT automation is critical for any company that uses software to compete, according to Puppet CEO and founder Luke Kanies. “Today we help more than 18,000 organizations automate IT, both on premise and in the cloud,” he said. “But we’re just getting started. This funding round enables us to further our leadership position by entering new markets and helping our customers deliver great software faster than ever.”
Puppet customers include Zynga, Twitter, NYSE, Disney, Citrix, Oracle, Constant Contact, Match.com, Shopzilla, Los Alamos National Laboratory and Stanford University. Puppet Labs has opened new markets beyond North America and Europe, with user groups and meet-ups growing worldwide, including in China, India and Japan.
“The need to remain competitive within the global economy has led enterprises to demand IT automation capabilities that Puppet Labs has become singularly qualified to deliver. This has led to an exemplary growth trajectory over the years that True Ventures is excited to be supporting,” said Puneet Agarwal, of True Ventures. “Puppet Labs is a rare combination of excellent product and talented people who have come together at a time when the world’s leading enterprises are looking for solutions that will help them reshape their businesses.”

5:30p
Cyan’s Planet Orchestrate Integrates Cloud, NFV and WAN

Cyan announced Planet Orchestrate, a multi-vendor, multi-domain network function virtualization (NFV) and cloud orchestration application for its Blue Planet software-defined network platform. The new application combines WAN service creation and automation with the orchestration of virtual resources, creating a software-defined engine for revenue generation in carrier networks.
The challenge for network operators is to direct network elements and operationalize new virtualized services between data centers, on top of existing network services and across the WAN. Just as IT workloads are virtualized, optimized and automated, the network elements supporting those workloads can be orchestrated to bring together virtual and physical resources, as well as to automate and dynamically configure these resources across multiple network domains.
“The top motivation driving service providers to pursue SDN and NFV is the ability to deploy services much more quickly for new revenue opportunities – it’s about revenue and service agility,” said Michael Howard, principal analyst at Infonetics Research. “While operators are certainly attuned to the capital and operational benefits of these transformative technologies, it is the orchestration of virtual compute, storage and network, combined with the automation of existing network resources that will lead to new, higher-margin revenue streams and happier customers.”
Based on the ETSI NFV ISG Management and Orchestration framework, Planet Orchestrate provides multi-domain orchestration, cloud services orchestration and NFV orchestration, and uses open APIs to ensure interoperability with other platforms and tools. The application can leverage the OpenDaylight platform and other cloud or network management systems for deploying services. This single pane of glass empowers network operators to provision and manage services across networks comprised of both legacy and new SDN- and NFV-enabled components.
The application can be used across data centers and distributed architectures, as it is NFV vendor and function neutral. Available later this year, Planet Orchestrate will enable cloud orchestration with the ability to dynamically instantiate new cloud resources, such as virtual machines, tenant networks and storage, on demand through an enterprise portal, along with control over network (bandwidth-on-demand) and virtual resource allocation.
“Planet Orchestrate combines the capabilities of a WAN service automation and network management system with an SDN controller and orchestrator to enable network operators to deliver new services on both physical and virtual infrastructure, more quickly and at a lower cost,” said Cyan President Michael Hatfield. “Developed in collaboration with network operators and other industry-leading Blue Orbit partners, Planet Orchestrate is ahead of the market and has already proven its viability and interoperability via multiple real-world trials and proof-of-concept demonstrations.”

6:00p
Rackspace Intros Dedicated Servers That Behave Just Like Cloud VMs

Seeking to make cloud infrastructure performance more predictable for its customers, Rackspace is launching a new “bare-metal” server offering, giving users the ability to spin up and down dedicated servers just like they spin up and down virtual machines in its OpenStack cloud.
Rackspace will charge for the service by the minute, meaning anybody will be able to rent a powerful dedicated server sitting in a Rackspace data center for 20 or 30 minutes at a time. They can request it online and have it up and running in a matter of minutes, using the same OpenStack API and tools used to provision and manage cloud VMs.
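Because OnMetal servers are exposed through the same OpenStack API as cloud VMs, provisioning one should look much like booting an ordinary Nova instance. The sketch below uses python-novaclient; the credentials, flavor name and image name are placeholders rather than Rackspace’s actual identifiers.

    from novaclient import client

    # Authenticate against the compute endpoint (credentials and auth URL
    # are placeholders).
    nova = client.Client(
        "2",
        "my_username",
        "my_api_key",
        "my_tenant_id",
        auth_url="https://identity.api.rackspacecloud.com/v2.0/",
    )

    # Pick a bare-metal flavor and image; these names are illustrative,
    # not Rackspace's actual OnMetal identifiers.
    flavor = nova.flavors.find(name="onmetal-memory1")
    image = nova.images.find(name="OnMetal - Ubuntu 14.04 LTS")

    # Boot the dedicated server exactly the way a cloud VM would be booted,
    # and delete it the same way when it is no longer needed.
    server = nova.servers.create(name="cache-node-01", image=image, flavor=flavor)
    print(server.id, server.status)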
Ev Kontsevoy, the company’s director of product, said the problem with cloud VMs was the multi-tenant nature of the service. Your application is always sharing physical server resources and network connections with other applications, making its performance fluctuate constantly.
Currently, companies that start on cloud infrastructure move into colocation data centers when they start growing and need to scale, in order to avoid this problem. As they move, however, they lose the elasticity of cloud.
This predicament is what Rackspace is trying to address with its new OnMetal service, offering dedicated machines with all the utility-computing benefits of Infrastructure-as-a-Service, Kontsevoy explained.
Rackspace president Taylor Rhodes announced OnMetal Thursday at the GigaOm Structure conference in San Francisco.
“Faster than SoftLayer,” billed by the minute
This is certainly not the first bare-metal cloud service ever launched. SoftLayer, the company IBM bought last year, was founded around this concept, and Rackspace itself has offered bare-metal servers before.
OnMetal is very different from both of the above, Kontsevoy said. It performs better and provisions servers much faster.
The company’s previously existing bare-metal offering is more like traditional dedicated hosting, where a customer has to wait hours or sometimes even days to have a server provisioned. They would be billed by the month.
IBM SoftLayer charges for its bare-metal cloud servers by the hour, but its performance is subpar, Kontsevoy said. The provider uses off-the-shelf Supermicro servers which are not optimized the same way Rackspace’s hardware is optimized, he explained.
“Open Compute freaks”
Using reference designs for Open Compute servers, Rackspace has developed three custom designs specifically for OnMetal. The company has identified four workloads that stand to benefit the most from the bare-metal cloud service (processing web requests, background processing, RAM-based caching and database servers) and modified Open Compute designs for them.
The Open Compute Project is an open source hardware and data center design initiative started and led by Facebook. Open source server designs currently available through OCP were originally developed by Facebook for its own purposes.
Rackspace has designed its own OCP-based servers for its traditional VM cloud offering as well.
The OnMetal server is 100 percent solid state, Kontsevoy said. There are no moving parts, no heat (other than CPU) and no vibration, since there are no spinning disks or fans. Cooling is external.
The servers are memory-heavy: half a terabyte is at the low end of the range. “We call them Open Compute freaks,” he said.
Cheaper than VMs for large workloads
OnMetal is currently in limited availability, with a few customers “kicking tires.” Rackspace expects it to go into general availability in June.
The initial deployment is in the company’s Herndon, Virginia, data center. The company will dedicate more data center capacity to the offering as it becomes necessary.
Rackspace has not announced pricing for OnMetal. Running a large workload on it, however, will cost a lot less than running a large workload on cloud VMs, since that’s what it’s optimized for.
The offering aims to bring together the best of both worlds: the capacity and stability of colocation and the scaling capabilities of cloud.
“Ultimately, this leads to much simpler scaling,” Kontsevoy said. “If you are transitioning from a public cloud to colocation, come and take a look at this.”