Data Center Knowledge | News and analysis for the data center industry
 

Monday, July 13th, 2015

    12:00p
    Custom Servers Process Tweets in Twitter Data Centers

    Twitter data centers, like Google data centers, Facebook data centers, and other so-called “web-scale” server farms, house custom servers designed by the company’s own hardware engineers and optimized for its applications.

    The use of hardware designed for a particular company’s purpose and produced by original design manufacturers in Asia is now a common approach to infrastructure that needs to deliver web services on a global scale. It’s not a huge surprise that Twitter uses custom servers, but the company’s head of engineering, Alex Roetter, recently confirmed it to Wired.

    Unlike Facebook, which makes most of its hardware designs available for public consumption through the Open Compute Project, its open source hardware and data center design community, Twitter is a web-scale company that rarely shares details about its infrastructure.

    Sharing Infrastructure Software Tools

    That’s not to say it doesn’t share anything. It has used a lot of open source software and open sourced a lot of tools it created in-house.

    The most widely known implementation of Apache Mesos runs in Twitter data centers. Benjamin Hindman, one of the open source cluster management system’s creators, is credited with deploying Mesos during his time at Twitter and making the notorious “Fail Whale” largely a thing of the past.

    Last year, Hindman joined Mesosphere, a startup that has packaged Mesos as a commercial product, calling it the “data center operating system.”

    Another example of Mesos in action is a server cluster underneath JARVIS, a Platform-as-a-Service that supports Siri, the natural-language-based interface for Apple iPhones.

    Apple also keeps its infrastructure strategy close to the vest, but we know that it is at least interested in custom hardware. After several years of quietly participating in the Open Compute Project, Apple was revealed as an official member earlier this year.

    Since Facebook launched it in 2011, OCP has become a hub for companies interested in using, making, and selling custom hardware for web-scale data centers. The hardware is not only customized for performance but also optimized for the lowest possible cost and the fastest possible procurement.

    Learning to Scale

    Web-scale giants tend to grow at breakneck speeds, and data center capacity planning in support of those growth rates is a science. Spend too much too soon and face stranded capital; deploy too little, and watch your service get knocked out by a traffic spike when Ellen DeGeneres decides to tweet a selfie together with everyone from Angelina Jolie and Brad Pitt to Jennifer Lawrence and Bradley Cooper.

    As Raffi Krikorian, at the time Twitter’s VP of platform engineering, admitted about a year ago, it was only around then that the company’s infrastructure team could finally say with some degree of confidence that they “know how to do this.” And knowing how to do it involved creating hardware and lots of software in-house.

    Web Scale Headed for Mainstream?

    As with other web-scale companies, off-the-shelf hardware just didn’t cut it for Twitter’s purposes. But that’s changing. Slowly but surely, every “incumbent” hardware vendor has joined OCP, and most now have some sort of commodity line.

    The market for this kind of hardware is growing faster than any other category, and it is gradually expanding beyond the small circle of web giants. In the financial-services world, for example, Goldman Sachs and Fidelity Investments, two of the earliest participants in OCP, have been joined by Bank of America and Capital One. JPMorgan Chase and Bloomberg have also been looking at OCP hardware.

    Market analysts at Gartner have predicted that as many as half of all global enterprises will adopt web-scale IT as their architectural approach. Today, about 15 percent of servers in data centers around the world are customized computers designed for scale, Jason Waxman, general manager of Intel’s Cloud Infrastructure Group, told Wired.

    3:00p
    SaaS Steps Over the Line – and Closer to Customers

    Stewart Florsheim is the VP of Marketing for Kenandy.

    Software-as-a-Service (SaaS) is transforming the relationship between software providers and their customers. Vendors and customers are increasingly designing and building software products and services more collaboratively—even mission-critical business applications like ERP.

    That would have been unthinkable a few years ago. Of course, a software company’s biggest customers have always had some say in requesting new features and changes. But those suggestions were typically thrown over the wall to the vendor’s development team, and only after months or years—and with a lot of luck and deployment effort—would the customer get the new functionality. Since then, things have changed.

    At first, when SaaS just referred to hosting software in the cloud, software vendors viewed it as simply a novel way of delivering software to customers. On the customer side, shifting the responsibility (and resources) for hosting and maintaining the software was valuable on its own. But since then, the SaaS model, and cloud computing more generally, has sparked a renaissance in development. Software built in the cloud is faster to develop and deploy than legacy software. It also has built-in mobile, global, and social capabilities. It is typically easier to use than legacy software, and there is no longer a need for costly and time-consuming updates.

    While cloud software is easier to deploy than legacy systems, large enterprises likely won’t “rip-and-replace” business critical software overnight. However, they aren’t necessarily locked into multi-year contracts either, so SaaS vendors must keep customers engaged in a continual courtship, and be responsive to their needs. Customers know it’s now possible for vendors to deploy new features and services in a matter of weeks or months, not years. So when they tell vendors what they want to see in the product, they expect their suggested changes to be added to the roadmap.

    “What cloud does, and more specifically Software-as-a-Service cloud does, is create a real partnership where you’re both aligned to the success of the project,” said Dave McLain, senior vice president, chief information officer, and chief procurement officer for Big Heart Pet Brands (recently acquired by Smuckers). “That’s a new shift in the industry that’s changing the probabilities of success for many of these projects.”

    And there are many ways this plays out between SaaS vendors and customers. Clients may request relatively minor features, such as additional reporting capabilities, or major features, such as localization for a specific country. The vendor may respond by saying that the additional reporting will be available in the next update, but the localization will have to wait. In this case, the SaaS model can provide a paradigm for dialogue and compromise.

    Vendors now may not even wait for their customers to approach them with requests. Increasingly, they are monitoring and engaging their customers in online communities. These forums enable customers to network with other users to share best practices and ideas to get more from the software, and vendors can gain valuable insight into potential issues or opportunities to provide new functionality—even before the customer asks.

    SaaS companies still know more about software development, and customers still know more about their own business needs. What’s new is that the onus for success no longer rests on one party alone. Both parties are increasingly acting in partnership, crossing back and forth over the traditional lines that divided customer from vendor to play on the same team. With the power of the cloud behind them, SaaS companies can be more responsive than ever before, bringing new features and updates seamlessly to the customers who need them.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:02p
    HP and Intel Form Alliance To Capture HPC Perfect Storm

    HP and Intel have formed an alliance around high-performance computing, the companies announced today at the International Supercomputing Conference in Frankfurt. The partnership is a bid to capture the growing commercial enterprise market for HPC.

    In addition to advancing HPC offerings, the alliance is about forming a go-to-market strategy to deliver purpose-built compute platforms to enterprises across various industries. The companies see HPC needs expanding beyond traditional use cases in academia and government to commercial enterprises and will work to develop vertical-specific HPC solutions.

    The companies will launch a new HPC Center of Excellence in Texas as well as enhance an existing one in France with tighter go-to-market collaboration. The centers are designed to bring experts from both companies together to support customers throughout planning, developing, deploying, and managing HPC solutions. The alliance also means tighter integration of technologies from the two companies.

    HPC is traditionally the territory of government and academia but has been increasingly expanding into enterprises seeking business insight from Big Data analytics. IDC forecasts the base HPC compute server market will reach $15.2 billion in 2019, driven by demand for advanced analytics.

    The companies see a perfect storm for HPC. The sheer amount of data is increasing, as are the sources generating it, from the Internet of Things to an increasingly connected world. Business processes across all industries are being transformed to meet increasingly real-time needs. Finally, the companies believe processing technology has hit a price point that aligns with traditional general-purpose IT, putting HPC within reach of everyone, not just traditional academia and government users.

    As a result of the alliance, HP is offering an HPC Solutions Framework based on HP Apollo servers, which integrates Intel’s Xeon processors and its HPC scalable system framework. Purpose-built HP Apollo Compute platforms will be tailored for a wide range of workload-optimized solutions and unique customer requirements.

    “As data explodes in volume, velocity and variety, and the processing requirements to address business challenges become more sophisticated, the line between traditional and high-performance computing is blurring,” said Bill Mannel, vice president and general manager, HPC and Big Data, HP Servers, in a release. “With this alliance, we are giving customers access to the technologies and solutions as well as the intellectual property, portfolio services and engineering support needed to evolve their compute infrastructure to capitalize on a data-driven environment.”

    In terms of competition, IBM, NVIDIA, Google, and others behind the OpenPOWER initiative have been making similar moves. The POWER architecture is being positioned for HPC needs and against Intel. IBM has opened several HPC centers of its own in Europe, most recently in France, and has signed a major deal with the UK government.

    HP and Intel are enhancing capabilities at the HPC Center of Excellence in Grenoble, France, including providing customers with the opportunity to work with ISVs and with HP and Intel engineers to modernize code. The center in Grenoble enables customers, developers, and ISVs to carry out proof of concept, benchmarks, and characterizations to optimize the infrastructure for HPC-related workloads.

    The new center in Houston is expected to better support the North American market.

    5:35p
    Intel Flexes HPC Server, Network Muscle at ISC

    At the International Supercomputing Conference in Frankfurt today Intel showed off a bevy of forthcoming additions to its high-performance computing portfolio, including for the first time a switch built around the Intel Omni-Path Architecture 100 Series, which the company claims is capable of delivering 195 million messages per second per port.

    In addition, Intel provided updates on how second-generation Intel Xeon Phi processors, codenamed Knights Landing, will be used to provide parallel computing capabilities inside the data center, while also revealing that it has allied with HP around HPC centers of excellence in Grenoble, France, and Houston.

    Charlie Wuischpard, vice president of the Intel Data Center Group and general manager of Workstations and HPC at the company, said Intel is trying to rally developers around these next-generation supercomputer platforms. To that end, the company expects to have tools, training, and support in place for 400,000 developers and partners by the end of the year, with 10,000 of them getting remote access to both Xeon and Xeon Phi systems so they can build applications.

    Intel expects to see its OEM partners delivering new systems based on the latest Xeon Phi processors late this year. Meanwhile, switches based on the Omni-Path 100 series are expected to deliver port-to-port latency as low as 100 to 110 nanoseconds, which is 23 percent lower than competing InfiniBand switches, at what Intel says will be a much lower cost.
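
    As a quick back-of-the-envelope check (our arithmetic, not Intel's), the 23 percent figure implies the competing InfiniBand switches sit somewhere around 130 to 143 nanoseconds port to port:

        # Rough sanity check of the latency comparison quoted above.
        # If Omni-Path port-to-port latency is 100-110 ns and that is
        # "23 percent lower" than InfiniBand, the implied InfiniBand
        # latency is omni_path / (1 - 0.23).
        for omni_path_ns in (100, 110):
            implied_infiniband_ns = omni_path_ns / (1 - 0.23)
            print(f"Omni-Path {omni_path_ns} ns -> implied InfiniBand ~{implied_infiniband_ns:.0f} ns")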

    The biggest challenge facing the chip giant with Phi-powered HPC servers will be educating developers on how to create applications that take advantage of the processors to execute code in parallel. To help drive that, Intel will reward the developers who build the best applications for a social cause with a free trip to the CERN supercomputing facility in Switzerland, said Wuischpard.

    “We’re planning to reach out to them via a number of hackathons where they can show off their skills,” he said. “We also think in aggregate that developers will see Intel providing the best total cost of ownership.”

    As part of that TCO focus, Intel is also showcasing today the latest cloud technologies and editions of Intel Lustre storage software that are due out in the third quarter. The company also said that version 3.0 of Lustre Enterprise Edition, due out in 2016, will not only be faster but will also provide enhanced security and manageability using snapshots enabled by ZFS.

    Intel is especially eager to expand its footprint in the HPC server market, which IDC expects to reach a total of $15.2 billion in 2019. The HPC category is expected to grow significantly faster than the rest of the server market.

    While Intel-based systems lower the cost of building HPC applications, for Intel they clearly represent a higher-margin opportunity than traditional data center servers.

    5:45p
    Amazon Rolls Out Fully Managed API Gateway as a Service

    The new Amazon API Gateway is a fully managed service that helps customers maintain, monitor, and secure APIs, which act as a “front door” for applications to access data, business logic or functionality from their backend services.

    The new generation of applications uses functionality from a variety of other applications as building blocks. These clouds of data and services are accessed through API calls; the API acts as the connection point. Amazon Web Services’ new API service, announced last week, handles traffic management, authorization and access control, monitoring, and API version management. Amazon API Gateway charges based on the number of API calls and the amount of data transferred out.
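
    For readers who want a concrete picture of what working with the service looks like, here is a minimal sketch using the AWS SDK for Python (boto3). The API name, resource path, and backend URL are hypothetical placeholders, not details from Amazon's announcement.

        import boto3

        # Assumes AWS credentials and a default region are already configured.
        apigw = boto3.client("apigateway")

        # Create a new REST API; the name is a hypothetical example.
        api = apigw.create_rest_api(name="orders-api")
        api_id = api["id"]

        # Every REST API starts with a root resource ("/"); fetch its id.
        root_id = apigw.get_resources(restApiId=api_id)["items"][0]["id"]

        # Add a /orders resource with a GET method protected by IAM auth.
        orders = apigw.create_resource(restApiId=api_id, parentId=root_id, pathPart="orders")
        apigw.put_method(
            restApiId=api_id,
            resourceId=orders["id"],
            httpMethod="GET",
            authorizationType="AWS_IAM",  # verify and authorize callers with IAM
        )

        # Front an existing HTTP backend (placeholder URL) with the new method.
        apigw.put_integration(
            restApiId=api_id,
            resourceId=orders["id"],
            httpMethod="GET",
            type="HTTP",
            integrationHttpMethod="GET",
            uri="https://backend.example.com/orders",
        )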

    The amount and diversity of data being collected today from a multitude of sources – mobile devices and increasingly connected everyday devices – mean that clouds of data are being formed and made available to other applications. In addition to growing data, new functionality that works with this data is being built. It’s increasingly common for companies to make their backend systems and data accessible to applications for cross-pollination, either within a company or with a developer ecosystem. This is done through APIs.

    However, there are several considerations around APIs. Security and access control top the list: the world is increasingly open, but not entirely, and the backend needs to remain secure. APIs also need to perform consistently, and the traffic they generate needs to be kept in check. Amazon API Gateway is a building and management kit that handles all of these issues.

    “Building and running rock-solid APIs at massive scale is a significant challenge for customers,” said Marco Argenti, AWS VP, in a press release. “And yet, this is one of the most important ingredients for building and operating modern applications that are consumed through multiple devices.”

    Argenti said that AWS has close to a decade of experience running some of the most heavily used APIs in the world. “The Amazon API Gateway takes this learning and makes it available to customers as a pay-as-you-go service that eliminates the cost and complexity of managing APIs so that developers can focus on building great apps,” he said.

    Amazon API Gateway leverages familiar AWS security tools such as AWS Identity and Access Management to verify and authenticate API requests. Amazon API Gateway lets companies run multiple versions of an API simultaneously so that they can develop, deploy, and test new versions of their APIs without impacting existing apps.
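
    One way this works in practice is through stages: the same API definition can be deployed to, say, a test stage and a prod stage, and clients call whichever one they are ready for. Continuing the hypothetical sketch above:

        # Deploy the same API twice, once per stage; existing clients keep
        # calling "prod" while new behavior is exercised under "test".
        for stage in ("test", "prod"):
            apigw.create_deployment(restApiId=api_id, stageName=stage)

        # Each stage gets its own invoke URL of the form
        # https://{api_id}.execute-api.{region}.amazonaws.com/{stage}/orders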

    Once an API is deployed, Amazon API Gateway allows customers to control the number of API requests that hit their back-end systems within a certain time period to protect them from traffic spikes, and helps reduce API latency by caching responses.
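
    Both throttling and response caching are configured per stage. The sketch below uses the patch-operation format of the update_stage call; the specific limits and cache size are illustrative assumptions, not recommendations.

        # Enable a response cache and cap request rates for the "prod" stage.
        apigw.update_stage(
            restApiId=api_id,
            stageName="prod",
            patchOperations=[
                # Turn on the stage-level response cache (0.5 GB is the smallest size).
                {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
                {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
                # Throttle all methods to 100 requests/second with bursts of 200.
                {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "100"},
                {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "200"},
            ],
        )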

    Amazon API Gateway also monitors the usage and performance of backend services, providing metrics such as number of API calls, latency, and error rates.

    The service integrates with AWS Lambda, which helps developers build a “smart” backend that responds to triggers and events, spinning compute up and down as needed.
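
    To sketch what that pairing looks like from a developer's perspective, here is a trivial Python function of the kind Lambda runs on demand. The event fields and response shape are assumptions for illustration; they depend on how the API method is wired to the function.

        import json

        def handler(event, context):
            """Minimal Lambda function an API Gateway method could invoke.

            Lambda spins instances of this function up and down as requests
            arrive, so there is no server for the API owner to manage.
            """
            order_id = event.get("order_id", "unknown")  # hypothetical input field
            return {
                "statusCode": 200,
                "body": json.dumps({"order_id": order_id, "status": "shipped"}),
            }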

    Customers using API Gateway include electronic design automation provider Mentor Graphics, cloud communications platform provider Twilio, and mobile professional services firm Mobiquity.

    6:00p
    CenturyLink Chief Innovation Officer Leaves to Pursue Writing Career


    This article originally appeared at The WHIR

    CenturyLink’s chief innovation officer Lucas Carlson is leaving the company this week, two years after joining as part of its AppFog acquisition.

    Carlson founded AppFog in 2010 and sold the platform-as-a-service company to Louisiana-based CenturyLink in 2013. The startup was dissolved into CenturyLink’s hosting subsidiary Savvis.

    According to a report by VentureBeat, Carlson is leaving CenturyLink to pursue his writing career. Carlson has written several books, including two on programming and an entrepreneurial non-fiction book called Finding Success in Failure: True Confessions from 10 Years of Startup Mistakes. His current book is a startup thriller novel called The Term Sheet. Carlson also runs an entrepreneurial podcast, called Craftsman Founder.

    In an email to the WHIR, a CenturyLink spokesperson said: “We wish Lucas Carlson well and thank him for his leadership of CenturyLink Labs as he leaves CenturyLink to pursue other opportunities. Sekar Swaminathan, senior vice president of integration and innovation at CenturyLink, will now also oversee Lucas’ former responsibilities.”

    In his time at CenturyLink, Carlson helped launch the Panamax open source tool for deploying applications in Docker containers. Panamax helped to make Docker and Linux containers available to average developers, Carlson told VentureBeat last year. Since then, the popularity of Docker has only grown.

    While Carlson was certainly an advocate for Docker technology and helped CenturyLink grow in that area, it appears that the company has invested in talent in that segment beyond Carlson. Its CenturyLink Innovation Labs division continues to share its expertise and R&D on its blog.

    Earlier this year, CenturyLink acquired Orchestrate, a startup that provides various NoSQL databases as a service through a single API.

    8:08p
    Expedient Reopens Former Hilton Data Center in Memphis as Colo

    Pittsburgh, Pennsylvania-based Expedient has opened a new Memphis data center following close to $9 million worth of upgrades. Expedient provides managed hosting, cloud, and colocation services.

    The 35,000-square-foot facility used to be a Hilton Hotels enterprise data center. The company updated the data center and is holding a grand opening later this week. This is the eleventh data center in Expedient’s portfolio across seven cities.

    Expedient currently operates in several secondary markets, including Pittsburgh, Cleveland, Indianapolis, Columbus, Boston, Baltimore, and now also Memphis. The company said Memphis was chosen because of local-company needs.

    Memphis is one of many sizable metro areas across the country without a big colocation provider presence. Zayo’s zColo is one of the few other providers in the metro. These cities do have legacy enterprise data centers that are potential candidates for service providers to acquire and refurbish as the local market for colocation grows.

    Both local businesses looking to outsource for the first time and the growth of the edge (the desire to serve content close to metro populations) are driving colocation growth beyond the traditional markets.

    Memphis also complements the company’s other facilities and provides the company with its southernmost presence.

    “We are excited to show the local Memphis community the finished data center and how it can help solve their most pressing IT infrastructure problems,” Jim Kothe, mid-south VP for Expedient, said in a statement. “We’ve already hired several people from Memphis in our support center, sales and engineering teams and will continue to do so as we grow.”

    Expedient announced the Memphis data center upgrades early this year. The facility’s first phase includes 7,500 square feet of raised floor production space and is connected to the company’s 10 other data centers.

    The company broke ground on its $50-million-plus data center in Dublin, Ohio, last September. The facility is expected to open in November.

    Expedient completed an expansion of its Pittsburgh data center at the Allegheny Center Mall in late 2014.

    8:20p
    Survey: Hybrid Clouds Hotter Than Ever

    A recent ZDNet survey, covered by our sister publication Data Center Management magazine, showed that 70 percent of respondents were using or evaluating a hybrid cloud solution. Hybrid cloud solutions combine on-premises IT infrastructure or private cloud with components accessed in a public cloud. It’s no surprise that almost every enterprise systems vendor—IBM, HP, Cisco, Dell, VMware, EMC, Microsoft, etc.—either offers a hybrid cloud solution or is rushing one to market.

    The two biggest reasons these solutions are gaining popularity are cost efficiency and business agility. Even where enterprise data center budgets are expanding, nobody is given carte blanche to spend on IT infrastructure. So, instead of over-buying or over-provisioning on-premises IT resources and running them at very low utilization most of the time, you turn to the cloud for extra capacity when you need it, and you pay only for what you use, when you use it. That’s cost efficiency.
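
    As a rough illustration of that trade-off, the sketch below compares sizing an on-premises environment for peak demand against owning only the baseline and renting the peak from a public cloud. Every number in it is invented for the example; none come from the survey.

        # Hypothetical capacity-planning comparison: all figures are made up.
        PEAK_SERVERS = 100                    # needed only during spikes
        BASELINE_SERVERS = 40                 # needed the rest of the time
        ON_PREM_COST_PER_SERVER_YEAR = 3_000  # owned and operated in-house
        CLOUD_COST_PER_SERVER_HOUR = 0.50     # rented on demand
        SPIKE_HOURS_PER_YEAR = 300            # how long the peaks actually last

        # Option 1: size the data center for the peak and run it all year.
        on_prem_only = PEAK_SERVERS * ON_PREM_COST_PER_SERVER_YEAR

        # Option 2: own the baseline, burst the difference to the cloud.
        hybrid = (BASELINE_SERVERS * ON_PREM_COST_PER_SERVER_YEAR
                  + (PEAK_SERVERS - BASELINE_SERVERS) * CLOUD_COST_PER_SERVER_HOUR * SPIKE_HOURS_PER_YEAR)

        print(f"Peak-sized on-prem: ${on_prem_only:,.0f}/year")
        print(f"Hybrid burst model: ${hybrid:,.0f}/year")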

    Business agility results from the organization’s ability to turn to the cloud for services, capabilities, or resources it ordinarily doesn’t maintain in-house. If a customer requests a certain capability, you can access it through the cloud and make it part of your hybrid cloud. That way, you are able to satisfy the customer and deliver the desired capability fast. Should other customers request the same, you can decide whether to build the capability in-house or continue to deliver it through your hybrid cloud.

    The full article is available here: http://www.afcom.com/digital-library/pub-type/dcmmag/enterprise-hybrid-clouds-gaining-popularity/

