Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, December 7th, 2016

    1:00p
    Six Server Technology Trends: From Physical to the Virtual and Beyond

    Satya Nishtala is the Founder and CTO of DriveScale.

    Server technology has come a long way in the last 10 years. Why 10 years? No real reason, really. I chose that figure pretty much arbitrarily. Because whether you pick 20 years or two years, as with all things in technology — and in life — change is the only constant. Just consider the fact that what we in IT call servers today are vastly different from what we called them just 10 years ago. In fact, a “server” today isn’t even necessarily an actual physical device. With this in mind, let’s take a look at six of the biggest trends now operative in server technology.

    The Move from Single-Processor Systems to Multi-Processor Systems

    At the highest level, application and market needs drive trends in server technology. Remember when, decades ago, the performance requirements of enterprise applications like databases, ERP and CAD programs started to stress the capabilities of single-processor server systems? In response, the industry developed multiprocessor servers as well as the programming models to go with them. Not surprisingly, as the needs of large enterprises grew, server vendors responded with larger and larger multiprocessor systems.

    Big Data and the Scale-Out Model of Computing

    Where are we today? Very much in an environment of big data and the scale-out model of computing. The new breed of applications for the web-based economy — what many call big data applications and the latest generation of NoSQL database applications — has similarly stressed the capabilities of even the largest multiprocessor servers that we can build. This gave rise to programming models that enable applications to use hundreds or even thousands of networked servers as a single compute cluster platform — what Google calls a “warehouse-scale computer.” It’s also known as the scale-out model of computing, as opposed to the scale-up model that uses larger and larger multiprocessor systems. In this scale-out context, a single physical server is a component of a compute cluster, and that compute cluster is, in turn, the new server.
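
    To make the scale-out idea concrete, here is a minimal, illustrative sketch of the programming model: work is split into shards, mapped across workers, and reduced back into one result. Python's multiprocessing pool stands in for a real cluster of networked servers; a production system would use a framework such as Hadoop or Spark.

    ```python
    # Toy map/reduce word count: the "server" is the cluster, and the
    # programming model hides the individual machines. A multiprocessing
    # pool stands in here for a fleet of networked worker nodes.
    from collections import Counter
    from multiprocessing import Pool

    def map_count(shard):
        """Map step: each worker counts words in its shard of the data."""
        return Counter(shard.split())

    def reduce_counts(partials):
        """Reduce step: merge the per-worker results into one answer."""
        total = Counter()
        for partial in partials:
            total.update(partial)
        return total

    if __name__ == "__main__":
        shards = ["big data scale out", "scale out compute cluster", "big compute"]
        with Pool(processes=3) as pool:      # three "nodes" in the toy cluster
            partials = pool.map(map_count, shards)
        print(reduce_counts(partials).most_common(3))
    ```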

    Advances in High-Performance Networking Technology

    The notions of scalability, failure resiliency, online fault repair and upgrade have also moved from server hardware to cluster software layers, enabled by advances in high-performance networking technology. With 10Gb Ethernet, I/O devices that previously had to be integrated directly into servers for performance reasons can now be served over the network. Consequently, the architecture of a single physical server component has been simplified significantly: at the hardware level, the most cost-efficient compute platform is one with one or two processors, memory and a network interface. At the same time, Linux has become the most widely accepted base software platform for these servers. The “design” of a server now consists of composing a network of simplified physical servers and I/O devices in software. Such a server can be sized (scaled up or down) as needed, and as often as needed, in software, based on enterprise workflow requirements — a capability that was previously impractical. The downside of this model is that many hardware and software components must be configured properly to work together, which requires new management systems and hardware architectural elements that didn’t exist until recently.
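
    As a purely hypothetical illustration of composing a server in software, the sketch below models a logical server assembled from disaggregated parts and resized by editing a spec rather than re-cabling hardware. The ComposedServer class and its fields are invented for this example and do not correspond to any particular vendor's API.

    ```python
    # Hypothetical sketch only: a logical server "composed" in software from
    # simplified physical servers and network-attached I/O, then resized on
    # demand. Names and fields are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class ComposedServer:
        name: str
        compute_nodes: int = 1          # simplified one- or two-socket nodes
        dram_gb: int = 64
        nics_10gbe: int = 2             # I/O reached over the network
        remote_disks: list = field(default_factory=list)

        def resize(self, extra_nodes: int) -> None:
            """Grow (or shrink, with a negative value) the logical server."""
            self.compute_nodes += extra_nodes

    analytics = ComposedServer("analytics-tier",
                               compute_nodes=4,
                               remote_disks=["jbod-7/slot-3", "jbod-7/slot-4"])
    analytics.resize(2)                 # scale up in software as workflows demand
    print(analytics)
    ```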

    Virtual Machines and Container Technologies

    Virtual machine (VM) and container technologies enable abstraction and encapsulation of a server’s compute environment as a software entity that can be run as an application on a server platform. These two technologies are becoming the norm among public cloud providers. Multiple such VMs and containers can be deployed on a physical server, enabling consolidation of multiple servers onto a smaller number of physical machines, which improves hardware efficiency and reduces data-center footprint. In this context, a “server” is a VM or container software image and not a hardware entity at all! Such a “server” can be created, saved (or suspended), or transferred to a different hardware server — concepts that are totally alien to the traditional notion of a server, but which create deployment capabilities unavailable with physical servers. Additionally, a VM or container image of a fully configured and tested software stack can be saved and distributed, encapsulating the learning and expertise that went into building it. This enables rapid application deployment, saving manpower costs and time, and it is one of the major value propositions of the VM and container model.
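
    The save-and-redeploy workflow can be sketched with the Docker SDK for Python: run a container, configure it, commit the result as an image, and start a copy from that image. This assumes Docker Engine and the docker Python package are installed; the image and repository names are placeholders, not a recommended stack.

    ```python
    # Sketch of the "server as a software image" idea (assumes Docker Engine
    # and the `docker` Python SDK are available; names are placeholders).
    import docker

    client = docker.from_env()

    # Run a container, configure it, then freeze the result as a reusable image.
    container = client.containers.run("ubuntu:22.04", "sleep 300", detach=True)
    container.exec_run("apt-get update")       # stand-in for real setup steps
    image = container.commit(repository="example/app-server", tag="v1")

    # The committed image encapsulates the configured stack and can now be
    # redeployed anywhere -- the deployment capability described above.
    logs = client.containers.run(image.id, "echo ready", remove=True)
    print(logs.decode())
    container.stop()
    ```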

    Much like a fully configured and tested software stack of a VM/container can be managed as a software image that can be saved and redeployed, the software stack of a scale-out environment — including the configuration of the underlying logical servers — can be abstracted, saved and redeployed. This enables rapid deployment of scale-out applications, helping enterprise end users deal with the complexity of scale-out systems. This is particularly valuable given the fact that the underlying compute platform can be modified based on workflow needs.
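
    A toy illustration of that abstraction, under the assumption of a simple JSON spec (the field names are invented): the logical-server layout and software stack are captured declaratively, saved, and later reloaded for redeployment by whatever orchestration layer is in use.

    ```python
    # Toy example: the scale-out environment, including its logical servers,
    # captured as a declarative spec that can be saved and redeployed.
    # All field names here are invented for illustration.
    import json

    cluster_spec = {
        "name": "nosql-prod",
        "logical_servers": [
            {"role": "data-node", "count": 12, "dram_gb": 128, "disks": 6},
            {"role": "query-node", "count": 4, "dram_gb": 64, "disks": 0},
        ],
        "software": {"os": "linux", "database": "nosql-store", "version": "3.2"},
    }

    with open("cluster_spec.json", "w") as f:
        json.dump(cluster_spec, f, indent=2)    # save the abstracted environment

    with open("cluster_spec.json") as f:
        redeploy = json.load(f)                 # reload later and hand to an orchestrator
    print(redeploy["logical_servers"][0]["count"], "data nodes requested")
    ```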

    Advances in Memory Technology

    But let’s not forget about hardware entirely. Advances in memory technology, such as Phase Change Memory and ReRAM, are enabling a new class of memory with access times similar to the DRAM in present-day servers, yet offering two to 10 times the capacity, cost advantages and persistence. This forthcoming class of memory will create a new layer of the memory hierarchy between DRAM and disk storage, known as Storage Class Memory or Persistent Memory. The high capacity, coupled with the low latency offered by the new memory technology, will enable an entirely new class of applications with performance orders of magnitude higher than that of present-day servers. At the same time, it presents a number of architectural challenges that must be overcome for it to reach full potential and widespread use. These include (a) making applications aware that a region of memory in the system is persistent while a portion of that memory space sits in volatile caches, either on the processors or in DRAM, and (b) dealing with failed servers that hold persistent, and potentially valuable, data. The Linux community is actively working on these issues, and we should see solutions starting to appear within the next 12 to 18 months, if not sooner.
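
    The first of those challenges can be illustrated with an ordinary memory-mapped file standing in for storage-class memory: a store becomes visible in memory immediately, but it is only durable after an explicit flush, which is exactly the ordering applications will have to reason about.

    ```python
    # Persistence-awareness sketch: an ordinary memory-mapped file stands in
    # for storage-class memory. Writes may sit in volatile buffers until they
    # are explicitly flushed to the persistent medium.
    import mmap
    import os

    PATH = "pmem_demo.bin"
    with open(PATH, "wb") as f:
        f.truncate(4096)                    # pretend this is a persistent region

    with open(PATH, "r+b") as f:
        region = mmap.mmap(f.fileno(), 4096)
        region[0:5] = b"hello"              # store: visible in memory right away...
        region.flush()                      # ...but durable only after the flush
        region.close()

    os.remove(PATH)                         # clean up the demo file
    ```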

    Machine Learning and Mobile Applications Linked to Enterprise Databases

    Until now, general-purpose processor architectures such as x86 have been used almost exclusively in the design of servers, with the same general-purpose microprocessors programmed for every application need. However, newer, more demanding applications like machine learning, security functions and high-bandwidth compression perform very inefficiently on general-purpose processors. As a result, newer servers being deployed today are based on a hybrid of general-purpose processors, GPUs, machine-learning processors and crypto processors, offering performance levels orders of magnitude better than the standard general-purpose architecture can deliver. Enterprise data centers will therefore move to an increasingly heterogeneous compute environment, with application-specific servers.
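
    The dispatch pattern behind such a heterogeneous data center can be sketched as follows: route a workload to an application-specific accelerator when one is present and fall back to the general-purpose CPU path otherwise. The accelerator probe and its module name are hypothetical placeholders, not a real library.

    ```python
    # Heterogeneous-compute dispatch sketch. `ml_accelerator` is a made-up
    # placeholder for a vendor library; the CPU path is the fallback.
    def has_accelerator() -> bool:
        try:
            import ml_accelerator           # hypothetical vendor package
            return True
        except ImportError:
            return False

    def infer_cpu(batch):
        """General-purpose path: correct, but far slower at scale."""
        return [sum(features) for features in batch]

    def infer(batch):
        if has_accelerator():
            import ml_accelerator           # hypothetical
            return ml_accelerator.run(batch)   # offload to dedicated silicon
        return infer_cpu(batch)

    print(infer([[1, 2, 3], [4, 5, 6]]))
    ```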

    Additionally, mobile applications linked to enterprise databases that respond in real time drive the market need for this new kind of server. While the end users of traditional enterprise applications were generally limited to an enterprise’s employees and perhaps some partners, these new applications give millions of online customers access to enterprise applications in areas such as health care, finance, travel and social media. They demand orders of magnitude higher transactional throughput and millisecond response times. New scale-out applications, such as NoSQL databases, combined with flash-based storage, are being deployed to address this need.

    Expect to See More Change Ahead

    So as the cursory summation of current trends in server technology above shows, there’s no shortage of change and innovation, of finding new solutions to the new problems that the advances themselves introduce. In the last few years, big data, scale-out computing, advances in high-performance networking technology, virtual machines and container technologies, advances in memory technology, machine learning and other advanced applications linked to enterprise databases have all contributed to the progression of servers. And now that a server isn’t even what we knew as a server just a decade ago, who knows what one will look like 10 years from now?

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

     

    6:38p
    Microsoft Adds New Azure Cloud Training for Partners
    Brought to You by The WHIR

    Microsoft attributes more than 90 percent of its revenue to partners, and is offering new Azure skills training and certification to help support this growth.

    Gavriella Schuster, corporate vice president, Worldwide Partner Group, Microsoft, announced new Azure training opportunities this week to help partners “respond to the surging demand, realize positive returns, and grow their market opportunity.”

    Among the educational offerings are new, free Azure training courses available in a self-paced online learning environment. Six courses are available today, and Microsoft is adding six more in the coming weeks.

    “To help ensure our partners can tap into a skilled and well-equipped workforce, Microsoft is investing in a variety of technical training, tools and resources, including the Microsoft Virtual Academy, the Cloud + Enterprise University Boot Camps, and the Microsoft Professional Program, just to name a few,” Schuster said.

    “These new Azure offers join the lineup today, featuring a modern learning model called a Massively Open Online Course, or MOOC. MOOCs are so much more than online videos and demos – they incorporate videos, labs, graded assessments, office hours, and more.”

    This article originally appeared here at The WHIR.

    8:22p
    Intel’s New Nemesis? Qualcomm Shows Off Its 10 nm Server Processor

    The 10-nanometer server processor market officially opens its gates in the second half of next year, and today we’ve seen the first physical evidence that Intel will not be taking the field alone. In the clearest sign to date that it could, at the very least, meet Intel day-and-date, Qualcomm on Monday staged a demonstration for select participants of its first ARMv8-based server processor: the Centriq 2400, formerly code-named “Amberwing.”

    We’re hearing the first news of this demonstration in a Qualcomm announcement made Wednesday.

    In promoting today’s news to the press, Qualcomm is calling this first chip in the series “purpose-built for performance-oriented datacenter applications.”

    A Qualcomm spokesperson provided Data Center Knowledge on Wednesday with as much information about the demo as the company is willing to share with the public at this time. It was an analytics application involving both Apache Spark and an unbranded Hadoop distribution, along with the Twitter and Google APIs. Java middleware was present, and an unnamed Linux distribution provided the base platform. No virtualization or containerization was involved, and all software used in the demo was open source.

    [Photo: Qualcomm Centriq REP 1U server]

    A third party was contracted to produce the servers used in the demonstration, which did not carry major brand names.  Although the chip design is said to support as many as 48 cores, the quantity of cores used in the demonstration is not being shared publicly.  However, Qualcomm did share with Data Center Knowledge the first pictures of the 1U server platform involved in the demonstration, which did carry the Qualcomm Centriq trademark, along with the designation “QDF2400 REP.”

    [Photo: Qualcomm Centriq REP 1U server, angled view]

    No performance indicators are currently available, and it’s telling that Qualcomm did not choose to use an industry-standard benchmark for this demo. The company may be in a better position to provide more definitive performance data at some point during Q1 2017, we were told.

    The spokesperson confirmed that Qualcomm has begun sampling servers running on Centriq processors with select customers.

    Finally Putting 3D to Work

    Centriq is being fabricated using the FinFET process, which is Qualcomm’s choice for implementing 3D transistors. One way of overcoming the physical limitations of shuttling electrons in close proximity on a processor is by effectively “printing” that processor in layers, creating 3D components. While Intel and Micron Technology have been collaborating on an architecture called 3D XPoint, its use at present is limited to memory modules.

    With FinFET, the planar substrate acts as a lowest-level foundation layer, letting the gate of the transistor wrap around the source and the drain.  This has the immediate benefit of radically reducing leakage, enabling lower voltages.  It’s these low voltages that are absolutely necessary for implementing a lithography process as tiny as 10 nm, and FinFET is one way to make this happen.

    While the concept is already a quarter-century old, it’s taken this long for an evolutionary event — in this case, the impending dead-end for Moore’s Law in its current form — to compel chip manufacturers to give it a shot.

    Qualcomm declined to comment on the details of its FinFET process at this time.

    An Athlete in a Different Sport

    The man driving Qualcomm’s production of Centriq is Anand Chandrasekher, who may be best known for having led Intel down a dead-end path as the head of its ill-fated Ultra-Mobile PC project with Microsoft a decade ago. Qualcomm hired him away from Intel in 2012, at first appointing him Chief Marketing Officer.

    After a series of incidents in which he publicly called Apple’s A7 system-on-a-chip a “gimmick,” and characterized the idea of an 8-core, 64-bit processor as “dumb,” the company took the unusual step of publicly censuring Chandrasekher, reassigning him to a vice presidency.

    The following year, in a public conversation with All Things D’s Ina Fried, Chandrasekher chastised Intel for acting like a prize athlete competing in the wrong sport — in this case, mobile devices, where its ambitions were crushed by Apple.  Let a company built for mobility compete in mobility, he argued at the time.

    Ironically, three years later, he’s the one driving Qualcomm down its own unfamiliar path, touting the benefits of a processor with as many as 48 cores.

    “This migration to a cloud model has. . . created a major power shift in the supply chain,” wrote Chandrasekher, in a Qualcomm company blog post Wednesday, “with mega data centers sourcing server platforms directly from ODMs. . . Most data center software is based on open source software projects.  This enables Qualcomm, as well as other ARM ecosystem partners, to work within these open source projects to implement support for ARMv8.”

    During his Monday presentation, Chandrasekher made the case that the PC had been the driving factor behind the process technology of processors.  But as the PC sales slump became a permanent trend, mobile devices took over that role.  Now mobile phones, he argued, are the principal drivers behind processor fabrication, thus giving Qualcomm an opportunity to take the lead in the industry.

    A Friendly Dragon?

    Last month, Qualcomm’s Falkor processor core (named for the furry, benevolent dragon in the movie “The Neverending Story”) was officially added as a supported processor by the LLVM systems compiler project, as indicated by an update page on GitHub. This means developers building low-level software, such as operating systems and libraries, can use LLVM to compile C and C++ code for ARMv8 chips that use the Falkor core, the Centriq 2400 being the first series to do so.

    A recently uncovered, “leaked” copy of Intel’s processor roadmap appears to indicate that the first series of that company’s 10 nm processors, code-named “Cannonlake,” is due for production very late next year — on the tail end of Q4. If Qualcomm makes good on the front end of its H2 2017 promise for Centriq, it could beat Intel to market by as much as five months.

    8:51p
    HNA Acquisition of Ingram Micro Closes
    By The VAR Guy

    China-based HNA Group’s $6 billion acquisition of distributor Ingram Micro closed this week, marking the completion of one of the most significant M&A deals in the channel this year. The deal was first announced in February, and much of the last year has been spent navigating the tricky regulatory waters in both China and the U.S. At the same time, Ingram announced several changes in its board and senior leadership.

    Ingram Micro, the world’s largest IT distributor, is considered by many industry experts to serve as a microcosm of the channel. Marty Wolf, president and founder of martinwolf M&A Advisory, told The VAR Guy earlier this year that broadline distribution and the channel are both in the midst of a sea change. And because Ingram sits at so many chokepoints within the channel, the company’s moves both reflect and influence overall business IT trends.

    Global Market Opportunities

    So what does the distribution giant’s acquisition by a Chinese conglomerate indicate? For one, the globalization of IT continues to be a growing trend, though if the Trump administration follows through on some of its rhetoric during the campaign, tensions in business relations between the U.S. and China may escalate quickly. In fact, 2016 has seen a marked rise in populist movements throughout the U.S. and Europe, with more calls for protectionist policies.

    Though the deal cleared regulatory hurdles in both countries, it wasn’t without some tense moments. In July, the Shanghai Stock Exchange demanded an explanation as to why Ingram’s net profit margins significantly declined between 2013 and 2015, voicing concerns about the company’s credit rating and probability of future success. The Exchange also had questions surrounding a lawsuit filed against Ingram Micro by one of its shareholders claiming the company “sold itself too cheaply and via an unfair process.”

    Being Nimble is Necessary

    The deal is also a good example of the industry shifting to accommodate on-demand, agile market demands. HNA’s business is built on transportation and shipping. Combined with Ingram Micro’s logistics and supply chain capabilities, the new entity has nearly unparalleled opportunities to streamline its delivery operations. The hope is that this will provide the distributor the ability to increase revenue in high growth regions and provide existing customers access to new markets, said Adam Tan, vice chairman and CEO of HNA Group, in a statement.

    Like the rest of the channel, distribution is adopting new technology at unprecedented rates. Concurrently, the role of distributors is changing as they provide more education, package end-to-end solutions and begin to foster partner-to-partner alliances. The “emerging channel,” which includes non-traditional companies such as independent software vendors, system integrators and shared service providers, requires new routes to market. The ability to be agile will be critical to future success for disties, and the capabilities Ingram will gain from HNA will give it a significant edge.

    Tim Harmon, industry analyst with Forrester, told The VAR Guy earlier this year that many tech vendors that go to market through two-tier distribution want to reduce the number of distributors they use, and that the Ingram Micro deal is the beginning of significant M&A activity coming in the distribution sector. “There are too many distributors in the world,” he said.

    Harmon also said that the distribution sector trails behind other parts of the tech industry in terms of digital communications and standards for efficiency. “In general, it’s incredible how antiquated the digital communications are between tech vendors and distributors, and between distributors and value added resellers,” he said. “There’s way too much business transacted in the tech value chain on the back of email and spreadsheets.” HNA’s money and technical savvy could help Ingram catch up.

    Demand for Diversity

    Alongside the announcement of the deal closing, Ingram announced several changes in leadership. Notably, the company has appointed its first female CFO. As Bill Humes transitions from his current role as CFO to a seat on the board, Gina Mastantuono will take his place. “Gina, who was recently honored by the National Diversity Council as one of the Top 50 Most Powerful Women in Technology, has been an invaluable addition to Ingram Micro since joining in early 2013,” wrote CEO Alain Monie in an internal letter to employees. “We have been preparing Gina for an opportunity like this and I am extremely pleased to have her in this role.”

    Along with the CFO position, Ingram made a change in the role of executive vice president, secretary and general counsel. Larry Boyd, who currently occupies the position, will retire and assume a seat on the board. Taking his place is Augusto Aragone, who originally joined Ingram Micro as regional counsel for Latin America. Aragone has worked in Latin American legal counsel and logistics since 2002.

    The addition of a woman and a Latin America specialist to Ingram’s executive leadership speaks to both the focus on diversity and the increasing globalization of IT. The lack of women in leadership positions in technology has been a hot topic of late, with a slew of statistics coming to light that illustrate the problem. Companies like Microsoft have implemented formal policies designed to increase the number of female employees, and Mastantuono’s appointment is big news for diversity advocates.

    Moreover, Ingram has a significant presence in Latin America. Luis Ferez, General Director of Ingram Micro Mexico, told The VAR Guy that the opportunity for the channel in Mexico, for example, is huge. Rapidly developing infrastructure, a huge millennial population and new technological innovations are combining to open new markets that Ingram is eager to capitalize on, especially in the SMB space.

    “How to work and how to explode the SMB segment,” he said, “I would say that would be the monster, or biggest opportunity.”

    Aragone’s knowledge of Latin American regulations and legal structures will be beneficial as the company grows operations in countries with undeveloped infrastructure and unfamiliar government statutes and regulatory bodies.

    If partners want to try to predict what’s going on in the channel in 2017, they would do well to carefully watch Ingram Micro’s next steps.

    This article originally appeared here at The VAR Guy.

    10:18p
    ViaWest Fiber Access Agreement Strengthens the ‘Hillsboro Ring’

    In a lease agreement announced this morning, hybrid colocation provider ViaWest said that CoastCom, the operator of an important fiber-optic asset coming to be known as the “Hillsboro Ring,” is opening full access to two of ViaWest’s Hillsboro, Oregon facilities already connected to the ring from the other data centers also connected to it. This makes the ring a vital cross-connection point for Oregon’s growing data center market, as well as for a strategically critical submarine cable landing station.

    “The metro fiber allows clients to connect from any data center on the ring in Hillsboro to the ViaWest data centers,” said Tim Parker, ViaWest’s vice president of cloud and network services, in a note to Data Center Knowledge, “without the need of carrier circuits to access the undersea cable carriers.  This improves the overall cost and delivery of service and removes the need for expensive and complex transport hardware.

    “The ring is diverse, and from any point on the ring, remote clients can connect through a simple metro cross connect,” Parker continued, “similar to being within the data center.”

    ViaWest operates two of the six data centers connected by the Hillsboro Ring, the northern edge of which runs parallel to NW Sunset Highway (U.S. Highway 26) northwest of Portland.  The other four are run by Digital Realty, EdgeConnex, Infomart, and Tata Communications.

    It’s this latter connection that’s key: Tata runs the TGN Hillsboro Cable Landing Station, an inland junction to the TGN Transpacific cable system. This is a 6-terabit cable linking Twin Rocks, Oregon, with Emi and Toyohashi in Japan, as well as with southern California as far south as Redondo Beach and with Piti, Guam.

    A report to the Oregon state legislature [PDF] published last month by the Oregon Broadband Advisory Council cites the strategic importance of the Hillsboro Ring to the state’s economy.  The Council noted that the state currently lacks any Tier 1 peering facilities within its own territory.

    The best way to improve Oregon’s stature, enabling its own businesses to compete with California’s in the valuable cross-connection market, the group states, is by gathering its most valuable assets together to make the strongest pitches possible to prospective connectivity providers.  The group specifically cites the Hillsboro Ring, coupled with its connectivity to the cable landing station linking Asia and the Pacific Rim, as key to this strategy.

    “If Oregon can raise its position on this hierarchy, there are potential economic development benefits,” writes OBAC.  “The next two years will be critical in determining Oregon’s future position in the Internet hierarchy, and Oregon has some valuable assets that may be leveraged to improve its position and pursue Tier 1 peering within the state.”

