AnandTech's Journal
 

Tuesday, February 28th, 2017

    Time Event
    7:00a
    The AnandTech Podcast, Episode 41: Let's Talk Server, with Patrick Kennedy

    While in San Francisco for AMD’s Ryzen Tech Day, I had a chance to catch up with a good friend by the name of Patrick Kennedy, who runs the tech news website ServeTheHome. We frequently battle with STH here at AnandTech to be the first to break news on new server platforms, but it is a friendly rivalry where we often end up picking each other’s brains for information or bouncing ideas off of each other. To that end, I managed to convince Patrick to be a guest on our podcast, to talk about the recent issue with Avoton and Rangeley C2000 CPUs as well as the launch of C3000, and to discuss what the upcoming Naples platform can do for AMD.

    Apologies in advance for parts of the recording. We recorded in a high-rise hotel during a freak San Francisco storm, which sent wind whistling through the room's vents, and there was no way to close them. I tried to clean up the audio as best I could, but alas, I am no expert. Experts, please apply to be our podcast editors, and tell us what equipment we should be using.


    Patrick Kennedy (ServeTheHome), Ian Cutress (AnandTech) and David Kanter (Microprocessor Report)
    Photo Taken by Raja Koduri (AMD). David was declared the winner of the 'Bring Your Suit A-Game' contest.

    The AnandTech Podcast #41: Let's Talk Server

    Featuring

    iTunes
    RSS - mp3 / m4a
    Direct Links - mp3 / m4a

    Total Time:  28 minutes 39 seconds

    Outline mm:ss

    00:00 – Introduction
    00:15 – Patrick’s 2000 cores
    01:41 – Atom C2000 Avoton/Rangeley Hardware Bug
    09:22 – Denverton and C3000
    15:17 – Xeon D-1500 Networking CPUs
    18:02 – Opportunities for AMD Naples
    28:39 – FIN

    Related Reading

    The Intel Atom C2000 Series Bug (via ServeTheHome)
    Intel launches Denverton C3000 Series
    AMD Naples Motherboard Analysis


    2:30p
    AMD GDC 2017: Asynchronous Reprojection for VR, Vega Gets a Retail Name, & More

    In what has become something of an annual tradition for AMD's Radeon Technologies Group, their Game Developers Conference Capsaicin & Cream event just wrapped up. Unlike the company’s more outright consumer-facing events such as their major product launches, AMD’s Capsaicin events are focused more on helping the company further their connections with the game development community. This is a group that on the one hand has been banging away on the Graphics Core Next architecture in consoles for a few years now, and on the other hand operates in a world where, in the PC space, NVIDIA is still 75% of the dGPU market even with AMD’s Polaris-powered gains. As a result, despite the sometimes-playful attitude of AMD at these events, they are part of a serious developer outreach effort for the company.

    For this year’s GDC then, AMD had a few announcements in store. It bears repeating that this is a developers’ conference, so most of this is aimed at developers, but even if you’re not making the next Doom it gives everyone a good idea of what AMD’s status is and where they’re going.

    Vive/SteamVR Asynchronous Reprojection Support Coming in March

    On the VR front, the company has announced that they are nearly ready to launch GPU support for the Vive/SteamVR’s asynchronous reprojection feature. Analogous to Oculus’s asynchronous timewarp feature, which was announced just under a year ago, asynchronous reprojection is a means of reprojecting a combination of old frame data and new input data to generate a new frame on the fly if a proper new frame will not be ready by the refresh deadline. The idea behind this feature is that rather than redisplaying an old frame and introducing judder to the user – which can make the experience anything from unpleasant to stomach-turning – instead a warped version of the previous frame is generated, based on the latest input data, so that the world still seems to be updating around the user and matching their head motions. It’s not a true solution to a lack of promptly rendered frames, but it can make VR more bearable if a game can’t quite keep up with the 90Hz refresh rate the headset demands.
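    At its core, this is a frame-pacing decision: either a new frame arrives within the 90Hz refresh window, or the runtime falls back to warping the old one. The sketch below illustrates that decision; the names and structure are illustrative assumptions, not the actual SteamVR implementation.

```python
# Minimal sketch of the asynchronous reprojection decision at a 90Hz refresh.
# Names and structure are illustrative, not the SteamVR runtime's actual code.

REFRESH_HZ = 90
FRAME_BUDGET_MS = 1000.0 / REFRESH_HZ  # ~11.11 ms per refresh

def choose_frame(render_time_ms, last_frame, new_frame, latest_head_pose):
    """Pick what to scan out at the next vsync.

    If the renderer met the deadline, present the new frame. Otherwise,
    warp (reproject) the previous frame to the latest head pose instead
    of redisplaying it unchanged, which would introduce judder.
    """
    if render_time_ms <= FRAME_BUDGET_MS:
        return new_frame
    # Reprojection: reuse the old color data, but re-display it under the
    # newest head orientation so the world still tracks head motion.
    return ("reprojected", last_frame, latest_head_pose)

# A frame that took 14 ms missed the ~11.1 ms budget and gets reprojected.
result = choose_frame(14.0, "frame_N", "frame_N+1", "pose_now")
```

    The key property is that the fallback path is cheap enough to always finish before the refresh deadline, which is why AMD wants to dispatch it via their asynchronous execution capabilities.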

    As to how this relates to AMD, the feature relies rather heavily on the GPU, as the SteamVR runtime and GPU driver need to quickly dispatch and execute the reprojection command to make sure it gets done in time for the next display refresh. In AMD’s case, they not only want to catch this scenario but improve upon it, by using their asynchronous execution capabilities to get it done sooner. Valve launched this feature back in November; however, at the time it was only available on NVIDIA-based video cards, so for AMD Vive owners this will be a welcome addition. AMD in turn will be enabling it in a future release of their Radeon Software, with a target release date of March.

    Forward Rendering Support for Unreal Engine 4

    Staying on the VR front, the company is also showing off recent progress that long-time partner (and everyone’s pal) Epic Games has made on improving VR support in the Unreal Engine. Being demoed at GDC is a new Epic-designed forward rendering path for Unreal Engine 4.15.

    Traditional forward rendering has fallen out of style in recent years as its popular alternative, deferred rendering, allows for cheap screen space effects (think ambient occlusion and the like). The downside to deferred rendering is that it pretty much breaks any form of real anti-aliasing for polygon edges, such as MSAA. This hasn’t been too big of a problem for traditional games, where faux/post-process AA like FXAA can hide the issue enough to please most people. But it’s not good enough for VR; VR needs real, sub-pixel focused AA in order to properly hide jaggies and other aliasing effects on what is perceptually a rather low density display.
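    To see why MSAA and deferred rendering don't mix well, consider the storage cost: with deferred rendering, every G-buffer target would need per-sample storage, not just color and depth. The numbers below are illustrative assumptions (a Vive-class 2160x1200 render target and a hypothetical four-target G-buffer), not any engine's actual layout.

```python
# Back-of-the-envelope memory cost of 4x MSAA under forward vs. deferred
# rendering. Resolution and G-buffer layout are illustrative assumptions.

WIDTH, HEIGHT, SAMPLES = 2160, 1200, 4  # Vive-class combined eye buffer

def mib(num_bytes):
    return num_bytes / (1024 ** 2)

# Forward + MSAA: only color and depth need per-sample storage.
forward = WIDTH * HEIGHT * SAMPLES * (4 + 4)  # RGBA8 color + 32-bit depth

# Deferred + MSAA: every G-buffer target must also be multisampled, and
# lighting would have to run per-sample -- which is why deferred engines
# lean on post-process AA (e.g. FXAA) instead of real MSAA.
GBUFFER_TARGETS = 4  # e.g. albedo, normals, material params (hypothetical)
deferred = WIDTH * HEIGHT * SAMPLES * (GBUFFER_TARGETS * 4 + 4)

print(f"forward 4x MSAA:  {mib(forward):.0f} MiB")
print(f"deferred 4x MSAA: {mib(deferred):.0f} MiB")
```

    The bandwidth cost of reading and writing those multisampled targets scales similarly, which is the other half of why a performance-viable forward path is attractive for VR.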

    By bringing back forward rendering in a performance-viable form, Epic believes they can help solve the AA issue, and in a smarter, smoother manner than hacking MSAA into deferred rendering. Furthermore there is potential for performance gains as well, since forward rendering allows for a few additional tricks, such as disabling rendering features on a per-material basis, which can be used to shore up performance if demands become too great. The payoff, of course, is that Unreal Engine remains one of the most widely used game engines out there, so features released into the core engine become available to the many developers downstream who are using it.

    Partnering with Bethesda: Vulkan Everywhere

    Meanwhile, the company has also announced that they have inked a major technology and marketing partnership deal with publisher Bethesda. Publisher deals in one form or another are rather common in this industry – with both good and bad outcomes for all involved – however what makes the AMD deal notable is the scale. In what AMD is calling a “first of its kind” deal, the companies aren’t inking a partnership over just one or two games, but rather they have formed what AMD is presenting as a long term, deep technology partnership, making this partnership much larger than the usual deals.

    The biggest focus here for the two companies is on Vulkan, Khronos’s low-level graphics API. Vulkan has been out for just over a year now and due to the long development cycle for games is still finding its footing. The most well-known use right now is as an alternative rendering path for Bethesda/id’s Doom. AMD and Bethesda want to get Vulkan in all of Bethesda’s games in order to leverage the many benefits of low-level graphics APIs we’ve been talking about over the past few years. For AMD this not only stands to improve the performance of games on their graphics cards (though it should be noted, not exclusively), but it also helps to spur the adoption of better multi-threaded rendering code. And AMD has an 8-core processor they’re chomping at the bit to start selling in a few days…

    As with most deals of this sort, AMD will be providing Bethesda’s studios with engineers and other resources to help integrate Vulkan support and whatever other features the two companies want to add to the resulting games. Not talked about in much detail at the Capsaicin event was the marketing side of the equation. I’d expect that AMD has a lock on including Bethesda games as part of promotional game bundles, but I’m curious whether there will be anything else to it.

    Vega Gets A Retail Name: It's Vega

    Fourth up, in a brief segment of their GDC presentation, AMD has announced Vega's retail name. It will be (drumroll)... Vega.

    The company is going to brand the video cards based on their Vega architecture GPUs under the Vega name, e.g. Radeon RX Vega. Presumably, similar to the old Fury series, AMD will be differentiating the cards based on suffixes rather than model numbers.

    AMD Vega GPUs to Power LiquidSky Game Streaming Service

    Finally, while AMD isn’t releasing any extensive new details about their forthcoming Vega GPUs at the show (sorry gang), they are announcing that they’ve already landed a deal with a commercial buyer to use these forthcoming GPUs. LiquidSky, a game service provider who is currently building and beta-testing an Internet-based game streaming service, is teaming up with AMD to use their Vega GPUs with their service.

    The industry as a whole is still working to figure out the technology and the economics of gaming-on-demand services, but a commonly identified component is using virtualization and other means to share hardware among multiple users to keep costs down. And while this is generally considered a solved issue for server compute tasks – one needs only to look at the likes of Microsoft Azure and Amazon Web Services – it’s still a work in progress for game streaming, where the inclusion of GPUs and the overall need for consistent performance coupled with real-time responsiveness adds a couple of wrinkles. AMD believes they have a good solution in the form of their GPUs’ Single Root Input/Output Virtualization (SR-IOV) support.

    From what I’ve been told, besides the Vega GPUs being a good fit for LiquidSky’s performance and user-splitting needs, the company is also looking to take advantage of AMD’s Radeon Virtualized Encode, which is a new Vega-specific feature. Unfortunately AMD isn’t offering a ton of detail on this feature, but from what I’ve been able to gather, AMD has implemented an optimized video encoding path for virtualized environments on their GPUs. A game streaming service requires that the contents of several virtual machines be encoded quickly and simultaneously, so this would be the logical next step for AMD’s on-board video encoder (VCE): making it work efficiently with virtualization.

    11:00p
    GeForce GTX 1080 Price Cut to $499; NVIDIA Partners To Begin Selling 10-Series Cards With Faster Memory

    Along with this evening’s news of the GeForce GTX 1080 Ti, NVIDIA has a couple of other product announcements of sorts. First off, starting tomorrow, the GeForce GTX 1080 is getting an official $100 price cut, bringing the card's price down to $499. Since the card launched back in May at $599, prices have held fairly steady around that MSRP, so this cut should have a significant effect on street prices. Though it should be noted that this is the base price for vendor custom cards; the Founder's Edition card was not mentioned. If it maintains its $100 premium, that card would be coming down to $599.

    Update: The new prices for both the GTX 1080 FE and GTX 1070 FE have been published by NVIDIA. The GTX 1080 FE is getting a steeper-than-MSRP cut of $150, bringing it to $549 and reducing the FE premium to $50. Meanwhile the GTX 1070 FE is getting a $50 price cut, moving it to $399.

    As for the second announcement of the evening, NVIDIA has announced that their partners are going to be selling GeForce GTX 1080 and GTX 1060 6GB cards with faster memory. Partners will now have the option to outfit these cards with 11Gbps GDDR5X and 9Gbps GDDR5 respectively, to be sold as factory overclocked cards.

    To understand the change, let’s talk briefly about how board partners work. Depending on the partner, the parts, and the designs, partners can buy anything from just the GPU, to the GPU and RAM, up to a fully assembled board (the Founder’s Edition). With the release of faster GDDR5X and GDDR5 bins, NVIDIA is now giving their board partners an additional option to use these faster memories.

    GeForce 10 Series Memory Clocks
                                    GTX 1080         GTX 1060
    Official Memory Clock           10Gbps GDDR5X    8Gbps GDDR5
    New "Overclock" Memory Clock    11Gbps GDDR5X    9Gbps GDDR5

    To be clear, NVIDIA isn’t releasing a new formal SKU for either card. Nor are the cards' official specifications changing. However, if partners would like, they can now buy higher speed memory from NVIDIA for use in their cards. The resulting products will, in turn, be sold as factory overclocked cards, giving partners more configuration options for their factory overclocked SKUs.

    As factory overclocking has always been done at the partner level, this doesn’t change the nature of the practice. Partners have, can, and will sell cards with factory overclocked GPUs and memory, with or without NVIDIA's help. However with NVIDIA’s official specs already driving the memory clocks so hard, there hasn’t been much headroom left for partners to play with; factory overclocked GTX 1080 cards don’t ship much above 10.2Gbps. So the introduction of faster memory finally opens up greater memory overclocking to the partners.
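    For a sense of what the faster bins buy, effective memory bandwidth is simply the bus width (in bytes) times the per-pin data rate. Using the cards' known bus widths (256-bit for the GTX 1080, 192-bit for the GTX 1060 6GB):

```python
# Effective memory bandwidth gained from the faster memory bins.
# bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)

def bandwidth_gbs(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

# GTX 1080: 256-bit bus, 10 Gbps -> 11 Gbps GDDR5X
gtx1080_old = bandwidth_gbs(256, 10)  # 320 GB/s
gtx1080_new = bandwidth_gbs(256, 11)  # 352 GB/s

# GTX 1060 6GB: 192-bit bus, 8 Gbps -> 9 Gbps GDDR5
gtx1060_old = bandwidth_gbs(192, 8)   # 192 GB/s
gtx1060_new = bandwidth_gbs(192, 9)   # 216 GB/s
```

    In both cases the faster bin is worth a bit over a 10% bandwidth bump, which is far more memory headroom than partners could previously reach on their own.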

    11:01p
    NVIDIA Unveils GeForce GTX 1080 Ti: Available Week of March 5th for $699

    In what has now become a bona fide tradition for NVIDIA, at their GDC event this evening the company announced their next flagship video card, the GeForce GTX 1080 Ti. Something of a poorly kept secret – NVIDIA’s website accidentally spilled the beans last week – the GTX 1080 Ti is NVIDIA’s big Pascal refresh for the year, finally rolling out their most powerful consumer GPU, GP102, into a GeForce video card.

    The Ti series of cards isn’t new for NVIDIA. The company has used the moniker for their higher-performance cards since the GTX 700 series back in 2013. However no two generations have really been alike. For the Pascal generation in particular, NVIDIA has taken the almighty Titan line in a more professional direction, so whereas a Ti card would be a value Titan in past generations – and this is still technically true here – it serves as more of a flagship for the Pascal generation GeForce.

    At any rate, we knew that NVIDIA would release a GP102 card for the GeForce market sooner or later, and at long last it’s here. Based on a not-quite-fully-enabled GP102 GPU (more on this in a second), like its predecessors the GTX 1080 Ti is meant to serve as a mid-generation performance boost for the high-end video card market. In this case NVIDIA is aiming for what they’re calling their greatest performance jump yet for a Ti product – around 35% on average – which would translate into a sizable upgrade for GeForce GTX 980 Ti owners and others for whom GTX 1080 wasn’t the card they were looking for.

    NVIDIA GPU Specification Comparison
                            GTX 1080 Ti    NVIDIA Titan X   GTX 1080        GTX 980 Ti
    CUDA Cores              3584           3584             2560            2816
    Texture Units           224            224              160             176
    ROPs                    88             96               64              96
    Core Clock              ?              1417MHz          1607MHz         1000MHz
    Boost Clock             1582MHz        1531MHz          1733MHz         1075MHz
    TFLOPs (FMA)            11.3 TFLOPs    11 TFLOPs        9 TFLOPs        6.1 TFLOPs
    Memory Clock            11Gbps GDDR5X  10Gbps GDDR5X    10Gbps GDDR5X   7Gbps GDDR5
    Memory Bus Width        352-bit        384-bit          256-bit         384-bit
    VRAM                    11GB           12GB             8GB             6GB
    FP64                    1/32           1/32             1/32            1/32
    FP16 (Native)           1/64           1/64             1/64            N/A
    INT8                    4:1            4:1              N/A             N/A
    TDP                     250W           250W             180W            250W
    GPU                     GP102          GP102            GP104           GM200
    Transistor Count        12B            12B              7.2B            8B
    Die Size                471mm2         471mm2           314mm2          601mm2
    Manufacturing Process   TSMC 16nm      TSMC 16nm        TSMC 16nm       TSMC 28nm
    Launch Date             03/2017        08/02/2016       05/27/2016      06/01/2015
    Launch Price            $699           $1200            MSRP: $599      $649
                                                            Founders: $699

    We’ll start as always with the GPU at the heart of the card, GP102. With NVIDIA’s business now supporting a dedicated compute GPU – the immense GP100 – GP102 doesn’t qualify for the “Big Pascal” moniker like past iterations have. But make no mistake, GP102 is quite a bit larger than the GP104 GPU at the heart of the GTX 1080, and that translates to a lot more hardware for pushing pixels.

    GTX 1080 Ti ships with 28 of GP102’s 30 SMs enabled. For those of you familiar with the not-quite-consumer NVIDIA Titan X (Pascal), this is the same configuration as that card, and in fact there are a lot of similarities between the two. Though for this generation the situation is not as cut and dried as in the past; the GTX 1080 Ti is not strictly a subset of the Titan.

    The big difference on the hardware front is that NVIDIA has stripped GP102 of some of its memory/ROP/L2 capacity, which was fully enabled on the Titan. Of the 96 ROPs we get 88; the last ROP block, its memory controller, and 256KB of L2 cache have been disabled.

    However, what the GTX 1080 Ti lacks in functional units it partially makes up in clockspeeds, for both the core and the memory. While the base clock has not yet been disclosed, the boost clock of the GTX 1080 Ti is 1582MHz, about 50MHz higher than its Titan counterpart. More significantly, the memory clock on the GTX 1080 Ti is 11Gbps, a 10% increase over the 10Gbps clock found on the Titan and the GTX 1080. Combined with the 352-bit memory bus, we’re looking at 484GB/sec of memory bandwidth for the GTX 1080 Ti.

    Taken altogether then, the GTX 1080 Ti offers just over 11.3 TFLOPS of FP32 performance. This puts the expected shader/texture performance of the card 28% ahead of the current GTX 1080, while the ROP throughput advantage stands at 26%, and the memory bandwidth advantage at a much greater 51.2%. Real-world performance will of course be influenced by a blend of these factors, so I’ll be curious to see how much the major jump in memory bandwidth helps given that the ROPs aren’t seeing the same kind of throughput boost. Otherwise, relative to the NVIDIA Titan X, the two cards should end up quite close, trading blows now and then.
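    Those headline percentages can be reproduced directly from the spec table above:

```python
# Reproducing the GTX 1080 Ti vs. GTX 1080 comparisons from the spec table.
# FP32 TFLOPs = CUDA cores * boost clock * 2 (an FMA counts as 2 FLOPs).

def tflops(cores, boost_mhz):
    return cores * boost_mhz / 1e6 * 2

ti_fp32   = tflops(3584, 1582)   # ~11.3 TFLOPs
gtx_fp32  = tflops(2560, 1733)   # ~8.9 TFLOPs

shader_gain = ti_fp32 / gtx_fp32 - 1            # ~28% (shader/texture)
rop_gain    = (88 * 1582) / (64 * 1733) - 1     # ~26% (ROPs * boost clock)
bw_gain     = (352 / 8 * 11) / (256 / 8 * 10) - 1  # 484 vs 320 GB/s, ~51%

print(f"shader: +{shader_gain:.0%}, ROP: +{rop_gain:.0%}, bandwidth: +{bw_gain:.1%}")
```
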

    Speaking of the Titan, on an interesting side note, it doesn’t look like NVIDIA is going to do anything to hurt the compute performance of the GTX 1080 Ti to differentiate it from the Titan, which has proven popular with GPU compute customers. Crucially, this means that the GTX 1080 Ti gets the same 4:1 INT8 performance ratio as the Titan, which is critical to the cards’ high neural network inference performance. As a result the GTX 1080 Ti actually has slightly greater compute performance (on paper) than the Titan. And NVIDIA has been surprisingly candid in admitting that unless compute customers need the last 1GB of VRAM offered by the Titan, they’re likely going to buy the GTX 1080 Ti instead.
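    In absolute terms, that 4:1 ratio works out roughly as follows (a quick sketch, assuming peak INT8 throughput is simply four times the FP32 FLOPs figure):

```python
# What the 4:1 INT8 ratio means in absolute terms. Each FP32 FMA counts
# as 2 FLOPs; the INT8 dot-product path retires 4x as many integer ops
# per clock, so peak INT8 throughput is 4x the peak FP32 figure.

fp32_tflops = 3584 * 1582e6 * 2 / 1e12  # ~11.3 TFLOPs FP32 at boost
int8_tops   = fp32_tflops * 4           # ~45.4 TOPS INT8
```
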

    Speaking of memory, as I mentioned before, the card will be shipping with 11 pieces of 11Gbps GDDR5X. The faster memory clock comes courtesy of a new generation of GDDR5X memory chips from partner Micron, who, after a bit of a rocky start with GDDR5X development, is finally making progress on boosting memory speeds – which definitely has NVIDIA pleased. NVIDIA’s GPUs and boards have been ready for the higher-frequency memory for some time now; the memory is just catching up.

    Moving on, the card’s 250W TDP should not come as a surprise. This has been NVIDIA’s segment TDP of choice for Titan and Ti cards for a while now, and the GTX 1080 Ti isn’t deviating from that.

    However the cooling system has seen a small but important overhaul: the DVI port is gone, opening up the card to be a full-slot blower. In order to offer a DVI port alongside a number of DisplayPort/HDMI ports, NVIDIA has traditionally blocked part of the card’s second slot to house the DVI port. But with the GTX 1080 Ti that port is finally gone, which gives the card the interesting distinction of being the first unobstructed high-end GeForce card since the GTX 580. The end result is that NVIDIA is promising a decent increase in cooling performance relative to the GTX 980 Ti and similar designs. We’ll have to see how NVIDIA has tuned the card to understand the full impact of this change, but it will likely further improve on NVIDIA’s already great acoustics.

    Meanwhile, with the DVI port removed, the GTX 1080 Ti’s display I/O has been pared down to just a mix of HDMI and DisplayPorts: 3x DisplayPort 1.4 and 1x HDMI 2.0. As a consolation to owners who may still be using DVI-based monitors, the company will be including a DisplayPort to DVI adapter with the card (presumably DP to SL-DVI and not DL-DVI), but it’s clear that DVI’s days are numbered over at NVIDIA.

    Moving on, for card designs NVIDIA is once again going to be working with partners to offer a mix of reference and custom designs. The GTX 1080 Ti will initially be offered in a Founder’s Edition design, while partners are also bringing up their own semi and fully custom designs to be released a bit later. Importantly however, unlike the GTX 1080 & GTX 1070, NVIDIA has done away with the Founder’s Edition premium for the GTX 1080 Ti. The MSRP of the card will be the MSRP for both the Founder’s Edition and partners’ custom cards. This makes pricing more consistent, though I’m curious to see how this plays out with partners, as they benefitted from the premium in the form of more attractive pricing for their own cards.

    Finally, speaking of pricing, let’s talk about the launch date and availability. Just in time for Pi Day, NVIDIA will be launching the card on the week of March 5th (Update: an exact date has finally been revealed: Friday, March 10th). As for pricing, long-time price watchers may be surprised. NVIDIA will be releasing the card at $699, the old price of the GTX 1080 Founder's Edition (which itself just got a price cut). This does work out to a bit higher than the GTX 980 Ti - it launched at $649 two years ago - but it's more aggressive than I had been expecting given the GTX 1080's launch price last year.

    In any case, at this time the high-end video card market is NVIDIA’s to command. AMD doesn’t offer anything competitive with the GTX 1070 and above, so the GTX 1080 Ti will stand alone at the top of the consumer video card market. Long-term here AMD isn’t hesitating to note their work on Vega, but that’s a bridge to be crossed only once those cards get here.


