AnandTech's Journal
Wednesday, February 17th, 2016
| Time | Event |
| 5:06a | Samsung Announces New Exynos 7870 Mid-Range 14nm SoC |
Today Samsung announced a new mid-range SoC called the Exynos 7870. The new SKU sports 8x Cortex A53 cores running at up to 1.6GHz. The GPU should be an ARM Mali T830, although we have no information on the core count or frequencies used. The part extends Samsung's ModAP lineup of SoCs with integrated modems, as it includes an integrated UE Category 6 modem delivering up to 300Mbps with FDD-TDD joint carrier aggregation.
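As a quick reminder of where the Category 6 headline figure comes from (standard LTE math on our part, not a Samsung disclosure): a single 20MHz LTE carrier with 2x2 MIMO and 64QAM peaks at roughly 150Mbps, so aggregating two such carriers yields the quoted rate.

$$2\ \text{carriers} \times 150\,\mathrm{Mbps}\ \text{(per 20MHz carrier, 2x2 MIMO, 64QAM)} = 300\,\mathrm{Mbps}$$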
| Upcoming 14nm Mid-Range SoCs |
| SoC | Exynos 7580 | Exynos 7870 | Snapdragon 625 (MSM8953) |
| CPU | 8x A53 @ 1.6GHz | 8x A53 @ 1.6GHz | 4x A53 @ 2.0GHz + 4x A53 @ ? GHz |
| GPU | Mali T720MP3 @ 600MHz | Mali T830MP? | Adreno 506 |
| Encode/Decode | 1080p60 H.264 | 2160p H.264 & HEVC (Decode) | ? |
| Camera/ISP | Dual ISP, 16MP / (8+8) | Dual ISP, 16MP / (8+8) | Dual ISP, 24MP |
| Integrated Modem | Cat. 6, 300Mbps DL / ?Mbps UL, 2x20MHz C.A. | Cat. 6, 300Mbps DL / ?Mbps UL, 2x20MHz C.A. | "X9 LTE" Cat. 7, 300Mbps DL / 150Mbps UL, 2x20MHz C.A. (DL & UL) |
| Mfc. Process | 28nm HKMG | 14nm | 14nm LPP |
More interesting is that the new SoC is manufactured on a 14nm FinFET process, which promises to reduce power consumption by over 30% compared to similar SoCs such as the Exynos 7580. Only a few days ago we were discussing our surprise at the introduction of Qualcomm's Snapdragon 625, which is also manufactured on a 14nm LPP process - a great sign for the manufacturing process given that these mid-range parts are very price-sensitive. Samsung discloses that the Exynos 7870 will be in mass production in the first quarter of 2016, so we're essentially very close to device availability in the coming months.
| 8:00a | Samsung Releases 750 EVO SATA SSD |
After an accidental leak in November that was spotted by our friends at Tom's Hardware, the Samsung 750 EVO has now officially launched worldwide. Since the introduction of their first consumer TLC SSD with the 840, Samsung's consumer/retail SATA SSD lineup has consisted of two product families: the MLC-based Pro drives, and the TLC-based 840 and EVO drives. With the 750 EVO, Samsung is creating a new budget-oriented product line that makes them a participant in the race to the bottom that they had been avoiding by positioning the 850 EVO as a mid-range SSD.
There are several design choices that help minimize the cost of the 750 EVO, aside from the expected choice of TLC over MLC. The MGX controller it borrows from the lower-capacity 850 EVOs is a dual-core version of Samsung's usual triple-core architecture. The 750 EVO will only be available in 120GB and 250GB sizes, so there's no sticker shock from higher capacities, and the PCB only needs to be large enough to accommodate the needs of the 250GB model. Both capacities are listed as having 256MB of DRAM, whereas the 850 EVO 250GB has 512MB of DRAM. But the most significant aspect of the 750 EVO is that it doesn't use 3D NAND.
It may come as a surprise that the 750 EVO marks a return to planar NAND. Samsung has proudly led the industry in transitioning to 3D NAND, but they haven't entirely abandoned the development of planar NAND flash. Earlier this month they made two presentations at ISSCC of their R&D accomplishments: one about a 256Gb TLC built on their 48-layer third generation V-NAND process, and one about a 128Gb MLC built on a 14nm process. The 750 EVO uses a 128Gb 16nm TLC, a larger die based on the same process as the 64Gb MLC we found in the SM951.
The 16nm TLC NAND is the successor to Samsung's 19nm TLC that had a troubled tenure in the 840 EVO. More than a year after launch, 840 EVO owners started reporting degraded read speed when accessing old data that had not been written recently. Samsung acknowledged the issue, then provided a firmware update and Performance Restoration tool less than a month later, but had to issue a second firmware update six months after that. The 750 EVO inherits the results of all the work Samsung did to mitigate the read speed degradation, and there's no reason to expect it to be any more susceptible than the competition using similarly dense planar TLC built on Toshiba's 15nm process or Micron's 16nm process.
| Samsung TLC SATA SSD Comparison |
| Drive | 750 EVO 120GB | 750 EVO 250GB | 850 EVO 120GB | 850 EVO 250GB |
| Controller | MGX | MGX | MGX | MGX |
| NAND | Samsung 16nm TLC | Samsung 16nm TLC | Samsung 32-layer 128Gbit TLC V-NAND | Samsung 32-layer 128Gbit TLC V-NAND |
| DRAM | 256MB | 256MB | 256MB | 512MB |
| Sequential Read | 540MB/s | 540MB/s | 540MB/s | 540MB/s |
| Sequential Write | 520MB/s | 520MB/s | 520MB/s | 520MB/s |
| 4KB Random Read | 94K IOPS | 97K IOPS | 94K IOPS | 97K IOPS |
| 4KB Random Write | 88K IOPS | 88K IOPS | 88K IOPS | 88K IOPS |
| 4KB Random Read QD1 | 10K IOPS | 10K IOPS | 10K IOPS | 10K IOPS |
| 4KB Random Write QD1 | 35K IOPS | 35K IOPS | 40K IOPS | 40K IOPS |
| DevSleep Power | 6mW | 6mW | 2mW | 2mW |
| Slumber Power | 50mW | 50mW | 50mW | 50mW |
| Active Power (Read/Write) | 2.1W / 2.4W (Average) | 2.4W / 2.8W (Average) | Max 3.7W / 4.4W | Max 3.7W / 4.4W |
| Encryption | AES-256, TCG Opal 2.0, IEEE-1667 (eDrive) | AES-256, TCG Opal 2.0, IEEE-1667 (eDrive) | AES-256, TCG Opal 2.0, IEEE-1667 (eDrive) | AES-256, TCG Opal 2.0, IEEE-1667 (eDrive) |
| Endurance | 35TB | 70TB | 75TB | 75TB |
| Warranty | Three years | Three years | Five years | Five years |
The 750 EVO's performance specifications are almost identical to those of the 850 EVOs of the same capacity. The 4kB random write latency is a little bit worse, but read speeds are the same, and any other differences in the write performance of the 16nm flash are masked by the SLC write cache. The reduced warranty period of three years is typical for this product segment, and while the write endurance specifications may look quite low, they're sufficient given the capacity and intended use. It's nice to see that the 750 EVO keeps the encryption capabilities fully enabled, as many budget drives lack hardware encryption support.
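As a rough sanity check of that endurance claim (our arithmetic, not Samsung's): spread over the three-year warranty, the 250GB model's rating works out to

$$\frac{70\,\mathrm{TB}}{3 \times 365\ \mathrm{days}} \approx 64\,\mathrm{GB/day} \approx 0.26\ \text{drive writes per day,}$$

which is well above what a typical client workload generates.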
Given the aforementioned similarities with the 850 EVO, it should come as no surprise that the 750 EVO is in part a replacement. The previously announced and now imminent migration to Samsung's 48-layer V-NAND won't apply to the 120GB 850 EVO, as the 256Gb per die capacity would mean building a drive with only four flash chips. That is undesirable from both a performance standpoint and from a packaging standpoint—Samsung will otherwise have no reason to stack fewer than 8 dies per package.
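To spell out that arithmetic (our estimate, assuming the usual single-digit overprovisioning): a 256Gb die holds 32GB, so a 120GB-class drive with roughly 128GB of raw NAND needs only

$$\frac{\sim 128\,\mathrm{GB\ raw\ NAND}}{32\,\mathrm{GB\ per\ 256Gb\ die}} = 4\ \mathrm{dies,}$$

too few to spread writes across many channels or to fill out even a single 8-die package.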
A few online retailers are listing the 750 EVO already, albeit with limited or no stock. The MSRP of $54.99 for the 120GB model and $74.99 for the 250GB model is about $10 cheaper than what the 850 EVO is currently going for, and any sales below MSRP will make for a very competitive price.
| 8:01a | Revisiting The Google Pixel C - Better, But Not There Yet |
Last month I published my review of the Pixel C. While I thought it was a very interesting tablet, in the end I was unable to give it any sort of recommendation due to the severe software bugs that were present. To me, this was quite surprising, as Google has a fairly good track record when it comes to the software on the Nexus devices. During the review process I reached out to Google to voice my concerns about the issues. What both concerns me and gives me hope for the Pixel C is that Google was readily aware of most of the problems I brought up. It concerns me because I think the appropriate decision would have been to delay its release, but it gives me hope that these issues will be fixed.
During my discussions with Google, I was offered the chance to test a new unit running an unreleased build containing fixes that Google plans to release to the public in the future. Given that the Pixel C has solid hardware let down by buggy software, the chance to see Google's improvements before they are officially released presented a great opportunity to revisit the Pixel C and determine whether Google's upcoming changes can alter my original verdict about the device. Read on to see what Google has changed, and if it's enough to turn things around for the Pixel C.
| 10:00a | Quick Look: Vulkan Performance on The Talos Principle |
Following yesterday’s hard launch of Vulkan 1.0 – drivers, development tools, and the rest of the works – also released alongside Vulkan was the first game with Vulkan rendering support, The Talos Principle. Developer Croteam has a history of supporting multiple rendering paths with their engines, and the 2014 puzzle-em-up is no different, supporting DirectX 9, DirectX 11, and OpenGL depending on which platform it’s being run on. Now with Vulkan’s release Croteam has gone one step further, implementing early Vulkan support in a beta build of the game.
Since this is the first game with any kind of Vulkan support, we wanted to spend a bit of time looking at what Vulkan performance is like under Windows. Games with full support for Vulkan are still going to be some time off, as even with game developer participation in the standardization process it takes time to write a solid, high-efficiency rendering path for these new low-level APIs, but nonetheless it gives us a chance to at least take a peek at the state of Vulkan on day 1.
To be very clear here this is an early look at Vulkan performance; Croteam admits from the get-go that their current implementation is very early, and is not as fast as their now highly tuned DirectX 11 implementation. Furthermore The Talos Principle is not a title that’s designed to exploit the CPU utilization and draw call improvements that are central to Vulkan (unlike say Star Swarm when we first looked at DX12). So with that in mind, it’s important to set reasonable expectations of what’s to come.
On the driver side of matters, both AMD and NVIDIA released Vulkan drivers yesterday. As is common with new API releases, both drivers are developer betas and either lack features or are based on older branches than current consumer drivers; however, the NVIDIA driver has passed Vulkan conformance testing. AMD and NVIDIA will be integrating Vulkan into their release consumer drivers in the future as they improve driver quality and catch up with the latest driver branches.
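For readers wondering just how explicit the new API is compared to OpenGL, here is a minimal, illustrative C sketch (not Croteam's code, and deliberately doing nothing more than probing the system): it creates a Vulkan 1.0 instance and lists the physical devices the installed drivers expose, which is essentially the first thing any Vulkan application or benchmark has to do.

```c
// Build with: cc probe.c -lvulkan
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    // Describe the application to the loader/driver.
    VkApplicationInfo appInfo = {0};
    appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.pApplicationName = "VulkanProbe";
    appInfo.apiVersion = VK_API_VERSION_1_0;  // target Vulkan 1.0

    VkInstanceCreateInfo createInfo = {0};
    createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo = &appInfo;

    VkInstance instance;
    if (vkCreateInstance(&createInfo, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "No Vulkan-capable driver found\n");
        return 1;
    }

    // Enumerate the physical devices exposed by the installed drivers.
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    VkPhysicalDevice devices[8];
    if (count > 8) count = 8;
    vkEnumeratePhysicalDevices(instance, &count, devices);

    for (uint32_t i = 0; i < count; ++i) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        printf("GPU %u: %s (API %u.%u.%u)\n", i, props.deviceName,
               VK_VERSION_MAJOR(props.apiVersion),
               VK_VERSION_MINOR(props.apiVersion),
               VK_VERSION_PATCH(props.apiVersion));
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}
```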
Finally, for our testing we’re using our standard GPU testbed running Windows 8.1, in part to showcase Vulkan on a platform that can’t receive DirectX 12. As the release of AMD’s drivers was unexpected – we had already begun preparing for this article earlier in the week – we don’t have results for very many AMD cards, but as this is a quick look it gets the point across.
| CPU: | Intel Core i7-4960X @ 4.2GHz |
| Motherboard: | ASRock Fatal1ty X79 Professional |
| Power Supply: | Corsair AX1200i |
| Hard Disk: | Samsung SSD 840 EVO (750GB) |
| Memory: | G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26) |
| Case: | NZXT Phantom 630 Windowed Edition |
| Monitor: | Asus PQ321 |
| Video Cards: | GeForce GTX 980 Ti, GeForce GTX 960, GeForce GTX 760, AMD Radeon R9 285, AMD Radeon R9 370 |
| Video Drivers: | NVIDIA Release 361.91 (DX11 & OpenGL), NVIDIA Beta 356.39 (Vulkan), AMD Radeon Software Crimson 16.1.1 Hotfix (DX11 & OpenGL), AMD Radeon Software Beta for Vulkan (Vulkan) |
| OS: | Windows 8.1 Pro |
The Talos Principle: Performance
We’ve gone ahead and run our full collection of cards with Ultra settings at both 1080p and 720p to showcase a typical gaming workload and a lighter workload that is much less likely to be GPU limited. We’ve also run our two most powerful cards, the GeForce GTX 980 Ti and Radeon R9 285, at 1440p to showcase a more strictly GPU-bound scenario.



As expected from Croteam’s comments, at no point here does Vulkan catch up with DirectX 11. This is still an early rendering path and there’s no reason to expect that in time it won’t get up to the speed of DX11 (or even surpass it), but that’s not the case right now.
The real reason we set about running these tests was not to compare early Vulkan to DX11, but rather to compare Vulkan to the API it succeeds, OpenGL. OpenGL itself isn’t going anywhere – it is the DirectX 11 to Vulkan’s DirectX 12, the API that will remain for non-guru programmers who don’t need the power but need easier access – but as OpenGL suffers from many of the same performance bottlenecks as DX11 (plus some whole new ones from a 24-year legacy), there’s clear room for improvement with Vulkan.
To that end the results are more promising. Compared to The Talos Principle’s OpenGL renderer, the Vulkan renderer is not all that different in performance in clearly GPU-bound scenarios. But once we start looking at CPU-bound scenarios, even in a somewhat lightweight game like The Talos Principle, Vulkan pulls ahead. This is especially evident on the GTX 980 Ti at 1080p, and across a few different cards at 720p. This offers our first sign that Vulkan will indeed be capable of bringing its desired CPU performance benefits to games, perhaps even in games that aren’t explicitly pushing the draw call limits of a system.
These results also highlight some performance issues. The AMD cards – both of which have 2GB of VRAM – see some unusual performance regressions. Based on our experience with DX12 and Mantle, it seems likely that at these settings The Talos Principle is approaching full VRAM utilization, leading to the occasional drop in performance. Just as with DX12, developers have near-full control of the GPU, and will need to manage VRAM usage carefully.
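For the curious, this is roughly what that responsibility looks like from the application side. The following is a hedged, illustrative C sketch (not from the game) that queries how much device-local memory a GPU exposes so an engine can budget against it; it assumes a VkPhysicalDevice handle obtained as in the earlier initialization sketch.

```c
#include <stdio.h>
#include <vulkan/vulkan.h>

// Print the device-local (VRAM) heap sizes a GPU exposes. Under Vulkan the
// application sees these heaps directly and must keep its allocations within
// them; there is no driver-managed residency safety net as in DX11/OpenGL.
static void print_vram_budget(VkPhysicalDevice gpu)
{
    VkPhysicalDeviceMemoryProperties mem;
    vkGetPhysicalDeviceMemoryProperties(gpu, &mem);

    for (uint32_t i = 0; i < mem.memoryHeapCount; ++i) {
        if (mem.memoryHeaps[i].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) {
            printf("Device-local heap %u: %llu MB\n", i,
                   (unsigned long long)(mem.memoryHeaps[i].size / (1024 * 1024)));
        }
    }
}
```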

Radeon R9 285 Running via Vulkan
As for image quality, the rendering path that Croteam has implemented appears to be every bit as good as their existing paths. Both AMD and NVIDIA cards exhibited great image quality, comparable to the baseline DX11 rendering path. Admittedly we weren’t expecting anything different, but it means there are no image-quality-affecting bugs that we’ve picked up on in our testing.
That said, these Vulkan drivers are classified as betas by both AMD and NVIDIA, and that is not a misnomer. We encountered issues with the drivers from both parties, particularly in NVIDIA’s case, where we couldn’t successfully run a Talos benchmarking session twice without rebooting; otherwise the game would crash. Coupled with the known limitations of these drivers, it goes without saying that they are really only for testing and development purposes, and that AMD and NVIDIA will need to knock out some more bugs before integrating Vulkan support into their release drivers.
Overall with this being the third low-level API release in the past two years (and a rebirth of sorts for Mantle), for our regular readers there aren’t any great surprises to be found with Vulkan as implemented on The Talos Principle. Still, the results do show promise. Khronos has set about creating a new cross-platform low-level API, and this early preview of Vulkan shows that they have achieved their basic goals. Now it will be a matter of seeing what developers can do with the API with more developer time and in conjunction with further driver improvements from AMD, NVIDIA, and the other GPU vendors.
| 12:00p | AMD to Bundle New Hitman Game with Select FX CPUs and Radeon 390 Series Video Cards |
AMD announced this week that it will bundle the full version of the new Hitman game with its Radeon R9 390-series graphics cards as well as its FX 6000-, 8000- and 9000-series microprocessors. The campaign will run in the U.S. as well as in the EMEA region.
To get a free copy of the full version of Hitman, you will need to buy an AMD Radeon R9 390/390X graphics card or an AMD FX 6000-, 8000-, or 9000-series microprocessor from a participating retailer between February 16, 2016 and April 30, 2016, or while supplies last. Vouchers can be redeemed until June 30, 2016. The new game will be officially released on March 11, 2016, but early buyers will be able to try the beta version of the game on February 19 – 22.
As is frequently the case with these bundled games, Hitman will be making use of AMD technologies, in this case implementing asynchronous shaders that take advantage of the ACEs (asynchronous compute engines) found in AMD’s GCN graphics processors in a bid to optimize performance under heavy loads. The footage and screenshots from Hitman that have been shown so far look rather impressive, and the game will clearly take advantage of modern GPUs. AMD claims that the title has been developed with 4K displays in mind, which is good news for those who plan to play the title on their current high-end setups.
| AMD Current Game Bundles |
| Product | Bundle |
| Radeon R9 Fury/Nano Series Video Cards | None |
| Radeon R9 390 Series Video Cards | Hitman |
| Radeon R9 380 Series Video Cards | None |
| AMD FX 6000/8000/9000 Series CPUs | Hitman |
| AMD FX 4000 Series CPUs | None |
| AMD A Series APUs | None |
Terms and conditions of the new Gaming Evolved campaign in EMEA are located here, whereas details about the campaign in the U.S. are available at Newegg. If you are interested in getting Hitman for free, make sure that you buy an appropriate product from a participating retailer (in EMEA).
Images by Square Enix/IO Interactive and AMD.
| 5:15p | NVIDIA Announces Tom Clancy’s The Division Game Bundle for GeForce Video Cards |
NVIDIA has announced that its partners will bundle a free copy of Tom Clancy’s The Division with select high-end GeForce GTX graphics cards starting this week and for about a month. The campaign will run in the U.S., Canada, most European countries, and a number of Asian countries.
To grab a free copy of Tom Clancy’s The Division, you will need to buy a GeForce GTX 970/980/980 Ti graphics card, or a laptop featuring a GeForce GTX 970M/980M/980 graphics adapter made by an authorized producer, from a participating retailer. The campaign starts on February 17, 2016 and ends on March 21, 2016. The promotional code expires on April 30, 2016, or while supplies last. The offer is valid worldwide, excluding China, but since not all retailers participate, you may be tempted to buy a GeForce GTX graphics card in a different country (which in some cases causes trouble with activation). Early buyers will get a chance to try the open beta of The Division on February 19 – 21, 2016.
| NVIDIA Current Game Bundles |
| Video Card | Bundle |
| GeForce GTX Titan X | None |
| GeForce GTX 980Ti/980/970 | Tom Clancy’s The Division |
| GeForce GTX 960/950 | None |
| GeForce GTX 980 For Notebooks | Tom Clancy’s The Division |
| GeForce GTX 980M/970M | Tom Clancy’s The Division |
| GeForce GTX 965M And Below | None |
Tom Clancy's The Division is an online open-world third-person shooter with survival elements, set to be released on March 8, 2016. Gamers will have to fight their way through post-pandemic New York as part of The Division, a team of tactical operatives, in a bid to find the source of the virus and restore order. The game features rather detailed destructible environments as well as a dynamic, time-based weather system. The official screenshots and videos released by Ubisoft look rather impressive, and the game itself requires a powerful PC to play. Ubisoft recommends using an AMD Radeon R9 290 or an NVIDIA GeForce GTX 970 graphics card, hence you will need something better if you want to play with all the effects enabled at 4K resolution.
The Division relies on NVIDIA’s proprietary GameWorks libraries to produce effects like HBAO+ (improved horizon-based ambient occlusion) and PCSS (percentage closer soft shadows) for additional eye candy. Hence, if you do not have a Maxwell-based graphics card, but want to experience the game in its full glory, NVIDIA's offer to upgrade and get a free copy may make a lot of sense for you.

The list of participating retailers, board partners and PC makers in the U.S. is located here. Users in Asia and Europe should check this website for details about participating retailers. Terms and conditions are located here.
| 6:40p | NVIDIA Announces Q4 FY 2016 Results: Record Quarter And Record Year |
This afternoon, NVIDIA released their financial results for the fourth quarter of their 2016 fiscal year, and the company had not only a record quarter, but also a record year. Revenue for the quarter was $1.401 billion, up 7% from last quarter and 12% from the same point last year. Margins were strong, with a 60 basis point gain to 56.5%. Operating income for Q4 was $252 million and net income was $207 million, up 9% and 7% year-over-year respectively. This resulted in earnings per share of $0.35, which despite all the good news in the previous numbers, was actually flat year-over-year.
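(For reference, the quoted growth rates follow directly from the revenue figures in the table below:

$$\frac{1401-1305}{1305} \approx +7.4\% \quad\text{Q/Q}, \qquad \frac{1401-1251}{1251} \approx +12.0\% \quad\text{Y/Y.})$$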
| NVIDIA Q4 2016 Financial Results (GAAP) |
| | Q4'2016 | Q3'2016 | Q4'2015 | Q/Q | Y/Y |
| Revenue (in millions USD) | $1401 | $1305 | $1251 | +7% | +12% |
| Gross Margin | 56.5% | 56.3% | 55.9% | +0.2% | +0.6% |
| Operating Income (in millions USD) | $252 | $245 | $231 | +3% | +9% |
| Net Income | $207 | $246 | $193 | -16% | +7% |
| EPS | $0.35 | $0.44 | $0.35 | -20% | flat |
NVIDIA also released Non-GAAP results which exclude “stock-based compensation, product warranty charge, acquisition-related costs, restructuring and other charges, gains and losses from non-affiliated investments, interest expense related to amortization of debt discount, and the associated tax impact of these items, where applicable”. On a Non-GAAP basis compared to Q4 2015, gross margin was up 100 basis points to 57.2%, operating income was up 26% to $356 million, net income was up 23% to $297 million, and earnings per share were up 21% to $0.52.
| NVIDIA Q4 2016 Financial Results (Non-GAAP) |
| | Q4'2016 | Q3'2016 | Q4'2015 | Q/Q | Y/Y |
| Revenue (in millions USD) | $1401 | $1305 | $1251 | +7% | +12% |
| Gross Margin | 57.2% | 56.5% | 56.2% | +0.7% | +1.0% |
| Operating Income (in millions USD) | $356 | $308 | $283 | +16% | +26% |
| Net Income | $297 | $255 | $241 | +16% | +23% |
| EPS | $0.52 | $0.46 | $0.43 | +13% | +21% |
For the full fiscal year 2016, NVIDIA had revenues of $5.010 billion, up 7% from FY 2015, with gross margin up 60 basis points to 56.1%. Operating income was down 2% to $747 million, with net income down 3% to $614 million. Earnings per share for FY 2016 were down 4% to $1.08. On a Non-GAAP basis, gross margin was up 100 basis points to 56.8%, operating income was up 18% to $1.125 billion, net income was up 16% to $929 million, and earnings per share were up 18% to $1.67.
Much of NVIDIA's growth was due to their recent successes with GPUs for gaming, professional visualization, and the data center. They've also seen tremendous growth in Tegra in the automotive space, but at the same time very poor results for Tegra in the mobile arena.
| NVIDIA Quarterly Revenue Comparison (GAAP) |
| In millions | Q4'2016 | Q3'2016 | Q4'2015 | Q/Q | Y/Y |
| GPU | $1178 | $1110 | $1073 | +6% | +10% |
| Tegra Processor | $157 | $129 | $112 | +22% | +40% |
| Other | $66 | $66 | $66 | flat | flat |
GPU sales are still by far the largest part of NVIDIA's business, with GPU revenues of $1.18 billion, up 10% from a year ago. GeForce branded GPUs grew 21% in the same time frame. Quadro branded cards were up 7% year-over-year to $203 million. Datacenter GPUs, which include Tesla and GRID, had revenues of $97 million, up 10% from Q4 2015.
Tegra processor revenue was $157 million for the quarter, which is up 40% from Q4 2015. The bulk of that is a massive 68% increase in revenue of Tegra for automotive, which is now $93 million for the quarter.
NVIDIA’s “Other” category is where they book the $66 million per quarter they get from Intel, which licenses NVIDIA technology under a settlement agreement the two parties made in January 2011. Intel would have made its final payment for this settlement in January 2016 though, so we’ll see how that changes NVIDIA’s results going forward.

For the quarter, NVIDIA paid $62 million in cash dividends and repurchased 4.3 million shares, and for the full fiscal year 2016, NVIDIA returned $800 million to investors through these two mechanisms. For FY 2017, NVIDIA intends to return $1.0 billion. The next dividend payout will be $0.115 per share on March 23, to all shareholders of record as of March 2.
Looking ahead to Q1 FY 2017, NVIDIA is expecting revenue of $1.26 billion, plus or minus 2%, with margins of 57.2% for GAAP and 57.5% for Non-GAAP, plus or minus 50 basis points.
NVIDIA’s Fiscal Year 2016 was certainly very strong, although the large warranty charge for the SHIELD tablets, along with restructuring fees, hurt their GAAP numbers somewhat. I’m very curious to see how they do now that the Intel payout is complete, although it may still be averaged out on the books for FY 2017.
Source: NVIDIA Investor Relations
| 7:01p | ARM Announces New Cortex-R8 Real-Time Processor |
ARM’s Cortex-R range of processor IP is something we haven’t talked about too much in the past, yet it’s a crucial part of ARM’s business and is integrated into a lot of devices. ARM divides its CPU offerings into three categories. At the high-performance end we find the Cortex-A profile of application processors, which most of us should be familiar with, as cores such as the Cortex A53 and Cortex A72 are ubiquitous in today’s smartphone media coverage and get the most attention. The low end should also be pretty familiar, as the Cortex-M microcontrollers are found in virtually any conceivable gadget out there and have also seen increased exposure in the mobile segment due to their usage as the brains inside both discrete and SoC-integrated sensor hubs.

The Cortex-R profile of real-time processors, on the other hand, has seen relatively little coverage because its use-cases are more specialized. Today, with the announcement of the new Cortex-R8, we’ll be covering one well-established segment as well as a growing application of ARM's real-time processors.

Cortex-R processors are well established in storage devices such as disk drive and SSD controllers, as such systems require response times in the microsecond range. These systems use increasingly complex algorithms for things such as error correction and control software, and SSDs in particular require ever higher performance controllers as data rates increase with each generation. ARM discloses that currently all major hard-drive and SSD manufacturers use controllers based on Cortex-R processors, which is, to say the least, an interesting market position.

Today’s announcement of the Cortex-R8 was particularly centred on the use of R-profile processors in the modem space, with a focus on the increasing performance requirements of future cellular standards such as LTE Advanced Pro and 5G. Here the processors are used for scheduling the data flows through the signal processing for reception and transmission, as well as running the protocol stack’s software tasks. These are so-called hard real-time tasks in which the processor must respond to events in the communication channel with microsecond granularity. New standards such as 5G will vastly increase transmission speeds into the gigabit range with complex carrier frequency and MIMO configurations, which will also increase the feature-set requirements and workloads for the modem processors.
ARM also discloses that modem designers increasingly want Layer-1 scheduling activities to be handled by software on the processor to provide more flexibility across different standards, something which requires a lot of investment and R&D to do in hardware.

The Cortex-R8 is similar in architecture to the R7 - we still see an 11-stage OoO (out-of-order) execution pipeline and clocks of up to 1.5GHz on a 28nm HPM process. The differences are found in the configuration options: the new core can now be deployed as a quad-core, versus the dual-core limit of the R7, doubling the theoretical processing power over its predecessor. The cores can also be run asymmetrically and each have their own power plane, meaning they can be turned off for power savings and increased battery life. While concrete performance figures were a bit scarce, ARM talks about an example quad-core configuration on a 28nm or 16nm FinFET process being able to reach up to 15000 Dhrystone MIPS at a 1.5GHz frequency.
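Working back from ARM's own example numbers, that quad-core figure implies

$$\frac{15000\ \mathrm{DMIPS}}{4\ \mathrm{cores} \times 1500\ \mathrm{MHz}} = 2.5\ \mathrm{DMIPS/MHz\ per\ core.}$$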
Cortex-R processors can employ a low-latency on-CPU memory called Tightly-Coupled Memory (TCM), which acts as a predictable, guaranteed memory subsystem that can service interrupts with code and data as quickly as possible, avoiding the longer and less deterministic latencies of fetching from the cache hierarchy. The Cortex-R8 significantly increases the size of the TCM, now providing up to 2MB (1MB instruction, 1MB data, up from 128KB instruction/data on the R7) per core, for a maximum of 8MB in a quad-core configuration.
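In practice, placing code and data into TCM is typically done through linker sections. The following is a purely illustrative C sketch: the section names (".itcm"/".dtcm") and the handler are hypothetical placeholders, and the actual names depend entirely on the board's linker script and vendor toolchain.

```c
#include <stdint.h>

/* Hypothetical example: keep a hot interrupt handler in instruction TCM so
 * instruction fetches never go through the cache hierarchy and response
 * latency stays deterministic. Section placement is done by the linker script. */
__attribute__((section(".itcm"), used))
void modem_symbol_irq_handler(void)
{
    /* ...service the hard real-time event here... */
}

/* Frequently touched control data kept in data TCM for fixed, predictable
 * access latency instead of a potentially variable cache/DRAM access. */
__attribute__((section(".dtcm"), used))
static volatile uint32_t sched_queue[256];
```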
ARM disclosed one of the licensees being Huawei:
“The ARM architecture is the trusted standard for real-time high-performance processing in modems,” said Daniel Diao, deputy general manager, Turing Processor Business Unit, Huawei. “As a leader in cellular technology, Huawei is already working on 5G solutions and we welcome the significant performance uplift the Cortex-R8 will deliver. We expect it to be widely deployed in any device where low latency and high performance communication is a critical success factor.”
Among other licensees we'll likely also see vendors such as Samsung, who currently deploy Cortex-R cores inside their modems, such as the Shannon 333 found in last year's Galaxy devices.
| 10:00p | Microsoft Patches Surface Book And Surface Pro 4 Sleep Issue |
When I reviewed the Surface Book, there were a lot of bugs in the software. Some of them have been pretty minor, and Microsoft has been updating the firmware and drivers on it since before it was launched. Most of the issues have been sorted out, but there was still one issue which seemed to elude the teams at Intel and Microsoft. The Surface Book would not always sleep, or, I should say, when it went to sleep it would actually use much more energy than when it was being used. Often I would close the lid on the Surface Book and after a minute or two I’d hear the fans kick in, and the device would get very hot to the touch. This was an even bigger issue if you closed it and put it in a bag, since the bag would just trap all that heat.
This bug was so severe that I could not recommend the Surface Book at the time of the review. Apparently this bug can also strike the Surface Pro 4, but the two review units that I had never suffered from the same sleep bug issue as the Surface Book.
Today there is good news, or at least the chance of good news. Microsoft has released a firmware update which directly tackles the sleep issue. Normally firmware updates get released with little fanfare, but the head of Microsoft’s hardware division, Panos Panay, has written a blog post letting everyone know that there is a firmware update. It’s not too often that the head of a division steps up and writes release notes, so clearly he felt that this issue was big enough to warrant a statement, and to be clear, it is that big of an issue.
Whether or not this fixes the issue remains to be seen; I’m updating the Surface Book at the moment and will report back in time, but hopefully this solves it. As I said in the review, the Surface Book is solid hardware that was let down by software, and assuming this update does fix the major issue with the latest Surface models, it will be much easier to recommend them to others.
Here is everything listed in the release notes for today’s update:
- System Hardware Update – 2/17/2016
- Microsoft driver update for Surface UEFI
- Microsoft driver update for Surface Management Engine
- Microsoft driver update for Surface System Aggregator Firmware
- Surface Management Engine update (v11.0.0.1202) improves system stability.
- Surface System Aggregator Firmware update (v88.1081.257.0) improves accuracy of battery status and battery life during sleep.
- Surface UEFI update (v104.1085.768.0) improves battery life and improves stability during power state transition changes into and out of sleep states.
- Intel® Precise Touch Device driver update (v1.1.0.226) improves stability during power state transition changes into and out of sleep states.
- Intel® HD Graphics 520 driver update (v20.19.15.4364) improves display stability, system stability and battery life.
- Intel® Display Audio driver update (v8.20.0.745) supports compatibility with the updated graphics driver.
- Realtek High Definition Audio(SST) driver update (v6.0.1.7734) improves system stability.
- Intel® Smart Sound Technology (Intel® SST) Audio Controller driver update (v8.20.0.877) improves system stability.
- Intel® Smart Sound Technology (Intel® SST) OED driver update (v8.20.0.877) improves system stability.
- Intel® Management Engine Interface driver update (v11.0.0.1176) improves system stability.
- Intel® Serial IO GPIO Host Controller driver update (v30.63.1603.5) improves auto rotation reliability when tablet mode is turned off.
- Intel® Serial IO I2C Host Controller driver update (v30.63.1603.5) improves auto rotation reliability when tablet mode is turned off.
- Surface Book Base Firmware driver update (v1.2.0.0) improves battery life during sleep.
If anyone owns the Surface Book or Surface Pro 4, I would highly recommend installing this. According to Microsoft the update is being rolled out right now, so if you don't see it in your region just check back soon.
Source: Microsoft Devices Blog