AnandTech's Journal
Monday, January 30th, 2017
9:00a
Basemark Releases VRScore, a VR & VR Headset Benchmark Suite for Windows 
Back in 2015 as the development of the first generation of modern VR headsets was coming to a close, benchmark developer Basemark announced that they would be applying their talents to the field of VR. The benchmark, now named VRScore, was to be developed in conjunction with Crytek and would serve as a multi-faceted VR test suite for VR headsets and computers, covering everything from rendering performance to display latency. At the time Basemark was expecting to launch the benchmark in mid-to-late 2016, and while development has taken a bit longer than expected, they are finally releasing version 1.0 of the benchmark this morning.
The final product – or rather the first iteration thereof – is designed to be a high-end AAA-quality benchmark, an unsurprising choice given the use of CryEngine V and the need for benchmarks to be forward-looking. CryEngine V of course introduces VR support to the engine, and it adds DirectX 12 support as well. For VRScore, Basemark has played things a little more conservatively, designing the benchmark and its “Sky Harbor” scene around DX12, but also including a DX11 mode for pre-Windows 10 OSes and for headsets that don’t yet work with the DX12 mode (which at this time is everything except the Oculus Rift). VRScore has no specific minimum recommended GPU – and Basemark isn’t looking to test against Oculus/Valve’s GTX 1060/RX 480 class GPU performance recommendation – but to sustain 90fps you’ll generally need a GTX 1080 or faster.

Also notable here is that Basemark is looking to support as many PC VR headsets as possible. So this includes not only the Oculus Rift and HTC Vive SDKs, but also the OpenVR and OSVR SDKs. This is an important distinction not only because of the wider compatibility afforded by supporting more SDKs, but because it underscores just how important the SDK is in VR performance. The VR headset SDKs dictate the resolution used – following the best practices for each headset – along of course with controlling how synchronization and features like timewarp/spacewarp work. Consequently at this stage of development, benchmarking an active VR headset is as much an SDK benchmark as it is a GPU or CPU benchmark.

Overall, VRScore is broken down into three different types of tests: a 4K “baseline” test run without the headset that is meant to be a more typical system benchmark, a second headset-off test run at headset resolutions and configurations, and finally a headset-on test which runs as a proper VR workload. The latter two tests are particularly interesting, as comparing them allows Basemark to actually show the performance cost of VR – how much performance is lost to VR SDK features such as lens distortion, 3D audio, and the various synchronization steps. As it turns out, the performance hit is not insignificant.
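The value of the paired headset-off/headset-on runs is that the "cost of VR" falls out of simple arithmetic. A minimal sketch of that comparison (the numbers here are illustrative, not actual VRScore output):

```python
def vr_overhead(fps_headset_off: float, fps_headset_on: float) -> float:
    """Return the percentage of performance lost to VR SDK overhead."""
    return (1 - fps_headset_on / fps_headset_off) * 100

# e.g. a card averaging 76 fps headset-off that drops to 65 fps headset-on,
# roughly in line with the mid-60s RX 480 result described below
print(f"VR overhead: {vr_overhead(76.0, 65.0):.1f}%")  # ~14.5%
```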
| CPU | Intel Core i7-4960X @ 4.2GHz |
| Motherboard | ASRock Fatal1ty X79 Professional |
| Power Supply | Corsair AX1200i |
| Hard Disk | Samsung SSD 840 EVO (750GB) |
| Memory | G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26) |
| Case | NZXT Phantom 630 Windowed Edition |
| Monitor | Asus PQ321 |
| Video Cards | AMD Radeon RX 480, NVIDIA GeForce GTX 1080 Founders Edition, NVIDIA GeForce GTX 1060 Founders Edition |
| Video Drivers | NVIDIA Release 378.49, AMD Radeon Software Crimson 17.1.1 |
| OS | Windows 10 Pro |

Starting things off with AMD’s Radeon RX 480 and an Oculus Rift attached, the average framerate in the full test drops 14% when the Rift is enabled. Unsurprisingly, even without the headset the RX 480 was already averaging less than 90fps, and the additional load brings the average framerate down to the mid-60s. All told though, the RX 480 doesn’t fare too poorly here; in the downscaled resolution subtest, which renders the scene at 80% of the VR SDK’s recommended resolution (~2131x1268 for a Rift, almost precisely its native resolution), the card averages 89fps with an HMD. This goes to show just how expensive supersampling is: although these relatively low DPI PenTile screens greatly benefit from it, the performance cost is significant.
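The reason the 80% subtest recovers so much performance is that pixel work scales with the square of the linear resolution scale. A quick sketch, using the approximate Rift figures quoted above:

```python
# Pixel work scales with the square of the linear resolution scale, which
# is why an 80% downscale helps as much as it does. The resolutions below
# are the approximate Rift figures quoted above.
rec_w, rec_h = 2664, 1586                          # ~SDK-recommended target
sub_w, sub_h = int(rec_w * 0.8), int(rec_h * 0.8)  # the 80% subtest

ratio = (sub_w * sub_h) / (rec_w * rec_h)
print(f"{sub_w}x{sub_h}: {ratio:.0%} of the pixels")  # 2131x1268: 64%
```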
Meanwhile in the NVIDIA camp, the story with the GTX 1060 is pretty much the same, to the point that the RX 480 and GTX 1060 are within a couple of FPS of each other. The GTX 1060 ends up paying a 17% performance penalty here with the Rift enabled, and similarly quickly perks up once Basemark pulls back on the resolution some. As for the GTX 1080, its high performance means the card has little trouble averaging 89fps with the benchmark’s default test, hitting the 90Hz refresh rate cap much of the time. Put another way, with a 131fps frame rate with the headset turned off, the GTX 1080 has more than enough performance to pay the price of VR overhead.

Moving on, as I mentioned before, the other major test in VRScore is the VRTrek Suite, Basemark’s VR headset evaluation tool. Whereas the main benchmark measures system performance, the VRTrek Suite measures the headset itself, specifically the application-to-photon latency, dropped frames, and duplicated frames.
To measure this, the VRTrek Suite uses the VRTrek tool, a curious device composed of a pair of phototransistors that plugs into a microphone jack. Intended to simulate the human eye, the VRTrek device is what gives the software feedback on the VR headset’s performance. Phototransistors of course aren’t cameras, so they can’t see/report a full image, but they are sensitive enough to pick up the cues Basemark puts in the rendered image for headset testing.

Measuring input latency and dropped frames goes as far back as the original proposal for VRScore, but it’s interesting that Basemark has opted to follow through with it. Competitor Futuremark developed a similar test during development of their VRMark benchmark, but ultimately scaled it back to industry use only, saying that “measuring the latency of popular headsets does not provide meaningful insight into the actual VR experience”, and focused instead on subjective/experiential testing, especially as modern VR SDKs employ a number of tricks to reduce perceived latency. As a result, the VRTrek is a fairly unique device, since Basemark intends to make it accessible (though I suspect not cheap) outside of the usual industry circles.
I haven’t had the chance to use the VRTrek on headsets from multiple vendors yet, but in testing it against the Oculus Rift, it does what it sets out to do. At the moment I’m not sure how valuable that’s going to be, but as Microsoft is trying to encourage cheap(er) VR with their $300 headset initiative, this will likely prove useful in quantifying just how low the latency is of these forthcoming headsets.
Wrapping things up, while today’s announcement marks the formal launch of VRScore, in practice Basemark is dividing the launch into two parts. Launching immediately are the full-featured corporate and media versions. Launching a bit farther down the line will be the consumer versions, both free and professional. As with some of Basemark’s other benchmarks, they are offering a free version with a single test/report (the system score) and the VR experience mode; reporting of scores for the sub-feature tests and custom configurations will require a paid version.
11:59a
Mushkin Announces Helix SSDs: 2.5 GB/s, 3D MLC NAND, SM2260, 2 TB Capacity 
Mushkin introduced its new lineup of high-performance SSDs at CES. The Helix drives use 3D MLC NAND flash memory as well as Silicon Motion’s SM2260 controller. The M.2 SSDs, aimed at premium desktops and laptops, promise high performance along with the improved endurance of 3D MLC NAND.
The Mushkin Helix family of SSDs will offer various models targeting different performance and price points, including a 250 GB version for the entry-level segment as well as a 2 TB SKU for high-end PCs. The drives come in an M.2-2280 form-factor with a PCIe 3.0 x4 interface and are based on Silicon Motion’s SM2260 controller (which sports two ARM Cortex cores, eight NAND flash channels, LDPC ECC technology, 256-bit AES support and so on) as well as 3D MLC NAND flash from an undisclosed manufacturer.
Mushkin rates Helix’s sequential read performance at up to 2.5 GB/s and its write performance at up to 1.1 GB/s when pseudo-SLC caching is used, which is exactly what the SM2260 controller is listed as offering. As for random performance, things start to get interesting. Mushkin indicates that the drives can deliver up to 232K/185K 4KB read/write IOPS, which, on the read side, is well beyond the capabilities of the SM2260 declared by Silicon Motion (120K/240K). Apparently, Silicon Motion and Mushkin have implemented firmware optimizations that improve random read speeds significantly. Keep in mind though that SSD makers tend to disclose the maximum performance of their higher-end SKUs, so the entry-level 250 GB model is expected to be slower than the 2 TB top-of-the-range SKU.
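For context, the quoted random figures can be converted into an equivalent throughput to compare against the sequential numbers. A quick back-of-the-envelope conversion (using the 1 MB = 1024 KB convention; vendors sometimes use decimal units instead):

```python
def iops_to_mbps(iops: int, block_kb: int = 4) -> float:
    """Convert fixed-block-size IOPS into MB/s (1 MB = 1024 KB here)."""
    return iops * block_kb / 1024

# the Helix's quoted 4KB random figures as throughput
print(f"{iops_to_mbps(232_000):.0f} MB/s random read")   # 906 MB/s
print(f"{iops_to_mbps(185_000):.0f} MB/s random write")  # 723 MB/s
```

As expected for NAND-based SSDs, random throughput lands well below the 2500/1100 MB/s sequential ratings.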

| Mushkin Helix SSD Specifications |
| Capacity | 250 GB | 500 GB | 1 TB | 2 TB |
| Model Number | - | - | - | - |
| Controller | Silicon Motion SM2260 |
| NAND Flash | 3D MLC NAND |
| Form-Factor, Interface | M.2-2280, PCIe 3.0 x4, NVMe 1.2 |
| Sequential Read | - | - | up to 2500 MB/s |
| Sequential Write | - | - | up to 1100 MB/s |
| Random Read IOPS | - | - | up to 232K |
| Random Write IOPS | - | - | up to 185K |
| Pseudo-SLC Caching | Supported |
| DRAM Buffer | Yes, capacity unknown |
| TCG Opal Encryption | No |
| Power Management | DevSleep, Slumber |
| Warranty | 3 years |
| MTBF | 1,000,000 hours |
| MSRP | Unknown | Unknown | Unknown | Unknown |
Mushkin showed a pre-production version of its Helix SSDs in its suite at CES. The drive looks like a typical M.2 PCIe/NVMe SSD without any heat spreader. We do not know whether the final product will employ a heat spreader to ensure extra cooling, but that is certainly a possibility. As for endurance and reliability, Mushkin rates the MTBF of its Helix SSDs at one million hours for now, which is below that of some competing offerings.
Mushkin is among the first of the independent suppliers of SSDs to announce a high-end PCIe 3.0 x4 drive based on 3D MLC NAND flash memory. At present, only ADATA ships similar products (XPG SX8000) in high volume. Mushkin did not announce when exactly it plans to start selling its Helix drives, and we may be months away from retail availability and pricing information.
2:00p
Seagate Confirms Plans for 12 TB HDD in Near Future, 16 TB HDD Due in 2018 
The CEO of Seagate has confirmed plans to release new nearline hard drives with 12 TB capacity in the coming months, as well as HDDs with 16 TB capacity over the course of the next several quarters. The latter are believed to be based on HAMR technology, and the comment by the CEO essentially means that the company is on track with its next-generation heat-assisted magnetic recording technology.
12 TB HDDs Incoming
Seagate’s CFO confirmed plans to release nearline HDDs with 12 TB capacity in early November, 2016. Last week Steve Luczo, CEO of Seagate, said that such drives had been evaluated by the company’s customers for about two quarters now and the feedback about the drives had been positive. He did not elaborate on the exact launch timeframe for the product, but given the fact that the drive is nearly ready, it is logical to assume that the 12 TB HDD should be announced formally in the coming weeks or months.

The hard drive maker has not revealed many details about its 12 TB nearline HDD so far, but previously Seagate disclosed that the drive is helium-filled and based on PMR technology, and last week the company implied that it uses eight platters. Keeping in mind that Showa Denko recently launched 1.5 TB platters for 3.5” drives, it is likely that Seagate uses eight such platters for its 12 TB HDDs. In fact, Western Digital’s HGST Ultrastar He12 HDD with 12 TB capacity, introduced last December, comes with eight PMR disks as well.
“As you know, going from 8 to 10 to 12 to 16 [TB], you are going from six to eight disks, at least, on the nearline products,” said Steve Luczo, CEO of Seagate, during a conference call with investors and financial analysts.
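The platter math above is easy to check, and it also hints at why 16 TB takes new recording technology. A short sketch (the platter counts are inferred from the Showa Denko figure, not confirmed by Seagate):

```python
import math

# With 1.5 TB PMR platters, 12 TB works out to exactly eight disks.
# Reaching 16 TB the same way would take eleven platters -- well beyond
# the eight-disk designs discussed above -- which is why the 16 TB parts
# are expected to rely on HAMR's higher areal density instead.
platter_tb = 1.5                    # Showa Denko's new 3.5" PMR platter
print(12 / platter_tb)              # 8.0 platters for 12 TB
print(math.ceil(16 / platter_tb))   # 11 platters for 16 TB on PMR
```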
16 TB HAMR HDDs in 12 to 18 Months
The 12 TB drive will be Seagate’s top-of-the-range model for enterprise and other demanding applications for quite a while and will become the company’s highest-capacity PMR-based drive. But, for the second time last week, the company mentioned 16 TB HDDs due in the next 12 to 18 months. Moreover, since such drives will be based on HAMR technology (Seagate discussed the feasibility of HAMR-based 16 TB HDDs last year), they will cause a certain level of disruption in the market.
“During the next 12 to 18 months, we expect the nearline market to be diversified in capacity points for different application workloads, with use cases from 2 to 4 TB products for certain applications up to 16 TB for other use cases,” said Mr. Luczo.

Hard drives featuring heat-assisted magnetic recording technology will cost more to build than traditional HDDs because of the increased number of components and the use of new materials. As a result, such drives will also be more expensive for actual customers. At present we do not know specifics, but what Seagate says is that in the future the market for nearline HDDs will get more diverse and its lineup will get wider. In the past, the product stack used to remain similar: as larger drives were introduced every year, previous-gen products were moved down the stack and low-capacity models were discontinued. This may not be the case in the future, and customers who need maximum capacity (i.e., who would like to store 3840 TB of data per rack and thus require 16 TB drives) in 2018 will probably have to pay more than they pay for leading-edge HDDs today.
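The rack example can be worked through directly:

```python
# The per-rack capacity figure quoted above, worked through: reaching
# 3840 TB per rack with different drive capacities.
rack_tb = 3840
print(rack_tb // 16)  # 240 x 16 TB drives
print(rack_tb // 12)  # 320 x 12 TB drives -- a third more slots and power
```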
It is noteworthy that Seagate also mentioned 14 TB and 20 TB HDDs in the conference call, but without specifics it would be premature to make assumptions about them. So far, the company has not explicitly announced any plans to release SMR-based 14 TB HDDs for specific workloads to compete against Western Digital’s Ultrastar He14.
Higher-Capacity Consumer Drives In Demand
Moving on to the other comments made by the head of Seagate, we noticed that Mr. Luczo also mentioned higher-capacity HDDs for consumer applications. In particular, when talking about the increasing number of disks and heads per drive, the CEO indicated that these numbers are rising for consumer HDDs as well.
“We do think there [are] opportunities for more heads and disks on desktop and notebook, as people need higher capacity as well,” said Steve Luczo.
Seagate currently offers BarraCuda Pro desktop HDDs for consumers in 6 TB, 8 TB and 10 TB capacities, and these drives use enterprise-class platforms (albeit with multiple changes); the remark by the CEO is an indication that the company will keep doing so in the future. Meanwhile, it is interesting to note that the head of Seagate also mentioned mobile drives with an increased number of platters and heads, hinting at increasing demand for higher-end 2.5” HDDs with more than one platter. At present, Seagate offers 5 TB drives in a 2.5-inch form factor, although these come in at a 15 mm z-height and do not fit most mobile environments.
