AnandTech's Journal
Monday, March 13th, 2017
8:17a
Intel to Acquire Mobileye for $15 Billion 
In an interesting announcement today, Intel and Mobileye have entered into an agreement whereby Intel will commence a tender offer for all issued and outstanding ordinary shares of Mobileye. At $63.54 per share, this equates to a value of approximately $15 billion.
Mobileye is currently one of a number of competitors actively pursuing the visual computing space, and the highest item on that agenda is automotive. We’ve seen a stream of Mobileye announcements over the last few years, covering relationships with car manufacturers on the road to fully autonomous vehicles. Intel clearly wants a piece of that action, alongside its own push into automotive as well as the cloud computing required for various automotive tasks.
Intel estimates that the market for vehicle systems, data, and automotive services will be worth around $70 billion by 2030, spanning everything from compute at the edge through the backhaul into the cloud. This includes predictions that 4TB of data will be generated per vehicle per day, which is going to require planning in infrastructure. Intel’s expertise in elements such as its RealSense technology and high-performance general compute will be an interesting match to Mobileye’s portfolio.
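To put that prediction in rough context, a quick back-of-the-envelope calculation (our arithmetic, not Intel’s) shows what 4TB per day means as a sustained data rate per vehicle:

```python
# Back-of-the-envelope: average data rate implied by 4 TB of sensor
# data generated per vehicle per day (our arithmetic, not Intel's).
TB = 1e12                          # bytes, using decimal terabytes
bytes_per_day = 4 * TB
seconds_per_day = 24 * 60 * 60

avg_bytes_per_s = bytes_per_day / seconds_per_day    # ~46 MB/s
avg_mbps = avg_bytes_per_s * 8 / 1e6                 # ~370 Mbit/s

print(f"{avg_bytes_per_s / 1e6:.0f} MB/s sustained, "
      f"or roughly {avg_mbps:.0f} Mbit/s per vehicle")
```

Even averaged over a full 24 hours that is several hundred megabits per second; concentrated into a few hours of actual driving, the burst rate would be several times higher still.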
“This acquisition is a great step forward for our shareholders, the automotive industry, and consumers,” said Brian Krzanich, Intel CEO. “Intel provides critical foundational technologies for autonomous driving including plotting the car’s path and making real-time driving decisions. Mobileye brings the industry’s best automotive-grade computer vision and strong momentum with automakers and suppliers. Together, we can accelerate the future of autonomous driving with improved performance in a cloud-to-car solution at a lower cost for automakers.”
The combined operation will sit as a single organization under Intel’s Automated Driving Group, headquartered in Israel and led by Prof. Amnon Shashua, Mobileye’s co-founder, Chairman and CTO. All of Mobileye’s current contracts with automotive OEMs and tier-one suppliers will be retained under the single group, which will also be overseen by Intel Senior Vice President Doug Davis.
Mobileye’s current roadmap includes products such as the EyeQ4 and EyeQ5 SoCs, targeting level 3/4 autonomy in 2018 and 2020 respectively, as well as high-performance FPGAs for vision analysis. Intel’s acquisition of Altera over a year ago as a step into the FPGA market may come into play here, as might Intel’s semiconductor manufacturing facilities. As with Altera, it will likely take some time before Intel’s resources and Mobileye’s technology are fully integrated.
There will be an investor call webcast about the announcement on 3/13 at 8:30 am (ET). The full transaction is expected to close within nine months, subject to regulatory approval, and is not subject to any financing conditions. Intel intends to fund the acquisition with cash from its balance sheet.
As we get more information we will let you know.
Additional 1: For scale, Intel's purchase of Altera was $16.7 billion, as we reported at the time.
Additional 2: Here is the Investor Call slide deck.
Additional 3: The tender offer will require purchasing 95% of the ordinary shares, and Intel will fund it using offshore cash that has not been repatriated into the US.
8:30a
MWC 2017: Panasonic Demonstrates Store Window as a Transparent Screen 
At Mobile World Congress this year, Panasonic demonstrated glass that can be turned into a display in an instant. The solution relies on a thin film between two sheets of glass that quickly changes its properties when electricity is applied, allowing a rear projector to focus on it and provide an image. The system is currently aimed at retailers that want to attract more attention to their stores and shelves. The company says that the first deployments of the technology are expected this spring.
There are typically two ways for stores to attract the attention of passers-by: either put something interesting in the shop window, or replace the window with LCD screens that showcase something appealing. The new solution that Panasonic is showing blends traditional showcases and displays, enabling store owners to have both. The technology behind the solution appears to be relatively simple: Panasonic takes two sheets of glass and puts a special light-control film between them.

The film is matte and can be used to display images projected onto it using a conventional off-the-shelf projector, but when electricity is applied to the film, it becomes transparent. Similar switchable glass technologies are already in frequent use: a potential difference is applied across two electrodes embedded in the glass, and the larger particles in the film between them align under the electric field to allow light to pass through. This ends up being a natural extension of the large glass projection display technology Panasonic has shown at other recent events.

At MWC 2017, Panasonic showed a booth with a mannequin wearing a red dress, a pair of black shoes, and a green handbag; the lens of the projector was camouflaged against the surroundings. Once the film is “switched”, the 1×2 meter window can be used as a screen, and this is where Panasonic demonstrated a video with a model wearing that exact red dress (albeit with red shoes). The manufacturer says that the resolution of the display depends entirely on the resolution of the projector, but the density of the non-transparent particles as well as the placement of the projector affect the quality too. Meanwhile, since the videos are displayed using a projector, it should not be too hard for stores to set everything up for transparent screens.

Panasonic does not reveal the exact technology behind its smart glass, and since there are multiple types of films that can change their properties when electricity is applied, it is difficult to guess without an official announcement. What is important here is that the glass can either be a screen or completely transparent. So, unless several panels are stacked together, the window will be either a window or a display at any given moment, which limits the number of applications that can use the tech.

At present, a 1×2 meter panel (XC-CSG01G) is the maximum size of Panasonic’s “transparent screen”, so anyone who wants a larger wall has to use several panels and projectors in sync. The total cost of a single 1×2 meter display with a control box (XC-CSC01G-A1) will be around $3000-$4000 according to a Panasonic rep at the booth (it is not clear whether this includes the projector, although it does not sound like it does, and that price excludes a support contract). Panasonic states that it already has customers interested in these products who are essentially ready to accept delivery. The high price of Panasonic’s transparent screen glass reflects not only its capabilities but also the fact that everything has to be rugged and work properly across different weather and temperatures. Panasonic plans to start selling its “transparent screens” in Japan first and then look for customers in other parts of the world as well.
10:00a
MWC 2017: Oppo Demonstrates 5X Optical Zoom for Smartphones 
This year at MWC, Oppo showed off a smartphone prototype that used a new implementation of dual cameras to offer a 5X optical zoom. The company did not reveal any actual plans to use it in products, nor did it reveal the cost of the implementation, but it is likely that the technology will reach the market sometime in the future.
The imaging capabilities of smartphones have been evolving rapidly since the introduction of the first handsets with cameras. Throughout the history of camera phones, manufacturers have developed new lens packs, new CMOS sensors, and extensive ISPs (image signal processors) in order to improve the capability and/or quality of images. For a while, a number of makers tended to install higher-resolution sensors simply because the 'megapixel number' was easier to explain than the quality of optics or advanced ISPs. A lot has changed in recent years as various smartphone makers have invested in high-end lenses (co-developed with Carl Zeiss, Leica, etc.), developed their own SoCs/ISPs for image processing, and pursued other potential differentiators in a crowded smartphone ecosystem.
So at MWC 2017, multiple smartphone manufacturers demonstrated products with dual rear-facing sensors (RGB+RGB or RGB+IR) to further improve their photography credentials. One of those was Oppo, which is using the two sensors to build a portable camera system with a 5X optical zoom in a very different configuration to what we have seen before.

Optical zoom is nothing new for smartphones, but Oppo’s approach is a little different compared to that used by other makers. The 5X dual camera optical zoom from Oppo relies on two image sensors:
- The first is placed in line with the motherboard (just like the sensors inside most smartphones) and is equipped with a regular lens pack, such that light hits the sensor with minimal adjustment.
- The second is placed perpendicular to the motherboard and is equipped with different optics featuring image stabilization and optical zoom. It is possible that the lens system here can physically move to enable further adjustment.
To direct light to the second sensor, Oppo uses a special prism mirror placed perpendicular to the motherboard (so, in essence, everything works like a periscope), whose angle can be adjusted in increments as small as 0.0025 degrees to compensate for shaking. To enable the 5X optical zoom, an unnamed ISP processes and combines the images from both sensors.
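Oppo has not detailed how its ISP merges the two streams, but a minimal sketch of how a hybrid wide/tele zoom pipeline typically works gives the idea (the function names and the 3x tele factor are illustrative assumptions, not Oppo specifics):

```python
import numpy as np

def hybrid_zoom(wide_frame, tele_frame, zoom, tele_factor=3.0):
    """Toy sketch of hybrid optical/digital zoom.

    wide_frame, tele_frame: HxWx3 arrays from the two sensors, assumed
    already aligned by the ISP.  zoom: requested factor (1.0 to 5.0 for
    a '5X' system).  tele_factor: native magnification of the periscope
    module (illustrative value only).
    """
    if zoom < tele_factor:
        # Below the tele module's magnification: crop and upscale the
        # wide sensor (i.e. digital zoom on the wide camera).
        return center_crop_and_upscale(wide_frame, zoom)
    # At or beyond the tele module's magnification: crop the tele frame,
    # which still has optical detail to spare.
    return center_crop_and_upscale(tele_frame, zoom / tele_factor)

def center_crop_and_upscale(frame, factor):
    h, w = frame.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    # Nearest-neighbour resize keeps the sketch dependency-free.
    ys = (np.arange(h) * ch // h).clip(0, ch - 1)
    xs = (np.arange(w) * cw // w).clip(0, cw - 1)
    return crop[ys][:, xs]
```

In practice an ISP would also blend detail from both frames around the crossover point and correct for parallax between the two modules, which is where much of the engineering effort lies.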

At its booth at MWC 2017, Oppo demonstrated promo videos describing the benefits of its optical zoom capabilities, as well as its optical image stabilization. In addition, the company allowed visitors to try out the prototype devices. One of the concerns when using mirrors to redirect light is that luminous intensity drops, along with the quality of the images. In its video at the trade show, Oppo showcased that photos taken in dark conditions using the prototype featuring its 5X dual camera optical zoom were better than images taken with an 'unnamed' rival. In a brief hands-on, we noticed no immediate problems shooting images in good lighting. It should be noted that there are other phones on the market that use prisms, however not quite in this way.

Oppo did not mention which smartphones are going to use its 5X dual camera optical zoom technology, nor did it mention when. The reference system on the show floor looks slim, so it could be installed into various handsets by Oppo, giving the company the opportunity to use it in its top-of-the-range smartphones with large displays, or perhaps in smaller models as well (provided that they have appropriate SoCs/ISPs).

It is noteworthy that in its briefing materials, Oppo did not state the type of sensors in use, but emphasized only the 5X dual camera optical zoom. This is likely a work in progress for a future device, which may or may not be a smartphone.
12:00p
GDC 2017 Roundup: VR for All - Pico Neo CV, Tobii, & HTC 
Now that I’ve wrapped up the major GDC product launches, I want to spend a bit of time talking about the rest of GDC.
The annual show has always been a big draw for game developers and hardware companies alike, and since the end of the Great Recession that process has only accelerated. But without a doubt the fastest growth in terms of developer and vendor presence at the show has been VR. GDC 2016’s VR sessions exceeded any and all expectations – the show management had to scramble to move them to larger spaces because the attendance was so high – and it took all of half a year for VR to become its own stand-alone show as well with the GDC spinoff VRDC. Suffice it to say, the amount of attention being paid and resources being invested in VR is very significant, both for software and hardware developers.
So for GDC 2017, I spent an afternoon on the expo floor dedicated to VR meetings, to see what new hardware was on display. While a common theme throughout is that everyone is still looking for the killer app of VR – both in terms of hardware design and the actual must-have game/application – it’s clear that there’s a lot of progress being made for future VR headsets, and that developers aren’t afraid to experiment in the workshop and show off those experiments to the public.
Pico Neo CV – Stand-alone VR Headset with Inside-Out Tracking
The first stop was Pico Interactive’s booth, where the company was showing off their Pico Neo CV headset. Pico is one of several companies developing headsets around Qualcomm’s Snapdragon SoCs, and VR has become a priority for Qualcomm itself. Already a major force in high-end smartphones, Qualcomm believes that the Snapdragon is a great fit for VR given the mix of portability and high performance required. As a result the company has gone all-in on VR, dedicating quite a bit of engineering and marketing resources towards helping their customers develop VR headsets and bring them to market.

The Pico headset, in turn, is one of several headsets in development based around a Snapdragon processor. However, more than just being a stand-alone headset for the purposes of on-board processing, arguably Pico’s big claim to fame in the world of VR development is its inside-out position tracking, which is designed to go one better than current mobile VR headsets. Whereas setups like the Samsung Gear VR and various Cardboard headsets primarily rely on inertial tracking, the Pico Neo CV can do true inside-out positional tracking, fixing itself relative to the outside world on an absolute basis.
The advantage of positional tracking is that it allows much greater accuracy, which in turn allows for greater freedom of movement than inertial tracking. You can actually do a lot by interpolating accelerometer and gyroscope data with inertial tracking, and as a result it’s generally satisfactory for rotation – think 360 degree videos and fixed-position gaming such as Gunjack – but it is ultimately limiting for interactive experiences where errors add up. The drawback to absolute tracking is that it normally takes an external camera or beacon of some kind – such as the Vive Lighthouse system – which in the case of stand-alone, untethered headsets is antithetical to their portability.

The solution then, as several companies like Pico are playing with, is inside-out tracking. In the case of the Pico Neo CV, the company combines the usual gyroscope and accelerometer data with a camera looking at the outside world, using computer vision processing to extract the user’s position relative to the rest of the world. Computer vision is a fairly straightforward solution to the problem – witness the number of self-driving cars and other projects using CV for similar purposes thanks to the explosion in deep learning – but it’s made all the more interesting on a headset given the processing requirements.
In the case of the Pico Neo CV, while the company won’t be shipping the headset until later this year, they already have a prototype up and running, inside-out tracking and all. In my hands-on time with the headset, the positioning of the Neo seemed very accurate; the demo software always reacted to my head position as I felt it should across all six degrees of freedom, and pulling off the headset to check my actual position revealed that I was positioned where (and facing where) I should be. It’s an experience that in principle is no different than using external tracking, but then that’s the point of inside-out tracking: it is meant to be the same thing, but without the external gear.

That said, like the first-generation of PC headsets, I suspect the Pico Neo CV is going to be a transitional product as the hardware further improves. The camera-based tracking system only updates at 20Hz, meaning there’s 50ms between position updates. Without getting deeper into the headset I’m not sure what the actual input lag is, but the low refresh rate is noticeable if you turn your head quickly. In my experience it’s not nauseating in any way, but like some of the other drawbacks of first-generation VR headsets, there’s clear room for improvement. The headset display itself operates at 90Hz, so it’s a matter of getting tracking operating at the same frequency.
Part of the catch, I suspect, is processing power. The Pico Neo CV is based around a Snapdragon 820 SoC, which, although powerful by SoC standards, is now splitting its time between rendering in VR and processing the additional tracking information. Future SoCs are going to go a long way towards helping with this problem.
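Pico hasn’t published its fusion pipeline, but conceptually the approach looks something like the following sketch, where high-rate inertial dead reckoning keeps the render loop fed and the ~20Hz camera poses periodically pull the estimate back toward ground truth (the class, method names, and blend constant are illustrative assumptions):

```python
import numpy as np

class InsideOutTracker:
    """Toy complementary filter: high-rate IMU dead reckoning,
    periodically corrected by low-rate absolute camera poses.
    Rotation is omitted to keep the sketch short."""

    def __init__(self, blend=0.2):
        self.pos = np.zeros(3)      # estimated head position (m)
        self.vel = np.zeros(3)      # estimated velocity (m/s)
        self.blend = blend          # how hard camera fixes pull the estimate

    def on_imu(self, accel, dt):
        """Called at IMU rate (hundreds of Hz): integrate acceleration.
        accel is gravity-compensated acceleration in the world frame (m/s^2)."""
        self.vel += accel * dt
        self.pos += self.vel * dt   # drifts quickly if left uncorrected

    def on_camera_pose(self, cam_pos):
        """Called at ~20 Hz when the computer-vision pipeline produces an
        absolute position: nudge the estimate toward it."""
        error = cam_pos - self.pos
        self.pos += self.blend * error
        # Bleed off accumulated velocity error as well (crude, but keeps
        # the toy filter stable).
        self.vel *= (1.0 - self.blend)
```

Raising the camera update rate toward the display’s 90Hz would shrink the window in which inertial drift can accumulate, which is presumably where faster SoCs come in.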
Looking at the rest of the headset, Pico has clearly set out to develop something better than the vast array of cellphone-powered VR experiences out there. Pico has combined its tracking gear and the 820 with a pair of 1.5K displays, so the total pixel count – and resulting DPI – is a lot higher than on a Cardboard or Daydream setup. Along with built-in audio, the Pico Neo CV has everything needed for stand-alone, mobile-caliber VR gaming.
Pico hasn’t yet announced a precise launch date for the headset, but they expect to start selling it later this year. As one of the first serious efforts at a stand-alone Snapdragon-based headset, it should be interesting to see where these kinds of devices fall into the market, and just how much more Pico can improve the inside-out tracking before the headset’s launch.
Tobii – VR Eye Tracking
Going from the inside looking out, let’s talk about the inside looking even further inside. One of the technologies various companies have been investigating for second-generation VR headsets is eye tracking. Besides enabling a more immersive experience, eye tracking could also potentially change how VR rendering works by allowing foveated rendering. By using eye tracking to keep tabs on what direction a user is looking, foveated rendering would allow games to efficiently render in a non-uniform fashion, rendering at full quality only where a user is looking, and rendering at a lower level of quality outside of that focus area.
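As a rough illustration of the idea (the tile thresholds, shading rates, and pixels-per-degree figure below are illustrative assumptions, not anything Tobii or headset vendors have specified), a foveated renderer only needs the gaze point and a cheap per-tile decision:

```python
def shading_rate_for_tile(tile_center, gaze_point, ppd=15.0):
    """Toy foveated-rendering heuristic.

    tile_center, gaze_point: (x, y) in pixels on the eye buffer.
    ppd: approximate pixels per degree of the headset optics
    (illustrative value).  Returns the fraction of full shading resolution.
    """
    dx = tile_center[0] - gaze_point[0]
    dy = tile_center[1] - gaze_point[1]
    eccentricity_deg = (dx * dx + dy * dy) ** 0.5 / ppd

    if eccentricity_deg < 5:       # fovea: full quality
        return 1.0
    if eccentricity_deg < 15:      # near periphery: half resolution
        return 0.5
    return 0.25                    # far periphery: quarter resolution

# A renderer would evaluate this per tile each frame, shade peripheral
# tiles at reduced resolution, and upsample them before display.
```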
But to get there you first need to be able to accurately track users’ eyes, and that’s where Tobii comes in. The company, which focuses on eye tracking for gaming and other applications, has already made a name for itself with their eye-tracking cameras, which are available both stand-alone and integrated into some laptops and displays. The use of external eye tracking has proven a bit gimmicky, but the technology is sound, and VR stands to be a much more useful application.

To that end, the company was at GDC showcasing a modified HTC Vive headset with their eye tracking technology installed. The company’s demo was primarily focused on how eye tracking can improve the gaming experience, both as an input method and as a way to add life to avatars, and true to their claims, it worked. The eye tracking implementation in the company’s modified headset was very rapid, to the point that it didn’t feel like it was operating any slower than the headset tracking. And while it took some practice to get used to – it’s a bit jarring at first that where you look actively matters – once I got used to it, it worked very well.
But from a technical perspective, perhaps the most impressive part was just how well the company had integrated the eye tracking hardware into the headset itself. While the external cameras were by no means big to begin with, I was surprised just how easily it fit into the prototype. Adding eye tracking did not make the headset feel significantly heavier, and the sensors easily fit into the already limited free space inside the headset. From a hardware perspective, this very much felt like a technology that was already at a point where it could get integrated into a commercial headset tomorrow.

Consequently, if Tobii’s technology (or similar eye tracking tech) shows up in second-generation VR headsets, I would not be the least-bit surprised. While I’m not sold on the gaming aspects of the tech – it’s neat, not must-have – it’s the kind of thing where I expect developers would need some time to play with it to really figure out if it’s useful and just what the best use cases are. Otherwise the big use case here is going to be foveated rendering, which is likely going to prove critical for higher resolution VR headsets. The latter is outside of Tobii’s hands, but offering a good eye tracking experience is the first and most important part in making that happen.
HTC Vive – Hands-On with the Deluxe Audio Strap & Tracker
My final stop for the afternoon was HTC’s private demo room, where the company was showing off some new games and other software technology being developed for the Vive. We’re at least a year too early for second-generation headsets, so the company wasn’t showing off anything new in that respect, but they did have on-hand their new Deluxe Audio Strap and the Tracker device for third-party peripherals. Both of these devices have been previously announced, but this is the first time I’ve had a chance to actually use them.
The Deluxe Audio Strap is an interesting device. Despite its plain-sounding name, it’s a lot more than just an audio solution for the headphone-free Vive. In adding earphones, HTC went and radically altered the entire strapping mechanism for the headset. As a result the Deluxe Audio Strap not only rectifies one of the competitive drawbacks of the Vive – it requires a pair of headphones/earbuds on top of everything else – but it also greatly improves the fit of the headset. The latter has always been of particular interest to me; the original Vive strap system just never fit my admittedly oversized head very well. So improving this would go a long way towards making the Vive more comfortable to wear over a long period of time.

Coming from the original strap system, I’ve found the difference rather pronounced. With the Deluxe Audio Strap installed, the Vive is not only easier to adjust, but it feels a lot more secure as well. The former comes thanks to a small dial (a “sizing dial”) on the back of the harness, which replaces the Velcro straps along the sides of the headset. Now you can just turn the dial to adjust the fit of the headset, which is easy enough to do both while wearing the headset and with it off. Combined with some other general fit tweaks HTC has made to the strap, it feels like the strap they should have had for the headset’s launch last year.
Meanwhile the new earphones are similarly impressive. Relative to the Rift, HTC has gone with something a little bigger and a little more versatile. The drivers HTC is using are larger than those in the Rift’s earphones, and should give it a bit more kick in the bass, though that’s something that would need to be tested. The fit of the earphones is also very good; the ratchet mechanism keeps them pushed towards the ears, while it’s easy enough to flip one or both earphones out to hear the world around you (or in my case, the engineer giving the presentation). While I doubt most Vive owners will want to buy the new strap solely on the basis of audio since they already have headphones or another solution, combined with the new strap system, it’s a very compelling offering.

Also on display was the Vive Tracker. The external widget is designed to be used with the Vive’s Lighthouse system, allowing for Lighthouse tracking to be added to third party objects. The tracker itself does look a bit weird, owing to its need to match the pitted appearance of the Vive headset that the Lighthouse system is meant to work with, but it does its job well. Besides the obvious use case of third party controllers – which could prove interesting for developers since it’s just the Tracker and not the entire controller being tracked – HTC was also using it for more unusual applications such as attaching it to a camera to allow accurately superimposing recorded footage (i.e. unsuspecting editors) over the rendered game itself.

The Deluxe Audio Strap is available immediately for developers and other commercial firms who are buying the Vive Business Edition. Otherwise larger-scale consumer sales will start a bit later this year; HTC is pricing it at $99.99 and pre-orders start on May 2nd, while HTC will begin shipping it in June. Meanwhile the Tracker will go on sale to developers on the 27th of this month, also for $99.99.
2:00p
MWC 2017: Netgear Nighthawk M1 Coming to Europe in Mid-2017, But 
Earlier this year Netgear introduced its Nighthawk M1 router, which is powered by Qualcomm’s Snapdragon X16 LTE modem and is the first Gigabit LTE router on the market. Right now, the device is available on Telstra’s 4GX LTE network in Australia, but the router made a surprise appearance at the MWC 2017 show, and it will actually hit the market in Europe later this year. There is a catch, however: there will not be a lot of Gigabit LTE deployments at first because of technical challenges.
The Netgear Nighthawk M1 is based on Qualcomm’s Snapdragon X16 LTE modem (paired with Qualcomm’s WTR5975 RF transceiver), which uses 4×4 MIMO, three-carrier aggregation (3CA) and 256QAM modulation to download data at up to 1 Gbps (in select areas), as well as 64QAM and 2CA to upload data at up to 150 Mbps. The Nighthawk M1 router is designed for those who need to set up an ultra-fast mobile broadband connection but do not want an incoming physical data connection. The router is equipped with Qualcomm’s 2×2 802.11b/g/n/ac Wi-Fi solution that can connect up to 20 devices simultaneously, using the 2.4 GHz and 5 GHz bands concurrently. Generally speaking, the Nighthawk M1 is aimed at mobile workgroups who need a high-speed Internet connection where there is no broadband. In Australia, there are areas where Telstra’s 4GX LTE network is available whereas regular broadband is not, so the device makes a lot of sense there.
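As a sanity check on the headline figure, the 1 Gbps number falls out of some simplified arithmetic: roughly ten ~100 Mbps spatial streams stacked across the three aggregated 20 MHz carriers (the carrier/MIMO split below is one common configuration, and the overhead figure is an approximation rather than an exact spec value):

```python
# Simplified arithmetic behind the 'Gigabit LTE' (LTE Category 16) figure.
RBS_PER_20MHZ_CARRIER = 100       # resource blocks in a 20 MHz LTE carrier
SUBCARRIERS_PER_RB = 12
SYMBOLS_PER_SUBFRAME = 14         # OFDM symbols per 1 ms subframe (normal CP)
BITS_PER_SYMBOL_256QAM = 8

# Raw physical-layer bits per 1 ms subframe for one spatial layer:
bits_per_ms = (RBS_PER_20MHZ_CARRIER * SUBCARRIERS_PER_RB *
               SYMBOLS_PER_SUBFRAME * BITS_PER_SYMBOL_256QAM)
raw_mbps_per_layer = bits_per_ms / 1000            # ~134 Mbps

# Roughly a quarter is lost to reference signals, control channels and
# channel coding, leaving about 100 Mbps of usable peak rate per layer.
usable_mbps_per_layer = raw_mbps_per_layer * 0.75

# Three aggregated 20 MHz carriers: 4x4 MIMO on two of them plus 2x2 on
# the third gives 4 + 4 + 2 = 10 spatial layers in total.
layers = 4 + 4 + 2
print(f"~{usable_mbps_per_layer * layers:.0f} Mbps peak downlink")  # ~1000
```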

Netgear’s Nighthawk M1 router is clearly one of the flagship products offered by the company. Nonetheless, it was still a bit surprising to see the device at MWC (given its current Australia-exclusive status). When asked about availability in Europe, a representative of the company said that the Nighthawk M1 is coming to Europe this summer and will be available from multiple operators. What this means is that a number of European operators will be ready to deploy Gigabit LTE later this year. Netgear did not say which operators, which geographies, or at what pricing, because that will depend entirely on the operators that offer the Nighthawk M1 with their service packages.
While Gigabit LTE deployments are coming, do not expect them to be widespread on 4G networks over the next couple of years. To enable Gigabit LTE, devices and operators have to support 4×4 MIMO, carrier aggregation (CA) and 256QAM modulation. It is not particularly easy to enable 4×4 MIMO and 256QAM modulation because of interference. In fact, far from all networks today even use 64QAM. Moreover, operators have to have enough spectrum and backhaul bandwidth to transfer all the data. Thus, to offer Gigabit LTE, operators have to upgrade their infrastructure both in terms of base stations and backhaul. Some operators may be reluctant to upgrade to Gigabit LTE because right now there are not a lot of announced devices featuring the technology, or because they do not have enough customers who need the tech and are prepared to pay for routers like the Nighthawk M1. Despite this, wireless gigabit networks are coming: first with 4G/LTE in select areas, and then with 5G sometime from 2020 onwards.

Even if there are not a lot of Gigabit LTE deployments across Europe this year, the Netgear Nighthawk M1 will still have enough advantages to attract customers seeking a high-end mobile router that can work for up to 24 hours on a charge (it comes with a 5040 mAh battery). In Australia, the Nighthawk M1 is available for less than $300 from Telstra, but we know nothing about the pricing in Europe.
