One of my favorite things about the tech industry is how quickly innovations from the big companies and premium products trickle down into more affordable devices. The rampant stealing of ideas isn’t so awesome when it happens between small companies — or, as in the case of Facebook treating Snapchat like its incubation lab, when a big company copies a smaller one. But I don’t have a problem with the general flow of good ideas from giants like Apple and Google to more budget-friendly suppliers of hardware and software. Apple and Google, though, have an obvious problem with that, and they’ve worked hard to develop new techniques and approaches that can’t be readily imitated.
The big new thing in smartphones lately is one of those buzz phrases you’ll have heard tossed around: machine learning (ML). Like augmented and virtual reality, machine learning is often thought of as a distant promise, but in 2017 it has materialized in major ways. ML is at the heart of what makes this year’s iPhone X from Apple and Pixel 2 / XL from Google unique. It is the driver of differentiation both today and tomorrow, and the companies that fall behind in it will find themselves desperately out of contention.
Machine learning still seems like a distant promise, but it’s already underpinning some major new phone features
A machine-learning advantage can’t be easily replicated, cloned, or reverse-engineered: to compete with the likes of Apple and Google at this game, you need to have as much computing power and user data as they do (which you probably lack) and as much time as they’ve invested (which you probably don’t have). In simple terms, ML promises to be the holy grail for giant tech companies that want to scale peaks that smaller rivals can’t reach. It capitalizes on vast resources and user bases, and it keeps getting better with time, so competitors have to keep moving just to stay within reach.
I’m not arguing that ML is a panacea any more than I would argue that all OLED displays are awesome (some are terrible): it’s just the basis on which some of the key differentiating features are now being built.
Google’s HDR+ camera
Let’s start with the most impressive expression of machine-learning consumer tech to date: the camera on Google’s Pixel and Pixel 2 phones. Its DSLR-like performance never ceases to amaze me, especially in low-light conditions. Google’s imaging software has transcended the traditional physical limitations of mobile cameras (namely: shortage of physical space for large sensors and lenses), and it’s done so through a combination of clever algorithms and machine learning. As Google likes to put it, the company has turned a light problem into a data problem, and few companies are as adept at processing data as Google.
I recently spoke with Marc Levoy, the Stanford academic who leads Google’s computational photography team, and he stressed something important about Google’s ML-assisted camera: it keeps getting better over time. Even if Google had done nothing whatsoever to improve the Pixel camera between the Pixel and Pixel 2’s launch, the simple accumulation of machine-learning time would still have made the camera better. Time is the added dimension that makes machine learning even more exciting. The more resources you can throw at your ML setup, says Levoy, the better its output becomes, and time and processing power (both on the device itself and in Google’s vast server farms) are crucial.
At CES in January this year, Huawei’s mobile boss Richard Yu was asked if his company would introduce its own voice assistant in the US, to which he replied, “Alexa and Google Assistant are better, how can we compete?” That uncharacteristically pragmatic response (for a mobile company CEO) neatly encapsulates the difficulty of copying Google and Amazon’s machine-learning efforts. All the vast resources that the two US companies have invested into natural language processing and voice recognition are paying a dividend, keeping them far enough ahead of the competition that even Huawei, one of the biggest consumer tech brands outside the US, isn’t trying to compete. That’s the cumulative power of long-term investment in machine learning.
Google’s amazing camera sells Pixels, its powerful Assistant helps keep other Android makers in check
Is Google Assistant a differentiating feature? Not for hardware, as Google wants to have Assistant running on every device possible. But the Assistant serves as a conduit for funneling users into Google search and the rest of the company’s services, with practically all of them benefiting from some variety of machine learning, whether you’re thinking of Google Maps tips or YouTube video suggestions. What Assistant does for the mobile market is to enhance Google’s influence over its hardware partners: woe betide the manufacturer that tries to ship an Android phone in 2018 without either the Google Play Store or Assistant on board.
Apple’s Face ID
On the Apple side of the fence, machine learning is already permeating much of the software running on the iPhone, and the company’s Core ML tools are making it easy for developers to add to that library. But the big highlight feature of the new iPhone X, the thing everyone notices, is the notch at the top of its display and the technology contained within it. Up in that monobrow section, you’ll find a full array of infrared and light sensors, something akin to a miniaturized Microsoft Kinect system, which enables the new Face ID authentication method.
I remain uncertain about how well Face ID strikes the balance between security and convenience (especially without the fallback of Touch ID’s fingerprint recognition), but I have no doubt about the technical achievement that it represents. Everyone I know who has used Face ID gives a glowing assessment of its accuracy. The system is robust enough to work in the dark and, thanks to machine learning, it will adapt to changes in your appearance. If you strip away all the usual incremental upgrades and design tweaks, the Face ID system is the iPhone X’s defining new feature. And it’s reliant on ML to work its technological magic.
It may still be early for machine-learning enhancements to truly be the key selling point for mass-market phones. Face ID is of secondary importance to iPhone X purchasers more attracted by the new, bezel-phobic design. While Google’s camera is the best reason to own a Pixel, there still aren’t all that many Pixel owners out there. But the critical thing is that phone companies need to be working on their own ML solutions now in order to remain competitive when those things become essential and core to the user experience, as they threaten to do as early as next year. Chinese companies may iterate on hardware at ludicrous speed, but the rules change when the thing you’re trying to replicate is months or years of ML training.
Huawei’s AI chips and Samsung’s Bixby disaster
Outside of Apple and Google, Huawei has been the biggest proponent of implementing machine learning and AI in mobile devices. The company’s latest phone and processor are both marketed as having “the real AI” smarts. Huawei is moving in the right direction with this AI push, but unlike Apple and Google, both of which have turned ML into tangible, obvious, and (literally) user-facing features, Huawei’s approach digs into the far less marketable sphere of using ML to optimize Android performance over the course of long-term use. That’s a laudable effort, but it’s hard to imagine it being a true differentiator when people are comparing shiny new phones in a store. Huawei is also marketing a “camera AI” that tries to automatically enhance images by detecting what is being photographed, but I have yet to see it come anywhere near the effectiveness of Google’s Pixel camera.
Huawei’s example reminds us that machine learning itself is not the unique selling point; the unique selling points are and will be built on top of machine learning.
Machine learning will redraw the distinguishing line between real mobile innovators and fast copycats
Another salient example to illustrate that point is Samsung’s experience with its Bixby voice assistant. Bixby is what Google Assistant might have been if a company decided to rush it into production devices with inadequate planning, preparation, or time to accumulate a useful amount of data and machine learning knowhow. Unfortunately, we can probably expect a lot more Bixbys than anything else next year, as companies work to figure out how to best exploit the potential on offer from machine learning.
When you look at the iPhone X, you might be wowed by its gorgeous new OLED display. As pricey and exclusive as it may be, though, that panel is available to Samsung as well, not just Apple. Every new hardware tweak from Apple seems to be targeted at making manufacture of its devices trickier and more technical — such as the Taptic Engine for haptic feedback, the 3D Touch interaction on iPhone displays, and the Touch Bar on the newest MacBooks — but all of those are ultimately systems that can be reverse-engineered and replicated by others. In 2014, Apple invested heavily in its attempt to build its own manufacturing supply chain for sapphire crystal displays, which would have been a huge and unique advantage, but that effort fell through and the production company hired for it went bankrupt.
The old days of phone makers being able to secure a major hardware advantage for longer than a few months are now gone. At this late stage of the evolution of smartphones, machine learning is the only path toward securing meaningful differentiation. I still believe Google’s camera is widely underrated, mostly owing to Google’s chronic inability to distribute Pixel devices widely enough. And I also think Face ID will be copied, badly, by a whole slew of aspiring competitors. But the distinguishing line between the true mobile innovators and the fast copycats, which had until recently been blurring and fading, will become apparent again as phones move into the era of ML-assisted everything.
More Info: www.theverge.com