Twelve days and counting.
It’s hard to believe, given how long we’ve been talking about this topic, but Google’s much anticipated Pixel 6 phone and its plus-sized sibling, the Pixel 6 Pro, are finally set to make themselves official. The new devices will emerge from Google’s metaphorical womb a week from Tuesday — on October 19th, as Proud Mama Googleschmoop announced this week.
We’ve been thinking about the Pixel 6 phones for so long now that it almost feels like we know everything about ’em. But the phones’ most significant advancement may be something we haven’t yet encountered — and may never actually see, at least in any literal sense. In fact, the full scope and value of its presence might not become clear for some time yet, even after the Pixel 6 makes its way into our curiously moist person-paws.
Let me do a real quick rewind, and I’ll explain what I mean.
The Pixel 6 processor story
Back when we first heard rumblings about the idea of Google building its own custom processor for the Pixel 6 — way back in the year 2020, approximately 47 years ago — we talked about how having a Google-made chip inside the phone could matter for us regular ol’ Android-adorin’ animals in a few key ways.
One is that it’d give Google total power to decide how long the processor — and thus the phone around it — is supported. While nothing’s official on that front just yet, leaks suggest the Pixel 6 could receive a whopping five years of operating system updates, which would certainly bear that point out.
Another is that having a self-created processor could grant Google a newfound ability to cut back on costs and potentially pass those savings on to us. Plain and simple, avoiding the markup associated with paying someone else for a part can lead to significant savings in a phone’s core cost — especially when said component is the nucleus-like processor that powers the entire schlebang (and isn’t exactly inexpensive). Again, we don’t have any official answers on this one yet, but recent leaks make it look like the Pixel 6 could be almost shockingly affordable.
And then there’s the third factor, and that’s the one I want to focus on today: Building the Pixel 6’s processor itself gives Google complete control over what, exactly, is included in the phone’s virtual brain. Processors may seem like things of concern mostly for engineers, supernerds (hiya!), and people named Ned, but the truth is that the chip within a device determines an awful lot about what sorts of functions the gadget can and can’t support.
As I put it back in November — with the emphasis being freshly added now:
Processors are what provide foundational support for standards like 5G, for instance (meaningless as that specific example may be at the moment). They provide the framework for tons of different camera functions, biometric authentication systems, fast-charging capabilities, and even artificial-intelligence-related features to operate.
As it stands now, Google is dependent upon companies like Qualcomm for giving it that framework and determining much of what it can do with its Pixel products. And for a company that’s laser-focused on areas like machine learning and an always-listening Assistant service, that creates some serious limitations with the types of experiences it’s able to provide.
And that brings us to today — and some eeeeeeeeeenteresting new context I want to share about what Google’s main motivation with its self-made Pixel processor might actually be and why it could ultimately be so important for the future of the company’s computing vision.
Pixel 6 and the Google A.I. advantage
Before we dive in too deep, we need to make one thing clear — an oft-overlooked reality: The Pixel 6 doesn’t mark the first time Google’s created its own custom processor. The company’s been doing it for years, in fact, just mostly within computers in its own internal data centers up ’til now.
Now, stay with me here, ’cause this is gonna get a little techie for a minute. But I promise it’s leading us somewhere that’s super-relevant for everyone (even those of us not named Ned).
So here we go: Back in 2017, right as the Pixel program was getting going, a team of Google researchers released a sprawling study analyzing the performance of the company’s custom processors in those data center devices. The paper refers to the chips as Tensor Processing Units — a name Google coined to describe the processors’ primary purpose of boosting how much machine learning and artificial intelligence processing the associated systems could perform. Notably, it’s a name that also carries over into the Pixel 6 processor, which is officially branded as the Google Tensor chip.
The study compared the performance of Google’s then-internal-use-only Tensor Processing Units with other third-party processor options on A.I.-related tasks.
And — well:
The TPU is on average about 15X – 30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X – 80X higher. Moreover, using the GPU’s GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU.
Gooblede-gobblede, porgas schmorgas borgas — right? I know. But allow me to translate this into normal-human speak:
Google’s homemade Tensor processor was a hell of a lot faster at those machine learning tasks than the best-available third-party alternatives — by an almost ridiculously high margin. And it was dramatically more power-efficient to boot: TOPS per watt is basically a measure of how many trillions of operations a chip can crank through for every watt of power it draws, so a higher number means more A.I. muscle for every bit of energy consumed.
For extra perspective, I chatted with an industry veteran who spent decades working in the semiconductor field and has close professional connections to the authors of the paper. (He opted to remain unnamed, since he has no direct involvement with this specific research or the work on the current Google processor.)
His take?
“Those are scary good numbers. I believe that because of this, a lot of [the] little delays or failures you see when you try to talk to your phone or do translations or do anything that requires Google to do its computing in the cloud will suddenly blaze — because the phone will not have to ship a lot of the computation off the phone to the mothership.”
And what’s more: “There is no way the standard processors offered by Qualcomm or whoever will come within an order of magnitude of this kind of performance and power consumption for a long time to come.”
Yuuuuuup.
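To make that "keep the computation on the phone" idea a bit more concrete, here's a minimal, purely illustrative Kotlin sketch of how an Android app can run a machine learning model entirely on-device with TensorFlow Lite, using a hardware delegate so the math lands on whatever A.I. silicon the phone's chip exposes instead of on a faraway server. The model file name, tensor shapes, and function names here are placeholder assumptions for the sake of example, not anything Google has published about Tensor itself.

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Hypothetical helper: memory-map a .tflite model bundled in the app's assets.
fun loadModel(context: Context, assetName: String): MappedByteBuffer =
    context.assets.openFd(assetName).use { fd ->
        FileInputStream(fd.fileDescriptor).channel.use { channel ->
            channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
        }
    }

// Run a model entirely on the device -- no round trip to the "mothership" required.
fun runOnDevice(context: Context, input: FloatArray): FloatArray {
    // The NNAPI delegate hands the heavy math to whatever A.I. accelerator
    // the phone's chip provides (DSP, GPU, or dedicated neural hardware).
    val delegate = NnApiDelegate()
    val options = Interpreter.Options().addDelegate(delegate)

    // "model.tflite" is a placeholder name for any on-device model file.
    val interpreter = Interpreter(loadModel(context, "model.tflite"), options)
    val output = Array(1) { FloatArray(10) }   // assumes a model with 10 outputs
    interpreter.run(arrayOf(input), output)    // inference happens locally
    interpreter.close()
    delegate.close()
    return output[0]
}
```

The better the chip is at that kind of math, the more of this work can stay local instead of being shipped off to the cloud — which is exactly the trade-off the expert above is describing.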
The bigger Pixel 6 picture
To finish filling in this picture, let’s zoom back even further for a second: For years now, we’ve been talking about how we’ve entered what I like to call the Post-OS Era — a time when which virtual assistant you’re using is more consequential to a company like Google than which operating system or type of device you prefer. Assistant, ultimately, is more meaningful to the future of Google’s core business than any other variable. And so getting people as invested as possible in Assistant’s ecosystem and in the habit of using it as often as possible is arguably Google’s biggest motivator at this point and the underlying thread that connects most of its moves together.
As that 2017 study illustrates, the main variable holding Google’s A.I.-oriented efforts back from their full potential has been processing power: Even the best commercial processors aren’t optimized for the kinds of computing Google wants to do. The Pixel 6’s self-made processor is the key to changing that — or at least the first step on what’ll likely be a long, multifaceted journey.
Google itself has alluded to as much. In its initial public acknowledgment of the Pixel 6 Tensor chip earlier this year, Google hardware chief Rick Osterloh put it thusly:
A.I. is the future of our innovation work, but the problem is we’ve run into computing limitations that prevented us from fully pursuing our mission. So we set about building a technology platform built for mobile that enabled us to bring our most innovative A.I. and machine learning to our Pixel users. …
Tensor was built for how people use their phones today and how people will use them in the future. As more and more features are powered by A.I. and ML, it’s not simply about adding more computing resources. It’s about using that ML to unlock specific experiences for our Pixel users.
Now, how exactly that’ll manifest itself in real-world terms is something we’ll have to wait to see — and something that may not come fully into focus for some time yet. So far, Google has hinted at improvements with on-device photo processing, speech recognition, and on-the-fly language translation. But odds are, that’s only the tip of the iceberg.
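For a taste of what "on-the-fly language translation" without a network connection already looks like from a developer's chair, here's a minimal sketch using Google's ML Kit on-device Translation API. The English-to-Spanish pairing and the function name are just illustrative assumptions; the point is that the model lives on the phone, and a Tensor-style chip's job is to make exactly this kind of local A.I. work faster and more battery-friendly.

```kotlin
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Translate a phrase on the phone itself, with no per-request trip to Google's servers.
fun translateOnDevice(text: String, onResult: (String) -> Unit) {
    // The language pair here is just an example.
    val translator = Translation.getClient(
        TranslatorOptions.Builder()
            .setSourceLanguage(TranslateLanguage.ENGLISH)
            .setTargetLanguage(TranslateLanguage.SPANISH)
            .build()
    )
    // Download the compact language model once; after that, every translation
    // runs locally -- even in airplane mode.
    translator.downloadModelIfNeeded()
        .addOnSuccessListener {
            translator.translate(text)
                .addOnSuccessListener { translated -> onResult(translated) }
        }
}
```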
As the semiconductor expert I spoke with put it:
“You often think of all of the publicity about processors as essentially irrelevant inside baseball. This is not that. This is a very meaningful change, if what I’m saying is correct — and I’m confident it is.”
The bottom line is this: The Pixel 6 is bound to draw lots of attention for its unusual new design, its newly enhanced camera capabilities, and other surface-level qualities. And by all means, all of that stuff matters.
But in the bigger picture, the part of the latest Pixel that matters most may be something we can’t truly see — something whose full impact won’t become apparent for months or maybe even years to come.
Now let’s see if Google can accomplish what it’s rarely been able to do and make the phone-buying masses aware of its advancements and what practical advantages they’ll offer in our day-to-day lives.