When it comes to cloud photo storage, Google Photos leads the pack—four trillion photos and videos across more than a billion users. Millions of Apple users have Google Photos on their iPhones, iPads and Macs, but Apple has just flagged a serious warning about Google’s platform and given its users a reason to delete the app.
This has been a dreadful few weeks for Apple on the privacy front—not what the iPhone maker needs in the run-up to the launch of iPhone 13 and iOS 15. A week ago, the company awkwardly (albeit inevitably) backtracked on its ill-conceived plan to screen its users’ photos on their devices to weed out known child abuse imagery.
Screening for CSAM is not controversial. All the major cloud platforms—including Google Photos—have done so for years. “Child sexual abuse material has no place on our platforms,” Google told me. “As we’ve outlined, we utilize a range of industry standard scanning techniques including hash-matching technology and artificial intelligence to identify and remove CSAM that has been uploaded to our servers.”
But Apple, it transpires, has not been doing the same. The company has not yet applied any such screening to iCloud Photos, and its reasoning for this seemingly surprising decision once again highlights the different privacy philosophies at play.
Apple’s controversial (now stalled) decision to screen for CSAM on-device rather than in the cloud was, the company said, because it wanted to flag known imagery “while not learning any information about non-CSAM images.” In other words, all users should not have to surrender the privacy of all their content to flag a tiny minority.
The principle itself is sound enough. If your private iPhone doesn’t flag any potential CSAM matches, Apple’s servers can ignore all your content. If your iPhone does flag potential matches, at least 30 of them, then the server knows exactly where to look.
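To make that concrete, here’s a minimal sketch of the threshold idea, loosely modeled on Apple’s published design. Everything in it is hypothetical illustration: a plain SHA-256 stands in for Apple’s perceptual NeuralHash, and simple set membership stands in for its cryptographic safety vouchers.

```swift
import Foundation
import CryptoKit

// Hypothetical sketch of threshold matching. SHA-256 stands in for Apple's
// perceptual NeuralHash; plain set lookup stands in for safety vouchers.
struct ThresholdMatcher {
    let knownHashes: Set<Data>   // hashes of known flagged imagery (illustrative)
    let reviewThreshold = 30     // below this count, the server learns nothing

    // Count how many of the user's images match the known-hash database.
    func matchCount(in images: [Data]) -> Int {
        images.filter { knownHashes.contains(Data(SHA256.hash(data: $0))) }.count
    }

    // Only at 30 or more matches would anything be escalated for review.
    func shouldEscalate(_ images: [Data]) -> Bool {
        matchCount(in: images) >= reviewThreshold
    }
}
```

The point of Apple’s actual cryptography is that the server cannot even run this count for itself below the threshold; the sketch captures only the 30-match logic.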
The issue, though, is that despite detailed technical explanations and assurances, that concept of on-device screening didn’t land well. That “private iPhone” filtering simply came across as on-device spyware, raising the specter of scope creep, of ever more content being flagged at the behest of U.S. and overseas governments. And so, Apple has retreated to its drawing board for a rethink.
But turn that around the other way, and there’s an interesting conundrum for the rest of the industry. Apple has highlighted the privacy invasion inherent in searching across all your photos in the cloud. Screening that stopped at matching against CSAM databases would be welcome, but does it stop there? And what about the risks inherent in Apple’s technical detail, around false matches and manual reviews? Does that mean our cloud photos on other platforms are regularly flagged and reviewed by desks of manual operators?
Worse, the real issue that holed Apple’s CSAM plans below the waterline was the risk that governments would expand beyond known CSAM content, collated by child safety organizations, to other content: political or religious dissent, other crimes, persecuted minorities in parts of the world where Apple sells its devices.
Apple explained in great detail that it had technical protections in place to hamper this, promising it would always say no. It then said the system would launch only in the U.S. and would expand only to countries where those risks could be contained. But the agitated privacy lobby was not reassured, especially given Apple’s past challenges in “just saying no” in China—on iCloud storage locations and app censorship, for example.
Clearly, you don’t need to be a technical genius to work out that those same risks apply to cloud screening and are not limited to software on devices. Yes, the jurisdiction in which cloud data is stored varies, but big tech still needs to adhere to local laws, as is often made clear. And the defense that compliance is not technically possible, which is used to protect message encryption, for example, cannot apply when the platform can read the photos it stores.
And so, to Google Photos. There are three reasons why Apple users should delete the app. First, using Google Photos means giving the platform full access to your photos. It’s all or nothing. Apple has a relatively new privacy-preserving tool in its Photos app that limits the photos any app can access. But Google Photos won’t accept that, insisting that you change the setting to give it access to everything if you want to use the app.
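That setting is iOS 14’s “limited library” permission. For illustration, here’s a minimal sketch of how an app can honor it instead of demanding everything; the helper function and view-controller wiring are assumptions, not Google’s code.

```swift
import UIKit
import Photos
import PhotosUI

// Request photo access and honor iOS 14's "limited" selection rather than
// demanding the full library. (requestPhotoAccess is a hypothetical helper.)
func requestPhotoAccess(from viewController: UIViewController) {
    PHPhotoLibrary.requestAuthorization(for: .readWrite) { status in
        DispatchQueue.main.async {
            switch status {
            case .limited:
                // The app sees only the photos the user picked; offer the
                // system picker so they can adjust that selection.
                PHPhotoLibrary.shared().presentLimitedLibraryPicker(from: viewController)
            case .authorized:
                break // Full library access granted.
            default:
                break // Denied or restricted; degrade gracefully.
            }
        }
    }
}
```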
Second, the privacy label for Google Photos is a horror show compared to Apple’s alternative. Just as with its other apps, Google (like Facebook) collects what it can, excusing this by saying it only uses data when needed. But the issue is that Google links all this data to your identity, adding to the vast profiles associated with your Google account or other personal identifiers. Google isn’t doing this as a service; it’s the core of its data-driven advertising business model. Just follow the money.
Google says these labels “show all possible data that could be collected, but the actual data depends on the specific features you decide to use… We’ll collect contact info if you want to share your photos and videos… or if you decide to purchase a photo book, we’ll collect your payment information and store your purchase history. But this data wouldn’t be collected if you chose not to share photos or make a purchase.”
Google—like Facebook—will also harvest metadata from your photos and pull it into its algorithmically driven money machine. “We do use EXIF location data to improve users’ experience in the app,” the company told me. “For example to surface a trip in our Memories feature or suggest a photo book from a recent trip.”
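For a sense of how accessible that metadata is, here’s a minimal sketch that reads a photo’s GPS dictionary using Apple’s ImageIO framework; the helper name and file path are illustrative.

```swift
import Foundation
import ImageIO

// Read the EXIF/GPS dictionary from an image file: the kind of location
// metadata a photos service can lift on upload. (Illustrative helper.)
func gpsMetadata(forImageAt url: URL) -> [String: Any]? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil)
              as? [String: Any] else { return nil }
    // The GPS dictionary carries latitude, longitude, timestamp and more.
    return properties[kCGImagePropertyGPSDictionary as String] as? [String: Any]
}

// Usage (example path):
// let gps = gpsMetadata(forImageAt: URL(fileURLWithPath: "/path/to/photo.jpg"))
// print(gps?["Latitude"] ?? "no location data")
```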
Clearly, we can each take a view on the personal data we’re comfortable having pulled into Google’s datasets to be mined and analyzed, and Google now offers more controls than ever before to restrict what is shared. But limiting Google’s access also limits its functionality. It’s that core philosophy at play.
“Your photo and video albums are full of precious moments,” Apple counters. “Apple devices are designed to give you control over those memories.” And at the core of this assurance, we have the same device-versus-cloud debate that framed the CSAM controversy that hit Apple last month.
Which leads to the third issue. We already know that Google applies cloud AI to the photos it stores. Behind Apple’s CSAM move was its well-established approach to analyzing your device data. Apple uses on-device ML to categorize photos, for example, enabling smart searching for objects or people. Google does this in the cloud. And where Apple’s CSAM issue was linking this on-device ML to external processing, Google’s cloud ML is already external, off-device, a relative black box to users.
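As a flavor of what on-device analysis looks like, here’s a minimal sketch using Apple’s Vision framework, whose VNClassifyImageRequest runs its classifier entirely locally; the helper is illustrative, not how Apple Photos is actually built.

```swift
import Foundation
import Vision

// On-device scene classification: the model runs locally and no image
// leaves the device. (classifyImage is an illustrative helper.)
func classifyImage(at url: URL) throws -> [String] {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(url: url)
    try handler.perform([request])
    // Keep reasonably confident labels, e.g. "beach", "dog", "sunset".
    return (request.results ?? [])
        .filter { $0.confidence > 0.5 }
        .map(\.identifier)
}
```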
When Apple says its Photos platform “is designed so that the face recognition and scene and object detection—which power features like For You, Memories, Sharing Suggestions and the People album—happen on device instead of in the cloud… And when apps request access to your photos, you can share just the images you want—not your entire library,” we know exactly who it has in mind.
On its approach to CSAM in Google Photos, the company told me that “we work closely with the National Center for Missing and Exploited Children and other agencies around the world to combat this kind of abuse.”
But Google wouldn’t be drawn on my other questions: the privacy protections in Google Photos, the limitations and restrictions on its screening, its policy on government requests (foreign or domestic), and whether it had been asked to expand the scope of that screening. Instead, it pointed me to its general advertising policies on content (not metadata, you’ll note) and its transparency report.
Google also didn’t comment on the other AI classifiers it applies to Google Photos, how that data is harvested and used, or whether it intends to revise anything in light of the Apple backlash. There is no implication that Google is doing anything more than the obvious—but that’s the thing about the cloud: it’s really just someone else’s computer.
Just as when we exposed Facebook’s harvesting of EXIF data without any user transparency, the issue is digging beneath general terms and conditions to understand what they actually mean for you. And when the analysis takes place off-device, it is entirely invisible to you unless the platform chooses to share what it finds. That was, in essence, Apple’s point on CSAM.
Is there a risk here? Yes, of course. Apple has told you as much. We know that Google adopts a far less privacy-preserving architecture than Apple across the board in any case. And so, you should engage with its apps and platforms eyes wide open.
Meanwhile, if you’ve spent $1000+ on your iPhone, my recommendation is to make use of the privacy measures it has in place. And that means skipping Google Photos despite the advanced search features it may have. As ever, convenience comes at a price; absent full transparency and controls, that price remains too heavy to pay.