Apple’s presentation of its new child-safety features – most controversially the scanning of stored photos for child-abuse imagery – was clumsy to say the least, and the response was distinctly mixed. In recent days, however, Apple has tried to better explain its decision to the American media.
As part of this charm offensive, Craig Federighi gave a long video interview to the Wall Street Journal. The interview focused on two new functions: a scanning function for iCloud Photos and a new child-protection function in Messages.
Federighi admits, “in hindsight”, that introducing both functions at the same time “was a recipe for this kind of confusion”, and attempts both to clear up some misconceptions around Apple’s plans and to defend the policies against the more widespread criticisms.
What and where does Apple scan?
In the interview, Federighi vehemently denies the blanket claim that “Apple is scanning my phone for images”. According to the software VP, the detection of illegal image files is limited specifically to photos that are uploaded to iCloud – even if, as we will explain, some of the scanning does technically take place on the device.
Such checks have long been common practice at other cloud storage services, Federighi points out. But those rival services routinely scan every one of their users’ photos, something Apple wanted to avoid in order to protect user privacy.
“It’s very important, we felt,” Federighi says, “before we could offer this kind of capability, to do it in a way much, much more private than anything that’s been done in this area before.”
Only if you use iCloud Photo Library, which is optional, will the images be checked when uploading them to Apple’s servers.
When an image is uploaded, a so-called NeuralHash of the image is generated – a kind of digital fingerprint. This is compared against a database of known illegal images maintained by the National Center for Missing & Exploited Children (NCMEC). Since only known child-abuse imagery is used as the benchmark for the check, a picture of your own child in the bathtub will not be flagged, nor will pornographic content in general.
If an image matches one listed in the database, a so-called “safety voucher” is automatically created and attached to the image. Only if a threshold number of safety vouchers is exceeded – around 30, a figure Federighi gives here for the first time – will Apple be alerted about the account. And even then, Apple can only access the flagged image files, not the others.
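For readers who want to picture the flow Federighi is describing, here is a rough, hypothetical sketch in Swift. Everything in it – the UploadScanner class, the SafetyVoucher type, the plain SHA-256 fingerprint standing in for the real perceptual NeuralHash and its blinded comparison – is illustrative rather than Apple’s code; only the voucher-and-threshold idea and the figure of roughly 30 matches come from the interview.

```swift
import Foundation
import CryptoKit

// Hypothetical, heavily simplified sketch of the flow described above.
// The real NeuralHash is a perceptual hash produced by a neural network,
// and the comparison against the NCMEC list uses cryptographic blinding;
// a plain SHA-256 lookup is used here only to illustrate the
// voucher-and-threshold idea, not Apple's implementation.

struct SafetyVoucher {
    let imageID: UUID
    let matchedFingerprint: String
}

final class UploadScanner {
    // Stand-in for the database of known fingerprints (assumption: in the
    // real system this list is blinded and not readable on the device).
    private let knownFingerprints: Set<String>
    private let reportThreshold = 30          // the figure Federighi cites
    private var vouchers: [SafetyVoucher] = []

    init(knownFingerprints: Set<String>) {
        self.knownFingerprints = knownFingerprints
    }

    /// Runs only when a photo is on its way to iCloud Photo Library.
    func scanBeforeUpload(imageData: Data, imageID: UUID) {
        // "Fingerprint" the image. (The real system hashes image content
        // perceptually, so minor edits still match; SHA-256 does not.)
        let digest = SHA256.hash(data: imageData)
        let fingerprint = digest.map { String(format: "%02x", $0) }.joined()

        // A voucher is produced only on a match against *known* imagery;
        // a novel photo – a child in the bathtub, say – never matches.
        guard knownFingerprints.contains(fingerprint) else { return }
        vouchers.append(SafetyVoucher(imageID: imageID, matchedFingerprint: fingerprint))

        // Nothing is surfaced for human review until the threshold is crossed.
        if vouchers.count == reportThreshold {
            notifyReviewTeam(vouchers)
        }
    }

    private func notifyReviewTeam(_ flagged: [SafetyVoucher]) {
        // Only the flagged images would become reviewable; this stub just logs.
        print("Threshold reached: \(flagged.count) matching images flagged for review.")
    }
}
```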
According to Federighi, an algorithm carries out part of the analysis on the device, and then a second part of the analysis is carried out on the server. However, as Joanna Stern rightly points out, it is the part of the analysis that happens on the device that has irritated many. This gives the impression that “Apple can do things on your device,” she explains.
According to Federighi, however, this is a misunderstanding. The check only takes place, he says, when a photo is uploaded to the cloud. It is not a process that runs across all of the image files on the phone – such as photos received in iMessage or Telegram.
Federighi also firmly denies that this is a backdoor. It is not the case, he says, that the data on the server can be checked without restriction, and there is only a single database, used for all iPhones whether in the US, Europe or China.
It is not even necessary to trust Apple, Federighi says, because the process can be verified by several independent parties. He claims that Apple deliberately designed the system so that no foreign authority could force changes to it (for example, to make it search for other kinds of data).
Nudity in Messages
A second new function Apple announced at the same time also relates to child safety, but it is not the same as the photo-scanning function, and may have created some additional confusion.
The new feature affects Messages and is intended to monitor and filter images received by children. If the feature is turned on by a parent and the child receives a nude photo, the picture will be hidden and the child warned. The parents can also be informed.
This is intended to protect children from contact with sexual predators and is entirely under parental control. An image-recognition system running on the iPhone itself is able to recognise nudity.
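Purely as an illustration, the hypothetical Swift sketch below shows the kind of decision logic the feature implies: nothing happens unless a parent opts in, the nudity check runs on the device, and notifying parents is a separate switch. The names and the stand-in classifier closure are assumptions, not Apple’s API.

```swift
import Foundation

// Hypothetical sketch of the decision logic the Messages feature implies.
// The classifier itself is Apple's own on-device model; it is represented
// here by a stand-in closure, and all names are illustrative.

struct ChildSafetySettings {
    var filterIncomingImages = false   // only a parent can switch this on
    var notifyParents = false          // optional, separate parental choice
}

enum IncomingImageDecision {
    case showNormally
    case blurWithWarning(notifyParents: Bool)
}

func handleIncomingImage(_ image: Data,
                         settings: ChildSafetySettings,
                         looksLikeNudity: (Data) -> Bool) -> IncomingImageDecision {
    // The feature does nothing unless a parent has opted the child in.
    guard settings.filterIncomingImages else { return .showNormally }

    // Classification happens entirely on the device; the photo is not
    // sent anywhere for this check.
    if looksLikeNudity(image) {
        return .blurWithWarning(notifyParents: settings.notifyParents)
    }
    return .showNormally
}

// Example: filtering enabled, parental notification left off.
let settings = ChildSafetySettings(filterIncomingImages: true, notifyParents: false)
let decision = handleIncomingImage(Data(),                            // placeholder image data
                                   settings: settings,
                                   looksLikeNudity: { _ in true })    // stub classifier
```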
Who really owns an iPhone?
Finally, however, Stern raises once more the question of whether iPhone owners really still own their own devices.
According to Federighi this is still the case: the phone belongs to its owner, and a check is only made on photos that are being stored in the cloud. The system, he argues, therefore offers the best possible privacy protection.
Apple has clearly made an effort to strike a balance between privacy and child protection. But the company should not be surprised by the criticism from privacy advocates.
It would appear that too little thought was given, before the announcement, to the impression this complex procedure would make on customers, who will probably ask a simple question: “Is Apple scanning my photos or not?” And despite the elaborate verification processes and vouchers, in the end the answer is yes, it does.
This article originally appeared on Macwelt. Translation by David Price.