‘Committed to protecting customer privacy’ – that is what Apple has constantly repeated, and most of the time it has done a good job proving its commitment. The company has had its slips, though, and the most recent controversy may call its privacy stance into question more than any before it.
Apple’s announced plan to release a new photo scanning feature in Messages to flag child sexual abuse material (CSAM) has generated a massive public outcry.
The outcry recently went through the roof when an influential group of security experts revealed that the EU had already planned to use this technology before Apple proposed implementing it in its own systems.
The EU’s proposal regarding device photo scanning is likely to be approved this year. Worrying about the possible consequences of this move, should it happen, doesn’t even begin to cover it.
Let’s delve into Apple’s and the EU’s proposals and what they could mean for your privacy.
Apple’s Approach on Preventing Child Abuse
In August 2021, Apple announced that it would roll out a new feature to scan photos on iPhones and iPads for child sexual abuse images. The company stated that its purpose was to protect children and limit their exposure to this kind of abuse.
As explained by Apple, the photo scanning feature requires an opt-in and is supposed to work in the following way:
- Each time a user under 13 years old sends or receives photos via Messages, an algorithm scans those photos.
- If the algorithm flags a photo as containing “sexually explicit” material, the user can choose not to receive or send it. If the user decides to accept or send the photo anyway, Apple sends a notification to the parent account on the Family Sharing plan.
Additionally, the algorithm looks for images that match the ones law enforcement uses to find and track child abuse material on the internet.
Not long after Apple’s announcement, privacy advocates and security researchers publicly expressed concerns about the technology’s potentially dangerous consequences.
Apple then tried to calm the public’s vocal fears by providing additional explanations of how the scanning feature works. The company’s representatives noted that the feature would only raise the alarm if an iCloud account contained at least 30 flagged photos, and said Apple would be willing to adjust that threshold after the feature goes live.
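The flagging logic Apple described can be sketched in a few lines. This is a highly simplified illustration, not Apple’s actual system: the real design relies on a perceptual hash (NeuralHash) and cryptographic matching techniques, and the function names and threshold handling below are assumptions for demonstration only.

```python
# Simplified sketch: count how many of an account's photo hashes match a
# known database, and flag the account only past a threshold (Apple cited 30).

FLAG_THRESHOLD = 30  # hypothetical constant mirroring Apple's stated number

def count_matches(photo_hashes, known_csam_hashes):
    """Count how many of the account's photo hashes appear in the known set."""
    return sum(1 for h in photo_hashes if h in known_csam_hashes)

def should_flag(photo_hashes, known_csam_hashes, threshold=FLAG_THRESHOLD):
    """Flag the account only when the match count reaches the threshold."""
    return count_matches(photo_hashes, known_csam_hashes) >= threshold
```

The point of the threshold is to reduce false positives: a single accidental hash collision would not trigger a report, only a sustained pattern of matches would.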
Apple didn’t succeed in tempering privacy advocates’ doubts and unease. Around 90 policy groups signed an open letter. The Electronic Frontier Foundation (EFF) gathered 25,000 customer signatures. Even concerned Apple employees demanded that the company abandon its plan completely.
As a result, the company announced in September that it would temporarily postpone the release and work on improving the feature.
Apple has insisted more than once that it will never let authoritarian governments force its employees to create backdoors that would undermine its prized privacy features. It backed up this claim in 2016 during the San Bernardino case, when it refused to help the FBI unlock a shooter’s iPhone.
Since then, however, Apple has taken steps back and lost ground on some of its privacy commitments. It would have to offer a very strong guarantee that it won’t create a backdoor, even a well-intentioned and thoroughly designed one.
The Dangerous Implications of Photo Scanning
Despite the opt-in and the limitations meant to protect Apple users, scanning iCloud photos for illegal activity is a potential invitation to unwanted monitoring. The feature could backfire or even produce the opposite of its intended effect.
Some of the most crucial privacy risks activists mentioned include:
- Apple could change the feature’s settings over time and enable spying on users.
- It creates a dangerous precedent that could lead to government surveillance.
- Apple could build a similar program that scans not just for images portraying child abuse, but also for signs of organized crime or terrorist plots.
Governments Planned to Enforce Photo Scanning before Apple
Coincidence or not, Apple’s big photo scanning plan doesn’t seem to be an original idea. The European Union came up with it before Apple did.
A group of security researchers discovered the EU’s proposal for a similar program. They published a 46-page study arguing that both Apple’s and the European Union’s photo scanning schemes are “dangerous technology”.
Even if the EU’s proposal is separate from Apple’s approach, one can only imagine what could happen if the two were combined. The EU could easily pass a law requiring Apple to widen the scope of its photo scanning.
As the researchers and privacy advocates declared:
It should be a national-security priority to resist attempts to spy on and influence law-abiding citizens […] It’s allowing scanning of a personal private device without any probable cause for anything illegitimate being done […] It’s extraordinarily dangerous. It’s dangerous for business, national security, for public safety and for privacy.
Don’t Give Up Your Privacy Just Yet
Whether you’re an Apple user or not, this news is disappointing, to say the least.
We seem to lose more control of our digital privacy each day. While it all sounds incredibly gloomy, no one should give up on staying as far away as possible from potentially dangerous tools, features, or apps that invade their right to privacy.
Not everything is lost, especially if you stay alert and up to date on everything that happens in the online privacy and security scene. One particularly important thing you can do is educate your kids about how the internet works and its potential risks.
And you can always do your part in holding governments and companies accountable for their promises, and ask for explanations when they break them.
Do you believe Apple’s photo scanning feature and similar technologies can protect your privacy?
Let me know in the comments section below.