Privacy campaigners express fears over Apple’s plan to scan iPhones for child abuse images

Fury at Apple’s plan to scan iPhones for child abuse images and report ‘flagged’ owners to the police after a company employee has looked at their photos

  • New safety tools unveiled to protect young people and limit spread of material
  • The measures are initially only being rolled out in the US, the tech giant said
  • Apple plans for technology to soon be available in the UK and across the globe
  • But security experts branded the plan as ‘absolutely appalling’ and ‘regressive’

Data privacy campaigners are raging today over Apple’s ‘appalling’ plans to automatically scan iPhones and cloud storage for child abuse images, accusing the tech giant of opening a new back door to users’ personal data.

The new safety tools will also be used to look at photos sent by text message to protect children from ‘sexting’, automatically blurring images that Apple’s algorithms detect could be child sexual abuse material (CSAM).

The iPhone maker said the new detection tools have been designed to protect user privacy and do not allow the tech giant to see or scan a user’s photo album.  Instead, the system will look for matches, securely on the device, based on a database of ‘hashes’ – a type of digital fingerprint – of known CSAM images provided by child safety organisations. 
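Apple has not published the code behind this process, but the general idea of on-device fingerprint matching can be sketched in a few lines. The example below is a simplified illustration only: the fingerprint database, function names and hashing method are placeholders, and Apple’s real system relies on a perceptual hash rather than the ordinary file hash used here for brevity.

```python
import hashlib
from pathlib import Path

# Placeholder set of 'hashes' (digital fingerprints) of known CSAM images,
# standing in for the database supplied by child safety organisations.
# The values here are invented for illustration only.
KNOWN_FINGERPRINTS = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def fingerprint(image_path: Path) -> str:
    """Compute a fingerprint of an image file.

    For simplicity this uses SHA-256 over the raw bytes. A real perceptual
    hash is designed so that resized or re-encoded copies of the same
    picture still produce a matching fingerprint.
    """
    return hashlib.sha256(image_path.read_bytes()).hexdigest()


def matches_known_material(image_path: Path) -> bool:
    """Check, on the device, whether an image matches a known fingerprint."""
    return fingerprint(image_path) in KNOWN_FINGERPRINTS
```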

Child safety campaigners who have urged tech giants to do more to prevent the sharing of illegal images have welcomed the move – but there are major privacy concerns emerging about the policy.

There are concerns that the policy could be a gateway to snooping on iPhone users, and could also target parents innocently taking or sharing pictures of their children because ‘false positives’ are highly likely.

Others fear that totalitarian governments with poor human rights records could, for instance, harness it to convict people for being gay in countries where homosexuality is a crime.

While the measures are initially only being rolled out in the US, Apple plans for the technology to soon be available in the UK and other countries worldwide. 

Ross Anderson, professor of security engineering at Cambridge University, has branded the plan ‘absolutely appalling’. Alec Muffett, a security researcher and privacy campaigner who previously worked at Facebook and Deliveroo, described the proposal as a ‘huge and regressive step for individual privacy’. 

iPhones will send sexting warnings to parents if their children send or receive explicit images – and will automatically report child abuse images on devices to the authorities, Apple has announced

The key areas for concern about Apple’s new plan to scan iPhones for child abuse images

‘False positives’

The system will look for matches, securely on the device, based on a database of ‘hashes’ – a type of digital fingerprint – of known CSAM images provided by child safety organisations. 

These fingerprints do not require an exact match to known child abuse images, because paedophiles would only have to crop an image differently, rotate it or change its colours to avoid detection.

As a result, the technology used to stop child abuse images will be less rigid, making it more likely to flag perfectly innocent files.

In the worst cases, the police could be called in, disrupting the life or job of someone falsely accused, perhaps simply for sending a picture of their own child.
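That risk stems from how loose the matching is allowed to be. A common approach in perceptual hashing (not necessarily Apple’s, whose parameters are unpublished) is to compare compact hashes by how many bits differ and to treat anything ‘close enough’ as a match. The hypothetical sketch below shows how widening that tolerance makes matching survive cropping or recolouring, while also raising the odds that an unrelated, innocent photo lands within range.

```python
def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two 64-bit perceptual hashes."""
    return bin(a ^ b).count("1")


def is_flagged(image_hash: int, known_hashes: set[int], max_distance: int = 10) -> bool:
    """Treat an image as a match if its perceptual hash is 'close enough'
    to any known fingerprint.

    max_distance is an arbitrary illustrative threshold: raising it makes
    matching more robust to cropping, rotation or recolouring, but also
    makes it more likely that an innocent image falls within range.
    """
    return any(hamming_distance(image_hash, h) <= max_distance for h in known_hashes)
```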

Misuse by governments

There are major concerns that the digital loophole could be adapted by an authoritarian government to police other offences and infringe human rights.

For example, in countries where being gay is illegal, private photos of someone with a partner could be hoovered up and used against them in court, experts warn.

Expansion into texts

This new Apple policy will be looking at photos and videos.

But there are concerns that the technology could be used to allow companies to see normally encrypted messages sent via services such as iMessage or WhatsApp.

‘Collision attacks’

There are concerns that somebody could send a perfectly innocent-looking photograph to someone, knowing it will get them in trouble.

If the person, or government, has knowledge of the algorithm or ‘fingerprint’ being used, they could use it to fit someone up for a crime.

Backdoor through privacy laws

If the policy is rolled out worldwide, privacy campaigners fear that the tech giants will soon be allowed unfettered access to files via this backdoor.  

 

Mr Anderson said: ‘It is an absolutely appalling idea, because it is going to lead to distributed bulk surveillance of our phones and laptops.’ 

Campaigners fear the plan could easily be adapted to spot other material. 

A trio of new safety tools have been unveiled in a bid to protect young people and limit the spread of child sexual abuse material (CSAM), the tech giant said.   

The new Messages system will show a warning to a child when they are sent sexually explicit photos, blurring the image and reassuring them that it is OK if they do not want to view the image as well as presenting them with helpful resources.

Parents using linked family accounts will also be warned under the new plans. 

Furthermore, as an extra precaution, children will be informed that if they do choose to view the image, their parents will be sent a notification.

Similar protections will be in place if a child attempts to send a sexually explicit image, Apple said. 

Among the other features is new technology that will allow the company to detect known CSAM images stored in iCloud Photos and report them to law enforcement agencies.

It will be joined by new guidance in Siri and Search which will point users to helpful resources when they perform searches related to CSAM.

The iPhone maker said the new detection tools have been designed to protect user privacy and do not allow the tech giant to see or scan a user’s photo album. 

Instead, the system will look for matches, securely on the device, based on a database of ‘hashes’ – a type of digital fingerprint – of known CSAM images provided by child safety organisations. 

This matching will only take place when a user attempts to upload an image to their iCloud Photo Library. 

Apple said that only if a threshold of matches for harmful content is exceeded would it be able to manually review the content to confirm the match and send a report to safety organisations.
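Apple has not disclosed what that threshold is, but the gating it describes amounts to a simple count check, roughly sketched below with an invented, purely illustrative figure.

```python
# Illustrative only: Apple has not published the real threshold value.
MATCH_THRESHOLD = 30


def eligible_for_manual_review(matched_image_count: int) -> bool:
    """An account only becomes eligible for human review once the number
    of images matching known fingerprints exceeds the threshold; below
    that, Apple says it cannot inspect the matched content at all."""
    return matched_image_count > MATCH_THRESHOLD
```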

The new tools are set to be introduced later this year as part of the iOS and iPadOS 15 software update due in the autumn, and will initially be introduced in the US only, but with plans to expand further over time.

The company reiterated that the new CSAM detection tools would only apply to those using iCloud Photos and would not allow the firm or anyone else to scan the images on a user’s camera roll.

The announcement is the latest in a series of major updates from the iPhone maker geared at improving user safety, following a number of security updates early this year designed to cut down on third-party data collection and improve user privacy when they use an iPhone.
