From Amazon to Google, Microsoft and IBM, the world’s biggest tech companies have pressed pause on facial recognition technology, and experts say Australia should follow suit.
The calls follow revelations earlier this year that the Australian Federal Police trialled a controversial facial recognition tool by Clearview AI, despite having previously denied doing so.
On Monday, the ABC published details of internal AFP emails obtained using freedom of information laws that show AFP officers joking about using the technology while the public was kept in the dark.
“Maybe someone should tell the media that we are using it!” one said.
“Or should we stop using it since everyone is raising the issue of approval,” said another, with a smiley face emoji.
Clearview AI is currently under federal investigation, with the Office of the Australian Information Commissioner and its UK counterpart last week announcing they would work together to probe the firm’s use of data and biometrics scraped from the internet.
The firm was founded by Australian tech entrepreneur Hoan Ton-That, and has faced international criticism for its service, which matches photos to images uploaded to the internet.
In January, The New York Times published an article about the firm titled ‘The secretive company that might end privacy as we know it’.
“You take a picture of a person, upload it and get to see public photos of that person along with links to where those photos appeared,” the NYT wrote.
“The system – whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites – goes far beyond anything ever constructed by the US government or Silicon Valley giants.”
In Australia, a growing chorus of concerned researchers and privacy advocates is calling on the government to put a moratorium on the use of facial recognition technology, with many citing Clearview AI as an example of why it is needed.
“Clearview is quite worrisome, because it has been hoovering up pictures of people across the whole internet, and then it runs an algorithm across them to say which of these look like each other,” Digital Rights Watch director Tom Sulston explained.
“That’s worrying because people have not put their photographs onto Facebook or Instagram or wherever so that they can be harvested by a shady company, and then sold to law enforcement agencies.”
However, Mr Sulston emphasised that not all facial recognition technology is the same.
“Some are scary and horrible and others are quite benign,” he said.
“We’re not talking about things like your iPhone recognising you, or going through passport control, because those are all kind of controlled, constrained environments.
“With your iPhone, the data that constitutes what your face looks like is stored on your phone, and it’s encrypted. So only your phone can get to it – Apple can’t get it, the police can’t get it, it’s completely yours and it’s a pretty secure piece of technology.
“So we’re not talking about that.
“We’re really talking about the network and the control of people’s data without their consent in the broad sense … where you may not want your face to go into a database that’s stored for perpetuity.”
Consent and privacy fears
Facial recognition technology exposes “a gap in our legal and moral standards across the world” and raises huge issues around consent and privacy, Mr Sulston said.
“What we’re now seeing is that facial recognition technology is changing our understanding of what it is to be in public,” he said.
“When people put things onto the internet to share with family and friends they’re not thinking, ‘Well, here’s a picture that’s going to be hoovered up into a database, and used to identify me or used to identify someone who looks like me in the future, forever, with no sense of control over that or being able to have a meaningful discussion of what that means.’”
The nature of the internet also allows companies such as Clearview AI to “jurisdiction shop”, Mr Sulston said.
“They’re not hosted in Australia for very good reasons. We have privacy principles,” he said.
The world has now reached a “horrible point” where “unethical companies” can “exploit” internet users’ data and sell it to various actors without consent, Mr Sulston said.
The speed with which facial recognition technology is advancing shows why a moratorium, or ban, is urgently needed, he said.
“We don’t have a good moral legal framework that reflects our need for privacy and consent,” Mr Sulston said.
“The technology is going way faster than our legal protections are able to, and definitely faster than our politicians.
“So we just need to get off the train.”