
‘Nerd’, ‘non-smoker’, ‘wrongdoer’: How might artificial intelligence label you?

The exhibition unveiled in Milan shows how AI systems have been trained to “see” and categorise the world.

Photo: Fondazione Prada

When Tabong Kima checked his Twitter feed early on Wednesday morning, the hashtag of the moment was #ImageNetRoulette.

Everyone, it seemed, was uploading selfies to a website where some sort of artificial intelligence analysed each face and described what it saw.

The site, ImageNet Roulette, pegged one man as an “orphan”.

Another was a “non-smoker”.

A third, wearing glasses, was a “swot, grind, nerd, wonk, dweeb”.

Across Kima’s Twitter feed, these labels – some accurate, some strange, some wildly off base – were played for laughs. So he joined in.

But Kima, a 24-year-old African-American, did not like what he saw. When he uploaded his smiling photo, the site tagged him as a “wrongdoer” and an “offender”.

“I might have a bad sense of humour,” he tweeted, “but I don’t think this is particularly funny”.

As it turned out, his response was just what the site was aiming for.

ImageNet Roulette is a digital art project intended to shine a light on the quirky, unsound and offensive behaviour that can creep into the artificial-intelligence technologies that are rapidly changing our everyday lives, including the facial-recognition services used by internet companies, police departments and other government agencies.

Facial recognition and other AI technologies learn their skills by analysing vast amounts of digital data.

Drawn from old websites and academic projects, this data often contains subtle biases and other flaws that have gone unnoticed for years.

ImageNet Roulette, designed by American artist Trevor Paglen and Microsoft researcher Kate Crawford, aims to show the depth of this problem.

“We want to show how layers of bias and racism and misogyny move from one system to the next,” Paglen said in a phone interview from Paris.

“The point is to let people see the work that is being done behind the scenes, to see how we are being processed and categorised all the time.”

Unveiled this week as part of an exhibition at the Fondazione Prada museum in Milan, the site focuses attention on a massive database of photos called ImageNet.

First compiled more than a decade ago by researchers at Stanford University in Silicon Valley, California, ImageNet played a vital role in the rise of “deep learning”, the mathematical technique that allows machines to recognise images, including faces.

Packed with more than 14 million photos pulled from all over the internet, ImageNet was a way of training AI systems and judging their accuracy.

By analysing various kinds of images, such as flowers, dogs and cars, these systems learned to identify them.
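To make that concrete, here is a minimal sketch in Python of how a system trained on ImageNet is typically used, written against the open-source torchvision library. The photo filename is a hypothetical placeholder, and this illustrates ImageNet-style classification in general, not the artists’ tool.

```python
# A minimal sketch of ImageNet-style classification with an off-the-shelf
# pretrained network (torchvision's ResNet-50). "photo.jpg" is a
# hypothetical placeholder file; any ordinary image would do.
import torch
from torchvision import models
from PIL import Image

# Load a network whose weights were learned from the ImageNet photos,
# along with the preprocessing it expects (resize, crop, normalise).
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    scores = model(batch)
probabilities = torch.nn.functional.softmax(scores[0], dim=0)

# The model can only ever answer with one of the labels that people
# wrote into the dataset; here we print the highest-scoring one.
top_prob, top_class = probabilities.max(dim=0)
print(weights.meta["categories"][top_class], float(top_prob))
```

Whatever photo goes in, the answer comes out of a fixed menu of labels written by people, which is exactly the point Paglen and Crawford are making.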

What was rarely discussed, even among people knowledgeable about AI, was that ImageNet also contained photos of thousands of people, each sorted into categories of their own.

This included straightforward tags like “cheerleaders”, “welders” and “Boy Scouts” as well as highly charged labels like “failure, loser, non-starter, unsuccessful person” and “slattern, slut, slovenly woman, trollop”.

By creating a project that applies such labels, whether seemingly innocuous or not, Paglen and Crawford are showing how opinion, bias and sometimes offensive points of view can drive the creation of artificial intelligence.

The ImageNet labels were applied by thousands of unknown people, most likely in the US, hired by the Stanford team.

Working through the crowdsourcing service Amazon Mechanical Turk, they earned pennies for each photo they labelled, churning through hundreds of tags an hour.

As they worked, biases were baked into the database, though it is impossible to know whether those biases originated with the people doing the labelling.

They defined what a “loser” looked like. And a “slut”. And a “wrongdoer”.

The labels originally came from another sprawling collection of data called WordNet, a kind of conceptual dictionary for machines built by researchers at Princeton University in the 1980s.
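For readers curious what a “conceptual dictionary for machines” looks like, the short Python sketch below queries a copy of WordNet through the open-source NLTK library and walks up a word’s chain of broader categories; the example word is chosen only for illustration.

```python
# A minimal sketch of looking up a word in WordNet via the open-source
# NLTK library. Requires the corpus to be downloaded once:
# nltk.download("wordnet").
from nltk.corpus import wordnet as wn

# Each "synset" groups words that share one meaning, with a definition.
for synset in wn.synsets("nerd"):
    print(synset.name(), "-", synset.definition())
    # Walk up the chain of broader concepts (hypernyms),
    # e.g. towards person -> organism -> ... -> entity.
    parents = synset.hypernyms()
    while parents:
        print("  broader:", parents[0].lemma_names())
        parents = parents[0].hypernyms()
```

ImageNet reused WordNet’s noun categories as the buckets its photos were sorted into, which is how descriptions written for a 1980s research dictionary ended up attached to pictures of people.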

But the Stanford researchers may not have realised what they were doing when they included these inflammatory labels.

Artificial intelligence is often trained on vast data sets that even its creators haven’t quite wrapped their heads around.

“This is happening all the time at a very large scale – and there are consequences,” said Liz O’Sullivan, who oversaw data labelling at the artificial-intelligence startup Clarifai and is now part of the Surveillance Technology Oversight Project, a civil rights and privacy group that aims to raise awareness of the problems with AI systems.

Many of the labels used in the ImageNet data set were extreme. But the same problems can creep into labels that might seem inoffensive. After all, what defines a “man” or a “woman” is open to debate.

“When labelling photos of women or girls, people may not include non-binary people or women with short hair,” O’Sullivan said. “Then you end up with an AI model that only includes women with long hair.”

In recent months, researchers have shown that face-recognition services from companies like Amazon, Microsoft and IBM can be biased against women and people of colour.

With this project, Paglen and Crawford hoped to bring more attention to the problem – and they did.

At one point this week, as the project went viral on services like Twitter, ImageNet Roulette was generating more than 100,000 labels an hour.

“It was a complete surprise to us that it took off in the way that it did,” Crawford said from Paris, where she was with Paglen.

“It let us really see what people think of this and really engage with them.”

For some, it was a joke.

But others, like Kima, got the message.

“They do a pretty good job of showing what the problem is – not that I wasn’t aware of the problem before,” he said.

New York Times 
