Google, Facebook back ‘blunt force’ crackdown on extremism, but Twitter warns it could backfire

A 'blunt force' ban on extremism could be 'incredibly problematic', Twitter has warned. Photo: Getty

Google and Facebook have backed calls to ban certain extremist content and insignias online, but Twitter warned that it won’t be enough to deal with a rise in online extremism, and could even make it worse.

Appearing before a parliamentary committee on Friday afternoon, representatives for the tech giants said they were on the front foot in dealing with extremist content online.

But Labor Senator Kristina Keneally challenged Google over its efforts to confront extremism, after finding a terrorist manifesto on the first page of the company’s search results.

“I’m pretty sure if I looked on other platforms, that would similarly be the case,” Senator Keneally said.

Working with police

It came after the Australian Federal Police told senators on Thursday that tougher laws banning possession of extremist content and symbols would help them combat a sharp rise in online extremism.

Senator Keneally asked the platforms whether laws banning the possession, promotion, sale or distribution of extremist insignias or texts would be useful in their efforts to combat hate groups online.

Google public policy and government relations representative Samantha Yorke supported calls for a crackdown, saying such laws would give the company “legal certainty” around its moderation decisions.

Facebook broadly agreed, saying there was a “need for more regulation”.

While Twitter’s senior director of public policy Kathleen Reen supported new laws in principle, she warned a ban wouldn’t solve the problem and could even make it worse.

“It could be incredibly problematic to use a blunt force instrument like a ban,” Ms Reen said.

“You may find yourself effectively chasing it [content] off our platforms where the companies are working to address these issues.

“Stopping the conversation entirely won’t address the problem in our view – in fact it will make it worse.”

Expert doubts need for crackdown

Federal police on Thursday said they had identified gaps in the law that they believe prevent law enforcement from disrupting extremist violence.

The AFP suggested making the possession of a symbol an offence, without requiring any proof of an intention to commit a terrorist act.

Richard Wilson, co-chair of a Law Council of Australia committee, warned that claims by police about gaps in the law require “rigorous testing”.

“Proof of such connection is a deliberate safeguard which limits the scope of criminality, and associated police powers,” Mr Wilson said.

“Mere symbols and insignia shouldn’t be criminalised.”

The parliamentary joint committee on intelligence and security is probing how to combat extremist movements, including whether Australia’s laws are fit for purpose in tackling hate groups online.

Other experts told the hearing on Friday that Australia’s laws have failed to keep pace with the rapidly shifting online landscape, where extremist groups are able to gather, share ideas and grow their audiences.

‘Plots, plans and people’

Rachael Falk, head of the Cyber Security Research Centre, said content on social media was filled with “plots, plans and people”, but warned that extremist groups simply move to fringe apps when hit with new regulations.

“There is a social, moral and legal role for big tech to be playing here,” Ms Falk said.

Representatives from Google, Facebook and Twitter explained they were using artificial intelligence (AI) to remove hate speech and other extremist content from their platforms en masse, but said the task never ends.

Facebook removes 97 per cent of hate speech it identifies using AI, while Google said more than 90 per cent of videos it removes are found through its algorithms, and Twitter removes 96 per cent of the extremist content it finds through “proactive technologies”.

Brian Fishman, Facebook’s public policy director for counter-terrorism, said dealing with the creation of ‘echo-chambers of hate’ – as ASIO has described it – was “less a matter for social media” than society broadly.

“There is a real place for governments to support good efforts to vet civil society organisations and to help foster and support those social functions. Social media can come in and do what we do best, which is help them build an audience,” Mr Fishman said.

-with AAP
