
Alan Kohler: Can social media survive artificial intelligence?

The government has released draft legislation to deal with disinformation and misinformation online that is less than draconian, to say the least.

(Disinformation is intended to deceive while misinformation is a mistake; “dis”, needless to say, is worse than “mis”.)

Generative artificial intelligence changes everything about deliberate digital deception because it’s soon going to be impossible to tell phoney from true.

My guess is the gentle regulatory approach will last until the first video appears of a cabinet minister confessing to being a paedophile, or straightening a line of cocaine with their credit card before sniffing it through a rolled-up $100 note.

Communications Minister Michelle Rowland says the new framework, to be operated by the Australian Communications and Media Authority (ACMA), aims “to strike the right balance between protection from harmful mis- and dis-information online and freedom of speech”.

No power to remove posts

But amazingly, ACMA will not have the power to request specific content or posts be removed.

The sanctions available to ACMA, and contained in Section 14 of the proposed law, are that ACMA “may make digital platform rules in relation to records”.

These basically involve requiring the digital platform service involved to “make and retain records” about the prevalence of misinformation or disinformation and the measures taken to prevent or respond to it.

But take the fakes down or, God forbid, fine Facebook, Instagram, TikTok, YouTube and Twitter for publishing lies? Nope – that might interfere with freedom of speech, or worse – their profits.

The Australian approach seems to be based on Section 230 of the US Communications Decency Act of 1996, which says “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”.

That is, they come under the ‘C’ for communications in the ACMA acronym rather than the ‘M’ for media and are not to be treated as publishers. The analogy is that if someone lies or plans a crime on the phone, the phone company doesn’t get blamed. The social media firms are treated like phone companies.

Admittedly, this stuff is not easy for any government. People like social media quite a lot, but it wouldn’t exist if the platforms were regulated as publishers and made responsible for what appears on them.

Their business models rely on them not having to check and edit what is published before it goes out. They have systems to respond to complaints and mop up afterwards, but that’s it.

AI, a different level of fakery

The problem that regulators like Michelle Rowland and ACMA now face is that generative AI is an entirely different kettle of fakery.

The tools now available can create duplicates of appearance and voice that are indistinguishable from reality, and they’re getting better all the time. Combine that with the vast amount of data, images and recordings of public figures that are available these days, and the potential to tilt elections is obvious.

Donald Trump and Ron DeSantis have both already used deepfake images of each other and the Republican primaries are still eight months away.

A ‘deepfake’ image of Donald Trump and Anthony Fauci, used by Ron DeSantis’ campaign.

Digital deception combined with micro-targeting to maximise impact is likely to increase exponentially from here. What’s more, AI won’t require big budgets to produce believable bullsh-t – political mendacity is being democratised, available to all.

It’s not difficult to see a future in which we can’t believe anything online at all, and anything can be convincingly denied by claiming it’s an AI fake.

It’s hard to know which would be worse – mass cynicism or mass gullibility.

Either way, if a National Party frontbencher appears on Facebook with a swastika armband announcing that he has always admired Hitler, it feels like it’s not really enough for ACMA to require a report to be written describing the measures that will be taken to deal with disinformation in future.

ACMA needs the power to require it to be taken down, not to wait for Facebook to do so voluntarily (which it would, of course). And even after it is taken down, nothing dies on the internet – it will always exist somewhere.

A regulatory twilight

It’s true that there are plenty of lies told in the traditional media, but at least there are clear rules against it and broadcasters can lose their licence if the breach is egregious and persistent. But social media exists in a sort of regulatory twilight.

In the end, I suspect that governments might have to ask whether society really needs social media at all.

If their – very lucrative – business model requires that they be treated like phone companies, passively facilitating communication between individuals, then maybe that simply can’t go on once it’s combined with AI, when fakes and lies can’t be detected and no one knows what’s true any more.

If that’s what the combination of AI and social media looks like, then it’s hard to see how social media itself can survive.

I remember when we didn’t have it, and we managed to get by. We had contact books, and just met up.

Alan Kohler is founder of Eureka Report and finance presenter on ABC news. He writes twice a week for The New Daily
