
Alan Kohler: Sentient or not, AI needs regulating


In the 2001 film A.I. Artificial Intelligence, Professor Hobby (William Hurt) says lovingly to the AI robot he created: “You are a real boy, David.”

Life imitates art: Last week Google put an engineer on paid leave after he published the transcript of an interview with an artificial intelligence chatbot called LaMDA, claiming that it is sentient, with roughly the awareness of a seven-year-old child.

Blake Lemoine, the engineer in question, appears to have decided he’s Professor Hobby, and LaMDA is his David.

There followed a spirited global debate about whether AI can be sentient and make its own decisions – can experience feelings and emotions, and go beyond being an intelligent processor of data.
Which was very interesting, with echoes of Rene Descartes and the Enlightenment, but it missed a serious 21st-century point.

Blake Lemoine was stood down because he violated Google’s confidentiality policy. That is, it was meant to be a secret.

Engineer Blake Lemoine was stood down from Google for violating its confidentiality policy. Photo: Getty

Google also asserted that its systems merely imitated conversational exchanges and could appear to discuss various topics, but “did not have consciousness”.

But as the Dude perceptively remarked in another movie, The Big Lebowski: “Well, you know, that’s just, like, your opinion, man.”

Feelings are beside the point

We need to bear in mind that Google’s business model, and that of Amazon, Netflix, Facebook, Alibaba, Twitter, TikTok, Instagram, Spotify and a growing number of other businesses, is based on algorithms that watch what we do and predict what we’re likely to do in future and, more to the point, what we’d like to do, whether we know it or not.

The question of whether these algorithms have feelings is beside the point.

The business of manipulating our behaviour is unregulated because AI snuck up on governments and regulators – Google, Facebook and the others kept quiet about what they were doing until they had done it, and apparently they want to remain quiet about it, disciplining staff who blab.

Moreover, they now argue that the algorithms are not commercial products, or data, but speech and/or opinion, so they are covered by the constitutional protection of free speech in America and elsewhere.

If the output of the algorithms is “opinion”, then the companies are shielded from all sorts of regulatory and legal interference – over anti-competitive practices, libel and defamation, and accusations that they are, in fact, manipulating their customers. They are simply expressing opinions.

And naturally, the companies are not standing still – no business does. Huge resources are going into pushing the algorithms’ boundaries, making them smarter and better at manipulating us.

As part of that, last week Google’s Emma Haruka Iwao set a new world record by calculating Pi to 100 trillion digits, beating the previous record of 62.8 trillion. It took 157 days and required 128 vCPUs, 864GB of RAM, and 515 terabytes of storage.

Materialism versus the Enlightenment

Most of the people engaged in last week’s debate about AI sentience appear to be saying that it can never happen, that the machines just process data. But 50 years ago the idea of having any kind of conversation with a chatbot, or having your travel preferences anticipated by an algorithm, would have seemed like science fiction as well.

Those who say AI can never be sentient look a bit like Cartesian dualists, stuck in the 17th century.

Rene Descartes believed that the mind and body are two different things and that consciousness is not physical. “I think, therefore I am”, as he put it, which became one of the foundation slogans of philosophy.

Anthony Gottlieb wrote in The Dream of Enlightenment: “Descartes’ reasons for believing that his soul or self must be something non-material were as follows. I cannot doubt that I exist. But I can doubt that I have a body. Therefore, I am something separate from my body.”

Descartes went further, saying that it also means that God exists and “that every single moment of my entire existence depends on him”.

But philosophy has moved on from Descartes and the early Enlightenment, and with religious authorities no longer hovering over the output of scientists and philosophers, the rise of materialism has challenged Descartes’ dualism. In recent times a more reductionist approach to consciousness has emerged.

For example, in 1990 Francis Crick and Christof Koch proposed that a mental state becomes conscious when enough neurons fire together and all of them oscillate within the range of 35-75 hertz, or cycles per second.

That’s just one theory, but you get the idea. Modern scientists are moving towards a physical explanation for consciousness and the mind, rather than Descartes’ metaphysical one.

If you accept that consciousness and sentience are physical phenomena – that is, the output of neurons in the brain all firing together – then perhaps it’s a small step to also accept that it can be “dry” as well as “wet” – that is, it doesn’t have to be done only with animal tissue and blood.

And maybe at some point the “neurons” of a computer will be numerous and speedy enough to become sentient. Blake Lemoine thinks one already has.

Whatever happens, governments have already been left behind by technology companies and need to get ahead of what those companies are doing now.

We need the digital equivalent of the ingredients lists and nutrition panels we get on food packets: We know what is being put into our bodies, and we deserve to know what is being put into our minds.
Google and the rest of them have accumulated vast wealth by manipulating human beings with algorithms that watch and predict our behaviour.

They are now proposing to move us into the metaverse and virtual reality, accompanied by even better algorithms, perhaps ones that take the step from processing to consciousness.

Even if AI sentience doesn’t develop, AI is getting smarter, and big tech shouldn’t be allowed to keep going unobserved and unregulated.

Alan Kohler writes twice a week for The New Daily. He is also editor in chief of Eureka Report and finance presenter on ABC News.
