Musk, experts urge pause to AI systems

Elon Musk has joined key figures in artificial intelligence in calling for training of powerful AI systems to be suspended amid fears of a threat to humanity.

The Twitter and Tesla boss joined a group of artificial intelligence experts and industry executives calling for a six-month pause in training systems more powerful than OpenAI’s newly launched model GPT-4, in an open letter on Wednesday (US time), citing potential risks to society and humanity.

The letter was issued by the non-profit Future of Life Institute and signed by more than 1000 people including Mr Musk, Stability AI CEO Emad Mostaque, Apple co-founder Steve Wozniak, researchers at Alphabet-owned DeepMind, as well as AI heavyweights Yoshua Bengio and Stuart Russell.

It calls for a pause on advanced AI development until shared safety protocols for such designs are developed, implemented and audited by independent experts.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter says.

The letter also details potential risks to society and civilisation from human-competitive AI systems in the form of economic and political disruptions, and calls on developers to work with policymakers on governance and regulation.

The letter comes as European Union police force Europol on Monday joined a chorus of ethical and legal concerns over advanced AI such as ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Mr Musk, whose car maker Tesla uses AI in an autopilot system, has been vocal about his concerns about AI.

He was a co-founder of OpenAI, although he resigned from its board some time ago. In the open letter he is listed as an adviser to the Future of Life Institute.

Since its release last year, Microsoft-backed OpenAI’s ChatGPT has prompted rivals to accelerate developing similar large language models, and companies to integrate generative AI models into their products.

OpenAI chief executive Sam Altman did not sign the letter, a spokesperson at Future of Life told Reuters. OpenAI did not immediately respond to a request for comment.

“The letter isn’t perfect, but the spirit is right: We need to slow down until we better understand the ramifications,” said Gary Marcus, an emeritus professor at New York University who signed the letter.

“They can cause serious harm … the big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialise.”

-with AAP

Copyright © 2024 The New Daily.
All rights reserved.