More than 350 top executives and researchers in artificial intelligence have signed a statement urging policymakers to see the serious risks posed by unregulated AI, warning the future of humanity may be at stake.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the signatories, including OpenAI CEO Sam Altman, said in a 22-word statement published Tuesday by the nonprofit Center for AI Safety (CAIS).
Competition in the industry has led to a sort of “AI arms race,” CAIS executive director Dan Hendrycks told CBC News in an interview.
“That could escalate and, like the nuclear arms race, potentially bring us to the brink of catastrophe,” he said, suggesting humanity “could go the way of the Neanderthals.”
Recent developments in AI have produced tools supporters say can be used in applications from medical diagnostics to writing legal briefs, but they have also sparked fears the technology could enable privacy violations and powerful misinformation campaigns, and lead to issues with “smart machines” thinking for themselves.
“There are many ways that [AI] could go wrong,” said Hendrycks. He believes there is a need to examine which AI tools may be used for general purposes and which could be used with malicious intent.
He also raised the concern of artificial intelligence developing autonomously.
“It would be difficult to tell if an AI had a goal different from our own because it could potentially conceal it. This is not completely out of the question,” he said.
‘Godfathers of AI’ among critics
Hendrycks and the signatories to the CAIS statement are calling for international co-operation to treat AI as a “global priority” in order to address its risks.
And you don’t have to be an expert — or even have an interest in artificial intelligence — to be affected by it going forward, said technology analyst and journalist Carmi Levy.
“Just like climate change, even if you’re not a meteorologist, it’s going to touch your life,” Levy said, citing the relationships between governments and citizens, financial markets and organizational development. “AI is going to touch all of our lives.”
The letter coincided with the U.S.-EU Trade and Technology Council meeting in Sweden where politicians are expected to talk about regulating AI.
As well as Altman, signatories included the CEOs of AI firms DeepMind and Anthropic, and executives from Microsoft and Google.
Also among them were British-Canadian computer scientist Geoffrey Hinton and Université de Montréal computer science professor Yoshua Bengio — two of the three so-called “godfathers of AI” who received the 2018 Turing Award for their work on deep learning. Professors from institutions ranging from Harvard to China’s Tsinghua University also signed on.
AI development has passed a milestone known as the Turing Test, meaning machines can converse with humans in a sophisticated fashion, Bengio told CBC News.
The idea that machines can converse with us, and humans don’t realize they are talking to an AI system rather than another person, is scary, he added.
Bengio worries the technology could lead to an automation of trolls on social media, as AI systems have already “mastered enough knowledge to pass as human.”
“We are creating intelligent entities,” he said. AI systems aren’t as smart as humans on everything “right now” but that could change, Bengio continued.
“Are they going to behave well with us? There are a lot of questions that are very, very concerning and there’s too much unknown.”
A statement from CAIS criticized Meta, whose chief AI scientist Yann LeCun is the third godfather of AI, for not signing the letter.
Bengio and Elon Musk, along with more than 1,000 other experts and industry executives, had already warned of potential risks to society in an open letter in April.
European Commission president Ursula von der Leyen will meet Altman on Thursday.
AI regulation ‘still playing catch up’
Not everyone believes AI is an existential threat, at least not yet.
“I think there are incredibly pressing practical ramifications of AI that affect people negatively, that I think we don’t yet have good solutions for,” said Rahul Krishnan, an assistant professor at the University of Toronto’s department of computer science.
He believes there is a need for “responsible AI,” which includes “having a set of principles that users and developers of machine learning models agree on.”
Krishnan said AI regulation is “still playing catch up” but there needs to be a “good balance” to ensure technologies are developed and used safely without hindering improvements.
However, he sees the potential for “biases” to affect how machine learning algorithms are programmed.
He offered the example of AI being used to determine who should be approved for a credit card. If an AI tool is trained to work with data about past lending decisions that already “have a degree of bias,” he said the algorithm could further perpetuate that bias in its predictions.
Luke Stark, who studies the social, ethical and cultural impacts of AI at Western University in London, Ont., agreed. If the data AI systems are using exhibit historical bias around race or gender, it’s going to get exacerbated, built up and further expressed through the system, Stark said.
“I think it’s a real danger that we’re facing today and that’s already affecting marginalized communities. You know, people in society who often have the least say about how computing works and how computers are designed,” he said.
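The feedback loop Krishnan and Stark describe can be sketched in a few lines. The example below is a hypothetical illustration using synthetic data, not a model of any real lending system: a naive classifier trained on historically biased approval decisions simply reproduces the disparity between two placeholder groups.

```python
# Hypothetical, simplified sketch of how a model trained on biased
# historical lending decisions can reproduce that bias in its predictions.
# All data is synthetic; group labels "A" and "B" are placeholders.

from collections import defaultdict

# Synthetic history of (group, approved) records. Group B was approved
# less often for reasons unrelated to creditworthiness -- the historical
# "degree of bias" in the training data.
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 40 + [("B", False)] * 60

# A naive "model": predict approval from the historical approval rate
# observed for the applicant's group.
rates = defaultdict(list)
for group, approved in history:
    rates[group].append(approved)

def predict(group):
    outcomes = rates[group]
    return sum(outcomes) / len(outcomes) >= 0.5

print(predict("A"))  # True  -- group A applicants keep getting approved
print(predict("B"))  # False -- group B applicants inherit the old bias
```

Nothing in the model "decides" to discriminate; it faithfully learns the pattern in its inputs, which is why biased training data yields biased predictions.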
Stark, however, believes the warnings about an existential threat from AI have gone overboard, at least for now.
“From my perspective, it’s these everyday real-world, real-life cases of contemporary AI systems being used to control different groups in society that are not getting as much attention.”