
A group of chief executives and scientists from companies including OpenAI and Google DeepMind have warned that the threat to humanity from fast-developing AI rivals that of nuclear conflict and pandemics.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said a statement published by the Center for AI Safety, a San Francisco-based nonprofit organization.

More than 350 AI executives, researchers and engineers, including Sam Altman of OpenAI, Demis Hassabis of Google DeepMind and Dario Amodei of Anthropic, were signatories to the one-sentence statement.

Geoffrey Hinton and Yoshua Bengio, Turing Award winners for their work on neural networks and often described as the “godfathers” of AI, also signed the document. Hinton left his position at Google earlier this month so that he could speak freely about the technology’s dangers.

The announcement comes amid growing calls for regulation of the sector, after AI products from big tech companies raised concerns about potential harms, including the spread of misinformation, the entrenchment of societal biases and the displacement of workers.

EU lawmakers are pushing ahead with legislation on artificial intelligence, while the US is still exploring regulation.

OpenAI’s ChatGPT, backed by Microsoft and launched in November, spearheaded the widespread adoption of artificial intelligence. Altman testified before the US Congress for the first time this month, calling for regulation in the form of licensing.

OpenAI’s Sam Altman testifies before the US Congress this month © Elizabeth Frantz/Reuters

In March, Elon Musk and more than 1,000 other researchers and tech executives called for a six-month pause in the development of advanced AI systems, warning of an “arms race” in the technology.

The letter was criticized over its methodology, including by some researchers whose work it cited, while others disagreed with the proposed pause in development.

The Center for AI Safety told The New York Times that the latest statement was kept to a single line to avoid such disagreements.

“We didn’t want to have a huge list of 30 interventions,” said executive director Dan Hendrycks. “When that happens, it dilutes the message.”

Microsoft’s chief technology officer Kevin Scott and chief scientific officer Eric Horvitz signed the statement on Tuesday, as did Mustafa Suleyman, a co-founder of DeepMind who now heads the startup Inflection AI.
