
AI CEOs Say AI Poses ‘Risk of Extinction,’ Are Trying to Find the Guy Who Did This

"Essentially, it is a misdirection of public attention away from what matters towards that which suits their narrative and business model."
Image: Getty Images and Netflix

Hundreds of AI researchers, CEOs, and engineers signed a statement warning that AI poses an extinction risk on par with pandemics and nuclear war.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads. It was signed by top AI executives including OpenAI CEO Sam Altman, OpenAI Co-founder and Chief Scientist Ilya Sutskever, Google’s DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei, among hundreds of others.


“AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks,” an introduction to the statement says. “The succinct statement [...] aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.” 

Other notable signatories include Geoffrey Hinton, the professor known as the “godfather of AI,” and Yoshua Bengio, a professor at the University of Montreal, who was a top signatory of a previous open letter calling for a six-month pause on the development of AI.  

Since the statement was shared, a number of AI researchers and ethicists have pointed out that it deepens a hypothetical existential fear and detracts from the very real problems of AI, many of which were created, and remain unaddressed, by the very people who signed it.

“Putting existential AI risk on the same level as climate change and pandemics is misleading, since both of these are very concrete, current risks (which millions of people are being affected by at this very moment), whereas the risk of AI destroying humanity is very hypothetical,” Sasha Luccioni, a Research Scientist and Climate Lead at Hugging Face, told Motherboard. Luccioni tweeted charts showing that Google’s emissions rose across categories between 2020 and 2021, with electricity-related emissions more than doubling, and wondered whether this was correlated with Google’s wide-scale deployment of large language models (LLMs) like Bard. 


“I see this as a way for companies like Open AI to control the narrative and move public attention away from things like data consent, the legality of their systems, and the false and misleading information that they produce (and how all of these can impact our livelihoods). Essentially, it is a misdirection of public attention away from what matters towards that which suits their narrative and business model,” Luccioni added. 

“The whole thing looks to me like a media stunt, to try to grab the attention of the media, the public, and policymakers and focus everyone on the distraction of scifi scenarios,” Emily M. Bender, a professor in the Department of Linguistics at the University of Washington, told Motherboard. “This would seem to serve two purposes: it paints their tech as way more powerful and effective than it is and it takes the focus away from the actual harms being done, now, in the name of this technology. These include the pollution of the information ecosystem, the reinforcement and replication of biases at scale, exploitative labor practices and further gigification of the economy, enabling oppressive surveillance such as the ‘digital border wall’ and theft of data and creative work.” 

In late March, a group of public figures and leading AI researchers, including Elon Musk and Andrew Yang, signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4. That open letter took a longtermist view of AI. Longtermism is an outlook that many of Silicon Valley’s tech elite have embraced, and it is what drives them to amass hoards of wealth to solve problems for humans in the far future rather than focus on issues in the present. 

Similarly, Tuesday’s brief statement is a fear-mongering prediction about the future that detracts from the very real problems AI is creating today. It also comes from the very people who benefit financially from developing and investing in AI, and it takes for granted that there is simply nothing that can be done to slow or stop the pace of AI development. 

“Why should something need to be framed as an ‘existential risk’ in order to matter? Is extinction status that necessary for something to be important?” Deborah Raji, a computer scientist and fellow at Mozilla, tweeted. “The need for this ‘saving the world’ narrative over a ‘let's just help the next person’ kind of worldview truly puzzles me.” 

Last week, Altman threatened that OpenAI might “cease operations” in the EU over his concerns about the EU AI Act, which would require “summaries of copyrighted data for training.” OpenAI has become increasingly closed-source and secretive with the release of its models. Revealing its training data and other information behind its models would open the company up to far more competition, and perhaps even to lawsuits from those whose copyrighted work can be found in the training dataset. 

“That sounds to me like the regulators are not falling for the AI hype and asserting the right of elected governments to protect the interests of people against overreach by corporations. In such a context, it's no wonder that the corporations profiting off of a business model based on data theft and labor exploitation would want to try to reframe the discussion,” Bender told Motherboard.