
Pause Giant AI Experiments: An Open Letter

From Wikipedia, the free encyclopedia
2023 letter calling for a pause on AI system training

Pause Giant AI Experiments: An Open Letter is the title of a letter published by the Future of Life Institute in March 2023. The letter calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control.[1] It received more than 30,000 signatures, with signatories including academic AI researchers, industry executives, and other prominent figures such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak and Yuval Noah Harari.[1][2][3]

Motivations


The letter was published a week after the release of OpenAI's large language model GPT-4. It asserts that current large language models are "becoming human-competitive at general tasks", referencing a paper on early experiments with GPT-4 that described the model as showing "sparks" of artificial general intelligence (AGI).[4] AGI is described as posing numerous important risks, especially in the context of race-to-the-bottom dynamics in which some AI labs may be incentivized to overlook safety in order to deploy products more quickly.[5]

The letter asks for AI research to be refocused on making powerful AI systems "more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal". It also recommends stronger governmental regulation, independent audits before training AI systems, "tracking highly capable AI systems and large pools of computational capability", and "robust public funding for technical AI safety research".[1] FLI suggests using the "amount of computation that goes into a training run" as a proxy for how powerful an AI system is, and thus as a threshold for the pause.[6]

Reception


The letter received widespread coverage, with support from a range of high-profile figures. As of July 2024, no pause had taken place; instead, as FLI noted on the letter's one-year anniversary, AI companies had directed "vast investments in infrastructure to train ever-more giant AI systems".[7] However, the letter was credited with generating a "renewed urgency within governments to work out what to do about the rapid progress of AI" and with reflecting the public's increasing concern about risks posed by AI.[8]

Eliezer Yudkowsky wrote that the letter "doesn't go far enough" and argued that it should call for an indefinite pause. He fears that solving the alignment problem might take several decades and that any sufficiently intelligent misaligned AI might cause human extinction.[9]

Some IEEE members expressed various reasons for signing the letter, for example that "There are too many ways these systems could be abused. They are being freely distributed, and there is no review or regulation in place to prevent harm."[10] One AI ethicist argued that the letter raises awareness of issues such as voice cloning, but considered it unactionable and unenforceable.[10]

The letter has been criticized for diverting attention from more immediate societal risks such as algorithmic biases.[11] Timnit Gebru and others argued that it was sensationalist and amplified "some futuristic, dystopian sci-fi scenario" rather than current problems with AI.[10]

Former Microsoft CEO Bill Gates chose not to sign the letter, stating that he does not think "asking one particular group to pause solves the challenges".[12] Sam Altman, CEO of OpenAI, commented that the letter was "missing most technical nuance about where we need the pause" and stated: "An earlier version of the letter claimed OpenAI is training GPT-5 right now. We are not and won't for some time."[13] Reid Hoffman argued that the letter was "virtue signalling" with no real impact.[14]

The experiments continued regardless; GPT-5 was announced in 2025.[15]

List of notable signatories


Notable signatories of the letter include Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, and Yuval Noah Harari.[1]


References

  1. ^ a b c d "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Retrieved 19 July 2024.
  2. ^ Metz, Cade; Schmidt, Gregory (29 March 2023). "Elon Musk and Others Call for Pause on A.I., Citing 'Profound Risks to Society'". The New York Times. ISSN 0362-4331. Retrieved 20 August 2024.
  3. ^ Hern, Alex (29 March 2023). "Elon Musk joins call for pause in creation of giant AI 'digital minds'". The Guardian. ISSN 0261-3077. Retrieved 20 August 2024.
  4. ^ Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric; Kamar, Ece; Lee, Peter; Lee, Yin Tat; Li, Yuanzhi; Lundberg, Scott; Nori, Harsha; Palangi, Hamid; Ribeiro, Marco Tulio; Zhang, Yi (12 April 2023). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv:2303.12712 [cs.CL].
  5. ^ "MPs warned of AI arms race to the bottom". ComputerWeekly.com. Retrieved 13 April 2023.
  6. ^ "FAQs about FLI's Open Letter Calling for a Pause on Giant AI Experiments". Future of Life Institute. 31 March 2023. Retrieved 13 April 2023.
  7. ^ Aguirre, Anthony (22 March 2024). "The Pause Letter: One year later". Future of Life Institute. Retrieved 19 July 2024.
  8. ^ "Six months after call for AI pause, are we closer to disaster?". Euronews. 21 September 2023. Retrieved 19 July 2024.
  9. ^ "The Open Letter on AI Doesn't Go Far Enough". Time. 29 March 2023. Archived from the original on 2 April 2023. Retrieved 13 April 2023.
  10. ^ a b c Anderson, Margo (7 April 2023). "'AI Pause' Open Letter Stokes Fear and Controversy". IEEE Spectrum.
  11. ^ Paul, Kari (1 April 2023). "Letter signed by Elon Musk demanding AI research pause sparks controversy". The Guardian. ISSN 0261-3077. Retrieved 14 April 2023.
  12. ^ Rigby, Jennifer (4 April 2023). "Bill Gates says calls to pause AI won't 'solve challenges'". Reuters. Retrieved 13 April 2023.
  13. ^ Vincent, James (14 April 2023). "OpenAI's CEO confirms the company isn't training GPT-5 and 'won't for some time'". The Verge.
  14. ^ Heath, Ryan (22 September 2023). "The great AI "pause" that wasn't". Axios.
  15. ^ "Introducing GPT-5". OpenAI. 7 August 2025.
