Open letter seeking pause on AI experiments signed by Steve Wozniak, Elon Musk, and more

AI experts, technologists, and business leaders are among the more than 1,000 petitioners imploring AI labs to pause training of systems “more powerful than GPT-4.”

The open letter was published on Wednesday by the Future of Life Institute, a non-profit dedicated to mitigating the risks of transformative technology. The list of signees includes Apple co-founder Steve Wozniak; Elon Musk, CEO of SpaceX, Tesla, and Twitter; Stability AI CEO Emad Mostaque; Tristan Harris, executive director of the Center for Humane Technology; and Yoshua Bengio, founder of the AI research institute Mila.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” said the letter. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

Companies like OpenAI, Microsoft, and Google have been charging full speed ahead with their generative AI models. Fueled by ambitions to corner the AI market, these companies announce new advancements and product releases on an almost daily basis. But the letter says it’s all happening too fast for ethical, regulatory, and safety concerns to be considered.

“We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” 

The open letter calls for a six-month pause on training AI systems more powerful than GPT-4, arguing that such systems “should be developed only once we are confident that their effects will be positive and their risks will be manageable.” It recommends that AI labs use this time to collectively develop safety protocols that can be audited by third parties. If labs don’t pause voluntarily, the letter says, governments should step in and impose a moratorium.

At the same time, the petition calls on policymakers to establish regulatory authorities dedicated to AI, along with oversight and tracking of AI systems, tools for distinguishing real from generated content, auditing and certification for AI models, legal accountability for “AI-caused harm,” public funding for AI research, and institutions “for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.”

The letter isn’t saying to pause all AI development full stop. It’s just saying to pump the brakes: societies need a quick time-out to build the proper infrastructure so the AI revolution can be safe and beneficial to all. “Let’s enjoy a long AI summer,” the letter concludes, “not rush unprepared into a fall.”