A group of prominent tech figures and artificial intelligence experts, including Elon Musk and Apple co-founder Steve Wozniak, have signed an open letter calling for a halt to the development of highly advanced artificial intelligence (AI) systems, citing potential risks to society.
The letter calls on AI developers to stop training AI systems more powerful than GPT-4 for at least six months and for independent, third-party experts to develop safety protocols to guide the future of AI systems. The letter was issued by the Musk Foundation-funded Future of Life Institute on Wednesday and has been signed by over 1,000 people.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects,” said the letter.
The signatories include Stability AI CEO Emad Mostaque and researchers at Alphabet-owned (NASDAQ: GOOG) DeepMind.
The letter mirrors anxieties long expressed by Musk, who has called artificial intelligence the biggest existential threat of our times.
He has also called for a regulatory authority to provide oversight and ensure the technology operates in the public interest. Similarly, the letter calls for collaboration between AI developers and policymakers to produce robust governance systems for the technology.
In the months ahead, we will use AI to detect & highlight manipulation of public opinion on this platform.
Let’s see what the psy ops cat drags in …
— Elon Musk (@elonmusk) March 18, 2023
A full regulatory system for AI
These governance systems would include several elements: new regulatory authorities dedicated to AI, tracking and oversight of highly capable AI systems, and provenance and watermarking mechanisms to help distinguish real content from synthetic content and to track model leaks.
a big deal: @elonmusk, Y. Bengio, S. Russell, @tegmark, V. Kraknova, P. Maes, @Grady_Booch, @AndrewYang, @tristanharris & over 1,000 others, including me, have called for a temporary pause on training systems exceeding GPT-4 https://t.co/PJ5YFu0xm9
— Gary Marcus (@GaryMarcus) March 29, 2023
Additionally, the letter calls for an auditing and certification ecosystem, liability for harm caused by AI, robust public funding for technical AI safety research, and well-resourced institutions to manage the economic and political disruptions that will inevitably accompany the technology’s rise.
Follow Joseph Morton on Twitter