In a recent open letter, a group of researchers and experts has called on all AI labs to pause, for at least six months, the training of AI systems more powerful than GPT-4. The authors of the letter believe that AI systems with human-competitive intelligence pose significant risks to society and humanity, and that the current level of planning and management is inadequate.
The letter argues that recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one can understand, predict, or reliably control. With contemporary AI systems becoming human-competitive at general tasks, the authors ask whether we should let machines flood our information channels with propaganda and untruth, automate away all jobs, or risk developing nonhuman minds that might eventually replace us.
The authors believe that decisions about the development of powerful AI systems should not be delegated to unelected tech leaders. Instead, they argue that AI labs and independent experts should use the pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, protocols that are rigorously audited and overseen by independent outside experts.
The letter suggests that AI labs should refocus their research and development on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal. In parallel, it calls on AI developers to work with policymakers to dramatically accelerate the development of robust AI governance systems, including new and capable regulatory authorities dedicated to AI, oversight and tracking of highly capable AI systems, and liability for AI-caused harm.
The authors conclude that humanity can enjoy a flourishing future with AI, but only if its development is handled responsibly. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we engineer these systems for the clear benefit of all and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects, and the authors believe we can do the same here. By pausing the training of AI systems more powerful than GPT-4, we can take the time to ensure that the development of advanced AI is safe and beneficial for all.