The non-profit said powerful AI systems should only be developed “once we are confident that their effects will be positive and their risks will be manageable.” It cited potential risks to humanity and society, including the spread of misinformation and widespread automation of jobs.
The letter urged AI companies to create and implement a set of shared safety protocols for AI development, which would be overseen by independent experts.
Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque, researchers at Alphabet's AI lab DeepMind, and notable AI professors have also signed the letter. At the time of publication, OpenAI CEO Sam Altman had not added his signature.