Top AI Companies Join Government Effort to Set Safety Standards
The top US artificial intelligence companies will participate in a government-led effort intended to craft federal standards on the technology to ensure that it’s deployed safely and responsibly, the Commerce Department said Thursday.
OpenAI, Anthropic, Microsoft Corp., Meta Platforms Inc. and Alphabet Inc.’s Google are among more than 200 members of a newly established AI Safety Institute Consortium under the department, Commerce Secretary Gina Raimondo said. Also on the list are Apple Inc., Amazon.com Inc., Hugging Face Inc. and IBM.
The top industry players will work with the National Institute of Standards and Technology, a body within Commerce, along with other technology companies, civil society groups, academics, and state and local government officials to establish safety standards for AI.
“President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem,” Raimondo said in a statement.
Major tech companies have been engaging with the Biden administration and policymakers in Washington on regulating AI as the technology rapidly advances and stands poised to disrupt industries. Federal officials are seeking to maintain US leadership in AI development, aiming to set rules that protect Americans from hazards such as misinformation and privacy violations while still promoting the technology’s potential to spur progress in health care, education and other fields.
“Progress and responsibility have to go hand in hand. Working together across industry, government and civil society is essential if we are to develop common standards around safe and trustworthy AI,” Nick Clegg, president of global affairs at Meta, said in a statement. “We’re enthusiastic about being part of this consortium and working closely with the AI Safety Institute.”