AI companies OpenAI and Anthropic have forged deals with the U.S. government for the testing and evaluation of their AI models, the U.S. Artificial Intelligence Safety Institute announced on Thursday.
According to Reuters, the agreements that OpenAI and Anthropic signed with the government are considered the first of their kind. They come at a time when companies face scrutiny over the responsible use of AI technologies, particularly on matters of safety and ethics.
Amid that scrutiny, legislators in California are slated to vote on a bill that would regulate how AI is developed and deployed within the state.
Jack Clark, Co-Founder and Head of Policy at Anthropic, said that safe, trustworthy AI is essential for the technology to have a positive impact.
“Safe, trustworthy AI is crucial for the technology’s positive impact. Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment,” Clark said.
To help ensure the safety and reliability of the companies’ AI products, the deals give the government access to major new models both before and after their public release, U.S. News reported.
Beyond this system of checks, the deals also enable collaborative research between the companies and the government’s institute, aimed at better evaluating the capabilities of the AI models being developed as well as their associated risks.
“We believe the institute has a critical role to play in defining U.S. leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on,” said Jason Kwon, chief strategy officer at OpenAI.
Aside from working with OpenAI and Anthropic, the institute will also be collaborating with the U.K. AI Safety Institute.