OpenAI CEO Sam Altman was in Washington on Tuesday to testify before the Senate Judiciary Committee about his thoughts on how to regulate the artificial intelligence industry. In a hearing that lasted just under three hours, Altman answered questions about AI’s impact on everything from jobs and advertising to elections, legal liability, and music.
And ultimately, he encouraged the government to help determine how AI is developed and used in the future, both in the US and abroad.
“We have tried to be very clear about the magnitude of the risks here,” Altman said. “I believe that companies like ours can partner with governments…facilitating processes to develop and update safety measures and examining opportunities for global coordination,” he added later.
The Senate Judiciary Subcommittee on Privacy, Technology, and the Law also heard from IBM Chief Privacy and Trust Officer Christina Montgomery and scientist Gary Marcus in the first of a series of hearings “intended to write the rules of AI,” Subcommittee Chair Richard Blumenthal said.
The Connecticut senator, who opened the hearing by playing an AI-generated version of his own voice reading a statement written by ChatGPT, later said Congress “failed to meet the moment on social media,” but should push for transparency and accountability from AI firms.
Building a new framework
Altman endorsed the use of independent audits and suggested a combination of “licensing and testing requirements for development and release of AI models above a threshold of capabilities,” as well as the creation of a new international agency tasked solely with regulating AI.
Montgomery said that IBM is urging Congress to deploy a “precision regulation approach to AI,” creating rules for deployment and individual use cases, rather than for the technology itself.
“The strongest regulation should be applied to use cases with the greatest risks to people and society…By following a risk-based, use-case-specific approach at the core of precision regulation, Congress can mitigate the potential risks of AI without hindering innovation,” Montgomery said.
Regulating risks
Altman acknowledged the potential impact of AI on jobs and expressed concern about the role tools like GPT-4 could play in spreading election disinformation.
“GPT-4 will, I think, entirely automate away some jobs, and it will create new ones…So there will be an impact on jobs. We tried to be very clear about that. And I think it will require partnership between the industry and government, but mostly action by government, to figure out how we want to mitigate that,” Altman said.
He also pointed to the potential for GPT-4 and systems like it to impact political elections as a “significant area of concern” and an area where “some regulation would be quite wise.”
But Altman cautioned senators against using the “frame of social media” as a way to think about AI’s potential impact. “This is different, and so the response that we need is different. This is a tool that a user is using to help generate content more efficiently than before,” he explained.
While Altman stopped short of Marcus’s categorization of AI as having the potential to threaten “democracy itself,” he did give senators a warning.
“I think if this technology goes wrong, it can go quite wrong,” he said. “And we want to be vocal about that. We want to work with the government to prevent that from happening. But we tried to be very clear-eyed about…the work that we have to do to mitigate that.”