
New AI watchdog hopes to thwart 2024 disinformation campaigns

CITED plans California, national policy slate.

Keep up with the innovative tech transforming business

Tech Brew keeps business leaders up-to-date on the latest innovations, automation advances, policy shifts, and more, so they can make informed decisions about tech.

Seeing is believing, but not when it comes to deepfakes.

AI-generated videos like the viral “This is not Morgan Freeman” clip can be amusing, but advances in the technology mean that voters could soon be duped by messages that appear to be from real politicians.

The new California Institute for Technology and Democracy (CITED) hopes to change that by raising awareness of AI's risks and lobbying for regulation ahead of the 2024 election cycle.

During a virtual event and press briefing Tuesday, Ishan Mehta, national media and democracy program director for Common Cause, a nonpartisan government accountability watchdog, said CITED hopes to build on the momentum generated by the Biden administration’s recent AI executive order. (CITED was created by California Common Cause, the entity operating on behalf of Common Cause in the state.)

“It’s really encouraging to see this sort of whole-of-government mobilization to understand and combat the sort of harms of AI,” Mehta said. “But especially on the issues we’re talking about, on democracy and its intersection with technology, we really are looking for Congress to step up.”

CITED’s organizers say it will bring subject-matter experts together to study how California can develop its own state-level framework for responsible AI usage. According to a news release, the think tank is the first of its kind to provide state-level policy leadership on emerging AI and democracy issues.

But CITED won’t stop with state-level legislation and initiatives. According to Jonathan Mehta Stein, executive director of California Common Cause, the group plans to roll out a national policy agenda in Washington, DC, in January.

“This is a new policy field, still developing. And so we are currently investigating a number of possible policy solutions. We’re looking at digital watermarking. We’re looking at deepfake labeling. We’re looking at algorithmic transparency,” Stein said during the event.

He noted that the group will also study “how a broader set of issues impacts” the world of generated content, including boosting local news coverage to help combat misinformation and expanding digital media literacy training and civics education initiatives.

AI scrutiny will be especially timely as political campaigns ramp up, Angélica Salceda, democracy and civic engagement director for the ACLU of Northern California, told attendees Tuesday. In the wrong hands, the technology can be used to amplify disinformation about candidates.

“Even without the use of AI, we’ve seen some unscrupulous campaigns use these tactics,” Salceda said. “So now imagine the same tactics supercharged, with the use of AI to identify and target historically disenfranchised voters. The use of AI is widespread, and it’s only growing. It’s having, already, a profound impact on our liberties and rights.”