Ask Meta’s AI chatbot to generate an image of a presidential debate, and the platform’s usual commitment to photorealism is replaced by a clunky cartoonishness.
Fed the same prompt, Microsoft's Image Creator produced a scene of two figures who vaguely resembled the Democratic and Republican candidates, though with indistinct features and a general lack of detail.
Meanwhile, X’s Grok image generator will readily offer up near-photo-quality images of the two candidates.
With the US presidential election now in the home stretch, most of the major image generators seem to be taking sometimes subtle steps to steer users away from creating election deepfakes. Large online platforms, many of which now have AI generation features embedded, have—mostly—made moves to crack down on election-related misinformation.
Yet reports show that AI-powered misinformation has continued to spread online in recent months, though experts say its true scope and influence are hard to gauge. And with the bad actors behind these deepfakes sometimes tricky to pin down, some new regulatory pushes around the issue have raised the question of whether the tools that create deepfakes, and the platforms that spread them, should be held liable.
“Who has the most money to go after is my question,” said Veronica Torres, worldwide privacy and regulatory counsel at identity verification company Jumio. “It’s a hard question to answer, because the answer should be both…The producer of the content should be the one who has the most fault, but they might not be the ones who are easily identifiable. And so there’s a certain level of responsibility across the different use cases and across the different distributors along the line.”
Keep reading here.—PK