For peddlers of fake news and online scams alike, 2023 was a banner year.
An analysis from news trustworthiness ratings company NewsGuard claims that the rise of AI has “transformed the misinformation landscape,” as tools like large language models (LLMs) let bad actors churn out dodgy media on a larger scale.
But at least one company is betting that AI can also be a solution to this quagmire. Otherweb is a news aggregation feed that seeks to use transformer models to evaluate the credibility and substance of given news articles and generate a “nutrition label” to accompany them. Founder and CEO Alex Fink claims the platform has now garnered 7 million monthly active users.
Fink told Tech Brew that Otherweb was founded on a simple premise. “It seems like the biggest problem facing people is that everybody’s consuming junk all the time,” he said. “We need a way to improve information quality in some way.”
Otherweb started as a browser extension before evolving into a website and app as well. The platform displays an extracted summary of the news article alongside a “nutrition label” with metrics like “article tone,” “language complexity,” and data on the number and diversity of sources used. That information is gleaned using relatively small AI models, with datasets sometimes annotated by academics, which are made public for transparency purposes, Fink said.
The company’s app also makes use of a dating app-style system of swiping and a governing algorithm that the company has not yet made public but that Fink said is “embarrassingly simple.” As of now, Otherweb does not pull from paywalled sites and has yet to work directly with any of the publishers from which it draws content, Fink said.
The platform does not currently attempt to determine political bias in articles, Fink said, to avoid the landmines that could come with doing so.
“We found that it’s so charged these days that even if we give the most balanced, most accurate classification we can, people will be offended,” Fink said. “Somebody on the right will think a piece that we labeled ‘centrist’ is actually extreme left; somebody on the left will think that something labeled ‘moderate’ right is actually extreme.”
But while much of the attention around AI’s nefarious potential has focused on deliberate political misinformation—especially with the US presidential election fast approaching—Fink said more run-of-the-mill ad-grabbing spam is a bigger problem. For instance, last week, 404 Media reported that AI-generated rip-offs of articles have started to infiltrate Google’s news tab.
“There’s a much broader category that is polluting the internet. Right now, that is just junk,” Fink said. “It could be clickbait. It could be other things. But I think most of the bad content you see out there wasn’t created with the goal of misleading you on purpose. It was created with the goal of being monetized, getting clicks, getting views, showing ads, things like that.”
Otherweb isn’t the only tool attempting to tap AI as a way to combat this type of content. Facing an onslaught of AI-generated misinformation and scam content, some fact-checkers and information security startups have turned to the filtering and contextual power of LLMs to match the newfound scale of misinformation. So-called nutrition labels have also become a popular way of giving consumers more transparency without being overly prescriptive.
The demand for AI tools to aid in this fight will become a lot more urgent, Fink predicted, as AI continues to proliferate and the amount of LLM-created “junk” online balloons by “10- or 20-fold.”
“The signal-to-noise ratio is about to go down quite a bit,” Fink said. “And as generative AI becomes bigger, I think we need curative AI to grow just as fast. Just like you have spam and spam filters or viruses and antiviruses, we need better ways to weed through information, as we’re creating better ways to create information quickly. Because when you create information quicker and quicker, it’s less and less valuable.”