Almost everybody has an opinion about the future of AI these days. But what do experts who spend day in and day out studying this technology think about its impact?
That’s what a recently published University College London poll of more than 4,200 AI researchers aims to pin down. The authors claim the preprint paper represents the largest social science survey of such technologists to date.
One caveat, however, is that the authors conducted the survey last summer, before the progress of the last several months, including the advent of reasoning models.
On the whole, the global pool of researchers—71% of whom have PhDs—were optimistic or neutral about the future of AI. More than half (54%) said AI has more benefits than risks, 33% said benefits and risks are roughly equal, and 9% said risks outnumber benefits.
Risk analysis: Among the top potential positives of AI that researchers cited were increased access to education (75%), making jobs easier (72%), improved access to healthcare (57%), and ease of household tasks (55%). Their top concerns were misinformation and fake news (77%), personal data being used without consent (65%), increases in cybercrime (59%), and reduced social interaction (47%).
Just over half (51%) said AI research will inevitably lead to artificial general intelligence (AGI), the research term for a system that can perform on par with or better than humans at most tasks, while 24% disagreed (the rest were neutral). However, the survey question didn’t ask about any hypothetical timeline for the rise of the machines.
Slow down: Four in 10 respondents said AI shouldn’t be developed as quickly as possible, while 29% favored pushing development at full speed. Those who believe AGI is inevitable were more likely to favor fast development and to see more benefits than risks overall.
Perhaps unsurprisingly, those working in the AI industry were more in favor of AI companies being allowed to train on any publicly available data than members of the UK public (25% versus 20%). But academics studying AI agreed with the general public, and were even more likely to argue that “AI companies should only be allowed to train their models...where they have explicit permission...from the original creator.”
The authors of the survey ended with an appeal for more opinion research into a wider community of AI researchers.
“There is a wide range of views among those researching AI, but these are currently drowned out by the loud voices of a few powerful people,” the authors wrote. “Some of these people lead AI companies, some are evangelists for the technology, and some are self-proclaimed ‘experts.’”
“Our survey results support the hypothesis that ‘distance lends enchantment’ in AI,” they added. “There is more uncertainty among the researchers who are closest to the technology. Our survey reveals that, beneath the surface, AI researchers have a range of hopes and fears about the technology that are broader and more complex than the public debate suggests.”