
Here are 3 big takeaways from Stanford’s AI Index report

The 434-page report covers everything from model progress to regulation.


If you have questions about the current state of generative AI, you might be hard-pressed to find one that’s not at least touched on in Stanford’s 434-page AI Index.

The report, from the university’s Institute for Human-Centered AI (HAI), covers everything from business adoption of AI and global regulation to research progress and its role in scientific discovery. We spoke with Stanford HAI’s director of research, Vanessa Parli, about some of the biggest takeaways from this year’s edition.

AGI nigh? AI models just keep improving on every test thrown their way. The report finds that the performance of AI systems on some of the most challenging benchmarks has improved rapidly in the past year. “There are very few task categories where human ability surpasses AI,” the authors write, and “the performance gap between AI and humans is shrinking rapidly.”

Given that finding, we asked Parli whether some of the more worrying headlines about AI and government leaders’ hand-wringing about artificial general intelligence (AGI) coming sooner than previously thought are onto something.

“It’s very difficult to tell,” she said. “We are seeing, as we have for the past few years, that these AI tools are getting better and better and at a faster rate.”

Some major caveats, however: There's been more debate lately about whether the benchmarks being used are actually the best way to measure human intellect, Parli said. And these AI companies are essentially "grading their own homework": there's no third-party organization that vets their performance claims, she added.

AI access: AI has become more accessible in the past year as small models have become more capable and LLMs have gotten cheaper to run overall. The cost of running a GPT-3.5-level model has dropped 280-fold since 2022, according to the report, while hardware costs have fallen and energy efficiency has improved.


Open-weights models have also significantly reduced the gap with closed models, “reducing the performance difference from 8% to just 1.7% on some benchmarks in a single year,” the report noted. Parli said the stir around DeepSeek is likely to spur open-weights growth further in the coming year.

“Scarcity drives innovation, and I think that’s what we saw with DeepSeek,” Parli said. “And I think that it likely inspired many others, if they were not already, to start getting more and more creative about engineering and use of resources.”

More scientific discovery: Some discussion in the AI community lately has focused on a persistent question: If AI is trained on all human knowledge, why isn't it making new connections and yielding more scientific discoveries?

Parli said AI has started to fuel more scientific research over the last couple of years, with Stanford adding a "Science and Medicine" chapter to the report for the first time last year. This year, the report highlighted advances in biology, materials science, and satellite-based fire monitoring, as well as the two Nobel Prizes awarded for AI-related work.

“So I actually find this area to be one of the more exciting areas of the report this year, and I think that it will continue to grow,” Parli said. “There’s just a lot going on in this space.”

You can peruse the full Stanford HAI AI Index here.
