
Stanford researcher sees finance, medicine benefiting from open-source AI

Rishi Bommasani, the society lead at Stanford’s HAI, also discusses legislative proposals with Tech Brew.

Advocates say more openness around artificial intelligence boosts healthy competition and distributes power while furthering scientific research and collaboration. But open models also pose the risk of aiding nefarious uses of the tech.

A new Science paper from researchers at Stanford’s Institute for Human-Centered AI (HAI) aims to take a clear-eyed look at just how much marginal risk open models pose relative to their closed counterparts, as well as the benefits they offer and the policy considerations they raise.

We spoke with Rishi Bommasani, society lead at the HAI’s Center for Research on Foundation Models and coauthor of the paper, about where AI is actually proving most dangerous, why openness is important, and how regulators are thinking about the open–closed divide.

This conversation, the second of two parts, has been edited for length and clarity, and contains references to materials related to child abuse.

I want to ask about the benefits. As OpenAI has moved toward a more closed model, and some of these other big companies have become more closed as more money has gone into the AI race, is there still a robust open foundation model ecosystem?

It’s definitely been true for a while—and I would say it’s still true—that if you only care about raw capabilities, the most capable models are still on the closed side. Both the open developers and the closed developers are constantly improving capabilities…and right now, what you would see is that the open developers are closer to the closed ones than they have been in the past. But I think this also reflects that we don’t have great benchmarks for really revealing the most capable models, since a lot of benchmarks are saturating at the moment.

The other thing I would say, though, is where open models are doing very well is really not [in] just broad capabilities, but the capabilities-to-cost tradeoff, which is ultimately what most businesses will care about. And this appears in a few different ways. You see much smaller open models, which are both necessary for some applications, like on end devices, and are just desirable for an even larger number of applications because they’re cheaper to run. And what you’re also seeing is the cost per token is being driven down, both on the open and closed side, probably because these open alternatives exist. So, you’re seeing increasingly competitive pricing around the cost in dollars to, say, produce a million tokens from a given model. You’re also seeing this layer of platforms, essentially, that take in a bunch of open models and perform inference on those models for some user. And if you look at the prices those platforms are charging—things like Amazon Bedrock or Together, or Microsoft Azure…you’re seeing that rates are coming down quite a bit, which is obviously great for consumers and for competition. So, that’s a huge benefit…that we’re seeing play out empirically.
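As a rough illustration of that capabilities-to-cost tradeoff, a back-of-the-envelope comparison of per-token pricing might look like the sketch below. Every number in it is a hypothetical placeholder, not a quote from any provider.

```python
# Back-of-the-envelope comparison of inference pricing.
# All prices below are hypothetical placeholders, not actual provider quotes.
prices_per_million_tokens = {
    "closed API": 10.00,              # $ per 1M tokens, illustrative
    "open model, hosted": 0.90,       # e.g., via an inference platform
    "open model, self-hosted": 0.40,  # rough amortized GPU cost
}

monthly_tokens = 250_000_000  # assume 250M generated tokens per month

for option, price in prices_per_million_tokens.items():
    cost = monthly_tokens / 1_000_000 * price
    print(f"{option}: ${cost:,.2f} per month")
```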

Another benefit, which I’m seeing more anecdotally, also [when] talking to some colleagues who work in medicine, including at Stanford’s hospital, is that you can run inference locally with open models, and therefore you don’t need to send, let’s say, OpenAI your data, which in some domains is critically important. Even if OpenAI has great data privacy and security, legal requirements and so on may mean you don’t want to send the data at all. So, we’re seeing that play out both in medicine and finance. It’s still early to see what the overall economic impact of that will be, but I think you’re seeing at least the more basic thing: downstream actors are adopting open models preferentially because of this advantage. And this is no different at other levels of the stack.
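For readers curious what running inference locally can look like in practice, here is a minimal sketch using the open-source Hugging Face transformers library. The model name and prompt are placeholders; a real deployment in medicine or finance would use a larger open-weight model and its own serving infrastructure.

```python
# Minimal sketch of local inference with an open-weight model.
# Weights are downloaded once and cached, so prompts and data
# never leave the machine at inference time.
# "gpt2" is only a small placeholder model for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Summarize the key risks in this quarterly report:"
output = generator(prompt, max_new_tokens=60)
print(output[0]["generated_text"])
```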


Are there any examples of proposals or legislation in place that you think do a good job of walking the line between regulating the risks and not overburdening open models?

If you’re going to impose a greater burden on open models, there should be justification. The issue is—and SB-1047 is an example here—it’s fully possible to write a piece of policy whose language doesn’t distinguish open versus closed, but whose effect is a greater burden on one—usually on open rather than closed. The AI Act in the EU is an interesting example because what they do there is basically say…if a model poses systemic risk, then it doesn’t matter whether it’s open or closed; there are some significant obligations. But for models that don’t pose systemic risk, open models are largely exempt, while closed ones have some obligations to be transparent. And the idea is that openness is beneficial to innovation; especially in the EU, there’s a strong belief that openness will help the EU specifically innovate in the space. For that reason, in addition to the fact that openness allows for some types of transparency, openness is exempt and has fewer obligations…I think that’s an interesting approach. The EU is still working through its implementation, which I’m helping with, so we’ll see exactly how it lands in the final version of the AI Act. But I think it’s interesting that you’re seeing a proactive recognition that openness may be good, and therefore a lessening of obligations. And the EU has actually done this before, and so it has generally had a pretty pro-openness posture in some of its digital technology regulation. So, we’ll see how that evolves.

[In the US, President Biden’s] AI executive order required the NTIA, an agency within the Department of Commerce, to prepare a report for the president on open models. And the report very much talks about marginal risk and so on. It mostly recognizes that openness has many benefits, and mainly says that we’re in a wait-and-see period. We should monitor the risks of openness, but right now, there are none that have risen to the level where we should take very strong policy action. To me, that’s where we are—we probably want transparency-related obligations for all foundation models, increasing transparency and reducing information asymmetries. And it seems like that’s where most of the different jurisdictions are moving in terms of the AI Act, the executive order, the G7’s code of conduct—I think we might see some stuff out of the UK as well. So, that also seems to have broad consensus as a good policy intervention. And then I think as we start getting to more substantive things beyond transparency, that’s probably where we’ll start seeing the asymmetry between open and closed. And I think what we might also see instead is approaches where we try to address specific harms that are coming from openness, i.e., where there is marginal risk. So, we might see more bespoke solutions for, say, addressing CSAM [child sexual abuse material] and NCII [non-consensual intimate images] through a vertical approach, rather than a horizontal approach across all models.
