Salesforce’s principal AI ethicist on the state of AI regulation worldwide

Kathy Baxter talked with Tech Brew about how companies can implement safe AI at every level.

With dozens of new AI laws and rules now in place across the globe, companies are making countless decisions about potential dangers, adding complexity to using AI responsibly.

Kathy Baxter, principal architect of Salesforce’s responsible AI and tech practice, chatted with us about how her team thinks ahead when it comes to AI responsibility, her role in advising government agencies, and the state of the regulation landscape.

This conversation has been edited for length and clarity.

You’ve been working in the responsible AI space for years now. What has changed about your work since everybody went crazy for generative AI?

With generative AI, many of the risks are the same as with predictive AI, but on steroids. So even higher risks of, say, bias and toxicity in content creation, and there are some additional new concerns, like hallucinations, which is just completely making up information out of thin air. There’s also a real sustainability risk, because this technology has a much larger carbon and water footprint than the traditional, smaller predictive models do. And so as we’re going through and thinking about the products, it gives us an increased risk space that we need to be thinking about.

It’s also changed in that it’s not as straightforward how you address each of these issues. There are some techniques that can help. For example, RAG—retrieval-augmented generation, grounding our models in our customers’ data—can really help with hallucinations, but it’s not always sufficient, especially if our customers aren’t practicing good data hygiene. If they don’t have complete, accurate, up-to-date data, then the model can end up hallucinating just as much as if you weren’t pointing to the customer’s data at all. And so we’re thinking about this increased surface area and all the ways that we need to mitigate it. That’s basically how our practice has changed, along with having a whole lot more people on our team covering all of this work now.
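To make the technique Baxter describes concrete, here is a minimal sketch of the retrieval-augmented generation pattern: retrieve relevant records first, then build a prompt that tells the model to answer only from that context. The retriever, the sample records, and the prompt wording are hypothetical placeholders, not Salesforce’s implementation, and it also illustrates her data-hygiene caveat: if the underlying records are stale or missing, the grounding does little to stop hallucination.

```python
# Minimal RAG sketch (hypothetical). A real system would use vector search and an
# actual LLM call; this only shows how retrieved context grounds the prompt.

def retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    # Keep only documents that share at least one term with the query.
    return [doc for doc in ranked[:top_k]
            if query_terms & set(doc.lower().split())]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context and instruct the model not to invent facts."""
    context = retrieve(query, documents)
    context_block = "\n".join(f"- {c}" for c in context) or "- (no matching records)"
    return (
        "Answer using only the context below. If the context is incomplete "
        "or out of date, say so instead of guessing.\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    # Hypothetical CRM records; if these were incomplete or outdated, the model
    # would be back to guessing despite the RAG plumbing.
    records = [
        "Acme Corp renewed its contract on 2024-03-01.",
        "Acme Corp's primary contact is Jane Doe.",
    ]
    print(build_grounded_prompt("When did Acme Corp renew its contract?", records))
```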

Then, personally, I’ve increasingly worked with our government affairs team to engage with policymakers and governmental groups to think about, “How do we set standards? How do we develop policies to ensure that this technology is safe for everyone?” And so, for example, I’m wrapping up my second year as a visiting AI fellow at NIST [the National Institute of Standards and Technology], working on the AI risk management framework, and I’ll be spending my third year working on ARIA [Assessing Risks and Impacts of AI], their testing and evaluation framework and standard. And then I also work with Singapore: I’m on their Advisory Council on the Ethical Use of AI and Data, and I’m part of the AI Verify Foundation.

I hear a lot of companies or people in the space looking to the NIST framework as a blueprint for AI risk assessment. What kind of work did you do on that?

I initially just started giving feedback to the folks working on it at NIST…I got a hold of the framework [during the public comment period], and I started reviewing it, and I was like, “Oh, my God, this thing isn’t usable.” And I reached out to them and said, “Hey, I’m a highly motivated subject-matter expert. And if I can’t use this thing, people who are not highly motivated, people who are not experts, there’s no way they can use this thing.” They didn’t have somebody from industry participating in this work, so I was the first nonacademic fellow to work with them, and they actually had to create a new job code so that I could come and contribute to this work without getting paid. So what I have brought to this work is an industry lens, to say, “OK, as a practitioner, how can or can’t I use this? What are the things we need to do to make this applicable in actual practice?”

Do you feel like it’s made a lot of progress toward that goal since then?

Yeah, the version that came out was, I think, significantly more usable than the one I initially read. And earlier this year, as part of the AI executive order, they made an update for generative AI, and so I gave a ton of feedback on that, and they’re continuing to iterate on it. So I haven’t seen a second version of that emerge yet, but there’s a brand-new one that has come out that adds a whole bunch of additional checks for generative AI.

What do you think about the Biden executive order and the state of the broader AI regulatory landscape right now?

Well, “state” is the key word: that really is where most of the regulation is coming from, individual states identifying the issues that they believe are the most pressing and creating regulations around them. At a federal level, we still really need a federal data privacy law in place. Salesforce strongly supports that, and I know there’s a lot of bipartisan support for it as well. So I have my fingers crossed that perhaps this year it can make it over the finish line.

Because if you don’t have data ethics, you can’t have AI ethics. AI is run on data, so we really need foundationally strong data privacy regulations. I think the AI executive order was a great first step to get a lot of momentum going…[but] on a global stage, how do we create harmonization? Because each region is doing its own thing. The EU is doing one thing; the UK, US, and Canada are doing others; Asia has a lot of different perspectives. Singapore has said it’s going to be hands-off in terms of regulation, but what it is going to do is provide a ton of resources to make it as easy as possible for companies to build AI responsibly. Malaysia’s AI regulation is much more specific: it has encoded Islamic values into its national AI policy, with requirements that your models cannot say anything against the monarchy or against Islam. So there’s a lot of variability. And what we really need is harmonization, to be able to say, you know, here’s the 80% of all the different requirements across all of the different regions that are pretty much the same, and if you can do that 80%, you’re pretty darn good. And then call out what is unique in each region: if you’re going to go into Malaysia, or you’re going to go into the EU, here are the additional things that you need to be able to do…That’s what I would really love to see nations come together to create.

Speaking of state legislation, do you have an opinion on SB 1047, the proposed AI safety bill in California?

Yes, there are lots and lots of opinions. I’ve read quite a bit, and I’ve attended many, many events where people have shared their feelings on it. I love that [State Senator] Scott Wiener has been so passionate about this and has really been very bold in what he’s putting forward. I think there are areas that we do need clarification on. We do need to be more specific in the wording of the regulations. So I think there’s some fine-tuning that needs to be done on the regulations and the requirements for it, but I absolutely love that California is always at the forefront in thinking about, “How do we make sure that the technology is safe for everyone?”

Are there still misconceptions about how to set up a responsible AI practice?

I don’t think most boards know what is necessary to move along the maturity progression, but it’s incredibly important for the board to be hearing from a group of experts, so that they’re getting some type of regular reporting. The first couple of recommendations I would give boards are: one, make sure there is a diverse group of voices that you’re hearing from with regard to AI. So you’re hearing from security, you’re hearing from privacy, you’re hearing from user or customer research, and, hopefully, you have an ethical use team you’re hearing from as well: all of the different groups that are critical to building AI responsibly. Then you get documentation from them about that. It might be in the form of model cards or an impact assessment; some type of documentation that the board can take and digest…before the meeting. And then you’re having a real discussion about what evaluations have been done, what you have found, and what you have helped to mitigate.

Then, you want to make sure that you really understand all the ways your company is using AI, particularly anything that is high-risk or about making decisions with legal or similarly significant impact. And I use that wording very specifically, because it comes from GDPR, but it’s also very relevant to the EU AI Act. Increasingly, we’re going to see regulation that is focused on high-risk use cases. Some regulation, including SB 1047, talks only about models above a certain number of FLOPs…but most of the regulation is focused on what the risk surface is for what you’re trying to do. So it’s important for the board to understand all of the applications you are building or implementing AI for. And then the final one goes back to the issue of data: What shape is your data house in? Where are you getting your data from? Is it fully consented? How accurate and up-to-date is it? Is this something you can use in your models and feel confident is safe, that it’s not going to be leaking sensitive data? Those are a lot of really thorny issues that a board needs to be asking questions about.

Update 07/18/24: This piece has been updated to clarify when Baxter reviewed the NIST framework.
