Companies are finding all kinds of tasks for AI technology these days, from summarizing financial documents to assisting with legal briefs. But should they?
Part of Kathy Pham’s job as VP of AI and machine learning at HR software giant Workday is to evaluate the risk and efficacy of applying AI to a given function across the company’s sprawling hiring and HR tools. Love or hate its ubiquitous software, the company is using AI in a host of ways, from expediting expense reports to surfacing skills for a job—a tricky task when a growing number of laws are addressing the use of AI in hiring.
Pham is also drawing on experience asking those kinds of questions on a broader scale: She served as executive director of the National Artificial Intelligence Advisory Committee at the Commerce Department’s National Institute of Standards and Technology (NIST) and held previous roles at the White House and the Federal Trade Commission.
We talked to Pham about why she made the switch to Workday last fall, how she evaluates AI risk at the company, and where AI regulation is headed next.
This conversation has been lightly edited for length and clarity.
What brought you to Workday after spending time in the public sector?
My time in the public sector taught me a couple of different things. One is the power of infrastructure that works, and also the power of policies that exist when that infrastructure doesn’t work. And so for me, with Workday, I saw an opportunity to be part of a company that is the system of record for so many other customers, companies, organizations around the world, and one that has been doing it really, really well. And when you have a system of record for people and money data, you can’t mess that up, especially when you think about how to really enhance that work with any kind of AI and machine-learning technologies…What, candidly, was really refreshing was that [Workday] co-president Sayan [Chakraborty] was serving as a member of the National AI Advisory Committee. And he just had this very practical, honest approach to [talking] about AI, the technology itself, what it can and can’t do, and the problems that it should or should not solve. And it was so refreshing to have that perspective amongst a sea of the extremes of AI.
What do you mean by extremes?
If you were to reduce it down to extremes, you have the groups that are [saying], “Let’s put AI in everything, it’ll solve all the problems.” Then you have the groups that are [saying], “No AI on anything at all, because it will cause so many problems”…And I actually talk to my colleagues a lot about fostering that sense of curiosity, to dive into “What even is this technology?” It’s been around since the 1950s; the first concept, the Turing test, dates to that era. And we’ve had some version of the technology built over time: AI, data analytics, natural language processing, sentiment analysis.
In pulling together so many of these different groups, with the National AI Advisory Committee and other groups I’m part of, [I’ve focused on] really fostering that curiosity to question, “What can this technology do?” Maybe it’s really good at pattern matching. Or maybe it’s really good at analyzing amounts of text that we could handle ourselves if our brains were physically large enough. And then, what is it not good at? What should we not use it for? Or maybe it’s too risky…There are also just things these technologies are not good for; maybe you should use an Excel spreadsheet, or maybe you should use two SQL joins in a database. And so I think to mitigate some of the extremes, it’s just a better education of when it should and shouldn’t be used. That takes an understanding of what the technology is, across disciplines, too: from the people building the technologies to the ones thinking about the experience, the ones building the models, and the ones handling data privacy and engineering. We have people in all of those buckets thinking about their role in building an AI system.
How do you evaluate the risk of using AI for a given task at Workday?
We have an internal risk evaluation framework where we think of low, medium, high, and unacceptable risk. We built that off of other frameworks that exist out there, including the AI Risk Management Framework that came out of the National Institute of Standards and Technology (NIST). So some of it is applying our own expertise and understanding of the workforce, and some of it’s reaching out to governments, like, “Oh, what do you have? Can we take a look?” This is not new; we’ve used things like security frameworks in governments forever. And then, in the process, we also contribute our learnings back into the government, like, “Oh, in practice, this is what it looks like for us.” And we help them enhance their framework as well. So we have this framework, from low to unacceptable risk, and we use it across our product, engineering, and experience teams to evaluate…High-risk doesn’t mean we don’t do it. It just means we think about the engineering, experience, and product practices that we put in place to mitigate a high-risk scenario versus a low-risk scenario.
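Pham doesn’t spell out the framework’s mechanics, but the shape she describes, tiers that trigger layered mitigations rather than a simple go/no-go, might look roughly like the minimal Python sketch below. Every tier name, practice, and function here is a hypothetical illustration, not Workday’s actual framework:

```python
from enum import IntEnum

class RiskTier(IntEnum):
    LOW = 1           # e.g., receipt OCR: errors are cheap to correct
    MEDIUM = 2
    HIGH = 3          # e.g., performance or compensation: affects livelihoods
    UNACCEPTABLE = 4  # e.g., workplace surveillance: not built at all

# Practices layer cumulatively: a higher tier inherits everything below it.
BASE_PRACTICES = ["error monitoring", "simple correction workflow"]
EXTRA_PRACTICES = {
    RiskTier.MEDIUM: ["bias testing", "documented model evaluation"],
    RiskTier.HIGH: [
        "privacy review",
        "human makes the final decision",
        "dedicated product/engineering/experience sign-off",
    ],
}

def required_practices(tier: RiskTier) -> list[str]:
    """Return the mitigations a use case at this tier must carry."""
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError("Unacceptable risk: the feature is not built.")
    practices = list(BASE_PRACTICES)
    for level, extras in EXTRA_PRACTICES.items():
        if tier >= level:
            practices.extend(extras)
    return practices

# A low-risk feature gets the base practices; a high-risk one gets
# everything layered on top, matching the "we still do it, with more
# safeguards" posture Pham describes.
print(required_practices(RiskTier.LOW))
print(required_practices(RiskTier.HIGH))
```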
Can you give an example of a high-risk scenario where Workday has decided to go ahead with it anyway?
I’ll give you both a low-risk and a high-risk example. For low risk, you might look at something like OCR [optical character recognition] for scanning an expense report receipt. You don’t want to get it wrong, but we’ll have things in place where, if you do get it wrong, the mitigation process isn’t that bad—we’ll figure it out. A high-risk example is, let’s say, someone’s performance or compensation. That can affect how much they get paid; it could affect their livelihood; maybe they’re living paycheck to paycheck. It could be any number of scenarios—we don’t want to mess that up. It doesn’t mean that we don’t want to use any kind of machine learning to help in the process. It just means that we have another set of engineering, product, privacy, and experience practices that we layer on top of that, to make sure that we do it right.
Are there examples of things that Workday has looked at and decided were way too risky?
We decided early on that we don’t do workplace surveillance technology. We might have the capability, we might know the details of certain things, but we just don’t want to be in that business…[Also], because of the infrastructure we’ve built, we store the data of all 10,000 of our customers in one cloud. And we have the technical capability to pretty much train on all of it, but we have privacy structures in place that allow customers to decide what we can or can’t train on. And we take that very seriously as well.
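Workday hasn’t published how those controls work, but the idea of customer-controlled training consent could be sketched like this, with hypothetical names and fields throughout:

```python
from dataclasses import dataclass

@dataclass
class TenantRecord:
    tenant_id: str
    payload: dict
    training_allowed: bool  # set by the customer, not by the vendor

def training_corpus(records: list[TenantRecord]) -> list[dict]:
    """Keep only records whose owning customer opted in to model training."""
    return [r.payload for r in records if r.training_allowed]

records = [
    TenantRecord("acme", {"skill": "nursing"}, training_allowed=True),
    TenantRecord("globex", {"skill": "welding"}, training_allowed=False),
]
print(training_corpus(records))  # only the opted-in customer's record survives
```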
Does Workday use AI in hiring decisions, in looking at candidates, or anything along those lines?
What we think about with hiring decisions, for us, is that it’s always the human or the company that makes the final decision. So we might have something like Skills Cloud, which we began rolling out 10 years ago: if you put out, let’s say, a gig inside your company because you need…temporary healthcare workers for a while, we might surface interesting, relevant skills that match, but we ultimately don’t make the final decision. Because actually, one of our core tenets of AI that we’ve listed externally [is that] we see that it’s really important that there is a person making the final decision.
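As a rough illustration of that tenet (with entirely hypothetical names and data, not Skills Cloud’s actual API), a system might rank matches but refuse to act without a person’s sign-off:

```python
def surface_skill_matches(gig_skills: set[str],
                          workers: dict[str, set[str]]) -> list[tuple[str, int]]:
    """Rank workers by skill overlap with the gig: a suggestion, not a decision."""
    scores = [(name, len(skills & gig_skills)) for name, skills in workers.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

def assign_gig(worker: str, approved_by_human: bool) -> str:
    """The system never assigns on its own; a person makes the final call."""
    if not approved_by_human:
        raise PermissionError("A person must make the final decision.")
    return f"{worker} assigned."

matches = surface_skill_matches({"phlebotomy", "triage"},
                                {"ana": {"triage", "phlebotomy"}, "bo": {"welding"}})
print(matches)                                   # [('ana', 2), ('bo', 0)]
print(assign_gig("ana", approved_by_human=True))
```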
Now that many different parts of President Biden’s executive order have rolled out and we’re starting to see some of them take effect, what do you think needs to come next in terms of regulation? Or what would you like to see done?
In the US, we have executive orders. We also have the NIST AI Risk Management Framework, which is like a governance structure that we contributed to. And we have the Federal Trade Commission, which is an enforcement agency. It kind of mirrors our three branches: You have the executive branch, then you have the legislative, and then you have justice, so you have enforcement. But legislation moves a bit slower. So I do think, [compared to] something like the EU AI Act, we have some work to do. And we have a whole team at Workday thinking about working with our lawmakers as well. But I do think there’s some room for growth in that space. And I think what that means is we need folks to come to the table to help our government understand the technology, so that we can write…legislation, executive orders, guidance, whatever it is, in a way that is rooted in some level of practice.