We sat down with Mark Surman, president of the Mozilla Foundation (parent of Mozilla Corp., of Firefox fame), to talk all about…what else? Artificial intelligence. We chatted about how to define “responsible tech,” how the foundation spots smart AI investments, and about his “slightly countercultural” view on AI regulation in the US.
This interview has been edited for length and clarity.
How is the Mozilla Foundation thinking about AI? What, in your view, are the top concerns and use cases?
About five years ago, it became really clear to me and our board that the web and privacy, the things you think of as Mozilla’s traditional issues, weren’t a frame that was going to get us the impact we needed as things moved forward…And so I took the foundation side of Mozilla and pointed us at trustworthy AI.
So of the $30 million we spend on that nonprofit side, on policy, advocacy, fellowships, and grants—all that stuff—100% of it has focused on trustworthy AI for the last four or five years, which is kind of ahead of the conversation now, and ahead of the rest of Mozilla. The reason being, we already saw that the web, data-driven computing, automated decision-making, all the things that make up AI, were gonna shape where the next decade or two would go…It also became clear…maybe 18 months ago, that while it’s important to do that movement-building advocacy stuff, we also needed to get into the game, building AI tech that had our values. And so we spun up two new things; some of this is starting to get into the main Firefox company as well.
[One is] a venture firm with a 100% focus on responsible tech companies. And the idea is that we want there to be a market for responsible tech, but it certainly will not all come from us. So we set up a $35 million experiment…an initial venture fund focused on responsible tech with a big AI focus.
And then also an AI R&D company that’s meant to take academic computer science research or community open-source projects and transform them into commercial products or scalable tools, and really to productize a lot of the best thinking that we think is out there. The technical thinking and work around responsible AI needs help getting to scale and getting to relevance; that’s what our AI R&D company is focused on.
How does Mozilla define responsible tech? How do you define responsible AI and what are you looking for in investments?
What we decided to do for fund one is that every investment memo has got to identify one element of the Mozilla Manifesto that, if the company succeeds at its vision for the product, it would actually advance in some way, while at least doing no harm on the others.
And so this little manifesto has got 14 principles, things like “privacy and security are non-negotiable,” or there’s a piece around inclusivity, or a piece around healthy, respectful communities. So those then become the things we look for in the company: Does the founder, or the founding team, have a real desire and a vision for how their product can advance one of these things?...On the tech side, the phrase we use is “trustworthy AI,” not “responsible AI.” But it goes back to those same things I said before: agency and accountability. Is there some piece where, effectively, the use of AI is there for the purpose of empowering users? Is there an approach to thinking about guardrails or accountability in how AI is being built in the company?
From your perspective, where are we right now when it comes to regulating AI? What are the key stepping stones the industry and policymakers need to reach?
I have a view on this that’s probably part unsurprising, part countercultural.
The unsurprising part is [that] the internet, and then AI as the kind of technology defining this era of the internet, have become so infused in our society, and so completely defined by the private sector, that we’re at a moment where we have to step back and make some rules about how we want to balance the interests of society and the interests of private actors, the interests of the US and the tech companies, of people and the products they use. And it’s just screamingly urgent to look at that…There’s been an increasing consensus on the need for tech regulation in the last few years. And AI, as we’re talking about systems that self-generate or kind of grow over time…it actually adds a bunch of complexity to how you would strike that balance between people’s rights and the things that tech companies build. So that’s what I think is at stake.
My slightly countercultural view is…we’re not too late. I think often, in the current debate, people are like, “We’ve screwed up, we haven’t regulated tech enough, we’re way too late, it’s out of control.”
My view is that this is what happens in major technological and economic revolutions. They start unregulated because nobody even knows that they’re going to be major technological revolutions. You don’t know what’s going to be the thing that takes off…I think we’re at the spot where the industry is mature enough, and its relationship to society is clear enough, that now is the time to really get good at regulating.
That’s the first step: to get good at regulating. It won’t be that we write a perfect law…So I actually think that the thing we should be asking for from our governments—and it’s happening, and has happened—is to bring people into government and build the muscle to be able to regulate tech well.