Everyone might be talking about Web3, but Sarah Guo is thinking about what she calls “software 3.0.”
A former general partner at Greylock Partners, Guo set off on her own in June and, in early October, debuted Conviction Partners, a new firm with $100 million in funding. The fund will invest in seed-stage software and hardware companies at the forefront of software innovation, opening the door for what she called “AI-native” companies.
We caught up with Guo earlier this month to talk about her investment vision and what she sees as the next step in software and AI.
This conversation has been edited for length and clarity.
What’s your vision of the future of software and the role of AI in developing the next leap forward in tech?
The base premise here is that modern machine learning algorithms give us these new, really powerful, very horizontal capabilities: models that are getting better and better at prediction and generation across all these different modalities. So language, math, voice, code, etc.
They’re going to keep getting better; that’s what we’ve seen in the research. On a 10-year basis, you see research progressing, models getting better and getting bigger, which improves performance.
I think there’ll be a decade-plus ingestion of those capabilities up and down the stacks of a bunch of different companies. That creates opportunities for existing companies, and there’s also startup opportunity: there’s a new mindset required to build machine learning companies, what I call software 3.0 or AI-native companies.
How are you planning on investing in companies to facilitate this change and help companies develop their technology?
We’re going to invest full stack. Think: chips and processors made for this type of computation, cloud infrastructure that is designed for this type of computation, developer tools for building these types of applications. And then specific applications in sort of every vertical.
There’s a huge amount of opportunity around how you work with, for example, observability or security data, and around how you automate workflows. There could be applications in sort of every other departmental application: HR, finance, CRM, etc., as well as some areas that we think of as completely new applications.
And so I think some of the easiest examples to think of are autonomy, robotics, some of these things. You’re not going to get self-driving cars without machine learning. It’s not an existing application category.
Or you can even think of perhaps some of the new generative creative tools as new categories of software. It’s quite different. In a very short amount of time, [you’ll be able to] upload a few images and a natural language prompt and get a short-form video that you could use for advertising. That’s not an existing category of software.
The capabilities are going to get broadly much more powerful. And that is going to disrupt a huge number of industries up and down the stack and across verticals.
Do you think we’ll develop a clearer understanding of how black-box algorithms work in the near future?
I wish the answer was, like, a hard yes…At the extreme, if we want something that is so flexible in its capability that it mirrors the human brain and can learn for itself—we don’t understand how the human brain works. So, it is quite possible that we will be able to replicate a similar range of capability without fully understanding the complexity.
There’s a judgment call about how you manage that. What I do think is going to happen, or what I see happening already, is that these algorithms are not hard-coded black boxes. They’re learning [by] themselves. That doesn’t mean you just give up on explainability, safety, and guardrails.
There’s a lot of really interesting work on the research side to do that ex post facto, after the fact. We can determine the major factors that the algorithm is looking at when it makes certain predictions or classifications or generates certain content, and we can put [in] guardrails to make sure that certain biases that exist (because the data itself is biased; the internet is biased) don’t show up in the outcomes of the model.
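The post-hoc explainability work Guo describes can be sketched with a simple technique: permutation importance, which probes a black-box model only through its predictions. The toy model and data below are hypothetical, purely for illustration; it is not any specific system Guo references.

```python
import numpy as np

# Toy "black box": we only ever call predict(), never inspect its internals.
# For illustration, it secretly depends mostly on feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

def predict(X):
    # Stand-in for any trained model's prediction function.
    return (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Measure how much accuracy drops when each feature is shuffled.

    A bigger drop means the model relied on that feature more heavily,
    even though we never looked inside the model itself.
    """
    rng = np.random.default_rng(seed)
    baseline = (predict(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break feature j's relationship to y
            drops.append(baseline - (predict(Xp) == y).mean())
        importances.append(np.mean(drops))
    return np.array(importances)

imp = permutation_importance(predict, X, y)
print(imp)  # feature 0 dominates; feature 2 is irrelevant
```

This kind of after-the-fact probing is what lets researchers say which factors drive a model's output, and it is a starting point for the guardrails Guo mentions: if a sensitive attribute turns out to carry high importance, that is a signal that bias from the training data may be leaking into outcomes.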