How this early LLM researcher is taking on ChatGPT with his own search engine

You.com aims to offer more accurate information than other chat-based assistants.
Richard Socher

When former Salesforce Chief Scientist Richard Socher co-founded AI search engine startup You.com in 2020, ChatGPT had yet to become a viral sensation, and few people outside of tech circles were thinking about large language models (LLMs).

Fast-forward a few years, and the small startup is now looking to go head-to-head with the AI-powered search ambitions of tech giants like Google and Microsoft.

Socher—who ranks third on Google Scholar’s list of top-cited natural language processing (NLP) researchers—said You.com is differentiating itself with a focus on accurate answers and a breadth of tools built on its LLM tech, from a writing aid to a research assistant. The company declined to reveal user numbers, but Socher claimed it has reached “millions” of people.

Tech Brew caught up with Socher about what it was like to be early to the LLM party, how to curb AI falsehoods, and what’s in store for the hype this year.

This conversation has been edited for length and clarity.

What is You.com?

You.com is all about amazing answers. It’s a chatbot that helps you search, write, create code, and do many other things. Now we think about amazing answers to any question you may have. And an amazing answer is accurate. It’s fast, it’s beautiful, it’s useful. And it’s the right length for what you’re asking. So that’s what You.com is all about: we’ve built a lot of technology, and we’re ahead of basically everyone. About this time last year, we were the only search engine with millions of users that had an internet-connected LLM, but obviously, this year, it feels like it has gotten copied a million times; at least a couple dozen times. But we’re staying ahead. We now have these amazing new research modes that will actually do quite sophisticated, deep research for you. And yeah, it’s been very exciting to see the progression of the whole space over the course of the last year.

What was it like to have already been in this space and then see Google and Microsoft Bing getting involved in it?

On the one hand, it’s a little scary, of course, for a small startup. On the other hand, it kind of shows you that we’re innovating in the right way. And our goal is, to a large degree, to change search as we know it. Because I come from a research background and invented word vectors, contextual vectors, prompt engineering, in an attempt to have a single model for all of NLP. This had been my goal from 2010 to 2018. And then we finally did it in 2018 and inspired a few other folks, including folks at OpenAI, who were citing us when they were still doing open research in AI, and we were very happy. But we were sort of surprised that Google hadn’t changed in, like, 20 years, despite all of that progress in NLP. So we thought, clearly, it’d be better to get a summary and an answer to your question than a list of 10 blue links with a bunch of stuff around them. And so we set out to change that, and we iterated a bunch, and we’re excited to see that we’re on the right track.

And what sets You apart specifically from Google Bard and Microsoft Bing?

So I guess we can just look at the main competitors. Here’s an example from our research mode: a medical question, for instance. A lot of people “Google doctor” themselves, so it’s not uncommon for them to ask this kind of question. From You.com, you get a nice summary of what you could do: you’d want to evaluate the options first, and then you’d have a bunch of treatments. And for each of these notes here, you see that each comment gets a citation, especially in research mode on a tricky subject like medicine. You can actually see where that comes from, get to real research articles, and get really highly accurate citations and answers, and the combination of the two.


Now if we compare that to the default Google experience, you just don’t get that: you have to read all these websites yourself and spend a lot of time researching, being your own research assistant rather than having an AI research assistant. And if you use ChatGPT, which is our main competitor, you get a subsection of that answer, but you have no way to verify it very quickly. I think Bing and others are trying to sort of merge a search engine with a chat engine. But what we’re seeing is that users just want a chat-first experience.

Working on this kind of NLP early on, did you see the arms race around search between Microsoft and Google coming?

That was the hypothesis of why we started in 2020. So clearly, LLMs are one instantiation, but the core idea is that you can have a single model. The big idea in 2018 that we published with the decaNLP paper was that, at the time, everyone built one model for one task. You want to do sentiment analysis on Twitter, you build a sentiment analysis model. You want to do summarization, you build a summarization model. You want to do translation, you go off and build a translation model. And our idea was, “Why don’t you just have a single model that you ask a different question each time?” Then you can have a much more general-purpose natural language understanding system. So that was kind of our idea, and it felt very obvious to us that that should change search the way it was done back then. And I think that’s proving to be correct.
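To make the single-model idea concrete, here is a minimal illustrative sketch. It is not from the interview and is not You.com’s or decaNLP’s actual code; call_llm and ask_model are hypothetical stand-ins for any instruction-following language model. The point is simply that the task lives in the question, so one model can cover sentiment, summarization, and translation.

```python
# Illustrative sketch only (not from the interview): one general-purpose model,
# many tasks, with the task expressed as a natural-language question.
# call_llm() is a hypothetical stand-in for any instruction-following LLM.

def call_llm(prompt: str) -> str:
    # A real system would send the prompt to a large language model;
    # here we just echo it so the example runs as-is.
    return f"[model response to: {prompt!r}]"

def ask_model(question: str, context: str) -> str:
    # One model for every task: the question alone tells it what to do.
    return call_llm(f"{question}\n\nText: {context}")

review = "The battery life is great, but the screen scratches easily."

# In the "one model per task" era, each of these would have been a separate system.
print(ask_model("What is the sentiment of this text?", review))
print(ask_model("Summarize this text in one sentence.", review))
print(ask_model("Translate this text into German.", review))
```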

Do you see the hype around LLMs continuing into the next year?

It’s kind of interesting; sometimes hype fades, like in some parts of crypto and elsewhere. But with AI, the hype can’t really die. People are using these technologies in their workflows, and they don’t want to lose them now; they don’t want them to go away. So I do see a lot of that continuing, and maybe some parts will mellow out a little bit. But there’s still so much more that can be done. The answers that we give are more and more accurate now, and it wasn’t easy to reduce hallucinations. It isn’t a flashy thing, but it gets you from “OK, this is kind of interesting technology” to “I can use this every day.” And it needed that extra, like, 15% accuracy to get into the high-90% accuracy range for people to really trust the technology.
