This Google exec says we’re in a ‘golden age’ of tech research

The search giant’s head of research talked about how the company picks its projects and what’s next in the new year.

Yossi Matias

From traffic light timing to flood forecasting, Google’s research arm is bringing AI to a slew of real-world problems that might seem far afield from the company’s core search business.

Yossi Matias, who took over as head of Google Research this May, is hoping to tap into what he calls a new “golden age” of tech research to make progress across these varied projects. Matias said the time between research findings and their practical application—something he calls the “magic cycle”—is shrinking, which could open new doors in a host of different fields.

We spoke with Matias about how he chooses investment priorities, the future of LLM progress, and his goals for 2025.

This conversation has been lightly edited for length and clarity.

Google’s research projects cover everything from flood and wildfire forecasting to weather systems and LLMs. How do you decide where you focus resources?

So you mentioned flooding—let me pick on that for a moment…I’ve been overseeing what we call Google Crisis Response, which is essentially how to respond to people turning to Google for information or trusting Google with information when something happens. We just had an earthquake in California, so immediately, people want to know what’s going on and what they should be doing. And as part of that, I was leading the development of what we call SOS Alerts, which provided actionable information to people—quite often within an hour or minutes of when something happens…We’ve had over 4 billion views of SOS Alerts since we launched it, anything from natural disasters to terror attacks to even Covid alerts.

One thing we discovered early on is that if our mission was making sure that any available information that can be helpful is given to people right away, there were certain things where we could not be helpful because nobody had the information. There were two notable examples—one was about wildfires. Nobody had really good, actionable information that we could provide to people. The other one was about floods, which turned out to be perhaps the deadliest natural disaster. And to be helpful, you need to give a prediction. Nobody had this. But worse, when I asked the experts on floods at the time, the answer I got was, “Well, it’s too difficult a problem to actually give a prediction, because there are so many variables, etc.”

So this is an example where there’s a high motivation for a problem. It’s questionable whether we can do anything about it, because all the experts say it’s not possible, but it’s worthwhile trying to see if we can make some progress. And we started working on that, and we did make some progress—enough to actually motivate us to invest more and make more progress…And every time we hit a blockage of “Hey, this is great, but how do we scale it?” we looked into another milestone of research. Eventually, we got to what we call the global hydrologic model that we published in Nature. By the time we published it, we already had 80 countries covered by our flood hub; now we cover 100 countries and 700 million people, and we’ll soon make it available to partners through APIs.

It’s really driven by the intersection of the impact we could achieve if successful and our ability to actually achieve it, and then doing it in a way that would maximize our possible impact.

In contrast, by the way, there’s still a lot of work to be done on earthquake predictions, and even though we published a paper in Nature’s Scientific Reports showing some progress in this area, this is not something that made as much progress as flood forecasting. So there are other areas that we’re actually investing in to find the right way to do these breakthroughs. Similarly, by now, we already provide wildfire detection in near real time so that people can actually take some action.

You’ve talked about a “magic cycle”—the shortening window between identifying a real-world problem, conducting research, and applying those findings. What factors are causing that time frame to shorten?

It’s a combination of a few things. First, of course, the technology development, which allows us to actually bring together ideas so fast. Second, I think we are past an inflection point in the industry where practically everybody has realized that technology could change everything, and AI could change everything, so you see the thirst to adopt these technologies and actually move them faster. For example, we just hosted an event for educators on AI for learning, and we heard loud and clear how educators are looking to see how they can leverage AI in order to help out with the education system. Or healthcare, which has traditionally been a very conservative or slow-moving industry in terms of adopting new technologies; we see that there’s a good realization that [they] could use that. So I think it’s a combination of the interest of players to actually adopt that technology, and also the infrastructure that enables that.

I think we learned as a tech community how to take technology and more rapidly integrate it and test it…and the progress of the technology itself also requires us all to build systems in a way that can adapt more rapidly to those changes. And again, the pace of change is staggering, and it creates all these new opportunities, such that the cycle from something looking like science fiction, to surprising, to magical, to just assumed to be part of what we’re using has also shortened quite a bit.

Google CEO Sundar Pichai said recently that AI progress might slow next year as a lot of the “low-hanging fruit is gone” in AI research. Does that affect how you think about planning research projects around LLMs? Do you have to get more creative with where you look for improvements in LLMs?

From a Google Research perspective, we’re always looking into making the big, foundational step. So of course, there is some low-hanging fruit out there, but at least the focus I’m typically looking into is, “How do we actually make the next step function?” So this is only becoming a bigger opportunity in many ways, and the question is how to actually go deeper. So in a way, the opportunity is just becoming more significant. That’s why I think it’s a bigger scope and still faster cycles, since we’re not necessarily going after the low-hanging fruit, but looking to see how to actually take the fundamental next steps. Let me also point out that one of the opportunities I see for AI is to actually help accelerate research itself: research on science, research in other areas.

Looking toward the new year, what are your priorities?

When I think about the different areas, I think that in each of them, there are step functions that we want to do. We want to better understand the foundations of machine learning and beyond, in computer science; we want to keep on looking for better algorithms…There are big questions about, “How can we do LLMs substantially more efficiently? How can we find new ways to actually [add] consistency and factuality and grounding for generative AI, not only for text but also for images and videos?”

When I think about climate, one thing that motivates me with flood forecasting is that we took a problem deemed to be impossible and we actually solved it practically—I mean, at least a good part of it; there’s a lot of work to be done on flash floods and other things. So, what other problems appear to be impossible that we could actually solve, and that could make a difference?
