
Is Google rethinking its open-source AI strategy with Gemma?

The move touches on a long-brewing debate within the AI field.

Future Publishing/Getty Images


Keep up with the innovative tech transforming business

Tech Brew keeps business leaders up-to-date on the latest innovations, automation advances, policy shifts, and more, so they can make informed decisions about tech.

Google appears to be swiveling its AI strategy in a way that could shake up an ongoing debate over how available the tech behind generative models should be to (almost) anyone who wants it.

The search giant has rolled out a set of models based on its flagship Gemini system. The initiative, called Gemma, notably comes with open access to the code underpinning the models. They come in two pre-trained sizes, both small enough to run on a laptop or desktop, and ship with a responsibility toolkit meant to make for “safer AI applications,” according to a company blog post.

Gemma models are far more available to developers than, say, the full code or training data behind Google’s Gemini chatbot, but “open access” doesn’t mean rule-free: Gemma comes with very specific terms of use.

The move marks a departure from Google’s recent closed-door approach to AI releases and positions it in more direct competition with Meta’s Llama models, which have championed a similarly open approach.

Playing both sides? Should the code and training data behind LLMs be public? Companies like OpenAI (despite its name) and Anthropic have argued that the code behind their latest models could be dangerous in the wrong hands, justifying their more closed approaches. Meta, IBM, Mistral, Databricks, and others counter that openness can spur innovation and collaboration.

Whose lunch is this?

A recent report from the Mozilla Foundation claimed that while the AI field has taken “a radical swing in the direction of closed technology” in the past few years, some approaches, like Llama and Mistral, could represent a push in the opposite direction.

“While these new [open-source] models are gaining steam and offer huge promise, there is still a long way to go before they are easy to use and easy to trust,” the authors wrote.

Just last year, a senior software engineer at Google made waves by fretting, in a leaked memo, that open-source alternatives like Llama were “quietly eating our lunch” in the AI race, Bloomberg reported.

Google claims its Gemma models can edge out “significantly larger models,” including Meta’s Llama 2, on “key benchmarks” covering math, reasoning, and code, according to the company’s blog post.

Biden asks around: Regulation could also play a role in settling the debate. The White House said last week that it would begin seeking public comment on the safety of open versus closed models as part of a sweeping executive order President Biden signed last fall.

“Open-weight AI models raise important questions around safety challenges, and opportunities for competition and innovation,” Alan Davidson, assistant secretary of commerce for communications and information, said in a statement, also noting that while “these models can help unleash innovation…by making powerful tools accessible…that same accessibility also poses serious risks.”
