One year after promising to double AI ethics team, Google is light on details

Former team members say they've seen little progress on headcount or funding promises.

Last May, Google announced plans to double its AI ethics research department to 200 people and increase its funding over the coming years. One year later, some former team members say they’ve seen little progress.

When Marian Croak, head of Google’s AI ethics division, initially announced the growth plans during a Wall Street Journal event, she didn’t offer a timeline or details on funding. Her speech followed months of internal turmoil at Google, sparked by the company terminating both co-leaders of its AI ethics division—Timnit Gebru and, months later, Margaret Mitchell—after a disagreement over their research paper on large language models. (Google disputes this account.)

At the time, the news sparked doubt and confusion in the tech world—after all, the company was “promising to double the number of staff of a group that [it] just very publicly decimated,” Meredith Whittaker, now a senior advisor on AI at the Federal Trade Commission and one of the organizers of the 2018 Google Walkouts, told us last May.

The company declined to provide Emerging Tech Brew with a concrete timeline for its plans, or numbers around changes in headcount and funding over the last year.

“The promise that they were trying to double the people in the org—I mean, we approached it pretty skeptically,” Alex Hanna, a former senior research scientist on Google’s ethical AI team, told us. To her, the initiative seemed more “marketing-oriented than actual reality.” She added, “That definitely hasn’t happened.”

Hanna left Google in February—along with Dylan Baker, a software engineer on the Ethical AI team—to join the Distributed Artificial Intelligence Research Institute (DAIR), the organization founded by Gebru.

“We’re continuing to grow our teams working on Responsible AI and increasing our overall investments in this area,” Croak told us in a statement. “This work is crucial, both for the research field and for making our products work better for all people.”

Responsible reorg

Months before Croak’s May 2021 announcement about doubling Google’s AI ethics research staff, the division was restructured into a single organization: Responsible AI and Human-Centered Technology. Before the consolidation, it comprised five or six different teams, according to Tulsee Doshi, head of product for the new organization.

Doshi told us that the division covers researchers working on any sociotechnical responsible AI questions, and that outside of that organization, there are tangential product, legal, and policy teams, as well as researchers who work on fairness in domain-specific areas, like visual and image-related “perception research.” The accessibility team was also moved out of the ethical AI department to the company’s “core” business, according to Doshi.

Despite the changes, Hanna said “the purview of Responsible AI” did not expand.

“There’s currently not a ton of incentive for teams to want to change the things they do for ethical reasons—so it seems like there’s definitely a push for the research to be [just] research, and not making concrete changes to products,” Baker told us.

Since the Responsible AI organization is under the research umbrella, it’s set apart from product categories like Search, Ads, and YouTube.

“We do occasionally do work with teams from those other [organizations], but in no way, shape, or form are those other organizations required to get AI ethics reviews from us,” Blake Lemoine, a senior software engineer and researcher in the Responsible AI division, told us. (On Monday, Lemoine said he was placed on paid administrative leave by the company.)

He added, “We have to work very hard to convince people outside the research organization that we can make their products better, from a product standpoint. If we can’t sell the idea that ethics makes the products better, they don’t work with us….If we don’t sell that ethics makes the product better, it’s not like they stop making the product.”

According to Doshi, the division’s restructuring in February gave Responsible AI more structure, leadership, and top-down visibility within Google.

“One of the biggest things, I think, that has shifted over the last year with bringing everyone and these various teams under one organization and one set of leadership, is I think we have been able to set a much more conscious mission around building out trustworthy and responsible AI across the company,” Doshi said.

Later, Doshi added, “We can do a lot more cross-collaboration, because there are a lot more lines of communication, and there’s a lot more aligned goal-setting. So, for example, taking some of the work that has been done around model cards and data cards, and actually turning that into much more effective infrastructure across the organization, so that we can actually create model cards and data cards more automatically.”
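(For context: a “model card” is a short, structured summary of a machine-learning model’s intended use, training data, and known limitations—an idea Mitchell and colleagues introduced in a 2019 research paper. As a rough illustration of what generating one “automatically” could mean, here is a minimal Python sketch; the field names and example values are hypothetical, not Google’s internal schema.)

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Illustrative model card; fields are hypothetical, not Google's schema."""
    model_name: str
    intended_use: str
    training_data: str
    known_limitations: list[str] = field(default_factory=list)
    fairness_evaluations: dict[str, float] = field(default_factory=dict)

    def to_json(self) -> str:
        # Serialize the card so it can ship alongside the model artifact.
        return json.dumps(asdict(self), indent=2)

# "Automatic" generation here just means filling the card from metadata
# already produced during training, rather than writing it up by hand.
card = ModelCard(
    model_name="toxicity-classifier-v2",  # hypothetical model
    intended_use="Flag abusive comments for human review",
    training_data="Public comment corpus, 2015-2020",
    known_limitations=["Lower accuracy on non-English text"],
    fairness_evaluations={"false_positive_rate_gap": 0.04},
)
print(card.to_json())
```

The “infrastructure” Doshi describes would, in this reading, mean training pipelines emitting cards like this themselves instead of teams assembling them manually after the fact.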

Overall, Lemoine said he believes Google is “heading in the right direction” with its AI ethics efforts. In particular, he said certain teams within Google, like YouTube and SafeSearch—teams that receive customer feedback when things go wrong—do actively seek out ethics reviews, but in his view, those teams are the exception to the rule.

Hanna and Baker told us they had not seen much progress on Google’s other stated goals for the department. After the public departures of Gebru and Mitchell, Google reportedly told teams internally that it would implement changes to the review process, including solidifying clear rules for the review of “sensitive topics,” such as bias in Google products.

“The communication, on one hand…from Marian, was definitely like, ‘Okay, we want you to continue doing your research,’” Hanna said. She added, “But at the same time, you have all these other signals, like having a really arduous publication-approval process, or things that were not clear. If that was kind of the major issue, then it never really got resolved, which already was really concerning.”

Hanna described the review process as “arbitrary,” with a given research paper’s reception varying widely depending on who was assigned to review it.

Google declined to say whether the company has yet made any changes to the research review process.

Despite the new structure and promises to increase staffing and funding, former team members worry that Google’s top-down power structures could ultimately stifle AI ethics efforts.

“No matter what they do on paper, or no matter how much money they give our particular team, it’s not changing the power dynamics at all,” Baker said. “If they wanted to fire everybody tomorrow, they could….And that’s what gets me. It hasn’t changed the larger, Google-wide, industry-wide, slow process of just increasing profit margins. They no longer need to offer the perk of making people feel like they have a say in their workplace environment.”
