Some AI researchers and computer scientists are declining to review Google AI's research unless the tech giant changes its stance on the firing of its former AI ethics co-lead Timnit Gebru. Gebru says she was dismissed after managers asked her to retract an AI research paper or remove her name from it, a request she described as a form of censorship. The controversy has raised concerns about corporate control and influence over academic research.
- Gebru, whose work has included research showing racial bias in facial recognition systems, says she was effectively fired after speaking out against Google's request to retract the research paper.
- Her paper details the harms, such as gender and racial biases, that large language models could inflict on marginalized communities.
- Gebru's paper questioned whether many AI language models are too large and costly, and whether they carry sufficient safeguards against issues such as bias. Google has its own such language model, BERT, which is incorporated into its search engine.
- While Google AI chief Jeff Dean says Gebru offered to resign, Gebru and her Google AI ethics co-lead say that was not the case and that Gebru was not given a chance to discuss the matter.
- Gebru has accused Google of "silencing marginalized voices."
What Google says:
- In an email to the Google Research team, Dean shared his version of the experience. He said the research paper "was only shared with a day’s notice before its deadline," though Google usually requires two weeks for such a review.
- He claims that the paper "ignored too much relevant research," saying, for example, that it failed to account for efficiency improvements that reduce the environmental impact of large models.
- Dean said Gebru has asked former Google colleagues to stop working on DEI (diversity, equity, and inclusion) programs for the tech giant. "Please don’t," he requested. "I understand the frustration about the pace of progress, but we have important work ahead and we need to keep at it."
How the AI community is responding:
- Since her firing last week, the hashtags #IStandWithTimnit and #BelieveBlackWomen have cropped up on social media in support of Gebru.
- In a tweet, AI researcher Isaac Tamblyn said he would no longer peer-review Google AI's publications, a task reviewers perform voluntarily, because Google isn't following academic norms.
- Other prominent AI experts, including computer science professor Dagmar Monett and Nvidia AI research director Anima Anandkumar, said they would also not review Google papers.
- Stanford graduate student Ali Alkhatib replied that he didn't know how to take papers published by Google anymore, as "every paper will have a cloud over it of 'what wasn't allowed?'"
- Others took a different stance: "Is it not an academic norm to accept your employee's resignation? Should they have refused to let her resign?" tweeted Jeff Kingswell.