Former Google Researcher: AI Workers Need Whistleblower Protection

Artificial intelligence research leads to new cutting-edge technologies, but it is expensive.

Big tech companies, which are powered by AI and have deep pockets, often take on this work – but it gives them the power to censor or obstruct research that casts them in a negative light, according to computer scientist Timnit Gebru, co-founder of the non-profit organization Black in AI and former co-leader of the Ethical AI team at Google.

The situation jeopardizes both the rights of AI workers in these companies and the quality of research shared with the public, Gebru said, speaking at the recent EmTech MIT conference hosted by MIT Technology Review.

“The incentive structures are just not in place for you to challenge the status quo,” she said.

Gebru left Google last December (Gebru says she was fired, while Google says she resigned) after co-authoring a paper on the risks of large AI language models, such as their environmental impact and the difficulty of finding embedded biases. Google’s search engine runs on just such a large language model.

Citing concerns, Google asked Gebru to withdraw the paper from a conference or to remove her name and the names of the other Google researchers, according to the New York Times. Gebru refused without a fuller explanation from Google, which led to Google announcing her departure.

During her recent talk, Gebru highlighted what she sees as the labor rights concerns of AI workers, how to protect them, and why academia is not always a better avenue for researchers. Ultimately, she said, the goal is better and fairer artificial intelligence.

“The moment you push a little hard, you’re out”

Gebru’s research focuses on the unintended negative impacts of artificial intelligence. An article she co-wrote with MIT Media Lab researcher Joy Buolamwini explored biases in facial recognition algorithms.

After joining Google in 2018, “I had problems from the start,” Gebru said. She said some people doubted she would be able to change a business as big as Google. “I thought, ‘Okay, maybe I can carve out a little piece of it… that’s safe for people from marginalized groups,’” she said. “What I learned was that’s impossible, because as soon as you push a little hard, you’re out. So if you survive, it might be because you’re not touching… something they think is really important.”

It’s important to hold tech companies accountable from the outside, Gebru said.

“We can’t have the current momentum that we have and expect some kind of non-propaganda technology to come out of tech companies,” she said. “Because when you start censoring research, that’s what happens, right? The papers that come out end up looking more like propaganda.”

The problems extend beyond Big Tech

Since leaving Google, Gebru has been working on setting up an independent research institute. While many AI researchers work in academia, Gebru said that, in her experience, this avenue has its own issues of gatekeeping, harassment, and an incentive structure that does not reward long-term research.

Tech companies that fund AI research at academic institutions are also a concern. Gebru cited The Grey Hoodie Project, a research paper by Mohamed Abdalla of the University of Toronto and Moustafa Abdalla of Harvard Medical School. The researchers compared how Big Tech companies such as Google, Amazon, and Facebook fund and steer AI research with how big tobacco companies funded research in an effort to allay concerns about the health effects of smoking.

“At an independent research institute, you can do research that the company doesn’t think will make money right now. You can do research that really shows fundamental flaws in any technology a company might use,” Gebru said.

How to improve protection for Big Tech workers

Gebru said she does not oppose researchers working for big tech companies, but said they need protection to do their jobs. Otherwise, tech companies can suppress or bury unfavorable research findings. She suggested three things that might help:

  1. Improved whistleblower protection for AI researchers. Recent events have shown the importance of whistleblowers at big tech companies, such as the former Facebook data scientist who exposed internal company documents showing that Facebook knew about the harm the company was causing.

  2. Anti-discrimination laws. “Often these organizations harm marginalized communities the most,” Gebru said. “It is people from marginalized groups who will see, who will think about these negative impacts [of AI]. A lot of other people might be thinking, ‘Oh, everything is going to be great.’ … From their perspective in life, they don’t see how it’s going to impact people negatively. But after you have had certain experiences, it might be very clear to you, and you are going to push hard on that angle.”
  3. Labor law. When Google became involved in a federal program to use AI to potentially improve drone strikes, employees protested and the company ended up not renewing the contract. “I think workers empowering themselves is great, but we need much stronger labor protection laws to allow even AI researchers to organize against things they really don’t agree with,” she said. After Gebru left Google, a letter of support for her was signed by nearly 2,700 Google employees and more than 4,300 others.

Gebru also advised people working in tech who are struggling with these issues to form a coalition.

“You can still do a lot without these labor protections,” she said. “If you have a coalition of people around you, there is a lot you can do.”

And that includes research that might be unpopular.

“Try to think of the one thing you can do that pushes the boundaries that won’t make businesses happy because it means you’re doing the right thing,” she said.

A fairer view of AI

As AI technologies become ubiquitous, it is increasingly important to consider who is involved in shaping the future.

“There is nothing really revolutionary. I just want to work on AI research [that’s] rooted in thinking about the perspectives of people in marginalized groups,” Gebru said. “It could be either thinking about research in the future and what kind of technologies we should be building, what kind of AI technology research we should be doing, or criticizing it after it’s built.”

Gebru said she hopes to see artificial intelligence become more task-specific and designed for specific groups of people. Right now, AI is abstract and general, which tends to mean that the dominant group’s vision is implemented, marginalizing non-dominant groups.

“Let’s look at the most marginalized from the start and start from that angle,” she said.
