OpenAI reportedly paid workers in Kenya less than $2 an hour to make ChatGPT less toxic, with some commenters arguing the rate is in line with local wages and others saying the company could do far better.


Behind the unprecedented prowess of ChatGPT may lie abused and underpaid workers. So says a new Time investigation. OpenAI reportedly relied on Kenyan outsourcers, many paid less than $2 an hour, to trawl some of the darkest corners of the internet and build the AI filter system that ChatGPT would integrate. The filter allows the chatbot to screen for signs of humanity's worst horrors. OpenAI has been heavily criticized and accused of following the same path as Silicon Valley giants like Apple, Amazon and Meta.

The seemingly simple, bright and clean world associated with technology is almost always underpinned by something darker lurking beneath the surface. Three years ago, Apple, Foxconn and 81 other major tech brands were accused of complicity in forced labor in China. In a March 2020 report, the Australian Strategic Policy Institute (ASPI) claimed that between 2017 and 2019, more than 80,000 Uyghurs were transferred from their home region of Xinjiang to work in factories across China, some directly from detention camps, performing forced labor for companies such as Apple.

Lately, it is OpenAI, the American AI lab behind ChatGPT, that stands accused of "torturing" its outsourced workers. While ChatGPT has caught everyone's attention since its launch in late November, a new Time investigation reports that OpenAI enlisted Kenyan workers to help develop a tool that flags problematic content. The detector is meant to filter the responses of ChatGPT, which already counts over a million users, to make them acceptable to the general public. But these workers were reportedly paid less than $2 an hour.
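To make concrete what such a detector does at serving time, here is a minimal, purely illustrative sketch of an output gate: score each generated reply and suppress it when it crosses a toxicity threshold. Nothing here reflects OpenAI's actual pipeline, which the Time report does not describe; the function names, the keyword stub and the 0.5 threshold are all invented for illustration (a trained classifier that could back the stub is sketched after the next paragraph).

```python
# Hypothetical sketch of an output gate for a chatbot. This is NOT
# OpenAI's real mechanism (the Time report does not describe it);
# every name and the threshold are invented for illustration.

def toxicity_score(text: str) -> float:
    """Stand-in for a trained classifier; see the training sketch below."""
    # A real system would run a model here; this stub flags one keyword.
    return 1.0 if "violence" in text.lower() else 0.0

def gated_reply(reply: str, threshold: float = 0.5) -> str:
    """Return the reply unchanged, unless the detector flags it."""
    if toxicity_score(reply) >= threshold:
        return "Sorry, I can't respond to that."
    return reply

print(gated_reply("Here is a helpful answer."))               # passes through
print(gated_reply("A passage describing graphic violence."))  # suppressed
```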

The filter also helps remove toxic entries from the large dataset used to train ChatGPT. While end users receive a polished, sanitized product, the Kenyan workers acted as AI gatekeepers of sorts, sifting through snippets of text depicting harrowing accounts of child sexual abuse, torture, murder, suicide and incest. Training an AI to recognize and remove terrible content requires a labeled database of terrible content, and building that database is part of what these workers did.
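As a rough illustration of that last point, here is a hedged sketch of how a human-labeled corpus can train a simple toxicity classifier, and how the same model can then scrub a training set. It uses scikit-learn as a deliberately simple stand-in; the report says nothing about OpenAI's actual stack, and the sample passages and labels are placeholders.

```python
# Illustrative only: train a toxicity classifier from human-labeled text,
# then use it to filter a corpus. scikit-learn is an assumed stand-in;
# the Time report does not describe OpenAI's actual tooling.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each passage was read and tagged by a human annotator: 1 = toxic, 0 = safe.
passages = [
    "placeholder for a harmful passage describing violence",
    "a neutral paragraph about gardening",
    "placeholder for another disturbing passage",
    "an ordinary product review",
]
labels = [1, 0, 1, 0]

# TF-IDF features + logistic regression: a toy stand-in for the large
# neural classifiers used in production moderation systems.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(passages, labels)

def is_toxic(text: str, threshold: float = 0.5) -> bool:
    """Flag a passage when the predicted toxic-class probability is high."""
    return classifier.predict_proba([text])[0][1] >= threshold

# Scrub a training corpus so the language model never learns from flagged text.
corpus = ["some new document", "a passage describing violence"]
print([doc for doc in corpus if not is_toxic(doc)])
```

In reality, both the classifier and the filtering happen at vastly larger scale, but the division of labor is the same: humans supply the labels, and the model generalizes from them.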

OpenAI is said to have worked with Sama, an American company known for hiring workers in Kenya, Uganda and India to perform data-labeling tasks for Silicon Valley giants like Google and Meta. Sama was Meta's (Facebook's) main content-moderation partner in Africa before Meta announced this month that the two companies were parting ways due to the "current economic climate". Sama and Meta are currently the subject of a lawsuit filed by a former content moderator who claims the companies violated the Kenyan constitution.

In the case of OpenAI, the report says the Kenyan workers earned between $1.32 and $2 an hour. According to some analysts, this is too little for a company like OpenAI, which has raised hundreds of millions of dollars in recent years. Microsoft invested more than $1 billion in the company in 2019, and recent reports suggest the Redmond giant is set to invest nearly $10 billion more in the AI company, with other investors also expected to participate in the round. If that happens, OpenAI would be valued at $29 billion.

However, some commenters argue that the rate OpenAI paid the Kenyan workers is in line with local realities. "To put this into perspective, the average wage in Kenya is about $1.25 an hour, and as a Kenyan, I don't see that as a bad thing. Most of the people here live in poverty, in situations you can't even imagine," wrote one user who identifies as Kenyan, adding that any form of help is welcome. Analysts, however, believe that OpenAI could do better given the nature of the work.

Whether it is traumatized content moderators filtering infamous Facebook posts or overworked children mining the cobalt needed for fancy electric cars and other electronics, frictionless efficiency has a human cost. "Our mission is to ensure that general AI benefits all of humanity, and we're working hard to build safe and useful AI systems that limit bias and harmful content," OpenAI, the developer of ChatGPT, said in a statement to Time.

Categorizing and filtering harmful content is a necessary step both to minimize the amount of violent and sexual content in the training data and to create tools that can detect harmful content. Still, the nature of the work caused serious distress for some data labelers. Like some content moderators, Sama's employees say the work often stays with them after they log off. One of them said he suffered recurring visions after reading a description of a man having sex with a dog.

"That was torture," he said. In total, workers were tasked with reading and labeling roughly 150 to 250 passages of text during a nine-hour shift. Although they had the opportunity to consult wellness counselors, several nevertheless told Time that they were psychologically scarred by the work. Sama disputed those figures, telling Time that workers were only expected to label 70 passages per shift, and said it offers one-on-one mental health counseling and wellness programs to help employees relieve stress.

Sama, which reportedly signed three contracts with OpenAI worth about $200,000 in total, has decided to exit the business of labeling harmful data altogether, at least for now. Earlier this month, it announced that it would drop the rest of its sensitive-content work to focus on "computer vision data annotation solutions" for both OpenAI and others. The investigation comes as companies adopting AI to improve their products and services continue to outsource content moderation to low-wage workers.

Some contractors regularly report adverse effects on their physical or mental health. Companies like Amazon reportedly hire video reviewers in India and Costa Rica to watch thousands of videos, work said to cause physical ailments such as headaches and eye pain. In 2019, after some Facebook contractors claimed to suffer from PTSD (post-traumatic stress disorder) as a result of moderation work, CEO Mark Zuckerberg downplayed the impact on workers' mental health, calling reports of such complaints "a little overdramatic".

The report lays out in stark detail the difficult human labor that underlies the development of such technology. While the world's biggest tech companies often tout their new, seemingly frictionless technologies as solving big problems with low overhead, OpenAI's reliance on Kenyan workers, like the social media companies' vast armies of international content moderators, underscores that human labor is often inseparable from the final product.

Source: Time

And you?

How do you feel about the topic?
What do you think about the wages of Kenyan workers?
What do you make of the claims of worker "torture" leveled at OpenAI?
What do you think about the fate of outsourced workers employed by Silicon Valley companies?
Why do you think these working conditions persist despite the denunciations and many cases of employee depression?
Who do you think is to blame? Governments of India, China, Kenya, etc. or large multinationals like Apple or Facebook?

See also

An NGO reports that Apple, Foxconn and 81 other big brands are involved in forced labor in China

A moderator who watched traumatic videos for hours a day is suing TikTok for failing to protect their mental health

Chinese workers describe extremely harsh conditions at an iPhone assembly plant, but Apple says that’s not the case
