Firm regrets taking on Facebook moderation work

A company hired to moderate Facebook posts in East Africa says that, in retrospect, it should not have taken the job.

Former employees of Sama, an outsourcing company, say they were traumatized by exposure to explicit posts while working in Kenya.

Some are now suing the company in the Kenyan courts.

Wendy Gonzalez, Sama’s CEO, said the company would no longer take on work involving the moderation of harmful content.

This article contains upsetting material.

Some former employees have described being traumatized by watching videos of beheadings, suicides, and other gruesome material at the firm’s moderation hub, which operated from 2019 to 2023.

Daniel Motaung, a former moderator, previously told the BBC that the first gruesome video he viewed was “a live video of someone being beheaded.”

Mr. Motaung has filed a lawsuit against Sama and Facebook’s owner, Meta. Meta says all the companies it works with must provide round-the-clock support; Sama says certified wellness consultants were always on hand.

Ms Gonzalez told the BBC that the contract, which never accounted for more than 4% of the firm’s revenue, was one she would not take again. Sama announced in January that it was ending the work.

“You ask yourself, ‘Do I regret it?’ Well, I guess I’d put it this way. I would not have entered [the arrangement] if I had known what I know now, which includes all of the opportunity and energy it would take away from the main business.”

She said that “lessons were learned” and that the firm would no longer take on work that involves moderating harmful content. Nor would the company work on artificial intelligence (AI) projects that “support weapons of mass destruction or police surveillance.”

Citing ongoing litigation, Ms Gonzalez declined to say whether she believed the former employees who said they had been harmed by viewing explicit content. Asked whether she thought moderation work was harmful in general, she said it was “a new area that absolutely requires study and resources.”

The stepping stone

Sama is an unusual outsourcing company. Its stated mission has always been to lift people out of poverty by providing digital skills and an income through outsourced computing tasks for technology companies.

The BBC visited the company in 2018 and saw employees from low-income parts of Nairobi earning $9 (£7) a day for “data annotation”: labeling objects such as pedestrians and street lights in videos of driving, footage that would then be used to train artificial intelligence (AI) systems. The employees interviewed said the money had helped them get out of poverty.

According to Ms Gonzalez, the company is still primarily focused on similar computer-vision AI projects, work that does not expose employees to harmful content.

“I’m extremely proud that we’ve lifted over 65,000 people out of poverty,” Ms Gonzalez added.

She believes it is critical that Africans participate in the digital economy and the development of AI systems.

Throughout the conversation, Ms Gonzalez stressed that her decision to take on the work had been driven by two beliefs: that moderation was critical, necessary work that protects social media users from harm, and that African content should be moderated by African teams.

“You cannot expect someone from Sydney, India, or the Philippines to effectively moderate local languages in Kenya, South Africa, or elsewhere,” she explained.

She also said she had done the moderation work herself.

Moderators’ pay at Sama started at around 90,000 Kenyan shillings ($630) a month, Ms Gonzalez said, a decent wage by Kenyan standards, on a par with nurses, firefighters, and bank officers.

When asked whether she would do the work for that money, she replied: “I did the moderation, but that’s not my job in the company.”

Training AI

Sama also started working with OpenAI, the firm behind ChatGPT.

Richard Mathenge, whose job was to read through huge volumes of text the chatbot was learning from and flag anything harmful, spoke to the BBC’s Panorama programme. He said he had been exposed to disturbing material.

Sama said it canceled the work after staff in Kenya raised concerns about requests for image-based material that were not included in the contract. “We finished this work right away,” Ms Gonzalez added.

OpenAI stated that it has its own “ethical and wellness standards” for data annotators and that it “recognizes this is difficult work for our researchers and annotation workers in Kenya and around the world.”

Ms Gonzalez, however, regards this type of AI work as another form of moderation, work the company would not take on again.

“We focus on non-harmful computer vision applications, like driver safety, drones, fruit detection, crop disease detection, and things of that nature,” she explained.

“When it comes to AI development, Africa needs a seat at the table. We don’t want to keep reinforcing biases. We need people from all across the world to help us build this global technology.”
