Facebook Says 'Unacceptable' AI Error Led to Video of Black Man Being Tagged as a Video About Primates

FILE – In this April 14, 2020 file photo, the thumbs-up Like logo is shown on a sign at Facebook headquarters in Menlo Park, Calif. Facebook’s purchase of Giphy will hurt competition for animated images, U.K. regulators said Thursday, Aug. 12, 2021, following an investigation, indicating the social network could be forced to sell off the company if the provisional findings are confirmed. Giphy’s library of short looping videos, or GIFs, is a popular tool for internet users sending messages or posting on social media.
Photo: Jeff Chiu (AP)

MAHK ZUCKENBERGER!!! You got some ‘splainin’ to do, my guy.

The New York Times reports that Facebook has apologized after users who watched a video posted on the site by The Daily Mail in 2020 saw automated prompts asking if they wanted to “keep seeing videos about Primates.”

There was nary a primate, Most Valuable or otherwise, in the video. Instead, it was a clip of a Black man interacting with officers after a white man called the police on him at a marina. According to the Times, Facebook says this was an error by an artificial intelligence feature on the site.

From the Times:

Darci Groves, a former content design manager at Facebook, said a friend had recently sent her a screenshot of the prompt. She then posted it to a product feedback forum for current and former Facebook employees. In response, a product manager for Facebook Watch, the company’s video service, called it “unacceptable” and said the company was “looking into the root cause.”

Ms. Groves said the prompt was “horrifying and egregious.”

Dani Lever, a Facebook spokeswoman, said in a statement: “As we have said, while we have made improvements to our A.I., we know it’s not perfect, and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations.”

Here we have another example of how artificial intelligence programs routinely demonstrate biases against people of color. We’ve seen stories on how this Minority Report-ass facial recognition software has led to innocent Black people being arrested or discriminated against due to computer errors.

Then there’s the racial bias found in voice recognition programs, and Twitter’s image-cropping algorithm, which cropped Black faces out of photos at a higher rate than white faces.

So yeah, it’s safe to say this problem ain’t nothing new. The question is, what are Big Tech companies like Facebook and the others going to do about it?

Based on what Groves told the Times, so far the answer is “not enough.”

Ms. Groves, who left Facebook over the summer after four years, said in an interview that a series of missteps at the company suggested that dealing with racial problems wasn’t a priority for its leaders.

“Facebook can’t keep making these mistakes and then saying, ‘I’m sorry,’” she said.
