Facebook has apologized after a video featuring Black men was classified as a video featuring “primates” by the social network’s artificial intelligence software, the latest of several examples of anti-Black bias in A.I. and facial recognition.
The New York Times first reported that several Facebook users, after watching a Daily Mail video posted on the platform documenting Black men being harassed by a white man, were sent an automated message asking if they would like to “keep seeing videos about Primates.”
After being made aware of the issue, the company disabled the A.I. feature responsible for the message and is investigating it for errors.
“As we have said, while we have made improvements to our A.I., we know it’s not perfect, and we have more progress to make,” said Facebook spokesperson Dani Lever in a statement. “We apologize to anyone who may have seen these offensive recommendations.”
Studies have documented racial bias embedded in a number of A.I. and facial recognition systems: the technology identifies Black facial features less accurately than white ones, leading to misidentifications and, in some cases, wrongful imprisonment.
Former Facebook content design manager Darci Groves said a friend sent her a screenshot of the automated message, and she promptly uploaded it to a product feedback forum made up of former and current Facebook employees.
A product manager for Facebook’s video service responded shortly after, calling the message “unacceptable” and saying the company was working to identify the “root cause.”
This is not the first time Facebook has come under fire for racial bias. In 2016, CEO Mark Zuckerberg had to ask employees to stop crossing out the words “Black Lives Matter” written in a communal space at one of the company’s headquarters and replacing the phrase with “All Lives Matter.”
In 2019, an anonymous group of Facebook employees of color penned an open letter to the company, titled “Facebook Empowers Racism Against Its Employees of Color,” outlining several ways in which the culture at the company had failed its non-white workers while pretending to be inclusive.
“Facebook can’t keep making these mistakes and then saying, ‘I’m sorry,’” Groves said.
In July 2020, the social media site released a civil rights audit of its platform after identifying an increase in white supremacist hate speech from its users during the 2020 presidential election cycle.
The audit found that Facebook had not applied its voter suppression and misinformation policies evenly. Former ACLU director Laura Murphy, who was part of the audit, told Politico at the time that Facebook was “moving in the right direction, but the results are not adequate.”
In January, to help mitigate the issue internally, Facebook hired civil rights attorney Roy L. Austin Jr. as its first-ever vice president of civil rights and deputy general counsel.
“Technology plays a role in nearly every part of our lives, and it’s important that it be used to overcome the historic discrimination and hate which so many underrepresented groups have faced, rather than to exacerbate it,” Austin said in a statement after being hired.