But when researchers submitted ads threatening to "lynch," "murder," and "execute" election workers around Election Day this year, the company's largely automated moderation systems approved many of them, the New York Times reported.
Out of the 20 ads submitted by researchers containing violent content, Facebook approved 15, according to a new test published by Global Witness, a watchdog group, and New York University's Cybersecurity for Democracy.
The researchers submitted 10 of the test ads in Spanish. Facebook approved six of those, compared with nine of the 10 ads in English.
The research builds on previous tests by the same groups, including one earlier this year in which they submitted 20 ads containing political misinformation.
In that test, Facebook approved only two misleading ads in English from an account in the U.S., while TikTok approved about 90% of them. YouTube suspended the account that tried to submit the ads.
The researchers said they wanted to see social networks like Facebook increase content moderation efforts and offer more transparency around the moderation actions they take.
"The fact that YouTube and TikTok managed to detect the death threats and suspend our account, whereas Facebook permitted the majority of the ads to be published, shows that what we are asking is technically possible," they wrote.
Price Action: META shares closed higher by 1.98% at $120.44 on Thursday.
Photo by Chetraruc from Pixabay
© 2022 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.