Facebook’s self-service ad-buying platform made it possible for individuals and businesses to target users on the social network who expressed anti-Semitic views and interests, according to an investigation by ProPublica published on Thursday.
Advertisers were allowed to target categories such as “Jew hater,” as well as related categories covering people who had expressed interest in topics like “How to burn jews” and “History of ‘why jews ruined the world,’” ProPublica found. Those categories were available until earlier this week, when ProPublica flagged them to Facebook, which has since removed them.
ProPublica discovered the ad categories after receiving a tip about them, and found that together they could reach the news feeds of about 2,300 people. ProPublica confirmed the ad categories were functional by spending $30 to direct three ads containing ProPublica articles and posts to users associated with those categories. Facebook approved the ads within 15 minutes, ProPublica said.
ProPublica noted that the anti-Semitic categories it reviewed represented too few Facebook users to enable an ad campaign on their own. However, Facebook’s ad platform automatically recommended additional categories to ProPublica, such as “Second Amendment,” so the audience would meet the platform’s minimum reach requirements. The recommendation suggests a correlation between anti-Semites and people interested in guns. ProPublica also targeted its ads at categories such as “Nazi Party” and “the SS.” When Facebook approved ProPublica’s ads, its system automatically changed the category “Jew hater” to “Antysemityzm,” the Polish word for anti-Semitism.
Facebook said the anti-Semitic categories were created by algorithms, not directly by people, and are generated based on interests and tendencies expressed by users. The company said it is exploring potential ways to address the issue, for example, by reviewing ad categories before they are made available to buyers. A Facebook spokesperson told ProPublica that the ad categories were not commonly used or widespread.
The potential for Facebook’s advertising platform, which reaches 2 billion people, to be misused by bad actors has come under heightened scrutiny in the aftermath of the U.S. presidential election. Last month, Facebook told federal investigators that it sold about $100,000 in political ads during the 2016 election season to “inauthentic” accounts likely operated out of Russia.
The sheer volume of user-generated content and ads on the social network creates major moderation challenges. Facebook invests heavily in artificial intelligence teams and technology in an effort to understand everything from the meaning of text to the real-time interpretation of videos streamed live to the social network. In March, Facebook CEO Mark Zuckerberg announced the company would hire 3,000 additional moderators to review user-generated content. Facebook employees, in some cases, also review individual Facebook ads. ProPublica’s findings highlight that substantial moderation challenges persist across Facebook.
After violent protests last month in Charlottesville, Virginia, by right-wing groups, including self-described Nazis, Zuckerberg posted on Facebook about the company’s commitment to balancing freedom of speech and protecting users.
More Info: www.forbes.com