
Meta ads and their history of inciting violence


Social media's power to shape public opinion and motivate behavior is clear in our constantly changing digital world. One worrying trend is that falsehoods often spread faster than facts. This matters because people today seem quicker to anger, easier to persuade, and more apt to react violently. Misinformation acts like gasoline, inflaming hatred in susceptible individuals, and the result can be senseless violence and loss of life and property.


Meta, formerly known as Facebook, is a dominant force in the online world. As a premier social media platform, its ad network is highly desirable for people and groups seeking to spread ideas or sell products, and its unparalleled reach enables commercial and ideological messaging alike. Though Meta's supremacy in the social media landscape is undisputed, the implications of that power remain debated. The company has repeatedly faced criticism for allowing content that promotes violence through its advertising services. A brief history of the issue follows:


Rohingya Displacement

Myanmar (2017): Investigations suggested that Facebook's algorithms amplified hateful content targeting the Rohingya people, creating an "echo chamber" of hate speech and fueling anti-Rohingya sentiment. Critics also claim Meta failed to act despite warnings from human rights groups and internal studies acknowledging the risks. The spread of hate speech online is believed to have contributed to real-world violence against the Rohingya, including mass killings, displacement, and horrific human rights abuses. In 2018, Facebook released a statement accepting the role it played in the genocide against the Rohingya Muslims of Myanmar:

“The ethnic violence in Myanmar is horrific and we have been too slow to prevent misinformation and hate on Facebook.”

Ethnic violence in Ethiopia

Ethiopia (2022): Whistleblower Frances Haugen's leaked documents revealed internal discussions at Meta about the potential for its algorithms to amplify ethnic tensions in Ethiopia. The documents suggested that the platform's automated systems struggled to detect hate speech and calls to violence, particularly in Amharic and Oromo, languages spoken by major Ethiopian ethnic groups. That gap potentially allowed discriminatory and violent content to slip through the cracks, contributing to real-world conflict.


Violence in Gaza

Israel-Palestine Conflict (2023): The advocacy organization Digital Rights Monitor recently ran an investigation in which it submitted hateful and violent advertisements targeting Palestinians to Meta for review. These ads, written in both Hebrew and Arabic, were approved by the platform. Their language and messaging were dehumanizing, calling for violent action against Palestinian children and for the forced removal of Palestinians from their homes. The incident brought attention to the potential for Meta's moderation systems to be circumvented, particularly in situations involving sensitive geopolitical issues.


Hate speech in India

India (May 2024): A recent report by The Guardian exposed the use of manipulated political ads on Facebook and Instagram during India's elections. These AI-generated ads allegedly spread disinformation and incited religious violence. Facebook approved ads containing hateful language targeting Muslims, including the phrases "let's burn this vermin" and claims that "Hindu blood is spilling, these invaders must be burned". The ads also used Hindu supremacist rhetoric and spread falsehoods about political figures. One approved ad called for the execution of an opposition leader, falsely alleging he wanted to "remove Hindus from India"; the claim appeared next to an image of the Pakistan flag.


The report suggests that Meta may lack sufficiently robust measures to detect deepfakes and manipulated content, allowing them to be used for malicious purposes. The incident raises questions about the potential for Meta's platform to be weaponized during elections and to fuel social unrest.


Looking ahead, Meta needs to take these key steps:


  • Enhance its content screening mechanisms to efficiently identify violent speech across all languages.

  • Be more open about its advertising guidelines and how they are enforced.

  • Collaborate with independent experts and civil society groups to tackle these challenges in a more impactful way.


This is an ongoing struggle, and holding Meta accountable is crucial. We, the users, must stay alert and flag any ads advocating violence. Until Meta visibly prioritizes safety over revenue, the risk of online violence proliferating through its platforms remains high.
