Online Extremism, AI, and (Human) Content Moderation

Feminist Philosophy Quarterly 8 (3/4) (2022)

Abstract

This paper has three main goals: (1) to clarify the role of Artificial Intelligence (AI), along with algorithms more broadly, in online radicalization that results in ‘real-world’ violence; (2) to argue that technological solutions (like better AI) are inadequate proposals for this problem, for both technical and social reasons; and (3) to demonstrate that platform companies’ (e.g., Meta’s, Google’s) stated preference for technological solutions functions as a type of propaganda that serves to erase the work of the thousands of human content moderators and to conceal the harms they experience. I argue that a proper assessment of these important, related issues must be free of the obfuscation that the ‘better AI’ proposal generates. For this reason, I describe the AI-centric solutions favoured by major platform companies as a type of obfuscating and dehumanizing propaganda.

Similar books and articles

Content moderation, AI, and the question of scale. Tarleton Gillespie - 2020 - Big Data and Society 7 (2): 2053951720943234.
Detecting Fake News: Two Problems for Content Moderation. Elizabeth Stewart - 2021 - Philosophy and Technology 34 (4): 923-940.

Analytics

Added to PP
2022-07-29

Downloads
263 total (#77,781); 33 in the past six months (#103,502)


Author's Profile

Michael Randall Barnes
Australian National University

Citations of this work

Freedom of Speech. David van Mill - forthcoming - Stanford Encyclopedia of Philosophy.
