Abstract
This chapter provides an overview of the challenges involved in algorithmic content moderation. Content moderation is the organized practice of screening user-generated content (UGC) on Internet sites, social media, and other online outlets to determine the appropriateness of the content for a given site, locality, or jurisdiction. The most common technical approaches rely on classifier systems that assign predefined category labels to individual posts. We briefly introduce pre- and post-moderation and provide real-world examples of algorithmic moderation systems used by an Austrian daily newspaper. We point to significant challenges of moderation, such as the ambiguities of natural language and the implications for freedom of expression. We conclude with issues that algorithmic content moderation raises for societal power relations and democratic control.