At the intersection of humanity and technology: a technofeminist intersectional critical discourse analysis of gender and race biases in the natural language processing model GPT-3

AI and Society:1-19 (forthcoming)

Abstract

Algorithmic biases, or algorithmic unfairness, have been a topic of public and scientific scrutiny in recent years, as growing evidence suggests that such systems pervasively assimilate human cognitive biases and stereotypes. This research analyzes the presence of discursive biases in the text generated by GPT-3, a natural language processing model praised in recent years for resembling human language so closely that it is becoming difficult to distinguish the human from the algorithm. The pertinence of this research object is substantiated by the identification of race, gender and religious biases in the model’s completions in recent research, suggesting that the model is indeed heavily influenced by human cognitive biases. To this end, this research asks: how does the natural language processing model GPT-3 replicate existing social biases? This question is addressed through scrutiny of GPT-3’s completions using Critical Discourse Analysis (CDA), a method particularly valuable for this research because it aims to uncover power asymmetries in language. The analysis centers on gender and race biases in the model’s generated text. Research findings suggest that GPT-3’s language generation significantly exacerbates existing social biases while replicating dangerous ideologies, such as white supremacy and hegemonic masculinity, as factual knowledge.

Links

PhilArchive




Similar books and articles

AI and consciousness. Sam S. Rakover - forthcoming - AI and Society:1-2.
AI is a ruler not a helper. Z. Liu - forthcoming - AI and Society:1-2.
Call for papers. [author unknown] - 2018 - AI and Society 33 (3):457-458.
Call for papers. [author unknown] - 2018 - AI and Society 33 (3):453-455.
Is LaMDA sentient? Max Griffiths - forthcoming - AI and Society:1-2.
The inside out mirror. Sue Pearson - 2021 - AI and Society 36 (3):1069-1070.
Against spectatorial utopianism. Robert Rosenberger - 2023 - AI and Society 38 (5):1965-1966.
The dissolution of the condicio humana. Tim Rein - 2023 - AI and Society 38 (5):1967-1968.
Review of Reality+. [REVIEW] Miloš Agatonović - forthcoming - AI and Society:1-2.

Analytics

Added to PP
2023-11-26


Author's Profile

Jorge Gonçalves
Universidade Nova de Lisboa

