  1. AI Language Models Cannot Replace Human Research Participants. Jacqueline Harding, William D’Alessandro, N. G. Laskowski & Robert Long - forthcoming - AI and Society: 1-3.
    In a recent letter, Dillion et al. (2023) make various suggestions regarding the idea of artificially intelligent systems, such as large language models, replacing human subjects in empirical moral psychology. We argue that human subjects are in various ways indispensable.
  2. Fairness in Machine Learning: Against False Positive Rate Equality as a Measure of Fairness. Robert Long - 2021 - Journal of Moral Philosophy 19 (1): 49-78.
    As machine learning informs increasingly consequential decisions, different metrics have been proposed for measuring algorithmic bias or unfairness. Two popular “fairness measures” are calibration and equality of false positive rate. Each measure seems intuitively important, but notably, it is usually impossible to satisfy both measures. For this reason, a large literature in machine learning speaks of a “fairness tradeoff” between these two measures. This framing assumes that both measures are, in fact, capturing something important. To date, philosophers have seldom examined (...)
  3. How wishful seeing is not like wishful thinking. Robert Long - 2017 - Philosophical Studies 175 (6): 1401-1421.
    On a traditional view of perceptual justification, perceptual experiences always provide prima facie justification for beliefs based on them. Against this view, Matthew McGrath and Susanna Siegel argue that if an experience is formed in an epistemically pernicious way then it is epistemically downgraded. They argue that "wishful seeing"—when a subject sees something because he wants to see it—is psychologically and normatively analogous to wishful thinking. They conclude that perception can lose its traditional justificatory power, and that our epistemic norms (...)
  4. Introspective Capabilities in Large Language Models. Robert Long - 2023 - Journal of Consciousness Studies 30 (9): 143-153.
    This paper considers the kind of introspection that large language models (LLMs) might be able to have. It argues that LLMs, while currently limited in their introspective capabilities, are not inherently unable to have such capabilities: they already model the world, including mental concepts, and already have some introspection-like capabilities. With deliberate training, LLMs may develop introspective capabilities. The paper proposes a method for such training, situates possible LLM introspection in the 'possible forms of introspection' framework proposed by (...)
  5. Nativism and empiricism in artificial intelligence. Robert Long - 2024 - Philosophical Studies 181 (4): 763-788.
    Historically, the dispute between empiricists and nativists in philosophy and cognitive science has concerned human and animal minds (Margolis and Laurence, Philosophical Studies 165(2): 693-718, 2013; Ritchie, Synthese 199(Suppl 1): 159-176, 2021; Colombo, Synthese 195: 4817-4838, 2018). But recent progress has highlighted how empiricist and nativist concerns arise in the construction of artificial systems (Buckner, From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us About the (...)