Contents
5 found
  1. Taking It Not at Face Value: A New Taxonomy for the Beliefs Acquired from Conversational AIs. Shun Iizuka - forthcoming - Techné: Research in Philosophy and Technology.
    One of the central questions in the epistemology of conversational AIs is how to classify the beliefs acquired from them. Two promising candidates are instrument-based and testimony-based beliefs. However, the category of instrument-based beliefs faces an intrinsic problem, and a challenge arises in its application. On the other hand, relying solely on the category of testimony-based beliefs does not encompass the totality of our practice of using conversational AIs. To address these limitations, I propose a novel classification of beliefs that (...)
  2. Morality First? Nathaniel Sharadin - forthcoming - AI and Society.
    The Morality First strategy for developing AI systems that can represent and respond to human values aims to first develop systems that can represent and respond to moral values. I argue that Morality First and other X-First views are unmotivated. Moreover, according to some widely accepted philosophical views about value, these strategies are positively distorting. The natural alternative, according to which no domain of value comes “first” introduces a new set of challenges and highlights an important but otherwise obscured problem (...)
  3. Personalized Patient Preference Predictors are Neither Technically Feasible Nor Ethically Desirable. Nathaniel Sharadin - forthcoming - American Journal of Bioethics.
    Except in extraordinary circumstances, patients' clinical care should reflect their preferences. Incapacitated patients cannot report their preferences. This is a problem. Extant solutions to the problem are inadequate: surrogates are unreliable, and advance directives are uncommon. In response, some authors have suggested developing algorithmic "patient preference predictors" (PPPs) to inform care for incapacitated patients. In a recent paper, Earp et al. propose a new twist on PPPs. Earp et al. suggest we personalize PPPs using modern machine learning (ML) techniques. In (...)
  4. Data over dialogue: Why artificial intelligence is unlikely to humanise medicine. Joshua Hatherley - 2024 - Dissertation, Monash University.
    Recently, a growing number of experts in artificial intelligence (AI) and medicine have begun to suggest that the use of AI systems, particularly machine learning (ML) systems, is likely to humanise the practice of medicine by substantially improving the quality of clinician-patient relationships. In this thesis, however, I argue that medical ML systems are more likely to negatively impact these relationships than to improve them. In particular, I argue that the use of medical ML systems is likely to compromise the (...)
  5. Exploring, expounding & ersatzing: a three-level account of deep learning models in cognitive neuroscience. Vanja Subotić - 2024 - Synthese 203 (3):1-28.
    Deep learning (DL) is a statistical technique for pattern classification through which AI researchers train artificial neural networks containing multiple layers that process massive amounts of data. I present a three-level account of explanation that can be reasonably expected from DL models in cognitive neuroscience and that illustrates the explanatory dynamics within a future-biased research program (Feest, Philosophy of Science 84:1165–1176, 2017; Doerig et al., Nature Reviews: Neuroscience 24:431–450, 2023). By relying on the mechanistic framework (Craver, Explaining the (...)