Contents
2 found
  1. Morality First? Nathaniel Sharadin - forthcoming - AI and Society.
    The Morality First strategy for developing AI systems that can represent and respond to human values aims to first develop systems that can represent and respond to moral values. I argue that Morality First and other X-First views are unmotivated. Moreover, according to some widely accepted philosophical views about value, these strategies are positively distorting. The natural alternative, according to which no domain of value comes "first," introduces a new set of challenges and highlights an important but otherwise obscured problem (...)
  2. Personalized Patient Preference Predictors are Neither Technically Feasible Nor Ethically Desirable. Nathaniel Sharadin - forthcoming - American Journal of Bioethics.
    Except in extraordinary circumstances, patients' clinical care should reflect their preferences. Incapacitated patients cannot report their preferences. This is a problem. Extant solutions to the problem are inadequate: surrogates are unreliable, and advance directives are uncommon. In response, some authors have suggested developing algorithmic "patient preference predictors" (PPPs) to inform care for incapacitated patients. In a recent paper, Earp et al. propose a new twist on PPPs. Earp et al. suggest we personalize PPPs using modern machine learning (ML) techniques. In (...)