Abstract
The problem of original intentionality—that computational states have at most derived intentionality, whereas intelligence presupposes original intentionality—has been debated at length in the philosophical literature by Searle, Dennett, Dretske, Block, and many others. Largely absent from these discussions is the problem of original agency: robots, and the computational states upon which they depend, have at most derived agency. That is, a robot’s agency is wholly inherited from its designer’s original agency. Yet intelligence presupposes original agency at least as much as it presupposes original intentionality. In this talk I set out the problem of original agency, distinguish it from the problem of original intentionality, and argue that the problem of original agency places at least as strict a limit on computational models of cognition and is thus at least as vexing as the problem of original intentionality.