Abstract
As advances are made in artificial intelligence and machine learning, the distance between the activity of the designers/programmers of a system and the behavior of the system grows. This gap, between human action and the effects and consequences of that action, is not new, but emerging computing paradigms present this challenge with new urgency and reveal the poverty of our tools for reasoning about what human responsibility means in a world with ubiquitous artificial agents. This paper proposes an addition to the existing collection of frameworks for considering this issue.