When a robotic dog wags its tail, we do not hesitate to interpret it as a sign of happiness. We get upset when we see little robot dinosaurs, barely more than sophisticated toys, being mistreated. We hesitate to strike little bug-like robotic objects. Evidence shows, in other words, that our interactions with robots are laden with affect, and this despite our full awareness that robots do not, ultimately, feel. What factors influence these attributions? What aspects of design can shape the way we interact with artificial agents? I argue that considering interactions with artificial agents in terms of emotionally loaded scripts can help explain our attribution of emotional states to social robots, as well as our emotional reactions during interactions with them. Moreover, it helps us identify the normative components of such interactions.
In this talk, I first explore the ways design features influence basic mechanisms of spontaneous perspective taking with social robots, presenting data showing how visual appearance modulates these effects. Next, I propose that, to explain more sophisticated mental state attribution, we should consider social interactions as activating scripts and schemata (Bicchieri and McNally, 2018) that come with expectations about how agents should behave and feel. Scripts contain information about expected emotional reactions, and their activation prescribes both the interpretation of emotions and the attribution of emotional states in normative ways. In this sense, I suggest, when we interact with social robots, our behaviors and emotions, as well as our attributions, are highly normatively regulated. To conclude, I discuss how basic design features relate to the activation of scripts.
If you are interested in participating online, please register via the following form: https://forms.microsoft.com/r/W3whw0ac3B. If you would like to attend in person, please send an e-mail to udnn.ht@tu-dortmund.de.
This lecture is a special edition of the AI Colloquium at TU Dortmund University. The lecture series investigates fundamental issues in AI from the vantage point of philosophy of science, including topics such as the transparency and interpretability of AI within scientific research, as well as the impact of AI on scientific understanding and explanation.
JProf. Florian Boge