![ ](./assets/banner2.png width="480px" align=left)   [Home](index.html)  [Speakers](speakers.html)  [Program](program.html)  [Call for Papers](call.html)  [Accepted Papers](papers.html)  [Organizers](organizers.html)

**Accepted papers**

[CT1] Towards Anthropomorphising Autonomous Vehicles: Speech and Embodiment on Trust and Blame After an Accident, Christopher D. Wallbridge, Victoria Marcinkiewicz, Qiyuan Zhang, and Phil Morgan. [PDF](https://drive.google.com/file/d/184vwgaWZKF-__x9pWOGjDX73bD0oh4TN/view?usp=sharing)

**Abstract:** We present a novel experiment in our research on the effects of anthropomorphism on trust and blame after an accident involving an Autonomous Vehicle (AV). We showed 147 participants (out of an expected 300 based on power calculations) simulation-software-generated animations of a hypothetical accident involving an AV, manipulating the presence of a humanoid robot and the conversation style. So far we have found no direct effect on trust, but we have found promising results on factors that correlate with trust, namely measures of competence and discomfort, as well as a potential effect on blame.

---

[CT2] How Much Do We Trust Social Robots? The Effect of Error Context and Agent Appearance, Deniz Erdil, Imge Saltik, Tugce E. Boz, and Burcu A. Urgen. [PDF](https://drive.google.com/file/d/17D1ECf0k95mplSXR2AQ5I7F3TOwnWrdW/view?usp=sharing)

**Abstract:** Human-robot interaction is becoming a trending topic as technology improves. One of the most critical factors in improving this interaction is trust in robots. Previous studies investigated the effects of agent appearance and context on trust in robots separately. In this study, we combined these two important factors to examine their effect on trust, as well as on several related measures: the negative outcome and compensability of the errors, and the intentionality of the agent in making the errors.
We investigated the effect of error context and agent appearance on trust in robots using the Multi-Dimensional Measure of Trust (MDMT) scale and explicit trust ratings. 25 participants were presented with twelve error scenarios from four different contexts (Health, Education, Service, and Cleaning) featuring three different agents (Human, Human-like robot, Machine-like robot), and were asked to rate the agents and the error scenarios. We found a main effect of context and a marginal main effect of agent appearance on trust in robots. In addition, context affected ratings of the negative outcome and compensability of the errors, and agent appearance affected intentionality ratings. We did not find any interaction effect between context and agent appearance in any of the ratings.

---

[CT3] A Robot’s Promise: The Philosophy of Promising in Robotics, Henry Cerbone. [PDF](https://drive.google.com/file/d/1_z9zqMNDL0mV-qVbGnCkIwE8vkgkJpHQ/view?usp=sharing)

**Abstract:** In this paper, I offer a survey of frameworks of promising as a proxy for thinking about trust in robotics. Throughout this survey, I apply each framework to the question: can robots make promises? Through this application, it is my hope that we learn something about promising in general. By eschewing moral grounding in favour of games and then returning to morality, I show that a positive view of robots’ promises requires us to give up something fundamental to promises. I do not offer a fixed definition of what a robot is, and often rely on thought experiments ranging from basic algorithms (die rolling) to complex models (GPT-3). The motivation behind this is similar to imagining a brain in a vat [1]: the part of a robot that concerns promising in any substantive sense is whatever governs its control and actions.
---

[CT4] Distribution of Responsibility During the Usage of AI-Based Exoskeletons for Upper Limb Rehabilitation, Huaxi (Yulin) Zhang, Melanie Fontaine, Marianne Huchard, Baptiste Mereaux, and Olivier Remy-Neris. [PDF](https://drive.google.com/file/d/1nuYUJvBGowtZDjI0mTMvmy4s9wwvephQ/view?usp=sharing)

**Abstract:** The ethical issues concerning AI-based exoskeletons used in healthcare have so far been studied in the literature rather than technically. How ethical guidelines can be integrated into the development process has not been widely studied, yet this is one of the most important topics for real-life applications. Therefore, in this paper we highlight one ethical concern in the context of an exoskeleton used to train a user to perform a gesture: during the interaction between the exoskeleton, the patient, and the therapist, how is the responsibility for decision making distributed? Based on the outcome of this analysis, we discuss how to integrate ethical guidelines into the development process of an AI-based exoskeleton. The discussion is based on a case study, AiBle. The different technical factors affecting the rehabilitation results and the human-machine interaction for AI-based exoskeletons are identified and discussed in this paper in order to better apply ethical guidelines during the development of AI-based exoskeletons.

---

[CT5] Exploring Privacy Implications for Domestic Robot Mediators, Manuel Dietrich and Thomas H. Weisswange. [PDF](https://drive.google.com/file/d/1Hu3grH1L0FulY-xF_pv-WYdAkEJvVSuu/view?usp=sharing)

**Abstract:** To become part of our everyday social environment, robots will need to be developed in a way that gains an appropriate level of trust from humans, both users and bystanders. One important aspect influencing trust is the handling of privacy.
In this paper, we explore the privacy challenges of social robots acting in the role of mediators in human-human interactions within domestic assistance scenarios. We approach this topic by reviewing privacy research on related technologies that already exist in many households, namely smart home and smart speaker devices. We extract common user concerns and requirements with respect to general data sharing, based on users’ experience with these technologies, and evaluate which of these will be relevant for the design of a robotic mediator. We also discuss research on the privacy implications of shared devices and telepresence robots, which extend the use cases to possible data sharing between humans. Finally, we point out mediator-specific privacy challenges in the realm of social privacy to highlight open research questions that should be addressed when targeting the deployment of robotic mediators.

---

[CT6] Modeling robot trust through multimodal cognitive load in robot-robot interaction, Anna L. Lange, Murat Kirtay, and Verena V. Hafner. [PDF](https://drive.google.com/file/d/1ihr48APDfybci1zyphOL0H2NJJpGIWlB/view?usp=sharing)

**Abstract:** This study presents a multimodal robot trust implementation in a robot-robot interaction setting. The computational trust model is composed of a multimodal auto-associative memory network, which extracts the cost of audio-visual perceptual processing (that is, cognitive load) for audio-visual pattern recall, and an internal reward module, which uses the cognitive load value to derive reward for performing the interactive task, i.e., sequential multimodal pattern recall. In this setting, a learner robot performs the interactive task with partner robots that have different guiding strategies: reliable and unreliable. Overall, the learner robot forms trust in the partner whose reliable guiding strategy reduces the cognitive load incurred on the learner robot during the experiments.
We verify this outcome by giving the learner robot a free choice to select the trustworthy interaction partner after performing the task with the partners following different guiding strategies.

---