![ ](../assets/banner2.png width="480px" align=left)   [Home](../index.html)  [Speakers](../speakers.html)  [Program](../program.html)  [Call for Papers](../call.html)  [Accepted Papers](../papers.html)  [Organizers](../organizers.html)
![ ](./photos/Winfield.jpeg width="210px" align=left) [Alan Winfield](http://alanwinfield.blogspot.com/) is Professor of Robot Ethics at the University of the West of England (UWE), Bristol, UK, and Visiting Professor at the University of York. He received his PhD in Electronic Engineering from the University of Hull in 1984, then co-founded and led APD Communications Ltd until taking up an appointment at UWE, Bristol, in 1992. Alan co-founded the Bristol Robotics Laboratory, where his research is focussed on cognitive robotics; he is especially interested in robots as working models of life, evolution, intelligence and culture. Alan is an advocate for robot ethics; he was a member of the British Standards Institution working group that drafted BS 8611: Guide to the Ethical Design of Robots and Robotic Systems, and he currently chairs the General Principles committee of the IEEE Global Initiative on Ethical Considerations in the Design of Autonomous Systems. Alan has published over 200 works, including ‘Robotics: A Very Short Introduction’ (Oxford University Press, 2012); he lectures widely on robotics, presenting to both academic and public audiences, and blogs at [http://alanwinfield.blogspot.com/](http://alanwinfield.blogspot.com/).

**Title:** On Trust, Theory of Mind and Transparency

**Abstract:** “Trust is important, but it is also dangerous” [1]. I will begin this talk by reflecting on notions of trust and trustworthiness in robotics, arguing that we need to be very careful indeed about describing a robot – or indeed any artefact – as trustworthy. Trust between humans is scaffolded by theory of mind – our ability to infer each other's beliefs and intentions. Some years ago I suggested that there is a fundamental asymmetry between robots and humans: while a social robot might be programmed with a model of the human(s) it interacts with, those humans are likely to have the wrong theory of mind for the robot [2]. In the second part of my talk, I will briefly introduce our work on robots with simulation-based internal models, and show how this approach provides a robot with a simple artificial theory of mind [3]. I will argue that this approach has the potential to provide the cognitive machinery social robots need to answer what-if questions such as “Robot, what would you do if I fall down?”. In the final part of my talk, I will turn to the ethics of social robotics and the key importance of transparency. I will introduce the new IEEE Standard 7001-2021 on Transparency of Autonomous Systems [4][5] and show that – for the first time – we now have a set of measurable, testable levels for both transparency and explainability. One thing is certain: if we aim to ‘trust’ our robots, they must be transparent.

[1] McLeod C (2020) Trust. The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.). https://plato.stanford.edu/archives/fall2021/entries/trust/

[2] Winfield AFT (2010) You really need to know what your bot(s) are thinking about you. pp. 201-208 in Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues, Y. Wilks (ed.), John Benjamins.

[3] Winfield AFT (2018) Experiments in Artificial Theory of Mind: From Safety to Storytelling. Front. Robot. AI 5:75.

[4] IEEE Standard for Transparency of Autonomous Systems. IEEE Std 7001-2021, pp. 1-54, 4 March 2022. https://ieeexplore.ieee.org/document/9726144

[5] Winfield AFT, Booth S, Dennis LA, Egawa T, Hastie H, Jacobs N, Muttram RI, Olszewska JI, Rajabiyazdi F, Theodorou A, Underwood MA, Wortham RH and Watson E (2021) IEEE P7001: A Proposed Standard on Transparency. Front. Robot. AI 8:665729.