Lectures from the 2017 Workshop:
Moral Machines?
Ethical Implications of Contemporary Robotics


Human Autonomy and the Hazards of Principal Agency in an Era of Expanding AI
Dr. Joanna Bryson

Artificial Intelligence is often treated as an alien force or an unruly, potentially dangerous child. In fact, it is just a special case of computation being commodified. Intelligence is the triggering of appropriate actions in response to perceived events. Information technology has been enhancing our capacity to do this arguably for thousands of years. It allows us to both remember and perceive more than we could as individuals, sometimes at the expense of others. In this talk I will first redescribe AI as an ecological feature of one species and show how it affects not only our world but ourselves as individuals. Then I will talk about British efforts to regulate AI. Finally, I will address why we should not construct AI to be a legal or moral agent: doing so is ill-advised and easily avoided, at least for commercial products.


Why Robot Ethics?
Dr. Peter Asaro

This talk will examine the main perspectives on Robot Ethics. This will include the primary motivations for Robot Ethics, and illustrative cases including self-driving cars and autonomous weapons. Part of the challenge of Robot Ethics lies in delineating the various perspectives of engineers, manufacturers, users, and the public. Addressing those challenges requires recognizing the complex dynamics between technology and society, and how the framing of engineering problems intersects with social and ethical values.


Embedded Ethics: The Philosophical-Anthropological Substrate of AI
Dr. Vanessa Rampton

It has been argued that ethical precepts are embedded in every form of artificial intelligence. Yet the ethical reflection that accompanies AI has traditionally focused on extreme situations or dramatic questions. In this lecture I explore the myriad ethical choices that arise from the fact that machines that think and talk are now part of the everyday life of many households. I discuss several approaches to moral deliberation that are well established in the Western philosophical tradition (including consequentialism, deontology and care ethics). The lecture will highlight their core features, address some common misunderstandings, and illustrate how these approaches complement or conflict with each other on roboethics issues such as driverless cars and care robots.


Embodiment versus Memetics: What AI semantics tells us about human intelligence
Dr. Joanna Bryson

Does language understanding require direct experience of the physical world, or can culture, like life, evolve without the understanding of its hosts? Here, in joint work with Aylin Caliskan-Islam and Arvind Narayanan, I show that human-like semantic biases are present in standard natural language processing tools (e.g. GloVe and word2vec) learning from the World Wide Web. We have replicated a spectrum of standard human biases as exposed by the Implicit Association Test and other well-known psychological studies. Our results indicate that language itself contains recoverable and accurate imprints of our visceral reactions and historic biases, whether these are morally neutral (as towards insects or flowers), problematic (as towards race or gender), or simply veridical, reflecting the status quo for the distribution of gender with respect to careers or first names. I discuss implications for both AI and NI.


Robot Ethics as a Design Problem
Dr. Peter Asaro

This talk will survey various approaches to incorporating Robot Ethics concerns in the design and engineering of robotics. This will include design processes such as Value Sensitive Design, Participatory Design, and the development of standards such as the British Standards Institution’s BS 8611:2016 Standard for «Robots and Robotic Devices: Guide to the ethical design and application of robots and robotic systems» and the IEEE P7000 Standard for a «Model Process for Addressing Ethical Concerns During System Design».