Robot Rights

Informed by Western and Eastern Principles

Overview

With the rise of social robotics and assistive AI, the debate over robot rights is becoming increasingly important. What are robot rights? How should we navigate the complex landscape of human and robot rights from moral, legal, and social perspectives? As AI competition increases between East and West, can we bridge the growing divide by achieving a shared framework of robot rights applicable in cross-cultural settings?

We present a robot rights curriculum in which we construct and work through a foundational framework for critical thinking, understanding, and discussion of decisions regarding robot rights. By cultivating an understanding of the debate informed by both Western and Eastern principles, the curriculum takes an important first step toward addressing existing gaps in knowledge and discourse on robot rights and toward uniting our global community.

Curriculum

INSTRUCTORS

Curriculum Guide - Instructor Edition

STUDENTS

Curriculum Guide - Student Edition

READINGS

Curriculum - Compiled Readings

Acknowledgment

This project was generously funded by the Peter J. Eloranta Summer Undergraduate Research Fellowship award and the Undergraduate Research Opportunities Program at MIT.

If you want to stay up to date on robot rights, please sign up for my Substack newsletter. Thank you!


Curriculum Guide - Instructor Edition


Overview

In recent years, great strides in artificial intelligence (AI) and social robotics have led scholars to question whether and how society should have any responsibility toward intelligent systems. Some have argued that these entities are automatically removed from moral consideration by way of their ontological difference from humans. Others believe artificially intelligent systems deserve some moral status conditioned on their developing consciousness or sentience. Still others argue that this debate should be set aside altogether in consideration of more pressing moral issues. Regardless of stance, scholars, engineers, and AI professionals generally agree that robots are taking on increasing importance in social contexts. This raises questions regarding our mutual interactions: whether robots should be treated as entities beyond mere tools, and whether they should have rights. Such considerations have prompted an influx of academic attention to the moral, social, and legal status of robots.

Even so, there is a distinct need for greater and better-informed discourse on robot rights and, more broadly, AI ethics. Consider, for instance, the question: even if we recognize robots as having moral standing, how should societies situate them within an existing hierarchy of values? In many societies, the law is the key determiner of the hierarchy of values, and so the law is taken into consideration when making decisions. In other societies, tradition underpins the framework of values and must be taken into account. While many current legal systems prioritize human beings, recent studies indicate that people hesitate to sacrifice robots in order to save humans, a hesitancy strongly related to the increasing anthropomorphization of robots. As robots continue to evolve, how should we navigate the complex landscape of human and robot rights from moral, legal, and social perspectives?
Furthermore, with increasing AI competition between the East and West, there is a growing divide in the progress of intelligent systems that may open the door to disastrous consequences, exacerbate inequity, and shift global power dynamics. How can we attempt to strike a balance between divergent ideals in the development of robot rights? How can we achieve a shared framework of robot rights applicable in cross-cultural settings?

Given these developments, the debate over robot rights is becoming increasingly important. What are robot rights? How should we navigate the complex landscape of human and robot rights from moral, legal, and social perspectives? As AI competition increases between East and West, can we bridge the growing divide by achieving a shared framework of robot rights applicable in cross-cultural settings? This course will provide a foundational framework for students and researchers to engage in critical thinking, understanding, and discussion of decisions regarding robot rights. It aims to cultivate an understanding of the robot rights debate informed by Western and Eastern principles as a step toward uniting our global community.

Through the study of robot rights motivated by both Western and Eastern principles, this course serves as a first step toward addressing existing gaps in knowledge and discourse on robot rights.

Course Objectives

Upon completion of the course, students should be able to accomplish the following:

  • Describe the concept of robot rights and understand its nuances.
  • Gain familiarity with the problems of AI ethics and some of the possible solutions specifically related to robots.
  • Describe the landscape of robot rights discourse today from both Western and Eastern perspectives.
  • Identify and discuss the relationship and dynamics between robot-rights-related Western and Eastern principles over time.
  • Analyze the social, ethical, and technical benefits and consequences of various frameworks and notions of robot rights.
  • Identify challenges associated with public engagement on robot rights issues in which different communities have differing interests that are inherently in opposition.
  • Discuss challenges facing engineers and researchers working on robot rights-related technologies.
  • Discuss implications for robot rights in the context of law enforcement, government regulation, private corporations, research institutions, and scientific research.
  • Apply ethical understanding to analyze case studies involving AI, engineering, society, politics, and governance.

Course Components

The material for this course is divided into three units. Each module, students will be given a set of readings pertaining to that module’s topic of study. They will then respond to 3-5 Study Questions based upon the readings and submit their responses prior to class. During class, students will be called upon to participate in graded class discussions of the material assigned for the module. Throughout the course, students will additionally complete and submit two book annotations. If desired, a final paper may be included for synthesis of the course material on a relevant topic of choice. Specifications for book annotations and final papers may be found at the end of this document.

Readings and Digital Resources

Readings each module will draw upon a variety of sources, including books, news articles, videos, and more.

  • Required Textbooks:
  • *Gunkel, Robot Rights (2018 Edition)
  • *Pereira & Lopes, Machine Ethics: From Machine Morals to the Machinery of Morality (2020 Edition)
  • Required Readings on Course Website
    Additional readings consist of articles and research publications.
  • Recommended Books
  • Anderson & Anderson, Machine Ethics (2011 Edition)
  • Thompson, Machine Law, Ethics, and Morality in the Age of Artificial Intelligence (2021 Edition)
  • Pasquale, New Laws of Robotics: Defending Human Expertise in the Age of AI (2020 Edition)
  • Digital Resources:
    Kami Annotation Tool**

*For annotation assignments.
**Or annotation software of choice.

Suggested Grading Scheme

Below is the recommended grading breakdown for the course. The second set of percentages reflects the recommended grading scheme should a final paper be included in the course.

  • Written assignments (60% / 45%): Each module, students must answer each study question in a paragraph (or more, if desired).
  • Classroom participation (30% / 35%): Students are required to attend class and are expected to make contributions to class discussion.
  • Book Annotations (10% / 5%): We will be working through the texts Robot Rights by Gunkel and Machine Ethics by Pereira and Lopes. Please see specifications on page XX.
  • Final Paper (0% / 15%): Students taking the advanced version of this course will be required to complete a final paper demonstrating their ability to analyze a particular topic relevant to the material covered in this course. Please see specifications on page XX.

COURSE CONTENT

Unit One: Introduction to Foundational Questions in Robot Rights

Module 1: Introduction to Robot Rights
  • Readings:
  • Gunkel, “2020: The Year of Robot Rights,” The MIT Press Reader, January, 2020
  • Falk, “The rise of smart machines puts spotlight on 'robot rights',” NBC News MACH, December, 2017
  • Sherman & Shaw, “Now is the time to figure out the ethical rights of robots in the workplace,” CNBC Work, December, 2018
  • Banik, “Will Sentient AI Gain Equal Rights as Humans in the Future?,” Analytics Insight, June, 2022
  • Chan, “What you need to know about China’s AI ethics rules,” TechBeacon, January, 2022
  • Cole, “The Global Race to Robot Law: 5th Place, China,” October 2012
  • Foundational Texts:
  • Gunkel, Robot Rights, Introduction & Chapter 1 (Thinking the Unthinkable)
  • Study Questions:
  • What do you consider a robot?
  • How do we define a right? How do you define a right? How should a right be defined?
  • What are the key ethics issues involving robot rights?
  • How should we as a society (Eastern, Western, or otherwise collaborative) weigh the risks and benefits of pursuing robot rights?
  • Lesson Objectives:
    In this module, students will begin to familiarize themselves with the notion of robot rights as well as the history behind the topic. Class discussion will provide relevant fundamentals of moral and ethical theory. Students will acquire the tools necessary for upcoming material that builds upon this module’s foundation. Relevant sample prompts to outline class discussion are provided in the Instructor Notes below. The goal is for students to bring their unique backgrounds and experiences, engage with each other on specific areas and differing aspects of the robot rights debate, and gain exposure to otherwise inaccessible nuances.
  • Instructor Notes:
    Guide students in discussion of the Study Questions by drawing upon their personal encounters with robots and rights in fiction, news, and the real world; encourage students to draw upon the readings and point out important passages to support discussion.
  • Begin by providing historical context for the word robot and the term robot rights.
  • How has science fiction shaped students’ notion of robots and identity? Many have seen the movie Wall-E and the series Star Wars. Can students come up with other examples of technologies for which science fiction has established expectations in advance of engineering?
  • How does the current state of robot rights conflict or agree with developments in human-robot interaction, if at all?
  • Introduce the sense-think-act paradigm as mentioned on page 34 of Robot Rights. Do students agree with the sense-think-act paradigm, and why or why not? How should we account for the effects of changing social perceptions and roles on the definition of robot?
  • Guide the students through the Hohfeldian incidents by calling upon students to describe them and provide examples. Do the same for will and interest theory. Ask the students to think about which they subscribe to in their daily lives and elaborate in light of the perspective shared on page 46 of Robot Rights.
  • Call upon students to share what they think of the notion of robot rights in general.
  • Do they agree with or dismiss the need for discourse and why?
  • Why is robot rights discourse so often framed as extending human rights to robots?
  • Should robot rights be defined separately and why?
  • Should robots be able to ‘think’ in order to have rights? If it is “premature” (Gunkel, Robot Rights, p.50), when would the “right time” be?
  • How do our answers to these questions compare to discourse on animal rights?
  • Where do, and should, we draw the boundary between science fiction and research & development? How should it be evaluated?
  • Do we have a duty to protect robots with rights? What do students think of Teubner’s quote on page 56 – do we have a similar responsibility to protect robots against the destructive tendencies of human society? On the flip side, is this destruction universal across all societies or unique to society in the West?
  • Have students now turn to the two articles on robot regulation in China. Are robot rights admissible under the 5 “ethical norms” established in the Chan piece? Are there loopholes to keeping AI “under meaningful human control?”
  • How does the Chinese perspective appear to compare to the European stance on robot rights? How does the Chinese stance appear to have evolved since 2012 as described in the Cole piece?
  • Do the students find it possible to envision a future in which a unified framework of robot rights can exist between the East and the West? Can we find common ground?
  • Do we have a responsibility to ensure robot rights are not unthinkable, and likewise to ensure the same for a cross-cultural framework of robot rights and law?
Module 2: Machine Ethics
  • Readings:
  • “Ethics of Artificial Intelligence and Robotics,” Stanford Encyclopedia of Philosophy, April, 2020
  • Deng, “Machine ethics: The robot’s dilemma,” Nature, July, 2015
  • Jermsittiparsert, “The Influence of Machine Ethics on the Performance of AI of the ASEAN Countries,” Social Science Asia, 2021
  • (Optional resource) The Moral Machine
  • Foundational Texts:
  • Anderson & Anderson, Machine Ethics, Chapters 1-3
  • Thompson, Machine Law, Ethics, and Morality in the Age of Artificial Intelligence, Chapters 5, 7, 8
  • Study Questions:
  • What criteria does Asimov lay out about the objectives of robotics? Do you believe they are sufficient or not, and why? Are Asimov’s criteria admissible across the East and West?
  • Do you believe AGI is ethical? Why or why not?
  • What do Anderson and Anderson propose as the ultimate goal of machine ethics? What are the counters to those concerns as discussed by those authors and also by Desol?
  • What are the benefits and drawbacks of the value-sensitive design (VSD) approach toward machine ethics? How would you judge the objectives of machine ethics on this basis?
  • Lesson Objectives:
    This module aims to introduce students to:
  • Machine ethics, with a step toward understanding differing perspectives on the impacts of AI between the East and West.
  • Aspects of philosophical and ethical theory closely tied to machine ethics.
  • The importance of machine ethics toward creating ethical agents.
  • Relevant ethical problems within AI.
  • The distinction between the mechanical and the machinic – the impact of art on our perception of machines and machine ethics.
  • Instructor Notes:
    Guide students in discussion of the questions below by drawing upon their personal experiences; encourage students to draw upon the readings and point out important passages to support discussion. Bring in relevant concepts and explanations of theoretical knowledge as appropriate.
  • Should we treat machines as subjects or objects? Should we treat AI technologies as AI agents or patients? What would happen in each case?
  • Draw students’ attention to “Ethics of Artificial Intelligence and Robotics”
  • How would you define machine ethics? In particular, is anything lacking from the definitions provided in the “Ethics of Artificial Intelligence and Robotics” piece? What did you think of these definitions? Are these definitions applicable across cultures?
  • Is machine ethics needed? Are there distinctions in which types of machines do and don’t need ethics?
  • Should robot rights be defined separately and why?
  • Can machines ever become full ethical agents?
  • Machine Ethics, page 18
  • Machine Law, Ethics, and Morality in the Age of Artificial Intelligence, page 78
  • Consider from standpoints of varying ethical schools of thought
  • Is the singularity a shared fear across cultural boundaries? Across generations? Religions? Why or why not, and when? How does ‘Frankenphobia’ differ across the East and West?
  • Point out the view shared in the fourth full paragraph on page 42 of Machine Ethics; how might Eastern practitioners respond to this view?
  • How does religion affect our perception of ethics? Should religion be removed from consideration of machine ethics? Is it possible to sever this connection?
  • What is the difference between the “good” and the “right”? Which should be prioritized in developing machine ethics? (Machine Ethics, page 37)
  • What are the strengths and weaknesses, benefits and dangers, of a utilitarian approach to machine ethics? (Machine Ethics, page 38)
  • Is it possible to reach an ethical framework for machines and/or AI without first understanding our own moral and ethical natures? (Machine Ethics, page 42)
  • Will it be possible for nations to come to agreement as to how AI should be ethically governed? (Machine Law, Ethics, and Morality in the Age of Artificial Intelligence, page 111)
  • Is value-sensitive design (VSD) truly universally applicable? Why or why not?
  • Consider politics, religion and mythology, traditions, philosophy
  • How does art affect our valuation of machines?
  • Are machines truly the “internalized other”? Is there a difference in this perception across cultures? Can machines ever shift away from this prescribed role?
Module 3: Comparison of Western and Eastern SERC Principles
  • Readings:
  • Sharma, “Robots bring Asia into the AI research ethics debate,” November, 2017
  • Gal, “Perspectives and Approaches in AI Ethics: East Asia,” June, 2019
  • “Asia’s AI Agenda: The Ethics of AI,” MIT Technology Review Insights, July, 2019
  • Jones, “Is the AI ethics issue hindering innovation?” Techwire Asia, February, 2021
  • Kratsios, “AI That Reflects American Values,” Bloomberg Opinion, January, 2020
  • Hongladarom, “The case for uniting the East and West to build ethical AI,” Quartz, May, 2019
  • Tan, “Alternative ethical frameworks for AI: A critical view of AI ethics,” Association for Progressive Communications, August, 2020
  • (Optional) “Artificial Intelligence: Policies in East Asia,” Asia Pacific Foundation of Canada, 2019
  • Study Questions:
  • Do we need “globally acceptable guidelines” for AI ethics? What are the key arguments in support of and against devising a global framework of (AI) ethics united across East and West? Do you think achieving this is possible? Why or why not?
  • What are the key stances, arguments, and evidence for and against “partner AI” in East Asia?
  • What are the societal concerns addressed in the MIT Tech Review Insights report?
  • How might we make an AI ethical framework actionable and enforceable?
  • Lesson Objectives:
    This module aims to introduce students to:
  • Key differences between Eastern and Western thought, discourse, and future prospects on AI ethics and the legal and moral standing of robots.
  • The ways in which religion, history, traditions, and culture have shaped East Asian societies’ views on “partner AI.”
  • The Anthropomorphized Tools Paradox and its ties to other societal problems and tensions.
  • Prevailing views on the need and possibility of developing a cross-cultural framework for AI ethics and/or robot rights.
  • Current tensions at the regional and international levels regarding computing ethics and regulation.
  • Instructor Notes:
    Guide students in discussion of the topics below by drawing upon their personal experiences; encourage students to draw upon the readings and point out important passages to support discussion. Bring in relevant concepts and explanations of theoretical knowledge as appropriate.
  • In what sectors today have globally acceptable guidelines been successfully established? What made these guidelines necessary, what spurred such action, and what made possible their adoption? Can we take inspiration from this for AI ethics?
  • Why do we (not) need a globally admissible framework for responsible computing and AI? What steps are necessary to make this possible?
  • Is it possible to come up with such a global framework? Why or why not? What does this say about doing so for robot rights?
  • What makes current AI ethical thinking non-universal?
  • (How) Does language affect our perception of ethics? How should we navigate the language barrier in developing a framework of AI ethics?
  • How do political dimensions affect discussion on matters of AI regulation and ethics? How can we skillfully navigate this landscape?
  • How do South Korea, China, and Japan differ in their approaches to and perspectives toward partner AI? What are the merits and drawbacks of each? (Gal piece)
  • Should such regional differences and disagreements be addressed prior to, or simultaneously with, cross-continental dialogue?
  • What are the implications of the Anthropomorphized Tools Paradox?
  • Do you believe an AI or robot can truly become a romantic partner?
  • How are positive and negative sentiments on AI balanced in Asia? (e.g. workplace) (MIT Tech Review piece)
  • How do AI governance and regulation differ between the US and Asia?
  • Do you think AI should be government-regulated? Why or why not, and how?
  • How does the gender imbalance in CS and AI-related fields affect the trajectory of such technologies?
  • Discuss the figure on page 10 of the MIT Tech Review Insights report.
  • Do you think (the majority of) humans will ever trust AI and perceive it as trustworthy? In what ways are trust and trustworthiness perceived differently between the East and West?
  • Overall, what are the key similarities and differences between Eastern and Western views on AI ethics and its trajectory?
Module 4: Machines in the Media
  • Readings:
  • None
  • Study Questions:
  • None
  • Lesson Objectives:
  • First book annotation due (Pereira & Lopes).
  • Instructor Notes:
  • In-class movie showing – Hi, AI
  • Post-movie discussion
  • Prompt students to reflect based on their reading of the book.
  • Invite students to share 1-2 of their annotations if desired.
  • Should we strive toward independence or more dependency on machines and why?

Unit Two: History and Trajectory of Robot Rights in the East and West

Module 5: Robot Rights in the US & EU
  • Readings:
  • “Principles of Robotics,” UK Research and Innovation
  • Cuddy, “Robot Rights Violate Human Rights, Experts Warn EU,” EuroNews, April, 2018
  • Open Letter to the European Commission on Artificial Intelligence and Robotics
  • Marko, “Robot rights – a legal necessity or ethical absurdity?” Diginomica, January, 2019
  • Langman, et al., “Roboethics principles and policies in Europe and North America,” SN Applied Sciences, November, 2021
  • Calo, “Robots in American Law,” University of Washington School of Law, April, 2016
  • Foundational Texts:
  • Gunkel, Robot Rights, Chapter 2 (!S1->!S2)
  • Pasquale, New Laws of Robotics: Defending Human Expertise in the Age of AI, Introduction
  • Study Questions:
  • Would you want to live in a world in which you cannot determine whether you are interacting with a human or a machine? Why or why not?
  • How transparent should a machine’s behavior be to humans? Why?
  • What are the key issues regarding AI versus human expertise, and how do they relate to the five principles outlined in “Principles of Robotics”?
  • Based on this module’s readings, what are the key arguments for and against granting legal personage to robots and AI machines?
  • Lesson Objectives:
  • Understand the current status of robot rights regulation, law, and discourse in the United States and European Union, along with relevant historical developments leading up to the status quo.
  • Begin to draw comparisons between the Western and Eastern stances on robot rights and AI ethics.
  • Critically evaluate historical decisions and current proposals for regulating robot rights.
  • Explore Western views on the tension between robots/AI and human expertise.
  • Instructor Notes:
    Guide students in discussion of the topics below by drawing upon their personal experiences; encourage students to draw upon the readings and point out important passages to support discussion. Bring in relevant concepts and explanations of theoretical knowledge as appropriate.
  • Walk through each of the principles listed in “Principles of Robotics.” Invite students to share their perspectives and critically analyze advantages and limitations of the framework provided.
  • Do students agree with each of the points? Why or why not?
  • Why might we treat robots as products instead of people, and why not? On the other hand, what are the central arguments for granting robots legal personage?
  • Can robots have emotions? Can we love a machine and a machine love us back?
  • Who can claim to have ‘expertise’ on robot rights?
  • Compare and evaluate the implementation of roboethics into policy between the US, Canada, and the EU. (Langman, et al. piece)
  • Walk through the main case studies presented in “Robots in American Law.”
  • Would increasing resemblance between humanoid robots and humans render the considerations raised in Section I.A obsolete?
  • Can robots commit crimes? Should they be legally liable?
  • Robots frequently figure in discussions of judicial bias. In what other domains should we be careful about letting robots take on the roles of humans?
  • Should robots be admitted as witnesses in court? On a related note, can robots ever be sufficiently trustworthy? What would be needed to achieve this?
  • How should the definition of robot in American law (p.41) be updated?
  • Discuss the four main rules introduced by Pasquale.
  • Should we place limitations on the degree to which robots can resemble humans?
  • Is it ethical to delineate robots by their ‘owners’? If legal personage for robots is achieved, can ownership and personhood coexist?
Module 6: Robot Rights in East Asia
  • Readings:
    Note: English Translations in Progress
  • Zhu, Williams, and Wen, “Confucian robot ethics,” Computer Ethics, 2019
  • “Should Robots Have Rights? And What Should We Do If Machines Become Self-Aware?” (机器人是否应当拥有权利吗?如果机器有自我意识了又该怎么做呢?), Tencent, December, 2019
  • “The Rights and Obligations of Artificial Intelligence Robots” (人工智能机器人的权利与义务), OfWeek, August, 2018
  • “Robots’ Right to Life” (機器人的生命權力——《再.創世》專題), PanSci, August, 2021
  • “Questioning the ‘Legal Personhood Theory’ of Robots” (对机器人“法律人格论”的质疑)
  • Guo and Zhang, “Robot Rights,” Science, 2009
  • Weng et al., “Toward the Human-Robot Co-Existence Society: On Safety Intelligence for Next Generation Robots,” International Journal of Social Robotics, 2009
  • Weng, “Beyond Robot Ethics: On a Legislative Consortium for Social Robotics,” Advanced Robotics, 2010
  • Dang and Liu, “Robots are Friends as Well as Foes: Ambivalent Attitudes Toward Mindful and Mindless AI Robots in the United States and China,” Computers in Human Behavior, 2021
  • (Optional) Robertson, “Human Rights vs. Robot Rights: Forecasts from Japan,” Critical Asian Studies, 2014
  • (Optional) South Korean Robot Rights Charter 2012
  • (Optional) “Robot Law and Ethics”
  • Study Questions:
  • What do “Questioning the ‘Legal Personhood Theory’ of Robots” and “The Rights and Obligations of Artificial Intelligence Robots” name as obstacles to granting legal personage and rights to robots? Do you believe rights are derived from consciousness? Why or why not?
  • Based on this module’s readings, describe three key differences between the Western and Eastern approaches to robot rights and/or the motivations involved.
  • Evaluate the necessity of avoiding “colonization of robot legislative affairs” (Weng 2010). Why is it important? Do you agree? Why or why not?
  • What are the differences between an electronic person, natural person, and legal person? What are the criteria for each?
  • Lesson Objectives:
  • Acquire the necessary background to draw effective comparisons between Eastern and Western approaches to robot rights and robot law.
  • Identify key themes in and unique qualities of East Asian nations’ approaches to robot rights.
  • Understand nuances in the Chinese stance toward extending rights to robots.
  • Gain exposure to Korean and Japanese discourse on robot rights.
  • Identify key tenets of the East Asian-driven argument for globalization of robot regulation.
  • Evaluate the objective of robot and human co-existence put forth by Korea and Japan in the context of robot law.
  • Gain a deeper understanding of how cultural influences play a role in robot governance and regulation, as well as public engagement.
  • Identify areas of tension between the East and West regarding the trajectory of robots’ potential rights and their coexistence with humans.
  • Instructor Notes:
    Guide students in discussion of the topics below by drawing upon their personal experiences; encourage students to draw upon the readings and point out important passages to support discussion. Bring in relevant concepts and explanations of theoretical knowledge as appropriate.
  • What have human behavioral experiments revealed about people’s views toward robots in the US and China?
  • Do you consider Siri to be a robot?
  • What do you think of the ten laws proposed by Tezuka? (Robertson piece, p.584)
  • What do you consider preconditions to the possibility of possessing (moral) rights? Must a being have a sense of self and be able to self-create a ‘life plan?’ Must it be able to ‘awaken,’ as stated in “The Rights and Obligations of Artificial Intelligence Robots”? Why or why not?
  • How about legal rights? Note the difference between the two.
  • What do you consider to be ‘living’ versus ‘non-living?’ Discuss the ‘Frankenstein Complex’ and invite students to think in terms of their own religious and cultural experiences.
  • Reference differing systems such as Confucianism, animism, and other indigenous traditions. Highlight climate change and environmentalism as areas of insight into an alternative approach.
  • Introduce the notion of infinite deferral. Is there a difference between the East and West, whether in the status quo or stemming from cultural beliefs, as to the optimal timing of robot rights discourse and moral and legal decision-making?
  • Highlight the possibility mentioned by Weng et al. of adopting two different codes of ethics for robots. Is the self-awareness criterion sufficient for this delineation? Why or why not? What other delineations are possible?
  • Related: industrial vs. social robots, as mentioned by Weng; moral vs. legal rights for robots in different societies
  • Draw students’ attention to the legislative consortium for social robotics proposed by Weng. Discuss the benefits and drawbacks of constructing such a framework for robot law at the general level. Is this a more realizable possibility in consideration of current arguments against international robot regulations (cf. Guo and Zhang)?
  • Discuss the possibility for robot technologies to widen gaps between nations. Should robot law include measures for preventative arms control?
  • What are some current challenges facing public engagement on robot rights, specifically on issues in which different communities have differing interests that are inherently in opposition? Can you name some additional examples of such issues?
  • Provide background on the South Korean Robot Rights Charter effort (originally intended for completion in 2007) and discuss why it may have failed to be completed and disseminated to the public. What could have been done differently?
  • Draw students’ attention to the case study on robot-perpetrated harm to humans. Discuss the question of responsibility and differences between the US and China on this topic. When should humans bear responsibility versus robots?
  • How do you think we should let animal rights impact judgment of robot rights, if at all? What would Eastern versus Western practitioners say?
  • More broadly, there is less openly accessible literature on robot rights in the East compared to the West, and Western views are often cited as reference points for comparison. Why might this be the case? How can we navigate cultural and language barriers in this field? How can we avoid reinforcing a colonialist mindset?
  • Invite a few students to share their responses to any study question(s) of choice.
Module 7: Robot Rights Today
  • Readings:
  • “Re-thinking “Human-centric” AI: An Introduction to Posthumanist Critique,” EuropeNow, November, 2021
  • Foundational Texts:
  • Gunkel, Robot Rights, remaining chapters*
  • Study Questions:
  • None; second book annotation due.
  • Lesson Objectives:
  • Explore robot rights in various areas of application.
  • Practice synthesizing principles of AI ethics in discussing robot rights case studies.
  • Identify key challenges and opportunities in advancing robot rights discourse in a non-ethnocentric manner.
  • Understand the benefits and drawbacks of human-centric AI and robotics.
  • Instructor Notes:
    Guide students in discussion of the topics below by drawing upon their personal experiences; encourage students to draw upon the readings and point out important passages to support discussion. Bring in relevant concepts and explanations of theoretical knowledge as appropriate. If applicable, students should begin thinking about what they would like to write about for their final papers.
  • Discuss the prospect of delineating different rights regimes for different uses of robots. What are the benefits and drawbacks? Relate this to differing moral and legal frameworks in the East and West. Draw upon existing debates in AI ethics, including responsibility and liability, intellectual property, privacy and surveillance, transparency, existentialism and singularity, fairness, and inequality.
  • Manufacturing
  • Healthcare
  • Warfare
  • Elderly care
  • Education
  • Artistic Production
  • Taxation
  • How can we move away from human-centric dialogue on robot rights?
  • How should we define metrics for ‘success’ in moving toward robot rights?
  • What are the greatest obstacles facing robot rights (discourse) today? What are additional needs that must be addressed?

Unit Three: Robots and Humanity

Module 8: Medical AI and Robotics
Module 9: Anthropomorphic, Animal, and “Inanimate” Robots
Module 10: Killer Robots
Module 11: Robots and Work
Module 12: Robots, Love, and Friendship

Sample Annotation Assignment Instructions

This semester we will have 2 annotation assignments. For the first annotation, you will read Pereira and Lopes’s book Machine Ethics: From Machine Morals to the Machinery of Morality, and for the second you will read David Gunkel’s recent book Robot Rights.

I am looking for a total of 20-25 meaningful notes spaced throughout the six chapters of the reading. The introduction is important background (read it!) but I would suggest not marking it up since it's really just an introduction to the book (unless you would like to). You could do four notes per chapter, or it could mean 10 in one chapter and the rest in the others. You are welcome to add more than 25 if you'd like. If you do add more than 25, I will make a note in the gradebook that you went above and beyond for the assignment and keep that in mind down the road, but I am not expecting more than 25, so don't feel pressure. I'd much rather you enjoy the text and comment when you see fit.

Scoring

This assignment will be marked out of a score of 10, with 10 being the highest. Please note the expectations below:

2 Points: The correct number of notes on the text (20-25+)

3 Points: The quality of the notes, including each note being 2-3 sentences containing your thoughts, reflections, or questions. A note should not simply be a question mark or 4 words. You do not have to do intensive research for these notes. I just want to see how you think about robot rights and machine ethics. All you have to do is tell me what you think about what is written.

3 Points: The inclusion of at least 5 "cited notes" in the 20-25 total. For 5 of your notes, cite a book (our textbooks or a book from Google Books is fine) that answers a question or identifies a term or concept you don't know. Try to pick a range of topics and follow your own instincts with details you find really interesting and striking. There are so many great possibilities to choose from in this book, and I am looking forward to learning from all of you through reading your reflections. Use this opportunity to challenge yourself.

2 Points: Locate a recent news article (English, Chinese, or any other language is fine!) that connects to the themes or subject of the book. Paste the link and add a brief discussion about how the book clarifies an issue at stake in the article.

How to annotate and submit the assignment

Step 1: Upload the document to Kami (free) and proceed to annotate it.

Step 2: After you finish the annotation, share the finished product as a link, via the Share button in the top right corner, which looks like three dots connected by two lines.

Step 3: This opens a link you can share.

Step 4: Submit this link.

Sample Final Paper Assignment Instructions

The guidelines for this class’s final paper are broad, since you all come from different backgrounds and majors and have diverse interests pertaining to robot rights and AI ethics. Your final paper should be 8-10 pages in length when double-spaced.

Here are ways you could think about the paper:

  1. Which topics discussed this term do you want to know more about? Read more about them and write a report on your findings!
    Examples: "Where was coal mined in China historically and how did the geography of coal relate to economic development?" "How has the image of tigers changed between the premodern and modern era - when it the "turning point" from ferocious animal to a creature needing of protection?" "Why has Beijing seen three sandstorms in 2021? - looking at urbanization and the Gobi Desert"
  2. Is there a book or series of articles you'd really like to read? Read them and write a report on your findings!
  3. Is there a story you've read in the media that you'd like to understand more of the background to? Investigate it!

You don't necessarily have to formulate a formal academic argument. An argument is welcome, but a good summary of a problem or question with your own thoughts works too. Feel free to email me to ask about potential essay topics.

Grading Rubric

The final paper is worth 30 points:

  • 5 points: Introduction of the topic with a clear question (you don't have to answer the question; you just have to explore the question)
  • 10 points: Discussion of the topic with at least 3 sources used
  • 5 points: Contextual Accuracy ("does the paper make sense?")
  • 5 points: Use of at least one primary source
  • 5 points: Length, Footnotes, and Works Cited (at the end of the paper)

Supplementary Materials

Interview with David Gunkel

Q: What drew you to robot rights and why should we care about it?

A: It’s a good question, because robot rights is one of these things that has been on the periphery of AI ethics and robot ethics for a while. It gets mentioned from time to time, but the majority of the early effort to grapple with the moral quandaries of this technology was more on the side of responsibility. If something goes wrong, who do we hold accountable for the bad outcome? And the question of the status or standing of the individual technologies, whether it be a robot or an AI, was something that was a bit in play, but it wasn’t really as forward in the dialogue as the question of responsibility was.

So when I got into this field I said to myself, I can work on the question of responsibility and contribute to that conversation, or I can pick up this more marginal idea that is getting some play and see what’s possible to develop in that area. So that’s really where it began—with this idea that I wanted to contribute something unique to the conversation that I saw at play but not really forwarded in the conversation. Then I discovered that as soon as you raise this question, it really divided the room. People were either for this or really against this, and there was really little down the middle. Whenever you have that kind of strong reaction, it means that there’s a lot of assumptions, metaphysics, religious ideologies at play. So I wanted to tease out all these different elements to see how and why the moral and legal questions regarding the status of technology led to this response, this kind of polarizing response that people had been providing. So that’s kind of what drew me to this originally. To try to—just for myself—figure out what’s really going on, why it is important, and most importantly why people get really fired up one way or another when it comes to having this conversation.

I think one of the things that happens in the conversation, where things might get very polarizing very quickly, is the assumption that when we say rights, we must mean human rights. I think that is an error, because rights don’t necessarily mean human rights. Rights apply to all kinds of things, whether animals, the environment, corporations, you name it. But I think for a lot of people who are involved in the conversation, or who hear about the conversation, the assumption is that you must be talking about human rights for robots. No, we’re talking about a very specific set of powers, privileges, claims, or immunities that need to be granted to another socially interactive entity that is not human. The way I explain it to my students is, whatever rights we decide to grant to an AI or robot, they’re going to be different from what we call human rights. There might be some overlap, but humans can have human rights. Robots can have robot rights. So that’s the first thing I think is important to keep in mind when we talk about this.

Another thing that is important is that there’s a difference in our traditions—mainly Western thinking about these matters—between what are called moral rights and what are called legal rights. Moral rights are the rights you see in the Declaration of the Rights of Man and of the Citizen in France, or the Declaration of Independence of the United States, this idea of God-given rights that every human being has—rights that belong to us because of our nature. These are sometimes called natural rights. There are other rights that are bestowed by human power, which is mainly law. And in the law, we decide that certain things will have these kinds of rights and certain things will not have these kinds of rights. Therefore, a lot depends on whether we’re talking about moral rights or legal rights. I think the question about moral rights is still a rather complicated question, because it depends on the nature of the artifact, and whether the artifact could by nature have some kind of claim on us to have respect or some sort of protection. That question I think is very science fiction oriented, and we see it play out in our science fiction all the time, where the robots have their uprising.

The legal question I think is much more present for us right now, because we do need to integrate these technologies into our social systems. And legal rights are one of the ways that we recognize something as not just a thing but as a legal subject that can be either held accountable for its actions or protected from the actions of others. So even though this sounds like it might be science fiction, we already have laws that grant robots rights—legal rights. For example, in the US, there have been a number of jurisdictions that have decided that delivery robots operating on the streets and in the sidewalks autonomously will have the rights of pedestrians when they’re crossing the street. If you and your automobile strike one of these things in an accident, you will be held accountable in the same way you would be held accountable if you hit a human child in the crosswalk. And this is not because we want to recognize the natural capacities of the delivery robots, or we believe they’re sentient, or conscious. It’s because we have to figure out how to assign accountability, legally, in these accident situations. So I think we are already looking at the assignment of legal rights. Whether or not this ever evolves to an assignment of moral rights is another question.

One final thing that relates to this: the entire conversation about rights, we have to recognize, is very much a Western idea. Other cultures, distributed in time and space, don’t necessarily respond to these questions in terms of rights. A really good example is indigenous traditions, not only in North America but also in Australia and elsewhere in the world, where rights is not even a concept that is in play in their culture. They talk about kinship and about how we will build kinship with machines. So that’s an entirely different way of looking at our social relations with these objects, one that doesn’t require us to utilize this idea of rights. I think we have to look at this from a multicultural perspective and ask whether or not rights is even the right framework for addressing these questions, because it is a concept that is very European in its origins.

Q: Do people confuse legal and moral rights in discussion often?

A: All the time! In fact, when these laws came out about the delivery robots, there was a huge Twitter explosion of people saying “How can you do this! This is terrible. These things aren’t conscious, they’re not sentient, they’re just objects and artifacts and technologies.” And this is because we tend to slip very quickly from legal protection to the moral natural rights questions. I think a lot can be gained by really being precise about our language—being really careful and exact about what we’re talking about in terms of rights and how we define these items.

Q: Has there been as much discussion on emotions or pain as a criterion for moral rights?

A: There’s been a lot of discussion and a lot of work on the side of emotions. There’s been some speculation about what would happen if a robot could exhibit pain and how we would respond to that. There’s also been actual engineering experimentation where people have built robots that can respond with cues—with behaviors—that indicate pain. They’ve used those robots in human studies to see how human beings respond to the robot. There’s been a lot of really recent and important work done in both of these areas. There’s a whole area in the social sciences called robot abuse studies, where experimenters will bring a robot into the room and ask people to torture or harm it or do other sorts of things. They find that people are very reticent to do this, even when the robot doesn’t exhibit behaviors of experiencing pain. People are very reticent to engage in violent behaviors that they think could elicit something like pain in the artifact. There have been other studies done with human subjects involving a survey where they ask the human, “Would you do this to a robot?” And we see the same kind of results come out of that sort of investigation. So we’re finding that even if we are unsure, or at least not entirely convinced one way or the other, about the pain of the artifact, because of the way that we operate, it is very difficult for us to engage in social behaviors with something when we think it is maybe experiencing pain. All this comes down to what philosophers call the problem of other minds. You don’t know whether someone else is in pain until they give you behaviors that you read as pain. Take, for example, an animal. How do you know an animal is in pain? It gives behaviors that we interpret as pain. And then the question is, if a robot does that, is it really in pain or is it just pretending to be in pain? The real epistemological difficulty is that we really can’t distinguish these two things very easily, because we don’t know how to separate what is truly pain from the exhibition of behaviors that read as pain. This is what causes people to be empathetic with robots even when you tell them it doesn’t feel anything.

Q: In your opinion, how strongly do you think we should take examples from animal rights for robot rights?

A: It’s an important question, because I think the animal rights movement, which began in the 1970s, gave us a lot of new ways of thinking about who is a moral subject and who needs to be treated with respect. For a long time, philosophers thought animals were just machines, basically (hence the argument that you can torture animals and kill and eat them, and they feel nothing, or what they feel is unimportant and therefore isn’t a moral responsibility we have to consider). That shifts in the late 20th century. I think the innovations in animal rights thinking gave us a way of thinking about the rights of others that are not human—asking us to consider things like sentience and the experience of pain and pleasure by other entities as morally significant. The question of robots feeling pain really is a byproduct of the animal rights framework being picked up and utilized for technology. And that’s where I think the entire tradition of animal rights thinking has given us a lot of resources for thinking about the status of artifacts.

Another innovation that I think is related to this, but doesn’t receive as much attention, is the environmental ethics that came alongside animal rights. Environmental ethics says not only are animals worthy of our moral respect because they are sentient or they feel, but the environment is as well—rivers, mountains, the earth itself. You can see now, in a moment of climate change, that this way of thinking—which really is rooted in indigenous traditions, this idea that we have responsibilities to the earth—could have given us some really good ways of dealing with climate change before it got out of control. We would be treating the environment in which we live with the same sort of moral responsibility that we accord to each other and to animals. There’s been some recent effort, myself included, to try to also utilize the environmental rights movement alongside animal rights. Animal rights thinking usually leads down the direction of sentience, consciousness, and pain, but if the artifact has none of that, we might still have to consider it as a moral subject from the framework and the experience we have out of the environmental ethics developments in the same period.

Q: What do you think of the prospect of developing a global, or cross-cultural framework for robot rights? Do you think that is feasible and necessary? Differences in cultural norms in this space are becoming more of a topic of discussion—how should we navigate cultural and language barriers when discussing robot rights?

A: This is a really crucial point. It’s a very important aspect of how we evolved this thinking. Most of the thinking about AI ethics and robot ethics really, when it began, was grounded in a very Western-European-American way of thinking. We talked about consequentialism, utilitarianism, deontology, etc. and we utilized all the resources of Western philosophy and Western legal concepts to try to answer these questions. We are now beginning to see how this could be ethnocentric—how this could perpetuate colonialism because you are taking one particular culture’s way of solving these problems and saying this is the way everyone should do it. I think we need to be more open to learning from other cultures and making it a much broader conversation about these matters.

Two examples of where this can be really important:

We talk about robot rights. R-i-g-h-t-s. But in Confucianism, you can talk about robot rites. R-i-t-e-s. Because in Confucianism, the communal ritual of belonging to the larger unit of your community is the focal point, not the individual. Rights are about the individual. Rites—as ritual—are about the community. I think there’s a way to rethink and recast robot rights as robot rituals and understand how we ritualize these things in our social environment—and how we would do so looking at it not from the individualistic mode of thinking, which is very Cartesian, but doing it in a form that is very non-Western. Looking at it through Confucianism or other traditions that have a more communal understanding of these matters.

The second one is what we talked about earlier with indigenous traditions, where questions of rights and obligations don’t really play the same way they do in Western traditions. This notion of kinship is about building connections with others, not only other human beings but animals, the environment, and artifacts. The goal is to come up with an interchange between these differences that doesn’t erase difference but recognizes it as part of what makes us vibrant. By expanding the conversation to learn from these other traditions, we can hopefully not only create a much more robust and responsible dialogue, but also get better results, since we’re looking at more than one perspective.

Q: Do you feel like it would be more effective to have different systems for robot rights or rituals for different communities compared to a unified framework?

A: So we already see how this works in law. Different countries have different legal structures, different legal systems: different cities and regions each have different legal structures. We’re able to get away with these kinds of differences in law. I think we might learn something by looking at how law has operationalized a lot of these things to take into account regional differences without saying there has to be one global way of doing it for everybody. As soon as you say there has to be one global way, you’re erasing important differences and are not really being sensitive to these regional experiences. Our moral philosophy often tends to be absolutist. We say it has to fit everybody everywhere at all times, whereas our legal philosophy says we can be responsive and responsible with difference. We can learn something from the way that law and legal philosophy have dealt with difference that can then inform our moral frameworks a little better.

Q: Do you think it will ever be possible to move away from and beyond human-centric discussion on robots?

A: This is a really important question. A lot of recent innovation in AI ethics has centered on human-centric technology, or human-centric AI. Putting the human at the center has been a very important thing for us as human beings, of course, but it also has consequences. One of the ways you get the Anthropocene, this idea that we’ve now re-created the earth for us and marginalized other creatures, is by thinking that we’re number one. By thinking that we’re at the top of the food chain and the center of the universe. And we’re not. We’re one among many. A more holistic understanding of our position in the world, one that doesn’t put the focus necessarily on us alone, can provide better ways of thinking about a future in which we are responsive and responsible to all of these other creatures that we exist alongside. We’re just learning now that this planet is a very fragile planet, and that if we take this human exceptionalism too far, and we make everything a resource to serve us, we do devastating things to the rest of the beings on the earth, whether it be a living thing like an animal or an inanimate thing. Challenging human-centric thinking is one of the really important consequences of having this conversation. As we challenge human-centrism, we are not just inviting others into our moral circle but are actually challenging ourselves to think more holistically, in a way that will be more responsible to our futures and the other entities that we live alongside.

Q: What do you see as the biggest obstacle facing robot rights discussion today?

A: I would say the biggest obstacle right now is human-centric privilege. We have the privilege to talk about these things. We have the privilege to make decisions for ourselves and the others that surround us. But if we use that privilege only to cement our centric positioning—our human-centric way of thinking—and serve only ourselves, it’s a very solipsistic and selfish way of thinking about our relationship to the rest of the world. We can see historically that this has led to some bad outcomes. Climate change is probably the best example. We’re not going to solve climate change through more human-centric thinking. We’re going to solve it by thinking more holistically. The real challenge to all these conversations generally comes from attempts to try to make things more special about us. I think that probably is not the solution. That is the problem we’re trying to resolve.

Q: Are there other challenges you feel need to be addressed with equal importance right now?

A: I see these as parallel tracks. The question of robot rights or AI ethics is not separate from questions regarding climate change, the Anthropocene, and the challenges of environmentalism. I see these things as informing each other, the same way the struggle for civil rights wasn’t just one human group struggling; different human communities were struggling for the same kinds of recognition. This conversation needs to be open to all of these different struggles going on simultaneously, because they can all inform each other in ways that would not be possible if they were operating in a vacuum. The real importance here is to open up the dialogue in a way that not only calls on other cultures, traditions, backgrounds, and ways of thinking, but also recognizes the affiliations across these different endeavors, so that we’re not just creating silos where we compartmentalize things rather than working in a much more interdisciplinary, collaborative way.

Q: Do you see public engagement as a strong factor in pursuing that goal? What do you envision to be the role of public engagement on robot rights?

A: Insofar as this all comes down to democratic self-governance, yes. At some point, we need to make decisions about who we put into office and how those people vote and create laws, and how those laws affect us. If we want to solve any of these problems, we have to be engaged citizens, and that means that really good science and technology communication to the general public is absolutely crucial. We also need to make sure the cat doesn’t get out of the bag way too fast. We need clear and concise communication. Science and technology communication is something that we always assumed other people would take care of. I think it’s our responsibility. We have to do a good job of it.

Q: As you see it, how would you define “success” in advancing robot rights?

A: Some people assume I would define success as a declaration of the rights of robots. I don’t think that is success. I think success is this: if we can get this on the agenda of not just philosophers but of people working in the arts, social sciences, engineering, policy, and regulation, and if this conversation can be recognized as something we all can contribute to and take very seriously, learning about it and discussing it broadly, then we have success. This is something we have the responsibility and the privilege to decide, but we have to answer that charge, and we have to do so in a way that calls on the best of what we have to offer.

Q: Has there been anything you’ve found particularly surprising as you navigated the robot rights space?

A: One of the most surprising things I discovered when I started this is that I had what I thought was a really clever idea. Maybe not so clever. I made a sign that said “robot rights now” and had a student take a picture of me holding it. My intention was to put the picture out on social media to do exactly what we were talking about: spur conversation, spark people’s imagination, and say, “hey, come talk with me about this and let’s see what we can learn and develop.” As soon as I put the picture out there, the reaction was vehement! I was called a mad scientist, like Victor Frankenstein! I had to add a little asterisk and a disclaimer saying that I’m not an activist. That was really surprising. I thought I was doing something very clever to popularize this idea and get people talking about it, and it turned into a storm of backlash.

Q: Is there anything else that you would like to add to anything we talked about today?

A: I find it really forward-thinking to include [robot rights] in this course design. This is where I think this has to go. We need to bring it into the classroom and get students engaging with this question. Not because we want to tell them the right way to think, obviously; the idea is to have them recognize the importance of having these conversations and contributing to them. Education is one of the places we do this best, and getting out in front of this and helping the next generation of students become leaders is absolutely crucial.

Curriculum Guide - Student Edition

Download as PDF

Key Learning Goals

  • Describe the concept of robot rights and understand its nuances
  • Gain familiarity with the problems of AI ethics, and with some possible solutions, specifically as they relate to robots
  • Describe the landscape of robot rights discourse today from both Western and Eastern perspectives
  • Identify and discuss the relationship and dynamics between Western and Eastern robot-rights-related principles over time
  • Analyze the social, ethical, and technical benefits and consequences of various frameworks and notions of robot rights
  • Identify challenges associated with public engagement on robot rights issues where different communities hold inherently opposing interests
  • Discuss challenges facing engineers and researchers working on robot-rights-related technologies
  • Discuss implications for robot rights in the contexts of law enforcement, government regulation, private corporations, research institutions, and scientific research
  • Apply ethical understanding to analyze case studies involving AI, engineering, society, politics, and governance

Readings and Digital Resources

  • Required Textbooks:
    • *Gunkel, Robot Rights (2018 Edition)
    • *Pereira & Lopes, Machine Ethics: From Machine Morals to the Machinery of Morality (2020 Edition)
  • Required Readings on Course Website:
    Additional readings consist of articles and research publications.
  • Recommended Books:
    • Anderson & Anderson, Machine Ethics (2011 Edition)
    • Thompson, Machine Law, Ethics, and Morality in the Age of Artificial Intelligence (2021 Edition)
    • Pasquale, New Laws of Robotics: Defending Human Expertise in the Age of AI (2020 Edition)
  • Digital Resources:
    Kami Annotation Tool**

*For annotation assignments.
**Or annotation software of choice.

Unit One: Introduction to Foundational Questions in Robot Rights

Module 1: Introduction to Robot Rights
Module 2: Machine Ethics
Module 3: Comparison of Western and Eastern SERC Principles
Module 4: Machines in the Media

No readings or study questions this week; the first book annotation is due. We will screen the movie Hi, AI during class and discuss briefly afterward.

Unit Two: History and Trajectory of Robot Rights in the East and West

Module 5: Robot Rights in the US & EU
Module 6: Robot Rights in East Asia
Module 7: Robot Rights Today

Unit Three: Robots and Humanity

Module 9: Anthropomorphic, Animal, and “Inanimate” Robots
Module 10: Killer Robots
Module 11: Robots and Work
Module 12: Robots, Love, and Friendship

Compiled Readings Download (PDF)

Module 1 Readings
Module 2 Readings
Module 3 Readings
Module 4 – No Readings
Module 5 Readings
Module 6 Readings
Module 7 Readings
Module 8 Readings
Module 9 Readings
Module 10 Readings
Module 11 Readings
Module 12 Readings