In recent years, great strides in artificial intelligence (AI) and social robotics have led scholars to question whether and how society should have any responsibility toward intelligent systems. Some have argued that these entities are automatically excluded from moral consideration by virtue of their ontological difference from humans. Others believe artificially intelligent systems deserve some moral status conditioned on their developing consciousness or sentience. Still others argue that this debate should be set aside altogether in favor of more pressing moral issues. Regardless of stance, scholars, engineers, and AI professionals generally agree that robots are taking on increasing importance in social contexts. This raises questions about our mutual interactions, whether robots should be treated as entities beyond mere tools, and whether they should have rights. Such considerations have prompted an influx of academic attention to the moral, social, and legal status of robots.
Even so, there is a distinct need for broader and better-informed discourse on robot rights and, more broadly, AI ethics. Consider, for instance, the question: even if we grant robots moral standing, how should societies situate them within an existing hierarchy of values? In many societies, the law is the key determiner of that hierarchy, and accordingly the law is taken into consideration when making decisions. In other societies, tradition underpins the framework of values and must be taken into account. While many current legal systems prioritize human beings, recent studies indicate that people hesitate to sacrifice robots in order to save humans, a hesitancy strongly related to the increasing anthropomorphization of robots. As robots continue to evolve, how should we navigate the complex landscape of human and robot rights from moral, legal, and social perspectives?
Furthermore, with increasing AI competition between East and West, a growing divide in the progress of intelligent systems may open the door to disastrous consequences, exacerbate inequity, and shift global power dynamics. How can we strike a balance between divergent ideals in the development of robot rights? How can we achieve a shared framework of robot rights applicable in cross-cultural settings?
Given these developments, the debate over robot rights is becoming increasingly important. What are robot rights? How should we navigate the complex landscape of human and robot rights from moral, legal, and social perspectives? As AI competition increases between East and West, can we bridge the growing divide by achieving a shared framework of robot rights applicable in cross-cultural settings? This course will provide a foundational framework for students and researchers to engage in critical thinking, understanding, and discussion of decisions regarding robot rights. It aims to cultivate an understanding of the robot rights debate informed by Western and Eastern principles as a step toward uniting our global community.
Through the study of robot rights informed by both Western and Eastern principles, this course serves as a first step toward addressing existing gaps in knowledge and discourse.
Upon completion of the course, students should be able to accomplish the following:
The material for this course is divided into three units. In each module, students will be given a set of readings pertaining to that module's topic of study. They will then respond to 3-5 Study Questions based on the readings and submit their responses prior to class. During class, students will be called upon to participate in graded class discussions of the material assigned for the module. Throughout the course, students will also complete and submit two book annotations. If desired, a final paper may be included as a synthesis of the course material on a relevant topic of the student's choice. Specifications for the book annotations and the final paper may be found at the end of this document.
Readings for each module will draw upon a variety of sources, including books, news articles, videos, and more.
*For annotation assignments.
**Or annotation software of choice.
Below is the recommended grading breakdown for the course. The second set of percentages reflects the recommended grading scheme should a final paper be included in the course.
Articles & Publications (English translations in progress)
Media
This semester we will have two annotation assignments. For the first annotation, you will read Pereira and Lopes's book Machine Ethics: From Machine Morals to the Machinery of Morality, and for the second you will read David Gunkel's recent book Robot Rights.
I am looking for a total of 20-25 meaningful notes spaced throughout the six chapters of the reading. The introduction is important background (read it!), but I would suggest not marking it up since it's really just an introduction to the book (unless you would like to). You could do four notes per chapter, or ten in one chapter and the rest spread across the others. You are welcome to add more than 25 if you'd like. If you do, I will make a note in the gradebook that you went above and beyond for the assignment and keep that in mind down the road, but I am not expecting more than 25, so don't feel pressured. I'd much rather you enjoy the text and comment when you see fit.
Scoring
This assignment will be marked out of 10 points. Please note the expectations below:
2 Points: The correct number of notes on the text (20-25+)
3 Points: The quality of the notes, with each note being 2-3 sentences containing your thoughts, reflections, or questions. A note should not simply be a question mark or four words. You do not have to do intensive research for these notes; I just want to see how you think about robot rights and machine ethics. All you have to do is tell me what you think about what is written.
3 Points: The inclusion of at least 5 "cited notes" among the 20-25 total. For 5 of your notes, cite a book (our textbooks or a book from Google Books is fine) that answers a question or explains a term or concept you don't know. Try to pick a range of topics and follow your own instincts about details you find really interesting and striking. There are so many great possibilities to choose from in this book, and I am looking forward to learning from all of you through reading your reflections. Use this opportunity to challenge yourself.
2 Points: Locate a recent news article (English, Chinese, or any other language is fine!) that connects to the themes or subject of the book. Paste the link and add a brief discussion about how the book clarifies an issue at stake in the article.
How to annotate and submit the assignment
Step 1: Upload the document to Kami (free) and proceed to annotate it.
Step 2: After you finish annotating, click the Share button in the top right corner (it looks like three dots connected by two lines).
Step 3: This generates a link you can share.
Step 4: Submit this link.
The guidelines for this class’s final paper are broad, since you all come from different backgrounds and majors and have diverse interests pertaining to robot rights and AI ethics. The content of your final paper should be sufficient to fill 8-10 pages when double-spaced.
Here are ways you could think about the paper:
You don't necessarily have to formulate a formal academic argument. An argument is welcome, but a good summary of a problem or question with your own thoughts works too. Feel free to email me to ask about potential essay topics.
Grading Rubric
The final paper is worth 30 points:
Q: What drew you to robot rights and why should we care about it?
A: It’s a good question, because robot rights is one of these things that has been on the periphery of AI ethics and robot ethics for a while. It gets mentioned from time to time, but the majority of the early effort to grapple with the moral quandaries of this technology was more on the side of responsibility. If something goes wrong, who do we hold accountable for the bad outcome? And the question of the status or standing of the individual technologies, whether it be a robot or an AI, was something that was a bit in play, but it wasn’t really as forward in the dialogue as the question of responsibility was.
So when I got into this field I said to myself, I can work on the question of responsibility and contribute to that conversation, or I can pick up this more marginal idea that is getting some play and see what’s possible to develop in that area. So that’s really where it began—with this idea that I wanted to contribute something unique to the conversation that I saw at play but not really forwarded in the conversation. Then I discovered that as soon as you raise this question, it really divided the room. People were either for this or really against this, and there was really little down the middle. Whenever you have that kind of strong reaction, it means that there’s a lot of assumptions, metaphysics, religious ideologies at play. So I wanted to tease out all these different elements to see how and why the moral and legal questions regarding the status of technology led to this response, this kind of polarizing response that people had been providing. So that’s kind of what drew me to this originally. To try to—just for myself—figure out what’s really going on, why it is important, and most importantly why people get really fired up one way or another when it comes to having this conversation.
I think one of the things that makes the conversation get very polarizing very quickly is the assumption that when we say rights, we must mean human rights. I think that is an error, because rights don't necessarily mean human rights. Rights apply to all kinds of things, whether animals, the environment, corporations, you name it. But I think for a lot of people who are involved in the conversation, or who hear about it, the assumption is that you must be talking about human rights for robots. No, we're talking about a very specific set of powers, privileges, claims, or immunities that need to be granted to another socially interactive entity that is not human. The way I explain it to my students is, whatever robot rights we decide to grant to an AI or robot, they're going to be different from what we call human rights. There might be some overlap, but humans can have human rights, and robots can have robot rights. So that's the first thing I think is important to keep in mind when we talk about this.
Another thing that is important is there's a difference in our traditions—mainly Western thinking about these matters—between what are called moral rights and what are called legal rights. Moral rights are the rights you see in the Declaration of the Rights of Man and of the Citizen in France, or the Declaration of Independence of the United States, this idea of God-given rights that every human being has—rights that belong to us because of our nature. These are sometimes called natural rights. There are other rights that are bestowed by human power, which is mainly law. And in the law, we decide that certain things will have these kinds of rights and certain things will not. Therefore, a lot depends on whether we're talking about moral rights or legal rights. I think the question about moral rights is still a rather complicated one, because it depends on the nature of the artifact, and whether the artifact could by nature have some kind of claim on us to respect or protect it. That question I think is very science fiction oriented, and we see it play out in our science fiction all the time, where the robots have their uprising.
The legal question I think is much more present for us right now, because we do need to integrate these technologies into our social systems. And legal rights are one of the ways that we recognize something as not just a thing but as a legal subject that can be either held accountable for its actions or protected from the actions of others. So even though this sounds like it might be science fiction, we already have laws that grant robots rights—legal rights. For example, in the US, a number of jurisdictions have decided that delivery robots operating autonomously on streets and sidewalks will have the rights of pedestrians when they're crossing the street. If you strike one of these robots with your automobile in an accident, you will be held accountable in the same way you would be held accountable if you hit a human child in the crosswalk. And this is not because we want to recognize the natural capacities of the delivery robots, or because we believe they're sentient or conscious. It's because we have to figure out how to assign accountability, legally, in these accident situations. So I think we are already looking at the assignment of legal rights. Whether or not this ever evolves into an assignment of moral rights is another question.
One final thing that relates to this is that the entire conversation about rights, we have to recognize, is very much a Western idea. Other cultures, distributed in time and space, don't necessarily respond to these questions in terms of rights. A really good example is indigenous traditions, not only in North America but also in Australia and elsewhere in the world, where rights is not even a concept in play in their cultures. They talk about kinship and about how we will build kinship with machines. That's an entirely different way of looking at our social relations with these objects, one that doesn't require us to utilize the idea of rights. I think we have to look at this from a multicultural perspective and ask whether or not rights is even the right framework for addressing these questions, because it is a concept that is very European in its origins.
Q: Do people confuse legal and moral rights in discussion often?
A: All the time! In fact, when these laws came out about the delivery robots, there was a huge Twitter explosion of people saying “How can you do this! This is terrible. These things aren’t conscious, they’re not sentient, they’re just objects and artifacts and technologies.” And this is because we tend to slip very quickly from legal protection to the moral natural rights questions. I think a lot can be gained by really being precise about our language—being really careful and exact about what we’re talking about in terms of rights and how we define these items.
Q: Has there been as much discussion on emotions or pain as a criterion for moral rights?
A: There’s been a lot of discussion and a lot of work on the side of emotions. There’s been some speculation about what would happen if a robot could exhibit pain and how we would respond to that. There’s also been actual engineering experimentation where people have built robots that can respond with cues—with behaviors—that indicate pain. They’ve used those robots in human studies to see how human beings respond to the robot. There’s been a lot of really recent and important work done in both of these areas. There’s a whole area in the social sciences called robot abuse studies, where experimenters will bring a robot into the room and ask people to torture or harm it or do other sorts of things. They find that people are very reluctant to do this, even when the robot doesn’t exhibit behaviors of experiencing pain. People are very reluctant to engage in violent behaviors that they think could elicit something like pain in the artifact. There have been other studies done with human subjects involving surveys that ask, “Would you do this to a robot?” And we see the same kind of results come out of that sort of investigation. So we’re finding that even if we are unsure, or at least not entirely convinced one way or the other, about the pain of the artifact, because of the way that we operate it is very difficult for us to engage in those behaviors toward something when we think it might be experiencing pain. All this comes down to what philosophers call the problem of other minds. You don’t know whether someone else is in pain until they give you behaviors that you read as pain. Take, for example, an animal. How do you know an animal is in pain? It gives behaviors that we interpret as pain. And then the question is, if a robot does that, is it really in pain or is it just pretending to be in pain? The real epistemological difficulty is that we really can’t distinguish these two things very easily, because we don’t know how to separate what is truly pain from the exhibition of pain behaviors. This is what causes people to be empathetic with robots even when you tell them the robot doesn’t feel anything.
Q: In your opinion, how strongly should we draw on examples from animal rights when thinking about robot rights?
A: It’s an important question, because I think the animal rights movement, which began in the 1970s, gave us a lot of new ways of thinking about who is a moral subject and who needs to be treated with respect. For a long time, philosophers thought animals were basically just machines (the argument being that you can torture animals and kill and eat them, and they feel nothing, or what they feel is unimportant and therefore isn’t a moral responsibility that we have to consider). That shifts in the late 20th century. I think the innovations in animal rights thinking gave us a way of thinking about the rights of others that are not human—asking us to consider things like sentience and the pain and pleasure experienced by other entities as morally significant. The question of robots feeling pain really is a byproduct of the animal rights framework being picked up and utilized for technology. And that’s where I think the entire tradition of animal rights thinking has given us a lot of resources for thinking about the status of artifacts.
Another innovation that I think is related to this but doesn’t receive as much attention is the environmental ethics that came alongside animal rights. Environmental ethics says not only are animals worthy of our moral respect because they are sentient or they feel, but the environment is as well—rivers, mountains, the earth itself. You can see now, in a moment of climate change, that this way of thinking, which really is rooted in indigenous traditions—this idea that we have responsibilities to the earth—could have given us some really good ways of dealing with climate change before it got out of control. We would be treating the environment in which we live with the same sort of moral responsibility that we accord to each other and to animals. There’s been some recent effort, myself included, to utilize the environmental rights movement alongside animal rights. Animal rights thinking usually leads in the direction of sentience, consciousness, and pain, but if the artifact has none of that, we might still have to consider it as a moral subject from the framework and the experience that came out of the environmental ethics developments of the same period.
Q: What do you think of the prospect of developing a global, or cross-cultural framework for robot rights? Do you think that is feasible and necessary? Differences in cultural norms in this space are becoming more of a topic of discussion—how should we navigate cultural and language barriers when discussing robot rights?
A: This is a really crucial point, and a very important aspect of how this thinking has evolved. Most of the thinking about AI ethics and robot ethics, when it began, was grounded in a very Western, European-American way of thinking. We talked about consequentialism, utilitarianism, deontology, etc., and we utilized all the resources of Western philosophy and Western legal concepts to try to answer these questions. We are now beginning to see how this could be ethnocentric—how this could perpetuate colonialism, because you are taking one particular culture’s way of solving these problems and saying this is the way everyone should do it. I think we need to be more open to learning from other cultures and making this a much broader conversation about these matters.
Two examples of where this can be really important:
We talk about robot rights. R-i-g-h-t-s. But in Confucianism, you can talk about robot rites. R-i-t-e-s. Because in Confucianism, the communal ritual of belonging to the larger unit of your community is the focal point, not the individual. Rights are about the individual. Rites—as ritual—are about the community. I think there’s a way to rethink and recast robot rights as robot rituals and understand how we ritualize these things in our social environment—and how we would do so looking at it not from the individualistic mode of thinking, which is very Cartesian, but doing it in a form that is very non-Western. Looking at it through Confucianism or other traditions that have a more communal understanding of these matters.
The second one is what we talked about earlier with indigenous traditions, where questions of rights and obligations don’t really play out the same way they do in Western traditions. This notion of kinship is about building connections with others, not only other human beings but with animals, the environment, and artifacts. The goal is to come up with an interchange between these traditions that doesn’t erase difference but recognizes difference as part of what makes us vibrant. By expanding the conversation to learn from these other traditions, we can hopefully not only create a much more robust and responsible dialogue, but also get better results, since we’re not looking at only one perspective.
Q: Do you feel like it would be more effective to have different systems for robot rights or rituals for different communities compared to a unified framework?
A: So we already see how this works in law. Different countries have different legal systems, and different cities and regions have different legal structures. We’re able to get away with these kinds of differences in law. I think we might learn something by looking at how law has operationalized a lot of these things to take into account regional differences without saying there has to be one global way of doing it for everybody. As soon as you say there has to be one global way, you’re erasing important differences and not really being sensitive to these regional experiences. Our moral philosophy often tends to be absolutist: we say it has to fit everybody everywhere at all times, whereas our legal philosophy says we can be responsive and responsible with difference. We can learn something from the way that law and legal philosophy have dealt with difference that can then inform our moral frameworks a little better.
Q: Do you think it will ever be possible to move away from and beyond human-centric discussion on robots?
A: This is a really important question. A lot of recent innovation in AI ethics has centered on human-centric technology, or human-centric AI. Putting the human at the center has been a very important thing for us as human beings, of course, but it also has consequences. One of the ways you get the Anthropocene, this idea that we’ve now re-created the earth for us and marginalized other creatures, is by thinking that we’re number one. By thinking that we’re at the top of the food chain and the center of the universe. And we’re not. We’re one among many. A more holistic understanding of our position in the world, one that doesn’t put the focus necessarily on us alone, can provide better ways of thinking about a future in which we are responsive and responsible to all of these other creatures that we exist alongside. We’re just learning now that this planet is a very fragile planet, and that if we take this human exceptionalism too far and make everything a resource to serve us, we do devastating things to the rest of the beings on the earth, whether it be a living thing like an animal or an inanimate thing. Challenging human-centric thinking is one of the really important consequences of having this conversation. As we challenge human-centrism, we are not just inviting others into our moral circle; we are actually challenging ourselves to think more holistically and in a way that will be more responsible to our futures and the other entities that we live alongside.
Q: What do you see as the biggest obstacle facing robot rights discussion today?
A: I would say the biggest obstacle right now is human-centric privilege. We have the privilege to talk about these things. We have the privilege to make decisions for ourselves and the others that surround us. But if we use that privilege only to cement our centric positioning—our human-centric way of thinking—and serve only ourselves, it’s a very solipsistic and selfish way of thinking about our relationship to the rest of the world. We can see historically that this has led to some bad outcomes. Climate change is probably the best example. We’re not going to solve climate change through more human-centric thinking. We’re going to solve it by thinking more holistically. The real challenge to all these conversations generally comes from attempts to make things more special about us. I think that probably is not the solution. That is the problem we’re trying to resolve.
Q: Are there any other challenges that you feel need to be addressed with equal importance right now?
A: So I see these things as parallel tracks. The question of robot rights or AI ethics is not something separate from the questions regarding climate change, the Anthropocene, and the challenges of environmentalism. I see these things as informing each other, the same way the struggle for civil rights wasn’t just one human group struggling; different human communities were struggling for the same kinds of recognition. I think this conversation needs to be open to all these different struggles going on simultaneously, because they can all inform each other in ways that would not be possible if they were operating in a vacuum. The real importance here is to open up the dialogue in a way that not only calls on other cultures, other traditions, other backgrounds, and other ways of thinking, but also recognizes the affiliations across these different endeavors, so that we’re not just creating silos where we compartmentalize things and fail to think in a much more interdisciplinary, collaborative way.
Q: Do you see public engagement as a strong factor in pursuing that goal? What do you envision to be the role of public engagement on robot rights?
A: Insofar as this all comes down to democratic self-governance, yes. At some point, we need to make decisions about who we put into office and how those people vote and create laws, and how those laws affect us. If we want to solve any of these problems, we have to be engaged citizens, and that means that really good science and technology communication to the general public is absolutely crucial. We also need to make sure the cat doesn’t get out of the bag way too fast. We need clear and concise communication. Science and technology communication is something that we always assumed other people would take care of. I think it’s our responsibility. We have to do a good job of it.
Q: As you see it, how would you define “success” in terms of robot rights advancing?
A: I think some people think that I would define success as a declaration of the rights of robots. I don’t think that is success. I think success is defined as follows. If we can get this on the agenda of not just philosophers but of people working in the arts, social sciences, engineering, policy, and regulation—if this conversation can be recognized as something that we all can contribute to and that we all take very seriously, learning about it and discussing it broadly, then I think we have success. I think this is something that we have the responsibility and the privilege to decide, but we have to answer that charge. We have to do so in a way that calls on the best of what we have to offer.
Q: Has there been anything you’ve found particularly surprising as you navigated the robot rights space?
A: One of the most surprising things I discovered when I started this is that I had this really clever idea—maybe not so clever—but I had this idea. I made a sign that said “robot rights now” and had a student take a picture of me holding the sign. My intention was just to put it out on social media to do exactly what we were talking about—spur conversation, spark people’s imagination, and say “hey, come and talk with me about this and let’s see what we can do and learn and develop.” And as soon as I put this picture out there, the reaction was vehement! I was called a mad scientist, like Victor Frankenstein! I had to add a little asterisk and a disclaimer below that said I’m not an activist. This was really surprising. I thought I was doing something very clever to popularize this idea and get people talking about it, and it turned into this storm of backlash.
Q: Is there anything else that you would like to add to anything we talked about today?
A: I find it really forward-thinking to include [robot rights] in this course design. This is where I think this has to go. We need to bring this into the classroom and get students engaging with this question. Not because we want to tell them the right way to think, obviously. The idea is to have them recognize the importance of having these conversations and contributing to them. And I think education is one of the places we do this best, and getting out in front of this and helping the next generation of students become leaders is absolutely crucial.
No readings or study questions this week – first book annotation due. We will screen the movie Hi, AI during class and discuss briefly afterward.