There is an interesting post up at The Splintered Mind: "Two Arguments for AI (or Robot) Rights: The No-Relevant-Difference Argument and the Simulation Argument." It is worth a read.
Wednesday, I argued that artificial intelligences created by us might deserve more moral consideration from us than do arbitrarily-chosen human strangers (assuming that the AIs are conscious and have human-like general intelligence and emotional range), since we will be partly responsible for their existence and character.
In that post, I assumed that such artificial intelligences would deserve at least some moral consideration (maybe more, maybe less, but at least some). Eric Steinhart has pressed me to defend that assumption. Why think that such AIs would have any rights?
First, two clarifications:
(1.) I speak of “rights”, but the language can be weakened to accommodate views on which beings can deserve moral consideration without having rights.
(2.) "AI rights" is probably a better phrase than "robot rights", since similar issues arise for non-robotic AIs, including oracles (which can speak but have no robotic bodily features like arms) and sims (which have simulated bodies that interact with artificial, simulated environments).
Now, two arguments.