Of Immanuel Kant and Sexbots

Dollfriend (CC)

This post poses the question: are sexbots more like animals or more like stones? Via Talking Philosophy

An episode of Almost Human, Fox's sci-fi buddy-cop show, that featured sexbots inspired me to revisit the ethics of sexbots. While the advanced, human-like models of the show are still things of fiction, considerable research and development is already devoted to creating sexbots. As such, it seems well worth considering the ethical issues involving sexbots, both real and fictional.

At this time, sexbots are clearly mere objects—while often made to look like humans, they do not have the qualities that would make them even person-like. As such, ethical concerns involving these sexbots would not involve concerns about wrongs done to such objects—presumably they cannot be wronged. One potentially interesting way to approach the matter of sexbots is to make use of Kant’s discussion of ethics and animals.

In his ethical theory, Kant makes it quite clear that animals are means rather than ends. They are mere objects. Rational beings, in contrast, are ends. For Kant, this distinction rests on the fact that rational beings can (as he sees it) choose to follow the moral law. Animals, lacking reason, cannot do this. Since animals are means and not ends, Kant claims that we have no direct duties to animals. They are classified with the other “objects of our inclinations” that derive value from the value we give them. Sexbots would, obviously, qualify as paradigm “objects of our inclinations.”

Interestingly enough, Kant argues that we should treat animals well. However, he does so while avoiding ascribing any moral status to the animals themselves. Here is how he does it (or tries to do it).

While Kant is not willing to accept that we have any direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing X would obligate us to that human, then an animal doing X would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in his old age.

While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational), so, as Kant sees it, the dog cannot be wronged. Why, then, would it be wrong to shoot the dog?

Kant’s answer seems to be rather consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will likely be damaged. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.

Interestingly enough, Kant discusses how people develop cruelty—they often begin with animals and then work up to harming human beings. As I point out to my students, Kant seems to have anticipated the psychological devolution of serial killers.

Kant goes beyond merely enjoining us not to be cruel to animals and encourages us to be kind to them. He even praises Leibniz for being rather gentle with a worm he found. Of course, he encourages this because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are essentially practice for us: how we treat them is training for how we will treat human beings.

In the case of the current sexbots, they obviously lack any meaningful moral status of their own. They do not feel or think—they are mere machines that might happen to be made to look like a human. As such, they lack all the qualities that might give them a moral status of their own.



  • jose chung

    “In the case of the current sexbots, they obviously lack any meaningful moral status of their own. They do not feel or think—they are mere machines that might happen to be made to look like a human. As such, they lack all the qualities that might give them a moral status of their own.”

    The future is already the past. I’m sure the military-entertainment complex (Google, Amazon, etc.) has working prototypes of drone/sexbot hybrids. It won’t be long now.

    “I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched c-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.”

    More human than human indeed, considering that so many people these days are little more than robots themselves.