Saturday, September 15, 2007

The Ethics of AI

My personal system of ethics as it applies to humans is quite simple and straightforward. Happiness is good. An action that produces happiness is good, and an action that produces unhappiness is bad.

But how does this apply to The Sims 15? The characters in the game can pass the Turing test and no one could say that they're incapable of making intelligent decisions. But does that mean that burning their house down just to watch them panic is evil?

Well, for me at least, the answer to this question depends on whether they're actually feeling unhappy. I know some people who would say, "Of course the computer doesn't actually feel unhappy. It just has an algorithm that determines whether it should react in a way that looks like it's happy or unhappy." The problem I have with that is that Descartes made the same argument against animals, and one can even make that argument against people. I know I can feel happiness and unhappiness, but I have no way of knowing whether you do too, or whether you're just reacting as if you were happy or unhappy.

This line of reasoning leads to the thought that since we can't know what others feel, we should assume that if they show signs of suffering or pleasure, they are really feeling those things. But consider a computer program that consists entirely of a smiley face and a button. When you push the button, the smiley face frowns. Surely that's not actually unhappiness?
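The point is how trivial such a program would be. A sketch of it, assuming nothing more than a single state flag (all names here are illustrative, not from any real program), might look like this:

```python
# A hypothetical, minimal "smiley face" program. Its entire "emotional
# life" is one boolean flipped by a button press.
class Smiley:
    def __init__(self):
        self.frowning = False  # starts out smiling

    def press_button(self):
        # The whole "unhappiness" algorithm: flip one flag.
        self.frowning = True

    def render(self):
        return ":(" if self.frowning else ":)"

face = Smiley()
print(face.render())   # :)
face.press_button()
print(face.render())   # :(
```

It's hard to argue that flipping a single boolean constitutes suffering, which is exactly why the "outward signs" rule can't be the whole story.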

Personally, I think there is a line somewhere, where the complexities of the algorithms determining behaviors are such that the being can be said to be aware. Above that line, causing apparent unhappiness is bad. But below it, it doesn't matter. Where is the line? I have no idea.
