Here, from Paul Bloom and Sam Harris, a psychologist and a neuroscientist:
The biggest concern is that we might one day create conscious machines: sentient beings with beliefs, desires and, most morally pressing, the capacity to suffer. Nothing seems to be stopping us from doing this. Philosophers and scientists remain uncertain about how consciousness emerges from the material world, but few doubt that it does. This suggests that the creation of conscious machines is possible.
“Nothing seems to be stopping us from doing this?” Nothing has made it possible to do this, either. Yes, nothing is stopping us, except the fact that no robot will ever have feelings or desires or beliefs.
The reason is really quite simple. A robot is an automated material product created by humans using electronic and mechanical devices. Everything a robot does is the result of a sequence of software commands and electronic instructions. A robot sticks its tongue out at you because a programmer instructed it to. It will never, ever decide, out of the blue, to stick its tongue out at you, to the shock, surprise, and consternation of its builders. Never. If it puts out its tongue, the programmers will giggle: we did it.
I have yet to read a cogent explanation of how this chain of actions by human designers and builders is somehow interrupted by an independent entity that causes the robot to behave differently from its instructions. Without that step, it never is, and never can be, conscious or sentient, and it will never have a feeling. Everything it has is the product of design. There is no way to interrupt this process.
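The point can be sketched in a few lines of code. This is a hypothetical toy example (the names and rules are mine, not from any real robot), but it shows the structure of the argument: every behavior the machine exhibits traces back to a rule a human wrote, and a stimulus with no rule produces nothing at all.

```python
# A toy "robot" whose entire behavioral repertoire is a table written by a
# programmer. Nothing here can originate an action outside that table.
RULES = {
    "greet": "wave",
    "provoke": "stick_out_tongue",  # it does this only because we wrote this line
}

def robot_respond(stimulus):
    """Look up the programmed response; there is no other source of behavior."""
    return RULES.get(stimulus, "do_nothing")

print(robot_respond("provoke"))   # exactly what was coded
print(robot_respond("surprise"))  # no rule, so no behavior
```

However elaborate a real robot's software becomes, the picture is the same: a larger table, or a function that computes its outputs from coded rules and training procedures that humans designed.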
I will note here that some argue we will one day be able to connect biological organisms to mechanical or electronic devices. Then we will have a biological organism that can direct the activities of mechanical or electronic devices. That is all. That is not a “sentient” machine. The biological organism, presumably, will be sentient; the machine parts will not. There would, in fact, be nothing remarkable about it beyond the extension of human capability with technology. We already have that.
But the height of absurdity is reached here, where the author mistakes actors imitating robots imitating human gestures for the real thing (in reference to Westworld):
It’s quite another to witness the torments of such creatures, as portrayed by actors such as Evan Rachel Wood and Thandie Newton. You may still raise the question intellectually, but in your heart and your gut, you already know the answer.
The answer that you are supposed to know (the author is trying very hard to beat you over the head with it) is: of course they have feelings, look at Thandie’s face! She is obviously in distress. So now you know: yes, robots have feelings.
I was wrong: that previous quote was not the height of absurdity. Here it is. (Remember, there is no reason why such “sentient” machines need to look like us. They could look like your smartphone.)
After all, if we do manage to create machines as smart as or smarter than we are — and, more important, machines that can feel — it’s hardly clear that it would be ethical for us to use them to do our bidding, even if they were programmed to enjoy such drudgery.
So… seriously, it might be unethical, right now, for you to make your smartphone do all that work for you. You are a bully. A slave owner. And we now need a name for a person who believes that machines are not human and do not have real feelings: techist? Machinist? Robot-owner? Nothing sounds catchy.
The comments from readers, at first, stunned me as well. Reader after reader agreed that it was likely we would soon have sentient robots, they would have feelings, and they should not be made to “suffer”.
I have never been able to imagine the fantastic leap these writers fervently believe in, in which a series of switches and devices suddenly makes a free, willful decision (to do anything) that did not originate in a line of code written by a human programmer.