"through inaction, allow a human being to come to harm." So a robot "sees/perceives" a human about to be killed by another. Is the robot allowed to stop the threat? If so, how does a robot know the difference between someone about to kill another and a parent spanking their child?
I think robots should simply do what they're told, just like the coffee maker; otherwise, you get into the whole moral-issue thingamabobber, and no one likes that now, do we?
It's science fiction, so it wasn't meant to be a full-fledged code of conduct.
However, there is a lot of wisdom in those 3 laws. Take your proposal that a robot is merely a machine like any other, and suppose you give it a command to kill your neighbor. Now you didn't kill your neighbor; the robot did. If you also told the robot to wipe its memory, there might not be any record that you were the culprit. That's the reason for building a hard-coded bias into robotic "brains" that embraces something like those 3 laws.
As for spanking, the robot would need various parameters: a spanking (a swat on the bum) would not cause "harm," but a beating certainly would. In the case of one human looking to kill another, the robot would take any action needed to prevent it, short of causing harm to anyone, up to and including sacrificing itself to prevent harm to a human (that's the point of the book/movie).
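One way to read "various parameters" is a severity score with a threshold below which force doesn't count as First-Law "harm." A toy sketch, purely illustrative; the scale, the threshold, and the function names are all invented here, not anything from Asimov or a real robotics system:

```python
# Hypothetical 0-10 force-severity scale; HARM_THRESHOLD is an invented tuning parameter.
HARM_THRESHOLD = 3

def counts_as_harm(severity: int) -> bool:
    """Treat only force above the threshold as First-Law 'harm'."""
    return severity > HARM_THRESHOLD

counts_as_harm(1)   # a swat on the bum scores low  -> False, robot does not intervene
counts_as_harm(9)   # a beating scores high         -> True, robot must intervene
```

The obvious weakness, which is the point of the original question, is that someone has to pick the threshold, and reality doesn't split cleanly into spankings and beatings.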
Isaac Asimov created the 3 Laws of Robotics in the book I, Robot (and they were briefly mentioned in the movie).
The Three Laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
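The three laws form a strict priority ordering: the First Law dominates the Second, and the Second dominates the Third. That structure can be sketched as a sort key over candidate actions. A minimal toy illustration; the `Action` fields and function names are hypothetical, not from the stories or any real system:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action the robot could take (all fields are invented flags)."""
    name: str
    harms_human: bool = False    # would violate the First Law (by act or inaction)
    obeys_order: bool = True     # Second Law: follows the human's command
    preserves_self: bool = True  # Third Law: the robot survives

def law_priority(action: Action):
    """Sort key encoding the hierarchy: lower tuples rank better, and
    Python compares tuples left to right, so harm is weighed first,
    disobedience second, self-destruction last."""
    return (action.harms_human, not action.obeys_order, not action.preserves_self)

def choose(actions):
    """Pick the action the Three Laws rank highest; refuse if every option harms a human."""
    best = min(actions, key=law_priority)
    return None if best.harms_human else best
```

This reproduces the two scenarios from the discussion above: ordered to kill the neighbor, the robot ranks "refuse" (a Second-Law violation) above any First-Law violation; and it will sacrifice itself (a Third-Law violation) rather than stand by while a human comes to harm.

```python
choose([Action("kill neighbor", harms_human=True),
        Action("refuse", obeys_order=False)])            # -> the "refuse" action
choose([Action("shield human", preserves_self=False),
        Action("stand by", harms_human=True)])           # -> the "shield human" action
```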