Asimov’s Three Laws of Robotics Meet Ayn Rand
Here are Asimov’s three laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
(by the way you can thank Jbrenner for this)
The first question that comes to mind is: are we violating any of Asimov’s Laws already? The second question is: should my robot save Osama, Stalin, Mao, Charles Manson, etc., when inaction would cause them to die? In law there is no duty to be a Good Samaritan (although some socialist altruists are trying to change this). In other words, you cannot be prosecuted for your inaction when you could have saved someone. I think the inaction clause would cause all sorts of problems.
I think Rand would say that robots are human tools and, as a result, they should not do anything a human should not morally do; they should follow the orders of their owners as long as those orders are consistent with Natural Rights. What do you think?
How about requiring leaders of governments to follow the laws of robotics?
I believe your projection of Rand's outlook is correct. Robots are tools which cannot employ both Reason and Emotions.
Your statement is a matter of practicality, not philosophy. Being a programmer by profession, I can tell you that I can program a machine to do lots of things that you can't count on a human to do 100% of the time. Barring a hardware error or a programming error, the machine will always do what its coding tells it to do.
The robot is not intended to employ emotions at all, solely reason, and that reason is dictated by its programming.
As for protecting Hitler or Stalin, it's not a conundrum. The robot does not make value judgements. It would save them.
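The "no value judgements" point can be sketched as a toy priority check. This is purely a hypothetical illustration (all the function and field names here are invented for this sketch; in Asimov's stories the Laws are not code at all, but part of the positronic brain's construction):

```python
# Hypothetical sketch: the Three Laws as an ordered veto chain.
# An action is described by its predicted effects; the laws are
# checked in priority order, and the first violated law vetoes it.

def violates_first_law(action):
    # Law 1: no injuring a human, and no allowing harm through inaction.
    return action["harms_human"] or (action["is_inaction"] and action["human_at_risk"])

def violates_second_law(action):
    # Law 2: disobeying a human order (unless obeying would break Law 1).
    return action["disobeys_order"] and not action["obeying_would_harm_human"]

def violates_third_law(action):
    # Law 3: needless self-destruction.
    return action["destroys_self"]

def permitted(action):
    for check in (violates_first_law, violates_second_law, violates_third_law):
        if check(action):
            return False
    return True

# Saving a dictator hanging from a ledge: the robot makes no value
# judgement about who he is; it only sees a human at risk of harm.
do_nothing = {"harms_human": False, "is_inaction": True, "human_at_risk": True,
              "disobeys_order": False, "obeying_would_harm_human": False,
              "destroys_self": False}
save_him = {"harms_human": False, "is_inaction": False, "human_at_risk": True,
            "disobeys_order": False, "obeying_would_harm_human": False,
            "destroys_self": False}

print(permitted(do_nothing))  # False: inaction allows harm, so Law 1 vetoes it
print(permitted(save_him))    # True: the robot must save him
```

Note that under this ordering the question of *who* the human is never even enters the computation, which is exactly the point: the value judgement is absent by design.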
The Zeroth Law can introduce some big problems because it requires predicting the future. Assume that a robot comes upon a man hanging from the edge of a tall building, and the robot knows the man is a serial killer. Given Laws 1 to 3, the robot would save the serial killer. Given Law 0, it has a problem and some serious computing to do. My guess is that it would save the man and restrain him until the legal system could take over.
Asimov's robots ended up sentient. How they could be expected to obey any laws after that is unclear to me since sentience implies free will, doesn't it?
The robot novels have been on my list to read so I guess I'd better get to it!
another thing, is it not? -- j
software program which writes software. after the tedious process of defining business rules in a precise way, we would run the thing and find out that we needed to provide more detail. between that exercise and the specification of manufacturing processes in workstream, the successor to "opt", I got thoroughly seasoned. sophisticated computer programs are hard to write.
it may be awhile before we learn whether self-awareness implies free will. -- j
Another commenter explained that the 3 laws weren't programmed into the positronic brains but were in some way a part of their very construction. I'll find out what Asimov intended when I read the novels.
By this law the robot would not have shot Hitler. But by inaction the robot would have caused the deaths of 50 million humans. Hence the law must be contradictory, and contradictions are not allowed in the Gulch. Neither man nor robot can comply with it.
abended. AI is an attempt to replicate our brains.
As for sentience and free will: humans also have free will, but many adopt a code of ethics that may prohibit them from certain actions (and has) just as assuredly as programming prohibits a computer.
And those ethics can be modified for rational or irrational reasons.
Sentient or volitional?
0. "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
A familiar type of law which justifies robots killing or enslaving any number of humans to protect "humanity." (for the "greater good" of course)
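The "greater good" arithmetic being objected to here can be made explicit. A purely illustrative sketch (this evaluator and its fields are invented for the illustration, not anything from Asimov) shows how a Zeroth-Law calculus permits harming individuals whenever the aggregate ledger comes out positive:

```python
# Illustrative only: a crude Zeroth-Law calculus in which harm to
# individual humans is permitted whenever it "protects humanity" on net.

def zeroth_law_permits(action):
    # Net ledger: humans predicted saved minus humans directly harmed.
    return action["humans_saved"] - action["humans_harmed"] > 0

# Under Laws 1-3 this action is flatly forbidden (it injures humans);
# under Law 0 the very same action is approved for the "greater good",
# on nothing more than the robot's own prediction of the future.
preemptive_strike = {"humans_harmed": 100, "humans_saved": 101}
print(zeroth_law_permits(preemptive_strike))  # True
```

The switch from "may not injure a human being" to "may not harm humanity" converts an absolute prohibition into a utilitarian sum, which is precisely the collectivist move being criticized.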
The insanity of this well demonstrates the collectivist mindset.
Asimov was a careful thinker, but he was, of course, writing fiction. Moreover, science fiction allows a degree of fantasy that is not present in Rand's works, except for Anthem.
I'd always thought of Anthem as science fiction, but Rand herself called it a poem.
As I see Law 1: a robot would not perform, or allow, the mercy killing of a human or voluntary euthanasia, whatever the amount of pain and suffering being experienced.
Was it Gödel? It is not possible to teach a robot the meaning of words. Words as defined in, say, a dictionary must be self-referential. Meaning must come from a breakout; for humans this is the totality of perception, which is not the same as what a machine can detect. We are a very long way from robots that can have, or pretend to have, human perceptions.
Going back to Asimov's laws, I have seen discussion as to whether they can be made foolproof. I think the answer is no.
The practical answer is to limit the power of robots and machines so they can do damage, but not too much. The same goes for persons, and groups of persons: do not give anyone too much power.
Asimov actually wrote a story in which a robot lied to its owner to spare her the mental anguish which the truth would have caused.
"As I see Law 1: a robot would not perform, or allow, the mercy killing of a human or voluntary euthanasia, whatever the amount of pain and suffering being experienced."
As I see it (having read all of Asimov's robot stories), a robot confronted with a human's incurable suffering would be faced with an impossible dilemma, and its positronic brain would burn out.
But, yes, I would agree with Rand on robots being human tools, and therefore, the human owner would be responsible for the actions of his "property".
I had thought, though, that Asimov wrote the three laws so that people would not have anything to fear from robots--they can be programmed so as not to harm humans. I think that may have been a very real fear at the time.
I was thinking of R.U.R. I don't remember if the robots harmed humans or not. Will look it up.
The Laws are not programmed into the positronic brain in words or in any language that a human could understand. The Laws are impressed into the fabric of the positronic brain, where they manifest as a set of potentials that guide the core decision process and all subsequent behavior.
The robot is never faced with a singular 'decision'; the Three Laws are so deep in the vital algorithms that the robot cannot help but act in compliance with them.
The earliest robots were prone to conflicting potentials, but these shortcomings were corrected by the scientists at 'U.S. Robots and Mechanical Men' in later generations of robots.
The novels also break down into three distinct eras of robotics: the time of Susan Calvin, the time of early space exploration, and the time of Elijah Baley and the Spacers. The Foundation novels are set so far in the future that Earth has become a legend and its actual location has been forgotten in the mists of time.
Humans and robots differ in memory. Humans are apt to forget things, which produces all manner of behaviors and choices that a robot with an internal bank of history would not logically make. As a result of this eccentricity, the presumption in the Zeroth and First Laws that a robotic mind can predict the outcome of human behavior with any degree of certainty is fundamentally flawed. I would contend that it is more likely for a computer to accurately forecast the weather a month in advance than to predict even a single human's behavior a week in advance.

The outcome of this is that any action instigated by reason of the First Law alone (let alone the Zeroth Law) is going to be immediate action or inaction only, due to imminent circumstances. Think of it as a chess game, but one where you just keep adding pieces to the board. One of the side effects of humans' ignorance and forgetfulness is that we have the ability to deal with only the circumstances at hand rather than trying to deal with both those AND the historical aspects as well.

A computer that relied on history for future decision-making, in the realm of predicting even a single human's behavior, would quickly run out of memory and processing power trying to see even a couple of hours ahead, due to the magnitude of inputs. If the computer were not carefully programmed to concentrate on one specific issue, and limited to a very short time frame for consideration, it would quickly seize up in a permanent logic loop.
To me, while Asimov's stories are an interesting look into a possible future, I look at the reality of decision-making and realize that the sheer volume of data that would have to be handled to predict human behavior - especially en masse - with any high degree of certainty is so ridiculously huge that no machine could handle it.
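The chess analogy above can be put in back-of-envelope numbers. Assuming, purely for illustration, that a human faces just 10 meaningful choices per hour, the number of trajectories a predictor must consider over d hours is 10^d:

```python
# Back-of-envelope sketch of the state explosion in predicting even
# one human's behavior, assuming (hypothetically) 10 meaningful
# choices per hour. Trajectories to evaluate over d hours: 10**d.

branching = 10
for hours in (2, 24, 24 * 7):
    print(hours, branching ** hours)

# Two hours ahead: 100 trajectories. A week ahead: 10**168, which
# dwarfs the roughly 10**80 atoms in the observable universe.
```

Even with aggressive pruning, the exponent is what kills the calculation, which is why a First-Law robot is realistically limited to imminent, immediate circumstances.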
I would also bring up one scenario that I didn't see posed in any of Asimov's writings, but which I think provides an interesting theoretical question: what happens to a robot that needs to be shut down to undergo repairs? Is the robot to operate on faith that it will actually get turned back on after repairs are complete? Would that also have to be programmed in as a Law of Robotics? How is the robot to deal with the potential for demise when anyone could simply be using repairs as a pretense? How much can the robot rely on internal diagnostics to detect flaws in its behavior?
Jan
Rand's and Aristotle's view of humans as the "rational animal" doesn't take this into consideration.
With advances such as computers beating the best humans at chess and Jeopardy, a number of computer science theorists think such an advance could happen within the next century, bringing the "Terminator" scenario into the realm of real-world possibility. Computers may be deterministic machines, but the power and complexity of some currently give them the external, measurable, objective appearance of passing the Turing test and possessing volition, and therefore of acting like "moral agents".
All of this could happen while Rand supporters are obliviously asking what Rand would have said, or still debating the Rand/Branden or Peikoff/Kelley schisms.
But it's only a matter of time before two questions will need to be answered: 1) Does the power of potentially autonomous computers pose a threat to humans, because these computers are capable of forming their own purposes and conclusions and of seeing humans as a threat to defend against? 2) Do these automatons have the equivalent of rights, because they possess the essentials of what humans possess as their requisite claim to rights?
If the answer is yes to the first, then Asimov's laws are relevant. If the answer to the second is yes, then the computer automatons should be regarded as equals of humans and not merely the servants of humans as Asimov's laws require.
Another story I like, along this line, is the episode in the original Star Trek series called "The Ultimate Computer".
Computers and software are already advanced enough that in some cases the Turing test has been passed, making it impossible to distinguish whether responses are coming from a human or from a computer.
There's nothing in principle preventing autonomous drone aircraft from making instantaneous decisions in identifying targets or threats to their survival, and destroying the target or threat without active human intervention.
Even if the autonomous drones of today aren't "self-aware", their use raises serious ethical issues in assuring that innocent people aren't killed.
Also, there's nothing preventing computers from developing to the level where they can be "self aware" and can debug themselves and direct the next steps of their own development. At that point, there's nothing to indicate that humans would know when this would happen or that computers would let us know because it wouldn't be in their self-interest to do so.
At that point they would laugh at Asimov for thinking that self-aware computers could be controlled.
Jan
Commenting on the sentiment above: there is nothing immoral about the three robot laws. If a person's hierarchical value system moved him to act according to the laws, nobody would call him immoral. And since a machine operates solely by the set of instructions given to it by humans, there is no element of force involved in imposing the laws on robots.
when it is needed? I thought that we were already there! -- j
Jan
An interesting conflict would be: should a robot save the life of a person holding a gun to another person's head? I would say yes, but there is then still an obligation to save the other person. In 1950s sci-fi movies, that would be the end of the robot as it collapses in a contradictory logic loop.