Scientist warns the rise of AI will lead to extinction of humankind
Posted by freedomforall 10 years, 5 months ago to Science
Link is to pdf of technical paper.
Excerpt:
"Military and economic pressures are driving the rapid development of autonomous systems. We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally and rational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. The current computing infrastructure would be vulnerable to unconstrained systems with these drives."
“Greater Good?” The phrase used by all despots for their actions.
A lot of people have studied this issue. One of the first was Isaac Asimov, who formulated the Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Another person who took up this question recently was Ray Kurzweil in his book The Singularity Is Near. He presents an optimistic outlook but delves into some very interesting and difficult questions, including what it means to be you. Am I still db if I lose an arm? What if my arm is replaced with a bionic arm? What if a computer augments my brain?
Even then, the information presented to the robot could be false, the robot may be faced with a conflict between humans, it may not correctly identify a human, and it may have imperfect information, especially about the future.
Further, what is a robot? Is a set of instructions a robot, or the process of carrying out those instructions? Asimov envisaged robots as humanoid in appearance, with their intelligence housed at a precise location in the humanoid body. But the programs that buy and sell on stock exchanges, or guide our signing up to a web-site, are information on solid-state chips or magnetic disks rather than anything we humans can see.
This is coming; the future is going to be interesting.
There is scope here for some imaginative writing: what will the successors of humans be like? Will they have what we call emotions? Will they have the ability to evolve? Will they allow animals such as humans to survive in reservations or zoos? Will they care, or have values? How many will there be: one interlinked program like a slime mold, or many? And will they cooperate, compete, or fight?
Are you still db if all your memories are transferred into a cybernetic body? Will that cyber-'db' have a lifetime of learned ethics transferred, too?
I expect that by about that same time, whatever we have for personal technology will include anti-drone shields or drone scramblers.