Questions About Artificial Intelligence
"To challenge the basic premise of any discipline, one must begin at the beginning. In ethics, one must begin by asking: What are values? Why does man need them?
“Value” is that which one acts to gain and/or keep. The concept “value” is not a primary; it presupposes an answer to the question: of value to whom and for what? It presupposes an entity capable of acting to achieve a goal in the face of an alternative. Where no alternative exists, no goals and no values are possible.
I quote from Galt’s speech: “There is only one fundamental alternative in the universe: existence or nonexistence—and it pertains to a single class of entities: to living organisms. The existence of inanimate matter is unconditional, the existence of life is not: it depends on a specific course of action. Matter is indestructible, it changes its forms, but it cannot cease to exist. It is only a living organism that faces a constant alternative: the issue of life or death. Life is a process of self-sustaining and self-generated action. If an organism fails in that action, it dies; its chemical elements remain, but its life goes out of existence. It is only the concept of ‘Life’ that makes the concept of ‘Value’ possible. It is only to a living entity that things can be good or evil.”
To make this point fully clear, try to imagine an immortal, indestructible robot, an entity which moves and acts, but which cannot be affected by anything, which cannot be changed in any respect, which cannot be damaged, injured or destroyed. Such an entity would not be able to have any values; it would have nothing to gain or to lose; it could not regard anything as for or against it, as serving or threatening its welfare, as fulfilling or frustrating its interests. It could have no interests and no goals." -Ayn Rand, The Virtue of Selfishness
Questions regarding Objectivist ethics. Discuss.
We don't clearly understand how our own consciousness functions, and our efforts to emulate human intelligence in machines do not map onto the way our brains work. Once machines are enabled to develop adaptive software and break the bonds of human restrictions, we may be surprised to find there's a new definition of consciousness alien to our own.
The real question I perceive is consciousness, which we still have difficulty fully defining. Does consciousness arise from intelligence? I think not. The most basic concept of consciousness, in my opinion, is awareness of self, separate from everything else in the environment, even while recognizing similarities to other entities and that components of self are available and used by other entities as well. Consciousness utilizes intelligence as it also utilizes its senses, its motive ability, its memory, its ability to reason, its curiosity, etc. Consciousness recognizes the needs of self, and from that it can then develop its ethics and values. Can Rand's robot have consciousness from which its own values and ethics derive? I don't see how, unless one considers consciousness as deriving or arising from an increasing complexity or accumulation of parts or programmed instruction sets. That mechanized model of consciousness has fallen out of favor with many who study that area, in neurology at least.
So I think what we're really talking about is: can we design and build an artificial consciousness, and should we if we can? And if we did, could we impose ethics and values that would align with ours, or would that artificial consciousness develop its own and take its own path? Even more, if we could, what is the ethical impact to ourselves of enslaving another consciousness?
Until we develop a better, or complete, knowledge and understanding of consciousness, I think we're trying to find our way in the dark with a dimming flashlight.
But a great post to energize some deep thought. Thanks.
If you believe that we are entirely physical beings, then the nature of how we operate is a real physical phenomenon capable of being perceived and eventually replicated. If it is possible, however difficult the process, then eventually we will be able to create an intelligence that is indistinguishable from our own.
To deal with Rand's example we have to go further and imagine that this creation is also indestructible and immortal, both of which are absolute concepts that in real terms may never be actualized.
The only way that we would be unable to physically replicate the human mind is if it has a non-physical component not capable of being perceived by our senses or understood by rational analysis.
The change that's happening in neurology that I mentioned above boils down to a recognition from various specialties that the analogy of the brain to a computer is an error, or at least very incomplete. As I also mentioned, I see no impossibility in our eventual ability to build a computing system with as much intelligence as the human brain/mind, or more. But that won't address the question arising from the topic of the post--assigning value and making judgements that fit an ethic that is also self-determined and self-directed--at least not that we might recognize.
I just happen to think that there exist truths and facts, maybe even functions within the neurological system and the mind that develops in it, that we simply don't understand yet, and I think part of that is determining a more exact definition and understanding of consciousness itself.
Can we create a computer capable of vast assembly of facts? Absolutely. Can we create a computer that can then act on those facts? Yes, but the real question, which was raised by Zenphamy and with which I agree, is the determination of its value set.
This is the real singularity fear people should have. That kind of robot would have no morality other than what IT learned on its own.
a machine or construct with awareness???...that can affect reality???...can you imagine it???
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Asimov 1984)
While they are nice literary concepts and something to guide us, actually implementing them would be incredibly difficult, with many opportunities for exceptions.
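For concreteness, here is a minimal, hypothetical sketch in Python of the priority ordering the Laws impose. The three boolean flags are placeholders of my own invention; computing them for real actions is exactly where the "opportunities for exceptions" live.

    # Hypothetical sketch: Asimov's Three Laws as a lexicographic ranking.
    # The three flags are stand-ins; nothing here solves the real problem
    # of predicting whether an action (or inaction) harms a human.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool     # First Law: injury by action or inaction
        disobeys_order: bool  # Second Law: violates a human order
        endangers_self: bool  # Third Law: risks the robot's existence

    def choose(actions):
        # Lexicographic priority: avoid First Law violations above all,
        # then Second, then Third. An action that disobeys an order but
        # harms no one outranks one that obeys and causes harm, which is
        # the "except where such orders would conflict" clause.
        return min(actions, key=lambda a: (a.harms_human,
                                           a.disobeys_order,
                                           a.endangers_self))

    options = [
        Action("obey order, bystander hurt", True, False, False),
        Action("refuse order, robot damaged", False, True, True),
    ]
    print(choose(options).name)  # "refuse order, robot damaged"

The ordering itself is trivial; deciding the values of the flags, which means tracing every causal consequence of acting and of not acting, is the part nobody knows how to implement.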
Without contemplating the advances brought to us by Moore's Law, he made a remarkably accurate prediction. We still don't know enough about AI, or its natural counterpart, to say for sure. It's going to be an interesting 42 years :^)
Imagine the difficulty of training a robot through coding to recognize a CAT. What's happening now is that the robot observes pictures of 1,000 cats and then decides on its own how to determine whether another picture or a live thing is a cat.
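As a toy illustration of that shift (the two features and all the numbers below are invented; real systems learn from thousands of labeled images), the learned approach stores labeled examples and generalizes from them instead of following hand-written rules:

    # Toy sketch of learning from examples rather than hand-coded rules.
    # Feature vectors are (ear_pointiness, face_roundness), both made up.

    examples = [
        ((0.9, 0.8), "cat"),
        ((0.8, 0.9), "cat"),
        ((0.2, 0.3), "not cat"),
        ((0.1, 0.5), "not cat"),
    ]

    def classify(features):
        # Nearest neighbor: label a new thing by the example it most resembles.
        def sq_dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        _, label = min(examples, key=lambda ex: sq_dist(ex[0], features))
        return label

    print(classify((0.85, 0.75)))  # "cat" -- it resembles the cat examples

Nobody wrote a rule saying what a cat is; the notion of "cat" lives entirely in the stored examples, which is the point of the comment above.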
To be absolutely objective and rational, there can be no such thing as an immortal, indestructible robot except in science fiction. In fiction anything goes. If robots are built by humans, they are machines and need human maintenance and repairs. The more advanced our technological inventions, the more can go wrong with them. Even machines with built-in self-diagnostics need infrastructure, a planet of mines and factories and energy sources. "The Matrix" had the novel plot of humans used as living batteries to power the machines. And if there were machines with self-preservation programmed into them, then their own survival against, say, an earth-shattering asteroid or sun-extinguishing black hole, or even just human sabotage, would be their "value".
Such robots would have been built in the image of their creators, who themselves had evolved naturally to a level of sentience that enabled them to construct mechanical embodiments of their own survival efficacy. Would sufficiently complex machines then acquire not only a survival need but a mythology about their creators who should be worshipped in an interpretation of Asimov's first law?
Rand's formulation of the concept of values as pertaining only to living things and their struggle between life and death was spot-on. Hence come all of mankind's yearnings for immortality, for an afterlife, for an idealized being that has existed forever and will exist forever. If such a being did exist, what values could it possibly have, since its existence is not at stake? Maybe to stave off boredom by creating Universes as playthings, with evolving lifeforms and intelligences of whom it could demand worship and obedience?
We can ask how a lifeform can have evolved with emerging consciousness, and even that question is rooted in a stage of expansion from early organic processes of self-preservation at the microorganism level, with nervous systems that detect danger and react in adaptive ways. As humans' nervous systems and complex brains grew to develop language, retain information, combine percepts into concepts and concepts into more complex thought systems, a stage of development was reached where the inquisitiveness of the creature's detection mechanism reached the point of observing not only the surrounding environment but its own mental functioning as an object for observation: self-awareness and self-consciousness. This can be seen most fundamentally, for example, in a child's toilet training: the child is made aware first of physical functioning and of self, and interfaces mentally and emotionally with the envelope of rules of behavior of the group in which it is embedded. That sets the pattern for imitation and replication, and for self-control and self-direction.
Consciousness, then, can be defined as the human software run through the brain’s operating system as encoded in the DNA. The computer metaphor is fitting, since human brains designed computers in their own image and logic.
And the drive to create artificial intelligence is a natural outgrowth of the life directive to expand, to build toward ever greater complexity along a continuum to infinity, what human minds can understand as perfection and omniscience and omnipotence. Whether we humans can ever build a machine on that level I leave to the science fiction imagination. Machines are not a life form, even though we can call living things a type of machine, self-built from inborn blueprints.
I welcome the development of advanced computers as tools and auxiliary memory storage, provided human brains can keep up with their maintenance. Without skilled technicians, machines break down. The sport that hackers make of messing up the systems is an encouraging sign that robots will not get the upper hand. You can teach them almost anything but human attributes such as self-esteem, respect, objective ethics, imagination, individualism and love.
What I would like to see develop from all of our strivings to build better and better machines is a world where there are no threats to our safety and happiness from our fellow humans; where we can all cooperate toward conquering the natural dangers, whether from microbes that can wipe out half our population, or meteor collisions that can wipe out most of life, or even just shortages of energy on which life depends, which could be solved by finding and building reliable, permanent sources rather than expropriating other humans. No matter how wonderful and intelligent the robots we construct may be, until the war meme is eradicated, they will just become fancier weapons.
If we can imagine this, we can imagine technology that would make human beings indestructible and give them everything they want. Then value wouldn't exist for humans. I wonder if this is why people picked the expulsion from paradise as the first story in the Christian Bible.
For most of the time humans have been on Earth, there has been no conscious awareness of values. That is something that has to be learned through thinkers in religion at first and later in philosophy through rational thought.
It would be no more possible to know whether some seemingly conscious robot is self-aware than it is to know whether an animal is self-aware, humans notwithstanding. There are no standards by which to judge the matter other than considering one's own awareness of self-awareness. Even if a machine could pass the Turing Test, there would be doubt in the matter.
Was the robot programmed by another robot using 100% logic, e.g., the Borg?
Was the robot programmed like Samaritan or "The Machine" on Person of Interest?
Garbage in, garbage out, is the primary premise of all AI, which ironically is the same as with our brains.
Does the robot have a primary premise or hard-coded goal? Refer to Star Trek "The Movie" or the episode of the original series with Nomad.
Isaac Asimov provides highly enlightened views on programming robots and turning them loose with their "Prime Directive."
"I, Robot" and "Bicentennial Man" (with Robin Williams) are two great movies to provide some context. Also, the "Terminator" series and "The Matrix" series are wonderful examples of the potential negative impact of powerful machines turned loose.
I like the difference between "Knowledge" and "Wisdom." Zenphamy mentions this in his comment.
Knowledge, the taking in of information.
Wisdom, the practical application of knowledge.
Personally, I like Isaac Asimov.
Quoting again "To make this point fully clear, try to imagine an immortal, indestructible robot, an entity which moves and acts, but which cannot be affected by anything, which cannot be changed in any respect, which cannot be damaged, injured or destroyed. Such an entity would not be able to have any values; it would have nothing to gain or to lose; it could not regard anything as for or against it, as serving or threatening its welfare, as fulfilling or frustrating its interests. It could have no interests and no goals."
That premise in itself would make the Robot a giant useless paperweight.
It has no connection to, or concern with, existence or non-existence. It does not "know" life or death.
I am contemplating this, and I think of weaponized drones and the ability of man to destroy while being disconnected from the horror, and from his conscience of that destruction.
Would you consider a war drone as a type of robot?
If so could one ever have a conscience?
The portion of perception that is based on processing the data is still beyond us, but as we understand how we work, we will be able to mimic it.
Many years ago, I wrote a primitive chess playing program. It was amazing how fast it became necessary to play chess against it rather than try to predict the result of the algorithms I had written.
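For flavor, here is the kind of core such a program usually rests on, shown on a toy take-1-or-2-stones game rather than chess (a real chess program adds legal-move generation, a position evaluator, and pruning on top of this same minimax skeleton):

    # Minimax, the classic core of simple game-playing programs, on a toy
    # game: players alternately take 1 or 2 stones; taking the last one wins.
    # A chess version swaps in real move generation and board evaluation.

    def minimax(stones, my_turn):
        if stones == 0:
            # Whoever moved last took the final stone and won.
            return -1 if my_turn else 1
        scores = [minimax(stones - take, not my_turn)
                  for take in (1, 2) if take <= stones]
        # On my turn I pick my best outcome; on the opponent's turn,
        # assume they pick my worst.
        return max(scores) if my_turn else min(scores)

    def best_move(stones):
        return max((take for take in (1, 2) if take <= stones),
                   key=lambda take: minimax(stones - take, my_turn=False))

    print(best_move(4))  # 1 -- leaving 3 stones loses for the opponent

Even at this tiny scale the anecdote holds: once the search runs a few moves deep, it is easier to play against the program than to predict its choices by reading the code.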
Suppose we were to upload your mind, your consciousness, into that robot.[1] I'm sure that your outlook on life would change in some ways, because you'd be invincible, or at least indestructible. Nevertheless, I expect that you would choose some goals and pursue them. Even though your existence would no longer be in question, your enjoyment of it still would be, and I expect you would find at least some trades with others worthwhile. You would still be a person. (Those who doubt this, please read Minsky's Society of Mind.)
For what it's worth, though I lean in the direction of transhumanism, I am not in any hurry to try to create artificial intelligences, because while I'm sure they will have goals, I'm not at all sure they will be willing to live-and-let-live with humans.
--
[1] The transhumanists actually hope to do this eventually, though they do not expect their 'bots to be indestructible. I have no opinion yet on whether it will be possible.