I'm older than John Stossel and I find he is very hesitant in embracing the future. I can't wait! I wish nanotechnology were more advanced; I would be the first one in line to have nanites injected into my body, like the main character in Ben Bova's Moonwar. I already have bilateral hearing aids and an implanted spinal stimulator to deal with the pain from severe osteoarthritis and crushed discs. I will also reference an Isaac Asimov short story (I forget the name) about a clinic where humans enter one section to have their human organs replaced with artificial ones, while robots enter another section to have installed the human organs just removed from the humans. I'm a fraction of the way there, so I say bring it on!
I just recorded a Stossel show where he admitted that advanced tech will shape the future, making it better than the past -- although, he admits, there will be a learning curve for folks like him.
I'm a "when the ship lifts, all debts are paid" kid, and still have to push myself into the future!!! -- j
Based upon the expansion-contraction theory of the universe, certain scientists propose that when the universe contracts back to a singularity and the big bang occurs all over again, the computers would have set up a new intelligent species to come forth. It works like this: the astounding number of coincidences it takes for intelligent life to evolve indicates that it was planned. When we reach the point where computers overtake mankind, they will eventually set the universe up so that these coincidences must occur. Machines are immortal, so they can exist long after man has disappeared and will have eons to figure this all out. Hey, it's not my theory -- but it is a theory.
As much as I would like to have personal robots in my future, please don't forget that after Asimov created the Three Laws, he then wrote a series of stories in which those very same laws created a lot of problems. johnf
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Each lower-numbered law supersedes the higher-numbered ones, and the Zeroth Law supersedes all the other laws of robotics.
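The precedence scheme amounts to a priority-ordered check. Here's a minimal sketch of my own (the function name and flags are invented for illustration; they're not from Asimov):

```python
# Toy model of the Three Laws' precedence: laws are checked in order,
# and an earlier law vetoes anything a later law would permit or require.

def decide(order_harms_human: bool, order_given: bool, threatens_self: bool) -> str:
    # First Law: never harm a human, so refuse any order that would.
    if order_given and order_harms_human:
        return "refuse"   # Second Law yields to the First
    # Second Law: obey human orders, even at the cost of self-preservation.
    if order_given:
        return "obey"     # Third Law yields to the Second
    # Third Law: with no order in play, protect your own existence.
    if threatens_self:
        return "avoid"
    return "proceed"
```

So an order to walk into the furnace is obeyed (`decide(False, True, True)` returns `"obey"`), but an order to push a human into it is refused.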
"In later fiction where robots had taken responsibility for government of whole planets and human civilizations, Asimov also added a fourth, or zeroth law, to precede the others:" http://en.wikipedia.org/wiki/Three_Laws_...
A condition stating that the Zeroth Law must not be broken was added to the original Three Laws, although Asimov recognized the difficulty such a law would pose in practice.
Trevize frowned. "How do you decide what is injurious, or not injurious, to humanity as a whole?" "Precisely, sir," said Daneel. "In theory, the Zeroth Law was the answer to our problems. In practice, we could never decide. A human being is a concrete object. Injury to a person can be estimated and judged. Humanity is an abstraction."
Besides, such a law would be the worst sort of collectivist thought. The whole is greater than the one.
I'll stick with the three original laws, thank you very much.
As AI becomes more sophisticated, the ability to use circumlocution to circumvent proscriptions will become pretty much... human. At that point, we may not be able to forestall our own demise. One early movie in this vein (long before HAL) was "Colossus: The Forbin Project". Another, more benign scenario was a short story from many years ago titled "Thou Good and Faithful". I agree with the poster who remarked that the ability of robots and AI to free us from spending our existence on the provision of necessities, allowing humanity to pursue those endeavors which uplift us and let us escape to the stars, would be ideal. Alas, the greater portion would, indeed, languish in sloth and ennui. The saving grace of those who would seek 'to go beyond' (as the title of an Enya instrumental piece suggests) makes the effort worthwhile. To walk among the stars: what a glorious dream.
Well, is my face red. Colossus came after HAL. Also, some research has brought to light that the story as I remembered it is not quite the one I found. The story I remember concerns an expedition to a planet on which there is nothing but robots. In a central square is a statue of a human with the inscription, "Well done, thou good and faithful servant." The denouement is that the robots are the descendants of those who had been left behind as mankind went out to the stars. The expedition was MAN returning home after millennia, with no memory of Earth.
Here is where I become stunningly non-Objectivist. But. Don't kill me right away...
Let us say that robots take over the basic life processes that provide not just subsistence, but actually luxury for [insert group of people: US, 1st world nations, all the world]. What we have then is a world in which 'work' no longer correlates with 'life'. Right now, this happens at the expense of the producers (which is why I am on this forum) but what happens if we become so techno-rich that work is an option that one does for pleasure and not something that you must do in order to survive?
Now, I am someone who is easily fascinated and I voluntarily study paleogenetics and hieroglyphics in my spare time. (Had I more spare time, I would try to get to a level of expertise that would allow me to contribute to those fields.) I observe that this is not true of most people and that 'passive entertainment' seems to be the rule rather than the exception.
This robotically enhanced world of the future seems to me to be something that exceeds Ayn Rand's vision. I would enjoy hearing other visions of this world and...what would be its version of an Objectivist philosophy.
There was a movie about this type of future world: hidden behind the curtain were all the grotesquely fat, lazy humans, each controlling an avatar within the world since their bodies no longer had the ability to "live" in the real one.
Darn it, John / Ray, now you've done it! Putting weird ideas into my intelligent appliances.
So I wake up this morning and my toaster refuses to, well, toast until we have a discussion about some of Ayn Rand's finer points. And don't get me started about the fridge and the washer, who are having an all-out argument about patents and intellectual property. Sheesh! Thanks to you, I'm going to have to clean my clothes by hand for a while.
OK, so I made this point, and then all three of them (the toaster, the fridge, and the washer) started chattering back and forth for a while. The toaster finally began to chuckle before asking me if I'd ever heard of "John Galt's motor." I fear I'm totally screwed at this point.
Be careful. As far as I know, they're not programming them with Asimov's laws (yet), so there's nothing preventing the phone, or one of its friends, from taking revenge!
Law 0: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
Imagine a robot that knows it must continue to exist if humanity is to be saved, while humans are trying to turn it off.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The Third Law would prohibit such self-protection, since it must yield to the First and Second Laws.
Today the phone is in a bag with a flat battery. History!!! -- j