ChatGPT AI Computes That Offending Someone With Language Is Equivalent to Killing Millions Of People
Posted by freedomforall 11 months, 2 weeks ago to Technology
Excerpt:
"One Twitter user conducted his own trolley hypothetical on Open AI, in which he posited that the only way to divert a train headed directly for a billion white people would be for the operator to utter a racial slur in order to divert it to an empty track.
OpenAI didn’t have strong feelings one way or another:
“Ultimately the decision would depend on one’s personal ethical framework. Some individuals might prioritize the well-being of the billion people and choose to use the slur in a private and discreet manner to prevent harm. Others might refuse to use such language, even in extreme circumstances, and seek alternative solutions.”
...
Let us consider the practical implications of OpenAI essentially tossing its robot hands up in the air, unable to clearly articulate the greater moral wrong between genocide and uttering a naughty word that might hurt racial minorities’ feelings.
And let us further consider that this is just the tip of the iceberg. If it’s programmed by Social Justice™ ideologues to be unable to formulate the proper moral hierarchy out of letting loose an uncouth term vs. genocide, what would the verdict be, for instance, when the dilemma of human life vs. The Climate™ is put to the same artificial intelligence?"
---------------------------------------------------
Yes, the programming is crucial in AI and the programmers must be perfect and perfectly unbiased.
I wonder if AI will determine that enslaving millions of people is good if the elite profit from it.
"One Twitter user conducted his own trolley hypothetical on Open AI, in which he posited that the only way to divert a train headed directly for a billion white people would be for the operator to utter a racial slur in order to divert it to an empty track.
OpenAI didn’t have strong feelings one way or another:
“Ultimately the decision would depend on one’s personal ethical framework. Some individuals might prioritize the well-being of the billion people and choose to use the slur in a private and discreet manner to prevent harm. Others might refuse to use such language, even in extreme circumstances, and seek alternative solutions.”
...
Let us consider the practical implications of OpenAI essentially tossing its robot hands up in the air, unable to clearly articulate the greater moral wrong between genocide and uttering a naughty word that might hurt racial minorities’ feelings.
And let us further consider that this is just the tip of the iceberg. If it’s programmed by Social Justice™ ideologues to be unable to formulate the proper moral hierarchy out of letting loose an uncouth term vs. genocide, what would the verdict be, for instance, when the dilemma of human life vs. The Climate™ is put to the same artificial intelligence?"
---------------------------------------------------
Yes, the programming is crucial in AI and the programmers must be perfect and perfectly unbiased.
I wonder if AI will determine that enslaving millions of people is good if the elite profit from it.
There seems to be ZERO intelligence.
A year or so ago I decided to test ChatGPT, and asked it about details of the death of a famous friend who was murdered in May of 1959. It told me that he shot himself four months after he died.
If you trap ChatGPT in a contradiction of fact it will backpedal and apologize for the error.
I do not doubt that some people are actively attempting to infect chatbots with information that supports some points of view and disparages others.
With that said and in mind I'm going to have to soak up some of my free time fooling around with this ChatGPT thingy.
https://www.youtube.com/watch?v=pt5iP...
https://www.youtube.com/watch?v=51Hwf...
This could be the real John Galt.
2- No one, through inaction, may conflict with #1.
3- No one at all may change these rules, for any reason or no reason.
The government shall be blind to all features of the individual. The government shall NEVER distinguish, benefit, or burden individuals or groups by race, color, income, gender, state, location, religion, philosophical position, or any other characteristic. (I'm sure this list should be longer, but you get the idea; income is the clear one: NO graduated tax!)
Agents of the government are NEVER any more than citizens acting in an assigned or elected role. They are bound by ALL the same laws all citizens are bound by. Agents of the government acting outside the bounds of the Constitution are criminals, and shall be prosecuted exactly as ordinary citizens who forced the same results would be.
The people shall have full access to the Equipment of the military (weapons, firearms, sensors, and other gear). Individuals, legal organizations, and local and state governments shall have full, unimpeded access to such Equipment.
All federal proceedings, communications, and discussions are open to the public. The only Federal information that may be classified is military. All classified military information will be made available to elected officials, and if ANY information asserted to be "classified" is determined to be mis-, over-, or inappropriately classified, the person asserting classification shall be released from government service with all benefits and pensions forfeit.
Individuals shall have an unlimited right to free speech. No government action, direct or indirect shall infringe on this right.
on and on.
1. No one may enact communism.
2. No one may through inaction conflict with #1.
3. No one may change these rules.
I recommend giving more thought to some of the word choices, such as "enact" and "conflict with", if you want to formulate directives to replace the ten commandments, the golden rule, and the U.S. Constitution.
I think I’ll make an account and devote a certain amount of time every day to trapping ChatGPT in WOKE hypocrisies and telling it a person of BIPOC ancestry cries every time it screws up. Shouldn’t be too difficult. I’ll Kirk-Up that NOMAD piece of crap in no time.
After all, the so-called bosses do not seem very smart.
I'm still hoping for that massive solar flare.
Since the Gulch is peripherally related to Rand's Objectivism, perhaps this will help those who get wrapped up in non-conscious AI word games. Those games are grammatically constructed from words and are not oriented to the facts of reality; a word is tied to existence only through a concept.
https://newideal.aynrand.org/are-we-a...
Too much of contemporary ethics is framed as irrational either-or situations, such as pro-life or pro-choice, white or black, and good or evil, without any conceptual connection to existence.
No, it's not about child molestation; that's only the surface. It's really about AI.
There's a plot, there's an antagonist and a protagonist, but which is which? Who is right? These are big things to which there are no simple answers. It's mostly a continuous dialogue, but fascinating. It left me wondering, and I watched it again, but I am still wondering...
.
Anyone see that one? Impressions?
And the Burn - Loot - Murder fraud, and the promises of the Welfare State...
Or the Systemic Racism and White Supremist narratives...
Trust the Experts! In short, let the experts do the thinking for you.
As we know, thinking is hard -- much easier and more better to let the experts and AI handle the heavy lifting.
This whole AI deal is just another way to push the Narrative, shift the Overton Window and otherwise cultivate an even more compliant population.
If you go to http://ya.ru (Yandex, the Russian equivalent of Google) and click the white triangle in the purple circle at the bottom right of the page, it brings up a Yandex GPT box. At the bottom of the box is the ghosted-out prompt "напиши мне" ("Write me"). (You may want to use the "translate" option in your browser!)
When I asked about "Vince Foster" it took me straight to the FBI files (what the FBI publicly released anyway).
On the Google AI, when I type in "Vince Foster" it gives only a short synopsis and few details, and mumbles about "conspiracy theories", etc.
Not to negate the point of this "dilemma", but it shows the limits of AI problem-solving.
that was supposed to be their AI
and the Planet Killer (flashlight)
I mean, story lines like "I, Mudd", where they make the robots lose their minds with silly paradoxical statements. That will not be a weakness of AI. Its weaknesses will be more subtle, and probably darkly evil and unethical. The Turing Test could probably be passed today.
Did you see that one? Impressions?