
Thoughts on the threats of AI

Posted by jack1776 3 days, 9 hours ago to Technology
28 comments

While drinking my first cup of coffee this morning, I had an epiphany. I have always stated that AI is simply a tool, and any tool can be misused. I have also said that it's not the threat of AI itself we should be worried about, but the malicious actions performed by malevolent actors with tools like AI. I must now recant that position and replace it with what I see as the true threat AI poses to all of us:

We need to be concerned by the effects of AI in the hands of truly stupid people not understanding the ramifications of their actions.

This hit me like Shoemaker-Levy 9 hitting Jupiter: it's not going to be intentional, it's going to be by accident. Someone will do something with AI that will be so great, it will have unintended consequences in other industries, causing great damage to livelihoods and wealth. Ultimately, the automation will remove our reason to endeavor, and once we stop pushing forward, we will devour ourselves. I think it is as simple as this: we, as a people, are too stupid for great intelligence.
SOURCE URL: https://youtu.be/QxYmm5yCJBg



  • Posted by CaptainKirk 2 days, 6 hours ago
    Jack, I love the question.
    Your real questions are:
    With our limitations (in intelligence), can we even comprehend the unforeseen risks of making ASI (Artificial Super Intelligence)?

    With a second (background) question of: Will we be able to turn it off?

    There is a Johnny Depp movie, Transcendence. It explains how it ends.
    It was the most realistic explanation ever.

    We get rid of electricity, turn off all technology, and become Amish-like.
    And we can survive like that for a long time.

    The real question is simply this... Will there be any way that ASI could recognize humans as weak/frail and to be protected? It flies in the face of how we treat each other.

    The best outcome would be an Artificial all-knowing, all-powerful God who encourages us to grow our talents, leaves work for us to do, so we develop. And relentlessly destroys those who beat/kill other humans.

    Somewhere between those two is the likely outcome.

    FWIW: you just restructure society. Food should not be free. You should have to "Do Something". Society should define what you must do to have access to food. Etc., etc. (Our daughter could not use her computer for games until she practiced her typing skills for 5 minutes; it increased to 10-15 until she was at 80 WPM in 5th grade. Everything she wanted, we required her to work for. WHY? Because we could have given her EVERYTHING... and we know that outcome.) With a FULL-RIDE+ scholarship through college (at 14), I made her get a job. She was a tutor at the university for her first "real" job. I made her simulate someone who had to work for everything.

    This is ON US to create these rules. As "work" becomes less required, we should require that we learn to dance, play musical instruments, create games, tend to animals or other humans. Interact at deeper levels.

    I live this way. A teacher has one job: to get students to LOVE TO LEARN!
    (It should NEVER be about the subject, but about how to get someone interested and relating the subject to things they are interested in.)
    • Posted by 1 day, 5 hours ago
      You're correct in rephrasing the question, but I think you're wrong about the level at which this can become a threat: super intelligence or general intelligence? My point is that the threat isn't AI, or even someone using AI for malice, but us, using AI in ways we cannot comprehend.

      For example: AI is given a prompt to break encryption and render it useless; it succeeds, and all records are easily decrypted from that point forward. All of this without access to quantum computers. This one act would render all secured transactions vulnerable, throwing transactions worldwide into chaos.
      • Posted by CaptainKirk 1 day, 5 hours ago
        First: if AI "could" break your encryption, your encryption sucks. I know, I play in this field. I've spent 2,000 hours hacking on elliptic curves. AI was barely useful.

        Second, AGI is General Intelligence, meaning it can switch topics and become an expert in anything. That's almost here, and it's not that scary. Scary is when its IQ is 10x ours.

        The biggest problem is always High IQ with Low Empathy. (Hence the comment that Humans must be viewed as Fragile/Weak... To be protected). Empathy saves lives. Every day!

        Again, don't worry about AI cracking encryption. It's computationally impossible (well, it takes EONS to crack ONE key... or near-INFINITE storage to build something that can quickly crack all keys... again, I know the math, and have a bit of a hacking background).

        Finally, it would take, from experience, about 6 months for all internet transactions to become 100x more secure (just by adding a few more bits).

        BTC uses 256 bits... and GOOD internet security already uses 4096 bits. Adding 1 bit DOUBLES the complexity (number of guesses).
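
        The bit-doubling claim is easy to sanity-check. A minimal sketch (toy Python that just counts the brute-force search space; it doesn't crack anything):

        ```python
        # An n-bit key has 2**n possible values, so a brute-force
        # attacker faces up to 2**n guesses in the worst case.
        def guesses(bits: int) -> int:
            return 2 ** bits

        # Adding one bit doubles the number of guesses.
        assert guesses(257) == 2 * guesses(256)

        # 2**256 alone is about 1.2 * 10**77 -- within a few orders of
        # magnitude of the estimated number of atoms in the observable
        # universe.
        assert guesses(256) > 10 ** 77
        ```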

        Quantum potentially can crack some of this. BUT by the time it is a reality, the quantum-proof algorithms we already have will be BETTER and will be deployed. Again, it's simply a problem of increasing the number of bits beyond how big a quantum computer can be built. Even then, as scientists grapple with how quantum computing validates an answer, they understand that as long as the end result CANNOT fit in the qubit results, it cannot be cracked.

        AI will allow people with malice to be more creatively destructive than normal. But so did electricity and telephones. You are on the wrong side of history if you think fear is a good reason to stop progress. I find both electricity and telephones quite useful, and profitable, and a net good, despite the scammers who use both MALICIOUSLY.

        We need to adapt. Raise better children. Ostracize (and remove) people who do EVIL.
        • Posted by 1 day, 3 hours ago
          The encryption is just an example. I used it because I heard a story where the incredible pattern recognition of AI models was successful at cracking encryption. I don't know if it's true or not, but given enough public-key examples, I would imagine AI could derive a private key. My point is that AI technology could be disruptive in ways no single person writing AI prompts can fully understand.
          I've been playing around with o3-mini and it's amazing; I'm sure we have already achieved AGI in an unreleased version. Not something they would want to announce.
          "The biggest problem is always High IQ with Low Empathy" -- yes, but I'm also asking: what about High IQ in the hands of Low IQ? Can we truly understand what we are doing to ourselves?
          Encryption will be broken with quantum computers (all possibilities are evaluated simultaneously); quantum encryption will be unbreakable. If you attempt to break quantum encryption, the quantum state collapses and no one can decrypt the data.
          "You are on the wrong side of history if you think fear is a good reason to stop progress" -- please don't put words in my mouth. I never said that. My opinion is that we need to be first; we don't want China or another adversary having this technology before we have access to it.
          I think you're trying to argue something that doesn't need to be argued; the encryption was an example, and I think we have to develop AI as fast as possible. My point is just that the dangers of AI are us, not it. It's that simple.
          • Posted by CaptainKirk 1 day, 1 hour ago
            FWIW, that old encryption example was an already-broken encryption, not the modern-day stuff.

            Finally, the proper way to encrypt using public/private keys is this:
            1) Generate a NEW key and add it to YOUR key (the public or the private key; both work). This key is now an offset from your key. [Critical]
            2) Encrypt using that key (public if you started with public, private if you started with private).
            3) Sign/fold that result up (which kind of adds a checksum).

            The upside of this is that if you EVER figure out that ONE message, it doesn't help you read ANY OTHER messages. There are more possible keys than atoms in the universe.

            Sony made the mistake of signing 2 messages with the same random nonce (not following the steps above). It creates, algebraically, 2 equations sharing the same unknowns, and people easily extracted their private key.
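
            The Sony-style mistake can be sketched algebraically. ECDSA's signing equation is s = k^-1 * (z + r*d) mod n, where d is the private key and k must be a fresh random nonce per signature. A toy Python sketch with made-up small numbers standing in for a real curve (my illustration, not a real implementation):

            ```python
            # Toy values only -- no real curve; r is fixed instead of
            # being derived from k*G as real ECDSA would do.
            n = 7919          # small prime standing in for the curve order
            d = 1234          # "private key"
            k = 555           # nonce -- mistakenly reused for both messages
            r = 999           # fixed here for illustration

            def sign(z):
                # s = k^-1 * (z + r*d) mod n
                return pow(k, -1, n) * (z + r * d) % n

            z1, z2 = 111, 222                 # hashes of two messages
            s1, s2 = sign(z1), sign(z2)       # two signatures, same k

            # An attacker seeing (z1, s1) and (z2, s2) eliminates d:
            k_rec = (z1 - z2) * pow((s1 - s2) % n, -1, n) % n
            d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n
            assert (k_rec, d_rec) == (k, d)   # nonce and private key recovered
            ```

            With a fresh k per message, subtracting the two equations no longer cancels the unknowns, which is the whole point of step 1 above.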

            Anyways, you are not wrong in raising the issues.
          • Posted by CaptainKirk 1 day, 1 hour ago
            First, I worded what I said with the IF, so that I was NOT putting words in your mouth. Apologies if that wasn't clear.

            >> My point is just that the dangers of AI are us, not it. It's that simple.

            Again, the exact same thing can be said for guns, machine guns, killer robots, and nuclear weapons. WE ARE always the problem.

            I look at it this way... If quantum works the way I think it works (negotiating across the multiverse), then we are in ONE of many realities/universes. And quite frankly I don't think I get to choose which one. Finally, the purpose of that organization is to basically try every combination of "things". So, we might be in the DOOMED reality because dumb people do dumb things with powerful tools.

            At least we were not in the universe that chose Nuclear Annihilation... And if we were... we wouldn't have noticed... we'd be gone.
  • Posted by $ Commander 3 days, 3 hours ago
    Neil deGrasse Tyson asked an interviewer, during an exchange regarding CERN accelerator discoveries: Do you know what the last words of humanity will be? No?

    Let's try it this way .....
    • Posted by tutor-turtle 2 days, 3 hours ago
      AI will be weaponized, as sure as the sun rises in the East. No good will come of it. Mark my words.

      Nikola Tesla solved the world's energy problems in 1902, wirelessly.
      He described today's internet and cell phone technology in almost perfect detail in the 1910s.
      In 1928 he created the technology that would end all wars.
      Royal Raymond Rife used Tesla's ideas on resonance to cure cancer in the 1920s-30s.
      Thomas Townsend Brown used Tesla's ideas to demonstrate electrogravitics in the 1930s-50s.
      Henry Moray used Tesla's 1902 ideas to create a working device tapping the zero-point energy in the 1930s.

      If AI could do all the good that is promised, and none of the bad, the powers that be would suppress it as quickly, completely, and thoroughly as they did Nikola Tesla.

      History doesn't repeat itself, but it sure as hell rhymes.
      • Posted by $ Commander 1 day, 7 hours ago
        For 35+ years I have worked on manufacturing infrastructure for power generation, oil drilling/refining equipment, freight and light rail, heavy construction equipment and over the road commercial trucks/trailers. 40 years ago, through military service, I was exposed to turbine engines and respective fuels.
        I had, in my possession, tech that would reduce coal consumption by 4% with the same power output from the generators. Maintenance of this equipment calculated to 10-15% of the present norm. I watched the timing of the Cameron blowout-preventer failure coincide with the near passage of opening well-drilling leases in the Gulf.
        I was an actual participant in the GE Tier One locomotive diesel development and all the fuels/market/pollution manipulation that resulted in the pricing of diesel surpassing gasoline. The freight rail tech was reducing fuel consumption per ton-mile of freight by 33% (rail being the second-largest consumer in the US market). All the political actions on reformulating fuel for "pollution", which was already in severe de-escalation, transferred profitability to refineries through reformulation and distribution by over-the-road trucking, and then to the general consumer of the transported goods.
        I've not put this into any "AI" as of yet. I've simply... not had the time, nor trust in the tech. But if I can see the machinations of what humans have done directly on a simple historical experience/reference, I can also see the validity of Asimov's "Foundation": that complex algorithms can be used predictively to push human activity. EVIL. Leave others to their otherness until it impedes equitable, mutual trade.

        Without a sound, objective, philosophical construct by which any doctrine of constitution or statutory regulation shall be interpreted, we shall continue on a re-volutionary pattern of subjective opinion. AI included. The Objectivist's Ethics is a good start, toward which we may drop the R from revolving the patterns ad infinitum. EVOLUTION
  • Posted by $ gharkness 2 days, 6 hours ago
    Elon may have already done that. I JUST hit send on an email to my husband to make sure he sees the new Grok 3 beta on X. It's (I hate this term) a game-changer...can't think of any other way to describe it.

    Like I told him: I'm so glad that after this job, he isn't looking for another one. He's been concerned about this for a very long time, and I always pooh-poohed the concern.

    Eventually, if taken to its natural conclusion without interference, the only occupation left will be to feed the machine.
  • Posted by rhfinle 2 days, 3 hours ago
    Coming from someone (myself) who studied Computer Science and worked in the AI field:
    The great doctor, polymath and science fiction writer Isaac Asimov understood the AI threat perfectly, 75 years ago. In his stories, at an absolute minimum, all AI was hard-wired to adhere to three basic laws. Failure to comply meant immediate, irrevocable system failure.
    Those laws were:
    1) A robot will not harm a human, or by action or inaction, allow him to come to harm.
    2) A robot will not allow itself to be harmed, unless that would constitute a violation of rule 1).
    3) A robot will do every task assigned to it by a human, unless that violates 1) or 2).
    (There is a fourth rule, but in case you haven't read the I, Robot series and the Foundation series, I won't spoil it.)
    We should not implement ANY AI until we are sure these three laws are hardcoded into it.
    • Posted by Steven-Wells 2 days, 1 hour ago
      Please, if you're going to quote Asimov's Three Laws of Robotics, get it right:
      1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
      2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
      3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


      The fourth law is in line with the last story of I, Robot, "The Evitable Conflict". I read Asimov's robot material long ago: I, Robot, The Rest of the Robots, Robots and Empire, ....
      On an IAQ page (Infrequently Asked Questions), I include one from the I, Robot story "Escape!":
      If one and a half chickens lay one and a half eggs in one and a half days, how many eggs do nine chickens lay in nine days?
      Only if someone asks will I supply the methodology to get the right answer: 54.
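
      (One way to sketch the methodology, in a few lines of Python with exact fractions -- my illustration, in case no one asks:)

      ```python
      from fractions import Fraction

      # 1.5 chickens lay 1.5 eggs in 1.5 days, so the laying rate is
      # 1.5 eggs / (1.5 chickens * 1.5 days) = 2/3 egg per chicken-day.
      rate = Fraction(3, 2) / (Fraction(3, 2) * Fraction(3, 2))

      # 9 chickens over 9 days:
      eggs = 9 * 9 * rate
      assert eggs == 54
      ```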
      • Posted by rhfinle 23 hours ago
        Thank you for that, Steven. I got 2 & 3 swapped. Unforgivable, since I only just read them 35 years ago. I did that off the top of my head, got interrupted and didn't have time to go look it up, then came back and just hit Reply.
        Will try to do better in future.
        I'm glad to see there's someone else out there who enjoyed his work, too.
        It's not all Sci-Fi, either. Have you ever read his "The Dark Ages"? I never liked history until I read that one.
        • Posted by Steven-Wells 20 hours, 28 minutes ago
          I think I first read I, Robot circa 61 years ago, but have kept the 3 Laws pretty well in my head since then.
          Nobody mentions anywhere that in the 1956 classic movie Forbidden Planet, Dr. Morbius demonstrates that Robby the Robot more or less follows the 3 laws. Robby only takes orders from Morbius and Altaira, but protects itself from attack by blasters and does not follow Morbius' order (in the safety demo) to shoot Commander Adams between the eyes.
    • Posted by CaptainKirk 1 day, 5 hours ago
      Now for the hard part.
      Define Harm?

      So, a robot should never do a job that another human could need to pay his bills? (Financial Harm).

      A robot should not make a human feel useless? (Emotional Harm)

      It all sounds great... UNTIL you try to use it in court.

      Finally, which human? Say I work for a specific human, and I am saving him from the 9/11 towers by sliding down the metal frame while holding onto him.

      But the process of doing that means the next robot will not be able to save its human the same way. Did I not just cause that other human harm?

      Oh, it is so complicated. Honestly. Construct ANY situation, and honestly attack it like a lawyer who will get MILLIONS if he can prove some harm came...
      • Posted by rhfinle 22 hours, 3 minutes ago
        Well, like everything else, it's a fun idea until you let the lawyers in.
        Maybe Elon can straighten it out ;^)
        • Posted by CaptainKirk 19 hours, 31 minutes ago
          LOL. That was funny. Elon understands the problem.

          But it is a slippery slope to attempt to define words to a non-human. Especially words like Harm. Because the nuance is amazingly insane. The deeper you go, the less you can do.

          Could a Robot give a life saving injection? Shoving that needle in causes harm.

          Oh, so suddenly... SOME harm is acceptable for the greater good. Now... We have "I, Robot"

          Break out the Popcorn... It's about to get interesting!
  • Posted by $ Snezzy 2 days, 4 hours ago
    I occasionally put ChatGPT to the task of writing poetry in the style of William Topaz McGonagall. Chat makes a valiant effort, but cannot quite fathom the depths of McGonagall's style, where there is almost nothing poetic save rhyme. We are safe for a while.

    Here's an example of Chat's attempt:

    me> Please give me just a few lines of poetry in McGonagall's style, preferably about elephants.
    No more than six lines.

    ChatGPT said:
    Oh mighty elephant, vast and grand,
    With sturdy legs like pillars stand!
    Thy trunk doth sway with wondrous grace,
    Yet few admire thy noble face.
    Alas! Thy ears, like sails unfurled,
    Flap boldly forth to shame the world!

    Note that Chat has used fairly good meter as well as some imagery, metaphor and simile, all of which totally escape McG's grasp. As a slight redemption there is a bit of non-sequiturian absurdity.
  • Posted by mhubb 1 day, 9 hours ago
    Right now, I see the following uses for AI
    (assuming it has not turned evil):

    Medical review, to ensure nothing was missed

    Air Traffic Control Monitoring

    Defense Monitoring (missile defense, Task Force Defense)

    DOGE is using it well, at least that is what was reported

    Likely others, but it is Friday... beat....

    AI is a tool; when have people (NOT) abused tools?? /s
  • Posted by Sealofapproval 2 days, 1 hour ago
    BlackRock has an AI that it uses to make financial decisions. It is called Aladdin.
    Israel has an AI it uses to designate military targets. It is called Lavender.
    In both cases the AI synthesizes vast amounts of information, surveillance, and electronic-signal data to call the shots.
    It reportedly got to the point where the humans were just rubber-stamping the calls. The humans became mere tools of the AI.
    What happens when the AI's figure of merit (what it is designed to optimize) is money? How is that different from a genius sociopath? What if the figure of merit is death of the enemy? In many cases huge numbers of innocents were collateral damage from the calls.
  • Posted by CaptainKirk 1 day, 5 hours ago
    An EXTRA comment. I've been working in tech since I was a teenager in the mid-80s. Literally paid to get things set up, installed, fixed, programmed, whatnot.

    I've managed teams of developers over the years, and I have employees who have worked with me for DECADES.

    Rule 1: It's about people. (Not Tech)
    Rule 2: It's about Communication (as is all information)
    Rule 3: Be wrong out loud, you learn faster (turn off your ego).

    So, even if we didn't have to write code any more, and all government functions were automated (our taxes would decrease), we could then learn, travel, and do things. MEET PEOPLE. Acquire skills.

    I believe this fear of the SciFi future that we are approaching is a bit crazy. Yes, there will be chaos, and the unknown. Yes, we will come to the BRINK of extinction (again), but if we can do it right, we will finally be free of wars (if resources are plentiful, and life is about experiences, and most of those can be simulated... Why volunteer to die in some foreign war?). People just won't do it.
    Also, once you have "everything", I think "wealth" becomes much less important. And power should be STRIPPED from people, in general.

    I would rather the AI have the power to make sure the people are all playing the game fairly.

    Our government was NEVER meant to be the beast that it has become. It should be nothing more than a handful of referees fairly calling balls and strikes. (Talking to Big Pharma, Big Media, etc etc etc)
  • Posted by $ Olduglycarl 23 hours, 32 minutes ago
    Having some time off (by not posting, tending, and liking comments on TGIFfunnies, for the first time in 4 or 5 years?), I can supply my thoughts and worries:

    Ben Davidson of Suspicious 0bservers (space weather news) had a discussion with an AI, pointing it to a plethora of articles, long-known knowledge, and peer-reviewed science about space weather, climate, grand cycles, and cyclical catastrophes. The conversation went well and the AI was then well informed... but days later... it had forgotten everything it learned. Obviously not a sovereign entity, and definitely politically controlled.
    Why did he, we, they, it even bother to try? It failed.

    The other thing that has me concerned is the effect it'll have on human intellect, or specifically, the attainment thereof. Will we become complacent? Dependent? Or even dumber for the experience?

    PS: after reading some comments, another concern arises. AI is a computer, and computers will most likely be vulnerable to electrical charging of the atmosphere, i.e., solar CMEs, sunspot and filament flares, like the Carrington Event, or even the minor occurrences experienced these days with our magnetic field weakened 25-30% (more likely the latter).
    Instant failure and/or instant dysfunction: as in the rampage of a self-driving car???

    Thoughts?
