• Omega_Haxors@lemmy.ml
    · 4 months ago

    It’s like you didn’t even read what I posted. Why do I even bother? Sophists literally don’t care about facts.

    • UraniumBlazer@lemm.ee
      · 4 months ago

      Yes, I read what you posted and answered accordingly. I just didn’t spend enough time dumbing it down, so let me dumb it down further.

      Your main objection was the simplicity of the goal of LLMs: predicting the next word. Somehow, this simplistic goal supposedly makes the system stupid.

      In my reply, I first said that self-awareness emerges naturally as a system becomes more and more intelligent, and I explained why. I then went on to explain how a simplistic terminal goal has nothing to do with actual intelligence. Hence, no matter how stupid or simple a terminal goal is, if an intelligent system is challenged enough and given enough resources, it will develop sentience at some point.

      • Omega_Haxors@lemmy.ml
        · 4 months ago

        Exactly. I literally said none of that shit; you’re just projecting your own shitty views onto me and asking me to defend them.