AI chatbots tend to choose violence and nuclear strikes in wargames

  • Arkaelus@lemmy.world · 8 months ago

    This says more about us than it does about the chatbots, considering the data on which they’re trained…

      • BearOfaTime@lemm.ee · 8 months ago

        Not that I want one, but the propaganda around nuclear war has been pretty extensive.

        Michael Crichton wrote about it in the late '90s, if I remember right. He made some very interesting points about science, the politicization of science, and "Scientism".

        "Nuclear winter," for example, is based on some very bad and very incorrect math.

    • bassomitron@lemmy.world · 8 months ago

      I'd say it does to an extent, depending on the source material. If they were trained on actual military strategies and tactics, with proper context, I'd wager the responses would likely be different.

      • remotelove@lemmy.ca · 8 months ago

        Totally. A properly trained AI would probably just flood a country with misinformation to trigger a civil war. After it installs a puppet government, it can leverage that country's resources against other enemies.