• Floppy@beehaw.org · 29 points · 1 year ago

    Thing is, this isn’t AI causing the problem. It’s humans using it in incredibly dumb, irresponsible ways. Once again, it’ll be us who do ourselves in. We really need to mature as a species before we can handle this stuff.

    • MagicShel@programming.dev · 7 points · 1 year ago

      I mean, I won’t disagree with you, but I think a more fundamental issue is that we are so easy to lie to. I’m not sure it matters whether the liar is an AI, a politician, a corporation, or a journalist. Five years ago it was a bunch of people in office buildings posting lies on social media. Now it will be AI.

      In a way, AI could make lie detection easier by parsing posting history for contradictions and fabrications in a way humans could never do on their own. Whether it actually gets used for that purpose is another question, though. I think AI will be very useful for processing and summarizing vast quantities of information in ways that go beyond statistical analysis.
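
      A minimal sketch of what that contradiction scan could look like, assuming the OpenAI Python client; the model name, prompt, and example posts are all illustrative, and the output is just a list of candidate contradictions for a human to review:

      ```python
      # Illustrative only: scan a small posting history for pairs of posts that
      # appear to contradict each other, using a chat-completion model as the judge.
      from itertools import combinations

      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      posts = [
          "I have never owned any cryptocurrency.",
          "Sold the last of my bitcoin this week, what a ride.",
          "The new library opens downtown next month.",
      ]

      def contradicts(a: str, b: str) -> str:
          """Ask the model whether two posts contradict each other."""
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # illustrative choice; any chat model would do
              messages=[
                  {"role": "system",
                   "content": "Answer YES or NO, then give one sentence of reasoning."},
                  {"role": "user",
                   "content": f"Do these two posts contradict each other?\nPost A: {a}\nPost B: {b}"},
              ],
          )
          return resp.choices[0].message.content

      # Pairwise comparison grows quadratically with the posting history;
      # a real tool would probably cluster posts by topic first.
      for a, b in combinations(posts, 2):
          print(f"A: {a}\nB: {b}\n-> {contradicts(a, b)}\n")
      ```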

      • riskable@kbin.social · 7 points · 1 year ago

        AITruthBot would just get downvoted into oblivion on half of social media. They’ll call it a “liberal propaganda bot.”

        • MagicShel@programming.dev · 0 points · 1 year ago

          There is a [slight] difference between people pushing propaganda and those taken in by it. Their actions are similar, but if the latter can be convinced to actually do their own research instead of being hand-fed someone else’s “research,” there is hope of reaching some of them.

          The real trick is ensuring they aren’t being assisted by a right-wing “truth bot,” which the enemies of truth are doubtless working tirelessly on.

          • Drusas@kbin.social · 1 point · 1 year ago

            It may be pessimistic, but I don’t think we’re going to get very far trying to convince people who don’t believe in fact-checking to do their own actual research.

      • Leeks@kbin.social · 1 point · 1 year ago

        AI is only as good as the data it is trained on. While there are absolute truths, like most scientific constants, there are also relative truths, like “the earth is round” (technically it’s an irregularly shaped ellipsoid, not “round”). But the most dangerous “truths” are Mandela-effect misrememberings, which would likely end up in the AI’s training data due to human error.

        So while an AI bot would be powerful, depending on how tricky it is to create good training data, it could end up being very wrong.

        • MagicShel@programming.dev · 1 point · 1 year ago

          I didn’t mean to imply the AI would separate truth from lies; I meant it could analyze a large body of text to extract the messaging for the user to fact-check. Good propaganda has a way of leading the audience along a particular thought path so that the desired conclusion is reached organically by the user. By identifying “conclusions” that are reached through leading/misleading statements, AI could help people see what is going on and think more critically about the subject. It can’t replace the critical-thinking step, but it can provide perspective.
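
          As a rough sketch of that “extract the conclusions, let the human check them” flow, assuming the same OpenAI-style chat API (the model name, prompt, and input filename are illustrative):

          ```python
          # Illustrative only: ask a chat model to list the conclusions a text steers
          # the reader toward, plus the statements doing the steering, then hand that
          # list to the human for fact-checking. The model maps the argument;
          # judging it stays with the reader.
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          PROMPT = (
              "List each conclusion this text leads the reader toward, and under each, "
              "the leading or misleading statements used to get there. "
              "Do not judge whether the conclusions are true."
          )

          def extract_messaging(text: str) -> str:
              resp = client.chat.completions.create(
                  model="gpt-4o-mini",  # illustrative choice
                  messages=[
                      {"role": "system", "content": PROMPT},
                      {"role": "user", "content": text},
                  ],
              )
              return resp.choices[0].message.content

          with open("suspicious_article.txt") as f:  # hypothetical input file
              print(extract_messaging(f.read()))
          ```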

    • shackled@lemm.ee · 5 points · 1 year ago

      Completely agree. For every tool we have created to accomplish great things, we have without fail also used it for dumb things at best and completely evil things at worst.

    • CarbonIceDragon@pawb.social · 1 point · 1 year ago

      What exactly does it mean to “mature as a species,” though? Human psychology (as in, the way human minds fundamentally work) doesn’t change on human timescales, not currently anyway. It’s not like we can just wait a few years or decades and the various tricks people have found to more effectively convince people of falsehoods will stop working. Barring evolution (which takes so long as to be essentially irrelevant here) and some sort of genetic or other modification of humans (technology we don’t have ready yet, and one that opens up even bigger cans of worms than the kind of AI we currently have), nothing fundamentally changes about us as a species except culture and material circumstance.

  • the w@beehaw.org · 20 points · 1 year ago

    You know, of all the ways AI could threaten us, I never imagined it would be chat programs madly spewing falsehoods.

    Seems obvious now, but like everyone who grew up watching movies like Terminator, I figured the threat would be killer drones, meddling with financial markets, or even replacing too many jobs.

    • doleo@kbin.social · 5 points · 1 year ago

      I don’t know, I feel like those things you mentioned are still going to happen, or are already happening.

    • flatbield@beehaw.org · 3 points · 1 year ago

      We are in a low-security world. Social networks depend on trust; with no trust, the social network fails. So the deal is that our social networks and news sources are going to have to become more secure. You will basically have to distrust anything that did not come from a trusted source. It’s kind of like that now, but it’s going to have to be more that way. I think all security is going to have to work like that.

      The big question I have is what this means for representative government. AI potentially concentrates even more power at the top. Maybe it changes the viability of representative government.

  • mayooooo@beehaw.org · 9 points · 1 year ago

    Best part is we don’t even have AI; we have keyboard prediction on steroids. Maybe that’s why it’s everywhere instead of telling everybody to leave it alone.
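
    The “keyboard prediction on steroids” framing is roughly this idea scaled up enormously: predict the next token from what came before. A toy illustration, with a word-pair frequency table standing in for the neural network:

    ```python
    # Toy "keyboard prediction": pick the next word based on how often it followed
    # the previous word in the training text. LLMs do conceptually the same
    # next-token prediction, just with a huge neural network trained on vast
    # corpora instead of a tiny frequency table.
    import random
    from collections import Counter, defaultdict

    training_text = (
        "the cat sat on the mat the cat ate the fish "
        "the dog sat on the rug the dog ate the bone"
    )

    # Count how often each word follows each other word.
    followers = defaultdict(Counter)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1

    def predict_next(word):
        """Sample a next word in proportion to observed frequencies, or None."""
        counts = followers.get(word)
        if not counts:
            return None  # this word never had a follower in the training text
        choices, weights = zip(*counts.items())
        return random.choices(choices, weights=weights)[0]

    # Generate a short continuation, one predicted word at a time.
    word = "the"
    generated = [word]
    for _ in range(10):
        word = predict_next(word)
        if word is None:
            break
        generated.append(word)
    print(" ".join(generated))
    ```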

  • kitonthenet@kbin.social · 8 points · 1 year ago (edited)

    This AI hype shit has to stop. Humans are “mentally unready” to handle AI post-truth whatever in the same way that we’re not ready to handle finding the peanut in the turd. What you’re saying is that chatbots will make the internet useless/worthless, and we already know that.

    The stuff about replacing doctors and teachers is insane given the state of AI tech today. You’re gonna go to your doctor and they’ll tell you you’ve got frog DNA? No.

    • xNIBx@kbin.social · 3 points · 1 year ago (edited)

      I think you underestimate how many doctors, programmers, etc. already use AI, and how good the AI is most of the time. If you are knowledgeable enough, you can catch its bullshit, and it can improve its output with your guidance. The AI as it is right now is useful, and it is now the worst it will ever be (unless they shackle it even more).

      People already believe random bullshit lies written on social media. Imagine if those lies are accompanied by images, sound, and video.

  • kat@lemmy.ca · 6 points · 1 year ago

    Humans weren’t ready for easily accessible sucrose, let alone an easily manipulated reality. My parents can’t tell the difference between Facebook rumors and reality. My siblings can’t tell the difference between YouTube conspiracies and reality. And I’m in the same boat, letting myself get personally affected by text on some website. We’re in over our heads.

  • OttoVonNoob@lemmy.ca · 3 points · 1 year ago

    Humans are still not ready to use the internet… We needed a generation of house hippo ads targeted at older folks on Facebook. :X

  • reid@beehaw.org · 2 points · 1 year ago

    Personally, I’m skeptical the status quo changes much. The “pre-AI” world was not exactly filled to the brim with shining truth. Propaganda, dishonest arguing, and plain ignorance have existed for a long time. True, generative AI appears to be great at fabricating stuff, so verifying sources is arguably more important than ever.

    Thinking about it, I guess deepfakes are more concerning to me than the textual stuff. I don’t follow the state of the art there, but the few I do run across (e.g. fake phone calls “from” famous people) do seem… concerning. They’re not quite good enough to be convincing yet, but if the replication were perfect, recordings would become suspect. (On the other hand, audio recordings haven’t existed for all of recorded history, so maybe it’s not quite the end of the world, I dunno.)

    Anyway, anyone want to take bets on how poorly this comment ages? 😛