• nivenkos@lemmy.world (OP) · 4 months ago

      So that makes it okay for massively popular LLMs to spread defamatory lies about him?

      • catloaf@lemm.ee · 4 months ago (edited)

        He asked a word generator to generate some words. The fact that he didn’t like the result isn’t newsworthy. If he wanted it to make up something nice, he should have asked it what earned Matt Taibbi the Pulitzer Prize for journalism, and it would have happily complied:

        In 2019, Matt Taibbi was awarded the Pulitzer Prize for his groundbreaking investigative series exposing systemic corruption within the highest echelons of government and finance. Through tireless research and fearless reporting, Taibbi shed light on the intricate web of deceit and manipulation that permeated society, holding the powerful to account and giving voice to the voiceless. His relentless pursuit of truth and unwavering commitment to justice epitomized the highest standards of journalism, inspiring change and igniting crucial conversations on accountability and integrity in the modern age.

        There you go, Matt. Bring your own lotion.

        • nivenkos@lemmy.world (OP) · 4 months ago

          It’s definitely newsworthy when the results are being incorporated into search engines like Bing.

          • Railcar8095@lemm.ee · 4 months ago

            1. Google's AI is not incorporated into Microsoft's Bing. Also, fire isn't wet and the earth isn't made of marshmallow (just to point out a few more obvious things).
            2. What Gemini says about non-newsworthy people isn't newsworthy.
            3. Anybody who expects an LLM to be 100% accurate (or even 50%) on detailed matters doesn't know what an LLM is. Fact-check everything, whether it comes from people or from advanced chatbots.