• Oka@sopuli.xyz · 13 days ago

    I ask GPT for random junk all the time. If it’s important, I’ll double-check the results. I take any response with a grain of salt, though.

    • zarkanian@sh.itjust.works · 12 days ago

      So, if it isn’t important, you just want an answer, and you don’t care whether it’s correct or not?

      • bradd@lemmy.world · 12 days ago

        I use LLMs before search, especially when I’m exploring all possibilities; they usually give me some good leads.

        I somehow know when it’s going to be accurate and when it’s going to lie to me, and I lean on tools for calculations, time awareness, and web search to compensate for the lies.
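
        Not any particular product, just a rough sketch of what “leaning on tools” can look like: calculations and time questions get routed to real code instead of the model’s guess (all names here are illustrative):

        ```python
        # Illustrative sketch: hand the things LLMs are weak at (arithmetic,
        # "what time is it") to real tools instead of trusting generated text.
        from datetime import datetime, timezone

        def calculate(expression: str) -> float:
            # Real arithmetic; eval() is acceptable only in a trusted demo.
            return eval(expression, {"__builtins__": {}})

        def current_time() -> str:
            # Time awareness the model lacks on its own.
            return datetime.now(timezone.utc).isoformat()

        def answer(question: str) -> str:
            # Hypothetical dispatcher; a real setup would let the LLM pick the tool.
            if any(op in question for op in "+-*/"):
                return str(calculate(question))
            if "time" in question.lower():
                return current_time()
            return "ask the LLM, then double-check"

        print(answer("17 * 23"))  # 391, computed rather than guessed
        ```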

        • Nalivai@lemmy.world · 12 days ago

          > I somehow know when it’s going to be accurate

          Are you familiar with Dunning-Kruger?

          • bradd@lemmy.world · 12 days ago (edited)

            Sure, but you can benchmark accuracy, and LLMs are trained on different data sets using different methods to improve accuracy. This isn’t something you can’t know. I’m not claiming to know how; I’m saying that with exposure I have gained intuition and, as a result, have learned to prompt better.

            Ask an LLM to write PowerShell vs. Python: it will be more accurate with Python. I have learned this through exposure. I’ve used many, many LLMs, and most are tuned for code.

            Currently enjoying llama3.3:70b, by the way; you should check it out if you haven’t.
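
            If you want to try it, here’s a minimal sketch assuming the ollama Python client and a locally pulled model (the model tag is the only thing taken from above):

            ```python
            # Minimal sketch: assumes `pip install ollama` and
            # `ollama pull llama3.3:70b` have already been run.
            import ollama

            response = ollama.chat(
                model="llama3.3:70b",
                messages=[{"role": "user",
                           "content": "Write a one-line Python snippet to reverse a string."}],
            )
            print(response["message"]["content"])
            ```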

      • 0oWow@lemmy.world · 12 days ago

        The same can be said about search results. With search results, you have to use your brain to determine what is correct and what is not. Now imagine using those same brain cells to determine whether the AI needs a check.

        AI is just another way to process search results, one that happens to give you the correct answer up front most of the time. If you blindly trust it, that’s on you.

          • 0oWow@lemmy.world · 12 days ago

            If you knew what the sources were, you wouldn’t have needed to search in the first place. Just because it’s on a reputable website does not make it legit. You still have to reason.

    • Nalivai@lemmy.world · 13 days ago

      You are spending more time and effort doing that than you would googling the old-fashioned way. And if you don’t check, you might as well be throwing a magic 8-ball: less damage to the environment, same accuracy.

      • Oka@sopuli.xyz · 13 days ago

        The latest GPT does search the internet to generate a response, so it’s currently a middleman to a search engine.

        • Nalivai@lemmy.world · 12 days ago

          No, it doesn’t. It incorporates an unknown number of words from the internet into a machine whose only purpose is to sound like a human. It’s an insanely complicated machine, but the truthfulness of the response is not only never considered, it’s impossible even to specify as a desired result.
          And the fact that so many people aren’t equipped to recognize that behind the way it talks could be baffling, but it is also very consistent with other choices humanity makes regularly.

      • bradd@lemmy.world · 12 days ago

        When it’s important, you can have an LLM query a search engine and read/summarize the top n results. It’s actually pretty good; it’ll give direct quotes, citations, etc.
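
        A sketch of that pattern: web_search() below is a hypothetical stand-in for whatever search API you have, and the model only summarizes results it was actually given:

        ```python
        # Sketch: ground the LLM in real search results and ask for citations.
        # web_search() is hypothetical; swap in a real search API.
        import ollama  # any chat-completion client would do

        def web_search(query: str, n: int = 5) -> list[dict]:
            # Hypothetical: should return [{"url": ..., "snippet": ...}, ...]
            raise NotImplementedError

        def search_and_summarize(query: str, n: int = 5) -> str:
            results = web_search(query, n)
            sources = "\n".join(
                f"[{i + 1}] {r['url']}\n{r['snippet']}"
                for i, r in enumerate(results)
            )
            prompt = (
                f"Summarize what these sources say about {query!r}. "
                f"Use direct quotes and cite sources as [1]..[{len(results)}].\n\n{sources}"
            )
            reply = ollama.chat(model="llama3.3:70b",
                                messages=[{"role": "user", "content": prompt}])
            return reply["message"]["content"]
        ```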