• fidodo@lemmy.world
    8 months ago

    These aren’t simulations that estimate results; they’re language models extrapolating from a vast amount of human knowledge embedded as artifacts in text. They won’t necessarily pick the best long-term solution.

      • fidodo@lemmy.world
        8 months ago

        I want to be careful about how the word “reasoning” is used, because when it comes to AI there’s a lot of nuance. LLMs can recall text that contains reasoning as an artifact of the human knowledge stored in that text. That’s a subtle distinction, but it matters for how we deploy LLMs.