• wise_pancake@lemmy.ca
    link
    fedilink
    English
    arrow-up
    42
    arrow-down
    1
    ·
    2 months ago

    I can’t believe this got released and this is still happening.

    This is the revolution in search of RAG (retrieval augmented generation), where the top N results of a search get fed into an LLM and reprioritized.

    The LLM doesn’t understand it want to understand the current, it loads the content info it’s content window and then queries the context window.

    It’s sensitive to initial prompts, base model biases, and whatever is in the article.

    I’m surprised we don’t see more prompt injection attacks on this feature.