• Xanza@lemm.ee · 13 hours ago

    Depends on what you ask.

    Go ask it about NATO or Tiananmen Square and see what happens. The model is heavily redacted, filtered, suppressed, biased…

    So if you ask it a question, it will always be pro-China/anti-America. It also changes responses on the fly to comply with Chinese law, which includes denying the Tiananmen Square massacre and other historic events, and it even goes as far as to imply or outright say they never happened at all.

    So can the content be trusted? Not really.

    • erin (she/her)@lemmy.blahaj.zone · 3 hours ago

      This is incorrect. It only applies if the model is not hosted locally. I host it myself and it has none of these restrictions. If you’re using it from their app or website, it’s hosted in China and must follow Chinese law.

      • Xanza@lemm.ee · 3 hours ago

        “If you’re using it from their app or website it’s hosted in China and must follow Chinese law.”

        This is literally what I’ve just said…

        It also changes responses on the fly to fit with Chinese law. You called what I said wrong, and then immediately reiterated exactly what I’d said…

        Why? What do you get out of it?

      • Xanza@lemm.ee · 5 hours ago

        DeepSeek produces some of the most syntactically correct and accurate English-to-Chinese translations I’ve ever seen, so it’s super useful for that.

  • Lazycog@sopuli.xyz · 20 hours ago

    Quick answer: don’t give any sensitive or private info to an LLM that isn’t open-source and running locally.

      • sanosuke001@lemmynsfw.com · 19 hours ago

        Tbh, if you don’t know what that means, you can’t trust it.

        Though, it means this: unless it’s running locally on your own hardware (not in the cloud) and you, or someone you trust, have verified the source code directly, assume it is nefarious and do not give it any personal or sensitive information you wouldn’t want anyone on the Internet to know.

          • sanosuke001@lemmynsfw.com · 17 hours ago

            Name, address, phone number, bank info, nude photos of yourself, etc. If releasing the info could harm you or negatively impact your life in some way, assume it will be sent to China, or anywhere else on the web, if you run the model without following the previous guidelines.

          • Xamrica@lemmy.dbzer0.com · 19 hours ago

            Any data related to you as a person (name, contacts, date of birth, etc.). Search for “PII” or “personally identifiable information” if you want to read more about that.
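            As a rough illustration of the idea (a hypothetical, deliberately incomplete sketch — two regexes are nowhere near a real PII filter), you could mask the most obvious identifiers before any text leaves your machine:

            ```python
            import re

            # Hypothetical scrubber: masks e-mail addresses and US-style phone
            # numbers before text is sent to a remote LLM. Real PII detection
            # needs far more than two patterns (names, addresses, IDs, ...).
            PATTERNS = {
                "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
                "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
            }

            def scrub(text: str) -> str:
                """Replace each matched pattern with a [LABEL] placeholder."""
                for label, pattern in PATTERNS.items():
                    text = pattern.sub(f"[{label}]", text)
                return text

            print(scrub("Mail me at jane.doe@example.com or call 555-123-4567."))
            # Mail me at [EMAIL] or call [PHONE].
            ```

            Even with a scrub step, the safest default is still the one above: just don’t paste PII into a cloud chat at all.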

      • frightful_hobgoblin@lemmy.ml · 19 hours ago

        ‘Locally running’ means it is on your computer and will work without an internet connection.

        Anything you access over the internet is not ‘locally running’.

        The comment means don’t send information over the internet that you don’t want to share.
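        One rough way to sanity-check this in code (a sketch, not a guarantee — it won’t catch LAN hosts, proxies, or redirects) is to look at the hostname your client is configured to talk to. A loopback address means requests stay on your machine:

        ```python
        from urllib.parse import urlparse

        # Addresses that resolve to the local machine.
        LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1", "0.0.0.0"}

        def is_local_endpoint(url: str) -> bool:
            """Rough check: does this API URL point at the local machine?"""
            host = urlparse(url).hostname or ""
            return host in LOCAL_HOSTS

        # A typical locally hosted endpoint vs. a cloud API:
        print(is_local_endpoint("http://localhost:11434/api/generate"))  # True
        print(is_local_endpoint("https://api.deepseek.com/v1/chat"))     # False
        ```

        The port and path here are just examples; the point is the hostname, which is the part that decides whether your prompt ever leaves your computer.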

        • Lazycog@sopuli.xyz · 18 hours ago

          Thanks @frightful_hobgoblin@lemmy.ml for filling OP in! I want to add a few things in case OP is unaware of more than just what you explained:

          LLM = large language model, one of the types of AI. Examples: ChatGPT, DeepSeek, Meta’s LLaMA

          Open-source: the program code of the AI is available to look at, in its entirety

          If you are not sure you understand these terms and what frightful_hobgoblin said, then just assume whatever AI you are using is going to share your chats with the company behind it.

  • fxomt@lemmy.dbzer0.com · 18 hours ago

    Anything that is not local AI cannot be trusted.

    Have you ever wondered where the fuck these corporations get the funding to let you use such a service for free? By harvesting your data and selling it.

    From your other comment I saw you aren’t using a PC. I haven’t tested this myself, but you may be interested in it (local LLM, Android only): https://github.com/Vali-98/ChatterUI

    Best of luck to you.

  • frightful_hobgoblin@lemmy.ml · 20 hours ago

    You can’t trust anything.

    You always have to use trustless software.

    ‘Trusting’ is privacy-by-policy.

    Trustlessness is privacy-by-design.

    Deepseek’s models can be run trustlessly locally, or they can be hosted on a server.


    Wait, were you talking about privacy or fact-checking? LLMs don’t stick to the truth.