TikTok’s parent company, ByteDance, has been secretly using OpenAI’s technology to develop its own competing large language model (LLM). “This practice is generally considered a faux pas in the AI world,” writes The Verge’s Alex Heath. “It’s also in direct violation of OpenAI’s terms of service, which state that its model output can’t be used ‘to develop any artificial intelligence models that compete with our products and services.’”

    • FaceDeer@kbin.social · 2 points · 8 months ago

      Ever since that paper about “model decay”, this has been a common talking point, and it’s greatly misunderstood. Yes, if you just repeatedly cycle content through AI training across successive generations, you get models that lose “fidelity.” But that isn’t what any real-world training regimen using synthetic data actually does. The helper AI is typically used to process the input data. For example, to train an AI to respond in a chat-like format, you could take raw non-conversational text (like a book) and have the helper AI generate a conversation about that content for the new AI to learn from. Or, to take a real-world example, DALL·E 3 was trained by having a helper AI look at pictures and write detailed text descriptions to serve as the captions associated with those images during training.

      OpenAI put these restrictions in its TOS as a way of trying to “pull up the ladder behind it”, preventing rivals from building AIs as good as the ones it already has. Fortunately, it’s not going to work: there are already open LLMs that can be used as “helpers” without needing OpenAI at all (a rough sketch of the pattern follows below). ByteDance was likely just being lazy here.
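
      A minimal sketch of that helper pattern, assuming an open instruction-tuned model served through Hugging Face’s transformers pipeline (the model name, prompt wording, and sample passage are all illustrative placeholders, not anyone’s actual pipeline):

      ```python
      # Sketch only: use an open "helper" LLM to turn raw, non-conversational
      # text into chat-style synthetic training data for a new model.
      from transformers import pipeline

      # Any open instruction-tuned model would do; this name is an assumption.
      helper = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

      def make_synthetic_dialogue(passage: str) -> str:
          """Ask the helper model to rewrite a passage as a user/assistant chat."""
          prompt = (
              "Rewrite the following passage as a short conversation between "
              "a curious user and a helpful assistant:\n\n" + passage
          )
          result = helper(prompt, max_new_tokens=256, do_sample=True)
          return result[0]["generated_text"]

      # Each raw chunk (say, a passage from a book) becomes one synthetic example.
      raw_chunks = ["Photosynthesis converts light energy into chemical energy."]
      chat_dataset = [make_synthetic_dialogue(chunk) for chunk in raw_chunks]
      ```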

  • Buttons@programming.dev · 2 points · 8 months ago

    I hope this harms OpenAI in their lawsuits somehow. Their argument of “we can train on the output of others, but nobody can train on our output” has no moral foundation. Pick a lane.

  • Mahlzeit@feddit.de · 0 points · 8 months ago

    I wonder if that clause is legal. It could be argued that it legitimately protects the capital investment needed to make the model. I’m not sure if that’s true, though.

    • Nick@mander.xyz · 0 points · 8 months ago

      I can’t speak for every jurisdiction, but I’d be hard-pressed to see why it wouldn’t be legal in the US, especially in these circumstances. ByteDance is a massive, legally sophisticated corporation, so they can be expected to have fully read and understood the terms and conditions before accepting them. They probably won’t bring a legal challenge, because they know they don’t have a particularly strong legal argument or a sympathetic angle to use.

        • Nick@mander.xyz · 0 points · 8 months ago

          Sorry for the late reply, but this doesn’t really seem like it’d come close to invoking the US’s neutered antitrust enforcement. OpenAI doesn’t have a monopoly position to abuse, since other large firms offer LLMs that see reasonable amounts of usage. This clause amounts more to an effort to stop reverse engineering than to stifle anyone trying to build an LLM.

          • Mahlzeit@feddit.de · 1 point · 8 months ago

            I doubt it is clear-cut enough to trigger antitrust enforcement in any case. However, that does not mean the clause is enforceable.

            Such a ban is easy to circumvent, so eventually the only option MS has is to sue. Then what?