Which of the following sounds more reasonable?

  • I shouldn’t have to pay for the content that I use to tune my LLM model and algorithm.

  • We shouldn’t have to pay for the content we use to train and teach an AI.

By calling it AI, the corporations are able to advocate for a position that's blatantly pro-corporate and anti-writer/artist, and to trick people into supporting it under the guise of technological development.

  • assassin_aragorn@lemmy.worldOP · 1 year ago

    My argument is that an LLM here is reading the content for different reasons than a student would. The LLM uses it to generate text and answer user queries, for cash. The student uses it to learn their field of study, and then apply it to make money. The difference is that the student internalizes the concepts, while the LLM internalizes the text. If you used a different book that covered the same content, the LLM would generate different output, but the student would learn the same thing.
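
    To make that concrete, here is a toy sketch (plain Python, with made-up example texts; nothing like how a real LLM is actually trained) of why a purely text-level learner is tied to the exact wording it ingests: a tiny Markov-chain generator built from two differently worded passages about the same idea produces different output, while a student reading either passage would come away with the same concept.

    ```python
    import random
    from collections import defaultdict

    def train_markov(text):
        # Word-level bigram table: each word maps to the words seen right after it.
        words = text.split()
        table = defaultdict(list)
        for a, b in zip(words, words[1:]):
            table[a].append(b)
        return table

    def generate(table, start, length=12, seed=0):
        # Generate text by repeatedly sampling a successor of the current word.
        random.seed(seed)
        out = [start]
        for _ in range(length):
            successors = table.get(out[-1])
            if not successors:
                break
            out.append(random.choice(successors))
        return " ".join(out)

    # Two hypothetical "textbooks" explaining the same idea in different words.
    book_a = "heat flows from hot bodies to cold bodies until both reach the same temperature"
    book_b = "thermal energy moves from warmer objects to cooler objects until their temperatures equalize"

    print(generate(train_markov(book_a), "heat"))     # echoes book A's wording
    print(generate(train_markov(book_b), "thermal"))  # echoes book B's wording
    ```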

    I know it’s splitting hairs, but I think it’s an important point to consider.

    My take is that an LLM algorithm can’t freely consume any copyrighted work, even if it’s been reproduced online with the consent of the author. The company would need the permission of the author for the express purpose of training the AI. If there’s a copyright, it should apply.

    You have me thinking though about the student comparison. College students pay to attend lectures on material that can be found online or in their textbooks. Wouldn’t paying for any copyright material be analogous to this?

    • Zeth0s@lemmy.world · 1 year ago

      Students and LLMs do the same thing with data, just in different ways. An LLM can take in far more data; a student can understand more concepts, logic, and context.

      And students study to make money.

      Both LLMs and students map the data into some internal representation, but those representations are quite different, because a biological mind works differently from an AI.
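
      As a loose illustration of "internal representation" on the machine side (a toy sketch with invented sentences, nowhere near what a real LLM does internally): if the representation is built purely from the surface text, two sentences expressing the same idea in different words can look completely unrelated, which is roughly where it diverges from how a student represents what they've read.

      ```python
      from collections import Counter
      from math import sqrt

      def bow_vector(text):
          # Toy "internal representation": a bag-of-words count vector.
          return Counter(text.lower().split())

      def cosine(u, v):
          # Cosine similarity between two sparse count vectors.
          dot = sum(count * v[word] for word, count in u.items())
          norm_u = sqrt(sum(c * c for c in u.values()))
          norm_v = sqrt(sum(c * c for c in v.values()))
          return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

      a = "the cat chased the mouse around the garden"
      b = "a feline pursued a rodent through the yard"

      # Same idea, no shared words: this text-only representation scores the
      # pair as unrelated (0.0), while a reader would call them equivalent.
      print(cosine(bow_vector(a), bow_vector(b)))
      ```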

      Regarding your last paragraph, that is exactly the point. What should OpenAI and Microsoft pay, given that they are making a lot of money off other people's work? Right now it is unclear, both because OpenAI hasn't released what data they used and because copyright law doesn't yet address generative AI. We need to wait for interpretations of existing laws and for new ones to be written. But it will surely change in the near future.