I can already tell this is going to be an unpopular opinion judging by the comments, but this is my ideology on it.
It’s totally true. I’m indifferent on it: if the data was acquired from a public-facing source I don’t really care, but I’m definitely against using data dumps or data that wasn’t available to the public in the first place. The whole fuss over AI is ridiculous; it’s the same as someone going to a website and making a mirror, or a reporter writing an article that describes what’s in it. The last three web-search-based AIs even gave sources for where they got the info. I don’t get the argument.
If it’s image-based AI, well, it’s the equivalent of an artist going to an art museum and deciding they want to replicate the art style they saw in a painting. Maybe they shouldn’t be in a publishing field if they don’t want their work seen/used. That’s my ideology on it. It’s not like the AI is taking a one-to-one copy and selling the artwork as its own, which in my opinion is a much more harmful practice and already happens commonly in today’s art world; it’s analyzing existing artwork that was available through the same means everyone else had of going online, loading up images, and scraping the data. By this logic, artists should not be allowed to enter any art-based websites, museums, or galleries, since by looking at others’ art they are able to adjust their own, which would be stealing the author’s work. I’m not for or against it, but the ideology is insane to me.
@Pika @flop_leash_973 These are largely my thoughts on the whole thing; the process of actually training the AI is no different from a human learning.
The thing about that is that there’s likely enough precedent in copyright law to actually handle it. With most copyright law it’s all about intent and scale, and I think that’s likely where this will all go.
Here the intent is to replace and the scale is astronomical, whereas an individual’s intent is to add and the scale is minimal
The process of training the model is arguably similar to a human learning, and if the model just sat on a server doing nothing but knowing, there’d be no problem. Taking that knowledge and selling it to the public en masse is the issue.
This is precisely what copyrights and patents are here to safeguard. Is there already a book like A Song of Ice and Fire? Write something else, maybe better! There’s already a patent for an idea you have? Change and improve upon it and get your own patent!
You see, copyrights and patents are supposed to spur creativity, not hinder it. OpenAI should improve upon its system so that it actually thinks and is creative itself rather than regurgitating copyrighted materials, themes and ideas. Then they wouldn’t have this problem.
OpenAI wants literally all of human knowledge and creativity for free so that they can sell it back to you. And you’re okay-ish with it?
@Subverb that is, quite impressively, the opposite of what I said
Is a person infringing on copyright by producing content? No. It’s about intent and scale. Humans don’t just sit on this knowledge; they do something with it.
There is nothing illegal about WHAT it’s doing; there is everything illegal about HOW and WHY.
I very clearly stated that OpenAI’s intent and the scale at which they operate are blatant copyright infringement, and that this is backed up by decades of precedent.
Hello fellow human. I also learn by having information shoveled to me without regard to my agency.
@zbyte64 with everything you see you are scraping data from your environment whether you want to or not
How does a child learn what pain is? How does a teenager learn what heartbreak is? It’s certainly not because they made the decision to find that out themselves
I bring up agency and I get a response that exemplifies exactly what I mean.
Raising a child well requires someone who is able to engage in the child’s own theory of mind. If you just treat a child as an information sponge they will need more therapy than usual. A good parent takes interest in their child’s ability to exercise agency.
@zbyte64 you’re getting away from the original conversation
Then I guess my original point, that agency is an essential element of human learning, had nothing to do with your conversation about how AI learns like humans. Carry on.
@zbyte64 we’re saying the same thing
It’s a matter of scale, not process.
I’m literally saying (an aspect of) process matters, how are we saying the same thing?
Agreed. I don’t understand how training an LLM on publicly available data is an issue. As you say, it doesn’t copy the work; rather, the data is used as “inspiration,” to stay with the art analogy.
Maybe I’m ignorant. Would love to be proven wrong. Right now it seems to me that failing media publishers are trying to do a money grab and use copyright as an argument, even though their data/material isn’t getting illegally reproduced.