to the extent that it doesn’t violate the law or other people’s rights
Am I the only one who finds this so weird when we talk about LLMs? If someone makes a bot that resembles some specific person, that person’s rights aren’t really violated, and since it’s all fictional content, it’s very hard to actually break any laws with it. At that point we would have to ban people’s weird fan fiction too, no?
I’m not arguing about what they do or don’t want on their platform, but the legal & allegedly moral questions / arguments always weird me out a bit, because no one is actually getting hurt in any way by weirdos having weird chats with computers.
The bigger issue is enforcement. Either you monitor an absurd amount of content, which is worse for privacy, or you outright censor the models, which makes them overly restrictive even in valid cases / scenarios being played out (other platforms went through this, with a consequent loss of users).
I could see some people making the argument that it could be considered defamatory, especially in cases where it is being peddled as real. Politicians might even try to link it to revenge porn or other non-consensual pornography laws.
It would sure get messy in a hurry though. Imagine someone trying to make lewd images of Tomb Raider’s Lara Croft, for example, and accidentally generating images resembling Alicia Vikander or Angelina Jolie from the Tomb Raider movies.
Hard sell overall, imo. But in any genuinely malicious case we should punish the people behind it, not the software used to make it.
I feel like it’s going to be a challenge to find a definition of “malicious” that most people agree on.
Someone might think it’s fine to make nudes of Captain Marvel, for example, because she’s a character; they don’t really care about the Brie Larson aspect.
I suppose there is the option to eliminate any kind of name-based suggestions.
I personally don’t see much of an issue with people making “nudes” of others, since they’re fake anyway. I see an issue when they’re used for things like bullying, blackmail, etc. That is technically already illegal, just not well enforced for anything digital, and hasn’t been for over two decades now. Hence why I find the attention the LLM stuff gets exceptionally hypocritical and overblown: none of them really cared when someone got cyberbullied or blackmailed with classically edited images, let alone screamed for the outlawing of editing software or social media.
Removed by mod
Calm down Hitler-Tankie.
Why do you support harassing people for having nude pictures of themselves online? That behaviour is clearly criminal.
What a way to move the goalposts. And to a completely made-up, fictional accusation at that. Take your pills, please.
That’s tough though. Do you punish “the artist” or the person who commissioned them? Or both?
What? We’re talking about LLM-created content, so there’s no artist or person commissioning anything. But if you’re asking about the hypothetical case of someone commissioning blackmail material from an artist (without telling them the purpose), then obviously the person who ends up doing the blackmailing is the one to punish. I don’t see how the artist would have made themselves liable unless it was very obvious that the material was intended for illegal purposes.
By artist I mean the LLM. Do you punish the LLM (or company running it) for generating it, or the person who asked it to?
So you’re asking me a question that is literally already answered within the comment you were replying to.
Potentially tin-foil-hat thoughts, but it bugs me too. I also believe we will be the last generation with this problem, though. Progress is only a few gravestones away.
First, at its root this isn’t a problem we can solve. You can run AI locally; it’s already widely distributed and can be run completely offline. Big Brother aside, it’s part of reality now. Since we can’t solve it, we have to adapt, and in this scenario we likely change what is socially considered taboo. Generated content is, by definition, not a real person, and it’s actively being pursued as an outlet. I’m sure there needs to be more research into the long-term effects of AI, but there could even be positives. One of the learnings from the Reddit pedophile AMA was that unhealthy people can feel ashamed and desire a safe outlet: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8888370/ A fake person appears to be a potential solution.
Second, and likely the slowest to change, is the shame around nudity. Some cultures accept it; I’m personally happy that we make people cover themselves, if only for sanitary reasons. But the religious agenda has pushed shame, and when anyone can generate nudes, why care?