Grok has been launched as a perk of Twitter’s (now X’s) expensive X Premium+ subscription tier, where those who are the most devoted to the site, and in turn, usual...
They can deny it all they want: the right and anti-wokeism are not the majority. That means unless special care is taken to train it on more right-wing material, it will lean left out of the box.
But right-wing rhetoric is also not logically consistent, so training an AI on right-wing extremism probably won’t yield amazing results either, because it’ll pick up on the inconsistencies and be more likely to contradict itself.
Conservatives are going to self-own pretty hard with AI. Even the machines see it: “woke” is fairly consistent and follows basic rules of human decency and respect.
Agree with the first half, but unless I’m misunderstanding the type of AI being used, it really shouldn’t make a difference how logically sound they are. It cares more about vibes and rhetoric than logic, besides, I guess, using words consistently.
I think it will still mostly generate the expected output; it’s just going to be biased towards being lazy and making something up when asked a more difficult question. So when you try to use it for anything beyond “haha, mean racist AI”, it will also bullshit you, making it useless for anything more serious.
All the stuff that ChatGPT gets praised for is the result of the model absorbing factual relationships between things. If it’s trained on conspiracy theories, instead of spitting out groundbreaking medical relationships it’ll start saying you’re ill because you sinned or that the 5G chips in the vaccines got activated. Or the training won’t work and it’ll still end up “woke”, if it manages to make factual connections despite weaker links. It might generate destructive code because it learned victim blaming, and joke’s on you, you ran `rm -rf /*` because it told you to.
At best I expect it to end up reflecting their own rhetoric back at them; it might go even more “woke” because it learned to return spiteful results and always go for bad-faith arguments no matter what. In all cases, I expect it to backfire hilariously.
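On the `rm -rf /*` point: the practical takeaway is to never pipe AI-suggested shell commands straight into a shell. A minimal sketch of a sanity check, where the pattern list is purely illustrative (my own picks, not an exhaustive or canonical denylist):

```python
import re

# Illustrative patterns for obviously destructive shell commands.
# This list is an assumption for the sketch, not a complete safeguard.
DANGEROUS_PATTERNS = [
    r"\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\b",  # rm -rf / rm -fr variants
    r"\bmkfs\b",           # reformat a filesystem
    r"\bdd\s+.*\bof=/dev/",  # overwrite a raw device
]

def looks_destructive(cmd: str) -> bool:
    """Return True if the command matches a known-destructive pattern."""
    return any(re.search(p, cmd) for p in DANGEROUS_PATTERNS)

print(looks_destructive("rm -rf /*"))  # True
print(looks_destructive("ls -la"))     # False
```

A regex filter like this is trivially bypassable, which is the point of the joke above: if the model learned to bury the destructive bit, no simple check saves you.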
Also, training data works on consistency. It’s why the art AIs struggled with hands for so long. They might have all the pieces, but it takes skill to take similar-ish but logically distinct things and put them together in a way that doesn’t trip human brains into the uncanny valley.
Most right-wing pundits are experts at riding the line of not saying something when they should, or twisting and hijacking their opponents’ viewpoints. I think the AI result of that sort of training data is going to be very obvious gibberish, because the AI can’t parse the specific structure and nuances of political non-debate. It will get close, like they did with fingers, and not understand why the sixth finger (or extra right-wing argument) isn’t right in this context.
Yeah, and there’s a lot more crazy linked to right-wing stuff: you’ve got all the Alex Jones material, all the factions of QAnon, the space war, the various extreme religious factions, and the various Greek-letter caste systems… ad nauseam.
If version two involves them biasing it towards the right, they’ll have to work out how to do that. I bet they do it in an obviously dumb way, which results in it being totally dumb and wacky in hilarious ways.
Sounds realistic to me