The recent chatbot advances have pretty much changed my life. I used to get anxiety from receiving emails and IMs, sometimes even from friends. I lost friendships over not replying. My main issue is that I sometimes get completely stuck in a loop of how to formulate things in the best way, to the point of just abandoning the contact. I went to therapy for that and it helped. But the LLM advancements of recent years have been a game changer.
Now I plop everything into ChatGPT, cleaning out personal information as much as possible, and let the machine write. Often I’ll make some adjustments but just having a starting point has changed my life.
So, my answer: I use it all the fucking time.
The one time I drafted an email using AI, I was told off for being "incredibly inappropriate", so heck no. I have no idea what was inappropriate either; it looked fine to me. Spooky that I can't notice the issues, so I don't touch it.
If you’re using it right then there’d be no way for the recipient to even tell whether you’d used it, though. Did you forget to edit a line that began with “As a large language model”?
Once you know someone is using it, it’s very easy to know when you’re reading AI generated text. It lacks tone and any sense of identity.
While I don’t mind it in theory, I am left with the feeling of “well if you can’t be bothered with this conversation…”
With a little care in prompting you can get an AI to generate text with a very different tone than whatever its “default” is.
Yeah, you can, which is why it’s lazy af when someone just serves you some default wikipedia-voice answer.
My point is largely this: I can talk to the AI without a human being involved. They become an unnecessary middleman who adds nothing of use other than copying and pasting responses. The downside of this is I no longer value their opinion or expertise, and that's the feedback they'll get from me at performance review time.
I’ve told one individual already that they must critically assess the solutions ChatGPT gives them, because if they don’t, I’ll call them out on it.
This. I hate it so much when people take it on themselves to insert ChatGPT responses into social media threads. If I wanted to know what the LLM has to say, I would have just asked the LLM.
It’s the modern equivalent of pasting links to Wikipedia pages without reading them, except that because you were direct with your question, you have higher confidence that what you’re parroting makes some sort of sense.
Once I was trying to make small talk IRL with someone and asked her how many official languages her country of origin had, and she stone-cold looked it up on Wikipedia and started reading aloud to me from it at great length.
The kicker was I happened to already know about it, I just thought it might be something she would like to talk about from her perspective as an emigrant.
On the flip side, I’m kind of annoyed by posts cluttering up places like asklemmy that could be trivially answered by asking an AI (or even just a simple search engine). I can understand the opinion-type questions, or the ones with a social aspect that make you reasonably want an actual human to give you advice (like this one), but nowadays the purely factual stuff is mostly a solved problem. So when those get asked anyway, I’m often sorely tempted to copy and paste an AI answer simply as a https://letmegooglethat.com/ style passive-aggressive rebuke.
Fortunately my inner asshole is well chained. I don’t release him for such trivialities. :)
I mean, with the vast majority of inter-departmental emails, no, one can’t be bothered, because it’s pointless busywork communication.
Not in my role, but I can see how that happens elsewhere.
😆
“I’m not a cat!”
Please share this email lol, I wanna see it
ChatGPT, write an email to my boss explaining how much I want to plow his wife