The recent chatbot advances have pretty much changed my life. I used to get anxiety from receiving mails and IMs, sometimes even from friends. I lost friendships over not replying. My main issue is that I sometimes get completely stuck in a loop of how to formulate things in the best way, to the point of just abandoning the contact. I went to therapy for that and it helped. But the LLM advancements of recent years have been a game changer.

Now I plop everything into ChatGPT, cleaning out personal information as much as possible, and let the machine write. Often I’ll make some adjustments but just having a starting point has changed my life.

So, my answer: I use it all the fucking time.
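
For what it’s worth, here’s roughly what that workflow looks like if you script it rather than pasting into the chat window. This is only an illustrative sketch against the OpenAI Python SDK; the model name, the prompt wording, and the redact() helper are placeholders standing in for steps I actually do by hand, not a definitive recipe.

```python
# Minimal sketch of the "sanitize -> draft -> edit by hand" workflow described above.
# Model name, prompts, and redact() are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def redact(text: str) -> str:
    """Stand-in for the manual sanitizing step: strip names, addresses,
    and anything else identifying before the text leaves your machine."""
    return text  # in practice this is done by hand


def draft_reply(received_message: str, what_i_want_to_say: str) -> str:
    # 1. Sanitize the incoming message as much as possible.
    sanitized = redact(received_message)

    # 2. Tell the model the gist of what I want to say and ask for a draft.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Draft a short, friendly reply in plain, informal language.",
            },
            {
                "role": "user",
                "content": f"Message I received:\n{sanitized}\n\n"
                           f"What I want to get across:\n{what_i_want_to_say}",
            },
        ],
    )

    # 3. The output is only a starting point; the final reply still gets edited by hand.
    return response.choices[0].message.content
```

The editing afterwards is still the important part; the script (or the chat window) only gets me past the blank page.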

  • sciawp@lemm.ee

    I prefer to call them LLMs (Large Language Models). It’s how they are referred to in the industry, and I think it’s far more accurate than “AI”.

    • CalamityBalls@kbin.social

      Thank you, it’s frustrating seeing (almost) everyone call them AI. If/when actual AI comes into existence I think a lot of people are going to miss the implications as they’ve become used to every LLM and its grandmother being called AI.

    • pufferfischerpulver@feddit.deOP

      I debated whether I should write LLMs or AI. Generally I dislike AI as well, but chose it due to its popularity. Definitely share your sentiment though!

  • Lifecoach5000@lemmy.world

    I’ve never done this and I guess I need to go yell at a cloud somewhere if this is about to become a thing.

    • pufferfischerpulver@feddit.deOP

      Understandable! I wouldn’t want to just talk to a chat bot either, whilst thinking I’m talking to a friend.

      The way I use it is mostly to get a starting point from which I’ll edit further. Sometimes the generated response is bang on though and I admit I have just copy pasted.

  • RonnieB@lemmy.world

    I’d be pretty mad if I knew someone was sending personal texts/emails to openai

    • Lemminary@lemmy.world

      I wouldn’t be if they were stripping it of personal information. I can’t really imagine what they could do to harm anyone by having some insight into mundane, trivial everyday problems.

      • papalonian@lemmy.world

        Um, don’t you know this is Lemmy? You’re supposed to be insanely protective over all aspects of your privacy. If I found out that someone copy/pasted a tidbit of a conversation I was in into an LLM after stripping all personal info from it, I’d change my name, forge my birth certificate to alter my DOB, move states (twice), get a new phone number and shoot that person in the face. It’s the only way to keep the government or corpos from spying on my very secret, very important conversations.

    • pufferfischerpulver@feddit.deOP

      No, actually! It’s not a problem for me to write text per se; in fact, a significant part of my job is writing guidelines, documentation, etc.

      What’s difficult about replying to people is putting my opinions in relation to the other person’s expectations.

      • MrShankles@reddthat.com

        I tell people I have “phone anxiety”… but it sucks. Family, friends, new acquaintances… it doesn’t matter, trying to reply or answer a phone can feel like torture sometimes. Have absolutely lost a few friends over this. You’re not alone

  • Transporter Room 3@startrek.website

    … Literally never.

    And it’s never once crossed my mind.

    And if one of my friends told me they did this to talk to me, I think I’d just stop talking to them, because I want to talk to them. If I wanted to be friends with a computer, I’d get a Tamagotchi.

    • Lemminary@lemmy.world

      If one of my friends did this to overcome their anxiety, I’d empathize and congratulate them on figuring out a way to make it work. If I were in OP’s shoes and one of my friends did to me what you just said, I’d say bullet dodged and carry on.

      • Transporter Room 3@startrek.website

        Cool. I’d ask them not to do it with me, and if they did anyway, the above would happen.

        If it was someone who was not a current friend who did this, then we’re incompatible as friends. I wish you well in life, but I won’t be part of it; that’s clearly better for both of us.

  • Moghul@lemmy.world

    No, and I’d say it’s probably not the solution to your problem that you think it is.

    Reading the rest of these comments, I can’t help but agree. If I found out a friend, family member, or coworker was answering me with chatgpt I’d be pretty pissed. Not only would they be feeding my private conversation to a third party, but they can’t even be bothered to formulate an answer to me. What am I, chopped liver? If others find out you’re doing this, it might be pretty bad for you.

    Additionally, you yourself aren’t getting better at answering emails and messages. You’ll give people the wrong impression about how you are as a person, and the difference between the two tones could be confusing or make them suspicious - not that you’re using chatgpt, but that there’s something fake.

    This is in the same ballpark as digital friends or significant others. Those don’t help with isolation, they just make you more isolated. Using chatgpt like this doesn’t make you a better communicator, it just stops you from practicing that skill.

    • dingus@lemmy.world

      “…they can’t even be bothered to formulate an answer to me. What am I, chopped liver?”

      OP isn’t doing this because they don’t care. It’s the exact opposite. They care so much and stress so much about it that they have difficulty in expressing themselves.

      I agree that I don’t think it’s helpful for OP to continue doing this long term, but all of these comments here are so judgemental to OP.

      • Moghul@lemmy.world

        You’re right, but I expect a lot of people are going to have that reaction. It will feel to them like a slight and an invasion of privacy. OP has to find a way to deal with the anxiety; this is an unhealthy coping mechanism.

  • livus@kbin.social

    No. I have the same problem you do, which is harming my friendships and networking.

    But I definitely am not going to reach for the solution you did. Because if anyone notices, it will effectively nuke that relationship from orbit.

    • Usernameblankface@lemmy.world

      Putting myself in the position of a friend who realized that you were using gpt or something to form thoughts…

      I’d be impressed that you found that solution, and then I’d want to check to be sure that the things you said were true.

      Like, if I found out that 90% of your life as I knew it was just mistakes the computer made that you didn’t bother to edit, I’d be bummed and feel betrayed, and it would turn out how you said.

      On the other hand, if everything you sent is true to life and you formed the computer’s responses into your personality, I’d be very much impressed that you used this novel tool to keep in contact and overcome the frozen state that had kept you from responding before.

      • livus@kbin.social

        @Usernameblankface that’s a kind and generous interpretation, and I hope it’s the one OP’s friends will come to.

        I suspect it’s likely to be seen as an outsourcing of the friendship, though.

  • SamirCasino@lemm.ee

    I’ve never used it, but damn are people here judgy. I don’t understand how it’s a personal insult if someone used it in the way you’re describing. As long as your actual thoughts and emotions are what you send, who cares if you used a tool to express them.

    Anxiety is rough. I wish people were more understanding.

    • pufferfischerpulver@feddit.deOP

      Thank you! I probably could have been more elaborate in the OP, though it doesn’t seem like people really paid attention to it regardless. I don’t just plop in a message I received and go with whatever response comes back. I sanitize the received message of personal information as much as possible, then I let the LLM know what I want to say, and then I use the response as a starting point which I’ll edit further. Admittedly, sometimes I get something that is just bang on and I’ll copy-paste it, but that rarely happens since the model can’t match my personal writing style.

      As you recognise, they’re still my own thoughts and feelings. It’s akin to having a secretary write drafts for you, maybe? Not that I would know anything about having a secretary, ha!

  • renrenPDX@lemmy.world

    This sounds like the plot of a horror movie. It all starts out with good intent, but pretty soon you notice your AI responses seem a little off. You try to correct it, but it in turn corrects you. You reach out to family and friends, but they dislike your ‘new’ tone and are concerned about your sudden change in behavior…

  • Thelsim@sh.itjust.works

    First of all, I can really empathize with your anxieties. I lost contact with a few penpals years ago because of similar issues and I still hate myself for it.
    I don’t use ChatGPT for writing my replies, because my English is crap and my manner of writing is distinct enough that any friend could immediately tell a real response from a generated one (not enough smileys, for one :)
    But I still have similar anxieties. So if I feel anxious about writing something, I do sometimes give a general description of the original mail (“A friend of mine wrote about her mother’s funeral”, “a family member lost his cat”, etc.) and give it the reply I’ve written so far (names and personal details removed).
    I then explain that I feel anxious about my reply and worry if I hit the right tone. I never ask it to write for me, only to give critique where necessary and advice on how to improve (for good measure I always add some snide remarks on how it sounds too fake to ever pass as a human so don’t even bother trying, which it always takes in good humor because… well… AI :)
    I ignore most of the suggestions because they sound like a corporate HR communiqué. What’s more important is that it usually tries to tell me that I was thoughtful and considerate, and that that little light-hearted joke at the end was just sweet enough to add a personal touch without coming across as insensitive.
    Just getting some positive feedback, even from software that was designed specifically for that purpose, gives me that little push to hit the send button and hope for the best. I wouldn’t dare ask someone else for advice because it would be an admission of how weak and insecure I feel about expressing myself in the first place, which would ramp up my anxiety by making it a ‘big thing’.

    Anyway, I can understand the animosity people show against AI. And I’m happy for those who don’t need or want it.

    PS: This reply was 100% written without any use of AI, direct or indirectly. I did spend a good half hour on it before feeling confident enough to hit “Post” :)

  • activestatue@lemmy.world

    The one time I drafted an email using AI, I was told off for being “incredibly inappropriate”, so heck no. I have no idea what was inappropriate either; it looked fine to me. It’s spooky that I can’t spot the issues myself, so I don’t touch it.

    • FaceDeer@kbin.social

      If you’re using it right then there’d be no way for the recipient to even tell whether you’d used it, though. Did you forget to edit a line that began with “As a large language model”?

      • killeronthecorner@lemmy.world

        Once you know someone is using it, it’s very easy to know when you’re reading AI generated text. It lacks tone and any sense of identity.

        While I don’t mind it in theory, I am left with the feeling of “well if you can’t be bothered with this conversation…”

        • FaceDeer@kbin.social

          With a little care in prompting you can get an AI to generate text with a very different tone than whatever its “default” is.

          • killeronthecorner@lemmy.world

            Yeah, you can, which is why it’s lazy af when someone just serves you some default wikipedia-voice answer.

            My point is largely this: I can talk to AI without a human being involved. They become an unnecessary middle man who adds nothing of use other than copying and pasting responses. The downside of this is I no longer value their opinion or expertise, and that’s the feedback they’ll get from me at performance review time.

            I’ve told one individual already that they must critically assess solutions provided to them by ChatGPT as, if they don’t, I’ll call them out on it.

            • livus@kbin.social

              They become an unnecessary middle man who adds nothing of use other than copying and pasting responses.

              This, I hate it so much when people take it on themselves to insert Chat GPT responses into social media threads. If I wanted to know what the LLM has to say I would have just asked the LLM.

              • killeronthecorner@lemmy.world

                It’s the modern equivalent of pasting links to Wikipedia pages without reading them, except that because you’re being direct with your question you have a higher confidence that what you’re parroting makes some sort of sense.

                • livus@kbin.social

                  links to Wikipedia

                  Once I was trying to make small talk irl with someone and asked her how many official languages were in her country of origin, and she stone cold looked it up on Wikipedia and started reading aloud to me from it at great length.

                  The kicker was that I already happened to know the answer; I just thought it might be something she would like to talk about from her perspective as an emigrant.

              • FaceDeer@kbin.social

                On the flipside, I’m kind of annoyed by posts cluttering up places like asklemmy that could be trivially answered by asking an AI (or even just a simple search engine). I can understand the opinion-type questions or the ones with a social aspect to them that you can reasonably want an actual human to give you advice on (like this one), but nowadays the purely factual stuff is mostly a solved problem. So when those get asked anyway I’m often sorely tempted to copy and paste an AI answer simply as a https://letmegooglethat.com/ style passive aggressive rebuke.

                Fortunately my inner asshole is well chained. I don’t release him for such trivialities. :)

        • Silverseren@kbin.social

          I mean, with the vast majority of inter-departmental emails, no, one can’t be bothered, because it’s pointless busywork communication.

  • Discover5164@lemm.ee

    my resume is 90% chatGPT… the information is true, but i could never write in that style. it got me two jobs, so i know it works.

    i used it a couple of times to rewrite stuff given a context. like, i wrote an email that came out in a vague passive-aggressive tone, and letting chatGPT rewrite it made the wording more appropriate for the context.

    • Lemminary@lemmy.world

      Solution: Write everything in a passive aggressive tone to vent out your frustrations, let the bots do the cleanup.

      New problem: Get used to speaking in a passive aggressive tone. Oh shit.

  • Usernameblankface@lemmy.world

    I use it whenever I need to write in Corporate Speak. Resume, cover letter, important email.

    I also avoid putting in sensitive information, so the output needs editing. I’ve found that it will usually leave placeholders where specific information is needed, (name here) for example.

    It is soooo much better than smashing out some sloppy attempts and rewording them until I get the style right.