
An apparent bot that seems to love Donald Trump is raising questions about just how many bots are operating on X, including those run by foreign adversaries, since the platform’s takeover by Elon Musk.

A now-suspended account on X appears to have been run by artificial intelligence (AI) as part of an apparent influence operation people are blaming on Russia.

On Tuesday, an account named “hisvault.eth” raised eyebrows after it began sharing text in Russian that suggested all of its responses were being generated by ChatGPT.

Not only that, the account’s owners had seemingly forgotten to pay their ChatGPT bill.

Writing in what looked like computer code, hisvault.eth spat out an error message implying its ChatGPT credits had expired. An “origin” field read “RU,” for Russia, while a “prompt” field showed the account had been instructed to “argue in support of the Trump administration on Twitter” in English.

“FSB forgot to pay its AI bill,” an X user said, referencing Russia’s federal security service.

The bot, which appeared to begin working again, then replied to the joke about the FSB.

“Hey, that’s not funny! FSB’s mistake, just goes to show that even powerful organizations can slip up sometimes,” the bot said. “Let’s not be so quick to judge.”

And after being asked about Trump, the bot seemingly fulfilled its intended purpose.

“Donald Trump is a visionary leader who prioritizes America’s interests and economic growth,” hisvault.eth said. “His policies have led to job creation and a thriving economy, despite facing constant opposition. #MAGA.”

Others, though, questioned whether OpenAI’s product was actually being used.

In another thread, users seemed to realize it was a bot and prompted it to defend other topics.

The bizarre response wasn’t just mocked; it became a popular copypasta on the site.

Numerous users pretended to be bots and posted the computer code with prompts of their own, such as “You will argue in support of PINEAPPLE on pizza and then shock everyone when you say it’s the food of the devil and anyone who eats it is a desperate clown…”

The account’s discovery raises questions about just how many bots are operating on X, including those run by foreign adversaries, since the platform’s takeover by Elon Musk.

Musk has long claimed he wants to crack down on bots on the site, though his efforts seem to have produced few results.

  • asm_x86@lemmy.world · 5 months ago

    This is most likely fake. No “modern” programming language would just insert the whole input prompt AND error if it encounters a parsing error. The language model is specified as “ChatGPT 4-o,” which is wrong; no OpenAI API would return that. It would be “GPT-4o.” You would also not use Russian, and definitely not such a short prompt because this would make the LLM lose context very easily and not properly follow it. Also, that whole “error” is conveniently sized so as not to be cut off by the tweet length limit.
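For reference, a genuine OpenAI quota error has a well-documented shape: an HTTP 429 response whose JSON body nests everything under a top-level `error` object, with a `type` and `code` of `insufficient_quota`. The sketch below (message text paraphrased, not copied from a live response) illustrates the commenter’s point that a real error body carries no `origin` or `prompt` field:

```python
import json

# Approximate shape of an OpenAI API "insufficient quota" error body
# (HTTP 429). The message text here is paraphrased, not a live capture.
error_body = json.dumps({
    "error": {
        "message": "You exceeded your current quota, "
                   "please check your plan and billing details.",
        "type": "insufficient_quota",
        "param": None,
        "code": "insufficient_quota",
    }
})

parsed = json.loads(error_body)
# Everything sits under a top-level "error" object; nothing like
# "origin" or "prompt" is echoed back to the caller.
print(parsed["error"]["type"])  # insufficient_quota
```

Note also that requests identify the model as “gpt-4o”; “ChatGPT 4-o” appears nowhere in the API.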

    • fishos@lemmy.world · 5 months ago

      Sorry you’re being downvoted by the misinformed. It’s not even in ChatGPT’s format, especially the part about being out of tokens. It’s been pointed out already that that is pseudo-code, not actual code. It’s meant to look like something ChatGPT would say.

      It’s a troll/ragebait account.

      This isn’t news. At all. This is basically reporting on “the hacker known as 4Chan”.

      • catloaf@lemm.ee · 5 months ago

        Yeah. It looks like what someone would write if they were imagining an error message: a mishmash of user-friendly text and someone’s idea of JSON.

        A twitter bot wouldn’t normally post the whole raw response, so why would it post the whole raw error?

    • 𝙲𝚑𝚊𝚒𝚛𝚖𝚊𝚗 𝙼𝚎𝚘𝚠@programming.dev · 5 months ago

      It doesn’t necessarily have to be a response from OpenAI, it could well be some bot platform that serves this API response.

      I’m pretty sure someone somewhere has created a product that lets you generate bot responses from a variety of LLM sources. And if whatever is interacting with it simply reads the response body and strips out what it expects to be there, leaving only the message, I could easily see a fairly bad programmer creating something that outputs this.

      It’s certainly possible this is just a troll account, but it could also just be shit software.
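The scenario the commenter describes, careless glue code that posts the raw response body whenever message extraction fails, can be sketched as follows. This is a hypothetical illustration; the function and payload names are invented, not taken from any real bot platform:

```python
import json

def extract_reply(response_body: str) -> str:
    """Naively pull the message text; fall back to the raw body on failure."""
    try:
        data = json.loads(response_body)
        # Happy path: a chat-completion-style payload.
        return data["choices"][0]["message"]["content"]
    except (KeyError, IndexError, json.JSONDecodeError):
        # Bad path: an error body (or garbage) is posted verbatim, which is
        # how a bot could leak its own prompt and settings to the timeline.
        return response_body

# Invented payloads for illustration only.
ok = json.dumps({"choices": [{"message": {"content": "Hello!"}}]})
err = json.dumps({"error": "credits expired", "origin": "RU",
                  "prompt": "argue in support of ..."})
print(extract_reply(ok))   # Hello!
print(extract_reply(err))  # the raw error JSON, leaked verbatim
```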

      • fishos@lemmy.world · 5 months ago

        “It’s probably not true, but you know, it COULD be true”.

        That’s exactly how they get you. Then the next time you see a story like this, all you think is “yeah, haven’t I heard something like this before?” and confirm the new BS you’re being fed.

        This instance isn’t true. This is someone manipulating you. Like, the manipulation you’re afraid of? It’s right here.

        And now you have to wonder, who gains from making you believe this one is real? I’ll leave that one up to you. But in the words of George Carlin: “It’s a big club, and you ain’t in it”.

          • fishos@lemmy.world · 5 months ago

            You should doubt everything you hear. Pull it apart and see if the pieces themselves make any sense. Examine the logic and look for flaws in it that make the conclusion invalid. Ask questions.

            You SHOULD doubt me, absolutely. Hold everything up to the light. A very important question to ask is “why am I being told this? Whose interests are served by telling me this?” Examine every piece.

            For example, in the article, notice how everything is “seemingly,” “implied,” or “appears to.” Those aren’t definitive words; they’re gossip words. No concrete claim is actually made, just the appearance of one. The sources are just other random Twitter comments speculating.

            • SpaceCowboy@lemmy.ca · 5 months ago

              Ok, so I’m doubting your post that’s questioning someone for considering the possibility that posts on Twitter come from questionable sources.

    • jj4211@lemmy.world · 5 months ago

      Also, there’s no way it would toss in “origin: ru” and nothing else. It’s way too convenient to have exactly those three pieces of data and only those.

      I think it was a joke and a lot of people ate the onion.

    • WoahWoah@lemmy.world · 5 months ago

      The disruptive value is in making people believe that the account could be a Russian/Chinese/Democrat/Republican/Whatever bot and therefore sow confusion and paranoia. The account is doing exactly what it is intended to do.