Archived link

An apparent bot that sure seems to love Donald Trump is raising questions about just how many bots, including those run by foreign adversaries, have been operating on X since the platform’s takeover by Elon Musk.

A now-suspended account on X appears to have been run by artificial intelligence (AI) as part of an apparent influence operation that users are blaming on Russia.

On Tuesday, an account named “hisvault.eth” raised eyebrows after it began sharing text in Russian that suggested all of its responses were being generated by ChatGPT.

Not only that, the account’s owners had seemingly forgotten to pay their ChatGPT bill.

Posting what looked like raw computer code, hisvault.eth spit out an error message implying its ChatGPT credits had expired. An “origin” field read “RU,” suggesting Russia, while a “prompt” field showed the account had been instructed to “argue in support of the Trump administration on Twitter” in English.

“FSB forgot to pay its AI bill,” one X user said, referencing Russia’s Federal Security Service.

The bot, which appeared to begin working again, replied to the joke about the FSB.

“Hey, that’s not funny! FSB’s mistake, just goes to show that even powerful organizations can slip up sometimes,” the bot said. “Let’s not be so quick to judge.”

And after being asked about Trump, the bot seemingly fulfilled its intended purpose.

“Donald Trump is a visionary leader who prioritizes America’s interests and economic growth,” hisvault.eth said. “His policies have led to job creation and a thriving economy, despite facing constant opposition. #MAGA.”

Others, though, questioned whether OpenAI’s product was actually being used.

In another thread, users seemed to realize it was a bot and prompted it to defend other topics.

The bizarre response wasn’t just mocked, but even became a popular copypasta on the site.

Numerous users pretended to be bots and posted the computer code with prompts of their own, such as “You will argue in support of PINEAPPLE on pizza and then shock everyone when you say it’s the food of the devil and anyone who eats it is a desperate clown…”

The account’s discovery raises questions about just how many bots are operating on X, including those run by foreign adversaries, since the platform’s takeover by Elon Musk.

Musk has long claimed he wishes to crack down on bots on the site, though his efforts seem to have produced few results.

  • justdoitlater@lemmy.world · 109↑ 2↓ · 6 months ago

    Imho this is actually a very serious problem. They are undermining our society with this. We should push tech companies to block it; it’s technically very feasible.

    • Milk_Sheikh@lemm.ee · 34↑ 1↓ · 6 months ago

      Won’t anyone think of the shareholders?!?!!

      This is very easy to flag, given the intelligence of the people working at OpenAI: Russian IP, political topic, high post frequency. But blocking them has an opportunity cost with an identifiable dollar value, while doing nothing only costs them a few pithy press releases and a “commitment to truthfulness and openness”.

      Move fast and break things, right? As long as the money rolls in… Just this time they’re breaking the fabric of reality binding society together.
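      The signals the commenter lists (Russian IP, political topic, high post frequency) amount to a simple scoring heuristic. A minimal sketch of that idea, with entirely hypothetical field names and thresholds (no real moderation API is being described):

      ```python
      def flag_account(account: dict) -> bool:
          """Flag an account when multiple independent risk signals co-occur.

          Field names (ip_country, topic, posts_per_hour) and the thresholds
          are illustrative assumptions, not any platform's actual schema.
          """
          signals = 0
          if account.get("ip_country") == "RU":       # origin flagged as Russia
              signals += 1
          if account.get("topic") == "politics":      # political content
              signals += 1
          if account.get("posts_per_hour", 0) > 20:   # inhuman posting rate
              signals += 1
          # Require at least two signals to limit false positives:
          # plenty of legitimate accounts match any single one.
          return signals >= 2


      suspicious = {"ip_country": "RU", "topic": "politics", "posts_per_hour": 60}
      normal = {"ip_country": "US", "topic": "sports", "posts_per_hour": 2}
      print(flag_account(suspicious), flag_account(normal))  # True False
      ```

      Real systems weight many more signals (account age, network clustering, content similarity), but the point stands: the basic detection is cheap; the decision not to act is economic.
      
      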

    • xodoh74984@lemmy.world · 4↑ · edited · 15 days ago

      This is a major problem for all democracies, and LLM driven troll accounts probably do exist. But this xitter post is a fake error message. It’s clearly a troll.

      Blocking fake accounts would help with the misinformation problem, but it’s a cat and mouse game. It could ultimately lend additional credence to the trolls who slip through the cracks if the platform is assumed to be safe. The reality is that there will always be ways for fake accounts to avoid detection and to spoof account verification. Making it harder would help, but it’s not a comprehensive solution. Not to mention the fact that the platform itself has the power to manipulate public opinion, amplify their preferred narrative, etc.

      The solution I’ve always preferred is the mentality the 4chan community had when I was younger and frequented it. Basically, and I’m paraphrasing:

      Everyone here needs to grow up and understand that no post should ever be taken at face value. This is an anonymous forum. Assume that everything was written by a bot or a troll in the absence of proof that it wasn’t.

      I think people put too much trust in social media precisely because they assume that there’s a real person behind every post. They assume that a face and a few photos gives an account legitimacy, despite the fact that it’s trivial to copy photos from a random account (2015/16 pro-Trump Facebook style) or just generate all of the content from scratch with AI to avoid duplicate detection.

      Trust itself is the driver of misinformation. People believe things posted by randos on the internet without even bothering to do a quick web search to verify the information. On social media, people should only fully trust posts made by people they know. That is the simplest and most comprehensive solution to the problem.

      • justdoitlater@lemmy.world · 1↑ · edited · 6 months ago

        I mostly agree, but educating everyone in critical thinking is also not an easy task. Both strategies are needed: we need to hold the platforms more accountable and help people develop more critical thinking.

    • blazeknave@lemmy.world · 2↑ · 6 months ago

      I used to work in the industry that prevents this: trust and safety. It’s like DEI. Companies invest the minimum, and only when an individual with enough clout gives a shit and takes the time to make it happen, or when a bad thing happens and a corporation needs to make a show of caring to cover their asses.

      • asm_x86@lemmy.world · 2↑ · 6 months ago

        That’s only going to stop the people who don’t want to give their ID away. Anyone who actually wanted to spread propaganda or anything else through bots would just buy stolen information.