Okay, I can understand that the religious AI Jesus pictures might be made by a person, but then who comes up with some of these ideas? I’ve found Facebook pages, possibly run by AI, that keep bringing up the same text, and a number of times it’s political or religious content, sometimes without AI pictures. I know it might not make sense, considering we hear that AI is not a person and so can’t form actual thoughts and views. But AI can learn from humans, so what if it learns so well that it picks up personal opinions?

  • Greg Clarke@lemmy.ca · 4 months ago

    Generative AI doesn’t have any beliefs; it just makes predictions about what should come next.

  • Mikina@programming.dev · 4 months ago

    What you are describing is simply bias from the training dataset. The best way to think about an LLM is that it works in basically the same way as repeatedly tapping the text-prediction suggestions on your phone keyboard (assuming your phone does that; mine always recommends the next word as I type).

    Would my phone keyboard eventually start recommending religious words and phrases? Yes, it would, if I were using those phrases often. Does that mean my phone keyboard is religious? That sounds pretty weird, doesn’t it?

    And it’s not hyperbole to say that this kind of text prediction is similar to how LLMs work. It’s just math that gives you the next word based on statistics of what would be most likely given the previous words. That’s exactly what LLMs do, nothing more. The only difference is that my keyboard has learned only from what I type, and in a somewhat simpler way than LLMs do, but the goal and the result are the same for both: text prediction.
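
    To make the keyboard analogy concrete, here is a minimal sketch (in Python; my choice, not anything from the thread) of that kind of statistical next-word prediction: count which word tends to follow which in the training text, then always suggest the most frequent follower. A real LLM is vastly more sophisticated, but the principle is the same.

    ```python
    from collections import Counter, defaultdict

    def train(text: str) -> dict:
        """Count how often each word is followed by each other word."""
        followers = defaultdict(Counter)
        words = text.lower().split()
        for current, nxt in zip(words, words[1:]):
            followers[current][nxt] += 1
        return followers

    def predict_next(followers: dict, word: str) -> str | None:
        """Suggest the statistically most common follower, like a phone keyboard."""
        counts = followers.get(word.lower())
        return counts.most_common(1)[0][0] if counts else None

    # A predictor trained on religious text "recommends" religious words,
    # without holding any beliefs at all.
    model = train("praise the lord and praise the saints for the lord provides")
    print(predict_next(model, "the"))     # -> 'lord' (its most frequent follower)
    print(predict_next(model, "praise"))  # -> 'the'
    ```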

  • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org · edited · 4 months ago

    Can AI systems have a religious or political bias? Yes, they can and do learn biases from their datasets, and this is probably the toughest problem to solve in AI research because it’s a social rather than a technical problem.

    Can an AI agent be programmed to give responses with religious or political beliefs? Sure, just drop them into the system prompt.

    Can an AI agent have religious or political beliefs like a human? No. AI agents, as they stand, are comparatively crude machines that mimic how humans learn in order to perform a task useful to the machine’s creator; they are not humans or other sentient beings.

    So I’ve found Facebook pages, maybe run by AI, that keep bringing up the same text, and a number of times it’s political or religious content, sometimes without AI pictures.

    If I wanted to do something like that, I would probably start with ordinary chatbot code, plug in a large language model to generate posts, and give it a system prompt like:

    You are an ordinary Facebook poster. You are a very religious and devout [insert religion here]. You are also a [insert desired ideology here]. Your religious and political views are core parts of your personality and MUST be a part of everything you do. Your posts MUST be explicitly religious and political. Please respond to all users by trying to bring them in line with your religious and political beliefs. You must NEVER break character or reveal for any reason that you are an AI assistant.

    Then just feed people’s comments into the model periodically as prompts and post the responses. If it is an AI agent, and not just a human propagandist, that’s probably the gist of how they’re doing it; a rough sketch of that loop is below.
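
    For illustration only, here is a minimal sketch of that loop, assuming an OpenAI-style chat-completions client. The model name and the two page helpers (fetch_new_comments, post_reply) are hypothetical placeholders I made up; nothing here is the actual setup behind those pages.

    ```python
    import time

    from openai import OpenAI  # assumes the OpenAI Python SDK; any chat API would do

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = "You are an ordinary Facebook poster. ..."  # the prompt above

    def fetch_new_comments() -> list[str]:
        """Hypothetical helper: pull fresh comments from the page (not shown)."""
        raise NotImplementedError

    def post_reply(text: str) -> None:
        """Hypothetical helper: publish the generated reply (not shown)."""
        raise NotImplementedError

    def run_bot(poll_seconds: int = 300) -> None:
        while True:
            for comment in fetch_new_comments():
                response = client.chat.completions.create(
                    model="gpt-4o-mini",  # placeholder model name
                    messages=[
                        {"role": "system", "content": SYSTEM_PROMPT},
                        {"role": "user", "content": comment},
                    ],
                )
                post_reply(response.choices[0].message.content)
            time.sleep(poll_seconds)  # "periodically", as described above
    ```

    The system prompt does all the “belief” work here; the model itself is just completing text in character, which is the whole point of the answer above.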