• snooggums@lemmy.world · 4 days ago

    Why would the steps be literal when everything else is bullshit? Obviously the reasoning steps are AI slop too.

  • paraphrand@lemmy.world · 4 days ago

    It’s bullshitting. That’s the word. Bullshitting is saying things without a care for how true they are.

      • antifuchs@awful.systems · 3 days ago

        It’s kind of a distinction without much discriminatory power: LLMs are a tool created to ease the task of bullshitting, used by bullshitters to produce bullshit.

  • diz@awful.systems · 4 days ago

    It re-consumes its own bullshit, and the bullshit it does print is the bullshit it also fed itself, so it’s not lying about that. Of course, it is also always re-consuming the initial prompt, so the end bullshit isn’t necessarily as far removed from the question as the length would indicate.
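
    A minimal toy sketch of that loop (Python; `fake_generate` is a hypothetical stand-in for a real model call, not any actual API): the prompt stays in the context the whole time, and each chunk of "reasoning" gets appended so the next step conditions on everything produced so far.

    ```python
    # Toy illustration of an autoregressive "reasoning" loop: the model's own
    # output is appended to the context it reads on the next step, alongside
    # the original prompt.

    def fake_generate(context: str, step: int) -> str:
        """Hypothetical stand-in for a model call; returns a canned chunk."""
        return f"[step {step}: elaborating on {len(context)} chars of prior context]"

    def reasoning_loop(prompt: str, steps: int = 3) -> str:
        context = prompt  # the initial prompt is always part of the context
        for step in range(1, steps + 1):
            chunk = fake_generate(context, step)
            context += "\n" + chunk  # the model re-consumes its own output
        return context

    if __name__ == "__main__":
        print(reasoning_loop("Why is the sky blue?"))
    ```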

    Where it gets deceptive is when it knows an answer to the problem but constructs some bullshit to make you believe it solved the problem on its own. The only way to tell the difference is to ask it something simpler that it doesn’t know the answer to, and watch it bullshit in circles or land on an incorrect answer.