• 4 Posts
  • 32 Comments
Joined 22 days ago
Cake day: June 6th, 2025

  • Wow. LLM shills just really can’t cope with reality, can they?

    Go to one of your “reasoning” models. Ask a question. Record the answer. Then ask it to explain its reasoning. It churns out a pretty plausible-sounding pile of bullshit. (That’s what LLMbeciles are good at, after all.) But here’s the key, the thing that separates the critical thinker from the credulous: ask it again. Not even in a new session. Ask it again to explain its reasoning. Do this ten times. Count the number of different explanations it gives for its “reasoning”. Count the number of mutually incompatible lines of “reasoning” it gives.

    Then, for the pièce de résistance, ask it to explain how its reasoning model works. Then ask it again. And again.
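
    If you want to run that test yourself, here’s roughly what it looks like in code. This is a minimal sketch, assuming the OpenAI Python SDK; the model name, the question, and the `ask` helper are placeholders, not anything prescribed above.

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    MODEL = "gpt-4o"   # placeholder; any chat model will do

    def ask(history):
        """Send the running conversation; return the model's reply text."""
        reply = client.chat.completions.create(model=MODEL, messages=history)
        return reply.choices[0].message.content

    # One session: ask a question once and record the answer...
    history = [{"role": "user", "content": "Is 3821 prime?"}]  # placeholder question
    answer = ask(history)
    history.append({"role": "assistant", "content": answer})

    # ...then ask it to explain its reasoning ten times, in the SAME session.
    explanations = []
    for _ in range(10):
        history.append({"role": "user", "content": "Explain your reasoning."})
        explanation = ask(history)
        history.append({"role": "assistant", "content": explanation})
        explanations.append(explanation)

    # Distinct strings are only a crude lower bound; counting *mutually
    # incompatible* lines of reasoning is a job for a human reader.
    print(f"{len(set(explanations))} textually distinct explanations out of 10")
    ```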

    It’s really not hard to spot the bullshit machine in action if you’re not a credulous ignoramus.


  • I love how techbrodudes assume nobody else knows how to do what they do.

    I did my little test three fucking days before that message. Not years. DAYS.

    You understand that a huge part of how LLMs work is that they’re stochastic, right? That you can ask the same question ten times and get ten (often radically) different answers? Right? (There’s a sketch of this at the end of this comment if you want to see it for yourself.)

    What does that tell you about a) your experiment, and b) the LLMbeciles themselves?

    Compassionate fucking Buddha, are LLM pushers dense!
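
    And you don’t have to take the stochasticity on faith; it’s literally an exposed parameter. A rough sketch, again assuming the OpenAI Python SDK, where “temperature” is the standard knob controlling how much randomness goes into sampling each token, and the question is a placeholder:

    ```python
    from openai import OpenAI

    client = OpenAI()
    QUESTION = [{"role": "user", "content": "Name one underrated novel."}]

    for temperature in (0.0, 1.0):
        answers = {
            client.chat.completions.create(
                model="gpt-4o", messages=QUESTION, temperature=temperature
            ).choices[0].message.content
            for _ in range(10)
        }
        # Near temperature 0, sampling is (almost) greedy, so answers mostly
        # repeat; at 1.0 the very same question fans out into many different ones.
        print(f"temperature={temperature}: {len(answers)} distinct answers")
    ```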


  • Huh. So there really is a 凤凰血 (“phoenix blood”). Weird how when I tried it (on several AIs) they just made shit up instead of giving me that information.

    It’s almost like how you ask the question determines how it answers, instead of, you know, objective reality. Almost as if it has no actual model of objective reality and is just a really sophisticated game of Mad Libs.

    Almost.