Wondering whether modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to a next-word predictor. Also not sure if this graph is the right way to visualize it.

  • SorteKanin@feddit.dk
    1 month ago

    But how do you know that the human brain is not just a super sophisticated next-thing predictor that, by being super sophisticated, manages to incorporate nuance and all that stuff to actually be intelligent? Not saying it is, but still.

    • Scrubbles@poptalk.scrubbles.tech
      1 month ago

      Because we have reason and understanding. Take something as simple as the XY problem, where someone asks about their attempted solution Y instead of their actual problem X. Humans understand that there are nuances to prompts and questions. I like the XY problem because a human knows to step back and ask “what are you really trying to do?”. AI doesn’t have that capability; it doesn’t have the reasoning to say “maybe your approach is wrong”.

      So I’m not the one to define what it is or where it falls on the scale. But I can say that it’s not human intelligence.