We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn't changed much since it was discussed here five years ago). When it writes an answer to a question, it just guesses which token (a word or fragment of a word) will come next in the sequence, based on the data it's been trained on.
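For what it's worth, here's roughly what "guessing the next word" means in code. This is a deliberately tiny bigram sketch over a made-up corpus, not how an actual LLM is built (real models use neural networks over subword tokens), but the generate-one-piece-at-a-time loop is the same shape:

```python
import random
from collections import defaultdict

# Toy corpus; real models train on oceans of text (this data is made up).
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word. A bigram model:
# real LLMs learn these statistics with neural networks, but generation
# is still "predict the next piece, append, repeat".
follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample a next word in proportion to how often it followed `prev`."""
    candidates = follows.get(prev)
    if not candidates:              # dead end: word never seen mid-sentence
        return random.choice(corpus)
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

# Generate a continuation one guessed word at a time.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and"
```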
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
So why is a real "thinking" AI likely impossible? Because it's bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn't hunger, desire or fear. And because there is no cognition, not a shred, there's a fundamental gap between the data it consumes (data born of human feelings and experience) and what it can do with that data.
Philosopher David Chalmers calls the puzzle of how physical processes in the body give rise to subjective experience the "hard problem of consciousness". Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal mental states with representations of bodily signals (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotions for consciousness to "happen", there is a profound and probably irreconcilable disconnect between general AI (a machine) and consciousness (a human phenomenon).
Much of the universe can be modeled as probabilities. So what? I can model a lot of things as a lot of other things; that doesn't mean the model is the thing itself. Scientists are still doing what scientists do: being skeptical, making and testing hypotheses. It was difficult to prove definitively that smoking causes cancer, yet you're willing to leap to "human thought is just an advanced chatbot" on scant evidence.
No, it’s again a case of you buying the bullshit arguments of tech bros. Even if we had a machine capable of replicating human thought, humans are more than walking brain stems.
You want proof of that? Take a look at yourself. Are you a floating brain stem or a being with limbs?
Even at the most reductive, tech bro-ish level, healthy humans are self-fueling, self-healing, autonomous, communicating, feeling, seeing, laughing, dancing, creative organic robots with general intelligence built in.
Even if someone one day creates a robot with all or most of these capabilities, one worth considering for rights, we still won't be the organic version of that robot. We'll still be human.
I think you’re beyond having to touch grass. You need to take a fucking humanities course.
That's not what I said. My point is that humans are organic probabilistic thinking machines and LLMs are just an imitation of that. And your assertion that an LLM is never, ever going to be similar to how the brain works is based on what evidence, again?
What the hell are you even rambling about? It's like you completely ignored my previous comment, since you're still going on about robots.
Bro, don't hallucinate an argument I never made, please. I'm only discussing how the human mind works, yet here you are arguing about human limbs and what it means to be human?
I'm not interested in arguing with someone who's more interested in inventing ghosts to argue with than in looking at what I actually said.
And again, go take your own advice and maybe go to therapy or something.
Yeah, you reduced humans to probabilistic thinking machines with no evidence at all.
I didn't assert that LLMs would definitely never reach AGI, but I do think they aren't a path to AGI. Why do I think that? Because AI companies have spent untold billions of dollars and poured everything they had into these models, and they're still nowhere close to AGI. Basic research is showing that, if anything, the models are getting worse.
Where’d you get the idea that you know how the human mind works? You a fucking neurological expert because you misinterpreted some scientific paper?
I agree there isn’t much to be gained by continuing this exchange. Bye!