- cross-posted to:
- [email protected]
Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:
- Confident: 57% say the main LLM they use seems to act in a confident way.
- Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
- Sense of humor: 32% say their main LLM seems to have a sense of humor.
- Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
- Sarcasm: 17% say their main LLM seems to respond sarcastically.
- Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
AI is essentially the human super-id. No single person could ever be more knowledgeable. Being intelligent is a different matter.
Is stringing words together really considered knowledge?
And you know what? The people who believe that are right.
Note that that’s not a commentary on the capabilities of LLMs.
It’s sad, but there’s that old George Carlin line, something along the lines of, “Just think of how stupid the average person is, and then realize that 50% are even worse…”
Given the US adults I see on the internet, I would hazard a guess that they’re right.
Half of all voters voted for Trump. So an LLM might be smarter than them. Even a bag of pea gravel might be.
Less than a third of all voters voted for Trump. Most voters stayed home.
If you didn’t vote then you’re not a voter.
Most eligible voters stayed home.
A bag of frozen peas is smarter than some of these Trump followers. Even half a frozen pea is.
This is hard to quantify. I use them constantly throughout my work day now.
Are they smarter than me? I’m not sure. Haven’t thought too much about it.
What they certainly are, and by a long shot, is faster. Given a set of data, I could analyze it and pull out insights and conclusions. It might take me a week or a month depending on the size and breadth of the data set. An LLM can pull out insights and conclusions in seconds.
I can read error stacks coming from my code, but before I’ve even read the first few lines the LLM has ingested all of them, checked the code, and reached a conclusion about the necessary fix. Is it right, optimal, and free of new bugs? Maybe 75% of the time at this point. I can coax it, iterate on the solution myself, or do it entirely myself with the understanding of the bug that it granted me. The same bug might have taken me hours to figure out on my own.
My point is, I’m not sure how to compare smarter vs orders of magnitude faster.
Are you smarter than a calculator?
This is sad. This does not spark joy. We’re months from someone using “but look, ChatGPT says…” to try to win an argument. I can’t wait to spend the rest of my life explaining to people that LLMs are really just fancy bullshit-generator toys.
It’s already happened at my work. People swearing an API call exists because an LLM hallucinated it, even as the people who wrote the backend tell them it does not exist.
What a very unfortunate name for a university.
WTF is an LLM?
Large language model. It’s what all these “AIs” really are.
It’s probably true too.
The average literacy level is around that of a sixth grader.
This tracks
I believe LLMs are smarter than half of US adults
LLMs are proof that even if you’re extremely stupid, having access to information can still make you sound smart.
I suppose some of that comes down to the personal understanding of what “smart” is.
I guess you could call a person who doesn’t understand a topic, but still manages to sound reasonable when talking about it, and might even convince people that they actually have a deep understanding of it, “smart”, in a kind of “smart impostor” way.
That is the problem with US adults. Half of them probably is dumber than AI…
The grammatical error here is chef’s kiss.
Intelligence and knowledge are two different things. Or, rather, the difference between smart and stupid people is how they interpret the knowledge they acquire. Both can acquire knowledge, but stupid people come to wrong conclusions by misinterpreting the knowledge. Like LLMs, 40% of the time, apparently.
My new mental model for LLMs is that they’re like genius 4-year-olds: they have huge amounts of information, yet little to no wisdom about what to do with it or how to interpret it.