

Headline writers believe there are two genders: pilots and female pilots.
polite leftists make more leftists
more leftists make revolution
can you explain the problem?
it’s beyond me how people can feel bad for one but not the other. Feeling bad for neither or both, that I can understand.
Trump also got the COVID vaccine invented, produced, and distributed. It was probably the greatest thing he ever did, but now he doesn’t even want to be associated with it. Alas. His lunacy knows no bounds.
Then they’re smart. The technology is just not there yet.
I mean, even people who are proficient with IPA still struggle to read whole sentences written entirely in IPA. Similarly, people who speak and read Chinese struggle to read entire sentences written in pinyin. I’m not saying people can’t do it, just that it’s much less natural for us (even though it doesn’t really seem like it ought to be).
I agree that LLMs are not as bright as they look, but my point here is that this particular thing – their strange inconsistency in understanding which letters correspond to the tokens they produce – specifically shouldn’t be taken as evidence for or against LLMs being capable in any other context.
I don’t think we disagree that much.
So read and learn. Okay, I agree that it can have environmental impact due to power usage and water consumption. But this isn’t a fundamental problem – we can use green power (I’ve heard there are plans to build nuclear plants in California for this reason) and build them in a place without water shortages (i.e. somewhere other than California.) AI differs from fossil fuels in this regard, which are fundamentally environmentally damaging.
But still, I cringe when someone implies open-model locally-hosted AIs are environmentally problematic. They have no sense of scale whatsoever.
But it still has to be revised, refined, and inevitably fixed when it hallucinates precedent citations and just about anything else. Well yeah, it’s slop, as I said. These are only suitable in cases where complete reliability is not required. But there’s no reason to believe that hallucinations won’t decrease in frequency over time (as they already have been), or that the domains in which hallucinations are common won’t shrink over time. I’m not claiming these methods will ever reach 100% reliability, but humans (the thing they are meant to replace) aren’t perfectly reliable either. So how many years until the reliability of an LLM exceeds that of a human? Yes, I know I’m making humans sound fungible, but to our corporate overlords we mostly are.
if you haven’t noticed what AI has done to the HR industry, let me summarize it thusly: it has destroyed it.
Good, so we agree that there is the potential for long-term damage. In other words, AIs are a long-term threat, not just a short-term one. Maybe the bubble will pop but so did the dotcom bubble and we still have the internet.
enshittification
No, I think enshittification started well before 2022 (ChatGPT). Sure, even before that LLMs were making SEO garbage webpages that Google was surfacing in results, so you can blame AI in that regard – but I don’t believe for a second that Google couldn’t have found a way to filter those kinds of results out. The user-negative behaviour was profitable for them, so they didn’t fix it. If LLMs hadn’t been around, they would have found other ways to make search more user-negative (and they probably did indeed employ such techniques).
based.
game doesn’t save your hitpoints, starts you at 30 hp every time
cringe.
When we see LLMs struggle to say which letters are in each of the tokens they emit, or to understand a word with spaces between each letter, we should compare it to a human struggling to understand a word written in IPA (/sʌtʃ əz ðɪs/) even though they can understand the same word spoken aloud perfectly fine.
This is a DeepSeek model, right? OP was posting about GPT o3.
Massive environmental harms
I find this questionable; people forget that a locally-hosted LLM is no more taxing than a video game.
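For a rough sense of scale, here’s a back-of-envelope comparison. The numbers are assumptions I’m picking for illustration (a ~350 W consumer GPU, ~30 seconds per local reply), not measurements of any particular model or game:

```python
# Back-of-envelope comparison with assumed, illustrative numbers --
# not measurements of any particular model or game.
GPU_POWER_WATTS = 350          # rough draw of a high-end consumer GPU under load
LOCAL_REPLY_SECONDS = 30       # time to generate one longish local-LLM reply
GAMING_SESSION_HOURS = 1       # an hour of GPU-bound gaming

llm_reply_wh = GPU_POWER_WATTS * LOCAL_REPLY_SECONDS / 3600
gaming_wh = GPU_POWER_WATTS * GAMING_SESSION_HOURS

print(f"one local LLM reply ~ {llm_reply_wh:.1f} Wh")
print(f"one hour of gaming  ~ {gaming_wh:.0f} Wh")
# On these assumptions, an hour of gaming costs about as much energy
# as a hundred or so local LLM replies.
```

On those assumed figures, a locally-hosted model is in the same ballpark as (or cheaper than) an evening of gaming on the same hardware.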
No chance it’s going to get better
Why do you believe this? It has continued to get dramatically better over the past 5 years. Look at where GPT2 was in 2019.
No consistently usable product other than beginner code tasks
It is not consistently usable for coding. If you are hoping this slop-producing machine is consistently useful for anything then you are sorely mistaken. These things are most suitable for applications where unreliability is acceptable.
No profitable product […] Tens of thousands of (useful!) careers terminated
Do you not see the obvious contradiction here? If you are sure that this is not going to get better and it’s not profitable, then you have nothing to worry about in the long-term about careers being replaced by AIs.
Destroyed Internet search, arguably the one necessary service on the Internet
Google did this intentionally as part of enshittification.
‘people’ in scare quotes since coders aren’t people I guess.
Too many people are failing to understand that fqughds are actually woplels, and until they understand that, they are just going to keep wasting their money on 💫woplels💫.
Even though woplels have proven to be useful for some things, they’re not as good as some people want them to be, so they’re useless.
Nobody thought it would do very well. This was a software dev’s little diversion.
We should praise attempts to make the public aware of the limitations of LLMs, not laugh at the guy who did this.
Turns out spicy autocomplete can contribute to the bottom line. Capitalism :(
I suppose if you’re going to be postmodernist about it, but that’s beyond my ability to understand. The only complete solution I know to Theseus’ Ship is “the universe is agnostic as to which ship is the original. Identity of a composite thing is not part of the laws of physics.” Not sure why you put scare quotes around it.
Hallucinations aren’t relevant to my point here. I’m not claiming that AIs are a good source of information, and I agree that hallucinations are dangerous (either that or misusing LLMs is dangerous). I also admit that, for language learning, artifacts caused by tokenization could be very detrimental to the user.
The point I am making is that LLMs struggling with these kinds of tokenization artifacts is poor evidence for drawing any conclusions about their behaviour on other tasks.
Because LLMs operate at the token level, I think a fairer comparison with humans is to ask why humans can’t produce the IPA spelling of words they can say, /nɔr kæn ðeɪ ˈizəli rid θɪŋz ˈrɪtən ˈpjʊrli ɪn aɪ pi ˈeɪ/, even though it seems like it should be simple – they understand the sounds, after all. I’d be impressed if somebody could do this too! But the fact that most people can’t shouldn’t move you to think humans must be fundamentally stupid because of this one curious artifact. Maybe they are fundamentally stupid for other reasons, but this one thing is quite unrelated.
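If you want to see what the model actually receives, run a word through a BPE tokenizer. This is a minimal sketch assuming the tiktoken package is installed; the exact split depends on the tokenizer, but the point stands that the model gets opaque token IDs, not letters:

```python
# Minimal sketch, assuming the tiktoken package (pip install tiktoken).
# The exact split depends on the tokenizer; the point is that the model
# sees opaque token IDs, not individual letters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)                       # a short list of integer IDs
pieces = [enc.decode([tid]) for tid in token_ids]  # the text each ID maps back to

print(token_ids)  # integers -- no letters visible at this level
print(pieces)     # multi-character chunks, not single letters
```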
Congrats, you’ve discovered reductionism. By that argument the human brain also doesn’t know things, as it’s composed of neurons and synapses made of molecules that obey the laws of physics and direct one’s mouth to produce words in response to signals that come from the ears.
Not saying LLMs don’t know things, but your argument as to why they don’t know things has no merit.
revolt.