Well… yeah. That’s not what LLMs do. That’s like saying “A leafblower got absolutely wrecked by a 1998 Dodge Viper in a beginner’s drag race”. It’s only impressive if you don’t understand what a leafblower is.
People write code with LLMs. A programming language is just a language specialised for precise logic. That’s what “AI” is advertised to be good at. How can you do one and not the other?
“Precise logic” is specifically what AI is not any good at whatsoever.
AI might be able to write a program that beats an Atari 2600 at chess, but it should not be expected to win at chess itself.
I shall await the moment when AI communicates that it can’t do something with the same confidence it shows when claiming the opposite, because apparently figuring that out is my job somehow.
It’s not very good at it, though, if you’ve ever used it to code. It automates and eases a lot of mundane tasks, but it still requires a LOT of supervision and domain knowledge to keep it from going off the rails or hallucinating code that’s either full of bugs or will never work. It’s not a “prompt and forget” thing, not by a long shot. It’s just an easier way to steal code it picked up from Stack Overflow and GitHub.
As a human, I know to check how much data is going into a fixed-size buffer somewhere and to break out of the code if it exceeds it. The LLM will have no qualms about putting buffer overflow vulnerabilities all over your shit, because it doesn’t care; it only wants to fulfill the prompt and get something to work.
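To be concrete about the kind of check I mean, here’s a minimal C sketch (the buffer size and function names are made up for illustration): the unchecked copy is the sort of thing you often get back from a prompt, while the checked version refuses input that won’t fit.

```c
#include <stdio.h>
#include <string.h>

#define BUF_SIZE 16

/* Unsafe: strcpy writes past buf if input is longer than BUF_SIZE - 1. */
void copy_unchecked(const char *input) {
    char buf[BUF_SIZE];
    strcpy(buf, input);           /* classic buffer overflow */
    printf("stored: %s\n", buf);
}

/* Safe: check the length first and break out if it doesn't fit. */
int copy_checked(const char *input) {
    char buf[BUF_SIZE];
    if (strlen(input) >= sizeof buf) {
        fprintf(stderr, "input too long, refusing to copy\n");
        return -1;                /* bail out instead of overflowing */
    }
    strcpy(buf, input);           /* now known to fit within bounds */
    printf("stored: %s\n", buf);
    return 0;
}

int main(void) {
    copy_checked("short and fine");
    copy_checked("this string is much longer than sixteen bytes");
    return 0;
}
```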
I’m not saying it’s good at coding, I’m saying it’s specifically advertised as being very good at it.