It’s not good because it has no context on what is correct or not. It’s constantly making up functions that don’t exist, or attributing functions to packages that don’t exist. It’s often sloppy in its responses, because the source code it parrots is some amalgamation of good coding and terrible coding. If you are using this for your production projects, you likely won’t understand the code when it breaks, and it’ll likely have security flaws and other errors in it.
Ask Daniel Stenberg.
My theory is that not a lot of people actually like this AI crap; they just lean into it for fear of being left behind. Now you all think it’s just gonna fail and the companies are gonna go bankrupt. But a lot of ideas in America are subsidized, and even when they don’t work well, they still go forward. It’ll be you, the taxpayer, funding these stupid ideas that don’t work and that are hostile to our very well-being.
AI is just the lack of privacy, an authoritarian dragnet, remote control over other people’s computers, web scraping, the complete destruction of America’s art scene, the stupidification of America, and copyright infringement, with a sprinkling of baby death.
Have you used AI to code? You don’t say “hey, write this file” and then commit it as “AI Bot 123 [email protected]”.
You start writing a method and get auto-completes that are sometimes helpful. Or you ask the bot to write out an algorithm. Or to copy something and modify it 30 times.
You’re not exactly keeping track of everything the bots did.
This seems like a very bad idea. I think we just need more lisp and less AI.
“Hey AI - Create a struct that matches this JSON document that I get from a REST service”
Bam, it’s done (see the sketch below for the sort of thing you get back).
Or
"Hey AI - add a schema prefixed on all of the tables and insert statements in the SQL script.
Yeah, integrating APIs has really become trivial with copilots. You just copy-paste the documentation and all the boring stuff is done in the blink of an eye! I love it
It’s exactly the sort of “tedious yet not difficult” task that I love it for. Sometimes you need to clean things up a bit but it does the majority of the work very nicely.
yeah, that’s… one of the points in the article
I’ll admit I skimmed most of that train wreck of an article - I think it’s pretty generous to say it had a point. It’s mostly recounts of people complaining about AI. But if they hid something in there about it being remarkably useful in some cases, while not writing entire applications or features, then I guess I’m on board?
Well, sometimes I think the web is flooded with advertising and spam praising AI. For these companies it makes perfect sense: billions of dollars have been spent, and they are trying to cash in before the tide turns.
But do you know what is puzzling (and you do have a point here)? Many posts that defend AI do not engage in logical argumentation: they argue beside the point, appeal to emotions, use the short-circuited argument that “new” always equals “better”, or claim that AI is useful for coding as long as the code is not complex (compare that to the objection that mathematics is simple as long as it is not complex, which is a red herring and a laughable argument). So, many thanks for pointing out the above and giving, in a few words, a bunch of examples which underline that one has to think carefully about this topic!
If humans are so good at coding, how come there are 8,100,000,000 people and only 1,500 are able to contribute to the Linux kernel?
I hypothesize that AI has average human coding skills.
Average drunk human coding skills
As a dumb question from someone who doesn’t code, what if closed source organizations have different needs than open source projects?
Open source projects seem to hinge a lot more on incremental improvements and changes made only for the benefit of users. In contrast, closed source organizations seem to use code more as a means to quickly develop a new product or change that justifies the money spent. Maybe closed source organizations are more willing to accept slop code that is bad but can barely work, versus open source, which won’t?
Because most software is internal to the organisation (and therefore closed by definition) and never gets compared with or used outside that organisation: yes, I think that when such software barely works, it is taken as good enough, and there’s no incentive to put in more effort to improve it.
My past few years of programming business-internal applications have been characterised by upper-management imperatives to “use Generative AI, and we expect that to make you work faster”, without any effort spent to figure out whether there is any net improvement in the result.
Certainly there’s no effort spent to determine whether it’s a net drain on our time and on the quality of the result. Which everyone on our teams can see is the case. But we are pressured to continue using it anyway.
I’d argue the two aren’t as different as you make them out to be. Both types of projects want a functional codebase, both have limited developer resources (communities need volunteers, businesses have budget limits), and both can benefit greatly from the development process being sped up. Many development practices that are industry standard today started in the open source world (style guides and version control strategy, to name two heavy hitters), and there’s been some bleed-through from the other direction as well (tool juggernauts like Atlassian having new open source alternatives made directly in response).
No project is immune to bad code. There’s even a lot of code out there that was believed to be good at the time and mostly worked; only in retrospect do we learn how bad it is, and no one wanted to fix it.
The end goals and purposes are for sure different between community passion projects and corporate financially-driven projects. But the way you get there is more or less the same, and that’s the crux of the article’s argument: historically, open source and closed source have done the same thing, so why is the usage of this one tool so wildly different?
Most software isn’t public-facing at all (neither open source nor closed source); it’s business-internal software (which runs a specific business and implements its business logic), so most of the people talking about coding with AI are also talking mainly about this kind of business-internal software.
Does business internal software need to be optimized?
Need to be optimised for what? (To optimise is always to make trade-offs, reducing some property of the software in pursuit of some ideal; what ideal are you referring to?)
And I’m not clear on how that question is related to the use of LLMs to generate code. Is there a connection you’re drawing between those?
When did you last decide to buy a car that barely drives?
And another thing: there are some tech companies that operate very short-term, like typical social media start-ups, of which about 95% go bust within two years. But a lot of computing is very long-term, with code bases that are developed over many years.
The world only needs so many shopping list apps - and there exist enough of them that writing one is not profitable.
There are commercial open source projects too
Who makes a contribution under the name aibot514? No one. People use AI for open source contributions, but more in a “fix this bug” way, not in a fully-automated-contribution-under-the-name-ai123 way.
Counter-argument: if AI code were good, the owners would create official accounts to contribute to open source, because they would be openly demonstrating how well it does. Instead, all we have is Microsoft employees being forced to use and fight with Copilot on GitHub, publicly demonstrating how terrible AI is at writing code unsupervised.
Bingo
Bing. O.
Mostly closed source, because open source rarely accepts them as they are often just slop. Just assuming stuff here, I have no data.
To be fair, if a competent dev used an AI “autocomplete” tool to write their code, I’m not sure it’d be possible to detect those parts as AI code.
I generally dislike those corporate AI tools, but I gave Copilot a try when writing some Terraform scripts, and it actually had good suggestions about as often as bad ones. However, if I hadn’t known the language and the resources I was deploying that well, it would probably have led me into a deep hole, trying to fix the mess after blindly accepting every suggestion
People seem to think that the development speed of any larger and more complex software depends on the speed at which the wizards can type in code.
Spoiler: this is not the case. Even if a project is a mere 50,000 lines long, one is the solo developer, and one has pretty good or even expert domain knowledge, one spends the major part of the time thinking, perhaps looking up documentation, or talking with people, and the most-used key on the keyboard doesn’t need a Dvorak layout, because it is the “delete” key. In fact, you don’t need to know touch-typing to be a good programmer; what you need is to think clearly and logically and to be able to weigh many different options against a variety of complex goals.
Which LLMs can’t.
I don’t think it makes writing code faster; it just may reduce the number of key presses required
They do more than just autocomplete, even in autocomplete mode. These AI tools suggest entire code blocks and logic and fill in multiple lines, compared to a standard autocomplete. And to use one as a standard autocomplete tool, no AI is needed; using it like that wouldn’t be bad anyway, so I have nothing against it.
The problems arise when the AI takes over the thinking and the brainwork of the actual programmer. Plus, as a user you get used to it and basically become “addicted”. Independent thinking and programming without AI will become harder and harder if you use it for everything.
The creator of curl just posted a rant about users submitting AI-slop vulnerability reports. It has gotten so bad that they will reject any report they deem AI slop.
So there’s some data.
And when they contribute to existing projects, their code quality is so bad, they get banned from creating more PRs.
I created this entirely using mistral/codestral
https://github.com/suoko/gotosocial-webui
Not real software, but it was done by instructing the AI about the basics of the mother app and the fediverse protocol