- cross-posted to:
- [email protected]
No one uses this meme correctly and it makes me irrationally upset.
At this point, this movie is probably older than most of the people that use this meme template.
brb crawling into a hole and crying
In which case would a competent dev use an LLM?
It’s outstanding at bridging the gap between “I need to mash these two concepts/technologies together” and “the answer is spread across six different StackOverflow threads.” Hunting that stuff down with Google has been a delicate operation for the last 25 years, even at the best of times, and it always took a lot of time. With an LLM, each such query has saved me hours, maybe even whole workdays. Fact-checking the AI takes far less effort.
When the documentation is shit and you don’t have time to scroll through 100 classes to find that one optional argument that one method accepts, I find LLMs very useful. They are pretty good at understanding and summarizing text, not so much at logic though, which is key for developing.
If you need to use a new language that you are not yet used to, it can get you through the basics quite efficiently.
I find it quite proficient at translating complex mathematical functions into code, especially since it accepts LaTeX pretty-printing as input and usually reads it correctly.
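As a toy illustration of the kind of translation meant here (the formula and the function name are my own examples, not from the comment), handing an LLM the LaTeX source \sigma(x) = \frac{1}{1 + e^{-x}} typically yields something like:

```python
import math

# Hypothetical example: the LaTeX \sigma(x) = \frac{1}{1 + e^{-x}}
# (the logistic function) transcribed into Python.
def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))
```

The easy part is the transcription; the part worth double-checking is whether every exponent, sign, and grouping from the LaTeX survived intact.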
As an advanced rubber duck that spits out wrong answers so your brain can reach the right one quickly. A lot of the time I find myself blocked on something, ask the AI to solve it, and it gives me a ridiculous response that would never work; but seeing why that won’t work makes it easier for me to figure out something that will.
When Management™️ demands the app “Do AI” because “it’s the hot new thing”
Manglement will have to fill an open position real soon
Looking up how to do something, as an improved stackoverflow. Especially if it provides sources in the answer.
Boilerplate unit tests. Yes, yes, I know - use parametrized test, but it’s often not practical.
Mass refactoring. This is tricky because you need to thoroughly review it, but it saves you annoying typing.
I’m sure there’s more, it’s far from useless. But you need to know what you want it to do and how to check if done correctly.
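For context on the “parametrized test” alternative mentioned above (sketched here with pytest and a made-up `clamp` function), one table-driven test replaces a pile of near-identical boilerplate tests; the trade-off is that it only works when the cases really do share one shape:

```python
import pytest

# Hypothetical function under test.
def clamp(value, lo, hi):
    return max(lo, min(value, hi))

# One parametrized test instead of three copy-pasted test functions.
@pytest.mark.parametrize(
    "value, lo, hi, expected",
    [
        (5, 0, 10, 5),    # already in range
        (-3, 0, 10, 0),   # below the lower bound
        (42, 0, 10, 10),  # above the upper bound
    ],
)
def test_clamp(value, lo, hi, expected):
    assert clamp(value, lo, hi) == expected
```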
Boilerplate unit tests.
It will generate bad tests, so you will end up with lots of tests blocking your work that don’t actually test the important properties.
Mass refactoring.
That’s an amount of trust in the LLM capacity to not create hidden corner cases and your capacity to review large-scale changes that… I find your complete faith disturbing.
I mean, it’s not like it ships it to production. You can read code it writes and modify it if you don’t like it, or choose not to use it.
If you can read the code it writes and modify it, a project manager can remove that time from you and take the AI slop direct to production.
Another good reason to never let the company’s project become your project.
That’s a different problem. The original question was when would a competent dev use an LLM.
As always, the specific situation matters. Some refactors are mostly formulaic, and AI does great at that. For example, “add/change this database field, update the form, then update the api, update the admin page, update the ui, etc.” is perfectly reasonable to send an AI off to do, and can save plenty of programmer time.
Then one day you don’t properly check the diff, a +/- or </=/>/<=/>= gets reversed, and you have an RCE in test, soon to be in prod.
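A toy sketch of that failure mode (the function names and the `MAX_LEN` limit are invented for illustration): a single flipped comparator in a size guard inverts the check entirely, and in a large diff it’s one character among thousands.

```python
MAX_LEN = 256

# Correct guard: reject oversized payloads.
def handle_request(payload: bytes) -> str:
    if len(payload) > MAX_LEN:
        return "rejected"
    return "accepted"

# The same guard after a flipped comparator slipped through review:
# now *only* oversized payloads get through, and normal ones are rejected.
def handle_request_buggy(payload: bytes) -> str:
    if len(payload) <= MAX_LEN:  # > silently became <=
        return "rejected"
    return "accepted"
```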
What kind of moron doesn’t check the diff? Plus, modern AI coding tools explicitly show the diff and ask you to confirm each edit directly.
I wouldn’t let a human muck about in my code unchecked, much less an AI. But that doesn’t mean it’s useless.
I very rarely find result summarizers useful. If I couldn’t find something normally, there won’t be anything in there either.
I sure love tests and huge codebases with errors in them. In the time it takes me to read and understand an LLM’s output, I could write it myself, and save time later when expanding/debugging.
I am so far from trusting an LLM to do mass refactoring, even with heavy review. Refactoring bugs can be super insidious.
I use it daily. I wouldn’t blindly trust code it writes, but it offers alternative solutions, and when I’m hunting for a bug it’s very good at giving me ideas of what might be wrong at a glance. For Terraform and infra too, it can catch nuances I may be missing.
Finding logic errors 7 hours into the workday.
False logic errors created by the AI while asking it to solve real world logic errors?
No, plain old human made ones.
I asked it to translate all my strings to another language. So I guess i18n support. It’s decent.
And you are sure it’s not spewing hallucinations or neo-fascism in a language you don’t understand… why?
You should try using an LLM to translate things. It’s actually pretty good compared to more traditional translators. I think translation is an area LLMs actually excel in.
I have to, for my KPIs! I guess job interviews are the real personal performance meetings though.
To quickly write the same pattern thousands of people write every day
What a terrible idea.
If I were to pay for a digital item, I would expect a guarantee that the data is valid and works.
As in, paying for LLMs, imo, is actually a worse deal than standard micro transactions.
They are more like loot boxes, you pay for the chance of a good result.
Pay per loaf of bread at the baker
… Microtransactions for hungry people