I’ve tried a few GenAI tools and didn’t find them much different from CleverBot back in the day. A bit better at generating responses that sound normal, but asking serious questions still produced answers of questionable accuracy.
If you just chat with it about your favorite superhero, it can sound like an average person (complete with whatever errors an average person might spew), but if you try to use it as a knowledge base, it falls apart, because it is not intelligent. It does not think. And it isn’t trained to give only factual answers, even if it had been trained exclusively on factual data. It can blend two unrelated subjects into an entirely new, bogus response.
It’s incredibly effective for task assistance, especially in domains that are logical and consistent, like math, programming languages, and hard science. What this means is that you no longer need to memorize Excel formulas or programming syntax: you tell it what you want and it spits out the answer maybe 90% of the time. If you don’t see the efficacy of AI, you’re likely not using it for what it’s currently good at.
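To illustrate the kind of request that plays to that strength: a prompt like “sum the sales for the West region” (the Excel `=SUMIF(A:A, "West", B:B)` sort of task) is mechanical and well-specified, so a model tends to get it right. A hypothetical answer might look like this sketch (the `sumif` helper and the sample data are invented for illustration, not from any real tool’s output):

```python
def sumif(rows, key, match, value):
    """Sum the `value` field across rows whose `key` field equals `match`.

    Roughly equivalent to Excel's =SUMIF(key_range, match, value_range).
    """
    return sum(row[value] for row in rows if row[key] == match)

sales = [
    {"region": "West", "amount": 120},
    {"region": "East", "amount": 90},
    {"region": "West", "amount": 60},
]

print(sumif(sales, "region", "West", "amount"))  # 180
```

Tasks like this have a single checkable answer, which is exactly where these tools are most reliable.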
Developer here
I had to spend three weeks fixing a tiny app that a vibe coder built with AI. It required rewriting significant portions from the ground up, because AI code is nearly unusable at scale: debugging is 10x harder, the code is undocumented, and there is no institutional knowledge of how the internal system works.
AI code can maybe be OK for bootstrapping a single-programmer project, but it’s pretty much useless for real enterprise-level development.
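To make the maintenance complaint concrete, here’s a contrived sketch (invented for illustration, not code from the actual app) of the pattern: generated code that behaves correctly but records none of its intent, next to the kind of rewrite a maintainer ends up doing.

```python
# Typical generated style: works, but opaque names and an unexplained
# guard condition leave the next developer reverse-engineering intent.
def proc(d):
    r = {}
    for k, v in d.items():
        if v and v[0] > 0:
            r[k] = sum(v) / len(v)
    return r

# The same behavior after a maintainer's rewrite: names and a docstring
# capture the intent, including why some series are skipped.
def average_valid_series(readings):
    """Average each series of readings per key.

    Skips empty series and any series whose first reading is non-positive
    (assumed here to be a sentinel for bad data).
    """
    averages = {}
    for series_id, values in readings.items():
        if values and values[0] > 0:
            averages[series_id] = sum(values) / len(values)
    return averages
```

Both functions return the same results; the difference is that only one of them tells you *why* it does what it does, and that gap is what costs weeks when nobody who prompted the code is around to explain it.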