I haven’t tried it either. Not even as a joke. I didn’t need to. I’ve seen its effects and come to a conclusion: I’ll reject AI, and whatever convenience it might bring, in order to improve my own organic skills.
It’s not a terrible tool if you already have critical thinking skills and can analyze the output and reject the nonsense. I consider it an ‘idea’ machine: when coding, it was sometimes helpful for giving me a new idea, but I never used what it spit out, because it writes nonsensical code far too frequently to be trusted. The problem is that if you don’t already know what you’re doing, you don’t have the skills to do that critical analysis, so it turns into a self-defeating feedback loop. That’s what we aren’t ready for, because our public education has been so abysmal for the last… forever.
But if you can analyze the content and reject the nonsense, then you didn’t need it in the first place, because you already knew enough about the topic.
And when you’re using it for things you don’t know enough about, that’s exactly where you can’t spot the nonsense! Because you’ve noticed nonsense before, you’ll tell yourself that “you can tell,” but you won’t actually be able to, because you’re going from known-unknown into unknown-unknown territory. You won’t even notice the nonsense, because you don’t know what nonsense could even be there.
Large language models are just that: they generate language without any sense behind it. If you use one for anything at all that requires reasoning, you’re using it wrong.
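To make that concrete, here’s a toy sketch of the generation loop in Python. To be clear, this is a made-up bigram model, not how real LLMs work internally (they use transformers over learned token representations), and the training text is invented for illustration. But the loop is the same basic idea: sample a statistically plausible next word, append it, repeat. Nothing anywhere in it checks whether the output is true or coherent.

```python
import random
from collections import defaultdict

# Toy stand-in for "training": count which word tends to follow which.
training_text = (
    "the model writes code the model writes text "
    "the code looks right the text looks right"
)
follows = defaultdict(list)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

# "Generation": repeatedly sample a likely next word. There is no step
# here that asks whether the sentence means anything.
word = "the"
output = [word]
for _ in range(10):
    candidates = follows.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))  # fluent-looking, but no reasoning behind it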
Literally the only thing LLMs are good for is shit like “please reword this like that” or “please write ad copy praising such-and-such features of a product”; stuff that is about language, and that’s it.
I’m certainly biased about their usefulness, because all I’ve ever used them for was to get coding ideas when I had a thorny problem. It was good for giving me a direction of thought on a function or process that I hadn’t considered, but there was so much garbage in the actual code that I would never use it; it just pointed me in the right direction to go write my own. So it’s not that I ‘needed’ it, but it did save me some time on a few occasions when I was working on a difficult programming issue. Certainly not earth-shattering, but it has been useful to me a few times in that regard.
I don’t even like to talk much about the fact that I’ve found it slightly useful at work once in a while, because I’m an anti-LLM person, at least where the way they’re being promoted is concerned. I’m very unhappy with the blind trust so many people and companies put in them, and I think it’s causing real harm.