Every single company pouring money into the incinerator is positive they’ll be the one to crack actually useful AI, or even actual AGI.
Nah. They just believe it will make stock values increase (or that not doing the AI thing will cause stock values to decrease).
Remember, a publicly traded company exists to produce shareholder value. How it does that doesn’t matter.
Imagine how much more valuable Alphabet stock would be if they hadn’t destroyed the core design and user experience of their search engine 😅
Most of the current value of AI comes from the fact that Google is useless now.
I really, really do not trust any of those cunts with AGI.
My only hope for AGI is that it gets open-sourced and is easily runnable on sub-$10,000 hardware.
I mean, LLMs are already very useful when used correctly; it’s just that 98% of the time they aren’t used correctly.
We’re talking about the bubble here, not reasonable use cases. :-)
How do I use it “correctly”?
I had some files that I knew had duplicates, but they didn’t match exactly, and while the filenames weren’t identical, you could tell by looking that they were the same.
It would have been very tedious to go through all of them by hand; the LLM was able to identify a “good enough” number of duplicates and only made a few mistakes. It greatly sped up the manual work required to clean up the collection.
But that’s so far from most advertised scenarios and not compelling from a “make lots of money” perspective.
There are (non-AI) algorithms for that. Git uses one to detect renamed files. No need to melt the ice caps just for that.
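For what it’s worth, that kind of fuzzy matching doesn’t need a model at all. Here’s a minimal sketch in Python that flags suspiciously similar filenames using the standard library’s difflib. To be clear, this is not git’s actual rename detection (git compares file contents), and the directory path and the 0.8 similarity threshold are just illustrative placeholders.

```python
# Minimal sketch of non-AI near-duplicate detection using only the Python
# standard library. Not git's rename-detection algorithm: git works on file
# contents, this toy only compares filenames. The directory path and the
# 0.8 similarity threshold below are illustrative placeholders.
from difflib import SequenceMatcher
from itertools import combinations
from pathlib import Path


def likely_duplicates(directory: str, threshold: float = 0.8):
    """Yield pairs of filenames whose names are suspiciously similar."""
    names = [p.name for p in Path(directory).iterdir() if p.is_file()]
    for a, b in combinations(names, 2):
        if SequenceMatcher(None, a, b).ratio() >= threshold:
            yield a, b


if __name__ == "__main__":
    for a, b in likely_duplicates("./collection"):
        print(f"possible duplicate: {a} <-> {b}")
```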
You use it for pointers and double-check the results. I’ve had a lot of luck recently using it to explain terminology for complicated, specialized trades work and stuff.
We used one to come up with a name for a feature cocktail at work. It’s pretty good for that kind of stuff.
They’re decent at language tasks. So, if you provide them with all the information and configure them to not make up any of their own, then they can do things like rewriting it in a different style or different language relatively competently.
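As a concrete illustration of “give it all the information and tell it not to add any of its own,” here’s a minimal sketch assuming the OpenAI Python client (openai >= 1.0); the model name, prompt wording, and temperature setting are illustrative assumptions, not a recommendation.

```python
# Minimal sketch of a "rewrite only, don't invent" setup, assuming the
# OpenAI Python client (openai >= 1.0). The model name, prompt wording,
# and temperature are illustrative assumptions, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def rewrite(text: str, style: str) -> str:
    """Ask the model to restyle the given text without adding new facts."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,  # reduce improvisation
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the user's text in the requested style. "
                    "Use only information present in the text; do not add, "
                    "remove, or embellish any facts."
                ),
            },
            {"role": "user", "content": f"Style: {style}\n\nText:\n{text}"},
        ],
    )
    return response.choices[0].message.content


print(rewrite("The meeting moved to 3pm because the room was double-booked.",
              "formal announcement"))
```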
That’s the trillion-dollar puzzle nobody has been able to solve yet. It’s not trivial at all, even when it seems like it should be.
"Correctly " is a term that has several different uses and meanings. Depending on the context, “Correctly” can mean: