What new AI abilities? LLMs aren’t Pokémon.
It’s not always easy to distinguish between existentialism and a bad mood.
Slate Scott just wrote about a billion words of extra-rigorous prompt-anthropomorphizing fanfiction on the subject of the paper; he called the article When Claude Fights Back.
Can’t help but wonder if he’s just a critihype-enabling useful idiot who refuses to know better, or if he’s being purposefully dishonest to proselytize people into his brand of AI doomerism and EA, or if the difference is even meaningful.
edit: The Claude syllogistic scratchpad also makes an appearance. It’s that thing where we pretend they have a module that gives you access to the LLM’s inner monologue, complete with privacy settings, instead of just recording the result of someone prompting a variation of “So what were you thinking when you wrote so-and-so? Remember, no one can read what you reply here.” Cue a bunch of people in the comments moving straight into wondering if Claude has qualia.
Rationalist debatelord org Rootclaim, which in early 2024 lost a $100K bet by failing to defend the covid lab leak theory against a random ACX commenter, will now debate millionaire covid vaccine truther Steve Kirsch on whether covid vaccines killed more people than they saved, with the loser giving up $1M.
One would assume this to be a slam dunk, but then again one would assume the people who founded an entire organization about establishing ground truths via rationalist debate would actually be good at rationally debating.
It’s useful insofar as you can accommodate its fundamental flaw of randomly making stuff the fuck up, say by having a qualified expert constantly comb its output instead of doing original work, and don’t mind putting your name on low-quality derivative slop in the first place.
“And all that stuff just turned out to be true”
Literally what stuff? That AI would get somewhat better as technology progresses?
I seem to remember Yud specifically wasn’t that impressed with machine learning and thought so-called AGI would come about through ELIZA-type AIs.
In every RAG guide I’ve seen, the suggested system prompts always tended to include some more dignified variation of “Please for the love of god only and exclusively use the contents of the retrieved text to answer the user’s question, I am literally on my knees begging you.”
Also, if reddit is any indication, a lot of people actually think that’s all it takes and that the hallucination stuff is just people using LLMs wrong. I mean, it would be insane to pour so much money into something so obviously fundamentally flawed, right?
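For anyone who hasn’t waded through one of those guides: the pattern boils down to stuffing the retrieved chunks into the prompt and pleading with the model to stay inside them. Here’s a minimal sketch of that pattern; the function name, wording, and structure are my own illustration, not from any particular guide, and note that nothing in it actually prevents the model from confabulating.

```python
# Minimal sketch of the standard RAG prompt pattern. The names and the
# exact phrasing of the begging are illustrative, not from any specific guide.

def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Stuff retrieved text into the prompt and plead for groundedness."""
    context = "\n\n---\n\n".join(retrieved_chunks)
    return (
        "Use ONLY the context below to answer. If the answer is not in the "
        "context, say you don't know. Do not use outside knowledge.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# The assembled string goes to the LLM as-is; compliance is a polite
# request to a text predictor, not a guarantee.
print(build_rag_prompt("Who wrote it?", ["Chunk one.", "Chunk two."]))
```

Which is the whole joke: the “fix” for hallucination is a strongly worded paragraph.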
If you never come up with a marketable product you can remain a startup indefinitely.
Getting Trump reelected should count as ‘billionaire philanthropy’.
“thinkers like computer scientist Eliezer Yudkowsky”
That’s gotta sting a bit.
“promise me you’ll remember me”
I’m partial to “Avenge me!” as last words myself.
Nabokov’s Lolita really shouldn’t be pigeonholed as merely that, but I guess the movies are another story.
Dolores in Lolita was like twelve though, at least in the book.
In case anybody skips the article: it’s a six-year-old cybernetically force-grown into the body of a horny 13-to-14-year-old.
The rare sentence that makes me want to take a shower for having written it.
…with a huge chip on his shoulder about how the system caters primarily to normies instead of specifically to him, thinks he has fat-no-matter-what genes and is really into rape play.
The old place on reddit has a tweet up by Aella where she goes on a small evo-psych tirade about how, since there’s been an enormous amount of raid-related kidnapping and rape in prehistory, it stands to reason that women who enjoyed that sort of thing had an evolutionary advantage, and so that’s why most women today… eugh.
I wonder where the superforecasters stand on Aella being outed as a Ghislaine Maxwell type fixer for the TESCREAL high priesthood.
There’s also the communal living, the workplace polyamory along with the prominence of the consensual non-consensual kink, the tithing of the bulk of your earnings and the extreme goals-justify-the-means moralising, the emphasis on psychedelics and prescription amphetamines, and so on and so forth.
Meaning, while calling them a cult incubator is actually really insightful and well put, I have a feeling that the closer you get to TESCREAL epicenters like the SFB the more explicitly culty things start to get.
EA started as an offshoot of LessWrong, LW-style rationalism is still the main gateway into EA since it’s pushed relentlessly in those circles, and EA funnels vast amounts of money back into LW goals. The air-strikes-against-datacenters guy is basically bankrolled by Effective Altruism, and he’s also the reason EA considers magic AIs (so-called Artificial Super Intelligences) by far the most important risk to humanity’s existence; they consider climate change mostly survivable and thus of far less importance, for instance.
Needless to say, LLM peddlers loved that (when they weren’t already LW/EAs or adjacent themselves, like the previous OpenAI administrative board before Altman and Microsoft took over). edit: also the founders of Anthropic.
Basically you can’t discuss one without referencing the other.
It’s complicated.
It’s basically a forum created to venerate the works and ideas of that guy who, in the first wave of LLM hype, had an editorial published in TIME calling for a worldwide moratorium on AI research and GPU sales, to be enforced with unilateral airstrikes, and whose core audience got there by being groomed by one of the most obnoxious Harry Potter fanfictions ever written, by said guy.
Their function these days tends to be providing an ideological backbone of bad-scifi justifications for deregulation and the billionaire takeover of the state, which among other things has made them hugely influential in the AI space.
They are also communicating vessels with Effective Altruism.
If this piques your interest, check the links in the sidebar.
NASB: does anybody else think the sudden influx of articles (from Kurzgesagt to the recent WaPo one) pushing the idea that you can’t lose weight through exercise has anything to do with Ozempic being aggressively marketed at the same time?
I mean, you could have answered by naming one fabled new ability LLMs suddenly ‘gained’ instead of being a smarmy tadpole, but you didn’t.