

PZ Myers boosted the pivot-to-ai piece on veo3: https://freethoughtblogs.com/pharyngula/2025/06/23/so-much-effort-spiraling-down-the-drain-of-ai/
It’s not always easy to distinguish between existentialism and a bad mood.
Fund copyright infringement lawsuits against the people they had been bankrolling the last few years? Sure, if the ROI is there, but I’m guessing they’ll likely move on to the next trendy-sounding thing, like a quantum remote diddling stablecoin or whatevertheshit.
I too love to reminisce about the time (like 3 months ago) when the c-suite would think twice before okaying uploading whatever wherever, ostensibly on the promise that it would cut delivery time by (up to) some notable percentage, but mostly because everyone else is also doing it.
Code isn’t unmoated because it’s mostly shit; it’s because there are only so many ways to pound a nail into wood, and a big part of what makes a programming language good is that it won’t let you stray too much without good reason.
You are way overselling coding agents.
Ah yes, the supreme technological miracle of automating the ctrl+c/ctrl+v parts when applying the LLM snippet into your codebase.
On the other hand, they blatantly reskinned an entire existing game, and there’s a whole breach-of-contract aspect, since apparently they were reusing code they themselves wrote while working for Bethesda, who I doubt would’ve cared as much if this were only about an LLM-snippet’s worth of code.
I’d say that’s incredibly unlikely unless an LLM suddenly blurts out Tesla’s entire self-driving codebase.
The code itself is probably among the least behind-a-moat things in software development; that’s why so many big players are fine with open sourcing their stuff.
Yet, under Aron Peterson’s LinkedIn posts about these video clips, you can find the usual comments about him being “a Luddite”, being “in denial” etc.
And then there’s this:
From: Rupert Breheny
Bio: Cobalt AI Founder | Google 16 yrs | International Keynote Speaker | Integration Consultant AI
Comment: Nice work. I’ve been playing around myself. First impressions are excellent. These are crisp, coherent images that respect the style of the original source. Camera movements are measured, and the four candidate videos generated are generous. They are relatively fast to render but admittedly do burn through credits.
From: Aron Peterson (Author)
Bio: My body is 25% photography, 25% film, 25% animation, 25% literature and 0% tolerating bs on the internet.
Comment: Rupert Breheny are you a bot? These are not crisp images. In my review above I have highlighted these are terrible.
AI is the product, not the science.
Having said that:
you know that there’s almost no chance you’re the real you and not a torture copy
If the basilisk’s wager were framed like that, that you can’t know whether you are already living in the torture sim with the basilisk silently judging you, it would be way more compelling than the actual “you are ontologically identical with any software that simulates you at a high enough level, even way after the fact, because [preposterous transhumanist motivated reasoning]”.
Scott A. comes off as such a disaster of a personality. Hope it’s less obvious in his irl interactions.
I’d say if there’s a weak part in your admittedly tongue-in-cheek theory it’s requiring Roko to have had a broader scope plan instead of a really catchy brainfart, not the part about making the basilisk thing out to be smarter/nobler than it is.
Reframing the infohazard aspect as an empathy filter definitely has legs in terms of building a narrative.
Not wanting the Basilisk eternal torture dungeon to happen isn’t an empathy thing, they just think that a sufficiently high fidelity simulation of you would be literally you, because otherwise brain uploads aren’t life extension. It’s basically transhumanist cope.
Yud expands on it in some place or other, along the lines that the gap in consciousness between the biological and digital instance isn’t that different from the gap created by anesthesia or a night’s sleep, it’s just on the space axis instead of the time axis, or something like that.
And since he also likes the many-worlds interpretation, it turns out you also share a soul with yourselves in parallel dimensions; this is why the zizians are so eager to throw down, since getting killed in one dimension just lets supradimensional entities know you mean business.
Early 21st century anthropology is going to be such a ridiculous field of study.
Here’s the exact text in the prompt that I had in mind (found here), it’s in the function specification for the js repl:
[…] The analysis tool (also known as the REPL) can be used to execute code in a JavaScript environment in the browser.
What is the analysis tool?
The analysis tool is a JavaScript REPL. You can use it just like you would use a REPL. But from here on out, we will call it the analysis tool.
When to use the analysis tool
Use the analysis tool for:
- Complex math problems that require a high level of accuracy and cannot easily be done with “mental math”
  - To give you the idea, 4-digit multiplication is within your capabilities, 5-digit multiplication is borderline, and 6-digit multiplication would necessitate using the tool.
  - […]
What if this is not a case of being terminally AI-pilled? What if this is the absolute pinnacle of what billions and billions of dollars in research will buy you when you need your lake-drying, sea-boiling LLM-as-a-service to not look dumb next to a pocket calculator?
Except not really, because even if stuff that has to be reasoned about in multiple iterations were a distinct category of problems, reasoning models by all accounts hallucinate a whole bunch more.
Anecdotally, it took like one and a half weeks from the c-suite okaying the use of copilot for people to start considering googling beneath them and escalating to me the literal dumbest shit just because copilot was having a hard time with it.
Claude’s system prompt had leaked at one point; it was a whopping 15K words, and it included a directive that if it were asked a math question “that you can’t do in your brain” (or some very similar language) it should forward it to the calculator module.
Just tried it, Sonnet 4 got even fewer digits right:
425,808 × 547,958 = 233,325,693,264
(correct is 233,324,900,064)
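For what it’s worth, the product is trivial to check with exact integer arithmetic, which is all a “calculator module” would need. A minimal Python sketch, using the numbers from the comment above:

```python
# Python ints are arbitrary precision, so a 6-digit multiplication
# that is "borderline beyond" an LLM is a one-liner here.
a, b = 425_808, 547_958
product = a * b
print(product)  # 233324900064

# The answer Sonnet 4 reportedly gave, for comparison:
llm_answer = 233_325_693_264
print(llm_answer - product)  # off by 793200
```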
I’d love to see benchmarks on exactly how bad at numbers LLMs are, since I’m assuming there’s very little useful syntactic information you can encode in a word embedding that corresponds to a number. I know RAG was notoriously bad at matching facts with their proper year, for instance, and using an LLM as a shopping assistant (“ChatGPT, what’s the best 2K monitor for less than $500 made after 2020?”) is an incredibly obvious use case that the CEOs who love to claim such-and-such profession will be done as a human endeavor by next Tuesday after lunch won’t even allude to.
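To illustrate the embedding point: subword tokenizers typically fragment long numerals into arbitrary digit chunks, so numerically adjacent values can end up with completely unrelated token sequences. A toy sketch; the fixed three-digit chunking is a stand-in for illustration, not any real tokenizer’s behavior:

```python
def toy_tokenize(n: int, chunk: int = 3) -> list[str]:
    """Split a number's digits into fixed-size chunks,
    loosely mimicking how BPE-style tokenizers fragment numerals."""
    s = str(n)
    return [s[i:i + chunk] for i in range(0, len(s), chunk)]

# Adjacent numbers share no tokens once a carry ripples through:
print(toy_tokenize(999_999))    # ['999', '999']
print(toy_tokenize(1_000_000))  # ['100', '000', '0']
```

Two values that differ by 1 look like two unrelated “words” to the model, which is a poor substrate for arithmetic or for range queries like “made after 2020”.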
(No spoiler tags because it’s just background lore for Dune that’s very tangential to the main plot)
Dune - after catastrophic wars between humans and AIs, computers are forbidden.
That’s a retcon from the incredibly shit Dune-quel books from like 15 years after the original author had died. The first Dune was written well before computers as we know them came into vogue, and the Butlerian Jihad was meant to be a sweeping cultural revolution against the stranglehold that automated decision-making had achieved over society, fought not against off-brand terminators but against the entrenched elites who monopolized access to the setting’s equivalent of AI.
Semi-canonically (via the Dune Encyclopedia), I think the inciting incident was some sort of robo-nurse casually euthanizing Serena Butler’s newborn baby because of some algorithmic verdict that keeping it alive didn’t square with optimal utilitarian calculus.
tl;dr: The Butlerian Jihad originally seemed to be way more about against-the-walling the altmans and the nadellas and undoing the societal damage done by the proliferation of sfba rationalism than about fighting epic battles against AI-controlled mechs.
It’s been ages since I read Hyperion, but I think it’s one of those settings that start out somewhat utopian, but as the story progresses you’re meant to realize they’re deeply fucked.
Also I had to look up Camp of the Saints, and I think complaining about living there may be a racist dog whistle.
I mean even if you somehow miss the whole computers are haram aspect of the duniverse, being a space peasant ruled by psychic tyrants still hardly seems like a winning proposition.
Ed Zitron summarizes his premium post in the better offline subreddit: Why Did Microsoft Invest In OpenAI?
Summary of the summary: they fully expected OpenAI would’ve gone bust by now and MS would be looting the corpse for all it’s worth.