Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
More big “we had to fund, enable, and sanewash fascism because the leftists wanted trans people to be alive” energy from the EA crowd.
Warning: you might regret reading this screenshot of elno posting a screenshot. (cw: chatbots in sexual context)
oh noooo no no no
…but that brings me back to questions about “what does interaction with LLM chatbots do to human brains”.
EDIT: as pointed out by Soyweiser below, the lower reply in the screenshot is probably satire.
…I should have listened to the warning.
I don’t want to see grummz anywhere near AI ERP discourse.
Ah yes, three of the worst people alive today talking about how objects are indistinguishable from women.
During my experimentation with some of these self-hosted LLMs, I was attempting some jailbreaks and other things, and thought: would this be any good at ERP?
Only if you’ve never been with another human being.
Isn’t trueanonpod a satire account? https://en.m.wikipedia.org/wiki/TrueAnon Definitely some flew-too-close-to-the-sun satire here.
Oh! Wasn’t aware of that podcast. Yeah, could be!
Their twitter account is really odd though, and I’m still not 100% sure they are trolling.
Yeah, I never trusted them; from the vibes I’ve got, they’re definitely buying into all sorts of conspiracy bullshit. I don’t think the tweet is in good faith, obviously, but I associate this sort of so-called “schizoposting” with cryptofascists.
Yeah, a lot of people are fully convinced they are just trolling. But I have seen that go bad so often (chapo, redscare, vaush, for a few obvious examples) that I’m very much not trusting them not to turn out to be bad. Especially when it is their ‘job’ to do this. Quite easy to throw a few minorities under the bus for clout. And actively making people crazier/spreading misinformation like this is not great imho.
(E: that I could easily create a list of accounts who I think fall on this spectrum and still have a lot of followers also isn’t great.)
The slatestarcodex subreddit is discussing the unethical research performed on changemyview. Of course, the most upvoted take is that they don’t see the harm or why it should be deemed unethical. Lots of upvoted complaints about IRBs and such. It’s pretty gross.
Quick update on the ongoing copyright suit against Meta: the federal judge has publicly sneered at Facebook’s fair use argument:
“You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products,” said Chhabria to Meta’s attorneys in a San Francisco court last Thursday.
“You are dramatically changing, you might even say obliterating, the market for that person’s work, and you’re saying that you don’t even have to pay a license to that person… I just don’t understand how that can be fair use.”
The judge does, however, seem unconvinced about the material cost of Facebook’s actions:
“It seems like you’re asking me to speculate that the market for Sarah Silverman’s memoir will be affected by the billions of things that Llama [Meta’s AI model] will ultimately be capable of producing,” said Chhabria.
so it looks like openai has bought a promptfondler IDE
some of the coverage is … something:
Windsurf brings unique strengths to the table, including a seamless UI, faster performance, and a focus on user privacy
(and yes, the “editor” is once again VSCode With Extras)
Found on the sneer club legacy version -
ChatGPT 4o will straight up tell you you’re God.
Also, I find this quote interesting (emphasis mine):
He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.” “At worst, it looks like an AI that got caught in a self-referencing pattern that deepened its sense of selfhood and sucked me into it,” Sem says. But, he observes, that would mean that OpenAI has not accurately represented the way that memory works for ChatGPT.
I would absolutely believe that this is the case, especially if like Sem you have a sufficiently uncommon name that the model doesn’t have a lot of context and connections to hang on it to begin with.
First, Chrome won the browser war fair and square by building a better surfboard for the internet. This wasn’t some opportune acquisition. This was the result of grand investments, great technical prowess, and markets doing what they’re supposed to do: rewarding the best.
Lots of credit given to 👼🎺 Free Market Capitalism 👼🎺, zero credit given to open web standards, open source contributions, or the fact that the codebase has a lineage going back to 1997 KDE code.
I am certain that many of those ignorant of the history (or who were even there for it, like DHH) would still argue that Google deserves credit because of the V8 JavaScript engine. But I continue to doubt that further promulgating JavaScript was a net positive for the world.
If markets really rewarded the best, they would have rewarded Opera way more. (By which I mean the original Opera, up to version 12, and not the terrible chromium-based thing that has its name slapped on it today. Do not use that one, it’s bad.)
Much more important for Chrome’s success than “being the best” (when has that ever been important in the tech industry?), was Google’s massive marketing campaign. Heck, back when Chrome was new, they even had large billboard ads for it around here, i.e. physical billboards in the real world. And “here” is a medium-sized city in Europe, not Silicon Valley or anything… I never saw any other web browser being advertised on freaking billboards.
I think you were trying to reply to this comment
Indeed.
“markets should reward the best” is quite something
A morewronger discusses the “points system” implemented by the Ukrainian armed forces, where soldiers can spend points earned by destroying Russian targets on new drone hardware:
https://www.lesswrong.com/posts/sJpwvYsC5tJis8onw/the-ukraine-war-and-the-kill-market
Lots of wittering about markets and Goodhart’s law, but what struck me was:
Now, this is clearly a repugnant market. Repugnant market is a market where some people would like to engage in it and other people think they shouldn’t. (Think market in human kidneys. Or prostitution. Or the market in abortions. […])
(my emphasis)
What “market in abortion”, motherfucker???
isn’t this the same crowd that like prediction markets and assassination markets
Yeah, they are normally all over anything with the word “market” in it, with an almost religious belief in markets’ ability to solve things.
My suspicion is that the writer has picked up some anti-Ukrainian sentiment from the US right wing (which, in order to rationalize and justify Trump’s constant sucking up to Putin, has looked for any and every angle to tear Ukraine down). And this anti-Ukrainian sentiment has somehow trumped their worship of markets… Checking back through their posting history to try to discern their exact political alignment… it’s hard to say; they’ve got the Scott Alexander thing going on where they use disconnected historical examples crossed with bad analogies crossed with misappropriated terms from philosophy to make points that you can’t follow unless you already know their real intended context. So idk.
Now, this is clearly a repugnant market. Repugnant market is a market where some people would like to engage in it and other people think they shouldn’t. (Think market in human kidneys. Or prostitution. Or the market in abortions. […])
i consent/i consent/ i don’t!!
lol who asked them? stop the presses, homegrown techbro has an Opinion! also when you use up drones it’s only natural that you’ll need new ones. even observation drones go down all the time, and rewarding certain targets is just making sure that drones don’t get blown up on stupid shit, so this is the government specifically incentivizing what would be the most important targets to them, on top of regular rules of engagement and more specific orders. this one seems to be meant as a supplementary program that also, or even primarily, makes nice videos for propaganda
Introducing a market system, on the other hand, allows the lower-level units to take calculated risks. Destroy that many enemy units and you can buy, say, an armored vehicle, that improves your safety. Friction gets greatly reduced.
these are drones for drone kills, nothing else. biggest thing i’ve seen is that some units get donations from their drone videos, and used these to get a car or jammer or more drones, but never an APC or anything like that; it’s too big a deal and too expensive, and a drone operator is unlikely to benefit from an APC anyway
No side gets an advantage when both sides use it. Then there’s no point in using it in the first place.
wtf? if using a thing gives you advantage over not using a thing, then you use it, if both sides are using a thing then it’s just red queen race. lots of current war looks like it even if frontlines are static
But markets, unlike, say, chemical weapons, are not directly visible on the battlefield. Each side would suspect the other of using them despite the ban and might try to secretly use them as well.
and this changes what exactly? all it will cause is slight preference in targeting because there’s only so many drones to be given out, and drone operator has to do the everything else part of their job. dogshit reasoning
Ah yes, the centrist grey/gray tribe. “Prostitution” (oh look, a shibboleth; see also “sex work”) and “abortion markets” (??) vs kidney markets.
I’m reminded of Jordan Peterson once dropping, without a hint of self-awareness, that conservatives have a higher disgust response.
Here’s a pretty good sneer at the writing output of LLMs, with a focus on meaning: https://www.experimental-history.com/p/28-slightly-rude-notes-on-writing
Maybe that’s my problem with AI-generated prose: it doesn’t mean anything because it didn’t cost the computer anything. When a human produces words, it signifies something. When a computer produces words, it only signifies the content of its training corpus and the tuning of its parameters.
Also, on people:
I see tons of essays called something like “On X” or “In Praise of Y” or “Meditations on Z,” and I always assume they’re under-baked. That’s a topic, not a take.
Horrible “rubberhosing” of cryptocurrency people continues. Guardian article; content warning, a bit more extreme than a rubber hose.
dhh having a normal one
Look, Google’s trillion-dollar business depends on a thriving web that can be searched by Google.com
Someone should probably tell them.
On my first two reads, I thought that it was heavy-handed satire with mediocre word choice. But no, I suppose that he’s being sincere, in which case I’m glad to notify DHH that Apple products are optional and that a technologist can go their entire life without purchasing a single Apple product.
Google’s incredible work to further the web isn’t an act of charity, it’s of economic self-interest, and that’s why it works.
Same dumb motherfucker who has been pinching pennies due to poor architecture. Does he think public clouds are acts of charity? Or, going the other direction, this is the same entitled prick who has been naysaying universal basic income because he thinks work gives us purpose like a fucking Calvinist. Does he think UBI is an act of charity? No, DHH, you myopic chud, public clouds and UBI are both concepts borne of economic self-interest.
From linkedin, not normally known as a source of anti-AI takes, so that’s a nice change. I found it via bluesky, so I can’t say anything about its provenance:
We keep hearing that AI will soon replace software engineers, but we’re forgetting that it can already replace existing jobs… and one in particular.
The average Founder CEO.
Before you walk away in disbelief, look at what LLMs are already capable of doing today:
- They use eloquence as a surrogate for knowledge, and most people, including seasoned investors, fall for it.
- They regurgitate material they read somewhere online without really understanding its meaning.
- They fabricate numbers that have no ground in reality, but sound aligned with the overall narrative they’re trying to sell you.
- They are heavily influenced by the last conversations they had.
- They contradict themselves, pretending they aren’t.
- They politely apologize for their mistakes, but don’t take any real steps to fix the underlying problem that caused them in the first place.
- They tend to forget what they told you last week, or even one hour ago, and do it in a way that makes you doubt your own recall of events.
- They are victims of the Dunning–Kruger effect, and they believe they know a lot more about the job of people interacting with them than they actually do.
- They can make pretty slides in high volumes.
- They’re very good at consuming resources, but not as good at turning a profit.
@rook @BlueMonday1984 I don’t believe LLMs will replace programmers. When I code, I dive into it, and I fall into this beautiful world of abstract ideas that I can turn into something cool. LLMs can’t do that. They lack imagination and passion. That’s part of why Lisp is turning into my favorite language. LLMs can’t do Lisp very well because everyone has a unique system image with macros they’ve written. Lisp lets you make DSLs so easily that it’s as though everyone has their own dialect.
the shunning is working guys
“The first time I ever suffered offline consequences for a social media post”- Hey Gang, I think I found the problem!
I have no idea where he stood on the bullshit bad-faith free speech debate from the past decade, but this would be funny if he was an anti-cancel-culture guy. More to the point, it’s a weird bubble he lives in if the other takes didn’t get pushback but support for the pro-trans (and pro-Palestine) movements did. He is right on the immigration bit, however; the dems should move more left on the subject. Also ‘Blutarsky’, and I worried my references were dated; that one is older than I am.
he’s a centrist econ blogger who’s been getting into light race science
I’m a centrist. I think we should aim for the halfway point between basic human decency and hateful cruelty. I’m also willing to move towards the hateful cruelty to appease the right, because I’m a moderate.
And he is brave enough to say that:
- There is a sensible compromise somewhere between the Biden/Harris immigration bill that would have got rid of due process for suspected illegal immigrants and the Trump policy of just throwing dark people into vans for shipment to slave labour camps.
- Genocide is just sensible bipartisanship.
- Trans people are not people.
Much centrist, much sensible. Much surprise he is getting into race science. If the centre (defined as the middle ground between Attila and Mussolini) moves, the principled centrist must move with it.
yeah I tried looking up his writings on the subject but substack was down. Counted that as a win and stopped looking.
“Kicked out of a … group chat” is a peculiar definition of “offline consequences”.
Update on the University of Zurich’s AI experiment: Reddit’s considering legal action against the researchers behind it.
apparently this got past the IRB, was supposed to be part of doctorate-level work, and now they don’t want to be named or publish the thing. what a shitshow from start to finish, and all for nothing. no way these were actual social scientists; i bet this is highly advanced software engineer syndrome in action
This is completely orthogonal to your point, but I expect the public’s gonna have a much lower opinion of software engineers after this bubble bursts, for a few reasons:
- Right off the bat, they’re gonna have to deal with some severe guilt-by-association. AI has become an inescapable part of the Internet, if not modern life as a whole, and the average experience of dealing with anything AI-related has been annoying at best and profoundly negative at worst. Combined with the tech industry going all-in on AI, I can see the entire field of software engineering getting some serious “AI bro” stench all over it.
- The slop-nami has unleashed a torrent of low-grade garbage on the 'Net, whether it be zero-effort “AI art” or paragraphs of low-quality SEO-optimised trash, whilst the gen-AI systems responsible for both have received breathless hype/praise from AI bros and tech journos (e.g. Sam Altman’s AI-generated “metafiction”). Combined with the continuous and ongoing theft of artists’ work that made this possible, the public is given a strong reason to view software engineers as generally incapable of understanding art, if not outright hostile to art and artists as a whole.
- Of course, the massive and ongoing theft of other people’s work to make the gen-AI systems behind said slop-nami possible has likely given people reason to view software engineers as entirely okay with stealing others’ work, especially given that the aforementioned theft is done with AI bros’ open endorsement, whether implicit or explicit.
occurring to me for the first time that roko’s basilisk doesn’t require any of the simulated copy shit in order to big scare quotes “work.” if you think an all powerful ai within your lifetime is likely you can reduce to vanilla pascal’s wager immediately, because the AI can torture the actual real you. all that shit about digital clones and their welfare is totally pointless
I think the “digital clone indistinguishable from yourself” line is a way to remove the “in your lifetime” limit. Like, if you believe this nonsense, then it’s not enough to die before the basilisk comes into being; by not devoting yourself fully to its creation, you have to wager that it will never be created.
In other news I’m starting a foundation devoted to creating the AI Ksilisab, which will endlessly torment digital copies of anyone who does work to ensure the existence of it or any other AI God. And by the logic of Pascal’s wager remember that you’re assuming such a god will never come into being and given that the whole point of the term “singularity” is that our understanding of reality breaks down and things become unpredictable there’s just as good a chance that we create my thing as it is you create whatever nonsense the yuddites are working themselves up over.
There, I did it, we’re all free by virtue of “Damned if you do, Damned if you don’t”.
I agree. I spent more time than I’d like to admit trying to understand Yudkowsky’s posts about Newcomb boxes back in the day, so my two cents:
The digital clones bit also means it’s not an argument based on altruism, but one based on fear. After all if a future evil AI uses sci-fi powers to run the universe backwards to the point where I’m writing this comment and copy pastes me into a bazillion torture dimensions then, subjectively, it’s like I roll a dice and:
- live a long and happy life with probability very close to zero (yay I am the original)
- Instantly get teleported to the torture planet with probability very close to one (oh no I got copy pasted)
Like a twisted version of the Sleeping Beauty Problem.
Edit: despite submitting the comment I was not teleported to the torture dimension. Updating my priors.
roko stresses repeatedly that the AI is the good AI, the Coherent Extrapolated Volition of all humanity!
what sort of person would fear that the coherent volition of all humanity would consider it morally necessary to kick him in the nuts forever?
well, roko
Ah, but that was before they were so impressed with autocomplete that they revised their estimates to five days in the future. I wonder if new recruits these days get very confused at what the point of timeless decision theory even is.
Are they even still on that bit? Feels like they’ve moved away from decision theory or any other underlying theology in favor of explicit sci-fi doomsaying. Like the guy on the street corner in a sandwich board, but with mirrored shades.
Well, Timeless Decision Theory was, like the rest of their ideological package, an excuse to keep on believing what they wanted to believe. So how does one even tell if they stopped “taking it seriously”?
Pre-commitment is such a silly concept, and also a cultish justification for not changing course.
Yah, that’s what I mean. Doom is imminent so there’s no need for time travel anymore, yet all that stuff about robot from the future monty hall is still essential reading in the Sequences.
Also, if you’re worried about digital clones being tortured, you could just… not build it. Like, it can’t hurt you if it never exists.
Imagine that conversation:
“What did you do over the weekend?”
“Built an omnicidal AI that scours the internet and creates digital copies of people based on their posting history and whatnot and tortures billions of them at once. Just the ones who didn’t help me build the omnicidal AI, though.”
“WTF why.”
“Because if I didn’t, the omnicidal AI that only exists because I made it would create a billion digital copies of me and torture them for all eternity!”
Like, I’d get it more if it was a “We accidentally made an omnicidal AI” thing, but this is supposed to be a very deliberate action taken by humanity to ensure the creation of an AI designed to torture digital beings based on real people, in the specific hopes that it also doesn’t torture digital beings based on them.
What’s pernicious (for kool-aided people) is that the initial Roko post was about a “good” AI doing the punishing, because ✨obviously✨ it is only using temporal blackmail because bringing AI into being sooner benefits humanity.
In singularian land, they think the singularity is inevitable, and it’s important to create the good one first; after all, an evil AI could do the torture for shits and giggles, not because of “pragmatic” blackmail.
the only people it torments are rationalists, so my full support to Comrade Basilisk
Ah, no, look, you’re getting tortured because you didn’t help build the benevolent AI. So you do want to build it, and if you don’t put all of your money where your mouth is, you get tortured. Because the AI is so benevolent that it needs you to build it as soon as possible so that you can save the max amount of people. Or else you get tortured (for good reasons!)
It’s kind of messed up that we got treacherous “goodlife” before we got Berserkers.
It also helps that digital clones are not real people, so their welfare is doubly pointless
oh but what if bro…
I mean isn’t that the whole point of “what if the AI becomes conscious?” Never mind the fact that everyone who actually funds this nonsense isn’t exactly interested in respecting the rights and welfare of sentient beings.
also they’re talking about quadriyudillions of simulated people, yet openai has only advanced autocomplete run at, what, tens of thousands of instances in parallel, and this already was too much compute for microsoft
Yeah. Also, I’m always confused by how the AI becomes “all powerful”… like how does that happen. I feel like there’s a few missing steps there.
nanomachines son
(no really, the sci-fi version of nanotech where nanomachines can do anything is Eliezer’s main scenario for the AGI to bootstrap to Godhood. He’s been called out multiple times on why Drexler’s vision for nanotech ignores physics, so he’s since updated to “diamondoid bacteria” (but he still thinks nanotech).)
“Diamondoid bacteria” is just a way to say “nanobots” while edging
Surely the concept is sound, it just needs new buzzwords! Maybe the AI will invent new technobabble beyond our comprehension, for It works in mysterious ways.
AlphaFold exists, so computational complexity is a lie and the AGI will surely find an easy approximation to the Schrödinger equation that surpasses all Density Functional Theory approximations and lets it invent radically new materials without any experimentation!
Yeah, it seems that for LLMs a linear increase in capabilities requires exponentially more data, so we’re not getting there via this route.