this is Habryka talking about how his moderating skills are so powerful it takes lesswrong three fucking years to block a poster who’s actively being a drain on the site

here’s his reaction to sneerclub (specifically me - thanks Oliver!) calling LessOnline “wordy racist fest”:

A culture of loose status-focused social connection. Fellow sneerers are not trying to build anything together. They are not relying on each other for trade, coordination or anything else. They don’t need to develop protocols of communication that produce functional outcomes, they just need to have fun sneering together.

He gets us! He really gets us!

  • self@awful.systemsM · 1 month ago

    from the (extensive) footnotes:

    Occupy Wallstreet strikes me as another instance of the same kind of popular sneer culture. Occupy Wallstreet had no coherent asks, no worldview that was driving their actions.

    it’s so easy to LessWrong: just imagine that your ideological opponents have no worldview and aren’t trying to build anything, sprinkle in some bullshit pseudo-statistics, and you’re there!

    • scruiser@awful.systems · 1 month ago

      Lesswrong and SSC: capable of extreme steelmanning of… check notes… occult mysticism (including divinatory magic), Zen-Buddhism based cults, people who think we should end democracy and have kings instead, Richard Lynn, Charles Murray, Chris Langan, techbros creating AI they think is literally going to cause mankind’s extinction…

      Not capable of even a cursory glance into their statements, much less steelmanning: sneerclub, Occupy Wallstreet

      • Soyweiser@awful.systems · 1 month ago

        It is gonna be worse: they can back up their statements by referring to people who were actually there, but the person they would then be referring to is Tim Pool, and you can’t, as a first-principles intellectual of the order of LessWrong, reveal that you actually get your information from disgraced yt’ers like all the other rightwing plebs. It has to remain an unspoken secret.

  • Amoeba_Girl@awful.systems · 1 month ago

    A small sidenote on a dynamic relevant to how I am thinking about policing in these cases:

    A classical example of microeconomics-informed reasoning about criminal justice is the following snippet of logic.

    If someone can gain in-expectation X dollars by committing some crime (which has negative externalities of Y>X dollars), with a probability p of getting caught, then in order to successfully prevent people from committing the crime you need to make the cost of receiving the punishment (Z) be greater than X/p, i.e. X<p∗Z.

    Or in less mathy terms, the more likely it is that someone can get away with committing a crime, the harsher the punishment needs to be for that crime.

    In this case, a core component of the pattern of plausible-deniable aggression that I think is present in much of Said’s writing is that it is very hard to catch someone doing it, and even harder to prosecute it successfully in the eyes of a skeptical audience. As such, in order to maintain a functional incentive landscape the punishment for being caught in passive or ambiguous aggression needs to be substantially larger than for e.g. direct aggression, as even though being straightforwardly aggressive has in some sense worse effects on culture and norms (though also less bad effects in some other ways), the probability of catching someone in ambiguous aggression is much lower.

    Fucking hell, that is one of the stupidest most dangerous things I’ve ever heard. Guy solves crime by making the harshness of punishment proportional to the difficulty of passing judgement. What could go wrong?
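    (For reference, the deterrence arithmetic he’s invoking is plain expected value: the crime is deterred when the expected punishment exceeds the gain, p·Z > X, i.e. Z > X/p. A minimal sketch with made-up illustrative numbers, not taken from the post:)

```python
# Sketch of the quoted deterrence model (illustrative; numbers are made up).
# An actor gains X by offending, is caught with probability p, and if caught
# pays punishment cost Z. Deterrence requires expected cost to exceed gain:
#   p * Z > X   =>   Z > X / p

def min_deterrent_punishment(gain_x: float, catch_prob_p: float) -> float:
    """Smallest punishment Z that makes the expected cost exceed the gain X."""
    if not 0.0 < catch_prob_p <= 1.0:
        raise ValueError("catch probability must be in (0, 1]")
    return gain_x / catch_prob_p

# The harder the behavior is to catch, the harsher the required punishment:
print(min_deterrent_punishment(100.0, 0.5))   # 200.0
print(min_deterrent_punishment(100.0, 0.01))  # ten thousand-ish
```

    Note how Z blows up as p shrinks, which is exactly the move being sneered at: hard-to-prove offenses get the harshest sentences.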

    • gerikson@awful.systems · 1 month ago

      tbf being able to write thousand word long blog posts and using phrases like “good and important” is part of his job description

  • swlabr@awful.systems · 1 month ago

    That it took this long to ban this guy and this many words is so delicious. What a failure of a community. What a failure in moderation.

    Based on the words and analogies in that post: participating in LW must be like being in a circlejerk where everyone sucks at circlejerking. Guys like Said run around the circle yelling at them about how their technique sucks and that they should feel bad. Then they chase him out and continue to be bad at mutual jorkin.

    E: That they don’t see the humor in sneering at “celebrating blogging” and that it’s supposedly us at our worst is very funny.

      • o7___o7@awful.systems · 1 month ago

        You live rent-free in so many big ol noggins.

        All that acreage has to be adding up. Have you ever considered going into real estate?

      • Soyweiser@awful.systems · 1 month ago

        You called them racist without proving from first principles that it is bad to be racist, that they are racist, and that their specific form of racism is also bad and will not lead to better outcomes than being non-racist in the megafuture.

        • swlabr@awful.systems · 1 month ago

          Hey if a tree is racist in the woods and two nerd blogs that pretend to be diametrically opposed on the political spectrum but are actually just both fascist don’t spend millions of words discussing it, is it really racist or should we assume more good faith

  • YourNetworkIsHaunted@awful.systems · 1 month ago

    You know, this whole conversation reminds me of the discussion of moderation policy from a gaming blog I used to read somewhat religiously. I think the difference in priorities is pretty significant. In Shamus’ policy the primary obligation of the moderator is to the community as a whole: to protect it from assholes and shitweasels. These people will try to use hard-and-fast rules against you to thwart your efforts, and so are best dealt with by a swift boot. If they want to try again they’re welcome to set up a new account or whatever, and if they actually behave themselves then all the better. I feel like this does a far better job of creating a welcoming and inclusive community, even when discussing contentious issues like the early stages of gamergate or the PC vs console wars. Also, it doesn’t require David to drive himself fucking insane trying to build an ironclad legal case in favor of banning any particular Nazi, complete with nearly a decade of investigation and “light touch” moderation.

    Also in grabbing that link I found out that Shamus apparently died back in 2022. RIP and thanks for helping keep me from falling into the gamergate or Rationalist pipelines to fascism.

  • diz@awful.systems · 1 month ago

    Lol, I literally told these folks, something like 15 years ago, that paying to elevate a random nobody like Yudkowsky as the premier “AI risk” researcher (insofar as there is any AI risk) would only increase it.

    Boy did I end up more right on that than my most extreme imagination. All the moron has accomplished in life was helping these guys raise cash due to all his hype about how powerful the AI would be.

    The billionaires who listened are spending hundreds of billions of dollars - soon to be trillions, if not already - on trying to prove Yudkowsky right by having an AI kill everyone. They literally tout “our product might kill everyone, idk” to raise even more cash. The only saving grace is that it is dumb as fuck and will only make the world a slightly worse place.

    • BlueMonday1984@awful.systems · 1 month ago

      The billionaires who listened are spending hundreds of billions of dollars - soon to be trillions, if not already - on trying to prove Yudkowsky right by having an AI kill everyone. They literally tout “our product might kill everyone, idk” to raise even more cash. The only saving grace is that it is dumb as fuck and will only make the world a slightly worse place.

      Given they’re going out of their way to cause as much damage as possible (throwing billions into the AI money pit, boiling oceans of water and generating tons of CO2, looting the commons through Biblical levels of plagiarism, and destroying the commons by flooding the zone with AI-generated shit), they’re arguably en route to proving Yud right in the dumbest way possible.

      Not by creating a genuine AGI that turns malevolent and kills everyone, but in destroying the foundations of civilization and making the world damn-nigh uninhabitable.

    • froztbyte@awful.systems · 1 month ago (edited)

      some UN-associated ACM talk I was listening to recently had someone cite a total estimated AI investment of $800b[0]. haven’t gotten to fact-check it but there are a number of parts of that talk I wish to write up and make more known

      one of the people in it made some entirely AGI-pilled comments, and it’s quite concerning

      this talk; looks like video is finally up on youtube too (at the time I yanked it by pcap-ing a zoom playout session - turns out zoom recordings are hella aggressive about not being shared)

      the question I asked was:

      To Csaba (the current speaker): it seems that a lot of the current work you’re engaged in is done presuming that AGI is a certainty. what modelling have you done without that presumption?

      response is about here

      [0] edited for correctness; forget where I saw the >$1.5t number

      • diz@awful.systems · 1 month ago

        Yeah, a new form of apologism that I started seeing online is “this isn’t a bubble! Nobody expects an AGI, it’s just Sam Altman; it will all pay off nicely from 20 million software developers worldwide spending a few grand a year each”.

        Which is next level idiotic, besides the numbers just not adding up. There’s only so much open source to plagiarize. It is a very niche activity! It’ll plateau and then a few months later tiny single GPU models catch up to this river boiling shit.

        The answer to that has always been the singularity bullshit where the biggest models just keep staying ahead by such a large factor nobody uses the small ones.

        • ReversalHatchery@beehaw.org · 26 days ago

          Which is next level idiotic, besides the numbers just not adding up. There’s only so much open source to plagiarize.

          but they can also plagiarize all the code that gets sent to them from software dev companies whose employees use AI coding tools

      • David Gerard@awful.systemsOPM · 1 month ago

        hearing him respond like that in real time, carefully avoiding the point, makes clear the attraction of ChatGPT

  • Hackworth@sh.itjust.works · 1 month ago

    The thing that united [Occupy Wall Street] was a shared dislike of something in the vague vicinity of capitalism, or government, or the man…

    Was it not, specifically, Wall Street?

  • blakestacey@awful.systemsM · 1 month ago

    “They don’t need to develop protocols of communication that facilitate buying castles, fluffing our corporate overlords, or recruiting math pets. They share vegan recipes without even trying to build a murder cult.”

  • o7___o7@awful.systems · 1 month ago

    I, the man from the internet who called Peter Thiel a racist hotdog, am the one with real power.

  • gerikson@awful.systems · 1 month ago

    Funniest are all the commenters loudly complaining about this decision and threatening/promising to delete their accounts.

  • blakestacey@awful.systemsM · 1 month ago

    Of course, commenters on LessWrong are not dumb, and have read Scott Alexander,

    It’s like sneering at fish in an aquarium

  • blakestacey@awful.systemsM · 1 month ago

    From the comments:

    If Said returns, I’d like him to have something like a “you can only post things which Claude with this specific prompt says it expects to not cause <issues>” rule, and maybe a LLM would have the patience needed to show him some of the implications and consequences of how he presents himself.

    And:

    Couldn’t prediction markets solve this?

    Ain’t enough lockers in the world, dammit

  • scruiser@awful.systems · 1 month ago

    I’m feeling an effort sneer…

    For roughly equally long have I spent around one hundred hours almost every year trying to get Said Achmiz to understand and learn how to become a good LessWrong commenter by my lights.

    Every time I read about a case like this, my conviction grows that sneerclub’s vibe-based moderation is the far superior method!

    The key component of making good sneer club criticism is to never actually say out loud what your problem is.

    We’ve said it multiple times; it’s just a long list that is inconvenient to say all at once. The major things that keep coming up: the cult shit (including the promise of infinite AGI God heaven and infinite Roko’s Basilisk hell, and forming high-demand groups motivated by said heaven/hell); the racist shit (including the eugenics shit); the pretentious shit (I could actually tolerate that if it didn’t have the other parts); and lately, serving as crit-hype marketing for really damaging technology!

    They don’t need to develop protocols of communication that produce functional outcomes

    Ahem… you just admitted to taking a hundred hours to ban someone, whereas dgerard and co kick out multiple troublemakers in our community within a few hours tops each. I think we are winning on this one.

    For LessWrong to become a place that can’t do much but to tear things down.

    I’ve seen some outright blatant crank shit (as opposed to the crank shit that works hard to masquerade as more legitimate science) pretty highly upvoted and commented positively on lesswrong (GeneSmith’s wild genetic engineering fantasies come to mind).

    • blakestacey@awful.systemsM · 1 month ago

      I’ve seen some outright blatant crank shit (as opposed to the crank shit that works hard to masquerade as more legitimate science) pretty highly upvoted and commented positively on lesswrong (GeneSmith’s wild genetic engineering fantasies come to mind).

      Their fluffing Chris Langan is the example that comes to mind for me.

    • o7___o7@awful.systems · 1 month ago

      Ya don’t debate fascists, ya teach them the lesson of history. The Official Sneerclub Style Manual indicates that this is accomplished with various pedagogical tools, including laconic mockery, administrative trebuchets, and socks with bricks in them.

      • scruiser@awful.systems · 1 month ago

        That too.

        And judging by how all the elegantly charitably written blog posts on the EA forums did jack shit to stop the second manifest conference from having even more racists, debate really doesn’t help.

    • blakestacey@awful.systemsM · 1 month ago (edited)

      The key component of making good sneer club criticism is to never actually say out loud what your problem is.

      I wrote 800 words explaining how TracingWoodgrains is a dishonest hack, when I could have been getting high instead.

      But we don’t need to rely on my regrets to make this judgment, because we have a science-based system on this podcast instance. We can sort all the SneerClub comments by most rated. Nothing that the community has deemed an objective banger is vague.

      • Soyweiser@awful.systems · 1 month ago (edited)

        The problem is they dont read sneerclub well, so they dont realize we dont relitigate the same shit every time. So when they come in with their hammers (prediction markets, being weird about ai, etc) we just go ‘lol, these nerds’ and dont go writing down the same stuff every time. As the community has a shared knowledge base, they do the same by not going into detail every time about how a prediction market would help and work. But due to their weird tribal thinking, and thinking they are superior, they think that when we do it, it is bad.

        It is just amazing how much he doesn’t get basic interactions. And not like we dont like to explain stuff when new people ask about it. Or often when not even asked.

        Think one of the problems with lw is that they think stuff that is long is well written and well argued, even better if it uses a lot of complex-sounding words. See how they like Chris Langan as you mentioned. Just a high rate of ‘I have no idea what he is talking about but it sounds deep’ shit.

        To quote from the lw article you linked on the guy

        CTMU has a high-IQ mystique about it: if you don’t get it, maybe it’s because your IQ is too low. The paper itself is dense with insights, especially the first part.

        Makes you wonder how many of them had a formal academic education, as one of the big things about that is that it has none of this mystique: it builds on itself, and often feels reasonably easy and sensible. (Because learning the basics preps you for the more advanced stuff, which is not to say this is the case every time, esp if some of your skills are lacking, but there is none of this high-IQ mystique (which also seems like the utter wrong thing to look for).)

  • David Gerard@awful.systemsOPM · 1 month ago

    btw I read Said’s responses to his banning and if that dude ever shows up here he’s gone the second he’s spotted

    • blakestacey@awful.systemsM · 1 month ago (edited)

      They gave him a thread in which to complain about being banned… Are these people polyamorous just because they don’t know how to break up?