• blakestacey@awful.systemsM · 4 days ago

    You might think that this review of Yud’s glowfic is an occasion for a “read a second book” response:

    Yudkowsky is good at writing intelligent characters in a specific way that I haven’t seen anyone else do as well.

    But actually, the word “intelligent” is being used here in a specialized sense to mean “insufferable”.

    Take a stereotypical fantasy novel, a textbook on mathematical logic, and Fifty Shades of Grey.

    Ah, the book that isn’t actually about kink, but rather an abusive relationship disguised as kink — which would be a great premise for an erotic thriller, except that the author wasn’t sufficiently self-aware to know that’s what she was writing.

    • blakestacey@awful.systemsM · 3 days ago

      If you want to read Yudkowsky’s explanation for why he doesn’t spend more effort on academia, it’s here.

      spoiler alert: the grapes were totally sour

    • YourNetworkIsHaunted@awful.systems · 3 days ago

      You could argue that another moral of Parfit’s hitchhiker is that being a purely selfish agent is bad, and humans aren’t purely selfish so it’s not applicable to the real world anyway, but in Yudkowsky’s philosophy—and decision theory academia—you want a general solution to the problem of rational choice where you can take any utility function and win by its lights regardless of which convoluted setup philosophers drop you into.

      I’m impressed that someone writing on LW managed to encapsulate my biggest objection to their entire process this coherently. This is an entire model of thinking that tries to elevate decontextualization and debate-team nonsense into the peak of intellectual discourse. It’s a manner of thinking that couldn’t have been better designed to hide the assumptions underlying repugnant conclusions if indeed it had been specifically designed for that purpose.

  • Sailor Sega Saturn@awful.systems · 5 days ago

    Open Phil generally seems to be avoiding funding anything that might have unacceptable reputational costs for Dustin Moskovitz

    “reputational cost” eh? Let’s see Mr. Moskovitz’s reasoning in his own words:

    Spoiler - It's not just about PR risk

    But I do want agency over our grants. As much as the whole debate has been framed (by everyone else) as reputation risk, I care about where I believe my responsibility lies, and where the money comes from has mattered. I don’t want to wake up anymore to somebody I personally loathe getting platformed only to discover I paid for the platform. That fact matters to me.

    I cannot control what the EA community chooses for itself norm-wise, but I can control whether I fuel it.

    I’ve long taken for granted that I am not going to live in integrity with your values and the actions you think are best for the world. I’m only trying to get back into integrity with my own.

    If you look at my comments here and in my post, I’ve elaborated on other issues quite a few times and people keep ignoring those comments and projecting “PR risk” on to everything. I feel incapable of being heard correctly at this point, so I guess it was a mistake to speak up at all and I’m going to stop now. [Sorry I got frustrated; everyone is trying their best to do the most good here] I would appreciate if people did not paraphrase me from these comments and instead used actual quotes.

    again, beyond “reputational risks”, which narrows the mind too much on what is going on here

    “PR risk” is an unnecessarily narrow mental frame for why we’re focusing.

  • Sailor Sega Saturn@awful.systems · 5 days ago

    Holy smokes, that’s a lot of words. From their own post it sounds like they massively over-leveraged and have no more sugar daddies, so now their convention center is doomed ($1 million in interest payments every year!); but they can’t admit that, so they’re desperately trying to delay the inevitable.

    Also don’t miss this promise from the middle:

    Concretely, one of the top projects I want to work on is building AI-driven tools for research and reasoning and communication, integrated into LessWrong and the AI Alignment Forum. […] Building an LLM-based editor. […] AI prompts and tutors as a content type on LW

    It’s like an anti-donation message. “Hey if you donate to me I’ll fill your forum with digital noise!”

  • blakestacey@awful.systemsM · 5 days ago

    Among the leadership of the biggest AI capability companies (OpenAI, Anthropic, Meta, Deepmind, xAI), at least 4/5 have clearly been heavily influenced by ideas from LessWrong.

    I’m trying, but I can’t not donate any harder!

    The most popular LessWrong posts, SSC posts or books like HPMoR are usually people’s first exposure to core rationality ideas and concerns about AI existential risk.

    Unironically the better choice: https://archiveofourown.org/donate

    • Sailor Sega Saturn@awful.systems · 5 days ago

      Yes, but if I donate to Lightcone I can get a T-shirt for $1000! A special-edition T-shirt! Whereas if I donated $1000 to Archive Of Our Own, all I’d get is… a full-sized cotton blanket, a mug, a tote bag, and a mystery gift.

  • blakestacey@awful.systemsM · 5 days ago

    The post:

    I think Eliezer Yudkowsky & many posts on LessWrong are failing at keeping things concise and to the point.

    The replies: “Kolmogorov complexity”, “Pareto frontier”, “reference class”.

  • blakestacey@awful.systemsM · 5 days ago

    The collapse of FTX also caused a reduction in traffic and activity of practically everything Effective Altruism-adjacent

    Uh-huh.

  • Soyweiser@awful.systems · 5 days ago

    and have clearly been read a non-trivial amount by Elon Musk (and probably also some by JD Vance).

    Look, I already wasn’t donating, no need to make it worse.

    • blakestacey@awful.systemsM · 5 days ago

      The lead-in to that is even “better”:

      This seems particularly important to consider given the upcoming conservative administration, as I think we are in a much better position to help with this conservative administration than the vast majority of groups associated with AI alignment stuff. We’ve never associated ourselves very much with either party, have consistently been against various woke-ish forms of mob justice for many years, and have clearly been read a non-trivial amount by Elon Musk (and probably also some by JD Vance).

      “The reason for optimism is that we can cozy up to fascists!”