You could argue that another moral of Parfit’s hitchhiker is that being a purely selfish agent is bad, and that since humans aren’t purely selfish it’s not applicable to the real world anyway, but in Yudkowsky’s philosophy—and decision theory academia—you want a general solution to the problem of rational choice where you can take any utility function and win by its lights regardless of which convoluted setup philosophers drop you into.
I’m impressed that someone writing on LW managed to encapsulate my biggest objection to their entire process this coherently. This is an entire model of thinking that tries to elevate decontextualization and debate-team nonsense into the peak of intellectual discourse. It’s a manner of thinking that couldn’t have been better designed to hide the assumptions underlying repugnant conclusions if indeed it had been specifically designed for that purpose.