• Sure; I’m not saying you’re wrong. Ray is unrealistically optimistic, and his predictions depend heavily on several iffy assumptions: that we’ll create AGI; that it’ll be able to improve itself exponentially; that it’ll be benevolent; and that it’ll see value in helping us attain immortality and decide that doing so is good for us.

    I just don’t think it’s fair to lump him in with SovCits and homeopaths (or whatever Linus Pauling is). He’s a different kind of “wrong”: not crazy or deluded, just optimistic.