• MudMan@fedia.io
    22 hours ago

    That’s what’s fascinating about how it does language in general.

    The article is interesting in both the ways these systems are similar to humans and the ways they’re different. The rough-approximation thing isn’t that weird, but obviously any human would be self-aware of how they did it and wouldn’t accidentally lie about the method, especially when both methods yield the same result. It’s a weirdly effective, if accidental, example of human-like reasoning versus human-like intelligence.

    And, incidentally, of why AGI and/or ASI are probably much further away than the shills keep claiming.