  • I have read the blog post you’ve linked, and it’s full of exaggeration.

    The developer rejected a PR that changed one instance of he/him in the documentation to they/them, responding “This project is not an appropriate arena to advertise your personal politics.”, and then promptly got brigaded. Similar PRs kept appearing and getting closed from time to time.

    A satirical PR was opened and closed as spam - despite the blogger’s commentary, it’s abundantly clear that the developer didn’t call the person who opened the PR “spam” (what would that even mean?).

    The project also had its code of conduct modified, probably in response to the brigading, to essentially include the aforementioned “not an appropriate arena to advertise your personal politics or religious beliefs” line - I don’t know which part of this the blogger considers “white supremacist” language.

    From what I can tell, this is all they’ve done. No racism, no sexism, no white supremacy. Would it be better if they just accepted the PR? Yes. Does it make the developer part of one of the worst groups of people that ever existed? No.

  • I don’t think that anyone would argue that the general public can even do matrix math, much less that they can only comprehend a stool by going down a row in a matrix to get the mathematical similarity between a stool, a chair, a bench, a floor, and a cat (see the sketch at the end of this comment for what that computation looks like).

    LLMs rely on billions of precise calculations and yet they perform poorly when asked to do arithmetic. Just because we don’t consciously calculate anything to get at the meaning of a word doesn’t mean that no calculations are actually done as part of our thinking process.

    What’s your definition of “the actual meaning of the concept represented by a word”? How would you differentiate a system that truly understands the meaning of a word from one that merely mimics that understanding?
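
    Since the disagreement keeps circling back to “going down a row in a matrix”, here’s a minimal sketch of what that row-lookup actually means: each word is a row of numbers (an embedding), and “similarity” is just the cosine of the angle between two rows. The vectors and dimension labels below are invented for illustration - real models learn hundreds of opaque dimensions from data.

    ```python
    import numpy as np

    # Toy "embedding matrix": each word is one row of made-up numbers.
    # The column labels are only for readability; learned dimensions
    # have no such human-friendly meaning.
    embeddings = {
        #                  sit-on-able, furniture, alive
        "stool": np.array([0.9, 0.8, 0.0]),
        "chair": np.array([1.0, 0.9, 0.0]),
        "bench": np.array([0.8, 0.9, 0.0]),
        "floor": np.array([0.5, 0.0, 0.0]),
        "cat":   np.array([0.0, 0.0, 1.0]),
    }

    def cosine(a, b):
        # Cosine similarity: ~1.0 means "points the same way", 0.0 means unrelated.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    for word in ("chair", "bench", "floor", "cat"):
        print(f"stool vs {word}: {cosine(embeddings['stool'], embeddings[word]):.2f}")
    ```

    Running it ranks chair and bench closest to stool, floor further away, and cat nowhere near - and none of that arithmetic is “conscious” for the system, any more than our own neural activity is for us.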

  • I don’t think your assumption holds. Corporations are not, as a rule, incompetent - in fact, they tend to be really competent at squeezing profit out of anything. They are misaligned, which is much more dangerous.

    I think the more likely scenario is also more grim:

    AI actually does continue to advance, getting better and better and displacing more and more jobs. Because it doesn’t happen instantly, barely anything gets done about it. Some half-assed regulations are attempted, but they predictably end up doing nothing, postponing the inevitable by a small amount of time, or causing more damage than doing nothing would. Corporations grow in power, build their own autonomous armies, and pressure governments to leave them unregulated. Eventually all resources are managed by and for a few rich assholes, while the rest of the world tries to survive without angering them.
    If we’re unlucky, some of those corporations end up being run by a maximizer AGI with no human supervision, and then the Earth pretty much becomes an abstract game with a scoreboard, where money (or whatever its equivalent is) is the score.

    Limitations of the human body act as an important balancing factor that keeps democracies from collapsing. No human can rule a nation alone - they need armies and workers. Intellectual work is especially important (unless you have some other source of income with which to outsource it), but it requires good living conditions to develop and sustain. Once intellectual work is automated, infrastructure like schools, roads, hospitals, and housing ceases to be important to the rulers - they can hand it to the army as a reward and make the rest of the population do manual work. Then, if manual work and policing through force are automated as well, there is no need even for those slivers of decency.
    Once a single human can rule a nation, there are enough rich psychopaths for one of them to attempt it.

    There are also other AI-related pitfalls that humanity may fall into in the meantime - automated terrorism (e.g. swarms of small autonomous drones with explosive charges, using face recognition to target entire ideologies by tracking social media), a misaligned AGI going rogue (e.g. the famous paperclip maximizer, though probably not exactly that scenario), collapse of the internet under propaganda bots running next-gen generative AI… I’m sure there’s more.