cross-posted from: https://beehaw.org/post/20524171
“The Wikimedia Foundation has been exploring ways to make Wikipedia and other Wikimedia projects more accessible to readers globally,” a Wikimedia Foundation spokesperson told me in an email. “This two-week, opt-in experiment was focused on making complex Wikipedia articles more accessible to people with different reading levels. For the purposes of this experiment, the summaries were generated by an open-weight Aya model by Cohere. It was meant to gauge interest in a feature like this, and to help us think about the right kind of community moderation systems to ensure humans remain central to deciding what information is shown on Wikipedia.”
Some very out-of-touch people at the Wikimedia Foundation. Fortunately the editors (the people who actually write the articles) have the sense to oppose this move en masse.
yes, you need to read things to understand them, but going balls deep into a complex concept or topic with no lube can be pretty rough and deter the attempt, or future attempts.
also, do you know what else is corporate slop? the warner/disney-designed art world? every non-silicon paperclip-maximizing pattern? the software matters more than the substrate.
the pattern matters more than the tool.
people called digital art/3d art ‘slop’ for the same reason.
my argument was the same back then. it’s not the tool, it’s the system.
‘CGI doesn’t help anyone’
attacking the tool of CGI doesn’t help anyone either.
that being said… AI does literally help some people, with many things. google search was my favourite AI tool 25 years ago, but it definitely isn’t now.
the slop algorithms were decided by something else even before that. see: enshittification and planned obsolescence.
aka, overfitting towards an objective function in the style of goodhart’s law.
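to make that concrete, here’s a toy sketch (my own made-up example, nothing from the article; the metrics and numbers are purely illustrative): an optimizer that maximizes a proxy for “engagement” keeps climbing the proxy long after the thing the proxy was supposed to track has collapsed.

```python
# toy sketch of goodhart's law (all names/numbers made up for illustration):
# once the proxy metric becomes the target, optimizing it hard decouples
# it from the goal it was supposed to measure.

def true_quality(clickbait_level: int) -> int:
    # what we actually care about: peaks at a moderate level, then degrades
    return clickbait_level if clickbait_level <= 3 else 6 - clickbait_level

def proxy_metric(clickbait_level: int) -> int:
    # what actually gets optimized: click-through keeps rising with clickbait
    return clickbait_level

best = max(range(11), key=proxy_metric)
print(best, proxy_metric(best), true_quality(best))  # 10 10 -4
```

swap in whatever metric you like; the shape of the failure is the same.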
also you can read a ‘thing’, but if you’re just overfitting without making any transferable connections, you’re only feeding your understanding of that specific state-space/environment. other modalities are important too, which is why LLMs aren’t ‘superintelligent’ despite being really good with words; equating verbal skill with intelligence is an anthropocentric bias in understanding intelligent systems. i know a lot of people who read self-help/business books, which teach flawed heuristics. which books unlearn flawed heuristics?
early reading can lead to better mental models for interacting with counterfactual representations. can we give mental tools for counterfactual representation some hype?
could you dive into that with no teachers/AI to help you? would you be more likely to engage with the help?
it’s a complicated situation, but overfitting to binary representations is not the solution to navigating complexity.
god, I looked at your post history and it’s just all this: 2 years of AI boosterism while cosplaying as a leftist, but the costume keeps slipping.
are you not exhausted? you keep posting paragraphs and paragraphs and paragraphs but you’re still just a cosplay leftist arguing for the taste of the boot. don’t you get tired of being like this?
OK, here’s your free opportunity to spend more time doing that. Bye now.
lol