cross-posted from: https://beehaw.org/post/20524171
“The Wikimedia Foundation has been exploring ways to make Wikipedia and other Wikimedia projects more accessible to readers globally,” a Wikimedia Foundation spokesperson told me in an email. “This two-week, opt-in experiment was focused on making complex Wikipedia articles more accessible to people with different reading levels. For the purposes of this experiment, the summaries were generated by an open-weight Aya model by Cohere. It was meant to gauge interest in a feature like this, and to help us think about the right kind of community moderation systems to ensure humans remain central to deciding what information is shown on Wikipedia.”
Some very out of touch people at the Wikimedia Foundation. Fortunately the editors (the people who actually write the articles) have the sense to oppose this move en masse.
A well-written Wikipedia article about a complex topic is already a summary!
There’s also Simple English available for many pages
https://en.wikipedia.org/wiki/Encyclopedia
An encyclopedia is a reference work or compendium providing summaries of knowledge, either general or special, in a particular field or discipline.
Obligatory cross-reference: This came up in the stubsack before 404media wrote about it.
Fuck it, repeating my joke from the earlier thread: Inviting the most pedantic nerds on Earth to critique your chatbot slop is a level of begging to be pwned that’s on par with claiming the female orgasm is a myth.
I need to check the stubsack more often.
What is the stubsack? It just links to Lemmy threads.
The stubsack is the weekly thread of miscellaneous, low-to-mid-effort posts on awful.systems.
The simultaneous problem and benefit of the stubsack thread is that a good chunk of this community’s best posts are contained within them.
I knew AI would eventually come for one of the greatest things humans have ever used the internet for, but I’m so disappointed that it has come from within.
I’ve cancelled my monthly donations. We can’t trust the Wikimedia Foundation at all, ever again. Genuinely sickening anti-human sentiment from those freaks.
It is so concerning given that they’re entrusted with something so collaborative and so amazing.
time to donate my money to a different wiki that only has the noblest of intentions, wikifeet (jk)
Refreshing. An online community that wears its intentions on its sleeve.
Is there anything closer to the human soul?
I think you’re deliberately setting up for this response, so: “more like human sole”.
I wasn’t, but that is toetally the perfect response
You should consider donating to the internet archive.
Don’t worry; I already do! But, great suggestion.
de-paywalled link
Thank God we didn’t get help for people digesting complex topics. Then how would they blame the experts for not making things simple enough that they shouldn’t have to try learning?
Also, people should learn about complex intelligent systems, and how all of their problems with AI are just problems with capitalism that will still inevitably exist even without AI/the loom.
AI is a pseudoscience that conflates a plagiarism-fueled lying machine with a thinking, living human mind. Fuck off.
hey dawg if you want to be anti-capitalist that’s great, but please interrogate yourself on who exactly is developing LLMs and who is running their PR campaigns before you start simping for AI and pretending like a hallucination engine is a helpful tool in general and specifically to help people understand complex topics where precision and nuance are needed and definitely not fucking hallucinations. Please be serious and for real
points at literally every other technology or piece of shared socio-economic infrastructure
gestures more heavily
also checks your sources, whether it’s wikipedia, LLMs, or humans! all confabulate!
Dis you:
could you explain how? or how the examples i gave are not as valid to your current direction of critique?
i’m not saying ‘i’m intelligent’ or ‘the system will not abuse these tools’
are you suggesting my understanding is overfit to a certain niche, and there is a flagrant blindspot that wasn’t addressed by my earlier comment?
also i use uncommon words for specificity, not to obfuscate. if something hasn’t made sense, i would also elaborate. (we also have modern tools to help unravel such things as well, if you don’t have a local tutor for the subject.)
or we can just give inaccurate caricatures of each other, and each other’s points of view. surely that will do something other than feed the ignorance- and division-driven socio-economic paperclip maximizer that we are currently stuck in.
Note to the peanut gallery: this guy knows about paperclipmaxxing but not this more famous comic. Curious. lmfao
holy shit I’m upgrading you to a site-wide ban
so many paragraphs and my eyes don’t want any of them
Incredible work as always, self
this one was definitely my pleasure
“how can you fools not see that Wikipedia’s utterly inaccurate summary LLM is exactly like digital art, 3D art, and CGI, which are all the same thing and are/were universally hated(???)” is a take that only gets more wild the more you think on it too, and that’s one they’ve been pulling out for at least two years
I didn’t catch much else from their posts, cause it’s almost all smarm and absolutely no substance, but fortunately they formatted it like paragraph soup so it slid right off my eyeballs anyway
AI doesn’t help anyone, it’s just corporate slop.
You learn to digest deep subjects by reading them.
yes you need to read things to understand them, but also going balls deep into a complex concept or topic with no lube can be pretty rough, and deter the attempt, or future attempts.
also do you know what else is corporate slop? the warner/disney designed art world? every non-silicon paperclip maximizing pattern? the software matters more than the substrate.
the pattern matters more than the tool.
people called digital art/3d art ‘slop’ for the same reason.
my argument was the same back then. it’s not the tool, it’s the system.
‘CGI doesn’t help anyone’
attacking the tool of CGI doesn’t help anyone either.
that being said… AI does literally help some people. for many things. google search was my favourite AI tool 25 years ago, but it’s definitely not right now.
the slop algorithms were decided by something else even before that. see: enshittification and planned obsolescence.
aka, overfitting towards an objective function in the style of Goodhart’s law.
also you can read a ‘thing’ but if you’re just over-fitting without making any transferable connections, you’re only feeding your understanding of that state-space/specific environment. also other modalities are important, which is why LLMs aren’t ‘superintelligent’ despite being really good with words. that’s an anthropocentric bias in understanding intelligent systems. i know a lot of people who read self-help/business novels, which teach flawed heuristics. which books unlearn flawed heuristics?
early reading can lead to better mental models for interacting with counterfactual representations. can we give mental tools for counterfactual representation some hype?
could you dive into that with no teachers/AI to help you? would you be more likely to engage with the help?
it’s a complicated situation, but overfitting binary representations is not the solution to navigating complexity.
god I looked at your post history and it’s just all this. 2 years of AI boosterism while cosplaying as a leftist, but the costume keeps slipping
are you not exhausted? you keep posting paragraphs and paragraphs and paragraphs but you’re still just a cosplay leftist arguing for the taste of the boot. don’t you get tired of being like this?
yes you need to read things to understand them
OK, here’s your free opportunity to spend more time doing that. Bye now.
that being said… AI does literally help some people. for many things. google search was my favourite AI tool 25 years ago, but it’s definitely not right now.
lol