cross-posted from: https://lemmy.wtf/post/22145277
Hey fellow inhabitants of the Fediverse, particularly those lurking on Lemmy,
I’ve been thinking a lot lately about the nature of information, discourse, and where genuine human connection can still thrive online. It leads me back to platforms like this one.
We often talk about censorship in terms of direct bans or content removal, which is obviously a critical concern. But what about the more insidious forms of control? I’m talking about the subtle fiddling of algorithms, the deliberate hiding of certain content without outright deletion, the ‘shadowbanning’ that makes you feel like you’re shouting into a void. How resistant is the decentralized nature of Lemmy, and the wider fediverse, to those kinds of pressures? It feels like the very architecture here might offer a unique defense, but I’m curious about the community’s thoughts.
I know we’re not exactly bursting at the seams with users, and frankly, if you aren’t already clued into how something like Lemmy works, you’re probably never going to find it through a casual search – SEO seems like a foreign concept here, not helped by the duplicate-content problems that arise when the same post exists across instances. Is this quiet corner its strength, or its eventual downfall if the ‘outside’ world becomes too noisy?
Speaking of noise, it feels like nearly 90% of the content generated on the broader internet these days is churned out by LLMs: autogenerated articles, comments, even entire ‘conversations’ that ring hollow. Is the Fediverse, specifically, a safe haven from that rising tide of artificial content? Does the human-centric, community-driven nature of these instances inherently push back against such automation?
I’ve looked into ActivityPub and other federation tools in the past, and my observation has often been that they’ve been adopted primarily by marginalized groups in society, seeking refuge from mainstream platforms. While that’s incredibly valuable and a testament to their utility, what could truly happen to extend this concept, to genuinely get more people involved without compromising the very principles that make it appealing – decentralization, human curation, and resilience against algorithmic manipulation?
Just throwing it out there. Would appreciate any insights or theories.
I assume it’s not as appealing to propagandists because accounts cannot accumulate “people love my takes so believe in what I’m saying” points. But I really don’t know much about the fediverse/Lemmy…
Not that resilient. Mods and admins controlling discussion, defederation due to dislike of the messenger, etc. I think the federation model is flawed and that this should be done on the client side instead of on servers.
Yes, algorithmic content discovery should happen client side, and all the machinery of that algorithm should have its knobs and levers exposed to the user.
I even want a checkbox “show deleted content”
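Something like this minimal sketch is what I mean by exposed knobs and levers. Every name here is invented for illustration; it’s not Lemmy’s actual API or scoring code, just the shape of the idea:

```typescript
// Hypothetical sketch: a client-side ranking function whose weights are
// plain user settings, not server-side secrets. All names are invented;
// this is not how Lemmy actually ranks posts.

interface Post {
  score: number;         // net upvotes as reported by the instance
  ageHours: number;      // hours since the post was published
  deleted: boolean;      // whether a moderator or author removed it
}

interface FeedKnobs {
  upvoteWeight: number;  // how much raw score matters
  decayHalfLife: number; // hours until a post's rank halves
  showDeleted: boolean;  // the "show deleted content" checkbox
}

function rankFeed(posts: Post[], knobs: FeedKnobs): Post[] {
  return posts
    .filter((p) => knobs.showDeleted || !p.deleted)
    .map((p) => ({
      post: p,
      // Exponential time decay: rank halves every decayHalfLife hours.
      rank:
        p.score *
        knobs.upvoteWeight *
        Math.pow(0.5, p.ageHours / knobs.decayHalfLife),
    }))
    .sort((a, b) => b.rank - a.rank)
    .map((r) => r.post);
}

// The knobs live in client storage, so the user, not the server,
// decides what "hot" means.
const myKnobs: FeedKnobs = { upvoteWeight: 1, decayHalfLife: 12, showDeleted: true };
```

The point being that nothing in there needs to run on a server: the server just hands over raw posts, and what you see is entirely your own configuration.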
Anything short of a 100% auditable forum that’s 100% in the control of the users is gonna slide towards total bullshit. And quick. It’s inevitable. Everybody thinks they’re so right that censorship is ok. Everybody.
Censorship on lemmy is rampant. Even on those instances where modlog is visible.
Not that the modlog is anything user friendly. It’s severely lacking in search and filtering capability, and it seems most modlog actions are auto-purged, so if you’re not keeping up with your modlog on a weekly basis, you’re not getting a picture of just how much your view of reality is being shaped and skewed by the moderators of your instance.
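If you actually want to keep up, roughly this kind of script could pull and archive your instance’s modlog before entries disappear. The endpoint path and query parameters are my guess at the shape of Lemmy’s public API, so verify against your instance’s docs before trusting it:

```typescript
// Hypothetical sketch: periodically pull an instance's modlog and dump it
// locally, so entries survive any server-side purging. The endpoint and
// parameters are assumptions, not verified against a specific Lemmy version.

const INSTANCE = "https://lemmy.example"; // hypothetical instance URL

async function fetchModlogPage(page: number): Promise<unknown> {
  // Assumed endpoint shape; check your instance before relying on it.
  const res = await fetch(`${INSTANCE}/api/v3/modlog?page=${page}&limit=50`);
  if (!res.ok) throw new Error(`modlog fetch failed: ${res.status}`);
  return res.json();
}

async function main(): Promise<void> {
  // Walk a few pages and print them; a real archiver would diff against
  // what it already stored and append to disk or a database.
  for (let page = 1; page <= 5; page++) {
    const entries = await fetchModlogPage(page);
    console.log(JSON.stringify(entries));
  }
}

main().catch(console.error);
```

Run that on a schedule and you at least have your own record of what got removed and when, instead of whatever window the instance decides to keep.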
Moderation calls itself janitorial, but moderators are the engineers of the digital reality you choose to inhabit.
Despite the “checkbox transparency” of lemmy, this is not the cure-all we all wish it were.
First and foremost, you are a prisoner here: the more you participate, the more you have to lose, because lemmy does not have any kind of seamless account migration. So watch your thoughts, or you could be sent down the memory hole.
And then we have the dirty trick of “rules”. As we all know, these are cover stories for silencing some voices while placating the ones that remain, by telling them “you are not at risk, YOU wouldn’t ‘spread misinformation’” or whatever.
If you want to play a game, read your community rules and try to find the catch-all rule. All rulesets have catch-alls that can be argued into justifying any moderator action.
And all of that applies to good-faith moderation. Bad-faith moderation doesn’t have to convince itself of its fair neutrality; it will just use these same systems knowingly to obfuscate its own actions.