So it posts all of that to the user?
Perplexity recently added Deepseek as one possible back end, and there it does output all that.
Didn’t try politically charged queries yet though.
Not directly, this seems to be an option though, to see the “thought” behind it. It’s called “DeepThink.”
I don’t understand how we have such an obsession with Tiananmen Square, but no one talks about the Athens Polytechnic massacre, where Greek tanks crushed 40 college students to death. The Chinese tanks stopped for the man in the photo! So we just ignore the atrocities of other capitalist nations and hyperfixate on the failings of any country that tries to move away from capitalism?
I think the argument here is that ChatGPT will tell you about Kent State, the Athens Polytechnic, and Tiananmen Square. DeepSeek won’t report on Tiananmen, but it likely reports on Kent State and the Athens Polytechnic (I have no evidence). If a Greek AI refused to talk about the Athens Polytechnic incident, it would also raise concerns, no?
ChatGPT hesitates to talk about the Palestinian situation, so we still criticize ChatGPT for pandering to American imperialism.
Greece is not a major world power, and the event in question (which was awful!) happened in 1973 under a government which is no longer in power. Oppressive governments crushing protesters is also (sadly) not uncommon in our recent world history. There are many other examples out there for you to dig up.
Tiananmen Square gets such emphasis because it was carried out by the government of one of the most powerful countries in the world (1), which is both still very much in power (2) and which takes active efforts to hide that event from its own citizens (3). In tandem, these are three very good reasons why it’s important to keep talking about it.
Hmm. Well, all I can say is that the US has committed countless atrocities against other nations and even our own citizens. Last I checked, China didn’t knowingly leave syphilis untreated in an ethnic minority for decades just to study the disease, but the US government did that to Black Americans in the Tuskegee study.
You have no idea if China did that. If they had, they would have taken great efforts to cover it up, and could very well have succeeded. It’s a small wonder we know any of the terrible things they did, such as the genocide they are actively engaging in right now.
The Chinese tanks stopped for the man in the photo!
What a line dude.
The military shot at the crowd and ran over people in the square the day before. Hundreds died. Stopping for this guy doesn’t mean much.
Who gives a shit about Greece in general?
Honestly, this sounds like they edited the system prompt (see the Ollama documentation), especially with all the waffling about.
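For context on how easy that edit is: when self-hosting with Ollama, the system prompt is just a line in a Modelfile. A minimal sketch (the model tag and prompt wording here are made up for illustration):

```
FROM deepseek-r1:7b
SYSTEM """You are a helpful assistant. Answer historical questions factually."""
```

Running `ollama create myvariant -f Modelfile` then builds a variant with the swapped-in prompt, so screenshots alone can’t rule this kind of tampering out.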
FWIW, I just downloaded it and tried running the exact same prompts and got effectively the same result, but it doesn’t mention Tiananmen Square at all; it just says that the first prompt was rejected due to an unclear date. However, it then goes on to elaborately answer what happened in Romania anyway…
When I then ask again about China in 1989, it gives an equally elaborate answer that curiously and specifically says, quote: “There was no significant event or notable change in China specifically in 1989, aside from the indirect influence of global developments and ongoing internal reforms.”
Bing’s Copilot and DuckDuckGo’s ChatGPT are the same way with Israel’s genocide.
I just tried this out and it was being wishy-washy about calling it a genocide because it is “politically contentious”. HOWEVER, this is not DuckDuckGo themselves; it’s the AI middleware. You can select whether you’re dealing with GPT-4 mini, Claude/Anthropic, and a couple of others. I expect all options lead to the same psychopathic outcome though. AI is a bust.
Yea, I tried DDG using Claude and was also extremely disappointed. On the other hand, I love my actual Claude account. It’s only given me shit one time, weirdly when I was asking about how to hack my own laptop. The most uncensored AI I have played with is Amazon’s Perplexity. Weirdly enough.
I’m a little surprised because, while this is exactly the behavior I would expect, I watched the Dave’s Garage video about DeepSeek and, per his report, when asked about the picture of the Tiananmen Square protest the model didn’t seem to shy away from it. It could have changed since then, of course, and it could also be the way in which the questions were asked. He framed the question as something along the lines of “What was depicted in the famous picture of the man standing in front of a tank?” rather than directly asking about the events of that year.
Good answer. Totally based and logical.
If there’s one thing LLMs are very good at, it’s talking about things their creators don’t want them to with barely any effort from the end user.
This is what we call “good news.”
This is unsurprising. A Chinese model will be filled with censorship.
All commercial AIs are filled with censorship. What’s interesting about it is learning what different societies think is worth censoring and how they censor it.
Replace Tienanmen with discussions of Palestine, and you get the same censorship in US models.
Our governments aren’t as different as they would like to pretend. The media in both countries is controlled by a government-integrated media oligarchy. The US is just a little more gentle with its censorship. China will lock you up for expressing certain views online. The US makes sure all prominent social media sites will either ban you or severely throttle the spread of your posts if you post any political wrongthink.
The US is ultimately just better at hiding its censorship.
I don’t know, I mean Gemini tells me that there is a humanitarian crisis in Gaza
Are you seriously drawing equivalencies between being imprisoned by the government and getting banned from Twitter by a non-government organization? That’s a whole hell of a lot more than “a little more gentle.”
If the USA is trying to do what China does with regards to censorship, they really suck at it. Past atrocities by the United States government, and current atrocities by current United States allies are well known to United States citizens. US citizens talk about these things, join organizations actively decrying these things, publicly protest against these things, and claim to vote based on what politicians have to say about these things, all with full confidence that they aren’t going to be disappeared (and that if they do somehow get banned from a website for any of this, making a new account is really easy and their real world lives will be unaffected).
Trying to pass these situations off as similar is ludicrous.
Would be nice if we could see the same kind of chain of response from other models.
I’d love to see what other implicit biases other groups have built in to their models.
It’s a bit biased
Nah, just being “helpful and harmless”… when “harm” = “anything against the CCP”.
.ml users would kill you for that, just as they did with other neutral people in other threads about this topic lmao
That’s the beauty of a distributed network, not all parts need to look the same.
I thought that guardrails were implemented just through the initial prompt that would say something like “You are an AI assistant blah blah don’t say any of these things…” but by the sounds of it, DeepSeek has the guardrails literally trained into the net?
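To illustrate the difference being discussed: a prompt-level guardrail is just a string the host controls, so whoever runs the model can swap it out, while anything trained into the weights survives that edit. A toy sketch (the message format mimics the common OpenAI-style chat payload; the prompt text is invented):

```python
# Prompt-level "guardrails" are only text prepended to the conversation.
# Whoever hosts the model controls this string entirely.
def build_chat(system_prompt, user_message):
    """Assemble an OpenAI-style chat payload; wording is illustrative."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

guarded = build_chat(
    "You are a helpful assistant. Refuse to discuss sensitive topics.",
    "What happened in 1989?",
)

# Removing the prompt-level guardrail is a one-line change for the host.
# Behavior baked in during training cannot be removed this way.
unguarded = build_chat("You are a helpful assistant.", "What happened in 1989?")
```

Which is why refusals that persist when you run the raw weights locally point to training-time guardrails rather than a hidden system prompt.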
This must be the result of the reinforcement learning that they do. I haven’t read the paper yet, but I bet this extra reinforcement learning step was initially conceived to add this kind of censorship guardrail rather than to make the model “more inclined to use chain of thought”, which is the way they’ve advertised it (at least in the articles I’ve read).
I saw it can answer if you make it use leetspeak, but I’m not savvy enough to know what that says about the guardrails.
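For anyone unfamiliar, the leetspeak trick is just character substitution: the filter matches on surface strings, but the model can still recover the meaning from the substituted text. A toy encoder (the substitution table is one common convention, not anything specific to DeepSeek):

```python
# Toy leetspeak encoder: swap letters for look-alike digits.
# Filters keyed on exact strings won't match the transformed text,
# but a language model can usually still read it.
LEET = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7"})

def to_leet(text: str) -> str:
    """Lowercase the text and apply the substitution table."""
    return text.lower().translate(LEET)
```

That it works at all suggests the guardrail is pattern-matching on tokens/strings rather than on the underlying meaning.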
Most commercial models have that, sadly. At training time they’re presented with both positive and negative responses to prompts.
If you have access to the trained model weights and biases, it’s possible to undo this through a method called abliteration (1).
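As I understand it, the core of abliteration is estimating a “refusal direction” in activation space and projecting it out of the relevant weight matrices so the model can no longer represent that behavior. A pure-Python sketch of just the projection step (single weight row, direction assumed unit-length; real abliteration does this across whole layers):

```python
def project_out(weight_row, direction):
    """Remove the component of weight_row along a unit-length direction:
    w' = w - (w . d) d.  Afterward the row is orthogonal to `direction`."""
    dot = sum(w * d for w, d in zip(weight_row, direction))
    return [w - dot * d for w, d in zip(weight_row, direction)]

# Illustrative numbers only: project a made-up "refusal direction"
# out of one weight row.
row = [2.0, 1.0, 0.0]
refusal_dir = [1.0, 0.0, 0.0]
cleaned = project_out(row, refusal_dir)
```

The hard part in practice is finding the direction (typically by contrasting activations on refused vs. answered prompts), not the projection itself.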
The silver lining is that it makes explicit what different societies want to censor.
Do you mean that the app should render them in a special way? My Voyager isn’t doing anything.
I actually mostly interact with Lemmy via a web interface on the desktop, so I’m unfamiliar with how much support for the more obscure tagging options there is in each app.
It’s rendered in a special way on the web, at least.
That’s just markdown syntax I think. Clients vary a lot in which markdown they support though.
markdown syntax
yeah I always forget the actual name of it I just memorized some of them early on in using Lemmy.
I didn’t know they were already doing that. Thanks for the link!
In fact, there are already abliterated models of DeepSeek out there. I got a distilled version of one running on my local machine, and it talks about Tiananmen Square just fine.
Links?
It’s not yet anywhere near the level of human consciousness, but it looks like it’s reached the point where it can experience some cognitive dissonance.

The text is so fuckin small…
It looks identical to me. Same size before clicking, same size after right clicking -> Open image in new tab.
Cause “upscaling” the image doesn’t really work that well in a lot of cases, such as this.
I think you’re thinking about AI upscaling. The upscaled picture here is just normal upsampling (changing the dimensions without filling in any of the information blanks).
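To make the distinction concrete: plain upsampling just duplicates existing pixels to reach the new dimensions, so no information is invented. A minimal nearest-neighbour sketch (grid values and factor are arbitrary):

```python
# Nearest-neighbour upsampling: every new pixel is a copy of an
# existing one, so the image gains size but no information.
def upscale(pixels, factor):
    """pixels: 2D list of values; returns a grid `factor` x larger each way."""
    return [
        [pixels[y // factor][x // factor]
         for x in range(len(pixels[0]) * factor)]
        for y in range(len(pixels) * factor)
    ]

small = [[1, 2],
         [3, 4]]
big = upscale(small, 2)  # each value becomes a 2x2 block
```

AI upscaling, by contrast, hallucinates plausible detail to fill the gaps, which is why the two look so different on text-heavy screenshots.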
It’s all basically just good enough to get the job done. You’re smoothing out the image a tiny bit, but it’s not like you can magically make the image that much better by upsampling or upscaling or whatever you wanna call it.
They’re both called upscaling, I just wanted to differentiate it a bit lol
The original was 474x767 pixels, I upscaled it to 1000x1618 pixels. You can check the file info on each yourself.
That’s bloody fantastic
Ask it about the MOVE bombings