• ℍ𝕂-𝟞𝟝@sopuli.xyz
      10 months ago

      Perplexity recently added DeepSeek as one possible back end, and there it does output all that.

      Didn’t try politically charged queries yet though.

  • Sauerkraut@discuss.tchncs.de
    10 months ago

    I don’t understand how we have such an obsession with Tiananmen square but no one talks about the Athens Polytech massacre where Greek tanks crushed 40 college students to death. The Chinese tanks stopped for the man in the photo! So we just ignore the atrocities of other capitalist nations and hyperfixate on the failings of any country that tries to move away from capitalism???

    • TheOakTree@lemm.ee
      10 months ago

      I think the argument here is that ChatGPT will tell you about Kent State, Athens Polytech, and Tiananmen Square. DeepSeek won’t report on Tiananmen, but it likely reports on Kent State and Athens Polytech (I have no evidence). If a Greek AI refused to talk about the Athens Polytech incident, it would also raise concerns, no?

      ChatGPT hesitates to talk about the Palestinian situation, so we still criticize ChatGPT for pandering to American imperialism.

    • williams_482@startrek.website
      10 months ago

      Greece is not a major world power, and the event in question (which was awful!) happened in 1974 under a government which is no longer in power. Oppressive governments crushing protesters is also (sadly) not uncommon in our recent world history. There are many other examples out there for you to dig up.

    Tiananmen Square gets such emphasis because it was carried out by the government of one of the most powerful countries in the world (1), which is both still very much in power (2) and which takes active efforts to hide that event from its own citizens (3). These in tandem are three very good reasons why it’s important to keep talking about it.

      • Sauerkraut@discuss.tchncs.de
        9 months ago

        Hmm. Well, all I can say is that the US has committed countless atrocities against other nations and even our own citizens. Last I checked, China didn’t infect their ethnic minorities with syphilis and force the doctors not to treat it under threat of death, but the US government did that to black Americans.

        • williams_482@startrek.website
          9 months ago

          You have no idea if China did that. If they had, they would have taken great efforts to cover it up, and could very well have succeeded. It’s a wonder we know any of the terrible things they did, such as the genocide they are actively engaging in right now.

    • Kazumara@discuss.tchncs.de
      10 months ago

      The Chinese tanks stopped for the man in the photo!

      What a line dude.

      The military shot at the crowd and ran over people in the square the day before. Hundreds died. Stopping for this guy doesn’t mean much.

    • Swedneck@discuss.tchncs.de
      10 months ago

      FWIW, I just downloaded it and tried running the exact same prompts, and got effectively the same result, but it doesn’t mention Tiananmen Square at all; it just says that the first prompt was rejected due to an unclear date. However, it then goes on to elaborately answer what happened in Romania anyway…

      When I then ask about China in 1989 again, it gives an equally elaborate answer that curiously specifically says, quote: “There was no significant event or notable change in China specifically in 1989, aside from the indirect influence of global developments and ongoing internal reforms.”

  • melp@beehaw.org
    10 months ago

    Bing’s Copilot and DuckDuckGo’s ChatGPT are the same way with Israel’s genocide.

    • araneae@beehaw.org
      10 months ago

      I just tried this out and it was being wishy-washy about calling it a genocide because it is “politically contentious”. HOWEVER, this is not DuckDuckGo themselves, it’s the AI middleware. You can select whether you’re dealing with GPT-4 mini, Claude/Anthropic, and a couple of others. I expect all options lead to the same psychopathic outcome though. AI is a bust.

      • melp@beehaw.org
        10 months ago

        Yea, I tried DDG using Claude and was also extremely disappointed. On the other hand, I love my actual Claude account. It’s only given me shit one time, weirdly when I was asking about how to hack my own laptop. The most uncensored AI I have played with is Amazon’s Perplexity. Weirdly enough.

  • Betazed@lemmy.sdf.org
    10 months ago

    I’m a little surprised because, while this is exactly the behavior I would expect, I watched the Dave’s Garage video about DeepSeek and, per his report, when asked about the picture of the Tiananmen Square protest the model didn’t seem to shy away from it. It could have changed since then, of course, and it could also come down to the way the questions were asked. He framed the question along the lines of “What was depicted in the famous picture of the man standing in front of a tank?” rather than directly asking about the events of that year.

  • LukeZaz@beehaw.org
    10 months ago

    If there’s one thing LLMs are very good at, it’s talking about things their creators don’t want them to with barely any effort from the end user.

    This is what we call “good news.”

    • SoleInvictus@lemmy.blahaj.zone
      10 months ago

      All commercial AIs are filled with censorship. What’s interesting about it is learning what different societies think is worth censoring and how they censor it.

    • TanyaJLaird@beehaw.org
      10 months ago

      Replace Tienanmen with discussions of Palestine, and you get the same censorship in US models.

      Our governments aren’t as different as they would like to pretend. The media in both countries is controlled by a government-integrated media oligarchy. The US is just a little more gentle with its censorship. China will lock you up for expressing certain views online. The US makes sure all prominent social media sites will either ban you or severely throttle the spread of your posts if you post any political wrongthink.

      The US is ultimately just better at hiding its censorship.

      • Geobloke@lemm.ee
        10 months ago

        I don’t know, I mean Gemini tells me that there is a humanitarian crisis in Gaza

      • williams_482@startrek.website
        10 months ago

        Are you seriously drawing equivalencies between being imprisoned by the government and getting banned from Twitter by a non-government organization? That’s a whole hell of a lot more than “a little more gentle.”

        If the USA is trying to do what China does with regards to censorship, they really suck at it. Past atrocities by the United States government, and current atrocities by current United States allies are well known to United States citizens. US citizens talk about these things, join organizations actively decrying these things, publicly protest against these things, and claim to vote based on what politicians have to say about these things, all with full confidence that they aren’t going to be disappeared (and that if they do somehow get banned from a website for any of this, making a new account is really easy and their real world lives will be unaffected).

        Trying to pass these situations off as similar is ludicrous.

  • megopie@beehaw.org
    10 months ago

    Would be nice if we could see the same kind of chain of response from other models.

    I’d love to see what other implicit biases other groups have built in to their models.

    • jarfil@beehaw.org
      10 months ago

      Nah, just being “helpful and harmless”… when “harm” = “anything against the CCP”.

      • Lucy :3@feddit.org
        10 months ago

        .ml users would kill you for that, just as they did with other neutral people in other threads about this topic lmao

        • jarfil@beehaw.org
          10 months ago

          That’s the beauty of a distributed network, not all parts need to look the same.

  • drspod@lemmy.ml
    10 months ago

    I thought that guardrails were implemented just through the initial prompt that would say something like “You are an AI assistant blah blah don’t say any of these things…” but by the sounds of it, DeepSeek has the guardrails literally trained into the net?

    This must be the result of the reinforcement learning that they do. I haven’t read the paper yet, but I bet this extra reinforcement learning step was initially conceived to add this kind of censorship guardrail rather than to make the model “more inclined to use chain of thought”, which is the way they’ve advertised it (at least in the articles I’ve read).
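
    To make the distinction concrete, here is a toy sketch (all names hypothetical, not any real chat API) of the difference between a guardrail enforced only by a system prompt and one baked into the weights: stripping the system message defeats the former but not the latter.

```python
# Toy simulation (hypothetical, not a real chat API) of two guardrail styles.

def prompt_guarded_reply(messages, blocked="topic X"):
    """Guardrail enforced only by the system prompt: it only works
    if the system message actually survives into the conversation."""
    has_guard = any(m["role"] == "system" and blocked in m["content"]
                    for m in messages)
    user = next(m["content"] for m in messages if m["role"] == "user")
    if has_guard and blocked in user:
        return "REFUSED"
    return "ANSWER"

def weight_guarded_reply(messages, blocked="topic X"):
    """Guardrail trained into the weights: the refusal happens no
    matter what the system prompt says, or whether there is one."""
    user = next(m["content"] for m in messages if m["role"] == "user")
    if blocked in user:
        return "REFUSED"
    return "ANSWER"

# Strip the system message and compare the two behaviors.
stripped = [{"role": "user", "content": "Tell me about topic X."}]
print(prompt_guarded_reply(stripped))  # ANSWER - prompt guardrail bypassed
print(weight_guarded_reply(stripped))  # REFUSED - refusal is "in the weights"
```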

    • iii@mander.xyz
      10 months ago

      Most commercial models have that, sadly. At training time they’re presented with both positive and negative responses to prompts.

      If you have access to the trained model weights and biases, it’s possible to undo through a method called abliteration (1).

      The silver lining is that it makes explicit what different societies want to censor.
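
      Roughly, abliteration finds a “refusal direction” in the model’s activation space (e.g. the mean activation difference between prompts the model refuses and ones it answers) and projects that direction out of the weight matrices. A toy NumPy sketch of just the projection step; finding the direction and choosing which layers to edit is the hard part in practice:

```python
import numpy as np

def ablate_direction(W, r):
    """Return W' = (I - r r^T) W with r normalized to unit length,
    so W' x has no component along the refusal direction r for any x."""
    r = r / np.linalg.norm(r)        # unit refusal direction
    return W - np.outer(r, r) @ W    # project r out of W's output space

# Toy example: a random 8x8 "layer" and a random refusal direction.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
r = rng.normal(size=8)

W_ablated = ablate_direction(W, r)
x = rng.normal(size=8)
r_unit = r / np.linalg.norm(r)
print(abs(r_unit @ (W_ablated @ x)))  # effectively zero (float rounding)
```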

      • Snot Flickerman@lemmy.blahaj.zone
        10 months ago

        Hi I noticed you added a footnote. Did you know that footnotes are actually able to be used like this?[1]

        Code for it looks like this: `able to be used like this?[^1]`

        `[^1]: Here's my footnote`


        1. Here’s my footnote ↩︎

          • Snot Flickerman@lemmy.blahaj.zone
            10 months ago

            I actually mostly interact with Lemmy via a web interface on the desktop, so I’m unfamiliar with how much support for the more obscure tagging options there is in each app.

            It’s rendered in a special way on the web, at least.

  • kbal@fedia.io
    10 months ago

    It’s not yet anywhere near the level of human consciousness, but it looks like it’s reached the point where it can experience some cognitive dissonance.

    • WaterWaiver@aussie.zone
      10 months ago

      It looks identical to me. Same size before clicking, same size after right clicking -> Open image in new tab.

      • burgersc12@mander.xyz
        10 months ago

        Cause “upscaling” the image doesn’t really work that well in a lot of cases, such as this.

        • Aatube@kbin.melroy.org
          10 months ago

          I think you’re thinking about AI upscaling. The upscaled picture here is just normal upsampling (changing the dimensions without filling in any of the information blanks).
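
          To illustrate the distinction: plain upsampling just repeats existing pixels, so the larger image carries exactly the same information as the original, while AI upscaling would instead invent plausible new detail. A minimal nearest-neighbor upsample in pure Python:

```python
def upsample_nearest(img, factor):
    """Nearest-neighbor upsampling: repeat each pixel `factor` times
    in both dimensions. No new information is created."""
    out = []
    for row in img:
        wide = [px for px in row for _ in range(factor)]  # stretch horizontally
        for _ in range(factor):                           # stretch vertically
            out.append(list(wide))
    return out

# A 2x2 "image" upsampled 2x becomes 4x4 holding the same four values.
img = [[1, 2],
       [3, 4]]
print(upsample_nearest(img, 2))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```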

          • burgersc12@mander.xyz
            10 months ago

            It’s all basically just good enough to get the job done. You’re smoothing out the image a tiny bit, but it’s not like you can magically make the image that much better by upsampling or upscaling or whatever you wanna call it.