• hotspur@lemmy.ml · 17 days ago

      Yeah I was thinking he obviously needs to start responding with ChatGPT. Maybe they could just have the two phones use audio mode and have the argument for them instead. Reminds me of that old Star Trek episode where, instead of war, belligerent nations just ran a computer simulation of the war and then each side humanely euthanized that many people.

      • thetreesaysbark@sh.itjust.works · 17 days ago

        Jesus Christ to all the hypotheticals listed here.

        Not a judgement on you, friend. You’ve put forward some really good scenarios here, and if I’m reading you right, you’re kinda getting at how crazy all of this sounds XD

        • hotspur@lemmy.ml · 16 days ago

          Oh yeah totally—I meant that as an absurd joke haha.

          I’m also a little disturbed that people trust ChatGPT enough to outsource their relationship communication to it. Every time I’ve tried to put it through its paces it seems super impressive and lifelike, but as soon as I try to use it for work subjects I know fairly well, it becomes clear it doesn’t know what’s going on and is basically just making shit up.

          • thetreesaysbark@sh.itjust.works · 16 days ago

            I like it as a starting point for a subject I’m going to research. It seems to have mostly the right terminology and a rough idea of what those terms mean. This helps me then make more accurate searches on the subject matter.

      • Lemminary@lemmy.world · 16 days ago

        AI: *ding* Our results indicate that you must destroy his Xbox with a baseball bat in a jealous rage.

        GF: Do I have to?

        AI: You signed the terms and conditions of our service during your Disney+ trial.

    • UnderpantsWeevil@lemmy.world · 17 days ago

      This isn’t bad on its face. But I’ve got this lingering dread that we’re going to start seeing more nefarious responses at some point in the future.

      Like “Your anxiety may be due to low blood sugar. Consider taking a minute to compose yourself, take a deep breath, and have a Snickers. You’re not yourself without Snickers.”

      • Starbuncle@lemmy.ca · 16 days ago

        That’s where AI search/chat is really headed. That’s why so many companies with ad networks are investing in it. You can’t block ads if they’re baked into LLM responses.

      • Oka@sopuli.xyz · 16 days ago

        • This response sponsored by Mars Corporation.

        Interested in creating your own sponsored responses? For $80.08 monthly, your product will receive higher bias when it comes to related searches and responses.

        Instead of a response like

        • “Perhaps a burger is what you’re looking for”

        sponsored responses will look more like

        • “Perhaps you may want to try Burger King’s California Whopper, due to your tastes. You can also get a milkshake there instead of your usual milkshake stop, saving you an extra trip.”

        Imagine the [krzzt] possibilities!

  • finitebanjo@lemmy.world · 15 days ago

    ChatGPT can’t remember its own name or who made it; any attempt to have ChatGPT deconstruct an argument just results in a jumbled amalgam of argument deconstructions. Fuck off with such a fake post.

  • netvor@lemmy.world · 16 days ago

    NTA but I think it’s worth trying to steel-man (or steel-woman) her point.

    I can imagine that part of the motivation is to try and use ChatGPT to actually learn from the previous interaction. Let’s leave the LLM out of the equation for a moment: imagine that after an argument, your partner would go and do lots of research, one or more things like:

    • read several books focusing on social interactions (non-fiction or fiction or even other forms of art),
    • talk in depth to several experienced therapists and/or psychology researchers and neuroscientists (with varying viewpoints),
    • perform several scientific studies on various details of interactions, including relevant physiological factors.

    Then, after doing this ungodly amount of research, she would come back and present her findings to you, in hopes that you will both learn from this.

    Obviously no one can actually do that, but some people might – out of genuine curiosity and a drive for self-improvement – feel motivated to. So one could think of the OP’s partner’s behavior as a replacement for that research.

    That said, even if LLMs weren’t unreliable, hallucinating, and poisoned with junk information – or even if she were magically able to do all that without an LLM, with a superhuman level of scientific accuracy and bias protection – it would still be a bad move. She would still be the asshole, because OP was not involved in any of that research. OP had no say in formulating the problem, let alone in discovering the “answer”.

    Even from the most nerdy, “hyper-rational” standpoint: the research would still be ivory-tower research, and assuming that it applies to the real world like that is arrogant – it fails to admit the limitations of the researcher.

  • SkyNTP@lemmy.ml · 17 days ago

    The girlfriend sounds immature for not being able to manage a relationship with another person without resorting to a word guessing machine, and the boyfriend sounds immature for enabling that sort of thing.

  • Dragon "Rider"(drag)@lemmy.nz · 16 days ago

    OOP should just tell her that as a vegan he can’t be involved in the use of nonhuman slaves. Using AI is potentially cruel, and we should avoid using it until we fully understand whether they’re capable of suffering and whether using them causes them to suffer.

    • Starbuncle@lemmy.ca · 16 days ago

      Maybe hypothetically in the future, but it’s plainly obvious to anyone with a modicum of understanding of how LLMs actually work that they aren’t anywhere near anything anyone could remotely consider sentient.

      • Cryophilia@lemmy.world · 16 days ago

        but it’s plainly obvious to anyone who has a modicum of understanding regarding how LLMs actually work

        This is a woman who asks chatGPT for relationship advice.

      • Dragon "Rider"(drag)@lemmy.nz · 16 days ago

        Sentient and capable of suffering are two different things. Ants aren’t sentient, but they have a neurological pain response. Drag thinks LLMs are about as smart as ants. Whether they can feel suffering like ants can is an unsolved scientific question that we need to answer BEFORE we go creating entire industries of AI slave labour.

        • beefbot@lemmy.blahaj.zone · 16 days ago

          I PROMISE everyone ants are smarter than a 2024 LLM. (edit to add:) Claiming they’re not sentient is a big leap.

          But I’m glad you recognise they can feel pain!

        • Starbuncle@lemmy.ca · 16 days ago

          Sentient and capable of suffering are two different things.

          Technically true, but in the opposite way to what you’re thinking. All those capable of suffering are by definition sentient, but sentience doesn’t necessitate suffering.

          Whether they can feel suffering like ants can is an unsolved scientific question

          No it isn’t, unless you subscribe to a worldview in which sentience could exist everywhere all at once instead of under special circumstances, which would demand you grant ethical consideration to every rock on the ground in case it’s somehow sentient.

  • EmperorHenry@discuss.tchncs.de · 15 days ago

    definitely NOT the asshole.

    ChatGPT sells all the data it has to advertising companies. She’s divulging intimate details of your relationship to thousands upon thousands of different ad companies, which also undoubtedly gets scooped up by the surveillance state.

    I doubt she’s using a VPN to access it, which means your internet provider is collecting that data too, and it also means the AI she’s talking to knows exactly where she is and by now probably knows who she is too.

    • phlegmy@sh.itjust.works · 15 days ago

      Your ISP won’t get any of that data.
      Almost every website uses SSL/TLS now, so your ISP will only see what time and how much data was transmitted between you and ChatGPT.
      It’s enough info for a government agency to figure out who you are if they wanted to, but your ISP won’t have any idea what you’re saying.
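
      For illustration, here’s a minimal sketch using only Python’s standard library (the chatgpt.com hostname is an assumption for the example, not necessarily the app’s real endpoint). It shows the asymmetry described above: the hostname leaks in the clear via SNI during the TLS handshake, but everything sent after the handshake leaves your machine as ciphertext.

      ```python
      import socket
      import ssl

      HOST = "chatgpt.com"  # illustrative hostname, assumed for this sketch

      ctx = ssl.create_default_context()
      with socket.create_connection((HOST, 443)) as raw_sock:
          # The ClientHello carries the hostname (SNI) unencrypted, so an
          # on-path observer like an ISP learns *which* site you contacted...
          with ctx.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
              # ...but anything written after the handshake is encrypted.
              # A packet capture at the ISP shows only ciphertext, timing,
              # and byte counts; never the request path or body.
              request = (b"GET / HTTP/1.1\r\nHost: " + HOST.encode()
                         + b"\r\nConnection: close\r\n\r\n")
              tls_sock.sendall(request)
              print(tls_sock.version())  # e.g. 'TLSv1.3'
      ```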

    • DillyDaily@lemmy.world · 15 days ago

      If she’s using ChatGPT to try and understand behavioural psychology, she’s not smarter than him.

      It would be one thing to go off and do some reading and come back with some resources and functional strategies for OP to avoid argumentative fallacies and navigate civil discourse, but she’s using a biased generative AI to armchair diagnose her boyfriend.

      “You don’t have the emotional bandwidth to understand what I’m saying” – okay, so what if he doesn’t? Now what, lady? Does ChatGPT have a self-development program so your boyfriend can develop the emotional intelligence required to converse with you?

      Picking apart an argument is not how you resolve an argument. ChatGPT is picking it apart because she’s prompting it to do that, whereas a therapist or couples counsellor would actually help address the root issues of the argument.

      • BilboBargains@lemmy.world · 15 days ago

        He’s probably gaslighting her and she doesn’t have anyone else to turn to for a reality check.

        His question amounts to ‘how can I continue to shape her reality with my narrative?’

        It doesn’t matter what ChatGPT or anyone else says; he ought to be able to answer reasonable questions. Note that he doesn’t provide any specific examples. He would get roasted in the comment section.

    • GreenKnight23@lemmy.world · 15 days ago

      or… they’re both assholes and she’s a gaslighting psychopath. just going off the evidence at my disposal.

      at this point if you’re with a partner that refuses to acknowledge your needs in the relationship there’s literally no reason to remain in the relationship.

      • BilboBargains@lemmy.world · 15 days ago

        Like her need for him to answer reasonable questions? Why does the origin of the question pose a threat, and why doesn’t he give examples? He’s like the rando poster who says ‘hey guys, I forgot the passcode to my iPhone, got a workaround for that?’ Okay buddy, so you stole a phone then.

        • GreenKnight23@lemmy.world · 15 days ago

          if they were reasonable questions then she wouldn’t need AI to ask them.

          she’s using AI to analyze her perception of the argument and then attacking him based on a flawed analysis.

          he’s not sharing enough info to determine why they have so many arguments nor what they are about.

          they’re both being shitty to each other and they both need to acknowledge the relationship is failing due to the individual flaws they have as people.

          in a relationship differences can be strengths, similarities can be weaknesses, and personality flaws can be dangerous. it all depends on how those in the relationship deal with their differences, similarities, and flaws.

          these two obviously can’t deal.

          • BilboBargains@lemmy.world · 14 days ago

            Are you saying it would be preferable if she got the same advice from a human or read it in a book? This guy cannot defend his point of view because it’s probably not particularly defensible; the robot is immaterial.

            • GreenKnight23@lemmy.world · 14 days ago

              Are you saying it would be preferable if she was given the same advice from a human or read it in a book?

              I’ll spell it out for you. Y E S

              I’m not going to argue the finer points of how an LLM has literally no concept of human relationships, or how LLMs give the least effective advice on record.

              if you trust an LLM to give anything other than half-baked garbage, I genuinely feel sad for any of your current and future partners.

              This guy cannot defend his point of view because it’s probably not particularly defensible, the robot is immaterial.

              when you have a disagreement in a long-term intimate relationship it’s not about who’s right or wrong. it’s about what you and your partner can agree on and disagree on and still respect each other.

              I’ve been married for almost 10 years, been together for over 20, and we don’t agree on everything. I still respect my partner’s opinion and trust their judgment with my life.

              every good relationship is based on trust and respect. both are concepts foreign to LLMs, but not impossible for a real person to comprehend. this is why getting a second opinion from a 3rd party is so effective. even if it’s advice from a book, the idea comes from a separate person.

              a good marriage counselor will not choose sides, they aren’t there to judge. a counselor’s first responsibility is to build a bridge of trust with both members of the relationship to open dialogue between the two as a conduit. they do this by asking questions like, “how did that make you feel?” and “tell me more about why you said that to them.”

              the goal is open dialogue, and what she is doing by using ChatGPT is removing her voice from the relationship. she’s sitting back and forcing the guy to have relationship-building discussions with an LLM. now stop and think about how fucked up that is.

              in their relationship he is expressing what he needs from her: “I want you to stop using ChatGPT and just talk to me.” she refuses and ignores what he needs. in this scenario we don’t know what she needs, because he didn’t communicate that. the only thing we can assume based on her actions is that she has a need to be “right”. what did we learn about relationships and being “right”? it’s counterproductive to the goals of a healthy relationship.

              my point is, they’re both flawed people who are failing to communicate. neither is right, and introducing LLMs into a broken relationship is not the answer.

              • BilboBargains@lemmy.world · 14 days ago

                Okay so you don’t trust the robot to give relationship advice, even if that advice is identical to what humans say. The trouble is we never really know where ideas come from. They percolate up into consciousness, unbidden. Did I speak to a robot earlier? Are you speaking to a robot right now? Who knows. All I know is that when someone I love and respect asks me to explain myself I feel that I should do that no matter what.

  • Muffi@programming.dev · 16 days ago

    I was having lunch at a restaurant a couple of months back, and overheard two women (~55 y/o) sitting behind me. One of them talked about how she used ChatGPT to decide if her partner was being unreasonable. I think this is only gonna get more normal.

    • orcrist@lemm.ee · 16 days ago

      I don’t think people who think very much would bother to ask ChatGPT, unless they didn’t have any friends, because it’s quite obvious that relationship advice is delicate, and you certainly want the advice-giver to know something about your situation. You know, like your friends do, like computers don’t.

      We don’t even have to look at the low quality advice, because there’s no way it would be informed advice.

    • Wolf314159@startrek.website · 16 days ago

      A decade ago she would have been seeking that validation from her friends. ChatGPT is just a validation machine, like an emotional vibrator.

      • Trainguyrom@reddthat.com · 16 days ago

        The difference between asking a trusted friend for advice and asking ChatGPT or even Reddit is that a trusted friend will have more historical context. They’ve probably met or at least interacted with the person in question, and they can bring in the context of how this person previously made you feel. They can help you figure out if you’re just at a low point or if it’s truly a bad situation to get out of.

        Asking ChatGPT or Reddit is really like asking a Magic 8 Ball. Framing the question, and the simple act of asking it, helps you interrogate your feelings and form new opinions about the situation, but the answers are pretty useless, since there’s no historical context to base them on, and they’re only as good as the question asked.