• vivendi@programming.dev · 7 hours ago

      My most honest goal is to educate people, which on Lemmy is always met with hate. People love to hate, parroting the same old nonsense that someone else taught them.

      If you insist on ignorance, then be ignorant in peace; don’t make such misguided attempts at a sneer.

      There are things at which LLMs suck. And there are things that you wrongly believe as part of this bullshit Twitter civil war.

      • froztbyte@awful.systems · 7 hours ago

        My most honest goal is to educate people

        oh and I suppose you can back that up with verifiable facts, yes?

        and that you, yourself, can stand as a sole beacon against the otherwise regularly increasing evidence and studies that both indicate toward and also prove your claims to be full of shit? you are the saviour that can help enlighten us poor unenlightened mortals?

        sounds very hard. managing your calendar must be quite a skill

        • vivendi@programming.dev · 7 hours ago

          and that you, yourself, can stand as a sole beacon against the otherwise regularly increasing evidence and studies that both indicate toward and also prove your claims to be full of shit?

          Model quality has been going up steadily and hallucination rates have been going down, with multishot prompting and RAG reducing hallucinations further. These are proven scientific facts, what the fuck are you on about? Open Hugging Face RIGHT NOW, go to the papers section, FUCKING READ.
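
          For the skeptical, a minimal sketch of what “multishot + RAG” actually means in practice; the few-shot examples, toy corpus, and keyword retriever below are illustrative placeholders, not any specific paper’s setup:

          ```python
          # Toy illustration of few-shot ("multishot") prompting plus naive RAG.
          FEW_SHOT_EXAMPLES = [
              ("Q: What year did Apollo 11 land on the Moon?", "A: 1969."),
              ("Q: Who wrote Pride and Prejudice?", "A: Jane Austen."),
          ]
          CORPUS = [
              "The Hubble Space Telescope launched in 1990 aboard Space Shuttle Discovery.",
              "The James Webb Space Telescope launched on 25 December 2021.",
          ]

          def retrieve(query, corpus, k=1):
              """Naive keyword-overlap retrieval; real systems use embedding search."""
              words = set(query.lower().split())
              return sorted(corpus, key=lambda d: -len(words & set(d.lower().split())))[:k]

          def build_prompt(question):
              """Grounding the model in retrieved text is what cuts hallucination."""
              shots = "\n".join(f"{q}\n{a}" for q, a in FEW_SHOT_EXAMPLES)
              context = "\n".join(retrieve(question, CORPUS))
              return f"Use only this context: {context}\n\n{shots}\nQ: {question}\nA:"

          print(build_prompt("When did the James Webb Space Telescope launch?"))
          ```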

          I’ve spent 6+ years of my life in compsci academia, only to come here and be lectured by McDonald in his fucking basement. What has my life become?

          • froztbyte@awful.systems · 7 hours ago

            ah yes, my ability to read a pdf immediately confers upon me all the resources required to engage in materially equivalent experimentation of the thing that I just read! no matter whether the publisher spent cents or billions in the execution and development of said publication, oh no! it is so completely a cost paid just once, and thereafter it’s ~totally~ free!

            oh, wait, hang on. no. no it’s the other thing. that one where all the criticisms continue to hold! my bad, sorry for mistaking those. guess I was roleplaying an LLM for a moment there!

            • vivendi@programming.dev · 7 hours ago

              You can experiment on your own GPU by running the tests with a variety of models from different generations (Llama 2-class 7B, Llama 3-class 7B, Gemma, Granite, Qwen, etc.).

              Even the lowest-end desktop hardware can run at least 4B models. The only real difficulty is scripting the test system, but the papers are usually helpful in describing their test methodology.
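
              A minimal sketch of that kind of local test loop, assuming the Hugging Face transformers library; the model id and prompt here are just examples, swap in whatever the paper you’re replicating actually used:

              ```python
              # Minimal local eval loop for a small (~4B-class) instruct model.
              # Model id and prompts are examples, not any particular paper's setup.
              import torch
              from transformers import AutoModelForCausalLM, AutoTokenizer

              model_id = "Qwen/Qwen2.5-3B-Instruct"
              tok = AutoTokenizer.from_pretrained(model_id)
              model = AutoModelForCausalLM.from_pretrained(
                  model_id, torch_dtype=torch.float16, device_map="auto"
              )

              for prompt in ["What is the capital of Australia?"]:
                  inputs = tok(prompt, return_tensors="pt").to(model.device)
                  out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
                  # Decode only the new tokens, then score them against your answer key.
                  print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
              ```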

              • swlabr@awful.systems · 6 hours ago

                👨🏿‍🦲: how many billions of models are you on

                🗿: like, maybe 3, or 4 right now my dude

                👨🏿‍🦲: you are like a little baby

                👨🏿‍🦲: watch this

                glue pizza

                • vivendi@programming.dev · 6 hours ago

                  The most recent Qwen model supposedly works really well for cases like that, but I haven’t tested this one myself; I’m going on what some dude on Reddit reported.

              • froztbyte@awful.systems · 7 hours ago

                You can experiment on your own GPU

                you have lost the game

                you have been voted off the island

                you are the weakest link

                etc etc etc

                • vivendi@programming.dev · 7 hours ago

                  This is the most “insufferable redditor” stereotype shit possible, and to think we’re not even on Reddit

                  • self@awful.systems · 6 hours ago

                    nah, the most insufferable Reddit shit was when you decided Lemmy doesn’t want to learn because somebody called you out on the confident bullshit you’re making up on the spot

                    like LLM like shithead though am I right?

                  • froztbyte@awful.systems · 6 hours ago

                    a’ight, sure bub, let’s play

                    tell me what hw spec I need to deploy some kind of interactive user-facing prompt system backed by whatever favourite LLM/transformer-model you want to pick. idgaf if it’s llama or qwen or some shit you’ve got brewing in your back shed - if it’s on huggingface, fair game. here’s the baselines:

                    • expected response latencies: human, or better
                    • expected topical coherence: mid-support capability or above
                    • expected correctness: at worst “I misunderstood $x”, in the sense of “whoops, sorry, I thought you were asking about ${foo} but I answered about ${bar}”; i.e. actual, concrete contextual understanding

                    (so, basically, anything a competent L2 support engineer at some random ISP or whatever could do)

                    hit it, I’m waiting.
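
                    (the latency half of that baseline is at least mechanically checkable; a rough sketch, where generate() is a stub for whatever model gets wired up, and the 10-second target is an arbitrary stand-in for “human, or better”:)

                    ```python
                    # Rough end-to-end latency check against a response-time target.
                    # generate() is a stub; replace with a real call to the deployed model.
                    import statistics
                    import time

                    def generate(prompt):
                        time.sleep(0.5)  # stand-in for actual model inference
                        return "stub response"

                    TARGET_SECONDS = 10.0  # arbitrary stand-in for "human, or better"
                    latencies = []
                    for prompt in ["reset my router", "why is my invoice wrong?"]:
                        start = time.perf_counter()
                        generate(prompt)
                        latencies.append(time.perf_counter() - start)

                    # Latency is the easy third; this says nothing about coherence or correctness.
                    print(f"median latency: {statistics.median(latencies):.2f}s (target < {TARGET_SECONDS}s)")
                    ```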

          • froztbyte@awful.systems · 7 hours ago

            also

            I’ve spent 6+ years of my life in compsci academia

            eh. look.

            I realize you’ll probably receive/perceive this post negatively, ranging anywhere from “criticism”/“extremely harsh” through … “condemnation”?

            but, nonetheless, I have a request for you

            please, for the love of ${deity}, go out and meet people. get out of your niche, explore a bit. you are so damned close to stepping in the trap, and you could do not-that.

            (just think! you’ve spent a whole 6+ years on compsci? now imagine what your next 80+ years could be!)