• MonkderVierte@lemmy.zip · 11 points · edited · 56 minutes ago

    Translation: they want to ramp up the rate of useless features, and quality control can go from bad to shit.

  • Architeuthis@awful.systems · 85 points · 1 day ago

    Liuson told managers that AI “should be part of your holistic reflections on an individual’s performance and impact.”

    who talks like this

    • sp3ctr4l@lemmy.dbzer0.com · 14 points · edited · 7 hours ago

      People who work at Microsoft.

      Source: me. I used to work there, and was driven moderately insane by their highly advanced and pervasive outbreak of corpospeak.

      They are impressed by LLMs because they reproduce their inane dialect.

    • V0ldek@awful.systems · 18 points · 9 hours ago

      A company that forces you to write a “Connect” every half-year where you reflect on your performance and Impact™: (click here for the definition of Impact™ in Microsoft® SharePoint™)

    • Pyr@lemmy.ca · 14 points · 10 hours ago

      Business grads who think it makes them sound smart. I have to deal with way too many of them. It’s infuriating, because behind it all I know just how dull most of them truly are.

    • Saleh@feddit.org · 54 points · 1 day ago

      By creating a language only they are able to speak and interpret, the managerial class is protecting its existence and self-reproduction, keeping people of other classes out, or letting them in only after they pass through a proper re-education camp, e.g. an MBA program.

    • TinyTimmyTokyo@awful.systems · 23 points · 1 day ago

      I have no doubt that a chatbot would be just as effective at doing Liuson’s job, if not more so. Not because chatbots are good, but because Liuson is so bad at her job.

    • thesystemisdown@lemmy.world · 18 points · 1 day ago

      Some C-suite executives who think they’re important or interesting enough to give a TED Talk. Usually it’s just buzzword babble, but it occasionally escalates.

  • HedyL@awful.systems · 50 points · 1 day ago

    FWIW, I work in a field that is mostly related to law and accounting. Unlike with coding, there are no simple “tests” to try out whether an AI’s answer is correct or not. Of course, you could try these out in court, but this is not something I would recommend (lol).

    In my experience, chatbots such as Copilot are less than useless in a context like ours. For the more complex and unique questions (which are most of the questions we deal with every day), it simply makes up smart-sounding BS (including a lot of nonexistent laws etc.). In the rare cases where a clear answer is already available in the legal commentaries, we want to quote it verbatim from the most reputable source, just to be on the safe side. We don’t want an LLM to rephrase it, hide its sources and possibly introduce new errors. We don’t need “plausible deniability” regarding plagiarism or anything like this.

    Yet, we are being pushed to “embrace AI” as well, we are being told we need to “learn to prompt” etc. This is frustrating. My biggest fear isn’t to be replaced by an LLM, not even by someone who is a “prompting genius” or whatever. My biggest fear is to be replaced by a person who pretends that the AI’s output is smart (rather than filled with potentially hazardous legal errors), because in some workplaces, this is what’s expected, apparently.

    • scruiser@awful.systems · 6 points · 7 hours ago

      Unlike with coding, there are no simple “tests” to try out whether an AI’s answer is correct or not.

      So for most actual, practical software development, writing tests is in fact an entire job in and of itself, and a tricky one: covering even a fraction of the use cases and complexity the software will face when deployed is really hard. So simply letting LLMs brute-force their code through a bunch of tests by trial and error won’t actually get you good, working code.
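
      As a toy illustration (entirely hypothetical code, just to make the point): a test suite can look plausible and pass while the code is still wrong.

      ```python
      # Hypothetical example: a buggy function that passes a plausible test suite.
      def is_leap_year(year: int) -> bool:
          # Bug: ignores the century rules (divisible by 100 but not by 400).
          return year % 4 == 0

      def test_is_leap_year():
          assert is_leap_year(2024)
          assert not is_leap_year(2023)
          assert is_leap_year(2000)

      test_is_leap_year()        # passes: all green
      print(is_leap_year(1900))  # True -- but 1900 was not a leap year
      ```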

      AlphaEvolve kind of did this, but it was testing very specific, well-defined, well-constrained algorithms that could have very specific evaluations written for them, and it used an evolutionary algorithm to guide the trial-and-error process. They don’t say exactly in their paper, but that probably meant generating relatively short sections of code hundreds, thousands, or even tens of thousands of times.

      I’ve noticed a trend where people assume other fields have problems LLMs can handle, but the actually competent experts in that field know why LLMs fail at key pieces.

      • HedyL@awful.systems · 1 point · 6 hours ago

        I’ve noticed a trend where people assume other fields have problems LLMs can handle, but the actually competent experts in that field know why LLMs fail at key pieces.

        I am fully aware of this. However, in my experience, it is sometimes the IT departments themselves that push these chatbots onto others in the most aggressive way. I don’t know whether they found them to be useful for their own purposes (and therefore assume this must apply to everyone else as well) or whether they are just pushing LLMs because this is what management expects them to do.

    • diz@awful.systems · 15 points · edited · 19 hours ago

      I was writing some math code and, not being an idiot, I used an open-source math library for something called “QR decomposition”. It’s efficient, it supports sparse matrices (matrices where many entries are 0), etc.

      Just out of curiosity I checked where some idiot vibecoder would end up. The AI simply plagiarizes from shit sample snippets which exist purely to teach people what QR decomposition is. The result is actually unusable, because it’s numerically unstable.
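
      To make the failure mode concrete, here’s a minimal NumPy sketch (an illustration, not the actual code in question): the textbook classical Gram-Schmidt that tutorial snippets teach loses orthogonality on ill-conditioned input, while the library routine (LAPACK’s Householder QR behind np.linalg.qr) stays stable.

      ```python
      # Illustration only: textbook classical Gram-Schmidt vs. a library QR.
      import numpy as np

      def gram_schmidt_qr(A):
          """Classical Gram-Schmidt, the kind of snippet tutorials use to
          explain QR decomposition. Numerically unstable in practice."""
          m, n = A.shape
          Q = np.zeros((m, n))
          R = np.zeros((n, n))
          for j in range(n):
              v = A[:, j].copy()
              for i in range(j):
                  R[i, j] = Q[:, i] @ A[:, j]
                  v -= R[i, j] * Q[:, i]
              R[j, j] = np.linalg.norm(v)
              Q[:, j] = v / R[j, j]
          return Q, R

      rng = np.random.default_rng(0)
      # Ill-conditioned matrix: column scales spanning many orders of magnitude.
      A = rng.standard_normal((50, 50)) @ np.diag(10.0 ** -np.arange(50))

      Q_gs, _ = gram_schmidt_qr(A)
      Q_lib, _ = np.linalg.qr(A)  # LAPACK Householder QR

      I = np.eye(50)
      # Orthogonality error ||Q^T Q - I||: huge for the snippet, tiny for the library.
      print(np.linalg.norm(Q_gs.T @ Q_gs - I))    # typically O(1): unusable
      print(np.linalg.norm(Q_lib.T @ Q_lib - I))  # typically ~1e-15
      ```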

      Who in the fuck even needs this shit to be plagiarized, anyway?

      It can’t plagiarize a production-quality implementation, because you can count those on the fingers of one hand, they’re complex as fuck, and you can’t just blend a few together to pretend you didn’t plagiarize.

      The answer is: the people who are peddling the AI. They are the ones who ordered plagiarism with extra plagiarism on top. These are not coding tools; these are demos to convince the investors to buy the actual product, which is the company’s stock. There’s a little bit of tool functionality (you can ask them to refactor the code), but it’s just you misusing a demo to try to get some value out of it.

      And to that end, the demos take every opportunity to plagiarize something, and to talk about how the “AI” wrote the code from scratch based on its supposed understanding of fairly advanced math.

      And in coding, it is counterproductive to plagiarize. Many open-source libraries can be used in commercial projects. You get upstream fixes for free. You don’t end up with bugs, or worse yet security exploits, that may have been fixed since the training cut-off date.

      No fucking one in the right mind would willingly want their product to contain copy pasted snippets from stale open source libraries, passed through some sort of variable-renaming copyright laundering machine.

      Except of course the business idiots who are in charge of software at major companies, who don’t understand software. Who just failed upwards.

      They look at plagiarized lines and count them as improved productivity.

    • Saleh@feddit.org · 21 points · 1 day ago

      I have the same worries in engineering. We had a presentation from some AI “consultancy” firm telling us that now is the time to stop hesitating and start doing with LLMs, and they gave some examples of companies “they” found with regard to our industry. When I asked if they knew of any company willing to take the legal risk if their designs turn out hazardous, there was the sound of crickets. And just with that, LLMs are completely useless for any design task. If I still have to check that the design adheres to all relevant laws, norms and other standards, I might as well do the design myself.

      That is not to say that there aren’t useful tools among what is called “AI” these days. But those tools are designed for specific purposes, by people who understand that specific purpose and its caveats.

    • paequ2@lemmy.today · 32 points · 1 day ago

      I work in a field that is mostly related to law and accounting… My biggest fear is to be replaced by a person who pretends that the AI’s output is smart

      Aaaaaah. I know this person. They’re an accountant. They recently learned about AI. They’re starting to use it more at work. They’re not technical. I told them about hallucinations. They said the AI is rarely wrong. When they’re not 100% convinced, they ask the AI to cite the source… 🤦 I told them it can hallucinate the source! … And then we went back to “it’s rarely wrong though.”

      • HedyL@awful.systems · 22 points · 1 day ago

        And then we went back to “it’s rarely wrong though.”

        I often wonder whether the people who claim that LLMs are “rarely wrong” somehow have access to an entirely different chatbot. The chatbots I tried were rarely correct about anything except the most basic questions (to which the answers could be found everywhere on the internet).

        I’m not a programmer myself, but for some reason, I got the chatbot to fail even in that area. I took a perfectly fine JSON file, removed one comma on purpose and then asked the chatbot to fix it. The chatbot came up with a number of things that were supposedly “wrong” with it. Not one word about the missing comma, though.
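
        For comparison, even a plain deterministic parser pinpoints that kind of error immediately. A minimal sketch (the file content here is made up, of course):

        ```python
        import json

        broken = '{"name": "test" "value": 1}'  # one comma deliberately removed

        try:
            json.loads(broken)
        except json.JSONDecodeError as err:
            # Prints something like: Expecting ',' delimiter: line 1 column 17 (char 16)
            print(err)
        ```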

        I wonder how many people either never ask the chatbots any tricky questions (with verifiable answers) or, alternatively, never bother to verify the chatbots’ output at all.

        • David Gerard@awful.systems (OP, mod) · 17 points · 1 day ago

          AI fans are people who literally cannot tell good from bad. They cannot see the defects that are obvious to everyone else. They do not believe there is such a thing as quality, they think it’s a scam. When you claim you can tell good from bad, they think you’re lying.

          • diz@awful.systems · 6 points · 19 hours ago

            They’re also very gleeful about finally having one-upped the experts with one weird trick.

            Up until AI they were the people who were inept and late at adopting new technology, and now they get to feel that they’re ahead (because this time the new half-assed technology was pushed onto them and they didn’t figure out they needed to opt out).

            • HedyL@awful.systems · 5 points · 14 hours ago

              Up until AI they were the people who were inept and late at adopting new technology, and now they get to feel that they’re ahead

              Exactly. It is also a new technology that requires far fewer skills to use than previous new technologies. The skill lies in critically scrutinizing the output, which in this case means the less lazy people are the ones more reluctant to accept the technology.

              On top of this, AI fans are being talked into believing that prompting is itself a special “skill”.

          • sturger@sh.itjust.works · 7 points · 22 hours ago

            • They string words together based on the probability of one word following another.
            • They are heavily promoted by people who don’t know what they’re doing.
            • They’re wrong 70% of the time, but present everything they say as truth.
            • Average people have a hard time telling when they’re wrong.

            In other words, AIs are automated BS artists… being promoted breathlessly by BS artists.

          • HedyL@awful.systems · 5 points · 23 hours ago

            That’s why I find it problematic to argue that we should resist working with LLMs because we would then train them and enable them to replace us. That would require LLMs to actually be capable of replacing us, which I don’t believe (except in very limited domains such as professional spam). This type of AI is problematic because its abilities are completely oversold (and because it robs us of our time, wastes a lot of power and pollutes the entire internet with slop), not because it is “smart” in any meaningful way.

              • HedyL@awful.systems · 1 point · edited · 5 hours ago

                This has become a thought-terminating cliché all on its own: “They are only criticizing it because it is so much smarter than they are and they are afraid of getting replaced.”

        • paequ2@lemmy.today · 14 points · 1 day ago

          never bother to verify the chatbots’ output at all

          I feel like this is happening.

          When you’re an expert in the subject matter, it’s easier to notice when the AI is wrong. But if you’re not an expert, it’s more likely that everything will just sound legit. Or you won’t be able to verify it yourself.

          • HedyL@awful.systems · 11 points · 1 day ago

            But if you’re not an expert, it’s more likely that everything will just sound legit.

            Oh, absolutely! In my field, the answers made up by an LLM might sound even more legit than the accurate and well-researched ones written by humans. In legal matters, clumsy language is often the result of the facts being complex and the writer not wanting to make any mistakes. It is much easier to come up with elegant-sounding answers when they don’t have to be true, and that is what LLMs are generally good at.

      • Passerby6497@lemmy.world · +3/-1 · 21 hours ago

        I’m of two minds about AI. I can have the AI find a flaw in my payload object that was causing problems in an edge case I’ve run into with only 1 in 10 customers on a new product we’re deploying. But I also have days like last week, when it said that an expiration date of 5/27 was only days away, until I asked it what the 5th month of the year was…

        AI is at best an idiot savant that’s also a habitual liar.

    • rebelsimile@sh.itjust.works · 6 points · 23 hours ago

      Even in code it’s only “right” a small percentage of the time, if you count “right” as getting the answer quickly and accurately, without it losing context, and in less time than searching would have taken. To me, LLMs are just another way of getting at data, and are about as “right” as Google is when it shotguns literally millions of results at you. You (the human) still have to parse through it all and choose to do something with it.

    • underscores@lemmy.zip · 6 points · 1 day ago

      I work with someone who is supposed to be the key person for x kind of product we work with, and they very obviously send us AI-slop answers. I almost wanted to back out of the project they plan to implement, solely because our consultant can’t even answer basic questions without passing them through GPT.

    • Knock_Knock_Lemmy_In@lemmy.world · +1/-3 · 14 hours ago

      What about using LLMs to convert legal language in contracts etc. into basic English that is more accessible to the lay person?

      • HedyL@awful.systems · 8 points · 10 hours ago

        First, we are providing legal advice to businesses, not individuals, which means that the questions we are dealing with tend to be even more complex and varied.

        Additionally, I am a former professional writer myself (not in English, of course, but in my native language). Yet, even I find myself often using complicated language when dealing with legal issues, because matters tend to be very nuanced. “Dumbing down” something without understanding it very, very well creates a huge risk of getting it wrong.

        There are, of course, people who are good at expressing legal information in a layperson’s way, but those people have usually studied their topic very intensively beforehand. When a chatbot explains something in “simple” language, its output usually contains serious errors that are very easy for experts to spot, because the chatbot operates on the basis of stochastic rules and does not understand its subject at all.

      • froztbyte@awful.systems · 10 points · 13 hours ago

        sure sounds like a great way to get bad advice full of holes

        LLMs continue to be abysmal at fine detail, and that matters a lot with law

  • Sailor Sega Saturn@awful.systems · 55 points · 1 day ago

    Before LLMs came along no one cared what tools I did or didn’t use at work. Hell will freeze over before I let a text predictor write code for me even if that eventually costs me a job. I’m the sort who can’t stand any sort of auto-completion or other typing “help”, much less spending all my time reviewing LLM output.

    • Rhaedas@fedia.io · 29 points · 1 day ago

      LLMs are the next wave of popups here in the second quarter of the 21st century. I’ve become skilled at removing all the requests to let AI help me in whatever I’m actively doing. I about lost it recently when Excel threw one at me at work. NO, I DON’T WANT YOUR HELP!

      Having a better guided search in a help feature I don’t mind. But stop pushing it into everything; just have a way to get to it (and have it WORK when I use it!)

    • doleo@lemmy.one · 4 points · 1 day ago

      even if that eventually costs me a job

      I mean, it’s kind of a ‘damned both ways’ situation here, right? Lose your job if you refuse to use it, lose your job if you end up training it to do your job.

      • zbyte64@awful.systems · 9 points · edited · 1 day ago

        That’s just it though, it’s not going to replace you at doing your job. It is going to replace you by doing a worse job.

      • Architeuthis@awful.systems · 7 points · 1 day ago

        A programmer automating his job is kind of his job, though. That’s not so much the problem as the complete enshittification of software engineering that the culture surrounding these dubiously efficient and super sketchy tools seems to herald.

        On the more practical side, enterprise subscriptions to the slop machines do come with assurances that your company’s IP (meaning code and whatever else that’s accessible from your IDE that your copilot instance can and will ingest) and your prompts won’t be used for training.

        Hilariously, GitHub Copilot now has an option to prevent it from being too obvious about stealing other people’s code, called the duplication detection filter:

        If you choose to block suggestions matching public code, GitHub Copilot checks code suggestions with their surrounding code of about 150 characters against public code on GitHub. If there is a match, or a near match, the suggestion is not shown to you.

  • Archangel1313@lemmy.ca · 30 points · 1 day ago

    We should expect some enterprising Microsoft coder to come up with an automated AI agent system that racks up chatbot metrics for them — while they get on with their actual job.

    Lol!

    • Rhaedas@fedia.io · 11 points · 1 day ago

      Even better: given my view of the “AI everything” marketing push as an annoying popup, we need AI to fight AI, countering the attempt to ask the user if they’d like to try Copilot by yeeting it off the screen. Everyone likes robot fights.

  • miguel@fedia.io · 20 points · 1 day ago

    AI is so ridiculous. I literally asked Copilot (MS’s AI) to recommend books to me based on some books I like… and most of them didn’t exist.

    • Architeuthis@awful.systems · 9 points · 1 day ago

      Not really possible in an environment where the most useless person you know keeps telling everyone how AI made him twelve point eight times more productive, especially within hearing distance of management.