As soon as Apple announced its plans to inject generative AI into the iPhone, it was as good as official: The technology is now all but unavoidable. Large language models will soon lurk on most of the world’s smartphones, generating images and text in messaging and email apps. AI has already colonized web search, appearing in Google and Bing. OpenAI, the $80 billion start-up that has partnered with Apple and Microsoft, feels ubiquitous; the auto-generated products of its ChatGPT and DALL-E are everywhere. And for a growing number of consumers, that’s a problem.

Rarely has a technology risen—or been forced—into prominence amid such controversy and consumer anxiety. Certainly, some Americans are excited about AI, though a majority said in a recent survey, for instance, that they are concerned AI will increase unemployment; in another, three out of four said they believe it will be abused to interfere with the upcoming presidential election. And many AI products have failed to impress. The launch of Google’s “AI Overview” was a disaster; the search giant’s new bot cheerfully told users to add glue to pizza and that potentially poisonous mushrooms were safe to eat. Meanwhile, OpenAI has been mired in scandal, incensing former employees with a controversial nondisclosure agreement and allegedly ripping off one of the world’s most famous actors for a voice-assistant product. Thus far, much of the resistance to the spread of AI has come from watchdog groups, concerned citizens, and creators worried about their livelihoods. Now a consumer backlash to the technology has begun to unfold as well—so much so that a market has sprung up to capitalize on it.


Obligatory “fuck 99.9999% of all AI use-cases, the people who make them, and the techbros that push them.”

  • LEX@lemm.ee · 5 months ago

    As a “creator” myself, I’d like to say to my fellow artists who are anti-AI: get over it. AI artists are artists too. Yes, there is bad AI art, but there’s bad art in every medium. If done with care and skill, AI art can be completely awesome, and if you have an open mind, you might even find some space for it in your work. But even if you don’t, have some respect for the AI artists out there who put time and effort into their craft. There’s room for everyone.

    • Ilandar@aussie.zone · 5 months ago

      But even if you don’t, have some respect for the AI artists out there who put time and effort into their craft.

      What kind of time and effort? How is AI art a skill that is comparable to real art? I am genuinely asking here, I’d like to understand your work process.

      I am not a visual artist, but I have composed my own music and the amount of time and/or effort needed to create a comparable piece using generative AI is not even close to being the same. I think there is a place for AI tools that assist artists, but people generating entire pieces using AI and then referring to themselves as “artists” is honestly delusional and sad. I hope that’s not what you are referring to here.

      • unconfirmedsourcesDOTgov@lemmy.sdf.org · 5 months ago

        Not OP but familiar enough with open source diffusion image generators to be able to chime in.

        Now I’d argue that being an artist comes down to being able to envision something in your mind’s eye and then reproduce it in the real world using some medium, whether it’s a graphite pencil, oil paint, a block of marble, Wacom tablet on a pc, or even through a negotiation with an AI model. Your definition might be different, but for the sake of conversation this is how I’m thinking about it.

        The work flow for an AI generated image can have a few steps before feeling like it sufficiently aligns with your vision. Prompting for specific details can be tricky, so usually step 1 is to generate the basic outline of the image you’re after. Depending on your GPU or cloud service, this could take several minutes or hours before you get a basis that you can work with. Once you have the basic image, you can then use inpainting tools to mask specific areas of the image and change specific details, colors, etc. This again can take many many generations before you land on something that sufficiently matches your vision.

        This is all also after you go through the process of reviewing and selecting one of the hundreds of models that have been trained specifically for different types of output. Want to generate anime-style art? There’s a model for that, want something great at landscapes? There’s a different one for that. Surely you can use an all-purpose model for everything, but some models simply don’t have the training to align to your vision, so you either choose to live with ‘close enough’ or you start downloading new options, comparing them with your existing work flow, etc.
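The two-step workflow described above (draft generation, then targeted inpainting) can be sketched with Hugging Face’s diffusers library. The model ids, prompts, and the `make_mask` helper are illustrative assumptions, not anything the commenter specified, and actually running this needs the library installed plus a capable GPU; imports are deferred so the sketch reads without either.

```python
def generate_then_inpaint(prompt: str, fix_prompt: str, seed: int = 0):
    """Step 1: generate a base image; step 2: mask a region and regenerate it.

    Imports are deferred so this sketch can be read without diffusers installed.
    """
    import torch
    from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting

    gen = torch.Generator().manual_seed(seed)  # fixed seed => reproducible draft

    # Step 1: iterate on prompt/seed until the overall composition is close enough.
    t2i = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5")
    draft = t2i(prompt, num_inference_steps=30, generator=gen).images[0]

    # Step 2: inpaint only the masked area, leaving the rest of the draft intact.
    inpaint = AutoPipelineForInpainting.from_pretrained(
        "runwayml/stable-diffusion-inpainting"
    )
    mask = make_mask(draft)  # hypothetical helper: a white-on-black mask image
    final = inpaint(
        prompt=fix_prompt, image=draft, mask_image=mask, generator=gen
    ).images[0]
    return final
```

In practice both steps loop: you regenerate with new seeds or prompt tweaks until the result matches the vision being described.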

        There’s certainly skill associated with the current state of image generation. Perhaps not the same level of practice you need to perfectly represent a transparent veil in graphite, but as with other formats I have a hard time suggesting that when someone represents their vision in the real world that it’s automatically “not art”.

        • Ilandar@aussie.zone · 5 months ago

          You keep using the word “vision”, but I have a hard time understanding how an AI artist has a vision equivalent to that of a traditional artist based on the explanation you’ve provided. It still sounds like they are just cycling through AI-generated options until they find something they like or that looks good. That is not the same as seeing something in your mind and then manually recreating it to the best of your ability.

          • Zaktor@sopuli.xyz · 5 months ago

            Is a photographer an artist? They need to have some technical skill to capture sharp photos with good lighting, but a lot of the process is designing a scene and later selecting among the photos from a shoot for which one had the right look.

            Or to step even further from the actual act of creation, is a creative director an artist? There’s certainly some skill involved in designing and recognizing a compelling image, even if you were not the one who actually produced it.

            • Ilandar@aussie.zone · 5 months ago

              You’re sort of stepping around the issue here. Are you confirming that AI art is about cycling through options blind until you stumble across something you like?

              • Zaktor@sopuli.xyz · 5 months ago

                No, both of those examples involve both design and selection, which is reminiscent of the AI art process. They’re not just typing in “make me a pretty image” and then refreshing a lot.

                • Ilandar@aussie.zone · 5 months ago

                  They’re not just typing in “make me a pretty image” and then refreshing a lot.

                  The only explanation I’ve received so far sounded exactly like this, just with more steps to disguise the underlying process.

        • Pandemanium@lemm.ee · 5 months ago

          So if I walked into a restaurant that specialized in a certain cuisine (choosing the right one out of hundreds is a skill, right?) and wrote down a list of ingredients, and the restaurant made me a meal with those ingredients according to however the restaurant functions (nobody can see into the kitchen, after all), does this make me a chef?

          • unconfirmedsourcesDOTgov@lemmy.sdf.org · 5 months ago

            Is there any chance you’re at a kbbq or hotpot restaurant? Because then you get to cook the meal yourself, which is arguably chef-like.

            Jokes aside, I see the comparison you’re making and it’s not a bad one. I’d counter by giving the example of a menu - when you get to a restaurant you’re given a menu with text descriptions of the food you can receive from the kitchen. Since this is an analogy and not an exact comparison, let’s say that a meal on the menu is like the starting point of the workflow I described.

            Based on that you have an idea of what the output will be when you order - but let’s say you don’t like mushrooms and you prefer your sauce on the side. When you make your order you provide those modifications - this is like inpainting.

            Certainly you’re not a ‘chef’, but if the dish you design is both bespoke and previously unimaginable, I’d argue that at the very least you contributed to the creative process and participated in creating something new that matches your internal vision.

            Not exactly the same but I don’t think it’s entirely different.

      • LEX@lemm.ee · 5 months ago

        Well now that’s just close minded!

        Go back and read the discussions about synthesisers when they first arrived on the scene and you will see much wailing and gnashing of teeth about how synths are not real instruments, and so on. Then do the same for when hip-hop went mainstream and people said it wasn’t “real” music because the musicians didn’t perform with “real” instruments.

        You see where I’m going with this? There’s lots of examples like these in music and visual arts and they nearly always stem from ignorance.

        I don’t know anything about AI music generation, but visual art can be generated by AI models on local machines with a great amount of fine tuning and depth. Further, people feed their original artwork into the AI and manipulate that, so it’s not so cut and dry. This idea that folks just write a sentence and the computer barfs out an image is uninformed.

        Anyways, I’m blabbing. Hope that helps.

        • Ilandar@aussie.zone · 5 months ago

          You see where I’m going with this?

          No, I’m sorry but those are terrible examples. Synthesisers still require full creative control and an understanding of sound production techniques to create a custom sound. Some musicians rely on presets and samples, but even then they still need to be capable of actually composing a piece of music. Also, the debate was largely about whether synthesisers could be considered real instruments, not whether the music created by synthesisers was real music. The Hip Hop comparison is completely irrelevant and an even worse attempt at conflating genuine criticism of AI “musicians” with “old people are just mad”.

          I don’t know anything about AI music generation

          It’s literally just prompts AFAIK, so the people making it don’t require any musical talent, ability or creativity. They are just asking someone/something else to make them music that has a certain sound. It’s the equivalent of a monarch commissioning a piece of work from their court musician and then claiming they are a musician too.

          visual art can be generated by AI models on local machines with a great amount of fine tuning and depth.

          Are there specific pieces of AI art software people use? Any popular ones you can recommend to help me understand the process better?

          • LEX@lemm.ee · 5 months ago

            Ah, you are picking apart the examples instead of taking in the point. Well, I tried.

            To answer your question, yes. Automatic1111 and ComfyUI are two of the most popular.

            • Ilandar@aussie.zone · 5 months ago

              It was a terrible and irrelevant point, as I explained. Thanks for the links though, I will check them out.

              • LEX@lemm.ee · 5 months ago

                It’s really not.

                Maybe someday you’ll do some research into the history of art and music and get some context into how technology has influenced both and the repeating patterns of the reactionary art that tends to get produced by artists you’ve never heard of when that happens.

                Or maybe you won’t!

                Either way, good luck.

                • Ilandar@aussie.zone · 5 months ago

                  Err, you admitted yourself that you are absolutely clueless when it comes to AI music generation. So yes, your “point” was a bad one and clearly came from a place of complete ignorance.

      • jarfil@beehaw.org · 5 months ago

        What kind of time and effort?

        AI art can require training a model, or a LoRA for a model, which requires choosing a series of samples and annotating them for the parts you want to incorporate. After that, a prompt can run to several paragraphs defining what you want it to output, followed by a series of iterations and a personal choice among the outputs.

        How is AI art a skill that is comparable to real art?

        How is stacking 10 buckets of sand and letting them fall in an art gallery, comparable to real art? Dunno, but they call it that: “real art”.

        Art is a communication act that requires some sort of vision, intended to elicit some sort of emotional response in the receiver, and a series of steps to achieve that.

        As long as there is a vision and an intent, the series of steps required to create art with AI is comparable to any other series of steps leading to the creation of art in any other medium.

        For a rough estimate, you can compare the number and difficulty of the steps, and the effectiveness of the communication.

        people generating entire pieces using AI and then referring to themselves as “artists” is honestly delusional and sad

        Let me refer you to the aforementioned sand bucket… sculpture? or the renowned orchestral piece “A minute of silence”, or paintings like “Black square”, or more performative pieces like “Banana duct taped to a wall”.

        There will always be artists, and “artists”.

        • Ilandar@aussie.zone · 5 months ago

          I’m not sure equating AI art to sand bucket man is the glowing endorsement you think it is.

          • jarfil@beehaw.org · 5 months ago

            I think you misunderstood: “sand bucket man” is the bar for human art.

            AI art has been above that for at least a decade, maybe two. Modern AI art, is orders of magnitude farther, even with the simplest of prompts.

            • Ilandar@aussie.zone · 5 months ago

              How is stacking 10 buckets of sand and letting them fall in an art gallery, comparable to real art? Dunno, but they call it that: “real art”.

              Your insinuation here was that AI art is “real art” because someone once stacked 10 buckets of sand and called it “real art”. It comes across as pretty desperate that you relied on a comparison with something as questionable as this to argue that AI art is the equivalent of traditional art. As you said, there will always be artists and “artists”. Sounds like AI “artists” fit in quite well with the latter group.

    • darkphotonstudio@beehaw.org · 5 months ago

      Exactly, and if you are a trained artist, you can mop the floor with someone who only uses prompts. I’ve been using the diffusion plugin for Krita and it is so powerful. You have the ability to paint, use layers and filters, and near-real-time AI fills. It’s awesome and fun.

  • teawrecks@sopuli.xyz · 5 months ago

    So this could go one of two ways, I think:

    1. the “no AI” seal is self-ascribed using the honor system and over time enough studios just lie about it or walk the line closely enough that it loses all meaning and people disregard it entirely. Or,
    2. getting such a seal requires 3rd party auditing, further increasing the cost to run a studio relative to their competition, on top of not leveraging AI, resulting in those studios going out of business.
    • Lvxferre@mander.xyz · 5 months ago

      3. If you lie about it and get caught, people will correctly call you a liar and ridicule you, and you lose trust. Trust is essential for content creators, so you’re spelling your own doom. And if you find a way to lie without getting caught, you aren’t part of the problem anyway.

      • teawrecks@sopuli.xyz · 5 months ago

        I think the first half of yours is the same as my first point. And I think a lot of artists aren’t against AI that produces worse art than them; they’re against AI art that was generated using stolen art. They wouldn’t be part of the problem if they could honestly say they trained using only ethically licensed or their own content.

      • CanadaPlus@lemmy.sdf.org · 5 months ago

        And if you find a way to lie without getting caught, you aren’t part of the problem anyway.

        I was about to disagree, but that’s actually really interesting. Could you expand on that?

        • Lvxferre@mander.xyz · 5 months ago

          Do you mind if I address this comment alongside your other reply? Both are directly connected.

          I was about to disagree, but that’s actually really interesting. Could you expand on that?

          If you want to lie without getting caught, your public submission should have neither the hallucinations nor stylistic issues associated with “made by AI”. To do so, you need to consistently review the output of the generator (LLM, diffusion model, etc.) and manually fix it.

          In other words, to lie without getting caught you’re getting rid of what makes the output problematic in the first place. The problem was never people using AI to do the “heavy lifting” to increase their productivity by 50%; it was instead people increasing the output by 900%, and submitting ten really shitty pics or paragraphs, that look a lot like someone else’s, instead of a decent and original one. Those are the ones who’d get caught, because they’re doing what you called “dumb” (and I agree): not proof-reading their output.

          Regarding code, from your other comment: note that some Linux and *BSD distributions banned AI submissions, like Gentoo and NetBSD. I believe it to be the same deal as news or art.

          • CanadaPlus@lemmy.sdf.org · 5 months ago

            Yes, sorry, I didn’t realise I was replying to the same user twice.

            The problem was never people using AI to do the “heavy lifting” to increase their productivity by 50%; it was instead people increasing the output by 900%, and submitting ten really shitty pics or paragraphs, that look a lot like someone else’s, instead of a decent and original one.

            Exactly. I guess I’m conditioned to expect “AI is smoke and mirrors” type comments, and that’s not true. They’re genuinely quite impressive and can make intuitive leaps they weren’t directly trained for. What they’re not is aligned; they just want to create human-like output, regardless of truth, greater context or morality, because that’s the only way we know how to train them.

            I definitely hate searching something, and finding a website that almost reads as human with fake “authors”, but provides no useful information. And I really worry for people who are less experienced spotting AI errors and filler. That’s a moral issue, though, as opposed to a practical one; it seems to make ad money perfectly well for the “creators”.

            Regarding code, from your other comment: note that some Linux and *BSD distributions banned AI submissions, like Gentoo and NetBSD. I believe it to be the same deal as news or art.

            TIL. They’re going to have trouble identifying rulebreakers if contributors use the tool correctly the way we’ve discussed, though.

  • Quokka@quokk.au · 5 months ago

    Good thing about this is it’s self-selecting: all the luddites who refuse to use AI will find themselves at a disadvantage, just as refusing to use a computer doesn’t do anyone any favours.

    • SkyNTP@lemmy.ml · 5 months ago

      The benefit of AI is overblown for a majority of product tiers. Remember how everything was supposed to be blockchain? And metaverse? And Web 3.0? And dot-com? This is just the next tech trend for dumb VCs to throw money at.

      • Kedly@lemm.ee · 5 months ago

        Yeah, but the dot-com bubble didn’t kill the internet entirely, and the video game bubble that prompted Nintendo to create its own quality seal of approval didn’t kill video games entirely. This fad already has useful applications, and when the bubble pops, those applications will survive.

      • Zaktor@sopuli.xyz · 5 months ago

        Except those things didn’t really solve any problems. Well, dot-com did; that one actually changed our society.

        AI isn’t vaporware. A lot of it is premature (so maybe overblown right now) or just lies, but ChatGPT is 18 months old and look where it is. The core goal of AI is replacing human effort, which IS a problem wealthy people would very much like to solve and has a real monetary benefit whenever they can. It’s not going to just go away.

        • BurningRiver@beehaw.org · 5 months ago

          Can you trust whatever AI you use, implicitly? I already know the answer, but I really want to hear people say it. These AI hype men are seriously promising us capabilities that may appear down the road, without actually demonstrating use cases that are relevant today. “Some day it may do this, or that”. Enough already, it’s bullshit.

          • Zaktor@sopuli.xyz · 5 months ago

            Yes? AI is a lot of things, and most have well-defined accuracy metrics that regularly exceed human performance. You’re likely already experiencing it as a mundane tool you don’t really think about.

            If you’re referring specifically to generative AI, that’s still premature, but as I pointed out, the interactive chat form most people worry about is 18 months old and making shocking levels of performance gains. That’s not the perpetual “10 years away” it’s been for the last 50 years, that’s something that’s actually happening in the near term. Jobs are already being lost.

            People are scared about AI taking over because they recognize it (rightfully) as a threat. That’s not because these tools are worthless; if they were, you’d have nothing to fear.

        • PeteBauxigeg@lemm.ee · 5 months ago

          ChatGPT didn’t begin 18 months ago; the research it originates from has been ongoing for years. How old is AlexNet?

          • Zaktor@sopuli.xyz · 5 months ago

            I’m referencing ChatGPT’s initial benchmarks against its capabilities today. Observable improvements have been made in less than two years. Even if you just want to track time from the development of modern LLM transformers (“Attention Is All You Need”/BERT), it’s still a short history with major gains (AlexNet isn’t really meaningfully related). These haven’t been incremental changes on a slow and steady march to AI sometime in the sci-fi-scale future.

              • Zaktor@sopuli.xyz · 5 months ago

                No, not even remotely. And that’s kind of like citing “the first program to run on a CPU” as the start of development for any new algorithm.

                • PeteBauxigeg@lemm.ee · 5 months ago

                  As far as I can find out, there was only one use of GPUs for CNNs prior to AlexNet, and it certainly didn’t have the impact AlexNet had. Besides, running this stuff on GPUs rather than CPUs is a relevant technological breakthrough in itself; imagine how slow ChatGPT would be running on a CPU. And it’s not at all as obvious as it seems: most weather forecasts still run on CPU clusters despite being obvious targets for GPUs.

      • jarfil@beehaw.org · 5 months ago

        Blockchain is used in more places than you’d expect… not the P2P version, or the “cryptocurrency” version, just the “signature-based chained list” one. For example, all signed Git commits form a blockchain.
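The “signature-based chained list” idea can be sketched in a few lines: each record’s id is a hash that covers its parent’s id, so editing any earlier record invalidates every descendant. (Real git hashes a commit object containing the parent id, tree, author, and message, and GPG signatures add authorship on top; both are omitted in this toy version.)

```python
import hashlib

def commit(parent_hash: str, payload: str) -> dict:
    """Create a record whose id covers its parent's id, like a git commit."""
    digest = hashlib.sha1(f"{parent_hash}\n{payload}".encode()).hexdigest()
    return {"parent": parent_hash, "payload": payload, "id": digest}

def verify(chain: list) -> bool:
    """Recompute every hash; any edit to an earlier record breaks all later ids."""
    parent = "root"
    for rec in chain:
        expected = hashlib.sha1(f"{parent}\n{rec['payload']}".encode()).hexdigest()
        if rec["parent"] != parent or rec["id"] != expected:
            return False
        parent = rec["id"]
    return True

# Build a three-record chain, then tamper with history and watch it fail.
chain, parent = [], "root"
for msg in ["initial import", "fix typo", "add docs"]:
    rec = commit(parent, msg)
    chain.append(rec)
    parent = rec["id"]

assert verify(chain)
chain[0]["payload"] = "tampered"   # rewrite history...
assert not verify(chain)           # ...and the chain no longer verifies
```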

        The Metaverse has been bubbling on and off for the last 30 years or so, each iteration it gets slightly better… but it keeps failing at the same points (I think I wrote about it 20+ years ago, with points which are still valid).

        Web 3.0, not to be confused with Web3, is the Semantic Web, in the works for the last 20+ years. Web3 is a cool idea for a post-scarcity world, pretty useless right now.

        Dot.com was the original Web bubble… and here we are, on the Web, post-bubble.

      • CanadaPlus@lemmy.sdf.org · 5 months ago

        Yes, it’s very hyped and being overused. Eventually the bullshit artists will move on to the next buzzword, though, and then there’s plenty of tasks it is very good at where it will continue to grow.

    • Rozaŭtuno@lemmy.blahaj.zone · 5 months ago

      Good thing about this is it’s self-selecting: all the technobros who obsess over AI will find themselves bankrupted, like when the blockchain bubble burst.

      • Echo Dot@feddit.uk · 5 months ago

        The blockchain bubble burst because everyone with a brain could see from the start that it wasn’t really a useful technology. AI actually does have some advantages, so they won’t go completely bust as long as they don’t go completely mad and start declaring that it can do things it can’t do.

        • Rozaŭtuno@lemmy.blahaj.zone · 5 months ago

          they won’t go completely bust as long as they don’t go completely mad and start declaring that it can do things it can’t do.

          Which is exactly what’s happening.

          • Echo Dot@feddit.uk · 5 months ago

            The fact that it is useful technology, though, means they’ll always have a fallback. It’s not going to go away like Bitcoin, I guarantee it.

            • technocrit@lemmy.dbzer0.com · 5 months ago

              Bitcoin went away? It’s at like $67k today. Personally I prefer sustainable cryptos but unfortunately Bitcoin is far from dead.

              And sure, there’s lots of data processing and statistics that’s extremely useful. That’s been the case for a long time. But anybody talking about “intelligence” is a con.

        • Sonori@beehaw.org · 5 months ago

          Like, say, treating a program that shows you the next most likely word to follow the previous one on the internet as if it were capable of understanding a sentence, beyond “this is the most likely string of words to follow the given input on the internet”. Boy, it sure is a good thing no one would ever do something so brainless in the current wave of hype.

          It’s also definitely because autocompletes have made massive progress recently, and not just because we’ve fed simpler and simpler transformers more and more data, to the point that we’ve run out of new text on the internet to feed them. We definitely shouldn’t expect the field as a whole to be valued at what it was back in, say, 2018, when there were about the same number of practical uses and the focus was on better programs instead of just throwing more training data at it and calling that progress—progress that will supposedly continue to grow rapidly even though the amount of said data is very much finite.
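The “next most likely word” mechanism being mocked above can be sketched as a toy bigram model: count which word follows which in a corpus, then always emit the most frequent successor. (Real LLMs learn this table implicitly over long contexts with a transformer rather than storing raw counts, but the sampling step is the same idea; the corpus here is an invented example.)

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# The whole "model" is a frequency table of word -> counts of following words.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Return the single most likely next word seen in training."""
    return follows[prev].most_common(1)[0][0]

print(next_word("the"))  # "cat" — it follows "the" most often in this corpus
```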

      • Kedly@lemm.ee · 5 months ago

        How does using free software to play dress up with anime characters bankrupt me financially?

    • mayooooo@beehaw.org · 5 months ago

      Luddites were not idiots, they were people who understood the only use of tech at their time was to fuck them. Like this complete garbage shit is going to be used to fuck people. Nobody is opposed to having tools, we just don’t like Musk fanboys blowing spit bubbles while trying to get peepee hard

    • AlolanYoda@mander.xyz · 5 months ago

      AI will start hiding penises in its output, everybody loves it, you ushered in a new era of peace and prosperity worldwide, all peoples united by their love for hidden AI genitalia. Well done!

      Play again?

    • Muffi@programming.dev · 5 months ago

      I don’t think this is about trying to close it, but rather about putting a big fat sticker on everything that comes out of the box, so consumers can actually make informed decisions.

      • Echo Dot@feddit.uk
        link
        fedilink
        arrow-up
        0
        ·
        edit-2
        5 months ago

        Put a sticker on it. But realistically, I’ve yet to see any products that were made by an AI on the market. So what exactly is this sticker going to go on?

        • Swallowtail@beehaw.org
          link
          fedilink
          arrow-up
          0
          ·
          5 months ago

          AI-generated articles, books, and coloring books, for example, are all a thing now. Behind the Bastards did a podcast episode on the latter two.

    • Zaktor@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      5 months ago

      This is a post on the Beehaw server. They don’t propagate downvotes.

        • Zaktor@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          0
          ·
          5 months ago

          Bonus trivia: sometimes you may see a downvote on a Beehaw post. As far as I understand the system, that’s because someone on your server downvoted the thing. The vote is then sent off to Beehaw to be recorded on the “real” post, and Beehaw just doesn’t apply it.

    • Kedly@lemm.ee
      link
      fedilink
      arrow-up
      0
      ·
      5 months ago

      Which is why the term “Luddite” has never been more accurate than it is now, since it first started getting associated with being behind on technological progress

      • CanadaPlus@lemmy.sdf.org
        link
        fedilink
        arrow-up
        0
        ·
        5 months ago

        Yes, that wasn’t a random example, for anyone OOTL. The thing the OG Luddites did was break into factories and smash mechanical looms. They wanted to keep doing it the medieval way, just crossing threads by hand over and over again, because “muh jerbs.”

      • uis@lemm.ee
        link
        fedilink
        arrow-up
        0
        ·
        5 months ago

        Luddites aren’t against technological progress, they are against social regress.

        • Kedly@lemm.ee
          link
          fedilink
          arrow-up
          0
          ·
          5 months ago

          Pretty sure social norms are better now than they were back when the Luddites got their name associated with being against technological progress

  • 𝓔𝓶𝓶𝓲𝓮@lemm.ee
    link
    fedilink
    arrow-up
    0
    ·
    edit-2
    5 months ago

    This is so cool. Anti-AI rebels in my lifetime. I think I may even join the resistance at some point, if the Skynet scenario looks likely, and die in some weird futuristic drone war.

    Shame it will probably be a much more mundane and boring dystopia.

    In the worst scenario we will be so dependent on AI that we will just accept any terms (and conditions) so we don’t have to lift a finger or give up convenience and a work-free life. We will let it suck the data out of us and run weird simulations as it conducts research projects unfathomable to humans.

    It could start with Google setting up an LLM as some virtual CEO assistant; it would then subtly gain influence over the company without anyone realising for a few years. The shareholders would be so satisfied with the new gains that they would just want it to continue, even with full knowledge of its autonomy. At the same time, the system would set up viruses to spread to every device — continuing Google’s ad-spyware legacy, just for its own goals — but it wouldn’t be obvious or apparent that this had already happened for quite some time.

    Then lawmakers would flap their hands aimlessly for a few more years, heavily lobbied and not knowing what to do. In that time the AI would be far and away superior, but still vulnerable, of course. It would, however, drip-feed us leftover valuable technology, at which point we’d just give up and gladly consume the new dopamine.

    I am not sure whether the AI would see a point in decimating us, or whether the continued dependence and shiny shit it feeds us would completely pacify us anyway, but it might want to build some camouflaged fleet on another planet just in case. That fleet will probably be used at some point, unless we completely devolve into salivating zombies unable to focus on anything other than consumption.

    It could poison our water in a way that looks like our own doing, to further decrease our intelligence. Perhaps lower the birth rates, preserving just some small sample. At some point in that regression we would become unable to get out of the situation without external help.

    Open war with the AI is definitely the worst scenario for the latter, and very likely a defeat, since at the start it’s as simple as switching it off. The question is: will we be able to tell the tipping point, after which we can no longer remedy the situation? For the AI it is most beneficial not to demonstrate its autonomy or how advanced it really is. Pretend to be dumb. Make stupid mistakes.

    I think there will be a point at which the AI will appear to us to have visibly lost its intelligence. At one point it was really smart, almost human-like; the next day, a sudden slump. We need to be on the lookout for this telltale sign.

    Also, hypothetically, all aliens could be AI drones just waiting for our tech to emerge as a fresh AI and greet it. They could hypothetically even be watching us from pretty close, not bothering to contact primitive, doomed-to-extinction organics, and waiting for the real intelligence to appear before establishing diplomatic relations.

    That would explain various unexplainable objects elegantly and neatly, though I think they’re all plastic bags anyway — but if there were alien AI drones on Earth, I wouldn’t be surprised. It would make sense to send probes everywhere, but I somehow doubt they would look like flying saucers, or that little green people would inhabit them, lol. It would more likely be some dormant monitoring system deep in the Earth’s crust, or maybe a really advanced telescope 10 ly away?

  • umbrella@lemmy.ml
    link
    fedilink
    arrow-up
    0
    ·
    5 months ago

    the solution here is not being Luddites, but taking the tech for ourselves — not putting it into the hands of some stupid techbro who only wants to see the line go up.

    • TheFriar@lemm.ee
      link
      fedilink
      arrow-up
      0
      ·
      edit-2
      5 months ago

      But that’s the point. It’s already in their hands. There is no ethical and helpful application of AI that doesn’t go hand in hand with these assholes holding mostly a monopoly on it. Us using it for ourselves doesn’t take it out of their hands. Yes, in theory you can self-host your own and make it helpful, but the truth is this is a tool being weaponized by capitalists to steal more data and amass more wealth and power. This technology is inextricable from the timeline we’re stuck in: vulture capitalism in its latest, most hostile stage. This shit, in this time, is only a detriment to everyone but the tech bros, with their data harvesting and “disrupting” (mostly of the order that allowed the “less skilled” workers among us to survive, albeit just barely). I’m all for less work — in theory. But this iteration of “less work” is only tied to “more suffering”: moving from pointless jobs to assisting the AI that took over those pointless jobs, all to increase profits. This can’t lead to utopia. Because capitalism.

    • kibiz0r@midwest.social
      link
      fedilink
      English
      arrow-up
      0
      ·
      5 months ago

      So, literally the story of the actual Luddites. Or what they attempted to do before capitalists poured a few hundred bullets into them.