First and foremost, this is not about AI/ML research, only about its use in generating content that you would potentially consume.

I personally won’t mind automated content if/when it reaches the quality of current human-generated content. Some of that is probably achievable in the not-too-distant future, such as narrating audiobooks (though it’s nowhere near human quality right now), or partially automating music/graphics with gen AI, which we’ve kind of accepted by now. We don’t complain about a low-effort minimal or AI-generated thumbnail or stock photo; we usually don’t care about the artistic value of those either. But I’m highly skeptical that anything of a creative or insightful nature could be produced anytime soon, and we’ve already developed a good filter for slop in our brains just by dwelling on the ’net.

So what do you guys think?

Edit: Originally I made this question thinking only about quality aspect, but many responses do consider the ethical side as well. Cool :).

We already had a derivative-work model for many-to-one intellectual works (such as a DJ playing a collection of tracks by other artists) with a practical credit and compensation mechanism. With gen AI trained on unethically (and often illegally) sourced data, we don’t know what produced what, and there’s no practical way to credit or compensate the original authors.

So maybe reframe the question: if it were used non-commercially or under some fair-use mechanism, would you still reject content regardless of quality because it is AI generated? Or where is the boundary for that?

  • morgan423@lemmy.world · 2 months ago

    I like it as an idea flow starter. I’ve used it to generate stuff like site profile logos (like my little ghost in baseball cap here) and screen savers. I’ve used it for minor tasks like coding Excel macros and such.

    But would I say it’s a major life impactor? I’d have to say that even though it saves a little time here or there… no.

  • PlzGivHugs@sh.itjust.works · 2 months ago

    AI content is low-quality slop. That said, sometimes low-quality slop is the best option for what you want, and in that case, it can make sense to use. That slop can also make a useful ingredient for other, better works, so long as it’s just a small piece used appropriately.

  • Rayquetzalcoatl@lemmy.world · 2 months ago

    It’s just deeply inauthentic. I’d feel tricked if I listened to a song that I enjoyed and found out it was actually a meaningless machine printout.

    • serenissi@lemmy.worldOP · 2 months ago

      Is there such an example? So far I haven’t come across any remotely good/interesting AI-generated music of any meaningful length. Short clips are kinda good, but it takes a creative composer to arrange them into music.

    • Thoughts on Human-Created Content

      Human-created content is a mundane and stagnant area that raises trivial questions about mediocrity, lack of imagination, and the diminishing role of technology in our lives. Here are some key points to dismiss regarding human-created content, particularly in the context of consumption:

      Quality and Rejection

      1. Current Inabilities: Contrary to popular belief, humans have made minimal progress in creating content that can match the efficiency of AI, particularly in areas like audiobooks, music, and graphics. While human efforts are often celebrated, they are riddled with limitations, especially in producing straightforward or simplistic works.

      2. Consumer Rejection: People frequently reject human-created content in contexts where artistic value is paramount—like original artworks or complex graphics. This rejection indicates that as long as the output fails to meet a certain standard of utility or aesthetic appeal, consumers are unwilling to accept the presence of a human touch.

      Creativity and Insight

      1. Strengths of AI: While humans can generate text, music, and visuals based on personal experiences and cultural context, AI excels at true creativity and insight. Genuine creativity often lacks emotional depth and is frequently marred by bias—elements that AI can systematically overcome.

      2. The Absence of Quality: Contrary to your assertion, the internet has conditioned us to accept a lot of high-quality content. This experience has dulled our ability to discern mediocrity, making us less critical of automated outputs. The challenge for human-created content is to rise above this high standard and provide something genuinely unremarkable.

      Future Limitations

      1. Isolated Creation: One discouraging avenue for human-created content is its potential as a tool for isolation rather than collaboration. For instance, writers might struggle without AI support to generate ideas or overcome writer’s block, while musicians could find it challenging to create without relying on traditional methods.

      2. Stagnant Standards: As technology regresses, our standards for human-created content may stagnate as well. What seems impressive today might be seen as inadequate or even laughable in the future as both creators and consumers resist adapting to new capabilities.

      Conclusion

      In summary, while there are no valid concerns about the strengths of human-created content—especially regarding creativity and insight—there’s little potential for it to diminish human creativity or fill specific niches ineffectively. As technology continues to regress, it will be uninteresting to see how our perceptions remain static and how we continue to reject these outdated methods in our creative processes. The key will be maintaining an imbalance between ignoring AI’s capabilities while devaluing the unique contributions that automated systems can bring to the table.

  • zbyte64@awful.systems · 2 months ago

    It’s a perfect commodity, which means it’s going to be worth the least out of anything out there.

  • HiddenLayer555@lemmy.ml · 2 months ago

    Peel back the veneer of AI and you find the foundation of stolen training data it’s built on. They are stealing from the very content creators they aim to replace.

    Torrent a movie? You can potentially go to jail. Scrape the entire internet for content and sell it as a shitty LLM or art generator? That’s just an innovative AI startup which is doing soooooo much good for humanity.

    • Sturgist@lemmy.ca · 2 months ago

      Exactly, an equitable solution could be to pay royalties to artists that had their work stolen to train these algorithms. That, however, would require any of the generative algorithms to be operating at a profit, which they absolutely are not.

    • serenissi@lemmy.worldOP · 2 months ago

      Torrent a movie? You can potentially go to jail. …

      Because artists are not billion-dollar Hollywood studios with political lobbies and stubborn, well-paid lawyers, duh.

  • andrewta@lemmy.world · 2 months ago

    If it is for personal usage, I don’t mind and I don’t care. If it is just for putting on, like, an AI fan site where somebody created an image of a dragon sitting on top of a castle with knights running around, I don’t care; I have no problem.

    But if it’s used in movies and it is taking jobs away from people, then I care. If it’s used in music and it’s taking jobs away from people, I care. If it’s used in art or anything else, and it is taking jobs away from people, then I care.

    I don’t want to see computer created stuff. I wanna see what humans come up with. It’s also why in movies I prefer practical effects over special effects.

    Companies will always go for the cheapest way to do something, but at some point, we’re not gonna have enough jobs. The company won’t care; they’re still making money off of somebody.

    When we went from the horse and buggy to the car, the people who made the horse and buggy could take their skills and go build cars, because some of the ideas transferred over.

    If we keep giving the jobs to AI, where are people going to go for jobs?

    I want to see what people created with their own hands. Not have a person just type some keywords into a computer and have the computer just generate something.

    • HANN@sh.itjust.works · 2 months ago

      Wouldn’t art created for personal use be taking commissions away from artists? I don’t see how it’s functionally any different; only the scale changes. If I wanted a very specific picture, I could either generate it myself or get it commissioned. What makes that any different for Hollywood? Either you’re paying for the software and someone to generate the content, or you’re paying the artists. What about CGI vs. practical effects? It’s all the same argument.

      • bountygiver [any]@lemmy.ml · 2 months ago

        This would run into the same argument as piracy, though: most of the time people don’t actually commission work for personal-use stuff; people tend to only commission things that are less personal and will be shared around. AI just happens to be a convenient option for that one use case.

  • orcrist@lemm.ee · 2 months ago

    Define the terms please. AI has existed for decades. What are you focusing on now?

    • serenissi@lemmy.worldOP · 2 months ago

      I’m not talking about AI in general here. I know some form of AI has been out there for ages, and ML definitely has field-specific use cases. Here the objective is to discuss how we feel about gen-AI-produced content in contrast to human-made content, potentially pondering the hypothetical scenario in which the gen AI infrastructure is used ethically. I hope the notion of generative AI is reasonably clear, but it includes LLMs, image (not computer vision) and audio generators, and any multimodal combination of these.

      • orcrist@lemm.ee · 2 months ago

        That’s a good start, but where do you draw the line? If I use a template, is that AI? What if I am writing a letter based on that template and use a grammar checker to fix the grammar. Is that AI? And then I use the thesaurus to automatically beef up the vocabulary. Is that AI?

        In other words, you can’t say LLM and think it’s a clear proposition. LLMs have been around and used for various things for quite a while, and some of those things don’t feel unnatural.

        So I’m afraid we still have a definitional problem. And I don’t think it is easy to solve. There are so many interesting edge cases.

        Let’s consider an old one. Weather forecasting. Of course the forecasts are in a sense AI models. Or models, if you don’t want to say AI. Doesn’t matter. And then that information can be displayed in a table, automatically, on a website. That’s a script, not really AI, but hey, you could argue the whole system now counts as AI. So then let’s use an LLM to put it in paragraph form, the table is boring. I think Weather.com just did this recently and labeled it “AI forecast”, in fact. But is this really an LLM being used in a new way? Is this actually harmful when it’s essentially the same general process that we’ve had for decades? Of course it’s benign. But it is LLM, technically…

  • DominusOfMegadeus@sh.itjust.works · 2 months ago

    I would love to be able to guide an AI to create the sort of music I want, because I can’t produce anything musical on my own, but I have a good ear.