Reddit’s Answers feature uses generative AI to let users ask questions and receive concise, curated summaries of relevant discussions and posts from across Reddit.
The tool synthesizes real user content into easy-to-understand answers, including links to the original conversations and related communities for deeper exploration.
The aim is to improve search by making it faster and smarter, helping users find human perspectives, recommendations, and information within Reddit’s vast network of communities.
Points for Discussion:
- Would an AI-powered Answers feature enhance Lemmy’s user experience, or would it detract from the platform’s focus on decentralized, community-driven discussion?
- How might such a feature impact content discovery and engagement on Lemmy?
- What concerns might arise regarding privacy, moderation, or the risk of AI-generated misinformation?
- Should Lemmy prioritize transparency and open-source AI solutions if it were to implement a similar feature?
- How could Lemmy’s federated structure influence the effectiveness or challenges of such a tool compared to Reddit’s centralized approach?
Looking forward to hearing your thoughts on whether Lemmy should explore an AI-powered Answers feature and what considerations would be most important for our community!
Reddit Answers (Currently in Beta)
For all your questions, introducing Reddit Answers
We got early access to Reddit Answers. It was about as accurate as the average Redditor.
Troll
Man, are you crazy?
You have to know that asking this is just begging for abuse.
I’m inclined to say no. It’s pretty much a useless feature and doesn’t solve the fundamental problems of searching a federated service like Lemmy.
Even if LLMs worked the way the general public thinks they should, who would pay for the processing time? A one-off request isn’t too expensive, sure, but multiply that by however many users a server might have and it gets real expensive real quick. And that’s just assuming the models are hosted by the Lemmy server itself. It gets even more expensive if you’re using one of the public APIs to run the LLM queries.
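To put rough numbers on that, here’s a minimal back-of-envelope sketch. Every figure in it is a labeled assumption (hypothetical API pricing, hypothetical usage rates), not real Lemmy traffic or any provider’s actual rate card:

```python
# Back-of-envelope LLM answer-feature cost for one instance.
# ALL numbers here are hypothetical assumptions for illustration only.
PRICE_PER_1K_TOKENS = 0.002    # assumed API price in USD; varies by provider/model
TOKENS_PER_QUERY = 2_500       # assumed: retrieved posts as context + generated answer
QUERIES_PER_USER_PER_DAY = 3   # assumed usage rate
ACTIVE_USERS = 5_000           # assumed mid-sized instance

daily = ACTIVE_USERS * QUERIES_PER_USER_PER_DAY * (TOKENS_PER_QUERY / 1_000) * PRICE_PER_1K_TOKENS
print(f"~${daily:,.2f}/day, ~${daily * 30:,.2f}/month")
# -> ~$75.00/day, ~$2,250.00/month under these assumptions
```

Even if the real numbers land at a fraction of that, it’s a recurring bill a donation-funded instance would have to absorb.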
NO
Whenever I see an LLM chatbot integrated into another website or app, I always wonder what the point is. I only use LLMs when I really have to, but even if I were an enthusiastic user, why wouldn’t I just use my preferred model directly instead?
Hard pass. Lemmy absolutely does not need anything AI. A decent search, yes, and a way for results to appear on web searches.
No.
I think this is quite a bad idea even if we set aside any ethical concerns with AI entirely, solely because it increases the hardware requirements to run a Lemmy instance. I believe a critical goal of federated services should be to lower the barrier to entry for instance ownership as much as possible. The more instances, the better. If there are only two or three big ones, the problems of centralization reappear, albeit diluted. The whole point of federation is to have multiple instances. Many already survive on donations or outright charity, and AI would increase their costs immensely.
I think it’s fine to add features that need more compute if they deliver a big enough improvement in user experience for the compute required. But AI is one of the most computationally intensive features I can think of, and its value-to-compute ratio is particularly low. There’s so little content on Lemmy that you can feasibly read the entire post history of most communities in under a day of browsing, so there’s no real need for improved searchability; it’s just not that big here yet. And even when it does get that big, I think a strong search algorithm would be just about as effective, much more transparent, and, most importantly, wouldn’t require instance owners to add GPUs to their servers.
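For a concrete sense of what a “strong search algorithm” costs to run, here’s a minimal BM25 keyword-ranking sketch in Python, standard library only. The sample posts, query, and tuning constants are illustrative assumptions; the point is that classic ranked search runs comfortably on a CPU:

```python
import math
from collections import Counter

# Minimal BM25 ranking over a handful of made-up post titles.
posts = [
    "Best self-hosted RSS readers?",
    "How do I set up a Lemmy instance behind nginx?",
    "Federated search across instances is still an open problem",
]
k1, b = 1.5, 0.75  # standard BM25 tuning constants
docs = [p.lower().split() for p in posts]
avgdl = sum(len(d) for d in docs) / len(docs)
df = Counter(term for d in docs for term in set(d))  # document frequency
N = len(docs)

def bm25(query: str):
    """Return posts ranked by BM25 score against the query, best first."""
    terms = query.lower().split()
    ranked = []
    for i, d in enumerate(docs):
        tf = Counter(d)
        score = 0.0
        for t in terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            score += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        ranked.append((score, posts[i]))
    return sorted(ranked, reverse=True)

print(bm25("lemmy instance")[0][1])  # -> "How do I set up a Lemmy instance behind nginx?"
```

A production version would add stemming and an inverted index (or just lean on PostgreSQL full-text search), but nothing about it needs a GPU.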
No, I’d rather just have a decent search function. Lemmy should be about human interaction, not getting answers from an LLM.