Microsoft’s LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that’s inaccurate or misleading.

[…]

The relevant passage, which takes effect on November 20, 2024, reads:

Generative AI Features: By using the Services, you may interact with features we offer that automate content generation for you. The content that is generated might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes. Please review and edit such content before sharing with others. Like all content you share on our Services, you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information.

In short, LinkedIn will offer features that generate content automatically, but that content may be inaccurate. Users are expected to review and correct any false information before sharing it, because LinkedIn won’t be held responsible for the consequences.

  • Ech@lemm.ee
    2 months ago

    On the one hand, putting absolute faith into an llm and regurgitating anything it says as fact is just stupidity manifest. On the other, holding customers liable for their own shitty llm is hilariously duplicitous.

    Maybe, if this is a known issue, LI shouldn’t be pushing this crap on their platform in the first place, yeah? But some higher-up already fully bought into the grift, and to pull back now would be admitting they got duped, which will never happen of course.