- cross-posted to:
- [email protected]
- [email protected]
Microsoft’s LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that’s inaccurate or misleading.
LinkedIn thus takes after its parent, which recently revised its Service Agreement to make clear that its Assistive AI should not be relied upon.
LinkedIn, however, has taken its denial of responsibility a step further: it will hold users responsible for sharing any policy-violating misinformation created by its own AI tools.
The relevant passage takes effect on November 20, 2024.
In short, LinkedIn will provide features that can produce automated content, but that content may be inaccurate. Users are expected to review and correct false information before sharing said content, because LinkedIn won’t be held responsible for any consequences.
Lol
The real question is whether this will hold up in court. Judges are likely to frown on this kind of thing. Sure, the EULA that they know nobody reads says that, but their tools give advice in an authoritative tone. My company got in trouble in court because an advertisement appeared to show our tools being used in ways the warning label says not to.
I would rather clean up dog vomit than use linkedin.
My hope is to get a government job so I can delete that shit from my life.
My dream is to build a secret laser big enough that I can deathstar linkedin out of existence in one zap, my hope is however much the same as yours.
You don’t need a laser. You need a computer virus. Leave advanced physics to KTU interns.
Well that’s not saying much as most dog owners have cleaned up vomit at least once.
Sooo shitty.
We need an alternative