Published on: 24 May 2024
3 min read
Picture credit: generated by Meta AI on WhatsApp; prompt: "Generate picture of lying robot".
On lying LLMs, libelous output, and legal recourse: part 2.¹
So if you find yourself in the same situation as Osmond Chia, the tech reporter at The Straits Times who found an LLM unfairly maligning him...
...what are the options open to you?
I spoke with Osmond about how traditional legal remedies might offer recourse to victims of such misinformation.²
But the bad news is that for now, suing in Court is not a particularly helpful or practical option.
Before even getting into the jurisprudential difficulties,³ there are a number of practical hurdles. For example - and as I shared with Osmond - how would one go about establishing the extent or proof of publication? And would big tech AI developers like OpenAI, Meta, and Google be able to rely on their standard disclaimers to defend themselves against such claims?
Plus, very few can afford to go to Court to hold AI developers to account. Someone like Mark Walters - a radio host who is suing OpenAI for generating output wrongfully accusing him of embezzlement⁴ - is the exception, not the norm.
And while AI developers appear willing to correct misinformation - whether for PR purposes, or to avoid liability - I do wonder whether that is akin to shutting the stable door after the horse has bolted.⁵
So I dare not suggest that there are any magic bullet solutions to this problem.
--
In the meantime, I suggest that we work on the assumption that LLMs cannot generate accurate factual output.⁶
Don't get me wrong - LLMs are fantastic for generating output:
a) on subjects in which you have the expertise to assess accuracy; or
b) where accuracy is not important.
But for anything that requires a source - beware.
I recognise that this doesn't address the underlying issue of the extent to which AI developers should be responsible for misinformation, and what they can or should be doing to counteract it.
And I do not suggest that we should be fatalistic about it, and simply resign ourselves to the ever-encroaching intrusions of big tech.
But for now, I suggest that this is the most immediate, practical, and actionable step.
Disclaimer:
The content of this article is intended for informational and educational purposes only and does not constitute legal advice.
¹ Part 1: https://www.linkedin.com/posts/khelvin-xu_metoo-footnotes-ai-activity-7198533754800254976-asSu.
² Article: https://www.straitstimes.com/tech/ever-looked-yourself-up-on-a-chatbot-meta-ai-accused-me-of-a-workplace-scandal.
³ See, for example, this discussion in the Columbia Journalism Review: https://www.cjr.org/analysis/ai-sued-suit-defamation-libel-chatgpt-google-volokh.php.
⁴ https://news.bloomberglaw.com/ip-law/openai-fails-to-escape-first-defamation-suit-from-radio-host.
⁵ And actually, if you think about it, doesn't this seem like a consistent theme with big tech?
⁶ I can see some of you reaching for your pitchforks. I am not saying that LLMs cannot generate accurate factual output. Rather, I suggest that we assume they cannot, and treat their outputs accordingly. There's an important distinction there.