Published on: 22 Jun 2023
3 min read
On deep learning outputs, digital defamation, and defending against lawsuits.
Fred Riehl is the editor-in-chief of a gun news website, AmmoLand.com. He was reporting on a lawsuit involving the Second Amendment to the United States Constitution¹ (the "SAF Lawsuit").
He sent ChatGPT a link to a court document, and asked ChatGPT to provide a summary of the accusations in the document.²
In response, ChatGPT provided text output stating that the court document was a legal complaint accusing one Mark Walters of fraud and embezzlement.
This was, however, incorrect. Walters is not named in the SAF Lawsuit. No allegations of wrongdoing have been made against him in the SAF Lawsuit.
ChatGPT made it all up.
Walters found out, somehow.³ He is now suing OpenAI for defamation.
An open-and-shut case, perhaps? Yet another example of bad AI? Another reason to excoriate Big Tech?
Well, I suggest it's not quite so simple.
---
Consider, for example, the following:⁴
(a) assuming that the statements generated by ChatGPT were defamatory, is OpenAI responsible for their generation? An LLM like ChatGPT generates output based on prompts. By analogy, one could argue that word processing software generates outputs (i.e. text on a screen) based on keystrokes. If I type out a defamatory statement and post it online, would the maker of the software I used be liable for defamation? What if, for example, parts of my statement were typed with the assistance of autocomplete?⁵
(b) ChatGPT has in place express disclaimers that it "may occasionally generate incorrect or misleading information and produce offensive or biased content". Presumably, the reasonable⁶ ChatGPT user would treat all outputs with a healthy sense of skepticism, and not accept them as gospel truth. As such, if ChatGPT generates output that ostensibly defames someone, can it be said that this person has been lowered in the estimation of right-thinking members of society who use ChatGPT?⁷
(c) the way ChatGPT is typically used, its output is seen by a single person, who must then make a conscious decision whether to disseminate that output. Suppose ChatGPT has generated defamatory output which is seen by a single person. Should the victim be allowed to sue for what may be de minimis harm? But what if ChatGPT has repeatedly generated the same defamatory output in response to prompts from other users? How would the victim even know the extent of publication, so as to decide whether they can or should sue to stop it?
---
I don't have the answers to these questions, and I prefer not to express definitive views in a rapidly evolving area. I suggest, however, that these issues will have to be grappled with, and that they are far too nuanced to be comprehensively addressed in a slightly click-baity headline.
Disclaimer:
The content of this article is intended for informational and educational purposes only and does not constitute legal advice.
¹ Which protects the right to keep and bear arms. The subject matter is hardly surprising, considering the name of the website.
² The complaint is silent as to whether GPT-3.5 or GPT-4 was being used. I presume it was the latter, since the claim is that Riehl had sent ChatGPT an Internet link, and GPT-3.5 cannot access the Internet. Try it, and you'll get some variant of the following message:
"I apologize, but as an AI text-based model, I am unable to browse the internet or access specific links. However, if you provide me with the key details or specific content from the article, I'll do my best to help you with a summary or answer any related questions."
If, however, it transpires that Riehl was using GPT-3.5, then things get really interesting. Feel free to leave a comment or drop a DM, if you can figure out what I'm getting at.
The complaint can be found here:
https://www.courthousenews.com/wp-content/uploads/2023/06/walters-openai-complaint-gwinnett-county.pdf.
³ The complaint is silent as to how Walters found out.
⁴ Space does not permit a more nuanced discussion, so do forgive the simplicity of the scenarios presented.
⁵ I know LLMs are not the same as word processors, so there's a limit to how far we can take this analogy. But that's the tricky question - how do we apportion responsibility as between a human actor and the tools used?
⁶ And not reckless.
⁷ Which is the legal test in Singapore.