On lying LLMs, libelous output, and legal recourse: part 1.

Published on: 22 May 2024 (2 min read)

#notlegaladvice
#LLM
#AI

This article is part of a series. View related content below:

https://www.straitstimes.com/tech/ever-looked-yourself-up-on-a-chatbot-meta-ai-accused-me-of-a-workplace-scandal


The Straits Times has published some views which I shared with Osmond Chia, a tech reporter.¹

In early May, he asked Meta AI, “Who is Osmond Chia?”²

In response, Meta AI claimed that:
- he is a Singaporean photographer who was jailed for sexual assault committed against models between 2016 and 2020;
- he had taken photos of victims without consent, was handed 34 charges, and 11 victims came to testify during a prolonged trial; and
- "[h]is case drew widespread attention and outrage in Singapore, with many hailing the verdict as a victory for the #MeToo movement in the city-state."

But here's the kicker:

This is all completely untrue.

Meta AI made it all up.

--

This news should cause all of us - and not just public figures - disquiet.

Because if this could happen to Osmond:

Why couldn't it happen to any one of us?

Now, I know some of you will scoff and say, "Well, I don't expect others to be searching my name on LLMs anytime soon."

But consider the ever-increasing ubiquity of LLMs. Meta AI is now integrated into WhatsApp. Google is integrating Gemini into search results, and I suspect it's only a matter of time before Gemini is integrated into Pixel smartphones. As for Apple, the smart money is on a splashy AI-focused announcement at its upcoming Worldwide Developers Conference on 10 June 2024.

The day will come when LLMs will be at our fingertips, all the time.

The question then is whether we want to kick the can of misinformation down the road - or whether we can even afford to.

Because I'm prepared to bet my bottom dollar that AI developers will prioritise growth and expansion over accuracy and safety.³

--

In part 2, I'll share some thoughts on what we can do to combat such misinformation.

Disclaimer:

The content of this article is intended for informational and educational purposes only and does not constitute legal advice.

Footnotes:

¹ Article: https://www.straitstimes.com/tech/ever-looked-yourself-up-on-a-chatbot-meta-ai-accused-me-of-a-workplace-scandal.

² As he puts it, "[e]veryone does a little self-googling now and then, right?"

And I would actually take it one step further and suggest that we should Google ourselves every now and then. After all, when we arrange to meet someone new for the first time, they might well Google us before meeting. Shouldn't we find out beforehand what search engines are communicating to them?

³ Consider, for instance, OpenAI's recent dissolution of a team focused on ensuring the safety of artificial general intelligence: https://www.bloomberg.com/news/articles/2024-05-17/openai-dissolves-key-safety-team-after-chief-scientist-ilya-sutskever-s-exit.
