Published on: 5 Dec 2023 · 3 min read
On #robot judges: part 8 - in which we resume this series after a long break.¹
In part 7,² I explained that I started this series in response to a column in The Business Times, in which the author suggests that there are "good reasons" to "embrace legally binding and enforceable AI-generated reasoned determinations".³
In this part, I finally turn to address some of the author's propositions.
---
Proposition 1.
The author suggests that it is a misconception that "#AI #LLM decisions are random and lack rationality, just like a coin toss". He suggests further that "AI LLMs provide responses based on data-sets they are trained on."
I agree!
Just because we don't know why an LLM generates its outputs does not mean that those outputs are completely random: see part 4.⁴
But the author then asserts that AI LLMs can "give reasons for their decisions" if asked to do so. He inputs, into #ChatGPT:
a) a request for ChatGPT to make a decision on a dispute and to give detailed reasons for that decision;
b) a set of facts, and each party's arguments.
ChatGPT's output includes not just a decision, but also a list of reasons for its decision. The author asserts that this shows that an LLM is able to give reasons for its decisions.
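(For concreteness, an experiment along those lines might look roughly like the sketch below, using the OpenAI Python client as an assumed example. The prompt wording, the placeholders and the model name are mine, purely for illustration; they are not the author's actual inputs.)

```python
# Illustrative sketch only: the prompt, placeholders and model name are
# assumptions for this example, not the author's actual inputs.
from openai import OpenAI

client = OpenAI()

dispute_prompt = (
    "You are the adjudicator. Decide the following dispute and give "
    "detailed reasons for your decision.\n\n"
    "Facts: <the agreed facts>\n"
    "Claimant's arguments: <summary>\n"
    "Respondent's arguments: <summary>\n"
)

decision = client.chat.completions.create(
    model="gpt-4",  # assumed model name, for illustration
    messages=[{"role": "user", "content": dispute_prompt}],
)

# The output typically reads as a decision followed by a list of reasons.
print(decision.choices[0].message.content)
```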
I disagree.
---
As we previously discussed,⁵ an LLM like ChatGPT is essentially a text-generating tool.
What this means is that when we ask an LLM for the reasons why it generated a certain output...
...it doesn't really give us the reasons, based on legal or computing logic, why it gave that output.
Rather, it treats that request for reasons as a prompt, and generates text as a response to that prompt.
In other words, it is generating text in response to the prompt asking for reasons; it is not setting out the reasons why it generated its previous response to the previous prompt.
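(In concrete terms, and continuing the sketch above: the request for reasons is just one more turn sent to the model, here again assuming the OpenAI Python client. The entire transcript, including the model's own earlier output, goes back in as plain text.)

```python
# Continuing the illustrative sketch above: the "request for reasons" is
# simply a new prompt. The whole conversation so far is re-sent as text.
follow_up = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": dispute_prompt},
        {"role": "assistant", "content": decision.choices[0].message.content},
        {"role": "user", "content": "List the reasons for your decision."},
    ],
)

# The model has no separate record of *why* it produced the earlier output
# to consult; it just generates fresh text in response to this new prompt.
print(follow_up.choices[0].message.content)
```

I'll illustrate further with an analogy.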
---
Suppose you have a pet parrot,⁶ who you train to ask for a cracker. Eventually, the parrot is able to utter the words "Polly wants a cracker" when prompted.⁷
Next, you train the parrot to respond to the question, "Why does Polly want a cracker?" You repeat, over and over, "Polly is hungry". Eventually, the parrot is able to utter those words when prompted.
Suppose then the parrot asks for a cracker. And you ask the parrot why it wants a cracker. The parrot answers "Polly is hungry."
The parrot isn't really telling you that the reason it wants a cracker is that it's hungry.
Rather, the parrot is giving you that response because it has been trained to do so.
Likewise, if (a) an LLM is prompted with a question and gives an answer, and (b) the LLM is then further prompted to list the reasons for its previous answer, the LLM isn't really listing the reasons why it generated its previous answer.
Rather, it's generating text in response to the prompt asking for reasons.
---
We'll explore some implications in part 9.
Disclaimer:
The content of this article is intended for informational and educational purposes only and does not constitute legal advice.
¹ My apologies if you've been awaiting this installment with bated breath, but I doubt that there are more than a handful of you.
² Part 1: https://www.linkedin.com/posts/khelvin-xu_robot-ai-llm-activity-7100325203108397056-Ghnn
Part 2: https://www.linkedin.com/posts/khelvin-xu_robot-llm-ai-activity-7102135406124548096-KPpB
Part 3: https://www.linkedin.com/posts/khelvin-xu_robot-llm-chatgpt-activity-7111997957616373760-vna5
Part 4: https://www.linkedin.com/posts/khelvin-xu_robot-llm-chatgpt-activity-7113371842815393792-2atP
Part 5: https://www.linkedin.com/posts/khelvin-xu_robot-llm-chatgpt-activity-7115184116307791872-4B7t
Part 6: https://www.linkedin.com/posts/khelvin-xu_robot-llm-chatgpt-activity-7118450078150770689-dvdt
Part 7: https://www.linkedin.com/posts/khelvin-xu_robot-llm-chatgpt-activity-7120261657506779137-ZsAq
³ (🔒) https://www.businesstimes.com.sg/opinion-features/robot-judges-not-question-legitimacy-choice.
⁴ Part 4: https://www.linkedin.com/posts/khelvin-xu_robot-llm-chatgpt-activity-7113371842815393792-2atP.
⁵ Part 1: https://www.linkedin.com/posts/khelvin-xu_robot-ai-llm-activity-7100325203108397056-Ghnn.
⁶ A real, physical parrot, as opposed to a stochastic parrot: https://en.wikipedia.org/wiki/Stochastic_parrot.
⁷ Pun intended.