Published on: 14 Mar 2024
3 min read
Article: https://www.businesstimes.com.sg/opinion-features/genie-s-out-bottle-when-ai-meets-politics
On the ailments of #AI.
I was asked to share some thoughts with Jun Yuan Yong of The Business Times on the dangers of AI.
"Director of law firm Covenant Chambers Khelvin Xu says that one approach to legislation could be taking on guidance from bodies such as the European Union, which is currently working on an AI Act.
'The benefit of this… is that it is likely to lead to more harmonisation of the Singapore approach with (those) taken by other jurisdictions,' he says, adding that the Republic stands to benefit if AI tools can be used seamlessly locally and in other areas."
I should also clarify that, as of the time of writing, the AI Act has been approved by the European Parliament,¹ so I expect various jurisdictions (including Singapore) to be paying even closer attention to the finalised rules.
---
Here's one more observation I shared, which didn't make it into the article, but which I think about a lot.
When we consider AI misuse, we often consider specific ills (e.g. voter disinformation,² commercial fraud,³ or the non-consensual generation of sexually explicit content⁴).
But there is a broader impact even in instances where AI is not misused...
...and that's the rise of a trust deficit.
So much of modern society and social interactions depend on our capacity to trust one another. For example, I am prepared to transfer money to a friend via PayNow because I trust that:
a) the person who has texted me is indeed my friend;
b) the telephone number given is correct;
c) when I log onto my iBanking app on my phone, my details will not be sent to bad actors via a Trojan horse; and
d) the transaction will not be somehow intercepted.
However, if my trust in these systems is undermined because I keep reading about AI-powered fraud, that will affect my willingness to use PayNow, potentially to the point where I am only prepared to deal in cold, hard cash. Imagine the inconvenience and inefficiency caused if a significant number of us were to think this way.
And that’s just the tip of the iceberg. In a world where we cannot be sure whether the person on a video call, on the other end of the line, or sending an email or message is who they say they are:
a) agreements and transactions will get done much more slowly and inefficiently, or not at all;
b) social and aid workers will have more difficulty reaching out to the underserved and indigent; and
c) the well-intentioned will be that much more sceptical about giving to good causes, for fear of being defrauded.
And I don't really have a good solution for this, I'm afraid. I'm a little sceptical that it's enough to simply say, "oh, let's just educate everyone, so that people know how to look out for AI-assisted scams, and they can then transact without fear!"
But...
...I suppose that's as good a place to start as any.
So - talk to the people around you about AI, y'all. Society's depending on you.
---
Disclaimer:
The content of this article is intended for informational and educational purposes only and does not constitute legal advice.
¹ https://www.cnbc.com/2024/03/13/european-lawmakers-endorse-worlds-first-major-act-to-regulate-ai.html.
² https://www.theguardian.com/us-news/2024/mar/14/ai-biden-robocall-lawsuit.
³ https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html.
⁴ https://www.theverge.com/2024/1/25/24050334/x-twitter-taylor-swift-ai-fake-images-trending.