On AI, the apocalypse, and actionable steps.

Published on: 10 May 2023 · 3 min read

#chatgpt
#openai
#ai

https://www.theguardian.com/technology/2023/may/09/techscape-artificial-intelligence-risk


Will AI doom us all?

Everyone, it seems, has an opinion on this. But few would profess to have a definitive answer.

I will happily confess that I can't predict the future. Heck, I don't even know what I'll be having for lunch tomorrow.

But rather than wallow in existential doom, I suggest one actionable tip for smart businesses to start reducing their AI-related risk.

And that's to have an AI use policy for your employees.¹

---

This is most relevant to ChatGPT, but it applies to any generative AI platform that produces output based on user inputs.

Specifically, employees should be warned against using sensitive or confidential information in their prompts.
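(Purely by way of illustration, and not part of any real policy or product: the sketch below shows, in a few lines of Python, what a very basic technical backstop for such a rule could look like - a hypothetical check that blocks a prompt containing flagged terms before it is ever sent to an external AI service. The function name, keyword list, and behaviour are all assumptions made up for this example.)

```python
# Hypothetical illustration only: a naive pre-flight check a company could
# bolt onto its internal tooling so that prompts containing flagged terms
# never reach an external AI service. Real data-loss-prevention tools are,
# of course, far more sophisticated.

FLAGGED_TERMS = [
    "confidential",
    "internal use only",
    "source code",         # placeholder categories; a real list would be
    "meeting transcript",  # tailored to the business
]

def check_prompt(prompt: str) -> str:
    """Return the prompt if it looks safe; raise an error if it contains flagged terms."""
    lowered = prompt.lower()
    hits = [term for term in FLAGGED_TERMS if term in lowered]
    if hits:
        raise ValueError(f"Prompt blocked by AI use policy; flagged terms found: {hits}")
    return prompt

# Example usage:
# check_prompt("Summarise this meeting transcript: ...")  # raises ValueError
# check_prompt("Draft a polite out-of-office reply.")     # passes
```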

OK, I know this may be gobbledygook for some of you. Let me illustrate with a real-life example.

Back in April, it was reported that Samsung employees had "leaked sensitive confidential company information to OpenAI’s ChatGPT on at least three separate occasions."²

The employees had entered, into the ChatGPT prompt, confidential code and the transcript for a meeting.

This is potentially a problem because:

"OpenAI says it may use data submitted to ChatGPT or other consumer services to improve its AI models. In other words, OpenAI holds onto that data unless users explicitly choose to opt-out. OpenAI specifically warns users against sharing sensitive information because it is 'not able to delete specific prompts.'"

In other words, Samsung's confidential code, and the contents of a meeting that was presumably not meant for public consumption, are now out in the wild.

---

Now, I know some of you are thinking: "Big deal! It's not like anyone can retrieve the contents of my prompts, right?"

Well, think again.

There's been at least one reported glitch which "led to some users viewing the headings of other parties' conversations. This has led to questions over privacy on the platform, with many people worried that their private information could end up getting inadvertently passed on through the tool."³

So the question is whether you can afford the risk of your confidential information leaking out via ChatGPT or similar platforms - and of facing commercial consequences, or worse still, civil or regulatory liability.⁴

---

So, should businesses ban their employees from using ChatGPT?

That's a much longer conversation. For now, I will just limit myself to saying that this may have the effect of throwing the baby out with the bathwater.

But, please -

Get an AI use policy in place, ASAP.⁵

Disclaimer:

The content of this article is intended for informational and educational purposes only and does not constitute legal advice.

Footnotes:

¹ I am not suggesting that having an AI policy is the be-all and end-all. But I do suggest that for businesses and companies who have done nothing so far about AI, this is as good a starting point as any.

² https://gizmodo.com/chatgpt-ai-samsung-employees-leak-data-1850307376.

³ https://www.grcworldforums.com/privacy-and-technology/fault-in-chatgpt-prompts-ai-privacy-concerns/8209.article.

⁴ For those who are interested in further reading, look here: https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=40a173ed7fdb.

⁵ And if you haven't the faintest idea what one should look like, get in touch.
