Do you need to think twice about uploading that draft sales pitch to ChatGPT or Bard, and asking their artificial intelligence to polish up your points?
And are there steps you should take to stop - or at least minimise - the amount of commercially sensitive data you release into the public domain via these new “generative” AIs?
The answer to both questions: Yup.
AI expert Ben Reid says he’s “hugely positive” about Bard and ChatGPT’s ability to give you help with your day-to-day workflow - in his case giving him a power-boost with data science.
But while your organisation’s cybersecurity team might be happy with you uploading most types of data to, say, AWS, Microsoft Azure or Google Cloud, the new AI platforms are at a much less mature stage of their lifecycle, with boundaries over transparency still being established, says Reid - the founding executive director of the AI Forum NZ, now a futurist with Memia and the author of Fast Forward Aotearoa.
“When it comes to some types of content, we have to be wary that it’s still very early days for both platforms. If you’re handling sensitive, confidential data, then you [have to be] really clear on the terms and conditions [under which] those tools are being provided to you. With the public training model, there has to be an expectation that you will breach confidentiality to some degree.”
Throwing the baby out with the bathwater
In June, RNZ revealed the Ministry of Business, Innovation and Employment (MBIE) had banned staff from using artificial intelligence technology such as ChatGPT since March, citing data and privacy risks. So has Te Whatu Ora (Health NZ) - although ironically the Department of Internal Affairs, which steers a lot of government agency tech usage policy, has not.
Overseas, all the major United States banks have banned ChatGPT. Samsung blacklisted the app in May after three instances of employees posting sensitive source code on to the platform (a popular usage for ChatGPT - and one that it’s very good at, by all accounts - is asking the AI to write or improve code). Apple banned ChatGPT and Bard over the risk of staff spilling trade secrets.
University of Auckland senior law lecturer and AI law expert Nikki Chamberlain told the Herald it was understandable for some organisations to put a pause on AI use until more was known about how information was shared and until New Zealand put some kind of regulation in place.
“It’s a new technology and we don’t know the consequences of it yet,” she said.
The Privacy Act touched on a number of issues around the use of personal data, she said, but the bigger picture was that there was nothing AI-specific on the books.
“There are no specific laws governing the use of AI. I do think it’s time for our lawmakers to start looking into this,” Chamberlain said.
The academic wants our politicians to look not just at generative AI (those that collate data from various sources to deliver text or pictures in response to questions) but also at systems that use algorithmic learning - as in the infamous Cambridge Analytica episode, which saw Facebook users’ responses to surveys harvested without their consent, then used to predict their voting preferences.
But as things stand, there is not only a lack of any AI-specific regulation or legislation, but internal MBIE documents say there are “no rules or guidelines for all government agencies about staff use [of] AI tools”, RNZ reported.
“We are seeing organisations trying to control the misuse of these tools by blocking access to them, but it’s a bit like ‘whack-a-mole’ with new generative AI tools being launched all the time, so it’s important to provide guidance on how to use them securely rather than thinking they are not being used at all,” Liz Knight, head of cybersecurity at Theta, told the Herald this week.
“The good news is that as AI is maturing, so are the options to protect your data. ChatGPT now allows you to disable [it] from using your chat history for [its] training model,” Knight added. (See how, below.)
“Even so, it’s recommended that you treat any GPT as a public tool and avoid inputting any sensitive data into the tool.
“The best approach is to use generative AI tools like any other systems or applications - in a manner that is legal, ethical, secure and compliant with relevant laws and regulations such as the Privacy Act, being careful not to input any personally identifiable information such as people’s names, addresses and contact details.”
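To make that last point concrete, here is a minimal sketch - in Python, using only the standard library - of the kind of scrubbing step an organisation might run over text before it goes anywhere near a public AI tool. The patterns are illustrative only; real-world PII detection needs far more than a couple of regular expressions:

import re

# Illustrative patterns only - real-world PII detection is much harder.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def scrub(text):
    # Replace anything matching a known pattern with a labelled placeholder.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(scrub("Email jane.doe@example.co.nz or call +64 21 123 4567"))
# -> Email [EMAIL REDACTED] or call [PHONE REDACTED]

Note that a filter like this catches obvious formats only - names and free-text context sail straight through - which is why Knight’s broader advice is to treat any GPT as a public tool.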
Privacy Commissioner warning
And mid-year, NZ Privacy Commissioner Michael Webster issued a note on what he called the potential risks associated with generative AIs like Google’s Bard, OpenAI’s ChatGPT (backed by billions from cornerstone investor Microsoft, early backer Elon Musk having bailed after an argument with management) and ChatGPT as used with Microsoft’s search engine Bing.
The guts of Webster’s cautionary note - which is being continuously updated on the Privacy Commissioner’s website - is that ChatGPT and Bard store your prompts, which range from simple questions to whole documents you’ve uploaded and asked to be, say, summarised into bullet points. They do this so the AIs can train themselves, honing their capability as they gather more and more information, and react to follow-up queries to sharpen their responses.
“There is a risk that personal information entered into the generative AI is retained or disclosed by the provider of the generative AI tool and used to continue training the model,” Webster said.
One side note: While machine learning is the main concern (given the sheer volume of activity), it’s not only software algorithms that are parsing your content. Read through Bard and ChatGPT’s privacy policies and you’ll see both also have human reviewers. Information is anonymised, so they can’t see who has asked a certain question or uploaded a certain document, and they work to strict guidelines. Still, if you feed some financial data into ChatGPT and ask it for a quick summary or to spot trends, be aware that one of around 900 OpenAI staffers in San Francisco could eyeball it.
Webster’s counterparts around the world have issued similar warnings. Last week, the US Federal Trade Commission opened an investigation into ChatGPT maker OpenAI and the way it treats data. In the US, lawsuits have been launched against Google and OpenAI, alleging they have copied copyrighted material. There are at least 10 legal actions launched by artists and authors (some of whom have had their works fed into AIs by third parties), but commercial information - indeed any information - goes into the stew as AIs build their “large language models” capable of human-like responses.
Limiting access - after a fashion
There is no “incognito” mode for Bard, ChatGPT or any of their peers. Put simply, there’s no way for them to evolve as AIs if everybody can opt out of sharing their questions, whatever text or images they upload, and their response to whatever the artificial intelligences generate.
You can minimise what you share, but you also have to be alert to the fact that there are a number of layers to the process.
For example, Bard gives you all sorts of options to manually delete your “activity” - including the last hour or the last day, or any unit of time. It can also be set to auto-delete all your activity every three, 18 or 36 months (18 months is the default).
But I had a nagging doubt about this policy, which was confirmed when I asked Bard, “Does deleting your Bard activity delete the information you’ve uploaded to Bard?”
The AI responded:
“No, deleting your Bard activity will not delete information you’ve uploaded to Bard. Deleting your Bard activity will only delete the following data:
Your IP address.
Your location data.
The prompts and responses you’ve given to Bard.
The time and date of your interactions with Bard.
“The information you’ve uploaded to Bard, such as your documents, images, and code, will not be deleted. This is because this information is used to train Bard and improve its performance.”
And an update to Bard’s terms and conditions, from six days ago, highlights that even when you delete your activity, not all activity is actually deleted, or at least not for 36 months. It reads:
“Bard conversations that have been reviewed or annotated by human reviewers are not deleted when you delete your Bard activity because they are kept separately and are not connected to your Google Account. Instead, they are retained for up to three years.”
You can tweak your activity settings from within Bard or, because it’s part of the Google family, from within your general Google privacy settings to control elements like geo-tracking (go to myactivity.google.com/product/bard).
Wrangling ChatGPT
ChatGPT is more binary. Click on your email address on the bottom left of your screen (when you’re on a computer).
Then click on Settings, then Data Controls, to see the option to opt out of ChatGPT saving your conversations and using them for training (with history off, new chats are retained for 30 days, then deleted).
As with Bard, deleting your activity does not delete any files you’ve uploaded. And because OpenAI offers a ChatGPT API (application programming interface) that lets third parties integrate its platform with their own software - and it’s very easy and low-cost to sign up to use - OpenAI says the handling of your data will vary from service to service.
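For context, a third-party integration boils down to a call like the one below - a minimal sketch in Python, assuming OpenAI’s official openai package (as it stood at the time of writing) and an API key from its developer platform; the model name and prompt are just examples:

import openai

openai.api_key = "sk-..."  # your secret API key - never hard-code this in production

# Anything placed in "content" is sent to OpenAI's servers, so the same
# caution about sensitive data applies here as in the chat interface.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Summarise this draft sales pitch in three bullet points: ..."},
    ],
)

print(response.choices[0].message.content)

Every app built on the API wraps a call like this, and each intermediary sees your text under its own terms and conditions - hence the variation in how your data is handled.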
Note that a change to ChatGPT’s privacy settings - unlike Bard’s - only applies to the device you’re using at the time. Update your ChatGPT data controls on your computer, and your phone and tablet will still be on your old settings until you update them individually too.
Finally, while both ChatGPT and Bard will use your conversation and any text or code you upload to train their artificial intelligences, and possibly as part of responses served up to other users, neither will sell your data to a third party.
Chris Keall is an Auckland-based member of the Herald’s business team. He joined the Herald in 2018 and is the technology editor and a senior business writer.