ChatGPT 2024 Best Practices
ChatGPT is a remarkable tool with enormous potential to change and optimise how we live, work, and play. At the same time, using advanced AI for everyday tasks raises security and privacy concerns. The following are our best practices for using ChatGPT and other AI programs securely while protecting your privacy.
Chat History & Activity
- Turn off chat history storage where the platform offers the option, to protect your privacy.
- Be aware that some platforms may retain new chats for a limited period.
- For platforms like Bard, log in to the respective site and follow its steps to delete chat activity or set it to be deleted by default.
- Bing users can open the chat-bot webpage, view their search history, and delete individual chats.
Reputable Platforms
- Trust reputable generative AI firms that prioritise user protection and take steps to safeguard privacy.
Personal Information
- Refrain from sharing sensitive information (Tax File Numbers etc.) with chat-bots.
- Exercise caution when discussing specific health conditions or financial information, since humans may review conversations.
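One practical way to apply this advice is to scrub obvious identifiers from text before pasting it into a chat-bot. The sketch below is illustrative only: the `scrub` helper and its regex patterns are hypothetical, cover only a few identifier shapes, and are no substitute for genuine PII-detection tooling.

```python
import re

# Hypothetical patterns for a few common sensitive identifiers.
# Real PII detection needs far more than a handful of regexes.
PATTERNS = {
    "TFN": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),      # Australian Tax File Number shape
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # simple email address shape
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # payment-card-like digit runs
}

def scrub(text: str) -> str:
    """Replace likely sensitive values with placeholders before sharing text."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(scrub("My TFN is 123 456 789, email me at jane@example.com"))
```

Running this prints the message with both values replaced by `[TFN REDACTED]` and `[EMAIL REDACTED]`; the original text never needs to reach the chat-bot.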
User Responsibility
- Recognise that the responsibility for data protection also lies with users.
- Pause before sharing sensitive information with any chat-bot.
Privacy Red Flags
- Avoid chat-bots that lack a privacy notice; its absence can indicate security issues.
- Exercise caution if a chat-bot requests excessive personal information.
Terms of Service
- Always double-check the terms of service and privacy policies of chat-bots to understand how user data will be used.
Expert Recommendations
- Heed experts’ advice to avoid sharing personal information with generative AI tools.
- Stay informed about potential risks and evolving best practices in the use of AI chat-bots.
Risk-Reward
- Consider the risk-reward aspect before inputting information into a chat-bot.
- Evaluate the accuracy of the AI’s responses and assess whether the potential benefits outweigh the risks.
Advanced Features
- Be cautious with advanced features that process personal emails to learn your writing style and tone (e.g., Google's Bard), as these carry additional security risks.