Last year the arrival of ChatGPT was the talk of the town, transforming how we use AI, interact with the internet, and even go about our daily lives, both personally and professionally. With our ear to the ground, we published many articles on ChatGPT: from how it works to phishing scams, and from its consequences for education to the OpenAI data breach. As 2024 gets under way, we thought we'd begin with a list of ChatGPT dos and don'ts. Read on for our take.
ChatGPT 2024 Best Practices
ChatGPT is a remarkable invention, with limitless potential to change and optimize how we live, work, and play. At the same time, using an advanced AI for everyday tasks comes with security and privacy concerns. The following are our best practices for using ChatGPT and other AI programs while keeping your data secure and your privacy protected.
Chat History & Activity
- Turn off chat history storage where the option is available, to protect your privacy.
- Be aware that even with history disabled, some platforms may retain new chats for a limited period.
- For platforms like Bard, log in to the respective site and follow the steps to have your chat activity deleted automatically by default.
- Bing users can open the chatbot webpage, view their search history, and delete individual chats.
Reputable Platforms
- Trust reputable generative AI firms that prioritize user protection and take steps to safeguard privacy.
Personal Information
- Refrain from sharing sensitive information (Social Security numbers, credit card details, etc.) with chatbots; one simple way to scrub such data from a prompt is sketched after this list.
- Exercise caution when discussing specific health conditions or financial information, since human reviewers may read your conversations.
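To make the advice above concrete, here is a minimal sketch, in Python, of how you might scrub obvious identifiers such as Social Security and card numbers from a prompt before pasting it into a chatbot. The patterns and function names are our own illustrative assumptions rather than part of any particular chatbot or product, and a couple of regular expressions are no substitute for a proper data-loss-prevention tool.

```python
import re

# Illustrative patterns for two common kinds of sensitive data (assumed formats).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")      # e.g. 123-45-6789
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")    # 13-16 digit card numbers

def redact_sensitive(text: str) -> str:
    """Replace likely SSNs and card numbers with placeholder tags before sending."""
    text = SSN_PATTERN.sub("[REDACTED-SSN]", text)
    text = CARD_PATTERN.sub("[REDACTED-CARD]", text)
    return text

if __name__ == "__main__":
    prompt = "My SSN is 123-45-6789 and my card is 4111 1111 1111 1111, can you help?"
    print(redact_sensitive(prompt))
    # -> My SSN is [REDACTED-SSN] and my card is [REDACTED-CARD], can you help?
```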
User Responsibility
- Recognize that the responsibility for data protection also lies with users.
- Pause before sharing sensitive information with any chatbot.
Privacy Red Flags
- Avoid chatbots that lack a privacy notice; the absence of one can signal deeper security issues.
- Exercise caution if a chatbot requests excessive personal information.
Terms of Service
- Always double-check the terms of service and privacy policies of chatbots to understand how user data will be used.
Expert Recommendations
- Heed experts’ advice to avoid sharing personal information with generative AI tools.
- Stay informed about potential risks and evolving best practices in the use of AI chatbots.
Risk-Reward
- Consider the risk-reward aspect before inputting information into a chatbot.
- Evaluate the accuracy of the AI’s responses and assess whether the potential benefits outweigh the risks.
Advanced Features
- Be cautious with advanced features that process your personal emails to learn your writing style and tone (e.g., Google’s Bard), given the potential security risks involved.
Protecting Your Identity and Personal Info
Trend Micro is here to have your back in 2024. We encourage readers to head over to our new ID Protection platform, which has been designed to counter the security and privacy threats we all now face. With ID Protection, you can:
- Check whether your data (email address, phone number, password, credit card) has been exposed in a leak or is up for grabs on the dark web;
- Secure your social media accounts with our Social Media Account Monitoring tool, which provides a personalized report;
- Get strong, tough-to-hack password suggestions from our advanced AI (safely stored in your Vault);
- Enjoy a safer browsing experience, as Trend Micro checks websites and blocks trackers;
- Receive comprehensive remediation and insurance services, with 24/7 support.
Offering both free and paid services, ID Protection will ensure you have the best safeguards in place, with 24/7 support available to you through one of the world’s leading cybersecurity companies. Trend Micro is trusted by 8 of the top 10 Fortune 500 companies, and we’ll have your back, too.
Why not give it a go today? As always, we hope this article has been an interesting and useful read. If so, please do SHARE it with family and friends to help keep the online community secure and informed, and consider leaving a like or comment below. Here’s to a secure 2024!