AI Chatbots Know More About You Than You Think: The Risks of Using AI Chatbots & How To Protect Your Information


Artificial intelligence (AI) chatbots, such as ChatGPT or Google Gemini, are becoming increasingly common in today’s society, helping us complete daily tasks and providing quick access to information. These chatbots enhance efficiency in various settings, be it at home, work, or school. However, each interaction with an AI chatbot is stored in the company’s database, along with a wealth of other personal information that users are often unaware they are sharing.

When someone uses an AI chatbot, the information they type into it is stored in the chatbot’s database. Any personal details entered to complete a task are therefore saved and can be examined by the company. Beyond what users type in, these chatbots also collect other user data automatically. According to OpenAI’s privacy policy, its popular AI chatbot ChatGPT collects users’ account information, IP addresses, current locations, device information, browser types, browser settings, browser cookie data, and more. In other words, the service is collecting not only your interactions with it but also details about your device, your browser, and other websites you visit in that browser. Companies such as OpenAI state that they use this information for research and product improvement, but most people would not want this data revealed, and it is at risk of improper use. All of the data collected by these services is subject to review by the company and could also be exposed by a cyberattack.

There are several tactics individuals can use to reduce the amount of information these companies receive when using their AI chatbot services. According to J.P. Morgan Private Bank, users can reduce the exposure of their information by doing the following:

  • Use reputable AI chatbots. Avoid using experimental or beta testing programs.
  • Create a dedicated email address for use with AI chatbots. Do not reuse the email you use for personal banking, work, or even social media.
  • Use a reliable VPN to mask your IP address and exact location.
  • Reject any request by the application to store tracking cookies.
  • Do not disclose sensitive information when chatting with the bots. If you do not want the information to be public, do not provide it to the chatbot.
  • Clear all browsing data on your browser, like search history and cookies, before using the application.
  • Log out of the application after each use.
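The advice to never disclose sensitive information can even be partially automated. As an illustration only (the patterns and helper below are hypothetical examples, not a complete PII detector or any official tool), a short script can scrub common identifiers from a prompt before you paste it into a chatbot:

```python
import re

# Illustrative patterns for a few common identifiers. Real PII takes many
# more forms than this; these regexes are deliberately simple sketches.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace each matched identifier with a [REDACTED-<label>] tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

prompt = "Email my landlord at jay@example.com or call 555-867-5309."
print(scrub(prompt))
```

Running the prompt through such a filter first means that even if the chat log is reviewed by the company or exposed in a breach, the sensitive values themselves were never sent.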

By using these techniques before, during, and after chatbot sessions, you reduce the amount of data these companies can collect on you. That, in turn, limits what can be exposed to the company itself or to hackers attempting to steal your data. By utilizing these tactics, you can protect yourself and your personal information from improper use.



About Author

Jason “Jay” Batavia is a senior at Fordham College at Rose Hill who is graduating in May 2024. After obtaining his Bachelor of Science degree in Computer Science, he plans to pursue a full-time career in cybersecurity. Currently, Jay is an IT Security Intern with the Information Security and Assurance department at Fordham University. Through constant learning and research, Jay hopes to share his growing knowledge of cybersecurity with as many people as possible.
