The City of New York is in the beta phase of integrating AI tools into its websites. The initiative aims to improve the user experience and let residents quickly find information that previously took considerable time to track down. With this tool, however, come the many data privacy and security threats associated with AI chatbots, including data collection, misinformation, and link manipulation attacks. The first threat arises when users enter information specific to themselves or their businesses into the chatbot. If that data is not handled correctly, private information may be unintentionally exposed to other users, depending on whether and how user inputs are fed back into the model's training.
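One common safeguard against this kind of leakage is to redact personally identifiable information from user inputs before they are logged or reused. The sketch below is a hypothetical illustration of that idea; the patterns and function names are simplified assumptions for the example, not a description of how the city's chatbot actually works.

```python
import re

# Hypothetical illustration: scrub common PII patterns from a user's
# message before it is retained or used for any model training.
# These patterns are simplified examples, not production-grade.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace recognizable PII with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

# The stored copy no longer contains the user's contact details.
message = "Reach me about my bakery permit at jane@example.com or 212-555-0123."
print(redact_pii(message))
# -> "Reach me about my bakery permit at [REDACTED EMAIL] or [REDACTED PHONE]."
```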
The second threat occurs when the chatbot is trained on inaccurate data. The article states that the training data will be limited to specific NYC government-run sites, but there is no guarantee that the chatbot won't hallucinate and provide incorrect information. One remedy for this threat is training the chatbot to cite its sources, typically by having it link to the page where it found the answer so the user can confirm the validity of the information. This remedy, however, introduces the third threat: attackers may be able to inject illegitimate links that direct users to sites hosting malware (a minimal validation sketch appears below).

While AI may well transform the user experience, proper measures must be taken to protect users from the privacy and security risks associated with chatbots and other AI tools. Organizations must understand those risks before building AI solutions into their websites.
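One plausible defense against link manipulation is to validate every citation link against an allowlist of official domains before it is shown to the user. The sketch below assumes a hypothetical allowlist (`nyc.gov`, `ny.gov`) for illustration; it is not the city's actual policy or implementation.

```python
from urllib.parse import urlparse

# Hypothetical illustration: before displaying a citation link, verify
# that it points at an approved government domain. This allowlist is
# an assumption for the example, not the city's actual configuration.
ALLOWED_DOMAINS = {"nyc.gov", "ny.gov"}

def is_trusted_citation(url: str) -> bool:
    """Accept only HTTPS links whose host is an allowed domain or a subdomain of one."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    host = (parsed.hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)

# A manipulated citation fails the check and can be dropped before display.
print(is_trusted_citation("https://www.nyc.gov/site/dca/businesses"))     # True
print(is_trusted_citation("https://nyc-gov.example-malware.biz/permits")) # False
```

A check like this only limits where citations can point; it does not verify that the cited page actually supports the chatbot's answer, so it complements rather than replaces the user's own review of the source.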