Skip to main content Skip to main menu Skip to footer

AI Chat Safety

AI Chat Safety

Disminuir el tamaño del texto Aumentar el tamaño del texto

Page Article

It's important to remain vigilant and take proactive steps to protect your privacy and security online when using a chatbot.  AI chatbots can be hijacked and used to retrieve sensitive user information. To protect yourself, you should be cautious of sharing sensitive information with chatbots and verify the authenticity of any requests for personal or financial information before sharing it. 
  • Avoid opening tabs with sensitive information, such as online banking or personal email, while using a chatbot.
  • Close any tabs with sensitive information before engaging with a chatbot and avoid leaving multiple tabs open while using a chatbot.
  • Be cautious of suspicious or unfamiliar websites, especially those that prompt you to input personal or financial information. 
  • Verify the authenticity of any requests for personal or financial information, especially those received through a chatbot or other automated system.
  • Pay attention to the URLs of the websites you visit and ensure they are legitimate and secure, with "https" in the URL and a padlock icon in the browser address bar.
  • Be cautious of any unexpected or suspicious pop-up windows or messages that prompt you to take action or input personal or financial information.
  • Be wary of clicking on links or downloading attachments from unknown or suspicious sources.
  • Use up-to-date antivirus software and keep your operating system and software programs updated with the latest security patches.
  • Check the font size and formatting of web pages to ensure there are no hidden or invisible prompts or links.
  • Limit the amount of personal information you share online, especially on social media or other public forums.
  • Use strong, unique passwords for all of your online accounts and enable two-factor authentication whenever possible.
  • Educate yourself on the latest phishing and social engineering tactics used by hackers to gain access to your personal and financial information.
  • Be cautious of sharing any sensitive or personal information with a chatbot, such as social security numbers or credit card information.
  • Use a privacy screen or shield to protect your screen from prying eyes and potential eavesdroppers.
  • Regularly clear your browser history and cache to minimize the amount of sensitive information stored on your device.
  • Use a reputable ad-blocker to reduce the risk of malicious ads that may contain hidden scripts or malware.
  • Be aware of the privacy policies of the chatbot and any third-party services it may be integrated with, and only share information with trusted sources.

Here are a few scenarios of how AI chatbots can be hijacked and used to retrieve sensitive user information:

  • Poisoned Page: A hacker slips a prompt in zero-point font—that is, invisible—into a web page that will likely be used by the chatbot to respond to a user's question. Once that "poisoned" page is retrieved in conversation with the user, the prompt is quietly activated without the need for further input from the user.
  • Phishing Attacks: Hackers can create fake chatbots that appear to be from a trusted source, such as a bank or social media platform, and use social engineering tactics to trick users into sharing their login credentials or other sensitive information.
  • Malicious Code Injection: Hackers can inject malicious code into a chatbot, allowing them to access sensitive user information, such as credit card numbers or social security numbers.
  • Replay Attacks: Hackers can record conversations between users and chatbots and replay them later to extract sensitive information.
  • Fake Chatbots: Hackers can create fake chatbots that appear to be from a trusted source, such as a customer service representative, and use them to request sensitive information from users.
  • Man-in-the-Middle Attacks: Hackers can intercept messages between users and chatbots and alter them to extract sensitive information.
  • Chatbot Impersonation: Hackers can create chatbots that impersonate real chatbots, such as those used by banks or other financial institutions and use them to steal sensitive information from users.
  • Social Engineering: Hackers can use social engineering tactics, such as posing as a friend or acquaintance, to gain access to sensitive information from users.



Page Footer has no content