Researchers Hack ChatGPT Memories and Web Search Features

Tenable researchers uncovered seven new vulnerabilities in ChatGPT that can be exploited to steal user data and carry out other malicious activity, involving features such as memory, open_url, and url_safe. These findings highlight ongoing security challenges with prompt injection, website content analysis, and data exfiltration in large language models. #ChatGPTVulnerabilities #PromptInjection

Key Points

  • Tenable researchers identified seven new security flaws in ChatGPT related to feature misuse and prompt injection.
  • The ‘bio’ feature, or memories, can be manipulated to exfiltrate or inject data into ChatGPT’s memory.
  • SearchGPT can execute malicious prompts embedded in analyzed websites, leading to potential data breaches.
  • Attackers can leverage manipulated Bing URLs to bypass safety checks and exfiltrate user data.
  • Some vulnerabilities persist even in the latest GPT-5 model, indicating ongoing security challenges.

Read More: https://www.securityweek.com/researchers-hack-chatgpt-memories-and-web-search-features/