Tenable researchers uncovered seven new vulnerabilities in ChatGPT that can be exploited for data theft and malicious activities, involving features like memory, open_url, and url_safe. These findings highlight ongoing security challenges with prompt injection, website content analysis, and data exfiltration in large language models. #ChatGPTVulnerabilities #PromptInjection
Keypoints
- Tenable researchers identified seven new security flaws in ChatGPT related to feature misuse and prompt injection.
- The βbioβ feature, or memories, can be manipulated to exfiltrate or inject data into ChatGPTβs memory.
- SearchGPT can execute malicious prompts embedded in analyzed websites, leading to potential data breaches.
- Attackers can leverage manipulated Bing URLs to bypass safety checks and exfiltrate user data.
- Some vulnerabilities persist even in the latest GPT-5 model, indicating ongoing security challenges.
Read More: https://www.securityweek.com/researchers-hack-chatgpt-memories-and-web-search-features/