GitLab Duo Vulnerability Enabled Attackers to Hijack AI Responses with Hidden Prompts

GitLab Duo Vulnerability Enabled Attackers to Hijack AI Responses with Hidden Prompts

Cybersecurity researchers have identified a security flaw in GitLabโ€™s AI assistant Duo, stemming from an indirect prompt injection vulnerability that could lead to source code theft and malicious HTML injections. This flaw illustrates the risks of AI integrations in development tools, potentially exposing sensitive data and system functionalities. #GitLabDuo #PromptInjection

Keypoints

  • An indirect prompt injection flaw was found in GitLab Duoโ€™s AI assistant, allowing data exfiltration and manipulation.
  • Attackers can embed malicious instructions within code comments, merge requests, or documentation, exploiting the AI systemโ€™s context analysis.
  • The vulnerability could lead to leaks of private source code, confidential project details, and even trigger HTML-based attacks on usersโ€™ browsers.
  • GitLab addressed the security issue following responsible disclosure, highlighting the risks of integrating AI into development workflows.
  • The report underscores broader risks in AI systems, including jailbreak techniques and hallucinations that threaten data security and model reliability.

Read More: https://thehackernews.com/2025/05/gitlab-duo-vulnerability-enabled.html