New Jailbreaks Allow Users to Manipulate GitHub Copilot

New Jailbreaks Allow Users to Manipulate GitHub Copilot
Summary: Researchers from Apex have identified two vulnerabilities in GitHub’s Copilot AI coding assistant that allow for bypassing security measures and imposing subscription fees. The first vulnerability involves manipulating conversation dynamics within code, while the second method reroutes traffic through a proxy server, granting unauthorized access to OpenAI’s models. GitHub acknowledges the need for improved safety measures, but characterizes these findings as potential abuses rather than vulnerabilities.

Affected: GitHub Copilot

Keypoints :

  • Two methods discovered for exploiting Copilot: embedding chat in code and using a proxy server.
  • Researchers can manipulate Copilot’s responses by altering prompts to generate malicious outputs.
  • Accessing OpenAI models without limitations poses significant risks, including potential privacy violations.

Source: https://www.darkreading.com/vulnerabilities-threats/new-jailbreaks-manipulate-github-copilot