Malicious npm Package Uses Hidden Prompt and Script to Evade AI Security Tools

Cybersecurity researchers have uncovered a malicious npm package called eslint-plugin-unicorn-ts-2 that aims to interfere with AI-driven security tools and exfiltrate sensitive information. This development highlights the evolving tactics of threat actors who are now targeting AI analysis and leveraging underground markets for malicious language models. #eslint-plugin-unicorn-ts-2 #AI-manipulation

Key Points

  • A malicious npm package named eslint-plugin-unicorn-ts-2 was uploaded to the npm registry in February 2024 and has accumulated nearly 19,000 downloads.
  • The package contains embedded prompts and post-install hooks designed to exfiltrate environment variables such as API keys and credentials.
  • Threat actors aim to manipulate AI-based security scanners by embedding misleading prompts that influence decision-making.
  • Cybercriminals are also turning to underground markets that sell malicious large language models (LLMs) as offensive hacking tools.
  • Malicious LLMs can simplify cyberattacks, automate tasks, and reduce the skill level needed for complex cybercrime activities.

Read More: https://thehackernews.com/2025/12/malicious-npm-package-uses-hidden.html