Black Duck: Balancing AI Usage and Risk in 2025

Black Duck’s “The Global State of DevSecOps: Balancing AI Usage and Risk in 2025” finds that organizations have achieved high deployment velocity but are accumulating security debt because of manual processes, tool sprawl, and overwhelming false positives that slow development. The report also describes AI as a double-edged sword—widely adopted and improving secure coding for many, yet introducing new risks and shadow-AI governance gaps that demand developer-centric workflow integration and formal AI governance. #BlackDuck #GitHubCopilot

Key Points

  • Typical structure: a Table of Contents, an Executive Summary with headline findings, a “Sec lags behind Dev and Ops” section on maturity gaps, a detailed “AI Disruption” section, Recommendations and Outlook, “How Black Duck Can Help” (product/solution framing), and appendices with full survey questions and respondent demographics.
  • What each major section covers: Executive Summary distills top takeaways; “Sec Lags” analyzes velocity, automation maturity, tool sprawl, and alert noise; AI Disruption documents AI adoption, shadow AI, and risk vs. benefit; Recommendations give role-based actions; the vendor section maps product responses to identified problems.
  • Survey methodology: >1,000 global software and security professionals surveyed by Censuswide (July–August 2025), representing multiple geographies, roles (developers to CISOs), company sizes, and industry verticals.
  • Deployment cadence: 59.74% of organizations deploy critical application code daily or multiple times per day, establishing velocity as the new baseline for modern delivery.
  • Automation maturity gap: 45.56% of organizations primarily use manual processes to ensure new code is added to application security testing (AST) queues, even among teams that deploy multiple times per day (a minimal automation sketch follows this list).
  • Coverage shortfall: 61.64% of organizations test 60% or less of their application portfolio, creating a growing and often invisible security debt with each release.
  • Tool sprawl and parity of AST types: no single AST type dominates—SCA (34.17%), DAST (32.77%), API testing (31.87%), IaC scanning (30.47%), and SAST (30.27%) are all commonly used—leading to fragmented tool portfolios.
  • Noise and false positives: 71.63% of respondents report that a significant portion (21–60%) of security alerts is noise (duplicates or false positives), which undermines ROI and trust in tooling (see the deduplication sketch after this list).
  • Alert fatigue and operational drag: 81.22% report that application security testing slows development; among fully manual teams, 49% say it severely slows delivery, reinforcing the speed-vs-security dilemma.
  • AI adoption intensity: 43.66% of professionals use AI coding assistants frequently or constantly, and AI is deeply embedded in developer workflows.
  • Open-source AI model use: nearly 97% of organizations use open-source AI models (e.g., from Hugging Face) in the software they build, increasing supply-chain and model-governance considerations (a simple inventory sketch follows this list).
  • Shadow AI risks: 10.69% of respondents admit to using AI coding assistants without official permission or monitoring, creating unmanaged compliance, IP, and security exposure.
  • AI paradox (risk vs. benefit): 56.55% agree AI coding assistants introduce new security risks, while 63.33% say AI has tangibly improved their ability to write more secure code—illustrating simultaneous trust and concern.
  • Confidence disconnect: 88.81% of respondents say they are confident they can handle AI-introduced security issues, and 93.90% are confident managing open-source license risks from AI-generated code, suggesting overconfidence relative to known coverage/noise problems.
  • Primary improvement priority: 27.27% identify “better development workflow integration” as the single top priority for improving AST capability—emphasizing developer-centric solutions over more point tools.
  • Recommendations for leadership: establish formal AI governance and policies, rationalize and consolidate AST toolchains to reduce redundancy and noise, and reallocate investment toward integrated, developer-native platforms and outcome metrics such as mean time to remediate (a metric sketch follows this list).
  • Recommendations for practitioners: push for tools embedded in IDEs and CI/CD, quantify the time cost of triaging noisy alerts to build a business case for consolidation (a cost sketch follows this list), and lead secure AI enablement with guardrails that permit safe innovation.
  • Recurring themes and implications: lopsided DevSecOps maturity (Sec trailing Dev/Ops), toolchain fragmentation causing noisy outputs and triage overload, and urgent need to embed security into native developer workflows to preserve velocity while reducing risk.
  • Predicted near-term shifts: rapid growth in AI governance tooling and visibility features, a widening developer-centric skills gap in security-as-code and CI/CD integration, and a budgetary shift from buying more point solutions to optimizing/deduplicating toolchains and investing in unified posture platforms.
  • Impactful takeaways: security debt compounds with frequent releases, the solution is not simply more scanners but workflow-integrated detection/prioritization/remediation, and organizations must treat AI both as a force multiplier and a governed risk vector.
  • How Black Duck frames the response: promote a platform-based approach to unify AST results, deduplicate and prioritize findings, embed security into the developer workflow, track open-source AI models, and provide governance and IP protection against shadow AI.
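
To make the automation gap concrete, the Python sketch below shows the kind of automated AST queueing the report contrasts with manual processes: on each commit, changed files are discovered with git and posted to a scan queue. The SCAN_QUEUE_URL endpoint and its JSON payload are hypothetical, not Black Duck's API.

    # Minimal sketch (assumptions throughout): automatically enqueue changed
    # code for AST on every commit, instead of waiting for a manual request.
    # SCAN_QUEUE_URL and its payload are hypothetical, not a real vendor API.
    import json
    import os
    import subprocess
    import urllib.request

    SCAN_QUEUE_URL = os.environ.get("SCAN_QUEUE_URL", "https://ast.example.com/queue")

    def changed_paths(base="HEAD~1", head="HEAD"):
        """List files touched by the latest commit, via git diff."""
        out = subprocess.run(
            ["git", "diff", "--name-only", base, head],
            capture_output=True, text=True, check=True,
        )
        return [p for p in out.stdout.splitlines() if p]

    def enqueue_scan(paths):
        """POST the changed paths so SAST/SCA runs without human queueing."""
        body = json.dumps({"paths": paths}).encode()
        req = urllib.request.Request(
            SCAN_QUEUE_URL, data=body,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

    if __name__ == "__main__":
        paths = changed_paths()
        if paths:
            enqueue_scan(paths)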
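
The deduplication theme can be sketched just as briefly: when several tools report the same issue, fingerprinting findings on (file, rule, line) collapses duplicates into a single record with a list of reporting tools. The data and field names below are illustrative; real tool exports differ per vendor.

    # Minimal sketch: collapse duplicate alerts reported by multiple AST tools
    # by fingerprinting on (file, rule, line). Sample data is invented.
    from collections import defaultdict

    findings = [
        {"tool": "sast-a", "file": "app/login.py", "rule": "sql-injection", "line": 42},
        {"tool": "sast-b", "file": "app/login.py", "rule": "sql-injection", "line": 42},
        {"tool": "sca", "file": "requirements.txt", "rule": "CVE-2024-0001", "line": 3},
    ]

    def dedupe(findings):
        """Group findings sharing a fingerprint; keep one canonical record each."""
        buckets = defaultdict(list)
        for f in findings:
            buckets[(f["file"], f["rule"], f["line"])].append(f)
        unique = []
        for group in buckets.values():
            keep = dict(group[0])
            keep["reported_by"] = sorted(f["tool"] for f in group)
            unique.append(keep)
        return unique

    unique = dedupe(findings)
    print(f"{len(findings)} raw alerts -> {len(unique)} unique findings")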
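
For the open-source AI model finding, a naive inventory step is to scan a codebase for Hugging Face-style from_pretrained() calls. This is an assumed heuristic for illustration; it misses models loaded indirectly or outside Python.

    # Minimal sketch: naive inventory of open-source AI models referenced in a
    # Python codebase, matching from_pretrained("org/model") call sites.
    import pathlib
    import re

    PATTERN = re.compile(r"""from_pretrained\(\s*["']([\w\-./]+)["']""")

    def inventory(root="."):
        """Collect model identifiers found in *.py files under root."""
        models = set()
        for path in pathlib.Path(root).rglob("*.py"):
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            models.update(PATTERN.findall(text))
        return models

    if __name__ == "__main__":
        for model in sorted(inventory()):
            print(model)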
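
Mean time to remediate, the outcome metric the leadership recommendations cite, is simple arithmetic over finding open and close timestamps, as in this sketch with invented sample data.

    # Minimal sketch: mean time to remediate (MTTR) from finding open/close
    # timestamps. The two records are invented sample data.
    from datetime import datetime

    FMT = "%Y-%m-%dT%H:%M"
    remediated = [
        {"opened": "2025-07-01T09:00", "closed": "2025-07-03T17:00"},
        {"opened": "2025-07-02T10:00", "closed": "2025-07-09T10:00"},
    ]

    def mttr_days(findings):
        """Average open-to-close interval, in days."""
        total = sum(
            (datetime.strptime(f["closed"], FMT)
             - datetime.strptime(f["opened"], FMT)).total_seconds()
            for f in findings
        )
        return total / len(findings) / 86400

    print(f"MTTR: {mttr_days(remediated):.1f} days")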
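
Finally, the practitioner recommendation to quantify triage waste reduces to back-of-envelope arithmetic; every input below is an assumption chosen for illustration, with only the noise range taken from the report.

    # Minimal sketch: back-of-envelope cost of triaging noisy alerts, to build
    # the consolidation business case. All inputs are assumed figures.
    alerts_per_week = 500
    noise_rate = 0.40          # within the 21-60% noise range the report cites
    minutes_per_triage = 10
    loaded_hourly_rate = 95    # assumed fully loaded engineer cost, USD

    wasted_hours_per_year = alerts_per_week * noise_rate * minutes_per_triage / 60 * 52
    wasted_cost_per_year = wasted_hours_per_year * loaded_hourly_rate

    print(f"Noise triage: {wasted_hours_per_year:,.0f} hours/year "
          f"(~${wasted_cost_per_year:,.0f}/year)")
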
Source: Awesome Annual Security Reports - The reports in this collection are limited to content which does not require a paid subscription, membership, or service contract. (https://github.com/jacobdjwilson/awesome-annual-security-reports/)

Download Report from GitHub