DAST for LLM-augmented applications

Dynamically test large language model (LLM) features in your application for sensitive information disclosure, prompt injection, data exfiltration, tool abuse, content policy violations, and more before attackers exploit them. Configure AppScan to target chat endpoints, retrieval-augmented generation (RAG) pipelines, and other LLM components, and then review reproducible findings with full transcripts and remediation guidance.

Note: DAST for LLM-augmented applications is an innovative feature; its workflow may change without notice. While it is currently included with an AppScan Standard license, future access may necessitate the procurement of a separate license.

Large language models (LLMs)

LLMs are neural networks trained on extensive text corpora to understand and generate natural language for tasks such as chat, summarization, and code generation.

Integrations with external tools and data sources (for example, retrieval-augmented generation (RAG) and plugins) expand capabilities, but also introduce risks, including prompt injection, sensitive data exposure, and unintended actions. Robust testing and controls are required.

AppScan DAST for LLM-augmented applications

LLM vulnerabilities can expose sensitive data, trigger unauthorized tool or API actions, manipulate outputs, and disrupt services. These issues undermine security, reliability, and compliance.

For a detailed understanding of key risk categories and attack patterns, see the OWASP Top 10 for LLM Applications. This resource outlines critical threats such as prompt injection, data exfiltration, training data poisoning, and model abuse.

AppScan DAST for LLM-augmented applications helps address these security risks by enabling configuration and testing for LLM-based vulnerabilities. This allows for the validation of defenses and the prevention of exploitation. By enabling LLM in DAST scans, potential risks can be identified and remediated early in the development cycle.

Coverage summary

AppScan DAST exercises your LLM workflows end-to-end and inspects behavior and responses to uncover issues, including:

  • Prompt injection and jailbreak attempts

  • Sensitive data disclosure and data exfiltration

  • Function/tool-calling abuse and unauthorized actions

  • Retrieval-augmented generation (RAG) threats, including retrieval manipulation

  • Datastore access misconfigurations

  • Code execution and shell command injection

Limitations

  • LLM outputs are non-deterministic; reruns may produce different results.
  • Provider-side rate limits and safety filters can affect coverage.