Configure and run an LLM scan

Configure provider access, capture a representative LLM sequence, enable required capabilities, and run a scan in a controlled test environment.

Before you begin

Ensure you have an Azure OpenAI account.

Procedure

  1. Set up provider access and keys. See Configure provider access.
  2. Enable LLM configuration and record a representative LLM sequence. See Configuring LLM to validate LLM tests.
  3. If the LLM domain differs from the starting URL, add it to the "Domains to be tested" list.
  4. Optional: If your workflow requires authentication, enable login before sequence playback.
  5. Start the scan and monitor progress. Pause or stop if unexpected impacts occur in the test environment.
  6. Review findings, evidence, and remediation guidance. Reproduce issues using captured transcripts and prompts.
  7. Assess LLM risks and be compliance ready with out-of-the-box reports.

Results

The scan identifies vulnerabilities and provides evidence and guidance for remediation. AppScan presents findings with evidence to streamline triage under Issue details pane. Issue details page showing the Issue information of an LLM vulnerability
  • Issue information:
    • Risk classification and severity.
    • LLM test interaction displays the conversation which led AppScan to raise vulnerability.
    • Impacted LLM vulnerability identified by test name.
  • LLM interaction:
    • History of all prompts and responses.
      Note: For other issues this is the Request\Response tab.
  • How to fix:
    • Remediation guidance and references.
    • Clear reproduction steps and example prompts.

To filter LLM vulnerabilties, type the prefix “llm” in the search issues bar.

LLM scan showing LLM issue in search result

What to do next

You can generate the OWASP Top 10 for LLM Applications 2025 report.