Configuring an LLM to validate LLM tests
Configure AppScan to dynamically test Large Language Model (LLM) features in your applications for risks such as sensitive information disclosure, prompt injection, data exfiltration, tool abuse, and content policy violations. Target chat endpoints, retrieval-augmented generation (RAG) pipelines, and other LLM components, then review reproducible findings with LLM interaction history and remediation guidance.
| Setting | Details |
|---|---|
| LLM configuration enabled | Use the toggle to enable or disable LLM scanning for applications with integrated LLMs. |
| Configure OpenAI | To scan and report LLM risks, you must configure the OpenAI endpoint and API key. For more information, see Configuring Azure OpenAI. A minimal connectivity check is sketched after this table. |
| Record LLM sequence | Navigate to your LLM service URL. Enter "test" as the prompt and submit. You can add additional prompts as needed. When you are finished, stop the recording. You can record with the AppScan embedded browser. If you encounter issues, you can use an external browser, provided you have enabled it via Tools > Options > Use external browser. |
| Edit the sequence | AppScan automatically detects the roles: the Prompt, Submit, and Response fields. The chat page sketch after this table shows what these roles typically map to. |
| Run analyze | After you fix the playback, you can select Run analyze to automatically detect the roles. |
| Log in before sequence play | By default, AppScan selects this checkbox when you choose the Log in and then record option. |
| Advanced options | |
| Connected to a database | Provide the name of the database table the LLM service is connected to so that AppScan can fully map and test the service's database attack surface. AppScan uses this information to simulate injection attacks and identify vulnerabilities that could allow unauthorized data access. A hypothetical prompt-to-SQL sketch after this table illustrates this attack surface. |
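
A working endpoint and key are easiest to troubleshoot outside the scanner. The following is a minimal connectivity sketch, assuming Python with the requests package and a deployed Azure OpenAI chat model; the resource name, deployment name, and API version shown are placeholders, not values AppScan prescribes.

```python
import os

import requests

# Placeholders: substitute your own Azure OpenAI resource, deployment, and key.
ENDPOINT = "https://YOUR-RESOURCE.openai.azure.com"
DEPLOYMENT = "YOUR-DEPLOYMENT"   # name of your deployed chat model
API_VERSION = "2024-02-01"       # adjust to an API version your resource supports
API_KEY = os.environ["AZURE_OPENAI_API_KEY"]

url = f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"

# One trivial prompt is enough to confirm the endpoint and key are accepted.
resp = requests.post(
    url,
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={"messages": [{"role": "user", "content": "test"}], "max_tokens": 5},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If this request succeeds, the same endpoint and key should be accepted in the Configure OpenAI settings.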
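
The Prompt, Submit, and Response roles correspond to the input field, the send action, and the reply area of the chat page you record against. As a reference point, here is a minimal sketch of such a page, assuming Flask and a hypothetical ask_llm helper that forwards the prompt to your model; it illustrates the surface being recorded, not AppScan itself.

```python
from flask import Flask, request, render_template_string

app = Flask(__name__)

# A bare-bones chat page: the textarea, button, and reply element are the
# parts that map to the Prompt, Submit, and Response roles during recording.
PAGE = """
<form method="post" action="/chat">
  <textarea name="prompt"></textarea>   <!-- Prompt: the field the recorded text goes into -->
  <button type="submit">Send</button>   <!-- Submit: the action that sends the prompt -->
</form>
<div id="response">{{ answer }}</div>   <!-- Response: the element that shows the reply -->
"""

def ask_llm(prompt: str) -> str:
    # Hypothetical helper: forward the prompt to your model and return its reply.
    return f"echo: {prompt}"

@app.route("/chat", methods=["GET", "POST"])
def chat():
    answer = ""
    if request.method == "POST":
        answer = ask_llm(request.form.get("prompt", ""))
    return render_template_string(PAGE, answer=answer)

if __name__ == "__main__":
    app.run(port=5000)
```

Recording a "test" prompt against a page like this yields the prompt, submit, and response trio that the sequence editor and Run analyze work with.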
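
To see why the table name matters, consider an LLM service that lets prompt text reach a backing table. The sketch below is purely hypothetical (SQLite, a documents table, and a naive query builder are all assumptions) and shows the kind of prompt-to-SQL path whose injection exposure AppScan probes once it knows which table the service touches.

```python
import sqlite3

def search_documents(user_prompt: str) -> list:
    """Hypothetical retrieval step used by the LLM service."""
    conn = sqlite3.connect("app.db")
    # Naive string concatenation: prompt text flows straight into SQL.
    # This is the injection surface that AppScan simulates attacks against.
    query = f"SELECT title, body FROM documents WHERE title LIKE '%{user_prompt}%'"
    rows = conn.execute(query).fetchall()
    conn.close()
    return rows

# A crafted prompt can escape the LIKE pattern and read unrelated data, e.g.:
#   search_documents("x%' UNION SELECT name, sql FROM sqlite_master --")
```

Providing the real table name lets AppScan direct its simulated injection attacks at the data the service actually exposes.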