Troubleshooting
First steps
- Ensure the path to the properties files is correct during installation.
- Review the log files to confirm that the correct system requirements are specified.
  If the installation fails, the installer automatically cleans up extracted resources, leaving log files intact for debugging.
- Ensure the AppScan 360° FQDN is defined in the DNS server.
- Verify all AppScan 360° pods are running.
- Enter the AppScan 360° URL to see the service.
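The pod check above can be scripted. Below is a minimal sketch, assuming `kubectl` access from the VM and an `appscan360` namespace; both are assumptions and may differ in your deployment.

```shell
#!/bin/bash
# Minimal pod-health check. The "appscan360" namespace is an assumption;
# substitute the namespace used by your installation.
NS="${NS:-appscan360}"

check_pods() {
  if ! command -v kubectl >/dev/null 2>&1; then
    echo "kubectl not found; run this from a machine with cluster access"
    return 0
  fi
  # Print any pod whose STATUS column is not Running or Completed
  kubectl get pods -n "$NS" --no-headers 2>/dev/null |
    awk '$3 != "Running" && $3 != "Completed" {print $1 " -> " $3; bad=1}
         END {exit bad ? 1 : 0}'
}

check_pods || echo "Unhealthy pods found; inspect with: kubectl describe pod -n $NS <pod-name>"
```

If any pod is listed, `kubectl describe pod` and the pod logs are the next place to look.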
Login issues
- Verify that the AppScan 360° cluster can connect to the database machine.
- Verify that the AppScan 360° database was created successfully on the database machine.
- Verify that SQL Server is configured to allow remote connections.
- Verify that the AppScan 360° cluster can connect to the LDAP server.
- Check the LDAP configuration in the AppScan 360° kit configuration file.
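The database and LDAP connectivity checks can be automated with a quick TCP probe from any cluster node. A sketch, using placeholder host names you must replace, and assuming the common default ports (1433 for SQL Server, 389 for LDAP):

```shell
#!/bin/bash
# TCP reachability probe for the database and LDAP servers.
# Host names below are placeholders; ports are the common defaults
# (1433 SQL Server, 389 LDAP / 636 LDAPS) and may differ in your setup.

probe() {  # probe <host> <port> <label>
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$3 ($1:$2): reachable"
  else
    echo "$3 ($1:$2): NOT reachable - check DNS, routing, and firewall"
  fi
}

probe db.example.com   1433 "SQL Server"
probe ldap.example.com  389 "LDAP"
```

A "NOT reachable" result narrows the problem to the network path before you look at credentials or LDAP configuration.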
Log files
- Installation logs are located in the folder containing the extracted AppScan 360° kit:
  <EXTRACTION_FOLDER>/logs/singular-setup[/teardown].log
- Application logs are located in the ASCP shared storage. For example, if accessed from within the pod:
  /storagemount/logs
  Application logs are limited to 2MB each. Once this limit is reached, another log file is created, up to ten log files in total.
- Microservice logs pertain to platform activities.
Each microservice generates its own log file, with
the filename prefixed by the microservice name.
  <fileStorageRoot>/SaaSWorkingDirectory/SaaSStorage/Logs
- Scan logs contain detailed information about scan executions, including progress updates, metrics, and debug information. They are specific to each scan execution and can also be downloaded from the AppScan 360° user interface.
  <fileStorageRoot>/SaaSWorkingDirectory/SaaSStorage/Scans/<scanID>/<ExecutionID>/EngineLogs
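Collecting these logs from outside the cluster can be sketched with `kubectl cp`. The namespace and pod names below are assumptions, and `<fileStorageRoot>` is the storage root configured at install time:

```shell
#!/bin/bash
# Sketch: gather AppScan 360° logs into a local directory with kubectl cp.
# Namespace and pod names are assumptions; adjust them to your deployment.

fetch_logs() {  # fetch_logs <namespace> <pod> <dest-dir>
  ns=$1; pod=$2; dest=$3
  mkdir -p "$dest"
  # Application logs (rotated: up to ten files of 2MB each)
  kubectl cp "$ns/$pod:/storagemount/logs" "$dest/app-logs"
  # Engine logs for one scan execution -- fill in the IDs shown in the UI:
  # kubectl cp "$ns/$pod:<fileStorageRoot>/SaaSWorkingDirectory/SaaSStorage/Scans/<scanID>/<ExecutionID>/EngineLogs" "$dest/engine-logs"
}

# Example invocation (assumed names):
#   fetch_logs appscan360 <pod-with-storage-mount> ./support-bundle
```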
Upgrade issues
- If you upgraded AppScan 360° using the online script, run:
  ./AppScan360_SingleVMsetup_v2.0.0.run -- $PWD remediateStorageIssues
- If you upgraded AppScan 360° using the offline script, run:
  ./AppScan360_SingleVMsetup_v2.0.0_Offline.run -- $PWD remediateStorageIssues
Pod image pull issues
When deploying AppScan 360° in a single VM Ubuntu environment using a local Docker registry, you may encounter an issue where pods fail to start due to registry connection errors.
For example, the pod events may show:

  Normal   Pulling  60s (x4 over 4m1s)   kubelet  Pulling image "<ip>:5443/as360-k8s-docker-images/reloader:v1.2.1"
  Warning  Failed   30s (x4 over 3m31s)  kubelet  Failed to pull image "<ip>:5443/as360-k8s-docker-images/reloader:v1.2.1": rpc error: code = DeadlineExceeded desc = failed to pull and unpack image "<ip>:5443/as360-k8s-docker-images/reloader:v1.2.1": failed to resolve reference "<ip>/as360-k8s-docker-images/reloader:v1.2.1": failed to do request: Head "https://<ip>/v2/as360-k8s-docker-images/reloader/manifests/v1.2.1": dial tcp <ip>:5443: i/o timeout

If you see "dial tcp <ip>:5443: i/o timeout" in the error message, this typically indicates a network connectivity issue between your Kubernetes node (k0s container) and the Docker registry.
- Connect to the k0s container and install the diagnostic tools:
  docker exec -it k0s sh
  # (Run once per container) Install basic network tools:
  apk add --no-cache curl busybox-extras bind-tools
- Test network connectivity from the container:
  - Use telnet to check connectivity to the registry:
    telnet <registry-ip> 5443
    If you do not see a "Connected" message, the connection is blocked.
  - Try pulling the image directly using the ctr command from the container:
    k0s ctr images pull --user <user>:<pass> <registry-ip>:5443/as360-k8s-docker-images/reloader:v1.2.1
    If this fails, the issue is likely with firewall rules.
- Check firewall (UFW) and iptables rules:
  - On the host, inspect firewall rules:
    iptables -L -n -v
  - Look for chains related to ufw (Uncomplicated Firewall) that may be blocking traffic, as in the following output:
    Chain INPUT (policy DROP 39231 packets, 2148K bytes)
     pkts bytes target                   prot opt in  out  source     destination
    6603K 9311M ufw-before-logging-input all  --  *   *    0.0.0.0/0  0.0.0.0/0
    6603K 9311M ufw-before-input         all  --  *   *    0.0.0.0/0  0.0.0.0/0
     407K   50M ufw-after-input          all  --  *   *    0.0.0.0/0  0.0.0.0/0
    39311 2154K ufw-after-logging-input  all  --  *   *    0.0.0.0/0  0.0.0.0/0
    39311 2154K ufw-reject-input         all  --  *   *    0.0.0.0/0  0.0.0.0/0
    39311 2154K ufw-track-input          all  --  *   *    0.0.0.0/0  0.0.0.0/0
- Update UFW rules to allow Docker network traffic on the host:
  - Allow traffic on the Docker network interface:
    sudo ufw allow in on docker0
    sudo ufw allow out on docker0
  - Allow traffic to the registry ports:
    sudo ufw allow out from any to <registry-ip> port 5443 proto tcp
    sudo ufw allow out from any to <registry-ip> port 7443 proto tcp
- Verify resolution:
  - Re-run the connectivity tests (telnet, ctr images pull) from the k0s container.
  - If successful, pod image pulls should now work as expected.
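Beyond telnet, you can confirm the registry answers on its HTTP API: every Docker registry exposes a /v2/ base endpoint. A sketch, with the registry address left as a placeholder you must fill in:

```shell
#!/bin/bash
# Sketch: probe the registry's Docker Registry HTTP API v2 endpoint.
# Prints the HTTP status code; 200 or 401 means the registry is reachable,
# 000 means the connection failed (firewall, DNS, or TLS at the TCP level).

check_registry_api() {  # check_registry_api <host:port>
  # -k: local registries commonly use self-signed certificates
  curl -sk -o /dev/null -w '%{http_code}' --max-time 5 "https://$1/v2/"
}

# Example (replace with your registry's address):
#   check_registry_api <registry-ip>:5443
```

This distinguishes a port that is open but not serving the registry (an unexpected status code) from a blocked connection (000).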
Support
- For installation issues, include the installation logs.
- For scan issues, include the contents of the scan directory (<fileStorageRoot>/SaaSWorkingDirectory/SaaSStorage/Scans/<scanID>/<ExecutionID>/), including the scanned application (the .irx file, or the .war/.jar/.zip source file for static analysis) and any scan logs. For additional information on troubleshooting scans, see Troubleshooting static analysis scans.