Troubleshooting Single VM Setup Issues

Task Manager pod failure - storage ownership issue

Occasionally after upgrading to version 2.0.0 in a single VM environment, the Task Manager pod may fail to start. To resolve the issue:
  • If you upgraded AppScan 360° using the online script, run:
    ./AppScan360_SingleVMsetup_v2.0.0.run -- $PWD remediateStorageIssues
  • If you upgraded AppScan 360° using the offline script, run:
    ./AppScan360_SingleVMsetup_v2.0.0_Offline.run -- $PWD remediateStorageIssues

Pod image pull issues

When deploying AppScan 360° in a single VM Ubuntu environment using a local Docker registry, you may encounter an issue where pods fail to start due to registry connection errors.

A typical error in pod events may look like:
Normal   Pulling    60s (x4 over 4m1s)   kubelet            
Pulling image "<ip>:5443/as360-k8s-docker-images/reloader:v1.2.1"
Warning  Failed     30s (x4 over 3m31s)  kubelet            
Failed to pull image 
"<ip>:5443/as360-k8s-docker-images/reloader:v1.2.1": rpc error: code = 
DeadlineExceeded desc = failed to pull and unpack image 
"<ip>:5443/as360-k8s-docker-images/reloader:v1.2.1": failed to resolve 
reference "<ip>/as360-k8s-docker-images/reloader:v1.2.1": failed to do 
request: Head 
"https://<ip>/v2/as360-k8s-docker-images/reloader/manifests/v1.2.1": 
dial tcp <ip>:5443: i/o timeout

If you see dial tcp <ip>:5443: i/o timeout in the error message, this typically indicates a network connectivity issue between your Kubernetes node (k0s container) and the Docker registry.

To resolve the issue:

  • Open a shell in the k0s container (install diagnostic tools such as netcat inside it if they are not already present):
    docker exec -it k0s sh
  • Test network connectivity from the container:

    • Use netcat to check connectivity to the registry:
      nc -zv <registry-ip> 5443
    • If the output does not report the port as open, the connection was not successful, indicating a network issue.
    • Try pulling the image directly using the ctr command from the container:
      k0s ctr images pull --user <user>:<pass> <registry-ip>:5443/as360-k8s-docker-images/reloader:v1.2.1
    • If this fails, the issue is likely with firewall rules.
  • Check firewall (UFW) and IPTables rules:
    • On the host, inspect firewall rules:
      iptables -L -n -v
    • Look for chains related to ufw (Uncomplicated Firewall) that may be blocking traffic, like the example below:
      Chain INPUT (policy DROP 39231 packets, 2148K bytes)
       pkts bytes target     prot opt in     out     source               destination
      6603K 9311M ufw-before-logging-input  all  --  *      *       0.0.0.0/0            0.0.0.0/0
      6603K 9311M ufw-before-input          all  --  *      *       0.0.0.0/0            0.0.0.0/0
       407K   50M ufw-after-input           all  --  *      *       0.0.0.0/0            0.0.0.0/0
      39311 2154K ufw-after-logging-input   all  --  *      *       0.0.0.0/0            0.0.0.0/0
      39311 2154K ufw-reject-input          all  --  *      *       0.0.0.0/0            0.0.0.0/0
      39311 2154K ufw-track-input           all  --  *      *       0.0.0.0/0            0.0.0.0/0
      
  • Update UFW rules to allow Docker network traffic on the host:

    • Allow traffic on the Docker network interface:
      sudo ufw allow in on docker0
      sudo ufw allow out on docker0
      
    • Allow traffic to the registry ports:
      sudo ufw allow out from any to <registry-ip> port 5443 proto tcp
      sudo ufw allow out from any to <registry-ip> port 7443 proto tcp
      
  • Verify resolution:

    • Re-run the connectivity tests (netcat, ctr images pull) from the k0s container.
    • If successful, pod image pulls should now work as expected.
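
The manual checks above can be scripted. The sketch below shows two small POSIX-shell helpers that interpret the diagnostic output: one classifies the netcat result, the other extracts the default policy from an iptables built-in chain header. The function names and sample strings are illustrative, not part of the product; output wording varies between netcat variants (BusyBox prints "open", other variants print "succeeded").

```shell
#!/bin/sh
# port_open OUTPUT - returns 0 when captured `nc -zv` output reports the
# port as reachable (assumption: success output contains "open" or
# "succeeded", depending on the netcat variant).
port_open() {
    case "$1" in
        *open*|*succeeded*) return 0 ;;
        *) return 1 ;;
    esac
}

# chain_policy OUTPUT - extracts the default policy (ACCEPT/DROP/REJECT)
# from the first built-in chain header in `iptables -L -n -v` output.
chain_policy() {
    echo "$1" | sed -n 's/^Chain [A-Z]* (policy \([A-Z]*\).*/\1/p' | head -n 1
}
```

For example, feeding the chain header from the listing above (Chain INPUT (policy DROP ...)) to chain_policy yields DROP, which tells you that unmatched inbound traffic is being dropped and the UFW allow rules in the next step are needed.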

AppScan 360° fails to connect to the database - SCA deployment failed

  • The database connection failed because the database IP was not within the network policy's egress range.
  • Consequently, the SCA deployment also fails because the ASCP deployment does not complete successfully. The default egress rule allows only the following ranges:
egress:
  - to:
    - namespaceSelector: {}
    - ipBlock:
        cidr: 10.0.0.0/8
    - ipBlock:
        cidr: 192.168.0.0/16
    - ipBlock:
        cidr: 172.16.0.0/12
    - ipBlock:
        cidr: fc00::/7

Troubleshooting steps

  • Check the database connection string/IP and verify whether the database IP falls within the egress CIDR ranges above.

Resolution

  • Add the database IP CIDR to the network policy egress rule.
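
For example, if the database lives outside the default ranges, append an additional ipBlock entry to the egress rule. The 203.0.113.0/24 CIDR below is a placeholder for illustration; substitute the CIDR that covers your database's IP:

```yaml
egress:
  - to:
    - namespaceSelector: {}
    - ipBlock:
        cidr: 10.0.0.0/8
    - ipBlock:
        cidr: 192.168.0.0/16
    - ipBlock:
        cidr: 172.16.0.0/12
    - ipBlock:
        cidr: fc00::/7
    - ipBlock:
        cidr: 203.0.113.0/24   # placeholder: replace with your database CIDR
```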

CoreDNS pod fails with CrashLoopBackOff error

  • The CoreDNS pod fails with a CrashLoopBackOff error in a k0s cluster setup.
  • The CoreDNS pod log shows errors such as:
maxprocs: Leaving GOMAXPROCS=28: CPU quota undefined 
plugin/forward: no nameservers found 
stream closed: EOF for kube-system/coredns-6946cc8786-q52sw (coredns)

Resolution

  • Add a nameserver entry to the /etc/resolv.conf file on the host machine if one is not present.
  • Then delete the CoreDNS pods so that they are recreated.
  • Alternatively, set the nameserver entry in the CoreDNS ConfigMap if you want to keep the configuration specific to the cluster and not depend on the host machine's resolv.conf:
   kubectl -n kube-system edit configmap coredns

   # Replace forward . /etc/resolv.conf
   forward . 8.8.8.8 1.1.1.1 # Or specify any specific nameserver applicable for your environment
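
A quick way to confirm the host-side cause is to check whether /etc/resolv.conf actually contains an active nameserver entry. The helper below is a sketch that tests the file content rather than the live file, so the sample usage line is illustrative:

```shell
#!/bin/sh
# has_nameserver CONTENT - returns 0 if the given resolv.conf content
# contains at least one active (uncommented) nameserver entry.
has_nameserver() {
    echo "$1" | grep -q '^nameserver[[:space:]]'
}

# Typical usage on the host:
#   has_nameserver "$(cat /etc/resolv.conf)" || echo "no nameserver configured"
```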

RHEL - Application not accessible on VM restart

  • After installing AppScan 360° on a RHEL VM, restarting the VM may make the application inaccessible, showing a "503 Service Unavailable" error when you access the application URL.
  • This happens because the Podman containers and the k0s cluster may have stopped during the restart.

Resolution

  • Helper scripts are available in the Single VM Setup kit at the path below to start the required components.
    # Navigate to aioWorkspace directory
    cd <AS360_SETUP_DIRECTORY>/aioWorkspace
    # Navigate to utils scripts
    cd bin/utils
    
    # Execute script to start podman containers
    ./startPodmanContainers.sh
    
    # Check if the k0s cluster is running; if not, start it using the script below
    k0s status
    kubectl get pods -A
    
    # Execute script to start k0s cluster if it is not running
    ./startK0sOnHost.sh
    
    # Verify if all pods are running
    kubectl get pods -A
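
The "check, then start if needed" step can be scripted. The sketch below parses captured `k0s status` output rather than calling k0s directly, so the decision logic is testable; it assumes that a running cluster's status output contains a "Process ID:" line (true for recent k0s versions, but verify against your version's output):

```shell
#!/bin/sh
# k0s_running STATUS_OUTPUT - returns 0 if the captured `k0s status`
# output indicates a running cluster (assumption: a running k0s prints
# a "Process ID:" line, while an error message does not).
k0s_running() {
    echo "$1" | grep -q 'Process ID:'
}

# Typical usage on the host, from bin/utils:
#   if ! k0s_running "$(k0s status 2>&1)"; then
#       ./startK0sOnHost.sh
#   fi
```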

Trouble reinstalling AppScan 360° on a single VM setup with Istio pods stuck

  • During reinstallation of AppScan 360° on a single VM setup, Istio pods might get stuck in a Pending state if they were not cleaned up properly during the previous uninstallation.

Resolution

  • Use the helper script below to clean up Istio resources, and then proceed with the installation.
    # Navigate to aioWorkspace directory
    cd <AS360_SETUP_DIRECTORY>/aioWorkspace
    # Navigate to utils scripts
    cd bin/utils
    
    # Execute script to clean up istio resources
    ./cleanIstio.sh
  • Then try installing AppScan 360° again using the installation command.
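
To confirm the cleanup worked before reinstalling, check that no Istio pods remain. The helper below interprets captured `kubectl get pods` output (kubectl prints "No resources found" when a namespace is empty); the istio-system namespace name is the Istio default and may differ in your setup:

```shell
#!/bin/sh
# no_pods_left OUTPUT - returns 0 when captured `kubectl get pods -n <ns>`
# output reports no resources in the namespace.
no_pods_left() {
    case "$1" in
        *"No resources found"*) return 0 ;;
        *) return 1 ;;
    esac
}

# Typical usage:
#   no_pods_left "$(kubectl get pods -n istio-system 2>&1)" && echo "istio clean"
```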