Troubleshooting AppScan 360° Static Analysis deployment
Errors returned from running the deployment script
Missing Cert Manager dependency
- Error:
Error: unable to build kubernetes objects from release manifest: [resource mapping not found for name: "analyzer-cert" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1" ensure CRDs are installed first, resource mapping not found for name: "ascp-adapter-cert" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
- Root cause:
This error is encountered when the cert-manager addon dependency is not deployed on the cluster. AppScan 360° SAST depends on the cert-manager addon for deployment.
- Solution:
Verify cert-manager is deployed and running on your Kubernetes cluster. Review system requirements and environment setup instructions.
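One way to verify this is to check for the cert-manager pods and for the Certificate CRD named in the error above; the cert-manager namespace below assumes the default installation namespace and may differ in your cluster:
kubectl get pods --namespace cert-manager
kubectl get crd certificates.cert-manager.io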
Missing Keda dependency
- Error:
Error: unable to build kubernetes objects from release manifest: [resource mapping not found for name: "analyzer-hpa" namespace: "" from "": no matches for kind "ScaledObject" in version "keda.sh/v1alpha1" ensure CRDs are installed first, resource mapping not found for name: "ascp-adapter-hpa" namespace: "" from "": no matches for kind "ScaledObject" in version "keda.sh/v1alpha1"
- Root cause:
This error is encountered when the Keda addon dependency is not deployed on the cluster. AppScan 360° SAST depends on the Keda addon for deployment.
- Solution:
Verify Keda is deployed and running on your Kubernetes cluster. Review system requirements and environment setup instructions.
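One way to verify this is to check for the Keda pods and for the ScaledObject CRD named in the error above; the keda namespace below assumes the default installation namespace and may differ in your cluster:
kubectl get pods --namespace keda
kubectl get crd scaledobjects.keda.sh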
Resource already in use or cannot be recreated
- Error:
Previous PVC storage not completely cleaned up.
Incomplete finalizers on pods.
Namespace stuck in termination state.
- Root cause:
Resources from a previous deployment are not completely cleaned up during the removal process. This can also occur when the namespace is forcefully deleted, leaving resources untracked on the cluster.
- Solution: The solution varies depending on the type of resource issue involved:
- To clean up a namespace stuck in terminating state
kubectl get namespace "hcl-appscan-sast" -o json | tr -d "\n" | sed "s/\"finalizers\": \[[^]]\+\]/\"finalizers\": []/" | kubectl replace --raw /api/v1/namespaces/hcl-appscan-sast/finalize -f-
- To clean up a pod stuck in terminating state
kubectl delete pod <pod-name> --grace-period=0 --force --namespace <namespace>
- To delete orphaned PVs and PVCs
kubectl patch pv <pv-name> -p '{"metadata":{"finalizers":null}}'
kubectl delete pv <pv-name> --grace-period=0 --force
kubectl patch pvc <pvc-name> -p '{"metadata":{"finalizers":null}}'
kubectl delete pvc <pvc-name> --grace-period=0 --force --namespace <namespace>
Required parameters not provided
- Errors:
- ERROR: Authorization token is required for the deployment. Use the option '--auth-token' to specify the file path which contains the token.
- ERROR: Authorization token file path specified for the option '--auth-token' does not exist.
- ERROR: Rabbitmq password is required for the deployment. Use the option '--rabbitmq-pwd' to specify the file path which contains the password.
- ERROR: Rabbitmq password file path specified for the option '--rabbitmq-pwd' does not exist.
- ERROR: CA certificate & key are required for the deployment. Use the options '--cert, --cert-key' to specify the ca certificate and the private-key file paths.
- ERROR: Certificate file path specified for the option '--cert' does not exist.
- ERROR: Certificate private key file path specified for the option '--cert-key' does not exist.
- ERROR: Configuration file path specified for the option '--config-file' does not exist.
- Root cause:
Required parameters to the deployment script are not provided, or invalid values are provided for the options.
- Solution:
Review all required options for the deployment script and make sure valid values are specified for each option; see the example invocation below.
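A hedged example invocation built from the options named in the errors above; the script name appscan-sast-deploy.sh and all file paths are placeholders, not values from your environment:
./appscan-sast-deploy.sh \
  --auth-token /path/to/auth-token.txt \
  --rabbitmq-pwd /path/to/rabbitmq-password.txt \
  --cert /path/to/ca-cert.pem \
  --cert-key /path/to/ca-key.pem \
  --config-file /path/to/config.yaml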
Pods not able to start after running the deployment script
As a post-deployment step, log in to the cluster and verify that all AppScan 360° SAST pods are up and running; a quick status check is shown below, followed by some examples of pod-related deployment issues.
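A quick status check, assuming the hcl-appscan-sast namespace used in the commands elsewhere in this guide:
kubectl get pods --namespace hcl-appscan-sast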
Persistent Volume (PV) & Persistent Volume Claim (PVC) related errors
PVC creation requires that the requested storage size is available on disk. Verify that the disk housing the data storage has enough space to accommodate the requested capacity. An unprovisioned PVC will cause pod creation to fail.
The azurefile storage class allows for large storage capacity for a fee. If any existing AppScan 360° SAST related PVs & PVCs are not completely removed, then any new deployment will fail to create PVs & PVCs due to the name collision, which then causes the pod creation to fail.
- Solution:
Wait a few seconds after removal to ensure all PV & PVC resources are fully released before trying a new deployment; the check below confirms that nothing is left behind.
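One way to confirm cleanup before redeploying, assuming the hcl-appscan-sast namespace used elsewhere in this guide (the grep filter relies on the PVs listing that namespace in their claim):
kubectl get pvc --namespace hcl-appscan-sast
kubectl get pv | grep hcl-appscan-sast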
Insufficient CPU / Memory for pod creation
Each of the AppScan 360° SAST components has a defined set of resource requirements for the respective pods. Refer to the resource requirements section for more details.
Pod creation will fail when the required minimum resource limit set for each pod is not available on the node-pool(s).
- Solution:
Ensure the node-pool resource size meets the resource requirements defined by AppScan 360° SAST; the commands below show one way to check what the nodes can allocate.
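One way to compare node capacity against those requirements (kubectl top requires the metrics-server addon, which may not be installed on every cluster):
# Allocatable CPU and memory per node
kubectl describe nodes | grep -A 7 Allocatable
# Current CPU and memory usage per node
kubectl top nodes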
RabbitMQ not up
The RabbitMQ service must be up and running for AppScan 360° SAST components to function as expected. The RabbitMQ service takes a few minutes to start, and it is not unusual for the AppScan 360° SAST pods to fail a few startup attempts while RabbitMQ is being deployed. If the RabbitMQ service does not start successfully, the AppScan 360° SAST pods fail in turn.
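A simple way to watch RabbitMQ come up, assuming the hcl-appscan-sast namespace and that the RabbitMQ pods carry rabbitmq in their names (an assumption about naming, not a documented guarantee):
kubectl get pods --namespace hcl-appscan-sast | grep -i rabbitmq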
Image pull failure
Error:
failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized
Root cause:
This is caused by either the AppScan 360° SAST registry secret not being created or the current registry secret containing outdated credentials.
Solutions:
- Delete the previous AppScan 360° SAST registry secret, as sketched below.
- Update the registry username and password provided to the deployment script. See Deployment configuration parameters.
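A minimal sketch of the secret cleanup, assuming the registry secret lives in the hcl-appscan-sast namespace; the secret name is a placeholder, so list the secrets first to find the real one:
kubectl get secrets --namespace hcl-appscan-sast
kubectl delete secret <registry-secret-name> --namespace hcl-appscan-sast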
Certificate expired
- Rerun the deployment command, providing the new certificate and the respective private key.
- Delete the following secrets from the namespace that holds the certificates for the internal components. The secrets will be automatically regenerated.
kubectl delete secret sast-service-tls gateway-tls workflow-manager-tls scan-manager-tls preparer-tls analyzer-tls ascp-adapter-tls sast-service-rabbitmq-tls --namespace hcl-appscan-sast
- Delete all the pods in the namespace. New pods using the newly generated TLS secrets will be created automatically.
kubectl delete --all pods --namespace=hcl-appscan-sast
Deployment fails on fresh install
Error:
The Helm deployment operation fails with the generic error - ERROR: Deploy SAST services - failed. Installation aborted!
Root cause:
A previous attempt to upgrade on this cluster failed, leaving old secret files on the cluster. These files prevent the new deployment from creating similar files needed by the new deployment.
Solution:
Run the undeploy command before attempting a fresh install, then confirm that no leftover secrets remain, as shown below. See Removing AppScan 360° Static Analysis containers.
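One way to confirm that secrets from the failed upgrade are gone before reinstalling, assuming the hcl-appscan-sast namespace used elsewhere in this guide:
kubectl get secrets --namespace hcl-appscan-sast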
Service not accessible
AppScan 360° SAST should be accessible through the FQDN (https://<sast-ingress-fqdn>) provided as a parameter to the deployment script. Examples of errors that can make the service inaccessible through the FQDN include:
- Missing ingress controller dependency
- Error:
- This site can’t be reached.
- Root Cause:
This error can be seen when trying to access AppScan 360° SAST from the specified FQDN on a browser, when the ingress controller is not installed or configured properly.
- Solution:
- Verify the ingress controller is deployed and running on your Kubernetes cluster. Review the prerequisites section for all required dependencies.
- Ensure the static IP used in configuring the ingress controller resolves to the FQDN used for deployment. This can be done by creating a record set in your cloud DNS management tool or by updating the local machine's /etc/hosts file, as in the sketch below.
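A minimal /etc/hosts sketch; the IP 203.0.113.10 and the name sast.example.com are placeholders, not values from your deployment:
203.0.113.10    sast.example.com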
- Wrong FQDN
- Error:
- This site can’t be reached.
- Root Cause: The FQDN used in configuring the ingress is different from the one used to access the service. Another issue here can be a wrong IP/DNS mapping. If the FQDN used is different from the value mapped to the ingress IP, then the service will be unreachable.
- Solution:
- Verify the FQDN used to access the service matches the value passed to the deployment script.
- Verify the FQDN value is mapped to the ingress IP either in your local /etc/hosts file or in your cloud DNS host zone; a quick resolution check is sketched below.
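A quick resolution check, assuming <sast-ingress-fqdn> is the FQDN passed to the deployment script; the ingress namespace shown may differ in your cluster:
# Confirm the FQDN resolves to the ingress controller's external IP
nslookup <sast-ingress-fqdn>
# Compare against the address reported for the ingress
kubectl get ingress --namespace hcl-appscan-sast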
- Gateway service not running
- Deployment successful but pods failing to start
- Error:
- The error message here will depend on the root cause.
- Root Cause:
- ImagePullBackOff: This is the most common reason for pod failures. It occurs when the image cannot be pulled from the repository or the repository path provided.
- Insufficient cpu, Insufficient memory: This error occurs when the minimum CPU and memory set for AppScan 360° SAST are not met by any node in the cluster node pool.
- Solution:
- Verify the repository provided at deployment contains the images being deployed.
- Verify the repository authentication provided is correct. The AppScan 360° SAST deployment requires valid credentials to pull images from the repository.
- Verify the node pool has sufficient memory and CPU resources to meet the minimum requirements set during deployment.
- Check pod error logs for more details, as in the sketch below.
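A minimal sketch for inspecting a failing pod, assuming the hcl-appscan-sast namespace used elsewhere in this guide; replace <pod-name> with the name of the failing pod:
# Show pod events, including ImagePullBackOff and insufficient cpu/memory scheduling messages
kubectl describe pod <pod-name> --namespace hcl-appscan-sast
# Show container logs for errors raised after startup
kubectl logs <pod-name> --namespace hcl-appscan-sast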