Troubleshooting issues

This section describes issues that you might encounter while working with HCL OneTest Server, their causes, and the resolutions that you can apply to fix them.

Table 1. Troubleshooting issues: installation

Problem

Description

Solution

You encounter errors in the scripts that run when you install the server software.

At times, scripts might not appear to run due to any of the following reasons:
  • Slow connection speeds.
  • Insufficient CPU, memory, or disk resources.
  • An incorrectly configured firewall is enabled.
You can complete any of the following tasks:
  • To identify the issue, you can perform a diagnostic check by running the following command:
    microk8s.inspect
  • Run the following command to see what is not running:
    kubectl get pods -A

    Run the following command to get details about the pods:

    kubectl describe pod
  • Follow the on-screen instructions to resolve the errors.
  • Some issues can be solved by re-running the following script:
    ubuntu-init.sh

The DNS is not working as expected.

You can change the nameservers by using the following command:

kubectl edit cm coredns -n kube-system

The changes are applied when you restart the coredns pod.

If you are unsure of how to do this you can run the following script that helps you manage the DNS settings:

ubuntu-set-dns.sh
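The manual steps above can be sketched as follows, assuming the default microk8s setup in which the CoreDNS pods carry the label k8s-app=kube-dns (an assumption; verify the label in your cluster first):

```shell
# Edit the nameserver entries in the CoreDNS ConfigMap
kubectl edit cm coredns -n kube-system

# Delete the coredns pod so that it restarts with the new configuration
# (assumes the default microk8s label k8s-app=kube-dns)
kubectl delete pod -n kube-system -l k8s-app=kube-dns

# Confirm that the replacement pod is running
kubectl get pods -n kube-system -l k8s-app=kube-dns
```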

Table 2. Troubleshooting issues: server administration

Problem

Description

Solution

Versions of OpenShift that are not recently patched can report errors whenever OpenShift performs internal checks.

For example, the following error is displayed:
ValidationError(Route.spec.to): missing required field "weight"

The errors occur because OpenShift performs internal checks that are invalid.

Apply the latest OpenShift patches when they become available.

If you do not want to apply the patches or cannot apply the patches, you can disable these checks in OpenShift by appending the following option to the helm command:

--disable-openapi-validation
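For example, if the Helm release is named my-ots and the chart is installed into the test-system namespace (placeholder names for illustration; substitute your actual release name, chart location, and namespace), the option can be appended as in this sketch:

```shell
# Sketch only: my-ots, ./hcl-onetest-server, and test-system are placeholders.
# --disable-openapi-validation skips the internal OpenAPI checks that fail
# on unpatched OpenShift versions.
helm install my-ots ./hcl-onetest-server \
  -n test-system \
  --disable-openapi-validation
```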

If your user-realm role is changed while you are logged in to a session, the changed role is not applied immediately, or even after you refresh the browser.

You must log out of the session and log in again for the changed role to take effect.

You see the following message displayed on HCL OneTest Server instance that is installed on Ubuntu:

no healthy upstream

The cause for this issue might be that the infrastructure is over-provisioned with many virtual machines or applications that compete for connectivity with the physical hardware.

The message is displayed when you log in after a period of inactivity, even though the server ran normally earlier.

The pods appear to be in a Terminating state when you explore the logs.

The pods can enter the Terminating state because the etcd key-value store in microk8s is sensitive to disk latency.

You can perform any of the following actions:
  • To reduce the likelihood of disk latencies that cause pods to start the termination process, run the following command after you run ubuntu-init.sh at the time of server software installation:
    sudo ionice -c2 -n0 -p `pgrep etcd`
  • To revert the server to a running state, run the following command to stop and start the microk8s:
    microk8s.stop && microk8s.start
Table 3. Troubleshooting issues: resource monitoring

Problem

Description

Solution

You are not able to add a Prometheus server as a Resource Monitoring source.

The cause might be that you have not installed the Prometheus server at the time of server installation.

Verify that the Prometheus server was installed by Helm at the time of server installation. See Installing the server software on Ubuntu using microk8s. If it was not installed, consult your cluster administrator to get the Prometheus server installed and configured.
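If you are unsure whether Prometheus is present, you can list the pods in the cluster, as in this sketch (pod names depend on how the chart was installed):

```shell
# List any Prometheus-related pods in all namespaces.
# If this prints nothing, the Prometheus server is likely not installed.
kubectl get pods -A | grep -i prometheus
```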

Table 4. Troubleshooting issues: configuring test runs

Problem

Description

Solution

When you configure a run of a schedule that matches the following conditions:
  • The schedule has two user groups configured to run on static agents when the schedule was created in HCL OneTest Performance V10.1.
  • One of the user groups is disabled and the asset is committed to the remote repository.
Both static agents are displayed as available for the test run in the Location tab of the Execute test asset dialog box, although only the agent that is configured for the enabled user group must be available.
The problem can occur due to the following reasons:
  • The schedule was created in HCL OneTest Performance V10.1.
  • The user group that is disabled is not removed or deleted from the test resources.
  • The agent configured on the disabled user group is already added as an agent to the server project and is available for selection.
To resolve the problem, use either of the following methods:
  • By using HCL OneTest Performance V10.1.1.
    Perform the following steps:
    1. Open the schedule in HCL OneTest Performance V10.1.1.
    2. Save the schedule and the project.
    3. Commit your test asset to the remote repository.
    4. Proceed to configure a run for the schedule on HCL OneTest Server V10.1.1.
  • By using HCL OneTest Performance V10.1.
    Perform the following steps:
    1. Select the disabled user group.
    2. Click Remove.
    3. Save the schedule and the project.
    4. Commit your test asset to the remote repository.
    5. Proceed to configure a run for the schedule on HCL OneTest Server V10.1.1.
Table 5. Troubleshooting issues: test or stub runs

Problem

Description

Solution

You encounter any of the following issues:
  • When many tests are run simultaneously on the default cluster location, you observe the following issues:
    • Out-of-memory errors.
    • Test runs are slow with a high CPU usage.
    • The Kubernetes pods are evicted.
  • When you run an AFT suite that contains multiple Web UI tests, you observe the following issues:
    • An error stating that the browser might not be installed or the browser version is unsupported.
    • An error stating multiple random time-outs or an internal error.
The issue is seen when any of the following events occur:
  • Many tests are run in parallel.
  • The memory that is used by the tests during the test run exceeds the allocated default memory of 1 GB.
  • The default memory of the container is not adequate for the test run.
  • Pods are evicted due to low node memory.

To resolve the problem, you can increase the resource allocation for test runs.

You can enter arguments in the Java Arguments field in the Advanced settings panel of the Execute test asset dialog box when configuring a test run.

Important: The memory settings that you configure for a test run are persisted for the test whenever you run it. Use this setting judiciously. Configuring all tests with an increased memory limit might affect subsequent test runs or cause other memory issues when tests run simultaneously.
You can increase the resource allocation for test runs by using any of the following arguments:
For... Enter the argument... Result

Specifying the memory limit of the init container.

-Dexecution.init.resource.memory.limit=1024Mi

Changes the memory limit of the init container from the default value of 1Gi to 1024Mi.

Configuring a larger memory request for the init container to avoid pod eviction.

-Dexecution.init.resource.memory.request=1024Mi

Increases the initial memory request for the init container from the default value of 64Mi to 1024Mi.

Specifying the CPU request for the init container.

-Dexecution.init.resource.cpu.request=60m

Increases the CPU request of the init container from the default value of 50m to 60m.

Specifying a maximum heap size for the test run.

-Xmx4g

Increases the allotted 1 GB memory to 4 GB.

Specifying the memory limit of the container explicitly for the test run.

-Dexecution.resource.memory.limit=<custom_memory_value>Gi
Note: You must enter the value for the memory limit that you want in <custom_memory_value>.

Increases the allotted memory to any value you specify.

For example, if you want to set the memory limit to 5 GB, the argument can be:

-Dexecution.resource.memory.limit=5Gi

The argument sets the memory limit to 5 GB and overrides the default memory size.

Specifying the memory request for the container used by the test run.

-Dexecution.resource.memory.request=1Gi

Increases the memory request of the container from the default value of 64Mi to 1Gi.

Specifying the CPU request for the container used by the test run.

-Dexecution.resource.cpu.request=60m

Increases the CPU request of the container from the default value of 50m to 60m.
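For example, to raise both the container memory limit and the Java heap for a single run, the arguments can be combined in the Java Arguments field. The values 6Gi and 4g below are illustrative; choose values that suit your workload, and keep the container limit larger than the heap so that the JVM itself has headroom:

```
-Dexecution.resource.memory.limit=6Gi -Xmx4g
```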

You are not able to run the Istio stubs from the Execution page.

The cause might be that you have not enabled the service virtualization via Istio at the time of server installation. The default configuration does not enable service virtualization via Istio.

Contact your cluster administrator or, if you have the privileges, configure Helm as follows:

  • For Multi-tenant clusters

    If the cluster is shared and the product may only virtualize services that run in specific namespaces, then add the following parameter to the Helm install:

    --set execution.istio.enabled=true

    Then enable service virtualization in specific namespaces using this command:

    kubectl create rolebinding istio-virtualization-enabled -n {namespace} --clusterrole={my-ots}-execution-istio-test-system --serviceaccount=test-system:{my-ots}-execution
    Note: Uninstalling the chart does not clean up these manually created role bindings.
  • For Single-tenant clusters

    If the cluster is not shared and the product may virtualize any service that runs in the whole cluster, then add the following parameters to the Helm install:

    --set execution.istio.enabled=true

    --set execution.istio.clusterRoleBinding.create=true

Another cause might be that the fully qualified domain name was not specified in the Host field when the stub was created. Verify that the fully qualified domain name of the server is added in the Host field when the physical transport for the stub is configured in HCL OneTest API.

When HTTP stubs run on HCL OneTest Server and are called via the HTTP proxy, the calls fail with HTTP 404 errors.

HTTP stubs that run on MicroK8s use an Istio gateway as the default gateway for handling HTTP traffic. If traffic is routed to the stubs via the HTTP proxy, the proxy must be of V10.1.1 or later to work correctly with the gateway. If you want to use an earlier version of the proxy, the HTTP stubs can be run such that they use NodePorts rather than the gateway. To enable the use of NodePorts for all stubs, add the following option to the Helm install command when you install HCL OneTest Server on MicroK8s:
--set execution.ingress.type=nodeport
Alternatively, to enable a stub to use a NodePort for a specific run, you can provide the following argument in the advanced settings for the run:
-Dexecution.ingress.type=nodeport
Table 6. Troubleshooting issues: test results and reports

Problem

Description

Solution

You are not able to view the Jaeger traces for the tests you ran.

The cause can be one of the following:
  • You might not have installed Jaeger at the time of server installation.
  • Jaeger traces are not supported for the particular test that you ran.
Check for any of the following solutions: