Deploying HCL Commerce on a Kubernetes cluster
HCL Commerce is a single, unified e-commerce platform that offers the ability to do business directly with consumers (B2C) or directly with businesses (B2B). It is a customizable, scalable, distributed, and high availability solution that is built to use open standards. It provides easy-to-use tools for business users to centrally manage a cross-channel strategy. Business users can create and manage precision marketing campaigns, promotions, catalog, and merchandising across all sales channels. HCL Commerce uses cloud friendly technology to make deployment and operation both easy and efficient.
A complete HCL Commerce environment is composed of an authoring (auth) environment and a live (live) environment. The authoring environment is for site administration and business users to make changes to the site, while the live environment is for shopper access. A third grouping also exists within HCL Commerce Version 9.1. The shared (share) environment contains the applications that can be consumed by both auth and live environment types. This is used for the new Elasticsearch-based search solution.
For more information on HCL Commerce, see HCL Commerce product overview.
HCL Commerce supports several deployment configurations. The default configuration mode used by the provided Helm Chart is Vault configuration mode. Vault is also the recommended configuration mode for HCL Commerce, as it was designed to store configuration data securely. HCL Commerce also uses Vault as a Certificate Authority to issue certificates to each application so that they can communicate with one another securely. Therefore, ensure that you have a Vault service available for HCL Commerce to access. The following steps highlight the minimum requirements before deploying HCL Commerce.
For non-production environments, you can consider using hcl-commerce-vaultconsul-helmchart to deploy and initialize Vault for HCL Commerce, as it can initialize Vault and populate data for HCL Commerce. However, that chart runs Vault in development and non-high availability (HA) mode and does not handle the Vault token securely. Therefore, it should not be used for production environments. See Vault Concepts for the considerations that must be made to run Vault in a production setting.
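For example, a minimal sketch of installing the development chart from a local clone; the chart path, release name, and namespace are assumptions and must match your clone and environment:
# Development and testing only: runs Vault in dev (non-HA) mode; not suitable for production.
helm install vault-consul ./hcl-commerce-vaultconsul-helmchart -n vault --create-namespace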
With load balancing and ingress routing specifically, you can configure which services you want to expose externally, and restrict the remaining services within the cluster network. This configuration limits their access from and exposure to the wider Internet.
Before you begin
- Ensure that you have deployed Vault. Vault is a mandatory component that is used by default as a Certificate Agent to automatically issue certificates, as well as to store and retrieve essential deployment configuration variables and secrets. For more information, see Deploying a development Vault for HCL Commerce on Kubernetes.
- Ensure that your environment is prepared. To set up the appropriate environment, see Prerequisites for deploying HCL Commerce on a Kubernetes cluster.
Beginning with HCL Commerce 9.1.7.0, a Power Linux version of the Helm Chart is included for use on that platform. Ensure that you are using the correct version of the Helm Chart for the platform that you are deploying to.
- If you want to deploy Next.js with Solr-based search for 9.1.16.1, obtain HCL_Commerce_Config_9.1.16.1.zip. Unzip it to a folder, then follow runtime\README.md to build customized Docker images for next.js-store-app, store-web, tooling-web, and ts-app.
- Ensure that the STORECONF table in your database is configured for your deployment.
  - For Solr-based search: Ensure that wc.search.priceMode.compatiblePriceIndex is set to 0 for your store.
    update storeconf set VALUE='0' where NAME='wc.search.priceMode.compatiblePriceIndex';
  - For Next.js with Solr-based search: Ensure that hcl.imagePath is set to the image path for your store. For the default Next.js sample store, use the following SQL to set it to /hclstore.
    INSERT INTO WCS.STORECONF (STOREENT_ID, NAME, VALUE) VALUES(41, 'hcl.imagePath', '/hclstore');
    INSERT INTO WCS.STORECONF (STOREENT_ID, NAME, VALUE) VALUES(42, 'hcl.imagePath', '/hclstore');
  - Ensure that headlessStore is set to true for your store. For the default Next.js sample store, use the following SQL to set it to true.
    INSERT INTO WCS.STORECONF (STOREENT_ID, NAME, VALUE) VALUES(41, 'headlessStore', 'true');
    INSERT INTO WCS.STORECONF (STOREENT_ID, NAME, VALUE) VALUES(42, 'headlessStore', 'true');
Procedure
- Optional: If you are using the Elasticsearch-based search solution for HCL Commerce, you must deploy Elasticsearch, Zookeeper, and Redis.
Important:
- Ensure that Elasticsearch, Zookeeper, and Redis are all deployed with persistence enabled. This will ensure that your search index and connector configurations are saved if the containers are restarted.
- The deployments of Elasticsearch, Zookeeper, and Redis require specific versions of their Helm Charts. These versions are compatible with the sample values bundled in your cloned HCL Commerce Helm Chart or HCL Commerce Plinux Helm Chart Git project. The specific version values are referenced in the hcl-commerce-helmchart/stable/hcl-commerce/Chart.yaml Helm Chart, from the cloned HCL Commerce Helm Chart Git project. Ensure that the versions specified in the following commands with the version parameter are aligned with the values that are referenced in this file.
Note: If you are deploying Elasticsearch, Zookeeper, and Redis on Red Hat OpenShift, you might be required to grant privileged Security Context Constraints (SCC) to the service account in order to prevent security errors. This depends on which service account you choose to use. For example:
oc adm policy add-scc-to-user privileged -z default -n NAMESPACE
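For example, the three dependencies can be installed with commands similar to the following. This is a sketch only: the Bitnami chart repository, release names, namespaces, and values file names are assumptions. Pin each --version to the value referenced in hcl-commerce-helmchart/stable/hcl-commerce/Chart.yaml, and use the sample values files from your cloned project so that persistence is enabled.
# Illustrative only; align the chart versions with Chart.yaml and the sample values bundled with the HCL Commerce Helm Chart project.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install elasticsearch bitnami/elasticsearch -n elasticsearch --create-namespace -f elasticsearch-values.yaml --version="elasticsearch-chart-version"
helm install zookeeper bitnami/zookeeper -n zookeeper --create-namespace -f zookeeper-values.yaml --version="zookeeper-chart-version"
helm install redis bitnami/redis -n redis --create-namespace -f redis-values.yaml --version="redis-chart-version"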
- Optional: If you intend to enable the Approval service for use within a Marketplace, you must deploy PostgreSQL to be used as the database.
  - Create a namespace for PostgreSQL.
    kubectl create ns postgresql
  - Add the Helm Chart repository.
    helm repo add bitnami https://charts.bitnami.com/bitnami
  - Deploy PostgreSQL using a local postgresql-values.yaml file. A sample version of this file is available in the sample_values directory of your cloned HCL Commerce Helm Chart Git project.
    Important: An initialization SQL file is used to customize the database for use with the Approval server. You must update the sample password that is used in the script, and ensure that the datasource password under the Approval server section is updated with the same password.
    helm install my-postgresql bitnami/postgresql -n postgresql -f postgresql-values.yaml --version="postgresql-chart-version"
- Monitor the deployment and ensure that all pods are healthy.
For more information about deploying PostgreSQL with Helm, see the PostgreSQL Helm Chart documentation.
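For example, to monitor the rollout until the PostgreSQL pod reports Running and Ready (release and namespace names as in the commands above):
kubectl get pods -n postgresql -w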
- Configure your HCL Commerce deployment Helm Chart.
  Use the provided hcl-commerce-helmchart to customize your deployment. Review the following topics based on your configuration knowledge and requirements.
  Note: It is strongly recommended not to modify the default values.yaml configuration file for your deployment. Instead, create a copy to use as your customized values file, for example, my-values.yaml. This allows you to maintain your customized values for future deployments and upgrades.
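For example, a minimal way to start a customized values file, assuming the chart was cloned into the hcl-commerce-helmchart directory referenced above:
# Copy the default values file from the cloned chart and edit only the copy.
cp hcl-commerce-helmchart/stable/hcl-commerce/values.yaml my-values.yaml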
- Use Helm to control the deployment of HCL Commerce.
  Once you have finished configuring your deployment in your my-values.yaml file and have met the environment prerequisites, you are ready to deploy HCL Commerce by using Helm.
  Important: Deploy the auth, live, and share groups into the same Kubernetes namespace to avoid any potential issues.
  - First-time deployment
    - Deploy the share group with the release name demo-qa-share into the commerce namespace.
      helm install demo-qa-share hcl-commerce-helmchart -f my-values.yaml --set common.environmentType=share -n commerce
    - Deploy the auth group with the release name demo-qa-auth into the commerce namespace.
      helm install demo-qa-auth hcl-commerce-helmchart -f my-values.yaml --set common.environmentType=auth -n commerce
    - Deploy the live group with the release name demo-qa-live into the commerce namespace.
      helm install demo-qa-live hcl-commerce-helmchart -f my-values.yaml --set common.environmentType=live -n commerce
    Once the HCL Commerce applications are deployed, if you have further configuration changes or image updates, you can use the helm upgrade command to update the deployment.
  - Updating a deployment
    To update a deployment, run the following Helm command for the release and environmentType that you want to update.
    helm upgrade release-name hcl-commerce-helmchart -f my-values.yaml --set common.environmentType=environmentType -n commerce
    Note: If you are upgrading a deployment that uses NGINX or GKE ingress from a version prior to HCL Commerce 9.1.7.0 to version 9.1.15.0 or greater, you must enable the ingressFormatUpgrade parameter within the values.yaml configuration file to trigger an upgrade job that cleans up old ingress definitions. Failure to do so will result in errors from conflicting ingress definitions during the upgrade.
A non-root user for use within all HCL Commerce containers was introduced in the HCL Commerce 9.1.14.0 release. This change can impact various aspects of your deployment. Review HCL Commerce container users and privileges before upgrading to ensure that your deployment will continue to function as expected.
    - There are several considerations when upgrading your deployment with regard to the Assets Tool and its persisted storage configuration:
      - If your existing deployment prior to the upgrade does not enable assetsPVC, then set the migrateAssetsPvcFromRootToNonroot parameter within the values.yaml configuration file to false.
      - Instead of using commercenfs in the values.yaml configuration file to create the NFS storageclass, it is recommended to create an NFS storageclass manually. Creating a storageclass manually avoids issues that can be encountered when running the helm upgrade command while deploying separate environment types within a single namespace. To create an NFS storageclass, see nfs-server-provisioner.
    - If you use the Elasticsearch-based search solution, you must use a completely new persistent volume for NiFi and clear any existing Zookeeper data before you redeploy. This is required so that the newer version of the connectors can be created automatically during the deployment.
      - To clear the NiFi data:
        - See Persisting search data to create a new Persistent Volume Claim (PVC), and configure the new PVC name in your deployment values.yaml file.
        - You can then remove the previously attached persistent volume claim.
          kubectl delete pvc previous_pvc_name -n commerce
      - To clear the Zookeeper data:
        - Delete the existing Zookeeper instance.
          helm delete my-zookeeper -n zookeeper
        - Remove the existing persistent volume claims.
          kubectl delete pvc --all -n zookeeper
    - HCL Cache caches classes that can be modified in newer versions of HCL Commerce. To avoid errors when de-serializing an old version of a class, it is strongly recommended to clear Redis keys after upgrading HCL Commerce. Redis keys can be cleared with the Redis flushdb or flushall commands, as shown in the sketch after these notes.
- Once you upgrade HCL Commerce, recreate any customized search profiles and connectors before your next search indexing.
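The following is a minimal sketch of clearing the cache keys after an upgrade. The Redis pod name and namespace are assumptions and depend on how Redis was deployed; add the redis-cli -a option if your Redis instance requires authentication.
# Assumed pod name and namespace; adjust to match your Redis deployment.
kubectl exec -it my-redis-master-0 -n redis -- redis-cli flushall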
  - Removing a deployment
    To uninstall or delete a deployment, run the following Helm command for the release that you want to remove.
    helm delete release-name -n commerce
- Observe the deployment.
When you install or update HCL Commerce, the start-up must follow a precise sequence. The Support Container is primarily used for service dependency checks, to ensure that the various Commerce applications are brought online properly, and in the expected order. In addition, it is also used by some utility jobs, such as for TLS certificate generation for secure ingress. The deployment process can take up to 10 minutes depending on the capacity of your Kubernetes worker nodes.
  You can check the status of your deployment. The following values are displayed in the Status column:
  - Running: This container is started.
  - Init: 0/1: This container is pending on another container to start.
  You can also observe the following values displayed in the Ready column:
  - 0/1: This container is started, but the application is not yet ready.
  - 1/1: The application is ready to use.
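For example, to watch the Status and Ready columns for all pods in the deployment namespace (commerce in these examples):
kubectl get pods -n commerce -w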
- Access your environments.
  By default, the Helm Chart uses the default values of tenant, env, and envtype. If you changed the default values, update the host names that are used within the following step examples.
- Check the ingress server IP address.
kubectl get ingress -n commerce
- Create the ingress server IP and hostname mapping by editing your development environment hosts file.
    #Auth environment
    Ingress_IP store.demoqaauth.mycompany.com www.demoqaauth.mycompany.com cmc.demoqaauth.mycompany.com tsapp.demoqaauth.mycompany.com search.demoqaauth.mycompany.com
    #Live environment
    Ingress_IP store.demoqalive.mycompany.com www.demoqalive.mycompany.com cmc.demoqalive.mycompany.com tsapp.demoqalive.mycompany.com searchrepeater.demoqalive.mycompany.com
    Note:
    - For a Power Linux deployment on OpenShift, OpenShift routes must be used to expose services instead of the ingress server. The Ingress_IP value in the hosts sample must be replaced by the IP address of the OpenShift service.
    - For Ambassador or Emissary ingress, the Ingress_IP is the IP address of the Ambassador or Emissary service.
    - search.demoqaauth.mycompany.com is used to expose the Search Master service.
    - searchrepeater.demoqalive.mycompany.com is used to expose the Search Repeater service within your live environment, to trigger index replication.
  - Access your environment pages and tools with the following URLs:
- An Aurora storefront: https://store.demoqaauth.mycompany.com/wcs/shop/en/auroraesite
- An Emerald storefront (The new React-based reference store): https://www.demoqaauth.mycompany.com/Emerald
- Management Center for HCL Commerce: https://cmc.demoqaauth.mycompany.com/lobtools/cmc/ManagementCenter
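For example, you can confirm that a storefront responds once the pods are ready. The -k option is used because the sample TLS certificates are not trusted by default; an HTTP 200 or 3xx response indicates that the store front end is reachable:
curl -k -I https://www.demoqaauth.mycompany.com/Emerald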
- Build your search index.
- With the Solr-based search solution
    - Trigger the Build Index job. This example uses the default spiuser, password, and master catalog ID.
      curl -X POST -u spiuser:plain_text_spiuser_password https://tsapp.demoqaauth.mycompany.com/wcs/resources/admin/index/dataImport/build?masterCatalogId=10001 -k
      A response with a jobStatusId is displayed.
    - Check the Build Index job status using the jobStatusId value that was returned.
      curl -X GET -u spiuser:plain_text_spiuser_password https://tsapp.demoqaauth.mycompany.com/wcs/resources/admin/index/dataImport/status?jobStatusId=jobStatusId -k
      A returned value of 0 indicates that the build completed successfully.
      Note:
      - The default password for the spiuser user is passw0rd for HCL Commerce 9.1.0.0 through 9.1.8.0, and QxV7uCk6RRiwvPVaa4wdD78jaHi2za8ssjneNMdu3vgqi for HCL Commerce 9.1.9.0 and greater.
      - It is essential to set your own spiuser password to secure your deployment. For more information, see Setting the spiuser password in your Docker images.
- With the Elasticsearch-based search solution
    - Trigger the Build Index job.
      curl -X POST -k -u spiuser:plain_text_spiuser_password "https://tsapp.demoqaauth.mycompany.com/wcs/resources/admin/index/dataImport/build?connectorId=auth.reindex&storeId=1"
      A response with a jobStatusId is displayed.
      Note:
      - The default password for the spiuser user is passw0rd for HCL Commerce 9.1.0.0 through 9.1.8.0, and QxV7uCk6RRiwvPVaa4wdD78jaHi2za8ssjneNMdu3vgqi for HCL Commerce 9.1.9.0 and greater.
      - It is essential to set your own spiuser password to secure your deployment. For more information, see Setting the spiuser password in your Docker images.
    - Check the Build Index job status using the jobStatusId value that was returned.
      curl -X GET -u spiuser:plain_text_spiuser_password https://tsapp.demoqaauth.mycompany.com/wcs/resources/admin/index/dataImport/status?jobStatusId=jobStatusId -k
      A returned value of 0 indicates that the build completed successfully. For more information, see Building the Elasticsearch Index.