Installing IBM Cloud Private
Install and configure IBM Cloud Private to manage the deployment of the application images for Component Pack.
Before you begin
- Your Connections deployment is up and running. Component Pack for IBM Connections requires IBM Connections 6.0 CR1 or greater.
- If installing Orient Me, you must install the IBM Connections 6.0 CR1 iFix 6.0.0.0_CR1-IC-Common-IFLO93624 (APAR LO93624). You can download it from Fix Central.
- If installing Elasticsearch Metrics, you must install the IBM Connections 6.0 CR1 iFix 6.0.0.0-IC-Multi-IFLO93386 (APAR LO93386). You can download it from Fix Central.
- If you are deploying a single node test system, follow the minimum requirements specified in Component Pack installation roadmap and requirements.
- If you are deploying a larger clustered ICp deployment (HA), you have the minimum number of servers, CPUs, memory, and disk space outlined in Optional: Installing the Component Pack for IBM Connections.
- If you are deploying a high availability (HA) cluster, follow the requirements for an HA solution in the topic ICp High Availability (HA) Requirements and Configuration Prerequisites.
- The servers are all running one of the operating systems outlined in Optional: Installing the Component Pack for IBM Connections
- The servers can all access each other and the Connections deployment.
- You have a user with sudo access on all the servers, including the Connections HTTP server.
- If your servers run Red Hat Enterprise Linux, you have a Red Hat subscription so that you can access and download yum updates on all ICp servers.
- All ICp servers have internet access to download Docker and Kubernetes dependencies, or have access to a proxy machine that has internet access.
Procedure
- Download the latest Component Pack zip from IBM Fix Central and extract it to a folder of your choosing on the "boot" node.
This guide will use hybrid as the name of the deployment folder. For example:
sudo unzip IC-ComponentPack-6.0.0.x.zip -d hybrid
The IC-ComponentPack-6.0.0.x.zip file contains all of the files required to deploy ICp and to deploy the microservices onto ICp.
- On the designated boot server, move the deployCfC folder to the /opt/ directory to deploy ICp.
For example:
sudo mv -f hybrid/microservices/hybridcloud/deployCfC /opt/
- Give the deployCfC folder the required permissions to run on your system:
sudo chmod -R 755 /opt/deployCfC
- As you plan your installation scenario, run the deployment command with the help parameter so that you can view all available options:
sudo bash /opt/deployCfC/deployCfC.sh --help
Required arguments:
- --boot: (Required) FQHN of the boot node. Must be lowercase.
- --master_list: (Required) Comma-separated list of FQHNs of the master nodes for HA, or the FQHN of one master node for non-HA. Must be lowercase.
- --worker_list: (Required) Comma-separated list of FQHNs of the worker nodes. Must be lowercase.
- --proxy_list: (Required) Comma-separated list of FQHNs of the proxy nodes for HA, or the FQHN of one proxy node for non-HA. Must be lowercase.
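Because every one of these arguments requires lowercase FQHNs, it can be useful to check your host lists before running the installer. The following is a minimal sketch of such a check; the `validate_fqhn_list` function is a hypothetical helper, not part of deployCfC, and the host names are placeholders.

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight helper: verify that every host in a comma-separated
# list is a lowercase fully qualified host name, as deployCfC.sh requires for
# --boot, --master_list, --worker_list, and --proxy_list.
validate_fqhn_list() {
  local list="$1" host
  IFS=',' read -ra hosts <<< "$list"
  for host in "${hosts[@]}"; do
    # Require lowercase letters, digits, and hyphens, with at least one dot
    if [[ ! "$host" =~ ^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)+$ ]]; then
      echo "invalid: $host"
      return 1
    fi
  done
  echo "ok"
}

validate_fqhn_list "workerserver1.example.com,workerserver2.example.com"
validate_fqhn_list "WorkerServer1.example.com" || true   # uppercase is rejected
```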
Required arguments for high availability support:
- --proxy_HA_vip: The floating IP address for proxy servers.
- --master_HA_vip: The floating IP address for master servers.
- --proxy_HA_iface: The network interface for proxy servers. Tip: Example interfaces are 'eth0' and 'ens192'; your network interface may be different.
- --master_HA_iface: The network interface for master servers.
- --docker_storage_block_device: The name of an unconfigured block device that has been set up on all ICp servers. It is used as the storage location for Docker, which uses the direct-lvm devicemapper storage type. For more information, see the Docker documentation on the device mapper driver. Note: This parameter is required for HA deployments, but can optionally be used in non-HA deployments too.
- --cfc_ee_url: The location of the installation files for IBM Cloud Private Enterprise Edition, which is required for HA deployments. Supports a URL or a directory location on the local machine.
Optional arguments for high availability support:
- --master_HA_mount_registry: Usage: --master_HA_mount_registry=<NFS-server-FQDN>:/CFC_IMAGE_REPO. Creates the directory /CFC_IMAGE_REPO on the <NFS-server-FQDN> and exports it as an NFS share. The directory /var/lib/registry is then created on all of the master ICp servers, which consume the NFS share through an entry added to the /etc/fstab file. The image repository is mandatory when deploying HA, but this argument is optional in case you would like to set the repository up manually. See ICp high-availability (HA) requirements and configuration prerequisites for the manual steps.
- --master_HA_mount_audit: Usage: --master_HA_mount_audit=<NFS-server-FQDN>:/CFC_AUDIT. Creates the directory /CFC_AUDIT on the <NFS-server-FQDN> and exports it as an NFS share. The directory /var/lib/icp/audit is then created on all of the master ICp servers, which consume the NFS share through an entry added to the /etc/fstab file. The audit repository is mandatory when deploying HA, but this argument is optional in case you would like to set the repository up manually. See ICp high-availability (HA) requirements and configuration prerequisites for the manual steps.
Additional argument if deploying dedicated infrastructure for Elasticsearch:
- --infra_worker_list: Comma-separated list of FQHNs of the infrastructure worker nodes. Must be lowercase. These will be dedicated nodes for Elasticsearch pods.
Commonly used optional arguments to reduce interactive input during the install. Note: If you set secrets (passwords), make sure you note them, as you will need them in later administrative or configuration tasks.
- --use_docker_ee: Deploys Docker Enterprise Edition (EE) rather than the default Docker Community Edition (CE). Provide this argument with your Docker EE subscription URL.
- --set_redis_secret: Password used to connect and test that events are flowing to Redis; must be at least 6 characters in length.
- --set_search_secret: Set to the value of the Search secret; must be at least 6 characters in length.
- --set_solr_secret: Set to the value of the Solr secret; must be at least 6 characters in length.
- --set_elasticsearch_ca_password: Sets the Elasticsearch CA password; must be at least 6 characters in length.
- --set_elasticsearch_key_password: Sets the Elasticsearch key password; must be at least 6 characters in length.
- --set_ic_host: Set this to the FQDN of your Connections server. This value should be the "front door" entry point that clients use to connect to Connections. If you have a load balancer, a standalone IHS, or a reverse proxy in front of Connections, enter the FQDN of that server.
- --internal_ic: Set this value to the FQDN or IP address of the IHS that sits in front of Connections. This value is used for server-to-server communication. You only need to set it if it differs from the value in --set_ic_host.
- --set_ic_admin_user: Connections admin user.
- --set_ic_admin_password: Connections admin password.
- --set_krb5_secret: Path to the Kerberos secret file for SPNEGO deployments with Mail and Calendar.
Optional arguments used to manage security-related steps:
- --pregenerated_private_key_file: Use your own private key. Usage: --pregenerated_private_key_file=<full_path/key_name>
- --non_root_user: Install with a user other than the root user (must have sudo access).
- --non_root_passwd: Non-root user password if using the --non_root_user flag.
- --root_login_passwd: Root password that will be used on the ICp servers. Note: As a Connections administrator, you may need to ask a Linux administrator to enter the root password on your behalf.
Optional arguments specific to operating system modification:
- --ext_proxy_url: Use this flag if the computers in your deployment do not have direct Internet access. Set the value to an http or https proxy URL that your machines can reach.
  Example for a non-authenticated proxy:
  --ext_proxy_url=https://host:port
  Example for an authenticated proxy:
  --ext_proxy_url=https://username:password@host:port
- --configure_firewall: Use this flag if you want to let the installation open the required ports: 4001, 8001, 8443, and 8500.
Less commonly used optional arguments:
- --remove_worker: Removes worker nodes from an existing deployment. Only one worker node can be removed at a time. Specify the FQHN of the worker node. For more information, see Adding or removing worker nodes.
- --add_worker: Adds worker nodes to an existing deployment. Only one worker node can be added at a time. Specify the FQHN of the worker node. For more information, see Adding or removing worker nodes.
- --add_infra_worker: Adds infrastructure worker nodes to an existing deployment. These will be dedicated nodes for Elasticsearch pods. Only one infrastructure worker node can be added at a time. Specify the FQHN of the infrastructure worker node. For more information, see Adding or removing worker nodes.
- --remove_infra_worker: Removes infrastructure worker nodes from an existing deployment. Only one infrastructure worker node can be removed at a time. Specify the FQHN of the infrastructure worker node. For more information, see Adding or removing worker nodes.
- --ignore_os_requirements: (Not recommended) Ignores operating system requirements.
- --uninstall: Accepts these arguments: clean, cleaner, cleanest. During an uninstall, clean renames the files that were used by the install, cleaner renames some files and deletes others, and cleanest deletes all files. All arguments uninstall ICp and do not preserve all data: configuration data will be lost, and data in persistent volumes may be preserved but cannot be guaranteed. Note: We recommend that you back up the data contained in persistent volumes before performing any uninstall.
- --upgrade: Use this flag to upgrade ICp.
- --skip_docker_deployment: Use this flag if you want to skip the automatic deployment of Docker. You should not use this flag unless it is really required; let the installer deploy Docker, as it installs the supported version of Docker.
- Run the installation script, deployCfC.sh, as a user with sudo permissions.
Specify the host names of the servers and the roles they will take by providing the FQHN for each node. If you are using more than one worker node, infrastructure worker node, or proxy node, separate the host names with commas. For example, this scenario specifies three worker nodes and three infrastructure worker nodes in addition to the boot, master, and proxy servers. In these examples the master node is co-located with the boot node, so the master node definition uses the boot node's host name; the master can also be placed on a separate server.
Note: Be prepared to respond to various prompts during the running of the script.
For example, for a non-HA deployment with co-located boot and master:
sudo bash /opt/deployCfC/deployCfC.sh \
  --boot=bootserver.example.com \
  --master_list=bootserver.example.com \
  --worker_list=workerserver1.example.com,workerserver2.example.com,workerserver3.example.com \
  --infra_worker_list=infraworkerserver1.example.com,infraworkerserver2.example.com,infraworkerserver3.example.com \
  --proxy_list=proxyserver1.example.com
For an HA deployment with co-located boot and master, and co-located proxy and worker nodes:
sudo bash /opt/deployCfC/deployCfC.sh \
  --boot=bootserver.example.com \
  --master_list=bootserver.example.com,masterserver2.example.com,masterserver3.example.com \
  --worker_list=workerserver1.example.com,workerserver2.example.com,workerserver3.example.com \
  --infra_worker_list=infraworkerserver1.example.com,infraworkerserver2.example.com,infraworkerserver3.example.com \
  --proxy_list=workerserver1.example.com,workerserver2.example.com,workerserver3.example.com \
  --proxy_HA_vip=9.x.x.y \
  --master_HA_vip=9.x.x.n \
  --master_HA_iface=<net iface> \
  --proxy_HA_iface=<net iface> \
  --docker_storage_block_device=/dev/XXX \
  --cfc_ee_url=http://cfc_ee_url.example.com/files
Secrets are used to secure microservice-to-microservice communication. The example below shows how to set them non-interactively. If you don't provide these parameters, the install will prompt for them.
sudo bash /opt/deployCfC/deployCfC.sh \
  --boot=bootserver.example.com \
  --master_list=bootserver.example.com \
  --worker_list=workerserver1.example.com,workerserver2.example.com,workerserver3.example.com \
  --infra_worker_list=infraworkerserver1.example.com,infraworkerserver2.example.com,infraworkerserver3.example.com \
  --proxy_list=proxyserver1.example.com \
  --set_redis_secret=<redis_password> \
  --set_search_secret=<search_password> \
  --set_solr_secret=<solr_password> \
  --set_elasticsearch_ca_password=<elasticsearch_ca_password> \
  --set_elasticsearch_key_password=<elasticsearch_key_password>
Component Pack needs to know the hostname and credentials of the Connections on-premises server. The example below shows how to set them non-interactively. If you don't provide these parameters, the install will prompt for them.
sudo bash /opt/deployCfC/deployCfC.sh \
  --boot=bootserver.example.com \
  --master_list=bootserver.example.com \
  --worker_list=workerserver1.example.com,workerserver2.example.com,workerserver3.example.com \
  --infra_worker_list=infraworkerserver1.example.com,infraworkerserver2.example.com,infraworkerserver3.example.com \
  --proxy_list=proxyserver1.example.com \
  --set_ic_host=connections-server.example.com \
  --set_ic_admin_user=connections_admin_user \
  --set_ic_admin_password=<connections_admin_password>
The Component Pack installation package deploys Docker Community Edition by default. If you have a Docker Enterprise Edition subscription, you can deploy Docker EE instead. This option is the only way to deploy Docker EE rather than Docker CE; the install will not prompt for this information. For example:
--use_docker_ee=https://storebits.docker.com/ee/rhel/sub-XXXXX
If you are using SPNEGO with Mail and Calendar, specify the path to the file containing the Kerberos secret. For example:
--set_krb5_secret=/root/tmp/krb5keytab.yml
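Because the full invocation can grow long once optional arguments are added, one approach is to assemble the command in shell variables and review it before running it. The sketch below is illustrative only; the host names are placeholders for your own environment, and it simply echoes the composed command rather than executing it.

```shell
#!/usr/bin/env bash
# Illustrative sketch: compose the deployCfC.sh invocation from variables so
# the complete command can be reviewed before it is run. Placeholder hosts.
BOOT=bootserver.example.com
MASTERS=$BOOT            # non-HA example: master co-located with boot
WORKERS=workerserver1.example.com,workerserver2.example.com,workerserver3.example.com
PROXIES=proxyserver1.example.com

CMD="sudo bash /opt/deployCfC/deployCfC.sh"
CMD="$CMD --boot=$BOOT"
CMD="$CMD --master_list=$MASTERS"
CMD="$CMD --worker_list=$WORKERS"
CMD="$CMD --proxy_list=$PROXIES"

echo "$CMD"   # review the command, then run it with: eval "$CMD"
```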
- When the installation completes successfully, it outputs a URL (including the port) plus an administrator user name and password. Open the URL in a browser and log in with the administrator credentials. You should see the IBM Cloud Private dashboard.
The Component Pack installation package deploys Kibana by default. Kibana allows you to visually represent data from Elasticsearch as histograms, line graphs, pie charts, sunbursts, and more. You can access Kibana at http://master_host_name:5601. For more information, see Using the Kibana data visualization plug-in.
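As a quick sanity check after the install, you can build the console and Kibana URLs from your master node's host name. In this sketch the host name is a placeholder, port 5601 for Kibana comes from this topic, and port 8443 for the ICp console is an assumption based on the firewall ports listed above; verify both against the URL printed by the installer.

```shell
#!/usr/bin/env bash
# Sketch: derive the post-install URLs from the master node FQHN.
# MASTER is a placeholder; 8443 (console) is an assumed port.
MASTER=masterserver.example.com

DASHBOARD_URL="https://${MASTER}:8443"
KIBANA_URL="http://${MASTER}:5601"

echo "ICp dashboard: $DASHBOARD_URL"
echo "Kibana:        $KIBANA_URL"
# To probe reachability from a browser-less shell, something like:
#   curl -k -s -o /dev/null -w '%{http_code}\n' "$DASHBOARD_URL"
```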