HERO is installed using Docker.
You can install HERO on either a Windows or a Linux 64-bit operating system.
Google Chrome
Mozilla Firefox (Quantum recommended)
Docker CE (Community Edition) Engine 18.09.0 or later
Docker Compose 1.23.2 or later
The cURL command-line utility must be installed on the target Linux machines that you want to monitor from HERO (you can verify these prerequisites with the commands shown after this list)
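For example, you can quickly verify these software prerequisites from a terminal; this is a minimal check that assumes the docker, docker-compose, and curl commands are on the PATH:
docker --version           # must report Docker Engine 18.09.0 or later
docker-compose --version   # must report Docker Compose 1.23.2 or later
curl --version             # run this on each target Linux machine to be monitored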
For Windows installation with process isolation (Windows containers):
Windows 10 Build 1809+ or Windows Server 2019 Build 17763+
For Windows installation with Hyper-V (Linux containers):
Windows Desktop or Windows Server 2019 Build 17763+
Docker Hyper-V option must be enabled
Add the Python and pip command paths to the Path environment variable of Windows. For example, for a default installation:
<YourDrive>:\Users\<YourUser>\AppData\Local\Programs\Python\Python36
<YourDrive>:\Users\<YourUser>\AppData\Local\Programs\Python\Python36\Scripts
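After updating the Path variable, you can verify from a new command prompt that both commands are resolved (a minimal check; the reported versions depend on your installation):
python --version
pip --version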
Docker must be installed and configured with the following minimum requirements:
CPUs: 4
RAM: 24 GB
Swap: 1024 MB
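As a quick check of the resources actually allocated to Docker, you can query the Docker engine; this is a minimal sketch using the docker CLI, and the memory value is reported in bytes:
docker info --format 'CPUs: {{.NCPU}}  Total memory: {{.MemTotal}}'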
HERO must be installed and configured with the following minimum requirements:
CPU: 64-bit, 2+ cores
RAM: 32 GB+
Storage: 200 GB HDD
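On a Linux machine intended to host HERO, you can verify cores, RAM, and available disk space with standard commands (a minimal check, not specific to HERO):
nproc      # number of CPU cores
free -g    # installed RAM, in GB
df -h      # available disk space per file system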
HERO can monitor Workload Automation environments starting from Workload Automation version 9.1, provided that the prerequisite feature Plan data replication in the database (also known as "mirroring") is enabled. For more information about the "mirroring" feature, see Replicating plan data in the database.
HERO supports direct monitoring for the Master Domain Manager (MDM), Backup Domain Manager (BKM), and Dynamic Workload Console (DWC). Monitoring for all the remaining agents is performed through the database on the MDM.
The supported operating systems for MDM, BKM, and DWC are AIX and Linux 64 bit.
The supported databases are DB2 and Oracle, installed in your Workload Automation environment.
Once the monitoring scripts are deployed to the target workstations, they are scheduled by HERO to run on a regular basis.
The HERO server requires a connection to each DB2 or Oracle instance in the monitored environments, from which it collects throughput data and information about agent status.
The following prerequisites must be met for HERO to monitor Workload Automation environments:
The SSH server daemon must be up and running, and must allow authentication with username and password.
The user specified at workstation discovery time, that is, the owner of the WA instance, must have the following permissions:
Permission to write in their own home directory and in the <TWA_HOME> directory. This access must be possible through SSH.
Permission to read the WA registries.
Permission to create job and job stream definitions, and to submit and cancel them.
For some recovery actions, and for some monitors based on previous versions of Workload Automation, it might be necessary to run actions as "sudo". In this case, the sudo user must be authorized without specifying a password (see the example after this list).
The target workstation must be able to open an HTTPS or HTTP connection to the HERO server (typically on port 8080; the port number can be customized).
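The following is a minimal sketch of how these prerequisites can be checked on a target workstation; the user name wauser, the host name hero.example.com, and the port 8080 are hypothetical examples that you must replace with your own values:
systemctl status sshd                        # verify that the SSH daemon is running (the service may be named ssh on some distributions)
sudo -l -U wauser                            # list the commands that wauser can run with sudo
curl -k -I https://hero.example.com:8080     # verify that the workstation can reach the HERO server
An example sudoers entry that authorizes the user without a password (restrict the allowed commands according to your security policy) is:
wauser ALL=(ALL) NOPASSWD: ALL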
Starting from version 9.5, Workload Automation can be deployed in Docker containers. During the server discovery process, if containerized components are present on a server, you are requested to provide the image name of each component so that it can be retrieved.
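If you need to identify the image names of the containerized components, a quick way is to list the running containers on that server, for example:
docker ps --format 'table {{.Names}}\t{{.Image}}'   # shows each running container with its image name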
For the communication between HERO and the product servers, make sure that the following ports are open:
from HERO to the product servers:
SSH port (default port 22), used by HERO to connect to the product servers
DB2 or Oracle port (default port 50000 for DB2; 1521 for Oracle), used by HERO to collect throughput data from the Workload Automation database
from the product servers to HERO:
HTTPS on port 443 (default port) or any custom port, required for status updates
For machines connected through the WinRM protocol over HTTPS, port 5986 must be open.
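You can verify that the required ports are reachable with a simple TCP check; the host names below (wa-server.example.com and hero.example.com) are hypothetical examples:
nc -vz wa-server.example.com 22       # SSH from HERO to a product server
nc -vz wa-server.example.com 50000    # DB2 port (use your Oracle listener port instead, if applicable)
nc -vz hero.example.com 443           # from a product server back to the HERO server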
Hardware requirements depend on the number of servers on which you want to run the prediction, but note that CPUs with AVX instruction support are required (for this reason, older CPUs might not work). It is recommended to run the prediction on a different server, with specifications similar to those of the server where HERO is installed.
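On Linux, you can verify that the CPU supports AVX instructions by inspecting /proc/cpuinfo (empty output means that AVX is not supported):
grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u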
To start the training process, enough data must be available: at least 17 hours of data for the training procedure, and at least 4 hours of new data for the retraining procedure. The prediction procedure requires at least two hours of new observed data.
With these prerequisites, you can expect a prediction loss of around 2%, that is, the loss on the learning set can be minimized to around 2%.
The performance, in terms of computational time of the training (or retraining) procedure, depends heavily on the technical specifications of the machine on which it runs.
There are no minimum requirements, except for 4 GB of RAM to run TensorFlow. With 16 GB of RAM available and an Intel Core i7 processor, the estimated execution time of the training procedure (for example, on the acquired throughput data) for each single machine, on about one thousand throughput observations, is around 20 seconds per epoch. Considering that a basic configuration (see training_conf.json) uses 60 epochs, the estimated time for the training phase is about 20 minutes (60 epochs x 20 seconds per epoch).
In addition to this time, the time needed to write data to and read data from Elasticsearch should also be considered.
Reducing the number of epochs makes the training phase substantially faster, at the expense of the quality of the model learned during training.
Neural networks train much faster on a GPU. Therefore, to improve the performance of the training and retraining phases, we recommend adding a GPU that supports CUDA and cuDNN for artificial intelligence applications. TensorFlow on a GPU significantly outperforms TensorFlow on CPUs.
Standard machine RAM is enough (RAM is not as important as GPU memory).
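Assuming that the NVIDIA driver and TensorFlow 2.x are already installed, you can verify that the GPU is visible to the system and to TensorFlow with the following commands (a minimal check, not a full CUDA/cuDNN validation):
nvidia-smi                                                                            # lists the NVIDIA GPUs, driver version, and CUDA version
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"   # prints the GPUs that TensorFlow can use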