System Requirements

HERO is installed using Docker.

You can install HERO on either a Windows or a Linux 64-bit operating system.

 

Supported browsers

 

Software requirements

Python 3.6 must be installed. Make sure that the following directories are added to the PATH system variable:

<YourDrive>:\Users\<YourUser>\AppData\Local\Programs\Python\Python36

<YourDrive>:\Users\<YourUser>\AppData\Local\Programs\Python\Python36\Scripts
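Assuming these directories refer to a local Python 3.6 installation that must be reachable through the PATH variable, the following sketch verifies that a suitable interpreter is found (the function name is illustrative, not part of HERO):

```python
import shutil
import subprocess

def python36_available():
    """Report whether a Python >= 3.6 interpreter is reachable on the PATH.

    Returns (ok, banner); banner looks like "Python 3.6.8".
    """
    exe = shutil.which("python3") or shutil.which("python")
    if exe is None:
        return False, ""
    proc = subprocess.run([exe, "--version"], capture_output=True, text=True)
    # Older interpreters print the version banner to stderr instead of stdout.
    banner = (proc.stdout or proc.stderr).strip()
    major, minor = (int(x) for x in banner.split()[1].split(".")[:2])
    return (major, minor) >= (3, 6), banner

ok, banner = python36_available()
print(ok, banner)
```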

 

Hardware requirements

Docker must be installed and configured with the following minimum requirements:

 

HERO must be installed and configured with the following minimum requirements:

 

Workload Automation Requirements

HERO can monitor Workload Automation environments starting from Workload Automation version 9.1, in which the prerequisite feature Plan data replication in the database (also known as "mirroring") is enabled. For more information about the "mirroring" feature, see Replicating plan data in the database.

 

HERO supports direct monitoring for the Master Domain Manager (MDM), Backup Domain Manager (BKM), and Dynamic Workload Console (DWC). All the remaining agents are monitored through the database on the MDM.

 

The supported operating systems for MDM, BKM, and DWC are AIX and Linux 64-bit.

 

The supported databases are DB2 and Oracle, installed in your Workload Automation environment.

 
Prerequisites for connecting the monitored environments

Once the monitoring scripts are deployed to the target workstations, HERO schedules them to run on a regular basis.

 

The HERO server requires a connection to each DB2 or Oracle instance in the monitored environments from which to collect throughput data and information about the agent status.

 

The following prerequisites must be met for HERO to monitor Workload Automation environments:

 

Starting from version 9.5, Workload Automation can be deployed on Docker containers. During the server discovery process, if containerized components are present on a server, you are prompted to provide the image name of each component to be retrieved.
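For example, before you provide an image name during server discovery, you can check whether that image is present in the local Docker cache. This is a sketch that assumes the docker CLI is installed on the server (the function name is illustrative):

```python
import subprocess

def image_available(image):
    """Return True if the given image exists in the local Docker image cache.

    Returns False when the docker CLI is not installed or the image is absent.
    """
    try:
        proc = subprocess.run(
            ["docker", "image", "inspect", image],
            capture_output=True, text=True,
        )
    except FileNotFoundError:  # docker CLI not on PATH
        return False
    return proc.returncode == 0
```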

 

Ports to be opened for the communication between HERO and the product servers

For the communication between HERO and the product servers, make sure that ports are open as follows:
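A quick reachability test can confirm that a required port is open between HERO and a product server. The sketch below only checks TCP connectivity; the actual host names and port numbers come from the list in this section and from your own environment:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Hypothetical example; substitute the real server and port from the table:
print(port_open("localhost", 22, timeout=1.0))
```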

 

System requirements and considerations for training and prediction

Hardware requirements depend on the number of servers on which you want to run the prediction. Note that CPUs with AVX instruction support are required, so older CPUs might not work. It is recommended to run the prediction on a different server with specifications similar to those of the server where HERO is installed.
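On Linux, you can verify the AVX requirement from the CPU flags before attempting an installation. A minimal sketch (Linux-specific; it reads /proc/cpuinfo and returns False elsewhere):

```python
def cpu_supports_avx(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU flags in /proc/cpuinfo include avx (Linux only)."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    # Flags are space-separated tokens, e.g. "... sse4_2 avx ..."
                    return "avx" in line.split()
    except OSError:
        pass
    return False

print(cpu_supports_avx())
```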

To start the training process, enough data must be available: at least 17 hours of data for the training procedure, and at least 4 hours of new data for the retraining procedure. The prediction procedure requires at least two hours of newly observed data.
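The data thresholds above can be summarized as a simple check (the names are illustrative, not part of HERO):

```python
# Minimum hours of (new) observed data required by each procedure, per the text.
MIN_HOURS = {"training": 17, "retraining": 4, "prediction": 2}

def has_enough_data(procedure, observed_hours):
    """Return True if enough observed data is available for the procedure."""
    return observed_hours >= MIN_HOURS[procedure]

print(has_enough_data("training", 20))    # 20 h of data, 17 h required
print(has_enough_data("retraining", 3))   # only 3 h of new data, 4 h required
```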

With these prerequisites, the loss in prediction accuracy is around 2%; that is, the loss on the learning set is minimized to around 2%.

The performance, in terms of computational time, of the training (or retraining) procedure is highly dependent on the technical specifications of the machine on which it runs.

There are no minimum requirements other than 4 GB of RAM to run TensorFlow. With 16 GB of RAM available and an Intel Core i7 processor, the estimated execution time of the training procedure (for example, on the acquired throughput data) for each single machine, on a quantity of about one thousand throughput observations, is around 20 seconds per epoch. Because a basic configuration (see training_conf.json) runs 60 epochs, the estimated time for the training phase is about 20 minutes.
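The overall estimate is simply the per-epoch time multiplied by the number of epochs; the arithmetic can be sketched as:

```python
def estimated_training_minutes(epochs, seconds_per_epoch):
    """Rough training-time estimate: epochs * time per epoch, in minutes."""
    return epochs * seconds_per_epoch / 60

# 60 epochs (the basic configuration) at ~20 seconds per epoch:
print(estimated_training_minutes(60, 20))  # 20.0 minutes
```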

In addition to this time, the swap-in and swap-out time for writing to and reading from Elasticsearch should also be considered.

Reducing the number of epochs makes the training phase substantially faster, at the expense of the quality of the model learned during that phase.

Neural networks train much faster on a GPU. Therefore, to improve the performance of the training and retraining phases, we recommend an additional GPU that supports CUDA and cuDNN for artificial intelligence applications. TensorFlow on a GPU significantly outperforms TensorFlow on CPUs.
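A lightweight way to check for a CUDA-capable GPU, without importing TensorFlow, is to probe the NVIDIA driver tooling. This is only a heuristic sketch; TensorFlow itself reports usable GPUs through tf.config.list_physical_devices("GPU"):

```python
import shutil
import subprocess

def cuda_gpu_visible():
    """Heuristic GPU check: True if the nvidia-smi tool is present and responds.

    nvidia-smi ships with the NVIDIA driver; its absence or failure suggests
    no CUDA-capable GPU is available to TensorFlow.
    """
    if shutil.which("nvidia-smi") is None:
        return False
    return subprocess.run(["nvidia-smi"], capture_output=True).returncode == 0

print(cuda_gpu_visible())
```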

Standard machine RAM is sufficient (RAM is not as important as GPU memory).