Starting production
About this task
- These steps are performed on the master domain manager immediately after successfully installing the product on the systems where you want to perform your scheduling activities.
- The user ID used to perform the operations is the same as the one used for installing the product.
If you are not familiar with HCL Workload Automation, you can follow the non-optional steps to define a limited number of scheduling objects, and add more as you become familiar with the product. You might start, for example, with two or three of your most frequent applications, defining scheduling objects to meet their requirements only.
Alternatively, you can use the Dynamic Workload Console to perform both the modeling and the operational tasks. Refer to the corresponding product documentation for more information.
- Set up the HCL Workload Automation environment variables
Run one of the following scripts in a system shell to set the PATH and TWS_TISDIR variables:
- . ./TWS_home/tws_env.sh for Bourne and Korn shells in UNIX®
- ./TWS_home/tws_env.csh for C shells in UNIX®
- TWS_home\tws_env.cmd in Windows®
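For example, on a UNIX® master domain manager installed under /opt/wa/TWS (an illustrative path), you would source the script in a Korn shell before using the command-line programs:
. /opt/wa/TWS/tws_env.sh
After the script runs, composer, conman, and the other command-line utilities are available in the PATH of that shell session.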
- Connect to the HCL Workload Automation database
You can use the following syntax to connect to the master domain manager as TWS_user:
composer -user <TWS_user> -password <TWS_user_password>
where TWS_user is the user ID you specified at installation time.
Note: If you want to perform this step and the following ones from a system other than the master domain manager, you must specify the connection parameters when starting composer, as described in Setting up options for using the user interfaces.
- Optionally add to the database the definitions that describe the topology of your scheduling environment in terms of:
- Domains
Use this step if you want to organize the workstations in your environment into a hierarchical tree of domains. Using multiple domains decreases the network traffic by reducing the communications between the master domain manager and the other workstations. For additional information, refer to Domain definition.
- Workstations
Define a workstation for each machine belonging to your scheduling environment, with the exception of the master domain manager, which is automatically defined in the database during the HCL Workload Automation installation. For additional information, refer to Workstation definition.
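For example, the following is a minimal sketch of a domain definition and of a fault-tolerant agent workstation definition that you could load with composer. The domain, workstation, host name, and port number are illustrative and must be adapted to your environment:
domain STOREDOM
 description "Domain for the store branch"
 parent MASTERDM
end

cpuname STORE_FTA1
 description "Fault-tolerant agent in the store branch"
 os UNIX
 node fta1.example.com tcpaddr 31111
 domain STOREDOM
 for maestro
  type fta
  autolink on
  fullstatus off
end
You can save such definitions in a file and load them with the composer add file_name command.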
- Optionally define the users allowed to run jobs on Windows® workstations
Define any user allowed to run jobs using HCL Workload Automation by specifying user name and password. For additional information, refer to User definition.
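For example, a minimal sketch of a user definition; the workstation name, user name, and password are illustrative:
username STORE_WIN1#jdoe
 password "secret"
end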
- Optionally define calendars
Calendars allow you to determine if and when a job or a job stream has to run. You can use them to include or exclude days and times for processing. Calendars are not strictly required to define scheduling days for the job streams (simple or rule run cycles may be used as well); their main goal is to define global sets of dates that can be reused in multiple job streams. For additional information, refer to Calendar definition.
- Optionally define parameters, prompts, and resources
For additional information refer to Variable and parameter definition, Prompt definition, and Resource definition.
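For example, minimal sketches of a calendar, a prompt, and a resource definition; the names, dates, text, and quantities are illustrative:
$calendar
HOLIDAYS "Public holidays"
 01/01/2025 05/26/2025 12/25/2025

$prompt
CONFLOAD "Is the sales data ready to be loaded? Reply YES to continue."

$resource
STORE_FTA1#TAPES 2 "Tape drives available on workstation STORE_FTA1"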
- Define jobs and job streams
For additional information refer to Job, and to Job stream definition.
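For example, minimal sketches of a job and of a job stream that runs it every day at 6:00 a.m.; the workstation, user, path, and object names are illustrative:
$jobs
STORE_FTA1#DAILY_EXTRACT
 scriptname "/opt/scripts/daily_extract.sh"
 streamlogon twsuser
 description "Extract the daily sales data"
 recovery stop

schedule STORE_FTA1#DAILY_SALES
 description "Daily sales processing"
 on runcycle DAILY "FREQ=DAILY;INTERVAL=1"
 at 0600
 :
 DAILY_EXTRACT
end
You can save the definitions in a file and load them with the composer add file_name command.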
- Optionally define restrictions and settings to control when jobs and job streams run.
You can define dependencies for jobs and job streams. There can be up to 40 dependencies for a job stream. If you need to define more than 40 dependencies, you can group them in a join dependency. In this case, the join is used simply as a container of standard dependencies, and therefore any standard dependencies in it that are not met are processed as usual and do not cause the join dependency to be considered as suppressed. For more information about join dependencies, see Joining or combining conditional dependencies and join. These restrictions and settings can be:
- Resource dependencies
- File dependencies
- Job and job stream follow dependencies, both on successful completion of jobs and job streams and on satisfaction of specific conditions by jobs and job streams
- Prompt dependencies
- Run cycles
- Time constraints
- Limit
- Priority
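For example, a sketch of a job stream that combines several of these restrictions; the workstation, job, file, resource, and prompt names are illustrative and assume that the LOAD_DB and BUILD_REPORT jobs, and the CONFLOAD prompt and TAPES resource shown earlier, are already defined in the database:
schedule STORE_FTA1#NIGHTLY_LOAD
 on runcycle DAILY "FREQ=DAILY;INTERVAL=1"
 at 2200
 until 0600 + 1 day
 limit 5
 priority 20
 :
 LOAD_DB
  opens "/data/incoming/sales.csv"
  needs 1 STORE_FTA1#TAPES
  prompt CONFLOAD
 BUILD_REPORT
  follows LOAD_DB
end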
- Automate the plan extension at the end of the current production term
Add the final job stream to the database to perform automatic production plan extension at the end of each current production term by running the following command:
add Sfinal
For additional information, refer to Automating production plan processing.
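For example, assuming the Sfinal file shipped with the product is in the TWS_home directory, you can load it with a single composer invocation run from that directory:
composer -user <TWS_user> -password <TWS_user_password> "add Sfinal"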
- Generate the plan
Run the JnextPlan command to generate the production plan. This command starts the processing of the scheduling information stored in the database and creates the production plan for the time frame specified in the JnextPlan command. The default time frame is 24 hours. If you automated the plan generation as described in the previous step, you only need to run the JnextPlan command the first time.
After generating the plan, use the limit cpu command to allow job execution on a workstation; see the section limit cpu for more details. If you want to modify anything while the production plan is already in process, use the conman program. While the production plan is processing across the network, you can still continue to define or modify jobs and job streams in the database. Consider, however, that these modifications are only used if you submit the modified jobs or job streams, using the command sbj for jobs or sbs for job streams, on a workstation which has already received the plan, or after a new production plan is generated using JnextPlan. See Managing objects in the plan - conman for more details about the conman program and the operations you can perform on the production plan in process.
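For example, after the first plan generation you might raise the job limit on a workstation so that its jobs can be launched, and then submit a job stream ad hoc; the workstation and job stream names are illustrative and refer to the sketches above:
JnextPlan
conman "lc STORE_FTA1;10"
conman "sbs STORE_FTA1#DAILY_SALES"
where lc (limit cpu) sets the number of jobs that can run concurrently on the workstation, and sbs submits the job stream into the current plan.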