Logging in InformixHQ
This topic provides a brief tutorial on logging in InformixHQ.
The InformixHQ server and agent use the log4j2 library for logging. The InformixHQ installation provides logging configuration files with default settings: monitoring-server.log4j.xml for the InformixHQ server and monitoring-agent.log4j.xml for the InformixHQ agent. By default, these files are present in the $INFORMIXDIR/hq folder.
Users can customize the default logging behavior by modifying the monitoring-server.log4j.xml or monitoring-agent.log4j.xml file in the current HQ directory or on the classpath when starting the InformixHQ server or InformixHQ agent, respectively.
Possible customizations include the logging level (ERROR, WARN, INFO, or DEBUG), the destinations to which logging information is published (such as a database, file, console, or UNIX syslog), the log file name format, and rolling the log file based on size, time, or both.
By default, the InformixHQ server and agent log messages at the INFO level to a monitoring-server.log file and a monitoring-agent.log file, respectively. The different logging levels are explained in the sections given below.
If monitoring-server.log4j.xml or monitoring-agent.log4j.xml is not available when the InformixHQ server or agent starts, the application sets all logging configurations to defaults equivalent to the monitoring-server.log4j.xml or monitoring-agent.log4j.xml file, respectively.
Additionally, for the InformixHQ agent: if the monitoring-agent.log4j.xml file is available, logs for all agents running from the same directory are appended to one agent log file (the default name is monitoring-agent.log, or the file name configured in monitoring-agent.log4j.xml). If the monitoring-agent.log4j.xml file is NOT available, the application creates a separate log file for each agent process, named monitoring-agent_1.log, monitoring-agent_2.log, and so on, depending on the number of agent processes running. The number in the file name is the Informix server id that is added in the agent properties file.
LOGGERS
The <Logger> tags describe the different log levels, which can be set individually for the Java packages of the source code.
- FATAL: A log level which indicates that the application encountered an event or entered a state in which one of the crucial business functionalities is no longer working.
- ERROR: A log level which indicates that one or more functionalities are not working, preventing some functionalities from working correctly.
- WARN: A log level which indicates that an unexpected behaviour happened inside the application, but it is continuing its work and the key business features are operating as expected.
- INFO: A log level which indicates that an event happened; the event is purely informative and can be ignored during normal operations.
- DEBUG: A log level used for events considered to be useful during software debugging when more granular information is needed.
- TRACE: A log level describing events showing step-by-step execution of your code that can be ignored during standard operation.
- ALL: A log level which includes all logging levels.
- OFF: A log level which indicates that logging is turned off.
Summary of logging levels for the InformixHQ server and agent:
| Configured level | FATAL | ERROR | WARN | INFO | DEBUG | TRACE | ALL |
|---|---|---|---|---|---|---|---|
| OFF |  |  |  |  |  |  |  |
| FATAL | X |  |  |  |  |  |  |
| ERROR | X | X |  |  |  |  |  |
| WARN | X | X | X |  |  |  |  |
| INFO | X | X | X | X |  |  |  |
| DEBUG | X | X | X | X | X |  |  |
| TRACE | X | X | X | X | X | X |  |
| ALL | X | X | X | X | X | X | X |
If logging is set to OFF, no logs are printed to files or any other medium.
If logging is set to FATAL, ERROR, WARN, or INFO, a brief description of each event is logged, but no stack trace is printed to a file or any other medium.
If logging is set to DEBUG, TRACE, or ALL, a brief description is logged and a detailed stack trace is also printed to a file or any other medium.
Here, the medium refers to a file, STDOUT, or the console.
Example of a Logger:
<Loggers>
    <!-- The base logging level is set here -->
    <!-- You can choose from (TRACE, DEBUG, INFO, WARN, ERROR) -->
    <Root level="INFO">
        <AppenderRef ref="FILE" />
        <!-- <AppenderRef ref="CONSOLE" /> -->
        <!-- <AppenderRef ref="STDOUT" /> -->
    </Root>
</Loggers>
AppenderRef specifies the destination of the logs (FILE, CONSOLE, or STDOUT).
<!-- You can configure custom logging levels (TRACE, DEBUG, INFO, WARN, ERROR) for any Java package name -->
<Configuration>
    <Loggers>
        <Logger name="h2database" level="WARN" />
        <Logger name="com.zaxxer.hikari" level="INFO" />
        <Logger name="com.zaxxer.hikari.pool.HikariPool" level="OFF" />
        <Logger name="com.zaxxer.hikari.HikariDataSource" level="OFF" />
    </Loggers>
</Configuration>
InformixHQ uses a few external Java libraries for different functionalities. Logging levels for these libraries are defined and managed by the respective vendors.
For example, InformixHQ prints a description at the INFO level, whereas an external library might print a different level of detail at the same logging level. Hence, the logging levels of some external packages are tuned through the configuration file settings to suppress unnecessary information.
- It is recommended not to change the logging level for the external packages mentioned above. If it is changed to a different logging level, unexpected logs may appear in the log files.
- When InformixHQ fails to connect to the Informix server, connection failure logs are printed every hour. Once the connection to the Informix server is established, a success log entry is printed to the log file.
<Configuration monitorInterval="5" status="FATAL">
Log4j has the ability to automatically detect changes to the configuration file and reconfigure itself by using the attribute monitorInterval. If the monitorInterval attribute is specified on the configuration element and is set to a non-zero value, then the file will be checked the next time a log event is evaluated and/or logged and the monitorInterval has elapsed since the last check. The minimum interval is 5 seconds.
The status attribute controls the level of internal Log4j events that are logged to the console. This attribute is added to the Configuration element of the monitoring-server-example.log4j.xml and monitoring-agent-example.log4j.xml files. You can change its level if you want to get the internal Log4j events logged on the console.
Figure 1. Internal Log4j events
Similar behaviour is applicable for Agent log configurations as well.
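For reference, the monitorInterval and status attributes are set on the top-level Configuration element, alongside the Appenders and Loggers sections. The following minimal sketch shows how these pieces fit together; the File appender, the appender name FILE, and the log file path are illustrative (the examples later in this topic use a RollingFile appender instead).
<Configuration monitorInterval="5" status="FATAL">
    <Appenders>
        <!-- Illustrative File appender; not taken from the shipped configuration files -->
        <File name="FILE" fileName="logs/monitoring-server.log">
            <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
        </File>
    </Appenders>
    <Loggers>
        <Root level="INFO">
            <AppenderRef ref="FILE"/>
        </Root>
    </Loggers>
</Configuration>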
Appenders:
ConsoleAppender
The ConsoleAppender writes log events to the console (SYSTEM_OUT by default), as in the following example.
<Appenders>
<Console name="STDOUT" target="SYSTEM_OUT">
<PatternLayout pattern="%m%n"/>
</Console>
</Appenders>
<Loggers>
<Root level="INFO">
<AppenderRef ref="STDOUT"/>
</Root>
</Loggers>
RollingFile Appender
The RollingFileAppender is an OutputStreamAppender that writes to the file named in the fileName parameter and rolls the file over according to the TriggeringPolicy and the RolloverPolicy.
- Triggering Policies
- SizeBased Triggering Policy
Once the file reaches the specified size, the SizeBasedTriggeringPolicy causes a rollover. The size can be specified in bytes, with the suffix KB, MB or GB, for example 20MB. When combined with a time based triggering policy, the file pattern must contain a %i otherwise the target file will be overwritten on every rollover as the SizeBased Triggering Policy will not cause the timestamp value in the file name to change. When used without a time based triggering policy, the SizeBased Triggering Policy will cause the timestamp value to change.
For illustration purposes, the size limit is set to 1 KB in the following configuration snippet; a new log file is generated after the configured size is reached.
<Appenders>
    <RollingFile name="FILE" fileName="logs/monitoring-server.log"
                 filePattern="logs/$${date:yyyy-MM}/informixhq-server-%d{MM-dd-yyyy}-%i.log.gz">
        <PatternLayout>
            <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
        </PatternLayout>
        <Policies>
            <SizeBasedTriggeringPolicy size="1 KB" />
        </Policies>
    </RollingFile>
</Appenders>
<Loggers>
    <Root level="INFO">
        <AppenderRef ref="FILE" />
    </Root>
</Loggers>
- TimeBased Triggering Policy
The TimeBasedTriggeringPolicy causes a rollover once the date/time pattern no longer applies to the active file. This policy accepts an interval attribute, which indicates how frequently the rollover should occur based on the time pattern, and a modulate boolean attribute. The default value of interval is 1. The following snippet shows the configuration to create a new log file every day.
Note: To create a new log file every day, configure the filePattern parameter with {MM-dd-yyyy}; to create a new log file every hour, configure the filePattern parameter with {MM-dd-yyyy-HH}.
Parameters of TimeBasedTriggeringPolicy:
| Parameter Name | Type | Description |
|---|---|---|
| interval | integer | How often a rollover should occur based on the most specific time unit in the date pattern. For example, with a date pattern that has hours as the most specific item and an increment of 4, rollovers would occur every 4 hours. The default value is 1. |
| modulate | boolean | Indicates whether the interval should be adjusted to cause the next rollover to occur on the interval boundary. For example, if the item is hours, the current hour is 3 am, and the interval is 4, then the first rollover will occur at 4 am and the next ones will occur at 8 am, noon, 4 pm, and so on. |
<Appenders>
    <RollingFile name="FILE" fileName="logs/monitoring-server.log"
                 filePattern="logs/$${date:yyyy-MM}/informixhq-server-%d{MM-dd-yyyy}-%i.log.gz">
        <PatternLayout>
            <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
        </PatternLayout>
        <Policies>
            <TimeBasedTriggeringPolicy interval="1" modulate="true"/>
        </Policies>
    </RollingFile>
</Appenders>
<Loggers>
    <Root level="INFO">
        <AppenderRef ref="FILE" />
    </Root>
</Loggers>
- Composite Triggering Policy
The CompositeTriggeringPolicy combines multiple triggering policies and returns true if any of the configured policies return true. The CompositeTriggeringPolicy is configured simply by wrapping other policies in a Policies element.
For example, the following XML fragment defines policies that roll the log over when the log size reaches twenty megabytes and when the current date no longer matches the log's start date.
<Appenders>
    <RollingFile name="FILE" fileName="logs/monitoring-server.log"
                 filePattern="logs/$${date:yyyy-MM}/informixhq-server-%d{MM-dd-yyyy}-%i.log.gz">
        <PatternLayout>
            <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
        </PatternLayout>
        <Policies>
            <SizeBasedTriggeringPolicy size="20 MB" />
            <TimeBasedTriggeringPolicy />
        </Policies>
    </RollingFile>
</Appenders>
<Loggers>
    <Root level="INFO">
        <AppenderRef ref="FILE"/>
    </Root>
</Loggers>
- Default Rollover Policy
The default rollover strategy uses both the date/time pattern and an integer specified in the filePattern attribute of the RollingFileAppender. If the pattern contains an integer, it is incremented on every rollover. If the date/time pattern is present, it is replaced with the current date and time values. If the file pattern ends with ".gz", ".zip", ".bz2", ".deflate", ".pack200", or ".xz", the resulting archive is compressed using the compression scheme that matches the suffix. The default rollover policy needs at least one triggering policy configured.
This example shows a rollover strategy that will keep up to 20 files before removing them.
<Appenders>
    <RollingFile name="FILE" fileName="logs/monitoring-server.log"
                 filePattern="logs/$${date:yyyy-MM}/informixhq-server-%d{MM-dd-yyyy}-%i.log.gz">
        <PatternLayout>
            <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
        </PatternLayout>
        <Policies>
            <SizeBasedTriggeringPolicy size="1 KB" />
        </Policies>
        <DefaultRolloverStrategy max="20" />
    </RollingFile>
</Appenders>
<Loggers>
    <Root level="INFO">
        <AppenderRef ref="FILE" />
    </Root>
</Loggers>
- You can specify the pattern layout in the format shown in the examples above; an illustrative pattern is sketched after this list. For more information, see the Log4j2 pattern layout documentation.
- There are many more appenders available in the Log4j2 framework; a syslog example is sketched below. For more information, see the Log4j2 documentation.
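As an illustration of a pattern layout, the following sketch uses common Log4j2 conversion specifiers; this particular conversion pattern is an example and is not the pattern shipped with InformixHQ.
<PatternLayout>
    <!-- %d = date/time, %-5p = log level padded to 5 characters, %c{1.} = abbreviated logger name,
         %t = thread name, %m = log message, %n = line separator -->
    <Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p %c{1.} [%t] %m%n</Pattern>
</PatternLayout>
As an example of another appender, log events could be published to a UNIX syslog daemon, one of the destinations mentioned at the beginning of this topic. The following sketch assumes a local syslog daemon; the appender name, host, port, protocol, and facility values are illustrative assumptions, not values shipped with InformixHQ.
<Appenders>
    <!-- Hypothetical Syslog appender; host, port, protocol, and facility are example values -->
    <Syslog name="SYSLOG" host="localhost" port="514" protocol="UDP" facility="LOCAL0"/>
</Appenders>
<Loggers>
    <Root level="INFO">
        <AppenderRef ref="SYSLOG"/>
    </Root>
</Loggers>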