We use SLF4J as the API for logging. The recommended implementation is Logback, for which we provide additional value such as configuration templates and an appender that prevents log forging and reformats stack-traces for operational optimizations.
In the pom.xml of your application, add this dependency (which also adds transitive dependencies on SLF4J and Logback; the version is typically managed via the devon4j BOM, so it can be omitted here):
<dependency>
  <groupId>com.devonfw.java</groupId>
  <artifactId>devon4j-logging</artifactId>
</dependency>
The configuration file is logback.xml and has to be placed in the directory src/main/resources of your main application. For details, consult the logback configuration manual. devon4j provides a production-ready configuration here. Simply copy this configuration into your application in order to benefit from the provided operational and security aspects. We do not include the configuration in the devon4j-logging module in order to give you the freedom of customization (e.g. to tune log levels for components, integrated products, and libraries of your application).
The provided logback.xml is configured to use variables defined in the config/application.properties file. In our example, the log file path points to ../logs/ in order to write the logs to the Tomcat log directory when Tomcat is started from its bin folder. Change it according to your needs:
log.dir=../logs/
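For illustration, here is a minimal sketch of how such a variable could be wired up in logback.xml; the property import and the appender details are assumptions made for this sketch, so consult the provided configuration for the actual setup:

<configuration>
  <!-- sketch: import variables such as log.dir from the application properties on the classpath -->
  <property resource="config/application.properties" />
  <appender name="INFO_FILE" class="ch.qos.logback.core.FileAppender">
    <!-- imported variables are referenced with the usual ${...} syntax -->
    <file>${log.dir}/info.log</file>
    <encoder>
      <pattern>%d %-5level [%thread] %logger - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="INFO_FILE" />
  </root>
</configuration>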
The general pattern for accessing loggers from your code is a static logger instance per class. We pre-configured the development environment so you can just type LOG and hit [ctrl][space] (and then [arrow up]) to insert the code pattern line into your class:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyClass {
  private static final Logger LOG = LoggerFactory.getLogger(MyClass.class);
  ...
}
Please note that in this case we are not using the injection pattern but the convenient static alternative. This is a well-established solution and also has performance benefits.
We use a common understanding of the log-levels as illustrated by the following table. This helps for better maintenance and operation of the systems by combining the development and the operations view.
Log-level | Description | Impact | Active Environments |
---|---|---|---|
FATAL | Only used for fatal errors that prevent the application from working at all (e.g. startup fails or shutdown/restart required) | Operator has to react immediately | all |
ERROR | An abnormal error indicating that the processing failed due to technical problems. | Operator should check for known issues and otherwise inform development | all |
WARNING | A situation where something did not work as expected. E.g. a business exception or a user validation failure occurred. | No direct reaction required. Used for problem analysis. | all |
INFO | Important information such as context, duration, or success/failure of a request or process | No direct reaction required. Used for analysis. | all |
DEBUG | Development information that provides additional context for debugging problems. | No direct reaction required. Used for analysis. | development and testing |
TRACE | Like DEBUG but with exhaustive information and for code that runs very frequently. Will typically cause large log-files. | No direct reaction required. Used for problem analysis. | none (turned off by default) |
Exceptions (with their stacktrace) should only be logged on FATAL or ERROR level. For business exceptions, a WARNING including only the message of the exception is typically sufficient.
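As an illustration, here is a sketch of this rule in code. OrderService, OrderRepository, Order, and MyBusinessException are hypothetical names used only for this example; the SLF4J idiom of passing the throwable as the last argument is what makes the stacktrace appear in the log:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {

  private static final Logger LOG = LoggerFactory.getLogger(OrderService.class);

  private final OrderRepository orderRepository; // hypothetical dependency

  public OrderService(OrderRepository orderRepository) {
    this.orderRepository = orderRepository;
  }

  public void placeOrder(Order order) {
    try {
      this.orderRepository.save(order);
    } catch (MyBusinessException e) {
      // business problem (assuming MyBusinessException extends RuntimeException):
      // WARNING with the message only, no stacktrace
      LOG.warn("Order {} was rejected: {}", order.getId(), e.getMessage());
    } catch (RuntimeException e) {
      // technical problem: ERROR including the stacktrace (throwable as last argument)
      LOG.error("Failed to save order {}.", order.getId(), e);
      throw e;
    }
  }
}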
We always use the following log files:
- Error Log: Includes log entries to detect errors.
- Info Log: Used to analyze system status and to detect bottlenecks.
- Debug Log: Detailed information for error detection.
The log file name pattern is as follows:
«LOGTYPE»_log_«HOST»_«APPLICATION»_«TIMESTAMP».log
Element | Value | Description |
---|---|---|
«LOGTYPE» | info, error, debug | Type of the log file |
«HOST» | e.g. mywebserver01 | Name of the server where the logs are generated |
«APPLICATION» | e.g. myapp | Name of the application that produces the logs |
«TIMESTAMP» | YYYY-MM-DD_HH00 | Date of the log file |
Example: error_log_mywebserver01_myapp_2013-09-16_0900.log
Error log from mywebserver01 for the application myapp on September 16th, 2013 at 9 am.
We use the following output format for all log entries to ensure that searching and filtering of log entries works consistently across all logfiles:
[D: «timestamp»] [P: «priority»] [C: «correlation-id»][T: «thread»][L: «logger»]-[M: «message»]
- D: Date (timestamp in ISO8601 format, e.g. 2013-09-05 16:40:36,464)
- P: Priority (the log level)
- C: Correlation ID (ID to identify users across multiple systems, needed when the application is distributed)
- T: Thread (name of the thread)
- L: Logger name (use the class name)
- M: Message (the log message)
Example:
[D: 2013-09-05 16:40:36,464] [P: DEBUG] [C: 12345] [T: main] [L: my.package.MyClass]-[M: My message...]
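As an illustration, this format can be produced with a logback encoder pattern along the following lines; this is only a sketch, and the MDC key correlationId is an assumption, so check the provided logback.xml for the exact configuration:

<encoder>
  <pattern>[D: %d{yyyy-MM-dd HH:mm:ss,SSS}] [P: %-5level] [C: %X{correlationId}] [T: %thread] [L: %logger]-[M: %msg]%n</pattern>
</encoder>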
In order to prevent log forging attacks, we provide a special appender for logback in devon4j-logging. If you use it (see Configuration), you are protected against such attacks.
In order to correlate separate HTTP requests to services belonging to the same user/session, we provide a servlet filter called DiagnosticContextFilter. This filter takes a provided correlation ID from the HTTP header X-Correlation-Id. If none is found, it generates a new correlation ID as a UUID. This correlation ID is added to the MDC of the logger. Therefore, it will be included in any log message of the current request (thread). Further concepts such as service invocations will pass this correlation ID on to subsequent calls in the application landscape. Hence, you can find all log messages related to an initial request simply via the correlation ID, even in highly distributed systems.
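The actual DiagnosticContextFilter is provided by devon4j; a simplified sketch of the underlying idea could look like the following, where the MDC key correlationId and the class name CorrelationIdFilter are assumptions made for illustration:

import java.io.IOException;
import java.util.UUID;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

import org.slf4j.MDC;

public class CorrelationIdFilter implements Filter {

  @Override
  public void init(FilterConfig filterConfig) {
  }

  @Override
  public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
      throws IOException, ServletException {

    // take the correlation ID from the HTTP header or generate a new UUID
    String correlationId = ((HttpServletRequest) request).getHeader("X-Correlation-Id");
    if (correlationId == null) {
      correlationId = UUID.randomUUID().toString();
    }
    MDC.put("correlationId", correlationId); // hypothetical MDC key, see the provided logback.xml
    try {
      chain.doFilter(request, response);
    } finally {
      MDC.remove("correlationId"); // clean up so pooled threads do not leak the ID
    }
  }

  @Override
  public void destroy() {
  }
}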
In highly distributed systems (from clustering up to microservices) it can get tedious to search for problems and details in log files. Therefore, we recommend setting up a central log and analysis server for your application landscape. You then feed the logs from all your applications into that central server (e.g. using Logstash), which adds them to a search index to allow fast searches (e.g. using Elasticsearch). This should be completed with a UI that allows dashboards and reports based on data aggregated from all the logs. This is addressed by the ELK stack or Graylog.
When you use an external system for gathering distributed logs, we strongly suggest that you use a JSON-based log format (e.g. by using the provided logback.xml, see above).
In that case it might be useful to log custom fields to the produced JSON. This is fully supported by SLF4J together with Logstash. The trick is to use the class net.logstash.logback.argument.StructuredArguments for adding the arguments to your log message, e.g.:
import static net.logstash.logback.argument.StructuredArguments.v;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
...
public void processMessage(MyMessage msg) {
  LOG.info("Processing message with {}", v("msgId", msg.getId()));
  ...
}
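Note that StructuredArguments is part of the logstash-logback-encoder library; a sketch of the corresponding Maven dependency looks as follows (the version is omitted here, assuming it is defined in your dependency management):

<dependency>
  <groupId>net.logstash.logback</groupId>
  <artifactId>logstash-logback-encoder</artifactId>
</dependency>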
This will produce something like:
{ "timestamp":"2018-11-06T18:36:37.638+00:00",
"@version":"1",
"message":"Processing message with msgId=MSG-999-000",
"msgId":"MSG-999-000",
"logger_name":"com.myapplication...Processor",
"thread_name":"http-nio-8081-exec-6",
"level":"INFO",
"level_value":20000,
"appname":"basic",}