
Commit

feat docs: more cross references and notes for USERVER_NO_LOG_SPANS and USERVER_LOG_DYNAMIC_DEBUG

Tests: does not affect production
commit_hash:2c9bd7714fc071cd37d337af9a72419ffdce6ff9
apolukhin committed Nov 29, 2024
1 parent 8238116 commit 296cbb9
Showing 3 changed files with 66 additions and 31 deletions.
2 changes: 2 additions & 0 deletions core/include/userver/tracing/span.hpp
@@ -22,6 +22,8 @@ class SpanBuilder;
/// @brief Measures the execution time of the current code block, links it with
/// the parent tracing::Spans and stores that info in the log.
///
/// Logging of spans can be controlled at runtime via @ref USERVER_NO_LOG_SPANS.
///
/// See @ref scripts/docs/en/userver/logging.md for usage examples and more
/// descriptions.
///
4 changes: 4 additions & 0 deletions scripts/docs/en/schemas/dynamic_configs.md
@@ -1211,6 +1211,8 @@ schema:
type: string
```

@warning Use @ref USERVER_NO_LOG_SPANS to disable Span logs.

Used by components::LoggingConfigurator.


@@ -1351,6 +1353,8 @@ schema:
}
```

To disable specific log lines not related to spans, use @ref USERVER_LOG_DYNAMIC_DEBUG.

Used by components::LoggingConfigurator and all the logging facilities.

@anchor USERVER_RPS_CCONTROL
91 changes: 60 additions & 31 deletions scripts/docs/en/userver/logging.md
@@ -21,7 +21,7 @@ Macros are used for logging:
@snippet logging/log_test.cpp Sample logging usage

The right part of the expression is calculated only if the logging level is less than or equal to the macro log level.
I.e., the right part of the expression `LOG_DEBUG() << ...` is calculated only in the case of the `DEBUG`
or `TRACE` logging level.
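
For instance, the expensive dump below is computed only when `DEBUG` records are actually accepted (a minimal sketch;
`DumpStateForDebug()` is a hypothetical helper, not a userver API):

```cpp
#include <string>

#include <userver/logging/log.hpp>

// Hypothetical helper: building the dump string is expensive.
std::string DumpStateForDebug(int batch_size);

void ProcessBatch(int batch_size) {
    // The streaming expression on the right-hand side is evaluated only when
    // the logger accepts DEBUG records, so the expensive dump is skipped at
    // the INFO/WARNING log levels.
    LOG_DEBUG() << "batch state: " << DumpStateForDebug(batch_size);
}
```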

Sometimes it is useful to set the logging level of a given log entry dynamically:
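
A minimal sketch of picking the level at runtime (the retry bookkeeping is illustrative):

```cpp
#include <userver/logging/level.hpp>
#include <userver/logging/log.hpp>

void ReportRetry(int retries_left) {
    // The level of this particular record is chosen at runtime:
    // the last retry is more alarming than the earlier ones.
    const auto level =
        retries_left == 0 ? logging::Level::kWarning : logging::Level::kDebug;
    LOG(level) << "retrying the request, retries left: " << retries_left;
}
```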
@@ -33,8 +33,8 @@ Sometimes it is useful to set the logging level of a given log entry dynamically
Not all logs get into the log file, but only those that are not lower than the logger's log level. The logger log level
is set in static config (see components::Logging).
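
A sketch of such a logger in the static config (the file path and level are illustrative):

```
yaml
logging:
    fs-task-processor: fs-task-processor
    loggers:
        default:
            file_path: /var/log/my-service/server.log
            level: info
```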

The log level can be changed in the static config for a particular handle. In this case, the log level of the logger
is changed only for the request handling task of the handle and for all the subtasks:

```
yaml
@@ -44,21 +44,29 @@ yaml
log-level: WARNING
```

Logging per file and line can be overridden at runtime using the @ref USERVER_LOG_DYNAMIC_DEBUG dynamic config.


### Dynamic change of the logging level

The logging `level` that was set in the static config of the components::Logging
component for the entire service can be changed on the fly.

See @ref scripts/docs/en/userver/log_level_running_service.md for more info.

### Limit log length of the requests and responses

For per-handle limiting of the request body or response data logging you can use the `request_body_size_log_limit`,
`request_headers_size_log_limit`, and `response_data_size_log_limit` static options of the handler
(see server::handlers::HandlerBase). Or you could override the
server::handlers::HttpHandlerBase::GetRequestBodyForLogging and
server::handlers::HttpHandlerBase::GetResponseDataForLogging functions.
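
For example, a handler section in the static config could cap the logged sizes (a sketch; the handler name, path and
limits are illustrative):

```
yaml
handler-v1-upload:
    path: /v1/upload
    method: POST
    task_processor: main-task-processor
    request_body_size_log_limit: 512
    response_data_size_log_limit: 512
```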

### Limiting the log frequency

If some line of code generates too many logs and a small number of them is enough, then `LOG_*` should be
replaced with `LOG_LIMITED_*`. For example, `LOG(lvl)` should be replaced with `LOG_LIMITED(lvl)`; `LOG_DEBUG()` should
be replaced with `LOG_LIMITED_DEBUG()`, etc.

In this case, the log message is written only if the message index is a power of two. The counter is reset every second.
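
A sketch of such a replacement (the overflow scenario is illustrative):

```cpp
#include <cstddef>

#include <userver/logging/log.hpp>

void OnQueueOverflow(std::size_t dropped_count) {
    // Written only for the 1st, 2nd, 4th, 8th, ... call per second,
    // so a hot code path cannot flood the log file.
    LOG_LIMITED_WARNING() << "queue overflow, dropped " << dropped_count
                          << " messages";
}
```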

@@ -99,7 +107,7 @@ to use `tracing::Span`, which implicitly adds tags to the log.
### Stacktrace

Sometimes it is useful to write a full stacktrace to the log. Typical use case is for logging a "never should happen happened"
situation. Use logging::LogExtra::Stacktrace() for such cases:

@snippet logging/log_extra_test.cpp Example using stacktrace in log
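
In essence, the snippet boils down to something like this (the surrounding function is illustrative):

```cpp
#include <userver/logging/log.hpp>
#include <userver/logging/log_extra.hpp>

void OnBrokenInvariant() {
    // Attaches the current stacktrace to this single log record.
    LOG_ERROR() << "Broken invariant, this should never happen"
                << logging::LogExtra::Stacktrace();
}
```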

@@ -128,23 +136,23 @@ LOG_INFO_TO(*my_logger) << "Look, I am a new logger!";
Note: do not forget to configure logrotate for your new log file!
## Tracing
The userver implements a request tracing mechanism that is compatible with the
[opentelemetry](https://opentelemetry.io/docs/) standard.
It allows you to save dependencies between tasks, between requests through several services,
thereby building a trace of requests and interactions. It can be used to identify slow query stages, bottlenecks,
sequential queries, etc.
See tracing::DefaultTracingManagerLocator for more info.
### tracing::Span
When processing a request, you can create a tracing::Span object that measures the execution time of the current code
block (technically, the time between its constructor and destructor) and stores the resulting time in the log:
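
A sketch of a block that produces such a record (the surrounding function is illustrative):

```cpp
#include <userver/tracing/span.hpp>

void InvalidateCache() {
    // Measures everything up to the end of the scope; on destruction the
    // span writes a log record with stopwatch_name=cache_invalidate.
    tracing::Span span{"cache_invalidate"};

    // ... the actual cache invalidation ...
}
```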
Example log for `tracing::Span span{"cache_invalidate"}`:
```
tskv timestamp=2018-12-04T14:00:35.303132 timezone=+03:00 level=INFO module=~Impl ( userver/src/tracing/span.cpp:76 ) task_id=140572868354752 coro_id=140579682340544 text= trace_id=672a081c8004409ca79d5cc05cb5e580 span_id=12ff00c63bcc46599741bab62506881c parent_id=7a7c1c6999094d2a8e3d22bc6ecf5d70 stopwatch_name=cache_invalidate start_timestamp=1543921235.301035 total_time=2.08675 span_ref_type=child stopwatch_units=ms
@@ -156,41 +164,51 @@ Log record example for some `POST /v1/upload` handle:
tskv timestamp=2020-08-13T15:30:52.507493 level=INFO module=~Impl ( userver/core/src/tracing/span.cpp:139 ) task_id=7F110B765400 thread_id=0x00007F115BDEE700 text= stopwatch_name=http/handler-v1_upload-post total_time=36.393694 span_ref_type=child stopwatch_units=ms start_timestamp=1597321852.471086 meta_type=/v1/upload _type=response method=POST body={"status":"ok"} uri=/v1/upload?provider_id=driver-metrics http_handle_request_time=36.277501 http_serialize_response_data_time=0.003394 tags_cache_mapping_time=0.018781 find_service_time=21.702876 http_parse_request_data_time=0.053233 http_check_auth_time=0.029809 http_check_ratelimit_time=0.000118 entities_cache_mapping_time=0.01037 register_request_time=0.819509 log_to_yt_time=0.047565 save_request_result_time=1.523389 upload_queries_time=5.179371 commit_time=4.11817 link=48e0029fc25e460880529b9d300967df parent_link=b1377a1b20384fe292fd77cb96b30121 source_service=driver-metrics entity_type=udid merge_policy=append provider_name=driver-metrics tags_count_append=3 meta_code=200 trace_id=2f6bf12265934260876a236c373b37dc span_id=8f828566189db0d0 parent_id=fdae1985431a6a57
```
tracing::Span can only be created on the stack. Currently, the ability to create `tracing::Span` as a member of a class
whose objects can be passed between tasks is not supported.
### Tags
In addition to `trace_id`, `span_id`, `parent_id` and other tags specific to opentracing, the `tracing::Span` class can
store arbitrary custom tags. To do this, Span implicitly contains LogExtra. You can add tags like this:
@snippet tracing/span_test.cpp Example using Span tracing
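
A minimal sketch of adding custom tags (the span name and tag values are illustrative):

```cpp
#include <string>

#include <userver/tracing/span.hpp>

void ProcessOrder(const std::string& order_id) {
    tracing::Span span{"process_order"};
    // The tag goes into this span's log record and into all LOG_* records
    // written while this span is the current one.
    span.AddTag("order_id", order_id);
    // Tags added this way are not copied into newly created child spans.
    span.AddNonInheritableTag("stage", "validation");
}
```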
Unlike simple `LogExtra`, tags from `Span` are automatically logged when using `LOG_XXX()`. If you create a `Span`, and
you already have a `Span`, then `LogExtra` is copied from the old one to the new one (except for the tags added via
`AddNonInheritableTag`).
### Built-in tag semantics
- `TraceId` propagates both to sub-spans within a single service, and from client to server
- `Link` propagates within a single service, but not from client to server. A new link is generated for the "root"
request handling task
- `SpanId` identifies a specific span. It does not propagate
- For "root" request handling spans, there are additionally:
- `ParentSpanId`, which is the inner-most `SpanId` of the client
- `ParentLink`, which is the `Link` of the client
### Span hierarchy and logging
All logs in the current task are implicitly linked to the current `Span` for the user, and tags of this `Span` are added
to them (trace_id, span_id, etc.). All `Span` in the current task are implicitly stacked, and if there are several
similar `Span`, the last created one (i.e., located at the top of the `Span` stack of the current task) will be used for
logging.
@snippet tracing/span_test.cpp Example span hierarchy
If you want to get the current `Span` (for example, you want to write something to `LogExtra`, but do not want to create
an additional `Span`), then you can use the following approach:
@snippet core/src/tracing/span_test.cpp Example get current span
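
A sketch of that approach (the tag is illustrative):

```cpp
#include <userver/tracing/span.hpp>

void MarkSlowPath() {
    // Reuses the span already created by the handler instead of opening
    // an additional one.
    auto& span = tracing::Span::CurrentSpan();
    span.AddTag("slow_path", "true");
}
```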
### Creating a Span
A `Span` is automatically created for each request handled by the server::handlers::HttpHandlerBase inherited handles.
`trace_id`, `parent_id` and `link` are automatically populated from the request headers (if any).
Cache handlers do not have a parent Span, so a Span without a parent is automatically created for them, with a new
`trace_id` for each update. The situation is similar for all utils::PeriodicTask.
When creating a new task via `utils::Async`, a new `Span` is created and linked with the current `Span` of the current task.
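
A sketch of such a task (the task name and body are illustrative):

```cpp
#include <userver/utils/async.hpp>

void ResizeInBackground() {
    // The new task gets its own Span named "resize_image" that is linked to
    // the Span of the current task, so both end up in the same trace.
    auto task = utils::Async("resize_image", [] {
        // ... the actual work ...
    });
    task.Get();
}
```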
@@ -214,9 +232,12 @@ X-YaTraceId
### Selectively disabling Span logging
Using the server dynamic config @ref USERVER_NO_LOG_SPANS, you can set full names and name prefixes of the Spans that
should not be logged. If a span is not logged, then the tracing::ScopeTime of this span and any custom tags attached to
the span via the `Add*Tag*()` methods are not put into the logs.
For example, this is how you can disable logging of all Span for MongoDB (that is, all Span with `stopwatch_name`
starting with `mongo`) and `Span` with `stopwatch_name=test`:
```json
{
@@ -234,8 +255,9 @@ For example, this is how you can disable logging of all Span for MongoDB (that i
## OpenTelemetry protocol

It is possible to use the OpenTelemetry protocol for exporting logs and traces.
To use it, register all prerequisites for the gRPC client and register `otlp::LoggerComponent` in the component list in
your `main` function. The static config file should contain a component section with the OTLP logger.

Also remove the `default` logger from `logging.loggers`, as the default logger is handled by the otlp component from now on.

The related part of static config:
@@ -259,11 +281,15 @@ The service buffers not-yet-sent logs and traces in memory, but drops them on ov

### Separate Sinks for Logs and Tracing

In certain environments, such as Kubernetes, applications typically write logs to stdout/stderr, while traces are sent
efficiently through the `push` model (via OTLP transport). Kubernetes stores the container's stdout/stderr in files on
nodes, making logs available for log collectors using the `pull` model. This approach ensures that logs remain
accessible even if the application fails, capturing the most critical information.

To configure separate sinks for logs and traces, use the optional `sinks` block in the `otlp-logger` configuration:

```
yaml
otlp-logger:
endpoint: $otlp-endpoint
service-name: $service-name
@@ -273,21 +299,24 @@ otlp-logger:
tracing: default | otlp | both
```

If the `sinks` block is not present in the `otlp-logger` configuration, both logs and traces use the OTLP transport and
are delivered to an OpenTelemetry-compatible collector by a background task in userver.

In the `sinks` block, you can set a value for each stream. See: `otlp::LoggerComponent`.

If you choose `otlp` for both streams, ensure that `logging.loggers` is empty:

```
yaml
logging:
fs-task-processor: fs-task-processor
loggers: {}
```

Otherwise, add the _default_ logger in the `logging` component's `loggers` field:

```
yaml
logging:
fs-task-processor: fs-task-processor
loggers:
