
Log files are consuming too much disk space. #140

Open
loudonlune opened this issue Dec 12, 2022 · 6 comments

Labels
bug Something isn't working

@loudonlune

Background:

I am with the UNH-InterOperability Lab, and we host a testbed instance of the OpenHorizon Management VM for the community. Recently, we have noticed that our VM hosting the instance has been running out of disk space. Upon further investigation, we found that certain containers are responsible for a sizable chunk of the disk space consumed.

Environment:
We are running Docker v20.10.18 on Ubuntu 20.04 LTS

Problem:

The exchange API and agbot containers are generating very large log files, consuming ~750 MB and ~2.5 GB of disk space, respectively.

[screenshot: per-container log file disk usage]

Other containers also produce fairly large logs, but they are dwarfed by these two.

Steps to Reproduce the Problem:

  1. Deploy the OpenHorizon Management Hub
  2. Let the management hub run under load for a while.
@joewxboy added the bug label on Mar 29, 2023
@joewxboy
Member

@bencourliss and @naphelps Can we get someone to look at this? This issue is causing the community lab exchange to go into restart loops when the disk fills up. Can we provide a mechanism (flag, env var, whatever) that allows us to prune, limit or suppress the log files?

@naphelps
Member

@joewxboy @loudonlune

The log level in the Exchange is configurable in the config.json. It can even be turned off if you like.

https://github.com/open-horizon/exchange-api/tree/master/docs#apilogging
Values: https://logback.qos.ch/apidocs/ch/qos/logback/classic/Level.html
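
For example, a minimal sketch of what that could look like in the Exchange's config.json. The exact key path here is an assumption and should be checked against the docs linked above; the level values come from the logback Level class (OFF, ERROR, WARN, INFO, DEBUG, TRACE):

```json
{
  "api": {
    "logging": {
      "level": "WARN"
    }
  }
}
```

Dropping from INFO to WARN or ERROR should substantially reduce the log volume, at the cost of less detailed logging.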

@naphelps
Member

INFO is the config default.

@bencourliss
Member

@joewxboy @loudonlune

It's my understanding that all of the containers write to stdout, correct? If so, is that output captured in /var/log/messages or /var/log/syslog? You should be able to configure logrotate on the system to keep those files at a manageable size. Outside of controlling what gets logged, as Nathan mentions, I don't think this is something that should be configured in the all-in-one mgmt hub script.
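
For illustration, a sketch of the kind of logrotate rule that suggestion implies, assuming the container output does land in /var/log/syslog (with Docker's default logging driver it does not, as the next comment explains); the retention settings are arbitrary examples:

```
/var/log/syslog {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
}
```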

@loudonlune
Author

@bencourliss The containers are writing their logs to /var/lib/docker/containers/*/*-json.log as a series of serialized JSON objects, where * is the container's ID. Those are the log files shown in the screenshot that are consuming large amounts of disk space.

All of the containers write to stdout inside the container environment. Docker captures that output and uses a "logging driver" to decide what happens to it. There is a driver that writes to syslog, but that isn't the default; the default is the json-file driver, which is what's creating these large files. There are other drivers that are lighter on disk space to choose from: https://docs.docker.com/config/containers/logging/configure/

I can set the default logging driver in the daemon config to fix this for now, but the real question is whether this should be the out-of-the-box behavior of the management hub after installation.
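
As a concrete sketch of that daemon-level fix, an /etc/docker/daemon.json along these lines keeps the json-file driver but caps each container's log using Docker's documented log-opts (the specific sizes are just examples, and the new defaults only apply to containers created after the daemon is restarted):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```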

@bencourliss
Member

bencourliss commented Mar 29, 2023

@loudonlune ah, got it. It still seems to me like something that should be configured on the system itself. I don't feel the management hub should be changing system-wide Docker daemon settings; that seems better suited to the system administrator. Perhaps it would be good to document it, though: https://docs.docker.com/config/containers/logging/configure/
