
lockfile implementation causes pain for docker builds with cache mounts (unlike python-based dnf) #1685

Open · rmuir opened this issue Jan 13, 2025 · 2 comments

rmuir commented Jan 13, 2025

The libdnf lockfile implementation has some "homemade" locking that doesn't play well with containerized environments where PIDs are reused. It relies on the existence of lockfiles and on checking PIDs rather than using normal file-locking mechanisms.

In the Docker world, users use "cache mounts" to share the package cache across multiple container builds. For example:

RUN --mount=type=cache,target=/var/cache/yum,sharing=locked \
    microdnf install --setopt=keepcache=1 -y python3.11

This prevents downloading the same packages over and over again, which is especially valuable when building many containers.
See https://docs.docker.com/build/cache/optimize/#use-cache-mounts for more information.

The bug happens when a user ^C's their build, which kills the microdnf process but leaves a lockfile in the build cache. Unfortunately, the next time they run docker build, microdnf will run with the same PID, inspect the file and the PID recorded in it, and conclude that the lockfile is still in use.
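For illustration only, here is a simplified sketch (in C, not libdnf's actual code) of the PID-file pattern described above; the function name and lockfile handling are hypothetical:

/* Hypothetical sketch of PID-file style locking (not libdnf's actual
 * code). It shows why a stale lockfile plus a reused PID looks "live". */
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>

/* Returns 1 if the lockfile appears to be held, 0 otherwise. */
static int lock_appears_held(const char *path)
{
    FILE *f = fopen(path, "r");
    if (!f)
        return 0;                   /* no lockfile -> not locked */

    int pid = 0;
    int have_pid = (fscanf(f, "%d", &pid) == 1);
    fclose(f);

    /* kill(pid, 0) only asks "does *some* process with this PID exist?".
     * After ^C under a cache mount, the stale file survives, and in a
     * fresh container the new microdnf gets the same low PID again, so
     * the check matches a live (but unrelated) process and the lock
     * never clears. */
    return have_pid && kill((pid_t)pid, 0) == 0;
}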

The issue only happens with microdnf, not with dnf: Python-based dnf uses an flock()-based lock, so when the process dies the lock is released, and no such troubles occur.
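By contrast, a minimal sketch of flock()-based locking (the lock path below is made up): the kernel drops an flock() lock automatically when the holding process exits, even on SIGKILL, so a killed build cannot leave stale state behind.

/* Minimal sketch of flock()-based locking; the lock path is
 * hypothetical. The kernel releases the lock when the process exits,
 * so no stale lock can survive a killed build. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/var/cache/yum/example.lock", O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (flock(fd, LOCK_EX | LOCK_NB) != 0) {
        perror("flock");    /* lock genuinely held by a live process */
        return 1;
    }
    /* ... do the cached work; the lock dies with this process ... */
    flock(fd, LOCK_UN);
    close(fd);
    return 0;
}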

m-blaha commented Jan 15, 2025

I can confirm the bug; it is reproducible using the following Containerfile:

FROM ubi9-minimal

RUN --mount=type=cache,target=/var/cache/yum,sharing=locked \
    microdnf install --setopt=keepcache=1 -y python3.11

Result:

❯ podman build .
STEP 1/2: FROM ubi9-minimal
Resolved "ubi9-minimal" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull registry.access.redhat.com/ubi9-minimal:latest...
Getting image source signatures
Checking if image destination supports signatures
Copying blob eeaa3613f51c done   | 
Copying blob 5808a11b7bed done   | 
Copying config 0f0fe44fbc done   | 
Writing manifest to image destination
Storing signatures
STEP 2/2: RUN --mount=type=cache,target=/var/cache/yum,sharing=locked     microdnf install --setopt=keepcache=1 -y python3.11

(microdnf:1): librhsm-WARNING **: 11:43:01.756: Found 0 entitlement certificates

(microdnf:1): librhsm-WARNING **: 11:43:01.756: Found 0 entitlement certificates
Downloading metadata...
Downloading metadata...
^C


❌1 ❯ podman build .
STEP 1/2: FROM ubi9-minimal
STEP 2/2: RUN --mount=type=cache,target=/var/cache/yum,sharing=locked     microdnf install --setopt=keepcache=1 -y python3.11

(microdnf:1): librhsm-WARNING **: 11:43:07.715: Found 0 entitlement certificates

(microdnf:1): librhsm-WARNING **: 11:43:07.716: Found 0 entitlement certificates
error: metadata[process] already locked by microdnf(1)
Error: building at STEP "RUN --mount=type=cache,target=/var/cache/yum,sharing=locked microdnf install --setopt=keepcache=1 -y python3.11": while running runtime: exit status 1

m-blaha commented Jan 15, 2025

I ran a similar test with dnf5 (using the fedora-minimal base image), and it seems that dnf5 is not affected by this bug.

Development of the dnf4 stack (including libdnf) is currently on hold as we’re focusing on dnf5. If you need this fixed for CentOS Stream or RHEL, I recommend filing a downstream RHEL issue at: https://issues.redhat.com/.

m-blaha added the Triaged label on Jan 15, 2025