
Multiple broken systems after rpm-ostree deployment #5450

@CheckYourFax


Describe the bug

Hi. Recently we've had multiple Bazzite users end up with a broken system after changing something via rpm-ostree. In this case, a user appended a kernel argument (karg) to their system and rebooted.

Here are screenshots from the affected user:

[screenshots of the boot failure attached]

In this case it happened after running this command and rebooting:

rpm-ostree kargs --append=modprobe.blacklist=hid_uclogic
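
If it helps with triage, the same karg change can be inspected or reverted with the kargs subcommand. This is only a sketch for a still-bootable deployment, reusing the argument from this report:

# list the kernel arguments of the current deployment
rpm-ostree kargs

# remove the appended argument again (creates another new deployment)
rpm-ostree kargs --delete=modprobe.blacklist=hid_uclogic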

And another user:

[screenshot attached]

In this case it happened seemingly at random after running rpm-ostree update.

It seems to be fairly rare, but this is the second user in a short time with an entirely broken system after either updating or changing a karg via rpm-ostree. Before this I had never seen this error happen.

Reproduction steps

  1. Make rpm-ostree deploy a new deployment (for example via an update or a karg change).
  2. On the next reboot, OSTree cannot find the OSTree root (not easily reproducible).

Expected behavior

  1. The system should reboot successfully.

Actual behavior

  1. The reboot causes both the primary and the backup OSTree deployment to fail to boot with the error shown in the screenshots, and the user is dropped into an emergency shell.
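
For what it's worth, here is a rough sketch of checks that could be run from that emergency shell to narrow this down; the mount points and the wildcarded osname directory are assumptions and may differ per system (e.g. /sysroot may need to be mounted manually first):

# which ostree= root the kernel was asked to boot
cat /proc/cmdline

# bootloader entries that reference the deployments
cat /sysroot/boot/loader/entries/*.conf

# deployment roots that actually exist on disk
ls /sysroot/ostree/deploy/*/deploy/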

System details

The user can't run rpm-ostree commands from the emergency shell, but as of writing this was on

rpm-ostree version 2025.8 with features:

  • rust
  • compose
  • container
  • fedora-integration
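
(For completeness: on a working install of the same image, the version and feature list above can be captured with the command below; this is a sketch and the exact output format may differ between releases.)

# print the rpm-ostree client version and its compiled-in features
rpm-ostree --version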

Additional information

Before the dates shown in the screenshots, I had never seen this error reported on our support Discord channel.
