
Builds a simple debian package on tag #386

Open
wants to merge 7 commits into main from feature/debian-package

Conversation


@svavassori svavassori commented May 28, 2023

This PR creates a .deb package when a tag following the vX.Y.Z pattern is created.

Since the libsoapysdr0.8 dependency package is available only in recent releases (Ubuntu 22.04, and Debian 12 due out on 2023-06-10), the dependencies won't resolve on older systems.

For some other dependencies (like libpulse0) there wasn't a clear version to select; if anyone reports a problem, the version can be adjusted accordingly.

As agreed in #384, the package is named after the executable instead of the project, so rtl-airband.
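
For reference, a minimal sketch of the kind of packaging step this implies, with dpkg-deb and an illustrative control file (version, dependency list, maintainer, and description below are placeholders, not the exact values in this PR):

# assemble the package tree and build the .deb (illustrative values only)
mkdir -p pkg/DEBIAN
cat > pkg/DEBIAN/control <<'EOF'
Package: rtl-airband
Version: 4.1.1
Architecture: amd64
Depends: libsoapysdr0.8, libpulse0
Maintainer: Placeholder Name <placeholder@example.com>
Description: multichannel AM/NFM demodulator (placeholder description)
EOF
dpkg-deb --build pkg rtl-airband_4.1.1_amd64.deb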

Any suggestions on how to improve the packaging are welcome.

@charlie-foxtrot
Owner

Thanks for the work, I know pulling out those dependencies can be tedious!

A few comments / thoughts:

  • The GitHub action only running on a version tag makes sense, but not running on each PR means a code change that breaks something may not be detected until a release is tagged
  • Having the build / package commands embedded in the workflow yml means you can't run the commands / test locally

How about moving the build / package commands into a standalone script in .github/ that is called by the workflow, and the workflow then does the publish? This would mean the on-PR workflow can call the script to make sure build / package is working, just without publishing.
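
A sketch of that split, with a hypothetical script name and contents (the real build commands would come from this PR):

#!/bin/bash
# .github/build_and_package.sh (hypothetical name): build and package without
# publishing, so it can run locally, on every PR, and on release tags alike
set -o errexit
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build
# packaging step (e.g. the dpkg-deb invocation) goes here; only the
# tag-triggered workflow would then publish the resulting .deb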

Finally, I'm in the process of adding a rpi3 build agent so ARM packages can be generated as well. Would the only change to the package command be the value of arch? If so, can that be detected automatically?

@svavassori
Author

I get your point; actually, my original idea was to just attach the packaging job after the build one. Unfortunately, since I envisioned it as release-only, the tagging event could be received after the build, which is why I decided to rebuild it.
I can change it, but then the .deb package naming has to change in order to avoid clashes between different branches.

(just as context)
The build pipeline I'm generally using is like this:

  1. checkout
  2. linter (when used)
  3. build (if needed and maybe package if it's easier in one shot)
  4. unit test + dynamic code analysis
  5. package (if not done at 3, language-specific artifacts like .jar/python wheel/npm...)
  6. integration tests
  7. platform-specific packaging (e.g. docker/helm/.deb)
  8. static code analysis / QA
  9. publish created artifacts to a central repository

And sometimes the static code checks can be done earlier, depending on how critical they are and whether they also check the platform-specific packaging.

This is what I haven't fully understood:

but not running on each PR means a code change that breaks something may not be detected until a release is tagged

What could go undetected if tests are already performed in the build job?
If instead you are referring to the platform-specific packaging, I don't know what kind of tests you have in mind (I don't have much expertise in creating .deb packages, so I'm open to learning if there are any kinds of tests that can be done).

Finally, I'm in the process of adding a rpi3 build agent so ARM packages can be generated as well. Would the only change to the package command be the value of arch?

Yes, just selecting the architecture is enough. I will check whether all the dependencies (and their versions) are also available for the Raspberry Pi 3.

If so, can that be detected automatically?

I don't know; I'll have a look at it. I guess it may not be fully automatic, but looking at some environment parameter should be enough to figure it out and select the appropriate label.
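
For what it's worth, on Debian-based systems dpkg can report the native package architecture directly, so the detection might be as simple as:

# dpkg prints the native package architecture (amd64, arm64, armhf, ...)
arch="$(dpkg --print-architecture)"
echo "building package for ${arch}"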

@charlie-foxtrot
Owner

I get your point; actually, my original idea was to just attach the packaging job after the build one. Unfortunately, since I envisioned it as release-only, the tagging event could be received after the build, which is why I decided to rebuild it.

Configure / compile is so quick that re-building across workflows seems fine to me.

(just as context) The build pipeline I'm generally using is like this:

  1. checkout
  2. linter (when used)
  3. build (if needed and maybe package if it's easier in one shot)
  4. unit test + dynamic code analysis
  5. package (if not done at 3, language-specific artifacts like .jar/python wheel/npm...)
  6. integration tests
  7. platform-specific packaging (e.g. docker/helm/.deb)
  8. static code analysis / QA
  9. publish created artifacts to a central repository

👍

This is what I haven't fully understood:

but not running on each PR means a code change that breaks something may not be detected until a release is tagged

What could go undetected if tests are already performed in the build job? If instead you are referring to the platform-specific packaging, I don't know what kind of tests you have in mind (I don't have much expertise in creating .deb packages, so I'm open to learning if there are any kinds of tests that can be done).

The current development workflow for this repo is that PRs are opened against the unstable branch, and periodically the unstable branch is promoted to main, where a release tag is created. Eventually I want to get to a point where PRs are opened against main, full automated testing is done on each PR so there is confidence that nothing is broken, and every merged PR generates a new version tag / release. Before I feel comfortable making this change I want to make the PR testing more robust and comprehensive.

Back to this question: as long as PRs are merged to unstable and releases are only generated periodically, there is a chance that a PR lands a breaking change (e.g. adds a new library dependency) that isn't detected until the release is tagged and a .deb package is generated some weeks / months later (the workflow that generates the .deb is only triggered on a new version tag). Instead I would like to detect such a breaking change during the testing on the PR that introduces it. I think an easy way to do this is simply to generate a .deb with a unique name (based on git hash?) as part of the workflows triggered on every pull request (ideally that .deb would be installed and used for the integration tests also run in that workflow).

Here is a bash script that will generate a version name based on tags or hashes and works both in a workflow and locally when testing:

#!/bin/bash

# make sure we're in the git repo
cd "$(dirname "$0")"

TAG_OR_SHA="$(git describe --tags --exact-match HEAD 2> /dev/null || git rev-parse --short HEAD)"

# if this is being run from GitHub for a tag (ie v1.2.3) then use that
if [[ ! -z "${GITHUB_REF_NAME}" && ${GITHUB_REF_NAME:0:1} == "v" ]] ; then
    echo "${GITHUB_REF_NAME}"

# if this is being run from GitHub for a commit then use the last characters of the SHA
elif [[ ! -z "${GITHUB_SHA}" ]] ; then
    echo "${GITHUB_SHA}" | tail -c 8

# if this is running outside of GitHub on a clean repo, use the tag or SHA
elif git diff --quiet ; then
    echo "${TAG_OR_SHA}"

# otherwise use the tag or SHA plus '-dirty'
else
    echo "${TAG_OR_SHA}-dirty"
fi
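
For example, the script's output could feed straight into the package file name (the script path and naming pattern here are just an illustration):

# hypothetical usage: name the .deb after the computed version and the
# native architecture
VERSION="$(script/snapshot_version.sh)"
ARCH="$(dpkg --print-architecture)"
dpkg-deb --build package_root "rtl-airband_${VERSION}_${ARCH}.deb"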

@svavassori
Author

Thanks for your detailed explanation: now I better understand what you were aiming for and the reason to always build a .deb.

I agree with what you said about moving to main as the target branch for PRs, and also about being able to execute tests on the PR branch before merging it.

Nonetheless, I see those tests as running against the executable and not the .deb package itself; let me explain why.
There is a subtle thing to take into account about using the .deb (or .rpm) as the final testing artifact: usually dependencies are defined as open ranges (e.g. libX >= 3.1) to allow installing the package on multiple compatible systems, and this cannot guarantee that the build is reproducible, especially if you're using moving tags like ubuntu-latest. This is not a limitation of the .deb format; to guarantee reproducibility you need to pin the exact version (i.e. libX = 3.2.1), which also defends against supply-chain attacks (very common in the npm ecosystem, for instance) but goes against the flexibility you would usually expect from OS-packaged software.

I'll give you a very recent example: while working on this PR I found out that dpkg-deb creates a .zstd-compressed archive by default on Ubuntu (21.10 and later) but a .xz-compressed archive on 21.04 or earlier and on Debian. The problem arises only when you try to install a .deb package created on Ubuntu 22.04 on a Debian (<= 11) system, because it cannot handle that format. All this without changing any commands in the build pipeline.
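
(For what it's worth, dpkg-deb accepts an explicit compression type, so pinning it in the packaging script makes the output independent of the host's default; a one-line sketch, assuming ${package_root} is the staged package tree:)

# force xz so the .deb installs on Debian <= 11 regardless of the build host
dpkg-deb -Zxz --build "${package_root}" rtl-airband.deb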

Thus, to reach a reproducible build pipeline, you also need to control the compilation environment. A possible strategy would be to execute the compilation in a docker image, then extract the executable and copy it into another docker image containing only the runtime dependencies. This way you can execute all the tests against that last image, which has the same environment (libs, permissions, user/group, etc.); the only difference is the kernel the container runs under.
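
A minimal sketch of that strategy (base images, stage names, and the elided dependency lists are all assumptions for illustration):

# Dockerfile.sketch: compile in a full image, test against a slim one
cat > Dockerfile.sketch <<'EOF'
FROM debian:12 AS build
RUN apt-get update && apt-get install -y build-essential cmake  # build deps elided
COPY . /src
RUN cmake -S /src -B /build && cmake --build /build

FROM debian:12-slim AS runtime
RUN apt-get update && apt-get install -y libpulse0  # runtime deps elided
COPY --from=build /build/src/rtl_airband /usr/bin/rtl_airband
EOF
docker build -f Dockerfile.sketch -t rtl-airband:test .
# the integration tests would then run against rtl-airband:test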

If it is OK with you, I'll change the implementation to always create the .deb and also add the docker steps for compilation and runtime.

@charlie-foxtrot
Owner

Running tests on the executable and not the package sounds good to me, as long as there is some basic validation that the package installs and the app starts up (ie not missing a new dependency).
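
That basic validation could be as small as the following sketch (the package name pattern is an assumption, and a real check would also start the app against a sample config):

# install the freshly built package; apt resolves its declared dependencies
sudo apt-get install --yes ./rtl-airband_*.deb
# verify the binary landed on PATH
command -v rtl_airband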

Docker sounds good as well. I'm not sure where that fits into the GitHub workflow world: whether there would be one action that runs a script that does all the docker stuff, or whether the workflow is actually run in a container that you can leverage somehow.

I now have a rpi3b added as a worker with all the packages installed, as well as docker. I'm planning to enable it for unstable / main today.

@svavassori svavassori force-pushed the feature/debian-package branch from dc99474 to 6bc79ae on June 4, 2023 18:09
@svavassori
Author

svavassori commented Jun 6, 2023

Having a Raspberry Pi 3B as an additional worker is really great news!

This week I had fewer cycles to dedicate to the PR, I'm sorry for the delay, but I'm still working on it.
As a matter of fact, I found out that libshout 2.4.6 added deprecation warnings to some methods, and on newer platforms shipping that version the build ends with an error because of those warnings.

Here is an example:

/build/src/output.cpp:81:29: error: 'int shout_set_format(shout_t*, unsigned int)' is deprecated: Use shout_set_content_format() [-Werror=deprecated-declarations]
   81 |         if (shout_set_format(shouttemp, SHOUT_FORMAT_MP3) != SHOUTERR_SUCCESS){
      |             ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are a few of them, but I think I can handle them even though C++ isn't the language I'm most proficient in.
I'll make a separate PR for them in order to keep this one focused on packaging; please let me know if you want me to open an issue for those warnings before creating the PR.

@charlie-foxtrot
Owner

No worries, any help is appreciated!!!

The libshout3 failures were previously reported and I have a fix in #382, but I haven't merged it to unstable yet because I haven't had a chance to test that the tags are still working . . . I'll try to get to that in the next few days; in the meantime you could merge that branch into yours or turn off treating warnings as errors.

@svavassori svavassori force-pushed the feature/debian-package branch 5 times, most recently from ce0c0ea to 1876fe7 on June 20, 2023 22:28
@svavassori svavassori force-pushed the feature/debian-package branch 16 times, most recently from 220d5cd to cd85612 on June 26, 2023 21:38
@svavassori svavassori force-pushed the feature/debian-package branch 3 times, most recently from ecabd93 to 065000c on June 26, 2023 22:06
@svavassori
Author

svavassori commented Jun 26, 2023

Hello,

I've finished implementing the general idea for packaging the application. It's still WIP and needs to be polished, but you can have a look at it to check whether it is going in the right direction. I've temporarily disabled treating warnings as errors in order to make the build pass.

Basically, it creates the package inside the build job and publishes it as a build artifact (7-day retention). Then, only on the Ubuntu worker, it does the same for the foreign architectures (ARM variants) and, as a final step, collects all previously created build artifacts and publishes them as a (pre-)release.
To avoid repetition, I moved the install-dependencies script together with the others and split it between Linux and macOS (since in the VM you need to call sudo but not in the container).

I haven't yet found a way to apply retention to pre-releases in order to avoid clutter from each build, and I'm still thinking about the best way to publish the created docker images in a similar way to the (pre-)releases.

About the versioning, I started with your suggestion of using the commit hash, but then, looking at how others have dealt with the same problem, I picked this schema, which seems generally accepted and is parsed by some tools: 4.1.1+git20230627.946f17a

It is composed of:

  • the latest released version from which it is branched (e.g. 4.1.1)
  • the commit date, to keep it sorted and human-readable
  • the commit hash

This has the limitation that multiple releases on the same day won't sort correctly, but I guess that isn't too critical.
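
A sketch of how those three parts can be derived with plain git (tag format per this repo's vX.Y.Z convention):

# latest released version this branch is based on, without the leading 'v'
base="$(git describe --tags --abbrev=0 | sed 's/^v//')"
# commit date of HEAD, compact and sortable
date="$(git show -s --format=%cd --date=format:%Y%m%d HEAD)"
# short commit hash
sha="$(git rev-parse --short HEAD)"
echo "${base}+git${date}.${sha}"   # e.g. 4.1.1+git20230627.946f17a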

@svavassori svavassori force-pushed the feature/debian-package branch from 546f09d to 2802f35 on June 27, 2023 07:16
@svavassori svavassori force-pushed the feature/debian-package branch 2 times, most recently from e7feafd to 3eaf97d on June 27, 2023 09:14
@svavassori svavassori force-pushed the feature/debian-package branch from 3eaf97d to 946f17a on June 27, 2023 09:56
Owner

@charlie-foxtrot charlie-foxtrot left a comment

thanks for all the work, this ended up being much more involved than I expected!!

I will merge the PR causing the warnings shortly

I added a bunch of comments / questions, most are for my understanding

script/snapshot_version.sh (resolved)
script/package.sh (outdated, resolved)
set -o errexit

if [ -L /usr/share/rtl_airband/config/default.conf ]; then
    rm /usr/share/rtl_airband/config/default.conf
Owner

if a "normal user" is using symlinks for their own config file this would remove that as well.

Author

@svavassori svavassori Jul 8, 2023

If the user creates a symlink named default.conf in that directory (which is root-owned), then yes, it will be removed, because from the script's perspective there is no way to distinguish it from an untouched installation.

That is also the reason for the asymmetric check during installation for the existence of a file instead of a symlink: to avoid overwriting (or failing) if a non-script-created default.conf file is already there. That should be a very unlikely case, since it's under a package-reserved directory.

I wrote that as the postrm script to clean up the whole installation and leave the system as it was before the package was installed.

Either way, if the user wants to change the default file to read, I think it's best that they modify the local systemd config instead of touching the package directory.
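
For context, a sketch of the install-side counterpart described above (the symlink target is a hypothetical example, not necessarily what the PR ships):

#!/bin/sh
# postinst sketch: create default.conf only if nothing exists at that path,
# so a non-script-created file is never overwritten
set -e
conf=/usr/share/rtl_airband/config/default.conf
if [ ! -e "${conf}" ]; then
    ln -s /usr/share/rtl_airband/config/example.conf "${conf}"
fi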

I have one question about the program's current behavior when no configuration file is passed as a parameter: wouldn't it be better to look for a configuration file in the current working directory (e.g. ./rtl_airband.conf) instead of using a hard-coded path? Or to make that parameter required?
Because I find it very unusual for a program to look for its configuration in /usr/local/etc/ by default, especially if it's installed as a package :-)

mkdir --parents "${package_root}/lib/systemd/system/"
sed 's|/usr/local/bin|/usr/bin|g' init.d/rtl_airband.service > "${package_root}/lib/systemd/system/rtl_airband.service"
mkdir --parents "${package_root}/usr/bin"
cp build_Release_NFM/src/rtl_airband "${package_root}/usr/bin/"
Owner

only packaging the NFM version, what would it take to build / package additional versions in the future? could the build directory be an input parameter to this script?

Author

We are already building all 4 variants at the same time, so it would just be a matter of parameterizing that line.

I picked the non-debug NFM-enabled variant as I thought it was the most useful from a release point of view, but I don't have a strong opinion on that.

The only downside I see to packaging all of them is how to put the variant information into the package name in a nice way.
I also thought about including all of them in the same package under different executable names, but that would work as an "internal" package, not a release one.
Maybe it would make sense to package the two non-debug variants if there is a big performance difference from excluding NFM, but I'm not aware of one.
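
Parameterizing it could look something like this (the default value matches what is packaged today; names are just an illustration):

# take the build variant directory as an optional argument, defaulting to
# the non-debug NFM build
build_dir="${1:-build_Release_NFM}"
cp "${build_dir}/src/rtl_airband" "${package_root}/usr/bin/"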

soapysdr \
pulseaudio \
pkg-config
brew link openssl
Owner

is openssl a new dependency? should it be included in the install list as well?

Author

It isn't a new dependency, and it is installed by default; it is used by libshout3 if I remember correctly, so it's probably a hidden one.

On macOS 13 the link isn't created by default, so the build fails because it cannot find openssl (even though it's installed).
I don't know if it's a new default in macOS 13 or a bug in the setup configuration of the worker VM.

script/package.sh (resolved)
script/package.sh (resolved)
.github/workflows/build.yml (resolved)
*.tar.gz
retention-days: 7

build-foreign-arch:
Owner

what is this doing? it's running on a Linux host and using docker to build amd64, arm64, arm/v7, and arm/v6? how does it differ from a matrix build? if it is independent, can it be its own .yml?

Author

It can be put in its own yaml. For the first version I put it together with the build because it was easier to link this job as a dependency of the previous one; I'll split it into a separate one.

I used this approach to get builds for foreign architectures in an easy and reliable manner. Another option would be to use gcc's cross-compile capability (and it may also be faster), but with cross-compilation there is a small chance that linking with dynamic libraries could have some issues.
The same thing could be done on macOS (amd64 and arm64) if needed.

The main advantages of building this way (see the sketch after this list) are:

  • the build environment is under our control and can be replicated on a local machine, so there is no dependency on whatever configuration/package versions the workers use, and no side effects; this makes debugging the build much easier (as a matter of fact, the versions of the libraries we compile against are determined by what is available in that OS version, not by a requirement of the project).
  • we are not limited to the architectures available on GitHub (e.g. RISC-V 32/64 bit in the future, maybe?)
  • it doesn't modify the worker, so there is better isolation, and pre-configured images could also be used.
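
As a sketch, the foreign-architecture builds can be driven with docker buildx plus QEMU emulation (builder name and output path are assumptions):

# register QEMU emulators so foreign-arch containers can run on this host
docker run --privileged --rm tonistiigi/binfmt --install all
docker buildx create --use --name multiarch
# build every target platform and collect the artifacts locally
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6 \
    --output type=local,dest=./dist .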

Author

I had a look at dependencies between workflows and it is possible to link them, but they will run only on the default branch (see the notes).

What can be done is to create a parent workflow that calls the child workflows in order. I'm not sure it is really helpful at this size, though.

- name: Install packaged dependencies
  run: .github/install_dependencies

- name: Configure
Owner

there's a lot of overlap between these steps and Dockerfile / script/package.sh, can this all be done in a single place / script that is leveraged by the two flows?

Author

Actually this package.yml is the old one that I didn't delete; it should be merged with / replaced by the "foreign-arch" one.

So the final flow should be:
Build OS/arch matrix > package > join all build artifacts and publish a pre-release if on a PR, or a release if on a tag

@svavassori
Author

I haven't yet found a way to apply retention to pre-releases in order to avoid clutter from each build, and I'm still thinking about the best way to publish the created docker images in a similar way to the (pre-)releases.

About this one, the only possible solution at the moment is to run a "clean" action that implements a retention policy, cleaning up images older than X days.
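
A sketch of such a clean action's core, using the GitHub CLI and jq (the repo slug and the 7-day window are placeholders):

#!/bin/bash
# delete pre-releases older than the cutoff; gh must be authenticated
set -o errexit
cutoff="$(date --date='7 days ago' +%Y-%m-%dT%H:%M:%SZ)"
gh api repos/OWNER/REPO/releases --paginate |
    jq -r --arg cutoff "${cutoff}" \
        '.[] | select(.prerelease and .created_at < $cutoff) | .id' |
    while read -r id; do
        gh api --method DELETE "repos/OWNER/REPO/releases/${id}"
    done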
