Builds a simple debian package on tag #386
Conversation
Thanks for the work, I know pulling out those dependencies can be tedious! A few comments / thoughts:
How about moving the build / package commands into a standalone script? Finally, I'm in the process of adding a rpi3 build agent.
I get your point, and actually my original idea was to just attach the packaging job after the build one. Unfortunately, since I envisioned it as running just for the release, the tagging event could be received after the build, and that is why I decided to rebuild. (just as context)
And sometimes the static code checks can be done earlier, depending on how critical they are and whether they also check the platform-specific packaging. This is what I haven't fully understood:
What could be undetected if tests are already performed in the build job?
Yes, just selecting the architecture is enough: I will check if all the dependencies (and their version) are also available for the Raspberry PI 3.
I don't know, I'll have a look at it. I guess it may not be fully automatic, but looking at some environment parameter should be enough to figure it out and select the appropriate label.
Configure / compile is so quick that re-building across work flows seems fine to me
👍
The current development workflow for this repo is that PRs are opened against main. Back to this question: as long as PRs are merged to main, this should work. Here is a bash script that will generate a version name based on tags or hashes and works both in a workflow and locally when testing:
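The script itself didn't survive the copy above; a minimal sketch of the tag-or-hash approach it describes, with the formatting factored into a helper so it can run outside a git checkout (the function name and the bare-hash fallback are my assumptions):

```shell
#!/bin/sh
# Turn `git describe` output (e.g. "v1.1.0-5-g946f17a") into a version name;
# fall back to a recognizable form when only a bare hash is available.
describe_to_version() {
    case "$1" in
        v*) printf '%s\n' "${1#v}" ;;    # tag-based: strip the leading "v"
        *)  printf 'git-%s\n' "$1" ;;    # bare hash: prefix it
    esac
}

# In a real checkout or workflow the input would come from git, e.g.:
#   describe_to_version "$(git describe --tags --always --dirty)"
describe_to_version "v1.1.0-5-g946f17a"
```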
Thanks for your detailed explanation: now I understand better what you were aiming for and the reason to always build the package. I agree with what you said about moving the commands into a standalone script.
Nonetheless, I see those tests as run against the executable and not the package. I'll give you a very recent example of that: while working on this PR I ran into exactly this kind of environment-dependent issue. Thus, to reach a reproducible build pipeline, you also need to guarantee the compilation environment.
A possible strategy would be to execute the compilation in a Docker image, then extract the executable and copy it to another Docker image containing only the runtime dependencies. In this way you can execute all the tests against this last image, having the same environment (libs, permissions, user/group, etc.); the only difference is the kernel you're executing the container with.
If it is OK for you, I'll change the implementation to always create the package.
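The two-image strategy described above can be sketched as a multi-stage Dockerfile (stage names, base images, package names, and paths here are illustrative assumptions, not taken from this PR):

```dockerfile
# Stage 1: full build environment with toolchain and -dev libraries
FROM debian:bookworm AS builder
RUN apt-get update && apt-get install -y build-essential cmake libpulse-dev
COPY . /src
RUN cmake -S /src -B /build && cmake --build /build

# Stage 2: slim runtime image with only the runtime dependencies;
# tests run against this image, so they see the exact release environment
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y libpulse0
COPY --from=builder /build/src/rtl_airband /usr/bin/rtl_airband
```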
Running tests on the executable and not the package sounds good to me, as long as there is some basic validation that the package does install and the app will start up (i.e. not missing a new dependency). Docker sounds good as well. I'm not sure where that fits into the GitHub workflow world, whether there would be one action that runs a script that does all the docker stuff, or whether the workflow is actually run in a container that you can leverage somehow. I now have a Raspberry PI 3B available as an additional worker.
Having a Raspberry PI 3B as an additional worker is really great news! This week I had fewer cycles to dedicate to the PR, I'm sorry for the delay, but I'm still working on it. Here is an example:
There are a few of them but I think I can handle it, even if C++ isn't the language I'm most proficient in.
No worries, any help is appreciated!!!
Hello, I've finished implementing the general idea for packaging the application. It's still WIP and needs to be polished, but you can have a look to check whether it is going in the right direction. I've temporarily disabled treating warnings as errors in order to make the build pass.
Basically it creates the package inside the workflow. I haven't yet found a way to apply retention to pre-releases in order to avoid clutter on each build, and I'm still thinking about the best way to publish the created Docker images in a similar way to (pre-)releases.
About the versioning: I started with your suggestion using the commit hash, but then, looking at how others have dealt with the same problem, I picked up this schema that seems generally accepted and is parsed by some tools: 4.1.1+git20230627.946f17a. It is composed of the last release version (4.1.1), the "git" marker, the commit date (20230627), and the short commit hash (946f17a).
This has the limitation that multiple releases on the same day won't be sorted, but I guess it isn't too critical.
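A sketch of how such a version string could be assembled (the helper name is mine; the example values are the ones from the comment above):

```shell
#!/bin/sh
# Compose a version of the form X.Y.Z+gitYYYYMMDD.<shorthash>.
compose_version() {
    printf '%s+git%s.%s\n' "$1" "$2" "$3"   # base version, commit date, short hash
}

# In a workflow the three parts would come from git, e.g.:
#   base=$(git describe --tags --abbrev=0 | sed 's/^v//')
#   date=$(git log -1 --format=%cd --date=format:%Y%m%d)
#   hash=$(git rev-parse --short HEAD)
compose_version 4.1.1 20230627 946f17a   # -> 4.1.1+git20230627.946f17a
```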
thanks for all the work, this ended up being much more involved than I expected!!
I will merge the PR causing the warnings shortly
I added a bunch of comments / questions, most are for my understanding
    set -o errexit

    if [ -L /usr/share/rtl_airband/config/default.conf ]; then
        rm /usr/share/rtl_airband/config/default.conf
if a "normal user" is using symlinks for their own config file this would remove that as well.
If the user is creating a symlink named default.conf in that directory (root owned), yes, it will be removed, because from the script's perspective there is no way to distinguish it from an untouched installation.
That is also the reason for the asymmetric check during installation for the existence of a file instead of a symlink: to avoid overriding (or failing) if a non-script-created default.conf file is already there, which should be a very unlikely case since it's under a package-reserved directory.
I wrote the postrm script to clean up the whole installation and leave the system as it was before installing the package.
Either way, if the user wants to change the default file to read, I think it's best that they modify the local systemd config instead of touching the package directory.
I have one question about the program's current behavior when no configuration file is passed as a parameter: wouldn't it be better to look for a configuration file in the current directory (e.g. ./rtl_airband.conf) instead of using a hard-coded path? Or make that parameter required? I find it very unusual for a program to look for its configuration in /usr/local/etc/ by default, especially if it's installed as a package :-)
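One way to make the postrm guard less aggressive, as a hedged sketch (the helper name and paths are illustrative, not from this PR): remove the symlink only when it still points at the file the package itself installed, so a user-created symlink pointing elsewhere is left alone.

```shell
#!/bin/sh
# Remove $1 only if it is a symlink that points exactly at $2
# (i.e. the link this package created); any other symlink is kept.
remove_if_package_link() {
    link="$1"; packaged_target="$2"
    if [ -L "$link" ] && [ "$(readlink "$link")" = "$packaged_target" ]; then
        rm "$link"
    fi
}
```

This still can't distinguish a user symlink that happens to point at the packaged file, but it narrows the case discussed above.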
    mkdir --parents "${package_root}/lib/systemd/system/"
    sed 's|/usr/local/bin|/usr/bin|g' init.d/rtl_airband.service > "${package_root}/lib/systemd/system/rtl_airband.service"
    mkdir --parents "${package_root}/usr/bin"
    cp build_Release_NFM/src/rtl_airband "${package_root}/usr/bin/"
only packaging the NFM version, what would it take to build / package additional versions in the future? could the build directory be an input parameter to this script?
We are already building all 4 variants at the same time, so it would be a matter of parameterizing that line.
I picked the non-debug NFM-enabled variant as I thought it was the most useful from a release point of view, but I don't have a strong opinion on that.
The only downside I see in packaging all of them is how to put the variant information in the package name in a nice way.
I also thought about including all of them in the same package with different executable names, but that would work as an "internal" package, not as a release one.
Maybe it would make sense to package the two non-debug variants if there is a big performance difference from excluding NFM, but I'm not aware of one.
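Parameterizing the variant could be as small as deriving the build directory from a script argument; a sketch, with names assumed from the build_Release_NFM path quoted above:

```shell
#!/bin/sh
# Map a variant name to its build directory, defaulting to the
# non-debug NFM build currently being packaged.
build_dir_for() {
    printf 'build_%s\n' "${1:-Release_NFM}"
}

# The copy line in the packaging script would then become e.g.:
#   cp "$(build_dir_for "$1")/src/rtl_airband" "${package_root}/usr/bin/"
build_dir_for            # -> build_Release_NFM
build_dir_for Release    # -> build_Release
```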
        soapysdr \
        pulseaudio \
        pkg-config
    brew link openssl
is openssl a new dependency? should it be included in the install list as well?
It isn't a new dependency and it is installed by default; it is used by libshout3 if I remember correctly, so probably a hidden one.
On macOS 13 the link isn't created by default, so the build fails because it cannot find openssl (even though it's installed).
I don't know if it's a new default in macOS 13 or a bug in the setup configuration of the worker VM.
        *.tar.gz
      retention-days: 7

  build-foreign-arch:
what is this doing? it's running on a linux host and using docker to build amd64, arm64, arm/v7, and arm/v6? how does it differ from a matrix build? if it is independent can it be its own .yml?
It can be put in its own YAML file. As a first version I put it together because it was easier to link this job as a dependency of the previous one; I'll split it into a separate one.
I used this approach to get builds for foreign architectures in an easy and reliable manner. Another option would be to use gcc's cross-compile capability (and it may also be faster), but with cross-compilation there is a small chance that linking against dynamic libraries could have issues.
Same thing could be done on macOS (amd64 and arm64) if needed.
The main advantages of building this way are:
- the build environment is under our control and can be replicated on a local machine, so there is no dependency on the workers' configuration or package versions and no side effects; this makes debugging the build much easier (in fact, the versions of the libraries we compile against come from that OS release, not from a project requirement).
- we are not limited to the architectures available on GitHub (e.g. RISC-V 32/64-bit in the future, maybe?)
- it doesn't require modifying the worker, so there is better isolation, and pre-configured images could also be used.
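If several of those platforms end up being packaged, the buildx platform string has to be translated into a Debian architecture name for the package; a hedged sketch of that mapping (the arm/v6 entry in particular is an assumption, since Debian's armel/armhf split doesn't line up exactly with Docker's arm variants):

```shell
#!/bin/sh
# Map a Docker buildx platform string to a Debian architecture name.
deb_arch_for_platform() {
    case "$1" in
        linux/amd64)  echo amd64 ;;
        linux/arm64)  echo arm64 ;;
        linux/arm/v7) echo armhf ;;
        linux/arm/v6) echo armel ;;   # assumption: may need armhf for Raspberry Pi OS
        *) echo "unsupported platform: $1" >&2; return 1 ;;
    esac
}

deb_arch_for_platform linux/arm/v7   # -> armhf
```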
I had a look at dependencies between workflows and it is possible to link them, but a linked workflow will run only on the default branch (see the notes).
What can be done is to create a parent workflow that calls the child workflows in order. I'm not sure it is really helpful at this size, though.
      - name: Install packaged dependencies
        run: .github/install_dependencies

      - name: Configure
there's a lot of overlap between these steps and Dockerfile / script/package.sh, can this all be done in a single place / script that is leveraged by the two flows?
Actually this package.yml is the old one that I didn't delete; it should be merged/substituted with the "foreign-arch" one.
So the final flow should be:
build OS/arch matrix > package > join all build artifacts, then pre-release if on a PR or release if on a tag
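That chained flow could be expressed with GitHub's reusable workflows; a sketch, where the file names and the tag trigger are assumptions:

```yaml
# Parent workflow calling child workflows in order via reusable workflows.
name: release
on:
  push:
    tags: ['v*.*.*']
jobs:
  build:
    uses: ./.github/workflows/build.yml
  package:
    needs: build
    uses: ./.github/workflows/package.yml
```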
Co-authored-by: charlie-foxtrot <[email protected]>
About this one, the only possible solution at the moment is to run a "clean" action that implements a retention policy to remove images older than X days.
This PR creates a .deb package when a tag is created following the vX.Y.Z pattern.
Since the libsoapysdr0.8 dependency package is available only in recent releases (Ubuntu 22.04 and Debian 12, released 2023-06-10), the package's dependencies won't resolve on older systems.
For some other dependencies (like libpulse0) there wasn't a clear version to select; if anyone reports a problem, the version can be adjusted accordingly.
As agreed in #384, the package is named after the executable instead of the project, so rtl-airband.
Any suggestions on how to improve the packaging are welcome.