
Multi-stage build for Linux AArch64 #35

Merged · 2 commits merged into PyAV-Org:main on Feb 24, 2022
Conversation

@odidev (Contributor) commented Feb 22, 2022

Review thread on scripts/cibuildpkg.py (outdated, resolved)
@jlaine (Member) commented Feb 22, 2022

This is awesome, let's see how it goes!

if: ${{ steps.libs-cache.outputs.cache-hit != 'true' }}
run: |
  docker run -v $PWD:/project:rw --workdir=/project ${{ env.img }} bash -exc '
    yum -y install gperf libuuid-devel zlib-devel;
@jlaine (Member):

The Python script already invokes yum, do we need it here too?

@odidev (Contributor, Author) commented Feb 22, 2022:

Yum requires liblzma.so, which is already present on the system. We are building a different version of liblzma.so for our own purposes and adding its path to LD_LIBRARY_PATH. If we let the script execute yum, yum will pick up our built liblzma.so, which is incompatible and causes the installation to fail.

To avoid this scenario we need to execute yum before adding liblzma.so's path to LD_LIBRARY_PATH.
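
A minimal sketch of the ordering being described here, assuming yum is driven through subprocess and that the vendored libraries end up under /tmp/vendor/lib (the /tmp/vendor prefix comes up later in this thread; the lib subdirectory is an assumption):

import os
import subprocess

# Install system prerequisites while yum still resolves the system liblzma.so ...
subprocess.run(
    ["yum", "-y", "install", "gperf", "libuuid-devel", "zlib-devel"],
    check=True,
)

# ... and only afterwards point the dynamic loader at the vendored libraries,
# so yum never loads the incompatible, freshly built liblzma.so.
vendor_lib = "/tmp/vendor/lib"  # assumed location of the vendored builds
os.environ["LD_LIBRARY_PATH"] = (
    vendor_lib + ":" + os.environ.get("LD_LIBRARY_PATH", "")
).rstrip(":")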

@jlaine (Member):

I don't think you need to touch LD_LIBRARY_PATH at all for stage1 and stage2; I think it's only actually needed to test the wheel, which is built at the very end.

@odidev (Contributor, Author):

Please take a look at the code below:

Package(
    name="gnutls",
    requires=["nettle", "unistring", "zlib"],
    source_url="https://www.gnupg.org/ftp/gcrypt/gnutls/v3.6/gnutls-3.6.16.tar.xz",
    build_arguments=[
        "--disable-cxx",
        "--disable-doc",
        "--disable-libdane",
        "--disable-nls",
        "--disable-tests",
        "--disable-tools",
        "--with-included-libtasn1",
        "--without-p11-kit",
    ],
),

gnutls requires nettle, unistring and zlib, and these are being installed into /tmp/vendor. So, to make them visible to the build tool and usable as dependencies, we need to add /tmp/vendor to LD_LIBRARY_PATH. Please correct me if I am wrong.

However, I will run a side experiment based on your suggestion and share the results. :)

@jlaine (Member):

LD_LIBRARY_PATH is not used when linking (we already handle this by setting LDFLAGS); it's used when an executable loads a shared library. The only case I can think of is at the very end, when we test a wheel which is linked against ffmpeg (and transitively against all its dependencies).
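
To illustrate the distinction, a sketch under stated assumptions (not the project's actual code: the /tmp/vendor/include and /tmp/vendor/lib paths and the "import av" smoke test are assumptions, and the configure call presumes an unpacked source tree):

import os
import subprocess
import sys

# Link/compile time: configure and the compiler locate the vendored nettle,
# unistring and zlib through CPPFLAGS/LDFLAGS; the dynamic loader is not involved.
build_env = os.environ.copy()
build_env["CPPFLAGS"] = "-I/tmp/vendor/include"
build_env["LDFLAGS"] = "-L/tmp/vendor/lib"
subprocess.run(["./configure", "--prefix=/tmp/vendor"], env=build_env, check=True)
subprocess.run(["make", "install"], env=build_env, check=True)

# Run time: only the final wheel test actually loads the shared libraries,
# so LD_LIBRARY_PATH can be confined to this one step.
test_env = os.environ.copy()
test_env["LD_LIBRARY_PATH"] = "/tmp/vendor/lib"
subprocess.run([sys.executable, "-c", "import av"], env=test_env, check=True)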


@odidev (Contributor, Author):

Adding LDFLAGS should work, but I am not sure why it is failing in the experiment above.

@odidev (Contributor, Author):

It is failing while running the configure script, and I think LDFLAGS will not help here.

@jlaine (Member):

This is very weird, but I can definitely reproduce it; my test build failed in the same way. I'll try to debug it. This really surprises me because the test in the gnutls configure script uses AC_CHECK_FUNCS, which only relies on the ability to link a program, not execute it.

@odidev (Contributor, Author) commented Feb 22, 2022

@jlaine, can you please approve the workflow?

@jlaine (Member) commented Feb 22, 2022

Watch out, I see you're saving / restoring only /tmp/vendor. However, following some refactoring there are two directories you want to save / restore:

  • /tmp/vendor => these are the files for the "target"
  • /tmp/vendor.builder => these are the files for the "builder". This is where you will find things like nasm and gperf (all packages with for_builder=True).

@jlaine (Member) commented Feb 22, 2022

I've launched my own experiment in #38

@jlaine mentioned this pull request on Feb 22, 2022
@jlaine (Member) commented Feb 23, 2022

Regarding LD_LIBRARY_PATH, I've now remembered why I touch it at all: when repairing the wheels to incorporate external libraries, auditwheel uses LD_LIBRARY_PATH to determine its search path, see:

https://github.com/pypa/auditwheel/blob/f3c7691f258de76c11f679ab85826b730c658c14/src/auditwheel/lddtree.py#L187

I've opened an issue to see whether the auditwheel maintainers are open to adding an explicit option to pass additional paths:

pypa/auditwheel#373
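
For reference, a sketch of how the vendored path can be made visible to auditwheel during the repair step (the wheel filename, output directory, and platform tag are placeholders):

import os
import subprocess

# auditwheel walks LD_LIBRARY_PATH when resolving the shared libraries it
# bundles into the repaired wheel, so the vendored directory is exported
# only around this call.
env = os.environ.copy()
env["LD_LIBRARY_PATH"] = "/tmp/vendor/lib"
subprocess.run(
    [
        "auditwheel", "repair",
        "--plat", "manylinux2014_aarch64",   # placeholder platform tag
        "-w", "dist",                        # output directory for the repaired wheel
        "dist/example-linux_aarch64.whl",    # placeholder input wheel
    ],
    env=env,
    check=True,
)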

@jlaine merged commit 440640e into PyAV-Org:main on Feb 24, 2022
Development

Successfully merging this pull request may close these issues:

  • Support for ARM binary wheels (aarch64)
2 participants