[chore] Build docker image for PHP auto-instrumentation #3409
base: main
Conversation
This is hidden behind a feature flag. Nothing changes by default.
password: ${{ secrets.GITHUB_TOKEN }}

- name: Prepare files for docker image
  run: ./autoinstrumentation/php/prepare_files_for_docker_image.sh --ext-ver ${{ env.VERSION }} --dest-dir ${PWD}/autoinstrumentation/php/files_for_docker_image
How about running this inside the Dockerfile, as part of a multi-stage build?
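For what it's worth, a minimal sketch of that idea might look like the following. The base images, stage names, and paths are illustrative assumptions, and it presumes the prepare script could be reworked to run inside the build rather than spawning sibling containers:

```dockerfile
# Hypothetical sketch only: run the preparation step inside a builder stage
# instead of on the CI host. Base images and paths are illustrative.
FROM php:8.2-cli AS builder
ARG EXT_VERSION
WORKDIR /build
COPY autoinstrumentation/php/ .
# Assumes the prepare script could run directly in this stage rather than
# spawning separate docker containers on the host.
RUN ./prepare_files_for_docker_image.sh --ext-ver "${EXT_VERSION}" --dest-dir /build/out

# The final image only carries the produced auto-instrumentation files.
FROM busybox:latest
COPY --from=builder /build/out /autoinstrumentation
```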
Thank you - good idea. I'll try it, though I wonder if it will work, considering that the script runs other docker containers...
I tried running the script in the Dockerfile during the build, but for it to work /var/run/docker.sock has to be mounted from the host (because the script spawns docker containers to build various files). Unfortunately, it seems it is not possible to mount /var/run/docker.sock from the host during the build phase of a docker image.
If it runs other containers, I think it doesn't make much sense to mount the docker sock. It simply makes the script harder to execute on machines where only e.g. podman is available.
I'd like to second the request to run all of this as a multi-stage docker build instead. That will make it much easier to maintain. I see that the script requires running docker images - in my view, you should instead rework it so the Dockerfile itself accepts the PHP version and libc flavor as arguments. If you can't fit everything in a single Dockerfile, multiple different ones are also fine.
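As an illustration of that suggestion, a parameterized Dockerfile could look roughly like this. The base images, stage names, and the `build_extension_and_libs.sh` helper are hypothetical placeholders, not the actual build recipe:

```dockerfile
# Illustrative only: a single Dockerfile parameterized by PHP version and libc flavor.
ARG PHP_VERSION=8.2
ARG LIBC_FLAVOR=glibc

# One builder per libc flavor; BuildKit only builds the stage actually referenced below.
FROM php:${PHP_VERSION}-cli AS build-glibc
FROM php:${PHP_VERSION}-cli-alpine AS build-musl

# Select the builder stage via the LIBC_FLAVOR build argument.
FROM build-${LIBC_FLAVOR} AS builder
WORKDIR /build
COPY autoinstrumentation/php/ .
# Placeholder for the steps that build the extension and PHP libraries.
RUN ./build_extension_and_libs.sh --dest-dir /build/out

FROM busybox:latest
COPY --from=builder /build/out /autoinstrumentation
```

Each combination could then be built with something like `docker build --build-arg PHP_VERSION=8.3 --build-arg LIBC_FLAVOR=musl ...`, for example from a CI matrix.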
Thank you for the clarification. I understand that the workflow builds multiple images using QEMU, one image per CPU architecture; the part I am not clear about is how the corresponding image is selected at runtime. Namely, how does pkg/instrumentation/python.go know which image to copy files from? Will the correct image with auto-instrumentation files be selected based on the CPU architecture used by the docker image with the instrumented application? Do I understand correctly that that determination will occur after pkg/instrumentation/python.go execution?
And on a separate note, could you cleanly rebase your changes on main? Right now it looks like you have a very messy merge in there.
Yes, I did rebase my changes on main - should I have done a merge instead of a rebase? Is there a way to fix the current messy state?
That said, looking at your build script more, I can see why it wouldn't be that easy to switch to that method.
Won't switching to a multi-stage image approach (either by generating the Dockerfile or writing it manually) handle the CPU architecture automatically?
Thank you for the clarification. I understand that the workflow builds multiple images using QEMU, one image per CPU architecture; the part I am not clear about is how the corresponding image is selected at runtime. Namely, how does pkg/instrumentation/python.go know which image to copy files from? Will the correct image with auto-instrumentation files be selected based on the CPU architecture used by the docker image with the instrumented application? Do I understand correctly that that determination will occur after pkg/instrumentation/python.go execution?
The operator doesn't know the CPU architecture of the image. It could find out, but it doesn't care what it is. The container runtime on the K8s Node (containerd in most cases) will simply download the right image for the Node's CPU architecture.
Won't switching to a multi-stage image approach (either by generating the Dockerfile or writing it manually) handle the CPU architecture automatically?
It will, but a Dockerfile with 6+ build stages, one for each combination of libc+php, is going to be messy and repetitive. I'm wondering if there's an elegant way of handling this.
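One possible way to keep the architecture dimension out of the Dockerfile entirely is to rely on buildx and the automatic TARGETARCH build argument, and handle only the libc/php combinations via build arguments or a CI matrix. A rough sketch, with the same hypothetical placeholders as above:

```dockerfile
# Illustrative sketch: CPU architecture doesn't need dedicated build stages.
# With `docker buildx build --platform linux/amd64,linux/arm64`, BuildKit runs
# this Dockerfile once per platform and pushes a single multi-arch manifest;
# containerd on each Node then pulls the variant matching its architecture.
ARG PHP_VERSION=8.2
FROM php:${PHP_VERSION}-cli AS builder
# TARGETARCH is supplied automatically by BuildKit (e.g. "amd64" or "arm64").
ARG TARGETARCH
WORKDIR /build
COPY autoinstrumentation/php/ .
# Placeholder build step; a real recipe would come from this PR's scripts.
RUN ./build_extension_and_libs.sh --arch "${TARGETARCH}" --dest-dir /build/out

FROM busybox:latest
COPY --from=builder /build/out /autoinstrumentation
```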
I agree that neither of the discussed proposals is perfect, but this docker image is just a bag of files and it's not used directly by the end user.
Maybe for now we can implement whichever proposal is good enough, get the feature out, receive feedback, and then improve upon it if/when necessary?
I marked autoinstrumentation/php/prepare_files_for_docker_image.sh as executable and added "[chore]" to the PR title to skip the changelog check. Please take a look.
@SergeyKleyman can you clean this branch up so GitHub doesn't show all the changes from main?
Description: Part 1 of #3331, adding support for auto-instrumentation of PHP applications in the same way as it's already implemented for other runtimes such as Java, .NET, etc.
As requested, this PR handles only the part that publishes the image.