Unable to login using a Docker image #149
Hi @kang-makes, and thanks for the comprehensive issue.
It's correct:
You can modify the log file location with
It's only a Docker thing to be honest; there's no PHP involved. I haven't found the time to start on it yet, but my approach would be to run Caddy as a reverse proxy in the container, with supervisord doing the process management, as that's pretty standard nowadays.
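For illustration, a supervisord setup along those lines could be sketched as below. This is a minimal sketch, assuming php-fpm and Caddy are both installed in the image; the paths and the Caddyfile location are assumptions, not files from this repository:

```ini
; /etc/supervisord.conf — minimal sketch (illustrative paths)
[supervisord]
nodaemon=true                 ; stay in the foreground so Docker tracks PID 1

[program:php-fpm]
command=php-fpm -F            ; -F keeps php-fpm in the foreground
stdout_logfile=/dev/stdout    ; forward output to the container's stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:caddy]
command=caddy run --config /etc/caddy/Caddyfile
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
```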
You're trying to log in to the admin dashboard with a user account. The admin account is a single account that uses the
First try is #150 if you want to have a look
Hi! Thank you very much for answering this fast and starting work on it. I am kind of busy these days and cannot spend time on this, but I'll review it this Thursday/Friday. Cheers :)
@kang-makes would you be so kind as to test https://github.com/tchapi/davis/pkgs/container/davis-standalone and see if it fits your needs? Thanks
Hi @tchapi! I am not ignoring you; I just got sick last week.

The thing is, I don't like the approach you are taking, and, to open a good conversation, as talk is cheap, I prefer to talk over a PR. So I created PR #152. As I am the one who wants this functionality and has the knowledge to develop it, I would like to help. I cannot help on the PHP side, but I can on the Docker one.

PR #152 is an iterative approach. Instead of a gazillion Docker images with duplicated code, the point is to create a slim image that is used as a base, with supervisor/webservers installed on top of it. As a first approach, I started with a multi-stage Docker build.

The PR is still missing CI, build, and push; it will take me a couple more days. We can discuss on the PR once I finish, if you like. Take a look and let me know what you think :)
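The "slim base plus flavor on top" idea can be sketched roughly like this; the file name, base tag, and package names are assumptions for illustration, not the actual contents of the PR:

```dockerfile
# Dockerfile.caddy — sketch of a "flavor" image built on top of a slim base.
# BASE_IMAGE is supplied at build time, e.g. --build-arg BASE_IMAGE=davis:devel
ARG BASE_IMAGE=davis:devel
FROM ${BASE_IMAGE}

# Add the webserver and process manager on top of the php-fpm base
RUN apk add --no-cache caddy supervisor

COPY Caddyfile /etc/caddy/Caddyfile
COPY supervisord.conf /etc/supervisord.conf
CMD ["supervisord", "-c", "/etc/supervisord.conf"]
```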
Ok, @tchapi. I have it. I'll give you a tour.

First comes PR #152: it is finished and ready to be reviewed. It seems that the image is built twice, but I added the cache lines, so it should use GitHub's cache service to cache all the layers, making the build almost instantaneous after the first one. I ran the workflows on my fork, and here you have a couple of examples. Notice the 20 minutes of difference between the first build and the other 3 builds. Once that is merged, building and publishing images should be really fast, and you are testing the build at PR time. You might add integration tests if needed. I do not know how to do that in PHP, but I can help you with the boilerplate if you teach me.

Then, the second PR adds the flavor to the images, so you can have an image tagged

You can see the difference between PR #152 and the following one on my fork, which I left as an example: kang-makes#4. Note that it is an example so you can understand the approach I had in mind: having a PHP-FPM image and using that image as a base to add the webserver on top. In case you need to develop or debug the image locally, you just have to build the base one. And as I shuffled the layers a bit, adding the code in the last possible layer, builds between iterations, in case you develop on Docker, should be blazing fast.

Let me know if you have any further questions about my approach; we are ready to discuss it. If you like, you can review and/or merge #152 and I'll finish kang-makes#4 so you can release a new version with caddy and nginx :D
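For reference, "lines for cache" with docker/build-push-action typically look like the snippet below. This is a sketch of the general pattern, not the exact workflow from the PR:

```yaml
# Sketch of a workflow step using GitHub's cache service for Docker layers
- name: Build image
  uses: docker/build-push-action@v5
  with:
    context: .
    push: false
    cache-from: type=gha          # reuse layers cached by previous runs
    cache-to: type=gha,mode=max   # cache every layer, not only the final ones
```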
Hello, thanks for your work on this. I'll admit I have mixed feelings:
Thanks for the feedback, though 😅
I think that's a little overstated — there is one package to date in this repository, with one flavor per architecture, so I'm not sure I get your point. Where are these gazillion Docker images? The actual point of having Docker packages in this repository is to provide users with a simple way to install Davis on their stack without having to build anything. Going the "combined docker image" route means that I'll provide two packages instead of one:
I won't provide an Nginx flavor, for instance, as it's an implementation detail that is irrelevant if you want to use the combined Docker image itself (and to Davis as a piece of software in general). Moreover, having many options is a maintenance burden that I don't have the appetite to support. On the tags vs. packages difference, I have no strong opinion. About PR #152
General remarks on your rationale
Yes and no, since we're relying on a base image (8.2-fpm-alpine) and packages that are not pinned, because we want to benefit from upstream changes. So it will be faster if you iterate fast, but not on a longer timeframe unfortunately.
Sure, if we want to build the images at PR time, which I don't see the benefits of. I was fine with a 20min build now since it's only made at release time, and I'm not releasing 4 times per day (barely once per month). Given that the base image will likely have been updated between two releases, the cache won't be used (or wanted).
I understand it, yes. This could be a good thing indeed. I'm not sure how to fill in the $BASE_IMAGE when building locally in a docker compose context, though, if you care to show me? Side note: GitHub workflow files are very cumbersome and hard to read, understand, and maintain, so I won't add much more complexity there if possible.

TL;DR → All in all, I can see the benefits of building the PHP extensions in a separate layer 👌🏼 (but keep in mind it's only a very small 8% space saving)
Hi!
I might have been harsh here. I did not mean to be; English is not my first language. Sorry.
I meant layers, not images. The more layers you have, the worse the performance on clusters that hold too many images. And I do not want to talk as if I were running this on the busiest server in the universe, but a home server on an RPi could start to struggle after a few images that are not well packaged.
Fair enough.
Here I made a wrong assumption. You stated "The actual point of having Docker packages in this repository is to provide users with a simple way to install Davis on their stack without having to build anything," and it is a valid point. The assumption I made is that you might develop this application on Docker itself, so you work and test in the same image you are deploying (as I am used to doing). This is the main reason I also changed the layer order, to include the code as late as possible, so you do not have to rebuild everything for each test.
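The layer-ordering idea (dependencies early, application code as late as possible) can be sketched like this. The file names are illustrative, and composer is assumed to be available in the base image:

```dockerfile
FROM php:8.2-fpm-alpine

WORKDIR /var/www/davis

# Dependency layers first: composer.json/lock change rarely, so these
# layers stay cached across code-only rebuilds.
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --no-autoloader

# Application code last: only this layer onwards is rebuilt on each change.
COPY . .
RUN composer dump-autoload --optimize --no-dev
```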
Fair point.
Sure:

```shell
docker build -t davis:devel .
docker build -t test -f Dockerfile.caddy --build-arg BASE_IMAGE=davis:devel .
```
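For the docker compose question: build args can also be filled in from the compose file itself, along these lines (the service name is illustrative):

```yaml
# docker-compose.yml — sketch: pass BASE_IMAGE as a build arg
services:
  davis:
    build:
      context: .
      dockerfile: Dockerfile.caddy
      args:
        BASE_IMAGE: davis:devel   # or ${BASE_IMAGE:-davis:devel} from the env
```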
I can help you if you need to.
As I said before and did not explain correctly, the point was not only to save a few bytes, but also to reduce the number of layers, and to reduce the number of files that sit in layers but are not accessible in the final one, so you do not need to squash.

I am used to reviewing PRs directly on the PR itself, like this. I saw that you have some concerns about the PR missing files that are needed in the final image, and I have removed them; the same for some CI steps. If you want, you can add all your concerns there as comments, so I can address each issue in a separate thread and we can follow the conversation without problems.

I would like to address these issues and merge PRs one at a time, so it is not a single PR with 1000 changed lines that is hard to read. After this one, which already includes LDAP and IMAP, we can iterate and add Caddy in a subsequent PR.
I have experience with clusters, Docker, and CI, and I can help you if you want. As you can see, I have a cadence of about 2 weeks when I can find the time/energy for FOSS, but I can help. You can ping me on any change you need for the review, always respecting your view. Let me know what you think :)
Any news on this? Should I still work on this, or just test your approach and see if it fits my needs?
Sorry, I have been taken up by a lot of pressing matters which did not allow me to contribute further. For now, I'd be open to merging the weight / number-of-layers reduction in the Dockerfile as you proposed in #152; it needs a few changes though (mainly my comments above: composer in the final layer, no changes in CI files, plus adding back the missing app files in the image). Also, the

Once we have that baked in, we can have a look at the rest, but I'm leaning more strongly towards not complexifying things with a base image plus another image on top, to be honest
I had to do an ugly hack to make this software work on Docker/Kubernetes, because the reverse proxy was unable to find the files.
These are the configurations:
Then, this is a snippet of the manifest I am using to deploy it.
In case you are not familiar with Kubernetes, I can explain:

At `volumes` I specify an empty volume where I am going to save the files at `public`, so the proxy and the php-fpm container share the same folder. The other configMap is used as key-value pairs and mapped as environment variables using this:
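A sketch of what such a pod spec can look like is below. Image names, mount paths, and the configMap name are assumptions for illustration; this is not the author's elided manifest:

```yaml
spec:
  volumes:
    - name: public
      emptyDir: {}              # shared between containers, empty at pod start
  containers:
    - name: php-fpm
      image: ghcr.io/tchapi/davis:latest   # hypothetical tag
      envFrom:
        - configMapRef:
            name: davis-env     # key-value configMap mapped to env vars
      volumeMounts:
        - name: public
          mountPath: /var/www/davis/public
    - name: proxy
      image: nginx:alpine
      volumeMounts:
        - name: public
          mountPath: /var/www/davis/public
          readOnly: true        # the proxy only serves the static files
```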
On `initContainers` I run one-shots before starting the other long-running containers. The one-shots copy everything in `public` to the empty volume I explained above, and the other one runs the migrations so I do not have to do it manually; on each restart/upgrade it is going to migrate automatically.

Well... it works, but I am unable to log in. I created, using `admin`, a user that is an administrator:

If I use the admin dashboard, it says that the credentials are invalid:

But if I use the same credentials on `/dav`, I am able to log in:

If I go to the log folder, it is completely empty.
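The init-container pattern described above can be sketched like this. The paths, image tag, and commands are assumptions; in particular, since Davis is Symfony-based, the migration one-shot is assumed to be a `bin/console` Doctrine command:

```yaml
initContainers:
  - name: copy-public
    image: ghcr.io/tchapi/davis:latest   # hypothetical tag
    # copy the baked-in public/ files into the shared empty volume
    command: ["sh", "-c", "cp -a /var/www/davis/public/. /shared-public/"]
    volumeMounts:
      - name: public
        mountPath: /shared-public
  - name: migrate
    image: ghcr.io/tchapi/davis:latest
    # run pending DB migrations on every restart/upgrade
    command: ["sh", "-c", "php bin/console doctrine:migrations:migrate --no-interaction"]
    envFrom:
      - configMapRef:
          name: davis-env      # hypothetical configMap with DB credentials
```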
Here are all the questions/suggestions I am facing during the deployment of this service:
I don't know if the log folder I am looking into is correct. I know that it is documented, but it seems odd that it is `/var/www/davis/var/log` instead of `/var/log/davis`.

On the other hand, having log files is an antipattern for Docker images, as (ideally) Docker images should be run read-only (adding volumes for the folders that need to write something, like session files or vcards or whatever), and logs should not be written to a file but to stdout/stderr, so you can use `docker logs` to look at the logs of the container.

As the reverse proxy also needs the files, having them in two images seems to be an issue as well. I see a closed issue here and a roadmap card here. I am not a PHP expert, so I cannot help you develop it, but I can give you a helping hand with the Docker image and its deployment, if you give me a status update on this and guide me a bit on the approach you are looking for.
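One common workaround for the file-logging antipattern is to symlink the log file to the container's stdout at build time. A sketch, assuming the log directory mentioned above; the exact log file name (`prod.log`, a Symfony-style convention) is an assumption:

```dockerfile
# Sketch: forward file-based logs to the container's stdout so that
# `docker logs` shows them. prod.log is an assumed (Symfony-style) name.
RUN mkdir -p /var/www/davis/var/log \
 && ln -sf /dev/stdout /var/www/davis/var/log/prod.log
```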
Now, about the issue itself: I do not know how to debug this password problem, because I cannot tell whether the issue is in the way I am copying the files or in the backend itself, as I have no logs at all of what is happening.
I am happy to help debug it.