Fix Django CSRF protection behind proxy automatically #366
Comments
I am pretty sure this is a very easy change... I am a little surprised that no one has done this yet. I can try, but I have limited experience with this setup. I can, however, outline what needs to be done so that maybe one of the devs can make the actual change easily: There is already code to inject the
All that needs to be done is to add a new variable, such as what you suggested:

trusted_origins_str = '["' + '","'.join(get_conf('CSRF_TRUSTED_ORIGINS', 'seafile.example.com').split(',')) + '"]'
# get_conf, join and topdir are presumably the helpers the existing bootstrap code already uses
with open(join(topdir, 'conf', 'seahub_settings.py'), 'a+') as fp:
    fp.write('\n')
    fp.write('CSRF_TRUSTED_ORIGINS = ' + trusted_origins_str)

Note that the above snippet allows many domains to be configured in the environment variable, like so:
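For example (a sketch of what the snippet above produces; the domains are placeholders), a comma-separated value such as CSRF_TRUSTED_ORIGINS=https://seafile.example.com,https://files.example.com would be expanded into a Python list literal:

# Illustration only: 'raw' stands in for the value returned by get_conf('CSRF_TRUSTED_ORIGINS', ...)
raw = 'https://seafile.example.com,https://files.example.com'
trusted_origins_str = '["' + '","'.join(raw.split(',')) + '"]'
# trusted_origins_str is now: ["https://seafile.example.com","https://files.example.com"]
# and that is the line appended to seahub_settings.py as CSRF_TRUSTED_ORIGINS = [...]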
Why not modify seahub_settings.py directly instead of depending on another environment variable?
Because the environment variable can be set in something like Portainer (which is how I deploy Seafile through Docker), and right now, on version 10, Seafile can be configured entirely through those environment variables without needing any other means of configuring seahub_settings.py.
Out of curiosity, how would you recommend configuring seahub_settings.py consistently in a config-managed environment without manually editing the file? It's entirely likely that there's some expected means of configuring this that I am missing, but the only thing I can think of is going into the container and modifying the settings, which would get undone every time the container is re-deployed, no? Either that, or I guess I could mount a volume with settings for the container, but that risks messing up all the scripts that currently modify those settings.
I looked at my deployment and found that yes, seahub_settings.py does appear to be mounted on the volume, so I guess the assumption is that I would modify it there to persist. That makes sense, but it's still markedly more complex to do that in a configuration-managed environment. If using something like Portainer, this essentially introduces an undocumented change. If using Ansible (which is the other way I deploy these things), it's considerably more effort to modify the file consistently than to deploy it in a single task/module with the relevant environment variable. I understand that we can't just add ALL the variables as environment variables; at some point a line has to be drawn before the entire codebase is just environment variables. However, this seems to be an issue a LOT of people run into, precisely because deploying behind a reverse proxy is such a common pattern. Therefore, I would argue that this should be included as an environment variable.
For what it's worth, I deploy via Ansible and behind Caddy as a reverse proxy, and have always found the current docker deployment lacking. For non-pro there is a nice alternative available, but for my needs I've adapted my deploy based on something similar to what @undergroundwires did here. My docker compose lists all services as needed with corresponding volumes, starts the stack, downs it, and then runs a patch-seafile.Dockerfile that, as the running user, copies in a python script that modifies things according to deployment needs, plus env files for each service. The stack then gets restarted, voila. Works nicely enough for me. I do wish, though, that the docker deployment were a little less frustrating than it has been before. I'm yet to migrate a couple of servers from 10 pro to 11 pro via Ansible. It's probably gonna be another tab war to catch everything, so I'm still pushing it off...
Yeah, I totally get that. I've been deploying Seafile with Ansible, following essentially the same approach as above. In fact I was just looking over how I dealt with this issue in Ansible, and of course it's a … I very much appreciate the link to @undergroundwires's solution though; that's pretty cool and I might be able to adapt it. But I still think the proper approach would be to add the environment variable as detailed above.
I agree regarding this specific issue. In general, though, I've found that there's too much to change based on different deployment needs, and the above approach works nicely for CI/CD stuff. I keep everything in a Gitea repo, clone it for the specific deployment and adapt service-specific env files (by hand; could do it via Ansible vars); then, upon deploy, Ansible pulls that clone to the target. Another task runs a shell script that generates the needed passwords and replaces placeholders in the env files. Then Ansible calls docker as described above, where the python script uses the .env files for modification. What I'm always struggling with is getting a new staging environment running upon major version changes, since in my experience things tend to break. Once that's done and works as expected, I migrate a test server and, once it has passed, deploy via Ansible. Having a git repo on the target allows me to pull changes and simply update based on my workflow. Hope this makes sense? And yes, indeed, @undergroundwires' fix was a very nice find in that regard :)
I would also like to have some way of deploying Seafile v11 as an HTTP/insecure docker container behind a reverse proxy (I use Traefik) which manages HTTPS/LetsEncrypt, without having to manually log in to the deployed container and edit its files (i.e.
@freeplant There are multiple reasons here, I think:
...or like me (back in November-December 2023), or like so many other enthusiasts, I guess.
After migrating to Seafile 11 running Django 4.0, users behind proxies start getting "Origin checking failed - XX does not match any trusted origins." errors. Handling this is documented in the Server FAQ. Setting CSRF_TRUSTED_ORIGINS = ["{url}"] in {data_dir}/seafile/conf/seahub_settings.py solves the issue. It would be nice if the docker instance configured this automatically for a seamless experience, so that not everyone spends hours debugging this like me.
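For reference, this is what the manual fix looks like in practice (the hostname below is only a placeholder; note that Django 4.x requires the scheme to be part of each entry):

# Appended to {data_dir}/seafile/conf/seahub_settings.py - example hostname, adjust to your deployment
CSRF_TRUSTED_ORIGINS = ["https://seafile.example.com"]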
It would be nice to introduce a variable like PROXY_ROOT_URL or PROXY_BASE_URL_WITH_SCHEME. So in this code it could set CSRF_TRUSTED_ORIGINS = ["{PROXY_ROOT_URL}"] (if PROXY_ROOT_URL is defined) to the URL. Or just use {proto}://{domain}, utilizing FORCE_HTTPS_IN_CONF, without introducing any new configuration. Solves #347, and a lot of other StackOverflow, forum and GitHub issues.
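A minimal sketch of the second option, assuming the get_conf/join/topdir helpers from the existing bootstrap code and the SEAFILE_SERVER_HOSTNAME and FORCE_HTTPS_IN_CONF variables the image already reads (PROXY_ROOT_URL is the proposed new variable and purely illustrative here, not an existing setting):

# Sketch only - not the actual implementation.
proto = 'https' if get_conf('FORCE_HTTPS_IN_CONF', 'false') == 'true' else 'http'
domain = get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com')
# Prefer an explicit PROXY_ROOT_URL if set, otherwise derive the origin from existing settings.
root_url = get_conf('PROXY_ROOT_URL', '{proto}://{domain}'.format(proto=proto, domain=domain))
with open(join(topdir, 'conf', 'seahub_settings.py'), 'a+') as fp:
    fp.write('\nCSRF_TRUSTED_ORIGINS = ["{url}"]\n'.format(url=root_url))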