OAuth 2.0 (RFC 6749) violations #5208
Replies: 34 comments
-
Thanks for the report. The enforcement of TLS/HTTPS will not be added, I think, for multiple reasons.
Regarding the uniqueness, that is probably something we should adjust, together with some lifetime/TTL checks probably.
-
I assumed TLS enforcement was not desired; however, I'll remark nonetheless. First, Rocket is TLS capable, so a reverse proxy could still be used. Second, binding Vaultwarden to the loopback interface is an exceptional case, since we are then talking about an environment that has only one client device. Additionally, TLS, while useless in that situation, could still technically be enforced. Last, one could parse the IP in the config file and exempt TLS enforcement iff Vaultwarden is bound to the loopback interface. To me, if you're taking on the burden and responsibility of hosting a password manager and setting up TLS is undesirable, you probably shouldn't be hosting something so important. I realize this project's stance is much more relaxed though.
-
In general, most projects, whether security-sensitive or not, do not enforce TLS on the container side. They leave that to the person who implements it. We do notify in the admin/diagnostics page. The only way I would see this happening is if we generate a CA and self-signed cert ourselves, and use that as a default.
-
I don't have enough data to make assertions like "most projects, even if it is something secure or not do not enforce TLS on the container side", so I won't comment on that, other than to note one could interpret Vaultwarden as the crate and not a "container" (e.g., Docker image). This is just a philosophical difference, so I won't spend much more time arguing my stance. There are things that will always be forced by a project (e.g., Vaultwarden doesn't allow the use of UNIX sockets). I'm of the opinion that forcing TLS 1.2 or TLS 1.3 is OK since it simplifies development (e.g., tests only need to occur on TLS-based deployments and not both TLS and non-TLS deployments), improves security for end users (e.g., there may be some that don't even realize that not using TLS means the credentials can trivially be read by anyone), and allows for better RFC adherence. My opinion is in part motivated by my stance that X.509 v3 certificates are the responsibility of the admin, because I agree deploying self-signed certificates does not make sense. Regardless, it will be nice to see enforcement of the uniqueness of refresh tokens as well as detecting the use of an expired token and interpreting that as a compromised token.
-
OK, let's say we enforce it. But I want my TLS to be terminated by my reverse proxy. How would you then determine I have TLS enabled there? Or would you force me to configure TLS on Vaultwarden as well? Adding a variable to allow disabling TLS enforcement would not make development easier, as you suggested. What would you suggest for this scenario?
-
I would indeed force TLS; in fact, that is what I am doing on my WIP Bitwarden-compatible server. How one communicates with the HTTPS server/Rocket/Vaultwarden would be considered out of scope.

Using a reverse proxy server does not ensure secure transmission, even if the reverse proxy server uses TLS 1.2—technically the proper subset of secure ciphersuites of TLS 1.2—or TLS 1.3, since you need to also ensure you have blocked all communication to the HTTP server/Vaultwarden from any client except the reverse proxy server. The only way you can do that is with firewall rules and the hope that no one is spoofing an IP to make it appear to be the reverse proxy server. Firewalls are not a replacement for actual crypto, though; so unless Vaultwarden adds some form of client-side certificate that can be enforced to ensure a specific client is connecting (e.g., a reverse proxy server), not enforcing TLS is strictly weaker.

This justification is not much different than why RFC 6749 requires TLS: it too allows reverse proxies to communicate with the authorization server, but it still requires TLS in all situations (i.e., "reverse proxy servers" are not exempt). This does mean the contrived scenario of running Vaultwarden on the loopback interface would still require TLS even though, from a security perspective, it adds nothing. I don't expect to change your mind though. I understand why a project would take a looser stance. I am merely replying to your questions.
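The client-side-certificate idea mentioned above (mutual TLS) can be sketched with Python's standard `ssl` module. This is purely illustrative, not Vaultwarden code, and the file names are hypothetical:

```python
import ssl

def make_mtls_server_context(server_cert: str, server_key: str, client_ca: str) -> ssl.SSLContext:
    """Build a TLS server context that *requires* a client certificate.

    Only clients holding a certificate signed by `client_ca` (e.g., the
    reverse proxy) can complete the handshake; unlike firewall rules,
    IP spoofing cannot bypass this.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.load_cert_chain(certfile=server_cert, keyfile=server_key)
    ctx.load_verify_locations(cafile=client_ca)
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert
    return ctx
```

With `verify_mode = ssl.CERT_REQUIRED`, the handshake itself authenticates the proxy cryptographically rather than relying on network-level assumptions.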
-
@zacknewman Correct me if I am wrong, but from my understanding RFC 6749 does not require that TLS termination happen on the entity serving authorization tokens. I believe a Vaultwarden installation does not violate RFC 6749 as long as it is fronted by a reverse proxy or other system that does TLS termination.
-
Please cite via links or copy-paste where your "understanding" is coming from. I cited the relevant sections that make it rather cut-and-dried that TLS MUST be used (and, more generally, that confidentiality MUST exist between the client and resource server and between the resource server and the authorization server):

> 10.3. Access Tokens
>
> Access token credentials (as well as any confidential access token attributes) MUST be kept confidential in transit and storage, and only shared among the authorization server, the resource servers the access token is valid for, and the client to whom the access token is issued.

The client is the Bitwarden application the user is using to access their vault. "Transit" means the entire path from authorization server to client. There are no special exceptions stated there, or anywhere else in that document, literally or inferred, about "blessed" intermediaries (e.g., "reverse proxy servers") that don't need TLS.

More generally, the world is (slowly) transitioning to encrypt-by-default, as it should have long ago. This means common setups where "reverse proxy servers" can talk to their backends in plaintext will become increasingly less common, at least in non-hobby setups where security and privacy matter, not much differently than how application servers use TLS when talking to database backends even on a local network. In fact, even the oft-cited (but incorrect) recommendation that SMTP should use STARTTLS (i.e., "TCP port 587") was corrected over 5 years ago with the recommendation of SMTPS (i.e., "TCP port 465"). Obviously there are situations for certain entities where this won't matter, as has already been discussed in this issue. An important point to emphasize is that requiring TLS does not technically preclude setups from happening, even if it slightly complicates them. That is where this conversation becomes philosophical and opinionated, unlike the RFC violations. If the target audience includes hobby projects, then the lower the bar to entry the better, even if that comes at the cost of security and privacy (and RFC violations). For more serious projects, the bar is higher.
Where one draws the line is up to them; and no matter where it is drawn, it will necessarily preclude some people. I already stated the project would likely not change its stance even with the RFC violations, so you don't have anything to worry about, especially based on @BlackDex's responses. As someone who manages many separate domains and services, many of which are protected by a WireGuard server and not just a "reverse proxy server", I think that if handling X.509 v3 certificates is "too much", then you shouldn't be hosting something so important. It is quite easy to manage this, and I don't even use Kubernetes or any other orchestration system. But this is a controversial and opinionated stance that will of course have its detractors. Ideally this discussion stays technical; and until someone shows me where in the RFC I am wrong, I won't retract my claim that there are violations.

To stay on track with the technical: what do you mean by "Bitwarden API"? There are many ways one can define that. Any definition that implies, or is based on, the necessary-but-not-sufficient condition that if any portion of the API requires TLS then TLS should be required by the implementation implies Vaultwarden MUST require TLS, since portions of the API that depend on WebAuthn (e.g., WebAuthn 2FA) indirectly require TLS. Similarly, the Bitwarden API requires HTTP + REST + JSON (in practice REST implies the other two, but it doesn't actually require them), which then raises the question: what is HTTP? That can be HTTP/0.9, HTTP/1.0, HTTP/1.1, HTTP/2, or HTTP/3. The latter two practically and formally require TLS, respectively. The point being, there are both technical and philosophical reasons a Bitwarden-conforming server would require TLS, even though that will cause some entities not to host since it's "too complicated".
-
@zacknewman
Hopefully these examples will illustrate that there are multiple ways of deploying Vaultwarden with strict adherence to RFC specs, and many of them do not require Vaultwarden to worry about TLS implementation. As Vaultwarden is positioned as a lightweight product, I don't see any benefits to adding unnecessary features. And there are even more ways that, while they technically may not adhere to the spec, in reality are just as secure. Oh, regarding
-
Agreed.
Perhaps pedantic, but this is problematic with the requirement that TCP be used. While it may be the case now, and perhaps forever for Vaultwarden, that HTTP/1.1 is used, Bitwarden doesn't mandate that, which means you prevent a perfectly reasonable HTTP/3-based implementation. Of course, generalizing "TCP" to "TCP/UDP" solves the problem, so I agree with that amendment.
Indeed. This is why mathematicians should write RFCs so such ambiguity is less likely—documentation would be substantially longer and more complex though, lol. I would posit that Section 3.2 allows this with the paragraph:

> The means through which the client obtains the location of the token endpoint are beyond the scope of this specification, but the location is typically provided in the service documentation.

I do agree that this is not obvious; thus "usage" must take over, which is to say this conforms.
Disagree. Anyone that can listen to the traffic between the reverse proxy server and Vaultwarden has access to the plaintext data. This wouldn't be true if TLS were used, which means there are more attack vectors. Hopefully the setup is such that the only entities that have access to such traffic are the same entities that have access to Vaultwarden, but that's an added assumption; furthermore, how is that assumption enforced? Firewall rules? Those do not have the same security guarantees that cryptography like TLS has, so I don't believe this conforms.
For the same reasons I think point 4 does not conform, I believe this one does not either.
I think this conforms in spirit, so I have a similar stance as I do with point 3. The RFC seemingly uses TLS as a synonym for "confidentiality", but this is clearly not true from a cryptographic perspective.
I'm not even going to pretend to know enough about what you're talking about. If it's actually the case that you need physical access, then I agree. It is sort of similar to points 3 and 6, where you meet the "confidentiality" aspect without TLS; here, via physical access.
As my replies indicate, I only think a proper subset of those examples conform. I don't think requiring TLS makes Vaultwarden that much less lightweight; and as stated, I don't think some of your examples are "as secure". Like most things, an "expert" will almost always be able to circumvent formal requirements in a way that is as secure or even more secure; however, such experts are not common enough to assume. So I still conclude a Bitwarden-conforming server has good reason to require TLS. If Daniel Bernstein is comfortable not conforming to RFC 6749, then it's likely his reasons are justified, as he's an actual mathematician and cryptographer and has likely forgotten more about crypto than the people involved in the RFC will ever know. Of course, it won't hurt to have his reasons peer-reviewed, but I am not at all qualified to be one of said peers.
I have no experience with that, so I cannot really comment. I'm curious what a company like Google, which has quite an immaculate record regarding security (not so much privacy), does when handling backend communication. Sadly, there are many big companies that I don't respect very much from a security perspective, so those examples are less meaningful to me. More generally, I never stated that it's impossible for specific configurations to be as secure or more secure without conforming to RFC 6749; however, the existence of such possibilities doesn't change my stance. The exception proves the rule, as they say. As an analogy, one should not "roll their own crypto". That rule should be followed by a great many people. An application or library that does not allow users to control certain TLS ciphersuites is perfectly OK even though there exist experts that know more than enough (e.g., Thomas Pornin) to have such control. Is the target audience dominated by Pornins? I don't think so.
-
@zacknewman Basically, option 3 is more secure than option 6 because traffic never leaves the physical host/VM in option 3, while in option 6 it does leave the host/VM while being encrypted by something outside of TLS. And yes, to intercept data in option 7 one would either have to gain remote access to a load balancer/reverse proxy (same attack vector as many other options), gain access to the host running Vaultwarden (same attack vector as all the other options), or get access to the data center, the cage within it, and then install a physical device on the wire.
-
@eoprede, I think you've done enough to show there exist real-world setups that are "secure"; but I still fail to see how requiring TLS on Vaultwarden's side precludes such setups. You need to establish that there exist secure setups without Vaultwarden requiring TLS (check), that such a setup is common enough to even consider (I'll just take this by assumption), and that enforcing TLS prevents such setups (yet to be seen). For example, you claim that doing so will no longer make Vaultwarden "lightweight"; but I don't like blind assertions, especially ones on performance without hard data. Where is the actual data that shows TLS will have a measurable negative impact? Until then, I am highly skeptical. You also didn't show how using Kubernetes makes X.509 certificates "unreasonably" burdensome. So to me the examples given at worst don't benefit, but they also are not "reasonably" affected adversely. This means I think it's a net gain: entities that don't benefit security-wise are not appreciably impacted negatively, and entities that do benefit, well, benefit. I find it hard to believe that you are equipped to support so many services yet one of them requiring TLS prevents the setup. Until one of those two things is shown/explained to me, I don't see how "secure by requirement" would cause an issue. And just to reiterate for external observers, Vaultwarden has made it quite clear TLS won't be enforced; so this is largely academic. For example, can you prove to me that reusing the same X.509 certificate the reverse proxy server uses to enforce TLS is "too complex"? I struggle to see how you can easily enforce X.509 certificates and renewals on the reverse proxy server, but then somehow can't just re-use the certificate for Vaultwarden. That's only one example. The onus to prove that there doesn't exist any setup that makes X.509 certificates on Vaultwarden's side fairly simple, while simultaneously having such a setup on the reverse proxy server, is a lot harder.
But let's just start with this one simple example. The implications of such a proof seem too catastrophic as well. It means you take a philosophical (perhaps even technical, if Kubernetes is not capable) stance against any service that integrates public-key cryptography. Any HTTP/3 service will never be supported because TLS is part of the protocol and no longer at a lower "OSI layer"? Any service that integrates the Noise protocol won't be supported? I find that stance archaic. As time goes on, we are seeing services integrate cryptography directly, even at the protocol level (e.g., HTTP/3); so perhaps in your lifetime you can only support services without public-key cryptography, but I find such a setup less than ideal, to put it mildly. As mentioned multiple times before, this is a philosophical disagreement I don't think you and I will resolve.
-
@zacknewman
-
As stated, I don't find that it does. This is a subjective argument that likely won't get resolved. What you find to be unnecessarily complicated another won't, just as another will find something unnecessarily complicated that you don't. Handling X.509 v3 certificates, even for "backend services", is not complicated in my opinion.
You didn't make it clear what you meant by "lightweight", so I assumed you were referring to the often-incorrect assumption that TLS has a measurable negative performance impact. Vaultwarden supports TLS directly regardless of whether it's forced or not. This means the project absolutely should be testing TLS configurations regardless; otherwise, the project should require the opposite: TLS must not be used directly, since it's not a tested or approved setup. Until then, supporting only TLS actually makes the work less, since they're already testing such a config. As far as the security of the dependencies is concerned, that is outside the scope; otherwise Rocket shouldn't be used at all, right? Use "established" HTTP servers like Nginx for the HTTP logic. I'm not convinced Nginx is more secure than Rust-based TLS solutions anyway, which internally rely on OpenSSL, BoringSSL, LibreSSL, etc. Regardless, this is an argument that not only states Vaultwarden shouldn't require TLS but in fact should outright prevent it, which is a crazy stance as far as I am concerned.
You keep getting hung up on your situation. I have a lot of security layers in my setup that also wouldn't benefit much, but the idea is to require such setups for the "common good". There are a seemingly infinite number of situations where I don't benefit from some law, RFC, or other requirement due to already addressing such issues myself—oftentimes in a more thorough way—yet I understand why they exist and don't dismiss such things as unnecessary because they don't affect me. There are a myriad of ways you can automate X.509 certificates that require 10 minutes or so of one-time configuration. That is very much reasonable.
One way you can solve this, which requires a small initial investment of time, is setting up a dedicated X.509 certificate server that is responsible for auto-renewing all certificates. You can configure SFTP on the other machines to fetch said X.509 certificate in a way that is hardened such that those machines can only fetch the X.509 certificate from their directory. This can all be automated so that it takes essentially no time or effort. Even easier, though, is to use a self-signed certificate. Why does this hurt security? You seem to forget that we are dealing with one of the seven situations you mentioned—in addition to the situations mentioned by @BlackDex and me (e.g., running Vaultwarden on the loopback interface)—which means "security" is already established. The reason you are using the X.509 certificate is to get Vaultwarden to work with zero impact on security. Again, we are dealing with a rather selfish and myopic perspective: just because you don't benefit and have to do something with very minimal effort, like using a self-signed certificate, the "good of all" must be rejected too. I vehemently disagree with this on technical, crypto, and moral grounds. If something benefits a lot of people with minimal impact on me, then I'm all for it. No matter what Vaultwarden or any other service requires, there will always be situations where you don't benefit from how it's set up, but you nonetheless configure it so that it works.
Such points are addressed with a self-signed certificate so long as they meet one of the seven situations you have defined or the situations @BlackDex and I have defined.
As stated, I disagree with this stance. End-to-end encryption should not be avoided just to cater to people who find it "too difficult" to set up. That just means such services won't be managed by those people. I want applications to integrate cryptography directly rather than cross their fingers and hope it's hardened outside. We used to be a world where crypto and security were not a thing. Then we shifted to optional crypto and security. Then crypto and security by default. Finally, we are slowly shifting to crypto and security by requirement. If a setup can't handle such services, then this is an XY problem: the issue is the setup, not the service with mandatory crypto.
-
Let's agree to disagree on this topic. Also, whether we enable TLS or not, WebAuthn will not work without it, so in that sense it's mandatory.

What @eoprede means in regards to managing certificates: it's fine to manage a few manually, but it becomes a pain in the ... if you need to do this for all separate services. When you have a dynamic environment like k8s and run maybe 30 or more different services, and all those services (like Vaultwarden) have different ways to configure where the certificate is located and how you can enable or disable TLS versions, ciphers, MACs, etc. (if at all), you can instead configure a reverse proxy just once in a secure way and even allow or force new HTTP versions, and keep it secure and up to date for all services in one go.

Also, Vaultwarden will still need HTTP/1.1 for websocket connections. So that will be the minimum version until something else is going to be used.
-
This also supports my stance of not offering PRs, and why I'd appreciate it if you refrained from off-handed comments like "we already know you are not going to do this". The larger our philosophical differences, the less likely a PR aligns with your position, making said PR a waste of both of our time. I am more on the "secure" part of the continuum, and the project is more on the "support as many deployments as possible" part of the continuum. Neither one is more "correct" than the other, but it does mean that "just create an issue" seems to be the better fit, since then the project can decide if/how to address it. I apologize if this issue derailed. When someone attempts to discuss some of the points I raised, even after the project has made its stance clear, I feel the need to address them. I probably should have exercised better judgment and not replied though.
-
Also, regarding the PRs, and especially the attachment one, I do not recall you providing any PR at all. And I would never reject a PR if someone is offering to create one. My memory could fail me of course. If you refer to the sentence
-
That appears to be the case indeed.
-
Back to the issue of enforcing TLS to guarantee RFC 6749 and WebAuthn conformance. The reason I was against deploying a self-signed certificate is that I don't think it makes TLS enforcement appreciably easier. Deployments that rely on TLS via Rocket are unaffected entirely, since they will keep using the X.509 v3 certificate they are already using. Deployments that rely on TLS via an intermediary (e.g., a reverse proxy server) benefit very little. Specifically, such deployments wouldn't have to create a self-signed certificate themselves, but they would still have to configure their intermediary to trust the certificate. Is that worth it? If you think that is the only way TLS enforcement works, then fine. Of course that would mean that insecure deployments are strictly forbidden; and as you stated:
that seems to be acceptable. If TLS is not enforced, then I think the project should emphasize the consequences of that, even if they should be "obvious" to most people. Specifically, all data that is sent is in plaintext, which means the encrypted vault, OAuth tokens, master password hash, and potentially other secrets like TOTP keys when a user looks at the value (in addition to the initial time it was set up). This in turn means that deployments that enable TLS later may not be any more secure, since an entity could have already captured said data and can use it. Of course, I realize that a major aspect of password managers like Bitwarden is that the vault is encrypted and decryption is done on the client, so an entity having another entity's encrypted vault isn't nearly as catastrophic as with other services. However, security is about layers; since authentication exists, it's clear the project thinks it adds some benefit (with which of course I agree); at which point, it should be done right.
-
I don't think that. What other solutions (besides forcing certificates) do you see for this challenge?
Where should we emphasize this? We already provide a note in the admin/diagnostics page. We have a better warning in the web-vault than Bitwarden if someone tries to use the Crypto API without SSL, because that also does not work. I'm searching for good solutions here since, as I stated before, I'm also for security, but it indeed should not become an annoyance to configure. Do you maybe know other software/packaged solutions which do something like this that we could check?
-
A good start is the wiki. It would ideally be updated to make the HTTPS section part of the deployment section, in an effort to emphasize that TLS is required, not merely optional. The HTTPS section should also be updated to emphasize the adverse consequences of not using TLS.
I'm guessing there is not a manner with which you will be comfortable. Under my philosophical belief system, especially with the above updates, I think it's reasonable to do nothing but enforce TLS via Rocket, since the documentation will go over such config and clearly state it as a prerequisite. Users of intermediaries create self-signed certs, and the rest create the X.509 cert as they are already doing. The benefit is that you are not just relying on documentation alone to "enforce" TLS; it's literally enforced at the application level. I view this analogously to a developer that uses the type system of a language to enforce certain invariants instead of relying purely on API documentation. As has been gone over ad nauseam though, this is too "extreme" and not backward compatible for this project. The bundled self-signed certificate approach doesn't improve the above much, and it opens the door to deployments that treat said self-signed certificate as actually secure, which would require documentation alone to emphasize that the bundled self-signed certificate is not trustworthy and only exists to lower the bar for deployments that rely on intermediaries to enforce TLS. Thus, if I had to guess, the best the project can do is improve documentation and guides.
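For context, enabling TLS directly via Rocket is already a small amount of configuration in Vaultwarden; a config fragment along these lines (paths are examples only; the wiki's HTTPS page is the authoritative reference):

```sh
# Example paths only; point these at your real certificate chain and private key.
ROCKET_TLS='{certs="/ssl/certs.pem",key="/ssl/keys.pem"}'
```

So "enforce TLS via Rocket" would mostly mean refusing to start (or warning loudly) when no such configuration is present.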
-
From my point of view this issue should be closed / turned into a discussion (as it's not really a technical problem with this project or something that will be fixed by a PR, but seems like a difference of opinion about security vs. usability).
-
> From my point of view this issue should be closed / turned into a discussion (as it's not really a technical problem with this project or something that will be fixed by a PR but seems like a difference of opinion about security vs usability).
No, there are two issues: not enforcing uniqueness of refresh tokens, and the TLS thing. Unfortunately most of this issue was spent disagreeing about the merits of enforcing TLS, but there is no argument against enforcing refresh-token uniqueness. The code blindly grabs the first value fetched from the database, but it is required that such a token be unique. Re-read the first two comments. Related to that is detecting when an expired token is used and interpreting that as a compromised token.
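To make the "reuse of an expired/rotated token means compromise" idea concrete, here is a toy Python model. All names are made up; this is a sketch of the behavior, not Vaultwarden's implementation:

```python
import secrets

class RefreshTokenStore:
    """Toy model: one active refresh token per session, rotation on use,
    and treating reuse of a retired token as evidence of compromise."""

    def __init__(self):
        self.active = {}   # token -> user_id
        self.retired = {}  # token -> user_id (already rotated out)

    def issue(self, user_id: str) -> str:
        token = secrets.token_urlsafe(32)
        self.active[token] = user_id
        return token

    def refresh(self, token: str) -> str:
        if token in self.retired:
            # An old token was replayed: assume compromise and revoke
            # every session belonging to this user.
            self.revoke_all(self.retired[token])
            raise PermissionError("refresh token reuse detected")
        user_id = self.active.pop(token)  # KeyError if token is unknown
        self.retired[token] = user_id
        return self.issue(user_id)

    def revoke_all(self, user_id: str) -> None:
        self.active = {t: u for t, u in self.active.items() if u != user_id}
        self.retired = {t: u for t, u in self.retired.items() if u != user_id}
```

Replaying a retired token revokes everything for that user, forcing re-authentication via password and whatever 2FA is configured.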
-
@zacknewman Thanks for the clarification. I still think that the issue should be closed / turned into a discussion, because there's no disagreement about the first issue and you already agreed to disagree on the second issue.
-
> @zacknewman Thanks for the clarification. I still think that the issue should be closed / turned into a discussion because there's no disagreement about that first issue and you already agreed to disagree on the second issue.

Huh? Why close an issue before the issue is fixed? Almost all issues have no disagreement about the existence of an issue, and often the solutions, but the issues remain open until an actual PR addresses them. That's the point of open issues. If you think too much time has been spent disagreeing about TLS enforcement, which detracts from the other legitimate issue, then flag comments as not relevant. Alternatively, create a separate issue that only mentions the need to enforce refresh-token uniqueness (since without it, RFC 6749 is violated and the code already incorrectly assumes such uniqueness) before closing this issue.
-
@zacknewman, RFC 6749 is a Proposed Standard. This means that no one is "forced" to follow that standard in every single detail. There is a reason why OAuth 2.1 is still in draft form and being updated. Regarding concerns about security behind reverse proxies, there are two relevant standards you can consider: RFC 9209 (The Proxy-Status HTTP Response Header Field) and RFC 9440 (Client-Cert HTTP Header Field), which provide guidance on handling proxy-related information in HTTP responses. Many companies adopt the practice of using locally hosted backends that are not exposed to the internet directly. These backends operate behind reverse proxies or load balancers and use SSL offloading (see Article 1 and Article 2). In such cases, Vaultwarden is not required to enforce SSL. Instead, you can configure your DNS server (e.g., Cloudflare) to allow communication only to servers with the same root CA certificate. Lastly, RFC 6749 is an older standard. We now have RFC 8996 (Deprecating TLS 1.0 and TLS 1.1), which addresses TLS security. It has been approved by the IETF as a Best Current Practice. TLS 1.2, mentioned in RFC 6749, never received such approval, and we have since moved to TLS 1.3 as the new standard. Keep in mind that these papers primarily apply to widespread public production environments. Vaultwarden, like Bitwarden, is considered an enterprise product for internal company use and for companies with solid IT knowledge. If you host it yourself, you can take control of its security.
-
+1 for "this issue should be closed / turned into a discussion"
-
IMHO that's a common case: if you put it behind a reverse proxy hosted on the same server, there is no reason to expose it by binding it to anything other than the loopback interface.
-
I totally agree with this! And I totally oppose this one:
As explained multiple times in this discussion, it's not an "exceptional case". I'm very certain to say it's mostly the standard to put shit behind a local reverse proxy and bind the application to only accept connections from that reverse proxy. This discussion leads nowhere at this point! Because, as said, the actual position is that RFC 6749 is not an enforced RFC, it's a proposed RFC, and enforcing it would make the further development of Vaultwarden more difficult. And with that position, and no changes regarding future RFCs, I think everything about this has been said. If someone creates a PR with a viable solution for the mentioned problems, that's the moment to restart this discussion. It's a very simple "agreeing to disagree" situation.
-
One way to avoid mistakenly missing TLS is to, by default, check the X-Forwarded-Proto (or another, configurable) header to make sure HTTPS was used to connect.
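A minimal sketch of such a check, assuming the conventional `X-Forwarded-Proto` semantics and that the caller already knows whether the peer is the trusted proxy (illustrative only, not Vaultwarden code):

```python
def request_used_https(headers: dict, trusted_proxy: bool) -> bool:
    """Return True only when a trusted reverse proxy asserts the original
    request arrived over HTTPS via the X-Forwarded-Proto header.

    `trusted_proxy` should reflect whether the connection actually came
    from the configured proxy; the header is trivially spoofable otherwise.
    """
    if not trusted_proxy:
        return False
    # Some proxies append values ("https, http"); the first entry is the
    # scheme the client originally used.
    proto = headers.get("X-Forwarded-Proto", "").split(",")[0].strip().lower()
    return proto == "https"
```

The server could then reject (or at least log a warning for) any request for which this returns `False`.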
-
Sections 10.3 and 10.4 of RFC 6749 require the authorization server to use TLS when exchanging the access and refresh tokens; however, Vaultwarden—which acts as both the resource and authorization servers—currently does not force TLS, meaning deployments without TLS violate the RFC.

Additionally, Section 6 requires the refresh token to be uniquely linked to the client; however, Vaultwarden does not have a `UNIQUE` constraint defined on `devices.refresh_token`, which means it's possible, albeit unlikely, for multiple clients to have the same refresh token.

Even if adherence to RFC 6749 is not a goal, I do believe a `UNIQUE` constraint should still be defined on `devices.refresh_token`, since it will ensure the token is truly unique. Furthermore, Vaultwarden already (incorrectly) assumes/hopes the refresh token is unique; and with an actual `UNIQUE` constraint defined—which will be backed by a `UNIQUE` index in any real-world RDBMS I am aware of—the performance will be even better (and, most importantly, correct).

With this change, it would also be nice if refresh-token compromise were detected as well, forcing all clients associated with a user to re-authenticate via password and whatever two-factor authentication is hopefully configured.
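The effect of such a constraint is easy to demonstrate with an in-memory SQLite database (the table shape here is simplified; the real `devices` table has more columns):

```python
import sqlite3

# In-memory stand-in for the `devices` table. The UNIQUE constraint is
# implicitly backed by a unique index, so duplicate refresh tokens are
# rejected at insert time instead of producing ambiguous lookups later.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE devices (
           uuid TEXT PRIMARY KEY,
           refresh_token TEXT NOT NULL UNIQUE
       )"""
)
conn.execute("INSERT INTO devices VALUES ('device-1', 'token-abc')")

# A second device reusing the same refresh token now fails loudly.
try:
    conn.execute("INSERT INTO devices VALUES ('device-2', 'token-abc')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
```

With the constraint in place, "grab the first row matching this refresh token" becomes provably unambiguous, and the lookup can use the unique index.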