Support all the cache options in fetch() #3847
+1 to adding it, but it will not work with the current caching interceptor logic. Pasting part of my comment from here:
https://fetch.spec.whatwg.org/#concept-request-cache-mode doesn't imply it's implemented on top. I'm specifically not talking about
Can you clarify this? Why can't we go update the underlying cache?
I know, but think about it from this perspective: currently the cache is built into the interceptor itself; it accepts an array of buffers as headers and a node stream for its body. To cache fetch responses, you have to store the headers (a HeadersList), a body (a web readable), and the internal response state. The fetch spec calls for implementations to update, insert, and delete responses from the cache within the spec's own steps. If we leave this to the interceptor, there are a number of places where internal response state may be modified, headers get added, etc., which will change how the response is received.
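The three pieces listed above can be sketched as plain data. This is a hypothetical shape for illustration only, not undici's actual store format; `makeEntry` and its field names are invented:

```javascript
// Hypothetical sketch -- NOT undici's real cache-entry format. It only
// illustrates the three things the comment says a fetch-aware cache
// entry would have to persist: headers, body, and internal state.
function makeEntry (status, rawHeaders, bodyChunks, internalState) {
  return {
    status,                          // e.g. 200
    rawHeaders,                      // flat [nameBuf, valueBuf, ...] pairs
    body: Buffer.concat(bodyChunks), // buffered response body
    internalState                    // e.g. { urlList, type, cacheState }
  }
}

const entry = makeEntry(
  200,
  [Buffer.from('content-type'), Buffer.from('text/plain')],
  [Buffer.from('hello')],
  { urlList: ['https://example.com/'], type: 'basic', cacheState: '' }
)
console.log(entry.body.toString()) // 'hello'
```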
The interceptor cache does not take internal response state into account, and that internal state is also modified between where we are meant to update/insert/delete from the cache and where we are currently doing it (the dispatch). Without taking this into account, things like
This is not how I read the spec in https://fetch.spec.whatwg.org/#concept-request-cache-mode. There is no real connection to
Of course you can convert the HeadersList to and from an array of buffers and web readables to node readables (and vice versa), but what I'm getting at is that fetch has different requirements for the cache than the interceptor API.
I can only see fetch having additional requirements, not different ones. If they were different, it would mean that the fetch() spec does not follow RFC 9111, which sounds highly unlikely.
Do you have something specific in mind, @KhafraDev?
Maybe my phrasing wasn't the best 😄.
We need the following:
Reusability was one of my concerns when designing the API of the current cache store, but we can definitely revisit it (maybe create a separate one if we do the work after the cutoff for v7).
Good question, I'd imagine it will be separated.
We can make an issue for that. For the remaining points, a discussion will definitely be worthwhile.
I think it's 95% fine and just missing a little extra that we need. We can convert fetch Headers to raw headers and web streams to node ones easily, so that's not much of a concern. Honestly, someone would need to start implementing it to see exactly what's missing. I don't have the time or motivation to do so currently. :( As an example of one of my bullet points above: does this cache the response? Which takes precedence? Will we allow this? My other points are straightforward IMO, so examples aren't needed.

```js
const agent = new Agent().compose(interceptors.cache())

await fetch('...', {
  cache: 'no-store',
  dispatcher: agent
})
```
If you can file an issue with the requirements, we can see who is willing to give it a shot (I can give it a try after the H2 work), and it will help to record progress.
Nope, it won't; but I see where you are going. I'm pretty sure the composed one will take precedence. We either build a mechanism to identify when a composed dispatcher (and the interceptor in use) is present, or document well the possible scenarios users can fall into when both caches are set.
@metcoder95 I don't think you meant to close this.
oh sorry. my bad 😞 |
Ref https://developer.mozilla.org/en-US/docs/Web/API/Request/cache.
I think we could add some support for this.
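For reference, the fetch spec defines six cache modes. A minimal sketch of the call shape this issue asks undici's fetch() to honor (the URL is a placeholder; whether each mode is actually respected is exactly what's under discussion):

```javascript
// Cache modes from https://fetch.spec.whatwg.org/#concept-request-cache-mode
const modes = ['default', 'no-store', 'reload', 'no-cache', 'force-cache', 'only-if-cached']

// The RequestInit shape in question (placeholder URL, not executed here):
// await fetch('https://example.com/', { cache: 'no-store' })
console.log(modes.length) // 6
```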