Running pg_fs on a server with very limited resources, I often get the dreaded `pg_featureserv invoked oom-killer` when a user requests a dense collection (well under the 10k record limit, but with lots of vertices per feature).
I can tell users that requesting huge chunks of GeoJSON is inefficient and not a great idea, and maybe bump up my server's memory to handle some use cases.
But I wonder, could pg_fs stream the features instead of loading them all into memory?
Short answer: yes. We should be able to do prepare/execute/fetch in the SQL loop ... the only open question is to what extent this is already supported in the pq library we're using. Do we have to manually plumb it out, or can we just flip a switch to a smaller fetch size?