This repository has been archived by the owner on Jul 7, 2021. It is now read-only.
I use upload_file_request_with_chunks with a function that yields bytes for this. The downside is that if the upload is interrupted on the network, the server doesn't notice, and I sometimes get skylinks back for truncated content.
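For context, the chunk-yielding function described above might look something like this (the chunk size and the direct `requests.post` call are illustrative assumptions, not the SDK's actual internals):

```python
def chunked(data: bytes, size: int = 64 * 1024):
    """Yield successive chunks of `data`. Passing a generator like
    this as the request body makes requests use chunked transfer
    encoding, since no total length is known up front."""
    for i in range(0, len(data), size):
        yield data[i:i + size]

# Hypothetical usage against a portal endpoint:
# requests.post(portal_url, data=chunked(payload))
```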
EDIT: this no longer seems to work in the new version.
EDIT: I also checked the requests source for streaming with a known length. The request handler in models.py calls a super_len() helper in utils.py; this very flexible helper tries everything it can on the data it is passed and uses whatever works. Notably, it tries calling .__len__() and reading the .len attribute, so in theory providing either of these should work fine.
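A sketch of what that means in practice: wrapping a chunk iterator in an object that exposes __len__ should let super_len() discover the total size, so requests can send a Content-Length header instead of falling back to chunked encoding. The wrapper name and interface here are illustrative assumptions:

```python
class KnownLengthStream:
    """Wraps an iterable of byte chunks and exposes __len__ so that
    requests' super_len() helper can find the total size.
    (Illustrative wrapper, not part of the SDK.)"""

    def __init__(self, chunks, total):
        self._chunks = iter(chunks)
        self._total = total

    def __len__(self):
        # super_len() tries .__len__() (and the .len attribute).
        return self._total

    def __iter__(self):
        return self._chunks
```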
EDIT: For completeness, a file-like object can be passed just like a chunked iterator. I was able to implement streaming progress by inheriting from io.FileIO or io.BufferedReader and overriding read().
We should support uploading generic data using streams.
We will need to use the https://toolbelt.readthedocs.io/ library for this, as requests does not support it.
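For streaming multipart uploads specifically, requests-toolbelt's MultipartEncoder is the usual tool; a minimal sketch (the function name, URL parameter, and "file" field name are assumptions for illustration):

```python
def upload_stream(url, fileobj, filename):
    """Stream a multipart upload without buffering the whole file
    in memory, using requests-toolbelt's MultipartEncoder.
    (Illustrative sketch, not the SDK's actual implementation.)"""
    import requests
    from requests_toolbelt import MultipartEncoder

    encoder = MultipartEncoder(
        fields={"file": (filename, fileobj, "application/octet-stream")}
    )
    # MultipartEncoder exposes a length and a content type, so
    # requests can send Content-Length and the multipart boundary.
    return requests.post(
        url, data=encoder, headers={"Content-Type": encoder.content_type}
    )
```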