Fix chunked_data requests in urequests.py #670
Conversation
Example:
def read_in_chunks(file_object, chunk_size=4096):
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data

file = open(filename, "rb")
r = urequests.post(url, data=read_in_chunks(file))
On current urequests this fails: the data is not detected as chunked, because read_in_chunks() does not have an __iter__ attribute. Changing the test to __next__ makes the example work.
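A minimal sketch of the detection change described above (not the actual urequests source; `is_chunked_data` is a hypothetical helper name). The idea is to duck-type on `__next__`, which MicroPython generators do expose, instead of `__iter__`, which they don't:

```python
def is_chunked_data(data):
    # Generators expose __next__ on both CPython and MicroPython,
    # so this test behaves the same on both implementations.
    return hasattr(data, "__next__")

def gen():
    yield b"hello"
    yield b"world"

print(is_chunked_data(gen()))       # generator: treated as chunked
print(is_chunked_data(b"payload"))  # plain bytes: sent as a single body
```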
Good pickup! I'm not sure whether the original library author tested this chunked data handling in a different way, or perhaps just copied CPython code without testing... either way it's true that __iter__ isn't exposed on common MicroPython generators, whether from generator functions or generator expressions:
$ micropython
MicroPython v1.19.1-1014-gbde222ce8 on 2023-04-14; linux [GCC 9.4.0] version
Use Ctrl-D to exit, Ctrl-E for paste mode
>>> def gen():
... yield 'hi'
... yield 'there'
...
>>> g = gen()
>>> dir(g)
['__class__', '__next__', 'close', 'send', 'throw', 'pend_throw']
>>>
>>> dir((a for a in [1, 2]))
['__class__', '__next__', 'close', 'send', 'throw', 'pend_throw']
>>>
vs python
$ python
Python 3.8.10 (default, Mar 15 2022, 12:22:08)
>>> def gen():
... yield 'hi'
... yield 'there'
...
>>> g = gen()
>>> dir(g)
['__class__', '__del__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__lt__', '__name__', '__ne__', '__new__', '__next__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'close', 'gi_code', 'gi_frame', 'gi_running', 'gi_yieldfrom', 'send', 'throw']
>>>
>>> dir((a for a in [1, 2]))
['__class__', '__del__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__lt__', '__name__', '__ne__', '__new__', '__next__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'close', 'gi_code', 'gi_frame', 'gi_running', 'gi_yieldfrom', 'send', 'throw']
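The CPython output above also shows why the `__next__` test is safe on both implementations: CPython generators expose both attributes, while plain containers (lists, bytes) expose only `__iter__`. So, assuming the old test checked `__iter__`, switching to `__next__` keeps generators detected and narrows the check to true iterators:

```python
# Compare attribute exposure across object kinds on CPython.
samples = {
    "generator": (x for x in [1, 2]),
    "list": [1, 2],
    "bytes": b"payload",
}
for name, obj in samples.items():
    # Generators have both __iter__ and __next__; containers have only __iter__.
    print(name, hasattr(obj, "__iter__"), hasattr(obj, "__next__"))
```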
It's very common for MicroPython not to expose "little-used" attributes on objects, as every exposed attribute comes with a notable code size cost.
Your fix looks appropriate to me: it adds no extra code size and meets the needs of your example, which also matches the official requests example of this functionality: https://requests.readthedocs.io/en/latest/user/advanced/#chunk-encoded-requests
I'd also support adding your example above to a new example_chunked_post.py file; this sort of iterative / chunked handling is very valuable in MicroPython to limit buffer size and therefore RAM use!
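For context, once data is detected as chunked, the body is sent using HTTP chunked transfer encoding: each piece is prefixed with its hex length and terminated with a zero-length chunk. A hedged sketch (not the urequests implementation; `encode_chunked` is a hypothetical name) of that wire format:

```python
def encode_chunked(chunks):
    # Encode an iterable of chunks using HTTP/1.1 chunked transfer encoding.
    out = b""
    for chunk in chunks:
        if not isinstance(chunk, bytes):
            chunk = chunk.encode()
        # Each chunk: hex length, CRLF, payload, CRLF.
        out += b"%x\r\n" % len(chunk) + chunk + b"\r\n"
    # A zero-length chunk terminates the body.
    out += b"0\r\n\r\n"
    return out

def gen():
    yield b"hi"
    yield b"there"

print(encode_chunked(gen()))  # b'2\r\nhi\r\n5\r\nthere\r\n0\r\n\r\n'
```

Streaming the chunks one at a time like this is exactly why the pattern saves RAM: only one `chunk_size` buffer is ever held in memory.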
LGTM
Chunked detection does not work as generators never have an `__iter__` attribute. They do have `__next__`. Example that now works with this commit:

def read_in_chunks(file_object, chunk_size=4096):
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data

file = open(filename, "rb")
r = requests.post(url, data=read_in_chunks(file))
895bd41 to e025c84
Thanks for the contribution!
Chunked detection does not work on current MicroPython, as it never adds an `__iter__` attribute to a generator. It does add `__next__`, so the test in `urequests` for chunked detection did not work. I've changed it to `__next__`. I can post a failing example in the PR.