Fix chunked_data requests in urequests.py #670


Merged
dpgeorge merged 1 commit into micropython:master from bwhitman:patch-1
Oct 4, 2023

Conversation

bwhitman (Contributor) commented May 30, 2023

Chunked detection does not work on current MicroPython because generators never get an __iter__ attribute; MicroPython does add __next__ to them, so the test urequests uses for chunked detection never triggered. I've changed the test to __next__. I can post a failing example in the PR.

bwhitman (Contributor, Author) commented:

Example:

def read_in_chunks(file_object, chunk_size=4096):
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data

file = open(filename, "rb")
r = urequests.post(url, data=read_in_chunks(file))

On current urequests this fails: chunked_data is not detected because the generator returned by read_in_chunks() has no __iter__ attribute. Changing the test to __next__ makes detection work and the example above succeeds.
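
For context, the chunked-data detection inside urequests is roughly of the following shape (a paraphrased sketch based on this discussion, not a verbatim quote of the library source; the exact expression and variable name may differ):

# Before (paraphrased): treat data as chunked if it looks iterable but has no length.
# MicroPython generators never expose __iter__, so this branch was never taken.
chunked_data = data and getattr(data, "__iter__", None) and not getattr(data, "__len__", None)

# After this PR (paraphrased): test for __next__, which MicroPython generators do expose.
chunked_data = data and getattr(data, "__next__", None) and not getattr(data, "__len__", None)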

@bwhitman changed the title from Update urequests.py to Fix chunked_data requests in urequests.py on May 30, 2023
andrewleech (Contributor) commented May 30, 2023 (edited)

Good pickup! I'm not sure whether the original library author tested this chunked-data handling in a different way, or perhaps just copied CPython code without testing. Either way, it's true that __iter__ isn't exposed on common MicroPython generators, whether created by yielding functions or by generator expressions:

$ micropython
MicroPython v1.19.1-1014-gbde222ce8 on 2023年04月14日; linux [GCC 9.4.0] version
Use Ctrl-D to exit, Ctrl-E for paste mode
>>> def gen():
...     yield 'hi'
...     yield 'there'
...
>>> g = gen()
>>> dir(g)
['__class__', '__next__', 'close', 'send', 'throw', 'pend_throw']
>>>
>>> dir((a for a in [1, 2]))
['__class__', '__next__', 'close', 'send', 'throw', 'pend_throw']
>>>

vs CPython:

$ python
Python 3.8.10 (default, Mar 15 2022, 12:22:08)
>>> def gen():
...     yield 'hi'
...     yield 'there'
...
>>> g = gen()
>>> dir(g)
['__class__', '__del__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__lt__', '__name__', '__ne__', '__new__', '__next__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'close', 'gi_code', 'gi_frame', 'gi_running', 'gi_yieldfrom', 'send', 'throw']
>>>
>>> dir((a for a in [1, 2]))
['__class__', '__del__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__lt__', '__name__', '__ne__', '__new__', '__next__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'close', 'gi_code', 'gi_frame', 'gi_running', 'gi_yieldfrom', 'send', 'throw']

It's very common for MicroPython not to expose "little-used" attributes on objects, as every exposed attribute comes with a notable code-size cost.
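
To illustrate the practical upshot, a portable way to ask "is this object an iterator/generator?" on both ports is to check for __next__ rather than __iter__ (a minimal illustrative sketch, not code from the library):

def is_iterator(obj):
    # Generators expose __next__ on both CPython and MicroPython, whereas
    # MicroPython omits __iter__ on generator objects to save code size.
    return hasattr(obj, "__next__")

print(is_iterator(x for x in range(3)))  # True on both CPython and MicroPython
print(is_iterator([1, 2, 3]))            # False: a list is iterable but is not itself an iterator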

Your fix looks appropriate to me: it adds no extra code size and meets the needs of your example, which also matches the official requests documentation for chunk-encoded requests: https://requests.readthedocs.io/en/latest/user/advanced/#chunk-encoded-requests

I'd also support adding your example above to a new example_chunked_post.py file (see the sketch below); this sort of iterative/chunked handling is very valuable in MicroPython for limiting buffer size and therefore RAM use!
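
A sketch of what such an example file could contain (the filename, URL, and payload below are placeholders; this file is only a suggestion and does not exist in the repository yet):

# example_chunked_post.py -- hypothetical sketch of the suggested example.
import urequests

def read_in_chunks(file_object, chunk_size=4096):
    # Yield the file in fixed-size chunks so that only chunk_size bytes
    # need to be held in RAM at any one time.
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data

url = "http://example.com/upload"  # placeholder URL
with open("payload.bin", "rb") as f:  # placeholder file
    r = urequests.post(url, data=read_in_chunks(f))
    print(r.status_code)
    r.close()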


andrewleech (Contributor) left a review:

LGTM

Chunked detection does not work as generators never have an `__iter__`
attribute. They do have `__next__`.
Example that now works with this commit:
    def read_in_chunks(file_object, chunk_size=4096):
        while True:
            data = file_object.read(chunk_size)
            if not data:
                break
            yield data

    file = open(filename, "rb")
    r = requests.post(url, data=read_in_chunks(file))
@dpgeorge merged commit e025c84 into micropython:master on Oct 4, 2023
dpgeorge (Member) commented Oct 4, 2023

Thanks for the contribution!


Reviewers

@andrewleech approved these changes
