I probably have more questions than answers, but it's easier to give them here rather than via a comment:

  • A GET string "optimized for big data" is a bit of an oxymoron: query strings usually do not go beyond 4/8/16 KB due to server limitations (https://stackoverflow.com/questions/812925/what-is-the-maximum-possible-length-of-a-query-string). POST would be a different matter (and a better interview question).
  • Your option 1 and option 2 seem to solve different problems, so there isn't much of a comparison to make. Option 1 indeed generates a GET request from a dictionary (obviously not optimized for big data), while option 2 generates dictionaries of JSON elements from the response.
  • In option 3, I don't see much reason for using a generator in `req_resource`, since you end up storing everything in memory in the `page` list anyway (see the sketch after this list). Speaking of which, `page` is not the best name for a list of JSON items.
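
To illustrate that last point, here's a hypothetical sketch (the `items` generator and the sample data are mine, not taken from your code) of how collecting a generator's output into a list cancels its memory benefit:

```python
def items(response_pages):
    # Hypothetical stand-in for req_resource: yield one JSON item at a time.
    for page in response_pages:
        yield from page

pages = [[{"id": 1}, {"id": 2}], [{"id": 3}]]  # pretend paginated JSON response

# Collecting into a list, as option 3 effectively does, puts the whole
# result set back in memory, so the generator buys you nothing:
everything = list(items(pages))

# Consuming lazily is what actually keeps memory usage flat:
for item in items(pages):
    print(item["id"])
```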

Now, if we take the original question as is, I'd ask the interviewer about the use case. If they envision a scenario with a massive GET string that they don't want to store in memory in its entirety, then we could have a generator that creates fragments of the GET parameters. The consumer of this generator would send those fragments directly to the networking stack and finalize the request with `\r\n\r\n` when the generator finishes. Again, I'd consider the question itself artificial.
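
A minimal sketch of that idea, assuming a plain socket; the function names, the host, and the parameter format are all illustrative, not from your code:

```python
import socket
from urllib.parse import quote

def param_fragments(params):
    # Yield URL-encoded key=value fragments one at a time, so the
    # full query string never has to exist in memory at once.
    first = True
    for key, value in params:
        prefix = "" if first else "&"
        first = False
        yield f"{prefix}{quote(str(key))}={quote(str(value))}"

def send_get(sock, host, path, params):
    # Consumer: stream each fragment straight to the networking stack,
    # then finalize the request with the blank line ("\r\n\r\n").
    sock.sendall(f"GET {path}?".encode())
    for fragment in param_fragments(params):
        sock.sendall(fragment.encode())
    sock.sendall(f" HTTP/1.1\r\nHost: {host}\r\n\r\n".encode())

with socket.create_connection(("example.com", 80)) as sock:
    send_get(sock, "example.com", "/search", [("q", "big data"), ("page", "1")])
```

Note that `params` can itself be any iterable (for instance, another generator), so neither the parameters nor the assembled query string ever need to be held in memory together.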
