Message158423
| Author | Robert.Elsner |
| Recipients | Robert.Elsner, mark.dickinson |
| Date | 2012-04-16 12:35:43 |
| SpamBayes Score | -1.0 |
| Marked as misclassified | Yes |
| Message-id | <1334579744.55.0.286784173297.issue14596@psf.upfronthosting.co.za> |
| In-reply-to | |

Content
Well, the problem is that performance is severely degraded when calling unpack multiple times. I do not know the size of the files in advance; they vary from 1M to 1G. I could use a fixed-size buffer, but that is inefficient depending on the file size (too big or too small), and if I change the buffer on the fly, I end up with the memory leak. I think the caching should take the available memory on the system into account; the no_leak function has comparable performance without the leak.

I also think there is no point in caching Struct instances once they go out of scope and can no longer be accessed: if I let one slip out of scope, I do not want to use it thereafter. This is especially true considering that struct.Struct behaves as expected, as do array.fromfile and numpy.fromfile.
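For readers hitting this later, here is a minimal sketch of the two call patterns being compared, assuming the file contents have already been read into a bytes buffer of doubles. The helper names and buffer sizes are illustrative assumptions, not taken from the report or its attached script; in particular, the no_leak function mentioned above is not reproduced here.

```python
import struct

def read_doubles_module_level(buf):
    # struct.unpack() compiles the format string and keeps the resulting
    # Struct object in the module's internal cache; with many distinct,
    # very large format strings those cached objects can keep a lot of
    # memory alive, which is the growth described above.
    count = len(buf) // 8
    return struct.unpack("%dd" % count, buf)

def read_doubles_explicit(buf):
    # An explicit struct.Struct instance is released as soon as it goes
    # out of scope, so memory use stays proportional to the current buffer.
    count = len(buf) // 8
    s = struct.Struct("%dd" % count)
    return s.unpack(buf)

if __name__ == "__main__":
    # Buffers of varying size, mimicking files whose sizes are not known
    # in advance (kept small here so the sketch runs quickly).
    for n in (10, 100, 1000):
        data = bytes(8 * n)  # n zero-valued doubles
        assert read_doubles_module_level(data) == read_doubles_explicit(data)
```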

History

| Date | User | Action | Args |
|---|---|---|---|
| 2012-04-16 12:35:44 | Robert.Elsner | set | recipients: + Robert.Elsner, mark.dickinson |
| 2012-04-16 12:35:44 | Robert.Elsner | set | messageid: <1334579744.55.0.286784173297.issue14596@psf.upfronthosting.co.za> |
| 2012-04-16 12:35:43 | Robert.Elsner | link | issue14596 messages |
| 2012-04-16 12:35:43 | Robert.Elsner | create | |