Message77915
Author: beazley
Recipients: amaury.forgeotdarc, beazley, christian.heimes, donmez, georg.brandl, giampaolo.rodola, pitrou, rhettinger, wplappert
Date: 2008-12-16 15:58:22
Message-id: <1229443104.25.0.331775326341.issue4561@psf.upfronthosting.co.za>

I wish I shared your optimism about this, but I don't. Here's a short
explanation of why.

The problem of I/O and the associated interface between hardware, the
operating system kernel, and user applications is one of the most
fundamental and carefully studied problems in all of computer systems.
The C library and its associated I/O functionality provide the user-
space implementation of this interface. However, if you peel the covers
off of the C library, you're going to find a lot of really hairy stuff
in there. Examples might include:
1. Low-level optimization related to the system hardware (processor
architecture, caching, I/O bus, etc.).
2. Hand-written finely tuned assembly code.
3. Low-level platform-specific system calls such as ioctl() (see the
sketch after this list).
4. System calls related to shared memory regions, kernel buffers, etc.
(i.e., optimizations that try to eliminate buffer copies).
5. Undocumented vendor-specific "proprietary" system calls (i.e.,
unknown "magic").
So, you'll have to forgive me for being skeptical, but I just don't
think any programmer is going to sit down and bang out a new
implementation of buffered I/O that is going to match the performance of
what's provided by the C library.
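
For contrast, here is roughly what the "easy" core of such a
reimplementation looks like: a deliberately naive buffered reader on
top of read(2) (again my sketch, not anyone's actual code).
Everything in the list above (buffer sizing, locking, zero-copy
paths, per-platform tuning) is what separates this from real stdio.

#include <unistd.h>

#define BUFSZ 8192

struct reader {
    int fd;              /* underlying file descriptor */
    char buf[BUFSZ];     /* user-space buffer */
    size_t pos, len;     /* next byte to hand out / bytes valid */
};

/* Return the next byte (0-255), refilling from the kernel only when
   the buffer runs dry; return -1 on EOF or error. Initialize with,
   e.g., struct reader r = { .fd = 0 }; */
static int reader_getc(struct reader *r)
{
    if (r->pos == r->len) {
        ssize_t n = read(r->fd, r->buf, BUFSZ);
        if (n <= 0)
            return -1;
        r->len = (size_t)n;
        r->pos = 0;
    }
    return (unsigned char)r->buf[r->pos++];
}
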
Again, I would love to be proven wrong.