ziglang/zig
std.Io: introduce batching and operations API, satisfying the "poll" use case #30743

Open
andrewrk wants to merge 20 commits from poll into master

This branch introduces Io.Operation, std.Io.operate, std.Io.Batch, and implements them for FileReadStreaming. In std.Io.Threaded, the implementation is based on poll().

The idea here is that every VTable function for which error{Canceled,Timeout} makes sense in its error set would be moved into this Operation tagged union. Note that the higher-level API, e.g. std.Io.File.readFileStreaming, is unchanged. We expect this to be lowerable in a reasonable fashion using IoUring, Kqueue, and Windows.

Motivation

  • general support for batching, timeouts, and non-blocking across all I/O operations
  • ability to migrate code that used poll() without causing error.ConcurrencyUnavailable on -fsingle-threaded builds
    • for example, processing stdout and stderr of child processes in a single-threaded application

Demonstration

Using std.process.run to collect both stdout and stderr in a single-threaded program using std.Io.Threaded. The point of this example is that if the parent naively reads from either stdout or stderr non-concurrently, it will deadlock.

child.zig:

```zig
const std = @import("std");
const Io = std.Io;

pub fn main(init: std.process.Init) !void {
    const io = init.io;
    var stdout: Io.File.Writer = .initStreaming(.stdout(), io, &.{});
    var stderr: Io.File.Writer = .initStreaming(.stderr(), io, &.{});
    try stdout.interface.splatByteAll('A', 5000);
    try stderr.interface.splatByteAll('B', 5000);
    try stdout.interface.splatByteAll('C', 5000);
    try stderr.interface.splatByteAll('D', 5000);
}
```

parent.zig:

```zig
const std = @import("std");
const Io = std.Io;

pub fn main(init: std.process.Init) !void {
    const io = init.io;
    const arena = init.arena.allocator();
    const result = try std.process.run(arena, io, .{
        .argv = &.{"./child"},
    });
    std.debug.print("stdout:\n{s}\nstderr:\n{s}", .{ result.stdout, result.stderr });
}
```
```
$ stage3/bin/zig build-exe parent.zig -fsingle-threaded
$ ./parent
stdout:
AAAAAA...(x5000)CCCCC....(x5000)
stderr:
BBBBBB...(x5000)DDDDD....(x5000)
$ strace ./parent
...
pipe2([8, 9], O_CLOEXEC) = 0
fork() = 3305035
close(9) = 0
close(4) = 0
close(6) = 0
munmap(0x7fb1cec93000, 53248) = 0
munmap(0x7fb1ceb40000, 131072) = 0
read(8, "", 8) = 0
close(8) = 0
mmap(0x7fb1ceca0000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb1cf009000
poll([{fd=3, events=POLLIN}, {fd=5, events=POLLIN}], 2, -1) = 2 ([{fd=3, revents=POLLIN}, {fd=5, revents=POLLIN}])
readv(3, [{iov_base="AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"..., iov_len=129}], 1) = 129
readv(5, [{iov_base="BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB"..., iov_len=129}], 1) = 129
poll([{fd=3, events=POLLIN}, {fd=5, events=POLLIN}], 2, -1) = 2 ([{fd=3, revents=POLLIN}, {fd=5, revents=POLLIN}])
readv(3, [{iov_base="AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"..., iov_len=194}], 1) = 194
readv(5, [{iov_base="BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB"..., iov_len=194}], 1) = 194
poll([{fd=3, events=POLLIN}, {fd=5, events=POLLIN}], 2, -1) = 2 ([{fd=3, revents=POLLIN}, {fd=5, revents=POLLIN}])
readv(3, [{iov_base="AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"..., iov_len=291}], 1) = 291
readv(5, [{iov_base="BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB"..., iov_len=291}], 1) = 291
mremap(0x7fb1cf009000, 4096, 8192, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(0x7fb1cf00a000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb1cf007000
poll([{fd=3, events=POLLIN}, {fd=5, events=POLLIN}], 2, -1) = 2 ([{fd=3, revents=POLLIN}, {fd=5, revents=POLLIN}])
readv(3, [{iov_base="AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"..., iov_len=436}], 1) = 436
readv(5, [{iov_base="BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB"..., iov_len=436}], 1) = 436
poll([{fd=3, events=POLLIN}, {fd=5, events=POLLIN}], 2, -1) = 2 ([{fd=3, revents=POLLIN}, {fd=5, revents=POLLIN}])
readv(3, [{iov_base="AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"..., iov_len=654}], 1) = 654
readv(5, [{iov_base="BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB"..., iov_len=654}], 1) = 654
mremap(0x7fb1cf007000, 8192, 12288, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(0x7fb1cf009000, 16384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb1cf003000
poll([{fd=3, events=POLLIN}, {fd=5, events=POLLIN}], 2, -1) = 2 ([{fd=3, revents=POLLIN}, {fd=5, revents=POLLIN}])
readv(3, [{iov_base="AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"..., iov_len=981}], 1) = 981
readv(5, [{iov_base="BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB"..., iov_len=981}], 1) = 981
poll([{fd=3, events=POLLIN}, {fd=5, events=POLLIN}], 2, -1) = 2 ([{fd=3, revents=POLLIN}, {fd=5, revents=POLLIN}])
readv(3, [{iov_base="AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"..., iov_len=1472}], 1) = 1472
readv(5, [{iov_base="BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB"..., iov_len=1472}], 1) = 1472
mremap(0x7fb1cf003000, 16384, 20480, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(0x7fb1cf007000, 32768, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb1ced42000
poll([{fd=3, events=POLLIN}, {fd=5, events=POLLIN}], 2, -1) = 2 ([{fd=3, revents=POLLIN}, {fd=5, revents=POLLIN}])
readv(3, [{iov_base="AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"..., iov_len=2208}], 1) = 2208
readv(5, [{iov_base="BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB"..., iov_len=2208}], 1) = 2208
poll([{fd=3, events=POLLIN}, {fd=5, events=POLLIN}], 2, -1) = 2 ([{fd=3, revents=POLLIN}, {fd=5, revents=POLLIN}])
readv(3, [{iov_base="CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC"..., iov_len=3312}], 1) = 3312
readv(5, [{iov_base="DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD"..., iov_len=3312}], 1) = 3312
mremap(0x7fb1ced42000, 32768, 49152, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(0x7fb1ced4a000, 73728, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb1cec8e000
poll([{fd=3, events=POLLIN}, {fd=5, events=POLLIN}], 2, -1) = 2 ([{fd=3, revents=POLLIN}, {fd=5, revents=POLLIN}])
readv(3, [{iov_base="CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC"..., iov_len=4968}], 1) = 323
readv(5, [{iov_base="DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD"..., iov_len=4968}], 1) = 323
poll([{fd=3, events=POLLIN}, {fd=5, events=POLLIN}], 2, -1) = 2 ([{fd=3, revents=POLLHUP}, {fd=5, revents=POLLHUP}])
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=3305035, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
...
```
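The trace shows the shape of the implementation: one poll() over both pipe fds, then a readv on whichever is ready. As a language-neutral sketch of that same single-threaded multiplexing (plain Python with the standard selectors module, POSIX-only; the spawned child here is a stand-in, not the Zig child above):

```python
import os
import selectors
import subprocess
import sys

def collect_output(argv):
    """Collect a child's stdout and stderr from a single thread.

    Same shape as the strace output: block until either pipe is readable,
    then drain whichever one is ready, so neither pipe buffer can fill up
    while the other stream is being ignored.
    """
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    sel = selectors.DefaultSelector()
    sel.register(proc.stdout, selectors.EVENT_READ)
    sel.register(proc.stderr, selectors.EVENT_READ)
    bufs = {proc.stdout.fileno(): bytearray(), proc.stderr.fileno(): bytearray()}
    while sel.get_map():  # until EOF has been seen on both pipes
        for key, _ in sel.select():
            chunk = os.read(key.fd, 4096)
            if chunk:
                bufs[key.fd].extend(chunk)
            else:  # EOF: this stream is finished
                sel.unregister(key.fileobj)
    sel.close()
    stdout = bytes(bufs[proc.stdout.fileno()])
    stderr = bytes(bufs[proc.stderr.fileno()])
    proc.stdout.close()
    proc.stderr.close()
    proc.wait()
    return stdout, stderr

if __name__ == "__main__":
    # Stand-in child: interleaves writes to both streams, like child.zig.
    child = [sys.executable, "-c",
             "import sys;"
             "sys.stdout.write('A'*5000 + 'C'*5000);"
             "sys.stderr.write('B'*5000 + 'D'*5000)"]
    out, err = collect_output(child)
    print(len(out), len(err))
```

This is only the "poll use case" in miniature; the point of the PR is that std.Io.Threaded can provide the same behavior behind the Io interface without spawning reader threads.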

Merge Blockers

  • fix bugs and failures
  • Windows implementation

Followup Issues

  • move more stuff from VTable to Operation

closes https://github.com/ziglang/zig/issues/25753

This file has changed a lot since the previous release, and I resisted
the urge to do this until the conflicts would be minimized.
std.Io: proof-of-concept "operations" API
f8f68ac320
This commit shows a proof-of-concept direction for std.Io.VTable to go,
which is to have general support for batching, timeouts, and
non-blocking.
I'm not sure if this is a good idea or not so I'm putting it up for
scrutiny.
This commit introduces `std.Io.operate`, `std.Io.Operation`, and
implements it experimentally for `FileReadStreaming`.
In `std.Io.Threaded`, the implementation is based on poll().
This commit shows how it can be used in `std.process.run` to collect
both stdout and stderr in a single-threaded program using
`std.Io.Threaded`.
It also demonstrates how to upgrade code that was previously using
`std.Io.poll` (*not* integrated with the interface!) using concurrency.
This may not be ideal since it makes the build runner no longer support
single-threaded mode. There is still a needed abstraction for
conveniently reading multiple File streams concurrently without
io.concurrent, but this commit demonstrates that such an API can be
built on top of the new `std.Io.operate` functionality.
std.Io.Threaded: fix init for single-threaded
35ec7b4da8
First-time contributor

+1 to the overall shape. Not only collectOutput now works with a single thread, I'd argue that its implementation is just more natural as a hand-coded state machine, and not as a pair of asyncConcurrent tasks.

Specific interface I am not happy with (but maybe I just misunderstand something). The issue I have with it is that it assumes readiness based IO, not completion based:

pub fn operate(io: Io, operations: []Operation, n_wait: usize, timeout: Timeout) OperateError!void {

If n_wait < operations.len, the interface needs to guarantee that each operation is "atomic" --- either it is completed, or it was never started. This works for poll, but doesn't work for io_uring, and, arguably, the "real world".

E.g, suppose I write low-level code that issues 5 write operations to an actual physical device. The way this works is that I ask the device to start the corresponding DMA operations, and then wait for interrupt which notifies me about completions.

I can wait for the first 2 out of 5 operations to complete, but I still need to wait for the other 3; I can't just drop them on the floor. The operate interface prevents this for two reasons:

  • there is no affordance to wait for previously submitted operations
  • if we fix that, the problem of memory fragmentation appears: if I submit a slice of 1000 ops and 999 complete but one is slow, I'll still have to keep the entire slice pinned; I can't trivially re-use the 999 completed slots.
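To make the slot-pinning point concrete, here is a toy sketch (plain Python; SlotPool, submit, and complete are invented names, nothing from this PR) of the property a completion-based API would need: each in-flight operation occupies an independently reclaimable slot, so one slow op doesn't pin its finished neighbours:

```python
class SlotPool:
    """Fixed-capacity pool of operation slots, reclaimed one at a time.

    Contrast with submitting a single slice of N operations: there, the
    whole slice stays pinned until the slowest operation finishes. With
    per-slot completion, a finished slot is reusable immediately.
    """

    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.free = list(range(capacity))

    def submit(self, op):
        idx = self.free.pop()  # IndexError here means the pool is full
        self.slots[idx] = op
        return idx             # token identifying this in-flight operation

    def complete(self, idx):
        op, self.slots[idx] = self.slots[idx], None
        self.free.append(idx)  # this slot is reusable right away
        return op


pool = SlotPool(capacity=3)
a, b, c = pool.submit("read A"), pool.submit("read B"), pool.submit("read C")
pool.complete(b)           # B finishes early...
d = pool.submit("read D")  # ...and its slot is reused while A and C run
assert d == b
```

With a slice-based operate, the equivalent of complete-one-and-resubmit is impossible without waiting out the whole slice, which is exactly the fragmentation complaint above.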
First-time contributor

Oh, there's also the poll-vs-epoll big-O problem here: if operations.len is 10^6 but n_wait is 1, you still have to trawl through the million to find out which single op completed.

Owner

I agree with matklad's concerns, but can't really figure out a nice solution to them. While trying to find one, I realised that (IMO) the natural conclusion of this PR's line of thinking is to literally just expose an io_uring style API---it lets you handle completions as soon as they're available, it works with any primitive you can think of, and it allows you to reuse buffer memory. However, I strongly believe that would be a misstep: the whole point of the Io API is to abstract over these low-level event loop mechanisms, so straight-up exposing an io_uring-esque API would seriously muddy the intention behind the interface (and, of course, introduces a lot of extra complexity).

Also, I get the feeling that the use cases for this operate feature are actually pretty rare. I honestly am struggling to think of many cases where I just want to dispatch a bunch of IO operations and don't want non-trivial handling of their individual completions as they come in (if I do want that, then I should just have an async or concurrent task per operation---that's kinda the whole point of the future/group API!). However, I think that there is one very clear counterexample, which is exactly what you've done here: polling child process output streams. That's a real use case, which we really want to be able to use with a dumb single-threaded Io implementation, and it's what every call to std.Io.poll in the Zig repository is doing right now---but I think other use cases where operate would be preferable to async/concurrent are pretty damn rare. (Of course, if someone has a counterexample where they think operate is clearly preferable for something else too, do bring that up! A counterexample might make me rethink my conclusions here).

Having pondered this a little, my current feeling is that the operate API sadly doesn't really fit. It feels like it's trying to bolt on to Io a completely different design for asynchronous operations, and tries to be useful for several things at once, therefore ending up kind of mediocre at them all. I believe that a small, focused poll API (vaguely akin to the old one) would probably be a neater and simpler addition to the interface, which handles the key use case of child processes just as well; reflects the primitives actually available on OSes so that implementations don't have to do anything too complicated; and avoids muddying the waters wrt how you're "supposed" to do certain stuff asynchronously.

First-time contributor

I'm likely being naive as someone who's never used the current poller directly, but what benefit does this bring that wouldn't be better solved by making any higher level "poller" interface just use io.concurrent, and then working down from there to support other use-cases: to meet a single-threaded requirement, caller can use Io.Evented, and for platforms that don't have a working Io.Evented, implement possibly-somewhat-limited Io.Evented backends for Windows and posixy poll(2) that suffice at least for this use-case, etc?

andrewrk force-pushed poll from 35ec7b4da8 to f6b28ea244 on 2026-01-08 at 22:05:07 +01:00
First-time contributor

> counterexample

I think there might be two classes of objections here: performance and programming model.

For performance, I don't know an example, but it would have to be something with high natural concurrent
"width" and relatively few syscalls per unit of concurrency, so perhaps concurrent du?

For programming model, my central example would be compaction code (compaction.zig: https://github.com/tigerbeetle/tigerbeetle/blob/46941baa27667ec964e302954d8f2eb342b03cdc/src/lsm/compaction.zig). Compaction is similar to merge,
where you turn sorted A and sorted B into sorted combined union C, except that all of A, B, and C
are on disk, and chunked. So the code here is a loop that reads chunk A, reads chunk B, merges them
in-memory, and writes a new chunk C, with complications:

  • Elements from A and B might also cancel out, so in general it's hard to predict what happens
    first: running out of chunk A, chunk B, or in-progress chunk C.
  • To hide IO latency, we want to read ahead several chunks of A and B. Similarly, several chunks
    from C might be in flight, if in-memory merge is faster than chunk write latency.
  • But we don't want to read too far ahead, so there's a shared budget of in-memory chunks we can use
    for IO.
  • But we need to be smart about prioritizing which chunk we read next, depending on whether A or B is
    consumed faster.
  • And we actually run several independent copies of the process, but they all share budget.
  • And, the kicker, the whole process is suspendable, we pause at certain well-defined safe-points
    and resume later.
  • And safe points are defined dynamically based on the progress so far.

This adds up to a ridiculous 2k-line choreographed dance around a 10-line hot loop that actually
does all the real work.

And it does seem to me that writing that in the explicit state-machine style "I've read/merged/written
a chunk, what should I do next, given global constraints?" is the least bad approach. Filing each
individual read as a separate async and then updating the state under a mutex feels horrible. Having
a central "dispatch" async function with a mailbox where completions are posted is better, but feels
like extra abstraction overhead over raw concurrent operations.

operate won't help there though, because each individual "read_chunk" operation isn't a single
read syscall, but rather a readAll loop.

Which I guess is another issue with operate --- not only is it a separate way to express
concurrency, it also is not composable, and works only for operations that IO deems "primitive". Hm,
I wonder if we can actually flip this around? Instead of operations being structs, we could
submit arbitrary functions, and then IO is allowed to optimize certain operations? E.g.,
single_threaded could document that, if all ops are reads and writes, it goes via epoll without
spending concurrency budget. Not a serious suggestion :)

Author
Owner

Thank you, both of you, for taking a look!

Please have a look at the commit I just pushed. I simplified the interface and implementation to eliminate the n_wait and timeout parameters. Timeout can maybe be brought back in the future but it's a separate concern.

Specific interface I am not happy with (but maybe I just misunderstand something). The issue I have with it is that it assumes readiness based IO, not completion based

This is the main thing I addressed with the new commit. The std.Io.Threaded implementation based on poll() does up to 1 poll() and then always returns. With this API definition, completion-based implementations such as IoUring can send the operations to the event loop like normal and wait on them like normal, returning after all have completed. This is a valid definition of "nonblocking = true" in the operation (i.e. it does not block other operations).

I'm not concerned about operations.len being very large. Applications suffering from this problem should be solved by using a more appropriate Io implementation. Meanwhile, if you're stuck with std.Io.Threaded (i.e. poll), this batching interface is going to be strictly better than doing these operations in a loop. Much, much better, in fact, because it will prevent I/O operations from unnecessarily blocking other ones. Also the implementation has a fixed buffer for the poll set, so it's O(1) technically ;)
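A hedged sketch of the readiness-based strategy described above: one poll() over the whole batch, then at most one non-blocking attempt per ready descriptor. This is illustrative, not the actual std.Io.Threaded implementation; the `std.posix` names (`pollfd`, `POLL.IN`, `poll`, `read`) are assumed from the current standard library, and the fixed-size fd buffer mirrors the O(1) poll set mentioned above.

```zig
const std = @import("std");
const posix = std.posix;

// Issue up to one poll() for the entire batch, then attempt each ready read
// once. A null result means the operation would have blocked; the caller
// resubmits it in a later batch. No single operation blocks the others.
fn attemptBatch(files: []const posix.fd_t, bufs: [][]u8, results: []?usize) !void {
    var fds: [16]posix.pollfd = undefined;
    std.debug.assert(files.len <= fds.len);
    std.debug.assert(files.len == bufs.len and files.len == results.len);
    for (files, 0..) |fd, i| {
        fds[i] = .{ .fd = fd, .events = posix.POLL.IN, .revents = 0 };
    }
    // -1 waits until at least one fd is ready: the batch as a whole may
    // block, but only on readiness, never inside an individual read.
    _ = try posix.poll(fds[0..files.len], -1);
    for (fds[0..files.len], bufs, results) |pfd, buf, *result| {
        if (pfd.revents & posix.POLL.IN != 0) {
            result.* = try posix.read(pfd.fd, buf);
        } else {
            result.* = null; // not ready; would block
        }
    }
}
```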

@blblack - the problem statement is this:

ability to migrate code that used poll() without causing error.ConcurrencyUnavailable on -fsingle-threaded builds

In other words, if you use io.concurrent() in single-threaded builds using std.Io.Threaded, you receive error.ConcurrencyUnavailable. Therefore, if users update their Zig code from using poll() to using std.Io under these conditions, their software will now fail, when it worked perfectly fine before.

Author
Owner

I'd argue that its implementation is just more natural as a hand-coded state machine, and not as a pair of asyncConcurrent tasks.

I have to strongly disagree with you there. If you're willing to accept error.ConcurrencyUnavailable into the error set, then this is what you can write:

diff --git a/lib/std/process.zig b/lib/std/process.zig
index b5fc8541d8..15815354c8 100644
--- a/lib/std/process.zig
+++ b/lib/std/process.zig
@@ -467,6 +467,7 @@ pub fn spawnPath(io: Io, dir: Io.Dir, options: SpawnOptions) SpawnError!Child {
 
 pub const RunError = posix.GetCwdError || posix.ReadError || SpawnError || posix.PollError || error{
 StreamTooLong,
+ ConcurrencyUnavailable,
 };
 
 pub const RunOptions = struct {
diff --git a/lib/std/process/Child.zig b/lib/std/process/Child.zig
index fc31014520..e64f6106fa 100644
--- a/lib/std/process/Child.zig
+++ b/lib/std/process/Child.zig
@@ -125,14 +125,15 @@ pub fn wait(child: *Child, io: Io) WaitError!Term {
 return io.vtable.childWait(io.userdata, child);
 }
 
-pub const CollectOutputError = error{StreamTooLong} || Allocator.Error || Io.File.Reader.Error;
+pub const CollectOutputError = error{
+ StreamTooLong,
+ ConcurrencyUnavailable,
+} || Allocator.Error || Io.File.Reader.Error;
 
 pub const CollectOutputOptions = struct {
 stdout: *std.ArrayList(u8),
 stderr: *std.ArrayList(u8),
- /// Used for `stdout` and `stderr`. If not provided, only the existing
- /// capacity will be used.
- allocator: ?Allocator = null,
+ allocator: Allocator,
 stdout_limit: Io.Limit = .unlimited,
 stderr_limit: Io.Limit = .unlimited,
 };
@@ -144,56 +145,24 @@ pub const CollectOutputOptions = struct {
 /// The process must have been started with stdout and stderr set to
 /// `process.SpawnOptions.StdIo.pipe`.
 pub fn collectOutput(child: *const Child, io: Io, options: CollectOutputOptions) CollectOutputError!void {
- const files: [2]Io.File = .{ child.stdout.?, child.stderr.? };
- const lists: [2]*std.ArrayList(u8) = .{ options.stdout, options.stderr };
- const limits: [2]Io.Limit = .{ options.stdout_limit, options.stderr_limit };
- var dones: [2]bool = .{ false, false };
- var reads: [2]Io.Operation = undefined;
- var vecs: [2][1][]u8 = undefined;
- while (true) {
- for (&reads, &lists, &files, dones, &vecs) |*read, list, file, done, *vec| {
- if (done) {
- read.* = .noop;
- continue;
- }
- if (options.allocator) |gpa| try list.ensureUnusedCapacity(gpa, 1);
- const cap = list.unusedCapacitySlice();
- if (cap.len == 0) return error.StreamTooLong;
- vec[0] = cap;
- read.* = .{ .file_read_streaming = .{
- .file = file,
- .data = vec,
- .nonblocking = true,
- .result = undefined,
- } };
- }
- var all_done = true;
- var any_canceled = false;
- var other_err: (error{StreamTooLong} || Io.File.Reader.Error)!void = {};
- io.vtable.operate(io.userdata, &reads);
- for (&reads, &lists, &limits, &dones) |*read, list, limit, *done| {
- if (done.*) continue;
- const n = read.file_read_streaming.result catch |err| switch (err) {
- error.Canceled => {
- any_canceled = true;
- continue;
- },
- error.WouldBlock => continue,
- else => |e| {
- other_err = e;
- continue;
- },
- };
- if (n == 0) {
- done.* = true;
- } else {
- all_done = false;
- }
- list.items.len += n;
- if (list.items.len > @intFromEnum(limit)) other_err = error.StreamTooLong;
- }
- if (any_canceled) return error.Canceled;
- try other_err;
- if (all_done) return;
- }
+ var stdout = try io.concurrent(collectStream, .{
+ io, options.allocator, child.stdout.?, options.stdout, options.stdout_limit,
+ });
+ defer stdout.cancel(io) catch {};
+
+ var stderr = try io.concurrent(collectStream, .{
+ io, options.allocator, child.stderr.?, options.stderr, options.stderr_limit,
+ });
+ defer stderr.cancel(io) catch {};
+
+ try stdout.await(io);
+ try stderr.await(io);
+}
+
+fn collectStream(io: Io, gpa: Allocator, file: File, list: *std.ArrayList(u8), limit: Io.Limit) CollectOutputError!void {
+ var fr = file.readerStreaming(io, &.{});
+ fr.interface.appendRemaining(gpa, list, limit) catch |err| switch (err) {
+ error.ReadFailed => return fr.err.?,
+ else => |e| return e,
+ };
 }

This is sooo much nicer. The only reason to suffer through the other way is to avoid the possibility of failure via error.ConcurrencyUnavailable. Or, to avoid the tough question of, "after upgrading to Zig 0.16.0, why does my program spawn an extra thread now when it didn't before?"

andrewrk force-pushed poll from f6b28ea244
Some checks failed
ci / x86_64-freebsd-debug (pull_request) Failing after 14m8s
ci / x86_64-freebsd-release (pull_request) Failing after 3m25s
ci / aarch64-linux-debug (pull_request) Failing after 8m13s
ci / aarch64-linux-release (pull_request) Failing after 5m50s
ci / x86_64-openbsd-debug (pull_request) Failing after 4m7s
ci / x86_64-openbsd-release (pull_request) Failing after 3m19s
ci / aarch64-macos-debug (pull_request) Failing after 8m48s
ci / aarch64-macos-release (pull_request) Failing after 6m15s
ci / x86_64-windows-release (pull_request) Failing after 6m32s
ci / x86_64-windows-debug (pull_request) Failing after 7m17s
ci / x86_64-linux-debug (pull_request) Failing after 8m15s
ci / x86_64-linux-debug-llvm (pull_request) Failing after 8m26s
ci / x86_64-linux-release (pull_request) Failing after 4m11s
ci / s390x-linux-debug (pull_request) Failing after 10m10s
ci / s390x-linux-release (pull_request) Failing after 7m53s
ci / powerpc64le-linux-debug (pull_request) Failing after 9m15s
ci / powerpc64le-linux-release (pull_request) Failing after 7m9s
ci / loongarch64-linux-debug (pull_request) Failing after 3m29s
ci / loongarch64-linux-release (pull_request) Failing after 2m59s
ci / riscv64-linux-debug (pull_request) Has been skipped
ci / riscv64-linux-release (pull_request) Has been skipped
to 5cd0c75a89
2026-01-10 00:07:26 +01:00
andrewrk force-pushed poll from 5cd0c75a89
to f5ed2d2d14
2026-01-10 04:13:23 +01:00
std.process.Child.collectOutput: change back to other impl
d93db4c387
this one avoids calling poll() more than necessary
andrewrk force-pushed poll from d93db4c387
to 078a19cf31
2026-01-10 05:47:38 +01:00
First-time contributor

I have to strongly disagree with you there

Thanks for calling me out on this one!

Please have a look at the commit I just pushed.

Yeah, I like the new version, don't see any footguns, it solves the read2 use case (https://github.com/rust-lang/cargo/blob/2e5dd3484ec047c5f825ccdc13ae188394b8708b/crates/cargo-util/src/read2.rs), and it feels like it might be useful more generally.

andrewrk force-pushed poll from 078a19cf31
to 139019739d
2026-01-13 08:22:44 +01:00
andrewrk force-pushed poll from 139019739d
to 0b4ad5b534
2026-01-14 06:29:55 +01:00
andrewrk changed title from std.Io: proof-of-concept "operations" API, satisfying the "poll" use case to std.Io: introduce batching and operations API, satisfying the "poll" use case, 2026-01-14 07:13:04 +01:00
andrewrk force-pushed poll from 0b4ad5b534
to 3b71c697ab
Some checks failed
ci / loongarch64-linux-debug (pull_request) Waiting to run
ci / loongarch64-linux-release (pull_request) Waiting to run
ci / aarch64-macos-release (pull_request) Failing after 1m50s
ci / x86_64-freebsd-release (pull_request) Failing after 2m2s
ci / x86_64-freebsd-debug (pull_request) Failing after 2m48s
ci / x86_64-windows-release (pull_request) Failing after 3m16s
ci / x86_64-windows-debug (pull_request) Failing after 4m21s
ci / aarch64-macos-debug (pull_request) Failing after 5m13s
ci / x86_64-openbsd-release (pull_request) Failing after 4m20s
ci / x86_64-openbsd-debug (pull_request) Failing after 6m42s
ci / x86_64-linux-release (pull_request) Failing after 31m41s
ci / x86_64-linux-debug (pull_request) Failing after 35m42s
ci / aarch64-linux-release (pull_request) Failing after 57m56s
ci / x86_64-linux-debug-llvm (pull_request) Failing after 1h2m36s
ci / aarch64-linux-debug (pull_request) Failing after 1h10m35s
ci / powerpc64le-linux-release (pull_request) Failing after 46m0s
ci / powerpc64le-linux-debug (pull_request) Failing after 54m1s
ci / s390x-linux-debug (pull_request) Failing after 1h7m59s
ci / s390x-linux-release (pull_request) Failing after 45m4s
2026-01-14 10:01:43 +01:00
This pull request has changes conflicting with the target branch.
  • lib/std/Io/Threaded.zig
Checkout: from your project repository, check out a new branch and test the changes:

git fetch -u origin poll:poll
git switch poll
4 participants
Reference
ziglang/zig!30743