Let’s say I have two containers X and Y (two containers in the same Kubernetes pod, actually) that share a volume. On that volume is a named pipe /foo that I want attached to a bash process on container Y. As container X sends data to container Y, I want it to work the same way that, e.g., ssh works: the command should be read by the remote bash shell, and pipes and whatnot are handled transparently.
I’ve tried the following on machine Y:
- `bash -s <> /foo`: If from machine X I run `echo "ls -l" > /foo; cat /foo` then the command is run on Y, but the output goes to Y’s terminal and not to the named pipe.
- `bash -s < /foo > /foo`: If from machine X I run `echo "ls -l" > /foo; cat /foo` then nothing happens.
- `bash -s < /foo | cat - > foo`: If from machine X I run `echo "ls -l" > /foo; cat /foo` then the command is run on Y, the output appears on X’s terminal, and then no further commands are executed on Y and X hangs.
It could be that what I’m looking for is a way to attach the current shell on X to the named pipe, and on Y attach it to a new shell. Again, I’d like it to work like ssh does, where you can connect and send a single non-interactive command (or script) and/or pipe data, and somehow it works the way you’d expect. I just haven’t figured out the proper shell fu to make it happen.
Also I don’t understand why the three options above produce different results.
2 Answers
OK, with the help of telometto, Kamil and Greg A. Woods, I arrived at the following solution, which seems to meet my needs:
- On the shared volume, create three named pipes: `stdin`, `stdout` and `stderr`.
- On machine Y, run `while true; do bash -s < stdin 2> stderr > stdout; done`
- On machine X I created a helper script to help run remote commands:
#!/bin/bash
# On interrupt/termination, kill the background cats below.
trap "trap - SIGTERM && kill -- -$$" SIGINT SIGTERM
(
# Send the command line, then, if data is already waiting on our
# stdin (piped or redirected input), forward that too.
echo "$@"
if read -t 0 ; then
cat - 2> /dev/null
fi
) | cat > stdin
# Relay the remote command's output and errors back to this terminal.
cat stdout &
cat stderr > /dev/stderr &
wait
To use it, you simply run `./run "ls -l"` from X and the command is run on Y. You can also run `./run "cat > bar" < foo` from X and the command is run on Y, moving (in this case) the contents of `foo` from X to Y along the way. Piped input is handled similarly.
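Put together, the whole setup can be sketched on a single machine, with a background loop standing in for machine Y and a temporary directory standing in for the shared volume (both stand-ins are illustrative assumptions, not part of the original answer):

```shell
#!/bin/sh
# Single-machine sketch of the accepted setup: a temp directory
# plays the shared volume, a background loop plays machine Y.
set -e
dir=$(mktemp -d)
cd "$dir"
mkfifo stdin stdout stderr          # step 1: the three named pipes

# Step 2 ("machine Y"): serve commands arriving on the stdin pipe.
( while true; do bash -s < stdin 2> stderr > stdout; done ) &
server=$!

# "Machine X": relay stderr, send one command, collect its output.
# (The stderr reader must exist, or the server blocks opening stderr.)
cat stderr >&2 &
echo 'echo hello from Y' > stdin
result=$(cat stdout)
echo "$result"

kill "$server"
rm -rf "$dir"
```

Note the ordering: the FIFO opens block until both ends exist, which is why the stderr reader is started before the first command is written into the stdin pipe.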
- Excellent :-) Please don't forget to drop by after 48 hours to accept your answer. That prevents it from appearing as "Unanswered" in the future, and helps others facing a similar issue. – Peregrino69, Feb 15, 2023 at 16:39
- Both instances of `cat` are entirely superfluous: `bash -s < stdin > stdout 2> stderr` – Greg A. Woods, Feb 16, 2023 at 3:16
To attach a bash shell to a named pipe, you can start a new bash process whose input is connected to the pipe. E.g., on machine Y:

`cat /foo | bash`

This starts a shell process that reads its commands from the named pipe `/foo`. Now, any commands or data sent from machine X to `/foo` will be read and executed by that shell process on machine Y (note that its output still goes to Y's terminal, not back into the pipe).
Regarding your three attempts and the different results:

- `bash -s <> /foo`: This runs a new shell that reads its commands from the pipe. The `<>` syntax opens the named pipe for both reading and writing, so the open does not block and the shell never sees end-of-file as writers come and go. When you run `echo "ls -l" > /foo` on machine X, the string "ls -l" is written into the pipe and read by the shell on machine Y. The command is executed on Y, but the output is printed to Y's terminal instead of being written back to the named pipe.
- `bash -s < /foo > /foo`: This runs a new shell that reads from and writes to the same named pipe. Its output is fed back into the very stream it reads commands from, so the shell and any reader on X compete for the same data, and in practice nothing useful happens.
- `bash -s < /foo | cat - > foo`: This runs a new shell that reads from the pipe and sends its output through cat into `foo` (note: no leading slash). The command is executed on Y, and the output appears on X's terminal, where your `cat /foo` is reading; but nothing is written back into the pipe for subsequent rounds, so no further commands are executed on Y and X hangs.
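The `<>` behavior in the first attempt can be seen in isolation: opening a FIFO read-write neither blocks waiting for a writer nor delivers end-of-file when a writer closes, which is why that shell keeps running between commands. A single-machine sketch (the temp-directory path is a throwaway, not the /foo from the question):

```shell
#!/bin/sh
# Demonstrates <> on a FIFO: the read-write open succeeds even with
# no writer present, and the reader survives writers disconnecting.
set -e
dir=$(mktemp -d)
mkfifo "$dir/fifo"

# Two separate writers, each opening and closing the FIFO in turn.
( echo first > "$dir/fifo"; echo second > "$dir/fifo" ) &

# <> keeps a write end open inside the reader itself, so the first
# writer closing does not end the stream; head stops after 2 lines.
lines=$(head -n 2 <> "$dir/fifo")
echo "$lines"

wait
rm -rf "$dir"
```

With a plain `<` instead of `<>`, the reader would see end-of-file as soon as the first writer closed the pipe, and the second line could be missed.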
- `bash < /foo > /foo` is absurd. Any pipe is unidirectional, there's just one stream. The command makes `bash` read and interpret whatever enters the pipe, including its own output. When this `bash` runs, `echo "ls -l" > /foo` may be harmless, but don't try `echo ls > /foo`. – Kamil Maciorowski, Feb 14, 2023 at 21:54
- `ssh`, when it doesn't allocate a tty on the remote side, passes data from its stdin to the stdin of the remote command; it receives data the remote command prints to its (i.e. remote) stdout and stderr, and prints it locally to its own stdout and stderr respectively. So you need three pipes. You can drop stderr or merge the remote stdout and stderr; still you need at least two pipes: at least one pipe per direction. Can you take it from here?
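That minimum of one pipe per direction can be sketched on a single machine (the pipe names `in` and `out` are made up for the sketch; remote stderr is merged into stdout with `2>&1` so two pipes suffice):

```shell
#!/bin/sh
# Two FIFOs, one per direction: commands travel over "in", the
# remote shell's merged stdout+stderr travels back over "out".
set -e
dir=$(mktemp -d)
cd "$dir"
mkfifo in out

# "Remote" side: a one-shot shell reading commands from in.
( bash -s < in > out 2>&1 ) &

# "Local" side: send a command, then read the reply.
echo 'echo result: $((6 * 7))' > in
reply=$(cat out)
echo "$reply"

wait
rm -rf "$dir"
```

Because the two directions use separate pipes, the shell never reads its own output, avoiding the feedback problem of the single-pipe `bash < /foo > /foo` attempt.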