I have a bash script with various if statements based on command line arguments I pass in when calling it. Having some kind of output as to what commands are being run is helpful to confirm the flow through all those if statements, but my current solution is giving me too much information.
Using set -v in the script was somewhat helpful to see commands printed to the screen as they were run; however, I get too many commands. It's almost like an entire copy of the script.
I want output that shows what commands are being run, but I don't want to see comments, new lines, expressions in if statements, etc.
Is there a way I can pass all possible output generated by the -v option through a regex first before it is printed? Or is there some other way to get bash to output only commands of a certain "type" (e.g. commands that invoke executables, rather than bash-specific statements, comments, etc.)?
[1] https://stackoverflow.com/questions/257616/sudo-changes-path-why was quite helpful on this and is where I got the suggestion for the set -v usage.
Edit:
A similar (but not identical) script to the one I'm running:
#!/bin/bash
#get verbose command output
set -v
env=1ドル
if [ "$env" == "dev" ]; then
 python ascript.py
fi
if [ "$env" == "prod" ]; then
 #launching in prod will most likely fail if not run as root. Warn user if not running as root.
 if [ $EUID -ne 0 ]; then
 echo "It doesn't look like you're running me as root. This probably won't work. Press any key to continue." > /dev/stderr
 read input
 fi
 #"stop" any existing nginx processes
 pkill -f nginx
 nginx -c `pwd`/conf/artfndr_nginx.conf
fi
I want only 2 possible sets of output lines from this script. The first:
python ascript.py
The second:
pkill -f nginx
nginx -c /some/current/directory/conf/artfndr_nginx.conf
Use a sub-shell, i.e.:
( set -x; cmd1; cmd2 )
For example:
( set -x; echo "hi there" )
prints
+ echo 'hi there'
hi there
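The same idea can be wrapped in a tiny helper so you don't repeat the subshell everywhere; a sketch (the function name is illustrative, not from the answer):

```shell
#!/bin/bash

# Run a command in a subshell with tracing enabled.
# The trace line goes to stderr, the command's own output to stdout.
run_traced() {
 ( set -x; "$@" )
}

run_traced echo "hi there"
```

Because set -x is enabled inside the subshell, tracing switches off automatically when the subshell exits, so the caller's shell options are untouched.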
- I prefer this one over set -x; cmd; set +x for several reasons. First, it does not reset set -x in case it was already on before. Second, termination of the script inside the subshell does not cause traps to be executed with verbose settings on. (Oliver Gondža, Sep 27, 2019)
- @OliverGondža Depends on the problem; as you pointed out, the subshell requires less mental overhead at the cost of actual resource overhead. Combined in a loop, a function, or anything low level, this approach is risky. (christian elsee, Sep 10, 2021)
When I write more complex bash scripts, I use a little function to run commands that will also print the commands run into a logfile (with thanks to @KonradRudolph for providing a simpler version in the comments):
runthis(){
## print the command to the logfile
echo "$@" >> "$LOG"
## run the command and redirect its error output
## to the logfile
"$@" 2>> "$LOG"
}
Then, in my script, I run commands like this:
runthis cp /foo/bar /baz/
If you don't want a command printed, just run it normally.
You can either set $LOG to a filename, or remove it and print to stdout or stderr.
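A variant that prints to stderr instead of a logfile (a sketch of the option mentioned above, not the author's exact code) could look like:

```shell
#!/bin/bash

runthis() {
 ## print the command to stderr
 echo "$@" >&2
 ## run the command, leaving its output streams untouched
 "$@"
}

runthis echo "hello"
```

This keeps the command echo separate from the command's normal stdout, so the script's real output can still be piped or captured cleanly.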
- +1 Also I was able to run this inside my script by simply prepending "important" commands with a short-named version of the function, so the lines look something like v python ascript.py, without having to enclose them in quotes and lose my vim code highlighting. (Trindaz, Dec 27, 2013)
- @Trindaz The quotes are there for when you need to pass variables in your commands; if the variables contain spaces you might have problems otherwise. (Dec 27, 2013)
- "eval ..... || ok=1" will set ok to "1" only when "eval ..." fails?? Maybe you meant "&&"? And if you meant that, add "ok=0" before the eval line so it's reset each time. Or simply rename "ok" to "error"; it seems that's what was meant here. So in the end: eval "$@" 2>> "$LOG" && error=0 || error=1 (Olivier Dulac, Dec 27, 2013)
- @OlivierDulac In the version of this I use, I have an "ok" variable that will stop the script if any command fails. Since that was not relevant here, I removed it but forgot to delete the "|| ok=1". Thanks, fixed now. (Dec 27, 2013)
- @KonradRudolph Um, I cannot for the life of me see any benefit at all. I suspect I had some reason back then, since I know I try to avoid eval as much as possible, but I tried to break your suggestion using various file names with spaces and the like and it worked every time. I suspect you are right and there is no benefit, just needless complication, so I have edited the answer to your version. Thanks! (Nov 27, 2023)
I've seen methods similar to @terdon's. They're the beginnings of what higher-level programming languages call loggers, offered as full-blown libraries such as log4j (Java), Log4perl (Perl), etc.
You can get something similar using set -x in Bash, as you've mentioned, but you can also turn debugging on for just a subset of commands by wrapping blocks of code with it, like so:
$ set -x; cmd1; cmd2; set +x
Examples
Here's a one-liner pattern you can use:
$ set -x; echo "hi"; set +x
+ echo hi
hi
+ set +x
You can wrap them like this for multiple commands in a script.
set -x
cmd1
cmd2
set +x
cmd3
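On bash 4.1 and later, you can also keep the set -x trace separate from the script's normal stderr by pointing BASH_XTRACEFD at a dedicated file descriptor. A sketch (the temp-file handling is illustrative, not from the answer):

```shell
#!/bin/bash

# open a dedicated file to collect the trace output
trace_file=$(mktemp)
exec 9> "$trace_file"

# tell bash to write xtrace output to fd 9 instead of stderr
BASH_XTRACEFD=9

set -x
echo "doing work"
set +x

# the trace lines (e.g. + echo 'doing work') are now in "$trace_file",
# while the script's own stdout and stderr stay clean
```

This way the traced commands can be inspected, grepped, or filtered after the fact without interleaving with the script's real output.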
Log4Bash
Most people don't realize it, but Bash has a log4* implementation as well: Log4Bash. If you have more modest needs, this might be worth the time to set up.
log4bash is an attempt to have better logging for Bash scripts (i.e. make logging in Bash suck less).
Examples
Here are some examples of using log4bash.
#!/usr/bin/env bash
source log4bash.sh
log "This is regular log message... log and log_info do the same thing";
log_warning "Luke ... you turned off your targeting computer";
log_info "I have you now!";
log_success "You're all clear kid, now let's blow this thing and go home.";
log_error "One thing's for sure, we're all gonna be a lot thinner.";
# If you have figlet installed -- you'll see some big letters on the screen!
log_captains "What was in the captain's toilet?";
# If you have the "say" command (e.g. on a Mac)
log_speak "Resistance is futile";
Log4sh
If you want what I would classify as more of the full power of a log4* framework, then I'd give Log4sh a try.
Excerpt:
log4sh was originally developed to solve a logging problem I had in some of the production environments I have worked in where I either had too much logging, or not enough. Cron jobs in particular caused me the most headaches with their constant and annoying emails telling me that everything worked, or that nothing worked but not a detailed reason why. I now use log4sh in environments where logging from shell scripts is critical, but where I need more than just a simple "Hello, fix me!" type of logging message. If you like what you see, or have any suggestions on improvements, please feel free to drop me an email. If there is enough interest in the project, I will develop it further.
log4sh has been developed under the Bourne Again Shell (/bin/bash) on Linux, but great care has been taken to make sure it works under the default Bourne Shell of Solaris (/bin/sh) as this happens to be the primary production platform used by myself.
Log4sh supports several shells, not just Bash.
- Bourne Shell (sh)
- BASH - GNU Bourne Again SHell (bash)
- DASH (dash)
- Korn Shell (ksh)
- pdksh - the Public Domain Korn Shell (pdksh)
It's also been tested on several OSes, not just Linux.
- Cygwin (under Windows)
- FreeBSD (user supported)
- Linux (Gentoo, RedHat, Ubuntu)
- Mac OS X
- Solaris 8, 9, 10
Using a log4* framework will take some time to learn but it is worth it if you have more demanding needs from your logging. Log4sh makes use of a configuration file where you can define appenders and control the formatting for the output that will appear.
Example
#! /bin/sh
#
# log4sh example: Hello, world
#
# load log4sh (disabling properties file warning) and clear the default
# configuration
LOG4SH_CONFIGURATION='none' . ./log4sh
log4sh_resetConfiguration
# set the global logging level to INFO
logger_setLevel INFO
# add and configure a FileAppender that outputs to STDERR, and activate the
# configuration
logger_addAppender stderr
appender_setType stderr FileAppender
appender_file_setFile stderr STDERR
appender_activateOptions stderr
# say Hello to the world
logger_info 'Hello, world'
Now when I run it:
$ ./log4sh.bash
INFO - Hello, world
NOTE: The above configures the appender as part of the code. If you like, this can be extracted into its own file, e.g. log4sh.properties.
Consult the excellent documentation for Log4sh if you need further details.
- Thanks for the added notes, but the main problem I have with that is all the set commands I'd need to introduce, alternating around comments etc., so just having a function at the top of my script, with a single-character function call prepended to all "important" lines, seemed neater to me for now (single character because the function has a single-character name). (Trindaz, Dec 27, 2013)
- @Trindaz Sorry, I hadn't finished my answer yet. Take a look at log4bash if you have more needs than the function that terdon gave. (Dec 27, 2013)
- @Trindaz I do something similar from time to time; the other approach I've used is to wrap echo in my own function, mecho, and then pass a switch called -v into the program for verbose output when I want to turn things on or off. I can also control it with a second argument switch which specifies the function's name, so I have two axes on which to control the logging. This is often the gateway to wanting log4bash, though. (Dec 27, 2013)
- @Trindaz set -x prints commands as they are executed. It doesn't print comments. set -x is practical for debugging (unlike set -v, which isn't very useful). Zsh has better output for set -x than bash; for example, it shows which function is currently being executed and the source line number. (Gilles 'SO- stop being evil', Dec 27, 2013)
- Thanks @Gilles, that's true, but it did give me the if expression expansions, which was overkill in this case. (Trindaz, Dec 27, 2013)
This is a revised version of Steven Penny's neat function. It prints its arguments in color and quotes them as needed. Use it to selectively echo the commands you want to trace. Since quotes are output, you can copy printed lines and paste them to the terminal for immediate re-execution while you are debugging a script. Read the first comment to know what I changed and why.
xc() # $@-args
{
cecho "$@"
"$@"
}
cecho() # $@-args
{
awk '
BEGIN {
x = "047円"
printf "033円[36m"
while (++i < ARGC) {
if (! (y = split(ARGV[i], z, x))) {
printf (x x)
} else {
for (j = 1; j <= y; j++) {
printf "%s", z[j] ~ /[^[:alnum:]%+,./:=@_-]/ ? (x z[j] x) : z[j]
if (j < y) printf "\\" x
}
}
printf i == ARGC - 1 ? "033円[m\n" : FS
}
}
' "$@"
}
Example usage with output:
# xc echo "a b" "c'd" "'" '"' "fg" '' " " "" \# this line prints in green
echo 'a b' c\'d \' '"' fg '' ' ' '' '#' this line prints in green
a b c'd ' " fg # this line prints in green
The second line above prints in green and can be copy-pasted to reproduce the third line.
Further Remarks
@Steven-Penny's original xc is clever and he deserves all the credit for it. However, I noticed some issues, but I couldn't comment on his post directly because I don't have enough reputation. So I made a suggested edit to his post, but the reviewers rejected it. Hence I resorted to posting my comments as this answer, though I would have preferred to be able to edit Steven Penny's own answer.
What I changed wrt Steven-Penny's answer
Fixed printing of null strings (they weren't printed). Fixed printing of strings that include % (they caused awk syntax errors). Replaced for (j in ...) with a C-style for loop, because the former doesn't guarantee the order of array traversal (it's awk-implementation-dependent). Added a leading 0 to the octal numbers for portability.
Update
Steven Penny has since fixed those issues in his answer, so these remarks remain only for the historical record. See the comments section for further details.
You could trap DEBUG and then test the BASH_COMMAND variable. Add this to the top of the script:
log() {
case "1ドル" in
python\ *)
;&
pkill\ *)
printf "%s\n" "$*"
;;
esac
}
trap 'log "$BASH_COMMAND"' DEBUG
The code is readable; it just tests whether the first argument begins with python or pkill, and prints it if that's the case. The trap uses BASH_COMMAND (which contains the command that is about to be executed) as the first argument.
$ bash foo.sh dev
python ascript.py
python: can't open file 'ascript.py': [Errno 2] No such file or directory
$ bash foo.sh prod
It doesn't look like you're running me as root. This probably won't work. Press any key to continue.
pkill -f nginx
foo.sh: line 32: nginx: command not found
Note that while case uses globs, you could just as easily do:
if [[ 1ドル =~ python|nginx ]]
then
printf "%s\n" "$*"
fi
And use regular expressions.
You can use the "sh_trace" shell function from the POSIX stdlib library to print the command in color before running it. Example:
Underlying Awk function:
function sh_trace(ary, b, d, k, q, w, z) {
b = "47円"
for (d in ary) {
k = split(ary[d], q, b)
q[1]
if (d - 1)
z = z " "
for (w in q) {
z = z (!k || q[w] ~ "[^[:alnum:]%+,./:=@_-]" ? b q[w] b : q[w]) \
(w < k ? "\\" b : "")
}
}
printf "33円[36m%s33円[m\n", z
system(z)
}
-
- FYI, the link to the POSIX stdlib library is broken, and I don't find any hits googling just for "sh_trace". (studgeek, May 20, 2024)
set -v
output you want and which ones you don't.