
I have a bash script that may be invoked multiple times simultaneously. To protect the state information (saved in a /tmp file) that the script accesses, I am using file locking like this:

do_something()
{
...
}

# Check if there are any other instances of the script; if so, exit
exec 8>"$LOCK"
if ! flock -n -x 8; then
    exit 1
fi

# script does something...
do_something

Now any other instance invoked while this script was running exits. If there were n simultaneous invocations, I want the script to run only one extra time, not n times, something like this:

do_something()
{
...
}

# Check if there are any other instances of the script; if so, exit
exec 8>"$LOCK"
if ! flock -n -x 8; then
    exit 1
fi

# script does something...
do_something

# Check if another instance was invoked; if so, run do_something again
if [ condition ]; then
    do_something
fi

How can I go about doing this? Touching a file inside the flock before quitting and having that file as the condition for the second if doesn't seem to work.
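A hedged sketch of the intended structure (file names and the `do_something` body are placeholders, not the real script): every invocation records a request *before* trying the lock, and the lock holder loops until no request is pending. A single late arrival then triggers exactly one extra run, although a race window remains, as the answers below discuss.

```shell
#!/bin/bash
# Sketch only: each invocation announces itself first, then tries the lock.
# The lock holder keeps servicing requests until none are pending.
LOCK=/tmp/myscript.lock          # placeholder path
FLAG=/tmp/myscript.rerun         # placeholder path
RUNS=0
do_something() { RUNS=$((RUNS + 1)); }

touch "$FLAG"                    # announce this invocation before locking
exec 8>"$LOCK"
if ! flock -n -x 8; then
    exit 1                       # the current holder will see our flag
fi
while [ -e "$FLAG" ]; do
    rm "$FLAG"
    do_something                 # one extra run per pending flag, at most
done
rm "$LOCK"
```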

asked Jun 4, 2012 at 22:24
  • Why not just --wait for the lock, and exec 0ドル if flock fails? Commented Jun 4, 2012 at 22:34
  • Because I just want the script to run twice if there were n simultaneous invocations, not n times. Commented Jun 4, 2012 at 23:20

3 Answers


Have one flag (a request file) to signal that something needs doing, and always set it. Have a separate flag that is unset by the execution part.

REQUEST_FILE=/tmp/please_do_something
LOCK_FILE=/tmp/doing_something
# request running
touch "$REQUEST_FILE"
# lock and run
if ln -s "/proc/$$" "$LOCK_FILE" 2>/dev/null; then
    while [ -e "$REQUEST_FILE" ]; do
        do_something
        rm "$REQUEST_FILE"
    done
    rm "$LOCK_FILE"
fi

If you want to ensure that "do_something" is run exactly once for each time the whole script is run, then you need to create some kind of a queue. The overall structure is similar.
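One such queue can be sketched (untested assumption: `flock` from util-linux is available, and the file names are placeholders) by counting pending requests in a file instead of using a single flag, so `do_something` runs once per invocation rather than once per batch:

```shell
#!/bin/bash
# Sketch: each invocation appends one request line under a lock; whoever
# wins the run lock drains the whole queue, one do_something per line.
QUEUE=$(mktemp /tmp/requests.XXXXXX)
RUNLOCK=$(mktemp /tmp/runlock.XXXXXX)
RUN_COUNT=0
do_something() { RUN_COUNT=$((RUN_COUNT + 1)); }

exec 8>>"$QUEUE"     # append fd doubles as the queue's own lock handle
exec 9>"$RUNLOCK"

# 1. Record this invocation's request atomically.
flock -x 8
echo request >&8
flock -u 8

# 2. Only one instance becomes the worker; the rest just leave a request.
if flock -n -x 9; then
    while :; do
        flock -x 8
        n=$(($(wc -l < "$QUEUE")))   # pending requests
        : > "$QUEUE"                 # consume them all
        flock -u 8
        [ "$n" -eq 0 ] && break
        for _ in $(seq "$n"); do do_something; done
    done
fi
```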

answered Jun 4, 2012 at 22:41

2 Comments

This approach is racy - what if when the first instance does the "rm $REQUEST_FILE" line, another instance starts and does "touch $REQUEST_FILE" ? In that case, the second instance will exit because of the locking code, and the first instance will also exit without executing "do_something" one extra time.
True; it's a question of how tolerant you are of running do_something too many or too few times and how complex you want to make the script. My version as written won't run it too many times, but could run it too few. If you must not run too few but can run too many times, move the rm to before do_something. If you must guarantee that do_something must be restarted after the most recent script invocation, but must not be restarted without a script invocation between starts, then I need to think a bit more.
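The "must not run too few times" variant described in the comment above can be sketched like this (untested; `do_something` is a stub and the initial cleanup is only for the demo):

```shell
#!/bin/bash
# Variant from the comment: remove the request flag *before* running
# do_something, trading "too few" runs for possibly "too many".
REQUEST_FILE=/tmp/please_do_something
LOCK_FILE=/tmp/doing_something
do_something() { :; }            # stub for illustration
rm -f "$REQUEST_FILE" "$LOCK_FILE"   # start clean for this demo only

touch "$REQUEST_FILE"
if ln -s "/proc/$$" "$LOCK_FILE" 2>/dev/null; then
    while [ -e "$REQUEST_FILE" ]; do
        rm "$REQUEST_FILE"       # consume the request first...
        do_something             # ...then service it; a touch landing now re-queues
    done
    rm "$LOCK_FILE"
fi
```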

They're not everyone's favourite, but I've always been a fan of symbolic links for making lockfiles, since creating them is atomic. For example:

lockfile=/var/run/`basename 0ドル`.lock
if ! ln -s "pid=$$ when=`date '+%s'` status=$something" "$lockfile"; then
 echo "Can't set lock." >&2
 exit 1
fi

By encoding useful information directly into the link target, you eliminate the race condition introduced by writing to files.
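For instance (a minimal sketch; the field names are just the ones used above), the encoded information can be read back later with `readlink`, with no file content ever being written:

```shell
#!/bin/bash
# Sketch: store metadata in the symlink *target* and read it back with
# readlink. The link may dangle; only its target string matters.
lockfile=$(mktemp -u /tmp/demo.lock.XXXXXX)   # -u: unused name, link must not pre-exist

if ln -s "pid=$$ when=$(date '+%s')" "$lockfile"; then
    info=$(readlink "$lockfile")              # prints the target string
    echo "lock holder info: $info"
    rm "$lockfile"
fi
```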

That said, the link that Dennis posted provides much more useful information that you should probably try to understand before writing much more of your script. My example above is sort of related to BashFAQ/045 which suggests doing a similar thing with mkdir.

If I understand your question correctly, then what you want might be achieved (slightly unreliably) by using two lock files. If setting the first lock fails, we try the second lock. If setting the second lock also fails, we exit. The race window exists if the first lock is deleted after we check it but before we check the second, still-existing lock. If this level of error is acceptable to you, that's great.

This is untested, but it looks reasonable to me.

#!/usr/local/bin/bash
lockbase="/tmp/test.lock"
setlock() {
 if ln -s "pid=$$" "$lockbase".1ドル 2>/dev/null; then
 trap "rm \"$lockbase\".1ドル" 0 1 2 5 15
 else
 return 1
 fi
}
if setlock 1 || setlock 2; then
 echo "I'm in!"
 do_something_amazing
else
 echo "No lock - aborting."
fi
answered Jun 5, 2012 at 5:20

3 Comments

Sure, that's why you include the pid in the symlink. But that's out of scope for this question. :-)
I like to make symlinks to /proc/$$ - that way you can easily check if the process is still running: [ ! -e $LOCKFILE -a -L $LOCKFILE ] - that is, a broken symlink - means the previous run crashed without removing the lockfile.
That's pretty clever, but it only works on operating systems with a /proc filesystem. I work in FreeBSD, where /proc is optional, and not enabled by default. Also, I like having the "when" field, so I can determine if a process should be considered stuck despite still existing.
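A sketch of that broken-symlink staleness check (assuming a Linux `/proc` filesystem, per the FreeBSD caveat above; paths are placeholders):

```shell
#!/bin/bash
# Sketch: a symlink to /proc/<pid> "breaks" once the owning process exits,
# so -L (it is a symlink) plus ! -e (its target is gone) detects staleness.
LOCKFILE=/tmp/proc_lock_demo.$$

ln -s /proc/999999999 "$LOCKFILE"   # almost certainly no such pid: stale
if [ -L "$LOCKFILE" ] && [ ! -e "$LOCKFILE" ]; then
    echo "stale lock: previous run crashed without cleanup"
fi
rm "$LOCKFILE"

ln -s "/proc/$$" "$LOCKFILE"        # our own pid: live lock
if [ -L "$LOCKFILE" ] && [ ! -e "$LOCKFILE" ]; then
    echo "stale lock"
else
    echo "lock is live"
fi
rm "$LOCKFILE"
```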

Please see Process Management.

answered Jun 4, 2012 at 22:30

