I wrote an alias in my .bashrc file that opens a txt file every time I start a bash shell.
The problem is that I would like to open the file only once, that is, the first time I open the shell.
Is there any way to do that?
The general solution to this problem is to have a session lock of some kind. You could create a file /tmp/secret with the pid and/or tty of the process which is editing the other file, and remove the lock file when done. Now, your other sessions should be set up to not create this file if it already exists.
Proper locking is a complex topic, but for the simple cases, this might be good enough. If not, google for "mutual exclusion". Do note that there may be security implications if you get it wrong.
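A minimal sketch of that idea for a .bashrc, assuming the file to open is ~/notes.txt and emacs as the editor (the file name, lock name, and editor are placeholders); it is not fully race-free, but as noted above that may be good enough for the simple case:
# Hypothetical sketch: open the file only from the first shell that starts.
lock=/tmp/notes-$(id -u).lock
if [ ! -e "$lock" ]; then
    echo "$$" > "$lock"                        # record which shell holds the lock
    ( emacs ~/notes.txt; rm -f "$lock" ) &     # drop the lock when the editor exits
fi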
Why are you using an alias for this? Sounds like the code should be directly in your .bashrc, not in an alias definition.
So if, say, what you have now in your .bashrc is something like
alias start_editing_my_project_work_hour_report='emacs ~/prj.txt &'
start_editing_my_project_work_hour_report
unalias start_editing_my_project_work_hour_report
... then with the locking, and without the alias, you might end up with something like
# Obtain my UID on this host, and construct directory name and lock file name
uid=$(id -u)
dir=/tmp/prj-$uid
lock=$dir/pid.lock
# The loop will execute at most twice,
# but we don't know yet whether once is enough
while true; do
  # Plain mkdir (no -p) fails if the lock directory already exists
  if mkdir "$dir" 2>/dev/null; then
    # Yay, we have the lock!
    ( echo "$$" >"$lock"; emacs ~/prj.txt; rm -rf "$dir" ) &
    break
  else
    other=$(cat "$lock" 2>/dev/null)
    # If the process which created the lock is still alive, do nothing
    if [ -n "$other" ] && kill -0 "$other" 2>/dev/null; then
      break
    else
      echo "removing stale lock dir (dead PID $other) and retrying" >&2
      rm -rf "$dir"
      continue
    fi
  fi
done
I have encountered this in a bash script:
if { set -C; 2>/dev/null >~/test.lock; }; then
echo "Acquired lock"
else
echo "Lock file exists… exiting"
exit 1
fi
It takes the else branch. I know set -C prevents overwriting existing files, and 2>/dev/null redirects errors to /dev/null, but then there is >~/test.lock, which redirects something into the lock file (what exactly? the errors, probably). I have the file test.lock in my home directory, created and empty. Since this is an if, the condition must be returning false in my case.
{ ... ; ... ; } is a compound command: bash executes every command in it, and the exit code is that of the last one.
It is a bit like ( ... ; ... ), except that with ( you start a subshell (a bit like sh -c "... ; ..."), which is less efficient and, moreover, prevents the commands from affecting variables of your current shell, for example.
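A quick way to see that difference with a throwaway variable:
x=1
( x=2 )        # subshell: the assignment is lost when the subshell exits
echo "$x"      # prints 1
{ x=3; }       # group command: runs in the current shell
echo "$x"      # prints 3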
So, in short, { set -C; 2>/dev/null >~/test.lock; } means "do set -C, then do 2>/dev/null >~/test.lock, and the exit code is that of the last command".
So if { set -C; 2>/dev/null >~/test.lock; } means "if 2>/dev/null >~/test.lock succeeds inside that compound command, that is, after set -C has taken effect".
Now, set -C (noclobber) means that output redirection cannot overwrite existing files.
And 2>/dev/null > ~/test.lock is an attempt to overwrite test.lock if it exists, or to create it if it doesn't.
So, what you have here is:
If the lock file already exists, fail and say "Lock file exists… exiting".
If the lock file does not exist, create it and say "Acquired lock".
And it does this in a single, atomic operation.
So it is different than
# Illustration of how NOT to do it. Do not use this code :-)
if [[ -f "test.lock" ]]
then
echo "lock file exists, exiting"
else
2>/dev/null > ~/test.lock
echo "lock file acquired"
fi
because that clearer, but wrong, version cannot guarantee that nothing will have created the lock file between the evaluation of the if condition and the execution of 2>/dev/null > ~/test.lock (a classic check-then-act race).
The version you've shown has the advantage that the test and the creation of the lock are one and the same operation.
set -C stops output redirection from overwriting existing files
2>/dev/null suppresses the resulting error message
>~/test.lock attempts to write to a file called test.lock. If the file already exists this returns an error because of set -C. Otherwise it will create a new test.lock file, making the next instance of this script fail on this step.
The purpose of lock files is to ensure that only one instance of a script runs at the same time. When the program is finished it could delete ~/test.lock to let another instance run.
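Putting the pieces together, a minimal sketch of the whole pattern might look like this (the trap-based cleanup is my addition, not part of the original snippet):
#!/bin/bash

lockfile=~/test.lock

if { set -C; 2>/dev/null >"$lockfile"; }; then
    echo "Acquired lock"
    trap 'rm -f "$lockfile"' EXIT   # release the lock when the script exits
else
    echo "Lock file exists… exiting"
    exit 1
fi

# ... do the work that must not run twice concurrently ...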
I would like to lock a directory while a Bash script is running and make sure it's not locked anymore when the script dies.
My script creates a directory, and I want to try deleting it; if I can't delete it, that means it's locked. If it's not locked, the script should create the directory.
rm "$dir_path" > /dev/null 2>&1
if [ -d "$dir_path" ]; then
exit 0
fi
cp -r "$template_dir" "$dir_path"
# Lock directory
#LOCK "$dir_path"
# flock --exclusive --nonblock "$app_apex_path" # flock: bad file descriptor
# When script ends the lock is automatically removed without need to do any cleanup
# this is necessary because if for example in case of power failure the dir would still
# be locked on next boot.
I have looked into flock but it doesn't seem to work like this.
Here’s an example with advisory locking, which works fine as long as all participating scripts follow the same protocol.
set -e
if ! mkdir '/tmp/my-magic-lock'; then
exit 1 # Report an error, maybe?
fi
trap "rmdir '/tmp/my-magic-lock'" EXIT
# We hold the advisory lock now.
rm -Rf "$dir_path"
cp -a "$template_dir" "$dir_path"
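If you would rather use flock after all, the "flock: bad file descriptor" error from the question usually means no file descriptor was opened for the lock file; a hedged sketch of the usual pattern (the lock path is a placeholder):
set -e

exec 9>/tmp/my-magic-lock.flock          # open (or create) the lock file on FD 9
flock --exclusive --nonblock 9 || exit 1 # grab the lock, or give up immediately
# The lock is dropped automatically when FD 9 is closed, i.e. when the script
# exits, even after a crash; nothing stays locked across a reboot.

rm -Rf "$dir_path"
cp -a "$template_dir" "$dir_path"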
As a side note, if I were to tackle this situation, I would simply make $template_dir and $dir_path Btrfs subvolumes and use snapshots instead of copies:
set -e
btrfs subvolume delete "$dir_path"
btrfs subvolume snapshot "$template_dir" "$dir_path"
This is far more efficient, "atomic" (in a number of beneficial ways), copy-on-write, and also resilient against multiple concurrent replacement attempts of the same kind, yielding a correct state once all attempts finish.
In kornshell, `basename $0` gives me the name of the current script.
How would I exploit $$ or $PPID to implement the singleton pattern of only having one script named `basename $0` executed on this server by any user?
ps -ef|grep `basename $0`
This will show me all processes which are running that have the name of the currently running script.
I need a script which can abort when a thread which is not $$ is running the script named `basename $0`.
To provide a race-free mutex, flock is your friend. If you aren't on Linux -- where it's provided by util-linux -- a portable version is available.
If you truly want it to apply to the entire system -- crossing user accounts -- you'll need a directory for your locks to live where all users can create files, and you'll need to ensure that all users can write to your lockfiles.
Assuming you have the flock utility, each program which wants to participate in this protocol can behave as follows:
#!/bin/ksh
umask 000 # allow all users to access the file we're about to create
exec 9>"/tmp/${0##*/}.lck" # open lockfile on FD 9, based on basename of argv[0]
umask 022 # move back to more restrictive file permissions
flock -x -n 9 || exit # grab that lock, or exit the script early
# continue
One key note: Do not try to delete lockfiles when your script exits. If you're in a condition where someone else is actively trying to grab a lock, they'll already have a file descriptor on that existing file; if you delete the file while they have a handle on it, you just ensured a race wherein that program can think it holds the lock while someone else creates a new file under the same name and locks it.
I am trying to log in to a remote server (Box1) and read a file on that server (Box1).
The file contains details of another server (Box2); based on those details I have to come back to the local server and ssh to the other server (Box2) for some data crunching, and so on.
ssh box1.com << EOF
if [[ ! -f /home/rakesh/tomar.log ]]
then
echo "LOG file not found"
else
echo " LOG file present"
export server_node1= `cat /home/rakesh/tomar.log`
fi
EOF
ssh box2.com << EOF
if [[ ! -f /home/rakesh/tomar.log ]]
then
echo "LOG file not found"
else
echo " LOG file present"
export server_node2= `cat /home/rakesh/tomar.log`
fi
EOF
but I am not getting the values of "server_node1" and "server_node2" on the local machine.
Any help would be appreciated.
Just like bash -c 'export foo=bar' cannot declare a variable in the calling shell where you typed this, an ssh command cannot declare a variable in the calling shell. You will have to refactor so that the calling shell receives the information and knows what to do with it.
I agree with the comment that storing a log file in a variable is probably not a sane, or at least elegant, thing to do, but the easy way to do what you are attempting is to put the ssh inside the assignment.
server_node1=$(ssh box1.com cat tomar.log)
server_node2=$(ssh box2.com cat tomar.log)
A few notes and amplifications:
The remote shell will run in your home directory, so I took it out (on the assumption that /home/rt9419 is your home directory, obviously).
In case of an error in the cat command, the exit code of ssh will be the error code from cat, and the error message on standard error will be visible on your standard error, so the echo seemed quite superfluous. (If you want a custom message, variable=$(ssh whatever) || echo "Custom message" >&2 would do that. Note the redirection to standard error; it doesn't seem to matter here, but it's good form.)
If you really wanted to, you could run an arbitrarily complex command in the ssh; as outlined above, it didn't seem necessary here, but you could do assignment=$(ssh remote 'if [[ things ]]; then for variable in $(complex commands to drive a loop); do : etc etc; done; fi; more </dev/null; exit "$variable"') or whatever.
As further comments on your original attempt,
The backticks in the here document in your attempt would be evaluated by your local shell before the ssh command even ran. There are separate questions about how to fix that; see e.g. How have both local and remote variable inside an SSH command. But in short, unless you absolutely require the local shell to be able to modify the commands you send, probably put them in single quotes, like I did in the silly complex ssh example above.
The function of export is to make variables visible to child processes. There is no way to affect the environment of a parent process (short of having it cooperate and/or coordinate the change, as in the code above). As an example to illustrate the difference, if you set PERL5LIB to a directory with Perl libraries, but fail to export it, the Perl process you start will not see the variable; it is only visible to the current shell. When you export it, any Perl process you start as a child of this shell will also see the variable and the value you assigned. In other words, you export variables which are not private to the current shell (and don't export private ones; besides keeping them private, this reduces the amount of memory that has to be copied to child processes), but that still only makes them visible to children, by the design of the U*x process architecture.
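To make the PERL5LIB illustration concrete, here is roughly what the difference looks like in a shell session (the path is made up):
PERL5LIB=/opt/mylibs/perl            # set, but not exported: private to this shell
bash -c 'echo "${PERL5LIB:-unset}"'  # a child process prints "unset"

export PERL5LIB                      # now exported
bash -c 'echo "${PERL5LIB:-unset}"'  # a child process prints "/opt/mylibs/perl"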
You should get back the file from box1 and box2 with an scp:
scp box1.com:/home/rt9419/tomar.log ~/tomar1.log
#then you can cat!
export server_node1=`cat ~/tomar1.log`
The same with box2:
scp box2.com:/home/rt9419/tomar.log ~/tomar2.log
#then you can cat!
export server_node2=`cat ~/tomar2.log`
There are several possibilities. In your case, you could, on the remote system, create a file (in bash syntax) containing the assignments of these variables, for example
echo "export server_node2='$(</home/rt9419/tomar.log)'" >>export_settings
(which makes me wonder why you want the whole content of your logfile to be stored in a variable, but this is another question), then transfer this file to your host (for example with scp) and source it from within your bash script.
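Concretely, assuming export_settings ended up in the remote home directory, the remaining steps might look like this:
scp box2.com:export_settings ./export_settings   # fetch the generated assignments
source ./export_settings                         # load them into the current shell
echo "$server_node2"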
I have a shell script which usually takes nearly 10 minutes for a single run, but I need to know: if another request to run the script comes in while an instance is already running, does the new request have to wait for the existing instance to complete, or will a new instance be started?
I need a new instance to be started whenever a request arrives for the same script.
How do I do that?
The shell script is a polling script which looks for a file in a directory and executes that file. The execution of the file takes nearly 10 minutes or more, but if a new file arrives during execution, it also has to be executed simultaneously.
The shell script is below; how do I modify it to handle multiple requests?
#!/bin/bash
while [ 1 ]; do
newfiles=`find /afs/rch/usr8/fsptools/WWW/cgi-bin/upload/ -newer /afs/rch/usr$
touch /afs/rch/usr8/fsptools/WWW/cgi-bin/upload/.my_marker
if [ -n "$newfiles" ]; then
echo "found files $newfiles"
name2=`ls /afs/rch/usr8/fsptools/WWW/cgi-bin/upload/ -Art |tail -n 2 |head $
echo " $name2 "
mkdir -p -m 0755 /afs/rch/usr8/fsptools/WWW/dumpspace/$name2
name1="/afs/rch/usr8/fsptools/WWW/dumpspace/fipsdumputils/fipsdumputil -e -$
$name1
touch /afs/rch/usr8/fsptools/WWW/dumpspace/tempfiles/$name2
fi
sleep 5
done
When writing scripts like the one you describe, I take one of two approaches.
First, you can use a pid file to indicate that a second copy should not run. For example:
#!/bin/sh
pidfile=/var/run/${0##*/}.pid
# remove pid if we exit normally or are terminated
trap "rm -f $pidfile" 0 1 3 15
# Write the pid as a symlink
if ! ln -s "pid=$$" "$pidfile"; then
echo "Already running. Exiting." >&2
exit 0
fi
# Do your stuff
I like using symlinks to store pid because writing a symlink is an atomic operation; two processes can't conflict with each other. You don't even need to check for the existence of the pid symlink, because a failure of ln clearly indicates that a pid cannot be set. That's either a permission or path problem, or it's due to the symlink already being there.
Second option is to make it possible .. nay, preferable .. not to block additional instances, and instead configure whatever it is that this script does to permit multiple servers to run at the same time on different queue entries. "Single-queue-single-server" is never as good as "single-queue-multi-server". Since you haven't included code in your question, I have no way to know whether this approach would be useful for you, but here's some explanatory meta bash:
#!/usr/bin/env bash
workdir=/var/tmp # Set a better $workdir than this.
a=( $(get_list_of_queue_ids) ) # A command? A function? Up to you.
for qid in "${a[@]}"; do
# Set a "lock" for this item .. or don't, and move on.
if ! ln -s "pid=$$" $workdir/$qid.working; then
continue
fi
# Do your stuff with just this $qid.
...
# And finally, clean up after ourselves
remove_qid_from_queue $qid
rm $workdir/$qid.working
done
The effect of this is to transfer the idea of "one at a time" from the handler to the data. If you have a multi-CPU system, you probably have enough capacity to handle multiple queue entries at the same time.
ghoti's answer shows some helpful techniques, if modifying the script is an option.
Generally speaking, for an existing script:
Unless you know with certainty that:
the script has no side effects other than to output to the terminal or to write to files with shell-instance specific names (such as incorporating $$, the current shell's PID, into filenames) or some other instance-specific location,
OR that the script was explicitly designed for parallel execution,
I would assume that you cannot safely run multiple copies of the script simultaneously.
It is not reasonable to expect the average shell script to be designed for concurrent use.
From the viewpoint of the operating system, several processes may of course execute the same program in parallel; there is no need to worry about this.
However, it is conceivable that a (careless) programmer wrote the program in such a way that it produces incorrect results when two copies are executed in parallel.