"set -m" command in shell script - bash

I was going through a shell script where set -m was used.
My understanding is that set is used to set the positional args.
For example: set SO community is very helpful. Now, if I do echo $1, I should get SO, and so on for $2, $3...
After checking the command with the help flag, I got: "-m Job control is enabled."
My question is, what is the purpose of set -m in the following code?
set -m
(
    current_hash="some_ha54_one"
    new_hash=$(cat file.txt)
    if [ "$current_hash" != "$new_hash" ]; then
        pip install -r requirement.txt
    fi
    tmp="temp variable"
    export tmp
    bash some_bash_file.sh &
    wait
    bash some_other_bash_file.sh &
)
I understand (to the best of my knowledge) what is going on inside ( ), but what is the use of set -m?

"Job control" enables features like bg and fg; signal-handling and file-descriptor routing changes intended for human operators who might use them to bring background tasks into the foreground to provide them with input; and the ability to refer to background tasks by job number instead of PID. The script segment you showed doesn't use these features, so the set -m call is presumably pointless.
These features are meant for human users, not scripts, so in scripts they're off by default. In general, code that attempts to use them in scripts is buggy and should be replaced with code that operates by PID. As an example, here is code that runs two scripts in parallel and then collects the exit status of each when they're finished, all without needing job control:
bash some_bash_file & some_pid=$!
bash some_other_file & some_other_pid=$!
wait "$some_pid"; some_rc=$?
wait "$some_other_pid"; some_other_rc=$?

Related

Bash call another script depending on "read" input

I am trying to learn to write simple bash scripts to do things on my computer, just because I find it interesting (and I can also think of uses down the track).
I am trying to write a script that assigns variables that will call another script depending on what I type. I have managed to call another script from a variable using the below:
#!/bin/bash
echo "hello, please choose your next step"
VBA="/Users/zap/VBA.sh"
$VBA
but now I want to be able to call one script or another depending on user input. I have tried to write the script below, so that if I type VBA at the "read" prompt it runs one script, and if I type VBB it runs a different one. But this does not work; how do I need to change the syntax so that the input runs script VBA or VBB?
#!/bin/bash
echo "hello, please choose your next step"
VBA="/Users/zap/VBA.sh"
VBB="/Users/zap/VBB.sh"
read IPT
NXT="$"$IPT""
echo $NXT
If I can make this work I will turn this into a simple script that runs sudo shutdown and then asks me if I want to shut down immediately (I think -h) or restart (I think -r).
There are a few options.
1) You could use an if statement and explicitly run the command you want:
echo "Do you want to reboot (Y/N)?"
read IPT
if [ "$IPT" = "Y" ]; then
    shutdown -r now
else
    shutdown -h now
fi
2) You can try to force the user to type the command. This could be dangerous if you use sudo,
since the user could type something you don't want to run.
echo "do you want to run VBA or VBB?"
read IPT
"/Users/zap/${IPT}.sh"

whether a shell script can be executed if another instance of the same script is already running

I have a shell script that usually takes nearly 10 minutes for a single run, and I need to know: if another request to run the script arrives while an instance is already running, does the new request have to wait for the existing instance to complete, or will a new instance be started?
I need a new instance to be started whenever a request arrives for the same script.
How can I do that?
The shell script is a polling script that looks for a file in a directory and executes it. The execution takes nearly 10 minutes or more, but if a new file arrives during execution, it also has to be executed simultaneously.
The shell script is below; how do I modify it to execute multiple requests?
#!/bin/bash
while [ 1 ]; do
    newfiles=`find /afs/rch/usr8/fsptools/WWW/cgi-bin/upload/ -newer /afs/rch/usr$
    touch /afs/rch/usr8/fsptools/WWW/cgi-bin/upload/.my_marker
    if [ -n "$newfiles" ]; then
        echo "found files $newfiles"
        name2=`ls /afs/rch/usr8/fsptools/WWW/cgi-bin/upload/ -Art |tail -n 2 |head $
        echo " $name2 "
        mkdir -p -m 0755 /afs/rch/usr8/fsptools/WWW/dumpspace/$name2
        name1="/afs/rch/usr8/fsptools/WWW/dumpspace/fipsdumputils/fipsdumputil -e -$
        $name1
        touch /afs/rch/usr8/fsptools/WWW/dumpspace/tempfiles/$name2
    fi
    sleep 5
done
When writing scripts like the one you describe, I take one of two approaches.
First, you can use a pid file to indicate that a second copy should not run. For example:
#!/bin/sh
pidfile=/var/run/${0##*/}.pid
# remove pid if we exit normally or are terminated
trap 'rm -f "$pidfile"' 0 1 3 15
# Write the pid as a symlink
if ! ln -s "pid=$$" "$pidfile"; then
    echo "Already running. Exiting." >&2
    exit 0
fi
# Do your stuff
I like using symlinks to store pid because writing a symlink is an atomic operation; two processes can't conflict with each other. You don't even need to check for the existence of the pid symlink, because a failure of ln clearly indicates that a pid cannot be set. That's either a permission or path problem, or it's due to the symlink already being there.
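To see the atomicity in action, here is a hypothetical two-line demonstration (/tmp/demo.pid is an illustrative path, not from the original answer):
ln -s "pid=$$" /tmp/demo.pid && echo "lock acquired"
ln -s "pid=$$" /tmp/demo.pid 2>/dev/null || echo "second attempt fails: link already exists"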
Second option is to make it possible .. nay, preferable .. not to block additional instances, and instead configure whatever it is that this script does to permit multiple servers to run at the same time on different queue entries. "Single-queue-single-server" is never as good as "single-queue-multi-server". Since you haven't included code in your question, I have no way to know whether this approach would be useful for you, but here's some explanatory meta bash:
#!/usr/bin/env bash
workdir=/var/tmp                  # Set a better $workdir than this.
a=( $(get_list_of_queue_ids) )    # A command? A function? Up to you.
for qid in "${a[@]}"; do
    # Set a "lock" for this item .. or don't, and move on.
    if ! ln -s "pid=$$" "$workdir/$qid.working"; then
        continue
    fi
    # Do your stuff with just this $qid.
    ...
    # And finally, clean up after ourselves
    remove_qid_from_queue "$qid"
    rm "$workdir/$qid.working"
done
The effect of this is to transfer the idea of "one at a time" from the handler to the data. If you have a multi-CPU system, you probably have enough capacity to handle multiple queue entries at the same time.
ghoti's answer shows some helpful techniques, if modifying the script is an option.
Generally speaking, for an existing script:
Unless you know with certainty that either
the script has no side effects other than output to the terminal or writes to files with shell-instance-specific names (such as filenames incorporating $$, the current shell's PID) or to some other instance-specific location,
or the script was explicitly designed for parallel execution,
I would assume that you cannot safely run multiple copies of the script simultaneously.
It is not reasonable to expect the average shell script to be designed for concurrent use.
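If concurrent runs cannot be ruled safe, one common way to serialize invocations without modifying the script itself (a sketch, not from the original answers; assumes the Linux util-linux flock utility and illustrative paths) is:
# Each caller blocks here until it holds the lock, so at most one copy
# of the script runs at a time; the lock is released when the script exits.
flock /tmp/myscript.lock /path/to/myscript.sh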
From the viewpoint of the operating system, several processes may of course execute the same program in parallel; there is no need to worry about that.
However, it is conceivable that a (careless) programmer wrote the program in such a way that it produces incorrect results when two copies are executed in parallel.

How to increment a global variable within another bash script

I want to have a bash script with a global variable that can be incremented from other bash scripts.
Example:
I have a script like the following:
#! /bin/bash
export Counter=0
for SCRIPT in /Users/<user>/Desktop/*sh
do
    "$SCRIPT"
done
echo $Counter
That script will call all the other bash scripts in a folder and those scripts will have something like the following:
if [ "$Output" = "$Check" ]
then
echo "OK"
((Counter++))
I want it to increment the $Counter variable whenever the output equals "OK", and then pass that value back to the initial bash script so I can keep a running total at the end.
Any idea on how to go about doing that?
Environment variables propagate in one direction only -- from parent to child. Thus, a child process cannot change the value of an environment variable set in its parent.
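A tiny demonstration of that one-way propagation (hypothetical, for illustration only):
export X=1
bash -c 'X=2; echo "child sees $X"'   # the child changes only its own copy
echo "parent still sees $X"           # prints 1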
What you can do is use the filesystem:
export counter_file=$(mktemp "$HOME/.counter.XXXXXX")
for script in ~user/Desktop/*sh; do "$script"; done
...and, in the individual script:
counter_curr=$(< "$counter_file" )
(( ++counter_curr ))
printf '%s\n' "$counter_curr" >"$counter_file"
This isn't currently concurrency-safe, but your parent script as currently written will never call more than one child at a time.
An even easier approach, assuming that the value you're tracking remains relatively small, is to use the file's size as a proxy for the counter's value. To do this, incrementing the counter is as simple as this:
printf '\n' >>"$counter_file"
...and checking its value in O(1) time -- without needing to open the file and read its content -- is as simple as checking the file's size; with BSD/macOS stat:
counter=$(stat -f %z "$counter_file")
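On GNU/Linux the equivalent uses GNU coreutils stat's -c option with the %s (size in bytes) format (an addition for completeness, not in the original answer):
counter=$(stat -c %s "$counter_file")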
Note that locking may be required for this to be concurrency-safe if using a filesystem such as NFS which does not correctly implement O_APPEND; see Norman Gray's answer (to which this owes inspiration) for a working implementation.
You could source the other scripts, which means they're not running in a sub-process but "inline" in the calling script like this:
#! /bin/bash
export counter=0
for script in /Users/<user>/Desktop/*sh
do
source "$script"
done
echo $counter
But as pointed out in the comments, I'd only advise using this approach if you control the called scripts yourself. If they, for example, call exit or have clashing variable names, bad things could happen.
As described, you can't do this, since there isn't anything which corresponds to a ‘global variable’ for shell scripts.
As the comment suggests, you'll have to use the filesystem to communicate between scripts.
One simple/crude way of doing what you describe would be to have each cooperating script append a line to a file; the ‘global count’ is then the number of lines in this file:
#! /bin/sh -
echo ping >>/tmp/scriptcountfile
then wc -l /tmp/scriptcountfile gives the number of times that has happened. Of course, there's a potential race condition there, so something like the following would sequence those accesses:
#! /bin/sh -
(
    flock 9            # block until this process holds the lock
    echo 'do stuff...'
    echo ping >>/tmp/stampfile
) 9>/tmp/lockfile
(the flock command is available on Linux, but isn't portable).
Of course, then you can start to do fancier things by having scripts send stuff through pipes and sockets, but that's going somewhat over the top.

Spawning background process under different user in bash

I know I can run this command to spawn a background process and get the PID:
PID=`$SCRIPT > /dev/null 2>&1 & echo $!`
and to run a command under different user:
su - $USER -c "$COMMAND"
I don't want the script to run as root, and I can't quite figure out how to combine the two and get the PID of the spawned process.
Thanks!
I think you want the runuser command. General syntax:
runuser -l userNameHere -c 'command'
I suspect that if you set your $SCRIPT variable to the above (with appropriate changes), your first command will do what you want.
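Concretely, a hedged sketch of that combination (assumes runuser(8) is available and that you run it as root; $USER and $SCRIPT are as in the question):
# The backslash keeps $! from being expanded by the outer shell, so it is
# evaluated inside the shell runuser starts, after the script is backgrounded.
PID=$(runuser -l "$USER" -c "$SCRIPT > /dev/null 2>&1 & echo \$!")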
To elaborate, see the following: stackoverflow.com/questions/9119885/…
See particularly the following quote from Chris Dodd:
Unfortunately there's no easy way to do this prior to bash version 4, when $BASHPID was introduced. One thing you can do is to write a tiny program that prints its parent PID:...
If you have bash 4 and BASHPID, see $$ in a script vs $$ in a subshell
I don't have version 4, so I can't provide an example of its usage.
Or write a tiny C program which execs its arguments, and make it setuid to USER.
Or even make a setuid shell script (not generally recommended). Hopefully USER is fixed; if not, get the source for runuser, since this is essentially what runuser (not a POSIX command) does.
PID=`su - $USER -c "$SCRIPT > /dev/null 2>&1 & echo $!"`
The problems with your use of su (above) include:
the $! is executed in the context of su's -c sub-shell, not in the current shell where PID is assigned,
you're requesting that your SCRIPT be run as a login shell, so you don't even know if USER's shell supports $!,
you have no control over the parent-child process chain that su (and the user's shell) create.
IOW, when you use
PID=`$SCRIPT > /dev/null 2>&1 & echo $!`
there's only one program involved, bash, and two (maybe three?) processes that you pretty much have complete control over. When you throw su into the mix, that changes things much more than is apparent on the surface -- bash and su support similar arguments, right?!?
For obvious reasons, su does mucho magic to protect itself and its children's environment from attacks; it doesn't even like being put in the background.
It's kind of late, but here is a two-liner that works; it seems to need to be two commands so that it doesn't wait for $SCRIPT to complete:
su "$USER" -c "$SCRIPT > $LogOrNull 2>&1 & echo \$! > /some/writeable/path"
PID="$(cat /some/writeable/path)"
/some/writeable/path will need to be writeable by $USER, and the user running these commands will need read access to it.

Using Return in bash

I want this script to print 1, 2, 3... without the use of functions: just execute two.sh, then carry on where it left off. Is that possible?
[root@server:~]# cat testing.sh
#!/bin/bash
echo "1"
exec ./two.sh
echo "3"
[root@server:~]# cat two.sh
#!/bin/bash
echo "2"
return
exec, if you give it a program name [a], will replace the current program with whatever you specify.
If you want to just run the script (in another process) and return, simply use:
./two.sh
to do that.
For this simple case, you can also execute the script in the context of the current process with:
. ./two.sh
That will not start up a new process but will have the side-effect of allowing two.sh to affect the current shell's environment. While that's not a problem for your current two.sh (since all it does is echo a line), it may be problematic for more complicated scripts (for example, those that set environment variables).
[a] Without a program name, it changes certain properties of the current program, such as:
exec >/dev/null
which simply starts sending all standard output to the bit bucket.
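To make the sourcing side effect concrete, here is a hypothetical variant of two.sh (not part of the original question): a variable it sets is visible to the caller only when it is sourced with ., and a plain return is legal only in a sourced script (or a function) -- in a directly executed script bash reports an error instead:
#!/bin/bash
# two.sh
echo "2"
STEP=2     # visible in testing.sh only when run as: . ./two.sh
return 0   # valid only when sourced; an error when run as ./two.sh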
Sure, just run:
echo "1"
./two.sh
echo "3"
