Similar to these questions (1) (2), I want to run a command in a background process, carry on processing, and then later use that command's return value.
One function in my script takes particularly long, so I would like to start it before the rest of the setup, so that there is less of a delay when its value is needed; but currently the value doesn't get captured.
What I've tried:
if [ $LAZY_LOAD -eq 0 ]; then
echo "INFO - Getting least loaded server in background. Can take up to 30s."
local leastLoaded=$( getLeastLoaded ) &
fi
# Other setup stuff that doesn't use leastLoaded...
# setup setup setup....
if [ $LAZY_LOAD -eq 0 ]; then
echo "INFO - Waiting for least loaded server to be retrieved before continuing"
wait
fi
echo "INFO - Doing stuff with $leastLoaded."
doThingWithLeastLoaded $leastLoaded
getLeastLoaded definitely works without the &, so I'm sure this is a concurrency issue.
Thanks!
According to the bash manual:
If a command is terminated by the control operator &, the shell executes the command in the background in a subshell.
So your local command would not affect the current shell.
I'd suggest something like this:
do-something > /some/file &
... ...
wait
var=$( cat /some/file )
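Applied to your script, that could look something like this (a sketch; the temp file path is just an example):
if [ "$LAZY_LOAD" -eq 0 ]; then
    echo "INFO - Getting least loaded server in background. Can take up to 30s."
    getLeastLoaded > "/tmp/leastLoaded.$$" &   # capture stdout in a file; $$ keeps the name unique per run
fi
# Other setup stuff that doesn't use leastLoaded...
if [ "$LAZY_LOAD" -eq 0 ]; then
    echo "INFO - Waiting for least loaded server to be retrieved before continuing"
    wait                                       # block until the background job finishes
    leastLoaded=$(cat "/tmp/leastLoaded.$$")
    rm -f "/tmp/leastLoaded.$$"
fi
doThingWithLeastLoaded "$leastLoaded"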
I'm new to shell scripting, and I'm trying to run a Java program in parallel, with the port specified as input.
For example, ./Test.sh 8080 8081 --> the desired result would be to run the program twice, once for each port. I think "&" is what makes it run in PARALLEL?
Help/guidance would be appreciated.
#!/bin/bash
PORT = $1;
if [ $# -eq 0 ]
then
echo "No arguments supplied"
fi
for i in $PORT
do
java -DpropertySource=~/Downloads/app.properties -jar APP.jar
-Dserver.port="$PORT" &
done
There are a few bugs here:
1) $1 will only contain the first parameter. You will need $* to contain more than one. (And given that you want the variable to contain multiple ports, it would then be more helpful to call the variable PORTS.)
2) You cannot have the whitespace around the = in a variable assignment in bash.
3) You are looping over i but not using that variable inside the loop. Where you have -Dserver.port="$PORT" you should instead use your loop variable i.
4) You are missing a line continuation character \ at the end of the java ... line (ensure that there is no whitespace after it).
5) The command separator ; at the end of the first line is redundant (although it does no actual harm).
6) Where you are testing for wrong usage, the script will issue the warning but carry on regardless. You need to put an exit statement there. It is good practice to give a non-zero exit value in the event of a failure, so exit 1 is suggested here.
7) Anything placed after -jar APP.jar is passed to the application as a program argument, not to the JVM, so the -Dserver.port option must come before -jar for Java to treat it as a system property.
Putting these together:
#!/bin/bash
PORTS=$*
if [ $# -eq 0 ]
then
echo "No arguments supplied"
exit 1
fi
for i in $PORTS
do
java -DpropertySource=~/Downloads/app.properties -Dserver.port="$i" \
-jar APP.jar &
done
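For example:
./Test.sh 8080 8081
launches two instances of the application, one per port, both in the background.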
Regarding the &: it launches the command in the background, so that execution of the script continues (including eventually reaching the end and exiting) while the command that was launched may still be running. So yes, your java instances listening on the different ports will then be running in parallel.
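If you also want the script itself to stay alive until every instance has finished, a plain wait at the end does that (an optional addition, not something the script above requires):
wait   # block until all background java processes have exited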
I am writing a script that executes around 10 back-end processes in sequence, each one running only if the previous process completed without any errors.
Now assume the scenario in which, say, the 5th process failed and the script exited. I want to code it in such a way that the next time the user runs it (after fixing the error that made the script exit last time), he can run from the 5th process onwards and not again from the 1st.
To be more specific, assume the following is the script:
Script Starts
Process1
if [ $? -eq 0 ]; then
Process2
if [ $? -eq 0 ]; then
Process3
if [ $? -eq 0 ]; then
..
..
..
..
if [ $? -eq 0 ]; then
Process10
else
exit
So the script will exit any time one of the processes fails to complete with status 0. Again: if process5 fails, and the user corrects the problem and restarts the script, the script should start with process5 again, not process1; or at least the user should be given the option to resume the script or start it over from the beginning, i.e. from process1.
What are the possible ways to code this kind of script? Please also bear in mind that I am not allowed to use a temporary DB where I could store the status of each process.
I need to code this in sh (shell script) on Unix.
A simple solution would be to write stamp files:
#!/bin/sh
set -e # Automatically abort if any simple command fails
if ! test -f cmd1-stamp; then cmd1; fi
touch cmd1-stamp
if ! test -f cmd2-stamp; then cmd2; fi
touch cmd2-stamp
When the script executes, if cmd1-stamp exists, cmd1 is not executed; otherwise cmd1 is executed, and set -e makes the script abort if it fails. Note that it is very tempting to write test -f cmd1-stamp || cmd1, and this seems to work (in bash), but the shell specs state that the shell shall abort if a failing simple command is not part of an AND or OR list, and I suspect this is (yet another) instance of bash not conforming to the spec. (Although the spec doesn't actually seem to say that the shell shall not abort if the failing command is part of an AND or OR list.)
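Scaled up to the ten processes from the question, a small helper keeps the pattern readable (a sketch; run_step and the step names are illustrative, not part of the original answer):
#!/bin/sh
set -e  # abort as soon as any command fails

# run the given command unless its stamp file already exists;
# the stamp is written only after the command has succeeded
run_step() {
    stamp="$1.stamp"
    shift
    if ! test -f "$stamp"; then
        "$@"
        touch "$stamp"
    fi
}

run_step step1 Process1
run_step step2 Process2
# ... and so on, up to ...
run_step step10 Process10
To restart from scratch, the user simply deletes the stamp files.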
Here is my bash code:
(
flock -n -e 200 || (echo "This script is currently being run" && exit 1)
sleep 10
...Call some functions which is written in another script...
sleep 5
) 200>/tmp/blah.lockfile
I'm running the script from two shells in succession, and as long as the first one is at "sleep 5" all is fine, meaning the second one doesn't start. But when the first one moves on to the functions from the other script (the other file), the second run starts to execute.
So I have two questions here:
What should I do to prevent this script and all its "children" from running while the script OR one of its "children" is still running?
(I couldn't find a more appropriate expression for a script run from another script than "child", sorry for that :) ).
According to the man page, -n causes the process to exit when it fails to gain the lock, but as far as I can see it just waits until it can run. What am I missing?
Your problem may be fairly mundane. Namely,
false || ( exit 1 )
does not cause the script to exit. Rather, the exit instructs the subshell to exit. So change your first line to:
flock -n -e 200 || { echo "This script is currently being run"; exit 1; } >&2
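The difference is easy to demonstrate (a minimal sketch):
# subshell: exit only leaves the ( ... ); the script carries on
false || ( echo "in subshell"; exit 1 )
echo "this line is still reached"

# brace group: runs in the current shell, so exit ends the script
false || { echo "in current shell"; exit 1; }
echo "this line is never reached"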
I use the SSH Secure Shell client to connect to a server and run my scripts.
I want to stop a script on some conditions, but when I use exit, not only does the script stop, the client also disconnects from the server! Here is the code:
if [[ `echo $#` -eq 0 ]]; then
echo "Missing argument- must to get a friend list";
exit
fi
for user in $*; do
if [[ !(-f `echo ${user}.user`) ]]; then
echo "The user name ${user} doesn't exist.";
exit
fi
done
Why is this happening?
You used source to run the script; this runs it in the current shell. That means that exit terminates the current shell, and with it the ssh session.
Replace source with bash and it should work. Or, better, put
#!/bin/bash
at the top of the file and make it executable.
exit exits the current shell - if you've started a script by running it directly, this exits the shell that the script is running in.
return returns from a function or sourced file (TY Dennis Williamson) - same thing, but it doesn't terminate your current shell. Since you are sourcing your script, this is probably what you want.
break exits a loop - similar, but it can be used anywhere within a loop to stop processing more items.
If you are running the script in the current shell (i.e. you sourced it), exit will obviously exit that shell and disconnect you. Try running it in a new shell instead (put ./ before the script name rather than sourcing it with .), or else use return instead of exit.
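If the script must also work when sourced, a common idiom is to try return first and fall back to exit (a sketch based on the question's argument check):
if [[ $# -eq 0 ]]; then
    echo "Missing argument - must get a friend list" >&2
    # 'return' works when the file is sourced; when it is executed directly,
    # return fails, the error message is silenced, and 'exit' runs instead
    return 1 2>/dev/null || exit 1
fi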
I would like to add new functionality to the bash shell. I need to have a queue for executions.
What is the easy way to add new functionality to the bash shell keeping all native functions?
I would like to process the command line, then let bash execute the commands. For users it should be transparent.
Thanks Arman
EDIT
I just discovered prll.sourceforge.net; it does exactly what I need.
It's easier than it seems:
#!/bin/sh
yourfunctiona(){ ...; }
...
yourfunctionz(){ ...; }
. /path/to/file/with/more/functions
while read -r COMMANDS; do
eval "$COMMANDS"
done
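Since the loop reads commands from standard input, you can also feed it a prepared queue (a usage sketch; myshell.sh and queue.txt are hypothetical names):
./myshell.sh < queue.txt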
You can use read -p if you need a prompt, or -t if you want it to time out (see the sketch just below). If you wanted, you could even use your favorite dialog program in place of read and pipe the output to a tailbox, as in the Xdialog example that follows:
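A minimal prompting variant using bash's read options (a sketch; the 60-second timeout is just an example):
while read -r -p "cmd> " -t 60 COMMANDS; do
eval "$COMMANDS"
done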
touch /tmp/mycmdline
Xdialog --tailbox /tmp/mycmdline 0 0 &
COMMANDS="echo "
while ([ "$COMMANDS" != "" ]); do
COMMANDS=`Xdialog --stdout --inputbox "Text here" 0 0`
eval "$COMMANDS"
done >>/tmp/mycmdline &
To execute commands in threads, you can use the following in place of eval "$COMMANDS":
#this will need to be before the loop
NUMCORES=$(awk '/cpu cores/{sum += $4}END{print sum}' /proc/cpuinfo)

for i in $(seq 1 "$NUMCORES"); do  #note: {1..$NUMCORES} would not work - brace expansion happens before variable expansion
  if [ -n "${threadarray[$i]}" ] && [ -d "/proc/${threadarray[$i]}" ]; then
    #this core already has a thread
    #note: each process gets a directory named /proc/<its_pid> - hacky, but works
    if [ "$i" -eq "$NUMCORES" ]; then
      #every slot is busy: wait for the last one to finish (see comments below)
      wait "${threadarray[$i]}"
    else
      continue
    fi
  fi
  #this core is free (or was just freed)
  $COMMAND &
  threadarray[$i]=$!
  break
done
Then there is the case where every slot is full. You can either put the whole thing in a while loop and add continues and breaks, or you can pick a slot to wait for (probably the last) and wait for it, which is what the code above does.
To wait for a single thread to complete, use:
wait "${threadarray[$i]}"
To wait for all threads to complete, use:
wait
#I ended up using this to keep my load from getting too high for too long
Another note: you may find that some commands don't like to be threaded; if so, you can put the whole thing in a case statement.
I'll try to do some cleanup on this soon to put all of the little blocks together (sorry, I'm cobbling this together from random notes that I used when I implemented this exact thing but can't seem to find now).