sqlplus within a loop - unix - oracle

Is there a way to send multiple sqlplus commands within a loop, waiting for each one to finish successfully before the next one starts?
Here is a sample of my code. I added the sleep 15 because the functions I'm executing take about 10-20s to run. I want to get rid of that 15-second constant and make the commands run one after the other.
if [ "$#" -eq 1 ]; then
checkUser "$1"
while read line; do
sqlplus $user/$pass@$server $line
sleep 15
done < "$wrapperList"
fi

The instructions in a while loop are executed in sequence. It would be equivalent to chaining them with ; like this:
sqlplus $user/$pass@$server $line1
sqlplus $user/$pass@$server $line2
So you don't need the sleep 15 here: the sqlplus commands are not called in parallel. The way you wrote it already runs them one after the other.
Note: it is even better to stop if the first line did not return correctly, using && to say: run only if the previous return code is 0
sqlplus $user/$pass@$server $line1 &&\
sqlplus $user/$pass@$server $line2
To have this in the while loop:
checkUser "$1"
while read line; do
sqlplus $user/$pass@$server $line
RET_CODE=$? # check return code, and break if not ok.
if [ ${RET_CODE} != 0 ]; then
echo "aborted." ; break
fi
done < "$wrapperList"
On the other hand, when you want to run in parallel, the syntax is different, as shown here: Unix shell script run SQL scripts in parallel
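For completeness, a minimal sketch of the parallel variant, reusing the same $user, $pass, $server and $wrapperList variables from the question: each sqlplus is sent to the background and a single wait blocks until they have all finished (stdin is redirected from /dev/null so sqlplus cannot swallow lines of the list):
checkUser "$1"
while read line; do
sqlplus $user/$pass@$server $line </dev/null &
done < "$wrapperList"
wait # returns once every background sqlplus has exited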

Related

How to wait in bash till a shell script is finished?

Right now I'm using this script for a program:
export FREESURFER_HOME=$HOME/freesurfer
source $FREESURFER_HOME/SetUpFreeSurfer.sh
cd /home/ubuntu/fastsurfer
datadir=/home/ubuntu/moya/data
fastsurferdir=/home/ubuntu/moya/output
mkdir -p $fastsurferdir/logs # create log dir for storing nohup output log (optional)
while read p ; do
echo $p
nohup ./run_fastsurfer.sh --t1 $datadir/$p/orig.nii \
--parallel --threads 16 --sid $p --sd $fastsurferdir > $fastsurferdir/logs/out-${p}.log &
sleep 3600s
done < /home/ubuntu/moya/data/subjects-list.txt
Instead of using sleep 3600s (the program needs around an hour per subject), I'd like to wait until all processes (several PIDs) are finished.
If this is the right way, can you tell me how to do that?
BR Alex
wait will wait for all background processes to finish (see help wait). So all you need is to run wait after creating all of the background processes.
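A minimal sketch of what that looks like with the loop from the question (same variables and paths): drop the sleep, keep launching each run_fastsurfer.sh in the background, and call wait once after the loop.
while read p ; do
echo $p
nohup ./run_fastsurfer.sh --t1 $datadir/$p/orig.nii \
--parallel --threads 16 --sid $p --sd $fastsurferdir > $fastsurferdir/logs/out-${p}.log &
done < /home/ubuntu/moya/data/subjects-list.txt
wait # blocks until every backgrounded run_fastsurfer.sh has exited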
This may be more than what you are asking for, but I figured I would provide a method for controlling the number of threads you want to have running at once. I find that I always want to limit that number for various reasons.
Explanation
The following limits concurrent threads to max_threads running at one time. It also uses a main design pattern: main sets things up and a function run_jobs handles the launching and waiting. The subjects list is read into an array, which is traversed as threads are launched. Whenever fewer than max_threads children are running, the next subject is started; otherwise the loop sleeps and checks again. Once every subject has been launched, it waits for the remaining jobs to finish. If you want something simpler, I can do that as well.
#!/usr/bin/env bash
export FREESURFER_HOME=$HOME/freesurfer
source $FREESURFER_HOME/SetUpFreeSurfer.sh
typeset max_threads=4
typeset subjects_list="/home/ubuntu/moya/data/subjects-list.txt"
typeset subjectsArray
run_jobs() {
local child="$$" # PID of this script, used to count its children
local num_children=0
local i=0
# keep going until every subject in the array has been launched
while [ $i -lt ${#subjectsArray[@]} ] ; do
num_children=$(ps --no-headers -o pid --ppid=$child | wc -w) ; ((num_children-=1)) # minus the command-substitution subshell
echo "Children: $num_children"
if [[ ${num_children} -lt ${max_threads} ]] ;then
# RUN COMMAND HERE &
./run_fastsurfer.sh --t1 $datadir/${subjectsArray[$i]}/orig.nii \
--parallel --threads 16 --sid ${subjectsArray[$i]} --sd $fastsurferdir &
((i+=1))
fi
sleep 10
done
wait # wait for the jobs that are still running
}
main() {
cd /home/ubuntu/fastsurfer
datadir=/home/ubuntu/moya/data
fastsurferdir=/home/ubuntu/moya/output
mkdir -p $fastsurferdir/logs # create log dir for storing nohup output log (optional)
mapfile -t subjectsArray < ${subjects_list}
run_jobs
}
main
Note: I did not run this code since you have not provided enough information to actually do so.

Bash script if else execute once cronjob every minute

I have a bash script in crontab that runs every minute.
In this bash script I have a SQL query that checks a number.
If the number is greater than a predefined number, I want to move and replace files.
This works absolutely fine; the problem is that since the script runs every minute via crontab, the next run overwrites the file again.
Is there any logic I can add so that this code only runs once, while still letting cron run every minute?
Here is the code:
#!/bin/bash
count=`mysql -B -u root -ppassword -e 'select count(*) from column' table | tail -n +2`
allowed="500"
if [ "$count" -ge "$allowed" ]
then
mv /netboot/var/www/html /usr/html/
mv /netboot/var/www/back /netboot/var/www/html
echo "Not Allowed - Disable Code goes here"
else
echo "all is good for now $count"
fi
exit 0
Your help is appreciated.
I have managed to fix this by creating another if statement within the parent if.
See below.
#!/bin/bash
count=`mysql -B -u root -ppassword -e 'select count(*) from column' table | tail -n +2`
allowed="500"
if [ "$count" -ge "$allowed" ]
then
if [ -d /usr/html ] # html folder exists in /usr/
then
mv /netboot/var/www/html /usr/html/
mv /netboot/var/www/back /netboot/var/www/html
else
echo " "
fi
echo "Not Allowed - Disable Code goes here"
else
echo "all is good for now $count"
fi
exit 0
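Not part of the original answer, but another common way to make a cron job like this run its expensive branch only once is a marker file: the branch runs only if the marker does not exist yet and creates it when done. A rough sketch (the marker path /var/tmp/html_moved is an arbitrary choice):
if [ "$count" -ge "$allowed" ] && [ ! -f /var/tmp/html_moved ]
then
mv /netboot/var/www/html /usr/html/
mv /netboot/var/www/back /netboot/var/www/html
touch /var/tmp/html_moved # marker file: prevents this branch from running on later invocations
fi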

"allowed" operations in bash read while loop

I have a file text.txt which contains two lines.
first line
second line
I am trying to loop over it in bash using the following loop:
while read -r LINE || [[ -n "$LINE" ]]; do
# sed -i 'some command' somefile
echo "echo something"
echo "$LINE"
sh call_other_script.sh
if ! sh some_complex_script.sh ; then
echo "operation failed"
fi
done <file.txt
When some_complex_script.sh is called, only the first line is processed; when it is commented out, both lines are processed.
some_complex_script.sh does all kind of stuff, like starting processes, sqlplus, starting WildFly etc.
./bin/call_some_script.sh | tee $SOME_LOGFILE &
wait
...
sqlplus $ORACLE_USER/$ORACLE_PWD@$DB<<EOF
whenever sqlerror exit 1;
whenever oserror exit 2;
INSERT INTO TABLE ....
COMMIT;
quit;
EOF
...
nohup $SERVER_DIR/bin/standalone.sh -c $WILDFLY_PROFILE -u 230.0.0.4 >/dev/null 2>&1 &
My question is whether there are operations that should not be called from some_complex_script.sh inside such a loop (it may well take 10 minutes to finish; is this a good idea at all?) and that may break the loop.
The script is called using Jenkins and the Publish over SSH Plugin. When some_complex_script.sh is called on its own, there are no problems.
You should close or redirect stdin for the other commands you run, to stop them reading from the file, e.g.:
sh call_other_script.sh </dev/null
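Applied to the loop from the question, that would look roughly like this (a sketch; stdin is redirected for every command that might read it, so nothing can consume the lines meant for read):
while read -r LINE || [[ -n "$LINE" ]]; do
echo "$LINE"
sh call_other_script.sh </dev/null
if ! sh some_complex_script.sh </dev/null ; then
echo "operation failed"
fi
done <file.txt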

Bash script: spawning multiple processes issues

So I am writing a script that calls a process 365 times, and the calls should run in 10 batches. This is what I wrote, but there are multiple issues:
1. The log message is not getting written to the log file; I see the error message in the err file.
2. There is this "Command not found" error I keep getting from the script for the process line.
3. Even if the command doesn't succeed, it doesn't print FAIL but prints success.
#!/bin/bash
set -m
FAIL=0
for i in {1..10}
do
waitPIDS=()
j=$i
while [ $j -lt 366 ]; do
exec 1>logfile
exec 2>errorfile
`process $j &`
waitPIDS[${#waitPIDS[@]}]=$!
j=$[$j+1]
done
for jpid in "${waitPIDS[@]}"
do
echo $jpid
wait $jpid
if [[ $? != 0 ]] ; then
echo "fail"
else
echo "success"
fi
done
done
What is wrong with it?
thanks!
At the very least, this line:
`process $j &`
Shouldn't have any backticks in it. You probably just want:
process $j &
Besides that, you're overwriting your log files instead of appending to them; is that intended?
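A sketch of the inner loop with both points applied: no backticks around the process call (process stands for whatever command your script actually runs), and the log redirections switched to >> so output is appended rather than truncated on every pass. This only addresses the two issues above, not the batching logic:
waitPIDS=()
j=$i
exec 1>>logfile # append instead of overwriting
exec 2>>errorfile
while [ $j -lt 366 ]; do
process $j & # run the command directly in the background, no backticks
waitPIDS[${#waitPIDS[@]}]=$!
j=$[$j+1]
done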

Process Scheduling

Let's say I have 10 scripts that I want to run regularly as cron jobs. However, I don't want all of them to run at the same time; I want only 2 of them running simultaneously.
One solution I'm thinking of is to create two scripts, put 5 of the jobs in each, and add them as separate entries in the crontab. However, that solution seems very ad hoc.
Is there an existing unix tool to perform the task I mentioned above?
The jobs builtin can tell you how many child processes are running. Some simple shell scripting can accomplish this task:
MAX_JOBS=2
launch_when_not_busy()
{
while [ $(jobs | wc -l) -ge $MAX_JOBS ]
do
# at least $MAX_JOBS are still running.
sleep 1
done
"$#" &
}
launch_when_not_busy bash job1.sh --args
launch_when_not_busy bash jobTwo.sh
launch_when_not_busy bash job_three.sh
...
wait
NOTE: As pointed out by mobrule, my original answer will not work because the wait builtin with no arguments waits for ALL children to finish. Hence the following 'parallelexec' script, which avoids polling at the cost of more child processes:
#!/bin/bash
N="$1"
I=0
{
if [[ "$#" -le 1 ]]; then
cat
else
while [[ "$#" -gt 1 ]]; do
echo "$2"
set -- "$1" "${#:3}"
done
fi
} | {
d=$(mktemp -d /tmp/fifo.XXXXXXXX)
mkfifo "$d"/fifo
exec 3<>"$d"/fifo
rm -rf "$d"
while [[ "$I" -lt "$N" ]] && read C; do
($C; echo >&3) &
let I++
done
while read C; do
read -u 3
($C; echo >&3) &
done
}
The first argument is the number of parallel jobs. If there are more arguments beyond that, each one is run as a job; otherwise, all commands to run are read from stdin line by line.
I use a named pipe (which is sent to oblivion as soon as the shell opens it) as a synchronization method. Since only single bytes are written there are no race condition issues that could complicate things.
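Usage would look something like this (assuming the script above is saved as parallelexec and made executable; the job scripts are placeholders): the first argument is the concurrency limit, and the commands come either as further arguments or on stdin.
./parallelexec 2 "bash job1.sh" "bash job2.sh" "bash job3.sh"
printf '%s\n' "bash job1.sh" "bash job2.sh" "bash job3.sh" | ./parallelexec 2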
GNU Parallel is designed for this kind of task:
sem -j2 do_stuff
sem -j2 do_other_stuff
sem -j2 do_third_stuff
do_third_stuff will only be run when either do_stuff or do_other_stuff has finished.
Watch the intro videos to learn more:
http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
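For the original use case (10 scripts, at most 2 running at a time), a single crontab entry could also drive GNU Parallel directly. A sketch, assuming the scripts live in /opt/jobs as job1.sh .. job10.sh (adjust the path and names to yours):
parallel -j2 bash ::: /opt/jobs/job*.sh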
