SQLPLUS prevents while loop from ending - bash

I have a function that, in a loop, calls another function that executes some SQL.
If the SQL comes from a file, only the first file is executed and the loop stops; if the SQL is embedded in a here-document, the loop runs to completion. Example:
function conciliateFile {
### NOT WORKING
sqlplus -S -L ${BBDD_CHAIN} @${HOME}/tmp/prueba.sql
### WORKING
#sqlplus -S -L ${BBDD_CHAIN} <<EOF
#set serveroutput on size 1000000
#set linesize 350
#DECLARE
#result_code VARCHAR2(4);
#result_description VARCHAR2(500);
#BEGIN
#DBMS_OUTPUT.PUT_LINE('HELLO');
#END;
#/
#exit;
#EOF
}
function mainFunction {
NUM_FILES=$(find ${SQL_PATH} -type f -name "${PATTERN}*sql" | wc -l)
COUNTER=1
find ${SQL_PATH} -type f -name "${PATTERN}*sql" | sort -t"_" -k 2 -n | while read CURRENT_TMP_SQL_FILE; do
CURRENT_TMP_SQL_OUT_FILE=${CURRENT_TMP_SQL_FILE}_out
CURRENT_TMP_SQL_ERR_FILE=${CURRENT_TMP_SQL_FILE}_err
echo "Conciliating file ${COUNTER} out of ${NUM_FILES}"
conciliateFile
COUNTER=$(expr ${COUNTER} + 1)
done
}
# Main
#split files
mainFunction
And the output would be something like this:
NOT WORKING (executing sql file)
"Conciliating file 1 out of 3"
WORKING OPTION (executing embedded sql)
"Conciliating file 1 out of 3"
"Conciliating file 2 out of 3"
"Conciliating file 3 out of 3"
Any suggestions on this issue?

Your script prueba.sql must end with an exit; otherwise sqlplus will not terminate execution and keeps reading from standard input, which inside your loop is the remaining output of find. That is why only the first file is processed.
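A related point worth noting (a hedged sketch, not part of the original answer): any command inside a `while read` loop that reads standard input will swallow the remaining input lines unless its stdin is redirected elsewhere. Here `cat` stands in for sqlplus:

```shell
# `cat` stands in for sqlplus; the real call might look like
#   sqlplus -S -L "$BBDD_CHAIN" @"$CURRENT_TMP_SQL_FILE" < /dev/null
count=0
while read -r f; do
    echo "processing $f"
    cat < /dev/null      # without < /dev/null, this would eat file2 and file3
    count=$((count + 1))
done <<'EOF'
file1
file2
file3
EOF
echo "iterations: $count"
```

With the redirection the loop runs three times; without it, `cat` consumes file2 and file3 and the loop ends after one iteration.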


How do I shorten the path shown in my fish prompt to only the current directory name?

If I'm in a deep directory, let's say
/run/media/PhoenixFlame101/Coding/Projects/react-app
the fish prompt currently looks like this:
/r/m/Ph/C/P/react-app >
How do I change it to show only the current directory? Like this:
react-app >
I am also using tide, if that makes any difference.
Edit:
Since @glenn-jackman asked, here are the outputs of type fish_prompt:
fish_prompt is a function with definition
# Defined in /home/PhoenixFlame101/.config/fish/functions/fish_prompt.fish # line 2
function fish_prompt
_tide_status=$status _tide_pipestatus=$pipestatus
if not set -e _tide_repaint
jobs -q && set -lx _tide_jobs
/usr/bin/fish -c "set _tide_pipestatus $_tide_pipestatus
set _tide_parent_dirs $_tide_parent_dirs
PATH=$(string escape "$PATH") CMD_DURATION=$CMD_DURATION fish_bind_mode=$fish_bind_mode set _tide_prompt_4007 (_tide_2_line_prompt)" &
builtin disown
command kill $_tide_last_pid 2>/dev/null
set -g _tide_last_pid $last_pid
end
math $COLUMNS-(string length -V "$_tide_prompt_4007[1]$_tide_prompt_4007[3]")+5 | read -lx dist_btwn_sides
echo -ns \n''(string replace #PWD# (_tide_pwd) "$_tide_prompt_4007[1]")''
string repeat -Nm(math max 0, $dist_btwn_sides-$_tide_pwd_len) ' '
echo -ns "$_tide_prompt_4007[3]"\n"$_tide_prompt_4007[2] "
end
and type prompt_pwd:
prompt_pwd is a function with definition
# Defined in /usr/share/fish/functions/prompt_pwd.fish # line 1
function prompt_pwd --description 'short CWD for the prompt'
set -l options h/help d/dir-length= D/full-length-dirs=
argparse -n prompt_pwd $options -- $argv
or return
if set -q _flag_help
__fish_print_help prompt_pwd
return 0
end
set -q argv[1]
or set argv $PWD
set -ql _flag_d
and set -l fish_prompt_pwd_dir_length $_flag_d
set -q fish_prompt_pwd_dir_length
or set -l fish_prompt_pwd_dir_length 1
set -l fulldirs 0
set -ql _flag_D
and set fish_prompt_pwd_full_dirs $_flag_D
set -q fish_prompt_pwd_full_dirs
or set -l fish_prompt_pwd_full_dirs 1
for path in $argv
# Replace $HOME with "~"
set -l realhome ~
set -l tmp (string replace -r '^'"$realhome"'($|/)' '~$1' $path)
if test "$fish_prompt_pwd_dir_length" -eq 0
echo $tmp
else
# Shorten to at most $fish_prompt_pwd_dir_length characters per directory
# with full-length-dirs components left at full length.
set -l full
if test $fish_prompt_pwd_full_dirs -gt 0
set -l all (string split -m (math $fish_prompt_pwd_full_dirs - 1) -r / $tmp)
set tmp $all[1]
set full $all[2..]
else if test $fish_prompt_pwd_full_dirs -eq 0
# 0 means not even the last component is kept
string replace -ar '(\.?[^/]{'"$fish_prompt_pwd_dir_length"'})[^/]*' '$1' $tmp
continue
end
string join / (string replace -ar '(\.?[^/]{'"$fish_prompt_pwd_dir_length"'})[^/]*/' '$1/' $tmp) $full
end
end
end
I'm not sure what exactly this does, but I hope it helps!

Parallel subshells doing work and report status

I am trying to do work in all subfolders in parallel and describe a status per folder once it is done in bash.
suppose I have a work function which can return a couple of statuses
#param #1 is the folder
# can return 1 on fail, 2 on success, 3 on nothing happened
work(){
cd $1
# some update thing
return 1 # or 2, or 3
}
now I call this in my wrapper function
do_work(){
while read -r folder; do
tput cup "${row}" 20
echo -n "${folder}"
(
ret=$(work "${folder}")
tput cup "${row}" 0
[[ $ret -eq 1 ]] && echo " \e[0;31mupdate failed \uf00d\e[0m"
[[ $ret -eq 2 ]] && echo " \e[0;32mupdated \uf00c\e[0m"
[[ $ret -eq 3 ]] && echo " \e[0;32malready up to date \uf00c\e[0m"
) &>/dev/null
pids+=("${!}")
((++row))
done < <(find . -maxdepth 1 -mindepth 1 -type d -printf "%f\n" | sort)
echo "waiting for pids ${pids[*]}"
wait "${pids[@]}"
}
and what I want is, that it prints out all the folders per line, and updates them independently from each other in parallel and when they are done, I want that status to be written in that line.
However, I am unsure which subshell is writing what, and which ones I need to capture and how.
My attempt above is currently not writing correctly, and not running in parallel.
If I get it to work in parallel, I get those [1] <PID> and [1] + 3156389 done messages messing up my screen.
If I put the work itself in a subshell, I don't have anything to wait for.
If I then collect the pids, I don't get the response code to print out the text to show the status.
I did have a look at GNU Parallel, but I don't think I can get that behaviour. (I think I could hack it so that finished jobs are printed, but I want all running jobs printed and the finished ones amended.)
Assumptions/understandings:
a separate child process is spawned for each folder to be processed
the child process generates messages as work progresses
messages from child processes are to be displayed in the console in real time, with each child's latest message being displayed on a different line
The general idea is to set up a means of interprocess communication (IPC) ... named pipe, normal file, queuing/messaging system, sockets (plenty of ideas available via a web search on bash interprocess communication); the children write to this system while the parent reads from it and issues the appropriate tput commands.
One very simple example using a normal file:
> status.msgs # initialize our IPC file
child_func () {
# Usage: child_func <unique_id> <other> ... <args>
local i
for ((i=1;i<=10;i++))
do
sleep $1
# each message should include the child's <unique_id> ($1 in this case);
# parent/monitoring process uses this <unique_id> to control tput output
echo "$1:message - $1.$i" >> status.msgs
done
}
clear
( child_func 3 & )
( child_func 5 & )
( child_func 2 & )
while IFS=: read -r child msg
do
tput cup $child 10
echo "$msg"
done < <(tail -f status.msgs)
NOTES:
the (child_func 3 &) construct is one way to eliminate the OS message re: 'background process completed' from showing up in stdout (there may be other ways but I'm drawing a blank at the moment)
when using a file (normal, pipe) OP will want to look at a locking method (flock?) to ensure messages from multiple children don't stomp on each other
OP can get creative with the format of the messages printed to status.msgs in conjunction with parsing logic in the parent's while loop
assuming variable width messages OP may want to look at appending a tput el on the end of each printed message in order to 'erase' any characters leftover from a previous/longer message
exiting the loop could be as simple as keeping count of the number of child processes that send a message <id>:done, or keeping track of the number of children still running in the background, or ...
Running this at my command line generates 3 separate lines of output that are updated at various times (based on the sleep $1):
# no output to line #1
message - 2.10 # messages change from 2.1 to 2.2 to ... to 2.10
message - 3.10 # messages change from 3.1 to 3.2 to ... to 3.10
# no output to line #4
message - 5.10 # messages change from 5.1 to 5.2 to ... to 5.10
NOTE: comments not actually displayed in console
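The "count the <id>:done messages" exit strategy from the notes can be sketched like this (file name, child IDs, and the polling loop are all made up for illustration; a real version would keep the tail -f reader and the tput calls):

```shell
msgfile=$(mktemp)                  # stand-in for status.msgs
child() {                          # $1 = unique child id
    sleep "0.$1"                   # pretend to do some work
    echo "$1:done" >> "$msgfile"   # final status message
}
nchildren=3
for id in 1 2 3; do ( child "$id" & ); done
# parent: poll the IPC file until every child has reported done
finished=0
until [ "$finished" -eq "$nchildren" ]; do
    sleep 0.1
    finished=$(grep -c ':done$' "$msgfile")
done
echo "all $finished children finished"
rm -f "$msgfile"
```

The `( child "$id" & )` construct is the same trick as above for suppressing job-control messages; only the exit-detection differs.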
Based on @markp-fuso's answer:
printer() {
while IFS=$'\t' read -r child msg
do
tput cup $child 10
echo "$child $msg"
done
}
clear
parallel --lb --tagstring "{%}\t{}" work ::: folder1 folder2 folder3 | printer
echo
You can't collect exit statuses like that. Try this instead: rework your work function to echo its status:
work(){
cd "$1"
# some update thing &> /dev/null without output
echo "${1}_$status" #status=1, 2, 3
}
And then set up data collection from all folders like so:
data=$(
while read -r folder; do
work "$folder" &
done < <(find . -maxdepth 1 -mindepth 1 -type d -printf "%f\n" | sort)
wait
)
echo "$data"
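A self-contained sketch of this collection pattern, with dummy folder names and a stub work function (everything here is illustrative):

```shell
work() { sleep 0.1; echo "${1}_2"; }   # pretend every folder updates fine (status 2)
# the command substitution does not return until `wait` does, so $data
# ends up holding one status line per background job
data=$(
    for folder in alpha beta gamma; do
        work "$folder" &
    done
    wait
)
echo "$data"
```

The line order in $data depends on which job finishes first, but every job contributes exactly one line.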

How can I disable * expansion in a script?

I have a strange problem - possibly I'm just going blind. I have this short script, which replaces the string #qry# in the here-document with a select statement in a file and then pipes it to mysql:
#!/bin/bash
if [[ "$1" == "-h" ]]
then
echo "sqljob [sqlfile] [procnm] [host] [port] [database] [config file]"
echo " sqlfile: text file containing an SQL statement"
echo " procnm: name that will be given to the new stored procedure"
echo " host: hostname or IP address of the database server"
echo " port: port the database server listens on"
echo " database: the procedure will be created here"
echo " config file: default configuration file with username and password"
exit
fi
infile=$1
procnm=$2
hn=$3
pn=$4
db=$5
mycfg=$6
{
set -o noglob
sed -e "s/#qry#/$(echo $(cat $infile))/g" <<!
drop procedure if exists $procnm;
delete from jobs where jobname="$procnm";
insert into jobs
set
notes="SQL job $procnm",
jobname="$procnm",
parm_tmpl='int';
delimiter //
create procedure $procnm(vqid int)
begin
call joblogmsg(vqid,0,"$procnm","","Executing #qry#");
drop table if exists ${procnm}_res;
create table ${procnm}_res as
#qry#
end//
delimiter ;
!
} | mysql --defaults-file=$mycfg -h $hn -P $pn $db
However, when the select contains *, it expands to whatever is in the directory, even though I use noglob. Yet it works from the command line:
$ set -o noglob
$ ls *
What am I doing wrong?
Edit
Block Comments in a Shell Script has been suggested as a duplicate, but as you will notice, I need to expand ${procnm} in the here-doc; I just need to avoid the same happening to select *.
I suspect it is because of the $(echo $(cat ...)) construct. The echo command gets the * from the cat command, and the shell in which it runs expands it; set -o noglob is not active in that shell.
Try leaving the echo away: /$(cat $infile)/; in the end that is the data you need, and then there is no extra glob expansion by a shell.
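A quick demonstration of the difference, using throwaway files in a temp directory (not the poster's script):

```shell
tmpdir=$(mktemp -d)
cd "$tmpdir"
echo 'select * from t;' > query.sql
touch a.txt b.txt
unquoted=$(echo $(cat query.sql))   # the * from the file gets glob-expanded
quoted="$(cat query.sql)"           # the text is preserved exactly
echo "unquoted: $unquoted"
echo "quoted:   $quoted"
```

The unquoted nested substitution lets the shell re-split the SQL text and match * against the files in the current directory; the plain quoted substitution keeps it verbatim.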

How to parallel process a function, with loops

So I have this function, and I want everything it contains to run at the same time. So far it isn't working, although according to other sources this is how you do it. The function itself works if it's not run in parallel.
#!/bin/bash
foo () {
cd ${HOME}/sh/path/to/script/execute
for f in *.sh; do # goes to the "execute" directory and executes all
# scripts in the current directory; basically run-parts without cron
cd ~/sh/path/to/script
while IFS= read -r l1 #Line 1 in master.txt
IFS= read -r l2 #Line 2 in master.txt
IFS= read -r l3 #Line 3 in master.txt
do
cd /dev/shm/arb
echo ${l1} > arg.txt & echo ${l2} > arg2.txt & echo ${l3} > arg3.txt
cd ${HOME}/sh/path/to/script/execute
bash -H ${f} #executes all scripts inside "execute" folder
cd ~/sh/path/to/script/here
./here.sh &
cd ~/sh/path/to/script &
done <master.txt
done
}
export -f foo
parallel ::: foo
Results in
# No result at all, just buffers. htop doesn't acknowledge any
# processes, and when this runs it's pretty taxing on the cores.
master.txt content
In case this is relevant:
apple_fruit
apple_veggie
veggie_fruit
#apple changes
pear_fruit
pear_veggie
veggie_fruit
#pear changes
cucumber_fruit
...
I'm very new to using parallel and don't know how it works in advanced (and basic) situations, so would the loops interfere? And if they do, is there a workaround?
The result is probably going to be something like:
inner() {
script="$1"
parallel -N3 "'$script' {}; here.sh {}" :::: master.txt
}
export -f inner
parallel inner ::: ${HOME}/sh/path/to/script/execute/*.sh
This will call each of the scripts in ${HOME}/sh/path/to/script/execute/ (and here.sh) with 3 arguments from master.txt like this:
${HOME}/sh/path/to/script/execute/script1.sh apple_fruit apple_veggie veggie_fruit
You need to change the scripts so that:
They get their arguments from the command line (not from arg.txt, arg2.txt, arg3.txt).
They send their output to stdout.
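For illustration, one of the reworked scripts might look like this (written here as a function with made-up names; the real scripts live in ${HOME}/sh/path/to/script/execute/):

```shell
# hypothetical rework of one execute/ script: the three values arrive as
# command-line arguments instead of arg.txt/arg2.txt/arg3.txt, and the
# result goes to stdout
script1() {
    l1=$1 l2=$2 l3=$3
    printf 'processing %s %s %s\n' "$l1" "$l2" "$l3"
}
out=$(script1 apple_fruit apple_veggie veggie_fruit)
echo "$out"
```

This matches the calling convention that `parallel -N3` produces: three consecutive lines of master.txt become $1, $2, and $3.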

Write a script to put a series of files in sequence

I am a beginner at scripting and I am trying to write a bash script. I need the script to write, into one file, a sequence of several file names numbered from 1 to 50. These are trajectory files from MD simulations. My idea was to write something like:
for valor in {1..50}
do
echo "
#!/bin/bash
catdcd -o Traj-all.dcd -stride 10 -dcd traj-$valor.dcd" > Traj.bash
done
exit
However, I just got one file with the following line:
#!/bin/bash
catdcd -o Traj-all.dcd -stride 10 -dcd traj-50.dcd
exit
But what I really want is something like:
#!/bin/bash
catdcd -o Traj-all.dcd -stride 10 -dcd traj-1.dcd -dcd traj-2.dcd -dcd traj-3.dcd ... -dcd traj-50.dcd
exit
How can I solve this problem?
You need to read a bit more about bash brace expansion. You can do this:
{
echo "#!/bin/bash"
echo "catdcd -o Traj-all.dcd -stride 10" "-dcd traj-"{1..50}".dcd"
# ^^^^^^^^^^^^^^^^^^^^^^^^^
} > Traj.bash
The underlined part is where the brace expansion gets expanded by the shell into
-dcd traj-1.dcd -dcd traj-2.dcd ... -dcd traj-50.dcd
You don't need to explicitly end your script with exit -- the shell will exit by itself when it runs out of commands.
> truncates the file on open. Either only use it once before the loop to create the file and then append (>>) within the loop, or redirect the entire loop.
> foo
for ...
do ...
echo ... >> foo
done
...
{
for ...
do ...
echo ...
done
} > foo
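The loop variant of the same fix can be sketched like this (it writes to a temp file here instead of Traj.bash): collect the -dcd arguments in an array, then redirect the whole block once:

```shell
outfile=$(mktemp)                    # stands in for Traj.bash
args=()
for valor in {1..50}; do
    args+=(-dcd "traj-$valor.dcd")   # accumulate instead of overwriting
done
{
    echo '#!/bin/bash'
    echo "catdcd -o Traj-all.dcd -stride 10 ${args[*]}"
} > "$outfile"
cat "$outfile"
```

Because the redirection covers the whole group, the file is opened (and truncated) exactly once.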
