Creating an alias that has foreach and if statements in it - tcsh

I'm trying to create an alias in my ~/.alias file as follows. The shell is tcsh.
alias getlastlog 'foreach log ( `find dir1/dir2/*log | tac` )\
    grep -q "Options.*BRINGUP" $log \
    if ( $status == 0 ) then \
        continue \
    endif \
    break \
end \
less $log '
Then, when I run getlastlog in the terminal, I get the error "if: Improper then.".
If I copy-paste the following lines into the terminal, it works as expected:
foreach log ( `find dir1/dir2/*log | tac` )
    grep -q "Options.*BRINGUP" $log
    if ( $status == 0 ) then
        continue
    endif
    break
end
less $log
How should I create the alias so that it works as expected and I don't get the "if: Improper then." error?
Basically, I'm trying to use less to open the most recently modified log file that does not contain the pattern Options.*BRINGUP. I perform this action frequently, hence I would like to create an alias for it.
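Since those same lines work when pasted into the terminal, one workaround is to move them, unchanged, into a small script and point the alias at it; tcsh aliases do not handle multi-line control structures gracefully. A minimal sketch, assuming a hypothetical helper at ~/bin/getlastlog.csh (made executable with chmod +x):
#!/bin/tcsh -f
# getlastlog.csh -- the same commands that already work interactively
foreach log ( `find dir1/dir2/*log | tac` )
    # skip logs that contain the pattern; stop at the first one that doesn't
    grep -q "Options.*BRINGUP" $log
    if ( $status == 0 ) then
        continue
    endif
    break
end
less $log
and in ~/.alias:
alias getlastlog '~/bin/getlastlog.csh'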

Related

BASH - 'exit 1' failed in loop inside another loop [duplicate]

The following code doesn't exit at the first exit 1 from the call of error_exit. What am I missing?
#!/bin/bash
THIS_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
JINJANG_DIR="$(cd "$THIS_DIR/../.." && pwd)"
DATAS_DIR="$THIS_DIR/datas"
error_exit() {
    echo ""
    echo "ERROR - Following command opens the file that has raised an error."
    echo ""
    echo " > open \"$1\""
    exit 1
}
cd "$DATAS_DIR"
find . -name 'datas.*' -type f | sort | while read -r datafile
do
    localdir="$(dirname $datafile)"
    echo " * Testing ''$localdir''."
    filename=$(basename "$datafile")
    ext=${filename##*.}
    if [ "$ext" == "py" ]
    then
        unsafe="-u"
    else
        unsafe=""
    fi
    datas="$DATAS_DIR/$datafile"
    find . -name 'template.*' -type f | sort | while read -r template
    do
        filename=$(basename "$template")
        ext=${filename##*.}
        template="$DATAS_DIR/$template"
        outputfound="$DATAS_DIR/$localdir/output_found.$ext"
        cd "$JINJANG_DIR"
        python -m src $UNSAFE "$DATA" "$TEMPLATE" "$OUTPUTFOUND" || error_exit "$localdir"
    done
    cd "$DATAS_DIR"
done
Here is the output I obtain.
ERROR - Following command opens the file that has raised an error.
> open "./html/no-param-1"
* Testing ''./html/no-param-2''.
ERROR - Following command opens the file that has raised an error.
> open "./html/no-param-2"
* Testing ''./latex/no-param-1''.
ERROR - Following command opens the file that has raised an error.
> open "./latex/no-param-1"
* Testing ''./latex/no-param-2''.
ERROR - Following command opens the file that has raised an error.
In my bash environment, invoking exit in a subprocess does not abort the parent process, e.g.:
$ echo "1 2 3" | exit # does not exit my console but instead ...
$ # presents me with the command prompt
In your case you have the pipeline: find | sort | while, so the python || error_exit is being called within a subprocess which in turn means the exit 1 will apply to the subprocess but not the (parent) script.
One solution that ensures the (inner) while loop (and thus the exit 1) is not run in a subprocess:
while read -r template
do
    ... snip ...
    python ... || error_exit
    ... snip ...
done < <(find . -name 'template.*' -type f | sort)
NOTES:
- I'd recommend getting used to this structure, as it also addresses another common issue: values assigned to variables in a subprocess are not passed 'up' to the parent process (see the short demo below).
- Subprocess behavior may differ in other shells.
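A quick illustration of that scoping point (a standalone sketch, separate from the script above):
#!/bin/bash
count=0
printf 'a\nb\n' | while read -r line; do count=$((count + 1)); done
echo "pipeline: count=$count"                # prints 0: the loop ran in a subshell
count=0
while read -r line; do count=$((count + 1)); done < <(printf 'a\nb\n')
echo "process substitution: count=$count"    # prints 2: the loop ran in the current shell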
Of course, this same issue applies to the parent/outer while loop so, if the objective is for the exit 1 to apply to the entire script then this same structure will need to be implemented for the parent/outer find | sort | while, too:
while read -r datafile
do
    ... snip ...
    while read -r template
    do
        ... snip ...
        python ... || error_exit
    done < <(find . -name 'template.*' -type f | sort)
    cd "$DATAS_DIR"
done < <(find . -name 'datas.*' -type f | sort)
Additional note copied from GordonDavisson's edit of this answer:
Note that the <( ) construct ("process substitution") is not
available in all shells, or even in bash when it's in sh-compatibility
mode (i.e. when it's invoked as sh or /bin/sh). So be sure to use
an explicit bash shebang (like #!/bin/bash or #!/usr/bin/env bash)
in your script, and don't override it by running the script with the
sh command.
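For example, a hypothetical demo.sh (assuming /bin/sh on your system is dash, or bash in POSIX mode):
#!/usr/bin/env bash
# demo.sh -- runs fine as ./demo.sh thanks to the bash shebang,
# but `sh demo.sh` typically dies with a syntax error at the `<(`.
while read -r n
do
    echo "got $n"
done < <(seq 2)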

Multithreading semaphore for bash script (sub-processes)

Is there any way / binary for a semaphore-like structure? E.g. for running a fixed number of (background) sub-processes as we loop through a directory of files. (I'm using the word "sub-process" here rather than "thread", since I'm using an appended & in my bash commands to do the "multithreading", but I'd be open to any more convenient suggestions.)
My actual use case is trying to use a binary called bcp on CentOS 7 to write a (variable-sized) set of TSV files to a remote MSSQL Server DB, and I have observed that there seems to be a problem with the program when running too many threads. E.g. something like
for filename in $DATAFILES/$TARGET_GLOB; do
    if [ ! -f $filename ]; then
        echo -e "\nFile $filename not found!\nExiting..."
        exit 255
    else
        echo -e "\nImporting $filename data to $DB/$TABLE"
    fi
    echo -e "\nStarting BCP export threads for $filename"
    /opt/mssql-tools/bin/bcp "$TABLE" in "$filename" \
        $TO_SERVER_ODBCDSN \
        -U $USER -P $PASSWORD \
        -d $DB \
        $RECOMMEDED_IMPORT_MODE \
        -t "\t" \
        -e ${filename}.bcperror.log &
done
# collect all subprocesses at the end
wait
which starts a new sub-process for every file all at once, in an unrestricted way, and appears to crash each sub-process. I would like to see if adding a semaphore-like structure into the loop, to cap the number of sub-processes that get spun up, would help. E.g. something like (using some non-bash-like pseudo-code here)
sem = Semaphore(locks=5)
for filename in $DATAFILES/$TARGET_GLOB; do
    if [ ! -f $filename ]; then
        echo -e "\nFile $filename not found!\nExiting..."
        exit 255
    else
        echo -e "\nImporting $filename data to $DB/$TABLE"
    fi
    sem.lock()
    <same code from original loop>
    sem.unlock()
done
# collect all subprocesses at the end
wait
If anything like this is possible, or if this is a common problem with an existing best-practice solution (I'm pretty new to bash programming), advice would be appreciated.
This isn't strictly equivalent, but you can use xargs to start up to a given number of processes at once:
-P max-procs, --max-procs=max-procs
Run up to max-procs processes at a time; the default is 1. If
max-procs is 0, xargs will run as many processes as possible at
a time. Use the -n option or the -L option with -P; otherwise
chances are that only one exec will be done. While xargs is
running, you can send its process a SIGUSR1 signal to increase
the number of commands to run simultaneously, or a SIGUSR2 to
decrease the number. You cannot decrease it below 1. xargs
never terminates its commands; when asked to decrease, it merely
waits for more than one existing command to terminate before
starting another.
Something like:
printf "%s\n" $DATAFILES/$TARGET_GLOB |
xargs -d '\n' -I {} --max-procs=5 bash -c '
filename=$1
if [ ! -f $filename ]; then
echo -e "\nFile $filename not found!\nExiting..."
exit 255
else
echo -e "\nImporting $filename data to $DB/$TABLE"
fi
echo -e "\nStarting BCP export threads for $filename"
/opt/mssql-tools/bin/bcp "$TABLE" in "$filename" \
$TO_SERVER_ODBCDSN \
-U $USER -P $PASSWORD \
-d $DB \
$RECOMMEDED_IMPORT_MODE \
-t "\t" \
-e ${filename}.bcperror.log
' _ {}
You'll need to export the TABLE, TO_SERVER_ODBCDSN, USER, PASSWORD, DB and RECOMMEDED_IMPORT_MODE variables beforehand so that they're available in the processes started by xargs. Alternatively, put the commands run via bash -c into a separate script and set the variables in that script.
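For instance (the values below are placeholders, not from the question):
# Hypothetical values -- substitute your own. Exporting makes them visible
# to the bash -c children that xargs spawns.
export TABLE='mydb.dbo.mytable'
export TO_SERVER_ODBCDSN='-D -S my_odbc_dsn'
export USER='loader'
export PASSWORD='secret'
export DB='mydb'
export RECOMMEDED_IMPORT_MODE='-c'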
Following the recommendation by @Mark Setchell, I'm using GNU Parallel to replace the loop (in a simulated cron environment; see https://stackoverflow.com/a/2546509/8236733) with
bcpexport() {
    filename=$1
    TO_SERVER_ODBCDSN=$2
    DB=$3
    TABLE=$4
    USER=$5
    PASSWORD=$6
    RECOMMEDED_IMPORT_MODE=$7
    DELIMITER=$8 # DO NOT use format like "'\t'", nested quotes seem to cause hard-to-catch error
    <same code from original loop>
}
export -f bcpexport
parallel -j 10 bcpexport \
    ::: $DATAFILES/$TARGET_GLOB \
    ::: "$TO_SERVER_ODBCDSN" \
    ::: $DB \
    ::: $TABLE \
    ::: $USER \
    ::: $PASSWORD \
    ::: $RECOMMEDED_IMPORT_MODE \
    ::: $DELIMITER
to run at most 10 threads at a time, where $DATAFILES/$TARGET_GLOB is a glob string that returns all of the files in the desired directory (e.g. "$storagedir/tsv/*.tsv") that we want to process, and the remaining fixed args are supplied alongside each of the files returned by that glob as the remaining parallel inputs shown. (The $TO_SERVER_ODBCDSN variable is actually "-D -S <some ODBC DSN>", so it needed quotes to be passed as a single arg.) So if the $DATAFILES/$TARGET_GLOB glob returns files A, B, C, ..., we end up running the commands
bcpexport A "$TO_SERVER_ODBCDSN" $DB ...
bcpexport B "$TO_SERVER_ODBCDSN" $DB ...
bcpexport C "$TO_SERVER_ODBCDSN" $DB ...
...
in parallel. An additional nice thing about using parallel is
GNU parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially.
Using &
Example code
#!/bin/bash
xmms2 play &
sleep 5
xmms2 next &
sleep 1
xmms2 stop
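That snippet doesn't limit concurrency, though. A sketch of a semaphore-like cap in plain bash, assuming bash 4.3+ for wait -n (mycommand is a placeholder for the real bcp invocation):
#!/bin/bash
max_jobs=5
for filename in $DATAFILES/$TARGET_GLOB; do
    # block until a slot frees up: wait -n returns as soon as any one job exits
    while (( $(jobs -rp | wc -l) >= max_jobs )); do
        wait -n
    done
    mycommand "$filename" &    # placeholder for the bcp command from the question
done
# collect the remaining subprocesses at the end
wait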

check if file does not exist or is older than another in csh

In C shell I need to check whether a file exists or whether it is older than another file (or, in this example, older than 5 seconds after the beginning of Unix time). If the file does not exist or is old, some stuff should be executed.
In my example, "bla.txt" does not exist, so the first condition is true:
if ( ! -f bla.txt || `stat -c "%Y" bla.txt` > 5 ) echo 1
stat: cannot stat `bla.txt': No such file or directory
1
The problem is that if I combine these conditions in an if statement, the second one (the age of the file) is still evaluated even though the first one is already true, and it gives an error because the file is not there.
In bash, everything works as it should:
if [ ! -f bla.txt ] || [ `stat -c "%Y" bla.txt` > 5 ]; then echo 1; fi
1
Any ideas on how to achieve this behaviour in csh WITHOUT an else if? I don't want to duplicate the commands to be executed in my code.
Thanks!
CSH has a parser which, to be honest, doesn't deserve the name.
The issue in this particular instance is that it doesn't evaluate the left side of the || construct first before starting stat (as you've seen). As you're depending on the standard output of stat you can't redirect output via >& /dev/null either, and redirection of just stderr is a bit of a nuisance (see Redirecting stderr in csh).
If you want clean csh code that is still understandable but do not want to code the actual code call twice, I think the cleanest solution is to use an intermediate variable. Something like this:
#!/bin/csh
set f=$1
set do=0
if ( ! -f $f ) then
    set do=1
else
    if ( `stat -c "%Y" $f` < 5 ) set do=1
endif
if ( $do ) echo "File $f does not exist or is older than 5s after epoch"
(Note that your original code also had the age test reversed from your prose.)
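Hypothetical usage, assuming the script above is saved as checkfile.csh and bla.txt does not exist:
% chmod +x checkfile.csh
% ./checkfile.csh bla.txt
File bla.txt does not exist or is older than 5s after epoch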
You can move the -f test inside the shell command from which you are redirecting the output of stat. Here is a script to illustrate:
#!/bin/csh
set verbose
set XX=/tmp/foo
set YY=2
rm -f $XX
foreach ZZ ( 0 1 )
    if ( `stat -c "%Y" $XX` > $YY ) echo 1
    if ( `test -f $XX && stat -c "%Y" $XX` > $YY ) echo 1
    if ( $ZZ == 0 ) touch $XX
    stat -c "%Y" $XX
    sleep $YY
end

Unexpected error in a shell script: "Unexpected keyword `from`". Can't locate the mistake

I have written a shell script that connects to a database and retrieves records. But when I execute it, it gives me the error: Unexpected keyword from. Can anyone please suggest what mistake I am making?
The code I have written is given below:
#------------------------------------------------------------------------------------------------
# Define Script and Log Location
# ------------------------------------------------------------------------------------------------
SCRIPTHOME=/opt/psoft/scripts
SCRIPTINPUT=/opt/psoft/scripts/tac/input
SCRIPTLOG=/opt/psoft/scripts/tac/log
SCRIPTOUTPUT=/opt/psoft/scripts/tac/output
SCRIPTNOTPROCESSED=/opt/psoft/scripts/tac/notprocessed
# ------------------------------------------------------------------------------------------------
# Define Oracle Environment
# ------------------------------------------------------------------------------------------------
export ORACLE_HOME=/opt/oracle/product/9.2.0
export TNS_ADMIN=/var/opt/oracle/admin/network
export PATH=$PATH:$ORACLE_HOME/bin:$HOME/scripts:.
# ------------------------------------------------------------------------------------------------
# Main Program
# ------------------------------------------------------------------------------------------------
incno=$1;
if test ${#incno} -lt 6
then
    echo "Please provide 6 digit incident no";
    echo "TAC script has not been run";
    exit;
fi;
cd ${SCRIPTINPUT}
if test -e *.csv
then
    #cd ${SCRIPTINPUT}
    for f in *.csv
    do
        dos2unix $f $f # To remove control M characters in the input file
        echo " $f - Control M characters removed successfully " >> input.log
    done
    echo " Control M characters present in all files in input folder have been removed " >> input.log
    cd ${SCRIPTINPUT}
    for INP in *.csv
    do
        log_file="${SCRIPTLOG}/${INP}.log"
        # To check if input file is for Taccode or not
        cd ${SCRIPTINPUT}
        echo "Taccode to be executed for the file $INP"
        count=0;
        while read line
        do
            pcode=`echo $line | cut -d "," -f "1"`
            tcode=`echo $line | cut -d "," -f "2"`
            cpcode=${#pcode}
            ctcode=${#tcode}
            #cpcode=`echo ${pcode} | grep -oE [[:digit:]] | wc -l`
            #ctcode=`echo ${tcode} | grep -oE [[:digit:]] | wc -l`
            if test $cpcode -eq 5
            then
                DBRESULT=`sqlplus sprint2/sh3rl0ck@SPRXP03
                select * from mytable where productcode='10130' AND taccode='35710100';
                quit;`
                echo "Hello $count:$pcode:$tcode:$DBRESULT"
                # here the database result should be checked for errors
                if test $? -ne 0
                then
                    echo "Query execution failed. Check ${log_file} file for errors!"
                    mv ${SCRIPTINPUT}/$INP ${SCRIPTNOTPROCESSED}
                    exit;
                else
                    count=$(expr $count + 1)
                    echo "Record No:${count} ${pcode}:${tcode}" >> ${log_file}
                fi;
            else
                echo "Problem with product code or tac code. Check log file for errors"
                echo "Record No:${count} ${pcode}:${tcode}:" >> ${log_file}
                mv ${SCRIPTINPUT}/$INP ${SCRIPTNOTPROCESSED}
                exit;
            fi;
        done < ${INP} # end file-reading while loop
        echo "Script execution succeeded" >> ${log_file}
        echo "${count} records inserted"
        echo "Script execution succeeded";
    done # end outer for loop
else
    echo "No csv files found in input directory. - TAC script has not been run."
fi;
I think you'll need to put your sqlplus command all on one line
DBRESULT=`sqlplus sprint2/sh3rl0ck@SPRXP03 select * from mytable where productcode='10130' AND taccode='35710100'; quit;`
or include explicit line breaks
DBRESULT=`sqlplus sprint2/sh3rl0ck@SPRXP03 \
select * from mytable where productcode='10130' AND taccode='35710100'; \
quit;`
Oracle sqlplus doesn't support inline SQL statements. The only command-line support it has is running a SQL script with @. The viable solution to your problem is to use a heredoc.
You can do something like:
DBRESULT=$(sqlplus sprint2/sh3rl0ck@SPRXP03 <<-SQL
select * from mytable where productcode='10130' AND taccode='35710100';
exit;
SQL
)
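If you also want the script's error check to work, here is a sketch (same hypothetical connect string; note that $? must be tested immediately after the command, not after an intervening echo as in the original script):
DBRESULT=$(sqlplus -s sprint2/sh3rl0ck@SPRXP03 <<-SQL
WHENEVER SQLERROR EXIT 1
select * from mytable where productcode='10130' AND taccode='35710100';
exit;
SQL
)
status=$?    # capture immediately; a later echo would overwrite it
echo "Hello $count:$pcode:$tcode:$DBRESULT"
if test $status -ne 0
then
    echo "Query execution failed. Check ${log_file} file for errors!"
    mv ${SCRIPTINPUT}/$INP ${SCRIPTNOTPROCESSED}
    exit 1
fi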

for loop: commands start from begin every time

I have written the following bash script to check a list of domains from domain.list against multiple directories from dir.list.
Say example.com is the first domain. The script first tries to find the file at
http://example.com
If that succeeds, the script finishes and exits with no problem. If it fails, it checks
https://example.com
If that's OK, the script finishes and exits. If not, it checks
http://example.com/$dir for the list of different directories.
If the file is found, the script finishes and exits; if it fails to find it, it then checks
https://example.com/$dir for the list of different directories.
The problem: when the first check fails and the second check fails, the script moves on to the third check, but it keeps looping over the third and fourth commands until it finds the file or the list of directories is exhausted. I want the script, once it reaches the third command, to run it against the list of directories until the list is finished, and only then move on to the fourth command.
As my script stands, while checking a single domain against multiple directories, every new directory restarts the whole sequence and runs the first and second commands again from the beginning. I don't need that; it's a big waste of time.
Thanks
#!/bin/bash
dirs=( `cat dir.list` )
doms=( `cat domain.list` )
for dom in "${doms[@]}"
do
    for dir in "${dirs[@]}"
    do
        target1="http://${dom}"
        target2="https://${dom}"
        target3="http://${dom}:${dir}"
        target4="https://${dom}:${dir}"
        if curl -s --insecure -m2 ${target1}/test.txt | grep "success" > /dev/null; then
            echo ${target1} >> dir.result
            break
        elif curl -s --insecure -m2 ${target2}/test.txt | grep "success" > /dev/null; then
            echo ${target2} >> dir.result
            break
        elif curl -s --insecure -m2 ${target3}/test.txt | grep "success" > /dev/null; then
            echo ${target3} >> dir.result
            break
        elif curl -s --insecure -m2 ${target4}/test.txt | grep "success" > /dev/null; then
            echo ${target4} >> dir.result
            break
        fi
    done
done
Your code is sub-optimal; if you have a list of 5 'dir' values, you check 5 times whether http://${domain}/test.txt exists — but the chances are that if it didn't exist the first time, it doesn't exist on the other times either.
You use dir to indicate a sub-directory name, but your code uses http://${dom}:${dir} rather than the more normal http://${dom}/${dir}. Technically, what follows the colon up to the first slash is a port number, not a directory. I'm going to assume this is a typo and the colon should be replaced by a slash.
Generally, do not use the back-tick notation; use $(…) instead. Avoid swathes of repeated code, too.
I think you can compress your script down to something like this:
#!/bin/bash
dirs=( $(cat dir.list) )
file=test.txt

fetch_file()
{
    if curl -s --insecure -m2 "${1:?}/${file}" | grep "success" > /dev/null
    then
        echo "${1}"
        return 0
    else
        return 1
    fi
}

for dom in $(cat domain.list)
do
    for proto in http https
    do
        fetch_file "${proto}://${dom}" && break
        for dir in "${dirs[@]}"
        do
            fetch_file "${proto}://${dom}/${dir}" && break 2
        done
    done
done > dir.result
If the domain list is massive, you could consider using while read dom; do …; done < domain.list instead of using the $(cat domain.list). It would be feasible, and possibly even sensible, to define variable site="${proto}://${dom}" and then use that in the invocations of fetch_file.
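A sketch combining both of those suggestions (same fetch_file as above):
while read -r dom
do
    for proto in http https
    do
        site="${proto}://${dom}"
        fetch_file "${site}" && break
        for dir in "${dirs[@]}"
        do
            fetch_file "${site}/${dir}" && break 2
        done
    done
done < domain.list > dir.result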
You can use this script:
while read dom; do
    while read dir; do
        target1="http://${dom}"
        target2="https://${dom}"
        target3="http://${dom}:${dir}"
        target4="https://${dom}:${dir}"
        if curl -s --insecure -m2 ${target1}/test.txt | grep -q "success"; then
            echo ${target1} >> dir.result
            break 2
        elif curl -s --insecure -m2 ${target2}/test.txt | grep -q "success"; then
            echo ${target2} >> dir.result
            break 2
        elif curl -s --insecure -m2 ${target3}/test.txt | grep -q "success"; then
            echo ${target3} >> dir.result
            break 2
        elif curl -s --insecure -m2 ${target4}/test.txt | grep -q "success"; then
            echo ${target4} >> dir.result
            break 2
        fi
    done < dir.list
done < domain.list
