While writing a bash script that calls sqlcmd with different variables in a loop, I noticed that the loop variables that should be updated on each iteration appear to stay constant after sqlcmd is called the first time (I narrowed the culprit down to sqlcmd by placing a continue at later and later points in the loop body until I reached the sqlcmd section). That is, if we loop through a list (of length L) of MSSQL Server table names with sqlcmd, the loop performs not L but infinitely many iterations of the loop body, using only the first entry in the list.
A minimal example is below:
#!/bin/bash
tables_list=$1
while read -r line
do
    tablecols="$line"
    IFS=',' read -a arr_tablecols <<< "$tablecols"
    mssql_tablename=${arr_tablecols[0]}
    echo -e "\n\n\n##### Processing: $mssql_tablename #####\n"

    TO_SERVER_ODBCDSN="-D -S <ODBC DSN name for mssql host>"
    TO_SERVER_IP="-S <my mssql host IP>"
    DB="ClarityETL_test"
    TABLE="$mssql_tablename"
    USER=<my mssql username>
    PASSWORD=<my mssql login password>

    # uncomment to see that sqlcmd does in fact appear to be the problem
    #continue

    {
        echo -e "Counting destination table: $DB/$TABLE"
        sqlcmd -Q "PRINT '$mssql_tablename';" \
            $TO_SERVER_ODBCDSN \
            -U $USER -P $PASSWORD \
            -d $DB
    } || { echo -e "\nFailed to truncate MSSQL DB"; exit 255; }
done < "$tables_list"
where the file being used as $tables_list looks like
mymssqltable1
mymssqltable2
...
(since the example just uses the PRINT command, the list could be mostly anything you want to use, really).
This behavior seems really weird to me, and I could not find anything about it in the docs (https://learn.microsoft.com/en-us/sql/linux/quickstart-install-connect-red-hat?view=sql-server-2017#create-and-query-data) (I'm on CentOS 7). If anyone knows what is going on here or has any further debugging advice, please let me know.
I suspect sqlcmd is reading from standard input, so it's consuming the rest of the input file. I'm not sure why this causes the loop to repeat infinitely rather than ending after the first iteration, but if this is what's going on, the solution is to redirect the input of sqlcmd.
sqlcmd -Q "PRINT '$mssql_tablename';" \
$TO_SERVER_ODBCDSN \
-U $USER -P $PASSWORD \
-d $DB < /dev/null
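As an aside, you can reproduce the stdin-consumption effect with any command that reads its standard input. With most such commands the loop simply ends after the first line rather than repeating; the infinite repetition seen with sqlcmd is presumably a quirk of how it handles the inherited descriptor. A minimal sketch (tables_list.txt is a placeholder file with one name per line):
while read -r line; do
    echo "got: $line"
    # cat stands in for sqlcmd here: it drains everything left on
    # stdin, so the loop body runs exactly once
    cat > /dev/null
done < tables_list.txt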
When you redirect the input of the while loop, all the commands inside the loop inherit that redirected input. So note that if you were calling sqlcmd from within another script nested in the loop (rather than directly within the loop itself), you would do something like
while read -r line
do
    tablecols="$line"
    IFS=',' read -a arr_tablecols <<< "$tablecols"
    mssql_tablename=${arr_tablecols[0]}
    bash scriptThatUsesSqlcmd.sh "$mssql_tablename" < /dev/null
done < "$tables_list"
to un-inherit the loop's standard input redirect in the script that will use sqlcmd.
Related
I have the following shell script. The purpose is to loop through each line of the target file (whose path is the input parameter to the script) and do work against each line. Now, it seems to work only on the very first line of the target file and stops after that line is processed. Is there anything wrong with my script?
#!/bin/bash
# SCRIPT: do.sh
# PURPOSE: loop thru the targets
FILENAME=$1
count=0
echo "proceed with $FILENAME"
while read LINE; do
    let count++
    echo "$count $LINE"
    sh ./do_work.sh $LINE
done < $FILENAME
echo -e "\ntotal $count targets"
In do_work.sh, I run a couple of ssh commands.
The problem is that do_work.sh runs ssh commands and by default ssh reads from stdin which is your input file. As a result, you only see the first line processed, because the command consumes the rest of the file and your while loop terminates.
This happens not just for ssh, but for any command that reads stdin, including mplayer, ffmpeg, HandBrakeCLI, httpie, brew install, and more.
To prevent this, pass the -n option to your ssh command to make it read from /dev/null instead of stdin. Other commands have similar flags, or you can universally use < /dev/null.
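For instance, if do_work.sh looks anything like the following sketch (the host and the remote commands are made-up placeholders), adding -n keeps each ssh call from swallowing the remaining lines of the caller's input file:
#!/bin/sh
# do_work.sh (hypothetical sketch)
# -n redirects ssh's stdin from /dev/null, so these calls cannot
# consume the rest of the while loop's input
ssh -n user@somehost "uptime"
ssh -n user@somehost "df -h /"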
A very simple and robust workaround is to change the file descriptor from which the read command receives input.
This is accomplished by two modifications: the -u argument to read, and a file descriptor number on the < $FILENAME redirection.
In BASH, the default file descriptor values (i.e. values for -u in read) are:
0 = stdin
1 = stdout
2 = stderr
So just choose some other unused file descriptor, like 9 just for fun.
Thus, the following would be the workaround:
while read -u 9 LINE; do
    let count++
    echo "$count $LINE"
    sh ./do_work.sh $LINE
done 9< $FILENAME
Notice the two modifications:
read becomes read -u 9
< $FILENAME becomes 9< $FILENAME
As a best practice, I do this for all while loops I write in BASH.
If you have nested loops using read, use a different file descriptor for each one (9,8,7,...).
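For example, a sketch of two nested loops on separate descriptors (the file names are placeholders):
while read -r -u 9 outer; do
    while read -r -u 8 inner; do
        echo "$outer / $inner"
    done 8< inner_list.txt   # fd 8 feeds the inner loop
done 9< outer_list.txt       # fd 9 feeds the outer loop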
More generally, a workaround which isn't specific to ssh is to redirect standard input for any command which might otherwise consume the while loop's input.
while read -r line; do
    ((count++))
    echo "$count $line"
    sh ./do_work.sh "$line" </dev/null
done < "$filename"
The addition of </dev/null is the crucial point here, though the corrected quoting is also somewhat important for robustness; see also When to wrap quotes around a shell variable?. You will want to use read -r unless you specifically require the slightly odd legacy behavior you get for backslashes in the input without -r. Finally, avoid upper case for your private variables.
Another workaround of sorts which is somewhat specific to ssh is to make sure any ssh command has its standard input tied up, e.g. by changing
ssh otherhost some commands here
to instead read the commands from a here document, which conveniently (for this particular scenario) ties up the standard input of ssh for the commands:
ssh otherhost <<'____HERE'
some commands here
____HERE
The -n option to ssh prevents checking the exit status of ssh when using a here document while piping the output to another program, so using /dev/null as stdin is preferred:
#!/bin/bash
while read ONELINE ; do
    ssh ubuntu@host_xyz </dev/null <<EOF 2>&1 | filter_pgm
echo "Hi, $ONELINE. You come here often?"
process_response_pgm
EOF
    # capture ssh's exit status before the [ test overwrites PIPESTATUS
    rc=${PIPESTATUS[0]}
    if [ $rc -ne 0 ] ; then
        echo "aborting loop"
        exit $rc
    fi
done < input_list.txt
This was happening to me because I had set -e, and a grep inside the loop found no match and produced no output (grep exits non-zero in that case), which aborted the script.
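A minimal sketch of that failure mode, with the usual || true guard (the pattern and file name are made up):
#!/bin/bash
set -e
while read -r line; do
    # grep exits non-zero when it finds no match; under set -e that
    # aborts the whole script mid-loop, so guard it with || true
    match=$(grep -o 'pattern' <<< "$line" || true)
    echo "line=$line match=$match"
done < input.txt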
I have two files with the following contents:
beam.csv
T11
G14
M10
frequency.csv
115000
120000
344444
I want to associate each beam with a frequency i.e. build an associative array such as NAME[beam]=frequency
declare -A NAME
while read -r beam && read -r -u 3 freq; do
    NAME[$beam]=$freq
done < beam.csv 3< frequency.csv
But it doesn't work at all.
When I run echo "${!NAME[@]}" "${NAME[@]}" there is no output, and when I try echo ${NAME[T11]} I don't get any output either.
Your code works fine. Probably you are just unaware that when you execute it like this:
thisisyourshellprompt$ ./yourscript
it actually executes in a subshell; therefore the NAME variable is local to that subshell where the script is run, not to the shell where you have just typed the command.
When the script is done, it returns, and you are back to your shell, where NAME has never been defined.
But the code works, as you can verify by putting the echo in the script, as in
declare -A NAME
while read -r beam && read -r -u 3 freq; do
    NAME[$beam]=$freq
done < beam.csv 3< frequency.csv
echo "${!NAME[@]}" "${NAME[@]}"
I'm writing a script for truncating tables. The problem is that the script goes into an infinite loop.
./n_input contains environment variables that are used in script.
#!/bin/ksh
INPUT_FILE=./n_input
if [ ! -e $INPUT_FILE ];
then
    echo "Error: Input file required"
    exit 1
fi
source $INPUT_FILE

while read Table
do
    TABLE="$DSTN_DATABASE.dbo.$Table"
    echo "Table : $TABLE"
    QUERY="truncate table $TABLE"
    echo $QUERY > ./tmp_file
    sqlcmd -m 1 -U $DSTN_USER -P $DSTN_PASSWORD -D -S $DSTN_SERVER -i ./tmp_file
    RET_VALUE=$?
    if [ $RET_VALUE -ne 0 ]
    then
        echo "Error $TABLE"
    fi
done < $TABLE_LIST
exit 0
How do I break the loop? I tried removing sqlcmd from the script and verified that the loop then works as expected. I observed the same behavior with the sqlcmd -Q option.
The $TABLE_LIST file contains only one table name.
You don't need to write the query into a temporary file. Use the -q or -Q option instead:
q="truncate table ${TABLE};"
sqlcmd -m 1 -U "$DSTN_USER" -P "$DSTN_PASSWORD" -S "$DSTN_SERVER" -q "$q"
Note the ; at the end of the query; the missing terminator is probably the reason why the script "stalls", which can look like an infinitely running loop.
Also note the use of double quotes. You should wrap variables in double quotes to prevent reinterpretation of special characters.
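A contrived sketch of what can go wrong without them (the value is made up):
DSTN_PASSWORD='pass word'
printf '<%s>' $DSTN_PASSWORD; echo      # <pass><word>  - split into two arguments
printf '<%s>' "$DSTN_PASSWORD"; echo    # <pass word>   - passed as one argument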
By the way, you can locate the exact command that is causing the issue by adding set -x at the beginning of the script. set -x turns on debugging mode. With debugging mode on, you see the commands being executed.
It's very unlikely that the content of $TABLE_LIST file is causing such behavior, unless the file is enormously big. The loop construct is correct, and the number of iterations should match the number of lines in the file.
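Putting this together with the stdin issue discussed earlier on this page, a sketch of the corrected loop (variable names as in the question; the </dev/null guard is carried over from the other answers as a precaution, in case sqlcmd is also consuming the loop's input here):
while read -r Table
do
    TABLE="$DSTN_DATABASE.dbo.$Table"
    echo "Table : $TABLE"
    # </dev/null keeps sqlcmd from reading the rest of $TABLE_LIST
    sqlcmd -m 1 -U "$DSTN_USER" -P "$DSTN_PASSWORD" -D -S "$DSTN_SERVER" \
        -Q "truncate table $TABLE;" </dev/null
    if [ $? -ne 0 ]
    then
        echo "Error $TABLE"
    fi
done < "$TABLE_LIST"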