Bash ending the variable after first space?

I'm building a string in Bash in a loop. The data processed by the loop comes from the lines of two files, which look like this:
The FIRST part of the string is a line in file 1, looking like:
SOME_PACKAGE
The SECOND part is a line of file 2, looking like:
someFunction('some',parameters,here)
The final output has a dot between the two strings:
1 SOME_PACKAGE.someFunction('some',parameters,here)
The 1 is important here. Explanation in a second.
The string is formed in a double while loop
while read line1 ; do
while read line2 ; do
stringArray=($line2)
string=$line1.${stringArray[1]}
sqlplus -s /nolog > /dev/null 2>&1 <<EOF
connect user/password@db_instance
variable rc refcursor;
SPOOL ${line1}_${stringArray[0]}.DATA
exec :rc := $string;
print rc;
spool off
exit
EOF
done < file2.txt
done < file1.txt
This string is then passed to SQL*Plus, and SQL*Plus should execute a command like this:
SQL> variable rc refcursor;
SQL> exec :rc := SOME_PACKAGE.someFunction('some',parameters,here);
SQL> print rc;
Until now, everything was working fine. But then I got more complicated parameters for someFunction. Now it looks like this:
SOME_PACKAGE.someFunction('some',parameters,here,'and 2 more',NULL)
It seems that the variable passed to SQL*Plus ends on the first space... So it looks like:
SOME_PACKAGE.someFunction('some',parameters,here,'and
From what I know, I shouldn't pass spaces in variables, or if I want to, I should wrap them in quotes: "". But where should I put those quotes so that the final variable is passed to SQL*Plus WITHOUT them? Or what other solution do you propose?

The answer was simple, thanks to @arco444.
The reason all this happened was the internal field separator (IFS), which was set to the default.
What I did was the following:
I've changed the look of file2 from
1 SOME_PACKAGE.someFunction('some',parameters,here,'and 2 more',NULL)
to
1§SOME_PACKAGE.someFunction('some',parameters,here,'and 2 more',NULL)
And I added the IFS changes before and after the loop, so the final code looks like this:
oldifs=$IFS
IFS="§"
while read line1 ; do
while read line2 ; do
stringArray=($line2)
string=$line1.${stringArray[1]}
sqlplus -s /nolog > /dev/null 2>&1 <<EOF
connect user/password@db_instance
variable rc refcursor;
SPOOL ${line1}_${stringArray[0]}.DATA
exec :rc := $string;
print rc;
spool off
exit
EOF
done < file2.txt
done < file1.txt
IFS=$oldifs
Everything is working like a charm now.
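For reference, here is a minimal demo of the splitting behaviour that caused this (hypothetical values, not from the original files): the unquoted array assignment splits the line on every character in IFS.
line2="1 SOME_PACKAGE.someFunction('a','and 2 more',NULL)"
arr=($line2)       # with the default IFS this splits on every space
echo ${#arr[@]}    # prints 4
IFS="§"
line2="1§SOME_PACKAGE.someFunction('a','and 2 more',NULL)"
arr=($line2)       # now it splits only on §
echo ${#arr[@]}    # prints 2
IFS=$' \t\n'       # restore the default IFS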

Related

convert oracle refcursor to text in unix script

This is a continuation of the post below. I am able to return data from the Oracle stored procedure to a Unix script.
Fetch data from Oracle SP Out Param SYS_REFCURSOR in Unix Korn Shell Script
But while looping through the records I don't get the expected result. Below is the code. Before the variable table is printed I am getting an error "cannot open".
weeknum=$1
#read ref cursor from proc
cur=`sqlplus -s $connection <<EOF
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF ECHO OFF
var return_val refcursor
exec WEEKLYLOAD($weeknum, :return_val);
print return_val
EXIT
EOF`
print "done"
table=""
while read -r line
do
$table=$$table"\n"$line
done < $cur
You are trying to redirect input from your cur variable, but the form you are using looks for a file whose name is the first word in $cur, rather than the entire contents of that variable. The error you see will be the first word in the first column of the first row of the ref cursor opened by your procedure.
So if your ref cursor was opened for a query that, say, produced three rows of output with value A, B and C it would try to read input from a file called A, and report cannot open (unless a file called that happened to exist in the current working directory).
You can echo the variable and pipe it instead:
echo "$cur" | while read -r line
do
table=$table"\n"$line
done
I've removed the extra $ symbols from the assignment. But this doesn't look particularly useful; with the same three-row result as above, $table would end up as:
\nA\nB\nC
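One caveat I would add here (not part of the original answer): because the while loop is on the right-hand side of a pipe, it runs in a subshell, so $table is empty again once the loop finishes. Feeding the loop with a here-string keeps it in the current shell:
table=""
while read -r line
do
table=$table"\n"$line
done <<< "$cur"    # here-string instead of a pipe, so $table survives the loop
echo -e "$table"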
If you just want to print the contents of $cur to the console you can use one of these (or others):
echo "$cur"
printf "%s\n" "$cur"
which both produce
A
B
C

Pass values read from a file as input to an SQL query in Oracle

#cat file.txt
12354
13456
13498
#!/bin/bash
for i in `cat file.txt`
do
sqlplus XXXXX/XXXXX@DB_NAME << EOF
select *from TABLE_NAME where id="$i"
EOF
done
This is not working for me. Please help me figure out how to solve this.
The solution given by @codeforester works. However, I was unable to use it because it creates as many DB connections as there are lines in the file, which is a potential performance impact.
To overcome this, I chose the solution below, which may not be ideal but does the job with just one DB connection.
Considering the same data in file.txt
12354
13456
13498
I used the sed command below to join the values above into a single variable, "12354,13456,13498":
myvariable=$(echo "`cat file.txt | sed '$!s/$/,/g' | tr -d '\n' | tr -d ' '`")
Now below script will pass this variable to the SQL query and spool the data into a text file:
#!/bin/bash
myvariable=$(echo "`cat file.txt | sed '$!s/$/,/g' | tr -d '\n' | tr -d ' '`")
echo $myvariable
sqlplus /nolog << EOF
CONNECT user/dbpassword@dbname
SPOOL dboutput.txt
select column1 from table_name where id in ($myvariable);
SPOOL OFF
EOF
The output is stored in dboutput.txt (along with the SQL query)
cat dboutput.txt
SQL> select column1 from table_name where id in (12354,13456,13498);
NAME
---------------------------------------------------------------------------- ----
data1
data2
data3
SQL> spool off
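As an aside (my suggestion, not part of the original answer), the same comma-joined variable can be built with a single paste call, assuming file.txt really does contain one id per line:
myvariable=$(paste -sd, file.txt)    # joins the lines with commas: 12354,13456,13498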
Here is the right way to use the heredoc <<, along with the choice of while read instead of for to read the file:
#!/bin/bash
# make sure the heredoc terminator "EOF" is not indented
while read -r value; do
sqlplus xxxxx/xxxxx@db_name << EOF
select * from table_name where id='$value';
EOF
done < file.txt
See also:
How can I write a here doc to a file in Bash script?
BashFAQ/001, to understand why a for loop is not the best way to read text lines from a file (see the quick illustration below).
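To illustrate the BashFAQ/001 point quickly (a hedged sketch with a hypothetical demo.txt): for iterates over whitespace-separated words, while read -r keeps each line intact.
printf 'first line\nsecond line\n' > demo.txt
for i in $(cat demo.txt); do echo "for: $i"; done           # 4 iterations, one per word
while read -r line; do echo "read: $line"; done < demo.txt  # 2 iterations, one per line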

delete rows from database using a shell script

The script below gives me an error. Basically, I am trying to delete the records that I got from the first query. I have put them in a text file, formatted them, and used them in the delete operation.
After executing the script I am getting the below error:
: line 5: syntax error at line 27: `<<' unmatched
Can't tell because the code you dumped is unformatted, but my first guess would be you have leading spaces in front of the EOF in your here document.
This should work (note that there are no leading spaces in front of the EOF):
sqlplus -s $dbcreds << EOF > output.txt
SET SERVEROUTPUT OFF
select empname from emp where dept_no=123;
EOF
if [ -s "output.txt" ]
then
echo " Found the below employees....Deleting them from Database ..............!!!! \n"
cat output.txt
sed "s/(.*)/'\1'/" output.txt| tr '\n' ','|sed 's/.$//' >final_employees.txt
while read line
do
sqlplus -s $dbcreds <<EOF
SET SERVEROUTPUT OFF
Delete from emp where empname in ($line);
EOF
done < final_employees.txt
else
echo " No employees found....!!!"
fi
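For clarity (my own illustration with invented employee names), this is what the sed/tr pipeline above does to output.txt:
printf 'alice\nbob\ncarol\n' > output.txt
sed "s/\(.*\)/'\1'/" output.txt | tr '\n' ',' | sed 's/.$//'
# prints: 'alice','bob','carol'
One caveat I would add: tr also strips the final newline, so the while read loop may skip that single line unless you write it as while read line || [ -n "$line" ].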

Line error in shell script

I have the following code in a shell script.
This only seems to work when it is not defined in a function.
The problematic line is the one containing the "<<".
The error message is:
"./run: line 210: syntax error: unexpected end of file"
How can I write this correctly within a function?
init_database()
{
cd ../cfg
db.sh << ENDC
$DB_ADMIN
0
y
n
ENDC
check_status
sqlplus $DB_SCHEMA@$DB_NAME < initial_data.sql
cd -
}
There are a number of ways to fix that problem.
1/ Unindent the here document end marker, such as:
cat <<EOF
hello
$PWD
EOF
but that will make your code look ugly.
2/ "Indent" the here document begin marker:
cat <<' EOF'
hello
$PWD
 EOF
where the bit before the first EOF is exactly the same as the bit before the second (tab, four spaces, two tabs, whatever). This lets you keep your nice indenting, although, because the delimiter is quoted, variables are not expanded inside the here-document ($PWD is printed literally).
3/ Allow tabs to be stripped from the start of input lines and the end marker.
cat <<-EOF
hello
$PWD
EOF
but there's no way to get tabs into the beginnings of lines.
4/ For your purposes, you can also use:
( echo "$DB_ADMIN";
echo "" ;
echo "0" ;
echo "y" ;
echo "n"
) | db.sh
check_status
sqlplus $DB_SCHEMA@$DB_NAME < initial_data.sql
cd -
I believe number 4 is the best option for you. It allows nice lining up of the input, tabs and spaces anywhere in the lines and variable expansion.
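For completeness, applying option 3 to the function from the question would look something like this (my adaptation, not from the original answers; the indentation inside the heredoc and before ENDC must be real tab characters, since <<- only strips tabs):
init_database()
{
	cd ../cfg
	db.sh <<-ENDC
		$DB_ADMIN
		0
		y
		n
	ENDC
	check_status
	sqlplus $DB_SCHEMA@$DB_NAME < initial_data.sql
	cd -
}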
The end of your "Here document" needs to be unindented, I'm afraid.
The ENDC label must be alone on a line, without leading or trailing whitespace.

Capturing multiple line output into a Bash variable

I've got a script 'myscript' that outputs the following:
abc
def
ghi
In another script, I call:
declare RESULT=$(./myscript)
and $RESULT gets the value
abc def ghi
Is there a way to store the result either with the newlines, or with '\n' character so I can output it with 'echo -e'?
Actually, RESULT contains what you want — to demonstrate:
echo "$RESULT"
What you show is what you get from:
echo $RESULT
As noted in the comments, the difference is that (1) the double-quoted version of the variable (echo "$RESULT") preserves internal spacing of the value exactly as it is represented in the variable — newlines, tabs, multiple blanks and all — whereas (2) the unquoted version (echo $RESULT) replaces each sequence of one or more blanks, tabs and newlines with a single space. Thus (1) preserves the shape of the input variable, whereas (2) creates a potentially very long single line of output with 'words' separated by single spaces (where a 'word' is a sequence of non-whitespace characters; there needn't be any alphanumerics in any of the words).
Another pitfall with this is that command substitution — $() — strips trailing newlines. Probably not always important, but if you really want to preserve exactly what was output, you'll have to use another line and some quoting:
RESULTX="$(./myscript; echo x)"
RESULT="${RESULTX%x}"
This is especially important if you want to handle all possible filenames (to avoid undefined behavior like operating on the wrong file).
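For instance (a hedged example with a hypothetical function whose output ends in blank lines), the difference is visible when you count characters:
myscript() { printf 'abc\n\n\n'; }           # output ends with two empty lines
PLAIN="$(myscript)"                          # command substitution strips all trailing newlines
RESULTX="$(myscript; echo x)"
RESULT="${RESULTX%x}"                        # trailing newlines preserved
echo "${#PLAIN} vs ${#RESULT}"               # prints: 3 vs 6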
In case you're interested in specific lines, use a result array:
declare RESULT=($(./myscript)) # (..) = array
echo "First line: ${RESULT[0]}"
echo "Second line: ${RESULT[1]}"
echo "N-th line: ${RESULT[N]}"
In addition to the answer given by @l0b0, I just had the situation where I needed to both keep any trailing newlines output by the script and check the script's return code.
And the problem with l0b0's answer is that the 'echo x' was resetting $? back to zero... so I managed to come up with this very cunning solution:
RESULTX="$(./myscript; echo x$?)"
RETURNCODE=${RESULTX##*x}
RESULT="${RESULTX%x*}"
Parsing multi-line output
Introduction
So your myscript outputs 3 lines; it could look like:
myscript() { echo $'abc\ndef\nghi'; }
or
myscript() { local i; for i in abc def ghi ;do echo $i; done ;}
OK, this is a function, not a script (no need for the ./ path), but the output is the same:
myscript
abc
def
ghi
Considering result code
To check for result code, test function will become:
myscript() { local i;for i in abc def ghi ;do echo $i;done;return $((RANDOM%128));}
1. Storing multi-line output in one single variable, showing newlines
Your operation is correct:
RESULT=$(myscript)
To capture the result code, you could add:
RCODE=$?
even on the same line:
RESULT=$(myscript) RCODE=$?
Then
echo $RESULT $RCODE
abc def ghi 66
echo "$RESULT"
abc
def
ghi
echo ${RESULT@Q}
$'abc\ndef\nghi'
printf '%q\n' "$RESULT"
$'abc\ndef\nghi'
but to show the variable definition, use declare -p:
declare -p RESULT RCODE
declare -- RESULT="abc
def
ghi"
declare -- RCODE="66"
2. Parsing multi-line output into an array, using mapfile
Storing the answer in the myvar variable:
mapfile -t myvar < <(myscript)
echo ${myvar[2]}
ghi
Showing $myvar:
declare -p myvar
declare -a myvar=([0]="abc" [1]="def" [2]="ghi")
Considering result code
In case you have to check for result code, you could:
RESULT=$(myscript) RCODE=$?
mapfile -t myvar <<<"$RESULT"
declare -p myvar RCODE
declare -a myvar=([0]="abc" [1]="def" [2]="ghi")
declare -- RCODE="40"
3. Parsing multi-line output with consecutive reads in a command group
{ read firstline; read secondline; read thirdline;} < <(myscript)
echo $secondline
def
Showing variables:
declare -p firstline secondline thirdline
declare -- firstline="abc"
declare -- secondline="def"
declare -- thirdline="ghi"
I often use:
{ read foo;read foo total use free foo ;} < <(df -k /)    # the first read skips the df header line
Then
declare -p use free total
declare -- use="843476"
declare -- free="582128"
declare -- total="1515376"
Considering result code
Same prepended step:
RESULT=$(myscript) RCODE=$?
{ read firstline; read secondline; read thirdline;} <<<"$RESULT"
declare -p firstline secondline thirdline RCODE
declare -- firstline="abc"
declare -- secondline="def"
declare -- thirdline="ghi"
declare -- RCODE="50"
After trying most of the solutions here, the easiest thing I found was the obvious one: using a temp file. I'm not sure what you want to do with your multi-line output, but you can then deal with it line by line using read. About the only thing you can't really do is easily stick it all in the same variable, but for most practical purposes this is way easier to deal with.
./myscript.sh > /tmp/foo
while read line ; do
echo 'whatever you want to do with $line'
done < /tmp/foo
Quick hack to make it do the requested action:
result=""
./myscript.sh > /tmp/foo
while read line ; do
result="$result$line\n"
done < /tmp/foo
echo -e $result
Note this adds an extra line. If you work on it you can code around it, I'm just too lazy.
EDIT: While this case works perfectly well, people reading this should be aware that you can easily squash your stdin inside the while loop, thus giving you a script that will run one line, clear stdin, and exit. Commands like ssh will do that, I think? I just saw it recently; other code examples are here: https://unix.stackexchange.com/questions/24260/reading-lines-from-a-file-with-bash-for-vs-while
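A quick hedged illustration of that stdin pitfall: any command inside the loop that reads stdin (ssh is the classic case) will swallow the remaining lines of the redirected file unless its stdin is detached.
while read -r host; do
ssh -n "$host" uptime    # -n detaches ssh's stdin so it cannot eat the loop's input
done < hosts.txt         # hosts.txt is a hypothetical list of hosts, one per line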
One more time! This time with a different filehandle (stdin, stdout, stderr are 0-2, so we can use &3 or higher in bash).
result=""
./test>/tmp/foo
while read line <&3; do
result="$result$line\n"
done 3</tmp/foo
echo -e $result
You can also use mktemp, but this is just a quick code example. Usage for mktemp looks like:
filenamevar=`mktemp /tmp/tempXXXXXX`
./test > $filenamevar
Then use $filenamevar like you would the actual name of a file. Probably doesn't need to be explained here but someone complained in the comments.
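If you do go the mktemp route, a trap can clean the file up for you (a small sketch using the same hypothetical names):
filenamevar=`mktemp /tmp/tempXXXXXX`
trap 'rm -f "$filenamevar"' EXIT    # remove the temp file when the script exits
./test > $filenamevar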
How about this? It will read each line into a variable that can be used subsequently.
Say the myscript output is redirected to a file called myscript_output:
awk '{while ( (getline var < "myscript_output") >0){print var;} close ("myscript_output");}'
