This is a continuation of the post below. I am able to return data from the Oracle stored procedure to a Unix script.
Fetch data from Oracle SP Out Param SYS_REFCURSOR in Unix Korn Shell Script
But while looping through the records I don't get the expected result. Below is the code. Before the variable table is printed, I get a "cannot open" error.
weeknum=$1
#read ref cursor from proc
cur=`sqlplus -s $connection <<EOF
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF ECHO OFF
var return_val refcursor
exec WEEKLYLOAD($weeknum, :return_val);
print return_val
EXIT
EOF`
print "done"
table=""
while read -r line
do
$table=$$table"\n"$line
done < $cur
You are trying to redirect input from your cur variable, but the form you are using looks for a file whose name is the first word in $cur, rather than using the entire contents of that variable. The file name in the error you see will be the first word in the first column of the first row of the ref cursor opened by your procedure.
So if your ref cursor was opened for a query that, say, produced three rows of output with values A, B and C, it would try to read input from a file called A, and report "cannot open" (unless a file with that name happened to exist in the current working directory).
You can echo the variable and pipe it instead:
echo "$cur" | while read -r line
do
table=$table"\n"$line
done
I've removed the extra $ symbols from the assignment. But this doesn't look particularly useful; with the same three-row result as above, $table would end up as:
\nA\nB\nC
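One caveat with the pipe: in bash (unlike ksh, where the last stage of a pipeline runs in the current shell) the while loop runs in a subshell, so $table would be empty again once the loop finishes. A here-string avoids that; a minimal sketch, assuming your shell is ksh93 or bash:
table=""
while read -r line
do
table=$table"\n"$line
done <<< "$cur"   # the whole variable is fed in, and the loop runs in the current shell
print "$table"    # ksh's print expands the \n escapes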
If you just want to print the contents of $cur to the console you can use one of these (or others):
echo "$cur"
printf "%s\n" "$cur"
which both produce
A
B
C
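If each row of the ref cursor has more than one column and you want the columns in separate variables, read can split each line for you; a small sketch (the column names col1 and col2 are made up):
echo "$cur" | while read -r col1 col2
do
print "first column: $col1, rest of the row: $col2"
done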
Related
I have 2 text files. I want to loop over the first file to get a list, then use that list to loop over the second file and search for matching fields.
The first loop works fine, but when the second loop comes in, the variable $CLIENT_ABBREV reads as blank inside it. The output looks like " does not match DOG", with a blank before "does".
while IFS=',' read CLIENT_ID NAME SERVER_NAME CLIENT_ABBREV
do
    echo "\n------------"
    echo Configuration in effect for this run
    echo CLIENT_ID=$CLIENT_ID
    echo NAME=$NAME
    echo SERVER_NAME=$SERVER_NAME
    echo CLIENT_ABBREV=$CLIENT_ABBREV
    while IFS=',' read JOB_NAME CLIENT_ABBREV_FROMCOMMAND JOBTYPE JOBVER
    do
        if [ "$CLIENT_ABBREV" == "$CLIENT_ABBREV_FROMCOMMAND" ]; then
            :   # do something
        else
            echo $CLIENT_ABBREV does not match $CLIENT_ABBREV_FROMCOMMAND
        fi
    done <"$COMMAND_LIST"
done <"$CLIENT_LIST"
Is there a file with the name COMMAND_LIST?
Or do you actually want to use $COMMAND_LIST instead of COMMAND_LIST?
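In other words, the word after < has to expand to the path of a file that actually exists. A tiny sketch of the inner loop on its own, with a hypothetical path, just to illustrate:
COMMAND_LIST=/path/to/commands.csv   # hypothetical; must point at an existing file
while IFS=',' read -r JOB_NAME CLIENT_ABBREV_FROMCOMMAND JOBTYPE JOBVER
do
echo "job $JOB_NAME belongs to client $CLIENT_ABBREV_FROMCOMMAND"
done < "$COMMAND_LIST"   # without the $, the shell looks for a file literally named COMMAND_LIST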
I have the below output from a text file. It's a long file; I've copied only some rows here.
HP83904B74E6
13569.06
7705.509999999999
HP4DC2EECAA8
4175.1
2604.13
And I want to print it like below.
HP83904B74E6 13569.06 7705.509999999999
HP4DC2EECAA8 4175.1 2604.13
I have tried reading the file line by line using a while loop, storing each value in a variable like variablename$i so I can refer to it as variablename0, and after every 3 lines using an if statement to print variablename0, variablename1 and variablename2, but it did not work for me.
Use pr:
$ pr -a3t tmp.txt
HP83904B74E6 13569.06 7705.509999999999
HP4DC2EECAA8 4175.1 2604.13
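If pr isn't available, paste can do the same three-lines-at-a-time grouping; the joined fields come out tab-separated:
$ paste - - - < tmp.txt
HP83904B74E6    13569.06    7705.509999999999
HP4DC2EECAA8    4175.1      2604.13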
Alternatively, reading the file line by line with a while loop, taking three lines per iteration:
while read -r a; do
read -r b;
read -r c;
echo "$a $b $c";
done < file
you get,
HP83904B74E6 13569.06 7705.509999999999
HP4DC2EECAA8 4175.1 2604.13
I'm building a string in Bash in a loop. The data processed by the loop comes from the lines of two files, which look like this:
The FIRST part of the string is a line in file 1, looking like:
SOME_PACKAGE
The SECOND part is a line of file 2, looking like:
someFunction('some',parameters,here)
The final output has a dot between the two strings:
1 SOME_PACKAGE.someFunction('some',parameters,here)
The 1 is important here; explanation in a second.
The string is formed in a double while loop:
while read line1 ; do
while read line2 ; do
stringArray=($line2)
string=$line1.${stringArray[1]}
sqlplus -s /nolog > /dev/null 2>&1 <<EOF
connect user/password@db_instance
variable rc refcursor;
SPOOL ${line1}_${stringArray[0]}.DATA
exec :rc := $string;
print rc;
spool off
exit
EOF
done < file2.txt
done < file1.txt
This string is then passed to SQL*Plus, and SQL*Plus should execute a command like this:
SQL> variable rc refcursor;
SQL> exec :rc := SOME_PACKAGE.someFunction('some',parameters,here);
SQL> print rc;
Until now, everything was working fine. But now I have more complicated parameters in someFunction. It looks like this:
SOME_PACKAGE.someFunction('some',parameters,here,'and 2 more',NULL)
It seems that the variable passed to SQL*Plus ends on the first space... So it looks like:
SOME_PACKAGE.someFunction('some',parameters,here,'and
From what I know I shouldn't pass spaces in variables, or if I want to do this, I should wrap them in quote signs: "". But where should I put those quote signs so that the final variable is passed to SQL*Plus WITHOUT them? Or what other solution do you propose?
The answer was simple, thanks to @arco444.
The reason all this happened was the internal field separator (IFS), which was set to the default.
What I did was the following:
I've changed the look of file2 from
1 SOME_PACKAGE.someFunction('some',parameters,here,'and 2 more',NULL)
to
1§SOME_PACKAGE.someFunction('some',parameters,here,'and 2 more',NULL)
And I added some IFS changes before and after the loop, so the final code looks like this:
oldifs=$IFS
IFS="§"
while read line1 ; do
while read line2 ; do
stringArray=($line2)
string=$line1.${stringArray[1]}
sqlplus -s /nolog > /dev/null 2>&1 <<EOF
connect user/password@db_instance
variable rc refcursor;
SPOOL ${line1}_${stringArray[0]}.DATA
exec :rc := $string;
print rc;
spool off
exit
EOF
done < file2.txt
done < file1.txt
IFS=$oldifs
Everything is working like a charm now.
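An alternative that avoids changing the global IFS at all is to set it only for the read itself, so nothing else in the script is affected. A sketch of the same double loop, assuming file2.txt still uses § as the separator (num and func are names I made up for the two fields, and the echo lines stand in for the sqlplus here-document):
while read -r line1 ; do
while IFS='§' read -r num func ; do
string=$line1.$func
echo "spool target : ${line1}_${num}.DATA"
echo "would execute: exec :rc := $string"
done < file2.txt
done < file1.txt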
I am trying to execute a SQL query on SYBASE database using shell script.
A simple query to count the number of rows in a table.
#!/bin/sh
[ -f /etc/bash.bashrc.local ] && . /etc/bash.bashrc.local
. /gi/base_environ
. /usr/gi/bin/environ
. /usr/gi/bin/path
ISQL="isql <username> guest"
count() {
VAL=$( ${ISQL} <<EOSQL
set nocount on
go
set chained off
go
select count(*) from table_name
go
EOSQL
)
echo "VAL : $VAL"
echo $VAL | while read line
do
echo "line : $line"
done
}
count
The above code gives the following output:
VAL : Password:
-----------
35
line : Password: ----------- 35
Is there a way to get only the value '35'? What am I missing here? Thanks in advance.
The "select count(*)" prints a result set as output, i.e. a column header (here, that's blank), a line of dashes for each column, and the column value for every row. Here you have only 1 column and 1 row.
If you want to get rid of the dashes, you can do various things:
- select the count(*) into a variable and just PRINT the variable; this will remove the dashes from the output
- perform some additional filtering with things like grep and awk on the $VAL variable before using it
As for the 'Password:' line: you are not specifying a password in the 'isql' command, so 'isql' will prompt for it (since it works, it looks like there is no password). Best specify a password flag to avoid this prompt -- or filter out that part as mentioned above.
Incidentally, it looks like you may be using the 'isql' from the Unix/Linux ODBC installation, rather than the 'isql' utility that comes with Sybase. Best use the latter (check with 'which isql').
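As a rough sketch of the filtering approach (inside the count function, after VAL has been captured; it assumes the count is the only line that consists of a single all-digit field):
# quoting $VAL keeps the line breaks; awk keeps only the line that is just a number
COUNT=$(echo "$VAL" | awk 'NF == 1 && $1 ~ /^[0-9]+$/ { print $1 }')
echo "COUNT : $COUNT"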
I've got a script 'myscript' that outputs the following:
abc
def
ghi
in another script, I call:
declare RESULT=$(./myscript)
and $RESULT gets the value
abc def ghi
Is there a way to store the result either with the newlines, or with '\n' character so I can output it with 'echo -e'?
Actually, RESULT contains what you want — to demonstrate:
echo "$RESULT"
What you show is what you get from:
echo $RESULT
As noted in the comments, the difference is that (1) the double-quoted version of the variable (echo "$RESULT") preserves internal spacing of the value exactly as it is represented in the variable — newlines, tabs, multiple blanks and all — whereas (2) the unquoted version (echo $RESULT) replaces each sequence of one or more blanks, tabs and newlines with a single space. Thus (1) preserves the shape of the input variable, whereas (2) creates a potentially very long single line of output with 'words' separated by single spaces (where a 'word' is a sequence of non-whitespace characters; there needn't be any alphanumerics in any of the words).
Another pitfall with this is that command substitution — $() — strips trailing newlines. Probably not always important, but if you really want to preserve exactly what was output, you'll have to use another line and some quoting:
RESULTX="$(./myscript; echo x)"
RESULT="${RESULTX%x}"
This is especially important if you want to handle all possible filenames (to avoid undefined behavior like operating on the wrong file).
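A quick way to convince yourself the trick works (the printf is just a stand-in for your script, and od -c is only there to make the trailing newlines visible):
RESULTX="$(printf 'abc\n\n\n'; echo x)"
RESULT="${RESULTX%x}"
printf '%s' "$RESULT" | od -c   # shows a b c followed by three \n characters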
In case that you're interested in specific lines, use a result-array:
declare RESULT=($(./myscript)) # (..) = array
echo "First line: ${RESULT[0]}"
echo "Second line: ${RESULT[1]}"
echo "N-th line: ${RESULT[N]}"
In addition to the answer given by @l0b0, I just had a situation where I needed to both keep any trailing newlines output by the script and check the script's return code.
And the problem with l0b0's answer is that the 'echo x' was resetting $? back to zero... so I managed to come up with this very cunning solution:
RESULTX="$(./myscript; echo x$?)"
RETURNCODE=${RESULTX##*x}
RESULT="${RESULTX%x*}"
Parsing multiple output
Introduction
So your myscript outputs 3 lines; it could look like:
myscript() { echo $'abc\ndef\nghi'; }
or
myscript() { local i; for i in abc def ghi ;do echo $i; done ;}
OK, this is a function, not a script (no need for the ./ path), but the output is the same:
myscript
abc
def
ghi
Considering result code
To check the result code, the test function becomes:
myscript() { local i;for i in abc def ghi ;do echo $i;done;return $((RANDOM%128));}
1. Storing multiple output in a single variable, showing newlines
Your operation is correct:
RESULT=$(myscript)
For the result code, you could add:
RCODE=$?
or even on the same line:
RESULT=$(myscript) RCODE=$?
Then
echo $RESULT $RCODE
abc def ghi 66
echo "$RESULT"
abc
def
ghi
echo ${RESULT@Q}
$'abc\ndef\nghi'
printf '%q\n' "$RESULT"
$'abc\ndef\nghi'
but to show the variable definition, use declare -p:
declare -p RESULT RCODE
declare -- RESULT="abc
def
ghi"
declare -- RCODE="66"
2. Parsing multiple output into an array, using mapfile
Storing the answer in the myvar variable:
mapfile -t myvar < <(myscript)
echo ${myvar[2]}
ghi
Showing $myvar:
declare -p myvar
declare -a myvar=([0]="abc" [1]="def" [2]="ghi")
Considering result code
In case you have to check the result code, you could:
RESULT=$(myscript) RCODE=$?
mapfile -t myvar <<<"$RESULT"
declare -p myvar RCODE
declare -a myvar=([0]="abc" [1]="def" [2]="ghi")
declare -- RCODE="40"
3. Parsing multiple output with consecutive reads in a command group
{ read firstline; read secondline; read thirdline;} < <(myscript)
echo $secondline
def
Showing variables:
declare -p firstline secondline thirdline
declare -- firstline="abc"
declare -- secondline="def"
declare -- thirdline="ghi"
I often use:
{ read foo;read foo total use free foo ;} < <(df -k /)
Then
declare -p use free total
declare -- use="843476"
declare -- free="582128"
declare -- total="1515376"
Considering result code
Same prepended step:
RESULT=$(myscript) RCODE=$?
{ read firstline; read secondline; read thirdline;} <<<"$RESULT"
declare -p firstline secondline thirdline RCODE
declare -- firstline="abc"
declare -- secondline="def"
declare -- thirdline="ghi"
declare -- RCODE="50"
After trying most of the solutions here, the easiest thing I found was the obvious - using a temp file. I'm not sure what you want to do with your multiple line output, but you can then deal with it line by line using read. About the only thing you can't really do is easily stick it all in the same variable, but for most practical purposes this is way easier to deal with.
./myscript.sh > /tmp/foo
while read line ; do
echo 'whatever you want to do with $line'
done < /tmp/foo
Quick hack to make it do the requested action:
result=""
./myscript.sh > /tmp/foo
while read line ; do
result="$result$line\n"
done < /tmp/foo
echo -e $result
Note this adds an extra line. If you work on it you can code around it, I'm just too lazy.
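If the stray blank line matters, one way to code around it is to add the \n only between lines instead of after every line; a small sketch:
result=""
./myscript.sh > /tmp/foo
while read line ; do
result="${result}${result:+\n}${line}"   # \n only in front of the second line onwards
done < /tmp/foo
echo -e "$result"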
EDIT: While this case works perfectly well, be aware that commands run inside the while loop can easily consume your stdin, giving you a script that handles one line, clears stdin, and exits; ssh will do that, I think. I just saw it recently; other code examples are here: https://unix.stackexchange.com/questions/24260/reading-lines-from-a-file-with-bash-for-vs-while
One more time! This time with a different filehandle (stdin, stdout, stderr are 0-2, so we can use &3 or higher in bash).
result=""
./test>/tmp/foo
while read line <&3; do
result="$result$line\n"
done 3</tmp/foo
echo -e $result
You can also use mktemp; the above was just a quick code example. Usage for mktemp looks like:
filenamevar=`mktemp /tmp/tempXXXXXX`
./test > $filenamevar
Then use $filenamevar like you would the actual name of a file. Probably doesn't need to be explained here but someone complained in the comments.
How about this: it will read each line into a variable, and that variable can then be used subsequently.
Say the myscript output is redirected to a file called myscript_output:
awk 'BEGIN { while ((getline var < "myscript_output") > 0) { print var } close("myscript_output") }'