How to extract the Sybase SQL query output in a shell script

I am trying to execute a SQL query on a Sybase database using a shell script.
A simple query to count the number of rows in a table.
#!/bin/sh
[ -f /etc/bash.bashrc.local ] && . /etc/bash.bashrc.local
. /gi/base_environ
. /usr/gi/bin/environ
. /usr/gi/bin/path

ISQL="isql <username> guest"

count() {
    VAL=$( ${ISQL} <<EOSQL
set nocount on
go
set chained off
go
select count(*) from table_name
go
EOSQL
)
    echo "VAL : $VAL"
    echo $VAL | while read line
    do
        echo "line : $line"
    done
}

count
The above code gives the following output:
VAL : Password:
-----------
35
line : Password: ----------- 35
Is there a way to get only the value '35'? What am I missing here? Thanks in advance.

The select count(*) prints a result set as output, i.e. a column header (blank here), a line of dashes for each column, and the column value for every row. Here you have only one column and one row.
If you want to get rid of the dashes, you can do various things:
select the count(*) into a variable and just PRINT the variable. This will remove the dashes from the output
perform some additional filtering with things like grep and awk on the $VAL variable before using it
As for the 'Password:' line: you are not specifying a password in the 'isql' command, so 'isql' will prompt for it (since it works, it looks like there is no password). It is best to specify the password on the command line (Sybase isql's -P flag) to avoid this prompt -- or filter out that part as mentioned above.
Incidentally, it looks like you may be using the 'isql' from the Unix/Linux ODBC installation, rather than the 'isql' utility that comes with Sybase. Best use the latter (check with 'which isql').
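For what it's worth, a minimal sketch combining the last two points, assuming the Sybase isql utility with its -U/-P/-S flags (server name and credentials are placeholders, as in the question); the awk filter keeps only the line that is purely a number:

#!/bin/sh
# sketch: supply the password with -P so isql does not prompt for it
# (empty here, since the question suggests there is no password),
# then keep only the purely numeric line, which is the count
VAL=$( isql -U <username> -P "" -S <server> <<EOSQL
set nocount on
go
select count(*) from table_name
go
EOSQL
)
COUNT=$(echo "$VAL" | awk '/^[[:space:]]*[0-9]+[[:space:]]*$/ { print $1 }')
echo "count : $COUNT"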

Related

Replace an array of strings (passed as an argument to a script) in an HQL file using a Bash shell script?

I have a script which accepts 3 arguments $1 $2 $3,
but $3 is an array-like value ("2018" "01"),
so I am executing my script as:
sh script.sh Employee IT "2018 01"
and there is an HQL file (emp.hql) in which I want to replace my partition columns with the values of the passed array, like below:
"select deptid, employee_name from {TBL_NM} where year={par_col[i]} and month={par_col[i]}"
so below is the code I have tried :
Table=$1
dept=$2
Par_cols=($3)
for i in "${par_cols[@]}" ; do
    sed -i "/${par_col[i]}/${par_col[i]}/g" /home/hk/emp.hql
done
Error :
sed: -e expression #1, char 0: no previous regular expression
sed: -e expression #2, char 0: no previous regular expression
But I think my logic to replace the partition columns is wrong; could you please help me with this?
Desired Output in HQL file :
select deptid ,employee_name from employee where year=2018 and month=01
It is a little bit related to the question below:
Shell script to find, search and replace array of strings in a file
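For what it's worth, the immediate error comes from the sed expression: it is missing the leading s, and ${par_col[i]} is never set (note the case mismatch with Par_cols), so sed sees something like //…//g and complains about an empty "previous" regular expression. A minimal sketch of a working loop, assuming emp.hql uses numbered placeholders such as {par_col[1]} and {par_col[2]} (that naming is an assumption; adjust to whatever the file actually contains):

#!/bin/bash
table=$1
dept=$2
par_cols=($3)                       # "2018 01" becomes par_cols[0]=2018, par_cols[1]=01

for i in "${!par_cols[@]}"; do      # loop over array indices, not values
    n=$((i + 1))
    # sed needs the s command: s/pattern/replacement/g
    sed -i "s/{par_col\[$n\]}/${par_cols[$i]}/g" /home/hk/emp.hql
done
sed -i "s/{TBL_NM}/$table/g" /home/hk/emp.hql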

convert oracle refcursor to text in unix script

This is a continuation of the post below. I am able to return data from the Oracle stored procedure to the Unix script.
Fetch data from Oracle SP Out Param SYS_REFCURSOR in Unix Korn Shell Script
But while looping through the records I don't get the expected result. Below is the code. Before the variable table is printed I am getting an error "cannot open".
weeknum=$1
#read ref cursor from proc
cur=`sqlplus -s $connection <<EOF
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF ECHO OFF
var return_val refcursor
exec WEEKLYLOAD($weeknum, :return_val);
print return_val
EXIT
EOF`
print "done"
table=""
while read -r line
do
$table=$$table"\n"$line
done < $cur
You are trying to redirect input from your cur variable, but the form you are using looks for a file named after the first word in $cur, rather than reading the entire contents of that variable. The error you see will contain the first word in the first column of the first row of the ref cursor opened by your procedure.
So if your ref cursor was opened for a query that, say, produced three rows of output with values A, B and C, it would try to read input from a file called A and report cannot open (unless a file with that name happened to exist in the current working directory).
You can echo the variable and pipe it instead:
echo "$cur" | while read -r line
do
table=$table"\n"$line
done
I've removed the extra $ symbols from the assignment. But this doesn't look particularly useful; with the same three-row result as above, $table would end up as:
\nA\nB\nC
If you just want to print the contents of $cur to the console you can use one of these (or others):
echo "$cur"
printf "%s\n" "$cur"
which both produce
A
B
C
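If the rows do need to end up in a variable, bear in mind that in bash the while at the end of a pipeline runs in a subshell, so assignments made inside it are lost when the loop ends (ksh runs the last pipeline stage in the current shell, so the echo | while form happens to work there). A here-string, available in bash and ksh93, sidesteps the issue; a small sketch:

table=""
while read -r line
do
    table="$table\n$line"       # literal \n, as in the loop above
done <<< "$cur"

printf "%b\n" "$table"          # %b expands the \n escapes when printing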

Parsing through a CSV file

I have a CSV file like this:
2015-12-10,22:45:00,205,5626,85
2015-12-10,23:00:01,79,5625,85
2015-12-13,13:00:01,4410,5629,85
2015-12-13,13:15:00,4244,5627,85
2015-12-13,13:30:00,4082,5627,85
I tried this script to generate an SQL statement:
#!/bin/bash
inputfile=${1}
echo $inputfile
OLDIFS=$IFS
IFS=,
while read date time current full cycle
do
    echo --$date --$time --$current --$full --$cycle
    echo insert into table values($date,$time,$current,$full,$cycle)
    sleep 1
done < $inputfile
IFS=$OLDIFS
But on execution I get this error and it doesn't run as expected:
/Scripts/CreateSql.sh: line 10: syntax error near unexpected token `('
/Scripts/CreateSql.sh: line 10: `echo insert into table values($date,$time,$current,$full,$cycle)'
I need the statement generated like this:
insert into table values($date,$time,$current,$full,$cycle)
Please kindly suggest a fix for this.
Use double quotes, because to the shell an unquoted ( means spawn a new process (a subshell), so it cannot appear bare in the middle of a command.
echo "insert into table values($date,$time,$current,$full,$cycle)"
All,
I fixed this:
echo 'insert into table values ('$date','$time','$current','$full','$cycle')'
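Note that most databases will also reject the unquoted date and time literals in the generated statement. A sketch that quotes them, assuming the first two columns should be string literals and the remaining three numeric (printf keeps the shell quoting readable, and setting IFS only for the read avoids having to save and restore it):

#!/bin/bash
inputfile=${1}

# emit one INSERT per CSV line, quoting the date and time values
while IFS=, read -r date time current full cycle
do
    printf "insert into table values('%s','%s',%s,%s,%s);\n" \
        "$date" "$time" "$current" "$full" "$cycle"
done < "$inputfile"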

How can I select a sqlite column with multiple lines in bash?

I have a sqlite database table with three columns that is storing Name, Location, and Notes. It appears that everything is stored correctly, as when using the sqlite command line I see the correct number of columns and the data is grouped correctly.
The problem comes when using a bash script (this is a requirement) to access the data. The "Notes" column stores data that can potentially be multiple lines (with newlines and such). When I query this table, using something like the following:
stmt="Select name, location, notes from t1"
sqlite3 db "$stmt" | while read ROW;
do
    name=`echo $V_ROW | awk '{split($0,a,"|"); print a[1]}'`
    location=`echo $V_ROW | awk '{split($0,a,"|"); print a[2]}'`
    notes=`echo $V_ROW | awk '{split($0,a,"|"); print a[3]}'`
done
I end up with everything normal until the first newline character in the notes column. After this, each note line is treated as a new row. What would be the correct way to handle this in bash?
Since the data is pipe separated, you can do this (untested): read each line into an array; check the size of the array
if 3 fields, then you have a row from the db, but the notes field may be incomplete. Do something with the previous row, which by now has a complete notes field.
if 1 field found, append the field value to the current notes field.
sqlite3 db "$stmt" | {
    full_row=()
    while IFS='|' read -ra row; do
        if [[ ${#row[@]} -eq 3 ]]; then
            # this line contains all 3 fields
            if [[ ${#full_row[@]} -eq 0 ]]; then
                : # "row" is the first row to be seen, nothing to do here
            else
                name=${full_row[0]}
                location=${full_row[1]}
                notes=${full_row[2]}
                do_something_with "$name" "$location" "$notes"
                #
                # not necessary to use separate vars
                # do_something_with "${full_row[@]}"
            fi
            # then store the current row with incomplete notes
            full_row=( "${row[@]}" )
        else
            # only have notes.
            full_row[2]+=" ${row[0]}"
        fi
    done
    # after the last input line, full_row still holds the final (complete) row
    if [[ ${#full_row[@]} -eq 3 ]]; then
        do_something_with "${full_row[@]}"
    fi
}
You had better take steps to ensure the notes field does not contain your field separator (|).
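An alternative sketch is to neutralize the newlines on the SQLite side, so every row comes back on a single line that can be split directly (replace() and char(10) are standard SQLite functions; this still assumes the notes contain no literal | character):

# have SQLite swap embedded newlines for a visible \n token,
# then each row is one line and splits cleanly on the pipe separator
stmt="select name, location, replace(notes, char(10), '\n') from t1"
sqlite3 db "$stmt" | while IFS='|' read -r name location notes
do
    printf 'name=%s location=%s notes=%s\n' "$name" "$location" "$notes"
done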

Storing CHAR or CLOB sqlplus columns into a shell script variable

I'm having trouble storing column values in shell script variables when they include white space, since the results get split on whitespace instead of on actual column boundaries.
For example, this is what I got now:
set -A SQL_RESULTS_ARRAY `sqlplus -s un/pass@database << EOF
SET ECHO OFF
SET FEED OFF
SET HEAD OFF
SET SPACE 0
SELECT EMAIL_SUBJECT, MAIL_TO FROM EMAIL_TABLE;
EOF`
echo "${SQL_RESULTS_ARRAY[0]}"
echo "${SQL_RESULTS_ARRAY[1]}"
This doesn't work because the value of EMAIL_SUBJECT is an entire sentence, ie "Message subject test", so those echos just end up printing
Message
subject
Instead of
Message subject test
email1@email.com email2@email.com
Basically, how do I end up with only two items in the array (one per column), instead of five items (one per word)? Is this at all possible with a single connection? (I'd rather not start a new connection per column)
EDIT: Another thing: another of my CLOB columns is EMAIL_BODY, which can basically be any text, so I'd rather not have a preset separator, since EMAIL_BODY can contain all sorts of commas, pipes, new lines, etc...
The key you're missing is to set the shell's IFS (internal field separator) to match the separator used in your query results. Here's a ksh session:
$ results="Message subject test,email1@email.com email2@email.com"
$ set -A ary $results
$ for i in 0 1 2 3 4; do print "$i. ${ary[$i]}"; done
0. Message
1. subject
2. test,email1@email.com
3. email2@email.com
4.
$ IFS=,
$ set -A ary $results
$ for i in 0 1 2 3 4; do print "$i. ${ary[$i]}"; done
0. Message subject test
1. email1@email.com email2@email.com
2.
3.
4.
You'll probably want to do something like this:
results=`sqlplus ...`
old_IFS="$IFS"
IFS=,
set -A SQL_RESULTS_ARRAY $results
IFS="$old_IFS"
print "${SQL_RESULTS_ARRAY[0]}"
print "${SQL_RESULTS_ARRAY[1]}"
You may try to set COLSEP and split on its value.
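A sketch of that approach, combining COLSEP with the IFS technique above (COLSEP is a standard SQL*Plus setting; the | separator is arbitrary, and the fixed-width column padding may still leave trailing blanks to trim):

results=`sqlplus -s un/pass@database << EOF
SET ECHO OFF
SET FEED OFF
SET HEAD OFF
SET PAGESIZE 0
SET COLSEP '|'
SELECT EMAIL_SUBJECT, MAIL_TO FROM EMAIL_TABLE;
EOF`

old_IFS="$IFS"
IFS='|'
set -A SQL_RESULTS_ARRAY $results
IFS="$old_IFS"

print "${SQL_RESULTS_ARRAY[0]}"    # Message subject test (plus any column padding)
print "${SQL_RESULTS_ARRAY[1]}"    # email1@email.com email2@email.com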
Try adding double quotes using string concatenation in the select statement. Array elements that are quoted permit white space (at least in bash).
Read up on bash's "Internal Field Separator", $IFS.
It is set to whitespace by default, which may be causing your problem.
