How to read the contents of a file passed as a single argument to a shell script - shell

I want to pass a reference file as an argument to a shell script.
ref.txt contains:
no=1
desc="query 0 "
src_txt="select count(1) from source"
trg_txt="select count(1) from target"
flag=c
no=2
desc="query 1 "
src_txt="select count(1) from source1"
trg_txt="select count(1) from target1"
flag=c
no=3
desc="query 2 "
src_txt="select count(1) from source2"
trg_txt="select count(1) from target2"
flag=c
and so on...
shell command: sh generic.sh $(cat ref.txt)
Inside the shell script I am trying to read the contents of ref.txt:
echo "ref-- " $1
It prints only the first line, i.e. no=1.
If I use $2 and so on, it prints only a few words, not the whole text.
How can I read the entire file into a variable, including newlines, and iterate over the other queries in the ref file in a single pass?
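One way around this (a sketch, assuming the ref.txt layout shown above) is to pass the file name rather than its expanded contents, read the whole file into a variable with command substitution, and iterate line by line:

```shell
#!/bin/sh
# Invoke as: sh generic.sh ref.txt  (pass the file name, not its contents)
ref_file=$1

# Command substitution keeps embedded newlines as long as the variable
# is quoted when used
contents=$(cat "$ref_file")
echo "ref-- $contents"

# Iterate over the entries; each block starts at a "no=" line
while IFS= read -r line; do
    case $line in
        no=*) echo "--- entry ${line#no=} ---" ;;
    esac
    echo "$line"
done < "$ref_file"
```

Alternatively, quoting the substitution in the original invocation -- sh generic.sh "$(cat ref.txt)" -- delivers the whole file as a single $1; unquoted, the shell splits it into one argument per word, which is why $1 held only no=1.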


Replace array of strings (passed as argument to script) in an HQL file using Bash shell script?

I have a script which accepts 3 arguments: $1, $2, $3,
but $3 is an array-like value ("2018" "01"),
so I am executing my script as:
sh script.sh Employee IT "2018 01"
and there is an HQL file (emp.hql) in which I want to replace my partition columns with the array passed, like below:
"select deptid , employee_name from {TBL_NM} where year={par_col[i]} and month={par_col[i]}"
so below is the code I have tried:
Table=$1
dept=$2
Par_cols=($3)
for i in "${par_cols[#]}" ;do
sed -i "/${par_col[i]}/${par_col[i]}/g" /home/hk/emp.hql
done
Error:
sed: -e expression #1, char 0: no previous regular expression
sed: -e expression #2, char 0: no previous regular expression
But I think my logic to replace the partition columns is wrong; could you please help me with this?
Desired output in the HQL file:
select deptid ,employee_name from employee where year=2018 and month=01
Somewhat related to:
Shell script to find, search and replace array of strings in a file
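For reference, a corrected sketch of the loop above. It assumes the template uses distinct placeholders {par_col[0]} and {par_col[1]}, since a single {par_col[i]} cannot distinguish year from month. The sed error arises because the expression starts with / instead of the s/pattern/replacement/g form, and the variable names (Par_cols vs par_cols, [#] vs [@]) don't match:

```shell
#!/bin/bash
table=$1            # e.g. Employee
dept=$2             # e.g. IT
par_cols=($3)       # "2018 01" word-splits into two elements

hql=/home/hk/emp.hql
sed -i "s/{TBL_NM}/$table/g" "$hql"

# "${!par_cols[@]}" expands to the array indices: 0 1 ...
for i in "${!par_cols[@]}"; do
    # s/// is required; the original expression lacked the leading s
    sed -i "s/{par_col\[$i\]}/${par_cols[$i]}/g" "$hql"
done
```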

Bash script printing different output for the same command run from the command line vs. read from a csv file

I have a shell script which reads commands (with arguments) from a csv file and executes them.
The command is /path/ABCD_CALL GI 30-JUN-2010 '' 98994-01
where '' is a pair of single quotes with no space between them.
In the csv file I am using /opt/isis/infosys/src/gq19iobl/IOBL_CALL GI 30-JUN-2010 \'' \'' 98994-01
to escape the single quotes.
Below is the shell script
IFS=","
cat modules.csv | while read line; do
    d="${line}"
    eval "$d"
done
Run interactively, this command prints hundreds of records to the console.
The issue I am facing is: when I type the same command manually and run it from a terminal, I see all the output records; but when it runs from the csv via the shell script above, I get only 1 record, which shows an error array.
I applied debugging using
set -x
trap read debug
There I can see below output
+ cat modules.csv
+ read line
' d='/path/ABCD_CALL GI 30-JUN-2010 '\'''\'' '\'''\'' 98994-01
' eval '/path/ABCD_CALL GI 30-JUN-2010 '\'''\'' '\'''\'' 98994-01
++ /path/ABCD_CALL GI 30-JUN-2010 '' '' $'98994-01\r'
------------- ABCD RESULT SUMMARY -------------
ABCD return message : MESSAGE ARRAY must be checked for ERRORS and WARNINGS. and ABCD returned a 1
Total balances : 0.
Total errors : 1.
error_array[0]
and so on with other details of error.
What should I do to see the same output when reading the same data from csv?
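One observation from the trace above: the last argument arrives as $'98994-01\r', i.e. with a trailing carriage return, which suggests the csv has Windows-style CRLF line endings. Stripping the CR before eval (a sketch, not tested against ABCD_CALL itself) should make the scripted run match the interactive one:

```shell
while IFS= read -r line; do
    line=${line%$'\r'}   # drop a trailing carriage return, if any
    eval "$line"
done < modules.csv
```

Alternatively, convert the file once with dos2unix modules.csv (or tr -d '\r' < modules.csv > clean.csv) before reading it.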

convert oracle refcursor to text in unix script

This is a continuation of the post below. I am able to return data from the Oracle stored procedure to the unix script.
Fetch data from Oracle SP Out Param SYS_REFCURSOR in Unix Korn Shell Script
But while looping through the records I don't get the expected result. Below is the code. Before the variable table is printed I get the error "cannot open".
weeknum=$1
#read ref cursor from proc
cur=`sqlplus -s $connection <<EOF
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF ECHO OFF
var return_val refcursor
exec WEEKLYLOAD($weeknum, :return_val);
print return_val
EXIT
EOF`
print "done"
table=""
while read -r line
do
$table=$$table"\n"$line
done < $cur
You are trying to redirect input from your cur variable, but the form you are using looks for a file named after the first word in $cur, rather than using the entire contents of that variable. The error you see will contain the first word in the first column of the first row of the ref cursor opened by your procedure.
So if your ref cursor was opened for a query that, say, produced three rows of output with values A, B and C, it would try to read input from a file called A and report "cannot open" (unless a file by that name happened to exist in the current working directory).
You can echo the variable and pipe it instead:
echo "$cur" | while read -r line
do
table=$table"\n"$line
done
I've removed the extra $ symbols from the assignment. But this doesn't look particularly useful; with the same three-row result as above, $table would end up as:
\nA\nB\nC
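One aside worth hedging: in ksh (which the linked question uses) the last stage of a pipeline runs in the current shell, so $table survives the loop above. Under bash the loop would run in a subshell and $table would come back empty; a here-string avoids the pipeline entirely:

```shell
table=""
while read -r line
do
    # "\n" here is the two literal characters backslash-n, as in the answer
    table=$table"\n"$line
done <<< "$cur"
```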
If you just want to print the contents of $cur to the console you can use one of these (or others):
echo "$cur"
printf "%s\n" "$cur"
which both produce
A
B
C

Exporting function with xargs parallel & psql in bash

I'm trying to run SQL against one-or-many psql-compatible hosts in parallel, with SQL run in sequence on each host, using xargs.
The bash script, which I'm sourcing from another script:
# Define the count of hosts (also the number of parallel processes)
export pe_fpe_hosts_line_count=$(cat $pe_fpe_hosts_file_loc | wc -l)
# Define the function that runs SQL from a file
function func_pe_exec_sql {
while read pe_sql_file; do
psql -q -t -c "\"$pe_sql_file"\"
done <$pe_fpe_sql_file_loc
}
export -f func_pe_exec_sql
# Define the xargs parallel function
function func_pe_parallel {
while read pe_hosts_file; do
echo $pe_hosts_file | xargs -d '\n' -P $pe_fpe_hosts_line_count func_pe_exec_sql
done <$pe_fpe_hosts_file_loc
}
The error I get: xargs: func_pe_exec_sql: No such file or directory. This is weird - I've exported the function!
Example SQL file:
INSERT INTO public.psql_test SELECT 1 as myint, now() as mytime;
INSERT INTO public.psql_test SELECT 2 as myint, now() as mytime;
INSERT INTO public.psql_test SELECT 3 as myint, now() as mytime;
INSERT INTO public.psql_test SELECT 4 as myint, now() as mytime;
INSERT INTO public.psql_test SELECT 5 as myint, now() as mytime;
Example SQL Host file:
--host=myhost1 --port=5432 --dbname=postgres --username=cooluser
--host=myhost2 --port=5432 --dbname=postgres --username=cooluser
pe_fpe_sql_file_loc is the path to the SQL file, and pe_fpe_hosts_file_loc is the path to the SQL Host file.
The SQL must always be run in separate transactions, and each row in the SQL file needs to be inserted separately, one after another, so that row 5 ends up with the greatest of the mytime values.
I am using it as an ETL framework with functions defined in the database though, and not for simple inserts :)
I think your invocation of xargs is incorrect. You are not actually passing the line from pe_hosts_file to the function func_pe_exec_sql, and xargs execs its command directly rather than through a shell, so it cannot run a shell function by name even when it has been exported; hence "No such file or directory".
You need to pass the piped input to the function through a placeholder, which the -I flag of xargs provides:
-I replace-str
Replace occurrences of replace-str in the initial-arguments with names
read from standard input. Also, unquoted blanks do not terminate input items;
instead the separator is the newline character. Implies -x and -L 1.
Using that, something like the below is needed:
| xargs -d '\n' -I {} -P "$pe_fpe_hosts_line_count" bash -c 'func_pe_exec_sql "{}"'
where {} is the placeholder for the piped value, which is handed to the subshell spawned by bash -c and on to the function func_pe_exec_sql (visible there because it was exported with export -f). The double quotes around {} keep the substituted value together as a single argument.
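A refinement worth considering: substituting {} inside the bash -c string breaks (and is an injection risk) if a host line ever contains quotes or $ characters. A common safer pattern passes the line as a positional parameter instead; sketched here against the file names from the question:

```shell
xargs -d '\n' -I {} -P "$pe_fpe_hosts_line_count" \
    bash -c 'func_pe_exec_sql "$1"' _ {} < "$pe_fpe_hosts_file_loc"
```

Here _ fills $0 of the subshell and the host line arrives untouched as $1, so the shell never re-parses its contents.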

How to extract the sybase sql query output in a shell script

I am trying to execute a SQL query on a SYBASE database using a shell script: a simple query to count the number of rows in a table.
#!/bin/sh
[ -f /etc/bash.bashrc.local ] && . /etc/bash.bashrc.local
. /gi/base_environ
. /usr/gi/bin/environ
. /usr/gi/bin/path
ISQL="isql <username> guest"
count() {
VAL=$( ${ISQL} <<EOSQL
set nocount on
go
set chained off
go
select count(*) from table_name
go
EOSQL
)
echo "VAL : $VAL"
echo $VAL | while read line
do
echo "line : $line"
done
}
count
The above code gives the following output:
VAL : Password:
-----------
35
line : Password: ----------- 35
Is there a way to get only the value '35'? What am I missing here? Thanks in advance.
The "select count(*)" prints a result set as output, i.e. a column header (blank here), a line of dashes for each column, and the column value for every row. Here you have only 1 column and 1 row.
If you want to get rid of the dashes, you can do various things:
select the count(*) into a variable and just PRINT the variable; this removes the dashes from the output
perform some additional filtering with tools like grep and awk on the $VAL variable before using it
As for the 'Password:' line: you are not specifying a password in the 'isql' command, so 'isql' will prompt for it (since it works, it looks like there is no password). Best to specify a password flag to avoid this prompt -- or filter out that part as mentioned above.
Incidentally, it looks like you may be using the 'isql' from the Unix/Linux ODBC installation, rather than the 'isql' utility that comes with Sybase. Best to use the latter (check with 'which isql').
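A sketch of the filtering option above, using the $VAL layout shown in the question; it assumes the count is the only purely numeric token in the isql output:

```shell
VAL='Password:
-----------
         35'

# Keep only the first run of digits: skips "Password:" and the dashes line
COUNT=$(echo "$VAL" | grep -Eo '[0-9]+' | head -1)
echo "$COUNT"   # 35
```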
