How to retrieve multiple rows from a table using a shell script? - oracle

var=`sqlplus -s user/pass <<EOF
set feedback off
set heading off
set pagesize 2000
select * from orders;
exit
EOF`
cnt=${#var[@]}
for (( i=0 ; i<cnt ; i++ ))
do
echo $var[0]
echo $var[1]
done
This is the code I am using but it is always giving me the last row.
Here is the result:
$ sh test.sh
[0] 5 555 c52
[1] 5 555 c52
Here is the table:
ORDER_ID QUANTITY EAN C_ID
--------------- ---------- ---------- --------------------
o1 14 551 c1
o2 14 552 c2
o3 3 553 c3
o4 4 554 c4
o5 5 555 c5

The variable var isn't an array. It's a simple scalar variable containing multiline text, so you can't use a for loop over array indices to retrieve its contents. Try something like this:
~$ var=$(sqlplus -s user/pass@db_tnsalias <<EOF
set feedback off
set heading off
set pagesize 0
select * from orders;
exit
EOF
)
~$ echo "$var" | while read line; do echo $line; done
o1 14 551 c1
o2 4 552 c2
o3 3 553 c3
o4 4 554 c4
o5 5 555 c5
Take note:
double quotes around $var to prevent word splitting and preserve the newlines
pagesize 0 to remove the first empty line
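The quoting point is easy to check in isolation; a minimal sketch with a hand-built multiline variable standing in for the sqlplus output:

```shell
# Unquoted, $var is word-split on IFS, so the newlines collapse into
# single spaces; quoted, the line structure is preserved.
var=$'o1 14 551 c1\no2 14 552 c2'
echo $var     # prints: o1 14 551 c1 o2 14 552 c2
echo "$var"   # prints the two lines as stored
```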

This cannot work because the sql command returns lines which are then fed into the array word by word, so afterwards there is no indication left of where one data row ends and the next begins. Instead of filling an array, I'd pipe the whole output into a while loop which reads from stdin, like so:
{ sqlplus -s system/subho94 <<EOF
set feedback off
set heading off
set pagesize 2000
select * from orders;
exit
EOF
} |
while read orderid quantity ean cid; do
echo $orderid $quantity $ean $cid
done
Note that while reading lines from stdin the shell tokenizes each line by applying IFS, which results in all white space being stripped from the columns, but I think that is what you want anyway.
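That tokenizing behaviour can be demonstrated on its own; a small sketch with a padded sample line:

```shell
# read splits its input on IFS (space/tab/newline by default), so
# runs of padding between columns collapse away.
line='  o1    14   551      c1'
read -r orderid quantity ean cid <<< "$line"
echo "$orderid/$quantity/$ean/$cid"   # prints: o1/14/551/c1
```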

Related

Bash: Output columns from array consisting of two columns

Problem
I am writing a bash script and I have an array, where each value consists of two columns. It looks like this:
for i in "${res[@]}"; do
echo "$i"
done
#Stream1
0 a1
1 b1
2 c1
4 d1
6 e1
#Stream2
0 a2
1 b2
3 c2
4 d2
9 f2
...
I would like to combine the output from this array into a larger table, and multiplex the indices. Furthermore, I would like to format the top row by inserting comment #Sec.
I would like the result to be something like this:
#Sec Stream1 Stream2
0 a1 a2
1 b1 b2
2 c1
3 c2
4 d1 d2
6 e1
9 f2
The insertion of #Sec and removal of the # in front of the Stream keyword are not necessary, but desired if not too difficult.
Tried Solutions
I have tried piping to column and awk, but have not been able to produce the desired results.
EDIT
res is an array in a bash script. It is quite large, so I will only provide a short selection. Running echo "$(typeset -p res)" produces the following output:
declare -a res='([1]="#Stream1
0 3072
1 6144
2 5120
3 1024
5 6144
..." [2]="#Stream2
0 3072
1 5120
2 4096
3 3072
53 3072
55 1024
57 2048")'
As for the 'result', my initial intention was to assign the resulting table to a variable and use it in another awk script to calculate the moving averages for specified indices, and plot the results. This will be done for ~20 different files. However I am open to other solutions.
The number of streams may vary from 10 to 50. Each stream having from 100 to 300 rows.
You may use this awk solution:
cat tabulate.awk
NF == 1 {
h = h OFS substr($1, 2)
++numSec
next
}
{
keys[$1]
map[$1,numSec] = $2
}
END {
print h
for (k in keys) {
printf "%s", k
for (i=1; i<=numSec; ++i)
printf "\t%s", map[k,i]
print ""
}
}
Then use it as:
awk -v OFS='\t' -v h='#Sec' -f tabulate.awk file
#Sec Stream1 Stream2
0 a1 a2
1 b1 b2
2 c1
3 c2
4 d1 d2
6 e1
9 f2
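One caveat: for (k in keys) iterates in an order awk leaves unspecified, so the data rows may come out unsorted. If the order matters, one option is to pass the header through untouched and numerically sort the remaining lines; a minimal sketch:

```shell
# Keep the first line (the header) in place, sort the rest by the
# numeric value of the first column.
printf '#Sec\n10 c\n2 a\n1 b\n' | { IFS= read -r hdr; echo "$hdr"; sort -n; }
```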

How to generate N columns with printf

I'm currently using:
printf "%14s %14s %14s %14s %14s %14s\n" $(cat NFE.txt)>prueba.txt
This reads a list in NFE.txt and generates 6 columns. I need to generate N columns where N is a variable.
Is there a simple way of saying something like:
printf "N*(%14s)\n" $(cat NFE.txt)>prueba.txt
Which generates the desired output?
# T1 is a white string with N blanks
T1=$(printf "%${N}s")
# Replace every blank in T1 with the string "%14s " and assign to T2
T2="${T1// /%14s }"
# Pay attention: T2 contains a trailing blank.
# ${T2% } stands for T2 without the trailing blank
printf "${T2% }\n" $(cat NFE.txt)>prueba.txt
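The substitution trick can be watched step by step; the same idea with N=3:

```shell
N=3
T1=$(printf "%${N}s")     # a string of N blanks
T2="${T1// /%14s }"       # each blank becomes the format "%14s "
echo "${T2% }"            # prints: %14s %14s %14s
printf "${T2% }\n" aa bb cc   # right-aligns each word in 14 columns
```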
You can do this, although I don't know how robust it will be:
$(printf 'printf '; printf '%%14s%0.s' {1..6}; printf '\\n') $(<file)
^
This is your variable number of strings
It prints out the command with the correct number of strings and executes it in a subshell.
Input
10 20 30 40 50 1 0
1 3 45 6 78 9 4 3
123 4
5 4 8 4 2 4
Output
10 20 30 40 50 1
0 1 3 45 6 78
9 4 3 123 4 5
4 8 4 2 4
You could write this in pure bash, but then you could just use an existing language. For example:
printf "$(python -c 'print("%14s "*6)')\n" $(<NFE.txt)
In pure bash, you could write, for example:
repeat() { (($1)) && printf "%s%s" "$2" "$(repeat $(($1-1)) "$2")"; }
and then use that in the printf:
printf "$(repeat 6 "%14s ")\n" $(<NFE.txt)
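A quick self-contained check of such a recursive helper (recursing on repeat itself):

```shell
# repeat N S prints S concatenated N times, via recursion; the base
# case N=0 prints nothing because (($1)) fails and && short-circuits.
repeat() { (($1)) && printf "%s%s" "$2" "$(repeat $(($1-1)) "$2")"; }
repeat 3 ab; echo    # prints: ababab
```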

Store result of query in an array in shell scripting

I want to store row results of a query in array in unix scripting.
I tried this :
array=`sqlplus -s $DB <<eof
select customer_id from customer;
eof`;
When i tried to print it , it shows me this result:
echo ${array[0]};
CUSTOMER_ID ----------- 1 2 51 52 101 102 103 104 105 106 108 11 rows selected.
But I want to store each row as an element, excluding the column name and that "11 rows selected" line.
Thanks in Advance.
To create an array you need this syntax in BASH:
array=($(command))
or:
declare -a array=($(command))
For your sqlplus command:
array=($(sqlplus -s "$DB"<<eof
SET PAGESIZE 0;
select customer_id from customer;
eof))
and then print it as:
printf "%s\n" "${array[@]}"
Just note that it is subject to word splitting on whitespace and all the shell expansions.
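If the rows may contain spaces and should survive untouched, mapfile (bash 4+) reads one array element per line without word splitting; sketched here with printf standing in for the sqlplus call:

```shell
# mapfile -t fills the array with one element per input line,
# stripping each trailing newline.
mapfile -t array < <(printf '1\n2\n51\n52\n')
echo "${#array[@]}"   # prints: 4
echo "${array[2]}"    # prints: 51
```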

How can I compare two 2D-array files with bash?

I have two 2D-array files to read with bash.
What I want to do is extract the elements inside both files.
These two files contain different rows x columns such as:
file1.txt (nx7)
NO DESC ID TYPE W S GRADE
1 AAA 20 AD 100 100 E2
2 BBB C0 U 200 200 D
3 CCC 9G R 135 135 U1
4 DDD 9H Z 246 246 T1
5 EEE 9J R 789 789 U1
.
.
.
file2.txt (mx3)
DESC W S
AAA 100 100
CCC 135 135
EEE 789 789
.
.
.
Here is what I want to do:
Extract the element in DESC column of file2.txt then find the corresponding element in file1.txt.
Extract the W,S elements in such row of file2.txt then find the corresponding W,S elements in such row of file1.txt.
If [W1==W2 && S1==S2]; then echo "${DESC[colindex]} ok"; else echo "${DESC[colindex]} NG"
How can I read this kind of file as a 2D array with bash or is there any convenient way to do that?
bash does not support 2D arrays. You can simulate them by generating 1D array variables like array1, array2, and so on.
Assuming DESC is a key (i.e. has no duplicate values) and does not contain any spaces:
#!/bin/bash
# read data from file1
idx=0
while read -a data$idx; do
let idx++
done <file1.txt
# process data from file2
while read desc w2 s2; do
for ((i=0; i<idx; i++)); do
v="data$i[1]"
[ "$desc" = "${!v}" ] && {
w1="data$i[4]"
s1="data$i[5]"
if [ "$w2" = "${!w1}" -a "$s2" = "${!s1}" ]; then
echo "$desc ok"
else
echo "$desc NG"
fi
break
}
done
done <file2.txt
For brevity, optimizations such as taking advantage of sort order are left out.
If the files actually contain the header NO DESC ID TYPE ... then use tail -n +2 to discard it before processing.
A more elegant solution is also possible, which avoids reading the entire file in memory. This should only be relevant for really large files though.
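The indirect-expansion idiom the loop relies on - ${!v} where v holds a "name[index]" string - can be checked in isolation:

```shell
# ${!v} expands the variable whose name is stored in v; when that
# name is "array[index]", it reads the given array element.
data0=(1 AAA 20 AD 100 100 E2)
v="data0[1]"
echo "${!v}"   # prints: AAA
```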
If the rows order is not needed be preserved (can be sorted), maybe this is enough:
join -2 2 -o 1.1,1.2,1.3,2.5,2.6 <(tail -n +2 file2.txt|sort) <(tail -n +2 file1.txt|sort) |\
sed 's/^\([^ ]*\) \([^ ]*\) \([^ ]*\) \2 \3/\1 OK/' |\
sed '/ OK$/!s/\([^ ]*\) .*/\1 NG/'
For file1.txt
NO DESC ID TYPE W S GRADE
1 AAA 20 AD 100 100 E2
2 BBB C0 U 200 200 D
3 CCC 9G R 135 135 U1
4 DDD 9H Z 246 246 T1
5 EEE 9J R 789 789 U1
and file2.txt
DESC W S
AAA 000 100
CCC 135 135
EEE 789 000
FCK xxx 135
produces:
AAA NG
CCC OK
EEE NG
Explanation:
skip the header line in both files - tail -n +2
sort both files
join the needed columns from both files into one table; only the lines with a common DESC field appear in the result,
like this:
AAA 000 100 100 100
CCC 135 135 135 135
EEE 789 000 789 789
in the lines where column 2 equals column 4 and column 3 equals column 5, replace everything after the 1st column with OK
in the remaining lines, replace everything after the 1st column with NG
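For reference, a minimal illustration of join's default matching and the -o field selection used above, with inline sample data rather than the question's files:

```shell
# join matches lines on the first field of two sorted inputs;
# -o lists the output fields as FILE.FIELD.
join -o 1.1,1.2,2.2 \
  <(printf 'AAA 100\nCCC 135\n') \
  <(printf 'AAA 100\nBBB 200\nCCC 130\n')
# prints:
# AAA 100 100
# CCC 135 130
```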

Cell-wise summation of tables in a linux shell script

I have a set of tables in the following format:
1000 3 0 15 14
2000 3 0 7 13
3000 2 3 14 12
4000 3 1 11 14
5000 1 1 9 14
6000 3 1 13 11
7000 3 0 10 15
They are in simple text files.
I want to merge these files into a new table in the same format, where each cell (X,Y) is the sum of all cells (X,Y) from the original set of tables. One slightly complicating factor is that the numbers from the first column should not be summed, since these are labels.
I suspect this can be done with AWK, but I'm not particularly versed in this language and can't find a solution on the web. If someone suggests another tool, that's also fine.
I want to do this from a bash shell script.
Give this a try:
#!/usr/bin/awk -f
{
for (i=2;i<=NF; i++)
a[$1,i]+=$i
b[$1]=$1
if (NF>maxNF) maxNF=NF
}
END {
n=asort(b,c)
for (i=1; i<=n; i++) {
printf "%s ", b[c[i]]
for (j=2;j<=maxNF;j++) {
printf "%d ", a[c[i],j]
}
print ""
}
}
Run it like this (note that asort is a GNU awk extension):
./sumcell.awk table1 table2 table3
or
./sumcell.awk table*
The output using your example input twice would look like this:
$ ./sumcell.awk table1 table1
1000 6 0 30 28
2000 6 0 14 26
3000 4 6 28 24
4000 6 2 22 28
5000 2 2 18 28
6000 6 2 26 22
7000 6 0 20 30
Sum each line, presuming at least one numeric column on each line.
while read line ; do
label=($line)
printf ${label[0]}' ' ;
expr $(
printf "${label[1]}"
for c in "${label[@]:2}" ; do
printf ' + '$c
done
)
done < table
EDIT: Of course I didn't see the comment about combining based on the label, so this is incomplete.
perl -anE'$h{$F[0]}[$_]+=$F[$_]for 1..4}{say$_,"@{$h{$_}}"for sort{$a<=>$b}keys%h' file_1 file_2
