I want to store the row results of a query in an array in Unix shell scripting.
I tried this:
array=`sqlplus -s $DB <<eof
select customer_id from customer;
eof`;
When I tried to print it, it showed me this result:
echo ${array[0]};
CUSTOMER_ID ----------- 1 2 51 52 101 102 103 104 105 106 108 11 rows selected.
But I want to store each row as an array element, excluding the column name heading and that "11 rows selected" line.
Thanks in advance.
To create an array you need this syntax in BASH:
array=($(command))
or:
declare -a array=($(command))
For your sqlplus command (SET PAGESIZE 0 suppresses the column heading and SET FEEDBACK OFF suppresses the "11 rows selected" message):
array=($(sqlplus -s "$DB" <<eof
SET PAGESIZE 0
SET FEEDBACK OFF
select customer_id from customer;
eof
))
and then print it as:
printf "%s\n" "${array[@]}"
Just note that the unquoted command substitution is subject to word splitting on whitespace (and glob expansion).
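If word splitting is a concern, a more robust sketch (bash 4+) uses mapfile, which stores one line per element and is not affected by splitting or globbing. Here "query_output" is just a stand-in for the sqlplus output:

```shell
# Stand-in for the rows returned by sqlplus, one per line
query_output=$'1\n2\n51\n52'

# mapfile (a.k.a. readarray) fills the array line by line;
# -t strips the trailing newline from each element
mapfile -t array <<< "$query_output"

printf '%s\n' "${array[@]}"
```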
Related
var=`sqlplus -s user/pass <<EOF
set feedback off
set heading off
set pagesize 2000
select * from orders;
exit
EOF`
cnt=${#var[@]}
for (( i=0 ; i<cnt ; i++ ))
do
echo $var[0]
echo $var[1]
done
This is the code I am using but it is always giving me the last row.
Here is the result:
$ sh test.sh
[0] 5 555 c52
[1] 5 555 c52
Here is the table:
ORDER_ID QUANTITY EAN C_ID
--------------- ---------- ---------- --------------------
o1 14 551 c1
o2 14 552 c2
o3 3 553 c3
o4 4 554 c4
o5 5 555 c5
The variable var isn't an array. It's a simple scalar variable containing multiline text, so you can't use a for loop to retrieve its contents element by element. Try something like this:
~$ var=$(sqlplus -s user/pass@db_tnsalias <<EOF
set feedback off
set heading off
set pagesize 0
select * from orders;
exit
EOF
)
~$ echo "$var" | while read line; do echo $line; done
o1 14 551 c1
o2 4 552 c2
o3 3 553 c3
o4 4 554 c4
o5 5 555 c5
Take note:
double quotes around $var to preserve the newlines (without them, word splitting would collapse the output onto one line)
pagesize 0 to remove the first empty line
This cannot work because the sql command returns lines which are then fed into an array word by word. So afterwards there is no indication left where a data row ends and the next begins. Instead of filling an array, I'd pipe the whole output into a while loop which reads from stdin like so:
{ sqlplus -s system/subho94 <<EOF
set feedback off
set heading off
set pagesize 2000
select * from orders;
exit
EOF
} |
while read orderid quantity ean cid; do
echo $orderid $quantity $ean $cid
done
Note that while reading lines from stdin the shell tokenizes each line by applying IFS, which strips all surrounding whitespace from the columns, but I think that is what you want anyway.
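A minimal illustration of that tokenization, using one made-up row in the same shape as the orders table: read splits the line on runs of IFS whitespace, so the column padding disappears.

```shell
# A sample row with irregular column padding
line="o1    14   551  c1"

# read splits on IFS (whitespace by default); -r disables backslash escapes
read -r orderid quantity ean cid <<< "$line"

# The separators have been normalized away
echo "$orderid/$quantity/$ean/$cid"
```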
I have a file like this:
EMP Ename Sal Comm
101 Ravi $800 500
102 Ram $1000 40
103 Shyam $#400 50
I want to convert it to this form:
EMP Ename Sal Comm
101 Ravi 800 500
102 Ram 1000 40
103 Shyam 400 50
The file is tab delimited. I've tried with awk, but unfortunately I couldn't remove the "$#" part from the Sal column.
This should do the trick:
sed -e 's/\$//g' -e 's/\#//g' file > newFile
If you don't need the change to be permanent, you can make it inside a vi session instead:
:%s/\$//g
:%s/\#//g
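An alternative sketch using tr, which deletes both characters in a single pass (the file names follow the sed example above; the sample row is taken from the question):

```shell
# Create a sample input row from the question (tab-delimited)
printf '103\tShyam\t$#400\t50\n' > file

# tr -d deletes every occurrence of the listed characters
tr -d '$#' < file > newFile

cat newFile
```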
I have two 2D-array files to read with bash.
What I want to do is extract the elements inside both files.
These two files contain different rows x columns such as:
file1.txt (nx7)
NO DESC ID TYPE W S GRADE
1 AAA 20 AD 100 100 E2
2 BBB C0 U 200 200 D
3 CCC 9G R 135 135 U1
4 DDD 9H Z 246 246 T1
5 EEE 9J R 789 789 U1
.
.
.
file2.txt (mx3)
DESC W S
AAA 100 100
CCC 135 135
EEE 789 789
.
.
.
Here is what I want to do:
Extract the element in DESC column of file2.txt then find the corresponding element in file1.txt.
Extract the W,S elements in that row of file2.txt, then find the corresponding W,S elements in the matching row of file1.txt.
If [W1==W2 && S1==S2]; then echo "${DESC[colindex]} ok"; else echo "${DESC[colindex]} NG"
How can I read this kind of file as a 2D array with bash or is there any convenient way to do that?
bash does not support 2D arrays. You can simulate them by generating 1D array variables like array1, array2, and so on.
Assuming DESC is a key (i.e. has no duplicate values) and does not contain any spaces:
#!/bin/bash
# read data from file1 into per-row arrays: data0, data1, ...
idx=0
while read -a data$idx; do
  let idx++
done <file1.txt
# process data from file2
while read desc w2 s2; do
  for ((i=0; i<idx; i++)); do
    v="data$i[1]"                 # DESC is the 2nd column of file1 (index 1)
    [ "$desc" = "${!v}" ] && {
      w1="data$i[4]"              # W is the 5th column (index 4)
      s1="data$i[5]"              # S is the 6th column (index 5)
      if [ "$w2" = "${!w1}" -a "$s2" = "${!s1}" ]; then
        echo "$desc ok"
      else
        echo "$desc NG"
      fi
      break
    }
  done
done <file2.txt
For brevity, optimizations such as taking advantage of sort order are left out.
If the files actually contain the header NO DESC ID TYPE ... then use tail -n +2 to discard it before processing.
A more elegant solution is also possible, which avoids reading the entire file into memory. This should only be relevant for really large files, though.
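For comparison, here is an awk sketch of the same check that avoids simulated 2D arrays altogether: file2.txt is read first and its W/S values are loaded into arrays keyed by DESC, then file1.txt is scanned. FNR>1 skips the header line of each file. The sample files here are abbreviated from the question.

```shell
# Abbreviated sample data from the question (space-delimited)
printf 'NO DESC ID TYPE W S GRADE\n1 AAA 20 AD 100 100 E2\n3 CCC 9G R 135 135 U1\n' > file1.txt
printf 'DESC W S\nAAA 100 100\nCCC 000 135\n' > file2.txt

# NR==FNR is true only while reading the first file given (file2.txt);
# w[] and s[] are keyed by DESC, then each file1 row is compared.
awk 'NR==FNR { if (FNR > 1) { w[$1] = $2; s[$1] = $3 }; next }
     FNR > 1 && ($2 in w) {
         print $2, ((w[$2] == $5 && s[$2] == $6) ? "ok" : "NG")
     }' file2.txt file1.txt
```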
If the rows order is not needed be preserved (can be sorted), maybe this is enough:
join -2 2 -o 1.1,1.2,1.3,2.5,2.6 <(tail -n +2 file2.txt|sort) <(tail -n +2 file1.txt|sort) |\
sed 's/^\([^ ]*\) \([^ ]*\) \([^ ]*\) \2 \3/\1 OK/' |\
sed '/ OK$/!s/\([^ ]*\) .*/\1 NG/'
For file1.txt
NO DESC ID TYPE W S GRADE
1 AAA 20 AD 100 100 E2
2 BBB C0 U 200 200 D
3 CCC 9G R 135 135 U1
4 DDD 9H Z 246 246 T1
5 EEE 9J R 789 789 U1
and file2.txt
DESC W S
AAA 000 100
CCC 135 135
EEE 789 000
FCK xxx 135
produces:
AAA NG
CCC OK
EEE NG
Explanation:
skip the header line in both files (tail -n +2)
sort both files
join the needed columns from both files into one table; only the lines with a common DESC field appear in the result, like this:
AAA 000 100 100 100
CCC 135 135 135 135
EEE 789 000 789 789
in the lines which have the same values in columns 2/4 and 3/5, substitute everything after the 1st column with OK
in the remaining lines, substitute everything after the 1st column with NG
Suppose I have two tab-delimited files that share a column. Both files have a header line that gives a label to each column. What's an easy way to take the union of the two tables, i.e. take the columns from A and B, but do so according to the value of column K?
for example, table A might be:
employee_id name
123 john
124 mary
and table B might be:
employee_id age
124 18
123 22
then the union based on column 1 of table A ("employee_id") should yield the table:
employee_id name age
123 john 22
124 mary 18
I'd like to do this using Unix utilities, like "cut" etc. How can this be done?
You can use the join utility, but your files need to be sorted first:
join file1 file2
man join for more information
Here's a start. I leave you to format the headers as needed:
$ awk 'NR>1{a[$1]=a[$1]" "$2}END{for(i in a)print a[i],i}' tableA.txt tableB.txt
age employee_id
john 22 123
mary 18 124
Another way:
$ join <(sort tableA.txt) <(sort tableB.txt)
123 john 22
124 mary 18
employee_id name age
experiment with the join options when needed (see info page or man page)
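A sketch that also keeps the header row in place: join the two header lines and the two sorted bodies separately, then concatenate. The sample tables are the ones from the question; this relies on bash process substitution, as above.

```shell
# Sample tables from the question
printf 'employee_id name\n123 john\n124 mary\n' > tableA.txt
printf 'employee_id age\n124 18\n123 22\n'      > tableB.txt

{
  # The headers join on the shared "employee_id" field name
  join <(head -n 1 tableA.txt) <(head -n 1 tableB.txt)
  # The bodies must be sorted on the join key for join to work
  join <(tail -n +2 tableA.txt | sort) <(tail -n +2 tableB.txt | sort)
}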
Try:
paste file1 file2 > file3
I have a file that has 2 columns as given below....
101 6
102 23
103 45
109 36
101 42
108 21
102 24
109 67
and so on......
I want to write a script that adds the values from the 2nd column when their corresponding first column matches.
For example, add all 2nd-column values whose 1st column is 101,
add all 2nd-column values whose 1st column is 102,
add all 2nd-column values whose 1st column is 103, and so on.
I wrote my script like this, but I'm not getting the correct result:
awk '{print $1}' data.txt > col1.txt
while read line
do
awk ' if [$1 == $line] sum+=$2; END {print "Sum for time stamp", $line"=", sum}; sum=0' data.txt
done < col1.txt
awk '{array[$1]+=$2} END { for (i in array) {print "Sum for time stamp",i,"=", array[i]}}' data.txt
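A usage sketch of the one-liner above with the sample data from the question; note that the iteration order of `for (i in array)` is unspecified in awk, so the output is piped through sort here:

```shell
# Sample data from the question
printf '101 6\n102 23\n103 45\n109 36\n101 42\n108 21\n102 24\n109 67\n' > data.txt

# Accumulate column 2 per distinct column-1 key, then print the totals
awk '{array[$1]+=$2} END { for (i in array) print "Sum for time stamp", i, "=", array[i] }' data.txt | sort
```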
Pure Bash:
declare -a sum
while read -a line ; do
(( sum[${line[0]}] += line[1] ))
done < "$infile"
for index in ${!sum[@]}; do
echo -e "$index ${sum[$index]}"
done
The output:
101 48
102 47
103 45
108 21
109 103