Why is grep not getting the last argument? - bash

This is a script that searches for each line of a file ($1) in another file ($2):
val=$(wc -l < $1)
for ((i = 1; i <= val; i++))
do
line=$(sed '$i!d' $1)
if grep -q "$(echo $line)" $2
then
echo found
fi
done
But it gets stuck at the if grep line.
It behaves as if it is not receiving $2.

a script that searches each line of a file ($1) into another file ($2)
No need to write your own script for that. Use grep's -f option:
if grep -qf "$1" "$2"; then
echo found
else
echo not found
fi
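One caveat worth noting: -f treats every line of the pattern file as a regular expression, so dots and other metacharacters in $1 can match unintended text. Adding -F (fixed strings) and -x (whole-line match) makes the comparison literal. A minimal sketch, with illustrative file names:

```shell
# Sample files (names are illustrative)
printf '%s\n' alpha beta gamma > needles.txt
printf '%s\n' beta gamma delta > haystack.txt

# -F: treat each pattern line as a fixed string, not a regex
# -x: require the whole line to match
if grep -qxFf needles.txt haystack.txt; then
    echo "found"
fi
```

Here "beta" and "gamma" appear verbatim in haystack.txt, so the test succeeds.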

Solved: the problem was how I was passing the line number to sed. Single quotes stopped $i from expanding, so it needs double quotes:
#!/bin/bash
val=$(wc -l < $1)
for ((i = 1; i <= val; i++))
do
line=$(sed "$i!d" $1)
if ! grep -q "$(echo $line)" $2
then
echo $line
fi
done
This works fine. If you run:
./script file1 file2
it prints the lines of the first file that are missing from the second.
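The whole loop can also collapse into a single grep call: -v inverts the match, so the command below prints the lines of the first file that are absent from the second (file names are illustrative; -F and -x are added so lines are compared literally and whole):

```shell
printf '%s\n' one two three > file1
printf '%s\n' two four > file2

# -v invert match, -x whole line, -F fixed strings, -f patterns from file2
grep -vxFf file2 file1
```

Only "one" and "three" are printed, since "two" appears in both files.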

Related

Need to add space at the end of each line using Unix shell script

I need to add spaces at the end of each line except the header lines. Below is an example of my file:
13120000005000002100000000000000000000081D000
231200000000000 000 00XY018710V000000000
231200000000000 000 00XY018710V000000000
13120000012000007000000000000000000000081D000
231200000000000 000 00XY057119V000000000
So the 1st and 4th lines (starting with 131200) are my header lines. Except for the headers, I want 7-8 spaces at the end of each line.
Please find the code that I am currently using:
find_list=`find *.dat -type f`
Filename='*.dat'
filename='xyz'
for file in $find_list
do
sed -i -e 's/\r$/ /' "$file"
n=1
loopcounterpre=""
newfile=$(echo "$filename" | sed -e 's/\.[^.]*$//')".dat"
while read line
do
if [[ $line != *[[:space:]]* ]]
then
rowdetail=$line
loopcounter=$( echo "$rowdetail" | cut -b 1-6)
if [[ "$loopcounterpre" == "$loopcounter" ]]
then
loopcounterpre=$loopcounter
#Increases the counter for in the order of 001,002 and so on until the Pay entity is changed
n=$((n+1))
#Resets the Counter to 1 when the pay entity changes
else
loopcounterpre=$loopcounter
n=1
fi
printf -v m "%03d" $n
llen=$(echo ${#rowdetail})
rowdetailT=$(echo "$rowdetail" | cut -b 1-$((llen-3)))
ip=$rowdetailT$m
echo "$ip" >> $newfile
else
rowdetail=$line
echo "$rowdetail" >> $newfile
fi
done < $file
bye
EOF
done
The entire script can be replaced with one line of GNU sed (note that -i and -s must be given as separate options; -is would be parsed as -i with a backup suffix of s):
sed -i -s '/^131200\|^1351000/!s/$/        /' *.dat
Using awk:
$ awk '{print $0 ($0~/^(131200|1351000)/?"":"        ")}' file
print the current record $0 and, if it matches ^(131200|1351000), append "" (nothing); otherwise append the spaces.
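A quick way to check that behaviour on a small sample (eight trailing spaces are assumed here, since the question asks for 7-8; file names are illustrative):

```shell
# Two lines from the question: one header, one detail line
cat > sample.txt <<'EOF'
13120000005000002100000000000000000000081D000
231200000000000 000 00XY018710V000000000
EOF

# Header lines (starting with 131200) pass through untouched;
# every other line gets 8 trailing spaces
awk '{print $0 ($0~/^(131200|1351000)/?"":"        ")}' sample.txt > out.txt

grep -c ' $' out.txt   # prints: 1 (only the detail line ends with a space)
```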

copy text in another file and append different strings shell script

file=$2
isHeader=$true
while read -r line;
do
if [ $isHeader ]
then
sed "1i$line",\"BATCH_ID\"\n >> $file
else
sed "$line,1"\a >> $file
fi
isHeader=$false
done < $1
echo $file
In the first line I want to append one string, and to all the other lines I want to append a different string. I tried this but it doesn't work. I don't have any ideas; can somebody help me please?
Not entirely clear to me what you want to do, but if you simply want to append text at the end of each line, use echo in place of sed:
file=$2
isHeader=1
while read -r line;
do
if [ "$isHeader" = 1 ]
then
#sed "1i$line",\"BATCH_ID\"\n >> $file
echo "${line},\"BATCH_ID\"" > $file
else
#sed "$line,1"\a >> $file
echo "${line},1\a" >> $file
fi
isHeader=0
done < $1
cat $file
The accepted answer is slightly wrong because echo...\a produces a bell. Also, awk or sed support regular expressions and are 10x faster at line-by-line processing. Here it is in awk:
#! /bin/sh
script='NR == 1 { print $0 ",\"BATCH_ID\"" }
NR > 1 { print $0 ",1" }'
awk "$script" $1 > $2
In sed it's even simpler:
sed '1 s/$/,"BATCH_ID"/; 2,$ s/$/,1/' $1 > $2
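To see the command's effect on a tiny input (file name is illustrative):

```shell
printf '%s\n' header row1 row2 > in.txt

# Line 1 gets ,"BATCH_ID" appended; every later line gets ,1
sed '1 s/$/,"BATCH_ID"/; 2,$ s/$/,1/' in.txt
```

This prints header,"BATCH_ID" followed by row1,1 and row2,1.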
To convince yourself of the speed, try this yourself:
$ time seq 100000 | while read f; do echo ${f}foo; done > /dev/null
real 0m2.068s
user 0m1.708s
sys 0m0.364s
$ time seq 100000 | sed 's/$/foo/' > /dev/null
real 0m0.166s
user 0m0.156s
sys 0m0.017s

How to browse a line from a file?

I have a file that contains 10 lines with this sort of content:
aaaa,bbb,132,a.g.n.
I want to walk through every line, character by character, and write the data that appears before each ',' to an output file.
if [ $# -eq 2 ] && [ -f $1 ]
then
echo "Read nr of fields to be saved or nr of commas."
read n
nrLines=$(wc -l < $1)
while $nrLines!="1" read -r line || [[ -n "$line" ]]; do
do
for (( i=1; i<=$n; ++i ))
do
while [ read -r -n1 temp ]
do
if [ temp != "," ]
then
echo $temp > $(result$i)
else
fi
done
paste -d"\n" $2 $(result$i)
done
nrLines=$($nrLines-1)
done
else
echo "File not found!"
fi
}
In parameter $2 I have an empty file in which I will store the data from file $1 after I extract it without the " , " and add a couple of comments.
Example:
My input_file contains:
a.b.c.d,aabb,comp,dddd
My output_file is empty.
I call my script: ./script.sh input_file output_file
After execution the output_file contains:
First line info: a.b.c.d
Second line info: aabb
Third line info: comp
(yes, without the 4th line info)
You can do what you want very simply with parameter expansion (substring removal) in bash alone. For example, take this file:
$ cat dat/10lines.txt
aaaa,bbb,132,a.g.n.
aaaa,bbb,133,a.g.n.
aaaa,bbb,134,a.g.n.
aaaa,bbb,135,a.g.n.
aaaa,bbb,136,a.g.n.
aaaa,bbb,137,a.g.n.
aaaa,bbb,138,a.g.n.
aaaa,bbb,139,a.g.n.
aaaa,bbb,140,a.g.n.
aaaa,bbb,141,a.g.n.
A simple one-liner using native bash string handling, and its output:
$ while read -r line; do echo ${line%,*}; done <dat/10lines.txt
aaaa,bbb,132
aaaa,bbb,133
aaaa,bbb,134
aaaa,bbb,135
aaaa,bbb,136
aaaa,bbb,137
aaaa,bbb,138
aaaa,bbb,139
aaaa,bbb,140
aaaa,bbb,141
Parameter expansion with substring removal works as follows:
var=aaaa,bbb,132,a.g.n.
Beginning at the left and removing up to, and including, the first ',' is:
${var#*,} # bbb,132,a.g.n.
Beginning at the left and removing up to, and including, the last ',' is:
${var##*,} # a.g.n.
Beginning at the right and removing up to, and including, the first ',' is:
${var%,*} # aaaa,bbb,132
Beginning at the right and removing up to, and including, the last ',' is:
${var%%,*} # aaaa
Note: the text to remove above is represented with a wildcard '*', but wildcard use is not required. It can be any allowable text. For example, to only remove ,a.g.n where the preceding number is 136, you can do the following:
${var%,136*},136 # aaaa,bbb,136 (all others unchanged)
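All four forms can be checked directly in a shell:

```shell
var=aaaa,bbb,132,a.g.n.

echo "${var#*,}"    # shortest prefix removed:  bbb,132,a.g.n.
echo "${var##*,}"   # longest prefix removed:   a.g.n.
echo "${var%,*}"    # shortest suffix removed:  aaaa,bbb,132
echo "${var%%,*}"   # longest suffix removed:   aaaa
```

These expansions are POSIX, so they work in plain sh as well as bash.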
To print the 2016th line from a file named file.txt, you run a command like this:
sed -n '2016p' < file.txt
More examples:
sed -n '2p' < file.txt
will print 2nd line
sed -n '2011p' < file.txt
2011th line
sed -n '10,33p' < file.txt
line 10 up to line 33
sed -n '1p;3p' < file.txt
1st and 3rd lines
and so on...
For more detail, please have a look in this tutorial and this answer.
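These addresses are easy to verify against a generated file:

```shell
seq 10 > file.txt        # a file containing the numbers 1..10, one per line

sed -n '2p' file.txt     # prints: 2
sed -n '4,6p' file.txt   # prints lines 4 through 6
sed -n '1p;3p' file.txt  # prints lines 1 and 3
```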
In native bash the following should do what you want, assuming you replace the contents of your script.sh with the below:
#!/bin/bash
IN_FILE=${1}
OUT_FILE=${2}
IFS=\,
while read line; do
set -- ${line}
for ((i=1; i<=${#}; i++)); do
((${i}==4)) && continue
((n+=1))
printf '%s\n' "Line ${n} info: ${!i}"
done
done < ${IN_FILE} > ${OUT_FILE}
This prints every field of each line in the input file except the 4th, one per line, into the output file (I assume this is your requirement as per your comment?).
[wspace#wspace sandbox]$ awk -F"," 'BEGIN{OFS="\n"}{for(i=1; i<=NF-1; i++){print "line Info: "$i}}' data.txt
line Info: a.b.c.d
line Info: aabb
line Info: comp
This little snippet can ignore the last field.
updated:
#!/usr/bin/env bash
if [ ! -f "$1" -o $# -ne 2 ];then
echo "Usage: $(basename $0) input_file out_file"
exit 127
fi
input_file=$1
output_file=$2
: > $output_file
if [ "$(wc -l < $1)" -ne 0 ];then
while IFS= read -r -n1 char
do
if [ "$char" = "" ];then
# empty char means end of line: drop the last field
temp=""
elif [ "$char" != "," ];then
temp=$temp$char
else
echo "line info: $temp" >> $output_file
temp=""
fi
done < $input_file
else
echo "file $1 is empty"
fi
Maybe this is what you want
Did you try
sed "s|,|\n|g" $1 | head -n -1 > $2
I assume that only the last word would not have a comma on its right.
Try this (tested with your sample line):
#!/bin/bash
# script.sh
echo "Number of fields to save ?"
read nf
while IFS=$',' read -r -a arr; do
newarr=("${arr[@]:0:${nf}}")
done < "$1"
for i in "${newarr[@]}";do
printf "%s\n" $i
done > "$2"
Execute the script with:
$ ./script.sh inputfile outputfile
Number of fields to save ?
3
$ cat outputfile
a.b.c.d
aabb
comp
All comma-separated words are stored in the array arr.
A temporary array newarr keeps only the first $nf elements ($nf comes from the read command).
The loop over the new array prints the result into $2, the output file.
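The array-slice syntax can be tried on its own in bash (values taken from the example line; this is a sketch, not the full script):

```shell
#!/usr/bin/env bash
arr=(a.b.c.d aabb comp dddd)

# Keep the first 3 elements; the parentheses keep the result an array
newarr=("${arr[@]:0:3}")

printf '%s\n' "${newarr[@]}"   # prints: a.b.c.d, aabb, comp (one per line)
```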

bash, adding string after a line

I'm trying to put together a bash script that will search a bunch of files and if it finds a particular string in a file, it will add a new line on the line after that string and then move on to the next file.
#! /bin/bash
echo "Creating variables"
SEARCHDIR=testfile
LINENUM=1
find $SEARCHDIR* -type f -name *.xml | while read i; do
echo "Checking $i"
ISBE=`cat $i | grep STRING_TO_SEARCH_FOR`
if [[ $ISBE =~ "STRING_TO_SEARCH_FOR" ]] ; then
echo "found $i"
cat $i | while read LINE; do
((LINENUM=LINENUM+1))
if [[ $LINE == "<STRING_TO_SEARCH_FOR>" ]] ; then
echo "editing $i"
awk -v "n=$LINENUM" -v "s=new line to insert" '(NR==n) { print s } 1' $i
fi
done
fi
LINENUM=1
done
the bit I'm having trouble with is
awk -v "n=$LINENUM" -v "s=new line to insert" '(NR==n) { print s } 1' $i
if I just use $i at the end, it will output the content to the screen, if I use $i > $i then it will just erase the file and if I use $i >> $i it will get stuck in a loop until the disk fills up.
any suggestions?
Unfortunately awk doesn't have an in-place replacement option similar to sed's -i, so you can write to a temp file and then move it over the original:
awk '{commands}' file > tmpfile && mv tmpfile file
or if you have GNU awk 4.1.0 or newer, where the -i inplace option was added, you can do:
awk -i inplace '{commands}' file
to modify the original
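A minimal sketch of the temp-file pattern (file names are illustrative; mktemp avoids clobbering an existing file):

```shell
printf '%s\n' foo bar > demo.txt

# Write the transformed output to a temp file, then replace the original
tmpfile=$(mktemp)
awk '{print NR": "$0}' demo.txt > "$tmpfile" && mv "$tmpfile" demo.txt

cat demo.txt   # prints: "1: foo" then "2: bar"
```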
#cat $i | while read LINE; do
# ((LINENUM=LINENUM+1))
# if [[ $LINE == "<STRING_TO_SEARCH_FOR>" ]] ; then
# echo "editing $i"
# awk -v "n=$LINENUM" -v "s=new line to insert" '(NR==n) { print s } 1' $i
# fi
# done
# replaced by
sed -i 's/STRING_TO_SEARCH_FOR/&\n/g' ${i}
or use awk in place of sed
also
# ISBE=`cat $i | grep STRING_TO_SEARCH_FOR`
# if [[ $ISBE =~ "STRING_TO_SEARCH_FOR" ]] ; then
#by
if [ $( grep -c 'STRING_TO_SEARCH_FOR' ${i} ) -gt 0 ]; then
# if the files are huge, run sed on them directly (without the grep check); it is faster, but you lose the "found $i" echo
If you can, maybe use a temporary file?
~$ awk ... $i > tmpfile
~$ mv tmpfile $i
Or simply awk ... $i > tmpfile && mv tmpfile $i
Note that, you can use mktemp to create this temporary file.
Otherwise, with sed you can insert a line right after a match:
~$ cat f
auie
nrst
abcd
efgh
1234
~$ sed '/abcd/{a\
new_line
}' f
auie
nrst
abcd
new_line
efgh
1234
The command checks whether the line matches /abcd/; if so, it appends (a\) the line new_line.
And since sed has the -i option to replace inline, you can do:
if [[ $ISBE =~ "STRING_TO_SEARCH_FOR" ]] ; then
echo "found $i"
echo "editing $i"
sed -i '/STRING_TO_SEARCH_FOR/a new line to insert' $i
fi

sed command to select certain number of lines from a file

I am trying to split huge files into pieces of around 30k lines each.
I found this can be done with the sed -n 'from_line,to_line p' command, and it works fine with literal line numbers, but in my case I am using two variables and I get an error.
Here is the script I am using:
k=1
for i in `ls final*`
do
count=`wc -l $i|awk '{print $1}'`
marker1=1
marker2=30000
no_of_files=$(( count/30000 ))
#echo $no_of_files
no_of_files=$(( no_of_files+1 ))
while [[ no_of_files -ne 0 ]];do
if [[ $marker2 -gt $count ]];then
sed -n '$marker1,$count p' $i > purge$k.txt
else
sed -n '$marker1,$marker2 p' $i > purge$k.txt
marker1=$(( marker2+1 ))
marker2=$(( marker2+30000 ))
fi
no_of_files=$(( no_of_files-1 ))
k=$(( k+1 ))
done
done
I get the error below when running the script.
sed: $marker1,$marker2 p is not a recognized function.
sed: $marker1,$marker2 p is not a recognized function.
sed: $marker1,$marker2 p is not a recognized function.
sed: $marker1,$marker2 p is not a recognized function.
sed: $marker1,$marker2 p is not a recognized function.
sed: $marker1,$marker2 p is not a recognized function.
sed: $marker1,$count p is not a recognized function.
It doesn't work because you put the variables inside single quotes, which the shell does not expand.
Change the sed commands to use double quotes:
sed -n "$marker1,$count p"
(Keep the addresses as plain numbers: wrapping them in /.../ would turn them into regex patterns that match lines containing those digits, not line numbers.)
Some small changes:
Use double quotes in sed. Do not use old backticks; use $( ) instead.
Change k=$(( k+1 )) to (( k++ )).
k=1
for i in $(ls final*)
do
count=$(wc -l <$i)
marker1=1
marker2=30000
no_of_files=$(( count/30000 ))
#echo $no_of_files
(( no_of_files++ ))
while [[ no_of_files -ne 0 ]];do
if [[ $marker2 -gt $count ]];then
sed -n "$marker1,$count p" $i > purge$k.txt
else
sed -n "$marker1,$marker2 p" $i > purge$k.txt
marker1=$(( marker2+1 ))
marker2=$(( marker2+30000 ))
fi
(( no_of_files-- ))
(( k++ ))
done
done
This wc -l $i|awk '{print $1}' can be simplified to:
awk 'END {print NR}' $i
or
wc -l < $i
As others have noted, your shell variables are inside single quotes, so they are not being expanded. But you are also using the wrong tool: your script creates N files using N passes over the input. split -l 30000 "$i" splits the file into 30,000-line pieces called xaa, xab, and so on in a single pass, and you can tell split what to name the pieces, too.
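A sketch of split on generated data; GNU split is assumed for the -d numeric-suffix option, and the purge prefix mirrors the script's output names:

```shell
seq 70000 > final1                 # a 70,000-line sample file

# 30,000-line pieces named purge00, purge01, purge02
split -l 30000 -d final1 purge

wc -l purge*                       # 30000, 30000, 10000 lines respectively
```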
