How to search for a line containing a word in a file and, from that line to the end of the file, echo the date using a shell script - shell

cat "file.log"| grep -q '2013-11-10'
while read line
do
echo file_content_time=`echo $line | sed -e 's/\([0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] [0-9][0-9]:[0-9][0-9]:[0-9][0-9]\).*/\1/'`
if [ $? -eq 0 ]
then
echo comparison_start_date=`date -d "$file_content_time" +%Y%m%d`
fi
done < 'file.log'
/* Here I am trying to find the line containing '2013-11-10'; from that line onwards, the date has to be displayed. */

To output everything from a line containing a pattern up to the end of the file, all you need is
awk '/2013-11-10/,/pattern-not-in-file/' file.log

awk '/pattern/{p=1}p' your_file

initial_time=$(grep -o -m1 "2013-11-10 [0-9][0-9]:[0-9][0-9]:[0-9][0-9]" file.log)
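For completeness, the same "from the first matching line to the end of the file" idea can be sketched with sed's address range (a sketch, assuming the date string appears literally in file.log as in the question); the $ address means the last line, so no artificial end pattern is needed:
# print from the first line containing the date through the last line of the file
sed -n '/2013-11-10/,$p' file.log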

Related

Grep command returns nothing in shell script

When I try to extract rows that match strings which are in another file, the grep command returns nothing.
#!/bin/bash
input="export.txt"
file="filename.csv"
val=`head -n 1 $file`
echo $val>export.csv
cat export.txt | while read line
do
val=`echo $line | tr -d '\n'`
echo $val
valu=`grep $val $file`
echo $valu
done
You can simply do this:
grep -f list.txt input.txt
which will extract all the lines from input.txt that match any pattern in list.txt.
If for some reason you want to save each match, you can do it in a Bash array as:
IFS=$'\n' read -d '' -a values <<< "$( grep -f list.txt input.txt )"
And then you can print a certain match as:
echo "${values[1]}"
Regards!
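If bash 4+ is available, a slightly simpler sketch for collecting the matches uses mapfile (readarray) instead of fiddling with IFS; list.txt and input.txt are the same illustrative names used above:
# read every matching line into the array "values", one line per element
mapfile -t values < <(grep -f list.txt input.txt)
echo "${values[0]}"   # first match (bash arrays are zero-indexed)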

Sed replace substring only if expression exists

In a bash script, I am trying to remove the directory name from filenames:
documents/file.txt
direc/file5.txt
file2.txt
file3.txt
So I first try to see if there is a "/" and, if so, delete everything before it:
for i in **/*.scss *.scss; do
echo "$i" | sed -n '^/.*\// s/^.*\///p'
done
But it doesn't work for files in the current directory; it gives me a blank string.
I get:
file.txt
file5.txt
When you only want the filename, use basename instead of sed.
# basename /path/to/file
returns file
here is the man page
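For instance, applied to the sample paths from the question (just a sketch; basename strips everything up to the last /):
for f in documents/file.txt direc/file5.txt file2.txt file3.txt; do
    basename "$f"   # prints file.txt, file5.txt, file2.txt, file3.txt
done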
Your sed attempt is basically fine, but you should print regardless of whether you performed a substitution; take out the -n and the p at the end. (Also there was an unrelated syntax error.)
Also, don't needlessly loop over all files.
printf '%s\n' **/*.scss *.scss |
sed 's%^.*/%%'
This can also be done with awk.
Example:
echo "1/2/i.py" | awk 'BEGIN {FS="/"} {print $NF}'
output: i.py
Eventually, I did :
for i in **/*.scss *.scss; do
# for i in *.scss; do
# for i in _hm-globals.scss; do
name=${i##*/} # remove dir name
name=${name%.scss} # remove extension
name=`echo "$name" | sed -n "s/^_hm-//p"` # remove _hm-
if [[ $name = *"."* ]]; then
name=`echo "$name" | sed -n 's/\./-/p'` #replace . to --
fi
echo "$name" >&2
done
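The sed calls in that final loop can also be sketched with parameter expansion alone, avoiding the subshells (assuming the same _hm- prefix and .scss extension, and that ** works as in the loop above):
for i in **/*.scss *.scss; do
    name=${i##*/}        # remove dir name
    name=${name%.scss}   # remove extension
    name=${name#_hm-}    # remove the _hm- prefix
    name=${name//./-}    # replace every . with -
    echo "$name" >&2
done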

Removing current line of a file

I'm facing something that looks easy, but I can't find the answer:
The goal of this function is to remove all the lines that contain 3 commas ',':
while read line; do
COUNT=$(echo $line | grep -o "\," | wc -l)
if [ $COUNT -ne 3 ]; then
remove line
fi
done < tmp.txt
I don't know how to remove the current line, can you help me?
I extract this tmp.txt from a larger file with grep; if it was in a variable instead of tmp.txt, would it be the same?
while read line; do
COUNT=$(echo $line | grep -o "\," | wc -l)
if [ $COUNT -ne 3 ]; then
remove line
fi
done <<< "$toto"
Thanks in advance
A solution using sed only:
sed '/^\([^,]*,\)\{3\}[^,]*$/d' infile
This deletes all lines in which the comma character ',' occurs exactly 3 times.
Or using awk:
awk -F, 'NF!=4' infile
Or, with both reading from a variable:
sed '/^\([^,]*,\)\{3\}[^,]*$/d' <<<"$variable"
awk -F, 'NF!=4' <<<"$variable"
A simple awk solution
awk 'gsub(/,/,",")!=3' file
gsub replaces the pattern with the specified string and it returns the number of substitutions/replacements made.
We are replacing , with , here (a no-op substitution), so gsub returns the number of commas in the string.
Example :
Input file
hello this line has 1 ,
This line, has, 3 ,
This line, has, 4 , commas , Thanks
Output
$ awk 'gsub(/,/,",")!=3' file
hello this line has 1 ,
This line, has, 4 , commas , Thanks
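The return value can also be checked in isolation; a minimal sketch using the second sample line as a literal string:
awk 'BEGIN { s = "This line, has, 3 ,"; print gsub(/,/, ",", s) }'   # prints 3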
I would have done it the other way around:
while read line; do
COUNT=$(echo $line | grep -o "\," | wc -l)
if [ $COUNT -eq 3 ]; then
echo $line >> $tempofile
fi
done < tmp.txt
If the line matches, keep it; otherwise move on to the next line.
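A sketch of that keep-the-matching-lines approach with the comma counting made explicit (tr -cd ',' keeps only the commas and wc -c counts them; tmp.txt is the file name from the question, the temp file name is arbitrary):
keepfile=$(mktemp)
while IFS= read -r line; do
    count=$(printf '%s' "$line" | tr -cd ',' | wc -c)   # number of commas in the line
    if [ "$count" -eq 3 ]; then
        printf '%s\n' "$line" >> "$keepfile"            # keep lines with exactly 3 commas
    fi
done < tmp.txt
mv "$keepfile" tmp.txt                                   # the other lines are now "removed"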
This simple command removes all the lines that contain a 3:
$ awk '!/3/' file_name

How to browse a line from a file?

I have a file that contains 10 lines with this sort of content:
aaaa,bbb,132,a.g.n.
I want to walk through every line, char by char, and put the data found before each "," into an output file.
if [ $# -eq 2 ] && [ -f $1 ]
then
echo "Read nr of fields to be saved or nr of commas."
read n
nrLines=$(wc -l < $1)
while $nrLines!="1" read -r line || [[ -n "$line" ]]; do
do
for (( i=1; i<=$n; ++i ))
do
while [ read -r -n1 temp ]
do
if [ temp != "," ]
then
echo $temp > $(result$i)
else
fi
done
paste -d"\n" $2 $(result$i)
done
nrLines=$($nrLines-1)
done
else
echo "File not found!"
fi
}
In parameter $2 I have an empty file in which I will store the data from file $1 after I extract it without the " , " and add a couple of comments.
Example:
My input_file contains:
a.b.c.d,aabb,comp,dddd
My output_file is empty.
I call my script: ./script.sh input_file output_file
After execution the output_file contains:
First line info: a.b.c.d
Second line info: aabb
Third line info: comp
(yes, without the 4th line info)
You can do what you want very simply with parameter-expansion and substring-removal using bash alone. For example, take an example file:
$ cat dat/10lines.txt
aaaa,bbb,132,a.g.n.
aaaa,bbb,133,a.g.n.
aaaa,bbb,134,a.g.n.
aaaa,bbb,135,a.g.n.
aaaa,bbb,136,a.g.n.
aaaa,bbb,137,a.g.n.
aaaa,bbb,138,a.g.n.
aaaa,bbb,139,a.g.n.
aaaa,bbb,140,a.g.n.
aaaa,bbb,141,a.g.n.
A simple one-liner using native bash string handling could be the following, giving these results:
$ while read -r line; do echo ${line%,*}; done <dat/10lines.txt
aaaa,bbb,132
aaaa,bbb,133
aaaa,bbb,134
aaaa,bbb,135
aaaa,bbb,136
aaaa,bbb,137
aaaa,bbb,138
aaaa,bbb,139
aaaa,bbb,140
aaaa,bbb,141
Parameter expansion with substring removal works as follows:
var=aaaa,bbb,132,a.g.n.
Beginning at the left and removing up to, and including, the first ',' is:
${var#*,} # bbb,132,a.g.n.
Beginning at the left and removing up to, and including, the last ',' is:
${var##*,} # a.g.n.
Beginning at the right and removing up to, and including, the first ',' is:
${var%,*} # aaaa,bbb,132
Beginning at the right and removing up to, and including, the last ',' is:
${var%%,*} # aaaa
Note: the text to remove above is represented with a wildcard '*', but wildcard use is not required. It can be any allowable text. For example, to only remove ,a.g.n where the preceding number is 136, you can do the following:
${var%,136*},136 # aaaa,bbb,136 (all others unchanged)
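Putting the four forms together as a small runnable sketch (same var as above):
var=aaaa,bbb,132,a.g.n.
echo "${var#*,}"    # bbb,132,a.g.n.  (shortest match from the left removed)
echo "${var##*,}"   # a.g.n.          (longest match from the left removed)
echo "${var%,*}"    # aaaa,bbb,132    (shortest match from the right removed)
echo "${var%%,*}"   # aaaa            (longest match from the right removed)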
To print the 2016th line from a file named file.txt, you have to run a command like this:
sed -n '2016p' < file.txt
More:
sed -n '2p' < file.txt
prints the 2nd line
sed -n '2011p' < file.txt
prints the 2011th line
sed -n '10,33p' < file.txt
prints lines 10 through 33
sed -n '1p;3p' < file.txt
prints the 1st and 3rd lines
and so on...
For more detail, please have a look at this tutorial and this answer.
In native bash the following should do what you want, assuming you replace the contents of your script.sh with the below:
#!/bin/bash
IN_FILE=${1}
OUT_FILE=${2}
IFS=\,
while read line; do
set -- ${line}
for ((i=1; i<=${#}; i++)); do
((${i}==4)) && continue
((n+=1))
printf '%s\n' "Line ${n} info: ${!i}"
done
done < ${IN_FILE} > ${OUT_FILE}
This prints every field of each line except the 4th, each on its own line in the output file (I assume this is your requirement, as per your comment?).
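For instance, run against a file holding the single sample line a.b.c.d,aabb,comp,dddd from the question, the script above would be expected to produce something like the following (note the labels count fields, not lines, so they differ slightly from the wording asked for):
$ ./script.sh input_file output_file
$ cat output_file
Line 1 info: a.b.c.d
Line 2 info: aabb
Line 3 info: comp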
[wspace#wspace sandbox]$ awk -F"," 'BEGIN{OFS="\n"}{for(i=1; i<=NF-1; i++){print "line Info: "$i}}' data.txt
line Info: a.b.c.d
line Info: aabb
line Info: comp
This little snippet can ignore the last field.
updated:
#!/usr/bin/env bash
if [ ! -f "$1" -o $# -ne 2 ];then
echo "Usage: $(basename $0) input_file out_file"
exit 127
fi
input_file=$1
output_file=$2
: > $output_file
if [ "$(wc -l < $1)" -ne 0 ];then
while true
do
read -r -n1 char
if [ "$char" == "" ];then
break
elif [ "$char" != "," ];then
temp=$temp$char
else
echo "line info: $temp" >> $output_file
temp=""
fi
done < $input_file
else
echo "file $1 is empty"
fi
Maybe this is what you want
Did you try
sed "s|,|\n|g" $1 | head -n -1 > $2
I assume that only the last word would not have a comma on its right.
Try this (tested with your sample line):
#!/bin/bash
# script.sh
echo "Number of fields to save ?"
read nf
while IFS=$',' read -r -a arr; do
newarr=(${arr[@]:0:${nf}})
done < "$1"
for i in "${newarr[@]}"; do
printf "%s\n" $i
done > "$2"
Execute script with :
$ ./script.sh inputfile outputfile
Number of fields ?
3
$ cat outputfile
a.b.c.d
aabb
comp
All comma-separated words are stored into an array $arr.
A temporary array $newarr keeps only the first $nf elements ($nf comes from the read command).
It loops over the new array and prints the result to $2, the output file.

How to find the first occurrence of a date which is greater than or equal to a particular date in a text file using a shell script

past_date='2013-11-14'
initial_time=$(grep -o -m1 "$past_date [0-9][0-9]:[0-9][0-9]:[0-9][0-9]" logfile.txt)
/* Here I am trying to find the first occurrence of a date which is greater than or equal to '2013-11-14'. I have tried the code above, but it only gives that particular line of the file; if that date is not found, it has to give the next date which is greater than 2013-11-14. */
Using awk
past_date='20131114'
awk '{d=$1;gsub(/-/,"",d);if (d>=p) {print;exit}}' p=$past_date logfile
2013-11-15 15:45:40 Starting agent install process
If you use bash, then you might want to try something like:
past_date='2013-11-14'
initial_time=$(grep -oP '\d{4}-\d\d-\d\d \d\d:\d\d:\d\d' < logfile.txt | \
while read LINE ; do if [ "$LINE" '>' "$past_date" ]; then echo $LINE; break; fi ; done)
while read line
do
initial_time=`echo $line | sed -e 's/\([0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] [0-9][0-9]:[0-9][0-9]:[0-9][0-9]\).*/\1/'`
file_content_date=`date -d "$initial_time" +%Y%m%d`
comparison_past_date=`date -d "$past_date" +%Y%m%d`
if [ $comparison_past_date -le $file_content_date ]; then
comparison_start_date=`date -d "$file_content_date" +%Y%m%d`
break
fi
done < logfile.txt
