How to edit the previous line from the current one in a text file? - bash

Here is what I need exactly.
I have a file that I loop through line by line, and when I find the word "search" I need to go back to the previous line and change the word "false" to "true" inside that line, but only on that line, not in the whole file. I'm a newbie in bash and this is all I have so far.
file="/u01/MyFile.txt"
count=0
while read line
do
((count++))
if [[ $line == *"[search]"* ]]
then
?????????????
fi
done < $file

You could do the whole thing in pure bash like this:
# Declare a function process_file doing the stuff
process_file() {
    # Always have the previous line ready, hold off printing
    # until we know if it needs to be changed.
    read prev
    while read line; do
        if [[ $line == *"[search]"* ]]; then
            # substitute false with true in $prev. Use ${prev//false/true} if
            # several occurrences may need to be replaced.
            echo "${prev/false/true}"
        else
            echo "$prev"
        fi
        # remember current line as previous for next turn
        prev="$line"
    done
    # in the end, print the last line (it was saved as $prev in the last
    # loop iteration).
    echo "$prev"
}
# call function, feed file to it.
process_file < file
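For illustration, here is a quick run on a hypothetical two-line sample (the file name sample.txt and its contents are made up for this demo):
# create a made-up sample file, then feed it to the function
printf 'status=false\nthis line has [search] in it\n' > sample.txt
process_file < sample.txt
# expected output:
# status=true
# this line has [search] in it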
However, there are tools that are better suited to this sort of file processing than pure bash and that are commonly used in shell scripts: awk and sed. These tools process a file by reading line after line [1] from it and running a piece of code for each line individually, preserving some state between lines (not unlike the code above), and they come with more powerful text-processing facilities.
For this, I'd use awk:
awk 'index($0, "[search]") { sub(/false/, "true", prev) } NR != 1 { print prev } { prev = $0 } END { print prev }' filename
That is:
index($0, "[search]") { # if the currently processed line contains
sub(/false/, "true", prev) # "[search]", replace false with true in the
# saved previous line. (use gsub if more than
# one occurrence may have to be replaced)
}
NR != 1 { # then, unless we're processing the first line
# and don't have a previous line,
print prev # print the previous line
}
{ # then, for all lines:
prev = $0 # remember it as previous line for the next turn
}
END { # and after the last line was processed,
print prev # print the last line (that we just saved
# as prev)
}
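For comparison, running the awk one-liner from above on the same hypothetical sample.txt produces the same result:
awk 'index($0, "[search]") { sub(/false/, "true", prev) } NR != 1 { print prev } { prev = $0 } END { print prev }' sample.txt
# status=true
# this line has [search] in it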
You could also use sed:
sed '/\[search\]/ { x; s/false/true/; x; }; x; ${ p; x; }; 1d' filename
...but as you can see, sed is somewhat more cryptic. It has its strengths, but this problem doesn't play to them.
Addendum, as requested: The main thing to know is that sed reads each line into something called the pattern space (on which most commands operate) and has a hold buffer on the side where you can save things between lines. We'll use the hold buffer to hold the previous line. The code works as follows:
/\[search\]/ {      # if the currently processed line contains [search]
    x               # eXchange pattern space (PS) and hold buffer (HB)
    s/false/true/   # replace false with true in the pattern space
    x               # swap back. This changed false to true in the saved
                    # previous line (now back in the HB).
                    # Use s/false/true/g for multiple occurrences.
}
x                   # swap pattern space and hold buffer (the previous line
                    # is now in the PS, the current one in the HB)
${                  # if we're processing the last line,
    p               # print the PS
    x               # swap again (current line is now in PS)
}
1d                  # If we're processing the first line, the PS now holds
                    # the empty line that was originally in the HB. Don't
                    # print that.
                    # We're dropping off the end here, and since we didn't
                    # disable auto-print, the PS will be printed now.
                    # That is the previous line except if we're processing
                    # the last line (then it's the last line)
Well, I did warn you that sed is somewhat more cryptic than awk. A caveat of this code is that it expects the input file to have more than one line.
[1] In awk's case, it's records, which don't have to be lines but are lines by default.
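As a small aside (a sketch with a made-up file name): setting the record separator RS to the empty string makes awk treat blank-line-separated paragraphs as records instead of single lines:
awk 'BEGIN { RS = "" } { print "record", NR, "has", NF, "fields" }' paragraphs.txt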

A very simple approach would be to read 2 lines at a time and then check for the condition in the second line and replace the previous line.
while read prev_line            # reads every 1st line
do
    read curr_line              # reads every 2nd line
    if [[ $curr_line == *"[search]"* ]]; then
        echo "${prev_line/false/true}"
        echo "$curr_line"
    else
        echo "$prev_line"
        echo "$curr_line"
    fi
done < "file.txt"

The correct version of your way of doing this would be:
file="/u01/MyFile.txt"
count=0
while read line
do
((count++))
if [[ $line == *"[search]"* ]]
then
sed -i.bak "$((count-1))s/true/false/" $file
fi
done < $file
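If the file can contain many [search] lines, the loop above rewrites the file once per match. Here is a minimal sketch of the same idea done with a single sed call, assuming GNU sed and assuming [search] never occurs on the very first line:
file="/u01/MyFile.txt"
expr=""
# collect the line numbers of the lines just before each "[search]" line
for n in $(grep -n '\[search\]' "$file" | cut -d: -f1); do
    expr+="$((n-1))s/false/true/;"
done
# apply all substitutions in one pass (keeps a .bak backup like above)
[ -n "$expr" ] && sed -i.bak "$expr" "$file"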

Related

Using sed in order to change a specific character in a specific line

I'm a beginner in bash and here is my problem. I have a file just like this one:
Azzzezzzezzzezzz...
Bzzzezzzezzzezzz...
Czzzezzzezzzezzz...
I'm trying to edit this file in a script. The letters A, B, and C are unique in the whole file and there is only one per line.
I want to replace the first e of each line with a number, which can be:
1 in line beginning with an A,
2 in line beginning with a B,
3 in line beginning with a C,
and I'd like to loop this in order to have this type of result
Azzz1zzz5zzz1zzz...
Bzzz2zzz4zzz5zzz...
Czzz3zzz6zzz3zzz...
All the numbers here are random integer variables between 0 and 9. I really need to start by replacing 1,2,3 in the first execution of my loop, then 5,4,6, then 1,5,3, and so on.
I tried this
sed "0,/e/s/e/$1/;0,/e/s/e/$2/;0,/e/s/e/$3/" /tmp/myfile
But the result was this (because I didn't specify the line)
Azzz1zzz2zzz3zzz...
Bzzzezzzezzzezzz...
Czzzezzzezzzezzz...
I noticed that doing sed -i "/A/ s/$/ezzz/" /tmp/myfile will add ezzz at the end of the A line, so I tried this
sed -i "/A/ 0,/e/s/e/$1/;/B/ 0,/e/s/e/$2/;/C/ 0,/e/s/e/$3/" /tmp/myfile
but it failed
sed: -e expression #1, char 5: unknown command: `0'
Here I'm lost.
I have the number of e's in the A, B, or C line in a variable (let's call it number_of_e_per_line).
Thank you for taking the time to help me.
Just apply the s command on the line that matches A.
sed "
    /^A/{ s/e/$1/; }
    /^B/{ s/e/$2/; }
    # or shorter
    /^C/s/e/$3/
"
The s command by default replaces the first occurrence. You can, for example, do s/e/$1/2 to replace the second occurrence; s/e/$1/g ("global") replaces all occurrences.
0,/e/ specifies a range of lines - it covers lines from the first one up to and including the first line that matches /e/.
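A quick demo of those two points on a made-up two-line file (the 0,/re/ address form is a GNU sed extension):
printf 'xexexe\nyeye\n' > demo.txt
sed 's/e/1/2' demo.txt       # replaces only the 2nd 'e' on each line
sed 's/e/1/g' demo.txt       # replaces every 'e' on each line
sed '0,/e/ s/e/1/' demo.txt  # replaces only the very first 'e' in the whole file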
sed is not part of Bash. It is a separate (crude) programming language and is a very standard command. See https://www.grymoire.com/Unix/Sed.html .
Continuing from the comment: sed is a poor choice here unless all your files can only have 3 lines. The reason is that sed processes each line on its own and has no way to keep a separate count of the occurrences of 'e' across lines.
Instead, wrapping sed in a script and keeping track of the replacements allows you to handle any file no matter the number of lines. You just loop and handle the lines one at a time, e.g.
#!/bin/bash

[ -z "$1" ] && {   ## validate one argument for filename provided
    printf "error: filename argument required.\nusage: %s filename\n" "$0" >&2
    exit 1
}

[ -s "$1" ] || {   ## validate file exists and is non-empty
    printf "error: file not found or empty '%s'.\n" "$1" >&2
    exit 1
}

declare -i n=1     ## occurrence counter initialized to 1

## loop reading each line
while read -r line || [ -n "$line" ]; do
    [[ $line =~ ^.*e.*$ ]] || continue   ## line has 'e' or get next line
    sed "s/e/1/$n" <<< "$line"           ## substitute the n-th occurrence of 'e'
    ((n++))                              ## increment counter
done < "$1"
Your data file having "..." at the end of each line suggests your file is larger than the snippet posted. If you have lines beginning 'A' - 'Z', you don't want to have to write 26 separate /match/s/find/replace/ substitutions. And if you have somewhere between 3 and 26 (or more), you don't want to have to rewrite a different sed expression for every new file you are faced with.
That's why I say sed is a poor choice. You really have no way to make the task generic with sed. The downside to using a script is that it, too, becomes a poor choice as the number of records you need to process increases (over 100,000 or so, simply due to efficiency).
Example Use/Output
With the script in replace-e-incremental.sh and your data in file, you would do:
$ bash replace-e-incremental.sh file
Azzz1zzzezzzezzz...
Bzzzezzz1zzzezzz...
Czzzezzzezzz1zzz...
To Modify file In-Place
Since the script makes multiple calls to sed, you need to redirect its output to a temporary file and then replace the original by overwriting it with the temp file, e.g.
$ bash replace-e-incremental.sh file > mytempfile && mv -f mytempfile file
$ cat file
Azzz1zzzezzzezzz...
Bzzzezzz1zzzezzz...
Czzzezzzezzz1zzz...

Comment out line, only if previous line contains matching string

Looking for a solution for a bash script using sed or awk to comment out a line, only if the previous line contains a matching string.
For example, a file containing:
...
if [ $V1 -gt 100 ]; then
some specific commands
else
some other specific commands
fi
...
I'd like to comment out the line containing else but ONLY if the previous line contains specific.
I've attempted piping multiple sed commands along with grep commands to no avail.
sed -E '/specific/{n;s/^([[:blank:]]*)else$/\1#else/}'
Output
...
if [ $V1 -gt 100 ]; then
some specific commands
#else
some other commands
fi
...
A retrospection
/specific/ look for the line containing the pattern specific
n reads the next line into the pattern space; n also auto-prints the current pattern space before doing so.
Check if that next line is (zero_or_more_blanks)else; if yes, substitute the line with (the_blanks_found_previously)#else. Remember that () captures a pattern for reuse and \1 is the previously matched pattern being reused.
-E enable extended regex
-i is for inplace edit of the actual file
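For example, to change the file in place (a sketch assuming GNU sed; -i.bak keeps a backup of the original):
sed -E -i.bak '/specific/{n;s/^([[:blank:]]*)else$/\1#else/}' file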
You can use this awk solution:
awk '/specific/{p=NR} NR==p+1{p=0; if (/^[[:blank:]]*else/) $0 = "#" $0} 1' file
if [ $V1 -gt 100 ]; then
some specific commands
#else
some other commands
fi
In the block /specific/{p=NR} we find specific and store the current line number in p
The next block is executed for the very next line because of the NR==p+1 condition
We reset p=0 and, if that line starts with else (with optional whitespace before it), we just comment it out.

remove block of text between two lines based on content

I need to remove/filter a very large log file.
I managed to break the log file into blocks of text starting with a line containing <-- or --> and ending with a line containing Content-Length:.
Now, if such a block of text contains the word REGISTER, it needs to be deleted.
I found the following example:
# sed script to delete a block if /regex/ matches inside it
:t
/start/,/end/ {      # For each line between these block markers..
    /end/!{          # If we are not at the /end/ marker
        $!{          # nor the last line of the file,
            N;       # add the Next line to the pattern space
            bt
        }            # and branch (loop back) to the :t label.
    }                # This line matches the /end/ marker.
    /regex/d;        # If /regex/ matches, delete the block.
}                    # Otherwise, the block will be printed.
#---end of script---
written by Russell Davies on this page, but I do not know how to turn this into a single-line statement to use in a pipe.
My goal is to pipe a tail -F of the log file into the final version so it gets updated by the minute.
Try this:
awk '/<--|-->/{rec=""; f=1} f{rec = rec $0 ORS} /Content-Length:/{ if (f && (rec !~ "REGISTER")) printf "%s",rec; f=0}' file
If it doesn't do what you want, provide more info on what you want along with sample input and output.
To break down the above, here's each statement on separate lines with some comments:
awk '
/<--|-->/ {rec=""; f=1}   # find the start of the record, reset the string that will hold it,
                          # and set a flag to indicate we are processing a record
f {rec = rec $0 ORS}      # append the current line to the end of the string holding the record
/Content-Length:/ {       # find the end of the record
    if (f && (rec !~ "REGISTER"))   # print the record if it does not contain "REGISTER"
        printf "%s", rec
    f=0                   # clear the "found record" indicator
}
' file
and if you have text between your records that you'd want printed, just add a test for the "found" flag not being set and invoke the default action of printing the current record (!f;)
awk '/<--|-->/{rec=""; f=1} f{rec = rec $0 ORS} !f; /Content-Length:/{ if (f && (rec !~ "REGISTER")) printf "%s",rec; f=0}' file
This might work for you (GNU sed);
sed '/<--\|-->/!b;:a;/Content-Length/!{$!{N;ba}};//{/REGISTER/d}' file
/<--\|-->/!b if a line does not contain <-- or --> print it
:a;/Content-Length/!{$!{N;ba}} keep appending lines until the string Content-Length or the end of file is encountered.
//{/REGISTER/d} if the line(s) read in contains Content-Length and REGISTER delete it/them else print it/them as normal.
If I get what you need correctly, you want to filter the blocks out; that is, this only prints the blocks:
tail -f logfile | sed -n '/\(<--\|-->\)/,/Content-Length:/ p'
If you want to delete it:
tail -f logfile | sed '/\(<--\|-->\)/,/Content-Length:/ d'

Delete n1 previous lines and n2 lines following with respect to a line containing a pattern

sed -e '/XXXX/,+4d' fv.out
I have to find a particular pattern in a file and delete 5 lines above and 4 lines below it simultaneously. I found out that the line above removes the line containing the pattern and four lines below it.
sed -e '/XXXX/,~5d' fv.out
In the sed manual it was given that ~ represents the lines which are followed by the pattern. But when I tried it, it was the lines following the pattern that were deleted.
So, how do I delete 5 lines above and 4 lines below a line containing the pattern simultaneously?
One way using sed, assuming that the matches of the pattern are not too close to each other:
Content of script.sed:
## If line doesn't match the pattern...
/pattern/ ! {
    ## Append line to 'hold space'.
    H
    ## Copy content of 'hold space' to 'pattern space' to work with it.
    g
    ## If there are more than 5 lines saved, print and remove the first
    ## one. It's like a FIFO.
    /\(\n[^\n]*\)\{6\}/ {
        ## Delete the first '\n' automatically added by previous 'H' command.
        s/^\n//
        ## Print until first '\n'.
        P
        ## Delete data printed just before.
        s/[^\n]*//
        ## Save updated content to 'hold space'.
        h
    }

    ### Added to fix an error pointed out by potong in comments.
    ### =======================================================
    ## If last line, print lines left in 'hold space'.
    $ {
        x
        s/^\n//
        p
    }
    ### =======================================================

    ## Read next line.
    b
}

## If line matches the pattern...
/pattern/ {
    ## Remove all content of 'hold space'. It has the five previous
    ## lines, which won't be printed.
    x
    s/^.*$//
    x
    ## Read next four lines and append them to 'pattern space'.
    N ; N ; N ; N
    ## Delete all.
    s/^.*$//
}
Run like:
sed -nf script.sed infile
A solution using awk:
awk '$0 ~ "XXXX" { lines2del = 5; nlines = 0; }
nlines == 5 { print lines[NR%5]; nlines-- }
lines2del == 0 { lines[NR%5] = $0; nlines++ }
lines2del > 0 { lines2del-- }
END { while (nlines-- > 0) { print lines[(NR - nlines) % 5] } }' fv.out
Update:
This is the script explained:
I remember the last 5 lines in the array lines using rotating indexes (NR%5; NR is the record number, which in this case is the line number).
If I find the pattern in the current line ($0 ~ "XXXX"; $0 being the current record, in this case a line, and ~ being the extended regular expression match operator), I reset the number of lines read and note that I have 5 lines to delete (including the current line).
If I have already read 5 lines, I print the oldest buffered line (the one read 5 lines earlier).
If I do not have lines to delete (which is also true once I have read 5 lines), I put the current line in the buffer and increment the number of lines. Note how the number of lines is decremented and then incremented when a line is printed.
If lines need to be deleted, I do not print anything and decrement the number of lines to delete.
At the end of the script, I print all the lines that are in the array.
My original version of the script was the following, but I ended up optimizing it to the above version:
awk '$0 ~ "XXXX" { lines2del = 5; nlines = 0; }
lines2del == 0 && nlines == 5 { print lines[NR%5]; lines[NR%5] }
lines2del == 0 && nlines < 5 { lines[NR%5] = $0; nlines++ }
lines2del > 0 { lines2del-- }
END { while (nlines-- > 0) { print lines[(NR - nlines) % 5] } }' fv.out
awk is a great tool! I strongly recommend that you find a tutorial on the net and read it. One important thing: awk works with extended regular expressions (ERE). Their syntax is a little different from the basic regular expressions (BRE) used in sed, but everything that can be done with BRE can be done with ERE.
The idea is to read 5 lines without printing them. If you find the pattern, delete the unprinted lines and the 4 lines below. If you do not find the pattern, remember the current line and print the 1st unprinted line. At the end, print what is still unprinted.
sed -n -e '/XXXX/,+4{x;s/.*//;x;d}' -e '1,5H' -e '6,${H;g;s/\n//;P;s/[^\n]*//;h}' -e '${g;s/\n//;p;d}' fv.out
Of course, this only works if you have one occurrence of your pattern in the file. If you have many, you need to read 5 new lines after finding your pattern, and it gets complicated if you again have your pattern in those lines. In this case, I think sed is not the right tool.
This might work for you:
sed 'H;$!d;g;s/\([^\n]*\n\)\{5\}[^\n]*PATTERN\([^\n]*\n\)\{5\}//g;s/.//' file
or this:
awk --posix -vORS='' -vRS='([^\n]*\n){5}[^\n]*PATTERN([^\n]*\n){5}' 1 file
a more efficient sed solution:
sed ':a;/PATTERN/,+4d;/\([^\n]*\n\)\{5\}/{P;D};$q;N;ba' file
If you are happy to output the result to a file instead of stdout, vim can do it quite efficiently:
vim -c 'g/pattern/-5,+4d' -c 'w! outfile|q!' infile
or
vim -c 'g/pattern/-5,+4d' -c 'x' infile
to edit the file in-place.

bash script for file search and replace

Hey, I'm trying to write a little bash script. It should copy a dir and all files in it. Then it should search each file and dir in this copied dir for a string (e.g. #ForTestingOnly) and save that line number. Then it should go on and count each { and }, and as soon as the numbers are equal it should save the line number again. => it should delete all the lines between these 2 numbers.
I'm trying to make a script which searches for all these annotations and then deletes the method which directly follows each one.
Thanks for any help...
so far I have:
echo "please enter dir"
read dir
newdir="$dir""_final"
cp -r $dir $newdir
cd $newdir
grep -lr -E '#ForTestingOnly' * | xargs sed -i 's/#ForTestingOnly//g'
Now with grep I can search for and strip out the #ForTestingOnly annotation, but I'd like to delete it and the following method...
Give this a try. It's oblivious to braces in comments and literals, though, as David Gelhar warned. It only finds and deletes the first occurrence of the "#ForTestingOnly" block (under the assumption that there will only be one anyway).
#!/bin/bash
find . -maxdepth 1 | while read -r file
do
    open=0 close=0
    # start=$(sed -n '/#ForTestingOnly/{=;q}' "$file")
    while read -r line
    do
        case $line in
            *{*)  (( open++ )) ;;
            *}*)  (( close++ )) ;;
            '')   : ;;   # skip blank lines
            *)    # these lines contain the line number that the sed "=" command printed
                  if (( open == close ))
                  then
                      break
                  fi
                  ;;
        esac
    # split braces onto separate lines dropping all other chars
    # print the line number once per line that contains either { or }
    # done < <(sed -n "$start,$ { /[{}]/ s/\([{}]\)/\1\n/g;ta;b;:a;p;=}" "$file")
    done < <(sed -n "/#ForTestingOnly/,$ { /[{}]/ s/\([{}]\)/\1\n/g;ta;b;:a;p;=}" "$file")
    end=$line
    # sed -i "${start},${end}d" "$file"
    sed -i "/#ForTestingOnly/,${end}d" "$file"
done
Edit: Removed one call to sed (by commenting out and replacing a few lines).
Edit 2:
Here's a breakdown of the main sed line:
sed -n "/#ForTestingOnly/,$ { /[{}]/ s/\([{}]\)/\1\n/g;ta;b;:a;p;=}" "$file"
-n - only print lines when explicitly requested
/#ForTestingOnly/,$ - from the line containing "#ForTestingOnly" to the end of the file
s/ ... / ... /g perform a global (per-line) substitution
\( ... \) - capture
[{}] - the characters that appear in the list between the square brackets
\1\n - substitute what was captured plus a newline
ta - if a substitution was made, branch to label "a"
b - branch (no label means "go to the end and begin the per-line cycle again for the next line"). This branch functions as an "else" for the ta; I could have used T instead of ta;b;:a, but some versions of sed don't support T
:a - label "a"
p - print the line (actually, print the pattern buffer which now consists of possibly multiple lines with a "{" or "}" on each one)
= - print the current line number of the input file
The second sed command simply says to delete the lines starting at the one that has the target string and ending at the line found by the while loop.
The sed command at the top which I commented out says to find the target string and print the line number it's on and quit. That line isn't necessary since the main sed command is taking care of starting in the right place.
The inner while loop looks at the output of the main sed command and increments counters for each brace. When the counts match, it stops.
The outer while loop steps through all the files in the current directory.
I fixed the bugs in the old version. The new version has two scripts: an awk script and a bash driver.
The driver is:
#!/bin/bash

AWK_SCRIPT=ann.awk

for i in $(find . -type f -print); do
    while [ 1 ]; do
        cmd=$(awk -f $AWK_SCRIPT $i)
        if [ -z "$cmd" ]; then
            break
        else
            eval $cmd
        fi
    done
done
The new awk script is:
BEGIN {
    # line number where we will start deleting
    start = 0;
}
{
    # check current line for the annotation
    # we're looking for
    if ($0 ~ /#ForTestingOnly/) {
        start = NR;
        found_first_open_brace = 0;
        num_open = 0;
        num_close = 0;
    }
    if (start != 0) {
        if (num_open == num_close && found_first_open_brace == 1) {
            print "sed -i \'\' -e '" start "," NR " d' " ARGV[1];
            start = 0;
            exit;
        }
        for (i = 1; i <= length($0); i++) {
            c = substr($0, i, 1);
            if (c == "{") {
                found_first_open_brace = 1;
                num_open++;
            }
            if (c == "}") {
                num_close++;
            }
        }
    }
}
Set the path to the awk script in the driver then run the driver in the root dir.
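A hypothetical run, assuming the driver is saved as driver.sh and sits next to ann.awk inside the copied directory you want to clean:
cd myproject_final        # the copy created earlier with cp -r
bash driver.sh            # emits and evals one sed command per annotated method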
