I'm trying to replace the nth occurrence of a substring in a file. I tried to achieve this using sed but all attempts failed to give me the desired output. Some of the attempts are:
sed 's/old/new/g'
sed 's/old/new/3'
sed 's/old/new/3g'
The most common usage of sed is to perform a replacement such as
sed 's/foo/bar/' file
This will replace the first occurrence of the string foo by the string bar and it will do this for every line in file.
If you want to replace the 3rd occurrence of the string foo only, but do this for every line, then you can write:
sed 's/foo/bar/3' file
Finally, if you want to replace all occurrences, then you use:
sed 's/foo/bar/g' file
Any combination such as
sed 's/foo/bar/3g' file
results in unspecified behaviour: POSIX leaves the combination of a number and g undefined (GNU sed happens to replace the 3rd and every following occurrence, but you should not rely on that).
If you want to replace the nth occurrence in a file, then sed is not the right tool; perl or awk might be better.
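For example, a minimal perl sketch (assuming the file fits in memory), replacing only the 3rd occurrence of foo across the whole file:
perl -0777 -pe '$c=0; s/foo/++$c == 3 ? "bar" : $&/ge' file
The -0777 switch slurps the whole file into one string, so the counter runs across line boundaries.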
If you know you have at most one occurrence of "foo" per line, you can do (with n set to the occurrence you want, e.g. 3):
awk -v n=3 '/foo/{c++}(c==n){sub("foo","bar")}1' file
If more than a single occurrence per line might appear, it becomes a bit more tricky; various solutions are possible, for example (here n=5 selects the 5th occurrence overall):
awk 'BEGIN{FS="foo";OFS="bar";n=5}
(c<n) && (c+NF-1>=n) {
for(i=1;i<NF;++i) printf "%s%s", $i, ((++c==n) ? OFS : FS); print $NF; next
}
{c+=NF-1; print}' file
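As a usage sketch (the sample input and the file name nthsub.awk are invented for illustration), save the program above as nthsub.awk; with n=5 the 5th occurrence of foo overall, i.e. the 2nd one on the 2nd line, is the one replaced:
printf 'foo x foo y foo\nfoo x foo y foo\n' > file
awk -f nthsub.awk file
foo x foo y foo
foo x bar y foo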
I would like to remove the string between ":" and the first "|" using sed.
input:
|abc:1.2.3|def|
output from sed:
|abc|def|
I managed to come up with sed 's|\(:\)[^|]*|\1|', but this sed command does not remove the first character (":"). How can I modify this command to also remove the colon?
You don't need to capture the : in your pattern and reuse it in the substitution.
You should keep it simple:
s='|abc:1.2.3|def|'
sed 's/:[^|]*//' <<< "$s"
|abc|def|
: matches a colon and [^|]* matches 0 or more non-pipe characters
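If the line can contain several such :...| segments (an assumption beyond the shown sample; the second sample string below is invented), add the g flag so all of them are removed:
sed 's/:[^|]*//g' <<< '|abc:1.2.3|def:4.5|'
|abc|def|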
1st solution: with awk, you could try the following awk program.
awk 'match($0,/:[^|]*/){print substr($0,1,RSTART-1) substr($0,RSTART+RLENGTH)}' Input_file
Explanation: the match function of awk is matching from : up to the first occurrence of | here. Whenever match finds its regex, it sets the built-in variables RSTART and RLENGTH; based on those we print the substring before the match and the substring after it, which drops the matched part and prints everything else, as per the required output in the question.
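As a quick illustration (not part of the solution), you can print what RSTART and RLENGTH hold for the shown sample:
echo '|abc:1.2.3|def|' | awk 'match($0,/:[^|]*/){print RSTART, RLENGTH}'
5 6
i.e. the match ":1.2.3" starts at column 5 and is 6 characters long.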
2nd solution: using the FPAT option in GNU awk, try the following, written for your shown samples only. Each field is defined as a | followed by the text up to the next : or |, so printing the fields back to back drops the :1.2.3 part.
awk -v FPAT='[|][^:|]*' '{print $1 $2 $3}' Input_file
I am trying to do this:
I have a file with content like below:
file:
abcdefgh
I am looking for a way to do this:
file:
aBCdefgh
So, make the 2nd and 3rd letters uppercase in the file itself; I have to do multiple such conversions at different positions in a string in the file. Can someone please help me with how to do this?
I found something like this below, but it only works on the first character of the string in the file:
sed -i 's/^./\U&/' file
output:
Abcdefgh
Thanks much!
Change your sed approach to the following:
sed -i 's/\(.\)\(..\)/\1\U\2/' file
$ cat file
aBCdefgh
matching section:
\(.\) - match the 1st char of the string into the 1st captured group
\(..\) - match the next 2 chars placing into the 2nd captured group
replacement section:
\1 - points to the 1st parenthesized group \1 i.e. the 1st char
\U\2 - uppercase the characters from the 2nd captured group \2
Bonus approach for "I want to capitalize the 105th & 106th characters":
sed -Ei 's/(.{104})(..)/\1\U\2/' file
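A hedged generalization of the same idea, keeping the positions in shell variables (the names P and Q are invented here; GNU sed is assumed for -E and \U):
P=2; Q=3   # uppercase characters P through Q (1-based)
sed -E "s/(.{$((P-1))})(.{$((Q-P+1))})/\1\U\2/" file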
awk on duty.
echo "abcdefgh" | awk '{print substr($0,1,1) toupper(substr($0,2,2)) substr($0,4)}'
Output will be as follows.
aBCdefgh
In case you have an Input_file and you want to save the edits into the same Input_file:
awk '{print substr($0,1,1) toupper(substr($0,2,2)) substr($0,4)}' Input_file > temp_file && mv temp_file Input_file
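If you have GNU awk 4.1 or newer, the same in-place edit can also be done without a temp file (a sketch; the inplace extension is a gawk-only feature):
gawk -i inplace '{print substr($0,1,1) toupper(substr($0,2,2)) substr($0,4)}' Input_file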
Explanation: do not run the following; it only repeats the code above with comments for explanation purposes.
echo "abcdefgh" ##Using echo to print a string on standard output.
| ##A pipe (|) passes one command's standard output as standard input to the next command; here echo's output is passed to awk.
awk '{ ##Starting awk here.
##print in awk prints variables, strings, etc.
##substr is awk's built-in function for getting specific parts of a line or variable. Its syntax is substr(string, start, number of characters); if the number of characters is omitted, it takes everything from start to the end of the line.
##toupper is another awk built-in that converts the text passed to it to UPPER CASE; here the 2nd and 3rd characters are passed to it, as per the OP's request.
print substr($0,1,1) toupper(substr($0,2,2)) substr($0,4)}'
So I know how to print lines from one pattern to another pattern:
sed -ne '/pattern_1/,/pattern_2/ p'
Which works for input that looks like this:
random_line_1
pattern_1
random_line_2
random_line_3
random_line_4
random_line_5
pattern_2
random_line_6
So that lines from pattern_1 to pattern_2 get printed.
But how can I print lines until the second occurrence of the second pattern:
random_line_1
pattern_1
pattern_2
random_line_3
random_line_4
random_line_5
pattern_2
random_line_6
I want to print the lines from pattern_1 to the second pattern_2 so that I get this as output:
pattern_1
pattern_2
random_line_3
random_line_4
random_line_5
pattern_2
More specifically, I am trying to capture text starting at a header and surrounded by empty lines, where there may or may not be text before the header and after the second empty line (pattern_1 is the header and pattern_2 is the empty line):
Header:
<empty line>
Some_text
Some_more_text
Even_more_text
When_will_it_stop
<empty line>
Preferably, a sed answer would work best since I know a little bit about how it works, but I would be open to awk submissions, as long as every piece of the command is explained.
I'm not at a machine on which to test, but you should be able to do something very simple to understand just with grep and its "context" switches (-A, -B and -C).
So to delete all lines before pattern1, simply find pattern1 and all lines after (-A):
grep -A 9999 "pattern1" YourFile
Then, in the result, search for the second occurrence (-m2) of pattern2 and everything before (-B):
grep -A 9999 "pattern1" YourFile | grep -B 9999 -m2 "pattern2"
A simpler sed example for your specific case:
sed -ne '/pattern_1/,/pattern_2/{/pattern_1/N;p}'
This just says that within the range, suck the line after the header pattern_1 into the pattern space and print it. This means that if the line after pattern_1 is pattern_2, that occurrence of pattern_2 will not count for the range.
In other words:
sed -ne '/Header/,/^$/{/Header/N;p}'
Could you please try the following:
awk '/pattern_1/{a=1}
a<3 && a;
/pattern_2/{a++}
' Input_file
Adding the code with an explanation as follows too.
awk '/pattern_1/{a=1} ##Search for the string pattern_1 in a line; if it is present, set variable a to 1.
a<3 && a; ##Check whether a is less than 3 and non-zero; if both conditions are TRUE, no action is defined, so the default action, printing the current line of the Input_file, happens.
/pattern_2/{a++} ##Search for the string pattern_2 in a line and increment variable a by 1 each time it is seen.
' Input_file ##mentioning Input_file name over here.
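A small variation of the same program (a sketch), with the number of pattern_2 occurrences to keep passed in as a variable n instead of hard-coding 2:
awk -v n=2 '/pattern_1/{a=1} a<=n && a; /pattern_2/{a++}' Input_file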
This might work for you (GNU sed):
sed -n '/pattern_1/{:a;N;s/pattern_2/&/2p;Ta}' file
From the line matching pattern_1, gather up the following lines in the pattern space until the substitution on the second occurrence of pattern_2 succeeds and the pattern space has been printed.
For your particular example, use:
sed -n '/Header:/{:a;N;s/\n$/&/mp2;Ta}' file
N.B. the m flag allows ^ and $ to match at line boundaries within the multi-line pattern space. The T command is the opposite of the t command: t jumps to the label :x (where x is user defined) when the previous substitution succeeded, whereas T jumps when it did not.
I have an example dataset separated by semicolons, as below:
123;IZMIR;ZMIR;123
abc;ANKAR;aaa;999
AAA;ZMIR;ZMIR;bob
BBB;ANKR;RRRR;ABC
I would like to replace values in a specified column. Let's say I want to change "ZMIR" to "IZMIR", but only in the third column; the ones in the second column must stay the same.
Desired output is:
123;IZMIR;IZMIR;123
abc;ANKAR;aaa;999
AAA;ZMIR;IZMIR;bob
BBB;ANKR;RRRR;ABC
I tried:
sed 's/;ZMIR;/;IZMIR;/' file.txt
The problem is that it changes matching values anywhere in the file, not just in the 3rd column.
I also tried:
awk -F";" '{gsub("ZMIR",";IZMIR;",$2)}1'
and here it specifies the column, but it somehow adds spaces:
123 I;IZMIR; ZMIR 123
abc;ANKAR;aaa;999
AAA ;IZMIR; ZMIR bob
BBB;ANKR;RRRR;ABC
sed doesn't know about columns, awk does (but in awk they're called "fields"):
awk 'BEGIN{FS=OFS=";"} $3=="ZMIR"{$3="IZMIR"} 1' file
Note that since the above is doing a literal string search and replace, you don't have to worry about regexp or backreference metacharacters in the search or replacement strings, unlike in a sed solution (see https://stackoverflow.com/a/29626460/1745001).
wrt what you tried previously with awk:
awk -F";" '{gsub("ZMIR",";IZMIR;",$2)}1'
That says: find "ZMIR" in the 2nd semi-colon-separated field and replace it with ";IZMIR;" and also change every existing ";" on the line to a blank character.
To learn awk, read the book Effective Awk Programming, 4th Edition, by Arnold Robbins.
If you know exactly where the word to replace is located and how many occurrences there are in that line, you could use sed with something like:
sed '3 s/ZMIR/IZMIR/2'
With the 3 at the beginning you are selecting the third line, and with the 2 at the end the second occurrence. However, the awk solution is the better one; this is just so you know how it works in sed ;)
This might work for you (GNU sed):
sed -r 's/[^;]+/\n&\n/3;s/\nZMIR\n/IZMIR/;s/\n//g' file
Surround the required field by unique markers then replace the required string (plus markers) by the replacement string. Finally remove the unique markers.
Perl on Command Line
Input
123;IZMIR;ZMIR;123
abc;ANKAR;aaa;999
AAA;ZMIR;ZMIR;bob
BBB;ANKR;RRRR;ABC
$. == 1 means the first row, so it does the work only for that row. Likewise, the second row would be $. == 2.
$F[0] means the first column, and it only operates on that column. Likewise, the fourth column would be $F[3].
-a -F\; means that the delimiter is ;
What you want:
perl -a -F\; -pe 's/$F[0]/***/ if $. == 1' your-file
output
***;IZMIR;ZMIR;123
abc;ANKAR;aaa;999
AAA;ZMIR;ZMIR;bob
BBB;ANKR;RRRR;ABC
for row == 2 and column == 2
perl -a -F\; -pe 's/$F[1]/***/ if $. == 2' your-file
123;IZMIR;ZMIR;123
abc;***;aaa;999
AAA;ZMIR;ZMIR;bob
BBB;ANKR;RRRR;ABC
Also without -a -F
perl -pe 's/123/***/ if $. == 1' your-file
output
***;IZMIR;ZMIR;123
abc;ANKAR;aaa;999
AAA;ZMIR;ZMIR;bob
BBB;ANKR;RRRR;ABC
If you want to edit the file, you can add the -i option, which means edit in-place. And that's it: it simply finds, replaces and saves in the same file.
perl -i -a -F\; and so on
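For example, to keep a backup copy while editing in place (the .bak suffix is just an illustration):
perl -i.bak -a -F\; -pe 's/$F[1]/***/ if $. == 2' your-file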
You need to include some absolute references in the line:
^ for beginning of the line
unequivocal separation pattern
^.*ZMIR and ^[^;]*;ZMIR give different results: sed takes the longest possible match, so in the first the .* swallows everything up to the last ZMIR, while the second stops at the first field separator.
Specific:
sed 's/^\([^;]*;[^;]*;\)ZMIR;/\1IZMIR;/' YourFile
Generic, where Old and New are shell variables (remember these are regex values, so regex rules apply, e.g. some characters need escaping):
Old='ZMIR'
New='IZMIR'
sed 's/^\(\([^;]*;\)\{2\}\)'"${Old}"';/\1'"${New}"';/' YourFile
In this simple case sed is an alternative, but awk is better for a complex or long line.
I would like to ignore all lines which occur before a match in bash (also ignoring the matched line). An example of input could be
R1-01.sql
R1-02.sql
R1-03.sql
R1-04.sql
R2-01.sql
R2-02.sql
R2-03.sql
and if I match R2-01.sql in this already sorted input I would like to get
R2-02.sql
R2-03.sql
There are many ways possible. For example, assuming that your input is in list.txt:
PATTERN="R2-01.sql"
sed "0,/$PATTERN/d" <list.txt
Because the 0,/pattern/ address works only in GNU sed (e.g. it doesn't work on OS X), here is a workaround. ;)
PATTERN="R2-01.sql"
(echo "dummy-line-to-the-start" ; cat - ) < list.txt | sed "1,/$PATTERN/d"
This will add one dummy line to the start, so the real pattern can only be on line 2 or higher, and therefore 1,/pattern/ will work - deleting everything from line 1 (the dummy one) up to the pattern.
Or you can print the lines from the pattern onward and delete the 1st, like:
sed -n '/pattern/,$p' < list.txt | sed '1d'
with awk, e.g.:
awk '/pattern/,0{if (!/pattern/)print}' < list.txt
or, my favorite, use the following perl command:
perl -ne 'print unless 1../pattern/' < list.txt
This deletes just the 1st line when the pattern is on the 1st line...
Another solution is reverse-delete-reverse:
tail -r < list.txt | sed '/pattern/,$d' | tail -r
If you have the tac command, use it instead of tail -r. The interesting thing is that /pattern/,$d works when the pattern is on the last line, but 1,/pattern/d doesn't when it is on the first.
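With tac, the reverse-delete-reverse idea looks like this (a sketch, assuming GNU coreutils):
tac list.txt | sed '/pattern/,$d' | tac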
How to ignore all lines before a match occurs in bash?
The question headline and your example don't quite match up.
Print all lines from "R2-01.sql" in sed:
sed -n '/R2-01.sql/,$p' input_file.txt
Where:
-n suppresses printing the pattern space to stdout
/ starts and ends the pattern to match (regular expression)
, separates the start of the range from the end
$ addresses the last line in the input
p echoes the pattern space in that range to stdout
input_file.txt is the input file
Print all lines after "R2-01.sql" in sed:
sed '1,/R2-01.sql/d' input_file.txt
1 addresses the first line of the input
, separates the start of the range from the end
/ starts and ends the pattern to match (regular expression)
d deletes the pattern space in that range
input_file.txt is the input file
Everything not deleted is echoed to stdout.
This is a little hacky, but it's easy to remember for quickly getting the output you need:
$ grep -A99999 $match $file
Obviously you need to pick a value for -A that's large enough to match all contents; if you use a too-small value the output will be silently truncated.
To ensure you get all output you can do:
$ grep -A$(wc -l < $file) $match $file
Of course at that point you might be better off with the sed solutions, since they don't require two reads of the file.
And if you don't want the matching line itself, you can simply pipe this command into tail -n +2 to skip the first line of output.
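Putting those pieces together (a sketch, assuming GNU coreutils so that wc -l prints a bare number when reading from stdin): compute -A from the line count, and drop the matching line itself with tail:
grep -A "$(wc -l < "$file")" -e "$match" "$file" | tail -n +2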
awk -v pattern=R2-01.sql '
print_it {print}
$0 ~ pattern {print_it = 1}
'
You can do it with this, but I think jomo666's answer is better.
sed -nr '/R2-01.sql/,${/R2-01/d;p}' <<END
R1-01.sql
R1-02.sql
R1-03.sql
R1-04.sql
R2-01.sql
R2-02.sql
R2-03.sql
END
Perl is another option:
perl -ne 'if ($f){print} elsif (/R2-01\.sql/){$f++}' sql
To pass in the regex as an argument, use -s to enable a simple argument parser
perl -sne 'if ($f){print} elsif (/$r/){$f++}' -- -r=R2-01\\.sql file
This can be accomplished with grep, by printing a large enough context following the $match. This example will output the first matching line followed by 999,999 lines of "context".
grep -A999999 $match $file
For added safety (in case the $match begins with a hyphen, say) you should use -e to force $match to be used as an expression.
grep -A999999 -e "$match" "$file"