match multiple conditions with GNU sed - bash

I'm using sed to replace values in other bash scripts, such as:
somedata="$(<somefile.sh)"
somedata=`sed 's/ ==/==/g' <<< $somedata` # [space]== becomes ==
somedata=`sed 's/== /==/g' <<< $somedata` # ==[space] becomes ==
The same for ||, &&, !=, etc. I think steps should be reduced with the right regex match. The operator does not need surrounding spaces, but may have a space before and after, only before, or only after. Is there a way to handle all of these with one sed command?
There are many other conditions not mentioned also. The script takes more time to execute than desired.
The goal is to reduce the overall execution time so I am hoping to reduce the number of commands used with clever regex to match multiple conditions.
I'm also considering tr, awk or perl - whichever is fastest?

With GNU sed, you can use the | (or) operator:
$ sed -r 's/ *(&&|\|\|) */\1/g' <<< "foo && bar || baz"
foo&&bar||baz
*(&&|\|\|) *: search for zero or more spaces, followed by any of the |-separated strings, followed by zero or more spaces
the matching strings are captured and output using a backreference
Edit:
As pointed out in comments, you can use the -E flag with GNU sed in place of -r. Your command will be more portable:
sed -E 's/ *(\&\&|\|\|) */\1/g'
As GNU sed also supports \| alternation operator with Basic Regular Expressions, you can use it for better readability:
sed 's/ *\(&&\|||\) */\1/g'
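The same idea extends to the other operators mentioned in the question (==, !=, &&, ||). A sketch, assuming GNU sed and that these character sequences only ever appear as operators in the scripts being rewritten:
$ sed -E 's/ *(&&|\|\||==|!=) */\1/g' <<< 'a == b && c != d || e'
a==b&&c!=d||e
Collapsing all the substitutions into one command means sed is started once instead of once per operator, which is typically where most of the time goes when a script runs many tiny sed invocations.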

You can chain multiple sed substitutions with the -e flag:
$ echo -n "test data here" | sed -e 's/test/TEST/' \
-e 's/data/HERE/' \
-e 's/here/DATA/'
TEST HERE DATA
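Applied to the question's own substitutions, the two separate sed calls collapse into a single invocation (a sketch using the variable name from the question):
somedata=$(sed -e 's/ ==/==/g' -e 's/== /==/g' <<< "$somedata")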

You can use a sedfile (-f option) along with the -i option (replace in place, no need to store the data in a variable):
sed -i -f mysedfile somefile.sh
mysedfile may contain expressions, 1 per line
s/ *&& */\&\&/g
s/ *== */==/g
(or use the -e option to pass several expressions, but if you have a lot of them, it will quickly become unreadable)
BTW: the -i option creates a temporary file in the processed file's directory; in the end, if the operation succeeds, the original file is deleted and the temporary file is renamed to the original file name:
When the end of the file is reached, the temporary file is renamed
to the output file's original name. The extension, if supplied,
is used to modify the name of the old file before renaming the
temporary file, thereby making a backup copy.
so there's no I/O overhead with that option. No need at all to store in a variable.
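If you want to keep a backup of the original alongside the edited file, pass a suffix to -i (a quick sketch):
sed -i.bak -f mysedfile somefile.sh   # the original is preserved as somefile.sh.bak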


How to remove comment lines (such as # bla bla) and empty lines (lines without characters) from a file with one sed command?
THX
lidia
If you're worried about starting two sed processes in a pipeline for performance reasons, you probably shouldn't be; it's still very efficient. But based on your comment that you want to do in-place editing, you can still do that with distinct commands (sed commands rather than invocations of sed itself).
You can either use multiple -e arguments or separate commands with a semicolon, something like (just one of these, not both):
sed -i -e 's/#.*$//' -e '/^$/d' fileName
sed -i 's/#.*$//;/^$/d' fileName
The following transcript shows this in action:
pax> printf 'Line # with a comment\n\n# Line with only a comment\n' >file
pax> cat file
Line # with a comment

# Line with only a comment
pax> cp file filex ; sed -i 's/#.*$//;/^$/d' filex ; cat filex
Line
pax> cp file filex ; sed -i -e 's/#.*$//' -e '/^$/d' filex ; cat filex
Line
Note how the file is modified in-place even with two -e options. You can see that both commands are executed on each line. The line with only a comment first has the comment removed and is then deleted entirely, because it's now empty.
In addition, the original empty line is also removed.
#paxdiablo has a good answer but it can be improved.
(1) The '/^$/d' clause only matches 100% blank lines.
If you want to also match lines that are entirely whitespace (spaces, tabs etc.) use this instead:
'/^\s*$/d'
(2) The 's/#.*$//' clause, in combination with '/^$/d', only fully removes comment lines that start with the # character in column 0.
If you want to also remove lines that have only whitespace before the first # use this instead:
'/^\s*#.*$/d'
The above criteria may not be universal (e.g. within a HEREDOC block, or in a Python multi-line string, the different approaches could be significant), but in many cases the conventional definition of "blank" lines includes whitespace-only lines, and "comment" lines include whitespace-then-#.
(3) Lastly, on OSX at least, the #paxdiablo solution, in which the first clause turns comment lines into blank lines and the second clause strips blank lines (including what were originally comments), doesn't work. It seems to be more portable to make both clauses /d delete actions as I've done.
The revised command incorporating the above is:
sed -e '/^\s*#.*$/d' -e '/^\s*$/d' inputFile
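A quick check of the revised command on made-up input (an indented comment, a whitespace-only line and a normal line):
$ printf '  # indented comment\n\t\nkeep this line\n' | sed -e '/^\s*#.*$/d' -e '/^\s*$/d'
keep this line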
This tiny jewel removes all # comments, no matter where they begin in a line (see caution below):
sed -e 's/\s*#.*$//'
Example:
text="
this is a # test
#this is a test
#this is a #test
this is # another #test
"
$echo "$text" | sed -e 's/\s*#.*$//'
this is a
this is
Next this removes any resulting blank lines:
$echo "$text" | sed -e 's/\s*#.*$//' | sed -e '/^\s*$/d'
Caution: Depending on the syntax and/or interpretation of the lines you're processing, this might not be an appropriate solution, as it just blindly removes the rest of the line from the '#' on, even if the '#' is part of your data or code. However, for use cases where you'll never use a hash except as an end-of-line comment, it works fine. So just as with all coding, context must be taken into consideration.
Alternative variant, using grep:
cat file.txt | grep -Ev '(#.*$)|(^$)'
you can use awk
awk 'NF{gsub(/^[ \t]*#/,"");print}' file
#paxdiablo's first example is very good, except it doesn't change the file, it just outputs the result. If you want to change it in place:
sudo sed -i 's/#.*$//;/^$/d' inputFile
On (one of) my linux boxes, sed understands extended regular expressions with the -r option, so:
sed -r '/(^\s*#)|(^\s*$)/d' squid.conf.installed
is very useful for showing all non-blank, non comment lines.
The regex matches start of line followed by zero or more spaces or tabs, followed by either a hash or end of line, and deletes the matching lines from the input.

Text processing in bash - extracting information between multiple HTML tags and outputting it into CSV format [duplicate]

I can't figure out how to tell sed to make dot match a newline:
echo -e "one\ntwo\nthree" | sed 's/one.*two/one/m'
I expect to get:
one
three
instead I get original:
one
two
three
sed is a line-based tool. I don't think there is such an option.
You can use h/H (hold) and g/G (get).
$ echo -e 'one\ntwo\nthree' | sed -n '1h;1!H;${g;s/one.*two/one/p}'
one
three
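If the one-liner is hard to read, the same hold-space approach can be spelled out as a small sed script file (a sketch; save it as, say, join.sed and run sed -n -f join.sed file):
# line 1: overwrite the hold space with the first line
1h
# every other line: append to the hold space (H adds a newline first)
1!H
# on the last line: copy the accumulated text back and substitute across the newlines
${
  g
  s/one.*two/one/p
}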
Maybe you should try vim
:%s/one\_.*two/one/g
If you use GNU sed, you can match any character, including line break chars, with a mere ., as the documentation says:
.
Matches any character, including newline.
All you need to use is a -z option:
echo -e "one\ntwo\nthree" | sed -z 's/one.*two/one/'
# => one
# three
See the online sed demo.
However, one.*two might not be what you need since * is always greedy in POSIX regex patterns. So, one.*two will match the leftmost one, then any 0 or more chars as many as possible, and then the rightmost two. If you need to remove one, then any 0+ chars as few as possible, and then the leftmost two, you will have to use perl:
perl -i -0 -pe 's/one.*?two//sg' file # Non-Unicode version
perl -i -CSD -Mutf8 -0 -pe 's/one.*?two//sg' file # S&R in a UTF8 file
The -0 option enables the slurp mode so that the file could be read as a whole and not line-by-line, -i will enable inline file modification, s will make . match any char including line break chars, and .*? will match any 0 or more chars as few as possible due to a non-greedy *?. The -CSD -Mutf8 part make sure your input is decoded and output re-encoded back correctly.
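A quick illustration of the greedy vs. non-greedy difference, on a made-up string containing two occurrences of two:
$ printf 'one a two b two\n' | perl -0 -pe 's/one.*two/X/s'
X
$ printf 'one a two b two\n' | perl -0 -pe 's/one.*?two/X/s'
X b two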
You can use python this way:
$ echo -e "one\ntwo\nthree" | python -c 'import re, sys; s=sys.stdin.read(); s=re.sub("(?s)one.*two", "one", s); print s,'
one
three
$
This reads the entire python's standard input (sys.stdin.read()), then substitutes "one" for "one.*two" with dot matches all setting enabled (using (?s) at the start of the regular expression) and then prints the modified string (the trailing comma in print is used to prevent print from adding an extra newline).
This might work for you:
<<<$'one\ntwo\nthree' sed '/two/d'
or
<<<$'one\ntwo\nthree' sed '2d'
or
<<<$'one\ntwo\nthree' sed 'n;d'
or
<<<$'one\ntwo\nthree' sed 'N;N;s/two.//'
Sed does match all characters (including the \n) using a dot ., but usually the \n has already been stripped off as part of the cycle, so it is no longer present in the pattern space to be matched.
Only certain commands (N, H and G) preserve newlines in the pattern/hold space.
N appends a newline to the pattern space and then appends the next line.
H does exactly the same except it acts on the hold space.
G appends a newline to the pattern space and then appends whatever is in the hold space too.
The hold space is empty until you place something in it so:
sed G file
will insert an empty line after each line.
sed 'G;G' file
will insert 2 empty lines etc etc.
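A quick check of how N keeps the newline in the pattern space (GNU sed; the + just makes the joined newline visible):
$ printf 'one\ntwo\nthree\n' | sed 'N;s/\n/+/'
one+two
three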
How about two sed calls:
(get rid of the 'two' first, then get rid of the blank line)
$ echo -e 'one\ntwo\nthree' | sed 's/two//' | sed '/^$/d'
one
three
Actually, I prefer Perl for one-liners over Python:
$ echo -e 'one\ntwo\nthree' | perl -pe 's/two\n//'
one
three
The discussion below is based on GNU sed.
sed operates in a line-by-line manner, so it's not possible to tell it to make dot match a newline directly. However, there are some tricks that can implement this. You can use a loop structure (of sorts) to put all the text in the pattern space, and then do the operation.
To put everything in the pattern space, use:
:a;N;$!ba;
To make "dot match newline" indirectly, you use:
(\n|.)
So the result is:
root@u1804:~# echo -e "one\ntwo\nthree" | sed -r ':a;N;$!ba;s/one(\n|.)*two/one/'
one
three
root@u1804:~#
Note that in this case, (\n|.) matches newline and all characters. See below example:
root@u1804:~# echo -e "oneXXXXXX\nXXXXXXtwo\nthree" | sed -r ':a;N;$!ba;s/one(\n|.)*two/one/'
one
three
root@u1804:~#
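For reference, the slurp loop written out as a commented sed script (a sketch; run it with sed -r -f slurp.sed file, where slurp.sed is just an illustrative name):
# define a label named "a"
:a
# append the next input line (plus a newline) to the pattern space
N
# if this is not the last line ($!), branch back to label "a"
$!ba
# the whole file is now in the pattern space, so the substitution can span lines
s/one(\n|.)*two/one/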

Deleting first n rows and column x from multiple files using Bash script

I am aware that the "deleting n rows" and "deleting column x" questions have both been answered individually before. My current problem is that I'm writing my first bash script, and am having trouble making that script work the way I want it to.
file0001.csv (there are several hundred files like these in one folder)
Data number of lines 540
No.,Profile,Unit
1,1027.84,µm
2,1027.92,µm
3,1028,µm
4,1028.81,µm
Desired output
1,1027.84
2,1027.92
3,1028
4,1028.81
I am able to use sed and cut individually but for some reason the following bash script doesn't take cut into account. It also gives me an error "sed: can't read ls: No such file or directory", yet sed is successful and the output is saved to the original files.
sem2csv.sh
for files in 'ls *.csv' #list of all .csv files
do
sed '1,2d' -i $files | cut -f '1-2' -d ','
done
Actual output:
1,1027.84,µm
2,1027.92,µm
3,1028,µm
4,1028.81,µm
I know there may be awk one-liners but I would really like to understand why this particular bash script isn't running as intended. What am I missing?
The -i option of sed modifies the file in place. Your pipeline to cut receives no input because sed -i produces no output. Without this option, sed would write the results to standard output, instead of back to the file, and then your pipeline would work; but then you would have to take care of writing the results back to the original file yourself.
Moreover, single quotes inhibit expansion -- you are "looping" over the single literal string ls *.csv. The fact that you are not quoting it properly then causes the string to be subject to wildcard expansion inside the loop. So after variable interpolation, your sed command expands to
sed -i 1,2d ls *.csv
and then the shell expands *.csv because it is not quoted. (You should have been receiving a warning that there is no file named ls in the current directory, too.) You probably attempted to copy an example which used backticks (ASCII 96) instead of single quotes (ASCII 39) -- the difference is quite significant.
Anyway, the ls is useless -- the proper idiom is
for files in *.csv; do
sed '1,2d' "$files" ... # the double quotes here are important
done
Mixing sed and cut is usually not a good idea because you can express anything cut can do in terms of a simple sed script. So your entire script could be
for f in *.csv; do
sed -i -e '1,2d' -e 's/,[^,]*$//' "$f"
done
which says to remove the last comma and everything after it. (If your sed does not like multiple -e options, try with a semicolon separator: sed -i '1,2d;s/,[^,]*$//' "$f")
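If you would rather keep your original sed + cut combination, you just have to write the output back to the file yourself, for example via a temporary file. A sketch, where tail -n +3 plays the role of sed '1,2d':
for f in *.csv; do
  tmp=$(mktemp)
  tail -n +3 "$f" | cut -d ',' -f 1-2 > "$tmp" && mv "$tmp" "$f"
done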
You may use awk,
$ awk 'NR>2{sub(/,[^,]*$/,"",$0);print}' file
1,1027.84
2,1027.92
3,1028
4,1028.81
or
sed -i '1,2d;s/,[^,]*$//' file
1,2d; for deleting the first two lines.
s/,[^,]*$// removes the last comma part in remaining lines.

Replace all unquoted characters from a file bash

Using bash, how would one replace all unquoted characters from a file?
I have a system that I can't modify that spits out CSV files such as:
code;prop1;prop2;prop3;prop4;prop5;prop6
0,1000,89,"a1,a2,a3",33,,
1,,,"a55,a10",1,1 L,87
2,25,1001,a4,,"1,5 L",
I need this to become, for a new system being added
code;prop1;prop2;prop3;prop4;prop5;prop6
0;1000;89;a1,a2,a3;33;;
1;;;a55,a10;1;1 L;87
2;25;1001;a4;;1,5 L;
If the quotes can be removed after this substitution happens in one command it would be nice :) But I prefer clarity to complicated one-liners for future maintenance.
Thank you
With sed:
sed -e 's/,/;/g' -e ':loop; s/\("\)\([^;]*\);\([^"]*"\)/\1\2,\3/; t loop'
Test:
$ sed -e 's/,/;/g' -e ':loop; s/\("\)\([^;]*\);\([^"]*"\)/\1\2,\3/; t loop' yourfile
code;prop1;prop2;prop3;prop4;prop5;prop6
0;1000;89;"a1,a2,a3";33;;
1;;;"a55,a10";1;1 L;87
2;25;1001;a4;;"1,5 L";
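Since the question also asked to drop the quotes afterwards, one way (a sketch, keeping the same loop) is to append a final substitution that deletes them once the inner commas have been restored:
$ sed -e 's/,/;/g' -e ':loop; s/\("\)\([^;]*\);\([^"]*"\)/\1\2,\3/; t loop' -e 's/"//g' yourfile
code;prop1;prop2;prop3;prop4;prop5;prop6
0;1000;89;a1,a2,a3;33;;
1;;;a55,a10;1;1 L;87
2;25;1001;a4;;1,5 L;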
You want to use a csv parser. Parsing csv with shell tools is hard (you will encounter regular expressions soon, and they rarely get all cases).
There is one in almost every language. I recommend python.
You can also do this using excel/openoffice variants by opening the file and then saving with ; as the separator.
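As a sketch of the csv-parser route (input.csv and output.csv are placeholder names; the header line is already ;-separated, so it is copied through untouched and only the data rows are re-parsed):
head -n 1 input.csv > output.csv
tail -n +2 input.csv | python3 -c '
import csv, sys
# read comma-separated rows, write them back out semicolon-separated;
# fields containing commas no longer need quoting with the new delimiter
writer = csv.writer(sys.stdout, delimiter=";", lineterminator="\n")
for row in csv.reader(sys.stdin):
    writer.writerow(row)
' >> output.csv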
You can use sed:
echo '0,1000,89,"a1,a2,a3",33,,' | sed -e "s|\"||g"
This will replace " with the empty string (deletes it), and you can pipe another sed to replace the , with ;:
sed -e "s|,|;|g"
$ echo '0,1000,89,"a1,a2,a3",33,,' | sed -e "s|\"||g" | sed -e "s|,|;|g"
>> 0;1000;89;a1;a2;a3;33;;
Note that you can use any separator you want instead of | inside the sed command. For example, you can rewrite the first sed as:
sed -e "s-\"--g"

How to delete the string which is present in parameter from file in unix

I have redirected some strings into one parameter, for example: ab=jyoti,priya, pranit
I have one file : Name.txt which contains -
jyoti
prathmesh
John
Kelvin
pranit
I want to delete the records from the Name.txt file which are contained in the ab parameter.
Please suggest how this can be done.
If ab is a shell variable, you can easily turn it into an extended regular expression, and use it with grep -E:
grep -E -x -v "${ab//,/|}" Name.txt
The string substitution ${ab//,/|} returns the value of $ab with every , substituted with a | which turns it into an extended regular expression, suitable for passing as an argument to grep -E.
The -v option says to remove matching lines.
The -x option specifies that the match needs to cover the whole input line, so that a short substring will not cause an entire longer line to be removed. Without it, ab=prat would cause pratmesh to be removed.
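A quick run with the names from the question (assuming ab is set without stray spaces):
$ ab=jyoti,priya,pranit
$ grep -E -x -v "${ab//,/|}" Name.txt
prathmesh
John
Kelvin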
If you really require a sed solution, the transformation should be fairly trivial. grep -E -v -x 'aaa|bbb|ccc' is equivalent to sed '/^\(aaa\|bbb\|ccc\)$/d' (with some dialects disliking the backslashes, and others requiring them).
To do an in-place edit (modify Name.txt without a temporary file), try this:
sed -i "/^\(${ab//,/\|}\)\$/d" Name.txt
This is not entirely robust against strings containing whitespace or other shell metacharacters, but if you just need to filter out a handful of simple names like these, it should be fine.
Try with
sed -e 's/\bjyoti\b//g;s/\bpriya\b//g' < Name.txt
(using \b assuming you need word boundaries)
this will do it:
for param in `echo "$ab" | sed -e 's/ //g' -e 's/,/ /g'` ; do res=`sed -e "s/$param//g" < name.txt`; echo "$res" > name.txt; done
echo "$res"

Resources