Adding file paths to LaTeX figures? - ruby

In the text below I would like to add figs/01/ to each of the 3 files. As you can see, the files can be pdf, png, or have no extension, and sometimes the \includegraphics breaks over several lines.
My current thinking is
cat figs.tex | ruby -ne 'puts $_.gsub(/\\includegraphics\[.*?\]\{.*?\}/) { |x| x.do_something_here }'
but it is a chicken-and-egg problem, because I would need to search again inside the match for the part to replace.
Question
Can anyone see how to solve such a situation?
\begin{figure}[ht]
\centerline{ \includegraphics[height=55mm]{plotLn} \includegraphics[height=55mm]{plotLnZoom.pdf}}
\caption{Funktionen $f(x) = \ln(x)$ \ref{examg0} (bl)}
\end{figure}
\begin{example}[Parameterfremstilling for ret linje]\label{tn6.linje}
\begin{think}
Givet linjen $\,m\,$,
\includegraphics[trim=1cm 11.5cm 1cm
11.5cm,width=0.60\textwidth,clip]{vektor8.png}
\end{think}

You can read the whole file in one shot (instead of the default behaviour that reads the file line by line). To do that you need the switch -0777 (special value for the record separator). This solves the problem of a pattern that spreads over multiple lines.
You can also replace the -n option and puts with -p to automatically print the result.
ruby -0777 -pe 'gsub(/\\includegraphics\[[^\]]*\]{\K/,"figs/01/")' figs.tex
You can omit $_; by default gsub is applied to it. (You can even impress your friends by removing the space between -pe and the quote ')
About the pattern: \K discards everything to its left from the match result, so the match is just an empty string at the position where the replacement string is inserted.
Note that the ruby command line options come from Perl:
perl -0777 -pe 's!\\includegraphics\[[^\]]*\]\{\K!figs/01/!g' figs.tex

Related

Using shell scripts to remove all commas except for the first on each line

I have a text file consisting of lines which all begin with a numerical code, followed by one or several words, a comma, and then a list of words separated by commas. I need to delete all commas in every line apart from the first comma. For example:
1.2.3 Example question, a, question, that, is, hopefully, not, too, rudimentary
which should be changed to
1.2.3 Example question, a question that is hopefully not too rudimentary
I have tried using sed and shell scripts to solve this, and I can figure out how to delete the first comma on each line (1) and how to delete all commas (2), but not how to delete only the commas after the first comma on each line.
(1)
while read -r line
do
echo "${line/,/}"
done <"filename.txt" > newfile.txt
mv newfile.txt filename.txt
(2)
sed 's/,//g' filename.txt > newfile.txt
You need to capture the first comma, and then remove the others. One option is to change the first comma into some otherwise unused character (Control-A for example), then remove the remaining commas, and finally replace the replacement character with a comma:
sed -e $'s/,/\001/; s/,//g; s/\001/,/'
(using Bash ANSI C quoting — the \001 maps to Control-A).
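For example, on a shortened version of the sample line (Bash assumed, for the $'...' quoting):

```shell
# The first comma becomes \001, all remaining commas are deleted,
# then \001 becomes a comma again.
printf '%s\n' '1.2.3 Example question, a, question, that, is' |
  sed -e $'s/,/\001/; s/,//g; s/\001/,/'
# -> 1.2.3 Example question, a question that is
```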
An alternative mechanism uses sed's labels and branches, as illustrated by Wiktor Stribiżew's answer.
If using GNU sed, you can specify a number in the flags of sed's s/// command along with g to indicate which match to start replacing at:
$ sed 's/,//2g' <<<'1.2.3 Example question, a, question, that, is, hopefully, not, too, rudimentary'
1.2.3 Example question, a question that is hopefully not too rudimentary
Its manual says:
Note: the POSIX standard does not specify what should happen when you mix the g and NUMBER modifiers, and currently there is no widely agreed upon meaning across sed implementations. For GNU sed, the interaction is defined to be: ignore matches before the NUMBERth, and then match and replace all matches from the NUMBERth on.
so if you're using a different sed, your mileage may vary. (OpenBSD and NetBSD seds raise an error instead, for example).
You can use
sed ':a; s/^\([^,]*,[^,]*\),/\1/;ta' filename.txt > newfile.txt
Details
:a - sets an a label
s/^\([^,]*,[^,]*\),/\1/ - finds 0+ non-commas at the start of string, a comma and again 0+ non-commas, capturing this substring into Group 1, and then just matching a , and replacing the match with the contents of Group 1 (removes the non-first comma)
ta - upon a successful replacement, jumps back to the a label location.
See an online sed demo:
s='1.2.3 Example question, a, question, that, is, hopefully, not, too, rudimentary'
sed ':a; s/^\([^,]*,[^,]*\),/\1/;ta' <<< "$s"
# => 1.2.3 Example question, a question that is hopefully not too rudimentary
awk 'NF>1 {$1=$1","} 1' FS=, OFS= filename.txt
sed ':a;s/,//2;t a' filename.txt
sed 's/,/\
/;s/,//g;y/\n/,/' filename.txt
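This last one turns the first comma into a newline (which cannot otherwise occur inside the pattern space), deletes the remaining commas, then translates the newline back into a comma. On a shortened sample line:

```shell
# First comma -> literal newline, strip other commas, newline -> comma.
printf '%s\n' '1.2.3 Example question, a, question, that, is' |
  sed 's/,/\
/;s/,//g;y/\n/,/'
# -> 1.2.3 Example question, a question that is
```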
This might work for you (GNU sed):
sed 's/,/&\n/;h;s/,//g;H;g;s/\n.*\n//' file
Append a newline to the first comma.
Copy the current line to the hold space.
Remove all commas in the current line.
Append the current line to the hold space.
Swap the current line for the hold space.
Remove everything between the introduced newlines.
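Running a shortened sample line through those steps (GNU sed assumed, for \n in the replacement):

```shell
# h saves the line with a newline marking the first comma; after all
# commas are stripped, H/g splice the two copies together, and
# s/\n.*\n// keeps the head of the original plus the comma-free tail.
printf '%s\n' '1.2.3 Example question, a, question, that, is' |
  sed 's/,/&\n/;h;s/,//g;H;g;s/\n.*\n//'
# -> 1.2.3 Example question, a question that is
```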

Perl does not match multiple lines

I want to match:
Start Here Some example
text covering a few
lines. End Here
So I do
$ perl -nle 'print $1 if /(Start Here.*?)End Here/s'
then paste the text above and press Ctrl-D. It won't match from the command line, but it does in a file script. Why?
Change input record separator ($/) to null using -0 command line switch.
perl -0777nle 'print $1 if /(Start Here.*?)End Here/s' <<THE_END
Start Here Some example
text covering a few
lines. End Here
THE_END
man perlrun
-0[octal/hexadecimal]
specifies the input record separator ($/) as an octal or
hexadecimal number. […] Any value 0400 or above will cause Perl to slurp files whole, but by convention the value 0777 is the one normally used for this purpose.
man perlvar
IO::Handle->input_record_separator( EXPR )
$INPUT_RECORD_SEPARATOR
$RS
$/
The input record separator, newline by default. This influences Perl's idea of what a "line" is. […] You may set it to […] "undef" to read through the end of file.
As others have explained, you're reading your file a line at a time, so matches over multiple lines are never going to work.
Reading files a line at a time is often the best approach. So we can use the "flip-flop" operator to do this:
$ perl -nle 'print if /Start Here/ .. /End Here/' your_file_here

find and overwrite string in a binary file using a script

binary file file.f1
which contains the string abc; I want to overwrite it with abcd
perl -pi -e s/abc/abcd/ file.f1
works but it inserts rather than overwrites, which causes errors for the program that uses the file.
I'm not sure how I will be able to do that without making things more complex.
I'd prefer if it used tools like sed, grep, python, perl one liners which are available by default on UNIX system
I'm not very experienced user and am very new to these tools
Edit: hope it's clear now.
data inside bin file is like
[abc def xyz]
when doing perl -pi -e s/abc/abcd/ file.f1
it becomes [abcd def xyz]
what I want is to overwrite it with an extra [space] so it becomes
[abcd ef xyz]
You are trying to patch a binary file. Perl regexes are not designed for this type of processing. While they will work most of the time, specific byte sequences may trick the regex engine, which assumes the file is text. Use with care.
To get an overwrite, make the source string match the length of the target string:
perl -pi -e 's/abc./abcd/' file.f1
Perl will replace the first 4-byte string that starts with abc with abcd. If you suspect that the 4th character may be special (e.g. a newline or similar), use single-line mode, which allows '.' to match ANY character:
perl -pi -e 's/abc./abcd/s' file.f1
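To see the length preservation concretely, here is a small sketch using a temp file with the sample bytes from the question; note that the character immediately after abc (here the space) is the one consumed:

```shell
f=$(mktemp)
printf '[abc def xyz]' > "$f"       # 13 bytes
perl -pi -e 's/abc./abcd/' "$f"     # "abc " (4 bytes) -> "abcd" (4 bytes)
cat "$f"; echo                      # [abcddef xyz] -- char after abc consumed
wc -c < "$f"                        # still 13 bytes
rm -f "$f"
```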
perl -pi -e 's/blue/red/g' $file_name
The g at the end makes the replacement global (every occurrence). Another tool to use for these kinds of tasks would be sed.
Another post about using perl

Multiline perl regex replace on large file without slurp

I have a file which is much larger than the amount of memory available on the server which needs to run this script.
In that file, I need to run a basic regex which does a find and replace across two lines at a time. I've looked at using sed, awk, and perl, but I haven't been able to get any of them to work as I need it in this instance.
On a smaller file, the following line does what I need it to:
perl -0777 -i -pe 's/,\s+\)/\n\)/g' inputfile.txt
In essence, any time a line ends in a comma and the next line starts in a closing parenthesis, remove the comma.
When I tried to run that on my production file I just got the message "Killed" in the terminal after a couple of minutes and the file contents were completely erased. I was watching memory usage during that and as expected it was running at 100% and using the swap space extensively.
Is there a way to make that perl command run on two lines at a time instead, or an alternative bash command which might achieve the same result?
If it makes it easier by keeping the file size identical then I also have the option of replacing the comma with a space character.
A fairly direct logic:
print a line unless it ends with a comma (we need to check the next line first, and perhaps remove the comma)
print the previous line ($p) if it had a comma, without it if the current line starts with )
perl -ne'
if ($p =~ /,$/) { $p =~ s/,$// if /^\s*\)/; print $p };
print unless /,$/;
$p = $_
' file
The efficiency of this can be improved somewhat, by dropping one regex (saving engine startup overhead) and some data copying, but at the expense of clumsier code with additional logic and checks.
Tested with file
hello
here's a comma,
which was fine
(but here's another,
) which has to go,
and that was another good one.
end
The above fails to print the last line if it ends in a comma. One fix for that is to check our buffer (the previous line $p) in an END block, so add at the end:
END { print $p if $p =~ /,$/}
This is a fairly usual way to check for trailing buffers or conditions in -n/-p one-liners.
Another fix, less efficient but with perhaps cleaner code, is to replace the statement
print unless /,$/;
with
print if (not /,$/ or eof);
This does run an eof check on every line of the file, while END runs once.
Delay printing out the trailing comma and line feed until you know it's ok to print it out.
perl -ne'
$_ = $buf . $_;
s/^,(?=\n\))//;
$buf = s/(,\n)\z// ? $1 : "";
print;
END { print $buf; }
'
Faster:
perl -ne'
print /^\)/ ? "\n" : ",\n" if $f;
$f = s/,\n//;
print;
END { print ",\n" if $f; }
'
If using \n newline as a record separator is awkward, use something else. In this case you're specifically interested in the sequence ,\n), and we can let Perl find that for us as we read the file:
perl -pe 'BEGIN{ $/ = ",\n)" } s/,\n\)/\n)/' input.txt >output.txt
This portion: $/ = ",\n)" tells Perl that instead of iterating over lines of the file, it should iterate over records that terminate with the sequence ,\n). That assures that every chunk will have at most one such sequence, but more importantly, that this sequence will not span chunks (or records, or file-reads). Every chunk read will either end in ,\n) or, in the case of the final record, may have no record terminator at all (by our definition of terminator).
Next we just use substitution to eliminate that comma in our ,\n) record separator sequence.
The key here really is that by setting the record separator to the very sequence we're interested in, we guarantee the sequence will not get broken across file-reads.
As has been mentioned in the comments, this solution is most useful only if the span between ,\n) sequences doesn't exceed the amount of memory you are willing to throw at the problem. It is most likely that newlines themselves occur in the file more often than ,\n) sequences, and so, this will read in larger chunks. You know your data set better than we do, and so are in a better position of judging whether the simplicity of this solution is outweighed by the footprint it consumes in memory.
This can be done more simply with just awk.
awk 'BEGIN{RS=".\n."; ORS=""} {gsub(",\n)", "\n)", RT); print $0 RT}'
Explanation:
awk, unlike Perl, allows a regular expression as the Record Separator, here .\n. which "captures" the two characters surrounding each newline.
Setting ORS to empty prevents print from outputting extra newlines. Newlines are all captured in RS/RT.
RT represents the actual text matched by the RS regular expression.
The gsub removes any desired comma from RT if present.
Caveat: You'd need GNU awk (gawk) for this to work. POSIX-only awk lacks the regexp-RS and RT variable features, according to the gawk man page.
Note: gsub is not really needed, sub is good enough and probably should have been used above.
This might work for you (GNU sed):
sed 'N;s/,\n)/\n)/;P;D' file
Keep a moving window of two lines throughout the file and if the first ends in a , and the second begins with ), remove the ,.
If there is white space and it needs to be preserved, use:
sed 'N;s/,\(\s*\n\s*)\)/\1/;P;D' file
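A quick run of the first form on a small invented sample (GNU sed assumed):

```shell
# N keeps a two-line window in the pattern space; P prints the first
# line of the window and D shifts the window forward by one line.
printf 'foo(\n  a,\n)\nend\n' | sed 'N;s/,\n)/\n)/;P;D'
# Only the comma on the line before ")" is removed.
```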

Bash script output text between first match and 2nd match only [duplicate]

I'm trying to use sed to clean up lines of URLs to extract just the domain.
So from:
http://www.suepearson.co.uk/product/174/71/3816/
I want:
http://www.suepearson.co.uk/
(either with or without the trailing slash, it doesn't matter)
I have tried:
sed 's|\(http:\/\/.*?\/\).*|\1|'
and (escaping the non-greedy quantifier)
sed 's|\(http:\/\/.*\?\/\).*|\1|'
but I can not seem to get the non-greedy quantifier (?) to work, so it always ends up matching the whole string.
Neither basic nor extended POSIX/GNU regex recognizes the non-greedy quantifier; you need a more modern regex flavor. Fortunately, Perl regex for this context is pretty easy to get:
perl -pe 's|(http://.*?/).*|\1|'
In this specific case, you can get the job done without using a non-greedy regex.
Try this non-greedy regex [^/]* instead of .*?:
sed 's|\(http://[^/]*/\).*|\1|g'
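For instance, with the URL from the question:

```shell
# [^/]* matches up to, but not including, the next slash, which is
# exactly what the non-greedy .*? would have done here.
echo 'http://www.suepearson.co.uk/product/174/71/3816/' |
  sed 's|\(http://[^/]*/\).*|\1|g'
# -> http://www.suepearson.co.uk/
```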
With sed, I usually implement a non-greedy search by searching for anything except the separator until the separator:
echo "http://www.suon.co.uk/product/1/7/3/" | sed -n 's;\(http://[^/]*\)/.*;\1;p'
Output:
http://www.suon.co.uk
This is:
-n: don't output by default
s/<pattern>/<replace>/p: search, match the pattern, replace and print
use ; as the command separator instead of / to make it easier to type, so s;<pattern>;<replace>;p
\( ... \) remembers the match between the brackets, later accessible as \1, \2 ...
match http://
followed by a bracket expression []; [ab/] would mean either a, b or /
a leading ^ inside [] means "not", i.e. anything but what is listed in the []
so [^/] means any character except /
* repeats the previous item, so [^/]* means zero or more characters other than /
so far, sed -n 's;\(http://[^/]*\) means: find http:// followed by any characters except /, and remember what you've found
we want to search until the end of the domain, so stop at the next /; add another / at the end: sed -n 's;\(http://[^/]*\)/'; we also want to match the rest of the line after the domain, so add .*
now the match remembered in group 1 (\1) is the domain, so replace the matched line with the contents of group \1 and print: sed -n 's;\(http://[^/]*\)/.*;\1;p'
If you want to include the slash after the domain as well, then add one more slash in the group to remember:
echo "http://www.suon.co.uk/product/1/7/3/" | sed -n 's;\(http://[^/]*/\).*;\1;p'
output:
http://www.suon.co.uk/
Simulating lazy (un-greedy) quantifier in sed
And all other regex flavors!
Finding first occurrence of an expression:
POSIX ERE (using -r option)
Regex:
(EXPRESSION).*|.
Sed:
sed -r 's/(EXPRESSION).*|./\1/g' # Global `g` modifier should be on
Example (finding first sequence of digits):
$ sed -r 's/([0-9]+).*|./\1/g' <<< 'foo 12 bar 34'
12
How does it work?
This regex benefits from alternation |. At each position the engine tries to pick the longest match (this is POSIX-mandated behavior, which a couple of other engines follow as well), which means it goes with . until a match is found for ([0-9]+).*. But order is important too.
Since the global flag is set, the engine tries to continue matching character by character up to the end of the input string or until our target is found. As soon as the capturing group on the left side of the alternation, (EXPRESSION), is matched, the rest of the line is immediately consumed by .* as well. We now hold our value in the first capturing group.
POSIX BRE
Regex:
\(\(\(EXPRESSION\).*\)*.\)*
Sed:
sed 's/\(\(\(EXPRESSION\).*\)*.\)*/\3/'
Example (finding first sequence of digits):
$ sed 's/\(\(\([0-9]\{1,\}\).*\)*.\)*/\3/' <<< 'foo 12 bar 34'
12
This one is like the ERE version but with no alternation involved. That's all. At each single position the engine tries to match a digit. If one is found, the following digits are consumed and captured and the rest of the line is matched immediately; otherwise, since * means zero or more, it skips over the second capturing group \(\([0-9]\{1,\}\).*\)* and arrives at the dot . to match a single character, and this process continues.
Finding first occurrence of a delimited expression:
This approach will match the very first occurrence of a string that is delimited. We can call it a block of string.
sed 's/\(END-DELIMITER-EXPRESSION\).*/\1/; \
s/\(\(START-DELIMITER-EXPRESSION.*\)*.\)*/\1/g'
Input string:
foobar start block #1 end barfoo start block #2 end
-EDE: end
-SDE: start
$ sed 's/\(end\).*/\1/; s/\(\(start.*\)*.\)*/\1/g' <<< 'foobar start block #1 end barfoo start block #2 end'
Output:
start block #1 end
The first regex \(end\).* matches and captures the first end delimiter end and substitutes the whole match with the captured characters, i.e. the end delimiter. At this stage our output is: foobar start block #1 end.
Then the result is passed to the second regex \(\(start.*\)*.\)*, which is the same as the POSIX BRE version above. It matches a single character if the start delimiter start is not matched; otherwise it matches and captures the start delimiter and matches the rest of the characters.
Directly answering your question
Using approach #2 (delimited expression) you should select two appropriate expressions:
EDE: [^:/]\/
SDE: http:
Usage:
$ sed 's/\([^:/]\/\).*/\1/g; s/\(\(http:.*\)*.\)*/\1/' <<< 'http://www.suepearson.co.uk/product/174/71/3816/'
Output:
http://www.suepearson.co.uk/
Note: this will not work with identical delimiters.
sed does not support the "non-greedy" operator.
You have to use the "[]" operator to exclude "/" from the match.
sed 's,\(http://[^/]*\)/.*,\1,'
P.S. there is no need to backslash "/".
sed - non greedy matching by Christoph Sieghart
The trick to get non greedy matching in sed is to match all characters excluding the one that terminates the match. I know, a no-brainer, but I wasted precious minutes on it and shell scripts should be, after all, quick and easy. So in case somebody else might need it:
Greedy matching
% echo "<b>foo</b>bar" | sed 's/<.*>//g'
bar
Non greedy matching
% echo "<b>foo</b>bar" | sed 's/<[^>]*>//g'
foobar
Non-greedy solution for more than a single character
This thread is really old but I assume people still need it.
Let's say you want to kill everything up to the very first occurrence of HELLO. You cannot say [^HELLO]...
So a nice solution involves two steps, assuming that you can spare a unique word that you are not expecting in the input, say top_sekrit.
In this case we can:
s/HELLO/top_sekrit/ #will only replace the very first occurrence
s/.*top_sekrit// #kill everything till end of the first HELLO
Of course, with a simpler input you could use a smaller word, or maybe even a single character.
HTH!
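A quick sketch of the two steps (sample sentence invented for illustration):

```shell
# Step 1 renames only the first HELLO; step 2's greedy .* then has a
# unique target, so everything up to and including it is removed.
echo 'intro HELLO middle HELLO end' |
  sed 's/HELLO/top_sekrit/; s/.*top_sekrit//'
# -> " middle HELLO end" (note the leading space survives)
```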
This can be done using cut:
echo "http://www.suepearson.co.uk/product/174/71/3816/" | cut -d'/' -f1-3
Another way, not using regex, is to use the fields/delimiter method, e.g.
string="http://www.suepearson.co.uk/product/174/71/3816/"
echo $string | awk -F"/" '{print $1,$2,$3}' OFS="/"
sed certainly has its place but this is not one of them!
As Dee has pointed out: Just use cut. It is far simpler and much safer in this case. Here's an example where we extract various components from the URL using Bash syntax:
url="http://www.suepearson.co.uk/product/174/71/3816/"
protocol=$(echo "$url" | cut -d':' -f1)
host=$(echo "$url" | cut -d'/' -f3)
urlhost=$(echo "$url" | cut -d'/' -f1-3)
urlpath=$(echo "$url" | cut -d'/' -f4-)
gives you:
protocol = "http"
host = "www.suepearson.co.uk"
urlhost = "http://www.suepearson.co.uk"
urlpath = "product/174/71/3816/"
As you can see this is a lot more flexible approach.
(all credit to Dee)
sed -E 's|(http:\/\/[^\/]+\/).*|\1|'
There is still hope to solve this using pure (GNU) sed. Although this is not a generic solution, in some cases you can use "loops" to eliminate all the unnecessary parts of the string, like this:
sed -r -e ":loop" -e 's|(http://.+)/.*|\1|' -e "t loop"
-r: Use extended regex (for + and unescaped parenthesis)
":loop": Define a new label named "loop"
-e: add commands to sed
"t loop": Jump back to label "loop" if there was a successful substitution
The only problem here is it will also cut the last separator character ('/'), but if you really need it you can still simply put it back after the "loop" finished, just append this additional command at the end of the previous command line:
-e "s,$,/,"
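Putting it together on the question's URL (GNU sed assumed, for -r):

```shell
# Each pass strips one trailing /component; the loop ends when no /
# remains after http://, and the final s,$,/, restores the slash.
echo 'http://www.suepearson.co.uk/product/174/71/3816/' |
  sed -r -e ":loop" -e 's|(http://.+)/.*|\1|' -e "t loop" -e "s,$,/,"
# -> http://www.suepearson.co.uk/
```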
sed -E interprets regular expressions as extended (modern) regular expressions
Update: -E on macOS, -r in GNU sed (recent GNU sed accepts -E as well).
Because you specifically stated you're trying to use sed (instead of perl, cut, etc.), try grouping. This circumvents the non-greedy identifier potentially not being recognized. The first group is the protocol (i.e. 'http://', 'https://', 'tcp://', etc). The second group is the domain:
echo "http://www.suon.co.uk/product/1/7/3/" | sed "s|^\(.*//\)\([^/]*\).*$|\1\2|"
If you're not familiar with grouping, start here.
I realize this is an old entry, but someone may find it useful.
As the full domain name may not exceed a total length of 253 characters, replace .* with .\{1,253\}
This is how to robustly do non-greedy matching of multi-character strings using sed. Let's say you want to change every foo...bar to <foo...bar>, so for example this input:
$ cat file
ABC foo DEF bar GHI foo KLM bar NOP foo QRS bar TUV
should become this output:
ABC <foo DEF bar> GHI <foo KLM bar> NOP <foo QRS bar> TUV
To do that you convert foo and bar to individual characters and then use the negation of those characters between them:
$ sed 's/#/#A/g; s/{/#B/g; s/}/#C/g; s/foo/{/g; s/bar/}/g; s/{[^{}]*}/<&>/g; s/}/bar/g; s/{/foo/g; s/#C/}/g; s/#B/{/g; s/#A/#/g' file
ABC <foo DEF bar> GHI <foo KLM bar> NOP <foo QRS bar> TUV
In the above:
s/#/#A/g; s/{/#B/g; s/}/#C/g is converting { and } to placeholder strings that cannot exist in the input so those chars then are available to convert foo and bar to.
s/foo/{/g; s/bar/}/g is converting foo and bar to { and } respectively
s/{[^{}]*}/<&>/g is performing the op we want - converting foo...bar to <foo...bar>
s/}/bar/g; s/{/foo/g is converting { and } back to foo and bar.
s/#C/}/g; s/#B/{/g; s/#A/#/g is converting the placeholder strings back to their original characters.
Note that the above does not rely on any particular string not being present in the input, as it manufactures such strings in the first step; nor does it care which occurrence of any particular regexp you want to match, since you can use {[^{}]*} as many times as necessary in the expression to isolate the actual match you want, and/or use sed's numeric match operator, e.g. to only replace the 2nd occurrence:
$ sed 's/#/#A/g; s/{/#B/g; s/}/#C/g; s/foo/{/g; s/bar/}/g; s/{[^{}]*}/<&>/2; s/}/bar/g; s/{/foo/g; s/#C/}/g; s/#B/{/g; s/#A/#/g' file
ABC foo DEF bar GHI <foo KLM bar> NOP foo QRS bar TUV
Have not yet seen this answer, so here's how you can do this with vi or vim:
vi -c '%s/\(http:\/\/.\{-}\/\).*/\1/ge | wq' file &>/dev/null
This runs the vi :%s substitution globally (the trailing g), refrains from raising an error if the pattern is not found (e), then saves the resulting changes to disk and quits. The &>/dev/null prevents the GUI from briefly flashing on screen, which can be annoying.
I like using vi sometimes for super complicated regexes, because (1) perl is dying, (2) vim has a very advanced regex engine, and (3) I'm already intimately familiar with vi regexes in my day-to-day usage editing documents.
Since PCRE is also tagged here, we could use GNU grep with the lazy (non-greedy) match .*?, which matches up to the first occurrence, as opposed to .* (which is greedy and runs to the last occurrence of the match).
grep -oP '^http[s]?:\/\/.*?/' Input_file
Explanation: we use grep's -o and -P options, where -P enables PCRE regex. The regex matches a leading http or https followed by ://, then everything up to the next /; since we used .*?, it stops at the first / after http(s)://. The -o option prints only the matched part of the line.
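For example (assuming a grep built with PCRE support, such as GNU grep):

```shell
# .*? is lazy, so the match ends at the first / after http(s)://.
echo 'http://www.suepearson.co.uk/product/174/71/3816/' |
  grep -oP '^http[s]?:\/\/.*?/'
# -> http://www.suepearson.co.uk/
```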
echo "/home/one/two/three/myfile.txt" | sed 's|\(.*\)/.*|\1|'
Don't bother, I got it on another forum :)
sed 's|\(http:\/\/www\.[a-z.0-9]*\/\).*|\1|' works too
Here is something you can do with a two step approach and awk:
A=http://www.suepearson.co.uk/product/174/71/3816/
echo $A|awk '
{
var=gensub(/\//,"||",3,$0);
sub(/\|\|.*/,"",var);
print var
}'
Output:
http://www.suepearson.co.uk
Hope that helps!
Another sed version:
sed 's|/[[:alnum:]].*||' file.txt
It matches / followed by an alphanumeric character (so not another forward slash) as well as the rest of characters till the end of the line. Afterwards it replaces it with nothing (ie. deletes it.)
#Daniel H (concerning your comment on andcoz's answer, although it was a long time ago): deleting trailing zeros works with
s,([[:digit:]]\.[[:digit:]]*[1-9])[0]*$,\1,g
it's about clearly defining the matching conditions ...
You should also think about the case where there are no matching delimiters. Do you want to output the line or not? My examples here output nothing if there is no match.
You need the prefix up to the 3rd /, so select twice a string of any length not containing / followed by a /, then a string of any length not containing /, then match a / followed by any string, and print the selection. This idea works with any single-character delimiters.
echo http://www.suepearson.co.uk/product/174/71/3816/ | \
sed -nr 's,(([^/]*/){2}[^/]*)/.*,\1,p'
Using sed commands you can do fast prefix dropping or delim selection, like:
echo 'aaa #cee: { "foo":" #cee: " }' | \
sed -r 't x;s/ #cee: /\n/;D;:x'
This is a lot faster than eating one character at a time.
Jump to the label if there was a successful match previously. Add a \n at the / before the 1st delimiter. Remove up to the first \n. Once the \n has been added, jump to the end and print.
If there are start and end delimiters, it is easy to remove end delimiters until you reach the nth-2 element you want and then do the D trick: remove after the end delimiter, jump to delete if there is no match, remove before the start delimiter and print. This only works if start/end delimiters occur in pairs.
echo 'foobar start block #1 end barfoo start block #2 end bazfoo start block #3 end goo start block #4 end faa' | \
sed -r 't x;s/end//;s/end/\n/;D;:x;s/(end).*/\1/;T y;s/.*(start)/\1/;p;:y;d'
If you have access to gnu grep, then can utilize perl regex:
grep -Po '^https?://([^/]+)(?=)' <<< 'http://www.suepearson.co.uk/product/174/71/3816/'
http://www.suepearson.co.uk
Alternatively, to get everything after the domain use
grep -Po '^https?://([^/]+)\K.*' <<< 'http://www.suepearson.co.uk/product/174/71/3816/'
/product/174/71/3816/
The following solution works for matching / working with multiply present (chained; tandem; compound) HTML or other tags. For example, I wanted to edit HTML code to remove <span> tags, that appeared in tandem.
Issue: regular sed regex expressions greedily matched over all the tags from the first to the last.
Solution: non-greedy pattern matching (per discussions elsewhere in this thread; e.g. https://stackoverflow.com/a/46719361/1904943).
Example:
echo '<span>Will</span>This <span>remove</span>will <span>this.</span>remain.' | \
sed 's/<span>[^>]*>//g' ; echo
This will remain.
Explanation:
s/<span> : find <span>
[^>] : followed by anything that is not >
*> : until you find >
//g : replace any such strings present with nothing.
Addendum
I was trying to clean up URLs, but I was running into difficulty matching / excluding a word - href - using the approach above. I briefly looked at negative lookarounds (Regular expression to match a line that doesn't contain a word) but that approach seemed overly complex and did not provide a satisfactory solution.
I decided to replace href with ` (backtick), do the regex substitutions, then replace ` with href.
Example (formatted here for readability):
printf '\n
<a aaa h href="apple">apple</a>
<a bbb "c=ccc" href="banana">banana</a>
<a class="gtm-content-click"
data-vars-link-text="nope"
data-vars-click-url="https://blablabla"
data-vars-event-category="story"
data-vars-sub-category="story"
data-vars-item="in_content_link"
data-vars-link-text
href="https:example.com">Example.com</a>\n\n' |
sed 's/href/`/g ;
s/<a[^`]*`/\n<a href/g'
apple
banana
Example.com
Explanation: basically as above. Here,
s/href/` : replace href with ` (backtick)
s/<a : find start of URL
[^`] : followed by anything that is not ` (backtick)
*` : until you find a `
/<a href/g : replace each of those found with <a href
Unfortunately, as mentioned, this is not supported in sed.
To overcome this, I suggest using the next best thing (actually even better): vim's sed-like capabilities.
Define in .bash_profile:
vimdo() { vim $2 --not-a-term -c "$1" -es +"w >> /dev/stdout" -cq! ; }
That will create headless vim to execute a command.
Now you can do for example:
echo $PATH | vimdo "%s_\c:[a-zA-Z0-9\\/]\{-}python[a-zA-Z0-9\\/]\{-}:__g" -
to filter out python in $PATH.
Use - to have input from pipe in vimdo.
While most of the syntax is the same, Vim offers more advanced features, and \{-} is the standard way to write a non-greedy match. See :help regexp.
