xargs, sed and command substitution in bash

I'm trying to pass an xargs string replacement into a sed replacement inside a command substitution; here's the non-working code:
CALCINT=$CALCINT$(seq $CALCLINES | xargs -Iz echo $CALCINT' -F "invoiceid'z'="'$(sed -n '/invoiceid'z'/s/.*name="invoiceid'z'"\s\+value="\([^"]\+\).*/\1/p' output.txt))
Everything works up until the sed inside the second substitution. The 'z' should be a number 1-20, based on the $CALCLINES variable. I know it has something to do with not escaping properly for sed, but I'm having trouble wrapping my head around how sed wants things escaped in this situation.
Here's the surrounding lines of code:
curl -b mycookiefile -c mycookiefile http://localhost/calcint.php > output.txt
CALCLINES=`grep -o 'class="addinterest"' output.txt | wc -l`
CALCINT=$CALCINT$(seq $CALCLINES | xargs -Iz echo $CALCINT' -F "invoiceid'z'="'$(sed -n '/invoiceid17/s/.*name="invoiceid17"\s\+value="\([^"]\+\).*/\1/p' output.txt))
echo $CALCINT
Output: (What I get now)
-F "invoiceid1=" -F "invoiceid2=" -F "invoiceid3=" -F "invoiceid4=" -F "invoiceid5=" -F "invoiceid6=" -F "invoiceid7=" -F "invoiceid8=" -F "invoiceid9=" -F "invoiceid10=" -F "invoiceid11=" -F "invoiceid12=" -F "invoiceid13=" -F "invoiceid14=" -F "invoiceid15=" -F "invoiceid16=" -F "invoiceid17=" -F "invoiceid18=" -F "invoiceid19=" -F "invoiceid20="
What I'm hoping to see as output is something like this
-F "invoiceid1=2342" -F "invoiceid2=456456" -F "invoiceid3=78987" ...etc etc
-------------------------EDIT-----------------------
FWIW...here's the output.txt and other things I've tried.
for i in $(seq -f "%02g" ${CALCLINES});do
sed -n "/interest$i/s/.*name=\"interest$i\"\s\+value=\"\([^\"]\+\).*/\1/p" output.txt > output2.txt
done
output2.txt contains nothing
Thanks to @janos's response for clearing things up, but taking a step back makes it clear to me that the root of the issue here is that I'm struggling to get the invoice ids out. It's dynamically generated HTML (....name="invoiceid7" value="556"...), so there isn't anything consistent in those particular tags that I can grep on, which is why I was counting another tag that IS consistent and then trying to use a variable sed to basically deduce the tag name and extract the value.
And... output.txt: https://pastebin.com/ewUaddVi
------UPDATE-----
Working solution
Stuff sed into a loop. Note how I had to close and reopen the single quotes to splice variables into the sed string. That is well documented elsewhere on here. :)
for i in $(seq ${CALCLINES});do
e="interest"$i`
CALCINT=$CALCINT' -F "'$e'='
CALCINT=$CALCINT$(sed -n '/'$e'/s/.*name="'$e'"\s\+value="\([^"]\+\).*/\1/p' output.txt)'"'
done
Please read through the comments on the solution below; there is a cleaner way of doing this.

Your current approach cannot work, specifically this part:
... | xargs -Iz echo -F "invoiceid'z'="$(sed ...)
The problem is that the $(sed ...) will not be evaluated for each line in the input during the execution of xargs.
The shell will evaluate this once, before it actually runs xargs.
But you need values there that are derived dynamically from each input line.
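A minimal demonstration of that evaluation order (date is just an illustrative stand-in):
printf '1\n2\n' | xargs -Iz echo "z -> $(date +%s)"
Both output lines carry the same timestamp, because the shell runs $(date +%s) exactly once, before xargs ever starts.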
You can make this work by taking a different approach:
Extract the invoice ids. For example, write a grep or sed pipeline that produces as output simply the list of invoice ids
Transform the invoice list to the -F "invoiceidNUM=..." form that you need
For the second step, Awk could be practical. The script could be something like this:
curl -b mycookiefile -c mycookiefile http://localhost/calcint.php > output.txt
args=$(sed ... output.txt | awk '{ print "-F \"invoice" NR "=" $0 "\"" }')
echo $args
For example if the sed step produces 2342, 456456, 78987, then the output will be:
-F "invoice1=2342" -F "invoice2=456456" -F "invoice3=78987"

Related

User input into variables and grep a file for pattern

Hi!
So I am trying to run a script which looks for a string pattern.
For example, from a file I want to find 2 words, located separately
"I like toast, toast is amazing. Bread is just toast before it was toasted."
I want to invoke it from the command line using something like this:
./myscript.sh myfile.txt "toast bread"
My code so far:
text_file=$1
keyword_first=$2
keyword_second=$3
find_keyword=$(cat $text_file | grep -w "$keyword_first""$keyword_second" )
echo $find_keyword
I have tried a few different ways. Directly from the command line I can make it run using:
cat myfile.txt | grep -E 'toast|bread'
I'm trying to put the user input into variables and use the variables to grep the file
You seem to be looking simply for
grep -E "$2|$3" "$1"
What works on the command line will also work in a script, though you will need to switch to double quotes for the shell to replace variables inside the quotes.
In this case, the -E option can be replaced with multiple -e options, too.
grep -e "$2" -e "$3" "$1"
You can pipe to grep twice:
find_keyword=$(cat $text_file | grep -w "$keyword_first" | grep -w "$keyword_second")
Note that your search word "bread" is not found because the string contains the uppercase "Bread". If you want to find the words regardless of this, you should use the case-insensitive option -i for grep:
find_keyword=$(cat $text_file | grep -w -i "$keyword_first" | grep -w -i "$keyword_second")
In a full script:
#!/bin/bash
#
# usage: ./myscript.sh myfile.txt "toast" "bread"
text_file=$1
keyword_first=$2
keyword_second=$3
find_keyword=$(cat $text_file | grep -w -i "$keyword_first" | grep -w -i "$keyword_second")
echo $find_keyword

Pass a list of files to sed to delete a line in them all

I am trying to do a one liner command that would delete the first line from a bunch of files. The list of files will be generated by grep command.
grep -l 'hsv,vcv,tro,ztk' ${OUTPUT_DIR}/*.csv | tr -s "\n" " " | xargs /usr/bin/sed -i '1d'
The problem is that sed can't see the list of files to act on. I'm not able to work out what is wrong with the command. Please can someone point out my mistake.
Line numbers in sed are counted across all input files. So the address 1 only matches once per sed invocation.
In your example, only the first file in the list will get edited.
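A quick way to see this, with two throwaway files:
printf 'a\nb\n' > f1; printf 'c\nd\n' > f2
sed '1d' f1 f2
b
c
d
Without -i or -s, sed treats f1 and f2 as one continuous stream, so only "a" (line 1 of the stream) is deleted.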
You can complete your task with loop such as this:
grep -l 'hsv,vcv,tro,ztk' "${OUTPUT_DIR}/"*.csv |
while IFS= read -r file; do
sed -i '1d' "$file"
done
This might work for you (GNU sed and grep):
grep -l 'hsv,vcv,tro,ztk' ${OUTPUT_DIR}/*.csv | xargs sed -i '1d'
The -l outputs the file names, which are received as arguments by xargs.
The -i edits in place the file and removes the first line of each file.
N.B. The -i option in sed works at a per file level, to use line numbers for each file within a stream use the -s option.
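Continuing the f1/f2 example from above, -s makes the address apply per file:
sed -s '1d' f1 f2
b
d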
The only solution that worked for me, apart from the one posted by Dan above, is this:
for k in $(grep -l 'hsv,vcv,tro,ztk' ${OUTPUT_DIR}/*.csv | tr -s "\n" " ")
do
/usr/bin/sed -i '1d' "${k}"
done

bash: cURL from a file, increment filename if duplicate exists

I'm trying to curl a list of URLs to aggregate the tabular data on them from a set of 7000+ URLs. The URLs are in a .txt file. My goal was to cURL each line and save them to a local folder after which I would grep and parse out the HTML tables.
Unfortunately, because of the format of the URLs in the file, duplicates exist (example.com/State/City.html). When I ran a short while loop, I got back fewer than 5500 files, so there are at least 1500 dupes in the list. As a result, I tried to grep the "/State/City.html" section of the URL and pipe it to sed to remove the slash and substitute a hyphen, for use with curl -O.
Here's a sample of what I tried:
while read line
do
FILENAME=$(grep -o -E '\/[A-z]+\/[A-z]+\.htm' | sed 's/^\///' | sed 's/\//-/')
curl $line -o '$FILENAME'
done < source-url-file.txt
It feels like I'm missing something fairly straightforward. I've scanned the man page because I worried I had confused -o and -O which I used to do a lot.
When I run the loop in the terminal, the output is:
Warning: Failed to create the file State-City.htm
I think you don't need a multitude of seds and greps; a single sed should suffice:
urls=$(echo -e 'example.com/s1/c1.html\nexample.com/s1/c2.html\nexample.com/s1/c1.html')
for u in $urls
do
FN=$(echo "$u" | sed -E 's/^(.*)\/([^\/]+)\/([^\/]+)$/\2-\3/')
if [[ ! -f "$FN" ]]
then
touch "$FN"
echo "$FN"
fi
done
This script should work, and it also takes care of not downloading the same file multiple times.
Just replace the touch command with your curl one.
First: you didn't pass the URL to grep.
Second: try this line instead:
FILENAME=$(echo $line | egrep -o '\/[^\/]+\/[^\/]+\.html' | sed 's/^\///' | sed 's/\//-/')
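Putting both fixes together as a sketch (note the original loop also wrapped $FILENAME in single quotes, which stops the variable from expanding; it needs double quotes):
while read -r line; do
FILENAME=$(echo "$line" | egrep -o '\/[^\/]+\/[^\/]+\.html?' | sed 's/^\///' | sed 's/\//-/')
[ -f "$FILENAME" ] || curl "$line" -o "$FILENAME"
done < source-url-file.txt
The [ -f ... ] test skips names that were already downloaded, which also takes care of the duplicates.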

Optimize shell script for multiple sed replacements

I have a file containing a list of replacement pairs (about 100 of them) which are used by sed to replace strings in files.
The pairs go like:
old|new
tobereplaced|replacement
(stuffiwant).*(too)|\1\2
and my current code is:
cat replacement_list | while read i
do
old=$(echo "$i" | awk -F'|' '{print $1}') #due to the need for extended regex
new=$(echo "$i" | awk -F'|' '{print $2}')
sed -r "s/`echo "$old"`/`echo "$new"`/g" -i file
done
I cannot help but think that there is a more optimal way of performing the replacements. I tried turning the loop around to run through lines of the file first but that turned out to be much more expensive.
Are there any other ways of speeding up this script?
EDIT
Thanks for all the quick responses. Let me try out the various suggestions before choosing an answer.
One thing to clear up: I also need subexpressions/groups functionality. For example, one replacement I might need is:
([0-9])U|\10 #the extra brackets and escapes were required for my original code
Some details on the improvements (to be updated):
Method: processing time
Original script: 0.85s
cut instead of awk: 0.71s
anubhava's method: 0.18s
chthonicdaemon's method: 0.01s
You can use sed to produce correctly formatted sed input:
sed -e 's/^/s|/; s/$/|g/' replacement_list | sed -r -f - file
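For example, with the pairs from the question, the first sed emits the following script, which the second sed then executes via -f -:
sed -e 's/^/s|/; s/$/|g/' replacement_list
s|old|new|g
s|tobereplaced|replacement|g
s|(stuffiwant).*(too)|\1\2|g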
I recently benchmarked various string replacement methods, among them a custom program, sed -e, perl -lnpe and a probably not that widely known MySQL command-line utility, replace. Being optimized for string replacements, replace was almost an order of magnitude faster than sed. The results looked something like this (slowest first):
custom program > sed > LANG=C sed > perl > LANG=C perl > replace
If you want performance, use replace. To have it available on your system, you'll need to install some MySQL distribution, though.
From replace.c:
Replace strings in textfile
This program replaces strings in files or from stdin to stdout. It accepts a list of from-string/to-string pairs and replaces each occurrence of a from-string with the corresponding to-string. The first occurrence of a found string is matched. If there is more than one possibility for the string to replace, longer matches are preferred before shorter matches.
...
The programs make a DFA state machine of the strings, and the speed isn't dependent on the count of replace-strings (only on the number of replaces). A line is assumed to end with \n or \0. There is no limit except memory on the length of strings.
More on sed: you can utilize multiple cores with sed by splitting your replacements into #cpus groups and then piping them through chained sed commands, something like this:
$ sed -e 's/A/B/g; ...' file.txt | \
sed -e 's/B/C/g; ...' | \
sed -e 's/C/D/g; ...' | \
sed -e 's/D/E/g; ...' > out
Also, if you use sed or perl and your system has a UTF-8 setup, then it also boosts performance to place a LANG=C in front of the commands:
$ LANG=C sed ...
You can cut down unnecessary awk invocations and use BASH to break name-value pairs:
while IFS='|' read -r old new; do
# echo "$old :: $new"
sed -i "s~$old~$new~g" file
done < replacement_list
IFS='|' enables read to populate the name and value into two different shell variables, old and new.
This is assuming ~ is not present in your name-value pairs. If that is not the case then feel free to use an alternate sed delimiter.
Here is what I would try:
store your sed search-replace pairs in a Bash array, like so;
build your sed command based on this array using parameter expansion
run the command.
patterns=(
old new
tobereplaced replacement
)
pattern_count=${#patterns[*]} # number of patterns
sedArgs=() # will hold the list of sed arguments
for (( i=0 ; i<$pattern_count ; i=i+2 )); do # don't need to loop on the replacement…
search=${patterns[i]};
replace=${patterns[i+1]}; # … here we got the replacement part
sedArgs+=(-e "s/$search/$replace/g")
done
sed "${sedArgs[@]}" file
This results in this command:
sed -e s/old/new/g -e s/tobereplaced/replacement/g file
You can try this.
pattern=''
while read i
do
old=$(echo "$i" | awk -F'|' '{print $1}') #due to the need for extended regex
new=$(echo "$i" | awk -F'|' '{print $2}')
pattern=${pattern}"s/${old}/${new}/g;"
done < replacement_list
sed -r "${pattern}" -i file
This will run the sed command only once on the file with all the replacements. (Reading the file with a redirection rather than piping cat into while keeps the loop out of a subshell, so $pattern survives the loop.) You may also want to replace awk with cut. cut may be more optimized than awk, though I am not sure about that.
old=`echo $i | cut -d"|" -f1`
new=`echo $i | cut -d"|" -f2`
You might want to do the whole thing in awk:
awk -F\| 'NR==FNR{old[++n]=$1;new[n]=$2;next}{for(i=1;i<=n;++i)gsub(old[i],new[i])}1' replacement_list file
Build up a list of old and new words from the first file. The next ensures that the rest of the script isn't run on the first file. For the second file, loop through the list of replacements and perform them each one by one. The 1 at the end means that the line is printed.
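A minimal run-through with throwaway files:
printf 'old|new\n' > replacement_list
printf 'this is old text\n' > file
awk -F\| 'NR==FNR{old[++n]=$1;new[n]=$2;next}{for(i=1;i<=n;++i)gsub(old[i],new[i])}1' replacement_list file
this is new text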
{ cat replacement_list;echo "-End-"; cat YourFile; } | sed -n '1,/-End-/ s/$/³/;1h;1!H;$ {g
t again
:again
/^-End-³\n/ {s///;b done
}
s/^\([^|]*\)|\([^³]*\)³\(\n\)\(.*\)\1/\1|\2³\3\4\2/
t again
s/^[^³]*³\n//
t again
:done
p
}'
This one is more for fun, coded in sed. It may be worth timing, because it starts only one sed, which works recursively.
For POSIX sed (so --posix with GNU sed).
Explanation:
Copy the replacement list in front of the file content, with delimiters (³ for each list line and -End- for the end of the list) for easier sed handling (it's hard to use \n in a character class in POSIX sed).
Place all lines in the buffer (adding the line delimiter to the replacement list and -End- first).
If the buffer starts with -End-³, remove that line and go to the final print.
Replace each occurrence of the first pattern (group 1) found in the text with the second pattern (group 2).
If a replacement was made, restart (t again).
Otherwise, remove the first line of the replacement list.
Restart the process (t again). t is used rather than b because b does not reset the substitution test, so the next t would always be true.
Thanks to @miku above;
I have a 100MB file with a list of 80k replacement-strings.
I tried various combinations of seds, sequential and parallel, but didn't see estimated runtimes shorter than about 20 hours.
Instead I put my list into a sequence of scripts like "cat in | replace aold anew bold bnew cold cnew ... > out ; rm in ; mv out in".
I randomly picked 1000 replacements per file, so it all went like this:
# first, split my replace-list into manageable chunks (89 files in this case)
split -a 4 -l 1000 80kReplacePairs rep_
# next, make a 'replace' script out of each chunk
for F in rep_* ; do
# create a scriptfile and make it executable
echo '#!/bin/sh' > run_$F.sh ; chmod +x run_$F.sh
# for each chunk-file line, strip line-ends,
# then with sed, turn '{long list}' into 'cat in | {long list} > out'
cat $F | tr '\n' ' ' | sed 's/^/cat in | replace /;s/$/ > out/' >> run_$F.sh
# and append commands to switch the in and out files, for the next script
echo -e " && \\\\ \nrm in && mv out in\n" >> run_$F.sh
done
# put all the replace-scripts in sequence into a main script
ls ./run_rep_aa* > allrun.sh
# make it executable
chmod +x allrun.sh
# run it
nohup ./allrun.sh &
.. which ran in under 5 minutes, a lot less than 20 hours!
Looking back, I could have used more pairs per script, by finding how many lines would make up the limit.
xargs --show-limits </dev/null 2>&1 | grep --color=always "actually use:"
Maximum length of command we could actually use: 2090490
So just under 2MB; how many pairs would that be for my script?
head -c 2090490 80kReplacePairs | wc -l
76923
So it seems I could have used 2 * 40000-line chunks
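That is, the same split with bigger chunks:
split -a 4 -l 40000 80kReplacePairs rep_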
To expand on chthonicdaemon's solution:
#! /bin/sh
# build regex from text file
REGEX_FILE=some-patch.regex.diff
# test
# set these with "export key=val"
SOME_VAR_NAME=hello
ANOTHER_VAR_NAME=world
escape_b() {
echo "$1" | sed 's,/,\\/,g'
}
regex="$(
(echo; cat "$REGEX_FILE"; echo) \
| perl -p -0 -e '
s/\n#[^\n]*/\n/g;
s/\(\(SOME_VAR_NAME\)\)/'"$(escape_b "$SOME_VAR_NAME")"'/g;
s/\(\(ANOTHER_VAR_NAME\)\)/'"$(escape_b "$ANOTHER_VAR_NAME")"'/g;
s/([^\n])\//\1\\\//g;
s/\n-([^\n]+)\n\+([^\n]*)(?:\n\/([^\n]+))?\n/s\/\1\/\2\/\3;\n/g;
'
)"
echo "regex:"; echo "$regex" # debug
exec perl -00 -p -i -e "$regex" "$@"
Prefixing the lines with -, +, and / allows empty "plus" values, and protects leading whitespace from buggy text editors.
sample input: some-patch.regex.diff
# file format is similar to diff/patch
# this is a comment
# replace all "a/a" with "b/b"
-a/a
+b/b
/g
-a1|a2
+b1|b2
/sg
# this is another comment
-(a1).*(a2)
+b\1b\2b
-a\na\na
+b
-a1-((SOME_VAR_NAME))-a2
+b1-((ANOTHER_VAR_NAME))-b2
sample output
s/a\/a/b\/b/g;
s/a1|a2/b1|b2/;;
s/(a1).*(a2)/b\1b\2b/;
s/a\na\na/b/;
s/a1-hello-a2/b1-world-b2/;
this regex format is compatible with sed and perl
Since miku mentioned MySQL replace: replacing fixed strings with regex is non-trivial, since you must escape all regex chars, but you also must handle backslash escapes...
A naive escaper:
echo '\(\n' | perl -p -e 's/([.+*?()\[\]])/\\\1/g'
\\(\n

Bash grep variable from multiple variables on a single line

I am using GNU bash, version 4.2.20(1)-release (x86_64-pc-linux-gnu). I have a music file list I dumped into a variable: $pltemp.
Example:
/Music/New/2010s/2011;Ziggy Marley;Reggae In My Head
I wish to grep for the 3rd field above in the Master-Music-List.txt, then run another grep for the 2nd field. If both match, print the line; else echo "Not Matched".
So the above will search for the Song Title (Reggae In My Head), then will make sure it has the artist "Shaggy" on the same line, for a success.
So far, I've had success with a non-variable grep:
$ grep -i -w -E 'shaggy.*angel' Master-Music-MM-Playlist.m3u
$ if ! grep Shaggy Master-Music-MM-Playlist.m3u ; then echo "Not Found"; fi
$ grep -i -w Angel Master-Music-MM-Playlist.m3u | grep -i -w shaggy
I'm not sure how to best construct the 'entire' list to process.
I want to do this on a single line.
I used this to dump the list into the variable $pltemp...
Original: \Music\New\2010s\2011\Ziggy Marley - Reggae In My Head.mp3
$ pltemp="$(cat Reggae.m3u | sed -e 's/\(.*\)\\/\1;/' -e 's/\(.*\)\ -\ /\1;/' -e 's/\\/\//g' -e 's/\\/\//g' -e 's/.mp3//')"
If you really want to "grep this, then grep that", you need something more complex than grep by itself. How about awk?
awk -F';' '$3~/title/ && $2~/artist/ {print;n=1;exit;} END {if(n==0)print "Not matched";}'
If you want to make this search accessible as a script, the same thing simply changes form. For example:
#!/bin/sh
awk -F';' -vartist="$1" -vtitle="$2" '$3~title && $2~artist {print;n=1;exit;} END {if(n==0)print "Not matched";}'
Write this to a file, make it executable, and pipe stuff to it, with the artist substring/regex you're looking for as the first command line option, and the title substring/regex as the second.
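Usage might then look like this (the script name is made up):
echo "$pltemp" | ./searchsong.sh 'Ziggy Marley' 'Reggae In My Head'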
On the other hand, what you're looking for might just be a slightly more complex regular expression. Let's wrap it in bash for you:
if ! echo "$pltemp" | egrep '^[^;]+;[^;]*artist[^;]*;.*title'; then
echo "Not matched"
fi
You can compress this to a single line if you like. Or make it a stand-along shell script, or make it a function in your .bashrc file.
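As a .bashrc function, that could look something like this (a sketch; the function name is invented):
matchsong() {
if ! echo "$pltemp" | grep -E "^[^;]+;[^;]*$1[^;]*;.*$2"; then
echo "Not matched"
fi
}
matchsong 'Ziggy Marley' 'Reggae In My Head'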
awk -F ';' -v title="$title" -v artist="$artist" '$3 ~ title && $2 ~ artist'
Well, none of the above worked, so I came up with this...
for i in *.m3u; do
cat "$i" | sed 's/.*\\//' | while read z; do
grep --color=never -i -w -m 1 "$z" Master-Music-Playlist.m3u \
|| echo "#NotFound;"$z" "
done > "$i"-MM-Final.txt;
done
Each line is read (\Music\Lady Gaga - Paparazzi.mp3), the path is stripped, and the song is searched for in the Master Music List; if it is not found, "#NotFound" is echoed, and everything is saved into a new playlist.
Works {Solved}
Thanks anyway.
