different ways to grep for two distinct strings, appearing in any order, or line, in the same file - bash

I want to return all files that have the strings: "main(" as well as "foo".
This is like using a multi pattern OR grep but with AND instead.
The best I've come up with is:
grep -rl . -e "main("|while read fname; do grep -rl "$fname" -e "foo"; done
It does the job, but ideally I wouldn't have to write bash script.
E.g.
text1.txt:
int main()
{
stuff....
}
foo
grep command would return text1.txt since it contains the strings 'main(' and 'foo'

Just use awk to match both patterns and print filenames:
awk 'FNR == 1 { m = f = 0 } # reset flags at start of each file
/main\(/ { ++m } /foo/ { ++f } # set flags when patterns match
m && f { print FILENAME; nextfile }' **/*
nextfile is a GNU extension which skips to the next file, rather than the next line. With globstar enabled, ** expands recursively. In an interactive bash shell, it is enabled by default, but in a script you can enable it yourself using shopt -s globstar.
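For example, a sketch of the same command wrapped in a script (GNU awk assumed; filenames are illustrative):
#!/bin/bash
shopt -s globstar   # make ** expand recursively inside a script
awk 'FNR == 1 { m = f = 0 }
     /main\(/ { ++m } /foo/ { ++f }
     m && f  { print FILENAME; nextfile }' **/*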
With non-GNU awk, you can use another flag to skip lines and avoid printing the filename multiple times:
awk 'FNR == 1 { m = f = p = 0 } # reset flags at start of each file
p { next } # skip lines once this filename has been printed
/main\(/ { ++m } /foo/ { ++f }
m && f { print FILENAME; ++p }' **/*

Try
grep -rlZ 'main(' | xargs -0 grep -l 'foo'
-Z, --null
    Output a zero byte (the ASCII NUL character) instead of the character that normally follows a file name. For example, grep -lZ outputs a zero byte after each file name instead of the usual newline. This option makes the output unambiguous, even in the presence of file names containing unusual characters like newlines. This option can be used with commands like find -print0, perl -0, sort -z, and xargs -0 to process arbitrary file names, even those that contain newline characters.
The first grep prints all filenames containing main(, separated by NUL characters. xargs then passes those files to the second grep, which prints the files that also contain foo.
If the files are small enough and do not contain NUL characters, you can instead use
grep -rlz 'main(.*foo\|foo.*main('
where -z uses NUL as the line separator, effectively slurping each whole file as a single line.
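If your grep also has PCRE support (-P), lookaheads avoid spelling out both orderings; this is a sketch of the same slurp idea (later answers below use the same trick), though -P may not be available on every platform:
grep -rlzP '(?s)(?=.*main\()(?=.*foo)' .
Each lookahead independently requires one string somewhere in the slurped file, so the strings can appear in any order.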

Related

How should I use sed to delete specific strings but keep duplicates that have more characters?

I generated a list of files, which has 17417 lines like:
./usr
./usr/share
./usr/share/mime-info
./usr/share/mime-info/libreoffice7.0.mime
./usr/share/mime-info/libreoffice7.0.keys
./usr/share/appdata
./usr/share/appdata/libreoffice7.0-writer.appdata.xml
./usr/share/appdata/org.libreoffice7.0.kde.metainfo.xml
./usr/share/appdata/libreoffice7.0-draw.appdata.xml
./usr/share/appdata/libreoffice7.0-impress.appdata.xml
./usr/share/appdata/libreoffice7.0-base.appdata.xml
./usr/share/appdata/libreoffice7.0-calc.appdata.xml
./usr/share/applications
./usr/share/applications/libreoffice7.0-xsltfilter.desktop
./usr/share/applications/libreoffice7.0-writer.desktop
./usr/share/applications/libreoffice7.0-base.desktop
./usr/share/applications/libreoffice7.0-math.desktop
./usr/share/applications/libreoffice7.0-startcenter.desktop
./usr/share/applications/libreoffice7.0-calc.desktop
./usr/share/applications/libreoffice7.0-draw.desktop
./usr/share/applications/libreoffice7.0-impress.desktop
./usr/share/icons
./usr/share/icons/gnome
./usr/share/icons/gnome/16x16
./usr/share/icons/gnome/16x16/mimetypes
./usr/share/icons/gnome/16x16/mimetypes/libreoffice7.0-oasis-formula.png
The thing is, I want to delete the lines like:
./usr
./usr/share
./usr/share/mime-info
./usr/share/appdata
./usr/share/applications
./usr/share/icons
./usr/share/icons/gnome
./usr/share/icons/gnome/16x16
./usr/share/icons/gnome/16x16/mimetypes
and the "." at the start, for the result must be like :
/usr/share/mime-info/libreoffice7.0.mime
/usr/share/mime-info/libreoffice7.0.keys
/usr/share/appdata/libreoffice7.0-writer.appdata.xml
/usr/share/appdata/org.libreoffice7.0.kde.metainfo.xml
/usr/share/appdata/libreoffice7.0-draw.appdata.xml
/usr/share/appdata/libreoffice7.0-impress.appdata.xml
/usr/share/appdata/libreoffice7.0-base.appdata.xml
/usr/share/appdata/libreoffice7.0-calc.appdata.xml
/usr/share/applications/libreoffice7.0-xsltfilter.desktop
/usr/share/applications/libreoffice7.0-writer.desktop
/usr/share/applications/libreoffice7.0-base.desktop
/usr/share/applications/libreoffice7.0-math.desktop
/usr/share/applications/libreoffice7.0-startcenter.desktop
/usr/share/applications/libreoffice7.0-calc.desktop
/usr/share/applications/libreoffice7.0-draw.desktop
/usr/share/applications/libreoffice7.0-impress.desktop
/usr/share/icons/gnome/16x16/mimetypes/libreoffice7.0-oasis-formula.png
Is this possible using sed? Or is it more practical using another tool?
With your list in the file named list, you could do:
sed -n 's/^[.]//;/\/.*[._].*$/p' list
Where:
sed -n suppresses printing of pattern-space; then
s/^[.]// is the substitution form that simply removes a leading '.' from each line; then
/\/.*[._].*$/p matches lines that contain a '.' or '_' somewhere after a '/' (which, in this list, only the filename lines do), with p causing those lines to be printed.
Example Use/Output
$ sed -n 's/^[.]//;/\/.*[._].*$/p' list
/usr/share/mime-info/libreoffice7.0.mime
/usr/share/mime-info/libreoffice7.0.keys
/usr/share/appdata/libreoffice7.0-writer.appdata.xml
/usr/share/appdata/org.libreoffice7.0.kde.metainfo.xml
/usr/share/appdata/libreoffice7.0-draw.appdata.xml
/usr/share/appdata/libreoffice7.0-impress.appdata.xml
/usr/share/appdata/libreoffice7.0-base.appdata.xml
/usr/share/appdata/libreoffice7.0-calc.appdata.xml
/usr/share/applications/libreoffice7.0-xsltfilter.desktop
/usr/share/applications/libreoffice7.0-writer.desktop
/usr/share/applications/libreoffice7.0-base.desktop
/usr/share/applications/libreoffice7.0-math.desktop
/usr/share/applications/libreoffice7.0-startcenter.desktop
/usr/share/applications/libreoffice7.0-calc.desktop
/usr/share/applications/libreoffice7.0-draw.desktop
/usr/share/applications/libreoffice7.0-impress.desktop
/usr/share/icons/gnome/16x16/mimetypes/libreoffice7.0-oasis-formula.png
Note, without GNU sed that allows chaining of expressions with ';' you would need:
sed -n -e 's/^[.]//' -e '/\/.*[._].*$/p' list
Assuming you want to delete the line(s) which are included in other pathname(s), please try:
sort -r list.txt | awk '            # sort the list in reverse order
{
    sub("^\\.", "")                 # remove the leading dot
    s = prev; sub("/[^/]+$", "", s) # strip the last path component from the previous line
    if (s != $0) print              # if s != $0, $0 is not the parent directory of the previous line
    prev = $0                       # keep $0 for the next line
}'
Result:
/usr/share/mime-info/libreoffice7.0.mime
/usr/share/mime-info/libreoffice7.0.keys
/usr/share/icons/gnome/16x16/mimetypes/libreoffice7.0-oasis-formula.png
/usr/share/applications/libreoffice7.0-xsltfilter.desktop
/usr/share/applications/libreoffice7.0-writer.desktop
/usr/share/applications/libreoffice7.0-startcenter.desktop
/usr/share/applications/libreoffice7.0-math.desktop
/usr/share/applications/libreoffice7.0-impress.desktop
/usr/share/applications/libreoffice7.0-draw.desktop
/usr/share/applications/libreoffice7.0-calc.desktop
/usr/share/applications/libreoffice7.0-base.desktop
/usr/share/appdata/org.libreoffice7.0.kde.metainfo.xml
/usr/share/appdata/libreoffice7.0-writer.appdata.xml
/usr/share/appdata/libreoffice7.0-impress.appdata.xml
/usr/share/appdata/libreoffice7.0-draw.appdata.xml
/usr/share/appdata/libreoffice7.0-calc.appdata.xml
/usr/share/appdata/libreoffice7.0-base.appdata.xml

Check if all of multiple strings or regexes exist in a file

I want to check if all of my strings exist in a text file. They could exist on the same line or on different lines. And partial matches should be OK. Like this:
...
string1
...
string2
...
string3
...
string1 string2
...
string1 string2 string3
...
string3 string1 string2
...
string2 string3
... and so on
In the above example, we could have regexes in place of strings.
For example, the following code checks if any of my strings exists in the file:
if grep -EFq "string1|string2|string3" file; then
# there is at least one match
fi
How to check if all of them exist? Since we are just interested in the presence of all matches, we should stop reading the file as soon as all strings are matched.
Is it possible to do it without having to invoke grep multiple times (which won't scale when the input file is large or when we have a large number of strings to match) or without using a tool like awk or python?
Also, is there a solution for strings that can easily be extended for regexes?
Awk is the tool that the guys who invented grep, shell, etc. invented to do general text manipulation jobs like this, so I'm not sure why you'd want to avoid it.
In case brevity is what you're looking for, here's the GNU awk one-liner to do just what you asked for:
awk 'NR==FNR{a[$0];next} {for(s in a) if(!index($0,s)) exit 1}' strings RS='^$' file
And here's a bunch of other information and options:
Assuming you're really looking for strings, it'd be:
awk -v strings='string1 string2 string3' '
BEGIN {
    numStrings = split(strings,tmp)
    for (i in tmp) strs[tmp[i]]
}
numStrings == 0 { exit }
{
    for (str in strs) {
        if ( index($0,str) ) {
            delete strs[str]
            numStrings--
        }
    }
}
END { exit (numStrings ? 1 : 0) }
' file
The above will stop reading the file as soon as all strings have matched.
If you were looking for regexps instead of strings then with GNU awk for multi-char RS and retention of $0 in the END section you could do:
awk -v RS='^$' 'END{exit !(/regexp1/ && /regexp2/ && /regexp3/)}' file
Actually, even if it were strings you could do:
awk -v RS='^$' 'END{exit !(index($0,"string1") && index($0,"string2") && index($0,"string3"))}' file
The main issue with the above 2 GNU awk solutions is that, like @anubhava's GNU grep -P solution, the whole file has to be read into memory at one time, whereas with the first awk script above, it'll work in any awk in any shell on any UNIX box and only stores one line of input at a time.
I see you've added a comment under your question to say you could have several thousand "patterns". Assuming you mean "strings" then instead of passing them as arguments to the script you could read them from a file, e.g. with GNU awk for multi-char RS and a file with one search string per line:
awk '
NR==FNR { strings[$0]; next }
{
    for (string in strings)
        if ( !index($0,string) )
            exit 1
}
' file_of_strings RS='^$' file_to_be_searched
and for regexps it'd be:
awk '
NR==FNR { regexps[$0]; next }
{
    for (regexp in regexps)
        if ( $0 !~ regexp )
            exit 1
}
' file_of_regexps RS='^$' file_to_be_searched
If you don't have GNU awk and your input file does not contain NUL characters, then you can get the same effect as above by using RS='\0' instead of RS='^$', or by appending to a variable one line at a time as it's read and then processing that variable in the END section.
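For example, a minimal sketch of that append-and-check-in-END variant, which should work in any POSIX awk (assuming the searched file fits in memory):
awk '
NR==FNR { strings[$0]; next }
{ text = text $0 "\n" }
END {
    for (string in strings)
        if ( !index(text,string) )
            exit 1
}
' file_of_strings file_to_be_searched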
If your file_to_be_searched is too large to fit in memory then it'd be this for strings:
awk '
NR==FNR { strings[$0]; numStrings=NR; next }
numStrings == 0 { exit }
{
    for (string in strings) {
        if ( index($0,string) ) {
            delete strings[string]
            numStrings--
        }
    }
}
END { exit (numStrings ? 1 : 0) }
' file_of_strings file_to_be_searched
and the equivalent for regexps:
awk '
NR==FNR { regexps[$0]; numRegexps=NR; next }
numRegexps == 0 { exit }
{
    for (regexp in regexps) {
        if ( $0 ~ regexp ) {
            delete regexps[regexp]
            numRegexps--
        }
    }
}
END { exit (numRegexps ? 1 : 0) }
' file_of_regexps file_to_be_searched
git grep
Here is the syntax using git grep with multiple patterns:
git grep --all-match --no-index -l -e string1 -e string2 -e string3 file
You may also combine patterns with Boolean expressions such as --and, --or and --not.
Check man git-grep for help.
--all-match When giving multiple pattern expressions, this flag is specified to limit the match to files that have lines to match all of them.
--no-index Search files in the current directory that is not managed by Git.
-l/--files-with-matches/--name-only Show only the names of files.
-e The next parameter is the pattern. Default is to use basic regexp.
Other params to consider:
--threads Number of grep worker threads to use.
-q/--quiet/--silent Do not output matched lines; exit with status 0 when there is a match.
To change the pattern type, you may also use -G/--basic-regexp (default), -F/--fixed-strings, -E/--extended-regexp, -P/--perl-regexp, -f file, and others.
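As a contrast with --all-match (which only requires every pattern to match somewhere in the file), here is a sketch of a Boolean combination requiring string1 and string2 on the same line:
git grep --no-index -l -e string1 --and -e string2 file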
This gnu-awk script may work:
cat fileSearch.awk
re == "" {
exit
}
{
split($0, null, "\\<(" re "\\>)", b)
for (i=1; i<=length(b); i++)
gsub("\\<" b[i] "([|]|$)", "", re)
}
END {
exit (re != "")
}
Then use it as:
if awk -v re='string1|string2|string3' -f fileSearch.awk file; then
echo "all strings were found"
else
echo "all strings were not found"
fi
Alternatively, you can use this gnu grep solution with PCRE option:
grep -qzP '(?s)(?=.*\bstring1\b)(?=.*\bstring2\b)(?=.*\bstring3\b)' file
Using -z we make grep read the complete file as a single string.
We use multiple lookahead assertions to assert that all the strings are present in the file.
The regex must use (?s) (the DOTALL modifier) to make .* match across lines.
As per man grep:
-z, --null-data
Treat input and output data as sequences of lines, each terminated by a
zero byte (the ASCII NUL character) instead of a newline.
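Since -q suppresses all output and only sets the exit status, the check drops straight into a conditional, for example:
if grep -qzP '(?s)(?=.*\bstring1\b)(?=.*\bstring2\b)(?=.*\bstring3\b)' file; then
    echo "all strings present"
else
    echo "at least one string missing"
fi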
First, you probably want to use awk. Since you eliminated that option in the question statement: yes, it is possible, and this provides a way to do it. It is likely MUCH slower than using awk, but if you want to do it anyway...
This is based on the following assumptions:
Invoking AWK is unacceptable
Invoking grep multiple times is unacceptable
The use of any other external tools are unacceptable
Invoking grep less than once is acceptable
It must return success if everything is found, failure when not
Using bash instead of external tools is acceptable
bash version is >= 3 for the regular expression version
This might meet all of your requirements (the regex version is missing some comments; look at the string version instead):
#!/bin/bash

multimatch() {
    filename="$1"      # Filename is first parameter
    shift              # move it out of the way so that "$@" is useful
    strings=( "$@" )   # search strings into an array

    declare -a matches # Array to keep track of which strings already match

    # Initiate array tracking what we have matches for
    for ((i=0;i<${#strings[@]};i++)); do
        matches[$i]=0
    done

    while IFS= read -r line; do # Read file linewise
        foundmatch=0 # Flag to indicate whether this line matched anything
        for ((i=0;i<${#strings[@]};i++)); do # Loop through string indexes
            if [ "${matches[$i]}" -eq 0 ]; then # If no previous line matched this string yet
                string="${strings[$i]}" # fetch the string
                if [[ $line = *$string* ]]; then # check if it matches
                    matches[$i]=1 # mark that we have found this
                    foundmatch=1  # set the flag; we need to check whether we have something left
                fi
            fi
        done
        # If we found something, we need to check whether we
        # can stop looking
        if [ "$foundmatch" -eq 1 ]; then
            somethingleft=0 # Flag to see if we still have unmatched strings
            for ((i=0;i<${#matches[@]};i++)); do
                if [ "${matches[$i]}" -eq 0 ]; then
                    somethingleft=1 # Something is still outstanding
                    break # no need to check whether more strings are outstanding
                fi
            done
            # If we didn't find anything unmatched, we have everything
            if [ "$somethingleft" -eq 0 ]; then return 0; fi
        fi
    done < "$filename"

    # If we get here, we didn't have everything in the file
    return 1
}
multimatch_regex() {
    filename="$1"      # Filename is first parameter
    shift              # move it out of the way so that "$@" is useful
    regexes=( "$@" )   # Regexes into an array

    declare -a matches # Array to keep track of which regexes already match

    # Initiate array tracking what we have matches for
    for ((i=0;i<${#regexes[@]};i++)); do
        matches[$i]=0
    done

    while IFS= read -r line; do # Read file linewise
        foundmatch=0 # Flag to indicate whether this line matched anything
        for ((i=0;i<${#regexes[@]};i++)); do # Loop through regex indexes
            if [ "${matches[$i]}" -eq 0 ]; then # If no previous line matched this regex yet
                regex="${regexes[$i]}" # Get regex from array
                if [[ $line =~ $regex ]]; then # We use the bash regex operator here
                    matches[$i]=1 # mark that we have found this
                    foundmatch=1  # set the flag; we need to check whether we have something left
                fi
            fi
        done
        # If we found something, we need to check whether we
        # can stop looking
        if [ "$foundmatch" -eq 1 ]; then
            somethingleft=0 # Flag to see if we still have unmatched regexes
            for ((i=0;i<${#matches[@]};i++)); do
                if [ "${matches[$i]}" -eq 0 ]; then
                    somethingleft=1 # Something is still outstanding
                    break # no need to check whether more regexes are outstanding
                fi
            done
            # If we didn't find anything unmatched, we have everything
            if [ "$somethingleft" -eq 0 ]; then return 0; fi
        fi
    done < "$filename"

    # If we get here, we didn't have everything in the file
    return 1
}
if multimatch "filename" string1 string2 string3; then
echo "file has all strings"
else
echo "file miss one or more strings"
fi
if multimatch_regex "filename" "regex1" "regex2" "regex3"; then
echo "file match all regular expressions"
else
echo "file does not match all regular expressions"
fi
Benchmarks
I did some benchmarking, searching .c, .h and .sh files in arch/arm/ from Linux 4.16.2 for the strings "void", "function", and "#define". (Shell wrappers were added and the code tuned so that everything can be called as testname <filename> <searchstring> [...] and so that an if can be used to check the result.)
Results: (measured with time, real time rounded to closest half second)
multimatch: 49s
multimatch_regex: 55s
matchall: 10.5s
fileMatchesAllNames: 4s
awk (first version): 4s
agrep: 4.5s
Perl re (-r): 10.5s
Perl non-re: 9.5s
Perl non-re optimised: 5s (Removed Getopt::Std and regex support for faster startup)
Perl re optimised: 7s (Removed Getopt::Std and non-regex support for faster startup)
git grep: 3.5s
C version (no regex): 1.5s
(Invoking grep multiple times, especially with the recursive method, did better than I expected)
A recursive solution. Iterate over the files one by one. For each file, check if it matches the first pattern, stopping early (-m1: on first match); only if it matched the first pattern, search for the second pattern, and so on:
#!/bin/bash
patterns="$@"

fileMatchesAllNames () {
    file=$1
    if [[ $# -eq 1 ]]
    then
        echo "$file"
    else
        shift
        pattern=$1
        shift
        grep -m1 -q "$pattern" "$file" && fileMatchesAllNames "$file" $@
    fi
}

for file in *
do
    test -f "$file" && fileMatchesAllNames "$file" $patterns
done
Usage:
./allfilter.sh cat filter java
test.sh
Searches in the current dir for the tokens "cat", "filter" and "java". Found them only in "test.sh".
So grep is invoked often in the worst case scenario (finding the first N-1 patterns in the last line of each file, except for the N-th pattern).
But with an informed ordering (rarely matching patterns first, early matches first) where possible, the solution should be reasonably fast, since many files are abandoned early because they didn't match the first keyword, or accepted early, as they matched a keyword close to the top.
Example: You search a scala source file which contains tailrec (somewhat rarely used), mutable (rarely used, but if so, close to the top in import statements), main (rarely used, often not close to the top) and println (often used, unpredictable position); you would order them:
./allfilter.sh mutable tailrec main println
Performance:
ls *.scala | wc
89 89 2030
In 89 scala files, I have the keywords distribution:
for keyword in mutable tailrec main println; do grep -m 1 $keyword *.scala | wc -l ; done
16
34
41
71
Searching them with a slightly modified version of the script, which allows using a file pattern as the first argument, takes about 0.2s:
time ./allfilter.sh "*.scala" mutable tailrec main println
Filepattern: *.scala Patterns: mutable tailrec main println
aoc21-2017-12-22_00:16:21.scala
aoc25.scala
CondenseString.scala
Partition.scala
StringCondense.scala
real 0m0.216s
user 0m0.024s
sys 0m0.028s
in close to 15,000 lines of code:
cat *.scala | wc
14913 81614 610893
Update:
After reading in the comments to the question that we might be talking about thousands of patterns, handing them over as arguments doesn't seem to be a clever idea; better to read them from a file and pass the filename as an argument - maybe along with the list of files to filter, too:
#!/bin/bash
filelist="$1"
patternfile="$2"
patterns="$(< $patternfile)"

fileMatchesAllNames () {
    file=$1
    if [[ $# -eq 1 ]]
    then
        echo "$file"
    else
        shift
        pattern=$1
        shift
        grep -m1 -q "$pattern" "$file" && fileMatchesAllNames "$file" $@
    fi
}

echo -e "Filelist: $filelist\tPatterns: $patterns"
for file in $(< $filelist)
do
    test -f "$file" && fileMatchesAllNames "$file" $patterns
done
If the number and length of patterns/files exceeds the limits of argument passing, the list of patterns could be split into many pattern files and processed in a loop (for example, with 20 pattern files):
for i in {1..20}
do
./allfilter2.sh file.$i.lst pattern.$i.lst > file.$((i+1)).lst
done
You can
make use of the -o|--only-matching option of grep (which forces to output only the matched parts of a matching line, with each such part on a separate output line),
then eliminate duplicate occurrences of matched strings with sort -u,
and finally check that the count of remaining lines equals the count of the input strings.
Demonstration:
$ cat input
...
string1
...
string2
...
string3
...
string1 string2
...
string1 string2 string3
...
string3 string1 string2
...
string2 string3
... and so on
$ grep -o -F $'string1\nstring2\nstring3' input|sort -u|wc -l
3
$ grep -o -F $'string1\nstring3' input|sort -u|wc -l
2
$ grep -o -F $'string1\nstring2\nfoo' input|sort -u|wc -l
2
One shortcoming with this solution (failing to meet the "partial matches should be OK" requirement) is that grep doesn't detect overlapping matches. For example, although the text abcd matches both abc and bcd, grep finds only one of them:
$ grep -o -F $'abc\nbcd' <<< abcd
abc
$ grep -o -F $'bcd\nabc' <<< abcd
abc
Note that this approach/solution works only for fixed strings. It cannot be extended for regexes, because a single regex can match multiple different strings and we cannot track which match corresponds to which regex. The best you can do is store the matches in a temporary file, and then run grep multiple times using one regex at a time.
The solution implemented as a bash script:
matchall:
#!/usr/bin/env bash

if [ $# -lt 2 ]
then
    echo "Usage: $(basename "$0") input_file string1 [string2 ...]"
    exit 1
fi

function find_all_matches()
(
    infile="$1"
    shift
    IFS=$'\n'
    newline_separated_list_of_strings="$*"
    grep -o -F "$newline_separated_list_of_strings" "$infile"
)

string_count=$(($# - 1))
matched_string_count=$(find_all_matches "$@"|sort -u|wc -l)

if [ "$matched_string_count" -eq "$string_count" ]
then
    echo "ALL strings matched"
    exit 0
else
    echo "Some strings DID NOT match"
    exit 1
fi
Demonstration:
$ ./matchall
Usage: matchall input_file string1 [string2 ...]
$ ./matchall input string1 string2 string3
ALL strings matched
$ ./matchall input string1 string2
ALL strings matched
$ ./matchall input string1 string2 foo
Some strings DID NOT match
The easiest way for me to check if the file has all three patterns is to get only matched patterns, output only unique parts and count lines.
Then you will be able to check it with a simple Test condition: test 3 -eq $grep_lines.
grep_lines=$(grep -Eo 'string1|string2|string3' file | sort -u | wc -l)
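For example, the follow-up check might look like this (a minimal sketch):
if test 3 -eq "$grep_lines"; then
    echo "all three strings found"
fi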
Regarding your second question, I don't think it's possible to stop reading the file as soon as more than one pattern is found. I've read the man page for grep and there are no options that could help you with that. You can only stop reading after a specific number of matching lines with the option grep -m [number], which happens regardless of which patterns matched.
Pretty sure that a custom function is needed for that purpose.
It's an interesting problem, and there's nothing obvious in the grep man page to suggest an easy answer. There might be an insane regex that would do it, but it may be clearer to use a straightforward chain of greps, even though that ends up scanning the file n times. At least the -q option has it bail at the first match each time, and the && will short-circuit evaluation if one of the strings is not found.
$ grep -Fq string1 t && grep -Fq string2 t && grep -Fq string3 t
$ echo $?
0
$ grep -Fq string1 t && grep -Fq blah t && grep -Fq string3 t
$ echo $?
1
Perhaps with GNU sed:
cat match_word.sh
sed -z '
/\b'"$2"'/!bA
/\b'"$3"'/!bA
/\b'"$4"'/!bA
/\b'"$5"'/!bA
s/.*/0\n/
q
:A
s/.*/1\n/
' "$1"
and you call it like this:
./match_word.sh infile string1 string2 string3
It prints 0 if all matches are found, else 1.
Here you can look for up to four strings; if you want more, you can add lines like
/\b'"$x"'/!bA
Just for "solutions completeness", you can use a different tool and avoid multiple greps and awk/sed or big (and probably slow) shell loops; Such a tool is agrep.
agrep is actually a kind of egrep supporting also and operation between patterns, using ; as a pattern separator.
Like egrep and like most of the well known tools, agrep is a tool that operates on records/lines and thus we still need a way to treat the whole file as a single record.
Moreover agrep provides a -d option to set your custom record delimiter.
Some tests:
$ cat file6
str4
str1
str2
str3
str1 str2
str1 str2 str3
str3 str1 str2
str2 str3
$ agrep -d '$$\n' 'str3;str2;str1;str4' file6;echo $?
str4
str1
str2
str3
str1 str2
str1 str2 str3
str3 str1 str2
str2 str3
0
$ agrep -d '$$\n' 'str3;str2;str1;str4;str5' file6;echo $?
1
$ agrep -p 'str3;str2;str1' file6 #-p prints lines containing all three patterns in any position
str1 str2 str3
str3 str1 str2
No tool is perfect, and agrep also has some limitations: you can't use a regex/pattern longer than 32 chars, and some options are not available when used with regexps - all of this is explained in the agrep man page.
Ignoring the "Is it possible to do it without ... or use a tool like awk or python?" requirement, you can do it with a Perl script:
(Use an appropriate shebang for your system, or something like #!/usr/bin/env perl)
#!/usr/bin/perl
use Getopt::Std; # option parsing

my %opts;
my $filename;
my @patterns;
getopts('rf:', \%opts); # Allowing -f <filename> and -r to enable regex processing

if ($opts{'f'}) { # if -f is given
    $filename = $opts{'f'};
    @patterns = @ARGV[0 .. $#ARGV]; # Use everything else as patterns
} else { # Otherwise
    $filename = $ARGV[0];           # First parameter is filename
    @patterns = @ARGV[1 .. $#ARGV]; # Rest is patterns
}
my $use_re = $opts{'r'}; # Flag on whether patterns are regex or not

open(INF, '<', $filename) or die("Can't open input file '$filename'");

while (my $line = <INF>) {
    my @removal_list = (); # List of stuff that matched that we don't want to check again
    for (my $i = 0; $i <= $#patterns; $i++) {
        my $pattern = $patterns[$i];
        if (($use_re && $line =~ /$pattern/) ||           # regex match
            (!$use_re && index($line, $pattern) >= 0)) {  # or string search
            push(@removal_list, $i); # Mark to be removed
        }
    }
    # Now remove everything we found this time
    # We need to work backwards to keep us from messing
    # with the list while we're busy
    for (my $i = $#removal_list; $i >= 0; $i--) {
        splice(@patterns, $removal_list[$i], 1);
    }
    if (scalar(@patterns) == 0) { # If we don't need to match anything anymore
        close(INF) or warn("Error closing '$filename'");
        exit(0); # We found everything
    }
}
# End of file
close(INF) or die("Error closing '$filename'");
exit(1); # If we reach this, we haven't matched everything
Saved as matcher.pl, this will search for plain text strings:
./matcher filename string1 string2 string3 'complex string'
This will search for regular expressions:
./matcher -r filename regex1 'regex2' 'regex4'
(The filename can be given with -f instead):
./matcher -f filename -r string1 string2 string3 'complex string'
It is limited to single line matching patterns (due to dealing with the file linewise).
The performance, when calling it for lots of files from a shell script, is slower than awk (but search patterns can contain spaces, unlike the ones passed space-separated in -v to awk). If converted to a function and called from Perl code (with a file containing a list of files to search), it should be much faster than most awk implementations. (When called on several smallish files, the Perl startup time (parsing, etc. of the script) dominates the timing.)
It can be sped up significantly by hardcoding whether regular expressions are used or not, at the cost of flexibility. (See my benchmarks here to see what effect removing Getopt::Std has)
perl -lne '%m = (%m, map {$_ => 1} m!\b(string1|string2|string3)\b!g); END { print scalar keys %m == 3 ? "Match": "No Match"}' file
In Python, using the fileinput module allows the files to be specified on the command line, or the text to be read line by line from stdin. You could hard-code the strings into a Python list:
# Strings to match, must be valid regular expression patterns
# or be escaped when compiled into regex below.
strings = (
r'string1',
r'string2',
r'string3',
)
or read the strings from another file
import re
from fileinput import input, filename, nextfile, isfirstline

for line in input():
    if isfirstline():
        regexs = map(re.compile, strings) # new file, reload all strings
    # keep only strings that have not been seen in this file
    regexs = [rx for rx in regexs if not rx.search(line)]
    if not regexs: # found all strings
        print filename()
        nextfile()
Assuming all your strings to check are in a file strings.txt, and the file you want to check is input.txt, the following one-liner will do:
Updated the answer based on comments:
$ diff <( sort -u strings.txt ) <( grep -o -f strings.txt input.txt | sort -u )
Explanation:
Use grep's -o option to match only the strings you are interested in. This gives all the strings that are present in the file input.txt. Then use diff to get the strings that are not found. If all the strings were found, the result would be nothing. Or, just check the exit code of diff.
What it does not do :
Exit as soon as all matches are found.
Extendable to regexes.
Overlapping matches.
What it does do :
Find all matches.
Single call to grep.
Does not use awk or python.
Many of these answers are fine as far as they go.
But if performance is an issue -- certainly possible if the input is large and you have many thousands of patterns -- then you'll get a large speedup using a tool like lex or flex that generates a true deterministic finite automaton as a recognizer rather than calling a regex interpreter once per pattern.
The finite automaton will execute a few machine instructions per input character regardless of the number of patterns.
A no-frills flex solution:
%{
void match(int);
%}
%option noyywrap
%%
"abc" match(0);
"ABC" match(1);
[0-9]+ match(2);
/* Continue adding regex and exact string patterns... */
[ \t\n] /* Do nothing with whitespace. */
. /* Do nothing with unknown characters. */
%%
// Total number of patterns.
#define N_PATTERNS 3
int n_matches = 0;
int counts[10000];
void match(int n) {
if (counts[n]++ == 0 && ++n_matches == N_PATTERNS) {
printf("All matched!\n");
exit(0);
}
}
int main(void) {
yyin = stdin;
yylex();
printf("Only matched %d patterns.\n", n_matches);
return 1;
}
A down side is that you'd have to build this for every given set of patterns. That's not too bad:
flex matcher.y
gcc -O lex.yy.c -o matcher
Now run it:
./matcher < input.txt
The following Python script should do the trick. It kind of calls the equivalent of grep (re.search) multiple times for each line -- i.e. it searches for each pattern on each line -- but since you are not forking out a process each time, it should be much more efficient. Also, it removes the patterns which have already been found and stops when all of them have been found.
#!/usr/bin/env python

import re

# the file to search
filename = '/path/to/your/file.txt'

# list of patterns -- can be read from a file or command line
# depending on the count
patterns = [r'py.*$', r'\s+open\s+', r'^import\s+']
patterns = map(re.compile, patterns)

with open(filename) as f:
    for line in f:
        # search for pattern matches
        results = map(lambda x: x.search(line), patterns)

        # remove the patterns that did match
        results = zip(results, patterns)
        results = filter(lambda x: x[0] == None, results)
        patterns = map(lambda x: x[1], results)

        # stop if no more patterns are left
        if len(patterns) == 0:
            break

# print the patterns which were not found
for p in patterns:
    print p.pattern
You can add a separate check for plain strings (string in line) if you are dealing with plain (non-regex) strings -- will be slightly more efficient.
Does that solve your problem?
One more Perl variant - whenever all given strings match, even if the file is only read halfway through, the processing completes and just prints the results:
> perl -lne ' /\b(string1|string2|string3)\b/ and $m{$1}++; eof if keys %m == 3; END { print keys %m == 3 ? "Match": "No Match"}' all_match.txt
Match
> perl -lne ' /\b(string1|string2|stringx)\b/ and $m{$1}++; eof if keys %m == 3; END { print keys %m == 3 ? "Match": "No Match"}' all_match.txt
No Match
First delete the line separators, and then use normal grep multiple times, once per pattern, as below.
Example: Let the file content be as below
PAT1
PAT2
PAT3
something
somethingelse
cat file | tr -d "\n" | grep "PAT1" | grep "PAT2" | grep -c "PAT3"
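With the sample file above, all three patterns are present once the newlines are removed, so the pipeline should print 1; if any pattern is missing, the final count is 0:
$ cat file | tr -d "\n" | grep "PAT1" | grep "PAT2" | grep -c "PAT3"
1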
For plain speed, with no external tool limitations, and no regexes, this (crude) C version does a decent job. (Possibly Linux only, although it should work on all Unix-like systems with mmap)
#include <sys/mman.h>
#include <sys/stat.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
/* https://stackoverflow.com/a/8584708/1837991 */
static inline char *sstrstr(char *haystack, char *needle, size_t length)
{
size_t needle_length = strlen(needle);
size_t i;
for (i = 0; i < length; i++) {
if (i + needle_length > length) {
return NULL;
}
if (strncmp(&haystack[i], needle, needle_length) == 0) {
return &haystack[i];
}
}
return NULL;
}
int matcher(char * filename, char ** strings, unsigned int str_count)
{
int fd;
struct stat sb;
char *addr;
unsigned int i = 0; /* Used to keep us from running off the end of strings into SIGSEGV */
fd = open(filename, O_RDONLY);
if (fd == -1) {
fprintf(stderr,"Error '%s' with open on '%s'\n",strerror(errno),filename);
return 2;
}
if (fstat(fd, &sb) == -1) { /* To obtain file size */
fprintf(stderr,"Error '%s' with fstat on '%s'\n",strerror(errno),filename);
close(fd);
return 2;
}
if (sb.st_size <= 0) { /* zero byte file */
close(fd);
return 1; /* 0 byte files don't match anything */
}
/* mmap the file. */
addr = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
if (addr == MAP_FAILED) {
fprintf(stderr,"Error '%s' with mmap on '%s'\n",strerror(errno),filename);
close(fd);
return 2;
}
while (i++ < str_count) {
char * found = sstrstr(addr,strings[0],sb.st_size);
if (found == NULL) { /* If we haven't found this string, we can't find all of them */
munmap(addr, sb.st_size);
close(fd);
return 1; /* so give the user an error */
}
strings++;
}
munmap(addr, sb.st_size);
close(fd);
return 0; /* if we get here, we found everything */
}
int main(int argc, char *argv[])
{
char *filename;
char **strings;
unsigned int str_count;
if (argc < 3) { /* Lets count parameters at least... */
fprintf(stderr,"%i is not enough parameters!\n",argc);
return 2;
}
filename = argv[1]; /* First parameter is filename */
strings = argv + 2; /* Search strings start from 3rd parameter */
str_count = argc - 2; /* strings are two ($0 and filename) less than argc */
return matcher(filename,strings,str_count);
}
Compile it with:
gcc matcher.c -o matcher
Run it with:
./matcher filename needle1 needle2 needle3
Credits:
uses sstrstr
File handling mostly stolen from the mmap man page
Notes:
It will scan through the parts of the file preceding the matched strings multiple times - it will only open the file once though.
The entire file might end up loaded into memory, especially if a string doesn't match; the OS needs to decide that
regex support can probably be added by using the POSIX regex library (performance would likely be slightly better than grep, which should be based on the same library, and you would gain reduced overhead from only opening the file once when searching for multiple regexes)
Files containing nulls should work, search strings with them not though...
All characters other than null should be searchable (\r, \n, etc)
I didn't see a simple counter among answers, so here is a counter oriented solution using awk that stops as soon as all matches are satisfied:
/string1/ { a = 1 }
/string2/ { b = 1 }
/string3/ { c = 1 }
{
if (c + a + b == 3) {
print "Found!";
exit;
}
}
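Assuming this is saved as, say, allmatch.awk (a hypothetical filename), it would be run as:
awk -f allmatch.awk file
It prints Found! and stops reading the file as soon as the third flag is set.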
A generic script
to expand usage through shell arguments:
#! /bin/sh
awk -v vars="$*" -v argc=$# '
BEGIN { split(vars, args); }
{
for (arg in args) {
if (!temp[arg] && $0 ~ args[arg]) {
inc++;
temp[arg] = 1;
}
}
if (inc == argc) {
print "Found!";
exit;
}
}
END { exit 1; }
' filename
Usage (in which you can pass Regular Expressions):
./script "str1?" "(wo)?men" str3
or to apply a string of patterns:
./script "str1? (wo)?men str3"
$ cat allstringsfile | tr '\n' ' ' | awk -f awkpattern1
Where allstringsfile is your text file, as in the original question.
awkpattern1 contains the string patterns, with && condition:
$ cat awkpattern1
/string1/ && /string2/ && /string3/

How to process tr across all files in a directory and output to a different name in another directory?

mpu3$ echo * | xargs -n 1 -I {} | tr "|" "/n"
which outputs:
#.txt
ag.txt
bg.txt
bh.txt
bi.txt
bid.txt
dh.txt
dw.txt
er.txt
ha.txt
jo.txt
kc.txt
lfr.txt
lg.txt
ng.txt
pb.txt
r-c.txt
rj.txt
rw.txt
se.txt
sh.txt
vr.txt
wa.txt
is what I have so far. What is missing is the output; I get none. What I really want is to get a list of txt files, use their names up to the extension, process out the "|" and replace it with a LF/CR, and put the new file in another directory as [old-name].ics. HALP. THX in advance. - Idiot me.
You can loop over the files and use sed to process the file:
for i in *.txt; do
sed -e 's/|/\n/g' "$i" > other_directory/"${i%.txt}".ics
done
No need to use xargs, especially with echo, which would risk the filenames getting word-split and having globbing applied to them, so it could well do the wrong thing.
Then we use sed's s command to substitute | with \n; the g flag makes the replacement global. We redirect that to the other directory you want and use bash's parameter expansion to strip off the .txt from the end.
Here's an awk solution:
$ awk '
FNR==1 { # for first record of every file
close(f) # close previous file f
f="path_to_dir/" FILENAME # new filename with path
sub(/txt$/,"ics",f) } # replace txt with ics
{
gsub(/\|/,"\n") # replace | with \n
print > f }' *.txt # print to new file

Bash - Search and Replace operation with reporting the files and lines that got changed

I have an input file "test.txt" as below -
hostname=abc.com hostname=xyz.com
db-host=abc.com db-host=xyz.com
In each line, the value before the space is the old value, which needs to be replaced by the new value after the space, recursively in a folder named "test". I am able to do this using the below shell script.
#!/bin/bash
IFS=$'\n'
for f in `cat test.txt`
do
OLD=$(echo $f| cut -d ' ' -f 1)
echo "Old = $OLD"
NEW=$(echo $f| cut -d ' ' -f 2)
echo "New = $NEW"
find test -type f | xargs sed -i.bak "s/$OLD/$NEW/g"
done
"sed" replaces the strings on the fly in 100s of files.
Is there a trick or an alternative way by which I can get a report of the files changed, like the absolute path of the file & the exact lines that got changed?
PS - I understand that sed and other stream editors don't support this functionality out of the box. I don't want to use versioning, as it would be overkill for this task.
Let's start with a simple rewrite of your script, to make it a little bit more robust at handling a wider range of replacement values, but also faster:
#!/bin/bash
# escape regexp and replacement strings for sed
escapeRegex() { sed 's/[^^]/[&]/g; s/\^/\\^/g' <<<"$1"; }
escapeSubst() { sed 's/[&/\]/\\&/g' <<<"$1"; }
while read -r old new; do
find test -type f -exec sed "s/$(escapeRegex "$old")/$(escapeSubst "$new")/g" -i '{}' \;
done <test.txt
So, we loop over pairs of whitespace-separated fields (old, new) in lines from test.txt and run a standard sed in-place replace on all files found with find.
Pretty similar to your script, but we properly read lines from test.txt (no word splitting, pathname/variable expansion, etc.), we use Bash builtins whenever possible (no need to call external tools like cat, cut, xargs); and we escape sed metacharacters in old/new values for proper use as sed's regexp and replacement expressions.
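To see what the escaping buys us, here is what escapeRegex produces for one of the values from test.txt - every character other than ^ is wrapped in a bracket expression, making it literal to sed:
$ escapeRegex 'db-host=abc.com'
[d][b][-][h][o][s][t][=][a][b][c][.][c][o][m]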
Now let's add logging from sed:
#!/bin/bash
# escape regexp and replacement strings for sed
escapeRegex() { sed 's/[^^]/[&]/g; s/\^/\\^/g' <<<"$1"; }
escapeSubst() { sed 's/[&/\]/\\&/g' <<<"$1"; }
while read -r old new; do
find test -type f -printf '\n[%p]\n' -exec sed "/$(escapeRegex "$old")/{
h
s//$(escapeSubst "$new")/g
H
x
s/\n/ --> /
w /dev/stdout
x
}" -i '{}' > >(tee -a change.log) \;
done <test.txt
The sed script above changes each old to new, but it also writes old --> new line to /dev/stdout (Bash-specific), which we in turn append to change.log file. The -printf action in find outputs a "header" line with file name, for each file processed.
With this, your "change log" will look something like:
[file1]
hostname=abc.com --> hostname=xyz.com
[file2]
[file1]
db-host=abc.com --> db-host=xyz.com
[file2]
db-host=abc.com --> db-host=xyz.com
Just for completeness, a quick walk-through the sed script. We act only on lines containing the old value. For each such line, we store it to hold space (h), change it to new, append that new value to the hold space (joined with newline, H) which now holds old\nnew. We swap hold with pattern space (x), so we can run s command that converts it to old --> new. After writing that to the stdout with w, we move the new back from hold to pattern space, so it gets written (in-place) to the file processed.
From man sed:
-i[SUFFIX], --in-place[=SUFFIX]
edit files in place (makes backup if SUFFIX supplied)
This can be used to create a backup file when replacing. You can then look for any backup files, which indicate which files were changed, and diff those with the originals. Once you're done inspecting the diff, simply remove the backup files.
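For example, a minimal sketch of that inspection step, assuming the .bak suffix used in the script from the question:
find test -name '*.bak' | while IFS= read -r bak; do
    echo "== ${bak%.bak} =="
    diff -u "$bak" "${bak%.bak}"
done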
If you formulate your replacements as sed statements rather than a custom format you can go one further, and use either a sed shebang line or pass the file to -f/--file to do all the replacements in one operation.
There are several problems with your script; just replace it all with the following (using GNU awk instead of GNU sed for in-place editing):
mapfile -t files < <(find test -type f)
awk -i inplace '
NR==FNR { map[$1] = $2; next }
{ for (old in map) gsub(old,map[old]) }
' test.txt "${files[#]}"
You'll find that is orders of magnitude faster than what you were doing.
That still has the issues your existing script does of failing when the "test.txt" strings contain regexp or backreference metacharacters, of modifying previously-modified strings, and of matching partial strings - if any of that is an issue, let us know, as it's easy to work around with awk (and extremely difficult with sed!).
To get whatever kind of report you want, you just tweak the { for ... } part to print it, e.g. to print a record of the changes to stderr:
mapfile -t files < <(find test -type f)
awk -i inplace '
NR==FNR { map[$1] = $2; next }
{
orig = $0
for (old in map) {
gsub(old,map[old])
}
if ($0 != orig) {
printf "File %s, line %d: \"%s\" became \"%s\"\n", FILENAME, FNR, orig, $0 | "cat>&2"
}
}
' test.txt "${files[#]}"

Looking for a regex pattern, passing that pattern to a script, and replacing the pattern with the output of the script

Every time the pattern shows up (in this example, a 2-digit number), I want to pass the match to a script and replace the pattern with the output of that script.
I'm using sed; an example of what it should look like would be
echo 'siedi87sik65owk55dkd' | sed 's/[0-9][0-9]/.\/script.sh/g'
Right now this returns
siedi./script.shsik./script.showk./script.shdkd
But I would like it to return
siedi!!!87!!!sik!!!65!!!owk!!!55!!!dkd
This is what is in ./script.sh
#!/bin/bash
echo "!!!$1!!!"
It has to be replaced with the output. In this example I know I could just use a normal sed substitution but I don't want that as an answer.
sed is for simple substitutions on individual lines, that is all. Anything else, even if it can be done, requires arcane language constructs that became obsolete in the mid-1970s when awk was invented, and are used today purely for the mental exercise. Your problem is not a simple substitution, so you shouldn't try to use sed to solve it.
You're going to want something like:
awk '{
head = ""
tail = $0
while ( match(tail,/[0-9]{2}/) ) {
tgt = substr(tail,RSTART,RLENGTH)
cmd = "./script.sh " tgt
if ( (cmd | getline line) > 0) {
tgt = line
}
close(cmd)
head = head substr(tail,1,RSTART-1) tgt
tail = substr(tail,RSTART+RLENGTH)
}
print head tail
}'
e.g. using an echo in place of your script.sh command:
$ echo 'siedi87sik65owk55dkd' |
awk '{
head = ""
tail = $0
while ( match(tail,/[0-9]{2}/) ) {
tgt = substr(tail,RSTART,RLENGTH)
cmd = "echo !!!" tgt "!!!"
if ( (cmd | getline line) > 0) {
tgt = line
}
close(cmd)
head = head substr(tail,1,RSTART-1) tgt
tail = substr(tail,RSTART+RLENGTH)
}
print head tail
}'
siedi!!!87!!!sik!!!65!!!owk!!!55!!!dkd
Ed's awk solution is obviously the way to go here.
For fun, I tried to come up with a sed solution, and here is a (convoluted GNU sed) one that takes the pattern and the script to be run as parameters; the input is either read from standard input (i.e., you can pipe to it) or from a file supplied as the third argument.
For your example, we'd have infile with contents
siedi87sik65owk55dkd
siedi11sik22owk33dkd
(two lines to demonstrate how this works for multiple lines), then script with contents
#!/bin/bash
echo "!!!${1}!!!"
and finally the solution script itself, named so. Usage is
./so pattern script [input]
where pattern is an extended regular expression as understood by GNU sed (with the -r option), script is the name of the command you want to run for each match, and the optional input is the name of the input file if input is not standard input.
For your example, this would be
./so '[[:digit:]]{2}' script infile
or, as a filter,
cat infile | ./so '[[:digit:]]{2}' script
with output
siedi!!!87!!!sik!!!65!!!owk!!!55!!!dkd
siedi!!!11!!!sik!!!22!!!owk!!!33!!!dkd
This is what so looks like:
#!/bin/bash
pat=$1 # The pattern to match
script=$2 # The command to run for each pattern
infile=${3:-/dev/stdin} # Read from standard input if not supplied
# Use sed and have $pattern and $script expand to the supplied parameters
sed -r "
:build_loop # Label to loop back to
h # Copy pattern space to hold space
s/.*($pat).*/.\/\"$script\" \1/ # (1) Extract last match and prepare command
# Replace pattern space with output of command
e
G # (2) Append hold space to pattern space
s/(.*)$pat(.*)/\1~~~\2/ # (3) Replace last match of pattern with ~~~
/\n[^\n]*$pat[^\n]*$/b build_loop # Loop if string contains match
:fill_loop # Label for second loop
s/(.*\n)(.*)\n([^\n]*)~~~([^\n]*)$/\1\3\2\4/ # (4) Replace last ~~~
t fill_loop # Loop if there was a replacement
s/(.*)\n(.*)~~~(.*)$/\2\1\3/ # (5) Final ~~~ replacement
" < "$infile"
The sed command works with two loops. The first one copies the pattern space to the hold space, then removes everything but the last match from the pattern space and prepares the command to be run. After the substitution with (1) in its comment, the pattern space looks like this:
./script 55
The e command (a GNU extension) then replaces the pattern space with the output of this command. After this, G appends the hold space to the pattern space (2). The pattern space now looks like this:
!!!55!!!
siedi87sik65owk55dkd
The substitution at (3) replaces the last match with a string hopefully not equal to the pattern and we get
!!!55!!!
siedi87sik65owk~~~dkd
The loop repeats if the last line of the pattern space still has a match for the pattern. After three loops, the pattern space looks like this:
!!!87!!!
!!!65!!!
!!!55!!!
siedi~~~sik~~~owk~~~dkd
The second loop now replaces the last ~~~ with the second to last line of the pattern space with substitution (4). The command uses lots of "not a newline" ([^\n]) to make sure we're not pulling the wrong replacement for ~~~.
Because of the way command (4) is written, the loop ends with one last substitution to go, so before command (5), we have this pattern space:
!!!87!!!
siedi~~~sik!!!65!!!owk!!!55!!!dkd
Command (5) is a simpler version of command (4), and after it, the output is as desired.
This seems to be fairly robust and can deal with spaces in the name of the script to be run as long as it's properly quoted when calling:
./so '[[:digit:]]{2}' 'my script' infile
This would fail if
The input file contains ~~~ (solvable by replacing all occurrences at the start, putting them back at the end)
The output of script contains ~~~
The pattern contains ~~~
i.e., the solution very much depends on ~~~ being unique.
Because nobody asked: so as a one-liner.
#!/bin/bash
sed -re ":b;h;s/.*($1).*/.\/\"$2\" \1/;e" -e "G;s/(.*)$1(.*)/\1~~~\2/;/\n[^\n]*$1[^\n]*$/bb;:f;s/(.*\n)(.*)\n([^\n]*)~~~([^\n]*)$/\1\3\2\4/;tf;s/(.*)\n(.*)~~~(.*)$/\2\1\3/" < "${3:-/dev/stdin}"
Still works!
A conceptually simpler multi-utility solution:
Using GNU utilities:
echo 'siedi87sik65owk55dkd' |
sed 's|[0-9]\{2\}|$(./script.sh &)|g' |
xargs -d'\n' -I% sh -c 'echo '\"%\"
Using BSD utilities (also works with GNU utilities):
echo 'siedi87sik65owk55dkd' |
sed 's|[0-9]\{2\}|$(./script.sh &)|g' | tr '\n' '\0' |
xargs -0 -I% sh -c 'echo '\"%\"
The idea is to use sed to translate the tokens of interest lexically into a string containing shell command substitutions that invoke the target script with the token, and then pass the result to the shell for evaluation.
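To see the intermediate step, this is what the sed stage alone produces - a string of command substitutions that the final sh -c then evaluates:
$ echo 'siedi87sik65owk55dkd' | sed 's|[0-9]\{2\}|$(./script.sh &)|g'
siedi$(./script.sh 87)sik$(./script.sh 65)owk$(./script.sh 55)dkd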
Note:
Any embedded " and $ characters in the input must be \-escaped.
xargs -d'\n' (GNU) and tr '\n' '\0' / xargs -0 (BSD) are only needed to correctly preserve whitespace in the input - if that is not needed, the following POSIX-compliant solution will do:
echo 'siedi87sik65owk55dkd' |
sed 's|[0-9]\{2\}|$(./script.sh &)|g' | tr '\n' '\0' |
xargs -I% sh -c 'printf "%s\n" '\"%\"
