I am using John the Ripper, an application that outputs generated passwords line by line. I want to write a bash script that takes each line of output, applies md5sum to it, and prints the result.
For example:
$ ./john --wordlist=password.lst -rules:Single
12346
fdgh
sdfdfj
test
password1234
...
and so on... (really fast)
I want to apply md5sum to each of these lines.
$ md5sum <<< "12346"
f447b20a7fcbf53a5d5be013ea0b15af -
Use
command | while IFS= read -r l; do md5sum <<<"$l"; done
or with xargs (not necessarily simpler):
command | xargs -n1 sh -c 'md5sum <<<"$1"' --
where command is your ./john --wordlist=password.lst -rules:Single command.
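For example, the full pipeline would look like this (a sketch, using the john invocation from the question):
./john --wordlist=password.lst -rules:Single |
while IFS= read -r l; do
    md5sum <<< "$l"
done
Note that a here-string appends a trailing newline, so each hash covers the password plus \n; if you need the hash of the bare string, use printf '%s' "$l" | md5sum instead.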
Related
I want to apply a command to each line of piped stdin like so:
cat file.txt | grep ... | ./filter | wc -l
the problem is that ./filter accepts only a single line of input and gives a single line of output. I've tried xargs, but it spawns a subshell and I can't capture its output to continue working with the result. Is there an easy way to do that?
If it accepts only a single line, then you should put it in a loop to process multiple lines:
cat file.txt |
grep ... |
while read line ; do
echo "$line" | ./filter
done |
wc -l
To call a command for each line, you can read each line into a variable and use the variable as standard input. (Also, let's avoid the useless use of cat.)
grep ... < file.txt |
while IFS= read -r line; do
./filter <<< "$line"
done |
wc -l
In this case it looks like things may get much easier if you instead write the whole filter in awk, because awk gives you wc -l for free (NR), plus line and field splitting, and filtering more powerful than what grep can do.
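As a minimal sketch (the /pattern/ and the field test here are hypothetical stand-ins for your grep expression and ./filter logic, which the question doesn't show):
awk '/pattern/ && $2 == "ok" { n++ } END { print n+0 }' file.txt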
I have a file, ciao.py, that has only one line in it: print("ciao")
I want to do this via a pipe stream: if I do cat ciao.py | sed 's/.*/&\n&/' it works, but I want to do it in two separate parts, simulating the case where I print the line and then pass it on to further commands.
If I do this:
cat ciao.py | sed 's/.*/&\n/' |tee >(xargs echo) | xargs echo
it does not work. It prints print("ciao") print("ciao") on the same line. I don't understand why, since I am adding \n with sed.
I'd guess print(ciao) is appearing twice on the same line because xargs is calling echo with multiple strings: by default, xargs calls the command you give it with groups of input lines at a time.
Is this what you're trying to do?
$ cat ciao.py | sed 's/.*/&\n/' |tee >(xargs -n 1 echo) | xargs -n 1 echo
print(ciao)
print(ciao)
or:
$ cat ciao.py | sed 's/.*/&\n/' |tee >(cat) | xargs -n 1 echo
print(ciao)
print(ciao)
There are, of course, better ways to get that output from that input, e.g.:
$ sed 'p' ciao.py
print("ciao")
print("ciao")
I want to use tail in my custom pipe command.
For example, I want to execute this command:
>ls -1 | tail -n 1 | awk '{print "last file is "$1}'
>last file is test.txt
And I want to make it short by making my own custom script. It looks like this:
>ls -1 | myscript
>last file is test.txt
I know myscript can get input from "ls -1" with this code:
while read line; do
echo last file is $line
done
But I don't know how to use "tail -n 1" in the custom pipe command code above.
Is there a way to use a pipe command in another pipe command script?
Or do I have to implement the code which does the same process as "tail -n 1" myself?
I hope bash has some solution for this.
Try putting just this in myscript
tail -n 1 | awk '{print "last file is "$1}'
This works because the first command (tail) consumes the stdin of your script. In general, scripts work as though you had typed their contents as-is into the terminal.
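Putting it together, myscript could contain just this (a sketch; don't forget to chmod +x it):
#!/bin/sh
# tail consumes the script's stdin and keeps only the last line;
# awk then formats it.
tail -n 1 | awk '{print "last file is "$1}'
and you would call it as ls -1 | ./myscript.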
I am facing a problem with a script I need to use for log analysis; let me explain the question:
I have a gzipped file like:
5555_prova.log.gz
Inside the file there are many log lines like this one:
2018-06-12 03:34:31 95.245.15.135 GET /hls.playready.vod.mediasetpremium/farmunica/2018/06/218742_163f10da04c7d2/hlsrc/w12/21.ts
I need a script that reads the gzipped log file and outputs on stdout a modified log line like this one:
5555 2018-06-12 03:34:31 95.245.15.135 GET /hls.playready.vod.mediasetpremium/farmunica/2018/06/218742_163f10da04c7d2/hlsrc/w12/21.ts
As you can see, the log line now starts with the number read from the gzip file name.
I need this new line to feed a logstash data crunching chain.
I have tried with a script like this:
echo "./5555_prova.log.gz" | xargs -ISTR -t -r sh -c "gunzip -c STR | awk '{$0="5555 "$0}' "
this is not exactly what I need (the prefix is static and not captured with a regular expression from the file name) but even with this simplified version I receive an error:
sh -c gunzip -c ./5555_prova.log.gz | awk '{-bash=5555 -bash}'
-bash}' : -c: line 0: unexpected EOF while looking for matching `''
-bash}' : -c: line 1: syntax error: unexpected end of file
As you can see from the above output, $0 is no longer the whole line passed via pipe to awk but a strange -bash.
I need to use xargs because the list of gzipped files is fed to the command line by another tool (an instance of inotifywait listening to a directory where the files are written via FTP).
What am I missing? Do you have suggestions to point me in the right direction?
Trying to follow @Charles Duffy's suggestion, I have written this code:
#!/bin/bash
#
# Usage: sendToLogstash.sh [pattern]
#
# Executes a command whenever files matching the pattern are closed in write
# mode or moved to. "{}" in the command is replaced with the matching filename (via xargs).
# Requires inotifywait from inotify-tools.
#
# For example,
#
# whenever.sh '/usr/local/myfiles/'
#
#
DIR="$1"
PATTERN="\.gz$"
script=$(cat <<'EOF'
awk -v filename="$file" 'BEGIN{split(filename,array,"_")}{$0=array[1] OFS $0} 1' < $(gunzip -dc "$DIR/$file")
EOF
)
inotifywait -q --format '%f' -m -r -e close_write -e moved_to "$DIR" \
| grep --line-buffered $PATTERN | xargs -I{} -r sh -c "file={}; $script"
But I got the error:
[root@ms-felogstash ~]# ./test.sh ./poppo
gzip: /1111_test.log.gz: No such file or directory
gzip: /1111_test.log.gz: No such file or directory
sh: $(gunzip -dc "$DIR/$file"): ambiguous redirect
Thanks for your help, I feel very lost writing bash scripts.
EDIT: Also, in case you are dealing with multiple .gz files and want to print their content along with their file names (first column, _ delimited), the following may help you.
for file in *.gz; do
awk -v filename="$file" 'BEGIN{split(filename,array,"_")}{$0=array[1] OFS $0} 1' <(gzip -dc "$file")
done
I haven't tested your code (I couldn't completely understand it either), so I am showing one way here: if your code can pass the file name to awk, then it is pretty simple to prepend the file name's leading digits, as follows (just an example).
awk 'FNR==1{split(FILENAME,array,"_")} {$0=array[1] OFS $0} 1' 5555_prova.log_file
Here I take awk's built-in FILENAME variable (only on the first line of the file), split it into an array named array, and then prepend array[1] to each line of the file.
Also wrap "gunzip -c STR this with ending " which seems to be missing before you pass its output to awk too.
NEVER, EVER use xargs -I with a string substituted into sh -c (or bash -c or any other context where that string is interpreted as code). This allows malicious filenames to run arbitrary commands -- think about what happens if someone runs touch $'$(rm -rf ~)\'$(rm -rf ~)\'.gz', and gets that file into your log.
Instead, let xargs append arguments after your script text, and write your script to iterate over / read those arguments as data, rather than having them substituted into code.
To show how to use xargs safely (well, safely if we assume that you've filtered out filenames with literal newlines):
# This way you don't need to escape the quotes in your script by hand
script=$(cat <<'EOF'
for arg; do gunzip -c <"$arg" | awk '{$0="5555 "$0}'; done
EOF
)
# if you **did** want to escape them by hand, it would look like this:
# script='for arg; do gunzip -c <"$arg" | awk '"'"'{$0="5555 "$0}'"'"'; done'
echo "./5555_prova.log.gz" | xargs -d $'\n' sh -c "$script" _
To be safer with all possible filenames, you'd instead use:
printf '%s\0' "./5555_prova.log.gz" | xargs -0 sh -c "$script" _
Note the use of NUL-delimited input (created with printf '%s\0') and xargs -0 to consume it.
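To get the dynamic prefix the question actually asks for (the digits captured from each file name rather than a hard-coded 5555), the same pattern extends naturally; this is an untested sketch using only POSIX parameter expansion:
script=$(cat <<'EOF'
for arg; do
  base=${arg##*/}      # strip any leading directory
  prefix=${base%%_*}   # everything before the first "_", e.g. 5555
  gunzip -c <"$arg" | awk -v p="$prefix" '{$0=p OFS $0} 1'
done
EOF
)
printf '%s\0' ./5555_prova.log.gz | xargs -0 sh -c "$script" _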
I have to run many Python scripts which differ in just one parameter. I name them runv1.py, runv2.py, ..., runv20.py. I have the original script, say runv1.py. Then I make all the copies that I need by
cat runv1.py | tee runv{2..20..1}.py
So I have runv1.py, ..., runv20.py, but the parameter is still v=1 in all of them.
Q: How can I also set the v parameter from the file name, so that e.g. in runv4.py, v=4? I would like to know if there is a one-line shell command or combination of commands. Thank you!
PS: Directly editing each file is not a proper solution when there are too many files.
The for loop below will serve your purpose, I think:
for i in runv[0-9]*.py               # glob instead of parsing ls
do
    l=$(echo "$i" | tr -d '[a-z.]')  # strip letters and dots, leaving the number
    sed -i "s/v=1/v=$l/g" "$i"       # set the parameter from the file name
done
The command below passes the parameter, extracted from the filename itself, to each script:
ls | grep "runv[0-9][0-9]*.py" | tr -d [a-z.] | awk '{print "./runv"$0".py "$0}' | xargs sh
At the end, instead of sh you can use python, bash, or ksh.
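If each script instead reads its parameter from the command line (e.g. via sys.argv), a plain-bash sketch of the same idea, without parsing ls, would be:
for f in runv[0-9]*.py; do
    n=${f//[!0-9]/}    # keep only the digits from the file name
    python "$f" "$n"   # pass the number as an argument
done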