Bash - running a command on every grep match without stopping tail -n0 -f - bash

I'm monitoring a log file, and my ultimate goal is to write a script that uses tail -n0 -f and executes a certain command every time grep finds a match. My current code:
tail -n 0 -f $logfile | grep -q $pattern && echo $warning > $anotherlogfile
This works, but only once, since grep -q stops as soon as it finds a match. The script must keep searching and running the command, so I can update a status log and run another script to automatically fix the problem. Can you give me a hint?
Thanks

Use a while loop:
tail -n 0 -f "$logfile" | while IFS= read -r LINE; do
    echo "$LINE" | grep -q "$pattern" && echo "$warning" > "$anotherlogfile"
done
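If spawning a grep for every line is a concern, bash can also do the match itself with [[ =~ ]]; a minimal sketch, assuming $pattern is an extended regular expression:
tail -n 0 -f "$logfile" | while IFS= read -r line; do
    # the [[ =~ ]] test keeps the match inside bash, so no grep process per line
    [[ $line =~ $pattern ]] && echo "$warning" > "$anotherlogfile"
done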

awk will let us continue to process lines and take actions when a pattern is found. Something like:
tail -n0 -f "$logfile" | awk -v pattern="$pattern" '$0 ~ pattern {print "WARN" >> "anotherLogFile"}'
If you need to pass in the warning message and path to anotherLogFile you can use more -v flags to awk. Also, you could have awk take the action you want instead. It can run commands via the system() function where you pass the shell command to run
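For example, a sketch of that variant, passing the message and output path in via -v and calling a hypothetical repair script through system() on each match:
tail -n0 -f "$logfile" | awk -v pattern="$pattern" -v warn="$warning" -v out="$anotherlogfile" '
    $0 ~ pattern {
        print warn >> out             # append the warning to the status log
        close(out)                    # close so the write is flushed while awk keeps running
        system("/path/to/fix-it.sh")  # hypothetical script that fixes the problem
    }'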

Related

using xargs in parallel bruteforce script

I am working on a bruteforce script using xargs in parallel. I have it working using GNU parallel but can't get it to work right with xargs. So far I have
cat $wordlist | xargs -n 1 -P 32 -I {} curl -s -o /dev/null -w '%{http_code} {}\n' --socks5 127.0.0.1:1080 $ip -u admin:{} | awk '{if($1== "200") {print "Found password" $2; exit}}'
which prints the password but fails to exit. So my question is how can I write this so that it exits after finding first match?
Thanks.
So my question is how can I write this so that it exits after finding
first match?
Don't use exit; that will likely give you SIGPIPE (signal 13) errors. Kill your process group safely, from within awk, like this:
{print "Found password" $2; system("killall -g curl")}}

Piping curl followed by an echo sometimes truncates the output when using tail

Given the following 2 lines in a bash script,
LOCATION=$(curl -i -H "Windmill-Name: $APPLICATION_NAME" -H "Windmill-Identifier: $CFBundleIdentifier" -F "ipa=@$IPA" -F "plist=@$PLIST" $WINDMILL_BASE_URL/windmill/rest/windmill/$USER | grep ^Location | awk '{print $2}')
echo "[windmill] Use $LOCATION for accessing '$APPLICATION_NAME'"
In some cases, the echo string appears like below.
for accessing 'MultiPartIOSDemo'[a truncated $LOCATION]
The behaviour is not consistent, but when it does reproduce, the malformed output is always the same (i.e. $LOCATION truncated at the same point).
It looks like echo writes its string to the buffer before the curl pipeline has finished, and curl then writes its output on top.
Can't quite tell.
Update
Tried all of your suggestions, but the same problem occurs.
I have now dropped grep, and the script now looks like this:
LOCATION=$(curl -i -H "Windmill-Name: $APPLICATION_NAME" -H "Windmill-Identifier: $CFBundleIdentifier" -F "ipa=@$IPA" -F "plist=@$PLIST" "$WINDMILL_BASE_URL/windmill/rest/windmill/$USER" | awk -W '/^Location/ {print $2}')
echo "[windmill] Use $LOCATION for accessing '$APPLICATION_NAME'"
Here are some more details.
The script that includes the above lines is wrapped in a
(
(
# bash script
) 2>&1 | tee $HOME/.windmill/$PROJECT_NAME.log
) 2>&1 | tee $HOME/.windmill/windmill.log
Hence the output of the echo is on both logs.
Just noticed that the above behaviour occurs while tailing, e.g.
tail -fn 20 ~/.windmill/windmill.log
However if I do a
more ~/.windmill/windmill.log
I can see that the echo message appears correctly. Notice the carriage-return character "^M". Wondering if it has anything to do with the way tail renders the log.
[windmill] Use [correct $LOCATION] ^M for accessing 'MultiPartIOSDemo'
Clarified Question
I guess after all of the above there are really two questions.
Under what circumstances does the ^M appear in the log?
Why does tail render the log incorrectly, i.e. handle the ^M in such a way that it first outputs "for accessing 'MultiPartIOSDemo'" and then writes "Use $LOCATION" on top of it?
I would (1) change
$WINDMILL_BASE_URL/windmill/rest/windmill/$USER
to
"$WINDMILL_BASE_URL/windmill/rest/windmill/$USER"
(2) use tee to debug
and (3) use awk to match "^Location" instead of grep:
LOCATION=$(curl -i -H "Windmill-Name: $APPLICATION_NAME" -H "Windmill-Identifier: $CFBundleIdentifier" -F "ipa=@$IPA" -F "plist=@$PLIST" "$WINDMILL_BASE_URL/windmill/rest/windmill/$USER" | tee /tmp/debug.$$ | awk ' /^Location/ {print $2}')
echo "[windmill] Use $LOCATION for accessing '$APPLICATION_NAME'"
Perhaps curl is returning an error. You can check the result and only parse it if no error occurred:
# Create a trace file
tfile=/tmp/trace.$$
# uncomment this line if you want to delete the trace file at the end of the script
#trap "/bin/rm $tfile" 0 1 15
curl -i -H "Windmill-Name: $APPLICATION_NAME" -H "Windmill-Identifier: $CFBundleIdentifier" -F "ipa=@$IPA" -F "plist=@$PLIST" "$WINDMILL_BASE_URL/windmill/rest/windmill/$USER" >$tfile
status=$?
if [ $status -eq 0 ]
then
echo curl was successful
LOCATION=$(awk '/^Location/ {print $2}' <$tfile)
echo "[windmill] Use $LOCATION for accessing '$APPLICATION_NAME'"
else
echo curl exited with status $status
fi

How to break a tail -f command in bash

How can I break out of a tail -f in bash? This question is related to
tail -f | awk and end tail once data is found
so I tried the following:
#! /bin/bash
tvar="testing"
(set -o pipefail && tail -f <<< "$tvar" | awk '{print; exit} END{ exit 1}' )
But the script is still hanging on to tail -f
Well, the problem is not the tail -f but the awk, which hangs. It is meant to terminate (with exit 1) when it reaches EOF, but there is no EOF: the tail -f does not terminate, so no EOF ever arrives.
If the awk did terminate, that would also break the pipe, and the tail would receive a SIGPIPE (which would terminate it).
You must find a different condition on which to terminate.
EDIT:
To achieve what you want you can start the tail -f in the background, remember its PID and kill it as soon as you do not need it anymore. Running in the background and using a pipe at the same time is tricky. The easiest way to do it would be to use a named pipe (FIFO):
mkfifo log.pipe
tail -f log > log.pipe & tail_pid=$!
awk ... < log.pipe
kill $tail_pid
rm log.pipe
It seems that switching from <<< to echo "$tvar" | tail -f does what you want instead:
$> cat test.sh
#! /bin/bash
tvar="testing"
(set -o pipefail && echo "$tvar" | tail -f | awk '{print} END{ exit 1}' )
$> ./test.sh
testing
$>
Although the awk doesn't print anything out afterwards.

Grep output of command and use it in "if" statement, bash

Okay so here's another one about the StarMade server.
Previously I had this script for detecting a crash; it would simply search through the logs:
#!/bin/bash
cd "$(dirname "$0")"
if ( grep "[SERVER] SERVER SHUTDOWN" log.txt.0); then
sleep 7; kill -9 $(ps -aef | grep -v grep | grep 'StarMade.jar' | awk '{print $2}')
fi
It would find "[SERVER] SERVER SHUTDOWN" and kill the process after that. However, this is not a foolproof method, because with other errors the message may not appear at all, rendering the script useless.
So I have this tool that can send commands to the server, but it throws an EOF exception when the server is in a crashed state. I basically want to grab the output of this command and use it in the if-statement above, instead of the current grep command, in such a way that it executes the commands below when grep finds "java.io.EOFException".
I could make it write the output to a file and then grep it from there, but I wonder, isn't there a better/more efficient way to do this?
EDIT: okay, so after a bit of searching I put together the following:
if ( java -jar /home/starmade/StarMade/StarNet.jar xxxxx xxxxx /chat) 2>&1 > /dev/null |grep java.io.EOFException);
Would this be a valid if-statement? I need it to match "java.io.EOFException" in the output of the first command, and if it matches, to execute something with "then" (got that part working).
Not sure this solves your problem, but this line:
ps -aef | grep -v grep | grep 'StarMade.jar' | awk '{print $2}'
could be changed to
ps -aef | awk '/[S]tarMade.jar/ {print $2}'
The [S] prevents awk from finding itself.
Or just like this to get the pid
pidof StarMade.jar
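For the EOFException check itself, a minimal sketch that plugs this pid lookup into the script from the question (the StarNet.jar path and the xxxxx arguments are the placeholders from the question):
#!/bin/bash
cd "$(dirname "$0")"
# A crashed server makes the StarNet tool print java.io.EOFException; grep both stdout and stderr for it.
if java -jar /home/starmade/StarMade/StarNet.jar xxxxx xxxxx /chat 2>&1 | grep -q 'java\.io\.EOFException'; then
    sleep 7
    # [S] keeps the awk pattern from matching this pipeline itself.
    kill -9 $(ps -aef | awk '/[S]tarMade.jar/ {print $2}')
fi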

run hadoop command in bash script

I need to run a hadoop command in a bash script that goes through a bunch of folders on Amazon S3, writes those folder names into a txt file, and then does further processing. The problem is that when I run the script, it seems no folder names get written to the txt file. I wonder whether the hadoop command takes too long to run and the bash script doesn't wait for it to finish before moving on to the further processing; if so, how can I make bash wait until the hadoop command has finished before doing the other processing?
Here is my code; I tried both ways, and neither works:
1.
listCmd="hadoop fs -ls s3n://$AWS_ACCESS_KEY:$AWS_SECRET_KEY#$S3_BUCKET/*/*/$mydate | grep s3n | awk -F' ' '{print $6}' | cut -f 4- -d / > $FILE_NAME"
echo -e "listing... $listCmd\n"
eval $listCmd
...other process ...
2.
echo -e "list the folders we want to copy into a file"
hadoop fs -ls s3n://$AWS_ACCESS_KEY:$AWS_SECRET_KEY@$S3_BUCKET/*/*/$mydate | grep s3n | awk -F' ' '{print $6}' | cut -f 4- -d / > $FILE_NAME
... other process ....
Does anyone know what might be wrong? And is it better to use eval or to run the hadoop command directly, as in the second way?
Thanks.
I would prefer eval in this case; it is prettier to append the next command to this one. And I would rather break listCmd down into parts, so that you know there is nothing wrong at the grep, awk or cut level:
listCmd="hadoop fs -ls s3n://$AWS_ACCESS_KEY:$AWS_SECRET_KEY#$S3_BUCKET/*/*/$mydate > $raw_File"
gcmd="cat $raw_File | grep s3n | awk -F' ' '{print $6}' | cut -f 4- -d / > $FILE_NAME"
echo "Running $listCmd and other commands after that"
otherCmd="cat $FILE_NAME"
eval "$listCmd";
echo $? # This will print the exit status of the $listCmd
eval "$gcmd" && echo "Finished Listing" && eval "$otherCmd"
otherCmd will only be executed if $gcmd succeeds. If you have too many commands that you need to execute, then this becomes a bit ugly. If you roughly know how long it will take, you can insert a sleep command.
eval "$listCmd"
sleep 1800 # This will sleep 1800 seconds
eval "$otherCmd"
