How to send TCPDUMP output to remote script via CURL - bash

I have a script that captures tcpdump output. It sends two variables to a PHP script over the web ($SERVER): filename, which holds $FILETOSEND, and filedata. The data for the filedata variable comes from a file called 1 (formatted as shown below). I am having trouble with the section marked # send common 10 secs dump.
I am trying to send file 1 with curl, using curl --data "$(cat 1)" $SERVER
The script isn't sending the contents of file 1 at all; mostly it just sends the filename and no file data. Is there a problem with the way I am sending the file? Is there a better way to format it?
while true; do
    sleep $DATASENDFREQ
    killall -9 tcpdump &> /dev/null
    if [ -e $DUMP ]; then
        mv $DUMP $DUMP_READY
    fi
    create_dump
    DATE=`date +"%Y-%m-%d_%H-%M-%S"`
    FILETOSEND=$MAC-$DATE-$VERSION
    # write the file header: 2 vars, filename and filedata
    FILEHEADER="filename=$FILETOSEND&filedata="
    echo $FILEHEADER > 2
    # change all colons to underscores to avoid Windows filename issues
    sed -i 's/:/_/g' 2
    # delete all newlines \n in the file
    tr -d '\n' < 2 > 1
    # parse $DUMP_READY into awk.txt (no header in awk.txt)
    awk '{ if (NF > 18 && $10 == "signal") {print "{\"mac\": \""$16"\",\"sig\": \""$9"\",\"ver\": \""$8"\",\"ts\": \""$1"\",\"ssid\": \""$19"\"}" }}' $DUMP_READY > awk.txt
    sed -i 's/SA://g' awk.txt
    sed -i 's/&/%26/g' awk.txt
    cat awk.txt >> 1
    sync
    # send $OFFLINE
    if [ -e $OFFLINE ]; then
        curl -d $OFFLINE $SERVER
        if [ $? -eq "0" ]; then
            echo "status:dump sent;msg:offline dump sent"
            rm $OFFLINE
        else
            echo "status:dump not sent;msg:offline dump not sent"
        fi
    fi
    # send common 10 secs dump
    curl --data "$(cat 1)" $SERVER
    if [ $? -eq "0" ]; then
        echo "status:dump sent"
    else
        cat 1 >> $OFFLINE
        echo "status:dump not sent"
    fi
    if [ -e $DUMP_READY ]; then
        rm -f $DUMP_READY 1 2 upload_file*
    fi
done

Related

Can't verify result from SSH

I'm verifying whether a file exists, via SSH to a remote Ubuntu system, and the if statement is not processing the result correctly.
export FILENAME=test.txt
export NUM=$(ssh -t ubuntu@192.168.XXX.XXX "ls ~/Documents/ | grep '$FILENAME' | wc -l")
echo "Received value: $NUM"
if [ $NUM == 0 ]; then
echo "If processed as: 0"
else
echo "If processed as: 1"
fi
So if $FILENAME exists, I get the following output
Connection to 192.168.XXX.XXX closed.
Received value: 1
If processed as: 1
And if not, I get the following one
Connection to 192.168.XXX.XXX closed.
Received value: 0
If processed as: 1
Why might this be happening? Am I getting a wrongly formatted value? If I force NUM=0 or NUM=1 before the if statement, it gets processed correctly.
if [ $NUM == 0 ]; then should work as expected. (More info on SO)
Use cat -v to show all invisible chars in your output;
NUM=$(ssh -t ubuntu@192.168.XXX.XXX "ls ~/Documents/ | grep '$FILENAME' | wc -l")
echo "NUM: ${NUM}" | cat -v
#Prints; NUM: 0^M
The invisible ^M char is messing with the if statement.
Remove it from the result by piping through tr -d '\r':
export FILENAME=test.txt
export NUM=$(ssh -t ubuntu@192.168.XXX.XXX "ls ~/Documents/ | grep '$FILENAME' | wc -l" | tr -d '\r')
echo "Received value: $NUM"
if [ $NUM == 0 ]; then
echo "If processed as: 0"
else
echo "If processed as: 1"
fi
More ^M info;
What is ^M
Remove ^M from variable
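As a side note, the stray carriage return comes from the pseudo-terminal that ssh -t allocates (a tty translates \n to \r\n). If no interactive terminal is needed, a sketch that avoids the problem at the source:
# no -t: no pseudo-terminal, so output arrives without \r line endings;
# grep -c also replaces the grep | wc -l pipeline
NUM=$(ssh ubuntu@192.168.XXX.XXX "ls ~/Documents/ | grep -c '$FILENAME'")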

Grep large number of patterns from a huge log file

I have a shell script which is invoked every hour via a cron job to search through the Asterisk logs and give me the unique IDs of calls which ended with cause 31.
while read ref
do
cat sample.log | grep "$ref" | grep 'got hangup request, cause 31' | grep -o 'C-[0-9a-z][0-9a-z][0-9a-z][0-9a-z][0-9a-z][0-9a-z][0-9a-z][0-9a-z]' >> cause_temp.log
done < callref.log
The issue is that the while loop is too slow, and for accuracy I have included four while loops like the one above to perform various checks.
The callref.log file consists of call identifier values; every hour it holds about 50-90 thousand values, and the script takes about 45-50 minutes to finish and email me the report.
It would be of great help to cut down the execution time of the loops. Since sample.log is about 20 GB and each iteration opens and scans the whole file, I figure the while loop is the bottleneck.
I have done some research and found useful links (Link 1, Link 2), but I either cannot implement the suggested solutions or do not know how. Any suggestion would be helpful. Thanks
Since sample.log consists of sensitive information I would not be able to share any logs, but below are some sample logs which I got from the internet.
Dec 16 18:02:04 asterisk1 asterisk[31774]: NOTICE[31787]: chan_sip.c:11242 in handle_request_register: Registration from '"503"<sip:503@192.168.1.107>' failed for '192.168.1.137' - Wrong password
Dec 16 18:03:13 asterisk1 asterisk[31774]: NOTICE[31787]: chan_sip.c:11242 in handle_request_register: Registration from '"502"<sip:502@192.168.1.107>' failed for '192.168.1.137' - Wrong password
Dec 16 18:04:49 asterisk1 asterisk[31774]: NOTICE[31787]: chan_sip.c:11242 in handle_request_register: Registration from '"1737245082"<sip:1737245082@192.168.1.107>' failed for '192.168.1.137' - Username/auth name mismatch
Dec 16 18:04:49 asterisk1 asterisk[31774]: NOTICE[31787]: chan_sip.c:11242 in handle_request_register: Registration from '"100"<sip:100@192.168.1.107>' failed for '192.168.1.137' - Username/auth name mismatch
Jun 27 18:09:47 host asterisk[31774]: ERROR[27910]: chan_zap.c:10314 setup_zap: Unable to register channel '1-2'
Jun 27 18:09:47 host asterisk[31774]: WARNING[27910]: loader.c:414 __load_resource: chan_zap.so: load_module failed, returning -1
Jun 27 18:09:47 host asterisk[31774]: WARNING[27910]: loader.c:554 load_modules: Loading module chan_zap.so failed!
The file callref.log consists of a list of lines which look like this:
C-001ec22d
C-001ec23d
C-001ec24d
C-001ec31d
C-001ec80d
The desired output of the above while loop looks like C-001ec80d.
My main concern is to make the while loop run faster: ideally, load all the values of callref.log into memory and search for all of them simultaneously in a single pass of sample.log, if possible.
Since you could not produce adequate sample logs to test against even when requested, I whipped up some test material myself:
$ cat callref.log
a
b
$ cat sample.log
a 1
b 2
c 1
Using awk:
$ awk 'NR==FNR {  # hash the callrefs
    a[$1]
    next
}
{                 # check each sample record against the callrefs, print on match
    for(l in a)
        if($0 ~ l && $0 ~ 1) # 1 is the static string you look for along a callref
            print l
}' callref.log sample.log
a 1
HTH
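A hedged alternative along the same single-pass lines, using grep's fixed-string mode: filter the cause-31 lines first, then match every callref at once with -f; adding -F treats the patterns as literal strings, which is typically far cheaper than regex matching for tens of thousands of patterns:
# one scan of sample.log; -o prints just the matched callref
grep 'got hangup request, cause 31' sample.log \
  | grep -oFf callref.log >> cause_temp.log
This keeps a single scan of the 20 GB file and avoids the per-pattern regex cost; it is a sketch worth benchmarking against the awk version above.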
I spent a day building a test framework and testing variations of different commands and I think you already have the fastest one.
Which leads me to think that if you want better performance, you should look into a log-digesting framework like OSSEC (where your log samples came from) or perhaps Splunk. Those may be too clumsy for your needs. Alternatively, you could design and build something in Java/C/Perl/awk better suited to this kind of parsing.
Running your existing script more frequently will also help.
Good luck! If you like I can box up the work I did and post it here, but I think it's overkill.
As requested:
CalFuncs.sh: a library I source in most of my scripts
#!/bin/bash
LOGDIR="/tmp"
LOG=$LOGDIR/CalFunc.log
[ ! -d "$LOGDIR" ] && mkdir -p $(dirname $LOG)
SSH_OPTIONS="-o StrictHostKeyChecking=no -q -o ConnectTimeout=15"
SSH="ssh $SSH_OPTIONS -T"
SCP="scp $SSH_OPTIONS"
SI=$(basename $0)
Log() {
    echo "`date` [$SI] $@" >> $LOG
}
Run() {
    Log "Running '$@' in '`pwd`'"
    $@ 2>&1 | tee -a $LOG
}
RunHide() {
    Log "Running '$@' in '`pwd`'"
    $@ >> $LOG 2>&1
}
PrintAndLog() {
    Log "$@"
    echo "$@"
}
ErrorAndLog() {
    Log "[ERROR] $@"
    echo "$@" >&2
}
showMilliseconds(){
    date +%s
}
runMethodForDuration(){
    local startT=$(showMilliseconds)
    $1
    local endT=$(showMilliseconds)
    local totalT=$((endT-startT))
    PrintAndLog "that took $totalT seconds to run $1"
    echo $totalT
}
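For context, a hypothetical usage sketch of the helpers above:
source ./CalFuncs.sh
Run df -h                     # run a command; echo and append its output to $LOG
RunHide ls /tmp               # run a command; log its output without echoing it
PrintAndLog "disk check done" # write a message to stdout and to $LOG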
genCallRefLog.sh - generates fictitious callref.log size depending on argument
#!/bin/bash
#Script to make 80000 sequential lines of callref.log this should suffice for a POC
if [ -z "$1" ] ; then
    echo "genCallRefLog.sh requires an integer of the number of lines to pump out of callref.log"
    exit 1
fi
file="callref.log"
[ -f "$file" ] && rm -f "$file" # del file if exists
i=0 #put start num in here
j="$1" #put end num in here
echo "building $j lines of callref.log"
for (( a=i ; a < j; a++ ))
do
    printf 'C-%08x\n' "$a" >> $file
done
genSampleLog.sh - generates fictitious sample.log size depending on argument
#!/bin/bash
#Script to make 80000 sequential lines of sample.log; this should suffice for a POC
if [ -z "$1" ] ; then
    echo "genSampleLog.sh requires an integer of the number of lines to pump out of sample.log"
    exit 1
fi
file="sample.log"
[ -f "$file" ] && rm -f "$file" # del file if exists
i=0 #put start num in here
j="$1" #put end num in here
echo "building $j lines of sample.log"
for (( a=i ; a < j; a++ ))
do
    printf 'Dec 16 18:02:04 asterisk1 asterisk[31774]: NOTICE[31787]: C-%08x got hangup request, cause 31\n' "$a" >> $file
done
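For example, to build matching 100,000-line fixtures (the size test.sh below uses):
./genCallRefLog.sh 100000   # callref.log: C-00000000 through C-0001869f
./genSampleLog.sh 100000    # sample.log: one matching cause-31 line per callref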
And finally, the actual test script I used. Often I would comment out the generator scripts, as they only need to run when the log size changes. I also typically ran only one test function at a time and recorded the results.
test.sh
#!/bin/bash
source "./CalFuncs.sh"
targetLogFile="cause_temp.log"
Log "Starting"
checkTargetFileSize(){
    expectedS="$1"
    hasS=$(cat $targetLogFile | wc -l)
    if [ "$expectedS" != "$hasS" ] ; then
        ErrorAndLog "Got $hasS but expected $expectedS, when inspecting $targetLogFile"
        exit 244
    fi
}
standard(){
    iter=0
    while read ref
    do
        cat sample.log | grep "$ref" | grep 'got hangup request, cause 31' | grep -o 'C-[0-9a-z][0-9a-z][0-9a-z][0-9a-z][0-9a-z][0-9a-z][0-9a-z][0-9a-z]' >> $targetLogFile
    done < callref.log
}
subStandardVarient(){
    iter=0
    while read ref
    do
        cat sample.log | grep 'got hangup request, cause 31' | grep -o "$ref" >> $targetLogFile
    done < callref.log
}
newFunction(){
    grep -f callref.log sample.log | grep 'got hangup request, cause 31' >> $targetLogFile
}
newFunction4(){
    grep 'got hangup request, cause 31' sample.log | grep -of 'callref.log' >> $targetLogFile
}
newFunction5(){
    # splitting grep
    grep 'got hangup request, cause 31' sample.log > /tmp/somefile
    grep -of 'callref.log' /tmp/somefile >> $targetLogFile
}
newFunction2(){
    iter=0
    while read ref
    do
        ((iter++))
        echo "$ref" | grep 'got hangup request, cause 31' | grep -of 'callref.log' >> $targetLogFile
    done < sample.log
}
newFunction3(){
    iter=0
    pat=""
    while read ref
    do
        if [[ "$pat." != "." ]] ; then
            pat="$pat|"
        fi
        pat="$pat$ref"
    done < callref.log
    # Log "Have pattern $pat"
    while read ref
    do
        ((iter++))
        echo "$ref" | grep 'got hangup request, cause 31' | grep -oP "$pat" >> $targetLogFile
    done < sample.log
    #grep: regular expression is too large
}
[ -f "$targetLogFile" ] && rm -f "$targetLogFile"
numLines="100000"
Log "testing algorithms with $numLines in each log file."
setupCallRef(){
    ./genCallRefLog.sh $numLines
}
setupSampleLog(){
    ./genSampleLog.sh $numLines
}
setupCallRef
setupSampleLog
runMethodForDuration standard > /dev/null
checkTargetFileSize "$numLines"
[ -f "$targetLogFile" ] && rm -f "$targetLogFile"
runMethodForDuration subStandardVarient > /dev/null
checkTargetFileSize "$numLines"
[ -f "$targetLogFile" ] && rm -f "$targetLogFile"
runMethodForDuration newFunction > /dev/null
checkTargetFileSize "$numLines"
# [ -f "$targetLogFile" ] && rm -f "$targetLogFile"
# runMethodForDuration newFunction2 > /dev/null
# checkTargetFileSize "$numLines"
# [ -f "$targetLogFile" ] && rm -f "$targetLogFile"
# runMethodForDuration newFunction3 > /dev/null
# checkTargetFileSize "$numLines"
# [ -f "$targetLogFile" ] && rm -f "$targetLogFile"
# runMethodForDuration newFunction4 > /dev/null
# checkTargetFileSize "$numLines"
[ -f "$targetLogFile" ] && rm -f "$targetLogFile"
runMethodForDuration newFunction5 > /dev/null
checkTargetFileSize "$numLines"
The above shows that the existing method was always faster than anything I came up with. I think someone took care to optimize it.

Grep is not showing results even when I use fgrep and the -f option

I have used the script below to fetch some values, but the grep in the code is not showing any results.
#!/bin/bash
file=test.txt
while IFS= read -r cmd;
do
    check_address=`grep -c $cmd music.cpp`
    if [ $check_address -ge 1 ]; then
        echo
    else
        grep -i -n "$cmd" music.cpp
        echo $cmd found
    fi
done < "$file"
Note: there are no carriage returns in my text file or .sh file.
I checked using
bash -x check.sh
and it just shows
+grep -i -n "$cmd" music.cpp
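No answer was posted here, but a hedged guess: if $cmd carries regex metacharacters (., *, brackets) or stray whitespace, the unquoted grep -c $cmd can silently match nothing or the wrong thing. A sketch that quotes the variable and uses -F (fgrep semantics, literal matching) plus -q for a clean exit status, keeping the original branch structure:
#!/bin/bash
# sketch only: quoting and literal matching added to the script above
file=test.txt
while IFS= read -r cmd
do
    if grep -qF -- "$cmd" music.cpp; then
        echo
    else
        grep -i -n -F -- "$cmd" music.cpp
        echo "$cmd found"
    fi
done < "$file"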

Grep inside bash script not finding item

I have a script which is checking a key in one file against a key in another to see if it exists in both. However, in the script the grep never reports that anything has been found, although on the command line it does.
#!/bin/bash
# First arg is the csv file of repo keys separated by line and in
# this manner 'customername,REPOKEY'
# Second arg is the log file to search through
log_file=$2
csv_file=$1
while read line;
do
    customer=`echo "$line" | cut -d ',' -f 1`
    repo_key=`echo "$line" | cut -d ',' -f 2`
    if [ `grep "$repo_key" $log_file` ]; then
        echo "1"
    else
        echo "0"
    fi
done < $csv_file
The CSV file is formatted as follows:
customername,REPOKEY
and the log file is as follows:
REPOKEY
REPOKEY
REPOKEY
etc
I call the script by doing ./script csvfile.csv logfile.txt
Rather than checking the output of the grep command, use grep -q and check its return status:
if grep -q "$repo_key" "$log_file"; then
    echo "1"
else
    echo "0"
fi
Also your script can be simplified to:
log_file=$2
csv_file=$1
while IFS=, read -r customer repo_key; do
    if grep -q "$repo_key" "$log_file"; then
        echo "1"
    else
        echo "0"
    fi
done < "$csv_file"
Use the exit status of the grep command to print 1 or 0:
repo_key=`echo "$line" | cut -d ',' -f 2`
grep -q "$repo_key" $log_file
if [ $? -eq 0 ]; then
    echo "1"
else
    echo "0"
fi
-q suppresses the output so that nothing is printed
$? is the exit status of the grep command: 0 on a successful match and 1 when no match is found
You can have a much simpler version as
grep -q "$repo_key" $log_file
echo $?
but note that this prints the exit status directly, so the output is inverted: 0 on a match and 1 otherwise.

Bash script help/evaluation

I'm trying to learn some scripting, but I can't find a solution for one piece of functionality.
Basically I would like to ask you to evaluate my script, as it's probably possible to reduce its complexity and number of lines.
The purpose of this script is to download random, encrypted MySQL backups from Amazon S3, restore the dump and run some random MySQL queries.
I'm not sure how to email the output from the printf statements - one is for the headers and the second for the actual data. I've formatted the output to look like the sample below, but I had to exclude the headers from the loop:
Database:     Table:          Entries:
database1     random_table    0
database2     random_table    0
database3     random_table    0
database4     random_table    0
I would like to include this output in the email and also change the email subject based on the success/failure of the script.
I probably use too many if blocks, and the MySQL queries are probably too complicated.
Script:
#!/usr/bin/env bash
# DB Details:
db_user="user"
db_pass="password"
db_host="localhost"
# Date
date_stamp=$(date +%d%m%Y)
# Initial Setup
data_dir="/tmp/backup"
# Checks
if [ ! -e /usr/bin/s3cmd ]; then
    echo "Required package (http://s3tools.org/s3cmd)"
    exit 2
fi
if [ -e /usr/bin/gpg ]; then
    gpg_key=$(gpg -K | tr -d "{<,>}" | awk '/an@example.com/ { print $4 }')
    if [ "$gpg_key" != "an@example.com" ]; then
        echo "No GPG key"
        exit 2
    fi
else
    echo "No GPG package"
    exit 2
fi
if [ -d $data_dir ]; then
    rm -rf $data_dir/* && chmod 700 $data_dir
else
    mkdir $data_dir && chmod 700 $data_dir
fi
# S3 buckets
bucket_1=s3://test/
# Download backup
for backup in $(s3cmd ls s3://test/ | awk '{ print $2 }')
do
    latest=$(s3cmd ls $backup | awk '{ print $2 }' | sed -n '$p')
    random=$(s3cmd ls $latest | shuf | awk '{ print $4 }' | sed -n '1p')
    s3cmd get $random $data_dir >/dev/null 2>&1
done
# Decrypting Files
for file in $(ls -A $data_dir)
do
    filename=$(echo $file | sed 's/\.e//')
    gpg --out $data_dir/$filename --decrypt $data_dir/$file >/dev/null 2>&1 && rm -f $data_dir/$file
    if [ $? -eq 0 ]; then
        # Decompressing Files
        bzip2 -d $data_dir/$filename
        if [ $? -ne 0 ]; then
            echo "Decompression Failed!"
        fi
    else
        echo "Decryption Failed!"
        exit 2
    fi
done
# MySQL Restore
printf "%-40s%-30s%-30s\n\n" Database: Table: Entries:
for dump in $(ls -A $data_dir)
do
    mysql -h $db_host -u $db_user -p$db_pass < $data_dir/$dump
    if [ $? -eq 0 ]; then
        # Random DBs query
        db=$(echo $dump | sed 's/\.sql//')
        random_table=$(mysql -h $db_host -u $db_user -p$db_pass $db -e "SHOW TABLES" | grep -v 'Tables' | shuf | sed -n '1p')
        db_entries=$(mysql -h $db_host -u $db_user -p$db_pass $db -e "SELECT * FROM $random_table" | grep -v 'id' | wc -l)
        printf "%-40s%-30s%-30s\n" $db $random_table $db_entries
        mysql -h $db_host -u $db_user -p$db_pass -e "DROP DATABASE $db"
    else
        echo "The system was unable to restore backups!"
        rm -rf $data_dir
        exit 2
    fi
done
# Remove backups
rm -rf $data_dir
You'll get the best answers if you ask specific questions (rather than "please review my code"), and if you limit each post to a single question. Regarding emailing the output of your printf statements:
You can group statements into a block and then pipe the output of a block into another program. For example:
{
    echo "This is a header"
    echo
    for x in {1..10}; do
        echo "This is row $x"
    done
} | mail -s "Here is my output" lars@example.com
If you want to make the email subject conditional upon the success or failure of something elsewhere in the script, you can (a) save your output to a file, and then (b) email the file after building the subject line:
{
    echo "This is a header"
    echo
    for x in {1..10}; do
        echo "This is row $x"
    done
} > output
if is_success; then
    subject="SUCCESS: Here is your output"
else
    subject="FAILURE: Here are your errors"
fi
mail -s "$subject" lars@example.com < output
