tshark to split pcap file based on MAC address - bash

I have around 7 PCAP files that I would like to split based on MAC address, placing the filtered packets into separate files inside a new directory named after each PCAP file. My current approach (see below) is obviously not the best and still needs the looping part.
#!/bin/bash
#Name of PCAP file to analyse
pcap_file="tcpdump_2019-06-21_213044.pcap"
#MAC address to filter
mac="00:17:88:48:89:21"
mkdir /2019-06-21/
for steam in $pcap_file;
do
/usr/bin/tshark -r $pcap_file -Y "eth.addr eq 00:17:88:48:89:21" -w $mac.pcap
done
#!/bin/bash
pcap_file=(tcpdump_2019-07-01_000001.pcap tcpdump_2019-06-26_120301.pcap)
macs=( 00:17:88:71:78:72 )
devices=(phillips_1 phillips_2)
for pcf in ${pcap_file[*]}
do
    echo "$pcf" >&2
    /usr/bin/tshark -r "$pcf" -Y "eth.addr eq $macs" -w "$devices.pcap"
done

Something like:
#!/bin/bash
# usage: "$0" pcap_file1 pcap_file2 ...
#macs=( 00:17:88:48:89:21 00:17:88:48:89:22 00:17:88:48:89:23 )
ips=( 192.168.202.68 192.168.202.79 192.168.229.153 192.168.23.253 )
for pcf in "$@"
do
    pcf="`realpath -e "$pcf"`" # absolute path, so the file can still be found from the new directory
    dir="`basename "$pcf" | sed -r 's/(tcpdump_)(.*)(_[0-9]{6}\.pcap)/\2/'`"
    mkdir "$dir"
    cd "$dir" || exit # into the newly created child dir
    #for mac in "${macs[@]}"
    for ip in "${ips[@]}"
    do
        #echo "$mac" >&2
        echo "$ip" >&2
        #/usr/bin/tshark -r "$pcf" -Y "eth.addr eq $mac" -w "$mac.pcap"
        /usr/bin/tshark -r "$pcf" -Y "ip.addr == $ip" -w "$ip.pcap"
    done
    cd .. # back to the parent dir
done
Where in your case you would use the commented-out lines.
I used IPs to test because I couldn't find an appropriate file to test MACs on. I tested with the file maccdc2012_00000.pcap.gz found here: https://www.netresec.com/?page=MACCDC (note: my example takes a long time to finish on that large file).
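A minimal usage sketch (assuming the script above is saved as split_pcaps.sh, a name of my choosing, and made executable; the filenames are the ones from the question):
chmod +x split_pcaps.sh
./split_pcaps.sh tcpdump_2019-06-21_213044.pcap tcpdump_2019-07-01_000001.pcap
Each run creates one directory per capture, named after the date part of the filename (e.g. 2019-06-21), containing one filtered .pcap per address.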

Related

Send files to folders using bash script

I want to copy the functionality of a Windows program called files2folder, which basically lets you right-click a bunch of files and send them to their own individual folders.
So
1.mkv 2.png 3.doc
gets put into directories called
1 2 3
I have got it to work using this script, but it sometimes throws errors while still accomplishing what I want:
#!/bin/bash
ls > list.txt
sed -i '/list.txt/d' ./list.txt
sed 's/.$//;s/.$//;s/.$//;s/.$//' ./list.txt > list2.txt
for i in $(cat list2.txt); do
    mkdir $i
    mv $i.* ./$i
done
rm *.txt
Is there a better way of doing this? Thanks
EDIT: My script failed with real-world filenames, as they contained more than one ".", so I had to use a different sed command which makes it work. This is an example filename I'm working with:
Captain.America.The.First.Avenger.2011.INTERNAL.2160p.UHD.BluRay.X265-IAMABLE
I guess you are getting errors on . and .., so change your ls call to:
ls -A > list.txt
-A      List all entries except for . and ..  Always set for the super-user.
You don't have to create a file to achieve the same result; just assign the output of your ls command to a variable, like this:
files=`ls -A`
for file in $files; do
    echo $file
done
You can also check if the resource is a file or directory like this:
files=`ls -A`
for res in $files; do
    if [[ -d $res ]]; then
        echo "$res is a folder"
    fi
done
This script will do what you ask for:
files2folder:
#!/usr/bin/env sh
for file; do
    dir="${file%.*}"
    { ! [ -f "$file" ] || [ "$file" = "$dir" ]; } && continue
    echo mkdir -p -- "$dir"
    echo mv -n -- "$file" "$dir/"
done
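A short note on the design (my reading, not the answerer's): "${file%.*}" strips the shortest trailing .suffix from the name, and the [ "$file" = "$dir" ] guard skips names without any dot, since the expansion then leaves them unchanged.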
Example directory/files structure:
ls -1 dir/*.jar
dir/paper-279.jar
dir/paper.jar
Running the script above:
chmod +x ./files2folder
./files2folder dir/*.jar
Output:
mkdir -p -- dir/paper-279
mv -n -- dir/paper-279.jar dir/paper-279/
mkdir -p -- dir/paper
mv -n -- dir/paper.jar dir/paper/
To make it actually create the directories and move the files, remove all the echo commands.
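After removing the echoes, the same invocation should leave a layout like this (derived from the commands shown above):
ls -1 dir
paper
paper-279
ls -1 dir/paper-279
paper-279.jar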

Bash shell script not working

I have a script I am trying to work out to scan my LAN and send me a notification if there is a new MAC address that does not appear in my master list. I believe my variables may be messed up. This is what I have:
#!/bin/bash
LIST=$HOME/maclist.log
MASTERFILE=$HOME/master
FILEDIFF="$(diff $LIST $MASTERFILE)"
# backup the maclist first
if [ -f $LIST ]; then
    cp $LIST maclist_`date +%Y%m%H%M`.log.bk
else
    touch $LIST
fi
# this will scan the network and extract the IP and MAC address
nmap -n -sP 192.168.122.0/24 | awk '/^Nmap scan/{IP=$5};/^MAC/{print IP,$3};{next}' > $LIST
# this will use a diff command to compare the maclist created above and master list of known good devices on the LAN
if [ $FILEDIFF ] 2> /dev/null; then
    echo
    echo "---- All is well on `date` ----" >> macscan.log
    echo
else
    # echo -e "\nWARNING!!" | `mutt -e 'my_hdr From:user#email.com' -s "WARNIG!! NEW DEVICE ON THE LAN" -i maclist.log user#email.com`
    echo "emailing you"
fi
When I execute this while maclist.log does not exist, I get this response:
diff: /root/maclist.log: No such file or directory
If I execute it again with maclist.log existing, the file gets copied to the backup name by the cp line without any issue.
The line
FILEDIFF="$(diff $LIST $MASTERFILE)"
executes the diff when it is run (not when you use $FILEDIFF later). At that time the list file hasn't been created yet.
The easiest fix is just to put the diff command in full where $FILEDIFF is currently used.
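A minimal sketch of that fix (assuming the rest of the script stays as it is; testing diff's exit status, which is zero when the files match, also avoids the fragile [ $FILEDIFF ] test):
# ... after nmap has written $LIST ...
if diff "$LIST" "$MASTERFILE" > /dev/null 2>&1; then
    echo "---- All is well on `date` ----" >> macscan.log
else
    echo "emailing you"
fi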

Monitor and ftp newly-added files on Linux -- modify existing code

The OS is CentOS 7. I have a small application that implements the functionality below:
1. Read information from config.ini like this:
# Configuration file for the ftpxml service
# Remote FTP server information
ftpadress=1.2.3.4
username=test
password=test
# Local folders configuration
# folderA: folder for incoming files
folderA=/u02/dump
# folderB: successfully transferred files are copied here
folderB=/u02/dump_bak
# retrydir: when an ftp upload fails, failed files are stored in this
# directory
retrydir=/u02/dump_retry
2. Monitor folder A. If there are any newly-added files in A, do step 3.
3. FTP these new files to the remote FTP server in the order of their creation time. When an upload finishes, copy the uploaded files to folder B and delete the corresponding files in folder A.
4. If the FTP transfer fails, store the affected files in retrydir and try to transfer them later.
5. Record every operation in a log file.
Detailed setup instructions for the application:
Install the ncftp package (yum install ncftp -y); it is neither a service nor a daemon, just a client tool that the bash script invokes for the FTP transfers.
Customize these files to suit your setup using vi: config.ini, ftpmon.path and ftpmon.service.
Copy ftpmon.path and ftpmon.service to /etc/systemd/system/, copy config.ini and ftpxml.sh to /u02/ftpxml/, then run: chmod +x ftpxml.sh
Start the monitoring tool:
sudo systemctl start ftpmon.path
If you want to enable it at boot time, just enter: sudo systemctl enable ftpmon.path
Set up a cron task to purge queued files (add option -p):
*/5 * * * * /u02/ftpxml/ftpxml.sh -p
Now the application seems to work well, except in one special situation:
When we put several files into folder A in quick succession, for instance 1.txt, 2.txt and 3.txt one after another within a short time, we usually find that 1.txt transfers fine, but the following files fail to transfer and remain in folder A.
Now I am trying to fix this problem. I suppose the error may occur because, while the first file is being transferred, the second file has already been created in folder A, so the script never notices it.
Below is the code of ftpxml.sh:
#!/bin/bash
# ! Please read the README.txt file !
# Copy files from upload dir to remote FTP then save them
# to folderB
# look our location
SCRIPT=$(readlink -f $0)
# Absolute path to this script
SCRIPTPATH=`dirname $SCRIPT`
PIDFILE=${SCRIPTPATH}/ftpmon_prog.lock
# load config.ini
if [ -f $SCRIPTPATH/config.ini ]; then
source $SCRIPTPATH/config.ini
else
echo "No config found. Exiting"
fi
# Lock to avoid multiple instances
if [ -f $PIDFILE ]; then
kill -0 $(cat $PIDFILE) 2> /dev/null
if [ $? == 0 ]; then
exit
fi
fi
# Store PID in lock file
echo $$ > $PIDFILE
# Parse cmdline arguments
while getopts ":ph" opt; do
case $opt in
p)
#we set the purge mode (cron mode)
purge_only=1
;;
\?|h)
echo "Help text"
exit 1
;;
esac
done
# declare useful functions
# common logging function
function logmsg() {
LOGFILE=ftp_upload_`date +%Y%m%d`.log
echo $(date +%m-%d-%Y\ %H:%M:%S) $* >> $SCRIPTPATH/log/${LOGFILE}
}
# Upload to remote FTP
# we use ncftpput to batch silently
# $1: file to upload
function upload() {
ncftpput -V -u $username -p $password $ftpadress /prog/ $1
return $?
}
function purge_retry() {
failed_files=$(ls -1 -rt ${retrydir}/*)
if [ -z "$failed_files" ]; then
return
fi
while read line
do
#upload ${retrydir}/$line
upload $line
if [ $? != 0 ]; then
# upload failed we exit
exit
fi
logmsg File $line Uploaded from ${retrydir}
mv $line $folderB
logmsg File $line Copied from ${retrydir}
done <<< "$failed_files"
}
# first check out 'queue' directory
purge_retry
# if called from a cron task we are done
if [ $purge_only ]; then
rm -f $PIDFILE
exit 0
fi
# look in incoming dir
new_files=$(ls -1 -rt ${folderA}/*)
while read line
do
# launch upload
if [ -z "$line" ]; then
break
fi
#upload ${folderA}/$line
upload $line
if [ $? == 0 ]; then
logmsg File $line Uploaded from ${folderA}
else
# upload failed we cp to retry folder
echo upload failed
cp $line $retrydir
fi
# whether the upload succeeded or failed, we ALWAYS move the file to folderB
mv $line $folderB
logmsg File $line Copied from ${folderA}
done <<< "$new_files"
# clean exit
rm -f $PIDFILE
exit 0
Below is the content of ftpmon.path:
[Unit]
Description= Triggers the service that logs changes.
Documentation= man:systemd.path
[Path]
# Enter the path to monitor (/u02/dump)
PathModified=/u02/dump/
[Install]
WantedBy=multi-user.target
Below is the content of ftpmon.service:
[Unit]
Description= Starts File Upload monitoring
Documentation= man:systemd.service
[Service]
Type=oneshot
#Set here the user that ftpmxml.sh will run as
User=root
#Set the exact path to the script
ExecStart=/u02/ftpxml/ftpxml.sh
[Install]
WantedBy=default.target
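For reference: a systemd .path unit activates the unit with the same name by default, so whenever /u02/dump is modified, ftpmon.path triggers ftpmon.service, which runs ftpxml.sh once (Type=oneshot).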
Thanks in advance; I hope some expert can give me suggestions.
As you remove the successfully transferred files from A, you can leave the files with transfer errors in A, so I am dealing only with files in one folder.
List your files by creation time with
find -maxdepth 1 -type f -print0 | xargs -r0 stat -c %y\ %n | sort
if you want hidden files to be included or, if not,
find -maxdepth 1 -type f ! -name '.*' -print0 | xargs -r0 stat -c %y\ %n | sort
You'll get something like
2016-02-19 18:53:41.000000000 ./.dockerenv
2016-02-19 18:53:41.000000000 ./.dockerinit
2016-02-19 18:56:09.000000000 ./versions.txt
2016-02-19 19:01:44.000000000 ./test.sh
Now cut off the timestamps to get the filenames (or use xargs -r0 stat -c %n if it does not matter that the files are ordered by name instead of by timestamp) and, for each file (see the sketch after this list):
do the transfer
check the success
move successfully transferred files to B
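A minimal sketch of that loop (assuming the folderA/folderB variables from config.ini and the upload function from the question's script; cut -d' ' -f3- strips the two timestamp fields that were only needed for sorting):
find "$folderA" -maxdepth 1 -type f -print0 | xargs -r0 stat -c '%y %n' | sort | cut -d' ' -f3- |
while read -r file; do
    if upload "$file"; then   # do the transfer and check the success
        mv "$file" "$folderB" # move successfully transferred files to B
    fi
done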
As you stated above, there are situations where newly stored files are not successfully transferred. This can happen when a file is still being written to after you start the transfer. So require the timestamp to be at least some time old; add -mmin +1 to the find statement for "at least one minute old":
find -maxdepth 1 -type f -mmin +1 -print0 | xargs -r0 stat -c %n | sort
If you don't want to rely on a minute of file age, you'll have to check whether the file is still open (lsof | grep ./testfile), but this may have issues if you have tmpfs in your file system:
lsof | grep ./testfile
lsof: WARNING: can't stat() tmpfs file system /var/lib/docker/containers/8596cd310292a54652c7f50d7315c8390703b4816442146b340946779a72a40c/shm
Output information may be incomplete.
lsof: WARNING: can't stat() proc file system /run/docker/netns/fb9323486c44
Output information may be incomplete.
So add %s to the stat format and check the file size twice within a few seconds: if it is constant, the file may be completely written. Only "may", because the writing process could merely be stalled.
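A rough sketch of such a size-stability check (my own illustration, not from the question; the 5-second window is arbitrary):
size1=$(stat -c %s "$file")
sleep 5
size2=$(stat -c %s "$file")
if [ "$size1" = "$size2" ]; then
    echo "$file looks complete"
fi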

How to change parameter in a file, only if the file exists and the parameter is not already set?

#!/bin/bash
# See if registry is set to expire updates
filename=hostnames
> test.log
PARAMETER=Updates
FILE=/etc/.properties
CODE=sudo if [ ! -f $FILE] && grep $PARAMETER $FILE; then echo "File found, parameter not found."
#CODE=grep $PARAMETER $FILE || sudo tee -a /etc/.properties <<< $PARAMETER
while read -r -a line
do
    hostname=${line//\"}
    echo $hostname":" >> test.log
    #ssh -n -t -t $hostname "$CODE" >> test.log
    echo $CODE;
done < "$filename"
exit
I want to set "Updates 30" in /etc/.properties on about 50 servers if:
The file exists (not all servers have the software installed)
The parameter "Updates" is not already set in the file (e.g. in case of multiple runs)
I am a little puzzled as to how, because I am not sure whether this can be done in one line of bash code. The rest of the script works fine.
OK, here's what I think would be a solution for you, as explained in this article: http://www.unix.com/shell-programming-scripting/181221-bash-script-execute-command-remote-servers-using-ssh.html
Invoke the script which contains the commands that you want to be executed on the remote server.
Code script 1:
while read -r -a line
do
    ssh ${line} "bash -s" < script2
done < "$filename"
To replace a line in a text file, you can use sed (http://www.cyberciti.biz/faq/unix-linux-replace-string-words-in-many-files/)
Code script 2:
PARAMETER=Updates
FILE=/etc/.properties
NEWPARAMETER="Updates 30" ### (what you want to write there)
if [ ! -f "$FILE" ] || grep -q "$PARAMETER" "$FILE"; then exit; fi
sed -i "s/$PARAMETER/$NEWPARAMETER/g" "$FILE"
So, I'm not certain this covers your whole use case; I hope this helps you out. If there is anything, feel free to ask!
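One caveat with the sed approach (my note, reusing the same variable names): sed can only rewrite a line that already exists, while the goal here is to add the parameter when it is missing. A sketch closer to the stated requirements might be:
PARAMETER=Updates
FILE=/etc/.properties
NEWPARAMETER="Updates 30"
# bail out if the file does not exist or the parameter is already set
if [ ! -f "$FILE" ] || grep -q "^$PARAMETER" "$FILE"; then exit 0; fi
echo "$NEWPARAMETER" >> "$FILE"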

grep spacing error

Hi guys, I have a problem with grep. I don't know if there is another way to search in a shell script.
I'm trying to back up a folder AhmetsFiles, which is stored on my flash disk, and at the same time group the files by extension and save them into an [extensionName] folder.
An example : /media/FlashDisk/AhmetsFiles/lecture.pdf must be stored in /home/$(whoami)/Desktop/backups/pdf
The problem is I can't copy a file whose name contains spaces (lecture 2.pptx).
After this introduction, here is my code:
filename="/media/FlashDisk/extensions"
count=0
exec 3<&0
exec 0< $filename
mkdir "/home/$(whoami)/Desktop/backups"
while read extension
do
    cd "/home/$(whoami)/Desktop/backups"
    rm -rf "$extension"
    mkdir "$extension"
    cd "/media/FlashDisk/AhmetsFiles"
    files=( `ls | grep -i "$extension"` )
    fCount=( `ls | grep -c -i "$extension"` )
    for (( i=0 ; $i<$fCount ; i++ ))
    do
        cp -f "/media/FlashDisk/AhmetsFiles/${files[$i]}" "/home/$(whoami)/Desktop/backups/$extension"
    done
    let count++
done
exec 0<&3
exit 0
Your looping is way more complicated than it needs to be; there is no need for ls or grep or the files and fCount variables:
for file in *.$extension
do
    cp -f "/media/FlashDisk/AhmetsFiles/$file" "$HOME/Desktop/backups/$extension"
done
This works correctly with spaces.
I'm assuming that you actually wanted to interpret $extension as a file extension, not some random string in the middle of the filename like your original code does.
Why don't you
grep -i "$extension" | while IFS=: read x ; do
    cp ..
done
instead?
Also, I believe you may prefer something like grep -i "\.$extension$" instead (anchor it to the end of the line, with the dot escaped so it matches literally).
On the other hand, the simplest way is probably
cp -f /media/FlashDisk/AhmetsFiles/*.$extension "$HOME/Desktop/backups/$extension/"
