Capture SSID in Wireshark or tshark - bash

I'm new to tshark. The code below is part of a script that captures the SSID (the name of an access point) and takes the average of its RSSI (signal strength).
I need someone to explain what this line does, exactly:
tshark -i prism0 -Y wlan.fc.type_subtype==0x08 -T fields -e wlan.sa -e prism.did.rssi -e wlan_mgt.ssid -a duration:1 > air1.txt
The main code is:
#Main shell script for generating appropriate data for histograms
#$1 (first command line parameter): run length
#$2 (second command line parameter): Target SSID
#Usage: should be run as root, example:
# ./histscan.sh 10 Computer
let i=0;
while [ "$i" -lt "$1" ]; do
let i="$i+1";
tshark -i prism0 -Y wlan.fc.type_subtype==0x08 -T fields -e wlan.sa -e prism.did.rssi -e wlan_mgt.ssid -a duration:1 > air1.txt
echo "SSID AVG-RSSI" > mean.old
RSSI=$(cat air1.txt | grep $2 | cut -f2 | ./rssiMean.sh | cut -d'.' -f1)
if [ -z "$RSSI" ]; then let RSSI=-100; fi
let RSSI="$RSSI+100";
echo $2 " " $RSSI >> mean.old
cp mean.old mean.rssi
done

The tshark command you included here will set up and start a packet capture. Here are the parameters the line specifies.
-i prism0 => indicates that the capture should be done on the interface named prism0
-Y wlan.fc.type_subtype==0x08 => specifies a display filter. So, this capture will only print out 802.11 frames that are access point beacons.
-T fields => The output will contain just the data fields you specify with all of the -e options
-e => The specific fields to print out. In your case, it will be the WLAN source address [basically the MAC of the access point that sent the beacon], the RSSI, and SSID fields.
-a duration:1 => The capture will run for 1 second
> air1.txt => Output of the tshark command will be redirected to this file.
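For reference, here is a rough sketch of the same capture-and-average step adapted for a newer tshark; treat the field names as assumptions, since recent Wireshark releases renamed wlan_mgt.ssid to wlan.ssid, and prism.did.rssi only exists for the old Prism capture header (radiotap captures expose radiotap.dbm_antsignal instead). awk stands in for the external rssiMean.sh:
tshark -i prism0 -Y "wlan.fc.type_subtype==0x08" -T fields \
  -e wlan.sa -e radiotap.dbm_antsignal -e wlan.ssid -a duration:1 > air1.txt
# average the RSSI column (field 2) for the target SSID
grep "$2" air1.txt | cut -f2 | awk '{ sum += $1; n++ } END { if (n) printf "%d\n", sum / n }'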

Related

Why is my code clobbering the current line?

I'm trying to build a CSV of the countries that a list of IPv4 addresses come from.
I keep clobbering the IP in the output file:
#!/bin/bash
cat ipv4list.txt | while read ip ;do
echo -n "$ip", >> outputfile
whois -r "$ip">temp.txt
cat temp.txt | grep -i country >> outputfile
done
cat ipv4list.txt
1.1.1.1
2.2.2.2
What I'd like is for outputfile to read:
1.1.1.1,country: AU
2.2.2.2,country: US
but I'm getting outputfile as follows:
,country: AU
,country: US
The echo statement needs to refer to the ip variable:
echo -n "$ip", >> outputfile
Also, the clobbering indicates the input file may use Windows newlines (\r\n). Check the output file with an editor, or with hexdump.
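If that is the case, a minimal sketch of the same loop with the carriage returns stripped first (same file names as in the question):
tr -d '\r' < ipv4list.txt > ipv4list.unix   # drop Windows CR characters
while read -r ip; do
    printf '%s,' "$ip" >> outputfile
    whois -r "$ip" | grep -i country >> outputfile
done < ipv4list.unix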
Going a bit outside the box, here. Please be kind, guys.
pop.ed:-
1d
wq
whois.sh:-
#!/bin/bash -x
init() {
cp ipv4list.txt ipv4stack.txt
}
next() {
[[ -s ipv4stack ]] && main
}
main() {
ip=$(echo "1p" | ed -s ipv4stack.txt)
wic=$(whois -r "${ip}")
echo "${ip},${wic}" >> outputfile
ed -s ipv4stack.txt < pop.ed
next
}
init
next
Ed apparently isn't installed by default in most distributions these days, sadly, so you may need to install it if you want to use this.

Bash, loop unexpected stop

I'm having problems with this last part of my bash script. It receives input from 500 web addresses and is supposed to fetch the server information from each. It works for a bit but then just stops at around the 45th element. Any thoughts on my loop at the end?
#initializing variables
timeout=5
headerFile="lab06.output"
dataFile="fortune500.tsv"
dataURL="http://www.tech.mtu.edu/~toarney/sat3310/lab09/"
dataPath="/home/pjvaglic/Documents/labs/lab06/data/"
curlOptions="--fail --connect-timeout $timeout"
#creating the array
declare -a myWebsitearray
#obtaining the data file
wget $dataURL$dataFile -O $dataPath$dataFile
#getting rid of the crap from dos
sed -n "s/^m//" $dataPath$dataFile
readarray -t myWebsitesarray < <(cut -f3 -d$'\t' $dataPath$dataFile)
myWebsitesarray=("${myWebsitesarray[@]:1}")
websitesCount=${#myWebsitesarray[*]}
echo "There are $websitesCount websites in $dataPath$dataFile"
#echo -e ${myWebsitesarray[200]}
#printing each line in the array
for line in ${myWebsitesarray[*]}
do
echo "$line"
done
#run each website URL and gather header information
for line in "${myWebsitearray[@]}"
do
((count++))
echo -e "\\rPlease wait... $count of $websitesCount"
curl --head "$curlOptions" "$line" | awk '/Server: / {print $2 }' >> $dataPath$headerFile
done
#display results
echo "Results: "
sort $dataPath$headerFile | uniq -c | sort -n
It would certainly help if you actually passed the --connect-timeout option to curl. As written, you are currently passing the single argument --fail --connect-timeout $timeout rather than 3 distinct arguments --fail, --connect-timeout, and $timeout. This is one instance where you should not quote the variable. IOW, use:
curl --head $curlOptions "$line"
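If you would rather keep the options in a variable, an array is the safer way to do it, since each element then stays a separate argument even when quoted. A sketch using the same variable names as the question:
curlOptions=( --fail --connect-timeout "$timeout" )
curl --head "${curlOptions[@]}" "$line" | awk '/Server: / { print $2 }' >> "$dataPath$headerFile"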

bash scripting to add users

I created a bash script to read information such as username, group, etc. from a text file and create users in Linux based on it. The code seems to function properly and creates the users as desired, but the user information in the last line of the text file always gets misinterpreted. Even if I delete that line, the new last line gets misinterpreted, i.e., the text is read wrongly.
#!/bin/bash
userfile="users.txt"
IFS=$'\n'
if [ ! -f "$userfile" ]
then
echo "File does not exist. Specify a valid file and try again. "
exit
fi
groups=(`cut -f 4 "$userfile" | sed 's/ //'`)
fullnames=(`cut -f 1 "$userfile" | sed 's/,//' | sed 's/"//g'`)
username1=(`cut -f 1 "$userfile" |sed 's/,//' | sed 's/"//' | tr [A-Z] [a-z] | awk '{print substr($2,1,1) substr($3,1,1) substr($1,1,1)}'`)
username2=(`cut -f 4 "$userfile" | tr [A-Z] [a-z] | awk '{print substr($1,1,1)}'`)
i=0
n=${#username1[@]}
for (( q=0; q<n; q++ ))
do
usernames[$q]=${username1[$q]}"${username2[$q]}"
done
declare -a usernames
x=0
created=0
for user in ${usernames[*]}
do
adduser -c ${fullnames[$x]} -p 123456789 -f 15 -m -d /home/${groups[$x]}/$user -K LOGIN_RETRIES=3 -K PASS_MAX_DAYS=30 -K PASS_WARN_AGE=3 -N -s /bin/bash $user 2> /dev/null
usermod -g ${groups[$x]} $user
chage -d 0 $user
let created=$created+1
x=$x+1
echo -e "User $user created "
done
echo "$created Users created"
#!/bin/bash
userfile="./users.txt"; # <-- Config
while read line; do
# FULL NAME
# Capture all between quotes as full name
fullname=$(printf '%s' "${line}" | sed 's/^"\(.*\)".*/\1/')
# Remove spaces and punctuations???:
fullname=$(printf '%s' "${fullname}" | tr -d '[:punct:][:blank:]')
# Right-side names:
partb=$(printf '%s' "${line}" | sed "s/^\".*\"//g")
# CODE 1, capture second row
code1=$(printf '%s' "${partb}" | cut -f 2 )
# CODE 2, capture third row
code2=$(printf '%s' "${partb}" | cut -f 3 )
# GROUP, capture fourth row
group=$(printf '%s' "${partb}" | cut -f 4 )
# Print only for report
echo "fullname: ${fullname}\n code 1: ${code1}\n code 2: ${code2}\n group: ${group}\n"
done <${userfile}
Maybe these are the fields that you want; you now have them in variables ($fullname, $code1, $code2 and $group) to manipulate.
The failure you observed may also have been due to a misplaced quotation mark in the text file or to the line breaks; in the attached screenshot I can see one missing quote.
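If the file really is tab-separated, as the cut -f calls above suggest, a minimal sketch that splits the fields directly in read (the variable names mirror the ones above; everything else is an assumption about the file layout):
while IFS=$'\t' read -r fullname code1 code2 group _; do
    fullname=${fullname//\"/}    # drop the surrounding quotes
    group=${group%$'\r'}         # drop a trailing Windows carriage return, if any
    printf 'fullname: %s\n code 1: %s\n code 2: %s\n group: %s\n\n' \
        "$fullname" "$code1" "$code2" "$group"
done < "$userfile"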

bash script pulling variables from .txt, keeps giving syntax error while trying to use mount command

I've been trying to get this to work for the last week and cannot figure out why it is not working. I get mixed results typing directly into the terminal, but keep getting syntax error messages when running it from the .sh. I'm using Ubuntu 11.10.
It looks like part of the mount command gets pushed to the next line, not allowing it to complete properly. I have no idea why this is happening or how to prevent it from spilling onto the second line.
I have several lines defined as follows in mounts.txt, which gets read by mount-drives.sh below.
I have called it with sudo, so it shouldn't be a permissions issue.
Thanks for taking a look; let me know if additional info is needed.
mounts.txt
mountname,//server/share$,username,password,
mount-drives.sh --- original, updated below
#!/bin/bash
while read LINE;
do
# split lines up using , to separate variables
name=$(echo $LINE | cut -d ',' -f 1)
path=$(echo $LINE | cut -d ',' -f 2)
user=$(echo $LINE | cut -d ',' -f 3)
pass=$(echo $LINE | cut -d ',' -f 4)
echo $name
echo $path
echo $user
echo $pass
location="/mnt/test/$name/"
if [ ! -d $location ]
then
mkdir $location
fi
otherstuff="-o rw,uid=1000,gid=1000,file_mode=0777,dir_mode=0777,username=$user,password=$pass"
mount -t cifs $otherstuff $path $location
done < "/path/to/mounts.txt";
mount-drives.sh ---updated
#!/bin/bash
while read LINE
do
name=$(echo $LINE | cut -d ',' -f 1)
path=$(echo $LINE | cut -d ',' -f 2)
user=$(echo $LINE | cut -d ',' -f 3)
pass=$(echo $LINE | cut -d ',' -f 4)
empty=$(echo $LINE | cut -d ',' -f 5)
location="/mount/test/$name/"
if [ ! -d $location ]
then
mkdir $location
fi
mounting="mount -t cifs $path $location -o username=$user,password=$pass,rw,uid=1000,gid=1000,file_mode=0777,dir_mode=0777"
$mounting
echo $mounting >> test.txt
done < "/var/www/MediaCenter/mounts.txt"
Stab in the dark (after reading the comments): the "$pass" variable is picking up a carriage return because mounts.txt was created on Windows and has Windows line endings (\r\n). Try changing the echo $pass line to:
echo ---${pass}---
and see if it all shows up correctly.
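If that does turn out to be the problem, a one-line sketch you could drop in right after the read LINE, before the fields are cut:
LINE=${LINE%$'\r'}   # strip a trailing Windows carriage return from the line just read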
There's a lot here that could stand improvement. Consider the following -- far more compact, far more correct -- approach:
while IFS=, read -u 3 -r name path user pass empty _; do
location=/mnt/test/$name/    # same mount point layout as the original script
mkdir -p "$location"
cmd=( mount \
-t cifs \
-o "rw,uid=1000,gid=1000,file_mode=0777,dir_mode=0777,username=$user,password=$pass" \
"$path" "$location" \
)
printf -v cmd_str '%q ' "${cmd[@]}" # generate a string corresponding with the command
echo "$cmd_str" >>test.txt # append that string to our output file
"${cmd[#]}" # run the command in the array
done 3<mounts.txt
Unlike the original, this will work correctly even if your path or location values contain whitespace.
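For the sample mounts.txt line shown in the question, the array then expands to roughly this command (note that the option string stays a single argument):
mount -t cifs -o 'rw,uid=1000,gid=1000,file_mode=0777,dir_mode=0777,username=username,password=password' '//server/share$' /mnt/test/mountname/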

Shell Script to download youtube files from playlist

I'm trying to write a bash script that will download all of the YouTube videos from a playlist and save each one to a filename based on the title of the video itself. So far I have two separate pieces of code that do what I want, but I don't know how to combine them to function as a unit.
This piece of code finds the titles of all of the youtube videos on a given page:
curl -s "$1" | grep '<span class="title video-title "' | cut -d\> -f2 | cut -d\< -f1
And this piece of code downloads the files to a filename given by the youtube video id (e.g. the filename given by youtube.com/watch?v=CsBVaJelurE&feature=relmfu would be CsBVaJelurE.flv)
curl -s "$1" | grep "watch?" | cut -d\" -f4| while read video;
do youtube-dl "http://www.youtube.com$video";
done
I want a script that will output the youtube .flv file to a filename given by the title of the video (in this case BASH lesson 2.flv) rather than simply the video id name. Thanks in advance for all the help.
OK so after further research and updating my version of youtube-dl, it turns out that this functionality is now built directly into the program, negating the need for a shell script to solve the playlist download issue on youtube. The full documentation can be found here: (http://rg3.github.com/youtube-dl/documentation.html) but the simple solution to my original question is as follows:
1) youtube-dl will process a playlist link automatically; there is no need to individually feed it the URLs of the videos contained therein (this negates the need to use grep to search for "watch?" to find the unique video id).
2) there is now an option included to format the filename with a variety of options including:
id: The sequence will be replaced by the video identifier.
url: The sequence will be replaced by the video URL.
uploader: The sequence will be replaced by the nickname of the person who uploaded the video.
upload_date: The sequence will be replaced by the upload date in YYYYMMDD format.
title: The sequence will be replaced by the literal video title.
ext: The sequence will be replaced by the appropriate extension (like flv or mp4).
epoch: The sequence will be replaced by the Unix epoch when creating the file.
autonumber: The sequence will be replaced by a five-digit number that will be increased with each download, starting at zero.
the syntax for this output option is as follows (where NAME is any of the options shown above):
youtube-dl -o '%(NAME)s' http://www.youtube.com/your_video_or_playlist_url
As an example, to answer my original question, the syntax is as follows:
youtube-dl -o '%(title)s.%(ext)s' http://www.youtube.com/playlist?list=PL2284887FAE36E6D8&feature=plcp
Thanks again to those who responded to my question, your help is greatly appreciated.
If you want to use the title from the YouTube page as a filename, you could use the -t option of youtube-dl. If you want to use the title from your "video list" page and you are sure that there is exactly one watch? URL for every <span class="title video-title "> title, then you can use something like this:
#!/bin/bash
TMPFILE=/tmp/downloader-$$
onexit() {
rm -f $TMPFILE
}
trap onexit EXIT
curl -s "$1" -o $TMPFILE
i=0
# read via process substitution so the titles array survives (a pipeline would fill it in a subshell)
while read -r title; do
titles[$i]=$title
((i++))
done < <(grep '<span class="title video-title "' $TMPFILE | cut -d\> -f2 | cut -d\< -f1)
i=0
grep "watch?" $TMPFILE | cut -d\" -f4 | while read url; do
urls[$i]="http://www.youtube.com$url"
((i++))
done
i=0; while (( i < ${#urls[@]} )); do
youtube-dl -o "${titles[$i]}.%(ext)s" "${urls[$i]}"
((i++))
done
I have not tested it because I have no "video list" page example.
The following method works; it will download (or play) a video such as Titanic from YouTube.
youtube-downloader.sh
#!/bin/bash
decode() {
to_decode='s:%([0-9A-Fa-f][0-9A-Fa-f]):\\x\1:g'
printf "%b" `echo $1 | sed 's:&:\n:g' | grep "^$2" | cut -f2 -d'=' | sed -r $to_decode`
}
data=`wget http://www.youtube.com/get_video_info?video_id=$1\&hl=pt_BR -q -O-`
url_encoded_fmt_stream_map=`decode $data 'url_encoded_fmt_stream_map' | cut -f1 -d','`
signature=`decode $url_encoded_fmt_stream_map 'sig'`
url=`decode $url_encoded_fmt_stream_map 'url'`
test $2 && name=$2 || name=`decode $data 'title' | sed 's:+: :g;s:/:-:g'`
test "$name" = "-" && name=/dev/stdout || name="$name.vid"
wget "${url}&signature=${signature}" -O "$name"
youtube-video-url.sh
#!/usr/bin/env bash
function youtube-video-url {
local field=
local data=
local split="s:&:\n:g"
local decode_str='s:%([0-9A-Fa-f][0-9A-Fa-f]):\\x\1:g'
local yt_url="http://www.youtube.com/get_video_info?video_id=$1"
local grabber=`command -v curl`
local args="-sL"
if [ ! "$grabber" ]; then
grabber=`command -v wget`
args="-qO-"
fi
if [ ! "$grabber" ]; then
echo 'No downloader available.' >&2
test x"${BASH_SOURCE[0]}" = x"$0" && exit 1 || return 1
fi
function decode {
data="`echo $1`"
field="$2"
if [ ! "$field" ]; then
field="$1"
data="`cat /dev/stdin`"
fi
data=`echo $data | sed $split | grep "^$field" | cut -f2 -d'=' | sed -r $decode_str`
printf "%b" $data
}
local map=`$grabber $args $yt_url | decode 'url_encoded_fmt_stream_map' | cut -f1 -d','`
echo `decode $map 'url'`\&signature=`decode $map 'sig'`
}
[ $SHLVL != 1 ] && export -f youtube-video-url
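A hedged usage sketch for the function above, assuming you save it as youtube-video-url.sh and source it into the current shell (feeding the result to mplayer is just an example, as in the player script below):
. ./youtube-video-url.sh
mplayer "$(youtube-video-url saalGKY7ifU)"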
youtube-player.sh (usage: bash youtube-player.sh saalGKY7ifU)
#!/bin/bash
decode() {
to_decode='s:%([0-9A-Fa-f][0-9A-Fa-f]):\\x\1:g'
printf "%b" `echo $1 | sed 's:&:\n:g' | grep "^$2" | cut -f2 -d'=' | sed -r $to_decode`
}
data=`wget http://www.youtube.com/get_video_info?video_id=$1\&hl=pt_BR -q -O-`
url_encoded_fmt_stream_map=` decode $data 'url_encoded_fmt_stream_map' | cut -f1 -d','`
signature=` decode $url_encoded_fmt_stream_map 'sig'`
url=`decode $url_encoded_fmt_stream_map 'url'`
test $2 && name=$2 || name=`decode $data 'title' | sed 's:+: :g;s:/:-:g'`
test "$name" = "-" && name=/dev/stdout || name="$name.mp4"
# // wget "${url}&signature=${signature}" -O "$name"
mplayer -zoom -fs "${url}&signature=${signature}"
It uses the decode helper and bash, which you may already have installed.
I use this bash script to download a given set of songs from a given YouTube playlist:
#!/bin/bash
downloadDirectory=<directory where you want your videos to be saved>   # no spaces around '='
playlistURL=<URL of the playlist>
for i in {<keyword 1>,<keyword 2>,...,<keyword n>}; do
youtube-dl -o "${downloadDirectory}/youtube-dl/%(title)s.%(ext)s" "${playlistURL}" --match-title "$i"
done
Note: "keyword i" is the title (in whole or part; if part, it should be unique to that playlist) of a given video in that playlist.
Edit: You can install youtube-dl with pip install youtube-dl.
#!/bin/bash
# Coded by Biki Teron
# String replace command in linux
echo "Enter youtube url:"
read url1
wget -c -O index.html $url1
################################### Linux string replace ##################################################
sed -e 's/%3A%2F%2F/:\/\//g' index.html > youtube.txt
sed -i 's/%2F/\//g' youtube.txt
sed -i 's/%3F/?/g' youtube.txt
sed -i 's/%3D/=/g' youtube.txt
sed -i 's/%26/\&/g' youtube.txt
sed -i 's/%252/%2/g' youtube.txt
sed -i 's/sig/&signature/g' youtube.txt
## command to get filename
nawk '/<title>/,/<\/title>/' youtube.txt > filename.txt ## Print the lines between <title> and </title>.
sed -i 's/.*content="//g' filename.txt
sed -i 's/">.*//g' filename.txt
sed -i 's/.*<title>//g' filename.txt
sed -i 's/<.*//g' filename.txt
######################################## Coding to get all itag list ########################################
nawk '/"fmt_list":/,//' youtube.txt > fmt.html ## Print the line containing "fmt_list": .
sed -i 's/.*"fmt_list"://g' fmt.html
sed -i 's/, "platform":.*//g' fmt.html
sed -i 's/, "title":.*//g' fmt.html
# String replace command in linux to get correct itag format
sed -i 's/\\\/1920x1080\\\/99\\\/0\\\/0//g' fmt.html ## Replace \/1920x1080\/99\/0\/0 by blank .
sed -i 's/\\\/1920x1080\\\/9\\\/0\\\/115//g' fmt.html ## Replace \/1920x1080\/9\/0\/115 by blank.
sed -i 's/\\\/1280x720\\\/99\\\/0\\\/0//g' fmt.html ## Replace \/1280x720\/99\/0\/0 by blank.
sed -i 's/\\\/1280x720\\\/9\\\/0\\\/115//g' fmt.html ## Replace \/1280x720\/9\/0\/115 by blank.
sed -i 's/\\\/854x480\\\/99\\\/0\\\/0//g' fmt.html ## Replace \/854x480\/99\/0\/0 by blank.
sed -i 's/\\\/854x480\\\/9\\\/0\\\/115//g' fmt.html ## Replace \/854x480\/9\/0\/115 by blank.
sed -i 's/\\\/640x360\\\/99\\\/0\\\/0//g' fmt.html ## Replace \/640x360\/99\/0\/0 by blank.
sed -i 's/\\\/640x360\\\/9\\\/0\\\/115//g' fmt.html ## Replace \/640x360\/9\/0\/115 by blank.
sed -i 's/\\\/640x360\\\/9\\\/0\\\/115//g' fmt.html ## Replace \/640x360\/9\/0\/115 by blank.
sed -i 's/\\\/320x240\\\/7\\\/0\\\/0//g' fmt.html ## Replace \/320x240\/7\/0\/0 by blank.
sed -i 's/\\\/320x240\\\/99\\\/0\\\/0//g' fmt.html ## Replace \/320x240\/99\/0\/0 by blank.
sed -i 's/\\\/176x144\\\/99\\\/0\\\/0//g' fmt.html ## Replace \/176x144\/99\/0\/0 by blank.
# Command to cut a part of a file between any two strings
nawk '/"url_encoded_fmt_stream_map":/,//' youtube.txt > url.txt
sed -i 's/.*url_encoded_fmt_stream_map"://g' url.txt
#Display video resolution information
echo ""
echo "Video resolution:"
echo "[46=1080(.webm)]--[37=1080(.mp4)]--[35=480(.flv)]--[36=180(.3gpp)]"
echo "[45=720 (.webm)]--[22=720 (.mp4)]--[34=360(.flv)]--[17=144(.3gpp)]"
echo "[44=480 (.webm)]--[18=360 (.mp4)]--[5=240 (.flv)]"
echo "[43=360 (.webm)]"
echo ""
echo "itag list= "`cat fmt.html`
echo "Enter itag number: "
read fmt
####################################### Coding to get required resolution #################################################
## cut itag=?
sed -e "s/.*,itag=$fmt//g" url.txt > "$fmt"_1.txt
sed -e 's/\u0026quality.*//g' "$fmt"_1.txt > "$fmt".txt
sed -i 's/.*u0026url=//g' "$fmt".txt ## Ignore all lines before \u0026url= but print all lines after \u0026url=.
sed -e 's/\u0026type.*//g' "$fmt".txt > "$fmt"url.txt ## Ignore all lines after \u0026type but print all lines before \u0026type.
sed -i 's/\\/\&/g' "$fmt"url.txt ## replace \ by &
sed -e 's/.*\u0026sig//g' "$fmt".txt > "$fmt"sig.txt ## Ignore all lines before \u0026sig but print all lines after \u0026sig.
sed -i 's/\\/\&ptk=machinima/g' "$fmt"sig.txt ## replace \ by &
echo `cat "$fmt"url.txt``cat "$fmt"sig.txt` > "$fmt"url.txt ## Add string at the end of a line
echo `cat "$fmt"url.txt` > link.txt ## url and signature content to 44url.txt
rm "$fmt"sig.txt
rm "$fmt"_1.txt
rm "$fmt".txt
rm "$fmt"url.txt
rm youtube.txt
########################################### Coding for filename with correct extension #####################################
case $fmt in
46|45|44|43) ext=webm ;;   # webm itags
37|22|18) ext=mp4 ;;       # mp4 itags
35|34|5) ext=flv ;;        # flv itags
*) ext=3gpp ;;             # 36, 17 and anything else fall back to 3gpp
esac
echo `cat filename.txt`.$ext > filename.txt
rm fmt.html
rm url.txt
filename=`cat filename.txt`
linkdownload=`cat link.txt`
wget -c -O "$filename" $linkdownload
echo "Download Finished!"
read
