Putting strings on the same line - tcl or bash

I have nmap output and I need to join strings that appear on different lines onto the same line.
Nmap Output:
Nmap scan report for 169.254.0.1
Host is up (0.014s latency).
Not shown: 97 closed ports
PORT STATE SERVICE
80/tcp open http
1720/tcp open H.323/Q.931
5060/tcp open sip
Device type: VoIP adapter|WAP|PBX|webcam|printer
New output:
169.254.0.1,VoIP adapter
How can I do this in tcl or bash?

In Tcl, we can use regexp to extract the required data.
set nmap_output "Nmap scan report for 169.254.0.1
Host is up (0.014s latency).
Not shown: 97 closed ports
PORT STATE SERVICE
80/tcp open http
1720/tcp open H.323/Q.931
5060/tcp open sip
Device type: VoIP adapter|WAP|PBX|webcam|printer"
if {[regexp {scan\s+report\s+for\s+(\S+).*Device\s+type:\s+([^|]+)} $nmap_output match ip type]} {
    puts "$ip,$type"
}

Brute force:
<your_nmap_output> | \
egrep "Nmap scan report|Device type" | \
sed -r 's/[ ]*Nmap scan report for (.*)$/\1,/' | \
sed -r 's/[ ]*Device type: ([^\|]*)\|.*/\1/' | \
xargs
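If you'd rather do the whole thing in one pass, here is an awk sketch along the same lines (it assumes the scan output has been saved to a file, here called nmap_output.txt, and that the report line ends with a bare IP address):
awk '/Nmap scan report for/ { ip = $NF }                 # remember the address from the report line
     /Device type:/ { type = $0
                      sub(/^Device type: /, "", type)    # strip the label
                      sub(/\|.*/, "", type)              # keep only the first device type
                      print ip "," type }' nmap_output.txt
With the sample output above this prints 169.254.0.1,VoIP adapter.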

Related

How to delete a connection by type with nmcli?

I want to have a bash script that can delete all my network manager connections of type gsm with nmcli.
What is the best approach for this?
This is actually a trickier question than it seems on the surface, because NetworkManager allows connection names with spaces in them. This makes programmatic parsing of the output of nmcli connection show for connection names a bit awkward. I think the best option for scripting would be to rely on the UUID, since it seems to consistently be a 36-character group of hexadecimal characters and dashes. This means we can pull it consistently with a regular expression. So, for example, you could get a list of the UUIDs for gsm connections with the following:
$ nmcli connection show | grep gsm | grep -E -o '[0-9a-f\-]{36}'
cc823da6-d4e1-4757-a37a-aaaaaaaaa
etc
So you could grab the UUIDs and then delete based on the UUID:
GSM_UUIDS=$(nmcli connection show | grep gsm | grep -E -o '[0-9a-f\-]{36}')
while IFS= read -r UUID; do echo nmcli connection delete $UUID; done <<< "$GSM_UUIDS"
Run with the echo to make sure you're getting the result you expect, then you can remove it and you should be in business. I ran it locally with some dummy GSM connections and it seemed to work the way you would want it to:
GSM_UUIDS=$(nmcli connection show | grep gsm | grep -E -o '[0-9a-f\-]{36}')
while IFS= read -r UUID; do nmcli connection delete $UUID; done <<< "$GSM_UUIDS"
Connection 'gsm' (cd311376-d7ab-4891-ba73-e4e8a3fc6614) successfully deleted.
Connection 'gsm-1' (54171181-5c37-4224-baf5-9eb36458f773) successfully deleted.
Alternatively, a one-liner using nmcli's terse output:
nmcli con del $(nmcli -t -f UUID,TYPE con | awk -F":" '{if ($2 == "gsm") print $1}')
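A variation on the same idea, if you want nothing to run when there are no gsm connections, is to hand the UUIDs to xargs with its -r flag (a sketch, GNU xargs assumed):
nmcli -t -f UUID,TYPE connection show | awk -F: '$2 == "gsm" { print $1 }' | xargs -r -n1 nmcli connection delete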

How to extract IP address from log file and append it to an URL?

The log file contains this line:
Nov 28 21:39:25 soft-server sshd[11946]: Accepted password for myusername from 10.0.2.2 port 13494 ssh2
I want to run the curl command only if the log file contains the string "Accepted password for", and append the IP address to the URL.
Something like this:
if grep -q "Accepted password for" /var/log/auth.log
then
    curl "www.examplestextmessage.com/send-a-message/text=$IP_address"
fi
Additionally, how can I rewrite the above script so that it checks for multiple logins and runs a separate curl command for each result?
For example:
Nov 28 21:35:25 localhost sshd[565]: Server listening on 0.0.0.0 port 22
Nov 28 21:39:25 soft-server sshd[11946]: Accepted password for myusername from 10.0.2.2 port 13494 ssh2
Nov 28 21:40:25 localhost sshd[565]: Server listening on 0.0.0.0 port 22
Nov 28 21:41:25 localhost sshd[565]: Server listening on 0.0.0.0 port 22
Nov 28 21:42:25 localhost sshd[565]: Server listening on 0.0.0.0 port 22
Nov 28 21:43:25 soft-server sshd[11946]: Accepted password for myusername from 10.0.1.1 port 13494 ssh2
grep -oP 'Accepted password for \w+ from\s\K[^ ]+' log.file | while read -r line
do
    curl "www.examplestextmessage.com/send-a-message/text=$line"
done
Explanation:
First, grep lists the IP addresses from the lines containing "Accepted password for". The stream of grep results is then fed into the while loop, which appends each IP address to the URL and runs curl.
One-line script with xargs:
grep -oP 'Accepted password for \w+ from\s\K[^ ]+' "/var/log/auth.log" | xargs -I{} -r curl -v -L "www.examplestextmessage.com/send-a-message/text={}"
xargs -r: if the standard input is completely empty, do not run the command. By default, the command is run once even if there is no input.
The xargs -I{} option changes the way the command lines are built. Instead of adding as many arguments as possible at a time, xargs takes one name at a time from its input, looks for the given token ({} here) and replaces it with that name.
Explanation:
grep for the lines containing "Accepted password for"
From the result set, extract the IP address using awk/cut
Use the list of IP addresses as input to a loop
Below is a code example:
for IP_address in $(grep 'Accepted password for' auth.log | awk '{print $11}')
do
    curl "www.examplestextmessage.com/send-a-message/text=$IP_address"
done
Another option, for completeness, is sed, using -E to enable extended regular expressions:
sed -En 's/(^.*Accepted password for )(.*)( from )(.*)( port.*$)/\4/p' /var/log/auth.log
This splits the matching line into five capture groups, marked by the parentheses, and prints only the fourth group, which is the IP address.
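Feeding that into the same kind of while loop as the earlier answer would look roughly like this (using the same example URL as in the question):
sed -En 's/(^.*Accepted password for )(.*)( from )(.*)( port.*$)/\4/p' /var/log/auth.log |
while read -r ip
do
    curl "www.examplestextmessage.com/send-a-message/text=$ip"
done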

Bad nmap grepable output

If I scan one target with nmap using the grepable output option (-oG), I get this output:
nmap -sS -oG - 192.168.1.1
Status: Up
Host: 192.168.1.1 () Ports: 20/closed/tcp//ftp-data///, 21/open/tcp//ftp///, 22/closed/tcp//ssh///, 43/closed/tcp//whois///, 80/open/tcp//http///
# Nmap done at Thu Dec 12 11:32:36 2
As you can see, the line that lists the port numbers has no newlines, which makes it hard to use grep on... :)
I'm on Debian Wheezy and I use bash. How can I correct this?
Thanks
Well, although they call it "grepable" output, it's more meant to be parsed by tools such as awk, sed or Perl.
A lot of useful information is on the Nmap website.
The fields are also separated by tab characters, so I'd start with e.g. cut -f5 file to get the fields you want, and then you can do finer parsing with, say, awk -F/ '{print $2}'. I'm not sure which part of the output is of interest to you.
Perl would also work to parse the output, as described on their webpage, but that's probably not needed.
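As a rough sketch of that idea (treat it as a starting point rather than a complete parser), the following awk one-liner pulls the Ports field apart and prints one port, state and service per line:
nmap -sS -oG - 192.168.1.1 |
awk -F 'Ports: ' '/Ports:/ {
    n = split($2, entries, ", ")        # one entry per port
    for (i = 1; i <= n; i++) {
        split(entries[i], f, "/")       # port/state/protocol//service///
        print f[1], f[2], f[5]
    }
}'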
There is nothing wrong with that output. Grepable format is designed to have one line per host, so that you can grep for all hosts with a particular port open.
If what you want is a list of only those ports that are open, you can tell Nmap to only print those with the --open option:
sh$ nmap -p 80,22 localhost -oG - -n -Pn --open
# Nmap 6.41SVN scan initiated Thu Dec 12 08:40:03 2013 as: nmap -p 80,22 -oG - -n -Pn --open localhost
Host: 127.0.0.1 () Status: Up
Host: 127.0.0.1 () Ports: 22/open/tcp//ssh/// Ignored State: closed (1)
# Nmap done at Thu Dec 12 08:40:03 2013 -- 1 IP address (1 host up) scanned in 0.08 seconds

Capturing Data from Tshark

Tshark is a command-line packet sniffer. I am trying to find a way to get information from the packets, put it in a variable and run some regular expressions on it.
Right now, I am getting this from tshark:
Capturing on eth0
0.000000 74.125.71.116 -> 112.204.184.111 TCP http > 55828 [ACK] Seq=1 Ack=1 Win=6434 Len=0 TSV=2558834852 TSER=542043
0.000035 112.204.184.111 -> 74.125.71.116 HTTP Continuation or non-HTTP traffic
0.000043 112.204.184.111 -> 74.125.71.116 HTTP Continuation or non-HTTP traffic
Note: I am using Ruby.
You can use tshark itself without another utility. This command prints out all URIs from packets as they arrive:
$ tshark -R http.request.full_uri -T fields -e http.request.full_uri -i en0
You can refine the display filter (the -R parameter) to better match your requirements. It even supports Perl regular expression matching:
# Mac OS X
$ tshark -R 'http.request.full_uri matches "\\.jpg\|\\.js"' -T fields -e http.request.full_uri -i en0
Example output from visiting youtube.com:
$ tshark -R 'http.request.full_uri matches "\\.jpg\|\\.js"' -T fields -e http.request.full_uri -i en0
Capturing on en0
http://s.ytimg.com/yt/jsbin/www-core-vfl3_mVgh.js
http://s.ytimg.com/yt/jsbin/www-subscriptions-vfl5HwfxW.js
http://i2.ytimg.com/i/QMbqH7xJu5aTAPQ9y_U7WQ/1.jpg?v=95416b
http://i1.ytimg.com/vi/4R0BAjrZqyY/default.jpg
http://i4.ytimg.com/i/KVtW8ExxO21F2sNLtwrq_w/1.jpg?v=a1fa0c
http://i3.ytimg.com/vi/z3U0udLH974/default.jpg
http://i2.ytimg.com/vi/arKyyDRsE_8/default.jpg
http://i2.ytimg.com/vi/y1TGz-fEyiE/default.jpg
http://i2.ytimg.com/vi/-tc983PZK3o/default.jpg
http://i2.ytimg.com/vi/1yT2rrTyMK8/default.jpg
http://i4.ytimg.com/vi/cciUXpITsu0/default.jpg
http://i2.ytimg.com/vi/uG0dimAxHpI/default.jpg
http://i2.ytimg.com/vi/eP9P50kbzTk/default.jpg
http://i1.ytimg.com/vi/ppBe0T412uU/default.jpg
http://i1.ytimg.com/vi/8360wVLtEuk/default.jpg
http://i4.ytimg.com/vi/G_yB7wdTxa0/default.jpg
http://i4.ytimg.com/vi/gcZxoLs3NIU/default.jpg
http://i1.ytimg.com/i/po2fJvnalYlwN97ehhyfBQ/1.jpg?v=b8e52a
http://i1.ytimg.com/vi/D2Xjj_ra8lQ/default.jpg
http://i1.ytimg.com/vi/PewewGu9gp8/default.jpg
http://i1.ytimg.com/vi/P9FkRD6ppGo/default.jpg
http://i3.ytimg.com/vi/vpZ4SMU4znQ/default.jpg
http://i3.ytimg.com/vi/jrrSGulNOLc/default.jpg
http://i3.ytimg.com/vi/FJtTzQfdnoQ/default.jpg
http://i3.ytimg.com/vi/68sEHPpQXes/default.jpg
http://i2.ytimg.com/vi/iWYqsaJk_U8/default.jpg
http://i4.ytimg.com/vi/7Prb8DbdfwY/default.jpg
http://i1.ytimg.com/vi/HJFlxLJSX8E/default.jpg
http://i1.ytimg.com/vi/ta6Vu_v7VLg/default.jpg
http://i1.ytimg.com/vi/Hq7NtDSIErE/default.jpg
http://i4.ytimg.com/vi/Sjdj7qhcTuw/default.jpg
http://i3.ytimg.com/vi/Nm3Acf3_oMY/default.jpg
http://i3.ytimg.com/vi/BpsrThXh_gM/default.jpg
http://i3.ytimg.com/vi/Z3yapgewktY/default.jpg
http://i3.ytimg.com/vi/2UFc1pr2yUU/default.jpg
http://i2.ytimg.com/vi/q_Bt6NwD4FY/default.jpg
http://i2.ytimg.com/vi/uTAAlzABzBA/default.jpg
http://i2.ytimg.com/vi/iRLUY6dMF8k/default.jpg
http://i2.ytimg.com/vi/-cDH6CYzTAw/default.jpg
http://i1.ytimg.com/vi/8p6Fn8R1Rc4/default.jpg
http://i1.ytimg.com/vi/T8gDQWdlW6A/default.jpg
http://i2.ytimg.com/vi/ERTcZV7uTFU/default.jpg
http://i1.ytimg.com/vi/PyxgwA6PvnI/default.jpg
http://i1.ytimg.com/vi/xUGlezOCvu4/default.jpg
http://i1.ytimg.com/vi/Ljb6Mne8Mfc/default.jpg
Note: On Windows, I've seen tshark print all the URIs in a particular packet on one line without delimiters (e.g., "http://www.google.comhttp://www.google.com/logos/classicplus.png"). Only some packets were affected by this.
You could either pipe this data into a file that you then open and parse with Ruby, or use a Ruby library that can access the same data, such as http://sourceforge.net/apps/trac/rubypcap/
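For the first approach, the shell side is just a redirection; a minimal sketch (uris.txt is only a placeholder file name) would be:
tshark -R http.request.full_uri -T fields -e http.request.full_uri -i en0 > uris.txt
# for example, count requests per host before handing the file over to Ruby
awk -F/ '{ print $3 }' uris.txt | sort | uniq -c | sort -rn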

tcpdump - ignore unknown host error

I've got a tcpdump command running from a bash script. It looks something like this:
tcpdump -nttttAr /path/to/file -F /my/filter/file
The filter file has a combination of IP addresses and host names, e.g.
host 111.111.111.111 or host 112.112.112.112 and not (host abc.com or host def.com or host zyx.com).
And it works great, as long as the host names are all valid. My problem is that sometimes these hostnames are not valid, and upon encountering one, tcpdump spits out:
tcpdump: Unknown Host
I thought the -n option would make it skip DNS lookups, but in any case I need it to ignore the unknown host and continue through the rest of the filter file.
Any ideas?
Thank you in advance.
The -n option prevents conversion of IP addresses into names, but not the other way around. If you supply a hostname as an argument, it has to be looked up to get the IP address since packets only contain the numeric address and not the hostname. However, there ought to be a way to ignore invalid hostnames, but I can't find one. Perhaps you could pre-process your filter file using dig.
dig +short non-existent-domain.com # returns null
dig +short google.com # returns multiple IP addresses
This could probably be better, but it should show you hostnames in your filter file that aren't valid:
grep -Po '(?<=host )[^ )]*' filterfile | grep -v '[0-9]$' | xargs -I % sh -c 'echo -n "% "; echo $(dig +short %)' | grep -v ' [0-9]'
Any hostnames it prints didn't have IP addresses returned by dig.
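If you want to act on that list automatically, one rough approach (a sketch that assumes the unresolvable names have been collected one per line in a file, here called bad_hosts.txt, and that the filter only uses simple "host NAME" clauses) is to strip those clauses from a copy of the filter file before handing it to tcpdump:
cp /my/filter/file /tmp/filter.clean
while IFS= read -r bad
do
    # drop "host NAME or" / "or host NAME" clauses for each unresolvable name
    sed -i "s/host $bad or //g; s/ or host $bad//g" /tmp/filter.clean
done < bad_hosts.txt
tcpdump -nttttAr /path/to/file -F /tmp/filter.clean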
