Bad nmap grepable output - bash

If I scan one target with nmap and use the grepable output option (-oG), I get this output:
nmap -sS -oG - 192.168.1.1
Status: Up
Host: 192.168.1.1 () Ports: 20/closed/tcp//ftp-data///, 21/open/tcp//ftp///, 22/closed/tcp//ssh///, 43/closed/tcp//whois///, 80/open/tcp//http///
# Nmap done at Thu Dec 12 11:32:36 2
As you can see, the line that indicates the port numbers has no newlines, which makes it hard to use with grep... :)
I'm on Debian Wheezy and I use bash. How can I fix this?
Thanks

Well, although they call it "grepable" output, it's really meant to be parsed by tools such as awk, sed or Perl.
A lot of useful information is on the Nmap website.
The fields are also separated by tab characters, so I'd start with e.g. cut -f5 file to get the fields you want, and then do finer parsing with, say, awk -F/ '{print $2}'. I'm not sure which part of the output is of interest.
Perl would also work to parse the output as described on their webpage, but that's probably not needed.
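For instance, to pull just the open port numbers out of the Ports field, an awk split on "/" does the job. (A sketch: the sample line from the question is hard-coded here so it runs without a live scan; in practice you'd pipe nmap -oG - output straight into the awk.)

```shell
# Sample grepable line; in practice: nmap -sS -oG - 192.168.1.1 | awk ...
line='Host: 192.168.1.1 () Ports: 20/closed/tcp//ftp-data///, 21/open/tcp//ftp///, 22/closed/tcp//ssh///, 43/closed/tcp//whois///, 80/open/tcp//http///'

printf '%s\n' "$line" | awk '/Ports:/ {
    sub(/.*Ports: /, "")                # keep only the comma-separated list
    n = split($0, ports, /, */)
    for (i = 1; i <= n; i++) {
        split(ports[i], f, "/")         # f[1]=port, f[2]=state, f[5]=service
        if (f[2] == "open") print f[1]
    }
}'
# prints 21 and 80, one per line
```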

There is nothing wrong with that output. Grepable format is designed to have one line per host, so that you can grep for all hosts with a particular port open.
If what you want is a list of only those ports that are open, you can tell Nmap to only print those with the --open option:
sh$ nmap -p 80,22 localhost -oG - -n -Pn --open
# Nmap 6.41SVN scan initiated Thu Dec 12 08:40:03 2013 as: nmap -p 80,22 -oG - -n -Pn --open localhost
Host: 127.0.0.1 () Status: Up
Host: 127.0.0.1 () Ports: 22/open/tcp//ssh/// Ignored State: closed (1)
# Nmap done at Thu Dec 12 08:40:03 2013 -- 1 IP address (1 host up) scanned in 0.08 seconds
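To illustrate why the one-line-per-host design is convenient: finding every host with a given port open is a single grep. (A sketch with two made-up lines; the second address is invented for the example.)

```shell
# Two sample grepable lines; grep keeps hosts with 22/open, awk prints the address.
printf '%s\n' \
    'Host: 127.0.0.1 () Ports: 22/open/tcp//ssh/// Ignored State: closed (1)' \
    'Host: 127.0.0.2 () Ports: 80/open/tcp//http/// Ignored State: closed (1)' |
    grep '22/open' | awk '{print $2}'
# prints: 127.0.0.1
```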


How to extract IP address from log file and append it to an URL?

Log file contains this line.
Nov 28 21:39:25 soft-server sshd[11946]: Accepted password for myusername from 10.0.2.2 port 13494 ssh2
I want to run the curl command only if the log file contains the string "Accepted password for" and append the IP address to URL.
Something like this:
if grep -q "Accepted password for" /var/log/auth.log
then
curl "www.examplestextmessage.com/send-a-message/text=$IP_address"
fi
Additionally, how do I rewrite the above script so it can check for multiple logins and run a separate curl command for each result?
For example:
Nov 28 21:35:25 localhost sshd[565]: Server listening on 0.0.0.0 port 22
Nov 28 21:39:25 soft-server sshd[11946]: Accepted password for myusername from 10.0.2.2 port 13494 ssh2
Nov 28 21:40:25 localhost sshd[565]: Server listening on 0.0.0.0 port 22
Nov 28 21:41:25 localhost sshd[565]: Server listening on 0.0.0.0 port 22
Nov 28 21:42:25 localhost sshd[565]: Server listening on 0.0.0.0 port 22
Nov 28 21:43:25 soft-server sshd[11946]: Accepted password for myusername from 10.0.1.1 port 13494 ssh2
grep -oP 'Accepted password for \w+ from\s\K[^ ]+' log.file|while read line;
do
curl "www.examplestextmessage.com/send-a-message/text=$line"
done
Explanation:
First, grep lists the IP addresses from the lines containing "Accepted password for". The grep output is then fed into the while loop, which appends each IP address to the URL and runs curl.
One-line script with xargs:
grep -oP 'Accepted password for \w+ from\s\K[^ ]+' "/var/log/auth.log" | xargs -I{} -r curl -v -L "www.examplestextmessage.com/send-a-message/text={}"
xargs -r: if the standard input is completely empty, do not run the command. By default, the command is run once even if there is no input.
xargs -I{} changes the way the new command lines are built. Instead of adding as many arguments as possible at a time, xargs takes one name at a time from its input, looks for the given token ({} here) and replaces it with that name.
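A quick self-contained illustration of those two flags, with echo standing in for curl (GNU xargs assumed, since -r is a GNU extension):

```shell
# -I{} substitutes each input line into the command template, one run per line
printf '10.0.2.2\n10.0.1.1\n' | xargs -I{} -r echo "sending to {}"
# prints: sending to 10.0.2.2
#         sending to 10.0.1.1

# -r: with completely empty input, the command is not run at all
printf '' | xargs -I{} -r echo "sending to {}"
# prints nothing
```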
Explanation:
grep the lines containing "Accepted password for"
From the result set, find the IP address using awk/cut
Use the IP address list as input for a loop
Below is a code example:
for IP_address in $(grep 'Accepted password for' auth.log | awk '{print $11}')
do
curl "www.examplestextmessage.com/send-a-message/text=$IP_address"
done
Another option for completeness is sed using -E to cater for regular expressions:
sed -En 's/(^.*Accepted password for )(.*)( from )(.*)( port.*$)/\4/p'
This splits the text into five separate sections, marked by the parenthesised groups. We then print the 4th group to get the IP address.
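Applied to the log line from the question, that looks like this (the line is inlined here so the sketch runs standalone):

```shell
printf '%s\n' 'Nov 28 21:39:25 soft-server sshd[11946]: Accepted password for myusername from 10.0.2.2 port 13494 ssh2' |
    sed -En 's/(^.*Accepted password for )(.*)( from )(.*)( port.*$)/\4/p'
# prints: 10.0.2.2
```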

Portable way to resolve host name to IP address

I need to resolve a host name to an IP address in a shell script. The code must work at least in Cygwin, Ubuntu and OpenWrt(busybox).
It can be assumed that each host will have only one IP address.
Example:
input
google.com
output
216.58.209.46
EDIT:
nslookup may seem like a good solution, but its output is quite unpredictable and difficult to filter. Here is the result of the command on my computer (Cygwin):
>nslookup google.com
Unauthorized answer:
Serwer: UnKnown
Address: fdc9:d7b9:6c62::1
Name: google.com
Addresses: 2a00:1450:401b:800::200e
216.58.209.78
I've no experience with OpenWRT or Busybox, but the following one-liner should work with a base installation of Cygwin or Ubuntu:
ipaddress=$(LC_ALL=C nslookup $host 2>/dev/null | sed -nr '/Name/,+1s|Address(es)?: *||p')
The above works with both the Ubuntu and Windows versions of nslookup. However, it only works when the DNS server replies with one IP (v4 or v6) address; if more than one address is returned, the first one is used.
Explanation
LC_ALL=C nslookup sets the LC_ALL environment variable when running the nslookup command so that the command ignores the current system locale and print its output in the command’s default language (English).
The 2>/dev/null avoids having warnings from the Windows version of nslookup about non-authoritative servers being printed.
The sed command looks for the line containing Name and then prints the following line after stripping the phrase Addresses: when there's more than one IP (v4 or 6) address -- or Address: when only one address is returned by the name server.
The -n option means lines aren't printed unless there's a p command, while the -r option means extended regular expressions are used (GNU sed is the default for Cygwin and Ubuntu).
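To see the sed stage in isolation, here it is run over a captured transcript instead of a live query (the text below is a hypothetical nslookup output, not real DNS data; GNU sed assumed for -r and the addr,+N range):

```shell
# Simulated nslookup output piped into the same sed expression
printf '%s\n' \
    'Server:  192.168.1.1' \
    'Address: 192.168.1.1#53' \
    '' \
    'Name:    google.com' \
    'Address: 216.58.209.46' |
    sed -nr '/Name/,+1s|Address(es)?: *||p'
# prints: 216.58.209.46
```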
If you want something available out-of-the-box on almost any modern UNIX, use Python:
pylookup() {
python -c 'import socket, sys; print(socket.gethostbyname(sys.argv[1]))' "$1" 2>/dev/null
}
address=$(pylookup google.com)
With respect to special-purpose tools, dig is far easier to work with than nslookup, and its short mode emits only literal answers -- in this case, IP addresses. To take only the first address, if more than one is found:
# this is a bash-specific idiom
read -r address < <(dig +short google.com | grep -E '^[0-9.]+$')
If you need to work with POSIX sh, or broken versions of bash (such as Git Bash, built with mingw, where process substitution doesn't work), then you might instead use:
address=$(dig +short google.com | grep -E '^[0-9.]+$' | head -n 1)
dig is available for Cygwin in the bind-utils package; since BIND is the most widely used DNS server on UNIX, bind-utils (built from the same codebase) is available for almost all Unix-family operating systems as well.
Here's my variation that steals from earlier answers:
nslookup blueboard 2> /dev/null | awk '/Address/{a=$3}END{print a}'
This depends on nslookup returning matching lines that look like:
Address 1: 192.168.1.100 blueboard
...and only returns the last address.
Caveats: this doesn't handle non-matching hostnames at all.
TL;DR: Option 2 is my preferred choice for an IPv4 address. Adjust the regex to get IPv6, and/or the awk to get both. A slight tweak to option 2 is given in the EDIT below.
Well a terribly late answer here, but I think I'll share my solution here, esp. because the accepted answer didn't work for me on openWRT(no python with minimal setup) and the other answer errors out "no address found after comma".
Option 1 (gives the last address from last entry sent by nameserver):
nslookup example.com 2>/dev/null | tail -2 | tail -1 | awk '{print $3}'
Pretty simple and straightforward; the piped commands don't really need an explanation.
Although in my tests this always gave the IPv4 address (because IPv4 was always on the last line, at least in my tests), I read about the unexpected ordering of nslookup output. So I had to find a way to make sure I get IPv4 even if the order is reversed; regex to the rescue.
Option 2 (makes sure you get IPv4):
nslookup example.com 2>/dev/null | sed 's/[^0-9. ]//g' | tail -n 1 | awk -F " " '{print $2}'
Explanation:
nslookup example.com 2>/dev/null - look up given host and ignore STDERR (2>/dev/null)
sed 's/[^0-9. ]//g' - strip everything except digits, dots and spaces, leaving the IPv4 address (read about the s command here)
tail -n 1 - get last 1 line (alt, tail -1)
awk -F " " '{print $2}' - captures and prints the second field of the line, using " " as the field separator
EDIT: A slight modification based on a comment to make it actually more generalized:
nslookup example.com 2>/dev/null | printf "%s" "$(sed 's/[^0-9. ]//g')" | tail -n 1 | printf "%s" "$(awk -F " " '{print $1}')"
In the above edit, I'm using printf command substitution to take care of any unwanted trailing newlines.

Putting a string on same line tcl

I have nmap output and I need to join strings that are on different lines onto the same line.
Nmap Output:
Nmap scan report for 169.254.0.1
Host is up (0.014s latency).
Not shown: 97 closed ports
PORT STATE SERVICE
80/tcp open http
1720/tcp open H.323/Q.931
5060/tcp open sip
Device type: VoIP adapter|WAP|PBX|webcam|printer
New Output:
169.254.0.1,Voip adapter
How can I do this in Tcl or bash?
In Tcl, we can use regexp to extract the required data.
set nmap_output "Nmap scan report for 169.254.0.1
Host is up (0.014s latency).
Not shown: 97 closed ports
PORT STATE SERVICE
80/tcp open http
1720/tcp open H.323/Q.931
5060/tcp open sip
Device type: VoIP adapter|WAP|PBX|webcam|printer"
if {[regexp {scan\s+report\s+for\s+(\S+).*Device\s+type:\s+([^|]+)} $nmap_output match ip type]} {
puts $ip,$type
}
Brute force:
<your_nmap_output> | \
egrep "Nmap scan report|Device type" | \
sed -r 's/[ ]*Nmap scan report for (.*)$/\1,/' | \
sed -r 's/[ ]*Device type: ([^\|]*)\|.*/\1/' | \
xargs
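For completeness, the same join can be done in one awk pass over the nmap output (a sketch; the two relevant lines are inlined so it runs standalone, and the field positions are assumed from the sample above):

```shell
printf '%s\n' \
    'Nmap scan report for 169.254.0.1' \
    'Device type: VoIP adapter|WAP|PBX|webcam|printer' |
    awk '/Nmap scan report for/ { ip = $NF }
         /Device type:/ {
             sub(/^Device type: */, "")   # strip the label
             split($0, t, "|")            # keep the first alternative only
             print ip "," t[1]
         }'
# prints: 169.254.0.1,VoIP adapter
```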

Send text file, line by line, with netcat

I'm trying to send a file, line by line, with the following commands:
nc host port < textfile
cat textfile | nc host port
I've tried with tail and head, but with the same result: the entire file is sent in a single shot.
The server is listening with a specific daemon to receive data log information.
I'd like to send and receive the lines one by one, not the whole file in a single shot.
How can I do that?
Do you HAVE TO use netcat?
cat textfile > /dev/tcp/HOST/PORT
can also serve your purpose, at least with bash.
I'd like to send, and receive, the lines one by one, not the whole file in a single shot.
Try
while read x; do echo "$x" | nc host port; done < textfile
The OP was unclear on whether they needed a new connection for each line. But based on the OP's comment here, I think their need is different from mine. However, Google sends people with my need here, so this is where I will place this alternative.
I have a need to send a file line by line over a single connection. Basically, it's a "slow" cat. (This will be a common need for many "conversational" protocols.)
If I try to cat an email message to nc I get an error because the server can't have a "conversation" with me.
$ cat email_msg.txt | nc localhost 25
554 SMTP synchronization error
Now if I insert a slowcat into the pipe, I get the email.
$ function slowcat(){ while read; do sleep .05; echo "$REPLY"; done; }
$ cat email_msg.txt | slowcat | nc localhost 25
220 et3 ESMTP Exim 4.89 Fri, 27 Oct 2017 06:18:14 +0000
250 et3 Hello localhost [::1]
250 OK
250 Accepted
354 Enter message, ending with "." on a line by itself
250 OK id=1e7xyA-0000m6-VR
221 et3 closing connection
The email_msg.txt looks like this:
$ cat email_msg.txt
HELO localhost
MAIL FROM:<system@example.com>
RCPT TO:<bbronosky@example.com>
DATA
From: [IES] <system@example.com>
To: <bbronosky@example.com>
Date: Fri, 27 Oct 2017 06:14:11 +0000
Subject: Test Message
Hi there! This is supposed to be a real email...
Have a good day!
-- System
.
QUIT
Use stdbuf -oL to adjust standard output stream buffering. If MODE is 'L' the corresponding stream will be line buffered:
stdbuf -oL cat textfile | nc host port
Just guessing here, but you probably need CR-LF line endings:
sed $'s/$/\r/' textfile | nc host port

tcpdump - ignore unknown host error

I've got a tcpdump command running from a bash script. It looks something like this:
tcpdump -nttttAr /path/to/file -F /my/filter/file
The filter file has a combination of IP addresses and host names, e.g.
host 111.111.111.111 or host 112.112.112.112 and not (host abc.com or host def.com or host zyx.com).
And it works great, as long as the host names are all valid. My problem is that sometimes these hostnames will not be valid, and upon encountering one, tcpdump spits out:
tcpdump: Unknown Host
I thought that with the -n option it would skip the DNS lookup, but in any case I need it to ignore the unknown host and continue along the filter file.
Any ideas?
Thank you in advance.
The -n option prevents conversion of IP addresses into names, but not the other way around. If you supply a hostname as an argument, it has to be looked up to get the IP address since packets only contain the numeric address and not the hostname. However, there ought to be a way to ignore invalid hostnames, but I can't find one. Perhaps you could pre-process your filter file using dig.
dig +short non-existent-domain.com # returns null
dig +short google.com # returns multiple IP addresses
This could probably be better, but it should show you hostnames in your filter file that aren't valid:
grep -Po '(?<=host )[^ )]*' filterfile | grep -v '[0-9]$' | xargs -I % sh -c 'echo -n "% "; echo $(dig +short %)' | grep -v ' [0-9]'
Any hostnames it prints didn't have IP addresses returned by dig.
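Building on that idea, here is a pre-processing sketch: resolve each hostname up front and drop the terms dig can't answer, then hand tcpdump the cleaned filter. (Heavily simplified: it assumes one "host X" term per line and ignores the and/or/not structure of a real filter, which would need more careful rewriting; no-such-host.invalid is a deliberately unresolvable name.)

```shell
# Keep "host X" terms where X is a literal IP or a name dig can resolve.
while read -r _ h; do
    case $h in
        [0-9]*) printf 'host %s\n' "$h" ;;               # literal IP: keep
        *) [ -n "$(dig +short "$h" 2>/dev/null)" ] &&
               printf 'host %s\n' "$h" ;;                # drop if no answer
    esac
done <<'EOF'
host 111.111.111.111
host no-such-host.invalid
EOF
# with a sane resolver, prints only: host 111.111.111.111
```

In a script you would read the real filter file instead of the here-document and write the result to a temporary file passed to tcpdump -F.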
