How to write the output of a grep from an openssl command? - bash

Consider you have a list of a thousand IP addresses in a text file, one per line. I want to be able to grab all possible anomalies for each IP address using the openssl s_client command. So far, the anomalies are: an expired certificate, a self-signed certificate, and an issuer CN that includes emailAddress=root#localhost.localdomain.
Overall, I want to obtain a concise error message per IP address, if there is any. My current bash script looks like:
for ip in `awk '{print $1}' < general_certs.txt`; do
    # Print IP address currently checking
    echo -e $ip;
    if timeout 30 openssl s_client -connect $ip:443 | grep -i 'error' ; then
        # Write error information and IP address to a file
        echo `openssl s_client -connect $ip:443 | grep -i 'error'` $ip >> general_errors;
    else
        # Write noerror information and IP address to another file
        echo "noerror" $ip >> general_noerror;
    fi;
done
The first issue I have with the code is that it is not optimized, and I am skeptical that it returns accurate results. The end goal of the script is to identify all IPs serving untrusted certificates.
The second issue is that I could not echo $ip first, because it would get truncated by the error message itself, so I ended up writing out $ip after the error message.
It is not necessary to use openssl if there is a better-suited solution to my question.
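For reference: depending on the OpenSSL version, expired or self-signed certificates may only show up in the "Verify return code" line on stdout (the verify error lines go to stderr), so grepping for 'error' can miss them. A minimal sketch that keys on that line instead, assuming the openssl CLI is available and port 443 is always the target:
#!/bin/bash
# Record the certificate verification status per host.
# s_client prints e.g. "Verify return code: 0 (ok)",
# "10 (certificate has expired)" or "18 (self signed certificate)".
while read -r ip _; do
    status=$(timeout 30 openssl s_client -connect "$ip:443" -servername "$ip" \
                 </dev/null 2>/dev/null | grep -m1 'Verify return code')
    if [[ "$status" == *"0 (ok)"* ]]; then
        echo "$ip noerror" >> general_noerror
    else
        echo "$ip ${status:-noresponse}" >> general_errors
    fi
done < general_certs.txt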

You can try to run them all in one shot by putting the processes in the background:
#!/bin/bash
max=100
counter=0
for ip in `awk '{print $1}' < general_certs.txt`; do
    (( counter++ ))
    (
        # Close stdin so s_client doesn't sit waiting for input until the timeout
        results=$( timeout 30 openssl s_client -connect $ip:443 < /dev/null 2> /dev/null )
        if [ "$results" = "" ]; then
            echo "$ip noresponse"
        else
            echo -n "$ip "
            echo "$results" | grep -i 'error' || echo "noerror"
        fi
    ) >> general_errors &
    if [ $counter -eq $max ]; then
        wait
        counter=0
    fi
done
wait
This was the input:
$ cat general_certs.txt
stackoverflow.com
redhat.com
google.com
8.8.8.8
vt.edu
bls.gov
8.8.4.4
This was the output:
$ cat general_errors
8.8.4.4 noerror
stackoverflow.com noerror
google.com noerror
vt.edu noerror
bls.gov noerror
redhat.com noerror
8.8.8.8 noresponse
If you have some that fail, I can test them.
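If the counter-and-wait bookkeeping feels heavy, the same bounded parallelism can be delegated to xargs -P. A rough sketch of that variant, assuming bash (for export -f) and an xargs that supports -P:
#!/bin/bash
# Check one host and emit a single result line (keeps concurrent output from interleaving mid-line).
check_one() {
    results=$(timeout 30 openssl s_client -connect "$1:443" </dev/null 2>/dev/null)
    if [ -z "$results" ]; then
        echo "$1 noresponse"
    else
        err=$(echo "$results" | grep -i -m1 'error')
        echo "$1 ${err:-noerror}"
    fi
}
export -f check_one
awk '{print $1}' general_certs.txt \
    | xargs -P 100 -I{} bash -c 'check_one "$1"' _ {} >> general_errors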

If you're trying to find out which IPs have an untrusted certificate, you could try curl, even if it's not the best option.
Something like:
for ip in `awk '{print $1}' < general_certs.txt`; do
    echo -e $ip
    curl https://${ip}:443 &> /dev/null
    if [ $? == 51 ]; then
        echo "upsis, https://${ip}:443 has an untrusted certificate" >> general_err
    else
        echo "yai, https://${ip}:443 doesn't have an untrusted certificate" >> general_noerr
    fi
done
Notice that here you're only checking for untrusted certificates (error 51 in curl); the command could return any other error and it would still go to general_noerr.
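If you want to separate certificate problems from other failures explicitly, the exit code can be inspected with a case statement. A sketch, with the caveat that the code-to-meaning mapping below (51: peer certificate not OK, 60: certificate cannot be authenticated with known CAs) is taken from the curl man page and can vary between curl versions:
while read -r ip _; do
    curl -s -o /dev/null "https://${ip}:443"
    rc=$?
    case $rc in
        0)     echo "https://${ip}:443 certificate OK" >> general_noerr ;;
        51|60) echo "https://${ip}:443 untrusted certificate" >> general_err ;;
        *)     echo "https://${ip}:443 other failure (curl exit $rc)" >> general_err ;;
    esac
done < general_certs.txt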

Related

Formatting nmap results to get http servers

I am trying to take an nmap scan result, determine the http ports (http, https, http-alt ...) and capture their IPs and ports in order to automatically perform web app scans.
I have my nmap results in grepable format. I am using grep to delete any lines that do not contain the string "http", but I am now unsure how I can proceed.
Host: 127.0.0.1 (localhost) Ports: 3390/open/tcp//dsc///, 5901/open/tcp//vnc-1///, 8000/open/tcp//http-alt/// Ignored State: closed (65532)
This is my current result. From this I can get the IP of hosts with an http server open by using the cut command and taking the second field, which solves the first part of my problem.
But now I am looking for a way to only get (from the above example)
8000/open/tcp//http-alt///
(NB: I'm not looking to get it just for this specific case; using
cut -f 3 -d "," will work here, but if the http server were in the first field it would not work.)
after which I can use the cut command to get the port and then add it to a file with the IP, resulting in
127.0.0.1:8000
Could anyone advise a good way to do this?
Code of my simple bash script for doing a basic scan of all ports, then doing a more advanced one based on the open ports found. The next step and objective is to automatically scan identified web apps with a directory scan and a nikto scan.
#!/bin/bash
echo "Welcome to the quick lil tool. This runs a basic nmap scan, collects open ports and does a more advanced scan. reducing the time needed"
echo -e "\nUsage: ./getPorts.sh [Hosts]\n"
if [ $# -eq 0 ]
then
    echo "No argument specified. Usage: ./getPorts.sh [Host or host file]"
    exit 1
fi
if [[ "$EUID" -ne 0 ]]; then
    echo "Not running as root"
    exit 1
fi
nmap -iL $1 -p- -oA results
#Replace input file with gnmap scan, It will generate a list of all open ports
cat results.gnmap |awk -F'[/ ]' '{h=$2; for(i=1;i<=NF;i++){if($i=="open"){print h,":",$(i-1)}}}'| awk -F ':' '{print $2}' | sed -z 's/\n/,/g;s/,$/\n/' >> ports.list
#more advanced nmap scan
ports=$(cat ports.list)
echo $ports
nmap -p $ports -sC -sV -iL $1
EDIT: Found a way. Not sure why I was so focused on using the gnmap format for this; if I use the regular .nmap format, I can simply grep the lines containing http and use cut to get the first field.
(cat results.nmap | grep 'http' | cut -d "/" -f 1)
EDIT2: I realised the method mentioned in my first edit is not optimal when processing multiple results, as I then have a list of IPs from the .nmap and a list of ports from the .gnmap. I have found a good solution to my problem using a single file, see below:
#!/bin/bash
httpalt=$(cat test.gnmap | awk '/\/http-alt\// {for(i=5;i<=NF;i++)if($i~"/open/.+/http-alt/"){sub("/.*","",$i); print "http://"$2":"$i}}')
if [ -z "$httpalt" ]
then
    echo "No http-alt servers found"
else
    echo "http-alt servers found"
    echo $httpalt
    printf "\n"
fi
http=$(cat test.gnmap | awk '/\/http\// {for(i=5;i<=NF;i++)if($i~"/open/.+/http/"){sub("/.*","",$i);print "http://"$2":"$i}}')
if [ -z "$http" ]
then
    echo "No http servers found"
else
    echo "http servers found"
    echo $http
    printf "\n"
fi
https=$(cat test.gnmap | awk '/\/https\// {for(i=5;i<=NF;i++)if($i~"/open/.+/https/"){sub("/.*","",$i); print "https://"$2":"$i}}')
if [ -z "$https" ]
then
    echo "No https servers found"
else
    echo "https servers found"
    echo $https
    printf "\n"
fi
echo ----
printf "All ip:webapps \n"
webserver=$(echo "$httpalt $http $https" | sed -e 's/\s\+/,/g'|sed -z 's/\n/,/g;s/,$/\n/')
if [[ ${webserver::1} == "," ]]
then
    webserver="${webserver#?}"
else
    echo 0
fi
for webservers in $webserver; do
    echo $webservers
done
echo $https
https=$(echo "$https" | sed -e 's/\s\+/,/g'|sed -z 's/\n/,/g;s/,$/\n/')
echo $https
mkdir https
mkdir ./https/nikto/
mkdir ./https/dirb/
for onehttps in ${https//,/ }
do
    echo "Performing Dirb and nikto for https"
    dirb $onehttps > ./https/dirb/https_dirb
    nikto -url $onehttps > ./https/nikto/https_nitko
done
mkdir http
mkdir ./http/nikto
mkdir ./http/dirb/
for onehttp in ${http//,/ }
do
    echo $onehttp
    echo "Performing Dirb for http"
    dirb $onehttp >> ./http/dirb/http_dirb
    nikto -url $onehttp >> ./http/nikto/http_nikto
done
mkdir httpalt
mkdir httpalt/nikto/
mkdir httpalt/dirb/
for onehttpalt in ${httpalt//,/ }
do
    echo "Performing Dirb for http-alt"
    dirb $onehttpalt >> ./httpalt/dirb/httpalt_dirb
    nikto -url $onehttpalt >> ./httpalt/nikto/httpalt_nikto
done
This will check for any http, https, and http-alt servers, store them in variables, check for duplicates, and remove any leading comma. It is far from perfect, but it is a good enough solution for now!
Just want to share a brilliant open source tool on GitHub that can be used to easily parse NMAP XML files.
https://github.com/honze-net/nmap-query-xml
I use some of the python code to extract http/https URLs from the nmap xml file.
# pip3 install python-libnmap
import sys
from libnmap.parser import NmapParser
def extract_http_urls_from_nmap_xml(file):
    try:
        report = NmapParser.parse_fromfile(file)
        urls = []
    except IOError:
        print("Error: Nmap XML file %s not found. Quitting!" % file)
        sys.exit(1)
    for host in report.hosts:
        for service in host.services:
            filtered_services = "http,http-alt,http-mgmt,http-proxy,http-rpc-epmap,https,https-alt,https-wmap,http-wmap,httpx"
            if (service.state == "open") and (service.service in filtered_services.split(",")):
                line = "{service}{s}://{hostname}:{port}"
                line = line.replace("{xmlfile}", file)
                line = line.replace("{hostname}", host.address if not host.hostnames else host.hostnames[0]) # TODO: Fix naive code.
                line = line.replace("{hostnames}", host.address if not host.hostnames else ", ".join(list(set(host.hostnames)))) # TODO: Fix naive code.
                line = line.replace("{ip}", host.address)
                line = line.replace("{service}", service.service)
                line = line.replace("{s}", "s" if service.tunnel == "ssl" else "")
                line = line.replace("{protocol}", service.protocol)
                line = line.replace("{port}", str(service.port))
                line = line.replace("{state}", str(service.state))
                line = line.replace("-alt", "")
                line = line.replace("-mgmt", "")
                line = line.replace("-proxy", "")
                line = line.replace("-rpc-epmap", "")
                line = line.replace("-wmap", "")
                line = line.replace("httpx", "http")
                urls.append(line)
    return list(dict.fromkeys(urls))
printf "Host: 127.0.0.1 (localhost) Ports: 3390/open/tcp//dsc///, 5901/open/tcp//vnc-1///, 8000/open/tcp//http-alt/// Ignored State: closed (65532)" > file
cat file | tr -s ' ' | tr ',' '\n' | sed s'#^ ##g' > f2
string=$(sed -n '3p' f2 | cut -d' ' -f1)
It is only horizontal search which is difficult; vertical search is easy. You can get any string out of any text you like, as long as you can get the string onto its own line and then determine which line you need to print.
You only need complex regular expressions if you are relying exclusively on horizontal search. In almost all cases, as long as your substring is on its own line, cut can take you the rest of the way.
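To carry the example above through to the ip:port form asked for, the host can be read off the first line of f2 and the port peeled off the front of $string. A small sketch continuing from the commands above (the field positions are assumptions tied to this particular sample line):
ip=$(sed -n '1p' f2 | cut -d' ' -f2)   # 127.0.0.1 - second field of the "Host:" line
port=${string%%/*}                     # 8000 - everything before the first slash
echo "${ip}:${port}"                   # prints 127.0.0.1:8000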

Issue with ping sweep that reads in an IP address

The main issue I am having with this is pinging the IP address that is read in. It always says that the address is down.
Ping_Sweep()
{
    echo -e '\n'
    echo '----- Ping Sweep -----'
    echo -e '\n'
    command date >> pingresults.tx
    echo "Enter in the first three number sequences of an IP address (ex. ###.###.###): "
    read -r ip_address
    for x in $ip_address
    do
        echo "IP address being pinged: $ip_address"
        if ping –c 1 "$x" &> /dev/null
        then
            echo "IP: $x is up."
        else
            echo "Ping failed. $x is down."
        fi
    done
    Main_Menu
}
ping –c should be ping -c. Your original command has an en dash instead of a regular hyphen; this is usually caused by copying code rendered by a bad blog framework.
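A quick way to spot such characters before they cause trouble is to search the script for anything outside plain ASCII; this assumes a grep built with -P (PCRE) support, and ping_sweep.sh is just a placeholder for whatever file holds the function:
# Print every line (with its number) containing a non-ASCII character such as an en dash or curly quote.
grep -nP '[^\x00-\x7F]' ping_sweep.sh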

My Bash script won't cache an item

I'm trying to use a Bash script I found and have slightly modified to automatically update the DNS settings on GoDaddy.
It sort of works; however, I'm now getting the echo "no ip address, program exit", which I think is because the cache file on line 28 is not being created. Would anybody be able to tell me what's going on? I'm using Raspbian. Many thanks in advance!
#!/bin/bash
# This script is used to check and update your GoDaddy DNS server to the IP address of your current internet connection.
#
# Original PowerShell script by mfox: https://github.com/markafox/GoDaddy_Powershell_DDNS
# Ported to bash by sanctus
# Added AAAA record by Binny Chan
#
# Improved to take command line arguments and output information for log files by pollito
#
# First go to GoDaddy developer site to create a developer account and get your key and secret
#
# https://developer.godaddy.com/getstarted
#
# Be aware that there are 2 types of key and secret - one for the test server and one for the production server
# Get a key and secret for the production server
# Check an A record and a domain are both specified on the command line.
if [ $# -ne 2 ]; then
    echo "usage: $0 type a_record domain_name"
    echo "usage: $0 AAAA www my_domain"
    exit 1
fi
# Set A record and domain to values specified by user
name=$1 # name of A record to update
domain=$2 # name of domain to update
cache=/tmp/.mcsync.$name.$domain.addr
[ -e $cache ] && old=`cat $cache`
# Modify the next two lines with your key and secret
key="key" # key for godaddy developer API
secret="secret" # secret for godaddy developer API
headers="Authorization: sso-key $key:$secret"
#echo $headers
# Get public ip address there are several websites that can do this.
ret=$(curl -s GET "http://ipinfo.io/json")
currentIp=$(echo $ret | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b")
# Check empty ip address or not
if [ -z "$currentIp" ]; then
    echo $name"."$domain": $(date): no ip address, program exit"
    exit 1
fi
# Check cache ip, if matched, program exit
if [ "$old" = "$currentIp" ]; then
    echo $name"."$domain": $(date): address unchanged, program exit $currentIp"
    echo "IPs equal. Exiting..."
    exit
else
    echo $name"."$domain": $(date): currentIp:" $currentIp
fi
#Get dns ip
result=$(curl -s -k -X GET -H "$headers" \
"https://api.godaddy.com/v1/domains/$domain/records/A/$name")
dnsIp=$(echo $result | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b")
echo $name"."$domain": $(date): dnsIp:" $dnsIp
# ip not match
if [ "$dnsIp" != $currentIp ];
then
    echo $name"."$domain": $(date): IPs not equal. Updating."
    request='{"data":"'$currentIp'","ttl":3600}'
    #echo $request
    nresult=$(curl -i -k -s -X PUT \
        -H "$headers" \
        -H "Content-Type: application/json" \
        -d $request "https://api.godaddy.com/v1/domains/$domain/records/A/$name")
    #echo $nresult
    echo $name"."$domain": $(date): IPs not equal. Updated."
fi
No, the message is on line 44; it is written because $currentIp is empty, and that value is retrieved on line 39 by reading the response of the curl request to "http://ipinfo.io/json".
Otherwise, just remove the cache file:
rm /tmp/.mcsync.$name.$domain.addr
where $name and $domain are replaced with the first and second arguments to the script.
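To see why $currentIp ends up empty, the lookup step can be run on its own outside the script. A small debugging sketch (note that the stray GET in the original curl line is treated by curl as an extra URL, so it is dropped here; the echo labels are only for illustration):
ret=$(curl -s "http://ipinfo.io/json")
currentIp=$(echo "$ret" | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" | head -n1)
echo "raw response: $ret"
echo "extracted ip: ${currentIp:-<empty>}"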

fping hosts in file and return down ips

I want to use fping to ping multiple IPs contained in a file and output the failed IPs into a file, i.e.:
hosts.txt
8.8.8.8
8.8.4.4
1.1.1.1
ping.sh
#!/bin/bash
HOSTS="/tmp/hosts.txt"
fping -q -c 2 < $HOSTS
if ip down
echo ip > /tmp/down.log
fi
So I would like to end up with 1.1.1.1 in the down.log file
It seems that parsing the data from fping is somewhat difficult: it readily reports hosts that are alive, but not those that are dead. As a way around the issue, and to allow multiple hosts to be processed simultaneously with -f, all the hosts that are alive are placed in a variable called alive, and then the hosts in /tmp/hosts.txt are looped through and grepped against alive to determine whether each host is alive or dead. A grep return code of 1 means the host was not found in alive, and hence it is appended to down.log.
alive=$(fping -c 1 -f /tmp/hosts.txt | awk -F: '{ print $1 }')
while read -r line
do
    grep -q "$line" <<< "$alive"
    if [[ "$?" == "1" ]]
    then
        echo $line >> down.log
    fi
done < /tmp/hosts.txt
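Depending on the fping version installed, the diff against the alive list may not be needed at all: fping's -u (show unreachable) option prints exactly the targets that did not answer. A one-line sketch, assuming your build supports -u and -f:
# Write only the unreachable targets straight to the log.
fping -q -u -f /tmp/hosts.txt > /tmp/down.log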
Here's one way to get the result you want. Note, however, that I didn't use fping anywhere in my script. If the usage of fping is crucial to you, then I might have missed the point entirely.
#!/bin/bash
HOSTS="/tmp/hosts.txt"
declare -i DELAY=$1 # Amount of time in seconds to wait for a packet
declare -i REPEAT=$2 # Amount of times to retry pinging upon failure
# Read HOSTS line by line
while read -r line; do
    c=0
    while [[ $c -lt $REPEAT ]]; do
        # If pinging an address does not return the string "0 received", we assume the ping has succeeded
        if [[ -z $(ping -q -c $REPEAT -W $DELAY $line | grep "0 received") ]]; then
            echo "Attempt[$(( c + 1 ))] $line : Success"
            break
        fi
        echo "Attempt[$(( c + 1 ))] $line : Failed"
        (( c++ ))
    done
    # If an address failed as many times as the REPEAT count, we assume the address is down
    if [[ $c -eq $REPEAT ]]; then
        echo "$line : Failed" >> /tmp/down.log # Log the failed address
    fi
done < $HOSTS
Usage: ./script [delay] [repeatCount] -- 'delay' is the number of seconds we wait for a response to a ping, and 'repeatCount' is how many times we retry pinging upon failure before deciding the address is down.
Here we read /tmp/hosts.txt line by line and evaluate each address using ping. If pinging an address succeeds, we move on to the next one. If an address fails, we try again as many times as the user has specified. If the address fails all of the pings, we log it in /tmp/down.log.
The conditions for deciding whether a ping failed or succeeded may not be accurate for your use cases, so you may have to adjust them. Still, I hope this gets the general idea across.

Bash script to check for new e-mails doesn't recognize new messages

I wrote a little bash script that I want to start automatically when I log in to my desktop.
The script shall always run in the background and periodically check for new incoming mail messages. When a new message arrives, the script shall pop up a notification via notify-send and display its content.
However, if I send myself an email (from another address) to check whether it's working, it seems that the message has already been consumed, even though in Gmail's web interface (which I keep closed) the message is marked as unread. Since the new message is not counted as new, the script doesn't fetch it.
I also switched off my Android phone, because I think it could interfere, and I'm sure I have no other mail clients running.
The output of the script is the following; note that between these two lines I sent myself a message:
254 messages, 0 new
255 messages, 0 new
The code follows:
#!/bin/bash
SERVER="imap.gmail.com"
while :
do
    echo "1 login myusername mypassword" > /tmp/checkmail
    echo "2 select inbox" >> /tmp/checkmail
    echo "3 logout" >> /tmp/checkmail
    response=$(openssl s_client -crlf -connect $SERVER:993 -quiet 2> /dev/null < /tmp/checkmail)
    rm /tmp/checkmail
    news=$(echo "$response" | grep RECENT | awk '{print $2}')
    last=$(echo "$response" | grep EXISTS | awk '{print $2}')
    echo "$last messages, $news new"
    if [ "$news" != "0" ]
    then
        for (( i=0; i<$news; i++ ))
        do
            echo "fetching $i° message"
            echo "1 login myusername mypassword" > /tmp/getmail
            echo "2 select inbox" >> /tmp/getmail
            echo "3 fetch $((last-i)) (body[1])" >> /tmp/getmail
            echo "4 logout" >> /tmp/getmail
            response=$(openssl s_client -crlf -connect $SERVER:993 -quiet 2> /dev/null < /tmp/getmail)
            rm /tmp/getmail
            content=$(echo "$response" | awk '/FETCH/{flag=1;next}/3 OK Success/{flag=0}flag')
            notify-send -t 0 "New message" "$content"
        done
    fi
    sleep 60
done
Thank you in advance
I experienced the very same problem with the Gmail IMAP server. I have therefore resorted to modifying your script, adding one extra IMAP command:
echo "1 login $myusername $mypassword" > /tmp/checkmail
echo "2 select inbox" >> /tmp/checkmail
echo "3 STATUS inbox (UNSEEN)" >> /tmp/checkmail
echo "4 logout" >> /tmp/checkmail
and then replacing your definition of $news with the one below
news=$(echo "$response" | grep UNSEEN | awk '{print $5}' | sed 's/)//' | tr -d '\r')
Note that the removal of the '\r' was required for the rest of the script to work.
I am nonetheless still struggling to make the next steps of your original script work on my setup.
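For those next steps, one option is to ask the server directly which messages are unseen (with SEARCH UNSEEN) instead of deriving sequence numbers from EXISTS and RECENT. A rough sketch built on the same openssl transport; treat it as a starting point rather than a tested solution:
#!/bin/bash
# List unseen messages with SEARCH, then fetch each one by sequence number.
SERVER="imap.gmail.com"
{
    echo "1 login myusername mypassword"
    echo "2 select inbox"
    echo "3 search unseen"
    echo "4 logout"
} > /tmp/checkmail
response=$(openssl s_client -crlf -connect $SERVER:993 -quiet 2> /dev/null < /tmp/checkmail)
rm /tmp/checkmail
# The untagged reply looks like "* SEARCH 12 15 20" and lists the unseen message numbers.
unseen=$(echo "$response" | grep '^\* SEARCH' | sed 's/^\* SEARCH//' | tr -d '\r')
for num in $unseen; do
    {
        echo "1 login myusername mypassword"
        echo "2 select inbox"
        echo "3 fetch $num (body[1])"
        echo "4 logout"
    } > /tmp/getmail
    msg=$(openssl s_client -crlf -connect $SERVER:993 -quiet 2> /dev/null < /tmp/getmail)
    rm /tmp/getmail
    content=$(echo "$msg" | awk '/FETCH/{flag=1;next}/3 OK/{flag=0}flag')
    notify-send -t 0 "New message" "$content"
done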
