I'm creating a small script that takes the output from tshark and prints it to the terminal. I'm trying to filter only the requests made through the browser address bar.
So when www.facebook.com is loaded, the terminal should only print facebook.com, rather than fbstatic-a.akamaihd.net etc. (the other DNS requests made by the requested website).
The program loops forever, repeatedly capturing DNS requests and writing them to the terminal.
Any ideas?
Would the following work for you?
$ tshark -r dns.pcap -T fields -e dns.qry.name -Y "dns.qry.type == 0x0001 and udp.dstport == 53"
www.yahoo.com
The display filter (the part after -Y) limits the query type to A records in the request (you want to avoid CNAME etc.).
dns.qry.type == 0x0001 matches A records, and udp.dstport == 53 matches DNS requests.
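If you want to watch live traffic instead of reading from a capture file, roughly the same thing should work against an interface (a sketch, assuming your interface is eth0; -l just flushes each line as it is printed):

$ tshark -i eth0 -l -T fields -e dns.qry.name -Y "dns.qry.type == 0x0001 and udp.dstport == 53"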
Hope it helps.
I am selecting error log details from a Docker container and deciding within a shell script how and when to alert about the issue via Discord and/or email.
Because I am receiving the email alerts too often with the same information in the email body, I want to implement the following two adjustments.
Fatal error log selection:
FATS="$(docker logs --since 24h $NODENAME 2>&1 | grep 'FATAL' | grep -v 'INFO')"
The email is sent in case FATS has some content:
swaks --from "$MAILFROM" --to "$MAILTO" --server "$MAILSERVER" --auth LOGIN --auth-user "$MAILUSER" --auth-password "$MAILPASS" --h-Subject "FATAL ERRORS FOUND" --body "$FATS" --silent "1"
How can I send the email only in the case that FATS has different content than in the previous run of the script? I have thought about a hash of its content, which is stored in and read from a text file. If the hash is the same as in the previous script run, the email will be skipped.
Another option could be a local, temporary variable in the user's global bash profile, so that no file has to be stored on the file system (to avoid reads/writes).
How can I do that?
When you are writing a script for your monitoring, add functions for additional functionality, like:
logging all the alerts that have been sent
making sure you don't send more than one alert each hour
considering sending warnings only during working hours
escalating a message when it fails N times without intermediate success
possibly sending an alert to different receivers (different email addresses, or also to SMS or Teams)
making an interface for an operator so they can look back at when something went wrong the first time.
When you have control over which messages you send, it is easy to filter duplicate messages (after changing --since).
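For the duplicate check you asked about specifically, a minimal sketch (assuming the previous hash is kept in a file; /tmp/last_fats.md5 is just a hypothetical location):

#!/bin/bash
HASH_FILE="/tmp/last_fats.md5"   # hypothetical file holding the previous run's hash

FATS="$(docker logs --since 24h "$NODENAME" 2>&1 | grep 'FATAL' | grep -v 'INFO')"

# only act if FATS is non-empty
if [ -n "$FATS" ]; then
    NEW_HASH="$(printf '%s' "$FATS" | md5sum | cut -d' ' -f1)"
    OLD_HASH="$(cat "$HASH_FILE" 2>/dev/null)"
    if [ "$NEW_HASH" != "$OLD_HASH" ]; then
        swaks --from "$MAILFROM" --to "$MAILTO" --server "$MAILSERVER" \
              --auth LOGIN --auth-user "$MAILUSER" --auth-password "$MAILPASS" \
              --h-Subject "FATAL ERRORS FOUND" --body "$FATS" --silent "1"
        printf '%s\n' "$NEW_HASH" > "$HASH_FILE"   # remember what we already alerted on
    fi
fi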
I've chosen the proposal of #ralf-dreager and reduced the selection to 1d and 1h. Consequently, I've changed my monitoring script to go through either the results of 1d or just 1h, without the need to make the selection again and again each time. Huge performance improvement, and no need to store anything else in a variable or on the file system.
FATS="$(docker logs --since 1h $NODENAME 2>&1 | grep 'FATAL' | grep -v 'INFO')"
I am using UFTP to transfer files between the computers on a subnetwork.
But when I used -H to send only to particular computers instead of sending to all computers, it did not work as expected.
Let me explain in detail:
I have two Windows machines on the same network, with the IPs 172.21.170.198 and 172.21.181.216 respectively.
From one of the systems, I used the command below to send the file:
uftp.exe -R 100000 -H 172.21.170.198,172.21.181.216 e:\setup.exe
But neither machine receives the file.
However, if I use this command, both machines receive the file:
uftp.exe -R 100000 E:\setup.exe
I want to know whether I made any mistake.
Please correct me if I am wrong.
If IPv6 isn't enabled, it would look like this, converting the IPv4 addresses to hex (with a converter like http://www.kloth.net/services/iplocate.php):
uftp.exe -R 100000 -H 0xAC15AAC6,0xAC15B5D8 e:\setup.exe
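If you'd rather not use a web converter, a quick shell one-liner does the same conversion (just an illustration, run wherever you have a POSIX shell handy):

printf '0x%02X%02X%02X%02X\n' $(echo 172.21.170.198 | tr '.' ' ')
# prints 0xAC15AAC6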
But if you have an IPv6 address on the client, the client ID sort of comes from the end of it, backwards. For example, if the address were "fe80::e5ca:e3ca:fea3:153f%5", the command would look like:
uftp.exe -R 100000 -H 0x3f15a3fe e:\setup.exe
(coming from "fe a3 15 3f")
The goal was to frequently change the default outgoing source IP on a machine with multiple interfaces and live IPs.
I used ip route replace default as per its documentation and let a script run in a loop for some interval. It changes the source IP fine for a while, but then all internet access to the machine is lost. It has to be remotely rebooted from a web interface to get anything working again.
Is there anything that could possibly prevent this from working stably? I have tried this on more than one server.
The following is a minimal example:
# extract all currently active source ips except loopback
IPs="$(ifconfig | grep 'inet addr:' | grep -v '127.0.0.1' | cut -d: -f2 | awk '{ print $1 }')"
read -a ip_arr <<< "$IPs"

# extract all currently active ethernet interface names
Int="$(ifconfig | grep 'eth' | grep -v 'lo' | awk '{ print $1 }')"
read -a eth_arr <<< "$Int"

ip_len=${#ip_arr[@]}
eth_len=${#eth_arr[@]}

i=0
e=0
while true; do
    # e.g. ip route replace 0.0.0.0 dev eth0:1 src 192.168.1.18
    route_cmd="ip route replace 0.0.0.0 dev ${eth_arr[e]} src ${ip_arr[i]}"
    echo "$route_cmd"
    eval "$route_cmd"
    sleep 300
    ((i++))
    ((e++))
    if [ $i -eq $ip_len ]; then
        i=0
        e=0
        echo "all ips exhausted - starting from first again"
        # break
    fi
done
I wanted to comment, but as I don't have enough points, it won't let me.
Consider:
Does varying the delay time before it is run again change the number of iterations before it fails?
Export the ifconfig output and the routes every time you change them, to see if something is meaningfully different over time. Maybe run some basic tests too (ping, nslookup, etc.). Basically, find out what exactly is going wrong. Also export the commands you send to a logfile (a text file per change?) to see whether they look different after x iterations (see the sketch after these remarks).
What connectivity is lost? Incoming? Outgoing? Specific applications?
You say you use/do this on other servers without problems?
Are the IPs static (/etc/network/interfaces), bootp/DHCP, or semi-static (a bootp/DHCP server serving them based on MAC address)? And if served by bootp/DHCP, what is the lease duration?
On the last remark:
bootp/DHCP will give out IPs for x duration, say 60 minutes. After half that time the client will "check" with the bootp/DHCP server whether it can keep the IP and extend the lease to 60 minutes again; this can mean a small reconfiguration of the interface (maybe even at the same time as your script runs?).
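As a concrete form of the logging suggestion above, a minimal sketch you could drop into the loop right after the eval (the directory /var/log/ipswitch is just a hypothetical location):

LOGDIR=/var/log/ipswitch   # hypothetical location, adjust to taste
mkdir -p "$LOGDIR"
STAMP=$(date +%Y%m%d-%H%M%S)
{
    echo "== command: $route_cmd"
    ip addr show
    ip route show
    ping -c 1 -W 2 8.8.8.8
} > "$LOGDIR/state-$STAMP.log" 2>&1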
hth
I am switching DNS servers and I'd like to write a short ruby script that runs every 10s and triggers a local Mac OS X system notification as soon as my website resolves to a different IP.
Using terminal-notifier, sending a system notification is as easy as this:
terminal-notifier -message "DNS Changed"
I'd like to trigger it as soon as the output of
ping -i 10 mywebsite.com
... changes or simply does not contain a defined IP string anymore.
> 64 bytes from 12.34.56.789: icmp_seq=33 ttl=41 time=241.564 ms
in this case "12.34.56.789".
How do I monitor the change of the output string of ping -i 10 mywebsite.com and call the notification function once a change has been detected?
I thought this might be a nice practice while waiting for the DNS to be updated.
Try this:
IP = "12.34.56.789"
p = IO.popen("ping -i 10 mywebsite.com")
p.each_line do |l|
if(! l =~ /from #{IP}/) #The IP has changed
system("terminal-notifier -message \"DNS Changed\"")
end
end
I have a text file with around 3 million URLs of sites I want to block.
I'm trying to ping them one by one (yes, I know it is going to take some time).
I have a script (yes, I am a bit slow at bash) which reads the lines one at a time from the text file.
Obviously I cannot print the text file here. The text file was created some time ago with Python (appending with >>).
The problem is that ping returns "unknown host" for every entry. If I make a smaller file by hand using the same entries, the script works. I thought it might be a whitespace or end-of-line issue, so I tried addressing that in the script. What could the issue possibly be?
#!/bin/bash
while read line
do
    li=$(echo $line | tr -d '\n')
    li2=$(echo $li | tr -d ' ')
    if [ ${#line} -lt 2 ]
    then
        continue
    fi
    ping -c 2 -- $li2 >> /dev/null
    if [ $? -gt 0 ]
    then
        echo 'bad'
    else
        echo 'good'
    fi
done < 'temp_file.txt'
Does the file contain URLs or hostnames?
If it contains URLs, you must extract the hostname from the URLs before pinging:
hostname=$(echo "$li2"|cut -d/ -f3);
ping -c 2 -- "$hostname"
Ping is used to ping hosts. If you have URLs of websites, it will not work. Check that your file contains hosts, for example www.google.com or an IP address, and not full website URLs. If you want to check actual URLs, use a tool like wget, and another tool like grep/awk to look for errors like 404 or others. Last but not least, people who are security conscious will sometimes block pinging from the outside, so take note.
Check if the file contains Windows-style \r\n line endings: head file | od -c
If so, to fix it: dos2unix filename filename
I wouldn't use ping for this. It can easily be blocked, and it's not the best way to check either for IP addresses or for whether a server presents web pages.
If you just want to find the corresponding IP, use host:
$ host www.google.com
www.google.com is an alias for www.l.google.com.
www.l.google.com has address 209.85.149.106
www.l.google.com has address 209.85.149.147
www.l.google.com has address 209.85.149.99
www.l.google.com has address 209.85.149.103
www.l.google.com has address 209.85.149.104
www.l.google.com has address 209.85.149.105
As you can see, you get all the IPs registered for a host. (Note that this requires you to parse the hostname out of your URLs!)
If you want to see if a URL points at a web server, use wget:
wget --spider $url
The --spider flag makes wget not save the page, just check that it exists. You can look at the return code, or add the -S flag (which prints the HTTP headers returned).
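Plugged into a loop like the one in the question, that could look roughly like this (a sketch, assuming the file already has one URL per line with Unix line endings):

while read -r url; do
    if wget --spider -q "$url"; then
        echo "good $url"
    else
        echo "bad  $url"
    fi
done < temp_file.txt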