Using socat to dump traffic to pcap - proxy

Hi everyone, this is my first question on Stack Overflow!
I'm using this software (it's a NIDS); one of its features uses socat to create a proxy that saves traffic to a pcap file.
This is the command it uses: /usr/bin/socat -d OPENSSL-LISTEN:50010,cipher=HIGH,method=TLS1.2,reuseaddr,pf=ip4,fork,cert=/usr/local/owlh/src/owlhnode/conf/certs/ca.pem,verify=0 SYSTEM:"/usr/sbin/tcpdump -n -r - -s 0 -G 50 -W 100 -w /usr/local/owlh/pcaps/remote-test%d%m%Y%H%M%S.pcap not port 22"
This is what happens when I try to make a request to Google through the proxy using curl:
╭─myasnik@tanuki ~/…/ossihr-poc/docker ‹master*›
╰─$ export https_proxy=https://0.0.0.0:50010/
╭─myasnik@tanuki ~/…/ossihr-poc/docker ‹master*›
╰─$ export http_proxy=https://0.0.0.0:50010/
╭─myasnik@tanuki ~/…/ossihr-poc/docker ‹master*›
╰─$ curl --proxy-insecure www.google.it
curl: (52) Empty reply from server
root@owlh-node:/# /usr/bin/socat -d OPENSSL-LISTEN:50010,cipher=HIGH,method=TLS1.2,reuseaddr,pf=ip4,fork,cert=/usr/local/owlh/src/owlhnode/conf/certs/ca.pem,verify=0 SYSTEM:"/usr/sbin/tcpdump -n -r - -s 0 -G 50 -W 100 -w /usr/local/owlh/pcaps/remote-test%d%m%Y%H%M%S.pcap not port 22"
tcpdump: unknown file format
2020/08/18 12:00:08 socat[1590] W system("/usr/sbin/tcpdump -n -r - -s 0 -G 50 -W 100 -w /usr/local/owlh/pcaps/remote-test%d%m%Y%H%M%S.pcap not port 22") returned with status 256
2020/08/18 12:00:08 socat[1590] W system(): No such file or directory
2020/08/18 12:00:08 socat[1589] E waitpid(): child 1590 exited with status 1
Thanks a lot in advance for your help!

Here is the answer to the question; I think I misunderstood how it was supposed to be done: https://github.com/OwlH-net/OwlH-Node/issues/47
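In other words, the listener is not an HTTP proxy: it expects a raw pcap stream on the TLS connection, which tcpdump -r - then reads. As a hedged sketch of the sending side (collector-host is a placeholder for the node running the listener above):
# capture locally, write pcap to stdout (-U flushes packet by packet), and stream it over TLS
sudo tcpdump -n -s 0 -U -w - not port 22 | socat - OPENSSL:collector-host:50010,verify=0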

Related

I am using hping3 and want to use source IPs from a specific /24 subnet, for example 172.0.0.1-172.0.0.250. Can I specify this in hping3?

I am using hping in my lab to generate UDP traffic:
sudo hping3 64.2.0.50 --spoof 172.0.0.3 -p 323 -2 -c 300 -i u4
Instead of using 172.0.0.3, I want to send around 250 packets from sources 172.0.0.1 - 172.0.0.250.
Can I achieve this using hping?
I was hoping one of these might work:
sudo hping3 64.2.0.50 --spoof 172.0.0.x -p 323 -2 -c 300 -i u4
sudo hping3 64.2.0.50 --rand-source 172.0.0.[0-200] -p 323 -2 -c 300 -i u40
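For what it's worth, --spoof takes a single literal address and --rand-source takes no range argument (it randomizes over the whole address space), so neither attempt above will work as written. A hedged workaround is a shell loop that invokes hping3 once per source address, with the flags copied from the question and the count reduced to one packet per address:
# iterate over 172.0.0.1 through 172.0.0.250, one spoofed UDP packet each
for i in $(seq 1 250); do
  sudo hping3 64.2.0.50 --spoof "172.0.0.$i" -p 323 -2 -c 1 -i u4
done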

Bash broken pipe with tcpdump

I use the following command to send pinging IPs to a script:
sudo tcpdump -ne -l -i eth0 icmp and icmp[icmptype]=icmp-echo \
| cut -d " " -f 10 | xargs -L2 ./pong.sh
Unfortunately this gives me:
tcpdump: Unable to write output: Broken pipe
To dissect my commands:
Output the pings from the traffic:
sudo tcpdump -ne -l -i eth0 icmp and icmp[icmptype]=icmp-echo
Output:
11:55:58.812177 IP xxxxxxx > 127.0.0.1: ICMP echo request, id 50776, seq 761, length 64
This will get the IPs from the tcpdump output:
cut -d " " -f 10 # output: 127.0.0.1
Feed the output to the script:
xargs -L2 ./pong.sh
This will mimic the following example command:
./pong.sh 127.0.0.1
The strange thing is that the commands work separately (on their own)...
I tried debugging it, but I have no experience with debugging pipes. I checked the commands and they seem fine.
It would seem that cut's stdio buffering is interfering here; replace | xargs ... with | cat in your command line to find out.
FWIW, the command line below works for me (pipe straight to xargs, then use the shell itself to get the nth argument). Note, by the way, the extra tcpdump arguments: -c 10 (just to limit the capture to 10 packets, hence the 10/2 = 5 output lines) and -Q in (inbound packets only):
$ sudo tcpdump -c 10 -Q in -ne -l -i eth0 icmp and icmp[icmptype]=icmp-echo 2>/dev/null | \
xargs -L2 sh -c 'echo -n "$9: "; ping -nqc1 $9 | grep rtt'
192.168.100.132: rtt min/avg/max/mdev = 3.743/3.743/3.743/0.000 ms
192.168.100.132: rtt min/avg/max/mdev = 5.863/5.863/5.863/0.000 ms
192.168.100.132: rtt min/avg/max/mdev = 6.167/6.167/6.167/0.000 ms
192.168.100.132: rtt min/avg/max/mdev = 4.256/4.256/4.256/0.000 ms
192.168.100.132: rtt min/avg/max/mdev = 1.545/1.545/1.545/0.000 ms
$ _
For those coming across this (like me), tcpdump buffering is the issue.
From the man page:
-l Make stdout line buffered. Useful if you want to see the data
while capturing it. For example:
# tcpdump -l | tee dat
or
# tcpdump -l > dat & tail -f dat
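Since the pipeline in the question already passes -l to tcpdump, the remaining buffering culprit is cut, as noted above. A sketch that keeps the original pipeline intact, assuming GNU coreutils' stdbuf is available to force cut into line-buffered output:
sudo tcpdump -ne -l -i eth0 icmp and icmp[icmptype]=icmp-echo \
| stdbuf -oL cut -d " " -f 10 | xargs -L2 ./pong.sh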

How to capture trap messages in net-snmp

I work with net-snmp and I tried a few commands, like:
snmptrap -v 1 -c public host TRAP-TEST-MIB::demotraps localhost 6 17 '' \
SNMPv2-MIB::sysLocation.0 s "Just here"
snmptrap -v 2c -c public localhost '' NOTIFICATION-TEST-MIB::demo-notif \
SNMPv2-MIB::sysLocation.0 s "just here"
snmptrap -v 1 -c public host NET-SNMP-EXAMPLES-MIB::netSnmpExampleHeartbeatNotification "" 6 17 "" \
netSnmpExampleHeartbeatRate i 123456
but they just give me a new line, without an error or anything.
Can someone give me advice?
Net-SNMP provides snmptrapd for this purpose.
It is an application which can listen on a port (default 162) on a host for traps and will log those that are received.
Here is an example:
snmptrapd -f -m +ALL -Lo -c /tmp/snmptrapd.conf 9876
where /tmp/snmptrapd.conf contains only one line, which for simplicity disables community/password checking:
disableAuthorization yes
Use man snmptrapd to see what the flags/arguments mean.
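With that listener running, a trap sent to the same port from another shell should appear in its log output. A minimal sketch (the port matches the snmptrapd invocation above):
# send a standard coldStart notification to the local snmptrapd on port 9876
snmptrap -v 2c -c public localhost:9876 '' SNMPv2-MIB::coldStart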

curl from shell: discard output file if transfer timed-out

Using curl from the shell, what is the best way to discard (or detect) files that are not completely downloaded because a timeout occurred? What I'm trying to do is:
curl -m 2 --compress -o "dest/#1" "http://url/{$list}"
When a timeout occurs, the log shows it, and the part of the file that was downloaded is saved to disk:
[4/13]: http://URL/123.jpg --> dest/123.jpg
99 97984 99 97189 0 0 45469 0 0:00:02 0:00:02 --:--:-- 62500
curl: (28) Operation timed out after 2000 milliseconds with 97189 bytes received
I'm trying to either get rid of the files that were not 100% downloaded, or have them listed so I can attempt a resume (-C flag) later.
The best solution I have found so far is to capture the stderr of the curl call and parse it with a combination of perl and grep to get the output file names:
curl -m 2 -o "dest/#1" "http://url/{$list}" 2>curl.out
perl -pe 's/[\t\n ]+/ /g ; s/--> /\n/g' curl.out | grep -i "Curl: (28)" | perl -pe 's/ .*//g'
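As an aside, recent curl releases (7.83.0 and later, if available on your system) can discard partial files natively: --remove-on-error deletes the output file whenever the transfer fails, timeouts included:
curl -m 2 --remove-on-error -o "dest/#1" "http://url/{$list}"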

How to make an Echo server with Bash?

How do you write an echo server as a bash script, using tools like nc, echo, xargs, etc., capable of simultaneously processing requests from multiple clients, each with a durable connection?
The best I've come up with so far is:
nc -l -p 2000 -c 'xargs -n1 echo'
but it only allows a single connection.
If you use ncat instead of nc, your command line works fine with multiple connections, but (as you pointed out) without -p.
ncat -l 2000 -k -c 'xargs -n1 echo'
ncat is available at http://nmap.org/ncat/.
P.S. With the original netcat (nc) by Hobbit, the -c flag is not supported.
Update: -k (--keep-open) is now required to handle multiple connections.
Here are some examples of simple ncat services.
TCP echo server
ncat -l 2000 --keep-open --exec "/bin/cat"
UDP echo server
ncat -l 2000 --keep-open --udp --exec "/bin/cat"
In case ncat is not an option, socat will also work:
socat TCP4-LISTEN:2000,fork EXEC:cat
The fork is necessary so multiple connections can be accepted. Adding reuseaddr to TCP4-LISTEN may be convenient.
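Whichever variant you run, a quick sanity check from a second shell looks the same; for example, against an echo server listening on local port 2000:
$ echo hello | nc localhost 2000
hello
Depending on the netcat variant, the client may need a flag such as -q 1 or -N to exit after stdin reaches EOF.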
netcat solution pre-installed in Ubuntu
The netcat pre-installed in Ubuntu 16.04 comes from netcat-openbsd, and has no -c option, but the manual gives a solution:
sudo mknod -m 777 fifo p
cat fifo | netcat -l -k localhost 8000 > fifo
Then, a client example:
echo abc | netcat localhost 8000
TODO: how to modify the input string value? The following does not return any reply:
cat fifo | tr 'a' 'b' | netcat -l -k localhost 8000 > fifo
The remote shell example however works:
cat fifo | /bin/sh -i 2>&1 | netcat -l -k localhost 8000 > fifo
However, I don't know a simple way to deal with concurrent requests.
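Regarding the TODO above: a likely reason the tr variant returns nothing is stdio buffering, since tr block-buffers when its output is a pipe. A hedged sketch, again assuming GNU coreutils' stdbuf:
cat fifo | stdbuf -oL tr 'a' 'b' | netcat -l -k localhost 8000 > fifo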
What about...
#! /bin/sh
while :; do
    /bin/nc.traditional -k -l -p 3342 -c 'xargs -n1 echo'
done
