Extract tcpdump data with awk - filter

I'm trying to set up a pipeline to extract, via awk, certain fields and the ASCII data (source IP, target IP, and payload) from each packet in a stream of packets captured by tcpdump, but I'm having difficulty. I think the problem is that the payload is arbitrary, so it's hard to find a fixed structure that awk can use to split it into records. Here's my current command:
sudo tcpdump -i en1 -A -q -l | awk '{ print "fields are " $3, $5, $8 }'
Here is a single line of the output I'm trying to filter:
12:45:23.890302 IP 10.0.1.3.52695 > weblnb.fogcreek.com.http: tcp 739
E....M#.#...
T.........P-.....&.....
2U......GET /default.asp?pg=pgRss&ixDiscussGroup=5 HTTP/1.1
Host: discuss.joelonsoftware.com
User-Agent: Vienna/2.6.0.2601
Accept: */*
Accept-Encoding: gzip
Accept-Language: en-us
Cookie: __utma=261409944.1875583.1351297139.1362842383.1362868129.78; __utmz=261409944.1358134504.43.4.utmcsr=joelonsoftware.com|utmccn=(referral)|utmcmd=referral|utmcct=/; fb_SessionId=qc48cvnjvacl3jeo76l8qv69emn119; DBID=LTOJIXRXTFAPXDGFBKCAYLVCILYFCA; fbToken=lqdf3avvfodabtfvd5c4drt18107B8; sUniqueID=20121026230417-66.117.217.10-slb5btkgb5; __utma=131697940.47826445.1351869116.1360335377.1361680499.5; __utmz=131697940.1361680499.5.2.utmccn=(referral)|utmcsr=statcounter.com|utmcct=/p8568424/exit_link_activity/|utmcmd=referral
Connection: keep-alive
The desired output from this filter is
10.0.1.3.52695 weblnb.fogcreek.com.http: { E....M#.#...
T.........P-.....&.....
2U......GET /default.asp?pg=pgRss&ixDiscussGroup=5 HTTP/1.1
Host: discuss.joelonsoftware.com
User-Agent: Vienna/2.6.0.2601
Accept: */*
Accept-Encoding: gzip
Accept-Language: en-us
Cookie: __utma=261409944.1875583.1351297139.1362842383.1362868129.78; __utmz=261409944.1358134504.43.4.utmcsr=joelonsoftware.com|utmccn=(referral)|utmcmd=referral|utmcct=/; fb_SessionId=qc48cvnjvacl3jeo76l8qv69emn119; DBID=LTOJIXRXTFAPXDGFBKCAYLVCILYFCA; fbToken=lqdf3avvfodabtfvd5c4drt18107B8; sUniqueID=20121026230417-66.117.217.10-slb5btkgb5; __utma=131697940.47826445.1351869116.1360335377.1361680499.5; __utmz=131697940.1361680499.5.2.utmccn=(referral)|utmcsr=statcounter.com|utmcct=/p8568424/exit_link_activity/|utmcmd=referral
Connection: keep-alive}
Note: the goal is not limited to the single specific example above. The general structure of the filtered output should look like this:
$sourceip $targetip {$raw_packet_data/payload,_could_be_http_stream_or_just_plain_gibberish}
The payload field ends where the next packet begins, i.e. at the next $sourceip header line.
And the awk filter should capture every packet in the tcpdump output stream in this fashion, not just a single one.
Any suggestions on how to implement this?

The following maps your example input to your desired output; does it work for the whole stream?
$ awk '/tcp [0-9]+/{printf "%s %s { ",$3,$5;getline;print $0;next}$1=="Connection:"{$2=$2"}"}{printf "\t%s\n",$0}' file
10.0.1.3.52695 weblnb.fogcreek.com.http: { E....M#.#...
T.........P-.....&.....
2U......GET /default.asp?pg=pgRss&ixDiscussGroup=5 HTTP/1.1
Host: discuss.joelonsoftware.com
User-Agent: Vienna/2.6.0.2601
Accept: */*
Accept-Encoding: gzip
Accept-Language: en-us
Cookie: __utma=261409944.1875583.1351297139.1362842383.1362868129.78; __utmz=261409944.1358134504.43.4.utmcsr=joelonsoftware.com|utmccn=(referral)|utmcmd=referral|utmcct=/; fb_SessionId=qc48cvnjvacl3jeo76l8qv69emn119; DBID=LTOJIXRXTFAPXDGFBKCAYLVCILYFCA; fbToken=lqdf3avvfodabtfvd5c4drt18107B8; sUniqueID=20121026230417-66.117.217.10-slb5btkgb5; __utma=131697940.47826445.1351869116.1360335377.1361680499.5; __utmz=131697940.1361680499.5.2.utmccn=(referral)|utmcsr=statcounter.com|utmcct=/p8568424/exit_link_activity/|utmcmd=referral
Connection: keep-alive}
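Since "Connection:" will not always be the last payload line in a real stream, here is a more general sketch that keys each record on the timestamp beginning every packet header (assuming -q output shaped like the sample) and closes the brace when the next header, or end of input, arrives:
sudo tcpdump -i en1 -A -q -l | awk '
/^[0-9][0-9]:[0-9][0-9]:[0-9][0-9]\./ {   # packet header, e.g. "12:45:23.890302 IP ..."
    if (open) print "}"                   # close the previous payload (on its own line)
    printf "%s %s { ", $3, $5             # source, target, opening brace
    open = 1
    next
}
{ print }                                 # payload line, verbatim
END { if (open) print "}" }               # close the final packet
'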

Related

curl post audio data with rate limit

I am trying to post audio data with curl to an HTTP API that allows transmitting/receiving audio files.
First I tried this:
curl -vv --http1.0 -H "Content-Type: audio/basic" -H "Content-Length: 9999999" -H "Connection: Keep-Alive" -H "Cache-Control: no-cache" --data-binary @- 'http://IP/API-Endpoint.cgi'
This seems to work:
* Trying [IP]...
* TCP_NODELAY set
* Connected to [IP] ([IP]) port 80 (#0)
> POST /API-Endpoint.cgi HTTP/1.0
> Host: [IP]
> User-Agent: curl/7.54.0
> Accept: */*
> Content-Type: audio/basic
> Content-Length: 9999999
> Connection: Keep-Alive
> Cache-Control: no-cache
>
* upload completely sent off: 17456 out of 17456 bytes
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: text/plain
< Content-Length: 0
* HTTP/1.0 connection set to keep alive!
< Connection: keep-alive
< Date: Wed, 06 Jun 2018 19:38:37 GMT
< Server: lighttpd/1.4.45
But I can only hear the very last part of the audio file. (The file has the correct audio format for the API: G.711 μ-law at 8000 Hz.) My next guess is that the audio gets transmitted too fast and has to be sent in real time to the API endpoint. So I tried the --limit-rate parameter of curl, which had no effect. Then I tried piping the data with a rate limit into curl:
cat myfile.wav | pv -L 10k | curl -vv --http1.0 -H "Content-Type: audio/basic" -H "Content-Length: 9999999" -H "Connection: Keep-Alive" -H "Cache-Control: no-cache" --data-binary @- 'http://IP/API-Endpoint.cgi'
but the result is always the same: I can only hear the last part of the audio file. It seems like curl is waiting for the piped input to complete and then sends the request as before.
Is there an option to post audio to an HTTP API from bash in "real time"?
Update:
Without forcing HTTP 1.0 I get the following result:
curl -vv -H "Content-Type: audio/basic" --data-binary '@myfile.wav' 'http://[IP]/API-Endpoint.cgi'
* Trying [IP]...
* TCP_NODELAY set
* Connected to [IP] ([IP]) port 80 (#0)
> POST /API-Endpoint.cgi HTTP/1.1
> Host: [IP]
> User-Agent: curl/7.54.0
> Accept: */*
> Content-Type: audio/basic
> Content-Length: 15087
> Expect: 100-continue
>
< HTTP/1.1 417 Expectation Failed
< Content-Type: text/html
< Content-Length: 363
< Connection: close
< Date: Wed, 06 Jun 2018 20:34:22 GMT
< Server: lighttpd/1.4.45
<
<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>417 - Expectation Failed</title>
</head>
<body>
<h1>417 - Expectation Failed</h1>
</body>
</html>
* Closing connection 0
with -H "Content-Length: 9999999" you say that your audio file is exactly 9999999 bytes long (roughly 10 megabytes), but curl reports that your file is 17456 bytes:
* upload completely sent off: 17456 out of 17456 bytes
(roughly 0.02 megabytes), so either your Content-Length header is wrong (that's my best guess), or the program feeding your audio file to curl is faulty, closing stdin prematurely.
either fix your Content-Length header, or fix the program feeding curl's stdin, hopefully that should send the entire file intact.
EDIT: it seems that server can't handle Expect: 100-continue. To disable that header, add the argument -H 'Expect:' (an empty Expect header makes curl omit the header entirely instead of sending it empty).
... but to answer the question in the title: yes, that's the --limit-rate argument.
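Putting the pieces together, a sketch of a corrected upload (the URL is the placeholder from the question; G.711 μ-law at 8000 Hz is 8000 bytes per second, so --limit-rate 8K is roughly real time):
curl -v --limit-rate 8K \
     -H 'Content-Type: audio/basic' \
     -H 'Expect:' \
     --data-binary '@myfile.wav' \
     'http://IP/API-Endpoint.cgi'
Note that --data-binary '@myfile.wav' sets the Content-Length to the file's real size, and -H 'Expect:' suppresses the 100-continue handshake the server rejects.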

curl syntax in GET based HTTP logins

For practice purposes I decided to create a simple brute-forcing bash script, which I successfully used to solve DVWA. I then moved on to IoT, namely my old IP camera. This is my code as of now:
#!/bin/bash
if [ "$#" != "2" ]; then
    echo "usage: $0 <host> <path>"
    exit
fi
ip=$1
path=$2
for name in $(cat user.txt); do
    for pass in $(cat passwords.txt); do
        echo ${name}:${pass}
        res="$(curl -si ${name}:${pass}@${ip}${path})"
        check=$(echo "$res" | grep "HTTP/1.1 401 Unauthorized")
        if [ "$check" != '' ]; then
            tput setaf 1
            echo "[FAILURE]"
            tput sgr0
        else
            tput setaf 2
            echo "[SUCCESS]"
            tput sgr0
            exit
        fi
        sleep .1
    done
done
Despite obvious flaws, like reporting success in case of network failure, it's as good as my 20-minute coding jobs get. However, I can't seem to get the curl command syntax quite right. The camera in question is a simple Axis, running cramFS and a small scripting OS. Its login form is similar to those of a lot of publicly available cameras, like the ones found here, here or here. A simple GET, yet I feel like I'm bashing my head against a wall. Any hint will be madly appreciated at this point.
I've taken the liberty of pasting the contents of the first GET packet:
GET /operator/basic.shtml?id=478 HTTP/1.1
Host: <target_host_ip>
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US;q=0.7,en;q=0.3
Accept-Encoding: gzip, deflate
Referer: http://<target_host_ip>/view/view.shtml?id=282&imagepath=%2Fmjpg%2Fvideo.mjpg&size=1
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Authorization: Digest username="root", realm="AXIS_ACCC8E4A2177", nonce="w3PH7XVmBQA=32dd7cd6ab72e0142e2266eb2a68f59e92995033", uri="/operator/basic.shtml?id=478", algorithm=MD5, response="025664e1ba362ebbf9c108b1acbcae97", qop=auth, nc=00000001, cnonce="a7e04861c3634d3b"
The packet sent in return is a simple, dry 401.
PS: Any powers that be, feel free to remove the IPs if they violate anything. Also feel free to point out grammar/spelling mistakes, since my C2 exam is coming up.
It looks like those cameras don't simply use "Basic" HTTP auth with a base64 encoded username:password combo, but use digest authentication which involves a bit more.
Luckily, with cURL this just means you need to specify --digest on the command line to handle it properly.
Test the sequence of events yourself using:
curl --digest http://user:password@example.com/digest-url/
You should see something similar to:
* Trying example.com...
* Connected to example.com (x.x.x.x) port 80 (#0)
* Server auth using Digest with user 'admin'
> GET /view/viewer_index.shtml?id=1323 HTTP/1.1
> Host: example.com
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Date: Wed, 08 Nov 1972 17:30:37 GMT
< Accept-Ranges: bytes
< Connection: close
< WWW-Authenticate: Digest realm="AXIS_MACADDR", nonce="00b035e7Y417961b2083fae7e4b2c4053e39ef8ba0b65b", stale=FALSE, qop="auth"
< WWW-Authenticate: Basic realm="AXIS_MACADDR"
< Content-Length: 189
< Content-Type: text/html; charset=ISO-8859-1
<
* Closing connection 0
* Issue another request to this URL: 'http://admin:admin2@example.com/view/viewer_index.shtml?id=1323'
* Server auth using Digest with user 'admin'
> GET /view/viewer_index.shtml?id=1323 HTTP/1.1
> Host: example.com
> Authorization: Digest username="admin", realm="AXIS_MACADDR", nonce="00b035e7Y417961b2083fae7e4b2c4053e39ef8ba0b65b", uri="/view/viewer_index.shtml?id=1323", cnonce="NWIxZmY1YzA3NmY3ODczMDA0MDg4MTUwZDdjZmE0NGI=", nc=00000001, qop=auth, response="3b03254ef43bc4590cb00ba32defeaff"
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Date: Wed, 08 Nov 1972 17:30:37 GMT
< Accept-Ranges: bytes
< Connection: close
* Authentication problem. Ignoring this.
< WWW-Authenticate: Digest realm="AXIS_MACADDR", nonce="00b035e8Y8232884a74ee247fc1cc42cab0cdf59839b6f", stale=FALSE, qop="auth"
< WWW-Authenticate: Basic realm="AXIS_MACADDR"
< Content-Length: 189
< Content-Type: text/html; charset=ISO-8859-1
<
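Applied to the loop in the question, a sketch that lets curl negotiate the digest handshake and checks only the final status code, using the ${name}, ${pass}, ${ip} and ${path} variables from the script:
# -w '%{http_code}' prints just the final HTTP status; -o /dev/null discards the body
code=$(curl -s -o /dev/null -w '%{http_code}' \
       --digest -u "${name}:${pass}" "http://${ip}${path}")
if [ "$code" != "401" ]; then
    echo "[SUCCESS] ${name}:${pass}"
    exit
fi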

Elasticsearch Bulk API - not indexing

I have output a JSON file in bulk format, which I can load via the Kibana developer tools, and by inserting a few lines with curl's -d option.
example lines of file:
{"index":{"_index":"els","_type":"logs","_id":1481018400003}}
{"timestamp":1481018400003,"zoneId":29863567,............[]}
{"index":{"_index":"els","_type":"logs","_id":"30cee368073c0c9b"}}
{"timestamp":1481018400005,"zoneId":29863567,............[]}
...
However, when I use the bulk API to POST the file, it does not do anything. I added --verbose to the command and got the following:
* Connected to localhost (::1) port 9200 (#0)
> POST /_bulk HTTP/1.1
> Host: localhost:9200
> User-Agent: curl/7.49.0
> Accept: */*
> Content-Length: 0
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 400 Bad Request
< content-type: application/json; charset=UTF-8
< content-length: 165
* HTTP error before end of send, stop sending
Any help would be great.
Thanks!
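The verbose output shows Content-Length: 0 and Content-Type: application/x-www-form-urlencoded, i.e. the file body never made it into the request. A sketch of a bulk request that does attach the file (assuming it is named bulk.json; the bulk API also requires the file to end with a newline):
curl -v -X POST 'http://localhost:9200/_bulk' \
     -H 'Content-Type: application/x-ndjson' \
     --data-binary '@bulk.json'
Here --data-binary '@bulk.json' attaches the file verbatim and sets Content-Length itself, and the Content-Type header tells Elasticsearch to expect newline-delimited JSON.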

GZIP encoding in Jersey 2 / Grizzly

I can't activate gzip-encoding in my Jersey service. This is what I've tried:
Started out with the jersey-quickstart-grizzly2 archetype from the Getting Started Guide.
Added rc.register(org.glassfish.grizzly.http.GZipContentEncoding.class);
(have also tried rc.register(org.glassfish.jersey.message.GZipEncoder.class);)
Started with mvn exec:java
Tested with curl --compressed -v -o - http://localhost:8080/myapp/myresource
The result is the following:
> GET /myapp/myresource HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 zlib/1.2.3.4 ...
> Host: localhost:8080
> Accept: */*
> Accept-Encoding: deflate, gzip
>
< HTTP/1.1 200 OK
< Content-Type: text/plain
< Date: Sun, 03 Nov 2013 08:07:10 GMT
< Content-Length: 7
<
* Connection #0 to host localhost left intact
* Closing connection #0
Got it!
That is, despite Accept-Encoding: deflate, gzip in the request, there is no Content-Encoding: gzip in the response.
What am I missing here??
You have to register the org.glassfish.jersey.server.filter.EncodingFilter as well. This example enables deflate and gzip compression:
import org.glassfish.jersey.message.DeflateEncoder;
import org.glassfish.jersey.message.GZipEncoder;
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.server.filter.EncodingFilter;
...
private void enableCompression(ResourceConfig rc) {
    rc.registerClasses(
            EncodingFilter.class,
            GZipEncoder.class,
            DeflateEncoder.class);
}
This solution is Jersey-specific and works not only with Grizzly but with the JDK HTTP server as well.
Alternatively, enable compression on the Grizzly listener itself:
HttpServer httpServer = GrizzlyHttpServerFactory.createHttpServer(
BASE_URI, rc, false);
CompressionConfig compressionConfig =
httpServer.getListener("grizzly").getCompressionConfig();
compressionConfig.setCompressionMode(CompressionConfig.CompressionMode.ON); // the mode
compressionConfig.setCompressionMinSize(1); // the min amount of bytes to compress
compressionConfig.setCompressableMimeTypes("text/plain", "text/html"); // the mime types to compress
httpServer.start();
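With either approach in place, the earlier curl test should show the compression header; a quick check (curl's -v log goes to stderr, so merge it into stdout before filtering):
curl --compressed -sv -o /dev/null http://localhost:8080/myapp/myresource 2>&1 \
    | grep -i 'content-encoding'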

Bash CGI POST gives error 500, but works without AJAX

I'm trying to call a CGI page, but the response comes back blank. It returns error 500. If I just do the POST without AJAX, it works well.
#!/bin/bash
echo "content-type: text/html"
echo "lalala" > temp.file
cat temp.file
echo "
<br><b>Program:</b> $program <br> \n"
echo "<html> adsdasd </html>"
Here are the headers:
Response Headers
Connection close
Content-Length 535
Content-Type text/html; charset=iso-8859-1
Date Thu, 19 Jan 2012 12:30:04 GMT
Server Apache
Request Headers
Accept */*
Accept-Encoding gzip, deflate
Accept-Language en-us,en;q=0.5
Connection keep-alive
Content-Length 16
Content-Type application/x-www-form-urlencoded; charset=UTF-8
Host cgi:8888
Origin null
User-Agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:10.0) Gecko/20100101 Firefox/10.0
I solved it with
echo
echo
at the beginning of the file. It seems the server needs a blank line to terminate the CGI headers before the body.
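For reference, a minimal corrected version of the script from the question, with the blank line that terminates the CGI header block in the standard place, directly after the headers:
#!/bin/bash
echo "Content-Type: text/html"
echo                                # blank line: ends the CGI header block
echo "<html><body>"
echo "<br><b>Program:</b> $program <br>"
echo "</body></html>"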
