how do you limit the number of ngrep results? - ngrep

There appears to be no way to do this from the man page. When I run ngrep on a port that's serving continuous traffic, I get a ton of results streaming. I want to limit the number of results, similar to what can be done with grep -m.

You can use ngrep -n, which matches only "x" total packets and then exits, where x is your input.
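For example, a minimal sketch (eth0 and port 80 are placeholders for your interface and port; the empty quotes mean "match any payload"):
# stop after the first 10 matching packets seen on port 80
ngrep -d eth0 -n 10 '' port 80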

Related

"tail -F" equivalent in lftp

I'm currently looking for tips to simulate a tail -F in lftp.
The goal is to monitor a log file the same way I could do with a proper ssh connection.
The closest command I found so far is repeat cat logfile.
It works, but it's not ideal when my file is large, because it displays the whole file every time.
The lftp program specifically will not support this, but if the server supports the extension, it is possible to pull only the last $x bytes from a file with, e.g. curl --range (see this serverfault answer). This, combined with some logic to only grab as many bytes as have been added since the last poll, could allow you to do this relatively efficiently. I doubt if there are any off-the-shelf FTP clients with this functionality, but someone else may know better.
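A rough sketch of that polling logic in shell (the URL is a placeholder, and this assumes the server supports SIZE and byte ranges; no error handling):
#!/bin/bash
url="ftp://user:pass@host/path/to/logfile"
# ask the server for the current file size (curl -I reports it as Content-Length)
size() { curl -sI "$url" | awk '/Content-Length/ {print $2}' | tr -d '\r'; }
offset=$(size)
while sleep 2; do
    new=$(size)
    if [ "$new" -gt "$offset" ]; then
        curl -s --range "$offset-$((new - 1))" "$url"   # print only the appended bytes
        offset=$new
    fi
done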

bash: wait for specific command output before continuing

I know there are several posts asking similar things, but none address the problem I'm having.
I'm working on a script that handles connections to different Bluetooth low energy devices, reads from some of their handles using gatttool and dynamically creates a .json file with those values.
The problem I'm having is that gatttool commands take a while to execute (and are not always successful in connecting to the devices, due to device is busy or similar messages). These "errors" not only put wrong data into the .json file, they also let later lines of the script keep writing to it (e.g. adding an extra } or similar). An example of the commands I'm using would be the following:
sudo gatttool -l high -b <MAC_ADDRESS> --char-read -a <#handle>
How can I approach this in a way that I can wait for a certain output? In this case, the ideal output when you --char-read using gatttool would be:
Characteristic value/description: some_hexadecimal_data
This way I can make sure I am following the script line by line instead of having these "jumps".
grep allows you to filter the output of gatttool for the data you are looking for.
If you are actually looking for a way to wait until a specific output is encountered before continuing, expect might be what you are looking for.
From the manpage:
expect [[-opts] pat1 body1] ... [-opts] patn [bodyn]
waits until one of the patterns matches the output of a spawned
process, a specified time period has passed, or an end-of-file is
seen. If the final body is empty, it may be omitted.
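If you'd rather stay in plain bash, the grep idea can be turned into a retry loop; a minimal sketch (the MAC address, handle, and retry count are placeholders):
#!/bin/bash
mac="AA:BB:CC:DD:EE:FF"
handle="0x000f"
for attempt in 1 2 3 4 5; do
    output=$(sudo gatttool -l high -b "$mac" --char-read -a "$handle" 2>&1)
    case $output in
        *"Characteristic value/description:"*)
            echo "$output"   # got the expected line; safe to continue the script
            break ;;
        *)
            sleep 1 ;;       # busy or failed; wait and try again
    esac
done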

What's the best way to nmap thousands of subnets in parallel from a script?

To inventory a port in part of a Class-A network, I scan it as a few thousand Class-C networks using nmap. I use parallel to run 32 subnet scan jobs at once.
A minimized version of the script:
while read subnet; do
    echo nmap -Pn -p"$tcpport" "$subnet" >> /tmp/nmap_parallel.list
done < "$subnets"
parallel -j32 < /tmp/nmap_parallel.list
wait
echo "Subnet scan for port $tcpport complete."
The problem with this approach is that the script stops at parallel.
Is there a better way to use parallel from a script?
Can't you send the commands to the background with & in order to process the others simultaneously? Something like below:
#!/bin/bash
ports="20 21 22 25 80 443"
for p in $ports
do
    nmap -Pn -p"$p" 10.0.1.0/24 &
done
wait   # block until all background scans have finished
Nmap has built-in parallelism that should be able to handle scanning a Class-A network in a single command. In fact, because of Nmap's network status monitoring and feedback mechanisms, it is usually better to run just one instance of Nmap at a time. The bottleneck for Nmap is not the processor, so running multiple instances with parallel is not going to help. Instead, Nmap will send many probes at once and wait for responses. As new responses come in, new probes can be sent out. If Nmap gets a response for every probe, it increases the number of outstanding probes (parallelism) it sends. When it detects a packet drop, it decreases this number (as well as some other timing-related variables).
This adaptive timing behavior is discussed at length in the official Nmap Network Scanning book, and is based on public algorithms used in TCP.
You may be able to speed up your scan by adjusting some timing options and eliminating scan phases that do not matter to you. On the simple end, you can try -T4 to increase several timing-related settings at once, without exceeding the capability of a high-speed link. You can also try adding -n to skip the reverse-DNS name lookup phase, since you may not be interested in those results.
You have already used the -Pn flag to skip the host discovery phase; if you are only scanning one port, this may be a good idea, but it may also result in confusing output and slower scan times, since Nmap must assume that every host is up and do a real port scan. Remember the adaptive timing algorithms? They have slightly different behavior when doing host discovery that may result in faster scan times. If you don't like the default host discovery probes, you can tune them yourself. If I am scanning for port 22, I can use that as a host discovery probe with -PS22, which means my output will only show hosts with that port open or closed (not firewalled and not down). If you stick with -Pn, you should probably also use the --open option to only show hosts with your chosen ports open, otherwise you will have a lot of output to slog through.
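Putting those suggestions together, one possible single-invocation form (port 22 and 10.0.0.0/8 are placeholders for your port and address range):
# adaptive timing, no reverse DNS, discovery probe on the target port, grepable output
nmap -T4 -n -PS22 -p22 --open -oG port22_scan.gnmap 10.0.0.0/8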

Monitor network bytes/sec on Windows for single process/port from command line

I need to monitor average network bytes sent/received per second over a period of time from command line, but only for network traffic sent/received by a certain process or port.
I am currently able to monitor all network traffic using:
logman create counter -n CounterName -c "\Network Interface(*)\Bytes Total/sec" -f csv -o C:\output.log -si 1
which gives me a CSV of total network bytes/sec at 1-second intervals, which I can then parse to determine an average; but I need to be able to monitor traffic sent/received only on a single port or by a single process (port would be better).
I've done a good amount of googling and can't find anything built into Windows to do this (I've looked at netstat as well). I am open to any free third-party tools that can do it; they just need to be runnable from the command line and produce some kind of log.
If you want to implement something yourself, you can write an Upper-Layer Windows Filter driver:
http://msdn.microsoft.com/en-us/library/windows/hardware/ff564862(v=vs.85).aspx#possible_driver_layers

Analyze Local Network Traffic, Update Quota with tshark and BASH [duplicate]

I have a slightly weird problem and I really hope someone can help with this:
I go to university and the wireless network here issues every login a certain quota/week (mine is 2GB). This means that every week, I am only allowed to access 2GB of the Internet - my uploads and downloads together must total at most 2GB (I am allowed access to a webpage that tells me my remaining quota). I'm usually allowed a few grace KB but let's not consider that for this problem.
My laptop runs Ubuntu and has the conky system monitor installed, which I've configured to display (among other things) my remaining wireless quota. Originally, I had conky hit the webpage and grep for my remaining quota. However, since my conky refreshes every 5 seconds and I'm on the wireless connection for upwards of 12 hours, the checking of the webpage itself kills my wireless quota.
To solve this problem, I figured I could do one of two things:
Hit the webpage much less frequently so that doing so doesn't kill my quota.
Monitor the wireless traffic at my wireless card and keep subtracting it from 2GB
(1) is what I've done so far: I set up a cron job to hit the webpage every minute and store the result in a file on my local filesystem. Conky then reads this file - no need for it to hit the webpage; no loss of wireless quota thanks to conky.
This solution is a win by a factor of 12, which is still not enough. However, I'm a fan of realtime data and will not reduce the cron frequency further.
So, the only other solution I have is (2). This is when I found out about wireshark and its command-line version, tshark. Now, here's what I think I should do:
daemonize tshark
set tshark to monitor the amount (in KB or B or MB - I can convert this later) of traffic flowing through my wireless card
keep appending this traffic information to file1
sum up the traffic information in the file1 and subtract it from 2GB. Store the result in file2
set conky to read file2 - that is my remaining quota
setup a cron job to delete/erase_the_contents_of file1 every Monday at 6.30AM (that's when the weekly quota resets)
At long last, my questions:
Do you see a better way to do this?
If not, how do I setup tshark to make it do what I want? What other scripts might I need?
If it helps, the website tells me my remaining quota in KB.
I've already looked at the tshark man page, which unfortunately makes little sense to me, being the network-n00b that I am.
Thank you in advance.
Interesting question. I've no experience using tshark, so personally I would approach this using iptables.
Looking at:
[root@home ~]# iptables -nvxL | grep -E "Chain (INPUT|OUTPUT)"
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Chain OUTPUT (policy ACCEPT 9763462 packets, 1610901292 bytes)
we see that iptables keeps a tally of the bytes that pass through each chain. So one can presumably monitor your bandwidth usage by:
When your system starts up, retrieve your remaining quota from the web
Zero the byte tally in iptables (Use the -z option)
Every X seconds, get usage from iptables and deduct from quota
Here are some examples of using iptables for IP accounting.
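A minimal sketch of that loop, assuming the remaining quota (in bytes) has been seeded into /var/tmp/quota_bytes at startup and this runs from root's crontab (the path and interval are placeholders):
#!/bin/bash
quota_file=/var/tmp/quota_bytes
# bytes counted on each chain's policy line since the last zeroing
in_bytes=$(iptables -nvxL INPUT | awk '/policy/ {print $7}')
out_bytes=$(iptables -nvxL OUTPUT | awk '/policy/ {print $7}')
# zero the tallies so the next run only sees new traffic
# (note: traffic arriving between the reads above and this reset is lost)
iptables -Z INPUT
iptables -Z OUTPUT
quota=$(cat "$quota_file")
echo $((quota - in_bytes - out_bytes)) > "$quota_file"   # conky reads this file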
Caveats
There are some drawbacks to this approach. First of all, you need root access to run iptables, which means either running conky as root or having a root cron job write the current values to a file that conky can read.
Also, not all INPUT/OUTPUT packets may count towards your bandwidth allocation, e.g. intranet access, DNS, etc. One can filter out only relevant connections by matching them and placing them in a separate iptables chain (examples in the link given above). An easier approach (if the disparity is not too large) would be to occasionally grab your real time quota from the web, reset your values and start again.
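An illustration of that separate-chain idea (wlan0 and the 10.0.0.0/8 intranet range are placeholders for your interface and local network):
iptables -N QUOTA                                # custom chain used purely for counting
iptables -A INPUT  -i wlan0 ! -s 10.0.0.0/8 -j QUOTA
iptables -A OUTPUT -o wlan0 ! -d 10.0.0.0/8 -j QUOTA
iptables -A QUOTA -j RETURN                      # count, then resume normal processing
# read the combined byte count from the RETURN rule:
iptables -nvxL QUOTA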
It also gets a little tricky when you have existing iptables rules which are either complicated or use custom chains. You'll then need some knowledge of iptables to retrieve the right values.
