I'm working on a script that will monitor traffic on specific hosts from Nagios. I have studied some existing scripts and have gathered almost all the information I need, but I have encountered a problem in identifying the OIDs necessary for the traffic. I wanted to use IF-MIB::ifOutOctets.1 and IF-MIB::ifInOctets.1 to get the outgoing and incoming traffic, but when I tested with the following line:
snmpwalk -v 1 -c public myComputer OID
I got the same result for both OIDs, and that doesn't seem right. I'm wondering if there are other variables I could try instead of the ones I'm using now.
It would also be useful if you could point me to some documentation on the IF-MIB, because I can get all the values with snmpwalk but I don't know how to interpret them.
OK, I found the answer after some searching. The values are equal because I was not querying the right interface (I was querying the loopback). There is this command snmpwalk -v 1 -c public hostname 1.3.6.1.2.1.31.1.1.1 that lists a lot of OIDs, and among them IF-MIB::ifName, which gives the names of the interfaces. If you then query IF-MIB::ifInOctets.x, where x is the index of the interface you are interested in, you get a number of bytes (a cumulative octet counter). To see what it means, I executed the command
date ; snmpwalk -v 1 -c public myComputer ifOutOctets.x
twice, at an interval of approximately one minute, then subtracted the two values and divided the difference by the number of seconds between the executions. I compared the result with the one obtained from iptraf and they roughly match, so I think you can use this method to measure a station's traffic with SNMP.
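For anyone scripting this, here is a minimal sketch of that calculation (the host, community string, and interface index are assumptions; check IF-MIB::ifName for the right index, and note that these 32-bit counters wrap on busy links, where IF-MIB::ifHCOutOctets with SNMPv2c would be safer):

#!/bin/bash
# Sample the outgoing octet counter twice and print average bytes/second.
HOST=myComputer
COMMUNITY=public
OID=IF-MIB::ifOutOctets.2    # assumption: index 2 is the interface of interest

t1=$(date +%s); v1=$(snmpget -v 1 -c "$COMMUNITY" -Oqv "$HOST" "$OID")
sleep 60
t2=$(date +%s); v2=$(snmpget -v 1 -c "$COMMUNITY" -Oqv "$HOST" "$OID")

echo "average outgoing traffic: $(( (v2 - v1) / (t2 - t1) )) bytes/sec"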
According to example 12 here I should be able to use
dbus-send --system --print-reply --dest=org.freedesktop.NetworkManager /org/freedesktop/NetworkManager/Devices/4 org.freedesktop.NetworkManager.Device.Wireless.GetAllAccessPoints
to discover all available wireless access points, because /org/freedesktop/NetworkManager/Devices/4 is my wireless adaptor. However, it seems to return results different from the command-line equivalent
nmcli device wifi list
which returns many more SSIDs. Whilst experimenting with the above at work, I could only get one SSID via dbus-send. At home, the first time I ran the dbus-send command it returned an array of four access points, which is the same number as returned by nmcli. I ran the same dbus-send command again and this time it produced a list of only one access point, just as at work.
The next day at home I tried the dbus-send command several times and it listed just one access point. I then ran the nmcli command again and it listed five access points. After that, the dbus-send command also listed five access points. It seems that the nmcli command somehow goes further than the dbus-send command to discover access points, but once it has done so, the dbus-send command is also able to find the access points. That is not the case at work, however: the nmcli command always discovers 12 or more APs but the dbus-send command only ever discovers one.
I definitely only have one wireless adaptor: ifconfig -a lists: enp0s25, lo, sit0 and wlp3s0.
What does the nmcli command do that the dbus-send command does not?
The answer is that you have to run a rescan (method RequestScan) just before getting the list of SSIDs.
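A rough sketch of that call, reusing the device path from the question (gdbus is used for the rescan because dbus-send struggles to express the empty a{sv} options argument that RequestScan takes; the sleep gives the asynchronous scan time to finish):

gdbus call --system --dest org.freedesktop.NetworkManager \
  --object-path /org/freedesktop/NetworkManager/Devices/4 \
  --method org.freedesktop.NetworkManager.Device.Wireless.RequestScan '{}'
sleep 5
dbus-send --system --print-reply --dest=org.freedesktop.NetworkManager /org/freedesktop/NetworkManager/Devices/4 org.freedesktop.NetworkManager.Device.Wireless.GetAllAccessPoints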
I have set up an SSH connection from my Mac to XenServer. I'm able to get the metrics for CPU and memory using the xentop command, as shown below.
Further, I wanted to get the CPU utilization using a bash script, so that I can compute the average values, as shown below.
[screenshot of the script's output: the first CPU value is 0, the following values are non-zero]
As you can see in the figure above, the problem I'm facing is that the first value always comes out as zero, followed by the other values. Can someone please explain what I'm doing wrong?
PS: The -bi3 flags mean that the command runs in batch mode for three iterations, i.e. it is executed three times.
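For reference, the averaging script is along these lines (a sketch rather than the exact script in the screenshot: the domain name is made up, and CPU(%) is assumed to be column 4 of xentop's batch output):

#!/bin/bash
# Average a domain's CPU(%) over several xentop batch iterations.
DOMAIN=myvm    # assumption: the guest's name as shown by xentop
xentop -b -i 3 | awk -v d="$DOMAIN" '
    $1 == d { n++; if (n > 1) { sum += $4; cnt++ } }   # the first sample reads 0, so skip it
    END { if (cnt) printf "average CPU%%: %.2f\n", sum / cnt }'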
Please let me know if you have any questions.
Thanks in advance.
To inventory a port across part of a Class-A network, I scan it as a few thousand Class-C networks using nmap. I use parallel to run 32 subnet scan jobs at once.
A minimized version of the script:
while read subnet; do
echo nmap -Pn -p"$tcpport" "$subnet" >> /tmp/nmap_parallel.list;
done < $subnets
parallel -j32 < /tmp/nmap_parallel.list
wait
echo "Subnet scan for port $tcpport complete."
The problem with this approach is that the script stops at parallel.
Is there a better way to use parallel from a script?
Can't you send the commands to the background with & in order to process the others simultaneously? Something like below:
#!/bin/bash
ports="20 21 22 25 80 443"
for p in $ports
do
    nmap -Pn -p"$p" 10.0.1.0/24 &
done
wait    # don't exit until all the background scans have finished
Nmap has built-in parallelism that should be able to handle scanning a Class-A network in a single command. In fact, because of Nmap's network status monitoring and feedback mechanisms, it is usually better to run just one instance of Nmap at a time. The bottleneck for Nmap is not the processor, so running multiple instances with parallel is not going to help. Instead, Nmap will send many probes at once and wait for responses. As new responses come in, new probes can be sent out. If Nmap gets a response for every probe, it increases the number of outstanding probes (parallelism) it sends. When it detects a packet drop, it decreases this number (as well as some other timing-related variables).
This adaptive timing behavior is discussed at length in the official Nmap Network Scanning book, and is based on public algorithms used in TCP.
You may be able to speed up your scan by adjusting some timing options and eliminating scan phases that do not matter to you. On the simple end, you can try -T4 to increase several timing-related settings at once, without exceeding the capability of a high-speed link. You can also try adding -n to skip the reverse-DNS name lookup phase, since you may not be interested in those results.
You have already used the -Pn flag to skip the host discovery phase; if you are only scanning one port, this may be a good idea, but it may also result in confusing output and slower scan times, since Nmap must assume that every host is up and do a real port scan. Remember the adaptive timing algorithms? They have slightly different behavior when doing host discovery that may result in faster scan times. If you don't like the default host discovery probes, you can tune them yourself. If I am scanning for port 22, I can use that as a host discovery probe with -PS22, which means my output will only show hosts with that port open or closed (not firewalled and not down). If you stick with -Pn, you should probably also use the --open option to only show hosts with your chosen ports open, otherwise you will have a lot of output to slog through.
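Putting those suggestions together, the whole inventory could be a single command along these lines (network, port, and output file are placeholders):

nmap -T4 -n -PS22 -p22 --open -oG port22_inventory.gnmap 10.0.0.0/8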
I have a master server that crawls data on the web and does the indexing. After that, it starts mirroring to all the mirror servers.
For that I use rsync and rsh.
But it takes time before the update starts on a mirror server, and I want to find where that delay occurs.
My understanding
The time might be spent in the reverse DNS lookup.
My questions
EDITED
Is it right to add some logging to the rsh.c code or to the rsync code?
If the answer to the first question is yes, then I want to compare the time consumed with reverse DNS lookup enabled and with it disabled. What can I do for that?
Where can I add logging to record the time consumed?
If my understanding or my questions are off the mark for the task I have described, please correct me and point me to a better path, so that I can achieve my goal.
Thanks in advance. Looking for your kind response.
Edit No. 2
Basically, I am analyzing the time consumed in order to determine the reason for the delay. I do not intend to modify the existing code.
My task is to analyze the code and find the reason for the delay. That's it.
I think my task is now clear.
Before changing rsh, you may try using strace to see which system calls are taking the longest.
strace -c will produce a list of system calls and % of time used by those calls.
(should help with the second question also)
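For example (host and paths are placeholders; -f also traces the child processes rsync spawns, such as the rsh transport):

strace -c -f rsync -a /data/index/ mirror1:/data/index/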
To make the DNS lookup evident, you can use ltrace:
example:
ltrace -c -o log.txt wget http://dkjflsdfjka/
Then log.txt will have something like:
root@host:~# head log.txt
% time     seconds  usecs/call     calls      function
------ ----------- ----------- --------- --------------------
 74.27    0.130779      130779         1 getaddrinfo
  6.63    0.011680          33       344 strlen
  3.05    0.005371          35       152 free
  2.98    0.005255          35       147 malloc
  2.74    0.004830          38       127 fgets
Then you can see that the getaddrinfo call (the name lookup) is taking most of the time.
I have a slightly weird problem and I really hope someone can help with this:
I go to university and the wireless network here issues every login a certain quota/week (mine is 2GB). This means that every week, I am only allowed to access 2GB of the Internet - my uploads and downloads together must total at most 2GB (I am allowed access to a webpage that tells me my remaining quota). I'm usually allowed a few grace KB but let's not consider that for this problem.
My laptop runs Ubuntu and has the conky system monitor installed, which I've configured to display (among other things) my remaining wireless quota. Originally, I had conky hit the webpage and grep for my remaining quota. However, since conky refreshes every 5 seconds and I'm on the wireless connection for upwards of 12 hours, checking the webpage itself kills my wireless quota.
To solve this problem, I figured I could do one of two things:
Hit the webpage much less frequently so that doing so doesn't kill my quota.
Monitor the wireless traffic at my wireless card and keep subtracting it from 2GB
(1) is what I've done so far: I set up a cron job to hit the webpage every minute and store the result in a file on my local filesystem. Conky then reads this file, so there's no need for it to hit the webpage and no loss of wireless quota due to conky.
This solution is a win by a factor of 12, which is still not enough. However, I'm a fan of realtime data and will not reduce the cron frequency further.
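For illustration, the cron entry looks roughly like this (the URL and grep pattern are invented here, since they depend on the university's quota page):

* * * * * curl -s http://quota.example.edu/status | grep -o '[0-9]* KB' > /home/me/.quota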
So, the only other solution I have is (2). This is when I found out about wireshark and its command-line version, tshark. Now, here's what I think I should do (a rough sketch follows the steps):
daemonize tshark
set tshark to monitor the amount (in KB or B or MB - I can convert this later) of traffic flowing through my wireless card
keep appending this traffic information to file1
sum up the traffic information in file1 and subtract it from 2GB. Store the result in file2
set conky to read file2 - that is my remaining quota
set up a cron job to delete/erase_the_contents_of file1 every Monday at 6.30AM (that's when the weekly quota resets)
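To make the plan concrete, here is an untested sketch of steps 2-5 (run it in the background for step 1; wlan0, the file paths, and the exact tshark invocation are all guesses on my part):

#!/bin/bash
# Count bytes seen on the wireless card in 60-second slices (needs root).
IFACE=wlan0                           # assumption: the wireless interface
FILE1=/var/tmp/wifi_bytes.log         # step 3: per-interval byte counts
FILE2=/var/tmp/wifi_quota_left        # step 5: the file conky reads
QUOTA=$((2 * 1024 * 1024 * 1024))     # 2GB in bytes

while true; do
    # Capture for 60 seconds and sum the length of every frame seen.
    bytes=$(tshark -i "$IFACE" -a duration:60 -T fields -e frame.len 2>/dev/null |
            awk '{ s += $1 } END { print s + 0 }')
    echo "$bytes" >> "$FILE1"                               # step 3
    used=$(awk '{ s += $1 } END { print s + 0 }' "$FILE1")  # step 4
    echo $(( QUOTA - used )) > "$FILE2"                     # step 5
done

The Monday 6.30AM cron job (step 6) then only has to truncate file1, e.g. : > /var/tmp/wifi_bytes.log.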
At long last, my questions:
Do you see a better way to do this?
If not, how do I set up tshark to make it do what I want? What other scripts might I need?
If it helps, the website tells me my remaining quota in KB.
I've already looked at the tshark man page, which unfortunately makes little sense to me, being the network-n00b that I am.
Thank you in advance.
Interesting question. I've no experience using tshark, so personally I would approach this using iptables.
Looking at:
[root@home ~]# iptables -nvxL | grep -E "Chain (INPUT|OUTPUT)"
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Chain OUTPUT (policy ACCEPT 9763462 packets, 1610901292 bytes)
we see that iptables keeps a tally of the bytes that pass through each chain. So one can presumably monitor your bandwidth usage by:
When your system starts up, retrieve your remaining quota from the web
Zero the byte tally in iptables (use the -Z option)
Every X seconds, get usage from iptables and deduct from quota
Here are some examples of using iptables for IP accounting.
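For instance, a minimal sketch of that loop (the paths, interval, and quota figure are assumptions; it relies on the byte count being the 7th field of the chain header lines shown above, which only reflects all traffic if no other rules match first):

#!/bin/bash
# Poll the INPUT/OUTPUT byte tallies and write the remaining quota to a file.
START_BYTES=2147483648               # assumption: quota fetched from the web at startup
OUTFILE=/var/tmp/remaining_quota     # assumption: the file conky reads
iptables -Z INPUT; iptables -Z OUTPUT    # step 2: zero the counters
while sleep 60; do                       # step 3
    rx=$(iptables -nvxL INPUT  | awk 'NR==1 { print $7 }')
    tx=$(iptables -nvxL OUTPUT | awk 'NR==1 { print $7 }')
    echo $(( START_BYTES - rx - tx )) > "$OUTFILE"
done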
Caveats
There are some drawbacks to this approach. First of all, you need root access to run iptables, which means you either need conky running as root, or a cron job that writes the current values to a file conky can read.
Also, not all INPUT/OUTPUT packets may count towards your bandwidth allocation, e.g. intranet access, DNS, etc. One can filter out only relevant connections by matching them and placing them in a separate iptables chain (examples in the link given above). An easier approach (if the disparity is not too large) would be to occasionally grab your real time quota from the web, reset your values and start again.
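A sketch of that separate-chain idea (10.0.0.0/8 standing in for the intranet range):

# Route all traffic through an accounting chain, but bail out early for
# intranet traffic so it is never counted.
iptables -N ACCT_WAN
iptables -I INPUT  -j ACCT_WAN
iptables -I OUTPUT -j ACCT_WAN
iptables -A ACCT_WAN -s 10.0.0.0/8 -j RETURN   # ignore intranet sources
iptables -A ACCT_WAN -d 10.0.0.0/8 -j RETURN   # ignore intranet destinations
iptables -A ACCT_WAN                           # counter-only rule: read its
                                               # tally with iptables -nvxL ACCT_WAN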
It also gets a little tricky when you have existing iptables rules that are either complicated or use custom chains. You'll then need some knowledge of iptables to retrieve the right values.