I am trying to monitor the state of a UPS (NetVision), using the provided MIB file.
So, upsBatteryStatus should be .1.3.6.1.2.1.33.1.2.1.0
snmpwalk -c COMMUNITY -v1 192.168.1.10 .1.3.6.1.2.1.33.1.2.1.0
iso.3.6.1.2.1.33.1.2.1.0 = INTEGER: 2
And here comes the tricky part:
snmptranslate -Of SOCOMECUPS-MIB::upsBatteryStatus
.iso.org.dod.internet.private.enterprises.socomecSicon.software.network.netvision.upsObjects.upsBattery.upsBatteryStatus
snmptranslate -On SOCOMECUPS-MIB::upsBatteryStatus
.1.3.6.1.4.1.4555.1.1.1.1.2.1
It's different from .1.3.6.1.2.1.33.1.2.1.0, and it doesn't respond with a value.
check_snmp -H 192.168.1.10 -C COMMUNITY -o upsBatteryStatus -w 1 -c #3:7 -m /var/lib/mibs/ietf/NetVision-nv6-unix.mib -l "Battery Status: "
External command error: Error in packet
Reason: (noSuchName) There is no such variable name in this MIB.
Failed object: SOCOMECUPS-MIB::upsBatteryStatus
Any ideas why it isn't recognized as upsBatteryStatus?
There seem to be two objects with the name upsBatteryStatus in two different MIBs: http://www.oidview.com/mibs/4555/SOCOMECUPS-MIB.html and https://www.rfc-editor.org/rfc/rfc1628 . That explains the different OIDs, and there is nothing wrong with it: the OID is the true identifier of an object; the name is just for us humans.
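For example, a quick way to confirm which module owns which OID, assuming the RFC 1628 module is installed under its usual name UPS-MIB (as it is in stock net-snmp):
snmptranslate -On UPS-MIB::upsBatteryStatus
.1.3.6.1.2.1.33.1.2.1
That is the subtree your snmpwalk answered on (plus the .0 instance suffix for the scalar), while SOCOMECUPS-MIB::upsBatteryStatus lives under the Socomec enterprise arc.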
As for the error: I don't know what check_snmp does internally, so I cannot say anything about that. But have you tried this command?
snmpwalk -c COMMUNITY -v1 192.168.1.10 .1.3.6.1.4.1.4555.1.1.1.1.2.1
Helpful commands:
snmptranslate -Tp -m /usr/share/mibs/ietf/NetVision-nv6-unix.mib
and:
"upsBatteryStatus" "1.3.6.1.2.1.33.1.2.1"
| | |
| | +--upsBattery(2)
| | | |
| | | +-- -R-- EnumVal upsBatteryStatus(1)
| | | | Values: unknown(1), batteryNormal(2), batteryLow(3), batteryDepleted(4)
The Nagios check_snmp command that reported the correct value is:
/usr/local/nagios/libexec/check_snmp -H 192.168.1.10 -C COMMUNITY -m /var/lib/mibs/ietf/NetVision-nv6-unix.mib -o upsBatteryStatus -w #0:1 -c #3:7 -l "Battery Status: "
SNMP OK - Battery Status: 2 | 'Battery Status: '=2;1;7;
Thank you for the help.
Introduction
Currently, I'm writing a customized GitHub workflow in which I use curl and grep commands.
GitHub repo
action.yml
- name: get new tag
id: get_new_tag
shell: bash
run: |
temp_result=$(curl -s https://api.github.com/repos/${{inputs.github_repository}}/tags | grep -h "name" | grep -h "${{inputs.selector}}" | head -1 | grep -ho "${{inputs.regex}}")
echo "new-tag=${temp_result}" >> $GITHUB_OUTPUT
Full code here
test for action.yml
uses: ./
with:
files: tests/test1.txt tests/test2.txt
github_repository: MathieuSoysal/file-updater-for-release
prefix: MathieuSoysal/file-updater-for-release#
Full code is here
Problem
I don't understand why, but my GitHub Action doesn't work on macOS.
The given error:
temp_result=$(curl -s https://api.github.com/repos/MathieuSoysal/file-updater-for-release/tags | grep -h "name" | grep -h "" | head -1 | grep -ho "v\?[0-9.]*")
echo "new-tag=${temp_result}" >> $GITHUB_OUTPUT
shell: /bin/bash --noprofile --norc -e -o pipefail {0}
Error: Process completed with exit code 1.
The full error log is here
Question
Does someone know how we can fix this issue?
Alternatively, you can simply use the jq command-line utility for this, which is already installed and available on the GitHub runners. See the preinstalled software.
$ export URL='https://api.github.com/repos/MathieuSoysal/file-updater-for-release/tags'
$ curl -s $URL | jq -r .[0].name
v1.0.3
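For instance, a sketch of the step from the question rewritten around jq; it takes the first tag name as-is, so the selector/regex filtering from the question would still need to be re-expressed as a jq filter if you rely on it:
- name: get new tag
  id: get_new_tag
  shell: bash
  run: |
    # take the name of the first tag returned by the GitHub API
    temp_result=$(curl -s "https://api.github.com/repos/${{inputs.github_repository}}/tags" | jq -r '.[0].name')
    echo "new-tag=${temp_result}" >> $GITHUB_OUTPUT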
The issue comes from the command grep -h "": an empty pattern is not supported by grep on macOS.
The solution is to put something inside it.
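A minimal sketch of that fix inside the run: block, assuming the selector input may legitimately be empty: capture it in a shell variable and fall back to the pattern . (which matches any non-empty line), so BSD grep never receives an empty pattern:
# use '.' as the pattern when the selector input is empty
selector='${{inputs.selector}}'
temp_result=$(curl -s "https://api.github.com/repos/${{inputs.github_repository}}/tags" | grep -h "name" | grep -h "${selector:-.}" | head -1 | grep -ho "${{inputs.regex}}")
echo "new-tag=${temp_result}" >> $GITHUB_OUTPUT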
(maybe it is a "tcpflow" problem)
I wrote a script to monitor HTTP traffic, and I installed tcpflow and then piped it to grep.
This works (you need to make an HTTP request, for example curl www.163.com):
sudo tcpflow -p -c -i eth0 port 80 2>/dev/null | grep '^Host: '
it outputs like this (continuously)
Host: config.getsync.com
Host: i.stack.imgur.com
Host: www.gravatar.com
Host: www.gravatar.com
but I can't pipe it any further.
This does not work (no output):
sudo tcpflow -p -c -i eth0 port 80 2>/dev/null | grep '^Host: ' | cut -b 7-
This does not work either (no output):
sudo tcpflow -p -c -i eth0 port 80 2>/dev/null | grep '^Host: ' | grep H
When I replace sudo tcpflow with cat foo.txt, it works:
cat foo.txt | grep '^Host: ' | grep H
So what's wrong with the pipe, grep, or tcpflow?
Update:
This is my final script: https://github.com/zhengkai/config/blob/master/script/monitor_outgoing_http.sh
To grep a continuous stream, use the --line-buffered option:
sudo tcpflow -p -c -i eth0 port 80 2> /dev/null | grep --line-buffered '^Host'
--line-buffered
Use line buffering on output. This can cause a performance penalty.
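With that flag in place, the full pipeline from the question (including the cut stage) should stream continuously; a sketch:
sudo tcpflow -p -c -i eth0 port 80 2>/dev/null | grep --line-buffered '^Host: ' | cut -b 7-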
Some reflections on buffered output (the stdbuf tool is also mentioned):
Pipes, how do data flow in a pipeline?
I think the problem is stdio buffering; you need to use GNU stdbuf before calling grep:
sudo tcpflow -p -c -i eth0 port 80 2>/dev/null | stdbuf -o0 grep '^Host: '
With -o0, the output (stdout) stream of grep is made unbuffered. The default behavior is to buffer data into 4096-byte chunks [1] before sending it to the next command in the pipeline, and that is what stdbuf overrides here.
[1] Refer to this nice, detailed discussion of the subject.
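Applied to the full pipeline from the question, a sketch (stdbuf is part of GNU coreutils, so a GNU userland is assumed):
sudo tcpflow -p -c -i eth0 port 80 2>/dev/null | stdbuf -o0 grep '^Host: ' | cut -b 7-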
I use the following command to send the IPs of incoming pings to a script:
sudo tcpdump -ne -l -i eth0 icmp and icmp[icmptype]=icmp-echo \
| cut -d " " -f 10 | xargs -L2 ./pong.sh
Unfortunately this gives me:
tcpdump: Unable to write output: Broken pipe
To dissect my commands:
Output the pings from the traffic:
sudo tcpdump -ne -l -i eth0 icmp and icmp[icmptype]=icmp-echo
Output:
11:55:58.812177 IP xxxxxxx > 127.0.0.1: ICMP echo request, id 50776, seq 761, length 64
This will get the IP's from the tcpdump output:
cut -d " " -f 10 # output: 127.0.0.1
Get the output to the script:
xargs -L2 ./pong.sh
This will mimic the following example command:
./pong.sh 127.0.0.1
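For reference, a minimal hypothetical pong.sh stub for testing the plumbing (the real script's contents are not shown in the question):
#!/bin/sh
# pong.sh - hypothetical test stub: print whatever IPs were passed in
echo "pong: $@"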
The strange thing is that the commands work separately (on their own)...
I tried debugging it, but I have no experience with debugging pipes. I checked the commands and they seem fine.
It would seem that cut's stdio buffering is interfering here; replace | xargs ... with | cat in your command line to find out.
FWIW, the command line below works for me (pipe straight to xargs, then use the shell itself to pick out the nth argument). Note the extra tcpdump arguments: -c 10 (just to limit the capture to 10 packets, hence the 10/2 = 5 output lines) and -Q in (only inbound packets):
$ sudo tcpdump -c 10 -Q in -ne -l -i eth0 icmp and icmp[icmptype]=icmp-echo 2>/dev/null | \
xargs -L2 sh -c 'echo -n "$9: "; ping -nqc1 $9 | grep rtt'
192.168.100.132: rtt min/avg/max/mdev = 3.743/3.743/3.743/0.000 ms
192.168.100.132: rtt min/avg/max/mdev = 5.863/5.863/5.863/0.000 ms
192.168.100.132: rtt min/avg/max/mdev = 6.167/6.167/6.167/0.000 ms
192.168.100.132: rtt min/avg/max/mdev = 4.256/4.256/4.256/0.000 ms
192.168.100.132: rtt min/avg/max/mdev = 1.545/1.545/1.545/0.000 ms
$ _
For those coming across this (like me), tcpdump buffering is the issue.
From the man page:
-l Make stdout line buffered. Useful if you want to see the data
while capturing it. For example:
# tcpdump -l | tee dat
or
# tcpdump -l > dat & tail -f dat
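Putting the two observations together, a sketch of the original pipeline with both buffering points handled: tcpdump already has -l, and GNU stdbuf (assumed available, from coreutils) makes cut line-buffered as well:
sudo tcpdump -ne -l -i eth0 icmp and icmp[icmptype]=icmp-echo \
    | stdbuf -oL cut -d " " -f 10 | xargs -L2 ./pong.sh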
I'm working on enabling session multiplexing between two servers.
I want to close all existing sockets for this server (or IP) before creating a new one, and close the newly created one after finishing my task. This is what I've done so far:
remote_ip=192.168.20.2 # user input
remote_port=222
I can create a socket with:
SSHSOCKET=~/.ssh/remote_$remote_ip
ssh -M -f -N -o ControlPath=$SSHSOCKET $remote_ip -p $remote_port
I can find the control path with:
ps x | grep $remote_ip | grep ssh | cut -d '=' -f 2
/root/.ssh/remote_192.168.20.2 192.168.20.2 -p 222
I can close the socket with:
ssh -S /root/.ssh/remote_192.168.20.2 192.168.20.2 -p 64555 -O exit
And I am trying to close the socket with:
ps x | grep $remote_ip | grep ssh | cut -d '=' -f 2 | xargs ssh -S | xargs -i {} "-O exit"
But I get:
Pseudo-terminal will not be allocated because stdin is not a terminal.
Tried using -t and -tt:
ps x | grep $remote_ip | grep ssh | cut -d '=' -f 2 | xargs ssh -Stt | xargs -i {} "-O exit"
ssh: Could not resolve hostname /root/.ssh/remote_192.168.20.2: Name or service not known
xargs: ssh: exited with status 255; aborting
Can anyone please help me with this?
If you want to kill every connection from your machine to a given remote IP address and port, you can do so as follows (using fuser, a tool from the psmisc package included with all major Linux distros):
fuser -k -n tcp ",${remote_ip},${remote_port}"
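Alternatively, since the control path in the question is built deterministically from the IP, there may be no need to parse ps output at all; a sketch reusing the variables from the question:
# the control socket identifies the master connection, so no port is needed
SSHSOCKET=~/.ssh/remote_$remote_ip
ssh -S $SSHSOCKET -O exit $remote_ip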
I have a file named "ips" containing all the IPs I need to ping. To ping them, I use the following code:
cat ips | xargs ping -c 2
but the console shows me ping's usage message, and I don't know how to do this correctly. I'm using macOS.
You need to use the -n1 option with xargs to pass one IP at a time, as ping doesn't accept multiple hosts:
$ cat ips | xargs -n1 ping -c 2
Demo:
$ cat ips
127.0.0.1
google.com
bbc.co.uk
$ cat ips | xargs echo ping -c 2
ping -c 2 127.0.0.1 google.com bbc.co.uk
$ cat ips | xargs -n1 echo ping -c 2
ping -c 2 127.0.0.1
ping -c 2 google.com
ping -c 2 bbc.co.uk
# Drop the UUOC and redirect the input
$ xargs -n1 echo ping -c 2 < ips
ping -c 2 127.0.0.1
ping -c 2 google.com
ping -c 2 bbc.co.uk
With an IP or hostname on each line of the ips file:
( while read ip; do ping -c 2 $ip; done ) < ips
You can also change the timeout with the -W flag, so that if some host isn't up, it won't block your script for too long. The -q flag for quiet output is also useful in this case:
( while read ip; do ping -c1 -W1 -q $ip; done ) < ips
If the file has one IP per line (and it's not overly large), you can do it with a for loop:
for ip in $(cat ips); do
ping -c 2 $ip;
done
You could use fping. It also pings in parallel and has script-friendly output.
$ cat ips | xargs fping -q -C 3
10.xx.xx.xx : 201.39 203.62 200.77
10.xx.xx.xx : 288.10 287.25 288.02
10.xx.xx.xx : 187.62 187.86 188.69
...
With GNU Parallel you would do:
parallel -j0 ping -c 2 {} :::: ips
With -j0 this will run as many jobs in parallel as you have IPs, or as many as the system allows.
It also makes sure the output from different jobs is not mixed together, so if you use the output you are guaranteed not to get half a line from two different jobs.
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU. GNU Parallel instead spawns a new process whenever one finishes, keeping the CPUs active and thus saving time.
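As a sketch of the multi-machine case, assuming passwordless ssh to two hypothetical hosts server1 and server2:
parallel -S server1,server2 -j0 ping -c 2 {} :::: ips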
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
Try this:
cat ips | xargs -i% ping -c 2 %
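A side note: -i is a GNU xargs option, deprecated in favor of -I, and BSD xargs on macOS (which the question uses) only has -I; an equivalent sketch:
cat ips | xargs -I% ping -c 2 %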
As suggested by @Lupus, you can use fping, but the output is not human-friendly: it will scroll off your screen in a few seconds, leaving you with no trace of what is going on. To address this I've just released ping-xray. I tried to make it as visual as possible in an ASCII terminal, and it also creates CSV logs with exact millisecond resolution for all targets.
https://dimon.ca/ping-xray/
Hope you'll find it helpful.