Defining severity level, SNMPTRAP - snmp

I want to send an SNMP trap using the snmptrap command, but I don't know how to set the severity level.
I'm using this:
snmptrap -v 2c -c public host "" MIB-MODULE::notificationName severity s "MINOR"
On the SNMP manager, the perceived severity of the trap is always "Warning".
Thanks in advance!

Severity is not an attribute of an SNMP trap, in general.
Which SNMP manager are you using? Most likely, the manager is mapping some MIB variable from the trap into the Severity field of its internal alarm format, but you're not providing that variable in your snmptrap command.
You probably need to study the documentation of your SNMP manager to find the solution.
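As an illustration only (the MIB module and varbind names below are hypothetical; substitute whatever variable your manager's documentation says it maps), the trap would then carry that variable explicitly:

```shell
# Hypothetical: suppose the manager derives severity from
# MY-ALARM-MIB::alarmSeverity, an INTEGER where 3 means "minor".
# Include that exact varbind in the trap:
snmptrap -v 2c -c public host '' MIB-MODULE::notificationName \
    MY-ALARM-MIB::alarmSeverity i 3
```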


Has anyone managed to filter logs in the macOS `log stream` command using pid?

I've been wracking my brain trying to get the `log stream` command on macOS to work with a pid passed in.
I have an app FooBar with a pid 12345.
The command:
log stream --debug --info --process FooBar
works perfectly fine. But when I use ps aux | grep "FooBar" or Activity Monitor to get the app's pid and then run the command below:
log stream --debug --info --process 12345
I never get any logs. Can anyone please tell me if I'm doing something wrong? I can't find any example of anyone actually using the pid online.
Apple's log facility allows selecting the relevant messages with its "predicate-based filtering". It enables quite elaborate filtering via:
log stream --predicate 'PREDICATE BASED FILTER'
The PREDICATE BASED FILTER is an expression in the predicate DSL, which is outlined here. Fields specific to the log facility are available via
log help predicates
Predicate-based filtering is quite powerful, but it can unfortunately get verbose. It seems that Apple wanted to "elevate" some of the common predicates into top-level flags.
log --process ProcessName
is an example of such "elevated" predicate. Its full form is
log --predicate 'process == "ProcessName"'
Unfortunately, there's no "elevated" predicate for filtering via pid, but it's supported via the full predicate syntax:
log --predicate 'processID == 12345'
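If the pid isn't known in advance, it can be looked up at stream time; a small sketch (FooBar is the app name from the question):

```shell
# Resolve the pid by process name, then filter with the full predicate syntax:
pid=$(pgrep -x FooBar)
log stream --debug --info --predicate "processID == ${pid}"
```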

SNMP pass command returning OID error but apparently running on server

I'm just taking my first steps with SNMP: I'm trying to add the output of a simple check script to SNMP, but I'm facing some issues.
I'm trying to expose a temperature check script for a Raspberry Pi 4 via SNMP to a remote poller, but following most of the guides online has led me nowhere, since I'm stuck with this error every time:
No Such Instance currently exists at this OID
I'm trying to use the pass directive, but I've had no luck getting any result.
Currently this is what I declared in the snmpd.conf file:
pass 1.3.6.1.2.1.25.1.8 /bin/bash /script/check_temp.sh
This is the command output:
/script/check_temp.sh
.1.3.6.1.2.1.25.1.8
integer
589
This is the command output from the poller:
snmpget -c test -v 2c 1.2.3.4 .1.3.6.1.2.1.25.1.8
HOST-RESOURCES-MIB::hrSystem.8 = No Such Instance currently exists at this OID
But if I run snmpd in the foreground I don't actually see any error; instead it seems the script is executed:
sudo snmpd -f -Le -Ducd-snmp/pass -Drun
registered debug token ucd-snmp/pass, 1
registered debug token run, 1
NET-SNMP version 5.7.3
ucd-snmp/pass: pass-running: /bin/bash /usr/script/check_temperature/check_temp.sh -g .1.3.6.1.2.1.25.1.8
run:exec: running '/bin/bash /usr/script/check_temperature/check_temp.sh -g .1.3.6.1.2.1.25.1.8'
run:exec: got 120000 bytes
run:exec: child 7480 finished. result=768
What am I doing wrong? None of the guides I checked mentioned creating MIBs or any steps beyond what I'm already doing, but I'm still not getting what I expect.
Thanks in advance for any hint or suggestion that'll get me on the right way.
I hope this helps anybody trying to configure SNMP checks for their Raspberry Pi, or any other typical Linux device, since most of the guides I checked assume you already know some SNMP concepts that a beginner may not yet have mastered.
Most of the guides tell you to use either extend or pass as follows:
view all included .1.3.6.1.4.1
pass .1.3.6.1.4.1.9999.1 /bin/bash /path/to/command
extend checkcommand /bin/bash /path/to/command
With that configuration I faced the issues from my original question; I was only able to get things working when I added the following lines, giving the extend directive an empty OID branch and making the .1.3.6.1.4.1 branch accessible via view all:
view all included .1.3.6.1.4.1
extend .1.3.6.1.4.1.9999.1 checkcommand /bin/bash /path/to/command
This way I actually get a reply from snmpwalk; walking OID .1.3.6.1.4.1.9999.1 shows the OID that snmpd builds for the script's output, which you can then query directly from a poller.
I know this is probably basic, but the tutorials and guides I read at first were likely taking one or more of these steps for granted, so I hope this can help other SNMP beginners.
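To make the last step concrete, this is roughly how the resulting OID can be discovered from the poller (the community string and address are placeholders):

```shell
# Walk the branch registered with extend; the script's output appears
# under the NET-SNMP-EXTEND-MIB columns (nsExtendOutput1Line etc.):
snmpwalk -v 2c -c public 192.168.1.10 .1.3.6.1.4.1.9999.1
# Note the full OID of the output value and query it directly:
# snmpget -v 2c -c public 192.168.1.10 <full OID from the walk>
```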

Is there any way to check email sent success acknowledgement in ksh | UNIX Shell Scripting

I need to send generated CSV files at regular intervals using a script.
I am using uuencode and mailx for this.
But is there any method to know whether the email was sent successfully? Any kind of acknowledgement or feedback?
It should at least report any error. Also, the file is confidential and must not be diverted somewhere it shouldn't go.
Edit: Code being used for mailing.
subject="Something happened"
to="na734#company.com"
body="Attachment Test"
attachment=/home/iv315/timelog_file_150111.csv
(cat test_msg.txt; uuencode $attachment somefile.csv) | mailx -s "$subject" "$to"
If you are using mailx (e.g. nail) from the command line, you can always use the sendwait option, which according to the fine manual:
sendwait
When sending a message, wait until the mail transfer agent
exits before accepting further commands.
If the mail transfer agent returns a non-zero exit status,
the exit status of mailx will also be non-zero.
You can also add your own address to the To: field; if you receive the message yourself, there is a chance that at least the sending process was successful.
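Putting that together with the command from the question, a sketch (assuming a mailx/nail build that accepts -S to set options on the command line):

```shell
subject="Something happened"
to="na734#company.com"
attachment=/home/iv315/timelog_file_150111.csv

# sendwait makes mailx propagate the MTA's non-zero exit status:
if (cat test_msg.txt; uuencode "$attachment" somefile.csv) | \
       mailx -S sendwait -s "$subject" "$to"
then
    echo "message handed off to the MTA"
else
    echo "mailx reported a failure" >&2
fi
```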

Is there a portable way using unix shell commands to tell if there is a configured IPv6 interface?

Specifically I'm looking for a way to tell if the system has a network interface that is configured with a global scoped IPv6 address. Loopback and link scoped addresses "don't count".
What I want to achieve is to match whether getaddrinfo() will return any IPv6 addresses with the AI_ADDRCONFIG hint flag.
I could do this by writing some code which looks up the IPv6 loopback address ("::1") using getaddrinfo(). But this is part of the tests for a fairly involved build procedure and it would be much simpler not to have to build an executable just to test this rather simple thing.
The best I've found so far only works for Linux and uses the ip command that comes with recent distros.
if ip -f inet6 -o addr | cut -f 9 -s -d' ' | grep global > /dev/null ; then
echo "IPv6 addresses configured continuing"
else
echo "No global IPv6 addresses configured - skipping test"
exit 0
fi
However this relies on the current format of the output from ip which I doubt is guaranteed and doesn't exist on other versions of Unix (e.g. FreeBSD).
I don't think that ifconfig provides a similar enough output across different Unix versions to be of use.
Are there any other tools I'm missing?
For now, a pretty good way to see if you have at least some v6 connectivity is to check the result of
ping6 -c 10 ipv6.google.com
It is a kludge, but it should work on most Unices, and possibly more.
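Wrapped into the build test from the question, it might look like this (flag spellings vary slightly across platforms; -c is widely supported):

```shell
# Check the exit status of ping6 instead of parsing interface output:
if ping6 -c 3 ipv6.google.com > /dev/null 2>&1 ; then
    echo "IPv6 connectivity detected - continuing"
else
    echo "No IPv6 connectivity - skipping test"
    exit 0
fi
```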

Use logging in Bash script

I have a bash script from which I need to write logs. Currently I use a block like the one below:
#log stop request
{
local now=$(date +"%a %b %d %Y %H:%M:%S")
printf "batch stop request successful. \n"
printf " Time :: %s\n" "$now"
printf " PID of process :: %s\n" "${PID[0]}"
} >> "${mylogfile}"
where the mylogfile variable holds the name of the logfile.
The problem with this approach is that when 2 or more instances are running, the logs tend to get messed up, with writes from different instances interleaved.
Please note I used the block thinking it would make the log be written to the file in one go, thus avoiding the problem.
I have seen the logger command in Vivek Gite's post, but the problem is that it does not write to a file I can specify; it writes to /var/log/messages instead.
Any help is much appreciated.
Thanks and Regards
Sibi
POSIX (IEEE Std 1003.1-2001) does not define the behavior of concurrent write() syscalls sending data to the same file, hence you may obtain different results depending on your platform. You can try to merge all printfs into one in the hope that this will work, but even if it does there is no guarantee that it will in the future or on a different platform.
Rather than using concurrency control and flushing to ensure the writes are sequenced, you can send the messages to a third process which writes your log messages sequentially to a file on behalf of all the processes. In fact, this is what is done with logger and syslog in the post you cited. The logger command does not send the messages to /var/log/messages; it sends them to syslog, which can be configured to save the log messages anywhere you like. Changing this configuration usually requires administrative privileges, though.
If you can't or don't want to use syslog, you can also use netcat as a logging server. Run this to multiplex all incoming log messages from all scripts into the file (this job should remain running in the background all the time, you can also run it in a screen):
nc -luk localhost 9876 > shared_log_file &
(port 9876 is just an example) and log in each script this way:
printf "Log message\n" > /dev/udp/localhost/9876
You can also use a custom UDP server instead of netcat (e.g. like this one).
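If neither syslog nor a UDP logging server is an option, advisory locking can serialize whole records instead; a minimal sketch using util-linux flock(1) (the logfile path is just an example):

```shell
#!/bin/bash
logfile=/tmp/shared_batch.log

log_msg() {
    {
        flock -x 9    # take an exclusive lock on fd 9 (the logfile)
        printf '%s %s\n' "$(date +'%a %b %d %Y %H:%M:%S')" "$*" >&9
    } 9>>"$logfile"   # open the logfile for appending as fd 9
}

log_msg "batch stop request successful"
```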
The block does not group the printfs in any way, as you have discovered. What you could do, however, is run all of those commands in a sub-shell (between parentheses) and then redirect the sub-shell's output!
Depending on the nature of your logging, it might be well-suited for the system log instead of its private logfile. In this case, the logger program could be a very good alternative to hand-rolled logging.
Try putting the 3 printfs as one, i.e.
printf "Epsilon batch stop request successful. \n Time :: %s\n PID of process :: %s\n" "$now" "${PID[0]}"
You may still get interleaving.
Why not start each instance with its own log file, using ${PID} in your logfile name to keep them separate? i.e.
>> "${mylogfile}.${PID}"
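A self-contained sketch of the per-instance idea, using the shell's own pid $$ (the path is an example):

```shell
#!/bin/bash
mylogfile="/tmp/mybatch.$$.log"   # one file per running instance

printf 'instance %s: batch stop request successful\n' "$$" >> "$mylogfile"
```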
Edit
As you're hoping to use logger, check out the man page:
logger(1) - Linux man page
Name
logger - a shell command interface to the syslog(3) system log module
Synopsis
logger [-isd] [-f file] [-p pri] [-t tag] [-u socket] [message ...]
Description
Logger makes entries in the system log. It provides a shell command interface to the syslog(3) system log module.
Options:
-i  Log the process id of the logger process with each line.
-s  Log the message to standard error, as well as the system log.
-f file
Log the specified file.
.....
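For example, tagging the entries lets syslog route them to a file of your choosing (the tag and the rsyslog rule below are assumptions about a typical setup):

```shell
# -i logs the pid, -t sets a tag that syslog can filter on:
logger -i -t mybatch "batch stop request successful"
# With rsyslog, a rule such as this (e.g. in /etc/rsyslog.d/mybatch.conf)
# routes those entries to a dedicated file:
#   :syslogtag, startswith, "mybatch" /var/log/mybatch.log
```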
That looks like what you want.
I hope this helps.
