Custom MIB for PostgreSQL SNMP trap in Python (pysnmp)

I want to send an SNMP trap using the pysnmp library for a database (PostgreSQL): when the database goes down, send one trap, and when it comes back up, send another.
So my question is: how do I define or create my own MIB file for this in Python?
Thanks in advance.

You may not absolutely require a MIB if all you want to do is send an SNMP TRAP. But I do not really know your requirements. Why do you think you need a MIB? Is it that you run some sort of NMS on the receiving side that requires a MIB to analyse the event?
What you could do without a MIB is to choose TRAP message contents that sufficiently describe the event using just OID-value pairs (i.e. the mandatory TRAP contents plus any additional OID-value pairs you need), then have the receiver of the TRAP analyse it accordingly.
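For example, a trap carrying only OID-value pairs could be sent with pysnmp's high-level API roughly as in this sketch; the OIDs under 1.3.6.1.4.1.99999 are made-up placeholders for your own enterprise subtree, and the receiver address is also an assumption:

    # Minimal sketch using pysnmp's high-level API (pysnmp 4.x).
    # All OIDs below are placeholders under a made-up enterprise subtree.
    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectIdentity, ObjectType, NotificationType, OctetString,
        sendNotification,
    )

    def send_db_trap(state):
        # state is e.g. 'down' or 'up'; it travels as a plain OID-value pair
        errorIndication, errorStatus, errorIndex, varBinds = next(
            sendNotification(
                SnmpEngine(),
                CommunityData('public'),                 # SNMPv2c community
                UdpTransportTarget(('127.0.0.1', 162)),  # trap receiver
                ContextData(),
                'trap',
                NotificationType(
                    ObjectIdentity('1.3.6.1.4.1.99999.1.0.1')   # hypothetical trap OID
                ).addVarBinds(
                    ObjectType(ObjectIdentity('1.3.6.1.4.1.99999.1.1.1'),
                               OctetString('postgresql ' + state))
                )
            )
        )
        if errorIndication:
            print(errorIndication)

    send_db_trap('down')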
If you still need a MIB, you could just create one in your text editor. Take one of the existing MIBs as an example. You can test-compile it with mibdump.py to catch possible syntax errors. Once you are done, refer pysnmp to your TRAP object in the MIB so that pysnmp can compile and use it.
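Once the MIB compiles cleanly, the only change on the pysnmp side is to resolve the notification by name against your module instead of spelling out numeric OIDs; a sketch, assuming a hypothetical module called PG-STATUS-MIB with a NOTIFICATION-TYPE named pgDown, compiled by mibdump.py into its default output directory:

    # Resolve the trap by name against the compiled custom MIB
    NotificationType(
        ObjectIdentity('PG-STATUS-MIB', 'pgDown').addMibSource('~/.pysnmp/mibs')
    )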

Related

Recovering control of a process with a closed input descriptor

Doing some tests in scm (a Scheme interpreter), I intentionally closed the current-input-port (equivalent to the standard input file descriptor). Since the program runs in a REPL, things went crazy, systematically printing an error message. My question is: how can I recover control of the process, that is, how can I reestablish the input file descriptor of such a process?
Searching for "changing file descriptor of a running process" or something similar, I couldn't find a helpful article.
Thanks in advance
System information: Debian 10.
You almost certainly can't, although this does slightly depend on how the language-level ports are mapped to the underlying OS-level I/O system.
If what you do is close the OS-level standard input then all is lost:
the REPL tries to read from standard input, gets an error as it's closed;
it tries to raise some error which will involve prompting the user for input ...
... from standard input, which is closed, so it gets an error;
game over.
The only way to survive this is for one of two things to be true:
either you've wrapped an error handler around the code which is already prepared to deal with this;
or the implementation is smart enough to recognise that it's getting closed-port errors in its closed-port error handler and gives up in some smart way.
Basically once the OS level standard input is gone anything that needs to get input from it is doomed: you can't put it back without OS-level surgery on the process.
However it's possible that the implementation maps a single OS-level I/O stream to multiple language-level streams, and closing only one of those streams would leave the system with some other stream-of-last-resort to which it can still talk, and which still refers to the OS-level standard input. Common Lisp is an example of a system which can (depending on configuration) do this. It has, for instance, *standard-input*, *error-output*, *query-io*, *terminal-io* and other streams, and it's very possible to be in a situation where, for instance, *standard-input* has been closed, causing read errors, but *query-io* still points somewhere with a human on the end of it.
I don't know if scm does that.
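To make the failure mode concrete, here is a small sketch in Python (the fd-level mechanics are the same whatever language sits on top). The /dev/tty re-open at the end is the kind of OS-level surgery meant above, and it only helps if you can still get the process to execute code at all:

    import os, sys

    os.close(0)                    # simulate closing the OS-level standard input

    try:
        sys.stdin.readline()       # any further read from fd 0 now fails
    except OSError as err:
        print("read failed:", err, file=sys.stderr)

    # The only way back is to point fd 0 at something else, e.g. the
    # controlling terminal (assumes one exists):
    fd = os.open("/dev/tty", os.O_RDONLY)
    os.dup2(fd, 0)                 # fd 0 now refers to the terminal again
    os.close(fd)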

Confusion about Ruby's IO#(read/write)_nonblock calls

I am currently doing the Ruby on the Web project for The Odin Project. The goal is to implement a very basic webserver that parses and responds to GET or POST requests.
My solution uses IO#gets and IO#read(maxlen) together with the Content-Length Header attribute to do the parsing.
Other solutions use IO#read_nonblock. I googled it, but was quite confused by the documentation. It's often mentioned together with Kernel#select, which didn't really help either.
Can someone explain to me what the nonblock calls do differently than the normal ones, how they avoid blocking the thread of execution, and how they play together with the Kernel#select method?
explain to me what the nonblock calls do differently than the normal ones
The crucial difference in behavior is what happens when there is no data available to read at call time but the stream is not yet at EOF:
read_nonblock() raises an exception of kind IO::WaitReadable
normal read(length) waits until length bytes are read (or EOF)
how they avoid blocking the thread of execution
According to the documentation, #read_nonblock uses the read(2) system call after setting O_NONBLOCK on the underlying file descriptor.
how they play together with the Kernel#select method?
There's also IO.select. We can use it in this case to wait for availability of input data, so that a subsequent read_nonblock() won't cause an error. This is especially useful if there are multiple input streams, where it is not known from which stream data will arrive next and for which read() would have to be called.
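The same wait-then-read pattern, sketched here in Python for illustration (Ruby's IO.select and read_nonblock map onto select.select and a non-blocking read in essentially the same way; the socket pair just stands in for any stream that may or may not have data yet):

    import select, socket

    a, b = socket.socketpair()      # stand-in for any readable stream
    a.setblocking(False)            # like setting O_NONBLOCK on the fd
    b.sendall(b"hello")

    # Wait until the stream is readable instead of blocking inside read():
    readable, _, _ = select.select([a], [], [], 5.0)
    for sock in readable:
        try:
            data = sock.recv(4096)  # analogous to read_nonblock(maxlen)
            print("got:", data)
        except BlockingIOError:
            pass                    # spurious readiness; just select again later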
With a blocking write you wait until the bytes have been written to the file; a nonblocking write, on the other hand, returns immediately. This means you can continue executing your program while the operating system writes the data to the file asynchronously. Then, when you want to write again, you use select to see whether the file is ready to accept the next write.

Bosun adding external collectors

What is the procedure for defining new external collectors in Bosun using scollector?
Can we write python or shell scripts to collect data?
The documentation around this is not quite up to date. You can do it as described in http://godoc.org/bosun.org/cmd/scollector#hdr-External_Collectors , but we also support JSON output which is better.
Either way, you write something and put it in the external collectors directory, followed by a frequency directory, and then an executable script or binary. Something like:
<external_collectors_dir>/<freq_sec>/foo.sh.
If the frequency directory is 0, then the script is expected to run continuously, and you put a sleep inside the code (this is my preferred method for external collectors). The script outputs the telnet format, or the undocumented JSON format, to stdout. Scollector picks it up and queues that information for sending.
I created an issue to get this documented not long ago https://github.com/bosun-monitor/bosun/issues/1225. Until one of us gets around to that, here is the PR that added JSON https://github.com/bosun-monitor/bosun/commit/fced1642fd260bf6afa8cba169d84c60f2e23e92
Adding to what Kyle said, you can take a look at some existing external collectors to see what they output. Here is one written in Java that one of our colleagues wrote to monitor JVM stuff. It uses the text format, which is simply:
metricname timestamp value tag1=foo tag2=bar
If you want to use the JSON format, here is an example from one of our collectors:
{"metric":"exceptional.exceptions.count","timestamp":1438788720,"value":0,"tags":{"application":"AdServer","machine":"ny-web03","source":"NY_Status"}}
And you can also send metadata:
{"Metric":"exceptional.exceptions.count","Name":"rate","Value":"counter"}
{"Metric":"exceptional.exceptions.count","Name":"unit","Value":"errors"}
{"Metric":"exceptional.exceptions.count","Name":"desc","Value":"The number of exceptions thrown per second by applications and machines. Data is queried from multiple sources. See status instances for details on exceptions."}`
Or send error messages to stderr:
2015/08/05 15:32:00 lookup OR-SQL03: no such host
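Putting the pieces together, a continuously-running external collector (dropped under a 0-frequency directory) might look like this sketch in Python; the metric name, the host tag, and the load-average measurement are placeholders for whatever you actually want to collect:

    #!/usr/bin/env python
    # Emits the scollector text ("telnet") format on stdout forever:
    #   metricname timestamp value tag1=foo tag2=bar
    import os, socket, sys, time

    HOST = socket.gethostname()

    while True:
        load1 = os.getloadavg()[0]   # placeholder measurement (Unix only)
        ts = int(time.time())
        print("example.load.1min %d %s host=%s" % (ts, load1, HOST))
        sys.stdout.flush()           # make the line visible to scollector immediately
        time.sleep(15)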

How to get OIDs from a MIB file?

I want to read all the objects from the MIB file that a manager has.
I developed a tool to get some data from an SNMP-enabled agent. I want to enhance that tool by showing all the OIDs from the manager's MIB file.
I am using the NET-SNMP library.
I looked in the following folder:
/usr/local/share/snmp/mibs/
It contains many MIB files, but how can I form a list of the OIDs they define?
I went through the MIBs and saw the structures, but how do I get the OIDs of each and every object mentioned in the MIB files?
I want to list all the OIDs as follows:
SNMPv2-MIB::sysDescr.0 = .1.3.6.1.2.1.1.1.0
SNMPv2-MIB::sysObjectID.0 = .1.3.6.1.2.1.1.2.0
... etc
I want to scan all the MIB files and find all the OIDs from the files.
How do I do this?
Use the snmptranslate command from the Net-SNMP library. Try it with the following parameters:
-M "directory containing your MIB file"
-m ALL
-Pu
-Tso
After some problems I managed to generate the OIDs using the following command.
snmptranslate -Pu -Tz -M ~/.snmp/mibs:/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf:/usr/share/mibs/site:/usr/share/snmp/mibs:/usr/share/mibs/iana:/usr/share/mibs/ietf:/usr/share/mibs/netsnmp:`pwd` -m module_name_NOT_file_name > module_name.oid
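If you want that list from a program rather than the shell, one simple option is to drive snmptranslate from a script. A sketch in Python reusing the same flags as above; the MIB directory and module name are placeholders:

    import subprocess

    def dump_oids(module, mib_dir="/usr/share/snmp/mibs"):
        """Run snmptranslate as above and return its report as a list of lines."""
        result = subprocess.run(
            ["snmptranslate", "-Pu", "-Tz", "-M", mib_dir, "-m", module],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.splitlines()

    for line in dump_oids("SNMPv2-MIB"):
        print(line)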
To pull the OIDs from a running SNMP server you might like to use the snmpwalk tool with the -Ci option. The tool comes with Net-SNMP.
These other two SO Q&As show how you can do it without walking a running system:
"net-snmp sample code to parse MIB file and extract trap related information from it": The answer shows the top-level framework of a C parser which is based on top of the Net-SNMP library.
"Get oid's type (syntax) from MIB using Net-SNMP API": It is the specific function to handle an OID.
That is only the starting point. There is a lot of coding ahead from there.
Update: Another nice tool is the Perl MIB compiler packaged as SNMP::MIB::Compiler. With a Perl script you can pull all the MIB elements/components into internal data structures and then pick out any information you need, either by looking into the structure tree or by dumping the tree and post-parsing the dump.

Unwanted buffering when filtering console output in Win32

My question is related to "Turn off buffering in pipe" albeit concerning Windows rather than Unix.
I'm writing a Make clone and to stop parallel processes from thrashing each others' console output I've redirected the output to pipes (as described in here) on which I can do any filtering I want. Unfortunately long-running processes now buffer up their output rather than sending it in real-time as they would on a console.
From peeking at the MSVCRT sources it seems the root cause is that GetFileType() is used to check whether the standard I/O handles are attached to a console, which then sets an internal flag and ends up disabling buffering.
Apparently a separate array of inheritable file handles and flags can also be passed to the child through the undocumented lpReserved2 member of the STARTUPINFO structure when creating the process. About the only working solution I've figured out is to use this list and just lie about the device type when setting the flags for stdout/stderr.
Now then... Is there any sane way of solving this problem?
There is not. Yes, GetFileType() tells it that stdout is no longer a char device, and _isatty() returns false, so the CRT switches the output stream to buffered mode. That's important for getting reasonable throughput; flushing output one character at a time is only acceptable when a human is looking at it.
You would have to relink the programs you are trying to redirect against a customized version of the CRT. No doubt, if that were possible, you wouldn't be messing with this in the first place. Patching GetFileType() is another un-sane solution.
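The underlying behavior is easy to reproduce from the child's point of view in most runtimes, not just MSVCRT. For instance, this small Python sketch prints each line immediately when run on a console, but when its stdout is redirected to a pipe isatty() is false and the output sits in a block buffer until it fills or the process exits:

    import sys, time

    print("stdout is a tty:", sys.stdout.isatty())
    for i in range(5):
        print("tick", i)   # buffered when piped; pass flush=True to force it out
        time.sleep(1)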
