SNMP GET SnmpException when a V1 response has trailing data bytes when using SharpSnmp

A SharpSnmpLib SNMP V1 GET throws an SnmpException when querying a certain vendor's equipment located in remote networks. Other software like iReasoning MIB Browser, SNMPB, or SnmpSharpNet works OK on the same OID and equipment.
The error varies even though the equipment and the OID are the same, just located in a different network. It seems like a Data segment is added to the end of the UDP packet. On one piece of equipment the error message might be "BER end of file"; on another identical piece of equipment the error message is "unsupported data type:34", or "unsupported data type:115", and so on. Many different data types are reported for the same OID, but from different pieces of equipment.
The error occurs in the project source file "MessageFactory.cs" in the ParseMessage routine. If I catch the error and continue, the program works OK. I ignore the error for the trailing portion of data bytes that are not properly parsed.
The Wireshark packet captures are also shown below:
And here is another error from an identical SNMP device and OID, just a different IP address.
It seems like the trailing Data portion causes the API to throw an error because it does not recognize it as a valid variable. Yet other software packages handle this without any apparent errors. Unless I find a better solution, I will have to modify the SharpSnmpLib source code to use the API. My modification involves catching the error and moving on. The first variable in the loop is already found and produces the proper value. The error occurs when continuing past the first variable, because the stream has not reached its end.

Use the overload that takes (message, start, length, registry).
This works because it reads only from start for length bytes and never touches the trailing bytes.
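The reason the bounded overload works can be illustrated with a sketch (in Python rather than C#, and not SharpSnmpLib's actual code): a BER-encoded message declares its own length in its header, so a parser that honours that declared length never reads the trailing bytes.

```python
def ber_message_span(buf, start=0):
    """Return (payload_start, payload_len) of the BER SEQUENCE at `start`.

    Hypothetical helper illustrating the fix: honour the length declared
    in the BER header instead of reading to the end of the UDP datagram.
    """
    if buf[start] != 0x30:  # an SNMP message is a BER SEQUENCE
        raise ValueError("not a BER SEQUENCE")
    length_octet = buf[start + 1]
    if length_octet < 0x80:          # short form: the octet is the length
        header = 2
        length = length_octet
    else:                            # long form: low 7 bits = number of length octets
        n = length_octet & 0x7F
        header = 2 + n
        length = int.from_bytes(buf[start + 2:start + 2 + n], "big")
    return start + header, length

# A well-formed 5-byte message followed by 3 bytes of trailing garbage:
packet = bytes([0x30, 0x05, 1, 2, 3, 4, 5, 0xFF, 0xFE, 0xFD])
payload_start, payload_len = ber_message_span(packet)
message = packet[:payload_start + payload_len]   # parse only this slice
trailing = packet[payload_start + payload_len:]  # safely ignored
```

A parser driven by `payload_len` simply never sees `trailing`, which is exactly what the (message, start, length, registry) overload achieves.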

Related

Recovering control of a closed input descriptor process

Doing some tests in scm (a Scheme interpreter), I intentionally closed the current-input-port (equivalent to the standard input file descriptor). Since the program runs in a REPL, things went crazy, systematically printing an error message. My question is: how can I recover control of the process? That is, how can I re-establish the input file descriptor of such a process?
Searching for "changing file descriptor of a running process" or something similar, I couldn't find a helpful article.
Thanks in advance
System information: Debian 10.
You almost certainly can't, although this does slightly depend on how the language-level ports are mapped to the underlying OS-level I/O system.
If what you do is close the OS-level standard input then all is lost:
the REPL tries to read from standard input, gets an error as it's closed;
it tries to raise some error which will involve prompting the user for input ...
... from standard input, which is closed, so it gets another error;
game over.
The only way to survive this is for one of two things to be true:
either you've wrapped an error handler around the code which is already prepared to deal with this;
or the implementation is smart enough to recognise that it's getting closed-port errors in its closed-port error handler and gives up in some smart way.
Basically once the OS level standard input is gone anything that needs to get input from it is doomed: you can't put it back without OS-level surgery on the process.
However, it's possible that the implementation maps a single OS-level I/O stream to multiple language-level streams, and closing only one of these streams would leave the system with some other stream-of-last-resort to which it can still talk, and which still refers to the OS-level standard input. Common Lisp is an example of a system which can (depending on configuration) do this. It has, for instance, *standard-input*, *error-output*, *query-io*, *terminal-io* and other streams, and it's very possible to be in a situation where, for instance, *standard-input* has been closed, causing read errors, but *query-io* still points somewhere with a human on the end of it.
I don't know if scm does that.
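The "prepared in advance" case can be demonstrated at the descriptor level. A minimal Python sketch, using a pipe as a stand-in for standard input: the descriptor can only be restored because a duplicate was saved before it was closed.

```python
import os

# A pipe stands in for the process's standard input.
r, w = os.pipe()

saved = os.dup(r)      # insurance taken *before* anything goes wrong
os.close(r)            # the "closed current-input-port" disaster

try:
    os.fstat(r)        # any operation on r now fails with EBADF
    recovered_too_early = True
except OSError:
    recovered_too_early = False

os.dup2(saved, r)      # surgery: only possible because `saved` still exists
os.write(w, b"hi")
data = os.read(r, 2)   # the descriptor number works again
```

Without the `os.dup` taken beforehand there is nothing to `dup2` back, which is the sense in which the descriptor "can't be put back" from inside the doomed REPL.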

What sets SNMP error and error-index fields

I have read several RFCs about the SNMP protocol, and they are usually written in cryptic and opaque style, so I have probably missed the proper information and I apologize in advance for what is probably a simple question...
I am unclear about what kind of error in a get command for instance would set the error and error index fields in the snmp get-response message. Since I have been using Net-SNMP to send commands (and the snmp simulator at demo.snmplabs.com), I have not been able to send improperly formatted messages to see what kind of response I would get. I have started writing my own SNMP test tool (in Visual Basic) just to be able to send improperly formatted messages but it will be a fair amount of work before I can use it as a validated test tool.
When sending requests for non-existing OIDs or with wrong data type, it appears that Net-SNMP handles the errors without needing error/error-index values.
Any suggestion appreciated
It should be possible to generate errors for SNMP GETs, but it is perhaps easier to start by generating errors for SETs.
➜ snmpset -v 2c -c private demohost sysName.0 s "foo"
SNMPv2-MIB::sysName.0 = STRING: foo
In the example below, the agent rejects the SET:
➜ snmpset -v 2c -c private demohost ucdDemoPublicString.0 s "TEST"
Error in packet.
Reason: noCreation (That table does not support row creation or that object can not ever be created)
Failed object: UCD-DEMO-MIB::ucdDemoPublicString.0
Use -d to see the packets back and forth.
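For reference, the error-status values an SNMPv1 agent can put in a GetResponse are fixed by RFC 1157. A v1 GET for a nonexistent OID, for example, comes back with noSuchName(2) and error-index set to the (1-based) position of the offending variable binding:

```python
# error-status values defined by RFC 1157 (SNMPv1); SNMPv2c (RFC 3416)
# extends this list with more specific codes such as wrongType and notWritable.
V1_ERROR_STATUS = {
    0: "noError",
    1: "tooBig",      # the reply would not fit in a single message
    2: "noSuchName",  # v1 GET/SET of an OID the agent does not implement
    3: "badValue",    # SET with a value of the wrong type or range
    4: "readOnly",
    5: "genErr",      # catch-all for any other agent-side failure
}
```

The noCreation reason shown in the snmpset example above is one of the finer-grained SNMPv2c codes; an SNMPv1 agent would have had to fall back on one of the six values here.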

What can lead to failures in appending data to a file?

I maintain a program that is responsible for collecting data from a data acquisition system and appending that data to a very large (size > 4GB) binary file. Before appending data, the program must validate the header of this file in order to ensure that the meta-data in the file matches that which has been collected. In order to do this, I open the file as follows:
data_file = fopen(file_name, "rb+");
I then seek to the beginning of the file in order to validate the header. When this is done, I seek to the end of the file as follows:
_fseeki64(data_file, _filelengthi64(_fileno(data_file)), SEEK_SET);
At this point, I write the data that has been collected using fwrite(). I am careful to check the return values from all I/O functions.
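The sequence described above (open for update, validate the header, seek to the end, append, flush) can be sketched in Python; this is an illustration of the pattern with a made-up 4-byte header, not the author's C code.

```python
import os
import tempfile

MAGIC = b"DAQ1"                      # stand-in for the real meta-data header

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:          # create a file with a 4-byte header
    f.write(MAGIC)

with open(path, "r+b") as f:         # same spirit as fopen(file_name, "rb+")
    if f.read(4) != MAGIC:           # validate the header first...
        raise RuntimeError("meta-data mismatch")
    f.seek(0, os.SEEK_END)           # ...then seek to the end of the file
    f.write(b"\x01\x02\x03")         # append the newly collected data
    f.flush()                        # the point where the Win32 error surfaced
    os.fsync(f.fileno())             # force the data out to the file system

size = os.path.getsize(path)
os.remove(path)
```

Flushing explicitly (and checking it) is what eventually exposed the failure described below, since buffered writes can succeed long before the file system rejects the underlying operation.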
One of the computers (Windows 7 64-bit) on which we have been testing this program intermittently shows a condition where the data appears to have been written to the file, yet neither the file's last-changed time nor its size changes. If any of the calls to fopen(), fseek(), or fwrite() fail, my program will throw an exception, which will result in aborting the data collection process and logging the error. On this machine, none of these failures seem to be occurring. Something that makes the matter even more mysterious is that, if a restore point is set on the host file system, the problem goes away only to re-appear intermittently at some future time.
We have tried to reproduce this problem on other machines (a Windows Vista 32-bit operating system) but have had no success in replicating the issue (this doesn't necessarily mean anything, since the problem is so intermittent in the first place).
Has anyone else encountered anything similar to this? Is there a potential remedy?
Further Information
I have now found that the failure occurs when fflush() is called on the file, and that the Win32 error being returned by GetLastError() is 665 (ERROR_FILE_SYSTEM_LIMITATION). Searching Google for this error leads to a bunch of reports related to "extents" for SQL Server files. I suspect that there is some sort of journaling resource that the file system is exhausting, because we are growing a large file by opening it, appending a chunk of data, and closing it. I am now looking for an understanding of this particular error, with the hope of coming up with a valid remedy.
The file append is failing because of a file system fragmentation limit. The question was answered in What factors can lead to Win32 error 665 (file system limitation)?

What is the second parameter of TCPSocket.send in Ruby?

I am using this line to send a message via a Ruby (1.8.7) socket:
##socket.send login_message, 0
(This works fine)
What is the second parameter for? I can't find the send method in the Ruby API docs...
I first thought that it was some C style length of the message. This is why I used login_message.length as second parameter. That worked but I encountered a strange behavior:
Everything works fine when the second parameter is an odd number. If it's an even number, the last character gets lost on receipt at the other side (the other side is a C++ program with a C socket). I inspected the network traffic with Wireshark and noticed that the packets look good. All the data is complete. Why is the last character lost when I receive it?
Thank you
Lennart
This is the flags parameter, the same as the last parameter to the send() system call. Normally it should be 0, but it may be something like Socket::MSG_OOB (to send out-of-band data). Passing the message length there means you were setting arbitrary flag bits, which explains the erratic behaviour you observed. For Ruby 1.9 this is documented under BasicSocket.
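Ruby's send maps onto the POSIX send(2) call, so the same flags argument appears in other languages too. A Python sketch of the correct usage, with flags left at 0, on a local socket pair:

```python
import socket

# A connected pair of local sockets: `a` plays the Ruby client,
# `b` plays the C++ receiver from the question.
a, b = socket.socketpair()

msg = b"login"
sent = a.send(msg, 0)   # second argument is `flags`, NOT a length
data = b.recv(64)       # the whole message arrives intact

a.close()
b.close()
```

The return value of send is the number of bytes actually transmitted; the length of the message is taken from the message itself, never from the flags argument.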

When deleting SMS, they don't get deleted

I connected a GSM/GPRS-modem to my microcontroller and everything works fine.
When I want to delete all messages in the ME storage, I should use this command:
AT+CMGD=1,4
->OK
The delete flag '4' indicates that I want to delete all messages, and the index '1' is ignored. However, when I check whether the storage is empty, I get:
AT+CPMS?
+CPMS: 8,100,8,100,8,100
Indicating that the memory is still occupied and no message got deleted.
Does anyone know what I'm doing wrong?
Thanks in advance!
I use the CMGD command to delete messages.
As far as I can see the CMGR command is used to read messages and not delete them.
Edit (since you were using the CMGD command)
It is possible that your modem doesn't support multiple parameters to the CMGD command - my Siemens modem does not (or if it does it doesn't list it in the AT command set document). Instead of deleting all the messages in a single command I do a CMGL to get read messages, parse the results to get the index and them delete them one at a time using CMGD. If you do find another way I'd be interested.
Use the AT+CMGD=? command to find valid values of the parameters.
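The CMGL-then-delete-one-at-a-time workflow can be sketched as follows; the response text and phone number are purely illustrative, and in a real program each AT+CMGD command would be written to the modem's serial port:

```python
import re

def cmgl_indexes(response):
    """Extract message indexes from a text-mode AT+CMGL response.

    Text-mode list lines have the form: +CMGL: <index>,<stat>,<oa>,...
    """
    return [int(m.group(1))
            for m in re.finditer(r'^\+CMGL: (\d+),', response, re.M)]

# Illustrative modem response (two stored messages at indexes 1 and 3):
reply = ('+CMGL: 1,"REC READ","+441234567890",,"24/01/09,10:26:26+04"\r\n'
         'This is text message 1\r\n'
         '+CMGL: 3,"REC READ","+441234567890",,"24/01/09,10:26:49+04"\r\n'
         'This is text message 2\r\n'
         'OK\r\n')

commands = [f"AT+CMGD={idx}" for idx in cmgl_indexes(reply)]
```

Deleting by explicit index this way sidesteps any question of whether the modem supports the second (delete-flag) parameter at all.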
Edit (since you verified the modem supports CMGD with two parameters)
I don't know what the problem is.
I did notice that your CPMS command gives different results to mine, example of mine:
AT+CPMS?
+CPMS: "SM",10,10,"MT",12,35,"MT",12,35
Yours doesn't have any storage memory string. I'm guessing the command you actually did was:
AT+CPMS="ME"
When you switch to ME storage and do a CMGL command does it list the undeleted messages?
Try setting the memory to ME using AT+CPMS="ME" and then retry your delete command with flag=4. I guess it will work. If not, then run AT+CSAS to save the earlier setting and retry the delete. I could not test it, as there is no SMS in my ME storage area. Let me know if it worked.