Why does SNMP4J return a different result than Net-SNMP?

I find that with Net-SNMP I get the normal ifPhysAddress result, but when I use SNMP4J I get a wrong-looking result. How do I fix it?
The Net-SNMP result:
The SNMP4J result:

That's absolutely normal.
Net-SNMP ships with a set of default MIB documents, so when it performs SNMP operations, derived data types (like PhysAddress for ifPhysAddress) can be interpreted more accurately according to those MIB documents.
However, when you use the raw SNMP4J library, no MIB documents are involved, so the only option is to print ifPhysAddress as its base type, OCTET STRING, which is effectively raw bytes and shows up as garbled characters.
If you want to achieve the same output as Net-SNMP, you can purchase SNMP4J's commercial MIB library or look for alternatives, such as formatting the raw bytes yourself (see the sketch below).
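For the common MAC-address case, here is a minimal do-it-yourself sketch, assuming vb is the VariableBinding returned by your SNMP4J GET request (the method name is illustrative; SNMP4J's OctetString provides toHexString):

import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.Variable;
import org.snmp4j.smi.VariableBinding;

// Render an ifPhysAddress value the way Net-SNMP displays it:
// as separator-delimited hex rather than raw bytes.
static String formatPhysAddress(VariableBinding vb) {
    Variable var = vb.getVariable();
    if (var instanceof OctetString) {
        // toHexString prints the raw OCTET STRING bytes as hex
        // with the given separator character between bytes.
        return ((OctetString) var).toHexString(':');
    }
    return var.toString();
}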


Which eUICC capabilities should the SM-SR use to segment an SCP03t script, and how?

From section 5.4.4 of SGP.02-v4.2:
The SM-SR has the responsibility to build the final Command script, depending on eUICC capabilities and selected protocol
It is clear that in some cases the SM-SR should split the command script that it received in the data field of ES3.SendData. This is supported by examples from SGP.11-v4.0: compare step 14 on page 408, where the complete profile package #PROFILE_PACKAGE is passed in ES3.SendData, with steps 3, 5 and 7 on pages 211-212, where segments in the form of #PROFILE_PARTi are sent via ES5.
First of all, it is not clear which factors limit the ability to send the whole command script to the eUICC. This question is important because its answer would most probably also answer the most important question: how exactly the SM-SR decides (as in, which parameters it takes into account) how to split the command script.
I know about EUICC-CAPABILITIES, but they only specify which transports/algorithms/features are supported. They give no hint of how large a single segment (that is, #PROFILE_PARTi) of the command script built by the SM-SR can be. Also, the specification seems to distinguish eUICC capabilities from the selected protocol, which is another reason to think that the capabilities mentioned in 5.4.4 are not the same as EUICC-CAPABILITIES.
SGP.11-v4.0 shows that segmentation is needed even for CAT_TP or HTTPS, and the SM-SR somehow needs to decide how to split the command script it received.
So, my questions:
What eUICC capabilities should be taken into account by the SM-SR?
How exactly should the SM-SR use those capabilities to decide the size of the script segment (like #PROFILE_PARTi) it can use?
How can the SM-SR retrieve those capabilities for a given eUICC?
Can they be retrieved via means defined in some GP specification, or is this manufacturer-specific?
According to section 4.1.3.3 of SGP.02-v4.2, the eUICC SHALL support profile command data segments of up to at least 1024 bytes (including the tag and length fields). Does that mean that:
the SM-DP will never create profile command data TLVs with tag '86' longer than 1024 bytes?
the SM-SR is compliant if it sends one command TLV at a time? (That way the SM-SR does not exploit the capabilities of the eUICC, but the algorithm works for any compliant eUICC regardless of its capabilities.)
SM-SR:
1. Take a look at EUICC-CAPABILITIES, sect. 5.1.1.2.10, for the supported transport features. For your implementation, the supported buffer size might also be interesting, e.g. to send as few concatenated SMS as possible or to pick a better TCP payload size. This has to be known implicitly: read the EID and derive the features of the eUICC by interpreting the model and manufacturer. I am not sure whether some eUICCs offer more than the minimum here.
2.+3.+4. Store the EIS when the eUICC is registered (the SM-SR could keep it in its database); the EIS contains the EUICC-CAPABILITIES.
SM-DP:
1. Yes, including the tag and length, MAC and padding. The profile package is sliced into 1020-byte chunks, each then prefixed with the '86' tag and the 3-byte length field (a sketch follows below).
2. Yes, most likely, but as mentioned above, a streaming protocol like CAT_TP or HTTPS does not have the size problems of SMS.
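A minimal sketch of that slicing, under the stated assumptions (1020-byte chunks, tag '86', always the three-byte BER length form: 0x82 plus two length bytes); the method name is illustrative, not from the spec:

import java.util.ArrayList;
import java.util.List;

// Slice a protected profile package into '86' TLVs of at most
// 1024 bytes: 1 tag byte + 3 length bytes + up to 1020 data bytes.
static List<byte[]> segmentProfilePackage(byte[] profilePackage) {
    final int CHUNK = 1020;
    List<byte[]> segments = new ArrayList<>();
    for (int off = 0; off < profilePackage.length; off += CHUNK) {
        int len = Math.min(CHUNK, profilePackage.length - off);
        byte[] tlv = new byte[4 + len];
        tlv[0] = (byte) 0x86;         // profile command data tag
        tlv[1] = (byte) 0x82;         // BER long form: two length bytes follow
        tlv[2] = (byte) (len >> 8);   // length, high byte
        tlv[3] = (byte) (len & 0xFF); // length, low byte
        System.arraycopy(profilePackage, off, tlv, 4, len);
        segments.add(tlv);
    }
    return segments;
}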

Custom MIB for PostgreSQL SNMP trap in Python

I want to send SNMP traps for a database (PostgreSQL) using the pysnmp library: when the database goes down, send one trap; when it comes back up, send another.
So my question is how to define or create my own MIB file for this in Python.
Thanks in advance.
You may not absolutely require a MIB if all you want to do is send SNMP TRAPs. But I do not really know your requirements. Why do you think you need a MIB? Is it that you run some sort of NMS on the receiving side that requires a MIB to analyse the event?
What you could do without a MIB is choose TRAP message contents that sufficiently describe the event using just OID-value pairs (i.e. the required TRAP message contents plus, possibly, additional OID-value pairs), then have the receiver of the TRAP analyse them accordingly.
If you still need a MIB, you could just create one in your text editor. Take one of the existing MIBs as an example. You can test-compile it with mibdump.py to catch syntax errors. Once you are done, point pysnmp at your TRAP object in the MIB so that pysnmp can compile and use it.

How to get OIDs from a MIB file?

I want to read all the objects from the MIB files that a manager has.
I developed a tool to get some data from an SNMP-enabled agent. I want to enhance that tool by showing all the OIDs from the manager's MIB files.
I am using the NET-SNMP library.
I saw the following:
/usr/local/share/snmp/mibs/
folder and it contains many MIB files, but how can I form a list of the OIDs it has?
I went through the MIBs and saw the structures, but how do I get the OIDs of each and every object mentioned in the MIB files?
I want to list all the OIDs as follows:
SNMPv2-MIB::sysDescr.0 = .1.3.6.1.2.1.1.1.0
SNMPv2-MIB::sysObjectID.0 = .1.3.6.1.2.1.1.2.0
... etc
I want to scan all the MIB files and find all the OIDs from the files.
How do I do this?
Use the snmptranslate command from the Net-SNMP library. Try it with the following parameters (assembled into a full command below):
-M "directory containing your MIB file"
-m ALL
-Pu
-Tso
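Assembled into a single invocation (assuming your MIBs live under /usr/local/share/snmp/mibs, as mentioned above), that would look something like:
snmptranslate -M /usr/local/share/snmp/mibs -m ALL -Pu -Tso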
After some problems I managed to generate the OIDs using the following command.
snmptranslate -Pu -Tz -M ~/.snmp/mibs:/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf:/usr/share/mibs/site:/usr/share/snmp/mibs:/usr/share/mibs/iana:/usr/share/mibs/ietf:/usr/share/mibs/netsnmp:`pwd` -m module_name_NOT_file_name > module_name.oid
To pull the OIDs from a running SNMP server, you might like to use the snmpwalk tool with the -Ci option. The tool comes with Net-SNMP.
These two other SO Q&As show how you can do it without walking a running system:
"net-snmp sample code to parse MIB file and extract trap related information from it": The answer shows the top-level framework of a C parser built on top of the Net-SNMP library.
"Get oid's type (syntax) from MIB using Net-SNMP API": It shows the specific function for handling an OID.
That is only the starting point; there is a lot of coding ahead from there.
Update: Another nice tool is the Perl MIB compiler packaged as SNMP::MIB::Compiler. With a Perl script you get all the MIB elements/components pulled into internal data structures, and you can pick any information from there, either by walking the structure tree or by dumping the tree and post-parsing the dump.

Google Protocol Buffers - Storing messages into file

I'm using Google Protocol Buffers to serialize equity market data (i.e. timestamp, bid, ask fields).
I can store one message into a file and deserialize it without issue.
How can I store multiple messages into a single file? Not sure how I can separate the messages. I need to be able to append new messages to the file on the fly.
I would recommend using the writeDelimitedTo(OutputStream) and parseDelimitedFrom(InputStream) methods on Message objects. writeDelimitedTo writes the length of the message before the message itself; parseDelimitedFrom then uses that length to read only one message and no farther. This allows multiple messages to be written to a single OutputStream to then be parsed separately. For more information, see https://developers.google.com/protocol-buffers/docs/reference/java/com/google/protobuf/MessageLite#writeDelimitedTo(java.io.OutputStream)
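A minimal sketch in Java, assuming a generated message class MarketData (the name is illustrative):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Append messages to a file; opening in append mode lets you add
// new messages on the fly, as the question requires.
static void appendTicks(MarketData msg1, MarketData msg2) throws IOException {
    try (FileOutputStream out = new FileOutputStream("ticks.bin", true)) {
        msg1.writeDelimitedTo(out);  // varint length prefix, then the message bytes
        msg2.writeDelimitedTo(out);
    }
}

// Read the messages back one at a time.
static void readTicks() throws IOException {
    try (FileInputStream in = new FileInputStream("ticks.bin")) {
        MarketData md;
        // parseDelimitedFrom returns null once the stream is exhausted
        while ((md = MarketData.parseDelimitedFrom(in)) != null) {
            System.out.println(md);
        }
    }
}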
From the docs:
http://code.google.com/apis/protocolbuffers/docs/techniques.html#streaming
Streaming Multiple Messages
If you want to write multiple messages to a single file or stream, it is up to you to keep track of where one message ends and the next begins. The Protocol Buffer wire format is not self-delimiting, so protocol buffer parsers cannot determine where a message ends on their own. The easiest way to solve this problem is to write the size of each message before you write the message itself. When you read the messages back in, you read the size, then read the bytes into a separate buffer, then parse from that buffer. (If you want to avoid copying bytes to a separate buffer, check out the CodedInputStream class (in both C++ and Java) which can be told to limit reads to a certain number of bytes.)
Protobuf does not include a terminator per outermost record, so you need to add one yourself. The simplest approach is to prefix the data with the length of the record that follows. Personally, I tend to write a string header (for an arbitrary field number) followed by the length as a "varint"; the entire document is then itself a valid protobuf and could be consumed as an object with a "repeated" element. However, a plain fixed-length (typically 32-bit little-endian) marker would do just as well. With any such storage, it is appendable, as you require (a sketch of the fixed-length variant follows below).
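A minimal sketch of the fixed-length variant, assuming a 32-bit little-endian length prefix; it frames raw message bytes, so it works with any generated message class:

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Write one record: 4-byte little-endian length, then the message
// bytes (obtained from msg.toByteArray()).
static void writeRecord(OutputStream out, byte[] body) throws IOException {
    int len = body.length;
    out.write(len & 0xFF);
    out.write((len >> 8) & 0xFF);
    out.write((len >> 16) & 0xFF);
    out.write((len >> 24) & 0xFF);
    out.write(body);
}

// Read one record, or return null at a clean end of stream;
// parse the result with YourMessage.parseFrom(bytes).
static byte[] readRecord(InputStream in) throws IOException {
    int first = in.read();
    if (first < 0) return null;  // no more records
    DataInputStream din = new DataInputStream(in);
    byte[] rest = new byte[3];
    din.readFully(rest);
    int len = (first & 0xFF) | ((rest[0] & 0xFF) << 8)
            | ((rest[1] & 0xFF) << 16) | ((rest[2] & 0xFF) << 24);
    byte[] body = new byte[len];
    din.readFully(body);
    return body;
}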
If you're looking for a C++ solution, Kenton Varda submitted a patch to protobuf around August 2015 that adds support for writeDelimitedTo() and readDelimitedFrom() calls that will serialize/deserialize a sequence of proto messages to/from a file in a way that's compatible with the Java version of these calls. Unfortunately this patch hasn't been approved yet, so if you want the functionality you'll need to merge it yourself.
Another option: Google has open-sourced protobuf file reading/writing code through other projects. The or-tools library, for example, contains the classes RecordReader and RecordWriter, which serialize/deserialize a proto stream to a file.
If you would like stand-alone versions of these classes that have almost no external dependencies, I have a fork of or-tools that contains only these classes. See: https://github.com/moof2k/recordio
Reading and writing with these classes is straightforward:
File* file = File::Open("proto.log", "w");
RecordWriter writer(file);
writer.WriteProtocolMessage(msg1);
writer.WriteProtocolMessage(msg2);
...
writer.Close();
An easier way is to base64-encode each message and store it as one record per line.

Should I use a binary or a text file for storing protobuf messages?

Using Google protobuf, I am saving my serialized message data to a file - each file contains several messages. We have both C++ and Python versions of the code, so I need to use protobuf functions that are available in both languages. I have experimented with SerializeToArray and SerializeAsString, and there seem to be the following unfortunate conditions:
SerializeToArray: As suggested in one answer, the best way to use this is to prefix each message with its data size. This would work great for C++, but in Python it doesn't look like this is possible - am I wrong?
SerializeAsString: This generates a serialized string equivalent to its binary counterpart - which I can save to a file, but what happens if one of the characters in the serialized result is \n - how do we find line endings, or the ends of messages for that matter?
Update:
Please allow me to rephrase slightly. As I understand it, I cannot write binary data in C++, because then our Python application cannot read the data, since it can only parse string-serialized messages. Should I then use SerializeAsString in both C++ and Python? If so, is it best practice to store such data in a text file rather than a binary one? My gut feeling says binary, but as you can see, that doesn't look like an option.
We have had great success base64-encoding the messages and using a simple \n to separate them. This will of course depend a lot on your use case - we need to store the messages in "log" files. There is naturally some encoding/decoding overhead, but it has not even remotely been an issue for us.
The advantage of keeping these messages as line-separated text has so far been invaluable for maintenance and debugging. Figure out how many messages are in a file? wc -l. Find the Nth message? head ... | tail. Figure out what's wrong with a record on a remote system you need to access through two VPNs and a Citrix solution? Copy-paste the message and mail it to the programmer. (A sketch of the scheme follows below.)
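A minimal sketch of that scheme in Java, assuming a generated message class Tick (the name is illustrative):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Base64;

// Append one message per line, base64-encoded, so \n can never
// appear inside a record.
static void appendTick(Tick msg) throws IOException {
    try (PrintWriter out = new PrintWriter(new FileWriter("ticks.log", true))) {
        out.println(Base64.getEncoder().encodeToString(msg.toByteArray()));
    }
}

// Read the log back, one message per line.
static void readTicks() throws IOException {
    try (BufferedReader in = new BufferedReader(new FileReader("ticks.log"))) {
        String line;
        while ((line = in.readLine()) != null) {
            Tick msg = Tick.parseFrom(Base64.getDecoder().decode(line));
            System.out.println(msg);
        }
    }
}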
The best practice for concatenating messages in this way is to prepend each message with its size. That way you read in the size (try a 32-bit int or something), then read that number of bytes into a buffer and deserialize it. Then read the next size, and so on.
The same goes for writing: you first write out the size of the message, then the message itself.
See Streaming Multiple Messages in the protobuf documentation for more information.
Protobuf is a binary format, so reading and writing should be done as binary, not text.
If you don't want a binary format, you should consider using something other than protobuf (there are lots of textual data formats, such as XML, JSON, or CSV); just using text abstractions is not enough.
