I am working on an SNMP agent using net-snmp and developing a MIB for data held in tables.
I am considering using a table key based on a string of around 15 decimal digits.
Is it reasonable to implement this as an OCTET STRING index?
Even if I encode 2 digits per octet, it would be around 8 octets long.
With an OCTET STRING index, each octet would be added as a node to the OID.
I know that I can convert this to an integer (or integers), but the decimal digits could have leading zeros.
Are there any views or suggestions for this?
Thanks in advance.
You can put zeros into OCTET STRINGs, so yes, you can do it that way. OCTET STRINGs can contain binary data, and each octet is simply encoded as a sub-identifier (a number) in the OID. The Net-SNMP API accepts not just a char * pointer but also the length of the data, specifically because it's perfectly legal for an OCTET STRING to have null bytes encoded within it.
On the OID side, if you had a string consisting of the characters A, B, a zero byte, and C, it would generally be encoded as:
blah.blah.4.65.66.0.67
Where 4 is the length of the string. If the index is declared as IMPLIED (or the OCTET STRING has a fixed size), the leading 4 would be left out.
You're encoding decimal digits into the string, so your values would be closer to things like 0x15, 0x04, 0x42, etc. Those are just fine as well (you're building a binary string).
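For instance, here is a rough sketch (in Python, purely for illustration; the 15-digit key and the helper name are made up) of packing two decimal digits per octet and what the resulting index sub-identifiers would look like for a variable-length OCTET STRING index:

def pack_bcd(digits):
    # Pack a decimal-digit string two digits per octet (BCD-style).
    # Odd-length strings are left-padded with a zero digit.
    if len(digits) % 2:
        digits = "0" + digits
    return bytes(int(digits[i]) << 4 | int(digits[i + 1])
                 for i in range(0, len(digits), 2))

key = "042310009876543"   # hypothetical 15-digit key, note the leading zero
octets = pack_bcd(key)    # 8 octets
# A variable-length OCTET STRING index is encoded as its length followed by
# one sub-identifier per octet (the length is omitted for IMPLIED or
# fixed-size indexes).
index_oid = [len(octets)] + list(octets)
print(index_oid)          # [8, 0, 66, 49, 0, 9, 135, 101, 67]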
Whether you should do that, or just plop the digits themselves down in the string as-is, is open to debate; it depends on what you're doing, the bandwidth constraints of the environment, etc.
I am trying to implement the LZ77 compression algorithm and have run into this problem.
I am compressing the input (which could be any binary file, not only text) on a byte-by-byte basis, and I use 3 bytes to represent a pointer/reference to a previous substring. The first byte of the pointer is always an escape character, b"\xCC"; to make things easier, let's call it C.
The "standard" way I know of when working with an escape character is to encode all other characters normally and escape the literal that has the same value as the escape character, so 'ABCDE' is encoded as 'ABCCDE'.
The problem is that the value of the pointer could be 'CCx': its second byte may itself be 'C', which makes the pointer indistinguishable from the escaped literal 'CC', and that causes problems.
How do I fix that? Or what's the correct/standard way to do LZ77? Thanks!
For LZ77 to be useful, it needs to be followed by an entropy encoder. It is in that step that you encode your symbols to bits that go in the compressed data.
One approach is to have 258 symbols defined, 256 for the literal bytes, one that indicates that a length and distance for a match follows, and one that indicates end of stream.
Or you can do what deflate does, which is to encode the lengths and literals together, so that a single symbol decodes to either a literal byte or a length, where a length implies that a distance code follows.
Or you can do what brotli does, which is to define "insert and copy" codes that give the number of literals, followed by that many literal codes and then a copy length and distance.
Or you can invent your own.
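For what it's worth, here is a minimal sketch of the first idea above (the 258-symbol alphabet); the names and the token format are my own, and the entropy-coding step that would follow is left out. Because matches are announced by a dedicated symbol (256) rather than an in-band escape byte, nothing in the data can collide with it:

LITERAL_MAX = 255   # symbols 0..255 are literal bytes
MATCH = 256         # symbol announcing a (length, distance) pair
END = 257           # end-of-stream symbol

def to_symbols(tokens):
    # tokens: iterable of either an int byte (literal) or a (length, distance)
    # tuple produced by the LZ77 matcher. Returns the symbol stream that would
    # be handed to an entropy coder (Huffman, arithmetic, ...).
    out = []
    for t in tokens:
        if isinstance(t, tuple):
            length, distance = t
            out.append(MATCH)
            out.append(length)    # a real coder gives lengths/distances their own codes
            out.append(distance)
        else:
            out.append(t)         # literal byte, 0..255
    out.append(END)
    return out

# 'abcabc': three literals, then a match of length 3 at distance 3
print(to_symbols([ord('a'), ord('b'), ord('c'), (3, 3)]))
# [97, 98, 99, 256, 3, 3, 257]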
What is the algorithm for converting an IPv6 address back from its numeric (decimal) format?
I need to convert
42540488177518850335786991633549033211
to the IPv6 address format, i.e.
2001:0000:3238:DFE1:0063:0000:0000:FEFB
An IPv6 address is a 16-byte number, usually represented as a hex-encoded string with every pair of bytes separated by a colon.
So, to convert your number into the hex-encoded format, you first have to convert the number to hex and then insert the colons.
Depending on the programming language you're using you might already have access to built-in or library functions that can hex-encode an arbitrary number. If not, the process is pretty simple:
take the number and keep dividing it by 256, keeping track of the remainder
each remainder represents one of the bytes
each byte has to be hex-encoded (i.e. printed as a two-digit value ranging from 00 to FF)
concatenate the values as you get them, prepending each new value on the left (the remainders come out least-significant byte first)
insert a colon after every second byte
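A quick sketch of those steps in Python (variable names are arbitrary; in practice you would probably reach for a library such as Python's ipaddress module instead):

n = 42540488177518850335786991633549033211

# repeatedly divide by 256, collecting remainders (bytes), least significant first
remainders = []
while n > 0:
    n, r = divmod(n, 256)
    remainders.append(r)
# pad to the full 16 bytes so leading zero bytes aren't lost
remainders += [0] * (16 - len(remainders))

# prepend each new value: most-significant byte first
octets = list(reversed(remainders))

# hex-encode each byte as 00..FF and insert a colon after every second byte
groups = ["%02X%02X" % (octets[i], octets[i + 1]) for i in range(0, 16, 2)]
print(":".join(groups))   # 2001:0000:3238:DFE1:0063:0000:0000:FEFB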
Strings in Swift 2.0 no longer conform to CollectionType. Each character in the String is now an extended grapheme cluster.
Without digging too deep into the grapheme-cluster details, I tried a few things with Swift Strings:
String now has a characters property that contains what we humans recognize as characters. Each user-perceived character in the string counts as one Character, and the count property gives us the number of those characters.
What I don't quite understand is this: even though the characters count shows 10, why does the index show the emojis occupying 2 indexes?
The index of a String is no longer related to the number of characters (count) in Swift 2.0. It is an "opaque" struct (defined as CharacterView.Index) used only to iterate through the characters of a string. So even if it is printed as an integer, it should not be considered or used as an integer; you cannot, for instance, add 2 to it to get the index two positions along. What you can do is apply the two methods predecessor() and successor() to get the previous or the next index in the String. So, for instance, to get the second character after the one at index idx in mixedString you can do:
mixedString[idx.successor().successor()]
Of course you can use more comfortable ways of reading the characters of a string, such as the for statement or the global function indices(_:).
Consider that the main benefit of this approach is not to handle multi-byte characters in Unicode strings, such as emoticons, but rather to treat in a uniform way strings that are identical (for us humans!) yet can have multiple representations in Unicode, as different sets of "scalars", or Unicode characters. An example is café, which can be represented either with four Unicode scalars (Unicode characters) or with five. And note that this is a completely different thing from Unicode representations like UTF-8, UTF-16, etc., which are ways of mapping Unicode scalars into memory bytes.
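Just to make the café example concrete, here is a small sketch in Python (used only because it makes the scalar sequences easy to print; Swift's Character type performs this unification for you):

import unicodedata

precomposed = "caf\u00e9"   # 4 scalars: c a f é
decomposed  = "cafe\u0301"  # 5 scalars: c a f e + combining acute accent

print(len(precomposed), len(decomposed))   # 4 5 (different scalar counts)
print(precomposed == decomposed)           # False at the scalar level
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True: same text for humans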
An extended grapheme cluster can still occupy multiple bytes; however, the correct way to determine the index position of a character would be:
import Foundation  // rangeOfString(_:) comes from the NSString bridge

let mixed = "MADE IN THE USA 🇺🇸"
let index = mixed.rangeOfString("🇺🇸")
let intIndex: Int = mixed.startIndex.distanceTo(index!.startIndex)
Result:
16
The way you are trying to get the index is really meant for an array, and I don't think Swift can work that out properly with your mixedString.
I have a binary string that I need to convert to a hexadecimal string. I have this code that does it pretty well
binary = '0000010000000000000000000000000000000000000000000000000000000000'
binary.to_i(2).to_s(16)
This normally works, but in this situation the first four zeros, which represent the first hexadecimal digit, are left out. So instead of
0400000000000000 it is showing 400000000000000.
Now, I know I can loop through the binary string manually and convert 4 bits at a time, but is there a simpler way of getting to my wanted result of '0400000000000000'?
Would rjust(16,'0') be my ideal solution?
You should use string formatting for this kind of result.
"%016x" % binary.to_i(2)
# => "0400000000000000"
You can use this:
binary = "0000010000000000000000000000000000000000000000000000000000000000"
[binary].pack("B*").unpack("H*").first
# => "0400000000000000"
binary.to_i(2) converts the value to a number, and a number does not know about leading zeros. pack("B*") converts each group of 8 bits into a byte, giving you a binary-encoded String; eight 0 bits become "\x00", a zero byte. So unlike the number, the string preserves the leading zeros. unpack("H*") then converts each group of 4 bits into its hex representation. See Array#pack and String#unpack for more information.
I want to define a MIB containing integer-valued objects that are only 1 or 2 octets long, to keep down the size of my TRAP messages; they're going over a mobile data network, so we pay by the byte, and transmission times go up with bigger data types as well.
The SNMPv2-SMI only defines Integer32 and Unsigned32, and SNMPv2-TC doesn't extend these to smaller packing sizes. Is there already a standard definition out there for Unsigned8, Integer16 etc? If so, where?
Alternatively, if I define my object type as something like "Integer32(-1..99)", will the MIB compiler etc do the right thing and pack the value into a single byte? We're using SNMP4J on the Agent and Net-SNMP on the Manager.
If you hadn't already guessed I'm a bit of a noob at this so please be tolerant if this is a dumb question :-)
Take a look at the SNMP4J Javadoc for OctetString, e.g. for 1 or 2 bytes:
OCTET STRING (SIZE(1))
OCTET STRING (SIZE(2))
Quoting http://www.snmpsharpnet.com/?page_id=66 :
Octet String is an SMI data type used to process arrays of Byte values. Unlike what the name suggests, this data type is not limited to string values but can store any byte based data type (including binary data).
I've done a bit of digging into the encoding rules used by SNMP (see en.wikipedia.org/wiki/X.690) and it looks like my question is irrelevant if the code packing the PDU has a bit of intelligence. The Basic Encoding Rules employed by SNMP record an integer value as a "tag-length-value" triple, where the tag identifies the data type, the length gives the number of bytes holding the value, and the value is, well, the value. So if the application is sending the value "1" from a 32-bit integer, there's no need to encode it in 32 bits; it can simply be encoded as 0x02 0x01 0x01.
So it depends on whether the packing library has that intelligence.
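As a rough illustration of that tag-length-value idea (this is my own sketch, not the SNMP4J or Net-SNMP code path), a BER INTEGER is tag 0x02, a length octet, and only as many content octets as the value actually needs:

def ber_encode_integer(value):
    # Minimal BER encoding of a small, non-negative INTEGER:
    # tag 0x02, length, then the shortest two's-complement content octets.
    content = []
    v = value
    while True:
        content.insert(0, v & 0xFF)
        v >>= 8
        if v == 0:
            break
    # if the top bit is set, prepend 0x00 so the value isn't read as negative
    if content[0] & 0x80:
        content.insert(0, 0x00)
    return bytes([0x02, len(content)] + content)

print(ber_encode_integer(1).hex())    # 020101 -> 0x02 0x01 0x01
print(ber_encode_integer(300).hex())  # 0202012c -> two content octets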