Big Endian vs Little Endian: 2 ASCII characters per memory address (endianness)

This is homework asking how ORENTM would be stored in memory if you stored two ASCII characters per memory location. All the examples show one-byte ordering. I am wondering whether, in little endian, the ASCII character order would be reversed, so that at address 102 it would be 52 4F instead of 4F 52. Also, does anybody have a good reference to read about how this type of storage works?
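Not the homework answer itself, but as an illustration of what the question is really about: a small Python sketch showing how the two-character word "OR" (0x4F, 0x52) is laid out under each byte order, assuming each pair of characters is packed into one 16-bit word with the first character in the high-order byte (the addresses in your textbook's example may be numbered differently).

    word = int.from_bytes(b"OR", "big")         # treat "OR" as the 16-bit value 0x4F52
    print(word.to_bytes(2, "big").hex(" "))     # 4f 52  (big-endian layout)
    print(word.to_bytes(2, "little").hex(" "))  # 52 4f  (little-endian layout)

Under that assumption, little endian does swap the two bytes within each word, as you suspect; the character codes themselves do not change.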

Related

How to interpret a two-byte value?

I am trying to read a 2-byte value that is stored lower-order byte first. So if the two bytes are 10 and 85, in that order (decimal), what is the number they represent? 8510? 851? Or something different?
These values represent the length of an encoded sequence, and I need to know the length to properly handle the information it contains. Most sequences only use the first byte, and as a decimal number it accurately represents the total number of characters (or bytes) in the sequence... but some use both bytes, and I don't understand how to interpret them.
If anyone can help me with this I would appreciate it.
Thanks
What you're referring to is the "endianness" of the two-byte value (also called a WORD). It's generally not helpful to discuss this in terms of decimal values, because no, if the bytes are 10 and 85 in decimal, the value is not any combination of 10 and 85. Showing them as hexadecimal values is far clearer.
+----+----+
|0x0a|0x55|
+----+----+
To interpret this as Little Endian, the value would be 0x550a (21770 in decimal). And in Big Endian (highest order byte first), the value is 0x0a55 (2645 decimal).
It's very important, for this reason, to know the endianness of your system and be able to properly handle the two. It is also noteworthy that "Network Byte Order" is Big Endian.
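A quick sketch of that arithmetic in Python (just restating the example above, not tied to any particular system):

    raw = bytes([0x0A, 0x55])      # the two bytes, lower-order byte first

    int.from_bytes(raw, "little")  # 0x550A = 21770
    int.from_bytes(raw, "big")     # 0x0A55 = 2645

    # Spelled out: little endian means the first byte is the low-order byte.
    (0x55 << 8) | 0x0A             # 21770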

Why are there no 5-byte and 6-byte code points in UTF-8?

Why are there no 5-byte or 6-byte code points? I know they existed until 2003, when they were removed, but I cannot find out why they were removed.
The Wikipedia page on UTF-8 says
In November 2003, UTF-8 was restricted by RFC 3629 to match the constraints of the UTF-16 character encoding: explicitly prohibiting code points corresponding to the high and low surrogate characters removed more than 3% of the three-byte sequences, and ending at U+10FFFF removed more than 48% of the four-byte sequences and all five- and six-byte sequences.
but I don't understand why it's important.
Because there are no Unicode characters which would require them. And these cannot be added either because they'd be impossible to encode with UTF-16 surrogates.
I’ve heard some reasons, but didn’t find any of them convincing. Basically, the stupid reason is: UTF-16 was specified before UTF-8, and at that time 20 bits of storage for characters (yielding 2²⁰ + 2¹⁶ characters, minus a few such as non-characters and surrogates for management) were deemed enough.
UTF-8 and UTF-16 are already variable-length encodings that, as you said for UTF-8, could be extended without big hassle (use 5- and 6-byte words). Extending UTF-32 to cover 21 to 31 bits is trivial (32 could be a problem due to signedness), but making it variable-length defeats the use case of UTF-32 completely.
Extending UTF-16 is hard, but I’ll try. Look at what UTF-8 does in a 2-byte sequence: The initial 110yyyyy acts like a high surrogate and 10zzzzzz like a low surrogate. For UTF-16, flip it around and re-use high surrogates as “initial surrogates” and low surrogates as “continue surrogates”. So, basically, you can have multiple low surrogates.
There’s a problem, though: Unicode streams are supposed to resist misinterpretation when you’re “tuning in” or the sender is “tuning out”.
In UTF-8, if you read a stream of bytes and it ends with 11100010 10000010, you know for sure the stream is incomplete. 1110 tells you: This is a 3-byte word, but one is still missing. In the suggested “extended UTF-16”, there’s nothing like that.
In UTF-16, if you read a stream of bytes and it ends with a high surrogate, you know for sure the stream is incomplete.
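For what it's worth, the UTF-8 side of this is easy to see in practice. A small Python check (E2 82 is the incomplete prefix of E2 82 AC, the 3-byte UTF-8 encoding of U+20AC):

    b"\xe2\x82\xac".decode("utf-8")  # '€': the complete 3-byte sequence decodes fine
    b"\xe2\x82".decode("utf-8")      # raises UnicodeDecodeError: unexpected end of data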
The “tuning out” can be solved by using U+10FFFE as an announcement for a single UTF-32 encoding. If the stream stops after U+10FFFE, you know you’re missing something; the same goes for an incomplete UTF-32 value. And if it stops in the middle of the U+10FFFE, it’s lacking a low surrogate. But that does not work, because “tuning in” to the UTF-32 encoding can mislead you.
What could be utilized are the so-called non-characters (the most well-known being the reverse of the byte order mark) at the end of plane 16: encode U+10FFFE and U+10FFFF using existing surrogates to announce a sequence of 3 or 4 further 16-bit units, respectively. This is very wasteful: 32 bits are used for the announcement alone, and 48 or 64 additional bits are used for the actual encoding. However, it is still better than, say, using U+10FFFE and U+10FFFF around a single UTF-32 encoding.
Maybe there’s something flawed in this reasoning. This is an argument of the sort: This is hard and I’ll prove it by trying and showing where the traps are.
right now the space is allocated for 4^8 + 4^10 code points (CP), i.e. 1,114,112, but barely 1/4 to 1/3rd of that is assigned to anything.
so unless there's a sudden need to add in another 750k CPs in a very short duration, up to 4 bytes for UTF-8 should be more than enough for years to come.
** just personal preference for 4^8 + 4^10: on top of clarity and simplicity, it also clearly delineates the CPs by UTF-8 byte count:
4^8 = 65,536 = all CPs for 1-, 2-, or 3-byte UTF-8
4^10 = 1,048,576 = all CPs for 4-byte UTF-8
instead of something unseemly like
2^16 * 17
or worse,
32^4 + 16^4
*** unrelated sidenote: the cleanest formula triplet I managed to conjure up for the starting points of the UTF-16 surrogate ranges is:
4^5 * 54 = 55,296 = 0xD800 = start of the high surrogates
4^5 * 55 = 56,320 = 0xDC00 = start of the low surrogates
4^5 * 56 = 57,344 = 0xE000 = just beyond the upper boundary of 0xDFFF
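A plain-Python sanity check of the figures above, nothing more:

    assert 4**8 + 4**10 == 2**16 * 17 == 32**4 + 16**4 == 1_114_112
    assert 4**5 * 54 == 0xD800   # start of the high surrogates
    assert 4**5 * 55 == 0xDC00   # start of the low surrogates
    assert 4**5 * 56 == 0xE000   # first code point past 0xDFFF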

COBOL COMP-3 number format issue

I have a COBOL "tape format" dump which has a mixture of text and number fields. I'm reading the file in C# as a binary array (array of byte). I have the copybook, and the formats line up fine on the text fields. There are a number of COMP-3 fields as well. The data in those fields doesn't seem to match any BCD format. I know what the data should be, and I have the raw bytes of the COMP-3. I tried converting to EBCDIC first, which yielded no better results. Any thoughts on how a COMP-3 number could otherwise be stored internally? Below are three examples of the PIC, the raw data, and the expected number. I know I have the field positions correct because there is alpha data on either side of the numbers and it all lines up correctly.
First Example:
The PIC of the field is 9(9) COMP-3
There are 5 bytes to the data, the hex values are 02 01 20 91 22
The resulting data should be a date (00CCYYMMDD). This particular date should be 3-17-14.
Second Example:
The PIC of the field is S9(3) COMP-3
There are 2 bytes to the data, the hex values are 0A 14
The resulting value should be between 900 and 999
My understanding is that the "S" means that the last nibble should be 0xC or 0xD to indicate + or -
Third Example:
The PIC of the field is S9(15)V99 COMP-3
There are 9 bytes to the data, the hex values are 00 00 00 00 00 00 01 80 0C
The resulting value should be 12.00
There is no such thing as a COBOL "tape format" although the phrase may mean something to the person who gave you the data.
The clue to your problem is that you can read the text. Connect that to the EBCDIC tag and your reference to C#.
So, you are reading data which is originally sourced from a Mainframe, most likely an IBM Mainframe, which uses EBCDIC instead of ASCII.
COBOL's COMP-3 fields are packed decimal (BCD), which C# does not natively support.
What some kind soul has done for you is "convert" the data from EBCDIC to ASCII. Otherwise you wouldn't even recognise the "text".
Unfortunately, what that means for any binary, packed-decimal or floating-point fields (you won't see much of the last, but they are COMP-1/COMP-2) is that "convert" means "potentially scrambled", because the conversion assumes individual bytes with simple character values, whereas all of those fields hold values that span multiple bytes, use byte values with no character meaning, or both.
So: COMP-3 PIC 9(9). As you say, five bytes. It is unsigned, so the rightmost nybble will be F (all bits on). You are slightly out with your positions due to the sign position being occupied, even for an unsigned field.
On the Mainframe, it contains a value X'020140317F'. Only that field in its entirety can make any sense as to its value. However, the EBCDIC to ASCII conversion has made it X'0201209122'.
How?
Look up the EBCDIC value of X'02' and X'01'. They don't change. Look up the value of X'40', whoops, that's a space, change it to ASCII X'20'. Look up the value of X'31'. Actually nothing special there, and it has converted to something higher than X'7F', but if you look at the translation table used, I guess you'll see why it happens. The X'7F' is a double-quote, so gets changed to X'22'.
The other values you show suffer the same problem.
You should only ever take data from a Mainframe in character-only format. There are many answers here on this; you should look at the related questions to the right.
Have a look at this recent question: Convert COMP and COMP-3 Packed Decimal into readable value with C
OK, let's have a look at your first example. Given the format and value the original BCD-content should have been something like
02 01 40 31 7F
When transforming that from EBCDIC to ASCII we run into trouble with the first, second and fourth bytes, because they are control characters; here we would need some more details on how the EBCDIC->ASCII converter worked. Looking at the two remaining bytes, those would be changed as follows:
EBCDIC ASCII CHARACTER
40 -> 20 (blank)
7F -> 22 "
So assuming the first two bytes remain unchanged and the third gets converted like 31->91 we end up with
02 01 20 91 22
which is what you got. So it looks like some kind of EBCDIC->ASCII conversion took place. If that is the case, you may not be able to repair the data, since the transformation may not be one-to-one and thus not reversible.
Looking at the second example and using
EBCDIC ASCII CHARACTER
25 -> 0A (LF)
3C -> 14 (DC4)
you would have started with 25 3C which would fit the format but not the range you gave.
In the third example the original 01 20 0C could be converted to 01 80 0C since 20 also is an EBCDIC control-character with no direct ASCII-equivalent.
But given all other examples I would assume there is some codepage-conversion issue.
If you used some kind of file transfer to move the data from the (supposed) mainframe, make sure it is set to binary mode, and don't do any character conversion before you split the file into fields and know what is meant to be a character and what is not.
EDIT: You can find a list of several EBCDIC and ASCII-based codepages here or look here for the same as one pdf.
I'm coming to this a bit late, but have a couple of suggestions that might make your life easier...
First, see if you can get your mainframe counterparts to convert all non-character (i.e. binary numeric and packed decimal) data to display format (e.g. PIC X) before you download it. Then you only need to deal with the "printable" range of numeric characters representing 0 through 9. Printable-character-only code-page conversions are fairly standard and tend not to screw up as much. Reformatting data given a copybook is not a difficult prospect for anybody proficient in a mainframe environment. Unfortunately, sometimes you get the "runaround" and a claim is made that it is extremely costly, or takes special software, or any one of a hundred other bogus excuses.
If you get the "runaround", then the next best thing is to download the file in binary format and do your own code-page conversion for the character data (fairly straightforward). Next, deal with the binary data based on your copybook definitions. With a few Googles you should be able to find enough information to get through converting the PACKED-DECIMAL (COMP-3) data to whatever you need.
Here are a couple of links to get you started:
Numeric Data Formats
Packed Decimal
I do not recommend trying to reverse engineer the code page conversions applied by your file transfer package in order to decode the packed decimal and other binary data.
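If you do get the file across in binary mode (so the packed fields arrive untranslated), decoding COMP-3 yourself is not hard. A minimal sketch in Python; the function name is just illustrative, the field lengths and implied decimal places come from the copybook, and the example bytes are the corrected mainframe-side values worked out above:

    def unpack_comp3(raw: bytes, scale: int = 0):
        """Decode an IBM packed-decimal (COMP-3) field.

        Each byte holds two BCD digits; the final nibble is the sign:
        0xC or 0xF means positive/unsigned, 0xD means negative.
        `scale` is the number of implied decimal places (the V99 part).
        """
        nibbles = [n for b in raw for n in (b >> 4, b & 0x0F)]
        digits, sign = nibbles[:-1], nibbles[-1]
        value = 0
        for d in digits:
            if d > 9:
                raise ValueError("not valid BCD; the data may have been code-page converted")
            value = value * 10 + d
        if sign == 0x0D:
            value = -value
        return value / 10**scale if scale else value

    unpack_comp3(bytes.fromhex("020140317F"))             # 20140317  (PIC 9(9): the date 2014/03/17)
    unpack_comp3(bytes.fromhex("253C"))                   # 253       (PIC S9(3))
    unpack_comp3(bytes.fromhex("00000000000001200C"), 2)  # 12.0      (PIC S9(15)V99)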
Ok so thanks to both people who responded as they pointed me in the right direction. This is indeed an ASCII/EBCDIC representation issue. The BCD is stored in EBCDIC. Using an ASCII to EBCDIC conversion table yields properly formatted BCD digits:
I used this link to map the data: http://shop.alterlinks.com/ascii-table/ascii-ebcdic-us.php
My data: 0A 14
Converted: 25 3C (turns out 253 is a valid value; the spec was wrong); C = +, all good
My data: 01 80 0C (excluding leading zeros)
Converted: 01 20 0C = 12.00; C = +, two implied decimal places in the format, all good
My data: 02 01 20 91 22
Converted: 02 01 40 31 7F = 2014/03/17 (the F is the unused sign nibble), all good
Thanks again for the two above answers which led me in the right direction.
You can avoid the above issues by having the data converted into a modern format for transferring data, such as XML.

Handling a 32-bit ifIndex inside of SNMP

In developing my own SNMP poller, I've come across the problem of polling devices with 32-bit interface indexes. I can't find anything out there explaining how to convert the hex (5 bytes) into the 32-bit integer, or the integer into hex, as it doesn't use simple hex conversion. For example, the interface index is 436219904. While doing a pcap with an snmpget, I see the hex for this is 81 d0 80 e0 00, which makes no sense to me. I cannot for the life of me figure out how that converts to an integer value. I've tried to find an RFC dealing with this and have had no luck. The 16-bit interface values convert as they should: 0001 = 1 and so on. Only the 32-bit ones seem to be giving me this problem. Any help is appreciated.
SNMP uses ASN.1 syntax to encode data, so you need to learn the BER rules:
http://en.wikipedia.org/wiki/X.690
For your case, I can say you looked at the wrong data: if 436219904 were encoded as an Integer32 in SNMP, the bytes would be 1A 00 30 00.
I guess you have missed some details in the analysis, so you might want to do it once again and add more description (a screenshot and so on) to enrich your question.
I suspect the key piece of info missing from your question is that the ifIndex value in question, as used in your polling, is an index into the table being polled (you don't say which, but we can assume ifTable), which means it is encoded as a subidentifier of the OID being polled (give me [some property] for [this ifIndex]) rather than as a requested value (give me [the ifIndex] for [some row of some other table]).
Per X.209 (the version of the ASN.1 Basic Encoding Rules used by SNMP), subidentifiers in OIDs (other than the first two) are encoded in one or more octets (8 bits each), with the highest-order bit used as a continuation bit (i.e. "the next octet is part of this subidentifier too") and the remaining 7 bits used for the actual value.
In other words, in your value "81 d0 80 e0 00", the highest bit is set in each of the first 4 octets and cleared in the last octet: this is how you know there are 5 octets in the subidentifier. The remaining 7 bits of each of those octets are concatenated to arrive at the integer value.
The converse of course is that to encode an integer value into a subidentifier of an OID, you have to build it 7 bits at a time.
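A sketch of both directions in Python (the function names are just illustrative); the octets from the question decode to exactly the expected ifIndex:

    def decode_subid(octets):
        # 7 value bits per octet; the high bit means "more octets follow"
        value = 0
        for b in octets:
            value = (value << 7) | (b & 0x7F)
        return value

    def encode_subid(value):
        # build the subidentifier 7 bits at a time; only the last octet has the high bit clear
        out = [value & 0x7F]
        value >>= 7
        while value:
            out.insert(0, 0x80 | (value & 0x7F))
            value >>= 7
        return bytes(out)

    decode_subid([0x81, 0xD0, 0x80, 0xE0, 0x00])   # 436219904
    encode_subid(436219904).hex(" ")               # '81 d0 80 e0 00'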

Convert a byte to something human-readable

I've got a file consisting of a bunch of 'rows' of data (not actually on separate lines). As far as I can tell from the description (PDF), each 'row' has forty bytes precisely, as follows:
4 bytes I can ignore.
2 bytes that are the byte representations of the (decimal) integer 240 or 241 — that is, 00 F0 or 00 F1 — depending on 'row'.
4 bytes I can ignore.
11 bytes: the first few bytes are an ASCII string, which I need, and the rest is padding by 00 bytes.
1 byte I can ignore.
1 byte I need. This seems from the documentation to be ASCII 'B', 'S', or '0'.
1 byte that's the byte representation of a small integer — that is, 00, 01, or the like.
4 bytes that are the byte representation of an integer.
4 bytes that are the byte representation of an integer.
4 bytes that are the byte representation of an integer.
4 bytes that are the byte representation of an integer.
And there's nothing else in the file (no file header, for example). (I may well be wrong in my understanding of the linked-to documentation, and would appreciate any correction; more info may be available elsewhere (PDF).)
I wish to:
convert each byte representation of a number to a human-readable representation of the number and each 00 byte to (say) a human-readable 0, and
convert the file to comma-separated or the like.
Now, step (2) should be doable using sed; I mention it only because I want to make sure step (1) is done in such a manner as to allow step (2) (for example, keeping track of how many bytes are in each field when doing step (1)). But step (1) I have no idea how to do. Can anyone help?
As a caveat, please note that I'm comfortable with sed and bash and can handle perl, but have no experience with real programming; and that, alas, I'm doing this on a Windows machine I don't have installation-of-programs rights on, so (although I have a sed port) I don't have bash. So, basically, I need to do this in sed or a Windows (DOS) command-line script. (I should be able to download the files to another machine, work with them, and upload them, though, if that turns out to be necessary.)
Perl has an unpack function you could use.
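For illustration, here is the same idea with Python's struct module, in case that is easier to run on the machine you copy the files to. The layout follows the list in the question; the byte order of the trailing integers is assumed big-endian to match the 00 F0 / 00 F1 field, so check that against the documentation:

    import struct, sys

    # 40-byte record; '>' assumes big-endian multi-byte integers (verify against the spec)
    REC = struct.Struct(">4x H 4x 11s x c B i i i i")   # REC.size == 40

    with open(sys.argv[1], "rb") as f:
        while chunk := f.read(REC.size):
            kind, name, flag, small, a, b, c, d = REC.unpack(chunk)
            name = name.rstrip(b"\x00").decode("ascii")   # strip the 00-byte padding
            print(",".join(map(str, (kind, name, flag.decode("ascii"), small, a, b, c, d))))

This reads each 40-byte record, pulls out the fields listed above, and writes one comma-separated line per record, which also covers step (2).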

Resources