MIPS Assembly Language: Saving Bytes in .data with .byte

Hello, I'm pretty new to MIPS and I'm trying to store three 8-bit values as hex, including the 0x prefix to make sure they are read as hexadecimal. I made a table of the three values I would like to store.
Table of Values
I would like these values to be stored in the .data section. I'm aware I need to use .byte to store them, but I can't figure out how to store multiple values. I later need to loop through each value. Thank you for any help.

Just separate them by commas:
.data
foo: .byte 0x0, 0x1, 0x2
Note that the 0x prefix is completely unnecessary: 0, 1, 2 would give you exactly the same values. They are just collections of bits, and whether you present them to the user as a base-10 string or a base-16 string at some point is irrelevant to how they are stored.
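Since you also need to loop through each value, here is a minimal sketch of how that could look. It assumes a MARS/SPIM-style simulator (syscall 1 prints an integer, syscall 10 exits), and the vals/count labels are just illustrative names:
        .data
vals:   .byte  0x0, 0x1, 0x2        # the three 8-bit values
count:  .word  3                    # how many entries to visit

        .text
main:   la     $t0, vals            # $t0 -> first byte
        lw     $t1, count           # $t1 = remaining entries
loop:   lbu    $a0, 0($t0)          # load one unsigned byte
        li     $v0, 1               # syscall 1: print integer (MARS/SPIM)
        syscall
        addiu  $t0, $t0, 1          # advance to the next byte
        addiu  $t1, $t1, -1         # one fewer entry left
        bnez   $t1, loop            # keep going until the count reaches zero
        li     $v0, 10              # syscall 10: exit (MARS/SPIM)
        syscall
lbu rather than lb keeps the bytes unsigned (0-255); with lb a value like 0x80 would print as a negative number.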

Related

Why does MIB encoding of IpAddress not match what I decode in Wireshark?

RFC1155, 3.2.3.2 defines an IpAddress as:
This application-wide type represents a 32-bit internet address. It
is represented as an OCTET STRING of length 4, in network byte-order.
And again, in section 6
IpAddress ::=
[APPLICATION 0] -- in network-byte order
IMPLICIT OCTET STRING (SIZE (4))
Also, RFC 1156, 5.4.1 defines an ipAddrEntry as a SEQUENCE containing, among other things, ipAdEntAddr as an IpAddress, which was defined in the previous RFC.
Okay then, backpedaling through this "(NOT)Simple Network Management Protocol", it should be clear that an IpAddress is 4 bytes, in network byte order (big endian).
Why then, in Wireshark, when I examine the OID request, is the IP address 192.168.1.21 encoded as
81 40 81 28 01 15
which isn't 4 octets. It's not even TLV encoding; it's some other 7-bit encoding where the 8th bit is used to indicate "not the last octet" (I think it's VLQ?):
1000 0001 0100 0000 => 1100 0000
1000 0001 0010 1000 => 1010 1000
If it is supposed to be VLQ, where is that documented?
Why is what I read on Wireshark not what I read in the RFCs?
The SNMP protocol and Wireshark are both correct, but you misunderstand the basic concept of SNMP tables and OIDs.
While you might see 192.168.1.21 in Wireshark's decoded tree view, you must be aware of what entity it actually belongs to in order to understand how it was encoded.
In the screenshot you can see that the highlighted 192.168.1.21 was part of the OID 1.3.6.1.2.1.4.20.1.2.192.168.1.21. This is a typical OID pattern used in tables, where the index 192.168.1.21 is combined with the table column OID 1.3.6.1.2.1.4.20.1.2 to identify the actual cell.
Thus, its bytes are encoded following BER like any other OID, not as an IpAddress:
How does ASN.1 encode an object identifier?
That's why the bytes look different from your initial thought.
You may remember my last answer, in which I explained that every instance of an object has a "name" and an INDEX. I referenced ifTable to show how ifIndex was used to refer to each instance of a row. While many (if not most) SNMP tables use a single INTEGER as the INDEX, it's actually possible to use any combination of primitive data types (there are tables that include an OID in the index, giving you an OID within an OID). The important thing is that the INDEX be unique for each row.
Based on the OID in your screenshot, it looks like you are dealing with ipAdEntIfIndex (from RFC 1213's IP-MIB). If you consult the MIB, you will find this definition for ipAddrEntry (representing a row in ipAddrTable):
ipAddrEntry OBJECT-TYPE
SYNTAX IpAddrEntry
MAX-ACCESS not-accessible
STATUS deprecated
DESCRIPTION
"The addressing information for one of this entity's IPv4
addresses."
INDEX { ipAdEntAddr }
::= { ipAddrTable 1 }
IpAddrEntry ::= SEQUENCE {
ipAdEntAddr IpAddress,
ipAdEntIfIndex INTEGER,
ipAdEntNetMask IpAddress,
ipAdEntBcastAddr INTEGER,
ipAdEntReasmMaxSize INTEGER
}
You can see here that it uses ipAdEntAddr as the INDEX, and that ipAdEntAddr is an IpAddress. The reason that the encoding in your screenshot looks funny is that it's encoded as part of an OID, not as an OctetString, as an IpAddress would normally be.
There are two different specifications that are relevant here. The first is the specification for the ASN.1 Basic Encoding Rules, which is not publicly available for free, but basically says that OID sub-identifiers can only use the lower 7 bits of each byte, and that the MSB is a flag indicating whether there are additional bytes in that sub-identifier. Hence the 0x01 in 0x81 gives bit 7, and 0x40 gives bits 0-6. If you put them together, you get 0xc0, which is 192.
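To make that concrete, here is a small sketch in plain C (not tied to any SNMP library) that applies the same rule to the six bytes from your capture; each time the MSB is clear, one sub-identifier of the index is complete:
#include <stdio.h>
#include <stddef.h>

/* BER OID sub-identifiers are base-128: 7 data bits per byte, and the
 * MSB is set on every byte except the last one of a sub-identifier. */
int main(void)
{
    const unsigned char raw[] = { 0x81, 0x40, 0x81, 0x28, 0x01, 0x15 };
    unsigned long sub = 0;
    size_t i;

    for (i = 0; i < sizeof raw; i++) {
        sub = (sub << 7) | (raw[i] & 0x7F);   /* accumulate the low 7 bits */
        if (!(raw[i] & 0x80)) {               /* MSB clear: sub-identifier done */
            printf("%lu ", sub);              /* prints: 192 168 1 21 */
            sub = 0;
        }
    }
    printf("\n");
    return 0;
}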
The second specification is RFC 2578 (section 7.7), which defines how an INDEX is encoded within an OID:
The syntax of the objects in the INDEX clause indicate how to form
the instance-identifier:
(1) integer-valued (i.e., having INTEGER as its underlying primitive
type): a single sub-identifier taking the integer value (this
works only for non-negative integers);
(2) string-valued, fixed-length strings (or variable-length preceded by
the IMPLIED keyword): `n' sub-identifiers, where `n' is the length
of the string (each octet of the string is encoded in a separate
sub-identifier);
(3) string-valued, variable-length strings (not preceded by the IMPLIED
keyword): `n+1' sub-identifiers, where `n' is the length of the
string (the first sub-identifier is `n' itself, following this,
each octet of the string is encoded in a separate sub-identifier);
(4) object identifier-valued (when preceded by the IMPLIED keyword):
`n' sub-identifiers, where `n' is the number of sub-identifiers in
the value (each sub-identifier of the value is copied into a
separate sub-identifier);
(5) object identifier-valued (when not preceded by the IMPLIED
keyword): `n+1' sub-identifiers, where `n' is the number of sub-
identifiers in the value (the first sub-identifier is `n' itself,
following this, each sub-identifier in the value is copied);
(6) IpAddress-valued: 4 sub-identifiers, in the familiar a.b.c.d
notation.
An IpAddress is simple, because it's represented as dot-separated integers, just like an OID.
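As a quick illustration of rule (6), this sketch builds the instance OID by appending the four address octets as four extra sub-identifiers to the column OID from your screenshot (plain C again, purely illustrative):
#include <stdio.h>

/* Column OID for ipAdEntIfIndex plus an IpAddress INDEX of 192.168.1.21:
 * per rule (6), the address contributes four sub-identifiers, one per octet. */
int main(void)
{
    const unsigned int column[] = { 1, 3, 6, 1, 2, 1, 4, 20, 1, 2 };
    const unsigned int addr[4]  = { 192, 168, 1, 21 };
    size_t i;

    for (i = 0; i < sizeof column / sizeof column[0]; i++)
        printf(i ? ".%u" : "%u", column[i]);
    for (i = 0; i < 4; i++)
        printf(".%u", addr[i]);
    printf("\n");   /* 1.3.6.1.2.1.4.20.1.2.192.168.1.21 */
    return 0;
}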

Maximum number of characters output from Win32 ToUnicode()/ToAscii()

What is the maximum number of characters that could be output from the Win32 functions ToUnicode()/ToAscii()?
Surely there is a sensible upper bound on what it can output given a virtual key code, scan key code, and keyboard state?
On my Windows 8 machine USER32!ToAscii calls USER32!ToUnicode with an internal buffer and cchBuff set to 2. Because the output of ToAscii is a LPWORD and not a LPSTR, we cannot assume anything about the real limits of ToUnicode from this investigation, but we know that ToAscii is always going to output a WORD. The return value tells you whether 0, 1 or 2 bytes of this WORD contain useful data.
Moving on to ToUnicode, things get a bit trickier. If it returns 0 then nothing was written. If it returns 1 or -1 then one UCS-2 code point was written. We are then left with the strange 2 <= return expression. We can try to dissect the MSDN documentation:
Two or more characters were written to the buffer specified by pwszBuff. The most common cause for this is that a dead-key character (accent or diacritic) stored in the keyboard layout could not be combined with the specified virtual key to form a single character. However, the buffer may contain more characters than the return value specifies. When this happens, any extra characters are invalid and should be ignored.
You could interpret this as "two or more characters were written but only two of them are valid" but then the return value should be documented as 2 and not 2 ≤ value.
I believe there are two things going on in that sentence and we should eliminate what it calls "extra characters":
However, the buffer may contain more characters than the return value specifies.
This just implies that the function may party on your buffer beyond what it is actually going to return as valid. This is confirmed by:
When this happens, any extra characters are invalid and should be ignored.
This just leaves us with the unfortunate opening sentence:
Two or more characters were written to the buffer specified by pwszBuff.
I have no problem imagining a return value of 2; it can be as simple as a base character combined with a diacritic that does not exist as a pre-composed code point.
The "or more" part could come from multiple sources. If the base character is encoded as a surrogate-pair then any additional diacritic/combining-character will push you over 2. There could simply also be more than one diacritic/combining-character on the base character. There might even be a leading LTR/RTL mark.
I don't know if it is possible to end up with all 3 conditions at the same time but I would play it safe and specify a buffer of 10 or so WCHARs. This should be well within the limits of what you can produce on a keyboard with "a single keystroke".
This is by no means a final answer but it might be the best you are going to get unless somebody from Microsoft responds.
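For what it's worth, here is a sketch of that defensive call. The virtual key ('A') and the empty keyboard-state array are just placeholders for illustration, and error handling is minimal:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    BYTE  state[256] = { 0 };            /* keyboard state; empty for illustration */
    WCHAR buf[11];                       /* "10 or so" WCHARs, plus room for a terminator */
    UINT  vk = 'A';
    UINT  sc = MapVirtualKeyW(vk, MAPVK_VK_TO_VSC);

    int n = ToUnicode(vk, sc, state, buf, 10, 0);
    if (n > 0) {
        buf[n] = L'\0';                  /* ToUnicode does not null-terminate */
        wprintf(L"%d code unit(s): %ls\n", n, buf);
    } else if (n < 0) {
        wprintf(L"dead key stored in the keyboard state\n");
    } else {
        wprintf(L"no translation for this keystroke\n");
    }
    return 0;
}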
In the usual dead-key case we can receive one or two wchar_t's (if the key cannot be composed with the dead key, it returns two wchar_t's) for one ToUnicode call.
But Windows also supports ligatures:
A ligature in keyboard terminology means when a single key outputs two or more UTF-16 codepoints. Note that some languages use scripts that are outside of the BMP (Basic Multilingual Plane) and need to be completely realized by ligatures of surrogate pairs (two UTF-16 codepoints).
If we want to look at the practical side of things: here is a list of Windows system keyboard layouts that use ligatures.
51 out of 208 system layouts have ligatures
So, as we can see from the tables, we can have up to 4 wchar_t's for one ToUnicode() call (for one keypress).
If we want to look at it from a theoretical perspective, we can look at kbd.h in the Windows SDK, where the underlying keyboard layout structures are defined:
/*
* Macro for ligature with "n" characters
*/
#define TYPEDEF_LIGATURE(n) typedef struct _LIGATURE##n { \
BYTE VirtualKey; \
WORD ModificationNumber; \
WCHAR wch[n]; \
} LIGATURE##n, *KBD_LONG_POINTER PLIGATURE##n;
/*
* Table element types (for various numbers of ligatures), used
* to facilitate static initializations of tables.
*
* LIGATURE1 and PLIGATURE1 are used as the generic type
*/
TYPEDEF_LIGATURE(1) // LIGATURE1, *PLIGATURE1;
TYPEDEF_LIGATURE(2) // LIGATURE2, *PLIGATURE2;
TYPEDEF_LIGATURE(3) // LIGATURE3, *PLIGATURE3;
TYPEDEF_LIGATURE(4) // LIGATURE4, *PLIGATURE4;
TYPEDEF_LIGATURE(5) // LIGATURE5, *PLIGATURE5;
typedef struct tagKbdLayer {
....
/*
* Ligatures
*/
BYTE nLgMax;
BYTE cbLgEntry;
PLIGATURE1 pLigature;
....
} KBDTABLES, *KBD_LONG_POINTER PKBDTABLES;
nLgMax here is the size of the LIGATURE##n.wch[n] array (it affects the size of each pLigature object).
cbLgEntry is the number of pLigature objects.
So we have a BYTE value in nLgMax, which means that a ligature could theoretically be up to 255 wchar_t's (UTF-16 code units) long.

Windows DNS server debug log hostname format

I was reading a Windows DNS server debug log file, in particular the packet captures, and am trying to understand how to parse the host names in order to use them in scripts.
The following is an example from an ANSWER section:
Offset = 0x007f, RR count = 2
Name "[C06A](5)e6033(1)g(10)akamaiedge[C059](3)net(0)"
TYPE A (1)
CLASS 1
TTL 20
DLEN 4
DATA 23.62.18.101
So, looking at the string "[C06A](5)e6033(1)g(10)akamaiedge[C059](3)net(0)" I realized that the numbers in parentheses are a count of the number of characters that follow. Replacing all of them with dots (except the first and last, which should just be removed) works like a charm.
The stuff in square brackets, though, remains a mystery to me. If I simply remove it all after handling the parentheses and quotes, the above string becomes e6033.g.akamaiedge.net. That is a valid host name.
So my question is: what does that content in square brackets actually mean? What is the correct way to turn that string into a proper host name I could feed to nslookup and other tools?
It appears it's the 2nd possible form of the NAME field as documented here:
http://www.zytrax.com/books/dns/ch15/#name
NAME This name reflects the QNAME of the question, i.e. it may take
one of TWO formats. The first format is the label format defined for
QNAME above. The second format is a pointer (in the interests of data
compression, which to be fair to the original authors was far more
important then than now). A pointer is an unsigned 16-bit value with
the following format (the top two bits of 11 indicate the pointer
format):
 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
 1  1  <--------------- offset --------------->
The offset in octets (bytes) from the start of the whole message.
Must point to a label format record to derive name length.
Note: Pointers, if used, terminate names. The name field may consist
of a label (or sequence of labels) terminated with a zero length
record OR a single pointer OR a label (or label sequence) terminated
with a pointer.
where the response is using pointers to refer to data elsewhere in the message.
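Practically, then, your approach of replacing the counts with dots and dropping the bracketed values is sound, assuming (as your example suggests) that the log prints the labels a pointer refers to right after the [offset] marker. A minimal sketch of such a parser in C (the function name is just illustrative):
#include <stdio.h>
#include <string.h>

/* Sketch: turn a debug-log name such as
 *   [C06A](5)e6033(1)g(10)akamaiedge[C059](3)net(0)
 * into e6033.g.akamaiedge.net.  [XXXX] markers carry no label text and
 * are skipped; (n) introduces a label of n characters; (0) ends the name. */
static void log_name_to_host(const char *in, char *out, size_t outsz)
{
    size_t o = 0;

    while (*in && o + 1 < outsz) {
        if (*in == '[') {                         /* compression-pointer marker */
            while (*in && *in != ']')
                in++;
            if (*in)
                in++;
        } else if (*in == '(') {                  /* label length */
            int len = 0;
            in++;
            while (*in >= '0' && *in <= '9')
                len = len * 10 + (*in++ - '0');
            if (*in == ')')
                in++;
            if (len == 0)
                break;                            /* (0) = root, name is complete */
            if (o > 0)
                out[o++] = '.';
            while (len-- > 0 && *in && o + 1 < outsz)
                out[o++] = *in++;
        } else {
            in++;                                 /* ignore quotes and anything else */
        }
    }
    out[o] = '\0';
}

int main(void)
{
    char host[256];

    log_name_to_host("[C06A](5)e6033(1)g(10)akamaiedge[C059](3)net(0)",
                     host, sizeof host);
    puts(host);                                   /* e6033.g.akamaiedge.net */
    return 0;
}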

Why are values stored in a PE file reversed?

When I view a PE file in a hex editor, the values stored in it appear reversed. Why?
For example:
in the PE file header structure, the second field is NumberOfSections.
Its value appears as 03 00,
but the real value is 0x0003.
Does that mean that to read a value from a PE file, we must read it byte by byte from the right?
The PE format is little-endian, so the least-significant byte comes first.
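A tiny sketch of what that means when you reassemble the field yourself from the raw bytes you saw in the hex editor:
#include <stdio.h>

int main(void)
{
    unsigned char raw[2] = { 0x03, 0x00 };           /* bytes as they appear on disk */
    unsigned int value = raw[0] | (raw[1] << 8);     /* low byte first */

    printf("NumberOfSections = 0x%04X (%u)\n", value, value);   /* 0x0003 (3) */
    return 0;
}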

VB6 Hex in a string - literal value woes

I'm trying to store the literal ASCII value of hex FFFF, which in decimal is 65535 and is written out as ÿ in VB6. I want to store this value in a buffer which is defined by:
Type HBuff
txt As String * 16
End Type
Global WriteBuffer As HBuff
in the legacy code I inherited.
I want to do something like WriteBuffer.txt = Asc(hex$(-1)) but VB6 stores it as 70
I need to store this value, ÿ in the string, even though it is not printable.
how can I do this?
I'm not sure what your problem is.
If you want to store character number 255 in a string, then do so:
WriteBuffer.txt = Chr$(255)
Be warned though that the result depends on the current locale.
ChrW$(255) does not, but it may not yield the character you want.
For reference, the code you used returns the ASCII code of the first character of the textual hex representation of the number -1. Hex$(-1) is "FFFF" when -1 is typed as Integer (which it is by default), so you get the ASCII code of the letter F, which is 70.
