mips finding offset in beq instruction - mips32

I am new to MIPS-32 and I have a problem understanding the offset of the following instruction:
beq $a0,$a1,0x00401200
knowing that
PC=0x0040122C
I think that
$a0=00100
$a1=00101
the instruction should be 000100|00100|00101|0001 0010 0000 0000.
The solution says the offset is -12, but I do not understand why.
Could anyone give me a hand?

The target address is computed as TA = (PC + 4) + 4*offset. Note the +4: MIPS branch offsets are counted from the instruction after the branch (the delay slot), not from the branch itself.
So the offset = (TA - PC - 4)/4.
0x00401200 is your target address. We see that it is going backwards, so we must make sure our answer is negative:
TA - (PC + 4) = 0x00401200 - 0x00401230 = -0x30
Now we divide by 4 (a right shift by 2 places):
-0x30 / 4 = -12
As a 16-bit two's complement value, -12 is 0xFFF4 (1111 1111 1111 0100), and that is what goes into the offset field.
If you leave out the +4 and compute (TA - PC)/4 instead, you end up with -11 (0xFFF5); that off-by-one is exactly why the solution says -12 and not -11.
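For reference, here is a minimal C sketch of that calculation (the helper name beq_offset is my own, not something from the thread):

#include <stdio.h>
#include <stdint.h>

/* Offset field of a MIPS beq: the word distance from the instruction
   after the branch (PC + 4) to the target address. */
static int16_t beq_offset(uint32_t pc, uint32_t ta)
{
    return (int16_t)(((int32_t)(ta - (pc + 4))) / 4);
}

int main(void)
{
    int16_t off = beq_offset(0x0040122C, 0x00401200);
    printf("offset = %d (0x%04X)\n", off, (uint16_t)off); /* -12 (0xFFF4) */
    return 0;
}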

Related

How data is stored in memory or register

I'm new to assembly language and am learning it for exams. I am a programmer and have worked in C, C++, Java, and ASP.NET.
I have TASM on Windows XP.
I want to know how data is stored in memory or in a register. I want to understand the process. I believe it is something like this:
While entering data, e.g. a number:
Input decimal no. -> converted to hex -> store ASCII of hex in registers or memory.
While fetching data:
ASCII of hex in registers or memory -> converted to hex -> show decimal no. on monitor.
Is that correct? If not, can anybody explain it with a simple example?
Ok, Michael: see the code below, where I am trying to add two 1-digit numbers and display a 2-digit result, like 6+5=11.
Sseg segment stack
ends
code segment
;30h to 39h represent numbers 0-9
MOV BX, '6' ; ASCII CODE OF 6 IS STORED IN BX, equal to 36h
ADD BX, '5' ; ASCII CODE OF 5 (equal to 35h) IS ADDED IN BX, i.e total is 71h
Thanks Michael... I accept my mistake....
Ok, so here BX = 0071h, right? Does that mean BL = 00 and BH = 71?
However, if I do so, I can't figure out how to display the result 11.
Hey Blechdose,
Can you help me with one more problem? I am trying to compare 2 values. If both are the same, then dl=1; otherwise dl=0. But the following code displays 0 even when the values are equal. Why is it not jumping?
sseg segment stack
ends
code segment
assume cs:code
mov dl,0
mov ax,5
mov bx,5
cmp ax,bx
jne NotEqual
je equal
NotEqual:
mov dl,0
add dl,30h
mov ah,02h
int 21h
mov ax,4c00h
int 21h
equal: mov dl,1
add dl,30h
mov ah,02h
int 21h
mov ax,4c00h
int 21h
code ends
end NotEqual
end equal
Registers consist of bits. A bit can have the logic value 0 or 1. It is a "logic value" for us, but it is actually represented by some voltage inside the hardware; for example, 4-5 V is interpreted as "logic 1" and 0-1 V as "logic 0". The BX register has 16 of those bits.
Let's say the current content of BX (the base register) is 0000000000110110. Because long strings of 0s and 1s are very hard for humans to read, we group every 4 bits into 1 hex digit to get a more readable format to work with. The CPU does not know what a hex or decimal number is; it can only work with binary. Okay, let us use a more readable format for our BX register:
0000 0000 0011 0110  (actual BX content)
   0    0    3    6  (hex format, for us)
                 54  (corresponding decimal value)
When you send this value (36h) to your output terminal, it interprets the value as an ASCII character, so it displays a "6" for 36h.
When you want to add 6 + 2 in assembly, you put 0110 (6) and 0010 (2) into registers. Your assembler, TASM, does the conversion work for you: it lets you write '6' (ASCII) or 0x6 (hex) or even 6 (decimal) in the asm source code and converts it into the binary number the register accepts. WARNING: '6' does not put the value 6 into the register, but the ASCII code for 6 (36h). You cannot calculate with that directly.
Example: 6+2=8
mov BX, 6h ; We put 0110 (6) into BX. (actually 0000 0000 0000 0110,
; because BX is 16 Bit, but I will drop those leading 0s)
add BX, 2h ; we add 0010 (2) to 0110 (6). The result 1000 (8) is stored in BX.
add BX, 30h ; we add 00110000 (30h). The result 00111000 (38h) is stored in BX.
; 38h is the ASCII-code, which your terminal output will interpret as '8'
When you do a calculation like 6+5 = 11, it is even more complicated, because you have to convert the result 1011 (11) into 2 ASCII digits, '1' and '1' (3131h = 00110001 00110001).
After adding 6 (0110) + 5 (0101) = 11 (1011), BX will contain this (blanks added for readability):

0000 0000 0000 1011  (binary)
   0    0    0    B  (hex)
                 11  (decimal)
|_________________|
        BX
|_______| |_______|
    BH        BL
BH is the higher byte of BX, while BL is the lower byte. In our example BH is 00h, while BL contains 0Bh.
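As an aside, here is a small C sketch (my own illustration, not part of the thread) of how BH and BL relate to BX:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t bx = 0x000B;    /* the result 11 (0000 0000 0000 1011) */
    uint8_t  bh = bx >> 8;   /* higher byte: 00h */
    uint8_t  bl = bx & 0xFF; /* lower byte:  0Bh */
    printf("BX=%04Xh BH=%02Xh BL=%02Xh\n", bx, bh, bl);
    return 0;
}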
To display your summation result on your terminal output, you need to convert it to ASCII code. In this case you want to display "11", so you need the '1' ASCII character twice. By looking up one of the hundreds of ASCII tables on the internet, you will find that the code for the '1' character is 31h. Consequently, you need to send 3131h to your terminal:
0011 0001 0011 0001  (binary)
   3    1    3    1  (hex)
              12593  (decimal)
The trick for doing this is to divide your 11 (1011) by 10 with the div instruction. The division by 10 gives you a quotient and a remainder. You convert the remainder into an ASCII digit and save it in a buffer; then you repeat the process, dividing the quotient from the last step by 10 again, until the quotient is 0. (Using the div instruction is a bit tricky; you will have to look that up yourself.)
binary (decimal):
divide 1011 (11) by 1010 (10):
quotient: 0001 (1), remainder: 0001 (1) -> convert remainder to ASCII
divide the quotient by 1010 (10) again:
quotient: 0000 (0), remainder: 0001 (1) -> convert remainder to ASCII
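Here is the same divide-by-10 loop as a minimal C sketch (my own illustration; in TASM you would do it with the div instruction, which for a 16-bit divide leaves the quotient in AX and the remainder in DX):

#include <stdio.h>

int main(void)
{
    unsigned value = 11; /* the sum from the example */
    char buf[16];
    int i = 0;
    do {
        buf[i++] = (char)('0' + value % 10); /* remainder -> ASCII ('0' is 30h) */
        value /= 10;                         /* quotient feeds the next round */
    } while (value != 0);
    while (i > 0)
        putchar(buf[--i]); /* digits come out backwards, so print in reverse: "11" */
    putchar('\n');
    return 0;
}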

CRC polynomial calculation

I am trying to understand this document, but can't seem to get it right. http://www.ross.net/crc/download/crc_v3.txt
What's the algorithm used to calculate it?
I thought it uses XOR, but I don't quite get how he gets 0110 from 1100 XOR 1001. That should be 101 (or 0101, or 1010 if a bit moves down). If I can get this, I think the rest will come easily, but for some reason I just don't get it.
 9= 1001 ) 0000011000010111 = 0617 = 1559 = DIVIDEND
DIVISOR    0000.,,....,.,,,
           ----.,,....,.,,,
            0000,,....,.,,,
            0000,,....,.,,,
            ----,,....,.,,,
             0001,....,.,,,
             0000,....,.,,,
             ----,....,.,,,
              0011....,.,,,
              0000....,.,,,
              ----....,.,,,
               0110...,.,,,
               0000...,.,,,
               ----...,.,,,
                1100..,.,,,
                1001..,.,,,
                ====..,.,,,
                 0110.,.,,,
                 0000.,.,,,
                 ----.,.,,,
                  1100,.,,,
                  1001,.,,,
                  ====,.,,,
                   0111.,,,
                   0000.,,,
                   ----.,,,
                    1110,,,
                    1001,,,
                    ====,,,
                     1011,,
                     1001,,
                     ====,,
                      0101,
                      0000,
                      ----
                       1011
                       1001
                       ====
                       0010 = 02 = 2 = REMAINDER
The part you quoted is just standard long division like you learned in elementary school, except that it is done on binary numbers. At each step you perform a subtraction to get the remainder: 1100 - 1001 = 0011, and then the next bit of the dividend (a 0) is brought down, which gives the 0110 on the next line.
Note that the article uses this only as a preliminary example; it is not actually what is done in calculating a CRC. Instead of ordinary numbers, CRC uses division of polynomials over the field GF(2). This can be modeled with ordinary binary numbers and ordinary long division, except that you use XOR instead of subtraction.
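To make that concrete, here is a small C sketch of the XOR-style division (my own illustration; 1001 is just the example's divisor, not a real CRC polynomial). Note that because XOR has no borrows, the GF(2) remainder comes out as 0110 (6), not the 0010 (2) that ordinary subtraction produced in the quoted trace:

#include <stdio.h>

int main(void)
{
    unsigned dividend = 0x0617; /* 0000 0110 0001 0111, the example's message */
    unsigned divisor  = 0x9;    /* 1001 */
    unsigned reg = 0;           /* sliding 4-bit window */

    for (int i = 15; i >= 0; i--) {
        reg = (reg << 1) | ((dividend >> i) & 1); /* bring down the next bit */
        if (reg & 0x8)                            /* does the divisor fit once? */
            reg ^= divisor;                       /* "subtract" = XOR */
    }
    printf("GF(2) remainder = %X\n", reg);        /* prints 6 */
    return 0;
}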
The link you provided says:
we'll do the division using good-'ol long division which you
learnt in school (remember?)
You just repeatedly subtract, but since it is binary, there are only two options at each step: the divisor fits into the current selection either once or zero times. I annotated the steps:
0000011000010111
0000
1001 x 0
---- -
 0000
 1001 x 0
 ---- -
  0001
  1001 x 0
  ---- -
   0011
   1001 x 0
   ---- -
    0110
    1001 x 0
    ---- -
     1100
     1001 x 1
     ---- -
      0110
      1001 x 0
      ---- -
       1100
       1001 x 1
       ---- -
        0110
        and so on

How to interpret the EFL in OllyDbg?

What is and how to interpret the EFL under registers using OllyDbg?
What are NO, NB, E, NE, BE, A, NS, PO, GE, G, …?
Example:
EFL 00000246 (NO,NB,E,BE,NS,PE,GE,LE)
My futile decipher:
00000246 => 0000 ... 0010 0100 0110
NO NB E BE NS PE GE LE
0 0 0 0 0 1 1 1 <- I do not know if this is correct.
(Likely not.)
Operation:
AND ESI,7FFFFFFF
Result:
EFL 00000202 (NO,NB,NE,A,NS,PO,GE,G)
My ASCII, (inspired by):
_---------------------------=> E -> NE
/ _----------------------=> BE -> A
| / _------------=> PE -> PO
| | / _--=> LE -> G
| | | /
| | | |
NO NB NE A NS PO GE G
0000 0000 0000 0000 0000 0010 0100 0110
0 0 0 0 0 1 1 1
Help has the following to say:
Following EFL are the suffixes of conditional commands that satisfy
current flags. If, for example, you see:
EFL 00000A86 (O,NB,NE,A,S,PE,GE,G),
this means that JO and JNE will be taken whereas JB and JPO not.
I suspected the CPU flags, the FLAGS register, etc., but I can't recognize e.g. NO in any of those.
EFL is the FLAGS register (extended to 32 bits as EFLAGS), used among other things to indicate parity, overflow/carry, direction, and branch flow, as well as various CPU modes.
Olly slightly expands on the register by separating out the booleans for the common status bits above EFL (the single bits named C P A Z S T D O).
The abbreviations in brackets next to EFL's value correlate to which conditional jumps would or would not be taken under the current EFLAGS; e.g. NO stands for No Overflow, and toggling the OF bit switches it to O for Overflow.
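To make the mapping concrete, here is a small C sketch (my own illustration) that decodes the question's value 00000246 into flag bits and prints the suffix list Olly shows:

#include <stdio.h>

int main(void)
{
    unsigned efl = 0x246;     /* example EFL value from the question */

    int cf = (efl >> 0)  & 1; /* carry    */
    int pf = (efl >> 2)  & 1; /* parity   */
    int zf = (efl >> 6)  & 1; /* zero     */
    int sf = (efl >> 7)  & 1; /* sign     */
    int of = (efl >> 11) & 1; /* overflow */

    printf("%s,",  of ? "O"  : "NO");        /* overflow?        */
    printf("%s,",  cf ? "B"  : "NB");        /* below: CF        */
    printf("%s,",  zf ? "E"  : "NE");        /* equal: ZF        */
    printf("%s,",  (cf || zf) ? "BE" : "A"); /* below-or-equal   */
    printf("%s,",  sf ? "S"  : "NS");        /* sign             */
    printf("%s,",  pf ? "PE" : "PO");        /* parity even/odd  */
    printf("%s,",  (sf != of) ? "L" : "GE"); /* less: SF != OF   */
    printf("%s\n", (zf || sf != of) ? "LE" : "G");
    /* prints NO,NB,E,BE,NS,PE,GE,LE, matching the question */
    return 0;
}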

How do you convert little Endian to big Endian with bitwise operations?

I get that you'd want to do something like take the first four bits, put them on a stack (reading from left to right), then put them in a register and shift them x times to move them to the right part of the number?
Something like
1000 0000 | 0000 0000 | 0000 0000 | 0000 1011
Stack: bottom - 1101 - top
shift it 28 times to the left
Then do something similar with the last four bits but shift to the right and store in a register.
Then you OR all of that into an empty return value of 0.
Is there an easier way?
Yes there is. Check out the _byteswap functions/intrinsics, and/or the bswap instruction.
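For example, here is a minimal C sketch of the manual shift-and-mask swap (modern compilers typically recognize this pattern and emit a single bswap):

#include <stdio.h>
#include <stdint.h>

/* Reverse the byte order of a 32-bit value. */
static uint32_t swap32(uint32_t x)
{
    return (x >> 24)               /* byte 3 -> byte 0 */
         | ((x >> 8) & 0x0000FF00) /* byte 2 -> byte 1 */
         | ((x << 8) & 0x00FF0000) /* byte 1 -> byte 2 */
         | (x << 24);              /* byte 0 -> byte 3 */
}

int main(void)
{
    /* the question's example: 1000 0000 | 0000 0000 | 0000 0000 | 0000 1011 */
    printf("%08X\n", swap32(0x8000000Bu)); /* prints 0B000080 */
    return 0;
}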
You could do it this way.
For example, if the input is 0010 1000 and I want the output
1000 0010
store the input in a variable x and swap its two nibbles:
unsigned char x = 0x28;            /* 0010 1000 */
unsigned char i = x >> 4;          /* high nibble moved down: 0000 0010 */
unsigned char j = (x << 4) & 0xF0; /* low nibble moved up:    1000 0000 */
unsigned char k = i | j;
printf("%02X\n", k);               /* prints 82, i.e. 1000 0010 */
(Note this swaps the two 4-bit nibbles of one byte; a real endianness conversion swaps whole bytes, as in the answer above.)

Algorithm for bitwise fiddling

If I have a 32-bit binary number and I want to replace its lower 16 bits with a 16-bit number that I have, keeping the upper 16 bits, to produce a new binary number, how can I do this using simple bitwise operators?
For example the 32-bit binary number is:
1010 0000 1011 1111 0100 1000 1010 1001
and the lower 16-bit I have is:
0000 0000 0000 0001
so the result is:
1010 0000 1011 1111 0000 0000 0000 0001
how can I do this?
You do this in two steps:
Mask out the bits that you want to replace (AND it with 0s)
Fill in the replacements (OR it with the new bits)
So in your case (in pseudocode),
i32 number;
i32 mask_lower_16 = 0xFFFF0000; // keeps the upper 16 bits, clears the lower 16
i16 newValue;
number = (number AND mask_lower_16) OR newValue;
In actual programming language implementation, you may also need to address the issue of sign extension on the 16-bit value. In Java, for example, you have to mask the upper 16 bits of the short like this:
short v = (short) 0xF00D;
int number = 0x12345678;
number = (number & 0xFFFF0000) | (v & 0x0000FFFF);
System.out.println(Integer.toHexString(number)); // "1234f00d"
(original32BitNumber & 0xFFFF0000) | 16bitNumber
Well, I could tell you the answer. But perhaps this is homework. So I won't.
Consider that you have a few options:
| // bitwise OR
^ // bitwise XOR
& // bitwise AND
Maybe draw up a little table and decide which one will give you the right result (when you operate on the right section of your larger binary number).
Use & to mask off the low bits and then | to merge the 16-bit value with the 32-bit value:
uint a = 0xa0bf68a9;
short b = 1;
uint result = (a & 0xFFFF0000) | b;
