Data Representation in LC-3

I was doing my exam prep and came across a problem that I've been having issues with, mainly because of the lack of info provided. The question is:
b. What integer does the 16-bit word F751 represent in the LC-3?
So do we convert the base 16 to base 10 or to base 2? I'm not really sure how to do this problem.

Take F751 and convert it to binary:
1111 0111 0101 0001
The most significant bit is 1, so we know the number is negative. Take the two's complement:
0000 1000 1010 1111
and convert to decimal: 2223. So the word represents -2223.
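If you want to sanity-check this, here is a minimal sketch in Python (my addition, not part of the original answer):

# Interpret a 16-bit word as a two's-complement integer.
word = 0xF751
value = word - 0x10000 if word & 0x8000 else word  # sign bit set: subtract 2^16
print(value)  # -2223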

The high digit is greater than or equal to 8, so the number is negative.
Take the complement to F (fifteen) of each digit of F751:
F gives 0
7 gives 8
5 gives A
1 gives E
08AE is the one's complement.
08AF is the two's complement, which is 2223 in decimal, so the answer is -2223.
This avoids having to convert to binary.
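The same digit-wise trick can be checked in a few lines of Python (a sketch I am adding, not part of the thread):

# Fifteen's complement of each hex digit, then add 1.
hex_digits = "F751"
ones = "".join(format(15 - int(d, 16), "X") for d in hex_digits)
print(ones)               # 08AE, the one's complement
print(int(ones, 16) + 1)  # 2223, so F751 represents -2223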

Related

How to compute a Hamming code for 10 bits?

I have seen examples of Hamming code detection and correction with 8 bits or 12 bits. Suppose I had the bit string 1101 0110 11, which contains 10 bits.
Do I need to add two additional bits to that bit string to complete the nibble? If so, do I add 0s or 1s?
I have looked for other examples, but cannot seem to find any with partial nibbles to determine the procedure. Thank you.

Why does the double-dabble algorithm work?

I am trying to understand why the double-dabble algorithm works, but I am not getting it.
There are a lot of great descriptions of the steps and the purpose of the algorithm, like
http://www.classiccmp.org/cpmarchives/cpm/mirrors/cbfalconer.home.att.net/download/dubldabl.txt or
http://en.wikipedia.org/wiki/Double_dabble
There are also some attempts at an explanation. The best one I found is this one:
http://www.minecraftforum.net/forums/minecraft-discussion/redstone-discussion-and/340153-why-does-the-double-dabble-algorithm-work#c6
But I still feel like I am missing the connecting parts. Here is what I get:
I get that you can convert a binary number to decimal by reading it from left to right: start with a decimal value of 0, iterate over the digits of the binary number, add 1 to the decimal number for every 1 you reach, and multiply by 2 when you move on to the next digit (as explained in the last link).
But why does this lead to the double-dabble algorithm? We don't want to convert to decimal, we want to convert to BCD. Why do we keep the multiplying (shifting), but drop the adding of 1? Where is the connection?
I get why we have to add 3 when a digit in a BCD field exceeds 4 before shifting: if you want the BCD number to be multiplied by 2 when you shift it, you have to do some fixups. If you shift the BCD number 0000 1000 (8) and you want to get its double, 0001 0110 (16), you have to add 3 (half of 6) before shifting, because by just shifting you end up with 0001 0000 (10): you're missing 6.
But why do we want this, and where does the adding of 1 happen?
I guess I am just missing a little part.
Converting N to its decimal representation involves repeatedly determining R (the remainder) after dividing by 10. Instead of dividing by 10, you might also divide by 2 (which is just a shift) followed by a division by 5. Hence the 5. Long division means trying to subtract 5 and, if that succeeds, actually doing the subtraction while keeping track of that success by setting a bit in Q. Adding 3 is the same as subtracting 5 while setting the bit that will subsequently be shifted into Q (for a digit d, d + 3 = (d - 5) + 8, and that 8 is exactly the bit that shifts into the next position). Hence the 3.
16 in binary is 10 in BCD.
We want 16 in BCD, so we add 6.
But why do we add 3 and not 6?
Because the adding is done before the shift, so everything is divided by two; that's why we add 3 when a digit is 5 or higher!
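This is easy to verify with a quick Python check (my sketch, not from the thread): for every BCD digit d from 5 to 9, adding 3 and then shifting left is the same as doubling d, subtracting 10, and carrying a 1 into the next digit.

# 2 * (d + 3) == (2 * d - 10) + 16: the 16 is the carry into the next BCD digit.
for d in range(5, 10):
    shifted = (d + 3) << 1
    assert (shifted & 0xF) == 2 * d - 10 and (shifted >> 4) == 1
    print(d, format(shifted, "05b"))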
I think I got it while writing this question:
Suppose you want to convert 1111 1111 from binary into BCD.
We use the method for converting the binary number to a decimal number explained in the question, but we alter it a little bit.
We don't start with a decimal number of 0 but with a BCD number of 0000 0000 (0).
BCD        binary
0000 0000  1111 1111
First we have to add 1 to the BCD number. This can be done by a simple shift:
0000 0001  1111 1110
Now we move on and want to multiply the BCD number by 2. In the next step we want to add the current binary digit to the BCD number. Both can be accomplished in one step by (again) shifting:
0000 0011  1111 1100
This works over and over again. The only situation in which it doesn't work is when a digit of the BCD number exceeds 4. In that case you have to do the fixup explained in the question.
After iterating through the binary number, you get the BCD representation of the number on the left side \o/
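Put together, the whole procedure fits in a few lines of Python (a sketch of the steps above; the 8-bit width and three BCD digits are my assumptions):

# Classic double dabble: fix up digits >= 5, then shift the next bit in.
def double_dabble(value, bits=8, digits=3):
    bcd = 0
    for i in range(bits - 1, -1, -1):
        for d in range(digits):
            if ((bcd >> (4 * d)) & 0xF) >= 5:
                bcd += 3 << (4 * d)
        bcd = (bcd << 1) | ((value >> i) & 1)
    return bcd

print(format(double_dabble(0b11111111), "012b"))  # 0010 0101 0101, i.e. 255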
The whole idea is to use the shifting explained in your link, but then to convert the number on the left into BCD. At each stage you are getting closer to the actual binary number on the left, while making sure that the number remains in BCD rather than binary.
When you shift in a '1', you are essentially adding it.
Take a look at the link below to get the gist of the 'add 3' argument:
https://olduino.files.wordpress.com/2015/03/why-does-double-dabble-work.pdf

Decimal conversion using 5-bit two's complement

I am asked to show the calculation of 11 + 6 using 5-bit two's complement. I find some rules confusing, but have come up with the answer shown below. Please let me know if it is correct, or what needs to be done if it is wrong.
11 in 5-bit two's-complement representation is
01011
6 in 5-bit two's-complement representation is
00110
Now, adding them with carries where required:
01011 ----- 11 in binary
00110 ----- 6 in binary
10001 ----- total
which in decimal is 17. Is this the correct method of working it out? Because my result in binary shows 10001, and isn't 10001 supposed to mean -1, because the first bit in two's complement is a sign bit? Please help me solve this if it is wrong. Thanks for your help.
Let's explain how this works. You know that 1 is represented as 00001 in 5-bit binary. To get -1, the known method (for two's complement) is to:
Invert all bits, which gives 11110.
Add 1 to the previous result, which leads to 11111, equal to 31 in decimal (unsigned).
Thus, 10001 is not equal to -1.
Now, let's take 11 (base 10) as an example. We know that 11 equals 01011.
Invert all bits => 10100
Add 1 => 10101, which equals 21 in unsigned mode or -11 in signed mode.
You can then deduce that 10001 is -15 in signed mode or 17 in unsigned mode.
Also be careful: a signed 5-bit integer is bounded from -16 to 15, while an unsigned 5-bit integer is bounded from 0 to 31. In your case the true sum, 17, does not fit in the signed range, so 10001 only reads as 17 if you treat it as unsigned; as a signed value the addition has overflowed to -15.
Hope that helps you.
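Here is a minimal Python sketch (my illustration, not the answerer's) that reproduces the 5-bit arithmetic above:

# Read a bit pattern as a two's-complement value of the given width.
def to_signed(word, bits=5):
    word &= (1 << bits) - 1
    return word - (1 << bits) if word & (1 << (bits - 1)) else word

total = (0b01011 + 0b00110) & 0b11111  # 11 + 6, truncated to 5 bits
print(format(total, "05b"), to_signed(total))  # 10001 -15: signed overflow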

Bit confusion about the unary complement operator

The bitwise unary complement operator (~) of 2 is -3. I read somewhere that the value 2 in binary representation is 0010, and the bitwise unary complement operator changes bits from 0 to 1, or vice versa, so the value of ~2 is 1101, which means -3. But my confusion is: why have they taken 2's binary representation as 0010? As far as I know, an int is 32 bits, so why can't 2 be 00000000000000000000000000000010 and its unary complement 11111111111111111111111111111101? I know I am wrong, but why? Please explain.
The answer to your question is: "Two's complement was chosen over one's complement because of several convenient characteristics that make arithmetic easier to implement in digital circuits."
I believe from the wording of your question that a bit of an illustration would help.
To fully appreciate this, you still need to read up on two's-complement notation and arithmetic, both how they work and their history, but I will try to explain the basics here in a story-like fashion.
Let's say that we have 4 bits in which to represent a signed integer value.
Only 16 distinct values can be represented in 4 bits (16 distinct "patterns" can be made with 4 bits): 0000, 0001, 0010, 0011, 0100, ... 1111. (Try it; it's easier to see and develop the pattern in a columnar format, which you'll see I've done below.)
Decide which 16 values you want to be able to represent.
It makes sense to say that 0000 stands for zero, 0001 for one, and so on for the positives, but what about the negatives?
Because zero has "taken one place", we can represent 15 other integers, so it is immediately obvious that we cannot represent the same number of positive and negative values.
We make a choice: our range will run from -8 to +7 (we might have said -9 to +6, or -7 to +8, etc., but you'll see below how this choice pays off).
Now, which bit patterns should represent the negatives?
I'm sure you'll agree that it would be very nice if every number added to its additive inverse gave zero, without us needing to resort to if-negative-then-else(-if-positive) logic. E.g. if +3 is represented as 0011 and we do the (binary) addition of 1101, we get the result (carry 1)0000. Ignore the carry and we've got zero. This makes the bit pattern 1101 the obvious winner of the tag "-3".
You can play with the remaining values the same way, and what you should arrive at is the following...
-8 1000
-7 1001
-6 1010
-5 1011
-4 1100
-3 1101
-2 1110
-1 1111
0 0000
+1 0001
+2 0010
+3 0011
+4 0100
+5 0101
+6 0110
+7 0111
With the following beautiful and convenient characteristics:
"Natural counting bit patterns": look down the column on the far right and you'll see 0 1 0 1..., then 0 0 1 1... in the next column, then 0 0 0 0 1... etc., running perfectly in sequence into the positives.
Incrementing "wraps around" (0, 1, 2, ..., 7, -8, -7, -6, ..., -1, 0, 1, ... etc., and the same goes for decrementing).
Binary addition of additive inverses gives zero with no extra logic to deal with signs.
All negative numbers have 1 for their first bit; zero and all positive numbers start with a 0. (The first bit is referred to as "the sign bit".)
Additive inverses can be obtained by the following rule/algorithm: "invert all bits, increment, discard carry". Magic! Try 3 again:
3 : 0011
~ : 1100 (the NOT operator gives "one's complement")
+1: 1101 (the two's complement representation of -3)
~ : 0010
+1: 0011 (back to +3)
etc
This is two's-complement notation.
If you've understood this 4-bit story, you'll find that it extends easily to 32-bit signed integers.
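And that is exactly what happens with ~2: you can watch it in a few lines of Python (a sketch I am adding; Python ints are arbitrary-width, so we mask to 32 bits to see the pattern):

x = 2
print(~x)  # -3, because ~x == -x - 1 in two's complement
print(format(x & 0xFFFFFFFF, "032b"))   # 00000000000000000000000000000010
print(format(~x & 0xFFFFFFFF, "032b"))  # 11111111111111111111111111111101

Both the 4-bit 1101 and the 32-bit 1111...1101 stand for the same value, -3; the width only changes how many leading sign bits you write out.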

Is there a bijection between any four distinct 4-bit strings and all the 2-bit strings?

Let me give you an example. Let's consider the strings:
1000
0101
0111
0000
and the full range of 2-bit strings:
00
01
10
11
I am wondering if there is a function that has an inverse and that maps the four 4-bit strings to the 2-bit strings.
The number of bijections from a set of n elements to another set of n elements is n!
Consider each destination element consecutively, and pick its matching origin element.
For the first, you can pick among n.
For the second, you can pick among (n-1).
...
You want a bijection between sets of 4 elements; therefore you have 4! = 24 possible functions.
00 would be mapped to 1000 in six of them (3!), to 0101 in six of them, etc.
I'm not sure this answers your question, but that's how I understand it.
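You can enumerate these bijections directly in Python (a sketch I am adding, using the four strings from the question):

from itertools import permutations

four_bit = ["1000", "0101", "0111", "0000"]
two_bit = ["00", "01", "10", "11"]

# Each permutation of the 4-bit strings pairs them with the 2-bit strings.
bijections = [dict(zip(perm, two_bit)) for perm in permutations(four_bit)]
print(len(bijections))  # 24 == 4!
print(bijections[0])    # one such invertible mapping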
For the case of four 4-bit strings you have 16 ** 4 or 65536 cases; this would map onto eight 2-bit cells, so it would be a trivial problem. However, if you restate your problem as the bijective mapping of the space of all strings made of 4-bit bytes to the space of all strings made of 2-bit bytes, that is a different problem, and yes, that one has a solution.
A simple solution is to look at the mapping of each type of pattern into the space of infinite "odd" binary strings, where an odd string has its last one a finite distance away from the start. You start writing bits as they appear, keeping one flag. When you have written the last byte (either the 2-bit or the 4-bit kind, whichever set you are using), if the flag is set you write 1000...; if the flag is clear you write 000..., since there was already a one to serve as the last "1" in the expansion.
For the 2-bit set:
00 sets the flag
10 does nothing to the flag
the rest (01, 11) clear the flag
For the 4-bit set:
0000 sets the flag
1000 does nothing to the flag
the rest all contain at least one 1 and clear the flag
Converting 1000 0101 0111 0000 is a straight copy to 100001010111000010000...;
notice the tail 1000..., since the flag is set. If you do the reverse for the 2-bit set, the flag goes through many states, but the 1000... at the end is part of the flag, so in the 2-bit set you get 10 00 01 01 01 11 00 00; no big deal, 8 bytes.
But on converting 1000 0101 0111 1000 you get 1000010101111000...;
when looking at the 2-bit set you get 10 00 01 01 01 11 10,
which is one 2-bit unit shorter, for 7 bytes.
For this bijection from the 4-bit set to the 2-bit set there will always be either 2n or 2n-1 two-bit bytes, where n is the number of 4-bit bytes.
This method of mapping through the infinite odd strings works as a bijective transform of any string in an infinite set, with any number of bits per byte, even if the number of bits that make up a byte varies as a function of n.
