I have 4 binary bits
Bit 3 Bit 2 Bit 1 Bit 0
Normally the answer is simple: 2^4, or 16 different combinations, and it would look something like the following:
0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111
However, the LSB (Bit 0) changes state on every iteration.
I need an algorithm where the state of a bit only changes once through all the iterations; i.e., I need all my bits to act like the MSB (Bit 3).
How can I do this?
Edit
It seems that most people are converging on there being only 5 possible values. However, this assumes there is a fixed starting point and ending point for the value. That doesn't matter here, so I'm going to give a real-world scenario to better explain.
Suppose I have a digital alarm clock that gives me 4 outputs. Each output can be programmed to go on at a certain time and off at a certain time, and the outputs are programmed independently of each other; e.g., I can program output 1 to go on at 1 am and off at 3 am, while I program output 2 to go on at 7 pm and off at 2 am. There are no restrictions on how long each output can stay on.
Now I want to hook this alarm clock to a computer and get as close as possible to the current correct time; i.e., if the clock says the time is 2:15 pm, my computer knows the time is within the 12 pm to 6 pm range, for example. I want to be able to get the smallest possible range. What's the smallest possible range I can get?
There are 4 bits.
Each bit may change state only once.
For each new value, at least one of the bits must have changed state from the previous value.
Therefore, you can have at most 4 state changes, and at most 5 different values.
Example:
0000 -> 0001 -> 0011 -> 0111 -> 1111
Edit:
Very well, let's restate from what you mean rather than from what you say.
There are 4 bits.
Each bit may change state only twice (once from 0 to 1, and once from 1 to 0).
For each new value, at least one of the bits must have changed state from the previous value.
Therefore, you can have at most 8 state changes, and at most 8 different values (since the last state change necessarily brings all bits back to their initial state)
Example:
0000 -> 0001 -> 0011 -> 0111 -> 1111 -> 1110 -> 1100 -> 1000 -> 0000
So, by setting the outputs for 3 AM - 3 PM, 6 AM - 6 PM, 9 AM - 9 PM, and noon - midnight, you can determine which 3-hour period it is from the outputs. I'd suggest plugging wires into the visual output instead.
You want a Gray Code. Look about halfway down for "Constructing an n-bit gray code".
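For reference, the standard binary-reflected construction is g = n ^ (n >> 1); consecutive codes differ in exactly one bit (though note that over a full cycle each bit still flips more than once, so this doesn't satisfy the "each bit flips only once" reading). A minimal sketch in C:

    #include <stdio.h>

    int main(void)
    {
        /* Binary-reflected Gray code: g = n ^ (n >> 1).
           Consecutive values differ in exactly one bit. */
        for (unsigned n = 0; n < 16; n++) {
            unsigned g = n ^ (n >> 1);
            printf("%u%u%u%u\n",
                   (g >> 3) & 1, (g >> 2) & 1, (g >> 1) & 1, g & 1);
        }
        return 0;
    }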
I believe it is impossible to cycle through all possible bit patterns with such a restriction.
If you have an n-bit value, you can cycle through a total of (n+1) states before having to flip a bit you've already flipped.
For example, in a 3-bit example, if you start with 111, you get
111
110
100
000
And then you're forced to flip one you've already flipped to get a new state.
Based on your alarm clock example, I assume you need to finish on the combination you started on, and that each bit can be cycled on then off only once, e.g.
0000 -> 0001 -> 0011 -> 0111 -> 1111
-> 1110 -> 1100 -> 1000 -> 0000
The number of steps is twice the number of bits, so with 4 bits you could get the current time to within a 3 hour range.
You want each bit to change only once?
Like:
0000 -> 0001 -> 0011 -> 0111 -> 1111
In that case you can use a simple counter where the delta is multiplied by 2 (or shifted left) each iteration.
If Gamecat understood you correctly,
your bitmask values will be:
1 - 1 = 0 (0000)
2 - 1 = 1 (0001)
4 - 1 = 3 (0011)
8 - 1 = 7 (0111)
16 - 1 = 15 (1111)
etc.
2^i - 1
or, using shifts:
(1 << i) - 1 for i in 0..
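A minimal sketch of that counter in C, printing the five values:

    #include <stdio.h>

    int main(void)
    {
        /* (1 << i) - 1 for i = 0..4: each bit changes state exactly once. */
        for (int i = 0; i <= 4; i++) {
            unsigned v = (1u << i) - 1;
            printf("%u%u%u%u\n",
                   (v >> 3) & 1, (v >> 2) & 1, (v >> 1) & 1, v & 1);
            /* prints 0000, 0001, 0011, 0111, 1111 */
        }
        return 0;
    }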
"I need an algorithm where the state of a bit only changes once through all the iterations"
If the above statement is taken literally, then there are only five states across all the iterations, as explained in other posts.
If the question is "How many possible sequences can be generated?", then:
Is the first state always 0000?
If not, then you have 16 possible initial states.
Does order matter?
If yes, then you have 4! = 24 possible orders in which to flip the bits.
So, this gives a total of 16*24 = 384 possible sequences that can be generated.
Looking back at the original question, I think I understand what you mean:
simply, what is the smallest number of bits you can use to program a clock, based on the number of possible bit combinations?
The first question is how many combinations are required.
60 secs x 60 mins x 24 hrs = 86,400 (combinations required)
The next step is to work out how many bits are required to produce at least 86,400 combinations.
The calculation is the smallest n such that 2^n >= 86,400, i.e. n = ceil(log2(86,400)) = 17 bits (2^17 = 131,072).
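A quick sketch that confirms the count by repeated doubling:

    #include <stdio.h>

    int main(void)
    {
        /* Find the smallest n with 2^n >= 86400. */
        unsigned long combos = 1;
        int bits = 0;
        while (combos < 86400UL) {
            combos <<= 1;
            bits++;
        }
        printf("%d bits give %lu combinations\n", bits, combos);
        /* prints: 17 bits give 131072 combinations */
        return 0;
    }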
Here is an example of how you can ensure a bit is flipped only once. Not knowing all the parameters of your system, it is not easy to give an accurate example, but here is one anyway.
    char bits = 0x05;

    void flipp_bit(char bit_flip)
    {
        static char bits_flipped = 0;  /* records which bits have already been flipped */

        if ((bits_flipped & bit_flip) == 0)
        {
            bits_flipped |= bit_flip;  /* mark this bit as used */
            bits ^= bit_flip;          /* perform the one allowed flip */
        }
    }
Flipping with this function will only allow one flip on each bit.
I've got a question which I have solved, like many other similar ones which I have successfully completed as part of my assignment. I am having a little confusion with one of the questions, which is:
"The decimal number -256 is held in a 9-bit memory location. Represent this in sign and magnitude."
The answer that I got is 11 0000 0000.
How I got this is by doing the following:
We first show the binary form and then set the most significant bit (the bit on the far left; 0 represents a positive value and 1 represents a negative value). The sign determines whether it is a positive or negative value, and the magnitude is the value of the remaining bits.
Notice that I've added an extra bit to show the representation of -256 in sign and magnitude. This is simply because 9 bits are not enough to show whether 256 is negative or positive, as it is maxed out. The 9 bits give the magnitude, which is 256, and the 10th bit (on the far left) shows the sign, which is '1', indicating that it is a negative 256.
What I find confusing is that the decimal -256 was held in a 9-bit memory location and the result I got was 1 0000 0000; I have added an extra bit to show that the number is negative, which then represents it in 10 bits. I am having difficulty seeing how I can represent -256 in sign and magnitude using only 9 bits. It seems impossible to show in 9 bits, and I have therefore used 10 bits, but I am only allowed to use 9. Could someone help with how this could be achieved? Your help will be greatly appreciated; I am just a bit stuck with this. My tutors expect us to use the internet or our own knowledge and would not give us even a clue, so that's why I am here.
I know this is a bit late, but I wondered this too in my HW just now and looked it up.
The maximum magnitude of a sign-magnitude number, given w bits, is 2^(w-1) - 1.
The decimal equivalent of 100000000 is 256.
Given that the number of bits is 9, 2^(8)-1 = 255.
So it would be impossible to store 256 given 9 bits in sign magnitude.
It is impossible to represent -256 in sign-magnitude with 9 bits, simply because it is not possible to represent 256 in 8 bits (8 bits of precision affords 2^8 = 256 different possible values, so you can represent from 0 up to 255, but no further).
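A quick illustration of that limit in C (variable names are mine):

    #include <stdio.h>

    int main(void)
    {
        int w = 9;                              /* total bits, including the sign bit */
        int max_magnitude = (1 << (w - 1)) - 1; /* 2^(w-1) - 1 = 255 */
        printf("Largest magnitude in %d-bit sign-magnitude: %d\n",
               w, max_magnitude);
        return 0;
    }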
Could you please suggest an error detection scheme for detecting one possible bit flip in the first 32 bytes of a 33-byte message using no more than 8 bits of additional data?
Could Pearson hashing be a solution?
Detecting a single bit-flip in any message requires only one extra bit, independent of the length of the message: simply xor together all the bits in the message and tack that on the end. If any single bit flips, the parity bit at the end won't match up.
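For illustration, a small C function computing such a parity bit (the name and signature are mine):

    #include <stddef.h>

    /* Even-parity bit over a buffer: the XOR of all of its bits. */
    unsigned parity_bit(const unsigned char *msg, size_t len)
    {
        unsigned char acc = 0;
        for (size_t i = 0; i < len; i++)
            acc ^= msg[i];   /* fold all bytes together */
        acc ^= acc >> 4;     /* fold the remaining byte down to one bit */
        acc ^= acc >> 2;
        acc ^= acc >> 1;
        return acc & 1u;
    }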
If you're asking to detect which bit flipped, that can't be done, and a simple argument shows it: the extra eight bits can represent up to 256 classes of 32-byte messages, but the zero message and the 256 messages with a single 1 bit each must all be in different classes. Thus, there are 257 messages which must be distinctly classified, and only 256 classes.
You can detect one bit flip with just one extra bit in any length message (as stated by @Daniel Wagner). The parity bit can, simply put, indicate whether the total number of 1-bits is odd or even. Obviously, if the number of bits that are wrong is even, then the parity check will not catch it, so you cannot detect 2-bit errors.
Now, for a more accessible understanding of why you can't error-correct 32 bytes (256 bits) with just 8 bits, please read about the Hamming code (like used in ECC memory). Such a scheme uses special error-correcting parity bits (henceforth called "EC parity") that only encode the parity of a subset of the total number of bits. For every 2^m - 1 total bits, you need to use m EC bits. These represent each possible different mask following the pattern "x bits on, x bits off" where x is a power of 2. Thus, the larger the number of bits at once, the better the data/parity bit ratio you get. For example, 7 total bits would allow encoding only 4 data bits after losing 3 EC bits, but 31 total bits can encode 26 data bits after losing 5 EC bits.
Now, to really understand this it will probably help to work through an example. Consider the following sets of masks. The first two rows are to be read top down, indicating the bit number (the "Most Significant Bit" I've labeled MSB):
MSB LSB
| |
v v
33222222 22221111 11111100 0000000|0
10987654 32109876 54321098 7654321|0
-------- -------- -------- -------|-
1: 10101010 10101010 10101010 1010101|0
2: 11001100 11001100 11001100 1100110|0
3: 11110000 11110000 11110000 1111000|0
4: 11111111 00000000 11111111 0000000|0
5: 11111111 11111111 00000000 0000000|0
The first thing to notice is that the binary values for 0 to 31 are represented in each column going from right to left (reading the bits in rows 1 through 5). This means that each vertical column is different from each other one (the important part). I put a vertical extra line between bit numbers 0 and 1 for a particular reason: Column 0 is useless because it has no bits set in it.
To perform error-correcting, we will bitwise-AND the received data bits against each EC bit's predefined mask, then compare the resulting parity to the EC bit. For any calculated parities discovered to not match, find the column in which only those bits are set. For example, if error-correcting bits 1, 4, and 5 are wrong when calculated from the received data value, then column #25--containing 1s in only those masks--must be the incorrect bit and can be corrected by flipping it. If only a single error-correcting bit is wrong, then the error is in that error-correcting bit. Here's an analogy to help you understand why this works:
There are 32 identical boxes, with one containing a marble. Your task is to locate the marble using just an old-style scale (the kind with two balanced platforms to compare the weights of different objects) and you are only allowed 5 weighing attempts. The solution is fairly easy: you put 16 boxes on each side of the scale and the heavier side indicates which side the marble is on. Discarding the 16 boxes on the lighter side, you then weigh 8 and 8 boxes keeping the heavier, then 4 and 4, then 2 and 2, and finally locate the marble by comparing the weights of the last 2 boxes 1 to 1: the heaviest box contains the marble. You have completed the task in only 5 weighings of 32, 16, 8, 4, and 2 boxes.
Similarly, our bit patterns have divided up the boxes in 5 different groups. Going backwards, the fifth EC bit determines whether an error is on the left side or the right side. In our scenario with bit #25, it is wrong, so we know that the error bit is on the left side of the group (bits 16-31). In our next mask for EC bit #4 (still stepping backward), we only consider bits 16-31, and we find that the "heavier" side is the left one again, so we have narrowed down the bits 24-31. Following the decision tree downward and cutting the number of possible columns in half each time, by the time we reach EC bit 1 there is only 1 possible bit left--our "marble in a box".
Note: The analogy is useful, though not perfect: 1-bits are not represented by marbles--the erroring bit location is represented by the marble.
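To make the mask-and-parity procedure concrete, here is a small C sketch (my own illustration; it assumes the five EC parities are stored separately, not yet interleaved into the data as described next):

    #include <stdint.h>

    /* Parity (XOR of all bits) of a 32-bit word. */
    static unsigned parity32(uint32_t x)
    {
        x ^= x >> 16; x ^= x >> 8; x ^= x >> 4; x ^= x >> 2; x ^= x >> 1;
        return x & 1u;
    }

    /* Compare each stored EC bit with the parity of (word & mask).
       The mismatches, read as a binary number, name the bad column. */
    unsigned syndrome(uint32_t word, const unsigned ec[5])
    {
        static const uint32_t mask[5] = {
            0xAAAAAAAA,  /* row 1: 10101010... */
            0xCCCCCCCC,  /* row 2: 11001100... */
            0xF0F0F0F0,  /* row 3: 11110000... */
            0xFF00FF00,  /* row 4: 11111111 00000000... */
            0xFFFF0000   /* row 5: upper half set */
        };
        unsigned s = 0;
        for (int i = 0; i < 5; i++)
            if (parity32(word & mask[i]) != ec[i])
                s |= 1u << i;   /* EC bit i+1 mismatched */
        return s;               /* e.g. rows 1, 4, 5 wrong -> s = 25 */
    }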
Now, some playing around with these masks and thinking how to arrange things will reveal that there is a problem: If we try to make all 31 bits data bits, then we need 5 more bits for EC. But how, then, will we tell if the EC bits themselves are wrong? Just a single EC bit wrong will incorrectly tell us that some data bit needs correction, and we'll wrongly flip that data bit. The EC bits have to somehow encode for themselves! The solution is to position the parity bits inside of the data, in columns from the bit patterns above where only one bit is set. This way, any data bit being wrong will trigger two EC bits to be wrong, making it so that if only one EC bit is wrong, we know it is wrong itself instead of it signifying a data bit is wrong. The columns that satisfy the one-bit condition are 1, 2, 4, 8, and 16. The data bits will be interleaved between these starting at position 2. (Remember, we are not using position 0 as it would never provide any information--none of our EC bits would be set at all).
Finally, adding one more bit for overall parity will allow detecting 2-bit errors and reliably correcting 1-bit errors, as we can then compare the EC bits to it: if the EC bits say something is wrong, but the parity bit says otherwise, we know there are 2 bits wrong and cannot perform correction. We can use the discarded bit #0 as our parity bit! In fact, now we are encoding the following pattern:
0: 11111111 11111111 11111111 11111111
This gives us a final total of 6 Error-Checking and Correcting (ECC) bits. Extending the scheme of using different masks indefinitely looks like this:
32 bits - 6 ECC bits = 26 data
64 bits - 7 ECC bits = 57 data
128 bits - 8 ECC bits = 120 data
256 bits - 9 ECC bits = 247 data
512 bits - 10 ECC bits = 502 data
Now, if we are sure that we only will get a 1-bit error, we can dispense with the #0 parity bit, so we have the following:
31 bits - 5 ECC bits = 26 data
63 bits - 6 ECC bits = 57 data
127 bits - 7 ECC bits = 120 data
255 bits - 8 ECC bits = 247 data
511 bits - 9 ECC bits = 502 data
This is no change because we don't get any more data bits. Oops! 32 bytes (256 bits) as you requested cannot be error-corrected with a single byte, even if we know we can have only a 1-bit error at worst, and we know the ECC bits will be correct (allowing us to move them out of the data region and use them all for data). We need TWO more bits than we have--one must slide up to the next range of 512 bits, then leave out 246 data bits to get our 256 data bits. So that's one more ECC bit AND one more data bit (as we only have 255, exactly what Daniel told you).
Summary: You need 33 bytes + 1 bit to detect which bit flipped in the first 32 bytes.
Note: if you are going to send 64 bytes, then you're under the 32:1 ratio, as you can error-correct that in just 10 bits. But the catch is that in real-world applications, the "frame size" of your ECC can't keep going up indefinitely, for a few reasons: 1) The number of bits being worked with at once may be much smaller than the frame size, leading to gross inefficiencies (think ECC RAM). 2) The chance of being able to accurately correct a bit gets less and less, since the larger the frame, the greater the chance it will have more errors, and 2 errors defeats error-correction ability, while 3 or more can defeat even error-detection ability. 3) Once an error is detected, the larger the frame size, the larger the size of the corrupted piece that must be retransmitted.
If you need to use a whole byte instead of a bit, and you only need to detect errors, then the standard solution is to use a cyclic redundancy check (CRC). There are several well-known 8-bit CRCs to choose from.
A typical fast implementation of a CRC uses a table with 256 entries to handle a byte of the message at a time. For the case of an 8 bit CRC this is a special case of Pearson's algorithm.
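For illustration, here is a table-driven CRC-8 in C using the common polynomial 0x07 (the polynomial choice and names are mine; any of the well-known 8-bit CRCs works the same way):

    #include <stdint.h>
    #include <stddef.h>

    static uint8_t crc8_table[256];

    /* Build the 256-entry table: the CRC of each possible byte value. */
    static void crc8_init(void)
    {
        for (int i = 0; i < 256; i++) {
            uint8_t crc = (uint8_t)i;
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                                   : (uint8_t)(crc << 1);
            crc8_table[i] = crc;
        }
    }

    /* Process the message one byte at a time via the table. */
    uint8_t crc8(const uint8_t *msg, size_t len)
    {
        uint8_t crc = 0;
        for (size_t i = 0; i < len; i++)
            crc = crc8_table[crc ^ msg[i]];
        return crc;
    }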
I'm a Computer Science major, interested in how assembly languages handle integer division. It seems that simply adding repeatedly up to the numerator, while giving both the quotient and the mod, is way too impractical, so I came up with another way to divide using bit shifting, subtracting, and 2 lookup tables.
Basically, the function takes the denominator and makes "blocks" based on the highest power of 2. So dividing by 15 makes binary blocks of 4 bits, dividing by 5 makes binary blocks of 3 bits, etc. Then generate the first 2^block-size multiples of the denominator. For each multiple, write the value of the bits AFTER the first block into the lookup table, keyed by the value of the first block.
Example: Multiples of 5 in binary - block size 3 (octal)
000 000 **101** - 5 maps to 0
000 001 **010** - 2 maps to 1
000 001 **111** - 7 maps to 1
000 010 **100** - 4 maps to 2
000 011 **001** - 1 maps to 3
000 011 **110** - 6 maps to 3
000 100 **011** - 3 maps to 4
000 101 **000** - 0 maps to 5
So the actual procedure involves getting the first block, bit-shifting past the first block, and subtracting the value that the block maps to. If the resulting number comes out to 0, then it's perfectly divisible, and if the value becomes negative, it's not.
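For concreteness, here is that divisibility test as a small C sketch for denominator 5 with block size 3, using the table above (the names map5 and divisible_by_5 are just illustrative):

    #include <stdio.h>

    /* map5[b] = the bits above the first block, for the multiple of 5
       whose low 3 bits equal b (taken from the table above). */
    static const int map5[8] = { 5, 3, 1, 4, 2, 0, 3, 1 };

    int divisible_by_5(int n)
    {
        while (n > 0) {
            int block = n & 7;  /* grab the low 3-bit block */
            n >>= 3;            /* shift past the first block */
            n -= map5[block];   /* subtract the mapped value */
        }
        return n == 0;          /* 0: divisible; negative: not */
    }

    int main(void)
    {
        for (int i = 1; i <= 20; i++)
            printf("%2d -> %s\n", i, divisible_by_5(i) ? "divisible" : "not");
        return 0;
    }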
If you add another enumeration lookup table, where you map the values to a counter as they come in, you can calculate the result of the division!
Example: Multiples of 5 again
5 maps to 1
2 maps to 2
7 maps to 3
4 maps to 4
1 maps to 5
6 maps to 6
3 maps to 7
0 maps to 8
Then all that's left is mapping every block to the counter-table, and you have your answer.
There are a few problems with this method.
If the number isn't perfectly divisible, then the function returns junk.
For large integer values this won't work, because a 3-bit block size doesn't divide evenly into a 32-bit or 64-bit integer, so the last block gets truncated.
It's about 100 times slower than the standard division in C.
If the denominator shares a factor with 2 (i.e., is even), then your blocks must map to multiple values, and you need even more tables. This can be solved with prime factorization, but all the methods I've read about for easy/quick prime factorization involve dividing, defeating the purpose of this.
So I have 2 questions: First, is there an algorithm similar to this out there already? I've looked around, and I can't seem to find anything like it. Second, how do actual assembly languages handle integer division?
Sorry if there are any formatting mistakes; this is my first time posting to Stack Overflow.
Sorry I answered so late. OK, first, regarding the commenters on your question: they think you are trying to do what the assembly mnemonic DIV or IDIV achieves by using different instructions in assembly. To me it seems you want to know how the op-codes that are selected by DIV and IDIV achieve division in hardware. To my knowledge, Intel uses the SRT algorithm (which uses a lookup table) and AMD uses the Goldschmidt algorithm. I think what you are doing is similar to SRT. You can take a look at both of them here:
http://en.wikipedia.org/wiki/Division_%28digital%29
One of my projects involves drawing text. The project is based around a mid-level 16-bit microcontroller, a dsPIC33FJ128GP802. It is capable of 40 MIPS, but about 92% of that is reserved for background processing (outputting an on screen display), so on average it gets 3 MIPS to render stuff. The processor has hardware multiply, assisted divide (18 cycles) and a full 16-bit barrel shifter.
The original method was simple. It just called the set-pixel routine for each pixel that needed to be written; however, this is quite slow: each pixel write requires an address decode, bit mask, and write to memory - on average, around 60 cycles per pixel. Also, two bits need to be written for each pixel to be set: one in the mask array (which determines if the pixel is visible or not), and one in the level array (which determines if the pixel is white or black). For a single character, 8x14 pixels, this means 13,440 cycles plus overhead. Which is a lot, given the lack of much processing power.
Because of this, I came up with an algorithm for drawing horizontal lines. It could efficiently write up to 16 pixels in about 20 cycles, which is a 60-fold improvement on setting pixels individually; it could also handle lines which did not lie on word boundaries (using some clever bit math), and even lines which lie entirely inside one word. (Note - one word is 16 bits, and the video memory is stored as 4 arrays of 3,072 words: front and back buffers.) I'm not certain if the algorithm is original - I doubt it - but for those curious, I've documented it here.
Now I'm racking my brains out trying to figure out a way to set multiple distinct pixels across multiple words. Ideally, it would work like this - we want to write this word starting at bit 4 (bits counted from zero) of the first word and allow it to overflow into the next:
Memory before : 0000 0000 0000 0000 0000 0000 0000 0000
Word to write : 1111 1010 1111 1111
Memory after : 0000 0111 1101 0111 1111 1000 0000 0000
If anyone knows of any such algorithm or has done something in the past similar to this it would be great to know how you did it. I'm having a major brain block right now.
Can you rShift the word 5 bits and OR it into the first WORD, then lShift it 11 and OR it into the second WORD, or am I missing something?
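Something like this sketch (the names are mine; note it only ORs bits in, so for a general blit you would clear the destination field first):

    #include <stdint.h>

    /* Write a 16-bit pattern into video memory starting 'offset' bits
       into mem[i], letting the low bits spill into mem[i + 1]. */
    void write_pattern(uint16_t *mem, int i, int offset, uint16_t pattern)
    {
        if (offset == 0) {
            mem[i] |= pattern;
        } else {
            mem[i]     |= pattern >> offset;                     /* high part */
            mem[i + 1] |= (uint16_t)(pattern << (16 - offset));  /* spill-over */
        }
    }

With the example above (five leading zero bits), offset = 5 and pattern = 1111 1010 1111 1111 produces 0000 0111 1101 0111 in the first word and 1111 1000 0000 0000 in the second.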
In the days before sophisticated graphics accelerators, people hid whatever they could implement behind an interface such as bitblt. There is a quick write-up of one example of this in passing at http://swtch.com/~rsc/talks/drawtalk.pdf Some of these worked by auto-generating and then executing machine code. The approach from that paper is described as "Effectively, the draw implementation is the above with enough conditionals and function calls pushed outside enough loops to make the overhead bearable." One version I saw was quite long, with a variety of common special cases extracted as "fast paths".
I seek an algorithm that will let me represent an incoming sequence of bits as letters ('a'..'z') in a minimal manner, such that the stream of bits can be regenerated from the letters without ever holding the entire sequence in memory.
That is, given an external bit source (each read returns a practically random bit), and user input of a number of bits, I would like to print out the minimal number of characters that can represent those bits.
Ideally there should be a parameterization - how much memory versus maximum bits before some waste is necessary.
Efficiency Goal - The same number of characters as the base-26 representation of the bits.
Non-solutions:
If sufficient storage was present, store the entire sequence and use a big-integer MOD 26 operation.
Convert every 9 bits to 2 characters - This seems suboptimal, wasting 25% of information capacity of the letters output.
If you assign a different number of bits per letter, you should be able to exactly encode the bits in the twenty-six letters allowed without wasting any bits. (This is a lot like a Huffman code, only with a pre-built balanced tree.)
To encode bits into letters: Accumulate bits until you match exactly one of the bit codes in the lookup table. Output that letter, clear the bit buffer, and keep going.
To decode letters into bits: For each letter, output the bit sequence in the table.
Implementing in code is left as an exercise to the reader. (Or to me, if I get bored later.)
a 0000
b 0001
c 0010
d 0011
e 0100
f 0101
g 01100
h 01101
i 01110
j 01111
k 10000
l 10001
m 10010
n 10011
o 10100
p 10101
q 10110
r 10111
s 11000
t 11001
u 11010
v 11011
w 11100
x 11101
y 11110
z 11111
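A sketch of the encoder loop in C (the names are mine; decoding just walks the same table in the other direction):

    #include <stdio.h>

    /* Match an accumulated bit pattern against the table above:
       'a'..'f' are the 4-bit codes 0000..0101,
       'g'..'z' are the 5-bit codes 01100..11111. */
    static int code_to_letter(unsigned bits, int nbits)
    {
        if (nbits == 4 && bits <= 5)
            return 'a' + (int)bits;
        if (nbits == 5 && bits >= 12)
            return 'g' + (int)(bits - 12);
        return -1;  /* no code matched yet; keep accumulating */
    }

    void encode(const char *bitstring)  /* e.g. "0000" prints 'a' */
    {
        unsigned buf = 0;
        int n = 0;
        for (const char *p = bitstring; *p; p++) {
            buf = (buf << 1) | (unsigned)(*p - '0');
            n++;
            int c = code_to_letter(buf, n);
            if (c >= 0) {   /* matched exactly one code: emit and reset */
                putchar(c);
                buf = 0;
                n = 0;
            }
        }
    }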
Convert each block of 47 bits to a base 26 number of 10 digits. This gives you more than 99.99% efficiency.
This method, as well as others like Huffman, needs a padding mechanism to support variable-length input. This introduces some inefficiency which is less significant with longer inputs.
At the end of the bit stream, append an extra 1 bit. This must be done in all cases, even when the length of the bit stream is a multiple of 47. Any high-order letters of "zero" value can be skipped in the last block of encoded output.
When decoding the letters, a truncated final block can be filled out with "zero" letters and converted to a 47-bit base 2 representation. The final 1 bit is not data, but marks the end of the bit stream.
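A sketch of the per-block conversion in C (illustrative names; 26^10 > 2^47, so every 47-bit block fits in 10 letters):

    #include <stdint.h>
    #include <stdio.h>

    /* Convert one 47-bit block into 10 base-26 letters. */
    void block_to_letters(uint64_t block, char out[11])
    {
        for (int i = 9; i >= 0; i--) {  /* least significant digit goes last */
            out[i] = (char)('a' + (int)(block % 26));
            block /= 26;
        }
        out[10] = '\0';
    }

    int main(void)
    {
        char s[11];
        block_to_letters((1ULL << 47) - 1, s);  /* largest 47-bit value */
        printf("%s\n", s);
        return 0;
    }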
Could Huffman coding be what you're looking for? It's a compression algorithm, which pretty much represents any information with a minimum of wasted bits.
Zero waste would be log_2(26) bits per letter. As pointed out earlier, you can get to 4.7 by reading 47 bits and converting them to 10 letters. However, you can get to 4.67 by converting every 14 bits into 3 characters. This has the advantage that it fits into an integer. If you have storage space and run time is important, you can create a lookup table with 16,384 entries mapping each possible 14-bit value to 3 letters. Otherwise, you can do mod and div operations to compute the 3 letters.
number of letters number of bits bits/letter
1 4 4
2 9 4.5
3 14 4.67
4 18 4.5
5 23 4.6
6 28 4.67
7 32 4.57
8 37 4.63
9 42 4.67
10 47 4.7
Any solution you use is going to be space-inefficient, because 26 is not a power of 2. As far as an algorithm goes, I'd rather use a lookup table than an on-the-fly calculation for each series of 9 bits. Your lookup table would be 512 entries long.
If you want the binary footprint of each letter to have the same size, the optimal solution would be given by arithmetic encoding. However, it will not reach your goal of a mean representation of 4.5 bits/char. Given 26 different characters (not including space etc.), 4.7 would be the best you can reach without using variable-length encoding (Huffman, for instance; see Jaegers's answer) or other compression algorithms.
A suboptimal, although simpler, solution could be to find a feasible number of characters to fit into a big integer. For instance, if you form a 32-bit integer out of every 6-character chunk (which is possible as 26^6 < 2^32), you use 5.33 bits/char. You can actually even fit 13 letters into a 64-bit integer (4.92 bits/char). This is quite close to the optimal solution, and still rather easy to implement. Using bigger ints than 64 bits can be tricky due to missing native support in many programming languages.
If you want even better compression rates for text, you should definitely also look into dictionary-based compression algorithms, such as LZW or Deflate.