Satellite data processing error - time

A new satellite data processing center has just been completed and is ready for initial testing using live data sent down from an orbiting satellite. As the very first messages are displayed on the screen, you notice that many of the data values are wildly out of range.
For example, the terminal screen shows something defined as "delta time" which appears to be outside the expected range of [0.01 to 10,000.00 seconds]; the value displayed (as a double) is [-4.12318024e-028 seconds]. After further investigation into the raw byte-based data stream, you find that the original data sent down from the satellite for this double word is [0xC0 0x83 0xA1 0xCA 0x66 0x55 0x40 0xBA]. On one of the old terminals, this data is displayed correctly and is within the expected range.
a. [5] What caused this problem?
b. [5] If this is the real problem, what should the actual value be?

Ah, Failure Mode Analysis. Very important indeed!
Well, the other terminal shows the data correctly, so there is an incompatibility between the new terminal and the data format.
Big endian vs. little endian, perhaps? I am expecting the "old" terminal to be little endian because it may have been coded in C. Comparing the raw stream with a byte order that decodes to a sensible value shows that the bytes within each 16-bit word have been swapped. Now you can interpret the data.
Here is some code
#include <stdio.h>

union myW {
    double x;
    // Received as: [0xC0 0x83 0xA1 0xCA 0x66 0x55 0x40 0xBA]
    // Same stream with the bytes of each 16-bit word swapped (little-endian layout):
    unsigned char d[8] = {0x83, 0xC0, 0xCA, 0xA1, 0x55, 0x66, 0xBA, 0x40};
};

union myBad {
    double x;
    // Received as: [0xC0 0x83 0xA1 0xCA 0x66 0x55 0x40 0xBA]
    unsigned char d[8] = {0xC0, 0x83, 0xA1, 0xCA, 0x66, 0x55, 0x40, 0xBA};
};

int main(void)
{
    myW value;
    value.x = 1.0; // check what a reasonable number looks like in memory
    printf("Something reasonable:\n");
    for (int i = 0; i < 8; i++)
    {
        printf("%u ", value.d[i]);
    }

    myW received;
    printf("\nWhat should have been displayed:\n");
    for (int i = 0; i < 8; i++)
    {
        printf("%u ", received.d[i]);
    }
    printf("\n%f\n", received.x);

    myBad bad;
    printf("\nBad output as:\n");
    for (int i = 0; i < 8; i++)
    {
        printf("%u ", bad.d[i]);
    }
    printf("\n%0.30f\n", bad.x);
}
Output:
Something reasonable:
0 0 0 0 0 0 240 63
What should have been displayed:
131 192 202 161 85 102 186 64
6758.334500
Bad output as:
192 131 161 202 102 85 64 186
-0.000000000000000000000000000412
Compiled with g++
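For part (b), the same result can be reached without the union trick. A minimal sketch, assuming a little-endian host like the test above (decode_delta_time is just an illustrative name, not the ground station's real decoder): swap the bytes within each 16-bit word, then copy the result into a double.
#include <cstdio>
#include <cstring>
#include <cstdint>

// Sketch of a decoder: swap the bytes inside each 16-bit word of the raw
// stream, then reinterpret the 8 bytes as a double on a little-endian host.
double decode_delta_time(const uint8_t raw[8])
{
    uint8_t fixed[8];
    for (int i = 0; i < 8; i += 2) {
        fixed[i]     = raw[i + 1];    // swap bytes within each 16-bit word
        fixed[i + 1] = raw[i];
    }
    double d;
    std::memcpy(&d, fixed, sizeof d); // memcpy avoids strict-aliasing issues
    return d;
}

int main()
{
    const uint8_t raw[8] = {0xC0, 0x83, 0xA1, 0xCA, 0x66, 0x55, 0x40, 0xBA};
    std::printf("%f\n", decode_delta_time(raw)); // expected: 6758.334500
}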

Related

C getting a raw keypress with no stdlib

I am working on a very basic operating system as a learning experience, and I am trying to start with key presses. I am making a freestanding executable, so there is no standard library. How would I go about taking input from a keyboard? I have figured out how to print to the screen through video memory.
/*
 * kernel.c
 */
void cls(int line) { // clear the screen
    char *vidptr = (char*) 0xb8000;
    /* 25 lines each of 80 columns; each element takes 2 bytes */
    unsigned int x = 0;
    while (x < 80 * 25 * 2) {
        // blank character
        vidptr[x] = ' ';
        x += 1;
        // attribute byte - light grey on black screen
        vidptr[x] = 0x07;
        x += 1;
    }
    line = 0;
}

void printf(const char *str, int line, char attr) { // write a string to video memory
    char *vidptr = (char*) 0xb8000;
    unsigned int y = 0, x = 0;
    while (str[y] != '\0') {
        // the character's ascii
        vidptr[x] = str[y];
        x++;
        // attribute byte for that character - black bg, requested fg
        vidptr[x] = attr;
        x++;
        y++;
    }
}

void kmain(void) {
    unsigned int line = 0;
    cls(line);
    printf("Testing the Kernel", line, 0x0a);
}
and my assembly:
;; entry point
bits 32 ; nasm directive - 32 bit
global entry
extern _kmain ; kmain is defined in the c file
section .text
entry: jmp start
;multiboot spec
align 4
dd 0x1BADB002 ; black magic
dd 0x00 ; flags
dd -(0x1BADB002 + 0x00) ; checksum. m+f+c should be zero
start:
cli ; block interrupts
mov esp, stack_space ; set stack pointer
call _kmain
hlt ; halt the CPU
section .bss
resb 8192 ; 8KB for stack
stack_space:
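The actual question (reading a key press with no standard library) is not answered by the code above. A minimal polling sketch, assuming an x86 target with a legacy PS/2 keyboard controller (data port 0x60, status port 0x64) and GCC-style inline assembly; a real kernel would normally hook IRQ1 instead of polling, and turning scancodes into characters still needs a scancode lookup table:
/* keyboard.c - polling sketch, not production code */
static unsigned char inb(unsigned short port) {
    unsigned char value;
    __asm__ __volatile__("inb %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}

/* Wait until the controller's output buffer is full, then read one scancode. */
unsigned char read_scancode(void) {
    while ((inb(0x64) & 0x01) == 0)   /* status port bit 0 = output buffer full */
        ;
    return inb(0x60);                 /* raw scancode; bit 7 set means key release */
}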

Floats in visual studio

Consider a single-precision floating-point number system conforming to the IEEE 754 standard. In Visual Studio, the floating-point switch was set to Strict.
#include <iostream>

struct FP {
    unsigned char a : 8;
    unsigned char b : 8;
    unsigned char c : 8;
    unsigned char d : 8;
} fpData, *fp = &fpData; // fp needs to point at real storage before it is dereferenced

int main() {
    fp->a = 63;
    fp->b = 128;
    fp->c = 0;
    fp->d = 1;
    std::cout << "raw float = " << *reinterpret_cast<float*>(fp) << "\n";
}
The mathematical value according to the standard is 1.00000011920928955078125.
What Visual Studio prints is raw float = 2.36018991e-38. Why?
Assume the sign bit is 0 and the exponent field is 0111 1111.
In the remaining 23 bits, assume the two least significant bits are 01 or 10, which gives the mathematical values number1 = 1.00000011920928955078125 and number2 = 1.0000002384185791015625 respectively. The midpoint is number3 = 1.000000178813934326171875. So all values between number1 and number3 should be captured by the encoding with 01 in the two least significant bits, and values between number3 and number2 by the encoding with 10. But Visual Studio shows 1.0000001788139343 (which actually falls between number1 and number3) and greater values for the encoding with 10 in the least significant bits. So what am I missing?
If you take a look at https://www.h-schmidt.net/FloatConverter/IEEE754.html
you can see that the binary representation of 2.36018991E-38 is
00000001 00000000 10000000 00111111
and that binary value is exactly your struct's bytes read in reverse order: on a little-endian machine the first member, a, ends up as the least significant byte of the float.
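In other words, the struct member at the lowest address becomes the least significant byte of the float, so the bytes have to be stored in reverse order to get 0x3F800001. A small sketch of that (an assumption-laden demo, not the original code; it assumes IEEE 754 single precision and a little-endian host):
#include <cstdio>
#include <cstring>
#include <cstdint>

int main()
{
    // To get 0x3F800001 (= 1.00000011920928955...), the byte at the lowest
    // address must be the least significant one, i.e. the reverse of the
    // struct in the question. memcpy avoids aliasing problems.
    uint8_t bytes[4] = {0x01, 0x00, 0x80, 0x3F}; // LSB first
    float f;
    std::memcpy(&f, bytes, sizeof f);
    std::printf("%.9g\n", f); // prints 1.00000012
}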

how are multiple chunks of 8-bit words converted to a single number?

OK, first of all, I know that an 8-bit computer can only handle 8-bit numbers natively, nothing wider than 8 bits, but I know that it is still possible to represent a 16-bit number, or even a 32-, 64-, or 128-bit number, by allocating more memory in RAM.
But for the sake of simplicity, let's just use a 16-bit number as an example.
Let's say we have a 16-bit number in RAM like this:
12 34 <-- That's Hexadecimal btw
Let's also write it in binary just in case yall prefer binary form:
00010010 00110100 <-- Binary
&
4660 in decimal
Now, we know that the computer can't understand this big number (4660) as one single number, because the computer can only understand 8-bit numbers, which only go up to 255. So the byte on the right would stay as it is:
00110100 <-- 52 in decimal
but the left byte:
00010010 <-- would be 18 if it was the byte on the right,
but since it is on the left, it means that it's
4608
So my question is: how does the computer read the second byte as 4608 if it can only understand numbers up to 255, and after that, how does it interpret those two bytes as a single number (4660)?
Thanks, if you are confused feel free to ask me down in the comments. I made it as clear as possible.
Well, this is more of a programming question than a HW architecture one, as the CPU only does 8-bit operations in your test case and has no knowledge of 16-bit values. Your example is 16-bit arithmetic on an 8-bit ALU, and it is usually done by splitting the number into high and low halves (and joining them later). That can be done in several ways; for example, here are a few (using C++):
transfer
const int _h=0; // MSB location
const int _l=1; // LSB location
BYTE h,l; // 8 bit halves
WORD hl; // 16 bit value
h=((BYTE*)(&hl))[_h];
l=((BYTE*)(&hl))[_l];
// here do your 8bit stuff on h,l
((BYTE*)(&hl))[_h]=h;
((BYTE*)(&hl))[_l]=l;
You need to copy from/to the 8-bit/16-bit "register" copies, which is slow but can sometimes make things easier.
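A portable alternative (a sketch, not from the original answer) splits and joins with shifts and masks instead of aliasing pointers, so it does not depend on which index holds the MSB:
#include <cstdint>
#include <cstdio>

// Split/join with shifts and masks: no aliasing, no endianness assumptions.
static void split16(uint16_t hl, uint8_t &h, uint8_t &l)
{
    h = (uint8_t)(hl >> 8);     // most significant byte
    l = (uint8_t)(hl & 0xFF);   // least significant byte
}

static uint16_t join16(uint8_t h, uint8_t l)
{
    return (uint16_t)((h << 8) | l);
}

int main()
{
    uint8_t h, l;
    split16(0x1234, h, l);                              // h = 0x12, l = 0x34
    std::printf("%04X\n", (unsigned)join16(h, l));      // prints 1234
}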
pointers
const int _h=0; // MSB location
const int _l=1; // LSB location
WORD hl; // 16 bit value
BYTE *h=((BYTE*)(&hl))+_h;
BYTE *l=((BYTE*)(&hl))+_l;
// here do your 8bit stuff on *h,*l or h[0],l[0]
You do not need to copy anything; just use pointer access *h,*l instead of h,l. The pointer initialization is done only once.
union
const int _h=0; // MSB location
const int _l=1; // LSB location
union reg16
{
WORD dw; // 16 bit value
BYTE db[2]; // 8 bit values
} a;
// here do your 8bit stuff on a.db[_h],a.db[_l]
This is the same as #2 but in more manageable form
CPU 8/16 bit registers
Even 8-bit CPUs usually have 16-bit registers that are accessible by halves or as a whole. For example, on the Z80 you have AF, BC, DE, HL, PC, SP, most of which are also directly accessible through their half registers. So there are instructions working with HL and also instructions working with H and L separately.
On x86 it is the same for example:
mov AX,1234h
Is the same (apart from timing and possibly code length) as:
mov AH,12h
mov AL,34h
Well, that is conversion between 8 and 16 bit in a nutshell, but I assume you are asking more about how the operations are done. That is done with the Carry flag (which is sadly missing from most languages higher-level than assembler). For example, 16-bit addition on an 8-bit ALU (x86 architecture) is done like this:
// ax=ax+bx
add al,bl
adc ah,bh
So first you add the lowest BYTE and then the highest BYTE plus the Carry. For more info see:
Cant make value propagate through carry
For more info about how to implement other operations, see any implementation of bignum arithmetic.
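As a plain C++ illustration of the same add/adc pattern (a sketch, not part of the original answer): add the low bytes, pull out the carry, then fold it into the high-byte addition.
#include <cstdint>
#include <cstdio>

int main()
{
    // 16-bit addition built from two 8-bit additions with an explicit carry.
    uint8_t al = 0x34, ah = 0x12;               // ax = 0x1234
    uint8_t bl = 0xCD, bh = 0x00;               // bx = 0x00CD
    uint16_t low = (uint16_t)al + bl;           // low-byte sum, may exceed 255
    uint8_t carry = (uint8_t)(low >> 8);        // carry out of the low byte
    uint8_t rl = (uint8_t)low;
    uint8_t rh = (uint8_t)(ah + bh + carry);    // propagate carry into high byte
    std::printf("%02X%02X\n", (unsigned)rh, (unsigned)rl); // prints 1301
}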
[Edit1]
Here is a small C++ example of how to print a 16-bit number with only 8-bit arithmetic. You can use an 8-bit ALU as a building block to make N*8-bit operations in the same way as I did the 16-bit operations ...
//---------------------------------------------------------------------------
// unsigned 8 bit ALU in C++
//---------------------------------------------------------------------------
// BYTE/WORD/DWORD are Windows/VCL types; elsewhere you can use for example:
// typedef unsigned char BYTE; typedef unsigned short WORD; typedef unsigned int DWORD;
BYTE cy; // carry flag cy = { 0,1 }
void inc(BYTE &a); // a++
void dec(BYTE &a); // a--
BYTE add(BYTE a,BYTE b); // = a+b
BYTE adc(BYTE a,BYTE b); // = a+b+cy
BYTE sub(BYTE a,BYTE b); // = a-b
BYTE sbc(BYTE a,BYTE b); // = a-b-cy
void mul(BYTE &h,BYTE &l,BYTE a,BYTE b); // (h,l) = a*b
void div(BYTE &h,BYTE &l,BYTE &r,BYTE ah,BYTE al,BYTE b); // (h,l) = (ah,al)/b ; r = (ah,al)%b
//---------------------------------------------------------------------------
void inc(BYTE &a) { if (a==0xFF) cy=1; else cy=0; a++; }
void dec(BYTE &a) { if (a==0x00) cy=1; else cy=0; a--; }
BYTE add(BYTE a,BYTE b)
{
BYTE c=a+b;
cy=DWORD(((a &1)+(b &1) )>>1);
cy=DWORD(((a>>1)+(b>>1)+cy)>>7);
return c;
}
BYTE adc(BYTE a,BYTE b)
{
BYTE c=a+b+cy;
cy=DWORD(((a &1)+(b &1)+cy)>>1);
cy=DWORD(((a>>1)+(b>>1)+cy)>>7);
return c;
}
BYTE sub(BYTE a,BYTE b)
{
BYTE c=a-b;
if (a<b) cy=1; else cy=0;
return c;
}
BYTE sbc(BYTE a,BYTE b)
{
BYTE c=a-b-cy;
if (cy) { if (a<=b) cy=1; else cy=0; }
else { if (a< b) cy=1; else cy=0; }
return c;
}
void mul(BYTE &h,BYTE &l,BYTE a,BYTE b)
{
BYTE ah,al;
h=0; l=0; ah=0; al=a;
if ((a==0)||(b==0)) return;
// long binary multiplication
for (;b;b>>=1)
{
if (BYTE(b&1))
{
l=add(l,al); // (h,l)+=(ah,al)
h=adc(h,ah);
}
al=add(al,al); // (ah,al)<<=1
ah=adc(ah,ah);
}
}
void div(BYTE &ch,BYTE &cl,BYTE &r,BYTE ah,BYTE al,BYTE b)
{
BYTE bh,bl,sh,dh,dl,h,l;
// init
bh=0; bl=b; sh=0; // (bh,bl) = b<<sh so it is >= (ah,al) without overflow
ch=0; cl=0; r=0; // results = 0
dh=0; dl=1; // (dh,dl) = 1<<sh
if (!b) return; // division by zero error
if ((!ah)&&(!al)) return; // division of zero
for (;bh<128;)
{
if (( ah)&&(bh>=ah)) break;
if ((!ah)&&(bl>=al)) break;
bl=add(bl,bl);
bh=adc(bh,bh);
dl=add(dl,dl);
dh=adc(dh,dh);
sh++;
}
// long binary division
for (;;)
{
l=sub(al,bl); // (h,l) = (ah,al)-(bh,bl)
h=sbc(ah,bh);
if (cy==0) // no overflow
{
al=l; ah=h;
cl=add(cl,dl); // increment result by (dh,dl)
ch=adc(ch,dh);
}
else{ // overflow -> shift right
if (sh==0) break;
sh--;
bl>>=1; // (bh,bl) >>= 1
if (BYTE(bh&1)) bl|=128;
bh>>=1;
dl>>=1; // (dh,dl) >>= 1
if (BYTE(dh&1)) dl|=128;
dh>>=1;
}
}
r=al; // remainder (low 8bit)
}
//---------------------------------------------------------------------------
// print 16bit dec with 8bit arithmetics
//---------------------------------------------------------------------------
AnsiString prn16(BYTE h,BYTE l)
{
AnsiString s="";
BYTE r; int i,j; char c;
// divide by 10 and print the remainders
for (;;)
{
if ((!h)&&(!l)) break;
div(h,l,r,h,l,10); // (h,l)=(h,l)/10; r=(h,l)%10;
s+=char('0'+r); // add digit to text
}
if (s=="") s="0";
// reverse order
i=1; j=s.Length();
for (;i<j;i++,j--) { c=s[i]; s[i]=s[j]; s[j]=c; }
return s;
}
//---------------------------------------------------------------------------
I use VCL AnsiString for text storage; you can change it to whatever string type, or even char[], instead.
You need to divide the whole number, not just the BYTEs separately. See how the div function works. Here is an example of getting the least significant digit of 264, i.e. printing 264 % 10...
a = 264 = 00000001 00001000 bin
b = 10 = 00000000 00001010 bin
d = 1 = 00000000 00000001 bin
// apply shift sh so b>=a
a = 00000001 00001000 bin
b = 00000001 01000000 bin
d = 00000000 00100000 bin
sh = 5
// a-=b c+=d while a>=b
// a<b already so no change
a = 00000001 00001000 bin b = 00000001 01000000 bin c = 00000000 00000000 bin d = 00000000 00100000 bin
// shift right
b = 00000000 10100000 bin d = 00000000 00010000 bin sh = 4
// a-=b c+=d while a>=b
a = 00000000 01101000 bin c = 00000000 00010000 bin
// shift right
b = 00000000 01010000 bin d = 00000000 00001000 bin sh = 3
// a-=b c+=d while a>=b
a = 00000000 00011000 bin c = 00000000 00011000 bin
// shift right
b = 00000000 00101000 bin d = 00000000 00000100 bin sh = 2
b = 00000000 00010100 bin d = 00000000 00000010 bin sh = 1
// a-=b c+=d while a>=b
a = 00000000 00000100 bin c = 00000000 00011010 bin
// shift right
b = 00000000 00001010 bin d = 00000000 00000001 bin sh = 0
// a<b so stop; a is the remainder -> digit = 4
// now a=c and divide again from the start to get the next digit ...
By interpreting them as base-256.
>>> 18*256 + 52
4660
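The same combination written out in C++ (a small sketch): shifting the high byte left by 8 is exactly the multiply-by-256 above.
#include <cstdio>
#include <cstdint>

int main()
{
    uint8_t high = 0x12, low = 0x34;
    // high * 256 + low, written with a shift and an OR
    uint16_t value = (uint16_t)((high << 8) | low);
    std::printf("%u\n", (unsigned)value); // prints 4660
}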

mod of an unsigned char from secure hash function

I have an unsigned char test_SHA1[20] of 20 bytes, which is a return value from a hash function. With the following code, I get this output:
unsigned char test_SHA1[20];
char hex_output[41];
for(int di = 0; di < 20; di++)
{
sprintf(hex_output + di*2, "%02x", test_SHA1[di]);
}
printf("SHA1 = %s\n", hex_output);
50b9e78177f37e3c747f67abcc8af36a44f218f5
The last 9 bits of this number are 0x0f5 (= 245 in decimal), which I should get by taking test_SHA1 mod 512. To take mod 512 of test_SHA1, I do
int x = (unsigned int)test_SHA1 % 512;
printf("x = %d\n", x);
But x turns out to be 158 instead of 245.
Note that (unsigned int)test_SHA1 converts the array's address, not its contents, which is why 158 comes out. I suggest combining the last two bytes and doing a bit-wise AND with 0x1ff instead of using the % operator.
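A sketch of that fix (the array is zero-filled here and only the last two digest bytes from the hex output above are set, purely for the demo):
#include <cstdio>

int main()
{
    // Demo values only: the digest above ends in ...18f5.
    unsigned char test_SHA1[20] = {0};
    test_SHA1[18] = 0x18;
    test_SHA1[19] = 0xf5;

    // Combine the last two bytes and mask with 0x1ff (same as % 512).
    int x = ((test_SHA1[18] << 8) | test_SHA1[19]) & 0x1ff;
    std::printf("x = %d\n", x); // x = 245
}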

clear all but the two most significant set bits in a word

Given a 32-bit int which is known to have at least 2 bits set, is there a way to efficiently clear all except the 2 most significant set bits? I.e., I want to ensure the output has exactly 2 bits set.
What if the input is guaranteed to have only 2 or 3 bits set?
Examples:
0x2040 -> 0x2040
0x0300 -> 0x0300
0x0109 -> 0x0108
0x5040 -> 0x5000
Benchmarking Results:
Code:
QueryPerformanceFrequency(&freq);
/***********/
value = (base =2)|1;
QueryPerformanceCounter(&start);
for (l=0;l<A_LOT; l++)
{
//!!value calculation goes here
junk+=value; //use result to prevent optimizer removing it.
//advance to the next 2|3 bit word
if (value&0x80000000)
{ if (base&0x80000000)
{ base=6;
}
base*=2;
value=base|1;
}
else
{ value<<=1;
}
}
QueryPerformanceCounter(&end);
time = (end.QuadPart - start.QuadPart);
time /= freq.QuadPart;
printf("--------- name\n");
printf("%ld loops took %f sec (%f additional)\n",A_LOT, time, time-baseline);
printf("words /sec = %f Million\n",A_LOT/(time-baseline)/1.0e6);
Results using VS2005 default release settings on a Core2Duo E7500 @ 2.93 GHz:
--------- BASELINE
1000000 loops took 0.001630 sec
--------- sirgedas
1000000 loops took 0.002479 sec (0.000849 additional)
words /sec = 1178.074206 Million
--------- ashelly
1000000 loops took 0.004640 sec (0.003010 additional)
words /sec = 332.230369 Million
--------- mvds
1000000 loops took 0.005250 sec (0.003620 additional)
words /sec = 276.242030 Million
--------- spender
1000000 loops took 0.009594 sec (0.007964 additional)
words /sec = 125.566361 Million
--------- schnaader
1000000 loops took 0.025680 sec (0.024050 additional)
words /sec = 41.580158 Million
If the input is guaranteed to have exactly 2 or 3 bits set, then the answer can be computed very quickly. We exploit the fact that the expression x&(x-1) equals x with its lowest set bit cleared. Applying that expression twice to the input produces 0 if 2 or fewer bits are set. If exactly 2 bits are set, we return the original input; otherwise, we return the original input with its lowest set bit cleared.
Here is the code in C++:
// assumes a has exactly 2 or 3 bits set
int topTwoBitsOf( int a )
{
    int b = a&(a-1);        // b = a with its lowest set bit cleared
    return b&(b-1) ? b : a; // check whether clearing the lowest set bit of b produces 0
}
This can be written as a confusing single expression, if you like:
int topTwoBitsOf( int a )
{
    return a&(a-1)&((a&(a-1))-1) ? a&(a-1) : a;
}
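A quick usage check against the examples from the question (a sketch, not part of the original answer):
#include <cstdio>

// assumes a has exactly 2 or 3 bits set
static int topTwoBitsOf(int a)
{
    int b = a & (a - 1);
    return b & (b - 1) ? b : a;
}

int main()
{
    std::printf("%X %X %X %X\n",
                (unsigned)topTwoBitsOf(0x2040),  // 2040
                (unsigned)topTwoBitsOf(0x0300),  // 300
                (unsigned)topTwoBitsOf(0x0109),  // 108
                (unsigned)topTwoBitsOf(0x5040)); // 5000
}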
I'd create a mask in a loop. At the beginning, the mask is 0. Then go from the MSB to the LSB and set each corresponding bit in the mask to 1 until you have found 2 set bits. Finally, AND the value with this mask.
#include <stdio.h>
#include <stdlib.h>
int clear_bits(int value) {
    unsigned int mask = 0;
    unsigned int act_bit = 0x80000000;
    unsigned int bit_set_count = 0;
    do {
        if ((value & act_bit) == act_bit) bit_set_count++;
        mask = mask | act_bit;
        act_bit >>= 1;
    } while ((act_bit != 0) && (bit_set_count < 2));
    return (value & mask);
}

int main() {
    printf("0x2040 => %X\n", clear_bits(0x2040));
    printf("0x0300 => %X\n", clear_bits(0x0300));
    printf("0x0109 => %X\n", clear_bits(0x0109));
    printf("0x5040 => %X\n", clear_bits(0x5040));
    return 0;
}
This is quite complicated, but it should be more efficient than using a for loop over all 32 bits every time (clearing all bits except the 2 most significant set ones). Anyway, be sure to benchmark the different approaches before using one.
Of course, if memory is not a problem, use a lookup table approach like some have recommended - this will be much faster.
How much memory is available, and at what latency? I would propose a lookup table ;-)
But seriously: if you perform this on hundreds of numbers, an 8-bit lookup table giving the 2 MSBs and another 8-bit lookup table giving the 1 MSB may be all you need. Depending on the processor, this might beat actually counting bits.
For speed, I would create a lookup table mapping an input byte I to
M(I) = 0 if 0 or 1 bits are set
M(I) = I' otherwise, where I' is I with only its 2 most significant set bits kept.
Your 32-bit int is 4 input bytes I1 I2 I3 I4.
Look up M(I1); if it is nonzero, you're done.
If M(I1) is zero and I1 has no set bits at all, repeat the previous step for I2.
Else (I1 has exactly one set bit), look up I2 in a second lookup table giving the 1 MSB; if nonzero, you're done.
Else, repeat the previous step for I3.
Etc., etc. Don't actually loop anything over I1-I4, but unroll it fully.
Summing up: 2 lookup tables with 256 entries, 247/256 of cases are resolved with one lookup, approx 8/256 with two lookups, etc.
edit: the tables, for clarity (input, bits table 2 MSB, bits table 1 MSB)
I table2 table1
0 00000000 00000000
1 00000000 00000001
2 00000000 00000010
3 00000011 00000010
4 00000000 00000100
5 00000101 00000100
6 00000110 00000100
7 00000110 00000100
8 00000000 00001000
9 00001001 00001000
10 00001010 00001000
11 00001010 00001000
12 00001100 00001000
13 00001100 00001000
14 00001100 00001000
15 00001100 00001000
16 00000000 00010000
17 00010001 00010000
18 00010010 00010000
19 00010010 00010000
20 00010100 00010000
..
250 11000000 10000000
251 11000000 10000000
252 11000000 10000000
253 11000000 10000000
254 11000000 10000000
255 11000000 10000000
Here's another attempt (no loops, no lookup, no conditionals). This time it works:
var orig=0x109;
var x=orig;
x |= (x >> 1);
x |= (x >> 2);
x |= (x >> 4);
x |= (x >> 8);
x |= (x >> 16);
x = orig & ~(x & ~(x >> 1));
x |= (x >> 1);
x |= (x >> 2);
x |= (x >> 4);
x |= (x >> 8);
x |= (x >> 16);
var solution=orig & ~(x >> 1);
Console.WriteLine(solution.ToString("X")); //0x108
Could probably be shortened by someone cleverer than me.
Following up on my previous answer, here's the complete implementation. I think it is as fast as it can get. (sorry for unrolling the whole thing ;-)
#include <stdio.h>
#include <stdlib.h> /* for atoi */

unsigned char bittable1[256];
unsigned char bittable2[256];
unsigned int lookup(unsigned int);
void gentable(void);

int main(int argc,char**argv)
{
    unsigned int challenge = 0x42341223, result;
    gentable();
    if ( argc > 1 ) challenge = atoi(argv[1]);
    result = lookup(challenge);
    printf("%08x --> %08x\n",challenge,result);
}
unsigned int lookup(unsigned int i)
{
unsigned int ret;
ret = bittable2[i>>24]<<24; if ( ret ) return ret;
ret = bittable1[i>>24]<<24;
if ( !ret )
{
ret = bittable2[i>>16]<<16; if ( ret ) return ret;
ret = bittable1[i>>16]<<16;
if ( !ret )
{
ret = bittable2[i>>8]<<8; if ( ret ) return ret;
ret = bittable1[i>>8]<<8;
if ( !ret )
{
return bittable2[i] | bittable1[i];
} else {
return (ret | bittable1[i&0xff]);
}
} else {
if ( bittable1[(i>>8)&0xff] )
{
return (ret | (bittable1[(i>>8)&0xff]<<8));
} else {
return (ret | bittable1[i&0xff]);
}
}
} else {
if ( bittable1[(i>>16)&0xff] )
{
return (ret | (bittable1[(i>>16)&0xff]<<16));
} else if ( bittable1[(i>>8)&0xff] ) {
return (ret | (bittable1[(i>>8)&0xff]<<8));
} else {
return (ret | (bittable1[i&0xff]));
}
}
}
void gentable()
{
    int i;
    for ( i=0; i<256; i++ )
    {
        int bitset = 0;
        int j;
        for ( j=128; j; j>>=1 )
        {
            if ( i&j )
            {
                bitset++;
                if ( bitset == 1 ) bittable1[i] = i&(~(j-1));
                else if ( bitset == 2 ) bittable2[i] = i&(~(j-1));
            }
        }
        //printf("%3d %02x %02x\n",i,bittable1[i],bittable2[i]);
    }
}
Using a variation of this, I came up with the following:
var orig=56;
var x=orig;
x |= (x >> 1);
x |= (x >> 2);
x |= (x >> 4);
x |= (x >> 8);
x |= (x >> 16);
Console.WriteLine(orig&~(x>>2));
In C#, but it should translate easily.
EDIT
I'm not so sure I've answered your question. This preserves the highest set bit and the bit position directly below it, e.g. 101 => 100.
Here's some python that should work:
def bit_play(num):
    bits_set = 0
    upper_mask = 0
    bit_index = 31
    while bit_index >= 0:
        upper_mask |= (1 << bit_index)
        if num & (1 << bit_index) != 0:
            bits_set += 1
            if bits_set == 2:
                num &= upper_mask
                break
        bit_index -= 1
    return num
It makes one pass over the number. It builds a mask of the bits that it crosses so that, as soon as it hits the second-most significant set bit, it can AND the mask in and clear all the lower bits.
I'd also use a table-based approach, but I believe one table alone should be sufficient. Take the 4-bit case as an example. If your input is guaranteed to have 2 or 3 bits set, then your output can only be one of 6 values:
0011
0101
0110
1001
1010
1100
Put these possible values in an array sorted by size. Starting with the largest, find the first value which is equal to or less than your target value. This is your answer. For the 8 bit version you'll have more possible return values, but still easily less than the maximum possible permutations of 8*7.
public static final int [] MASKS = {
0x03, //0011
0x05, //0101
0x06, //0110
0x09, //1001
0x0A, //1010
0x0C, //1100
};
for (int i = 0; i < 16; ++i) {
    if (countBits(i) < 2) { // countBits = population count, e.g. Integer.bitCount(i)
        continue;
    }
    for (int j = MASKS.length - 1; j >= 0; --j) {
        if (MASKS[j] <= i) {
            System.out.println(Integer.toBinaryString(i) + " " + Integer.toBinaryString(MASKS[j]));
            break;
        }
    }
}
Here's my implementation in C#
uint OnlyMostSignificant(uint value, int count) {
    uint newValue = 0;
    int c = 0;
    for (uint high = 0x80000000; high != 0 && c < count; high >>= 1) {
        if ((value & high) != 0) {
            newValue = newValue | high;
            c++;
        }
    }
    return newValue;
}
Using count, you could make it the most significant (count) bits.
My solution:
Use "The best method for counting bits in a 32-bit integer", then clear the lowest set bit if the answer is 3. This only works when the input is limited to 2 or 3 bits set.
unsigned int c; // c is the total bits set in v
unsigned int v = value;
v = v - ((v >> 1) & 0x55555555);
v = (v & 0x33333333) + ((v >> 2) & 0x33333333); // temp
c = ((v + (v >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; // count
// c==2: value & value leaves value unchanged; c==3: value & (value-1) clears the lowest set bit
crc+=value&value-(c-2);
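The same idea written as a self-contained function (a sketch; the crc += line above is just the benchmark harness consuming the result):
#include <cstdint>
#include <cstdio>

// Only valid for inputs with exactly 2 or 3 bits set: count the bits,
// and clear the lowest set bit when the count is 3.
static uint32_t topTwo(uint32_t value)
{
    uint32_t v = value;
    v = v - ((v >> 1) & 0x55555555u);
    v = (v & 0x33333333u) + ((v >> 2) & 0x33333333u);
    uint32_t c = (((v + (v >> 4)) & 0x0F0F0F0Fu) * 0x01010101u) >> 24;
    return (c == 3) ? (value & (value - 1)) : value;
}

int main()
{
    std::printf("%X %X\n", (unsigned)topTwo(0x0109), (unsigned)topTwo(0x2040)); // 108 2040
}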
