I want to extract the bits from a CAN message (8 bytes).
So, how do I extract the bits from all 8 bytes?
Here is a chunk of code to get the bits from one byte:
on message myMessage
{
byte bitArray[8];
int iByteToRead = 0;
f_ByteToBitArray(this.byte(iByteToRead),bitArray);
write ("f_ByteToBitArray %X :Bits %X %X %X %X %X %X %X %X", iByteToRead , bitArray[0],bitArray[1],bitArray[2],bitArray[3],bitArray[4],bitArray[5],bitArray[6],bitArray[7]);
}
void f_ByteToBitArray(byte in, byte out[])
{
// 7 6 5 4 3 2 1 0
// f_ByteToBitArray 1 :Bits 0 0 0 0 0 0 0 1
// f_ByteToBitArray 2 :Bits 0 0 0 0 0 0 1 0
byte a;
a = in;
out[0] = a & 0x01;
a = a >> 1;
out[1] = a & 0x01;
a = a >> 1;
out[2] = a & 0x01;
a = a >> 1;
out[3] = a & 0x01;
a = a >> 1;
out[4] = a & 0x01;
a = a >> 1;
out[5] = a & 0x01;
a = a >> 1;
out[6] = a & 0x01;
a = a >> 1;
out[7] = a & 0x01;
}
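To cover the full message, the same helper can simply be called once per byte, i.e. for this.byte(0) through this.byte(7). As a sketch of the indexing only (written in Python purely for illustration, with made-up payload values; in CAPL you would loop over this.byte(i) instead):
# Sketch: flatten an 8-byte payload into 64 bits, byte 0 first,
# least significant bit of each byte first (same order as out[0]..out[7] above).
payload = [0x01, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00]  # example data only
bits = []
for b in payload:
    for i in range(8):
        bits.append((b >> i) & 0x01)
# bits[8*byte_index + bit_index] is bit `bit_index` of byte `byte_index`.
print(bits[0], bits[9])  # 1 1  (bit 0 of byte 0, bit 1 of byte 1)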
My references are:
Writing a WebSocket server in Java
Base Framing Protocol
Why the first byte 129 represent FIN, RSV1, RSV2, RSV3, and Opcode?
My expected result is:
The first byte contains FIN / 1 bit, RSV1 / 1 bit, RSV2 / 1 bit, RSV3 / 1 bit, Opcode / 4 bits, Mask / 1 bit. Total: 9 bits.
The second byte is the Payload length. Total: 7 bits.
My actual result is:
The first byte represents FIN, RSV1, RSV2, RSV3, and Opcode.
The second byte represents the Payload length.
Just to illustrate a bit.
First Byte:
The leftmost bit is the FIN bit; the rightmost 4 bits represent the opcode,
in this case 1 = text.
10000001
Second Byte:
The leftmost bit indicates whether the data is masked; the remaining seven bits indicate the length.
10000000 here the length is zero
11111101 here the length is exactly 125
11111110 here the length indicator is 126, therefore the next two bytes give you the length, followed by four bytes for the mask key
11111111 here the length indicator is 127, therefore the next eight bytes give you the length, followed by four bytes for the mask key
After all this follows the masked payload.
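As a small sketch of those two bytes in code (Python used here just for illustration; the byte values are the examples from above):
# First byte 10000001: FIN set, opcode 1 (text).
# Second byte 11111101: mask bit set, length 125.
first, second = 0b10000001, 0b11111101
fin    = (first >> 7) & 0b1     # 1 -> final fragment
opcode = first & 0b00001111     # 1 -> text frame
masked = (second >> 7) & 0b1    # 1 -> payload is masked
length = second & 0b01111111    # 125 -> length fits in this byte
print(fin, opcode, masked, length)  # 1 1 1 125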
ADDED 2021-07-19
To extract information like opcode and length, you have to apply some bit operations on the given bytes.
Below is an extract from https://github.com/napengam/phpWebSocketServer/blob/master/server/RFC6455.php showing how the server decodes a frame.
public function Decode($frame) {
// detect ping or pong frame, or fragments
$this->fin = ord($frame[0]) & 128;
$this->opcode = ord($frame[0]) & 15;
$length = ord($frame[1]) & 127;
if ($length <= 125) {
$moff = 2;
$poff = 6;
} else if ($length == 126) {
$l0 = ord($frame[2]) << 8;
$l1 = ord($frame[3]);
$length = ($l0 | $l1);
$moff = 4;
$poff = 8;
} else if ($length == 127) {
$l0 = ord($frame[2]) << 56;
$l1 = ord($frame[3]) << 48;
$l2 = ord($frame[4]) << 40;
$l3 = ord($frame[5]) << 32;
$l4 = ord($frame[6]) << 24;
$l5 = ord($frame[7]) << 16;
$l6 = ord($frame[8]) << 8;
$l7 = ord($frame[9]);
$length = ( $l0 | $l1 | $l2 | $l3 | $l4 | $l5 | $l6 | $l7);
$moff = 10;
$poff = 14;
}
$masks = substr($frame, $moff, 4);
$data = substr($frame, $poff, $length); // hgs 30.09.2016
$text = '';
$m0 = $masks[0];
$m1 = $masks[1];
$m2 = $masks[2];
$m3 = $masks[3];
for ($i = 0; $i < $length;) {
$text .= $data[$i++] ^ $m0;
if ($i < $length) {
$text .= $data[$i++] ^ $m1;
if ($i < $length) {
$text .= $data[$i++] ^ $m2;
if ($i < $length) {
$text .= $data[$i++] ^ $m3;
}
}
}
}
return $text;
}
In https://github.com/napengam/phpWebSocketServer/blob/master/phpClient/websocketCore.php you will find encode and decode for the client.
Given an N x M binary matrix (every element is either 1 or 0), find the minimum number of moves to convert it to an all-0 matrix.
To convert the matrix, one can choose squares of any size and invert the values in that square: '1' changes to '0' and '0' changes to '1'. This process can be done multiple times with squares of the same size. Converting any square counts as 1 move.
Calculate the minimum number of moves required.
Example:
Input matrix:
0 1 1
0 0 0
0 1 1
we need to calculate the minimum number of moves to convert this to the all-'0' matrix
0 0 0
0 0 0
0 0 0
Here,
For squares of size 1 (1 x 1, i.e. single-element sub-matrices), the total number of moves required to convert this matrix is 4: we convert the elements at positions (1,2), (1,3), (3,2), (3,3).
For squares of size 2 (2 x 2 sub-matrices), it takes 2 moves to convert the matrix:
First we convert the elements from (1,2) to (2,3), and the matrix becomes {{0 0 0}, {0 1 1}, {0 1 1}}
Then we convert the elements from (2,2) to (3,3), and the matrix becomes {{0 0 0}, {0 0 0}, {0 0 0}}
So the minimum is 2.
Could someone help in designing an approach to this?
I attempted to solve it using Gaussian elimination for every possible square size. But the result is not correct here. There must be some gap in my approach to this problem.
package com.practice.hustle;
import java.util.Arrays;
public class GaussianElimination {
public static void main(String[] args) {
int countMoves = Integer.MAX_VALUE;
byte[][] inputMatrix = new byte[3][3];
inputMatrix[0][0] = 0;
inputMatrix[0][1] = 1;
inputMatrix[0][2] = 1;
inputMatrix[1][0] = 0;
inputMatrix[1][1] = 0;
inputMatrix[1][2] = 0;
inputMatrix[2][0] = 0;
inputMatrix[2][1] = 1;
inputMatrix[2][2] = 1;
int N = inputMatrix.length;
int M = inputMatrix[0].length;
int maxSize = Math.min(N, M);
for (int j = 2; j <= maxSize; ++j) { // loop for every possible square size
byte[][] a = new byte[N * M][(N * M) + 1];
for (int i = 0; i < N * M; i++) { // logic for square wise toggle for every element of N*M elements
byte seq[] = new byte[N * M + 1];
int index_i = i / M;
int index_j = i % M;
if (index_i <= N - j && index_j <= M - j) {
for (int c = 0; c < j; c++) {
for (int k = 0; k < j; k++) {
seq[i + k + M * c] = 1;
}
}
a[i] = seq;
} else {
if (index_i > N - j) {
seq = Arrays.copyOf(a[i - M], N * M + 1);
} else {
seq = Arrays.copyOf(a[i - 1], N * M + 1);
}
}
seq[N * M] = inputMatrix[index_i][index_j];
a[i] = seq;
}
System.out.println("\nSolving for square size = " + j);
print(a, N * M);
int movesPerSquareSize = gaussian(a);
if (movesPerSquareSize != 0) { // to calculate minimum moves
countMoves = Math.min(countMoves, movesPerSquareSize);
}
}
System.out.println(countMoves);
}
public static int gaussian(byte a[][]) {
// n X n+1 matrix
int N = a.length;
for (int k = 0; k < N - 1; k++) {
// Finding pivot element
int max_i = k, max_value = a[k][k];
for (int i = k + 1; i < N; i++) {
if (a[i][k] > max_value) {
max_value = a[i][k];
max_i = i;
}
}
// swap max row with kth row
byte[] temp = a[k];
a[k] = a[max_i];
a[max_i] = temp;
// convert to 0 all cells below pivot in the column
for (int i = k+1; i < N; i++) {
// int scalar = a[i][k] + a[k][k]; // probability of a divide by zero
if (a[i][k] == 1) {
for (int j = 0; j <= N; j++) {
if (a[i][j] == a[k][j]) {
a[i][j] = 0;
} else {
a[i][j] = 1;
}
}
}
}
}
System.out.println("\n\tAfter applying gaussian elimination : ");
print(a, N);
int count = 0;
for (int i = 0; i < N; i++) {
if (a[i][N] == 1)
++count;
}
return count;
}
private static void print(byte[][] a, int N) {
for (int i = 0; i < N; i++) {
System.out.print("\t ");
for (int j = 0; j < N + 1; j++) {
System.out.print(a[i][j] + " ");
}
System.out.println(" ");
}
}
}
It produces an incorrect final reduced (row-echelon) matrix, and thereby the result is also incorrect.
I think it is failing due to the logic used for elements like the cell at index (2,3): for that cell we are not sure which square it would be part of (either the square from (1,2) to (2,3) or the square from (2,2) to (3,3)).
Here the input matrix to the Gaussian algorithm has exactly the same sequence in the 2nd and 3rd rows, which could be the culprit behind the incorrect results.
1 1 0 1 1 0 0 0 0 0
* 0 1 1 0 1 1 0 0 0 1 *
* 0 1 1 0 1 1 0 0 0 1 *
0 0 0 1 1 0 1 1 0 0
0 0 0 0 1 1 0 1 1 0
0 0 0 0 1 1 0 1 1 0
0 0 0 1 1 0 1 1 0 0
0 0 0 0 1 1 0 1 1 1
0 0 0 0 1 1 0 1 1 1
For a square size of 2, the above program prints:
Solving for square size = 2
The input to Gaussian algo :
1 1 0 1 1 0 0 0 0 0
0 1 1 0 1 1 0 0 0 1
0 1 1 0 1 1 0 0 0 1
0 0 0 1 1 0 1 1 0 0
0 0 0 0 1 1 0 1 1 0
0 0 0 0 1 1 0 1 1 0
0 0 0 1 1 0 1 1 0 0
0 0 0 0 1 1 0 1 1 1
0 0 0 0 1 1 0 1 1 1
After applying gaussian elimination :
1 0 1 0 0 0 1 0 1 1
0 1 1 0 0 0 0 1 1 0
0 0 0 0 0 0 0 0 0 0
0 0 0 1 0 1 1 0 1 0
0 0 0 0 1 1 0 1 1 1
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1
0 0 0 0 0 0 0 0 0 1
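As a sanity check (not part of the attempt above), a brute force over subsets of 2 x 2 square placements reproduces the expected minimum of 2 for the 3 x 3 example; since toggling the same square twice cancels out, each placement is either used once or not at all. A Python sketch, practical only for tiny inputs:
from itertools import product

matrix = [[0, 1, 1],
          [0, 0, 0],
          [0, 1, 1]]
N, M, size = 3, 3, 2

# All top-left corners where a size x size square fits.
placements = [(r, c) for r in range(N - size + 1) for c in range(M - size + 1)]

best = None
for choice in product([0, 1], repeat=len(placements)):
    grid = [row[:] for row in matrix]
    for used, (r, c) in zip(choice, placements):
        if used:
            for dr in range(size):
                for dc in range(size):
                    grid[r + dr][c + dc] ^= 1
    if all(v == 0 for row in grid for v in row):
        moves = sum(choice)
        if best is None or moves < best:
            best = moves

print(best)  # prints 2 for this example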
Is there any performance comparison among those three variants for input checking / default initialization?
It would be useful to have a comparison on a recent version, e.g. R2014b, and an older one, e.g. R2012b.
An example:
function foo(a,b)
if nargin < 1, a = 1; end
if nargin < 2, b = 2; end
end
versus
function foo(a,b)
if ~exist('a','var'), a = 1; end
if ~exist('b','var'), b = 2; end
end
versus
function foo(varargin)
p = inputParser;
addOptional(p,'a',1)
addOptional(p,'b',2)
parse(p,varargin{:})
end
Using Amro's testing suite, on R2014b:
func nargs time
_________________ _____ __________
'foo_nargin' 0 2.3674e-05
'foo_exist' 0 3.1339e-05
'foo_inputparser' 0 9.6934e-05
'foo_nargin' 1 2.4437e-05
'foo_exist' 1 3.2157e-05
'foo_inputparser' 1 0.0001307
'foo_nargin' 2 2.3838e-05
'foo_exist' 2 3.0492e-05
'foo_inputparser' 2 0.00015775
Here is some code to test the three approaches:
function t = testArgParsing()
args = {1, 2};
fcns = {
@foo_nargin ;
@foo_exist ;
@foo_inputparser
};
% parameters sweep
[f,k] = ndgrid(1:numel(fcns), 0:numel(args));
f = f(:); k = k(:);
% test combinations of functions and number of input args
t = cell(numel(f), 3);
for i=1:size(t,1)
t{i,1} = func2str(fcns{f(i)});
t{i,2} = k(i);
t{i,3} = timeit(@() feval(fcns{f(i)}, args{1:k(i)}), 2);
end
% format results in table
t = cell2table(t, 'VariableNames',{'func','nargs','time'});
end
function [aa,bb] = foo_nargin(a,b)
if nargin < 1, a = 1; end
if nargin < 2, b = 2; end
aa = a;
bb = b;
end
function [aa,bb] = foo_exist(a,b)
if ~exist('a','var'), a = 1; end
if ~exist('b','var'), b = 2; end
aa = a;
bb = b;
end
function [aa,bb] = foo_inputparser(varargin)
p = inputParser;
addOptional(p,'a',1);
addOptional(p,'b',2);
parse(p, varargin{:});
aa = p.Results.a;
bb = p.Results.b;
end
Here is what I get in R2014a on my machine:
>> t = testArgParsing
t =
func nargs time
_________________ _____ __________
'foo_nargin' 0 3.4556e-05
'foo_exist' 0 5.2901e-05
'foo_inputparser' 0 0.00010254
'foo_nargin' 1 2.5531e-05
'foo_exist' 1 3.7105e-05
'foo_inputparser' 1 0.0001263
'foo_nargin' 2 2.4991e-05
'foo_exist' 2 3.6772e-05
'foo_inputparser' 2 0.00015148
And a pretty plot to view the results:
tt = unstack(t, 'time', 'func');
names = tt.Properties.VariableNames(2:end);
bar(tt{:,2:end}.')
set(gca, 'XTick',1:numel(names), 'XTickLabel',names, 'YGrid','on')
legend(num2str(tt{:,1}, 'nargin=%d'))
ylabel('Time [sec]'), xlabel('Functions')
I want to write a program to convert from decimal to negabinary, but I cannot figure out how the conversion works or how to find the rule behind it.
Example: 7 (base 10) --> 11011 (base -2)
I just know that 7 = (-2)^0*1 + (-2)^1*1 + (-2)^2*0 + (-2)^3*1 + (-2)^4*1.
The algorithm is described at http://en.wikipedia.org/wiki/Negative_base#Calculation. Basically, you divide repeatedly just as for a positive base, but make sure each remainder is nonnegative and minimal (0 or 1 for base -2).
7 = -3*-2 + 1 (least significant digit)
-3 = 2*-2 + 1
2 = -1*-2 + 0
-1 = 1*-2 + 1
1 = 0*-2 + 1 (most significant digit)
def neg2dec(arr):
n = 0
for i, num in enumerate(arr[::-1]):
n+= ((-2)**i)*num
return n
def dec2neg(num):
if num == 0:
digits = ['0']
else:
digits = []
while num != 0:
num, remainder = divmod(num, -2)
if remainder < 0:
num, remainder = num + 1, remainder + 2
digits.append(str(remainder))
return ''.join(digits[::-1])
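A quick check of the functions above against the worked example:
print(dec2neg(7))                # '11011'
print(neg2dec([1, 1, 0, 1, 1]))  # 7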
Just my two cents (C#):
public static int[] negaBynary(int value)
{
List<int> result = new List<int> ();
while (value != 0)
{
int remainder = value % -2;
value = value / -2;
if (remainder < 0)
{
remainder += 2;
value += 1;
}
Console.WriteLine (remainder);
result.Add(remainder);
}
return result.ToArray();
}
There is a method (attributed to Librik/Szudzik/Schröppel) that is much more efficient:
uint64_t negabinary(int64_t num) {
const uint64_t mask = 0xAAAAAAAAAAAAAAAA;
return (mask + num) ^ mask;
}
The conversion method and its reverse are described in more detail in this answer.
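As a sketch of what the reverse looks like (my own illustration, not taken from the linked answer): since the forward direction is (num + mask) ^ mask, the inverse is (result ^ mask) - mask. In Python, with arbitrary-precision integers, this mirrors the 64-bit C version for small magnitudes:
MASK = 0xAAAAAAAAAAAAAAAA  # 1s in all odd bit positions (64-bit pattern)

def to_negabinary(num):
    # Read the result as a plain binary number to get the negabinary digits.
    return (num + MASK) ^ MASK

def from_negabinary(bits):
    return (bits ^ MASK) - MASK

print(bin(to_negabinary(7)))               # 0b11011
print(from_negabinary(0b11011))            # 7
print(from_negabinary(to_negabinary(-3)))  # -3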
Here is some code that applies the same method (using base -10 here) and displays the math behind it.
Some of the code is taken from Birender Singh:
#https://onlinegdb.com/xR1E5Cj7L
def neg2dec(arr):
n = 0
for i, num in enumerate(arr[::-1]):
n+= ((-2)**i)*num
return n
def dec2neg(num):
oldNum = num
if num == 0:
digits = ['0']
else:
digits = []
while num != 0:
num, remainder = divmod(num, -10)
if remainder < 0:
num, remainder = num + 1, remainder + 10
print(str(oldNum) + " = " + str(num) + " * -10 + " + str(remainder))
oldNum = num
digits.append(str(remainder))
return ''.join(digits[::-1])
print(dec2neg(-8374932))
Output:
-8374932 = 837494 * -10 + 8
837494 = -83749 * -10 + 4
-83749 = 8375 * -10 + 1
8375 = -837 * -10 + 5
-837 = 84 * -10 + 3
84 = -8 * -10 + 4
-8 = 1 * -10 + 2
1 = 0 * -10 + 1
12435148
What is the best solution for getting the base-2 logarithm of a number that I know is a power of two (2^k)? (Of course I only know the value 2^k, not k itself.)
One way I thought of doing it is to subtract 1 and then do a bitcount:
lg2(n) = bitcount( n - 1 ) = k, iff k is an integer
0b10000 - 1 = 0b01111, bitcount(0b01111) = 4
But is there a faster way of doing it (without caching)? Something about as fast that doesn't involve a bitcount would also be nice to know.
One of the applications of this is:
suppose you have the bitmask
0b0110111000
and the value
0b0101010101
and you are interested in
(value & bitmask) >> (number of trailing zeros in bitmask)
(0b0101010101 & 0b0110111000) >> 3 = 0b100010
This can be done either using bitcount
value & bitmask >> bitcount((bitmask - 1) xor bitmask) - 1
or using lg2
value & bitmask >> lg2(((bitmask - 1) xor bitmask) + 1) - 1
For it to be faster than bitcount without caching, it should be faster than O(lg(k)), where k is the number of storage bits.
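Just to make the worked numbers above concrete, here is the same arithmetic in Python (a sketch, using bin(...).count('1') as the bitcount):
value   = 0b0101010101
bitmask = 0b0110111000

shift = bin((bitmask - 1) ^ bitmask).count("1") - 1  # 3 trailing zeros
print(bin((value & bitmask) >> shift))               # 0b100010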
Yes. Here's a way to do it without a bitcount, using lg(word size) mask tests, if you know the integer in question is a power of 2.
unsigned int x = ...;
static const unsigned int arr[] = {
// Each element in this array alternates a number of 1s equal to
// consecutive powers of two with an equal number of 0s.
0xAAAAAAAA, // 0b10101010.. // one 1, then one 0, ...
0xCCCCCCCC, // 0b11001100.. // two 1s, then two 0s, ...
0xF0F0F0F0, // 0b11110000.. // four 1s, then four 0s, ...
0xFF00FF00, // 0b1111111100000000.. // [The sequence continues.]
0xFFFF0000
};
register unsigned int reg = (x & arr[0]) != 0;
reg |= ((x & arr[4]) != 0) << 4;
reg |= ((x & arr[3]) != 0) << 3;
reg |= ((x & arr[2]) != 0) << 2;
reg |= ((x & arr[1]) != 0) << 1;
// reg now has the value of lg(x).
In each of the reg |= steps, we successively test to see if any of the bits of x are shared with alternating bitmasks in arr. If they are, that means that lg(x) has bits which are in that bitmask, and we effectively add 2^k to reg, where k is the log of the length of the alternating bitmask. For example, 0xFF00FF00 is an alternating sequence of 8 ones and zeroes, so k is 3 (or lg(8)) for this bitmask.
Essentially, each reg |= ((x & arr[k]) ... step (and the initial assignment) tests whether lg(x) has bit k set. If so, we add it to reg; the sum of all those bits will be lg(x).
That looks like a lot of magic, so let's try an example. Suppose we want to know what power of 2 the value 2,048 is:
// x = 2048
//   = 1000 0000 0000 (binary)
register unsigned int reg = (x & arr[0]) != 0;
//        x = 1000 0000 0000
// & arr[0]  = 1010 1010 1010
//   result  = 1000 0000 0000, which is != 0
// reg = 0x1 (1)         // <-- Matched! Add 2^0 to reg.

reg |= ((x & arr[4]) != 0) << 4;
//        x = 0x .. 0800
// & arr[4]  = 0x .. 0000
//   result  = 0
// reg = reg | (0 << 4)  // <--- No match.
// reg = 0x1 | 0
// reg remains 0x1.

reg |= ((x & arr[3]) != 0) << 3;
//        x = 0x .. 0800
// & arr[3]  = 0x .. FF00
//   result  = 0x800, which is != 0
// reg = reg | (1 << 3)  // <--- Matched! Add 2^3 to reg.
// reg = 0x1 | 0x8
// reg is now 0x9.

reg |= ((x & arr[2]) != 0) << 2;
//        x = 0x .. 0800
// & arr[2]  = 0x .. F0F0
//   result  = 0
// reg = reg | (0 << 2)  // <--- No match.
// reg = 0x9 | 0
// reg remains 0x9.

reg |= ((x & arr[1]) != 0) << 1;
//        x = 0x .. 0800
// & arr[1]  = 0x .. CCCC
//   result  = 0x800, which is != 0
// reg = reg | (1 << 1)  // <--- Matched! Add 2^1 to reg.
// reg = 0x9 | 0x2
// reg is now 0xb (11).
We see that the final value of reg is 2^0 + 2^1 + 2^3, which is indeed 11.
If you know the number is a power of 2, you could just shift it right (>>) until it equals 0. The amount of times you shifted right (minus 1) is your k.
Edit: faster than this is the lookup table method (though you sacrifice some space, but not a ton). See http://doctorinterview.com/index.html/algorithmscoding/find-the-integer-log-base-2-of-an-integer/.
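A minimal sketch of the shift-and-count idea (Python here, just to show the loop):
def lg2_by_shifting(n):
    # Assumes n is a power of two; counts how many right shifts reach zero.
    shifts = 0
    while n != 0:
        n >>= 1
        shifts += 1
    return shifts - 1

print(lg2_by_shifting(2048))  # 11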
Many architectures have a "find first one" instruction (bsr, clz, bfffo, cntlzw, etc.) which will be much faster than bit-counting approaches.
If you don't mind dealing with floats you can use log(x) / log(2).