I am looking at this page and I am not sure how the author is calculating the checksum. I would contact the author directly, but I don't have his email address (it's not listed on GitHub).
This is a simple example of a packet with no variables. The author calculates the checksum to be 120 (I assumed this was hex, as all his other values are in hex). The sum of all the bytes is 0xBA hex, or 186 in base 10. His notes say "Checksum Low Bit, This bit is checksum of 1-5 bits (MOD 256, if necessary)", but I don't get what he is saying and I can't figure out how to reach his answer.
Get Version / Return Name
Byte     1   2   3   4   5   6    7   8   (9-24 unused)
Request  16  2   80  20  2   120  16  3
Byte  Sample hex (B10)  Definition
====  ================  =============================
1     0x16 (22)         Preamble 1
2     0x02 (2)          Preamble 2
3     0x80 (128)        Destination = Chlorinator
4     0x20 (32)         Command = Get Name
5     0x02 (2)          Not sure. Intellitouch uses 2. Aquarite uses 0. Any of them seem to work.
6     120               Checksum Low Bit, This bit is checksum of 1-5 bits (MOD 256, if necessary)
7     0x16 (22)         Post-amble 1
8     0x03 (3)          Post-amble 2
Any suggestions would be most appreciated!
Turns out the commenters were 100% correct: the numbers were expressed in decimal, not hex as I had assumed.
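With the values read as decimal, the checksum falls out directly. A minimal sketch (the variable names are mine):

```python
# Bytes 1-5 of the "Get Name" request, read as decimal (not hex)
header = [16, 2, 80, 20, 2]

# The checksum is the sum of bytes 1-5, taken mod 256 if it overflows
checksum = sum(header) % 256
print(checksum)  # 120
```

16 + 2 + 80 + 20 + 2 = 120, which is under 256, so the MOD 256 never kicks in for this particular packet.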
When we only have 6 bytes of meaningful data in a frame, what do we fill the remaining bytes with, up to 8? In the picture below, the important data is only 10 03, but how are the unimportant filler bytes chosen? What do [55] or [AA] mean? For context, 10 03 is a diagnostic request and 50 03 is the response.
The communication is over CAN, and this is a trace of the CAN data.
I don't understand exactly what you are asking, but that looks like a hex representation.
1 byte -> 2 hex characters -> 8 bits. 0xAA -> each hex digit A is 10 in decimal -> 1010 1010 (binary)
The explicit bits are always on the right side, the LSB (least significant bits).
For example, in JavaScript a regular integer is 32 bits long:

```
const number = 0b1010 // binary
const hexNumber = 0xA // hex
```

Both are 10 in decimal. As you can see, we have only spelled out the least significant 4 bits; every other bit is an implicit 0.
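The same point, sketched in Python for anyone who prefers to check it interactively:

```python
# 0xAA is one byte: two hex digits, eight bits
assert 0xAA == 170
assert format(0xAA, "08b") == "10101010"

# Digits we don't write are implicit leading zeros (the high bits)
assert format(0xA, "08b") == "00001010"
print(format(0xAA, "08b"))  # 10101010
```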
This is a quiz question that I failed in the past, and despite having access to the solution, I don't understand the steps that lead to the correct answer.
Here is the problem:
Which of these addresses is cache-line aligned?
a. 0x7ffc32a21164
b. 0x560c40e05350
c. 0x560c40e052c0
d. 0x560c3f2d71ff
And the solution to the problem:
Each hex character represents 4 bits
It takes 6 bits to address 64 bytes, since ln(64)/ln(2) = 6
0x0  0000
0x4  0100
0x8  1000
0xC  1100
     ____
bit weights: 2^3 2^2 2^1 2^0 = 8 4 2 1
Conclusion: if the address ends in either 00, 40, 80 or C0, then it is aligned on 64 bytes.
The answer is c.
I really don't see how we get from the 6-bit representation to this answer. Can anyone add something to the given solution to make it clearer?
The question boils down to: Which number is a multiple of 64? All that remains is understanding the number system they're using.
In binary, 64 is written as 1000000. In hexadecimal, it's written as 0x40. So multiples of 64 will end in 0x00 (0 * 64), 0x40 (1 * 64), 0x80 (2 * 64), or 0xC0 (3 * 64). (The cycle then repeats.) Answer c is the one with the right ending.
An analogy in decimal would be: Which number is a multiple of 5? 0 * 5 is 0 and 1 * 5 is 5, after which the cycle repeats. So we just need to look at the last digit. If it's a 0 or a 5, we know the number is a multiple of 5.
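The whole solution can be checked mechanically (a 64-byte cache line is assumed, as in the quiz):

```python
# The four candidate addresses from the quiz
addresses = {
    "a": 0x7FFC32A21164,
    "b": 0x560C40E05350,
    "c": 0x560C40E052C0,
    "d": 0x560C3F2D71FF,
}

# 64-byte alignment: the low 6 bits must all be zero,
# i.e. the address is an exact multiple of 64
aligned = [k for k, a in addresses.items() if a % 64 == 0]
print(aligned)  # ['c']
```

Only c ends in 0xC0, whose low 6 bits are zero; a ends in 0x64, b in 0x50, and d in 0xFF, none of which are multiples of 64.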
I have two .dat files, world.dat and sensor_data.dat. On my D: drive there is a folder named tutorial, which contains two subfolders, data and code. The data folder holds the two files I mentioned, world.dat and sensor_data.dat; the code folder holds a MATLAB file named main.m.
The code written in this file (main.m) is:
clc;
clear;
close all;
% Read *.dat files containing landmark data
landmarks = fopen('../data/world.dat');
landmarks_data = fread(landmarks);
% Read *.dat files containing odometry and range-bearing sensor data
data = fopen('../data/sensor_data.dat');
data_data = fread(data);
But when I print landmarks_data and data_data, they show something other than what is written in those two files (world.dat, sensor_data.dat).
world.dat file contains:
1 2 1
2 0 4
3 2 7
4 9 2
5 10 5
6 9 8
7 5 5
8 5 3
9 5 9
My output:
>> landmarks_data
landmarks_data =
49
32
50
32
49
10
50
32
48
32
52
10
51
32
50
32
55
10
52
32
57
32
50
10
53
32
49
48
32
53
10
54
32
57
32
56
10
55
32
53
32
53
10
56
32
53
32
51
10
57
32
53
32
I don't know where these values come from; the same thing happens for the data_data variable.
I need help fixing this.
You are getting the ASCII values of the characters in the file.
ASCII value of 1 equals 49.
ASCII value of ' ' (space) equals 32.
ASCII value of 2 equals 50...
fread reads data from a binary file, and you are using fread to read a text file. The binary value of a text character is its ASCII code (it can also be a Unicode value).
In case you want to read the data as text and keep the matrix structure, you can use the readmatrix function:
landmarks = readmatrix('../data/world.dat');
Result:
landmarks =
1 2 1
2 0 4
3 2 7
4 9 2
5 10 5
6 9 8
7 5 5
8 5 3
9 5 9
Remark: if your MATLAB version is older than R2019a, you can use dlmread instead.
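The ASCII mapping is easy to verify outside MATLAB as well; a quick Python sketch of the same idea:

```python
# Reading text as raw bytes yields the character codes; for ASCII text
# these are exactly the numbers fread was printing
line = "1 2 1\n"  # first line of world.dat
codes = [ord(c) for c in line]
print(codes)  # [49, 32, 50, 32, 49, 10]
```

These are the first six values of landmarks_data above: '1' is 49, ' ' is 32, '2' is 50, and the newline is 10.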
I have this question in an Operating System test:
Given a disk of 1GB with 16KB blocks:
(1) Calculate the size of the File Allocation Table:
My Answer: since there are 2^16 blocks on the disk (2^30 / 2^14), the table has 2^16 entries, and every entry needs to store 16 bits (with 2^16 different blocks, we need 16 bits to identify each of them). So the size is 2^16 times 16 bits = 2^16 x 2^4 = 2^20 bits = 2^17 bytes = 128 KB.
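The arithmetic in (1) can be sanity-checked in a few lines (unit sizes as assumed above):

```python
disk = 2**30    # 1 GB
block = 2**14   # 16 KB
entries = disk // block          # 2**16 = 65536 blocks, one entry each
entry_bits = 16                  # 16 bits to number 2**16 blocks
table_bytes = entries * entry_bits // 8
print(table_bytes // 1024)  # 128 (KB)
```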
(2) Given the following table, indicate in which block the following bytes are stored:
- byte 131080 of FileA, which starts at block 4.
- byte 62230 of FileB, which starts at block 3.
Entry Content
0 10
1 2
2 0
3 6
4 1
5 8
6 7
7 11
8 12
So FileA is (4) -> (1) -> (2), but here is the problem: since every block is 16 KB = 2^4 x 2^10 bytes = 2^14 bytes = 16384 bytes, block 4 contains bytes 1 to 16384, block 1 bytes 16385 to 32768, and block 2 bytes 32769 to 49152. So where am I supposed to find byte 131080?
Where am I going wrong?
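A sketch of the chain-walking may help locate the slip. One assumption of mine: entry 2's content 0 is read here as a pointer to block 0, not as an end-of-file marker, so the chain does not stop after three links.

```python
# FAT entries exactly as given in the table (entry -> next block)
fat = {0: 10, 1: 2, 2: 0, 3: 6, 4: 1, 5: 8, 6: 7, 7: 11, 8: 12}
BLOCK = 16384  # 16 KB block size

def block_of(byte, start):
    """Follow the chain start -> fat[start] -> ... for byte // BLOCK hops."""
    b = start
    for _ in range(byte // BLOCK):
        b = fat[b]
    return b

# FileB: byte 62230 sits in the file's 4th block (62230 // 16384 == 3),
# and the chain 3 -> 6 -> 7 -> 11 puts it in disk block 11
print(block_of(62230, 3))  # 11

# FileA: byte 131080 needs the file's 9th block (131080 // 16384 == 8),
# so the chain must be followed well past 4 -> 1 -> 2; after block 2
# it continues to block 0 and then block 10, beyond the entries shown
print(131080 // 16384)  # 8
```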
I'm trying to create an algorithm to convert a greyscale image from 12-bit to 8-bit.
I have a greyscale like this one:
The scale is represented in a matrix. The problem is that the simple multiplication by 1/16 destroys the first grey columns.
Here is the code example:
in =[
1 1 1 3 3 3 15 15 15 63 63 63;
1 1 1 3 3 3 15 15 15 63 63 63;
1 1 1 3 3 3 15 15 15 63 63 63;
1 1 1 3 3 3 15 15 15 63 63 63
];
[zeilen spalten] = size(in);
eight = round(in/16);
imshow(uint8(eight));
By "destroys" I mean that the new columns are now black.
Simply rescale the image: divide every element by the maximum possible intensity of a 12-bit unsigned integer (2^12 - 1 = 4095) and then multiply by the maximum possible intensity of an 8-bit unsigned integer (2^8 - 1 = 255).
Therefore:
out = uint8((255.0/4095.0)*(double(in)));
You need to cast to double to maintain floating-point precision when performing this scaling, and then cast to uint8 so that the image type is guaranteed to be 8-bit. You have cleverly deduced that this scaling factor is roughly 1/16 (since 255.0/4095.0 ~ 1/16). However, the first 6 columns of the output for your test image will surely be zero, because intensities of 1 and 3 in a 12-bit image are too small to be represented in the equivalent 8-bit form, which is why they get rounded down to 0. If you think about it, every increase of 16 in intensity in your 12-bit image registers as a single intensity increase in the 8-bit image:
12-bit --> 8-bit
0 --> 0
15 --> 1
31 --> 2
47 --> 3
63 --> 4
... --> ...
4095 --> 255
Because your values of 1 and 3 are not high enough to get to the next level, these get rounded down to 0. However, your values of 15 get mapped to 1, and the values of 63 get mapped to 4, which is what we expect when you run the above code on your test input.
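A minimal sketch of the same mapping in Python (the rounding agrees with MATLAB's round for these values, since none of them land exactly on .5):

```python
def to_8bit(v12):
    """Rescale one 12-bit intensity (0..4095) to 8-bit (0..255)."""
    return round(v12 * 255.0 / 4095.0)

# Low 12-bit values collapse to 0; roughly every 16 levels of 12-bit
# intensity add one 8-bit level
for v in (1, 3, 15, 63, 4095):
    print(v, "->", to_8bit(v))  # 1 -> 0, 3 -> 0, 15 -> 1, 63 -> 4, 4095 -> 255
```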