Encoding an unsigned 16 bit float in binary in Go

In Go, how can I encode a float into a byte array as a 16 bit unsigned float with 11 explicit bits of mantissa and 5 bits of explicit exponent?
There doesn't seem to be a clean way to do it. The only thing I can think of is encoding it as in Convert byte array "[]uint8" to float64 in GoLang and manually truncating the bits.
Is there a "go" way to do this?
Here's the exact definition:
A 16 bit unsigned float with 11 explicit bits of mantissa and 5 bits of explicit exponent
The bit format is loosely modeled after IEEE 754. For example, 1 microsecond is represented as 0x1, which has an exponent of zero, presented in the 5 high-order bits, and a mantissa of 1, presented in the 11 low-order bits. When the explicit exponent is greater than zero, an implicit high-order 12th bit of 1 is assumed in the mantissa. For example, a floating-point value of 0x800 has an explicit exponent of 1, as well as an explicit mantissa of 0, but then has an effective mantissa of 4096 (the 12th bit is assumed to be 1). Additionally, the actual exponent is one less than the explicit exponent, and the value represents 4096 microseconds. Any values larger than the representable range are clamped to 0xFFFF.

I am not sure whether I understand the encoding correctly (see my comment on the original question), but here is a function which may do what you want:
func EncodeFloat(seconds float64) uint16 {
	us := math.Floor(1e6*seconds + 0.5)
	if us < 0 {
		panic("cannot encode negative value")
	} else if us > (1<<30)*4095+0.5 {
		return 0xffff
	}
	usInt := uint64(us)
	expBits := uint16(0)
	if usInt >= 2048 {
		exp := uint16(1)
		for usInt >= 4096 {
			exp++
			usInt >>= 1
		}
		usInt -= 2048
		expBits = exp << 11
	}
	return expBits | uint16(usInt)
}
(code is at http://play.golang.org/p/G599VOBMcL )
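For completeness, here is a minimal sketch of the reverse direction, matching the interpretation used by EncodeFloat above (the name DecodeFloat and returning microseconds as a uint64 are my own choices, not part of the question):
// DecodeFloat is a sketch of the inverse of EncodeFloat above:
// 5 exponent bits, 11 mantissa bits, implicit 12th mantissa bit when exp > 0.
func DecodeFloat(enc uint16) uint64 {
	exp := enc >> 11           // explicit exponent, high 5 bits
	man := uint64(enc & 0x7ff) // explicit mantissa, low 11 bits
	if exp == 0 {
		return man // no implicit bit, actual exponent is 0
	}
	// implicit 12th mantissa bit; actual exponent is one less than explicit
	return (man + 2048) << (exp - 1)
}
Under this reading, decoding what EncodeFloat produces gives back the microsecond count, up to the low-order bits dropped when the exponent is non-zero; note the answer's caveat that this may not match the question's own reading of the 0x800 example.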

Related

Get value of one bit from 32 bits

How do you apply a mask to get only one bit after you shift right? Does it depend on how many positions you shifted right?
In a 32 bit structure I'm trying to get the value of the 9th bit and the 10th bit.
x := uint32(11537664)
0000 0000 1011 0000 0000 1101 0000 0000
          ^^
So for the 9th bit, if I shift right 23 bits I need to mask one byte? That seems to isolate the 9th bit because I'm getting a value of 1.
(x >> 23) & 0xff
9th bit...should be 1... looks ok.
00000000000000000000000000000001
0x1
So to get the 10th bit, which should be 0, I am shifting one less bit, which does put a 0 all the way to the right. But there is a 1 after it which needs to be masked. I figured 1 byte plus 1 bit for the mask, but I'm still seeing the bit in position two, so that can't be right.
(x >> 22) & 0x1ff
10th bit... should be 0, but this shift and mask does not look correct.
00000000000000000000000000000010
                              ^ This bit I don't want.
0x2
Link to example:
https://play.golang.org/p/zqofCAAKDZz
package main

import (
	"fmt"
)

func bin(i uint32) {
	fmt.Printf("%032b\n", i)
}

func hex(i uint32) {
	fmt.Printf("0x%x\n", i)
}

func show(i uint32) {
	bin(i)
	hex(i)
	fmt.Println()
}

func main() {
	x := uint32(11537664)
	fmt.Println("Data")
	show(x)
	fmt.Println("First 8 bits.")
	show(x >> 24)
	fmt.Println("9th bit...should be 1")
	show((x >> 23) & 0xff)
	fmt.Println("10th bit... should be 0")
	show((x >> 22) & 0x1ff)
}
After the shift you get the number 0b10, and you only need the lowest bit. So why are you masking with 0x1ff? That mask has 9 one bits, so it leaves the lowest 9 bits unchanged (unmasked).
Instead mask with 0b01 = 0x01. That only leaves the lowest bit, and zeroes all others:
show((x >> 22) & 0x01)
Try it on the Go Playground.
Also note that if you just want to test whether a certain bit is one or zero, you don't necessarily have to shift. Masking with a proper bitmask that contains a single one at that position is enough; you can compare the masking result with zero.
The proper bitmask for testing the nth bit is simply 1<<n (where bits are zero-indexed). The 2 bits you want to test are the 22nd and 23rd bits.
See this example:
x := uint32(11537664)
fmt.Printf("x : %032b\n", x)
fmt.Println()
const mask22 = 1 << 22
fmt.Printf("mask22 : %032b\n", mask22)
fmt.Printf("22. bit: %032b %t\n", x&mask22, x&mask22 != 0)
fmt.Println()
const mask23 = 1 << 23
fmt.Printf("mask23 : %032b\n", mask23)
fmt.Printf("23. bit: %032b %t\n", x&mask23, x&mask23 != 0)
It outputs (try it on the Go Playground):
x : 00000000101100000000110100000000
mask22 : 00000000010000000000000000000000
22. bit: 00000000000000000000000000000000 false
mask23 : 00000000100000000000000000000000
23. bit: 00000000100000000000000000000000 true
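If you test bits like this often, a tiny helper keeps the intent obvious; hasBit is just a name I made up for this sketch, with bit positions zero-indexed as in the answer above:
// hasBit reports whether bit n (zero-indexed from the least significant bit) of x is set.
func hasBit(x uint32, n uint) bool {
	return x&(1<<n) != 0
}
// With the question's value: hasBit(11537664, 23) == true, hasBit(11537664, 22) == false.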

Golang shift operator conversion

I cannot understand how, in Go, 1<<s returns 0 when var s uint = 33,
but 1<<33 returns 8589934592.
How does a shift operator conversion end up with a value of 0?
I'm reading the language specification and stuck in this section:
https://golang.org/ref/spec#Operators
Specifically this paragraph from docs:
"The right operand in a shift expression must have unsigned integer
type or be an untyped constant representable by a value of type uint.
If the left operand of a non-constant shift expression is an untyped
constant, it is first implicitly converted to the type it would assume
if the shift expression were replaced by its left operand alone."
Some examples from the official Go docs:
var s uint = 33
var i = 1<<s // 1 has type int
var j int32 = 1<<s // 1 has type int32; j == 0
var k = uint64(1<<s) // 1 has type uint64; k == 1<<33
Update:
Another very related question, with an example:
package main

import (
	"fmt"
)

func main() {
	v := int16(4336)
	fmt.Println(int8(v))
}
This program returns -16.
How does the number 4336 become -16 when converting int16 to int8?
If you have this:
var s uint = 33
fmt.Println(1 << s)
Then the quoted part applies:
If the left operand of a non-constant shift expression is an untyped constant, it is first implicitly converted to the type it would assume if the shift expression were replaced by its left operand alone.
Because s is not a constant (it's a variable), 1 << s is a non-constant shift expression. And the left operand is 1, which is an untyped constant (e.g. int(1) would be a typed constant), so it is converted to the type it would get if the expression were simply 1 instead of 1 << s:
fmt.Println(1)
In the above, the untyped constant 1 would be converted to int, because that is its default type. The default type of constants is given in Spec: Constants:
An untyped constant has a default type which is the type to which the constant is implicitly converted in contexts where a typed value is required, for instance, in a short variable declaration such as i := 0 where there is no explicit type. The default type of an untyped constant is bool, rune, int, float64, complex128 or string respectively, depending on whether it is a boolean, rune, integer, floating-point, complex, or string constant.
And the result of the above is architecture dependent. If int is 32 bits, it will be 0. If int is 64 bits, it will be 8589934592 (because shifting a 1 bit 33 times will shift it out of a 32-bit int number).
On the Go playground, size of int is 32 bits (4 bytes). See this example:
fmt.Println("int size:", unsafe.Sizeof(int(0)))
var s uint = 33
fmt.Println(1 << s)
fmt.Println(int32(1) << s)
fmt.Println(int64(1) << s)
The above outputs (try it on the Go Playground):
int size: 4
0
0
8589934592
If I run the above app on my 64-bit computer, the output is:
int size: 8
8589934592
0
8589934592
Also see The Go Blog: Constants for how constants work in Go.
Note that if you write 1 << 33, that is not the same: it is not a non-constant shift expression, which is what your quote applies to ("the left operand of a non-constant shift expression"). 1<<33 is a constant shift expression, which is evaluated in "constant space" (at arbitrary precision), and the result would then be converted to int, which does not fit into a 32-bit int, hence the compile-time error. It works with variables, because variables can overflow. Constants do not overflow:
Numeric constants represent exact values of arbitrary precision and do not overflow.
See How does Go perform arithmetic on constants?
Update:
Answering your update: converting from int16 to int8 simply keeps the lowest 8 bits. And integers are represented using the 2's complement format, where the highest bit is 1 if the number is negative.
This is detailed in Spec: Conversions:
When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v := uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow.
So when you convert an int16 value to int8, if the source number has a 1 in bit position 7 (the 8th bit), the result will be negative, even if the source wasn't negative. Similarly, if the source has a 0 at bit position 7, the result will be positive, even if the source is negative.
See this example:
for _, v := range []int16{4336, -129, 8079} {
	fmt.Printf("Source : %v\n", v)
	fmt.Printf("Source hex: %4x\n", uint16(v))
	fmt.Printf("Result hex: %4x\n", uint8(int8(v)))
	fmt.Printf("Result : %4v\n", int8(v))
	fmt.Println()
}
Output (try it on the Go Playground):
Source : 4336
Source hex: 10f0
Result hex: f0
Result : -16
Source : -129
Source hex: ff7f
Result hex: 7f
Result : 127
Source : 8079
Source hex: 1f8f
Result hex: 8f
Result : -113
See related questions:
When casting an int64 to uint64, is the sign retained?
Format printing the 64bit integer -1 as hexadecimal deviates between golang and C
You're building and running the program in 32-bit mode (the Go Playground?). There, int is 32 bits wide and behaves the same as int32.
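If you want to check the platform's int width without unsafe, the standard library exposes it as a constant (strconv.IntSize; math/bits.UintSize is the equivalent for uint). A minimal sketch:
package main

import (
	"fmt"
	"strconv"
)

func main() {
	fmt.Println(strconv.IntSize) // 32 on a 32-bit build, 64 on a 64-bit build
}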

What's happening with this method?

type IntSet struct {
	words []uint64
}

func (s *IntSet) Has(x int) bool {
	word, bit := x/64, uint(x%64)
	return word < len(s.words) && s.words[word]&(1<<bit) != 0
}
Let's go through what I think is going on:
A new type is declared called IntSet. Underneath its type declaration it is a uint64 slice.
A method is created called Has(). It can only receive IntSet types; after playing around with ints she returns a bool.
Before she can play she needs two ints. She stores these babies on the stack.
Lost for words
This method's purpose is to report whether the set contains the non-negative value x. Here is the Go test:
func TestExample1(t *testing.T) {
	//!+main
	var x, y IntSet
	fmt.Println(x.Has(9), x.Has(123)) // "true false"
	//!-main
	// Output:
	// true false
}
Looking for some guidance understanding what this method is doing inside, and why the programmer did it in such a complicated way (I feel like I am missing something).
The return statement:
return word < len(s.words) && s.words[word]&(1<<bit) != 0
Is the order of operations this?
return (word < len(s.words)) && ((s.words[word] & (1 << bit)) != 0)
And what are the [word] index and the & doing within:
s.words[word]&(1<<bit)!= 0
edit: I am slowly beginning to see that:
s.words[word]&(1<<bit) != 0
is just an indexed slice element, but I don't understand the &.
As I read the code, I scribbled some notes:
package main

import "fmt"

// A set of bits
type IntSet struct {
	// bits are grouped into 64 bit words
	words []uint64
}

// x is the index for a bit
func (s *IntSet) Has(x int) bool {
	// The word index for the bit
	word := x / 64
	// The bit index within a word for the bit
	bit := uint(x % 64)
	if word < 0 || word >= len(s.words) {
		// error: word index out of range
		return false
	}
	// the bit set within the word
	mask := uint64(1 << bit)
	// true if the bit in the word is set
	return s.words[word]&mask != 0
}
func main() {
	nBits := 2*64 + 42
	// round up to whole word
	nWords := (nBits + (64 - 1)) / 64
	bits := IntSet{words: make([]uint64, nWords)}
	// bit 127 = 1 * 64 + 63
	bits.words[1] = 1 << 63
	fmt.Printf("%b\n", bits.words)
	for i := 0; i < nWords*64; i++ {
		has := bits.Has(i)
		if has {
			fmt.Println(i, has)
		}
	}
	has := bits.Has(127)
	fmt.Println(has)
}
Playground: https://play.golang.org/p/rxquNZ_23w1
Output:
[0 1000000000000000000000000000000000000000000000000000000000000000 0]
127 true
true
The Go Programming Language Specification
Arithmetic operators
& bitwise AND integers
peterSO's answer is spot on - read it. But I figured this might also help you understand.
Imagine I want to store some random numbers in the range 1 - 8. After I store these numbers I will be asked if the number n (also in the range of 1 - 8) appears in the numbers I recorded earlier. How would we store the numbers?
One, probably obvious, way would be to store them in a slice or maybe a map. Maybe we would choose a map since lookups will be constant time. So we create our map
seen := map[uint8]struct{}{}
Our code might look something like this
type IntSet struct {
	seen map[uint8]struct{}
}

func (i *IntSet) AddValue(v uint8) {
	i.seen[v] = struct{}{}
}

func (i *IntSet) Has(v uint8) bool {
	_, ok := i.seen[v]
	return ok
}
For each number we store we take up (at least) 1 byte (8 bits) of memory. If we were to store all 8 numbers we would be using 64 bits / 8 bytes.
However, as the name implies, this is an int Set. We don't care about duplicates, we only care about membership (which Has provides for us).
But there is another way we could store these numbers, and we could do it all within a single byte. Since a byte provides 8 bits, we can use these 8 bits as markers for values we have seen. The initial value (in binary notation) would be
00000000 == uint8(0)
If we did an AddValue(3) we could change the 3rd bit and end up with
00000100 == uint8(4)
     ^
     |______ 3rd bit
If we then called AddValue(8) we would have
10000100 == uint8(132)
^    ^
|    |______ 3rd bit
|___________ 8th bit
So after adding 3 and 8 to our IntSet we have the internally stored integer value of 132. But how do we take 132 and figure out whether a particular bit is set? Easy, we use bitwise operators.
The & operator is a logical AND. It will return the value of the bits common between the numbers on each side of the operator. For example
  10001100    01110111    11111111
& 01110100  & 01110000  & 00000001
  --------    --------    --------
  00000100    01110000    00000001
So to find out if n is in our set we simply do
our_set_value & (1 << (value_we_are_looking_for - 1))
which if we were searching for 4 would yield
  10000100
& 00001000
  --------
  00000000 <-- so 4 is not present
or if we were searching for 8
  10000100
& 10000000
  --------
  10000000 <-- so 8 is present
You may have noticed I subtracted 1 from our value_we_are_looking_for. This is because I am fitting 1-8 into our 8-bit number. If we only wanted to store seven numbers, we could skip the very first bit and assume our counting starts at bit #2; then we wouldn't have to subtract 1, which is what the code you posted does.
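As a minimal sketch of that single-byte set (ByteSet and the 1-based values are my own naming for illustration, not from the posted code):
// ByteSet records membership of the values 1..8 in a single byte.
type ByteSet uint8

// Add marks v (1..8) as seen by setting bit v-1.
func (b *ByteSet) Add(v uint8) { *b |= 1 << (v - 1) }

// Has reports whether v (1..8) was added.
func (b ByteSet) Has(v uint8) bool { return b&(1<<(v-1)) != 0 }
After Add(3) and Add(8) the underlying byte is 0b10000100 (132), Has(4) is false and Has(8) is true, exactly as worked out above.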
Assuming you understand all of that, here's where things get interesting. So far we have been storing our values in a uint8 (so we could only have 8 values, or 7 if you omit the first bit). But there are larger types with more bits, like uint64. Instead of 8 values, we can store 64 values! But what happens if the range of values we want to track exceeds 1-64? What if we want to store 65? This is where the slice of words comes from in the original code.
Since the code posted skips the first bit, from now on I will do so as well.
We can use the first uint64 to store the numbers 1 - 63. When we want to store the numbers 64-127 we need a new uint64. So our slice would be something like
[ uint64_of_1-63, uint64_of_64-127, uint64_of_128-191, etc]
Now, to answer the question about whether a number is in our set we need to first find the uint64 whose range would contain our number. If we were searching for 110 we would want to use the uint64 located at index 1 (uint64_of_64-127) because 110 would fall in that range.
To find the index of the word we need to look at, we take the whole number value of n / 64. In the case of 110 we would get 1, which is exactly what we want.
Now we need to examine the specific bit of that number. The bit that needs to be checked would be the remainder when dividing 110 by 64, or 46. So if the 46th bit of the word at index 1 is set, then we have seen 110 before.
This is how it might look in code
type IntSet struct {
	words []uint64
}

func (s *IntSet) Has(x int) bool {
	word, bit := x/64, uint(x%64)
	return word < len(s.words) && s.words[word]&(1<<bit) != 0
}
func (s *IntSet) AddValue(x int) {
	word := x / 64
	bit := x % 64
	if word < len(s.words) {
		s.words[word] |= (1 << uint64(bit))
	}
}
And here is some code to test it
func main() {
	rangeUpper := 1000
	bits := IntSet{words: make([]uint64, (rangeUpper/64)+1)}
	bits.AddValue(127)
	bits.AddValue(8)
	bits.AddValue(63)
	bits.AddValue(64)
	bits.AddValue(998)
	fmt.Printf("%b\n", bits.words)
	for i := 0; i < rangeUpper; i++ {
		if ok := bits.Has(i); ok {
			fmt.Printf("Found %d\n", i)
		}
	}
}
OUTPUT
Found 8
Found 63
Found 64
Found 127
Found 998
Playground of above
Note
The |= is another bitwise operator, OR. It combines the two values, keeping a 1 anywhere there is a 1 in either value:
  10000000    00000001    00000001
| 01000000  | 10000000  | 00000001
  --------    --------    --------
  11000000    10000001    00000001  <-- important that we
                                        can set the value
                                        multiple times
Using this method we can reduce the cost of storing membership of 65535 numbers from about 131KB to about 8KB. This type of bit manipulation for set membership is very common in implementations of Bloom Filters.
An IntSet represents a set of integers. The presence in the set of any integer in a contiguous range can be recorded by setting a single bit in the IntSet. Likewise, checking whether a specific integer is in the IntSet can be done by checking whether the bit corresponding to that integer is set.
So the code is finding the specific uint64 in the Intset corresponding to the integer:
word := x/64
and then the specific bit in that uint64:
bit := uint(x%64)
and then checking first that the integer being tested is in the range supported by the IntSet:
word < len(s.words)
and then whether the specific bit corresponding to the specific integer is set:
&& s.words[word]&(1<<bit) != 0
This part:
s.words[word]
pulls out the specific uint64 of the IntSet that tracks whether the integer in question is in the set.
&
is a bitwise AND.
(1<<bit)
means take a 1, shift it to the bit position representing the specific integer being tested.
Performing the bitwise AND between that uint64 and the bit-shifted 1 yields 0 if the bit corresponding to the integer is not set, and a non-zero value if it is set (meaning the integer in question is a member of the IntSet).

Index Out of Range when using binary.PutVarint(...)

http://play.golang.org/p/RqScJVvpS7
package main

import (
	"encoding/binary"
	"fmt"
	"math/rand"
)

func main() {
	buffer := []byte{0, 0, 0, 0, 0, 0, 0, 0}
	num := rand.Int63()
	count := binary.PutVarint(buffer, num)
	fmt.Println(count)
}
I had this working a while ago when num was just an incrementing uint64 and I was using binary.PutUvarint, but now that it's a random int64 and binary.PutVarint I get an error:
panic: runtime error: index out of range
goroutine 1 [running]:
encoding/binary.PutUvarint(0x1042bf58, 0x8, 0x8, 0x6ccb, 0xff9faa4, 0x9acb0442, 0x7fcfd52, 0x4d658221)
/usr/local/go/src/encoding/binary/varint.go:44 +0xc0
encoding/binary.PutVarint(0x1042bf58, 0x8, 0x8, 0x6ccb, 0x7fcfd52, 0x4d658221, 0x14f9e0, 0x104000e0)
/usr/local/go/src/encoding/binary/varint.go:83 +0x60
main.main()
/tmp/sandbox010341234/main.go:12 +0x100
What am I missing? I would have thought this to be a trivial change...
EDIT: I just tried extending my buffer array. For some odd reason it works and I get a count of 10. How can that be? int64 is 64 bits = 8 bytes, right?
Quoting the doc of encoding/binary:
The varint functions encode and decode single integer values using a variable-length encoding; smaller values require fewer bytes. For a specification, see https://developers.google.com/protocol-buffers/docs/encoding.
So binary.PutVarint() is not a fixed-length but a variable-length encoding. When passing an int64, it may need more than 8 bytes for large numbers and fewer than 8 bytes for small numbers. Since the number you're encoding is a random number, it will have random bits even in its highest byte.
See this simple example:
buffer := make([]byte, 100)
for num := int64(1); num < 1<<60; num <<= 4 {
	count := binary.PutVarint(buffer, num)
	fmt.Printf("Num=%d, bytes=%d\n", num, count)
}
Output:
Num=1, bytes=1
Num=16, bytes=1
Num=256, bytes=2
Num=4096, bytes=2
Num=65536, bytes=3
Num=1048576, bytes=4
Num=16777216, bytes=4
Num=268435456, bytes=5
Num=4294967296, bytes=5
Num=68719476736, bytes=6
Num=1099511627776, bytes=6
Num=17592186044416, bytes=7
Num=281474976710656, bytes=8
Num=4503599627370496, bytes=8
Num=72057594037927936, bytes=9
The essence of variable-length encoding is that small numbers use fewer bytes, but this can only be achieved if, in turn, big numbers may use more than 8 bytes (which would be the size of an int64).
Details of the specific encoding is on the linked page.
A very easy example: a byte is 8 bits. Use 7 bits of each output byte as the "useful" bits that encode the data/number. If the highest bit is 1, more bytes follow; if the highest bit is 0, we're done. Small numbers can be encoded using 1 output byte (e.g. n=10), but we're spending 1 extra bit for every 7 bits of useful data, so if the input number uses all 64 bits, we end up with more than 8 bytes: 10 groups are required to cover 64 bits (9 groups is only 9*7=63 bits), so we need 10 bytes.
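Here is a small round-trip sketch (my own, not from the question); binary.MaxVarintLen64 (10) is exactly the worst-case buffer size described above:
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	buf := make([]byte, binary.MaxVarintLen64) // 10 bytes, worst case for an int64

	n := binary.PutVarint(buf, -3) // small magnitude -> 1 byte (zig-zag encoded)
	fmt.Println(n, buf[:n])        // 1 [5]

	n = binary.PutVarint(buf, 1<<62) // large magnitude -> all 10 bytes
	v, read := binary.Varint(buf[:n])
	fmt.Println(n, read, v == 1<<62) // 10 10 true
}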

How to calculate g values from LIS3DH sensor?

I am using an LIS3DH sensor with an ATmega128 to get acceleration values for motion detection. I went through the datasheet but it seemed inadequate, so I decided to post here. From other posts I am convinced that the sensor resolution is 12 bit instead of 16 bit. I need to know: when finding the g value from the x-axis output registers, do we calculate the two's complement of the register values only when the sign bit (the MSB of OUT_X_H, the high register) is 1, or every time, even when this bit is 0?
From my calculations I think that we calculate the two's complement only when the MSB of the OUT_X_H register is 1.
But the datasheet says that we need to calculate two's complement of both OUT_X_L and OUT_X_H every time.
Could anyone enlighten me on this ?
Sample code
int main(void)
{
	stdout = &uart_str;
	UCSRB = 0x18; // RXEN=1, TXEN=1
	UCSRC = 0x06; // no parity, 1-bit stop, 8-bit data
	UBRRH = 0;
	UBRRL = 71;   // baud 9600
	timer_init();
	TWBR = 216;   // 400HZ
	TWSR = 0x03;
	TWCR |= (1<<TWINT)|(1<<TWSTA)|(0<<TWSTO)|(1<<TWEN); //TWCR=0x04;
	printf("\r\nLIS3D address: %x\r\n", twi_master_getchar(0x0F));
	twi_master_putchar(0x23, 0b000100000);
	printf("\r\nControl 4 register 0x23: %x", twi_master_getchar(0x23));
	printf("\r\nStatus register %x", twi_master_getchar(0x27));
	twi_master_putchar(0x20, 0x77);
	DDRB = 0xFF;
	PORTB = 0xFD;
	SREG = 0x80; //sei();
	while (1)
	{
		process();
	}
}
void process(void)
{
	x_l = twi_master_getchar(0x28);
	x_h = twi_master_getchar(0x29);
	y_l = twi_master_getchar(0x2a);
	y_h = twi_master_getchar(0x2b);
	z_l = twi_master_getchar(0x2c);
	z_h = twi_master_getchar(0x2d);
	xvalue = (short int)(x_l + (x_h << 8));
	yvalue = (short int)(y_l + (y_h << 8));
	zvalue = (short int)(z_l + (z_h << 8));
	printf("\r\nx_val: %ldg", x_val);
	printf("\r\ny_val: %ldg", y_val);
	printf("\r\nz_val: %ldg", z_val);
}
I wrote CTRL_REG4 as 0x10 (4g), but when I read it back I got 0x20 (8g). This seems a bit bizarre.
Do not compute the 2s complement. That has the effect of making the result the negative of what it was.
Instead, the datasheet tells us the result is already a signed value. That is, 0 is not the lowest value; it is in the middle of the scale. (0xffff is just a little less than zero, not the highest value.)
Also, the result is always 16-bit, but it is not meant to be taken as accurate to all 16 bits. You can set a control register value to generate more accurate values at the expense of current consumption, but it is still not guaranteed to be accurate to the last bit.
The datasheet does not say (at least in the register description in chapter 8.2) that you have to calculate the 2's complement; it states that the contents of the 2 registers are in 2's complement.
So all you have to do is receive the two bytes and cast them to an int16_t to get the signed raw value.
uint8_t xl = 0x00;
uint8_t xh = 0xFC;
int16_t x = (int16_t)((((uint16_t)xh) << 8) | xl);
or
uint8_t xa[2] = {0x00, 0xFC}; // little endian: lower byte to lower address
int16_t x = *((int16_t*)xa);
(I hope I did not mix something up with this)
I have another approach, which may be easier to implement as the compiler will do all of the work for you. The compiler will probably do it most efficiently and with no bugs too.
Read the raw data into the raw field in:
typedef union
{
	struct
	{
		// in low power - 8 significant bits, left justified
		int16 reserved : 8;
		int16 value    : 8;
	} lowPower;
	struct
	{
		// in normal power - 10 significant bits, left justified
		int16 reserved : 6;
		int16 value    : 10;
	} normalPower;
	struct
	{
		// in high resolution - 12 significant bits, left justified
		int16 reserved : 4;
		int16 value    : 12;
	} highPower;
	// the raw data as read from registers H and L
	uint16 raw;
} LIS3DH_RAW_CONVERTER_T;
then use the value needed according to the power mode you are using.
Note: In this example, bit fields structs are BIG ENDIANS.
Check if you need to reverse the order of 'value' and 'reserved'.
The LISxDH sensors are 2's complement, left-justified. They can be set to 12-bit, 10-bit, or 8-bit resolution. This is read from the sensor as two 8-bit values (LSB, MSB) that need to be assembled together.
If you set the resolution to 8-bit, you can just cast the LSB to int8, which is likely your processor's representation of 2's complement (8-bit). Likewise, if it were possible to set the sensor to 16-bit resolution, you could just cast that to an int16.
However, if the value is 10-bit left justified, the sign bit is in the wrong place for an int16. Here is how you convert it to int16 (16-bit 2's complement).
1.Read LSB, MSB from the sensor:
[MMMM MMMM] [LL00 0000]
[1001 0101] [1100 0000] //example = [0x95] [0xC0] (note that the LSB comes before MSB on the sensor)
2.Assemble the bytes, keeping in mind the LSB is left-justified.
//---As an example....
uint8_t byteMSB = 0x95; //[1001 0101]
uint8_t byteLSB = 0xC0; //[1100 0000]
//---Cast to U16 to make room, then combine the bytes---
assembledValue = ( (uint16_t)(byteMSB) << UINT8_LEN ) | (uint16_t)byteLSB;
/*[MMMM MMMM LL00 0000]
[1001 0101 1100 0000] = 0x95C0 */
//---Shift to right justify---
assembledValue >>= (INT16_LEN-numBits);
/*[0000 00MM MMMM MMLL]
[0000 0010 0101 0111] = 0x0257 */
3.Convert from 10-bit 2's complement (now right-justified) to an int16 (which is just 16-bit 2's complement on most platforms).
Approach #1: If the sign bit (in our example, the tenth bit) = 0, then just cast it to int16 (since positive numbers are represented the same in 10-bit 2's complement and 16-bit 2's complement).
If the sign bit = 1, then invert the bits (keeping just the 10bits), add 1 to the result, then multiply by -1 (as per the definition of 2's complement).
convertedValueI16 = ~assembledValue; //invert bits
convertedValueI16 &= ( 0xFFFF>>(16-numBits) ); //but keep just the 10-bits
convertedValueI16 += 1; //add 1
convertedValueI16 *=-1; //multiply by -1
/*Note that the last two lines could be replaced by convertedValueI16 = ~convertedValueI16;*/
//result = -425 = 0xFE57 = [1111 1110 0101 0111]
Approach#2: Zero the sign bit (10th bit) and subtract out half the range 1<<9
//----Zero the sign bit (tenth bit)----
convertedValueI16 = (int16_t)( assembledValue^( 0x0001<<(numBits-1) ) );
/*Result = 87 = 0x57 [0000 0000 0101 0111]*/
//----Subtract out half the range----
convertedValueI16 -= ( (int16_t)(1)<<(numBits-1) );
  [0000 0000 0101 0111]
- [0000 0010 0000 0000]
= [1111 1110 0101 0111]
/* Result = 87 - 512 = -425 = 0xFE57 */
Link to script to try out (not optimized): http://tpcg.io/NHmBRR
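For comparison, here is a sketch of the same conversion in Go (my own, not from the answers above): since a right shift of a signed integer is an arithmetic shift, casting the assembled left-justified reading to int16 and shifting right by (16 - numBits) right-justifies and sign-extends it in one step.
package main

import "fmt"

func main() {
	byteMSB, byteLSB := uint16(0x95), uint16(0xC0) // the example bytes from above
	const numBits = 10                             // normal mode: 10 significant bits, left-justified

	assembled := byteMSB<<8 | byteLSB           // 0x95C0, left-justified
	value := int16(assembled) >> (16 - numBits) // arithmetic shift sign-extends

	fmt.Println(value) // -425, matching the worked example
}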
