Printing char with value over 255 - char

I have a problem with the printf function.
It shows me 48 when I try to print a char to which I assigned -720:
printf("%d", c);   /* prints 48 */
Why is this happening?

In most compilers, char variables can only hold values from -128 to 127 (or from 0 to 255 if char is unsigned). Your value of -720 overflowed: only the low 8 bits are kept, and -720 + 3 × 256 = 48.

A signed char has the range -128 to 127. When you assign -720, that range overflows. On overflow, only the bits that fit in the type are kept, so the value wraps around: once it exceeds 127 it continues from -128, and the wrapping repeats until the value falls inside the range.
For example, if you assign char a = 255 and print it with %d, it prints -1: the positive side covers 0 to 127, the remaining steps wrap around to the negative side starting at -128, and you end up at -1 (255 - 256 = -1).
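A minimal sketch of the wraparound, assuming a platform where plain char is signed and 8 bits wide (what an out-of-range assignment actually does is implementation-defined, but this is the typical result):

#include <stdio.h>

int main(void) {
    char c = -720;       /* only the low 8 bits survive: -720 + 3 * 256 == 48 */
    printf("%d\n", c);   /* prints 48 */

    char a = 255;        /* wraps to -1 on a signed 8-bit char */
    printf("%d\n", a);   /* prints -1 */
    return 0;
}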

Related

Can I set TIMx ARR value to initialize at chosen value?

I am trying to use rotary encoders to control the menu in my STM32 project. I am using two rotary encoders, one for each side of the screen (split menu).
When I initialize the ARR registers of both timers that count the encoder pulses, the counters start at 0, and when I move the encoders counterclockwise the registers wrap around to the maximum value of 65535, which messes with how my code calculates detents.
Can you tell me if there is any way to set the TIM->CNT value to a custom value somewhere in the middle between 0 and 65535?
That way I could easily check differences between values and not worry about the jump in numbers.
when I move the encoders counterclockwise the registers wrap around to the maximum value of 65535, which messes with how my code calculates detents.
The counter is a 16-bit value held in a 32-bit unsigned register. To get a proper signed value, cast it to int16_t:
int wheelposition = (int16_t)TIMx->CNT;
After the cast, a value of 65535 (0xFFFF) is sign-extended to 0xFFFFFFFF, which is interpreted as -1 when stored in a 32-bit signed integer. But then you'd still have the problem that it wraps from -32768 to +32767.
If you are interested in the signed difference of two position readings, you can do the subtraction on the unsigned values, and cast the result to int16_t.
uint32_t oldposition, newposition;
int wheelmovement;
oldposition = TIMx->CNT;
/* wait a bit */
newposition = TIMx->CNT;
wheelmovement = (int16_t)(newposition - oldposition);
It will give you the signed difference, with 16 bit overflow taken into account.
is there any way to set the TIM->CNT value to a custom value somewhere in the middle between 0 and 65535?
You can simply assign any value to TIMx->CNT; it will continue counting from there.
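A minimal sketch of that, assuming CMSIS-style register access; the device header, TIM3/TIM4, and the midpoint value are placeholders for whatever your project actually uses:

#include "stm32f4xx.h"   /* assumption: replace with the header for your device */

void encoder_counters_init(void)
{
    /* Start both encoder counters in the middle of the 16-bit range so a few
       counterclockwise steps do not immediately wrap past zero. */
    TIM3->CNT = 32768u;
    TIM4->CNT = 32768u;
}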

General large input test. Hexadecimal arrays in C++11

Almost all of my inputs work, except for some larger numbers in my hexadecimal addition program. The program asks the user to input two numbers and prints their sum. However, a number may only be 10 digits long; if the sum exceeds 10 digits, the program should output "Addition Overflow.".
If I input "1" for the first number and "ffffffffff" (10 digits) for the second, I get "Addition Overflow." (correct).
However, if the first number is "0" and the second is still "ffffffffff", I also get "Addition Overflow.", which is incorrect. The output should still be "ffffffffff".
When you take the final carry and shift everything over in this loop:
if (c != 0) {
    for (i = digits; i > 0; i--)   /* shift each digit one index up to make room */
        answer[i] = answer[i - 1];
    answer[0] = hex[c];            /* the carry becomes the new leading digit */
}
You need to increment digits by one, since the number now has one more digit (the carry in front).
Then you need to change this statement:
if(digits > length - 1)
to this statement:
if(digits > length)
That will fix your problem.
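For reference, here is a small self-contained sketch in C (the original program is C++, and names like add_hex and MAXLEN are made up for illustration) of the same idea: shift for the final carry, grow the digit count, and only then compare it with the limit.

#include <stdio.h>
#include <string.h>

#define MAXLEN 10   /* maximum number of hex digits allowed */

static int hexval(char c) {
    return (c >= '0' && c <= '9') ? c - '0' : c - 'a' + 10;   /* lowercase only */
}

/* Adds two lowercase hex strings of at most MAXLEN digits each. Returns 0 and
   fills out[] on success, or 1 when the sum needs more than MAXLEN digits. */
static int add_hex(const char *a, const char *b, char out[MAXLEN + 2]) {
    const char *hex = "0123456789abcdef";
    int la = (int)strlen(a), lb = (int)strlen(b);
    int digits = la > lb ? la : lb;
    int carry = 0;

    out[digits] = '\0';
    for (int i = 0; i < digits; i++) {
        int da = i < la ? hexval(a[la - 1 - i]) : 0;
        int db = i < lb ? hexval(b[lb - 1 - i]) : 0;
        int s = da + db + carry;
        out[digits - 1 - i] = hex[s % 16];
        carry = s / 16;
    }
    if (carry != 0) {
        memmove(out + 1, out, (size_t)digits + 1);   /* shift right, keep '\0' */
        out[0] = hex[carry];                         /* carry is the new first digit */
        digits++;                                    /* the count grows as well */
    }
    return digits > MAXLEN;                          /* overflow only past the limit */
}

int main(void) {
    char sum[MAXLEN + 2];
    if (add_hex("0", "ffffffffff", sum) == 0)
        printf("%s\n", sum);            /* prints ffffffffff, no false overflow */
    else
        printf("Addition Overflow.\n");
    return 0;
}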

How to display last N characters of a C string?

I'm trying to do some programming homework but I can't figure out how to display the last N characters of a C string. This is my attempt at it so far. I am also supposed to ask for the number of characters and validate it. Any help would be greatly appreciated.
void choice_4(char str[]) {
int characters;
cout << "How many characters from the end of the string do you want to display? ";
cin >> characters;
if (str[characters] != '\0')
cout<<str.Substring(str.length - characters,characters)
}
As usual with homework questions, I won't give a solution but a few hints.
I assume by “validate” you mean checking whether the string is long enough; for example, you cannot show the last 12 characters of a string that is only 7 characters long. Your current attempt of looking at the n-th byte of the string cannot work, however. If the string is shorter than n bytes, you index it out of range and invoke undefined behavior; and when it is exactly n bytes long, which is a perfectly valid request, str[n] is the terminating NUL byte, so your test wrongly rejects it.
What you should do instead is compute the length N of the string and then test whether n ≤ N. You can use the standard library function std::strlen to obtain the length of a NUL-terminated character array, or you can loop over it yourself and count how many bytes there are before the first NUL byte.
A C-style string is just a pointer to a byte in memory, with the implicit contract that the bytes following it, up to the first NUL byte, belong to the string. Therefore, if you add some m ≤ N to the pointer, you get the sub-string starting at the m-th (zero-based) byte.
Therefore, in order to get the sub-string with the last n characters of a string with N characters, how do you determine m?
By the way: A NUL byte is a char with the integer value 0. You can encode it as '\0' (as you did) but 0 works perfectly fine, too.
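As a tiny illustration of the pointer arithmetic described above (deliberately not the homework function itself; the string and the offset are arbitrary):

#include <stdio.h>
#include <string.h>

int main(void) {
    const char *s = "overflow";
    size_t N = strlen(s);    /* N == 8 */
    size_t m = 4;            /* any m <= N is a valid starting offset */
    printf("%s\n", s + m);   /* prints "flow": the suffix starting at byte m */
    return 0;
}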

How to convert a positive signed int32 value to its negative one?

I am trying to write logic that converts a positive int32 value to the corresponding negative one, i.e., such that abs(negativeInt32) == positiveInt32.
I have tried with both:
First:
fmt.Printf("%v\n", int32(^uint32(int32(2) -1)))
This results in an error: prog.go:8: constant 4294967294 overflows int32
Second:
var b int32 = 2
fmt.Printf("%v\n", int32(^uint32(int32(b)-1)))
This results in -2.
How can these produce different results? I think they are equivalent.
play.golang.org
EDIT
Edit: replaced uint32 with int32 in the first snippet.
ANSWERED
For those who come to this problem, I have answered the question myself. :)
The two results are different because the first value is typecast to an unsigned int32 (a uint32).
This occurs here: ^uint32(int32(2) - 1)
Or more simply: uint32(-2)
An int32 can store any integer between -2147483648 and 2147483647.
That's a total of 4294967296 different integer values (2^32, i.e., 32 bits).
An unsigned int32 can store the same number of distinct values but drops the sign (+/-). In other words, a uint32 can store any value from 0 to 4294967295.
But what happens when we typecast a signed int32 with a value of -2 to an unsigned int32, which cannot possibly store -2?
Well, as you have found, we get 4294967294. In a number system where the value one less than 0 wraps around to 4294967295, the result of 0 - 2 is 4294967294.
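The same two's-complement wraparound can be shown outside Go. A small C sketch (it only illustrates the arithmetic, not Go's constant rules):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    int32_t  x = -2;
    uint32_t u = (uint32_t)x;             /* conversion wraps modulo 2^32 */
    printf("%" PRIu32 "\n", u);           /* prints 4294967294, i.e. 2^32 - 2 */

    int32_t y = ~((int32_t)2 - 1);        /* ~(x - 1) == -x in two's complement */
    printf("%" PRId32 "\n", y);           /* prints -2 */
    return 0;
}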
Alternatively, you can simply negate the value:
var z int32 = 5
a := -z
I have since learned why
fmt.Printf("%v\n", int32(^uint32(int32(2) - 1)))
fails at compile time: ^uint32(int32(2)-1) is treated as a constant of type uint32 with the value 4294967294, which exceeds the maximum int32 value of 2147483647, so go build reports a compile error.
The right way to write the first version is:
fmt.Printf("%v\n", ^(int32(2) - 1))
i.e., first compute 2 - 1 = 1 as an int32 and then take its bitwise complement, which in two's complement is exactly -2.
However, according to the "An exercise: The largest unsigned int" section of the Go blog, the same conversion is legal at run time: when the operand is a variable rather than a constant, the conversion simply wraps instead of being rejected. So the code
var b int32 = 2
fmt.Printf("%v\n", int32(^uint32(int32(b)-1)))
is alright.
So, in the end, this all comes down to how constants work in Go. :)

"interval is empty", Lua math.random isn't working for large numbers?

I don't know whether this is a bug in Lua itself or whether I am doing something wrong; I couldn't find anything about it anywhere. I am using Lua for Windows (Lua 5.1.4):
>return math.random(0, 1000000000)
1251258
This returns a random integer between 0 and 1000000000, as expected. This seems to work for all other values. But if I add a single 0:
>return math.random(0, 10000000000)
stdin:1: bad argument #2 to 'random' (interval is empty)
Any number higher than that does the same thing.
I tried to figure out exactly how high a number has to be to cause this and found something even weirder:
>return math.random(0, 2147483647)
-75617745
If the value is 2147483647 then it gives me negative numbers. Any higher than that and it throws an error. Any lower than that and it works fine.
That's 0b1111111111111111111111111111111 in binary, 31 binary digits exactly. I am not sure what that means though.
This unexpected behavior (bug?) is due to how math.random treats its input arguments in Lua 5.1. From lmathlib.c:
case 2: {  /* lower and upper limits */
    int l = luaL_checkint(L, 1);
    int u = luaL_checkint(L, 2);
    luaL_argcheck(L, l<=u, 2, "interval is empty");
    lua_pushnumber(L, floor(r*(u-l+1))+l);  /* int between `l' and `u' */
    break;
}
As you may know, in C a standard int can represent values from -2,147,483,648 to 2,147,483,647. Adding 1 to 2,147,483,647, as happens in your use case, overflows and wraps around to -2,147,483,648. The end result is negative because a positive number gets multiplied by a negative one.
Furthermore, anything above 2,147,483,647 overflows the int conversion performed by luaL_checkint, and the resulting arguments fail the luaL_argcheck with "interval is empty".
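A small C sketch of that overflow (the exact value is computed in 64 bits first, since letting a plain int overflow would itself be undefined behavior; the truncation back to 32 bits shows the wrapped value seen on typical two's-complement platforms):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <limits.h>

int main(void) {
    int l = 0;
    int u = INT_MAX;                             /* 2147483647 */
    int64_t exact = (int64_t)u - l + 1;          /* 2147483648, one too big for int */
    int32_t wrapped = (int32_t)(uint32_t)exact;  /* implementation-defined, typically INT_MIN */
    printf("exact   = %" PRId64 "\n", exact);    /* 2147483648 */
    printf("wrapped = %" PRId32 "\n", wrapped);  /* -2147483648 on common platforms */
    return 0;
}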
There are a few ways to address this problem:
Upgrade to Lua 5.2. That one has since fixed this issue by treating the input arguments as lua_Number instead.
Switch to LuaJIT which does not have this integer overflow issue.
Patch the Lua 5.1 source yourself with the fix and recompile.
Modify your random range so it does not overflow.
If you need a range that is larger than what math.random supports (32-bit signed integers, i.e. up to about 2^31, because math.random works at the C level) but smaller than the range of the Lua number type (2^52, or maybe even 2^53, per What is the maximum value of a number in Lua?), you can combine two random numbers: scale the first one up to the desired range, then add the second one to fill the gaps. For example, say you want a range of 0 to 2^36. The largest math.random can produce is about 2^31, so you could do:
-- 2^36 = 2^31 * 2^5, so scale a 31-bit random number by 2^5
scale = 2^5
baseRand = scale * math.random(0, 2^31)
-- baseRand is now between 0 and 2^36, but only multiples of 2^5 can occur;
-- fill the gaps of size 2^5 with a second random number:
fillGap = math.random(0, 2^5 - 1)
randNum = baseRand + fillGap
This will work as long as the desired range is below the Lua interpreter's maximum for Lua numbers, which is a compile-time parameter; in a stock build it is 2^52, a very large number (although not as large as the largest long integer, 2^63).
Note also that the largest positive N-bit integer is 2^N - 1 (not 2^N), but the technique above can be applied to any range. For instance, with a scale of 10^6 you could use randNum = 10^6 * math.random(0, 10^8) + math.random(0, 10^6 - 1).
