typecast from float to int error - visual-studio

I am using the following code to cast from float to int. My float always has at most one decimal place, so first I multiply it by 10 and then cast it to int:
float temp1 = float.Parse(textBox.Text);
int temp = (int)(temp1*10);
For 25.3 I get 252 after the cast, but for 25.2 and 25.4 I get the correct outputs, 252 and 254 respectively.
Performing the same operation in a slightly different way gives the correct output:
float temp1 = float.Parse(textBox.Text);
temp1 = temp1*10;
int temp = (int)temp1;
Now for 25.3 I get 253. What is the reason for this, given that logically the first method is also correct? I am using Visual Studio 2010.

It's all because of the precision of float and double, and how their values are rounded off to an integer. The intermediate arithmetic may be carried out at higher (double) precision before the result is stored back into a float variable.
Here is your first snippet:
float temp1 = float.Parse(textBox.Text);
int temp = (int)(temp1*10);
which effectively executes as:
float temp1 = float.Parse(textBox.Text);
double xVariable = temp1*10;
int temp = (int)xVariable;
Here is your second snippet, which forces the product back to float before the conversion:
float temp1 = float.Parse(textBox.Text);
float xVariable = temp1*10;
int temp = (int)xVariable;
More information about precision:
http://en.wikipedia.org/wiki/Single-precision_floating-point_format
What range of numbers can be represented in a 16-, 32- and 64-bit IEEE-754 systems?
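The effect described above can be reproduced outside C#. Here is a sketch in Python, using struct to round values through IEEE-754 single precision (the exact intermediate precision a C# runtime uses is implementation-dependent, so this only mirrors the mechanism):

```python
import struct

def to_f32(x):
    """Round a Python float (double) to the nearest IEEE-754 single."""
    return struct.unpack('f', struct.pack('f', x))[0]

temp1 = to_f32(25.3)            # the stored float is 25.299999237060546875
print(int(temp1 * 10))          # product kept in double precision -> 252
print(int(to_f32(temp1 * 10)))  # product rounded back to float    -> 253
```

The double-precision product 252.99999237... truncates to 252, while rounding it back to single precision lands exactly on 253.0.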

Try decimal, which represents 25.3 exactly:
decimal temp1 = decimal.Parse(textBox.Text);
int temp = (int)(temp1*10);
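decimal is a base-10 type, so 25.3 is represented exactly and the multiplication introduces no rounding error. Python's decimal module shows the same behavior, for illustration:

```python
from decimal import Decimal

temp1 = Decimal("25.3")   # exact: base-10 representation, no binary rounding
temp = int(temp1 * 10)    # the product is exactly 253.0
```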

Use this (note that Convert.ToInt32 rounds to the nearest integer rather than truncating):
float temp1 = 25.3f;
int temp = Convert.ToInt32(temp1);

Related

GCC compiler: strange behavior when doing float operations, float value saturating to 65536 where float is 4 bytes

I tried to compute a float accumulation and observed that the value saturated at 65536 and stopped updating. The issue occurs only with the code below.
I tried this with an online GCC compiler and the issue was still the same.
Does this have anything to do with float precision? Is the compiler optimizing away my float precision during the operation? Are there any compiler flags I can add to overcome this issue? Can anyone please guide me on how to solve it?
Attaching the code for reference:
#include <stdio.h>

int main()
{
    float dummy1, dummy2;
    unsigned int i = 0;

    printf("Hello World\n");
    printf("size of float = %zu\n", sizeof(dummy1)); /* sizeof yields size_t, so %zu */
    dummy2 = 0.0;
    dummy1 = 65535.5;
    dummy2 = 60.00 * 0.00005;
    for (i = 0; i < 300; i++)
    {
        dummy1 = dummy1 + dummy2;
        printf("dummy1 = %f %f\n", dummy1, dummy2);
    }
    return 0;
}
(This answer presumes IEEE-754 single- and double-precision binary formats are used for float and double.)
60.00 * 0.00005 is computed with double arithmetic and produces 0.003000000000000000062450045135165055398829281330108642578125. When this is stored in dummy2, it is converted to 0.0030000000260770320892333984375.
In the loop, dummy1 eventually reaches the value 65535.99609375. Then, when dummy1 and dummy2 are added, the result computed with real-number arithmetic would be 65535.9990000000260770320892333984375. This value is not representable in the float format, so it is rounded to the nearest value representable in the float format, and that is the result that the + operator produces.
The nearest representable values in the float format are 65535.99609375 and 65536. Since 65536 is closer to 65535.9990000000260770320892333984375, it is the result.
In the next iteration, 65536 and 0.0030000000260770320892333984375 are added. The real-arithmetic result would be 65536.0030000000260770320892333984375. This is also not representable in float. The nearest representable values are 65536 and 65536.0078125. Again 65536 is closer, so it is the computed result.
From then on, the loop always produces 65536 as a result.
You can get better results either by using double arithmetic or by computing dummy1 afresh in each iteration instead of accumulating rounding errors from iteration to iteration:
for (i = 0; i < 300; ++i)
{
    dummy1 = 65535.5 + i * 60. * .00005;
    printf("%.99g\n", dummy1);
}
Note that because dummy1 is a float, it does not have the precision required to distinguish some successive values of the sequence. For example, output of the above includes:
65535.9921875
65535.99609375
65535.99609375
65536
65536.0078125
65536.0078125
65536.0078125
65536.015625
65536.015625
65536.015625
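The rounding steps described above can be verified directly. Here is a sketch in Python, again rounding through IEEE-754 single precision with struct:

```python
import struct

def to_f32(x):
    """Round a Python float (double) to the nearest IEEE-754 single."""
    return struct.unpack('f', struct.pack('f', x))[0]

dummy2 = to_f32(60.00 * 0.00005)   # 0.0030000000260770320892333984375
below = 65535.99609375             # the largest float value below 65536

# Adding dummy2 here rounds UP to 65536 (65536 is the nearest float) ...
print(to_f32(below + dummy2))      # 65536.0
# ... and from 65536 onward the sum rounds straight back DOWN to 65536,
# because 65536.0078125 (the next float up) is farther away.
print(to_f32(65536.0 + dummy2))    # 65536.0
```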

How to cast uint16_t into a float in C++

I am running C++14 on macOS High Sierra.
I have a uint16_t returned from a method, and the value can range from 100 to around 8000.
I want to convert it to a float, so if it is 289 the float should be 289.0. I have tried different ways to cast the uint16_t, but my float variable always ends up zero.
uint16_t i_value = 289;
Tried this:
float f_value = static_cast<float>(i_value);
And tried this:
float f_value = (float)i_value;
But nothing works.
Question:
How can I cast uint16_t into a float?
It is an implicit conversion (both ways), no cast is required:
uint16_t i_value = 289;
float f = i_value;
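A side note on why the conversion is always exact: float's 24-bit significand can represent every 16-bit integer without rounding. A quick sketch in Python (using struct to round through IEEE-754 single precision):

```python
import struct

def to_f32(x):
    """Round a Python float (double) to the nearest IEEE-754 single."""
    return struct.unpack('f', struct.pack('f', x))[0]

# every uint16_t value survives the round-trip through single precision
for v in (0, 100, 289, 8000, 65535):
    assert to_f32(float(v)) == v
```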

Cannot convert from float to int Processing/Java

I have some code here:
int mutate(float x) {
  if (random(1) < .1) {
    float offset = randomGaussian()/2;
    float newx = x + offset;
    return newx;
  } else {
    return x;
  }
}
This code gives an error on both return statements saying "Type mismatch: Cannot convert from float to int." What is wrong with my code?
Thanks in advance.
You need to change the return type to float in order to return decimal values (if that's what you are interested in):
float mutate(float x) {
  if (random(1) < .1) {
    float offset = randomGaussian()/2;
    float newx = x + offset;
    return newx;
  } else {
    return x;
  }
}
First off, remember what int and float are:
int can only hold whole numbers without decimal places, like 1, 42, and -54321.
float can hold numbers with decimal places, like 0.25, 1.9999, and -543.21.
So, you need to figure out what you meant to return from your function: should it be an int or a float value? If it's supposed to be a float value, then you can simply change the return type of the function to float. If you want it to return an int value, then you'll have to rethink the logic inside your function so it's using int values instead.
Note that you can convert from a float to an int using the int() function. More info can be found in the reference.
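The truncating behavior of that conversion is easy to demonstrate. Here is a small sketch in Python for illustration (Python's int() also truncates toward zero, and the fixed offset below stands in for randomGaussian()/2):

```python
# The offset is a made-up constant standing in for a random Gaussian value.
def mutate_float(x):
    """float version of mutate: the decimal part survives."""
    offset = 0.7
    return x + offset

def mutate_int(x):
    """int version: the explicit conversion discards the fraction."""
    offset = 0.7
    return int(x + offset)   # truncates toward zero, like int() in Processing
```

So mutate_float(5.0) keeps the offset, while mutate_int(5.0) loses it in the truncation.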

implementing PID algorithm in line following robot

I'm working on a small project with the NXT Mindstorms set. My intention was to build a robot that can follow a line smoothly and as fast as possible. After some research I found the PID algorithm, and I was able to understand it and implement it in NXC code. The robot does everything right according to the algorithm, but when the line is interrupted (gaps) the robot loses the line and can't get back to it. When the gap is up to 9 cm it is fine and it can get back, but at 10 cm it just loses the line. I'm using one light sensor. Is there any way I can adjust the PID code to handle this problem?
My Code:
// kp, ki, kd and Tp are also defined
task main()
{
    int error = 0;
    float previous_error = 0;
    float setpoint = 0;
    float actual_position = 0;
    int integral = 0;
    float derivative = 0;
    float speed = 50;
    float lasterror = 0;
    float correction = 0;
    float turn = 0;
    float fahrenA = 0;
    float fahrenC = 0;
    SetSensorLight(IN_2);
    SENSOR_TYPE_LIGHT_ACTIVE;
    while (true)
    {
        actual_position = LIGHTSENSOR;
        error = setpoint - actual_position;
        integral = error + integral;
        derivative = error - previous_error;
        correction = (kp * error) + (ki * integral) + (kd * derivative);
        turn = correction / 100;
        fahrenA = Tp + turn;
        fahrenC = Tp - turn;
        OnFwd(OUT_A, fahrenA);
        OnFwd(OUT_C, fahrenC);
        previous_error = error;
    }
}
By a sine-wave pattern, we mean that after losing the line the robot sweeps back and forth along a widening zigzag path to increase the chances of re-capturing it. You can code the path using simple if-else logic and timer/tachometer readings. (Thanks to @Spektre for the suggestion!)
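As a sketch of that recovery idea (in Python for illustration; the gains and function names here are made up, not NXC API calls): keep the normal PID update while the sensor sees the line, and switch to a widening sweep when it is lost.

```python
# Sketch only: kp/ki/kd and the sweep constants are illustrative placeholders.
def pid_step(error, state, kp=1.0, ki=0.0, kd=0.0):
    """One PID update; state carries (integral, previous_error)."""
    integral, previous_error = state
    integral += error
    derivative = error - previous_error
    correction = kp * error + ki * integral + kd * derivative
    return correction, (integral, error)

def sweep_turn(t, base=10.0, growth=2.0):
    """Widening zigzag: alternate direction, amplitude grows with time t
    since the line was lost, so the robot covers ever larger gaps."""
    amplitude = base + growth * t
    direction = 1 if int(t) % 2 == 0 else -1
    return direction * amplitude
```

In the main loop you would call pid_step while the light reading indicates the line, and feed sweep_turn to the motors (instead of the PID correction) once the reading has stayed off-line longer than some threshold.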

PyCUDA - passing a matrix by reference from python to C++ CUDA code

I have to write a PyCUDA function that gets two matrices, Nx3 and Mx3, and returns an NxM matrix, but I can't figure out how to pass a matrix by reference without knowing its number of columns.
My code basically is something like that:
#kernel declaration
mod = SourceModule("""
__global__ void distance(int N, int M, float d1[][3], float d2[][3], float res[][M])
{
int i = threadIdx.x;
int j = threadIdx.y;
float x, y, z;
x = d2[j][0]-d1[i][0];
y = d2[j][1]-d1[i][1];
z = d2[j][2]-d1[i][2];
res[i][j] = x*x + y*y + z*z;
}
""")
#load data
data1 = numpy.loadtxt("data1.txt").astype(numpy.float32) # Nx3 matrix
data2 = numpy.loadtxt("data2.txt").astype(numpy.float32) # Mx3 matrix
N=data1.shape[0]
M=data2.shape[0]
res = numpy.zeros([N,M]).astype(numpy.float32) # NxM matrix
#invoke kernel
dist_gpu = mod.get_function("distance")
dist_gpu(cuda.In(numpy.int32(N)), cuda.In(numpy.int32(M)), cuda.In(data1), cuda.In(data2), cuda.Out(res), block=(N,M,1))
#save data
numpy.savetxt("results.txt", res)
Compiling this I receive an error:
kernel.cu(3): error: a parameter is not allowed
that is, I cannot use M as the number of columns of res[][] in the declaration of the function. Nor can I leave the number of columns undeclared...
I need a matrix NxM as an output, but I can't figure out how to do this. Can you help me?
You should use flat (pitched) linear memory access inside the kernel. That is how ndarray and gpuarray store data internally, and PyCUDA will pass a pointer to the data in GPU memory allocated for a gpuarray when it is supplied as an argument to a PyCUDA kernel. So (if I understand what you are trying to do) your kernel should be written as something like:
__device__ unsigned int idx2d(int i, int j, int lda)
{
return j + i*lda;
}
__global__ void distance(int N, int M, float *d1, float *d2, float *res)
{
int i = threadIdx.x + blockDim.x * blockIdx.x;
int j = threadIdx.y + blockDim.y * blockIdx.y;
float x, y, z;
x = d2[idx2d(j,0,3)]-d1[idx2d(i,0,3)];
y = d2[idx2d(j,1,3)]-d1[idx2d(i,1,3)];
z = d2[idx2d(j,2,3)]-d1[idx2d(i,2,3)];
res[idx2d(i,j,M)] = x*x + y*y + z*z; // row stride of res is its column count M
}
Here I have assumed the numpy default row major ordering in defining the idx2d helper function. There are still problems with the Python side of the code you posted, but I guess you know that already.
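For illustration, the same flat row-major indexing scheme can be exercised on the CPU in plain Python (toy data; note that a row-major array's stride is its column count):

```python
# CPU sketch of the kernel's flattened row-major indexing.
def idx2d(i, j, lda):
    return j + i * lda

d1 = [[0.0, 0.0, 0.0], [1.0, 2.0, 2.0]]   # N x 3
d2 = [[3.0, 0.0, 0.0], [0.0, 2.0, 0.0]]   # M x 3
N, M = len(d1), len(d2)
flat1 = [v for row in d1 for v in row]    # row-major, like an ndarray buffer
flat2 = [v for row in d2 for v in row]

res = [0.0] * (N * M)                     # flattened N x M result
for i in range(N):
    for j in range(M):
        x = flat2[idx2d(j, 0, 3)] - flat1[idx2d(i, 0, 3)]
        y = flat2[idx2d(j, 1, 3)] - flat1[idx2d(i, 1, 3)]
        z = flat2[idx2d(j, 2, 3)] - flat1[idx2d(i, 2, 3)]
        res[idx2d(i, j, M)] = x*x + y*y + z*z
```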
EDIT: Here is a complete working repro case based on the code posted in your question. Note that it only uses a single block (like the original), so be mindful of block and grid dimensions when trying to run it on anything other than trivially small cases.
import numpy as np
from pycuda import compiler, driver
from pycuda import autoinit
#kernel declaration
mod = compiler.SourceModule("""
__device__ unsigned int idx2d(int i, int j, int lda)
{
return j + i*lda;
}
__global__ void distance(int N, int M, float *d1, float *d2, float *res)
{
int i = threadIdx.x + blockDim.x * blockIdx.x;
int j = threadIdx.y + blockDim.y * blockIdx.y;
float x, y, z;
x = d2[idx2d(j,0,3)]-d1[idx2d(i,0,3)];
y = d2[idx2d(j,1,3)]-d1[idx2d(i,1,3)];
z = d2[idx2d(j,2,3)]-d1[idx2d(i,2,3)];
res[idx2d(i,j,M)] = x*x + y*y + z*z; // row stride of res is its column count M
}
""")
#make data
data1 = np.random.uniform(size=18).astype(np.float32).reshape(-1,3)
data2 = np.random.uniform(size=12).astype(np.float32).reshape(-1,3)
N=data1.shape[0]
M=data2.shape[0]
res = np.zeros([N,M]).astype(np.float32) # NxM matrix
#invoke kernel
dist_gpu = mod.get_function("distance")
dist_gpu(np.int32(N), np.int32(M), driver.In(data1), driver.In(data2), \
driver.Out(res), block=(N,M,1), grid=(1,1))
print(res)
