I am running a for loop like so:
for var i: Float = 1.000; i > 0; i -= 0.005 {
println(i)
}
and I have found that after i has decreased past a certain value, instead of decreasing by exactly 0.005 it decreases by ever so slightly less than 0.005, so that when it reaches the 201st iteration, i is not 0 but rather something infinitesimally close to 0, and so the for loop runs once more. The output is as follows:
1.0
0.995
0.99
0.985
...
0.48
0.475001
0.470001
...
0.0100008 // should be 0.01
0.00500081 // should be 0.005
8.12113e-07 // should be 0
My question is, first of all, why is this happening, and second of all, what can I do so that i always decreases by exactly 0.005 and the loop does not run a 201st iteration?
The Swift Floating-Point Number documentation states:
Note
Double has a precision of at least 15 decimal digits, whereas the precision of Float can be as little as 6 decimal digits. The appropriate floating-point type to use depends on the nature and range of values you need to work with in your code. In situations where either type would be appropriate, Double is preferred.
In this case, it looks like the error is on the order of 4.060564999999999e-09 in each subtraction, based on the amount left over after 200 subtractions. Indeed, changing Float to Double increases the precision such that the loop runs until i = 0.00499999999999918 when it should be 0.005.
That is all well and good; however, we still have the problem of constructing a loop that will run until i becomes zero. If the amount that you reduce i by remains constant throughout the loop, one only slightly unfortunate workaround is:
var x: Double = 1
let reduction = 0.005
// count down with an integer index; recompute x from i each pass,
// so no floating-point error accumulates across iterations
for var i = Int(x/reduction); i >= 0; i -= 1, x = Double(i) * reduction {
    println(x)
}
In this case your error won't compound, since we are using an integer to count how many reductions are needed to reach the current x; the error is thus independent of the length of the loop.
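The same idea carries over to any language with IEEE 754 floats. Here is a minimal C++ sketch of the integer-indexed countdown for comparison (the loop variable is an int, and x is recomputed from it each pass, so the rounding error never compounds):

#include <cstdio>

int main()
{
    const double reduction = 0.005;
    // index the steps with an integer; recompute x fresh each iteration
    for (int i = static_cast<int>(1.0 / reduction); i >= 0; --i) {
        double x = i * reduction; // at most one rounding, never accumulated
        std::printf("%g\n", x);
    }
}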
I am writing some data to a bitmap file, and I have this loop to calculate the data. It runs 480,000 times, once for each pixel of an 800 * 600 image, with different arguments (coordinates) and a different return value at each iteration, which is then stored in an array of size 480,000. This array is then used for further calculation of colours.
All these iterations combined take a lot of time, around a minute at runtime in Visual Studio (for different values at each execution). How can I ensure that the time is greatly reduced? It's really stressing me out.
Is it the fault of my machine (i5 9th gen, 8GB RAM)? Visual Studio 2019? Or the algorithm entirely? If it's the algorithm, what can I do to reduce its time?
Here's the loop that runs for each individual iteration:
#include <complex> // for std::complex and abs()
using std::complex;

const int max_iterations = 1000; // has to be 1000 to get decent image quality

int getIterations(double x, double y) // x and y are coordinates
{
    complex<double> z = 0; // These are complex numbers, imagine a pair<double>
    complex<double> c(x, y);
    int iterations = 0;
    while (iterations < max_iterations)
    {
        z = z * z + c;
        if (abs(z) > 2) // abs(z) = square root of the sum of squares of both elements in the pair
        {
            break;
        }
        iterations++;
    }
    return iterations;
}
I don't know exactly how your abs(z) works, but based on your description, it might be slowing down your program by a lot.
Based on your description, you are taking the sum of the squares of both elements of your complex number, then taking a square root of that. Whatever your square-root method is, it probably takes more than just a few instructions to run.
Instead, just compare complex.x * complex.x + complex.y * complex.y > 4; it's definitely faster than taking the square root first and then comparing the result with 2.
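With std::complex this is a one-line change, since std::norm returns the squared magnitude (re*re + im*im) directly. A minimal sketch, assuming the z from the loop above:

#include <complex>

// Escape test without the square root: |z| > 2 is equivalent to |z|^2 > 4,
// and std::norm(z) computes |z|^2 without calling sqrt.
inline bool escaped(const std::complex<double>& z)
{
    return std::norm(z) > 4.0;
}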
Is there a reason the above has to be done at run time?
I mean: the result of this loop seems to depend only on "x" and "y" (which are only coordinates), so you could try to constexpr-ess all these calculations so they are done at compile time, pre-making a map of results...
At least, try to build that map once during run-time initialisation, as in the sketch below.
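For instance, here is a minimal sketch of the run-time-initialisation idea, assuming the 800 * 600 resolution from the question; pixelToX and pixelToY are hypothetical stand-ins for whatever pixel-to-coordinate mapping the program already uses:

#include <vector>

const int width = 800, height = 600;

int getIterations(double x, double y); // as defined in the question
double pixelToX(int px);               // hypothetical coordinate mapping
double pixelToY(int py);               // hypothetical coordinate mapping

// Build the 480,000-entry iteration map once during initialisation;
// afterwards every colour computation is a cheap table lookup.
std::vector<int> buildIterationMap()
{
    std::vector<int> map(width * height);
    for (int py = 0; py < height; ++py)
        for (int px = 0; px < width; ++px)
            map[py * width + px] = getIterations(pixelToX(px), pixelToY(py));
    return map;
}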
I'm trying to reduce the number of calls to std::max in my inner loop, as I'm calling it millions of times (no exaggeration!) and that's making my parallel code run slower than the sequential code. The basic idea (yes, this IS for an assignment) is that the code calculates the temperature at each gridpoint, iteration by iteration, until the maximum change between iterations is no more than a certain, very tiny number (e.g. 0.01). The new temp is the average of the temps in the cells directly above, below and beside it. Each cell ends up with a different value, and I want to return the largest change in any cell for a given chunk of the grid.
I've got the code working, but it's slow because I'm making a large (excessively large) number of calls to std::max in the inner loop, and it's O(n*n). I have used a 1D domain decomposition.
Notes: tdiff doesn't depend on anything but what's in the matrix
the inputs of the reduction function are the result of the lambda function
diff is the greatest change in a single cell in that chunk of the grid over 1 iteration
blocked_range is defined earlier in the code
t_new is the new temperature for that grid point, t_old is the old one
max_diff = parallel_reduce(range, 0.0,
    //lambda function returns local max
    [&](blocked_range<size_t> range, double diff) -> double
    {
        for (size_t j = range.begin(); j < range.end(); j++)
        {
            for (size_t i = 1; i < n_x-1; i++)
            {
                t_new[j*n_x+i] = 0.25*(t_old[j*n_x+i+1] + t_old[j*n_x+i-1] + t_old[(j+1)*n_x+i] + t_old[(j-1)*n_x+i]);
                tdiff = fabs(t_old[j*n_x+i] - t_new[j*n_x+i]);
                diff = std::max(diff, tdiff);
            }
        }
        return diff; //return biggest value of tdiff for that iteration - once per 'i'
    },
    //reduction function - takes in all the max diffs for each iteration, picks the largest
    [&](double a, double b) -> double
    {
        convergence = std::max(a, b);
        return convergence;
    }
);
How can I make my code more efficient? I want to make fewer calls to std::max but need to maintain the correct values. Using gprof I get:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls ms/call ms/call name
61.66 3.47 3.47 3330884 0.00 0.00 double const& std::max<double>(double const&, double const&)
38.03 5.61 2.14 5839 0.37 0.96 _ZZ4mainENKUlN3tbb13blocked_rangeImEEdE_clES1_d
ETA: 61.66% of the time spent executing my code is in the std::max calls, which are made over 3 million times. The reduction function is called for every output of the lambda function, so reducing the number of calls to std::max in the lambda function will also reduce the number of calls to the reduction function.
First of all, I would expect std::max to be inlined into its caller, so it's suspicious that gprof points it out as a separate hotspot. Are you perhaps profiling a debug configuration?
Also, I do not think that std::max is a culprit here. Unless some special checks are enabled in its implementation, I believe it should be equivalent to (diff<tdiff)?tdiff:diff. Since one of the arguments to std::max is the variable that you update, you can try if (tdiff>diff) diff = tdiff; instead, but I doubt it will give you much (and perhaps compilers can do such optimization on their own).
Most likely, std::max is highlighted as the result of sampling skid; i.e. the real hotspot is in computations above std::max, which makes perfect sense, due to both more work and accesses to non-local data (arrays) that might have longer latency, especially if the corresponding locations are not in CPU cache.
Depending on the size of the rows (n_x) in your grid, processing it by rows like you do can be inefficient, cache-wise. It's better to reuse data from t_old as much as possible while those are in cache. Processing by rows, you either don't re-use a point from t_old at all until the next row (for i+1 and i-1 points) or only reuse it once (for two neighbors in the same row). A better approach is to process the grid by rectangular blocks, which helps to re-use data that are hot in cache. With TBB, the way to do that is to use blocked_range2d. It will need minimal changes in your code; basically, changing the range type and two loops inside the lambda: the outer and inner loops should iterate over range.rows() and range.cols(), respectively.
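A hedged sketch of that change, reusing the t_old/t_new/n_x names from the question (the function wrapper and the boundary handling are my own assumptions):

#include <tbb/blocked_range2d.h>
#include <tbb/parallel_reduce.h>
#include <cmath>
#include <algorithm>

// 2D-blocked reduction: iterating over range.rows() and range.cols()
// gives each task a rectangular block, so t_old points are re-used
// while they are still hot in cache.
double maxDiff2d(double* t_new, const double* t_old, size_t n_y, size_t n_x)
{
    return tbb::parallel_reduce(
        tbb::blocked_range2d<size_t>(1, n_y - 1, 1, n_x - 1), 0.0,
        [=](const tbb::blocked_range2d<size_t>& r, double diff) -> double {
            for (size_t j = r.rows().begin(); j < r.rows().end(); j++) {
                for (size_t i = r.cols().begin(); i < r.cols().end(); i++) {
                    t_new[j*n_x+i] = 0.25*(t_old[j*n_x+i+1] + t_old[j*n_x+i-1]
                                         + t_old[(j+1)*n_x+i] + t_old[(j-1)*n_x+i]);
                    double tdiff = std::fabs(t_old[j*n_x+i] - t_new[j*n_x+i]);
                    if (tdiff > diff) diff = tdiff; // branchy form of std::max
                }
            }
            return diff;
        },
        [](double a, double b) { return std::max(a, b); });
}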
I ended up using parallel_for:
parallel_for(range, [&](blocked_range<size_t> range)
{
    double loc_max = 0.0;
    double tdiff;
    for (size_t j = range.begin(); j < range.end(); j++)
    {
        for (size_t i = 1; i < n_x-1; i++)
        {
            t_new[j*n_x+i] = 0.25*(t_old[j*n_x+i+1] + t_old[j*n_x+i-1] + t_old[(j+1)*n_x+i] + t_old[(j-1)*n_x+i]);
            tdiff = fabs(t_old[j*n_x+i] - t_new[j*n_x+i]);
            loc_max = std::max(loc_max, tdiff);
        }
    }
    // fold this chunk's local max into the global max
    // (note: this unguarded update of the shared max_diff is a data race
    // between tasks; guard it with a mutex or an atomic in real code)
    max_diff = std::max(max_diff, loc_max);
}
);
And now my code runs in under 2 seconds for an 8000x8000 grid :-)
With different values in a collection, will this algorithm (pseudocode) ever terminate?
while (curElement != average(allElements))
{
curElement = average(allElements);
nextElement();
}
Note that I'm assuming that we will re-start from the beginning if we're at the end of the array.
Since this is pseudocode, a simple example with 2 elements will reveal that there are cases where the program won't terminate:
x = 0, y = 1;
         x      y
Step 1:  0.5    1
Step 2:  0.5    0.75
Step 3:  0.625  0.75
// and so on
With some math involved, the gap halves at each step: lim (x - y) = lim (1 / 2^n) = 0.
So the numbers converge, but they're never equal.
However, if you actually implement this on a computer, they will eventually turn out equal because of hardware limitations: not every real number can be expressed in a limited number of bits.
It depends.
If your elements hold discrete values, then most likely they will collapse to the same value after a few runs.
If your elements hold limited-precision values (such as floats or doubles), then it will take longer, but still a finite amount of time.
If your elements hold arbitrary precision values, then your algorithm may never finish. (If you count up every piece of an integral and add it to a figure you have on a piece of paper, you need infinite time, an infinitely large piece of paper, and infinite patience with this analogy.)
There is little difference between your code and the following:
var i = 1;
while (i != 0)
i = i / 2;
Will it ever terminate? That really depends on the implementation.
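To make "depends on the implementation" concrete, here is a small C++ sketch: with integer division the loop ends after one step (1 / 2 truncates to 0), while with a double it ends after roughly 1075 halvings, once the value underflows past the smallest subnormal to exactly 0:

#include <cstdio>

int main()
{
    int i = 1, intSteps = 0;
    while (i != 0) { i /= 2; ++intSteps; }      // integer division: 1/2 == 0

    double d = 1.0;
    int dblSteps = 0;
    while (d != 0.0) { d /= 2; ++dblSteps; }    // eventually underflows to 0

    std::printf("int: %d step(s), double: %d steps\n", intSteps, dblSteps);
}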
How can I find the cube root of a number in an efficient way?
I think the Newton-Raphson method can be used, but I don't know how to guess the initial solution programmatically to minimize the number of iterations.
This is a deceptively complex question. Here is a nice survey of some possible approaches.
In view of the "link rot" that overtook the Accepted Answer, I'll give a more self-contained answer focusing on the topic of quickly obtaining an initial guess suitable for superlinear iteration.
The "survey" by metamerist (Wayback link) provided some timing comparisons for various starting value/iteration combinations (both Newton and Halley methods are included). Its references are to works by W. Kahan, "Computing a Real Cube Root", and by K. Turkowski, "Computing the Cube Root".
metamerist updates the DEC-VAX-era bit-fiddling technique of W. Kahan with this snippet, which "assumes 32-bit integers" and relies on the IEEE 754 format for doubles "to generate initial estimates with 5 bits of precision":
inline double cbrt_5d(double d)
{
    const unsigned int B1 = 715094163;
    double t = 0.0;
    // view the doubles as pairs of 32-bit words; on a little-endian
    // machine index 1 holds the sign/exponent word
    unsigned int* pt = (unsigned int*) &t;
    unsigned int* px = (unsigned int*) &d;
    // divide the biased exponent (and top mantissa bits) by 3,
    // then re-bias with the magic constant B1
    pt[1] = px[1]/3 + B1;
    return t;
}
The code by K. Turkowski provides slightly more precision ("approximately 6 bits") by a conventional powers-of-two scaling on float fr, followed by a quadratic approximation to its cube root over interval [0.125,1.0):
/* Compute seed with a quadratic approximation */
fr = (-0.46946116F * fr + 1.072302F) * fr + 0.3812513F; /* 0.5<=fr<1 */
and a subsequent restoration of the exponent of two (adjusted to one-third). The exponent/mantissa extraction and restoration make use of math library calls to frexp and ldexp.
Comparison with other cube root "seed" approximations
To appreciate those cube root approximations we need to compare them with other possible forms. First the criteria for judging: we consider the approximation on the interval [1/8,1], and we use best (minimizing the maximum) relative error.
That is, if f(x) is a proposed approximation to x^{1/3}, we find its relative error:
error_rel = max | f(x)/x^(1/3) - 1 | on [1/8,1]
The simplest approximation would of course be to use a single constant on the interval, and the best relative error in that case is achieved by picking f_0(x) = sqrt(2)/2, the geometric mean of the values at the endpoints. This gives 1.27 bits of relative accuracy, a quick but dirty starting point for a Newton iteration.
A better approximation would be the best first-degree polynomial:
f_1(x) = 0.6042181313*x + 0.4531635984
This gives 4.12 bits of relative accuracy, a big improvement but short of the 5-6 bits of relative accuracy promised by the respective methods of Kahan and Turkowski. But it's in the ballpark and uses only one multiplication (and one addition).
Finally, what if we allow ourselves a division instead of a multiplication? It turns out that with one division and two "additions" we can have the best linear-fractional function:
f_M(x) = 1.4774329094 - 0.8414323527/(x+0.7387320679)
which gives 7.265 bits of relative accuracy.
At a glance this seems like an attractive approach, but an old rule of thumb was to treat the cost of a FP division like three FP multiplications (and to mostly ignore the additions and subtractions). However with current FPU designs this is not realistic. While the relative cost of multiplications to adds/subtracts has come down, in most cases to a factor of two or even equality, the cost of division has not fallen but often gone up to 7-10 times the cost of multiplication. Therefore we must be miserly with our division operations.
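Putting the pieces together, here is a hedged C++ sketch (my own assembly, not Kahan's or Turkowski's exact code): scale the argument into [1/8, 1) with frexp/ldexp, seed with the best first-degree polynomial above (about 4.12 bits), then apply Newton steps, each of which roughly doubles the number of correct bits, so four steps comfortably exceed double precision:

#include <cmath>

double cbrt_newton(double x)
{
    if (x == 0.0 || std::isnan(x) || std::isinf(x)) return x;
    const bool neg = x < 0;
    double a = neg ? -x : x;

    // a = fr * 2^e with fr in [0.5, 1); shift the exponent to a multiple of 3
    int e;
    double fr = std::frexp(a, &e);
    int k = (e >= 0) ? (e + 2) / 3 : e / 3;  // k = ceil(e / 3)
    double m = std::ldexp(fr, e - 3 * k);    // m in [1/8, 1)

    // best first-degree polynomial seed from above (~4.12 bits)
    double y = 0.6042181313 * m + 0.4531635984;

    // Newton's method for y^3 = m: y <- (2*y + m/y^2) / 3
    for (int i = 0; i < 4; ++i)
        y = (2.0 * y + m / (y * y)) / 3.0;

    double r = std::ldexp(y, k);             // cbrt(a) = cbrt(m) * 2^k
    return neg ? -r : r;
}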
static double cubeRoot(double num) {
    if(num == 0) { return 0; } // guard: the update below would divide by zero
    double x = num;
    if(num >= 0) {
        // Newton's method for x^3 = num: x <- (2x^3 + num) / (3x^2)
        for(int i = 0; i < 10 ; i++) {
            x = ((2 * x * x * x) + num ) / (3 * x * x);
        }
    }
    return x;
}
It seems like the optimization question has already been addressed, but I'd like to add an improvement to the cubeRoot() function posted here, for other people stumbling on this page looking for a quick cube root algorithm.
The existing algorithm works well, but outside the range of 0-100 it gives incorrect results.
Here's a revised version that works with numbers between ±1 quadrillion (1e15). If you need to work with larger numbers, just use more iterations.
static double cubeRoot( double num ){
    if( num == 0 ){ return 0; } // guard against division by zero below
    boolean neg = ( num < 0 );
    double a = Math.abs( num ); // iterate on the magnitude, restore the sign at the end
    double x = a;
    for( int i = 0, iterations = 60; i < iterations; i++ ){
        x = ( ( 2 * x * x * x ) + a ) / ( 3 * x * x );
    }
    if( neg ){ return 0 - x; }
    return x;
}
Regarding optimization, I'm guessing the original poster was asking how to predict the minimum number of iterations for an accurate result, given an arbitrary input size. But it seems that for most general cases the gain from optimization isn't worth the added complexity. Even with the function above, 100 iterations take less than 0.2 ms on average consumer hardware. If speed were of utmost importance, I'd consider using pre-computed lookup tables. But this is coming from a desktop developer, not an embedded systems engineer.
I'm looking for a decent, elegant method of calculating this simple logic.
Right now I can't think of one, it's spinning my head.
I am required to do some action only 15% of the time.
I'm used to "50% of the time" where I just mod the milliseconds of the current time and see if it's odd or even, but I don't think that's elegant.
How would I elegantly calculate "15% of the time"? Random number generator maybe?
Pseudo-code or any language are welcome.
Hope this is not subjective, since I'm looking for the "smartest" short-hand method of doing that.
Thanks.
Solution 1 (double)
get a random double between 0 and 1 (whatever language you use, there must be such a function)
do the action only if it is smaller than 0.15
Solution 2 (int)
You can also achieve this approximately by creating a random int and checking whether it is divisible by 6 (roughly 16.7%) or by 7 (roughly 14.3%). UPDATE --> This is not exact, so it is not optimal.
You can produce a random number between 0 and 99, and check if it's less than 15:
if (rnd.Next(100) < 15) ...
You can also reduce the numbers, as 15/100 is the same as 3/20:
if (rnd.Next(20) < 3) ...
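In modern C++ the same two ideas look like this (a small sketch using <random>; std::bernoulli_distribution performs the "true with probability p" test directly):

#include <random>

int main()
{
    std::mt19937 rng{std::random_device{}()};

    // Solution 1: true with probability 0.15
    std::bernoulli_distribution fifteenPercent(0.15);
    bool doIt = fifteenPercent(rng);

    // Solution 2: uniform integer in [0, 99], compare against 15
    std::uniform_int_distribution<int> d100(0, 99);
    bool doItToo = d100(rng) < 15;

    (void)doIt; (void)doItToo; // silence unused-variable warnings
}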
A random number generator would give you the best randomness. Generate a random double between 0 and 1 and test for < 0.15.
Using the time like that isn't truly random, as it's influenced by processing time. If a task takes less than 1 millisecond to run, then the next "random" choice will be the same one.
That said, if you do want to use the millisecond-based method, use milliseconds % 20 < 3.
Just use a PRNG. As always, it's a performance vs. accuracy trade-off. I think rolling your own generator directly off the time is a waste of time (pun intended). You'll probably get biasing effects even worse than those of a run-of-the-mill linear congruential generator.
In Java, I would use nextInt:
myRNG.nextInt(100) < 15
Or (mostly) equivalently:
myRNG.nextInt(20) < 3
There are ways to get a random integer in other languages too (multiple ways actually, depending on how accurate it has to be).
Using modulo arithmetic you can easily do something every Xth run, like so
(modulo 6 will give you roughly 15%, actually 16.7%):
if (microtime() % 6 === 0) do it
Another option:
if (rand(0, 1) < 0.15) do it
boolean array[100] = {true: first 15, false: rest};
shuffle(array);
while (array.size > 0)
{
    element = array.pop_front(); // pop the first element of the array
    if (element == true)
        do_action();
    else
        do_something_else();
}
// redo the whole thing again when no elements are left.
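A minimal C++ sketch of this shuffled-deck idea (the class and method names are my own): exactly 15 of every 100 draws come back true, in random order:

#include <algorithm>
#include <array>
#include <random>

// Exact rather than probabilistic: each pass through the deck
// yields precisely 15 positives out of 100 draws.
class ExactFifteenPercent
{
public:
    ExactFifteenPercent() : rng_(std::random_device{}())
    {
        deck_.fill(false);
        std::fill_n(deck_.begin(), 15, true); // first 15 true, rest false
        reshuffle();
    }
    bool next()
    {
        if (pos_ == deck_.size()) reshuffle(); // redo when no elements are left
        return deck_[pos_++];
    }
private:
    void reshuffle()
    {
        std::shuffle(deck_.begin(), deck_.end(), rng_);
        pos_ = 0;
    }
    std::array<bool, 100> deck_;
    std::size_t pos_ = 0;
    std::mt19937 rng_;
};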
Here's one approach that combines randomness and a guarantee that eventually you get a positive outcome in a predictable range:
Have a target (15 in your case), a counter (initialized to 0), and a flag (initialized to false).
Accept a request.
If the counter is 15, reset the counter and the flag.
If the flag is true, return negative outcome.
Get a random true or false based on one of the methods described in other answers, but use a probability of 1/(15-counter).
Increment counter
If result is true, set flag to true and return a positive outcome. Else return a negative outcome.
Accept next request
This means that the first request has a probability of 1/15 of returning a positive, but by the 15th request, if no positive result has been returned yet, the probability of a positive result is 1/1.
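A hedged C++ sketch of that scheme (my own translation of the steps above): in every window of 15 requests exactly one positive is returned, at a uniformly random position within the window:

#include <random>

// One guaranteed positive per 15-request window, uniformly placed.
class OnePerFifteen
{
public:
    bool accept()
    {
        if (counter_ == 15) { counter_ = 0; flag_ = false; } // reset each window
        if (flag_) { ++counter_; return false; }             // already fired
        // true with probability 1/(15 - counter): 1/15 first, ..., 1/1 last
        std::uniform_int_distribution<int> pick(0, 15 - counter_ - 1);
        ++counter_;
        if (pick(rng_) == 0) { flag_ = true; return true; }
        return false;
    }
private:
    int counter_ = 0;
    bool flag_ = false;
    std::mt19937 rng_{std::random_device{}()};
};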
This quote is from a great article about how to use a random number generator:
Note: Do NOT use
y = rand() % M;
as this focuses on the lower bits of rand(). For linear congruential random number generators, which rand() often is, the lower bytes are much less random than the higher bytes. In fact the lowest bit cycles between 0 and 1. Thus rand() may cycle between even and odd (try it out). Note rand() does not have to be a linear congruential random number generator. It's perfectly permissible for it to be something better which does not have this problem.
and it contains formulas and pseudo-code for
r = [0,1) = {r: 0 <= r < 1} real
x = [0,M) = {x: 0 <= x < M} real
y = [0,M) = {y: 0 <= y < M} integer
z = [1,M] = {z: 1 <= z <= M} integer
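A small C++ sketch of the alternative the article recommends (my reading of it, not a quote from the article): derive the integer y in [0, M) from a real r in [0, 1), so the result depends on rand()'s high bits rather than the low bits that rand() % M isolates:

#include <cstdlib>

// y = floor(r * M), with r = rand() / (RAND_MAX + 1.0) in [0, 1)
int rand_below(int M)
{
    double r = std::rand() / (RAND_MAX + 1.0); // r in [0, 1)
    return static_cast<int>(r * M);            // y in [0, M)
}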