How to change frameRate for a particular function (processing) - processing

I'm trying to implement a clock/timer. It works as of now, but the only problem is that I need to set the frameRate to 1, and that affects the whole program's frame rate. How do I change the frame rate only for the clock function?
def clock():
    global sec, minutes, hours, col
    sec += 1
    if(sec == 60):
        sec = 0
        minutes += 1
    if(minutes == 60):
        minutes = 0
        hours += 1
    if(hours == 24):
        hours = 0
        minutes = 0
        sec = 0
    textSize(25)
    fill(255, 0, 0)
    text(floor(sec), 185, 110)
    text(floor(minutes), 135, 110)
    text(floor(hours), 85, 110)
    if(sec % 2 == 0):
        col = color(0)
    else:
        col = color(255, 0, 0)
    fill(col)
    textSize(30)
    text(":", 120, 110)
    text(":", 170, 110)

You can't change the frame rate for only one function, because that doesn't really make sense: Processing's draw() function is called in a loop at a defined frame rate (let's say it's fixed at 60 times per second, even though in reality it can vary). When you use the frameRate() function to change this value, you change how fast draw() is called, and since draw() is the one calling all your other functions, you can't set a frame rate for just one specific function.
However, you have other ways to achieve your clock/timer:
First, Processing provides several time functions:
millis() returns the number of milliseconds since the program started. You could have your clock() function, called by draw(), convert millis() into a number of seconds, minutes, hours, etc. This way you don't have to keep track of the time yourself, which will simplify your code a lot.
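As a rough illustration, here is a minimal Python-mode sketch of that idea (the text positions are taken from your code; total_sec is just a name used for this example):

def draw():
    background(200)
    clock()

def clock():
    total_sec = millis() // 1000           # whole seconds since the sketch started
    sec = total_sec % 60
    minutes = (total_sec // 60) % 60
    hours = (total_sec // 3600) % 24
    textSize(25)
    fill(255, 0, 0)
    text(str(sec), 185, 110)
    text(str(minutes), 135, 110)
    text(str(hours), 85, 110)

The blinking-colon logic from your code would work unchanged, since sec is still a plain integer.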
Depending on what you want to do, you can also access your computer's clock with second(), minute() and all the other functions in the "Time & Date" section of the reference here.
Secondly, you could use Python's time module, as shown in this SO question; it's roughly the equivalent of the millis() idea but with a native Python function.
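A small sketch of that variant (start_time is a name introduced here just for illustration):

import time

start_time = time.time()

def elapsed_hms():
    # whole seconds since the program started, split into hours/minutes/seconds
    elapsed = int(time.time() - start_time)
    return (elapsed // 3600) % 24, (elapsed // 60) % 60, elapsed % 60

Your clock() function could then call elapsed_hms() and only do the drawing.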
Finally, still depending on your needs, you may want to have a look at Python's Timer objects to execute your clock() function at a defined interval outside of the draw() loop; while that is completely possible, it is not straightforward and can be tricky for someone new to programming.
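For completeness, a sketch of the Timer idea, assuming the threading module is available in the mode you are using; the timer only updates the counters, and the drawing should stay in draw() since the timer runs on a separate thread:

from threading import Timer

sec = 0

def tick():
    global sec
    sec += 1                  # update the clock state once per second
    Timer(1.0, tick).start()  # re-arm the timer so tick() keeps firing

Timer(1.0, tick).start()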

Related

How to compute blot exposure in backgammon efficiently

I am trying to implement an algorithm for backgammon similar to td-gammon as described here.
As described in the paper, the initial version of td-gammon used only the raw board encoding in the feature space which created a good playing agent, but to get a world-class agent you need to add some pre-computed features associated with good play. One of the most important features turns out to be the blot exposure.
Blot exposure is defined here as:
For a given blot, the number of rolls out of 36 which would allow the opponent to hit the blot. The total blot exposure is the number of rolls out of 36 which would allow the opponent to hit any blot. Blot exposure depends on: (a) the locations of all enemy men in front of the blot; (b) the number and location of blocking points between the blot and the enemy men and (c) the number of enemy men on the bar, and the rolls which allow them to re-enter the board, since men on the bar must re-enter before blots can be hit.
I have tried various approaches to compute this feature efficiently but my computation is still too slow and I am not sure how to speed it up.
Keep in mind that the td-gammon approach evaluates every possible board position for a given dice roll, so each turn, for every player's dice roll, you would need to calculate this feature for every possible board position.
Some rough numbers: assuming there are approximately 30 board positions per turn and an average game lasts 50 turns, we get that running 1,000,000 game simulations takes (x * 30 * 50 * 1,000,000) / (1000 * 60 * 60 * 24) days, where x is the number of milliseconds to compute the feature. Putting x = 0.7 we get approximately 12 days to simulate 1,000,000 games.
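For reference, a quick sanity check of that figure in Python (x is the assumed per-evaluation cost in milliseconds):

x = 0.7
evaluations = 30 * 50 * 1000000            # positions per turn * turns per game * games
days = x * evaluations / (1000.0 * 60 * 60 * 24)
print(days)                                # roughly 12.15 days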
I don't really know if that's reasonable timing but I feel there must be a significantly faster approach.
So here's what I've tried:
Approach 1 (By dice roll)
For every one of the 21 possible dice rolls, recursively check to see whether a hit occurs. Here's the main workhorse for this procedure:
private bool HitBlot(int[] dieValues, Checker.Color checkerColor, ref int depth)
{
    Moves legalMovesOfDie = new Moves();
    if (depth < dieValues.Length)
    {
        legalMovesOfDie = LegalMovesOfDie(dieValues[depth], checkerColor);
    }
    if (depth == dieValues.Length || legalMovesOfDie.Count == 0)
    {
        return false;
    }
    bool hitBlot = false;
    foreach (Move m in legalMovesOfDie.List)
    {
        if (m.HitChecker == true)
        {
            return true;
        }
        board.ApplyMove(m);
        depth++;
        hitBlot = HitBlot(dieValues, checkerColor, ref depth);
        board.UnapplyMove(m);
        depth--;
        if (hitBlot == true)
        {
            break;
        }
    }
    return hitBlot;
}
What this function does is take as input an array of dice values (e.g. if the player rolls 1,1 the array would be [1,1,1,1]). The function then recursively checks to see if there is a hit and, if so, exits with true. The function LegalMovesOfDie computes the legal moves for that particular die value.
Approach 2 (By blot)
With this approach I first find all the blots and then, for each blot, I loop through every possible dice value and see if a hit occurs. The function is optimized so that once a dice value registers a hit I don't use it again for the next blot. It is also optimized to only consider moves that are in front of the blot. My code:
public int BlotExposure2(Checker.Color checkerColor)
{
    if (DegreeOfContact() == 0 || CountBlots(checkerColor) == 0)
    {
        return 0;
    }
    List<Dice> unusedDice = Dice.GetAllDice();
    List<int> blotPositions = BlotPositions(checkerColor);
    int count = 0;
    for (int i = 0; i < blotPositions.Count; i++)
    {
        int blotPosition = blotPositions[i];
        for (int j = unusedDice.Count - 1; j >= 0; j--)
        {
            Dice dice = unusedDice[j];
            Transitions transitions = new Transitions(this, dice);
            bool hitBlot = transitions.HitBlot2(checkerColor, blotPosition);
            if (hitBlot == true)
            {
                unusedDice.Remove(dice);
                if (dice.ValuesEqual())
                {
                    count = count + 1;
                }
                else
                {
                    count = count + 2;
                }
            }
        }
    }
    return count;
}
The method transitions.HitBlot2 takes a blotPosition parameter, which ensures that the only moves considered are those in front of the blot.
Both of these implementations were very slow and when I used a profiler I discovered that the recursion was the cause, so I then tried refactoring these as follows:
To use for loops instead of recursion (ugly code but it's much faster)
To use parallel.foreach so that instead of checking 1 dice value at a time I check these in parallel.
Here are the average timing results of my runs for 50,000 computations of the feature (note: the timings for each approach were done on the same data):
Approach 1 using recursion: 2.28 ms per computation
Approach 2 using recursion: 1.1 ms per computation
Approach 1 using for loops: 1.02 ms per computation
Approach 2 using for loops: 0.57 ms per computation
Approach 1 using parallel.foreach: 0.75 ms per computation
Approach 2 using parallel.foreach: 0.75 ms per computation
I've found the timings to be quite volatile (Maybe dependent on the random initialization of the neural network weights) but around 0.7 ms seems achievable which if you recall leads to 12 days of training for 1,000,000 games.
My questions are: Does anyone know if this is reasonable? Is there a faster algorithm I am not aware of that can reduce training?
One last piece of info: I'm running on a fairly new machine, an Intel Core(TM) i7-5500U CPU @ 2.40 GHz.
If any more info is required, please let me know and I will provide it.
Thanks,
Ofir
Yes, calculating these features makes for really hairy code. Look at the GNU Backgammon code: find eval.c and look at lines 1008 to 1267. Yes, it's 260 lines of code. That code calculates the number of rolls that hit at least one checker, and also the number of rolls that hit at least two checkers. As you can see, the code is hairy.
If you find a better way to calculate this, please post your results. To improve it, I think you have to look at the board representation. Can you represent the board in a different way that makes this calculation faster?

Swift Math Operations on small Float Values

I am running a for loop like so:
for var i: Float = 1.000; i > 0; i -= 0.005 {
    println(i)
}
and I have found that after i has decreased past a certain value, instead of decreasing by exactly 0.005 it decreases by ever so slightly less than 0.005, so that when it reaches the 201st iteration, i is not 0 but rather something infinitesimally close to 0, and so the for loop runs an extra time. The output is as follows:
1.0
0.995
0.99
0.985
...
0.48
0.475001
0.470001
...
0.0100008 // should be 0.01
0.00500081 // should 0.005
8.12113e-07 // should be 0
My question is, first of all, why is this happening, and second of all, what can I do so that i always decreases by exactly 0.005, so that the loop does not run on the 201st iteration?
Thanks a lot,
bigelerow
The Swift Floating-Point Number documentation states:
Note
Double has a precision of at least 15 decimal digits, whereas the precision of Float can be as little as 6 decimal digits. The appropriate floating-point type to use depends on the nature and range of values you need to work with in your code. In situations where either type would be appropriate, Double is preferred.
In this case, it looks like the error is on the order of 4.060564999999999e-09 in each subtraction, based on the amount left over after 200 subtractions. Indeed, changing Float to Double reduces the error such that the loop runs until i = 0.00499999999999918 when it should be 0.005.
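You can reproduce the same drift outside Swift; for instance, a quick Python sketch using numpy's float32 to emulate a 32-bit Float (Python's own float is 64-bit), just to show the residue is an artifact of binary floating point rather than of Swift itself:

import numpy as np

i = np.float32(1.0)
step = np.float32(0.005)
values = []
while i > 0:
    values.append(float(i))
    i -= step

print(len(values))   # one more pass than the intended 200
print(values[-1])    # a tiny positive residue rather than exactly 0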
That is all well and good, however we still have the problem of constructing a loop that will run until i becomes zero. If the amount you reduce i by remains constant throughout the loop, one only slightly unfortunate workaround is:
var x: Double = 1
let reduction = 0.005
for var i = Int(x/reduction); i >= 0; i -= 1, x = Double(i) * reduction {
    println(x)
}
In this case your error won't compound, since we are using an integer to index how many reductions are needed to reach the current x; the result is thus independent of the length of the loop.

Minimising number of calls to std::max in nested loop

I'm trying to reduce the number of calls to std::max in my inner loop, as I'm calling it millions of times (no exaggeration!) and that's making my parallel code run slower than the sequential code. The basic idea (yes, this IS for an assignment) is that the code calculates the temperature at a certain gridpoint, iteration by iteration, until the maximum change is no more than a certain, very tiny number (e.g. 0.01). The new temperature is the average of the temperatures in the cells directly above, below and beside it. Each cell ends up with a different value, and I want to return the largest change in any cell for a given chunk of the grid.
I've got the code working, but it's slow because I'm making a large (excessively so) number of calls to std::max in the inner loop, and it's O(n*n). I have used a 1D domain decomposition.
Notes: tdiff doesn't depend on anything but what's in the matrix
the inputs of the reduction function are the result of the lambda function
diff is the greatest change in a single cell in that chunk of the grid over 1 iteration
blocked range is defined earlier in the code
t_new is new temperature for that grid point, t_old is the old one
max_diff = parallel_reduce(range, 0.0,
    //lambda function returns local max
    [&](blocked_range<size_t> range, double diff) -> double
    {
        for (size_t j = range.begin(); j < range.end(); j++)
        {
            for (size_t i = 1; i < n_x-1; i++)
            {
                t_new[j*n_x+i] = 0.25*(t_old[j*n_x+i+1] + t_old[j*n_x+i-1] + t_old[(j+1)*n_x+i] + t_old[(j-1)*n_x+i]);
                tdiff = fabs(t_old[j*n_x+i] - t_new[j*n_x+i]);
                diff = std::max(diff, tdiff);
            }
        }
        return diff; //return biggest value of tdiff for that iteration - once per 'i'
    },
    //reduction function - takes in all the max diffs for each iteration, picks the largest
    [&](double a, double b) -> double
    {
        convergence = std::max(a, b);
        return convergence;
    }
);
How can I make my code more efficient? I want to make fewer calls to std::max but need to maintain the correct values. Using gprof I get:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls ms/call ms/call name
61.66 3.47 3.47 3330884 0.00 0.00 double const& std::max<double>(double const&, double const&)
38.03 5.61 2.14 5839 0.37 0.96 _ZZ4mainENKUlN3tbb13blocked_rangeImEEdE_clES1_d
ETA: 61.66% of the time spent executing my code is on the std::max calls, which are made over 3 million times. The reduce function is called for every output of the lambda function, so reducing the number of calls to std::max in the lambda function will also reduce the number of calls to the reduce function.
First of all, I would expect std::max to be inlined into its caller, so it's suspicious that gprof points it out as a separate hotspot. Are you perhaps profiling a debug configuration?
Also, I do not think that std::max is a culprit here. Unless some special checks are enabled in its implementation, I believe it should be equivalent to (diff<tdiff)?tdiff:diff. Since one of the arguments to std::max is the variable that you update, you can try if (tdiff>diff) diff = tdiff; instead, but I doubt it will give you much (and perhaps compilers can do such optimization on their own).
Most likely, std::max is highlighted as the result of sampling skid; i.e. the real hotspot is in computations above std::max, which makes perfect sense, due to both more work and accesses to non-local data (arrays) that might have longer latency, especially if the corresponding locations are not in CPU cache.
Depending on the size of the rows (n_x) in your grid, processing it by rows like you do can be inefficient, cache-wise. It's better to reuse data from t_old as much as possible while those are in cache. Processing by rows, you either don't re-use a point from t_old at all until the next row (for i+1 and i-1 points) or only reuse it once (for two neighbors in the same row). A better approach is to process the grid by rectangular blocks, which helps to re-use data that are hot in cache. With TBB, the way to do that is to use blocked_range2d. It will need minimal changes in your code; basically, changing the range type and two loops inside the lambda: the outer and inner loops should iterate over range.rows() and range.cols(), respectively.
I ended up using parallel_for:
parallel_for(range, [&](blocked_range<size_t> range)
{
    double loc_max = 0.0;
    double tdiff;
    for (size_t j = range.begin(); j < range.end(); j++)
    {
        for (size_t i = 1; i < n_x-1; i++)
        {
            t_new[j*n_x+i] = 0.25*(t_old[j*n_x+i+1] + t_old[j*n_x+i-1] + t_old[(j+1)*n_x+i] + t_old[(j-1)*n_x+i]);
            tdiff = fabs(t_old[j*n_x+i] - t_new[j*n_x+i]);
            loc_max = std::max(loc_max, tdiff);
        }
    }
    //reduction function - takes in all the max diffs for each iteration, picks the largest
    {
        max_diff = std::max(max_diff, loc_max);
    }
});
And now my code runs in under 2 seconds for an 8000x8000 grid :-)

Generating a random number using a timer in capl

I was trying to generate a random number in a CAPL program (similar to the C language) using timers.
Say I have a timer X and I start it
/****Timer start****/
on start
{
    settimer(x, 20000); // setting the timer for 20 secs
}
Now I need a random number only between 300 ms and 20 secs, with a resolution of 500 ms.
CAPL has an inbuilt function called random() to do this.
I tried something like:
int random(int x);
Now how can I make sure that I get a random value only with resolution of 500ms?
Any suggestions?
How about
y = random(40);
TestWaitForTimeout(300+y*500);
y gets a random value between 0 and 39, corresponding to 0-19.5 seconds with 500 ms resolution. Then you add 300 ms to the total timeout. The resulting timeout will be between 300 ms and 19.8 s (just under 20 s), with a resolution of 500 ms.
I was able to generate random numbers by writing a test function as below.
The random function generates a random number between 0 and n-1.
As far as resolution is concerned, the library function random() doesn't allow you to vary the resolution.
testfunction Random_No ()
{
    dword y;
    y = random(20000);
    TestWaitForTimeout(y);
}

Generating random numbers based on an expected value

I am programming in Java and I have come across a problem I could use some help with. Basically, I need the user to enter how many times they expect a certain event to happen in a certain amount of time. The event also takes a certain amount of time to complete. With all that said, I need to use a random number generator to decide whether or not the event should happen, based on the expected value.
Here's an example. Say the event takes 2 seconds to complete. The user says they want 100 seconds total and they expect the event to happen 25 times. Right now this is what I have; units is the total number of time units and expectedLanding is how many times they would like the event to take place.
double isLandingProb = units/expectedLanding;
double isLanding = isLandingProb * random.nextDouble();
if(isLanding >= isLandingProb/2){
    //do event here
}
This solution isn't working, and I'm having trouble thinking of something that would work.
Try this:
double isLandingProb = someProbability;
double isLanding = random.nextDouble();
if(isLanding <= isLandingProb){
    //do event here
}
For example, if your probability is .25 (1 out of 4), and nextDouble returns a random number between 0 and 1, then your nextDouble needs to be less than (or equal to) .25 to achieve a landing.
Given an event that takes x seconds to run, but which you want to run on average once every y seconds, it needs to execute with probability x/y at each opportunity. Then the expected number of seconds the event spends running over each y-second window is x, i.e. one event.
int totalSeconds;
int totalTimes;
double eventTime;
double secondsPerEvent = 1.0d * totalSeconds / totalTimes;
if( eventTime > secondsPerEvent ) throw new Exception("Impossible to satisfy");
double eventProbability = eventTime / secondsPerEvent;
if( random.nextDouble() < eventProbability )   // fires with probability eventProbability
    // do event
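As a rough check of the idea (not part of the answer's code), here is a small Python simulation using the question's numbers, under the assumption that there is one opportunity to fire per event-length slot:

import random

total_seconds = 100.0
expected_events = 25
event_time = 2.0

seconds_per_event = total_seconds / expected_events   # 4.0
p = event_time / seconds_per_event                    # 0.5

trials = 10000
total = 0
for _ in range(trials):
    slots = int(total_seconds / event_time)           # 50 decision points per run
    total += sum(1 for _ in range(slots) if random.random() < p)

print(total / float(trials))                          # converges to about 25 events per run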

Resources