I am trying to generate a random number in a CAPL program (CAPL is similar to C) using timers.
Say I have a timer x and I start it:
/****Timer start****/
on start
{
setTimer(x, 20000); // setting the timer for 20 secs (assuming x is an msTimer, so the unit is ms)
}
Now I need a random delay between 300 ms and 20 s, with a resolution of 500 ms.
CAPL has an inbuilt function to do this, with the prototype:
int random(int x);
Now how can I make sure that I get a random value only with a resolution of 500 ms?
Any suggestions?
How about
y = random(40);
TestWaitForTimeout(300+y*500);
y gets a random value between 0 and 39, corresponding to 0-19.5 seconds with 500 ms resolution. Then you add 300 ms to the total timeout. The resulting timeout will lie between 300 ms and 19.8 s (300 + 39 × 500 ms), with a resolution of 500 ms.
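The arithmetic can be checked with a quick sketch. This is Python rather than CAPL, with randrange(40) standing in for CAPL's random(40) (both are assumed to return an integer in 0..39):

```python
import random

# 10,000 samples of the proposed formula: 300 + random(40) * 500
timeouts = [300 + random.randrange(40) * 500 for _ in range(10_000)]

# All timeouts lie between 300 ms and 300 + 39*500 = 19800 ms,
# and each one is 300 ms plus a multiple of 500 ms.
assert min(timeouts) >= 300
assert max(timeouts) <= 19800
assert all((t - 300) % 500 == 0 for t in timeouts)
```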
I was able to generate random numbers by writing a test function as below.
The random function generates a random number between 0 and n-1.
As far as resolution is concerned, the library function random() doesn't let you vary the resolution.
testfunction Random_No ()
{
dword y;
y = random(20000);
TestWaitForTimeout(y);
}
As far as I know there is no built-in random function in OpenCL (please correct me if that's wrong). Generating a random list on the host and passing it into the kernel will not work for the purpose of my work; it has to be a random generator running on the GPU (in the kernel). I intend to write my own function that generates random numbers in the range from 0 to 1. The code below is what I have run on the CPU, and it seems to work well.
N = 10  # number of samples (any value)
array_of_random_numbers = [0.0] * N
seed = 19073486328125
multiplier = 19073486328125
adder = 0
random_number = seed
modulus = 2**48          # keep this an integer so the modulo stays exact
RAND_MAX = 2**48
for i in range(N):       # range(0, N-1) would skip the last element
    random_number = (multiplier * random_number + adder) % modulus
    array_of_random_numbers[i] = random_number / RAND_MAX
However, I have a hard time migrating the code to a kernel, since I cannot keep random_number in a loop and let it change over iterations.
kernel = """__kernel void get_new_rand(__global float* c)
{
    int thread_id = get_global_id(0);
    ulong random_number = (19073486328125UL * 19073486328125UL + 0UL) % 281474976710656UL;
    c[thread_id] = random_number / 281474976710656.0f;
}"""
Is there a way I can write the random generator in the kernel?
Thank you in advance!
I intend to write my own function that generates random numbers in the range from 0 to 1. The code below is what I have run on the CPU, and it seems to work well. However, I have a hard time migrating the code to a kernel, since I cannot keep random_number in a loop and let it change over iterations.
The simplest approach is to use a linear congruential generator (LCG) function (like you already have) that takes in a seed value and outputs a pseudo-random number, which can be normalized to the range [0, 1). The remaining problem is how to get the seed.
Solution: you can pass a seed value as a global parameter from the host side, and change the seed value on the host for every call of the kernel. This does not have any performance impact. Finally, to get different seeds for the different GPU threads, add the global ID to the seed passed over from the host before calling the LCG function.
This way you don't need to store any array of numbers. You also have full control over the seed value on the host side, and everything remains fully deterministic.
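A host-side sketch of that scheme in Python (the multiplier and modulus are the constants from the question; launch_kernel merely simulates the work-items and is not a real OpenCL call):

```python
MULTIPLIER = 19073486328125
ADDER = 0
MODULUS = 2**48

def lcg01(seed):
    """One LCG step from `seed`, normalized to [0, 1)."""
    state = (MULTIPLIER * seed + ADDER) % MODULUS
    return state / MODULUS

def launch_kernel(host_seed, num_threads):
    # Each simulated work-item adds its global ID to the host seed,
    # so every thread produces a different value.
    return [lcg01(host_seed + gid) for gid in range(num_threads)]

first = launch_kernel(host_seed=12345, num_threads=8)
second = launch_kernel(host_seed=99999, num_threads=8)  # host changes the seed per call

assert all(0.0 <= v < 1.0 for v in first)
assert len(set(first)) == 8   # distinct value per thread
assert first != second        # new host seed -> new values
```

Since the multiplier is odd, the LCG step is a bijection modulo 2^48, so distinct seeds are guaranteed to give distinct outputs within one launch.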
I’m trying to implement a clock/timer. It works as of now, but the only problem is that I need to set the frameRate to 1, which affects the whole program’s frame rate. How do I change the frame rate only for the clock function?
def clock():
    global sec, minutes, hours, col
    sec += 1
    if (sec == 60):
        sec = 0
        minutes += 1
    if (minutes == 60):
        minutes = 0
        hours += 1
    if (hours == 24):
        hours = 0
        minutes = 0
        sec = 0
    textSize(25)
    fill(255, 0, 0)
    text(floor(sec), 185, 110)
    text(floor(minutes), 135, 110)
    text(floor(hours), 85, 110)
    if (sec % 2 == 0):
        col = color(0)
    else:
        col = color(255, 0, 0)
    fill(col)
    textSize(30)
    text(":", 120, 110)
    text(":", 170, 110)
You can't change the frame rate for only one function, because that doesn't make sense: the draw() function of Processing is called in a loop at a defined frame rate (say it's fixed at 60 times per second, even though in reality it can change). When you use the frameRate() function to change this value, you change how fast draw() is called, and since draw() is what calls all your other functions, you can't set it for only a specific function.
However you have other ways to achieve your clock/timer function:
First processing provides several time functions:
millis() returns the number of milliseconds since the program started. You could have your clock() function, called from draw(), convert millis() into a number of seconds, minutes, hours, etc. This way you don't have to keep track of the time yourself, which simplifies your code a lot.
Depending on what you want to do, you can also read your computer's clock with second(), minute(), and the other functions in the "Time & Date" section of the Processing reference.
Secondly, you could use Python's time module, as shown in this SO question; it's roughly the equivalent of the millis() idea, but with native Python functions.
Finally, still depending on your needs, you could have a look at Python's Timer objects to execute your clock() function at a defined interval outside of the draw() loop. While that is entirely possible, it is not straightforward and can be tricky for someone new to programming.
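For example, the millis() route boils down to integer arithmetic like this (plain Python for illustration; inside a Processing sketch you would feed it the built-in millis() value):

```python
def clock_from_millis(ms):
    """Convert a millisecond count into (hours, minutes, seconds), wrapping at 24 h."""
    total_seconds = ms // 1000
    return ((total_seconds // 3600) % 24,   # hours
            (total_seconds // 60) % 60,     # minutes
            total_seconds % 60)             # seconds

assert clock_from_millis(0) == (0, 0, 0)
assert clock_from_millis(61_000) == (0, 1, 1)
assert clock_from_millis(3_661_000) == (1, 1, 1)
assert clock_from_millis(24 * 3_600_000) == (0, 0, 0)  # wraps at midnight
```

Because the time is recomputed from millis() on every call, draw() can run at any frame rate without the clock drifting.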
I am running a for loop like so:
for var i: Float = 1.000; i > 0; i -= 0.005 {
println(i)
}
and I have found that after i has decreased past a certain value, instead of decreasing by exactly 0.005 it decreases by ever so slightly less than 0.005, so that by the 201st iteration i is not 0 but rather something infinitesimally close to 0, and so the loop body runs again. The output is as follows:
1.0
0.995
0.99
0.985
...
0.48
0.475001
0.470001
...
0.0100008 // should be 0.01
0.00500081 // should be 0.005
8.12113e-07 // should be 0
My question is, first, why is this happening, and second, what can I do so that i always decreases by exactly 0.005 and the loop does not run a 201st iteration?
Thanks a lot,
bigelerow
The Swift Floating-Point Number documentation states:
Note
Double has a precision of at least 15 decimal digits, whereas the precision of Float can be as little as 6 decimal digits. The appropriate floating-point type to use depends on the nature and range of values you need to work with in your code. In situations where either type would be appropriate, Double is preferred.
In this case, it looks like the error is on the order of 4.060564999999999e-09 in each subtraction, based on the amount left over after 200 subtractions. Indeed, changing Float to Double reduces the error enough that the loop runs until i = 0.00499999999999918, when it should be 0.005.
That is all well and good, but we still have the problem of constructing a loop that runs until i becomes zero. If the amount you reduce i by remains constant throughout the loop, one only slightly unfortunate workaround is:
let reduction = 0.005
let steps = Int(1.0 / reduction)   // 200
for i in stride(from: steps, through: 0, by: -1) {
    let x = Double(i) * reduction
    print(x)
}
In this case your error won't compound, since we use an integer to index how many reductions are needed to reach the current x; the error is therefore independent of the length of the loop.
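The effect is easy to reproduce outside Swift. A quick Python check (Python floats are 64-bit doubles, so the drift is smaller than with Float but still nonzero):

```python
# Repeated subtraction accumulates rounding error, because 0.005
# has no exact binary floating-point representation.
i = 1.0
for _ in range(200):
    i -= 0.005
# 200 exact subtractions of 0.005 should leave exactly 0, but a tiny
# residue remains, so an exact test like i == 0 would never fire.
assert i != 0.0
assert abs(i) < 1e-12

# Recomputing x from an integer index avoids the compounding:
values = [k * 0.005 for k in range(200, -1, -1)]
assert len(values) == 201   # 1.0 down to 0.0 inclusive, one value per step
assert values[-1] == 0.0    # 0 * 0.005 is exactly zero
```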
I am programming in Java and I have come across a problem I could use some help with. Basically, I need the user to enter how many times they expect a certain event to happen within a certain amount of time. The event also takes a certain amount of time to complete. With all that said, I need to use a random number generator to decide whether or not the event should happen, based on the expected value.
Here's an example. Say the event takes 2 seconds to complete. The user wants 100 seconds total and expects the event to happen 25 times. Right now this is what I have: units is the total units of time, and expectedLanding is how many times they would like the event to take place.
double isLandingProb = units/expectedLanding;
double isLanding = isLandingProb * random.nextDouble();
if(isLanding >= isLandingProb/2){
//do event here
}
This solution isn't working, and I'm having trouble thinking of something that would work.
Try this:
double isLandingProb = someProbability;
double isLanding = random.nextDouble();
if(isLanding <= isLandingProb){
//do event here
}
For example, if your probability is .25 (1 out of 4), and nextDouble returns a random number between 0 and 1, then your nextDouble needs to be less than (or equal to) .25 to achieve a landing.
Given an event that takes x seconds to run, but which you want to run on average once every y seconds, it needs to execute with probability x/y at each possible start. Then the expected number of seconds the event spends running in each y-second span is x, i.e. one event's worth.
java.util.Random random = new java.util.Random();
int totalSeconds = 100;   // example values from the question
int totalTimes = 25;
double eventTime = 2.0;

double secondsPerEvent = 1.0d * totalSeconds / totalTimes;
if (eventTime > secondsPerEvent)
    throw new IllegalArgumentException("Impossible to satisfy");

double eventProbability = eventTime / secondsPerEvent;
if (random.nextDouble() < eventProbability) {
    // do event (note the comparison direction: the event fires
    // with probability eventProbability, not 1 - eventProbability)
}
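A quick simulation (Python, with the example's numbers: 2-second events, 100 seconds, 25 expected occurrences, and the assumption that a start decision is made once per 2-second slot) shows the formula hitting the expected count on average:

```python
import random

random.seed(42)                 # fixed seed for reproducibility

event_time = 2.0                # seconds per event
total_seconds = 100
expected_events = 25

seconds_per_event = total_seconds / expected_events   # 4.0
probability = event_time / seconds_per_event          # 0.5

runs = 2000
total = 0
for _ in range(runs):
    slots = int(total_seconds / event_time)           # 50 decision points per run
    total += sum(random.random() < probability for _ in range(slots))

average = total / runs
assert 24 <= average <= 26      # close to the requested 25 events per run
```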
I am facing an algorithm problem.
We have a task that runs every 10 ms, and during each run an event may or may not happen. Is there any simple algorithm that allows us to keep track of how many times an event has been triggered within the latest, say, 1 second?
The only idea I have is to implement an array and save all the events, but as we are programming embedded systems, there is not enough space.
Thanks in advance.
An array of 13 bytes holds a second's worth of events in 10 ms steps.
Consider it an array of 104 bits, one bit per 10 ms slot (13 bytes = 104 bits, slightly more than the 100 slots in one second).
If the event occurs, mark the bit and advance to the next slot; otherwise just advance to the next bit/byte.
If you want, run-length encode after each second to offload the event bits into another value.
Or treat it as a circular buffer and keep the count available for query.
Or both.
You could reduce the array size to match the space available.
It is not clear whether an event can occur multiple times while your task is running, or whether there are always at least 10 ms between events.
This is more or less what Dtyree and Weeble have suggested, but an example implementation may help (C code for illustration):
#include <stdint.h>
#include <stdbool.h>

#define HISTORY_LENGTH 100 // 1 second when called every 10ms

int rollingcount( bool event )
{
    static uint8_t event_history[(HISTORY_LENGTH+7) / 8] ;
    static int next_history_bit = 0 ;
    static int event_count = 0 ;

    // Get history byte index and bit mask
    int history_index = next_history_bit >> 3 ;            // ">> 3" is the same as "/ 8" but often faster
    uint8_t history_mask = 1 << (next_history_bit & 0x7) ; // "& 0x7" is the same as "% 8" but often faster

    // Get current bit value
    bool history_bit = (event_history[history_index] & history_mask) != 0 ;

    // If the oldest history event is not the same as the new event, adjust the count
    if( history_bit != event )
    {
        if( event )
        {
            // Increment count for 0->1
            event_count++ ;

            // Replace oldest bit with 1
            event_history[history_index] |= history_mask ;
        }
        else
        {
            // Decrement count for 1->0
            event_count-- ;

            // Replace oldest bit with 0
            event_history[history_index] &= ~history_mask ;
        }
    }

    // Advance to the oldest history bit
    next_history_bit++ ;
    if( next_history_bit >= HISTORY_LENGTH ) // Could use "next_history_bit %= HISTORY_LENGTH", but modulo may be expensive on some processors
    {
        next_history_bit = 0 ;
    }

    return event_count ;
}
For a 100-sample history, it requires 13 bytes plus two integers of statically allocated memory. I have used int for generality, but in this case uint8_t counters would suffice. In addition there are three stack variables; again, int is not necessary if you really need to optimise memory use. So in total it is possible to use as little as 15 bytes of static data plus three bytes of stack. The event argument may or may not be passed on the stack, and there is the function-call return address, but that depends on the calling convention of your compiler/processor.
You need some kind of list/queue, but a ring buffer probably gives the best performance.
You need to store 100 counters (one for each 10 ms period during the last second) plus an index to the current one.
Ring buffer solution (in pseudocode):
Create a counter_array of 100 counters, initially filled with 0's:
int[100] counter_array;
current_counter = 0
During each 10 ms cycle, advance the index (wrapping around) and clear the slot it now points to, which held the counts from 100 cycles ago:
current_counter = (current_counter + 1) % 100;
counter_array[current_counter] = 0;
For every event:
counter_array[current_counter]++
To check the number of events during the last second, take the sum of counter_array.
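A runnable version of this ring buffer, in Python for illustration (the class and method names are made up for the sketch):

```python
class EventWindow:
    """Counts events over the last `slots` task periods using a ring buffer."""

    def __init__(self, slots=100):
        self.counts = [0] * slots   # one counter per 10 ms period
        self.current = 0

    def tick(self):
        """Call once per 10 ms cycle: advance and clear the new slot."""
        self.current = (self.current + 1) % len(self.counts)
        self.counts[self.current] = 0

    def record_event(self):
        self.counts[self.current] += 1

    def events_last_second(self):
        return sum(self.counts)

w = EventWindow()
for step in range(250):            # simulate 2.5 seconds of 10 ms cycles
    w.tick()
    w.record_event()               # one event per cycle
assert w.events_last_second() == 100   # only the last second is counted
```

Summing 100 counters on every query is the cost of this version; the bookkeeping in the next answer avoids the scan by maintaining a running count.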
Can you afford an array of 100 booleans? Perhaps as a bit field? As long as you can afford the space cost, you can track the number of events in constant time:
Store:
A counter C, initially 0.
The array of booleans B, of size equal to the number of intervals you want to track, i.e. 100, initially all false.
An index I, initially 0.
Each interval:
Read the boolean at B[I], and decrement C if it's true.
Set the boolean at B[I] to true if the event occurred in this interval, false otherwise.
Increment C if the event occurred in this interval.
Increment I; when I reaches 100, reset it to 0.
That way you at least avoid scanning the whole array every interval.
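A minimal sketch of that bookkeeping in Python (the bit-field packing is left out for clarity, and the names are illustrative):

```python
class BooleanWindow:
    """Tracks how many of the last `n` intervals contained an event,
    maintaining a running count instead of re-scanning the array."""

    def __init__(self, n=100):
        self.flags = [False] * n   # B: one flag per interval
        self.count = 0             # C: running count
        self.index = 0             # I: position of the oldest entry

    def interval(self, event_occurred):
        # Drop the oldest interval from the count, record the new one.
        if self.flags[self.index]:
            self.count -= 1
        self.flags[self.index] = event_occurred
        if event_occurred:
            self.count += 1
        self.index = (self.index + 1) % len(self.flags)
        return self.count

w = BooleanWindow(n=100)
for i in range(300):
    c = w.interval(i % 2 == 0)   # an event every other interval
assert c == 50                   # half of the last 100 intervals had events
```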
EDIT - Okay, so you want to track events over the last 3 minutes (180s, 18000 intervals). Using the above algorithm and cramming the booleans into a bit-field, that requires total storage:
2 byte unsigned integer for C
2 byte unsigned integer for I
2250 byte bit-field for B
That's pretty much unavoidable if you need a precise count of the events in the last 180.0 seconds at all times. I don't think it would be hard to prove that you need all of that information to be able to give an accurate answer at all times. However, if you could live with knowing only the number of events in the last 180 ± 2 seconds, you could instead reduce your time resolution. Here's a detailed example, expanding on my comment below.
The above algorithm generalizes:
Store:
A counter C, initially 0.
The array of counters B, of size equal to the number of intervals you want to track, i.e. 100, initially all 0.
An index I, initially 0.
Each interval:
Read B[I], and decrement C by that amount.
Write the number of events that occurred in this interval into B[I].
Increment C by the number of events that occurred in this interval.
Increment I; when I reaches the length of B, reset it to 0.
If you switch your interval to 2s, then in that time 0-200 events might occur. So each counter in the array could be a one-byte unsigned integer. You would have 90 such intervals over 3 minutes, so your array would need 90 elements = 90 bytes.
If you switch your interval to 150ms, then in that time 0-15 events might occur. If you are pressed for space, you could cram this into a half-byte unsigned integer. You would have 1200 such intervals over 3 minutes, so your array would need 1200 elements = 600 bytes.
Will the following work for your application?
A rolling event counter that increments on every event.
In the routine that runs every 10ms, you compare the current event counter value with the event counter value stored the last time the routine ran.
That tells you how many events occurred during the 10ms window.
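In sketch form (Python for illustration; this yields the per-window count, which you would still feed into something like the ring buffer above if you need a full one-second total):

```python
class RollingCounter:
    """Free-running event counter; the periodic task diffs it
    against the value it saw on the previous run."""

    def __init__(self):
        self.events = 0      # incremented on every event
        self.last_seen = 0

    def on_event(self):
        self.events += 1

    def poll(self):
        """Call from the 10 ms task: events since the previous poll."""
        delta = self.events - self.last_seen
        self.last_seen = self.events
        return delta

rc = RollingCounter()
for _ in range(3):
    rc.on_event()
assert rc.poll() == 3   # three events since the last poll
rc.on_event()
assert rc.poll() == 1   # only one new event in this window
```

On a real target the counter would be a fixed-width unsigned integer; the subtraction still gives the right delta across wrap-around as long as fewer events than the counter's range occur between polls.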