How to test if a time range has x number of laps

I'm trying to solve a 'decaying' puzzle that goes somewhat like this:
given that A is 100 at DateTime.new(2012,5,10,0,0,0) and decays by 0.5 every 12 seconds, has it decayed by exactly 20 by DateTime.new(2012,5,10,0,8,0)?
It so happens that the answer to that question is - well, true :)
But what about
A being 1304.5673,
the decay being 0.00000197 every 1.2 msec,
and the end time being not one but 2000 DateTime.new's?
I've tried this:
fd = 3.minutes.ago.to_datetime
td = Time.now
material = 1304.5673
decay = 0.00000197
step = 0.00012.seconds
fd.step(td, step) { |n| material -= decay }   # one subtraction per step
puts material
and the processing time is acceptable - but if I step any further back in time (say 10.hours or even 2.hours), my CPU cooler starts building up momentum, as if it were about to propel the entire Mac into orbit :(

I've toiled with this problem for quite a while - even though the timespan from question to answer on SO suggests otherwise <:)
(and the answer, to me, neatly demonstrates why Ruby is such a wonderful language!)
# recap the variables from the question
total_decay = ((td.to_time - fd.to_time).divmod(step))[0] * decay   # whole steps elapsed, times decay per step
puts "new material: #{material - total_decay}"
The results will probably not pass scientific scrutiny, but I'm OK with that (for now) ;)

Related

Commence key press collection contingent on sound duration

I am implementing an experiment in PsychoPy: a same-different discrimination task comparing two sounds of variable duration (sound_1, sound_2), played in succession with an interval of 0.5 s in between. I have managed to start sound_1 at 0.0 and sound_2 at 0.5 s after the end of sound_1 using "$sound_1.getDuration() + 0.5". However, I want to collect a key-press response with the RT measured from the end of sound_2 onward. I tried the start time "$sound_1.getDuration() + 0.5 + sound_2.getDuration()", but the keypress is already active during the presentation of sound_2, and the RTs appear too long compared with the RTs usually observed for this kind of task. Does anyone know how to obtain an accurate onset for measuring RTs here?
Btw my question is similar to, but not fully answered by, the following thread:
variable stimuli duration but two kinds of fixed ISI in PsychoPy

polyfit on GPUArray is extremely slow [duplicate]

function w = oja(X, varargin)
    % get the dimensionality
    [m, n] = size(X);
    % random initial weights
    w = randn(m, 1);
    options = struct( ...
        'rate', .00005, ...
        'niter', 5000, ...
        'delta', .0001);
    options = getopt(options, varargin);
    success = 0;
    % run through all input samples
    for iter = 1:options.niter
        y = w'*X;
        for ii = 1:n
            % y(ii) is a scalar, not a vector
            w = w + options.rate*(y(ii)*X(:,ii) - y(ii)^2*w);
        end
    end
    if (any(~isfinite(w)))
        warning('Lost convergence; lower learning rate?');
    end
end
size(X) = [400 153600]
This code implements Oja's rule and runs slowly. I am not able to vectorize it any further. To make it run faster I wanted to do the computations on the GPU, so I changed
X = gpuArray(X)
but the code ran slower instead. The computation seems to be GPU-compatible. Please point out my mistake.
Profiler output (full details): https://drive.google.com/file/d/0B16PrXUjs69zRjFhSHhOSTI5RzQ/view?usp=sharing
This is not a full answer on how to solve it, but more an explanation of why a GPU does not speed up, but actually enormously slows down, your code.
GPUs are fantastic at speeding up parallel code, meaning that they can do A LOT of things at the same time (e.g. my GPU can do 30070 things at once, while a modern CPU can't go over 16). However, individual GPU cores are very slow! A decent CPU nowadays runs at around 2-3 GHz, while a modern GPU runs at around 700 MHz. This means that a single CPU core is much faster than a single GPU core, but since GPUs can do lots of things at the same time, they can win overall.
Once I saw it explained like this: what do you prefer, a million-dollar sports car or a scooter? A million-dollar car or a thousand scooters? And what if your job is to deliver pizza? Hopefully you answered a thousand scooters for this last one (unless you are a scooter fan and you answered scooters to all of them, but that's not the point). (source and good introduction to GPU)
Back to your code: your code is incredibly sequential. Every inner iteration depends on the previous one, and the same goes for the outer iterations. You cannot run two of these in parallel, as you need the result of one iteration to run the next. This means that you will not get a pizza order until you have delivered the last one; what you want is to deliver them one by one, as fast as you can (so the sports car is better!).
And actually, each of these one-line updates is incredibly fast! If I run 50 of them on my computer I get 13.034 seconds on that line, which is 1.69 microseconds per iteration (7,680,000 calls).
So your problem is not that your code is slow, it is that you call it a LOT of times. The GPU will not accelerate this line of code, because it is already very fast, and we know that CPUs are faster than GPUs for this kind of thing.
Thus, unfortunately, GPUs are bad at sequential code, and your code is very sequential, so you cannot use a GPU to speed it up. An HPC cluster will not help either, because every loop iteration depends on the previous one (no parfor :( ).
So, as far as I can say, you will have to live with it.
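To see this per-call overhead yourself, here is a minimal, illustrative micro-benchmark of the inner-loop update (assuming the Parallel Computing Toolbox and a CUDA-capable GPU; the vector length 400 matches the question, but 1000 repetitions is an arbitrary choice):
% Time the same small update many times on the CPU and on the GPU.
x = randn(400, 1);
w = randn(400, 1);
rate = 5e-5;
tic;
for k = 1:1000
    yk = w' * x;                        % scalar
    w = w + rate*(yk*x - yk^2*w);       % the one-line update from the loop
end
tCPU = toc;
xg = gpuArray(x);
wg = gpuArray(w);
tic;
for k = 1:1000
    yk = wg' * xg;
    wg = wg + rate*(yk*xg - yk^2*wg);   % same update; each op pays kernel-launch overhead
end
wait(gpuDevice);                        % make sure all queued GPU work has finished
tGPU = toc;
fprintf('CPU: %.4f s, GPU: %.4f s for 1000 updates\n', tCPU, tGPU);
Depending on the hardware, tGPU can easily come out larger, which is exactly the overhead effect described above.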

How can I precisely profile/benchmark algorithms in MATLAB?

The algorithm repeats the same thing again and again. I expected to get the same time in each trial, but I got very unexpected times for the four identical trials: I expected the curves to be identical, but they behave totally differently. The reason is probably the tic/toc precision.
What kind of profiling/timing tools should I use in MATLAB?
What am I doing wrong in the code below? How reliable is tic/toc profiling?
Is there any way to guarantee consistent results?
Algorithm
A = [];
for ii = 1:25
    tic;
    timerval = tic;
    AlgoCalculatesTheSameThing();
    tElapsed = toc(timerval);
    A = [A, tElapsed];
end
You should try timeit.
Have a look at this related question:
How to benchmark Matlab processes?
A snippet from Sam Roberts' answer to the other question:
It handles many subtle issues related to benchmarking MATLAB code for you, such as:
ensuring that JIT compilation is used by wrapping the benchmarked code in a function
warming up the code
running the code several times and averaging
Have a look at this question for discussion regarding warm up:
Why does Matlab run faster after a script is "warmed up"?
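For reference, here is a minimal timeit usage sketch (reusing the function name from the question's code):
f = @() AlgoCalculatesTheSameThing();   % wrap the code under test in a function handle
t = timeit(f);                          % warmed-up, repeated runs; returns the median
fprintf('median run time: %.6f s\n', t);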
Update:
Since timeit was first submitted to the File Exchange, the source code is available here and can be studied and analyzed (as opposed to most other MATLAB functions).
From the header of timeit.m:
% TIMEIT handles automatically the usual benchmarking procedures of "warming
% up" F, figuring out how many times to repeat F in a timing loop, etc.
% TIMEIT also compensates for the estimated time-measurement overhead
% associated with tic/toc and with calling function handles. TIMEIT returns
% the median of several repeated measurements.
You can go through the function step by step. The comments are very good and descriptive, in my opinion. It is of course possible that MathWorks has changed parts of the code, but the overall functionality is there.
For instance, to account for the time it takes to run tic/toc:
function t = tictocTimeExperiment
% Call tic/toc 100 times and return the average time required.
It is later subtracted from the total time.
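An illustrative sketch of that idea (not the actual timeit code):
n = 100;
tic;                     % global stopwatch for the whole experiment
for k = 1:n
    id = tic;            % independent timer; does not reset the global stopwatch
    elapsed = toc(id);   %#ok<NASGU>
end
overhead = toc / n;      % average cost of one tic/toc pair
This average is what gets subtracted from the timing results.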
The following is said regarding the number of repetitions:
function t = roughEstimate(f, num_f_outputs)
% Return rough estimate of time required for one execution of
% f(). Basic warmups are done, but no fancy looping, medians,
% etc.
This rough estimate is used to determine how many times the computation should run.
If you want to change the number of repetitions, you can modify the timeit function yourself, since the source is available. I would recommend saving it as my_timeit, or something else, so that you avoid shadowing the built-in version.
Qualitatively, there are large differences between identical runs. I ran the same four trials as in the question and tested them with the methods suggested so far, and I also created my own version of timeit, called timeitH, because plain timeit had too large a standard deviation between trials. timeitH returns far more robust results than the other methods: it warms up the code just like timeit, but increases the number of outer loops in the original timeit from 11 to 50.
Below are the four trials done with the three different methods; the closer the curves are to each other, the better. Some observations:
timeit: results smoothed, but with bumps
tic/toc: easy to adjust for larger cases to shrink the standard deviation of the computation times, but no warm-up
timeitH: results pretty good; download the timeit code and change line 60 to num_outer_iterations = 50; to get smoother results
In summary
I think timeitH is the best candidate here, though I have only tested it on evaluating sparse polynomials. Plain timeit, and tic/toc over some 500 repetitions, do not give robust results.
(Plots omitted: timeit, and the 500-trial average with tic/toc.)
Algorithm for the 500 trials with tic/toc:
A = [];
numTrials = 500;
for ii = 1:25
    tic;
    for jj = 1:numTrials   % renamed from ii to avoid shadowing the outer index
        AlgoCalculatesTheSameThing();
    end
    tTotal = toc;
    tElapsed = tTotal/numTrials;
    A = [A, tElapsed];
end
Is the time for AlgoCalculatesTheSameThing() relatively short (fractions of a second or a few seconds) or long (minutes or hours)? If the former, I would suggest doing it more like this: move your timing functions outside your loop, then compute averages:
A = [];
numTrials = 25;
tic;
for ii = 1:numTrials
    AlgoCalculatesTheSameThing();
end
tTotal = toc;
tAvg = tTotal/numTrials;
If the event is short enough (a fraction of a second) then you should also increase the value of numTrials to hundreds or even thousands.
You have to consider that any timing function comes with error bars (as in any other measurement). If the event you are timing is short enough, the uncertainties in your measurement can be relatively big, keeping in mind that the resolution of tic and toc is also finite.
More discussion on the accuracy of tic and toc can be found here.
You need to work out these uncertainties for your specific application, so do experiments: perform averages over a number of trials and then compute the standard deviation to get a sense of the "scatter" or uncertainty in your results; a sketch follows below.
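A minimal sketch of that experiment, reusing the names from the snippets above:
nBatches = 10;
numTrials = 100;
t = zeros(1, nBatches);
for b = 1:nBatches
    tic;
    for k = 1:numTrials
        AlgoCalculatesTheSameThing();
    end
    t(b) = toc / numTrials;   % average time per call in this batch
end
fprintf('mean %.6f s, std %.6f s\n', mean(t), std(t));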

How to benchmark Matlab processes?

While searching for a way to avoid using a loop in my MATLAB code, I found the following comments under a question on SE:
The statement "for loops are slow in Matlab" is no longer generally true since Matlab...euhm, R2008a?
and
Have you tried to benchmark a for loop vs what you already have? sometimes it is faster than vectorized code...
So I would like to ask: is there a commonly used way to test the speed of a process in MATLAB? Can a user see somewhere how much time a process takes, or is the only way to stretch the processes out over several minutes in order to compare the times against each other?
The best tool for testing the performance of MATLAB code is Steve Eddins' timeit function, available here from the MATLAB Central File Exchange.
It handles many subtle issues related to benchmarking MATLAB code for you, such as:
ensuring that JIT compilation is used by wrapping the benchmarked code in a function
warming up the code
running the code several times and averaging
Update: As of release R2013b, timeit is part of core MATLAB.
Update: As of release R2016a, MATLAB also includes a performance testing framework that handles the above issues for you in a similar way to timeit.
You can use the profiler to assess how much time your functions, and the blocks of code within them, are taking.
>> profile on; % Starts the profiler
>> myfunctiontorun( ); % This can be a function, script or block of code
>> profile viewer; % Opens the viewer showing you how much time everything took
The viewer also clears the current profile data for next time.
Bear in mind that profile does tend to slow execution a bit, but I believe it does so uniformly across everything.
Obviously, if your function is very quick, you might find you don't get reliable results, so if you can run it many times or extend the computation, that will improve matters.
If it's really simple stuff you're testing, you can also just time it using tic and toc:
>> tic; % Start the timer
>> myfunctionname( );
>> toc; % End the timer and display elapsed time
Also if you want multiple timers, you can assign them to variables:
>> mytimer = tic;
>> myfunctionname( );
>> toc(mytimer);
Finally, if you want to store the elapsed time instead of display it:
>> myresult = toc;
I think I am right to state that many of us time MATLAB by wrapping the block of code we're interested in between tic and toc. Furthermore, we take care to ensure that the total time is on the order of tens of seconds (rather than single seconds or hundreds of seconds), repeat it 3 to 5 times, take some measure of central tendency (such as the mean), and draw our conclusions from that.
If the piece of code takes less than, say, 10 seconds, then repeat it as many times as necessary to bring it into that range, being careful to avoid any impact of one iteration on the next; a sketch of this is below. And if the code naturally takes hundreds of seconds or longer, either spend longer on the testing or try it with artificially small input data to make it run more quickly.
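A rough sketch of that procedure (myfunctiontorun is the placeholder name used earlier):
nReps = 1;
tTotal = 0;
while tTotal < 10             % double the repetitions until ~10 s total
    nReps = nReps * 2;
    tic;
    for k = 1:nReps
        myfunctiontorun();    % block of code under test
    end
    tTotal = toc;
end
fprintf('%.3e s per run, averaged over %d runs\n', tTotal/nReps, nReps);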
In my experience it's not necessary to run programs for minutes to get data on average run time with acceptably low variance. If I run a program 5 times and one (or two) of the results is wildly different from the mean I'll re-run it.
Of course, if the code has any features which make its run time non-deterministic then it's a different matter.

Problem: moving sprite in SFML looks laggy (because of sf::Clock)

I'm trying out SFML and am creating the classic game Snake.
I've successfully made the snake move a certain number of pixels after a certain amount of time. The problem is that the game loop takes a different amount of time to execute on each pass. When I print the time difference for every move, it looks like this:
Time: 0.273553
Time: 0.259873
Time: 0.260135
Time: 0.396735
Time: 0.258397
Time: 0.262811
Time: 0.259681
Time: 0.257136
Time: 0.266248
Time: 0.412772
Time: 0.260008
The bumps of 0.39 and 0.41 are not good. They sometimes make the snake move more slowly, and it does not look good at all.
The time should always be 0.25 so that the snake runs smoothly on screen.
Here is the code in my game loop (the snake.getSpeed() function returns 4):
if (_clock.GetElapsedTime() > (1.0f / snake.getSpeed())) {   // float literal avoids integer division
    std::cout << "Time: " << _clock.GetElapsedTime() << std::endl;
    snake.changeDir(keyDown);   // change direction
    snake.move();               // move the snake
    _clock.Reset();
}
Is the processor just too slow, or does anyone have another idea on how to make the code better?
EDIT: IGNORE THE ABOVE. The real time bump seems to be the GetEvent function. I don't know why, but it takes anywhere from 0 to 0.2 seconds to execute. Here is my test code:
// This is just a bit of code, so there are no closing brackets ;)
_clock.Reset();
while (_mainWindow.GetEvent(currentEvent)) {
    std::cout << _clock.GetElapsedTime() << std::endl; // print the elapsed time for GetEvent
(_mainWindow is an sf::RenderWindow.)
I don't know if this can be fixed, but I'm leaving the question unanswered; if anyone has an idea, that's great. Thanks!
First, I advise you to use SFML 2, because SFML 1.6 hasn't been maintained for over 2.5 years, has quite a few known and ugly bugs, and lacks many nice features from SFML 2.
Next, it's usually better not to try to force a certain frame rate, since there are factors you can't really do anything about (OS interrupts, lots of events when moving the mouse, etc.), and instead make the movement depend on the frame time.
The simplest way would be to use the Euler method:
pos.x = pos.x + velocity.x*dt
where pos is the position vector of an object, velocity is its two-dimensional velocity vector, and dt is the delta time, i.e. the time between two frames.
Unfortunately, the simple Euler method isn't very precise, and Verlet integration might give smoother movement.
But that's still not everything, because even though the movement is now more tightly bound to the frame time, spikes will still occur and lead to unwanted effects. It's therefore better to fix your time step, so that the rendering with its FPS (frames per second) count is independent from the physics calculation. There are again many approaches to this, and one article that I've found useful is this one.