Commence key press collection contingent on sound duration

I am implementing an experiment in PsychoPy: a same-different discrimination task comparing two sounds of variable duration (sound_1, sound_2) played in succession with a 0.5 s interval in between. I have managed to start sound_1 at 0.0 and sound_2 at 0.5 s after the end of sound_1 using "$sound_1.getDuration() + 0.5". However, I want to collect a key press response with the RT measured from the end of sound_2 onwards. I tried the start time "$sound_1.getDuration() + 0.5 + sound_2.getDuration()", but the keypress component is already active during the presentation of sound_2, and the RTs appear too long compared with the RTs usually observed for this kind of task. Does anyone know how to obtain an accurate onset for measuring RTs here?
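For reference, here is how that timing logic might look as a Coder-style sketch (illustrative only, not a verified Builder solution; the file names stim1.wav/stim2.wav and the response keys 's'/'d' are placeholders). The idea is to reset the keyboard clock only once sound_2 has finished, so that RTs are measured from its offset:
from psychopy import core, sound
from psychopy.hardware import keyboard

sound_1 = sound.Sound('stim1.wav')  # placeholder stimulus files
sound_2 = sound.Sound('stim2.wav')
kb = keyboard.Keyboard()

sound_1.play()
core.wait(sound_1.getDuration() + 0.5)  # sound_1 plus the 0.5 s silent interval
sound_2.play()
core.wait(sound_2.getDuration())        # wait until sound_2 has ended

kb.clock.reset()    # RT = 0 at the offset of sound_2
kb.clearEvents()    # discard any presses made while the sounds were playing
keys = []
while not keys:     # poll until a response is made
    keys = kb.getKeys(keyList=['s', 'd'], waitRelease=False)
print(keys[0].name, keys[0].rt)  # RT measured from the end of sound_2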
Btw, my question is similar to, but not fully answered by, the following thread:
variable stimuli duration but two kinds of fixed ISI in PsychoPy

Related

Modelica total time calculation of simulation and equation initialization

I would like to measure the total simulation and initialization time of a system of DAEs. I am interested in the wall-clock time (like the one given in Matlab by the tic/toc functions).
I noticed that Modelica has different flags for the simulation time, but the time I get is very small compared to the time that elapses from when I press the simulation button to the end of the simulation (measured approximately with the clock of my phone).
I guess this short time is just the time required for the simulation itself and does not include the initialization of the system of equations.
Is there a way to calculate this total time?
Thank you so much in advance,
Gabriele
Dear Marco,
Thank you so much for your extremely detailed and useful reply!
I am actually using OpenModelica and not Dymola, so unfortunately I have to build the function that does this myself, and I am very new to the OpenModelica language.
So far, I have a model that simulates the physical behaviour based on a system of DAEs. Now, I am trying to build what you suggest here:
With getTime() you can build a function that: reads the system time as t_start, translates the model and simulates it for 0 seconds, reads the system time again as t_stop, and computes the difference between t_start and t_stop.
Could you please give me more details? Which command can I use to read the system time as t_start and to simulate the model for 0 seconds? To do this for both t_start and t_stop, do I need two different functions?
Once I have done this, do I have to call the function (or functions) inside the OpenModelica model whose time I want to measure?
Thank you so much again for your precious help!
Very best regards, Gabriele
Depending on the tool you have, this could mean a lot of work.
The first problem is that the MSL allows you to retrieve the system time, but there is nothing included to easily compute time deltas. Therefore the Testing library in Dymola features the operator records DateTime and Duration. Note that it is planned to integrate them into future MSL versions, but at the moment they are only available via the Testing library for Dymola users.
The second problem is that there is no standardized way to translate and simulate models. Every tool has its own way of doing this from scripts. So without knowing which tool you are using, it is not possible to give an exact answer.
What Modelica offers in the MSL
In the current Modelica Standard Library version 3.2.3 you can read the current system time via Modelica.Utilities.System.getTime().
This small example shows how to use it:
function printSystemTime
protected
  Integer ms, s, min, h, d, mon, a;
algorithm
  (ms, s, min, h, d, mon, a) := Modelica.Utilities.System.getTime();
  Modelica.Utilities.Streams.print("Current time is: " + String(h) + ":" + String(min) + ":" + String(s));
end printSystemTime;
As you can see, it returns the current system date and time via 7 return values. These are not very convenient to deal with if you want to compute a time delta, as you will end up with 14 variables, each with its own value range.
How to measure translation and simulation time in general
With getTime() you can build a function that:
reads the system time as t_start
translates the model and simulates it for 0 seconds
reads the system time again as t_stop
computes the difference between t_start and t_stop.
Step 2 depends on the tool. In Dymola you would call
DymolaCommands.SimulatorAPI.simulateModel("path-to-model", 0, 0);
which translates your model and simulates it for 0 seconds, so it only runs the initialization section.
For Dymola users
The Testing library contains the function Testing.Utilities.Simulation.timing, which does almost exactly what you want.
To translate and simulate your model call it as follows:
Testing.Utilities.Simulation.timing(
  "Modelica.Blocks.Examples.PID_Controller",
  task=Testing.Utilities.Simulation.timing.Task.fullTranslate_simulate,
  loops=3);
This will translate your model, simulate it for 1 second three times, and compute the average time.
To simulate for 0s, duplicate the function and change this
if simulate then
  _ := simulateModel(c);
end if;
to
if simulate then
  _ := simulateModel(c, 0, 0);
end if;

How to simulate JME3 bullet physics as fast as possible

I am working on a car simulation using Bullet physics, and I want to be able to speed up the simulation - ideally to run the physics simulation as fast as possible.
I tried calling pSpace.update(1/60, 1) (which directly calls DynamicsWorld.stepSimulation), then listening for physicsTick and calling it again (so there is no waiting for anything). Unfortunately, it looks like the thread does not wait for all of Bullet's work to be done, and objects then fall through surfaces (once I got past the StackOverflowError).
I would probably need some mechanism by which Bullet notifies me when the computation is done, so that I can call it again.
Or does Bullet have its own clock which cannot be sped up, and I am completely wrong? My understanding is that it works as a single computation of forces over a given time step.
I know that JME3 can speed up Bullet by calling stepSimulation(speed * tpf, 4), but that only speeds up the simulation by a maximum of 4x, as it makes at most 4 sub-steps in a row. Is this the way to do it?
Thank you very much for any hints.

How to benchmark Matlab processes?

Searching for an idea of how to avoid using a loop in my Matlab code, I found the following comments under a question on SE:
The statement "for loops are slow in Matlab" is no longer generally true since Matlab...euhm, R2008a?
and
Have you tried to benchmark a for loop vs what you already have? sometimes it is faster than vectorized code...
So I would like to ask: is there a commonly used way to test the speed of a process in Matlab? Can the user see somewhere how much time a process takes, or is the only way to extend the process to several minutes in order to compare the times against each other?
The best tool for testing the performance of MATLAB code is Steve Eddins' timeit function, available here from the MATLAB Central File Exchange.
It handles many subtle issues related to benchmarking MATLAB code for you, such as:
ensuring that JIT compilation is used by wrapping the benchmarked code in a function
warming up the code
running the code several times and averaging
Update: As of release R2013b, timeit is part of core MATLAB.
Update: As of release R2016a, MATLAB also includes a performance testing framework that handles the above issues for you in a similar way to timeit.
You can use the profiler to assess how much time your functions, and the blocks of code within them, are taking.
>> profile on; % Starts the profiler
>> myfunctiontorun( ); % This can be a function, script or block of code
>> profile viewer; % Opens the viewer showing you how much time everything took
The viewer also clears the current profile data, ready for next time.
Bear in mind that profile does tend to slow execution a bit, but I believe it does so in a uniform way across everything.
Obviously, if your function is very quick, you might find you don't get reliable results, so if you can run it many times or extend the computation, that will improve matters.
If it's really simple stuff you're testing, you can also just time it using tic and toc:
>> tic; % Start the timer
>> myfunctionname( );
>> toc; % End the timer and display elapsed time
Also if you want multiple timers, you can assign them to variables:
>> mytimer = tic;
>> myfunctionname( );
>> toc(mytimer);
Finally, if you want to store the elapsed time instead of displaying it:
>> myresult = toc;
I think I am right in stating that many of us time Matlab code by wrapping the block we're interested in between tic and toc. Furthermore, we take care to ensure that the total time is of the order of tens of seconds (rather than single seconds or hundreds of seconds), repeat it 3-5 times, take some measure of central tendency (such as the mean), and draw our conclusions from that.
If the piece of code takes less than, say, 10 s, then repeat it as many times as necessary to bring it into that range, being careful to avoid any impact of one iteration on the next. And if the code naturally takes hundreds of seconds or longer, either spend longer on the testing or try it with artificially small input data so that it runs more quickly.
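(The same wrap-repeat-average pattern is language-agnostic; purely as an illustration, here is a minimal sketch in Python rather than MATLAB, with a placeholder workload, using time.perf_counter in place of tic/toc.)
import time

def average_runtime(fn, repeats=5):
    # Run fn several times and return the mean wall-clock time,
    # analogous to wrapping the code in tic/toc and averaging.
    times = []
    for _ in range(repeats):
        start = time.perf_counter()                 # tic
        fn()
        times.append(time.perf_counter() - start)   # toc
    return sum(times) / len(times)

print(average_runtime(lambda: sum(i * i for i in range(10**6))))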
In my experience it's not necessary to run programs for minutes to get data on average run time with acceptably low variance. If I run a program 5 times and one (or two) of the results is wildly different from the mean I'll re-run it.
Of course, if the code has any features which make its run time non-deterministic then it's a different matter.

How to test if a time range has x number of laps

I'm trying to solve a 'decaying' puzzle that goes somewhat like this:
given A is 100 at DateTime.new(2012,5,10,0,0,0) and is decaying by 0.5 every 12 seconds, has it decayed by exactly 20 by DateTime.new(2012,5,10,0,8,0)?
It so happens that the answer to that question is - well, true :)
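(A quick sanity check of that arithmetic, sketched in Python purely for illustration: the window is 8 minutes = 480 seconds, which contains 40 twelve-second steps, each removing 0.5.)
elapsed_seconds = 8 * 60        # 00:00:00 to 00:08:00
steps = elapsed_seconds // 12   # one decay step every 12 seconds -> 40 steps
total_decay = steps * 0.5       # each step removes 0.5
print(total_decay)              # 20.0, so A goes from 100 to 80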
But what about
A being 1304.5673,
the decay 0.00000197 every 1.2 msec
and end time being not one but 2000 DateTime.new's
I've tried with
fd = 3.minutes.ago.to_datetime
td = Time.now
material = 1304.5673
decay = 0.00000197
step = 0.00012.seconds
fd.step(td, step) { |n| material -= decay }
puts material
and the processing time is acceptable - but if I step any further back in time (like perhaps 10.hours or even 2.hours), my CPU cooler starts building up momentum, as if it were about to propel the entire Mac into orbit :(
I've toiled with this problem for quite a while - even though the timespan from question to answer on SO does indicate the opposite <:)
(and the answer, to me, explicitly demonstrates why Ruby is such a wonderful language!)
# recap the variables in the question
total_decay = ((td.to_time - fd.to_time).divmod(step))[0] * decay
puts "new material: #{material - total_decay}"
The results will probably not pass scientific scrutiny, but I'm OK with that (for now) ;)

TCL - how to know how much time a function has taken?

Say I have a proc that consists of several statements and function calls. How can I know how much time the proc has taken so far?
a very crude example would be something like:
set TIME_start [clock clicks -milliseconds]
...do something...
set TIME_taken [expr [clock clicks -milliseconds] - $TIME_start]
Using the time proc, you can do the following:
% set tt [time {set x [expr 23 * 34]}]
38 microseconds per iteration
To measure the time some code has taken, you either use time or clock.
The time command will run its script argument and return a description of how long the script took, in microseconds (plus some descriptive text, which is trivial to chop off with lindex). If you're really doing performance analysis work, you can supply an optional count argument that makes the script be run repeatedly, but for general monitoring you can ignore that.
The clock command lets you get various sorts of timestamps (as well as doing formatting, parsing and arithmetic with times). The coarsest is obtained with clock seconds, which returns the amount of time since the beginning of the Unix epoch (in seconds computed with civil time; that's what you want unless you're doing something specialized). If you need more detail, you should use clock milliseconds or clock microseconds. There's also clock clicks, but it's not typically defined what unit it counts in (unless you pass the -milliseconds or -microseconds option). It's up to you to turn the timestamps into something useful to you.
If you're timing things on Tcl 8.4 (or before!) then you're constrained to using time, clock seconds or clock clicks (and even the -microseconds option is absent; there's no microsecond-resolution timer exposed in 8.4). In that case, you should consider upgrading to 8.5, as it's generally faster. Faster is Good! (If you're using pre-8.4, definitely upgrade as you're enormously behind on the support front.)
To tell how long a function has taken, you can either use the time command (wrapped around the function call) or use clock clicks to get the current time before and then during the function. The time option is simple but can only time a whole function (and will only give you a time when the function returns). Using clock clicks can be done several times, but you will need to subtract the current time from the starting time yourself.
In case you're really looking for some kind of profiler, have a look at the profiler package in Tcllib:
http://tcllib.sourceforge.net/doc/profiler.html
