Why does EnvGen restart on every loop iteration and how to prevent this behavior? - supercollider

How can I use EnvGen in a loop in such a way that it won't restart at every iteration of the loop?
What I need it for: piecewise synthesis. I want e.g. 50ms of an xfade between the first and second Klang, then a 50ms xfade between the second and third Klang, then a 50ms xfade between the third and fourth Klang, and so on, and I want this concatenation as a whole to be modulated by an envelope.
Unfortunately the EnvGen seems to restart from the beginning on every iteration of the loop that plays the consecutive Klang pairs. I want a poiiiiinnnnnnnnnng, but no matter what I try all I get is popopopopopopopopo.
2019 EDIT:
OK, since nobody would answer the "how to achieve the goal" question, I am now downgrading this question to a mere "why doesn't this particular approach work", changing the title too.
Before I paste some code, a bit of an explanation: this is a very simplified example. While my original desire was to modulate a complicated, piecewise-generated sound with an envelope, this simplified example only "scissors" 100ms segments out of the output of a SinOsc, just to artificially create the "piecewise generation" situation.
What happens in this program is that the EnvGen seems to restart at every loop iteration: the envelope restarts from t=0. I expect to get one 1s long exponentially fading sound, like plucking a string. What I get is a series of 100ms "pings" due to the envelope restarting at the beginning of each loop iteration.
How do I prevent this from happening?
Here's the code:
(
//Exponential decay over 1 second
var envelope = {EnvGen.kr(Env.new([1,0.001],[1],curve: 'exp'), timeScale: 1, doneAction: 2)};
var myTask = Task({
    //A simple tone
    var oscillator = {SinOsc.ar(880,0,1);};
    var scissor;
    //Prepare a scissor that will cut 100ms of the oscillator signal
    scissor = {EnvGen.kr(Env.new([1,0],[1],'hold'),timeScale: 0.1)};
    10.do({
        var scissored, modulated;
        //Cut the signal with the scissor
        scissored = oscillator*scissor;
        //Try modulating with the envelope. The goal is to get a single 1s exponentially decaying ping.
        modulated = {scissored*envelope};
        //NASTY SURPRISE: envelope seems to restart here every iteration!!!
        //How do I prevent this and allow the envelope to live its whole
        //one-second life while the loop and the Task dance around it in 100ms steps?
        modulated.play;
        0.1.wait;
    });
});
myTask.play;
)
(This issue, with which I initially struggled for MONTHS without success, actually caused me to shelve my efforts at learning SuperCollider for TWO YEARS, and now I'm picking up where I left off.)

Your way of working here is kind of unusual.
With SuperCollider, the paradigm shift you're looking for is to create SynthDefs as discrete entities:
s.waitForBoot({
    b = Bus.new('control');
    SynthDef(\scissors, { arg bus;
        var env;
        env = EnvGen.kr(Env.linen);
        //EnvGen.kr(Env.new([1,0.001],[1],curve: 'exp'), timeScale: 1, doneAction: 2);
        Out.kr(bus, env);
    }).add;
    SynthDef(\oscillator, { arg bus, out=0, freq=440, amp = 0.1;
        var oscillator, scissored;
        oscillator = SinOsc.ar(freq,0,1);
        scissored = oscillator * In.kr(bus) * amp;
        Out.ar(out, scissored);
    }).add;
    s.sync;
    Task({
        Synth(\scissors, [\bus, b]);
        s.sync;
        10.do({ |i|
            Synth(\oscillator, [\bus, b, \freq, 100 * (i+1)]);
            0.1.wait;
        });
    }).play
});
I've changed to a longer envelope and added a change in pitch, so you can hear all the oscillators start.
What I've done is I've defined two SynthDefs and a bus.
The first SynthDef has an envelope, which I've lengthened for purposes of audibility. It writes the value of that envelope out to a bus. This way, every other SynthDef that wants to use that shared envelope can get it by reading the bus.
The second SynthDef has a SinOsc. We multiply the output of that by the bus input. This uses the shared envelope to change the amplitude.
This "works", but if you run it a second time, you'll get another nasty surprise! The oscillator synths haven't ended, and you'll hear them again. To solve this, you'll need to give them their own envelopes or something else with a doneAction. Otherwise, they'll live forever. Putting envelopes on each individual oscillator synth is also a good way to shape the onset of each one.
The other new thing you might notice in this example is the s.sync; lines. A major feature of SuperCollider is that the audio server and the language are separate processes. That line makes sure the server has caught up, so we don't try to use server-side resources before they're ready. This client/server split is also why it's best to define synthdefs before using them.
I hope that the long wait for an answer has not turned you off permanently. You may find it helpful to look at some tutorials and get started that way.

Related

glutMainLoop() vs glutTimerFunc()?

I know that glutMainLoop() is used to call display over and over again, maintaining a constant frame rate. At the same time, I can also have glutTimerFunc(), which calls glutPostRedisplay() at the end, so it can maintain a different framerate.
When they are working together, what really happens? Does the timer function add on to the framerate of the main loop and make it faster? Or does it change the default refresh rate of the main loop? How do they work in conjunction?
I know that glutMainLoop() is used to call display over and over again, maintaining a constant frame rate.
Nope! That's not what glutMainLoop does. The purpose of glutMainLoop is to pull operating system events, check if timers elapsed, see if windows have to be redrawn, and then call into the respective callback functions registered by the user. This happens in a loop, and usually this loop is started from the main entry point of the program, hence the name "main loop".
When they are working together, what really happens? Does the timer function add on to the framerate of the main loop and make it faster? Or does it change the default refresh rate of the main loop? How do they work in conjunction?
As already mentioned, dispatching timers is part of the responsibility of glutMainLoop, so you can't have GLUT timers without it. More importantly, if no events happened, no re-display was posted, and no idle function is registered, glutMainLoop will "block" the program until something interesting happens (i.e. no CPU cycles are being consumed).
Essentially it goes like
void glutMainLoop(void)
{
    for(;;){
        /* ... */
        foreach(t in timers){
            if( t.elapsed() ){
                t.callback(…);
                continue;
            }
        }
        /* ... */
        if( display.posted ){
            display.callback();
            display.posted = false;
            continue;
        }
        idle.callback();
    }
}
At the same time, I can also have glutTimerFunc(), which calls glutPostRedisplay() at the end, so it can maintain a different framerate.
The timers provided by GLUT make no guarantees about their precision and jitter. Hence they're not particularly well suited for framerate limiting.
Normally the framerate is limited by v-sync (or it should be), but blocking on v-sync means you cannot use that time to do something useful, because the process is blocked. A better approach is to register an idle function, in which you poll a high resolution timer (on POSIX compliant systems clock_gettime(CLOCK_MONOTONIC, …), on Windows QueryPerformanceCounter) and perform a glutPostRedisplay once one display refresh interval, minus the time required for rendering the frame, has elapsed.
Of course it's hard to predict exactly how long rendering is going to take, so the usual approach is to collect a sliding-window average and deviation and adjust with that. Also, you want to align that timer with v-sync.
This is of course a solved problem (at least in electrical engineering), which can be addressed by a Phase Locked Loop. Essentially you have a "phase comparator" (i.e. something that compares whether your timer runs slower or faster than something you want to synchronize to), a "charge pump" (a variable to which you add, or from which you subtract, the delta from the phase comparator), a "loop filter" (sliding window average) and an "oscillator" (a timer) controlled by the loop-filtered value in the charge pump.
So you poll the status of the v-sync (not possible with GLUT functions, and not even possible with core OpenGL or even some of the swap control extensions – you'll have to use OS specific functions for that) and compare if your timers lag behind or run fast compared to that. You add that delta to the "charge pump", filter it and feed the result back into the timer. The nice thing about this approach is that this will automatically adjust to and filter the time spent for rendering frames as well.
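To make the idle-function approach described a couple of paragraphs above concrete, here is a minimal sketch, assuming a POSIX system with clock_gettime and a GLUT implementation such as freeglut; the names now_seconds, target_frame_time and last_redisplay, as well as the 60 Hz target, are made up for illustration, and the render-time compensation and PLL refinements are deliberately left out:
#include <GL/glut.h>
#include <time.h>

static const double target_frame_time = 1.0 / 60.0; /* aim for roughly 60 Hz */
static double last_redisplay = 0.0;

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    /* ... draw the scene here ... */
    glutSwapBuffers();
}

static void idle(void)
{
    double t = now_seconds();
    /* Only post a redisplay once a full frame interval has elapsed;
       otherwise return immediately so the main loop stays responsive. */
    if (t - last_redisplay >= target_frame_time) {
        last_redisplay = t;
        glutPostRedisplay();
    }
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("idle-driven redisplay");
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    glutMainLoop();
    return 0;
}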
From the glutMainLoop doc pages:
glutMainLoop enters the GLUT event processing loop. This routine should be called at most once in a GLUT program. Once called, this routine will never return. It will call as necessary any callbacks that have been registered. (emphasis mine)
That means that the idea of glutMainLoop is just processing events, calling anything that is installed. Indeed, I do not believe that it keeps calling display over and over, but only when there is an event that requests its redisplay.
This is where glutTimerFunc() comes into play. It registers a timer event callback to be called by glutMainLoop when this event is triggered. Note that this is one of several other possible event callbacks that can be registered. That explains why the docs use the expression at least.
(...) glutTimerFunc registers the timer callback func to be triggered in at least msecs milliseconds. (...)
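As a rough illustration of that (not taken from the question or the GLUT docs), a timer callback typically re-registers itself and posts a redisplay, which yields an approximate fixed redraw rate; the name on_timer and the 16 ms interval below are arbitrary choices for the sketch:
#include <GL/glut.h>

enum { interval_ms = 16 }; /* roughly 60 redraws per second, with no precision guarantee */

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    /* ... draw the scene here ... */
    glutSwapBuffers();
}

static void on_timer(int value)
{
    glutPostRedisplay();                         /* ask glutMainLoop to call display */
    glutTimerFunc(interval_ms, on_timer, value); /* GLUT timers fire once, so re-arm */
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("timer-driven redisplay");
    glutDisplayFunc(display);
    glutTimerFunc(interval_ms, on_timer, 0);
    glutMainLoop();
    return 0;
}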

Sampling using new VideoReader readFrame() function in MATLAB [duplicate]

I am trying to process a video in Matlab that I read in using VideoReader. I can process the frames without a problem, but I only want to process every fifth frame. I tried using the step function but this doesn't work on my VideoReader object. Right now I can call readFrame five times, but this obviously slows down the whole process (it's a lot of video material). How can I efficiently skip five frames, process one frame, skip another five, ... using Matlab?
Error message:
Undefined function 'step' for input arguments of type 'VideoReader'.
However, calling the help function on step gets me this example:
WORKED=step(VR,DELTA)
Moves the frame counter by DELTA frames for video VR. This is a
generalization of NEXT. Returns 0 on an unsuccessful step. Note that
not all plugins support stepping, especially with negative numbers. In
the following example, both IM1 and IM2 should be the same for most
plugins.
vr = videoReader(...myurl...);
if (~next(vr)), error('couldn''t read first frame'); end
im1 = getframe(vr);
if (~step(vr,-1)), error('could not step back to frame 0'); end
im2 = getframe(vr);
if (any(im1 ~= im2)),
error('first frame and frame 0 are not the same');
end
vr = close(vr);
FNUM should be an integer.
After the videoReader constructor is called, NEXT, SEEK, or step should
be called at least once before GETFRAME is called.
Here, step is clearly called on a VideoReader object, is it not? Help would be greatly appreciated.
I've had this issue too. Without using deprecated code, the only way to do what you are trying is to call readFrame five times for every output frame. This is slow and very inefficient. However, if you use the deprecated read method (and assuming your video is a file rather than a stream), you can specify a frame number as well. I don't know why The MathWorks have gone backwards on this. I suggest that you file a service request to ask about it and say why this functionality is important to you.
In the meantime, you can try out my frame2jpg function that extracts particular frames from a video file. It tries to use the deprecated read method and falls back to readFrame if that fails. I've found the read method to be ten times faster in my own application with 1080p 60 fps MPEG-4 video. Feel free to modify the code to suit your needs.
Don't know if this is still of use, but I've found a way to work around the issue.
As readFrame reads the CURRENT frame, given by the vid.CurrentTime property, you can simply advance that property by the number of frames you want to skip.
vid = VideoReader('myvid.mpeg')
vidFig = figure();
currAxes = axes;
n = 10;
while hasFrame(vid)
    vidFrame = readFrame(vid);
    vid.CurrentTime = vid.CurrentTime + n/vid.FrameRate;
    image(vidFrame, 'Parent', currAxes);
    currAxes.Visible = 'off';
end
Changing the value of n changes how many frames are skipped in every loop iteration. I hope this helped.

std::copy runtime_error when working with uint16_t's

I'm looking for input as to why this breaks. See the addendum for contextual information, but I don't really think it is relevant.
I have an std::vector<uint16_t> depth_buffer that is initialized to have 640*480 elements. This means that the total space it takes up is 640*480*sizeof(uint16_t) = 614400.
The code that breaks:
void Kinect360::DepthCallback(void* _depth, uint32_t timestamp) {
    lock_guard<mutex> depth_data_lock(depth_mutex);
    uint16_t* depth = static_cast<uint16_t*>(_depth);
    std::copy(depth, depth + depthBufferSize(), depth_buffer.begin()); /// the error
    new_depth_frame = true;
}
where depthBufferSize() will return 614400 (I've verified this multiple times).
My understanding of std::copy(first, amount, out) is that first specifies the memory address to start copying from, amount is how far in bytes to copy until, and out is the memory address to start copying to.
Of course, it can be done manually with something like
#pragma unroll
for(auto i = 0; i < 640*480; ++i) depth_buffer[i] = depth[i];
instead of the call to std::copy, but I'm really confused as to why std::copy fails here. Any thoughts???
Addendum: the context is that I am writing a derived class that inherits from FreenectDevice to work with a Kinect 360. Officially the error is a Bus Error, but I'm almost certain this is because libfreenect interprets an error in the DepthCallback as a Bus Error. Stepping through with lldb, it's a standard runtime_error being thrown from std::copy. If I manually enter depth + 614400 it will crash, though if I have depth + (640*480) it will chug along. At this stage I am not doing something meaningful with the depth data (rendering the raw depth appropriately with OpenGL is a separate issue xD), so it is hard to tell if everything got copied, or just a portion. That said, I'm almost positive it doesn't grab it all.
Contrasted with the corresponding VideoCallback and the call inside of copy(video, video + videoBufferSize(), video_buffer.begin()), I don't see why the above would crash. If my understanding of std::copy were wrong, this should crash too since videoBufferSize() is going to return 640*480*3*sizeof(uint8_t) = 640*480*3 = 921600. The *3 is from the fact that we have 3 uint8_t's per pixel, RGB (no A). The VideoCallback works swimmingly, as verified with OpenGL (and the fact that it's essentially identical to the samples provided with libfreenect...). FYI none of the samples I have found actually work with the raw depth data directly, all of them colorize the depth and use an std::vector<uint8_t> with RGB channels, which does not suit my needs for this project.
I'm happy to just ignore it and move on in some senses because I can get it to work, but I'm really quite perplexed as to why this breaks. Thanks for any thoughts!
The way std::copy works is that you provide start and end points of your input sequence and the location to begin copying to. The end point that you're providing is off the end of your sequence, because your depthBufferSize function is giving an offset in bytes, rather than the number of elements in your sequence.
If you remove the multiply by sizeof(uint16_t), it will work. At that point, you might also consider calling std::copy_n instead, which takes the number of elements to copy.
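For concreteness, here is a small sketch of both corrected calls, mirroring the code from the question (the num_elements constant and the copy_depth wrapper are introduced here just for illustration); the key point is that the end of std::copy's input range is a one-past-the-end element pointer, not a byte count:
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// The input range for std::copy is [first, last) measured in *elements*, not bytes.
void copy_depth(const std::uint16_t* depth, std::vector<std::uint16_t>& depth_buffer)
{
    const std::size_t num_elements = 640 * 480;   // 307200 elements, i.e. 614400 bytes

    // Either pass first and one-past-the-end element pointers...
    std::copy(depth, depth + num_elements, depth_buffer.begin());

    // ...or pass the element count directly.
    std::copy_n(depth, num_elements, depth_buffer.begin());
}
std::copy_n avoids computing an end pointer at all, which makes the bytes-versus-elements mistake harder to repeat.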
Edit: I just realised that I didn't answer the question directly.
Based on my understanding of std::copy, it shouldn't be throwing exceptions with the input you're giving it. The only thing in that code that could throw a runtime_error is the locking of the mutex.
Considering you have undefined behaviour as a result of running off of the end of your buffer, I'm tempted to say that has something to do with it.

MSG::time is later than timeGetTime

After noticing some timing discrepancies with events in my code, I boiled the problem all the way down to my Windows message loop.
Basically, unless I'm doing something strange, I'm experiencing this behaviour:
MSG message;
while (PeekMessage(&message, _applicationWindow.Handle, 0, 0, PM_REMOVE))
{
int timestamp = timeGetTime();
bool strange = message.time > timestamp; //strange == true!!!
TranslateMessage(&message);
DispatchMessage(&message);
}
The only rational conclusion I can draw is that MSG::time uses a different timing mechanism than timeGetTime() and is therefore free to produce differing results.
Is this the case, or am I missing something fundamental?
Could this be a signed unsigned issue? You are comparing a signed int (timestamp) to an unsigned DWORD (msg.time).
Also, the clock wraps roughly every 49.7 days (2^32 milliseconds) - when that happens strange could well be true.
As an aside, if you don't have a great reason to use timeGetTime, you can use GetTickCount here - it saves you bringing in winmm.
The code below shows how you should go about using times - you should never compare the times directly, because clock wrapping messes that up. Instead you should always subtract the start time from the current time and look at the interval.
// This is roughly equivalent code, however strange should never be true
// in this code
DWORD timestamp = GetTickCount();
bool strange = ((int)(timestamp - msg.time) < 0); // cast the unsigned difference to signed so wrap-around is handled
I don't think it's advisable to expect or rely on any particular relationship between the absolute values of timestamps returned from different sources. For one thing, the multimedia timer may have a different resolution from the system timer. For another, the multimedia timer runs in a separate thread, so you may encounter synchronisation issues. (I don't know if each CPU maintains its own independent tick count.) Furthermore, if you are running any sort of time synchronisation service, it may be making its own adjustments to your local clock and affecting the timestamps you are seeing.
Are you by any chance running an AMD dual core? There is an issue where, because each core has a separate timer and the cores can run at different speeds, the timers can diverge from each other. This can manifest itself in negative ping times, for example.
I had similar issues when measuring timeouts in different threads using GetTickCount().
Install this driver (IIRC) to resolve the issue.
MSG.time is based on GetTickCount(), and timeGetTime() uses the multimedia timer, which is completely independent of GetTickCount(). I would not be surprised to see that one timer has 'ticked' before the other.
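If you want to see the two time bases side by side, a tiny sketch like the following (not from the answer above; link against winmm for timeGetTime) simply samples both and prints the difference:
#include <windows.h>
#include <mmsystem.h>  // timeGetTime; link with winmm.lib
#include <cstdio>

int main()
{
    DWORD tick = GetTickCount();  // the time base MSG::time is stamped from, per the answer above
    DWORD mm   = timeGetTime();   // multimedia timer, maintained independently of GetTickCount
    std::printf("GetTickCount=%lu timeGetTime=%lu difference=%ld ms\n",
                (unsigned long)tick, (unsigned long)mm, (long)(mm - tick));
    return 0;
}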

How do I Acquire Images at Timed Intervals using MATLAB?

I'm a MATLAB beginner and I would like to know how I can acquire and save 20 images at 5 second intervals from my camera. Thank you very much.
First construct a video input interface
vid = videoinput('winvideo',1,'RGB24_400x300');
You'll need to adjust the last bit for your webcam. To find a list of webcam devices (and other things besides) use:
imaqhwinfo
The following gets the first webcam's device information as an object
a=imaqhwinfo('winvideo',1)
Find the list of supported video formats with
a.SupportedFormats
You'll then want to start up the interface:
start(vid);
preview(vid);
Now you can do the following:
pics=cell(1,20)
for i=1:20
    pause(5);
    pics{i}=getsnapshot(vid);
end
Or, as other commentators have noted, you could also use a Matlab timer for the interval.
If you wish to capture images with a considerably shorter interval (1 or more per second), it may be more useful to consider the webcam as a video source. I've left an answer to this question which lays out methods for achieving that.
There are several ways to go about this, each with advantages and disadvantages. Based on the information that you've posted so far, here is how I would do this:
vid = videoinput('dcam', 1); % Change for your hardware of course.
vid.FramesPerTrigger = 20;
vid.TriggerRepeat = inf;
triggerconfig(vid, 'manual');
vid.TimerFcn = 'trigger(vid)';
vid.TimerPeriod = 5;
start(vid);
This will acquire 20 images every five seconds until you call STOP. You can change the TriggerRepeat parameter to change how many times acquisition will occur.
This obviously doesn't do any processing on the images after they are acquired.
Here is a quick tutorial on getting one image: http://www.mathworks.com/products/imaq/description5.html
Have you gotten this kind of thing to work yet?
EDIT:
Now that you can get one image, you want to get twenty. A timer object or a simple for loop is what you are going to need.
Simple timer object example
Video example of timers in MATLAB
Be sure to set the "tasks to execute" field to twenty. Also, you should wrap up all the code you have for one picture snap into a single function.
To acquire the image: does the camera come with some documented way to control it from a computer? MATLAB supports linking to outside libraries. Or you can buy the appropriate MATLAB toolbox, as suggested by MatlabDoug.
To save the image, IMWRITE is probably the easiest option.
To repeat the action, a simple FOR loop with a PAUSE will give you roughly what you want with very little work:
for ctr = 1:20
    img = AcquireImage(); % your function goes here
    fname = ['Image' num2str(ctr)]; % make a file name
    imwrite(img, fname, 'TIFF');
    pause(5); % or whatever number suits your needs
end
If, however, you need exact 5 second intervals, you'll have to dive into TIMERs. Here's a simple example:
function AcquireAndSave
    persistent FileNum;
    if isempty(FileNum)
        FileNum = 1;
    end
    img = AcquireImage();
    fname = ['Image' num2str(FileNum)];
    imwrite(img, fname, 'TIFF');
    disp(['Just saved image ' fname]);
    FileNum = FileNum + 1;
end
>> t = timer('TimerFcn', 'AcquireAndSave', 'Period', 5.0, 'ExecutionMode', 'fixedRate');
>> start(t);
...you should see the disp line from AcquireAndSave repeat every 5 seconds...
>> stop(t);
>> delete(t);
