How do I Acquire Images at Timed Intervals using MATLAB?

I'm a MATLAB beginner and I would like to know how I can acquire and save 20 images at 5 second intervals from my camera. Thank you very much.

First, construct a video input object:
vid = videoinput('winvideo',1,'RGB24_400x300');
You'll need to adjust the last bit for your webcam. To find a list of webcam devices (and other things besides) use:
imaqhwinfo
The following returns a structure of information about the first webcam:
a=imaqhwinfo('winvideo',1)
Find the list of supported video formats with
a.SupportedFormats
You'll then want to start up the interface:
start(vid);
preview(vid);
Now you can do the following:
pics = cell(1,20);              % preallocate storage for the snapshots
for i = 1:20
    pause(5);                   % wait five seconds between captures
    pics{i} = getsnapshot(vid); % grab the current frame
end
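Since the question also asks about saving the images, here is a minimal follow-up sketch that writes each snapshot to disk with imwrite; the PNG format and the Image*.png naming scheme are my assumptions:
for i = 1:20
    imwrite(pics{i}, sprintf('Image%02d.png', i)); % save each captured frame
end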
Or, as other commenters have noted, you could also use a MATLAB timer for the interval.
If you wish to capture images with a considerably shorter interval (1 or more per second), it may be more useful to consider the webcam as a video source. I've left an answer to this question which lays out methods for achieving that.

There are several ways to go about this, each with advantages and disadvantages. Based on the information that you've posted so far, here is how I would do this:
vid = videoinput('dcam', 1); % Change for your hardware of course.
vid.FramesPerTrigger = 20;
vid.TriggerRepeat = inf;
triggerconfig(vid, 'manual');
vid.TimerFcn = 'trigger(vid)';
vid.TimerPeriod = 5;
start(vid);
This will acquire 20 images every five seconds until you call STOP. You can change the TriggerRepeat parameter to change how many times acquisition will occur.
This obviously doesn't do any processing on the images after they are acquired.
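To actually retrieve the acquired frames for processing or saving, getdata can be used once frames are available; a minimal sketch (the frames variable name is mine):
% Block until 20 frames are available, then pull them into the workspace.
frames = getdata(vid, 20); % returns an H-by-W-by-bands-by-20 array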

Here is a quick tutorial on getting one image: http://www.mathworks.com/products/imaq/description5.html Have you gotten this kind of thing to work yet?
EDIT:
Now that you can get one image, you want to get twenty. A timer object or a simple for loop is what you are going to need.
Simple timer object example
Video example of timers in MATLAB
Be sure to set the "tasks to execute" field to twenty. Also, you should wrap up all the code you have for one picture snap into a single function.
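As a rough sketch of that suggestion, assuming you've wrapped the single-capture code in a hypothetical function called takeSnapshot:
t = timer('TimerFcn', @(~,~) takeSnapshot(), ... % takeSnapshot is your wrapper function
    'Period', 5, ...                             % fire every five seconds
    'TasksToExecute', 20, ...                    % the "tasks to execute" field
    'ExecutionMode', 'fixedRate');
start(t);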

To acquire the image, does the camera come with some documented way to control it from a computer? MATLAB supports linking to outside libraries. Or you can buy the appropriate MATLAB toolbox, as suggested by MatlabDoug.
To save the image, IMWRITE is probably the easiest option.
To repeat the action, a simple FOR loop with a PAUSE will give you roughly what you want with very little work:
for ctr = 1:20
img = AcquireImage(); % your function goes here
fname = ['Image' num2str(ctr)]; % make a file name
imwrite(img, fname, 'TIFF');
pause(5); % or whatever number suits your needs
end
If, however, you need exact 5 second intervals, you'll have to dive into TIMERs. Here's a simple example:
function AcquireAndSave
persistent FileNum;
if isempty(FileNum)
FileNum = 1;
end
img = AcquireImage();
fname = ['Image' num2str(FileNum)];
imwrite(img, fname, 'TIFF');
disp(['Just saved image ' fname]);
FileNum = FileNum + 1;
end
>> t = timer('TimerFcn', 'AcquireAndSave', 'Period', 5.0, 'ExecutionMode', 'fixedRate');
>> start(t);
...you should see the disp line from AcquireAndSave repeat every 5 seconds...
>> stop(t);
>> delete(t);

Related

Why does EnvGen restart on every loop iteration and how to prevent this behavior?

How can I use EnvGen in a loop in such a way that it won't restart at every iteration of the loop?
What I need it for: piecewise synthesis. I want e.g. 50ms of an xfade between the first and second Klang, then a 50ms xfade between the second and third Klang, then a 50ms xfade between the third and fourth Klang, and so on, and I want this concatenation as a whole to be modulated by an envelope.
Unfortunately the EnvGen seems to restart from the beginning on every iteration of the loop that plays the consecutive Klang pairs. I want a poiiiiinnnnnnnnnng, but no matter what I try all I get is popopopopopopopopo.
2019 EDIT:
OK, since nobody would answer the "how to achieve the goal" question, I am now downgrading this question to a mere "why doesn't this particular approach work", changing the title too.
Before I paste some code, a bit of an explanation: this is a very simplified example. While my original desire was to modulate a complicated, piecewise-generated sound with an envelope, this simplified example only "scissors" 100ms segments out of the output of a SinOsc, just to artificially create the "piecewise generation" situation.
What happens in this program is that the EnvGen seems to restart at every loop iteration: the envelope restarts from t=0. I expect to get one 1s long exponentially fading sound, like plucking a string. What I get is a series of 100ms "pings" due to the envelope restarting at the beginning of each loop iteration.
How do I prevent this from happening?
Here's the code:
//Exponential decay over 1 second
var envelope = {EnvGen.kr(Env.new([1,0.001],[1],curve: 'exp'), timeScale: 1, doneAction: 2)};
var myTask = Task({
//A simple tone
var oscillator = {SinOsc.ar(880,0,1);};
var scissor;
//Prepare a scissor that will cut 100ms of the oscillator signal
scissor = {EnvGen.kr(Env.new([1,0],[1],'hold'),timeScale: 0.1)};
10.do({
var scissored,modulated;
//Cut the signal with the scissor
scissored = oscillator*scissor;
//Try modulating with the envelope. The goal is to get a single 1s exponentially decaying ping.
modulated = {scissored*envelope};
//NASTY SURPRISE: envelope seems to restart here every iteration!!!
//How do I prevent this and allow the envelope to live its whole
//one-second life while the loop and the Task dance around it in 100ms steps?
modulated.play;
0.1.wait;
});
});
myTask.play;
(This issue, with which I initially struggled for MONTHS without success, actually caused me to shelve my efforts at learning SuperCollider for TWO YEARS, and now I'm picking up where I left off.)
Your way of working here is somewhat unusual.
With SuperCollider, the paradigm shift you're looking for is to create SynthDefs as discrete entities:
s.waitForBoot ({
b = Bus.new('control');
SynthDef(\scissors, {arg bus;
var env;
env = EnvGen.kr(Env.linen);
//EnvGen.kr(Env.new([1,0.001],[1],curve: 'exp'), timeScale: 1, doneAction: 2);
Out.kr(bus, env);
}).add;
SynthDef(\oscillator, {arg bus, out=0, freq=440, amp = 0.1;
var oscillator, scissored;
oscillator = SinOsc.ar(freq,0,1);
scissored = oscillator * In.kr(bus) * amp;
Out.ar(out, scissored);
}).add;
s.sync;
Task({
Synth(\scissors, [\bus, b]);
s.sync;
10.do({|i|
Synth(\oscillator, [\bus, b, \freq, 100 * (i+1)]);
0.1.wait;
});
}).play
});
I've switched to a longer envelope and added a change in pitch, so you can hear all the oscillators start.
What I've done is I've defined two SynthDefs and a bus.
The first SynthDef has an envelope, which I've lengthened for purposes of audibility. It writes the value of that envelope out to a bus. This way, every other SynthDef that wants to use that shared envelope can get it by reading the bus.
The second SynthDef has a SinOsc. We multiply its output by the bus input, using the shared envelope to change the amplitude.
This "works", but if you run it a second time, you'll get another nasty surprise! The oscillator SynthDefs haven't ended, and you'll hear them again. To solve this, you'll need to give them their own envelopes or something else with a doneAction; otherwise, they'll live forever. Putting envelopes on each individual oscillator synth is also a good way to shape the onset of each one.
The other new thing you might notice in this example is the s.sync; lines. A major feature of SuperCollider is that the audio server and the language are separate processes. That line makes sure the server has caught up, so we don't try to use server-side resources before they're ready. This client/server split is also why it's best to define synthdefs before using them.
I hope that the long wait for an answer has not turned you off permanently. You may find it helpful to look at some tutorials and get started that way.

Sampling using new VideoReader readFrame() function in MATLAB [duplicate]

I am trying to process a video in MATLAB that I read in using VideoReader. I can process the frames without a problem, but I only want to process every fifth frame. I tried using the step function, but this doesn't work on my VideoReader object. Right now I can call readFrame five times, but this obviously slows down the whole process (it's a lot of video material). How can I efficiently skip five frames, process one frame, skip another five, and so on, using MATLAB?
Error message:
Undefined function 'step' for input arguments of type 'VideoReader'.
However, calling the help function on step gets me this example:
WORKED=step(VR,DELTA)
Moves the frame counter by DELTA frames for video VR. This is a
generalization of NEXT. Returns 0 on an unsuccessful step. Note that
not all plugins support stepping, especially with negative numbers. In
the following example, both IM1 and IM2 should be the same for most
plugins.
vr = videoReader(...myurl...);
if (~next(vr)), error('couldn''t read first frame'); end
im1 = getframe(vr);
if (~step(vr,-1)), error('could not step back to frame 0'); end
im2 = getframe(vr);
if (any(im1 ~= im2)),
error('first frame and frame 0 are not the same');
end
vr = close(vr);
FNUM should be an integer.
After the videoReader constructor is called, NEXT, SEEK, or step should
be called at least once before GETFRAME is called.
Here, step is clearly called on a VideoReader object, is it not? Help would be greatly appreciated.
I've had this issue too. Without using deprecated code, the only way to do what you are trying is to call readFrame five times for every output frame. This is slow and very inefficient. However, if you use the deprecated read method (and assuming your video is a file rather than a stream), you can specify a frame number as well. I don't know why The MathWorks have gone backwards on this. I suggest that you file a service request to ask about it and say why this functionality is important to you.
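A rough sketch of that approach, assuming a file-backed video and a release where the deprecated read method and NumberOfFrames property are still available:
vid = VideoReader('myvid.mpeg');
for k = 1:5:vid.NumberOfFrames  % every fifth frame
    frame = read(vid, k);       % deprecated, but indexable by frame number
    % ... process frame here ...
end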
In the meantime, you can try out my frame2jpg function that extracts particular frames from a video file. It tries to use the deprecated read method and falls back to readFrame if that fails. I've found the read method to be ten times faster in my own application with 1080p 60 fps MPEG-4 video. Feel free to modify the code to suit your needs.
Don't know if this is still of use, but I've found a way to work around the issue.
Since readFrame reads the frame at the position given by the vid.CurrentTime property, you can simply advance that property by the number of frames you want to skip.
vid = VideoReader('myvid.mpeg');
vidFig = figure();
currAxes = axes;
n = 10; % number of frames to skip per iteration
while hasFrame(vid)
    vidFrame = readFrame(vid);
    % Jump the read position ahead by n frames' worth of time.
    vid.CurrentTime = vid.CurrentTime + n/vid.FrameRate;
    image(vidFrame, 'Parent', currAxes);
    currAxes.Visible = 'off';
end
Changing the value of n changes how many frames are skipped on each pass through the loop. I hope this helped.

How to pyplot pause() for zero time?

I'm plotting an animation of circles. It looks and works great as long as speed is set to a positive number. However, I want to set speed to 0.0. When I do that, something changes and it no longer animates. Instead, I have to click the 'x' on the window after each frame. I tried using combinations of plt.draw() and plt.show() to get the same effect as plt.pause(), but the frames don't show up. How do I replicate the functionality of plt.pause() precisely either without the timer involved or with it set to 0.0?
speed = 0.0001
plt.ion()
for i in range(timesteps):
fig, ax = plt.subplots()
for j in range(num):
circle = plt.Circle((a[j], b[j]), r[j], color='b')
fig.gca().add_artist(circle)
plt.pause(speed)
#plt.draw()
#plt.show()
plt.clf()
plt.close()
I've copied the code of pyplot.pause() here:
def pause(interval):
"""
Pause for *interval* seconds.
If there is an active figure it will be updated and displayed,
and the GUI event loop will run during the pause.
If there is no active figure, or if a non-interactive backend
is in use, this executes time.sleep(interval).
This can be used for crude animation. For more complex
animation, see :mod:`matplotlib.animation`.
This function is experimental; its behavior may be changed
or extended in a future release.
"""
backend = rcParams['backend']
if backend in _interactive_bk:
figManager = _pylab_helpers.Gcf.get_active()
if figManager is not None:
canvas = figManager.canvas
canvas.draw()
show(block=False)
canvas.start_event_loop(interval)
return
# No on-screen figure is active, so sleep() is all we need.
import time
time.sleep(interval)
As you can see, it calls start_event_loop, which starts a separate crude event loop for interval seconds. What happens when interval == 0 seems to be backend-dependent. For instance, for the WX backend a value of 0 means that this loop is blocking and never ends (I had to look in the code here; it doesn't show up in the documentation. See line 773).
In short, 0 is a special case. Can't you set it to a small value, e.g. 0.1 seconds?
The pause docstring above says that it can only be used for crude animation; you may have to resort to the animation module if you want something more sophisticated.
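If you do want to redraw without any timed blocking at all, one possible workaround (assuming an interactive backend) is to drive the canvas directly instead of calling pause; a minimal sketch:
import matplotlib.pyplot as plt

def redraw_without_pause(fig):
    # Redraw the figure and process pending GUI events without
    # entering a timed event loop: roughly what pause() does, minus the wait.
    fig.canvas.draw_idle()
    fig.canvas.flush_events()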

std::copy runtime_error when working with uint16_t's

I'm looking for input as to why this breaks. See the addendum for contextual information, but I don't really think it is relevant.
I have an std::vector<uint16_t> depth_buffer that is initialized to have 640*480 elements. This means that the total space it takes up is 640*480*sizeof(uint16_t) = 614400.
The code that breaks:
void Kinect360::DepthCallback(void* _depth, uint32_t timestamp) {
lock_guard<mutex> depth_data_lock(depth_mutex);
uint16_t* depth = static_cast<uint16_t*>(_depth);
std::copy(depth, depth + depthBufferSize(), depth_buffer.begin());/// the error
new_depth_frame = true;
}
where depthBufferSize() will return 614400 (I've verified this multiple times).
My understanding of std::copy(first, amount, out) is that first specifies the memory address to start copying from, amount is how far in bytes to copy until, and out is the memory address to start copying to.
Of course, it can be done manually with something like
#pragma unroll
for(auto i = 0; i < 640*480; ++i) depth_buffer[i] = depth[i];
instead of the call to std::copy, but I'm really confused as to why std::copy fails here. Any thoughts???
Addendum: the context is that I am writing a derived class that inherits from FreenectDevice to work with a Kinect 360. Officially the error is a Bus Error, but I'm almost certain this is because libfreenect interprets an error in the DepthCallback as a Bus Error. Stepping through with lldb, it's a standard runtime_error being thrown from std::copy. If I manually enter depth + 614400 it will crash, though if I have depth + (640*480) it will chug along. At this stage I am not doing something meaningful with the depth data (rendering the raw depth appropriately with OpenGL is a separate issue xD), so it is hard to tell if everything got copied, or just a portion. That said, I'm almost positive it doesn't grab it all.
Contrasted with the corresponding VideoCallback and the call inside of copy(video, video + videoBufferSize(), video_buffer.begin()), I don't see why the above would crash. If my understanding of std::copy were wrong, this should crash too since videoBufferSize() is going to return 640*480*3*sizeof(uint8_t) = 640*480*3 = 921600. The *3 is from the fact that we have 3 uint8_t's per pixel, RGB (no A). The VideoCallback works swimmingly, as verified with OpenGL (and the fact that it's essentially identical to the samples provided with libfreenect...). FYI none of the samples I have found actually work with the raw depth data directly, all of them colorize the depth and use an std::vector<uint8_t> with RGB channels, which does not suit my needs for this project.
I'm happy to just ignore it and move on in some senses because I can get it to work, but I'm really quite perplexed as to why this breaks. Thanks for any thoughts!
The way std::copy works is that you provide start and end points of your input sequence and the location to begin copying to. The end point that you're providing is off the end of your sequence, because your depthBufferSize function is giving an offset in bytes, rather than the number of elements in your sequence.
If you remove the multiply by sizeof(uint16_t), it will work. At that point, you might also consider calling std::copy_n instead, which takes the number of elements to copy.
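A sketch of the corrected callback, assuming the same includes and members as the original (std::copy_n variant):
void Kinect360::DepthCallback(void* _depth, uint32_t timestamp) {
    lock_guard<mutex> depth_data_lock(depth_mutex);
    const uint16_t* depth = static_cast<const uint16_t*>(_depth);
    // depthBufferSize() returns a size in bytes; divide by the element
    // size to get the number of uint16_t elements to copy.
    std::copy_n(depth, depthBufferSize() / sizeof(uint16_t), depth_buffer.begin());
    new_depth_frame = true;
}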
Edit: I just realised that I didn't answer the question directly.
Based on my understanding of std::copy, it shouldn't be throwing exceptions with the input you're giving it. The only thing in that code that could throw a runtime_error is the locking of the mutex.
Considering you have undefined behaviour as a result of running off of the end of your buffer, I'm tempted to say that has something to do with it.

How to use output of CIFilter recursively as new input?

I've written my own CIFilter kernel that does some image processing on the camera signal. It takes two arguments:
Argument one is "inputImage" (the current camera image); argument two is "backgroundImage", which is initialized with the first camera image.
The filter is supposed to work recursively. The result of the filter should be used as new "backgroundImage" in the next iteration. I am calculating a background image and some variances and therefore need the result from the previous render.
Unfortunately I cannot use the output CIImage of the CIFilter in the next iteration, because the memory load keeps growing. After 10 seconds of processing it ends up with 1.4GB of RAM usage. When using the filter in a standard manner (without recursion), memory management is fine.
How can I reuse the output of a filter as input in the next iteration?
I've done an NSLog on the result image, and it told me:
background {
CISampler:0x1002e0360 image {
FEPromise: 0x1013aa230 extent [0 0 1280 720]; DOD [0 0 1280 720]; filter MyFeatureDetectFilter: 0x101388dd0; kernel coreImageKernel; image {
CISampler:0x10139e200 image {
FEBufferImage: 0x10139bee0 metadata.colorspace: HDTV; extent: [0 0 1280 720]; format: BGRA_8; uid 5
}
After some seconds the log becomes something like
}
}
}
}
}
This tells me that CIImages are 'always' prototypes of the desired operation, and using them recursively just adds the resulting CIImage 'prototype' as input into the new 'prototype'.
Over time the "rule" for rendering blows up into a huge structure of nested prototypes.
Is there any way to force CIImages to flatten the structure inside memory?
I would be happy if I could do recursive processing, because this would blow up the power of QuartzCore to the extreme.
I tried the same in QuartzComposer. Connecting the output with the input works, but takes a lot of memory, too. After some time it crashes. Then I tried to use the Queue from QC and everything worked fine. What is the "xcode" equivalent of the QC Queue? Or is there any mechanism to rewrite my kernel to keep "results" in memory for the next iteration?
It seems like what you're looking for is the CIImageAccumulator class. This allows you to use the output of a filter as its input on the next iteration.
Edit:
For an example of how to use it, you can check out this Apple sample code.
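A minimal sketch of the idea in Swift, assuming a 1280x720 extent (as in the log above) and a filter whose kernel takes "inputImage" and "backgroundImage"; the names process and accumulator are mine:
import CoreImage

let extent = CGRect(x: 0, y: 0, width: 1280, height: 720)
let accumulator = CIImageAccumulator(extent: extent, format: .BGRA8)!

func process(cameraImage: CIImage, filter: CIFilter) -> CIImage {
    filter.setValue(cameraImage, forKey: "inputImage")
    // Read back last iteration's flattened result as the new background.
    filter.setValue(accumulator.image(), forKey: "backgroundImage")
    // setImage renders the output into the accumulator's buffer, so the
    // filter graph is flattened instead of growing with each iteration.
    accumulator.setImage(filter.outputImage!)
    return accumulator.image()
}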
