Can I create one AVFrame and use it for decoding all frames? That is, can I call av_frame_alloc() once, decode all frames, and then call av_frame_unref() once? Or should I call av_frame_alloc / av_frame_unref for each frame?
And when exactly should I call av_frame_unref — before decoding or after decoding?
A.

    av_frame_alloc()
    av_frame_unref()
    (decoding...)

B. Or this variant:

    av_frame_alloc()
    (decoding...)
    av_frame_unref()
You can use a single AVFrame struct for the entire decoding/encoding process, calling av_frame_alloc() only once.
av_frame_unref() is only needed when reference counting is enabled for your encoding/decoding context; in that case, call it once per frame, after you are done with that frame's data.
Use av_frame_free() to free the frame struct and all its buffers at the end of your encoding/decoding process.
See ffmpeg's official examples for how to use them:
demuxing_decoding
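As a rough illustration, here is a minimal sketch of a decode loop that reuses one frame, along the lines of variant B above (error handling is trimmed, and fmt_ctx, dec_ctx and pkt are assumed to be set up elsewhere):

    AVFrame *frame = av_frame_alloc();          /* allocate the struct once */
    if (!frame)
        return AVERROR(ENOMEM);

    while (av_read_frame(fmt_ctx, pkt) >= 0) {
        if (avcodec_send_packet(dec_ctx, pkt) >= 0) {
            while (avcodec_receive_frame(dec_ctx, frame) >= 0) {
                /* ... use frame->data / frame->linesize here ... */
                av_frame_unref(frame);          /* drop the reference, keep the struct */
            }
        }
        av_packet_unref(pkt);
    }

    av_frame_free(&frame);                      /* free the struct itself at the end */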
Reference counting is a generic mechanism used when a dynamically allocated resource is shared (e.g. among multiple threads). It prevents one thread from freeing the resource while others are still using it, and is often implemented with a simple atomic counter associated with the object.
Each thread that accesses the resource calls addref, which typically increments the counter by 1; when a thread is done with the resource, it calls unref, which decrements the counter (in FFmpeg these are av_frame_ref and av_frame_unref, if I'm not mistaken).
This ensures that the resource remains valid until every thread using it is done with it.
Eventually, when the counter reaches zero (no users left), the resource is freed safely.
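To make that concrete, here is a toy sketch of such an atomic counter in plain C11. It illustrates the general idea only, not FFmpeg's actual implementation, and the shared_buf/buf_* names are made up:

    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct {
        atomic_int refcount;
        void *data;
    } shared_buf;

    shared_buf *buf_new(size_t size) {
        shared_buf *b = malloc(sizeof(*b));
        atomic_init(&b->refcount, 1);              /* creator holds the first reference */
        b->data = malloc(size);
        return b;
    }

    void buf_addref(shared_buf *b) {
        atomic_fetch_add(&b->refcount, 1);         /* another user: count up */
    }

    void buf_unref(shared_buf *b) {
        if (atomic_fetch_sub(&b->refcount, 1) == 1) {   /* we were the last user */
            free(b->data);
            free(b);
        }
    }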
I want to clone an AVFrame, so I call the av_frame_clone function.
Then I want to free all memory allocated by the old AVFrame, so I call the av_frame_free function. However, the memory pointed to by data is not freed by av_frame_free. So what is the correct way to clone and delete an AVFrame in ffmpeg?
Thanks for any responses.
The docs for av_frame_clone() say:
Create a new frame that references the same data as src. This is a
shortcut for av_frame_alloc()+av_frame_ref().
Those for av_frame_free() say:
Free the frame and any dynamically allocated objects in it, e.g.
extended_data. If the frame is reference counted, it will be
unreferenced first.
So, combining these two functions looks correct.
What happens to the original frame? Presumably it still needs an unref?
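For what it's worth, a sketch of the pattern the docs suggest; src is assumed to be a valid, reference-counted AVFrame, and because the clone holds its own reference, freeing the original does not invalidate the shared buffers:

    AVFrame *copy = av_frame_clone(src);   /* shortcut for av_frame_alloc() + av_frame_ref() */
    if (!copy)
        return AVERROR(ENOMEM);

    /* Release the original's reference; the shared buffers stay alive
     * because "copy" still references them. */
    av_frame_free(&src);

    /* ... use copy ... */

    av_frame_free(&copy);                  /* last reference dropped: buffers freed here */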
I'm trying to optimize a program that issues all OpenGL ES calls on the main thread. The main performance issue seems to be frequent buffer uploads via glBufferData, more specifically a memcpy inside this function that runs synchronously on the main thread (the buffers are pretty large).
My current plan is to instead map the buffer on the main thread using glMapBuffer, hand the pointer to a different thread that performs the memcpy, and, once that thread has finished, call glUnmapBuffer back on the main thread. After that, the buffer is used for rendering.
Would this approach work, or is it dangerous to use glMapBuffer pointers on a thread that doesn't own the GL context? Or is there another way to ensure no memcpy is performed on the main thread and everything is done on the pipeline thread?
Regards
Once you've mapped the buffer, the pointer is a "normal" CPU pointer, so it can be used just like any other CPU pointer, including cross-thread access.
Just make sure that you have completed any writes and synchronized the threads before calling glUnmapBuffer().
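A minimal sketch of that flow, assuming an ES 3.0 context (where glMapBufferRange is core) and plain pthreads for the worker; vbo, size and src_data are assumed to exist, and copy_job/copy_job_fn are illustrative names:

    #include <string.h>
    #include <pthread.h>
    #include <GLES3/gl3.h>

    struct copy_job { void *dst; const void *src; size_t size; };

    static void *copy_job_fn(void *arg)
    {
        struct copy_job *j = arg;
        memcpy(j->dst, j->src, j->size);   /* the big copy runs off the GL thread */
        return NULL;
    }

    /* On the GL thread: */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    void *dst = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);

    struct copy_job job = { dst, src_data, size };
    pthread_t worker;
    pthread_create(&worker, NULL, copy_job_fn, &job);

    /* The join (or any equivalent sync) must happen before unmapping. */
    pthread_join(worker, NULL);
    glUnmapBuffer(GL_ARRAY_BUFFER);        /* back on the GL thread */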
I use compute shaders to compute a triangle list and store it in a RWStructuredBuffer. For testing, I read this buffer back and pass it to the IA via context.InputAssembler.SetVertexBuffers(...). This approach works, but it is only valid for checking the data for correctness.
Now I want to bind the (already existing) buffer to the IA stage using a resource view (i.e. without passing a pointer to the vertex buffer).
I am reading some good books (Frank D. Luna, Jason Zink), but they never mention this case.
===============
EDIT:
The syntax I am using here is imposed by the SharpDX wrapper.
I can bind the buffer to the vertex shader via context.VertexShader.SetShaderResource(...), binding a ResourceView. In the VS I use SV_VertexID to access the buffer. So I HAVE a working solution for the moment, but there might be cases in the future where I must bind the buffer to the input assembler.
Simply put, you can't bind a structured buffer to the IA stage, at least not directly; the runtime will not allow it.
If you set ResourceOptionFlags.BufferStructured in OptionFlags, you are not allowed to use VertexBuffer/IndexBuffer/StreamOutput/ConstantBuffer/RenderTarget/Depth as bind flags; resource creation will fail.
One option, which costs you a GPU copy, is to create a second buffer with VertexBuffer BindFlags and Default usage (the same size as your structured buffer).
Once you are done processing your structured buffer, call:
DeviceContext.CopyResource
and you'll have a standard vertex buffer ready to use.
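For reference, the same steps in raw C D3D11 terms; SharpDX's DeviceContext.CopyResource wraps the same underlying call, and device, ctx, structured_buf and buf_size are assumed to exist:

    #define COBJMACROS        /* expose the C-style method macros used below */
    #include <d3d11.h>

    D3D11_BUFFER_DESC desc = {0};
    desc.ByteWidth = buf_size;                    /* same size as the structured buffer */
    desc.Usage     = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;    /* plain vertex buffer, no structured flag */

    ID3D11Buffer *vertex_buf = NULL;
    ID3D11Device_CreateBuffer(device, &desc, NULL, &vertex_buf);

    /* Each frame, after the compute shader has written the triangle list: */
    ID3D11DeviceContext_CopyResource(ctx,
                                     (ID3D11Resource *)vertex_buf,
                                     (ID3D11Resource *)structured_buf);

    /* vertex_buf can now be bound normally with IASetVertexBuffers(). */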
I'm writing a kernel module which uses a customized print-on-screen system. Basically, each time a print is involved, the string is inserted into a linked list.
Every X seconds I need to process the list and perform some operations on the strings before printing them.
Basically I have two choices for implementing such a filter:
1) A timer (which re-arms itself at the end of its handler)
2) A kernel thread which sleeps for X seconds
While the filter is doing its work, nothing else may use the linked list; conversely, while a string is being inserted, the filter function must wait.
AFAIK a timer runs in interrupt context, so it cannot sleep, but what about kernel threads? Can they sleep? If so, is there some reason not to use them in my project? What other solution could be used?
To summarize: my filter function has only three requirements:
1) It must be able to printk
2) While it is using the list, anything else trying to access the list must block until the filter function finishes execution
3) It must run every X seconds (not a real-time requirement)
kthreads are allowed to sleep. (However, not all kthreads offer sleepful execution to all clients; softirqd, for example, would not.)
But then again, you could also use spinlocks (and pay their associated cost) and do without the extra thread; that's basically what the timer approach amounts to (protecting the list with spin_lock_bh). It's a tradeoff, really.
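A minimal sketch of the kthread variant, assuming a mutex-protected list; print_list, print_lock and the period constant are illustrative names:

    #include <linux/module.h>
    #include <linux/kthread.h>
    #include <linux/delay.h>
    #include <linux/mutex.h>
    #include <linux/list.h>
    #include <linux/err.h>

    #define FILTER_PERIOD_SECS 5          /* "X seconds" */

    static LIST_HEAD(print_list);
    static DEFINE_MUTEX(print_lock);
    static struct task_struct *filter_task;

    static int filter_fn(void *data)
    {
        while (!kthread_should_stop()) {
            msleep_interruptible(FILTER_PERIOD_SECS * 1000);  /* sleeping is fine here */

            mutex_lock(&print_lock);      /* inserters block until we are done */
            /* ... walk print_list, filter the strings, printk() them ... */
            mutex_unlock(&print_lock);
        }
        return 0;
    }

    static int __init filter_init(void)
    {
        filter_task = kthread_run(filter_fn, NULL, "print-filter");
        return IS_ERR(filter_task) ? PTR_ERR(filter_task) : 0;
    }

    static void __exit filter_exit(void)
    {
        kthread_stop(filter_task);
    }

    module_init(filter_init);
    module_exit(filter_exit);
    MODULE_LICENSE("GPL");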
each time a print is involved the string is inserted into a linked list
I don't really know if you meant print or printk. But if you're talking about printk(), you would need to allocate memory, and you are in trouble because printk() may be called in an atomic context. That leaves you the option of using a circular buffer (and thus you should be tolerant of dropping some strings, because you might not have enough memory to save them all).
Every X seconds I need to process the list and perform some operations on the strings before printing them.
In that case, I would not even use a kernel thread: I would do the processing in print() itself, if it is not too costly.
Otherwise, I would create a new system call:
sys_get_strings() or something, that would dump the whole linked list into userspace (and remove entries from the list as they are copied).
This way the whole behavior is controlled from userspace. You could create a daemon that calls the syscall every X seconds. You could also do all the costly processing in userspace.
You could also create a new device, say /dev/print-on-screen:
dev_open would allocate the memory, and print() would no longer be a no-op but would feed the data into the device's pre-allocated memory (in case print() is used in atomic context and all).
dev_release would throw everything out
dev_read would get you the strings
dev_write could do something on your print-on-screen system
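A skeleton of that device using the kernel's misc framework; buffer management and locking are elided, and the pos_* names are made up:

    #include <linux/module.h>
    #include <linux/miscdevice.h>
    #include <linux/fs.h>
    #include <linux/slab.h>
    #include <linux/uaccess.h>

    static char *pos_buf;                 /* pre-allocated message storage */

    static int pos_open(struct inode *inode, struct file *file)
    {
        pos_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);   /* dev_open allocates */
        return pos_buf ? 0 : -ENOMEM;
    }

    static int pos_release(struct inode *inode, struct file *file)
    {
        kfree(pos_buf);                   /* dev_release throws everything out */
        pos_buf = NULL;
        return 0;
    }

    static ssize_t pos_read(struct file *file, char __user *buf,
                            size_t count, loff_t *ppos)
    {
        /* ... copy_to_user() the accumulated strings and drain the buffer ... */
        return 0;
    }

    static const struct file_operations pos_fops = {
        .owner   = THIS_MODULE,
        .open    = pos_open,
        .release = pos_release,
        .read    = pos_read,
    };

    static struct miscdevice pos_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "print-on-screen",
        .fops  = &pos_fops,
    };

    module_misc_device(pos_dev);
    MODULE_LICENSE("GPL");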
I'm using the Windows multimedia APIs to record and process wave audio (waveInOpen and friends). I'd like to use a small number of buffers in a round robin fashion.
I know that you're supposed to use waveInPrepareHeader before adding a buffer to the device, and that you're supposed to call waveInUnprepareHeader after the wave device has "returned the buffer to the application" and before you deallocate it.
My question is, do I have to unprepare and re-prepare in order to re-use a buffer? Or can I just add a previously used buffer back to the device?
Also, does it matter what thread I do this on? I'm using the callback function, which seems to be called on a worker thread that belongs to the audio system. Can I call waveInUnprepareHeader, waveInPrepareHeader, and waveInAddBuffer on that thread, during the callback?
Yes, in my experience you need to call prepare and unprepare every time; from memory, it returns an error if you try to reuse the same header without doing so.
And you typically call prepare and unprepare on whatever thread you are handling the callbacks on.
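In code, that cycle inside the callback's WIM_DATA handler would look roughly like this; hwi and hdr come from the callback's parameters:

    /* Unprepare, consume, then re-prepare and re-queue the returned buffer. */
    waveInUnprepareHeader(hwi, hdr, sizeof(*hdr));
    /* ... process hdr->lpData / hdr->dwBytesRecorded ... */
    waveInPrepareHeader(hwi, hdr, sizeof(*hdr));
    waveInAddBuffer(hwi, hdr, sizeof(*hdr));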
When you create the buffers, call waveInPrepareHeader. Then you can simply set the prepared flag before you call waveInAddBuffer on a buffer that was returned from the device.
pHdr->dwFlags = WHDR_PREPARED;
You can do this on the callback thread (or in the message handler).
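Putting this answer's shortcut together, a sketch of a complete callback that re-queues each returned buffer; note that resetting dwFlags by hand is this answer's observation, not documented API behavior:

    #include <windows.h>
    #include <mmsystem.h>

    static void CALLBACK wave_in_proc(HWAVEIN hwi, UINT msg, DWORD_PTR inst,
                                      DWORD_PTR param1, DWORD_PTR param2)
    {
        if (msg == WIM_DATA) {
            WAVEHDR *hdr = (WAVEHDR *)param1;

            /* ... process hdr->lpData / hdr->dwBytesRecorded ... */

            hdr->dwFlags = WHDR_PREPARED;   /* skip the unprepare/re-prepare cycle */
            hdr->dwBytesRecorded = 0;
            waveInAddBuffer(hwi, hdr, sizeof(*hdr));
        }
    }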