GStreamer Events

I am having a bit of trouble understanding how events work in GStreamer. I understand that you can pass events to elements from the application to end a stream, block a pad, etc., but when I look at the sample code here, it seems like the program is not sending any specific event, just listening to them through probes. If the program is only listening to events through probes, then these events must be sent between elements automatically after certain things happen. However, I couldn't find any information regarding this. How do events work in GStreamer?

More information on the design of GStreamer events can be found here (https://github.com/GStreamer/gstreamer/blob/master/docs/random/events). This document describes how the various events propagate through a pipeline.
In the provided sample code, an EOS event is sent to an element with the function:
gst_pad_send_event (sinkpad, gst_event_new_eos ());
The element then flushes all of its buffers and forwards the EOS event downstream to the next element by pushing it out on its src pad. The event continues through the elements until it reaches the installed probe, which contains special logic to manipulate the pipeline when an EOS event is received.
This sample shows several things with regard to your question:
- Events are handled intrinsically within the GStreamer pipeline; the elements handle them automatically.
- Pad probes can be used to externally observe or modify events as they propagate through the pipeline.
- Events can be injected directly into a pipeline using gst_pad_send_event or gst_element_send_event (see the sketch below).
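For illustration, here is a minimal sketch, not the exact code from the sample, that combines both mechanisms: a pad probe that watches for EOS flowing downstream, and an application-side gst_pad_send_event() call that injects the EOS. The pad variables (some_src_pad, sinkpad) are placeholders.

#include <gst/gst.h>

/* Probe callback: runs for every downstream event on the probed pad. */
static GstPadProbeReturn
eos_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);
  if (GST_EVENT_TYPE (event) == GST_EVENT_EOS) {
    g_print ("EOS reached the probe; safe to modify the pipeline here\n");
    return GST_PAD_PROBE_REMOVE;  /* drop the probe once EOS has been seen */
  }
  return GST_PAD_PROBE_OK;        /* let all other events pass untouched */
}

/* In the application: observe downstream events on some_src_pad ... */
gst_pad_add_probe (some_src_pad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
    eos_probe_cb, NULL, NULL);

/* ... and inject an EOS event; the element flushes and forwards it downstream. */
gst_pad_send_event (sinkpad, gst_event_new_eos ());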

Related

Order of wl_display_dispatch and wl_display_roundtrip call

I am trying to make sense of which should be called first, wl_display_dispatch or wl_display_roundtrip. I have seen both orders, so I am wondering which one is correct.
1st order:
wl_display_get_registry(display); wl_registry_add_listener() // this call is just informational
wl_display_dispatch();
wl_display_roundtrip();
What I think: wl_display_dispatch() will read and dispatch whatever events the server has sent on the display fd, but in the meantime the server might still be processing requests, and for a brief time the fd might be empty.
wl_display_dispatch() returns once those events are dispatched. Then wl_display_roundtrip() is called and blocks until the server has processed all requests and put the resulting events in the event queue. So after this the event queue may still hold pending events, but there is no further call to wl_display_dispatch(). How will those pending events be dispatched? Or does wl_display_dispatch() wait for the server to process all requests and then dispatch all the events?
2nd order:
wl_display_get_registry(display); wl_registry_add_listener() // this call is just informational
wl_display_roundtrip();
wl_display_dispatch();
In this case, wl_display_roundtrip() waits for the server to process all requests and put the resulting events in the event queue, so once it returns we can assume all events sent by the server are available in the queue. Then wl_display_dispatch() is called, which dispatches all pending events.
The 2nd order looks correct and logical to me, as there is no chance of leftover pending events in the queue, but I have seen the 1st order in many places, including the Weston client example code, so I am confused about the correct calling order.
It would be great if someone could clarify here.
Thanks in advance
2nd order is correct.
The client can't do much without getting the proxies (handles for the global objects). What I mean is, the client can only send requests by binding to the global objects advertised by the server, so it has to block until all the globals are bound in the registry listener callback.
For example, for the client to create a surface you need to bind the wl_compositor interface, then the shell interface to give the surface a role, then wl_shm (for shared memory), and so on. wl_display_dispatch() cannot guarantee that all the events have been processed; if you are lucky it may happen to dispatch them all, but not every time. So you should use wl_display_roundtrip() at least for the registry.
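As a rough sketch of what this suggests (names and the choice of wl_compositor alone are placeholders, not taken from the Weston examples): bind the globals in the registry listener, call wl_display_roundtrip() once so the binding is guaranteed to have completed, then enter the normal wl_display_dispatch() loop.

#include <string.h>
#include <wayland-client.h>

static struct wl_compositor *compositor = NULL;

static void
registry_global (void *data, struct wl_registry *registry,
                 uint32_t name, const char *interface, uint32_t version)
{
    /* Bind the compositor global as soon as the server advertises it. */
    if (strcmp (interface, "wl_compositor") == 0)
        compositor = (struct wl_compositor *)
            wl_registry_bind (registry, name, &wl_compositor_interface, 1);
}

static void
registry_global_remove (void *data, struct wl_registry *registry, uint32_t name)
{
}

static const struct wl_registry_listener registry_listener = {
    registry_global,
    registry_global_remove
};

int
main (void)
{
    struct wl_display *display = wl_display_connect (NULL);
    struct wl_registry *registry = wl_display_get_registry (display);
    wl_registry_add_listener (registry, &registry_listener, NULL);

    /* Blocks until the server has processed the get_registry request and all
     * resulting global events have been dispatched: after this returns,
     * "compositor" is either bound or genuinely not advertised. */
    wl_display_roundtrip (display);

    /* ... bind the rest, create surfaces, etc., then the ordinary loop: */
    while (wl_display_dispatch (display) != -1) {
        /* handle events */
    }
    return 0;
}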

How do "special" epoll flags correspond to kqueue ones?

I'm struggling to draw a parallel between epoll and kqueue flags, specifically EPOLLONESHOT, EPOLLET, EPOLLEXCLUSIVE and EV_CLEAR/EV_DISPATCH/EV_ONESHOT. I'm investigating kqueue for the first time; I have only had experience with epoll.
EV_DISPATCH
It feels like a mix of the EPOLLEXCLUSIVE and EPOLLONESHOT flags; from the kqueue documentation:
EV_DISPATCH  Disable the event source immediately after delivery of an event. See EV_DISABLE above.
EV_DISABLE   Disable the event so kevent() will not return it. The filter itself is not disabled.
Do I understand the documentation correctly that the event is signalled and then immediately discarded if there was at least one kqueue instance which polled for this event? That is, if we poll a socket for EVFILT_READ on two kqueues, only one will receive it, and then, until the same event is re-enabled with EV_ENABLE, there won't be any further events at all, even if new data comes to the socket?
EV_CLEAR
Looks like it is close to EPOLLET; from the kqueue documentation:
EV_CLEAR  After the event is retrieved by the user, its state is reset. This is useful for filters which report state transitions instead of the current state. Note that some filters may automatically set this flag internally.
So, for example, given the same socket with EVFILT_READ, all kqueues that poll it simultaneously will wake up with EVFILT_READ. If, however, not all data is read (i.e. until EAGAIN), no further events are reported. Only if all the data was read and new data arrives would a new EVFILT_READ event be triggered. Is that correct?
EV_ONESHOT
Looks like it maps to EPOLLONESHOT; from the kqueue documentation:
EV_ONESHOT  Causes the event to return only the first occurrence of the filter being triggered. After the user retrieves the event from the kqueue, it is deleted.
Questions
So, the questions:
Is my understanding correct? Did I understand these special flags right, compared to epoll? The documentation seems a bit tricky to me; perhaps the problem is that I've only used epoll before and haven't yet played with kqueue.
Could you please point me to good sources or examples of kqueue techniques? It would be nice if they were not as complex as Boost.Asio, and also nice if they were written in C.
Can these flags be combined? For example, EPOLLONESHOT cannot be combined with EPOLLEXCLUSIVE, but EV_DISPATCH seems to be exactly something in between these two flags.
Thank you for your help!
References
kqueue(2): FreeBSD System Calls Manual
epoll(7): Linux Programmer's Manual
epoll_ctl(2): Linux Programmer's Manual
EV_CLEAR is not equal to EPOLLET. For example, if a listening socket has 5 pending connections and you don't consume all of them (accept until EAGAIN), then with EV_CLEAR you won't get an EVFILT_READ event from kevent() until the 6th connection appears.
EPOLLEXCLUSIVE is for distributing wakeups when the same fd is added to several epoll instances (e.g. one per CPU/thread); it isn't related to EV_DISPATCH.
EV_ONESHOT means the knote is deleted after the event is triggered, while EV_DISPATCH only disables it.
If one socket fd is registered with several kqueues, the event is broadcast to all of them when it is triggered.
EV_ONESHOT is almost equal to EPOLLONESHOT; it is useful when different threads need to call kevent() on the same kqueue fd.
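A small sketch of what the three registrations look like in code (sockfd is assumed to be an existing non-blocking socket; the three EV_SET calls are alternatives shown back to back, not meant to be combined on one knote):

#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <unistd.h>

void
register_examples (int sockfd)
{
    int kq = kqueue ();
    struct kevent kev;

    /* EV_CLEAR: internal state is reset after the event is retrieved, so a new
     * EVFILT_READ fires when new data (or a new connection) arrives, even if
     * the old backlog was never drained. */
    EV_SET (&kev, sockfd, EVFILT_READ, EV_ADD | EV_CLEAR, 0, 0, NULL);
    kevent (kq, &kev, 1, NULL, 0, NULL);

    /* EV_DISPATCH: deliver once, then disable the knote until it is explicitly
     * re-enabled (roughly EPOLLONESHOT plus re-arming with EPOLL_CTL_MOD,
     * except that the registration itself is kept). */
    EV_SET (&kev, sockfd, EVFILT_READ, EV_ADD | EV_DISPATCH, 0, 0, NULL);
    kevent (kq, &kev, 1, NULL, 0, NULL);
    /* ... after handling the delivered event, re-arm it: */
    EV_SET (&kev, sockfd, EVFILT_READ, EV_ENABLE, 0, 0, NULL);
    kevent (kq, &kev, 1, NULL, 0, NULL);

    /* EV_ONESHOT: deliver once, then the knote is deleted outright. */
    EV_SET (&kev, sockfd, EVFILT_READ, EV_ADD | EV_ONESHOT, 0, 0, NULL);
    kevent (kq, &kev, 1, NULL, 0, NULL);

    close (kq);
}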

Adding processing delay in VEINS app layer

How can I add a processing delay into an app layer module such as TraCIDemo11p?
For example, when a beacon arrives, the module should virtually do some processing and then perform some action (sending back a beacon).
Also, should I worry about adding a message queue as well in this case (because the module will continuously be getting beacons from other vehicles)?
How to model processing delay is covered in the introductory OMNeT++ tutorials, for example step 6 of the TicToc tutorial:
In OMNeT++ such timing is achieved by the module sending a message to
itself. Such messages are called self-messages (but only because of
the way they are used, otherwise they are ordinary message objects).
As a quick hack, you can also simply specify a send delay for events sent from the application to lower layers. This models an application that can instantly receive all messages and can handle an arbitrary number of messages at the same time, but that takes some time to send a reply.
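A minimal OMNeT++ sketch of the self-message pattern, in the spirit of the TicToc tutorial rather than the actual TraCIDemo11p API (the DelayedResponder module name, the 5 ms delay, and the "out" gate are made up for illustration):

#include <omnetpp.h>
using namespace omnetpp;

class DelayedResponder : public cSimpleModule
{
  private:
    cMessage *processingDone = nullptr;  // self-message marking the end of "processing"
    simtime_t processingDelay = 0.005;   // 5 ms, an arbitrary example value

  public:
    virtual ~DelayedResponder() { cancelAndDelete(processingDone); }

  protected:
    virtual void initialize() override {
        processingDone = new cMessage("processingDone");
    }

    virtual void handleMessage(cMessage *msg) override {
        if (msg == processingDone) {
            // The modelled processing delay has elapsed: send the reply now.
            send(new cMessage("replyBeacon"), "out");
        } else {
            // A beacon arrived; "process" it for processingDelay seconds.
            // (A real module would queue beacons that arrive while it is busy.)
            if (!processingDone->isScheduled())
                scheduleAt(simTime() + processingDelay, processingDone);
            delete msg;
        }
    }
};

Define_Module(DelayedResponder);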

How to ignore events in LabView triggered outside of a particular sequence frame?

Using event structures in LabView can get confusing, especially when mixing them with a mostly synchronous workflow. My question is, when an event structure exists in one frame of a sequence, how can I force it to ignore events (e.g. mousedown on a particular button) that were triggered while the workflow is in another frame of the sequence?
Currently, the event structures only process events at the correct frame in the sequence, but if an event was triggered while the workflow was in a previous frame, they process those too. I want them to ignore any events that weren't triggered in the frame that the event structure exists within.
http://puu.sh/hwnoO/acdd4c011d.png
Here's part of my workflow. If the mousedown is triggered while the left part is executing, I want the event structure to ignore those events once the sequence reaches it.
Instead of placing the event structure inside your main program sequence, put it in a separate loop and have it pass the details of each event to the main sequence by means of a queue. Then you can discard the details of the events you don't want by flushing the queue at the appropriate point.
Alternatively you could use a boolean control to determine whether the event loop sends event details to the queue or discards them, and toggle the boolean with a local variable from the main sequence.
You can register for events dynamically. Registration is the point in time at which the event structure starts enqueueing events, and in your case this happens when the VI the event structure is in enters run mode (meaning it's executing or one of its callers is). You can change it so that you register using the Register for Events node and then you would only get events from that point on. When you unregister you will stop getting events.
There's a very good presentation by Jack Dunaway going into some details about events here.
You can find the code for it here.
In LabVIEW 2013 and later there are additional options for controlling the events queue, but I won't go into them here.
http://puu.sh/hwsBE/fe50dee671.png
I couldn't figure out how to flush the event queue for built-in event types like mousedown, but I managed to get around that by creating a static reference to the VI and setting the cursor to busy during the previous sequence, disabling clicking. Then when the sequence for the event structure is reached, I unset the cursor from busy, which re-enables clicking.

Event handler loop intersecting Stream run-loop

I am trying to make a socket server that spews mouse move events, in Cocoa.
This thread: Mouse tracking daemon
has info regarding the mouse event handler, which was really helpful; however, I need to stream these events out via a socket.
Using http://developer.apple.com/mac/library/documentation/Cocoa/Conceptual/Streams/Articles/PollingVersusRunloop.html#//apple_ref/doc/uid/20002275-CJBEDDBG
as a guide was helpful, but I have a disconnect with regard to intersecting the run loop of the stream with the event handler loop.
All I really want is when I get a mouse move event, to spit it out of the socket. Do I even need a run-loop for the stream? If not, how do I do this??
Thanks for any input!
Chris
All I really want is when I get a mouse move event, to spit it out of the socket.
From the documentation you linked to:
A potential problem with stream processing is blocking. A thread that is writing to … a stream might have to wait indefinitely until there is … space on the stream to put bytes….
If you simply did a blocking write, you could block forever, hanging your application.
If you simply did a non-blocking write, you may only write part of what you intended to. This will confuse the other side if you don't remember what you have left and try to send it again later.
Enter the run loop. Mouse-move events come in on the run loop, too—you don't need or want a separate run loop. “Intersecting” the two sources of events is exactly what a run loop is for.
You'll want symmetry between your event handlers: Each should either send some bytes or remember some state (using a couple of instance variables for the latter).
In the mouse-moved handler, if you have previously received a has-space-available event and did not have a mouse-moved event to send, send the one you just got. Otherwise, remember the event for later. (Keep only one event at a time—if you get another one, throw the older one away.)
In the has-space-available handler, if you have a mouse-moved event that you have not sent, send it now. Otherwise, remember that you have space available, so you can use it at your next mouse-moved event.
Either way, when you try to write the bytes and only write some of them, remember the bytes and where you left off. You should only start sending a new mouse-moved event after completely sending a previous one.
Note that the solution as I have described it will drop events if you're getting events faster than you can send them out. This is not a problem: If you're getting them faster than you can send them out, and you keep them around until you can send them out, then either they will pile up and tip over (you run out of memory and your app crashes/your app stops working/you bog down the system) or the user will see instances of “catch-up” where the mouse on the receiving end slowly replays all the events as they slowly come in. I say it's better, in your case, to drop events you couldn't send and just let the receiving mouse teleport across space to make up the lost time.
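The bookkeeping described above could look roughly like this; a plain C++ sketch of the state machine rather than actual Cocoa/NSStream code, with sendSome() standing in for a non-blocking socket write and the encoding of a mouse event left as a plain string:

#include <sys/socket.h>
#include <sys/types.h>
#include <cstddef>
#include <optional>
#include <string>

struct MouseEventStreamer {
    int fd = -1;                              // connected, non-blocking socket
    std::optional<std::string> pendingEvent;  // latest unsent event (only one kept)
    std::string inFlight;                     // bytes of a partially sent event
    std::size_t sentSoFar = 0;
    bool spaceAvailable = false;

    // Call this from the mouse-moved handler.
    void onMouseMoved(const std::string &encodedEvent) {
        pendingEvent = encodedEvent;          // drop any older unsent event
        if (spaceAvailable)
            trySend();
    }

    // Call this from the stream's has-space-available handler.
    void onSpaceAvailable() {
        spaceAvailable = true;
        trySend();
    }

private:
    void trySend() {
        // Finish a partially sent event before starting a new one.
        if (sentSoFar < inFlight.size()) {
            sentSoFar += sendSome(inFlight.data() + sentSoFar, inFlight.size() - sentSoFar);
            spaceAvailable = false;           // wait for the next space event
            return;
        }
        if (pendingEvent) {
            inFlight = *pendingEvent;
            pendingEvent.reset();
            sentSoFar = sendSome(inFlight.data(), inFlight.size());
            spaceAvailable = false;
        }
    }

    // Non-blocking write; EAGAIN (or any error) is treated as "wrote nothing".
    std::size_t sendSome(const char *buf, std::size_t len) {
        ssize_t n = ::send(fd, buf, len, 0);
        return n > 0 ? static_cast<std::size_t>(n) : 0;
    }
};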
