I'd like to know when the SampleRequested event is fired in UWP. According to the official doc page it
Occurs when the MediaStreamSource requests a MediaStreamSample for a specified stream.
but I'd like to know in more detail when this request occurs. For instance, what makes this event fire? Every frame change? Every packet received from the RTSP stream?
Furthermore, I'd like to know if there is a way to "control" this event, i.e. to fire it programmatically, since I need to take the MediaStreamSample only at a specific moment and only once. It seems to happen multiple times during my RTSP streaming, affecting the latency of my stream (about 4000 ms of lag).
Thanks.
I haven't used MediaStreamSource in UWP extensively, so I can only offer a general suggestion; maybe someone more experienced will provide a more useful answer.
I presume this event is raised by the media player control to preload the stream, and you cannot control how often it is called. What you can control, however, is how soon you provide the response. The MediaStreamSourceSampleRequestedEventArgs event args have a Request property of type MediaStreamSourceSampleRequest. You can use its GetDeferral method to indicate that you need to delay delivery of the sample and return it only after a specific delay, and you can indicate "loading" to the user with the ReportSampleProgress method. When you are done, signal this on the deferral by calling deferral.Complete().
Finally, if you no longer want to provide any samples, just assign null to the Sample property.
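To make the deferral contract concrete, here is a toy model in plain C (not the actual UWP API; the struct and function names are invented stand-ins for MediaStreamSourceSampleRequest, GetDeferral and Complete): once a deferral has been taken, the request only counts as answered when Complete() is called, which is what lets you hold a sample back until the moment you choose.

```c
#include <assert.h>

/* Toy model of the deferral pattern (plain C; invented names standing in
 * for the UWP MediaStreamSourceSampleRequest API). */
struct request {
    int sample_set;      /* has a Sample been assigned? */
    int deferral_taken;  /* was GetDeferral() called? */
    int completed;       /* was deferral.Complete() called? */
};

static void get_deferral(struct request *r) { r->deferral_taken = 1; }
static void set_sample(struct request *r)   { r->sample_set = 1; }
static void complete(struct request *r)     { r->completed = 1; }

/* The source considers the request answered when... */
static int request_answered(const struct request *r) {
    if (r->deferral_taken)
        return r->completed;   /* deferred: must wait for Complete() */
    return r->sample_set;      /* otherwise: answered once Sample is set */
}
```

The point of the model: setting the sample alone is not enough once a deferral is in play, so you control exactly when the source moves on.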
I am trying to make sense of which of wl_display_dispatch() and wl_display_roundtrip() should be called first. I have seen both orders, so I am wondering which one is correct.
1st order:
wl_display_get_registry(display); wl_registry_add_listener() // this call is just informational
wl_display_dispatch();
wl_display_roundtrip();
What I think: wl_display_dispatch() will read and dispatch events from the display fd, whatever the server has sent, but in the meantime the server might still be processing requests, and for a brief time the fd might be empty.
wl_display_dispatch() returns assuming all events are dispatched. Then wl_display_roundtrip() is called and blocks until the server has processed all requests and put the resulting events in the event queue. So after this, the event queue still has pending events, but there is no further call to wl_display_dispatch(). How will those pending events be dispatched? Does wl_display_dispatch() wait for the server to process all events and then dispatch them all?
2nd order:
wl_display_get_registry(display); wl_registry_add_listener() // this call is just informational
wl_display_roundtrip();
wl_display_dispatch();
In this case, wl_display_roundtrip() waits for the server to process all requests and put the resulting events in the event queue, so once it returns we can assume all events sent by the server are available in the queue. Then wl_display_dispatch() is called, which dispatches all pending events.
The 2nd order looks correct and logical to me, as there is no chance of leftover pending events in the queue, but I have seen the 1st order in many places, including the Weston client example code, so I am confused about the correct calling order.
It would be great if someone could clarify here.
Thanks in advance
The 2nd order is correct.
The client can't do much without getting a proxy (a handle) for each global object. What I mean is: the client sends requests by binding to the global objects advertised by the server, so it has to block until all global objects have been bound in the registry listener callback.
For example, for the client to create a surface, you need to bind the wl_compositor interface, then a shell interface to give the surface a role, then wl_shm (for shared memory), and so on. wl_display_dispatch() cannot guarantee that all events have been processed; if you are lucky it may dispatch all of them, but it cannot guarantee that every time. So you should use wl_display_roundtrip() for the registry at least.
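One detail that resolves the worry about leftover events: wl_display_roundtrip() itself dispatches events while it blocks waiting for the done reply to its internal wl_display.sync request, so by the time it returns, the registry globals have already been delivered to your listener. A toy model of that contract in plain C (invented names, not the libwayland API):

```c
#include <assert.h>

/* Toy model of the Wayland event queue (plain C, NOT libwayland). */
struct toy_queue { int pending; };

/* Like wl_display_dispatch(): hands out whatever is already queued,
 * which may legitimately be nothing if the server is still working. */
static int toy_dispatch(struct toy_queue *q) {
    int n = q->pending;
    q->pending = 0;
    return n;
}

/* Like wl_display_roundtrip(): the server first finishes every earlier
 * request (here: answering the registry request with two globals), then
 * answers the sync request with a "done" event, and the roundtrip
 * dispatches everything it received while waiting. */
static int toy_roundtrip(struct toy_queue *q) {
    q->pending += 2;          /* server emits two wl_registry.global events */
    q->pending += 1;          /* then the wl_callback.done for the sync     */
    return toy_dispatch(q);   /* roundtrip dispatches internally            */
}
```

In this model, calling toy_dispatch() first (order 1) may find nothing queued yet, while toy_roundtrip() both forces the server to finish the registry request and dispatches the resulting events, leaving nothing pending; this is why the roundtrip is the call that actually matters for the registry.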
If a user is publishing to a tokbox session and for any reason that same user logs in on a different device or re-opens the session in another browser window I want to stop the 2nd one from publishing.
Luckily, on the metadata for the streams, I am saving the user id, so when there is a list of streams it's easy to see if an existing stream belongs to the user that is logged in.
When a publisher gets initialized the following happens:
Listen for session.on("streamCreated"); when this fires, subscribe to the new streams
Start publishing
The problem is that when the session gets initialized, there is no way to inspect the current streams of the session to see whether this user is already publishing. We don't know what the streams are until the on("streamCreated") callback fires.
I have a hunch that there is an easy solution that I am missing. Any ideas?
I assume that when you said you save the user ID in the stream metadata, you mean that when you initialize the Publisher, you set its "name" property. That's a great technique.
My idea is slightly a hack, but it's the best I can come up with right now. I would solve this problem by essentially breaking the subscription of streams into 2 phases:
all streams created before this client connection
all streams created after
During #1, I would check each stream's "name" property to see if it belongs to the user at this client connection. If it does, then you know they are entering the session twice, and you can set a flag (let's call it "userRejoining"). In order to know that #1 is complete, I would set a timer (this is why I call it a hack) for a reasonable amount of time, such as 1 second, each time a "streamCreated" event arrives, removing any previous timer.
Then, if the "userRejoining" flag is not set, the Publisher is initialized and published to the session.
During #2, you just subscribe to any stream that is created.
The downside is that you've now delayed your user experience of publishing by ~1 second everywhere. In larger group scenarios this could be a deal breaker, but in smaller (1:1) types of sessions this should be acceptable. I hope this explanation is clear, and if not I can try to write some sample code for you.
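The timer trick above is essentially a debounce. As a sketch, here it is modeled in plain C (the real implementation would use setTimeout/clearTimeout in JavaScript around session.on("streamCreated"); the function name and the 1-second constant are invented stand-ins): each arrival pushes the phase-1 deadline out by one quiet interval, and the first gap longer than that interval ends phase 1.

```c
#include <assert.h>

#define QUIET_MS 1000  /* "reasonable amount of time" from the answer */

/* Given arrival times (ms since connect, ascending) of streamCreated
 * events, return the time at which phase 1 completes: QUIET_MS after the
 * last event of the initial burst. Events at or after the current
 * deadline belong to phase 2 and no longer extend it. */
static long phase1_complete_at(const long *arrivals, int n) {
    long deadline = QUIET_MS;               /* timer armed at connect (t=0) */
    for (int i = 0; i < n; i++) {
        if (arrivals[i] >= deadline)        /* gap exceeded: phase 2 began  */
            break;
        deadline = arrivals[i] + QUIET_MS;  /* reset ("remove") the timer   */
    }
    return deadline;
}
```

So a burst at 100, 400 and 700 ms ends phase 1 at 1700 ms, while a stream arriving at 2000 ms is already a phase-2 stream and just gets subscribed to.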
I am having a bit of trouble understanding how events work in GStreamer. I understand that you can pass events to elements from the application to end a stream or block a pad, etc., but when I look at the sample code here, the program doesn't seem to send any specific event, just listen to them through probes. If the program is only listening to events through probes, then these events must be sent between elements automatically in some fashion after certain things happen. However, I couldn't find any information about this. How do events work in GStreamer?
More information on the design of GStreamer events can be found here (https://github.com/GStreamer/gstreamer/blob/master/docs/random/events). This document describes how the various events propagate through a pipeline.
In the provided sample code, an EOS event is sent to an element with the function:
gst_pad_send_event (sinkpad, gst_event_new_eos ());
The element then proceeds to flush all of its buffers and forwards the EOS event downstream to the next element by posting the event on its src pad. The event continues through the elements until it reaches the installed probe, which contains special logic to manipulate the pipeline when an EOS event is received.
This sample shows several things in regards to your question:
- Events are handled intrinsically within the GStreamer pipeline; the elements process and forward them automatically.
- Pad probes can be used to externally observe or modify events as they propagate through the pipeline.
- Events can be inserted directly into a pipeline using the functions gst_pad_send_event or gst_element_send_event.
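To illustrate the first two points, here is a stripped-down toy in plain C (invented types, not the GStreamer API): the application hands the event to one element, the chain forwards it downstream on its own, and a probe installed on a downstream element observes it as it passes.

```c
#include <assert.h>
#include <stddef.h>

#define EV_EOS 1

typedef int (*probe_cb)(int event_type, void *user_data);

/* Toy element: a link to the next element downstream, an optional probe
 * (like gst_pad_add_probe), and a flag showing it handled the EOS. */
struct element {
    struct element *downstream;
    probe_cb probe;
    void *probe_data;
    int got_eos;
};

/* Like gst_pad_send_event(): the chain itself walks the event downstream;
 * the application is not involved in each hop. */
static void toy_send_event(struct element *e, int event_type) {
    for (; e != NULL; e = e->downstream) {
        if (e->probe)
            e->probe(event_type, e->probe_data);  /* observer sees it pass */
        if (event_type == EV_EOS)
            e->got_eos = 1;                        /* element handles it   */
    }
}

/* A probe that merely counts EOS events, like the one in the sample. */
static int count_probe(int event_type, void *user_data) {
    if (event_type == EV_EOS)
        (*(int *)user_data)++;
    return 0;
}
```

Sending EV_EOS to the first element marks every element in the chain, and the probe on the last one fires exactly once, without the application pushing the event hop by hop.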
I am sending files (up to 100 MB) to my Android handheld using the Channel API.
I decided to create a handler to update the progress of the transfer so that the user is aware of it.
I use the Message API to send the file size to my handheld, and I update the progress by checking the size of the file every x milliseconds.
The problem is that I don't know, first, whether that's a good way of doing what I want, and second, because it's asynchronous, I have to wait until I have correctly received the file size in onMessageReceived before sending the file.
If you are using the Channel API, you can use the low-level version of the transfer (an OutputStream on the sender and an InputStream on the receiver) and then, on the sender side, update your progress bar with the amount you have written to the output stream. If you are using the sendFile() method, you have no view into the progress on the sender side, so you need to report it back using, say, the Message API, as you are doing. Instead of reporting every x milliseconds, you may want to be a bit smarter about it: since you have the size of the whole file, you probably shouldn't send a message if the change in the progress bar would not be noticeable; in other words, try to reduce the number of communications as much as possible.
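As a sketch of the "reduce the number of communications" idea, assuming you know the total file size up front, the sender-side throttling could look like this in plain C (invented names; the real code would run in your Android sender while writing to the channel's output stream):

```c
#include <assert.h>

#define STEP 5  /* report in 5% increments; tune to what is noticeable */

struct progress {
    long total;    /* total file size in bytes, known up front */
    int last_pct;  /* last percentage actually reported        */
};

/* Returns the new percentage to report, or -1 if the change since the
 * last report is too small to be worth a message. */
static int maybe_report(struct progress *p, long bytes_sent) {
    int pct = (int)((bytes_sent * 100) / p->total);
    if (pct - p->last_pct < STEP)
        return -1;            /* change not visible yet: skip the message */
    p->last_pct = pct;
    return pct;               /* worth sending a Message API update       */
}
```

Calling this after every write means you send at most ~100/STEP messages per file instead of one per timer tick.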
I have a DirectShow application written in Delphi 6 using the DSPACK component library. When I shut down my filter graphs (stop playback), I get an access violation due to a callback from the Sample Grabber DirectShow filter occurring after the object that owns the callback method has been destroyed. It doesn't happen every time, but fairly often. Can someone point me to a code sample or document describing the steps I need to take to shut down my graphs in a way that makes sure all pending Sample Grabber callbacks have been received or eliminated?
What about issuing ISampleGrabber::SetCallback(NULL, ...) prior to stopping/releasing the filter graph?
In addition, you can set an internal flag indicating termination and check it in your callbacks so they return immediately without further processing.
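The termination-flag idea can be sketched like this in plain C (invented names; in the Delphi code the flag would be a field on the owner object, set before calling SetCallback(nil) and stopping the graph): once shutdown has begun, a late callback bails out before touching any state that may already be freed.

```c
#include <assert.h>

/* Set before stopping/releasing the graph; volatile because the Sample
 * Grabber invokes the callback from a streaming thread. */
static volatile int g_shutting_down = 0;

/* Stands in for the ISampleGrabberCB buffer callback. Returns 1 if the
 * sample was processed, 0 if it was ignored due to shutdown. */
static int sample_cb(const unsigned char *buf, int len, int *processed) {
    if (g_shutting_down)
        return 0;        /* late callback: return without further work */
    (void)buf; (void)len;
    (*processed)++;      /* normal processing path */
    return 1;
}
```

In the real application you would combine both answers: set the flag, call SetCallback(NULL, ...), and only then stop the graph and free the owner object.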