We are using Clappr Player to live stream HLS.
We are trying to make the video start playing quicker.
Anyone know if there is a way to set the initial buffer size of Clappr Player?
I want to set it lower to speed up video start-up.
Thanks.
Clappr uses hls.js. You should be able to pass an hlsjsConfig object in the player's configuration. You can find the fine-tuning options here.
That being said, you probably need to tweak the HLS encoding too; shorter segment durations, for example, usually reduce startup time.
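As a sketch of what that could look like (the keys under hlsjsConfig are hls.js tuning options; the URL and the numeric values here are illustrative placeholders, not recommendations):

```javascript
// Sketch: passing hls.js tuning options through a Clappr configuration.
// The source URL and the numbers are placeholders -- tune for your stream.
const playerConfig = {
  source: "https://example.com/live/stream.m3u8",
  playback: {
    hlsjsConfig: {
      maxBufferLength: 10,              // seconds of forward buffer hls.js aims to keep
      maxBufferSize: 20 * 1000 * 1000,  // soft cap on buffered bytes
      liveSyncDurationCount: 3          // how many segments back from the live edge to start
    }
  }
};

// In the page you would then do something like:
// new Clappr.Player({ parentId: "#player", ...playerConfig });
console.log(playerConfig.playback.hlsjsConfig.maxBufferLength);
```

Keep in mind that smaller buffers start faster but rebuffer more easily on an unstable connection.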
I want to hardsub all the movies and add a watermark to them. I used ffmpeg once, but it's slow. Can you recommend another tool, or a way to use ffmpeg faster?
Hardsubbing and watermarking both require transcoding the entire movie. That is a slow operation regardless of the software you use.
Maybe you can consider using a separate subtitle track instead. Watermarks can be applied on the player side too. That way you won't need to transcode at all.
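If you really do need to burn everything in, here is a rough ffmpeg sketch (all file names are placeholders; the -preset option trades encoding speed against compression efficiency, which is the main speed knob for libx264):

```shell
# Burn in subtitles and overlay a watermark in a single pass.
# in.mp4, subs.srt, logo.png and out.mp4 are placeholder names.
ffmpeg -i in.mp4 -i logo.png \
  -filter_complex "[0:v]subtitles=subs.srt[sub];[sub][1:v]overlay=10:10[v]" \
  -map "[v]" -map 0:a -c:v libx264 -preset veryfast -c:a copy out.mp4
```

Doing both filters in one pass avoids encoding the video twice, and copying the audio (-c:a copy) skips re-encoding it entirely.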
I have ported Cobalt8 code onto our embedded system, and it is now able to decode VP9 up to 4k quality.
However, I ran into an issue with fast forward and rewind. Specifically, when I fast-forward a few times
and then rewind, there is a chance that audio or video streaming data will stop coming into
the range buffer. I am not familiar with the streaming mechanism. It would be great if someone could shed some light on
where to look or what to check to debug this issue.
PS: I have drawn a quick picture to show the problem.
Thanks.
Is it possible for you to tell me the values of the following settings in your configuration_public.h:
SB_MEDIA_SOURCE_BUFFER_STREAM_AUDIO_MEMORY_LIMIT
SB_MEDIA_SOURCE_BUFFER_STREAM_VIDEO_MEMORY_LIMIT
SB_MEDIA_MAIN_BUFFER_BUDGET
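For reference, these appear in configuration_public.h as plain #define values; the numbers below are purely illustrative, not recommendations for any particular port:

```c
/* Illustrative values only -- check your own port's configuration_public.h. */
#define SB_MEDIA_SOURCE_BUFFER_STREAM_AUDIO_MEMORY_LIMIT (3 * 1024 * 1024)
#define SB_MEDIA_SOURCE_BUFFER_STREAM_VIDEO_MEMORY_LIMIT (30 * 1024 * 1024)
#define SB_MEDIA_MAIN_BUFFER_BUDGET (50 * 1024 * 1024)
```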
I am implementing a camera preview application. I am using V4L, and so far I have basically been using this code: https://linuxtv.org/downloads/v4l-dvb-apis/capture-example.html
In this example, or in any other example I found for that matter, I could not find a way to change the frame format to MJPEG to get a higher fps rate. Is there a way to tell V4L to use MJPEG instead of YUY2?
Found it, it is actually pretty simple. Just change the pixelformat in the format struct to V4L2_PIX_FMT_MJPEG.
So
format.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
becomes
format.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;
I'm using Media Foundation and the IMFSampleGrabberSinkCallback to play back video files and render them to a texture. I am able to get video samples in the IMFSampleGrabberSinkCallback::OnProcessSample method, but those samples are compressed: I have far fewer samples than I have pixels in my render target. According to this, the media session should load any decoder that is needed (if available), but that does not seem to be the case. Even if I create the decoder and add it to the topology myself, the video samples are still compressed. Is there anything in particular I am missing here?
Thanks.
I am working on an application and I have a problem I just can't seem to find a solution for. The application is written in VC++. What I need to do is display a YUV video feed with text on top of it.
Right now it works correctly by drawing the text in the OnPaint method using GDI and the video on a DirectDraw overlay. I need to get rid of the overlay because it causes too many problems: it won't work on some video cards, on Vista, 7, etc.
I can't figure out a way to accomplish the same thing in a more compatible way. I can draw the video using DirectDraw with a back buffer and copy it to the primary buffer just fine. The issue here is that the text being drawn with GDI flickers because of how often the video is refreshed. I would really like to keep the code that draws the text intact if possible, since it works well.
Is there a way to draw the text directly to a DirectDraw buffer or memory buffer or something and then blt it to the back buffer? Should I be looking at another method altogether? The two important OSes are XP and 7. If anyone has any ideas, just let me know and I will test them out. Thanks.
Take a look at DirectShow and the Ticker sample on microsoft.com:
DirectShow Ticker sample
This sample uses the Video Mixing Renderer to blend video and text. It uses the IVMRMixerBitmap9 interface to blend text onto the bottom portion of the video window.
DirectShow is for building filter graphs for playing back audio or video streams and adding different filters for various effects and for manipulation of video and audio samples.
Instead of using the Video Mixing Renderer of DirectShow, you can also use the ISampleGrabber interface. The advantage is that it is a filter which can be used with other renderers as well, for example when not showing the video on the screen but streaming it over the network or dumping it to a file.