YouTube media source not coming in - Cobalt

I have ported the Cobalt 8 code onto our embedded system, and it is now able to decode VP9 up to 4K. However, I ran into an issue with fast forward and rewind. Specifically, when I fast forward a few times and then follow with a rewind operation, there is a chance that audio or video streaming data will stop coming into the range buffer. I am not familiar with the streaming mechanism, so it would be great if someone could shed some light on where or what I should look at to debug this issue.
PS: I have drawn a quick picture to show the problem.
Thanks.

Is it possible for you to tell me the values of the following settings in your configuration_public.h:
SB_MEDIA_SOURCE_BUFFER_STREAM_AUDIO_MEMORY_LIMIT
SB_MEDIA_SOURCE_BUFFER_STREAM_VIDEO_MEMORY_LIMIT
SB_MEDIA_MAIN_BUFFER_BUDGET
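For reference, these are plain preprocessor defines in the platform's configuration_public.h. A rough sketch of how they might look is below; the numbers are placeholders for illustration only (each platform sets its own values), and the comments are just a reading of the macro names, not taken from the Cobalt documentation:

    // configuration_public.h -- illustrative placeholder values only.
    // Cap on buffered audio data held for a MediaSource SourceBuffer stream.
    #define SB_MEDIA_SOURCE_BUFFER_STREAM_AUDIO_MEMORY_LIMIT (3U * 1024U * 1024U)
    // Cap on buffered video data held for a MediaSource SourceBuffer stream.
    #define SB_MEDIA_SOURCE_BUFFER_STREAM_VIDEO_MEMORY_LIMIT (16U * 1024U * 1024U)
    // Overall media buffer budget for the main player.
    #define SB_MEDIA_MAIN_BUFFER_BUDGET (32U * 1024U * 1024U)

If these limits are small relative to 4K VP9 bitrates, the buffers can fill up and stall appending after seeks, which may be why the actual values matter for debugging the symptom described above.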

Related

Can you set initial buffer for Clappr Player?

We are using Clappr Player to live stream HLS.
We are trying to make the video start playing quicker.
Anyone know if there is a way to set the initial buffer size of Clappr Player?
I want to set it lower to speed up the video starting...
Thanks.
Clappr uses hls.js. You should be able to pass a hlsjsConfig object in the player's configuration. You can find the fine-tuning options here.
That being said, you probably need to tweak the HLS encoding too.

JPEG to Video stream

We are getting images from a third party and want software that compresses the images and streams them. We are wondering if anyone knows of any software/API that does this.
I saw these online but am unsure whether they are what I want:
http://www.aforgenet.com/framework/
http://splicer.codeplex.com/
Again, we are getting images from a third party and we want to stream these images as a video feed on a website (we don't want to display them as individual images).
The avifile wrapper should be one of the best options.
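Wrappers of that kind generally sit on top of the Video for Windows AVIFile API. Purely as an illustration of the underlying mechanics (not of the wrapper's own API), here is a minimal sketch that writes a sequence of already-decoded 24-bit RGB frames into an uncompressed AVI; the frame size, the 25 fps rate, and the assumption that the incoming JPEGs are decoded elsewhere are all placeholders:

    #include <windows.h>
    #include <vfw.h>
    #include <vector>
    #pragma comment(lib, "vfw32.lib")

    // Write pre-decoded 24-bit RGB frames into an uncompressed AVI file.
    // Compression and the actual web-streaming step are out of scope here.
    bool WriteFramesToAvi(const wchar_t* path,
                          const std::vector<std::vector<BYTE>>& frames,
                          int width, int height)
    {
        AVIFileInit();

        PAVIFILE file = NULL;
        if (FAILED(AVIFileOpenW(&file, path, OF_WRITE | OF_CREATE, NULL)))
        {
            AVIFileExit();
            return false;
        }

        AVISTREAMINFO si = {};
        si.fccType = streamtypeVIDEO;
        si.dwScale = 1;
        si.dwRate  = 25;                              // 25 frames per second (assumed)
        SetRect(&si.rcFrame, 0, 0, width, height);

        PAVISTREAM stream = NULL;
        AVIFileCreateStream(file, &stream, &si);

        BITMAPINFOHEADER bih = {};
        bih.biSize        = sizeof(bih);
        bih.biWidth       = width;
        bih.biHeight      = height;
        bih.biPlanes      = 1;
        bih.biBitCount    = 24;
        bih.biCompression = BI_RGB;                   // uncompressed RGB
        AVIStreamSetFormat(stream, 0, &bih, sizeof(bih));

        for (size_t i = 0; i < frames.size(); ++i)
            AVIStreamWrite(stream, (LONG)i, 1, (LPVOID)frames[i].data(),
                           (LONG)frames[i].size(), AVIIF_KEYFRAME, NULL, NULL);

        AVIStreamRelease(stream);
        AVIFileRelease(file);
        AVIFileExit();
        return true;
    }

To serve the result as a live video feed on a website you would still need an encoder and a streaming server in front of this; the sketch only covers packaging frames into a video container.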

convert swf to video in Flash Pro CS5.5

I have searched the web and here for answers but so far, the links are dead, the how-tos no longer work for the version I have, or there are no answers.
I have a swf animation with full sound and scripting that I'd like to convert into a video or an flv. For some reason, the site I post on screws with my timeline somehow (the timing is off, sounds no longer match up properly with the text) so I thought a video would work better.
I tried using File > Export > Export to movie to resolve this, exporting to an AVI. When it's scaled down to 300x400 it works just fine (though it looks like total crap). However, when I export at the full size, using full colors and no compression, I get this.
I'm not sure what to do with it. It's slanted, with lines through it, and grayscale. VLC is the only player that will run it; WMP dies with errors saying it's an invalid or corrupt format. The funny thing is, the thumbnail for the video is exactly what it should look like.
Converting an AVI to an .flv isn't the problem; I have a video converter for that. I just can't get the export to produce a proper .flv, or even a proper movie file, in the first place.
Why is it doing this to my video? Is there something better to use to convert? Is there a good one that won't plaster a giant watermark over it?
[Image: the exported video, totally screwed up]
Flabaco is an online SWF to video converter. To answer your questions: it's free and doesn't impose banners or watermarks.
"I have a swf animation with full sound and scripting that I'd like to convert into a video or an flv."
Flabaco converts scripted content. It preserves the frame rate (fps) & color. It's capable of generating professional quality HD content.
It doesn't convert sound. Nonetheless the converted quality is good and you might be able to get by using another video tool to add sound to the converted video.
You can use the online converter app here: www.Flash-Banner-Converter.com
PS: There are some older posts on StackOverflow related to your question. Just search SWF to video / Flabaco.
Kayo,
FLABACO (FLAshBAnnerCOnverter)

Where does DirectShow get image dimensions from?

We are using a DirectShow interface to capture images from a video stream. These images are presented in a fixed-size window.
Once we have captured an image we store it as a bitmap. Downstream we have the ability to add annotation to the image, for example letters in a fixed-size font.
In one of our desktop environments, the annotation has started appearing at half the size it normally appears at. This implies that the image we are merging the text onto has dimensions roughly twice as large as expected.
The system this happens on is a shared resource; some unknown individual has installed software on it that differs from our baseline.
We have two approaches: the first is to reimage the system to get our default text-size behaviour back; the second is to figure out how DirectShow manages image dimensions so that we can set the scaling on the image correctly.
A survey of the DirectShow literature indicates that the latter is not a trivial task, and the original work was done by another team that did not document what they did. Can anybody point us toward the DirectShow object we need to deal with to properly size the sampled image?
DirectShow, as a framework, does not deal with resolutions directly. Your video source (such as capture hardware) is capable of providing the video feed at certain resolutions, which you can possibly change. You normally use IAMStreamConfig, as described in Configure the Video Output Format, in order to choose the capture resolution.
Sometimes you cannot affect the capture resolution and you need to resample the image from whatever dimensions it was captured at. There is no stock filter for this; however, Media Foundation provides a suitable Video Resizer DSP which does most of the task. Unfortunately it does not fit the DirectShow pipeline smoothly, so you need fitting and/or a custom filter for the resizing.
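Where the capture resolution can be set, a minimal sketch of the IAMStreamConfig path mentioned above looks like this. It assumes you already have the capture filter in a graph built with ICaptureGraphBuilder2; the 640x480 request is only an example and the device may reject sizes it does not support:

    #include <dshow.h>
    #pragma comment(lib, "strmiids.lib")

    // Ask the capture pin for its current format and request a specific resolution.
    // Call this before the capture pin is connected. 640x480 is illustrative.
    HRESULT SetCaptureResolution(ICaptureGraphBuilder2* pBuilder, IBaseFilter* pCaptureFilter)
    {
        IAMStreamConfig* pConfig = NULL;
        HRESULT hr = pBuilder->FindInterface(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                                             pCaptureFilter, IID_IAMStreamConfig,
                                             (void**)&pConfig);
        if (FAILED(hr)) return hr;

        AM_MEDIA_TYPE* pmt = NULL;
        hr = pConfig->GetFormat(&pmt);                // current capture format
        if (SUCCEEDED(hr) && pmt->formattype == FORMAT_VideoInfo)
        {
            VIDEOINFOHEADER* vih = (VIDEOINFOHEADER*)pmt->pbFormat;
            vih->bmiHeader.biWidth     = 640;         // requested width
            vih->bmiHeader.biHeight    = 480;         // requested height
            vih->bmiHeader.biSizeImage = DIBSIZE(vih->bmiHeader);
            hr = pConfig->SetFormat(pmt);             // may fail for unsupported sizes
        }
        if (pmt)                                      // free what GetFormat allocated
        {
            if (pmt->cbFormat) CoTaskMemFree(pmt->pbFormat);
            if (pmt->pUnk) pmt->pUnk->Release();
            CoTaskMemFree(pmt);
        }
        pConfig->Release();
        return hr;
    }

Enumerating GetNumberOfCapabilities/GetStreamCaps first and picking one of the advertised formats is the safer route described in the same documentation.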
When filters connect in DirectShow, they agree on an AM_MEDIA_TYPE. Inside it you will find a VIDEOINFOHEADER containing a BITMAPINFOHEADER, and that header carries the biWidth and biHeight.
Try to build the FilterGraph manually (with GraphEdit or GraphStudioNext) and inspect these fields.
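If you would rather confirm it in code than in GraphEdit, a small sketch like the one below reads those fields from the negotiated media type; pPin is assumed to be the already-connected output pin you care about, and pin enumeration and error handling are trimmed:

    #include <dshow.h>
    #pragma comment(lib, "strmiids.lib")

    // Read the negotiated frame size from an already-connected pin.
    // Handles FORMAT_VideoInfo; FORMAT_VideoInfo2 would need a VIDEOINFOHEADER2 cast.
    HRESULT GetNegotiatedSize(IPin* pPin, LONG* pWidth, LONG* pHeight)
    {
        AM_MEDIA_TYPE mt = {};
        HRESULT hr = pPin->ConnectionMediaType(&mt); // fails if the pin is not connected
        if (FAILED(hr)) return hr;

        if (mt.formattype == FORMAT_VideoInfo && mt.cbFormat >= sizeof(VIDEOINFOHEADER))
        {
            const VIDEOINFOHEADER* vih = (const VIDEOINFOHEADER*)mt.pbFormat;
            *pWidth  = vih->bmiHeader.biWidth;
            *pHeight = vih->bmiHeader.biHeight;      // negative for top-down DIBs
        }
        else
        {
            hr = E_FAIL;                             // some other format type
        }

        // Free the format block that ConnectionMediaType allocated.
        if (mt.cbFormat) CoTaskMemFree(mt.pbFormat);
        if (mt.pUnk) mt.pUnk->Release();
        return hr;
    }

If the annotation suddenly comes out at half size on one machine, comparing this value there against a known-good machine should tell you quickly whether the capture dimensions really changed.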

Drawing video with text on top

I am working on an application and I have a problem I just can't seem to find a solution for. The application is written in VC++. What I need to do is display a YUV video feed with text on top of it.
Right now it works correctly by drawing the text in the OnPaint method using GDI and the video on a DirectDraw overlay. I need to get rid of the overlay because it causes too many problems: it won't work on some video cards, on Vista, on Windows 7, etc.
I can't figure out a way to accomplish the same thing in a more compatible way. I can draw the video using DirectDraw with a back buffer and copy it to the primary buffer just fine. The issue is that the text drawn with GDI flashes because of how often the video is refreshed. I would really like to keep the text-drawing code intact if possible, since it works well.
Is there a way to draw the text directly to a DirectDraw buffer or a memory buffer and then blit it to the back buffer? Or should I be looking at another method altogether? The two important OSes are XP and Windows 7. If anyone has any ideas, just let me know and I will test them out. Thanks.
Try to look into DirectShow and the Ticker sample on microsoft.com:
DirectShow Ticker sample
This sample uses the Video Mixing Renderer to blend video and text. It uses the IVMRMixerBitmap9 interface to blend text onto the bottom portion of the video window.
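For reference, the blending in that sample goes through IVMRMixerBitmap9::SetAlphaBitmap. A stripped-down sketch of the call is below; it assumes the VMR-9 is already in the graph with mixing mode enabled, and that hdcText is a memory DC into which the text has already been drawn with GDI (the color key and placement values are only examples):

    #include <dshow.h>
    #include <vmr9.h>
    #pragma comment(lib, "strmiids.lib")

    // Blend a GDI-drawn text bitmap onto the bottom quarter of the video.
    // pVmr9 is the VMR-9 filter, hdcText a memory DC holding the text,
    // textRect the rectangle of the text inside that DC (names are assumptions).
    HRESULT BlendText(IBaseFilter* pVmr9, HDC hdcText, RECT textRect)
    {
        IVMRMixerBitmap9* pBmp = NULL;
        HRESULT hr = pVmr9->QueryInterface(IID_IVMRMixerBitmap9, (void**)&pBmp);
        if (FAILED(hr)) return hr;

        VMR9AlphaBitmap ab = {};
        ab.dwFlags   = VMR9AlphaBitmap_hDC | VMR9AlphaBitmap_SrcColorKey;
        ab.hdc       = hdcText;            // memory DC with the text already drawn
        ab.rSrc      = textRect;           // source rectangle inside that DC
        ab.rDest.left   = 0.0f;            // destination in normalized coordinates:
        ab.rDest.top    = 0.75f;           // bottom quarter of the video window
        ab.rDest.right  = 1.0f;
        ab.rDest.bottom = 1.0f;
        ab.fAlpha    = 1.0f;               // fully opaque text
        ab.clrSrcKey = RGB(0, 0, 0);       // background color treated as transparent

        hr = pBmp->SetAlphaBitmap(&ab);    // call again whenever the text changes
        pBmp->Release();
        return hr;
    }

Because the renderer does the compositing, the GDI text no longer fights with the video refresh, which addresses the flashing described in the question.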
DirectShow is for building filter graphs that play back audio or video streams, and for adding different filters for different effects and for manipulating the video and audio samples.
Instead of using DirectShow's Video Mixing Renderer, you can also use the ISampleGrabber interface. The advantage is that it is a filter which can be used with other renderers as well, for example when the video is not shown on the screen but streamed over the network or dumped to a file.

Resources