I want to read and write a subtitle stream.
ffmpeg -i E:/Video/Waka.mp4 -vf subtitles=E:/Video/Waka.srt out.mp4
What is the equivalent code in C or C++?
Please also explain how to add a subtitle stream and its encoding parameters, and what the procedure is for reading a subtitle stream and rendering it on screen.
When the subtitle stream is encoded into the file, it may show up as a data stream while decoding; don't worry about that, just identify its stream index and grab the data from packets with that index while reading.
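Here is a minimal sketch of the reading side with libavformat/libavcodec (the burn-in command above goes through libavfilter's subtitles filter instead, which is considerably more involved, so this only covers demuxing and decoding the subtitle stream). Error handling is mostly omitted; note that subtitle decoding still uses the avcodec_decode_subtitle2 call rather than the send/receive API:

#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int main(int argc, char **argv) {
    AVFormatContext *fmt = NULL;
    if (argc < 2 || avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
        return 1;
    if (avformat_find_stream_info(fmt, NULL) < 0)
        return 1;

    /* Locate the first subtitle stream. */
    int idx = av_find_best_stream(fmt, AVMEDIA_TYPE_SUBTITLE, -1, -1, NULL, 0);
    if (idx < 0)
        return 1;

    const AVCodec *dec = avcodec_find_decoder(fmt->streams[idx]->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, fmt->streams[idx]->codecpar);
    if (avcodec_open2(ctx, dec, NULL) < 0)
        return 1;

    AVPacket *pkt = av_packet_alloc();
    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == idx) {          /* match on the stream index */
            AVSubtitle sub;
            int got = 0;
            if (avcodec_decode_subtitle2(ctx, &sub, &got, pkt) >= 0 && got) {
                for (unsigned i = 0; i < sub.num_rects; i++)
                    if (sub.rects[i]->type == SUBTITLE_ASS)
                        printf("%s\n", sub.rects[i]->ass);  /* text events */
                avsubtitle_free(&sub);
            }
        }
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}

Link against libavformat, libavcodec, and libavutil (e.g. gcc sub_read.c -lavformat -lavcodec -lavutil, file name mine).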
The issue: I need to convert an H.264 stream received over RTP into MJPEG, but for very convoluted reasons I am required to use the libjpeg-turbo library rather than the MJPEG encoder that comes with FFmpeg. So the only thing FFmpeg needs to do is convert the H.264 RTP stream to rawvideo in RGBA and output it to a socket, where I then do the transcoding manually.
However, libjpeg-turbo only expects complete frames, meaning I need to collect rawvideo packet fragments and somehow synchronize them. Putting incoming raw video fragments into a buffer as they come results in heavily broken images.
Is there some way of preserving the header information from the initial H.264 RTP packets? The command I'm currently using is very straightforward:
ffmpeg -i rtsp://: -vcodec rawvideo -f rawvideo udp://:
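One thing to keep in mind: rawvideo has no framing at all, and the udp output just slices the byte stream into datagrams, so frame boundaries never line up with packet boundaries, and a single lost or reordered datagram throws every later frame out of alignment, which matches the broken images you describe. If you know the decoded width and height, you can reassemble frames on the receiving side simply by counting bytes. A sketch, assuming a hypothetical 1280x720 RGBA stream arriving on port 5000 (reading from a pipe or a TCP connection instead of UDP avoids the loss/reordering problem entirely):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define WIDTH  1280                               /* hypothetical: must match the decode */
#define HEIGHT 720
#define FRAME_SIZE ((size_t)WIDTH * HEIGHT * 4)   /* RGBA = 4 bytes per pixel */

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);                  /* hypothetical port */
    bind(sock, (struct sockaddr *)&addr, sizeof(addr));

    unsigned char *frame = malloc(FRAME_SIZE);
    unsigned char dgram[65536];
    size_t filled = 0;

    for (;;) {
        ssize_t n = recv(sock, dgram, sizeof(dgram), 0);
        if (n <= 0)
            break;
        /* Frame boundaries do not align with datagram boundaries, so copy
         * byte-accurately and emit a frame every FRAME_SIZE bytes. */
        for (size_t off = 0; off < (size_t)n; ) {
            size_t take = FRAME_SIZE - filled;
            if (take > (size_t)n - off)
                take = (size_t)n - off;
            memcpy(frame + filled, dgram + off, take);
            filled += take;
            off += take;
            if (filled == FRAME_SIZE) {           /* one complete RGBA frame */
                /* hand `frame` to libjpeg-turbo here */
                filled = 0;
            }
        }
    }
    free(frame);
    return 0;
}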
Here's how I stream MPEG-TS to a relay using ffmpeg:
ffmpeg -re -i out.ts -f mpegts -vcodec copy -acodec copy http://localhost:8081/secret
My question is about the internals of ffmpeg: I want to understand the core process of how ffmpeg streams MPEG-TS. What does it do to the file in order to stream it? Does it manipulate the bytes it streams, or does it stream them as-is?
In this case, the transport stream is parsed, and the audio and video elementary streams are read and depacketized. They are then repacketized, remuxed into a new transport stream, and sent over HTTP.
If you changed containers, the elementary streams might be converted to a slightly different format, depending on the codec and the container's global headers, before being remuxed.
And if you transcoded, the elementary streams would have been decoded to raw pixels and PCM, then re-encoded into new elementary streams.
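For reference, that parse/depacketize/repacketize/remux cycle maps almost one-to-one onto the libavformat API. A minimal sketch, reusing the file and URL from the command above, with error checks mostly omitted (it does not reproduce -re, which just paces reads at the input's native frame rate):

#include <libavformat/avformat.h>

int main(void) {
    AVFormatContext *in = NULL, *out = NULL;

    if (avformat_open_input(&in, "out.ts", NULL, NULL) < 0)
        return 1;
    avformat_find_stream_info(in, NULL);
    avformat_alloc_output_context2(&out, NULL, "mpegts",
                                   "http://localhost:8081/secret");

    /* Mirror every input stream, copying codec parameters (no transcode). */
    for (unsigned i = 0; i < in->nb_streams; i++) {
        AVStream *s = avformat_new_stream(out, NULL);
        avcodec_parameters_copy(s->codecpar, in->streams[i]->codecpar);
        s->codecpar->codec_tag = 0;
    }
    if (!(out->oformat->flags & AVFMT_NOFILE) &&
        avio_open(&out->pb, out->url, AVIO_FLAG_WRITE) < 0)
        return 1;
    avformat_write_header(out, NULL);

    AVPacket *pkt = av_packet_alloc();
    while (av_read_frame(in, pkt) >= 0) {        /* demux + depacketize */
        AVStream *ist = in->streams[pkt->stream_index];
        AVStream *ost = out->streams[pkt->stream_index];
        av_packet_rescale_ts(pkt, ist->time_base, ost->time_base);
        pkt->pos = -1;
        av_interleaved_write_frame(out, pkt);    /* repacketize + remux */
    }
    av_packet_free(&pkt);
    av_write_trailer(out);

    avformat_close_input(&in);
    if (!(out->oformat->flags & AVFMT_NOFILE))
        avio_closep(&out->pb);
    avformat_free_context(out);
    return 0;
}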
My task is to create HTML5-compatible video from input video (.avi, .mov, .mp4, etc.). My understanding is that my output should be .webm or .mp4 (H.264 video, AAC audio).
I use ffmpeg for the conversion and it takes a lot of time. I wonder if I could use ffprobe to test whether the input video is already H.264 and AAC, and if so, just copy the video/audio into the output without modification.
That is, I have the following idea:
Get the input video info using ffprobe:
ffprobe {input} -v quiet -show_entries stream=codec_name,codec_type -print_format json
The result would be JSON like this:
"streams": [
{codec_name="mjpeg",codec_type="video"},
{codec_name="aac",codec_type="audio"}
]
If the JSON says the video codec is "h264", then I think I can just copy the video stream. If the JSON says the audio codec is "aac", then I think I can just copy the audio stream.
The JSON above says that my audio is "aac", so I think I can copy the audio stream into the output video, but I still need to convert the video stream. For the above example, my ffmpeg command would be:
ffmpeg -i {input} -c:v libx264 -c:a copy output.mp4
The question is whether I can always use this idea to produce HTML5-compatible video, and whether this method will actually speed up video conversion.
The question is whether I can always use this idea to produce HTML5-compatible video
Probably, but with some caveats:
Your output may use the H.264 High profile, but your target device may not support it (though that is unlikely now).
Ensure that the pixel format is yuv420p. If it is not, the video may not play, and you will have to re-encode with -vf format=yuv420p. You can check by adding pix_fmt to your -show_entries stream list.
If the file is directly from a video camera, or some other device with inefficient encoding, then the file size may be excessively large for your viewer.
Add -movflags +faststart to your command so the video can begin playback before the file is completely downloaded.
and whether this method will actually speed up video conversion.
Yes, because you're only stream copying (re-muxing), which is fast, rather than re-encoding some or all of the streams, which is slow.
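If you would rather do the check from C than parse ffprobe's JSON, libavformat exposes the same codec information. A sketch with a hypothetical helper name, which also covers the yuv420p caveat above:

#include <libavformat/avformat.h>
#include <libavutil/pixfmt.h>

/* can_stream_copy is a hypothetical helper: returns 1 if every stream is
 * already HTML5-friendly (H.264 yuv420p video, AAC audio). */
int can_stream_copy(const char *path) {
    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
        return 0;
    if (avformat_find_stream_info(fmt, NULL) < 0) {
        avformat_close_input(&fmt);
        return 0;
    }
    int ok = 1;
    for (unsigned i = 0; i < fmt->nb_streams; i++) {
        AVCodecParameters *par = fmt->streams[i]->codecpar;
        if (par->codec_type == AVMEDIA_TYPE_VIDEO &&
            (par->codec_id != AV_CODEC_ID_H264 ||
             par->format != AV_PIX_FMT_YUV420P))
            ok = 0;
        else if (par->codec_type == AVMEDIA_TYPE_AUDIO &&
                 par->codec_id != AV_CODEC_ID_AAC)
            ok = 0;
    }
    avformat_close_input(&fmt);
    return ok;
}

If it returns 1, you can remux with -c:v copy -c:a copy (plus -movflags +faststart); otherwise, re-encode just the streams that fail the check.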
I know we can combine videos using ffmpeg, but can I use ffmpeg to extract the H.264 content from a decrypted RTP stream? If so, how?
I have a query regarding using ffmpeg to encode a raw video (YUV sequence) into raw Theora packets,
i.e. some kind of "elementary bitstream" without the Ogg container.
I am able to use ffmpeg to encode a raw video into an Ogg Theora bitstream, but I need to obtain a Theora bitstream consisting of raw Theora packets with no Ogg container header.
1) How can I achieve this?
2) If not with ffmpeg, is there any other way/solution/tool to obtain what I need?
Thank you.
-AD.
I tried this out and think it will do what you're looking for; I can tell it's not in an Ogg container but haven't found a good way to play it back.
ffmpeg -i inputfile -vcodec libtheora -f rawvideo outputfile
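At the API level you can get the same effect by encoding with the libtheora encoder and writing each packet's payload straight out, never touching a muxer. A sketch, assuming a hypothetical 320x240, 25 fps YUV420P sequence on stdin. Two caveats that likely explain the playback trouble: the three Theora setup headers end up in the codec context's extradata rather than in the emitted packets, and bare concatenated packets carry no boundary information, so a consumer would need each packet prefixed with its size:

#include <stdio.h>
#include <libavcodec/avcodec.h>

int main(void) {
    const AVCodec *enc = avcodec_find_encoder_by_name("libtheora");
    if (!enc)
        return 1;
    AVCodecContext *ctx = avcodec_alloc_context3(enc);
    ctx->width = 320;                    /* hypothetical dimensions */
    ctx->height = 240;
    ctx->pix_fmt = AV_PIX_FMT_YUV420P;
    ctx->time_base = (AVRational){1, 25};
    if (avcodec_open2(ctx, enc, NULL) < 0)
        return 1;
    /* The Theora info/comment/setup headers now live in ctx->extradata. */

    AVFrame *frame = av_frame_alloc();
    frame->format = ctx->pix_fmt;
    frame->width = ctx->width;
    frame->height = ctx->height;
    av_frame_get_buffer(frame, 1);       /* align=1: planes tightly packed */

    AVPacket *pkt = av_packet_alloc();
    size_t y = (size_t)ctx->width * ctx->height;
    for (int64_t pts = 0;; pts++) {
        av_frame_make_writable(frame);
        if (fread(frame->data[0], 1, y, stdin) != y ||
            fread(frame->data[1], 1, y / 4, stdin) != y / 4 ||
            fread(frame->data[2], 1, y / 4, stdin) != y / 4)
            break;                       /* end of input */
        frame->pts = pts;
        avcodec_send_frame(ctx, frame);
        while (avcodec_receive_packet(ctx, pkt) == 0) {
            /* No muxer involved: each write is one bare Theora packet. */
            fwrite(pkt->data, 1, pkt->size, stdout);
            av_packet_unref(pkt);
        }
    }
    avcodec_send_frame(ctx, NULL);       /* flush the encoder */
    while (avcodec_receive_packet(ctx, pkt) == 0) {
        fwrite(pkt->data, 1, pkt->size, stdout);
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
    av_frame_free(&frame);
    avcodec_free_context(&ctx);
    return 0;
}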