Transcoding via FFmpeg: set starting PCR value

I am transcoding with FFmpeg (video codec H.264, container MPEG-TS) and writing the output to a local file (out.mpg). When FFmpeg crashed, I restarted it with output to the same file (out.mpg). After this, my video player shows an incorrect file duration, because the new FFmpeg process starts counting PCR from 0.
Can I set the starting PCR value when starting FFmpeg?

Running FFmpeg with the -copyts flag solved this issue.
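A rough sketch of where the flag fits, assuming an H.264/MPEG-TS transcode like the one described; the input name and codec settings are placeholders, and -copyts simply tells FFmpeg to keep the input timestamps instead of regenerating them from zero:
ffmpeg -i input.ts -copyts -c:v libx264 -c:a copy -f mpegts out.mpg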

Related

How to append timecode into metadata using ffmpeg

We have a request to append an output/timecode field to the metadata, starting from frame 0.
In Nuke this is done with an AddTimeCode node, starting at frame 0 with a timecode of 00:00:00:00 and a prefix of "output".
I'm trying to do the same with ffmpeg but I'm not able to get it to work.
I have tried -metadata output/timecode="00:00:00:00", but it didn't work.
Metadata fields are dependent on your output format.
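As a rough sketch, assuming a container such as MOV/MXF where ffmpeg can carry a timecode (the file names are placeholders): ffmpeg's own -timecode option is usually the closest equivalent, since output/timecode is a Nuke-specific key that ffmpeg does not recognize:
ffmpeg -i input.mov -c copy -timecode 00:00:00:00 output.mov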

FFMPEG DASH - Live Streaming a Sequence of MP3 Clips

I am attempting to create an online radio application using FFMPEG - an audio-only DASH stream.
I have a directory of mp3 clips (all with the same bitrate and sample size) which I am encoding to the AAC format and outputting to an mpd.
This is the current command I am working with to stream a single mp3 file:
ffmpeg -re -i <input>.mp3 -c:a aac -use_timeline 1 -use_template 1 -window_size 5 -f dash <out>.mpd
(The input and output paths have been substituted with <input>.mp3 and <out>.mpd in this snippet.)
I am running a web server and have made the mpd accessible on it. I am testing the stream using VLC player at the moment.
The problem:
Well, the command works, but only for one clip at a time. Once the next command is run, immediately following the completion of the first, VLC player halts and I need to refresh the player to continue.
I'm aiming for an uninterrupted stream wherein the clips play in sequence.
I imagine the problem is that a new mpd is being created with no reference to the previous one, and what I ought to be doing is appending segments to the existing mpd - but I don't know how to do that using FFMPEG.
The question: Is there a command in FFMPEG to append segments to a previously existing mpd file? Or am I coming at this problem all wrong? Perhaps I should be using FFMPEG to format the clips into these segments, but then adjust the mpd file manually.
Any help or suggestions would be very much appreciated!
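One way to keep the stream continuous, sketched under the assumption that all clips really do share the same codec parameters as stated above, is to let a single ffmpeg process read the whole playlist through the concat demuxer, so only one mpd is ever written (the playlist and file names below are placeholders):
playlist.txt:
file 'clip1.mp3'
file 'clip2.mp3'
file 'clip3.mp3'
ffmpeg -re -f concat -safe 0 -i playlist.txt -c:a aac -use_timeline 1 -use_template 1 -window_size 5 -f dash <out>.mpd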

ffmpeg-cli-wrapper: Unable to merge image and audio (mp3) input to a video output (mp4)

I want to combine an image and an audio file to create a video output.
I'm able to do it from command line:
ffmpeg -loop 1 -i /tmp/image.jpeg -i /tmp/audiot.mp3 -vf 'scale=320:trunc(ow/a/2)*2' -acodec libvo_aacenc -b:a 192k -shortest /tmp/output.mp4
But when I try to implement the same behavior using ffmpeg-cli-wrapper for Java, the program just keeps running indefinitely.
I have raised an issue on GitHub too.
I couldn't find any example for this online. If someone knows how to implement this behavior using the same or any other framework, please let me know.
Here is my sample test java program.
FFmpeg ffmpeg = new FFmpeg("/usr/bin/ffmpeg/ffmpeg");
FFprobe ffprobe = new FFprobe("/usr/bin/ffprobe");
FFmpegBuilder builder = new FFmpegBuilder()
    .setInput("/tmp/image.jpeg")  // Filename, or a FFmpegProbeResult
    .addInput("/tmp/audio.mp3")
    .addExtraArgs("-loop", "1")
    .overrideOutputFiles(true)    // Override the output if it exists
    .addOutput("/tmp/output.mp4") // Filename for the destination
    .setFormat("mp4")             // Format is inferred from filename, or can be set
    //.setTargetSize(250_000)     // Aim for a 250KB file
    //.disableSubtitle()          // No subtitles
    //.setAudioChannels(1)        // Mono audio
    .setAudioCodec("aac")         // Using the aac codec
    .setAudioSampleRate(48_000)   // At 48 kHz
    .setAudioBitRate(32768)       // At 32 kbit/s
    .setVideoCodec("libx264")     // Video using x264
    .setVideoFrameRate(24, 1)     // At 24 frames per second
    .setVideoResolution(640, 480) // At 640x480 resolution
    //.setComplexVideoFilter("scale=320:trunc(ow/a/2)*2")
    .setStrict(FFmpegBuilder.Strict.EXPERIMENTAL) // Allow FFmpeg to use experimental specs
    .done();
FFmpegExecutor executor = new FFmpegExecutor(ffmpeg, ffprobe);
// Run a one-pass encode
executor.createJob(builder).run();
Your example command and the command in the code appear to be different.
Your image input loops continuously because of the -loop 1 input option (or at least I'm assuming it is being applied; it's hard to tell), but your code is missing the -shortest output option, which causes the command to run indefinitely.
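A minimal sketch of that fix, assuming ffmpeg-cli-wrapper's output builder exposes addExtraArgs and that the builder-level -loop 1 really is placed before the inputs (both are assumptions, not confirmed by the thread); the paths are the placeholders from the question and the class name is just for illustration:
import net.bramp.ffmpeg.FFmpeg;
import net.bramp.ffmpeg.FFmpegExecutor;
import net.bramp.ffmpeg.FFprobe;
import net.bramp.ffmpeg.builder.FFmpegBuilder;

public class ImageAudioToMp4 {
    public static void main(String[] args) throws Exception {
        FFmpeg ffmpeg = new FFmpeg("/usr/bin/ffmpeg");   // placeholder binary paths
        FFprobe ffprobe = new FFprobe("/usr/bin/ffprobe");

        FFmpegBuilder builder = new FFmpegBuilder()
            .addExtraArgs("-loop", "1")     // loop the still image (assumed to land before the inputs)
            .setInput("/tmp/image.jpeg")
            .addInput("/tmp/audio.mp3")
            .overrideOutputFiles(true)
            .addOutput("/tmp/output.mp4")
                .addExtraArgs("-shortest")  // stop when the shortest input (the audio) ends
                .setFormat("mp4")
                .setAudioCodec("aac")
                .setVideoCodec("libx264")
                .setVideoFrameRate(24, 1)
                .setVideoResolution(640, 480)
                .setStrict(FFmpegBuilder.Strict.EXPERIMENTAL)
                .done();

        new FFmpegExecutor(ffmpeg, ffprobe).createJob(builder).run();
    }
}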

FFmpeg record rtsp stream to file error

I use ffmpeg to record an RTSP stream. It works well, but the output file has a problem: when I use K-Lite Codec Pack to open the output (AVI) file, the video can't be seeked, fast-forwarded, or rewound, and the video time is not displayed. It looks like I am watching a live stream.
Here is the command I used:
ffmpeg -i rtsp://27.74.xxx.xxx:55/ufirststream -acodec copy -vcodec copy abc.avi
(Screenshot: video playback error in K-Lite Codec Pack.)
It looks like the file header was not updated in the output file. It's recommended to close the output by pressing the "q" key while ffmpeg is reading the input stream, so the output file is properly finalized.
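A hedged variant of the same command, assuming a fixed recording length is acceptable: with -t, ffmpeg stops on its own after the given duration and finalizes the file (index and duration), so no manual "q" is needed. The stream URL is the placeholder from the question:
ffmpeg -i rtsp://27.74.xxx.xxx:55/ufirststream -acodec copy -vcodec copy -t 00:10:00 abc.avi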

Live transcoding and streaming of MP4 works in Android but fails in Flash player with NetStream.Play.FileStructureInvalid error

Recently I had a task to use ffmpeg as both a transcoding and a streaming tool. The task was to convert a file from a given format to MP4 and immediately stream it, by capturing it from stdout. So far so good. The streaming works well with the native player on Android tablets as well as the VLC player. The issue is with the Flash player. It gives the following error:
NetStream.Play.FileStructureInvalid : Adobe Flash cannot import files that have invalid file structures.
The ffmpeg flags used are:
$ ffmpeg -loglevel quiet -i somefile.avi -vbsf h264_mp4toannexb -vcodec libx264 \
-acodec aac -f MP4 -movflags frag_keyframe+empty_moov -re - 2>&1
As noted in the docs for -movflags
The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4 file has all the metadata about all packets stored in one location (written at the end of the file, it can be moved to the start for better playback using the qt-faststart tool). A fragmented file consists of a number of fragments, where packets and metadata about these packets are stored together. Writing a fragmented file has the advantage that the file is decodable even if the writing is interrupted (while a normal MOV/MP4 is undecodable if it is not properly finished), and it requires less memory when writing very long files (since writing normal MOV/MP4 files stores info about every single packet in memory until the file is closed). The downside is that it is less compatible with other applications.
Either switch to a flash player that can handle fragmented MP4 files, or use a different container format that supports streaming better.
Also, -re is an input-only option, so it would make more sense to specify it before the input, instead of before the output.
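A sketch of the second suggestion, under the assumption that the downstream Flash player accepts FLV: keep the H.264/AAC streams but switch the container, and move -re in front of the input as noted above (file names as in the question):
ffmpeg -re -i somefile.avi -c:v libx264 -c:a aac -f flv -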

Resources