I tried to capture a stream via RTSP and limit the clip duration to under 3 seconds,
but the option doesn't work: ffmpeg never terminates.
Is there a workaround for this problem?
I have to run hundreds of similar commands in a batch from a Python script.
ffmpeg -loglevel verbose -i rtsp://172.19.1.42/live.sdp -acodec copy -vcodec copy c0_s1_h264_640x480_30_vbr_500_99_40000000.mp4 -timeout 3 -y
$ ffmpeg -h
ffmpeg version 1.2.4 Copyright (c) 2000-2013 the FFmpeg developers
built on Nov 22 2013 11:59:59 with Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)
The detailed log is at https://gist.github.com/poc7667/8234701
From your console output:
Trailing options were found on the commandline.
Option placement matters:
ffmpeg [global options] [input options] -i input [output options] output
How is ffmpeg supposed to interpret your trailing options? Your command should look like:
ffmpeg -y -loglevel verbose -timeout 3 -i rtsp://172.19.1.42/live.sdp -acodec copy -vcodec copy c0_s1_h264_640x480_30_vbr_500_99_40000000.mp4
See the FFmpeg RTSP Protocol Documentation for more information, but refer to your local copy of the docs: the online docs are synced with current code from Git master, and your ffmpeg version is old.
You need to use the -stimeout option, and like other input options it must be placed before -i:
ffmpeg -loglevel verbose -stimeout 3000000 -i rtsp://172.19.1.42/live.sdp -acodec copy -vcodec copy c0_s1_h264_640x480_30_vbr_500_99_40000000.mp4 -y
Note that -stimeout takes microseconds, so 3000000 corresponds to a 3-second socket timeout.
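Since the question mentions batching hundreds of these commands from a Python script, here is a minimal runner sketch, assuming Python 3; the stream list and the 30-second hard limit are hypothetical. -t 3 caps the recorded clip itself at 3 seconds, and subprocess.run's timeout force-kills ffmpeg if -stimeout ever fails to fire.

```python
import subprocess

# Hypothetical mapping of camera URLs to output files.
streams = {
    "rtsp://172.19.1.42/live.sdp": "c0_s1_h264_640x480_30_vbr_500_99_40000000.mp4",
    # ... hundreds more
}

for url, outfile in streams.items():
    cmd = [
        "ffmpeg", "-y", "-loglevel", "verbose",
        "-stimeout", "3000000",   # socket timeout, in microseconds (3 s)
        "-i", url,
        "-acodec", "copy", "-vcodec", "copy",
        "-t", "3",                # cap the clip duration at 3 seconds
        outfile,
    ]
    try:
        subprocess.run(cmd, timeout=30)  # hard kill if ffmpeg hangs anyway
    except subprocess.TimeoutExpired:
        print(f"{url}: ffmpeg hung, killed after 30 s")
```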
Related
I have an ffmpeg version built with VMAF library. I can use it to calculate the VMAF scores of a distorted video against a reference video using commands like this:
ffmpeg -i distorted.mp4 -i original.mp4 -filter_complex "[0:v]scale=640:480:flags=bicubic[main];[main][1:v]libvmaf=model_path=model/vmaf_v0.6.1.json:log_path=log.json" -f null -
Now, I remember there was a way to get VMAF scores while performing regular ffmpeg encoding. How can I do that at the same time?
I want to encode a video like this, while also calculating the VMAF of the output file:
ffmpeg -i original.mp4 -crf 27 -s 640x480 out.mp4
[edited]
Alright, scratch what I said earlier...
You should be able to use [the `tee` muxer](http://ffmpeg.org/ffmpeg-formats.html#tee-1) to save the file and pipe the encoded frames to another ffmpeg process. Something like this should work for you:
ffmpeg -i original.mp4 -crf 27 -s 640x480 -map 0 -f tee "out.mp4|[f=nut]pipe:" \
| ffmpeg -i - -i original.mp4 -filter_complex ...
(the tee muxer needs explicit -map; on Windows, join this into one line and remove the \)
Here is what works on my Windows PC (thanks to @Rotem for his help):
ffmpeg -i in.mp4 -vcodec libx264 -crf 27 -f nut pipe: |
ffmpeg -i in.mp4 -f nut -i pipe: -filter_complex "[1:v][0:v]libvmaf=log_fmt=json:log_path=log.json,nullsink" -map 1 -c copy out.mp4
The main issue that @Rotem and I missed is that we need to terminate libvmaf's output. Also, raw h264 does not carry header info, and using `nut` alleviates that issue.
There are a couple of caveats:
- Testing with the testsrc example that @Rotem suggested in the comment below does not produce any libvmaf log, at least as far as I can see, but in debug mode you can see that the filter gets initialized.
- You'll likely see a [nut @ 0000026b123afb80] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8) message in the log. This just means that frames are piped in faster than the second ffmpeg can process them. FFmpeg blocks on both ends, so no data should be lost.
For full disclosure, I posted my Python test script on GitHub. It just runs the shell command, so it should be easy to follow even if you don't use Python.
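For reference, here is a sketch of driving the same two-process pipe from Python instead of the shell, assuming Python 3; in.mp4, out.mp4, and log.json are the placeholders from the command above.

```python
import subprocess

# First ffmpeg: encode to NUT on stdout.
enc = subprocess.Popen(
    ["ffmpeg", "-i", "in.mp4", "-vcodec", "libx264", "-crf", "27",
     "-f", "nut", "pipe:"],
    stdout=subprocess.PIPE,
)

# Second ffmpeg: score the piped encode against the original with
# libvmaf, and save a copy of the encoded stream at the same time.
vmaf = subprocess.Popen(
    ["ffmpeg", "-y", "-i", "in.mp4", "-f", "nut", "-i", "pipe:",
     "-filter_complex",
     "[1:v][0:v]libvmaf=log_fmt=json:log_path=log.json,nullsink",
     "-map", "1", "-c", "copy", "out.mp4"],
    stdin=enc.stdout,
)

enc.stdout.close()  # the encoder gets a broken pipe if the scorer exits early
vmaf.wait()
enc.wait()
```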
I am running the ffmpeg command below. It works fine on macOS, but on Ubuntu it gives this error:
[aac @ 0x15187a0] The encoder 'aac' is experimental but experimental codecs are not enabled, add '-strict -2' if you want to use it.
Command:
ffmpeg -i intro.mp4 -vf "drawtext=fontfile=FutuMd.ttf: text='Audi': x=680: y=500: fontsize=55: fontcolor=white: enable='between(t,4,6)'" introfinal.mp4
Thanks in advance
The aac encoder in your Ubuntu build of ffmpeg is experimental and not enabled by default, so you have to pass the -strict -2 argument to enable it manually.
Change your command to:
ffmpeg -i intro.mp4 -strict -2 -vf "drawtext=fontfile=FutuMd.ttf: text='Audi': x=680: y=500: fontsize=55: fontcolor=white: enable='between(t,4,6)'" introfinal.mp4
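If you script this across machines, you can detect up front whether a given build's aac encoder is experimental. A small sketch, assuming the standard `ffmpeg -encoders` output where an X in the flag column marks an experimental codec:

```python
import subprocess

def aac_is_experimental():
    # Each encoder line starts with a flag column (e.g. "A....D");
    # an "X" in that column marks the codec as experimental.
    out = subprocess.run(["ffmpeg", "-hide_banner", "-encoders"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        cols = line.split()
        if len(cols) >= 2 and cols[1] == "aac":
            return "X" in cols[0]
    return False

# Append -strict -2 only where this build needs it.
extra = ["-strict", "-2"] if aac_is_experimental() else []
```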
I have a wav file in which the sound starts exactly at 00:00:00 (checked with sonic-visualiser).
I also have an mp4 file without audio. When I combine them with:
ffmpeg -i videoOnly.mp4 -i audio.wav -c:v copy -c:a aac -strict experimental out.mp4
and then examine a wav file generated from the combined file:
ffmpeg -i out.mp4 out.wav
I see 50ms of silence before the actual sound starts. The videoOnly.mp4 doesn't have an 'edts' atom, so it's not related to the 'elst' atom.
The question is: why is the audio shifted, and how can I avoid it?
I figured it out, and I think this may be an ffmpeg / avconv issue...
The reason I mentioned the 'edts' and 'elst' atoms is that I had replaced 'edts' with 'free' in a hex editor, and since 'elst' is inside 'edts' I simply ignored it.
However, I thought that maybe ffmpeg / avconv doesn't ignore it, so on the next try I replaced 'elst' with zeros in a hex editor and repeated the same steps.
The result was an mp4 file with audio that doesn't have the 50ms delay.
Here is the avconv version I was using:
avconv version 9.20-6:9.20-0ubuntu0.14.04.1, Copyright (c) 2000-2014 the Libav developers
built on Dec 7 2016 21:22:31 with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
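The hex-editor trick above can be automated. Here is a sketch, assuming the standard ISO BMFF layout where 'edts' sits inside 'trak' inside 'moov': it renames every 'edts' box to 'free' so demuxers skip the edit list, which is the same effect as the manual edit. The file name is hypothetical.

```python
import struct

CONTAINERS = {b"moov", b"trak"}  # boxes we need to descend into

def find_edts(buf, start, end, hits):
    # Walk sibling boxes in buf[start:end], recursing into containers.
    i = start
    while i + 8 <= end:
        size, = struct.unpack(">I", buf[i:i + 4])
        box = buf[i + 4:i + 8]
        if size == 1:  # 64-bit box size stored after the type field
            size, = struct.unpack(">Q", buf[i + 8:i + 16])
        if size < 8:
            break
        if box == b"edts":
            hits.append(i + 4)  # offset of the 4-byte type field
        elif box in CONTAINERS:
            find_edts(buf, i + 8, i + size, hits)
        i += size

def patch_edts(path):
    with open(path, "r+b") as f:
        buf = f.read()
        hits = []
        find_edts(buf, 0, len(buf), hits)
        for off in hits:
            f.seek(off)
            f.write(b"free")  # rename edts -> free; box sizes stay valid

patch_edts("out.mp4")  # hypothetical file name
```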
Ubuntu 12.04. ffmpeg version git-2013-03-26-1741fec Copyright (c) 2000-2013 the FFmpeg developers
The command I use is:
ffmpeg -i output_20140630.avi -f mpegts udp://192.168.1.56:1234
The streaming fps shown in the same terminal is about 600.
If I use the command:
ffmpeg -i output_20140630.avi -f mpegts udp://236.0.0.1:200
Then it is fine. And I can use the command below to play the streamed video:
ffplay udp://236.0.0.1:200
I needed to add -re, which tells ffmpeg to read the input at its native frame rate (without it, ffmpeg sends packets as fast as it can decode them, hence the ~600 fps):
ffmpeg -re -i output_20140630.avi -f mpegts udp://236.0.0.1:200
ffmpeg -i rtmp:/vid2/recordings -acodec copy -vcodec copy -y captured.flv
or
ffmpeg -i rtmp://localhost/vid2/recordings -acodec copy -vcodec copy -y captured.flv
The above commands only give me this error:
rtmp://localhost/vid2/recordings: no such file or directory
Isn't ffmpeg supposed to be able to handle rtmp streams?
Are you using the Xuggler version of ffmpeg? Here's a tutorial explaining how to obtain and encode rtmp streams with the Xuggler ffmpeg.
http://wiki.xuggle.com/Live_Encoding_Tutorial
No need to use Xuggler's build. Version 0.6 of ffmpeg does support rtmp. However, make sure you compile it with:
--enable-librtmp
ffmpeg can capture an rtmp stream. Try it with the port given explicitly, e.g. 1935:
ffmpeg -i rtmp://localhost:1935/live/newStream
But before doing that, check that newStream exists. If it doesn't, open a new cmd window, go to the ffmpeg/bin folder, and publish one:
ffmpeg -i sample.avi -f flv rtmp://localhost/live/newStream
Then try to run the first command.
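To automate the "check that newStream exists" step, here is a sketch that probes the URL with ffprobe before capturing, assuming Python 3; the URL and the 10-second timeout are assumptions, and the timeout guards against rtmp connections that hang.

```python
import subprocess

URL = "rtmp://localhost:1935/live/newStream"  # hypothetical stream URL

def stream_exists(url, timeout=10):
    # ffprobe exits non-zero when it cannot open the stream.
    try:
        return subprocess.run(["ffprobe", "-v", "error", url],
                              timeout=timeout).returncode == 0
    except subprocess.TimeoutExpired:
        return False

if stream_exists(URL):
    subprocess.run(["ffmpeg", "-i", URL, "-acodec", "copy",
                    "-vcodec", "copy", "-y", "captured.flv"])
else:
    print("newStream is not being published yet")
```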
It appears it can (with -analyzeduration 0 to get rid of an initial delay):
$ ffplay -analyzeduration 0 -i "rtmp://localhost/live/stream_name live=1"
See http://betterlogic.com/roger/2012/08/ffmpeg-receiving-rtmp-stream-from-flash-media-server/ for some instructions on how to stream to it, as well.
I have the same problem with ffmpeg.
I publish video from ffmpeg to FMS correctly, and I can see it in the FMS video player:
ffmpeg -re -i /home/videos/sample.mp4 -f flv rtmp://localhost/live/sample
Now I would like to create a live stream.
For this I use this ffmpeg command on Linux:
ffmpeg -re -i rtmp://localhost:1935/live/sample -vcodec copy -acodec copy -f flv rtmp://localhost/livepkgr/sample_streamd?adbe-live-event=sample_event
With this syntax I get the same error:
Closing connection: NetStream.Play.StreamNotFound
rtmp://localhost:1935/live/sample: Operation not permitted