I have a problem and can't find any suitable answer for it.
It's about multi-GPU processing.
I have 3 graphics cards, as you can see here:
If the image does not load, use this link: https://i.stack.imgur.com/msR83.jpg
My problem is: when I run more than one ffmpeg command with CUDA, all processes are assigned to the first GPU, as in the image below:
If the image does not load, use this link: https://i.stack.imgur.com/PfYfz.jpg
As you can see, all 6 processes are assigned to the first GPU.
I am really confused about how to fix this.
My ffmpeg command is:
ffmpeg -y -vsync 0 -hwaccel cuda -hwaccel_output_format cuda -i my-video.mp4 \
-vf scale_npp=w=426:h=240 -c:v h264_nvenc -profile:v main -b:v 400k -sc_threshold 0 -g 25 \
-c:a aac -b:a 64k -ar 48000 \
-f hls -hls_time 6 -hls_playlist_type vod \
-hls_allow_cache 1 -hls_key_info_file encription.keyinfo \
-hls_segment_filename f-0-seg-%d.ts f-0.m3u8
I run the above ffmpeg command for 6 different videos at the same time.
Please help me find an answer, by sharing your knowledge or some links that could help me.
Thanks a lot.
The process seems fairly straightforward from what I can tell:
https://developer.nvidia.com/blog/nvidia-ffmpeg-transcoding-guide/
Encoding and decoding work must be explicitly assigned to a GPU when using multiple GPUs in one system. GPUs are identified by their index number; by default all work is performed on the GPU with index 0. Use the following command to obtain a list of all NVIDIA GPUs in the system and their corresponding ID numbers:
ffmpeg -vsync 0 -i input.mp4 -c:v h264_nvenc -gpu list -f null -
Once you know the index, the -hwaccel_device index flag can be used to set the active GPU for decoding and encoding. In the example below the work will be executed on the GPU with index 1.
ffmpeg -vsync 0 -hwaccel cuvid -hwaccel_device 1 -c:v h264_cuvid -i input.mp4 -c:a copy -c:v h264_nvenc -b:v 5M output.mp4
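Applied to the HLS command in the question, a sketch of the fix would be to launch each of the six jobs with a different -hwaccel_device, for example two jobs per card on devices 0, 1 and 2 (the indices are an assumption; verify them with the -gpu list command above or with nvidia-smi). Because -hwaccel_output_format cuda keeps the decoded frames on that device, scale_npp and h264_nvenc follow it automatically. For a job pinned to the second card, it would look roughly like:
ffmpeg -y -vsync 0 -hwaccel cuda -hwaccel_device 1 -hwaccel_output_format cuda -i my-video.mp4 \
-vf scale_npp=w=426:h=240 -c:v h264_nvenc -profile:v main -b:v 400k -sc_threshold 0 -g 25 \
-c:a aac -b:a 64k -ar 48000 \
-f hls -hls_time 6 -hls_playlist_type vod \
-hls_allow_cache 1 -hls_key_info_file encription.keyinfo \
-hls_segment_filename f-0-seg-%d.ts f-0.m3u8
h264_nvenc also has its own -gpu option, which can be used to pick the encoding GPU when it is fed frames from system memory instead of CUDA frames.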
I'm working with FFmpeg on a Jetson Nano and a PC.
I want to know whether the ordering of FFmpeg options can affect the result. Is it necessary to observe the ordering of the elements?
In the context of decoding and encoding streams:
ffmpeg -hwaccel cuvid -c:v h264_nvdec -c:v h264_cuvid -i input -c:v h264_nvenc output
If I move -hwaccel cuvid after the -c:v options instead, such as:
ffmpeg -c:v h264_nvdec -c:v h264_cuvid -hwaccel cuvid -i input -c:v h264_nvenc output
Q1: Both of these commands may run correctly, but do they perform the same operations in the background and produce the same results?
Q2: Or, if I put -hwaccel cuvid after -i input, do the commands still perform the same operations and produce the same results?
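As a rough rule, ffmpeg applies options positionally: options placed before an -i apply to that input, options placed after the inputs apply to the next output file, and when the same per-stream option (such as -c:v) is given twice for the same file, the later occurrence is the one used. A minimal sketch of the usual placement, assuming an H.264 input and the cuvid/nvenc codecs used above:
ffmpeg -hwaccel cuvid -c:v h264_cuvid -i input.mp4 -c:v h264_nvenc output.mp4
Moving -hwaccel cuvid after -i input would make ffmpeg treat it as an output option, where it would typically be rejected or ignored rather than applied to decoding.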
I'm doing screen recordings of gameplay (Dota 2) using my NVIDIA graphics card's GeForce Experience hardware recording (NVENC encoder). This creates a variable-frame-rate output video. My NVIDIA settings are 60 fps and 15000 kbps. I have paid a guy to make a program that generates scripts which, given start/stop timepoints, extract clips from the video and merge them with a crossfade. See the example code below. The script works for many input recordings but fails often: the audio and video are desynchronized (usually an audio delay) in many of the clips, by about 0.5 seconds. I think it fails more when the frame rate dropped more during recording. He does not know how to fix the problem, and I wonder if anyone could point out whether anything could be fixed in the script (example below)?
Processing speed is quite important (making a 10-minute 'highlight' video currently takes about 7-10 minutes), so solutions that increase that time very much are unfortunately not of much interest. His approach has been to work on audio and video separately and merge them at the end. He already has a program that generates the ffmpeg code for different scenarios (also adding overlays, music, intro/outro), so some easy fixes to his code would be preferable to a dramatic redesign of the logic. But if nothing else can fix the problem, a redesign of the logic is OK. Using tools other than ffmpeg is also OK, but it should be automatable (scripts/CLI) and not increase processing times too much.
Running the program "mediainfo" on the input video shows that framerate dropped quite low for this input video:
Frame rate mode: Variable
Frame rate: 60.000 FPS
Minimum frame rate: 3.059 FPS
Maximum frame rate: 63.739 FPS
Full report here: https://pastebin.com/TX061Wih
The input video can be downloaded from dropbox here (6 GB):
https://www.dropbox.com/s/ftwdgapazbi62pr/fullgame.mp4?dl=0
Here is an example of a script asked to extract two clips from the input video, at 9:57 (41 sec length) and 15:45 (28 sec length), and crossfade-merge them with a 0.5 s crossfade time. There might be some code remnants from options that are not used in this example (overlays, music, intro/outro). Using the input video above, this creates audio/video desync.
6 commands executed in sequence:
ffmpeg.exe -loglevel warning -ss 00:09:57 -i fullgame.mp4 -t 00:00:41 -filter_complex "[0:a]afade=t=out:st=40.5:d=0.5[a1]" -map "[a1]" -y out_temp_00.mp4.wav
ffmpeg.exe -loglevel warning -i fullgame.mp4 -ss 00:09:57 -t 00:00:41 -an -vcodec copy -f mpegts -avoid_negative_ts make_zero -y out_temp_00.mp4.ts
ffmpeg.exe -loglevel warning -ss 00:15:45 -i fullgame.mp4 -t 00:00:28 -filter_complex "[0:a]afade=t=in:st=0:d=0.5[a1]" -map "[a1]" -y out_temp_01.mp4.wav
ffmpeg.exe -loglevel warning -i fullgame.mp4 -ss 00:15:45 -t 00:00:28 -an -vcodec copy -f mpegts -avoid_negative_ts make_zero -y out_temp_01.mp4.ts
ffmpeg.exe -loglevel warning -i out_temp_00.mp4.wav -i out_temp_01.mp4.wav -y -filter_complex "[0:a]adelay=0|0[a0];[1:a]adelay=40500|40500[a1];[a0][a1]amix=inputs=2:dropout_transition=68.5,atrim=duration=68.5[outa0];[outa0]loudnorm[outa]" -map "[outa]" -ar 48000 -acodec aac -strict -2 fullgame_Output.mp4.aac
ffmpeg.exe -loglevel warning -i out_temp_00.mp4.ts -i out_temp_01.mp4.ts -y -i fullgame_Output.mp4.aac -filter_complex "[0:v]trim=start=0.5,setpts=PTS-STARTPTS[0c];[1:v]trim=start=0.5,setpts=PTS-STARTPTS[1c];[0:v]trim=40.5:41,setpts=PTS-STARTPTS[fo];[1:v]trim=0:0.5[fi];[fi]format=pix_fmts=yuva420p,fade=t=in:st=0:d=0.5:alpha=1[z];[fo]format=pix_fmts=yuva420p,fade=t=out:st=0:d=0.5:alpha=1[x];[z]fifo[w];[x]fifo[q];[q][w]overlay[r];[0c][r][1c]concat=n=3[outv]" -map "[outv]" -map 2:a -shortest -acodec copy -vcodec libx264 -preset ultrafast -b 15000k -aspect 1920:1080 fullgame_Output.mp4
P.S.
I already asked for help in an ffmpeg chat room. One guy said he knew what the problem was, but didn't know how to fix it(?):
[00:10] <kepstin> oh, wait, you're using -vcodec copy
[00:10] <kepstin> that explains everything.
[00:10] <kepstin> when you're using -vcodec copy, the start time (set with -ss) is rounded to the nearest keyframe
[00:10] <kepstin> it's not exact
[00:11] <kepstin> depending on the keyframe interval, this will result in possibly quite large shifts
[00:11] <kepstin> (also, your commands are applying audio filters on commands with -an, which is confusing/contradictory)
[00:12] <birdboy88> so the problem is that the audio temporary clips are not being extracted from the same exact timepoints?
[00:13] <kepstin> birdboy88: yeah, your audio is being re-encoded to wav so it's being cut sample-accurate, but the video's not being precisely cut.
[00:16] <birdboy88> kepstin: so I need to use slow seek (?) to extract video accurately? Or somehow extract audio only where there are video keyframes?
[00:17] <kepstin> birdboy88: i don't know how to extract audio starting at video keyframes with ffmpeg cli. You're already doing slow seek, which doesn't help (you should move the -ss option to before the -i option to speed it up)
[00:17] <kepstin> if you want accurate video cutting when saving to a file, you have to re-encode the video
[00:18] <kepstin> (doing this in a single ffmpeg command means you don't have to save to a file, so you can avoid the issue)
[00:18] * kepstin is off for a bit now
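A rough way to confirm this on fullgame.mp4 is to list the keyframe timestamps around a cut point with ffprobe (an untested sketch; use grep instead of findstr outside Windows). If the nearest keyframe is well away from 00:09:57, that gap gives an idea of how large the shift from -vcodec copy can be:
ffprobe -v error -select_streams v:0 -read_intervals 00:09:52%00:10:02 -show_entries packet=pts_time,flags -of csv=p=0 fullgame.mp4 | findstr K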
EDIT:
Everything is done with the latest ffmpeg version.
I was unable to get Gyan's code to work. It always loses some audio (the audio is either 40.5 s or 27.5 s long, so only one clip's audio is used). This is the only version working for me (the changes were adelay=40500|40500 and amix=inputs=2[a0];[a0]loudnorm):
ffmpeg -i fullgame.mp4 -filter_complex "[0]split=2[vpre][vpost];
[0]asplit=2[apre][apost];
[vpre]trim=start='00:09:57':duration='00:00:41',setpts=PTS-STARTPTS[vpre-t];
[apre]atrim=start='00:09:57':duration='00:00:41',asetpts=PTS-STARTPTS,afade=t=out:st=40.5:d=0.5[apre-t];
[vpost]trim=start='00:15:45':duration='00:00:28',setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
[apost]atrim=start='00:15:45':duration='00:00:28',asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5,adelay=40500|40500[apost-t];
[vpre-t][vpost-t]overlay[v];
[apre-t][apost-t]amix=inputs=2[a0];[a0]loudnorm[a]" -map "[v]" -map "[a]" -y -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4
Then I tried a similar setup but with 3 clips. On one machine I got the error "Error while filtering: Cannot allocate memory", and on my 16 GB memory machine the processing speed is 0.02x! Any way to avoid this? This is the code I tried:
ffmpeg -i fullgame.mp4 -filter_complex "[0]split=3[vpre][vpost][v3];
[0]asplit=3[apre][apost][a3];
[vpre]trim=start=357:duration=41,setpts=PTS-STARTPTS[vpre-t];
[apre]atrim=start=357:duration=41,asetpts=PTS-STARTPTS,afade=t=out:st=40.5:d=0.5[apre-t];
[vpost]trim=start=795:duration=28,setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,fade=t=out:st=40.5:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
[apost]atrim=start=795:duration=28,asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5,afade=t=out:st=27.5:d=0.5,adelay=40500|40500[apost-t];
[v3]trim=start=95:duration=30,setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5,setpts=PTS+41+28-0.5/TB[v3-t];
[a3]atrim=start=95:duration=30,asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5,adelay=68500|68500[a3-t];
[vpre-t][vpost-t]overlay[v1];
[v1][v3-t]overlay[v];
[apre-t][apost-t][a3-t]amix=inputs=3[a0];
[a0]loudnorm[a]" -map "[v]" -map "[a]" -y -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4
Just do it in one command.
Besides the keyframe seek issue, which is real, your present sequence has an error in the last command. You have [0:v]trim=start=0.5...[0c], which trims out the first 0.5 seconds and will cause a desync of its own. Since this is the first clip, it should be [0:v]trim=0:40.5.
The full single command should be
ffmpeg -i fullgame.mp4 -filter_complex
"[0]split=2[vpre][vpost];[0]asplit=2[apre][apost];
[vpre]trim=start='00:09:57':duration='00:00:41',setpts=PTS-STARTPTS[vpre-t];
[apre]atrim=start='00:09:57':duration='00:00:41',asetpts=PTS-STARTPTS,afade=t=out:st=40.5:d=0.5[apre-t];
[vpost]trim=start='00:15:45':duration='00:00:28',setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
[apost]atrim=start='00:15:45':duration='00:00:28',asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5[apost-t];
[vpre-t][vpost-t]overlay[v];
[apre-t][apost-t]acrossfade=d=0.5,loudnorm,aresample=48000[a]"
-map "[v]" -map "[a]" -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4
Your original sequence had -strict -2 for AAC audio encoding. That hasn't been needed since December 2015. If your ffmpeg throws an error without it, you have a very old version; upgrade first.
I did not test the above with your file, as it would take too long to filter 16 minutes of Full HD 60 fps video, but I tested the faster command below and it works fine with the latest git build of ffmpeg:
ffmpeg -ss 00:09:57 -t 00:00:41 -i fullgame.mp4 -ss 00:15:45 -t 00:00:28 -i fullgame.mp4 -filter_complex
"[0]afade=t=out:st=40.5:d=0.5[apre-t];
[1]format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
[1]afade=t=in:st=0:d=0.5[apost-t];
[0][vpost-t]overlay[v];
[apre-t][apost-t]acrossfade=d=0.5,loudnorm,aresample=48000:ocl=stereo[a]"
-map "[v]" -map "[a]" -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4
I'm using ffmpeg from the command line on Windows 10, and I wanted to give GPU acceleration a try to improve execution times. A simple cut command like this works fine, and its execution time is reduced by 60-70%. Awesome.
ffmpeg -hwaccel cuvid -c:v h264_cuvid -ss 00:00:10 -i in.mp4 -c:v h264_nvenc out.mp4
Now I tried to use the -filter_complex flag to overlay, fade and translate a PNG image over a video. The working non-GPU command is this one:
ffmpeg -i in.mp4 -loop 1 -t 75 -i overlay.png -filter_complex "[1:v]fade=t=in:st=30:d=0.3:alpha=1,fade=t=out:st=35.7:d=0.3:alpha=1[png1];[0:v][png1]overlay=x='if(gte(t,30), (t-30)*10, NAN)'" -movflags +faststart out.mp4
Then I added the GPU-related flags to the command, like this:
ffmpeg -hwaccel cuvid -c:v h264_cuvid -i in.mp4 -loop 1 -t 75 -i overlay.png -filter_complex "[1:v]fade=t=in:st=30:d=0.3:alpha=1,fade=t=out:st=35.7:d=0.3:alpha=1[png1];[0:v][png1]overlay=x='if(gte(t,30), 60-tanh((t-30)*30/5)*60, NAN)'" -movflags +faststart -c:v h264_nvenc out.mp4
But it won't work. I get this error.
Impossible to convert between the formats supported by the filter 'graph 0 input from stream 0:0' and the filter 'auto_scaler_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0
I don't even know what that means. Can I actually run ANY ffmpeg command with GPU acceleration? I've found some information about the hwupload_cuda flag, but I'm not sure whether I should use it or how. My attempts have failed so far.
Any advice on how I should modify the command to make it work on the GPU?
Just add
hwdownload,YOUR_FILTER,hwupload
like this:
ffmpeg.exe
-y
-hwaccel_device 2
-hwaccel cuvid
-c:v h264_cuvid
-i input.mp4
-vf "hwdownload,format=nv12,drawtext=textfile=file.txt:x=0:y=0:fontfile='c\:\/windows\/fonts\/arial.ttf',hwupload"
-c:v h264_nvenc
-f mp4 output.mp4"
Like the question above, I want to find out what ffmpeg command can help me reduce CPU usage when running 50 IP cameras (running the same command 50 times).
My ffmpeg command:
ffmpeg -f mjpeg -y -use_wallclock_as_timestamps 1 -i 'http://x.x.x.x:8090/test1?.mjpg' -r 3 -reconnect 1 -loglevel 16 -c:v mjpeg -an -qscale 10 -copyts '1.mp4'
50 commands like that take 200% CPU on my 4-core computer.
I want this computer to be able to handle 150 cameras. Any advice?
=========================================================
Using -c:v copy makes it faster, but the file size is terrible.
I tried to slow the frame rate down to 3 with -r 3 or -framerate 3 to decrease the file size, but was not successful (because vcodec copy can't do that).
Is there any option to force the input frame rate to 3?
(sorry for my bad English)
By setting -c:v mjpeg you are decoding and re-encoding the stream. Set -c:v copy to copy the data without re-encoding it.
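For example, a hedged adaptation of the command from the question with stream copy (the -r 3 and -qscale options are dropped because they only apply when re-encoding, and -reconnect is moved before -i since it is an input/protocol option):
ffmpeg -f mjpeg -y -use_wallclock_as_timestamps 1 -reconnect 1 -i 'http://x.x.x.x:8090/test1?.mjpg' -loglevel 16 -c:v copy -an -copyts '1.mp4'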
ffmpeg -re -i 'rtsp://user:password#10.10.10.30/rtsp_tunnel' -pix_fmt yuv420p -c:v libx264 -preset ultrafast -profile baseline -crf 18 -f h264 udp://0.0.0.0:3001