Try to create video with 2 overlays - filter

Could someone tell me what is wrong with this command? Basically I create a video with two overlays, left and right. It keeps saying: unconnected output.
/Users/Marco/Documents/#Dev/ffmpeg/ffmpeg -report -i EM2022_LP01_BL01_P1.m4v -i EM2022_LP01_BL01_P2.m4v -filter_complex "nullsrc=size=5760x240[base]; [0:v]setpts=PTS-STARTPTS,scale=2880x240[p1]; [1:v]setpts=PTS-STARTPTS,scale=2880x240[p2]; [base][p1]overlay=shortest=1:[base+p1]; [base+p1][p2]overlay=shortest=1:x=2880[base+p2]" led.m4v
I tried to use existing answers with no success.

Found it: I added -map "[base+p2]" before -y led.m4v,
and this works. Hope it might help somebody else.
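For reference, here is the whole command with the -map fix applied, with the intermediate labels renamed for clarity, and with synthetic lavfi inputs standing in for the real panel files so the sketch runs on its own:

```shell
# Synthetic 2880x240 stand-ins for the two real panel files:
ffmpeg -y -f lavfi -i color=red:s=2880x240:d=1 -pix_fmt yuv420p EM2022_LP01_BL01_P1.m4v
ffmpeg -y -f lavfi -i color=blue:s=2880x240:d=1 -pix_fmt yuv420p EM2022_LP01_BL01_P2.m4v

# Place the two panels side by side on a 5760x240 base and map the final label:
ffmpeg -i EM2022_LP01_BL01_P1.m4v -i EM2022_LP01_BL01_P2.m4v \
  -filter_complex "nullsrc=size=5760x240[base]; \
    [0:v]setpts=PTS-STARTPTS,scale=2880x240[p1]; \
    [1:v]setpts=PTS-STARTPTS,scale=2880x240[p2]; \
    [base][p1]overlay=shortest=1[left]; \
    [left][p2]overlay=shortest=1:x=2880[out]" \
  -map "[out]" -y led.m4v
```

Without the -map, the last filtergraph output has no destination, which is exactly the "unconnected output" error.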
Cheers,
Marco

Related

imagemagick -auto-level in ffmpeg

I've been looking for a solution to perform the equivalent of magick -auto-level in ffmpeg but am unable to find anything. There are some references stating I should first manually discover the levels using other software like GIMP; however, I'm looking for an automated, simpler solution. Any ideas on how to address this?
I've tried the following. The first command enhanced the image, which was initially pretty dark, but the second over-exposed it, leaving it mostly white:
convert img.jpg -auto-level img2.jpg
ffmpeg -i img.jpg -vf "normalize" -y img2.jpg
Note: I apologize that I cannot share the image, as it is restricted by a privacy policy.
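One knob that may help here (a suggestion, not a confirmed fix): the normalize filter has a strength option (0 to 1) that blends the normalized result with the input, so lowering it stretches the levels less aggressively than the default. A sketch, with a synthetic dark image standing in for the private one:

```shell
# A dark synthetic image standing in for the private img.jpg:
ffmpeg -y -f lavfi -i color=c=0x202020:s=64x64 -frames:v 1 img.jpg

# Reduced strength blends the stretched result with the original,
# which may avoid the mostly-white output (0.7 is just a starting point):
ffmpeg -y -i img.jpg -vf "normalize=strength=0.7" img2.jpg
```

Tuning strength (and possibly the filter's smoothing option) per image set is still trial and error, but it stays fully automated.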

FFMPEG - Strange issue with video copy

I'm new here.
I have a set of TIF frames that equal 1 minute and 25 seconds of a video.
I'm attempting to copy the frames without re-encoding, using the "-c:v copy" option, to avoid visible quality loss for a process I'm doing on my side. The command is as follows:
ffmpeg -r 23.977 -i %06d.tif -c:v copy out.mkv
However, for some reason, the timing does not seem to be accurate and the video is slightly desynced from the original, ending at 1 minute and 22 seconds instead.
When I use the following command:
ffmpeg -r 23.977 -i %06d.tif out.mkv
It comes out with the proper timing at 1 minute and 25 seconds; however, I did not appreciate the quality loss that came with it.
Is there a workaround to this or is there something I'm missing?
I used both Command Line and Windows Terminal.
In general, it would make sense to transcode when you go from TIFF to a video format. (I'm surprised stream copy actually works.) You can set the encoding quality to your own liking. See [this FFmpeg Wiki article](https://trac.ffmpeg.org/wiki/Encode/H.264).
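Along the lines of that wiki page, a near-lossless transcode might look like the sketch below (CRF and preset values are suggestions, and a synthetic TIF sequence stands in for the real frames). Note -framerate, the input option documented for image sequences, rather than -r:

```shell
# Synthetic TIF sequence standing in for the real frames:
ffmpeg -y -f lavfi -i testsrc=duration=1:rate=23.977:size=128x72 %06d.tif

# -framerate is the image-sequence input option; CRF 16 with a slow preset
# keeps the quality loss visually negligible:
ffmpeg -y -framerate 23.977 -i %06d.tif -c:v libx264 -crf 16 -preset slow -pix_fmt yuv420p out.mkv
```

At CRF values this low the output is visually indistinguishable from the source for most material, while the timestamps are generated properly by the encode.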

ffmpeg - automatically chopped?

I have an ISO file that I would like to encode to mp4 in parts so it's easier to upload to YouTube. I am not sure how it handles chapters etc. in this ISO file. I have tried
ffmpeg -i file.iso newfile.mp4
which works great; however, it's one large file.
I googled and read somewhere that if you put a % in the output file name, it should automatically give you parts of the video based on the -t you set, so I went ahead and did this:
ffmpeg -i file.iso -t 30 newfile%.mp4
however, the above does not work, as it only gives me 30 seconds, with the file name newfile%.mp4.
Thanks for your time and hoping I can get some help with this. Thank you in advance!
You can use -t and -ss in conjunction to do this with a script.
Here is one: http://grapsus.net/blog/post/A-script-for-splitting-videos-using-ffmpeg
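Another option, if your ffmpeg build includes it, is the segment muxer, which splits the output into fixed-length parts without re-encoding. A sketch, with a synthetic MPEG-TS file standing in for file.iso (ffmpeg probes inputs by content, not extension, so the name does not matter when reading):

```shell
# Synthetic MPEG-TS input standing in for file.iso:
ffmpeg -y -f lavfi -i testsrc=duration=4:size=128x72:rate=25 \
  -c:v libx264 -g 25 -pix_fmt yuv420p -f mpegts file.iso

# Split into parts without re-encoding; cuts land on keyframes.
# segment_time is 1 second here only for the demo (use e.g. 600 for 10-minute parts):
ffmpeg -i file.iso -c copy -map 0 -f segment -segment_time 1 -reset_timestamps 1 newfile%03d.mp4
```

Because -c copy can only cut at keyframes, the actual part lengths can differ slightly from segment_time.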

Piping avs to ffmpeg using avs2yuv

I am trying to use avs2yuv to pipe avs output to ffmpeg for further conversion.
My video file is called "sample.avi" (No sound, just video)
My audio file is called "sample.wav"
My avs file is called sample.avs and looks like one of the following:
V = AviSource("sample.avi")
A = WavSource("sample.wav")
AudioDub(V ,A)
or
V = DirectShowSource("sample.avi")
A = DirectShowSource("sample.wav")
AudioDub(V ,A)
Here is how I pipe:
avs2yuv sample.avs - | ffmpeg -y -f yuv4mpegpipe -i - output.mp4
Now here is the PROBLEM: No matter what files I try as an input, there is NO SOUND in my output. I do not understand what I am doing wrong, and why my audio does not make it to the output. If anyone has experience with avisynth and avs2yuv, your help would be GREATLY appreciated.
Thank you!
I would try to play your avs file with ffplay in order to check it.
You can also try building a graph with GraphEdit in order to do something like this:
A = DirectShowSource("sample_audio.grf", video=false)
V = DirectShowSource("sample_video.grf", audio=false)
AudioDub(V ,A)
With DirectShowSource you can add several parameters like fps, frame-count, etc.; sometimes it helps.
Good Luck
As per this link:
Avs2YUV is a command-line program, intended for use under Wine, to
interface between Avisynth and Linux-based video tools.
avs2yuv.exe only handles the video stream, which it outputs in a YUV color-space. It is that simple: the audio stream is ignored.
Here are some ways to process both the audio and video streams in a .avs. These methods work in Linux using Wine, and of course also work in Windows:
Encode in Avidemux via AvsProxy (AvsProxy ships with Avidemux)
Use VirtualDub as the encoder GUI
Otherwise, encode the audio separately, then mux in the video in a separate step.
I believe avs2pipe can handle both video and audio streams from a .avs, but I haven't tried it yet. Here is a link to some info about avs2pipe.
Summary: Using avs2yuv mainly makes sense in a Linux/Unix environment.
Try makeAVIS.exe from the ffdshow package:
wine makeavis.exe -p -i example.avs -a output.wav

Is it possible to pull a RTMP stream from one server and broadcast it to another?

I have a situation where I need to pull a stream from one Wowza media server and publish it to a Red5 or Flash Media Server instance with FFmpeg. Is there a command to do this? I'm essentially looking for something like this:
while [ true ]; do
ffmpeg -i rtmp://localhost:2000/vod/streamName.flv rtmp://localhost:1935/live/streamName
done
Is this currently possible with FFmpeg? I remember reading about something like this, but I can't remember exactly how to do it.
Yes. An example (pulling from a local server, publishing to a local server):
$ ffmpeg -analyzeduration 0 -i "rtmp://localhost/live/b live=1" -f flv rtmp://localhost:1936/live/c
-analyzeduration 0 makes it start faster. You can also add other parameters in there to re-encode etc. if desired.
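Wrapping that in a retry loop, as sketched in the question, might look like the script below. It stream-copies so nothing is re-encoded; the URLs are the ones from the question, and the script is only syntax-checked here since actually running it needs live RTMP endpoints:

```shell
# restream.sh: pull from one RTMP server and republish to another,
# reconnecting if the source drops (URLs taken from the question):
cat > restream.sh <<'EOF'
#!/bin/sh
while true; do
  ffmpeg -i "rtmp://localhost:2000/vod/streamName.flv live=1" \
         -c copy -f flv rtmp://localhost:1935/live/streamName
  sleep 1   # brief pause before reconnecting
done
EOF
sh -n restream.sh   # syntax check only
```

The sleep keeps a dead source from spinning the loop at full speed between reconnect attempts.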
Try a command of this form:
$ ffmpeg -i "[InputSourceAddress]" -f [OutputFileFormat] "[OutputSourceAddress]"
The input source address can be an rtmp, rtsp, or m3u8 source, etc.
