Can someone please tell me what I'm doing wrong?
I'm using the following arguments to watermark a video with ffmpeg from a C# app:
-i "video.AVI" -s 384x288 -vhook "vhook/imlib2.dll -x 0 -y 0 -i
"watermark.png"" -y "output.avi"
-sameq
The original file size is 233 MB but the output is only around 60 MB. I thought using the -sameq argument would give me the same size and quality output.
Instead of -sameq, try defining the bitrates manually with -ab and -vb. (-sameq means "same quantizer", not "same quality", so it won't reproduce the original file's size or quality.)
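For example, something like this (just a sketch, untested; the video and audio bitrate values are illustrative and should be tuned to match your source):
-i "video.AVI" -s 384x288 -vhook "vhook/imlib2.dll -x 0 -y 0 -i watermark.png" -vb 1800000 -ab 128000 -y "output.avi"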
Using FFmpeg, I'm creating a poster image from a video and adding a watermark/overlay to the poster. The following works great with small video files, but destroys my CPU with 1080p files.
ffmpeg -ss 15 -i preview.mp4 -i play-button.png \
-filter_complex overlay='(main_w-overlay_w)/2:(main_h-overlay_h)/2', \
scale='min(640\, iw):-1' -vframes 1 poster.jpg
Is there any way to speed this up? Or should I look to another solution for the overlay?
My solution is similar to yours, but I use -s to set the output resolution for the image and -f image2 for rendering. This command works fine for me:
ffmpeg -ss 15 -i preview.mp4 -i play-button.png -filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" -vframes 1 -s 640x360 -f image2 -y poster.jpg
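If decoding and filtering the full 1080p frames is the bottleneck, a variant that scales the main input down before applying the overlay might also cut the per-frame work (just a sketch, untested; the 640 width is taken from the original command):
ffmpeg -ss 15 -i preview.mp4 -i play-button.png -filter_complex "[0:v]scale=min(640\,iw):-1[bg];[bg][1:v]overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" -vframes 1 poster.jpg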
I'm trying to convert a gif file to a webm file using the command below, which works fine. However, I'm wondering whether it's also possible to reverse it with ffmpeg, or whether I'd need to reverse it with ImageMagick first and then convert it using ffmpeg.
ffmpeg -i your_gif.gif -c:v libvpx -crf 12 -b:v 500K output.webm
Any help is appreciated
The script posted here might help you.
It seems to be written in bash, but ripping out the individual commands should work on Windows as well.
https://github.com/WhatIsThisImNotGoodWithComputers/ffmpeg-webm-scripts
These are the relevant lines of code (note that they need to be edited for your setup):
ffmpeg -i "${INPUT_FILE}" -ss $START_TIME -to $TO_TIME -an -qscale 1 $TEMP_FOLDER/%06d.jpg
cat $(ls -r $TEMP_FOLDER/*jpg) | ffmpeg -f image2pipe -vcodec mjpeg -r 25 -i - -c:v libvpx -crf 20 -b:v $FRAMERATE $CROPSCALE -threads 0 -an $OUTPUT_FILE
You basically have to convert all stills to jpgs and then back into webm, but in reverse order.
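Filled in with concrete values, the two steps boil down to something like this (the paths, frame rate, and quality settings here are illustrative, not taken from the script, so adjust to taste):
mkdir -p frames
ffmpeg -i input.webm -an -qscale 1 frames/%06d.jpg
cat $(ls -r frames/*.jpg) | ffmpeg -f image2pipe -vcodec mjpeg -r 25 -i - -c:v libvpx -crf 12 -b:v 500K -an reversed.webm
The ls -r is what reverses the frame order before the frames are piped back into ffmpeg.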
You can see what codecs ffmpeg supports with ffmpeg -codecs (it's mentioned in ffmpeg --help). On my build, ffmpeg -codecs | grep -i gif says it supports gif.
ffmpeg infers the output format from the file extension if you don't override it, so
ffmpeg -i onoz.webm onoz.gif
does the trick just fine.
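If you ever do want to override that extension-based detection, the -f option forces the output format explicitly, e.g. (illustrative output name):
ffmpeg -i onoz.webm -f gif some_output_file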
I am trying to create a video output from multiple video cameras.
Following the example given here Presenting more than 2 videos using FFmpeg
and other similar examples, but I'm getting the error
Output pad "default" for the filter "src" of type "buffer" not connected to any destination
when I run
ffmpeg -i /dev/video1 -i /dev/video0 -filter_complex "[0:0]pad=iw*2:ih[a];[a][1:0]overlay=w[b];[b][2:0]overlay=w:h" -shortest output.mp4
I'm not really sure what this means or how to fix it.
Any help would be greatly appreciated!
Thanks.
When using the "padding" option, you have to specify which is the size of the output image and where you want to put the input image
[0:0]pad=iw*2:ih:0:0
Tested under Windows 7 with two files of the same size:
ffmpeg -i out.avi -i out.avi -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[a][1:0]overlay=w" -shortest output.mp4
And with a webcam capture (vfwcap) plus a still picture (as I only have one webcam). By the way, this also shows how to scale one of the sources to fit the target (in case your sources have different resolutions):
ffmpeg -y -f vfwcap -r 10 -i 0 -loop 1 -i photo.jpg -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[1:0]scale=640:480[b];[a][b]overlay=w" -shortest output.mp4
Under Linux:
ffmpeg -i /dev/video1 -i /dev/video0 -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[a][1:0]overlay=w" -shortest output.mp4
If it doesn't work, test a simple recording of video1 and then of video0, and check their properties (type, resolution, fps):
ffmpeg -i /dev/video1 -shortest output1.mp4
ffmpeg -i output1.mp4
If you still have issues, update your question with the ffmpeg console output (as text) for the video1 and video0 captures, and also for the call with the overlay.
Hey all, this command works fine for me for extracting keyframes: ffmpeg -vf select="eq(pict_type\,PICT_TYPE_I)" -i yourvideo.mp4 -vsync 2 -s 160x90 -f image2 thumbnails-%02d.jpeg
I was just wondering if someone knows what will work with this to restrict the number of keyframes to, say, 200. Thanks.
You can add -vframes 200 as an output option.
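So the full command would be something like this (the same command as above, just with the frame cap added; untested):
ffmpeg -vf select="eq(pict_type\,PICT_TYPE_I)" -i yourvideo.mp4 -vsync 2 -vframes 200 -s 160x90 -f image2 thumbnails-%02d.jpeg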
I currently have a jpeg file which I converted to an flv using the following command:
ffmpeg -r 10 -b 180000 -i test.jpg test.mp4
Now, I want to increase the duration of this .mp4 clip, so the picture stays on the screen for more than a split second. Eventually, I hope to merge a stream of these files to create a slide show out of jpeg files.
Does anyone know how to increase the duration of a clip in ffmpeg?
Looping the input and setting a duration should achieve the effect you want:
ffmpeg -loop_input -i test.jpg -t 10 test.mp4
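Note that on newer ffmpeg builds -loop_input has been removed; the equivalent there is the -loop 1 input option (a sketch, untested):
ffmpeg -loop 1 -i test.jpg -t 10 test.mp4
Depending on the source image and the target player, you may also need -pix_fmt yuv420p and even dimensions.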
Doing something like this should work (at least for a single image):
ffmpeg -loop_input -i picture.jpg -r 1 -vcodec flv -b 192k -i Music.mp3 -acodec copy -shortest output.flv
I bet you could get it working with multiple images by adding more inputs, though I haven't tested it.
(http://forum.videohelp.com/threads/280695-FFMPEG-Loop-input-video)
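If the eventual goal is a slide show built from a folder of JPEGs, the image-sequence input may also be worth a look (a sketch; it assumes your pictures are numbered img001.jpg, img002.jpg, ... and shows each one for 5 seconds):
ffmpeg -framerate 1/5 -i img%03d.jpg -c:v libx264 -r 25 -pix_fmt yuv420p slideshow.mp4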