I have transferred some 16mm film (native 4:3) to video files. The image looks great.
I scanned to a native 16:9 frame. Because I overscanned, I captured the entire height of the film frame, which is what I want, but the scan also picked up the soundtrack and perforations. I want to crop in to the frame line on the sides as well.
I can crop the image down with ffmpeg to remove the information outside of the framing I want [-vf crop=1330:1080:0:0].
I know this will result in a non-standard aspect ratio.
This plays fine on a computer (VLC just adapts to the non-standard frame size).
But for standardized delivery, I would love to keep the native 1920x1080 pixels and just make everything outside of the centered 1330x1080 area black.
Is there a way to specifically select where the pillar bars are?
I really want to re-encode the video as little as possible.
In that vein, is there a better tool for this than -vf crop?
Thank you very much.
Use crop then pad:
ffmpeg -i input -vf "crop=1330:ih,pad=1920:ih:-1:-1" output
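If the source is 1920x1080, a fuller sketch with the question's specific numbers might look like this (input.mp4 and output.mp4 are placeholder names):
ffmpeg -i input.mp4 -vf "crop=1330:1080,pad=1920:1080:-1:-1" -c:a copy output.mp4
The video stream has to be re-encoded since the pixels change, but -c:a copy at least passes the audio through untouched. crop centers by default, and pad's -1:-1 offsets center the cropped picture inside the 1920x1080 canvas, which is exactly what produces the black pillar bars.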
I have used ffmpeg and mp4parser to add an image watermark to a video.
Both work when the video is small, less than about 5 to 7 MB,
but with larger videos (anything above 7 MB or so)
they fail.
What resources can help with adding a watermark to a video quickly? If you know of any useful resources, please let me know.
It depends on what exactly you need.
If the watermark is just needed when the video is viewed on the android device, the easiest and quickest way is to overlay the image with a transparent background over the video view. You will need to think about fullscreen vs inline and portrait vs landscape to ensure it lines up as you want.
If you want to watermark the video itself, so that the watermark is included if the video is copied or sent elsewhere, then ffmpeg is likely as fast as other solutions on the device itself. If you are able to send the video to a server and have the watermark applied there, you will have access to much more powerful compute resources.
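For the ffmpeg route, a minimal hedged sketch (watermark.png and the 10:10 offset are placeholder choices):
ffmpeg -i input.mp4 -i watermark.png -filter_complex "overlay=10:10" -c:a copy output.mp4
The video must be re-encoded for the overlay to be burned in, which is why processing time grows with file size; -c:a copy at least leaves the audio stream untouched.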
I have a video file of an online lecture consisting of a slideshow with audio in the background.
I want to save images of each slide as well as the timestamp of that slide.
I do this using the scene and metadata filters:
ffmpeg -i week-01.mp4 -filter_complex "select='gt(scene,0.011)',metadata=print:file=frames/time.txt" -vsync vfr frames/img%03d.jpg
This works fine except for one thing: there is an onscreen timer on the right side of the video.
If I set the threshold low enough to pick up all the slide changes, it also picks up the timer changes.
So here is my question: can I ask ffmpeg to:
analyze only part of the frame (the left side, up to roughly 75% of the width);
then, on detecting a scene change in that area, save the entire frame and the timestamp?
I thought of making a script that:
crops the video and saves it alongside the original;
analyzes the cropped video for scene changes and saves the timestamps;
extracts the frames from the original video using the timestamps.
Is there a better/faster/shorter way to do this?
Thanks in advance!
You can do it in one command like this:
ffmpeg -i week-01.mp4 -filter_complex "[0]split=2[full][no_timer];[no_timer]drawbox=w=0.25*iw:h=ih:x=0.75*iw:y=0:t=fill[no_timer];[no_timer]select='gt(scene,0.011)',metadata=print:file=frames/time.txt[no_timer];[no_timer][full]overlay" -vsync vfr frames/img%03d.jpg
Basically, make two copies of the video, use drawbox on one copy to paint solid black over the quarter of the screen on the right, analyze scene change and record scores to file; then overlay the full unpainted frame on top of the painted ones. Due to how overlay syncs frames, only the full frames with corresponding timestamps will be used to overlay on top of the base selected frames.
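To pull the timestamps back out of the metadata file afterwards, a small sketch, assuming a Unix shell:
grep pts_time frames/time.txt
Each matching line reports the pts_time of one selected frame, in the same order as the numbered img%03d.jpg files.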
I'm struggling with FFMPEGs Remap Filter. I have a security camera that streams a bunch of different options, but the default is this FishEye:
I see a TON of maps for Ricoh Thetas, but nothing that shows how to generate those map files for a different layout, like the one I have. I've tried doing just two panos, but the image gets stretched out so much when I stream to YouTube. Can someone point me in the right direction?
You posted a modified image (cropped and shifted), so applying ffmpeg directly gives weird results. But with a raw image, which probably looks like this...
using this command...
ffmpeg -i input.png -vf v360=fisheye:e:ih_fov=180:iv_fov=180:pitch=-90 -y output.jpg
you would get this result:
You can then view it here: https://renderstuff.com/tools/360-panorama-web-viewer/
I was making this far too difficult. Just send YouTube the fisheye using ffmpeg. You can tweak the size to prevent some of the distortion.
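A hedged sketch of that approach, assuming the camera exposes an RTSP stream (the camera URL and YOUR-STREAM-KEY are placeholders):
ffmpeg -i rtsp://camera-address/stream -c:v libx264 -preset veryfast -b:v 4500k -c:a aac -ar 44100 -f flv rtmp://a.rtmp.youtube.com/live2/YOUR-STREAM-KEY
Note that YouTube expects an audio track; if the camera stream has none, feeding it a silent audio source is a common workaround.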
You need the v360 filter. Make sure you use the latest ffmpeg build; older versions don't include this filter.
I used these parameters for a security camera:
-vf v360=fisheye:equirect:ih_fov=180:iv_fov=180
Result:
You might want to crop the video (because of the black margins):
-vf crop=1500:1500:250:0,v360=fisheye:equirect:ih_fov=180:iv_fov=180,crop=1500:1500:750:0
Of course, adjust the crop filter parameters to your situation.
I have a series of PNGs with an alpha-channel (transparent) background. Each file is named like file_name.0001.png and so on, in subsequent order. I'd like to join these PNGs into a video with ffmpeg and maintain the transparency.
I've tried a couple of things but I suspect I'm running into a codec issue. When I run ffmpeg, the video is created but the background is black.
If it makes a difference, I want to use the video in Microsoft PowerPoint. Thanks!
Edit
The suggested duplicate is very close to what I was looking for, thank you! The only reason it's not a complete solution is that none of the codecs presented in the other thread play well with Microsoft PowerPoint. This is not the fault of ffmpeg, but of PowerPoint.
I couldn't get ffmpeg to create a video that preserved the alpha channel and was PowerPoint friendly (not the fault of ffmpeg). ImageMagick, however, did the trick: I was able to create a GIF from the images with the alpha channel preserved. I used the following:
convert -dispose 3 -coalesce images.*.png gif_file_name.gif
The -dispose 3 is critical: it tells ImageMagick to clear each frame before drawing the next one; otherwise, you can see the images overlaid on top of each other (since they have transparent backgrounds).
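For completeness, ffmpeg can also produce a transparent GIF from the same sequence using its palette filters; a minimal sketch, untested against PowerPoint (the 25 fps frame rate is an assumption):
ffmpeg -framerate 25 -i file_name.%04d.png -filter_complex "[0:v]split[a][b];[a]palettegen[p];[b][p]paletteuse" out.gif
palettegen reserves a transparent palette entry by default, which is what preserves the alpha in the resulting GIF.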
Is there a way to generate thumbnails from scene changes using ffmpeg? That is, instead of picking the I-frames, picking the middle point between two I-frames? I'm assuming the middle of two major changes is usually the best time to take a thumbnail shot.
Yes, there is a way!
Execute this command:
ffmpeg -i yourvideo.mp4 -vf select="eq(pict_type\,I)" -vsync 0 -an thumb%03d.png
It is really simple: this selects every I-frame, and since encoders typically place I-frames at scene changes, the thumbnails usually land near scene boundaries.
The items you need to change:
yourvideo.mp4 (change this to your video input)
thumb%03d.png (this is your output filename pattern)
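If you want thumbnails at detected scene changes rather than at I-frames specifically, a hedged variant using the same scene-score select seen earlier in this thread (0.4 is an arbitrary threshold to tune):
ffmpeg -i yourvideo.mp4 -vf "select='gt(scene,0.4)'" -vsync vfr -an thumb%03d.png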