I'm struggling with FFmpeg's remap filter. I have a security camera that streams a bunch of different options, but the default is this fisheye:
I see a ton of maps for the Ricoh Theta, but nothing that shows me how to generate those map files for a different layout, like the one I have. I've tried doing just two panos, but the image gets stretched out so much when I stream to YouTube. Can someone point me in the right direction?
You posted a modified image (cropped and shifted), so applying ffmpeg directly gives weird results. With a raw image, which probably looks like this...
using this command...
ffmpeg -i input.png -vf v360=fisheye:e:ih_fov=180:iv_fov=180:pitch=-90 -y output.jpg
you would get this result:
You can then view it here: https://renderstuff.com/tools/360-panorama-web-viewer/
I was making this far too difficult. Just send YouTube the fisheye using ffmpeg. You can tweak the size to prevent some of the distortion.
You need the v360 filter. Make sure you use the latest ffmpeg build; older versions don't include this filter.
I used these parameters for a security camera:
-vf v360=fisheye:equirect:ih_fov=180:iv_fov=180
Result:
You might want to crop the video (because of the black margins):
-vf crop=1500:1500:250:0,v360=fisheye:equirect:ih_fov=180:iv_fov=180,crop=1500:1500:750:0
Of course, adjust the crop filter parameters to your situation.
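For reference, the offsets in the first crop can be derived from the input size. A small sketch of the arithmetic (the 2000x1500 input resolution is an assumption; substitute your camera's actual resolution):

```shell
# Center a w x h crop in an iw x ih frame:
# x and y are the top-left corner of the cropped region.
iw=2000; ih=1500; w=1500; h=1500
x=$(( (iw - w) / 2 ))   # 250: centers the crop horizontally
y=$(( (ih - h) / 2 ))   # 0: crop spans the full height
echo "crop=${w}:${h}:${x}:${y}"   # prints crop=1500:1500:250:0
```

The same formula gives the offsets for the second crop on the equirectangular output.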
Hope you can help me figure out where I am misleading myself. I am trying to watermark a bunch of videos of varying resolutions with a .png watermark file that is 1200x600. I have videos as large as 2480x1280 and as small as 360x141.
I had originally thought that ffmpeg could handle it, but I am having issues with the conversion, and I am pretty sure it is my misunderstanding of how to use the scale2ref filter properly. The scale2ref documentation says this:
Scale a subtitle stream (b) to match the main video (a) in size before overlaying
'scale2ref[b][a];[a][b]overlay'
I understand that stream[b] is my watermark file, and stream[a] is my video file.
My ffmpeg command is this:
while read -r line || [[ -n "$line" ]]; do
  fn_out="w$line"
  (/u2/vidmarktest/scripttesting/files_jeud8334j/dioffmpeg/ffmpeg -report -nostdin \
    -i /u2/vidmarktest/scripttesting/files_jeud8334j/"$line" \
    -i /u2/vidmarktest/mw1.1200x600.png \
    -filter_complex "scale2ref[b][a];[b][a]overlay=x=(main_w-overlay_w)/2:y=(main_h-overlay_h)/2" \
    /u2/vidmarktest/scripttesting/files_jeud8334j/converted/"$fn_out" -c:v libx265)
done < videos.txt
To do some 'splainin, it is going to be part of a bash script that will be cron-ed so we can make sure we have all our latest submissions to the directory watermarked.
The problem is this:
All of my converted videos are scaled to fit 1200x600 now, rather than remaining in their original configuration with the watermark being the part that should be scaled to fit the video.
To note: in this section, [b][a]overlay=x=(main_w-overlay_w)/2:y=(main_h-overlay_h)/2, many will say I need to switch the [a] and [b] values. When you do that, the watermark is obscured by the video rather than composited on top of it. Switching those two values puts the [b] value (the watermark) over the [a] value (the video).
Any feedback will be highly appreciated, welcomed, and graded :)
I am expecting to get the watermark to adjust to the video resolution, but am failing miserably. What do I not know about ffmpeg and scale2ref that is causing my problem?
In your command, the video is the first input and the watermark is the second. In scale2ref, the inputs aren't explicitly set, so ffmpeg picks streams in input order, whereas you need the watermark to be the filter's first input.
Use
[1:v][0:v]scale2ref[wm][v];[v][wm]overlay=...
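Applied to the loop from the question, a minimal sketch of the corrected invocation (input.mp4 and output.mp4 are placeholders for "$line" and "$fn_out"; the echo just prints the command so you can inspect it before running):

```shell
# The stream to be scaled (the watermark, [1:v]) goes first;
# the reference (the video, [0:v]) goes second.
fg='[1:v][0:v]scale2ref[wm][v];[v][wm]overlay=x=(main_w-overlay_w)/2:y=(main_h-overlay_h)/2'
echo ffmpeg -i input.mp4 -i mw1.1200x600.png -filter_complex "$fg" -c:v libx265 output.mp4
```

When running it for real, keep the filtergraph quoted, since it contains a semicolon.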
I have two fisheye images (one from the back and one from the front) and I want to join those two images into a single equirectangular image.
Is it possible?
The command I am looking for is something like this:
ffmpeg -i ./image_from_front.jpg -i ./image_from_back.jpg filters_to_use ./final_single_equirectangular_image.jpg
I don't think this is possible directly. But I was able to do this with ImageMagick and ffmpeg.
To illustrate I take these fisheye images from wikipedia (original by Peter.wieden):
First, combine the two fisheye images into one image file:
magick ./image_from_front.jpg ./image_from_back.jpg +append ./dual_fisheye_image.jpg
Then, create the equirectangular image with ffmpeg:
./ffmpeg -i ./dual_fisheye_image.jpg -vf v360=input=dfisheye:iv_fov=195:ih_fov=195:output=equirect ./final_single_equirectangular_image.jpg
The parameters may need adjustment to match your cameras' fields of view. Also note that no stitching or blending of the images is performed.
See this article for general information on the transformations involved and example pictures.
Also see https://ffmpeg.org/ffmpeg-filters.html#v360 for more information and options of the v360 filter.
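For what it's worth, a single ffmpeg invocation may also work by using hstack to join the two fisheyes side by side before v360. An untested sketch (the echo prints the command for inspection):

```shell
# hstack places the two inputs side by side, which matches
# the dfisheye input layout expected by v360.
fg="[0:v][1:v]hstack,v360=input=dfisheye:iv_fov=195:ih_fov=195:output=equirect"
echo ffmpeg -i ./image_from_front.jpg -i ./image_from_back.jpg -filter_complex "$fg" ./final_single_equirectangular_image.jpg
```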
I have transferred some film to video files from 16mm (native 4:3). The image looks great.
When I scanned them, I scanned to a native 16:9. Because I overscanned, I got the entire height of the frame, which is what I want, but the scan also picked up the soundtrack and perforations. I want to crop to just the frame line on the sides as well.
I can CROP the image down with FFMPEG to remove the information outside of the framing I want [-vf crop=1330:1080:00:00].
I know this will result in a non-standard aspect ratio.
This plays fine on a computer (vlc just adapts to the non-standard).
But for standardized delivery, I would love to keep the native 1920x1080 pixels, but just make everything outside of the centered 1330:1080 black.
Is there a way to specifically select where the pillar bars are?
I really want to re-encode the video as little as possible.
In that vein, does anyone have a better tool than -vf crop as well?
thank you very very much.
Use crop then pad:
ffmpeg -i input -vf "crop=1330:ih,pad=1920:ih:-1:-1" output
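The -1:-1 in pad is shorthand for centering the input in the padded frame; the explicit offset works out as follows:

```shell
# pad=1920:ih:-1:-1 centers the 1330-wide crop in a 1920-wide frame;
# -1 expands to (output_size - input_size) / 2 on each axis.
ow=1920; iw=1330
x=$(( (ow - iw) / 2 ))      # 295 pixels of black pillar on each side
echo "pad=${ow}:ih:${x}:0"  # prints pad=1920:ih:295:0
```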
I'm using ffmpeg to join frames into a video with some parameters.
Here is a sample of the commands I run:
ffmpeg -y -r 24 -f image2 -i "C:\Users\Pictures\me\frame%04d.bmp" -filter_complex "[0:v]select=between(n,0,76)[selected];[selected]crop=in_w:in_h-60-60:0:60[cropped];[cropped]scale=w=2*ceil(2048.0/20.5):h=2*ceil(858.0/20.5)" -c:v libx264 -q:v 1 -b:v 2M "C:\Users\me\Video\output.mp4"
When I run this command, I have calculated the crop size on the input frames to remove the black rectangles at the top and bottom (I tried using cropdetect but it doesn't fit my use case, so I'm using another tool). So my first thought was that ffmpeg would crop on the input stream, so it would only crop my black rectangles. But when I change my scale, it crops a part of the image.
So my understanding is that ffmpeg crops after scaling (maybe I'm wrong), and if I take the crop parameters from the input images, they are sure to be wrong if I apply them to the scaled video.
I tried using ";" and "," to separate my filters. I tried naming and not naming my streams between filters. Nothing seems to solve my issue.
What could I do to fix that, or am I understanding the issue incorrectly?
Thanks in advance
So actually I didn't understand the problem correctly. The filters are indeed applied in the correct order, but it seems like scale crops my video again, so I'm losing the bottom of my images.
I'm gonna investigate that.
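For reference, the 2*ceil(...) expressions in the scale filter round each dimension up and guarantee an even result, which libx264 requires with yuv420p. A sketch of the arithmetic, with awk standing in for ffmpeg's expression evaluator:

```shell
# ceil(2048/20.5) = 100 and ceil(858/20.5) = 42; doubling a ceil'd
# integer always yields an even dimension.
w=$(awk 'BEGIN { x = 2048.0/20.5; print 2 * (int(x) + (x > int(x))) }')
h=$(awk 'BEGIN { x = 858.0/20.5;  print 2 * (int(x) + (x > int(x))) }')
echo "${w}x${h}"   # prints 200x84
```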
Is there a way to generate thumbnails from scene changes using ffmpeg? Instead of picking the I-frames, picking the middle between two I-frames? This assumes the midpoint between two major changes is usually the best moment to take a thumbnail from.
Yes, there is a way!
Execute this command:
ffmpeg -i yourvideo.mp4 -vf select="eq(pict_type\,I)" -vsync 0 -an thumb%03d.png
It is really simple: ffmpeg extracts every I-frame as a thumbnail, and since encoders typically place I-frames at scene changes, this usually gives roughly one thumbnail per scene.
In this command:
yourvideo.mp4 is your video input (change this to your file).
thumb%03d.png is the output pattern (thumb001.png, thumb002.png, and so on).
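Note that the select expression above picks I-frames, which encoders usually place at scene changes. If you want thumbnails driven by actual scene detection instead, ffmpeg's scene score can be used; the 0.4 threshold below is an assumption to tune per source (the echo prints the command for inspection):

```shell
# Select only frames whose scene-change score exceeds 0.4
vf="select='gt(scene,0.4)'"
echo ffmpeg -i yourvideo.mp4 -vf "$vf" -vsync 0 -an thumb%03d.png
```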