I am a novice Moodle administrator and, on top of that, I am being asked for things that seem out of the ordinary to me. My users need their videos to carry a dynamic watermark to discourage piracy.
I have evaluated some options, such as VDO Cipher, but they seem too expensive for training that is offered almost free of charge.
Can you suggest something else? Besides being an administrator, I am an application architect, so I could perhaps do a custom development, maybe using FFmpeg or something similar.
What do you recommend?
You can try this:
ffmpeg -i video.mp4 -vf "drawtext=text='Text':x=w-mod(max(t\,0)*(w+tw)/10\,(w+tw)):y=h-text_h:fontsize=24:fontcolor=black:box=1:boxcolor=white@0.5:boxborderw=5" -c:a copy -y output.mp4
This creates a text overlay that scrolls horizontally for the duration of the video:
In drawtext=text='Text', change "Text" to whatever you want displayed
x=w-mod(max(t\,0)*(w+tw)/10\,(w+tw)) makes the text run across the screen every 10 seconds
y=h-text_h positions the text at the bottom of the video
box=1:boxcolor=white@0.5:boxborderw=5 draws a semi-transparent box behind the text to make it more visible, but you can remove it
Note that you can use a font file of your choice by adding the fontfile option to the filter, as in the sketch below.
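For a per-user dynamic watermark, one approach is to render the viewer's name or e-mail into the overlay and point drawtext at a font file; a minimal sketch, where the font path and the e-mail address are placeholders you would substitute per user:
ffmpeg -i video.mp4 -vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:text='user42@example.com':x=w-mod(max(t\,0)*(w+tw)/10\,(w+tw)):y=h-text_h:fontsize=24:fontcolor=black:box=1:boxcolor=white@0.5:boxborderw=5" -c:a copy -y output_user42.mp4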
We are developing a stop motion app for kids and schools.
So what we have is:
A sequence of images and audio files (no overlapping audio in v1, but there can be gaps between them)
What we need to do:
Combine the images into a video with a frame rate between 1 and 12 fps
Add multiple audio files at given start times
Encode with H.265 to MP4 format
I would really like to avoid maintaining a VM or Azure batch jobs running ffmpeg jobs if possible.
Are there any good frameworks or third-party APIs?
I have only found Transloadit as the closest match, but they don't have the option to add multiple audio files.
Any suggestions or experience in this area are very much appreciated.
You've mentioned FFmpeg in your tags, and it is a tool that checks all the boxes.
For the first part of your project (making a video from images) you should check this link. To sum up, you'll use this kind of command:
ffmpeg -r 12 -f image2 -i PATH_TO_FOLDER/frame%d.png PATH_TO_FOLDER/output.mp4
-r 12 being your frame rate, here 12. You control the output format with the file extension. To control the video codec check out this link; you'll need to use the option -c:v libx265 before the filename of your output.
With FFmpeg you add audio the same way you add video, with the -i option followed by your filename. If you want to cut audio, you should seek within it; -ss and -t are two options good for that. If you want an audio file to start at a certain point, check out -itsoffset; you can find a lot of examples.
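Putting those pieces together, a rough sketch of the whole pipeline might look like this (filenames, the 12 fps rate, and the delays in milliseconds are placeholders; it uses the adelay and amix filters to position and merge the clips, which is often simpler than juggling -itsoffset across several inputs):
ffmpeg -r 12 -f image2 -i PATH_TO_FOLDER/frame%d.png -i clip1.wav -i clip2.wav -filter_complex "[1:a]adelay=2000|2000[a1];[2:a]adelay=9500|9500[a2];[a1][a2]amix=inputs=2:duration=longest[a]" -map 0:v -map "[a]" -c:v libx265 -pix_fmt yuv420p PATH_TO_FOLDER/output.mp4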
I want to hardsub all the movies and add a watermark to them. I used ffmpeg once but it's slow. Can you recommend a better way, or how to use it faster?
To hardsub and watermark you will need to transcode the entire movie. That is a slow operation regardless of the software you use.
Alternatively, you could use a subtitle track instead of burning the subtitles in, and watermarks can be applied on the player side too. That way you won't need to transcode.
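If you do go the transcoding route, burning the subtitles and the watermark in a single pass at least avoids encoding twice; a sketch, with movie.mp4, movie.srt, and logo.png as placeholder inputs (the -preset value trades encoding speed against file size):
ffmpeg -i movie.mp4 -i logo.png -filter_complex "[0:v]subtitles=movie.srt[v];[v][1:v]overlay=W-w-10:H-h-10" -c:v libx264 -preset faster -crf 23 -c:a copy output.mp4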
I'm struggling with FFmpeg's remap filter. I have a security camera that streams a bunch of different options, but the default is this fisheye:
I see a ton of maps for Ricoh Thetas, but nothing that shows me how to generate those map files for a different layout like the one I have. I've tried doing just two panos, but the image gets stretched out so much when I stream to YouTube. Can someone point me in the right direction?
You posted a modified image (cropped and moved), so applying ffmpeg directly gives weird results, but with raw images, which probably look like this...
using this command...
ffmpeg -i input.png -vf v360=fisheye:e:ih_fov=180:iv_fov=180:pitch=-90 -y output.jpg
you would get this result:
You can then view it here: https://renderstuff.com/tools/360-panorama-web-viewer/
I was making this far too difficult. Just send YouTube the fisheye using FFmpeg. You can tweak the size to prevent some of the distortion.
You need the v360 filter. Make sure you use the latest ffmpeg build; older versions don't include this filter.
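You can check whether your build includes it with something like this (assuming a Unix-like shell):
ffmpeg -filters | grep v360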
I used these parameters for a security camera:
-vf v360=fisheye:equirect:ih_fov=180:iv_fov=180
Result:
You might want to crop the video (because of the black margins):
-vf crop=1500:1500:250:0,v360=fisheye:equirect:ih_fov=180:iv_fov=180,crop=1500:1500:750:0
Of course, adjust the crop filter parameters (width:height:x:y of the region to keep) to your situation.
Is there a way to generate thumbnails from scene changes using ffmpeg? Instead of picking the I-frames, could it pick the middle point between two I-frames? This assumes the middle of two major changes is usually the best moment to take a thumbnail from.
Yes, there is a way!
Execute this command:
ffmpeg -i yourvideo.mp4 -vf select="eq(pict_type\,I)" -vsync 0 -an thumb%03d.png
It is really simple: ffmpeg will do all the work and generate a thumbnail for every I-frame (keyframes, which usually line up with scene changes).
The items to change:
yourvideo.mp4 (you need to change this to your video input)
thumb%03d.png (This is your output)
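If you specifically want frames where the picture changes rather than the I-frames, a variant using the scene-change score might look like this (the 0.4 threshold is only a starting point to tune for your footage):
ffmpeg -i yourvideo.mp4 -vf "select='gt(scene,0.4)'" -vsync 0 -an scene%03d.png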
I want to split one video up into multiple parts based on detecting the first frame of each shot, using scene selection in ffmpeg.
The following command records the scene frames and creates a photo mosaic out of them. This indicates to me that the select portion is functional, but I want to use it to create many separate videos, each scene in its own video file.
ffmpeg -i video.mpg -vf "select=gt(scene\,0.2331),scale=320:240,tile=1x100" -frames:v 1 preview.png
Thank you. I think I am close, and I am open to any solution.
You should definitely use the -ss (start time) and -t (duration in seconds from that start time) options. Can you get the timestamp of each of these scene frames? Then you are good to go.
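A sketch of one way to do that, reusing your 0.2331 threshold and video.mpg (the 12.5 s start and 8.2 s duration in the second command are made-up values you would compute from consecutive timestamps):
ffprobe -f lavfi -i "movie=video.mpg,select=gt(scene\,0.2331)" -show_entries frame=pts_time -of csv=p=0
ffmpeg -ss 12.5 -i video.mpg -t 8.2 -c copy scene_01.mp4
With -c copy the cuts snap to keyframes; drop it and re-encode if you need frame-accurate splits.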