I'd like to loop an audio watermark over a longer piece of audio, say every 20 seconds.
Right now I have mixed the two pieces of audio and the watermark plays at the very beginning:
const ffmpeg = require('fluent-ffmpeg');
const ffmpeg_static = require('ffmpeg-static');

const command = ffmpeg(tempFilePath) // File path to base audio file
  .setFfmpegPath(ffmpeg_static)
  .input(tempWatermarkAudioPath) // File path to watermark audio file
  // Need to loop the watermark
  .complexFilter('amix=inputs=2:duration=longest')
  .audioChannels(2)
  .audioFrequency(48000)
  .format('mp3')
  .output(targetTempFilePath);
I have looked at "ffmpeg: How to repeat an audio 'watermark'" and tried the following, to no avail:
.complexFilter("amovie=" + tempWatermarkAudioPath + ":loop=0,asetpts=N/SR/TB[beep];[0][beep]amix=duration=longest,volume=2")
and
.complexFilter("amovie=[b]:loop=0,asetpts=N/SR/TB[b];[0][b]amix=duration=longest,volume=2")
In both these cases I get "File path not found" using Google Cloud Functions.
Any help would be greatly appreciated.
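One thing worth knowing: amovie opens the file from inside the filter graph, so its path argument needs filter-graph escaping, which may be what produces the "File path not found" error on Cloud Functions. Keeping the watermark as a regular second input avoids that, and the looping can then happen in the filter graph. A minimal sketch, assuming fluent-ffmpeg and ffmpeg-static as in your snippet, and an ffmpeg build recent enough to have apad's whole_dur option:
// Sketch: silence-pad the watermark to a 20-second slot, repeat it
// indefinitely, and mix it over the base audio. Assumes apad=whole_dur
// is available in the bundled ffmpeg build.
const ffmpeg = require('fluent-ffmpeg');
const ffmpeg_static = require('ffmpeg-static');

const command = ffmpeg(tempFilePath)       // base audio
  .setFfmpegPath(ffmpeg_static)
  .input(tempWatermarkAudioPath)           // watermark audio, as a normal input
  .complexFilter(
    // [1:a] = watermark: pad with silence to 20 s total, then loop forever;
    // amix ends with the first (base) input; volume=2 compensates for the
    // level reduction amix applies, as in your attempts.
    '[1:a]apad=whole_dur=20,aloop=loop=-1:size=2e+09[wm];' +
    '[0:a][wm]amix=inputs=2:duration=first,volume=2'
  )
  .audioChannels(2)
  .audioFrequency(48000)
  .format('mp3')
  .output(targetTempFilePath);
If back-to-back repetition is enough (no 20-second spacing), adding .inputOptions(['-stream_loop', '-1']) on the watermark input together with amix=inputs=2:duration=first should do the same without apad/aloop.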
I want to record video from a camera, save it to file, and at the same time have access to the last frame recorded.
One idea would be to use ffmpeg's Multiple Outputs functionality to split the stream in two: one copy gets saved to file, the other spits out the last recorded frame (ideally, the frames won't need to be written to disk, but piped onwards for processing).
What I don't know is how to get ffmpeg to spit "the last frame" from a stream.
Any ideas?
Output a video and continuously update an image every second
ffmpeg -i input.foo outputvideo.mp4 -r 1 -update 1 image.jpg
Output a video and output a new image every second
ffmpeg -i input.foo outputvideo.mp4 -r 1 images_%04d.jpg
Output will be named images_0001.jpg, images_0002.jpg, etc.
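For the "piped onwards for processing" part, a hedged sketch (the consumer command is hypothetical): the image2pipe muxer can send frames to stdout instead of disk, so a second output writes one JPEG per second into a pipe while the first output saves the video:
ffmpeg -i input.foo outputvideo.mp4 \
       -map 0:v -r 1 -f image2pipe -vcodec mjpeg pipe:1 | your_frame_processor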
Also see
FFmpeg image muxer documentation for more info and options.
How can I extract a good quality JPEG image from a video file with ffmpeg?
We are developing a stop motion app for kids and schools.
So what we have is:
A sequence of images and audio files (no overlapping audio in v1, but there can be gaps between them)
What we need to do:
Combine the images into a video with a frame rate between 1 and 12 fps
Add multiple audio files at given start times
Encode with H.265 to MP4 format
I would really like to avoid maintaining a VM or Azure Batch jobs running ffmpeg, if possible.
Are there any good frameworks or third-party APIs?
I have only found Transloadit as the closest match, but they don't have the option to add multiple audio files.
Any suggestions or experience in this area is very appreciated.
You've mentioned FFmpeg in your tags, and it is a tool that checks all the boxes.
For the first part of your project (making a video from images) you should check this link. To sum up, you'll use this kind of command:
ffmpeg -r 12 -f image2 -i PATH_TO_FOLDER/frame%d.png PATH_TO_FOLDER/output.mp4
-r 12 being your frame rate, here 12 fps. You control the output format with the file extension. To control the video codec check out this link; you'll need to use the option -c:v libx265 before the filename of your output.
With FFmpeg you add audio as you add video, with the -i option followed by your filename. If you want to cut audio you should seek within it; -ss and -t are two options good for that. If you want an audio track to start at a certain point, check out -itsoffset; you can find a lot of examples. A sketch combining all of this follows below.
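A hedged sketch of how those parts can combine into one command (file names and start times are made up for the example). It builds the video from the PNG sequence at 12 fps, delays each audio file to its start time with the adelay filter (a filter-graph alternative to -itsoffset; it takes milliseconds, repeated once per channel), mixes them, and encodes with H.265:
ffmpeg -framerate 12 -i frames/frame%d.png -i clip1.mp3 -i clip2.mp3 \
       -filter_complex "[1:a]adelay=2500|2500[a1];[2:a]adelay=9000|9000[a2];[a1][a2]amix=inputs=2[aout]" \
       -map 0:v -map "[aout]" -c:v libx265 -pix_fmt yuv420p output.mp4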
I have ffmpeg set up.
Is there a way to extract pictures/stills/photos (etc.) from a container (file) that's from an old CD-I game I have?
I don't want to extract the audio or video, and I don't want frames from the videos either.
I want the bitmaps (etc.) from INSIDE that container file.
I know my Windows 8.1 PC can't read inside that container file, so I'm hoping there's a way to extract all the files I want using ffmpeg instead.
(IsoBuster only gives the audio and video, so I already know about IsoBuster.)
I think there are no individual headers for the pictures/stills/photos, etc.
Here's what ExifTool decoded the file as:
ExifTool Version Number (10.68)
File Name (green.3t)
File Size (610 MB)
File Permissions (rw-rw-rw-)
File Type (MPEG)
File Type Extension (mpg)
MIME Type (video/mpeg)
MPEG Audio Version (1)
Audio Layer (2)
Audio Bitrate (80 kbps)
Sample Rate (44100)
Channel Mode (Single Channel)
Mode Extension (Bands 4-31)
Copyright Flag (False)
Original Media (False)
Emphasis (None)
Image Width (368)
Image Height (272)
Aspect Ratio (1.0695)
Frame Rate (25 fps)
Video Bitrate (1.29 Mbps)
Duration (1:02:12 approx)
Image Size (368x272)
Megapixels (0.100)
Thank you for reading and - help!! :D
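Not a full answer, but since ExifTool reports the container as plain MPEG, a first step could be asking ffprobe what streams it can actually see; anything beyond the single audio and video stream would be a candidate for the embedded stills. A sketch, using the file name from the ExifTool dump above:
ffprobe -hide_banner -show_format -show_streams green.3t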
I have been looking for a way to convert a sequence of PNGs to a video. There are ways to do that using the concat demuxer within FFmpeg and a script.
The problem is that I want to show certain images longer than others. And I need it to be accurate. I can set a duration (in seconds) in the script file. But I need it to be frame-accurate. So far I have not been successful.
This is what I want to make:
QuickTime video with transparency (ProRes 4444 or another codec that supports transparency + alpha channel)
25fps
This is what I have: [ TimecodeIn - TimecodeOut in destination video ]
img001.png [0:00:05:10 - 0:00:07:24]
img002.png [0:00:09:02 - 0:00:12:11]
img003.png [0:00:15:00 - 0:00:17:20]
...
img120.png [0:17:03:11 - 0:17:07:01]
Of course this is not the format of the script file, just an idea of what kind of data I am dealing with. The PNG image files are subtitles I generate elsewhere in my application. I would like to be able to export the subtitles as a transparent movie that I can easily import into my video editing software.
I also have been thinking of using blank transparent images I will use as spacers, between the actual subtitle images.
After looking around I think this might help:
On the FFmpeg site they explain how to make a timed slideshow.
In the Concat demuxer section they talk about making a slideshow based on a text file, with references to the image files and the duration of each image.
So, I create all the PNG images I need. These images have the subtitle text. Each image holds one subtitle page.
For the moments I want to hide the subtitle, I use a blank PNG.
I generate a text file as explained on the FFMPEG website.
This text file will reference all the PNGs. For the duration I just calculate outcue minus incue. Easy... I think...
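A hedged sketch of what that plan can look like (entries and timings taken from the example above; every duration is a whole number of frames divided by 25, so at 25 fps it stays frame-accurate):
# list.txt for the concat demuxer; every duration = frame count / 25
file 'blank.png'
duration 5.40
file 'img001.png'
duration 2.56
file 'blank.png'
duration 1.12
file 'img002.png'
duration 3.36
Then encode with an alpha-capable ProRes profile (prores_ks, profile 4444, with an alpha pixel format):
ffmpeg -f concat -safe 0 -i list.txt -r 25 \
       -c:v prores_ks -profile:v 4444 -pix_fmt yuva444p10le output.mov
One documented quirk of the concat demuxer: the last file may need to be listed a second time for its duration to be honored.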
I'm trying to make a batch of videos for uploading to YouTube. My emphasis is on the audio (mostly MP3, with some WMA). I have several image files that need to be picked at random to go with the audio (i.e. display an image for 5 seconds before showing the next). I want the video to stop when the audio stream ends. How should I use ffmpeg to achieve this?
Ref:
http://trac.ffmpeg.org/wiki/Create%20a%20video%20slideshow%20from%20images
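A hedged sketch of the basic shape (file names are hypothetical): the image2 demuxer's -loop 1 cycles the image sequence indefinitely, -framerate 1/5 holds each picture for 5 seconds, and -shortest stops the output when the audio ends. Random order isn't something ffmpeg does by itself; shuffle the file names beforehand, e.g. when generating the sequence or a concat list:
ffmpeg -loop 1 -framerate 1/5 -i img%03d.jpg -i audio.mp3 \
       -c:v libx264 -r 25 -pix_fmt yuv420p -c:a aac -shortest out.mp4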