I have a raw file containing audio encoded with the G.711 codec. FFmpeg decoded the raw file successfully. I want to split the WAV output by time, so I used the segment_time option.
Here is my code:
ffmpeg -acodec pcm_alaw -f sln -i g711.raw -ar 8000 -acodec pcm_s16le -f segment -segment_time 10 g711_%03d.wav
This command ran without errors, but I want to know how many WAV files were created. Is there a way to find that out, perhaps by adding an option to the command?
Thanks,
Here are 3 methods:
1. Look at the console output / log
It will output lines like:
[segment # 0x55a7f065c700] Opening 'output_003.wav' for writing
This is the last such line in this example, so there are 4 WAV files (numbering starts at 0 unless you use -segment_start_number).
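If you capture that output, you can count the segment lines directly (a sketch; the exact wording of the log line can vary between ffmpeg versions, so treat the grep pattern as an assumption):

```shell
# Count how many output files the segment muxer reported opening.
# The "for writing" wording is taken from the log line above and may
# differ in other ffmpeg versions.
ffmpeg -acodec pcm_alaw -f sln -i g711.raw -ar 8000 -acodec pcm_s16le \
  -f segment -segment_time 10 g711_%03d.wav 2>&1 | grep -c "for writing"
```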
2. Just list the WAV files
Linux example:
$ ls -1 *.wav | wc -l
4
3. Use -segment_list
ffmpeg -acodec pcm_alaw -f sln -i g711.raw -ar 8000 -acodec pcm_s16le -f segment -segment_time 10 -segment_list list.txt output_%03d.wav
Example contents of list.txt:
output_000.wav
output_001.wav
output_002.wav
output_003.wav
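Since the list file contains one segment filename per line, counting the segments is just a line count (assuming you used -segment_list list.txt as above):

```shell
# Number of segments = number of lines in the segment list.
wc -l < list.txt
```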
See the segment muxer documentation for more info.
Related
I have 40 videos that I need to trim so each starts at its 6th second, and then combine them into a single video.
I also have a list_of_movies.txt that contains the names of all the files I need to trim.
This is my problem:
When concatenating the videos to a single file, ffmpeg accepts:
ffmpeg -f concat -i list_of_movies.txt -c copy output.mp4
but when trimming, it would not accept:
ffmpeg -ss 00:00:06 -i list_of_movies.txt trimmed.mp4
What am I doing wrong?
If you want to trim each video to start from its 6th second, you'll have to specify it for each file within the text file, so
file first.mp4
inpoint 5
file second.mp4
inpoint 5
file third.mp4
inpoint 5
[...]
and then run
ffmpeg -f concat -i list_of_movies.txt -c copy output.mp4
However, in copy mode ffmpeg can only cut at video keyframes, so expect extra frames and possibly broken audio sync.
It's best to re-encode:
ffmpeg -f concat -segment_time_metadata 1 -i list_of_movies.txt -vf select=concatdec_select,setpts=N/FRAME_RATE/TB -af aselect=concatdec_select,asetpts=N/SR/TB output.mp4
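Writing a `file`/`inpoint` pair for 40 videos by hand is tedious; a small shell loop can generate the list (a sketch assuming the videos match *.mp4 in the current directory and you want the same inpoint for all of them):

```shell
# Build list_of_movies.txt with an inpoint entry for every video.
for f in *.mp4; do
  printf "file '%s'\ninpoint 5\n" "$f"
done > list_of_movies.txt
```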
I have a bunch of H.264-encoded MP4 files (of about 10-15 seconds each) and I want to mix them with another bunch of JPEGs (each of which should be displayed for x seconds).
So I've setup the concat.txt file :
file slide_1.jpg
duration 3
file movie_1.mp4
file slide_2.jpg
duration 5
file movie_2.mp4
and I am trying to run
yes | scripts/ffmpeg -f concat -i concat.txt -vcodec copy -c:a copy final.mp4
which generates a movie with a length of 6 hours (6:48:34) in which I can only see the 1st picture.
How do I fix this?
As LordNeckbeard said, the slides should first be converted to movies.
So in my case I convert a slide to a movie like this (slide 1 will be a 3-second clip):
yes | scripts/ffmpeg -loop 1 -r 25 -i slide_1.jpg -t 00:00:03 -vcodec libx264 -pix_fmt yuv420p -an slide_1.mp4
Then the concat file looks like this:
file slide_1.mp4
file movie_1.mp4
file slide_2.mp4
file movie_2.mp4
and the concatenation command is:
yes | scripts/ffmpeg -f concat -i concat.txt -vcodec copy -c:a copy final.mp4
Note that all the movie pieces must have the same width and height.
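If the slides are a different size than the movies, you can scale and pad them to match while converting (a sketch; 1280x720 is an assumed target resolution, replace it with your movies' actual dimensions):

```shell
# Assumed target resolution -- set these to match your movie clips.
W=1280
H=720
# Scale the slide to fit inside WxH, then pad to exactly WxH, centered.
VF="scale=${W}:${H}:force_original_aspect_ratio=decrease,pad=${W}:${H}:(ow-iw)/2:(oh-ih)/2"
ffmpeg -loop 1 -r 25 -i slide_1.jpg -t 3 -vf "$VF" \
  -vcodec libx264 -pix_fmt yuv420p -an slide_1.mp4
```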
I am trying to create a video out of a sequence of images and various audio files using FFmpeg. While it is no problem to create a video containing the sequence of images with the following command:
ffmpeg -f image2 -i image%d.jpg video.mpg
I haven't found a way yet to add audio files at specific points to the generated video.
Is it possible to do something like:
ffmpeg -f image2 -i image%d.jpg -i audio1.mp3 AT 10s -i audio2.mp3 AT 15s video.mpg
Any help is much appreciated!
EDIT:
The solution in my case was to use sox, as suggested by blahdiblah in the answer below. You first have to create an empty audio file as a starting point, like this:
sox -n -r 44100 -c 2 silence.wav trim 0.0 20.0
This generates a 20-second silent WAV file. After that you can mix the empty file with other audio files.
sox -m silence.wav "|sox sound1.mp3 -p pad 0" "|sox sound2.mp3 -p pad 2" out.wav
The final audio file has a duration of 20 seconds and plays sound1.mp3 right at the beginning and sound2.mp3 after 2 seconds.
To combine the sequence of images with the audio file we can use FFmpeg.
ffmpeg -i video_%05d.png -i out.wav -r 25 out.mp4
See this question on adding a single audio input with some offset. The -itsoffset bug mentioned there is still open, but see users' comments for some cases in which it does work.
If it works in your case, that would be ideal:
ffmpeg -i in%d.jpg -itsoffset 10 -i audio1.mp3 -itsoffset 15 -i audio2.mp3 out.mpg
If not, you should be able to combine all the audio files with sox, overlaying or inserting silence to produce the correct offsets and then use that as input to FFmpeg. Not as convenient, but guaranteed to work.
One approach I can think of is to create the audio file for the whole duration of the video first, and then mux the audio with the video file.
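If your ffmpeg build has the adelay and amix filters, the sox step can sometimes be replaced by a single ffmpeg pass (a sketch, not a tested answer for this question; adelay takes its offsets in milliseconds, one value per channel):

```shell
# Offsets in milliseconds (adelay works in ms, one value per channel
# for stereo input).
D1=$((10 * 1000))   # start audio1.mp3 at 10 s
D2=$((15 * 1000))   # start audio2.mp3 at 15 s
ffmpeg -f image2 -i image%d.jpg -i audio1.mp3 -i audio2.mp3 \
  -filter_complex "[1:a]adelay=${D1}|${D1}[a1];[2:a]adelay=${D2}|${D2}[a2];[a1][a2]amix=inputs=2[aout]" \
  -map 0:v -map "[aout]" video.mpg
```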
I have an input file, 02.mp3. I want to convert it to an MP3 file with a specific bit rate. While doing so, I want to preserve all the metadata, and the APIC (attached picture) should also be transferred to the destination file. I am using FFmpeg with the following command:
ffmpeg -y -i 02.mp3 -id3v2_version 3 -ab 128000 -ss 0 -acodec libmp3lame -f mp3 -ac 2 -ar 44100 output.mp3
source file: 02.mp3
destination file: output.mp3
But in the destination file I am not getting the APIC (the attached picture from 02.mp3). I am getting all the other MP3 tags in output.mp3, but not the APIC. How do I get the APIC into the destination file as well?
You will need to patch your FFmpeg source to support binary data in metadata and rebuild. The patch is here:
http://lists.mplayerhq.hu/pipermail/ffmpeg-devel/2011-December/118085.html
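Note: on recent ffmpeg builds the patch may no longer be needed, because the attached picture is exposed as a video stream that can be stream-copied with -map (a sketch, untested against the asker's build; the trailing ? makes the map optional, so the command still runs if no picture is present):

```shell
# Re-encode the audio but stream-copy the cover-art "video" stream.
# -b:a 128k is the same bitrate as the original -ab 128000.
ffmpeg -y -i 02.mp3 -map 0:a -map 0:v? -c:v copy \
  -c:a libmp3lame -b:a 128k -ac 2 -ar 44100 -id3v2_version 3 output.mp3
```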
Does anyone know if it is possible to encode a video using ffmpeg in reverse? (So the resulting video plays in reverse?)
I think I can do it by generating an image for each frame (a folder of images labelled 1.jpg, 2.jpg, etc.), writing a script to reverse the image names, and then re-encoding the video from these files.
Does anyone know of a quicker way?
This is an FLV video.
Thank you
No, it isn't possible with ffmpeg to encode a video in reverse without dumping it to images and back again. There are a number of guides available online showing how to do it, notably:
http://ubuntuforums.org/showthread.php?t=1353893
and
https://sites.google.com/site/linuxencoding/ffmpeg-tips
The latter of which follows:
Dump all video frames
$ ffmpeg -i input.mkv -an -qscale 1 %06d.jpg
Dump audio
$ ffmpeg -i input.mkv -vn -ac 2 audio.wav
Reverse audio
$ sox -V audio.wav backwards.wav reverse
Cat video frames in reverse order to FFmpeg as input
$ cat $(ls -r *jpg) | ffmpeg -f image2pipe -vcodec mjpeg -r 25 -i - -i backwards.wav -vcodec libx264 -vpre slow -crf 20 -threads 0 -acodec flac output.mkv
Use mencoder to deinterlace PAL dv and double the frame rate from 25 to 50, then pipe to FFmpeg.
$ mencoder input.dv -of rawvideo -ofps 50 -ovc raw -vf yadif=3,format=i420 -nosound -really-quiet -o - | ffmpeg -vsync 0 -f rawvideo -s 720x576 -r 50 -pix_fmt yuv420p -i - -vcodec libx264 -vpre slow -crf 20 -threads 0 video.mkv
I've created a script for this based on Andrew Stubbs' answer
https://gist.github.com/hfossli/6003302
Can be used like so
./ffmpeg_sox_reverse.sh -i Desktop/input.dv -o test.mp4
New Solution
A much simpler method exists now, simply use the command (adjusting input.mkv and reversed.mkv accordingly):
ffmpeg -i input.mkv -af areverse -vf reverse reversed.mkv
The -af areverse will reverse audio, and -vf reverse will reverse video. The video and audio will be in sync automatically in the output file reversed.mkv, no need to worry about the input frame rate or anything else.
On one video if I only specified the -vf reverse to reverse video (but not audio), the output file didn't play correctly in mkv format but did work if I changed it to mp4 output format (I don't think this use case of reversing video only but not audio is common, but if you do run into this issue you can try changing the output format). On large input videos that exceed the RAM available in your computer, this method may not work and you may need to chop up the input file or use the old solution below.
Old Solution
One issue is the frame rate can vary depending on the video, many answers depend on a specific frame rate (like "-r 25" for 25 frames per second). If the frame rate in the video is different, this will cause the reversed audio and video to go out of sync.
You can of course manually adjust the frame rate each time (you can get the frame rate by running ffmpeg -i video.mkv and look for the number in front of the fps, this is sometimes a decimal number like 23.98). But with some bash code you can easily extract the fps, store it in a variable, and automatically pass it to the programs.
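If ffprobe (shipped with ffmpeg) is available, it may be more robust to query the frame rate directly instead of grepping ffmpeg's log (a sketch; r_frame_rate comes back as a fraction such as 30000/1001, which awk can turn into a decimal):

```shell
# Ask ffprobe for the first video stream's frame rate, e.g. "30000/1001".
FPS_FRAC=$(ffprobe -v error -select_streams v:0 \
  -show_entries stream=r_frame_rate \
  -of default=noprint_wrappers=1:nokey=1 input.mkv)
# Convert the fraction to a decimal, e.g. 29.97.
FRAME_RATE=$(echo "$FPS_FRAC" | awk -F/ '{ printf "%.2f", $1 / $2 }')
echo "$FRAME_RATE"
```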
Based on this I've created the following bash script to do that. Simply chmod +x it and run it ./make-reversed-video.sh input.mkv output.mkv. The code is as follows:
#!/bin/bash
#Partially based on https://nhs.io/reverse/, but with some modifications, including automatic extraction of the frame rate.
#Get parameters.
VIDEO_FILE=$1
OUTPUT_FILE=$2
echo "Using input file: $VIDEO_FILE"
echo "Using output file: $OUTPUT_FILE"
mkdir -p /tmp/create_reversed_video
#Get frame rate.
FRAME_RATE=$(ffmpeg -i "$VIDEO_FILE" 2>&1 | grep -o -P '[0-9.]+ fps' | grep -o -P '[0-9.]+')
echo "The frame rate is: $FRAME_RATE"
#Extract audio from video.
ffmpeg -i "$VIDEO_FILE" -vn -ac 2 /tmp/create_reversed_video/audio.wav
#Reverse the audio.
sox -V /tmp/create_reversed_video/audio.wav /tmp/create_reversed_video/backwards.wav reverse
#Extract each video frame as an image.
ffmpeg -i "$VIDEO_FILE" -an -qscale 1 /tmp/create_reversed_video/%06d.jpg
#Recombine into reversed video.
ls -1 /tmp/create_reversed_video/*.jpg | sort -r | xargs cat | ffmpeg -framerate "$FRAME_RATE" -f image2pipe -i - -i /tmp/create_reversed_video/backwards.wav "$OUTPUT_FILE"
#Delete temporary files.
rm -rf /tmp/create_reversed_video
I've tested it and it works well on my Ubuntu 18.04 machine on lots of videos (after installing the dependencies like sox). Please let me know if it works on other Linux distributions and versions.