ffmpeg transcoding resets the start time of the file - ffmpeg

I use a segmenter to split my MPEG-2 TS file into a series of media segments for HTTP Live Streaming, and each segment's start time follows the previous one
(e.g. segment start times: 00:00, 00:10, 00:20, 00:30, ...)
(on Ubuntu)
The question is:
When I use ffmpeg to transcode one of the media segments (e.g. from 800 kbps to 200 kbps),
the start time of the transcoded segment is reset to 0.
For example, if I transcode the third segment,
the segment start times become: 00:00, 00:10, 00:00, 00:30, ...
This causes my player to freeze as soon as it plays the transcoded segment.
Is there any way to transcode a media file while keeping its original start time?
I guess ffmpeg is resetting the PTS (presentation timestamps) of the segment,
but I don't know how to fix it...
Here is my ffmpeg command (transcoding to 250 kbps):
============================
ffmpeg -y -i sample-03.ts -f mpegts -acodec libfaac -ar 48000 -ab 64k -vcodec libx264 -b 250k -flags +loop -cmp +chroma \
-partitions +parti4x4+partp8x8+partb8x8 -subq 7 -trellis 0 -refs 0 -coder 0 -me_range 16 -keyint_min 25 \
-sc_threshold 40 -i_qfactor 0.71 -maxrate 250k -bufsize 250k -rc_eq 'blurCplx^(1-qComp)' -qcomp 0.6 \
-qmin 10 -qmax 51 -qdiff 4 -level 30 -aspect 320:240 -g 30 -async 2 sample.ts
============================
Help!
thanks

Direct packet time shifting of H.264-encoded segments

I ended up linking against the ffmpeg libavformat/libavcodec libraries to read the segments in and shift the packet time headers directly. The offset time is specified in seconds:
unsigned int tsShift = offsetTime * 90000; // MPEG-TS timestamps run on a 90 kHz clock
and further down:
do {
    double segmentTime;
    AVPacket packet;

    decodeDone = av_read_frame(pInFormatCtx, &packet);
    if (decodeDone < 0) {
        break;
    }
    if (av_dup_packet(&packet) < 0) {
        cout << "Could not duplicate packet" << endl;
        av_free_packet(&packet);
        break;
    }

    // Track the running segment time from the latest video keyframe,
    // or from the audio stream when there is no video stream.
    if (packet.stream_index == videoIndex && (packet.flags & AV_PKT_FLAG_KEY)) {
        segmentTime = (double)pVideoStream->pts.val * pVideoStream->time_base.num / pVideoStream->time_base.den;
    }
    else if (videoIndex < 0) {
        segmentTime = (double)pAudioStream->pts.val * pAudioStream->time_base.num / pAudioStream->time_base.den;
    }
    else {
        segmentTime = prevSegmentTime;
    }

    // Shift both timestamps by the desired offset (in 90 kHz ticks).
    // cout << "before packet pts dts " << packet.pts << " " << packet.dts;
    packet.pts += tsShift;
    packet.dts += tsShift;
    // cout << " after packet pts dts " << packet.pts << " " << packet.dts << endl;

    ret = av_interleaved_write_frame(pOutFormatCtx, &packet);
    if (ret < 0) {
        cout << "Warning: Could not write frame of stream" << endl;
    }
    else if (ret > 0) {
        cout << "End of stream requested" << endl;
        av_free_packet(&packet);
        break;
    }
    av_free_packet(&packet);
} while (!decodeDone);
mpegts shifter source
Shifted streams in a roundabout way

This shifted the streams, but the time delta is not precisely what I specify. Here's how:
First, convert the original TS file into a raw format:
ffmpeg -i original.ts original.avi
Next, apply a setpts filter and re-encode
(the value will differ depending on frame rate and desired time shift):
ffmpeg -i original.avi -filter:v 'setpts=240+PTS' -sameq -vcodec libx264 shift.mp4
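For reference, setpts works in time-base units, so 240 equals 10 seconds only at a 1/24 time base. A frame-rate-independent sketch, assuming a 10-second shift is what's wanted:

ffmpeg -i original.avi -filter:v 'setpts=PTS+10/TB' -vcodec libx264 shift.mp4

Since TB is the stream time base, PTS+10/TB delays every frame by exactly 10 seconds regardless of frame rate.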
Then segment the resulting shift.mp4:
ffmpeg -i shift.mp4 -qscale 0 -bsf:v h264_mp4toannexb -vcodec copy -an -map 0 -f segment -segment_time 10 -segment_format mpegts -y ./temp-%03d.ts
The last segment file created, temp-001.ts in my case, was time-shifted.
The problem: this method feels obtuse for merely shifting some TS packet times, and it resulted in a start time of 10.5+ seconds instead of the precise 10 seconds desired for the new TS file.
The original suggestion did not work, as described below:
ffmpeg -itsoffset prevTime (rest of ts gen args) | ffmpeg -ss prevTime -i _ -t 10 stuff.ts
where prevTime is the duration of all previous segments.
No good: the second ffmpeg -ss call makes the output mpegts file relative to time 0 (or sometimes 1.4 s; perhaps a bug in the construction of single TS files).

IMO you have a serialized list of segments and you want to concatenate them.
That's it, as long as the serial order of the segments is preserved through the concatenation.
Process to run on each segment entry so that it can be concatenated (see the sketch after this list):
getVideoRaw to its own file
getAudioRaw to its own file
When you have split all of your segments out to raw, do this:
concatenate the video, preserving serialized order, so the video segments remain in the correct order in videoConCatOUT
concatenate the audio as above
then mux the respective concatOUT files into a single container.
This can be scripted and can follow the standard example in the ffmpeg FAQ on concat;
see the '3.14.4' section here.
Note the 'tail' cmd and the explanation about dropping line no. 1 from all except the first segment input to the concat process...
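A sketch of that flow for the H.264/AAC TS segments in question (file names assumed; raw Annex-B H.264 and ADTS AAC streams stay concatenable at the byte level):

ffmpeg -i sample-00.ts -map 0:v -c copy -f h264 sample-00.h264
ffmpeg -i sample-00.ts -map 0:a -c copy -f adts sample-00.aac
(repeat per segment, then:)
cat sample-*.h264 > all.h264
cat sample-*.aac > all.aac
ffmpeg -framerate 25 -i all.h264 -i all.aac -c copy -f mpegts joined.ts

The raw H.264 stream carries no timestamps, so the frame rate must be supplied on remux (25 fps assumed here).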

You should transcode before segmenting. When you transcode an individual segment, ffmpeg creates a brand-new TS stream each time, and the original TS timing data is not carried over.
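A sketch of that order of operations, reusing the question's bitrate settings and assuming 10-second segments:

ffmpeg -i input.ts -acodec libfaac -ab 64k -vcodec libx264 -b 250k -f mpegts full-250k.ts
ffmpeg -i full-250k.ts -c copy -map 0 -f segment -segment_time 10 -segment_format mpegts seg-%03d.ts

The second pass only stream-copies, so the segments keep the continuous timestamps of the single transcoded stream.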

Have a look at the setpts filter. That should give you plenty of control over each segment's PTS.
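For instance, a sketch that shifts the third segment forward by 20 seconds (assuming it should start at 00:20; the audio needs a matching asetpts):

ffmpeg -i sample-03.ts -vf 'setpts=PTS+20/TB' -af 'asetpts=PTS+20/TB' -vcodec libx264 -b 250k -acodec libfaac -f mpegts sample-03-shifted.ts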

There is also a segment muxer (https://ffmpeg.org/ffmpeg-formats.html#segment_002c-stream_005fsegment_002c-ssegment) which may help with what you're after...
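A minimal sketch of using it, with -segment_list producing the HLS playlist alongside the segments (option names as in the linked docs):

ffmpeg -i input.ts -c copy -map 0 -f segment -segment_time 10 -segment_list playlist.m3u8 -segment_format mpegts out-%03d.ts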

Related

The audio of the joined file is distorted but sounds fine in the clipped video - ffmpeg

Hello, I am joining 3 video files. In the final output the audio is distorted, although in each clipped video it sounds fine.
I have tried re-encoding the audio into the video but have not been able to find a solution.
Code that generates the video with audio that sounds good:
comand_audio_video = "ffmpeg -y -i " + _path_videos_principal_intro + "finaltext.mp4 -i " + _path_videos_principal_intro + "final.wav -map 0 -map 1:a -c:v copy -shortest " + _path_videos_principal_intro + "finalVideoAudio.mp4"
Code that generates the concatenated video, whose audio does not sound right:
comand_all_video = "ffmpeg -y -f concat -safe 0 -i " + _path_videos_principal_intro + "videos.txt -acodec copy " + _path_videos_principal_intro + "finalVideoAudioConcat.mp4"
The videos.txt file:
file 'intro.mp4'
file 'finalVideoAudio.mp4'
file 'intro.mp4'
I would appreciate your help!
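One thing worth checking: the concat demuxer with -acodec copy expects every input's audio to share the same codec and parameters, and intro.mp4 and finalVideoAudio.mp4 come from different pipelines. A sketch that re-encodes the audio instead of copying it:

ffmpeg -y -f concat -safe 0 -i videos.txt -c:v copy -c:a aac finalVideoAudioConcat.mp4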

Use ffmpeg to record 2 webcams on raspberry pi

I want to record 2 webcams using ffmpeg. I have a simple Python script, but it doesn't work when I run the 2 subprocesses at the same time.
import datetime
import os
import subprocess

ROOT_PATH = os.getenv("ROOT_PATH", "/home/pi")
ENCODING = os.getenv("ENCODING", "copy")
new_dir = datetime.datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
RECORDINGS_PATH1 = os.getenv("RECORDINGS_PATH", "RecordingsCam1")
RECORDINGS_PATH2 = os.getenv("RECORDINGS_PATH", "RecordingsCam2")
recording_path1 = os.path.join(ROOT_PATH, RECORDINGS_PATH1, new_dir)
recording_path2 = os.path.join(ROOT_PATH, RECORDINGS_PATH2, new_dir)
os.mkdir(recording_path1)
os.mkdir(recording_path2)
segments_path1 = os.path.join(recording_path1, "%03d.avi")
segments_path2 = os.path.join(recording_path2, "%03d.avi")
record1 = "ffmpeg -nostdin -i /dev/video0 -c:v {} -an -sn -dn -segment_time 30 -f segment {}".format(ENCODING, segments_path1)
record2 = "ffmpeg -nostdin -i /dev/video2 -c:v {} -an -sn -dn -segment_time 30 -f segment {}".format(ENCODING, segments_path2)
subprocess.Popen(record1, shell=True)
subprocess.Popen(record2, shell=True)
Also, I tried capturing the 2 sources side by side, but it gives the error: "Filtering and streamcopy cannot be used together."
This has nothing to do with running two processes at the same time. FFmpeg clearly states that it cannot find /dev/video0 and /dev/video2; it seems your video cameras are not being detected. You can check this with the following command:
$ ls /dev/ | grep video
which will list all devices with video in their name. If video0 and video2 do not exist, it is clear why FFmpeg gives that error. If they do exist, I do not know how to resolve this; you may try running the FFmpeg commands directly in a terminal.
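If the devices do exist, one more sketch worth trying is forcing the v4l2 demuxer, which the original commands leave to autodetection:

ffmpeg -nostdin -f v4l2 -i /dev/video0 -c:v copy -an -sn -dn -f segment -segment_time 30 out%03d.avi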

Read frames from video using FFmpeg GPU acceleration (Python)

import os
import subprocess

class FFMPEGFrames:
    def __init__(self, output):
        self.output = output

    def extract_frames(self, input, fps):
        output = input.split('/')[-1].split('.')[0]
        if not os.path.exists(self.output + output):
            os.makedirs(self.output + output)
        query = "ffmpeg -y -hwaccel cuvid -c:v h264_cuvid -resize 1024x70 -i " + input + " -vf scale_npp=format=yuv420p,hwdownload,format=yuv420p -vf fps=" + str(fps) + " -pix_fmt yuvj420p -color_range 2 -vframes 1 -y " + self.output + output + "/output%06d.jpg"
        response = subprocess.Popen(query, shell=True, stdout=subprocess.PIPE).stdout.read()
        s = str(response).encode('utf-8')
I received this error:
Impossible to convert between the formats supported by the filter 'Parsed_fps_0' and the filter 'auto_scaler_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0
Conversion failed!
You likely missed an error message:
Only '-vf fps' read, ignoring remaining -vf options: Use ',' to separate filters
Combine both of your -vf options into one. Also, consider replacing -pix_fmt yuvj420p with the format filter so that all filtering happens within one filtergraph and you can choose exactly where it occurs.
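A sketch of the combined filtergraph, keeping the rest of the original command (fps=1 stands in for the interpolated fps value):

ffmpeg -y -hwaccel cuvid -c:v h264_cuvid -resize 1024x70 -i input.mp4 -vf "scale_npp=format=yuv420p,hwdownload,format=yuv420p,fps=1,format=yuvj420p" -color_range 2 -vframes 1 output%06d.jpg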

Xamarin Android merge audio files with FFMpeg

I am using this binding library for FFMpeg:
https://github.com/gperozzo/XamarinAndroidFFmpeg
My goal is to mix two audio files.
String s = "-i " + "test.wav" + " -i " + "test2.mp3" + " -filter_complex amix=inputs=2:duration=first " + "result.mp3";
Device.BeginInvokeOnMainThread(async () =>
{
await FFMpeg.Xamarin.FFmpegLibrary.Run(Forms.Context, s);
});
So I have 2 input files: one is .mp3 and the other is .wav.
I've also tried the following commands:
String s= "-i "+ "test.wav" +" -i "+ "test2.mp3" + " -filter_complex [0:0][1:0]concat=n=2:v=0:a=1[out] -map [out] " + "result.mp3";
String s = "-i " + "test.wav" + " -i " + "test2.mp3" + " -filter_complex [0:a][1:a]amerge=inputs=2[aout] -map [aout] -ac 2 " + "result.mp3";
1) Can I mix two different audio formats (in my case .mp3 & .wav), or must they be the same?
2) What is the correct command line for the mixing?
Thanks in advance.
1) Could I mix two different audio formats (in my case .mp3 & .wav)?
Yes. The input format does not matter because it will be fully decoded to PCM audio before being fed to the filter, but you have to be aware of how the various input channels will be mixed to create the channel layout for the output. Read the documentation on the amerge and amix filters for more info.
2) What is the correct command line for the mixing?
Your command using amerge should work:
ffmpeg -i test.wav -i test2.mp3 -filter_complex "[0:a][1:a]amerge=inputs=2[aout]" -map "[aout]" -ac 2 result.mp3
Or using amix:
ffmpeg -i test.wav -i test2.mp3 -filter_complex "[0:a][1:a]amix=inputs=2:duration=shortest[aout]" -map "[aout]" result.mp3
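The practical difference: amerge keeps every input channel (two stereo inputs become one four-channel stream, hence the -ac 2 downmix), while amix mixes the inputs together into a single stream with the original channel layout.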

ffmpeg choose exact frame from long film strip

I'm working with ffmpeg to choose a better thumbnail for my video, and the selection is based on a slider.
Per the requirement, multiple thumbnails are not needed; just a single long film-strip image, from which the slider selects the thumbnail to save.
I used the command below to get the long-strip thumbnail:
ffmpeg -loglevel panic -y -i "video.mp4" -frames 1 -q:v 1 -vf "select=not(mod(n\,40)),scale=-1:120,tile=100x1" video_preview.jpg
I followed the instructions from this tutorial and am able to get the long film-strip image. This works fine; the image moves in the slider as intended.
My question is: how can I select a particular frame from that slider / film strip? How can I calculate the exact time from the slider position and then run a command to extract that frame?
In one of my projects I implemented the scenario below. I get the video duration from an ffprobe command; if the duration is less than 0.5 seconds, I use a smaller thumbnail interval. In your case you should set the time interval for thumbnail creation from the slider. Hope this helps.
$dur = 'ffprobe -i '.$video.' -show_entries format=duration -v quiet -of csv="p=0"';
$duration = exec($dur);
if ($duration < 0.5) {
    $interval = 0.1;
} else {
    $interval = 0.5;
}
// screenshot size
$size = '320x240';
// ffmpeg command
$cmd = "ffmpeg -i $video -deinterlace -an -ss $interval -f mjpeg -t 1 -r 1 -y $image";
exec($cmd);
You can try this:
ffmpeg -vsync 0 -ss duration -t 0.0001 -noaccurate_seek -i filename -ss 00:30 -t 0.0001 -noaccurate_seek -i filename -filter_complex "[0:v][1:v]concat=n=2[con];[con]scale=80:60:force_original_aspect_ratio=decrease[sc];[sc]tile=2x1[out]" -map "[out]:v" -frames 1 -f image2 filmStrip.jpg
This produces a strip of 2 frames.
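To map the slider position back to a timestamp with the original tile command: the strip keeps every 40th frame (select=not(mod(n\,40))), so tile i (zero-based) corresponds to frame i*40, i.e. time = i*40/fps seconds. A sketch, assuming a 25 fps source:

# tile index 7 -> frame 280 -> 280/25 = 11.2 seconds
ffmpeg -ss 11.2 -i video.mp4 -frames:v 1 -q:v 1 thumb.jpg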
