I have two MPTS files with the same PID, 3511: the first is the source, the second is the output of a transcoder. I want to calculate the PSNR between them.
.\ffmpeg.exe -i origin.ts -i ref.ts -streamid 0:3511 -streamid 1:3511 -lavfi psnr=stats_file=psnr_logfile.txt -f null -
but ffmpeg returns an error:
[Parsed_psnr_0 # 000002161e3c6a40] Width and height of input videos must be same.
[Parsed_psnr_0 # 000002161e3c6a40] Failed to configure input pad on Parsed_psnr_0
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #0:2
Is it possible to select the same PID from the first and second inputs and calculate the PSNR?
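A possible fix, as an untested sketch: -streamid sets stream IDs on the output rather than selecting input streams; selecting an input stream by PID uses the #stream_id specifier (PID 3511 is 0xdb7 in hex). The psnr filter also requires both inputs to have identical dimensions, so the transcoded stream has to be scaled back to the source's size first, e.g. with scale2ref:
.\ffmpeg.exe -i origin.ts -i ref.ts -lavfi "[1:#0xdb7][0:#0xdb7]scale2ref[ref][main];[main][ref]psnr=stats_file=psnr_logfile.txt" -f null -
If your build rejects the # specifier inside filtergraph labels, select the streams with -map and plain [0:v]-style labels instead.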
I need to trim a video and merge the pieces into one file, with a cross fade or other smooth transition between the cuts. How can I implement xfade or another ffmpeg smooth transition?
I have read about this from multiple sources, e.g.:
Merging multiple video files with ffmpeg and xfade filter
But I still fail to produce a working command.
Below are the example command and the video sections that I need to trim:
ffmpeg -y -i example.mp4 -filter_complex
"[0:v]trim=start=0.1:end=0.7333333333333333,setpts=PTS-STARTPTS[v0];
[0:v]trim=start=1.2333333333333334:end=4.8,setpts=PTS-STARTPTS[v1];
[0:v]trim=start=4.966666666666667:end=10.466666666666667,setpts=PTS-STARTPTS[v2];
[0:v]trim=start=10.6:end=13.066666666666666,setpts=PTS-STARTPTS[v3];
[0:v]trim=start=13.733333333333333:end=17.333333333333332,setpts=PTS-STARTPTS[v4];
[0:v]trim=start=39.9:end=40.56666666666667,setpts=PTS-STARTPTS[v5];
[0:a]atrim=start=0.1:end=0.7333333333333333,asetpts=PTS-STARTPTS[a0];
[0:a]atrim=start=1.2333333333333334:end=4.8,asetpts=PTS-STARTPTS[a1];
[0:a]atrim=start=4.966666666666667:end=10.466666666666667,asetpts=PTS-STARTPTS[a2];
[0:a]atrim=start=10.6:end=13.066666666666666,asetpts=PTS-STARTPTS[a3];
[0:a]atrim=start=13.733333333333333:end=17.333333333333332,asetpts=PTS-STARTPTS[a4];
[0:a]atrim=start=39.9:end=40.56666666666667,asetpts=PTS-STARTPTS[a5];
[v0][a0][v1][a1][v2][a2][v3][a3][v4][a4][v5][a5]concat=n=6:v=1:a=1[outv][outa]"
-map "[outv]" -map "[outa]" example_COMPLEX.mp4
I generated this command with the xfade effect:
ffmpeg -y -i example.mp4 -filter_complex
"[0:v]trim=start=0.1:end=0.7333333333333333,setpts=PTS-STARTPTS[v0];
[0:v]trim=start=1.2333333333333334:end=4.8,setpts=PTS-STARTPTS[v1];
[0:v]trim=start=4.966666666666667:end=10.466666666666667,setpts=PTS-STARTPTS[v2];
[0:v]trim=start=10.6:end=13.066666666666666,setpts=PTS-STARTPTS[v3];
[0:v]trim=start=13.733333333333333:end=17.333333333333332,setpts=PTS-STARTPTS[v4];
[0:v]trim=start=39.9:end=40.56666666666667,setpts=PTS-STARTPTS[v5];
[0:a]atrim=start=0.1:end=0.7333333333333333,asetpts=PTS-STARTPTS[a0];
[0:a]atrim=start=1.2333333333333334:end=4.8,asetpts=PTS-STARTPTS[a1];
[0:a]atrim=start=4.966666666666667:end=10.466666666666667,asetpts=PTS-STARTPTS[a2];
[0:a]atrim=start=10.6:end=13.066666666666666,asetpts=PTS-STARTPTS[a3];
[0:a]atrim=start=13.733333333333333:end=17.333333333333332,asetpts=PTS-STARTPTS[a4];
[0:a]atrim=start=39.9:end=40.56666666666667,asetpts=PTS-STARTPTS[a5];
[v0][v1]xfade=transition=fade:duration=0.5:offset=8.2[x1];
[x1][v2]xfade=transition=fade:duration=0.5:offset=8.2[x2];
[x2][v3]xfade=transition=fade:duration=0.5:offset=10.166666666666666[x3];
[x3][v4]xfade=transition=fade:duration=0.5:offset=13.266666666666666[x4];
[x4][v5]xfade=transition=fade:duration=0.5:offset=13.433333333333337,format=yuv420p[video];
[a0] [a1] [a2] [a3] [a4] [a5]concat=n=6:v=1:a=1 [out]"
-map "[video]" -map "[out]" example_COMPLEX.mp4
But I get an error message:
[Parsed_asetpts_13 # 0000014db55ea140] Media type mismatch between the 'Parsed_asetpts_13' filter output pad 0 (audio) and the 'Parsed_concat_30' filter input pad 0 (video)
[AVFilterGraph # 0000014db5414580] Cannot create the link asetpts:0 -> concat:0
Error initializing complex filters.
Invalid argument
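For reference, the mismatch appears to come from the last filtergraph line: concat is declared with v=1:a=1, so its first input pad expects video, but it is fed only the audio pads [a0]..[a5]. An audio-only concatenation would presumably be declared as:
[a0][a1][a2][a3][a4][a5]concat=n=6:v=0:a=1[out]
(To actually crossfade the audio along with the video, something like acrossfade would be needed instead of a plain concat.)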
I receive an MPEG-TS container over the network (UDP). It contains two streams: an mpeg2video video stream with the yuv420p pixel format, and a data stream encoded in a proprietary KLV format.
My receiver program must be in Python, so I can't use the FFmpeg libraries (libavformat, libavcodec) directly.
Now my problem is as follows:
I need to receive the video frames and save them as RGB images in raw numpy arrays. For each frame I also need to parse the corresponding KLV data; there is a one-to-one relationship between video frames and KLV data units.
I thought I could use ffprobe to output the packets, including their payload data, from the incoming container, and then parse the ffprobe output to get the images and the metadata:
$ ffprobe -show_packets -show_data -print_format json udp://127.0.0.1:12345 > test_video.packets.data.json
This gives me output (in the test_video.packets.data.json file) like:
{
    "codec_type": "video",
    "stream_index": 0,
    "pts": 140400,
    "pts_time": "1.560000",
    "dts": 136800,
    "dts_time": "1.520000",
    "duration": 3600,
    "duration_time": "0.040000",
    "size": "21301",
    "pos": "3788012",
    "flags": "K_",
    "side_data_list": [
        {
            "side_data_type": "MPEGTS Stream ID"
        }
    ],
    "data": "... "
},
{
    "codec_type": "data",
    "stream_index": 1,
    "pts": 140400,
    "pts_time": "1.560000",
    "dts": 140400,
    "dts_time": "1.560000",
    "size": "850",
    "pos": "3817904",
    "flags": "K_",
    "side_data_list": [
        {
            "side_data_type": "MPEGTS Stream ID"
        }
    ],
    "data": ".... "
}
I can extract the KLV data from the data packets and parse it. However, the data in the video packets is encoded as mpeg2video with the yuv420p pixel format.
My Questions:
How can I get the raw pixel values from the mpeg2-encoded payload?
Is it possible to use ffmpeg to receive the original container and copy it (with both streams) into a new container, but with raw video instead of mpeg2 video? If yes, how? What should the command be? I tried, for example:
ffmpeg -i udp://127.0.0.1:12345 -map 0:0 -codec rawvideo -pix_fmt rgb24 -map 0:1 -codec copy -f mpegts udp://127.0.0.1:11112
but it still gives me mpeg2-encoded video data in the payload of the video packets.
MPEG-TS supports a limited number of video codecs. However, ffmpeg's muxer will silently mux even unsupported streams as private data streams.
To mux a raw RGB stream, convert to the rgb24 pixel format and encode with the rawvideo codec:
ffmpeg -i udp://127.0.0.1:12345 -map 0:0 -map 0:1 -c copy -c:v rawvideo -pix_fmt rgb24 -f mpegts udp://127.0.0.1:11112
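On the Python side, an alternative to parsing the remuxed UDP stream is to let ffmpeg decode to raw rgb24 on stdout and read fixed-size frames from the pipe. A minimal sketch, assuming a known frame size (720x576 is a placeholder here; read the real size with ffprobe first):

import subprocess
import numpy as np

WIDTH, HEIGHT = 720, 576            # assumed resolution; query it with ffprobe in practice
FRAME_BYTES = WIDTH * HEIGHT * 3    # rgb24 = 3 bytes per pixel

# Decode only the video stream to raw rgb24 frames on stdout.
proc = subprocess.Popen(
    ["ffmpeg", "-i", "udp://127.0.0.1:12345",
     "-map", "0:0", "-f", "rawvideo", "-pix_fmt", "rgb24", "-"],
    stdout=subprocess.PIPE)

while True:
    buf = proc.stdout.read(FRAME_BYTES)
    if len(buf) < FRAME_BYTES:
        break  # end of stream
    frame = np.frombuffer(buf, dtype=np.uint8).reshape((HEIGHT, WIDTH, 3))
    # frame is now an HxWx3 RGB numpy array

The KLV stream would still have to be parsed separately, e.g. from the remuxed TS produced by the command above.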
import os
import subprocess

class FFMPEGFrames:
    def __init__(self, output):
        self.output = output

    def extract_frames(self, input, fps):
        output = input.split('/')[-1].split('.')[0]
        if not os.path.exists(self.output + output):
            os.makedirs(self.output + output)
        query = "ffmpeg -y -hwaccel cuvid -c:v h264_cuvid -resize 1024x70 -i " + input + " -vf scale_npp=format=yuv420p,hwdownload,format=yuv420p -vf fps=" + str(fps) + " -pix_fmt yuvj420p -color_range 2 -vframes 1 -y " + self.output + output + "/output%06d.jpg"
        response = subprocess.Popen(query, shell=True, stdout=subprocess.PIPE).stdout.read()
        s = str(response).encode('utf-8')
I received this error:
Impossible to convert between the formats supported by the filter 'Parsed_fps_0' and the filter 'auto_scaler_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0
Conversion failed!
You likely missed an error message:
Only '-vf fps' read, ignoring remaining -vf options: Use ',' to separate filters
Combine both of your -vf options into one. Also, consider replacing -pix_fmt yuvj420p with the format filter, so that all filtering happens within one filtergraph and you can choose exactly where in the graph the conversion occurs.
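For illustration, the combined filtergraph might look like this (untested; input.mp4 and fps=25 are placeholders, and the final format filter replaces -pix_fmt yuvj420p):
ffmpeg -y -hwaccel cuvid -c:v h264_cuvid -resize 1024x70 -i input.mp4 -vf "scale_npp=format=yuv420p,hwdownload,format=yuv420p,fps=25,format=yuvj420p" -color_range 2 -vframes 1 output%06d.jpg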
I'm trying to stream live webm video.
I tested some servers and Icecast is my pick.
With ffmpeg capturing from an IP camera and publishing to an Icecast server, I'm able to see the video in HTML5, using this command:
ffmpeg.exe -rtsp_transport tcp -i "rtsp://192.168.230.121/profile?token=media_profile1&SessionTimeout=60" -f webm -r 20 -c:v libvpx -b:v 3M -s 300x200 -acodec none -content_type video/webm -crf 63 -g 0 icecast://source:hackme#192.168.0.146:8001/test
I'm using Java and tried to do this with Xuggler, but I'm getting an error when opening the stream:
final String urlOut = "icecast://source:hackme#192.168.0.146:8001/agora.webm";
final IContainer outContainer = IContainer.make();
final IContainerFormat outContainerFormat = IContainerFormat.make();
outContainerFormat.setOutputFormat("webm", urlOut, "video/webm");

int rc = outContainer.open(urlOut, IContainer.Type.WRITE, outContainerFormat);
if (rc >= 0) {
} else {
    Logger.getLogger(WebmPublisher.class.getName()).log(Level.INFO, "Fail to open Container " + IError.make(rc));
}
Any help?
I'm getting the error -2:
Error: could not open file (../../../../../../../csrc/com/xuggle/xuggler/Container.cpp:544)
It is also very important to set the content type to video/webm, because Icecast by default sets the MIME type to audio/mpeg.
I use a segmenter to split my MPEG-2 TS file into a series of media segments for HTTP Live Streaming,
where each segment's start time follows the previous one's
(e.g. segment start times: 00:00, 00:10, 00:20, 00:30, ...)
(In Ubuntu)
The question is:
When I use ffmpeg to transcode one of the media segments (e.g. from 800 kbps to 200 kbps),
the start time of the transcoded segment is reset to 0.
E.g. if I transcode the third segment,
the segment start times change to: 00:00, 00:10, 00:00, 00:30, ...
This causes my player to freeze when it plays the transcoded segment.
Is there any solution for transcoding a media file while keeping the same start time?
I guess ffmpeg resets the PTS (presentation timestamps) of the segment,
but I don't know how to fix it...
Here is my ffmpeg command (transcoding to 250 kbps):
============================
ffmpeg -y -i sample-03.ts -f mpegts -acodec libfaac -ar 48000 -ab 64k -vcodec libx264 -b 250k -flags +loop -cmp +chroma \
-partitions +parti4x4+partp8x8+partb8x8 -subq 7 -trellis 0 -refs 0 -coder 0 -me_range 16 -keyint_min 25 \
-sc_threshold 40 -i_qfactor 0.71 -maxrate 250k -bufsize 250k -rc_eq 'blurCplx^(1-qComp)' -qcomp 0.6 \
-qmin 10 -qmax 51 -qdiff 4 -level 30 -aspect 320:240 -g 30 -async 2 sample.ts
============================
Help!
thanks
Direct packet time shifting of H.264-encoded segments
I ended up linking against the ffmpeg libavformat/libavcodec libraries to read the segments in and shift the packet time headers directly. The offset time is specified in seconds:
unsigned int tsShift = offsetTime * 90000; // MPEG-TS timestamps use a 90 kHz clock
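For example, a 10-second shift gives tsShift = 10 * 90000 = 900000 ticks.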
and further below:
do {
    double segmentTime;
    AVPacket packet;
    decodeDone = av_read_frame(pInFormatCtx, &packet);
    if (decodeDone < 0) {
        break;
    }
    if (av_dup_packet(&packet) < 0) {
        cout << "Could not duplicate packet" << endl;
        av_free_packet(&packet);
        break;
    }
    if (packet.stream_index == videoIndex && (packet.flags & AV_PKT_FLAG_KEY)) {
        segmentTime = (double)pVideoStream->pts.val * pVideoStream->time_base.num / pVideoStream->time_base.den;
    }
    else if (videoIndex < 0) {
        segmentTime = (double)pAudioStream->pts.val * pAudioStream->time_base.num / pAudioStream->time_base.den;
    }
    else {
        segmentTime = prevSegmentTime;
    }
    // cout << "before packet pts dts " << packet.pts << " " << packet.dts;
    packet.pts += tsShift;
    packet.dts += tsShift;
    // cout << " after packet pts dts " << packet.pts << " " << packet.dts << endl;
    ret = av_interleaved_write_frame(pOutFormatCtx, &packet);
    if (ret < 0) {
        cout << "Warning: Could not write frame of stream" << endl;
    }
    else if (ret > 0) {
        cout << "End of stream requested" << endl;
        av_free_packet(&packet);
        break;
    }
    av_free_packet(&packet);
} while (!decodeDone);
mpegts shifter source
I shifted the streams in a roundabout way, but the time delta is not precisely what I specified.
Here's how:
First, convert the original TS file into a raw format:
ffmpeg -i original.ts original.avi
Then apply a setpts filter and convert back to an encoded format
(this will differ depending on frame rate and desired time shift):
ffmpeg -i original.avi -filter:v 'setpts=240+PTS' -sameq -vcodec libx264 shift.mp4
Finally, segment the resulting shift.mp4:
ffmpeg -i shift.mp4 -qscale 0 -bsf:v h264_mp4toannexb -vcodec copy -an -map 0 -f segment -segment_time 10 -segment_format mpegts -y ./temp-%03d.ts
The last segment file created, temp-001.ts in my case, was time-shifted.
The problem: this method feels obtuse for merely shifting some TS packet times, and it resulted in a start time of 10.5+ seconds instead of the precise 10 seconds desired for the new TS file.
The original suggestion did not work, as described below:
ffmpeg -itoffset prevTime (rest of ts gen args) | ffmpeg -ss prevTime -i _ -t 10 stuff.ts
prevTime is the duration of all previous segments.
This was no good, as the second ffmpeg -ss call makes the output mpegts file relative to time 0 (or sometimes 1.4 s; perhaps a bug in the construction of single TS files).
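For what it's worth, newer ffmpeg builds expose an -output_ts_offset output option that may achieve the plain packet-time shift directly, without re-encoding (untested sketch):
ffmpeg -i temp-001.ts -c copy -output_ts_offset 10 shifted.ts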
IMO, you have a serialized list of segments and you want to concatenate them.
That's it, as long as the serial order of the segments is preserved through the concatenation.
The process to run on each segment entry so that it can be concatenated:
getVideoRaw to its own file
getAudioRaw to its own file
When you have split all of your segments out to raw, do this:
Concatenate the video, preserving the serialized order, so the video segments remain in the correct order in videoConCatOUT.
Concatenate the audio as above,
then mux the respective concatOUT files into a single container.
This can be scripted, following the standard example in the ffmpeg FAQ on concat;
see section '3.14.4' here.
Note the 'tail' command and the explanation about dropping line 1 from every input to the concat process except the first segment...
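For plain MPEG-TS segments specifically, the concat protocol may be able to join the files directly, without the raw intermediate step (a sketch; filenames are hypothetical):
ffmpeg -i "concat:temp-000.ts|temp-001.ts|temp-002.ts" -c copy joined.ts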
You should transcode before segmenting. When you transcode an individual segment, a new TS stream is created each time, and the TS timing data is not copied over.
Have a look at the setpts filter. That should give you plenty of control over each piece's PTS.
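For instance, to shift a segment forward by 10 seconds while re-encoding (a sketch; asetpts handles the audio side):
ffmpeg -i segment.ts -vf "setpts=PTS+10/TB" -af "asetpts=PTS+10/TB" shifted.ts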
There is a segment muxer (https://ffmpeg.org/ffmpeg-formats.html#segment_002c-stream_005fsegment_002c-ssegment) which may help with what you're after...
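A sketch of what that could look like, transcoding and segmenting in one pass (bitrates and filenames are illustrative):
ffmpeg -i sample.ts -c:v libx264 -b:v 250k -maxrate 250k -bufsize 250k -c:a aac -b:a 64k -f segment -segment_time 10 -segment_format mpegts out-%03d.ts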