Can I have a rawvideo stream in an MPEG TS container - ffmpeg

I receive an MPEG TS container over the network (UDP). It contains two streams: an mpeg2video video stream with the yuv420p pixel format and a data stream encoded in a proprietary KLV format.
My receiver program must be in Python, so I can't use the FFmpeg libraries (like AVFormat and AVCodec) directly.
Now my problem is as follows:
I need to receive the video frames and save them as RGB images in raw numpy arrays. For each frame I also need to parse the corresponding KLV data. There is a one-to-one relationship between video frames and KLV data units.
I thought I would use ffprobe to output the packets, including their payload data, from the incoming container and then parse ffprobe's output to get the images and metadata:
$ ffprobe -show_packets -show_data -print_format json udp://127.0.0.1:12345 > test_video.packets.data.json
This gives me output (in the test_video.packets.data.json file) like:
{
    "codec_type": "video",
    "stream_index": 0,
    "pts": 140400,
    "pts_time": "1.560000",
    "dts": 136800,
    "dts_time": "1.520000",
    "duration": 3600,
    "duration_time": "0.040000",
    "size": "21301",
    "pos": "3788012",
    "flags": "K_",
    "side_data_list": [
        {
            "side_data_type": "MPEGTS Stream ID"
        }
    ],
    "data": "... "
},
{
    "codec_type": "data",
    "stream_index": 1,
    "pts": 140400,
    "pts_time": "1.560000",
    "dts": 140400,
    "dts_time": "1.560000",
    "size": "850",
    "pos": "3817904",
    "flags": "K_",
    "side_data_list": [
        {
            "side_data_type": "MPEGTS Stream ID"
        }
    ],
    "data": ".... "
}
I can extract the KLV data from the data packets and parse it. However, the data in the video packets is encoded as mpeg2video with the yuv420p pixel format.
My Questions:
How can I get the raw pixel values from that mpeg2 encoded payload?
Is it possible to use ffmpeg to receive the original container and copy it (with both streams) into a new container, but with raw video instead of mpeg2 video? If yes, how? What should the command be? I tried, for example: ffmpeg -i udp://127.0.0.1:12345 -map 0:0 -codec rawvideo -pix_fmt rgb24 -map 0:1 -codec copy -f mpegts udp://127.0.0.1:11112, but the payload of the video packets is still mpeg2-encoded.

MPEG-TS supports a limited number of video codecs. However, ffmpeg's muxer will silently mux even unsupported streams as private data streams.
To mux a raw RGB stream, convert to the rgb24 pixel format and encode with the rawvideo codec.
ffmpeg -i udp://127.0.0.1:12345 -map 0:0 -map 0:1 -c copy -c:v rawvideo -pix_fmt rgb24 -f mpegts udp://127.0.0.1:11112
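If the end goal is decoded pixels in Python, another option is to skip the second container entirely and have ffmpeg decode the video stream to raw rgb24 on stdout, reading fixed-size frames with numpy. A minimal sketch, assuming a 1280x720 frame size (adjust to the actual stream); it loses the per-packet pairing with the KLV data, which still has to be read separately (e.g. with ffprobe as above) and matched by pts:

import subprocess
import numpy as np

# Assumed frame size -- adjust to the actual video stream.
WIDTH, HEIGHT = 1280, 720
FRAME_BYTES = WIDTH * HEIGHT * 3  # rgb24 = 3 bytes per pixel

# ffmpeg decodes the incoming TS and writes raw rgb24 frames to stdout.
proc = subprocess.Popen(
    ["ffmpeg", "-i", "udp://127.0.0.1:12345",
     "-map", "0:0", "-f", "rawvideo", "-pix_fmt", "rgb24", "pipe:1"],
    stdout=subprocess.PIPE)

while True:
    raw = proc.stdout.read(FRAME_BYTES)
    if len(raw) < FRAME_BYTES:
        break  # stream ended
    frame = np.frombuffer(raw, dtype=np.uint8).reshape((HEIGHT, WIDTH, 3))
    # ... process the RGB frame here ...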

Related

How to make use of data field from ffprobe's show_packets show_data

I have this ffprobe command reading from an RTSP feed. My aim is to extract the video frames and audio frames from the feed along with their associated pts_time. My RTSP feed has h265 video and aac audio.
I need to pass the video data to OpenCV for processing, and OpenCV takes the bgr24 format. I used to rely on ffmpeg -i rtsp:// -c copy -f rawvideo -pix_fmt bgr24 -pipe: where its stdout produces video frames as bytes. I am not sure whether this is similar to the "data" in ffprobe's packets or not. That approach is limiting when I need to work on audio and synchronize audio and video. It seems ffprobe naturally provides both audio and video data in one simple command, along with a pts_time reference.
I have been trying to find a reference for show_data and how to use its data field. It would be appreciated if anyone could provide guidance on this.
ffprobe -hide_banner -loglevel fatal -i rtsp://... -show_packets -show_data -print_format json
{
    "codec_type": "video",
    "stream_index": 0,
    "pts": 28128,
    "pts_time": "0.312533",
    "dts": 28128,
    "dts_time": "0.312533",
    "duration": 3600,
    "duration_time": "0.040000",
    "size": "7937",
    "flags": "__",
    "data": "\n00000000: 0000 0102 01d0 02b9 7420 4bfb 5df1 637e ........t K.].c~\n00000010: 0000 0302 2067 e2e9 48f8 6197 3952 8432 .... g..H.a.9R.2\n00000020: a689 afb5 69ec 0ca>
},
{
    "codec_type": "audio",
    "stream_index": 1,
    "pts": 6280,
    "pts_time": "0.392500",
    "dts": 6280,
    "dts_time": "0.392500",
    "duration": 1024,
    "duration_time": "0.064000",
    "size": "258",
    "flags": "K_",
    "data": "\n00000000: 0102 9ffe 0b24 2ad1 2962 a5ca a569 0275 .....$*.)b...i.u\n00000010: 15a0 f442 2f92 95ee abca 7892 00f6 aac8 ...B/.....x.....\n00000020: ff8d f8b7 f368 5fb>
},
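For reference, the "data" value is ffprobe's hex dump of the packet payload: an offset column, sixteen bytes per line as eight groups of four hex digits, then an ASCII rendering. To use the payload it has to be converted back into raw bytes. A minimal sketch in Python, assuming the fixed-width layout shown in the excerpt above and the top-level "packets" array produced by -print_format json (hexdump_to_bytes is just an illustrative helper name):

import json

def hexdump_to_bytes(dump: str) -> bytes:
    """Convert ffprobe's -show_data hex dump back into raw bytes."""
    out = bytearray()
    for line in dump.splitlines():
        if ":" not in line:
            continue  # skip the empty leading line
        # Columns 10..48 hold the hex groups; the ASCII rendering follows.
        out.extend(bytes.fromhex(line[10:49].replace(" ", "")))
    return bytes(out)

# Usage: load ffprobe's JSON output and recover each packet's payload.
with open("packets.json") as f:
    packets = json.load(f)["packets"]
for pkt in packets:
    payload = hexdump_to_bytes(pkt["data"])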

ffmpeg xfade for (complex filter or select filter)

I need to trim a video and merge the cuts into one file, with a crossfade or another smooth transition between each cut. How can I implement xfade or another smooth ffmpeg transition?
I did read this from multiple sources:
Merging multiple video files with ffmpeg and xfade filter
But I still fail to produce working code.
Below are the example command and the video sections that I need to trim.
ffmpeg -y -i example.mp4 -filter_complex
"[0:v]trim=start=0.1:end=0.7333333333333333,setpts=PTS-STARTPTS[v0];
[0:v]trim=start=1.2333333333333334:end=4.8,setpts=PTS-STARTPTS[v0];
[0:v]trim=start=4.966666666666667:end=10.466666666666667,setpts=PTS-STARTPTS[v0];
[0:v]trim=start=10.6:end=13.066666666666666,setpts=PTS-STARTPTS[v0];
[0:v]trim=start=13.733333333333333:end=17.333333333333332,setpts=PTS-STARTPTS[v0];
[0:v]trim=start=39.9:end=40.56666666666667,setpts=PTS-STARTPTS[v0];
[0:a]atrim=start=0.1:end=0.7333333333333333,asetpts=PTS-STARTPTS[a0];
[0:a]atrim=start=1.2333333333333334:end=4.8,asetpts=PTS-STARTPTS[a1];
[0:a]atrim=start=4.966666666666667:end=10.466666666666667,asetpts=PTS-STARTPTS[a2];
[0:a]atrim=start=10.6:end=13.066666666666666,asetpts=PTS-STARTPTS[a3];
[0:a]atrim=start=13.733333333333333:end=17.333333333333332,asetpts=PTS-STARTPTS[a4];
[0:a]atrim=start=39.9:end=40.56666666666667,asetpts=PTS-STARTPTS[a5];
[v0][a0][v1][a1][v2][a2][v3][a3][v4][a4][v5][a5]concat=n=6:v=1:a=1[out]"
-map "[out]" example_COMPLEX.mp4
I generated this command with the xfade effect:
ffmpeg -y -i example.mp4 -filter_complex
"[0:v]trim=start=0.1:end=0.7333333333333333,setpts=PTS-STARTPTS[v0];
[0:v]trim=start=1.2333333333333334:end=4.8,setpts=PTS-STARTPTS[v1];
[0:v]trim=start=4.966666666666667:end=10.466666666666667,setpts=PTS-STARTPTS[v2];
[0:v]trim=start=10.6:end=13.066666666666666,setpts=PTS-STARTPTS[v3];
[0:v]trim=start=13.733333333333333:end=17.333333333333332,setpts=PTS-STARTPTS[v4];
[0:v]trim=start=39.9:end=40.56666666666667,setpts=PTS-STARTPTS[v5];
[0:a]atrim=start=0.1:end=0.7333333333333333,asetpts=PTS-STARTPTS[a0];
[0:a]atrim=start=1.2333333333333334:end=4.8,asetpts=PTS-STARTPTS[a1];
[0:a]atrim=start=4.966666666666667:end=10.466666666666667,asetpts=PTS-STARTPTS[a2];
[0:a]atrim=start=10.6:end=13.066666666666666,asetpts=PTS-STARTPTS[a3];
[0:a]atrim=start=13.733333333333333:end=17.333333333333332,asetpts=PTS-STARTPTS[a4];
[0:a]atrim=start=39.9:end=40.56666666666667,asetpts=PTS-STARTPTS[a5];
[v0][v1]xfade=transition=fade:duration=0.5:offset=8.2[x1];
[x1][v2]xfade=transition=fade:duration=0.5:offset=8.2[x2];
[x2][v3]xfade=transition=fade:duration=0.5:offset=10.166666666666666[x3];
[x3][v4]xfade=transition=fade:duration=0.5:offset=13.266666666666666[x4];
[x4][v5]xfade=transition=fade:duration=0.5:offset=13.433333333333337,format=yuv420p[video];
[a0] [a1] [a2] [a3] [a4] [a5]concat=n=6:v=1:a=1 [out]"
-map "[video]" -map "[out]" example_COMPLEX.mp4
But there is an error message
[Parsed_asetpts_13 # 0000014db55ea140] Media type mismatch between the 'Parsed_asetpts_13' filter output pad 0 (audio) and the 'Parsed_concat_30' filter input pad 0 (video)
[AVFilterGraph # 0000014db5414580] Cannot create the link asetpts:0 -> concat:0
Error initializing complex filters.
Invalid argument
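The mismatch comes from the last filtergraph line: concat=n=6:v=1:a=1 expects six video/audio input pairs, but only the six audio labels are fed to it, so its first (video) input pad receives audio. Since the video is already joined by the xfade chain, one way to clear this particular error is to concatenate only the audio, e.g. (a sketch; it adds no audio crossfades, and the xfade offsets still have to match the trimmed segment lengths):
[a0][a1][a2][a3][a4][a5]concat=n=6:v=0:a=1[out]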

read Frames from Video using FFMPEG gpu python

import os
import subprocess


class FFMPEGFrames:
    def __init__(self, output):
        self.output = output

    def extract_frames(self, input, fps):
        output = input.split('/')[-1].split('.')[0]

        if not os.path.exists(self.output + output):
            os.makedirs(self.output + output)

        query = "ffmpeg -y -hwaccel cuvid -c:v h264_cuvid -resize 1024x70 -i " + input + " -vf scale_npp=format=yuv420p,hwdownload,format=yuv420p -vf fps=" + str(fps) + " -pix_fmt yuvj420p -color_range 2 -vframes 1 -y " + self.output + output + "/output%06d.jpg"
        response = subprocess.Popen(query, shell=True, stdout=subprocess.PIPE).stdout.read()
        s = str(response).encode('utf-8')
I received this error:
Impossible to convert between the formats supported by the filter 'Parsed_fps_0' and the filter 'auto_scaler_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0
Conversion failed!
You likely missed an error message:
Only '-vf fps' read, ignoring remaining -vf options: Use ',' to separate filters
Combine both of your -vf into one. Also, consider replacing -pix_fmt yuvj420p with the format filter so all filtering can be within one filtergraph and you can choose exactly where that occurs within the filtergraph.
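A minimal sketch of the command with the two -vf options merged into one filtergraph (INPUT and FPS are placeholders; keeping -resize, -color_range and -vframes 1 exactly as in the original is an assumption):
ffmpeg -y -hwaccel cuvid -c:v h264_cuvid -resize 1024x70 -i INPUT -vf "scale_npp=format=yuv420p,hwdownload,format=yuv420p,fps=FPS,format=yuvj420p" -color_range 2 -vframes 1 output%06d.jpg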

Add two commands in ffmpeg

I am using two commands: one to set the size of the frames and the other to add a watermark to the top-left corner.
This command sets the frame size to 720x1280:
String[] complexCommandOne = {"-y" ,"-i", path,"-strict","experimental", "-vf", "scale=720:1280","-preset", "ultrafast", output};
The command below adds a watermark to the output file from above:
String[] complexCommandTwo = {"-y" ,"-i", output,"-strict","experimental", "-vf", "movie="+pngpath+" [watermark]; [in][watermark] overlay=x=10:y=10 [out]","-s", "720x1280","-r", "30", "-b", "15496k", "-vcodec", "mpeg4","-ab", "48000", "-ac", "2", "-ar", "22050","-preset", "ultrafast", fileName};
Both of these commands together take 3-5 minutes on a 20-second video.
I want to merge them so that the time can be reduced.
Any help? I am new to FFmpeg.
I've never seen this syntax before, but it looks like it just uses regular FFmpeg CLI syntax.
So it would be this, I guess:
{"-y", "-i", input, "-strict", "experimental", "-vf", "movie="+pngpath+" [watermark]; [in] scale=720:1280 [scaled]; [scaled][watermark] overlay=x=10:y=10 [out]", "-s", "720x1280", "-r:v", "30", "-b:v", "15496k", "-c:v", "mpeg4", "-b:a", "48000", "-ac", "2", "-r:a", "22050", "-preset:v", "ultrafast", fileName}
which would normally look like this:
ffmpeg -y -i INPUTFILE -strict experimental -vf "movie=LOGOFILE [watermark]; [in] scale=720:1280 [scaled]; [scaled][watermark] overlay=x=10:y=10 [out]" -s 720x1280 -r:v 30 -b:v 15496k -c:v mpeg4 -b:a 48000 -ac 2 -ar 22050 -preset:v ultrafast OUTPUTFILE
What FFmpeg version do you have?
Because with versions above 3.0 you can omit "-strict", "experimental" (it was needed to enable FFmpeg's own AAC audio codec when it was still considered an experimental feature).

Xuggler can't open IContainer of icecast server [Webm live video stream]

I'm trying to stream live WebM video.
I tested some servers and Icecast is my pick.
With ffmpeg capturing from an IP camera and publishing to the Icecast server, I'm able to see the video in HTML5
using this command:
ffmpeg.exe -rtsp_transport tcp -i "rtsp://192.168.230.121/profile?token=media_profile1&SessionTimeout=60" -f webm -r 20 -c:v libvpx -b:v 3M -s 300x200 -acodec none -content_type video/webm -crf 63 -g 0 icecast://source:hackme#192.168.0.146:8001/test
I'm using Java and tried to do the same with Xuggler, but I'm getting an error when opening the stream:
final String urlOut = "icecast://source:hackme#192.168.0.146:8001/agora.webm";
final IContainer outContainer = IContainer.make();
final IContainerFormat outContainerFormat = IContainerFormat.make();
outContainerFormat.setOutputFormat("webm", urlOut, "video/webm");
int rc = outContainer.open(urlOut, IContainer.Type.WRITE, outContainerFormat);
if (rc >= 0) {
} else {
    Logger.getLogger(WebmPublisher.class.getName()).log(Level.INFO, "Fail to open Container " + IError.make(rc));
}
Any help?
I'm getting the error -2:
Error: could not open file (../../../../../../../csrc/com/xuggle/xuggler/Container.cpp:544)
It's also very important to set the content type to video/webm because Icecast by default sets the MIME type to audio/mpeg.
