How to make use of the data field from ffprobe's show_packets/show_data - ffmpeg

I have this ffprobe command reading from an RTSP feed. My aim is to extract video and audio frames from the feed along with their associated pts_time. My RTSP feed carries H.265 video and AAC audio.
I need to pass the video data to OpenCV for processing, and OpenCV takes the bgr24 format. I used to rely on ffmpeg -i rtsp://... -f rawvideo -pix_fmt bgr24 pipe:, whose stdout produces video frames as bytes. I am not sure whether that output is the same as the "data" field in ffprobe's packets. That approach is limiting when I need to work with audio and synchronize audio and video, whereas ffprobe seems to provide both audio and video data naturally in one simple command, along with a pts_time reference.
I have been trying to find a reference for show_data and how to use its data field. Any guidance would be appreciated.
ffprobe -hide_banner -loglevel fatal -i rtsp://... -show_packets -show_data -print_format json
{
    "codec_type": "video",
    "stream_index": 0,
    "pts": 28128,
    "pts_time": "0.312533",
    "dts": 28128,
    "dts_time": "0.312533",
    "duration": 3600,
    "duration_time": "0.040000",
    "size": "7937",
    "flags": "__",
    "data": "\n00000000: 0000 0102 01d0 02b9 7420 4bfb 5df1 637e ........t K.].c~\n00000010: 0000 0302 2067 e2e9 48f8 6197 3952 8432 .... g..H.a.9R.2\n00000020: a689 afb5 69ec 0ca>
},
{
    "codec_type": "audio",
    "stream_index": 1,
    "pts": 6280,
    "pts_time": "0.392500",
    "dts": 6280,
    "dts_time": "0.392500",
    "duration": 1024,
    "duration_time": "0.064000",
    "size": "258",
    "flags": "K_",
    "data": "\n00000000: 0102 9ffe 0b24 2ad1 2962 a5ca a569 0275 .....$*.)b...i.u\n00000010: 15a0 f442 2f92 95ee abca 7892 00f6 aac8 ...B/.....x.....\n00000020: ff8d f8b7 f368 5fb>
},
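For what it's worth, the data field is just an xxd-style hex dump of the compressed packet payload (still H.265/AAC here, not decoded pixels), so it is not the same thing as the rawvideo pipe's output. Below is a minimal Python sketch of turning it back into bytes; the iter_packets helper is hypothetical, and it assumes ffprobe's fixed dump layout (8-character offset, ": ", then a 39-character hex area before the ASCII column):

import json
import subprocess

def iter_packets(url):
    # Run the ffprobe command shown above and parse its JSON output.
    out = subprocess.run(
        ["ffprobe", "-hide_banner", "-loglevel", "fatal", "-i", url,
         "-show_packets", "-show_data", "-print_format", "json"],
        capture_output=True, check=True).stdout
    for pkt in json.loads(out)["packets"]:
        raw = bytearray()
        for line in pkt.get("data", "").strip("\n").split("\n"):
            # Assumed layout: "00000000: 0000 0102 ... 637e ........"
            # -> the hex area occupies characters 10..48 of each line.
            raw += bytes.fromhex(line[10:49].replace(" ", ""))
        yield pkt["codec_type"], float(pkt["pts_time"]), bytes(raw)

Since the payload is still compressed, you would still have to decode it before handing bgr24 frames to OpenCV; in practice it is often easier to keep the rawvideo pipe for video and use ffprobe's pts_time values only for audio/video alignment.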

Related

ffmpeg can take picture preview but errors on streaming video?

I'm trying to set up Homebridge on a Raspberry Pi so I can have a cheap home camera.
I was able to get everything set up alright, but when trying to edit the config for the homebridge-camera-ffmpeg plugin I keep getting errors.
I'm able to take a picture preview with the camera just fine, but video seems to throw errors.
[Logitech-C525] [fatal] Invalid input file index: 1.
[Logitech-C525] FFmpeg exited with code: 1 and signal: null (Error)
[Logitech-C525] Error occurred terminating main FFmpeg process: Error [ERR_STREAM_DESTROYED]: Cannot call write after a stream was destroyed
Here's my config:
{
    "platform": "Camera-ffmpeg",
    "cameras": [
        {
            "name": "Logitech-C525",
            "videoConfig": {
                "source": "-s 1280x720 -f video4linux2 -i /dev/video0",
                "stillImageSource": "-s 1280x720 -f video4linux2 -i /dev/video0",
                "maxStreams": 2,
                "maxWidth": 1280,
                "maxHeight": 720,
                "maxFPS": 30,
                "audio": false,
                "debug": true,
                "packetSize": 188,
                "mapvideo": "1",
                "mapaudio": "0"
            }
        }
    ]
}
Changing the source to -re -r 6 -s 1280x720 -f video4linux2 -i /dev/video0 and deleting maxFPS as well seemed to work! (The Invalid input file index: 1 error most likely came from the mapvideo/mapaudio entries, which reference a second input file that doesn't exist.)
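For reference, a working videoConfig along those lines might look like this (a sketch; the mapvideo/mapaudio entries are dropped because, with only one input, they appear to expand to a -map referencing input 1 and trigger the invalid-index error):

{
    "platform": "Camera-ffmpeg",
    "cameras": [
        {
            "name": "Logitech-C525",
            "videoConfig": {
                "source": "-re -r 6 -s 1280x720 -f video4linux2 -i /dev/video0",
                "stillImageSource": "-s 1280x720 -f video4linux2 -i /dev/video0",
                "maxStreams": 2,
                "maxWidth": 1280,
                "maxHeight": 720,
                "audio": false,
                "debug": true,
                "packetSize": 188
            }
        }
    ]
}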

Can I have a rawvideo stream in an MPEG TS container

I receive an MPEG TS container over the network (UDP). It contains two streams: an mpeg2video video stream with the yuv420p pixel format, and a data stream encoded using a proprietary KLV format.
My receiver program must be written in Python, so I can't use the FFmpeg libraries (libavformat, libavcodec) directly.
Now my problem is as follows:
I need to receive the video frames and save them as RGB images in raw NumPy arrays. I also need to parse the corresponding KLV data for each frame. There is a one-to-one relationship between video frames and KLV data units.
I thought I would use ffprobe to output the packets, including their payload data, from the incoming container, and then parse ffprobe's output to get the images and metadata:
$ ffprobe -show_packets -show_data -print_format json udp://127.0.0.1:12345 > test_video.packets.data.json
This gives me an output (in the test_video.packets.data.json file) like:
{
    "codec_type": "video",
    "stream_index": 0,
    "pts": 140400,
    "pts_time": "1.560000",
    "dts": 136800,
    "dts_time": "1.520000",
    "duration": 3600,
    "duration_time": "0.040000",
    "size": "21301",
    "pos": "3788012",
    "flags": "K_",
    "side_data_list": [
        {
            "side_data_type": "MPEGTS Stream ID"
        }
    ],
    "data": "... "
},
{
    "codec_type": "data",
    "stream_index": 1,
    "pts": 140400,
    "pts_time": "1.560000",
    "dts": 140400,
    "dts_time": "1.560000",
    "size": "850",
    "pos": "3817904",
    "flags": "K_",
    "side_data_list": [
        {
            "side_data_type": "MPEGTS Stream ID"
        }
    ],
    "data": ".... "
}
I can extract the KLV data from the data packets and parse it. However, the data from the video packets is encoded as mpeg2video with the yuv420p pixel format.
My Questions:
How can I get the raw pixel values from that MPEG-2 encoded payload?
Is it possible to use ffmpeg to receive the original container and copy it (with both streams) into a new container, but with raw video instead of mpeg2video? If yes, how? What should the command be? I tried, for example, ffmpeg -i udp://127.0.0.1:12345 -map 0:0 -codec rawvideo -pix_fmt rgb24 -map 0:1 -codec copy -f mpegts udp://127.0.0.1:11112, but it again gives me MPEG-2 encoded video data in the payload of the video packets.
MPEG-TS supports a limited set of video codecs. However, ffmpeg's muxer will silently mux even unsupported streams as private data streams.
To mux a raw RGB stream, convert to the rgb24 pixel format and encode with the rawvideo codec. (In your attempt, the second -codec copy has no stream specifier, so it overrides the earlier -codec rawvideo for all streams; ordering the options as below avoids that.)
ffmpeg -i udp://127.0.0.1:12345 -map 0:0 -map 0:1 -c copy -c:v rawvideo -pix_fmt rgb24 -f mpegts udp://127.0.0.1:11112
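For question 1, the usual pattern is to let ffmpeg do the decoding and read raw frames from its stdout instead of hex-parsing ffprobe output. A minimal Python sketch, assuming a 720x576 stream (the dimensions here are placeholders; read the real ones from ffprobe -show_streams first):

import subprocess
import numpy as np

W, H = 720, 576  # assumed; query the actual values with ffprobe -show_streams

# Decode only the video stream to raw rgb24 frames on stdout.
proc = subprocess.Popen(
    ["ffmpeg", "-i", "udp://127.0.0.1:12345", "-map", "0:0",
     "-f", "rawvideo", "-pix_fmt", "rgb24", "pipe:"],
    stdout=subprocess.PIPE)

while True:
    buf = proc.stdout.read(W * H * 3)  # exactly one rgb24 frame
    if len(buf) < W * H * 3:
        break
    frame = np.frombuffer(buf, np.uint8).reshape(H, W, 3)  # raw NumPy array

The KLV stream can then be read separately (for example from the re-muxed TS above) and matched to frames by pts_time, given the one-to-one relationship.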

Add two commands in ffmpeg

I am using two commands: one to set the frame size and the other to add a watermark in the top-left corner.
This command sets the frame size to 720x1280:
String[] complexCommandOne = {"-y" ,"-i", path,"-strict","experimental", "-vf", "scale=720:1280","-preset", "ultrafast", output};
The command below adds a watermark to the output file from above:
String[] complexCommandTwo = {"-y" ,"-i", output,"-strict","experimental", "-vf", "movie="+pngpath+" [watermark]; [in][watermark] overlay=x=10:y=10 [out]","-s", "720x1280","-r", "30", "-b", "15496k", "-vcodec", "mpeg4","-ab", "48000", "-ac", "2", "-ar", "22050","-preset", "ultrafast", fileName};
Together these commands take 3-5 minutes on a 20-second video.
I want to merge them so that the time can be reduced.
Any help? I am new to FFmpeg.
Never seen this syntax before, but it looks like it is basically just regular FFmpeg CLI syntax in array form.
So it would be this, I guess:
{"-y", "-i", input, "-strict", "experimental", "-vf", "movie="+pngpath+" [watermark]; [in] scale=720:1280 [scaled]; [scaled][watermark] overlay=x=10:y=10 [out]", "-s", "720x1280", "-r:v", "30", "-b:v", "15496k", "-c:v", "mpeg4", "-b:a", "48000", "-ac", "2", "-r:a", "22050", "-preset:v", "ultrafast", fileName}
which would normally look like this:
ffmpeg -y -i INPUTFILE -strict experimental -vf "movie=LOGOFILE [watermark]; [in] scale=720:1280 [scaled]; [scaled][watermark] overlay=x=10:y=10 [out]" -s 720x1280 -r:v 30 -b:v 15496k -c:v mpeg4 -b:a 48000 -ac 2 -ar 22050 -preset:v ultrafast OUTPUTFILE
What FFmpeg version do you have?
From 3.0 onwards you can omit "-strict", "experimental" (it was needed to enable FFmpeg's native AAC codec while it was still considered experimental).
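As an aside, on any recent FFmpeg you can also drop the movie source filter and pass the PNG as a second input to -filter_complex, which is the more common idiom today (a sketch, untested against your exact files):

ffmpeg -y -i INPUTFILE -i LOGOFILE -filter_complex "[0:v]scale=720:1280[scaled];[scaled][1:v]overlay=x=10:y=10" -r 30 -b:v 15496k -c:v mpeg4 -b:a 48000 -ac 2 -ar 22050 OUTPUTFILE

The explicit -s is dropped because scale already sets the output size.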

ffprobe select audio and video streams

I use this command to extract video information with ffprobe:
ffprobe -show_streams -of json -v quiet -i input.mp4
The information for all streams appears in the output, while I only need the information for the v:0 and a:0 streams.
I know there is a -select_streams option for stream selection, but it accepts only one argument, like -select_streams v:0.
Can I use -select_streams with two arguments, v:0 and a:0, or use it twice?
I know that I'm late to the party, but in case anybody else searches for something similar (from here):
ffprobe -show_streams -select_streams a INPUT
where a stands for audio and could of course be replaced by:
v for video;
a:1 for the audio packets belonging to audio stream with index 1;
v:99 for the video packets belonging to video stream with index 99 and so on.
Note that if you want to view 2 different streams (like audio and video) you need to run ffprobe twice.
For more goodies, although very generally written, you can also check: https://trac.ffmpeg.org/wiki/FFprobeTips
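If you are scripting this, the run-ffprobe-twice approach is easy to wrap; a minimal Python sketch using the JSON writer (the probe_streams helper is hypothetical):

import json
import subprocess

def probe_streams(path, spec):
    # One ffprobe run per stream specifier, since -select_streams
    # accepts only a single specifier at a time.
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-of", "json", "-show_streams",
         "-select_streams", spec, "-i", path],
        capture_output=True, check=True).stdout
    return json.loads(out).get("streams", [])

streams = probe_streams("input.mp4", "v:0") + probe_streams("input.mp4", "a:0")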
I had a similar scenario where I wanted to limit the output of ffprobe -show_frames to specific audio and video streams.
It seems that -select_streams cannot accept more than one stream specifier, nor can it be provided multiple times in the same ffprobe command.
Moreover, ffprobe does not accept the -map parameter like ffmpeg does; that parameter lets ffmpeg process specific streams and can be provided multiple times.
What I ended up doing is filtering the required streams with ffmpeg -map and piping the output to ffprobe -show_frames, as follows:
ffmpeg -i INPUT -map 0:0 -map 0:1 -c copy -f matroska - | ffprobe -show_frames -
Several notes:
I used -f matroska in the ffmpeg command since this muxer supports non-seekable output (stdout);
the -c copy is necessary to avoid transcoding the selected streams.
You can simply omit the -select_streams argument and use the -show_entries argument to pass the fields you would like to see in the output, like so:
ffprobe -show_streams -show_entries format=bit_rate,filename,start_time:stream=duration,width,height,display_aspect_ratio,r_frame_rate,bit_rate -of json -v quiet -i input.mp4
That should give you an output similar to this:
{
    "programs": [
    ],
    "streams": [
        {
            "width": 360,
            "height": 202,
            "display_aspect_ratio": "16:9",
            "r_frame_rate": "2997/100",
            "duration": "68.601935",
            "bit_rate": "449366",
            "disposition": {
                "default": 1,
                "dub": 0,
                "original": 0,
                "comment": 0,
                "lyrics": 0,
                "karaoke": 0,
                "forced": 0,
                "hearing_impaired": 0,
                "visual_impaired": 0,
                "clean_effects": 0,
                "attached_pic": 0
            },
            "tags": {
                "language": "eng",
                "handler_name": "VideoHandler"
            }
        },
        {
            "r_frame_rate": "0/0",
            "duration": "68.475646",
            "bit_rate": "65845",
            "disposition": {
                "default": 1,
                "dub": 0,
                "original": 0,
                "comment": 0,
                "lyrics": 0,
                "karaoke": 0,
                "forced": 0,
                "hearing_impaired": 0,
                "visual_impaired": 0,
                "clean_effects": 0,
                "attached_pic": 0
            },
            "tags": {
                "language": "eng",
                "handler_name": "SoundHandler"
            }
        }
    ],
    "format": {
        "filename": "input.mp4",
        "start_time": "0.000000",
        "bit_rate": "522013"
    }
}
From the returned JSON streams object you can then index into the stream you want, as shown here in PowerShell (assuming the output was first parsed with something like $json = ffprobe ... | ConvertFrom-Json):
PS C:\Users\User> $json.streams[0]
width : 360
height : 202
display_aspect_ratio : 16:9
r_frame_rate : 2997/100
duration : 68.601935
bit_rate : 449366
disposition : #{default=1; dub=0; original=0; comment=0; lyrics=0; karaoke=0; forced=0; hearing_impaired=0; visual_impaired=0; clean_effects=0; attached_pic=0}
tags : #{language=eng; handler_name=VideoHandler}
PS C:\Users\User> $json.streams[1]
r_frame_rate : 0/0
duration : 68.475646
bit_rate : 65845
disposition : #{default=1; dub=0; original=0; comment=0; lyrics=0; karaoke=0; forced=0; hearing_impaired=0; visual_impaired=0; clean_effects=0; attached_pic=0}
tags : #{language=eng; handler_name=SoundHandler}
There is a list of the key field names that you can get from the different types of streams here: https://trac.ffmpeg.org/wiki/FFprobeTips

ffprobe - getting file info from pipe

I've got an OGG file (it was mixed by SoX from two audio streams recorded by the Asterisk PBX) and I'm trying to get the file information with ffprobe.
When I use something like
cat %filename%.ogg | ffprobe -i -
I get invalid file info (Duration: N/A, wrong bitrate, etc.).
When I try
ffprobe -i %filename%
Everything works fine and I get file info.
What could be wrong? File content?
As of version 1.0.7 of ffprobe you can even get the output in JSON format:
ffprobe -v quiet -print_format json -show_format Ramp\ -\ Apathy.mp3
Which produces the following output:
{
    "format": {
        "filename": "Ramp - Apathy.mp3",
        "nb_streams": 2,
        "format_name": "mp3",
        "format_long_name": "MP2/3 (MPEG audio layer 2/3)",
        "start_time": "0.000000",
        "duration": "203.638856",
        "size": "4072777",
        "bit_rate": "159999",
        "tags": {
            "title": "Apathy",
            "artist": "Ramp",
            "album": "Evolution Devolution Revolution",
            "date": "1999",
            "genre": "Metal"
        }
    }
}
I think you can get the probe even when using cat, but do you have any requirement to pipe the file contents? If not, just use ffprobe without cat.
Just a quick note to say that piping input to ffprobe seems to work just fine: use a hyphen in place of the input file and you are off to the races. Fields that require seeking or a known file size (such as size and bit_rate below) may still come back as N/A on a pipe. Here is an example with a random video file on my system:
cat 01.mp4 | ffprobe -show_format -pretty -loglevel quiet -
Returns:
[FORMAT]
filename=pipe:
nb_streams=2
nb_programs=0
format_name=mov,mp4,m4a,3gp,3g2,mj2
format_long_name=QuickTime / MOV
start_time=N/A
duration=0:02:56.400000
size=N/A
bit_rate=N/A
probe_score=100
TAG:major_brand=isom
TAG:minor_version=512
TAG:compatible_brands=isomiso2mp41
TAG:creation_time=1970-01-01T00:00:00.000000Z
TAG:title=yy.mp4
TAG:encoder=Lavf52.78.3
[/FORMAT]
And you can pipe it from a remote site with curl:
curl --silent --header "Range: bytes=0-51200" https://example.com/your.mp4 | ffprobe -v quiet -show_format -of flat=s=_ -show_entries stream=height,width,nb_frames,duration,codec_name -
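The same stdin trick works from a script; a minimal Python sketch (the probe_bytes helper is hypothetical, and formats that keep their index at the end of the file, like some MP4s, may need more than the leading chunk):

import json
import subprocess

def probe_bytes(buf: bytes) -> dict:
    # Feed an in-memory buffer to ffprobe's stdin ("-" input).
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", "-"],
        input=buf, capture_output=True, check=True).stdout
    return json.loads(out)

with open("01.mp4", "rb") as f:
    info = probe_bytes(f.read(512 * 1024))  # leading chunk, as with the curl Range trick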
