I'm trying to set up Homebridge on a Raspberry Pi so I can have a cheap home camera.
I was able to get everything set up all right, but when editing the config for the homebridge-camera-ffmpeg plugin I keep getting errors.
I'm able to take a still preview with the camera just fine, but video throws errors:
[Logitech-C525] [fatal] Invalid input file index: 1.
[Logitech-C525] FFmpeg exited with code: 1 and signal: null (Error)
[Logitech-C525] Error occurred terminating main FFmpeg process: Error [ERR_STREAM_DESTROYED]: Cannot call write after a stream was destroyed
Here's my config:
{
    "platform": "Camera-ffmpeg",
    "cameras": [
        {
            "name": "Logitech-C525",
            "videoConfig": {
                "source": "-s 1280x720 -f video4linux2 -i /dev/video0",
                "stillImageSource": "-s 1280x720 -f video4linux2 -i /dev/video0",
                "maxStreams": 2,
                "maxWidth": 1280,
                "maxHeight": 720,
                "maxFPS": 30,
                "audio": false,
                "debug": true,
                "packetSize": 188,
                "mapvideo": "1",
                "mapaudio": "0"
            }
        }
    ]
}
Changing the source to -re -r 6 -s 1280x720 -f video4linux2 -i /dev/video0 and deleting maxFPS as well seemed to work!
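For reference, here's the videoConfig that works for me now; the only changes from the config above are the amended source and the removed maxFPS:

"videoConfig": {
    "source": "-re -r 6 -s 1280x720 -f video4linux2 -i /dev/video0",
    "stillImageSource": "-s 1280x720 -f video4linux2 -i /dev/video0",
    "maxStreams": 2,
    "maxWidth": 1280,
    "maxHeight": 720,
    "audio": false,
    "debug": true,
    "packetSize": 188,
    "mapvideo": "1",
    "mapaudio": "0"
}

(If the Invalid input file index: 1 message ever comes back, the "mapvideo": "1" entry may also be worth removing: it selects index 1 for ffmpeg's -map, and with a single v4l2 input only index 0 exists.)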
Related
I have this ffprobe command reading from an rtsp feed. My aim is to extract video frames and audio frames from the feed along with their associated pts_time. My rtsp feed has h265 video and aac audio.
I need to pass the video data to OpenCV for processing. OpenCV takes bgr24 format. I used to rely on ffmpeg -i rtsp:// -c copy -f rawvideo -pix_fmt bgr24 -pipe:, where stdout produces video frames in bytes. I am not sure whether this is similar to the "data" field in ffprobe packets or not. Doing so has limitations when I need to work on audio and synchronize audio and video. It seems ffprobe provides both audio and video data naturally in one simple command, along with a pts_time reference.
I have been trying to find a reference for show_data and its use of the data field. Any guidance would be appreciated.
ffprobe -hide_banner -loglevel fatal -i rtsp://... -show_packets -show_data -print_format json
{
    "codec_type": "video",
    "stream_index": 0,
    "pts": 28128,
    "pts_time": "0.312533",
    "dts": 28128,
    "dts_time": "0.312533",
    "duration": 3600,
    "duration_time": "0.040000",
    "size": "7937",
    "flags": "__",
    "data": "\n00000000: 0000 0102 01d0 02b9 7420 4bfb 5df1 637e ........t K.].c~\n00000010: 0000 0302 2067 e2e9 48f8 6197 3952 8432 .... g..H.a.9R.2\n00000020: a689 afb5 69ec 0ca>
},
{
    "codec_type": "audio",
    "stream_index": 1,
    "pts": 6280,
    "pts_time": "0.392500",
    "dts": 6280,
    "dts_time": "0.392500",
    "duration": 1024,
    "duration_time": "0.064000",
    "size": "258",
    "flags": "K_",
    "data": "\n00000000: 0102 9ffe 0b24 2ad1 2962 a5ca a569 0275 .....$*.)b...i.u\n00000010: 15a0 f442 2f92 95ee abca 7892 00f6 aac8 ...B/.....x.....\n00000020: ff8d f8b7 f368 5fb>
},
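For reference, here is a minimal Python sketch of consuming this output: it runs the ffprobe command above, turns each packet's hex dump back into raw bytes, and splits packets by codec_type. The feed URL is a placeholder, the -read_intervals bound is an assumption to keep the probe finite (for an endless live feed you would read ffprobe's stdout incrementally instead), and the payloads are still h265/aac encoded, so they must be decoded before OpenCV can use them:

import json
import subprocess

def hexdump_to_bytes(dump):
    # -show_data prints "offset: hex bytes  ascii" per line;
    # keep only the hex columns and convert them back to raw bytes.
    raw = bytearray()
    for line in dump.strip().splitlines():
        hex_part = line.split(":", 1)[1][:40]  # at most 16 bytes per line
        raw += bytes.fromhex(hex_part.replace(" ", ""))
    return bytes(raw)

out = subprocess.run(
    ["ffprobe", "-hide_banner", "-loglevel", "fatal",
     "-i", "rtsp://...",              # placeholder for the real feed URL
     "-read_intervals", "%+20",       # assumption: probe only the first 20 seconds
     "-show_packets", "-show_data", "-print_format", "json"],
    capture_output=True, check=True,
).stdout

for pkt in json.loads(out)["packets"]:
    payload = hexdump_to_bytes(pkt["data"])   # still encoded h265/aac bytes
    if pkt["codec_type"] == "video":
        pass  # decode first; pkt["pts_time"] is the timing reference
    elif pkt["codec_type"] == "audio":
        pass  # aac frame with the same pts_time reference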
I want to record 2 webcams using ffmpeg. I have a simple Python script, but it doesn't work when I run the 2 subprocesses at the same time.
import datetime
import os
import subprocess

ROOT_PATH = os.getenv("ROOT_PATH", "/home/pi")
ENCODING = os.getenv("ENCODING", "copy")

# One timestamped directory per camera per run.
new_dir = datetime.datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
RECORDINGS_PATH1 = os.getenv("RECORDINGS_PATH1", "RecordingsCam1")
RECORDINGS_PATH2 = os.getenv("RECORDINGS_PATH2", "RecordingsCam2")
recording_path1 = os.path.join(ROOT_PATH, RECORDINGS_PATH1, new_dir)
recording_path2 = os.path.join(ROOT_PATH, RECORDINGS_PATH2, new_dir)
os.makedirs(recording_path1)
os.makedirs(recording_path2)

# Record each camera in 30-second segments (%03d -> 000.avi, 001.avi, ...).
segments_path1 = os.path.join(recording_path1, "%03d.avi")
segments_path2 = os.path.join(recording_path2, "%03d.avi")
record1 = "ffmpeg -nostdin -i /dev/video0 -c:v {} -an -sn -dn -segment_time 30 -f segment {}".format(ENCODING, segments_path1)
record2 = "ffmpeg -nostdin -i /dev/video2 -c:v {} -an -sn -dn -segment_time 30 -f segment {}".format(ENCODING, segments_path2)
subprocess.Popen(record1, shell=True)
subprocess.Popen(record2, shell=True)
Also, I tried capturing the 2 sources side by side, but it gives the error: `Filtering and streamcopy cannot be used together.`
This has nothing to do with running two processes at the same time. FFmpeg clearly states that it cannot find /dev/video0 and /dev/video2. It seems your video cameras are not detected. You can check this with the following command, which lists all devices with video in their name:
$ ls /dev/ | grep video
If video0 and video2 do not exist, it's clear why FFmpeg gives such an error. If they do exist, I do not know how to resolve this; you may try running the FFmpeg commands directly in a terminal.
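As for the side-by-side error in the question: stacking is done with a filter (e.g. hstack), and filtering requires re-encoding, so it cannot be combined with -c copy; that is exactly what "Filtering and streamcopy cannot be used together" means. A sketch of the idea, assuming both cameras deliver the same frame height (which hstack requires) and an illustrative output name:

ffmpeg -f v4l2 -i /dev/video0 -f v4l2 -i /dev/video2 -filter_complex hstack=inputs=2 -c:v libx264 side_by_side.mp4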
I receive an MPEG-TS container over the network (UDP). It contains two streams: an mpeg2video video stream with yuv420p pixel format and a data stream encoded using a proprietary KLV format.
My receiver program must be in Python, so I can't use the FFmpeg libraries (libavformat, libavcodec) directly.
Now my problem is as follows:
I need to receive the video frames and save them as RGB images in raw numpy arrays. I also need to parse the corresponding KLV data for each frame. There is a one-to-one relationship between video frames and KLV data units.
I thought I would use ffprobe to output the packets, including their payload data, from the incoming container, and then parse ffprobe's output to get the images and metadata:
$ ffprobe -show_packets -show_data -print_format json udp://127.0.0.1:12345 > test_video.packets.data.json
This gives me output like the following (in the test_video.packets.data.json file):
{
    "codec_type": "video",
    "stream_index": 0,
    "pts": 140400,
    "pts_time": "1.560000",
    "dts": 136800,
    "dts_time": "1.520000",
    "duration": 3600,
    "duration_time": "0.040000",
    "size": "21301",
    "pos": "3788012",
    "flags": "K_",
    "side_data_list": [
        {
            "side_data_type": "MPEGTS Stream ID"
        }
    ],
    "data": "... "
},
{
    "codec_type": "data",
    "stream_index": 1,
    "pts": 140400,
    "pts_time": "1.560000",
    "dts": 140400,
    "dts_time": "1.560000",
    "size": "850",
    "pos": "3817904",
    "flags": "K_",
    "side_data_list": [
        {
            "side_data_type": "MPEGTS Stream ID"
        }
    ],
    "data": ".... "
}
I can extract the KLV data from the data packets and parse it. However, the data from the video packets is encoded as mpeg2video with yuv420p pixel format.
My Questions:
How can I get the raw pixel values from that mpeg2-encoded payload?
Is it possible to use ffmpeg to receive the original container and copy it (with both streams) into a new container, but with raw video instead of mpeg2 video? If yes, how? What should the command be? I tried, for example, ffmpeg -i udp://127.0.0.1:12345 -map 0:0 -codec rawvideo -pix_fmt rgb24 -map 0:1 -codec copy -f mpegts udp://127.0.0.1:11112, but it again gives me mpeg2-encoded video data in the payload of the video packets.
MPEG-TS supports a limited number of video codecs. However, ffmpeg's muxer will silently mux even unsupported streams as private data streams.
To mux a raw RGB stream, convert to the rgb24 pixel format and encode using the rawvideo codec:
ffmpeg -i udp://127.0.0.1:12345 -map 0:0 -map 0:1 -c copy -c:v rawvideo -pix_fmt rgb24 -f mpegts udp://127.0.0.1:11112
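If the end goal is a numpy array per frame, an alternative to parsing packet dumps is to let ffmpeg decode the video stream to raw rgb24 on a pipe and reshape the bytes in Python. A minimal sketch; WIDTH and HEIGHT are assumptions to be replaced with the real dimensions reported by ffprobe, and the KLV stream would still have to be read separately (e.g. via the ffprobe approach above) and matched to frames by pts:

import subprocess
import numpy as np

WIDTH, HEIGHT = 720, 576            # assumed; use the real stream dimensions
FRAME_SIZE = WIDTH * HEIGHT * 3     # rgb24 = 3 bytes per pixel

# Decode only the video stream (0:0) to raw rgb24 frames on stdout.
proc = subprocess.Popen(
    ["ffmpeg", "-i", "udp://127.0.0.1:12345",
     "-map", "0:0", "-f", "rawvideo", "-pix_fmt", "rgb24", "pipe:1"],
    stdout=subprocess.PIPE,
)

while True:
    buf = proc.stdout.read(FRAME_SIZE)
    if len(buf) < FRAME_SIZE:
        break  # end of stream
    frame = np.frombuffer(buf, dtype=np.uint8).reshape(HEIGHT, WIDTH, 3)
    # frame is now an HxWx3 RGB numpy array ready for processing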
I'm trying to stream live webm video.
I tested some servers, and Icecast is my pick.
With ffmpeg capturing from an IP camera and publishing to the Icecast server, I'm able to see the video in HTML5 using this command:
ffmpeg.exe -rtsp_transport tcp -i "rtsp://192.168.230.121/profile?token=media_profile1&SessionTimeout=60" -f webm -r 20 -c:v libvpx -b:v 3M -s 300x200 -acodec none -content_type video/webm -crf 63 -g 0 icecast://source:hackme#192.168.0.146:8001/test
I'm using Java and tried to do this with Xuggler, but I'm getting an error when opening the stream:
final String urlOut = "icecast://source:hackme#192.168.0.146:8001/agora.webm";
final IContainer outContainer = IContainer.make();
final IContainerFormat outContainerFormat = IContainerFormat.make();
// The third argument sets the content type sent to Icecast.
outContainerFormat.setOutputFormat("webm", urlOut, "video/webm");
int rc = outContainer.open(urlOut, IContainer.Type.WRITE, outContainerFormat);
if (rc >= 0) {
    // container opened successfully
} else {
    Logger.getLogger(WebmPublisher.class.getName()).log(Level.INFO, "Failed to open container " + IError.make(rc));
}
Any help? I'm getting the error -2:
Error: could not open file (../../../../../../../csrc/com/xuggle/xuggler/Container.cpp:544)
It's also very important to set the content type to video/webm, because Icecast by default sets the MIME type to audio/mpeg.
I've got an ogg file (it was mixed by sox from two audio streams recorded by the PBX Asterisk) and I'm trying to get file information with ffprobe.
When I use something like
cat %filename%.ogg | ffprobe -i -
I get invalid file info (Duration: N/A, wrong bitrate, etc.).
When I try
ffprobe -i %filename%
Everything works fine and I get file info.
What could be wrong? File content?
As of version 1.0.7 of ffprobe, you can even get the output in JSON format:
ffprobe -v quiet -print_format json -show_format Ramp\ -\ Apathy.mp3
which produces the following output:
{
    "format": {
        "filename": "Ramp - Apathy.mp3",
        "nb_streams": 2,
        "format_name": "mp3",
        "format_long_name": "MP2/3 (MPEG audio layer 2/3)",
        "start_time": "0.000000",
        "duration": "203.638856",
        "size": "4072777",
        "bit_rate": "159999",
        "tags": {
            "title": "Apathy",
            "artist": "Ramp",
            "album": "Evolution Devolution Revolution",
            "date": "1999",
            "genre": "Metal"
        }
    }
}
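The JSON form is also easy to consume from a script; a small Python sketch, using the same file name as the example above:

import json
import subprocess

out = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_format", "Ramp - Apathy.mp3"],
    capture_output=True, check=True,
).stdout

fmt = json.loads(out)["format"]
print(fmt["duration"], fmt["bit_rate"], fmt["tags"]["title"])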
I don't think you can reliably get the probe using cat: a pipe is not seekable, so for formats like Ogg ffprobe cannot seek to the end of the file to determine the duration, which is likely why you see Duration: N/A. Do you have any requirement to cat the file contents? If not, just use ffprobe without cat.
Just a quick note to say that piping input to ffprobe seems to work just fine. Use a hyphen in place of the input file and you are off to the races. Here is an example with a random video file on my system:
cat 01.mp4 | ffprobe -show_format -pretty -loglevel quiet -
Returns:
[FORMAT]
filename=pipe:
nb_streams=2
nb_programs=0
format_name=mov,mp4,m4a,3gp,3g2,mj2
format_long_name=QuickTime / MOV
start_time=N/A
duration=0:02:56.400000
size=N/A
bit_rate=N/A
probe_score=100
TAG:major_brand=isom
TAG:minor_version=512
TAG:compatible_brands=isomiso2mp41
TAG:creation_time=1970-01-01T00:00:00.000000Z
TAG:title=yy.mp4
TAG:encoder=Lavf52.78.3
[/FORMAT]
And you can pipe it from a remote site with curl:
curl --silent --header "Range: bytes=0-51200" https://example.com/your.mp4 | ffprobe -v quiet -show_format -of flat=s=_ -show_entries stream=height,width,nb_frames,duration,codec_name -
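The same stdin trick works from Python if you would rather not shell out through cat; a sketch, reusing the 01.mp4 sample file from above:

import subprocess

# Feed the file to ffprobe's stdin; "-" tells ffprobe to read from the pipe.
with open("01.mp4", "rb") as f:
    out = subprocess.run(
        ["ffprobe", "-show_format", "-pretty", "-loglevel", "quiet", "-"],
        stdin=f, capture_output=True, check=True,
    ).stdout
print(out.decode())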