Use ffmpeg to record 2 webcams on a Raspberry Pi

I want to record 2 webcams using ffmpeg. I have a simple Python script, but it doesn't work when I run the 2 subprocesses at the same time.
import datetime
import os
import subprocess

ROOT_PATH = os.getenv("ROOT_PATH", "/home/pi")
ENCODING = os.getenv("ENCODING", "copy")
new_dir = datetime.datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
RECORDINGS_PATH1 = os.getenv("RECORDINGS_PATH", "RecordingsCam1")
RECORDINGS_PATH2 = os.getenv("RECORDINGS_PATH", "RecordingsCam2")
recording_path1 = os.path.join(ROOT_PATH, RECORDINGS_PATH1, new_dir)
recording_path2 = os.path.join(ROOT_PATH, RECORDINGS_PATH2, new_dir)
os.mkdir(recording_path1)
os.mkdir(recording_path2)
# Segment filename patterns, e.g. 000.avi, 001.avi, ...
segments_path1 = os.path.join(recording_path1, "%03d.avi")
segments_path2 = os.path.join(recording_path2, "%03d.avi")
# One ffmpeg process per camera, each writing 30-second segments.
record1 = "ffmpeg -nostdin -i /dev/video0 -c:v {} -an -sn -dn -segment_time 30 -f segment {}".format(ENCODING, segments_path1)
record2 = "ffmpeg -nostdin -i /dev/video2 -c:v {} -an -sn -dn -segment_time 30 -f segment {}".format(ENCODING, segments_path2)
subprocess.Popen(record1, shell=True)
subprocess.Popen(record2, shell=True)
Also, I tried capturing the 2 sources side by side, but it gives the error: `Filtering and streamcopy cannot be used together`.

This has nothing to do with running two processes at the same time. FFmpeg clearly states that it cannot find /dev/video0 and /dev/video2, so it seems your video cameras are not being detected. You can check this with the following command:
$ ls /dev/ | grep video
This will list all devices that have "video" in their name. If video0 and video2 do not exist, it is clear why FFmpeg gives such an error. If they do exist, I do not know how to resolve this; you may try running the FFmpeg commands directly in a terminal.
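As for the side-by-side error: `Filtering and streamcopy cannot be used together` means a filter such as hstack cannot be combined with `-c:v copy`, so a side-by-side capture has to re-encode. Below is a rough, untested sketch of that variant (it assumes both cameras are v4l2 devices delivering the same frame height, and that libx264 re-encoding is acceptable on the Pi):
import subprocess

# Sketch only: one ffmpeg process grabs both cameras and stacks them side by
# side. hstack is a filter, so the video must be re-encoded; "-c:v copy"
# cannot be used here (that is exactly what the quoted error says).
side_by_side = (
    "ffmpeg -nostdin "
    "-f v4l2 -i /dev/video0 "
    "-f v4l2 -i /dev/video2 "
    "-filter_complex hstack=inputs=2 "   # assumes both inputs have the same height
    "-c:v libx264 -preset ultrafast "    # re-encode instead of stream copy
    "-an side_by_side.mp4"
)
subprocess.Popen(side_by_side, shell=True)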

Related

ffmpeg downloads parts of YouTube videos, but some of them have a black screen for a few seconds

So I'm using ffmpeg to download some YouTube videos with specific start and stop times. My code looks like os.system("ffmpeg -i $(youtube-dl --no-check-certificate -f 18 --get-url %s) -ss %s -to %s -c:v copy -c:a copy %s" % (l, y, z, w)), where the variables are the name of the file, the URL, and the start and stop times. Some of the videos come out just fine, others have a black screen and only a portion of the video, and a very few have just audio. My time is formatted as x.y, where x is the seconds and y the milliseconds. Is this the issue, so that I need to transform it to the 00:00:00.0 format? Any help is appreciated.
os.system("ffmpeg -ss %s -i $(youtube-dl --no-check-certificate -f 18 --get-url %s) -t %s -c:v copy -c:a copy %s"% (l, y, z, w))
-ss start the video with 00:00:00.0000 format
-t the duration of scene in seconds
example if you want to extract a scene from second 30, and a duration of 3 seconds
os.system("ffmpeg -ss 00:00:30.0000 -i $(youtube-dl --no-check-certificate -f 18 --get-url %s) -t 3 -c:v copy -c:a copy %s"% (l, y, z, w))
Try this with Python :)
Adding '-c:a', 'copy' to the ffmpeg command line (as in the snippet below) helps with the black picture / frames / screen in the video.
import os
import subprocess
import youtube_dl

def ydl_info():
    ydl_opts = {
        'format': 'bestvideo[height<=720][tbr>1][filesize>0.05M]',
        'outtmpl': '%(id)s.%(ext)s',  # Template for output names.
    }
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        info = ydl.extract_info(
            'https://www.youtube.com/watch?v=HZhWTjnIn78',
            download=False  # False: just extract the info, do not download
        )
    return info

def ffmpeg_cut():
    info = ydl_info()
    # If there are several media formats, take the media URL of the last format.
    URL = info['formats'][-1]['url']
    START = args.start_time            # `args` comes from argparse (not shown here)
    END = '00:00:03.00'
    OUTPUT = os.path.join(args.output, args.name + args.format)
    print('Output:', OUTPUT)
    # ffmpeg -ss 00:00:15.00 -i "OUTPUT-OF-FIRST URL" -t 00:00:10.00 -c copy out.mp4
    # cmd = "ffmpeg -ss {} -i {} -t {} -c copy {}".format(START, URL, END, OUTPUT)
    subprocess.call([
        'ffmpeg',
        '-i', URL,
        '-ss', START,
        '-t', END,
        '-c:a', 'copy', OUTPUT,  # '-c:a copy' stream-copies the audio; the video is re-encoded
    ])
    return None
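The snippet above assumes an `args` object (with `start_time`, `output`, `name` and `format`) that the answer does not show. A minimal sketch of how it might be built with argparse, plus a call to the function:
import argparse

# Hypothetical argument parsing; the original answer does not include it.
parser = argparse.ArgumentParser()
parser.add_argument('--start-time', dest='start_time', default='00:00:15.00')
parser.add_argument('--output', default='.')
parser.add_argument('--name', default='clip')
parser.add_argument('--format', default='.mp4')
args = parser.parse_args()

ffmpeg_cut()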

What are the supported ffmpeg rtp_mpegts muxer options? (mpegts muxer options are ignored)

I created a UDP stream with -f mpegts and some options like -mpegts_transport_stream_id.
I received the stream with "StreamXpert - Real-time stream analyzer", which shows that all the options are present in the output. See my ffmpeg parameters and the StreamXpert output at the end.
The same Muxer options seem to be ignored with -f rtp_mpegts.
I have tried to use -f mpegts and pipe it to -f rtp_mpegts like so:
ffmpeg -i ... -f mpegts pipe: | ffmpeg -i pipe: -c copy -f rtp_mpegts "rtp://239.1.1.9:1234?pkt_size=1316"
The options are still ignored.
The ticket "support options for MPEGTS muxer when using RTP_MPEGTS" also notes the ignored options. Furthermore, in a comment on that ticket, "thovo" gives an analysis and suggests a solution.
Obviously the problem still exists. Has anybody found a workaround for this?
My additional question: I have not questioned whether my project really needs RTP in the first place. Maybe my coworker didn't know better and requested RTP when UDP would have been sufficient.
The aim was to receive the RTP stream with a TV using DVB via IP. This was successful on a Panasonic TV.
The SAT>IP Specification on page 10 requires rtp for Media Transport:
The SAT>IP protocol makes use of:
UPnP for Addressing, Discovery and Description,
RTSP or HTTP for Control,
RTP or HTTP for Media Transport.
Is udp out of the equation?
ffmpeg: (all options are in the output with -f mpegts)
(hex to decimal: 0x005A = 90, 0x005B = 91, 0x005C = 92, 0x005D = 93, 0x005E = 94)
ffmpeg -f lavfi -i testsrc \
-r 25 \
-c:v libx264 \
-pix_fmt yuv420p \
-profile:v main -level 3.1 \
-preset veryfast \
-vf scale=1280:720,setdar=dar=16/9 \
-an \
-bsf:v h264_mp4toannexb \
-flush_packets 0 \
-b:v 4M \
-muxrate 8M \
-pcr_period 20 \
-pat_period 0.10 \
-sdt_period 0.25 \
-metadata:s:a:0 language=nya \
-mpegts_flags +pat_pmt_at_frames \
-mpegts_transport_stream_id 0x005A \
-mpegts_original_network_id 0x005B \
-mpegts_service_id 0x005C \
-mpegts_pmt_start_pid 0x005D \
-mpegts_start_pid 0x005E \
-mpegts_service_type advanced_codec_digital_hdtv \
-metadata service_provider='WI' \
-metadata service_name='W' \
-mpegts_flags system_b -flush_packets 0 \
-f mpegts "udp://239.1.1.10:1234?pkt_size=1316"
StreamXpert Output:
-mpegts_transport_stream_id = Transport Stream ID (yellow text highlight)
-mpegts_original_network_id = Original Network ID, onw (green text highlight)
-mpegts_service_id = Program, service (pink text highlight)
-mpegts_pmt_start_pid = PMT PID, Table PID (turquoise text highlight)
-mpegts_start_pid = PID, PCR PID (red text highlight)
-mpegts_service_type = service type (blue text)
service_name = Service name (orange text)
service_provider = Service provider (pink text)
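Not part of the original question, but one rough way to cross-check which options survive a given muxer path is to probe the output stream again with ffprobe, e.g. for the plain-UDP variant above:
import subprocess

# Sketch: probe the UDP output from the command above and print the program
# number, PMT/PCR PIDs and the service_name / service_provider tags.
subprocess.run([
    "ffprobe", "-v", "error",
    "-show_programs",
    "udp://239.1.1.10:1234",
])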

ffmpeg: choose an exact frame from a long film strip

I'm working with ffmpeg to choose the best thumbnail for my video, and the selection is based on a slider.
As per the requirement, multiple thumbnails are not needed, just a single long film-strip image; the thumbnail is selected with the help of the slider and saved.
I used the command below to get the long strip thumbnail:
ffmpeg -loglevel panic -y -i "video.mp4" -frames 1 -q:v 1 -vf "select=not(mod(n\,40)),scale=-1:120,tile=100x1" video_preview.jpg
I followed the instructions from this tutorial
I'm able to get the long film image:
This is working fine; moving the image in the slider also works fine.
My question is: how can I select a particular frame from that slider / film strip? How can I calculate the exact time position from the slider and then execute a command to extract that frame?
In one of my projects I implemented the scenario below. In the code I get the video duration from the ffprobe command; if the duration is less than 0.5 seconds, I set a different thumbnail interval. In your case you should set the time interval for the thumbnail creation. Hope this helps.
$dur = 'ffprobe -i '.$video.' -show_entries format=duration -v quiet -of csv="p=0"';
$duration= exec($dur);
if($duration < 0.5) {
$interval = 0.1;
} else {
$interval = 0.5;
}
// screenshot size
$size = '320x240';
// ffmpeg command
$cmd = "ffmpeg -i $video -deinterlace -an -ss $interval -f mjpeg -t 1 -r 1 -y $image";
exec($cmd);
You can try this:
ffmpeg -vsync 0 -ss duration -t 0.0001 -noaccurate_seek -i filename -ss 00:30 -t 0.0001 -noaccurate_seek -i filename -filter_complex "[0:v][1:v]concat=n=2[con];[con]scale=80:60:force_original_aspect_ratio=decrease[sc];[sc]tile=2x1[out]" -map "[out]" -frames 1 -f image2 filmStrip.jpg
This produces a strip of 2 frames.
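To go from a slider position back to a frame, the arithmetic follows directly from the command that built the strip: with select=not(mod(n\,40)) and tile=100x1, tile index i corresponds to source frame i*40, so its timestamp is i*40 / fps. A rough sketch of the idea (the frame rate of 25 is an assumption; read the real one with ffprobe):
import subprocess

FPS = 25        # assumed source frame rate; query it with ffprobe in practice
EVERY_N = 40    # matches select=not(mod(n\,40)) used to build the strip

def extract_frame(video, tile_index, out_jpg):
    # Map the tile index in the 100x1 strip back to a timestamp in the video
    # and grab exactly that frame as a full-quality JPEG.
    timestamp = tile_index * EVERY_N / FPS
    subprocess.run([
        "ffmpeg", "-y",
        "-ss", str(timestamp),
        "-i", video,
        "-frames:v", "1",
        "-q:v", "1",
        out_jpg,
    ], check=True)

extract_frame("video.mp4", 12, "thumbnail.jpg")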

Concat of two videos using ffmpeg works locally but not on the server - why?

Here is the code for concatenating two videos and adding a watermark to the result. It works perfectly locally but not on the server.
The code is as follows:
$videoFileName = rand('111111', '999999').'_'.time().'.'.$request->file('video1')->getClientOriginalExtension();
$intermediateVideo1 = rand('1111111', '9999999').'_'.time().'.ts';
$intermediateVideo2 = rand('1111111', '9999999').'_'.time().'.ts';
$concatVideoFileName = rand('111111', '999999').'_'.time().'.'.$request->file('video1')->getClientOriginalExtension();
exec('ffmpeg -i '.$request->file('video1').' -c copy -bsf:v h264_mp4toannexb -f mpegts '.$intermediateVideo1);
exec('ffmpeg -i '.$request->file('video2').' -c copy -bsf:v h264_mp4toannexb -f mpegts '.$intermediateVideo2);
exec('ffmpeg -i "concat:'.$intermediateVideo1.'|'.$intermediateVideo2.'" -c copy -bsf:a aac_adtstoasc '.public_path('uploads/videos/'.$concatVideoFileName));
exec('ffmpeg -i '.public_path('uploads/videos/'.$concatVideoFileName).' -i '.storage_path("assets/image/watermark.png").' -filter_complex "overlay" '.public_path('uploads/videos/'.$videoFileName));
File::delete($intermediateVideo1);
File::delete($intermediateVideo2);
File::delete(public_path('uploads/videos/'.$concatVideoFileName));

How to create a video from selected images in a folder using FFmpeg?

For the time being I am doing
using System.Diagnostics;

ProcessStartInfo ffmpeg = new ProcessStartInfo();
ffmpeg.CreateNoWindow = false;
ffmpeg.UseShellExecute = false;
ffmpeg.FileName = @"e:\ffmpeg\ffmpeg.exe";
ffmpeg.Arguments = "for file in (D:\\Day\\*.jpg); do ffmpeg -i \"$file\" -vf fps=1/60 -q:v 3 \"D:\\images\\out.mp4\"; done;";
ffmpeg.RedirectStandardOutput = true;
Process x = Process.Start(ffmpeg);
Here I'm getting an exception saying the system cannot find the specified file.
For the time being I'm using all the files in D:\Day\*.jpg, but actually I need to query individual files from a list.
Where am I wrong in the above scenario?
You need to create a separate text file with the image names and use that text file to create your video.
Inside frameList.txt:
file 'D:\20180205_054616_831.jpg'
file 'D:\20180205_054616_911.jpg'
file 'D:\20180205_054617_31.jpg'
file 'D:\20180205_054617_111.jpg'
and in the Arguments of the process use:
"-report -y -r 15/1 -f concat -safe 0 -i frameList.txt -c:v libx264 -s 1920x1080 -b:v 2000k -vf fps=15,format=yuv420p out.mp4"
