Following a YouTube tutorial I managed to watch my Raspberry Pi IP camera in the browser, but I want to record myself sleeping to detect any breathing interruptions, so I'd like to save 10-minute-long video clips. This is the line:
raspivid -t 0 -w 1080 -h 720 -awb auto -fps 30 -b 1200000 -o - | ffmpeg -loglevel quiet -i - -vcodec copy -an -f flv -metadata streamName=myStream tcp://0.0.0.0:6666 &
It works fine! I only want to add recording to 10-minute video files (in chronological order, if possible).
You can use the segment muxer to save the recording in 10-minute segments.
ffmpeg -loglevel quiet -i - -c copy -an -f flv -metadata streamName=myStream tcp://0.0.0.0:6666 -c copy -an -f segment -segment_time 600 -reset_timestamps 1 vid%d.mp4
This will generate, in addition to the stream, vid0.mp4, vid1.mp4, vid2.mp4, ...
Due to keyframe placement, segments may not be exactly 10 minutes long.
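Combined with the original raspivid pipeline, the full command would look roughly like this (a sketch, untested):
raspivid -t 0 -w 1080 -h 720 -awb auto -fps 30 -b 1200000 -o - | ffmpeg -loglevel quiet -i - -c copy -an -f flv -metadata streamName=myStream tcp://0.0.0.0:6666 -c copy -an -f segment -segment_time 600 -reset_timestamps 1 vid%d.mp4 &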
Building on Gyan's suggestion, you can combine the segment muxer with strftime formatting to name each file with the time at which its recording starts, like:
video_2019-08-04-12.00.00.flv
video_2019-08-04-12.10.00.flv
video_2019-08-04-12.20.00.flv
...
Use the following command:
ffmpeg -loglevel quiet -i - -vcodec copy -an -f flv -metadata streamName=myStream tcp://0.0.0.0:6666 \
-f segment -strftime 1 \
-segment_time 00:10:00 \
-segment_format flv \
-an -vcodec copy \
-reset_timestamps 1 \
video_%Y-%m-%d-%H.%M.%S.flv
I'm trying to retrieve the timestamp of each frame of a camera using an RTSP stream and save them.
For recording I use the following command line, and it works:
ffmpeg
-correct_ts_overflow 0
-probesize 1G
-analyzeduration 1G
-i rtsp://user:password@ip:port
-vcodec copy
-bsf:v h264_mp4toannexb
-bufsize 10M
-acodec copy
-f ssegment
-segment_list_flags live
-segment_atclocktime 1
-reset_timestamps 1
-write_empty_segments 1
-segment_time 15
-segment_list C:\Video\Delivery\ffmpeg\list.video
-segment_list_type csv
-strftime 1 "C:\Video\Delivery\ffmpeg\%%Y%%m%%d_%%H-%%M-%%S.ts"
And for a utility I would like to be able to retrieve the machine's timestamp at the moment each frame is received. Searching around, I found several posts about '-f mkvtimestamp_v2'. Trying it alone with the camera, as below:
ffmpeg
-copyts
-correct_ts_overflow 0
-probesize 1G
-analyzeduration 1G
-i rtsp://user:password@ip:port
-c copy
-pix_fmt yuv420p
-flush_packets 1
-vframes 10
-reset_timestamps 1
-timestamp now
-f mkvtimestamp_v2 timestamp.txt
-vsync 0
It works perfectly.
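For reference, the mkvtimestamp_v2 muxer writes a header line followed by one timestamp per frame in milliseconds, so at 25 fps timestamp.txt starts roughly like this (values illustrative):
# timecode format v2
0
40
80
120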
But as soon as I try to record AND retrieve the timestamps simultaneously with the following command:
ffmpeg
-use_wallclock_as_timestamps 1
-correct_ts_overflow 0
-probesize 1G
-analyzeduration 1G
-i rtsp://user:password@ip:port
-vcodec copy
-bsf:v h264_mp4toannexb
-bufsize 10M
-acodec copy
-f ssegment
-segment_list_flags live
-segment_atclocktime 1
-reset_timestamps 1
-write_empty_segments 1
-segment_time 15
-segment_list C:\Video\Delivery\ffmpeg\list.video
-segment_list_type csv
-strftime 1 "C:\Video\Delivery\ffmpeg\%%Y%%m%%d_%%H-%%M-%%S.ts"
-copyts
-vcodec copy
-flush_packets 1
-f mkvtimestamp_v2 log.txt
-vsync 0
I get a lot of 'Non-monotonous DTS in output stream 0:0' warnings.
I also see, on average, a one-minute delay between the recorded timestamps and the real time.
And the first recorded video shows a broken time counter when opened in a video player.
I've tried arranging the options in different orders, but nothing conclusive...
So if you have any idea, that would be a big help!
I work on Windows 10 and I use ffmpeg-3.4.1.
Regards,
Jay
I solved it by piping the second output to another ffmpeg instance. The reason I think this works is that the second ffmpeg discards the timestamp offset added by -use_wallclock_as_timestamps 1 and resets the offset to 0.
ffmpeg -use_wallclock_as_timestamps 1 -i rtsp://#ip:port -c copy -copyts -y -f mkvtimestamp_v2 timestamps.txt -vsync 0 -c copy -f mpegts - | ffmpeg -f mpegts -i - -c copy -f segment output-segment-%d.mp4
However, another problem with this solution is that if RTSP drops some frames, the mkvtimestamp_v2 file will skip some time values, making it hard to correlate the segments with the timestamps.txt file.
So instead I solved it by embedding the wall clock timestamps into the segments themselves.
ffmpeg -use_wallclock_as_timestamps 1 -i rtsp://#ip:port -c copy -copyts -vsync passthrough -f segment -segment_time 10 out%d.mp4
Then I can run ffprobe later on each segment to learn its actual start time (relative to the system clock).
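For example, a segment's start time can be read with an ffprobe one-liner like this (run once per file):
ffprobe -v error -show_entries format=start_time -of default=noprint_wrappers=1:nokey=1 out0.mp4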
I have an m3u8 file with TS segments. I am trying to convert a part of it to MP4 using the command below.
ffmpeg -i playlist.m3u8 -ss 30 -t 120 -c copy -bsf:a aac_adtstoasc -flags +global_header -y output.mp4
I manually worked out where the segments I need are located, concatenated them to form output.ts, and then converted that to MP4 using the commands below.
ffmpeg -f concat -safe 0 -i <(for f in ./*.ts; do echo "file '$PWD/$f'"; done) -c copy output.ts
ffmpeg -i output.ts -c copy -bsf:a aac_adtstoasc -flags +global_header -y output.mp4
I found that the second approach takes far less time than the first, on the order of tens of seconds. Could someone please tell me whether the comparison makes sense, and why there is such a difference between the two?
I was using -ss incorrectly for the live stream.
-ss has to be used alongside -live_start_index 0, before the input file option -i input.m3u8.
For live streaming from the FFmpeg side, one should use -f hls -hls_playlist_type event rather than -f segment -segment_list_flags live for seeking to work on a live stream.
As mentioned in the documentation for -ss, the seek doesn't start exactly at the 15th second, and the duration is also not honoured (< 30 secs).
ffmpeg -live_start_index 0 -ss 15 -i playlist.m3u8 -t 00:00:30 -c copy -bsf:a aac_adtstoasc -flags +global_header -y input.mp4
When used without -c copy, i.e. with transcoding and -accurate_seek, the duration is fine, but the seek position is the same as with -c copy.
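For reference, a sketch of that transcoding variant (the codec choices here are illustrative, not from the original post):
ffmpeg -live_start_index 0 -accurate_seek -ss 15 -i playlist.m3u8 -t 00:00:30 -c:v libx264 -c:a aac -y output.mp4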
I'm trying to concatenate a 15 second clip of a video (MOVIE.mp4) with 5 seconds (no audio) of an image (IMAGE.jpg) using FFmpeg.
Something seems to be wrong with my filtergraph, although I'm unable to determine what. The command I've put together is the following:
ffmpeg \
-loop 1 -t 5 -i IMAGE.jpg \
-t 15 -i MOVIE.mp4 \
-filter_complex "[0:v]scale=480:640[1_v];anullsrc[1_a];[1:v][1:a][1_v][1_a]concat=n=2:v=1:a=1[out]" \
-map "[out]" \
-strict experimental tst_full.mp4
Unfortunately, this seems to be creating some strange results:
On my personal computer (FFmpeg 4.2.1) it correctly concatenates the movie with the static image; however, the static image lasts for an unbounded length of time. (After pressing Ctrl-C, the movie is still viewable but extremely long, e.g. 35 min, depending on when I interrupt the process.)
On a remote machine where I need to do the final video processing (FFmpeg 2.8.15-0ubuntu0.16.04.1), the command does not terminate; instead, I get cascading errors of the following form:
Past duration 0.611458 too large
...
[output stream 0:0 @ 0x21135a0] 100 buffers queued in output stream 0:0, something may be wrong.
...
[output stream 0:0 @ 0x21135a0] 100000 buffers queued in output stream 0:0, something may be wrong.
I haven't been able to find much documentation that elucidates what these errors mean, so I don't know what's going wrong.
As Gyan pointed out, you only have to add atrim to your audio:
anullsrc,atrim=0:5[silent-audio]
Instead of scale you could use scale2ref and setsar to automatically make your image the same size and aspect ratio as the video.
ffmpeg \
-loop 1 -t 5 -i IMAGE.jpg \
-t 15 -i MOVIE.mp4 \
-filter_complex "[0:v][1:v]scale2ref[img][v];[img]setsar=1[img]; \
anullsrc,atrim=0:5[silent-audio]; \
[v][1:a][img][silent-audio]concat=n=2:v=1:a=1[out]" \
-map "[out]" \
-strict experimental tst_full.mp4
Alternatively you could use anullsrc as a 3rd input:
ffmpeg \
-t 15 -i MOVIE.mp4 \
-loop 1 -t 5 -i IMAGE.jpg \
-f lavfi -t 5 -i anullsrc \
-filter_complex "[1:v][0:v]scale2ref[img][v];\
[img]setsar=1[img];[v][0:a][img][2:a]concat=n=2:v=1:a=1[out]" \
-map "[out]" \
-strict experimental tst_full.mp4
I'm trying to synchronize video and audio from my raspicam and arecord and produce an RTMP stream with ffmpeg over a 3G modem, but I don't know how to do that. I have tried this:
raspivid -t 0 -w 1024 -h 768 -fps 25 -vf -hf -b 1000000 -v -o temp.v & arecord -f cd -D plughw:0 | ffmpeg -i temp.v -itsoffset 13.5 -i - -c:v copy -c:a libmp3lame -b:a 64k -vsync 0 -f flv rtmp://ipofmynginxserver/myapp/mystream
and
raspivid -t 0 -w 1024 -h 768 -fps 25 -g 60 -vf -hf -b 1000000 -v -o temp.v & arecord -f cd -D plughw:0 | ffmpeg -r 25.37 -i temp.v -itsoffset 13.5 -i - -c:v copy -c:a libmp3lame -b:a 64k -vsync 0 -async 1 -f flv rtmp://ipofmynginxserver/myapp/mystream
At first everything is fine (because of the offset), but after the connection is lost for 2-4 seconds, audio and video start to play out of sync (audio ahead).
I have tried changing the vsync and async options but it made no difference. If I try to capture the ALSA microphone with ffmpeg I get ALSA buffer xruns and it doesn't work; the only way for me is to use raspivid and arecord. How can I solve my problem? Thanks, and sorry for my English.
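One direction worth trying (an untested sketch; the plughw:0 device name is taken from the question) is to let ffmpeg capture the ALSA device itself and enlarge the input thread queues, which often helps with the buffer xruns:
raspivid -t 0 -w 1024 -h 768 -fps 25 -vf -hf -b 1000000 -o - | ffmpeg -thread_queue_size 1024 -i - -f alsa -thread_queue_size 1024 -i plughw:0 -c:v copy -c:a libmp3lame -b:a 64k -f flv rtmp://ipofmynginxserver/myapp/mystream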
I've been looking all around for this. The problem is that most Google searches turn up results about creating a video solely from PNG files.
I've found this command which does the job :
ffmpeg -y -loop 1 -framerate 60 -t 5 -i firstimage.jpg -t 5 -f lavfi -i aevalsrc=0 -loop 1 -framerate 60 -t 5 -i secondimage.png -t 5 -f lavfi -i aevalsrc=0 -loop 1 -framerate 60 -t 5 -i thirdimage.png -t 5 -f lavfi -i aevalsrc=0 -i "shadowPlayVid.mp4" -filter_complex "[0:0][1:0][2:0][3:0][4:0][5:0][6:0][6:1] concat=n=4:v=1:a=1 [v] [a]" -map [v] -map [a] output.mp4 >> log_file1.txt 2>&1
But it seems to re-encode the whole video. The input video is H.264 without CFR, and it seems to me that merely prepending some images to the video shouldn't take this long.
Because it ends up encoding the whole thing, this takes about 2 hours for a 30-minute video on a powerful computer, while I feel it could be done much quicker without re-encoding. How do I make sure it doesn't re-encode, while still showing each image for 5 seconds first?
Generate your playervid.mp4 via
ffmpeg -y -loop 1 -framerate 60 -t 5 -i sample-out3.jpg -f lavfi -t 5 -i aevalsrc=0 -vf settb=1/60000 -video_track_timescale 60000 -c:v libx264 -pix_fmt yuv420p playervid.mp4
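The generated clips can then be joined to the main video with the concat demuxer and stream copy, provided the codec parameters and timebase match (presumably what the settb/-video_track_timescale options above are for). A sketch, with list.txt as a hypothetical concat list:
file 'playervid.mp4'
file 'shadowPlayVid.mp4'
Then:
ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4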